A few years ago, we took our first look at the inexpensive 27" 1440p monitors which were starting to flood the market via eBay sellers located in Korea. These monitors proved to be immensely popular and are largely credited with moving a large number of gamers past 1080p.
However, in the past few months we have seen a new trend from some of these same Korean manufacturers: large 4K monitors, much like the Seiki Pro SM40UNP 40" 4K display that we took a look at a few weeks ago.
Built around a 42" LG AH-IPS panel, the Wasabi Mango UHD420 is an impressive display. The inclusion of HDMI 2.0 and DisplayPort 1.2 allows you to achieve 4K at a full 60Hz with 4:4:4 chroma. At just under $800 on Amazon, this is an incredibly appealing value.
Whether the UHD420 is a TV or a monitor is actually quite the tossup. The lack of a tuner might initially lead you to believe it's not a TV. The inclusion of a DisplayPort connector and USB 3.0 hub might make you believe it's a monitor, but it's bundled with a remote control (entirely in Korean). In reality, this display could serve either role (unless you need OTA tuning), and it really starts to blur the line between a "dumb" TV and a monitor. You'll also find VESA 400x400mm mounting holes on this display for easy wall mounting.
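The 4K60 claim is easy to sanity-check against the link bandwidths involved. A rough back-of-the-envelope calculation (using published link maximums, and ignoring blanking intervals and encoding overhead, so real requirements are somewhat higher):

```python
# Raw pixel data rate for 4K at 60 Hz with full 4:4:4 chroma, 8 bpc.
# Ignores blanking intervals and link encoding overhead.
width, height, refresh_hz = 3840, 2160, 60
bits_per_pixel = 24  # 8 bits per channel, no chroma subsampling

pixel_data_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"raw pixel data: {pixel_data_gbps:.2f} Gbps")  # 11.94 Gbps

# Published aggregate link maximums:
hdmi_2_0_gbps = 18.0   # HDMI 2.0 TMDS aggregate
dp_1_2_gbps = 21.6     # DisplayPort 1.2 HBR2, four lanes
assert pixel_data_gbps < hdmi_2_0_gbps < dp_1_2_gbps
```

Both links clear the raw requirement at 8 bits per channel, which is why HDMI 1.4-era displays had to fall back to 30Hz or 4:2:0 at this resolution.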
Subject: Graphics Cards | July 27, 2015 - 04:33 PM | Jeremy Hellstrom
Tagged: 4k, amd, R9 FuryX, GTX 980 Ti, gtx titan x
[H]ard|OCP have set up their testbed for a 4K showdown between the similarly priced GTX 980 Ti and Radeon R9 Fury X, with the $1000 TITAN X tossed in for those with more money than sense. The test uses the new Catalyst 15.7 and GeForce 353.30 drivers to provide an even playing field while benchmarking Witcher 3, GTA V and other games. When the dust settled the pattern was obvious and the performance differences were clear. The deltas were not huge, but when you are paying $650 plus tax for a GPU, even a few extra frames or one more usable graphical option really matters. Perhaps the most interesting result was the redemption of the TITAN X; its extra price was reflected in the performance results. Check them out for yourself here.
"We take the new AMD Radeon R9 Fury X and evaluate the 4K gaming experience. We will also compare against the price competitive GeForce GTX 980 Ti as well as a GeForce GTX TITAN X. Which video card provides the best experience and performance when gaming at glorious 4K resolution?"
Here are some more Graphics Card articles from around the web:
- PowerColor PCS+ R9 380 4GB: The Affordable 4GB Solution @ Bjorn3D
- AMD Fury X "Fiji" Voltage Scaling @ techPowerUp
- HIS Radeon R9 390 IceQ X2 OC 8GB Video Card Review @ Madshrimps
- XFX R9 380 Double Dissipation 4GB @ [H]ard|OCP
- The New AMD GPU Open-Source Driver On Linux 4.2 Works, But Still A Lot Of Work Ahead @ Phoronix
- MSI Radeon R7 370 GAMING 4G @ Phoronix
- 15-Way AMD/NVIDIA Graphics Card Comparison For 4K Linux Gaming @ Phoronix
- PNY GTX980 Ti XLR8 OC @ Kitguru
- ASUS GTX 980 Ti STRIX Gaming 6 GB @ techPowerUp
- PNY GTX 960 XLR8 Review @ OCC
- GIGABYTE GeForce GTX 970 WindForce 3X OC 4GB Graphics Card Review @ NikKTech
- Inno3D iChill GTX 980 Ti HerculeZ X3 Air Boss Ultra @ HardwareOverclock
... But Is the Timing Right?
Windows 10 is about to launch and, with it, DirectX 12. Apart from the massive increase in draw calls, Explicit Multiadapter, both Linked and Unlinked, has been the cause of a few pockets of excitement here and there. I am a bit concerned, though. People seem to find this a new, novel concept that gives game developers tools they've never had before. It really isn't. Depending on what you want to do with secondary GPUs, game developers could have used them for years. Years!
Before we talk about the cross-platform examples, we should talk about Mantle. It is the closest analog to DirectX 12 and Vulkan that we have. It served as the base specification for Vulkan that the Khronos Group modified with SPIR-V instead of HLSL and so forth. Some claim that it was also the foundation of DirectX 12, which would not surprise me given what I've seen online and in the SDK. Allow me to show you how the API works.
Mantle is an interface that mixes Graphics, Compute, and DMA (memory access) into queues of commands. This is easily done in parallel, as each thread can create commands on its own, which is great for multi-core processors. Each queue, a list of commands on its way to the GPU, can also be handled independently. An interesting side effect is that, since each device uses standard data formats, such as IEEE 754 floating-point numbers, no-one cares where these queues go as long as the work is done quickly enough.
Since each queue is independent, an application can choose to manage many of them. None of these lists really need to know what is happening to any other. As such, they can be pointed to multiple, even wildly different graphics devices. Different model GPUs with different capabilities can work together, as long as they support the core of Mantle.
DirectX 12 and Vulkan took this metaphor so their respective developers could use this functionality across vendors. Mantle did not invent the concept, however. What Mantle did is expose this architecture to graphics, which can make use of all the fixed-function hardware that is unique to GPUs. Prior to AMD's usage, this was how GPU compute architectures were designed. Game developers could have spun up an OpenCL workload to process physics, audio, pathfinding, visibility, or even lighting and post-processing effects... on a secondary GPU, even from a completely different vendor.
Vista's multi-GPU bug might get in the way, but it was possible in Windows 7 and, I believe, XP too.
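To make the queue metaphor concrete, here is a minimal sketch in Python. This is emphatically not the Mantle API, just an illustration of the idea: commands are recorded into independent lists, possibly on separate threads, and each list can be drained by any device that speaks the common format.

```python
# Conceptual sketch of independent command queues (not the Mantle API).
from concurrent.futures import ThreadPoolExecutor

class Device:
    """A stand-in for any GPU, regardless of vendor or capability."""
    def __init__(self, name):
        self.name = name

    def execute(self, queue):
        # Standard data formats (e.g. IEEE 754 floats) mean the result
        # is the same no matter which device drains the queue.
        return [cmd() for cmd in queue]

def record_queue(n):
    # Command recording is thread-friendly: no shared state is needed
    # while building a queue, so each thread records its own.
    return [lambda i=i: i * i for i in range(n)]

primary = Device("vendor-A GPU")
secondary = Device("vendor-B GPU")

# Record two queues in parallel on separate threads.
with ThreadPoolExecutor() as pool:
    q1, q2 = pool.map(record_queue, [4, 4])

# Each queue is independent, so they can go to different devices,
# even wildly different ones.
results = primary.execute(q1) + secondary.execute(q2)
print(results)  # [0, 1, 4, 9, 0, 1, 4, 9]
```

The point of the model is that nothing in a queue cares which device runs it, which is exactly the property DirectX 12's Explicit Multiadapter exposes across vendors.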
AMD is exploring alternate product routes to raise their income, and the latest seems to be the Puma-powered QNAP TVS-x63. It is a four-bay NAS powered by the 2.4 GHz AMD GX-424CC SoC, which happens to include a 28 stream processor GCN Radeon clocked at 497 MHz. It has a pair of gigabit ports, with an optional add-in card offering a single 10Gb port or two additional 1Gb ports, though that will raise the price above the $630 base model. Bjorn3d found the power consumption to be higher than the competition's, but the overall operation was flawless.
"The QNAP TVS-x63 marked the world’s first NAS featuring AMD processor. AMD’s new strategy is targeting the markets with high profit return and the company is returning to the server market. NAS, by extension, is like a small scale server, so it makes sense to see AMD putting their processors into these devices."
Here are some more Storage reviews from around the web:
- HGST Ultrastar He8 HDD RAID Review (8x8TB) - 64TB Analysis on the Adaptec 8805 RAID Adapter @ The SSD Review
- Asustor AS5102T @ techPowerUp
- Synology DiskStation DS715 2-Bay Value NAS @ eTeknix
- CineRAID CR-H236 Dual SATA Drive Docking Station Review @ NikKTech
- OCZ TRION 100 480GB
- OCZ Vector 180 240GB SSD Review @ Madshrimps
- Micron M510DC SSD @ The SSD Review
- Kingston HyperX Predator 480 GiB vs. Kingston HyperX Savage 480 GiB SSD Review @ Hardware Secrets
- Kingston HyperX Savage 240GB SSD Review @ NikKTech
- Samsung Pro Plus microSDHC 32GB and EVO Plus 128GB microSDXC @ The SSD Review
Subject: General Tech | July 23, 2015 - 01:53 PM | Ken Addison
Tagged: podcast, video, amd, r9 nano, Fiji, Samsung, 4TB, windows 10, acer, aspire V, X99E-ITX/ac, TSMC, 10nm, 7nm
PC Perspective Podcast #359 - 07/23/2015
Join us this week as we discuss the AMD R9 Nano, 4TB Samsung SSDs, Windows 10 and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano
Program length: 1:10:31
Subject: Processors | July 22, 2015 - 09:56 PM | Scott Michaud
Tagged: amd, APU, Godavari, a8, a8-7670k
AMD's Godavari architecture is the last one based on Bulldozer, which will hold the company's product stack over until their Zen architecture arrives in 2016. The A10-7870K was added a month ago, with a 95W TDP at an MSRP of $137 USD. This involved a slight performance bump of +200 MHz at its base frequency and a +100 MHz higher Turbo than its predecessor under high load. More interestingly, it does this at the same TDP and with the same basic architecture.
Remember that these are AMD's benchmarks.
The refresh has been expanded to include the A8-7670K. Some sites have reported that this uses the Excavator architecture as seen in Carrizo, but this is not the case. It is based on Steamroller. This product has a base clock of 3.6 GHz with a Turbo of up to 3.9 GHz. This is a +300 MHz Base and +100 MHz Turbo increase over the previous A8-7650K. Again, this is with the same architecture and TDP. The GPU even received a bit of a bump, too. It is now clocked at 757 MHz versus the previous generation's 720 MHz with all else equal, as far as I can tell. This should lead to a 5.1% increase in GPU compute throughput.
The A8-7670K just recently launched at an MSRP of $117.99. This roughly $20 saving should place it in a nice position below the A10-7870K for mainstream users.
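The clock and price arithmetic above is easy to verify from the figures in the post:

```python
# Clock and price deltas for the A8-7670K refresh, from the figures above.
old_gpu_mhz, new_gpu_mhz = 720, 757
gpu_gain = (new_gpu_mhz - old_gpu_mhz) / old_gpu_mhz
print(f"GPU clock gain: {gpu_gain:.1%}")  # 5.1%

a10_price, a8_price = 137.00, 117.99
print(f"saving vs. A10-7870K: ${a10_price - a8_price:.2f}")  # $19.01
```

Since the shader count is unchanged, compute throughput should scale linearly with the GPU clock, hence the 5.1% figure.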
Subject: Graphics Cards | July 20, 2015 - 02:00 PM | Jeremy Hellstrom
Tagged: amd, linux, CS:GO
Thankfully it has been quite a while since we saw GPU driver optimizations tied to specific .exe filenames on Windows; in the past, both major vendors have tweaked performance based on the name of the executable which launches the game. This particular flavour of underhandedness had become passé, at least until now. Phoronix has spotted it once again, this time seeing a big jump in performance in CS:GO when the binary is renamed from csgo_linux to hl2_Linux. The game is built on the Source Engine, but the engine's optimizations are not properly applied to CS:GO under its own binary name.
There is nothing nefarious about this particular example; it seems more a case of AMD's driver team being lazy, or more likely short-staffed. If you play CS:GO on Linux, rename your binary and you will see a jump in performance with no deleterious side effects. Phoronix is investigating more games to see if there are other inconsistently applied optimizations.
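For those who want to try it, the workaround amounts to a single copy. The install path below is an assumption (point it at your own Steam library), and you may need your usual launch arguments when starting the renamed binary:

```shell
# Sketch of the rename workaround. GAMEDIR is an assumed default
# Steam library path; change it to match your system.
GAMEDIR="$HOME/.steam/steam/steamapps/common/Counter-Strike Global Offensive"
cd "$GAMEDIR"
# Copy instead of moving, so Steam's file verification stays happy.
cp csgo_linux hl2_Linux
./hl2_Linux
```

Copying rather than renaming also means a game update simply overwrites csgo_linux, after which you repeat the copy.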
"Should you be using a Radeon graphics card with the AMD Catalyst Linux driver and are disappointed by the poor performance, there is a very easy workaround for gaining much better performance under Linux... In some cases a simple tweak will yield around 40% better performance!"
Here are some more Graphics Card articles from around the web:
- Open-Source Linux Graphics: A10-7870K Godavari vs. i7-4790K Haswell vs. i7-5775C Broadwell @ Phoronix
- 12K (Triple 4K Monitor) Graphics Test Bench Upgrade @ eTeknix
- MSI R9 390X GAMING vs ASUS STRIX R9 Fury @ [H]ard|OCP
- Asus Strix R9 390X Gaming OC 8G @ Bjorn3d
- Sapphire Tri-X R9 Fury 4GB @ eTeknix
- AMD R9 Fury X CrossfireX 12K Eyefinity @ eTeknix
- HIS Radeon R9 390X IceQ X2 OC 8GB Video Card Review @ Madshrimps
- XFX R9 380 4G DD, XFX Review, XFX Rocks the DD Coolers Again! @ Bjorn3d
- Asus Radeon R9 Fury Strix DC3 OC @ Kitguru
- Sapphire Tri-X Radeon R9 Fury Review @ Modders-Inc
- AMD's Latest Open-Source Driver On Linux Is Getting Competitive With Catalyst 15.7 @ Phoronix
- Zotac GTX 980 Ti AMP! Extreme Review @ Hardware Canucks
- Palit GeForce GTX 980Ti Super Jetstream @ Kitguru
- Intel Iris Pro 6200 Graphics Are A Dream Come True For Open-Source Linux Fans @ Phoronix
Subject: General Tech | July 20, 2015 - 01:16 PM | Jeremy Hellstrom
Tagged: amd, lisa su
It has not been a pretty year for AMD, with overall sales of $942m representing a 34.6% drop from this time last year, and even the graphics portion seeing a 54.2% drop, which resulted in a loss of $147 million. In part this is because all PC component companies have been suffering recently; in part because of a lack of incentive to upgrade high-end components; and to a larger extent because the general public is not going to pick up a new machine just before the release of a new Windows version. Lisa Su did have some good news: sales of FX processors and A-series APUs have been increasing, and the second half of the year is historically better for sales. It was suggested to The Register that AMD is not currently planning to reduce their workforce even further, but the possibility of future cuts was not completely ruled out.
"AMD has confirmed it is slipping back into cost-cutting mode after its annus horribilis, caused by tanking demand for consumer PCs in a quarter described by CEO Lisa Su as the “revenue trough” for 2015."
Here is some more Tech News from around the web:
- Stephen Hawking and Russian Billionaire Start $100 Million Search For Aliens @ Slashdot
- Microsoft to spoofed Skype users: Change your account passwords NOW @ The Register
- Samsung sets sights on the iPad Air with 5.6mm thick Galaxy Tab S2 @ The Inquirer
- Everything You Need to Know About the Thunderbolt Connection @ Hardware Secrets
- DXRacer OH/IS166/NB Iron Series Gaming Chair Review @ HiTech Legion
- Windows 10: Xbox One games streaming now open to all @ The Inquirer
Subject: Graphics Cards, Processors, Mobile | July 19, 2015 - 06:59 AM | Scott Michaud
Tagged: Zen, TSMC, Skylake, pascal, nvidia, Intel, Cannonlake, amd, 7nm, 16nm, 10nm
Getting smaller features allows a chip designer to create products that are faster, cheaper, and consume less power. Years ago, most of them had their own production facilities but that is getting rare. IBM has just finished selling its manufacturing off to GlobalFoundries, which was spun out of AMD when it divested from fabrication in 2009. Texas Instruments, on the other hand, decided that they would continue manufacturing but get out of the chip design business. Intel and Samsung are arguably the last two players with a strong commitment to both sides of the “let's make a chip” coin.
So where do these chip designers go? TSMC is the name that comes up most. Any given discrete GPU from the last several years has probably been produced there, along with several CPUs and SoCs from a variety of fabless semiconductor companies.
Several years ago, when the GeForce 600-series launched, TSMC's 28nm line led to shortages, which left GPUs out of stock for quite some time. Since then, 28nm has been the stable workhorse for countless high-performance products. Recent chips have been physically huge, thanks to how mature the process has become, which grants fewer defects. The designers are anxious to get onto smaller processes, though.
In a conference call at 2 AM (EDT) on Thursday, which is 2 PM in Taiwan, Mark Liu of TSMC announced that “the ramping of our 16 nanometer will be very steep, even steeper than our 20nm”. By that, they mean this year. Hopefully this translates to production that could be used for GPUs and CPUs early, as AMD needs it to launch their Zen CPU architecture as early in 2016 as possible. Graphics cards have also been on 28nm for over three years. It's time.
Also interesting is how TSMC believes that they can hit 10nm by the end of 2016. If so, this might put them ahead of Intel. That said, Intel was also confident that they could reach 10nm by the end of 2016, right until they announced Kaby Lake a few days ago. We will need to see if it pans out. If it does, competitors could actually beat Intel to the market at that feature size -- although that could end up being mobile SoCs and other integrated circuits that are uninteresting for the PC market.
Following the announcement from IBM Research, 7nm was also mentioned in TSMC's call. Apparently they expect to start qualifying in Q1 2017. That does not provide an estimate for production but, if their 10nm schedule is both accurate and representative of 7nm, that would put production somewhere in 2018. Note that I just speculated on an if of an if of a speculation, so take that with a mine of salt. There is probably a very good reason that this production date wasn't mentioned in the call.
Back to the 16nm discussion, what are you hoping for most? New GPUs from NVIDIA, new GPUs from AMD, a new generation of mobile SoCs, or the launch of AMD's new CPU architecture? This should make for a highly entertaining comments section on a Sunday morning, don't you agree?
Subject: Graphics Cards | July 17, 2015 - 08:20 AM | Sebastian Peak
Tagged: radeon, r9 nano, hbm, Fiji, amd
AMD has spilled the beans on at least one aspect of the R9 Nano: the release timeframe. On their Q2 earnings call yesterday AMD CEO Lisa Su made this telling remark:
“Fury just launched, actually this week, and we will be launching Nano in the August timeframe.”
Image credit: VideoCardz.com
Wccftech had the story based on the AMD earnings call, but unfortunately there is no other new information on the card just yet. We've speculated on how much lower the clocks would need to be to meet the 175W target with full Fiji silicon, and the reduction is going to be significant. The air coolers we've seen on the Fury (non-X) cards to date have extended well beyond the PCB, and the Nano is a mini-ITX form factor design.
Regardless of where the final GPU and memory clocks land, I think it's safe to assume there won't be much (if any) overclocking headroom. Then again, if the card does deliver higher performance than the 290X in a mini-ITX package at 175W, I don't think OC headroom will be a drawback. I guess we'll have to keep waiting for more information on the official specs before the end of August.
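As a very rough illustration of why the clock drop could be significant: dynamic power scales roughly with frequency times voltage squared, and voltage tends to track frequency on the same silicon, giving an approximate cube law. Treating the Fury X's 275W board power and 1050 MHz clock as the baseline, and ignoring leakage, binning, and HBM power, a naive estimate looks like this:

```python
# Naive cube-law estimate: P ~ f^3 on the same silicon and process.
# Real Fiji behavior (leakage, binning, HBM power) will differ;
# this is illustrative only, not an official spec.
fury_x_power_w = 275.0      # Fury X typical board power
fury_x_clock_mhz = 1050.0   # Fury X engine clock
nano_target_w = 175.0       # AMD's stated Nano target

clock_ratio = (nano_target_w / fury_x_power_w) ** (1 / 3)
est_nano_clock = fury_x_clock_mhz * clock_ratio
print(f"cube-law clock estimate: ~{est_nano_clock:.0f} MHz")  # ~903 MHz
```

Even this crude model lands well below the Fury X's 1050 MHz, which supports the expectation that the Nano will ship with notably lower clocks and little headroom.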