Subject: General Tech | August 25, 2015 - 02:57 PM | Jeremy Hellstrom
Tagged: amd, hot chips, SK Hynix
Thanks to DigiTimes we are getting some information out of Hot Chips about what is coming up from AMD. As Sebastian just posted, we now have a bit more on the R9 Nano, and you can bet we will see more in the near future. They also describe the new HBM developed in partnership with SK Hynix: 4GB of high-bandwidth memory on a 4096-bit interface will offer an impressive 512GB/s of memory bandwidth. We also know a bit more about the new A-series APUs, which will range up to 12 compute cores: four Excavator-based CPU cores and eight GCN-based GPU compute units. They will also introduce a new power-saving feature called Adaptive Voltage and Frequency Scaling (AVFS) and will support the new H.265 compression standard. Click on through to DigiTimes or wait for more pictures and documentation to be released from Hot Chips.
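That bandwidth figure checks out against first-generation HBM's specs (4096-bit interface, 1 Gbps effective data rate per pin); a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope HBM bandwidth check, assuming first-gen HBM:
# 4096-bit interface, 1 Gbps per pin (500 MHz DDR).
interface_width_bits = 4096
per_pin_rate_gbps = 1.0                  # effective data rate per pin

bandwidth_gbps = interface_width_bits * per_pin_rate_gbps  # gigabits/s
bandwidth_gBps = bandwidth_gbps / 8                        # gigabytes/s

print(f"{bandwidth_gBps:.0f} GB/s")  # → 512 GB/s
```

So the "512" figure only makes sense in gigabytes per second, not gigabits.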
"AMD is showcasing its new high-performance accelerated processing unit (APU), codenamed Carrizo, and the new AMD Radeon R9 Fury family of GPUs, codenamed Fiji, at the annual Hot Chips symposium."
Here is some more Tech News from around the web:
- The TR Podcast 184: Streaming to the Shield and overrunning VRAM
- Backwards S-Pen Can Permanently Damage Note 5 @ Slashdot
- TSMC growing its 16nm client base @ DigiTimes
- Live Booting Linux @ Linux.com
- Office 2016 for Windows looks set for a 22 September launch @ The Inquirer
- MIT creates file system that will survive unexpected crashes @ The Inquirer
- Samsung smart fridge leaves Gmail logins open to attack @ The Register
- Intel Security hires ex-Cisco and Avaya man to run global channels @ The Register
Subject: Graphics Cards | August 25, 2015 - 02:23 PM | Sebastian Peak
Tagged: Radeon R9 Nano, radeon, r9 nano, hbm, graphics, gpu, amd
New detailed photos of the upcoming Radeon R9 Nano have surfaced, and Ryan has confirmed with AMD that these are in fact real.
We've seen the outside of the card before, but for the first time we are provided a detailed look under the hood.
The cooler is quite compact and has copper heatpipes for both core and VRM
The R9 Nano is a very small card and it will be powered with a single 8-pin power connector directed toward the back.
Connectivity is provided via three DisplayPort outputs and a single HDMI port
And fans of backplates will need to seek 3rd-party offerings as it looks like this will have a bare PCB around back.
We will keep you updated if any official specifications become available, and of course we'll have complete coverage once the R9 Nano is officially launched!
Introduction, Specifications, and Packaging
We have reviewed a lot of Variable Refresh Rate displays over the past several years now, and for the most part, these displays have come with some form of price premium attached. Nvidia’s G-Sync tech requires an additional module that adds some cost to the parts list for those displays. AMD took a while to get their FreeSync tech pushed through the scaler makers, and with the added effort needed to implement these new parts, display makers naturally pushed the new features into their higher end displays first. Just look at the specs of these displays:
- ASUS PG278Q 27in TN 1440P 144Hz G-Sync
- Acer XB270H 27in TN 1080P 144Hz G-Sync
- Acer XB280HK 28in TN 4K 60Hz G-Sync
- Acer XB270HU 27in IPS 1440P 144Hz G-Sync
- LG 34UM67 34in IPS 2560x1080 21:9 48-75Hz FreeSync
- BenQ XL2730Z 27in TN 1440P 40-144Hz FreeSync
- Acer XG270HU 27in TN 1440P 40-144Hz FreeSync
- ASUS MG279Q 27in IPS 1440P 144Hz FreeSync (35-90Hz)
Most of the reviewed VRR panels are 1440P or higher, and the only 1080P display currently runs $500. This unfortunately leaves VRR technology at a price point that is simply out of reach of gamers unable to drop half a grand on a display. What we need is a good 1080P display with a *full* VRR range. Bonus points for high refresh rates and, in the case of a FreeSync display, a minimum refresh rate low enough that a typical game will not run below it. This shouldn't be too hard, since 1080P is not that demanding on even lower-cost hardware these days. Who was up to this challenge?
Nixeus has answered this call with their new Nixeus Vue display. This is a 24” 1080P 144Hz FreeSync display with a VRR bottom limit of 30 FPS. It comes in two models, distinguished by a trailing letter in the model number. The NX-VUE24B has a ‘base’ stand with only tilt support, while the NX-VUE24A has a ‘premium’ stand with full height, rotation, and tilt support.
Does the $330-350 Nixeus Vue 24" FreeSync monitor fit the bill?
Subject: Graphics Cards | August 24, 2015 - 02:37 PM | Sebastian Peak
Tagged: rumor, report, Radeon R9 Nano, R9 290X, leak, hot chips, hbm, amd
A report from German-language tech site Golem contains what appears to be a slide leaked from AMD's GPU presentation at Hot Chips in Cupertino, and the results paint a very efficient picture of the upcoming Radeon R9 Nano GPU.
The spelling of "performance" doesn't mean this is fake, does it?
While only managing 3 FPS better than the Radeon R9 290X in this particular benchmark, this result was achieved with 1.9x the performance per watt of the baseline 290X in the test. The article speculates on the possible clock speed of the R9 Nano based on the relative performance, and estimates 850 MHz (which is of course up for debate as no official specs are known).
The most compelling part of the result has to be the ability of the Nano to match or exceed the R9 290X in performance, while only requiring a single 8-pin PCIe connector and needing an average of only 175 watts. With a mini-ITX friendly 15 cm board (5.9 inches) this could be one of the more compelling options for a mini gaming rig going forward.
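Taking the leaked slide's numbers at face value (an assumption, since no official specs are known), a quick calculation shows what the 1.9x performance-per-watt claim implies about the 290X's power draw in that same test:

```python
# What the leaked slide's numbers imply, taken at face value.
nano_avg_watts = 175.0
perf_per_watt_ratio = 1.9        # Nano vs. R9 290X, per the slide
perf_ratio = 1.0                 # roughly equal FPS (Nano was ~3 FPS ahead)

# perf/watt ratio = perf_ratio * (P_290X / P_Nano), so:
implied_290x_watts = nano_avg_watts * perf_per_watt_ratio / perf_ratio
print(f"Implied R9 290X draw: {implied_290x_watts:.0f} W")  # ≈ 332 W
```

That figure is higher than the 290X's official board power, which suggests the slide's comparison was made under a particularly demanding load, or that the numbers should not be read too literally.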
We have a lot of questions that have yet to be answered of course, including the actual speed of both core and HBM, and just how quiet this air-cooled card might be under load. We shouldn't have to wait much longer!
Subject: Graphics Cards | August 21, 2015 - 11:30 AM | Sebastian Peak
Tagged: PC, nvidia, Matrox, jpr, graphics cards, gpu market share, desktop market share, amd, AIB, add in board
While we reported recently on the decline of overall GPU shipments, a new report out of Jon Peddie Research covers the add-in board segment to give us a look at the desktop graphics card market. So how are the big two (sorry, Matrox) doing?
(Table from the JPR report: market share by GPU supplier for this quarter, last quarter, and the same quarter last year.)
The big news is of course AMD's drop of 4.5 percentage points quarter-to-quarter, down to just 18% from 37.9% a year ago. There will be many opinions as to why their share has been falling over the last year, but it certainly didn't help that the 300-series GPUs are rebrands of the 200-series, and the new Fury cards have had very limited availability so far.
The graph from Mercury Research illustrates what is almost a mirror image, with NVIDIA gaining roughly 20 points as AMD lost 20, for a 40-point swing in the gap between them. Ouch. Meanwhile (not pictured) Matrox didn't have a statistically meaningful quarter but still manages to appear on the JPR report with 0.1% market share (somehow) last quarter.
The desktop graphics market isn't actually suffering quite as much as the overall PC market, thanks in large part to the enthusiast segment.
"The AIB market has benefited from the enthusiast segment PC growth, which has been partially fueled by recent introductions of exciting new powerful (GPUs). The demand for high-end PCs and associated hardware from the enthusiast and overclocking segments has bucked the downward trend and given AIB vendors a needed prospect to offset declining sales in the mainstream consumer space."
But not all is well: the overall add-in board attach rate for desktops "has declined from a high of 63% in Q1 2008 to 37% this quarter". This is indicative of the industry-wide trend toward integrated GPUs, with AMD APUs and Intel processor graphics, as illustrated by this graphic from the report.
The year-to-year numbers show an overall drop of 18.8%, and even with its dominant 81.9% market share NVIDIA has still seen shipments decrease by 12% this quarter. These trends seem to indicate a gloomy future for discrete graphics in the coming years, but for now we in the enthusiast community will continue to keep it afloat. It would certainly be nice to see some gains from AMD soon to keep things interesting, which might also help bring flagship prices down from their lofty $400 - $600 mark.
Subject: General Tech | August 13, 2015 - 01:14 PM | Ken Addison
Tagged: podcast, video, amd, nvidia, GTX 970, Zotac GTX 970 AMP! Extreme Core Edition, dx12, 3dfx, voodoo 3, Intel, SSD 750, NVMe, Samsung, R9 Fury, Fiji, gtx 950
PC Perspective Podcast #362 - 08/13/2015
Join us this week as we discuss Benchmarking a Voodoo 3, Flash Media Summit 2015, Skylake Delidding and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Sebastian Peak
Program length: 1:15:23
Subject: Graphics Cards | August 12, 2015 - 05:29 PM | Sebastian Peak
Tagged: STRIX R9 Fury, Radeon R9 Fury, overclocking, oc, LN2, hbm, fury x, asus, amd
What happens when you unlock an AMD Fury to have the Compute Units of a Fury X, and then overclock the snot out of it using LN2? User Xtreme Addict in the HWBot forums has created a comprehensive guide to do just this, and the results are incredible.
Not for the faint of heart (image credit: Xtreme Addict)
"The steps include unlocking the Compute Units to enable Fury X grade performance, enabling the hotwire soldering pads, a 0.95v Rail mod, and of course the trimpot/hotwire VGPU, VMEM, VPLL (VDDCI) mods.
The result? A GPU frequency of 1450 MHz and HBM frequency of 1000 MHz. For the HBM that's a 100% overclock."
Beginning with a stock ASUS R9 Fury STRIX card Xtreme Addict performed some surgery to fully unlock the voltage, and unlocked the Compute Units using a tool from this Overclock.net thread.
The results? Staggering. HBM at 1000 MHz is double the rate of the stock Fury X, and a GPU core of 1450 MHz is a 400 MHz increase. So what kind of performance did this heavily overclocked card achieve?
"The performance goes up from 6237 points at default to 6756 after unlocking the CUs, then 8121 points after overclock on air cooling, to eventually end up at 9634 points when fully unleashed with liquid nitrogen."
Apparently they were able to push the card even further, ending up with a whopping 10033 score in 3DMark Fire Strike Extreme.
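For context, here is how each reported 3DMark Fire Strike score improves relative to stock (figures taken from the quote above):

```python
# Fire Strike scores reported in the HWBot guide, and the gain at each stage.
stages = [
    ("stock", 6237),
    ("CUs unlocked", 6756),
    ("overclocked on air", 8121),
    ("liquid nitrogen", 9634),
]

baseline = stages[0][1]
for name, score in stages:
    gain = (score / baseline - 1) * 100
    print(f"{name:20s} {score:5d}  (+{gain:.1f}% vs. stock)")
```

The unlock alone is worth about 8%, but the bulk of the 54% total gain comes from the extreme clocks.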
While this method is far too extreme for 99% of enthusiasts, the idea of unlocking a retail Fury to the level of a Fury X through software/BIOS mods is much more accessible, as is the possibility of reaching much higher clocks through advanced cooling methods.
Unfortunately, if reading through this makes you want to run out and grab one of these STRIX cards, availability is still limited. Hopefully supply catches up with demand in the near future.
A quick look at stock status on Newegg for the featured R9 Fury card
It's Basically a Function Call for GPUs
Mantle, Vulkan, and DirectX 12 all claim to reduce overhead and provide a staggering increase in “draw calls”. As mentioned in the previous editorial, loading a graphics card with tasks changes drastically in these new APIs. With DirectX 10 and earlier, applications would assign attributes to (what they are told is) the global state of the graphics card. After everything is configured and bound, one of a few “draw” functions is called, which queues the task in the graphics driver as a “draw call”.
While this suggests that just a single graphics device is to be defined, which we also mentioned in the previous article, it also implies that one thread needs to be the authority. This limitation was known for a while, and it contributed to the meme that consoles can squeeze out all the performance they have while PCs are “too high level” for that. Microsoft tried to combat this with “Deferred Contexts” in DirectX 11. This feature allows virtual, shadow states to be built up from secondary threads and then appended to the global state, whole. It was a compromise between each thread being able to create its own commands and the legacy decision to have a single, global state for the GPU.
Some developers experienced gains, while others lost a bit. It didn't live up to expectations.
The paradigm used to load graphics cards is the problem. It doesn't make sense anymore. A developer might not want to draw a primitive with every poke of the GPU. At times, they might want to shove a workload of simple linear algebra through it, while other requests could simply be pushing memory around to set up a later task (or to read the result of a previous one). More importantly, any thread could want to do this to any graphics device.
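A toy sketch (not any real graphics API) makes the contrast concrete: the old model funnels every state change and draw through one mutable global state, while the new model lets each thread record its own command list and only serializes the final submission to a queue:

```python
# Toy model (not a real graphics API) contrasting the two submission styles.
from concurrent.futures import ThreadPoolExecutor

# Old style (DX11 and earlier): one mutable global state. Every draw call
# snapshots whatever happens to be bound, so one thread must be the authority.
class GlobalStateDevice:
    def __init__(self):
        self.state, self.calls = {}, []
    def bind(self, key, value):
        self.state[key] = value           # mutates shared state
    def draw(self):
        self.calls.append(dict(self.state))

# New style (DX12/Vulkan-like): each thread records commands locally;
# only the final submit to the queue needs serializing.
class CommandList:
    def __init__(self):
        self.commands = []
    def bind(self, key, value):
        self.commands.append(("bind", key, value))
    def draw(self):
        self.commands.append(("draw",))

class Queue:
    def __init__(self):
        self.submitted = []
    def submit(self, cmd_list):
        self.submitted.append(cmd_list.commands)

def record(n):                            # runs on a worker thread
    cl = CommandList()
    cl.bind("shader", n)
    cl.draw()
    return cl

queue = Queue()
with ThreadPoolExecutor(max_workers=4) as pool:
    for cl in pool.map(record, range(4)):
        queue.submit(cl)

print(len(queue.submitted))  # → 4
```

In the second style nothing is shared while recording, so the work scales with the number of threads; a "draw" is just one more entry in a list, which is why the per-call overhead collapses.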
The new graphics APIs allow developers to submit their tasks more quickly and intelligently, and they allow drivers to schedule compatible tasks better, even simultaneously. In fact, the driver's job has been massively simplified altogether. When we tested 3DMark back in March, two interesting things were revealed:
- Both AMD and NVIDIA are only a two-digit percentage of draw call performance apart
- Both AMD and NVIDIA saw an order of magnitude increase in draw calls
Subject: General Tech | August 7, 2015 - 01:31 PM | Jeremy Hellstrom
Tagged: fud, security, Intel, amd, x86, SMM
The SMM security hole that Christopher Domas has demonstrated (pdf) is worrying, but don't panic: it requires your system to be compromised before you are vulnerable. That said, once you have access to SMM you can do anything you like to the computer, up to and including ensuring you can reinfect the machine even after a complete format or UEFI update. The flaw was proven on Intel x86 machines but is likely to apply to AMD processors as well, as they were using the same architecture around the turn of the millennium; thankfully the issue has been mitigated in recent processors. Intel will be releasing patches for affected CPUs, although not all processors can be patched, and we have yet to hear from AMD. You can get an overview of the issue by following the link at Slashdot, and speculate on whether this flaw was a mistake or inserted on purpose in our comment section.
"Security researcher Christopher Domas has demonstrated a method of installing a rootkit in a PC's firmware that exploits a feature built into every x86 chip manufactured since 1997. The rootkit infects the processor's System Management Mode, and could be used to wipe the UEFI or even to re-infect the OS after a clean install. Protection features like Secure Boot wouldn't help, because they too rely on the SMM to be secure."
Here is some more Tech News from around the web:
- Millions of Android devices pwned in single text attack ... again @ The Inquirer
- Mozilla Issues Fix For Firefox Zero-Day Bug @ Slashdot
- Microsoft plays down playing fast and loose with Windows 10 privacy @ The Inquirer
- Ransacked US OPM wins Pwnie Award for 'Most EPIC Fail' @ The Register
- Hacking Team brewed potent iOS poison for non-jailbroken iThings @ The Register
- Tesla Model S Has Been Hacked @ Slashdot
- Asus EA-AC87 4×4 wireless bridge @ Kitguru
Subject: Graphics Cards | August 7, 2015 - 10:46 AM | Ryan Shrout
Tagged: sdk, Oculus, nvidia, direct driver mode, amd
In an email sent out by Oculus this morning, the company has revealed some interesting details about the upcoming release of the Oculus SDK 0.7 on August 20th. The most interesting change is the introduction of Direct Driver Mode, developed in tandem with both AMD and NVIDIA.
This new version of the SDK will remove the simplistic "Extended Mode" that many users and developers implemented for a quick and dirty way of getting the Rift development kits up and running. However, that implementation had the downside of additional latency, something that Oculus is trying to eliminate completely.
Here is what Oculus wrote about the "Direct Driver Mode" in its email to developers:
Direct Driver Mode is the most robust and reliable solution for interfacing with the Rift to date. Rather than inserting VR functionality between the OS and the graphics driver, headset awareness is added directly to the driver. As a result, Direct Driver Mode avoids many of the latency challenges of Extended Mode and also significantly reduces the number of conflicts between the Oculus SDK and third party applications. Note that Direct Driver Mode requires new drivers from NVIDIA and AMD, particularly for Kepler (GTX 645 or better) and GCN (HD 7730 or better) architectures, respectively.
We have heard NVIDIA and AMD talk about the benefits of direct driver implementations for VR headsets for a long time. NVIDIA calls its software implementation GameWorks VR and AMD calls its software support LiquidVR. Both aim to do the same thing: give the developer more direct access to the headset hardware while offering new ways for faster, lower-latency rendering in games.
Both companies have unique features to offer as well, including NVIDIA and its multi-res shading technology. Check out our interview with NVIDIA on the topic below:
NVIDIA's Tom Petersen came to our offices to talk about GameWorks VR
Other notes in the email include a tentative November release for the 1.0 version of the Oculus SDK. Until that version releases, Oculus is only guaranteeing that each new runtime will support the previous version of the SDK. So when SDK 0.8 is released, the runtime will only guarantee support for 0.8 and 0.7; when 0.9 comes out, game developers will need to make sure they are at least on SDK 0.8 or risk incompatibility. Things will be tough for developers in this short window of time, but Oculus claims it's necessary to "allow them to more rapidly evolve the software architecture and API." After SDK 1.0 hits, future runtimes will continue to support SDK 1.0.
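The support policy reads more clearly as a rule. Here is a toy sketch (a hypothetical helper, not part of any Oculus tooling) of the compatibility window described above:

```python
# Toy model of the support policy described in the email (hypothetical
# helper, not part of any Oculus tooling): before 1.0, a runtime supports
# only its own SDK version and the one immediately prior; from 1.0 on,
# every runtime keeps supporting SDK 1.0 and anything up to itself.
def runtime_supports(runtime: float, sdk: float) -> bool:
    if runtime >= 1.0:
        return 1.0 <= sdk <= runtime
    # pre-1.0: current version and the previous 0.x release only
    return sdk in (runtime, round(runtime - 0.1, 1))

print(runtime_supports(0.8, 0.7))  # True  — previous SDK still works
print(runtime_supports(0.9, 0.7))  # False — two versions behind
```

In other words, a game shipping against SDK 0.7 works until runtime 0.9 arrives, at which point the developer must rebuild; after 1.0, that treadmill stops.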