Subject: Graphics Cards | September 2, 2015 - 05:58 PM | Ryan Shrout
Tagged: video, r9 nano, Fiji, amd
Tomorrow afternoon, at 12pm PT / 3pm ET, AMD is hosting a live stream on its Twitch channel to show off and discuss a little more about the upcoming Radeon R9 Nano product we previewed last month.
I have no idea what is going to be discussed, I have no idea how long it will be and I don't really know what to expect at all other than that. Apparently AMD is going to play some games on the R9 Nano as well as talk about mods that the small form factor enables.
AM3+ Keeps Chugging Along
No one can say that MSI has not tried to keep the AM3+ market interesting with a handful of new products based upon that socket. Over the past year MSI has released three different products addressing multiple price points and feature sets. The 970 Gaming was the first, the 970 KRAIT brought USB 3.1 to the socket, and the latest 990FXA-Gaming board provides the most feature-rich implementation of the platform, USB 3.1 included.
AMD certainly has not done the platform any real favors of late in terms of new CPUs and architectures for that particular socket. The last refresh came about a year ago with the release of the FX-8370 and FX-8370e, which are still based on the Piledriver-derived Vishera core introduced three years ago. Unlike the GPU market, the CPU market has certainly not seen the leaps and bounds in overall performance that we enjoyed in years past.
MSI has taken the now-geriatric 990FX (based upon the 890FX chipset released in 2010; I think AMD has gotten its money's worth out of this particular chipset iteration) and implemented it in a new design that embraces many of the top-end features desired by enthusiasts. AMD still has a solid following, and their products are very competitive from a price/performance standpoint (check out Ryan’s price/perf graphs from his latest Intel CPU review).
The packing material is pretty basic: just cardboard, no foam. Still, the board fits nicely and is quite snug.
The idea behind the 990FXA-Gaming is to provide a very feature-rich product that appeals to gamers and enthusiasts. The key is to provide those features at a price point that will not scare away the budget enthusiasts. Just as MSI has done with the 970 Gaming, there were decisions made to keep costs down. We will get into these tradeoffs shortly.
To the Max?
Much of the PC enthusiast internet, including our comments section, has been abuzz with “Asynchronous Shader” discussion. Normally, I would explain what it is and then outline the issues that surround it, but I would like to swap that order this time. Basically, the Ashes of the Singularity benchmark utilizes Asynchronous Shaders in DirectX 12, but they disable it (by Vendor ID) for NVIDIA hardware. They say that this is because, while the driver reports compatibility, “attempting to use it was an unmitigated disaster in terms of performance and conformance”.
AMD's Robert Hallock claims that NVIDIA GPUs, including Maxwell, cannot support the feature in hardware at all, while all AMD GCN graphics cards do. NVIDIA has yet to respond to our requests for an official statement, although we haven't poked every one of our contacts yet. We will certainly update and/or follow up if we hear from them. For now, though, we have no idea whether this is a hardware or software issue. Either way, it seems like more than just politics.
So what is it?
Simply put, Asynchronous Shaders allow a graphics driver to cram workloads into portions of the GPU that are sitting idle but would otherwise go unused. For instance, if a graphics task is hammering the ROPs, the driver can toss an independent physics or post-processing task into the shader units alongside it. Kollock from Oxide Games used the analogy of HyperThreading, which allows two CPU threads to be executed on the same core at the same time, as long as the core has the capacity for it.
Kollock also notes that compute is becoming more important in the graphics pipeline, and that it is possible to bypass the graphics path altogether. The fixed-function bits may never go away, but it's possible that at least some engines will bypass them completely, maybe even their own engine, several years down the road.
But, like always, you will not get an infinite amount of performance by reducing your waste. You are always bound by the theoretical limits of your components, and you cannot optimize past that (except for obviously changing the workload itself). The interesting part is: you can measure that. You can absolutely observe how long a GPU is idle, and represent it as a percentage of a time-span (typically a frame).
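As a toy illustration of that measurement (the numbers are hypothetical, not from any real profiler), the idle percentage can be computed by merging the busy intervals recorded during a frame and dividing whatever time remains by the frame duration:

```python
def idle_percentage(busy_intervals, frame_ms):
    """Given (start, end) busy intervals within one frame (in ms),
    return the percentage of the frame the GPU sat idle."""
    # Merge overlapping intervals so concurrent work isn't double-counted
    merged = []
    for start, end in sorted(busy_intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    busy = sum(end - start for start, end in merged)
    return 100.0 * (frame_ms - busy) / frame_ms

# Hypothetical profile: a graphics task and an overlapping compute task
# inside a 16.7 ms (60 FPS) frame.
print(idle_percentage([(0.0, 9.0), (5.0, 12.0)], 16.7))  # ~28% idle
```

That leftover ~28% is the headroom an Asynchronous Shaders implementation would be trying to fill with independent work.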
And, of course, game developers profile GPUs from time to time...
According to Kollock, he has heard of some console developers getting up to 30% performance increases using Asynchronous Shaders. Again, this is on console hardware, so the gains may be larger or smaller on the PC. In an informal chat with a developer at Epic Games (so a massive grain of salt is required), his late-night, “totally speculative” ballpark guesstimate was that, on the Xbox One, the GPU could theoretically accept a maximum of ~10-25% more work in Unreal Engine 4, depending on the scene. He also said that memory bandwidth gets in the way, and Asynchronous Shaders would be fighting against that. It is something they are interested in and investigating, though.
This is where I speculate on drivers. When Mantle was announced, I looked at its features and said “wow, this is everything that a high-end game developer wants, and a graphics developer absolutely does not”. From the OpenCL-like multiple GPU model taking much of the QA out of SLI and CrossFire, to the memory and resource binding management, this should make graphics drivers so much easier.
It might not be free, though. Graphics drivers might still have a bunch of games to play to make sure that work is stuffed through the GPU as tightly packed as possible. We might continue to see “Game Ready” drivers in the coming years, even though much of that burden has shifted to the game developers. On the other hand, maybe these APIs will level the whole playing field and let all players focus on chip design and efficient ingestion of shader code. As always, painfully always, time will tell.
Subject: Systems, Mobile | September 2, 2015 - 03:00 AM | Sebastian Peak
Tagged: nvidia, notebooks, Lenovo, laptops, Intel Skylake, Intel Braswell, IFA 2015, ideapad 500S, ideapad 300S, ideapad 100S, Ideapad, gtx, APU, amd
Lenovo has unveiled their reinvented ideapad (now all lowercase) lineup at IFA 2015 in Berlin, and the new laptops feature updated processors, including Intel Braswell and Skylake, as well as some discrete AMD and NVIDIA GPU options.
At the entry-level price-point we find the ideapad 100S which does not contain one of the new Intel chips, instead running an Intel Atom Z3735F CPU and priced accordingly at just $189 for the 11.6” version and $259 for the 14” model. While low-end specs (2GB RAM, 32GB/64GB eMMC storage, 1366x768 screen) aren’t going to blow anyone away, these at least provide a Windows 10 alternative to a Chromebook at about the same cost, and to add some style Lenovo is offering the laptop in four colors: blue, red, white, and silver.
Moving up to the 300S we find a 14” laptop (offered in red, black, or white) with Intel Pentium Braswell processors up to the quad-core N3700, and the option of a FHD 1920x1080 display. Memory and storage options will range up to 8GB DDR3L and up to either 256GB SSD or 1TB HDD/SSHD. At 0.86" thick the 300S weighs 2.9 lbs, and prices will start at $479.
A lower-cost ideapad 300, without the “S” and with more basic styling, will be available in sizes ranging from 14” to 17” and prices starting between $399 and $549 for the respective models. A major distinction will be the inclusion of both Braswell and Intel 6th Gen Skylake CPUs, as well as the option of a discrete AMD GPU (R5 330M).
Last we have the ideapad 500S, available in 13.3”, 14”, and 15.6” versions. With Intel 6th Gen processors up to Core i7 like the 300S, these also offer optional NVIDIA GPUs (GTX 920M for the 13.3", 940M for the 14"+) and up to FHD screen resolution. Memory and storage options range up to 8GB DDR3L and up to either 256GB SSD or 1TB HDD/SSHD, and the 500S is a bit thinner and lighter than the 300S, with the 13.3” version 0.76” thick and 3.4 lbs, moving up to 0.81” and 4.6 lbs with the 15.6” version.
A non-S version of the ideapad 500 will also be available, and this will be the sole AMD CPU representative with the option of an all-AMD solution powered by up to the A10-7300 APU, or a combination of R7 350M graphics along with 6th Gen Intel Core processors. 14” and 15” models will be available starting at $399 for the APU model and $499 with an Intel CPU.
All of the new laptops ship with Windows 10 as Microsoft’s newest OS arrived just in time for the back-to-school season.
Subject: Graphics Cards, Processors | August 30, 2015 - 09:14 PM | Scott Michaud
Tagged: amd, carrizo, Fiji, opencl, opencl 2.0
Apart from manufacturers with a heavy first-party focus, such as Apple and Nintendo, hardware is useless without developer support. In this case, AMD has updated their App SDK to include support for OpenCL 2.0, with code samples. It also updates the SDK for Windows 10, Carrizo, and Fiji, but it is not entirely clear how.
That said, OpenCL is important to those two products. Fiji has very high compute throughput compared to any other GPU at the moment, and its memory bandwidth is often even more important for GPGPU workloads. It is also useful for Carrizo, because parallel compute and HSA features are what make it a unique product. AMD has been creating first-party software and helping popular third-party developers such as Adobe, but a little support for the world at large could bring a killer application or two, especially from the open-source community.
The SDK has been available in pre-release form for quite some time now, but it has finally graduated out of beta. OpenCL 2.0 allows work to be generated on the GPU itself, which is especially useful for tasks that depend upon previous results, since no round trip to the CPU is needed.
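A rough way to picture that device-side enqueue model (this is a host-side Python simulation of the control flow, not actual OpenCL API code): instead of the CPU submitting every kernel, a kernel running on the device can push its own follow-up work onto the queue based on its results.

```python
from collections import deque

def run_device_queue(initial_kernel):
    """Simulate OpenCL 2.0-style device-side enqueue: kernels may return
    follow-up kernels that run without a round trip to the host."""
    queue = deque([initial_kernel])
    results = []
    while queue:
        kernel = queue.popleft()
        result, children = kernel()   # a kernel returns its result...
        results.append(result)
        queue.extend(children)        # ...plus any work it generated itself
    return results

# Hypothetical example: keep refining a value until it crosses a threshold,
# with the decision to launch another pass made on the "device".
def refine(value):
    def kernel():
        new_value = value * 2
        children = [refine(new_value)] if new_value < 10 else []
        return new_value, children
    return kernel

print(run_device_queue(refine(1)))  # → [2, 4, 8, 16]
```

In real OpenCL 2.0 the same pattern avoids the stall of reading results back to the CPU just to decide whether more GPU work is needed.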
Subject: General Tech | August 27, 2015 - 12:59 PM | Ken Addison
Tagged: podcast, video, Nixeus, vue24, freesync, gsync, amd, r9 nano, Fiji, asus, PB258Q, qualcomm, snapdragon 820, nvidia
PC Perspective Podcast #364 - 08/27/2015
Join us this week as we discuss the Nixeus Vue 24 FreeSync Monitor, AMD R9 Nano leaks, GPU Marketshare and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Allyn Malventano, Jeremy Hellstrom, Josh Walrath, and Sebastian Peak
Program length: 1:22:36
The Tiniest Fiji
Way back on June 16th, AMD held a live stream event during E3 to announce a host of new products. In that group were the AMD Radeon R9 Fury X, R9 Fury, and R9 Nano. Of the three, the Nano was the most intriguing to the online press, as it was the one we knew the least about. AMD promised a full Fiji GPU in a package with a 6-inch PCB and a 175 watt TDP. Well today, AMD is, uh, re-announcing (??) the AMD Radeon R9 Nano with more details on specifications, performance, and availability.
First, let’s get this out of the way: AMD is making this announcement today because they publicly promised the R9 Nano for August. And with the final days of summer creeping up on them, rather than answer questions about another delay, AMD is instead going the route of a paper launch, but one with a known end date. We will apparently get our samples of the hardware in early September with reviews and the on-sale date following shortly thereafter. (Update: AMD claims the R9 Nano will be on store shelves on September 10th and should have "critical mass" of availability.)
Now let’s get to the details that you are really here for. And rather than start with the marketing spin on the specifications that AMD presented to the media, let’s dive into the gory details right now.
|  | R9 Nano | R9 Fury | R9 Fury X | GTX 980 Ti | TITAN X | GTX 980 | R9 290X |
|---|---|---|---|---|---|---|---|
| GPU | Fiji XT | Fiji Pro | Fiji XT | GM200 | GM200 | GM204 | Hawaii XT |
| Rated Clock | 1000 MHz | 1000 MHz | 1050 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1000 MHz |
| Memory Clock | 500 MHz | 500 MHz | 500 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 5000 MHz |
| Memory Interface | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) | 384-bit | 384-bit | 256-bit | 512-bit |
| Memory Bandwidth | 512 GB/s | 512 GB/s | 512 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 320 GB/s |
| TDP | 175 watts | 275 watts | 275 watts | 250 watts | 250 watts | 165 watts | 290 watts |
| Peak Compute | 8.19 TFLOPS | 7.20 TFLOPS | 8.60 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 5.63 TFLOPS |
AMD wasn’t fooling around: the Radeon R9 Nano graphics card does indeed include a full implementation of the Fiji GPU and HBM, including 4096 stream processors, 256 texture units, and 64 ROPs. The GPU core clock is rated “up to” 1.0 GHz, nearly the same as the Fury X (1050 MHz), and the only difference I can see in the paper specifications is that the Nano is rated at 8.19 TFLOPS of theoretical compute performance while the Fury X is rated at 8.60 TFLOPS.
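The table's numbers check out with the standard rules of thumb: peak compute is 2 FLOPs (a fused multiply-add) per stream processor per clock, and HBM bandwidth is the 4096-bit bus width times the 500 MHz memory clock, doubled for double data rate. A quick sanity check:

```python
def peak_tflops(stream_processors, clock_mhz):
    # 2 FLOPs per ALU per cycle (fused multiply-add)
    return 2 * stream_processors * clock_mhz * 1e6 / 1e12

def hbm_bandwidth_gbs(bus_bits, clock_mhz):
    # bits -> bytes, times clock, times 2 transfers per clock (DDR)
    return bus_bits / 8 * clock_mhz * 1e6 * 2 / 1e9

print(peak_tflops(4096, 1000))       # R9 Nano at 1.0 GHz: ~8.19 TFLOPS
print(peak_tflops(4096, 1050))       # Fury X at 1050 MHz: ~8.60 TFLOPS
print(hbm_bandwidth_gbs(4096, 500))  # HBM: 512 GB/s
```

So the entire 0.41 TFLOPS gap between Nano and Fury X comes from the 50 MHz clock difference; the shader count is identical.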
Subject: General Tech | August 25, 2015 - 02:57 PM | Jeremy Hellstrom
Tagged: amd, hot chips, SK Hynix
Thanks to DigiTimes we are getting some information out of Hot Chips about what is coming up from AMD. As Sebastian just posted, we now have a bit more about the R9 Nano, and you can bet we will see more in the near future. They also describe the new HBM developed in partnership with SK Hynix: 4GB of high-bandwidth memory over a 4096-bit interface will offer an impressive 512 GB/s of memory bandwidth. We also know a bit more about the new A-series APUs, which will range up to 12 compute cores: four Excavator-based CPU cores and eight GCN-based GPU cores. They will also introduce a new power-saving feature called Adaptive Voltage and Frequency Scaling (AVFS) and will support the new H.265 compression standard. Click on through to DigiTimes or wait for more pictures and documentation to be released from Hot Chips.
"AMD is showcasing its new high-performance accelerated processing unit (APU), codenamed Carrizo, and the new AMD Radeon R9 Fury family of GPUs, codenamed Fiji, at the annual Hot Chips symposium."
Here is some more Tech News from around the web:
- The TR Podcast 184: Streaming to the Shield and overrunning VRAM
- Backwards S-Pen Can Permanently Damage Note 5 @ Slashdot
- TSMC growing its 16nm client base @ DigiTimes
- Live Booting Linux @ Linux.com
- Office 2016 for Windows looks set for a 22 September launch @ The Inquirer
- MIT creates file system that will survive unexpected crashes @ The Inquirer
- Samsung smart fridge leaves Gmail logins open to attack @ The Register
- Intel Security hires ex-Cisco and Avaya man to run global channels @ The Register
Subject: Graphics Cards | August 25, 2015 - 02:23 PM | Sebastian Peak
Tagged: Radeon R9 Nano, radeon, r9 nano, hbm, graphics, gpu, amd
New detailed photos of the upcoming Radeon R9 Nano have surfaced, and Ryan has confirmed with AMD that these are in fact real.
We've seen the outside of the card before, but for the first time we are provided a detailed look under the hood.
The cooler is quite compact and has copper heatpipes for both core and VRM
The R9 Nano is a very small card and it will be powered with a single 8-pin power connector directed toward the back.
Connectivity is provided via three DisplayPort outputs and a single HDMI port
And fans of backplates will need to seek 3rd-party offerings as it looks like this will have a bare PCB around back.
We will keep you updated if any official specifications become available, and of course we'll have complete coverage once the R9 Nano is officially launched!
Introduction, Specifications, and Packaging
We have reviewed a lot of Variable Refresh Rate displays over the past several years now, and for the most part, these displays have come with some form of price premium attached. Nvidia’s G-Sync tech requires an additional module that adds some cost to the parts list for those displays. AMD took a while to get their FreeSync tech pushed through the scaler makers, and with the added effort needed to implement these new parts, display makers naturally pushed the new features into their higher end displays first. Just look at the specs of these displays:
- ASUS PG278Q 27in TN 1440P 144Hz G-Sync
- Acer XB270H 27in TN 1080P 144Hz G-Sync
- Acer XB280HK 28in TN 4K 60Hz G-Sync
- Acer XB270HU 27in IPS 1440P 144Hz G-Sync
- LG 34UM67 34in IPS 25x18 21:9 48-75Hz FreeSync
- BenQ XL2730Z 27in TN 1440P 40-144Hz FreeSync
- Acer XG270HU 27in TN 1440P 40-144Hz FreeSync
- ASUS MG279Q 27in IPS 1440P 144Hz FreeSync (35-90Hz)
Most of the reviewed VRR panels are 1440P or higher, and the only 1080P display currently runs $500. This unfortunately leaves VRR technology at a price point that is simply out of reach for gamers unable to drop half a grand on a display. What we needed was a good 1080P display with a *full* VRR range. Bonus points for high refresh rates and, in the case of a FreeSync display, a minimum refresh rate low enough that a typical game will not dip below it. This shouldn’t be too hard, since 1080P is not that demanding on even lower cost hardware these days. Who was up to this challenge?
Nixeus has answered this call with their new Nixeus Vue display. This is a 24” 1080P 144Hz FreeSync display with a VRR bottom limit of 30 FPS. It comes in two models, distinguished by a trailing letter in the model number: the NX-VUE24B includes a ‘base’ stand with only tilt support, while the NX-VUE24A includes a ‘premium’ stand with full height, rotation, and tilt adjustment.
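That 30 FPS floor is the interesting part. As a simplified sketch (hypothetical frame rates, and ignoring driver tricks like low-framerate compensation), whether a given frame rate actually benefits from VRR is just a range test against the display's window:

```python
def vrr_state(fps, vrr_min, vrr_max):
    """Classify a frame rate against a FreeSync window (simplified model:
    real drivers may add low-framerate compensation below the window)."""
    if fps < vrr_min:
        return "below window"   # tearing or stutter returns
    if fps > vrr_max:
        return "above window"   # capped or tearing, depending on settings
    return "inside window"      # smooth variable refresh

# The Nixeus Vue 24 covers 30-144 Hz, so typical 1080P frame rates
# on midrange hardware land inside the window.
for fps in (24, 60, 160):
    print(fps, vrr_state(fps, 30, 144))
```

The wider that window, and the lower its floor, the less often a game falls out of it, which is exactly why a 30-144 Hz range on a 1080P panel is attractive.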
Does the $330-350 Nixeus Vue 24" FreeSync monitor fit the bill?