Subject: Graphics Cards, Processors, Displays, Systems | May 15, 2015 - 03:02 PM | Scott Michaud
Tagged: Oculus, oculus vr, nvidia, amd, geforce, radeon, Intel, core i5
Today, Oculus has published a list of the hardware they believe should drive their VR headset. The Oculus Rift will, of course, still run on lesser hardware: their minimum specifications, published last month and focused on the Development Kit 2, did not even list a specific CPU or GPU, just a DVI-D or HDMI output. They did, however, recommend a graphics card that can handle your game at 1080p with at least 75 fps.
The current list is a little different:
- NVIDIA GeForce GTX 970 / AMD Radeon R9 290 (or higher)
- Intel Core i5-4590 (or higher)
- 8GB RAM (or higher)
- A compatible HDMI 1.3 output
- 2x USB 3.0 ports
- Windows 7 SP1 (or newer)
I am guessing that, unlike the previous list, Oculus now has a clearer vision of a development target. They were a little unclear about whether this list refers to the consumer version or to the current needs of developers. In either case, it will likely serve as a guide for what they believe developers should target when the consumer version launches.
This post also coincides with the release of the Oculus PC SDK 0.6.0. This version pushes distortion rendering to the Oculus Server process, rather than the application. It also allows multiple canvases to be sent to the SDK, which means developers can render text and other noticeable content at full resolution, but scale back in places that the user is less likely to notice. They can also be updated at different frequencies, such as sleeping the HUD redraw unless a value changes.
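The multi-canvas model lends itself to a simple dirty-flag pattern. Here is a conceptual sketch in Python; every name is illustrative (the actual Oculus SDK is a C API with different calls), but it shows the idea of sleeping a HUD redraw until a value changes:

```python
class HudLayer:
    """Toy model of a separately updated HUD canvas. Names are
    illustrative, not the actual Oculus SDK API."""

    def __init__(self):
        self.value = None    # last value rendered into the HUD texture
        self.redraws = 0     # how many times we actually re-rendered

    def submit(self, value):
        # Only re-render the HUD texture when the displayed value changes;
        # otherwise the compositor can reuse the previously submitted canvas.
        if value != self.value:
            self.value = value
            self.redraws += 1
        return self.redraws

hud = HudLayer()
for health in [100, 100, 100, 95, 95, 80]:   # one call per rendered frame
    hud.submit(health)
print(hud.redraws)  # 3 -- the HUD was redrawn only 3 times over 6 frames
```

The rest of the scene still renders every frame at full rate; only the text layer goes idle between changes, which is where the resolution and update-frequency savings come from.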
The Oculus PC SDK (0.6.0) is now available at the Oculus Developer Center.
Subject: Graphics Cards | May 14, 2015 - 07:00 AM | Scott Michaud
Tagged: tonga, radeon, R9, pitcairn, Fiji, bonaire, amd
Benchlife.info, via WCCFTech, believes that AMD's Radeon R9 300-series GPUs will launch in late June. Specifically, the R9 380, the R7 370, and the R7 360 will arrive on the 18th of June. These are listed as OEM parts, as we have mentioned on the podcast, which Ryan speculates could mean that the flagship Fiji XT might go by a different name. Benchlife.info seems to think that it will be called the R9 390(X), though, and that it will be released on the 24th of June.
WCCFTech is a bit more timid, calling it simply “Fiji XT”.
In relation to industry events, this has the OEM lineup launching on the last day of E3 and Fiji XT launching in the middle of the following week. This feels a little weird, especially because AMD's E3 event with PC Gamer is on the 16th. While it makes sense for AMD to announce the launch a few days before it happens, that doesn't make sense for OEM parts unless they were going to announce a line of pre-built PCs. The most likely candidate to launch gaming PCs is Valve, and they're one of the few companies that are absent from AMD's event.
And this is where I run out of ideas. Launching a line of OEM parts at E3 is weird unless it was to open the flood gates for OEMs to make their own announcements. Unless Valve is scheduled to make an announcement earlier in the day, or a surprise appearance at the event, that seems unlikely. Something seems up, though.
Subject: Graphics Cards | May 7, 2015 - 09:49 AM | Ryan Shrout
Tagged: wce, radeon, Fiji, amd, 390x
File this under "rumor" for sure, but a cool one nonetheless...
After yesterday's official tidbit of information surrounding AMD's upcoming flagship graphics card for enthusiasts and its use of HBM (high bandwidth memory), it appears we have another leak on our hands. The guys over at Chiphell have apparently acquired some stock footage of the new Fiji flagship card (whether it will be called the 390X remains to be seen) and it looks...awesome.
In that post from yesterday I noted that, with an HBM design, AMD could in theory build an add-in card of a different form factor than anything we have previously seen for a high-end part. Based on the image above, if this turns out to be the high-end Fiji offering, it appears the PCB will indeed be quite small, as it no longer requires memory surrounding the GPU itself. You can also see that it will in fact be water cooled, though it appears to have barb inlets rather than a pre-attached cooler in this image.
The second leaked image shows display outputs consisting of three full-size DisplayPort connections and a single HDMI port.
All of this could be faked of course, but if it is, the joker did a damn good job of compiling all the information into one design. If it's real, I think AMD might finally have a match for the look and styling of the high-end GeForce offerings.
What do you think: real or fake? Cool or meh? Let us know!
Subject: Graphics Cards | May 6, 2015 - 02:21 PM | Ryan Shrout
Tagged: amd, hbm, radeon, gpu
During today's 2015 AMD Financial Analyst Day, CEO Dr. Lisa Su discussed some of the details of the upcoming enthusiast Radeon graphics product. Though it wasn't given a name, she repeatedly said that the product would be announced "in the coming weeks...at upcoming industry events."
You won't find specifications here, but understanding the goals and targets that AMD has for this new flagship product will help tell the story of this new Radeon product. Dr. Su sees AMD investing at very specific inflection points, the most recent of which are DirectX 12, 4K displays, and VR technology. With the adoption of HBM (high bandwidth memory) that sits on the same package as the GPU, rather than spread across a physical PCB, we will see both a reduction in power consumption and a significant increase in GPU memory bandwidth.
HBM will accelerate the performance improvements at those key inflection points Dr. Su mentioned. Additional memory bandwidth will aid the ability for discrete GPUs to push out 4K resolutions and beyond, no longer limited by texture sizes. AMD's LiquidVR software, in conjunction with HBM, will be able to improve latency and reduce performance concerns on current and future generations of virtual reality hardware.
One interesting comment made during the conference was that HBM would enable new form factors for GPUs now that you no longer need to have memory spread out on a PCB. While there isn't much room in the add-in card market for differentiation, in the mobile space that could mean some very interesting things for higher performance gaming notebooks.
Mark Papermaster, AMD CTO, said earlier in the conference that HBM would aid in performance but, maybe more importantly, will lower power and improve total GPU efficiency. HBM will offer more than 3x improved performance/watt compared to GDDR5 while also running at more than 50% lower power than GDDR5. Lower power and higher performance upgrades don't happen often, so I am really excited to see what AMD does with it.
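Taken at face value, those two figures together imply a solid raw-performance gain as well. A quick back-of-the-envelope check (the 3x and 50% numbers are AMD's; the arithmetic and interpretation are mine):

```python
# AMD's stated figures for HBM vs. GDDR5
perf_per_watt_gain = 3.0   # "more than 3x improved performance/watt"
power_ratio = 0.5          # "more than 50% lower power" -> half the watts

# If performance/watt triples while power is cut in half, the delivered
# memory performance (bandwidth) still rises by roughly 1.5x, and it does
# so while consuming less power than GDDR5 did.
bandwidth_gain = perf_per_watt_gain * power_ratio
print(bandwidth_gain)  # 1.5
```

Since AMD phrased both numbers as "more than," 1.5x is a floor under this reading, not a ceiling.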
There weren't any more details on the next flagship Radeon GPU but it doesn't look like we'll have to wait much longer.
It's more than just a branding issue
As a part of my look at the first wave of AMD FreeSync monitors hitting the market, I wrote an analysis of how the competing technologies of FreeSync and G-Sync differ from one another. It was a complex topic that I tried to state in as succinct a fashion as possible given the time constraints and that the article subject was on FreeSync specifically. I'm going to include a portion of that discussion here, to recap:
First, we need to look inside the VRR window, the zone in which the monitor and AMD claim that variable refresh should be working without tears and without stutter. On the LG 34UM67, for example, that range is 48-75 Hz, so frame rates between 48 FPS and 75 FPS should be smooth. Next we want to look above the window, at frame rates above the 75 Hz maximum refresh rate. Finally, and maybe most importantly, we need to look below the window, at frame rates under the minimum rated variable refresh target; in this example, that is 48 FPS.
AMD FreeSync offers more flexibility for the gamer than G-Sync around this VRR window. Both above and below the variable refresh range, AMD lets gamers choose a VSync enabled or disabled setting, and that setting is honored just as it is today whenever your game's frame rate extends outside the VRR window. So, for our 34UM67 example, if your game is capable of rendering at 85 FPS, you will either see tearing on your screen (if you have VSync disabled) or you will get a static frame rate of 75 FPS, matching the top refresh rate of the panel itself. If your game is rendering at 40 FPS, below the minimum of the VRR window, then you will again see either tearing (with VSync off) or the potential for stutter and hitching (with VSync on).
But what happens with this FreeSync monitor and a theoretical G-Sync monitor below the window? AMD's implementation means that you get the option of disabling or enabling VSync. On the 34UM67, as soon as your game's frame rate drops under 48 FPS, you will either see tearing on your screen or you will begin to see hints of stutter and judder as the typical (and previously mentioned) VSync concerns rear their heads again. At these lower frame rates (below the window), the artifacts actually impact your gaming experience much more dramatically than they do at higher frame rates (above the window).
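AMD's handling can be summarized in a few lines of logic. A minimal sketch, using the 34UM67's 48-75 Hz window; the function name and result labels are mine, not AMD's terminology:

```python
def freesync_behavior(fps, vsync, window=(48, 75)):
    """Describe what the user sees on a FreeSync panel at a given frame rate.

    window is the panel's variable-refresh range in Hz (48-75 on the
    LG 34UM67). Outside the window, FreeSync falls back to the user's
    VSync setting.
    """
    low, high = window
    if low <= fps <= high:
        return "variable refresh (smooth)"
    if vsync:
        # Above the window the display caps at its max refresh; below it,
        # classic VSync stutter and judder return.
        return f"capped at {high} FPS" if fps > high else "stutter/judder"
    return "tearing"

print(freesync_behavior(85, vsync=True))    # capped at 75 FPS
print(freesync_behavior(85, vsync=False))   # tearing
print(freesync_behavior(40, vsync=True))    # stutter/judder
print(freesync_behavior(60, vsync=True))    # variable refresh (smooth)
```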
G-Sync treats this “below the window” scenario very differently. Rather than reverting to VSync on or off, the module in the G-Sync display is responsible for auto-refreshing the screen when the frame rate dips below the minimum refresh of the panel, which would otherwise be affected by flicker. So, on a 30-144 Hz G-Sync monitor, we have measured that when the frame rate drops to 29 FPS, the display is actually refreshing at 58 Hz, each frame being “drawn” one extra time to avoid pixel flicker while still maintaining tear-free and stutter-free animation. If the frame rate dips to 25 FPS, the screen draws at 50 Hz. If the frame rate drops to something more extreme, like 14 FPS, we actually see the module quadruple-drawing each frame, taking the refresh rate back up to 56 Hz. It's a clever trick that preserves the VRR goals and prevents a degradation of the gaming experience. But this method requires a local frame buffer and logic on the display controller to work; hence the current implementation in a G-Sync module.
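The frame-multiplication behavior we measured can be modeled with a simple rule: redraw each frame enough times to keep the effective refresh above a flicker floor. The ~50 Hz floor in this sketch is our inference from the measurements above, not a figure NVIDIA has published:

```python
def gsync_refresh(fps, floor=50.0, panel_max=144.0):
    """Model of G-Sync's below-window frame multiplication.

    Each frame is redrawn enough times to keep the panel refresh at or
    above an assumed flicker floor (~50 Hz), without exceeding the
    panel's maximum refresh rate.
    """
    multiplier = 1
    while fps * multiplier < floor and fps * (multiplier + 1) <= panel_max:
        multiplier += 1
    return fps * multiplier

print(gsync_refresh(29))  # 58 -> each frame drawn twice
print(gsync_refresh(25))  # 50 -> doubled
print(gsync_refresh(14))  # 56 -> quadrupled
```

The rule reproduces all three measured points (29 FPS to 58 Hz, 25 FPS to 50 Hz, 14 FPS to 56 Hz), which is consistent with the module doubling, tripling, or quadrupling frames as needed.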
As you can see, the topic is complicated. So Allyn and I (and an aging analog oscilloscope) decided to take it upon ourselves to try to understand and explain the implementation differences with the help of some science. The video below is where the heart of this story is focused, though I have some visual aids embedded after it.
Still not clear on what this means for frame rates and refresh rates on current FreeSync and G-Sync monitors? Maybe this will help.
Subject: Graphics Cards | March 5, 2015 - 04:46 PM | Ryan Shrout
Tagged: GDC, gdc 15, amd, radeon, R9, 390x, VR, Oculus
Don't get too excited about this news, but AMD tells me that its next flagship Radeon R9 graphics card is up and running at GDC, powering an Oculus-based Epic "Showdown" demo.
Inside the box...
During my meeting with AMD today I was told that inside that little PC sits the "upcoming flagship Radeon R9 graphics card" but, of course, no other information was given. The following is an estimated transcript of the event:
Ryan: Can I see it?
Ryan: I can't even take the side panel off it?
Ryan: How can I know you're telling the truth then? Can I open up the driver or anything?
Ryan: GPU-Z? Anything?
Well, I tried.
Is this the rumored R9 390X with the integrated water cooler? Is it something else completely? AMD wouldn't even let me behind the system to look for a radiator so I'm afraid that is where my speculation will end.
Hooked up to the system was a Crescent Bay Oculus headset running the well-received Epic "Showdown" demo. The experience was smooth though of course there were no indications of frame rate, etc. while it was going on. After our discussion with AMD earlier in the week about its LiquidVR SDK, AMD is clearly taking the VR transition seriously. NVIDIA's GPUs might be dominating the show-floor demos but AMD wanted to be sure it wasn't left out of the discussion.
Can I just get this Fiji card already??
Subject: Graphics Cards | February 21, 2015 - 12:18 PM | Ryan Shrout
Tagged: radeon, nvidia, marketshare, market share, geforce, amd
One of the perennial firms that measures GPU market share, Jon Peddie Research, has come out with a report on Q4 2014 this weekend, and the results are eye-opening. According to the data, NVIDIA and AMD each took dramatic swings from Q4 2013 to Q4 2014.
(Table: discrete AIB market share for Q4 2014, Q3 2014, and Q4 2013, with year-to-year change. Data source: Jon Peddie Research.)
Here is the JPR commentary to start us out:
JPR's AIB Report tracks computer add-in graphics boards, which carry discrete graphics chips. AIBs are used in desktop PCs, workstations, servers, and other devices such as scientific instruments. They are sold directly to customers as aftermarket products, or are factory installed. In all cases, AIBs represent the higher end of the graphics industry, using discrete chips and private high-speed memory, as compared to the integrated GPUs in CPUs that share slower system memory.
The news was encouraging and seasonally understandable: quarter-to-quarter, the market decreased 0.68% (compared to the desktop PC market, which decreased 3.53%).
On a year-to-year basis, we found that total AIB shipments during the quarter fell 17.52%, which is more than desktop PCs, which fell 0.72%.
However, in spite of the overall decline, somewhat due to tablets and embedded graphics, the PC gaming momentum continues to build and is the bright spot in the AIB market.
NVIDIA's Maxwell GPU
The overall PC desktop market increased quarter-to-quarter, including double attach (the adding of a second or third AIB to a system with integrated processor graphics) and, to a lesser extent, dual AIBs in performance desktop machines using either AMD's CrossFire or NVIDIA's SLI technology.
The attach rate of AIBs to desktop PCs has declined from a high of 63% in Q1 2008 to 36% this quarter.
The year-to-year change that JPR is reporting is substantial, showing a swing of more than 20 points in market share in favor of NVIDIA over AMD. According to this data, AMD's market share has dropped from 35% at the end of 2013 to just 24% at the end of 2014. Meanwhile, NVIDIA continues to truck forward, going from 64.9% at the end of 2013 to 76% at the end of 2014.
The Radeon R9 285 release didn't have the impact AMD had hoped
Clearly the release of NVIDIA's Maxwell GPUs (the GeForce GTX 750 Ti, GTX 970, and GTX 980) has impacted the market even more than we initially expected. In recent weeks the GTX 970 has been getting a lot of negative press over its memory issue, and I will be curious to see what effect that has on sales in the near future. But the 12-month swing you see in the table above is the likely cause of the sudden departures of John Byrne, Collette LaForce, and Raj Naik.
AMD has good products, even better pricing, and a team of PR and marketing folks that are talented and aggressive. So how can the company recover from this? Products, people. New products. Will the rumors circling around the Radeon R9 390X develop into such a product?
Hopefully 2015 will provide it.
Subject: General Tech | February 12, 2015 - 01:46 PM | Ken Addison
Tagged: podcast, video, gtx 960, plextor, m6e black edition, M6e, r9 390, amd, radeon, nvidia, Silverstone, tegra, tx1, Tegra X1, corsair, H100i GTX, H80i GT
PC Perspective Podcast #336 - 02/12/2015
Join us this week as we discuss GTX 960 Overclocking, the Plextor M6e Black Edition, AMD R9 3xx Rumors and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano
Program length: 1:11:53
Week in Review:
News item of interest:
0:46:10 NVIDIA Event on March 3rd. Why?
Hardware/Software Picks of the Week:
Subject: Graphics Cards | February 7, 2015 - 06:03 PM | Scott Michaud
Tagged: amd, R9, r9 300, r9 390x, radeon
According to WCCFTech, AMD commented on Facebook that they are “putting the finishing touches on the 300 series to make sure they live up to expectation”. I tried to look through AMD's “Posts to Page” for February 3rd and did not see it listed, so a grain of salt is necessary (either with WCCFTech or with my lack of Facebook skills).
Image Credit: WCCFTech
The current rumors claim that Fiji XT will have 4096 graphics cores fed by a high-bandwidth, stacked memory architecture, supposedly rated at 640 GB/s (versus 224 GB/s on the GeForce GTX 980). When you're dealing with data sets at the scale that GPUs do, bandwidth is a precious resource. That said, GPUs also have caches and other methods to reduce this dependency, but let's just say that if you offer a graphics vendor a near-threefold speed-up in memory bandwidth for free, you will have a friend, and possibly one for life. Need a couch moved? No problem!
The R9 300 series is expected to launch next quarter, which could be as little as a month away.
Subject: General Tech, Systems | January 30, 2015 - 03:03 AM | Tim Verry
Tagged: visionx, SFF, radeon, m270x, haswell, asrock, amd
ASRock has unleashed an update to its small form factor VisionX series. The new VisionX 471D adds a faster Haswell processor and dedicated Radeon mobile graphics to the mini PC.
The 7.9” x 7.9” x 2.8” PC chassis comes in black or silver with rounded corners. External I/O is quite expansive, with a DVD optical drive, two audio jacks, one USB 3.0 port, one MHL-compatible port (which carries both data and video), and an SD card reader on the front. Further, the back of the PC holds the following ports:
- 5 x Analog audio jacks
- 1 x Optical audio out
- 1 x DVI
- 1 x HDMI
- 1 x Gigabit Ethernet jack
- 802.11ac (2 antennas)
- 5 x USB 3.0
- 1 x USB 2.0
- 1 x eSATA
ASRock has gone with the Intel Core i7-4712MQ processor. This is a 37W Haswell quad core (with eight threads) clocked at up to 3.3 GHz. Graphics are handled by the AMD Radeon R9 M270X, a mobile “Venus” GCN-based GPU with 1GB of memory. The 28nm GPU, with 640 cores, 40 TMUs, and 16 ROPs, is clocked at 725 MHz base and up to 775 MHz boost. The PC further supports two SO-DIMMs, two 2.5” drives, one mSATA connector, and the above-mentioned DVD drive (a DL-8A4SH-01 comes pre-installed).
The VisionX 471D is a “barebones” system where you will have to provide your own OS but does come with bundled storage and memory. Specifically, for $999, the SFF computer comes with 8GB of DDR3 memory, a 2TB mechanical hard drive, and a 256GB mSATA SSD (the ASint SSDMSK256G-M1 using a JMF667 controller and 64GB 20nm IMFT NAND). This leaves room for one additional 2.5” drive for expansion. Although it comes without an operating system, it does ship with a Windows Media Center compatible remote.
This latest addition to the VisionX series succeeds the 420D and features a faster processor. At the time of this writing, the PC is not available for purchase, but it is in the hands of reviewers (such as this review from AnandTech) and will be coming soon to retailers for $999 USD.
The price is on the steep side, especially compared to some other recent tiny PCs, but you are getting a top-end mobile Haswell chip and good I/O in a small system with enough hardware to possibly be "enough" PC for many people (or at least a second PC or an HTPC in the living room).