It's more than just a branding issue
As part of my look at the first wave of AMD FreeSync monitors hitting the market, I wrote an analysis of how the competing FreeSync and G-Sync technologies differ from one another. It was a complex topic that I tried to cover as succinctly as possible, given time constraints and the fact that the article was focused on FreeSync specifically. I'm going to include a portion of that discussion here as a recap:
First, we need to look inside the VRR window, the zone in which the monitor and AMD claim that variable refresh should work without tears and without stutter. On the LG 34UM67, for example, that range is 48-75 Hz, so frame rates between 48 FPS and 75 FPS should be smooth. Next we want to look above the window, at frame rates above the 75 Hz maximum refresh rate. Finally, and maybe most importantly, we need to look below the window, at frame rates under the minimum rated variable refresh target; in this example, that is 48 FPS.
AMD FreeSync offers the gamer more flexibility than G-Sync around this VRR window. Both above and below the variable refresh range, AMD lets gamers choose a VSync enabled or disabled setting, and that setting is handled just as it is today whenever your game's frame rate falls outside the VRR window. So, for our 34UM67 example, if your game renders at 85 FPS you will either see tearing on your screen (with VSync disabled) or get a static frame rate of 75 FPS, matching the top refresh rate of the panel itself. If your game renders at 40 FPS, below the minimum of the VRR window, you will again see either tearing (with VSync off) or the potential for stutter and hitching (with VSync on).
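To make the inside/above/below behavior concrete, here is a minimal sketch of the decision logic as described, using the 34UM67's 48-75 Hz window. The function name, return labels, and the assumption that VSync-on judder runs the panel at its fixed top refresh are illustrative choices for this example, not AMD's actual driver logic.

```python
def freesync_behavior(fps, vrr_min=48, vrr_max=75, vsync=True):
    """Model of how a FreeSync panel handles a given game frame rate,
    using the LG 34UM67's 48-75 Hz window. Illustrative only -- not
    AMD's actual driver logic. Returns (mode, effective refresh)."""
    if vrr_min <= fps <= vrr_max:
        # Inside the window: the refresh rate tracks the frame rate.
        return ("variable refresh", fps)
    if fps > vrr_max:
        # Above the window: cap at max refresh (VSync on) or tear (off).
        return ("vsync cap", vrr_max) if vsync else ("tearing", fps)
    # Below the window: classic VSync judder (panel at fixed refresh
    # with repeated frames) when on, tearing when off.
    return ("judder", vrr_max) if vsync else ("tearing", fps)

print(freesync_behavior(60))               # ('variable refresh', 60)
print(freesync_behavior(85))               # ('vsync cap', 75)
print(freesync_behavior(40, vsync=False))  # ('tearing', 40)
```

The key point the sketch captures is that outside the window, FreeSync simply falls back to the VSync behavior you already know, with all its familiar trade-offs.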
But what happens with this FreeSync monitor, and a theoretical G-Sync monitor, below the window? AMD's implementation gives you the option of disabling or enabling VSync. For the 34UM67, as soon as your game's frame rate drops under 48 FPS you will either see tearing on your screen or begin to see hints of stutter and judder as the typical (and previously mentioned) VSync concerns rear their heads again. At these lower frame rates (below the window) the artifacts actually impact your gaming experience much more dramatically than at higher frame rates (above the window).
G-Sync treats this “below the window” scenario very differently. Rather than reverting to VSync on or off, the module in the G-Sync display auto-refreshes the screen when the frame rate dips below the minimum refresh of the panel, which would otherwise suffer from flicker. On a 30-144 Hz G-Sync monitor, we have measured that when the frame rate drops to 29 FPS, the display is actually refreshing at 58 Hz: each frame is “drawn” one extra time to avoid pixel flicker while still maintaining tear-free and stutter-free animation. If the frame rate dips to 25 FPS, the screen draws at 50 Hz. If the frame rate drops to something more extreme like 14 FPS, we actually see the module drawing each frame four times, taking the refresh rate back up to 56 Hz. It's a clever trick that preserves the VRR goals and prevents a degradation of the gaming experience. But this method requires a local frame buffer and logic on the display controller to work; hence the current implementation in a G-Sync module.
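Those measurements are consistent with a simple rule: redraw each frame enough times to keep the effective refresh rate above some floor. The sketch below assumes a floor of roughly 50 Hz, inferred from our oscilloscope numbers; NVIDIA has not published the actual algorithm, so treat `target_min` (and the whole function) as a guess that happens to reproduce what we measured.

```python
import math

def gsync_refresh(fps, target_min=50.0, panel_max=144.0):
    """Estimate the effective refresh rate a G-Sync module might choose
    when the game frame rate drops below the panel's minimum.

    target_min (~50 Hz) is an ASSUMED threshold inferred from the
    oscilloscope measurements quoted above; NVIDIA has not published
    the real algorithm. Returns (redraw multiplier, refresh in Hz).
    """
    multiplier = math.ceil(target_min / fps)  # redraw each frame N times
    refresh = fps * multiplier
    assert refresh <= panel_max, "multiplied refresh must fit the panel"
    return multiplier, refresh

# Reproduce the article's measurements:
for fps in (29, 25, 14):
    n, hz = gsync_refresh(fps)
    print(f"{fps} FPS -> drawn x{n} -> {hz} Hz")  # x2->58, x2->50, x4->56
```

Whatever the exact threshold NVIDIA uses, the module only needs a buffered copy of the last frame and a counter, which is exactly the hardware the G-Sync module provides.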
As you can see, the topic is complicated. So Allyn and I (and an aging analog oscilloscope) decided to take it upon ourselves to try and understand and teach the implementation differences with the help of some science. The video below is where the heart of this story is focused, though I have some visual aids embedded after it.
Still not clear on what this means for frame rates and refresh rates on current FreeSync and G-Sync monitors? Maybe this will help.
Subject: Graphics Cards | March 5, 2015 - 04:46 PM | Ryan Shrout
Tagged: GDC, gdc 15, amd, radeon, R9, 390x, VR, Oculus
Don't get too excited about this news, but AMD tells me that its next flagship Radeon R9 graphics card is up and running at GDC, powering an Oculus-based Epic "Showdown" demo.
Inside the box...
During my meeting with AMD today I was told that inside that little PC sits the "upcoming flagship Radeon R9 graphics card" but, of course, no other information was given. The following is an estimated transcript of the event:
Ryan: Can I see it?
Ryan: I can't even take the side panel off it?
Ryan: How can I know you're telling the truth then? Can I open up the driver or anything?
Ryan: GPU-Z? Anything?
Well, I tried.
Is this the rumored R9 390X with the integrated water cooler? Is it something else completely? AMD wouldn't even let me behind the system to look for a radiator so I'm afraid that is where my speculation will end.
Hooked up to the system was a Crescent Bay Oculus headset running the well-received Epic "Showdown" demo. The experience was smooth though of course there were no indications of frame rate, etc. while it was going on. After our discussion with AMD earlier in the week about its LiquidVR SDK, AMD is clearly taking the VR transition seriously. NVIDIA's GPUs might be dominating the show-floor demos but AMD wanted to be sure it wasn't left out of the discussion.
Can I just get this Fiji card already??
Subject: Graphics Cards | February 21, 2015 - 12:18 PM | Ryan Shrout
Tagged: radeon, nvidia, marketshare, market share, geforce, amd
One of the perennial firms that measures GPU market share, Jon Peddie Research, has come out with a report on Q4 of 2014 this weekend and the results are eye opening. According to the data, NVIDIA and AMD each took dramatic swings from Q4 of 2013 to Q4 of 2014.
[Table: discrete AIB market share for Q4 2014, Q3 2014, and Q4 2013, with year-to-year change. Data source: Jon Peddie Research]
Here is the JPR commentary to start us out:
JPR's AIB Report tracks computer add-in graphics boards, which carry discrete graphics chips. AIBs are used in desktop PCs, workstations, servers, and other devices such as scientific instruments. They are sold directly to customers as aftermarket products, or are factory installed. In all cases, AIBs represent the higher end of the graphics industry using discrete chips and private high-speed memory, as compared to the integrated GPUs in CPUs that share slower system memory.
The news was encouraging and seasonally understandable: quarter-to-quarter, the market decreased 0.68% (compared to the desktop PC market, which decreased 3.53%).
On a year-to-year basis, we found that total AIB shipments during the quarter fell 17.52%, a steeper decline than desktop PCs, which fell 0.72%.
However, in spite of the overall decline, somewhat due to tablets and embedded graphics, the PC gaming momentum continues to build and is the bright spot in the AIB market.
NVIDIA's Maxwell GPU
The overall PC desktop market increased quarter-to-quarter including double attach (the adding of a second or third AIB to a system with integrated processor graphics) and, to a lesser extent, dual AIBs in performance desktop machines using either AMD's Crossfire or Nvidia's SLI technology.
The attach rate of AIBs to desktop PCs has declined from a high of 63% in Q1 2008 to 36% this quarter.
The year-to-year change that JPR is reporting is substantial, showing a swing of more than 20 points of market share in favor of NVIDIA over AMD. According to this data, AMD's market share has now dropped from 35% at the end of 2013 to just 24% at the end of 2014. Meanwhile, NVIDIA continues to truck forward, going from 64.9% at the end of 2013 to 76% at the end of 2014.
The Radeon R9 285 release didn't have the impact AMD had hoped
Clearly the release of NVIDIA's Maxwell GPUs (the GeForce GTX 750 Ti, GTX 970, and GTX 980) has impacted the market even more than we initially expected. In recent weeks the GTX 970 has been getting a lot of negative press over its memory issue, and I will be curious to see what effect that has on sales in the near future. But the 12-month swing you see in the table above is the likely cause of the sudden departures of John Byrne, Colette LaForce, and Raj Naik.
AMD has good products, even better pricing, and a team of PR and marketing folks that are talented and aggressive. So how can the company recover from this? Products, people: new products. Will the rumors circling the Radeon R9 390X develop into such a product?
Hopefully 2015 will provide it.
Subject: General Tech | February 12, 2015 - 01:46 PM | Ken Addison
Tagged: podcast, video, gtx 960, plextor, m6e black edition, M6e, r9 390, amd, radeon, nvidia, Silverstone, tegra, tx1, Tegra X1, corsair, H100i GTX, H80i GT
PC Perspective Podcast #336 - 02/12/2015
Join us this week as we discuss GTX 960 Overclocking, Plextor M6e Black Edition, AMD R9 3xx Rumors and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano
Program length: 1:11:53
Week in Review:
News item of interest:
0:46:10 NVIDIA Event on March 3rd. Why?
Hardware/Software Picks of the Week:
Subject: Graphics Cards | February 7, 2015 - 06:03 PM | Scott Michaud
Tagged: amd, R9, r9 300, r9 390x, radeon
According to WCCFTech, AMD commented on Facebook that they are “putting the finishing touches on the 300 series to make sure they live up to expectation”. I tried to look through AMD's “Posts to Page” for February 3rd and did not see it listed, so a grain of salt is necessary (either with WCCF or with my lack of Facebook skills).
Image Credit: WCCFTech
The current rumors claim that Fiji XT will have 4096 graphics cores fed by a high-bandwidth, stacked memory architecture, supposedly rated at 640 GB/s (versus the 224 GB/s of the GeForce GTX 980). When you're dealing with data sets at the scale that GPUs do, bandwidth is a precious resource. That said, GPUs also have caches and other methods to reduce this dependency, but let's just say that if you offer a graphics vendor a nearly threefold jump in memory bandwidth for free, you will have a friend, and possibly one for life. Need a couch moved? No problem!
The R9 300 series is expected to launch next quarter, which could be as soon as a month from now.
Subject: General Tech, Systems | January 30, 2015 - 03:03 AM | Tim Verry
Tagged: visionx, SFF, radeon, m270x, haswell, asrock, amd
ASRock has unleashed an update to its small form factor VisionX series. The new VisionX 471D adds a faster Haswell processor and dedicated Radeon mobile graphics to the mini PC.
The 7.9” x 7.9” x 2.8” PC chassis comes in black or silver with rounded corners. External I/O is quite expansive, with a DVD optical drive, two audio jacks, one USB 3.0 port, one MHL-compatible port (carrying both data and video), and an SD card reader on the front. Further, the back of the PC holds the following ports:
- 5 x Analog audio jacks
- 1 x Optical audio out
- 1 x DVI
- 1 x HDMI
- 1 x Gigabit Ethernet jack
- 802.11ac (2 antennas)
- 5 x USB 3.0
- 1 x USB 2.0
- 1 x eSATA
ASRock has gone with the Intel Core i7-4712MQ processor, a 37W Haswell quad core (with eight threads) clocked at up to 3.3 GHz. Graphics are handled by the AMD Radeon R9 M270X, a mobile “Venus” GCN-based GPU with 1GB of memory. The 28nm GPU, with 640 cores, 40 TMUs, and 16 ROPs, is clocked at 725 MHz base and up to 775 MHz boost. The PC further supports two SO-DIMMs, two 2.5” drives, one mSATA connector, and the above-mentioned DVD drive (a DL-8A4SH-01 comes pre-installed).
The VisionX 471D is a “barebones” system where you will have to provide your own OS but does come with bundled storage and memory. Specifically, for $999, the SFF computer comes with 8GB of DDR3 memory, a 2TB mechanical hard drive, and a 256GB mSATA SSD (the ASint SSDMSK256G-M1 using a JMF667 controller and 64GB 20nm IMFT NAND). This leaves room for one additional 2.5” drive for expansion. Although it comes without an operating system, it does ship with a Windows Media Center compatible remote.
This latest addition to the VisionX series succeeds the 420D and features a faster processor. At the time of this writing, the PC is not available for purchase, but it is in the hands of reviewers (such as this review from AnandTech) and will be coming soon to retailers for $999 USD.
The price is on the steep side especially compared to some other recent tiny PCs, but you are getting a top end mobile Haswell chip and good I/O for a small system with enough hardware to possibly be "enough" PC for many people (or at least a second PC or a HTPC in the living room).
Subject: Graphics Cards | January 13, 2015 - 12:22 PM | Ryan Shrout
Tagged: rumor, radeon, r9 380x, 380x
Spotted over at TechReport.com this morning and sourced from a post at 3dcenter.org, it appears that some additional information about the future Radeon R9 380X is starting to leak out through AMD employee LinkedIn pages.
Ilana Shternshain is an ASIC physical design engineer at AMD with more than 18 years of experience, 7-8 of them at AMD. Under the background section is the line "Backend engineer and team leader at Intel and AMD, responsible for taping out state of the art products like Intel Pentium Processor with MMX technology and AMD R9 290X and 380X GPUs." A bit further down is an experience listing of the Playstation 4 APU as well as "AMD R9 380X GPUs (largest in “King of the hill” line of products)."
Interesting - though not entirely enlightening. More interesting were the details found on Linglan Zhang's LinkedIn page (since removed):
Developed the world’s first 300W 2.5D discrete GPU SOC using stacked die High Bandwidth Memory and silicon interposer.
Now we have something to work with! A 300 watt TDP would make the R9 380X more power hungry than the current R9 290X Hawaii GPU. High bandwidth memory likely implies memory located on the substrate of the GPU itself, similar to what exists on the Xbox One APU, though configurations could differ in considerable ways. A bit of research on the silicon interposer reveals it as an implementation method for 2.5D chips:
There are two classes of true 3D chips which are being developed today. The first is known as 2½D where a so-called silicon interposer is created. The interposer does not contain any active transistors, only interconnect (and perhaps decoupling capacitors), thus avoiding the issue of threshold shift mentioned above. The chips are attached to the interposer by flipping them so that the active chips do not require any TSVs to be created. True 3D chips have TSVs going through active chips and, in the future, have potential to be stacked several die high (first for low-power memories where the heat and power distribution issues are less critical).
An interposer would allow the GPU and stacked die memory to be built on different process technologies, for example, but could also make the chips more fragile during final assembly. Obviously there are a lot more questions than answers based on these rumors sourced from LinkedIn, but it's interesting to attempt to gauge where AMD is headed in its continued quest to take back market share from NVIDIA.
Subject: Graphics Cards, Displays, Shows and Expos | January 7, 2015 - 03:11 AM | Ryan Shrout
Tagged: video, radeon, monitor, g-sync, freesync, ces 2015, CES, amd
It finally happened, later than I had expected: we got hands-on time with nearly-ready FreeSync monitors! That's right, AMD's alternative to G-Sync will bring variable refresh gaming technology to Radeon gamers later this quarter, and AMD had the monitors on hand to prove it. On display were an LG 34UM67 running at 2560x1080 on IPS technology, a Samsung UE590 with a 4K resolution and AHVA panel, and a BenQ XL2730Z 2560x1440 TN screen.
The three monitors sampled at the AMD booth showcase the wide array of units that will be available this year using FreeSync, possibly even this quarter. The LG 34UM67 uses the 21:9 aspect ratio that is growing in popularity, along with solid IPS panel technology and a 60 Hz top frequency. However, there is a new specification to be concerned with on FreeSync as well: minimum frequency. This is the refresh rate the monitor needs to maintain to avoid artifacting and flickering that would be visible to the end user. For the LG monitor it was 40 Hz.
What happens below that limit and above it differs from what NVIDIA has decided to do. For FreeSync (and the Adaptive Sync standard as a whole), when a game renders at a frame rate above or below this VRR window, the V-Sync setting is enforced. That means on a 60 Hz panel, if your game runs at 70 FPS, then you will have the option to enable or disable V-Sync; you can either force a 60 FPS top limit or allow 70 FPS with screen tearing. If your game runs under the 40 Hz bottom limit, say at 30 FPS, you get the same option: V-Sync on or V-Sync off. With it off, you get tearing but optimal input/display latency; with it on, you reintroduce frame judder as the frame rate crosses between V-Sync steps.
There are potential pitfalls to this solution, though; what happens as frame rates cross the top or bottom of the window can cause issues depending on the specific implementation. We'll be researching this very soon.
Notice this screen shows FreeSync Enabled and V-Sync Disabled, and we see a tear.
FreeSync monitors have the benefit of using industry standard scalers and that means they won't be limited to a single DisplayPort input. Expect to see a range of inputs including HDMI and DVI though the VRR technology will only work on DP.
We have much more to learn and much more to experience with FreeSync but we are eager to get one in the office for testing. I know, I know, we say that quite often it seems.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: General Tech, Graphics Cards | December 28, 2014 - 09:47 PM | Scott Michaud
Tagged: radeon, nvidia, gtx, geforce, amd
According to an anonymous source of WCCFTech, AMD is preparing a 20nm-based graphics architecture that is expected to launch in April or May. Originally, the site predicted that these graphics devices, which it calls the R9 300 series, would be available in February or March. The reason for this “delay” is said to be massive demand for 20nm production.
The source also claims that NVIDIA will skip 20nm entirely and instead opt for 16nm when that becomes available (which is said to be mid or late 2016). The expectation is that NVIDIA will answer AMD's new graphics devices with a higher-end Maxwell device that is still at 28nm. Earlier rumors, based on a leaked SiSoftware entry, claim 3072 CUDA cores clocked between 1.1 GHz and 1.39 GHz. If true, this would give it between 6.75 and 8.54 TeraFLOPs of performance, the higher end of which is right around the advertised performance of a GeForce Titan Z (but in a single compute device, without the work distribution that SLI was created to automate).
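Those TeraFLOP figures follow from the standard single-precision estimate: cores × 2 FLOPs per clock (one fused multiply-add per core) × clock speed. A quick check of the rumored numbers:

```python
def sp_tflops(cuda_cores, clock_ghz):
    # Each CUDA core retires one FMA (2 FLOPs) per clock; divide by
    # 1000 to convert GFLOPs to TFLOPs.
    return cuda_cores * 2 * clock_ghz / 1000.0

print(round(sp_tflops(3072, 1.10), 2))  # 6.76
print(round(sp_tflops(3072, 1.39), 2))  # 8.54
```

These match the 6.75-8.54 TFLOP range quoted above (the lower figure appears to truncate rather than round). Note this is peak theoretical throughput, not sustained game performance.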
Will this strategy work in NVIDIA's favor? I don't know. 28nm is a fairly stable process at this point, which will probably allow them to build chips that are bigger and more aggressively clocked. On the other hand, they pretty much need to rely upon bigger, more aggressively clocked chips to be competitive with AMD's more advanced design. Previous rumors also hint that AMD is looking at water-cooling for its reference card, which might place yet another handicap against NVIDIA, although cooling is not an area NVIDIA struggles in.
Subject: General Tech | November 12, 2014 - 05:10 PM | Jeremy Hellstrom
Tagged: linux, amd, radeon, CS:GO, tf2
With the new driver from AMD and a long list of cards to test, from an R9 290 all the way back to an HD 4650, Phoronix has put together a rather definitive list of the current performance you can expect from CS:GO and TF2. CS:GO was tested at 2560x1600 and showed many performance changes from the previous driver, including some great news for 290 owners. TF2 was tested at the same resolution, and many of the GPUs were capable of providing 60 FPS or higher, again with the 290 taking the lead. Phoronix also tested the efficiency of these cards, detailing frames per second per watt; this may not be pertinent to many users but does offer an interesting look at the efficiency of the GPUs. If you are gaming on a Radeon on Linux, now is a good time to upgrade your drivers and associated programs.
"The latest massive set of Linux test data we have to share with Linux gamers and enthusiasts is a look at Counter-Strike: Global Offensive and Team Fortress 2 when using the very newest open-source Radeon graphics driver code. The very latest open-source Radeon driver code tested with these popular Valve Linux games were the Linux 3.18 Git kernel, Mesa 10.4-devel, LLVM 3.6 SVN, and xf86-video-ati 7.5.99."
Here is some more Tech News from around the web:
- A Spaceship For Christmas – Elite: Dangerous Dated @ Rock, Paper, SHOTGUN
- Wot I Think – Call Of Duty: Advanced Warfare Singleplayer @ Rock, Paper, SHOTGUN
- Ryse: Son of Rome PC: It’s Boring but Here’s Why You Should Still Buy It @ eTeknix
- Free Beards And Horse Armour: The Witcher 3 DLC Plans @ Rock, Paper, SHOTGUN
- Borderlands: The Pre-Sequel Review @ OCC
- Assassin's Creed: Unity widely found to be slow and buggy @ HEXUS
- Avalanche confirms Just Cause 3 for PC and next-gen consoles @ HEXUS
- The Witcher 2 And Mount & Blade Free In GOG Sale @ Rock, Paper, SHOTGUN
- Skaven Time: Warhammer’s XCOMish Mordheim Out Soon @ Rock, Paper, SHOTGUN
- Microsoft to bring back beloved 1990s super-hit BATTLETOADS!? @ The Register