Stop by for the BenQ XL2730Z FreeSync display, stay for the conversations

Subject: Displays | May 21, 2015 - 01:44 PM |
Tagged: XL2730Z, freesync, benq, amd

Ryan wasn't the only one to test BenQ's XL2730Z 27-in 2560x1440 144 Hz FreeSync monitor; The Tech Report also had a chance to try one, as well as to talk with NVIDIA's Tom Petersen about the company's competing technology.  They also sat down with AMD's David Glen, one of the engineers behind FreeSync, to discuss the technology in general.  The benchmarks and overall impressions of the display's capabilities and of FreeSync make up a major portion of the review, but the conversations with the two company representatives make for even more interesting reading.


"AMD's FreeSync is here, personified in BenQ's XL2730Z monitor. We've gone deep into the display's performance and smoothness, with direct comparisons to G-Sync using 240-fps video. Here's what we found."

Here are some more Display articles from around the web:

Displays

How about that High Bandwidth Memory

Subject: Graphics Cards | May 19, 2015 - 03:51 PM |
Tagged: memory, high bandwidth memory, hbm, Fiji, amd

Ryan and the rest of the crew here at PC Perspective are excited about AMD's new memory architecture and the fact that the company will be first to market with it.  However, as any intelligent reader knows, a second opinion on the topic is worth finding.  Look no further than The Tech Report, who have also been briefed on AMD's new memory architecture.  Read on to see what they learned from Joe Macri, along with their thoughts on this successor to GDDR5 and on HBM2, which is already in the works.

stack-diagram.jpg

"HBM is the next generation of memory for high-bandwidth applications like graphics, and AMD has helped usher it to market. Read on to find out more about HBM and what we've learned about the memory subsystem in AMD's next high-end GPU, code-named Fiji."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Author:
Manufacturer: AMD

High Bandwidth Memory

UPDATE: I have embedded an excerpt from our PC Perspective Podcast discussing HBM technology that you might want to check out in addition to the story below.

The chances are good that if you have been reading PC Perspective or almost any other website that focuses on GPU technologies for the past year, you have read the acronym HBM. You might have even seen its full name: high bandwidth memory. HBM is a new technology that aims to turn the way a processor (GPU, CPU, APU, etc.) accesses memory upside down, almost literally. AMD has already publicly stated that its next generation flagship Radeon GPU will use HBM as part of its design, but it wasn’t until today that we could talk about what HBM actually offers to a high performance processor like Fiji. At its core, HBM drastically changes how the memory interface works, how much power it requires and what metrics we will use to compare competing memory architectures. AMD and its partners started working on HBM with the industry more than 7 years ago, and with the first retail product nearly ready to ship, it’s time to learn about HBM.

We got some time with AMD’s Joe Macri, Corporate Vice President and Product CTO, to talk about AMD’s move to HBM and how it will shift the direction of AMD products going forward.

The first step in understanding HBM is to understand why it’s needed in the first place. Current GPUs, including the AMD Radeon R9 290X and the NVIDIA GeForce GTX 980, utilize a memory technology known as GDDR5. This architecture has scaled well over the past several GPU generations, but we are starting to enter the world of diminishing returns. Balancing memory performance and power consumption is always a tough battle; just ask ARM about it. On the desktop we have much larger power envelopes to work within, but plot GDDR5’s power curve far enough into the future and it hits a wall. The result would be either graphics cards with drastically higher power consumption or stalled performance improvements across the graphics market – something we have not really seen in its history.

01-gddr5powergraph.jpg

While it’s clearly possible that current and maybe even next generation GPU designs could have continued to depend on GDDR5 as the memory interface, a move to a different solution is needed for the future; AMD is simply making the jump earlier than the rest of the industry.
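To put rough numbers on why a wide, slow interface is attractive, here is a quick back-of-the-envelope sketch. The per-pin rates and stack count below are our own illustrative assumptions based on publicly discussed GDDR5 and first-generation HBM figures, not AMD-supplied Fiji specifications.

```python
# Peak bandwidth back-of-the-envelope: narrow-and-fast GDDR5 versus
# wide-and-slow HBM. All figures are illustrative assumptions.

def peak_bandwidth_gbs(bus_width_bits: int, per_pin_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin rate in Gbps."""
    return bus_width_bits / 8 * per_pin_gbps

# GDDR5 as configured on an R9 290X-class card: 512-bit bus, 5 Gbps per pin.
gddr5 = peak_bandwidth_gbs(bus_width_bits=512, per_pin_gbps=5.0)

# First-generation HBM: 1024 bits per stack at roughly 1 Gbps per pin,
# assuming four stacks on the package for a 4096-bit interface.
hbm = peak_bandwidth_gbs(bus_width_bits=4 * 1024, per_pin_gbps=1.0)

print(f"GDDR5, 512-bit @ 5 Gbps:  {gddr5:.0f} GB/s")  # ~320 GB/s
print(f"HBM, 4096-bit @ ~1 Gbps:  {hbm:.0f} GB/s")    # ~512 GB/s
```

The wide interface reaches a higher peak while each pin runs at a fraction of the clock, which is where the power savings AMD is chasing come from.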

Continue reading our look at high bandwidth memory (HBM) architecture!!

NVIDIA Under Attack Again for GameWorks in The Witcher 3: Wild Hunt

Subject: Graphics Cards | May 17, 2015 - 12:04 PM |
Tagged: The Witcher 3, nvidia, hairworks, gameworks, amd

I feel like every few months I get to write more stories focusing on the exact same subject. It's almost as if nothing in the enthusiast market is happening and thus the cycle continues, taking all of us with it on a wild ride of arguments and valuable debates. Late last week I started hearing from some of my Twitter followers that there were concerns surrounding the upcoming release of The Witcher 3: Wild Hunt. Then I found a link to this news post over at Overclock3d.net that put some of the information in perspective.

Essentially, The Witcher 3 uses parts of NVIDIA's GameWorks development tools and APIs, software written by NVIDIA to help game developers take advantage of new technologies and quickly and easily implement them into games. The problem, of course, is that GameWorks is written and developed by NVIDIA. That means optimizations for AMD Radeon hardware are difficult or impossible, depending on who you want to believe. Clearly it doesn't benefit NVIDIA financially to optimize its software for AMD GPUs, though many in the community would like NVIDIA to make a better effort - for the good of said community.

gwlogo.jpg

Specifically with regard to The Witcher 3, the game implements NVIDIA HairWorks technology to add realism to many of the creatures of the game world. (Actually, the game includes HairWorks, HBAO+, PhysX, Destruction and Clothing, but our current discussion focuses on HairWorks.) All of the marketing and video surrounding The Witcher 3 has been awesome, and the realistic animal fur simulation has definitely been a part of it. However, it appears that AMD Radeon GPU users are concerned that performance with HairWorks enabled will suffer.

An example of The Witcher 3: Wild Hunt with HairWorks

One of the game's developers has been quoted as saying:

Many of you have asked us if AMD Radeon GPUs would be able to run NVIDIA’s HairWorks technology – the answer is yes! However, unsatisfactory performance may be experienced as the code of this feature cannot be optimized for AMD products. Radeon users are encouraged to disable NVIDIA HairWorks if the performance is below expectations.

There are several interpretations of this statement floating around the web. The first, and most inflammatory, is that NVIDIA is preventing CD Projekt Red from optimizing the feature by withholding source code. Another is that CD Projekt is choosing not to optimize for AMD hardware due to time constraints. The last is that optimizing HairWorks for AMD hardware simply isn't possible because of hardware limitations.

I went to NVIDIA with these complaints about HairWorks and Brian Burke gave me this response:

We are not asking game developers do anything unethical.
 
GameWorks improves the visual quality of games running on GeForce for our customers.  It does not impair performance on competing hardware.
 
Demanding source code access to all our cool technology is an attempt to deflect their performance issues. Giving away your IP, your source code, is uncommon for anyone in the industry, including middleware providers and game developers. Most of the time we optimize games based on binary builds, not source code.
 
GameWorks licenses follow standard industry practice.  GameWorks source code is provided to developers that request it under license, but they can’t redistribute our source code to anyone who does not have a license. 
 
The bottom line is AMD’s tessellation performance is not very good and there is not a lot NVIDIA can/should do about it. Using DX11 tessellation has sound technical reasoning behind it, it helps to keep the GPU memory footprint small so multiple characters can use hair and fur at the same time.
 
I believe it is a resource issue. NVIDIA spent a lot of artist and engineering resources to help make Witcher 3 better. I would assume that AMD could have done the same thing because our agreements with developers don’t prevent them from working with other IHVs. (See also, Project Cars)
 
I think gamers want better hair, better fur, better lighting, better shadows and better effects in their games. GameWorks gives them that.  

Interesting comments for sure. The essential takeaway is that HairWorks depends heavily on tessellation performance, and we have known since the GTX 680 was released that NVIDIA's architecture performs better than AMD's GCN at tessellation - often by a significant amount. NVIDIA developed its middleware to utilize the strengths of its own GPU technology, not to negatively impact AMD, though it's clear that some disagree. Did NVIDIA know that would be the case when it was developing the software? Of course it did. Should it have done something to help AMD GPUs more gracefully fall back? Maybe.

Next, I asked Burke directly if claims that NVIDIA was preventing AMD or the game developer from optimizing HairWorks for other GPUs and platforms were true. I was told that both AMD and CD Projekt had the ability to tune the game, but in different ways. The developer could change the tessellation density based on the specific GPU detected (lower for a Radeon GPU with less tessellation capability, for example), but that would require dedicated engineering from either CD Projekt or AMD. AMD, without access to the source code, should be able to make changes in the driver at the binary level, similar to how most other driver optimizations are built. Burke states that in these instances NVIDIA often sends engineers to work with game developers, and that AMD "could have done the same had it chosen to."  And again, NVIDIA reiterated that in no way do its agreements with game developers prohibit optimization for AMD GPUs.
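To make the per-GPU tuning option concrete, here is a rough sketch of what scaling tessellation density by detected hardware could look like. The function, the GPU family labels and the factor values are hypothetical illustrations of the idea Burke describes; the real work would live in the game's renderer or in the driver, not in a standalone script.

```python
# Hypothetical sketch of per-GPU tessellation tuning. Family names and factor
# values are invented for illustration; actual tuning would happen in the
# game's renderer or the GPU driver.

# Maximum hair tessellation factor chosen per detected GPU family.
# A lower cap trades fur density for performance on tessellation-weak parts.
TESSELLATION_CAPS = {
    "nvidia_kepler_or_newer": 64,   # strong fixed-function tessellation throughput
    "amd_gcn": 16,                  # reduced cap assumed for GCN-based Radeons
    "unknown": 8,                   # conservative fallback for unrecognized GPUs
}

def hair_tessellation_factor(gpu_family: str, quality_slider: float) -> int:
    """Scale the user's quality setting (0.0 to 1.0) by the per-GPU cap."""
    cap = TESSELLATION_CAPS.get(gpu_family, TESSELLATION_CAPS["unknown"])
    return max(1, round(cap * quality_slider))

print(hair_tessellation_factor("amd_gcn", 0.75))                 # 12
print(hair_tessellation_factor("nvidia_kepler_or_newer", 0.75))  # 48
```

For what it's worth, AMD's Catalyst drivers already expose a global tessellation-level override in the Catalyst Control Center, which is the driver-side analogue of this kind of cap.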

It would also have been possible for AMD to push for the implementation of TressFX alongside HairWorks; a similar scenario played out in Grand Theft Auto V, where several vendor-specific technologies from both NVIDIA and AMD were included and customized through in-game settings.

NVIDIA has never been accused of being altruistic; it doesn't often create things and then openly share them with the rest of the hardware community. But it has to be understood that game developers know this as well - they are not oblivious. CD Projekt knew that HairWorks performance on AMD would be poor but decided to implement the technology in The Witcher 3 anyway. They were willing to accept performance penalties for some users to improve the experience of others. You can argue that is not the best choice, but at the very least The Witcher 3 will let you disable the HairWorks feature completely, removing it from the performance debate altogether.

In a perfect world for consumers, NVIDIA and AMD would walk hand in hand through the fields and develop hardware and software in tandem, making sure all users get the best possible experience with all games. But that style of cooperation only helps (from a business perspective) the organization attempting to gain market share, not the one with the lead. NVIDIA doesn't have to do it and chooses not to. If you don't want to support that style, vote with your wallet.

A similar controversy surrounded the recent release of Project Cars. AMD GPU performance was significantly lower than that of comparable NVIDIA GPUs, even though this game does not implement any GameWorks technologies. In that case, the game's developer directly blamed AMD's drivers, saying that a lack of outreach from AMD caused the issues. AMD has since recanted its stance that the performance delta was "deliberate" and says a pending driver update will address gamers' performance issues.

wicher3-1.jpg

All arguing aside, this game looks amazing. Can we all agree on that?

The only conclusion I can come to from all of this is that if you don't like what NVIDIA is doing, that's your right - and you aren't necessarily wrong. There will be plenty of readers who see the comments made by NVIDIA above and continue to believe that the company is being at best disingenuous and, at worst, straight up lying. As I mentioned above in my own comments, NVIDIA is still a for-profit company that is responsible to shareholders for profit and growth. And in today's world that sometimes means working against other companies rather than with them, resulting in impressive new technologies for its customers and pushback from competitors' customers. It's not fun, but that's how it works today.

Fans of AMD will point to G-Sync, GameWorks, CUDA, PhysX, FCAT and even SLI as indications of NVIDIA's negative impact on open PC gaming. I would argue that more users would look at that list and see improvements to PC gaming, progress that helps make gaming on a computer so much better than gaming on a console. The truth likely rests somewhere in the middle; there will always be individuals who immediately side with one company or the other. But it's the much larger group in the middle - the one that shows no corporate allegiance and instead just wants to have as much fun as possible with gaming - that will impact NVIDIA and AMD the most.

So, since I know it will happen anyway, use the comments page below to vent your opinion. But, for the benefit of us all, try to keep it civil!

Oculus Rift "Full Rift Experience" Specifications Released

Subject: Graphics Cards, Processors, Displays, Systems | May 15, 2015 - 03:02 PM |
Tagged: Oculus, oculus vr, nvidia, amd, geforce, radeon, Intel, core i5

Today, Oculus has published a list of what they believe should drive their VR headset. The Oculus Rift will obviously run on lesser hardware. Their minimum specifications, published last month and focused on the Development Kit 2, did not even list a specific CPU or GPU -- just a DVI-D or HDMI output. They then went on to say that you really should use a graphics card that can run your game at 1080p with at least 75 fps.

oculus-dk2-product.jpg

The current list is a little different:

  • NVIDIA GeForce GTX 970 / AMD Radeon R9 290 (or higher)
  • Intel Core i5-4590 (or higher)
  • 8GB RAM (or higher)
  • A compatible HDMI 1.3 output
  • 2x USB 3.0 ports
  • Windows 7 SP1 (or newer).

I am guessing that, unlike the previous list, Oculus has a clearer vision of a development target. They were a little unclear about whether this refers to the consumer version or the current needs of developers. In either case, it would likely serve as a guide for what they believe developers should target when the consumer version launches.

This post also coincides with the release of the Oculus PC SDK 0.6.0. This version pushes distortion rendering into the Oculus Server process rather than the application. It also allows multiple canvases to be sent to the SDK, which means developers can render text and other noticeable content at full resolution but scale back in places the user is less likely to notice. These canvases can also be updated at different frequencies, such as leaving the HUD untouched unless a value changes.
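The sketch below illustrates that idea - a full-resolution HUD layer redrawn only when its contents change, composited over a scene layer rendered every frame at a reduced scale. This is not the Oculus PC SDK's actual API (which is C-based); the class and field names are invented purely to show the concept.

```python
# Conceptual illustration of layered rendering with independent update rates.
# This is NOT the Oculus PC SDK API; the names here are invented to show the
# idea of full-resolution text composited over a scaled-down 3D scene.

class Layer:
    def __init__(self, name: str, resolution_scale: float):
        self.name = name
        self.resolution_scale = resolution_scale
        self.texture = None  # last rendered output, reused when not dirty

    def render(self, frame: int) -> None:
        self.texture = f"{self.name} @ {self.resolution_scale:.2f}x (frame {frame})"

scene = Layer("scene", resolution_scale=0.8)  # eye buffers can be scaled down
hud = Layer("hud", resolution_scale=1.0)      # text stays at full resolution

hud_dirty = True  # set whenever a HUD value actually changes
for frame in range(3):
    scene.render(frame)        # the 3D scene is re-rendered every frame
    if hud_dirty:
        hud.render(frame)      # the HUD is only redrawn when it changes
        hud_dirty = False
    # The compositor (just a print here) combines both layers for the headset.
    print([scene.texture, hud.texture])
```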

The Oculus PC SDK (0.6.0) is now available at the Oculus Developer Center.

Source: Oculus

Podcast #349 - Death of Media Center, i7 NUC, Fractal Define S and more!

Subject: General Tech | May 14, 2015 - 02:46 PM |
Tagged: podcast, video, supermicro, X99, Intel, amd, corsair, H100i GTX, H80i GT, fractal, define s, akracing, nvidia, shield, grid, epson, xeon e7 v3

PC Perspective Podcast #349 - 05/14/2015

Join us this week as we discuss the Death of Media Center, i7 NUC, Fractal Define S and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

 

Rumor: AMD Radeon R9 300-series Release Dates

Subject: Graphics Cards | May 14, 2015 - 07:00 AM |
Tagged: tonga, radeon, R9, pitcairn, Fiji, bonaire, amd

Benchlife.info, via WCCFTech, believes that AMD's Radeon R9 300-series GPUs will launch in late June. Specifically, the R9 380, the R7 370, and the R7 360 will arrive on the 18th of June. These are listed as OEM parts, as we have mentioned on the podcast, which Ryan speculates could mean that the flagship Fiji XT might go by a different name. Benchlife.info seems to think that it will be called the R9 390(X), though, and that it will be released on the 24th of June.

WCCFTech is a bit more timid, calling it simply “Fiji XT”.

amd-gaming-evolved.png

In relation to industry events, this has the OEM lineup launching on the last day of E3 and Fiji XT launching in the middle of the following week. This feels a little weird, especially because AMD's E3 event with PC Gamer is on the 16th. While it makes sense for AMD to announce the launch a few days before it happens, that doesn't make sense for OEM parts unless they were going to announce a line of pre-built PCs. The most likely candidate to launch gaming PCs is Valve, and they're one of the few companies that are absent from AMD's event.

And this is where I run out of ideas. Launching a line of OEM parts at E3 is weird unless it is meant to open the floodgates for OEMs to make their own announcements. Unless Valve is scheduled to make an announcement earlier in the day, or to make a surprise appearance at the event, that seems unlikely. Something seems up, though.

Source: WCCFTech

Newegg Jumping Gun on New AMD Game Bundles: GTAV and DiRT Rally?

Subject: Editorial | May 13, 2015 - 02:07 PM |
Tagged: R9, newegg, GTAV, DiRT Rally, bundle, amd, 290x, 290, 285

AMD has been pretty quiet on the bundle scene, but I think we may have had their future plans revealed to us a bit early.

newegg_2games.png

In case the offer gets pulled, here is a screen grab from Newegg on May 13, 2015.

Newegg is offering a number of AMD R9-based cards with one or two free software titles.  The top-end R9 290 and 290X products get both Grand Theft Auto V and DiRT Rally.  The value of these two titles is around $95 US.  The lower-end cards look to receive only DiRT Rally, which is a $35 US value.

This is a pretty nice bundle considering that GTAV is still very new and DiRT Rally is an early access title that will have a bunch of free content added to it over the next 9 to 10 months.

So far no other retailer that I am aware of is offering this particular bundle.  My assumption here is that Newegg jumped the gun before AMD was able to announce it.

Click here if you want to see these deals on the R9 290X GPUs.

UPDATE: Initial information from AMD is that it "is not an AMD bundle," so we aren't quite sure what to make of this. It could be a Newegg-specific bundle, but I haven't gotten any feedback from the reseller on the issue yet.

UPDATE 2: Well, we found this certificate for the DiRT Rally portion of the bundle on Newegg.com. Clearly this is an official AMD marketing promotion but we haven't yet found anything official on the Grand Theft Auto V side of things.

amdbundle1.jpg

UPDATE 3: And now we have this Tweet from Newegg:

amdbundle2.jpg

UPDATE 4: After another conversation with AMD, the company is reiterating its point that it is not directly involved in the GTAV bundles we are seeing today with AMD Radeon graphics cards on Newegg. According to AMD, the bundle was solely built by Newegg and the OEMs, which explains why we don't see similar offers on identical cards on Amazon. It's likely then that Newegg interfaced with Take-Two/Rockstar to get approval for the Grand Theft Auto 5 inclusion while the DiRT Rally portion was just a happy coincidence. (Also, apparently a week ago AMD launched the DiRT Rally bundle...who knew?!?)

Source: Newegg

Podcast #348 - DirectX 12, New AMD GPU News, Giveaways and more!

Subject: General Tech | May 7, 2015 - 03:17 PM |
Tagged: podcast, video, amd, Fiji, hbm, microsoft, build 2015, DirectX 12, Intel, SSD 750, freesync, gsync, Oculus, rift

PC Perspective Podcast #348 - 05/07/2015

Join us this week as we discuss DirectX 12, New AMD GPU News, Giveaways and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

AMD has new APUs and new pricing for the ones already on the market

Subject: General Tech | May 7, 2015 - 02:17 PM |
Tagged: amd, Carrizo-L, A8-7410, A6-7310, E2-7110, E1-7010, APU

AMD has provided information on their new Carrizo-L-based 7000 series of chips, featuring the A8-7410, A6-7310 and A4-7210 as well as the E2-7110 and E1-7010.  The two E-Series chips replace the low-powered Beema APUs, the E1-6010 and E2-6110, which were found in all-in-one machines, with the new E2-7110 being the first of that series to have four cores.  The other three models are new desktop chips with newer graphics cores, the full feature set you would expect and slightly higher TDPs than the E-Series.

The existing AMD A-Series desktop APUs have seen a price reduction today, with the top-end A10-7850K reduced to $127 and the low-end A4-7300 costing a mere $42, which helps AMD's positioning as a supplier at the lower end of the market.  You can see the entire price list, as well as some information about the new R300 series of GPUs, in their post.

AMDAPU.PNG

"The AMD A-series APU are also the world’s best SoCs for DirectX 12, as independent testing showed a 41% framerate increase under DirectX 12 – read more in the AMD blog here. Additionally, using DirectX 12 the AMD A-series APU was able to demonstrate an incredible 511% increase in performance per watt.

Finally, with a suitably equipped AMD socket FM2+ motherboard featuring DisplayPort, AMD A-series APUs also support AMD FreeSync to deliver all the incredible experience benefits detailed in our AMD blog here."

Here is some more Tech News from around the web:

Tech Talk

Source: AMD