NVIDIA GeForce Driver 337.50 Early Results are Impressive

Subject: Graphics Cards | April 11, 2014 - 12:30 PM |
Tagged: nvidia, geforce, dx11, driver, 337.50

UPDATE: We have put together a much more comprehensive story based on the NVIDIA 337.50 driver that includes more cards and more games while also disputing the Total War: Rome II results seen here. Be sure to read it!!

When I spoke with NVIDIA after the announcement of DirectX 12 at GDC this past March, a lot of the discussion centered around a pending driver release that promised impressive performance advances with current DX11 hardware and DX11 games. 

What NVIDIA did want to focus on with us was the significant improvements that have been made to the efficiency and performance of DirectX 11. When NVIDIA is questioned as to why they didn't create their own Mantle-like API if Microsoft was dragging its feet, they point to the vast improvements both possible and already realized with existing APIs like DX11 and OpenGL. The idea is that rather than spend resources on creating a completely new API that needs to be integrated into a totally unique engine port (see Frostbite, CryEngine, etc.), NVIDIA has instead improved the performance, scaling, and predictability of DirectX 11.


NVIDIA claims that these fixes are not game specific and will improve performance and efficiency for a lot of GeForce users. Even if that is the case, we will only really see these improvements surface in titles that have addressable CPU limits or on very low-end hardware, similar to how Mantle works today.

Lofty goals to be sure. This driver was released last week and I immediately wanted to test and verify many of these claims. However, a certain other graphics project kept me occupied most of the week and then a short jaunt to Dallas kept me from the task until yesterday. 

To be clear, I am planning to look at several more games and card configurations next week, but I thought it was worth sharing our first set of results. The test bed in use is the same as our standard GPU reviews.

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: NVIDIA GeForce GTX 780 Ti 3GB, NVIDIA GeForce GTX 770 2GB
Graphics Drivers: NVIDIA 335.23 WHQL, 337.50 Beta
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64

The most interesting claims from NVIDIA were spikes as high as 70%+ in Total War: Rome II, so I decided to start there. 

First up, let's take a look at results from a pair of GTX 780 Ti cards, NVIDIA's flagship gaming GPU, running in SLI.

(Graphs: Total War: Rome II at 2560x1440 with GTX 780 Ti SLI - observed FPS, frame time percentiles, and frame time plot.)

With this title running at the Extreme preset, the average frame rate jumps from 59 FPS to 88 FPS, an increase of 48%! Frame rate variance does increase a bit with the faster average frame rate, but it stays within the limits of smoothness - just barely.

Next up, the GeForce GTX 770 SLI results.

(Graphs: Total War: Rome II at 2560x1440 with GTX 770 SLI - observed FPS, frame time percentiles, and frame time plot.)

Results here are even more impressive, as the pair of GeForce GTX 770 cards running in SLI jumps from 29.5 FPS average to 51 FPS, an increase of 72%! Even better, this occurs without any increase in frame rate variance; in fact, the blue line of the 337.50 driver actually performs better in that respect.
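For reference, the percentage gains quoted in these results follow the usual relative-improvement arithmetic. A minimal helper, purely illustrative (plain C; the function name is mine, not from any benchmark tool):

    #include <stdio.h>

    /* Percent speed-up going from one average frame rate to another. */
    static float pct_gain(float fps_before, float fps_after)
    {
        return (fps_after - fps_before) / fps_before * 100.0f;
    }

    int main(void)
    {
        /* GTX 770 SLI: 29.5 -> 51 FPS works out to ~72.9%, in line with
           the 72% figure quoted above once the averages are rounded. */
        printf("%.1f%%\n", pct_gain(29.5f, 51.0f));
        return 0;
    }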

All of these tests were run with the latest patch on Total War: Rome II and I did specifically ask NVIDIA if there were any differences in the SLI profiles between these two drivers for this game. I was told absolutely not - this just happens to be the poster child example of changes NVIDIA has made with this DX11 efficiency push.

Of course, not all games are going to see performance improvements like this, or even improvements that are measurable at all. Just as we have seen with other driver enhancements over the years, different hardware configurations, image quality settings and even scenes used to test each game will shift the deltas considerably. I can tell you already that based on some results I have (but am holding for my story next week) performance improvements in other games are ranging from <5% up to 35%+. While those aren't reaching the 72% level we saw in Total War: Rome II above, these kinds of experience changes with driver updates are impressive to see.

Even though we are likely looking at the "best case" for NVIDIA's 337.50 driver changes with the Rome II results here, clearly there is merit behind what the company is pushing. We'll have more results next week!

AMD Selects Asetek to Liquid Cool The World’s Fastest Graphics Card

Subject: Graphics Cards | April 8, 2014 - 03:51 PM |
Tagged: asetek, amd, r9 295x2

If you wondered where the custom cooler for the impressively powerful AMD Radeon R9 295X2 came from, wonder no more. The cooler was designed specifically for this card by Asetek, a veteran in watercooling computer components. You should keep that in mind the next time you think about picking up a third-party watercooler!


Asetek, the world’s leading supplier of computer liquid cooling solutions, today announced that its liquid cooling technology will be used to cool AMD’s latest flagship graphics card. The new AMD Radeon R9 295X2 is the world’s fastest graphics card. Boasting 8 gigabytes of memory and over 11 teraflops of computing power, the AMD Radeon R9 295X2 graphics card is the undisputed graphics performance champion.

“Today’s high-end graphic cards pack insane amounts of power into a very small area and removing that heat is no small task. Utilizing our liquid cooling for graphics cards unlocks new opportunities for performance and low noise,” said André Sloth Eriksen, Founder and CEO of Asetek. “The fact that AMD has chosen Asetek liquid cooling for their reference cooling design is a testament to the reliability and performance of our technology.”

The AMD Radeon R9 295X2 is the first graphics card reference design ever to ship with an advanced closed-loop water cooling system. The Asetek-developed liquid cooling system on the AMD Radeon R9 295X2 graphics card delivers significant benefits for the performance-hungry enthusiast, hardcore gamer or Bitcoin miner. Users will appreciate the unobtrusive noise, low GPU and component temperatures, and blistering performance - right out of the box.

“As the most powerful graphics card offered to date, we knew we needed an outstanding custom cooling solution for the AMD Radeon R9 295X2 graphics card,” said Matt Skynner, corporate vice president and general manager, Graphics Business Unit, AMD. “Asetek’s liquid cooling embodies the efficient performance, reliability and reputation we were seeking in a partner. As GPUs become more powerful, the benefits of collaborating with Asetek and integrating our world-class technologies are clear.”

The AMD Radeon R9 295X2 graphics card utilizes Asetek’s proven, maintenance free, factory sealed liquid cooling technology to cool the two powerful GPUs. This liquid cooling design ensures continuous stability even under full load. The card is easy to install and fits in most computer cases on the market today. With more than 1.5 million units in the field today, Asetek liquid cooling provides worry free operation to gamers and PC enthusiasts alike.

Source: Asetek

Some NVIDIA R337.50 Driver Controversy

Subject: General Tech, Graphics Cards | April 8, 2014 - 03:44 PM |
Tagged: nvidia, geforce, drivers

NVIDIA's GeForce 337.50 Driver was said to address performance when running DirectX 11-based software. Now that it is out, multiple sources are claiming the vendor-supplied benchmarks are exaggerated or simply untrue.


ExtremeTech compiled benchmarks from Anandtech and BlackHoleTec.

Going alphabetically, Anandtech tested the R337.50 and R331.xx drivers with a GeForce GTX 780 Ti, finding a double-digit increase in BioShock: Infinite and Metro: Last Light, and basically zero improvement for GRID 2, Rome II, Crysis: Warhead, Crysis 3, and Company of Heroes 2. Adding a second GTX 780 Ti into the mix helped matters, with a 76% increase in Rome II and about 9% in most of the other titles.

BlackHoleTec is next. Testing a mid-range, overclocked GeForce GTX 760 on the R337.50 and R335.23 drivers, they found slight improvements (1-3 FPS), except for Battlefield 4 and Skyrim (the latter not DX11, to be fair), which saw a slight reduction in performance (about 1 FPS).

ExtremeTech, finally, published one benchmark, but it did not compare between drivers; all it really shows is CPU scaling on AMD GPUs.

Unfortunately, I do not have any benchmarks of my own to present because I am not a GPU reviewer, nor do I have a GPU testbed. Ironically, the launch of the Radeon R9 295X2 video card might have lessened the number of benchmarks available for NVIDIA's driver, who knows?

If it is true, and R337.50 does basically nothing in a setup with one GPU, I am not exactly sure what NVIDIA was hoping to accomplish. Of course someone was going to test it and publish their results. The point of the driver update was apparently to show how having a close relationship with Microsoft can lead you to better PC gaming products now and in the future. That can really only be the story if you have something to show. Now, at least I expect, we will probably see more positive commentary about Mantle - at least when people are not talking about DirectX 12.

If you own a GeForce card, I would still install the new driver though, especially if you have an SLi configuration. Scaling to a second GPU does see measurable improvements with Release 337.50. Even for a single-card configuration, it certainly should not hurt anything.

Source: ExtremeTech

NAB 2014: Intel Iris Pro Support in Adobe Creative Cloud (CC)

Subject: General Tech, Graphics Cards, Processors, Shows and Expos | April 8, 2014 - 12:43 PM |
Tagged: Intel, NAB, NAB 14, iris pro, Adobe, premiere pro, Adobe CC

When Adobe started to GPU-accelerate their applications beyond OpenGL, they began with NVIDIA and its CUDA platform. Some time later, they integrated OpenCL support and brought AMD into the fold. At first, it was limited to a couple of Apple laptops, but support has since expanded to include several GPUs on both OSX and Windows. Since then, Adobe has switched to a subscription-based release system and has published updates on a more rapid schedule. The next update of Adobe Premiere Pro CC will bring OpenCL support to Intel Iris Pro iGPUs.


Of course, they specifically mentioned Adobe Premiere Pro CC, which suggests that support for Photoshop CC might be coming later. The press release does indicate that the update will affect both Mac and Windows versions of Adobe Premiere Pro CC, however, so at least the platforms will not be divided. Well, that is, if you can find a Windows machine with Iris Pro graphics. They do exist...

A release date has not been announced for this software upgrade.

Source: Intel

MSI's R9 290X GAMING 4G sports a variety of overclocked settings and a Twin Frozr IV

Subject: Graphics Cards | April 7, 2014 - 04:14 PM |
Tagged: msi, R9 290X GAMING 4G, amd, hawaii, R9 290X, Twin Frozr IV, factory overclocked

The familiar Twin Frozr IV cooler has been added to the R9 290X GPU on MSI's latest AMD graphics card. The R9 290X GAMING 4G sports 4GB of GDDR5 running at an even 5GHz and a GPU with three separate top speeds depending on the profile you choose: 1040 MHz in OC Mode, 1030 MHz in Gaming Mode, and 1000 MHz in Silent Mode. [H]ard|OCP also tried manually overclocking and ended up with a peak of 1130 MHz on the GPU and 5.4GHz for the GDDR5, not a bad bump over the factory overclock. Check out the performance of the various speeds in their full review.


"On our test bench today is MSI's newest high-end GAMING series graphics cards in the form of the MSI Radeon R9 290X GAMING 4G video card. We will strap it to our test bench and compare it to the MSI GeForce GTX 780 Ti GAMING 3G card out-of-box and overclocked to determine which card provides the best gameplay experience."


Source: [H]ard|OCP

NVIDIA 337.50 Driver and GeForce Experience 2.0 Released

Subject: General Tech, Graphics Cards | April 7, 2014 - 06:01 AM |
Tagged: nvidia, geforce experience, directx 11

We knew that NVIDIA had an impending driver update providing DirectX 11 performance improvements. Launched today, 337.50 still claims significant performance increases over the previous 335.23 version. What was a surprise is GeForce Experience 2.0. This version allows both ShadowPlay and GameStream to operate on notebooks, and it lets ShadowPlay record, and apparently stream to Twitch, your Windows desktop (but not on notebooks). It also enables Battery Boost, discussed previously.


Personally, I find desktop streaming to be the headlining feature, although I rarely use laptops (and much less for gaming). This is especially useful for OpenGL, for games which run in windowed mode, and for the occasional screencast without paying for Camtasia or tinkering with CamStudio. If I were to make a critique, and of course I will, I would like the option to select which monitor gets recorded. Its current behavior records the primary monitor, as far as I can tell.

I should also mention that, in my testing, "shadow recording" is not supported unless you are recording a fullscreen game. I'm guessing that NVIDIA believes their users would prefer not to have their desktops recorded until manually started and likewise stopped. It seems like it had to have been a conscious decision. It does limit its usefulness in OpenGL or windowed games, however.

This driver also introduces GameStream for devices outside of your home, discussed in the SHIELD update.

(Slide: NVIDIA's driver-to-driver SLI improvements for the GTX 770 and GTX 780 Ti.)

As for the performance boost, NVIDIA claims up to 64% faster performance in single-GPU configurations and up to 71% faster in SLI. It will obviously vary on a game-by-game and GPU-by-GPU basis. I do not have any benchmarks to share besides the few examples provided by NVIDIA. That said, it is a free driver: if you have a GeForce GPU, download it. It does complicate matters if you are deciding between AMD and NVIDIA, however.

Source: NVIDIA

GTC 2014: NVIDIA Launches Iray VCA Networked Rendering Appliance

Subject: General Tech, Graphics Cards | April 1, 2014 - 01:42 PM |
Tagged: VCA, nvidia, GTC 2014

NVIDIA launched a new visual computing appliance called the Iray VCA at the GPU Technology Conference last week. This new piece of enterprise hardware uses full GK110 graphics cards to accelerate the company's Iray renderer, which is used to create photorealistic models in various design programs.


The Iray VCA is a licensed appliance that combines NVIDIA hardware and software. On the hardware side of things, the Iray VCA is powered by eight graphics cards, dual processors (unspecified, but likely Intel Xeons based on their usage in last year's GRID VCA), 256GB of system RAM, and a 2TB SSD. Networking hardware includes two 10GbE NICs, two 1GbE NICs, and one Infiniband connection. In total, the Iray VCA features 20 CPU cores and 23,040 CUDA cores - the GPUs are based on the full GK110 die with its 2,880 CUDA cores each, and each is paired with 12GB of memory.

Even better, it is a scalable solution: companies can add additional Iray VCAs to the network, and the appliances reportedly accelerate the Iray renders running on designers' workstations transparently. NVIDIA reports that an Iray VCA is approximately 60 times faster than a Quadro K5000-powered workstation. Further, according to NVIDIA, 19 Iray VCAs working together amount to 1 PetaFLOP of compute performance (which works out to roughly 52 TFLOPS per appliance, or about 6.6 TFLOPS per GPU), enough to render photorealistic simulations using 1 billion rays with up to hundreds of thousands of bounces.


The Iray VCA enables some rather impressive real-time renders of 3D models with realistic physical properties and lighting. The models are light simulations that use ray tracing, global illumination, and other techniques to show photorealistic models using up to billions of rays of light. NVIDIA is positioning the Iray VCA as an alternative to physical prototyping, allowing designers to put together virtual prototypes that can be iterated and changed at significantly lower cost and in less time.


Iray itself is NVIDIA's GPU-accelerated photorealistic renderer, used in a number of design software packages. The Iray VCA is meant to further accelerate that renderer by throwing massive amounts of parallel processing hardware at the resource-intensive problem over the network (the Iray VCAs can be installed at a data center or kept on site). Initially the Iray VCA will support 3ds Max, Catia, Bunkspeed, and Maya, but NVIDIA is working on supporting all Iray-accelerated software with the VCA hardware.

(Image: the Iray VCA rendering a Honda car interior in real time at GTC 2014.)

The virtual prototypes can be sliced and examined and can even be placed in real world environments by importing HDR photos. Jen-Hsun Huang demonstrated this by placing Honda’s vehicle model on the GTC stage (virtually).


In fact, one of NVIDIA’s initial partners with the Iray VCA is Honda. Honda is currently beta testing a cluster of 25 Iray VCAs to refine styling designs for cars and their interiors based on initial artistic work. Honda Research and Development System Engineer Daisuke Ide was quoted by NVIDIA as stating that “Our TOPS tool, which uses NVIDIA Iray on our NVIDIA GPU cluster, enables us to evaluate our original design data as if it were real. This allows us to explore more designs so we can create better designs faster and more affordably.”

The Iray VCA will be available this summer for $50,000. The sticker price includes the hardware, the Iray license, and the first year of updates and maintenance. This is far from consumer technology, but it is interesting technology that may be used in the design process of your next car or other major purchase.

What do you think about the Iray VCA and NVIDIA's licensed hardware model?

GDC 2014: Shader-limited Optimization for AMD's GCN

Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 29, 2014 - 10:45 PM |
Tagged: gdc 14, GDC, GCN, amd

While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors are dominated by sometimes thousands of compute cores, and video drivers are complex packages of software. One of a driver's many tasks is converting your scripts, known as shaders, into machine code for its hardware. If this machine code is efficient, it can mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.


Emil Persson of Avalanche Studios, probably known best for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, due to its existence in both console and PC, while bringing up older GPUs for examples. Yes, he has many snippets of GPU assembly code.

AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like: resources are mapped to memory (and caches of said memory) rather than "slots" (although drivers and APIs often pretend those relics still exist), vectors are mostly treated as collections of scalars, and so forth. Tricks which attempt to combine instructions into vector operations, such as using dot products, can just place irrelevant restrictions on the compiler and optimizer... as it breaks those vector operations back down into the very same component-by-component ops you thought you were avoiding.

Basically, and it makes sense coming from GDC, this talk rarely glosses over points. It goes over execution speed of one individual op compared to another, at various precisions, and which to avoid (protip: integer divide). Also, fused multiply-add is awesome.
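To make the dot product point concrete, here is a minimal sketch (plain C, purely illustrative; the function and comments are mine, not from the talk) of why the "vector" form buys nothing on a scalar architecture like GCN - the compiler lowers it to one multiply plus a chain of the fused multiply-adds praised above:

    #include <math.h>

    typedef struct { float x, y, z, w; } float4;

    /* What a shader author writes as dot(a, b) is executed on a scalar ISA
       like GCN as one multiply followed by three fused multiply-adds per
       lane - exactly the component-by-component ops the vector trick was
       supposed to avoid. */
    static float dot4(float4 a, float4 b)
    {
        float d = a.x * b.x;    /* v_mul_f32-style op   */
        d = fmaf(a.y, b.y, d);  /* v_mac/v_fma-style op */
        d = fmaf(a.z, b.z, d);
        d = fmaf(a.w, b.w, d);
        return d;
    }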

I know I learned.

As a final note, this returns to the discussions we had prior to the launch of the next-generation consoles. Developers are learning how to make their shader code much more efficient on GCN, and that could easily translate to leading PC titles. Especially with DirectX 12 and Mantle, which lighten the CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and in terms of Intel vs AMD, it might help). But honestly, I believe that it will be trends like this presentation which prove more significant... even if behind the scenes. Of course developers were always having these discussions, but now console developers will probably be talking about only one architecture - that is a lot of people talking about very few things.

This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.

AMD FirePro W9100 Announced: Doing Work in Hawaii.

Subject: General Tech, Graphics Cards | March 26, 2014 - 02:43 PM |
Tagged: amd, firepro, W9100

The AMD FirePro W9100 has been announced, bringing the Hawaii architecture to non-gaming markets. First seen in the Radeon R9 series of graphics cards, the GPU has the capacity for 5 TeraFLOPs of single-precision (32-bit) performance and 2 TeraFLOPs of double-precision (64-bit). The card also has 16GB of GDDR5 memory to support it. From the raw numbers, this is slightly more capacity than either the Titan Black or the Quadro K6000 in all categories. It will also support six 4K monitors (or three at 60Hz) per card, and AMD supports up to four W9100 cards in a single system.


Professional users may be looking for several things in their graphics cards: compute performance (either directly or through licensed software such as Photoshop, Premiere, Blender, Maya, and so forth), several high-resolution monitors (or digital signage units), and/or a lot of graphics performance. The W9100 is basically the top of the stack, covering all three of these requirements.


AMD also announced a system branding initiative called "AMD FirePro Ultra Workstation". They currently have five launch partners (Supermicro, Boxx, Tarox, Silverdraft, and Versatile Distribution Services) which will have workstations available under this program. The list of components for a "Recommend" certification is: two eight-core 2.6 GHz CPUs, 32GB of RAM, four PCIe 3.0 x16 slots, a 1500W Platinum PSU, and a case with nine expansion slots (to allow four W9100 GPUs along with one SSD or SDI interface card).


Also, while the company heavily discussed OpenCL in their slide deck, they did not mention specific versions. As such, I will assume that the FirePro W9100 supports OpenCL 1.2, like the R9 series, and not OpenCL 2.0, which was ratified back in November. This is still a higher conformance level than NVIDIA, which is at OpenCL 1.1.

Currently there is no word on pricing or availability.

Source: AMD

NVIDIA SHIELD: New Features and Promotional Price Cut

Subject: General Tech, Graphics Cards, Mobile | March 25, 2014 - 12:01 PM |
Tagged: shield, nvidia

The SHIELD from NVIDIA is getting a software update which advances GameStream, TegraZone, and the Android OS itself, moving to KitKat. Personally, the GameStream enhancements seem most notable, as users can now access their home PC's gaming content outside of the home, as if it were a cloud server (but some other parts were interesting, too). Also, from now until the end of April, NVIDIA has temporarily cut the price to $199.


Going into more detail: GameStream, now out of beta, will stream games which are rendered on your gaming PC to your SHIELD. Typically, we have seen this through "cloud" services, such as OnLive and Gaikai, which allow access to a set of games that run on their servers (with varying license models). The fear with these services is the lack of ownership, but the advantage is that the client device just needs enough power to decode an HD video stream.


In NVIDIA's case, the user owns both the server (their standard NVIDIA-powered gaming PC, which can now be a laptop) and the target device (the SHIELD). This technology was once limited to your own network (which definitely has its uses, especially for the SHIELD as a home theater device) but can now also be exposed over the internet. For this, NVIDIA recommends 5 megabit upload and download speeds - which is still a lot of upload bandwidth, even for 2014. In terms of performance, NVIDIA believes that it should live up to expectations set by their GRID. I do not have any experience with this, but others on the conference call took it as good news.

As for content, NVIDIA has expanded the number of supported titles to over a hundred, including new entries: Assassin's Creed IV, Batman: Arkham Origins, Battlefield 4, Call of Duty: Ghosts, Daylight, Titanfall, and Dark Souls II. They also claim that users can add other apps which are not officially supported (Halo 2: Vista was mentioned as an example) for streaming. FPS and bitrate can now be set by the user, and a Bluetooth mouse and keyboard can be paired to SHIELD for that input type through GameStream.

(Slide: NVIDIA's SHIELD feature checkbox comparison.)

Yeah, I don't like checkbox comparisons either. It's just a summary.

A new TegraZone was also briefly mentioned; its main upgrade is apparently its library interface. There have also been a number of PC titles ported to Android recently, such as Mount & Blade: Warband.

The update is available now and the $199 promotion will last until the end of April.

Source: NVIDIA

NVIDIA GPUs pre-Fermi Are End of Life After 340.xx Drivers

Subject: General Tech, Graphics Cards | March 21, 2014 - 09:43 PM |
Tagged: nvidia

NVIDIA's Release 340.xx GPU drivers for Windows will be the last to contain "enhancements and optimizations" for video cards based on architectures before Fermi. While NVIDIA will provide some extended support for 340.xx (and earlier) drivers until April 1st, 2016, those older cards will not be able to install Release 343.xx (or later) drivers. Release 343 will only support Fermi, Kepler, and Maxwell-based GPUs.


The company has a large table on their CustHelp website filled with product models that are pining for the fjords. In short, if the model is 400-series or higher (except the GeForce 405), then it is still fully supported. If you do have the GeForce 405, or anything 300-series and prior, then the GeForce Release 340.xx drivers will be the end of the line for you.

As for speculation, Fermi was the first modern GPU architecture for NVIDIA. It transitioned to standards-based (IEEE 754, etc.) data structures, introduced L1 and L2 caches, and so forth. From our DirectX 12 live blog, we also noticed that the new graphics API will, likewise, begin support at Fermi. It feels to me that NVIDIA, like Microsoft, wants to shed the transition period and work on developing a platform built around that baseline.

Source: NVIDIA

ASUS custom built ROG MARS GTX 760

Subject: Graphics Cards | March 21, 2014 - 11:33 AM |
Tagged: ROG MARS 760, nvidia, gtx 760, GK104, asus

If you can afford to spend $1000 or more on a GPU, the ASUS ROG MARS GTX 760 is an interesting choice. The two GTX 760 cores on this card are not cut down as we have seen on some other dual-GPU cards; indeed, ASUS even overclocked them to a base of 1006MHz and a boost clock of 1072MHz. Ryan reviewed this card back in December, awarding it a Gold, and [H]ard|OCP is revisiting the card with a new driver and a different lineup of games. They also awarded this unique card from ASUS a Gold after it finished stomping on AMD and the GTX 780 Ti.


"The ASUS ROG MARS 760 is one of the most unique custom built video cards out on the market today. ASUS has designed a video card sporting dual NVIDIA GTX 760 GPUs on a single video card and given gamers something that didn't exist before in the market place. We will find out how it compares with the fastest video cards out there."


Source: [H]ard|OCP

Microsoft DirectX 12 Live Blog Recap

Subject: Graphics Cards | March 21, 2014 - 11:25 AM |
Tagged: dx12, DirectX, DirectX 12, GDC, gdc 14, nvidia, Intel, amd, qualcomm, live, live blog

We had some requests for a permanent spot for the live blog images and text from this week's GDC 14 DirectX 12 reveal. Here it is, included below!!

Microsoft DirectX 12 Announcement Live Blog (03/20/2014)

9:53 Ryan Shrout: Hi everyone!
9:53 Ryan Shrout: We are just about ready to get started - people are filing in now.
9:53 Comment from Guest: ?video
9:53 Ryan Shrout: Sorry, no video for this. They wouldn't allow us to record or stream.
9:55 Comment from Guest: kk, no worries
9:55 Comment from Guest: Pictures?
9:55 Ryan Shrout: Yup!
9:59 Ryan Shrout: Just testing out photos. I promise the others will be more clear.
9:59 Ryan Shrout: [photo]
10:00 Comment from Sebastian: Looks like it's a very small event
10:00 Ryan Shrout: The room is much smaller than it should be. Line was way too long for a room like this.
10:01 Josh Walrath: That is a super small room for such an event. Especially considering the online demand for details!
10:02 Ryan Shrout: [photo] Qualcomm's Eric Demers, AMD's Raja Koduri, NVIDIA's Tony Tamasi.
10:03 Ryan Shrout: And we are starting!
10:03 Josh Walrath: Have those boys gotten their knives out yet? Are the press circling them and snapping their fingers?
10:03 Ryan Shrout: Going over a history of DX.
10:03 Ryan Shrout: [photo]
10:04 Ryan Shrout: Talking about the development process.
10:04 Ryan Shrout: All partner based.
10:04 Ryan Shrout: [photo]
10:05 Comment from Guest: why cant I comment ?
10:05 Ryan Shrout: GPU performance is "embarrassingly parallel" statement here.
10:05 Scott Michaud: You can, we just need to publish them. And there's *a lot* of comments.
10:05 Ryan Shrout: [photo]
10:05 Josh Walrath: We see everything, Peter.
10:05 Ryan Shrout: CPU performance has not improved at the same rate. This difference in rate of increase is a big challenge for DX.
10:06 Ryan Shrout: Third point has been a challenge, until now.
10:07 Ryan Shrout: What do developers want? List similar to what AMD presented with Mantle.
10:07 Ryan Shrout: DX12 "is no dot release".
10:08 Ryan Shrout: [photo]
10:08 Ryan Shrout: It's faster, more direct. Ha ha.
10:08 Ryan Shrout: [photo]
10:08 Ryan Shrout: Xbox One games will see improved performance. Coming to all MS platforms. PC, mobile too.
10:08 Josh Walrath: Oh look, mobile!
10:09 Ryan Shrout: [photo]
10:09 Ryan Shrout: New tools are a requirement.
10:09 Josh Walrath: We finally have an MS answer to OpenGL ES.
10:09 Scott Michaud: Hmm, none of the four pictures at the bottom is a desktop or laptop.
10:09 Ryan Shrout: D3D 12 is the first version to go much lower level.
10:09 Comment from Guest: The last one is a desktop...
10:10 Scott Michaud: Huh, thought it was a TV. My mistake.
10:10 Ryan Shrout: [photo]
10:10 Ryan Shrout: Yeah, desktop PC is definitely on the list here, guys.
10:11 Ryan Shrout: Going to show us some prototypes.
10:11 Ryan Shrout: Ported latest 3DMark.
10:12 Ryan Shrout: In DX11, one core is doing most of the work.
10:12 Ryan Shrout: On D3D12, overall CPU utilization is down 50%.
10:13 Ryan Shrout: Also, the workload is more spread out.
10:13 Ryan Shrout: [photo]
10:13 Ryan Shrout: Interesting data for you all!!
10:13 Ryan Shrout: [photo]
10:14 Ryan Shrout: Grouping entire pipeline state into state objects. These can be mapped very efficiently to GPU hardware.
10:14 Ryan Shrout: [photo]
10:15 Ryan Shrout: "Solved" multi-threaded scalability.
10:15 Scott Michaud: Hmm, from ~8ms to ~4. That's an extra 4ms for the GPU to work. 20 GFLOPs for a GeForce Titan.
10:15 Comment from Jay: Multicore scalability.... Seems like a big deal when you have 6-8 cores!
10:16 Josh Walrath: It is a big deal for the CPU guys.
10:16 Ryan Shrout: D3D12 allows apps to control graphics memory better.
10:16 Ryan Shrout: [photo]
10:17 Ryan Shrout: API is now much lower level. Application tracks pipeline status, not the API.
10:17 Comment from Jim: 20 GFlops from a Titan? Stock Titan gets around 5 ATM.
10:17 Ryan Shrout: [photo]
10:18 Ryan Shrout: Less API and driver tracking universally. More predictability.
10:18 Ryan Shrout: This is targeted at the smartest developers, but gives you unprecedented performance.
10:18 Ryan Shrout: Also planning to advance the state of rendering features. Feature level 12.
10:19 Scott Michaud: Titan gets around ~5 Teraflops, actually... if it is fully utilized. I'm saying that an extra 4ms is an extra 20 GFlops per frame.
10:19 Ryan Shrout: [photo]
10:19 Josh Walrath: Titan is around 5 TFlops total; that 20 GFLOPS is potential performance in the time gained by optimizations.
10:19 Ryan Shrout: Better collision and culling.
10:19 Ryan Shrout: Constantly working with GPU vendors to find new ways to render.
10:20 Ryan Shrout: Forza 5 on stage now. Strictly console developer.
10:20 Comment from Lewap Pawel: So 20GFLOPS per frame is 20x60 = 1200GFLOPS/sec? 20% improvement?
10:21 Ryan Shrout: [photo]
10:21 Scott Michaud: Not quite, because we don't know how many FPS we had originally.
10:21 Ryan Shrout: Talking about porting the game to D3D12.
10:22 Ryan Shrout: 4 man-months of effort to port the core rendering engine.
10:22 Ryan Shrout: Demo time!
10:22 Ryan Shrout: [photo]
10:22 Ryan Shrout: Rendering at a static 60 FPS.
10:23 Ryan Shrout: [photo]
10:23 Ryan Shrout: Bundles allow for instancing but with variance.
10:24 Ryan Shrout: Resource lifetime, track memory directly. No longer have D3D tracking that lifetime; much cheaper on resources.
10:24 Ryan Shrout: "It's all up to us, and that's how we like it."
10:24 Ryan Shrout: Does anyone else here worry that DX12 might leave out some smaller devs that can't go so low level?
10:25 Josh Walrath: I would say that depends on the quality of tools that MS provides, as well as IHV support.
10:25 Scott Michaud: Not really, for me. The reason why they can go so much lower these days is because what is lower is more consistent.
10:26 Ryan Shrout: And now back to info. Will you have to buy new hardware? I would say no, since they just showed Xbox One... lol
10:26 Comment from killeak: Small devs will use an engine, not make their own.
10:26 Ryan Shrout: [photo]
10:26 Ryan Shrout: On stage now is Raja Koduri from AMD.
10:27 Scott Michaud: Not true at all, actually. Just look at Frictional (Amnesia). They made their own engine tailored for what their game needed.
10:27 Ryan Shrout: AMD has been working very closely with DX12. Heh.
10:27 Josh Walrath: Shocking!
10:28 Ryan Shrout: [photo]
10:28 Josh Walrath: Strike a pose!
10:28 Ryan Shrout: There is tension: AMD is trying to push hardware forward, MS is trying to push their platform forward.
10:28 Ryan Shrout: Very honest assessment of the current setup between AMD, NVIDIA, MS.
10:28 Comment from Guest: Scott, with the recent changes with CryEngine and UE4 going subscription based, more indies might just go that route.
10:28 Ryan Shrout: DX12 is an area where they had the least tension in Raja's history in this field.
10:29 Scott Michaud: Definitely. But that is not the same thing as saying that indies will not make their own engine.
10:29 Ryan Shrout: [photo]
10:29 Ryan Shrout: Key is that current users get benefit with this API on day 1.
10:29 Ryan Shrout: "Like getting 4 generations of hardware ahead."
10:29 Ryan Shrout: [photo]
10:31 Josh Walrath: That answers a few of the burning questions!
10:31 Ryan Shrout: Up now is Eric Mentzer from Intel.
10:31 Comment from Kev: Thank you! Great news guys!
10:31 Scott Michaud: You're welcome! : D
10:32 Comment from Jim: OH, Intel and AMD in the same room....
10:32 Scott Michaud: Intel, AMD, NVIDIA, and Qualcomm in the same room...
10:32 Ryan Shrout: Intel has made a big change in graphics; put a lot more focus on it with tech and process tech.
10:32 Josh Walrath: DX12 will enhance any modern graphics chip. Driver support from IHVs will be key to enable those features. This is a massive change in how DX addresses the GPU, rather than (so far) the GPU adding features.
10:32 Comment from Guest: so this means xbox one will get a performance boost?
10:32 Scott Michaud: Yes
10:33 Ryan Shrout: [photo]
10:33 Scott Michaud: According to "Benefits of Direct3D 12 will extend to Xbox One", at least.
10:33 Ryan Shrout: Intel commits to having Haswell support DX12 at launch.
10:34 Ryan Shrout: BTW - thanks to everyone for stopping by the live blog!! :)
10:34 Josh Walrath: Just to reiterate... PS4 utilizes OpenGL, not DX. This change will not affect PS4. Changes to OpenGL will only improve PS4 performance.
10:34 Ryan Shrout: If you like this kind of stuff, check out our weekly podcast! http://pcper.com/podcast
10:34 Ryan Shrout: No mention of actual DX12 launch time quite yet...
10:34 Comment from Magnarock: Finish?
10:34 Ryan Shrout: And Intel is gone. Short and sweet.
10:34 Scott Michaud: Still have NVIDIA and Qualcomm, at least.
10:35 Scott Michaud: So -- not finished.
10:35 Ryan Shrout: Up next is Tony Tamasi from NVIDIA.
10:35 Ryan Shrout: NVIDIA has been working with MS since the inception of DX12. Still don't know when that is...
10:35 Comment from Alex: PS4 doesn't use OpenGL, but custom APIs instead...
10:35 Scott Michaud: True, it's not actually OpenGL... but it is heavily, heavily based on OpenGL.
10:36 Ryan Shrout: [photo]
10:36 Ryan Shrout: They think it should be done with standards so there is no fragmentation.
10:36 Ryan Shrout: lulz.
10:37 Scott Michaud: Because everything that ends in "x" is all about no fragmentation :p
10:37 Ryan Shrout: NVIDIA will support DX12 on Fermi, Kepler, Maxwell and forward!
10:37 Ryan Shrout: For developers that want to get down deep and manage all of this, DX12 is going to be really exciting.
10:38 Ryan Shrout: NVIDIA represents about 55% of the install base.
10:38 Ryan Shrout: [photo]
10:39 Ryan Shrout: Developers already have DX12 drivers. The Forza demo was running on NVIDIA!!!
10:39 Ryan Shrout: Holy crap, that wasn't on an Xbox One!!
10:39 Scott Michaud: Fermi and forward... aligning well with the start of their compute-based architectures... using IEEE standards (etc). Makes perfect sense. Also might help explain why pre-Fermi is deprecated after GeForce 340 drivers...
10:40 Ryan Shrout: Support quote from Tim Sweeney.
10:41 Ryan Shrout: [photo]
10:41 Comment from Crackola: Any current NVIDIA cards DX12 ready? Titan, etc?
10:41 Ryan Shrout: Up now is Eric Demers from Qualcomm.
10:42 Scott Michaud: NVIDIA said Fermi, Kepler, and Maxwell will be DX12-ready. So like... almost everything since GeForce 400... almost.
10:42 Ryan Shrout: [photo]
10:42 Ryan Shrout: Qualcomm has been working with MS on mobile graphics since there WAS mobile graphics.
10:42 Ryan Shrout: [photo]
10:42 Ryan Shrout: Most Windows Phones are powered by Snapdragon.
10:42 Josh Walrath: We currently don't know what changes in Direct3D will be brought to the table; all we are seeing here is how they are changing the software stack to more efficiently use modern GPUs. This does not mean that all current DX11 hardware will fully support the DX12 specification when it comes to D3D, Direct Compute, etc.
10:43 Ryan Shrout: DX12 will improve power efficiency by reducing overhead.
10:43 Ryan Shrout: [photo]
10:44 Ryan Shrout: Perf will improve on mobile devices as well, of course. But gaming for longer periods on battery is the biggest draw.
10:45 Ryan Shrout: Portability - bringing titles from the PC to Xbox to mobile platforms will be much easier.
10:45 Comment from David Uy: I think all GeForce 400 series is Fermi. So - GeForce 400 and above.
10:45 Scott Michaud: I think the GeForce 405 is the only exception...
10:45 Ryan Shrout: Off goes Eric.
10:45 Ryan Shrout: MS back on stage.
10:46 Ryan Shrout: And now a group picture lol.
10:46 Ryan Shrout: [photo]
10:47 Ryan Shrout: By the time they ship, 50% of all PC gamers will be DX12 capable.
10:47 Ryan Shrout: Ouch, targeting Holiday 2015 games.
10:48 Ryan Shrout: Early access coming later this year.
10:48 Ryan Shrout: [photo]
10:48 Josh Walrath: Yeah, this is a pretty big sea change.
10:48 Ryan Shrout: [photo]
10:49 Ryan Shrout: [photo]
10:49 Scott Michaud: 50% of PC gamers sounds like they're projecting NOT Windows 7.
10:49 Ryan Shrout: They are up for Q&A; not sure how informative they will be...
10:50 Josh Walrath: OS support? Extension changes to D3D/Direct Compute?
10:50 Ryan Shrout: Windows 7 support? Won't be announcing anything today, but they understand the request.
10:51 Ryan Shrout: Q: What about support for multi-GPU? They will have a way to target specific GPUs in a system.
10:51 Ryan Shrout: This session is wrapping up for now!
10:51 Ryan Shrout: Looks like we are light on details, but we'll be catching more sessions today so check back on http://www.pcper.com/
10:52 Scott Michaud: "A way to target specific GPUs in a system" - this sounds like developers can program their own Crossfire/SLI methods, like OpenCL and Mantle.
10:52 Ryan Shrout: Also, again, if you want more commentary on DX12 and PC hardware, check out our weekly podcast! http://www.pcper.com/podcast
10:52 Ryan Shrout: Thanks everyone for joining us! We MIGHT live blog the other sessions today, so you can sign up for our mailing list to find out when we go live. http://www.pcper.com/subscribe
10:57 Scott Michaud: Apparently NVIDIA's blog says DX12 discussion began more than four years ago "with discussions about reducing resource overhead". They worked for a year to deliver "a working design and implementation of DX12 at GDC".
Set your calendar! PC Perspective GDC 14 DirectX 12 Live Blog is Coming!

Subject: General Tech, Graphics Cards | March 19, 2014 - 05:26 PM |
Tagged: live blog, gdc 14, dx12, DirectX 12, DirectX

UPDATE: If you are looking for the live blog information, including commentary and photos, we have placed it in archive format right over here. Thanks!!

It is nearly time for Microsoft to reveal the secrets behind DirectX 12 and what it will offer PC gaming going forward.  I will be in San Francisco for the session and will be live blogging from it as networking allows.  We'll have a couple of other PC Perspective staffers chiming in as well, so it should be an interesting event for sure!  We don't know how much detail Microsoft is going to get into, but we will all know soon.


Microsoft DirectX 12 Session Live Blog

Thursday, March 20th, 10am PDT

http://www.pcper.com/live

You can sign up for a reminder using the CoverItLive interface below or you can sign up for the PC Perspective Live mailing list. See you Thursday!

GDC 14: WebCL 1.0 Specification is Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 06:03 AM |
Tagged: WebCL, gdc 14, GDC

The Khronos Group has just ratified the WebCL 1.0 standard. The API is expected to provide a massive performance boost to web applications dominated by expensive functions that can be offloaded to parallel processors, such as GPUs and multi-core CPUs. The specification also allows WebCL to communicate and share buffers with WebGL via an extension.


Frequent readers of the site might remember that I have a particular interest in WebCL. Based on OpenCL, it allows web apps to obtain a list of every available compute device and target it for workloads. I have personally executed tasks on an NVIDIA GeForce 670 discrete GPU and other jobs on my Intel HD 4000 iGPU, at the same time, using the WebCL prototype from Tomi Aarnio of Nokia Research. The same is true for users with multiple discrete GPUs installed in their system (even if they are not compatible with Crossfire or SLI, or are from different vendors altogether). This could be very useful for physics, AI, lighting, and other game middleware packages.
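To illustrate the enumeration model, here is a minimal sketch using the native OpenCL C host API that WebCL's JavaScript bindings mirror; the program is mine, purely for illustration, with error checking omitted:

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(8, platforms, &num_platforms);

        for (cl_uint p = 0; p < num_platforms; ++p) {
            cl_device_id devices[8];
            cl_uint num_devices = 0;
            /* CL_DEVICE_TYPE_ALL picks up discrete GPUs, iGPUs, and CPUs
               alike, which is how one job can land on a GeForce while
               another runs on an Intel HD iGPU at the same time. */
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8,
                           devices, &num_devices);

            for (cl_uint d = 0; d < num_devices; ++d) {
                char name[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                sizeof name, name, NULL);
                printf("Platform %u, device %u: %s\n", p, d, name);
            }
        }
        return 0;
    }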

Still, browser adoption might be rocky for quite some time. Google, Mozilla, and Opera Software were each involved in the working draft, which leaves both Apple and Microsoft notably absent. Even then, I am not sure how much interest exists within Google, Mozilla, and Opera to take WebCL from a specification to a working feature in their browsers. Some individuals have expressed more faith in WebGL compute shaders than in WebCL.

Of course, that can change with just a single "killer app", library, or middleware.

I do expect some resistance from the platform holders, however. Even Google has been pushing back on OpenCL support in Android in favor of their "Renderscript" abstraction. The performance of a graphics processor is also significant leverage for a native app; there is little, otherwise, that cannot be accomplished with Web standards except a web browser itself (and there are even some non-serious projects for that). If Microsoft can support WebGL, however, there is always hope.

The specification is available at the Khronos website.

Source: Khronos

GDC 14: EGL 1.5 Specification Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 06:02 AM |
Tagged: OpenGL ES, opengl, opencl, gdc 14, GDC, EGL

The Khronos Group has also released their ratified specification for EGL 1.5. This API is at the center of data and event management between other Khronos APIs. This version increases security and interoperability between APIs, and adds support for more operating systems, including Android and 64-bit Linux.


The headline change is the promotion of EGLImage objects from the realm of extensions into EGL 1.5's core functionality, giving developers a reliable method of transferring textures and renderbuffers between graphics contexts and APIs. Second on the list is increased security around creating a graphics context, primarily designed for WebGL applications, which any arbitrary website can become. Further down the list is the EGLSync object, which allows further partnership between OpenGL (and OpenGL ES) and OpenCL; the GPU may not need CPU involvement when scheduling between tasks on both APIs.
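As a small illustration of the sync functionality now in core, here is a minimal sketch in C, assuming EGL 1.5 headers and an already-initialized display with a current context (the helper function is mine, not part of EGL):

    #include <EGL/egl.h>

    /* Block until the GPU has finished all commands queued so far on the
       current context. Display/context setup is omitted. */
    static void wait_for_gpu(EGLDisplay dpy)
    {
        /* Fence syncs are core in EGL 1.5; no EGL_KHR_fence_sync needed. */
        EGLSync sync = eglCreateSync(dpy, EGL_SYNC_FENCE, NULL);

        /* Flush pending commands and wait until the fence signals. */
        eglClientWaitSync(dpy, sync, EGL_SYNC_FLUSH_COMMANDS_BIT,
                          EGL_FOREVER);
        eglDestroySync(dpy, sync);
    }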

During the call, the representative also wanted to mention that developers have asked them to bring EGL back to Windows. While it has not happened yet, they have announced that it is a current target.

The EGL 1.5 spec is available at the Khronos website.

Source: Khronos

GDC 14: SYCL 1.2 Provisional Spec Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 06:01 AM |
Tagged: SYCL, opencl, gdc 14, GDC

To gather community feedback, the provisional specification for SYCL 1.2 has been released by The Khronos Group. SYCL builds on OpenCL with the C++11 standard. The technology sits on top of another Khronos specification, SPIR, which allows the OpenCL C programming language to be mapped onto LLVM, with its hundreds of compatible languages (and Khronos is careful to note that they intend for anyone to make their own compatible alternative language).


In short, SPIR allows many languages which can compile into LLVM to take advantage of OpenCL. SYCL is the specification for creating C++11 libraries and compilers through SPIR.

As stated earlier, Khronos wants anyone to make their own compatible language:

While SYCL is one possible solution for developers, the OpenCL group encourages innovation in programming models for heterogeneous systems, either by building on top of the SPIR™ low-level intermediate representation, leveraging C++ programming techniques through SYCL, using the open source CLU libraries for prototyping, or by developing their own techniques.

SYCL 1.2 supports OpenCL 1.2 and they intend to develop it alongside OpenCL. Future releases are expected to support the latest OpenCL 2.0 specification and keep up with future developments.

The SYCL 1.2 provisional spec is available at the Khronos website.

Source: Khronos

Just Delivered: MSI Radeon R9 290X Lightning

Subject: Graphics Cards | March 18, 2014 - 12:58 PM |
Tagged: radeon, R9 290X, msi, just delivered, amd, 290x lightning, 290x

While Ryan may be en route to the Game Developers Conference in San Francisco right now, work must go on at the PC Perspective office. As it happens, my arrival at the office today was greeted by a massively exciting graphics card: the MSI Radeon R9 290X Lightning.


While we first got our hands on a prerelease version of this card at CES earlier this year, we can now put the Lightning edition through its paces.


To go along with this massive graphics card comes a massive box. Just like with the GTX 780 Lightning, MSI paid extra attention to the packaging to create a more premium-feeling experience than your standard reference design card.


Comparing the 290X Lightning to the AMD reference design, it is clear how much engineering went into this card - the heatpipes and fins alone are as thick as the entire reference card. This, combined with a redesigned PCB and improved power management, should ensure that you never fall victim to the GPU clock variance issues of the reference design cards, and should give you one of the best overclocking experiences possible from the Hawaii GPU.

(Screenshot: GPU-Z readout for the R9 290X Lightning.)

While I haven't had a chance to start benchmarking yet, I put it on the testbed and figured I would give a little preview of what you can expect from this card out of the box.

Stay tuned for more coverage of the MSI Radeon R9 290X Lightning and our full review, coming soon on PC Perspective!

Source: MSI

GDC 14: OpenGL ES 3.1 Spec Released by Khronos Group

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 17, 2014 - 06:01 AM |
Tagged: OpenGL ES, opengl, Khronos, gdc 14, GDC

Today, day one of Game Developers Conference 2014, the Khronos Group has officially released the 3.1 specification for OpenGL ES. The main new feature, brought over from OpenGL 4, is the addition of compute shaders. This opens GPGPU functionality to mobile and embedded devices for applications developed in OpenGL ES, especially if the developer does not want to add OpenCL.

The update is backward-compatible with OpenGL ES 2.0 and 3.0 applications, allowing developers to add features, as available, to their existing apps. On the device side, the new functionality is expected to arrive via driver update in the majority of cases.

opengl-es-logo.png

OpenGL ES, standing for OpenGL for Embedded Systems (though rarely branded as such), delivers what Khronos considers the most important features of the graphics library to the majority of devices. The Khronos Group has been working toward merging ES with the "full" graphics library over time. The last release, OpenGL ES 3.0, was focused on becoming a direct subset of OpenGL 4.3. This release expands the feature space it occupies.

OpenGL ES also forms the basis for WebGL. The current draft of WebGL 2.0 uses OpenGL ES 3.0 although that was not discussed today. I have heard murmurs (not from Khronos) about some parties pushing for compute shaders in that specification, which this announcement puts us closer to.

The new specification also adds other features, such as the ability to issue a draw without CPU intervention. You could imagine a particle simulation, for instance, that wants to draw the result after its compute shader terminates. Shading is also less rigid: vertex and fragment shaders no longer need to be explicitly linked into a program before they are used. I inquired about the possibility that compute devices could be targeted (for devices with two GPUs) and possibly load balanced, in a method similar to WebCL, but no confirmation or denial was provided (although the representative did mention that it would be interesting for apps that fall somewhere in the middle of OpenGL ES and OpenCL).
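For a flavor of the new compute stage, here is a minimal sketch of a GLSL ES 310 compute shader, written as the C string you would hand to glShaderSource() on a GL_COMPUTE_SHADER object; the buffer layout and the "gravity" kernel are illustrative inventions, not from the spec:

    /* Trivial OpenGL ES 3.1 compute shader: each 64-wide work group nudges
       a shader storage buffer of particle positions - the kind of GPGPU
       pass ES 3.1 enables without pulling in OpenCL. */
    static const char *kComputeSrc =
        "#version 310 es\n"
        "precision highp float;\n"
        "layout(local_size_x = 64) in;\n"
        "layout(std430, binding = 0) buffer Positions { vec4 pos[]; };\n"
        "void main() {\n"
        "    uint i = gl_GlobalInvocationID.x;\n"
        "    pos[i].xyz += vec3(0.0, -0.1, 0.0);\n"
        "}\n";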

The OpenGL ES 3.1 spec is available at the Khronos website.

Source: Khronos

AMD Radeon R9 Graphics Stock Friday Night Update

Subject: Graphics Cards | March 14, 2014 - 07:17 PM |
Tagged: radeon, R9 290X, r9 290, r9 280x, r9 280, amd

While sitting on the couch watching some college basketball I decided to start browsing Amazon.com and Newegg.com for some Radeon R9 graphics cards.  With all of the stock and availability issues AMD has had recently, this is a more frequent occurrence for me than I would like to admit.  Somewhat surprisingly, things appear to be improving for AMD at the high end of the product stack.  Take a look at what I found.

Card                             Amazon.com   Newegg.com
ASUS Radeon R9 290X DirectCU II  $599         -
Visiontek R9 290X                $599         -
XFX R9 290X Double D             $619         -
ASUS R9 290 DirectCU II          $499         -
XFX R9 290 Double D              $499         -
MSI R9 290 Gaming                $465         $469
PowerColor TurboDuo AXR9 280X    -            $329
Visiontek R9 280X                $370         $349
XFX R9 280 Double D              -            $289
Sapphire Dual-X R9 280           -            $299
Sapphire R7 265                  $184         $149


It's not perfect, but it's better. I was able to find two R9 290X cards at $599, which is just $50 over the expected selling price of $549. The XFX Double D R9 290X at $619 is pretty close as well. The least expensive R9 290 I found was $465, but others remain about $100 over the suggested price. In reality, having the R9 290 and R9 290X only $100 apart, as opposed to the $150 separation that AMD would like you to believe, is more realistic based on the proximity in performance between the two SKUs.

Stepping a bit lower, the R9 280X (which is essentially the same as the HD 7970 GHz Edition) can be found for $329 and $349 on Newegg. Those prices are just $30-50 more than the suggested pricing! The brand new R9 280, similar in specs to the HD 7950, is starting to show up for $289 and $299, just $10 over what AMD told us to expect.

Finally, though not really a high-end card, I did see that the R7 265 was showing up at both Amazon.com and Newegg.com for the second time since its announcement in February. For budget 1080p gamers, if you can find it, this could be the best card you can pick up.

What deals are you finding online? If you have one worth adding here, let me know! Are the availability problems and high prices on AMD GPUs finally behind us?