A new generation of Software Rendering Engines.
We have been busy with side projects here at PC Perspective over the last year. Ryan has nearly broken his back rating the frames. Ken, along with running the video equipment and "getting an education", developed a hardware switching device for Wirecast and XSplit.
My project, "Perpetual Motion Engine", involves researching and developing a GPU-accelerated software rendering engine. Now, to be clear, this is in very early development for the moment. The point is not to draw beautiful scenes. Not yet. The point is to show what OpenGL and DirectX do, and what limits are removed when you do the math directly.
Errata: BioShock uses a modified Unreal Engine 2.5, not 3.
In the above video:
- I show the problems with graphics APIs such as DirectX and OpenGL.
- I talk about the problem those APIs attempt to solve: finding color values for your monitor.
- I discuss the advantages of boiling graphics problems down to general mathematics.
- Finally, I prove the advantages of boiling graphics problems down to general mathematics.
I would recommend watching the video first, before moving on to the rest of the editorial. A few parts need to be seen to be properly understood.
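To make the "general mathematics" point a little more concrete, consider a minimal sketch (the names and the shading function below are hypothetical illustrations, not code from the Perpetual Motion Engine): once you strip away the API's pipeline, a renderer is just a function from pixel coordinates to color values.

```cpp
#include <cstdint>
#include <vector>

struct Color { std::uint8_t r, g, b; };

// Any pure math can live here -- there are no API-imposed pipeline stages.
// This example draws a flat-shaded disc over a vertical blue gradient.
Color shade(float u, float v) {
    float d = (u - 0.5f) * (u - 0.5f) + (v - 0.5f) * (v - 0.5f);
    std::uint8_t c = d < 0.16f ? 255 : 40;
    return { c, c, static_cast<std::uint8_t>(254 * v) };
}

std::vector<Color> render(int width, int height) {
    std::vector<Color> framebuffer(width * height);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            framebuffer[y * width + x] =
                shade((x + 0.5f) / width, (y + 0.5f) / height);
    return framebuffer;  // hand this off to whatever output path you like
}
```

The editorial's argument is that APIs like OpenGL and DirectX fix the shape of that function for you; evaluate the math yourself and those limits disappear.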
AMD is up to some interesting things. Today at AMD’s tech day, we discovered a veritable cornucopia of information. Some of it was pretty interesting (audio), some was discussed ad nauseam (audio, audio, and more audio), and one thing in particular was quite shocking. Mantle was the final, big subject that AMD was willing to discuss. Many assumed that the R9 290X would be the primary focus of this talk, but in fact it was very much an aside that was not discussed at any length. AMD basically said, “Yes, the card exists, and it has some new features that we are not going to really go over at this time.” Mantle, as a technology, is at once a logical step and an unforeseen one. So what does Mantle mean for users?
Looking back through the mists of time, when dinosaurs roamed the earth, the individual 3D chip makers all implemented low level APIs that allowed programmers to get closer to the silicon than higher level APIs such as Direct3D and OpenGL would allow. This was a very efficient way of doing things in terms of graphics performance; it was an inefficient way of doing things for a developer writing code for multiple APIs. Microsoft and the Khronos Group had solutions in Direct3D and OpenGL that allowed programmers to develop for a single high level API relatively simply. The developers could write code that would run on D3D/OpenGL, and the graphics chip manufacturers would write drivers that interfaced with Direct3D/OpenGL, which then went through a hardware abstraction layer to communicate with the hardware. The onus was then on the graphics companies to create solid, high performance drivers that would work well with DirectX or OpenGL, so the game developer would not have to code directly for a multitude of current and older graphics cards.
Summary of Events
In January of 2013 I revealed a new testing methodology for graphics cards that I dubbed Frame Rating. At the time I was only able to talk about the process, which uses capture hardware to record the output directly from the DVI connections on graphics cards, but over the course of a few months I started to release data and information gathered with this technology. I followed up that story with a collection of videos that showed some of the captured video and the kinds of performance issues and anomalies we were able to easily find.
My first full test results were published in February to quite a bit of stir, and then in late March I released Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing, which changed forever the way graphics card and gaming performance is discussed and evaluated.
Our testing proved that AMD CrossFire was not improving gaming experiences in the same way that NVIDIA SLI was. Also, we showed that other testing tools like FRAPS were inadequate in showcasing this problem. If you are at all unfamiliar with this testing process or the results it showed, please check out the Frame Rating Dissected story above.
At the time, we tested the 5760x1080 resolution using AMD Eyefinity and NVIDIA Surround, but found there were too many issues with our scripts and the results they were producing to give reasonably assured performance metrics. Running AMD with Eyefinity was obviously causing some problems, but I wasn’t quite able to pinpoint what they were or how severe they might have been. Instead I posted graphs like this:
We were able to show NVIDIA GTX 680 performance and scaling in SLI at 5760x1080, but we were only giving results for the Radeon HD 7970 GHz Edition in a single GPU configuration.
Since those stories were released, AMD has been very active. At first they were hesitant to believe our results and called into question our processes and the ability of gamers to really see the frame rate issues we were describing. However, after months of work and pressure from quite a few press outlets, AMD released a 13.8 beta driver that offered a Frame Pacing option in the 3D controls, which evenly spaces out frames in multi-GPU configurations to produce a smoother gaming experience.
The results were great! The new AMD driver produced very consistent frame times and put CrossFire on a similar playing field to NVIDIA’s SLI technology. There were limitations though: the driver only fixed DX10/11 games and only addressed resolutions of 2560x1440 and below.
But the story doesn’t end there. CrossFire and Eyefinity are still very important in a lot of gamers’ minds, and with the constant price drops on 1920x1080 panels, more and more gamers are taking (or thinking of taking) the plunge into the world of Eyefinity and Surround. As it turns out though, there are some more problems and complications with Eyefinity and high-resolution gaming (multi-head 4K) that are cropping up and deserve discussion.
A New TriFrozr Cooler
Graphics cards are by far the most interesting topic we cover at PC Perspective. Between the battles of NVIDIA and AMD, as well as the competition between board partners like EVGA, ASUS, MSI and Galaxy, there is very rarely a moment when we don't have a different GPU product of some kind on an active test bed. Both NVIDIA and AMD release reference cards (for the most part) with each and every new product launch, and it then takes some time for board partners to really put their own stamp on the designs – other than the figurative stamp that is the sticker on the fan.
One of the companies that has recently become well known for very custom, non-reference graphics card designs is MSI, and the pinnacle of the company's engineering falls under the Lightning brand. As far back as the MSI GTX 260 Lightning and as recently as the MSI HD 7970 Lightning, these cards have combined unique cooling, custom power design and a good amount of over-engineering to produce cards that have few rivals.
Today we are looking at the brand new MSI GeForce GTX 780 Lightning, a complete revamp of the GTX 780 that was released in May. Based on the same GK110 GPU as the GTX Titan, with two fewer SMX units, the GTX 780 is easily the second fastest single GPU card on the market. MSI is hoping to make enthusiasts even more excited about the card with the Lightning design, which brings a brand new TriFrozr cooler, an impressive power design and overclocking capabilities that both basic users and LN2 junkies can take advantage of. Just what DO you get for $750 these days?
Plus one GTX 670...
Brand new GPU architectures are typically packaged in reference designs when it comes to power, PCB layout, and cooling. Once manufacturers get a chance to put out their own designs, interesting things happen. The top end products are usually the ones that get the specialized treatment first, because they typically have the larger margins to work with. Design choices here will eventually trickle down to lower end cards, typically at a price point $20 to $30 above a reference design. Companies such as MSI have made this their bread and butter, with the Lightning series on top, the Hawk line handling the midrange, and then the hopped up reference designs with better cooling under the Twin Frozr moniker.
ASUS has been working on their own custom designs for years and years, but it honestly was not until the DirectCU series debuted that we had a well defined product lineup pushing high end functionality across the entire range of products, from top to bottom. Certainly they had custom and unique designs before, but things really seemed to crystallize with DirectCU. I guess that is the power of a good marketing tool as well. DirectCU is a well known brand owned by ASUS, and users typically know what to expect when looking at a DirectCU product.
Courtesy of XSPC
The Razor GTX680 water block was among the first in the XSPC full cover line of blocks. The previous generation of XSPC water blocks offered cooling for the GPU as well as the memory and on-board VRMs, but did not offer the protection that a full card-sized block offers to the sensitive components integrated into the card's PCB. At an MSRP of $99.99, the Razor GTX680 water block is a sound investment.
Courtesy of XSPC
The Razor GTX680 block comes with a total of seven G1/4" ports - four on the inlet side (left) and three on the outlet side (right). XSPC included the following components with the block: XSPC thermal compound, dual blue LEDs, five steel port caps, paper washers and mounting screws, and TIM (thermal interface material) for use with the on-board memory and VRM chips.
Frame Pacing for CrossFire
When the Radeon HD 7990 launched in April of this year, we had some not-so-great things to say about it. The HD 7990 depends on CrossFire technology to function, and because we had found quite a few problems with AMD's CrossFire technology over the preceding months of testing with our Frame Rating methods, the HD 7990 "had a hard time justifying its $1000 price tag." Right at launch, AMD gave us a taste of a new driver that they were hoping would fix the frame pacing and frame time variance issues seen in CrossFire, and it looked positive. The problem was that the driver wouldn't be available until summer.
As I said then: "But until that driver is perfected, is bug free and is presented to buyers as a made-for-primetime solution, I just cannot recommend an investment this large on the Radeon HD 7990."
Today could be a very big day for AMD - the release of the promised driver update that enables frame pacing on AMD 7000-series CrossFire configurations including the Radeon HD 7990 graphics cards with a pair of Tahiti GPUs.
It's not perfect yet, and there are some things to keep an eye on. For example, this fix will not address Eyefinity configurations, which include multi-panel solutions and the new 4K 60 Hz displays that require a tiled display configuration. Also, we found some issues with CrossFire setups using more than two GPUs that we'll address on a later page.
New Driver Details
Starting with 13.8 and moving forward, AMD plans to have the frame pacing fix integrated into all future drivers. The software team has implemented a software-based frame pacing algorithm that monitors the time it takes each GPU to render a frame and how long each frame is displayed on the screen, and inserts delays into the present calls when necessary to prevent very tightly timed frame renders. This balances or "paces" the frame output to the screen without lowering the overall frame rate. The driver monitors this constantly in real-time, and minor changes are made on a regular basis to keep the GPUs in check.
As you would expect, this algorithm is completely game engine independent and the games should be completely oblivious to all that is going on (other than the feedback from present calls, etc).
This fix is generic, meaning it is not tied to any specific game and doesn't require profiles the way CrossFire sometimes does. The current implementation will work with DX10 and DX11 based titles only, with DX9 support being added in a later release. AMD claims this was simply a development time issue, and since most modern GPU-bound titles are DX10/11 based they focused on that area first. In phase 2 of the frame pacing implementation AMD will add DX9 and OpenGL support. AMD wouldn't give me a timeline for implementation though, so we'll have to see how much pressure AMD continues to apply internally to get the job done.
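To illustrate the concept, here is a minimal sketch of what a frame pacing loop can look like. AMD's actual driver-level implementation is not public, so the FramePacer class and its tuning constants below are hypothetical, not AMD's code:

```cpp
#include <chrono>
#include <thread>

// Hypothetical sketch of frame pacing: measure the cadence of present
// calls and delay any frame that arrives well ahead of the average,
// so two frames don't hit the screen almost back-to-back.
class FramePacer {
    std::chrono::steady_clock::time_point last_present_ =
        std::chrono::steady_clock::now();
    double avg_frame_ms_ = 16.7;  // running estimate of the frame time

public:
    void beforePresent() {
        auto now = std::chrono::steady_clock::now();
        double since_last =
            std::chrono::duration<double, std::milli>(now - last_present_).count();

        // Update the estimate with a simple exponential moving average.
        avg_frame_ms_ = 0.9 * avg_frame_ms_ + 0.1 * since_last;

        // If this frame is early relative to the expected cadence,
        // hold it back to even out the output.
        if (since_last < avg_frame_ms_) {
            std::this_thread::sleep_for(
                std::chrono::duration<double, std::milli>(avg_frame_ms_ - since_last));
        }
        last_present_ = std::chrono::steady_clock::now();
    }
};
```

A real driver would also factor in how long each frame actually remains on screen, as AMD describes, and would make only small adjustments per frame so the overall frame rate isn't reduced.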
Overclocked GTX 770 from Galaxy
When NVIDIA launched the GeForce GTX 770 at the very end of May, we started to get in some retail samples from companies like Galaxy. While our initial review looked at the reference models, other add-in card vendors are putting their own unique touch on the latest GK104 offering and Galaxy was kind enough to send us their GeForce GTX 770 2GB GC model that uses a unique, more efficient cooler design and also runs at overclocked frequencies.
If you haven't yet read up on the GTX 770 GPU, you should probably stop by my first review of the GTX 770 to see what information you are missing out on. Essentially, the GTX 770 is a full-spec GK104 Kepler GPU running at higher clocks (both core and memory speeds) compared to the original GTX 680. The new reference clocks for the GTX 770 were 1046 MHz base clock, 1085 MHz Boost clock and a nice increase to 7.0 GHz memory speeds.
Galaxy GeForce GTX 770 2GB GC Specs
The Galaxy GC model is overclocked with a new base clock setting of 1111 MHz and a higher Boost clock of 1163 MHz; both are roughly 6-7% higher than the reference clocks (see the math below). Galaxy has left the memory speed alone though, keeping it running at an effective 7.0 GHz.
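For reference, working those overclocks out against the reference clocks:

$$\frac{1111}{1046} \approx 1.062 \;(+6.2\%) \qquad \frac{1163}{1085} \approx 1.072 \;(+7.2\%)$$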
Another Wrench – GeForce GTX 760M Results
Just recently, I evaluated some of the current processor-integrated graphics options using our new Frame Rating performance metrics. The results were very interesting, showing that Intel has done some great work with its new HD 5000 graphics option for Ultrabooks. You might have noticed that the MSI GE40 didn’t just come with the integrated HD 4600 graphics but also included a discrete NVIDIA GeForce GTX 760M on board. While that previous article was focused on the integrated graphics of Haswell, Trinity, and Richland, I did find some noteworthy results with the GTX 760M that I wanted to investigate and present.
The MSI GE40 is a new Haswell-based notebook that includes the Core i7-4702MQ quad-core processor and Intel HD 4600 graphics. Along with it MSI has included the Kepler architecture GeForce GTX 760M discrete GPU.
This GPU offers 768 CUDA cores running at a 657 MHz base clock but can stretch higher with GPU Boost technology. It is configured with 2GB of GDDR5 memory running at 2.0 GHz.
If you didn’t read the previous integrated graphics article linked above, some of the data presented there will be spoiled here, so you might want to build a baseline by reading through that first. Also, remember that we are using our Frame Rating performance evaluation system for this testing – a key differentiator from most other mobile GPU testing. And in fact it is that difference that allowed us to spot an interesting issue with the configuration we are showing you today.
If you are not familiar with the Frame Rating methodology, and how we had to change some things for mobile GPU testing, I would really encourage you to read this page of the previous mobility Frame Rating article for the scoop. The data presented below depends on that background knowledge!
Okay, you’ve been warned – on to the results.
Battle of the IGPs
Our long journey with Frame Rating, a new capture-based analysis tool to measure graphics performance of PCs and GPUs, began almost two years ago as a way to properly evaluate the real-world experiences for gamers. What started as a project attempting to learn about multi-GPU complications has really become a new standard in graphics evaluation and I truly believe it will play a crucial role going forward in GPU and game testing.
Today we take these Frame Rating methods and tools, which are elaborately detailed in our Frame Rating Dissected article, and apply them to a completely new market: notebooks. Even though Frame Rating was meant for high performance discrete desktop GPUs, the theory and science behind the entire process is completely applicable to notebook graphics, and even to the integrated graphics solutions on Haswell processors and Richland APUs. It can also measure the performance of discrete/integrated graphics combos from NVIDIA and AMD in a unique way that has already turned up some interesting results.
Even though neither side wants us to call it this, we are testing integrated graphics today. With the release of Intel’s Haswell processor (the Core i7/i5/i3 4000 series), the company has noticeably upgraded the graphics on several of its mobile and desktop products. In my first review of the Core i7-4770K, a desktop LGA1150 part, the integrated graphics now known as the HD 4600 were only slightly faster than the graphics of the previous generation Ivy Bridge and Sandy Bridge parts. Even though we had all the technical details of the HD 5000 and Iris / Iris Pro graphics options, no desktop parts actually utilize them, so we had to wait for some more hardware to show up.
When Apple held a press conference and announced new MacBook Air machines that used Intel’s Haswell architecture, I knew I could count on Ken to go and pick one up for himself. Of course, before I let him start using it for his own purposes, I made him sit through a few agonizing days of benchmarking and testing in both Windows and Mac OS X environments. Ken has already posted a review of the MacBook Air 11-in model ‘from a Windows perspective’ and in that we teased that we had done quite a bit more evaluation of the graphics performance to be shown later. Now is later.
So the first combatant in our integrated graphics showdown with Frame Rating is the 11-in MacBook Air: a small but powerful Ultrabook that sports more than 11 hours of battery life (in OS X at least) and includes the new HD 5000 integrated graphics. The HD 5000 is the GT3 variant of the new Intel processor graphics, which doubles the number of execution units compared to GT2. GT2 is the architecture behind the HD 4600 graphics that ships with nearly all of the desktop processors and many of the notebook versions, so I am very curious how this comparison is going to stand.
The GPU Midrange Gets a Kick
I like budget video cards. They hold a soft spot in my heart. I think the primary reason for this is that I too was once a poor college student and could not afford the really expensive cards. Ok, so this was maybe a few more years ago than I like to admit. Back when the Matrox Millennium was very expensive, I ended up getting the STB Lightspeed 128 instead. Instead of the 12 MB Voodoo 2 I went for the 8 MB version. I was never terribly fond of paying top dollar for a little extra performance. I am still not fond of it either.
The sub-$200 range is a bit of a sweet spot that is very tightly packed with products. These products typically perform in the range of a high end card from 3 years ago, yet still encompass the latest features of the top end products from their respective companies. These products can be overclocked by end users to attain performance approaching cards in the $200 to $250 range. Mind, there are some specific limitations to the amount of performance one can actually achieve with these cards. Still, what a user actually gets is very fair when considering the price.
Today I cover several flavors of cards from three different manufacturers, based on the AMD HD 7790 and the NVIDIA GTX 650 Ti BOOST chips. These range in price from $129 to $179. The features on these cards are amazingly varied, and there are no “sticker edition” parts to be seen here. Each card is unique in its design, and the cooling strategies are also quite distinct. Users should not expect to drive monitors above 1920x1200, much less triple-monitor Surround and Eyefinity setups.
Now let us quickly go over the respective chips that these cards are based on.
Getting even more life from GK104
Have you guys heard about this new GPU from NVIDIA? It’s called GK104, and it turns out that the damn thing has found its way into yet another graphics card this year – the new GeForce GTX 760. Yup, you read that right: what NVIDIA is calling the last update to the GeForce lineup through Fall 2013 is based on the same GK104 design that we have previously discussed in reviews of the GTX 680, GTX 670, GTX 660 Ti, GTX 690 and, more recently, the GTX 770. This isn’t a bad thing though! GK104 has done a fantastic job in every field and market segment that NVIDIA has tossed it into, with solid performance and even better performance per watt than the competition. It does mean, however, that talking up the architecture is kind of mind numbing at this point…
If you are curious about the Kepler graphics architecture and GK104 in particular, I’m not going to stop you from going back and reading over my initial review of the GTX 680 from March of 2012. The new GTX 760 takes the same GPU, adds a new and improved version of GPU Boost (the same we saw in the GTX 770) and lowers the specifications a bit to enable NVIDIA to hit a new price point. The GTX 760 will be replacing the GTX 660 Ti – that card will be falling into the ether, but the GTX 660 will remain, as will everything below it, including the GTX 650 Ti BOOST, 650 Ti and plain old 650. The GTX 670 went the way of the dodo with the release of the GTX 770.
Even though the GTX 690 isn't on this list, NVIDIA says it isn't EOL
As for the GeForce GTX 760, it will ship with 1152 CUDA cores running at a base clock of 980 MHz and a typical Boost clock of 1033 MHz. The memory speed remains at 6.0 GHz on a 256-bit memory bus, and you can expect to find both 2GB and 4GB frame buffer options from retail partners at launch. The 1152 CUDA cores are broken up over 6 SMX units, which means you’ll see some parts with 3 GPCs and others with 4 – NVIDIA claims any performance delta between them will be negligible.
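The SMX count follows directly from Kepler's layout, since each SMX houses 192 CUDA cores:

$$1152 \div 192 = 6 \ \text{SMX units enabled, out of GK104's full complement of 8}$$

With two SMX units per GPC on GK104, the two disabled units can either come from the same GPC (leaving 3 GPCs active) or from two different ones (leaving 4), which is why shipping parts will vary.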
OpenCL Support in a Meaningful Way
Adobe has had OpenCL support since last year. You would never have benefited from its inclusion unless you ran one of two AMD mobility chips under Mac OS X Lion, but it was there. Creative Cloud, predictably, furthers this trend with additional GPGPU support for applications like Photoshop and Premiere Pro.
This leads to some interesting points:
- How OpenCL is changing the landscape between Intel and AMD
- What GPU support is curiously absent from Adobe CC for one reason or another
- Which GPUs are supported despite not... existing, officially.
This should be very big news for our readers who do production work, whether professionally or as a hobby. If that's not you, how about a little information about certain GPUs that are designed to compete with the GeForce 700-series?
Kepler-based Mobile GPUs
Late last month, just before the tech world blew up from the mess that is Computex, NVIDIA announced a new line of mobility discrete graphics parts under the GTX 700M series label. At the time we simply posted some news and specifications about the new products but left performance evaluation for a later time. Today we have that for the highest end offering, the GeForce GTX 780M.
As with most mobility GPU releases, it seems, the GTX 700M series is not really a new GPU and offers only cursory feature improvements. Based completely on the Kepler line of parts, the GTX 700M series will range from 1536 CUDA cores on the GTX 780M down to 768 cores on the GTX 760M.
The flagship GTX 780M is essentially a desktop GTX 680 in a mobile form factor with lower clock speeds. With 1536 CUDA cores running at 823 MHz (boosting to higher speeds depending on the notebook configuration) and a 256-bit memory controller running at 5 GHz, the GTX 780M will likely be the fastest mobile GPU you can buy. (And we’ll be testing that in the coming pages.)
The GTX 760M, 765M and 770M offer a range of performance that scales down to 768 cores at 657 MHz. NVIDIA claims we’ll see the GTX 760M in systems as small as 14-in and below, weighing around 2 kg, from vendors like MSI and Acer. For Ultrabooks and thinner machines you’ll have to step down to smaller, less power hungry GPUs like the GT 750M and 740M, but even then we expect NVIDIA to offer much faster gaming performance than the Haswell-based processor graphics.
A necessary gesture
NVIDIA views the gaming landscape as a constantly shifting medium that starts with the PC, but the company also sees mobile gaming, cloud gaming and even console gaming as part of the overall ecosystem. All of that is tied together by an investment in content – the game developers and game publishers that make the games that we play on PCs, tablets, phones and consoles.
The slide above shows NVIDIA's targets for each segment – except for consoles, obviously. NVIDIA GRID will address the cloud gaming infrastructure, GeForce and the GeForce Experience will continue with PC systems, and NVIDIA SHIELD and the Tegra SoC will get the focus for the mobile and tablet spaces. I find it interesting that NVIDIA has specifically called out Steam under the PC – maybe a hint of the future for the upcoming Steam Box?
The primary point of focus for today’s press meeting was the commitment that NVIDIA has to the gaming world and to developers. AMD has been talking up their 4-point attack on gaming, which really starts with their dominance in the console market. But NVIDIA has been the leader in the PC world for many years and doesn’t see that changing.
With several global testing facilities, the most impressive of which exists in Russia, NVIDIA tests more games, more hardware and more settings combinations than you can possibly imagine. They tune drivers and find optimal playing settings for more than 100 games that are now wrapped up into the GeForce Experience software. They write tools for developers to find software bottlenecks and test for game streaming latency (with the upcoming SHIELD). They invest more in those areas than any other hardware vendor.
This is a list of technologies that NVIDIA claims they invented or developed – an impressive list that includes things like programmable shaders, GPU compute, Boost technology and more.
Many of these turned out to be very important in the development and advancement of gaming – not just for PCs, but for ALL gaming.
GK104 gets cheaper and faster
A week ago today we posted our review of the GeForce GTX 780, NVIDIA's attempt to split the difference between the GTX 680 and the GTX Titan graphics cards in terms of performance and pricing. Today NVIDIA launches the GeForce GTX 770 that, even though it has a fancy new name, is a card and a GPU that you are very familiar with.
The NVIDIA GK104 GPU Diagram
Based on GK104, the same GPU that powers the GTX 680 (released in March 2012), the GTX 670 and the GTX 690 (though in a pair), the new GeForce GTX 770 has very few changes from the previous models that are really worth noting. NVIDIA has updated the GPU Boost technology to 2.0 (more granular, better controls in software), but the real changes come in the clock speeds.
The GTX 770 is still built around 4 GPCs and 8 SMXs for a grand total of 1536 CUDA cores, 128 texture units and 32 ROPs. The clock speeds have increased from 1006 MHz base clock and 1058 MHz Boost up to 1046 MHz base and 1085 MHz Boost. That is a pretty minor speed bump in reality, an increase of just 4% or so over the previous clock speeds.
NVIDIA did bump up the GDDR5 memory speed considerably though, going from 6.0 Gbps to 7.0 Gbps, or 1750 MHz. The memory bus width remains 256-bits wide but the total memory bandwidth has jumped up to 224.3 GB/s.
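That bandwidth figure is straightforward arithmetic from the data rate and bus width:

$$\frac{7.0~\text{Gbps/pin} \times 256~\text{pins}}{8~\text{bits/byte}} = 224~\text{GB/s}$$

(NVIDIA's quoted 224.3 GB/s reflects an exact memory clock that sits a hair above an even 7.0 Gbps.)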
Maybe the best change for PC gamers is the new starting MSRP for the GeForce GTX 770 at $399 - a full $50-60 less than the GTX 680 was selling for as of yesterday. If you happened to pick up a GTX 680 recently, you are going to want to look into your return options, as this will surely annoy the crap out of you.
If you want more information on the architecture design of the GK104 GPU, check out our initial article on the chip's release from last year. Otherwise, with those few specification changes out of the way, let's move on to some interesting information.
The NVIDIA GeForce GTX 770 2GB Reference Card
Tired of this design yet? If so, you'll want to look into some of the non-reference options I'll show you on the next page from other vendors, but I for one am still taken with the design of these cards. You will find a handful of vendors offering re-branded GTX 770 options at launch, but most will have their own SKUs to showcase.
GK110 Gets a Lower Price Point
If you want to ask us some questions about the GTX 780 or our review, join us for a LIVE STREAM at 2pm EDT / 11am PDT on our LIVE page.
When NVIDIA released the GeForce GTX Titan in February, there was a kind of collective gasp across the enthusiast base. Half of that intake of air was from people amazed at the performance they were seeing from a single GPU graphics card powered by the GK110 chip. The other half was from people aghast at the $1000 price point that NVIDIA launched it at. The GTX Titan was the fastest single GPU card in the world, without any debate, but with it came a cost we hadn't seen in some time. Even with the debate between it, the GTX 690 and the HD 7990, the Titan was likely my favorite GPU, cost being no concern.
Today we see an extension of GK110 in a new card built by cutting the chip back some. The GeForce GTX 780 3GB is based on the same chip as the GTX Titan but with additional SMX units disabled, a lower CUDA core count and less memory. But as you'll soon see, the performance delta between it and the GTX 680 and Radeon HD 7970 GHz is pretty impressive. The $650 price tag though - maybe not.
We held a live stream the day this review launched at http://pcper.com/live. You can see the replay that goes over our benchmark results and thoughts on the GTX 780 below.
The GeForce GTX 780 - A Cut Down GK110
As I mentioned above, the GTX 780 is a pared-down GK110 GPU and for more information on that particular architecture change, you should really take a look at my original GTX Titan launch article from February. There is a lot more that is different on this part compared to GK104 than simple shader counts, but for gamers most of the focus will rest there.
The chip itself is a 7.1 billion transistor beast, though a card with the GTX 780 label actually utilizes noticeably less of it than the Titan does. Below you will find a couple of block diagrams that represent the reduced functionality of the GTX 780 versus the GTX Titan:
The Intel HD Graphics are joined by Iris
Intel gets a bad rap on the graphics front. Much of it is warranted, but a lot of it really comes down to poor marketing of the technologies and features they implement and improve on. When AMD or NVIDIA update a driver, fix a bug or bring a new gaming feature to the table, they make sure that every single PC hardware website knows about it and, thus, that as many PC gamers as possible know about it. The same cannot be said about Intel though - they are much more understated when it comes to tooting their own horn. Maybe that's because they are afraid of being called out on some aspects, or because they have a little bit of performance envy compared to the discrete options on the market.
Today might be the start of something new from the company though - a bigger focus on the graphics technology in Intel processors. More than a month before the official unveiling of the Haswell processors publicly, Intel is opening up about SOME of the changes coming to the Haswell-based graphics products.
We first learned about the changes to Intel's Haswell graphics architecture way back in September of 2012 at the Intel Developer Forum. It was revealed then that the GT3 design would essentially double theoretical output over the existing GT2 design found in Ivy Bridge. GT2 will continue to exist (though slightly updated) on Haswell, and only some versions of Haswell will actually get the higher-performing GT3 options.
In 2009 Intel announced a drive to increase graphics performance from generation to generation at an exceptional rate. Not long after, they released the Sandy Bridge CPU and the most significant performance increase in processor graphics ever. Ivy Bridge followed with a nice increase in graphics capability, but not nearly as dramatic as the SNB jump. Now, according to this graphic, the graphics capability of Haswell will be as much as 75x better than the chipset-based graphics from 2006. The real question is which variants of Haswell will have that performance level...
I should note right away that even though we are showing you general performance data on graphics, we still don't have all the details on what SKUs will have what features on the mobile and desktop lineups. Intel appears to be trying to give us as much information as possible without really giving us any information.
Our 4K Testing Methods
You may have recently seen a story and video on PC Perspective about a new TV that made its way into the office. Of particular interest is the fact that the SEIKI SE50UY04 50-in TV is a 4K television; it has a native resolution of 3840x2160. For those that are unfamiliar with the new upcoming TV and display standards, 3840x2160 is exactly four times the resolution of current 1080p TVs and displays. Oh, and this TV only cost us $1300.
In that short preview we validated that both NVIDIA and AMD current generation graphics cards can output to this TV at 3840x2160 using an HDMI cable. You might be surprised to find that HDMI 1.4 can support 4K resolutions, but it can do so only at 30 Hz - half the refresh rate of most TVs and monitors (native 60 Hz 4K TVs most likely won't be available until 2014). That doesn't mean we are limited to 30 FPS of performance though, far from it. As you'll see in our testing on the coming pages, we were able to push out much higher frame rates using some very high end graphics solutions.
I should point out that I am not a TV reviewer and I don't claim to be one, so I'll leave the technical merits of the display itself to others. Instead I will only report on my experiences with it while using Windows and playing games - it's pretty freaking awesome. The only downside I have found in my time with the TV as a gaming monitor thus far is the combination of the 30 Hz refresh rate and disabled Vsync. Because you are seeing fewer screen refreshes over the same amount of time than you would with a 60 Hz panel, all else being equal, twice as many "frames" of the game are pushed to the monitor each refresh cycle. This means that the horizontal tearing associated with disabling Vsync will likely be more apparent than it would be otherwise.
I would likely recommend enabling Vsync for a tear-free experience on this TV once you are happy with performance levels, but obviously for our testing we wanted to keep it off to gauge performance of these graphics cards.
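The tearing argument is just a ratio: with Vsync disabled, the number of distinct frame segments – and thus potential tear lines – visible in each refresh is roughly the frame rate divided by the refresh rate.

$$\text{segments per refresh} \approx \frac{\text{FPS}}{\text{refresh rate}}, \qquad \text{e.g.}~\frac{90}{30} = 3~\text{versus}~\frac{90}{60} = 1.5$$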
The card we have been expecting
Despite all the issues brought up by our new graphics performance testing methodology, which we call Frame Rating, there is little debate in the industry that AMD is making noise once again in the graphics field - from the elaborate marketing and game bundles included with Radeon HD 7000 series cards over the last year to the hiring of Roy Taylor, a VP of sales who is also the company's most vocal supporter.
Along with the marketing goes plenty of technology and important design wins. With the dominance of its silicon on the console side (Wii U, Playstation 4 and the next Xbox), AMD is making sure that developers' familiarity with its GPU architecture there pays dividends on the PC side as well. Developers will be focusing on AMD's graphics hardware for the 5-10 years of this console generation, and that could result in improved performance and feature support for Radeon graphics for PC gamers.
Today's release of the Radeon HD 7990 6GB Malta dual-GPU graphics card shows a renewed focus on high-end graphics markets since the release of the Radeon HD 7970 in January of 2012. And while you may have seen something for sale previously with the HD 7990 name attached, those were custom designs built by partners, not by AMD.
Both ASUS and PowerColor currently have high-end dual-Tahiti cards for sale. The PowerColor HD 7990 Devil 13 used the brand directly but ASUS' ARES II kept away from the name and focused on its own high-end card brands instead.
The "real" Radeon HD 7990 card was first teased at GDC in March and takes a much less dramatic approach to its design without being less impressive technically. The card includes a pair of Tahiti, HD 7970-class GPUs on a single PCB with 6GB of total memory. The raw specifications are listed here:
Considering there are two HD 7970-class GPUs on the HD 7990, the doubling of the major specs shouldn't be surprising, though it is a little deceiving. There are 8.6 billion transistors, yes, but 4.3 billion on each GPU. Yes, there are 4096 stream processors, but only 2048 on each GPU, requiring multi-GPU scaling in software to realize the extra performance. The same goes for texture fill rate, compute performance, memory bandwidth, etc. The same could be said for all dual-GPU graphics cards, though.