Author:
Manufacturer: Various

Battle of the IGPs

Our long journey with Frame Rating, a new capture-based analysis tool to measure graphics performance of PCs and GPUs, began almost two years ago as a way to properly evaluate the real-world experiences for gamers.  What started as a project attempting to learn about multi-GPU complications has really become a new standard in graphics evaluation and I truly believe it will play a crucial role going forward in GPU and game testing. 

Today we take these Frame Rating methods and tools, which are elaborately detailed in our Frame Rating Dissected article, and apply them to a completely new market: notebooks.  Even though Frame Rating was designed for high performance discrete desktop GPUs, the theory and science behind the entire process is completely applicable to notebook graphics, and even to the integrated graphics solutions on Haswell processors and Richland APUs.  It is also able to measure the performance of discrete/integrated graphics combos from NVIDIA and AMD in a unique way that has already produced some interesting results.
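As a rough illustration of the idea behind capture-based analysis (this is a simplified sketch, not the actual FCAT/Frame Rating toolchain; the function names and the nearest-rank percentile are our own), the core of the method is turning per-frame display timestamps into frame times, then comparing the average against a high-percentile frame time.  A large gap between the two is exactly the stutter that a plain FPS average hides.

```python
def frame_times(timestamps_ms):
    """Intervals between consecutive frame timestamps, in milliseconds."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def percentile(values, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(values)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[rank]

def summarize(timestamps_ms):
    """Average FPS plus a 99th-percentile frame time to expose stutter."""
    ft = frame_times(timestamps_ms)
    avg = sum(ft) / len(ft)
    return {
        "avg_fps": 1000.0 / avg,
        "avg_ms": avg,
        "p99_ms": percentile(ft, 99),  # worst ~1% of frames
    }

# A run that averages a healthy frame rate but delivers every other
# frame late -- the p99 frame time gives the problem away:
uneven, t = [], 0.0
for i in range(120):
    t += 8.0 if i % 2 else 25.3  # alternating fast/slow frames
    uneven.append(t)
print(summarize(uneven))
```

A smooth run and a stuttering run can share the same average FPS; the percentile metric is what separates them.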

 

Battle of the IGPs

Even though neither side wants us to call it this, we are testing integrated graphics today.  With the release of Intel’s Haswell processor (the Core i7/i5/i3 4000 series) the company has upgraded the graphics noticeably on several of their mobile and desktop products.  In my first review of the Core i7-4770K, a desktop LGA1150 part, the integrated graphics, now known as the HD 4600, were only slightly faster than the graphics of the previous generation Ivy Bridge and Sandy Bridge parts.  Even though we had all the technical details of the HD 5000 and Iris / Iris Pro graphics options, no desktop parts actually utilize them, so we had to wait for some more hardware to show up. 

 

mbair.JPG

When Apple held a press conference and announced new MacBook Air machines that used Intel’s Haswell architecture, I knew I could count on Ken to go and pick one up for himself.  Of course, before I let him start using it for his own purposes, I made him sit through a few agonizing days of benchmarking and testing in both Windows and Mac OS X environments.  Ken has already posted a review of the MacBook Air 11-in model ‘from a Windows perspective’ and in that we teased that we had done quite a bit more evaluation of the graphics performance to be shown later.  Now is later.

So the first combatant in our integrated graphics showdown with Frame Rating is the 11-in MacBook Air: a small but powerful Ultrabook that sports more than 11 hours of battery life (in OS X at least) and includes the new HD 5000 integrated graphics.  The HD 5000 is the GT3 variant of the new Intel processor graphics, which doubles the number of execution units compared to GT2.  GT2 is the configuration behind the HD 4600 graphics found in nearly all of the desktop processors, and many of the notebook versions, so I am very curious to see how this comparison plays out. 

Continue reading our story on Frame Rating with Haswell, Trinity and Richland!!

Podcast #258 - Corsair 900D, HD 7790 vs GTX 650Ti BOOST, Leaked AMD APUs and more!

Subject: General Tech | July 4, 2013 - 12:45 AM |
Tagged: podcast, video, corsair, 900D, 7790, 650ti boost, amd, Richland, nvidia, kepler, titan, Intel, ssd

PC Perspective Podcast #258 - 07/04/2013

Join us this week as we discuss the Corsair 900D, HD 7790 vs GTX 650Ti BOOST, Leaked AMD APUs and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom and Allyn Malventano

Program length: 1:14:23

  1. Week in Review:
  2. News items of interest:
  3. 0:58:25 Hardware/Software Picks of the Week:
    1. Allyn: USB Practical Meter (kickstarter)
  4. 1-888-38-PCPER or podcast@pcper.com
  5. Closing/outro

 

Author:
Manufacturer: PC Perspective

The GPU Midrange Gets a Kick

I like budget video cards.  They hold a soft spot in my heart.  I think the primary reason for this is that I too was once a poor college student and could not afford the really expensive cards.  Ok, so this was maybe a few more years ago than I like to admit.  Back when the Matrox Millennium was very expensive, I ended up with the STB Lightspeed 128.  Instead of the 12 MB Voodoo 2 I went for the 8 MB version.  I was never terribly fond of paying top dollar for a little extra performance.  I am still not fond of it either.

The sub-$200 range is a bit of a sweet spot that is very tightly packed with products.  These products typically perform in the range of a high end card from 3 years ago, yet still encompass the latest features of the top end products from their respective companies.  These products can be overclocked by end users to attain performance approaching cards in the $200 to $250 range.  Mind, there are some specific limitations to the amount of performance one can actually achieve with these cards.  Still, what a user actually gets is very fair when considering the price.

budg_01.jpg

Today I cover several flavors of cards from three different manufacturers that are based on the AMD HD 7790 and the NVIDIA GTX 650 Ti BOOST chips.  These range in price from $129 to $179.  The features on these cards are amazingly varied, and there are no “sticker edition” parts to be seen here.  Each card is unique in its design and the cooling strategies are also quite distinct.  Users should not expect to drive monitors above 1920x1200, much less triple monitors in Surround and Eyefinity.

Now let us quickly go over the respective chips that these cards are based on.

Click here to read the entire article!

NVIDIA Releases 326.01 WHQL Drivers For Windows 8.1

Subject: Graphics Cards | July 1, 2013 - 09:24 PM |
Tagged: windows update, Windows 8.1, whql, nvidia, gtx 700, graphics drivers

NVIDIA recently made new WHQL drivers available for users that have upgraded to Windows 8.1. The new drivers are version 326.01 and fully support Windows 8.1. A full change log has not yet been posted, but the 326.01 WHQL release is likely very similar to the recent beta version, with added certification to work with the latest update to Windows 8.

NVIDIA The Way Its Meant To Be Played Logo.jpg

The new 326.01 drivers are available via Windows Update or from the NVIDIA website. Supported GPUs include both desktop and notebook models from the 8000-series to the latest 700-series. Download links are below for the desktop and notebook drivers, depending on whether you are running the 32-bit or 64-bit version of Windows 8.1.

Desktop GPUs:

Notebook GPUs:

Source: NVIDIA

NVIDIA launches the GTX 760 @ $250; let the price wars begin again!

Subject: Graphics Cards | June 25, 2013 - 01:28 PM |
Tagged: geforce, GK104, gtx 760, nvidia, msi, MSI N760 TF 2GD5/OC

To start off with the good news: the GTX 760 is now available between $250 and $260 for the MSI model that [H]ard|OCP reviewed.  This is no paper launch, nor another $400+ card for you to dream about, but a solid performing card at a decent price.  Power is provided by an 8-pin and a 6-pin PCIe power connector, perhaps a little more than the card needs but perfect for overclockers who want the extra juice.  Performance wise, the card trumps the GTX 660 Ti and matches the GTX 670 and HD 7950 Boost in almost every test, for a good $50-75 less.  Even better, sites testing Frame Rating and SLI performance saw great scaling in real performance. 

Read on to get the whole picture from [H]ard|OCP.

H_gtx760.jpg

"Today NVIDIA is launching the GeForce GTX 760. The GeForce GTX 760 will be replacing a video card and offering what use to be high-end memory performance, at a mainstream price. We will evaluate a retail MSI N760 TF 2GD5/OC video card with comparisons to find out whether or not this is a true value."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP
Author:
Manufacturer: NVIDIA

Getting even more life from GK104

Have you guys heard about this new GPU from NVIDIA?  It’s called GK104 and it turns out that the damn thing has found its way into yet another graphics card this year – the new GeForce GTX 760.  Yup, you read that right: what NVIDIA is saying is the last update to the GeForce lineup through Fall 2013 is going to be based on the same GK104 design that we have previously discussed in reviews of the GTX 680, GTX 670, GTX 660 Ti, GTX 690 and more recently, the GTX 770. This isn’t a bad thing though!  GK104 has done a fantastic job in every field and market segment that NVIDIA has tossed it into, with solid performance and even better performance per watt than the competition.  It does mean however that talking up the architecture is kind of mind numbing at this point…

block.jpg

If you are curious about the Kepler graphics architecture and the GK104 in particular, I’m not going to stop you from going back and reading over my initial review of the GTX 680 from January of 2012.  The new GTX 760 takes the same GPU, adds a new and improved version of GPU Boost (the same we saw in the GTX 770) and trims the specifications a bit to enable NVIDIA to hit a new price point.  The GTX 760 will be replacing the GTX 660 Ti – that card will be falling into the ether but the GTX 660 will remain, as will everything below it including the GTX 650 Ti Boost, 650 Ti and plain old 650.  The GTX 670 went the way of the dodo with the release of the GTX 770.

01.jpg

Even though the GTX 690 isn't on this list, NVIDIA says it isn't EOL

As for the GeForce GTX 760 it will ship with 1152 CUDA cores running at a base clock of 980 MHz and a typical boost clock of 1033 MHz.  The memory speed remains at 6.0 GHz on a 256-bit memory bus and you can expect to find both 2GB and 4GB frame buffer options from retail partners upon launch.  The 1152 CUDA cores are broken up over 6 SMX units and that means you’ll see some parts with 3 GPCs and others with 4 – NVIDIA claims any performance delta between them will be negligible. 

Continue reading our review of the NVIDIA GeForce GTX 760 2GB Graphics Card!!

Frame Rating: AMD plans driver release to address frame pacing for July 31st

Subject: Graphics Cards | June 20, 2013 - 04:05 PM |
Tagged: radeon, nvidia, geforce, frame rating, fcat, crossfire, amd

Well, the date has been set.  AMD publicly stated on its @AMDRadeon Twitter account that a new version of the prototype driver we originally previewed with the release of the Radeon HD 7990 in April will be released to the public on July 31st.  For a problem that many in the industry didn't think existed.  

 

 

Since that April release AMD has been very quiet about its driver changes and actually has refused to send me updated prototypes over the spring.  Either they have it figured out or they are worried they haven't - but it looks like we'll find out at the end of next month and I feel pretty confident that the team will be able to address the issues we brought to light.

For those of you that might have missed the discussion, our series of Frame Rating stories will tell you all about the issues with frame pacing and stutter in regards to AMD's CrossFire multi-GPU technology. 

AMD gave the media a prototype driver in April to test with the Radeon HD 7990, a card that depends on CrossFire to work correctly, and the improvements were pretty drastic.

BF3_2560x1440_PLOT_0.png

So what can we expect on July 31st?  A driver that will give users the option to disable or enable the frame pacing technology they are developing - though I am still of the mindset that disabling is never advantageous.  More to come in the next 30 days!

Source: Twitter

Podcast #256 - Mobile Frame Rating, NVIDIA licensing Kepler, Xbox One DRM and more!

Subject: General Tech | June 20, 2013 - 02:03 PM |
Tagged: video, podcast, 780m, frame rating, nvidia, kepler, xbox one, Adobe, CC, opencl

PC Perspective Podcast #256 - 06/20/2013

Join us this week as we discuss Mobile Frame Rating, NVIDIA licensing Kepler, Xbox One DRM and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Allyn Malventano and Morry Teitelman

Program length: 1:33:43

  1. Week in Review:
  2. News items of interest:
    1. 0:43:30 Ryan's summary of E3
      1. Oculus 1080p, Razer Blade, Monoprice, SHIELD
  3. 1:22:00 Hardware/Software Picks of the Week:
    1. Allyn: Swiss-Tech Keychain Tools
  4. 1-888-38-PCPER or podcast@pcper.com
  5. Closing/outro

Rumored NVIDIA GTX 760 Specifications Leaked: The First Mid-Range 700-Series GPU Option

Subject: Graphics Cards | June 20, 2013 - 12:28 PM |
Tagged: nvidia, kepler, gtx 760, GK104, gk-104, gaming

There have been rumors of a new mid-range Kepler-based graphics card coming that will be the next entry in the GTX 700-series. This new GPU is rumored to be called the GeForce GTX 760. If the specifications are true, the card will fit between the existing GTX 660 and GTX 660 Ti graphics cards as far as hardware specifications and pricing. While it will sit in the GTX 700-series, it will not have the faster 7 Gbps memory clockspeed of the other 700-series cards.

As far as specifications, Videocardz claims to have the final specifications list in a recent news post. The GTX 760 is rumored to be the latest graphics card to use NVIDIA's GK-104 "Kepler" GPU. The GTX 760 will have some units disabled for a GPU with 1,152 CUDA cores, 96 Texture Mapping Units (TMUs), and 32 Raster Operations Processors (ROPs). The GPU supports NVIDIA's latest GPU Boost 2.0 technology, which will automatically ratchet up the Boost clockspeed so long as temperature allows. It has a base clockspeed of 980 MHz and a boost clockspeed of 1,033 MHz.
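The temperature-driven boost behavior can be modeled as a simple control loop.  To be clear, this is a toy illustration only: the 13 MHz bin size, the step logic, and the clock cap below are our own invented stand-ins, not NVIDIA's actual GPU Boost 2.0 algorithm.  The one real detail reflected here is that the clock ratchets up while the GPU stays under its temperature target and backs off once it crosses it.

```python
BASE_MHZ = 980        # rumored GTX 760 base clock
BIN_MHZ = 13          # hypothetical boost bin size
TEMP_TARGET_C = 80    # hypothetical temperature target

def next_clock(current_mhz, temp_c, max_boost_mhz=1150):
    """One step of a toy temperature-capped boost loop."""
    if temp_c < TEMP_TARGET_C and current_mhz + BIN_MHZ <= max_boost_mhz:
        return current_mhz + BIN_MHZ   # thermal headroom: step up one bin
    if temp_c >= TEMP_TARGET_C and current_mhz - BIN_MHZ >= BASE_MHZ:
        return current_mhz - BIN_MHZ   # over target: back off, never below base
    return current_mhz                  # otherwise hold the current clock
```

Run repeatedly against a temperature reading, a loop like this settles at whatever clock the cooling can sustain, which is why boost behavior varies from card to card and cooler to cooler.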

GTX 760 graphics cards will allegedly come in both 2GB and 4GB GDDR5 memory flavors. NVIDIA is clocking the memory at 6 Gbps (1502 MHz), which makes it the first 700-series part to not take advantage of faster memory chips. However, there is a bit of a saving grace, as NVIDIA has moved to a 256-bit memory bus. This still gives the card a respectable bump in memory bandwidth: 192 GB/s on the GTX 760 versus 144.2 GB/s on the GTX 660 and GTX 660 Ti.

Compared to the existing mid-range 600-series cards, the GTX 760 has base and boost GPU clockspeeds equal to the GTX 660 (and faster than the GTX 660 Ti). Memory clockspeed is also unchanged on the new card, though it has a wider memory bus. The GTX 760 has 192 more CUDA cores than the GTX 660, but 192 fewer CUDA cores than the GTX 660 Ti. The TMU count also sits evenly between the two 600-series cards, but the GTX 760 does have 8 more ROPs enabled than both the 660 and 660 Ti.

Graphics cards with the upcoming GTX 760 GPU will be powered by two 6-pin PCI-E power connectors, and it has a 170W TDP. That power consumption puts the card between the 150W GTX 660 Ti and the higher-end 230W GTX 770. It appears that the card will not come with the high-end stock metallic cooler used in the other 700-series cards, though the various AIBs are likely to fit the GPU with their own custom aftermarket coolers. Video outputs on the cards will include DVI-I, DVI-D, HDMI, and DisplayPort.

The chart below compares the specifications between the GTX 660, GTX 660 Ti, GTX 770, and the rumored GTX 760.

                 GTX 760     GTX 660     GTX 660 Ti   GTX 770
CUDA Cores       1,152       960         1,344        1,536
TMUs             96          80          112          128
ROPs             32          24          24           32
GPU Base         980 MHz     980 MHz     915 MHz      1046 MHz
GPU Boost        1033 MHz    1033 MHz    980 MHz      1085 MHz
Memory Bus       256-bit     192-bit     192-bit      256-bit
Memory Clock     1502 MHz    1502 MHz    1502 MHz     1752 MHz
Bandwidth        192 GB/s    144.2 GB/s  144.2 GB/s   224 GB/s
TDP              170 W       140 W       150 W        230 W
Architecture     GK-104      GK-106      GK-104       GK-104
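The bandwidth row in the chart above falls directly out of bus width and effective memory data rate: bytes per second is simply the bus width in bytes multiplied by transfers per second.  A quick sketch of the arithmetic (the function name is ours; note the published 144.2 GB/s figure for the 660/660 Ti comes from the exact 6008 MT/s effective rate rather than an even 6.0 Gbps):

```python
def bandwidth_gbs(bus_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: (bus width in bytes) x (effective data rate)."""
    return bus_bits / 8 * data_rate_gbps

# GTX 760: 256-bit bus at 6.0 Gbps effective -> 192 GB/s
print(bandwidth_gbs(256, 6.0))
# GTX 660 / 660 Ti: 192-bit bus at ~6.0 Gbps effective -> ~144 GB/s
print(bandwidth_gbs(192, 6.0))
```

This is why the wider 256-bit bus lets the GTX 760 pull ahead of the 660 Ti in bandwidth despite using the same memory chips.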

The card is supposedly going to be released on June 25th for around $300. It will compete with AMD's Radeon HD 7950 with Boost. Further, the card will be an alternative to NVIDIA's own GTX 660 Ti and an upgrade for gamers still running GTX 560 cards based on the company's older Fermi GPU.

Source: Videocardz

NVIDIA Enters the Licensing World: My Quick Analysis

Subject: General Tech | June 19, 2013 - 09:51 PM |
Tagged: Volta, nvidia, maxwell, licensing, kepler, Denver, Blogs, arm

Yesterday we all saw the blog piece from NVIDIA that stated that they were going to start licensing their IP to interested third parties.  Obviously, there was a lot of discussion about this particular move.  Some were in favor, some were opposed, and others yet thought that NVIDIA is now simply roadkill.  I believe that it is an interesting move, but we are not yet sure of the exact details or the repercussions of such a decision on NVIDIA’s part.

The biggest bombshell of the entire post was that NVIDIA would be licensing out their latest architecture to interested clients.  The Kepler architecture powers the very latest GTX 700 series of cards and at the top end it is considered one of the fastest and most efficient architectures out there.  Seemingly, there is a price for this though.  Time to dig a little deeper.

keplerdieshot.jpg

Kepler will be the first technology licensed to third party manufacturers.  We will not see full GPUs; the architecture will only be integrated into mobile products.

The very latest Tegra parts from NVIDIA do not feature the Kepler architecture for the graphics portion.  Instead, the units featured in Tegra can almost be described as GeForce 7000 series parts.  The computational units are split between pixel shaders and vertex shaders.  They support at most D3D feature level 9_3 and OpenGL ES 2.0.  This is a far cry from a unified shader architecture with support for the latest D3D 11 and OpenGL ES 3.0 specifications.  Other mobile units feature the latest Mali and Adreno series of graphics units, which are unified and support DX11 and OpenGL ES 3.0.

So why exactly do the latest Tegra parts not share the Kepler architecture?  Hard to say.  It could be a variety of factors, including time to market, available engineering teams, and simulations which could dictate whether power and performance are better served by a less complex unit.  Kepler is not simple.  A Kepler unit that occupies the same die space could potentially consume more power at any given workload, or conversely it could perform poorly given the same power envelope.

We can look at the desktop side of this argument for some kind of proof.  At the top end Kepler is a champ.  The GTX 680/770 has outstanding performance and consumes far less power than the competition from AMD.  When we move down a notch to the GTX 660 Ti/HD 7800 series of cards, we see much greater parity in performance and power consumption.  Comparing the HD 7790 to the 650 Ti Boost, we see the Boost part delivering slightly better performance while consuming significantly more power.  Then we move down to the 650 and 650 Ti, and these parts do not consume any more power than the competing AMD parts, but they also perform much more poorly.  I know these are some pretty hefty generalizations, and the engineers at NVIDIA could very effectively port Kepler over to mobile applications without significant performance or power penalties.  But so far, we have not seen this work.

Power, performance, and die area aside, there is also another issue to factor in.  NVIDIA just announced that they are doing this.  We have no idea how long this effort has been going on, but it is very likely that it has only been worked on for the past six months.  In that time NVIDIA needs to hammer out how they are going to license the technology, how much manpower they must provide licensees to get those parts up and running, and what kind of fees they are going to charge.  There is a lot of work going on there and this is not a simple undertaking.

So let us assume that some three months ago an interested partner such as Rockchip or Samsung came knocking on NVIDIA’s door.  They work out the licensing agreements, which takes several months.  Then we start to see the transfer of technology between the companies.  Obviously Samsung and Rockchip are not going to apply this graphics architecture to currently shipping products, but will instead bundle it in with a next generation ARM based design.  These designs are not spun out overnight.  For example, the 64 bit ARMv8 designs have been finalized for around a year, and we do not expect to see initial parts shipping until late 1H 2014.  So any partner that decides to utilize NVIDIA’s Kepler architecture for such an application will not see that part released until 1H 2015 at the very earliest.

project-shield.jpg

SHIELD is still based on a GPU possessing separate pixel and vertex shaders.  DX11 and OpenGL ES 3.0?  Nope!

If someone decides to license this technology from NVIDIA, it will pose little competitive concern for the company.  The next generation of NVIDIA graphics will already be out by that time, and we could very well be approaching the next iteration on the desktop side.  NVIDIA plans on releasing a Kepler based mobile unit in 2014 (Logan), which would be a full year in advance of any competing product.  In 2015 NVIDIA is planning on releasing an ARM product based on the Denver CPU and Maxwell GPU.  So we can easily see that NVIDIA will only be licensing out an older generation product, so it will not face direct competition when it comes to GPUs.  NVIDIA obviously is hoping that their GPU tech will still be a step ahead of that of ARM (Mali), Qualcomm (Adreno), and Imagination Technologies (PowerVR).

This is an easy and relatively pain-free way to test the waters that ARM, Imagination Technologies, and AMD are already treading.  ARM licenses only IP and has shown the world that a company can not only succeed at this model, but thrive.  Imagination Tech used to produce their own chips much like NVIDIA does, but they changed direction and continue to be profitable.  AMD recently opened up about their semi-custom design group that will design specific products for customers and then license those designs out.  I do not think this is a desperation move by NVIDIA, but it certainly is one that probably is a little late in coming.  The mobile market is exploding, and we are approaching a time where nearly every electricity-based item will have some kind of logic included in it; billions of chips a year will be sold.  NVIDIA obviously wants a piece of that market.  Even a small piece of “billions” is going to be significant to the bottom line.

Source: NVIDIA