NVIDIA Responds to GTX 970 3.5GB Memory Issue

Subject: Graphics Cards | January 24, 2015 - 11:51 AM |
Tagged: nvidia, maxwell, GTX 970, GM204, 3.5gb memory

UPDATE 1/28/15 @ 10:25am ET: NVIDIA has posted in its official GeForce.com forums that it is working on a driver update to help alleviate memory performance issues in the GTX 970 and that it will "help out" those users looking for a refund or exchange.

UPDATE 1/26/15 @ 1:00pm ET: We have posted a much more detailed analysis of the GTX 970 memory system and what is causing the unusual memory divisions. Check it out right here!

UPDATE 1/26/15 @ 12:10am ET: I now have a lot more information on the technical details of the architecture that cause this issue and more information from NVIDIA to explain it. I spoke with SVP of GPU Engineering Jonah Alben on Sunday night to really dive into the questions everyone had. Expect an update here on this page at 10am PT / 1pm ET or so. Bookmark and check back!

UPDATE 1/24/15 @ 11:25pm ET: Apparently there is some concern online that the statement below is not legitimate. I can assure you that the information did come from NVIDIA, though it is not attributable to any specific person - the message was sent through a couple of different PR people and is the result of meetings and multiple NVIDIA employees' input. It is really a message from the company, not any one individual. I have had several 10-20 minute phone calls with NVIDIA about this issue and this statement on Saturday alone, so I know that the information wasn't from a spoofed email, etc. Also, this statement was posted by an employee moderator on the GeForce.com forums about 6 hours ago, further proving that the statement is directly from NVIDIA. I hope this clears up any concerns around the validity of the below information!

Over the past couple of weeks, users of GeForce GTX 970 cards have noticed and started researching a problem with memory allocation in memory-heavy gaming. Essentially, gamers noticed that the GTX 970, with its 4GB of onboard graphics memory, was only ever accessing 3.5GB of that memory. When it did attempt to access the final 500MB, performance seemed to drop dramatically. What started as simply a forum discussion blew up into news being reported at tech and gaming sites across the web.


Image source: Lazygamer.net

NVIDIA has finally responded to the widespread online complaints about GeForce GTX 970 cards only utilizing 3.5GB of their 4GB frame buffer. From the horse's mouth:

The GeForce GTX 970 is equipped with 4GB of dedicated graphics memory.  However the 970 has a different configuration of SMs than the 980, and fewer crossbar resources to the memory system. To optimally manage memory traffic in this configuration, we segment graphics memory into a 3.5GB section and a 0.5GB section.  The GPU has higher priority access to the 3.5GB section.  When a game needs less than 3.5GB of video memory per draw command then it will only access the first partition, and 3rd party applications that measure memory usage will report 3.5GB of memory in use on GTX 970, but may report more for GTX 980 if there is more memory used by other commands.  When a game requires more than 3.5GB of memory then we use both segments.
We understand there have been some questions about how the GTX 970 will perform when it accesses the 0.5GB memory segment.  The best way to test that is to look at game performance.  Compare a GTX 980 to a 970 on a game that uses less than 3.5GB.  Then turn up the settings so the game needs more than 3.5GB and compare 980 and 970 performance again.
Here’s an example of some performance data:

| Game / Setting | GTX 980 | GTX 970 |
| --- | --- | --- |
| Shadow of Mordor: <3.5GB setting = 2688x1512 Very High | 72 FPS | 60 FPS |
| Shadow of Mordor: >3.5GB setting = 3456x1944 | 55 FPS (-24%) | 45 FPS (-25%) |
| Battlefield 4: <3.5GB setting = 3840x2160 2xMSAA | 36 FPS | 30 FPS |
| Battlefield 4: >3.5GB setting = 3840x2160 135% res | 19 FPS (-47%) | 15 FPS (-50%) |
| Call of Duty: Advanced Warfare: <3.5GB setting = 3840x2160 FSMAA T2x, Supersampling off | 82 FPS | 71 FPS |
| Call of Duty: Advanced Warfare: >3.5GB setting = 3840x2160 FSMAA T2x, Supersampling on | 48 FPS (-41%) | 40 FPS (-44%) |

On Shadow of Mordor, performance drops about 24% on GTX 980 and 25% on GTX 970, a 1% difference.  On Battlefield 4, the drop is 47% on GTX 980 and 50% on GTX 970, a 3% difference.  On CoD: AW, the drop is 41% on GTX 980 and 44% on GTX 970, a 3% difference.  As you can see, there is very little change in the performance of the GTX 970 relative to GTX 980 on these games when it is using the 0.5GB segment.
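To make the arithmetic behind those quoted percentages explicit, here is a quick sketch - our own illustration, not NVIDIA's - that recomputes the drops from the FPS figures in the table above. The per-game differences fall out of rounding each drop to a whole percent.

```python
# Recompute NVIDIA's quoted performance drops from the raw FPS figures above.
# Each entry: (FPS at the <3.5GB setting, FPS at the >3.5GB setting).
data = {
    "Shadow of Mordor":      {"GTX 980": (72, 55), "GTX 970": (60, 45)},
    "Battlefield 4":         {"GTX 980": (36, 19), "GTX 970": (30, 15)},
    "CoD: Advanced Warfare": {"GTX 980": (82, 48), "GTX 970": (71, 40)},
}

for game, gpus in data.items():
    drops = {gpu: round((light - heavy) / light * 100)
             for gpu, (light, heavy) in gpus.items()}
    delta = abs(drops["GTX 980"] - drops["GTX 970"])
    print(f"{game}: 980 -{drops['GTX 980']}%, 970 -{drops['GTX 970']}%, "
          f"difference {delta} point(s)")
```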

So it would appear that the severing of a trio of SMMs to make the GTX 970 different from the GTX 980 was the root cause of the issue. I'm not sure if this is something that we have seen before with NVIDIA GPUs that are cut down in the same way, but I have asked NVIDIA for clarification on that. At first the ratios seemed to fit: 500MB is 1/8th of the 4GB total memory capacity, and 2 SMMs would be 1/8th of GM204's 16-SMM total - but the GTX 970 disables 3 SMMs, not 2. (Edit: The ratios in fact do NOT match up...odd.)
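For what it's worth, the mismatch that the edit note flags is easy to verify: the hobbled memory is 1/8th of the total, but three disabled SMMs out of GM204's sixteen is 3/16ths. A trivial check:

```python
# Compare the disabled-memory ratio to the disabled-SMM ratio (GTX 970 / GM204).
from fractions import Fraction

memory_ratio = Fraction(512, 4096)  # 0.5GB slow segment out of 4GB total -> 1/8
smm_ratio = Fraction(3, 16)         # 3 of GM204's 16 SMMs are disabled -> 3/16

print(memory_ratio, smm_ratio, memory_ratio == smm_ratio)  # 1/8 3/16 False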


The full GM204 GPU; the GTX 970's cut-down configuration of this chip is at the root of the memory issue.

Another theory presented itself as well: is this possibly the reason we do not have a GTX 960 Ti yet? If the patterns of previous generations were followed, a GTX 960 Ti would be a GM204 GPU with fewer cores enabled and additional SMs disconnected to hit a lower price point. If this memory issue were even more substantial there, creating larger differentiated "pools" of memory, it could be a problem for performance or driver development. To be clear, we are just guessing on this one, and it may not occur at all. Again, I've asked NVIDIA for some technical clarification.

Requests for information aside, we may never know for sure if this is a bug with the GM204 ASIC or a predetermined design characteristic.

The question remains: does NVIDIA's response appease GTX 970 owners? After all, this memory behavior is just one part of a GPU's story, and performance testing and analysis already incorporate its effects. Some users will still likely claim a "bait and switch," but do the benchmarks above, as well as our own results at 4K, make it a less significant issue?

Our own Josh Walrath offers this analysis:

A few days ago, when we were presented with evidence of the 970 not fully utilizing all 4 GB of memory, I theorized that it had to do with the reduction of SMM units. It makes sense from an efficiency standpoint to perhaps "hard code" memory addresses for each SMM. The thought behind that would be that 4 GB of memory is a huge amount for a video card, and the potential performance gains of a more flexible system would be pretty minimal.

I believe that the memory controller is working as intended and that this is not a bug. When designing a large GPU, there will invariably be compromises made. From all indications NVIDIA decided to save time, die size, and power by simplifying the memory controller and crossbar setup. These things have a direct impact on time to market and power efficiency. NVIDIA probably figured that a couple of percentage points of lost performance was outweighed by the added complexity, power consumption, and engineering resources it would have taken to gain those points back.
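Josh's "hard code" theory can be pictured as a simple two-pool allocation policy. The sketch below is purely illustrative - it is not NVIDIA's driver logic, just a model of the behavior the statement describes, where allocations prefer the 3.5GB high-priority segment and spill into the 0.5GB segment only under pressure.

```python
# Illustrative model only -- NOT NVIDIA's actual driver logic. It mimics the
# behavior described in the statement: prefer the 3.5GB high-priority segment,
# and touch the 0.5GB segment only when the fast segment cannot satisfy a request.
class SegmentedVRAM:
    def __init__(self, fast_mb=3584, slow_mb=512):
        self.fast_free = fast_mb
        self.slow_free = slow_mb

    def allocate(self, size_mb):
        """Return which segment services an allocation of size_mb."""
        if size_mb <= self.fast_free:
            self.fast_free -= size_mb
            return "fast 3.5GB segment"
        if size_mb <= self.slow_free:
            self.slow_free -= size_mb
            return "slow 0.5GB segment"
        raise MemoryError("out of VRAM")

vram = SegmentedVRAM()
print(vram.allocate(3000))  # fast 3.5GB segment
print(vram.allocate(500))   # still fits in the fast segment (84MB then remains)
print(vram.allocate(300))   # spills into the slow 0.5GB segment
```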

The Latest NVIDIA GeForce Drivers Are Here: Version 347.25 adds GTX 960 Support, MFAA to Most Games

Subject: Graphics Cards | January 23, 2015 - 11:09 PM |
Tagged: nvidia, gtx 960, graphics drivers, graphics cards, GeForce 347.25, geforce, game ready, dying light

With the release of the GTX 960 yesterday, NVIDIA also introduced a new WHQL version of the GeForce graphics driver: 347.25.


NVIDIA states that the new driver adds "performance optimizations, SLI profiles, expanded Multi-Frame Sampled Anti-Aliasing support, and support for the new GeForce GTX 960".

While support for the newly released GPU goes without saying, the expanded MFAA support will bring better anti-aliasing performance to many existing games, as “MFAA support is extended to nearly every DX10 and DX11 title”. The release notes list three exceptions, as “Dead Rising 3, Dragon Age 2, and Max Payne 3 are incompatible with MFAA”.

347.25 also brings additional SLI profiles to add support for five new games, and a DirectX 11 SLI profile for one more:

SLI profiles added

  • Black Desert
  • Lara Croft and the Temple of Osiris
  • Nosgoth
  • Zhu Xian Shi Jie
  • The Talos Principle

DirectX 11 SLI profile added

  • Final Fantasy XIV: A Realm Reborn

The update is also the Game Ready Driver for Dying Light, a zombie action/survival game set to debut on January 27.


Much more information is available in the release notes on the driver download page, and be sure to check out Ryan’s chat with Tom Petersen from the live stream for a lot more information about this driver and the new GTX 960 graphics card.

Source: NVIDIA

DirectX 12 Preview in New Windows 10 Build. No Drivers Yet.

Subject: General Tech, Graphics Cards | January 23, 2015 - 07:11 PM |
Tagged: windows 10, microsoft, dx12, DirectX 12, DirectX

Microsoft has added DirectX 12 to the latest Windows 10 Technical Preview, which was released today. Until today, DXDIAG reported DirectX 11 in the Windows 10 Technical Preview. At the moment, no drivers or software have been released for it, and the SDK is also nowhere to be found. Really, all this means is that one barrier has been lifted, leaving the burden on hardware and software partners (except for releasing the SDK, which is still Microsoft's responsibility).


No one needs to know how old my motherboard is...

Note: I have already experienced some issues with Build 9926. Within a half hour of using it, I suffered an instant power-down. There was not even enough time for a bluescreen. When it came back, my Intel GPU (which worked for a few minutes after the update) refused to be activated, along with the monitor it is attached to. My point? Not for production machines.

Update: Looks like a stick of RAM (or some other hardware) blew, coincidentally, about 30 minutes after the update finished, while the computer was running, which also confused my UEFI settings. I haven't gotten around to much troubleshooting, but it seems like a weirdly-timed, abrupt hardware failure (the BIOS is only reporting half of the RAM installed, the iGPU is "enabled" but without RAM associated to it, etc.).

The interesting part, to me, is how Microsoft pushed DX12 into this release without, you know, telling anyone. It is not on any changelog that I can see, and it was not mentioned anywhere in the briefing as potentially being in an upcoming preview build. Before the keynote, I had a theory that it would be included but, after the announcement, figured that it might be pushed back to GDC or BUILD (though I kept an open mind). The only evidence that it might come this month was an editorial on Forbes that referenced a conversation with Futuremark, who hoped to release an update to 3DMark when Microsoft released the new build. I could not find anything else, so I didn't report on it -- you would think that there would be a second source for that somewhere. It turns out that the author might have been right.

The new Windows 10 Technical Preview, containing DirectX 12, is available now from the preview build panel. It looks like Futuremark (and maybe others) will soon release software for it, but no hardware vendor has released a driver... yet.

PCPer Live! GeForce GTX 960 Live Stream and Giveaway with Tom Petersen

Subject: General Tech, Graphics Cards | January 22, 2015 - 06:44 PM |
Tagged: video, tom petersen, nvidia, maxwell, live, gtx 960, gtx, GM206, geforce

UPDATE 2: If you missed the live stream you missed the prizes! But you can still watch the replay to get all the information and Q&A that went along with it as we discuss the GTX 960 and many more topics from the NVIDIA universe.

UPDATE (1/22): Well, the secret is out. Today's discussion will be about the new GeForce GTX 960, a $199 graphics card that takes power efficiency to a previously unseen level! If you haven't read my review of the card yet, you should do so first, but then be sure you are ready for today's live stream and giveaway - details below! And don't forget: if you have questions, please leave them in the comments!

Get yourself ready, it’s time for another GeForce GTX live stream hosted by PC Perspective’s Ryan Shrout and NVIDIA’s Tom Petersen. Though we can’t dive into the exact details of what topics are going to be covered, intelligent readers that keep an eye on the rumors on our site will likely be able to guess what is happening on January 22nd.


On hand to talk about the products and answer questions about technologies in the GeForce family - including GPUs, G-Sync, GameWorks, GeForce Experience and more - will be Tom Petersen, well known on the LAN party and events circuit. To spice things up, Tom has also worked with graphics card partners to bring along a sizeable swag pack to give away LIVE during the event, including new GTX graphics cards. LOTS of graphics cards.


NVIDIA GeForce GTX 960 Live Stream and Giveaway

10am PT / 1pm ET - January 22nd

PC Perspective Live! Page

Need a reminder? Join our live mailing list!

Here are some of the prizes we have lined up for those of you that join us for the live stream:

  • 3 x MSI GeForce GTX 960 Graphics Cards
  • 4 x EVGA GeForce GTX 960 Graphics Cards
  • 3 x ASUS GeForce GTX 960 Graphics Cards


Thanks to ASUS, EVGA and MSI for supporting the stream!

The event will take place Thursday, January 22nd at 1pm ET / 10am PT at http://www.pcper.com/live. There you’ll be able to catch the live video stream as well as use our chat room to interact with the audience, asking questions for me and Tom to answer live. To win the prizes you will have to be watching the live stream, with exact details of the methodology for handing out the goods coming at the time of the event.

Tom has a history of being both informative and entertaining and these live streaming events are always full of fun and technical information that you can get literally nowhere else. Previous streams have produced news as well – including statements on support for Adaptive Sync, release dates for displays and first-ever demos of triple display G-Sync functionality. You never know what’s going to happen or what will be said!

If you have questions, please leave them in the comments below and we'll look through them just before the start of the live stream. Of course you'll be able to tweet us questions @pcper, and we'll be keeping an eye on the IRC chat as well for more inquiries. What do you want to know and hear from Tom or me?

So join us! Set your calendar for this coming Thursday at 1pm ET / 10am PT and be here at PC Perspective to catch it. If you are a forgetful type of person, sign up for the PC Perspective Live mailing list that we use exclusively to notify users of upcoming live streaming events including these types of specials and our regular live podcast. I promise, no spam will be had!

Reading material to go along with the GTX 960 live stream

Subject: Graphics Cards | January 22, 2015 - 01:44 PM |
Tagged: video, nvidia, msi gaming 2g, maxwell, gtx 960, GM206, geforce

Did Ryan somehow miss a benchmark that is important to you?  Perhaps [H]ard|OCP's coverage of the MSI GeForce GTX 960 GAMING 2G will capture that certain something.  MSI runs their 960 at a base of 1216MHz with the boost clock hitting 1279MHz, slightly slower than the ASUS STRIX at 1291 MHz and 1317 MHz.  At the time this was posted the cards were available on Amazon for $210, though that is obviously going to change, so keep an eye out.  As [H] states in their conclusions, it is a good value but not the great value which the GTX 970 offered at release; check out their full review here or one of the many linked below.


"NVIDIA is today launching a GPU aimed at the "sweet spot" of the video card market. With an unexpectedly low MSRP, we find out if the new GeForce GTX 960 has what it takes to compete with the competition. The MSI GTX 960 GAMING reviewed here today is a retail card you will be able to purchase. No reference card in this review."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

Graphics Developers: Help Name Next Generation OpenGL

Subject: General Tech, Graphics Cards | January 16, 2015 - 10:37 PM |
Tagged: Khronos, opengl, OpenGL ES, webgl, OpenGL Next

The Khronos Group probably wants some advice from graphics developers because they ultimately want to market to them, as the future platform's success depends on their applications. If you develop games or other software (web browsers?) then you can give your feedback. If not, then it's probably best to leave responses to its target demographic.


As for the questions themselves, first and foremost they ask if you are (or were) an active software developer. From there, they ask you to score your opinion of OpenGL, OpenGL ES, and WebGL, then whether you value “Open” or “GL” in the title, whether you feel OpenGL, OpenGL ES, and WebGL are related APIs, and how you learn about the Khronos APIs. Finally, they directly ask you for name suggestions and any final commentary.

Now it is time to (metaphorically) read tea leaves. The survey seems written primarily to establish whether developers consider OpenGL, OpenGL ES, and WebGL as related libraries, and to gauge their overall interest in each. If you look at the way OpenGL ES has been developing, it has slowly brought mobile graphics into a subset of desktop GPU features. It is basically an on-ramp to full OpenGL.

We expect that, like Mantle and DirectX 12, the next OpenGL initiative will be designed around efficiently loading massively parallel processors, with a little bit of fixed-function hardware for common tasks, like rasterizing triangles into fragments. The name survey might be implying that the Next Generation OpenGL Initiative is intended to be a unified platform, for high-end, mobile, and even web. Again, modern graphics APIs are based on loading massively parallel processors as directly as possible.

If you are a graphics developer, the Khronos Group is asking for your feedback via their survey.

Report: NVIDIA GeForce GTX 960 Specs and Synthetic Benchmarks Leaked

Subject: Graphics Cards | January 14, 2015 - 10:49 AM |
Tagged: rumors, nvidia, leak, gtx 960, gpu, geforce

The GPU news and rumor site VideoCardz.com had yet another post about the GTX 960 yesterday, and this time the site claims they have most of the details about this unreleased GPU with new leaked photos from a forum on the Chinese site PCEVA.


Image credit: PCEVA via VideoCardz.com

The card is reportedly based on Maxwell GM206, a 1024 CUDA core part recently introduced with the GTX 965M. Clock speed was not listed, but alleged screenshots indicate the sample had a 1228 MHz core and 1291 MHz Boost clock. The site is calling this an overclock, but it's still likely that the core would have a faster clock speed than the GTX 970 and 980.


Image credit: PCEVA via VideoCardz.com

The card will reportedly feature 2GB of 128-bit GDDR5 memory, though 4GB variants will likely be available from the various vendors after launch (an important option considering the possibility of the new card natively supporting triple DisplayPort monitors). Performance will clearly be a step down from the initial GTX 900-series offerings, as NVIDIA has led with their more performant parts, but the 960 should still be a solid choice for 1080p gaming if these screenshots are real.

The specs as listed on the page at VideoCardz.com are as follows (they do not list clock speed):

  • 28nm GM206-300 GPU
  • 1024 CUDA cores
  • 64(?) TMUs
  • 32 ROPs
  • 1753 MHz memory
  • 128-bit memory bus
  • 2GB memory size
  • 112 GB/s memory bandwidth
  • DirectX 11.3/12
  • 120W TDP
  • 1x 6-pin power connector
  • 1x DVI-I, 1x HDMI 2.0, 3x DP
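Those last few line items hang together: GDDR5 moves four data transfers per cycle of the quoted command clock, so a quick back-of-the-envelope check (ours, not VideoCardz's) reproduces the 112 GB/s figure from the 1753 MHz clock and the 128-bit bus.

```python
# Sanity-check the leaked bandwidth figure from the clock and bus width.
memory_clock_hz = 1753e6   # quoted GDDR5 command clock
transfers_per_clock = 4    # GDDR5 delivers 4 data transfers per command clock
bus_bytes = 128 // 8       # 128-bit bus moves 16 bytes per transfer

bandwidth = memory_clock_hz * transfers_per_clock * bus_bytes / 1e9
print(f"{bandwidth:.1f} GB/s")  # ~112.2 GB/s, matching the listed 112 GB/s
```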


Image credit: PCEVA via VideoCardz.com

We await official word on pricing and availability for this unreleased GPU.

Source: VideoCardz

More NVIDIA GTX 960 Sightings: Galaxy's KFA2 GTX 960 Lineup Reportedly Pictured

Subject: Graphics Cards | January 13, 2015 - 02:28 PM |
Tagged: rumors, nvidia, multi monitor, mini-ITX GPU, leak, HDMI 2.0, gtx 960, gpu, geforce, DisplayPort

The crew at VideoCardz.com has been reporting some GTX 960 sightings lately, and today they've added no fewer than three new cards from KFA2, the "European premium brand" of Galaxy.


The reported reference design GTX 960 (VideoCardz.com)

Such reports are becoming more common, with the site posting photos that appear to be other vendors' versions of the new GPU here, here, and here. Of note with these new alleged photos of what appears to be a reference design board: no fewer than three DisplayPort outputs, as well as HDMI 2.0 and DVI:


Reported GTX 960 outputs (VideoCardz.com)

This would be big news for multi-monitor users, as it would provide potential support for three high-resolution DisplayPort monitors from a single card - in a strictly non-gaming environment, at least (unless you happen to enjoy the frame rates of an oil painting).


The reported mini-ITX GTX 960 (VideoCardz.com)

The other designs shown in the post include a mini-ITX form-factor design still sporting the triple DisplayPorts, HDMI and DVI, and a larger EXOC edition built on a custom PCB.


Reported EXOC GTX 960 (VideoCardz.com)

The EXOC edition apparently drops the multi-DisplayPort option in favor of a second DVI output, leaving just one DisplayPort along with the lone HDMI 2.0 output.

With the GTX 960 leaks coming in daily now, it seems likely that we will be hearing something official soon.

Source: VideoCardz

LinkedIn Posts Hint at Radeon R9 380X Features, Stacked Memory

Subject: Graphics Cards | January 13, 2015 - 12:22 PM |
Tagged: rumor, radeon, r9 380x, 380x

Spotted over at TechReport.com this morning and sourced from a post at 3dcenter.org, it appears that some additional information about the future Radeon R9 380X is starting to leak out through AMD employee LinkedIn pages.

Ilana Shternshain is an ASIC physical design engineer at AMD with more than 18 years of experience, 7-8 of them with AMD. Under the background section is the line "Backend engineer and team leader at Intel and AMD, responsible for taping out state of the art products like Intel Pentium Processor with MMX technology and AMD R9 290X and 380X GPUs." A bit further down is an experience listing of the PlayStation 4 APU as well as "AMD R9 380X GPUs (largest in “King of the hill” line of products)."

Interesting - though not entirely enlightening. More interesting were the details found on Linglan Zhang's LinkedIn page (since removed):

Developed the world’s first 300W 2.5D discrete GPU SOC using stacked die High Bandwidth Memory and silicon interposer.

Now we have something to work with! A 300 watt TDP would make the R9 380X more power hungry than the current R9 290X Hawaii GPU. High Bandwidth Memory likely implies memory located on the substrate of the GPU itself, similar to what exists on the Xbox One APU, though configurations could differ in considerable ways. A bit of research on the silicon interposer reveals it as an implementation method for 2.5D chips:


Source: SemiWiki.com

There are two classes of true 3D chips which are being developed today. The first is known as 2½D where a so-called silicon interposer is created. The interposer does not contain any active transistors, only interconnect (and perhaps decoupling capacitors), thus avoiding the issue of threshold shift mentioned above. The chips are attached to the interposer by flipping them so that the active chips do not require any TSVs to be created. True 3D chips have TSVs going through active chips and, in the future, have potential to be stacked several die high (first for low-power memories where the heat and power distribution issues are less critical).

An interposer would allow the GPU and stacked die memory to be built on different process technologies, for example, but could also make the chips more fragile during final assembly. Obviously there are a lot more questions than answers based on these rumors sourced from LinkedIn, but it's interesting to attempt to gauge where AMD is headed in its continued quest to take back market share from NVIDIA.

Source: 3dcenter.org

CES 2015: EVGA Shows Two New GTX 980 Cards

Subject: Graphics Cards, Shows and Expos | January 8, 2015 - 12:46 AM |
Tagged: video, maxwell, Kingpin, hydro copper, GTX 980, GM204, evga, classified, ces 2015, CES

EVGA posted up in its normal location at CES this year and had its entire lineup of goodies on display. We spotted a pair of new graphics cards too, including the GTX 980 Hydro Copper and the GTX 980 Classified Kingpin Edition.


Though we have seen EVGA water cooling on the GTX 980 already, the new GTX 980 Hydro Copper uses a self-contained water cooler, built by Asetek, rather than a complete GPU water block. The memory and power delivery are cooled by the rest of the original heatsink and blower fan, but because of the lowered GPU temperatures, the fan will nearly always spin at its lowest RPM.


Speaking of temperatures, EVGA is saying that GPU load temperatures will be in the 40-50C range, significantly lower than what you have with even the best air coolers on the GTX 980 today. As for users that already have GTX 980 cards, EVGA is planning to sell the water cooler individually so you can upgrade yourself. Pricing isn't set on this but it should be available sometime in February.


Fans of the EVGA Classified Kingpin series will be glad to know that the GTX 980 iteration is nearly ready, also available in February and also without a known MSRP.


EVGA has included an additional 6-pin power connector, rearranged the memory traces and layout for added memory frequency headroom, and includes a single-slot bracket for any users that eventually swap the impressive air cooler for a full-on water block.

Coverage of CES 2015 is brought to you by Logitech!


Follow all of our coverage of the show at http://pcper.com/ces!

NVIDIA Quietly Releases the GeForce GTX 965M Mobile GPU

Subject: Graphics Cards | January 7, 2015 - 10:51 AM |
Tagged: nvidia, notebook, mobile graphics, mobile gpu, GeForce 965M

With zero fanfare NVIDIA has released a new mobile graphics chip today, the GeForce GTX 965M.


Based on the 28nm Maxwell GM204 core and positioned just below the existing GTX 970M, the new GTX 965M has 1024 CUDA cores (compared to the 970M's 1280) and a narrower 128-bit memory interface (vs. 192-bit on the 970M). Its base clock is slightly faster at 944 MHz (plus unspecified Boost headroom).

Compared with the flagship GTX 980M, which boasts 1536 CUDA cores and 256-bit GDDR5, this new GTX 965M will be a significantly lower performer, but NVIDIA is marketing it towards 1080p mobile gaming. At a lower cost to OEMs, the 965M should help create some less expensive 1080p gaming notebooks as the new GPU is adopted.


The chip features proprietary NVIDIA Optimus and Battery Boost support, and is GameStream, ShadowPlay, and GameWorks ready.

Specs from NVIDIA:

  • CUDA Cores: 1024
  • Base Clock: 944 MHz + Boost
  • Memory Clock: 2500 MHz
  • Memory Interface: GDDR5
  • Memory Interface Width: 128-bit
  • Memory Bandwidth: 80 GB/sec
  • DirectX API: 12
  • OpenGL: 4.4
  • OpenCL: 1.1
  • Display Resolution: Up to 3840x2160

More information on this new mobile GPU can be found via the source link.

Source: NVIDIA

CES 2015: AMD Talks Technical about FreeSync Monitors

Subject: Graphics Cards, Displays, Shows and Expos | January 7, 2015 - 03:11 AM |
Tagged: video, radeon, monitor, g-sync, freesync, ces 2015, CES, amd

It finally happened - later than I had expected - but we got hands-on time with nearly-ready FreeSync monitors! That's right, AMD's alternative to G-Sync will bring variable refresh gaming technology to Radeon gamers later this quarter, and AMD had the monitors on hand to prove it. On display were an LG 34UM67 running at 2560x1080 on IPS technology, a Samsung UE590 with a 4K resolution and AHVA panel, and a BenQ XL2730Z 2560x1440 TN screen.


The three monitors sampled at the AMD booth showcase the wide array of units that will be available this year using FreeSync, possibly even this quarter. The LG 34UM67 uses the 21:9 aspect ratio that is growing in popularity, along with solid IPS panel technology and a 60 Hz top frequency. However, there is a new specification to be concerned with on FreeSync as well: minimum frequency. This is the refresh rate that the monitor needs to maintain to avoid artifacting and flickering that would be visible to the end user. For the LG monitor it was 40 Hz.


What happens below and above those limits differs from what NVIDIA has decided to do with G-Sync. For FreeSync (and the Adaptive Sync standard as a whole), when a game renders at a frame rate above or below this VRR window, the V-Sync setting is enforced. That means on a 60 Hz panel, if your game runs at 70 FPS, you have the option to enable or disable V-Sync: you can either force a 60 FPS top limit or allow 70 FPS with screen tearing. If your game runs under the 40 Hz bottom limit, say at 30 FPS, you get the same option: V-Sync on or V-Sync off. With it off you would get tearing but optimal input/display latency, while with it on you would reintroduce frame judder as the frame rate crosses between V-Sync steps.
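As an illustration of that decision logic (our sketch of the behavior described above, not AMD's actual implementation), the refresh handling for a 40-60 Hz panel like the LG 34UM67 can be modeled like this:

```python
# Sketch of the described Adaptive Sync behavior -- not AMD's implementation.
VRR_MIN_HZ = 40  # the LG 34UM67's reported minimum refresh
VRR_MAX_HZ = 60  # the panel's maximum refresh

def refresh_behavior(frame_rate, vsync_enabled):
    if VRR_MIN_HZ <= frame_rate <= VRR_MAX_HZ:
        # Inside the VRR window the refresh simply tracks the frame rate.
        return f"variable refresh at {frame_rate} Hz (no tearing, no judder)"
    # Outside the window the user's V-Sync preference is enforced.
    if vsync_enabled:
        return "V-Sync on: fixed refresh steps (judder possible)"
    return "V-Sync off: immediate flips (tearing, but minimal latency)"

print(refresh_behavior(50, vsync_enabled=True))   # in the window: VRR
print(refresh_behavior(70, vsync_enabled=False))  # above: tearing allowed
print(refresh_behavior(30, vsync_enabled=True))   # below: classic V-Sync
```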

There are potential pitfalls to this solution though; what happens when you cross into that top or bottom region can cause issues depending on the specific implementation. We'll be researching this very soon.


Notice this screen shows FreeSync Enabled and V-Sync Disabled, and we see a tear.

FreeSync monitors have the benefit of using industry-standard scalers, and that means they won't be limited to a single DisplayPort input. Expect to see a range of inputs including HDMI and DVI, though the VRR technology will only work over DisplayPort.

We have much more to learn and much more to experience with FreeSync, but we are eager to get one in the office for testing. I know, I know, it seems we say that quite often.

Coverage of CES 2015 is brought to you by Logitech!


Follow all of our coverage of the show at http://pcper.com/ces!

CES 2015: EVGA Teases New NVIDIA GeForce GTX 980 with AIO Liquid Cooling

Subject: Graphics Cards | January 6, 2015 - 11:30 PM |
Tagged: nvidia, liquid cooler, GTX 980, gpu cooler, gpu, evga, ces 2015, CES, AIO

EVGA has posted a photo on Twitter of a new GTX 980 with an integrated AIO liquid cooler.


The pic is captioned "GTX 980 HC AIO", which indicates that it will join the EVGA GTX 980 Hydro Copper (which carries an MSRP of $799) as a liquid-cooled option in their lineup. The big advantage here, however, is that AIO setup dangling off the back of the card. One free (120mm?) fan opening is all you'd need to be up and running without any extra work.

Of course, you could always buy yourself a suitcase full of AIO liquid-cooled GTX 980's for a cool $2999 if you don't want to wait for this EVGA option.


Triple SLI + AIO liquid cooling = suitcase?

We'll post news of this (seemingly) upcoming EVGA product once details are revealed.

Coverage of CES 2015 is brought to you by Logitech!


Follow all of our coverage of the show at http://pcper.com/ces!

Source: Twitter

Report: NVIDIA Maxwell GM206 Pictured - Leak Claims GTX 960 Core

Subject: Graphics Cards | January 6, 2015 - 09:44 AM |
Tagged: rumor, nvidia, leak, gtx 960, GM206, geforce

VideoCardz.com is reporting that they not only know the upcoming GTX 960 will use the GM206 core, but that they have a photo of the unreleased chip.


Why are reported leaks always slightly out of focus? (Credit: VideoCardz.com)

The chip pictured appears to be a GM206-300, which the site claims will be the exact variant in the GTX 960 when it is released. The post speculates that, based on the die size, we can expect between 8 and 10 SMMs, or 1024 - 1280 CUDA cores. They further claim that the GTX 960 will have a 128-bit memory bus and that reference cards will have a 2GB frame buffer (though naturally we can expect models with 4GB of memory after launch).


(Credit: VideoCardz.com)

The post goes on to show what appears to be a search result for an ASUS GTX 960 on their site, but if this existed it has since been taken down. More than likely a GTX 960 is in fact close at hand, and the reported specs (and now multiple claimed listings for the card) are not hard to fathom.

We will keep you updated on this alleged new GPU if more details emerge.

CES 2015: Gigabyte GTX 980 WaterForce 3-Way SLI Monster Spotted

Subject: Graphics Cards, Shows and Expos | January 5, 2015 - 07:15 PM |
Tagged: waterforce, GTX 980, gigabyte, ces 2015, CES, 3-way sli

Back in November, Gigabyte asked for all your money in exchange for a set of three GeForce GTX 980 cards, each running on its own self-contained water-cooling circuit. After finally seeing the GTX 980 WaterForce in person, I can tell you that it's big, it's expensive and it's damned impressive looking.


With a price tag of $2999, there is a significant markup over buying just a set of three retail GTX 980 cards, but this design is unique. Each GPU is individually cooled via a 120mm radiator and fan mounted inside a chassis that rests on top of your PC case. On the front you'll find temperature, fan speed and pump speed indicators, along with some knobs and buttons to adjust settings and targets.


Oh, and it ships inside of a suitcase that you can reuse for later travel. Ha! Think we can convince Gigabyte to send us one for testing?


Coverage of CES 2015 is brought to you by Logitech!


Follow all of our coverage of the show at http://pcper.com/ces!

ASUS updates their popular series with the GTX 980 STRIX DC II OC

Subject: Graphics Cards | January 5, 2015 - 05:31 PM |
Tagged: GTX 980 STRIX DirectCU II OC, strix, asus, nvidia, factory overclocked

ASUS' popular STRIX line was recently updated to include NVIDIA's top card, and now [H]ard|OCP has had a chance to benchmark this GTX 980 with custom quiet cooling.  The DirectCU II cooling system can operate at 0dB under all but the heaviest of loads, and the 10-phase power design means you can go beyond the small factory overclock that the card arrives with.  [H]ard|OCP took the card from a Boost Clock of 1279MHz to 1500MHz and the RAM from 7GHz to 7.9GHz, with noticeable performance improvements - part of why it received a Gold Award.  If the ~$130 price difference between this card and the R9 290X does not bother you, then it is a great choice for a new GPU.


"Today we delve into the ASUS GTX 980 STRIX DC II OC, which features custom cooling, 0dB fans and high overclocking potential. We'll experiment with this Maxwell GPU by overclocking it to the extreme. It will perform head to head against the ASUS ROG R9 290X MATRIX-P in today's most demanding games, including Far Cry 4."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

CES 2015: MSI Announces GTX 970 Gaming 100ME - 100 millionth GeForce GPU

Subject: Graphics Cards, Shows and Expos | January 4, 2015 - 03:56 PM |
Tagged: msi, GTX 970, gaming, ces 2015, CES, 100me

To celebrate the shipment of 100 million GeForce GPUs, MSI is launching a new revision of the GeForce GTX 970, the Gaming 100ME (millionth edition). The cooler is identical to that used in the GTX 970 Gaming 4G but replaces the red color scheme of the MSI Gaming brand with a green very close to NVIDIA's.


This will also ship with a "special gift" and will be a limited edition, much like the Golden Edition GTX 970 from late last year.


MSI had some other minor updates to its GPU line including the GTX 970 4GD5T OC with a cool looking black and white color scheme and an 8GB version of the Radeon R9 290X.

Coverage of CES 2015 is brought to you by Logitech!


Follow all of our coverage of the show at http://pcper.com/ces!

GPU Rumors: AMD Plans 20nm but NVIDIA Waits for 16nm

Subject: General Tech, Graphics Cards | December 28, 2014 - 09:47 PM |
Tagged: radeon, nvidia, gtx, geforce, amd

According to an anonymous source cited by WCCFTech, AMD is preparing a 20nm-based graphics architecture that is expected to release in April or May. Originally, the site predicted that the graphics devices, which they call the R9 300 series, would be available in February or March. The reason for this “delay” is massive demand for 20nm production.


The source also claims that NVIDIA will skip 20nm entirely and instead opt for 16nm when it becomes available (which is said to be mid or late 2016). The expectation is that NVIDIA will answer AMD's new graphics devices with a higher-end Maxwell device that is still at 28nm. Earlier rumors, based on a leaked SiSoftware entry, claim 3072 CUDA cores clocked between 1.1 GHz and 1.39 GHz. If true, this would give it between 6.75 and 8.54 TeraFLOPS of performance, the higher of which is right around the advertised performance of a GeForce Titan Z (only in a single compute device that does not require the distribution of work that SLI was created to automate).
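That FLOPS range follows from the standard peak-throughput formula for a GPU that retires one fused multiply-add (two FLOPs) per CUDA core per clock; a quick check of the rumored figures matches, give or take rounding:

```python
# Peak single-precision throughput = cores * clock * 2 FLOPs (one FMA per cycle).
cuda_cores = 3072
for clock_ghz in (1.1, 1.39):
    tflops = cuda_cores * clock_ghz * 2 / 1000
    print(f"{clock_ghz:.2f} GHz -> {tflops:.2f} TFLOPS")
# 1.10 GHz -> 6.76 TFLOPS; 1.39 GHz -> 8.54 TFLOPS
```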

Will this strategy work in NVIDIA's favor? I don't know. 28nm is a fairly stable process at this point, which will probably allow them to build chips that are bigger and more aggressively clocked. On the other hand, they pretty much need to rely upon bigger, more aggressively clocked chips to be competitive with AMD's slightly more advanced 20nm designs. Previous rumors also hint that AMD is looking at water-cooling for their reference card, which might place yet another handicap against NVIDIA, although cooling is not an area that NVIDIA struggles in.

Source: WCCFTech

NVIDIA GeForce 347.09 beta drivers have arrived

Subject: Graphics Cards | December 17, 2014 - 09:19 PM |
Tagged: geforce, nvidia, 347.09 beta

The 347.09 beta driver is out, which will help performance in Elite: Dangerous and Metal Gear Solid V: Ground Zeroes.  If you use GeForce Experience the drivers will install automatically; otherwise, head to the driver page to install them manually.  Project CARS should also benefit from this new beta, and you will be able to enable 3D on Alien: Isolation, Elite: Dangerous, Escape Dead Island, Far Cry 4 and Middle-earth: Shadow of Mordor.  NVIDIA's new incremental updates, called GeForce Game Ready drivers, will mean more frequent driver releases with fewer changes than we have become accustomed to, but they do benefit those playing the games they have been designed to improve.


As with the previous WHQL driver, GTX 980M SLI and GTX 970M SLI on notebooks do not function, and if you do plan on updating your gaming laptop you should disable SLI before installing the new driver.  You can catch up on all the changes in this PDF.

Source: NVIDIA

AMD Omega is no longer in Alpha

Subject: Graphics Cards | December 9, 2014 - 03:08 PM |
Tagged: amd, catalyst, driver, omega

With AMD's new leader and restructuring comes a new type of driver update.  The Omega driver is intended to provide a large number of new features as well as performance updates once a year.  It does not replace the current cycle of Beta and WHQL driver updates, and the next driver update will incorporate all of the changes from the Omega driver plus whatever new bug fixes or updates that driver is released to address.

Many sites, including The Tech Report, have had at least a small amount of time to test the new driver and have not seen much in the way of installation issues or, unfortunately, performance improvements on systems not using an AMD APU.  As more time for testing elapses and more reviews come out we may see improvements on low-end systems, but for now the higher-end machines show little to no improvement in raw FPS rates.  Keep your eyes peeled for an update once we have had time to test the driver's effect on frame pacing results, which are far more important than just increasing your FPS.

The main reason to be excited about this release is the long list of new features, starting with a DSR-like feature called Virtual Super Resolution, which allows you to render at resolutions beyond your monitor's native one - although for now 4K super resolution is limited to the R9 285, as it is the only AMD Tonga card on the market at the moment.  Along with the release of the Omega driver comes news about FreeSync displays, another feature enabled in the new driver, and their availability: we have a release date of January or February, with a 4K model arriving in March.

Check out the link to The Tech Report below to read the full list of new features that this driver brings, and don't forget to click on Ryan's article as well.


"AMD has introduced what may be its biggest graphics driver release ever, with more than 20 new features, 400 bug fixes, and some miscellaneous performance improvements."

Here are some more Graphics Card articles from around the web:
