So Long Adware, and Thanks for All the Fish!

Subject: Graphics Cards | March 1, 2015 - 07:30 AM |
Tagged: superfish, Lenovo, bloatware, adware

Obviously, this does not erase the controversy that Lenovo got themselves into, but it is certainly the correct response (if they act as they imply). Adware and bloatware are common on consumer PCs, making the slowest of devices even more sluggish as demos and sometimes straight-up advertisements claim their share of your resources. That does not even begin to touch the security issues that some of these hitchhikers drag in. Again, I refer you to the aforementioned controversy.

lenovo-do.png

In response, albeit a delayed one, Lenovo has announced that, by the launch of Windows 10, they will only pre-install the OS and “related software”. Lenovo classifies this related software as drivers, security software, Lenovo applications, and applications for “unique hardware” (ex: software for an embedded 3D camera).

It looks to be a great step, but I need to call out "security software". Windows 10 should ship with Microsoft's security applications in many regions, which raises the question of why a laptop vendor would include an alternative. If the problem is that people expect McAfee or Symantec, then advertise pre-loaded Microsoft anti-malware and keep it clean. Otherwise, it feels like keeping a single finger in the adware take-a-penny dish.

At least it is not as bad as trying to install McAfee every time you update Flash Player. I consider Adobe's tactic the greater of two evils on that one. I mean, unless Adobe just thinks that Flash Player is so insecure that you would be crazy to install it without a metaphorical guard watching over your shoulder.

And then of course we reach the divide between “saying” and “doing”. We will need to see Lenovo's actual Windows 10 devices to find out if they kept their word, and followed its implications to a tee.

Source: Lenovo

Imagination Launches PowerVR GT7900, "Super-GPU" Targeting Consoles

Subject: Graphics Cards, Mobile | February 26, 2015 - 02:15 PM |
Tagged: super-gpu, PowerVR, Imagination Technologies, gt7900

As a preview to announcements and releases being made at both Mobile World Congress (MWC) and the Game Developers Conference (GDC) next week, Imagination Technologies took the wraps off of a new graphics product it is calling a "super-GPU". The PowerVR GT7900 is the new flagship GPU in its Series7XT family, targeting a growing category called "affordable game consoles." Think of Android-powered set-top devices like the Ouya or Amazon's Fire TV.

gt7900-1.png

PowerVR breaks up its GPU designs into unified shading clusters (USCs), and the GT7900 has 16 of them for a total of 512 ALU cores. Imagination has previously posted a great overview of its USC architecture and how its designs compare to other GPUs on the market. Imagination claims that the GT7900 will offer "PC-class gaming experiences", though that is as ambiguous as the idea of the workload of a "console-level game." But with rated peak performance hitting over 800 GFLOPS in FP32 and 1.6 TFLOPS in FP16 (half-precision), this GPU does have significant theoretical capability.

                PowerVR GT7900              Tegra X1
Vendor          Imagination Technologies    NVIDIA
FP32 ALUs       512                         256
FP32 GFLOPS     800                         512
FP16 GFLOPS     1600                        1024
GPU Clock       800 MHz                     1000 MHz
Process Tech    16nm FinFET+                20nm TSMC
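
If you are wondering where those peak figures come from, they follow directly from the ALU count and clock, assuming one fused multiply-add (two FLOPs) per ALU per clock and double-rate FP16; a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the table above. Assumes each ALU retires one
# fused multiply-add (2 FLOPs) per clock and that FP16 runs at twice the FP32
# rate; these are the usual marketing assumptions, not measured numbers.
def peak_gflops(alus, clock_mhz, flops_per_clock=2):
    return alus * flops_per_clock * clock_mhz / 1000.0

print(peak_gflops(512, 800))       # GT7900 FP32: ~819 GFLOPS ("800+")
print(peak_gflops(512, 800) * 2)   # GT7900 FP16: ~1638 GFLOPS (~1.6 TFLOPS)
print(peak_gflops(256, 1000))      # Tegra X1 FP32: 512 GFLOPS
```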

Imagination also believes that PowerVR offers a larger portion of its peak performance for a longer period of time than the competition thanks to the tile-based deferred rendering (TBDR) approach that has been "refined over the years to deliver unmatched efficiency."

gt7900-2.png

The FP16 performance number listed above is useful as an extreme power-savings option, since half-precision compute operates in a much more efficient manner. A fair concern is how many applications, GPGPU or gaming, actually utilize the FP16 data type, but having support for it in the GT7900 at least allows developers to target it.
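
As a purely illustrative aside (NumPy on a CPU, nothing to do with PowerVR hardware), the appeal and the catch of half precision are both easy to demonstrate: the same data occupies half the storage and bandwidth, but you give up range and precision, which is exactly why developer uptake is the open question:

```python
import numpy as np

# Half precision halves storage (and the bandwidth needed to move the data)...
full = np.ones(1_000_000, dtype=np.float32)
half = full.astype(np.float16)
print(full.nbytes, half.nbytes)   # 4000000 vs 2000000 bytes

# ...but with only a 10-bit mantissa, FP16 cannot even represent every integer
# above 2048, so precision-sensitive work has to stay in FP32.
print(np.float16(2049.0))         # rounds to 2048.0
```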

Other key features of the GT7900 include support for OpenGL ES 3.1 + AEP (Android Extension Pack), hardware tessellation and ASTC LDR and HDR texture compression standards. The GPU also can run in a multi-domain virtualization mode that would allow multiple operating systems to run in parallel on a single platform.

gt7900-3.png

Imagination believes that this generation of PowerVR will "usher a new era of console-like gaming experiences" and will showcase a new demo at GDC called Dwarf Hall.

I'll be at GDC next week and have already set up a meeting with Imagination to talk about the GT7900, so I should have some hands-on impressions to report back with soon. I remain curious about the demand for these kinds of high-end "mobile" GPUs given how small the Android console market currently is. Imagination does claim that the GT7900 beats products with performance levels as high as the GeForce GT 730M discrete GPU - no small feat.

NVIDIA Faces Class Action Lawsuit for the GeForce GTX 970

Subject: Graphics Cards | February 23, 2015 - 04:12 PM |
Tagged: nvidia, geforce, GTX 970

So apparently NVIDIA and a single AIB partner, Gigabyte, are facing a class action lawsuit because of the GeForce GTX 970 4GB controversy. I am not sure why they singled out Gigabyte, but I guess that is the way things go in the legal world. Unlucky for them, and seemingly lucky for the rest.

nvidia-970-architecture.jpg

For those who are unaware, the controversy is based on NVIDIA claiming that the GeForce GTX 970 has 4GB of RAM, 64 ROPs, and 2048 KB of L2 cache. In actuality, it has 56 ROPs and 1792 KB of L2 cache. The main talking point is that the RAM is segmented into two partitions, one that is 3.5GB and another that is 0.5GB. All 4GB are present on the card, though, and accessible (unlike the disabled L2 cache and ROPs). Then again, I cannot see anything in that class action lawsuit's exhibits that claims an incorrect number of ROPs or amount of L2 cache.

Again, the benchmarks that you saw when the GeForce GTX 970 launched are still valid. Since the issue came up, Ryan has also tried various configurations of games in single- and multi-GPU systems to find conditions that would make the issue appear.

Source: Court Filing

Windows Update Installs GeForce 349.65 with WDDM 2.0

Subject: General Tech, Graphics Cards | February 21, 2015 - 04:23 PM |
Tagged: wddm 2.0, nvidia, geforce 349.65, geforce, dx12

Update 2: Outside sources have confirmed to PC Perspective that this driver contains DirectX 12 as well as WDDM 2.0. They also claim that Intel and AMD have DirectX 12 drivers available through Windows Update. After enabling the iGPU on my i7-4790K, the Intel HD 4600 received a driver update, which also reports as WDDM 2.0 in DXDIAG. I do not have a compatible AMD GPU to test against (just a couple of old Windows 7 laptops), but the source is probably right and some AMD GPUs will be updated to DX12 too.

So it turns out that if your motherboard dies during a Windows Update reboot, then you are going to be spending several hours reinstalling software and patches, but that is not important. What is interesting is the installed version number for NVIDIA's GeForce Drivers when Windows Update was finished with its patching: 349.65. These are not available on NVIDIA's website, and the Driver Model reports WDDM 2.0.

nvidia-34965-driver.png

It looks like Microsoft pushed out NVIDIA's DirectX 12 drivers through Windows Update. Update 1 Pt. 1: The "Runtime" reporting 11.0 is confusing, though; perhaps this is just DX11 with WDDM 2.0?
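
If you want to check what landed on your own machine without clicking through Device Manager, here is a minimal sketch (Windows only, stock tools; the report file name is just an example):

```python
import subprocess

# Query the installed display adapter and driver version via WMI; this is the
# same Windows-format version string Device Manager shows.
print(subprocess.run(
    ["wmic", "path", "Win32_VideoController", "get", "Name,DriverVersion"],
    capture_output=True, text=True).stdout)

# dxdiag can dump its full report, including the "Driver Model" (WDDM) line,
# to a text file. It runs in the background and can take a little while.
subprocess.run(["dxdiag", "/t", "dxdiag_report.txt"])
```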

nvidia-34965-dxdiag.png

I am hearing online that these drivers support the GeForce 600 series and later GPUs, and that there are later, non-public drivers available (such as 349.72 whose release notes were leaked online). NVIDIA has already announced that DirectX 12 will be supported on GeForce 400-series and later graphics cards, so Fermi drivers will be coming at some point. For now, it's apparently Kepler-and-later, though.

So with OS support and, now, released graphics drivers, all that we are waiting on is software and an SDK (plus any NDAs that may still be in effect). With Game Developers Conference (GDC 2015) coming up in a little over a week, I expect that we will get each of these very soon.

Update 1 Pt. 2: I should note that the release notes for 349.72 specifically mention DirectX 12. As mentioned above, it is possible that 349.65 contains just WDDM 2.0 and not DX12, but it contains at least WDDM 2.0.

GPU Market sees 20-point swing in 2014: NVIDIA gains, AMD falls

Subject: Graphics Cards | February 21, 2015 - 12:18 PM |
Tagged: radeon, nvidia, marketshare, market share, geforce, amd

One of the perennial firms that measures GPU market share, Jon Peddie Research, has come out with a report on Q4 2014 this weekend, and the results are eye-opening. According to the data, NVIDIA and AMD each took dramatic swings from Q4 2013 to Q4 2014.

         Q4 2014    Q3 2014    Q4 2013    Year-to-year Change
AMD      24.0%      28.4%      35.0%      -11.0%
Matrox   0.00%      0.10%      0.10%      -0.1%
NVIDIA   76.0%      71.5%      64.9%      +11.1%
S3       0.00%      0.00%      0.00%      +0.0%

Data source: Jon Peddie Research

Here is the JPR commentary to start us out:

JPR's AIB Report tracks computer add-in graphics boards, which carry discrete graphics chips. AIBs used in desktop PCs, workstations, servers, and other devices such as scientific instruments. They are sold directly to customers as aftermarket products, or are factory installed. In all cases, AIBs represent the higher end of the graphics industry using discrete chips and private high-speed memory, as compared to the integrated GPUs in CPUs that share slower system memory.

The news was encouraging and seasonally understandable, quarter-to-quarter, the market decreased -0.68% (compared to the desktop PC market, which decreased 3.53%).

On a year-to-year basis, we found that total AIB shipments during the quarter fell -17.52%, which is more than desktop PCs, which fell -0.72%.

However, in spite of the overall decline, somewhat due to tablets and embedded graphics, the PC gaming momentum continues to build and is the bright spot in the AIB market.

icon.jpg

NVIDIA's Maxwell GPU

The overall PC desktop market increased quarter-to-quarter including double-attach (the adding of a second or third AIB to a system with integrated processor graphics) and, to a lesser extent, dual AIBs in performance desktop machines using either AMD's Crossfire or Nvidia's SLI technology.

The attach rate of AIBs to desktop PCs has declined from a high of 63% in Q1 2008 to 36% this quarter.

The year-to-year change that JPR is reporting is substantial and shows a swing of more than 20 points in market share in favor of NVIDIA over AMD. According to this data, AMD's market share has now dropped from 35% at the end of 2013 to just 24% at the end of 2014. Meanwhile, NVIDIA continues to truck forward, going from 64.9% at the end of 2013 to 76% at the end of 2014.
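
If the "20+ point" figure seems odd when each vendor only moved about 11 points, the swing refers to the gap between the two companies; a quick check using the table's own numbers:

```python
# How the "20+ point swing" falls out of the JPR figures quoted above:
# each vendor moved ~11 points, so the gap between them moved ~22.
amd_2013, amd_2014 = 35.0, 24.0
nv_2013, nv_2014 = 64.9, 76.0

lead_2013 = nv_2013 - amd_2013    # 29.9-point NVIDIA lead at end of 2013
lead_2014 = nv_2014 - amd_2014    # 52.0-point lead at end of 2014
print(lead_2014 - lead_2013)      # ~22.1-point swing toward NVIDIA
```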

08.jpg

The Radeon R9 285 release didn't have the impact AMD had hoped

Clearly the release of NVIDIA's Maxwell GPUs (the GeForce GTX 750 Ti, GTX 970, and GTX 980) has impacted the market even more than we initially expected. In recent weeks the GTX 970 has been getting a lot of negative press over the memory issue, and I will be curious to see what effect this has on sales in the near future. But the 12-month swing that you see in the table above is the likely cause of the sudden departure of John Byrne, Collette LaForce and Raj Naik.

AMD has good products, even better pricing and a team of PR and marketing folks that are talented and aggressive. So how can the company recover from this? Products, people; new products. Will the rumors circling around the Radeon R9 390X develop into such a product?

Hopefully 2015 will provide it.

EVGA would like to give you a GTX 960 SSC and a Z97 FTW motherboard

Subject: Editorial, General Tech, Graphics Cards, Motherboards | February 20, 2015 - 05:10 PM |
Tagged: z97, gtx 960, giveaway, evga, contest

I know, the nerve of some people. Jacob from EVGA emails me this week, complaining about how he has this graphics card and motherboard just sitting in his cubicle taking up space and "why won't I just give it away already!?"

Fine. I'll do it. For science.

So let's make this simple shall we? EVGA wants to get rid of some kick-ass gaming hardware and you want to win it. Why muddle up a good thing?

The Prizes

  • EVGA GeForce GTX 960 SSC
     
    • The EVGA GeForce GTX 960 delivers incredible performance, power efficiency, and gaming technologies that only NVIDIA Maxwell technology can offer. This is the perfect upgrade, offering 60% faster performance and twice the power efficiency of previous-generation cards*. Plus, it features VXGI for realistic lighting, support for smooth, tear-free NVIDIA G-SYNC technology, and Dynamic Super Resolution for 4K-quality gaming on 1080P displays.
       
    • The new EVGA ACX 2.0+ cooler brings new features to the award winning EVGA ACX 2.0 cooling technology. A Memory MOSFET Cooling Plate (MMCP) reduces MOSFET temperatures up to 11°C, and optimized Straight Heat Pipes (SHP) reduce GPU temperature by an additional 5°C. ACX 2.0+ coolers also feature optimized Swept fan blades, double ball bearings and an extreme low power motor, delivering more air flow with less power, unlocking additional power for the GPU.

evgacontest1.jpg

  • EVGA Z97 FTW Motherboard
     
    • Welcome to a new class of high performance motherboards with the EVGA Z97 lineup. These platforms offer a return to greatness with a new GUI BIOS interface, reimagined power VRM that focuses on efficiency, and are loaded with features such as Intel® Gigabit LAN, Native SATA 6G/USB 3.0 and more.
       
    • Engineered for the performance users with excellent overclocking features. Includes a GUI BIOS that is focused on functionality, new software interface for overclocking in the O.S., high quality components, M.2 storage option and more.

evgacontest2.jpg

The Process (aka how do you win?)

So even though I'm doing all the work getting this hardware out of Jacob's busy hands and to our readers...you do have to do a couple of things to win the hardware as well. 

  1. Fill out the questionnaire below.
     
  2. Enter the "secret phrase" from tonight's 337th episode of the PC Perspective Podcast. We'll be live streaming at 10pm ET / 7pm PT or you can wait for the downloadable version at http://www.pcper.com/podcast or the video version on our PC Perspective YouTube channel

The contest will run for one week so you will have more than enough time to listen to or watch the podcast and get the super-secret answer. We'll ship to anywhere in the world and one person will win both fantastic prizes! Once the contest closes (Wednesday, February 25th at 12pm ET) we'll randomly draw a winner from the form below that got the correct answer!

A HUGE thanks goes to our friends at EVGA for supplying the hardware for our giveaway. Good luck!

Source: EVGA

You have to pay to play, Gigabyte's overclockable GTX 980 G1 GAMING

Subject: Graphics Cards | February 20, 2015 - 02:08 PM |
Tagged: gigabyte, nvidia, GTX 980 G1 GAMING, windforce, maxwell, factory overclocked

If you want the full complement of Maxwell streaming multiprocessors and ROP units, as well as that last 500MB of RAM running at full speed, then you will need to pay for a GTX 980.  One choice is Gigabyte's GTX 980 G1 Gaming, which will cost you $580, around $240 more than a GTX 970, but the premium can be worth it if you need the power.  [H]ard|OCP took the already overclocked card from a boost GPU frequency of 1329MHz and RAM at 7GHz all the way to a boost of 1513MHz with RAM topping out at 8.11GHz.  That overclock had a noticeable effect on performance and helped the card garner an Editor's Choice award.  See it in action here.

1424030496NnqLS3kD4X_1_1.jpg

"Today we have the GIGABYTE GTX 980 G1 GAMING, which features the WINDFORCE 600W cooling system and a high factory overclock. We will make comparisons to the competition, find out how fast it is compared to a GTX 980, and you won't believe the overclock we achieved! We will make both performance and price comparisons."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

NVIDIA Recants: Overclocking Returning to Mobile GPUs

Subject: Graphics Cards, Mobile | February 19, 2015 - 03:58 PM |
Tagged: nvidia, notebooks, mobile, gpu

After a week or so of debate surrounding NVIDIA's decision to disable overclocking on mobile GPUs, we have word that the company has reconsidered and will be re-enabling the feature in next month's driver release:

As you know, we are constantly tuning and optimizing the performance of your GeForce PC.

We obsess over every possible optimization so that you can enjoy a perfectly stable machine that balances game, thermal, power, and acoustic performance.

Still, many of you enjoy pushing the system even further with overclocking.

Our recent driver update disabled overclocking on some GTX notebooks. We heard from many of you that you would like this feature enabled again. So, we will again be enabling overclocking in our upcoming driver release next month for those affected notebooks. 

If you are eager to regain this capability right away, you can also revert back to 344.75.

Now, I don't want to brag here, but we did just rail against NVIDIA for this decision on last night's podcast...and then the reversal was posted on NVIDIA's forums just four hours ago... I'm not saying, but I'm just saying!

nvidia-logo.jpg

All kidding aside, this is great news! And NVIDIA desperately needs to be paying attention to what consumers are asking for in order to make up for some poor decisions made in the last several months. Now (or at least soon), you will be able to return to your mobile GPU overclocking!

Amazon, Newegg and others offering partial refunds on GTX 970 purchases

Subject: Graphics Cards | February 19, 2015 - 01:51 PM |
Tagged: nvidia, memory issue, GTX 970, geforce

It looks like some online retailers are offering either partial refunds or full refunds (with return) to users who complain about the implications of the GeForce GTX 970 memory issue. On January 25th NVIDIA came clean about the true memory architecture of the GTX 970, which included changes to the specifications for L2 cache and ROP count in addition to the division of the 4GB of memory into two distinct pools. Some users have complained about performance problems in heavily memory-dependent gaming scenarios, even though my own testing has been less than conclusive.

GM204_arch.jpg

Initially there was a demand for a recall or some kind of compensation from NVIDIA regarding the issue but that has fallen flat. What appears to be working (for some people) is going to the retailer directly. A thread on reddit.com's Hardware sub-reddit shows quite a few users were able to convince Amazon to issue 20-30% price refunds for their trouble. Even Newegg has been spotted offering either a full refund with return after the typical return window or a 20% credit through gift cards. 

gtx970refund.jpg

To be fair, some users are seeing their requests denied:

"After going back and forth for the past hour I manage to escalated my partial refund to a supervisor who promptly declined it."

"Are you serious? NewEgg told me to go complain to the vendor."

So, while we can debate the necessity or validity of these types of full or partial refunds on moral grounds, the truth is that this is happening. I'm very curious to hear what NVIDIA's internal thinking is on this matter and whether it will impact relationships between NVIDIA, its add-in card partners and the online retailers themselves. Who ends up paying the final costs is still up in the air, I would bet.

Our discussion on the original GTX 970 Issue - Subscribe to our YouTube Channel for more!

What do you think? Is this just some buyers taking advantage of the situation for their own gain? Warranted requests from gamers that were taken advantage of? Leave me your thoughts in the comments!

Further reading on this issue:

Source: Reddit.com

ASUS Announces GTX 960 Mini - A New Small Form-Factor Graphics Card

Subject: Graphics Cards | February 16, 2015 - 11:04 AM |
Tagged: SFF, nvidia, mini-ITX GPU, mini-itx, gtx 960, graphics, gpu, geforce, asus

ASUS returns to the mini-ITX friendly form-factor with the GTX 960 Mini (officially named GTX960-MOC-2GD5 for maximum convenience), their newest NVIDIA GeForce GTX 960 graphics card.

image.jpg

Other than the smaller size to allow compatibility with a wider array of small enclosures, the GTX 960 Mini also features an overclocked core and promises "20% cooler and vastly quieter" performance from its custom heatsink and CoolTech fan. Here's a quick rundown of key specs:

  • 1190 MHz Base Clock / 1253 MHz Boost Clock
  • 1024 CUDA cores
  • 2GB 128-bit GDDR5 @ 7010 MHz
  • 3x DisplayPort, 1x HDMI 2.0, 1x DVI output

image.jpg

​No word on the pricing or availability of the card just yet. The other mini-ITX version of the GTX 960 on the market from Gigabyte has been selling for $199.99, so expect this to run somewhere between $200-$220 at launch.

image.jpg

ASUS has reused this image from the GTX 970 Mini launch, and so have I

The product page is up on the ASUS website so availability seems imminent.

Source: ASUS

Ubisoft Discusses Assassin's Creed: Unity with Investors

Subject: General Tech, Graphics Cards | February 15, 2015 - 07:30 AM |
Tagged: ubisoft, DirectX 12, directx 11, assassins creed, assassin's creed, assasins creed unity

During a conference call with investors, analysts, and press, Yves Guillemot, CEO of Ubisoft, highlighted the issues with Assassin's Creed: Unity with an emphasis on the positive outcomes going forward. Their quarter itself was good, beating expectations and allowing them to raise full-year projections. As expected, they announced that a new Assassin's Creed game would be released at the end of the year based on the technology they created for Unity, with “lessons learned”.

ubisoft-assassins-creed-unity-scene.jpg

Before optimization, every material on every object is at least one draw call.

Of course, there are many ways to optimize... but that effort works against future titles.

After their speech, the question period revisited the topic of Assassin's Creed: Unity and how it affected current sales, how it would affect the franchise going forward, and how should they respond to that foresight (Audio Recording - The question starts at 25:20). Yves responded that they redid “100% of the engine”, which was a tremendous undertaking. “When you do that, it's painful for all the group, and everything has to be recalibrated.” He continues: “[...] but the engine has been created, and it is going to help that brand to shine in the future. It's steps that we need to take regularly so that we can constantly innovated. Those steps are sometimes painful, but they allow us to improve the overall quality of the brand, so we think this will help the brand in the long term.”

This makes a lot of sense to me. When the issues first arose, it was speculated that the engine was pushing way too many draw calls, especially for DirectX 11 PCs. At the time, I figured that Ubisoft chose Assassin's Creed: Unity to be the first title to use their new development pipeline, focused on many simple assets rather than batching things together to minimize host-to-GPU and GPU-to-host interactions. Tens of thousands of individual tasks being sent to the GPU will choke a PC, and getting it to run at all on DirectX 11 might have diverted resources from, or even caused, many of the glitches. Currently, a few thousand draw calls per frame is the comfortable ceiling, although "amazing developers" can raise it to about ten thousand.
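
To make the batching point concrete, here is a deliberately simplified sketch (plain Python, and in no way Ubisoft's actual pipeline) of why grouping draws by material collapses the number of submissions a DirectX 11 driver has to chew through:

```python
from collections import defaultdict, namedtuple

# Toy scene: 30,000 objects sharing only 20 distinct materials.
Obj = namedtuple("Obj", "mesh material")
scene = [Obj(f"mesh{i}", f"mat{i % 20}") for i in range(30_000)]

# Naive path: one draw call per object-material pair -> 30,000 submissions,
# each carrying real CPU/driver overhead on DX11.
naive_calls = len(scene)

# Batched path: group every mesh that shares a material into one submission.
batches = defaultdict(list)
for obj in scene:
    batches[obj.material].append(obj.mesh)
batched_calls = len(batches)

print(naive_calls, batched_calls)   # 30000 vs 20
```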

This also means that I expect the next Assassin's Creed title to support DirectX 12, possibly even in the graphics API's launch window. If I am correct, Ubisoft has been preparing for it for a long time. Of course, it is possible that I am simply wrong, but it would align with Microsoft's expectation of Holiday 2015 for the first big-budget titles to use the new interface, and it would be silly to have done such a big overhaul without planning to switch to DX12 as soon as possible.

Then there is the last concern: If I am correct, what should Ubisoft have done? Is it right for them to charge full price for a title that they know will have necessary birth pains? Do they delay it and risk (or even accept) that it will be non-profitable, and upset fans that way? There does not seem to be a clear answer, with all outcomes being some flavor of damage control.

Source: GamaSutra

Maxwell keeps on overclocking

Subject: Graphics Cards | February 12, 2015 - 01:41 PM |
Tagged: overclocking, nvidia, msi, gtx 960, GM206, maxwell

While Ryan was slaving over a baker's dozen of NVIDIA's GTX 960s, [H]ard|OCP focused on overclocking the MSI GeForce GTX 960 GAMING 2G that they recently reviewed.  Out of the box this GPU will hit 1366MHz in game, with memory frequency unchanged at 7GHz effective.  As users have discovered, overclocking cards with thermal protection that automatically downclocks the GPU once a certain TDP threshold is reached is a little trickier, since simply pushing more voltage into the card can raise its power draw and temperature enough that you end up with a lower frequency than before you overvolted.  After quite a bit of experimentation, [H] managed to boost the memory to a full 8GHz with the in-game GPU clock hitting 1557MHz, which is at the higher end of what Ryan saw.  The trick was to increase the Power Limit and turn the clock speed up but leave the voltage alone.
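
Here is a toy model of why that happens (this is not NVIDIA's actual GPU Boost algorithm, and every number in it is invented purely for illustration): power scales roughly with frequency and with the square of voltage, so extra voltage can push the card into its power limit and cost you sustained clock speed:

```python
# Invented baseline: 1366 MHz at 1.20 V drawing 120 W, with a 140 W power limit.
BASE_MHZ, BASE_V, BASE_W = 1366, 1.20, 120.0

def sustained_clock(target_mhz, voltage, power_limit_w=140.0):
    # crude model: power ~ frequency * voltage^2
    power = BASE_W * (target_mhz / BASE_MHZ) * (voltage / BASE_V) ** 2
    if power <= power_limit_w:
        return target_mhz
    # over the limit: the card throttles frequency until it fits again
    return target_mhz * power_limit_w / power

print(sustained_clock(1557, 1.20))   # clock offset only: ~1557 MHz holds
print(sustained_clock(1557, 1.26))   # extra voltage: throttles to ~1445 MHz
```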

1421920414x2ymoRS6JM_2_4_l.jpg

"We push the new MSI GeForce GTX 960 GAMING video card to its limits of performance by overclocking to its limits. This NVIDIA GeForce GTX 960 GPU based video card has a lot of potential for hardware enthusiasts and gamers wanting more performance. We compare it with other overclocked cards to see if the GTX 960 can keep up."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

ASUS Launches GTX 750 Ti Strix OC Edition With Twice the Memory

Subject: Graphics Cards | February 11, 2015 - 11:55 PM |
Tagged: strix, maxwell, gtx 750ti, gtx 750 ti, gm107, factory overclocked, DirectCU II

ASUS is launching a new version of its factory overclocked GTX 750 Ti STRIX with double the memory of the existing STRIX-GTX750TI-OC-2GD5. The new card will feature 4GB of GDDR5, but is otherwise identical.

The new graphics card pairs the NVIDIA GM107 GPU and 4GB of memory with ASUS’ dual fan "0dB" DirectCU II cooler. The card can output video over DVI, HDMI, and DisplayPort.

Thanks to the aftermarket cooler, ASUS has factory overclocked the GTX 750 Ti GPU (640 CUDA cores) to a respectable 1124 MHz base and 1202 MHz GPU Boost clockspeeds. (For reference, stock clockspeeds are 1020 MHz base and 1085 MHz boost.) However, while the amount of memory has doubled, the memory clockspeed remains the same at a stock 5.4 Gbps (effective).

Asus GTX 750Ti Strix 4GB Factory Overclocked Graphics Card.jpg

ASUS has not announced pricing or availability for the new card, but expect it to come soon at a slight premium (~$15) over the $160 2GB STRIX 750Ti.

The additional memory (and its usefulness versus the price premium) is a bit of a head-scratcher considering this is a budget card aimed at delivering decent 1080p gaming. The extra memory may help in cranking up the game graphics settings just a bit more. In the end, the extra memory is nice to have, but if you find a good deal on a 2GB card today, don't get too caught up on waiting for a 4GB model.

Source: TechPowerUp

NVIDIA Event on March 3rd. Why?

Subject: Graphics Cards, Mobile, Shows and Expos | February 11, 2015 - 03:25 PM |
Tagged: Tegra X1, nvidia, mwc 15, MWC, gdc 2015, GDC, DirectX 12

On March 3rd, NVIDIA will host an event called “Made to Game”. Invitations have gone out to numerous outlets, including Android Police, who published a censored screenshot of it. This suggests that it will have something to do with the Tegra X1, especially since the date is the day after Mobile World Congress starts. Despite all of this, I think it is for something else entirely.

nvidia-march-3-2015-event.png

Image Credit: Android Police

Allow me to highlight two points. First, Tech Report claims that the event is taking place in San Francisco, which is about as far away from Barcelona, Spain as you can get. It is close to GDC, however, which also starts on March 2nd. If this were meant to align with Mobile World Congress, you ideally would not want attendees to take a 14-hour flight for a day trip.

Second, the invitation specifically says: “More than 5 years in the making, what I want to share with you will redefine the future of gaming.” Compare that to the DirectX 12 announcement blog post on March 20th of last year (2014): “Our work with Microsoft on DirectX 12 began more than four years ago with discussions about reducing resource overhead. For the past year, NVIDIA has been working closely with the DirectX team to deliver a working design and implementation of DX12 at GDC (2014).”

So yeah, while it might involve the Tegra X1 processor for Windows 10 on mobile devices, which is the only reason I can think of that they would want Android Police there apart from "We're inviting everyone everywhere", I expect that this event is for DirectX 12. I assume that Microsoft would host their own event that involves many partners, but I could see NVIDIA having a desire to save a bit for something of their own. What would that be? No idea.

NVIDIA Releases 347.52 Drivers and Brothers for GRID

Subject: General Tech, Graphics Cards, Mobile | February 11, 2015 - 08:00 AM |
Tagged: shieldtuesday, shield, nvidia, gridtuesday, grid, graphics drivers, geforce, drivers

Update: Whoops! The title originally said "374.52", when it should be "347.52". My mistake. Thanks "Suddenly" in the comments!

Two things from NVIDIA this week: a new driver and a new game for the NVIDIA GRID. The new driver aligns with the release of Evolve, which came out on Tuesday from the original creators of Left 4 Dead. The graphics vendor also claims that it will help Assassin's Creed: Unity, Battlefield 4, Dragon Age: Inquisition, The Crew, and War Thunder. Several SLI profiles were also added for Alone in the Dark: Illumination, Black Desert, Dying Light, H1Z1, Heroes of the Storm, Saint's Row: Gat out of Hell, Total War: Attila, and Triad Wars.

Brothers_A_Tale_of_Two_Sons_cover_art.jpg

On the same day, NVIDIA released Brothers: A Tale of Two Sons on GRID, bringing the number of available games up to 37. This game came out in August 2013 and received a lot of critical praise. Its control style is unique, using dual-thumbstick gamepads to simultaneously control both characters. More importantly, despite being short, the game is said to have an excellent story, even achieving Game of the Year (2013) for TotalBiscuit based on its narrative, which is not something that he praises often.

I'd comment on the game, but I've yet to get the time to play it. Apparently it is only a couple hours long, so maybe I can fit it in somewhere.

Also, they apparently are now calling this “#SHIELDTuesday” rather than “#GRIDTuesday”. I assume this rebranding is because people may not know that GRID exists, but they would certainly know if they purchased an Android-based gaming device for a couple hundred dollars or more. We could read into this and make some assumptions about GRID adoption rates versus SHIELD purchases, or even purchases of the hardware itself versus their projections, but it would be pure speculation.

Both announcements were made available on Tuesday, for their respective products.

Source: NVIDIA

AMD R9 300-Series Possibly Coming Soon

Subject: Graphics Cards | February 7, 2015 - 06:03 PM |
Tagged: amd, R9, r9 300, r9 390x, radeon

According to WCCFTech, AMD commented on Facebook that they are “putting the finishing touches on the 300 series to make sure they live up to expectation”. I tried looking through AMD's “Posts to Page” for February 3rd and did not see it listed, so a grain of salt is necessary (either with WCCF or with my lack of Facebook skills).

AMD-putting-the-finishing-touches-on-300-series.jpg

Image Credit: WCCFTech

The current rumors claim that Fiji XT will have 4096 graphics cores fed by a high-bandwidth, stacked memory architecture, which is supposedly rated at 640 GB/s (versus the 224 GB/s of the GeForce GTX 980). When you're dealing with data sets at the scale that GPUs are, bandwidth is a precious resource. That said, GPUs also have caches and other methods to reduce this dependency, but let's just say that, if you offer a graphics vendor a free, massive jump in memory bandwidth, you will have a friend, and possibly one for life. Need a couch moved? No problem!
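
For context, here is where those two bandwidth figures come from (published specs and rumor, not measurements), plus the size of the rumored jump:

```python
# GDDR5 bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
gtx980_gbs = 256 / 8 * 7.0       # 256-bit GDDR5 at 7 Gbps -> 224 GB/s
fiji_rumored_gbs = 640.0         # rumored stacked-memory figure for Fiji XT

print(gtx980_gbs)                        # 224.0
print(fiji_rumored_gbs / gtx980_gbs)     # ~2.9x the GTX 980's bandwidth
```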

The R9 300 Series is expected to launch next quarter, which could be as soon as about a month from now.

Source: WCCFTech

glNext Initiative to Be Unveiled at GDC 2015

Subject: Graphics Cards, Shows and Expos | February 4, 2015 - 03:33 AM |
Tagged: OpenGL Next, opengl, glnext, gdc 2015, GDC

The first of the next-gen graphics APIs to be released was Mantle, which launched a little while after Battlefield 4, but its SDK is still invite-only. The DirectX 12 API quietly launched with the recent Windows 10 Technical Preview, but no drivers, SDK, or software (that we know about) are available to the public yet. The Khronos Group has announced its project, and that's about it so far.

opengl_logo.jpg

According to Develop Magazine, the GDC event listing, and participants, the next OpenGL (currently called “glNext initiative”) will be unveiled at GDC 2015. The talk will be presented by Valve, but it will also include Epic Games, who was closely involved in DirectX 12 with Unreal Engine, Oxide Games and EA/DICE, who were early partners with AMD on Mantle, and Unity, who recently announced support for DirectX 12 when it launches with Windows 10. Basically, this GDC talk includes almost every software developer that came out in early support of either DirectX 12 or Mantle, plus Valve. Off the top of my head, I can only think of FutureMark as unlisted. On the other hand, while they will obviously have driver support from at least one graphics vendor, none are listed. Will we see NVIDIA? Intel? AMD? All of the above? We don't know.

When I last discussed the next OpenGL initiative, I was attempting to parse the naming survey to figure out bits of the technology itself. As it turns out, the talk claims to go deep into the API, with demos, examples, and “real-world applications running on glNext drivers and hardware”. If this information makes it out (some talks unfortunately remain private, although this one looks public), then we should know more about it than we know about any competing API today. Personally, I am hoping that they spent a lot of effort on the GPGPU side of things, sort of building graphics atop it rather than having them be two separate entities. This would be especially good if it could be sandboxed for web applications.

This could get interesting.

Source: GDC

The GTX 960 at 2560x1440

Subject: Graphics Cards | January 30, 2015 - 03:42 PM |
Tagged: nvidia, msi gaming 2g, msi, maxwell, gtx 960

Sitting right now at $220 ($210 after MIR), the MSI GTX 960 GAMING 2G is more or less the same price as a standard R9 280, or a 280X/290 on discount.  We have seen that its price-to-performance is quite competitive for 1080p gaming, but this year the push seems to be for performance at higher resolutions.  [H]ard|OCP explored the performance of this card by testing at 2560x1440 against the GTX 770 and 760 as well as the R9 285 and 280X, with some interesting results.  In some cases the 280X came out on top, while in others the 960 won, especially when GameWorks features like God Rays were enabled.  Read through this carefully, but from their results the deciding factor between these cards could be price, as they are very close in performance.

14224342445gkR2VhJE3_1_1.jpg

"We take the new NVIDIA GeForce GTX 960 GPU based MSI GeForce GTX 960 GAMING 2G and push it to its limits at 1440p. We also include a GeForce GTX 770 and AMD Radeon R9 280X into this evaluation for some unexpected results. How does the GeForce GTX 960 really compare to a wide range of cards?"

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

NVIDIA Plans Driver Update for GTX 970 Memory Issue, Help with Returns

Subject: Graphics Cards | January 28, 2015 - 10:21 AM |
Tagged: nvidia, memory issue, maxwell, GTX 970, GM204, geforce

UPDATE 1/29/15: This forum post has since been edited and basically removed, with statements made on Twitter that no driver changes are planned that will specifically target the performance of the GeForce GTX 970.

The story around the GeForce GTX 970 and its confusing and shifting memory architecture continues to develop. In a post on the official GeForce.com forums (on page 160 of 184!), moderator and NVIDIA employee PeterS says that the company is working on a driver to help address the performance concerns and will also be willing to "help out" users who honestly want to return a product they already purchased. Here is the quote:

Hey,

First, I want you to know that I'm not just a mod, I work for NVIDIA in Santa Clara.

I totally get why so many people are upset. We messed up some of the stats on the reviewer kit and we didn't properly explain the memory architecture. I realize a lot of you guys rely on product reviews to make purchase decisions and we let you down.

It sucks because we're really proud of this thing. The GTX970 is an amazing card and I genuinely believe it's the best card for the money that you can buy. We're working on a driver update that will tune what's allocated where in memory to further improve performance.

Having said that, I understand that this whole experience might have turned you off to the card. If you don't want the card anymore you should return it and get a refund or exchange. If you have any problems getting that done, let me know and I'll do my best to help.

--Peter

This makes things a bit more interesting - based on my conversations with NVIDIA about the GTX 970 since this news broke, it was stated that the operating system had a much stronger role in the allocation of memory from a game's request than the driver. Based on the above statement though, NVIDIA seems to think it can at least improve on the current level of performance and tune things to help alleviate any potential bottlenecks that might exist simply in software.

IMG_9794.JPG

As far as the return goes, PeterS only offers to help this one forum user, but I would assume the gesture would be available to anyone with the same level of concern about the product. Again, as I stated in my detailed breakdown of the GTX 970 memory issue on Monday, I don't believe that users need to go that route - the GeForce GTX 970 is still a fantastic-performing card in nearly all cases except (maybe) a tiny fraction where that last 500MB of frame buffer might come into play. I am working on another short piece going up today that details my experiences with the GTX 970 running up against those boundaries.

Part 1: NVIDIA Responds to GTX 970 3.5GB Memory Issue
Part 2: NVIDIA Discloses Full Memory Structure and Limitations of GTX 970

NVIDIA is trying to be proactive now, that much we can say. It seems that the company understands its mistake - not in the memory pooling decision but in the lack of clarity it offered to reviewers and consumers upon the product's launch.

NVIDIA Responds to GTX 970 3.5GB Memory Issue

Subject: Graphics Cards | January 24, 2015 - 11:51 AM |
Tagged: nvidia, maxwell, GTX 970, GM204, 3.5gb memory

UPDATE 1/28/15 @ 10:25am ET: NVIDIA has posted in its official GeForce.com forums that they are working on a driver update to help alleviate memory performance issues in the GTX 970 and that they will "help out" those users looking to get a refund or exchange.

UPDATE 1/26/15 @ 1:00pm ET: We have posted a much more detailed analysis and look at the GTX 970 memory system and what is causing the unusual memory divisions. Check it out right here!

UPDATE 1/26/15 @ 12:10am ET: I now have a lot more information on the technical details of the architecture that cause this issue and more information from NVIDIA to explain it. I spoke with SVP of GPU Engineering Jonah Alben on Sunday night to really dive into the questions everyone had. Expect an update here on this page at 10am PT / 1pm ET or so. Bookmark and check back!

UPDATE 1/24/15 @ 11:25pm ET: Apparently there is some concern online that the statement below is not legitimate. I can assure you that the information did come from NVIDIA, though it is not attributable to any specific person - the message was sent through a couple of different PR people and is the result of meetings and multiple NVIDIA employees' input. It is really a message from the company, not any one individual. I have had several 10-20 minute phone calls with NVIDIA about this issue and this statement on Saturday alone, so I know that the information wasn't from a spoofed email, etc. Also, this statement was posted by an employee moderator on the GeForce.com forums about 6 hours ago, further proving that the statement is directly from NVIDIA. I hope this clears up any concerns around the validity of the information below!

Over the past couple of weeks, users of GeForce GTX 970 cards have noticed and started researching a problem with memory allocation in memory-heavy gaming. Essentially, gamers noticed that the GTX 970, with its 4GB of graphics memory, was only ever accessing 3.5GB of that memory. When it did attempt to access the final 500MB, performance seemed to drop dramatically. What started as simply a forum discussion blew up into news being reported at tech and gaming sites across the web.

gtx970memory.jpg

Image source: Lazygamer.net

NVIDIA has finally responded to the widespread online complaints about GeForce GTX 970 cards only utilizing 3.5GB of their 4GB frame buffer. From the horse's mouth:

The GeForce GTX 970 is equipped with 4GB of dedicated graphics memory.  However the 970 has a different configuration of SMs than the 980, and fewer crossbar resources to the memory system. To optimally manage memory traffic in this configuration, we segment graphics memory into a 3.5GB section and a 0.5GB section.  The GPU has higher priority access to the 3.5GB section.  When a game needs less than 3.5GB of video memory per draw command then it will only access the first partition, and 3rd party applications that measure memory usage will report 3.5GB of memory in use on GTX 970, but may report more for GTX 980 if there is more memory used by other commands.  When a game requires more than 3.5GB of memory then we use both segments.
 
We understand there have been some questions about how the GTX 970 will perform when it accesses the 0.5GB memory segment.  The best way to test that is to look at game performance.  Compare a GTX 980 to a 970 on a game that uses less than 3.5GB.  Then turn up the settings so the game needs more than 3.5GB and compare 980 and 970 performance again.
 
Here’s an example of some performance data:

                                                            GTX 980         GTX 970
Shadow of Mordor
  <3.5GB setting = 2688x1512 Very High                      72 FPS          60 FPS
  >3.5GB setting = 3456x1944                                55 FPS (-24%)   45 FPS (-25%)
Battlefield 4
  <3.5GB setting = 3840x2160 2xMSAA                         36 FPS          30 FPS
  >3.5GB setting = 3840x2160 135% res                       19 FPS (-47%)   15 FPS (-50%)
Call of Duty: Advanced Warfare
  <3.5GB setting = 3840x2160 FSMAA T2x, Supersampling off   82 FPS          71 FPS
  >3.5GB setting = 3840x2160 FSMAA T2x, Supersampling on    48 FPS (-41%)   40 FPS (-44%)

On Shadow of Mordor, the drop is about 24% on GTX 980 and 25% on GTX 970, a 1% difference.  On Battlefield 4, the drop is 47% on GTX 980 and 50% on GTX 970, a 3% difference.  On CoD: AW, the drop is 41% on GTX 980 and 44% on GTX 970, a 3% difference.  As you can see, there is very little change in the performance of the GTX 970 relative to GTX 980 on these games when it is using the 0.5GB segment.
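
Those percentages check out against the frame rates in the table; here is a quick reproduction of NVIDIA's math (the data is theirs, only the arithmetic is mine):

```python
# (fps under 3.5GB, fps over 3.5GB) for each card, from NVIDIA's table above.
data = {
    "Shadow of Mordor":       {"GTX 980": (72, 55), "GTX 970": (60, 45)},
    "Battlefield 4":          {"GTX 980": (36, 19), "GTX 970": (30, 15)},
    "CoD: Advanced Warfare":  {"GTX 980": (82, 48), "GTX 970": (71, 40)},
}
for game, cards in data.items():
    drops = {card: (hi - lo) / hi * 100 for card, (hi, lo) in cards.items()}
    print(game, {card: f"-{d:.0f}%" for card, d in drops.items()})
```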

So it would appear that the severing of a trio of SMMs to make the GTX 970 different from the GTX 980 was the root cause of the issue. I'm not sure if this is something we have seen before with NVIDIA GPUs that are cut down in the same way, but I have asked NVIDIA for clarification on that. The ratios fit: 500MB is 1/8th of the 4GB total memory capacity and 2 SMMs is 1/8th of the total SMM count. (Edit: The ratios in fact do NOT match up...odd.)

GeForce_GTX_980_Block_Diagram_FINAL.png

The full GM204 GPU that is the root cause of this memory issue.

Another theory presented itself as well: is this possibly the reason we do not have a GTX 960 Ti yet? If the patterns were followed from previous generations a GTX 960 Ti would be a GM204 GPU with fewer cores enabled and additional SMs disconnected to enable a lower price point. If this memory issue were to be even more substantial, creating larger differentiated "pools" of memory, then it could be an issue for performance or driver development. To be clear, we are just guessing on this one and that could be something that would not occur at all. Again, I've asked NVIDIA for some technical clarification.

Requests for information aside, we may never know for sure whether this is a bug in the GM204 ASIC or a predetermined characteristic of the design.

The question remains: does NVIDIA's response appease GTX 970 owners? After all, this memory behavior is just one part of a GPU's story, and performance testing and analysis already incorporate its effects. Some users will still likely claim a "bait and switch", but do the benchmarks above, as well as our own results at 4K, make it a less significant issue?

Our own Josh Walrath offers this analysis:

A few days ago when we were presented with evidence of the 970 not fully utilizing all 4 GB of memory, I theorized that it had to do with the reduction of SMM units. It makes sense from an efficiency standpoint to perhaps "hard code" memory addresses for each SMM. The thought behind that would be that 4 GB of memory is a huge amount for a video card, and the potential performance gains of a more flexible system would be pretty minimal.

I believe that the memory controller is working as intended and that this is not a bug. When designing a large GPU, there will invariably be compromises made. From all indications NVIDIA decided to save time, die size, and power by simplifying the memory controller and crossbar setup. These things have a direct impact on time to market and power efficiency. NVIDIA probably figured that the couple of percentage points of performance lost were outweighed by the added complexity, power consumption, and engineering resources it would have taken to gain those few points back.