Subject: Graphics Cards | February 19, 2015 - 01:51 PM | Ryan Shrout
Tagged: nvidia, memory issue, GTX 970, geforce
It looks like some online retailers are offering either partial refunds or full refunds (with return) to users who complain about the implications of the GeForce GTX 970 memory issue. On January 25th NVIDIA came clean about the true memory architecture of the GTX 970, which included changes to the specifications for L2 cache and ROP count in addition to the division of the 4GB of memory into two distinct memory pools. Some users have complained about performance issues in heavily memory-dependent gaming scenarios, even though my own testing has been less than conclusive.
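To put rough numbers on why the split pool can matter, here's a back-of-envelope sketch (my own toy model, not an NVIDIA benchmark; only the pool sizes and approximate bandwidth figures come from the disclosure) of what happens to effective bandwidth when an allocation spills past 3.5GB:

```python
# Toy model of the GTX 970's segmented memory, using the pool sizes
# and approximate bandwidths from NVIDIA's January 25th disclosure
# (roughly 196 GB/s for the 3.5GB segment, 28 GB/s for the 0.5GB one).
# Real-world behavior depends on the driver's allocation policy.
FAST_POOL_GB, FAST_BW = 3.5, 196.0  # size in GB, bandwidth in GB/s
SLOW_POOL_GB, SLOW_BW = 0.5, 28.0

def worst_case_bandwidth(alloc_gb):
    """Effective bandwidth if an allocation spills into the slow pool
    and every byte is streamed once, pool by pool."""
    fast = min(alloc_gb, FAST_POOL_GB)
    slow = max(0.0, alloc_gb - FAST_POOL_GB)
    seconds = fast / FAST_BW + slow / SLOW_BW
    return alloc_gb / seconds

print(worst_case_bandwidth(3.5))  # stays in the fast pool: ~196 GB/s
print(worst_case_bandwidth(4.0))  # spills over: drops to ~112 GB/s
```

In this worst case the last 512MB roughly halves effective throughput, which would line up with complaints showing up only in the heaviest memory scenarios, even if typical games rarely touch that last half gigabyte.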
Initially there was a demand for a recall or some kind of compensation from NVIDIA regarding the issue but that has fallen flat. What appears to be working (for some people) is going to the retailer directly. A thread on reddit.com's Hardware sub-reddit shows quite a few users were able to convince Amazon to issue 20-30% price refunds for their trouble. Even Newegg has been spotted offering either a full refund with return after the typical return window or a 20% credit through gift cards.
To be fair, some users are seeing their requests denied:
"After going back and forth for the past hour I manage to escalated my partial refund to a supervisor who promptly declined it."
"Are you serious? NewEgg told me to go complain to the vendor."
So, while we can debate the necessity or validity of these types of full or partial refunds on moral grounds, the truth is that this is happening. I'm very curious to hear what NVIDIA's internal thinking is on this matter and whether it will impact relationships between NVIDIA, its add-in card partners, and the online retailers themselves. Who ends up paying the final costs is still up in the air, I would bet.
Our discussion on the original GTX 970 Issue - Subscribe to our YouTube Channel for more!
What do you think? Is this just some buyers taking advantage of the situation for their own gain? Warranted requests from gamers that were taken advantage of? Leave me your thoughts in the comments!
Further reading on this issue:
- NVIDIA Responds to GTX 970 3.5GB Memory Issue
- NVIDIA Discloses Full Memory Structure and Limitations of GTX 970
- Frame Rating: Looking at GTX 970 Memory Performance
- Frame Rating: GTX 970 Memory Issues Tested in SLI
Subject: Graphics Cards | February 16, 2015 - 11:04 AM | Sebastian Peak
Tagged: SFF, nvidia, mini-ITX GPU, mini-itx, gtx 960, graphics, gpu, geforce, asus
ASUS returns to the mini-ITX friendly form-factor with the GTX 960 Mini (officially named GTX960-MOC-2GD5 for maximum convenience), their newest NVIDIA GeForce GTX 960 graphics card.
Other than the smaller size to allow compatibility with a wider array of small enclosures, the GTX 960 Mini also features an overclocked core and promises "20% cooler and vastly quieter" performance from its custom heatsink and CoolTech fan. Here's a quick rundown of key specs:
- 1190 MHz Base Clock / 1253 MHz Boost Clock
- 1024 CUDA cores
- 2GB 128-bit GDDR5 @ 7010 MHz
- 3x DisplayPort, 1x HDMI 2.0, 1x DVI output
No word on pricing or availability of the card just yet. The other mini-ITX version of the GTX 960 on the market, from Gigabyte, has been selling for $199.99, so expect this one to run somewhere between $200 and $220 at launch.
ASUS has reused this image from the GTX 970 Mini launch, and so have I
The product page is up on the ASUS website so availability seems imminent.
Subject: General Tech, Graphics Cards | February 15, 2015 - 07:30 AM | Scott Michaud
Tagged: ubisoft, DirectX 12, directx 11, assassins creed, assassin's creed, assasins creed unity
During a conference call with investors, analysts, and press, Yves Guillemot, CEO of Ubisoft, highlighted the issues with Assassin's Creed: Unity with an emphasis on the positive outcomes going forward. Their quarter itself was good, beating expectations and allowing them to raise full-year projections. As expected, they announced that a new Assassin's Creed game would be released at the end of the year based on the technology they created for Unity, with “lessons learned”.
Before optimization, every material on every object is at least one draw call.
Of course, there are many ways to optimize... but that effort works against future titles.
After their speech, the question period revisited the topic of Assassin's Creed: Unity and how it affected current sales, how it would affect the franchise going forward, and how they should respond to that foresight (Audio Recording - The question starts at 25:20). Yves responded that they redid “100% of the engine”, which was a tremendous undertaking. “When you do that, it's painful for all the group, and everything has to be recalibrated.” He continues: “[...] but the engine has been created, and it is going to help that brand to shine in the future. It's steps that we need to take regularly so that we can constantly innovate. Those steps are sometimes painful, but they allow us to improve the overall quality of the brand, so we think this will help the brand in the long term.”
This makes a lot of sense to me. When the issues first arose, it was speculated that the engine was pushing way too many draw calls, especially for DirectX 11 PCs. At the time, I figured that Ubisoft chose Assassin's Creed: Unity to be the first title to use their new development pipeline, focused on many simple assets rather than batching things together to minimize host-to-GPU and GPU-to-host interactions. Tens of thousands of individual tasks being sent to the GPU will choke a PC, and getting it to run at all on DirectX 11 might have diverted resources from, or even caused, many of the glitches. Currently, a few thousand is ideal although “amazing developers” can raise the ceiling to about ten thousand.
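A toy illustration of the batching argument (hypothetical scene data, nothing from Ubisoft's actual pipeline): counting every material on every object as its own draw call explodes the submission count, while merging geometry that shares a material collapses it.

```python
# Hypothetical scene: 10,000 crowd NPCs, each with three materials.
# Compare one draw call per (object, material) against one call per
# shared material after batching.
from collections import defaultdict

def naive_draw_calls(objects):
    # every material on every object is at least one draw call
    return sum(len(obj["materials"]) for obj in objects)

def batched_draw_calls(objects):
    # merge all geometry that shares a material into a single call
    batches = defaultdict(list)
    for obj in objects:
        for mat in obj["materials"]:
            batches[mat].append(obj["name"])
    return len(batches)

scene = [{"name": f"npc_{i}", "materials": ["skin", "cloth", "hair"]}
         for i in range(10000)]
print(naive_draw_calls(scene))    # 30000 -- far past what DX11 handles well
print(batched_draw_calls(scene))  # 3 -- trivial for any API
```

The catch, of course, is that real batching costs artist and engine effort up front, and that effort works against a pipeline built around many simple, individually authored assets.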
This also means that I expect the next Assassin's Creed title to support DirectX 12, possibly even in the graphics API's launch window. If I am correct, Ubisoft has been preparing for it for a long time. Of course, it is possible that I am simply wrong, but it would align with Microsoft's expectation that the first big-budget titles using the new interface arrive Holiday 2015, and it would be silly to have done such a big overhaul without planning to switch to DX12 ASAP.
Then there is the last concern: If I am correct, what should Ubisoft have done? Is it right for them to charge full price for a title that they know will have necessary birth pains? Do they delay it and risk (or even accept) that it will be non-profitable, and upset fans that way? There does not seem to be a clear answer, with all outcomes being some flavor of damage control.
Subject: Graphics Cards | February 12, 2015 - 01:41 PM | Jeremy Hellstrom
Tagged: overclocking, nvidia, msi, gtx 960, GM206, maxwell
While Ryan was slaving over a baker's dozen of NVIDIA's GTX 960s, [H]ard|OCP focused on overclocking the MSI GeForce GTX 960 GAMING 2G that they recently reviewed. Out of the box this GPU will hit 1366MHz in game, with the memory frequency unchanged at 7GHz effective. As users have discovered, overclocking cards with thermal protection that automatically downclocks the GPU once a certain TDP threshold is reached is a little trickier: simply upping the power provided to the card can raise the temperature enough that you end up with a lower frequency than before you overvolted. After quite a bit of experimentation, [H] managed to boost the memory to a full 8GHz while the in-game GPU clock hit 1557MHz, which is at the higher end of what Ryan saw. The trick was to increase the Power Limit and turn the clock speed up but leave the voltage alone.
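A crude way to see why leaving the voltage alone worked (a toy model of dynamic power, not NVIDIA's actual GPU Boost algorithm; the constant is picked only so the numbers land near [H]'s results): power scales roughly with frequency times voltage squared, so extra voltage eats the power budget that clock speed needs.

```python
# Toy throttling model: P ~ k * f * V^2. If the estimated draw fits
# the power limit, the card holds the target clock; otherwise Boost
# pulls the clock back until it fits at the chosen voltage.
def sustained_clock(target_mhz, vcore, power_limit_w, k=0.0535):
    draw = k * target_mhz * vcore**2
    if draw <= power_limit_w:
        return target_mhz
    return power_limit_w / (k * vcore**2)

print(sustained_clock(1557, 1.2, 120))  # fits the budget: holds 1557 MHz
print(sustained_clock(1557, 1.3, 120))  # overvolted: throttles to ~1327 MHz
```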
"We push the new MSI GeForce GTX 960 GAMING video card to its limits of performance by overclocking to its limits. This NVIDIA GeForce GTX 960 GPU based video card has a lot of potential for hardware enthusiasts and gamers wanting more performance. We compare it with other overclocked cards to see if the GTX 960 can keep up."
Here are some more Graphics Card articles from around the web:
- Five GeForce GTX 960 cards overclocked @ The Tech Report
- NVIDIA GTX 960 Reference Review @ Hardware Canucks
- MSI GTX 960 Gaming GPU, The Sweet Spot @ Bjorn3d
- MSI GTX 960 Gaming 2G @ Modders-Inc
- 6-way GeForce GTX 960 Shootout @ Legion Hardware
- Testing Nvidia’s GeForce GTX 960 2GB Graphics Cards In SLI @ eTeknix
- NVIDIA GeForce GTX 960 Video Card Review - EVGA SSC Edition @ NitroWare
- Gigabyte GTX 980 G1 Gaming 4GB @ Modders-Inc
- Sapphire R9 290X Tri-X OC 8GB @ Kitguru
- Corsair H105 on 4770K and R9 290 @ HardwareOverclock
Subject: Graphics Cards | February 11, 2015 - 11:55 PM | Tim Verry
Tagged: strix, maxwell, gtx 750ti, gtx 750 ti, gm107, factory overclocked, DirectCU II
ASUS is launching a new version of its factory overclocked GTX 750 Ti STRIX with double the memory of the existing STRIX-GTX750TI-OC-2GD5. The new card will feature 4GB of GDDR5, but is otherwise identical.
The new graphics card pairs the NVIDIA GM107 GPU and 4GB of memory with ASUS’ dual fan "0dB" DirectCU II cooler. The card can output video over DVI, HDMI, and DisplayPort.
Thanks to the aftermarket cooler, ASUS has factory overclocked the GTX 750 Ti GPU (640 CUDA cores) to a respectable 1124 MHz base and 1202 MHz GPU Boost clockspeeds. (For reference, stock clockspeeds are 1020 MHz base and 1085 MHz boost.) However, while the amount of memory has doubled the clockspeeds have remained the same at a stock clock of 5.4 Gbps (effective).
ASUS has not announced pricing or availability for the new card, but expect it to come soon at a slight premium (~$15) over the $160 2GB STRIX 750 Ti.
The additional memory (and its usefulness versus the price premium) is a bit of a head-scratcher considering this is a budget card aimed at delivering decent 1080p gaming. The extra memory may help in cranking up game graphics settings just a bit more. In the end, the extra memory is nice to have, but if you find a good deal on a 2GB card today, don't get too caught up on waiting for a 4GB model.
Subject: Graphics Cards, Mobile, Shows and Expos | February 11, 2015 - 03:25 PM | Scott Michaud
Tagged: Tegra X1, nvidia, mwc 15, MWC, gdc 2015, GDC, DirectX 12
On March 3rd, NVIDIA will host an event called “Made to Game”. Invitations have gone out to numerous outlets, including Android Police, who published a censored screenshot of it. This suggests that it will have something to do with the Tegra X1, especially since the date is the day after Mobile World Congress starts. Despite all of this, I think it is for something else entirely.
Image Credit: Android Police
Allow me to highlight two points. First, Tech Report claims that the event is taking place in San Francisco, which is about as far away from Barcelona, Spain as you can get. It is, however, close to GDC, which also starts on March 2nd. If this was going to align with Mobile World Congress, you ideally would not want attendees to take a 14-hour flight for a day trip.
Second, the invitation specifically says: “More than 5 years in the making, what I want to share with you will redefine the future of gaming.” Compare that to the DirectX 12 announcement blog post on March 20th of last year (2014): “Our work with Microsoft on DirectX 12 began more than four years ago with discussions about reducing resource overhead. For the past year, NVIDIA has been working closely with the DirectX team to deliver a working design and implementation of DX12 at GDC (2014).”
So yeah, while it might involve the Tegra X1 processor for Windows 10 on mobile devices, which is the only reason I can think of that they would want Android Police there apart from "We're inviting everyone everywhere", I expect that this event is for DirectX 12. I assume that Microsoft would host their own event that involves many partners, but I could see NVIDIA having a desire to save a bit for something of their own. What would that be? No idea.
Subject: General Tech, Graphics Cards, Mobile | February 11, 2015 - 08:00 AM | Scott Michaud
Tagged: shieldtuesday, shield, nvidia, gridtuesday, grid, graphics drivers, geforce, drivers
Update: Whoops! The title originally said "374.52", when it should be "347.52". My mistake. Thanks "Suddenly" in the comments!
Two things from NVIDIA this week: a new driver and a new game for NVIDIA GRID. The new driver aligns with the release of Evolve, which came out on Tuesday from the original creators of Left 4 Dead. The graphics vendor also claims that it will help Assassin's Creed: Unity, Battlefield 4, Dragon Age: Inquisition, The Crew, and War Thunder. Several SLI profiles were also added for Alone in the Dark: Illumination, Black Desert, Dying Light, H1Z1, Heroes of the Storm, Saint's Row: Gat out of Hell, Total War: Attila, and Triad Wars.
On the same day, NVIDIA released Brothers: A Tale of Two Sons on GRID, bringing the number of available games up to 37. This game came out in August 2013 and received a lot of critical praise. Its control style is unique, using dual-thumbstick gamepads to simultaneously control both characters. More importantly, despite being short, the game is said to have an excellent story, even achieving Game of the Year (2013) for TotalBiscuit based on its narrative, which is not something that he praises often.
I'd comment on the game, but I've yet to get the time to play it. Apparently it is only a couple hours long, so maybe I can fit it in somewhere.
Also, they apparently are now calling this “#SHIELDTuesday” rather than “#GRIDTuesday”. I assume this rebranding is because people may not know that GRID exists, but they would certainly know if they purchased an Android-based gaming device for a couple hundred dollars or more. We could read into this and make some assumptions about GRID adoption rates versus SHIELD purchases, or even purchases of the hardware itself versus their projections, but it would be pure speculation.
Both announcements were made available on Tuesday, for their respective products.
Subject: Graphics Cards | February 7, 2015 - 06:03 PM | Scott Michaud
Tagged: amd, R9, r9 300, r9 390x, radeon
According to WCCFTech, AMD commented on Facebook that they are “putting the finishing touches on the 300 series to make sure they live up to expectation”. I tried to look through AMD's “Posts to Page” for February 3rd and did not see it listed, so a grain of salt is necessary (either with WCCF or with my lack of Facebook skills).
Image Credit: WCCFTech
The current rumors claim that Fiji XT will have 4096 graphics cores fed by a high-bandwidth, stacked memory architecture, which is supposedly rated at 640 GB/s (versus 224 GB/s for the GeForce GTX 980). When you're dealing with data sets at the scale that GPUs do, bandwidth is a precious resource. That said, GPUs also have caches and other methods to reduce this dependency, but let's just say that if you offer a graphics vendor a free, nearly threefold speed-up in memory bandwidth, you will have a friend, and possibly one for life. Need a couch moved? No problem!
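For a sense of scale, here's a quick back-of-envelope sketch (my own arithmetic, using the rumored 640 GB/s figure against the GTX 980's published 224 GB/s) of how many times per 60Hz frame each card could stream a 4GB working set:

```python
# How many full passes over a working set fit in one frame at a
# given memory bandwidth? Pure arithmetic, ignoring caches and
# the fact that real access patterns are never perfect streams.
def passes_per_frame(bandwidth_gbs, working_set_gb=4.0, fps=60):
    frame_time_s = 1.0 / fps
    return bandwidth_gbs * frame_time_s / working_set_gb

print(round(passes_per_frame(224), 2))  # GTX 980: ~0.93 passes per frame
print(round(passes_per_frame(640), 2))  # rumored Fiji XT: ~2.67 passes
```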
The R9 300 series is expected to launch next quarter, which could be as soon as about a month from now.
A baker's dozen of GTX 960
Back on the launch day of the GeForce GTX 960, we hosted NVIDIA's Tom Petersen for a live stream. During the event, NVIDIA and its partners provided ten GTX 960 cards for our live viewers to win, which we handed out over about an hour and a half. An interesting idea was proposed during the event: what would happen if we tried to overclock all of the product NVIDIA had brought along to see what the distribution of results looked like? After notifying all the winners of their prizes and asking for permission from each, we started the arduous process of testing and overclocking a total of 13 (10 prizes plus our 3 retail units already in the office) different GTX 960 cards.
Hopefully we will be able to provide a solid base of knowledge for buyers of the GTX 960 that we don't normally have the opportunity to offer: what range of overclocking can you expect, and what is the average or median result? I think you will find the data interesting.
The 13 Contenders
Our collection of thirteen GTX 960 cards includes a handful each from ASUS, EVGA, and MSI. The ASUS cards are all STRIX models, the EVGA cards are of the SSC variety, and the MSI cards include a single Gaming model and three 100ME models. (The only difference between the Gaming and 100ME MSI cards is the color of the cooler.)
To be fair to the prize winners, I actually assigned each of them a specific graphics card before opening them up and testing them. I didn't want to be accused of favoritism by giving the best overclockers to the best readers!
Subject: Graphics Cards, Shows and Expos | February 4, 2015 - 03:33 AM | Scott Michaud
Tagged: OpenGL Next, opengl, glnext, gdc 2015, GDC
The first next-gen graphics API to be released was Mantle, which launched a little while after Battlefield 4, but its SDK is still invite-only. The DirectX 12 API quietly launched with the recent Windows 10 Technical Preview, but no drivers, SDK, or software (that we know about) are available to the public yet. The Khronos Group has announced their project, and that's about it currently.
According to Develop Magazine, the GDC event listing, and participants, the next OpenGL (currently called “glNext initiative”) will be unveiled at GDC 2015. The talk will be presented by Valve, but it will also include Epic Games, who was closely involved in DirectX 12 with Unreal Engine, Oxide Games and EA/DICE, who were early partners with AMD on Mantle, and Unity, who recently announced support for DirectX 12 when it launches with Windows 10. Basically, this GDC talk includes almost every software developer that came out in early support of either DirectX 12 or Mantle, plus Valve. Off the top of my head, I can only think of FutureMark as unlisted. On the other hand, while they will obviously have driver support from at least one graphics vendor, none are listed. Will we see NVIDIA? Intel? AMD? All of the above? We don't know.
When I last discussed the next OpenGL initiative, I was attempting to parse the naming survey to figure out bits of the technology itself. As it turns out, the talk claims to go deep into the API, with demos, examples, and “real-world applications running on glNext drivers and hardware”. If this information makes it out (some talks unfortunately remain private, although this one looks public), then we should know more about it than we do about any competing API today. Personally, I am hoping that they spent a lot of effort on the GPGPU side of things, sort of building graphics atop it rather than having them be two separate entities. This would be especially good if it could be sandboxed for web applications.
This could get interesting.