Subject: Graphics Cards | February 12, 2015 - 01:41 PM | Jeremy Hellstrom
Tagged: overclocking, nvidia, msi, gtx 960, GM206, maxwell
While Ryan was slaving over a baker's dozen of NVIDIA's GTX 960s, [H]ard|OCP focused on overclocking the MSI GeForce GTX 960 GAMING 2G that they recently reviewed. Out of the box this GPU will hit 1366MHz in game, with memory frequency unchanged at 7GHz effective. As users have discovered, overclocking cards with thermal protection that automatically downclocks the GPU once a certain TDP threshold is reached is a little trickier: simply feeding the card more power can raise the temperature enough that you end up with a lower frequency than before you overvolted. After quite a bit of experimentation, [H] managed to boost the memory to a full 8GHz while the in-game GPU clock hit 1557MHz, which is at the higher end of what Ryan saw. The trick was to increase the Power Limit and turn the clock speed up but leave the voltage alone.
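To see why leaving the voltage alone can actually win you clocks on a power-limited card, here is a deliberately crude toy model in Python. This is not NVIDIA's actual boost algorithm, and the constants (base clock, voltage, TDP) are illustrative placeholders rather than measured values.

```python
# Toy model of a power-limited boost clock. NOT NVIDIA's algorithm; the
# constants below are illustrative placeholders, not measured values.

def modeled_boost_clock(offset_mhz, voltage_mv, power_limit_pct,
                        base_clock_mhz=1216, base_voltage_mv=1175, tdp_w=120):
    target = base_clock_mhz + offset_mhz
    # Dynamic power scales roughly with frequency times voltage squared.
    modeled_power = tdp_w * (target / base_clock_mhz) * (voltage_mv / base_voltage_mv) ** 2
    power_cap = tdp_w * power_limit_pct / 100.0
    if modeled_power <= power_cap:
        return target
    # Over the cap: the card pulls the clock back until the power fits.
    return target * power_cap / modeled_power

# Power limit and clock offset up, voltage untouched:
print(round(modeled_boost_clock(offset_mhz=150, voltage_mv=1175, power_limit_pct=110)))
# Same settings plus an overvolt: modeled power rises, so the clock falls.
print(round(modeled_boost_clock(offset_mhz=150, voltage_mv=1250, power_limit_pct=110)))
```

Even in this simplistic model, adding voltage pushes the card further past its power cap and costs more clock than it gains, which is exactly the behavior [H] ran into.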
"We push the new MSI GeForce GTX 960 GAMING video card to its limits of performance by overclocking to its limits. This NVIDIA GeForce GTX 960 GPU based video card has a lot of potential for hardware enthusiasts and gamers wanting more performance. We compare it with other overclocked cards to see if the GTX 960 can keep up."
Here are some more Graphics Card articles from around the web:
- Five GeForce GTX 960 cards overclocked @ The Tech Report
- NVIDIA GTX 960 Reference Review @ Hardware Canucks
- MSI GTX 960 Gaming GPU, The Sweet Spot @ Bjorn3d
- MSI GTX 960 Gaming 2G @ Modders-Inc
- 6-way GeForce GTX 960 Shootout @ Legion Hardware
- Testing Nvidia’s GeForce GTX 960 2GB Graphics Cards In SLI @ eTeknix
- NVIDIA GeForce GTX 960 Video Card Review - EVGA SSC Edition @ NitroWare
- Gigabyte GTX 980 G1 Gaming 4GB @ Modders-Inc
- Sapphire R9 290X Tri-X OC 8GB @ Kitguru
- Corsair H105 on 4770K and R9 290 @ HardwareOverclock
Subject: Graphics Cards | February 11, 2015 - 11:55 PM | Tim Verry
Tagged: strix, maxwell, gtx 750ti, gtx 750 ti, gm107, factory overclocked, DirectCU II
ASUS is launching a new version of its factory overclocked GTX 750 Ti STRIX with double the memory of the existing STRIX-GTX750TI-OC-2GD5. The new card will feature 4GB of GDDR5, but is otherwise identical.
The new graphics card pairs the NVIDIA GM107 GPU and 4GB of memory with ASUS’ dual fan "0dB" DirectCU II cooler. The card can output video over DVI, HDMI, and DisplayPort.
Thanks to the aftermarket cooler, ASUS has factory overclocked the GTX 750 Ti GPU (640 CUDA cores) to a respectable 1124 MHz base and 1202 MHz GPU Boost clockspeeds. (For reference, stock clockspeeds are 1020 MHz base and 1085 MHz boost.) However, while the amount of memory has doubled, the memory clockspeeds remain at the stock 5.4 Gbps (effective).
ASUS has not announced pricing or availability for the new card, but expect it to come soon at a slight premium (~$15) over the $160 2GB STRIX 750Ti.
The additional memory (and its usefulness versus the price premium) is a bit of a head-scratcher considering this is a budget card aimed at delivering decent 1080p gaming. The extra memory may help in cranking up the game graphics settings just a bit more. In the end, the extra memory is nice to have, but if you find a good deal on a 2GB card today, don’t get too caught up on waiting for a 4GB model.
Subject: Graphics Cards, Mobile, Shows and Expos | February 11, 2015 - 03:25 PM | Scott Michaud
Tagged: Tegra X1, nvidia, mwc 15, MWC, gdc 2015, GDC, DirectX 12
On March 3rd, NVIDIA will host an event called “Made to Game”. Invitations have gone out to numerous outlets, including Android Police, who published a censored screenshot of it. This suggests that it will have something to do with the Tegra X1, especially since the date is the day after Mobile World Congress starts. Despite all of this, I think it is for something else entirely.
Image Credit: Android Police
Allow me to highlight two points. First, Tech Report claims that the event is taking place in San Francisco, which is about as far away from Barcelona, Spain as you can get. It is close to GDC, however, which also starts on March 2nd. If this were meant to align with Mobile World Congress, you ideally would not want attendees to take a 14-hour flight for a day trip.
Second, the invitation specifically says: “More than 5 years in the making, what I want to share with you will redefine the future of gaming.” Compare that to the DirectX 12 announcement blog post on March 20th of last year (2014): “Our work with Microsoft on DirectX 12 began more than four years ago with discussions about reducing resource overhead. For the past year, NVIDIA has been working closely with the DirectX team to deliver a working design and implementation of DX12 at GDC ((2014)).”
So yeah, while it might involve the Tegra X1 processor for Windows 10 on mobile devices (the only reason I can think of that they would want Android Police there, apart from "we're inviting everyone everywhere"), I expect that this event is for DirectX 12. I would assume that Microsoft will host their own event involving many partners, but I could see NVIDIA wanting to save a little something for an event of their own. What would that be? No idea.
Subject: General Tech, Graphics Cards, Mobile | February 11, 2015 - 08:00 AM | Scott Michaud
Tagged: shieldtuesday, shield, nvidia, gridtuesday, grid, graphics drivers, geforce, drivers
Update: Whoops! The title originally said "374.52", when it should be "347.52". My mistake. Thanks "Suddenly" in the comments!
Two things from NVIDIA this week: a new driver and a new game for the NVIDIA GRID. The new driver aligns with the release of Evolve, which came out on Tuesday from the original creators of Left 4 Dead. The graphics vendor also claims that it will help Assassin's Creed: Unity, Battlefield 4, Dragon Age: Inquisition, The Crew, and War Thunder. Several SLI profiles were also added, for Alone in the Dark: Illumination, Black Desert, Dying Light, H1Z1, Heroes of the Storm, Saints Row: Gat out of Hell, Total War: Attila, and Triad Wars.
On the same day, NVIDIA released Brothers: A Tale of Two Sons on GRID, bringing the number of available games up to 37. This game came out in August 2013 and received a lot of critical praise. Its control style is unique, using dual-thumbstick gamepads to simultaneously control both characters. More importantly, despite being short, the game is said to have an excellent story, even achieving Game of the Year (2013) for TotalBiscuit based on its narrative, which is not something that he praises often.
I'd comment on the game, but I've yet to get the time to play it. Apparently it is only a couple hours long, so maybe I can fit it in somewhere.
Also, they apparently are now calling this “#SHIELDTuesday” rather than “#GRIDTuesday”. I assume this rebranding is because people may not know that GRID exists, but they would certainly know if they purchased an Android-based gaming device for a couple hundred dollars or more. We could read into this and make some assumptions about GRID adoption rates versus SHIELD purchases, or even purchases of the hardware itself versus their projections, but it would be pure speculation.
Both announcements were made available on Tuesday, for their respective products.
Subject: Graphics Cards | February 7, 2015 - 06:03 PM | Scott Michaud
Tagged: amd, R9, r9 300, r9 390x, radeon
According to WCCFTech, AMD commented on Facebook that they are “putting the finishing touches on the 300 series to make sure they live up to expectation”. I tried looking through AMD's “Posts to Page” for February 3rd and I did not see it listed, so a grain of salt is necessary (either with WCCF or with my lack of Facebook skills).
Image Credit: WCCFTech
The current rumors claim that Fiji XT will have 4096 graphics cores fed by a high-bandwidth, stacked memory architecture, which is supposedly rated at 640 GB/s (versus the 224 GB/s of the GeForce GTX 980). When you're dealing with data sets at the scale that GPUs are, bandwidth is a precious resource. That said, GPUs also have caches and other methods to reduce this dependency, but let's just say that if you offer a graphics vendor a free, nearly threefold jump in memory bandwidth, you will have a friend, and possibly one for life. Need a couch moved? No problem!
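For reference, the arithmetic behind those figures is simple: effective per-pin data rate times bus width. The GTX 980 line in the sketch below reproduces the 224 GB/s quoted above; the HBM line is just one hypothetical stacked-memory configuration that would land at the rumored 640 GB/s, not a confirmed specification.

```python
# Memory bandwidth = effective data rate (Gbps per pin) x bus width (bits) / 8.

def bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_s(7.0, 256))    # GeForce GTX 980: 7 Gbps GDDR5 x 256-bit = 224 GB/s
print(bandwidth_gb_s(1.25, 4096))  # hypothetical HBM: 1.25 Gbps x 4096-bit = 640 GB/s
```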
The R9 300 Series is expected to launch next quarter, which could be as soon as a month or so from now.
Subject: Graphics Cards, Shows and Expos | February 4, 2015 - 03:33 AM | Scott Michaud
Tagged: OpenGL Next, opengl, glnext, gdc 2015, GDC
The first of the new generation of graphics APIs to be released was Mantle, which launched a little while after Battlefield 4, though its SDK is still invite-only. The DirectX 12 API quietly launched with the recent Windows 10 Technical Preview, but no drivers, SDK, or software (that we know about) are available to the public yet. The Khronos Group has announced their project, and that's about it currently.
According to Develop Magazine, the GDC event listing, and participants, the next OpenGL (currently called the “glNext initiative”) will be unveiled at GDC 2015. The talk will be presented by Valve, but it will also include Epic Games, who was closely involved in DirectX 12 with Unreal Engine; Oxide Games and EA/DICE, who were early partners with AMD on Mantle; and Unity, who recently announced support for DirectX 12 when it launches with Windows 10. Basically, this GDC talk includes almost every software developer that came out in early support of either DirectX 12 or Mantle, plus Valve. Off the top of my head, I can only think of Futuremark as unlisted. On the other hand, while they will obviously have driver support from at least one graphics vendor, none are listed. Will we see NVIDIA? Intel? AMD? All of the above? We don't know.
When I last discussed the next OpenGL initiative, that post was attempting to parse the naming survey to figure out bits of the technology itself. As it turns out, the talk claims to go deep into the API, with demos, examples, and “real-world applications running on glNext drivers and hardware”. If this information makes it out (some talks unfortunately remain private, although this one looks public), then we should know more about it than we know about any competing API today. Personally, I am hoping that they spent a lot of effort on the GPGPU side of things, sort of building graphics atop it rather than having the two be separate entities. This would be especially good if it could be sandboxed for web applications.
This could get interesting.
Subject: Graphics Cards | January 30, 2015 - 03:42 PM | Jeremy Hellstrom
Tagged: nvidia, msi gaming 2g, msi, maxwell, gtx 960
Sitting right now at $220 ($210 after MIR), the MSI GTX 960 GAMING 2G is more or less the same price as a standard R9 280, or a 280X/290 on discount. We have seen that its price-to-performance is quite competitive for 1080p gaming, but this year the push seems to be for performance at higher resolutions. [H]ard|OCP explored the performance of this card by testing at 2560x1440 against the GTX 770 and 760 as well as the R9 285 and 280X, with some interesting results. In some cases the 280X came out on top, while in others the 960 won, especially when GameWorks features like God Rays were enabled. Read through it carefully, but from their results the deciding factor between these cards could well be price, as they are very close in performance.
"We take the new NVIDIA GeForce GTX 960 GPU based MSI GeForce GTX 960 GAMING 2G and push it to its limits at 1440p. We also include a GeForce GTX 770 and AMD Radeon R9 280X into this evaluation for some unexpected results. How does the GeForce GTX 960 really compare to a wide range of cards?"
Here are some more Graphics Card articles from around the web:
- Gigabyte GTX 960 G1 Gaming 2 GB @ techPowerUp
- Asus GeForce GTX 960 DirectCU II OC STRIX @ eTeknix
- ASUS GeForce GTX 960 Strix @ Benchmark Reviews
- Asus GeForce GTX 960 Strix OC @ Silent PC Review
- Gigabyte G1 Gaming GeForce GTX 960 @ eTeknix
- GeForce GTX 960 Article Round Up @HiTech Legion
- Linux Benchmarks Of NVIDIA's Early 2015 GeForce Line-Up @ Phoronix
- Corsair HG-10 gpu bracket @ HardwareOverclock
Subject: Graphics Cards | January 28, 2015 - 10:21 AM | Ryan Shrout
Tagged: nvidia, memory issue, maxwell, GTX 970, GM204, geforce
UPDATE 1/29/15: This forum post has since been edited and basically removed, with statements made on Twitter that no driver changes are planned that will specifically target the performance of the GeForce GTX 970.
The story around the GeForce GTX 970 and its confusing and shifting memory architecture continues to update. On a post in the official GeForce.com forums (on page 160 of 184!), moderator and NVIDIA employee PeterS claims that the company is working on a driver to help improve performance concerns and will also be willing to "help out" for users that honestly want to return the product they already purchased. Here is the quote:
First, I want you to know that I'm not just a mod, I work for NVIDIA in Santa Clara.
I totally get why so many people are upset. We messed up some of the stats on the reviewer kit and we didn't properly explain the memory architecture. I realize a lot of you guys rely on product reviews to make purchase decisions and we let you down.
It sucks because we're really proud of this thing. The GTX970 is an amazing card and I genuinely believe it's the best card for the money that you can buy. We're working on a driver update that will tune what's allocated where in memory to further improve performance.
Having said that, I understand that this whole experience might have turned you off to the card. If you don't want the card anymore you should return it and get a refund or exchange. If you have any problems getting that done, let me know and I'll do my best to help.
This makes things a bit more interesting - based on my conversations with NVIDIA about the GTX 970 since this news broke, it was stated that the operating system had a much stronger role in the allocation of memory from a game's request than the driver. Based on the above statement though, NVIDIA seems to think it can at least improve on the current level of performance and tune things to help alleviate any potential bottlenecks that might exist simply in software.
As far as the return goes, PeterS at least offers to help this one forum user but I would assume the gesture would be available for anyone that has the same level of concern for the product. Again, as I stated in my detailed breakdown of the GTX 970 memory issue on Monday, I don't believe that users need to go that route - the GeForce GTX 970 is still a fantastic performing card in nearly all cases except (maybe) a tiny fraction where that last 500MB of frame buffer might come into play. I am working on another short piece going up today that details my experiences with the GTX 970 running up on those boundaries.
NVIDIA is trying to be proactive now, that much we can say. It seems that the company understands its mistake - not in the memory pooling decision but in the lack of clarity it offered to reviewers and consumers upon the product's launch.
Subject: Graphics Cards | January 24, 2015 - 11:51 AM | Ryan Shrout
Tagged: nvidia, maxwell, GTX 970, GM204, 3.5gb memory
UPDATE 1/28/15 @ 10:25am ET: NVIDIA has posted in its official GeForce.com forums that they are working on a driver update to help alleviate memory performance issues in the GTX 970 and that they will "help out" those users looking to get a refund or exchange.
UPDATE 1/26/15 @ 1:00pm ET: We have posted a much more detailed analysis and look at the GTX 970 memory system and what is causing the unusual memory divisions. Check it out right here!
UPDATE 1/26/15 @ 12:10am ET: I now have a lot more information on the technical details of the architecture that cause this issue and more information from NVIDIA to explain it. I spoke with SVP of GPU Engineering Jonah Alben on Sunday night to really dive into the questions everyone had. Expect an update here on this page at 10am PT / 1pm ET or so. Bookmark and check back!
UPDATE 1/24/15 @ 11:25pm ET: Apparently there is some concern online that the statement below is not legitimate. I can assure you that the information did come from NVIDIA, though it is not attributable to any specific person - the message was sent through a couple of different PR people and is the result of meetings and multiple NVIDIA employees' input. It is really a message from the company, not any one individual. I have had several 10-20 minute phone calls with NVIDIA about this issue and this statement on Saturday alone, so I know that the information wasn't from a spoofed email, etc. Also, this statement was posted by an employee moderator on the GeForce.com forums about 6 hours ago, further proving that the statement is directly from NVIDIA. I hope this clears up any concerns around the validity of the below information!
Over the past couple of weeks, users of GeForce GTX 970 cards have noticed and started researching a problem with memory allocation in memory-heavy gaming. Essentially, gamers noticed that the GTX 970, with its 4GB of graphics memory, was only ever accessing 3.5GB of that memory. When it did attempt to access the final 500MB of memory, performance seemed to drop dramatically. What started as simply a forum discussion blew up into news that was being reported at tech and gaming sites across the web.
Image source: Lazygamer.net
NVIDIA has finally responded to the widespread online complaints about GeForce GTX 970 cards only utilizing 3.5GB of their 4GB frame buffer. From the horse's mouth:
The GeForce GTX 970 is equipped with 4GB of dedicated graphics memory. However the 970 has a different configuration of SMs than the 980, and fewer crossbar resources to the memory system. To optimally manage memory traffic in this configuration, we segment graphics memory into a 3.5GB section and a 0.5GB section. The GPU has higher priority access to the 3.5GB section. When a game needs less than 3.5GB of video memory per draw command then it will only access the first partition, and 3rd party applications that measure memory usage will report 3.5GB of memory in use on GTX 970, but may report more for GTX 980 if there is more memory used by other commands. When a game requires more than 3.5GB of memory then we use both segments.
We understand there have been some questions about how the GTX 970 will perform when it accesses the 0.5GB memory segment. The best way to test that is to look at game performance. Compare a GTX 980 to a 970 on a game that uses less than 3.5GB. Then turn up the settings so the game needs more than 3.5GB and compare 980 and 970 performance again.
Here’s an example of some performance data:
| Game / setting | GTX 980 | GTX 970 |
| --- | --- | --- |
| Shadow of Mordor | | |
| <3.5GB setting = 2688x1512 Very High | 72 FPS | 60 FPS |
| >3.5GB setting = 3456x1944 | 55 FPS (-24%) | 45 FPS (-25%) |
| Battlefield 4 | | |
| <3.5GB setting = 3840x2160 2xMSAA | 36 FPS | 30 FPS |
| >3.5GB setting = 3840x2160 135% res | 19 FPS (-47%) | 15 FPS (-50%) |
| Call of Duty: Advanced Warfare | | |
| <3.5GB setting = 3840x2160 FSMAA T2x, Supersampling off | 82 FPS | 71 FPS |
| >3.5GB setting = 3840x2160 FSMAA T2x, Supersampling on | 48 FPS (-41%) | 40 FPS (-44%) |
On Shadow of Mordor, the drop is about 24% on GTX 980 and 25% on GTX 970, a 1% difference. On Battlefield 4, the drop is 47% on GTX 980 and 50% on GTX 970, a 3% difference. On CoD: AW, the drop is 41% on GTX 980 and 44% on GTX 970, a 3% difference. As you can see, there is very little change in the performance of the GTX 970 relative to GTX 980 on these games when it is using the 0.5GB segment.
So it would appear that the severing of a trio of SMMs, to make the GTX 970 different from the GTX 980, was the root cause of the issue. I'm not sure if this is something that we have seen before with NVIDIA GPUs that are cut down in the same way, but I have asked NVIDIA for clarification on that.
The ratios fit: 500MB is 1/8th of the 4GB total memory capacity and 2 SMMs is 1/8th of the total SMM count. (Edit: The ratios in fact do NOT match up...odd.)
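For what it's worth, here is a quick back-of-the-envelope check of that ratio claim, assuming the commonly cited GM204 configuration (16 SMMs in the full chip, 13 enabled on the GTX 970, i.e. the trio mentioned above). The fractions do not line up, which is presumably what prompted the edit:

```python
# Ratio check, assuming the commonly cited SMM counts for GM204 / GTX 970.
total_smm, enabled_smm = 16, 13
disabled_fraction = (total_smm - enabled_smm) / total_smm   # 3/16 = 0.1875
memory_fraction = 0.5 / 4.0                                 # 0.5GB of 4GB = 0.125
print(disabled_fraction, memory_fraction)                   # the two do not match
```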
The full GM204 GPU that is the root cause of this memory issue.
Another theory presented itself as well: is this possibly the reason we do not have a GTX 960 Ti yet? If the patterns from previous generations were followed, a GTX 960 Ti would be a GM204 GPU with fewer cores enabled and additional SMMs disconnected to enable a lower price point. If this memory issue were even more substantial on such a part, creating larger differentiated "pools" of memory, it could be a problem for performance or driver development. To be clear, we are just guessing on this one, and it may not be a factor at all. Again, I've asked NVIDIA for some technical clarification.
Requests for information aside, we may never know for sure if this is a bug with the GM204 ASIC or a predetermined characteristic of the design.
The question remains: does NVIDIA's response appease GTX 970 owners? After all, this memory behavior is just one part of a GPU's overall story, and performance testing and analysis already incorporate it. Some users will still likely claim a "bait and switch", but do the benchmarks above, as well as our own results at 4K, make it a less significant issue?
Our own Josh Walrath offers this analysis:
A few days ago when we were presented with evidence of the 970 not fully utilizing all 4 GB of memory, I theorized that it had to do with the reduction of SMM units. It makes sense from an efficiency standpoint to perhaps "hard code" memory addresses for each SMM. The thought behind that would be that 4 GB of memory is a huge amount for a video card, and the potential performance gains of a more flexible system would be pretty minimal.
I believe that the memory controller is working as intended and that this is not a bug. When designing a large GPU, there will invariably be compromises made. From all indications NVIDIA decided to save time, die size, and power by simplifying the memory controller and crossbar setup. These things have a direct impact on time to market and power efficiency. NVIDIA probably figured that a couple of percentage points of performance lost was outweighed by the added complexity, power consumption, and engineering resources that it would have taken to gain those few percentage points back.
Subject: Graphics Cards | January 23, 2015 - 11:09 PM | Sebastian Peak
Tagged: nvidia, gtx 960, graphics drivers, graphics cards, GeForce 347.25, geforce, game ready, dying light
With the release of GTX 960 yesterday NVIDIA also introduced a new version of the GeForce graphics driver, 347.25 - WHQL.
NVIDIA states that the new driver adds "performance optimizations, SLI profiles, expanded Multi-Frame Sampled Anti-Aliasing support, and support for the new GeForce GTX 960".
While support for the newly released GPU goes without saying, the expanded MFAA support will help provide better anti-aliasing performance to many existing games, as “MFAA support is extended to nearly every DX10 and DX11 title”. In the release notes three games are listed that do not benefit from the MFAA support, as “Dead Rising 3, Dragon Age 2, and Max Payne 3 are incompatible with MFAA”.
347.25 also brings additional SLI profiles for several new games, plus a DirectX 11 SLI profile for one more:
SLI profiles added
- Black Desert
- Lara Croft and the Temple of Osiris
- Zhu Xian Shi Jie
- The Talos Principle
DirectX 11 SLI profile added
- Final Fantasy XIV: A Realm Reborn
The update is also the Game Ready Driver for Dying Light, a zombie action/survival game set to debut on January 27.
Much more information is available under the release notes on the driver download page, and be sure to check out Ryan’s chat with Tom Peterson from the live stream for a lot more information about this driver and the new GTX 960 graphics card.
Subject: General Tech, Graphics Cards | January 23, 2015 - 07:11 PM | Scott Michaud
Tagged: windows 10, microsoft, dx12, DirectX 12, DirectX
Microsoft has added DirectX 12 to the latest Windows 10 Technical Preview, which was released today. Until today, DXDIAG reported DirectX 11 in the Windows 10 Technical Preview. At the moment, no drivers or software have been released for it, and the SDK is nowhere to be found. Really, all this means is that one barrier has been lifted, leaving the burden on hardware and software partners (except for releasing the SDK; that is still Microsoft's responsibility).
No-one needs to know how old my motherboard is...
Note: I have already experienced some issues with Build 9926. Within a half hour of using it, I suffered an instant power-down. There was not even enough time for a bluescreen. When it came back, my Intel GPU (which worked for a few minutes after the update) refused to be activated, along with the monitor it is attached to. My point? Not for production machines.
Update: Looks like a stick of RAM (or some other hardware) blew, coincidentally, about 30 minutes after the update finished, while the computer was running, which also confused my UEFI settings. I haven't got around to troubleshooting much, but it seems like a weirdly-timed, abrupt hardware failure (BIOS is only reporting half of the RAM installed, iGPU is "enabled" but without RAM associated to it, etc.).
The interesting part, to me, is how Microsoft pushed DX12 into this release without, you know, telling anyone. It is not on any changelog that I can see, and it was not mentioned anywhere in the briefing as potentially being in an upcoming preview build. Before the keynote, I had a theory that it would be included but, after the announcement, figured that it might be pushed until GDC or BUILD (but I kept an open mind). The only evidence that it might come this month was an editorial on Forbes that referenced a conversation with Futuremark, who allegedly wanted to release an update to 3DMark (they hoped) when Microsoft released the new build. I could not find anything else, so I didn't report on it -- you would think that there would be a second source for that somewhere. It turns out that he might be right.
The new Windows 10 Technical Preview, containing DirectX 12, is available now from the preview build panel. It looks like Futuremark (and maybe others) will soon release software for it, but no hardware vendor has released a driver... yet.
Subject: General Tech, Graphics Cards | January 22, 2015 - 06:44 PM | Ryan Shrout
Tagged: video, tom petersen, nvidia, maxwell, live, gtx 960, gtx, GM206, geforce
UPDATE 2: If you missed the live stream you missed the prizes! But you can still watch the replay to get all the information and Q&A that went along with it as we discuss the GTX 960 and many more topics from the NVIDIA universe.
UPDATE (1/22): Well, the secret is out. Today's discussion will be about the new GeForce GTX 960, a $199 graphics card that takes power efficiency to a previously unseen level! If you haven't read my review of the card yet, you should do so first, but then be sure you are ready for today's live stream and giveaway - details below! And don't forget: if you have questions, please leave them in the comments!
Get yourself ready, it’s time for another GeForce GTX live stream hosted by PC Perspective’s Ryan Shrout and NVIDIA’s Tom Petersen. Though we can’t dive into the exact details of what topics are going to be covered, intelligent readers that keep an eye on the rumors on our site will likely be able to guess what is happening on January 22nd.
On hand to talk about the products and answer questions about technologies in the GeForce family, including GPUs, G-Sync, GameWorks, GeForce Experience and more, will be Tom Petersen, well known on the LAN party and events circuit. To spice things up, Tom has worked with graphics card partners to bring along a sizeable swag pack to give away LIVE during the event, including new GTX graphics cards. LOTS of graphics cards.
NVIDIA GeForce GTX 960 Live Stream and Giveaway
10am PT / 1pm ET - January 22nd
Need a reminder? Join our live mailing list!
Here are some of the prizes we have lined up for those of you that join us for the live stream:
- 3 x MSI GeForce GTX 960 Graphics Cards
- 4 x EVGA GeForce GTX 960 Graphics Cards
- 3 x ASUS GeForce GTX 960 Graphics Cards
Thanks to ASUS, EVGA and MSI for supporting the stream!
The event will take place Thursday, January 22nd at 1pm ET / 10am PT at http://www.pcper.com/live. There you’ll be able to catch the live video stream as well as use our chat room to interact with the audience, asking questions for me and Tom to answer live. To win the prizes you will have to be watching the live stream, with exact details of the methodology for handing out the goods coming at the time of the event.
Tom has a history of being both informative and entertaining and these live streaming events are always full of fun and technical information that you can get literally nowhere else. Previous streams have produced news as well – including statements on support for Adaptive Sync, release dates for displays and first-ever demos of triple display G-Sync functionality. You never know what’s going to happen or what will be said!
If you have questions, please leave them in the comments below and we'll look through them just before the start of the live stream. Of course you'll be able to tweet us questions @pcper and we'll be keeping an eye on the IRC chat as well for more inquiries. What do you want to know and hear from Tom or me?
So join us! Set your calendar for this coming Thursday at 1pm ET / 10am PT and be here at PC Perspective to catch it. If you are a forgetful type of person, sign up for the PC Perspective Live mailing list that we use exclusively to notify users of upcoming live streaming events including these types of specials and our regular live podcast. I promise, no spam will be had!
Subject: Graphics Cards | January 22, 2015 - 01:44 PM | Jeremy Hellstrom
Tagged: video, nvidia, msi gaming 2g, maxwell, gtx 960, GM206, geforce
Did Ryan somehow miss a benchmark that is important to you? Perhaps [H]ard|OCP's coverage of the MSI GeForce GTX 960 GAMING 2G will capture that certain something. MSI runs their 960 at a base of 1216MHz with the boost clock hitting 1279MHz, slightly slower than the ASUS STRIX at 1291 MHz and 1317 MHz. At the time this was posted the card was available on Amazon for $210; that is obviously going to change, so keep an eye out. As [H] states in their conclusions, it is a good value, but not the great value which the GTX 970 offered at release. Check out their full review here, or one of the many linked below.
"NVIDIA is today launching a GPU aimed at the "sweet spot" of the video card market. With an unexpectedly low MSRP, we find out if the new GeForce GTX 960 has what it takes to compete with the competition. The MSI GTX 960 GAMING reviewed here today is a retail card you will be able to purchase. No reference card in this review."
Here are some more Graphics Card articles from around the web:
- Nvidia's GeForce GTX 960 @ The Tech Report
- Zotac GTX 960 AMP!-edition @ Bjorn3d
- NVIDIA GeForce GTX 960: A Great $200 GPU For Linux Gamers @ Phoronix
- Palit GTX 960 Super JetStream 2 GB @ techPowerUp
- Gigabyte GTX 960 G1 Gaming 2GB @ Modders-Inc
- NVIDIA, MSI, EVGA GTX 960 Review @ OCC
- NVIDIA GeForce GTX 960 SLI @ techPowerUp
- EVGA GTX 960 Super Superclocked Video Card Review @ Hardware Asylum
- ASUS STRIX GTX 960 Review @ Neoseeker
- MSI GTX 960 Gaming OC 2 GB @ techPowerUp
- GTX 960 @ HardwareHeaven
- Gigabyte GTX960 G1 Gaming SOC @ Kitguru
- EVGA GTX 960 SSC 2 GB @ techPowerUp
- ASUS GTX 960 STRIX OC 2 GB @ techPowerUp
- Asus GTX960 Strix OC Edition @ Kitguru
- ASUS Strix Edition GeForce GTX 960 Graphics Card Review @ Techgage
- Palit GeForce GTX 960 JetStream @ Legion Hardware
- The NVIDIA GTX 960 Performance Review @ Hardware Canucks
- EVGA GeForce GTX 970 SSC ACX 2.0 @ HardwareOverclock
- NVIDIA GeForce GTX 970/980: Windows vs. Ubuntu Linux Performance @ Phoronix
- 22-Way AMD+NVIDIA Graphics Card Tests With Metro Redux On Steam For Linux @ Phoronix
Subject: General Tech, Graphics Cards | January 16, 2015 - 10:37 PM | Scott Michaud
Tagged: Khronos, opengl, OpenGL ES, webgl, OpenGL Next
The Khronos Group probably wants some advice from graphics developers because they ultimately want to market to them, as the future platform's success depends on their applications. If you develop games or other software (web browsers?) then you can give your feedback. If not, then it's probably best to leave responses to its target demographic.
As for the questions themselves, first and foremost they ask if you are (or were) an active software developer. From there, they ask you to score your opinion on OpenGL, OpenGL ES, and WebGL. They then ask whether you value “Open” or “GL” in the title. They then ask you whether you feel like OpenGL, OpenGL ES, and WebGL are related APIs. They ask how you learn about the Khronos APIs. Finally, they directly ask you for name suggestions and any final commentary.
Now it is time to (metaphorically) read tea leaves. The survey seems written primarily to establish whether developers consider OpenGL, OpenGL ES, and WebGL as related libraries, and to gauge their overall interest in each. If you look at the way OpenGL ES has been developing, it has slowly brought mobile graphics into a subset of desktop GPU features. It is basically an on-ramp to full OpenGL.
We expect that, like Mantle and DirectX 12, the next OpenGL initiative will be designed around efficiently loading massively parallel processors, with a little bit of fixed-function hardware for common tasks, like rasterizing triangles into fragments. The name survey might be implying that the Next Generation OpenGL Initiative is intended to be a unified platform, for high-end, mobile, and even web. Again, modern graphics APIs are based on loading massively parallel processors as directly as possible.
If you are a graphics developer, the Khronos Group is asking for your feedback via their survey.
Subject: Graphics Cards | January 14, 2015 - 10:49 AM | Sebastian Peak
Tagged: rumors, NVIDA, leak, gtx 960, gpu, geforce
The GPU news and rumor site VideoCardz.com had yet another post about the GTX 960 yesterday, and this time the site claims they have most of the details about this unreleased GPU with new leaked photos from a forum on the Chinese site PCEVA.
The card is reportedly based on the Maxwell GM206, a 1024 CUDA core part (the same core count as the recently announced GTX 965M). Clock speed was not listed, but alleged screenshots indicate the sample had a 1228 MHz core and 1291 MHz Boost clock. The site is calling this an overclock, but it's still likely that the core would have a faster clock speed than the GTX 970 and 980.
The card will reportedly feature 2GB of GDDR5 memory on a 128-bit bus, though 4GB variants will likely be available from the various vendors after launch (an important option considering the possibility of the new card natively supporting three DisplayPort monitors). Performance will clearly be a step down from the initial GTX 900-series offerings, as NVIDIA has led with its more performant parts, but the 960 should still be a solid choice for 1080p gaming if these screenshots are real.
The specs as listed on the page at VideoCardz.com are as follows (they do not list the core clock speed); a quick consistency check on the memory figures follows the list:
- 28nm GM206-300 GPU
- 1024 CUDA cores
- 64(?) TMUs
- 32 ROPs
- 1753 MHz memory
- 128-bit memory bus
- 2GB memory size
- 112 GB/s memory bandwidth
- DirectX 11.3/12
- 120W TDP
- 1x 6-pin power connector
- 1x DVI-I, 1x HDMI 2.0, 3x DP
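As a sanity check on those leaked numbers (our arithmetic, not VideoCardz.com's), the listed memory clock, bus width, and bandwidth are at least internally consistent:

```python
# GDDR5 moves data at four times its command clock, so 1753 MHz works out to
# roughly 7 Gbps per pin; over a 128-bit bus that is the quoted ~112 GB/s.
memory_clock_mhz = 1753
bus_width_bits = 128
effective_gbps = memory_clock_mhz * 4 / 1000          # ~7.0 Gbps
bandwidth_gb_s = effective_gbps * bus_width_bits / 8  # ~112.2 GB/s
print(effective_gbps, bandwidth_gb_s)
```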
We await official word on pricing and availability for this unreleased GPU.
Subject: Graphics Cards | January 13, 2015 - 02:28 PM | Sebastian Peak
Tagged: rumors, nvidia, multi monitor, mini-ITX GPU, leak, HDMI 2.0, gtx 960, gpu, geforce, DisplayPort
The crew at VideoCardz.com have been reporting some GTX 960 sightings lately, and today they've added no less than three new cards from KFA2, the "European premium brand" of Galaxy.
The reported reference design GTX 960 (VideoCardz.com)
Such reports are becoming more common, with the site posting photos that appear to be other vendors' versions of the new GPU here, here, and here. Of note with these new alleged photos on what appears to be a reference design board: no less than three DisplayPort outputs, as well as HDMI 2.0 and DVI:
Reported GTX 960 outputs (VideoCardz.com)
This would be big news for multi-monitor users, as it would provide potential support for three high-resolution DisplayPort monitors from a single card in a strictly non-gaming environment (unless you happen to enjoy the frame rates of an oil painting).
The reported mini-ITX GTX 960 (VideoCardz.com)
The other designs shown in the post include a mini-ITX form-factor design still sporting the triple DisplayPorts, HDMI and DVI, and a larger EXOC edition built on a custom PCB.
Reported EXOC GTX 960 (VideoCardz.com)
The EXOC edition apparently drops the multi-DisplayPort option in favor of a second DVI output, leaving just one DisplayPort along with the lone HDMI 2.0 output.
With the GTX 960 leaks coming in daily now, it seems likely that we will be hearing something official soon.
Subject: Graphics Cards | January 13, 2015 - 12:22 PM | Ryan Shrout
Tagged: rumor, radeon, r9 380x, 380x
Spotted over at TechReport.com this morning and sourced from a post at 3dcenter.org, it appears that some additional information about the future Radeon R9 380X is starting to leak out through AMD employee LinkedIn pages.
Ilana Shternshain is an ASIC physical design engineer at AMD with more than 18 years of experience, 7-8 of those years with AMD. Under the background section is the line "Backend engineer and team leader at Intel and AMD, responsible for taping out state of the art products like Intel Pentium Processor with MMX technology and AMD R9 290X and 380X GPUs." A bit further down is an experience listing of the PlayStation 4 APU as well as "AMD R9 380X GPUs (largest in “King of the hill” line of products)."
Interesting - though not entirely enlightening. More interesting were the details found on Linglan Zhang's LinkedIn page (since removed):
Developed the world’s first 300W 2.5D discrete GPU SOC using stacked die High Bandwidth Memory and silicon interposer.
Now we have something to work with! A 300 watt TDP would make the R9 380X more power hungry than the current R9 290X Hawaii GPU. High-bandwidth memory likely implies memory located on the same substrate as the GPU itself, conceptually similar to the embedded memory on the Xbox One APU, though the configurations could differ in considerable ways. A bit of research on the silicon interposer reveals it as an implementation method for 2.5D chips:
There are two classes of true 3D chips which are being developed today. The first is known as 2½D where a so-called silicon interposer is created. The interposer does not contain any active transistors, only interconnect (and perhaps decoupling capacitors), thus avoiding the issue of threshold shift mentioned above. The chips are attached to the interposer by flipping them so that the active chips do not require any TSVs to be created. True 3D chips have TSVs going through active chips and, in the future, have potential to be stacked several die high (first for low-power memories where the heat and power distribution issues are less critical).
An interposer would allow the GPU and stacked-die memory to be built on different process technologies, for example, but could also make the chips more fragile during final assembly. Obviously there are a lot more questions than answers based on these rumors sourced from LinkedIn, but it's interesting to attempt to gauge where AMD is headed in its continued quest to take back market share from NVIDIA.
Subject: Graphics Cards, Shows and Expos | January 8, 2015 - 12:46 AM | Ryan Shrout
Tagged: video, maxwell, Kingpin, hydro copper, GTX 980, GM204, evga, classified, ces 2015, CES
EVGA posted up in its normal location at CES this year and had its entire lineup of goodies on display. We spotted a pair of new graphics cards too: the GTX 980 Hydro Copper and the GTX 980 Classified Kingpin Edition.
Though we have seen EVGA water cooling on the GTX 980 already, the new GTX 980 Hydro Copper uses a self-contained water cooler, built by Asetek, rather than a complete GPU water block. The memory and power delivery are cooled by the rest of the original heatsink and blower fan, but because of the lowered GPU temperatures, the fan will nearly always spin at its lowest RPM.
Speaking of temperatures, EVGA is saying that GPU load temperatures will be in the 40-50C range, significantly lower than what you have with even the best air coolers on the GTX 980 today. As for users that already have GTX 980 cards, EVGA is planning to sell the water cooler individually so you can upgrade yourself. Pricing isn't set on this but it should be available sometime in February.
Fans of the EVGA Classified Kingpin series will be glad to know that the GTX 980 iteration is nearly ready, also available in February and also without a known MSRP.
EVGA has included an additional 6-pin power connector, rearranged the memory traces and layout for added memory frequency headroom, and includes a single-slot bracket for any users that eventually swap the impressive air cooler for a full-on water block.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Graphics Cards | January 7, 2015 - 10:51 AM | Sebastian Peak
Tagged: nvidia, notebook, mobile graphics, mobile gpu, GeForce 965M
With zero fanfare NVIDIA has released a new mobile graphics chip today, the GeForce GTX 965M.
Based on the 28nm Maxwell GM204 core and positioned just below the existing GTX 970M, the new GTX 965M has 1024 CUDA cores (compared to the 970M's 1280) and a narrower 128-bit memory interface (vs 192-bit on the 970M). The base clock is slightly faster at 944 MHz (plus unspecified Boost headroom).
Compared with the flagship GTX 980M, which boasts 1536 CUDA cores and a 256-bit GDDR5 interface, this new GTX 965M will be a significantly lower performer, but NVIDIA is marketing it towards 1080p mobile gaming. At a lower cost to OEMs, the 965M should help create some less expensive 1080p gaming notebooks as the new GPU is adopted.
The chip features proprietary NVIDIA Optimus and Battery Boost support, and is GameStream, ShadowPlay, and GameWorks ready.
Specs from NVIDIA:
- CUDA Cores: 1024
- Base Clock: 944 MHz + Boost
- Memory Clock: 2500 MHz
- Memory Interface: GDDR5
- Memory Interface Width: 128-bit
- Memory Bandwidth: 80 GB/sec
- DirectX API: 12
- OpenGL: 4.4
- OpenCL: 1.1
- Display Resolution: Up to 3840x2160
More information on this new mobile GPU can be found via the source link.
Subject: Graphics Cards, Displays, Shows and Expos | January 7, 2015 - 03:11 AM | Ryan Shrout
Tagged: video, radeon, monitor, g-sync, freesync, ces 2015, CES, amd
It finally happened - later than I had expected - we got hands-on time with nearly-ready FreeSync monitors! That's right, AMD's alternative to G-Sync will bring variable refresh gaming technology to Radeon gamers later this quarter, and AMD had the monitors on hand to prove it. On display were an LG 34UM67 running at 2560x1080 on IPS technology, a Samsung UE590 with a 4K resolution and AHVA panel, and a BenQ XL2730Z 2560x1440 TN screen.
The three monitors sampled at the AMD booth showcase the wide array of units that will be available this year using FreeSync, possibly even this quarter. The LG 34UM67 uses the 21:9 aspect ratio that is growing in popularity, along with solid IPS panel technology and a 60 Hz top refresh rate. However, there is a new specification to be concerned with on FreeSync as well: minimum refresh rate. This is the refresh rate the monitor needs to maintain to avoid artifacting and flickering that would be visible to the end user. For the LG monitor that minimum is 40 Hz.
What happens below that limit, and above it, differs from what NVIDIA has decided to do. For FreeSync (and the Adaptive Sync standard as a whole), when a game renders at a frame rate above or below this VRR window, the V-Sync setting is enforced. That means on a 60 Hz panel, if your game runs at 70 FPS, you have the option to enable or disable V-Sync: you can either force a 60 FPS top limit or allow 70 FPS with screen tearing. If your game runs under the 40 Hz bottom limit, say at 30 FPS, you get the same option: V-Sync on or V-Sync off. With it off you get tearing but optimal input/display latency, while with it on you reintroduce frame judder as the frame rate crosses between V-Sync steps.
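To make that behavior concrete, here is a tiny sketch of the decision logic as described above, using the LG panel's 40-60 Hz window. It is purely illustrative and is not AMD's driver code.

```python
# Illustrative decision table for FreeSync behavior as described above.
# Not AMD driver logic; the 40-60 Hz window matches the LG 34UM67 example.

def freesync_behavior(fps, vrr_min=40, vrr_max=60, vsync_on=False):
    if vrr_min <= fps <= vrr_max:
        return "variable refresh: the panel refreshes in step with the game"
    if vsync_on:
        return "V-Sync enforced: frame rate capped/stepped, judder possible"
    return "V-Sync off: tearing, but minimal added latency"

for fps in (30, 50, 70):
    print(fps, "FPS ->", freesync_behavior(fps, vsync_on=False))
```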
There are potential pitfalls to this solution though; what happens when you cross into that top or bottom region can cause issues depending on the specific implementation. We'll be researching this very soon.
Notice this screen shows FreeSync Enabled and V-Sync Disabled, and we see a tear.
FreeSync monitors have the benefit of using industry-standard scalers, which means they won't be limited to a single DisplayPort input. Expect to see a range of inputs including HDMI and DVI, though the VRR technology will only work over DisplayPort.
We have much more to learn and much more to experience with FreeSync but we are eager to get one in the office for testing. I know, I know, we say that quite often it seems.
Follow all of our coverage of the show at http://pcper.com/ces!