Subject: Graphics Cards | May 12, 2015 - 04:11 PM | Ryan Shrout
Tagged: GTX 980, giveaway, contest, asus, anniversary
I bet many of you are going to feel a little old as you read this, but ASUS is celebrating its 20th anniversary of being in the graphics card market! Starting in 1996, when ASUS launched its first discrete graphics product based on the S3 Virge/DX GPU and running through 2015 with the release of updated GTX 900-series and AMD Radeon 200-series cards, ASUS has been a pivotal part of the GPU landscape.
To celebrate this achievement, ASUS built a new, gold, 20th anniversary edition of its GeForce GTX 980 product, limited to only 200 units in all of North America! And the best part? You can win one right here on PC Perspective for FREE!
This amazing graphics card has some killer features:
- GTX 980 GPU
- 4GB GDDR5 memory at 7.0 GHz
- 1317 MHz Base / 1431 MHz Boost clocks
- 8+8-pin power connections
- DirectCU II 0db Fan Technology
- Memory Defroster
- "Safe Mode" VBIOS reload
- 14-phase Super Alloy power
How do you enter for this prize? Pay attention, there are some specifics!
- First and foremost, you must be subscribed to the ASUS PCDIY email list, which you can sign up for at http://pcdiy.asus.com/. Repeat: this is a requirement!
- You must leave a comment on the news story here and tell us what you love about ASUS graphics cards or the new 20th anniversary GTX 980 Special Edition!
- This is a global contest - so feel free to enter from anywhere in the world!
- You can gain other entries by utilizing the Gleam.io form below!
The contest will end on May 15th at 9pm ET so be sure to get your entries in early!
Subject: Graphics Cards | May 6, 2015 - 06:21 PM | Ryan Shrout
Tagged: amd, hbm, radeon, gpu
During today's 2015 AMD Financial Analyst Day, CEO Dr. Lisa Su discussed some of the details of the upcoming enthusiast Radeon graphics product. Though it wasn't given a name, she repeatedly said that the product would be announced "in the coming weeks...at upcoming industry events."
You won't find specifications here, but understanding the goals and targets AMD has for this new flagship product helps tell its story. Dr. Su sees AMD investing at very specific inflection points, the most recent of which are DirectX 12, 4K displays and VR technology. With the adoption of HBM (high bandwidth memory), which sits on-package with the GPU rather than spread across the PCB, we will see both a reduction in power consumption and a significant increase in GPU memory bandwidth.
HBM will accelerate the performance improvements at those key inflection points Dr. Su mentioned. Additional memory bandwidth will aid the ability for discrete GPUs to push out 4K resolutions and beyond, no longer limited by texture sizes. AMD's LiquidVR software, in conjunction with HBM, will be able to improve latency and reduce performance concerns on current and future generations of virtual reality hardware.
One interesting comment made during the conference was that HBM will enable new form factors for GPUs now that memory no longer needs to be spread out across a PCB. While there isn't much room for differentiation in the add-in card market, in the mobile space that could mean some very interesting things for higher performance gaming notebooks.
Mark Papermaster, AMD's CTO, said earlier in the presentation that HBM would aid performance but, perhaps more importantly, will lower power and improve total GPU efficiency. HBM offers more than 3x the performance per watt of GDDR5 while also drawing more than 50% less power. Upgrades that both lower power and raise performance don't come along often, so I am really excited to see what AMD does with it.
There weren't any more details on the next flagship Radeon GPU but it doesn't look like we'll have to wait much longer.
Subject: Graphics Cards | May 5, 2015 - 04:46 PM | Sebastian Peak
Tagged: The Witcher 3, nvidia, GTX 980, GTX 970, gtx, geforce, batman arkham knight
NVIDIA has announced a new game bundle for GeForce graphics cards starting now, and it’s a doozy: Purchase a qualifying GTX card and receive download codes for both Witcher 3: Wild Hunt and Batman: Arkham Knight!
Needless to say, both of these titles have been highly anticipated, and neither has been released just yet: Witcher 3: Wild Hunt is due on May 19, and Batman: Arkham Knight arrives on June 23. So which cards qualify? Amazon has a page specifically created for this new offer here, and depending on which card you select you’ll be eligible for either both upcoming games, or just Witcher 3. NVIDIA has this chart on their promotion page for reference:
So GeForce GTX 980 and GTX 970 cards will qualify for both games, and GTX 960 cards and the mobile GPUs qualify for Witcher 3 alone. Regardless of which game or card you may choose these PC versions of the new games will feature graphics that can’t be matched by consoles, and NVIDIA points out the advantages in their post:
"Both Batman and The Witcher raise the bar for graphical fidelity, rendering two very different open worlds with a level of detail we could only dream of last decade. And on PC, each is bolstered by NVIDIA GameWorks effects that increase fidelity, realism, and immersion. But to use those effects in conjunction with the many other available options, whilst retaining a high frame rate, it’s entirely possible that you’ll need an upgrade."
NVIDIA also posted this Witcher 3: Wild Hunt behind-the-scenes video:
In addition to the GameWorks effects present in Witcher 3, NVIDIA worked with developer Rocksteady on Batman: Arkham Knight “to create new visual effects exclusively for the game’s PC release.” The post elaborates:
"This is in addition to integrating the latest and greatest versions of our GPU-accelerated PhysX Destruction, PhysX Clothing and PhysX Turbulence technologies, which work in tandem to create immersive effects that react realistically to external forces. In previous Batman: Arkham games, these technologies created immersive fog and ice effects, realistic destructible scenery, and game-enhancing cloth effects that added to the atmosphere and spectacle. Expect to see similar effects in Arkham Knight, along with new features and effects that we'll be talking about in depth as we approach Arkham Knight’s June 23rd release."
Of note, mobile GPUs included in the promotion will only receive download codes if the seller of the notebook is participating in the "Two Times The Adventure" offer, so be sure to check that out when looking for a gaming laptop that qualifies!
The promotion is going on now and is available “for a limited time or while supplies last”.
Subject: Graphics Cards | May 1, 2015 - 12:28 AM | Sebastian Peak
Tagged: PC Gamer, gpu, Fiji, E3 2015, amd
We haven’t had much more than rumor and speculation about upcoming AMD graphics for a while now, but today there is too much fresh fuel on the GPU fire to ignore. It seems that AMD and PC Gamer magazine have teamed up to announce a special (what else) PC gaming event at this year’s E3 show on June 16, and this would be the perfect place for some new hardware announcements.
Not enough for you? Well, while the AMD Fiji GPU rumors are nothing new to followers of industry news, it has now been indirectly announced that the upcoming Fiji GPU from AMD will in fact feature 2.5D high-bandwidth memory (HBM). As reported by tech news/rumor site wccftech the announcement came via the official schedule for the upcoming Hot Chips symposium, which is slated for August 23-25 in Cupertino, California.
This screenshot was taken this morning from the official online event schedule
(Note: This part of the day 2 schedule has now been changed to read “AMD’s Next Generation GPU and Memory Architecture”, with all mention of Fiji and HBM removed.)
Whether this gives us insight into the actual release date of the long-awaited Fiji GPU from AMD is unclear, but new AMD GPU products certainly seem to be imminent as we move into the summer months. Speculation is fun (for a while), but hopefully the PC gaming event at E3 in June will provide at least some official news from AMD on the new GPU products we've been waiting for.
Subject: Graphics Cards | April 29, 2015 - 07:49 PM | Jeremy Hellstrom
Tagged: GeForce 352.63, beta, windows 10
The new Win 10 NVIDIA GeForce driver is here, in two different flavours depending on your form factor. If you spent the money on a gaming laptop with a GeForce 600M through 900M GPU, then this is the driver for you. On the other hand, if you have a traditional desktop and a GPU or two, then head to this page.
If you have a Sony laptop you should double-check that your GPU is covered, and unfortunately at this point Hybrid Power technology is not supported. NVIDIA did not provide much additional information on the desktop side; it is a beta, and so is the OS, so make sure to record the full information about your bugs and crashes when reporting them, not just a frowny face followed by expletives.
Subject: Graphics Cards | April 21, 2015 - 08:07 PM | Jeremy Hellstrom
Tagged: GTA5, gaming, titan x, GTX 980, R9 290X, r9 295x2
Some sort of game involving driving stolen prostitutes into cars in an open sore world has arrived and the questions about what it takes to make the game look good are popping up like pills. [H]ard|OCP seems to have heard of the game and tested out its performance on the top performing video cards from AMD and NVIDIA in both single and doubles. You will get more out of a double but unfortunately only around a 50% improvement so obviously that second shot is watered down a bit. In the end the GTX TITAN X was the best choice for those who want to crank everything up, with the 980 tasting slightly better than the 290X for those that actually have to ask the price. Check the full review here.
"Grand Theft Auto V has finally been released on the PC. In this preview we will look at some video card comparisons in performance, maximize graphics settings at 1440p and 4K. We will briefly test AMD CHS Shadow and NVIDIA PCSS shadow and talk about them. We will even see if SLI and CrossFire work."
Here are some more Graphics Card articles from around the web:
Subject: General Tech, Graphics Cards | April 20, 2015 - 11:30 AM | Scott Michaud
Tagged: Red Hat, Khronos
With a brief blog post, Red Hat has announced that they are now members of the Khronos Group. Red Hat, one of the largest vendors of Linux software and services, would like to influence the direction of OpenGL and the upcoming Vulkan API. Also, apart from Valve, they are one of the only Linux vendors that contributes to the Khronos Group as an organization. I hope that their input counter-balances Apple, Google, and Microsoft, who are each members, in areas that are beneficial to the open-source operating system.
As for now, Red Hat intends to use their membership to propose OpenGL extensions as well as influence Vulkan as previously mentioned. It also seems reasonable that they would push for extensions to Vulkan, which the Khronos Group mentioned would support extensions at GDC, especially if something that they need fails to reach “core” status. While this feels late, I am glad that they at least joined now.
Subject: General Tech, Graphics Cards, Processors | April 19, 2015 - 06:08 PM | Scott Michaud
Tagged: moores law, Intel
While he was the director of research and development at Fairchild Semiconductor, Gordon E. Moore predicted that the number of components in an integrated circuit would double every year. Later, this time-step would slow to every two years; you can occasionally hear people cite eighteen months as well, though I am not sure who derived that number. A few years after the prediction, he went on to found Intel with Robert Noyce, where the company now spends tens of billions of dollars annually to keep up with the prophecy.
It works out for the most part, but we have been running into physical issues over the last few years. One major problem is that, with process technology dipping into the single- and low double-digit nanometers, we are running out of physical atoms to manipulate. The distance between silicon atoms in a solid at room temperature is about 0.5nm; a 14nm product therefore has features spanning about 28 atoms, give or take a few in rounding error.
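The arithmetic behind that atom count is simple enough to sketch. Here is a quick back-of-the-envelope check in Python; the 0.5nm spacing is the approximate figure quoted above, not an exact lattice constant:

```python
# Back-of-the-envelope check: how many silicon atoms span one feature?
SILICON_ATOM_SPACING_NM = 0.5  # approximate atom-to-atom spacing quoted above

def atoms_per_feature(feature_size_nm: float) -> float:
    """Rough count of silicon atoms spanning one lithographic feature."""
    return feature_size_nm / SILICON_ATOM_SPACING_NM

print(atoms_per_feature(14))  # 14nm process -> 28.0 atoms, as stated
print(atoms_per_feature(7))   # a hypothetical 7nm process -> 14.0 atoms
```

Halve the feature size a couple more times and you are counting atoms on your fingers, which is the physical wall the article is describing.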
It has been a good fifty years since the start of Moore's Law, and humanity has been developing plans for how to cope with the eventual end of silicon lithography process shrinks. We will probably transition to smaller atoms and molecules, and later consider alternative technologies like photonic crystals, which route light in the hundreds of terahertz through a series of waveguides that make up an integrated circuit. Another interesting thought: will these technologies fall in line with Moore's Law in some way?
Subject: Graphics Cards | April 14, 2015 - 05:27 AM | Scott Michaud
Tagged: nvidia, amd, GTA5
Grand Theft Auto V launched today at around midnight GMT worldwide. This corresponded to 7PM EDT for those of us in North America. Well, add a little time for Steam to unlock the title and a bit longer for Rockstar to get enough servers online. One thing you did not need to wait for was new video card drivers. Both AMD and NVIDIA have day-one drivers that provide support.
You can get the NVIDIA drivers at their landing page
You can get the AMD drivers at their release notes
Personally, I ran the game for about a half hour on Windows 10 (Build 10049) with a GeForce GTX 670. Since these drivers are not for the pre-release operating system, I tried running it on 349.90 to see how it performed before upgrading. Surprisingly, it seems to be okay (apart from a tree that was flickering in and out of existence during a cut-scene). I would definitely update my drivers if they were available and supported, but I'm glad that it seems to be playable even on Windows 10.
Subject: Graphics Cards | April 6, 2015 - 08:54 PM | Jeremy Hellstrom
Tagged: factory overclocked, powercolor pcs+, R9 290X
The lowest priced GTX 980 on Amazon is currently $530, while the PowerColor PCS+ R9 290X is $380, about 72% of the price of the GTX 980. The performance [H]ard|OCP saw after overclocking the 290X was much closer than that gap suggests: in some games it even matched the GTX 980, though it was usually about 5-10% slower, making it quite obvious which card is the better value. The GTX 970 is a different story: you can find a card for $310, and its performance is only slightly behind the 290X, although the 290X takes a larger lead at higher resolutions. Read through the review carefully, as the performance delta and overall smoothness vary from game to game, but unless you like paying to brag about a handful of extra frames, the 970 and 290X are the cards offering the best bang for your buck.
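If you want to reproduce the value math yourself, a quick sketch follows. The prices are the street prices quoted above; the relative performance figures are rough assumptions based on the review's "5-10% slower" and "slightly behind" characterizations, not measured benchmark results:

```python
# Street prices quoted above (USD, at time of writing).
prices = {"GTX 980": 530, "PCS+ R9 290X": 380, "GTX 970": 310}

# Rough relative performance normalized to the GTX 980 = 1.00.
# Assumed figures for illustration only, not benchmark data.
relative_perf = {"GTX 980": 1.00, "PCS+ R9 290X": 0.92, "GTX 970": 0.90}

ratio = prices["PCS+ R9 290X"] / prices["GTX 980"]
print(f"290X / 980 price ratio: {ratio:.0%}")  # -> 72%

for card in prices:
    value = relative_perf[card] / prices[card] * 1000
    print(f"{card}: {value:.2f} relative perf per $1000")
```

Even with generous assumptions for the GTX 980, the perf-per-dollar column lands in the same order the review does: 970 first, 290X second, 980 last.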
"Today we examine what value the PowerColor PCS+ R9 290X holds compared to overclocked GeForce GTX 970. AMD's Radeon R9 290X pricing has dropped considerably since launch and constitutes a great value and competition for the GeForce GTX 970. At $350 this may be an excellent value compared to the competition."
Here are some more Graphics Card articles from around the web:
- Sapphire R9 290X Tri-X 8GB CrossFireX @ eTeknix
- Bitspower MSI GTX 970 Full Cover Waterblock @ Modders-Inc
- PNY GTX 980 XLR8 Pro OC Review @ Hardware Canucks
- NVIDIA GeForce GTX TITAN X Overclocking & Best Playable Settings @ Techgage
- NVIDIA GeForce GTX TITAN X @ NitroWare
Subject: Graphics Cards | March 27, 2015 - 08:02 PM | Jeremy Hellstrom
Tagged: gtx titan x, linux, nvidia
Perhaps somewhere out there is a Linux user who wants a TITAN X, and if so they will like the results of Phoronix's testing. The card works perfectly straight out of the box with the latest 346.47 driver as well as the 349.12 Beta; if you want to use Nouveau, then don't buy this card. The TITAN X did not win any awards for power efficiency, but in OpenCL tests, synthetic OpenGL benchmarks, and Unigine on Linux it walked away a clear winner. Phoronix, and many others, hope that AMD is working on an updated Linux driver to accompany the new 300 series of cards we will see soon, to help them be more competitive on open source systems.
If you are sick of TITAN X reviews by now, just skip to their 22 GPU performance roundup of Metro Redux.
"Last week NVIDIA unveiled the GeForce GTX TITAN X during their annual GPU Tech Conference. Of course, all of the major reviews at launch were under Windows and thus largely focused on the Direct3D performance. Now that our review sample arrived this week, I've spent the past few days hitting the TITAN X hard under Linux with various OpenGL and OpenCL workloads compared to other NVIDIA and AMD hardware on the binary Linux drivers."
Here are some more Graphics Card articles from around the web:
- Nvidia Geforce GTX Titan X 12GB @ Kitguru
- Nvidia GeForce GTX Titan X @ Legion Hardware
- Asus GeForce GTX 970 DirectCU Mini @ Kitguru
- ASUS STRIX GTX 960 DirectCU II OC @ [H]ard|OCP
- Zotac GeForce GTX980 AMP Omega Edition @ Bjorn3d
- PowerColor R9 285 2GB Turbo Duo @ Modders-Inc
- The Best Graphics Solution You Can Buy For Around £1000: Sapphire 295X2’s @ eTeknix
Subject: Graphics Cards | March 23, 2015 - 11:30 AM | Scott Michaud
Tagged: quadro, nvidia, m6000, gm200
Alongside the Titan X, NVIDIA has announced the Quadro M6000. In terms of hardware, they are basically the same component: 12 GB of GDDR5 on a 384-bit memory bus, 3072 CUDA cores, and a reduction in double precision performance to 1/32nd of its single precision. The memory, but not the cache, is capable of ECC (error-correction) for enterprises who do not want a stray photon to mess up their computation. That might be the only hardware difference between it and the Titan X.
Compared to other Quadro cards, it loses some double precision performance as mentioned earlier, but it will be an upgrade in single precision (FP32). The add-in board connects to the power supply with just a single eight-pin plug. Technically, with its 250W TDP, it is slightly over the rating for one eight-pin PCIe connector, but NVIDIA told Anandtech that they're confident that it won't matter for the card's intended systems.
That is probably true, but I wouldn't put it past someone to do something spiteful given recent events.
The lack of double precision performance (IEEE 754 FP64) could be disappointing for some. While NVIDIA would definitely know their own market better than I do, I was under the impression that a common workstation system for GPU compute was a Quadro driving a few Teslas (such as two of these). It would seem weird for a company to have such a high-end GPU be paired with Teslas that have such a significant difference in FP64 compute. I wonder what this means for the Tesla line, and whether we will see a variant of Maxwell with a large boost in 64-bit performance, or if that line will be in an awkward place until Pascal.
Or maybe not? Maybe NVIDIA is planning to launch products based on an unannounced, FP64-focused architecture? The aim could be to let the Quadro handle the heavy FP32 calculations while the customer loads up co-processors according to their double precision needs. It's an interesting thought as I sit here musing to myself, but then I immediately wonder why they did not announce it at GTC if that is the case. If it is, and honestly I doubt it because I'm just typing unfiltered thoughts here, you would think the parts would kind-of need to be sold together. Or maybe not. I don't know.
Pricing and availability are not currently known, except that the card is coming “soon”.
Subject: Graphics Cards | March 19, 2015 - 07:20 PM | Jeremy Hellstrom
Tagged: titan x, nvidia, gtx titan x, gm200, geforce, 4k
You have read Ryan's review of the $999 behemoth from NVIDIA, and now you can see what other reviewers think of the card. [H]ard|OCP tested it against the GTX 980, which shares the same cooler and is every bit as long as the TITAN X. Along the way they found a use for the 12GB of VRAM: both Watch_Dogs and Far Cry 4 used over 7GB of memory when tested at 4K resolution, though the frame rates were not really playable; you will need at least two TITAN X's to pull that off. They will be revisiting this card in the future, providing more tests for a card with incredible performance and an even more incredible price.
"The TITAN X video card has 12GB of VRAM, not 11.5GB, 50% more streaming units, 50% more texture units, and 50% more CUDA cores than the current GTX 980 flagship NVIDIA GPU. While this is not our full TITAN X review, this preview focuses on what the TITAN X delivers when directly compared to the GTX 980."
Here are some more Graphics Card articles from around the web:
- Nvidia's GeForce GTX Titan X @ The Tech Report
- NVIDIA GeForce Titan X 12GB GPU Review @HiTech Legion
- NVIDIA GeForce GTX Titan X Review @ OCC
- NVIDIA GTX TITAN X Performance Review @ Hardware Canucks
- The New Single GPU King Of The Hill: A Look At NVIDIA’s GeForce GTX TITAN X @ Techgage
- NVIDIA GeForce GTX Titan X Review @ Neoseeker
- NVIDIA GeForce Titan X 12 GB @ techPowerUp
- ASUS ROG Poseidon GTX 980 Platinum vs. AMD R9 295X2 @ [H]ard|OCP
- Inno3D GeForce GTX 960 iChill X3 Air Boss Ultra @ Kitguru
- EVGA GTX 960 SuperSC SweetSpot Plus Style @ Bjorn3d
Subject: General Tech, Graphics Cards, Shows and Expos | March 17, 2015 - 07:44 PM | Allyn Malventano
Tagged: nvidia, DIGITS
At GTC, NVIDIA announced a new device called the DIGITS DevBox:
The DIGITS DevBox is a device that data scientists can purchase and install locally. Plugged into a single electrical outlet, this modified Corsair Air 540 case equipped with quad TITAN X (reviewed here) GPUs can crank out 28 TeraFLOPS of compute power. The installed CPU is a Haswell-E Core i7-5930K, and the system is rated to draw 1300W of power. NVIDIA is building these in-house as the expected volume is low, with these units likely going to universities and small compute research firms.
Why would you want such compute power?
DIGITS is a software package available from NVIDIA. Its purpose is to act as a tool for data scientists to manipulate deep learning environments (neural networks). This package, running on a DIGITS DevBox, will give much more compute power capability to scientists who need it for their work. Getting this tech in the hands of more scientists will accelerate this technology and lead to what NVIDIA hopes will be a ‘Big Bang’ in this emerging GPU-compute-heavy field.
Ryan interviewed the lead developer of DIGITS in the video below. This offers a great explanation (and example) of what this deep learning stuff is all about:
Subject: Graphics Cards | March 17, 2015 - 05:47 PM | Ryan Shrout
Tagged: pascal, nvidia, gtc 2015, GTC, geforce
At the keynote of the GPU Technology Conference (GTC) today, NVIDIA CEO Jen-Hsun Huang disclosed some more updates on the roadmap for future GPU technologies.
Most of the detail was around Pascal, due in 2016, which will introduce three new features: mixed compute precision, 3D (stacked) memory, and NVLink. Mixed precision is a method of computing in FP16, allowing calculations to run much faster at lower accuracy when full single or double precision is not necessary. Keeping in mind that Maxwell doesn't have an implementation with full speed DP compute (today), it would seem that NVIDIA is targeting different compute tasks moving forward. Though details are short, mixed precision would likely indicate processing cores that can handle both data types.
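To make the FP16 trade-off concrete, here is a small illustration of what half precision costs in accuracy, using only Python's standard library (struct supports IEEE 754 half precision via the 'e' format code). This demonstrates the storage format itself, not NVIDIA's implementation:

```python
import struct

def to_fp16(x: float) -> float:
    """Round a double to IEEE 754 half precision (FP16) and back."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# FP16 keeps an 11-bit significand: roughly 3 decimal digits of precision.
print(to_fp16(3.14159265))  # -> 3.140625 (close, but not exact)
print(to_fp16(2049.0))      # -> 2048.0 (value spacing is 2 in this range)
```

For workloads like neural network inference, errors of this size are often tolerable, which is why trading precision for throughput can pay off.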
3D memory is the ability to stack memory on-package with the GPU to improve overall memory bandwidth. The visual diagram that NVIDIA showed on stage indicated that Pascal would have 750 GB/s of bandwidth, compared to 300-350 GB/s on Maxwell today.
NVLink is a new way of connecting GPUs, improving on bandwidth by more than 5x over current implementations of PCI Express. They claim this will allow for connecting as many as 8 GPUs for deep learning performance improvements (up to 10x). What that means for gaming has yet to be discussed.
NVIDIA made some other interesting claims as well. Pascal will be more than 2x more performance per watt efficient than Maxwell, even without the three new features listed above. It will also ship (in a compute targeted product) with a 32GB memory system compared to the 12GB of memory announced on the Titan X today. Pascal will also have 4x the performance in mixed precision compute.
Subject: Graphics Cards, Shows and Expos | March 17, 2015 - 02:31 PM | Ryan Shrout
Tagged: nvidia, video, GTC, gtc 2015
NVIDIA is streaming today's keynote from the GPU Technology Conference (GTC) on Ustream, and we have the embed below for you to take part. NVIDIA CEO Jen-Hsun Huang will reveal the details about the new GeForce GTX TITAN X but there are going to be other announcements as well, including one featuring Tesla CEO Elon Musk.
Should be interesting!
Subject: Graphics Cards | March 16, 2015 - 11:13 PM | Ryan Shrout
Tagged: video, tom petersen, titan x, nvidia, maxwell, live, gtx titan x, gtx, gm200, geforce
UPDATE 2: If you missed the live stream, we now have the replay available below!
UPDATE: The winner has been announced: congrats to Ethan M. for being selected as the random winner of the GeForce GTX TITAN X graphics card!!
Get yourself ready, it’s time for another GeForce GTX live stream hosted by PC Perspective’s Ryan Shrout! This time the focus is going to be NVIDIA's brand-new GeForce GTX TITAN X graphics card, first teased a couple of weeks back at GDC. NVIDIA's Tom Petersen will be joining us live from the GPU Technology Conference show floor to discuss the GM200 GPU and its performance, and to show off some demos of the hardware in action.
And what's a live stream without a prize? One lucky live viewer will win a GeForce GTX TITAN X 12GB graphics card of their very own! That's right - all you have to do is tune in for the live stream tomorrow afternoon and you could win a Titan X!!
NVIDIA GeForce GTX TITAN X Live Stream and Giveaway
1pm PT / 4pm ET - March 17th
Need a reminder? Join our live mailing list!
The event will take place Tuesday, March 17th at 1pm PT / 4pm ET at http://www.pcper.com/live. There you’ll be able to catch the live video stream as well as use our chat room to interact with the audience. To win the prize you will have to be watching the live stream, with exact details of the methodology for handing out the goods coming at the time of the event.
Tom has a history of being both informative and entertaining and these live streaming events are always full of fun and technical information that you can get literally nowhere else.
If you have questions, please leave them in the comments below and we'll look through them just before the start of the live stream. Of course you'll be able to tweet us questions @pcper, and we'll be keeping an eye on the IRC chat as well for more inquiries. What do you want to know and hear from Tom or me?
So join us! Set your calendar for this coming Tuesday at 1pm PT / 4pm ET and be here at PC Perspective to catch it. If you are a forgetful type of person, sign up for the PC Perspective Live mailing list that we use exclusively to notify users of upcoming live streaming events including these types of specials and our regular live podcast. I promise, no spam will be had!
Huge thanks to ASUS for supplying a new G751JY notebook, featuring an Intel Core i7-4710HQ and a GeForce GTX 980M 4GB GPU to power our live stream from GTC!!
Subject: Graphics Cards | March 14, 2015 - 06:12 PM | Ryan Shrout
Tagged: nvidia, quadro, m6000, deadmau5, gtc 2015
Sometimes information comes from the least likely of sources. Deadmau5, one of the world's biggest names in house music, posted an interesting picture to his Instagram feed a couple of days ago.
Well, that's interesting. A quick hunt on Google for the NVIDIA M6000 reveals rumors of it being a GM200-based 12GB graphics card. Sound familiar? NVIDIA recently announced the GeForce GTX TITAN X based on an identical configuration at GDC last week.
A backup of the Instagram image...in case it gets removed.
With NVIDIA's GPU Technology Conference coming up starting this Tuesday, it would appear we have more than one version of GM200 incoming.
Subject: Graphics Cards | March 13, 2015 - 03:13 AM | Sebastian Peak
Tagged: nvidia, maxwell, GTX 960M, GTX 950M, gtx 860m, gtx 850m, gm107, geforce
NVIDIA has announced new GPUs to round out their 900-series mobile lineup, and the new GTX 960M and GTX 950M are based on the same GM107 core as the previous 860M/850M parts.
Both GPUs feature 640 CUDA Cores and are separated by Base clock speed, with the GTX 960M operating at 1096 MHz and the GTX 950M at 914 MHz. Both have unlisted maximum Boost frequencies that will likely vary based on thermal constraints. The memory interface is the other differentiator: the GTX 960M sports dedicated GDDR5 memory, while the GTX 950M can be implemented with either DDR3 or GDDR5. Both GPUs use the same 128-bit memory interface and support up to 4GB of memory.
As reported by multiple sources, the core powering the 960M/950M is a GM107 Maxwell GPU, which means we are essentially talking about rebadged 860M/850M products, though the unlisted Boost frequencies could potentially be higher on these parts thanks to improved silicon on a mature 28nm process. In contrast, the previously announced GTX 965M is based on a cut-down Maxwell GM204 GPU, with its 1024 CUDA Cores representing half of the GPU core introduced with the GTX 980.
New notebooks featuring the GTX 960M have already been announced by NVIDIA's partners, so we will soon see if there is any performance improvement to these refreshed GM107 parts.
Subject: Graphics Cards | March 10, 2015 - 11:35 PM | Sebastian Peak
Tagged: nvidia, msi, gtx 960, geforce, 960 Gaming, 4GB GTX 960
Manufacturers announcing 4GB versions of the GeForce GTX 960 has become a regular occurrence of late, and today MSI has announced their own 4GB GTX 960, adding a model with this higher memory capacity to their popular MSI Gaming graphics card lineup.
The GTX 960 Gaming 4GB features an overclocked core in addition to the doubled frame buffer, with 1241MHz Base, 1304MHz Boost clocks (compared to the stock GTX 960 1127MHz Base, 1178MHz Boost clocks). The card also features their proprietary Twin Frozr V (5 for non-Romans) cooler, which they claim surpasses previous generations of their Twin Frozr coolers "by a large margin", with a new design featuring their SuperSU heat pipes and a pair of 100mm Torx fans with alternating standard/dispersion fan blades.
The card is set to be shown at the Intel Extreme Masters gaming event in Poland later this week, and pricing/availability have not been announced.