Subject: Graphics Cards | January 25, 2016 - 03:19 PM | Jeremy Hellstrom
Tagged: XFX R9 380 Double Dissipation Black Edition OC 4GB, xfx, gtx 960
In one corner is the XFX R9 380 DD Black Edition OC 4GB, running both at factory settings and overclocked to 1170MHz core and 6.4GHz memory; in the other corner is a GTX 960 with a 1178MHz Boost clock and 7GHz memory. These two contenders compete in a six-round 1080p match featuring Fallout 4, Project Cars, Witcher 3, GTAV, Dying Light and BF4 to see which is worthy of your hard-earned buckaroos. Your referee for today is [H]ard|OCP; tune in to see the final results.
"Today we evaluate a custom R9 380 from XFX, the XFX R9 380 DD BLACK EDITION OC 4GB. Sporting a hefty factory overclock and the Ghost Thermal 3.0 custom cooling with Double Dissipation, we compare it to an equally priced reference GeForce GTX 960. Find out which video card provides the better bargain."
Here are some more Graphics Card articles from around the web:
- ASUS STRIX R9 380X DirectCU II OC 1080p @ [H]ard|OCP
- Sapphire R9 390 Nitro 8 GB @ techPowerUp
- The OpenGL Speed & Perf-Per-Watt From The Radeon HD 2000/3000 Series Through The R9 Fury @ Phoronix
- 1080p NVIDIA Linux Comparison From GeForce 8 To GeForce 900 Series @ Phoronix
Subject: Graphics Cards | January 25, 2016 - 11:51 AM | Ryan Shrout
Tagged: fury x2, Fiji, dual fiji, amd
Lo and behold! The dual-Fiji card that we have previously dubbed the AMD Radeon Fury X2 still lives! Based on a tweet from AMD PR dude Antal Tungler, a PC from Falcon Northwest at the VRLA convention was utilizing a dual-GPU Fiji graphics card to power some demos.
— Antal Tungler (@coloredrocks) January 23, 2016
This prototype Falcon Northwest Tiki system was housing the GPU beast but no images were shown of the interior of the system. Still, it's good to see AMD at least recognize that this piece of hardware still exists at all, since it was initially promised to the enthusiast market by "fall of 2015." Even in October we had hints that the card might be coming soon after seeing some shipping manifests leak out to the web.
Better late than never, right? One theory floating around inside the offices here is that AMD is going to release the Fury X2 alongside the VR headsets coming out this spring, with hopes of making it THE VR graphics card of choice. The value of multi-GPU for VR is interesting, with one GPU dedicated to each eye, though the pitfalls that could haunt both AMD and NVIDIA in this regard (latency, frame time consistency) leave the real-world benefit up for debate.
Subject: Graphics Cards, Memory | January 22, 2016 - 11:08 AM | Ryan Shrout
Tagged: Polaris, pascal, nvidia, jedec, gddr5x, GDDR5, amd
Though information about the technology has been making the rounds over the last several weeks, GDDR5X finally became official with an announcement from JEDEC this morning. The JEDEC Solid State Technology Association is, as Wikipedia tells us, an "independent semiconductor engineering trade organization and standardization body" that is responsible for creating memory standards. Getting the official nod from the org means we are likely to see implementations of GDDR5X in the near future.
The press release is short and sweet. Take a look.
ARLINGTON, Va., USA – JANUARY 21, 2016 –JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced the publication of JESD232 Graphics Double Data Rate (GDDR5X) SGRAM. Available for free download from the JEDEC website, the new memory standard is designed to satisfy the increasing need for more memory bandwidth in graphics, gaming, compute, and networking applications.
Derived from the widely adopted GDDR5 SGRAM JEDEC standard, GDDR5X specifies key elements related to the design and operability of memory chips for applications requiring very high memory bandwidth. With the intent to address the needs of high-performance applications demanding ever higher data rates, GDDR5X is targeting data rates of 10 to 14 Gb/s, a 2X increase over GDDR5. In order to allow a smooth transition from GDDR5, GDDR5X utilizes the same, proven pseudo open drain (POD) signaling as GDDR5.
“GDDR5X represents a significant leap forward for high end GPU design,” said Mian Quddus, JEDEC Board of Directors Chairman. “Its performance improvements over the prior standard will help enable the next generation of graphics and other high-performance applications.”
JEDEC claims that, while using the same signaling type as GDDR5, it is able to double the per-pin data rate to 10-14 Gb/s. In fact, based on leaked slides about GDDR5X from October, JEDEC actually considers GDDR5X an extension of GDDR5, not a new standard. How does GDDR5X reach these new speeds? By doubling the prefetch from 32 bytes to 64 bytes. This will require a redesign of the memory controller for any processor that wants to integrate it.
Image source: VR-Zone.com
As for usable bandwidth, though the press release doesn't quote figures directly, the increase would likely be much smaller than the per-pin numbers suggest. Because the memory bus width remains unchanged, and GDDR5X simply grabs twice as large a chunk per prefetch, we should expect an incremental change. Power efficiency isn't mentioned either, and that was one of the driving factors in the development of HBM.
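As a rough sanity check on those per-pin numbers (the 256-bit bus width below is an assumption for illustration, not a figure from the press release), peak bandwidth scales directly with the per-pin rate when the bus width stays fixed:

```python
# Peak memory bandwidth in GB/s: per-pin rate (Gb/s) x bus width (bits) / 8 bits per byte.
def peak_bandwidth_gbs(per_pin_gbps, bus_width_bits=256):
    return per_pin_gbps * bus_width_bits / 8

print(peak_bandwidth_gbs(7))    # GDDR5 at 7 Gb/s:    224.0 GB/s
print(peak_bandwidth_gbs(10))   # GDDR5X at 10 Gb/s:  320.0 GB/s
print(peak_bandwidth_gbs(14))   # GDDR5X at 14 Gb/s:  448.0 GB/s
```

The catch, as noted above, is that these are peak figures; with the larger 64-byte prefetch, smaller accesses may not see the full benefit.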
Performance efficiency graph from AMD's HBM presentation
I am excited about any improvement in memory technology that will increase GPU performance, but I can tell you that from my conversations with both AMD and NVIDIA, no one appears to be jumping at the chance to integrate GDDR5X into upcoming graphics cards. That doesn't mean it won't happen with some version of Polaris or Pascal, but it seems that there may be concerns other than bandwidth that keep it from taking hold.
Subject: Graphics Cards | January 20, 2016 - 03:26 PM | Scott Michaud
Tagged: nvidia, linux, tesla, fermi, kepler, maxwell
It's nice to see long-term roundups every once in a while. They do not really provide useful information for someone looking to make a purchase, but they show how our industry is changing (or not). In this case, Phoronix tested twenty-seven NVIDIA GeForce cards across four architectures: Tesla, Fermi, Kepler, and Maxwell. In other words, from the GeForce 8 series all the way up to the GTX 980 Ti.
Image Credit: Phoronix
Nine years of advancements in ASIC design, with a doubling time-step of 18 months, should yield a 64-fold improvement. The number of transistors falls short, showing about a 12-fold improvement between the Titan X and the largest first-wave Tesla, although that means nothing for a fabless semiconductor designer. The main reason why I include this figure is to show the actual Moore's Law trend over this time span, but it also highlights the slowdown in process technology.
Performance per watt does depend on NVIDIA though, and the ratio between the GTX 980 Ti and the 8500 GT is about 72:1. While this is slightly better than the target 64:1 ratio, these parts sit in very different locations in their respective product stacks. Swap the 8500 GT for the following year's 9800 GTX, which makes it a comparison between top-of-the-line GPUs of their respective eras, and you see only a 6.2x improvement in performance per watt against the GTX 980 Ti. On the other hand, that part was outstanding for its era.
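For reference, here is the minimal arithmetic behind the 64:1 expectation and the measured ratios quoted above (the ratios are approximate figures pulled from the Phoronix data):

```python
# Expected gain from nine years of 18-month doublings, versus the measured ratios quoted above.
years = 9
expected = 2 ** (years / 1.5)            # six doublings -> 64.0x
transistor_gain = 12                     # Titan X vs. the largest first-wave Tesla (approximate)
perf_per_watt_vs_8500gt = 72             # GTX 980 Ti vs. GeForce 8500 GT (approximate)
perf_per_watt_vs_9800gtx = 6.2           # GTX 980 Ti vs. GeForce 9800 GTX (approximate)
print(expected, transistor_gain, perf_per_watt_vs_8500gt, perf_per_watt_vs_9800gtx)
```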
I should note that each of these tests takes place on Linux. It might not perfectly reflect the landscape on Windows, but again, it's interesting in its own right.
Subject: Graphics Cards, Processors | January 19, 2016 - 11:38 PM | Scott Michaud
Digitimes is reporting on statements that were allegedly made by TSMC co-CEO, Mark Liu. We are currently seeing 16nm parts come out of the foundry, and that node is expected to be used in the next generation of GPUs, replacing the long-running 28nm node that launched with the GeForce GTX 680. (It's still unannounced whether AMD and NVIDIA will use 14nm FinFET from Samsung or GlobalFoundries, or 16nm FinFET from TSMC.)
Update (Jan 20th, @4pm EST): Couple minor corrections. Radeon HD 7970 launched at 28nm first by a couple of months. I just remember NVIDIA getting swamped in delays because it was a new node, so that's probably why I thought of the GTX 680. Also, AMD announced during CES that they will use GlobalFoundries to fab their upcoming GPUs, which I apparently missed. We suspect that NVIDIA will use TSMC, and have assumed that for a while, but it hasn't been officially announced yet (if ever).
According to their projections, which (again) are filtered through Digitimes, the foundry expects to have 7nm in the first half of 2018. They also expect to introduce extreme ultraviolet (EUV) lithography methods with 5nm in 2020. Given that solid silicon has a lattice spacing of ~0.54nm at room temperature, 7nm features will span only about 13 atoms, and 5nm features about 9.
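A quick back-of-the-envelope calculation (using ~0.543nm for silicon's room-temperature lattice constant) shows how few lattice spacings each node name corresponds to:

```python
# Feature size expressed in silicon lattice spacings (~0.543 nm at room temperature).
SI_LATTICE_NM = 0.543

for node_nm in (16, 10, 7, 5):
    print(f"{node_nm} nm ~= {node_nm / SI_LATTICE_NM:.0f} lattice spacings")
# 16 nm ~= 29, 10 nm ~= 18, 7 nm ~= 13, 5 nm ~= 9
```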
We continue the march toward the end of silicon lithography.
Even if the statement is correct, much can happen between now and then. It wouldn't be the first time that I've seen a major foundry believe a node would be available on schedule, only to have it delayed. I wouldn't hold my breath, but I might cross my fingers if my hands were free.
At the very least, we can assume that TSMC's roadmap is 16nm, 10nm, 7nm, and then 5nm.
Subject: Graphics Cards, Memory | January 19, 2016 - 11:01 PM | Scott Michaud
Tagged: Samsung, HBM2, hbm
Samsung has just announced that it has begun mass production of 4GB HBM2 memory packages. When used on GPUs, four packages can provide 16GB of video RAM with very high performance. They do this with a very wide data bus, which trades off frequency for transferring huge chunks. Samsung's offering is rated at 256 GB/s per package, which is twice what the Fury X could do with HBM1.
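For the curious, those per-package numbers fall out of HBM's 1024-bit-per-stack interface; the pin rates below are the commonly cited figures for each generation, assumed here for illustration:

```python
# HBM bandwidth per package: pin rate (Gb/s) x the 1024-bit per-stack interface / 8 bits per byte.
def hbm_per_package_gbs(pin_rate_gbps, bus_width_bits=1024):
    return pin_rate_gbps * bus_width_bits / 8

hbm1 = hbm_per_package_gbs(1.0)   # 128 GB/s per stack; Fury X used four stacks for 512 GB/s total
hbm2 = hbm_per_package_gbs(2.0)   # 256 GB/s per stack, matching Samsung's rating
print(hbm1, hbm2, 4 * hbm2)       # four HBM2 stacks: 1024 GB/s, and 16GB of VRAM at 4GB per stack
```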
They also expect to mass produce 8GB HBM2 packages within this calendar year. I'm guessing that this means we'll see 32GB GPUs in the late-2016 or early-2017 time frame, unless "within this year" means very, very soon (versus Q3/Q4). They will likely be for workstation or professional cards, but, in NVIDIA's case, those are usually based on architectures that are marketed to high-end gaming enthusiasts through some Titan offering. There are a lot of ways this could go, but a 32GB Titan seems like a bit much; I wouldn't expect this to affect the enthusiast gamer segment. It might mean that professionals looking to upgrade from the Kepler-based Tesla K-series will be waiting a little longer, maybe even until GTC 2017. Alternatively, they might get new cards, just with a 16GB maximum until a refresh next year. There's not enough information to know one way or the other, but it's something to think about as more of it starts rolling in.
Samsung's HBM2 packages are compatible with ECC, although I believe that was also true for at least some HBM1 modules from SK Hynix.
Subject: Graphics Cards | January 19, 2016 - 10:31 AM | Sebastian Peak
Tagged: rumor, report, nvidia, GTX 980MX, GTX 980M, GTX 970MX, GTX 970M, geforce
NVIDIA is reportedly preparing faster mobile GPUs based on Maxwell, with a GTX 980MX and 970MX on the way.
The new GTX 980MX would sit between the GTX 980M and the laptop version of the full GTX 980, with 1664 CUDA cores (compared to 1536 with the 980M), 104 Texture Units (up from the 980M's 96), a 1048 MHz core clock, and up to 8 GB of GDDR5. Memory speed and bandwidth will reportedly be identical to the GTX 980M at 5000 MHz and 160 GB/s respectively, with both GPUs using a 256-bit memory bus.
The GTX 970MX represents a similar upgrade over the existing GTX 970M, with the CUDA core count increased from 1280 to 1408, Texture Units up from 80 to 88, and 8 additional raster devices (56 vs. 48). Both the 970M and 970MX use 192-bit GDDR5 clocked at 5000 MHz, and are available with the same 3 GB or 6 GB of frame buffer.
WCCFtech prepared a chart to demonstrate the differences between NVIDIA's mobile offerings:
| Model | GeForce GTX 980 Laptop Version | GeForce GTX 980MX | GeForce GTX 980M | GeForce GTX 970MX | GeForce GTX 970M | GeForce GTX 965M | GeForce GTX 960M |
|---|---|---|---|---|---|---|---|
| Clock Speed | 1218 MHz | 1048 MHz | 1038 MHz | 941 MHz | 924 MHz | 950 MHz | 1097 MHz |
| Frame Buffer | 8 GB GDDR5 | 8/4 GB GDDR5 | 8/4 GB GDDR5 | 6/3 GB GDDR5 | 6/3 GB GDDR5 | 4 GB GDDR5 | 4 GB GDDR5 |
| Memory Frequency | 7008 MHz | 5000 MHz | 5000 MHz | 5000 MHz | 5000 MHz | 5000 MHz | 5000 MHz |
| Memory Bandwidth | 224 GB/s | 160 GB/s | 160 GB/s | 120 GB/s | 120 GB/s | 80 GB/s | 80 GB/s |
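The bandwidth row in that chart is just effective memory clock times bus width. The 128-bit width used below for the GTX 965M/960M is not stated in the chart, but it is the value that matches the listed 80 GB/s:

```python
# Memory bandwidth (GB/s) = effective memory clock (MHz) x bus width (bits) / 8 bits per byte / 1000.
def mem_bandwidth_gbs(effective_mhz, bus_width_bits):
    return effective_mhz * bus_width_bits / 8 / 1000

print(mem_bandwidth_gbs(7008, 256))  # GTX 980 laptop version: ~224 GB/s
print(mem_bandwidth_gbs(5000, 256))  # GTX 980MX / 980M:        160 GB/s
print(mem_bandwidth_gbs(5000, 192))  # GTX 970MX / 970M:        120 GB/s
print(mem_bandwidth_gbs(5000, 128))  # GTX 965M / 960M:          80 GB/s
```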
These new GPUs will reportedly be based on the same Maxwell GM204 core, and TDPs are apparently unchanged at 125W for the GTX 980MX, and 100W for the 970MX.
We will await any official announcement.
Subject: Graphics Cards | January 18, 2016 - 09:44 PM | Scott Michaud
Tagged: Polaris, amd
When AMD announced their Polaris architecture at CES, the focus was on mid-range applications. Their example was an add-in board that could match an NVIDIA GeForce GTX 950 at 1080p60 with medium settings in Battlefront, but do so at 39% less wattage than that 28nm Maxwell chip. These Polaris chips are planned for a “mid 2016” launch.
Raja Koduri, Chief Architect for the Radeon Technologies Group, spoke with VentureBeat at the show. In his conversation, he mentioned two architectures, Polaris 10 and Polaris 11, in the context of a question about their 2016 product generation. In the “high level” space, they are seeing “the most revolutionary jump in performance so far.” This doesn't explicitly state that the high-end Polaris video card will launch in 2016. That said, when combined with the November announcement, covered by us as “AMD Plans Two GPUs in 2016,” it further supports this interpretation.
We still don't know much about what the actual performance of this high-end GPU will be, though. AMD was able to push 8 TeraFLOPs of compute throughput by creating a giant 28nm die and converting the memory subsystem to HBM, which supposedly requires less die complexity than a GDDR5 memory controller (according to a conference call last year that preceded Fury X). The two-generation jump will give them more complexity to work with, but that could be partially offset by a smaller die because of the potential differences in yields (and so forth).
Also, while the performance of the 8 TeraFLOP Fury X was roughly equivalent to NVIDIA's 5.6 TeraFLOP GeForce GTX 980 Ti, we still don't know why. AMD has redesigned a lot of their IP blocks with Polaris; you would expect that, if something unexpected was bottlenecking Fury X, the graphics manufacturer wouldn't overlook it the next chance they get to tweak it. This could have been graphics processing or something much more mundane. Either way, upcoming benchmarks will be interesting.
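For context, those headline throughput numbers come straight from shader count, clock speed, and two FLOPs per clock (fused multiply-add). A quick sketch using the publicly listed specs:

```python
# Single-precision throughput: shader count x 2 FLOPs per clock (FMA) x clock speed (GHz) / 1000.
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000

print(tflops(4096, 1.05))   # Fiji (Fury X):   ~8.6 TFLOPs
print(tflops(2816, 1.0))    # GM200 (980 Ti):  ~5.6 TFLOPs at its base clock
```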
And it seems like that may be this year.
Subject: Graphics Cards | January 13, 2016 - 07:42 PM | Scott Michaud
Tagged: graphics drivers, amd
AMD's recent “Hotfix” drivers don't seem to mean the same thing that NVIDIA's do. In the Green Team's case, they usually fix one or two issues that slipped past QA. While they likely won't break anything, they are probably a bad idea to install if you're not experiencing the listed problems. The changelog on AMD's hotfix drivers is significantly longer, with a list of known issues that is roughly the same size.
So should you install it? That depends. It's a little less cut-and-dried than NVIDIA's hotfixes, which are only useful for a handful of people. It sounds like the worst known issues are “Game stuttering may be experienced when running two Radeon R9 295X2 graphics cards in CrossFire mode” and “Display corruption may occur on multiple display systems when it has been running idle for some time.” The latter would affect me greatly, because I run four displays and basically never sleep or shut down (except for updates). On the other hand, the driver fixes a variety of crash, hang, and flicker issues.
Check it out. If it sounds good, then pick it up. Otherwise, wait for the next Beta or WHQL driver.
Subject: Graphics Cards | January 12, 2016 - 08:11 PM | Scott Michaud
Tagged: graphics drivers, graphics driver, nvidia
NVIDIA has been pushing for WHQL certification for their drivers, but sometimes issues slip through QA, both at Microsoft and within their own internal team(s). Sometimes these issues will be fixed in a future release, but sometimes they push out a “HotFix” driver immediately. This is often great for people who experience the problems, but the drivers should not be installed otherwise.
In this case, GeForce Hotfix driver 361.60 fixes two issues. One is listed as “install & clocking related issues,” which refers to the GPU memory clock. According to Manuel Guzman of NVIDIA, some games and software were not causing the driver to fully wake the memory clock into a high-performance state. The other issue is “Crashes in Photoshop & Illustrator,” which fixes blue screen issues in both applications, and possibly other programs that use the GPU in similar ways. I've never seen GeForce Driver 361.43 cause a BSOD in Photoshop, but I am a few versions behind with CS5.5.
Download links are available at NVIDIA Support, but unaffected users should just wait for an official driver in case the patch causes other issues, due to its minimal QA.
Subject: Graphics Cards | January 11, 2016 - 06:05 PM | Sebastian Peak
Tagged: rumor, report, pascal, nvidia, HBM2, hbm, GP104
A delivery of GPUs and related test equipment from Taiwan to Bangalore has led to speculation about NVIDIA's upcoming GP104 Pascal GPU.
Image via Zauba.com
How much information can be gleaned from an import shipping manifest (linked here)? The data indicates a chip with a 37.5 x 37.5 mm package and 2152 pins, which is being attributed to the GP104 based on knowledge of “earlier, similar deliveries” (or possible inside information). This has prompted members of the 3dcenter.org forums (German language) to speculate on the use of GDDR5 or GDDR5X memory based on the likelihood of HBM being implemented on a die of this size.
Of course, NVIDIA has stated that Pascal will implement 3D memory, and the upcoming GP100 will reportedly be on a 55 x 55 mm package using HBM2. Could this be a new, lower-cost part using the existing GDDR5 standard or the faster GDDR5X instead? VideoCardz and WCCFtech have posted stories based on the 3DCenter report, and to quote directly from the VideoCardz post on the subject:
"3DCenter has a theory that GP104 could actually not use HBM, but GDDR5(X) instead. This would rather be a very strange decision, but could NVIDIA possibly make smaller GPU (than GM204) and still accommodate 4 HBM modules? This theory is not taken from the thin air. The GP100 aka the Big Pascal, would supposedly come in 55x55mm BGA package. That’s 10mm more than GM200, which were probably required for additional HBM modules. Of course those numbers are for the whole package (with interposer), not just the GPU."
All of this is a lot to take from a shipping record that might not even be related to an NVIDIA product, but the report has made the rounds at this point so now we’ll just have to wait for new information.
Subject: Graphics Cards | January 11, 2016 - 08:32 AM | Sebastian Peak
Tagged: radeon, r9 nano, R9 Fury X, price cut, press release, amd
AMD has announced a price cut for the Radeon R9 Nano, which will now have a suggested price of $499, a $150 drop from the original $649 MSRP.
VideoCardz had the story this morning, quoting the official press release from AMD:
"This past September, the AMD Radeon™ R9 Nano graphics card launched to rave reviews, claiming the title of the world’s fastest and most power efficient Mini ITX gaming card, powered by the world’s most advanced and innovative GPU with on-chip High-Bandwidth Memory (HBM) for incredible 4K gaming performance. There was nothing like it ever seen before, and today, it remains in a class of its own, delivering smooth, true-to-life, premium 4K and VR gaming in a small form factor PC.
At a peak power of 175W and in a 6-inch form factor, it drives levels of performance that are on par with larger, more power-hungry GPUs from competitors, and blows away Mini ITX competitors with up to 30 percent better performance than the GTX 970 Mini ITX.
As of today, 11 January, this small card will have an even bigger impact on gamers around the world as AMD announces a change in the AMD Radeon™ R9 Nano graphics card’s SEP from $649 to $499. At the new price, the AMD Radeon™ R9 Nano graphics card will be more accessible than ever before, delivering incredible performance and leading technologies, with unbelievable efficiency in an astoundingly small form factor that puts it in a class all of its own."
To the team at PC Perspective, the R9 Nano (reviewed here) was the most interesting GPU released in 2015. It was a compelling product for its tiny size, great performance, and high power efficiency, but the dialogue here probably mirrored that of a lot of potential buyers: for the price of a Fury X, did it make sense to buy the Nano? It was all going to depend on need, but very few enclosures on the market fail to support a full-length GPU, as we discovered when testing out small R9 Nano builds.
Now that the price is moving down $150, it becomes an easier choice: $499 buys you the full Fiji core found in the R9 Fury X for $150 less. The performance of a Fury X is only a few percentage points higher than the slightly lower-clocked Nano, so you're now getting most of the way there for much less. We have seen some R9 Fury X cards selling for $599, but even at $100 more would you buy the Fury X over a Nano? If nothing else the lower price makes the conversation a lot more interesting.
Subject: Graphics Cards, Processors | January 9, 2016 - 07:00 AM | Scott Michaud
Tagged: ubisoft, quad-core, pc gaming, far cry primal, dual-core
If you remember back when Far Cry 4 launched, it required a quad-core processor. It would block your attempts to launch the game unless it detected four CPU threads, either native quad-core or dual-core with two SMT threads per core. This has naturally been hacked around by the PC gaming community, but it is not supported by Ubisoft. It's also, apparently, a bad experience.
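For the curious, a launch gate of the kind described above only needs the logical processor count reported by the OS. Here is a hypothetical sketch of such a check (not Ubisoft's actual code):

```python
import os

# Hypothetical launch gate of the kind described above: refuse to start unless the
# operating system reports at least four logical processors (hardware threads).
MIN_THREADS = 4

logical_cpus = os.cpu_count() or 1
if logical_cpus < MIN_THREADS:
    raise SystemExit(f"At least {MIN_THREADS} CPU threads are required (detected {logical_cpus}).")
print(f"CPU check passed: {logical_cpus} threads detected.")
```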
The follow-up, Far Cry Primal, will be released in late February. Oddly enough, it has similar, but maybe slightly lower, system requirements. I'll list them, and highlight the differences.
Minimum:
- 64-bit Windows 7, 8.1, or 10 (basically unchanged from 4)
- Intel Core i3-550 (down from i5-750)
- or AMD Phenom II X4 955 (unchanged from 4)
- 4GB RAM (unchanged from 4)
- 1GB NVIDIA GTX 460 (unchanged from 4)
- or 1GB AMD Radeon HD 5770 (down from HD 5850)
- 20GB HDD Space (down from 30GB)
Recommended:
- Intel Core i7-2600K (up from i5-2400S)
- or AMD FX-8350 (unchanged from 4)
- 8GB of RAM (unchanged from 4)
- NVIDIA GeForce GTX 780 (up from GTX 680)
- or AMD Radeon R9 280X (down from R9 290X)
While the CPU changes are interesting, the opposing directions of the recommended GPUs are fascinating. Either the parts fall within Ubisoft's QA margin of error, or the studio increased the GPU load but was able to optimize for AMD better than in Far Cry 4, resulting in a net gain in performance (which would also explain the slight bump in CPU power required to feed the extra content). Of course, either way it is just a guess.
Back on the CPU topic though, I would be interested to see the performance of Pentium Anniversary Edition parts. I wonder whether they removed the two-thread lock, and, especially if hacks are still required, whether it is playable anyway.
That is, in a month and a half.
Subject: Graphics Cards, Displays | January 8, 2016 - 02:56 PM | Ryan Shrout
Tagged: video, Polaris, hdmi, freesync, CES 2016, CES, amd
At its suite at CES this year, AMD was showing off a couple of new technologies. First, we got to see the upcoming Polaris GPU architecture in action running Star Wars Battlefront with some power meters hooked up. This is a similar demo to what I saw in Sonoma back in December, and it compares an upcoming Polaris GPU against the NVIDIA GTX 950. The result: total system power of just 86 watts on the AMD GPU and over 150 watts on the NVIDIA GPU.
Another new development from AMD on the FreeSync side of things was HDMI integration. The company took time at CES to showcase a pair of new HDMI-enabled monitors working with FreeSync variable refresh rate technology.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Graphics Cards, Processors | January 8, 2016 - 02:38 AM | Scott Michaud
Tagged: Intel, kaby lake, linux, mesa
Quick post about something that came to light over at Phoronix. Someone noticed that Intel published a handful of PCI device IDs for graphics processors to Mesa and libdrm. It will take a few months for graphics drivers to catch up, although this suggests that Kaby Lake will be releasing relatively soon.
It also gives us hints about what Kaby Lake will be. Of the published batch, there will be six tiers of performance: GT1 has five IDs, GT1.5 has three IDs, GT2 has six IDs, GT2F has one ID, GT3 has three IDs, and GT4 has four IDs. Adding them up, we see that Intel plans 22 GPU devices. The Phoronix post lists what those device IDs are, but that is probably not interesting for our readers. Whether some of those devices overlap in performance or numbering is unclear, but it would make sense given how few SKUs Intel usually provides (though I have zero experience in GPU driver development, so take that speculation lightly).
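Tallying the tiers quoted from the Phoronix post confirms the count:

```python
# Device ID counts per graphics tier, as quoted from the Phoronix post.
tiers = {"GT1": 5, "GT1.5": 3, "GT2": 6, "GT2F": 1, "GT3": 3, "GT4": 4}
print(sum(tiers.values()))   # 22 planned GPU devices across six tiers
```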
Subject: Graphics Cards | January 7, 2016 - 06:55 PM | Jeremy Hellstrom
Tagged: crimson, amd
That's right ladies and germs, not even a full week into 2016 and AMD has a new driver for you, or at least a hotfix version of Crimson 16.1. If you are playing Elite: Dangerous, Fallout 4 or Just Cause 3 there are a number of fixes to known issues from the previous Crimson release, which makes it worth picking up as soon as you can. So far in testing, game profiles do seem to survive an update, so if you did take advantage of the game-specific overclocking settings you should not lose all your hard work.
They have also included fixes specific to VSR, odd HDMI setups, and flickering on FreeSync displays. As always there are a few bugs still to be ironed out, which is why you should fill in the bug reporting tool at the bottom of the driver page instead of just throwing things at random passersby ... as fun and entertaining as that may be.
Subject: Graphics Cards | January 7, 2016 - 02:36 PM | Jeremy Hellstrom
Tagged: XFX R9 380X Double Dissipation XXX OC 4GB, xfx, amd, 380x
Take a quick break from reading about the soon-to-be-released technology at CES for a look at a GPU you can buy right now. The XFX DD XXX series has been around for a few generations and the XFX R9 380X Double Dissipation XXX OC 4GB sports the same custom DD cooler you would expect. The factory overclock is quite modest: 20MHz on the GPU, taking it to 990MHz, while retaining the default 5.7GHz memory clock. Of course [H]ard|OCP were not going to leave it at that; thanks to the custom cooling they hit a 1040MHz core and a 6.1GHz memory clock, although with no way to adjust voltage they felt the card could be capable of more if that feature were added. Read on to see how this card compares against the ASUS STRIX GTX 960 DCU II OC in this ~$220 GPU showdown.
"On our test bench today is the XFX R9 380X Double Dissipation XXX OC 4GB video card. It features the latest Ghost Thermal 3.0 cooling technology from XFX and a factory overclock. We will compare it to the ASUS STRIX GTX 960 DCU II OC 4GB in a battle of the $229 price point video cards to determine the better overall value."
Here are some more Graphics Card articles from around the web:
- ASUS R9 390 STRIX DirectCU III @ [H]ard|OCP
- ASUS R9 390X STRIX OC Review @ Hardware Canucks
- New AMD GPU Performance To Be Boosted By Linux 4.5; How It Compares To The Binary Blob @ Phoronix
- NVIDIA Linux Driver 2015 Year-in-Review @ Phoronix
- NVIDIA Quadro M4000 @ Kitguru
- Gigabyte GeForce GTX 950 Xtreme Review @ HiTech Legion
Subject: Graphics Cards, Shows and Expos | January 7, 2016 - 02:03 PM | Scott Michaud
Tagged: square enix, nvidia, CES 2016, CES
NVIDIA has just announced a new game bundle. If you purchase an NVIDIA GeForce GTX 970, GTX 980 desktop or mobile, GTX 980 Ti, GTX 980M, or GTX 970M, then you will receive a free copy of Rise of the Tomb Raider. As always, make sure the retailer is selling the participating card. If the product has a download code, it will be specially marked. NVIDIA will not upgrade non-participating stock to the bundle.
Rise of the Tomb Raider will go live on January 29th. It was originally released in November as an Xbox One timed exclusive. It will also arrive on the PlayStation 4, but not until “holiday,” which is probably around Q4 (or maybe late Q3).
If you purchase the bundle, then your graphics card will obviously be powerful enough to run the game. At a minimum, you would need a GeForce GTX 650 (2GB) or an AMD HD 7770 (2GB). The CPU needs are light too, requiring just a Sandy Bridge Core i3 (Intel Core i3-2100) or AMD's equivalent. Probably the only concern would be the minimum of 6GB of system RAM, which also requires a 64-bit operating system. Now that the Xbox 360 and PlayStation 3 have been deprecated, 32-bit gaming will be increasingly rare for “AAA” titles. That said, we've been ramping up to 64-bit for the last decade; one of the first games to support x86-64 was Unreal Tournament 2004.
The Rise of the Tomb Raider NVIDIA bundle starts today.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Graphics Cards, Shows and Expos | January 5, 2016 - 09:39 PM | Ryan Shrout
Tagged: vr ready, VR, virtual reality, video, Oculus, nvidia, htc, geforce, CES 2016, CES
Other than the in-depth discussion of the Drive PX 2 and its push into autonomous driving, NVIDIA didn't have much news to report. We stopped by the suite and got a few updates on SHIELD and the company's VR Ready program to certify systems that meet minimum recommended specifications for a solid VR experience.
For the SHIELD, NVIDIA is bringing Android 6.0 Marshmallow to the device, with new features like shared storage and the ability to customize the home screen of the Android TV interface. Nothing earth-shattering, and all of it is part of the 6.0 rollout.
The VR Ready program from NVIDIA will validate notebooks, systems, and graphics cards that have enough horsepower to meet the minimum performance levels for a good VR experience. At this point, the specs essentially match what Oculus has put forth: a GTX 970 or better on the desktop, and a GTX 980 (the full chip, not a 980M) on mobile.
Other than that, Ken and I took in some of the more recent VR demos, including Epic's Bullet Train on the final Oculus Rift and Google's Tilt Brush on the latest iteration of the HTC Vive. Those were both incredibly impressive, though the Everest demo, which simulates a portion of the mountain climb, was the one that really made me feel like I was somewhere else.
Check out the video above for more impressions!
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Graphics Cards, Mobile, Shows and Expos | January 5, 2016 - 04:47 PM | Scott Michaud
Tagged: external graphics, CES 2016, CES, asus
While external graphics has been a thing for quite some time, it was rarely an available thing. Several companies, such as AMD, Lucid, and others, announced products that were never sold. ASUS had their XG Station in the Windows Vista era, which allowed laptops to plug into a GeForce 8600 GT but was only available in Australia. Only now are we beginning to see options from Alienware, MSI, and even Microsoft that are widely available.
ASUS is jumping back in, too. Not much is known about the XG Station 2, except that it is “specially designed for ASUS laptops and graphics cards.” This sounds like it is using a proprietary connector, similar to Alienware and MSI, to connect to ASUS laptops. Also saying it's specifically for ASUS graphics cards is a bit confusing, though. If it is an open PCIe slot, I'm not sure why or how it would be limited to ASUS cards. If the graphics cards are pre-installed, then we don't know the list of potential GPUs.
Either way, ASUS states that the dock can be disconnected without shutting down the PC. I'm interested to see how the GPU is supposed to be unplugged, as Alienware's option can only be detached when the system is off, and Microsoft's Surface Book uses a software detach with a hardware latch. The connector will also charge the laptop, which is an interesting addition.
Pricing and availability varies, like the other ASUS announcements, by region.
Follow all of our coverage of the show at http://pcper.com/ces!