Subject: Graphics Cards | October 23, 2015 - 12:29 AM | Sebastian Peak
Tagged: r9 nano, mITX, mini-itx, graphics card, gpu, asus, amd
AMD's Radeon R9 Nano is a really cool product, able to provide much of the power of the bigger R9 Fury X with nothing more than a standard air cooler, and doing so at an impossibly tiny size for a full graphics card. And while mini-ITX graphics cards serve a small segment of the market, who might be buying a white one when this is released?
According to a report published first by Computer Base in Germany, ASUS is releasing an all-white AMD R9 Nano, and it looks really sharp. The stock R9 Nano is no slouch in the looks department as you can see here in our full review of AMD's newest GPU, but with this design ASUS provides a totally different look that could help unify the style of your build depending on your other component choices. White is just starting to show up for things like motherboard PCBs, but it's pretty rare in part due to the difficulty in manufacturing white parts that stay white when they are subjected to heat.
There was no mention of a specific release window for the ASUS R9 Nano White, so we'll have to wait for official word on that. It is possible that ASUS has also implemented their own custom PCB, though details are not known just yet. We should know more by the end of next month according to the report.
Subject: Graphics Cards | October 21, 2015 - 07:18 AM | Sebastian Peak
Tagged: water cooling, nvidia, liquid cooled, GTX 980 WATERFORCE, GTX 980, GPU Water Block, gigabyte, AIO
Gigabyte has announced the GeForce GTX 980 WATERFORCE water-cooled graphics card, and this one is ready to go out of the box thanks to an integrated closed-loop liquid cooler.
In addition to full liquid cooling, the card - model GV-N980WAOC-4GD - also features "GPU Gauntlet Sorting", meaning that each card has a binned GTX 980 core for better overclocking performance.
"The GTX 980 WATERFORCE is fitted with only the top-performing GPU core through the very own GPU Gauntlet Sorting technology that guarantees superior overclocking capabilities in terms of excellent power switching and thermal efficiency. Only the strongest processors survived can be qualified for the GTX 980 WATERFORCE, which can fulfill both gaming enthusiasts’ and overclockers’ expectations with greater overclocking headroom, and higher, stable boost clocks under heavy load."
The cooling system for the GTX 980 WATERFORCE begins with a full-coverage block that cools the GPU, RAM, and power delivery, without the need for any additional fan for board components. The tubes carrying liquid to the radiator are 45 cm SFP, which Gigabyte says "effectively prevent...leak(s) and fare a lower coolant evaporation rate", and the system is connected to a 120 mm radiator.
Gigabyte says both the fan and the pump offer low noise output, and claims that this cooling system allows the GTX 980 WATERFORCE to "perform up to 38.8% cooler than the reference cooling" for cool and quiet gaming.
The WATERFORCE card also features two DVI outputs (the reference design has a single dual-link DVI output) in addition to the standard three DisplayPort 1.2 and single HDMI 2.0 outputs of a GTX 980.
Pricing and availability have not been announced.
Subject: Graphics Cards | October 15, 2015 - 12:01 PM | Ryan Shrout
Tagged: nvidia, geforce experience, beta drivers
NVIDIA just released a new driver, version 358.50, with an updated version of GeForce Experience that brings about some interesting changes to the program. First, let's talk about the positive changes, including beta access to the updated NVIDIA Share utility and improvements in GameStream.
As we detailed first with the release of the GeForce GTX 950, NVIDIA is making some impressive additions to the ShadowPlay portion of GeForce Experience, along with a rename to NVIDIA Share.
The idea is to add functionality to the ShadowPlay feature including an in-game overlay to control the settings and options for local recording and even an in-overlay editor and previewer for your videos. This allows the gamer to view, edit, snip and then upload those completed videos to YouTube directly, without ever having to leave the game. (Though you’ll obviously want to pause it before going through that process.) Capture and “Instant Replay” support is now capable of 4K / 60 Hz capture and upload as well – nice!
Besides added capability for the local recording portion of Share, NVIDIA is also adding some new features to the mix. NVIDIA Share will now allow for point to point stream sharing, giving you the ability to send a link to your friend that they can open in a web browser and watch the game that you are playing with very low latency. You could use this as a way to show your friend that new skill you learned for Rocket League, to try and convince him to pick up his own copy, or even just as a social event. It supports voice communication for the ability to talk smack if necessary.
But it goes beyond just viewing the game – this point to point streaming allows the remote player to take over the controls to teach the local gamer something new or to finish a difficult portion of the game you might be stuck on. And if the game supports local multiplayer, you can BOTH play as the remote gaming session will emulate a second attached Xbox / SHIELD controller to the system! This does have a time limit of 1 hour as a means to persuade game developers and publishers to not throw a hissy-fit.
The demo I saw recently was very impressive and it all worked surprisingly well out of the box.
Fans of NVIDIA local network GameStream might enjoy the upgrade to support streaming games at 4K 60 FPS - as long as you have an NVIDIA SHIELD Android TV device connected to a 4K capable TV in your home. Clearly this will make the visual presentation of your games on your television more impressive than ever and NVIDIA has added support for 5.1 channel surround sound pass through.
There is another change coming with this release of GFE that might turn some heads, and it concerns the frequently updated "Game Ready" drivers NVIDIA puts out for specific game launches. These drivers have been a huge part of NVIDIA's success in recent years, as the day one experience for GeForce users has often been better than AMD's. It is vital for drivers and performance to be optimal on the day of a game's release as many enthusiast gamers are the ones going through the preloading process and midnight release timings.
Future "Game Ready" drivers will no longer be made available through GeForce.com and instead will ONLY be delivered through GeForce Experience. You'll also be required to have a validated email address to get the downloads for beta drivers - though NVIDIA admitted to me you would be able to opt-out of the mailing list anytime after signing up.
NVIDIA told media that this method of driver release lays the groundwork for future plans: gamers would get early access to new features, chances to win free hardware, and the ability to take part in the driver development process like never before. Honestly though, this is a way to get users to sign up for a marketing mailing list that has some specific purpose going forward. Not all mailing lists are bad obviously (have you signed up for the PC Perspective Live! Mailing List yet?!?) but there are bound to be some raised eyebrows over this.
NVIDIA says that more than 90% of its driver downloads today already come through GeForce Experience, so changes to the user experience should be minimal. We'll wait to see how the crowd reacts, but I imagine once we get past the initial shock of the changeover to this system, the rollouts will be fast, clean and simple. But dammit - we fear change.
Subject: Graphics Cards | October 14, 2015 - 11:24 AM | Sebastian Peak
Tagged: radeon, dx12, DirectX 12, Catalyst 15.10 beta, catalyst, ashes of the singularity, amd
The AMD Catalyst 15.9 beta driver was released just two weeks ago, and already AMD is ready with a new version. 15.10 is available now and offers several bug fixes, though the point of emphasis is DX12 performance improvements to the Ashes of the Singularity benchmark.
Highlights of AMD Catalyst 15.10 Beta Windows Driver
- Ashes of the Singularity - DirectX 12 Quality and Performance optimizations
- Video playback of MPEG2 video fails with a playback error/error code message
- A TDR error or crash is experienced when running the Unreal Engine 4 DirectX benchmark
- Star Wars: Battlefront is able to use high performance graphics when launched on mobile devices with switchable graphics
- Intermittent playback issues with Cyberlink PowerDVD when connecting to a 3D display with an HDMI cable
- Ashes of the Singularity - A 'Driver has stopped responding' error may be experienced in DirectX 12 mode
- Driver installation may halt on some configurations
- A TDR error may be experienced while toggling between minimized and maximized mode while viewing 4K YouTube content
- Ashes of the Singularity may crash on some AMD 300 series GPUs
- Core clock fluctuations may be experienced when FreeSync and FRTC are both enabled on some AMD CrossFire systems
- Ashes of the Singularity may fail to launch on some GPUs with 2GB Video Memory. AMD continues to work with Stardock to resolve the issue. In the meantime, deleting the game config file helps resolve the issue
- The secondary display adapter is missing in the Device Manager and the AMD Catalyst Control Center after installing the driver on a Microsoft Windows 8.1 system
- Elite: Dangerous - poor performance may be experienced in SuperCruise mode
- A black screen may be encountered on bootup on Windows 10 systems. The system will ultimately continue to the Windows login screen
The driver is available now from AMD's Catalyst beta download page.
Subject: Graphics Cards | October 7, 2015 - 01:45 PM | Scott Michaud
Tagged: opengl es 3.2, nvidia, graphics drivers, geforce
The GeForce Game Ready 358.50 WHQL driver has been released so users can perform their updates before the Star Wars Battlefront beta goes live tomorrow (unless you already received a key). As with every “Game Ready” driver, NVIDIA ensures that the essential performance and stability tweaks are rolled into this version, and tests it against the title. It is WHQL certified too, which is a recent priority for NVIDIA. Years ago, “Game Ready” drivers were often classified as Beta, but the company now intends to pass their work through Microsoft for a final sniff test.
Another interesting addition to this driver is the inclusion of the OpenGL 2015 ARB extensions and OpenGL ES 3.2. Previously, to use OpenGL ES 3.2 on the PC - if you wanted to develop software against it, for instance - you needed a separate release that had been available since the spec was announced at SIGGRAPH. It has now been rolled into the main, public driver. The mobile devs who use their production machines to play Battlefront rejoice, I guess. It might also be useful if developers, for instance at Mozilla or Google, want to create pre-release implementations of future WebGL specs.
Subject: Graphics Cards | October 7, 2015 - 07:01 AM | Scott Michaud
Tagged: opengl, metal, apple
Ars Technica took it upon themselves to benchmark Metal in the latest OS X El Capitan release. Even though OpenGL on Mac OS X is not considered to be on par with its Linux counterparts, which is probably due to the driver situation until recently, it still pulls ahead of Metal in many situations.
Image Credit: Ars Technica
Unlike the other new low-level graphics APIs, Metal uses the traditional binding model. Basically, you have a GPU object that you attach your data to, then call one of a handful of “draw” functions to signal the driver. DirectX 12, Vulkan, and Mantle, on the other hand, treat work like commands on queues. The latter model works better in multi-core environments, and it aligns with GPU compute APIs, but the former is easier to port OpenGL and DirectX 11 applications to.
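To make that distinction concrete, here is a minimal sketch of the bind-then-draw pattern described above, written against OpenGL rather than Metal itself (the object names passed in are illustrative, and the context, shader program, VAO, and texture are assumed to have been created elsewhere):

```cpp
#include <GL/glew.h>  // or any other OpenGL loader

// Traditional binding model: mutate global state, then issue a draw call
// that tells the driver to act on whatever is currently bound.
void drawMesh(GLuint program, GLuint vao, GLuint texture, GLsizei vertexCount)
{
    glUseProgram(program);                       // bind the shader program
    glBindVertexArray(vao);                      // bind the vertex data
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture);       // bind the texture to unit 0
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);  // signal the driver to draw
}
```

The queue-based APIs instead record equivalent state and draw commands into a command buffer up front and submit the whole thing at once, which is what makes multi-core submission practical.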
Ars Technica notes that faster GPUs, such as the NVIDIA GeForce GTX 680MX, show higher gains than slower ones. Their “best explanation” is that “faster GPUs can offload more work from the CPU”. That is pretty much true, yes. The new APIs are designed to keep GPUs loaded and working as much as possible, because they really do sit around doing nothing a lot. If a GPU is already kept fully loaded, because it can't accept much work in the first place, then there is little benefit to decreasing CPU load or spreading submission out across multiple cores.
Granted, there are many ways that benchmarks like these could be used incorrectly. I'll assume that Ars Technica and GFXBench are not making any simple mistakes, but it's good to be critical just in case.
Subject: Graphics Cards | October 7, 2015 - 01:51 AM | Sebastian Peak
The latest game bundle for NVIDIA GPU customers offers the buyer a choice between Tom Clancy’s Rainbow Six Siege or Assassin’s Creed Syndicate.
To qualify for the free game you need to purchase a GTX 980 Ti, GTX 980, or GTX 970 graphics card. On the mobile side of things purchasing a laptop with GTX 970M or above graphics earns the game.
"It’s the final few months of the year, and as always that means a rush of new triple-A games that promise to excite and delight over the Holiday season. This year, Ubisoft's Assassin’s Creed Syndicate andTom Clancy's Rainbow Six Siege are vying for glory. And to ensure the definitive versions are found on PC we’ve teamed up with Ubisoft once again to add NVIDIA GameWorks effects to each, bringing richer, more detailed experiences to your desktop."
The Bullets or Blades bundle is already underway as of 10/06/15, and qualifying purchases must be made through a participating retailer to receive a game code. Full details are available from NVIDIA here.
Subject: Graphics Cards | October 6, 2015 - 02:40 PM | Jeremy Hellstrom
Tagged: 4k, gtx titan x, fury x, GTX 980 Ti, crossfire, sli
[H]ard|OCP shows off just what you can achieve when you spend over $1000 on graphics cards and have a 4K monitor in their latest review. In Project Cars you can expect never to see less than 40 fps with everything cranked to maximum, and if you invested in Titan X's you can even enable DS2X anti-aliasing for double the resolution before downsampling. Witcher 3 is a bit more challenging, and no card is up for HairWorks without a noticeable hit to performance. Far Cry 4 still refuses to believe in CrossFire, and as far as NVIDIA performance goes, if you want to see soft shadows you are going to have to invest in a pair of Titan X's. Check out the full review to see what the best of the current market is capable of.
"The ultimate 4K battle is about to begin, AMD Radeon R9 Fury X CrossFire, NVIDIA GeForce GTX 980 Ti SLI, and NVIDIA GeForce GTX TITAN X SLI will compete for the best gameplay experience at 4K resolution. Find out what $1300 to $2000 worth of GPU backbone will buy you. And find out if Fiji really can 4K."
Here are some more Graphics Card articles from around the web:
- Sapphire R7 370 Nitro Review @ OCC
- PNY GTX 950 2GB @ Kitguru
- Gigabyte GTX 950 Xtreme Gaming 2GB @ Kitguru
- Nvidia's GeForce GTX 950 @ The Tech Report
Subject: Graphics Cards | October 5, 2015 - 07:13 AM | Scott Michaud
Tagged: graphics drivers, amd
Apparently users of AMD's Catalyst 15.9 drivers have been experiencing issues. Specifically, “major memory leaks” could be caused by adjusting windows, such as resizing them or snapping them to edges of the desktop. According to PC Gamer, AMD immediately told users to roll back when they found out about the bug.
They have since fixed it with Catalyst 15.9.1 Beta. This point-release driver also fixes crashes and potential “signal loss” problems with a BenQ FreeSync monitor. As such, if you were interested in playing around with the Catalyst 15.9 beta driver, it should be safe to do so now. I wish I could offer more input, but I just found out about it and it seems pretty cut and dried: if you had problems, they should be fixed. The update is available here.
Subject: Graphics Cards | October 5, 2015 - 02:33 AM | Sebastian Peak
Tagged: rumor, report, radeon, graphics cards, Gemini, fury x, fiji xt, dual-GPU, amd
The AMD R9 Fury X, Fury, and Nano have all been released, but a dual-GPU Fiji XT card could be on the way soon according to a new report.
Back in June at AMD's E3 event we were shown Project Quantum, AMD's concept for a powerful dual-GPU system in a very small form factor. It was speculated that the system was actually housing an unreleased dual-GPU graphics card, which would have made sense given the very small size of the system (and the mini-ITX motherboard therein). Now a report from WCCFtech is pointing to a shipping manifest that just might describe this new dual-GPU card, and the code-name is Gemini.
"Gemini is the code-name AMD has previously used in the past for dual GPU variants and surprisingly, the manifest also contains another phrase: ‘Tobermory’. Now this could simply be a reference to the port that the card shipped from...or it could be the actual codename of the card, with Gemini just being the class itself."
The manifest also indicates a Cooler Master cooler for the card; Cooler Master is the maker of the liquid cooling solution for the Fury X. As the Fury X has had its share of criticism for pump whine issues it would be interesting to see how a dual-GPU cooling solution would fare in that department, though we could be seeing an entirely new generation of the pump as well. Of course speculation on an unreleased product like this could be incorrect, and verifiable hard details aren't available yet. Still, if the dual-GPU card is based on a pair of full Fiji XT cores the specs could be very impressive to say the least:
- Core: Fiji XT x2
- Stream Processors: 8192
- GCN Compute Units: 128
- ROPs: 128
- TMUs: 512
- Memory: 8 GB (4GB per GPU)
- Memory Interface: 4096-bit x2
- Memory Bandwidth: 1024 GB/s
In addition to the specifics above, the report also discussed the possibility of 17.2 TFLOPS of performance based on doubling the Fury X, which would make Gemini one of the most powerful single-card GPU solutions in the world. The card seems close enough to the final stage that we should expect to hear something official soon, but for now it's fun to speculate - unless of course the speculation concerns a high initial retail price, and unfortunately something at or above $1000 is quite likely. We shall see.
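Those headline figures follow directly from doubling the single-GPU Fury X. Assuming Gemini keeps the Fury X's 1050 MHz engine clock and per-GPU HBM configuration (an assumption, since nothing is confirmed), the back-of-the-envelope math checks out:

$$8192 \text{ SPs} \times 2 \text{ FLOPs/clock} \times 1.05 \text{ GHz} \approx 17.2 \text{ TFLOPS}, \qquad 2 \times 512 \text{ GB/s} = 1024 \text{ GB/s}$$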
Subject: Graphics Cards | September 28, 2015 - 04:45 PM | Jeremy Hellstrom
Tagged: R9 Fury, asus strix r9 fury, r9 390x, GTX 980, crossfire, sli, 4k
Bring your wallets to this review from [H]ard|OCP, which pits multiple AMD and NVIDIA GPUs against each other at 4K resolutions; no matter the outcome it won't be cheap! They used the Catalyst 15.8 Beta and GeForce 355.82 WHQL drivers, the latest available at the time of writing, as well as trying out Windows 10 Pro x64. There were some interesting results: for instance, you want an AMD card when driving in the rain in Project Cars, as the GTX 980s immediately slowed down in inclement weather. With Witcher 3, AMD again provided frames faster, but unfortunately the old spectre of stuttering appeared, the source of which those of you familiar with our Frame Rating tests will understand. Dying Light proved to be a game that liked VRAM, with the 390X taking the top spot, though sadly neither AMD card could handle CrossFire in Far Cry 4. There is a lot of interesting information in the review and AMD's cards certainly show their mettle, but the overall winner is not perfectly clear; [H] chose the R9 Fury with a caveat about CrossFire support.
"We gear up for multi-GPU gaming with AMD Radeon R9 Fury CrossFire, NVIDIA GeForce GTX 980 SLI, and AMD Radeon R9 390X CrossFire and share our head-to-head results at 4K resolution and find out which solution offers the best gameplay experience. How well does Fiji game when utilized in a CrossFire configuration?"
Here are some more Graphics Card articles from around the web:
- XFX R9 390X Review @ OCC
- MSI Radeon R9 380 Gaming 2G Review @ NikKTech
- Gigabyte GTX 950 Xtreme Gaming 2 GB @ techPowerUp
Subject: Graphics Cards | September 26, 2015 - 09:10 PM | Scott Michaud
Tagged: microsoft, windows 10, DirectX 12, dx12, nvidia
Programming with DirectX 12 (and Vulkan, and Mantle) is a much different process than most developers are used to. The biggest change is how work is submitted to the driver. Previously, engines would bind attributes to a graphics API and issue one of a handful of “draw” commands, which turns the current state of the API into a message. Drivers would play around with queuing and manipulating these messages to optimize how the orders are sent to the graphics device, but the game developer had no control over that.
Now, the new graphics APIs are built more like command lists. Instead of bind, call, bind, call, and so forth, applications request queues to dump work into, and assemble the messages themselves. The APIs even allow these messages to be bundled together and sent as a whole. This allows direct control over memory and the ability to distribute a lot of the command generation across multiple CPU cores. An application is only as fast as its slowest (relevant) thread, so the ability to spread work out increases actual performance.
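As a rough illustration of that model, here is a hedged C++/Direct3D 12 sketch (not NVIDIA's guidance itself): several worker threads each record their own command list, and the frame's work is only handed to the driver in a single ExecuteCommandLists call at the end. It assumes the device, command queue, pipeline state, and one command allocator per thread already exist, and it omits root signature setup, resource binding, and fencing for brevity.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// Record one chunk of draw work into its own command list (one per thread).
// Real state setup (root signature, viewports, vertex buffers) is omitted.
static ComPtr<ID3D12GraphicsCommandList> RecordChunk(ID3D12Device* device,
                                                     ID3D12CommandAllocator* allocator,
                                                     ID3D12PipelineState* pso,
                                                     UINT vertexCount)
{
    ComPtr<ID3D12GraphicsCommandList> list;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator, pso, IID_PPV_ARGS(&list));
    // ... SetGraphicsRootSignature, RSSetViewports, IASetVertexBuffers, etc. ...
    list->DrawInstanced(vertexCount, 1, 0, 0);
    list->Close();  // a list must be closed before it can be executed
    return list;
}

// Build command lists on several CPU cores, then submit them all at once.
void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue,
                 ID3D12PipelineState* pso,
                 std::vector<ID3D12CommandAllocator*>& perThreadAllocators)
{
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(perThreadAllocators.size());
    std::vector<std::thread> workers;

    for (size_t i = 0; i < perThreadAllocators.size(); ++i) {
        workers.emplace_back([&, i] {
            lists[i] = RecordChunk(device, perThreadAllocators[i], pso, 3 * 1000);
        });
    }
    for (auto& w : workers) w.join();

    // One submission hands the whole frame's worth of commands to the driver.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```

The key point is that recording is cheap and runs in parallel, while the single submission at the end is where the driver finally sees the work.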
NVIDIA has created a large list of things that developers should do, and others that they should not, to increase performance. Pretty much all of them apply equally, regardless of graphics vendor, but there are a few NVIDIA-specific comments, particularly the ones about NvAPI at the end and a few labeled notes in the “Root Signatures” category.
The tips are fairly diverse, covering everything from how to efficiently use things like command lists, to how to properly handle multiple GPUs, and even how to architect your engine itself. Even if you're not a developer, it might be interesting to look over for clues about what makes the API tick.
Subject: Graphics Cards | September 26, 2015 - 03:46 PM | Scott Michaud
Tagged: Nintendo, Khronos
Console developers need to use the APIs that are laid out by the system's creator. Nintendo has had their own graphics API, called GX, for the last three generations, although it is rumored to be somewhat like OpenGL. A few days ago, Nintendo's logo appeared on the Khronos Group's website as a Contributor Member. This leads sites like The Register to speculate that Nintendo “pledges allegiance to the Vulkan (API)”.
I wouldn't be so hasty.
There are many reasons why a company would want to become a member of the Khronos Group. Microsoft, for instance, decided that the small, $15,000 USD/year membership fee was worth it to influence the future of WebGL. Nintendo, at least currently, does not make their own web browser; they license NetFront from Access Co. Ltd., but that could change (just like their original choice of Opera Mini did). Even with a licensed browser, they might want to discuss and vote on the specifics. But yes, WebGL is unlikely to be on their minds, let alone a driving reason, especially since they are not involved with the W3C. Another unlikely option is OpenCL, which might become relevant if they get into cloud services, but I can't see them caring enough about the API to do anything more than blindly use it.
Vulkan is, in fact, most likely what Nintendo is interested in, but that also doesn't mean that they will support it. The membership fee is quite low for a company like Nintendo, and, even if they don't use the API, their input could benefit them, especially since they rely upon third parties for graphics processors. Pushing for additions to Vulkan could force GPU vendors to adopt it, so it will be available for their own APIs, and so forth. There might even be some learning, up to the limits of the Khronos Group's confidentiality requirements.
Or, of course, Nintendo could adopt the Vulkan API to some extent. We'll see. Either way, the gaming company is beginning to open up with industry bodies. This could be positive.
Subject: Graphics Cards | September 24, 2015 - 02:53 PM | Jeremy Hellstrom
Tagged: radeon, nvidia, lionhead, geforce, fable legends, fable, dx12, benchmark, amd
By now you should have memorized Ryan's review of Fable's DirectX 12 performance on a variety of cards and hopefully tried out our new interactive IFU charts. You can't always cover every card, as those who were brave enough to look at the CSV file Ryan provided might have come to realize. That's why it is worth peeking at The Tech Report's review after reading through ours. They have included an MSI R9 285 and XFX R9 390 as well as an MSI GTX 970, which may be cards you are interested in seeing. They also spend some time looking at CPU scaling and the effect that has on AMD and NVIDIA's performance. Check it out here.
"Fable Legends is one of the first games to make use of DirectX 12, and it produces some truly sumptuous visuals. Here's a look at how Legends performs on the latest graphics cards."
Here are some more Graphics Card articles from around the web:
- The Graphics Cards For Linux Gaming With The Best Value & Efficiency At Higher Resolutions @ Phoronix
- AMD Has A Vulkan Linux Driver, But Will Be Closed-Source At First @ Phoronix
- ASUS R9 Fury STRIX Review @ Hardware Canucks
- XFX Radeon R9 390X Double Dissipation Core Edition Review @ HiTech Legion
- AMD Radeon R9 Nano CrossFire @ techPowerUp
- Sapphire R9 380 Nitro 4GB @ Kitguru
- AMD Radeon R9 Nano 4 GB @ techPowerUp
Subject: Graphics Cards | September 22, 2015 - 09:09 PM | Scott Michaud
Tagged: nvidia, linux, graphics drivers
In the NVIDIA driver control panel, there is a slider that controls Performance vs Quality. On Windows, I leave it set to “Let the 3D application decide” and change my 3D settings individually, as needed. I haven't used NVIDIA's control panel on Linux too much, mostly because my laptop is what I usually install Linux on, which runs an AMD GPU, but the UI seems to put a little more weight on it.
Or is that GTux?
Phoronix decided to test how each of these settings affects a few titles, and the only benchmark they bothered reporting is Team Fortress 2. It turns out that other titles see basically zero variance. TF2 saw a difference of 6 FPS though, from 115 FPS at High Quality to 121 FPS at Quality. Oddly enough, Performance and High Performance performed worse than Quality.
To me, this sounds like NVIDIA has basically forgotten about the feature. It barely affects any title, the one game it changes anything measurable in is from 2007, and it contradicts what the company is doing on other platforms. I predict that Quality is the default, which is the same as Windows (albeit with only 3 choices: “Performance”, “Balanced”, and the default “Quality”). If it is, you probably should just leave it there 24/7 in case NVIDIA has literally not thought about tweaking the other settings. On Windows, it is kind-of redundant with GeForce Experience, anyway.
Final note: Phoronix has only tested the GTX 980. Results may vary elsewhere, but probably don't.
Subject: Graphics Cards, Processors | September 17, 2015 - 09:33 PM | Scott Michaud
Tagged: Skylake, kaby lake, iris pro, Intel, edram
Update: Sept 17, 2015 @ 10:30 ET -- To clarify: I'm speaking of socketed desktop Skylake. There will definitely be Iris Pro in the BGA options.
Before I begin, the upstream story has a few disputes that I'm not entirely sure on. The Tech Report published a post in September that cited an Intel spokesperson, who said that Skylake would not be getting a socketed processor with eDRAM (unlike Broadwell, which got one just before Skylake launched). This could be a big deal, because the fast, on-processor cache can be used by the CPU as well as the GPU. It is sometimes called “128MB of L4 cache”.
Later, ITWorld and others posted stories that said Intel killed off a Skylake processor with eDRAM, citing The Tech Report. After, Scott Wasson claimed that a story, which may or may not be ITWorld's one, had some “scrambled facts” but wouldn't elaborate. Comparing the two articles doesn't really illuminate any massive, glaring issues, but I might just be missing something.
Update: Sept 18, 2015 @ 9:45pm -- So I apparently misunderstood the ITWorld article. They were claiming that Broadwell-C was discontinued, while The Tech Report was talking about Socketed Skylake with Iris Pro. I thought they both were talking about the latter. Moreover, Anandtech received word from Intel that Broadwell-C is, in fact, not discontinued. This is odd, because ITWorld said they had confirmation from Intel. My guess is that someone gave them incorrect information. Sorry that it took so long to update.
In the same thread, Ian Cutress of Anandtech asked whether The Tech Report benchmarked the processor after Intel tweaked its FCLK capabilities, which Scott did not (but is interested in doing). Intel enabled a slight frequency boost to the link between the CPU and PCIe lanes after Skylake shipped, which naturally benefits discrete GPUs. Since the original claim was that Broadwell-C is better than Skylake-K for gaming, giving a 25% boost to that clock (or removing a 20% loss, depending on how you look at it) could tilt Skylake back above Broadwell. We won't know until it's benchmarked, though.
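If the dueling 25% and 20% figures look inconsistent, they are the same gap measured from opposite ends:

$$\frac{1}{1 - 0.20} = 1.25$$

so removing a 20% deficit is arithmetically identical to a 25% gain.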
Iris Pro and eDRAM, while skipping socketed Skylake, might arrive in future architectures, such as Kaby Lake. It seems to have been demonstrated that, in some situations - and ones relevant to gamers at that - the eDRAM can help computation, without even considering the compute potential of a better secondary GPU. One argument is that cutting the extra die room gives Intel better margins, which is almost definitely true, but I wonder how much attention Kaby Lake will get. Especially with AVX-512 and other features being debatably removed, it almost feels like Intel is treating this Tock like a Tick, since they didn't really get one with Broadwell, and Kaby Lake will be the architecture that leads us to 10 nm. On the other hand, each of these architectures is developed by an independent team, so I might be wrong in comparing them serially.
Subject: Graphics Cards | September 17, 2015 - 03:34 PM | Jeremy Hellstrom
Tagged: linux, amd, nvidia
If you are using a 1080p monitor or perhaps even outputting to a large 1080p TV, there is no point in picking up a $500+ GPU as you will not be using the majority of its capabilities. Phoronix has just done research on what GPU offers you the best value for gaming at that resolution, putting five AMD GPUs from the Radeon R9 270X to the R9 Fury and six NVIDIA cards ranging from the GTX 950 to a GTX TITAN X into their test bench. The TITAN X is a bit of overkill, unless somehow your display is capable of 200+ fps. When you look at frames per second per dollar the GTX 950 came out on top, providing playable frame rates at a very low cost. These results may change as AMD's Linux driver improves but for now NVIDIA is the way to go for those who game on Linux.
"Earlier this week I posted a graphics card comparison using the open-source drivers and looking at the best value and power efficiency. In today's article is a larger range of AMD Radeon and NVIDIA GeForce graphics cards being tested under a variety of modern Linux OpenGL games/demos while using the proprietary AMD/NVIDIA Linux graphics drivers to see how not only the raw performance compares but also the performance-per-Watt, overall power consumption, and performance-per-dollar metrics."
Here are some more Graphics Card articles from around the web:
- AMD R9 Nano @ Kitguru
- AMD Radeon R9 Nano @ Hardwareheaven
- AMD Radeon R9 Nano @ Legion Hardware
- The AMD R9 Nano Performance Review @ Hardware Canucks
- Asus R9 390X STRIX DC3 OC 8GB @ Kitguru
- Asus ROG Poseidon Platinum GTX 980 Ti Review @ Bjorn3d
Subject: Graphics Cards | September 17, 2015 - 09:14 AM | Sebastian Peak
Tagged: nvidia, msi, liquid cooled, GTX980Ti SEA HAWK, GTX 980 Ti, graphics card, corsair
We reported last night on Corsair's new Hydro GFX, a liquid-cooled GTX 980 Ti built on an MSI card, and MSI has their own new product based on this concept as well.
"The MSI GTX 980Ti SEA HAWK utilizes the popular Corsair H55 closed loop liquid-cooling solution. The micro-fin copper base takes care of an efficient heat transfer to the high-speed circulation pump. The low-profile aluminum radiator is easy to install and equipped with a super silent 120 mm fan with variable speeds based on the GPU temperature. However, to get the best performance, the memory and VRM need top-notch cooling as well. Therefore, the GTX 980Ti SEA HAWK is armed with a ball-bearing radial fan and a custom shroud design to ensure the best cooling performance for all components."
The MSI GTX 980 Ti Sea Hawk actually appears identical to the Corsair Hydro GFX, and a look through the specs confirms the similarities:
With a 1190 MHz Base and 1291 MHz Boost clock the SEA HAWK has the same factory overclock speeds as the Corsair-branded unit, and MSI is also advertising the card's potential to go further:
"Even though the GTX 980Ti SEA HAWK boasts some serious clock speeds out-of-the-box, the MSI Afterburner overclocking utility allows users to go even further. Explore the limits with Triple Overvoltage, custom profiles and real-time hardware monitoring."
I imagine the availability of this MSI-branded product will be greater than the Corsair-branded equivalent, but in either case you get a GTX 980 Ti with the potential to run as fast and cool as a custom-cooled solution, without any of the extra work. Pricing wasn't immediately available this morning, but expect something close to the $739 MSRP we saw with Corsair.
Subject: Graphics Cards | September 16, 2015 - 09:00 PM | Sebastian Peak
Tagged: nvidia, msi, liquid cooler, GTX 980 Ti, geforce, corsair, AIO
A GPU with an attached closed-loop liquid cooler is a little more mainstream these days, with AMD's Fury X a high-profile example, and now a partnership between Corsair and MSI is bringing a very powerful NVIDIA option to the market.
The new product is called the Hydro GFX, with NVIDIA's GeForce GTX 980 Ti supplying the GPU horsepower. Of course the advantage of a closed-loop cooler is higher (sustained) clocks and lower temps/noise, which in turn means much better performance. Corsair explains:
"Hydro GFX consists of a MSI GeForce GTX 980 Ti card with an integrated aluminum bracket cooled by a Corsair Hydro Series H55 liquid cooler.
Liquid cooling keeps the card’s hottest, most critical components - the GPU, memory, and power circuitry - 30% cooler than standard cards while running at higher clock speeds with no throttling, boosting the GPU clock 20% and graphics performance up to 15%.
The Hydro Series H55 micro-fin copper cooling block and 120mm radiator expels the heat from the PC reducing overall system temperature and noise. The result is faster, smoother frame rates at resolutions of 4K and beyond at whisper quiet levels."
The factory overclock on this 980 Ti is pretty substantial out of the box, with a 1190 MHz Base (stock 1000 MHz) and 1291 MHz Boost clock (stock 1075 MHz). Memory is not overclocked (running at the default 7096 MHz), so there should still be some headroom for overclocking thanks to the air cooling for the RAM/VRM.
A look at the box - and the Corsair branding
Specs from Corsair:
- NVIDIA GeForce GTX 980 Ti GPU with Maxwell 2.0 microarchitecture
- 1190/1291 MHz base/boost clock
- Clocked 20% faster than standard GeForce GTX 980 Ti cards for up to a 15% performance boost.
- Integrated liquid cooling technology keeps GPU, video RAM, and voltage regulator 30% cooler than standard cards
- Corsair Hydro Series H55 liquid cooler with micro-fin copper block, 120mm radiator/fan
- Memory: 6GB GDDR5, 7096 MHz, 384-bit interface
- Outputs: 3x DisplayPort 1.2, HDMI 2.0, and Dual Link DVI
- Power: 250 watts (600 watt PSU required)
- Requirements: PCI Express 3.0 16x dual-width slot, 8+6-pin power connector, 600 watt PSU
- Dimensions: 10.5 x 4.376 inches
- Warranty: 3 years
- MSRP: $739.99
As far as pricing/availability goes Corsair says the new card will debut in October in the U.S. with an MSRP of $739.99.
Subject: Graphics Cards | September 16, 2015 - 09:16 AM | Sebastian Peak
Tagged: TSMC, Samsung, pascal, nvidia, hbm, graphics card, gpu
According to a report by BusinessKorea, TSMC has been selected to produce the upcoming Pascal GPU after initially competing with Samsung for the contract.
Though some had considered the possibility of both Samsung and TSMC sharing production (albeit on two different process nodes, as Samsung is on 14 nm FinFET), in the end the duties fall on TSMC's 16 nm FinFET alone if this report is accurate. The move is not too surprising considering the longstanding position TSMC has maintained as a fab for GPU makers and Samsung's lack of experience in this area.
The report didn't make the release date for Pascal any clearer, simply naming "next year" for the new HBM-powered GPU, which will also reportedly feature 16 GB of HBM2 memory for the flagship version of the card. This would potentially be the first GPU released at 16 nm (unless AMD has something in the works before Pascal's release), as all current AMD and NVIDIA GPUs are manufactured at 28 nm.