Subject: Graphics Cards | July 17, 2018 - 12:38 PM | Ken Addison
Tagged: VR, VirtualLink, valve, usb 3.1, Type-C, Oculus, nvidia, microsoft, DisplayPort, amd
Today NVIDIA, Oculus, Valve, AMD, and Microsoft, the members of the VirtualLink consortium, announced the VirtualLink standard, which aims to unify the physical connection between virtual reality headsets and devices.
Based upon the physical USB Type-C connector, VirtualLink combines the bandwidth of DisplayPort 1.4 (32.4 Gbit/s) with a USB 3.1 data connection and the ability to deliver up to 27W of power.
VirtualLink aims to simplify the setup of current VR headsets
Given the current "Medusa-like" nature of VR headsets with multiple cables needing to feed video, audio, data, and power to the headset, simplifying to a single cable should provide a measurable benefit to the VR experience. In addition, having a single, unified connector could provide an easier method for third parties to provide wireless solutions, like the current TPCast device.
VirtualLink is an open standard, and the initial specifications can currently be found on the consortium website.
Subject: Graphics Cards | July 11, 2018 - 05:25 PM | Jeremy Hellstrom
Tagged: RX VEGA 64, amd, undervolting, killing floor 2, wolfenstein 2: The New Colossus, Middle-earth: Shadow of War
You may have stumbled across threads on the wild web created by AMD enthusiasts who have been undervolting their Vega cards and bragging about it. This will seem counterintuitive to overclockers, who regularly increase the voltage their GPU will accept in order to increase the frequencies of those cards. There is a method to this madness, and it is not simply that they are looking to save on power bills. Overclockers Club investigates the methods used and the performance effects on the Vega 64 in several modern titles in their latest GPU review.
"Across all three games we saw a noticeable drop in power use when undervolting and not limiting the frame rate, or using a high limit. This reduction in power use is important as it improves the efficiency of the RX Vega 64 and it allows increased clock speeds with the reduction of thermal throttling."
Here are some more Graphics Card articles from around the web:
- ASRock Phantom Gaming X Radeon RX580 8G OC @ Guru of 3D
- ASRock Phantom Gaming X RX 580 @ Kitguru
- GeForce GTX 1050 3GB @ Guru of 3D
- GeForce GT 1030: The DDR4 Abomination Benchmarked @ Techspot
- Workstation GPU Performance Testing: Redshift, Blender & MAGIX Vegas @ Techgage
Subject: Graphics Cards | July 2, 2018 - 01:08 PM | Ken Addison
Tagged: vive pro, steamvr, oculus rift, Oculus, htc
Although the HTC Vive Pro has been available in headset-only form as an upgrade for existing Vive owners for several months, there has been no full solution for customers looking to enter the ecosystem from scratch.
Today, HTC announced immediate availability for their full VIVE Pro kit featuring Steam VR 2.0 Base Stations and the latest revision of the HTC Vive Controllers.
For those who need a refresher, the HTC Vive Pro improves upon the original Vive headset with 78% higher resolution (2880x1600 combined across both eyes), as well as a built-in deluxe audio strap.
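That 78% figure checks out against the pixel counts, assuming the original Vive's combined 2160x1200 panel resolution:

$$\frac{2880 \times 1600}{2160 \times 1200} = \frac{4{,}608{,}000}{2{,}592{,}000} \approx 1.78$$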
New with the HTC Vive Pro Full Kit, the SteamVR 2.0 Base Stations allow users to add up to 4 base stations (previously limited to 2) for a wider play area of up to 10x10 meters (32x32 feet), as well as improved positional tracking.
It's worth noting that this kit does not include the upcoming next generation of Steam VR controllers codenamed "Knuckles," which likely won't be available until 2019.
Given the steep asking price and "Pro" moniker, it is clear that HTC is targeting only very high-end gaming enthusiasts and professionals with this headset, rather than the more general audience the original Vive targets. As of now, the original Vive is expected to continue to be available as a lower-cost alternative.
Subject: Graphics Cards | June 26, 2018 - 10:01 PM | Scott Michaud
Tagged: nvidia, graphics drivers, geforce
NVIDIA aligns their graphics driver releases with game launches, and today’s 398.36 is for Ubisoft’s The Crew 2. The game comes out on Friday, but the graphics vendors like to give a little room if possible (and a Friday makes that much easier than a Tuesday). NVIDIA is also running a bundle deal – you get The Crew 2 Standard Edition free when you purchase a qualifying GTX 1080, GTX 1080 Ti, GeForce gaming desktop, or GeForce gaming laptop. Personally, I would wait for new graphics cards to launch, but if you need one now then – hey – free game!
Now onto the driver itself.
GeForce 398.36 is actually from the 396.xx branch, which means that it's functionally similar to the previous drivers. NVIDIA seems to release big changes with the start of an even-numbered branch, such as new API support, and then spends the rest of that branch, and its odd-numbered successor, fixing bugs and adding game-specific optimizations. While this means that there shouldn't be anything surprising, it also means that the driver should be stable and polished.
This brings us to the bug fixes.
If you were waiting for the blue-screen issue with Gears of War 4 to be fixed on Pascal GPUs, then grab your chainsaws: it should be good to go. Likewise, if you had issues with G-SYNC causing stutter outside of G-SYNC games, such as on the desktop, then that has apparently been fixed, too.
When you get around to it, the new driver is available on GeForce Experience and NVIDIA’s site.
Subject: Graphics Cards | June 12, 2018 - 02:21 PM | Ryan Shrout
Tagged: Intel, graphics, gpu, raja koduri
Intel CEO Brian Krzanich disclosed during an analyst event last week that the company will have its first discrete graphics chips available in 2020. This will mark the beginning of the chip giant's journey toward a portfolio of high-performance graphics products for various markets including gaming, data center, and AI.
Some previous rumors posited that Intel might make its graphics reveal at CES 2019 this coming January, but that timeline was never adopted by Intel; it would have been drastically overaggressive and in no way reasonable given the development process of a new silicon design.
Back in November 2017 Intel brought on board Raja Koduri to lead the graphics and compute initiatives inside the company. Koduri was previously in charge of the graphics division at AMD, helping to develop and grow the Radeon brand, and his departure to Intel was thought to have significant impact on the industry.
A typical development cycle for a complex graphics architecture and chip is three years, so even hitting the 2020 window with new engineering talent is aggressive.
Intel did not go into detail about what performance level or target market this first discrete GPU solution might address, but Intel EVP of the Data Center Group Navin Shenoy confirmed that the company’s strategy will include solutions for data center segments (think AI, machine learning) along with client (think gaming, professional development).
This is part of Intel's wider AI and machine learning strategy, which includes these discrete graphics chips alongside other options like the Xeon processor family, FPGAs from its acquisition of Altera, and custom AI chips like the Nervana-based NNP.
While the leader in the space, NVIDIA, maintains its position with graphics chips, it is modifying and augmenting these processors with additional features and systems to accelerate AI even more. It will be interesting to see how Intel plans to catch up in design and deployment.
Though few doubt Intel's capability in chip design, building a new GPU architecture from the ground up is no small task. Intel needs to deliver performance and efficiency in the same ballpark as NVIDIA and AMD, within 20% or so. Doing that on the first attempt, while also building and fostering the necessary software ecosystem and tools around the new hardware, is a tough ask of any company, Silicon Valley juggernaut or no. Until the first products arrive in 2020 for us to gauge, NVIDIA and AMD hold the leadership positions.
Both AMD and NVIDIA will be watching Intel with great interest as GPU development accelerates. AMD’s Forest Norrod, SVP of its data center group, recently stated in an interview that he didn’t expect Koduri at Intel to “have any impact at Intel for at least another three years.” If Intel can deliver on its 2020 target for the first in a series of graphics releases, it might put pressure on these two existing graphics giants sooner than most expected.
Subject: Graphics Cards | June 8, 2018 - 08:22 AM | Tim Verry
Tagged: Vega Nano, SFF, RX Vega 56, powercolor, mini ITX, computex 2018, computex, amd
PowerColor showed off its new small form factor RX Vega 56 based graphics card at Computex 2018, finally making it official with more information following last month's rumors and official teaser. The PowerColor RX Vega 56 Nano Edition is the spiritual successor to AMD's Fiji XT-based R9 Nano from 2015 and features an AMD RX Vega 56 GPU with 8GB of HBM2 memory on a short dual-slot graphics card measuring 170mm x 95mm x 38mm. In fact, PowerColor's RX Vega 56 Nano Edition has a PCB that is only 5mm longer (according to TechPowerUp) than AMD's previous Nano card, and including the cooler the card is less than 2 cm longer overall.
PowerColor's new SFF graphics card is a dual-slot design cooled by a single 80mm fan and a dense aluminum heatsink covered by a black plastic shroud. The card is powered by one 8-pin and one 6-pin power connector and offers three DisplayPort 1.4 outputs and one HDMI 2.0b output.
The RX Vega 56 GPU features 56 CUs (compute units) with 3,584 shader processors and 224 texture units. PowerColor has kept the GPU at reference clockspeeds of 1,156 MHz base and up to 1,471 MHz boost. The 8GB of HBM2 memory is stock clocked at 800 MHz and connects to the GPU via a 2048-bit bus.
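Those memory specifications work out to the familiar stock Vega 56 bandwidth figure; HBM2 transfers twice per clock, so:

$$800\,\text{MHz} \times 2 \times \frac{2048\,\text{bits}}{8\,\text{bits/byte}} = 409.6\,\text{GB/s}$$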
The PowerColor RX Vega 56 Nano Edition will reportedly be available shortly with a $449 MSRP. The new small form factor Nano Edition card is an interesting proposition for gamers wanting to build Mini ITX systems. So long as PowerColor can get the card out at close to MSRP, and performance holds up without too much thermal throttling, I think there is a definite niche market for it. (Note that the R9 Nano debuted at a $650 MSRP!)
Subject: Graphics Cards | June 5, 2018 - 11:58 PM | Tim Verry
Tagged: Vega, machine learning, instinct, HBM2, gpu, computex 2018, computex, amd, 7nm
AMD showed off its first 7nm GPU, the expected 7nm Vega-based Radeon Instinct product with 32GB of HBM2 memory. The new GPU uses the Vega architecture along with the open source ecosystem built by AMD to enable both graphics and GPGPU workloads. AMD demonstrated the 7nm Vega GPU ray tracing in a cool demo that showed realistic reflections and shadows being rendered on a per-pixel basis on a model. Granted, we are still a long way away from seeing that kind of detail in real-time gaming, but it is still cool to see glimpses of that ray-traced future.
According to AMD, the 32GB of HBM2 memory will greatly benefit creators and enterprise clients that need to work with large datasets and quickly make changes and updates to models before doing a final render. The larger memory buffer will also help in HPC applications, where more big-data workloads can be kept close to the GPU for processing over the wide HBM2 memory bus. Further, HBM2 has physical size and energy efficiency benefits, which will pique the interest of datacenters focused on maximizing TCO.
Dr. Lisa Su came on stage towards the end of the 7nm Vega demonstration to show off the GPU in person, and you can see that it is rather tiny for the compute power it provides! It is shorter than the two stacks of HBM2 dies on either side, for example.
Of course AMD did not disclose all the nitty-gritty specifications of the new machine learning graphics card that enthusiasts want to know. We will have to wait a bit longer for that information unfortunately!
As for other 7nm offerings? As Ryan talked about during CES in January, 2018 will primarily be the year for the machine learning-focused Radeon Instinct RX Vega 7nm GPU, with other consumer-focused GPUs using the smaller process node likely coming out in 2019. Whether those 7nm GPUs in 2019 will be a refreshed Vega or the new Navi is still up for debate, however AMD's graphics roadmap certainly doesn't rule out Navi as a possibility. In any case, AMD did state during the livestream that it intends to release a new GPU every year with the GPUs alternating between new architecture and new process node.
What are your thoughts on AMD's graphics roadmap and its first 7nm Vega GPU?
Subject: Graphics Cards | May 29, 2018 - 03:30 PM | Jeremy Hellstrom
Tagged: amd, nvidia, marketshare, jon peddie
Jon Peddie Research has just released its latest look at the discrete GPU market, which has been doing significantly better than the PC market overall, for reasons we are all quite familiar with. While sales of full systems have declined 24.5%, GPU sales increased 6.4% over the past quarter and an impressive 66.4% compared to this time last year.
With just two suppliers in the market now, any gain by one is a loss for the other, and it has been AMD's turn to succeed. The gain of 1.2% this quarter is not as impressive as AMD's total gains over the past 12 months, which saw 7.4% of the sales once going to NVIDIA shift to AMD. Vega may not be the most powerful architecture on the planet, but it is selling, along with previous generations of GPUs.
The next quarter may level out, not just due to decreases in the purchasing of new mining equipment but also due to historical trends, as stock is accumulated to prepare for sales in the fourth quarter. There is also the fact that it has been a while since either AMD or NVIDIA has released new kit, and the majority of those planning an upgrade this cycle have already done so.
Once we see new kit arrive and the prices of products from the previous generation receive discounts, there should be another spike in sales. The mystery is what the next generation will bring from these two competitors.
Subject: Graphics Cards | May 28, 2018 - 01:56 PM | Jeremy Hellstrom
Tagged: gtx 1080 ti, watercooler, Heatkiller IV, WATERCOOL
WATERCOOL's Heatkiller IV for the GTX 1080 Ti is up for review at [H]ard|OCP. As you can see below, it certainly adds a cool look to your card, but that is only half the story. WATERCOOL added features which should improve cooling, such as a flowplate, as well as changes to the internals which should improve flow rate. This results in noticeable improvements over a Founders Edition, with both lower temperatures and higher clocks. Check out the full review to see if it convinces you to switch your cooling methods.
"Watercool and its Heatkiller series of custom water components are well known for being some of the best in the world when it comes to performance and design. We give its Heatkiller IV water block for the NVIDIA GTX 1080 Ti a good once over, and come away very impressed. Quality and performance all in one package."
Here are some more Graphics Card articles from around the web:
- The Best Graphics Cards 2018 @ TechSpot
- ASRock Radeon RX 580 Phantom Gaming X 8 GB @ TechPowerUp
- Radeon Pro Software Enterprise Edition 18Q2 Tech Report @ TechARP
- Last Gen Games - Max IQ and Perf on Today's GPUs @ [H]ard|OCP
Subject: Graphics Cards | May 23, 2018 - 09:01 PM | Ken Addison
Tagged: vega frontier edition, titan xp, specviewperf 13, specgpc
SPECgpc, the SPEC group behind industry-standard graphics benchmarks, released an updated version of SPECviewperf today. The new SPECviewperf 13 is an update to the industry-staple benchmark for measuring graphics performance in workstation and professional applications.
Covering a wide array of applications such as Solidworks, Maya, Creo, 3ds Max, and more, SPECviewperf provides insight into the performance of mission-critical, but often difficult to benchmark, scenarios.
Changes for this new version of SPECviewperf include:
- Support for 4K resolution displays.
- New reporting methods, including JSON output that enables more robust and flexible result parsing.
- A new user interface that will be standardized across all SPEC/GWPG benchmarks.
- New workloads and scoring that reflect the range of activities found in real-world applications.
- Various bug fixes and performance improvements.
Given that the changes include new datasets for the energy, medical, Creo, and Maya viewsets, as well as tweaks to the others, we decided to grab some quick results from two high-end prosumer level GPUs, the NVIDIA Titan Xp and the AMD RX Vega Frontier Edition.
The full testbed configuration is listed below:
|Test System Setup|
|Processor||Intel Core i9-7960XE|
|Motherboard||ASUS PRIME X299 Deluxe|
|Memory||32GB Corsair Vengeance DDR4-3200 (operating at 2400 MHz)|
|Storage||Intel Optane SSD DC P4800X 750GB|
|Graphics Cards||NVIDIA GeForce TITAN Xp 12GB, AMD Radeon Vega Frontier Edition (Liquid) 16GB|
|Graphics Drivers||AMD Radeon Pro 18.Q2.1|
|Power Supply||Corsair RM1000x|
|Operating System||Windows 10 Pro x64 RS4|
While we see the Titan Xp handily winning most of the tests in SPECviewperf 13, there are some notable exceptions, including the newly updated energy workload, where the Vega Frontier Edition manages to pull off a 13% lead. Additionally, Solidworks, a very widely used CAD application, sees a 23% performance advantage for AMD.
SPECviewperf is a benchmark that we rely on to evaluate professional application performance, and we are glad to see it getting some improvements.
For anyone curious about the performance of their system, SPECviewperf 13 is free to download and use for non-profit entities that do not sell computer hardware, software, or related services.
Subject: Graphics Cards | May 23, 2018 - 06:21 PM | Tim Verry
Tagged: pascal, nvidia, GP107, GDDR5, budget
NVIDIA recently, and quietly, launched a new budget graphics card that neatly slots itself between the GTX 1050 and the GTX 1050 Ti. The new GTX 1050 3GB, as the name suggests, features 3GB of GDDR5 memory. The card is closer to the GTX 1050 Ti than the name would suggest, however, as it uses the same 768 CUDA cores rather than the 640 of the GTX 1050 2GB. The memory is where the card differs from the GTX 1050 Ti: NVIDIA has cut the number of memory controllers by one, along with the corresponding ROPs and cache, meaning that the new GTX 1050 3GB has a smaller memory bus and less memory bandwidth than both the GTX 1050 2GB and the GTX 1050 Ti 4GB.
Specifically, the GTX 1050 with 3GB GDDR5 has a 96-bit memory bus that when paired with 7 Gbps GDDR5 results in maximum memory bandwidth of 84 GB/s versus the other previously released cards' 128-bit memory buses and 112 GB/s of bandwidth.
Clockspeeds on the new GTX 1050 3GB are a good bit higher than the other cards', though, with base clocks starting at 1392 MHz (the boost clock of the 1050 Ti) and boost clocks running up to 1518 MHz. Thanks to the clockspeed bumps, the theoretical GPU performance of 2.33 TFLOPS is actually higher than both the GTX 1050 Ti (2.14 TFLOPS) and the existing GTX 1050 2GB (1.86 TFLOPS), though the narrower memory bus (and the loss of a small number of ROPs and some cache) will hold the card back from surpassing the Ti variant in most workloads. NVIDIA needs to maintain product segmentation somehow!
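Both headline figures are easy to sanity-check: bandwidth is bus width times data rate, and the TFLOPS number assumes two floating-point operations (one fused multiply-add) per CUDA core per boost clock:

$$\frac{96\,\text{bits}}{8\,\text{bits/byte}} \times 7\,\text{GT/s} = 84\,\text{GB/s}, \qquad 2 \times 768 \times 1.518\,\text{GHz} \approx 2.33\,\text{TFLOPS}$$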
|Card||NVIDIA GTX 1050 2GB||NVIDIA GTX 1050 3GB||NVIDIA GTX 1050 Ti 4GB||AMD RX 560|
|GPU Cores||640||768||768||896 or 1024|
|TFLOPS||1.86||2.33||2.14||up to 2.6|
|Memory||2GB GDDR5||3GB GDDR5||4GB GDDR5||2GB or 4GB GDDR5|
|Memory Clockspeed||7 Gbps||7 Gbps||7 Gbps||7 Gbps|
|Memory Bandwidth||112 GB/s||84 GB/s||112 GB/s||112 GB/s|
|TDP||75W||75W||75W||60W to 80W|
The chart above compares the specifications of the GTX 1050 3GB with the GTX 1050 and GTX 1050 Ti on the NVIDIA side, and with the AMD RX 560, which appears to be its direct competitor based on pricing. The new 3GB GTX 1050 should compete well with AMD's Polaris 11 based GPU as well as NVIDIA's own cards in the budget gaming space. Hopefully, the downside of a reduced memory bus will at least dissuade cryptocurrency miners from adopting this card as an entry-level miner for Ethereum and other altcoins, giving gamers a chance to buy something a bit better than the GTX 1050 and RX 550 at close to MSRP while the miners fight over the Ti and higher variants with more memory and compute units.
NVIDIA did not release formal pricing or release date information, but the cards are expected to launch in June and prices should be around $160 to $180 depending on retailer and extra things like fancier coolers and factory overclocks.
What are your thoughts on the GTX 1050 3GB? Is it the bastion of hope budget gamers have been waiting for? (heh) Looking around online, it seems pricing for these budget cards has somewhat returned to sane levels, and hopefully alternative options like these, aimed at gamers, will help further stabilize the market for those of us DIYers who want to game more than mine. I do wish that NVIDIA had changed the name a bit to better differentiate the card, maybe the GTX 1050G or something, but oh well. I suppose so long as the 640 CUDA core GTX 1050 never gets 3GB of GDDR5, gamers will at least be able to tell the cards apart by the amount of memory listed on the box or website.
Subject: Graphics Cards, Processors | May 18, 2018 - 04:33 PM | Ken Addison
Tagged: Vega, ryzen, raven ridge, Radeon Software Adrenalin Edition, r5 2400g, r3 2200g, amd
AMD has released new Q2 2018 graphics drivers for its Raven Ridge desktop APUs. The drivers are based on AMD's current Radeon Software Adrenalin Edition release and bring features such as ReLive and the Radeon overlay to the Vega-powered desktop platform.
We haven't had a lot of time to look for potential performance enhancements this driver brings yet, but we did do a quick 3DMark run on our Ryzen 5 2400G with memory running at DDR4-3200.
Here, we see healthy gains of around 5% in 3DMark Fire Strike with the new driver. While I wouldn't expect big gains for older titles, newer titles that have come out since the initial Raven Ridge driver release in February will see the biggest gains.
We are still eager to see the mobile iterations of AMD's Raven Ridge processors get updated drivers, as notebooks such as the HP Envy X360 have not been updated since they launched in November of last year.
It's good to see progress from AMD on this front, but they must work harder to unify the graphics drivers of their APU products into the mainstream graphics driver releases if they want those products to be taken seriously as gaming options.
Subject: Graphics Cards | May 9, 2018 - 12:23 PM | Sebastian Peak
Tagged: video card, pricing, msrp, mining, GTX 1080, gtx 1070, gtx 1060, gtx, graphics, gpu, gaming, crypto
The wait for in-stock NVIDIA graphics cards without inflated price tags seems to be over. Yes, in the wake of months of crypto-fueled disappointment for gamers, the much-anticipated, long-awaited return of graphics cards at (gasp) MSRP is at hand. NVIDIA has now listed most of its GTX lineup as in stock (with a limit of 2) at normal MSRPs, with the only exception being the GTX 1080 Ti (still out of stock). The lead time from NVIDIA is one week, but worth it for those interested in the lower prices and 'Founders Edition' coolers.
Many other GTX 10 Series options are to be found online at near-MSRP pricing, though as before many of the aftermarket designs command a premium, with factory overclocks and proprietary cooler designs to help justify the added cost. Even Amazon - previously home to some of the most outrageous price-gouging from third-party sellers in months past - has cards at list pricing, which seems to solidify a return to GPU normalcy.
The GTX 1080 inches closer to standard pricing once again on Amazon
GTX 1070 cards continue to carry the highest premium outside of NVIDIA's store, with the lowest current pricing on Newegg or Amazon at $469.99. Still, the overall return to near-MSRP pricing around the web is good news for gamers who have been forced to play second (or third) fiddle to cryptomining "entrepreneurs" for several months now, a disturbing era in which pre-built gaming systems from Alienware and others actually presented a better value than DIY builds.
Subject: Graphics Cards | May 7, 2018 - 02:48 PM | Jeremy Hellstrom
Tagged: Dunia 2, far cry 5, 4k
Armed with a GTX 1080 Ti Founders Edition and a 4K display, you venture forth into the advanced graphics settings of Ubisoft's latest Far Cry game. Inside you face a multitude of challenges, from volumetric fog through various species of anti-aliasing, until finally facing the beast known as overall quality. [H]ard|OCP completed this quest and you can benefit from their experience, although no matter how long they searched, they could not locate any sign of NVIDIA's GameWorks, which appeared in the previous Far Cry. There were signs of rapid packed math enhancements, much to the rejoicing of those who have no concrete interest in GameWorks' existence.
"We will be comparing Far Cry 5's Overall Quality, Shadows, Volumetric Fog, and Anti-Aliasing image quality. In addition, we will find out if we can improve IQ in the game by adding Anisotropic Filtering and forcing AA from the control panel. We’ve have a video showing you many IQ issues we found in Far Cry 5, and those are plentiful."
Here are some more Graphics Card articles from around the web:
- 20-Way NVIDIA GeForce / AMD Radeon GPU Comparison For Rise of The Tomb Raider On Vulkan/Linux @ Phoronix
- AMD Ryzen 5 2600 - The Funky One @ Guru of 3D
- GTX 1070 Ti Overclocking Guide @ OCC
- Radeon Pro vs. Quadro: A Fresh Look At Workstation GPU Performance @ Techgage
- Download: MSI AfterBurner 4.5.0 @ Guru of 3D
Subject: Graphics Cards | May 3, 2018 - 08:41 PM | Scott Michaud
Tagged: nvidia, graphics drivers
The previous set of drivers, version 397.31, released last week, had a few bugs in them… so NVIDIA has released a hotfix (397.55) to address the issues (without waiting for the next "Game Ready" date). Of course, these drivers went through a reduced QA process, so they should be avoided unless one of the problems affects you.
And the fixed bugs are:
- Device Manager may report Code 43 on certain GTX 1060 models
- Netflix playback may occasionally stutter
- Added support for Microsoft Surface Book notebooks
- Driver may get removed after PC has been idle for extended periods of time
The last issue manifests in a couple of different forms. The forum page specifically mentions Windows 10, although users with Windows 7 and Windows 8 could also be affected by the bug, just with different symptoms. I experienced it, and for me (on Windows 10) it was just a matter of force-quitting all processes prefixed with “nv” in task manager. My symptoms were that GeForce Experience would attempt to re-download the drivers and StarCraft II would fail to launch. If you’re experiencing similar issues, then you’ll probably want to give this driver a shot.
You can download it from their CustHelp page.
Subject: Graphics Cards | April 25, 2018 - 08:27 PM | Scott Michaud
Tagged: nvidia, graphics drivers, rtx, Volta
It’s quite the jump in version number from 391.35 to 397.31, but NVIDIA has just released a new graphics driver. Interestingly, it is “Game Ready” for Battletech, which I have been looking forward to, but I was always under the impression that no one else was. Apparently not.
As for its new features? The highlight is a developer preview of NVIDIA RTX Technology. This requires a Volta GPU, which currently means a Titan V unless your team was seeded something that doesn’t necessarily exist, as well as 396.xx+ drivers, the new Windows 10 update, and Microsoft’s DXR developer package. Speaking of which, I’m wondering how much of the version number bump could be attributed to RTX being on the 396.xx branch. Even then, it still feels like a branch or two never left NVIDIA’s dev team. Hmm.
Moving on, the driver also conforms with the Vulkan 1.1 test suite. If you remember back to early March, the Khronos Group released the new standard, which integrated a bunch of features into core and brought Subgroup Operations into the mix. These could allow future shaders to perform quicker by being compiled with new intrinsic functions.
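For a sense of what subgroup operations buy you: they let threads in the same subgroup exchange values directly in registers rather than bouncing data through shared memory. Vulkan exposes these in GLSL, but the same class of cross-lane primitive has existed in CUDA as warp intrinsics for some time, so here is a minimal CUDA sketch of a warp-wide sum to illustrate the idea (an analogy, not Vulkan code):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Warp-level sum reduction using shuffle intrinsics. Each of the 32 lanes
// contributes one value; after the loop, lane 0 holds the full sum without
// any shared-memory traffic.
__global__ void warpReduceSum(const float* in, float* out) {
    float v = in[threadIdx.x];
    // Halve the stride each step: 16, 8, 4, 2, 1.
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffff, v, offset);
    if (threadIdx.x == 0) *out = v;
}

int main() {
    float h_in[32], h_out = 0.0f;
    for (int i = 0; i < 32; ++i) h_in[i] = 1.0f;  // sum should be 32
    float *d_in, *d_out;
    cudaMalloc(&d_in, sizeof(h_in));
    cudaMalloc(&d_out, sizeof(float));
    cudaMemcpy(d_in, h_in, sizeof(h_in), cudaMemcpyHostToDevice);
    warpReduceSum<<<1, 32>>>(d_in, d_out);  // one full warp
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("warp sum = %f\n", h_out);  // expect 32.0
    cudaFree(d_in); cudaFree(d_out);
    return 0;
}
```

A Vulkan 1.1 shader would express the same reduction with a single subgroupAdd() call, with the compiler free to lower it to the hardware's native cross-lane instructions.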
Also – the standalone installer will apparently clean up after itself better than it used to. Often I can find a few gigabytes of old NVIDIA folders when I’m looking for space to save, so it’s good for NVIDIA to finally address at least some of that.
Pick up the new drivers on NVIDIA’s website or through GeForce Experience.
Subject: Graphics Cards, Processors | April 9, 2018 - 04:25 PM | Ryan Shrout
Tagged: Vega, Polaris, kaby lake-g, Intel, amd
Over the weekend, some interesting information has surfaced surrounding the new Kaby Lake-G hardware from Intel. A product that is officially called the “8th Generation Intel Core Processors with Radeon RX Vega M Graphics” is now looking like it might be more of a Polaris-based GPU than a Vega-based one. This creates an interesting marketing and technology capability discussion for the community, and both Intel and AMD, that is worth diving into.
PCWorld first posted the question this weekend, using some interesting data points as backup that Kaby Lake-G may in fact be based on Polaris. In Gordon’s story he notes that in AIDA64 the GPU is identified as “Polaris 22” while the Raven Ridge-based APUs from AMD show up as “Raven Ridge.” Obviously the device identification of a third party piece of software is a suspect credential in any situation, but the second point provided is more salient: based on the DXDiag information, the GPU on the Hades Canyon NUC powered by Kaby Lake-G does not support DirectX 12.1.
Image source: PCWorld
AMD clearly stated in its launch of the Vega architecture last year that the new GPUs supported DX 12.1, among other features. The fact that the KBL-G part does NOT include support for it is compelling evidence that the GPU might be more similar to Polaris than Vega.
Tom’s Hardware did some more digging, posted this morning, using a SiSoft Sandra test that can measure FP16 and FP32 math performance. For both the Radeon RX Vega 64 and 56 discrete graphics cards, running the test with FP16 math results in a score that is 65% faster than the FP32 results. With a Polaris-based graphics card, an RX 470, the FP32 and FP16 scores were identical, as the architecture can execute FP16 math but doesn’t accelerate it with AMD’s “rapid packed math” feature (which was part of the Vega launch).
Image source: Tom's Hardware
And you guessed it: the Kaby Lake-G part runs essentially even between FP16 and FP32. (Also note that AMD’s Raven Ridge APU, with its integrated Vega graphics, does get a 61% speedup using FP16.)
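To make the packed-math mechanic concrete: packed FP16 stores two 16-bit values in each 32-bit register and operates on both halves in a single instruction, which is why Vega parts can approach double throughput (the ~65% observed here) in FP16 versus FP32. Here is an analogous, minimal sketch of the same idea using CUDA's half2 type, NVIDIA's packed-FP16 path rather than AMD's; it assumes a recent CUDA toolkit and a GPU with native FP16 arithmetic (sm_53 or newer):

```cuda
#include <cstdio>
#include <cuda_fp16.h>

// Packed FP16: one half2 register holds two 16-bit values, and __hadd2
// adds both pairs in a single instruction -- the same "two FP16 ops per
// 32-bit lane" idea behind rapid packed math on Vega.
__global__ void packedAdd(const half2* a, const half2* b, half2* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = __hadd2(a[i], b[i]);  // two FP16 adds per thread
}

int main() {
    const int n = 4;  // four half2 elements = eight FP16 values
    half2 h_a[n], h_b[n], h_c[n];
    for (int i = 0; i < n; ++i) {
        h_a[i] = __floats2half2_rn(1.5f, 2.5f);
        h_b[i] = __floats2half2_rn(0.5f, 0.5f);
    }
    half2 *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, sizeof(h_a));
    cudaMalloc(&d_b, sizeof(h_b));
    cudaMalloc(&d_c, sizeof(h_c));
    cudaMemcpy(d_a, h_a, sizeof(h_a), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, sizeof(h_b), cudaMemcpyHostToDevice);
    packedAdd<<<1, n>>>(d_a, d_b, d_c, n);  // compile with e.g. -arch=sm_60
    cudaMemcpy(h_c, d_c, sizeof(h_c), cudaMemcpyDeviceToHost);
    float2 r = __half22float2(h_c[0]);
    printf("packed result: %.1f, %.1f\n", r.x, r.y);  // expect 2.0, 3.0
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}
```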
What Kaby Lake-G does have that leans toward Vega is support for HBM2 memory (which none of the Polaris cards have) and a “high bandwidth memory cache controller and enhanced compute units with additional ROPs,” according to the statement Intel gave to Tom’s Hardware.
It should be noted that just because the benchmarks and games that support rapid packed math don’t take advantage of that capability today does not mean they won’t be able to after a driver or firmware update. That being said, if that’s the plan, and even if it’s not, Intel should come out and tell consumers and the media.
The debate and accusations of conspiracy are running rampant again today with this news. Is Intel trying to pull one over on us by telling the community that this is a Vega-based product when it is in fact based on Polaris? Why would AMD allow and promote the Vega branding with a part that it knows didn’t meet the standards it created to be called a Vega architecture solution?
Another interesting thought comes when analyzing this debate with the Ryzen 7 2400G and Ryzen 5 2200G products, both of which claim to use Vega GPUs as a portion of the APU. However, without support for HBM2 or the high-bandwidth cache controller, does that somehow shortchange the branding for it? Or are the memory features of the GPU considered secondary to its design?
This is the very reason why companies hate labels, hate specifications, and hate having all of this tracked by a competent and technical media. Basically every company in the tech industry is guilty of this practice: Intel has 2-3 architectures running as “8th Generation” in the market, AMD is selling RX 500 cards that were once RX 400 cards, and NVIDIA has changed performance capabilities of the MX 150 at least once or twice.
The nature of semi-custom chip designs is that they are custom. Are the GPUs used in the PS4 and Xbox One or Xbox One X called Polaris, Vega, or something else? It would be safer for AMD and its partners to give each new product its own name and its own brand, but then enthusiasts would want to know what it was most like and how it compared to Polaris, Vega, and so on. It’s also possible that AMD was only willing to sell this product to Intel if it included some of these feature restrictions. In complicated negotiations like this one surely was, anything is feasible.
These are tough choices for companies to make. AMD loves having the Vega branding in more products, as it gives weight to the development cost and time it spent on the design. Having Vega associated with more high-end consumer products, including those sold by Intel, gives AMD leverage for other products down the road. From Intel’s vantage point, using the Vega brand makes it look like it has the very latest technology in its new processor, and it can benefit from any cross-promotion that occurs around the Vega brand from AMD or its partners.
Unfortunately, it means that the devil is in the details, and the details are something that no one appears to be willing to share. Does it change the performance we saw in our recent Hades Canyon NUC review or our perspective on it as a product? It does not. But as features like rapid packed math grow in adoption, the capability of Kaby Lake-G to utilize them is going to be scrutinized more heavily.
Subject: Graphics Cards | April 2, 2018 - 02:36 PM | Jeremy Hellstrom
Tagged: cryptocurrency, graphics cards
It has been a while since the Hardware Leaderboard has been updated, as it is incredibly depressing to try to price out a new GPU, for obvious reasons. TechSpot has taken an interesting approach to dealing with the crypto-blues: they have benchmarked 44 older GPUs on current games to see how well they fare. The cards range from the GTX 560 and HD 7770 through to current-model cards, all available to purchase used from sites such as eBay. Buying a used card brings the price down to somewhat reasonable levels, though you do run the risk of getting a dead or dying card. With interesting metrics such as price per frame, this is a great resource if you find yourself in desperate need of a GPU in the current market. Check it out here.
"Along with our recent editorials on why it's a bad time to build a gaming PC, we've been revisiting some older GPUs to see how they hold up in today's games. But how do you know how much you should be paying for a secondhand graphics card?"
Here are some more Graphics Card articles from around the web:
- NVIDIA TITAN V @ [H]ard|OCP
- NVIDIA VOLTA's TITAN V DX12 Performance Efficiency @ [H]ard|OCP
- ZOTAC MEK1 Gaming PC (GTX 1070 Ti) @ TechPowerUp
- Revisiting the GeForce GTX 680: Nvidia's 2012 Flagship Graphics Card @ TechSpot
- Windows 10 vs. Ubuntu Linux With Radeon / GeForce GPUs On The Latest 2018 Drivers @ Phoronix
- Sapphire NITRO+ Radeon RX Vega 64 Limited Edition @ Modders-Inc
- Revisiting the Radeon R9 280X / HD 7970 @ TechSpot
Subject: Graphics Cards | March 29, 2018 - 09:52 PM | Scott Michaud
Tagged: nvidia, GTC, gp102, quadro p6000
At GTC 2018, Walt Disney Imagineering unveiled a work-in-progress clip of its upcoming Star Wars: Galaxy’s Edge attraction, which is expected to launch next year at Disneyland and Walt Disney World Resort. The cool part about this ride is that it will be using Unreal Engine 4 with eight GP102-based Quadro P6000 graphics cards. NVIDIA also reports that Disney has donated the code back to Epic Games to help them with their multi-GPU scaling in general – a win for us consumers… in a more limited fashion.
See? SLI doesn’t need to be limited to two cards if you have a market cap of $100 billion USD.
Another interesting angle to this story is how typical PC components are contributing to these large experiences. Sure, Quadro hardware isn’t exactly cheap, but it can be purchased through typical retail channels and it allows the company to focus their engineering time elsewhere.
Ironically, this also comes about two decades after location-based entertainment started to decline… but, you know, it’s Disneyland and Disney World. They’re fine.
Subject: Graphics Cards | March 29, 2018 - 05:45 PM | Tim Verry
Tagged: RX 580, RX 570, RX 560, RX 550, Polaris, mining, asrock, amd
ASRock, a company known mostly for its motherboards (formerly an ASUS sub-brand, and an independent company owned by Pegatron since 2010), is now getting into the graphics card market with a new Phantom Gaming series. At launch, the Phantom Gaming series comprises four AMD Polaris-based graphics cards: the Phantom Gaming RX 550 2G and RX 560 2G on the low end, and the Phantom Gaming X RX 570 8G OC and RX 580 8G OC on the mid/high-end range.
ASRock is using black shrouds with white accents and silver and red logos. The lower end Phantom Gaming cards utilize a single dual ball bearing fan while the Phantom Gaming X cards use a dual fan configuration. ASRock is using copper baseplates paired with aluminum heatsinks and composite heatpipes. The Phantom Gaming RX 550 and RX 560 cards use only PCI-E slot power while the Phantom Gaming X RX 570 and RX 580 cards get power from both the slot and a single 8-pin PCI-E power connector.
Video outputs include one HDMI 2.0, one DisplayPort 1.4, and one DL-DVI-D on the Phantom Gaming parts and one HDMI 2.0, three DisplayPort 1.4, and one DL-DVI-D on the higher-end Phantom Gaming X graphics cards. All of the graphics card models feature both silent and overclocked modes in addition to their out-of-the-box default clocks depending on whether you value performance or noise. Users can select which mode they want or perform a custom overclock or fan curve using ASRock's Phantom Gaming Tweak utility.
On the performance front, out of the box ASRock is slightly overclocking the Phantom Gaming X OC cards (the RX 570 and RX 580 based ones) and slightly underclocking the lower end Phantom Gaming cards (including the memory which is downclocked to 6 GHz) compared to their AMD reference specifications.
|Card||ASRock RX 580 OC||RX 580||ASRock RX 570 OC||RX 570||ASRock RX 560||RX 560||ASRock RX 550||RX 550|
|GPU Clock (MHz)||1380||1340||1280||1244||1149||1275||1100||1183|
|GPU Clock OC Mode (MHz)||1435||-||1331||-||1194||-||1144||-|
|Memory Clock (GHz)||8||8||7||7||6||7||6||7|
|Memory Clock OC Mode (MHz)||8320||-||7280||-||6240||-||6240||-|
The table above shows the comparisons between the ASRock graphics cards and their AMD reference counterparts. Note that the Phantom Gaming RX 560 2G is based on the cut-down 14 CU (compute unit) model rather than the launch 16 CU GPU. Also, even in OC Mode, ASRock does not bring the memory up to the 7 GT/s reference spec. On the positive side, turning on OC Mode does give a decent factory overclock of the GPU over reference. Also nice to see is that on the higher-end "OC Certified" Phantom Gaming X cards, ASRock overclocks both the GPU and memory speeds, which is often not the case with factory overclocks.
ASRock did not detail pricing for any of the launch cards, but they should be coming soon, with 4GB models of the RX 560 and RX 550 to follow later this year.
It is always nice to have more competition in this space, and hopefully a new AIB partner for AMD helps alleviate shortages and demand for gaming cards, if only by a bit. I am curious how well the cards will perform; while they look good on paper, the company is new to graphics cards, and the build quality really needs to be there. I am just hoping that the Phantom Gaming moniker is not an allusion to how hard these cards are going to be to find for gaming! (heh) If the rumored Ethereum ASICs do not kill the demand for AMD GPUs, I do expect that ASRock will also release mining-specific cards at some point.
What are your thoughts on the news of ASRock moving into graphics cards?