Subject: Graphics Cards | August 14, 2018 - 01:08 AM | Jeremy Hellstrom
Tagged: Siggraph, ray tracing, quadro rtx 8000, quadro rtx 5000, nvidia, jensen
Any attempt to describe the visual effects Jensen Huang showed off at his Siggraph keynote is bound to fail, not that this has ever stopped any of us before. If you have seen the short demo movie released earlier this year in cooperation with Epic and ILMxLAB, you have an idea what they can do with ray tracing. However, they pulled a fast one on us by hiding the hardware the demo actually ran on: it was not pre-rendered, but was in fact our first look at their real-time ray tracing. The hardware required for this feat is the brand new RTX series, and the specs are impressive.
The ability to process 10 Gigarays per second means that each and every pixel can be influenced by numerous rays of light, perhaps 100 per pixel in a perfect scenario with clean inputs, or 5-20 in cases where their AI denoiser is required to calculate missing light sources or occlusions, in real time. The card itself functions well as a light source, too. The ability to perform 16 TFLOPS and 16 TIPS means this card is happy doing floating point and integer calculations simultaneously.
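A little back-of-the-envelope math shows where those per-pixel figures come from. This is a rough sketch assuming the quoted 10 Gigarays/s throughput; the resolutions and frame rates are our own illustrative picks, not NVIDIA's numbers:

```python
# Rough ray-budget estimate for a 10 Gigarays/s GPU.
# Resolution and frame-rate figures are illustrative, not from NVIDIA.

RAYS_PER_SECOND = 10e9  # 10 Gigarays/s, per the RTX announcement

def rays_per_pixel(width, height, fps):
    """Average rays available per pixel per frame."""
    pixels_per_frame = width * height
    rays_per_frame = RAYS_PER_SECOND / fps
    return rays_per_frame / pixels_per_frame

# At 1080p/30fps there is headroom for a fairly clean image...
print(f"1080p @ 30 fps: {rays_per_pixel(1920, 1080, 30):.0f} rays/pixel")
# ...while 4K/60fps lands squarely in AI-denoiser territory.
print(f"4K    @ 60 fps: {rays_per_pixel(3840, 2160, 60):.0f} rays/pixel")
```

Push the resolution and refresh rate up and the budget drops fast, which is exactly why the denoiser matters.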
The die itself is significantly larger than the previous generation at 754 mm², and the card will sport a 300W TDP to keep it in line with the PCIe spec; though, if we get the chance, we will run it through the same power tests as the RX 480 to see how well they did. 30W of that total is devoted to the onboard USB controller, which implies support for VirtualLink.
The cards can be used in pairs, utilizing Jensen's chest decoration, more commonly known as an NVLink bridge, and more than one pair can be run in a system, but you will not be able to connect three or more cards directly.
Since a pair will give you up to 96GB of GDDR6 for your processing tasks, it is hard to consider that limiting. The price is rather impressive as well; compared to previous render farms, such as the rather tiny one below, you are looking at a tenth the cost to power your movie with RTX cards. The cards are not limited to proprietary engines or programs either, with the DirectX and Vulkan APIs supported in addition to Pixar's software. NVIDIA's Material Definition Language will be made open source, allowing even broader usage for those who so desire.
You will of course wonder what this means in terms of graphical eye candy, either pre-rendered quickly for your later enjoyment or else in real time if you have the hardware. The image below attempts to show the various features which RTX can easily handle. Mirrored surfaces can be emulated with multiple reflections accurately represented, again handled on the fly instead of being preset, so soon you will be able to see around corners.
It also introduces a new type of anti-aliasing called DLAA, and there are no prizes for guessing what the DL stands for. DLAA works by running an already anti-aliased image through a network trained to provide even better edge smoothing, though at a processing cost. As with most other features on these cards, it is not the complexity of the scene which has the biggest impact on calculation time but rather the number of pixels, as each pixel has numerous rays associated with it.
This also makes for significantly faster processing than Pascal; not the small evolutionary change we have become accustomed to, but more of a revolutionary one.
In addition to effects in movies and other video there is another possible use for Turing based chips which might appeal to the gamer, if the architecture reaches the mainstream. With the ability to render existing sources with added ray tracing and de-noising features it might be possible for an enterprising soul to take an old game and remaster it in a way never before possible. Perhaps one day people who try to replay the original System Shock or Deus Ex will make it past the first few hours before the graphical deficiencies overwhelm their senses.
We expect to see more from NVIDIA tomorrow so stay tuned.
Subject: Graphics Cards | August 7, 2018 - 03:24 PM | Jeremy Hellstrom
Tagged: amd, RX 570, RX 580, msi, MECH 2 OC, factory overclocked
MSI have released two new Polaris cards, the MECH 2 versions of the RX 570 and RX 580. The cards come factory overclocked, and Guru of 3D were able to push the clocks higher using Afterburner, with noticeable improvements in performance. For those more interested in quiet operation, the tests show these two to be some of the least noisy cards on the market, with the 570 hitting ~34 dBA under full load and the 580 producing ~38 dBA. Check out the full review, and remember that picking one of these up qualifies you for three free games!
"Join us as we review the MSI Radeon RX 570 and 580 MECH 2 OC with 8GB graphics memory. This all-new two slot cooled mainstream graphics card series will allow you to play your games in both the Full HD 1080P as well as gaming in WQHD (2560x1440) domain. The new MECH 2 series come with revamped looks and cooling."
Here are some more Graphics Card articles from around the web:
- MSI Radeon RX 580 Mech 2 8 GB @ TechPowerUp
- NVIDIA GPU Generational Performance Part 1 @ [H]ard|OCP
- NVIDIA GPU Generational Performance Part 2 @ [H]ard|OCP
- AMD’s “fine wine” revisited – the Fury X vs. the GTX 980 Ti @ BabelTechReviews
- GTX 1060 6GB vs the RX 580 8GB vs the GTX 980 4GB revisited @ BabelTechReviews
- GeForce GTX 1060 3GB vs. Radeon RX 570 4GB: 2018 Update @ Techspot
- XFX RX 570 RS 8GB XXX Edition @ OCC
- The GTX 1070 versus the GTX 980 Ti @ BabelTechReviews
Subject: Graphics Cards, Processors | August 3, 2018 - 04:41 PM | Ryan Shrout
Tagged: Zen, Vega, SoC, ryzen, China, APU, amd
Continuing down the path with its semi-custom design division, AMD today announced a partnership with Chinese company Zhongshan Subor to design and build a new chip to be utilized for both a Chinese gaming PC and Chinese gaming console.
The chip itself will include a quad-core integration of the Zen processor supporting 8 threads at a clock speed of 3.0 GHz; no Turbo or XFR is included. The graphics portion is built around a Vega GPU with 24 Compute Units running at 1.3 GHz. Each CU has 64 stream processors, giving the “Fenghuang” chip a total of 1536 SPs. That is the same size GPU used in the Kaby Lake-G Vega M GH part, but with a higher clock speed.
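Those GPU numbers can be cross-checked with quick arithmetic. A minimal sketch; the 2-ops-per-clock FMA factor is the standard assumption for peak FP32 throughput, not a figure AMD quoted here:

```python
# Back-of-the-envelope specs for the "Fenghuang" SoC's Vega GPU.

compute_units = 24
sp_per_cu = 64          # stream processors per Vega CU
gpu_clock_ghz = 1.3

stream_processors = compute_units * sp_per_cu
print(stream_processors)  # 1536, matching the article

# Peak FP32 throughput, assuming the usual 2 ops/clock (fused multiply-add)
tflops = stream_processors * 2 * gpu_clock_ghz / 1000
print(f"{tflops:.2f} TFLOPS")  # roughly 4 TFLOPS peak single precision
```

That puts it in the neighborhood of an RX 570 on paper, before memory bandwidth enters the picture.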
The memory system is also interesting as Zhongshan Subor has integrated 8GB of GDDR5 on a single package. (Update: AMD has clarified that this is a GDDR5 memory controller on package, and the memory itself is on the mainboard. Much more sensible.) This is different than how Intel integrated basically the same product from AMD as it utilized HBM2 memory. As far as I can see, this is the first time that an AMD-built SoC has utilized GDDR memory for both the GPU and CPU outside of the designs used for Microsoft and Sony.
This custom built product will still support AMD and Radeon-specific features like FreeSync, the Radeon Software suite, and next-gen architecture features like Rapid Packed Math. It is being built at GlobalFoundries.
Though there are differences in the apparent specs from the leaks that showed up online earlier in the year, they are pretty close. That earlier story claimed the custom SoC would include a 28 CU GPU and HBM2. Perhaps there is another chip design for a different customer pending, or, more likely, there were competing integrations and the announced version won out due to cost efficiency.
Zhongshan Subor is a Chinese holding company that owns everything from retail stores to an education technology business. You might have heard its name in association with a glut of Super Famicom clones years back. I don’t expect this new console to have anywhere near the reach of an Xbox or PlayStation, but with the size of the Chinese market, anything is possible if the content portfolio is there.
It is interesting that despite the aggressiveness of both Microsoft and Sony in the console space in regards to hardware upgrades this generation, this Chinese design will be the first to ship with a Zen-based APU, though it will lag behind the graphics performance of the Xbox One X (and probably PS4 Pro). Don’t be surprised if both major console players integrate a similar style of APU design with their next-generation products, pairing Zen with Vega.
Revenue for AMD from this arrangement is hard to predict, but it does get an upfront fee from any semi-custom chip customer for the design and validation of the product. There is no commitment for a minimum chip purchase, so AMD will see extended income only if the console and PC built around the APU succeed.
Enthusiasts and PC builders have already started questioning whether this is the type of product that might make its way to the consumer. The truth is that the market for a high-performance, fully-integrated SoC like this is quite small, with DIY and SI (system integrator) markets preferring discrete components most of the time. If we remove the GDDR5 integration, which is one of the key specs that makes the “Fenghuang” chip so interesting and expensive, I’d bet the 24 CU GPU would be choked by standard DDR4/5 DRAM. For now, don’t hold out hope that AMD takes the engineering work of this Chinese gaming product and applies it to the general consumer market.
Subject: Graphics Cards | July 30, 2018 - 03:32 PM | Ken Addison
Tagged: nvidia, geforce, gaming celebration, gamescom, cologne
Earlier today, NVIDIA announced the GeForce Gaming Celebration, taking place August 20th-21st, in Cologne, Germany.
NVIDIA promises that this open to the public event taking place before the Gamescom convention "will be loaded with new, exclusive, hands-on demos of the hottest upcoming games, stage presentations from the world’s biggest game developers, and some spectacular surprises."
For any readers who might be in the area and interested in attending, first-come, first-served registration can be found here. For readers outside of the area, the event will also be live streamed.
PC Perspective will be attending the event, so stay tuned for more news and details! We can't possibly imagine what NVIDIA could be getting ready to announce.
Subject: Graphics Cards | July 22, 2018 - 03:10 PM | Scott Michaud
Tagged: nvidia, gtx 1170, geforce
Take these numbers with a grain of salt, but WCCFTech has published what it claims are leaked GeForce GTX 1170 benchmarks, found “on Polish hardware forums”. If true, the results show that the graphics card, which would sit below the GTX 1180 in performance, still lands above the enthusiast-tier GTX 1080 Ti (at least on 3DMark FireStrike). They also suggest that both the GPU core and the 16GB of memory are running at ~2.5 GHz.
Image Credit: “Polish Hardware Forums” via WCCFTech
So not only would the GTX 1180 be above the GTX 1080 Ti… but the GTX 1170 apparently is too? Also… 16GB on the second-tier card? Yikes.
Beyond the raw performance, new architectures also give NVIDIA the chance to add new features directly to the silicon. That said, FireStrike is an old-enough benchmark that it won’t take advantage of tweaks for new features, like NVIDIA RTX, so those should be above-and-beyond the increase seen in the score.
Don’t trust every screenshot you see…
Again, that is all if this is true. The source is a picture of a computer monitor, which raises the question, “Why didn’t they just take a screenshot?” Beyond that, it’s easy to make a website say whatever you want with the F12 developer tools of any mainstream web browser these days… as I’ve demonstrated in the image above.
Subject: Graphics Cards | July 17, 2018 - 12:38 PM | Ken Addison
Tagged: VR, VirtualLink, valve, usb 3.1, Type-C, Oculus, nvidia, microsoft, DisplayPort, amd
Today, NVIDIA, Oculus, Valve, AMD, and Microsoft, members of the VirtualLink consortium, have announced the VirtualLink standard, which aims to unify physically connecting Virtual Reality headsets to devices.
Based upon the physical USB Type-C connector, VirtualLink combines the bandwidth of DisplayPort 1.4 (32.4 Gbit/s) with a USB 3.1 data connection and the ability to deliver up to 27W of power.
VirtualLink aims to simplify the setup of current VR headsets
Given the current "Medusa-like" nature of VR headsets with multiple cables needing to feed video, audio, data, and power to the headset, simplifying to a single cable should provide a measurable benefit to the VR experience. In addition, having a single, unified connector could provide an easier method for third parties to provide wireless solutions, like the current TPCast device.
VirtualLink is an open standard, and the initial specifications can currently be found on the consortium website.
Subject: Graphics Cards | July 11, 2018 - 05:25 PM | Jeremy Hellstrom
Tagged: RX VEGA 64, amd, undervolting, killing floor 2, wolfenstein 2: The New Colossus, Middle-earth: Shadow of War
You may have stumbled across threads on the wild web created by AMD enthusiasts who have been undervolting their Vega cards and bragging about it. This will seem counterintuitive to overclockers, who regularly increase the voltage their GPU will accept in order to raise the frequencies on those cards. There is a method to this madness, and it is not simply that they are looking to save on power bills. Overclockers Club investigates the methods used and the effect they have on Vega 64 performance in several modern titles in their latest GPU review.
"Across all three games we saw a noticeable drop in power use when undervolting and not limiting the frame rate, or using a high limit. This reduction in power use is important as it improves the efficiency of the RX Vega 64 and it allows increased clock speeds with the reduction of thermal throttling."
Here are some more Graphics Card articles from around the web:
- ASRock Phantom Gaming X Radeon RX580 8G OC @ Guru of 3D
- ASRock Phantom Gaming X RX 580 @ Kitguru
- GeForce GTX 1050 3GB @ Guru of 3D
- GeForce GT 1030: The DDR4 Abomination Benchmarked @ Techspot
- Workstation GPU Performance Testing: Redshift, Blender & MAGIX Vegas @ Techgage
Subject: Graphics Cards | July 2, 2018 - 01:08 PM | Ken Addison
Tagged: vive pro, steamvr, oculus rift, Oculus, htc
Although the HTC Vive Pro has been available in headset-only form as an upgrade for existing Vive owners for several months, there has been no full solution for customers looking to enter the ecosystem from scratch.
Today, HTC announced immediate availability for their full VIVE Pro kit featuring Steam VR 2.0 Base Stations and the latest revision of the HTC Vive Controllers.
For those who need a refresher, the HTC Vive Pro improves upon the original Vive VR headset with a 78% higher resolution (2880x1600, up from 2160x1200), as well as a built-in deluxe audio strap.
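That 78% figure refers to total pixel count across both eyes, which is easy to verify (the original Vive's 2880x1600-versus-2160x1200 comparison uses HTC's published combined resolutions):

```python
# Pixel-count comparison: HTC Vive Pro vs. original Vive.
# Resolutions are the combined (both-eyes) figures from HTC's spec sheets.

vive_pixels = 2160 * 1200      # original Vive
vive_pro_pixels = 2880 * 1600  # Vive Pro

increase = vive_pro_pixels / vive_pixels - 1
print(f"{increase:.0%}")  # 78%
```

So it is 78% more pixels, not 78% more pixels per axis, which is the usual way these marketing figures are counted.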
New with the HTC Vive Pro Full Kit, the Steam VR 2.0 Base Station trackers allow users to add up to 4 base stations (previously limited to 2), for a wider area up to 10x10 meters (32x32 feet), as well as improved positional tracking.
It's worth noting that this kit does not include the upcoming next generation of Steam VR controllers codenamed "Knuckles," which likely won't be available until 2019.
Given the steep asking price and "Pro" moniker, it remains clear here that HTC is only attempting to target very high-end gaming enthusiasts and professionals with this headset, rather than the more general audience the original Vive targets. As of now, it's expected that the original VIVE will continue to be available as a lower cost alternative.
Subject: Graphics Cards | June 26, 2018 - 10:01 PM | Scott Michaud
Tagged: nvidia, graphics drivers, geforce
NVIDIA aligns their graphics driver releases with game launches, and today’s 398.36 is for Ubisoft’s The Crew 2. The game comes out on Friday, but the graphics vendors like to give a little room if possible (and a Friday makes that much easier than a Tuesday). NVIDIA is also running a bundle deal – you get The Crew 2 Standard Edition free when you purchase a qualifying GTX 1080, GTX 1080 Ti, GeForce gaming desktop, or GeForce gaming laptop. Personally, I would wait for new graphics cards to launch, but if you need one now then – hey – free game!
Now onto the driver itself.
GeForce 398.36 is actually from the 396.xx branch, which means that it’s functionally similar to the previous drivers. NVIDIA seems to release big changes with the start of an even-numbered branch, such as new API support, and then spend the rest of the release, and its odd-numbered successor, fixing bugs and adding game-specific optimizations. While this means that there shouldn’t be anything surprising, it also means that it should be stable and polished.
This brings us to the bug fixes.
If you were waiting for the blue-screen issue with Gears of War 4 to be fixed on Pascal GPUs, then grab your chainsaws; it should be good to go. Likewise, if you had issues with G-SYNC causing stutter outside of G-SYNC games, such as on the desktop, then that has apparently been fixed, too.
When you get around to it, the new driver is available on GeForce Experience and NVIDIA’s site.
A long time coming
To say that the ASUS ROG Swift PG27UQ has been a long time coming is a bit of an understatement. In a computer hardware world where we are generally lucky to know about a product six months ahead of time, the PG27UQ has been around in some form or another for at least 18 months.
Originally demonstrated at CES 2017, the ASUS ROG Swift PG27UQ debuted alongside the Acer Predator X27 as the world's first G-SYNC displays supporting HDR. With promised brightness levels of 1000 nits, G-SYNC HDR was a surprising and aggressive announcement considering that HDR was just starting to pick up steam on TVs, and was unheard of for PC monitors. On top of the HDR support, these monitors were the first announced displays sporting a 144Hz refresh rate at 4K, due to their DisplayPort 1.4 connections.
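The DisplayPort 1.4 connection is also the constraint that makes 4K at 144Hz such a squeeze. A minimal sketch of the link-budget math (DP 1.4's HBR3 rate and 8b/10b encoding overhead are from the DisplayPort standard; blanking-interval overhead is ignored for simplicity):

```python
# Why 4K/144Hz HDR pushes DisplayPort 1.4 to its limits.
# DP 1.4 (HBR3, 4 lanes): 32.4 Gbit/s raw, ~25.92 Gbit/s after 8b/10b encoding.
# Blanking-interval overhead is ignored here for simplicity.

DP14_DATA_GBPS = 25.92

def pixel_rate_gbps(width, height, refresh_hz, bits_per_pixel):
    """Raw pixel-data rate for a given mode, in Gbit/s."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

sdr = pixel_rate_gbps(3840, 2160, 144, 24)  # 8-bit RGB
hdr = pixel_rate_gbps(3840, 2160, 144, 30)  # 10-bit RGB for HDR

print(f"4K/144 8-bit:  {sdr:.1f} Gbit/s")   # already over the ~25.9 available
print(f"4K/144 10-bit: {hdr:.1f} Gbit/s")   # well over
```

Both modes exceed what the link can carry, which is why monitors in this class have to compromise somewhere (lower refresh rate or chroma subsampling) to fit the cable.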
However, delays led to the PG27UQ being displayed yet again at CES this year, with a promised release date of Q1 2018. Even more slippage led us to today, where the ASUS PG27UQ is available for pre-order for a staggering $2,000 and set to ship at some point this month.
In some ways, the launch of the PG27UQ very much mirrors the launch of the original G-SYNC display, the ROG Swift PG278Q. Both displays represented the launch of a long-awaited technology in a 27" form factor, and both were seen as extremely expensive at their time of release.
Finally, we have our hands on a production model of the ASUS PG27UQ, the first monitor to support G-SYNC HDR, as well as 144Hz refresh rate at 4K. Can a PC monitor really be worth a $2,000 price tag?