Subject: Graphics Cards | August 21, 2018 - 08:43 PM | Scott Michaud
Tagged: nvidia, Volta, turing, tu102, gv100
In the past, when NVIDIA launched a new GPU architecture, they would make a few designs, one for each of their market segments. Every SKU would use one of those chips, with varying amounts of it disabled or re-clocked to hit multiple price points. The mainstream enthusiast (GTX -70/-80) chip of each generation is typically around 300mm2, and the high-end enthusiast (Titan / -80 Ti) chip is often around 600mm2.
Kepler used quite a bit of that die space for FP64 calculations, but consumer versions of Pascal did not. Instead, only GP100 supported the 1:2:4 FP64:FP32:FP16 performance ratio. This is great for the compute community, such as scientific researchers, but games focus on FP32. Shortly thereafter, NVIDIA released GP102, which had the same number of FP32 cores (3840) as GP100 but with much-reduced 64-bit performance… and much-reduced die area. GP100 was 610mm2, but GP102 was just 471mm2.
At this point, I was thinking that NVIDIA was pulling scientific computing chips away from the common user to increase the value of their Tesla parts. There was no reason to make a cheap 6XXmm2 card available to the public when a 471mm2 part could take the performance crown, so why not reap extra dies from each wafer (and be able to clock them higher because of better binning)?
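The die-harvesting argument can be sanity-checked with a quick back-of-the-envelope estimate. This is only a sketch: it uses the common circle-packing approximation for gross dies on a 300mm wafer and ignores defect yield, scribe lines, and reticle constraints, all of which favor the smaller die even further.

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """First-order estimate of gross (candidate) dies per wafer.

    Standard approximation: wafer area divided by die area, minus an
    edge-loss term proportional to the wafer circumference. Ignores
    defect yield and scribe-line overhead.
    """
    radius = wafer_diameter_mm / 2
    wafer_area = math.pi * radius ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

gp100 = gross_dies_per_wafer(610)  # GP100-sized die
gp102 = gross_dies_per_wafer(471)  # GP102-sized die
print(gp100, gp102)  # roughly 88 vs 119 candidate dies
```

So shrinking from 610mm2 to 471mm2 yields roughly a third more candidate dies per wafer before the yield benefit of a smaller die is even counted.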
And then Volta came out. And it was massive (815mm2).
At this point, you really cannot manufacture a larger integrated circuit; 815mm2 is at the reticle limit of what TSMC (and other fabs) can focus onto your silicon. Again, it’s a 1:2:4 FP64:FP32:FP16 ratio. Again, there is no consumer version in sight. Again, it looked as if NVIDIA was going to fragment their market and leave consumers behind.
And then Turing was announced. Apparently, NVIDIA still plans on making big chips for consumers… just not with 64-bit performance. The big draw of this 754mm2 chip is its dedicated hardware for raytracing. We knew this technology was coming, and we knew that the next generation would have hardware to make it useful. I figured that meant consumer Volta, and that NVIDIA had somehow found a way to use Tensor cores to cast rays. Apparently not… but don’t worry, Turing has Tensor cores too; they’re just for machine-learning gaming applications. Those are above and beyond the raytracing ASICs, and the CUDA cores, and the ROPs, and the texture units, and so forth.
But, raytracing hype aside, let’s think about the product stack:
- NVIDIA now has two ~800mm2-ish chips… and
- They serve two completely different markets.
In fact, I cannot see either FP64 or raytracing going anywhere any time soon. As such, it’s my assumption that NVIDIA will maintain two different architectures of GPUs going forward. The only way that I can see this changing is if they figure out a multi-die solution, because neither design can get any bigger. And even then, what workload would it even perform? (Moment of silence for 10km x 10km video game maps.)
What do you think? Will NVIDIA keep two architectures going forward? If not, how will they serve all of their customers?
Subject: Graphics Cards | August 20, 2018 - 03:08 PM | Tim Verry
Tagged: turing, RTX 2080 Ti, RTX 2080, nvidia, geforce, asus
Following Jensen Huang's reveal of the RTX family of Turing-based graphics cards, Asus announced that it will have graphics cards from its ROG Strix, Dual, and Turbo product lines available in mid-September. The new graphics cards will be based around the NVIDIA GeForce RTX 2080 Ti and GeForce RTX 2080 GPUs.
According to Asus, their new Turing-based graphics cards will be built using their Auto-Extreme technology and with redesigned coolers to increase card-to-card product consistency and cooling efficiency. The triple-fan ROG Strix and dual-fan Dual series cards use a new 2.7-slot design that results in 20% and 50% increases (respectively) in cooling array surface area versus their 1000-series predecessors. The ROG Strix cards use Axial fans that reportedly offer better airflow and IP5X dust resistance, while the Dual series cards use Wing Blade fans that also offer dust resistance and are allegedly quieter while pushing more air. Meanwhile, the Turbo series uses a redesigned blower-style cooler with an 80mm dual ball bearing fan and a new shroud that allows for more airflow even in small cases or when cards are sandwiched together in a multi-GPU setup.
The ROG Strix RTX 2080 Ti and RTX 2080 cards will have one USB Type-C (VirtualLink), two HDMI 2.0b, and two DisplayPort 1.4a outputs. The Dual RTX 2080 Ti and RTX 2080 cards will have one USB Type-C, one HDMI 2.0b, and three DisplayPort 1.4 outputs. Finally, the Turbo series RTX 2080 Ti and RTX 2080 cards will have one USB Type-C, one HDMI 2.0b, and two DisplayPort 1.4 ports.
|   | RTX 2080 Ti | RTX 2080 |
|---|---|---|
| Base Clock | 1350 MHz (Turbo model) | 1515 MHz (Turbo model) |
| Boost Clock | 1545 MHz (Turbo model) | 1710 MHz (Turbo model) |
| Ray Tracing Speed | 10 GRays/s | 8 GRays/s |
| Memory Clock | 14000 MHz | 14000 MHz |
| Memory Interface | 352-bit G6 | 256-bit G6 |
| Memory Bandwidth | 616 GB/s | 448 GB/s |
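The memory bandwidth figures in the table above follow directly from the bus width and the effective data rate (the 14000 MHz memory clock is the effective GDDR6 rate, i.e. 14 Gbps per pin). A quick sketch of the arithmetic:

```python
def memory_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: (bus width / 8 bits per byte)
    bytes transferred at the effective per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(memory_bandwidth_gbs(352, 14))  # RTX 2080 Ti -> 616.0 GB/s
print(memory_bandwidth_gbs(256, 14))  # RTX 2080    -> 448.0 GB/s
```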
Exact specifications are still unknown, though Asus did reveal clockspeeds for the Turbo models, which are listed above. The clockspeeds for the Dual and ROG Strix cards should be quite a bit higher than those thanks to the much beefier coolers, and the OC Editions in particular should be clocked higher than reference specs.
Asus did not disclose exact MSRP pricing, but it did state that several models will be available for pre-order starting today and will be officially available in the middle of September. It appears that a couple of RTX 2080 Ti and RTX 2080 cards have already appeared on Newegg, but not all of them have shown up yet. The models slated to be available for preorder include the Dual GeForce RTX 2080 Ti OC Edition, Turbo RTX 2080 Ti, ROG Strix GeForce RTX 2080 OC Edition, and the Dual RTX 2080 OC Edition.
- NVIDIA Announces GeForce RTX 2070, RTX 2080 and RTX 2080 Ti at Gamescom 2018
- Newegg Lists GeForce RTX 2080 and 2080 Ti Graphics Cards Before Announcement
- NVIDIA Announcement Live Stream at 12:00 PM Eastern Today
- NVIDIA Officially Announces Turing GPU Architecture at SIGGRAPH 2018
- Real time ray tracing in still life
Subject: Graphics Cards | August 20, 2018 - 01:58 PM | Ken Addison
Tagged: turing, tensor cores, rtx 2080ti, RTX 2080, RTX 2070, rtx, rt cores, ray tracing, quadro, preorder, nvidia, gtx, geforce
*Update* NVIDIA's pre-order page is now live, as well as info on the RTX 2070! Details below.
*Update 2* Post-Founders Edition pricing comes in a bit lower than the Founders Edition pricing noted above.
*End updates*
Just like we saw with the Quadro RTX lineup, NVIDIA is designating these gaming-oriented graphics cards with the RTX brand to emphasize their capabilities with ray tracing.
Through the combination of dedicated Ray Tracing (RT) cores and Tensor cores for AI-powered denoising, NVIDIA is claiming these RTX GPUs are capable of high enough ray tracing performance to be used in real time in games, as shown by demos of titles such as Battlefield V, Shadow of the Tomb Raider, and Metro: Exodus.
Not every GPU in NVIDIA's lineup will be capable of this real-time ray tracing performance, with those lower tier cards retaining the traditional GTX branding.
Here are the specifications as we know them so far compared to the Quadro RTX cards, as well as the previous generation GeForce cards, and the top offering from AMD.
|   | RTX 2080 Ti | Quadro RTX 6000 | GTX 1080 Ti | RTX 2080 | Quadro RTX 5000 | GTX 1080 | TITAN V | RX Vega 64 (Air) | RTX 2070 |
|---|---|---|---|---|---|---|---|---|---|
| Base Clock | 1350 MHz | ? | 1408 MHz | 1515 MHz | ? | 1607 MHz | 1200 MHz | 1247 MHz | 1410 MHz |
| Boost Clock | 1545 MHz | ? | 1582 MHz | 1710 MHz | ? | 1733 MHz | 1455 MHz | 1546 MHz | 1620 MHz |
| Ray Tracing Speed | 10 GRays/s | 10 GRays/s | -- | 8 GRays/s | 6? GRays/s | -- | -- | -- | 6 GRays/s |
| Memory Clock | 14000 MHz | 14000 MHz | 11000 MHz | 14000 MHz | 14000 MHz | 10000 MHz | 1700 MHz | 1890 MHz | 14000 MHz |
| Memory Interface | 352-bit G6 | 384-bit G6 | 352-bit G5X | 256-bit G6 | 256-bit G6 | 256-bit G5X | 3072-bit HBM2 | 2048-bit HBM2 | 256-bit G6 |
| Memory Bandwidth | 616 GB/s | 672 GB/s | 484 GB/s | 448 GB/s | 448 GB/s | 320 GB/s | 653 GB/s | 484 GB/s | 448 GB/s |
| TDP | ? | 300 W | 250 W | 215 W | ? | 180 W | 250 W | 295 W | ? |
| Peak Compute | ? | ? | 10.6 TFLOPS | ? | ? | 8.2 TFLOPS | 14.9 TFLOPS | 13.7 TFLOPS | ? |
| Transistor Count | ? | ? | 12.0 B | ? | ? | 7.2 B | 21.0 B | 12.5 B | ? |
We hope to fill out the rest of the information on these GPUs in the coming days at subsequent press briefings during Gamescom.
One big change to the RTX lineup is NVIDIA's revised Founders Edition cards. Instead of the blower-style cooler that we've seen on every other NVIDIA reference design, the Founders Edition RTX cards move to a dual-axial fan setup, similar to past third-party designs.
These new GPUs do not come cheaply, however, with an increased MSRP across the entire lineup when compared to the 1000-series cards. The RTX 2080 Ti's MSRP of $1200 is an increase of $500 over the previous generation GTX 1080 Ti, while the RTX 2080 sports a $200 increase over the GTX 1080. These prices should come down after the Founders Edition pricing wave passes (the same happened with the GTX 10-series launches).
Both the Founders Edition cards from NVIDIA, as well as third-party designs from partners such as EVGA and ASUS, are available for preorder from retailers including Amazon and Newegg starting today and are set to ship on August 27th.
Subject: Graphics Cards | August 20, 2018 - 12:15 PM | Sebastian Peak
Tagged: video card, RTX 2080 Ti, RTX 2080, nvidia, newegg, graphics, gpu, geforce
Newegg has listed NVIDIA GeForce RTX cards ahead of a probable announcement at today's "BeForTheGame" event in Germany, apparently confirming the rumors about the existence of these two GPUs. Both RTX 2080 and RTX 2080 Ti cards are featured on this Newegg promo page:
Clearly this went live a bit early (none of the linked RTX products bring up a valid page yet) as NVIDIA's announcement has yet to take place, though live coverage continues on NVIDIA's Twitch channel now.
Subject: Graphics Cards | August 20, 2018 - 11:30 AM | Sebastian Peak
Tagged: video card, nvidia, live stream, graphics, gpu, announcement
The wait (and endless speculation) is nearly over, as NVIDIA will be hosting their "BeForTheGame" event with probable product announcements at noon Eastern today, and this will be streamed live on the company's Twitch channel.
You can watch the event right here:
Will there be new GeForce cards? Is it GTX or RTX? Were the rumors true or totally off-base? There is only one way to find out! (And of course we will cover any news stories emerging from this event, so stay tuned!)
Subject: Graphics Cards | August 17, 2018 - 02:59 PM | Sebastian Peak
Tagged: VideoCardz, video card, rumor, RTX 2080 Ti, RTX 2080, report, pcb, nvidia, leak, graphics, gpu
The staff at VideoCardz.com have been very busy of late, posting various articles on rumored NVIDIA graphics cards expected to be revealed this month. Today in particular we are seeing more (and more) information and imagery concerning what seems assured to be RTX 2080 branding, and somewhat surprising is the rumor that the RTX 2080 Ti will launch simultaneously (with a reported 4352 CUDA cores, no less).
Reported images of MSI GAMING X TRIO variants of RTX 2080/2080 Ti (via VideoCardz)
From the reported product images one thing in particular stands out, as memory for each card appears unchanged from current GTX 1080 and 1080 Ti cards, at 8GB and 11GB, respectively (though a move to GDDR6 from GDDR5X has also been rumored/reported).
Even (reported) PCB images are online, with this TU104-400-A1 quality sample pictured on Chiphell via VideoCardz.com:
The TU104-400-A1 pictured is presumed to be the RTX 2080 GPU (Chiphell via VideoCardz)
Other product images from AIB partners (PALIT and Gigabyte) were recently posted over at VideoCardz.com if you care to take a look, and as we near a likely announcement it looks like the (reported) leaks will keep on coming.
Subject: Graphics Cards | August 14, 2018 - 01:08 AM | Jeremy Hellstrom
Tagged: Siggraph, ray tracing, quadro rtx 8000, quadro rtx 5000, nvidia, jensen
Any attempt to describe the visual effects Jensen Huang showed off at his SIGGRAPH keynote is bound to fail, not that this has ever stopped any of us before. If you have seen the short demo movie NVIDIA released earlier this year in cooperation with Epic and ILMxLAB, you have an idea of what they can do with ray tracing. However, they pulled a fast one on us: they hid the hardware it was actually shown on, because it was not pre-rendered but was in fact our first look at their real-time ray tracing. The hardware required for this feat is the brand new RTX series, and the specs are impressive.
The ability to process 10 Gigarays per second means that each and every pixel can be influenced by numerous rays of light, perhaps 100 per pixel in a perfect scenario with clean inputs, or 5-20 in cases where their AI denoiser is required to calculate missing light sources or occlusions, in real time. The card itself also functions well as a light source. The ability to perform 16 TFLOPS and 16 TIPS means this card is happy doing both floating point and integer calculations simultaneously.
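Those per-pixel ray counts can be sanity-checked against the headline 10 Gigarays/s figure. A rough sketch, assuming the whole ray budget is spent evenly on visible pixels at a steady frame rate:

```python
def rays_per_pixel(gigarays_per_s, width, height, fps):
    """Average ray budget per pixel per frame, given a total ray rate."""
    pixels_per_second = width * height * fps
    return gigarays_per_s * 1e9 / pixels_per_second

# 10 GRays/s at 1080p60 leaves roughly 80 rays per pixel per frame,
# in the same ballpark as the "perhaps 100 per pixel" best case above.
print(rays_per_pixel(10, 1920, 1080, 60))
```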
The die itself is significantly larger than the previous generation at 754mm2, and will sport a 300W TDP to keep it in line with the PCIe spec; though we will run it through the same tests as the RX 480 to see how well they did, if we get the chance. 30W of the total power is devoted to the onboard USB controller, which implies support for VirtualLink.
The cards can be used in pairs, utilizing Jensen's chest decoration, more commonly known as an NVLink bridge, and more than one pair can be run in a system, but you will not be able to connect three or more cards directly.
As that gives you up to 96GB of GDDR6 for your processing tasks, it is hard to consider that limiting. The price is rather impressive as well; compared to previous render farms, such as the rather tiny one below, you are looking at a tenth of the cost to power your movie with RTX cards. The cards are not limited to proprietary engines or programs either, with the DirectX and Vulkan APIs being supported in addition to Pixar's software. NVIDIA's Material Definition Language will be made open source, allowing for even broader usage for those who so desire.
You will of course wonder what this means in terms of graphical eye candy, either pre-rendered quickly for your later enjoyment or else in real time if you have the hardware. The image below attempts to show the various features which RTX can easily handle. Mirrored surfaces can be emulated with multiple reflections accurately represented, again handled on the fly instead of being preset, so soon you will be able to see around corners.
It also introduces a new type of anti-aliasing called DLAA, and there is no prize for guessing what the DL stands for. DLAA works by taking an already anti-aliased image and training itself to provide even better edge smoothing, though at a processing cost. As with most other features on these cards, it is not the complexity of the scene which has the biggest impact on calculation time but rather the number of pixels, as each pixel has numerous rays associated with it.
This new hardware also allows significantly faster processing than Pascal; not the small evolutionary changes we have become accustomed to, but more of a revolutionary change.
In addition to effects in movies and other video there is another possible use for Turing based chips which might appeal to the gamer, if the architecture reaches the mainstream. With the ability to render existing sources with added ray tracing and de-noising features it might be possible for an enterprising soul to take an old game and remaster it in a way never before possible. Perhaps one day people who try to replay the original System Shock or Deus Ex will make it past the first few hours before the graphical deficiencies overwhelm their senses.
We expect to see more from NVIDIA tomorrow so stay tuned.
Subject: Graphics Cards | August 7, 2018 - 03:24 PM | Jeremy Hellstrom
Tagged: amd, RX 570, RX 580, msi, MECH 2 OC, factory overclocked
MSI have released two new Polaris cards, the MECH 2 versions of the RX 570 and 580. The cards come factory overclocked, and Guru3D were able to push the clocks higher using Afterburner, with noticeable improvements in performance. For those more interested in quiet performance, the tests show these two to be some of the least noisy on the market, with the 570 hitting ~34 dBA under full load and the 580 producing ~38 dBA. Check out the full review and remember that picking one of these up qualifies you for three free games!
"Join us as we review the MSI Radeon RX 570 and 580 MECH 2 OC with 8GB graphics memory. This all-new two slot cooled mainstream graphics card series will allow you to play your games in both the Full HD 1080P as well as gaming in WQHD (2560x1440) domain. The new MECH 2 series come with revamped looks and cooling."
Here are some more Graphics Card articles from around the web:
- MSI Radeon RX 580 Mech 2 8 GB @ TechPowerUp
- NVIDIA GPU Generational Performance Part 1 @ [H]ard|OCP
- NVIDIA GPU Generational Performance Part 2 @ [H]ard|OCP
- AMD’s “fine wine” revisited – the Fury X vs. the GTX 980 Ti @ BabelTechReviews
- GTX 1060 6GB vs the RX 580 8GB vs the GTX 980 4GB revisited @ BabelTechReviews
- GeForce GTX 1060 3GB vs. Radeon RX 570 4GB: 2018 Update @ Techspot
- XFX RX 570 RS 8GB XXX Edition @ OCC
- The GTX 1070 versus the GTX 980 Ti @ BabelTechReviews
Subject: Graphics Cards, Processors | August 3, 2018 - 04:41 PM | Ryan Shrout
Tagged: Zen, Vega, SoC, ryzen, China, APU, amd
Continuing down the path with its semi-custom design division, AMD today announced a partnership with Chinese company Zhongshan Subor to design and build a new chip to be utilized for both a Chinese gaming PC and Chinese gaming console.
The chip itself will include a quad-core integration of the Zen processor supporting 8 threads at a clock speed of 3.0 GHz, no Turbo or XFR is included. The graphics portion is built around a Vega GPU with 24 Compute Units running at 1.3 GHz. Each CU has 64 stream processors giving the “Fenghuang” chip a total of 1536 SPs. That is the same size GPU used in the Kaby Lake-G Vega M GH part, but with a higher clock speed.
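The shader count above is simple arithmetic, and with the clock speed it also gives a ballpark peak FP32 figure. A sketch, assuming the usual 2 FLOPs per stream processor per clock via fused multiply-add; the peak-compute number is my estimate, not a published AMD spec:

```python
def vega_specs(compute_units, clock_ghz, sp_per_cu=64):
    """Stream processor count and peak FP32 TFLOPS for a GCN/Vega GPU.
    Peak compute assumes 2 FLOPs (one FMA) per SP per clock."""
    sps = compute_units * sp_per_cu
    tflops = sps * 2 * clock_ghz / 1000
    return sps, tflops

sps, tflops = vega_specs(24, 1.3)
print(sps)     # 1536 stream processors, as stated
print(tflops)  # ~4.0 peak FP32 TFLOPS
```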
The memory system is also interesting as Zhongshan Subor has integrated 8GB of GDDR5 on a single package. (Update: AMD has clarified that this is a GDDR5 memory controller on package, and the memory itself is on the mainboard. Much more sensible.) This is different than how Intel integrated basically the same product from AMD as it utilized HBM2 memory. As far as I can see, this is the first time that an AMD-built SoC has utilized GDDR memory for both the GPU and CPU outside of the designs used for Microsoft and Sony.
This custom built product will still support AMD and Radeon-specific features like FreeSync, the Radeon Software suite, and next-gen architecture features like Rapid Packed Math. It is being built at GlobalFoundries.
Though there are differences from the apparent specs in the leaks that showed up online earlier in the year, they are pretty close. That story suggested the custom SoC would include a 28 CU GPU and HBM2. Perhaps there is another chip design pending for a different customer, or, more likely, there were competing integrations and the announced version won out on cost efficiency.
Zhongshan Subor is a Chinese holding company that owns everything from retail stores to an education technology business. You might have heard its name in association with a glut of Super Famicom clones years back. I don’t expect this new console to have near the reach of an Xbox or PlayStation, but with the size of the Chinese market, anything is possible if the content portfolio is there.
It is interesting that despite the aggressiveness of both Microsoft and Sony in the console space with regard to hardware upgrades this generation, this Chinese design will be the first to ship with a Zen-based APU, though it will lag behind the graphics performance of the Xbox One X (and probably the PS4 Pro). Don’t be surprised if both major console players integrate a similar style of APU design in their next-generation products, pairing Zen with Vega.
Revenue for AMD from this arrangement is hard to predict, but it does get an upfront fee from any semi-custom chip customer for the design and validation of the product. There is no commitment for a minimum chip purchase, so AMD will see extended income only if the console and PC built around the APU succeed.
Enthusiasts and PC builders have already started questioning whether this is the type of product that might make its way to the consumer. The truth is that the market for a high-performance, fully-integrated SoC like this is quite small, with DIY and SI (system integrator) markets preferring discrete components most of the time. If we remove the GDDR5 integration, which is one of the key specs that makes the “Fenghuang” chip so interesting and expensive, I’d bet the 24 CU GPU would be choked by standard DDR4/5 DRAM. For now, don’t hold out hope that AMD takes the engineering work of this Chinese gaming product and applies it to the general consumer market.
Subject: Graphics Cards | July 30, 2018 - 03:32 PM | Ken Addison
Tagged: nvidia, geforce, gaming celebration, gamescom, cologne
Earlier today, NVIDIA announced the GeForce Gaming Celebration, taking place August 20th-21st, in Cologne, Germany.
NVIDIA promises that this open to the public event taking place before the Gamescom convention "will be loaded with new, exclusive, hands-on demos of the hottest upcoming games, stage presentations from the world’s biggest game developers, and some spectacular surprises."
For any readers that might be in the area and interested in attending, first-come, first-served registration can be found here. For readers outside of the area, the event will also be live streamed.
PC Perspective will be attending the event, so stay tuned for more news and details! We can't possibly imagine what NVIDIA could be getting ready to announce.