Subject: Graphics Cards | September 24, 2018 - 03:33 PM | Jeremy Hellstrom
Tagged: nvidia, overclocking, RTX 2080, turing
We don't know how many Turing-based GPUs NVIDIA has sold, but the launch has certainly generated a lot of reviews. [H]ard|OCP have been working hard on overclocking the Founders Edition RTX 2080 and recently published their findings. They tried three different methods: simply setting the fan to 100%, running NVIDIA's new scanner tool (which does not void your warranty), and a manual overclock. They ran into some issues with the scanner tool and had limited success from increasing the fan speed alone; unsurprisingly, the manual OC provided the best results, hitting and maintaining 2055MHz on the core with some noticeable performance improvements.
"We finally got in the RTX 2080 video cards we purchased, and we have been putting those to good use. While Brent is banging out a real-world gameplay preview, I have been seeing just where our RTX 2080 Founders Edition ends up in terms of overclocking. We finally got a solid handle on what our particular Turing GPU and memory are capable of."
Here are some more Graphics Card articles from around the web:
- Weighing the trade-offs of Nvidia DLSS for image quality and performance @ The Tech Report
- ASUS ROG STRIX RTX 2080 Ti and 2080 1440p Preview @ [H]ard|OCP
- ASUS GeForce RTX 2080 STRIX OC 8G @ Guru of 3D
- Palit GeForce RTX 2080 Super Jetstream 8 GB @ TechPowerUp
- Nvidia GeForce RTX 2080 and RTX 2080 Ti Overclocking Guide @ Techspot
- NVIDIA GeForce RTX 2080 Ti & RTX 2080 Review @ Neoseeker
- Ethereum Crypto Mining Performance Benchmarks On The GeForce RTX 2080 Ti @ Phoronix
Subject: Graphics Cards | September 21, 2018 - 03:30 PM | Jeremy Hellstrom
Tagged: RTX 2080, nvidia, TU104
The Tech Report takes a look at the lesser of the two new Turing cards, the RTX 2080. It has not been as well received as the 2080 Ti, as its performance is very similar to that of the GTX 1080 Ti. One area where the new card might hold an advantage is frametimes, with Turing providing smoother performance even when raw frames per second are comparable. As their review shows, this is true in some cases but not all; see if your preferred games might benefit from the new RTX while we await releases that support the new features present on the RTX series.
"Nvidia's GeForce RTX 2080 brings Turing to a price point that's more accessible than the flagship RTX 2080 Ti. At $800, however, the Founders Edition card we're testing still has to contend with the GTX 1080 Ti in today's games. We see whether the RTX 2080 can establish a foothold as gamers await its future potential."
Here are some more Graphics Card articles from around the web:
- GeForce RTX 2080 Overclocking Preview with Scanner @ [H]ard|OCP
- NVIDIA GeForce RTX 2080 Ti Shows Very Strong Compute Performance Potential @ Phoronix
- MSI GeForce RTX 2080 Ti DUKE @ Guru of 3D
- NVIDIA GeForce GTX 680 To RTX 2080 Ti Graphics/Compute Performance @ Phoronix
- Gigabyte GeForce RTX 2080 GAMING OC 8G @ Guru of 3D
- GeForce RTX 2080 Ti & 2080 Mega Benchmark @ Techspot
- Initial NVIDIA GeForce RTX 2080 Ti Linux Benchmarks @ Phoronix
- Asus GeForce RTX 2080 Ti RoG Strix @ Guru of 3D
- AMD GPU Generational Performance Part 2 @ [H]ard|OCP
Subject: Graphics Cards | September 20, 2018 - 03:21 AM | Tim Verry
Tagged: turing, RTX 2080 Ti, RTX 2080, nvidia, evga
NVIDIA's Turing-based 2000 series graphics cards are finally official, and partners are unleashing all manner of custom cards based on the new GPUs. EVGA is launching the RTX 2080 Ti and RTX 2080 under a new XC Ultra Gaming series that uses a translucent shroud (with a very Game Boy Color nostalgia vibe) wrapping a dual-fan ICX2 cooler in customizable white, black, and red trim, paired with a large multi-heatpipe heatsink for the Turing GPU and GDDR6 memory.
EVGA is introducing four XC Ultra Gaming series cards: two RTX 2080 Tis and two RTX 2080s, which differ in price and boost clockspeeds. The graphics cards feature 2.75-slot designs with ICX2 coolers and hydro dynamic bearing fans; EVGA claims the cooler runs 14% cooler and 19% quieter than its predecessor. The taller card design reportedly allows for a taller fan hub and thicker blades that can push air through the thicker heatsink without extra noise (whereas EVGA's 2-slot cards use a smaller fan hub with more blades to balance things). Display outputs include three DisplayPort, one HDMI, and one USB-C VirtualLink.
The EVGA RTX 2080 Ti XC Ultra Gaming comes in two models: the 11G-P4-2383-KR and the 11G-P4-2382-KR. The 11GB of GDDR6 memory is clocked at 14000 MHz on both, but the $1,199.99 11G-P4-2382-KR features a 1635 MHz boost clock for its 4352 CUDA cores while the $1,249.99 11G-P4-2383-KR takes things up a notch to 1650 MHz. Of course, enthusiasts can use EVGA's Precision X1 or NVIDIA's new OC Scanner software to overclock on their own. The RTX 2080 Ti graphics cards have two 8-pin power connectors.
As for the RTX 2080 XC Ultra Gaming cards, the $799.99 08G-P4-2182-KR and the $849.99 08G-P4-2183-KR pair a TU104 GPU with 2944 CUDA cores and 8GB of GDDR6 memory clocked at 14000 MHz. The cheaper model features a 1815 MHz boost clock while the higher-priced model clocks in at 1850 MHz. EVGA's RTX 2080 XC Ultra Gaming cards use a 6-pin plus 8-pin power connector configuration.
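Those core counts and boost clocks translate into theoretical throughput via the standard approximation of 2 FLOPs (one fused multiply-add) per CUDA core per cycle. A rough sketch, keeping in mind that GPU Boost will push actual clocks well above these rated figures:

```python
def peak_fp32_tflops(cuda_cores: int, boost_mhz: float) -> float:
    """Theoretical peak FP32 throughput: 2 FLOPs (one FMA) per core per cycle."""
    return 2 * cuda_cores * boost_mhz * 1e6 / 1e12

# EVGA RTX 2080 Ti XC Ultra (11G-P4-2383-KR): 4352 cores at 1650 MHz boost
print(round(peak_fp32_tflops(4352, 1650), 1))  # ~14.4 TFLOPS
# EVGA RTX 2080 XC Ultra (08G-P4-2183-KR): 2944 cores at 1850 MHz boost
print(round(peak_fp32_tflops(2944, 1850), 1))  # ~10.9 TFLOPS
```

That 15 MHz bump between the two Ti models works out to well under 1% of theoretical throughput, which puts the $50 price gap in perspective.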
EVGA's XC Ultra Gaming cards come with a 3-year warranty and are currently being offered on the company's website. While they were previously available for pre-order, at the time of writing the cards are listed as auto-notify, presumably due to the launch window slipping back a week.
What are your thoughts on EVGA's take on Turing?
- Asus Announces ROG Strix, Dual, and Turbo Series RTX 2080 Ti and RTX 2080 Graphics Cards
- The NVIDIA GeForce RTX 2080 and RTX 2080 Ti Review
- The Architecture of NVIDIA's RTX GPUs - Turing Explored
Subject: Graphics Cards | September 19, 2018 - 01:35 PM | Jeremy Hellstrom
Tagged: turing, tu102, RTX 2080 Ti, rtx, ray tracing, nvidia, gtx, geforce, founders edition, DLSS
Today is the day the curtain is pulled back and the performance of NVIDIA's Turing based consumer cards is revealed. If there was a benchmark, resolution or game that was somehow missed in our review then you will find it below, but make sure to peek in at the last page for a list of the games which will support Ray Tracing, DLSS or both!
The Tech Report found the RTX 2080 Ti to be an amazing card if you are playing Hellblade: Senua's Sacrifice, as it clearly outperforms cards from previous generations as well as the base RTX 2080. In many cases the RTX 2080 merely matches the GTX 1080 Ti, though with the extra features it is an attractive card for those with GPUs several generations old. There is one catch for would-be adopters: we have not seen prices like these outside of the Titan series before now.
"Nvidia's Turing architecture is here on board the GeForce RTX 2080 Ti, and we put it through its paces for 4K HDR gaming with some of today's most cutting-edge titles. We also explore the possibilities of Nvidia's Deep Learning Super-Sampling tech for the future of 4K gaming. Join us as we put Turing to the test."
Here are some more Graphics Card articles from around the web:
- MSI GeForce RTX 2080 Ti Gaming X TRIO @ Guru of 3D
- Nvidia RTX 2080 and 2080 Ti review: A tale of two very expensive graphics cards @ Ars Technica
- GeForce RTX 2080 @ Guru of 3D
- RTX 2080 Ti Founder Edition @ Guru of 3D
- Turing RTX 2080 and RTX 2080 Ti Benchmarked with 36 Games @ BabelTechReviews
- NVIDIA GeForce RTX IS HERE. Introducing the GeForce RTX 2080 & RTX 2080 Ti – 4K 60 FPS or bust! Review @ Bjorn3d
- Nvidia GeForce RTX 2080TI & RTX 2080 @ Modders-Inc
- MSI GeForce RTX 2080 Gaming X Trio 8 GB @ TechPowerUp
- ASUS GeForce RTX 2080 STRIX OC 8 GB @ TechPowerUp
- Palit GeForce RTX 2080 Gaming Pro OC 8 GB @ TechPowerUp
- MSI GeForce RTX 2080 Ti Duke 11 GB @ TechPowerUp
- Nvidia GeForce RTX 2080 & 2080 Ti @ Techspot
- ASUS GeForce RTX 2080 Ti STRIX OC 11 GB @ TechPowerUp
- MSI GeForce RTX 2080 Ti Gaming X Trio 11 GB @ TechPowerUp
- NVIDIA GeForce RTX 2080 Ti & RTX 2080 Founders Edition Reviewed @ OCC
- NVIDIA GeForce RTX 2080 Founders Edition 8 GB @ TechPowerUp
- Nvidia RTX 2080 @ Kitguru
- Nvidia RTX 2080 Ti @ Kitguru
- NVIDIA GeForce RTX 2080 Ti Founders Edition 11 GB @ TechPowerUp
- Nvidia Turing GeForce 2080 (Ti) architecture @ Guru of 3D
- NVIDIA Turing GeForce RTX Technology & Architecture @ TechPowerUp
Subject: Graphics Cards | September 16, 2018 - 11:18 AM | Scott Michaud
Tagged: nvidia, rtx, RTX 2080 Ti, RTX 2080
There are two changes to the launch of NVIDIA’s GeForce RTX 20-series of cards. The first change is that the general availability, as in the first possible moment to purchase a GeForce RTX 2080 Ti without a pre-order, has slipped a week, from September 20th to September 27th. The second is that pre-orders of the GeForce RTX 2080 Ti have also slipped. They will ship between September 20th and September 27th, rather than all of them shipping on September 20th.
The GeForce RTX 2080 (without the Ti) will still launch on September 20th.
This was all announced on the NVIDIA forums. The brief, roughly six-sentence post did not clarify whether this applies to board partners such as ASUS, EVGA, MSI, PNY, ZOTAC, and Gigabyte; it's entirely possible that NVIDIA is referring only to the Founders Edition. NVIDIA also did not mention why the delay occurred. Given the relatively short duration, it could be anything from one of the recent natural disasters to accidentally forgetting to add an automatic stop threshold to the pre-order page. Who knows?
The NVIDIA website has been updated to show “Notify Me” instead of “Pre-Order” for the GeForce RTX 2080 Ti, so pre-orders have officially shut down for that product. The regular RTX 2080 is still available for pre-order on NVIDIA’s website, though, so you still have a little time to pre-order those.
You can also, of course, wait for the reviews to make a more informed decision later.
Subject: General Tech, Graphics Cards | September 7, 2018 - 01:36 PM | Jeremy Hellstrom
Tagged: jon peddie, gpu market share, amd, nvidia
Last week we had a peek at the overall GPU market, including APUs, and the news was not great. This week Jon Peddie released details on the discrete GPU market, which also contracted: quarter-over-quarter sales dropped 28%, and they are down 5.7% from this time last year, similar to the trend we saw in the total market. Looking back over time, Q2 tends to be a bad quarter for GPU sales, and the current market is actually larger in total volume than it was two years ago, before the mining craze was fully underway.
You can see the details of AMD and NVIDIA's quarter below.
"The market shares for the desktop discrete GPU suppliers shifted in the quarter, Nvidia increased market share from last quarter, while AMD enjoyed an increase in share year-to-year."
Here is some more Tech News from around the web:
- Valve Explains How It Decides Who's a 'Straight Up Troll' Publishing Video Games On Steam @ Slashdot
- Memory production value growth to slow in 2019, says Digitimes Research @ DigiTimes
- British Airways breach sees hackers take-off with customers' payment details @ The Inquirer
- Do you really think crims would do that? Just go on the 'net and exploit a Windows zero-day? @ The Register
- iPhone XS release date, price and specs: Apple's 2018 iPhones look set to be most expensive yet @ The Inquirer
- Voyager 1 left the planet 41 years ago. SpaceX hopes to land on it on Saturday @ The Register
- Tech ARP 20th Anniversary Giveaway Week 2 by Dell!
- CORSAIR T2 ROAD WARRIOR Gaming Chair @ [H]ard|OCP
Subject: Graphics Cards | September 5, 2018 - 05:50 PM | Jeremy Hellstrom
Tagged: amd, GCN, R9 290X, r9 390x, R9 Fury X, RX VEGA 64
[H]ard|OCP have been examining the generational performance differences between GPUs, starting with NVIDIA and moving on to AMD. In this review they compare Hawaii (GCN 1.1), Fiji (GCN 1.3), and Vega 10 (GCN 1.5) across a wide variety of games. AMD is the more interesting case, as they have made more frequent changes to their architecture while tending toward mid-range performance rather than aiming for the high end of performance and pricing. This has led to interesting results, with certain GCN versions offering more compelling upgrade paths than others. Take a close look to see how AMD's GPUs have changed over the past five years.
"Wonder how much performance you are truly getting from GPU to GPU upgrade in games? We take GPUs from AMD and compare performance gained from 2013 to 2018. This is our AMD GPU Generational Performance Part 1 article focusing on the Radeon R9 290X, Radeon R9 390X, Radeon R9 Fury X, and Radeon RX Vega 64 in 14 games."
Here are some more Graphics Card articles from around the web:
- The New 3GB GeForce GTX 1050: Good Product or Misleading Product? @ TechSpot
- Razer Core X @ Kitguru
- Blackmagic external GPU review: A very Apple graphics solution @ Ars Technica
Subject: Graphics Cards | August 28, 2018 - 01:46 PM | Jeremy Hellstrom
Tagged: Radeon Software Adrenalin Edition, radeon, amd, 18.8.2
Hot on the heels of the NVIDIA update, AMD has released a new driver for your Radeon and Vega cards or your APU, with optimizations for Strange Brigade and F1 2018 focused on high-resolution performance.
In addition to the new games, there are fixes for Far Cry 5 and solutions to problems some users encountered with FRTC and Instant Replay enabled. You can grab them right here.
- Strange Brigade
- Up to 5% faster performance in Strange Brigade™ using Radeon Software Adrenalin Edition 18.8.2 on the Radeon™ RX Vega 64 (8GB) graphics card than with Radeon™ Software Adrenalin Edition 18.8.1 at 3840x2160 (4K).
- Up to 3% faster performance in Strange Brigade™ using Radeon Software Adrenalin Edition 18.8.2 on the Radeon™ RX 580 (8GB) graphics card than with Radeon™ Software Adrenalin Edition 18.8.1 at 2560x1440 (1440p).
- F1 2018
Fixed issues in this release include:
- Some games may experience instability or stutter when playing with FRTC and Instant Replay enabled.
- Upgrade Advisor may not appear in Radeon Settings game manager.
- Far Cry 5 may experience dimmed or grey images with HDR10 enabled on some system configurations.
- Far Cry 5 may experience an application hang when changing video settings on some system configurations.
- Radeon Chill min and max values may not sync on multi GPU system configurations.
- Radeon FreeSync may fail to enable when playing Call of Duty®: Black Ops 4.
Subject: General Tech, Graphics Cards, Shows and Expos | August 22, 2018 - 02:06 PM | Jeremy Hellstrom
Tagged: turing, RTX 2080, nvidia, geforce, ansel
NVIDIA has been showing off a slideshow in Germany, offering a glimpse at the new features Turing brings to the desktop as well as in-house performance numbers. As you can see below, their testing shows a significant increase in performance over Pascal; it will be interesting to see how those numbers hold up once reviewers get their hands on these cards.
While those performance numbers should be taken with a grain of salt or three, the various features the new generation of chips brings to the table will appear as presented. Fans of Ansel will be able to upscale screenshots to 8K with Ansel AI UpRes, which offers an impressive implementation of anti-aliasing. NVIDIA also showed off a variety of filters you can use to make your screenshots even more impressive.
The GigaRays of real time ray tracing capability on Turing look very impressive but with Ansel, your card has a lot more time to process reflections, refractions and shadows which means your screenshots will look significantly more impressive than what the game shows while you are playing. In the example below you can see how much more detail a little post-processing can add.
A wide variety of released and upcoming games will support these features, with 22 listed by name at the conference. A few of the titles only support some of the new features, such as NVIDIA Highlights; however, the games below should offer full support, as well as framerates high enough to play at 4K with HDR enabled.
Keep your eyes peeled for more news from NVIDIA and GamesCom.
Subject: Graphics Cards | August 21, 2018 - 08:43 PM | Scott Michaud
Tagged: nvidia, Volta, turing, tu102, gv100
In the past, when NVIDIA launched a new GPU architecture, they would make a few designs for each of their market segments. All SKUs would be one of those chips, with varying amounts of it disabled or re-clocked to hit multiple price points. The mainstream enthusiast (GTX -70/-80) chip of each generation is typically 300mm2, and the high-end enthusiast (Titan / -80 Ti) chip is often around 600mm2.
Kepler devoted quite a bit of that die space to FP64 calculations, but that did not happen with the consumer versions of Pascal. Instead, GP100 supported a 1:2:4 FP64:FP32:FP16 performance ratio. This is great for the compute community, such as scientific researchers, but games focus on FP32. Shortly thereafter, NVIDIA released GP102, which had the same number of FP32 cores (3840) as GP100 but much-reduced 64-bit performance… and much-reduced die area. GP100 was 610mm2, but GP102 was just 471mm2.
At this point, I’m thinking that NVIDIA is pulling scientific computing chips away from the common user to increase the value of their Tesla parts. There was no reason to sell a 6XXmm2 chip at consumer prices when a 471mm2 part could take the performance crown, so why not reap extra dies from each wafer (and clock them higher thanks to better binning)?
And then Volta came out. And it was massive (815mm2).
At this point, you really cannot manufacture a larger integrated circuit. You are at the limit of what TSMC (and other fabs) can focus onto your silicon. Again, it’s a 1:2:4 FP64:FP32:FP16 ratio. Again, there is no consumer version in sight. Again, it looked as if NVIDIA was going to fragment their market and leave consumers behind.
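The binning argument can be made concrete with a naive dies-per-wafer count. This sketch just divides wafer area by die area, ignoring rectangular-die edge waste and defect yield (both of which hit big dies hardest), so treat the numbers as upper bounds:

```python
import math

def dies_per_wafer(die_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Naive upper bound: wafer area divided by die area (no edge/defect loss)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_mm2)

print(dies_per_wafer(610))  # GP100 at 610 mm^2: ~115 candidates per 300 mm wafer
print(dies_per_wafer(471))  # GP102 at 471 mm^2: ~150 candidates
print(dies_per_wafer(815))  # GV100 at 815 mm^2: ~86 candidates
```

Even before yield enters the picture, shrinking GP100 into GP102 bought roughly 30% more candidate dies per wafer, which is the economic pull behind splitting the compute and consumer designs.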
And then Turing was announced. Apparently, NVIDIA still plans on making big chips for consumers… just not with 64-bit performance. The big draw of this 754mm2 chip is its dedicated hardware for raytracing. We knew this technology was coming, and we knew that the next generation would have technology to make this useful. I figured that meant consumer-Volta, and NVIDIA had somehow found a way to use Tensor cores to cast rays. Apparently not… but, don’t worry, Turing has Tensor cores too… they’re just for machine-learning gaming applications. Those are above and beyond the raytracing ASICs, and the CUDA cores, and the ROPs, and the texture units, and so forth.
But, raytracing hype aside, let’s think about the product stack:
- NVIDIA now has two ~800mm2-ish chips… and
- They serve two completely different markets.
In fact, I cannot see either FP64 or raytracing going anywhere any time soon. As such, it’s my assumption that NVIDIA will maintain two different architectures of GPUs going forward. The only way that I can see this changing is if they figure out a multi-die solution, because neither design can get any bigger. And even then, what workload would it even perform? (Moment of silence for 10km x 10km video game maps.)
What do you think? Will NVIDIA keep two architectures going forward? If not, how will they serve all of their customers?
Subject: Graphics Cards | August 20, 2018 - 03:08 PM | Tim Verry
Tagged: turing, RTX 2080 Ti, RTX 2080, nvidia, geforce, asus
Following Jensen Huang's reveal of the RTX family of Turing-based graphics cards, Asus announced that it will have graphics cards from its ROG Strix, Dual, and Turbo product lines available in mid-September. The new graphics cards will be based around the NVIDIA GeForce RTX 2080 Ti and GeForce RTX 2080 GPUs.
According to Asus, their new Turing-based graphics cards will be built using their Auto-Extreme technology and with redesigned coolers to increase card-to-card product consistency and cooling efficiency. The triple fan ROG Strix and dual fan Dual series cards use a new 2.7 slot design that results in 20% and 50% increases (respectively) in cooling array surface area versus their 1000 series predecessors. The ROG Strix card uses Axial fans that reportedly offer better airflow and IP5X dust resistance while the Dual series cards use Wing Blade fans that also offer dust resistance along with being allegedly quieter while pushing more air. Meanwhile, the Turbo series uses a blower-style cooler that has been redesigned and uses an 80mm dual ball bearing fan with a new shroud that allows for more airflow even in small cases or when cards are sandwiched together in a multi-GPU setup.
The ROG Strix RTX 2080 Ti and RTX 2080 cards will have one USB Type-C (VirtualLink), two HDMI 2.0b, and two DisplayPort 1.4a outputs. The Dual RTX 2080 Ti and RTX 2080 cards will have one USB Type-C, one HDMI 2.0b, and three DisplayPort 1.4 outputs. Finally, the Turbo series RTX 2080 Ti and RTX 2080 cards will have one USB Type-C, one HDMI 2.0b, and two DisplayPort 1.4 ports.
| | RTX 2080 Ti | RTX 2080 |
| --- | --- | --- |
| Base Clock | 1350 MHz (Turbo model) | 1515 MHz (Turbo model) |
| Boost Clock | 1545 MHz (Turbo model) | 1710 MHz (Turbo model) |
| Ray Tracing Speed | 10 GRays/s | 8 GRays/s |
| Memory Clock | 14000 MHz | 14000 MHz |
| Memory Interface | 352-bit GDDR6 | 256-bit GDDR6 |
| Memory Bandwidth | 616 GB/s | 448 GB/s |
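The bandwidth figures fall straight out of the memory spec: peak bandwidth is the effective transfer rate times the bus width in bytes. A quick check of the two cards' numbers:

```python
def mem_bandwidth_gbs(effective_mhz: float, bus_bits: int) -> float:
    """Peak memory bandwidth in GB/s: transfers per second times bytes per transfer."""
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

print(mem_bandwidth_gbs(14000, 352))  # RTX 2080 Ti: 616.0 GB/s
print(mem_bandwidth_gbs(14000, 256))  # RTX 2080: 448.0 GB/s
```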
Exact specifications are still unknown, though Asus did reveal clockspeeds for the Turbo models, which are listed above. Clockspeeds for the Dual and ROG Strix cards should be quite a bit higher thanks to their much beefier coolers, and the OC Editions in particular should be clocked above reference specs.
Asus did not disclose exact MSRP pricing, but it did state that several models will be available for pre-order starting today and will be officially available in mid-September. It appears that a couple of RTX 2080 Ti and RTX 2080 cards have already appeared on Newegg, but not all of them have shown up yet. The models slated to be available for preorder include the Dual GeForce RTX 2080 Ti OC Edition, Turbo RTX 2080 Ti, ROG Strix GeForce RTX 2080 OC Edition, and the Dual RTX 2080 OC Edition.
- NVIDIA Announces GeForce RTX 2070, RTX 2080 and RTX 2080 Ti at Gamescom 2018
- Newegg Lists GeForce RTX 2080 and 2080 Ti Graphics Cards Before Announcement
- NVIDIA Announcement Live Stream at 12:00 PM Eastern Today
- NVIDIA Officially Announces Turing GPU Architecture at SIGGRAPH 2018
- Real time ray tracing in still life
Subject: Graphics Cards | August 20, 2018 - 01:58 PM | Ken Addison
Tagged: turing, tensor cores, rtx 2080ti, RTX 2080, RTX 2070, rtx, rt cores, ray tracing, quadro, preorder, nvidia, gtx, geforce
* Update *
NVIDIA's pre-order page is now live, as well as info on the RTX 2070! Details below:
* Update 2 *
Post-Founders Edition pricing comes in a bit lower than the Founders pricing noted above:
* End update *
Just like we saw with the Quadro RTX lineup, NVIDIA is designating these gaming-oriented graphics cards with the RTX brand to emphasize their ray tracing capabilities.
Through the combination of dedicated ray tracing (RT) cores and Tensor cores for AI-powered denoising, NVIDIA is claiming these RTX GPUs are capable of high enough ray tracing performance to be used in real time in games, as shown by demos of Battlefield V, Shadow of the Tomb Raider, and Metro: Exodus.
Not every GPU in NVIDIA's lineup will be capable of this real-time ray tracing performance, with those lower tier cards retaining the traditional GTX branding.
Here are the specifications as we know them so far compared to the Quadro RTX cards, as well as the previous generation GeForce cards, and the top offering from AMD.
| | RTX 2080 Ti | Quadro RTX 6000 | GTX 1080 Ti | RTX 2080 | Quadro RTX 5000 | GTX 1080 | TITAN V | RX Vega 64 (Air) | RTX 2070 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Base Clock | 1350 MHz | ? | 1408 MHz | 1515 MHz | ? | 1607 MHz | 1200 MHz | 1247 MHz | 1410 MHz |
| Boost Clock | 1545 MHz | ? | 1582 MHz | 1710 MHz | ? | 1733 MHz | 1455 MHz | 1546 MHz | 1620 MHz |
| Ray Tracing Speed | 10 GRays/s | 10 GRays/s | -- | 8 GRays/s | 6? GRays/s | -- | -- | -- | 6 GRays/s |
| Memory Clock | 14000 MHz | 14000 MHz | 11000 MHz | 14000 MHz | 14000 MHz | 10000 MHz | 1700 MHz | 1890 MHz | 14000 MHz |
| Memory Interface | 352-bit GDDR6 | 384-bit GDDR6 | 352-bit GDDR5X | 256-bit GDDR6 | 256-bit GDDR6 | 256-bit GDDR5X | 3072-bit HBM2 | 2048-bit HBM2 | 256-bit GDDR6 |
| Memory Bandwidth | 616 GB/s | 672 GB/s | 484 GB/s | 448 GB/s | 448 GB/s | 320 GB/s | 653 GB/s | 484 GB/s | 448 GB/s |
| TDP | ? | 300 W | 250 W | 215 W | ? | 180 W | 250 W | 295 W | ? |
| Peak Compute | ? | ? | 10.6 TFLOPS | ? | ? | 8.2 TFLOPS | 14.9 TFLOPS | 13.7 TFLOPS | ? |
| Transistor Count | ? | ? | 12.0 B | ? | ? | 7.2 B | 21.0 B | 12.5 B | ? |
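The Peak Compute figures can be cross-checked with the usual approximation of 2 FLOPs (one fused multiply-add) per shader per cycle; note that quoted figures mix base and boost clocks depending on the card. The shader counts below are NVIDIA's published numbers, not values from the table:

```python
def peak_fp32_tflops(shaders: int, clock_mhz: float) -> float:
    """Theoretical FP32 throughput: 2 FLOPs (one FMA) per shader per cycle."""
    return 2 * shaders * clock_mhz * 1e6 / 1e12

# TITAN V: 5120 shaders, quoted at its 1455 MHz boost clock
print(round(peak_fp32_tflops(5120, 1455), 1))  # ~14.9 TFLOPS
# GTX 1080: 2560 shaders, quoted at its 1607 MHz base clock
print(round(peak_fp32_tflops(2560, 1607), 1))  # ~8.2 TFLOPS
```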
We hope to fill out the rest of the information on these GPUs in the coming days during subsequent press briefings during Gamescom.
One big change to the RTX lineup is NVIDIA's revised Founders Edition cards. Instead of the blower-style cooler we've seen on every previous NVIDIA reference design, the Founders Edition RTX cards move to a dual axial-fan setup, similar to past third-party designs.
These new GPUs do not come cheaply, however, with an increased MSRP across the entire lineup compared to the 1000-series cards. The RTX 2080 Ti's MSRP of $1200 is an increase of $500 over the previous generation GTX 1080 Ti, while the RTX 2080 sports a $200 increase over the GTX 1080. These prices should come down after the Founders Edition wave pricing passes (the same was done with the GTX 10xx launches).
Both the Founders Edition cards from NVIDIA and third-party designs from partners such as EVGA and ASUS are available for preorder from retailers including Amazon and Newegg starting today, and are set to ship on September 20th.
Subject: Graphics Cards | August 20, 2018 - 12:15 PM | Sebastian Peak
Tagged: video card, RTX 2080 Ti, RTX 2080, nvidia, newegg, graphics, gpu, geforce
Newegg has listed NVIDIA GeForce RTX cards ahead of a probable announcement at today's "BeForTheGame" event in Germany, apparently confirming the rumors about the existence of these two GPUs. Both RTX 2080 and RTX 2080 Ti cards are featured on this Newegg promo page:
Clearly this went live a bit early (none of the linked RTX products bring up a valid page yet) as NVIDIA's announcement has yet to take place, though live coverage continues on NVIDIA's Twitch channel now.
Subject: Graphics Cards | August 20, 2018 - 11:30 AM | Sebastian Peak
Tagged: video card, nvidia, live stream, graphics, gpu, announcement
The wait (and endless speculation) is nearly over, as NVIDIA will be hosting their "BeForTheGame" event with probable product announcements at noon eastern today, and it will be streamed live on the company's Twitch channel.
You can watch the event right here:
Will there be new GeForce cards? Is it GTX or RTX? Were the rumors true or totally off-base? There is only one way to find out! (And of course we will cover any news stories emerging from this event, so stay tuned!)
Subject: Graphics Cards | August 17, 2018 - 02:59 PM | Sebastian Peak
Tagged: VideoCardz, video card, rumor, RTX 2080 Ti, RTX 2080, report, pcb, nvidia, leak, graphics, gpu
The staff at VideoCardz.com have been very busy of late, posting various articles on rumored NVIDIA graphics cards expected to be revealed this month. Today in particular we are seeing more (and more) information and imagery concerning what seems assured to be RTX 2080 branding, and somewhat surprising is the rumor that the RTX 2080 Ti will launch simultaneously (with a reported 4352 CUDA cores, no less).
Reported images of MSI GAMING X TRIO variants of RTX 2080/2080 Ti (via VideoCardz)
From the reported product images one thing in particular stands out: memory for each card appears unchanged from the current GTX 1080 and 1080 Ti, at 8GB and 11GB respectively (though a move from GDDR5X to GDDR6 has also been rumored/reported).
Even (reported) PCB images are online, with this TU104-400-A1 quality sample pictured on Chiphell via VideoCardz.com:
The TU104-400-A1 pictured is presumed to be the RTX 2080 GPU (Chiphell via VideoCardz)
Other product images from AIB partners (PALIT and Gigabyte) were recently posted over at VideoCardz.com if you care to take a look, and as we near a likely announcement it looks like the (reported) leaks will keep on coming.
Subject: Graphics Cards | August 14, 2018 - 01:08 AM | Jeremy Hellstrom
Tagged: Siggraph, ray tracing, quadro rtx 8000, quadro rtx 5000, nvidia, jensen
Any attempt to describe the visual effects Jensen Huang showed off at his Siggraph keynote is bound to fail, not that this has ever stopped any of us before. If you have seen the short demo movie released earlier this year in cooperation with Epic and ILMxLAB, you have an idea of what they can do with ray tracing. However, NVIDIA pulled a fast one on us by hiding the hardware the demo actually ran on: it was not pre-rendered, but was in fact our first look at their real-time ray tracing. The hardware required for this feat is the brand new Quadro RTX series, and the specs are impressive.
The ability to process 10 Gigarays per second means that each and every pixel can be influenced by numerous rays of light: perhaps 100 per pixel in a perfect scenario with clean inputs, or 5-20 in cases where the AI denoiser is required to infer missing light sources or occlusions, all in real time. The card itself also functions rather well as a light source. The ability to hit 16 TFLOPS and 16 TIPS means this card is happy doing floating point and integer calculations simultaneously.
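Those per-pixel figures follow from a simple budget calculation: divide the total ray throughput by the number of pixels drawn per second. A sketch at two common targets (real scenes need multiple rays per bounce, so the practical budget sits below these ceilings):

```python
def rays_per_pixel(grays_per_s: float, width: int, height: int, fps: int) -> float:
    """Per-pixel ray budget per frame, given a total rays-per-second figure."""
    return grays_per_s * 1e9 / (width * height * fps)

print(round(rays_per_pixel(10, 3840, 2160, 60), 1))  # 4K at 60 fps: ~20 rays/pixel
print(round(rays_per_pixel(10, 1920, 1080, 60), 1))  # 1080p at 60 fps: ~80 rays/pixel
```

The ~20 rays per pixel at 4K60 lines up with the 5-20 range quoted for denoiser-assisted rendering.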
The die itself is significantly larger than the previous generation at 754mm2, and will sport a 300W TDP to keep it in line with the PCIe spec; though we will run it through the same tests as the RX 480 to see how well they did, if we get the chance. 30W of the total power budget is devoted to the onboard USB controller, which implies support for VirtualLink.
The cards can be used in pairs via Jensen's chest decoration, more commonly known as an NVLink bridge; more than one pair can be run in a system, but you will not be able to connect three or more cards directly.
As a pair gives you up to 96GB of GDDR6 for your processing tasks, it is hard to call that limiting. The price is rather impressive as well: compared to previous render farms, such as the rather tiny one below, you are looking at a tenth the cost to power your movie with RTX cards. The cards are not limited to proprietary engines or programs either, with the DirectX and Vulkan APIs supported in addition to Pixar's software. NVIDIA's Material Definition Language will be made open source, allowing even broader usage for those who so desire.
You will of course wonder what this means in terms of graphical eye candy, either pre-rendered quickly for your later enjoyment or rendered in real time if you have the hardware. The image below attempts to show the various features RTX can easily handle. Mirrored surfaces can be emulated with multiple reflections accurately represented, again handled on the fly instead of being precomputed, so soon you will be able to see around corners.
It also introduces a new type of anti-aliasing called DLAA, and there are no prizes for guessing what the DL stands for. DLAA works by taking an already anti-aliased image and training itself to provide even better edge smoothing, though at a processing cost. As with most other features on these cards, it is not the complexity of the scene that has the biggest impact on calculation time but rather the number of pixels, as each pixel has numerous rays associated with it.
All of this adds up to significantly faster processing than Pascal: not the small evolutionary changes we have become accustomed to, but more of a revolutionary change.
In addition to effects in movies and other video there is another possible use for Turing based chips which might appeal to the gamer, if the architecture reaches the mainstream. With the ability to render existing sources with added ray tracing and de-noising features it might be possible for an enterprising soul to take an old game and remaster it in a way never before possible. Perhaps one day people who try to replay the original System Shock or Deus Ex will make it past the first few hours before the graphical deficiencies overwhelm their senses.
We expect to see more from NVIDIA tomorrow so stay tuned.
Subject: Graphics Cards | August 7, 2018 - 03:24 PM | Jeremy Hellstrom
Tagged: amd, RX 570, RX 580, msi, MECH 2 OC, factory overclocked
MSI have released two new Polaris cards, the MECH 2 versions of the RX 570 and 580. The cards come factory overclocked, and Guru of 3D was able to push the clocks higher still using Afterburner, with noticeable improvements in performance. For those more interested in quiet operation, the tests show these two to be among the least noisy cards on the market, with the 570 hitting ~34 dBA under full load and the 580 producing ~38 dBA. Check out the full review, and remember that picking one of these up qualifies you for three free games!
"Join us as we review the MSI Radeon RX 570 and 580 MECH 2 OC with 8GB graphics memory. This all-new two slot cooled mainstream graphics card series will allow you to play your games in both the Full HD 1080P as well as gaming in WQHD (2560x1440) domain. The new MECH 2 series come with revamped looks and cooling."
Here are some more Graphics Card articles from around the web:
- MSI Radeon RX 580 Mech 2 8 GB @ TechPowerUp
- NVIDIA GPU Generational Performance Part 1 @ [H]ard|OCP
- NVIDIA GPU Generational Performance Part 2 @ [H]ard|OCP
- AMD’s “fine wine” revisited – the Fury X vs. the GTX 980 Ti @ BabelTechReviews
- GTX 1060 6GB vs the RX 580 8GB vs the GTX 980 4GB revisited @ BabelTechReviews
- GeForce GTX 1060 3GB vs. Radeon RX 570 4GB: 2018 Update @ Techspot
- XFX RX 570 RS 8GB XXX Edition @ OCC
- The GTX 1070 versus the GTX 980 Ti @ BabelTechReviews
Subject: Graphics Cards, Processors | August 3, 2018 - 04:41 PM | Ryan Shrout
Tagged: Zen, Vega, SoC, ryzen, China, APU, amd
Continuing down the path with its semi-custom design division, AMD today announced a partnership with Chinese company Zhongshan Subor to design and build a new chip to be utilized for both a Chinese gaming PC and Chinese gaming console.
The chip itself will include a quad-core integration of the Zen processor supporting 8 threads at a clock speed of 3.0 GHz, no Turbo or XFR is included. The graphics portion is built around a Vega GPU with 24 Compute Units running at 1.3 GHz. Each CU has 64 stream processors giving the “Fenghuang” chip a total of 1536 SPs. That is the same size GPU used in the Kaby Lake-G Vega M GH part, but with a higher clock speed.
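The SP arithmetic above is easy to verify; as a back-of-the-envelope sketch (assuming the standard 2 FLOPs per stream processor per clock for fused multiply-add, which is not an announced figure):

```python
# Back-of-the-envelope check on the "Fenghuang" GPU specs above.
# CU count and clock are from the announcement; the FLOPs-per-clock
# figure is the usual GCN/Vega assumption, not an official spec.

CU_COUNT = 24          # Vega Compute Units in the SoC
SP_PER_CU = 64         # stream processors per CU (standard for Vega)
GPU_CLOCK_GHZ = 1.3    # announced GPU clock

total_sps = CU_COUNT * SP_PER_CU
print(total_sps)  # 1536, matching the announced figure

# Peak FP32 throughput: 2 FLOPs per SP per clock (fused multiply-add)
peak_tflops = total_sps * 2 * GPU_CLOCK_GHZ / 1000
print(round(peak_tflops, 2))  # ~3.99 TFLOPS
```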
The memory system is also interesting, as Zhongshan Subor has integrated 8GB of GDDR5 on a single package. (Update: AMD has clarified that this is a GDDR5 memory controller on package, and the memory itself is on the mainboard. Much more sensible.) This is different from how Intel integrated basically the same product from AMD, which utilized HBM2 memory. As far as I can see, this is the first time an AMD-built SoC has utilized GDDR memory for both the GPU and CPU outside of the designs used for Microsoft and Sony.
This custom built product will still support AMD and Radeon-specific features like FreeSync, the Radeon Software suite, and next-gen architecture features like Rapid Packed Math. It is being built at GlobalFoundries.
Though there are differences between the announced specs and the leaks that showed up online earlier in the year, they are pretty close. That story suggested the custom SoC would include a 28 CU GPU and HBM2. Perhaps another chip design is pending for a different customer, or, more likely, there were competing integrations and the announced version won out on cost efficiency.
Zhongshan Subor is a Chinese holding company that owns everything from retail stores to an education technology business. You might have heard its name in association with a glut of Super Famicom clones years back. I don't expect this new console to have anywhere near the reach of an Xbox or PlayStation, but with the size of the Chinese market, anything is possible if the content portfolio is there.
It is interesting that despite the aggressiveness of both Microsoft and Sony in the console space in regards to hardware upgrades this generation, this Chinese design will be the first to ship with a Zen-based APU, though it will lag behind the graphics performance of the Xbox One X (and probably PS4 Pro). Don’t be surprised if both major console players integrate a similar style of APU design with their next-generation products, pairing Zen with Vega.
Revenue for AMD from this arrangement is hard to predict, but it does get an upfront fee from any semi-custom chip customer for the design and validation of the product. There is no commitment to a minimum chip purchase, so AMD will see extended income only if the console and PC built around the APU succeed.
Enthusiasts and PC builders have already started questioning whether this is the type of product that might make its way to the consumer. The truth is that the market for a high-performance, fully-integrated SoC like this is quite small, with DIY and SI (system integrator) markets preferring discrete components most of the time. If we remove the GDDR5 integration, which is one of the key specs that makes the “Fenghuang” chip so interesting and expensive, I’d bet the 24 CU GPU would be choked by standard DDR4 DRAM. For now, don’t hold out hope that AMD takes the engineering work of this Chinese gaming product and applies it to the general consumer market.
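To put rough numbers behind that bandwidth concern, here is a sketch comparing a typical GDDR5 configuration against dual-channel DDR4; the bus widths and data rates are illustrative assumptions, not published "Fenghuang" specs:

```python
# Rough bandwidth comparison to illustrate why a 24 CU GPU would be
# starved on a conventional DDR4 setup. Figures are illustrative.

def bandwidth_gbps(data_rate_gtps, bus_width_bits):
    """Peak memory bandwidth in GB/s."""
    return data_rate_gtps * bus_width_bits / 8

# GDDR5 at 7 GT/s on a 256-bit bus (typical for a mid-range card)
gddr5 = bandwidth_gbps(7.0, 256)   # 224.0 GB/s

# Dual-channel DDR4-3200 (128-bit combined bus)
ddr4 = bandwidth_gbps(3.2, 128)    # 51.2 GB/s

print(round(gddr5 / ddr4, 1))  # GDDR5 offers roughly 4.4x the bandwidth
```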
Subject: Graphics Cards | July 30, 2018 - 03:32 PM | Ken Addison
Tagged: nvidia, geforce, gaming celebration, gamescom, cologne
Earlier today, NVIDIA announced the GeForce Gaming Celebration, taking place August 20th-21st, in Cologne, Germany.
NVIDIA promises that this open to the public event taking place before the Gamescom convention "will be loaded with new, exclusive, hands-on demos of the hottest upcoming games, stage presentations from the world’s biggest game developers, and some spectacular surprises."
For any readers that might be in the area and interested in attending, first come first served registration can be found here. For readers outside of the area, the event will also be live streamed.
PC Perspective will be attending the event, so stay tuned for more news and details! We can't possibly imagine what NVIDIA could be getting ready to announce.
Subject: Graphics Cards | July 22, 2018 - 03:10 PM | Scott Michaud
Tagged: nvidia, gtx 1170, geforce
Take these numbers with a grain of salt, but WCCFTech has published what it claims are leaked GeForce GTX 1170 benchmarks, found “on Polish hardware forums”. If true, the results show that the graphics card, which would sit below the GTX 1180 in performance, is still above the enthusiast-tier GTX 1080 Ti (at least in 3DMark Fire Strike). They also suggest that both the GPU core and the 16GB of memory are running at ~2.5 GHz.
Image Credit: “Polish Hardware Forums” via WCCFTech
So not only would the GTX 1180 be above the GTX 1080 Ti… but the GTX 1170 apparently is too? Also… 16GB on the second-tier card? Yikes.
Beyond the raw performance, new architectures also give NVIDIA the chance to add new features directly to the silicon. That said, FireStrike is an old-enough benchmark that it won’t take advantage of tweaks for new features, like NVIDIA RTX, so those should be above-and-beyond the increase seen in the score.
Don’t trust every screenshot you see…
Again, that is if this is true. The source is a picture of a computer monitor, which raises the question, “Why didn’t they just take a screenshot?” Beyond that, it’s easy to make a website say whatever you want with the F12 developer tools of any mainstream web browser these days… as I’ve demonstrated in the image above.