Subject: Graphics Cards | January 18, 2019 - 03:56 PM | Jeremy Hellstrom
Tagged: RTX 2060 Gaming Z, RTX 2060, nvidia, msi
At first glance, the MSI RTX 2060 Gaming Z is very similar to the Founders Edition, with only its 1830MHz boost clock justifying an MSRP $35 higher. Once TechPowerUp got into testing, they found that the custom cooler helps the card maintain peak speed far more effectively than the FE. It also let them hit an impressive manual overclock of 2055MHz on the boost clock and 1990MHz on the memory. Check out the results, as well as a complete teardown of the card, in the review.
"MSI's GeForce RTX 2060 Gaming Z is the best RTX 2060 custom-design we've reviewed so far. It comes with idle-fan-stop and a large triple-slot cooler that runs cooler than the Founders Edition. Noise levels are excellent, too, it's the quietest RTX 2060 card to date."
Here are some more Graphics Card articles from around the web:
- MSI RTX 2060 Gaming Z 6G @ Kitguru
- Nvidia GeForce RTX 2060 Founders Edition – Nvidia Unleashes RTX on The $350 Market @ Bjorn3d
- Using FreeSync with GeForce GPUs: How Well Does It Work? @ TechSpot
- Adrenalin Software Edition 19.1.1 Driver Performance Analysis @ BabelTechReviews
- Radeon RX 570 vs. RX 580 vs. GeForce GTX 1060 3GB vs. GTX 1060 6GB @ TechSpot
Subject: Graphics Cards | January 17, 2019 - 01:29 PM | Sebastian Peak
Citing word from a board partner, VideoCardz.com has published a rumor about an upcoming NVIDIA GeForce GPU based on Turing, but without ray tracing support. While such a product seems inevitable as we move further down the chain into midrange and mainstream graphics options, where ray tracing makes less sense from a performance standpoint, the name accompanying the report is harder to fathom: GTX 1660 Ti.
Image via VideoCardz.com
"The GeForce GTX 1660 Ti is to become NVIDIA’s first Turing-based card under GTX brand. Essentially, this card lacks ray tracing features of RTX series, which should (theoretically) result in a lower price. New SKU features TU116 graphics processor and 1536 CUDA cores. This means that GTX 1660 Ti will undoubtedly be slower than RTX 2060." - VideoCardz.com
Beyond the TU116 GPU and 1536 CUDA cores, VideoCardz goes on to state that their sources also claim that this new GTX card will still make use of GDDR6 memory on the same 192-bit bus as the RTX 2060. As to the name, while it may seem odd not to adopt the same 2000-series branding for all Turing cards, the potential for confusion with RTX vs GTX branding might be the reason - if indeed cards in a 1600-series make it to market.
Subject: Graphics Cards | January 16, 2019 - 04:33 PM | Jeremy Hellstrom
Tagged: linux, geforce, nvidia, ubuntu 18.04, gtx 760, gtx 960, RTX 2060, gtx 1060
If you are running an Ubuntu system with an older GPU and are curious about upgrading but unsure if it is worth it, Phoronix has a great review for you. Whether you are gaming with OpenGL and Vulkan or curious about the changes in OpenCL/CUDA compute performance, they have you covered. They even delve into the power efficiency numbers so you can spec out the operating costs of a large deployment, if you happen to have the budget to consider buying RTX 2060s in bulk.
"In this article is a side-by-side performance comparison of the GeForce RTX 2060 up against the GTX 1060 Pascal, GTX 960 Maxwell, and GTX 760 Kepler graphics cards."
Here are some more Graphics Card articles from around the web:
- MSI GeForce RTX 2060 Gaming Z (6G) @ Guru of 3D
- Palit GeForce RTX 2060 Gaming Pro OC 6 GB @ TechPowerUp
- EVGA GeForce RTX 2060 XC Ultra 6 GB @ TechPowerUp
Subject: Graphics Cards | January 15, 2019 - 03:25 AM | Jim Tanous
Tagged: variable refresh rate, nvidia, graphics driver, gpu, geforce, g-sync compatibility, g-sync, freesync
One of NVIDIA's biggest and most surprising CES announcements was the introduction of support for "G-SYNC Compatible Monitors," allowing the company's G-SYNC-capable Pascal and Turing-based graphics cards to work with FreeSync and other non-G-SYNC variable refresh rate displays. NVIDIA is initially certifying 12 FreeSync monitors but will allow users of any VRR display to manually enable G-SYNC and determine for themselves if the quality of the experience is acceptable.
Those eager to try the feature can now do so via NVIDIA's latest driver, version 417.71, which is rolling out worldwide right now. As of the date of this article's publication, users in the United States who visit NVIDIA's driver download page are still seeing the previous driver (417.35), but direct download links are already up and running.
The current list of FreeSync monitors that are certified by NVIDIA:
- Acer XFA240
- Acer XG270HU
- Acer XV273K
- Acer XZ321Q
- AOC Agon AG241QG4
- AOC G2590FX
- ASUS MG278Q
- ASUS XG248
- ASUS VG258Q
- ASUS XG258
- ASUS VG278Q
- BenQ XL2740
Users with a certified G-SYNC compatible monitor will have G-SYNC automatically enabled via the NVIDIA Control Panel when the driver is updated and the display is connected, the same process as connecting an official G-SYNC display. Those with a variable refresh rate display that is not certified must manually open the NVIDIA Control Panel and enable G-SYNC.
NVIDIA notes, however, that enabling the feature on displays that don't meet the company's performance capabilities may lead to a range of issues, from blurring and stuttering to flickering and blanking. The good news is that the type and severity of the issues will vary by display, so users can determine for themselves if the potential problems are acceptable.
Update: Users over at the NVIDIA subreddit have created a public Google Sheet to track their reports and experiences with various FreeSync monitors. Check it out to see how others are faring with your preferred monitor.
Subject: Graphics Cards | January 14, 2019 - 02:23 PM | Jeremy Hellstrom
Tagged: 3dmark, port royal, ray tracing
3DMark recently released an inexpensive update to its benchmarking suite to let you test your ray tracing performance; you can grab Port Royal for a few dollars from Steam. As there has been limited time to use the benchmark, as well as only a small sample of GPUs which can properly run it, it has not yet made it into most benchmarking suites. Bjorn3D took the time to install it on a decent system and tested the performance of the Titan and the five RTX cards available on the market.
As you can see, it is quite the punishing test; not even NVIDIA's flagship card can maintain 60fps.
"3DMark is finally updated with its newest benchmark designed specifically to test real time ray tracing performance. The benchmark we are looking at today is Port Royal, it is the first really good repeatable benchmark I have seen available that tests new real time ray tracing features."
Here are some more Graphics Card articles from around the web:
- NVIDIA GeForce RTX 2060 Linux Performance From Gaming To TensorFlow & Compute @ Phoronix
- Zotac GeForce RTX 2060 AMP 6 GB @ TechPowerUp
- ASUS ROG RTX 2060 Strix OC @ Kitguru
- Palit GeForce RTX 2060 GamingPro OC @ The Guru of 3D
- NVIDIA GeForce RTX 2060 Founders Edition 6 GB @ TechPowerUp
- Overclocking Showdown – the RTX 2060 vs. the Red Devil RX Vega 56 & vs. the GTX 1070 Ti @ BabelTechReviews
- RX 570 vs. GTX 1050 Ti: What's the best $150 GPU? @ Techspot
- OCC Unveils Its Best Graphics Cards Picks 2018 @ Overclockers Club
Subject: Graphics Cards | January 12, 2019 - 08:17 AM | Jim Tanous
Tagged: vega 64, Vega, RX VEGA 64, radeon vii, gpu, benchmarks, amd, 7nm
After announcing the Radeon VII this week at CES, AMD has quietly released its own internal benchmarks showing how the upcoming card potentially compares to the Radeon RX Vega 64, AMD's current flagship desktop GPU released in August 2017.
The internal benchmarks, compiled by AMD Performance Labs earlier this month, were released as a footnote in AMD's official Radeon VII press release and first noticed by HardOCP. AMD tested 25 games and 4 media creation applications, with the Radeon VII averaging around a 29 percent improvement in games and 36 percent improvement in professional apps.
AMD's test platform for its gaming Radeon VII benchmarks was an Intel Core i7-7700K with 16GB of DDR4 memory clocked at 3000MHz running Windows 10 with AMD Driver version 18.50. CPU frequencies and exact Windows 10 version were not disclosed. AMD states that all games were run at "4K max settings" with reported frame rate results based on the average of three separate runs each.
For games, the Radeon VII benchmarks show a wide performance delta compared to RX Vega 64, from as little as 7.5 percent in Hitman 2 to as much as 68.4 percent for Fallout 76. Below is a chart created by PC Perspective from AMD's data of the frame rate results from all 25 games.
In terms of media creation applications, AMD changed its testing platform to the Ryzen 7 2700X, also paired with 16GB of DDR4 at 3000MHz. Again, exact processor frequencies and other details were not disclosed. The results reveal between a 27% and 62% improvement:
It is important to reiterate that the data presented in the above charts is from AMD's own internal testing, and should therefore be viewed skeptically until third party Radeon VII benchmarks are available. However, these benchmarks do provide an interesting first look at potential Radeon VII performance compared to its predecessor.
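The uplift figures AMD reports are simple ratios of average frame rates. As a quick illustration of how such deltas are derived, here is a minimal sketch using hypothetical fps values (not AMD's actual data):

```python
def pct_uplift(new_fps: float, old_fps: float) -> float:
    """Percentage improvement of new_fps over old_fps."""
    return (new_fps / old_fps - 1.0) * 100.0

# Hypothetical averages of three runs for one title (illustrative only)
vega64_fps = 40.0
radeon7_fps = 51.6
print(f"{pct_uplift(radeon7_fps, vega64_fps):.1f}%")  # 29.0%
```

The 7.5 to 68.4 percent spread in AMD's numbers comes from applying exactly this ratio per game, then averaging across the 25 titles.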
Radeon VII is scheduled to launch February 7, 2019 with an MSRP of $699. In addition to the reference design showcased at CES, AMD has confirmed that third party Radeon VII boards will be available from the company's GPU partners.
Subject: Graphics Cards | January 11, 2019 - 04:35 PM | Sebastian Peak
Tagged: video cards, Vega VII, Vega, Refresh, radeon, Mark Papermaster, graphics, gpus, cto, amd, 7nm
AMD CTO Mark Papermaster spoke with The Street in a video interview published yesterday, where he made it clear that we can indeed expect a new Radeon lineup this year. “It’s like what we do every year,” he said, “we’ll round out the whole roadmap”.
Part of this refresh has already been announced, of course, as Papermaster noted, “we’re really excited to start on the high end” (speaking about the Radeon VII) and he concluded with the promise that “you’ll see the announcements over the course of the year as we refresh across our Radeon roadmap”. It was not mentioned if the refreshed lineup will include 7 nm parts derived from the Vega VII shown at CES, but it seems reasonable to assume that we haven’t seen the last of Vega 2 in 2019.
Subject: Graphics Cards | January 9, 2019 - 01:14 PM | Jeremy Hellstrom
Tagged: radeon 7, ces 2019, amd
AMD is still mid-keynote but that's no reason not to start filling you in on what we know, especially since the CES gang just got a free copy of The Division 2 so we may not see them for a while.
The new Radeon 7 looks similar to the Vega series but offers improved performance, especially at 4K resolutions. According to AMD's internal benchmarks, you will see a noticeable improvement over the Vega series in a number of games.
The new card is not just for gaming, they also showed a slide covering the increases you can expect on a variety of creative software.
As far as the specifications go, we know the card will feature 60 CUs (3840 Stream Processors), an impressive 16GB of HBM2 memory, and a 1.8GHz GPU clock. It will require a pair of 8-pin PCIe power connectors to drive all of that.
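Those CU and clock figures imply the card's peak throughput, assuming the standard GCN arithmetic of 64 stream processors per CU and one FMA (two FP32 ops) per SP per clock; a quick sketch:

```python
cus = 60
sp_per_cu = 64      # GCN: 64 stream processors per compute unit
boost_ghz = 1.8

sps = cus * sp_per_cu                  # 60 * 64 = 3840 stream processors
tflops = sps * 2 * boost_ghz / 1000.0  # 2 FP32 ops (one FMA) per SP per clock
print(sps, round(tflops, 2))           # 3840 13.82
```

That works out to roughly 13.8 peak FP32 TFLOPS if the card sustains 1.8GHz, comfortably above the RX Vega 64's 12.5 TF.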
The card will be available on Feb 7th for an MSRP of $699, with a free copy of The Division 2 for as long as supplies last; you can also enjoy that deal on select Ryzen chips. That places it under the cost of NVIDIA's top GPUs but significantly above the new RTX 2060, and we still have to see where it sits in the benchmark charts!
3rd-Gen Ryzen CPUs Coming
The new third generation Ryzen uses AMD's chiplet design, with a smaller core chiplet and a large IO chip. Code-named Matisse, the 7nm Zen 2 desktop parts are not yet ready for release, with final clock speeds not announced. (AnandTech was able to go a little deeper into the matter before the announcement, and they offer some analysis of the feasibility of adding another chiplet to the package and meeting the 16-core number some expected based on the rumors we saw prior to this event. Ed.)
Dr. Su did not share much information about the new chip with us on stage, though we know it may pull less power than a Core i9, at least in Cinebench. Owners of AM4 boards can rest assured that upgrading to the new chips will be as easy as a BIOS update, as the socket will indeed remain the same.
Expect more coverage as we catch up!
Subject: Graphics Cards | January 8, 2019 - 05:46 PM | Jeremy Hellstrom
Tagged: visontek, thunderbolt 3, external gpu, ces 2019
VisionTek announced a new external GPU enclosure today at CES, costing $350 for the enclosure and 240W power (cinder) block. The dual Thunderbolt 3 controllers provide enough bandwidth for your GPU as well as a pair of USB 3.0 ports, Gigabit ethernet and a SATA III port.
At 8.5x6x2.75" it is small enough to be easily accommodated on a desk while still able to hold a variety of cards, up to about the size of an AMD Vega 56 8GB. It is compatible with Windows 10 and OS X and will let you string together up to six 4K displays @ 60fps. You can see the full PR below.
The VisionTek Thunderbolt 3 Mini eGFX Enclosure is ideal for creative professionals, IT/enterprise power users, professional users in healthcare, finance, scientific research labs, etc., and gaming enthusiasts seeking the ultimate GPU performance improvement to Thunderbolt 3 equipped laptops. Delivering up to 40Gbps of bandwidth, Thunderbolt 3 is the industry’s fastest interface that is rapidly becoming popular on new generation of laptops and mini PCs. VisionTek’s Mini eGFX combined with a graphics add-in card significantly boosts the GPU performance of a Thunderbolt 3 enabled laptop, via a plug and play connection to the enclosure.
Sleek, Portable, and Future Proof for the Most Demanding Applications
The VisionTek Thunderbolt 3 Mini eGFX Enclosure combines a sleek and portable design that easily fits discreetly on a desk, or hidden away, to handle all your graphic intensive applications. VisionTek’s Mini eGFX Enclosure can be plugged into any Thunderbolt 3 enabled laptop or mini-PC to accelerate the most demanding 3D intensive software programs. Best of all, the Mini eGFX enclosure can be upgraded to perfectly match the application’s performance requirements. Consumers have the option of selecting from many mini ITX cards or standard compatible graphic card models for the Mini eGFX to optimize GPU processing requirements for each user’s specific needs.
"With the launch of the Mini eGFX external enclosure, users can turbocharge their Thunderbolt 3 enabled laptops with cutting edge discrete GPU add-in cards on the fly," said Michael Innes, President, VisionTek Products, LLC. “VisionTek embraces technology innovations from Intel that enhance the way we utilize our GPU technology to increase the efficiency, performance, and resolution of 3D visual PC applications.” “The VisionTek Thunderbolt 3 Mini eGFX enclosure is one of the most compact, yet flexible eGFX enclosures available in the market today,” said Jason Ziller, General Manager, Client Connectivity Division at Intel. “With this solution, VisionTek can broadly address the many professional graphics verticals, enterprise and consumer gaming markets as it can be easily configured to fit the needs of the customer with many inter-changeable graphic cards available.”
Expansive Selection of Laptop Compatibility
Thunderbolt 3 enabled laptops and mini-PCs connected to the new VisionTek Mini eGFX dock enclosure drives the most demanding 3D graphic intensive applications. Visiontek is proud to announce compatibility and availability with many new Thunderbolt 3 equipped laptops and mini-PCs in 2019. The speed, reliability, efficiency and compact size of VisionTek’s Mini eGFX Enclosure eliminates limitations of a laptop or mini-PC environment and opens possibilities for the perfect combination of portability and performance when required.
Select from a wide performance range of add-in graphics cards certified by VisionTek to improve 3D imaging rendering, 4K HD applications, video editing, run multi-monitor displays, improve PC gaming, and more. The VisionTek Mini eGFX is designed to fit most mini ITX discrete graphics cards, as well as select reference card designs. Visit VisionTek’s product page for the most current list of graphics cards recommended. Benefits of the VisionTek Thunderbolt 3 Mini eGFX Enclosure for external GPUs include:
- Compact Design – Form & function collide to create one of the most compact and flexible enclosure designs in the industry to accommodate a variety of graphics cards (enclosure dimensions: 8.5” x 6” x 2.75”)
- 3D Graphics Performance – Whether you’re rendering complex 3D images or playing intense first-person shooters, the eGFX enclosure supports a wide range of mini ITX size graphics cards.
- Power – 240W of dedicated power is provided with the VisionTek Mini eGFX Enclosure.
- Multiple Displays – Supports up to six 4K displays @ 60fps from laptops & mini PC’s. Scalable to the needs of the user with the addition of a graphics card to fit the application’s needs.
- Maximum 3D Resolution Control –Set limits using the GPU’s proprietary firmware controls to customize resolution settings, enhance 3D performance, and assign multi-monitor layouts.
- Additional High-Speed USB 3.0, Ethernet Connection, and SATA III Port – The design uses a second Thunderbolt controller with PCIe-to-USB and PCIe-to-LAN controllers to provide Two (2) additional USB 3.0 ports that are conveniently accessible on the front panel of the eGFX enclosure and one (1) RJ45 ethernet Gigabit LAN connection located on the back side of the enclosure.
Subject: Graphics Cards | January 7, 2019 - 04:34 PM | Jeremy Hellstrom
Tagged: video card, turing, tu106, RTX 2060, rtx, nvidia, graphics card, gpu, gddr6, gaming
After months of rumours and guesses as to what the RTX 2060 would actually offer, we finally know. It is built on the same TU106 GPU as the RTX 2070 and sports somewhat similar core clocks, though the cuts to Tensor Cores, ROPs, and texture units reduce it to producing a mere 5 GigaRays. The memory is rather different, with 6GB of GDDR6 connected via a 192-bit bus offering 336.1 GB/s of bandwidth. As you saw in Sebastian's testing, the overall performance is better than you would expect from a mid-range card, but at the cost of a higher price.
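That bandwidth figure is easy to sanity check: GDDR6 on this card runs at an effective 14 Gbps per pin, so the nominal number falls out of the bus width directly (the 336.1 GB/s quoted in reviews rounds the same calculation):

```python
bus_width_bits = 192   # RTX 2060 memory bus width
gbps_per_pin = 14      # GDDR6 effective data rate per pin

# bits / 8 = bytes transferred per effective clock across the bus
bandwidth_gbs = bus_width_bits / 8 * gbps_per_pin
print(bandwidth_gbs)   # 336.0
```

The same arithmetic gives the RTX 2070's 448 GB/s from its 256-bit bus at the same 14 Gbps.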
If we missed out on your favourite game, check the Guru of 3D's suite of benchmarks or one of the others below.
"NVIDIA today announced the GeForce RTX 2060, the graphics card will be unleashed next week the 15th at a sales price of 349 USD / 359 EUR. Today, however, we can already bring you a full review of what is a pretty feisty little graphics card really."
Here are some more Graphics Card articles from around the web:
- NVIDIA GeForce RTX 2060 FE Review @ Legit Reviews
- RTX 2060 Review with 39 games @ BabelTechReviews
- NVIDIA Geforce RTX 2060 Founders Edition Review @ OCC
- Nvidia RTX 2060 Founders Edition 6GB @ Kitguru
- Battlefield V NVIDIA Ray Tracing RTX 2080 @ [H]ard|OCP
- The GPU Compute Performance From The NVIDIA GeForce GTX 680 To TITAN RTX @ Phoronix
- The HD 7970 vs. the GTX 680 – revisited after 7 years @ BabelTechReviews
Subject: Graphics Cards | January 7, 2019 - 02:46 AM | Jim Tanous
Tagged: rtx mobile, RTX 2080, RTX 2070, RTX 2060, rtx, nvidia, max-q, gaming laptop, ces2019
NVIDIA just wrapped up its CES keynote, and in addition to the expected unveiling of the RTX 2060, the company announced new mobile GeForce RTX options. More than 40 upcoming laptops, including 17 sporting NVIDIA’s Max-Q design, will offer RTX 2080, RTX 2070, and RTX 2060 graphics options.
NVIDIA CEO Jensen Huang likened GeForce RTX-powered laptops to a gaming console platform, repeatedly drawing performance comparisons to traditional game consoles like the PlayStation 4.
Laptops are the fastest growing gaming platform — and just getting started. The world’s top OEMs are using Turing to bring next-generation console performance to thin, sleek laptops that gamers can take anywhere. Hundreds of millions of people worldwide — an entire generation — are growing up gaming. I can’t wait for them to experience this new wave of laptops.
New GeForce RTX laptops will continue to support features like WhisperMode, which paces frame rates for AC-connected laptops to reduce heat and therefore fan noise, NVIDIA Battery Boost, which uses GeForce Experience to optimize performance for longer battery life, and of course G-SYNC.
Beyond gaming, NVIDIA is touting the benefits of the RTX platform for content creators, such as real-time video encoding for live streamers, faster rendering for video editors, and accurate interactive lighting, reflections, and shadows for animators.
Laptops sporting GeForce RTX cards will be available starting January 29th from NVIDIA partners including Acer, Alienware, ASUS, Dell, Gigabyte, HP, Lenovo, MSI, Razer, and Samsung. Pricing, detailed configuration options, and exact availability will vary and is not yet available for all manufacturers.
Subject: Graphics Cards | January 7, 2019 - 01:59 AM | Sebastian Peak
Tagged: video card, RTX 2060, rtx, ray tracing, nvidia, graphics, gpu, geforce, ces 2019, CES
On stage at an event tonight at CES 2019, NVIDIA CEO Jensen Huang made it official: the RTX 2060 exists and will be available this month. The card is priced at $349 and is based on the same Turing architecture as the rest of the RTX family.
The RTX 2060 was announced with 6GB of GDDR6 memory, and like its bigger siblings it offers ray tracing support (with 240 Tensor Cores onboard); NVIDIA targets 60 FPS performance with ray tracing enabled in Battlefield V:
"The RTX 2060 is 60 percent faster on current titles than the prior-generation GTX 1060, NVIDIA’s most popular GPU, and beats the gameplay of the GeForce GTX 1070 Ti. With Turing’s RT Cores and Tensor Cores, it can run Battlefield V with ray tracing at 60 frames per second."
That 60% increase comes from benchmarks the company ran using 2560x1440 resolution, and the RTX 2060 is targeting resolutions from the mainstream 1920x1080 up to 2560x1440, though with performance between a GTX 1070 and 1080 the RTX 2060 could very well support 3840x2160 gaming at medium-to-high settings as well.
The official launch of the RTX 2060 is January 15 from add-in partners, as well as a Founders Edition card from NVIDIA beginning on that date. NVIDIA is also launching a new bundle deal. Qualifying RTX 2060 purchasers, either as a standalone card or as part of a desktop including the RTX 2060, can choose to receive either Battlefield V or the upcoming Anthem for free.
Stay tuned for more details on the GeForce RTX 2060 soon.
Subject: Graphics Cards | January 2, 2019 - 12:34 PM | Sebastian Peak
Tagged: pascal, overclocking, OC Scanner, nvidia, GTX 1080, gtx 1070, gtx 1060, geforce
GPU overclocking utility MSI Afterburner now supports automatic Pascal overclocking, bringing this feature to the GTX 10-series for the first time. NVIDIA had previously offered the OC Scanner only for the Turing-based RTX graphics cards (we compared OC Scanner vs. manual results using a previous version in our MSI GeForce RTX 2080 Gaming X Trio review), but a new version of the API is incorporated in Afterburner v4.6.0 beta 10.
"If you purchased a GeForce GTX 1050, 1060, 1070, 1080, Titan X, Tian Xp, Titan V (Volta) or AMD Radeon RX 5x0 and Vega graphics card we can recommend you to at least try out this latest release. We have written a GeForce GTX 1070 and 1080 overclocking guide right here. This is the new public final release of MSI AfterBurner. Over the past few weeks we have made a tremendous effort to get a lot of features enabled for this build."
The release notes are massive for this latest version, and you can view them in full after the break.
Subject: Graphics Cards | January 1, 2019 - 12:41 AM | Tim Verry
Tagged: turing, tu106, RTX 2060, nvidia, gaming
Videocardz recently released information on the NVIDIA RTX 2060 that sheds more light on the rumored card. Reportedly sourced from a copy of the official reviewer's guide, Videocardz claims that they are now able to confirm the specifications of the RTX 2060 including 1920 CUDA cores, 240 tensor cores, 30 ray tracing cores, and 6GB GDDR6 memory.
Graphics cards using the TU106-300 GPU will be available in stock and factory overclocked designs with the NVIDIA reference or AIB custom coolers. Display outputs include DVI, HDMI, and DisplayPort.
| | RTX 2060 | RTX 2070 | GTX 1070 Ti | RX Vega 64 | RX Vega 56 |
|---|---|---|---|---|---|
| GPU | TU106-300 | TU106-400 | GP104 | Vega 10 | Vega 10 |
| CUDA cores | 1920 | 2304 | 2432 | 4096 SPs | 3584 SPs |
| Memory | 6GB GDDR6 | 8GB GDDR6 | 8GB GDDR5 | 8GB HBM2 | 8GB HBM2 |
| SP Compute | 6.5 TF | 7.5 TF | 7.8 TF | 12.5 TF (13.7 AIO) | 10.5 TF |
| Base clock | 1365 MHz | 1410 MHz | 1607 MHz | 1200 MHz (1406 AIO) | 1156 MHz |
| Boost clock | 1680 MHz | 1710 MHz (FE) | 1683 MHz | 1546 MHz (1677 AIO) | 1471 MHz |
| Memory clock | 14000 MHz | 14000 MHz | 8000 MHz | 1890 MHz | 1600 MHz |
| Launch MSRP | $349 | $499 ($599 FE) | $449 | $499 | $399 |
| Pricing 1-1-19 | ? | $500+ | $405+ | $400+ ($500+ AIO) | $470+ (?) |
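The SP compute row can be cross-checked with the usual peak-FP32 arithmetic (cores × 2 FLOPs per clock × clock speed); note that which clock a vendor quotes varies, so for example the 1070 Ti's 7.8 TF matches its base clock rather than its boost:

```python
def fp32_tflops(cores: int, clock_mhz: float) -> float:
    # Peak single-precision: each core retires one FMA (2 FLOPs) per clock
    return cores * 2 * clock_mhz / 1e6

print(round(fp32_tflops(1920, 1680), 2))  # RTX 2060 at boost: 6.45
print(round(fp32_tflops(2432, 1607), 2))  # GTX 1070 Ti at base: 7.82
```

Both results round to the table's 6.5 TF and 7.8 TF figures.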
Allegedly, the RTX 2060 will offer up performance comparable to last generation's GTX 1070 Ti in 1080p and 1440p gaming scenarios. In a couple of games the card even gets close to the GTX 1080, but in most of the titles listed by Videocardz (from the alleged reviewer's guide) the new GPU comes in slightly faster or slightly slower than the 1070 Ti depending on the specific game. The RTX 2060 and its 30 RT cores can reportedly pull off a playable 65 FPS in Battlefield V even with RTX enabled, with performance looking better at 88 FPS with DLSS turned on, compared to 90 FPS with RTX off. Granted, that is Battlefield V at 1080p rather than the 1440p or 4K that the beefier RTX cards can push out.
When it comes to pricing, the RTX 2060 will have an MSRP of $349, with AIB and Founders Edition cards at the same level. RTX 2060 graphics cards are slated to be announced on January 7th and will be available as soon as January 15th. If true, we will not have long to wait until it is official and reviews are unveiled.
If you are curious about the rumored performance, check out the charts Videocardz uncovered.
Subject: Graphics Cards | December 24, 2018 - 03:19 PM | Jeremy Hellstrom
Tagged: sea hawk, RTX 2080, overclocking, msi
[H]ard|OCP takes a look at MSI's Sea Hawk RTX 2080, which sports a GPU covered by an AiO watercooler as well as a blower fan to ensure the memory and VRM are actively cooled as well. The design of the cooler also slims the card so you don't need to worry about the spacing between your PCIe slots as with some other coolers. Without any work whatsoever, you can expect an average 1954MHz GPU clock, 2040MHz with a bit of a power boost or 2060MHz if you don't mind the noise produced by fans spinning at 100%. The VRMs did prove a little finicky as you can see in the full review.
"MSI sent over its new Sea Hawk RTX 2080 card for use in a build video. This is a fair simple RTX card build that is purchased with a pre-installed All-In-One cooler. We wanted to see how well it overclocked and spent a night of gaming in order to do that and we have to say we were pleased with our results."
Here are some more Graphics Card articles from around the web:
- MSI GeForce RTX 2080 Duke 8G OC @ Modders-Inc
- ZOTAC GAMING GeForce RTX 2080 Ti AMP Extreme @ Guru of 3D
- ZOTAC RTX 2080 AMP Extreme Video Card Review @ Hardware Asylum
- ASUS GeForce RTX 2070 STRIX OC 8 GB @ TechPowerUp
- Zotac Gaming RTX 2070 OC Mini 8GB @ Kitguru
- KFA2 GeForce GTX 1060 6 GB GDDR5X @ TechPowerUp
- Initial Linux Benchmarks Of The NVIDIA TITAN RTX Graphics Card For Compute & Gaming @ Phoronix
- OCC NVIDIA RTX 2080 Overclocking Guide
- Battlefield V NVIDIA Ray Tracing RTX 2070 Performance @ [H]ard|OCP
- NVIDIA DLSS Test in Final Fantasy XV @ TechPowerUp
Subject: Graphics Cards | December 21, 2018 - 02:00 PM | Jim Tanous
Tagged: physx 4.0, PhysX, open source, nvidia
As promised in the company's initial announcement earlier this month, NVIDIA has released the newly open-sourced PhysX 4.0 SDK via GitHub. Now, thanks to its 3-Clause BSD license, any game developer, hardware company, or coding enthusiast can grab the latest version of NVIDIA's realtime physics engine and tinker, improve, or implement it in hopefully creative new ways.
The one limitation, of course, is that in its current form PhysX 4.0 (and version 3.4, which is now open source, too) still references lots of NVIDIA's closed source APIs, notably CUDA. But with the PhysX framework now available to fork, there's nothing to stop an eager company or programmer from creating and implementing their own alternatives to NVIDIA's proprietary tech.
In addition to going open source, PhysX 4.0 introduces a number of new features as outlined on NVIDIA's developer site:
- Temporal Gauss-Seidel Solver (TGS), which makes machinery, characters/ragdolls, and anything else that is jointed or articulated much more robust. TGS dynamically re-computes constraints with each iteration, based on bodies’ relative motion.
- The new reduced coordinate articulations feature makes the simulation of joints possible with no relative position error and realistic actuation.
- New automatic multi-broad phase.
- Increased scalability with new filtering rules for kinematics and statics.
- Actor-centric scene queries significantly improve performance for actors with many shapes.
- Build system now based on CMake.
BSD 3 licensed platforms:
- Apple iOS
- Apple MacOS
- Google Android ARM
- Microsoft Windows
Unchanged NVIDIA EULA platforms:
- Microsoft XBox One
- Sony Playstation 4
- Nintendo Switch
Subject: Graphics Cards, Memory | December 17, 2018 - 04:33 PM | Sebastian Peak
Tagged: Vega, radeon, JESD235, jedec, high bandwidth memory, hbm, DRAM, amd
In a press release today JEDEC has announced an update to the HBM standard, with potential implications for graphics cards utilizing the technology (such as an AMD Radeon Vega 64 successor, perhaps?).
"This update extends the per pin bandwidth to 2.4 Gbps, adds a new footprint option to accommodate the 16 Gb-layer and 12-high configurations for higher density components, and updates the MISR polynomial options for these new configurations."
Original HBM graphic via AMD
The revised spec brings the JEDEC standard up to the level we saw with Samsung's "Aquabolt" HBM2 and its 307.2 GB/s per-stack bandwidth, but with 12-high TSV stacks (up from 8) which raises memory capacity from 8GB to a whopping 24GB per stack.
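Those headline numbers follow directly from the spec's figures: a 1024-bit stack interface at 2.4 Gbps per pin, and 16 Gb DRAM layers stacked 12 high. A quick back-of-envelope check:

```python
pins = 1024          # HBM stack interface width in bits (JESD235B)
gbps_per_pin = 2.4   # updated per-pin data rate

# bits per second across the interface, divided by 8 for bytes
stack_bw_gbs = pins * gbps_per_pin / 8

layers = 12          # new 12-high TSV stack option
gb_per_layer = 16    # gigabits per DRAM layer
stack_capacity_gb = layers * gb_per_layer / 8

print(stack_bw_gbs, stack_capacity_gb)  # 307.2 24.0
```

The same math with the previous 8-high stacks of 8 Gb layers gives the old 8GB-per-stack ceiling.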
The full press release from JEDEC follows:
ARLINGTON, Va., USA – DECEMBER 17, 2018 – JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced the publication of an update to JESD235 High Bandwidth Memory (HBM) DRAM standard. HBM DRAM is used in Graphics, High Performance Computing, Server, Networking and Client applications where peak bandwidth, bandwidth per watt, and capacity per area are valued metrics to a solution’s success in the market. The standard was developed and updated with support from leading GPU and CPU developers to extend the system bandwidth growth curve beyond levels supported by traditional discrete packaged memory. JESD235B is available for download from the JEDEC website.
JEDEC standard JESD235B for HBM leverages Wide I/O and TSV technologies to support densities up to 24 GB per device at speeds up to 307 GB/s. This bandwidth is delivered across a 1024-bit wide device interface that is divided into 8 independent channels on each DRAM stack. The standard can support 2-high, 4-high, 8-high, and 12-high TSV stacks of DRAM at full bandwidth to allow systems flexibility on capacity requirements from 1 GB – 24 GB per stack.
This update extends the per pin bandwidth to 2.4 Gbps, adds a new footprint option to accommodate the 16 Gb-layer and 12-high configurations for higher density components, and updates the MISR polynomial options for these new configurations. Additional clarifications are provided throughout the document to address test features and compatibility across generations of HBM components.
Subject: Graphics Cards | December 17, 2018 - 03:24 PM | Sebastian Peak
Tagged: rumor, report, nvidia, leak, GTX 2060, graphics, gpu, geforce, gaming
We've been hearing rumors about a GeForce RTX 2060 since at least August, when screen captures of a reported mid-range Turing card (then assumed to be a "GTX" 2060) - seemingly with GTX 1080 levels of performance - first surfaced.
Then in November there was the reported Final Fantasy XV benchmark leak, showing performance a little below a GTX 1070 with the game running at 3840x2160 (high quality preset) - but this was possibly the mobile SKU according to leaker APISAK on Twitter.
A week or so ago we saw an image of a Gigabyte card from VideoCardz.com which the site said was the RTX 2060:
Image via VideoCardz.com
"Our sources at Gigabyte have confirmed GeForce RTX 2060 graphics card launching soon. The card features TU106 GPU with 1920 CUDA cores and 6GB of GDDR6 memory. The model pictured below is factory-overclocked, but the exact clock remains unconfirmed." (Source: VideoCardz)
It seems fair to assume that a launch is imminent, with reports of a potential announcement the second week of January which may or may not coincide with CES 2019. As to final specs and pricing? Let the speculation commence!
Subject: Graphics Cards | December 13, 2018 - 09:01 AM | Jim Tanous
Tagged: Radeon Software Adrenalin Edition, radeon software, radeon, gpu, drivers, amd, Adrenalin Edition
AMD today released the latest major update to its Radeon software and driver suite. Building on the groundwork laid last year, AMD Radeon Software Adrenalin 2019 Edition brings a number of new features and performance improvements.
With this year’s software update, AMD continues to make significant gains in game performance compared to last year’s driver release, with an average gain of up to 15 percent across a range of popular titles. Examples include Assassin’s Creed Odyssey (11%), Battlefield V (39%), and Shadow of the Tomb Raider (15%).
Beyond performance, Adrenalin 2019 Edition introduces a number of new and improved features. Highlights include:
Game Streaming: Radeon gamers can now stream any game or application from their PCs to their mobile devices via the AMD Link app at up to 4K 60fps. The feature supports both on-screen controls as well as Bluetooth controllers. ReLive streaming is also expanding to VR, with users able to stream games and videos from their PCs to standalone VR headsets via new AMD VR store apps. This includes Steam VR titles, allowing users to play high-quality PC-based VR games on select standalone headsets. AMD claims that its streaming technology offers “up to 44% faster responsiveness” than other game streaming solutions.
ReLive Streaming and Sharing: Gamers more interested in streaming their games to other people will find several new features in AMD’s ReLive feature, including adjustable picture-in-picture instant replays from 5 to 30 seconds, automatic GIF creation, and a new scene editor with more stream overlay options and hotkey-based scene transition control.
Radeon Game Advisor: A new overlay available in-game that helps users designate their target experience (performance vs. quality) and then recommends game-specific settings to achieve that target. Since the tool is running live alongside the game, it can respond to changes as they occur and dynamically recommend updated settings and options.
Radeon Settings Advisor: A new tool in the Radeon Software interface that scans system configuration and settings and recommends changes (e.g., enabling or disabling Radeon Chill, changing the display refresh rate, enabling HDR) to achieve an optimal gaming experience.
WattMan One-Click Tuning Improvements: Radeon WattMan now supports automatic tuning of memory overclocking, GPU undervolting, expanded fan control options, and unlocked DPM states for RX Vega series cards.
Display Improvements: FreeSync 2 can now tone-map HDR content to look better on displays that don’t support the full color and contrast of the HDR spec, and AMD’s Virtual Super Resolution feature is now supported on ultra-wide displays.
Radeon Overlay: AMD’s Overlay feature, which allows gamers to access certain Radeon features without leaving their game, has been updated to display system performance metrics, WattMan configuration options, Radeon Enhanced Sync controls, and the aforementioned Game Advisor.
AMD Link: AMD’s mobile companion app now offers easier setup via QR code scanning, voice control of various Radeon and ReLive settings (e.g., start/stop streaming, save replay, take screenshot), WattMan controls, enhanced performance metrics, and the ability to initiate a Radeon Software update.
Radeon Software Adrenalin 2019 Edition is available now from AMD’s support website for all supported AMD GPUs.
Subject: Graphics Cards, Mobile | December 12, 2018 - 10:04 PM | Tim Verry
Tagged: turing, rumor, RTX 2070, RTX 2060, nvidia
Rumors have appeared online that suggest NVIDIA may be launching mobile versions of its RTX 2070 and RTX 2060 GPUs based on its new Turing architecture. The new RTX 2070 and RTX 2060 with Max-Q designs were leaked by Twitter user TUM_APISAK who posted cropped screenshots of Geekbench 4.3.1 and 3DMark 11 Performance results.
Allegedly handling graphics duties in a Lenovo 81HE, the GeForce RTX 2070 with Max-Q Design (8GB VRAM), paired with a Core i7-8750H Coffee Lake six-core CPU and 32 GB of system memory, managed a Geekbench 4.3.1 score of 223,753. The GPU supposedly has 36 Compute Units (CUs) and a core clock speed of 1,300 MHz. The desktop RTX 2070, which is already available, also has 36 CUs with 2,304 CUDA cores, 144 texture units, 64 ROPs, 288 Tensor cores, and 36 RT (ray tracing) cores, along with a 175W reference (non-FE) TDP and clocks of 1410 MHz base and 1680 MHz boost (1710 MHz for the Founders Edition). Assuming that 36 CU number is accurate, the mobile part (RTX 2070M) may well have the same core counts, just running at lower clocks - which would be nice to see, but would require a beefy mobile cooling solution.
As for the RTX 2060 Max-Q Design graphics processor, fewer specifications were leaked, as the leak was limited to two screenshots allegedly from Final Fantasy XV's benchmark results page comparing a desktop RTX 2060 with a Max-Q RTX 2060. The number of CUs (and other figures like CUDA/Tensor/RT cores, TMUs, and ROPs) was not revealed in those screenshots, for example. The comparison does lend further credence to rumors of the RTX 2060 using 6 GB of GDDR6 memory, though. Tom's Hardware does have a screenshot showing the RTX 2060 with 30 CUs, which suggests 1,920 CUDA cores, 240 Tensor cores, and 30 RT cores, with clocks up to 1.2 GHz (which meshes well with previous rumors about the desktop part).
| Graphics Card | Generic VGA | Generic VGA |
| --- | --- | --- |
| Memory | 6144 MB | 6144 MB |
| Core clock | 960 MHz | 975 MHz |
| Memory clock | 1750 MHz | 1500 MHz |
| Driver name | NVIDIA GeForce RTX 2060 | NVIDIA GeForce RTX 2060 with Max-Q Design |
Also, the TU106-based RTX 2060 with Max-Q Design reportedly has a 975 MHz core clock and a 1500 MHz (6 GHz effective) memory clock. Note that the 960 MHz core clock and 1750 MHz (7 GHz effective) memory clock of the desktop card do not match previous RTX 2060 rumors, which suggested higher GPU clocks in particular (up to 1.2 GHz). To be fair, it could just be the software reporting incorrect numbers because the GPUs are not yet official. One final bit of leaked information was a note about 3DMark 11 performance, with the RTX 2060 Max-Q Design GPU hitting at least 19,000 in the benchmark's Performance preset, which allegedly puts it between the scores of the mobile GTX 1070 and the mobile GTX 1070 Max-Q. (A graphics score between nineteen and twenty thousand would put it a bit above a desktop GTX 1060 but well below the desktop GTX 1070.)
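The core counts above follow from how Turing allocates resources per SM (what Geekbench reports as a "Compute Unit"): 64 CUDA cores, 8 Tensor cores, and 1 RT core per SM. A small sketch shows how the rumored 30-CU figure maps to the leaked numbers, and how the same math reproduces the known desktop RTX 2070 specs:

```python
# Per-SM resource counts for NVIDIA's Turing architecture.
CUDA_PER_SM = 64
TENSOR_PER_SM = 8
RT_PER_SM = 1

def turing_core_counts(sm_count):
    """Return (CUDA cores, Tensor cores, RT cores) for a given SM count."""
    return (sm_count * CUDA_PER_SM,
            sm_count * TENSOR_PER_SM,
            sm_count * RT_PER_SM)

# Rumored RTX 2060 with 30 SMs ("CUs"):
print(turing_core_counts(30))  # (1920, 240, 30)

# Known desktop RTX 2070 with 36 SMs:
print(turing_core_counts(36))  # (2304, 288, 36)
```

The 36-SM result matching the desktop RTX 2070's published specs is what makes the 30-CU screenshot a plausible basis for the 1,920-core estimate.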
As usual, take these rumors and leaked screenshots with a healthy helping of salt, but they are interesting nonetheless. Combined with the news about NVIDIA possibly announcing new mid-range GPUs at CES 2019, we may well see new laptops and other mobile graphics solutions shown off at CES and available within the first half of 2019, which would be quite the coup.
What are your thoughts on the rumored RTX 2060 for desktops and its mobile RTX 2060 and RTX 2070 Max-Q siblings?