Subject: General Tech | December 13, 2018 - 05:31 AM | Jim Tanous
Tagged: Zen 2, Sunny Cove, snapdragon, ryzen 3, ray tracing, radeon pro, podcast, Optane, Intel, edge, chromium, amd, 3dmark
PC Perspective Podcast #525 - 12/12/2018
Our podcast this week features discussion of the new Intel Sunny Cove architecture, Ryzen 3 rumors, the high-end Snapdragon 8cx, an affordable Radeon Pro GPU, and more!
Subscribe to the PC Perspective Podcast
Check out previous podcast episodes: http://pcper.com/podcast
00:03:21 - AMD Radeon Pro WX8200 Review
00:14:50 - Intel Architecture Day: Sunny Cove, Gen11 iGPU, Foveros
00:27:16 - Ryzen 3 Rumors
00:38:57 - Using a 4K TV as a Monitor
00:43:21 - Snapdragon 8cx
00:57:29 - Microsoft Edge Switching to Chromium
01:03:38 - MSI GTX 1060 with GDDR5X
01:05:40 - 3DMark Port Royal Ray Tracing Benchmark
01:09:03 - Hunting Speculative Execution Vulnerabilities
01:11:38 - 7nm Vega Logo
01:13:49 - Intel Optane DIMM Latency
01:30:45 - The Outer Worlds
Subject: Graphics Cards | December 10, 2018 - 10:36 AM | Jim Tanous
Tagged: 3dmark, ray tracing, directx raytracing, raytracing, rtx, benchmarking, benchmarks
After first announcing it last month, UL this weekend provided new information on its upcoming ray tracing-focused addition to the 3DMark benchmarking suite. Port Royal, what UL calls the "world's first dedicated real-time ray tracing benchmark for gamers," will launch Tuesday, January 8, 2019.
For those eager for a glimpse of the new ray-traced visual spectacle, or for the majority of gamers without a ray tracing-capable GPU, the company has released a video preview of the complete Port Royal demo scene.
Access to the new Port Royal benchmark will be limited to the Advanced and Professional editions of 3DMark. Existing 3DMark users can upgrade to the benchmark for $2.99, and it will become part of the base $29.99 Advanced Edition package for new purchasers starting January 8th.
Real-time ray tracing promises to bring new levels of realism to in-game graphics. Port Royal uses DirectX Raytracing to enhance reflections, shadows, and other effects that are difficult to achieve with traditional rendering techniques.
As well as benchmarking performance, 3DMark Port Royal is a realistic and practical example of what to expect from ray tracing in upcoming games: ray tracing effects running in real time at reasonable frame rates at 2560 × 1440 resolution.
According to UL, 3DMark Port Royal was developed with input from AMD, Intel, NVIDIA, and other leading technology companies, with especially close collaboration with Microsoft to create a first-class implementation of the DirectX Raytracing API.
Port Royal will run on any graphics card with drivers that support DirectX Raytracing. As with any new technology, there are limited options for early adopters, but more cards are expected to get DirectX Raytracing support in 2019.
Subject: Graphics Cards | September 25, 2018 - 02:19 PM | Ken Addison
Tagged: turing, tensor cores, rtx, ray tracing, nvidia, 2080 Ti, 2080, 2070
Earlier today, via a surprise message on Twitter, NVIDIA officially announced the availability date for the RTX 2070: October 17th.
Beautiful any way you look at it.
The GeForce RTX 2070 will be available on October 17th. #GraphicsReinvented
Shop starting at $499 ($599 Founders Edition) → https://t.co/ammFWibyFy pic.twitter.com/IsScoXm5rZ
— NVIDIA GeForce (@NVIDIAGeForce) September 25, 2018
Based on the Turing microarchitecture, the RTX 2070 will include the same RT cores for ray tracing and Tensor cores for deep learning as the RTX 2080 and 2080 Ti, albeit in different quantities.
| | RTX 2080 Ti | RTX 2080 | RTX 2070 |
|---|---|---|---|
| Base Clock | 1350 MHz | 1515 MHz | 1410 MHz |
| Boost Clock | 1545 MHz / 1635 MHz (FE) | 1710 MHz / 1800 MHz (FE) | 1620 MHz / 1710 MHz (FE) |
| Ray Tracing Speed | 10 GRays/s | 8 GRays/s | 6 GRays/s |
| Memory Clock | 14000 MHz | 14000 MHz | 14000 MHz |
| Memory Interface | 352-bit G6 | 256-bit G6 | 256-bit G6 |
| Memory Bandwidth | 616 GB/s | 448 GB/s | 448 GB/s |
| TDP | 250 W / 260 W (FE) | 215 W / 225 W (FE) | 175 W / 185 W (FE) |
| Peak Compute (FP32) | 13.4 TFLOPS / 14.2 TFLOPS (FE) | 10 TFLOPS / 10.6 TFLOPS (FE) | ? |
| Transistor Count | 18.6 B | 13.6 B | ? |
| MSRP (current) | $1200 (FE) / ? | $800 (FE) / $700 | $599 (FE) / $499 |
While we don't have a full look at the specifications yet, NVIDIA has posted some technical details on the RTX 2070 product page.
The RTX 2070 Founders Edition will be available for $599, with partner cards "starting" at $499.
Subject: Graphics Cards | September 19, 2018 - 01:35 PM | Jeremy Hellstrom
Tagged: turing, tu102, RTX 2080 Ti, rtx, ray tracing, nvidia, gtx, geforce, founders edition, DLSS
Today is the day the curtain is pulled back and the performance of NVIDIA's Turing-based consumer cards is revealed. If there was a benchmark, resolution, or game that was somehow missed in our review, you will find it below; and make sure to peek at the last page for a list of the games which will support ray tracing, DLSS, or both!
The Tech Report found that the RTX 2080 Ti is an amazing card to use if you are playing Hellblade: Senua's Sacrifice, as it clearly outperforms cards from previous generations as well as the base RTX 2080. In many cases the RTX 2080 matches the GTX 1080 Ti, though with its extra features it is an attractive card for those with GPUs several generations old. There is one small problem for those looking to adopt one of these cards: we have not seen prices like these outside of the Titan series before now.
"Nvidia's Turing architecture is here on board the GeForce RTX 2080 Ti, and we put it through its paces for 4K HDR gaming with some of today's most cutting-edge titles. We also explore the possibilities of Nvidia's Deep Learning Super-Sampling tech for the future of 4K gaming. Join us as we put Turing to the test."
Here are some more Graphics Card articles from around the web:
- MSI GeForce RTX 2080 Ti Gaming X TRIO @ Guru of 3D
- Nvidia RTX 2080 and 2080 Ti review: A tale of two very expensive graphics cards @ Ars Technica
- GeForce RTX 2080 @ Guru of 3D
- RTX 2080 Ti Founder Edition @ Guru of 3D
- Turing RTX 2080 and RTX 2080 Ti Benchmarked with 36 Games @ BabelTechReviews
- NVIDIA GeForce RTX IS HERE. Introducing the GeForce RTX 2080 & RTX 2080 Ti – 4K 60 FPS or bust! Review @ Bjorn3d
- Nvidia GeForce RTX 2080TI & RTX 2080 @ Modders-Inc
- MSI GeForce RTX 2080 Gaming X Trio 8 GB @ TechPowerUp
- ASUS GeForce RTX 2080 STRIX OC 8 GB @ TechPowerUp
- Palit GeForce RTX 2080 Gaming Pro OC 8 GB @ TechPowerUp
- MSI GeForce RTX 2080 Ti Duke 11 GB @ TechPowerUp
- Nvidia GeForce RTX 2080 & 2080 Ti @ Techspot
- ASUS GeForce RTX 2080 Ti STRIX OC 11 GB @ TechPowerUp
- MSI GeForce RTX 2080 Ti Gaming X Trio 11 GB @ TechPowerUp
- NVIDIA GeForce RTX 2080 Ti & RTX 2080 Founders Edition Reviewed @ OCC
- NVIDIA GeForce RTX 2080 Founders Edition 8 GB @ TechPowerUp
- Nvidia RTX 2080 @ Kitguru
- Nvidia RTX 2080 Ti @ Kitguru
- NVIDIA GeForce RTX 2080 Ti Founders Edition 11 GB @ TechPowerUp
- Nvidia Turing GeForce 2080 (Ti) architecture @ Guru of 3D
- NVIDIA Turing GeForce RTX Technology & Architecture @ TechPowerUp
New Generation, New Founders Edition
At this point, it seems that calling NVIDIA's 20-series GPUs highly anticipated would be a bit of an understatement. After months and months of speculation about what these new GPUs would be called, what architecture they would be based on, and what features they would bring, the NVIDIA GeForce RTX 2080 and RTX 2080 Ti were officially unveiled in August, alongside the Turing architecture.
We've already posted our deep dive into the Turing architecture and the TU102 and TU104 GPUs powering these new graphics cards, but here's the short takeaway: Turing provides efficiency improvements in both memory and shader performance, and adds specialized hardware to accelerate deep learning (Tensor cores) and enable real-time ray tracing (RT cores).
| | RTX 2080 Ti | Quadro RTX 6000 | GTX 1080 Ti | RTX 2080 | Quadro RTX 5000 | GTX 1080 | TITAN V | RX Vega 64 (Air) |
|---|---|---|---|---|---|---|---|---|
| Base Clock | 1350 MHz | 1455 MHz | 1408 MHz | 1515 MHz | 1620 MHz | 1607 MHz | 1200 MHz | 1247 MHz |
| Boost Clock | 1545 MHz / 1635 MHz (FE) | 1770 MHz | 1582 MHz | 1710 MHz / 1800 MHz (FE) | 1820 MHz | 1733 MHz | 1455 MHz | 1546 MHz |
| Ray Tracing Speed | 10 GRays/s | 10 GRays/s | -- | 8 GRays/s | 8 GRays/s | -- | -- | -- |
| Memory Clock | 14000 MHz | 14000 MHz | 11000 MHz | 14000 MHz | 14000 MHz | 10000 MHz | 1700 MHz | 1890 MHz |
| Memory Interface | 352-bit G6 | 384-bit G6 | 352-bit G5X | 256-bit G6 | 256-bit G6 | 256-bit G5X | 3072-bit HBM2 | 2048-bit HBM2 |
| Memory Bandwidth | 616 GB/s | 672 GB/s | 484 GB/s | 448 GB/s | 448 GB/s | 320 GB/s | 653 GB/s | 484 GB/s |
| TDP | 250 W / 260 W (FE) | 260 W | 250 W | 215 W / 225 W (FE) | 230 W | 180 W | 250 W | 292 W |
| Peak Compute (FP32) | 13.4 TFLOPS / 14.2 TFLOPS (FE) | 16.3 TFLOPS | 10.6 TFLOPS | 10 TFLOPS / 10.6 TFLOPS (FE) | 11.2 TFLOPS | 8.2 TFLOPS | 14.9 TFLOPS | 13.7 TFLOPS |
| Transistor Count | 18.6 B | 18.6 B | 12.0 B | 13.6 B | 13.6 B | 7.2 B | 21.0 B | 12.5 B |
| MSRP (current) | $1200 (FE) / ? | ? | ? | $800 (FE) / $700 | ? | ? | ? | ? |
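The memory bandwidth figures above follow directly from the effective memory clock and bus width, so they are easy to sanity-check. A quick calculation in Python (the function name is ours, not from any vendor tool):

```python
def bandwidth_gb_s(effective_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth: effective transfers per second times bus width in bytes."""
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

# RTX 2080 Ti: 14000 MHz effective GDDR6 on a 352-bit bus
print(bandwidth_gb_s(14000, 352))  # 616.0 GB/s
# RTX 2080 / Quadro RTX 5000: 14000 MHz on a 256-bit bus
print(bandwidth_gb_s(14000, 256))  # 448.0 GB/s
# GTX 1080 Ti: 11000 MHz GDDR5X on a 352-bit bus
print(bandwidth_gb_s(11000, 352))  # 484.0 GB/s
```

The same arithmetic explains why the HBM2 cards reach comparable bandwidth at far lower clocks: their buses are an order of magnitude wider.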
Unusually for NVIDIA, the company has decided to release both the RTX 2080 and RTX 2080 Ti at the same time as the first products in the Turing family.
The TU102-based RTX 2080 Ti features 4352 CUDA cores, while the TU104-based RTX 2080 features 2944, fewer than the GTX 1080 Ti. These new RTX GPUs have also moved to GDDR6 from the GDDR5X found on the GTX 10-series.
Subject: General Tech | September 14, 2018 - 10:32 PM | Scott Michaud
Tagged: rtx, Unity, ray tracing, directx raytracing, DirectX 12
As Ken wrote up his take in a separate post, NVIDIA has made Turing architecture details public, which will bring real-time ray tracing to PC gaming later this month. When it was announced, NVIDIA had some demos in Unreal Engine 4, and a few partnered games (Battlefield V, Shadow of the Tomb Raider, and Metro Exodus) showed off their implementations.
As we expected, Unity is working on supporting it too.
Not ray tracing, but from the same project at Unity.
The first commit showed up on Unity's GitHub for their Scriptable Render Pipelines project, dated earlier today. Looking through the changes, it appears to just generate the acceleration structure from the Renderer objects in the current scene (as well as define the toggle properties, of course). It looks like we are still a long way out.
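Unity's commit targets DXR's opaque acceleration structures, but the underlying idea is the familiar bounding volume hierarchy: collect each renderer's bounding box and split the set recursively so rays can skip most of the scene. A toy illustration in Python (not Unity's actual code; all names here are our own):

```python
class AABB:
    """Axis-aligned bounding box, the node primitive of the acceleration structure."""
    def __init__(self, mins, maxs):
        self.mins, self.maxs = mins, maxs

    @staticmethod
    def union(a, b):
        return AABB([min(x, y) for x, y in zip(a.mins, b.mins)],
                    [max(x, y) for x, y in zip(a.maxs, b.maxs)])

def build_bvh(objects):
    """Recursively split 'renderers' by bounding-box centroid along the longest axis."""
    if len(objects) <= 2:
        return {"leaf": objects}
    bounds = objects[0]["aabb"]
    for o in objects[1:]:
        bounds = AABB.union(bounds, o["aabb"])
    extents = [mx - mn for mn, mx in zip(bounds.mins, bounds.maxs)]
    axis = extents.index(max(extents))
    objects = sorted(objects,
                     key=lambda o: (o["aabb"].mins[axis] + o["aabb"].maxs[axis]) / 2)
    mid = len(objects) // 2
    return {"bounds": bounds,
            "left": build_bvh(objects[:mid]),
            "right": build_bvh(objects[mid:])}
```

In DXR the driver builds (and refits) the real structure on the GPU; the engine's job, as in the Unity commit, is mainly deciding which renderers to feed into it each frame.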
I’m looking forward to ray tracing implementations, though. I tend to like art styles with anisotropic metal trims and soft shadows, which are difficult to get right with rasterization alone because they rely on other objects in the scene. In the case of metal, reflections dominate the look and feel of the material. In the case of soft shadows, you really need to keep track of how much of a light has been blocked between the rendered fragment and the non-point light.
And yes, it will depend on the art style, but mine just happens to be computationally expensive.
Subject: Graphics Cards | August 20, 2018 - 01:58 PM | Ken Addison
Tagged: turing, tensor cores, rtx 2080ti, RTX 2080, RTX 2070, rtx, rt cores, ray tracing, quadro, preorder, nvidia, gtx, geforce
* Update *
NVIDIA's pre-order page is now live, as well as info on the RTX 2070! Details below:
* Update 2 *
Post-Founders Edition pricing comes in a bit lower than the Founders pricing noted above:
* End update *
Just like we saw with the Quadro RTX lineup, NVIDIA is designating these gaming-oriented graphics cards with the RTX brand to emphasize their capabilities with ray tracing.
Through the combination of dedicated Ray Tracing (RT) cores and Tensor cores for AI-powered denoising, NVIDIA is claiming these RTX GPUs are capable of high enough ray tracing performance to be used in real time in games, as shown by their demos of Battlefield V, Shadow of the Tomb Raider, and Metro: Exodus.
Not every GPU in NVIDIA's lineup will be capable of this real-time ray tracing performance, with those lower tier cards retaining the traditional GTX branding.
Here are the specifications as we know them so far compared to the Quadro RTX cards, as well as the previous generation GeForce cards, and the top offering from AMD.
| | RTX 2080 Ti | Quadro RTX 6000 | GTX 1080 Ti | RTX 2080 | Quadro RTX 5000 | GTX 1080 | TITAN V | RX Vega 64 (Air) | RTX 2070 |
|---|---|---|---|---|---|---|---|---|---|
| Base Clock | 1350 MHz | ? | 1408 MHz | 1515 MHz | ? | 1607 MHz | 1200 MHz | 1247 MHz | 1410 MHz |
| Boost Clock | 1545 MHz / 1635 MHz (FE) | ? | 1582 MHz | 1710 MHz / 1800 MHz (FE) | ? | 1733 MHz | 1455 MHz | 1546 MHz | 1620 MHz |
| Ray Tracing Speed | 10 GRays/s | 10 GRays/s | -- | 8 GRays/s | 6? GRays/s | -- | -- | -- | 6 GRays/s |
| Memory Clock | 14000 MHz | 14000 MHz | 11000 MHz | 14000 MHz | 14000 MHz | 10000 MHz | 1700 MHz | 1890 MHz | 14000 MHz |
| Memory Interface | 352-bit G6 | 384-bit G6 | 352-bit G5X | 256-bit G6 | 256-bit G6 | 256-bit G5X | 3072-bit HBM2 | 2048-bit HBM2 | 256-bit G6 |
| Memory Bandwidth | 616 GB/s | 672 GB/s | 484 GB/s | 448 GB/s | 448 GB/s | 320 GB/s | 653 GB/s | 484 GB/s | 448 GB/s |
| TDP | ? | 300 W | 250 W | 215 W | ? | 180 W | 250 W | 292 W | ? |
| Peak Compute | ? | ? | 10.6 TFLOPS | ? | ? | 8.2 TFLOPS | 14.9 TFLOPS | 13.7 TFLOPS | ? |
| Transistor Count | ? | ? | 12.0 B | ? | ? | 7.2 B | 21.0 B | 12.5 B | ? |
We hope to fill out the rest of the information on these GPUs in the coming days at subsequent press briefings during Gamescom.
One big change to the RTX lineup is NVIDIA's revised Founders Edition cards. Instead of the blower-style cooler that we've seen on every other NVIDIA reference design, the Founders Edition RTX cards move to a dual-axial fan setup, similar to past third-party designs.
These new GPUs do not come cheaply, however, with an increased MSRP across the entire lineup when compared to the 1000-series cards. The RTX 2080 Ti's MSRP of $1200 is an increase of $500 over the previous generation GTX 1080 Ti, while the RTX 2080 sports a $200 increase over the GTX 1080. These prices should come down once the Founders Edition pricing wave passes (the same happened with the GTX 10xx launches).
Both the Founders Edition cards from NVIDIA, as well as third-party designs from partners such as EVGA and ASUS, are available for preorder from retailers including Amazon and Newegg starting today and are set to ship on August 27th.
Subject: Graphics Cards | August 14, 2018 - 01:08 AM | Jeremy Hellstrom
Tagged: Siggraph, ray tracing, quadro rtx 8000, quadro rtx 5000, nvidia, jensen
The attempt to describe the visual effects Jensen Huang showed off at his Siggraph keynote is bound to fail, not that this has ever stopped any of us before. If you have seen the short demo movie released earlier this year in cooperation with Epic and ILMxLAB, you have an idea of what can be done with ray tracing. However, NVIDIA pulled a fast one on us by hiding the hardware it was shown on: the demo was not pre-rendered, but was in fact our first look at their real-time ray tracing. The hardware required for this feat is the brand new RTX series, and the specs are impressive.
The ability to process 10 Gigarays per second means that each and every pixel can be influenced by numerous rays of light, perhaps 100 per pixel in a perfect scenario with clean inputs, or 5-20 in cases where the AI de-noiser is required to calculate missing light sources or occlusions, all in real time. The card itself also functions well as a light source. The ability to perform 16 TFLOPS and 16 TIPS means this card is happy doing floating point and integer calculations simultaneously.
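Those rays-per-pixel figures follow from simple division of the ray budget by the pixel throughput. A back-of-envelope check (theoretical peak only; real scenes spend rays unevenly and pay shading and denoising costs on top):

```python
def rays_per_pixel(gigarays_per_sec: float, width: int, height: int, fps: int) -> float:
    """Ray budget per pixel per frame for a given resolution and frame rate."""
    return gigarays_per_sec * 1e9 / (width * height * fps)

print(round(rays_per_pixel(10, 1920, 1080, 60)))  # ~80 rays per pixel at 1080p60
print(round(rays_per_pixel(10, 3840, 2160, 60)))  # ~20 rays per pixel at 4K60
```

That is why resolution, not scene complexity, dominates the cost: every extra pixel multiplies the whole ray budget.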
The die itself is significantly larger than the previous generation at 754 mm², and the card will sport a 300 W TDP to keep it in line with the PCIe spec; we will run it through the same tests as the RX 480 to see how well they did, if we get the chance. 30 W of that total is devoted to the onboard USB controller, which implies support for VirtualLink.
The cards can be used in pairs, utilizing Jensen's chest decoration, more commonly known as an NVLink bridge; more than one pair can be run in a system, but you will not be able to connect three or more cards directly.
As a pair still gives you up to 96GB of GDDR6 for your processing tasks, it is hard to consider that limiting. The price is rather impressive as well: compared to previous render farms, such as the rather tiny one below, you are looking at a tenth the cost to power your movie with RTX cards. The cards are not limited to proprietary engines or programs either, with the DirectX and Vulkan APIs supported in addition to Pixar's software. NVIDIA's Material Definition Language will be made open source, allowing even broader usage for those who so desire.
You will of course wonder what this means in terms of graphical eye candy, either pre-rendered quickly for your later enjoyment or else in real time if you have the hardware. The image below attempts to show the various features which RTX can easily handle. Mirrored surfaces can be emulated with multiple reflections accurately represented, again handled on the fly instead of being preset, so soon you will be able to see around corners.
It also introduces a new type of anti-aliasing called DLAA, and there are no prizes for guessing what the DL stands for. DLAA works by taking an already anti-aliased image and training itself to provide even better edge smoothing, though at a processing cost. As with most other features on these cards, it is not the complexity of the scene which has the biggest impact on calculation time but rather the number of pixels, as each pixel has numerous rays associated with it.
All of this also makes for significantly faster processing than Pascal: not the small evolutionary change we have become accustomed to, but more of a revolutionary one.
In addition to effects in movies and other video there is another possible use for Turing based chips which might appeal to the gamer, if the architecture reaches the mainstream. With the ability to render existing sources with added ray tracing and de-noising features it might be possible for an enterprising soul to take an old game and remaster it in a way never before possible. Perhaps one day people who try to replay the original System Shock or Deus Ex will make it past the first few hours before the graphical deficiencies overwhelm their senses.
We expect to see more from NVIDIA tomorrow so stay tuned.
Subject: General Tech | June 28, 2017 - 06:24 PM | Scott Michaud
Tagged: solidworks, ray tracing, radeon, prorender, nvidia, mental ray, Blender, amd
AMD has released a free ray-tracing engine for Blender, as well as Maya, 3D Studio Max, and SolidWorks, called Radeon ProRender. It uses a physically-based workflow, which allows multiple materials to be expressed in a single, lighting-independent shader, making it easy to color objects and have them usable in any sensible environment.
Image Credit: Mike Pan (via Twitter)
I haven’t used it yet, and I definitely haven’t tested how it stacks up against Cycles, but we’re beginning to see some test renders from Blender folks. It looks pretty good, as you can see with the water-filled Cornell box (above). Moreover, it’s rendered on an NVIDIA GPU, which I’m guessing they had because of Cycles, but that also shows that AMD is being inclusive with their software.
Radeon ProRender puts more than a little pressure on Mental Ray, which is owned by NVIDIA and licensed on annual subscriptions. We’ll need to see how quality evolves, but, as you see in the test render above, it looks pretty good so far... and the price can’t be beat.
Subject: Graphics Cards, Mobile | June 2, 2017 - 02:23 AM | Scott Michaud
Tagged: Imagination Technologies, PowerVR, ray tracing, ue4, vulkan
Imagination Technologies has published another video that demonstrates ray tracing with their PowerVR Wizard GPU. The test system is a development card running on Ubuntu and powering Unreal Engine 4; specifically, it is using UE4's Vulkan renderer.
The demo highlights two major advantages of ray traced images. The first is that, rather than applying a baked cubemap with screen-space reflections to simulate metallic objects, this demo calculates reflections with secondary rays. From there, it’s just a matter of hooking up the gathered information into the parameters that the shader requires and doing the calculations.
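A secondary reflection ray starts from the standard mirror formula, r = d - 2(d·n)n, for incoming direction d and unit surface normal n. A minimal sketch in Python (our own helper for illustration, not code from the demo):

```python
def reflect(d, n):
    """Mirror incoming direction d about unit normal n: r = d - 2*(d.n)*n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return [di - 2.0 * dot * ni for di, ni in zip(d, n)]

# A ray travelling straight down at an upward-facing surface bounces straight back up:
print(reflect([0.0, -1.0, 0.0], [0.0, 1.0, 0.0]))  # [0.0, 1.0, 0.0]
```

The reflected direction is then traced into the scene like any primary ray, and whatever it hits feeds the metal shader's reflection term.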
The second advantage is that it can do arbitrary lens effects, like distortion and equirectangular 360° projections. Rasterization, which projects 3D world coordinates into 2D coordinates on a screen, assumes that edges remain straight, and that causes problems as the FoV gets very large, especially for a full circle. Imagination Technologies acknowledges that workarounds exist, like breaking up the render into the six faces of a cube, but the best approximation is casting a ray per pixel and seeing what it hits.
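Casting a ray per pixel makes such projections almost trivial: each pixel maps to a longitude/latitude pair, which gives the ray direction directly. A sketch of the mapping (a generic formulation, not Imagination's code):

```python
import math

def equirect_ray(px, py, width, height):
    """Ray direction for pixel (px, py) of a full 360x180 equirectangular image."""
    lon = (px + 0.5) / width * 2.0 * math.pi - math.pi   # -pi (left) .. +pi (right)
    lat = math.pi / 2.0 - (py + 0.5) / height * math.pi  # +pi/2 (top) .. -pi/2 (bottom)
    return (math.cos(lat) * math.sin(lon),  # x
            math.sin(lat),                  # y (up)
            math.cos(lat) * math.cos(lon))  # z (forward)
```

Rasterization has no equivalent of this per-pixel freedom, which is exactly why the cubemap-face workaround exists.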
The demo was originally for GDC 2017, back in February, but the videos have just been released.