Subject: General Tech | December 13, 2018 - 05:31 AM | Jim Tanous
Tagged: Zen 2, Sunny Cove, snapdragon, ryzen 3, ray tracing, radeon pro, podcast, Optane, Intel, edge, chromium, amd, 3dmark
PC Perspective Podcast #525 - 12/12/2018
Our podcast this week features discussion of the new Intel Sunny Cove architecture, Ryzen 3 rumors, the high-end Snapdragon 8cx, an affordable Radeon Pro GPU, and more!
Subscribe to the PC Perspective Podcast
Check out previous podcast episodes: http://pcper.com/podcast
00:03:21 - AMD Radeon Pro WX8200 Review
00:14:50 - Intel Architecture Day: Sunny Cove, Gen11 iGPU, Foveros
00:27:16 - Ryzen 3 Rumors
00:38:57 - Using a 4K TV as a Monitor
00:43:21 - Snapdragon 8cx
00:57:29 - Microsoft Edge Switching to Chromium
01:03:38 - MSI GTX 1060 with GDDR5X
01:05:40 - 3DMark Port Royal Ray Tracing Benchmark
01:09:03 - Hunting Speculative Execution Vulnerabilities
01:11:38 - 7nm Vega Logo
01:13:49 - Intel Optane DIMM Latency
01:30:45 - The Outer Worlds
Subject: Graphics Cards | December 10, 2018 - 10:36 AM | Jim Tanous
Tagged: 3dmark, ray tracing, directx raytracing, raytracing, rtx, benchmarking, benchmarks
After first announcing it last month, UL this weekend provided new information on its upcoming ray tracing-focused addition to the 3DMark benchmarking suite. Port Royal, what UL calls the "world's first dedicated real-time ray tracing benchmark for gamers," will launch Tuesday, January 8, 2019.
For those eager for a glimpse of the new ray-traced visual spectacle, or for the majority of gamers without a ray tracing-capable GPU, the company has released a video preview of the complete Port Royal demo scene.
Access to the new Port Royal benchmark will be limited to the Advanced and Professional editions of 3DMark. Existing 3DMark users can upgrade to the benchmark for $2.99, and it will become part of the base $29.99 Advanced Edition package for new purchasers starting January 8th.
Real-time ray tracing promises to bring new levels of realism to in-game graphics. Port Royal uses DirectX Raytracing to enhance reflections, shadows, and other effects that are difficult to achieve with traditional rendering techniques.
As well as benchmarking performance, 3DMark Port Royal is a realistic and practical example of what to expect from ray tracing in upcoming games: ray tracing effects running in real time at reasonable frame rates at 2560 × 1440 resolution.
According to UL, 3DMark Port Royal was developed with input from AMD, Intel, NVIDIA, and other leading technology companies, with the company working especially closely with Microsoft to create a first-class implementation of the DirectX Raytracing API.
Port Royal will run on any graphics card with drivers that support DirectX Raytracing. As with any new technology, there are limited options for early adopters, but more cards are expected to get DirectX Raytracing support in 2019.
Subject: Processors | September 25, 2017 - 09:36 PM | Tim Verry
Tagged: skylake-x, overclocking, Intel Skylake-X, Intel, Cinebench, 7980xe, 3dmark, 14nm
Renowned overclocker der8auer got his hands on the new 18-core Intel Core i9-7980XE and managed to break a few records with more than a bit of LN2 and thermal paste. Following a delid, der8auer slathered the bare die and surrounding PCB with a polymer-based TIM (Kryonaut) and reattached the IHS to prepare for the extreme overclock. He even attempted to mill out the middle of the IHS to strike a balance between direct-die cooling and using the IHS to prevent bending the PCB and spread out the pressure from the LN2 cooler block, but he ran into inconsistent results between runs and opted not to proceed with that method.
Using an Asus Rampage VI Apex X299 motherboard at an Asus ROG event in Taiwan, der8auer used liquid nitrogen to push all eighteen cores (plus Hyper-Threading) of the Core i9-7980XE to 6.1 GHz for a CPU-Z validation. To hit those clock speeds he needed to crank the voltage up to 1.55V (1.8V VCCIN), which is a lot for the 14nm Skylake-X processor. Der8auer noted that overclocking was temperature limited beyond this point: at 6.1 GHz he was seeing positive (above-zero) temperatures on the CPU cores despite the surface of the LN2 block being as low as -100 °C! Perhaps even more incredible is the power draw at these clock speeds, with the system pulling as much as 1,000 watts (~83 amps) on the +12V rail and the CPU responsible for almost all of that! That is a lot of power running through the motherboard VRMs and the on-processor FIVR!
For comparison, at 5.5 GHz he measured 70 amps on the +12V rail (840W) with the chip at 1.45V Vcore under load.
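As a sanity check, power on a DC rail is just voltage times current, and der8auer's reported figures line up (the function and constant names below are illustrative, not from his testing):

```python
# Power on a DC rail: P = V * I. Checking the reported +12V rail figures.
RAIL_VOLTAGE = 12.0  # the +12V rail

def rail_power(amps: float) -> float:
    """Watts drawn on the rail for a given current."""
    return RAIL_VOLTAGE * amps

# ~83 A at 6.1 GHz works out to ~1,000 W
print(round(rail_power(83.3)))  # 1000
# 70 A at 5.5 GHz works out to 840 W
print(rail_power(70.0))         # 840.0
```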
For Cinebench R15, the extreme overclocker opted for a tamer 5.7 GHz where the i9-7980XE achieved a multithreaded score of 5,635 points. He compared that to his AMD Threadripper overclock of 5.4 GHz where he achieved a Cinebench score of 4,514 (granted the Intel part was using four more threads and clocked higher).
To push things (especially his power supply, heh) further, the overclocker added an LN2-cooled NVIDIA Titan Xp to the mix and managed to overclock the graphics card to 2455 MHz at 1.4V. With the 3840 Pascal cores at 2.455 GHz he broke three single-card world records, scoring 45,705 in 3DMark 11, 35,782 in 3DMark Fire Strike, and 120,425 in 3DMark Vantage!
Der8auer also made a couple of interesting points about overclocking at these levels. Cold bugs can prevent the CPU and/or GPU from booting if the cooler plate is too cold; on the other side of things, once the chip is running, power consumption can jump so drastically with more voltage and higher clocks that even LN2 can't maintain sub-zero core temperatures! The massive temperature delta can also create condensation issues that need to be dealt with. He mentions that while liquid metal TIMs are popular choices for 24/7 overclocking, in extreme overclocking the alloy actually works against you: sub-zero temperatures reduce the effectiveness and thermal conductivity of the interface material, which is why polymer-based TIMs are used when cooling with liquid nitrogen, liquid helium, or TECs. Also, while most people apply a thin layer of thermal paste to the die or IHS, when extreme overclocking he "drowns" the processor die and PCB in TIM to get as much contact as possible with the cooler, since every bit of heat transfer helps, even the small amount that passes through the PCB. Further, the FIVR has advantages such as per-core voltage fine-tuning, but it can also hold back further overclocking: cold bugs shut the processor down past roughly -100 to -110 °C, whereas an external VRM setup could possibly push the processor further.
For the full scoop, check out his overclocking video. Interesting stuff!
- The Intel Core i9-7980XE and 7960X Review: Skylake-X at $1999 and 18-cores
- Delidded Ryzen 7 1700 Confirms AMD Is Using Solder With IHS On Ryzen Processors
- The AMD Ryzen Threadripper 1950X and 1920X Review
- Overclocking the AMD Ryzen 7 1700 - The Real Winner?
- Overclockers Push Ryzen 7 1800X to 5.2 GHz On LN2, Break Cinebench Record
Subject: Graphics Cards | March 28, 2017 - 04:32 PM | Scott Michaud
Tagged: vulkan, DirectX 12, Futuremark, 3dmark
The latest update to 3DMark adds Vulkan support to its API Overhead test, which attempts to render as many simple objects as possible while keeping above 30 FPS. Higher draw-call throughput allows developers to add more objects to a scene and to design those art assets in a simpler, more straightforward way. This is now one of the first tests that can directly compare DirectX 12 and Vulkan, which we expected to be roughly equivalent, but we couldn't tell for sure.
While I wasn't able to run the tests myself, Luca Rocchi of Ocaholic gave it a shot on their Core i7-5820K and GTX 980. Apparently, Vulkan was just under 10% faster than DirectX 12 in their results, reaching 22.6 million draw calls in Vulkan versus 20.6 million in DirectX 12. Again, this is one test, done by a third party, for a single system, a single GPU driver, and a single 3D engine, and one that is designed to stress a specific portion of the API at that; take it with a grain of salt. Still, this suggests that Vulkan can keep pace with the slightly older DirectX 12 API, and maybe even beat it.
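Ocaholic's numbers work out as follows; a quick check of the "just under 10%" figure using their reported draw-call counts:

```python
# Relative advantage of Vulkan over DirectX 12 in the API Overhead test,
# computed from Ocaholic's reported draw-call counts (in millions).
vulkan_calls = 22.6e6
dx12_calls = 20.6e6

advantage_pct = (vulkan_calls - dx12_calls) / dx12_calls * 100

print(f"Vulkan advantage: {advantage_pct:.1f}%")  # Vulkan advantage: 9.7%
```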
This update also removed Mantle support. I just thought I’d mention that.
Subject: General Tech | July 21, 2016 - 12:21 PM | Ryan Shrout
Tagged: Wraith, Volta, video, time spy, softbank, riotoro, retroarch, podcast, nvidia, new, kaby lake, Intel, gtx 1060, geforce, asynchronous compute, async compute, arm, apollo lake, amd, 3dmark, 10nm, 1070m, 1060m
PC Perspective Podcast #409 - 07/21/2016
Join us this week as we discuss the GTX 1060 review, controversy surrounding the async compute of 3DMark Time Spy and more!!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
This episode of the PC Perspective Podcast is sponsored by Casper!
Hosts: Ryan Shrout, Allyn Malventano, Jeremy Hellstrom, and Josh Walrath
Yes, We're Writing About a Forum Post
Update - July 19th @ 7:15pm EDT: Well that was fast. Futuremark published their statement today. I haven't read it through yet, but there's no reason to wait to link it until I do.
Update 2 - July 20th @ 6:50pm EDT: We interviewed Jani Joki, Futuremark's Director of Engineering, on our YouTube page. The interview is embedded just below this update.
Original post below
The comments of a previous post pointed us to an Overclock.net thread whose author claims that 3DMark's implementation of asynchronous compute is designed to show NVIDIA in the best possible light. At the end of the linked post, they note that asynchronous compute is a general blanket term, and that we should better understand what is actually going on.
So, before we address the controversy, let's actually explain what asynchronous compute is. The main problem is that it actually is a broad term. Asynchronous compute could describe any optimization that allows tasks to execute when it is most convenient, rather than just blindly doing them in a row.
This is asynchronous computing.
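As a loose CPU-side analogy (not how a GPU scheduler actually works), the difference between doing tasks "blindly in a row" and letting them run when convenient can be sketched with Python's asyncio; the task names here are purely illustrative:

```python
import asyncio
import time

async def job(name: str, seconds: float) -> str:
    # Simulates work that spends most of its time waiting on another unit.
    await asyncio.sleep(seconds)
    return name

async def serial() -> float:
    """Run the jobs strictly one after another."""
    start = time.monotonic()
    await job("shadow pass", 0.2)
    await job("physics", 0.2)  # waits for shadows even though it doesn't need to
    return time.monotonic() - start

async def overlapped() -> float:
    """Let both jobs be in flight at once; each finishes when convenient."""
    start = time.monotonic()
    await asyncio.gather(job("shadow pass", 0.2), job("physics", 0.2))
    return time.monotonic() - start

print(f"serial:     {asyncio.run(serial()):.2f}s")      # ~0.40s
print(f"overlapped: {asyncio.run(overlapped()):.2f}s")  # ~0.20s
```

The overlapped version takes roughly the length of the longest job rather than the sum of all of them, which is the basic payoff of any asynchronous scheme.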
Subject: General Tech | June 23, 2016 - 06:37 PM | Jeremy Hellstrom
Tagged: Futuremark, Time Spy, 3dmark
A new version of DirectX hitting the market means we need a new benchmark, and once again Futuremark has delivered with the Time Spy benchmark. Right now 3DMark is 80% off on Steam, and if you pick it up you will get access to the new Time Spy Basic benchmark when it is released.
Time Spy uses the new DirectX 12 API and supports new features like asynchronous compute, explicit multi-adapter, and multi-threading. It will have reviewers digging out hardware they thought they had already tested to provide you with new benchmark data points that will apply to currently available DX12 games as well as those yet to be released.
This is also a great opportunity to pick up the full version of the benchmark for your own use, even if you have yet to upgrade to DX12 hardware. You should check out the teaser trailer if you are familiar with past 3DMark versions, as you will see a few glimpses of benchmark scenes that caused you mental raster burn in years past.
Subject: Graphics Cards | May 27, 2016 - 02:58 PM | Allyn Malventano
Tagged: sli, review, led, HB, gtx, evga, Bridge, ACX 3.0, 3dmark, 1080
...so the time where we manage to get multiple GTX 1080's in the office here would, of course, be when Ryan is on the other side of the planet. We are also missing some other semi-required items, like the new 'SLI HB' bridge, but we should be able to test on an older LED bridge at 2560x1440 (below the resolution where the newer style is absolutely necessary to avoid a sub-optimal experience). That said, surely the storage guy can squeeze out a quick run of 3DMark to check out the SLI scaling, right?
For this testing, I spent just a few minutes with EVGA's OC Scanner to take advantage of GPU Boost 3.0. I cranked the power limits and fans on both cards, ending up with a stable overclock hovering right around 2 GHz on the pair. I'm leaving out the details of the second GPU we got in for testing, as it may be under NDA and I can't confirm that since everyone I'd ask is in an opposite time zone (pfft - it has an aftermarket cooler). Then I simply ran Fire Strike (25x14) with SLI disabled:
...and then with it enabled:
That works out to a 92% gain in 3DMark score, with the FPS figures jumping by almost exactly 2x. Now remember, this is by no means a controlled test, and the boss will be cranking out a much more detailed piece with Frame Rating results galore in the future, but for now I just wanted to get some quick figures out to the masses for consumption and confirmation that 1080 SLI is a doable thing, even on an older bridge.
*edit* here's another teaser:
Aftermarket coolers are a good thing, as evidenced by the 47 °C of that second GPU, but the Founders Edition blower-style cooler is still able to get past 2 GHz just fine. Both cards had their fans at max speed in this example.
I was able to confirm we are not under NDA on the additional card we received. Behold:
This is the EVGA Superclocked edition with their ACX 3.0 cooler.
More to follow (yes, again)!
Subject: General Tech | April 2, 2015 - 01:16 PM | Ken Addison
Tagged: podcast, video, dx12, 3dmark, freesync, g-sync, Intel, 3d nand, 20nm, 28nm, micron, nvidia, shield, Tegra X1, raptr, 850 EVO, msata, M.2
Join us this week as we discuss DX12 Performance, Dissecting G-SYNC and FreeSync, Intel 3D NAND and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano
Program length: 1:27:10
Week in Review:
News item of interest:
Hardware/Software Picks of the Week:
Jeremy: Just pretend it is Half Life 3
Our first DX12 Performance Results
Late last week, Microsoft approached me to see if I would be interested in working with them and with Futuremark on the release of the new 3DMark API Overhead Feature Test. Of course I jumped at the chance, with DirectX 12 being one of the hottest discussion topics among gamers, PC enthusiasts and developers in recent history. Microsoft set us up with the latest iteration of 3DMark and the latest DX12-ready drivers from AMD, NVIDIA and Intel. From there, off we went.
First we need to discuss exactly what the 3DMark API Overhead Feature Test is (and also what it is not). The feature test will be part of the next revision of 3DMark, which will likely ship in time for the full Windows 10 release. Futuremark claims that it is the "world's first independent" test that allows you to compare the performance of three different APIs: DX12, DX11, and even Mantle.
It was almost one year ago that Microsoft officially unveiled the plans for DirectX 12: a move to a more efficient API that can better utilize the CPU and platform capabilities of future, and most importantly current, systems. Josh wrote up a solid editorial on what we believe DX12 means for the future of gaming, and in particular for PC gaming, that you should check out if you want more background on the direction DX12 has set.
One of the keys to DX12's improved efficiency is the ability for developers to get closer to the metal, a phrase indicating that game and engine coders can access more of the system's power (CPU and GPU) without having their hands held by the API itself. The most direct benefit of this, as we saw with AMD's Mantle implementation over the past couple of years, is an improved quantity of draw calls that a given hardware system can sustain in a game engine.
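A back-of-the-envelope model shows why per-call overhead matters so much: at a fixed frame budget, cutting the CPU cost of a draw call by 10x lets roughly 10x as many calls fit in a frame. The per-call costs below are made-up illustrative values, not measured benchmark data:

```python
# Toy model of API draw-call overhead: how many draw calls fit in a
# 30 FPS frame budget given a fixed CPU cost per call.
FRAME_BUDGET_S = 1.0 / 30.0  # ~33.3 ms per frame at 30 FPS

def max_draw_calls(per_call_overhead_s: float) -> int:
    """Draw calls that fit in one frame if each costs a fixed CPU time."""
    return int(FRAME_BUDGET_S / per_call_overhead_s)

thick_api = 2e-6   # hypothetical "thick" API: 2 µs of CPU work per call
thin_api = 0.2e-6  # hypothetical closer-to-the-metal API: 0.2 µs per call

print(max_draw_calls(thick_api))  # 16666
print(max_draw_calls(thin_api))   # 166666
```

This is exactly the quantity the API Overhead Feature Test measures empirically: how many simple objects the driver and API can push through before the frame rate drops below 30 FPS.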