The SDM845 Reference Platform and CPU Results
The Snapdragon 845 is Qualcomm’s latest flagship mobile platform, officially announced on December 6 and known as the SDM845 (a move away from the MSMxxxx nomenclature of previous generations). At a recent media event we had a chance to go hands-on with a development platform device for a preview of this new Snapdragon's performance, the results of which we can now share. Will the Snapdragon 845 be Qualcomm's Android antidote to Apple's A11? Read on to find out!
The SDM845 QRD (Qualcomm Reference Design) Device
While this article will focus on CPU and GPU performance with a few known benchmarks, the Snapdragon 845 is of course a full mobile platform, combining an 8-core Kryo 385 CPU, Adreno 630 graphics, a Hexagon 685 DSP (which includes the Snapdragon Neural Processing Engine), a Spectra 280 image processor, the X20 LTE modem, and more. The reference device was packaged like a typical 5.5-inch Android smartphone, which helps provide a real-world picture of thermal management during benchmarking.
Qualcomm Reference Design Specifications:
- Baseband Chipset: SDM845
- Memory: 6 GB LPDDR4X (PoP)
- Display: 5.5-inch 1440x2560
- Front: IMX320 12 MP Sensor
- Rear: IMX386 12 MP Sensor
- No 3.5 mm headset jack (Analog over USB-C)
- 4 Digital Microphones
- Connector: USB 3.1 Type-C
- DisplayPort over USB-C
At the heart of the Snapdragon 845 is the octa-core Kryo 385 CPU, configured with 4x performance cores and 4x efficiency cores, and offering clock speeds of up to 2.8 GHz. In comparison, the Snapdragon 835 had a similar 8-core CPU configuration (Kryo 280) clocked up to 2.45 GHz. The SDM845 is produced on 10 nm LPP process technology, while the SD835 (MSM8998) was the first to be manufactured at 10 nm (LPE). It is not surprising that Qualcomm is extracting higher clock speeds from this new chip at the same process node, and the efficiency gains of the newer 10 nm LPP FinFET process should theoretically result in similar, or possibly even lower, power draw at these higher clocks.
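For a quick sense of scale, the generational clock bump works out to roughly 14% on paper (actual sustained clocks under thermal load will of course differ):

```python
# Back-of-envelope comparison of peak CPU clocks, using the figures above.
sd835_peak_ghz = 2.45  # Snapdragon 835 (Kryo 280)
sd845_peak_ghz = 2.80  # Snapdragon 845 (Kryo 385)

uplift_pct = (sd845_peak_ghz / sd835_peak_ghz - 1) * 100
print(f"Peak clock uplift: {uplift_pct:.1f}%")  # roughly a 14% bump at the same 10 nm node
```

Whether that headroom survives sustained loads inside a phone chassis is exactly what the thermally realistic reference design should help reveal.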
Why things are different in VR performance testing
It has been an interesting past several weeks and I find myself in an interesting spot. Clearly, and without a shred of doubt, virtual reality, more than any other gaming platform that has come before it, needs an accurate measure of performance and experience. With traditional PC gaming, if you dropped a couple of frames, or saw a slightly out of sync animation, you might notice and get annoyed. But in VR, with a head-mounted display just inches from your face taking up your entire field of view, a hitch in framerate or a stutter in motion can completely ruin the immersive experience that the game developer is aiming to provide. Even worse, it could cause dizziness and nausea, defining your VR experience negatively and likely killing the excitement of the platform.
My conundrum, and the one that I think most of our industry rests in, is that we don’t yet have the tools and ability to properly quantify the performance of VR. In a market and a platform that so desperately needs to get this RIGHT, we are at a point where we are just trying to get it AT ALL. I have read and seen some other glances at performance of VR headsets like the Oculus Rift and the HTC Vive released today, but honestly, all are missing the mark at some level. Using tools built for traditional PC gaming environments just doesn’t work, and experiential reviews talk about what the gamer can expect to “feel” but lack the data and analysis to back it up and to help point the industry in the right direction to improve in the long run.
With final hardware from both Oculus and HTC / Valve in my hands for the last three weeks, I have, with the help of Ken and Allyn, been diving into the important question of HOW do we properly test VR? I will be upfront: we don’t have a final answer yet. But we have a direction. And we have some interesting results to show you that should prove we are on the right track. But we’ll need help from the likes of Valve, Oculus, AMD, NVIDIA, Intel and Microsoft to get it right. Based on a lot of discussion I’ve had in just the last 2-3 days, I think we are moving in the correct direction.
So why don’t our existing tools work for testing performance in VR? Things like Fraps, Frame Rating and FCAT have revolutionized performance evaluation for PCs – so why not VR? The short answer is that the gaming pipeline changes in VR with the introduction of two new SDKs: Oculus and OpenVR.
Though the two differ in the details, the key is that they intercept the path frames take from the GPU to the display. When you attach an Oculus Rift or an HTC Vive to your PC, it does not show up as a display in your system; this is a change from Oculus' first developer kits years ago. Now the headsets are driven by what’s known as “direct mode.” This mode offers an improved user experience and lets the Oculus and OpenVR runtimes handle quite a bit of functionality for game developers. It also means there are actions being taken on the rendered frames after the last point where we can monitor them. At least for today.
The tale of the Samsung 840 EVO is a long and winding one, with many hitches along the way. Launched at the Samsung 2013 Global SSD Summit, the 840 EVO was a unique entry into the SSD market. Using 19nm planar TLC flash, the EVO would have had only mediocre write performance if not for the addition of a TurboWrite cache, which added 3-12GB (depending on drive capacity) of SLC write-back cache. This gave the EVO great all-around performance in most consumer usage scenarios. It tested very well, was priced aggressively, and remained our top recommended consumer SSD for quite some time. Other editors here at PCPer purchased them for their own systems. I even put one in the very laptop on which I'm writing this article.
An 840 EVO read speed test, showing areas where old data had slowed.
About a year after release, some 840 EVO users started noticing something weird with their systems. The short version is that data that sat unmodified for a period of months could no longer be read at full speed. Within a month of our reporting on this issue, Samsung issued a Performance Restoration Tool, a combination of a firmware update and a software tool that initiated a 'refresh', rewriting all stale data and restoring read performance back to optimal speeds. When the tool came out, many were skeptical, suspecting the drives would simply slow down again in the future. We kept an eye on things, and after a few more months of waiting, we noted that our test samples were in fact slowing down again. It was taking longer for the slowdown to manifest this time around, and the EVOs didn't seem to be slowing down to the same degree, but the fact remained that the first attempt at a fix was not a complete solution. Samsung kept up their end of the bargain, promising another fix, but their initial statement was a bit disappointing, as it suggested they would only be able to correct this issue with a new version of their Samsung Magician software that periodically refreshed the old data. This came across as a band-aid solution, but it was better than nothing.
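Conceptually, the 'refresh' these tools perform is nothing more exotic than reading data back and writing it out again, which forces the SSD to program fresh cells and resets the data's effective age. The sketch below illustrates only that idea; it is not Samsung's actual implementation, and a real tool would have to cover every file and free-space block on the drive:

```python
# Conceptual sketch of a "refresh": read a file's contents and write them
# back in place, chunk by chunk, so the SSD programs fresh flash cells.
# This is only the idea behind such tools, not Samsung's implementation.
import os
import tempfile

def refresh_file(path, chunk=1024 * 1024):
    """Rewrite a file in place, one chunk at a time."""
    with open(path, "r+b") as f:
        pos = 0
        while True:
            f.seek(pos)
            data = f.read(chunk)
            if not data:
                break
            f.seek(pos)       # rewind and rewrite the same bytes
            f.write(data)
            pos += len(data)

# Demonstration on a throwaway file: contents survive the rewrite intact.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"stale data" * 1000)
refresh_file(tmp.name)
with open(tmp.name, "rb") as f:
    assert f.read() == b"stale data" * 1000
os.remove(tmp.name)
```

The catch, as the rest of this story shows, is that a periodic rewrite treats the symptom, not the miscalibration underneath it.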
Well here we are again with this Samsung 840 EVO slowdown issue cropping up here, there, and everywhere. The story for this one is so long and convoluted that I’m just going to kick this piece off with a walkthrough of what was happening with this particular SSD, and what has been attempted so far to fix it:
The Samsung 840 EVO is a consumer-focused TLC SSD. Normally TLC SSDs suffer from reduced write speeds when compared to their MLC counterparts, as write operations take longer for TLC than for MLC (SLC is faster still). Samsung introduced a novel way of speeding things up with their TurboWrite caching method, which adds a fast SLC buffer alongside the slower flash. This buffer is several GB in size, and helps the 840 EVO maintain fast write speeds in most typical usage scenarios. But the issue with the 840 EVO is not its write speed – the problem is read speed. Initial reviews did not catch this issue as it only impacted data that had been stagnant for a period of roughly 6-8 weeks. As files aged, their read speeds were reduced, starting from the speedy (and expected) 500 MB/sec and ultimately reaching a worst case speed of 50-100 MB/sec.
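To see why the SLC buffer masks TLC's write penalty so well in everyday use, here is a toy model of TurboWrite-style caching. The speeds below are illustrative placeholders, not Samsung's actual figures; only the 3 GB minimum cache size comes from the text above:

```python
# Toy model of TurboWrite-style SLC caching. Speeds are illustrative
# assumptions, not Samsung's specs. Writes land in the fast SLC buffer
# until it fills, then fall through to the slower TLC flash.
SLC_SPEED_MBS = 500   # assumed write-back cache speed
TLC_SPEED_MBS = 250   # assumed direct-to-TLC write speed
CACHE_GB = 3          # smallest TurboWrite cache per the text (3-12 GB by capacity)

def burst_write_seconds(write_gb):
    """Time to absorb a burst write, assuming the SLC cache starts empty."""
    cached = min(write_gb, CACHE_GB)       # portion absorbed at SLC speed
    spill = write_gb - cached              # remainder written at TLC speed
    return (cached * 1024) / SLC_SPEED_MBS + (spill * 1024) / TLC_SPEED_MBS

# A 2 GB burst fits entirely in cache; an 8 GB burst spills 5 GB to TLC.
print(round(burst_write_seconds(2), 2))   # -> 4.1
print(round(burst_write_seconds(8), 2))   # -> 26.62
```

Most consumer writes are small bursts that fit inside the cache, which is why the EVO benchmarked so well at launch, and why a read-side degradation went unnoticed for so long.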
There were other variables that impacted the end result, which further complicated the flurry of reports coming in from seemingly everywhere. The slow speeds turned out to be the result of the SSD controller working extra hard to apply error correction to the data coming in from flash that was (reportedly) miscalibrated at the factory. This miscalibration caused the EVO to incorrectly adapt to cell voltage drifts over time (an effect that occurs in all flash-based storage – TLC being the most sensitive). Ambient temperature could even impact the slower read speeds as the controller was working outside of its expected load envelope and thermally throttled itself when faced with bulk amounts of error correction.
An example of file read speed slowing relative to age, thanks to a tool developed by Techie007.
Once the community reached sufficient critical mass to get Samsung’s attention, they issued a few statements and ultimately pushed out a combination firmware and tool to fix EVOs that were seeing this issue. The 840 EVO Performance Restoration Tool was released just under two months after the original thread on the Overclock.net forums was started. Even counting a quick follow-up update a few weeks later, that was not a bad turnaround considering Intel took three months to correct a firmware issue on one of their own early SSDs. While the Intel patch restored full performance to their X25-M, the Samsung update does not appear to be faring so well now that users have logged a few additional months after applying their fix.