All | Editorial | General Tech | Graphics Cards | Networking | Motherboards | Cases and Cooling | Processors | Chipsets | Memory | Displays | Systems | Storage | Mobile | Shows and Expos
Subject: Graphics Cards | August 17, 2018 - 02:59 PM | Sebastian Peak
Tagged: VideoCardz, video card, rumor, RTX 2080 Ti, RTX 2080, report, pcb, nvidia, leak, graphics, gpu
The staff at VideoCardz.com have been very busy of late, posting various articles on rumored NVIDIA graphics cards expected to be revealed this month. Today in particular we are seeing more (and more) information and imagery concerning what seems assured to be RTX 2080 branding, and somewhat surprising is the rumor that the RTX 2080 Ti will launch simultaneously (with a reported 4352 CUDA cores, no less).
Reported images of MSI GAMING X TRIO variants of RTX 2080/2080 Ti (via VideoCardz)
From the reported product images one thing in particular stands out, as the memory for each card appears unchanged from the current GTX 1080 and 1080 Ti cards, at 8GB and 11GB, respectively (though a move from GDDR5X to GDDR6 has also been rumored/reported).
Even (reported) PCB images are online, with this TU104-400-A1 quality sample pictured on Chiphell via VideoCardz.com:
The TU104-400-A1 pictured is presumed to be the RTX 2080 GPU (Chiphell via VideoCardz)
Other product images from AIB partners (PALIT and Gigabyte) were recently posted over at VideoCardz.com if you care to take a look, and as we near a likely announcement it looks like the (reported) leaks will keep on coming.
Subject: Editorial | August 17, 2018 - 09:00 AM | Jim Tanous
Tagged: video, Ryan Shrout, pcper mailbag
It's time for the PCPer Mailbag, our weekly show where Ryan and the team answer your questions about the tech industry, the latest and greatest GPUs, the process of running a tech review website, and more!
Yeah, OK, we missed a few weeks. It's all Jim's fault. Anyway, Ryan's back to tackle these questions:
00:22 - SATA cable failures?
01:54 - Tiered storage for consumers? Windows Storage Spaces vs. StoreMI?
04:29 - Low-end PC gaming vs. future consoles?
07:11 - Ryzen cores on future consoles?
10:34 - GPU for 1440p HDR ultrawide?
12:25 - TR4 socket issue?
13:26 - Why doesn't Intel make RAM?
14:40 - Negative pressure PC case?
16:05 - Normalizing RAM prices?
Want to have your question answered on a future Mailbag? Leave a comment on this post or in the YouTube comments for the latest video. Check out new Mailbag videos each Friday!
Be sure to subscribe to our YouTube Channel to make sure you never miss our weekly reviews and podcasts, and please consider supporting PC Perspective via Patreon to help us keep videos like our weekly mailbag coming!
Subject: General Tech | August 16, 2018 - 03:16 PM | Alex Lustenberg
Tagged: xeon, video, Turing, Threadripper, ssd, Samsung, QLC, podcast, PA32UC, nvidia, nand, L1TF, Intel, DOOM Eternal, asus, amd, 660p, 2990wx, 2950x
PC Perspective Podcast #509 - 08/16/18
Join us this week for discussion on Modded Thinkpads, EVGA SuperNOVA PSUs, and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, Allyn Malventano
Peanut Gallery: Ken Addison, Alex Lustenberg
Program length: 1:35:10
There is no 3
Week in Review:
News items of interest:
Picks of the Week:
Subject: General Tech | August 16, 2018 - 02:28 PM | Jeremy Hellstrom
Tagged: amd, threadripper 2, 2990wx, overclocking, LN2
The low-cost, workstation-class 2990WX has been verified as running at 5.955GHz on an MSI MEG X399 Creation board, with the help of a lot of liquid nitrogen. The Inquirer has links to the setup that Indonesian overclocker Ivan Cupa needed in order to manage this feat, which required fans to cool certain portions of the motherboard as well. You are not likely to see this setup installed in a server room, but the achievement is no less impressive; that is an incredible frequency to reach. Check it out in all its glory.
"So far, it would seem that AMD is on top when it comes to willy-waving, though it's worth noting that overclocked performance is a tad nebulous and real-world in-app performance is really where choosing an Intel or AMD chip comes to play."
Here is some more Tech News from around the web:
- TSMC sees pickup in orders for mining ASICs @ DigiTimes
- ARM takes aim at Intel with its laptop-class processor ambitions @ The Inquirer
- Foreshadow and Intel SGX software attestation: 'The whole trust model collapses' @ The Register
- Intel’s 10nm Cannon Lake chip gets another outing in new NUC mini PC @ Ars Technica
- IoT shouters Chirp get themselves added to Microsoft Azure IoT @ The Register
- What Are the Best CCleaner Alternatives? @ TechSpot
- Cougar Armor S Gaming Chair @ TechPowerUp
- NikKTech & 1MORE Feel The Sound European Giveaway
Subject: General Tech | August 15, 2018 - 03:45 PM | Jeremy Hellstrom
Tagged: gaming, jagged alliance rage
The first and second Jagged Alliance games, and to an extent the add-on to JA2, were incredible games for those who liked turn-based tactical combat. From there it was all downhill, as the original idea was corrupted into Jagged Alliance Online and the Kickstarted Jagged Alliance: Flashback. The devs behind JA Online are now working on Jagged Alliance: Rage!, which puts you in control of some aged mercenaries susceptible to infections and permanent injuries. That mechanic is new and might indicate there is hope for the game yet, especially at the $20 price tag that has been chosen. Take a peek at the announcement video over at Rock, Paper, SHOTGUN.
"In Jagged Alliance: Rage! you are constantly on the brink of breakdown. Badly equipped and outnumbered, it’s up to the player to lead their seasoned mercenaries in tactical turn-based missions and to light the spark of a revolution."
Here is some more Tech News from around the web:
- Doom Eternal embraces its retro roots in tons of new QuakeCon footage @ Rock, Paper, SHOTGUN
- Monster Hunter: World Benchmark Performance Analysis @ TechPowerUp
- Frag for free as Quake Champions drops its initial entry fee @ Rock, Paper, SHOTGUN
- The hottest new board games from Gen Con 2018 @ Ars Technica
- Total War: Rome 2 gets prettied up, expanded and sprouts some family trees @ Rock, Paper, SHOTGUN
- How Many FPS Do You Need? @ TechSpot
- Fallout 76 turns the gaming trolls into targets @ HEXUS
- Wot I Think - Phantom Doctrine @ Rock, Paper, SHOTGUN
Subject: General Tech | August 15, 2018 - 02:42 PM | Jeremy Hellstrom
Tagged: SK Hynix, Terabyte, toshiba, QLC NAND
This year at the Flash Memory Summit big is in, as Toshiba unveiled an 85TB 2.5" SSD and suggested a 20TB M.2 drive is not far off. SK Hynix will release a 64TB 2.5" SSD built on 1Tbit dies, which analysts expect to offer somewhat improved reads and writes compared to their previous offerings. The two companies will be using 96-layer QLC 3D NAND in these drives, and The Register expects we will see them use an NVMe interface as opposed to SATA. Check out the story for more detail on these drives as well as what Intel is working on.
"The Flash Memory Summit saw two landmark capacity announcements centred on 96-layer QLC (4bits/cell) flash that seemingly herald a coming virtual abolition of workstation and server read-intensive flash capacity constraints."
Here is some more Tech News from around the web:
- John McAfee lashes out at Bitfi 'hackers' @ The Inquirer
- A Community-Run ISP Is the Highest Rated Broadband Company In America @ Slashdot
- Intel finally emits Puma 1Gbps modem fixes – just as new ping-of-death bug emerges @ The Register
- Bitcoin Sinks Below $6,000 as Almost Everything Crypto Tumbles @ Slashdot
- Intel to launch X599 platform for its 28-core Skylake-X CPU @ The Inquirer
- The Ars Technica Back to School buying guide
- It's official: TLS 1.3 approved as standard while spies weep @ The Register
- Three more data-leaking security holes found in Intel chips as designers swap security for speed @ The Register
- An Early Look At The L1 Terminal Fault "L1TF" Performance Impact On Virtual Machines (Foreshadow) @ Phoronix
Subject: General Tech | August 15, 2018 - 10:59 AM | Ken Addison
Tagged: sysmark, sysmark 2018, bapco, benchmarks
SYSMark is an application-based benchmarking suite used by many PC OEMs and enterprises to evaluate hardware deployments, as well as by us here at PC Perspective to evaluate system performance.
By using a variety of widely used applications such as Microsoft Office and the Adobe Creative Suite, SYSMark can provide insight into the performance levels of typical user activities like Productivity, which can be difficult to quantify otherwise.
As part of the upgrade to SYSMark 2018, the applications used to test are updated as well, including Microsoft Office 2016, Google Chrome version 65, Adobe Acrobat Pro DC, Adobe Photoshop CC (2018), CyberLink PowerDirector 15, Adobe Lightroom Classic CC, and AutoIT.
SYSMark 2018 is available today from BAPCo's online store.
Subject: Graphics Cards | August 14, 2018 - 01:08 AM | Jeremy Hellstrom
Tagged: Siggraph, ray tracing, quadro rtx 8000, quadro rtx 5000, nvidia, jensen
The attempt to describe the visual effects Jensen Huang showed off at his SIGGRAPH keynote is bound to fail, not that this has ever stopped any of us before. If you have seen the short demo movie released earlier this year in cooperation with Epic and ILMxLAB, you have an idea of what they can do with ray tracing. However, they pulled a fast one on us by hiding the hardware the demo ran on; it was not pre-rendered, but was in fact our first look at their real-time ray tracing. The hardware required for this feat is the brand new RTX series, and the specs are impressive.
The ability to process 10 GigaRays per second means that each and every pixel can be influenced by numerous rays of light, perhaps 100 per pixel in a perfect scenario with clean inputs, or 5-20 in cases where the AI de-noiser is required to calculate missing light sources or occlusions, in real time. The card itself also functions nicely as a light source. The ability to perform 16 TFLOPS and 16 TIPS means this card is happy doing both floating point and integer calculations simultaneously.
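Those GigaRay numbers lend themselves to some quick back-of-the-envelope math. Here is a minimal Python sketch of how a 10 GigaRay/s budget divides across pixels per frame; the resolutions and frame rates are our own illustrative picks, not NVIDIA figures.

```python
# Back-of-the-envelope: how a "10 GigaRays/s" figure translates into a
# per-pixel ray budget. Resolutions and frame rates below are illustrative
# choices on our part, not anything NVIDIA has published.

RAYS_PER_SECOND = 10e9  # claimed RT throughput

def rays_per_pixel(width, height, fps):
    """Rays available per pixel per frame at a given resolution and rate."""
    pixels = width * height
    rays_per_frame = RAYS_PER_SECOND / fps
    return rays_per_frame / pixels

for w, h, fps in [(1920, 1080, 60), (2560, 1440, 60), (3840, 2160, 60)]:
    print(f"{w}x{h} @ {fps} fps: {rays_per_pixel(w, h, fps):.1f} rays/pixel")
```

At 4K and 60 fps the budget works out to roughly 20 rays per pixel, which lines up with the denoiser-assisted range of 5-20 rays quoted for less-than-perfect inputs.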
The die itself is significantly larger than the previous generation at 754 mm², and will sport a 300W TDP to keep it in line with the PCIe spec; though we will run it through the same tests as the RX 480 to see how well they did, if we get the chance. 30W of that total power budget is devoted to the onboard USB controller, which implies support for VirtualLink.
The cards can be used in pairs, utilizing Jensen's chest decoration, more commonly known as an NVLink bridge, and more than one pair can be run in a system, but you will not be able to connect three or more cards directly.
As that will give you up to 96GB of GDDR6 for your processing tasks, it is hard to consider that limiting. The price is rather impressive as well; compared to previous render farms, such as the rather tiny one below, you are looking at a tenth the cost to power your movie with RTX cards. The card is not limited to proprietary engines or programs either, with the DirectX and Vulkan APIs being supported in addition to Pixar's software. Their Material Definition Language will be made open source, allowing for even broader usage for those who so desire.
You will of course wonder what this means in terms of graphical eye candy, either pre-rendered quickly for your later enjoyment or else in real time if you have the hardware. The image below attempts to show the various features which RTX can easily handle. Mirrored surfaces can be emulated with multiple reflections accurately represented, again handled on the fly instead of being preset, so soon you will be able to see around corners.
It also introduces a new type of anti-aliasing called DLAA, and there are no prizes for guessing what the DL stands for. DLAA works by taking an already anti-aliased image and training itself to provide even better edge smoothing, though at a processing cost. As with most other features on these cards, it is not the complexity of the scene which has the biggest impact on calculation time but rather the number of pixels, as each pixel has numerous rays associated with it.
This also allows significantly faster processing than Pascal; not the small evolutionary changes we have become accustomed to, but more of a revolutionary change.
In addition to effects in movies and other video there is another possible use for Turing based chips which might appeal to the gamer, if the architecture reaches the mainstream. With the ability to render existing sources with added ray tracing and de-noising features it might be possible for an enterprising soul to take an old game and remaster it in a way never before possible. Perhaps one day people who try to replay the original System Shock or Deus Ex will make it past the first few hours before the graphical deficiencies overwhelm their senses.
We expect to see more from NVIDIA tomorrow so stay tuned.
Subject: General Tech | August 13, 2018 - 07:43 PM | Ken Addison
Tagged: turing, siggraph 2018, rtx, quadro rtx 8000, quadro rtx 6000, quadro rtx 5000, quadro, nvidia
Today at the professional graphics-focused SIGGRAPH conference, NVIDIA's Jen-Hsun Huang has unveiled details on their much-rumored next GPU architecture, codenamed Turing.
At the core of the Turing architecture are what NVIDIA refers to as two "engines" – one for accelerating Ray Tracing, and the other for accelerating AI Inferencing.
The Ray Tracing units are called RT Cores and are not to be confused with the NVIDIA RTX technology for real-time ray tracing that we saw announced at GDC this year. There, NVIDIA was using their OptiX AI-powered denoising filter to clean up ray-traced images, allowing them to save on rendering resources, but the actual ray tracing was still being done on the GPU cores themselves.
Now, these RT cores will perform the ray calculations themselves at what NVIDIA is claiming is up to 10 GigaRays/second, or up to 25X the performance of the current Pascal architecture.
Just like we saw in the Volta-based Quadro GV100, these new Quadro RTX cards will also feature Tensor Cores for deep learning acceleration. It is unclear if these tensor cores remain unchanged from what we saw in Volta or not.
In addition to the RT Cores and Tensor Cores, Turing also features an all-new design for the traditional Streaming Multiprocessor (SM) GPU units. Changes include an integer execution unit that executes in parallel with the floating point datapath, and a new unified cache architecture with double the bandwidth of the previous generation.
NVIDIA claims these changes combined with the up to 4,608 available CUDA cores in the highest configuration will enable up to 16 TFLOPS and 16 trillion integer operations per second.
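As a sanity check on those numbers, the short Python sketch below derives the boost clock implied by 16 TFLOPS across 4,608 cores, assuming the usual convention of two FP32 FLOPs (one fused multiply-add) per core per clock. The resulting clock is our inference, not a published NVIDIA specification.

```python
# Deriving the boost clock implied by the claimed 16 TFLOPS figure,
# assuming the standard convention of 2 FP32 FLOPs (one FMA) per CUDA
# core per clock. The result is our own inference, not an NVIDIA spec.

CUDA_CORES = 4608
TFLOPS = 16.0
FLOPS_PER_CORE_PER_CLOCK = 2  # one fused multiply-add counts as two FLOPs

implied_clock_ghz = (TFLOPS * 1e12) / (CUDA_CORES * FLOPS_PER_CORE_PER_CLOCK) / 1e9
print(f"Implied boost clock: {implied_clock_ghz:.2f} GHz")
```

That lands at roughly 1.74 GHz, a plausible boost clock for a large die on a mature process.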
Alongside the announcement of the Turing Architecture, NVIDIA unveiled the Quadro RTX 5000, 6000, 8000-series products, due in Q4 2018.
In addition to the announcements at SIGGRAPH tonight, NVIDIA is expected to announce consumer GeForce products featuring the Turing architecture next week at an event in Germany.
PC Perspective is at SIGGRAPH and will also be at NVIDIA's event in Germany next week, so stay tuned for more details!
Subject: Processors | August 13, 2018 - 02:18 PM | Jeremy Hellstrom
Tagged: Zen+, Threadripper, second generation threadripper, ryzen, Intel, Core i9, 7980xe, 7960x, 7900x, 2990wx, 2950x
The 2950X and 2990WX are both ThreadRipper 2 chips but are very different beasts under the hood. The 2950X has two active dies, similar to the original chips, while the 2990WX has four active dies, two of which must communicate with the memory subsystem over an Infinity Fabric link to the other two. The W in the naming convention indicates the 2990WX is designed for workstation tasks, and benchmarks support that designation. You will have seen our results here, but there are many other sources to read through. [H]ard|OCP offers up a different set of benchmarks in their review, with a similar result: with ThreadRipper, AMD has a winner. The 2990WX is especially important as it opens up the lucrative lower-cost workstation market for AMD.
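To make the die layout concrete, here is a small Python sketch of the 2990WX topology as described above. The die labels and per-die channel counts are our own schematic assumptions based on the quad-channel TR4 platform, not AMD documentation.

```python
# Schematic of the 2990WX layout described above: only two of the four
# dies have direct memory-controller access; the other two reach DRAM
# over Infinity Fabric hops. Die names and per-die channel counts are
# our own illustrative assumptions (TR4 is a quad-channel platform).

DIES = {
    "die0": {"memory_channels": 2, "memory_access": "direct"},
    "die1": {"memory_channels": 2, "memory_access": "direct"},
    "die2": {"memory_channels": 0, "memory_access": "via Infinity Fabric"},
    "die3": {"memory_channels": 0, "memory_access": "via Infinity Fabric"},
}

direct = [name for name, d in DIES.items() if d["memory_channels"] > 0]
print(f"{len(direct)} of {len(DIES)} dies have direct DRAM access")
```

This asymmetry is why memory-bound workloads behave so differently on the 2990WX than on the fully-fed 2950X.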
"AMD teased us a bit last week by showing off its new 2nd Generation Threadripper 2990WX and 2950X packaging and specifications. This week AMD lets us share all our Threadripper data we have been collecting. The 2990WX is likely a lot different part than many people were expecting, and it turns out that it might usher AMD into a newly created market."
Here are some more Processor articles from around the web:
- AMD's Ryzen Threadripper 2990WX @ The Tech Report
- AMD Ryzen Threadripper 2950X and 2990WX @ Guru of 3D
- AMD Ryzen Threadripper 2990WX & 2950X @ TechSpot
- AMD Ryzen Threadripper 2950X @ TechPowerUp
- AMD Threadripper 2950X Offers Great Linux Performance At $900 USD @ Phoronix
- AMD Threadripper 2990WX Linux Benchmarks: The 32-Core / 64-Thread Beast @ Phoronix
- AMD Threadripper 2990WX Cooling Performance - Testing Five Heatsinks & Two Water Coolers @ Phoronix
Subject: General Tech | August 13, 2018 - 01:49 PM | Jeremy Hellstrom
Tagged: Intel, rumour, release, coffee lake s, i9-9900K, i5-9600K, i7-9700K
According to the various sources The Inquirer has, the Coffee Lake refresh will be launched on the first of October, in time to ensure system builders have models ready for the holidays. The new processors do not offer a compelling upgrade for those with a modern system, as they are very similar to their predecessors. If you have something a little older, however, the three new processors offer increased frequencies and core counts; the 9900K sports a default Boost Clock of 5GHz, which is nothing to sneeze at.
"If you were expecting anything bigger then allow us to disappoint you as, really the ninth-gen chips are mild upgrades on their predecessors, unless Intel has been keeping something very well hidden up its corporate sleeves."
Here is some more Tech News from around the web:
- Intel hands first Optane DIMM to Google, where it'll collect dust until a supporting CPU arrives @ The Register
- Android Pie is borking fast charging on some Pixel XL handsets @ The Inquirer
- Many Google Services on Android Devices and iPhones Store Location Data, Even if Location Sharing is Disabled From Privacy Settings @ Slashdot
- The off-brand 'military-grade' x86 processors, in the library, with the root-granting 'backdoor' @ The Register
- NETGEAR Orbi (RBK23) AC2200 Mesh Wi-Fi System @ Kitguru
- Reolink Argus 2 Wire-Free 1080p Security Camera Review @ NikKTech
Subject: Cases and Cooling | August 11, 2018 - 11:24 PM | Scott Michaud
Tagged: thermal paste
A couple of weeks ago, GamersNexus published a video and article that benchmarked CPU performance across various thermal paste patterns. It’s well established that the best method of applying the compound is to spread it out as thin as possible, so it fills the gaps with something better than air but doesn’t insulate the parts that would naturally make perfect contact. That takes effort, though, and it’s not clear how much that buys you for modern CPUs with integrated heat-spreaders (IHS).
Video credit: GamersNexus
If you’re attaching a heatsink to a GPU or other bare die ASIC? Different story. Their tests are focused on CPUs with heat spreaders.
Long story short? Not so much difference. The “pea sized” method had a little issue because it didn’t fully cover the IHS, but they went on with the tests because it’s supposed to reflect real-world situations, and that was a real-world type of error. Even still, that corresponded to less than a degree Celsius under load (as measured on an Intel Core i7-8086k). The article mentions something about delidding the CPU, although the photos clearly have an IHS (and that’s the point of the test in the first place) so I’m guessing they only took the IHS off temporarily and replaced it.
It’s interesting how close they ended up. I would have thought that 30 minutes of full load would show at least a few degrees of variance, but apparently not, even with a little patch of uncovered space.
Subject: General Tech | August 10, 2018 - 11:17 PM | Scott Michaud
Tagged: Blender, benchmark
The Blender Foundation is wrapping up development on Blender 2.8, “The Workflow Update”. We have been following it for a while, but today’s announcement caught me by surprise: a benchmark database. It seems simple, right? Blender wants its users to know what hardware is best to use, especially when rendering images in Cycles (which can be damn slow).
A bit lopsided...
The solution is to make a version of Blender that creates and validates benchmarks, then compiles the data on their website. It’s still early days for this, with just 2052 entries (at the time of writing), the majority of which were from Linux boxes. Also, they only break it down into a handful of categories: Fastest CPU, Fastest Compute Device, Submissions Per OS, then a few charts that compare the individual benchmark scenes against one another in a hardware-agnostic fashion. They pledge to add many more metrics in the future.
Personally, I’m curious to see a performance vs OS metric. Some benchmarks back from 2016 (Blender 2.77 on an EVGA GTX 980 Ti) show Linux out-performing Windows 10 by over 2x, with Windows 7 landing in between (closer to Linux than Windows 10). At the time, it was attributed to NVIDIA’s CUDA driver being poorly optimized for the newer OS, which seems to be validated by the close showing of the GTX 1080 on Windows 10 and Linux, but I would like to see a compiled list of up-to-date results. Soon, I may be able to.
Subject: General Tech | August 10, 2018 - 10:45 PM | Scott Michaud
Tagged: pc gaming, discord, Rust, mozilla, steam, GOG
Starting with a slowly-ramping group of ~50,000 Canadians, Discord has begun distributing PC games. Specifically, there will be two services for paying members of the Discord Nitro beta program: a store, where games can be purchased as normal, and a library of other games that are available with the (aforementioned) Discord Nitro subscription.
“It’s kinda like Netflix for games.”
When talking about subscription services for video games, I am typically hesitant. That said, the previous examples were, like, OnLive, where they planned on making games that ran exclusively on that platform. The concern is that, when those games disappear from the service, they could be lost to society as works of art. (Consoles and DRM also play into this topic.)
In this case, however, it looks like they are just getting into curated, off-the-shelf PC games. While GoG holds its own, it will be nice to see another contender to Steam in the Win32 (maybe Linux?) games market. (I say Win32 because of the developer certification requirements for Windows Store / UWP.)
Dead horse rant aside, Discord is doing games… including a subscription service. Yay.
One more aspect to this story!
Over the last five-or-so years, Mozilla has been talking about upgrading their browser to use a safer, multi-threaded, functional job system via their home-grown programming language, Rust. Turns out: Discord used this language for a lot of the store (and surrounding SDKs). Specifically, the native code for the store, the game SDK (with C, C++, and C# bindings), and the multiplayer network layer are all in Rust. This should make it fast and secure, which were the two design goals for Rust in the first place.
It was intended for web browsers after all...
Subject: General Tech | August 10, 2018 - 10:16 PM | Scott Michaud
Tagged: pc gaming, doom, bethesda
Bethesda, as usual, held a keynote at their QuakeCon event in the Dallas / Fort Worth region of Texas. So far so good. They then revealed DOOM Eternal with over 15 minutes of gameplay spread across three brutal segments.
Even though the reboot had a lot more… airborne activity… than the original, the new “meat hook” ability allows the player to grapple toward enemies. (At least, I only saw them grapple enemies. Maybe other things too? Probably not, though.) While not exactly a new mechanic, it looks like it flows well with DOOM’s faster-paced gameplay.
DOOM Eternal is coming to the PC, PS4, Xbox One, and even the Nintendo Switch. No release date has been announced.
Subject: Cases and Cooling | August 10, 2018 - 09:55 PM | Scott Michaud
Tagged: cases, caselab, bankrupt
Due in part to the tariffs on aluminum, enthusiast case manufacturer CaseLabs has shut down.
It happened quite abruptly, too. Their caselabs.net site, linked in their Twitter profile, has been up and down while I've been writing this post, and their store page, while it can load in a browser, will not accept any further orders. In fact, some existing orders are expected to be canceled. They believe that they can ship all the orders for individual parts, but some of their backlog of full cases will not make it.
Obviously, this sucks for everyone involved. Some of their cases, while on the expensive side to say the least, looked interesting, particularly in terms of customization. I’m looking at one that had the option for both front-panel HDMI and USB type C.
No specifics have been announced about their bankruptcy and liquidation plan.
Subject: Cases and Cooling | August 10, 2018 - 02:57 PM | Jeremy Hellstrom
Tagged: watercooler, EKWB, EK-MLC Phoenix, AIO, 360mm radiator
If you have space in your case and a need to move a lot of heat, the 360mm EK-MLC Phoenix might be a good choice. It comes with all the features you expect from EKWB: Vardar fans, quick-connect tubing, and compatibility with most modern sockets, including ThreadRipper with an extra attachment. You will notice it can include the GPU in the cooling loop with the purchase of additional modules. The investment is somewhat high; NikkTech priced it at 270 Euros for just the CPU, and around 400 Euros if you include the parts to cool your GPU. Is that worth it?
"Following the massive success of the EK-XLC Predator line of AIO liquid coolers EK Waterblocks recently released the EK-MLC Phoenix line and on our test bench today we have the top of the line tri-fan 360 model."
Here are some more Cases & Cooling reviews from around the web:
- ID-Cooling Dashflow 360 @ TechPowerUp
- Corsair SPEC Omega RGB @ Guru of 3D
- CORSAIR Spec-Omega RGB Mid-Tower Tempered Glass Gaming Case Review @ NikKTech
- Fractal Design Focus G Mini @ TechPowerUp
- COUGAR PANZER-G Tempered Glass Gaming Mid-Tower Review @ NikKTech
Subject: General Tech | August 10, 2018 - 12:37 PM | Jeremy Hellstrom
Tagged: Intel, QLC NAND, 600p, D5-P4320, PCIe SSD, M.2
Not to be outdone by Samsung, Intel has also announced new QLC-based SSDs, the 660p series and the D5-P4320. The two series of drives rely on SLC caches to extend the lifetime of the QLC flash, which is not as robust as other varieties; the 660p is rated at 0.1 drive writes per day, while the D5-P4320 is rated for 0.9 sequential drive writes per day, or 0.2 for random writes. The two drives also share a five-year warranty in common. The D5-P4320 sports a capacity of 7.68TB, whereas the 660p comes in more affordable 512GB, 1TB and 2TB capacities. Drop by The Inquirer for more information on the two new series of NVMe SSDs from Intel.
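For a sense of what those DWPD ratings mean in practice, here is a rough Python sketch converting drive writes per day into total terabytes written over the five-year warranty. The arithmetic is ours, and Intel's actual rated TBW figures may differ somewhat.

```python
# Rough endurance math: DWPD (drive writes per day) x capacity x warranty
# period gives total terabytes written. The five-year warranty and DWPD
# ratings come from the article; the conversion itself is just arithmetic,
# and vendors' official TBW ratings may be rounded differently.

def total_writes_tb(capacity_tb, dwpd, warranty_years=5):
    """Total terabytes written allowed over the warranty period."""
    return capacity_tb * dwpd * 365 * warranty_years

# Consumer drive at 0.1 DWPD (2TB model) vs D5-P4320 at 0.9 sequential DWPD
print(f"Consumer 2TB @ 0.1 DWPD:   {total_writes_tb(2.0, 0.1):,.0f} TBW")
print(f"D5-P4320 7.68TB @ 0.9 DWPD: {total_writes_tb(7.68, 0.9):,.0f} TBW")
```

The gap between 365 TBW and over 12,000 TBW shows just how differently the consumer and data center parts are provisioned despite both being QLC.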
"The SSD 660p is a single-sided M.2 format consumer drive following on from the 600p with 52GB, 1TB and 2TB capacities. The 600p topped out at 1TB."
Here is some more Tech News from around the web:
- EmuParadise is binning its ROM library following Nintendo's 'copyright' action @ The Inquirer
- ZX Spectrum Vega+ blows a FUSE: It runs open-source emulator @ The Register
- US-CERT sounds the alarms over North Korean 'KeyMarble' Trojan @ The Inquirer
- Windows 10 Enterprise Getting 'InPrivate Desktop' Sandboxed Execution Feature @ Slashdot
- AI renames BBC sound effect files, hilarity ensues @ The Inquirer
Subject: Mobile | August 10, 2018 - 09:08 AM | Tim Verry
Tagged: X12 Modem, snapdragon 670, snapdragon, qualcomm 600, qualcomm, LTE
Qualcomm recently introduced the Snapdragon 670 mobile platform that brings upgraded processing and power efficiencies to the 600-series lineup while being very close to the specifications of the new Snapdragon 710 SoC. Based on the 10nm LPP design, the Snapdragon 670 uses up to 30% less power (that number is while recording 4K video and relates to the Spectra ISP, overall power efficiency gains are likely less but still notable) while offering up to 15% more CPU and 25% more GPU processing power versus its predecessor. The new mobile processor is also better optimized for AI with up to 1.8X AI Engine performance mostly thanks to upgraded Hexagon DSP co-processors and ARM CPU cores.
The Snapdragon 670 features a Kryo 360 CPU with two ARM Cortex-A75-based cores at 2.0 GHz and six Cortex-A55-based cores at 1.7 GHz, along with bringing 200-series DSPs and ISPs to the Snapdragon 600-series in the form of the Hexagon 685 DSP and Spectra 250 ISP. As far as graphics, the Snapdragon 670 will use a new Adreno 615 GPU, which should be very close to the GPU in the SD710 (Adreno 616). The new processor supports a single 24MP camera or dual 16MP cameras and can record 4K video at up to 30 fps. According to AnandTech, Qualcomm has stripped out the 10-bit HDR pipelines as well as lowering the maximum supported display resolution. Another differentiator between the SD670 and the SD710 is that the SD670 uses the same Snapdragon X12 LTE modem as the SD660 rather than the X15 LTE modem of the 710 processor, meaning that maximum cellular download speeds are capped at 600 Mbps versus 800 Mbps.
The Snapdragon 670 and Snapdragon 710 are reportedly pin- and software-compatible, which will allow smartphone manufacturers to use either chip in the same mobile platform; however, the chips are allegedly different designs, and the SD670 is not merely a lower-binned SD710, which is interesting if true.
Qualcomm’s Snapdragon 670 appears to be a decent midrange offering that is very close to the specifications of the SD710 while being cheaper and much more power efficient than the older SD660. This should enable some midrange smartphone designs that can offer similar performance with much better battery life.
Of course, depending on the workload, the newer SD670 may or may not live up to the alleged 15% CPU performance boost versus 2017's SD660, as the SD670 loses two of the big ARM cores in the big.LITTLE setup versus the SD660 while having two more smaller cores. The two A75 cores (2GHz) and six A55 cores (1.7GHz) are faster per core than the four A73 (2.2GHz) and four A53 (1.8GHz), but if a single app is heavily multithreaded the older chip may still hold its own. The bright side is that, worst case, the new chip should at least not be that much slower at most tasks, and at best it delivers better battery life, especially with lots of background tasks running. More efficient cores and the move from 14nm LPP to 10nm LPP definitely help with that, and you do have to keep in mind that this is a midrange part for midrange smartphones.
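A crude way to see the big.LITTLE trade-off is to total up cores times clocks for each chip, as in the Python sketch below. This ignores the IPC advantage of the newer cores entirely, so treat it as a raw clock budget, not a performance prediction.

```python
# Naive throughput proxy for the big.LITTLE comparison above: sum of
# (core count x clock) per cluster. This deliberately ignores IPC
# differences between A75/A73 and A55/A53 cores, so it is a ceiling on
# raw clock budget, not a benchmark prediction.

def aggregate_core_ghz(clusters):
    """clusters: list of (core_count, clock_ghz) tuples."""
    return sum(n * ghz for n, ghz in clusters)

sd670 = aggregate_core_ghz([(2, 2.0), (6, 1.7)])  # 2x A75 + 6x A55 based
sd660 = aggregate_core_ghz([(4, 2.2), (4, 1.8)])  # 4x A73 + 4x A53 based
print(f"SD670: {sd670:.1f} core-GHz, SD660: {sd660:.1f} core-GHz")
```

On raw clock budget alone the SD660 actually comes out ahead (16.0 vs 14.2 core-GHz), which is exactly why a heavily multithreaded app might still favor the older chip even though the SD670's per-core performance is higher.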
The real deciding factor though in terms of the value proposition of this chip is certainly going to be pricing and the mobile platforms that manufacturers offer it in.
- Qualcomm Snapdragon 710 Specifications
- Qualcomm Launches Snapdragon 660 and 630 Mobile Platforms
- Google, Qualcomm Partner for Faster Android P Upgrade Adoption
Subject: General Tech | August 9, 2018 - 11:10 PM | Tim Verry
Tagged: V-NAND, sata ssd, Samsung, QLC, enterprise ssd
Earlier this week Samsung announced that it has begun mass production on its first consumer solid state drive based on QLC (4 bits per cell) V-NAND. According to the company, the initial drives will offer 4TB capacities and deliver equivalent performance to Samsung’s TLC offerings along with a three year warranty.
Samsung claims that its fourth generation V-NAND flash in QLC mode (with 16 voltage states) with 64 layers is able to offer up to 1Tb per chip. The 4TB SATA SSD uses a 3-bit SSD controller, TurboWrite technology, and 32 1Tb QLC V-NAND chips, and thanks to the write cache (running the V-NAND in SLC or MLC modes) Samsung is able to wring extra performance out of the drive, though it’s obviously limited ultimately by the SATA interface. Specifically, Samsung is promising sequential reads of 540 MB/s and sequential writes of up to 520 MB/s with the new QLC SSD. For comparison, Samsung’s fourth generation V-NAND operating in TLC mode is able to offer up to 256Gb and 512Gb capacities depending on package. Moving to fifth generation V-NAND in TLC mode, Samsung is offering 256Gb per chip capacities (using 96 layers). Scouring the internet, it appears that Samsung has yet to reveal what it expects to achieve from 5th generation V-NAND in QLC mode. It should be able to at least match the 1Tb of 4th generation QLC V-NAND with the improved performance and efficiencies of the newer generation (including the faster Toggle DDR 4.0 interface), though I would guess Samsung could get more, maybe topping out at as much as 1.5Tb (eventually, and if they use 96 layers; I found conflicting info on this). In any event, for further comparison, Intel and Micron have been able to get 1Tb QLC 3D NAND flash chips, and Western Digital and Toshiba are working on 96-layer BiCS4 which is expected to offer up to 1.33Tb capacities when run in 4-bits per cell mode (QLC).
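The cell-level math behind QLC is simple enough to sketch in a few lines of Python: bits per cell determine the number of voltage states a cell must distinguish, and chip count times die density gives the drive capacity quoted above.

```python
# Cell-level arithmetic behind QLC: bits per cell -> voltage states,
# and chip count x die density -> drive capacity. The 16 states, 1Tb
# dies, and 32-chip count come from the article; the rest is unit math.

def voltage_states(bits_per_cell):
    """Number of distinct voltage levels a cell must hold."""
    return 2 ** bits_per_cell

def drive_capacity_tb(chips, tbit_per_chip):
    """Drive capacity in TB from chip count and per-chip density in Tbit."""
    return chips * tbit_per_chip / 8  # 8 bits per byte

print(f"QLC (4 bits/cell): {voltage_states(4)} voltage states")
print(f"32 x 1Tb QLC chips = {drive_capacity_tb(32, 1):.0f} TB drive")
```

Doubling the states per cell from TLC's 8 to QLC's 16 is what makes endurance and write speed harder, which is why the SLC/MLC-mode write cache matters so much here.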
It seems that Samsung is playing a bit of catch up when it comes to solid state storage using QLC though they do still have a bit of time to launch products this year along with the other players. Samsung claims that it will launch its 4TB 2.5” consumer SSD first with 1TB and 2TB models to follow later this year.
Interestingly (and more vaguely), Samsung mentioned in its press release that it plans to begin rolling out M.2 SSDs for the enterprise market and that it will begin mass producing fifth generation 4-bit V-NAND later this year.
I am looking forward to more details on Samsung’s plans for QLC and especially on the specifications of fifth generation 4-bit V-NAND and the drives that it will enable for both consumer systems and the data center markets.
What are your thoughts on Samsung’s QLC V-NAND?
- Intel SSD 660p 1TB SSD Review - QLC Goes Mainstream
- Intel, Micron Jointly Announce QLC NAND FLASH, 96-Layer 3D Development
- Micron Launches 5210 ION - First QLC NAND Enterprise SATA SSD
- FMS 2017: Samsung Announces QLC V-NAND, 16TB NGSFF SSD, Z-SSD V2, Key Value
- Toshiba and Western Digital announce QLC and 96-Layer BiCS Flash