
Rage against the remake; Jagged Alliance exhumed once again

Subject: General Tech | August 15, 2018 - 03:45 PM |
Tagged: gaming, jagged alliance rage

The first and second Jagged Alliance games, and to an extent the add-on to JA2, were incredible games for those who liked turn-based tactical shooters.  From there it was all downhill, as the original idea was corrupted into Jagged Alliance Online and the Kickstarted Jagged Alliance: Flashback.  The devs behind JA Online are now working on Jagged Alliance: Rage!, which puts you in control of some aged mercenaries susceptible to infections and permanent injuries.  That mechanic is new and might indicate there is hope for the game yet, especially at the $20 price tag that has been chosen.  Take a peek at the announcement video over at Rock, Paper, SHOTGUN.

70.jpg

"In Jagged Alliance: Rage! you are constantly on the brink of breakdown. Badly equipped and outnumbered, it’s up to the player to lead their seasoned mercenaries in tactical turn-based missions and to light the spark of a revolution."

Here is some more Tech News from around the web:

Tech Talk

 

The biggest little storehouse in Texas ... terabytes on gumsticks

Subject: General Tech | August 15, 2018 - 02:42 PM |
Tagged: SK Hynix, Terabyte, toshiba, QLC NAND

This year at the Flash Memory Summit big is in, as Toshiba unveiled an 85TB 2.5" SSD and suggested a 20TB M.2 drive is not far off.  SK Hynix will release a 64TB 2.5" SSD built on a 1Tbit die, which analysts expect to offer somewhat improved reads and writes compared to their previous offerings.  The two companies will be using 96-layer QLC 3D NAND in these drives, and The Register expects we will see them use an NVMe interface as opposed to SATA.  Check out the story for more detail on these drives as well as what Intel is working on.
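For a sense of scale, a 64TB drive built from 1Tbit dies needs a remarkable number of them. This is just our own quick arithmetic, ignoring overprovisioning and spare area:

```python
# Terabytes to terabits: 8 Tbit per TB, then divide by per-die capacity.
drive_tb = 64
die_tbit = 1
dies = drive_tb * 8 // die_tbit
print(dies)  # 512 dies, which is why dense 96-layer QLC stacking matters
```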

index.jpg

"The Flash Memory Summit saw two landmark capacity announcements centred on 96-layer QLC (4bits/cell) flash that seemingly herald a coming virtual abolition of workstation and server read-intensive flash capacity constraints."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

BAPCo Launches SYSMark 2018 Benchmarking Suite for PCs

Subject: General Tech | August 15, 2018 - 10:59 AM |
Tagged: sysmark, sysmark 2018, bapco, benchmarks

SYSMark is an application-based benchmarking suite used by many PC OEMs and enterprises to evaluate hardware deployments, as well as by us here at PC Perspective to evaluate system performance.

SMEXP.png

By using a variety of widely used applications such as Microsoft Office and the Adobe Creative Suite, SYSMark can provide insight into the performance levels of typical user activities like Productivity, which can be difficult to quantify otherwise.

As part of the upgrade to SYSMark 2018, the applications used to test are updated as well, including Microsoft Office 2016, Google Chrome version 65, Adobe Acrobat Pro DC, Adobe Photoshop CC (2018), CyberLink PowerDirector 15, Adobe Lightroom Classic CC, and AutoIt 3.3.14.2.

SYSMark 2018 is available today from BAPCo's online store.

Source: BAPCo

Real time ray tracing in still life

Subject: Graphics Cards | August 14, 2018 - 01:08 AM |
Tagged: Siggraph, ray tracing, quadro rtx 8000, quadro rtx 5000, nvidia, jensen

Any attempt to describe the visual effects Jensen Huang showed off at his SIGGRAPH keynote is bound to fail, not that this has ever stopped any of us before.  If you have seen the short demo movie released earlier this year in cooperation with Epic and ILMxLAB, you have an idea of what they can do with ray tracing.  However, they pulled a fast one on us: the footage was not pre-rendered but was actually our first look at their real-time ray tracing, running on hardware they kept hidden at the time.  The hardware required for this feat is the brand new RTX series, and the specs are impressive.

rtx specs.JPG

The ability to process 10 GigaRays per second means that each and every pixel can be influenced by numerous rays of light, perhaps 100 per pixel in a perfect scenario with clean inputs, or 5-20 in cases where their AI denoiser is required to calculate missing light sources or occlusions, all in real time.  The card itself functions well as a light source too.  The ability to perform 16 TFLOPS and 16 TIPS means this card is happy doing floating point and integer calculations simultaneously.
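To put that 10 GigaRay figure in rough perspective, here is a quick back-of-the-envelope budget. The resolutions and frame rate below are our own illustrative assumptions, not NVIDIA's numbers:

```python
def rays_per_pixel(gigarays_per_s, width, height, fps):
    """Average ray budget per pixel per frame at a given resolution and frame rate."""
    return gigarays_per_s * 1e9 / (width * height * fps)

print(round(rays_per_pixel(10, 1920, 1080, 60)))  # ~80 rays/pixel at 1080p60
print(round(rays_per_pixel(10, 3840, 2160, 60)))  # ~20 rays/pixel at 4K60
```

Those numbers line up with the claimed range: plenty of rays at lower resolutions, denoiser territory at 4K.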

die.JPG

The die itself is significantly larger than the previous generation at 754 mm², and the card will sport a 300W TDP to keep it in line with the PCIe spec; though we will run it through the same tests as the RX 480 to see how well they did, if we get the chance.  30W of the total power budget is devoted to the onboard USB controller, which implies support for VirtualLink.

bigg.JPG

The cards can be used in pairs, utilizing Jensen's chest decoration, more commonly known as an NVLink bridge, and more than one pair can be run in a system, but you will not be able to connect three or more cards directly.

models.JPG

As a pair will give you up to 96GB of GDDR6 for your processing tasks, it is hard to consider that limiting.  The price is rather impressive as well; compared to previous render farms, such as the rather tiny one below, you are looking at a tenth of the cost to power your movie with RTX cards.  The card is not limited to proprietary engines or programs either, with the DirectX and Vulkan APIs being supported in addition to Pixar's software.  Their Material Definition Language will be made open source, allowing for even broader usage for those who so desire.

copmare.JPG

uses.JPG

You will of course wonder what this means in terms of graphical eye candy, either pre-rendered quickly for your later enjoyment or rendered in real time if you have the hardware.  The image below attempts to show the various features which RTX can easily handle.  Mirrored surfaces can be emulated with multiple reflections accurately represented, again handled on the fly instead of being precomputed, so soon you will be able to see around corners.

itdoesthis.JPG

It also introduces a new type of anti-aliasing called DLAA, and there is no prize for guessing what the DL stands for.  DLAA works by taking an already anti-aliased image and training itself to provide even better edge smoothing, though at a processing cost.  As with most other features on these cards, it is not the complexity of the scene which has the biggest impact on calculation time but rather the number of pixels, as each pixel has numerous rays associated with it.

dlaa.JPG

These new features also allow significantly faster processing than Pascal; not the small evolutionary change we have become accustomed to, but more of a revolutionary one.

zoom.JPG

In addition to effects in movies and other video, there is another possible use for Turing-based chips which might appeal to the gamer, if the architecture reaches the mainstream.  With the ability to render existing sources with added ray tracing and denoising features, it might be possible for an enterprising soul to take an old game and remaster it in a way never before possible.  Perhaps one day people who try to replay the original System Shock or Deus Ex will make it past the first few hours before the graphical deficiencies overwhelm their senses.

We expect to see more from NVIDIA tomorrow so stay tuned.

 

Source: NVIDIA

NVIDIA Officially Announces Turing GPU Architecture at SIGGRAPH 2018

Subject: General Tech | August 13, 2018 - 07:43 PM |
Tagged: turing, siggraph 2018, rtx, quadro rtx 8000, quadro rtx 6000, quadro rtx 5000, quadro, nvidia

Today at the professional graphics-focused SIGGRAPH conference, NVIDIA's Jen-Hsun Huang has unveiled details on their much-rumored next GPU architecture, codenamed Turing.

NVIDIA_Turing_Architecture_1534184555.png

At the core of the Turing architecture are what NVIDIA is referring to as two "engines"– one for accelerating ray tracing, and the other for accelerating AI inferencing.

The ray tracing units are called RT Cores and are not to be confused with the announcement of NVIDIA RTX technology for real-time ray tracing that we saw at GDC this year. There, NVIDIA was using their OptiX AI-powered denoising filter to clean up ray-traced images, allowing them to save on rendering resources, but the actual ray tracing was still being done on the GPU cores themselves.

Now, these RT Cores will perform the ray calculations themselves at what NVIDIA claims is up to 10 GigaRays/second, or up to 25X the performance of the current Pascal architecture.

Volta-Tensor-Core.jpg

Just like we saw in the Volta-based Quadro GV100, these new Quadro RTX cards will also feature Tensor Cores for deep learning acceleration. It is unclear if these tensor cores remain unchanged from what we saw in Volta or not.

In addition to the RT Cores and Tensor Cores, Turing also features an all-new design for the traditional Streaming Multiprocessor (SM) GPU units. Changes include an integer execution unit that executes in parallel with the floating point datapath, and a new unified cache architecture with double the bandwidth of the previous generation.

NVIDIA claims these changes combined with the up to 4,608 available CUDA cores in the highest configuration will enable up to 16 TFLOPS and 16 trillion integer operations per second.
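As a sanity check on those figures: assuming each CUDA core retires one fused multiply-add (two FLOPs) per clock, the usual convention for peak TFLOPS ratings, the quoted numbers imply a plausible boost clock:

```python
# Peak TFLOPS = cores * 2 FLOPs/clock * clock, so solve for the implied clock.
cores = 4608
tflops = 16
clock_ghz = tflops * 1e12 / (cores * 2) / 1e9
print(round(clock_ghz, 2))  # ~1.74 GHz
```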

quadro-rtx-lineup.png

Alongside the announcement of the Turing architecture, NVIDIA unveiled the Quadro RTX 5000, 6000, and 8000 products, due in Q4 2018.

In addition to the announcements at SIGGRAPH tonight, NVIDIA is expected to announce the consumer GeForce products featuring the Turing architecture next week at an event in Germany.

PC Perspective is at SIGGRAPH and will be at NVIDIA's event in Germany next week, so stay tuned for more details!

Source: NVIDIA

ThreadRipper 2: Die Four Real

Subject: Processors | August 13, 2018 - 02:18 PM |
Tagged: Zen+, Threadripper, second generation threadripper, ryzen, Intel, Core i9, 7980xe, 7960x, 7900x, 2990wx, 2950x

The 2950X and 2990WX are both ThreadRipper 2 chips but are very different beasts under the hood.  The 2950X has two active dies, similar to the original chips, while the 2990WX has four active dies, two of which rely on an Infinity Fabric link to the other two to reach the memory subsystem.  The W in the naming convention indicates the 2990WX is designed for workstation tasks, and benchmarks support that designation.  You will have seen our results here, but there are many other sources to read through.  [H]ard|OCP offers up a different set of benchmarks in their review, with a similar result: with ThreadRipper, AMD has a winner.  The 2990WX is especially important as it opens up the lucrative lower-cost workstation market for AMD.

DSC05134.JPG

"AMD teased us a bit last week by showing off its new 2nd Generation Threadripper 2990WX and 2950X packaging and specifications. This week AMD lets us share all our Threadripper data we have been collecting. The 2990WX is likely a lot different part than many people were expecting, and it turns out that it might usher AMD into a newly created market."

Here are some more Processor articles from around the web:

Processors

Source: [H]ard|OCP

Coffee Lake S will be released along with the pumpkin spice

Subject: General Tech | August 13, 2018 - 01:49 PM |
Tagged: Intel, rumour, release, coffee lake s, i9-9900K, i5-9600K, i7-9700K

According to The Inquirer's various sources, the Coffee Lake refresh will be launched on the first of October, in time to ensure system builders have models ready for the holidays.  The new processors do not offer a compelling upgrade for those with a modern system, as they are very similar to their predecessors.  If you have something a little older, however, the three new processors offer increased frequencies and core counts; the 9900K sports a default Boost Clock of 5GHz, which is nothing to sneeze at.

118.png

"If you were expecting anything bigger then allow us to disappoint you as, really the ninth-gen chips are mild upgrades on their predecessors, unless Intel has been keeping something very well hidden up its corporate sleeves."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer
Subject: Processors
Manufacturer: AMD

Widening the Offerings

Today, we are talking about something that would have seemed impossible just a few short years ago: a 32-core processor for consumers. While I realize that talking about the history of computer hardware can be considered superfluous in a processor review, I think it's important to understand the context of why this is such a momentous shift for the industry.

May 2016 marked the launch of what was then the highest core count consumer processor ever seen, the Intel Core i7-6950X. At 10 cores and 20 threads, the 6950X was easily the highest performing consumer CPU in multi-threaded tasks but came with a staggering $1,700 price tag. In what we will likely be able to look back on as the peak of Intel's sole dominance of the x86 CPU space, it was an impossible product to recommend to almost any consumer.

Just over a year later saw the launch of Skylake-X with the Intel Core i9-7900X. Retaining the same core count as the 6950X, the 7900X would have been relatively unremarkable on its own. However, a $700 price drop and the future of upcoming 12, 14, 16, and 18-core processors on this new X299 platform showed an aggressive new course for Intel's high-end desktop (HEDT) platform.

This aggressiveness was brought on by the success of AMD's Ryzen platform, and the then upcoming Threadripper platform. Promising up to 16 cores/32 threads, and 64 lanes of PCI Express connectivity, it was clear that Intel would for the first time have a competitor on their hands in the HEDT space that they created back with the Core i7-920.

Fast forward another year, and we have the release of the 2nd Generation Threadripper. Promising to bring the same advancements we saw with the Ryzen 7 2700X, AMD is pushing Threadripper to even more competitive states with higher performance and lower cost. 

DSC05134.JPG

Will Threadripper finally topple Intel from their high-end desktop throne?

Click here to continue reading our review of the Ryzen Threadripper 2950X and 2990WX.

Intro and NNEF 1.0 Finalization

SIGGRAPH 2018 is a huge computer graphics expo that occurs in a seemingly random host city around North America. (Asia has a sister event, called SIGGRAPH Asia, which likewise shuffles around.) In the last twenty years, the North American SIGGRAPH seems to like Los Angeles, which hosted the event nine times over that period, but Vancouver won out this year. As you would expect, the maintainers of OpenGL and Vulkan are there, and they have a lot to talk about.

In summary:

  • NNEF 1.0 has been finalized and released!
  • The first public demo of OpenXR is available and on the show floor.
  • glTF Texture Transmission Extension is being discussed.
  • OpenCL Ecosystem Roadmap is being discussed.
  • Khronos Educators Program has launched.

I will go through each of these points. Feel free to skip around between the sections that interest you!

Read on to see NNEF or see page 2 for the rest!

GamersNexus vs The Thermal Paste Cabal

Subject: Cases and Cooling | August 11, 2018 - 11:24 PM |
Tagged: thermal paste

A couple of weeks ago, GamersNexus published a video and article that benchmarked CPU performance across various thermal paste patterns. It’s well established that the best method of applying the compound is to spread it out as thin as possible, so it fills the gaps with something better than air but doesn’t insulate the parts that would naturally make perfect contact. That takes effort, though, and it’s not clear how much that buys you for modern CPUs with integrated heat-spreaders (IHS).

Video credit: GamersNexus

If you’re attaching a heatsink to a GPU or other bare die ASIC? Different story. Their tests are focused on CPUs with heat spreaders.

Long story short? Not so much difference. The “pea sized” method had a little issue because it didn’t fully cover the IHS, but they went on with the tests because it’s supposed to reflect real-world situations, and that was a real-world type of error. Even still, that corresponded to less than a degree Celsius under load (as measured on an Intel Core i7-8086k). The article mentions something about delidding the CPU, although the photos clearly have an IHS (and that’s the point of the test in the first place) so I’m guessing they only took the IHS off temporarily and replaced it.

It’s interesting how close they ended up. I would have thought that 30 minutes of full load would show at least a few degrees of variance, but apparently not, even with a little patch of uncovered space.

Check out their post (and video above) for more info!

Source: GamersNexus

Blender Benchmark / Blender Open Data Announced

Subject: General Tech | August 10, 2018 - 11:17 PM |
Tagged: Blender, benchmark

The Blender Foundation is wrapping up development on Blender 2.8, “The Workflow Update”. We have been following it for a while, but today’s announcement caught me by surprise: a benchmark database. It seems simple, right? Blender wants its users to know what hardware is best to use, especially when rendering images in Cycles (which can be damn slow).

blender-2018-a-bit-lopsided-benchmark.png

A bit lopsided...

The solution is to make a version of Blender that creates and validates benchmarks, then compiles the data on their website. It’s still early days for this, with just 2,052 entries (at the time of writing), and the majority of those were from Linux boxes. Also, they only break it down into a handful of categories: Fastest CPU, Fastest Compute Device, and Submissions Per OS, then a few charts that compare the individual benchmark scenes against one another in a hardware-agnostic fashion. They pledge to add a lot more metrics in the future.

Personally, I’m curious to see a performance vs OS metric. Some benchmarks back from 2016 (Blender 2.77 on an EVGA GTX 980 Ti) show Linux out-performing Windows 10 by over 2x, with Windows 7 landing in between (closer to Linux than Windows 10). At the time, it was attributed to NVIDIA’s CUDA driver being horribly optimized for the newer OS, which seems to be validated by the close showing of the GTX 1080 on Windows 10 and Linux, but I would like to see a compiled list of up-to-date results. I may soon be able to.

Discord Nitro Dips Toes into Game Sale and Distribution

Subject: General Tech | August 10, 2018 - 10:45 PM |
Tagged: pc gaming, discord, Rust, mozilla, steam, GOG

Starting with a slowly-ramping group of ~50,000 Canadians, Discord has begun distributing PC games. Specifically, there will be two services for paying members of the Discord Nitro beta program: a store, where games can be purchased as normal, and a library of other games that are available with the (aforementioned) Discord Nitro subscription.

“It’s kinda like Netflix for games.”

discord-2018-gamestore.png

When talking about subscription services for video games, I am typically hesitant. That said, the previous examples were, like, OnLive, where they planned on making games that ran exclusively on that platform. The concern is that, when those games disappear from the service, they could be lost to our society as whole works of art. (Consoles and DRM also play into this topic.)

In this case, however, it looks like they are just getting into curated, off-the-shelf PC games. While GOG holds its own, it will be nice to see another contender to Steam in the Win32 (maybe Linux?) games market. (I say Win32 because of the developer certification requirements for Windows Store / UWP.)

Dead horse rant aside, Discord is doing games… including a subscription service. Yay.

One more aspect to this story!

Over the last five-or-so years, Mozilla has been talking about upgrading their browser to use a safer, multi-threaded, functional job system via their home-grown programming language, Rust. Turns out: Discord used this language for a lot of the store (and surrounding SDKs). Specifically, the native code for the store, the game SDK (with C, C++, and C# bindings), and the multiplayer network layer are all in Rust. This should make it fast and secure, which were the two design goals for Rust in the first place.

It was intended for web browsers after all...

Source: Discord

DOOM Eternal Gameplay at QuakeCon 2018

Subject: General Tech | August 10, 2018 - 10:16 PM |
Tagged: pc gaming, doom, bethesda

Bethesda, as usual, held a keynote at their QuakeCon event in the Dallas / Fort Worth region of Texas. So far so good. They then revealed DOOM Eternal with over 15 minutes of gameplay spread across three brutal segments.

Even though the reboot had a lot more… airborne activity… than the original, the new “meat hook” ability allows the player to grapple toward enemies. (At least, I only saw them grapple enemies. Maybe other things too? Probably not, though.) While not exactly a new mechanic, it looks like it flows well with DOOM’s faster-paced gameplay.

DOOM Eternal is coming to the PC, PS4, Xbox One, and even the Nintendo Switch. No release date has been announced.

CaseLabs (and Parent Company) Bankrupt and Liquidating

Subject: Cases and Cooling | August 10, 2018 - 09:55 PM |
Tagged: cases, caselab, bankrupt

Due in part to the tariffs on aluminum, enthusiast case manufacturer CaseLabs has shut down.

caselabs-2018-closed.jpg

It happened quite abruptly, too. Their caselabs.net site, linked in their Twitter profile, has been up and down while I've been writing this post; their store page, while it can load in a browser, will not accept any further orders. In fact, some existing orders are expected to be canceled. They believe that they can ship all the orders for individual parts, but some of their backlog of full cases will not make it.

Obviously, this sucks for everyone involved. Some of their cases, while on the expensive side to say the least, looked interesting, particularly in terms of customization. I’m looking at one that had the option for both front-panel HDMI and USB type C.

No specifics have been announced about their bankruptcy and liquidation plan.

Source: CaseLabs

EKWB offers Phoenix Down for your system with their new 360mm GPU and CPU watercooler

Subject: Cases and Cooling | August 10, 2018 - 02:57 PM |
Tagged: watercooler, EKWB, EK-MLC Phoenix, AIO, 360mm radiator

If you have space in your case and a need to move a lot of heat, the 360mm EK-MLC Phoenix might be a good choice.  It comes with all the features you expect from EKWB: Vardar fans, quick-connect tubing, and compatibility with most modern sockets, including Threadripper with an extra attachment.   You will notice it can include the GPU in the cooling loop with the purchase of additional modules.  The investment is somewhat high; NikkTech priced it at 270 Euros for just the CPU and around 400 Euros if you include the parts to cool your GPU.  Is that worth it?

Check out the full review to see.

ekwb_phoenix_mlcb.jpg

"Following the massive success of the EK-XLC Predator line of AIO liquid coolers EK Waterblocks recently released the EK-MLC Phoenix line and on our test bench today we have the top of the line tri-fan 360 model."

Here are some more Cases & Cooling reviews from around the web:

CASES & COOLING

 

Source: Nikktech

Intel would rather you talk about their new QLC SSDs

Subject: General Tech | August 10, 2018 - 12:37 PM |
Tagged: Intel, QLC NAND, 600p, D5-P4320, PCIe SSD, M.2

Not to be outdone by Samsung, Intel has also announced new QLC-based SSDs, the 660p series and the D5-P4320.  The two series of drives rely on SLC caches to provide extra lifetime to the QLC flash, which is not as robust as other varieties; the 660p is rated at 0.1 drive writes per day, and the D5-P4320 is rated for 0.9 drive writes per day sequential, 0.2 for random writes.  The two drives also share a five-year warranty.  The D5-P4320 sports a capacity of 7.68TB, whereas the 660p comes in more affordable 512GB, 1TB, and 2TB capacities.  Drop by The Inquirer for more information on the two new series of NVMe SSDs from Intel.
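For context, a drive-writes-per-day rating translates into total bytes written over the warranty period. A rough sketch of the arithmetic, using the five-year warranty from the article (note that the TBW figures on actual spec sheets may differ from this simple calculation):

```python
def total_writes_tb(dwpd, capacity_tb, warranty_years=5):
    """Total terabytes written implied by a drive-writes-per-day rating."""
    return dwpd * capacity_tb * 365 * warranty_years

# A 2TB consumer drive rated at 0.1 DWPD over a five-year warranty:
print(round(total_writes_tb(0.1, 2)))  # 365 TB written
```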

intel_qlc_ssd_660p.jpg

"The SSD 660p is a single-sided M.2 format consumer drive following on from the 600p with 52GB, 1TB and 2TB capacities. The 600p topped out at 1TB."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer

Qualcomm Adds 10nm LPP Snapdragon 670 Mobile Platform to Mid-Range Lineup

Subject: Mobile | August 10, 2018 - 09:08 AM |
Tagged: X12 Modem, snapdragon 670, snapdragon, qualcomm 600, qualcomm, LTE

Qualcomm recently introduced the Snapdragon 670 mobile platform that brings upgraded processing and power efficiencies to the 600-series lineup while being very close to the specifications of the new Snapdragon 710 SoC. Based on the 10nm LPP design, the Snapdragon 670 uses up to 30% less power (that number is while recording 4K video and relates to the Spectra ISP, overall power efficiency gains are likely less but still notable) while offering up to 15% more CPU and 25% more GPU processing power versus its predecessor. The new mobile processor is also better optimized for AI with up to 1.8X AI Engine performance mostly thanks to upgraded Hexagon DSP co-processors and ARM CPU cores.

Qualcomm Snapdragon 670.png

The Snapdragon 670 features a Kryo 360 CPU with two ARM Cortex-A75-derived cores at 2.0 GHz and six Cortex-A55-derived cores at 1.7 GHz, and brings 200-series DSPs and ISPs to the Snapdragon 600 series in the form of the Hexagon 685 DSP and Spectra 250 ISP. As far as graphics, the Snapdragon 670 will use a new Adreno 615 GPU, which should be very close to the GPU in the SD710 (the Adreno 616). The new processor supports a single 24MP camera or dual 16MP cameras and can record 4K video at up to 30fps. According to AnandTech, Qualcomm has stripped out the 10-bit HDR pipelines as well as lowering the maximum supported display resolution. Another differentiator between the SD670 and the Snapdragon 710 is that the SD670 uses the same Snapdragon X12 LTE modem as the SD660 rather than the X15 LTE modem of the 710, meaning that maximum cellular download speeds are capped at 600 Mbps versus the 710's 800 Mbps.

While the Snapdragon 670 and Snapdragon 710 are reportedly pin- and software-compatible, which will allow smartphone manufacturers to use either chip in the same mobile platform, the chips are allegedly different designs, and the SD670 is not merely a lower-binned SD710, which is interesting if true.

Qualcomm’s Snapdragon 670 appears to be a decent midrange offering that is very close to the specifications of the SD710 while being cheaper and much more power efficient than the older SD660. This should enable some midrange smartphone designs that can offer similar performance with much better battery life.

Of course, depending on the workload, the newer SD670 may or may not live up to the alleged 15% CPU performance boost versus 2017's SD660, as the SD670 loses two of the big ARM cores in the big.LITTLE setup versus the SD660 while gaining two more small cores. The two A75 cores (2.0 GHz) and six A55 cores (1.7 GHz) are faster per core than the four A73 (2.2 GHz) and four A53 (1.8 GHz), but if a single app is heavily multithreaded the older chip may still hold its own. The bright side is that, worst case, the new chip should at least not be that much slower at most tasks, and at best it delivers better battery life, especially with lots of background tasks running. More efficient cores and the move from 14nm LPP to 10nm LPP definitely help with that, and you do have to keep in mind that this is a midrange part for midrange smartphones.
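The core-count trade-off is easy to see with a crude aggregate-clock tally. This deliberately ignores IPC, which favors the newer A75/A55 cores, so treat it only as a proxy for heavily threaded throughput:

```python
# Sum of (cores x clock in GHz) as a rough multithreaded throughput proxy, ignoring IPC.
sd670 = round(2 * 2.0 + 6 * 1.7, 1)  # two big + six little cores
sd660 = round(4 * 2.2 + 4 * 1.8, 1)  # four big + four little cores
print(sd670, sd660)  # 14.2 vs 16.0 GHz-cores: the older chip has more raw clock
```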

The real deciding factor though in terms of the value proposition of this chip is certainly going to be pricing and the mobile platforms that manufacturers offer it in.

Also read:

Source: Qualcomm

Samsung Begins Mass Production of QLC SATA SSDs for Consumers

Subject: General Tech | August 9, 2018 - 11:10 PM |
Tagged: V-NAND, sata ssd, Samsung, QLC, enterprise ssd

Earlier this week Samsung announced that it has begun mass production of its first consumer solid state drive based on QLC (4 bits per cell) V-NAND. According to the company, the initial drives will offer 4TB capacities and deliver equivalent performance to Samsung’s TLC offerings along with a three-year warranty.

Samsung 4TB QLC SSD - photo1.jpg

Samsung claims that its fourth-generation V-NAND flash in QLC mode (with 16 voltage states) and 64 layers is able to offer up to 1Tb per chip. The 4TB SATA SSD uses a 3-bit SSD controller, TurboWrite technology, and 32 1Tb QLC V-NAND chips, and thanks to the write cache (running a portion of the V-NAND in SLC or MLC modes) Samsung is able to wring extra performance out of the drive, though it is ultimately limited by the SATA interface. Specifically, Samsung is promising sequential reads of 540 MB/s and sequential writes of up to 520 MB/s with the new QLC SSD.

For comparison, Samsung’s fourth-generation V-NAND operating in TLC mode is able to offer 256Gb and 512Gb capacities depending on package, while fifth-generation V-NAND in TLC mode offers 256Gb per chip (using 96 layers). Scouring the internet, it appears that Samsung has yet to reveal what it expects to achieve from fifth-generation V-NAND in QLC mode. It should at least be able to match the 1Tb of fourth-generation QLC V-NAND while adding the improved performance and efficiencies of the newer generation (including the faster Toggle DDR 4.0 interface), though I would guess Samsung could get more, maybe topping out at as much as 1.5Tb (eventually, and if they use 96 layers; I was finding conflicting info on this). In any event, for further comparison, Intel and Micron have been able to get 1Tb QLC 3D NAND flash chips, and Western Digital and Toshiba are working on 96-layer BiCS4, which is expected to offer up to 1.33Tb capacities when run in 4-bits-per-cell mode (QLC).
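The cell-density arithmetic behind those numbers is straightforward; a quick sketch using the chip count and per-chip capacity quoted above:

```python
# QLC stores 4 bits per cell, which requires 2**4 = 16 distinguishable voltage states.
bits_per_cell = 4
voltage_states = 2 ** bits_per_cell
print(voltage_states)  # 16

# 32 chips at 1Tb (terabit) each, with 8 terabits per terabyte:
capacity_tb = 32 * 1 / 8
print(capacity_tb)  # 4.0 TB, matching the announced drive
```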

It seems that Samsung is playing a bit of catch up when it comes to solid state storage using QLC though they do still have a bit of time to launch products this year along with the other players. Samsung claims that it will launch its 4TB 2.5” consumer SSD first with 1TB and 2TB models to follow later this year.

Interestingly (and more vaguely), Samsung mentioned in its press release that it plans to begin rolling out M.2 SSDs for the enterprise market and that it will begin mass producing fifth generation 4-bit V-NAND later this year.

I am looking forward to more details on Samsung’s plans for QLC and especially on the specifications of fifth generation 4-bit V-NAND and the drives that it will enable for both consumer systems and the data center markets.

What are your thoughts on Samsung’s QLC V-NAND?

Also read:

Source: Samsung

Not Podcast - Live Q&A 2018-08-08

Subject: General Tech | August 9, 2018 - 04:41 PM |
Tagged: video, q&a, podcast

PC Perspective Podcast Live Q&A 08/08/18

Join us this week for a special live Q&A session!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Jeremy Hellstrom, Josh Walrath, Sebastian Peak, Alex Lustenberg

Program length: 1:22:48

Topics of discussion:
0:02:44 Why don't we have SMT on GPUs like on CPUs?
0:06:16 Did PCPer get a Threadripper2?
0:11:15 Why are Cherry MY switches no longer used or sold?
0:13:25 Is Celeron vs Atom still a big deal?
0:19:15 Windows over-aggressively caching, resulting in jitter?
0:22:35 What was more impactful, x86-64 or multi core?
0:30:30 What do you think has gone wrong with Intel's 10nm process?
0:35:40 Will AMD's ZEN 2 launch result in a process lead?
0:41:05 Can Jeremy do IT support in his underwear?
0:42:40 What are y'all's favorite burgers/beer/whiskey?
0:54:30 How much would you justify spending on a case?
1:03:20 Is there any benefit in NVLink on consumer cards?
1:08:30 Why are password cracking benchmarks not used for benchmarking CPUs and GPUs?
1:13:30 Why don't consumer GPUs have virtualization support?
1:17:20 Jeremy, are you going with TR1 or TR2?

Zen+ and the art of thermal maintenance

Subject: Processors | August 9, 2018 - 04:36 PM |
Tagged: Ryzen 7 2700, amd, Zen+

There is a ~$30 difference between the Ryzen 7 2700 and the 2700X, which raises the question of who would choose the former over the latter.  The Tech Report points out another major difference between the two processors: the 2700 has a 65W TDP while the 2700X is 105W, pointing to one possible reason for choosing the less expensive part.  The question remains as to what you will be missing out on, and whether there is any reason not to go with the even less expensive and highly overclockable Ryzen 7 1700.   Find out the results of their tests and get the answer right here.

chiponfan.jpg

"AMD's Ryzen 7 2700 takes all the benefits of AMD's Zen+ architecture and wraps eight of those cores up in a 65-W TDP. We tested the Ryzen 7 2700's performance out in stock and overclocked tune to see what it offers over the hugely popular Ryzen 7 1700."

Here are some more Processor articles from around the web:

Processors