Subject: General Tech | August 15, 2018 - 02:42 PM | Jeremy Hellstrom
Tagged: SK Hynix, Terabyte, toshiba, QLC NAND
This year at the Flash Memory Summit, big is in: Toshiba unveiled an 85TB 2.5" SSD and suggests a 20TB M.2 drive is not far off. SK Hynix will release a 64TB 2.5" SSD built on a 1Tbit die, which analysts expect to offer somewhat improved reads and writes compared to their previous offerings. Both companies will be using 96-layer QLC 3D NAND in these drives, and The Register expects them to use an NVMe interface as opposed to SATA. Check out the story for more detail on these drives as well as what Intel is working on.
"The Flash Memory Summit saw two landmark capacity announcements centred on 96-layer QLC (4bits/cell) flash that seemingly herald a coming virtual abolition of workstation and server read-intensive flash capacity constraints."
Here is some more Tech News from around the web:
- John McAfee lashes out at Bitfi 'hackers' @ The Inquirer
- A Community-Run ISP Is the Highest Rated Broadband Company In America @ Slashdot
- Intel finally emits Puma 1Gbps modem fixes – just as new ping-of-death bug emerges @ The Register
- Bitcoin Sinks Below $6,000 as Almost Everything Crypto Tumbles @ Slashdot
- Intel to launch X599 platform for its 28-core Skylake-X CPU @ The Inquirer
- The Ars Technica Back to School buying guide
- It's official: TLS 1.3 approved as standard while spies weep @ The Register
- Three more data-leaking security holes found in Intel chips as designers swap security for speed @ The Register
- An Early Look At The L1 Terminal Fault "L1TF" Performance Impact On Virtual Machines (Foreshadow) @ Phoronix
Subject: General Tech | August 15, 2018 - 10:59 AM | Ken Addison
Tagged: sysmark, sysmark 2018, bapco, benchmarks
SYSMark is an application-based benchmarking suite used by many PC OEMs and enterprises to evaluate hardware deployments, as well as by us here at PC Perspective to evaluate system performance.
By using a variety of widely used applications such as Microsoft Office and the Adobe Creative Suite, SYSMark can provide insight into the performance levels of typical user activities like Productivity, which can be difficult to quantify otherwise.
As part of the upgrade to SYSMark 2018, the applications used for testing have been updated as well, including Microsoft Office 2016, Google Chrome version 65, Adobe Acrobat Pro DC, Adobe Photoshop CC (2018), CyberLink PowerDirector 15, Adobe Lightroom Classic CC, and AutoIT.
SYSMark 2018 is available today from BAPCo's online store.
Subject: Graphics Cards | August 14, 2018 - 01:08 AM | Jeremy Hellstrom
Tagged: Siggraph, ray tracing, quadro rtx 8000, quadro rtx 5000, nvidia, jensen
Any attempt to describe the visual effects Jensen Huang showed off at his SIGGRAPH keynote is bound to fail, not that this has ever stopped any of us before. If you have seen the short demo movie released earlier this year in cooperation with Epic and ILMxLAB, you have an idea of what they can do with ray tracing. However, they pulled a fast one on us: the hardware behind that demo was kept hidden, and the footage was not pre-rendered but was in fact our first look at their real-time ray tracing. The hardware required for this feat is the brand new RTX series, and the specs are impressive.
The ability to process 10 Gigarays per second means that each and every pixel can be influenced by numerous rays of light, perhaps 100 per pixel in a perfect scenario with clean inputs, or 5-20 in cases where the AI de-noiser is required to calculate missing light sources or occlusions, all in real time. The card itself would function rather well as a light source, too. The ability to perform 16 TFLOPS and 16 TIPS means this card is happy doing floating point and integer calculations simultaneously.
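For a rough sense of scale, here is a quick back-of-the-envelope check on that ray budget; the 1080p/60fps target below is our own illustrative assumption, not anything NVIDIA quoted:

```python
# Back-of-the-envelope: how many rays per pixel does a 10 Gigaray/s
# budget buy at a given resolution and frame rate?
# The 1080p60 figures are illustrative assumptions.

def rays_per_pixel(gigarays_per_sec, width, height, fps):
    total_rays = gigarays_per_sec * 1e9
    pixels_per_sec = width * height * fps
    return total_rays / pixels_per_sec

# At 1080p60, 10 Gigarays/s works out to roughly 80 rays per pixel,
# in the same ballpark as the "perhaps 100 per pixel" figure.
print(round(rays_per_pixel(10, 1920, 1080, 60)))  # ~80
```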
The die itself is significantly larger than the previous generation at 754 mm², and will sport a 300W TDP to keep it in line with the PCIe spec; though we will run it through the same tests as the RX 480 to see how well they did, if we get the chance. 30W of that total is devoted to the onboard USB controller, which implies support for VirtualLink.
The cards can be used in pairs, utilizing Jensen's chest decoration, more commonly known as an NVLink bridge. More than one pair can be run in a system, but you will not be able to connect three or more cards directly.
As a pair will give you up to 96GB of GDDR6 for your processing tasks, it is hard to consider that limiting. The price is rather impressive as well: compared to previous render farms, such as the rather tiny one below, you are looking at a tenth the cost to power your movie with RTX cards. The card is not limited to proprietary engines or programs either, with the DirectX and Vulkan APIs being supported in addition to Pixar's software. NVIDIA's Material Definition Language will be made open source, allowing even broader usage for those who so desire.
You will of course wonder what this means in terms of graphical eye candy, either pre-rendered quickly for your later enjoyment or else in real time if you have the hardware. The image below attempts to show the various features which RTX can easily handle. Mirrored surfaces can be emulated with multiple reflections accurately represented, again handled on the fly instead of being preset, so soon you will be able to see around corners.
Turing also introduces a new type of anti-aliasing called DLAA, and there are no prizes for guessing what the DL stands for. DLAA works by taking an already anti-aliased image and training itself to provide even better edge smoothing, though at a processing cost. As with most other features on these cards, it is not the complexity of the scene which has the biggest impact on calculation time but rather the number of pixels, as each pixel has numerous rays associated with it.
This new feature also allows significantly faster processing than Pascal, not the small evolutionary changes we have become accustomed to but more of a revolutionary change.
In addition to effects in movies and other video there is another possible use for Turing based chips which might appeal to the gamer, if the architecture reaches the mainstream. With the ability to render existing sources with added ray tracing and de-noising features it might be possible for an enterprising soul to take an old game and remaster it in a way never before possible. Perhaps one day people who try to replay the original System Shock or Deus Ex will make it past the first few hours before the graphical deficiencies overwhelm their senses.
We expect to see more from NVIDIA tomorrow so stay tuned.
Subject: General Tech | August 13, 2018 - 07:43 PM | Ken Addison
Tagged: turing, siggraph 2018, rtx, quadro rtx 8000, quadro rtx 6000, quadro rtx 5000, quadro, nvidia
Today at the professional graphics-focused SIGGRAPH conference, NVIDIA's Jen-Hsun Huang unveiled details of their much-rumored next GPU architecture, codenamed Turing.
At the core of the Turing architecture are what NVIDIA refers to as two "engines" – one for accelerating ray tracing, and the other for accelerating AI inferencing.
The ray tracing units are called RT Cores and are not to be confused with the NVIDIA RTX technology for real-time ray tracing that we saw announced at GDC this year. There, NVIDIA was using their OptiX AI-powered denoising filter to clean up ray-traced images, allowing them to save on rendering resources, but the actual ray tracing was still being done on the GPU cores themselves.
Now, these RT cores will perform the ray calculations themselves at what NVIDIA is claiming is up to 10 GigaRays/second, or up to 25X the performance of the current Pascal architecture.
Just like we saw in the Volta-based Quadro GV100, these new Quadro RTX cards will also feature Tensor Cores for deep learning acceleration. It is unclear if these tensor cores remain unchanged from what we saw in Volta or not.
In addition to the RT Cores and Tensor Cores, Turing also features an all-new design for the traditional Streaming Multiprocessor (SM) units. Changes include an integer execution unit that executes in parallel with the floating point datapath, and a new unified cache architecture with double the bandwidth of the previous generation.
NVIDIA claims these changes combined with the up to 4,608 available CUDA cores in the highest configuration will enable up to 16 TFLOPS and 16 trillion integer operations per second.
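Those two numbers are consistent with the standard peak-throughput estimate of cores × 2 FLOPs per clock (fused multiply-add) × clock speed; the sketch below shows the arithmetic, with the ~1.74 GHz clock inferred to make the claim work rather than taken from any announced specification:

```python
# Rough check of the 16 TFLOPS claim: peak FP32 throughput is
# CUDA cores x 2 FLOPs per clock (one fused multiply-add) x clock.
# The ~1.74 GHz clock is an inference from the claim, not an
# announced NVIDIA spec.

def peak_tflops(cuda_cores, clock_ghz, flops_per_clock=2):
    return cuda_cores * flops_per_clock * clock_ghz / 1000

print(round(peak_tflops(4608, 1.74), 1))  # ~16.0
```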
Alongside the Turing architecture announcement, NVIDIA unveiled the Quadro RTX 5000, 6000, and 8000 products, due in Q4 2018.
In addition to the announcements at SIGGRAPH tonight, NVIDIA is expected to announce the consumer GeForce products featuring the Turing architecture next week at an event in Germany.
PC Perspective is at SIGGRAPH and will also be at NVIDIA's event in Germany next week, so stay tuned for more details!
Subject: Processors | August 13, 2018 - 02:18 PM | Jeremy Hellstrom
Tagged: Zen+, Threadripper, second generation threadripper, ryzen, Intel, Core i9, 7980xe, 7960x, 7900x, 2990wx, 2950x
The 2950X and 2990WX are both ThreadRipper 2 chips but are very different beasts under the hood. The 2950X has two active dies, similar to the original chips, while the 2990WX has four active dies, two of which must communicate with the memory subsystem over an Infinity Fabric link to the other two. The W in the naming convention indicates the 2990WX is designed for workstation tasks, and benchmarks support that designation. You will have seen our results here, but there are many other sources to read through. [H]ard|OCP offers up a different set of benchmarks in their review, with a similar result: with ThreadRipper, AMD has a winner. The 2990WX is especially important as it opens up the lucrative lower-cost workstation market for AMD.
"AMD teased us a bit last week by showing off its new 2nd Generation Threadripper 2990WX and 2950X packaging and specifications. This week AMD lets us share all our Threadripper data we have been collecting. The 2990WX is likely a lot different part than many people were expecting, and it turns out that it might usher AMD into a newly created market."
Here are some more Processor articles from around the web:
- AMD's Ryzen Threadripper 2990WX @ The Tech Report
- AMD Ryzen Threadripper 2950X and 2990WX @ Guru of 3D
- AMD Ryzen Threadripper 2990WX & 2950X @ TechSpot
- AMD Ryzen Threadripper 2950X @ TechPowerUp
- AMD Threadripper 2950X Offers Great Linux Performance At $900 USD @ Phoronix
- AMD Threadripper 2990WX Linux Benchmarks: The 32-Core / 64-Thread Beast @ Phoronix
- AMD Threadripper 2990WX Cooling Performance - Testing Five Heatsinks & Two Water Coolers @ Phoronix
Subject: General Tech | August 13, 2018 - 01:49 PM | Jeremy Hellstrom
Tagged: Intel, rumour, release, coffee lake s, i9-9900K, i5-9600K, i7-9700K
According to The Inquirer's various sources, the Coffee Lake refresh will launch on the first of October, in time to ensure system builders have models ready for the holidays. These new processors do not offer a compelling upgrade for those with a modern system, as they are very similar to their predecessors. If you have something a little older, however, the three new processors offer increased frequencies and core counts; the 9900K sports a default boost clock of 5GHz, which is nothing to sneeze at.
"If you were expecting anything bigger then allow us to disappoint you as, really the ninth-gen chips are mild upgrades on their predecessors, unless Intel has been keeping something very well hidden up its corporate sleeves."
Here is some more Tech News from around the web:
- Intel hands first Optane DIMM to Google, where it'll collect dust until a supporting CPU arrives @ The Register
- Android Pie is borking fast charging on some Pixel XL handsets @ The Inquirer
- Many Google Services on Android Devices and iPhones Store Location Data, Even if Location Sharing is Disabled From Privacy Settings @ Slashdot
- The off-brand 'military-grade' x86 processors, in the library, with the root-granting 'backdoor' @ The Register
- NETGEAR Orbi (RBK23) AC2200 Mesh Wi-Fi System @ Kitguru
- Reolink Argus 2 Wire-Free 1080p Security Camera Review @ NikKTech
Subject: Cases and Cooling | August 11, 2018 - 11:24 PM | Scott Michaud
Tagged: thermal paste
A couple of weeks ago, GamersNexus published a video and article that benchmarked CPU performance across various thermal paste patterns. It’s well established that the best method of applying the compound is to spread it out as thin as possible, so it fills the gaps with something better than air but doesn’t insulate the parts that would naturally make perfect contact. That takes effort, though, and it’s not clear how much that buys you for modern CPUs with integrated heat-spreaders (IHS).
Video credit: GamersNexus
If you’re attaching a heatsink to a GPU or other bare die ASIC? Different story. Their tests are focused on CPUs with heat spreaders.
Long story short? Not much difference. The “pea sized” method had a little issue because it didn’t fully cover the IHS, but they went on with the tests because the method is supposed to reflect real-world situations, and that was a real-world type of error. Even so, that corresponded to less than a degree Celsius under load (as measured on an Intel Core i7-8086K). The article mentions something about delidding the CPU, although the photos clearly show an IHS (and that’s the point of the test in the first place), so I’m guessing they only took the IHS off temporarily and replaced it.
It’s interesting how close they ended up. I would have thought that 30 minutes of full load would show at least a few degrees of variance, but apparently not, even with a little patch of uncovered space.
Subject: General Tech | August 10, 2018 - 11:17 PM | Scott Michaud
Tagged: Blender, benchmark
The Blender Foundation is wrapping up development on Blender 2.8, “The Workflow Update”. We have been following it for a while, but today’s announcement caught me by surprise: a benchmark database. It seems simple, right? Blender wants its users to know what hardware is best to use, especially when rendering images in Cycles (which can be damn slow).
A bit lopsided...
The solution is to make a version of Blender that creates and validates benchmarks, then compiles the data on their website. It’s still early days for this, with just 2052 entries (at the time of writing), the majority of which came from Linux boxes. Also, they only break it down into a handful of categories: Fastest CPU, Fastest Compute Device, Submissions Per OS, then a few charts that compare the individual benchmark scenes against one another in a hardware-agnostic fashion. They pledge to add a lot more metrics in the future.
Personally, I’m curious to see a performance vs OS metric. Some benchmarks back from 2016 (Blender 2.77 on an EVGA GTX 980 Ti) show Linux out-performing Windows 10 by over 2x, with Windows 7 landing in between (closer to Linux than Windows 10). At the time, it was attributed to NVIDIA’s CUDA driver being horribly optimized for the newer OS, which seems to be validated by the close showing of the GTX 1080 on Windows 10 and Linux, but I would like to see a compiled list of up-to-date results. I may soon be able to.
Subject: General Tech | August 10, 2018 - 10:45 PM | Scott Michaud
Tagged: pc gaming, discord, Rust, mozilla, steam, GOG
Starting with a slowly-ramping group of ~50,000 Canadians, Discord has begun distributing PC games. Specifically, there will be two services for paying members of the Discord Nitro beta program: a store, where games can be purchased as normal, and a library of other games that are available with the (aforementioned) Discord Nitro subscription.
“It’s kinda like Netflix for games.”
When talking about subscription services for video games, I am typically hesitant. That said, the previous examples were services like OnLive, where they planned on making games that ran exclusively on that platform. The concern is that, when those games disappear from the service, they could be lost to society as works of art. (Consoles and DRM also play into this topic.)
In this case, however, it looks like they are just getting into curated, off-the-shelf PC games. While GOG holds its own, it will be nice to see another contender to Steam in the Win32 (maybe Linux?) games market. (I say Win32 because of the developer certification requirements for Windows Store / UWP.)
Dead horse rant aside, Discord is doing games… including a subscription service. Yay.
One more aspect to this story!
Over the last five-or-so years, Mozilla has been talking about upgrading their browser to use a safer, multi-threaded, functional job system via their home-grown programming language, Rust. Turns out: Discord used this language for a lot of the store (and surrounding SDKs). Specifically, the native code for the store, the game SDK (with C, C++, and C# bindings), and the multiplayer network layer are all in Rust. This should make it fast and secure, which were the two design goals for Rust in the first place.
It was intended for web browsers after all...
Subject: General Tech | August 10, 2018 - 10:16 PM | Scott Michaud
Tagged: pc gaming, doom, bethesda
Bethesda, as usual, held a keynote at their QuakeCon event in the Dallas / Fort Worth region of Texas. So far so good. They then revealed DOOM Eternal with over 15 minutes of gameplay spread across three brutal segments.
Even though the reboot had a lot more… airborne activity… than the original, the new “meat hook” ability allows the player to grapple toward enemies. (At least, I only saw them grapple enemies. Maybe other things too? Probably not, though.) While not exactly a new mechanic, it looks like it flows well with DOOM’s faster-paced gameplay.
DOOM Eternal is coming to the PC, PS4, Xbox One, and even the Nintendo Switch. No release date has been announced.
Subject: Cases and Cooling | August 10, 2018 - 09:55 PM | Scott Michaud
Tagged: cases, caselab, bankrupt
Due in part to the tariffs on aluminum, enthusiast case manufacturer, CaseLabs, has shut down.
It happened quite abruptly, too. Their caselabs.net site, linked in their Twitter profile, has been up and down while I've been writing this post; their store page, while it can load in a browser, will not accept any further orders. In fact, some existing orders are expected to be canceled. They believe that they can ship all the orders for individual parts, but some of their backlog of full cases will not make it out.
Obviously, this sucks for everyone involved. Some of their cases, while on the expensive side to say the least, looked interesting, particularly in terms of customization. I’m looking at one that had the option for both front-panel HDMI and USB type C.
No specifics have been announced about their bankruptcy and liquidation plan.
Subject: Cases and Cooling | August 10, 2018 - 02:57 PM | Jeremy Hellstrom
Tagged: watercooler, EKWB, EK-MLC Phoenix, AIO, 360mm radiator
If you have space in your case and a need to move a lot of heat, the 360mm EK-MLC Phoenix might be a good choice. It comes with all the features you expect from EKWB: Vardar fans, quick-connect tubing, and compatibility with most modern sockets, including ThreadRipper with an extra attachment. It can also bring the GPU into the cooling loop with the purchase of additional modules. The investment is somewhat high; NikKTech priced it at 270 Euros for just the CPU and around 400 Euros if you include the parts to cool your GPU. Is that worth it?
"Following the massive success of the EK-XLC Predator line of AIO liquid coolers EK Waterblocks recently released the EK-MLC Phoenix line and on our test bench today we have the top of the line tri-fan 360 model."
Here are some more Cases & Cooling reviews from around the web:
- ID-Cooling Dashflow 360 @ TechPowerUp
- Corsair SPEC Omega RGB @ Guru of 3D
- CORSAIR Spec-Omega RGB Mid-Tower Tempered Glass Gaming Case Review @ NikKTech
- Fractal Design Focus G Mini @ TechPowerUp
- COUGAR PANZER-G Tempered Glass Gaming Mid-Tower Review @ NikKTech
Subject: General Tech | August 10, 2018 - 12:37 PM | Jeremy Hellstrom
Tagged: Intel, QLC NAND, 600p, D5-P4320, PCIe SSD, M.2
Not to be outdone by Samsung, Intel has also announced new QLC-based SSDs, the 660p series and the D5-P4320. The two series of drives rely on SLC caches to extend the lifetime of the QLC flash, which is not as robust as other varieties; the 660p is rated at 0.1 drive writes per day, while the D5-P4320 is rated for 0.9 drive writes per day sequential, or 0.2 for random writes. The two drives also share a five-year warranty. The D5-P4320 sports a capacity of 7.68TB, whereas the 660p comes in more affordable 512GB, 1TB, and 2TB capacities. Drop by The Inquirer for more information on the two new series of NVMe SSDs from Intel.
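If drive-writes-per-day ratings feel abstract, this quick sketch converts one into total data written over the warranty period; the 2TB capacity below is just an illustrative pick:

```python
# Drive-writes-per-day (DWPD) converted into total data written over
# the warranty period. The 2TB capacity is an illustrative assumption;
# scale it to whichever model you buy.

def total_writes_tb(dwpd, capacity_tb, warranty_years):
    return dwpd * capacity_tb * 365 * warranty_years

# A 0.1 DWPD rating on a 2TB drive over a five-year warranty:
print(total_writes_tb(0.1, 2, 5))  # ~365 TB written
```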
"The SSD 660p is a single-sided M.2 format consumer drive following on from the 600p with 52GB, 1TB and 2TB capacities. The 600p topped out at 1TB."
Here is some more Tech News from around the web:
- EmuParadise is binning its ROM library following Nintendo's 'copyright' action @ The Inquirer
- ZX Spectrum Vega+ blows a FUSE: It runs open-source emulator @ The Register
- US-CERT sounds the alarms over North Korean 'KeyMarble' Trojan @ The Inquirer
- Windows 10 Enterprise Getting 'InPrivate Desktop' Sandboxed Execution Feature @ Slashdot
- AI renames BBC sound effect files, hilarity ensues @ The Inquirer
Subject: Mobile | August 10, 2018 - 09:08 AM | Tim Verry
Tagged: X12 Modem, snapdragon 670, snapdragon, qualcomm 600, qualcomm, LTE
Qualcomm recently introduced the Snapdragon 670 mobile platform that brings upgraded processing and power efficiencies to the 600-series lineup while being very close to the specifications of the new Snapdragon 710 SoC. Based on the 10nm LPP design, the Snapdragon 670 uses up to 30% less power (that number is while recording 4K video and relates to the Spectra ISP, overall power efficiency gains are likely less but still notable) while offering up to 15% more CPU and 25% more GPU processing power versus its predecessor. The new mobile processor is also better optimized for AI with up to 1.8X AI Engine performance mostly thanks to upgraded Hexagon DSP co-processors and ARM CPU cores.
The Snapdragon 670 features a Kryo 360 CPU with two ARM Cortex-A75-based cores at 2.0 GHz and six Cortex-A55-based cores at 1.7 GHz, and brings 200-series DSPs and ISPs to the Snapdragon 600 series in the form of the Hexagon 685 DSP and Spectra 250 ISP. As for graphics, the Snapdragon 670 will use a new Adreno 615 GPU, which should be very close to the GPU in the SD710 (Adreno 616). The new processor supports a single 24MP camera or dual 16MP cameras and can record up to 4K30 video. According to AnandTech, Qualcomm has stripped out the 10-bit HDR pipelines as well as lowering the maximum supported display resolution. Another differentiator between the new Snapdragon 710 and the SD670 is that the SD670 uses the same Snapdragon X12 LTE modem as the older SD660 rather than the X15 LTE modem of the 710, meaning that maximum cellular download speeds are capped at 600 Mbps versus 800 Mbps.
While the Snapdragon 670 and Snapdragon 710 are reportedly pin- and software-compatible, which will allow smartphone manufacturers to use either chip in the same mobile platform, the chips are allegedly different designs and the SD670 is not merely a lower-binned SD710, which is interesting if true.
Qualcomm’s Snapdragon 670 appears to be a decent midrange offering that is very close to the specifications of the SD710 while being cheaper and much more power efficient than the older SD660. This should enable some midrange smartphone designs that can offer similar performance with much better battery life.
Of course, depending on the workload, the newer SD670 may or may not live up to the alleged 15% CPU performance boost versus 2017's SD660, as the SD670 loses two of the big ARM cores in the big.LITTLE setup versus the SD660 while gaining two more of the smaller cores. The two A75s (2.0 GHz) and six A55s (1.7 GHz) are faster per core than the four A73s (2.2 GHz) and four A53s (1.8 GHz), but if a single app is heavily multithreaded the older chip may still hold its own. The bright side is that, worst case, the new chip should at least not be much slower at most tasks, and at best it delivers better battery life, especially with lots of background tasks running. More efficient cores and the move from 14nm LPP to 10nm LPP definitely help with that, and you do have to keep in mind that this is a midrange part for midrange smartphones.
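A naive aggregate-clock tally (core count times clock, deliberately ignoring per-core IPC gains) illustrates why a heavily threaded app might still favor the older chip:

```python
# Naive aggregate clock throughput for the two big.LITTLE layouts.
# This deliberately ignores IPC, so it only shows why raw threaded
# throughput could still lean toward the SD660, not real performance.

def aggregate_ghz(clusters):
    # clusters: list of (core_count, clock_ghz) tuples
    return sum(count * clock for count, clock in clusters)

sd670 = aggregate_ghz([(2, 2.0), (6, 1.7)])  # ~14.2 GHz-cores
sd660 = aggregate_ghz([(4, 2.2), (4, 1.8)])  # ~16.0 GHz-cores
print(sd670, sd660)
```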
The real deciding factor though in terms of the value proposition of this chip is certainly going to be pricing and the mobile platforms that manufacturers offer it in.
- Qualcomm Snapdragon 710 Specifications
- Qualcomm Launches Snapdragon 660 and 630 Mobile Platforms
- Google, Qualcomm Partner for Faster Android P Upgrade Adoption
Subject: General Tech | August 9, 2018 - 11:10 PM | Tim Verry
Tagged: V-NAND, sata ssd, Samsung, QLC, enterprise ssd
Earlier this week Samsung announced that it has begun mass production on its first consumer solid state drive based on QLC (4 bits per cell) V-NAND. According to the company, the initial drives will offer 4TB capacities and deliver equivalent performance to Samsung’s TLC offerings along with a three year warranty.
Samsung claims that its fourth generation V-NAND flash in QLC mode (with 16 voltage states) and 64 layers is able to offer up to 1Tb per chip. The 4TB SATA SSD uses a 3-bit SSD controller, TurboWrite technology, and 32 1Tb QLC V-NAND chips, and thanks to the write cache (running the V-NAND in SLC or MLC modes) Samsung is able to wring extra performance out of the drive, though it is ultimately limited by the SATA interface. Specifically, Samsung is promising sequential reads of 540 MB/s and sequential writes of up to 520 MB/s with the new QLC SSD. For comparison, Samsung's fourth generation V-NAND operating in TLC mode offers 256Gb and 512Gb capacities depending on package, while fifth generation V-NAND in TLC mode offers 256Gb per chip (using 96 layers). Scouring the internet, it appears that Samsung has yet to reveal what it expects to achieve from fifth generation V-NAND in QLC mode. It should at least match the 1Tb of fourth generation QLC V-NAND, with the improved performance and efficiencies of the newer generation (including the faster Toggle DDR 4.0 interface), though I would guess Samsung could get more, maybe topping out at as much as 1.5Tb (eventually, and if they use 96 layers--I was finding conflicting info on this). In any event, for further comparison, Intel and Micron have been able to get 1Tb QLC 3D NAND flash chips, and Western Digital and Toshiba are working on 96-layer BiCS4, which is expected to offer up to 1.33Tb capacities when run in 4-bits-per-cell (QLC) mode.
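The headline figures check out with some quick arithmetic; this sketch just confirms that 16 voltage states per cell encode 4 bits (hence "QLC") and that 32 of the 1Tb (terabit) chips add up to the drive's 4TB (terabyte) capacity:

```python
# Sanity-checking Samsung's figures: 16 voltage states per cell
# encode log2(16) = 4 bits (QLC), and 32 chips of 1 terabit each
# total 32 Tb, or 32/8 = 4 terabytes.
import math

bits_per_cell = math.log2(16)     # 4.0 -> quad-level cell
capacity_tb = 32 * 1 / 8          # 32 terabits / 8 bits-per-byte = 4 TB
print(bits_per_cell, capacity_tb)  # 4.0 4.0
```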
It seems that Samsung is playing a bit of catch up when it comes to solid state storage using QLC though they do still have a bit of time to launch products this year along with the other players. Samsung claims that it will launch its 4TB 2.5” consumer SSD first with 1TB and 2TB models to follow later this year.
Interestingly (and more vaguely), Samsung mentioned in its press release that it plans to begin rolling out M.2 SSDs for the enterprise market and that it will begin mass producing fifth generation 4-bit V-NAND later this year.
I am looking forward to more details on Samsung’s plans for QLC and especially on the specifications of fifth generation 4-bit V-NAND and the drives that it will enable for both consumer systems and the data center markets.
What are your thoughts on Samsung’s QLC V-NAND?
- Intel SSD 660p 1TB SSD Review - QLC Goes Mainstream
- Intel, Micron Jointly Announce QLC NAND FLASH, 96-Layer 3D Development
- Micron Launches 5210 ION - First QLC NAND Enterprise SATA SSD
- FMS 2017: Samsung Announces QLC V-NAND, 16TB NGSFF SSD, Z-SSD V2, Key Value
- Toshiba and Western Digital announce QLC and 96-Layer BiCS Flash
Subject: General Tech | August 9, 2018 - 04:41 PM | Alex Lustenberg
Tagged: video, q&a, podcast
Podcast Live Q&A 08/08/18
Join us this week for a special live Q&A session!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Jeremy Hellstrom, Josh Walrath, Sebastian Peak, Alex Lustenberg
Program length: 1:22:48
0:06:16 Did PCPer get a Threadripper2?
0:11:15 Why are Cherry MY switches no longer used or sold?
0:13:25 Is Celeron vs Atom still a big deal?
0:19:15 Windows over-aggressively caching, resulting in jitter?
0:22:35 What was more impactful, x86-64 or multi core?
0:30:30 What do you think has gone wrong with Intel's 10nm process?
0:35:40 Will AMD's ZEN 2 launch result in a process lead?
0:41:05 Can Jeremy do IT support in his underwear?
0:42:40 What are y'all's favorite burgers/beer/whiskey?
0:54:30 How much would you justify spending on a case?
1:03:20 Is there any benefit in NVLink on consumer cards?
1:08:30 Why are password cracking benchmarks not used for benchmarking CPUs and GPUs?
1:13:30 Why do consumer GPUs not have virtualization support?
1:17:20 Jeremy, are you going with TR1 or TR2?
Subject: Processors | August 9, 2018 - 04:36 PM | Jeremy Hellstrom
Tagged: Ryzen 7 2700, amd, Zen+
There is a ~$30 difference between the Ryzen 7 2700 and the 2700X, which raises the question of who would choose the former over the latter. The Tech Report points out another major difference between the two processors: the 2700 has a 65W TDP while the 2700X is 105W, pointing to one possible reason for choosing the less expensive part. The question remains as to what you will be missing out on, and whether there is any reason not to go with the even less expensive and highly overclockable Ryzen 7 1700. Find out the results of their tests and get the answer right here.
"AMD's Ryzen 7 2700 takes all the benefits of AMD's Zen+ architecture and wraps eight of those cores up in a 65-W TDP. We tested the Ryzen 7 2700's performance out in stock and overclocked tune to see what it offers over the hugely popular Ryzen 7 1700."
Here are some more Processor articles from around the web:
- One year with Threadripper @ TechSpot
- Battle of the Workstations: AMD Ryzen Threadripper vs Intel Core X-Series @ Techgage
- We Test a $1,000 CPU From 2010 vs. Ryzen 3 @ TechSpot
- Intel's Spectre 'Variant 4' Performance Tested: Speculative Store Bypass @ TechSpot
- Qualcomm's Snapdragon 670 packs high-end features into a mid-range chip @ The Inquirer
Subject: Mobile | August 9, 2018 - 12:59 PM | Jeremy Hellstrom
Tagged: gaming laptop, razer blade
The most noticeable change on this year's version of the Razer Blade is the bezel on the 15.6" IPS display, which is looking rather svelte. The laptop is powered by a GTX 1070 Max-Q, and the display comes in several varieties: a 1080p model with a 144Hz refresh rate or a 4K 60Hz option, along with a less expensive 1080p 60Hz panel if that fits your budget better. The remainder of the internals of the model which TechSpot reviewed were a Core i7-8750H, 16GB of DDR4-2666, and a 512GB Samsung PM981 NVMe SSD. What do you get for your $2400?
"One of the most popular gaming laptops on the market continues to be the Razer Blade. New for 2018, Razer has refined the design and improved the internal hardware to make it even better than before. With the 2018 Blade, Razer is fully on board with the thin bezel revolution."
Here are some more Mobile articles from around the web:
- PC Specialist Recoil II @ Kitguru
- 2018 15-inch MacBook Pro review: Better, faster, stronger, throttle-ier? @ Ars Technica
- Dell XPS 15 2-in-1 @ Techspot
- Garmin Fenix 5S Plus review: So capable, so enviable, so expensive @ Ars Technica
- Honor Play In-Depth Review - Mobile Gaming FTW! @ TechARP
- Galaxy Tab S4 review: Even Samsung’s Dex desktop can’t save Android tablets @ Ars Technica
- The Best Smartphones 2018 @ TechSpot
- ASUS ZenFone Max Pro M1 @ TechARP
Subject: General Tech | August 9, 2018 - 11:49 AM | Jeremy Hellstrom
Tagged: mcafee, BitFi, wallet, doom
It is a good thing the internet's favourite eccentric was specific about what kind of hack would qualify someone for the $250K bounty for breaching the BitFi wallet, as it has now been hacked twice. The most recent exploit was pulled off by a bright 15-year-old who managed to get the wallet to run the original DooM; sadly, winning the game does not free the Bitcoins stored in the wallet, which is what would be required to claim the reward. The Inquirer has links to more information on Saleem Rashid and code you can play with yourself.
Of course, John McAfee has far more pressing cryptocurrency concerns than a mere quarter of a million dollars.
"John McAfee and Bitfi are yet to respond to either hack, aside from quoting an interview which points out that, although hacked, nobody had hacked it to the point of stealing any Bitcoins. "
Here is some more Tech News from around the web:
- Wait, did you hear that? That rumbling in the distance? Sounds like... a 16-socket IBM Power9 box shuffling this way @ The Register
- Galaxy Note 9 release date, specs and price: Watch Samsung's Unpacked live stream right here @ The Inquirer
- Acer to spin off gaming PC peripheral unit @ DigiTimes
- For all the excitement, Pie may be Android's most minimal makeover yet – thankfully @ The Register
Subject: Motherboards | August 8, 2018 - 11:41 PM | Tim Verry
Tagged: X399, tr4, threadripper 2, Threadripper, gigabyte, aorus, amd
Gigabyte is gearing up for AMD’s Threadripper 2 processors with the launch of the X399 Aorus Xtreme motherboard, which represents the company’s new flagship model for the TR4 socket. The new high-end motherboard has been souped up a bit to support the higher TDP Threadripper 2 processors, with a beefed-up 10+3 digital power phase design and active cooling for the VRMs, while also featuring RGB Fusion and modern I/O connectivity options for internal and external components. Gigabyte’s new flagship will be available soon with an MSRP of $449.
The X399 Aorus Xtreme motherboard nestles the TR4 socket in the middle of eight DDR4 DIMM slots (quad channel, up to 3466 MHz stock or 3600+ MHz when overclocking). Along the top edge of the motherboard are ten 50A IR3578 digital power phases with high-current MOSFETs for vCore, and there are three more phases in the left corner of the board (between the DIMM slots and rear I/O) for SoC power. The digital IR power phases are cooled by a direct-touch heat pipe where the MOSFETs make contact, using new 5 W/mK thermal pads on the top side, with reportedly a bit more cooling from a “nano carbon” baseplate on the underside of the motherboard PCB. The heatpipe runs through one passive and one actively cooled finned heatsink that uses two 30mm fans hidden under the rear I/O armor. The board is powered by two 8-pin CPU power connectors, one 24-pin ATX connector, and one 6-pin PCI-E power connector.
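As a rough sanity check on those VRM numbers, a back-of-the-envelope sketch in Python (the per-phase 50A rating is from Gigabyte's spec; actual sustained current depends heavily on cooling, and the ~1.2 V load voltage is an assumed typical figure, not from the article):

```python
# Back-of-the-envelope capacity of the X399 Aorus Xtreme's vCore rail.
# Figures from the article: ten IR3578 phases rated at 50 A each.
VCORE_PHASES = 10
AMPS_PER_PHASE = 50      # per-phase rating; sustained capacity depends on cooling

peak_current = VCORE_PHASES * AMPS_PER_PHASE   # theoretical combined current
power_at_1v2 = peak_current * 1.2              # watts at an assumed ~1.2 V load

print(peak_current)   # 500 A theoretical
print(power_at_1v2)   # 600.0 W -- well beyond a 250 W-class Threadripper 2
```

Even allowing for derating, that leaves generous headroom over the 250W TDP of the top Threadripper 2 parts, which is presumably the point of the beefed-up phase count and active VRM cooling.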
The motherboard further features four PCI-E 3.0 x16 slots, one PCI-E 2.0 x1 slot, three M.2 (PCI-E 3.0 NVMe) slots (two 22110 and one 2280), and six SATA ports. The motherboard supports 4-way graphics card setups with the cards running at x16/x8/x16/x8.
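Those slot allocations fit neatly within Threadripper's 64 PCI-E 3.0 lanes (60 usable from the CPU after the chipset link). A quick tally, assuming each M.2 slot gets a full x4 link as the article's "PCI-E 3.0 NVMe" description suggests:

```python
# Tally the CPU PCI-E 3.0 lanes the X399 Aorus Xtreme hands out.
gpu_slots = [16, 8, 16, 8]   # 4-way graphics configuration from the article
m2_slots = [4, 4, 4]         # three NVMe M.2 slots, assumed x4 each

used = sum(gpu_slots) + sum(m2_slots)
print(used)   # 60 -- exactly the 60 usable CPU lanes on Threadripper
```

The PCI-E 2.0 x1 slot and SATA ports hang off the X399 chipset rather than the CPU, so they don't eat into that budget.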
Onboard controllers include Realtek audio, three Ethernet NICs, and a Wi-Fi/Bluetooth radio. The Aorus AMP-UP audio uses a Realtek ALC1220-VB codec paired with an ESS 9118EQ SABRE DAC, Nichicon capacitors, and gold-plated audio jacks. As for networking, the board has a single 10 Gigabit Ethernet NIC from Aquantia, two Intel i220AT Gigabit Ethernet NICs, and an Intel wireless controller for dual-band 802.11ac and Bluetooth 4.2.
Rear I/O on the X399 Aorus Xtreme includes:
- Two physical buttons for power/reset and clear CMOS
- 8x USB 3.0 Type-A
- 2x USB 3.1 (1x Type-A, 1x Type-C)
- 6x Audio (5x 3.5mm analog, 1x optical S/PDIF)
Of course, no high-end enthusiast motherboard would be complete without RGB, and the X399 Aorus Xtreme has that in spades, with built-in RGB on the heatsinks and multiple headers for adding even more. RGB Fusion support includes two addressable RGB LED headers and two standard RGBW LED headers. The board also has multiple fan and hybrid cooling headers scattered throughout as part of Gigabyte’s Smart Fan 5 suite.
I am looking forward to reviews of this and other Threadripper 2 motherboards, and to seeing how the beefed-up cooling and power delivery might help both second- and first-generation Threadripper processors, especially when overclocking!