Sennheiser's GSX 1000 7.1 USB DAC; audio the way it wants to play

Subject: General Tech | February 8, 2018 - 02:34 PM |
Tagged: audio, sennheiser, GSX 1000, DAC, 7.1

Sennheiser's GSX 1000 is an external USB 24-bit/96 kHz DAC, built around Sennheiser's own 7.1 Binaural Rendering Engine, with a headphone amp and a line-out port for connecting active speakers. The only difference from the GSX 1200 is that the GSX 1000 cannot daisy chain multiple DACs together, a feature not many of us need.  TechPowerUp were more than impressed with the sound, but the DAC fell short of perfection: you cannot modify the preset equalizer choices nor disable the noise cancellation on the mic jack, both of which should be possible on an audio device at this price.


"The Sennheiser GSX 1000 Audio Amplifier is a beautiful external USB sound card equipped with the best 7.1 virtual surround sound system we've heard so far, and a host of other interesting features primarily aimed towards hardcore gaming."

Here is some more Tech News from around the web:

Audio Corner

Source: TechPowerUp

LinkedIn and Microsoft find a way to help you need to find a new job ... in a hurry

Subject: General Tech | February 8, 2018 - 12:50 PM |
Tagged: résumé, microsoft, linkedin, bad idea, résumé assistant

It is so obvious that it is hard to believe Microsoft didn't do this years ago.  Obviously the best time and place to search for a new job is over your current employer's network, using Microsoft Word.  Now you can, as Word and LinkedIn will be joined at the hip.  Yes, that source of bizarre requests to connect with people you have essentially nothing in common with, apart from the fact that you may have been employed at some time in your life, is coming to O365!  It won't start out as annoyingly persistent as Clippy; it will be buried under the Review tab on your ribbon, but it will be there unless IT decides to block it. 

It is of course described as having an AI, which pops up those completely inappropriate job suggestions LinkedIn excels at, as well as scanning the résumés of others to offer you advice on how best to write about your qualifications.  Read more about Microsoft's $25 billion Résumé Assistant over at El Reg.


"Microsoft has glued LinkedIn and Office 365's Word together so it can automatically help folks write or update their résumés – and find them new jobs at the same time."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

Podcast #486 - AMD Mobile APUs, new Xeon-D processors, EPYC offerings from Dell, and more!

Subject: General Tech | February 8, 2018 - 11:21 AM |
Tagged: podcast, amd, raven ridge, 2500U, APU, Intel, xeon-d, dell, EPYC, vaunt, Tobii

PC Perspective Podcast #486 - 02/08/18

Join us this week for a recap of news and reviews including AMD Mobile APUs, new Xeon-D processors, EPYC offerings from Dell, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: - Share with your friends!

Hosts: Allyn Malventano, Jeremy Hellstrom, Josh Walrath, Ken Addison

Peanut Gallery: Alex Lustenberg

Program length: 1:16:53

Podcast topics of discussion:

  1. Week in Review:
  2. News items of interest:
  3. Picks of the Week:
    1. 1:12:15 Alex: Terraria
  4. Closing/outro

Qualcomm 5G devices coming in 2019, leaving Apple behind

Subject: Mobile | February 8, 2018 - 11:01 AM |
Tagged: qualcomm, 5G, 5g nr, x50, snapdragon, apple, Samsung

This story originally appeared on

With significant pressure to show the value and growth opportunities for the company in the face of a looming hostile takeover bid from Broadcom, mobile chip design house Qualcomm is hoping that its position in the market for next-generation cellular radio technology will be a foundation of its future. The company today revealed partnerships with 18 global OEMs that will be launching 5G-ready devices in 2019, and said that 18 worldwide cellular carriers will be completing tests of Qualcomm 5G radios in 2018.

5G is the follow-up to the current 4G cellular technology in the majority of the world’s smartphones. It will improve connection speeds, lower latency, and transform numerous markets from self-driving cars to industrial automation. And it can do all of this while lowering the load on carrier networks, giving all users a noticeable increase in performance and usability.


Qualcomm has been leaning on this 4G-to-5G transition as a portion of its long-term plan and strategy for many years. As a part of the company’s recent call to action for shareholders to resist the hostile takeover from Broadcom, the San Diego-based company believes that it has a 12-24 month lead over competing connectivity providers, namely Intel. This position will allow Qualcomm to capitalize on what many believe could be the most disruptive and market shifting wireless transition in history.

To maintain the leadership role, despite mass-market availability being limited to 2019 products, Qualcomm has announced partnerships with 18 different OEMs that will build those products using the Snapdragon X50 modem. This modem was the first announced to support the finalized specification of 5G radios. OEMs like LG, HTC, Sony, ASUS, and vivo are committed to using the X50 modem in devices ranging from next-generation smartphones to Windows-based PCs.

There has been talk that 5G products would not be available until 2020, but Qualcomm believes that 5G will have an impact on revenue a year earlier than that. This collection of phone and device providers puts Qualcomm well ahead of Intel in terms of integration and support in the market, something Qualcomm has long believed would be the case but is only now able to confirm. Commercialization of 5G and collaborations with the leading device manufacturers will push Intel further back in the race, with time running out for it to catch up.

Two big OEMs are missing from the list in Qualcomm’s announcement: Samsung and Apple. While it makes sense that Apple would not want to be included in public statements from Qualcomm considering the continuing legal dispute between the two companies, there is a legitimate question as to whether Apple will be an early adopter of 5G technology at all. It has shown in the past that it is more than willing to let others experiment and drive wireless technology shifts on the networks, with both the iPhone 3G and the first iPhone with LTE (the iPhone 5) lagging behind other smartphones by several quarters. If Apple chooses not to integrate the Qualcomm modem, it will depend on Intel to provide a solution instead, and could miss out on 5G technology for all of 2019.

Not seeing Samsung as a part of this announcement from Qualcomm is more surprising, but it is more likely an omission of politics than of technology. I recently wrote about the extension and expansion of the licensing agreement between Samsung and Qualcomm, and it is unlikely that this contract would not have included the X50 modem for 5G. I expect the 2019 models of Samsung’s Galaxy devices to include the Qualcomm chip as well.

The second part to this story is that 18 different global cellular carriers, including Verizon and AT&T in the US, China Mobile, and SK Telecom, will be testing 5G with Qualcomm devices and infrastructure in 2018. These validation tests are used to demonstrate the capabilities of new wireless technology and finalize the implementation methods for the hardware in the field.

These two announcements put Qualcomm in the driver’s seat for 5G adoption and integration. 5G will offer consumers speeds 4-5x faster than today’s top offerings, lower latency for more responsive web browsing and new capabilities like streaming virtual reality. It will make Wi-Fi less necessary. The cellular carriers will take advantage of 5G for its ability to run more data through existing infrastructure, opening capacity for more users, devices, and upgradable services.

Samsung Mass Producing 256 GB eUFS For Automotive Industry

Subject: Storage | February 8, 2018 - 08:04 AM |
Tagged: UFS, Samsung, eUFS, embedded, automotive, adas, 256GB

Samsung announced yesterday that it has begun mass production of 256 GB eUFS (Embedded Universal Flash Storage) flash storage for embedded automotive applications. Doubling the capacity of the 128GB eUFS flash it announced last fall, the new embedded flash conforms to the newer JEDEC eUFS 3.0 standard including the new temperature monitoring and thermal throttling safety features which Samsung reportedly had a hand in developing. The new embedded storage is aimed at smart vehicles for use in driver assistance features (ADAS), infotainment systems, and next-generation dashboards.


The new eUFS 3.0 compliant flash is notable for its increased temperature range of -40°C to 105°C for both operational and idle/power-saving modes, which makes it much better suited for use in vehicles, where temperature extremes can be reached through both extreme weather and engine heat. Samsung compares its eUFS flash with traditional eMMC 5.1 storage, which has a temperature range of only -25°C to 85°C when in use and -40°C to 85°C in power-saving mode.

Samsung’s eUFS can hit sequential read speeds of up to 850 MB/s and random read performance of up to 45,000 IOPS. Samsung did not specify write performance numbers, but based on its other eUFS flash, sequential and random writes should be in the neighborhood of 250 MB/s and 40,000 IOPS respectively. According to Samsung’s press material for its 512GB smartphone eUFS, the 256GB eUFS for the automotive market is composed of 8 stacks of 48-layer 256Gb V-NAND and a controller, all packaged together to hit the 256GB storage capacity.

Samsung has included a temperature sensor in the flash, along with the ability for the controller to notify the host AP (application processor) at any pre-set temperature threshold so the AP can downclock to bring power and heat back to acceptable levels. The temperature monitoring hardware is intended to help protect the heat-sensitive NAND flash from extreme temperatures to improve data reliability and longevity. The eUFS flash also features a “data refresh” feature that improves long-term performance by relocating older data to less-often-used cells.

Embedded Universal Flash Storage is interesting compared to eMMC for more than temperature tolerance, though, as it uses a dual-channel LVDS serial interface that allows it to operate in full duplex mode rather than the half duplex mode of eMMC with its x8 parallel interface. This means that eUFS can be read and written simultaneously, and with the addition of command queueing, the controller is able to efficiently execute and prioritize read/write operations and perform error correction without involving the host processor and software.

I am looking forward to the advancements in eUFS storage and its use in more performant mobile devices and vehicles, especially on the low end in tablets and notebooks where eMMC is currently popular.

Source: Samsung

SK Hynix Sampling Enterprise SSDs With 72-Layer 512Gb 3D TLC Flash

Subject: Storage | February 7, 2018 - 10:03 PM |
Tagged: tlc, SK Hynix, enterprise ssd, 72-layer tlc, 3d-v4, 3d nand

SK Hynix has revealed its new enterprise solid state drives based on 72-layer 512 Gb 3D TLC NAND flash dies paired with the company's own in-house controller and firmware. The SK Hynix eSSDs are available in a traditional SAS/SATA product with capacities up to 4TB and a PCI-E variant that comes in at "above 1TB". Both drive types are reportedly being sampled to datacenter customers in the US.


SK Hynix has managed to double the capacity and improve the read latency of its new 512 Gb 72-layer NAND flash over its previous 256 Gb 72-layer flash which debuted last year. The eSSD product reportedly hits sequential read and write speeds of 560 MB/s and 515 MB/s respectively. Interestingly, while random read IOPS hit 98,000, random write performance is significantly lower at 32,000 IOPS. SK Hynix did not go into details, but I suspect this has to do with the tuning they did to improve read latency and the nature of the 72-layer stacked TLC flash.

Moving up to the PCI-E eSSD, customers can expect greater than 1TB capacities (SK Hynix did not specify the maximum capacity it will offer) with sequential reads hitting up to 2,700 MB/s and sequential writes hitting 1,100 MB/s. The random performance is similar to the above eSSD, with write performance much lower than read performance at 230K read IOPS and 35K write IOPS maximum. The greatly limited write performance may be the result of the drive not having enough flash channels, or of the flash itself not being fast enough at writes, a tradeoff SK Hynix had to make to hit its capacity targets with the larger 512 Gb (64 GB) dies.

Unfortunately, SK Hynix has not yet provided further details on its new eSSDs or the 3D-V4 TLC NAND used in the new drives. SK Hynix continuing to push into the enterprise storage market with its own SSDs is an interesting play that should encourage it to pursue advancements and production efficiencies that advance NAND flash technology.


Valve Supporting AMD's GPU-Powered TrueAudio Next In Latest Steam Audio Beta

Subject: General Tech, Graphics Cards | February 7, 2018 - 09:02 PM |
Tagged: VR, trueaudio next, TrueAudio, steam audio, amd

Valve has announced support for AMD's TrueAudio Next technology in its Steam Audio SDK for developers. The partnership will allow game and VR application developers to reserve a portion of a GCN-based GPU's compute units for audio processing, and increase the quality and quantity of audio sources as a result. AMD's OpenCL-based TrueAudio Next can run on CPUs as well, but its strength is the ability to run on a dedicated portion of the GPU. Because audio threads are not competing with rendering for the same GPU resources during complex scenes, both frame times and audio quality improve, and the GPU can process complex audio scenes and convolutions much more efficiently than a CPU, especially as the number of sources and impulse responses increases.


Steam Audio's TrueAudio Next integration is being positioned as an option for developers and an answer to increasing the level of immersion in virtual reality games and applications. While TrueAudio Next is not using ray tracing for audio, it is physics-based and can be used to great effect to create realistic scenes with large numbers of direct and indirect audio sources, ambisonics, increased impulse response lengths, echoes, reflections, reverb, frequency equalization, and HRTF (Head Related Transfer Function) 3D audio. According to Valve, indirect audio from multiple sources with convolution reverb is one of the most computationally intensive parts of Steam Audio, and TAN is able to handle it much more efficiently and accurately without affecting GPU frame times, freeing the CPU up for additional physics and AI tasks which it is much better at anyway.

Convolution is a way of modeling and filtering audio to create effects such as echoes and reverb. In the case of indirect audio, Steam Audio uses ray tracing to generate an impulse response (it measures the distance and path audio would travel from source to listener), and then convolution is used to generate a reverb effect which, while very accurate, can be quite computationally intensive since it requires hundreds of thousands of sound samples. Ambisonics further represent the directional nature of indirect sound, which helps to improve positional audio and the immersion factor as sounds are modeled closer to the real world.
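For the curious, the convolution step described above can be sketched in a few lines. This is a toy illustration of the general technique, not Valve's or AMD's implementation; the tiny impulse response and signal are made-up values, and a real IR would hold hundreds of thousands of samples, which is exactly why the operation is so expensive.

```python
def convolve(signal, impulse_response):
    """Direct-form convolution: each output sample sums the dry signal
    weighted by the impulse response at every time offset."""
    n, m = len(signal), len(impulse_response)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

# A crude impulse response: the direct path plus two decaying echoes.
ir = [1.0, 0.0, 0.5, 0.0, 0.25]
dry = [1.0, 0.0, 0.0, 0.0]  # a single dry click
wet = convolve(dry, ir)     # the click now carries the room's echoes
```

The nested loop makes the cost obvious: it is proportional to signal length times IR length, so with long impulse responses and many sources the work explodes, which is why offloading it to reserved GPU compute units pays off.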


GPU versus CPU convolution (audio filtering) performance. Lower is better.

In addition to dedicating a portion (up to 20-25%) of a GPU's compute units to audio processing, developers can enable or disable TrueAudio processing, including the level of acoustic complexity and detail, on a scene-by-scene basis. Currently it appears that Unity, FMOD Studio, and engines using the C API can hook into Steam Audio and the TrueAudio Next features, but it remains up to developers to use the features and integrate them into their games.

Note that GPU-based TrueAudio Next requires a GCN-based graphics card of the RX 470, RX 480, RX 570, RX 580, R9 Fury, R9 Fury X, Radeon Pro Duo, RX Vega 56, and RX Vega 64 variety in order to work. That is a limiting factor in adoption, much like the various hair and facial tech is for AMD and NVIDIA on the visual side of things, where the question always arises of whether the target market is large enough to encourage developers to put in the time and effort to enable an optional feature.

I do not pretend to be an audio engineer, nor do I play a GPU programmer on TV, but more options are always good, and I hope that developers take advantage of the resource reservation and GPU compute convolution algorithms of TrueAudio Next to further the immersion factor of audio as much as they have the visual side of things. As VR continues to become more relevant, I think developers will have to start putting more emphasis on accurate and detailed audio, and that's a good thing for an aspect of gaming that has seemingly taken a backseat since Windows Vista.

What are your thoughts on the state of audio in gaming and Steam Audio's new TrueAudio Next integration?


Source: Valve

Dell's Epyc package

Subject: General Tech | February 7, 2018 - 04:41 PM |
Tagged: amd, dell, EPYC, R6415, R7415, R7425

Dell has released three new PowerEdge server models, all powered by one or two of AMD's new EPYC chips.  The R6415 is a single socket, 1U server which supports 1TB of RAM, though The Register does point to a PR slide that implies 2TB might be achievable.  The R7415 is larger at 2U because it can hold up to 12 SAS/SATA/NVMe + 12 NVMe drives or up to 14 3.5" drives.  Last up is the dual socket R7425 with either 32 SATA/SAS drives or 24 NVMe flash drives and up to 4TB of RAM.  Check out more specs at The Register.


"There are three PowerEdge AMD-powered servers: the R6415, R7415, and R7425. These accompany the PowerEdge 14G R640 and R740 Xeon SP servers in the Round Rock company's server portfolio, and they inherit general PowerEdge management and feature goodness."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

Intel Pushes Xeon to the Edge With Refreshed Skylake-Based Xeon D SoCs

Subject: Processors | February 7, 2018 - 09:01 AM |
Tagged: Xeon D, xeon, servers, networking, micro server, Intel, edge computing, augmented reality, ai

Intel announced a major refresh of its Xeon D System on a Chip processors aimed at high density servers that bring the power of the datacenter as close to end user devices and sensors as possible to reduce TCO and application latency. The new Xeon D 2100-series SoCs are built on Intel’s 14nm process technology and feature the company’s new mesh architecture (gone are the days of the ring bus). According to Intel, the new chips are squarely aimed at “edge computing” and offer up to 2.9 times the network performance, 2.8 times the storage performance, and 1.6 times the compute performance of the previous-generation Xeon D-1500 series.


Intel has managed to pack into these SoCs up to 18 Skylake-based processing cores; Quick Assist Technology co-processing (for things like hardware-accelerated encryption/decryption); four DDR4 memory channels addressing up to 512 GB of DDR4-2666 ECC RDIMMs; four Intel 10 Gigabit Ethernet controllers; 32 lanes of PCI-E 3.0; and 20 lanes of flexible high-speed I/O that can be configured as up to 14 SATA 3.0 ports, four USB 3.0 ports, or 20 lanes of PCI-E. Of course, the SoCs support Intel’s Management Engine, hardware virtualization, HyperThreading, Turbo Boost 2.0, and AVX-512 instructions with 1 FMA (fused multiply-add) as well.


Suffice it to say, there is a lot going on with these new chips, which represent a big step up in capabilities (and TDPs), further bridging the gap between the Xeon E3 v5 and Xeon E5 families and the new Xeon Scalable Processors. Xeon D is aimed at datacenters where power and space are limited, and while the soldered SoCs are single-socket (1P) setups, high density is achieved by filling racks with as many single-processor Mini ITX boards as possible. Xeon D does not quite match the per-core clockspeeds of the “proper” Xeons, but it has significantly more cores than Xeon E3 and much lower TDPs and cost than Xeon E5. Its many lower-clocked, lower-power cores excel at burstable tasks such as serving up websites, where many threads may be generated and maintained for long periods of time without needing much processing power, and when new page requests do come in the cores are able to turbo boost to meet demand. For example, Facebook is using Xeon D processors to serve its front-end websites in its Yosemite OpenRack servers, where each server rack holds 192 Xeon D 1540 SoCs (four Xeon D boards per 1U sled) for 1,536 Broadwell cores.

Other applications include edge routers, network security appliances, self-driving vehicles, and augmented reality processing clusters. The autonomous vehicle use case is perhaps the best example of just what the heck edge computing is. Rather than fighting the laws of physics to transfer sensor data back to a datacenter for processing and then send the results back to the car in time for it to safely act on the processed information, the idea of edge computing is to bring most of the processing, networking, and storage power as close as possible to both the input sensors and the device (and human) that relies on accurate and timely data to make decisions.


As far as specifications, Intel’s new Xeon D lineup includes 14 processor models broken into three main categories. The Edge Server and Cloud SKUs include eight-, twelve-, and eighteen-core options with TDPs ranging from 65W to 90W. Interestingly, the 18-core Xeon D does not feature the integrated 10 GbE networking the lower-end models have, though it supports higher DDR4 memory frequencies. The two remaining classes of Xeon D SoCs are the “Network Edge and Storage” and “Integrated Intel Quick Assist Technology” SKUs. These are roughly similar, with two eight-core, one 12-core, and one 16-core processor each (the former category also has a quad core that is not present in the latter), though there is a big differentiator in clockspeeds. It seems customers will have to choose between core clockspeeds and Quick Assist acceleration (up to 100 Gbps), as the chips that do have QAT are clocked much lower than those without the co-processor hardware. That makes sense: with similar TDPs, clocks had to be sacrificed to maintain the same core count. Thanks to the updated architecture, Intel is encroaching a bit on the per-core clockspeeds of the Xeon E3s and E5s, though once turbo boost comes into play the Xeon Ds can’t compete.


The flagship Xeon D 2191 offers two more cores (four additional threads) than the previous Broadwell-based flagship Xeon D 1577, as well as higher clockspeeds at 1.6 GHz base versus 1.3 GHz and 2.2 GHz turbo versus 2.1 GHz. The Xeon D 2191 does lack the integrated networking, though. Comparing the two refreshed 16-core Xeon Ds to the 16-core Xeon D 1577, Intel has managed to increase clocks significantly (up to 2.2 GHz base and 3.0 GHz boost versus 1.3 GHz base and 2.1 GHz boost), double the number of memory channels and network controllers, and increase the maximum amount of memory from 128 GB to 512 GB. All those increases did come at the cost of TDP, though, which went from 45W to 100W.


Xeon D has always been an interesting platform, both for enthusiasts running VM labs and home servers and for big-data enterprise clients building and serving up the 'next big thing' built on the astonishing amounts of data people create and consume daily. (Intel estimates a single self-driving car would generate as much as 4TB of data per day, the average person in 2020 will generate 1.5 GB of data per day, and VR recordings such as NFL True View will generate up to 3TB a minute!) With Intel ramping up core count, per-core performance, and I/O, the platform is starting not only to bridge the gap between single-socket Xeon E3 and dual-socket Xeon E5 but to claim a place of its own in the fast-growing server market.
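To put Intel's 4TB-per-day self-driving car estimate in perspective, a quick back-of-the-envelope calculation (my arithmetic, not Intel's) shows the sustained link speed it would take to ship that data off to a distant datacenter instead of processing it at the edge:

```python
# How fast a link would a self-driving car need to upload 4 TB/day?
TB = 1e12  # bytes (decimal terabyte)

data_per_day = 4 * TB            # Intel's per-car estimate
seconds_per_day = 24 * 60 * 60   # 86,400 s

bytes_per_sec = data_per_day / seconds_per_day
megabits_per_sec = bytes_per_sec * 8 / 1e6  # bytes/s -> Mbps
# Roughly 370 Mbps, sustained around the clock, per car
```

A constant ~370 Mbps uplink per vehicle is not realistic on today's cellular networks, which is exactly the argument for pushing Xeon D-class compute out to the edge.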

I am looking forward to seeing how Intel's partners and the enthusiast community take advantage of the new chips and what new projects they will enable. It is also going to be interesting to see the responses from AMD (e.g. Snowy Owl, and to a lesser extent Great Horned Owl at the low and niche ends, as the latter has fewer CPU cores but a built-in GPU) and the various ARM players (Qualcomm Centriq, X-Gene, Ampere, etc.*) as they vie for this growing market space with higher-powered SoC options in 2018 and beyond.


*Note that X-Gene and Ampere are both now backed by the Carlyle Group, with MACOM having sold X-Gene to Project Denver Holdings and Ampere being led by ex-Intel employees.

Source: Intel

Intel flaunts their Vaunt and its fricking laser beams

Subject: General Tech | February 6, 2018 - 11:40 AM |
Tagged: Intel, vaunt, AR

Intel recently showed off a prototype of their Vaunt smart glasses, which have a significant advantage over Google's failed Glass: no visible camera.  Instead these glasses fire a laser into your eyeholes, something you are usually told to avoid but which in this case should be perfectly safe.  The laser projects small monochrome images or text at the bottom of your field of vision, which does not interfere with your line of sight and is mostly invisible until you look down.  So far the amount of information that can be displayed on the prototype is limited, and it is a long way from hitting the market, so you should expect changes.  If you have some sort of minor vision problem, The Inquirer assures us that you will still be able to see the information the Vaunt displays.


"Instead, the Vaunt glasses use a low-powered class one laser to project a monochrome 400x150 resolution image on to the retina of your eye. Yeah, if you find eyes queasy you might want to get yourself a cup of tea."

Here is some more Tech News from around the web:

Tech Talk


Source: The Inquirer

Microsoft Launches Cheaper Surface Book and Surface Laptop

Subject: General Tech | February 6, 2018 - 12:46 AM |
Tagged: surface laptop, surface book, surface, microsoft

Microsoft is introducing lower-end versions of its Surface Book 2 and Surface Laptop thin-and-lights in a very good news/bad news way. The good news is that customers will not have to give up much in the way of specifications; the bad news is that these new SKUs are not much cheaper than their predecessors as a result. If you were hoping for a budget Surface Book, this is not the device you are looking for.

Tech Report reports that Microsoft is now offering a Surface Book 2 with the same Core i5 7300U (dual core with Hyperthreading) and 8GB base RAM as the existing i5 model, but with half the storage at 128 GB. All other specifications remain the same, including the 13.5” 3000x2000 display, 23mm-thick chassis with the 2-in-1 folding hinge, and the same USB 3.1 Gen 1, headphone, SD card, and Surface Dock I/O ports. The new “budget” model starts at $1,199, which is $300 cheaper than the i5 7300U model with 256 GB of storage. Not bad considering you are only giving up storage space, but still priced at a premium.


In addition to the Surface Book 2, Microsoft is also adding a cheaper Surface Laptop which cuts the cost of entry to $799. Customers will have to settle for the silver version, however, as that is currently the only color option at that price point. Performance as well as storage takes a hit in this cost-cutting endeavor, with the previous base Core i5 CPU (2c/4t, up to 3.1 GHz) replaced by a Core m3-7Y30 (2c/4t, up to 2.6 GHz). The new budget model further includes 4GB of RAM and 128 GB of internal storage. Fortunately, the 13.5” 2256x1504 touchscreen display remains the same. The price difference between the Core m3 SKU and the previously base Core i5 7200U SKU is only $200, and you are giving up more than storage this time to get there.

It appears the Surface Laptop still comes with Windows 10 S while the Surface Book 2 comes with Windows 10 Pro. Microsoft provides 1-year warranties on these machines.

Are the new lower-cost versions enough to get you to buy into the Surface and Windows 10 ecosystem? 


Source: Tech Report

Plextor Launches Budget M8V SATA SSDs

Subject: Storage | February 5, 2018 - 11:54 PM |
Tagged: toshiba, ssd, SM2258, silicon motion, plextor, BiCS, 3d nand

Plextor is introducing a new SATA SSD option with its 2.5” M8VC and M.2 M8VG solid state drives. The M8V series pairs a Silicon Motion SM2258 controller with Toshiba’s 64-layer 3D TLC NAND (BICS flash) to deliver budget SSDs in 128 GB, 256 GB, and 512 GB capacities. Plextor is using its own Plex Nitro firmware and includes SLC cache, system RAM cache support, Plex Compressor compression, 128-bit ECC and LDPC error correction, and hardware AES encryption. Plextor warranties its M8V series SSDs for three years.


Plextor’s new drives are limited by the SATA 6 Gbps interface and max out at 560 MB/s sequential reads. Sequential writes top out at 400 MB/s for the 128 GB model, 510 MB/s for the 256 GB model, and 520 MB/s for the 512 GB drive. Similarly, 4K random reads and writes scale up as you add more flash, as shown in the table below. The top-end 512 GB drive hits 82K 4K random read IOPS and 81K 4K random write IOPS. The 256 GB drives are only slightly slower at 81K and 80K respectively. The 128 GB M8V SSDs do not appear to have enough flash channels to keep up with the larger capacity drives, though, as their performance maxes out at 60K random reads and 70K random writes.

Plextor M8V Series       128 GB       256 GB       512 GB
Sequential Reads         560 MB/s     560 MB/s     560 MB/s
Sequential Writes        400 MB/s     510 MB/s     520 MB/s
4K Random Read IOPS      60K          81K          82K
4K Random Write IOPS     70K          80K          81K
Endurance                70 TBW       140 TBW      280 TBW
DWPD                     0.5          0.5          0.5
MTBF (hours)             1.5 Million  1.5 Million  1.5 Million

Plextor rates the M8V series at 0.5 DWPD (drive writes per day) and write endurance of 70 TB for the 128 GB model, 140 TB for the 256 GB model, and 280 TB for the 512 GB model, with an MTBF of 1.5 million hours. These numbers aren’t too bad considering this is TLC flash, and the drives are likely to last longer than the ratings (it’s just not guaranteed).
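The DWPD and TBW ratings above line up neatly with the three-year warranty. A quick sanity check (my arithmetic, assuming the conventional TBW = capacity × DWPD × warranty-days relationship, not anything from Plextor's spec sheet):

```python
# Check that Plextor's TBW ratings match 0.5 DWPD over a 3-year warranty.
dwpd = 0.5
warranty_days = 3 * 365  # three-year warranty

implied_tbw = {
    capacity_gb: capacity_gb * dwpd * warranty_days / 1000  # GB -> TB
    for capacity_gb in (128, 256, 512)
}
# e.g. 128 GB: 128 * 0.5 * 1095 / 1000 = 70.08 TB,
# which rounds to the rated 70 TBW (likewise 140 and 280 TBW).
```

In other words, the TBW figures are just the 0.5 DWPD rating restated per capacity, so neither number promises more endurance than the other.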

The SM2258 controller appears to be fairly well established, having also been used by Adata, Mushkin, and others in their budget solid state drives. Plextor has not announced pricing or availability, and in searching around online I was not able to find the drives for sale yet. Its previous S2C series (the M7V replacement) SATA drives came in at just under 26 cents per gigabyte using the same SMI SM2258 controller, though with SK Hynix 16nm planar TLC flash, so I would expect the M8V to come in close to that if not better.

I just wish we could get a SATA 4 standard already, to at least bring consumer systems up to the 12 Gbps that enterprise-oriented SAS can hit. While RAM and GPU shopping may make your wallet cry more than a Steam sale, at least it is a good time to be shopping for storage. What do you think about the influx of budget SSDs? Have you upgraded your family’s PCs to the magical performance of solid state storage yet?

Source: Plextor

It takes a hefty CPU to power a Chocobo

Subject: Processors | February 5, 2018 - 04:28 PM |
Tagged: final fantasy xv, round up

The new iteration of Final Fantasy sports some hefty recommendations, including a Core i7-3770 or FX-8350 powering your system.  TechSpot decided to test a variety of CPUs to see how they performed in tandem with a GTX 1080 Ti.  With 14 CPUs represented, including several generations of Intel chips and a representative from each of the three Ryzen lines, they proceeded to run through a battery of benchmarks.  The tests quickly showed that if you are running a quad core CPU clocked lower than 4GHz, from either vendor, you are not going to have a good time.  Check out the full results to see if your system can handle it, or if you should be shopping for a Ryzen 5 or 7, or perhaps a higher-end Coffee Lake if Intel is your cup of tea.


"Today we're checking out Final Fantasy XV CPU performance using the new standalone benchmark released ahead of next month's PC launch. The reason we want to look at CPU performance first is because the game is extremely CPU intensive, far more so than we were expecting."

Here are some more Processor articles from around the web:



Source: Techspot

I think I'm a clone now, Thermaltake's Core G21 and View 21

Subject: Cases and Cooling | February 5, 2018 - 03:15 PM |
Tagged: thermaltake, core 21, view 21, tempered glass

The only difference between the Core G21 and View 21 is the front panel: the Core G21 sports a matte grille while the View 21 offers a smoky glass front panel.  This makes it somewhat easier for [H]ard|OCP to review both cases at once.  Both offer extensive cooling options; the front can house your choice of two 140mm fans, three 120mm fans, or a radiator from 120mm to 360mm in size.  The top of the case can support a 120mm or 140mm fan, and the bottom another 120mm fan in addition to an intake for your PSU; neither location can support a radiator, though the rear can accommodate a 120mm fan or rad.  $70 is a decent price for a case sporting tempered glass, however [H] did feel Thermaltake should have included more than a single fan in the package.  Get the full details here.


"This is a two-for-one review: Thermaltake's Core 21 and View 21 Tempered Glass Edition cases are identical aside from the front panel. Both cases feature tempered glass panels that show off your build, and we'll find out what difference the drastically different front panel designs have on case performance."

Here are some more Cases & Cooling reviews from around the web:



Source: [H]ard|OCP

Windows S is now just an awkward phase which your PC can grow out of

Subject: General Tech | February 5, 2018 - 01:39 PM |
Tagged: windows s, windows 10, microsoft

Microsoft is changing how it will distribute Windows S, its Chrome OS-like locked-down operating system.  It will now become an option on all Windows 10 installations, allowing you to enable it if you feel the need to set up a computer which can only run apps from the Microsoft Store and only surf via Edge.  The Inquirer cites an interesting statistic: 83% of users who do not disable Windows S mode in the first week remain with that mode permanently.  Perhaps they don't know any better, or perhaps they were among those who were satisfied with the original Surface's Windows RT?


"Now, the company has confirmed that it will instead offer an "S Mode" on standard versions of Windows 10 instead, locking the machine down to a walled garden of apps from the Microsoft Store, and blocking traditional Win32 programs. And, of course, restricting you to using bloody Edge browser. "

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer

MSI Launches Radeon RX 580 Armor MK2 Graphics Cards

Subject: Graphics Cards | February 3, 2018 - 05:00 PM |
Tagged: RX 580, msi, GDDR5, factory overclocked, amd, 8gb

MSI is updating its Radeon RX 580 Armor series with a new MK2 variant (in both standard and OC editions) that features an updated cooler with red and black color scheme and a metal backplate along with Torx 2.0 fans.


The graphics card is powered by a single 8-pin PCI-E power connection and has two DisplayPort, two HDMI, and one DVI display outputs. MSI claims the MK2 cards use its Military Class 4 hardware, including high-end solid capacitors. The large heatsink features three copper heatpipes and a large aluminum fin stack. The cards appear to use the same PCB as the original Armor series, but it is not clear from MSI's site whether anything has been done differently with the power delivery.

The RX 580 Polaris GPU runs at a slight factory overclock out of the box, with a boost clock of up to 1353 MHz (reference is 1340 MHz) for the standard edition and up to 1366 MHz for the RX 580 Armor MK2 OC Edition. The OC edition can further clock up to 1380 MHz when run in OC mode using the company's software utility (enthusiasts can attempt to go beyond that, but MSI makes no guarantees). Both cards come with 8GB of GDDR5 memory clocked at the reference 8 GHz.
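To put "slight" in perspective, a quick calculation against the 1340 MHz reference boost clock shows how modest these factory overclocks really are:

```python
# Percentage uplift of each MK2 clock state over AMD's 1340 MHz
# reference boost clock for the RX 580.
reference_mhz = 1340
for name, boost_mhz in [("Armor MK2", 1353),
                        ("Armor MK2 OC", 1366),
                        ("OC mode", 1380)]:
    uplift = (boost_mhz / reference_mhz - 1) * 100
    print(f"{name}: {boost_mhz} MHz (+{uplift:.1f}%)")
```

Even the OC-mode clock is only about a 3% bump, so don't expect the overclock alone to justify a big price premium over a reference card.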

MSI did not release pricing or availability, but expect the cards to be difficult to find, and selling for well above MSRP, when they are in stock. If you have a physical Micro Center near you, it might be worth watching for one of these cards there to have a chance of getting one closer to MSRP.

Source: MSI

External storage for the terminally impatient, OWC's Mercury Helios

Subject: Storage | February 2, 2018 - 03:39 PM |
Tagged: owc, Mercury Helios, thunderbolt 3, PCIe SSD, external ssd

External storage does not have to be slow, as the OWC Mercury Helios 3 PCIe Thunderbolt 3 external enclosure demonstrates.  The TB3 connection is capable of up to 40Gbps, assuming you have the proper connection, which will keep a drive such as the Kingston DC1000 NVMe SSD very busy.  In The SSD Review's testing, they saw data transfers cap out at 2.8GB/s read and between 2.5-2.7GB/s write, which makes this perfect for HD video or for manipulating large media files. The enclosure will set you back about $200; the cost of the PCIe SSD you put inside it is a choice for you to make.
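A quick back-of-the-envelope conversion shows how those observed speeds stack up against the link's ceiling; the gap to the raw line rate is the usual combination of protocol overhead and the SSD's own limits, not a flaw in the enclosure:

```python
# Thunderbolt 3's 40 Gbps figure is a raw line rate in bits; dividing
# by 8 gives the theoretical byte throughput the drive's observed
# speeds can be compared against.
link_gbps = 40
theoretical_gbs = link_gbps / 8   # 5.0 GB/s upper bound
observed_read_gbs = 2.8
print(f"Link ceiling: {theoretical_gbs:.1f} GB/s; "
      f"observed read: {observed_read_gbs} GB/s "
      f"({observed_read_gbs / theoretical_gbs:.0%} of raw line rate)")
```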


"The trick…is Thunderbolt 3 and the external devices companies envision to solve this speed and data storage problem. This is where the OWC Mercury helios 3 PCIe Thunderbolt 3 Expansion Chassis comes in."

Here are some more Storage reviews from around the web:


Ripping out cryptocurrency with AMD's 1950X

Subject: General Tech | February 2, 2018 - 02:50 PM |
Tagged: cryptocurrency, amd, Threadripper, 1950x

For the next little while at least, you should be able to pay off the purchase of a Threadripper 1950X by mining with it.  [H]ard|OCP did some testing using Monero and found that Threadripper is quite efficient at mining.  When mining full tilt, the system, including a GTX 1080, used only 335W, which could keep your energy bill somewhat lower than alternative systems.  Of course, with Bitcoin's value wobbling drunkenly, you might want to move quickly ... or skip it altogether.


"If you could have your AMD Ryzen Threadripper pay for itself over time, would you? No matter your feelings towards cryptocurrency mining, you can get your Threadripper mining today, and paying for itself. The process could not be much easier either. The big kicker is the actual wattage load on your system is likely much less than you would guess."

Here is some more Tech News from around the web:

Tech Talk

Source: [H]ard|OCP

PCPer Mailbag #29 - 2/2/2018

Subject: Editorial | February 2, 2018 - 09:00 AM |
Tagged: video, Ryan Shrout, pcper mailbag, pcper

It's time for the PCPer Mailbag, our weekly show where Ryan and the team answer your questions about the tech industry, the latest and greatest GPUs, the process of running a tech review website, and more!

On today's show:

00:39 - Ryan's worst PC build?
03:18 - PCIe vs. USB sound card?
06:10 - AMD APU Infinity Fabric for GPU & CPU?
08:06 - SSD prices in 2018?
10:42 - Firmware upgrades for GPUs?
13:13 - Storage configuration for Premiere Pro editing?
16:11 - PC hardware for Star Citizen?
19:18 - 10-gigabit networking?

Want to have your question answered on a future Mailbag? Leave a comment on this post or in the YouTube comments for the latest video. Check out new Mailbag videos each Friday!

Be sure to subscribe to our YouTube Channel to make sure you never miss our weekly reviews and podcasts, and please consider supporting PC Perspective via Patreon to help us keep videos like our weekly mailbag coming!

Source: YouTube

Tobii Eye Tracking Showed Impressive VR Headset Integration at CES

Subject: General Tech | February 1, 2018 - 05:53 PM |
Tagged: VR, virtual reality, Tobii, htc vive, eye tracking, CES 2018, CES

Last month in Tobii's suite at CES I was given a demonstration of a prototype VR headset that looked like any other HTC Vive - except for the ring of Tobii eye-tracking sensors inside and around each lens. While this might seem like a bit of an odd concept at first, I was patient as the benefits were explained to me, and then blown away when I actually tried it myself.


As you know if you have used a VR headset like the Oculus Rift or HTC Vive, the basic mechanics of VR interaction involve pointing your head in the direction you want to look, reaching with your hand (and controller) to point at an object, and then pressing a button on the controller to act. I will be completely honest here: I don't like it. After a little while, the fatigue and generally unnatural feeling of rapid, bird-like head movements kills whatever enthusiasm I might have for the experience, and I would be the last person to give high praise to a new VR product. However, I will attempt to explain why simply adding eye tracking made the entire experience 1000 times better (for me, anyway).


When I put on the prototype headset, the only setup I had to do was quickly follow a dot in my field of vision as it moved up/down/left/right, like a vision test for a driver's license. That's the entire calibration process, and with that out of the way I was suddenly able to look around without moving my head, which made the head movements that did follow feel completely natural. I would instinctively look up, or to the side, with my head following when I decided to focus my attention on that area. The amount of physical head movement was reduced to normal, human levels, which alone prevented me from feeling sick after a few minutes. Of course, this was not the only demonstrated feature of the integrated eye-tracking, and if you are familiar with Tobii you will know what's next.


This looks primitive, but it was an effective demo of the eye-tracking integration

The headset's ability to know exactly where you are looking allows you to aim based on your line of sight, if the game implements it, and the target practice I tried (throwing rocks at glass bottles in the demo world) felt completely natural. After launching a few rocks at distant bottles, I instantly decided that this should be the mechanic for a fantastic VR football video game - one where I could throw to different receivers just by looking at them.

I also received a demo of simulated AR integration (still within the VR world), and a demo of what eye-tracking adds to a home theater experience - and it was pretty convincing. I could scroll around and select movie titles from an interface by simply looking around, and within the VR world it was as if I was looking up at a big projection screen. Throughout the different demos I kept thinking about how much more natural everything felt when I wasn't constantly moving my head around and pointing at things with my controller.


Finally, there was another side to everything I experienced - and it might have been the most interesting thing from a PC enthusiast perspective: if the VR headset can track your focus, the GPU doesn't have to render anything outside it at full resolution. That alone could make this something of a breakthrough addition to the current VR headset space, as GPU performance is very expensive (even before the mining craze) and absolutely necessary for a smooth, high frame-rate experience. After 45 minutes with the headset on, I felt totally fine - and that was a change.
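Tobii has not published the details of its rendering pipeline, but the general idea - commonly called foveated rendering - can be sketched as dropping resolution with angular distance from the gaze point. The tier boundaries below are purely illustrative, not Tobii's actual values:

```python
# Illustrative sketch of foveated rendering (not Tobii's pipeline):
# scale a screen region's render resolution by how far it sits from
# the user's gaze point, measured in degrees of eccentricity.
def render_scale(eccentricity_deg: float) -> float:
    """Return a resolution multiplier for a screen region,
    given its angular distance from the gaze point."""
    if eccentricity_deg < 5:      # fovea: full detail
        return 1.0
    elif eccentricity_deg < 20:   # near periphery: half resolution
        return 0.5
    else:                         # far periphery: quarter resolution
        return 0.25

# A region 30 degrees off-gaze renders at a quarter of the linear
# resolution, i.e. 1/16 of the pixel work of the foveal region.
print(render_scale(2), render_scale(10), render_scale(30))
```

Since resolution scales in two dimensions, even a modest falloff like this can cut total pixel work dramatically, which is exactly why the technique pairs so well with GPU-hungry VR.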

So what is the takeaway from all this? I'm just an editor who had a meeting with Tobii at CES, and I walked out of the meeting with a couple of business cards and nothing else. I admit that I am a VR skeptic who went into the meeting with no expectations. And I still left thinking it was the best product I saw at the show.

More information and media from the CES demos are available in Tobii's CES blog post.

Source: Tobii