Subject: Graphics Cards, Shows and Expos | January 4, 2015 - 03:56 PM | Ryan Shrout
Tagged: msi, GTX 970, gaming, ces 2015, CES, 100me
To celebrate the shipment of 100 million GeForce GPUs, MSI is launching a new revision of the GeForce GTX 970, the Gaming 100ME (millionth edition). The cooler is identical to that used in the GTX 970 Gaming 4G but replaces the red color scheme of the MSI Gaming brand with a green very close to NVIDIA's own.
This will also ship with a "special gift" and will be a limited edition, much like the Golden Edition GTX 970 from earlier this year.
MSI had some other minor updates to its GPU line including the GTX 970 4GD5T OC with a cool looking black and white color scheme and an 8GB version of the Radeon R9 290X.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: General Tech, Shows and Expos | December 29, 2014 - 10:06 AM | Scott Michaud
Tagged: sony, Samsung, playstation now, Playstation
I know that I have said it in the past, but I am not big on cloud streaming services. For art, the ability to genuinely own your content keeps it safe from censorship and licensing disagreements. You only need to look back a year to see Disney pulling access to legally purchased content on Amazon because they wanted their TV channel to have exclusive rights to the Christmas movies in the holiday season. This did not apply to people who actually owned the content (semi-)DRM-free. Streaming services, especially for video games, are a perfect target for anyone willing to abuse the system.
Remember: If you build it, the abuse will come.
With that commentary out of the way, what streaming services are good at is pure entertainment. They are just about peak convenience to deliver... some form of entertaining content... unless you have spotty internet (or some other exception). These services have definite merit, so long as they augment platforms for actual art and not attempt to replace them.
So why am I rambling? Recently, Sony has announced that PlayStation Now will arrive for Samsung Smart TVs alongside Sony devices. At first, this might sound surprising. Sony, a console manufacturer, is providing access to the PlayStation ecosystem on other platforms – and yes, that is noteworthy. It is also not without precedent. While the initiative is mostly abandoned, Sony tried opening up to third-party mobile manufacturers (HTC, Sharp, Fujitsu, Wikipad, and Alcatel) with “PlayStation Certified”.
There is also a second reason why this is not too surprising: Samsung and Sony are fairly close partners in TV technology. Sony LCD TV panels were manufactured by S-LCD until just a few years ago, when Samsung bought out Sony's interest in the company. The two companies are not really hostile in the TV market. If we see Sony open up PlayStation Now to LG Electronics, then I will scratch my head.
While announced ahead of CES, PlayStation Now is expected to be present at the show on Samsung TVs.
Subject: Storage, Shows and Expos | September 16, 2014 - 02:29 PM | Allyn Malventano
Tagged: ssd, slc, sata, mlc, micron, M600, crucial
You may already be familiar with the Micron Crucial M550 line of SSDs (if not, familiarize yourself with our full capacity roundup here). Today Micron is pushing their tech further by releasing a new M600 line. The M600s are the first full lineup from Micron to use their 16nm flash (previously only in their MX100 line). Aside from the die shrink, Micron has addressed the glaring issue we noted in our M550 review - that issue being the sharp falloff in write speeds in lower capacities of that line. Their solution is rather innovative, to say the least.
Recall the Samsung 840 EVO's 'TurboWrite' cache, which gave that drive a burst of write speed during short sustained write periods. The 840 EVO accomplished this by dedicating a small SLC section of flash memory on each TLC die. All data written passed through this cache, and once it was full (a few GB, varying with drive capacity), write speed slowed to TLC levels until the host system stopped writing for long enough for the SSD to flush the cached data from SLC to TLC.
The Micron M600 SSD in 2.5" SATA, MSATA, and M.2 form factors.
Micron flips the 'typical' concept of caching methods on its head. It does employ two different types of flash writing (SLC and MLC), but the first big difference is that the SLC is not really cache at all - not in the traditional sense, at least. The M600 controller, coupled with some changes made to Micron's 16nm flash, is able to dynamically change the mode of each flash memory die *on the fly*. For example, the M600 can place most of the individual 16GB (MLC) dies into SLC mode when the SSD is empty. This halves the capacity of each die, but with the added benefit of much faster and more power efficient writes. This means the M600 would really perform more like an SLC-only SSD so long as it was kept less than half full.
As you fill the SSD towards (and beyond) half capacity, the controller incrementally clears the SLC-written data, moving that data onto dies configured to MLC mode. Once empty, the SLC die is switched over to MLC mode, effectively clearing more flash area for the increasing amount of user data to be stored on the SSD. This process repeats over time as the drive is filled, meaning you will see less SLC area available for accelerated writing (see chart above). Writing to the SLC area is also advantageous in mobile devices, as those writes not only occur more quickly, they consume less power in the process:
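The fill-and-flip behavior above can be sketched in a few lines. To be clear, this is a minimal model and not Micron's actual firmware logic: the 16-die drive geometry, the per-die capacities, and the function name are all illustrative. It does capture why the drive can behave like an SLC-only SSD below half capacity:

```python
def max_slc_dies(data_gb, n_dies=16, die_gb=16):
    """Maximum number of dies that can remain in SLC mode (half
    capacity each) while still holding data_gb of user data.

    With s dies in SLC mode, usable capacity is:
        s * (die_gb / 2) + (n_dies - s) * die_gb
    which must stay >= data_gb.
    """
    slc_gb = die_gb // 2  # an SLC-mode die holds half its MLC capacity
    s = (n_dies * die_gb - data_gb) // (die_gb - slc_gb)
    return max(0, min(n_dies, int(s)))
```

With 16 x 16GB dies (a hypothetical 256GB drive), every die can stay in SLC mode until the drive is half full (128GB); past that point, dies must be flipped to MLC mode one by one, and at 100% full no SLC-mode dies remain.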
For those worst case / power user scenarios, here is a graph of what a sustained sequential write to the entire drive area would look like:
Realize this is not typical usage, but if it happened, you would see SLC speeds for the first ~45% of the drive, followed by MLC speeds for another 10%. After the 65% point, the drive is forced to initiate the process of clearing SLC and flipping dies over to MLC, doing so while the host write is still in progress, and therefore resulting in the relatively slow write speed (~50 MB/sec) seen above. Note that in normal use (i.e. not filling the entire drive at full speed in one go), garbage collection would be able to rearrange data in the background during idle time, meaning write speeds should be near full SLC speed for the majority of the time. Even with the SSD nearly full, there should be at least a few GB of SLC-mode flash available for short bursts of SLC speed writes.
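That worst-case full-drive fill can be modeled as a simple three-phase function. The ~45% and ~65% breakpoints and the ~50 MB/sec fold speed come from the graph described above; the 400 MB/sec SLC figure matches the spec discussion below, while the 200 MB/sec MLC figure is purely an assumed placeholder for illustration:

```python
def sustained_write_speed(frac, slc=400, mlc=200, fold=50):
    """Approximate sequential write speed (MB/s) at a given fraction
    (0.0-1.0) of a single uninterrupted full-drive fill."""
    if frac < 0.45:
        return slc   # writing into SLC-mode dies at full speed
    if frac < 0.65:
        return mlc   # SLC area exhausted, writing directly to MLC
    return fold      # clearing SLC and flipping dies mid-write
```

Again, this models only the pathological one-shot fill; with idle time for background garbage collection, writes would stay near the SLC figure most of the time.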
This caching has enabled some increased specs over the prior generation models:
Note the differences in write speeds, particularly in the lower capacity models. The 128GB M550 was limited to 190MB/sec, while the M600 can write at 400MB/sec in SLC mode (which is where it should sit most of the time).
We'll be testing the M600 shortly and will come back with a full evaluation of the SSD as a whole and more specifically how it handles this new tech under real usage scenarios.
Subject: Storage, Shows and Expos | September 16, 2014 - 12:49 PM | Allyn Malventano
Tagged: ram, NVMe, IOPS, idf 2014, idf, ddr4, DDR
The Intel Developer Forum was last week, and there were many things to be seen for sure. Mixed in with all of the wearable and miniature technology news, there was a sprinkling of storage goodness. Kicking off the show, we saw new cold storage announcements from both HGST and Western Digital, but that was about it for HDD news, as the growing trend these days is with solid state storage technologies. I'll start with RAM:
First up was ADATA, who were showing off 64GB DDR3 (!) DIMMs:
Next up were various manufacturers pushing DDR4 technology quite far. First was SK Hynix's TSV 128GB DIMMs (covered in much greater depth last week):
Next up is Kingston, who were showing a server chassis equipped with 256GB of DDR4:
If you look closer at the stats, you'll note there is more RAM in this system than flash:
Next up is IDT, who were showing off their LRDIMM technology:
This technology adds special data buffers to the DIMM modules, enabling significantly higher amounts of installed RAM in a single system, with a 1-2 step de-rating of clock speeds as you take capacities to the far extremes. The above server has 768GB of DDR4 installed and running:
Moving onto flash memory type stuff, Scott covered Intel's new 40 Gbit Ethernet technology last week. At IDF, Intel had a demo showing off some of the potential of these new faster links:
This demo used a custom network stack that allowed a P3700 in a local system to be matched in IOPS by an identical P3700 *being accessed over the network*. Both local and networked storage turned in the same 450k IOPS, with the remote link adding only 8ms of latency. Here's a close-up of one of the SFF-8639 (2.5" PCIe 3.0 x4) SSDs and the 40 Gbit network card above it (low speed fans were installed in these demo systems to keep some air flowing across the cards):
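One way to make sense of matching local IOPS across a network link is Little's law: sustaining a given IOPS rate at a given end-to-end latency requires that many I/Os in flight. Here is a quick sanity check taking the quoted figures at face value; note this treats the added 8ms as the entire round-trip cost, which is a simplification:

```python
def required_queue_depth(iops, latency_s):
    """Little's law: average outstanding I/Os needed to sustain a
    given IOPS rate at a given end-to-end latency (in seconds)."""
    return iops * latency_s

# 450k IOPS across a link with 8ms of latency
depth = required_queue_depth(450_000, 0.008)  # thousands of I/Os in flight
```

That works out to thousands of outstanding requests, which is why demos like this are run at very high aggregate queue depths across many workers rather than a single synchronous stream.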
Stepping up the IOPS a bit further, Microsoft was showing off the capabilities of their 'Inbox AHCI driver', shown here driving a pair of P3700's at a total of 1.5 million IOPS:
...for those who want to get their hands on this 'Inbox driver', guess what? You already have it! "Inbox" is Microsoft's way of saying the driver is 'in the box', meaning it comes with Windows 8. Bear in mind you may get better performance with manufacturer specific drivers, but it's still a decent showing for a default driver.
Now for even more IOPS:
Yes, you are reading that correctly. That screen is showing a system running over 11 million IOPS. Think it's RAM? Wrong. This is flash memory pulling those numbers. Remember the 2.5" P3700 from a few pics back? How about 24 of them:
The above photo shows three 2U systems (bottom), which are all connected to a single 2U flash memory chassis (top). The top chassis supports three submodules, each with eight SFF-8639 SSDs. The system, assembled by Newisys, demonstrates just how much high speed flash you can fit within an 8U space. The main reason for connecting three systems to one flash chassis is that it takes those three systems to process the full IOPS capability of 24 low latency NVMe SSDs (that's 96 total lanes of PCIe 3.0!)
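The back-of-the-envelope math holds up, assuming each of the 24 drives performs like the single P3700 from the earlier networked demo (an assumption on my part, since per-drive figures weren't broken out):

```python
drives = 24
lanes_per_drive = 4        # each SFF-8639 SSD is PCIe 3.0 x4
per_drive_iops = 450_000   # single P3700 figure from the earlier demo
hosts = 3

total_lanes = drives * lanes_per_drive  # the 96 lanes of PCIe 3.0
total_iops = drives * per_drive_iops    # in line with the ~11M shown
iops_per_host = total_iops // hosts     # what each 2U system must absorb
```

Each host system is left digesting on the order of 3.6 million IOPS, which explains why one server simply isn't enough.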
So there you have it, IDF storage tech in a nutshell. More to come as we follow these emerging technologies to their maturity.
Subject: General Tech, Cases and Cooling, Systems, Shows and Expos | September 12, 2014 - 02:20 PM | Scott Michaud
Tagged: idf, idf 2014, nuc, Intel, SFF, small form factor
A few years ago, Intel introduced the NUC line of small form factor PCs. At this year's IDF, they have announced plans for even smaller, and cheaper, specifications that are intended for OEMs to install Windows, Linux, Android, and Chrome OS on. This initiative is not yet named, but will consist of mostly soldered components, leaving basically just the wireless adapters user-replaceable, unlike the more user-serviceable NUC.
Image Credit: Liliputing
Being the owner of Moore's Law, they just couldn't help but fit it to some type of exponential curve. While it is with respect to generation, not time, Intel expects the new, currently unnamed form factor to halve both the volume (size) and bill of material (BOM) cost of the NUC. They then said that another generation after ("Future SFF") will halve the BOM cost again, to a quarter of the NUC.
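The claimed cost curve is just a geometric halving per generation. A trivial sketch (the function name is mine, not Intel's, and "generation" here means form-factor generation, not time):

```python
def bom_fraction(generations):
    """BOM cost relative to today's NUC, halving each generation."""
    return 0.5 ** generations

# generation 0 = NUC, 1 = the unnamed new form factor, 2 = "Future SFF"
costs = [bom_fraction(g) for g in range(3)]  # 1.0, 0.5, 0.25
```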
What do our readers think? Would you be willing to give up socketed components for smaller and cheaper devices in this category or does this just become indistinguishable from mobile devices (which we already know can be cheap and packed into small spaces)?
ECS hosted a press event in the third week of August to unveil its new product lineup and corporate direction. The press event, named "Live, Liva, Lead, L337", lays out the important aspects of the "new ECS" and its intended market direction. They introduced the LIVA mini computer with integrated 32GB and 64GB SSDs, their Z97-based product lineup, and the North America LIVA design contest.
Their naming of the event was apropos to their renewed corporate vision with the first two terms, Live and LIVA, referencing their LIVA mini-PC platform. ECS developed the name LIVA by combining the words Live and Viva (Life in Spanish), signifying the LIVA line's aim at integrating itself into your daily routine and providing the ability to live a better life. Lead signifies ECS' desire to become a market leader in the Mini-PC space with their LIVA platform as well as become a more dominant player in the PC space. The last term, L337, is a reference to their L337 Gaming line of motherboards, a clear reminder of their Z97 offerings to be unveiled.
ECS seeks to consolidate its product lines, re-focusing its energy on what it excels at - offering quality products at reasonable prices. ECS seeks to leverage its corporate partnerships and design experience to build products equivalent to competitor lines at a much reduced cost to the end user. This renewed focus on quality and the end user led to a much revised Z97 board lineup in comparison to its Z87-based offerings. Additionally, their newly introduced mini-PC line, branded LIVA, seeks to offer a cheaper all-in-one alternative to the Intel NUC and GIGABYTE BRIX systems.
Subject: Storage, Shows and Expos | September 10, 2014 - 03:34 PM | Allyn Malventano
Tagged: TSV, Through Silicon Via, memory, idf 2014, idf
If you're a general computer user, you might have never heard the term "Through Silicon Via". If you geek out on photos of chip dies and wafers, and how chips are assembled and packaged, you might have heard about it. Regardless of your current knowledge of TSV, it's about to be a thing that impacts all of you in the near future.
Let's go into a bit of background first. We're going to talk about how chips are packaged. Micron has an excellent video on the process here:
The part we are going to focus on appears at 1:31 in the above video:
This is how chip dies are currently connected to the outside world. The dies are stacked (four high in the above pic) and a machine has to individually wire them to a substrate, which in turn communicates with the rest of the system. As you might imagine, things get more complex with this process as you stack more and more dies on top of each other:
16 layer die stack, pic courtesy NovaChips
...so we have these microchips with extremely small features, but to connect them we are limited to a relatively bulky process (called package-on-package). Stacking these flat planes of storage is a tricky thing to do, and one would naturally want to limit how many of those wires you need to connect. The catch is that those wires also equate to available throughput from the device (i.e. one wire per bit of a data bus). So, just how can we improve this method and increase data bus widths, throughput, etc?
Before I answer that, let me lead up to it by showing how flash memory has just taken a leap in performance. Samsung has recently made the jump to VNAND:
By stacking flash memory cells vertically within a die, Samsung was able to make many advances in flash memory, simply because they had more room within each die. Because of the complexity of the process, they also had to revert to an older (larger) feature size. That compromise meant that the capacity of each die is similar to current 2D NAND tech, but the bonus is the speed, longevity, and power reduction advantages of this new process.
I showed you the VNAND example because it bears a striking resemblance to what is now happening in the area of die stacking and packaging. Imagine if you could stack dies by punching holes straight through them and making the connections directly through the bottom of each die. As it turns out, that's actually a thing:
Core M 5Y70 Early Testing
During a press session today with Intel, I was able to get some early performance results on Broadwell-Y in the form of the upcoming Core M 5Y70 processor.
Testing was done on a reference design platform code named Llama Mountain, and at the heart of the system is the Broadwell-Y based dual-core CPU, the Core M 5Y70, which is due out later this year. Power consumption of this system is low enough that Intel has built it with a fanless design. As we posted last week, this processor has a base frequency of just 1.10 GHz but it can boost as high as 2.6 GHz for extra performance when it's needed.
Before we dive into the actual results, you should keep in mind a couple of things. First, we didn't have a chance to analyze the systems to check driver revisions, etc., so we are going on Intel's word that these are set up as you would expect to see them in the real world. Next, because of the disjointed nature of the tests we were able to run, the comparisons in our graphs aren't as complete as I would like. Still, the results for the Core M 5Y70 are here should you want to compare them to any other scores you like.
First, let's take a look at old faithful: CineBench 11.5.
UPDATE: A previous version of this graph showed the TDP for the Intel Core M 5Y70 as 15 watts, not the 4.5 watt listed here now. The reasons are complicated. Even though the Intel Ark website lists the TDP of the Core M 5Y70, Intel has publicly stated the processor will make very short "spikes" at 15 watts when in its highest Turbo Boost modes. It comes to a discussion of semantics really. The cooling capability of the tablet is only targeted to 4.5-6.0 watts and those very short 15 watt spikes can be dissipated without the need for extra heatsink surface...because they are so short. SDP anyone? END UPDATE
With a score of 2.77, the Core M 5Y70 processor puts up an impressive fight against CPUs with much higher TDP settings. For example, Intel's own Pentium G3258 gets a score of 2.71 in CB11, and did so with a considerably higher thermal envelope. The Core i3-4330 scores 38% higher than the Core M 5Y70 but it requires a TDP 3.6-times larger to do so. Both of AMD's APUs in the 45 watt envelope fail to keep up with Core M.
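Taking the two quoted ratios at face value (38% more performance for 3.6 times the thermal envelope), the perf-per-watt math works out roughly like this; the variable names are mine, for illustration:

```python
score_ratio = 1.38  # the Core i3-4330 scores 38% higher than the Core M
tdp_ratio = 3.6     # but carries a TDP 3.6-times larger

# If the i3 needs 3.6x the power for 1.38x the score, the Core M's
# efficiency advantage is the ratio of those two ratios:
perf_per_watt_advantage = tdp_ratio / score_ratio  # roughly 2.6x
```

In other words, by these numbers the Core M 5Y70 delivers around 2.6 times the Cinebench performance per watt of the Core i3-4330.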
Subject: Shows and Expos | September 9, 2014 - 05:27 PM | Ryan Shrout
Tagged: Skylake, Intel, idf 2014, idf, 14nm
2015 is shaping up to be an interesting year for Intel's consumer processor product lines. We are still expected to see Broadwell make some kind of debut in a socketed form in addition to the mobile releases trickling out beginning this holiday, but it looks like we will also get our first taste of Skylake late next year.
Skylake is Intel's next microarchitecture and will be built on the same 14nm process technology currently shipping with Broadwell-Y. Intel stated that it expects to see dramatic improvements in all areas of measurement including performance, power consumption and silicon efficiency.
On stage the company demoed Skylake running the 3DMark Fire Strike benchmark though without providing any kind of performance result (obviously). That graphics demo was running on an engineering development board and platform and though it looked incredibly good from where we were sitting, we can't make any guess as to the performance quite yet.
Intel then surprised us by bringing a notebook out from behind the monitor showing Skylake up and running in a mobile form factor decoding and playing back 4K video. Once again, the demo was smooth and impressive, though you would expect no less from an overly rehearsed keynote.
Intel concluded that it was "excited about the health of Skylake" and that they should be in mass production in the first quarter of 2015 with samples going out to customers. Looking even further down the rabbit hole the company believes they have a "great line of sight to 10nm and beyond."
Even though details were sparse, it is good news for Intel that they would be willing to show Skylake so early, and yet I can't help but worry about a potentially shorter-than-expected life span for Broadwell in the desktop space. Mobile users will find the increased emphasis on power efficiency a big win for thin and light notebooks, but enthusiasts are still on the lookout for a new product to really drive performance up in the mainstream.
Subject: Storage, Shows and Expos | September 9, 2014 - 04:51 PM | Allyn Malventano
Tagged: WDC, Western Digital, WD, idf 2014, idf, hdd, Cold, Archival, Ae
We talked about helium-filled, shingled HDDs from HGST earlier today. Helium may give you reduced power demands, but at the added expense of hermetically sealed enclosures over conventional HDDs. Shingling may give added capacity, but at the expense of being forced into specific writing methods. Now we know Western Digital's angle into archival / cold storage:
...so instead of going with higher-cost newer technologies, WD is taking their consumer products and making them more robust. They are also getting rid of the conventional thinking on capacity increments and are moving to 100GB increments. The idea is that once a large company or distributor has qualified a specific HDD model on their hardware, that model will stick around for a while, but be continued at an increased capacity as platter density yields increase over time. WD has also told me that capacities may even be mixed and matched within a 20-box of drives, so long as the average capacity matches the box label. This works in the field of archival / cold storage for a few reasons:
- Archival storage systems generally do not use conventional RAID (where an entire array of matching capacity disks are spinning simultaneously). Drives are spun up and written to individually, or spun up individually to service the occasional read request. This saves power overall, and it also means the individual drives can vary in capacity with no ill effects.
- Allowing for variable capacity binning helps WD ship more usable platters/drives overall (i.e. not rejecting drives that can't meet 6TB). This should drive overall costs down.
- Increasing capacity by only a few hundred GB per drive turns into *huge* differences in cost when you scale that difference up to the number of drives you would need to handle a very large total capacity (i.e. Exabytes).
So the idea here is that WD is choosing to stick with what they do best, potentially at an even lower cost than their consumer products. That said, this is really meant for enterprise use and not as a way for a home power user to save a few bucks on a half-dozen drives for their home NAS. You really need an infrastructure in place that can handle variable capacity drives seamlessly. While these drives do not employ SMR to get greater capacity, that may work out as a bonus, as writes can be performed in a way that all systems are currently compatible with (even though I suspect they will be tuned more for sequential write workloads).
Here's an illustration of this difference:
The 'old' method meant that drives on the left half of the above bell curve would have to be sold as 5TB units.
With the 'new' method, drives can be sold based on a spec closer to their actual capacity yield. For a given model, shipping capacities would increase as time goes on (top to bottom of the above graphic).
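A minimal sketch of the two binning schemes, using hypothetical 5TB/6TB fixed bins and the 100GB increments described above (the function name and exact bin values are illustrative, not WD's):

```python
def label_capacity_gb(actual_gb, method="new"):
    """Shipping label for a drive whose platters yield actual_gb.

    'old': fall back to the lower of two fixed bins (5TB / 6TB).
    'new': round down in 100GB increments.
    """
    if method == "new":
        return (actual_gb // 100) * 100
    return 6000 if actual_gb >= 6000 else 5000
```

A drive yielding 5970GB of raw capacity ships as 5900GB under the new scheme instead of being rounded all the way down to 5000GB; scaled to the exabyte-class deployments mentioned above, that recovered ~900GB per drive adds up quickly.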
To further clarify what is meant by the term 'cold storage' - the data itself is cold, as in rarely if ever accessed:
Examples of this would be Facebook posts / images from months or years ago. That data may be rarely touched, but it needs to be accessible enough to be browsed to via the internet. The few-second archival HDD spinup can handle this sort of thing, while a tape system would take far too long and would likely time out that data request.