Intel insists their clock is still running

Subject: General Tech | February 19, 2016 - 01:34 PM |
Tagged: Intel, delay, 10nm

Today Intel insisted that the rumours of a further delay in their scheduled move to a 10nm process are greatly exaggerated.  They had originally hoped to make this move in the latter half of this year, but difficulties in the design process moved that target into 2017.  They have assured The Inquirer and others that the speculation, based on information in a job vacancy posting, is inaccurate and that they still plan on releasing processors built on a 10nm node by the end of next year.  You can still expect Kaby Lake before the end of the year, and Intel also claims to have found promising techniques to shrink their processors below 10nm in the future.

intel_10nm_panel2-Copy.png

"INTEL HAS moved to quash speculation that its first 10nm chips could be pushed back even further than the second half of 2017, after already delaying them from this year."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer

What Micron's Upcoming 3D NAND Means for SSD Capacity, Performance, and Cost

Subject: Storage | February 14, 2016 - 02:51 PM |
Tagged: vnand, ssd, Samsung, nand, micron, Intel, imft, 768Gb, 512GB, 3d nand, 384Gb, 32 Layer, 256GB

You may have seen a wave of Micron 3D NAND news posts these past few days, and while many are repeating the 11-month-old news with talk of 10TB / 3.5TB 2.5" / M.2 form factor SSDs, I'm here to dive into what the upcoming (and future) generations of Intel / Micron flash will mean for SSD performance and pricing.

progression-3-.png

Remember that with the way these capacity increases are going, the only way to get a high-performance, high-capacity SSD on the cheap in the future will be to actually buy those higher capacity models. With such a large per-die capacity, smaller SSDs (like 128GB / 256GB) will suffer significantly slower write speeds. Taking this upcoming Micron flash as an example, a 128GB SSD would contain only four flash memory dies, and as I wrote back in 2014, such an SSD would likely see HDD-level sequential write speeds of 160MB/sec. Other SSD manufacturers already recognize this issue and are taking steps to correct it. At Storage Visions 2016, Samsung briefed me on the upcoming 750 EVO Series, which will use planar 16nm NAND to produce 120GB and 250GB capacities. The smaller die capacities of those models will maintain respectable write performance and will also let Samsung discontinue the 120GB 850 EVO as that line transitions to higher capacity 48-layer VNAND. Getting back to this Micron announcement, we have some new info that bears analysis, and it pertains to the newly announced page and block sizes:

  • 256Gb MLC: 16KB Page / 16MB Block / 1024 Pages per Block

  • 384Gb TLC: 16KB Page / 24MB Block / 1536 Pages per Block

To understand what these numbers mean, using the MLC line above, imagine a 16MB CD-RW (Block) that can write 1024 individual 16KB 'sessions' (Pages). Each 16KB Page can be added individually over time, and just like how files on a CD-RW could be modified by writing a new copy in the remaining space, flash can do so by writing a new Page and ignoring the out-of-date copy. Where the rub comes in is when that CD-RW (Block) is completely full. The process at this point is actually very similar, in that the Block must be completely emptied before the erase command (which wipes the entire Block) is issued. The data has to go somewhere, which typically means writing to empty Blocks elsewhere on the SSD (and in worst case scenarios, those too may need clearing before that is possible), and this moving and erasing takes time for the die to accomplish. Just like how wiping a CD-RW took much longer than writing a single file to it, erasing a Block typically takes 3-4x as long as programming a Page.
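For the programmers in the audience, here is that same analogy as a toy model, using the 16KB Page / 1024 Pages-per-Block figures from the MLC line above. It is a sketch for illustration only, not how any particular controller is actually implemented:

```python
# Toy model of a single flash Block: 1024 appendable 16KB Pages.
# Updating data means writing a fresh copy and leaving the stale
# one behind; space only comes back by erasing the whole Block.
PAGE_KB, PAGES_PER_BLOCK = 16, 1024

class Block:
    def __init__(self):
        self.pages = []     # append-only, like sessions on a CD-RW
        self.live = {}      # logical address -> index of the current copy

    def write(self, addr, data):
        if len(self.pages) == PAGES_PER_BLOCK:
            raise RuntimeError("Block full: relocate live data, then erase")
        self.live[addr] = len(self.pages)   # the newest copy wins
        self.pages.append(data)             # stale copies linger until erase

    def erase(self):
        # The expensive step: all 16MB goes at once, so any live data
        # must have been copied to another Block beforehand.
        survivors = {a: self.pages[i] for a, i in self.live.items()}
        self.pages, self.live = [], {}
        return survivors    # the caller rewrites these elsewhere

blk = Block()
blk.write(0, b"v1")
blk.write(0, b"v2")   # an update burns a second Page; v1 is now stale
print(f"{len(blk.pages)} Pages used for {len(blk.live)} live address(es)")
```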

With that explained, of significance here are the growing page and block sizes in this higher capacity flash. Modern OS file systems have a minimum bulk access size of 4KB, and Windows versions since Vista align their partitions by rounding up to the next 1MB increment from the start of the disk. These changes are what enabled HDDs to transition to Advanced Format, which made data storage more efficient by raising the increment from the 512 byte sector to 4KB. While most storage devices still use 512B addressing, it is assumed that 4KB should be the minimum random access seen most of the time. Wrapping this all together, the Page size (minimum read or write) is 16KB for this new flash, and that is 4x the accepted 4KB minimum OS transfer size. This means that power users heavy on their page file, running VMs, or performing any other random-write-heavy operations over time will see an amplified wear effect on this flash. That additional shuffling of data that must take place for each 4KB write translates to lower host random write speeds when compared to lower capacity flash that has smaller Page sizes closer to that 4KB figure.
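To put a rough number on that mismatch, here is a simplified worst-case sketch. Real controllers buffer and coalesce small writes, so treat this as the upper bound on per-write overhead rather than a measured figure:

```python
import math

# 4KB host writes landing on 16KB Pages: each isolated write still
# costs a whole Page program in this simplified worst-case view.
PAGE_KB = 16
host_write_kb = 4                             # minimum OS transfer size
pages = math.ceil(host_write_kb / PAGE_KB)    # whole Pages consumed per write
flash_kb = pages * PAGE_KB
print(f"{host_write_kb}KB host write -> {flash_kb}KB programmed "
      f"({flash_kb // host_write_kb}x amplification before GC)")
```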

schiltron-IMFT-edit.jpg

A rendition of 3D IMFT Floating Gate flash, with inset pulling back some of the tunnel oxide layer to show the location of the floating gate. Pic courtesy Schiltron.

Fortunately for Micron, their choice to carry Floating Gate technology into their 3D flash has netted them some impressive endurance benefits over competing Charge Trap Flash. One such benefit is a claimed 30,000 P/E (Program / Erase) cycle endurance rating. Planar NAND had dropped to the 3,000 range at its smallest shrinks, mainly because the shrinking channel could store so few electrons that the (negative) effects of electron leakage were amplified. Even back in the 50nm days, MLC ran at ~10,000 cycle endurance, so 30,000 is no small feat here. The key is that applying the same Floating Gate tech that was so good at controlling leakage in planar NAND to a new 3D channel that can store far more electrons enables excellent endurance, which may actually exceed that of Samsung's Charge Trap Flash-equipped 3D VNAND. This should effectively negate the endurance hit from the larger Page sizes discussed above, but the potential small random write performance hit still stands, with a possible remedy being to crank up the over-provisioning of SSDs (AKA throwing flash at the problem). Higher OP means fewer active Pages per Block and a reduction in the data shuffling forced by smaller writes.
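The over-provisioning point is easy to demonstrate with a small greedy garbage-collection simulation. This is a toy model with made-up geometry, not Micron's actual flash translation layer:

```python
import random

def simulate_wa(blocks=32, pages=128, op=0.07, host_writes=50_000):
    # Toy flash translation layer: uniform random single-page host
    # writes, greedy garbage collection (always reclaim the block
    # with the fewest live pages). op = over-provisioned fraction.
    logical = int(blocks * pages * (1 - op))   # pages the host can address
    where = {}                                 # logical page -> physical block
    live = [0] * blocks                        # live-page count per block
    used = [0] * blocks                        # programmed pages per block
    free = list(range(1, blocks))
    cur, programmed = 0, 0

    def program(lp):
        nonlocal cur, programmed
        if used[cur] == pages:                 # open block filled up
            cur = free.pop()
        if lp in where:
            live[where[lp]] -= 1               # the old copy goes stale
        where[lp] = cur
        used[cur] += 1
        live[cur] += 1
        programmed += 1

    for _ in range(host_writes):
        if len(free) <= 1:                     # reclaim before space runs out
            victim = min((b for b in range(blocks)
                          if used[b] == pages and b != cur),
                         key=live.__getitem__)
            for lp in [l for l, b in where.items() if b == victim]:
                program(lp)                    # relocate live data first...
            used[victim] = 0                   # ...then erase the whole block
            free.append(victim)
        program(random.randrange(logical))
    return programmed / host_writes            # write amplification factor

for op in (0.07, 0.28):
    print(f"{op:.0%} over-provisioning -> write amplification ~"
          f"{simulate_wa(op=op):.2f}")
```

The exact figures vary run to run, but the trend holds: more spare area means each reclaimed Block holds fewer live Pages, so less data gets shuffled per host write.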

25nm+penny.jpg

A 25nm flash memory die. Note the support logic (CMOS) along the upper left edge.

One final thing helping out Micron here is that their Floating Gate design also enables a shift of 75% of the CMOS circuitry to a layer *underneath* the flash storage array. This logic is typically part of what you see 'off to the side' of a flash memory die. Layering CMOS logic in such a way is likely thanks to Intel's partnership and CPU development knowledge. Moving this support circuitry to the bottom layer of the die makes for less area per die dedicated to non-storage, more dies per wafer, and ultimately lower cost per chip/GB.
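As a back-of-the-envelope illustration of why that matters: the 75% moved-underneath figure is from the announcement, but the CMOS share of die area below is purely my own assumption for the sake of the arithmetic.

```python
# Hypothetical die: the CMOS area share is an assumption for
# illustration; only the 75% relocated figure comes from Micron.
cmos_share = 0.10        # assumed share of die area spent on support CMOS
moved_under = 0.75       # share of that CMOS relocated below the array
shrink = cmos_share * moved_under      # die area reclaimed: 7.5%
extra_dies = 1 / (1 - shrink) - 1      # ~8.1% more dies per wafer
print(f"~{extra_dies:.1%} more dies per wafer, all else being equal")
```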

progression slide.png

Samsung's Charge Trap Flash, shown in both planar and 3D VNAND forms.

One final thing before we go. If we know anything about how the Intel / Micron duo functions, it is that once they get that freight train rolling, it leads to relatively rapid advances. In this case, the changeover to 3D has taken them a while to perfect, but once production gains steam, we can expect to see some *big* advances. Since Samsung launched their 3D VNAND, their gains have been mostly iterative in nature (24, 32, and most recently 48 layers). I'm not yet at liberty to say how the second generation of IMFT 3D NAND will achieve it, but I can say that it appears the next iteration after this 32-layer 256Gb (MLC) / 384Gb (TLC) per die will *double* to 512Gb/768Gb (you are free to do the math on what that means for layer count). Remember back in the day when Intel launched new SSDs at a fraction of the cost/GB of the previous generation? That might just be happening again within the next year or two.
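Doing that math under the simplest assumption, that per-layer density stays put (it may not; this is a reader's guess, not a confirmed spec):

```python
# If capacity per die doubles and the cells/layers themselves don't
# change, the layer count has to double too -- one possible reading.
gen1_layers, gen1_mlc_gbit = 32, 256
gen2_mlc_gbit = 512
implied_layers = gen1_layers * gen2_mlc_gbit // gen1_mlc_gbit
print(f"implied layer count: {implied_layers}")   # 64, if density is unchanged
```

Density improvements per layer could lower that number, of course; the point is that capacity per die is set to double either way.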

Fancy new Intel powered routers from Wind River

Subject: General Tech | February 12, 2016 - 12:28 PM |
Tagged: Intel, wind river, telecoms

The next dream of telecoms providers is network function virtualization, the ability to virtualize customers' hardware instead of shipping them a device.  The example given to The Register was DVRs: instead of shipping a cable box with recording capability to the customer, the DVR would be virtualized on the telco's internal infrastructure.  You could sign up for a DVR VM, point your smart TV at the appropriate IP address, and plug in a USB disk for local storage.

The problem has been the hardware available to the telco; the routers simply did not have the power to provide a consistent internet or cable connection, let alone host virtual devices on top.  At the upcoming MWC, Wind River will be showing off Titanium Servers for virtualizing customer premises equipment, with enough processing power and VM optimizations that these types of services could be supported.

banner_logo_windriver.png

"Intel is starting to deliver on its vision of x86-powered modem/routers in the home , as its Wind River subsidiary releases a server dedicated to delivery of functions to virtual customer premises equipment (CPE)."

Here is some more Tech News from around the web:

Tech Talk


Source: The Register

Extreme Overclocking of Skylake (7.02566 GHz)

Subject: Processors | February 6, 2016 - 09:00 PM |
Tagged: Skylake, overclocking, asrock, Intel, gskill

I recently came across a post at PC Gamer that looked at the extreme overclocking leaderboard for the Skylake-based Intel Core i7-6700K. These competitions will probably never end as long as higher numbers are possible on parts that are interesting for one reason or another. Skylake is the new chip on the liquid nitrogen block. It cannot reach frequencies as high as its predecessors, but teams still compete to get as high as possible on that specific SKU.

overclock-2016-skylake6700k7ghz.jpg

The current single-thread world record frequency for the Intel Core i7-6700K is 7.02566 GHz, achieved with a voltage of 4.032V. For comparison, the i7-6700K typically runs at around 1.3V under load. This record was apparently set about a month ago, on January 11th.

That is obviously a huge increase: roughly three times the voltage for an extra 3 GHz. For comparison, the current world record across all known CPUs belongs to the AMD FX-8370 at 8.72278 GHz. Many Pentium 4-era processors fill out the top 15 places too, as those parts were designed for high clock rates with relatively low IPC.

The rest of the system used G.SKILL Ripjaws 4 DDR4 RAM, an ASRock Z170M OC Formula motherboard, and an Antec 1300W power supply. It used an NVIDIA GeForce GT 630 GPU, which offloaded graphics from the integrated chip but otherwise interfered as little as possible. They also used Windows XP, because why not, I guess? I assume that it does the least amount of work to boot, allowing a quicker verification, but that is only a guess.

Source: HWBot

ASRock Releases BIOS to Disable Non-K Skylake Overclocking

Subject: Processors | February 5, 2016 - 11:44 AM |
Tagged: Intel, Skylake, overclocking, cpu, Non-K, BCLK, bios, SKY OC, asrock, Z170

ASRock's latest batch of motherboard BIOS updates removes the SKY OC function, which permitted overclocking of non-K Intel processors via BCLK (base clock).

20151215-8.jpg

The news comes amid speculation that Intel had pressured motherboard vendors to remove such functionality. Intel's unlocked K parts (i5-6600K, i7-6700K) will once again be the only options for Skylake overclocking on Z170 on ASRock boards (assuming prior BIOS versions are no longer available), and with no Pentium G3258 equivalent this generation, Intel no longer offers a budget-friendly option for enthusiasts looking to push their CPU past factory specs.
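The arithmetic behind BCLK overclocking is simple enough to sketch; the multiplier and clocks below are hypothetical, just to show the mechanism:

```python
# A CPU's core clock is the base clock (BCLK) times its multiplier.
# K parts unlock the multiplier; SKY OC instead raised the normally
# fixed ~100 MHz BCLK on multiplier-locked chips.
def core_clock_mhz(bclk_mhz: float, multiplier: int) -> float:
    return bclk_mhz * multiplier

stock = core_clock_mhz(100.0, 27)   # hypothetical locked 2.7 GHz part
oced = core_clock_mhz(127.0, 27)    # same multiplier, raised BCLK
print(f"{stock:.0f} MHz stock -> {oced:.0f} MHz, "
      f"a {oced / stock - 1:.0%} overclock with no K SKU required")
```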

3386ebeb-34f8-4a83-9909-0e29985f4712.jpg

(Image credit: Hexus.net)

It sounds like now would be a good time to archive that SKY OC enabled BIOS update file if you've downloaded it - or simply refrain from this BIOS update. What remains to be seen, of course, is whether other vendors will follow suit and disable BCLK overclocking of non-K processors. This had become a popular feature on a number of Z170 motherboards on the market, but ASRock may have been in too weak a position to battle Intel on this issue.

Source: Hexus

Podcast #384 - Corsair Carbide 600Q, GDDR5X, a Dual Fiji Graphics card and more!

Subject: General Tech | January 28, 2016 - 01:38 PM |
Tagged: podcast, video, corsair, carbide, 600q, 600c, gddr5x, jedec, amd, Fiji, fury x, fury x2, scythe, Ninja 4, logitech, g502 spectrum, Intel, Tigerlake, nzxt, Manta

PC Perspective Podcast #384 - 01/28/2016

Join us this week as we discuss the Corsair Carbide 600Q, GDDR5X, a Dual Fiji Graphics card and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Report: Intel Tigerlake Revealed; Company's Third 10nm CPU

Subject: Processors | January 24, 2016 - 12:19 PM |
Tagged: Tigerlake, rumor, report, processor, process node, Intel, Icelake, cpu, Cannonlake, 10 nm

A report from financial website The Motley Fool discusses Intel's plan to introduce three architectures at the 10 nm node, rather than the expected two. This comes after news that Kaby Lake will remain at the present 14 nm, interrupting Intel's 2-year manufacturing tech pace.

intel_10nm.jpg

(Image credit: wccftech)

"Management has told investors that they are pushing to try to get back to a two-year cadence post-10-nanometer (presumably they mean a two-year transition from 10-nanometer to 7-nanometer), however, from what I have just learned from a source familiar with Intel's plans, the company is working on three, not two, architectures for the 10-nanometer node."

Intel's first 10 nm processor architecture will be known as Cannonlake, with Icelake expected to follow about a year afterward. With Tigerlake expected to be the third architecture built on 10 nm, and not coming until "the second half of 2019", we probably won't see 7 nm from Intel until the second half of 2020 at the earliest.

It appears that the days of two-year, two-product process node changes are numbered for Intel, as the report continues:

"If all goes well for the company, then 7-nanometer could be a two-product node, implying a transition to the 5-nanometer technology node by the second half of 2022. However, the source that I spoke to expressed significant doubts that Intel will be able to return to a two-years-per-technology cycle."

intel-node-density_large.png

(Image credit: The Motley Fool)

It will be interesting to see how players like TSMC, themselves "planning to start mass production of 7-nanometer in the first half of 2018", will fare moving forward as Intel's process development (apparently) slows.

Know anyone who uses the Intel Driver Update Utility? Update the updater ASAP

Subject: General Tech | January 21, 2016 - 12:52 PM |
Tagged: Intel, intel driver update utility, security

The Intel Driver Update Utility is not the most commonly found application on PCs, but someone you know may have stumbled upon it or had it installed by Geek Squad or the local equivalent.  The tool has been available since Windows Vista; it checks your system for any Intel parts, from your CPU to your NIC, and then looks for any applicable drivers that are available.  Unfortunately it was doing so over a non-SSL URL, which left the utility wide open to a man-in-the-middle attack, and you really do not want a compromised NIC driver.  The Inquirer reports today that Intel quietly updated the tool on January 19th to resolve the issue, ensuring all communication and downloads happen over SSL.  If you know anyone using this tool, recommend they update it immediately.
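For anyone writing a similar update tool, the fix boils down to fetching over HTTPS with certificate verification, ideally plus an integrity check. A minimal sketch follows; the URL and hash are placeholders, not Intel's actual endpoints:

```python
import hashlib
import requests  # third-party: pip install requests

# Placeholder endpoint and digest -- not Intel's real infrastructure.
DRIVER_URL = "https://example.com/drivers/nic-driver.exe"
EXPECTED_SHA256 = "..."  # would be published alongside the driver

def fetch_driver(url: str, expected_sha256: str) -> bytes:
    # requests verifies the server's TLS certificate by default
    # (verify=True), which is exactly what a plain http:// fetch
    # lacks and what lets a man-in-the-middle swap the payload.
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    if hashlib.sha256(resp.content).hexdigest() != expected_sha256:
        raise ValueError("driver package failed its integrity check")
    return resp.content
```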

intel-driver-update.jpg

"Intel has issued a fix for a major security vulnerability in a driver utility tool that could have allowed a man-in-the-middle attack and a malware maelstrom on victims' computers."

Here is some more Tech News from around the web:

Tech Talk


Source: The Inquirer

GDC 2016 Sessions Are Up and DirectX 12 / Vulkan Are There

Subject: General Tech | January 20, 2016 - 07:06 PM |
Tagged: vulkan, ue4, nvidia, Intel, gdc 2016, GDC, epic games, DirectX 12, Codemasters, arm, amd

The 30th Game Developers Conference (GDC) will take place on March 14th through March 18th, with the expo itself starting on March 16th. The session list has now been published, with DX12 and Vulkan prominently featured. While the technologies have not been adopted as quickly as advertised, the direction is definitely forward. In fact, NVIDIA, Khronos Group, and Valve have just finished hosting a developer day for Vulkan. It is coming.

gdc-2016-logo.png

One interesting session will be hosted by Codemasters and Intel, discussing how the F1 2015 engine was brought to DirectX 12. It will highlight a few features they implemented, such as voxel-based raytracing using conservative rasterization, which overestimates the size of individual triangles so you don't get edge effects on pixels that are partially influenced by an edge that cuts through a tiny, but not negligible, portion of them. Sites like Game Debate (Update: Whoops, forgot the link) wonder if these features will be patched in to older titles, like F1 2015, or if they're just R&D for future games.
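To make the conservative rasterization idea concrete, here is a minimal software sketch: each triangle edge equation is pushed outward by the half-pixel extent projected onto its normal, so any pixel the triangle so much as touches passes the test. This mirrors the classic GPU technique in spirit; it is not Codemasters' implementation:

```python
# Compare standard vs conservative coverage of one triangle on a
# small pixel grid of unit-sized pixels.
def edge(p, q):
    # Edge function E(x, y) = a*x + b*y + c; positive on the
    # interior side for a counter-clockwise triangle.
    a, b = p[1] - q[1], q[0] - p[0]
    return a, b, -(a * p[0] + b * p[1])

def rasterize(tri, size, conservative):
    edges = [edge(tri[i], tri[(i + 1) % 3]) for i in range(3)]
    covered = set()
    for py in range(size):
        for px in range(size):
            x, y = px + 0.5, py + 0.5       # pixel center
            inside = True
            for a, b, c in edges:
                val = a * x + b * y + c
                if conservative:
                    # Offset by the half-pixel extent projected onto
                    # the edge normal: any touched pixel now passes.
                    val += 0.5 * (abs(a) + abs(b))
                if val < 0:
                    inside = False
                    break
            if inside:
                covered.add((px, py))
    return covered

tri = [(1.2, 1.1), (6.8, 2.3), (3.1, 6.7)]  # counter-clockwise winding
standard = rasterize(tri, 8, conservative=False)
conserv = rasterize(tri, 8, conservative=True)
print(len(standard), "pixels standard,", len(conserv), "pixels conservative")
```

The conservative pass always covers a superset of the standard pass, which is the point for voxelization: thin geometry can't slip between pixel centers.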

Another keynote will discuss bringing Vulkan to mobile through Unreal Engine 4. This one will be hosted by ARM and Epic Games. Mobile processors have quite a few cores, albeit ones that are slower at single-threaded tasks, and decent GPUs. Being able to keep them loaded will bring their gaming potential up closer to the GPU's theoretical performance, which has surpassed both the Xbox 360 and PlayStation 3, sometimes by a factor of 2 or more.

Many (most?) slide decks and video recordings are available for free after the fact, but we can't really know which ones ahead of time. It should be an interesting year, though.

Source: GDC

Skylake and Later Will Be Withheld Windows 7 / 8.x Support

Subject: Processors | January 17, 2016 - 02:20 AM |
Tagged: Windows 8.1, Windows 7, windows 10, Skylake, microsoft, kaby lake, Intel, Bristol Ridge, amd

Microsoft has not been doing much to put out the fires in comment threads all over the internet. The latest flare-up involves hardware support on Windows 7 and 8.x. Currently unreleased architectures, such as Intel's Kaby Lake and AMD's Bristol Ridge, will only be supported on Windows 10. This is despite Windows 7 and Windows 8.x being supported until 2020 and 2023, respectively. Microsoft does not believe that it needs to support new hardware on those older operating systems, though.

windows-10-bandaid.png

This brings us to Skylake. These processors are out, but Microsoft considers them “transition” parts. Microsoft provided PC World with a list of devices that will be given Windows 7 and Windows 8.x drivers, which enable support until July 17, 2017. Beyond that date, only a handful of “most critical” updates will be provided until the official end of life.

I am not sure what the cut-off date for unsupported Skylake processors is, though; that is, Skylake processors that do not line up with Microsoft's list could be deprecated at any time. This is especially a problem for the ones that have potentially already been sold.

As I hinted earlier, this will probably reinforce the opinion that Microsoft is doing something malicious with Windows 10. As Peter Bright of Ars Technica reports, Windows 10 does not exactly have an equivalent in the server space yet, which makes you wonder what that support cycle will be like. If they can continue to patch Skylake-based servers in Windows Server builds that are derived from Windows 7 and Windows 8.x, like Windows Server 2012 R2, then why are they unwilling to port those changes to the base operating system? If they will not patch current versions of Windows Server, because the Windows 10-derived version still isn't out yet, then what will happen with server farms, like Amazon Web Services, when Xeon v5s are suddenly incompatible with most Windows-based OS images? While this will, no doubt, be taken way out of context, there is room for legitimate commentary about this whole situation.

Of course, supporting new hardware on older operating systems can be difficult, and not just for Microsoft at that. Peter Bright also noted that Intel has similarly spotty driver coverage, although that mostly applies to Windows Vista, which, while still in extended support for another year, doesn't have a significant base of users who are unwilling to switch. The point remains, though, that Microsoft could be doing a favor for their hardware vendor partners.

I'm not sure whether that would be less concerning, or more.

Whatever the reason, this seems like a very silly, stupid move on Microsoft's part, given the current landscape. Windows 10 can become a great operating system, but users need to decide that for themselves. When users are pushed, and an adequate reason is not provided, they will start to assume things. Chances are, it will not be in your favor. Some may put up with it, but others might continue to hold out on older platforms, maybe even including older hardware.

Other users may be able to get away with Windows 7 VMs on a Linux host.

Source: Ars Technica