Report: Intel Tigerlake Revealed; Company's Third 10nm CPU

Subject: Processors | January 24, 2016 - 12:19 PM |
Tagged: Tigerlake, rumor, report, processor, process node, Intel, Icelake, cpu, Cannonlake, 10 nm

A report from financial website The Motley Fool discusses Intel's plan to introduce three architectures at the 10 nm node rather than the expected two. This follows news that Kaby Lake will remain on the present 14 nm process, interrupting Intel's two-year manufacturing cadence.


(Image credit: wccftech)

"Management has told investors that they are pushing to try to get back to a two-year cadence post-10-nanometer (presumably they mean a two-year transition from 10-nanometer to 7-nanometer), however, from what I have just learned from a source familiar with Intel's plans, the company is working on three, not two, architectures for the 10-nanometer node."

Intel's first 10 nm processor architecture will be known as Cannonlake, with Icelake expected to follow about a year afterward. With Tigerlake expected to be the third architecture built on 10 nm, and not arriving until "the second half of 2019", we probably won't see 7 nm from Intel until the second half of 2020 at the earliest.

It appears that the days of two-year, two-product process nodes are numbered for Intel, as the report continues:

"If all goes well for the company, then 7-nanometer could be a two-product node, implying a transition to the 5-nanometer technology node by the second half of 2022. However, the source that I spoke to expressed significant doubts that Intel will be able to return to a two-years-per-technology cycle."


(Image credit: The Motley Fool)

It will be interesting to see how players like TSMC, themselves "planning to start mass production of 7-nanometer in the first half of 2018", will fare moving forward as Intel's process development (apparently) slows.


January 24, 2016 | 12:43 PM - Posted by cypherl (not verified)

That is quite interesting. Even five years ago the prediction was that 14 nm might be the end of the road. This is the first time I have heard of 5 nm plans. Curious what comes after single-atom precision (0.2 nm). Nanotechnology to be sure, but my guess would be photonic computing.

January 24, 2016 | 02:13 PM - Posted by xnor

That's because the sizes involved are larger than the names suggest: transistor gate pitch, for example, is actually 70 nm for the process that Intel calls "14nm", and it was 90 nm for their "22nm" process.
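For a rough sense of what those two pitches imply, here is a quick back-of-the-envelope sketch using only the figures quoted above (gate pitch alone doesn't determine density, so treat this as an illustration, not a measurement):

```python
# Shrink implied by the gate pitches quoted above: 90 nm on Intel's
# "22nm" process vs 70 nm on its "14nm" process. The node names are
# marketing labels, not measured dimensions.
pitch_22nm = 90.0  # nm
pitch_14nm = 70.0  # nm

linear_shrink = pitch_14nm / pitch_22nm  # shrink per linear dimension
area_shrink = linear_shrink ** 2         # shrink per unit of die area

print(f"linear shrink: {linear_shrink:.2f}x")  # ~0.78x
print(f"area shrink:   {area_shrink:.2f}x")    # ~0.60x
```

So even by this one metric the "14nm" generation packed the same transistor roughly 40% tighter by area than "22nm" — a real shrink, just nowhere near the 2.5x the "22 to 14" naming might imply.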

January 24, 2016 | 03:00 PM - Posted by Anonymous (not verified)

They cannot shrink the gate pitch unless they can get the leakage down, as they need enough substrate atoms to carry away the heat without damaging the circuits. Even at 22nm there were problems with such densely packed circuits; that's one of the reasons 14nm was such a problem, along with the lithography difficulties at that size. The amount of total circuit shrinkage is slowing too, with each step further yielding diminishing returns.

Processor circuits will go even more 3D, with FinFETs being replaced by gate-all-around transistors to allow for less leakage and less heat. The cost per transistor is going to go up now, with the cost of going below 14nm making things expensive. Intel is moving to a tick-tock-tock, with each fabrication process node stretched across more time and more processor microarchitectures! Look for processor die stacking and 3D technologies allowing for more processor cores, but because of heat dissipation issues the cores will be clocked lower to keep thermals acceptable.

IBM is researching coolant capillaries laser-etched into processor die stacks to allow coolant to flow through them, but that technology is still in the research stage. Either way, at first the only way to get more processing power will be to start stacking processor dies, and maybe using more CPU cores clocked lower to let the heat dissipate, because switching from silicon to a process based on another element is going to cost trillions and take decades.

Maybe CPUs will have to process like GPUs and make do with more cores clocked lower. GPU high-density layout libraries allow for denser circuit packing but have to be clocked lower so heat can be removed/dissipated. AMD's Carrizo mobile variant used GPU-style design layout libraries on Carrizo's CPU cores and got 30% more space savings at the 28nm process node, so AMD could maybe fit more, lower-clocked CPU cores on a given node and make up the difference with core count, the same way it does for GPUs and their thousands of cores. With GPUs and HSA, more processing will be done on the GPU anyway, so maybe things will not be so bad even with the slowing of Moore's Law/observation. HBM and interposer technology will allow for even more space/thermal efficiency on future SOCs, so other technologies are helping, including using GPUs for more types of compute workloads.

January 25, 2016 | 01:13 PM - Posted by Drazen (not verified)

Increasing the number of cores is not a solution because the speed increase is not proportional (2 cores at 1 GHz work slower than one at 2 GHz). Also, multiple cores require specially written software, increasing overhead on sync objects, complexity, etc.
At some point there is a limit on core count where adding more becomes useless (Amdahl's law).
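Amdahl's law makes this point concrete. A minimal sketch (the 90%-parallel fraction below is a hypothetical workload, chosen only for illustration):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Speedup predicted by Amdahl's law when only part of a
    workload can be spread across `cores`."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Two cores at "1 GHz" vs one core at "2 GHz": the dual-core speedup
# falls short of 2x whenever any serial fraction remains.
print(amdahl_speedup(0.9, 2))          # ~1.82x, not 2x

# And the curve saturates: with a 90%-parallel workload, even an
# unbounded core count can never exceed 1 / (1 - 0.9) = 10x.
print(amdahl_speedup(0.9, 1_000_000))  # just under the 10x ceiling
```

The saturation point is why piling on cores eventually stops paying: the serial 10% of the work dominates no matter how many cores share the rest.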

January 24, 2016 | 04:13 PM - Posted by Scott Michaud

A single silicon atom in a solid is effectively 0.5 nm, because it sits in a crystal lattice at roughly room temperature. There is talk of moving off silicon, though. Then, of course, we get to other compute paradigms altogether, like photonics, quantum, etc.

January 24, 2016 | 12:53 PM - Posted by John Blanton (not verified)

I'd like to know what Intel plans on doing to control electron teleportation at sub 5nm. Maybe Intel would be able to find a way to use it to their advantage. :D

January 24, 2016 | 03:20 PM - Posted by alexander christensen (not verified)

I'm not a particle physicist, so I might be wrong here, but I don't think that even a room-temperature superconductor would be enough to overcome the Heisenberg uncertainty principle.
The issue is that you can't actually stop quantum tunneling, since the electron never actually goes through the barrier; due to particle/wave duality it exists with a certain probability everywhere the wave reaches, until it interacts with something in the transistor.

There is the possibility of using some sort of statistical error correction, and of shrinking the actual transistors instead of the space between them, but even those are limited in scope. I guess what I'm trying to say is that we're more than likely nearing the very end of Moore's law, and companies like Intel are probably looking toward new materials, larger chips, and higher frequencies before moving to an entirely new type of computing. Cloud computing comes to mind here, since we are nowhere near the limits of data transfer, and with the first data having already been sent with neutrinos, the price of establishing connections is likely to decrease drastically in the future, since we can literally beam data straight through objects as large as the planet instead of relying on satellite connections.

(If my memory serves me correctly, I believe 96 bits of data were successfully transmitted via neutrinos through 24 miles of dirt.)

January 24, 2016 | 04:07 PM - Posted by xnor

Wow, you confuse/mix a lot of topics here.

January 24, 2016 | 04:25 PM - Posted by alexander christensen (not verified)

Enlighten me then, please; I might learn something.

January 24, 2016 | 07:25 PM - Posted by Anonymous (not verified)

Intel is kidding itself; it will never achieve a two-year cadence again. It's not possible, IMO. Instead, the 10nm-to-7nm transition will be much harder than 14nm-to-10nm, and I'd expect almost a four-year cycle for 7nm, and four to five years for 5nm.

January 24, 2016 | 09:42 PM - Posted by Scott Michaud

This news is actually admitting that they will be on a three-year cycle. It doesn't say they're expecting a 2-year cycle.

January 24, 2016 | 10:35 PM - Posted by Anonymous (not verified)

Companies with fabs are not going to come out and say that they cannot scale down any further. That would be disastrous from a business perspective, so they are going to say that they are planning 10 nm, 7 nm, and 5 nm even if they don't have a clear path to achieving it. They will just delay if it doesn't actually work out. While people have been incorrectly predicting the end of scaling for a long time, at some point those predictions will come true. You can't maintain exponential scaling forever, and we seem to be close to the end of scaling now.

There are a lot of other things that they can do to tweak the processes without actually going smaller. A big target will be interconnect resistance and capacitance. Modern high performance chips expend a considerable portion of their power on interconnect. We are going to see a large performance jump with the upcoming 3D stacked systems and it will be a little while before the low hanging fruit is exhausted.

January 25, 2016 | 06:59 AM - Posted by Anonymous (not verified)

What we need is a proper, full-scale IPC improvement, not just die shrinks.

January 25, 2016 | 09:42 PM - Posted by Anonymous (not verified)

Anyone have a guess whether any of these 10nm processors will launch with a 5ghz K model?

January 25, 2016 | 09:55 PM - Posted by TIM (not verified)

There has been talk of 5 nm for some time, and it is exciting and impressive; it shows human resolve and ingenuity at its best. But let's say they get to 5 nm: would that mean much in terms of better computers? I see Intel's efforts as one of the modern-day wonders of the world, but can anyone tell me why we need smaller than we have now?

January 28, 2016 | 10:29 PM - Posted by the laser it burns

Three-year cadence = tick-tock-toe?
