Today Intel insisted that the rumours of a further delay in their scheduled move to a 10nm process are greatly exaggerated. They had originally hoped to make this move in the latter half of this year, but difficulties in the design process pushed that target into 2017. They have assured The Inquirer and others that the speculation, based on information in a job vacancy posting, is inaccurate and that they still plan on releasing processors built on a 10nm node by the end of next year. You can still expect Kaby Lake before the end of the year, and Intel also claims to have found promising techniques to shrink their processors below 10nm in the future.
"INTEL HAS moved to quash speculation that its first 10nm chips could be pushed back even further than the second half of 2017, after already delaying them from this year."
Here is some more Tech News from around the web:
- 519070 or blank: The PINs that can pwn 80k online security cams @ The Register
- 6 Excellent Lightweight Linuxes for x86 and ARM @ Linux.com
- Samsung launches 14nm SoC for mid-range smartphones @ DigiTimes
- PC sales aren't doing so great – but good God, you're buying mountains of Nvidia graphics cards @ The Register
- Your anger is our energy, says Microsoft as it fixes Surface @ The Register
- Under-fire Apple backs down, crafts new iOS to kill security safeguard @ The Register
- iPhone 5SE price, release date, specs and rumours @ The Inquirer
- Firefox 2.0 for iOS adds 3D Touch and better password management @ The Inquirer
- Original 1977 Star Wars 35mm Print Has Been Restored and Released Online @ Slashdot
- Ventev Chargesync Alloy Cable @ TechwareLabs
- A Wireless Router That Means Business: Synology RT1900ac Review @ Techgage
Seriously, why even BOTHER
Seriously, why even BOTHER holding onto Moore's "Law"? It's just one man's speculative prediction, nothing more and nothing less. Building an entire computing industry on one man's words always struck me as kind of lame and unprofessional, at least in my personal opinion. Just do your job, but don't bend it to fit some highly speculative, downright Vanga-like prediction; that's just stupid. No one can predict the future of computing, and recent developments like quartz memory crystals and graphene lenses have clearly shown that nothing is 100% certain in this particular segment, so clinging to something as speculative as one man's "law" about transistor and process shrinking only holds back technological progress and innovation, IMHO.
We are getting market
We are getting saturation in a lot of different markets. For a lot of people, their current smartphone does what they want it to, so there isn't really a reason to upgrade. Luckily, a lot of smartphones are made out of easily shattered glass and metal, which creates a bigger upgrade market. Televisions have also hit market saturation in many cases. If you already have a large flat-screen, high-definition television, then why would you upgrade? They tried to push 3D features for a while, but not very many people were convinced that 3D was a worthwhile upgrade. Now they are pushing 4K and HDR to get people to upgrade.
In the PC space, regular performance updates drove the upgrade cycle for a long time. That ran out of steam in about 2006 or 2007, and a lot of systems from that era are still usable for most general compute tasks. I was still using an old Core 2 Duo laptop up until recently, when it broke. There was no way you could still use a 10-year-old laptop in 2006; even a 5-year-old laptop would have performed so poorly that it was close to unusable. This means that the upgrade market shrank to mostly PC gamers, since GPU performance was still scaling for a while. That has also slowed down significantly with the process tech scaling issues, and a lot of gaming doesn't actually need the latest and greatest performance. If tech companies can't continue scaling performance, then they lose yet another reason to get people to upgrade. Mobile offers more of a reason to upgrade because people care more about what their mobile devices look like, and mobile devices just get broken more often.
So this isn't really about holding on to Moore's law specifically. They do need new products with a compelling reason to upgrade, though. Intel's tick-tock cadence was meant to deliver a new marketable device every year, even if it wasn't really much of an upgrade. We are finally going to get the move to 14 nm for GPUs, but with smaller processes being difficult, especially for large chips like GPUs, we may be stuck at 14 nm for a while like we were at 28 nm. The semiconductor companies obviously want people to believe that they can keep driving Moore's law indefinitely (an impossibility, really) because otherwise they don't have any features to get people to upgrade.
This wall of text seriously
This wall of text seriously Ticks my Tocks.
Going from 90nm to 65nm
Going from 90nm to 65nm yields a 25nm savings, a linear shrink of about 28%
Going from 65nm to 45nm yields a 20nm savings, a linear shrink of about 31%
Going from 45nm to 32nm yields a 13nm savings, a linear shrink of about 29%
Going from 32nm to 22nm yields a 10nm savings, a linear shrink of about 31%
Going from 22nm to 14nm yields an 8nm savings, a linear shrink of about 36%
Going from 14nm to 10nm yields a 4nm savings, a linear shrink of about 29%
But don't let those percentages fool you, because you also need to know the circuit pitch to work out how much actual die space can be saved! And the cost of making the circuits smaller is going to climb even more after 14nm! That's not even counting the actual geometry of the transistors (square, rectangular) and the actual pitch between circuits: signal crosstalk and heat dissipation into the silicon substrate keep the circuit pitch from shrinking as much as the smallest gate size does. So the gates may get smaller, but the circuit pitch (the space between individual gates) may not be able to shrink as much, due to heat transfer and signal issues as well as the quantum effects that come into play at the even smaller process nodes.
The smaller gate sizes will save on power usage, but other things, mostly heat transfer and related issues, may keep the die space savings from being as large! Those nanometers-per-atom limits are really what will stop the number of transistors per planar unit area from going up, because the circuit pitch will not be able to get any smaller even if the gate size can!
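To put rough numbers on that: the percentages in the list above are one-dimensional (linear) reductions in the quoted node size; the idealised area savings would be roughly double, but only if every pitch actually scaled with the node name, which is exactly the pitch caveat raised here. A minimal Python sketch of both figures, using nothing but the quoted node names:

```python
# Rough sketch: arithmetic on the commonly quoted node names only.
# "Ideal area" assumes, unrealistically, that every pitch scales with
# the node name, which the comment above argues does not happen.

nodes = [90, 65, 45, 32, 22, 14, 10]  # nm, marketing node names

for old, new in zip(nodes, nodes[1:]):
    linear_shrink = (old - new) / old   # one-dimensional reduction
    ideal_area = (new / old) ** 2       # die area relative to the old node
    print(f"{old}nm -> {new}nm: linear shrink {linear_shrink:.0%}, "
          f"ideal area {ideal_area:.0%} of the old node "
          f"({1 - ideal_area:.0%} area savings)")
```

For 90nm to 65nm, for example, that works out to a 28% linear shrink but an idealised area savings of about 48%; whether anything close to that materialises depends on the gate and circuit pitches actually shrinking too.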
Intel is already talking about reducing clock speeds to compensate for the losses that come with going smaller than 14nm!
It seems that others gain
It seems that others are gaining access to better alien technology than the alien technology Intel has been using for the last few decades. So, are TSMC and Samsung going to pass Intel in the near future? That's an interesting question.
There are about 6 or so main
There are about six main technology companies that provide the majority of the world's chip fabrication equipment, IP licensing, and hardware to Intel and the rest of the fab owners, and Intel has simply had MORE MONEY than anyone else to give these companies to keep its chip fab technology ahead of the rest of the marketplace. So it's Intel's money that buys the most advanced chip fab technology and hardware, and the IP licensing, from these companies and from universities!
Intel, like M$, merely grabbed onto IBM's coattails and was able to take a large share of the IBM PC clone market that metamorphosed into the PC/laptop market we see today! So it's more that Intel had the market share to outspend the others on the very expensive third-party chip lithography machines and the like. And FinFET was pioneered by professors (Chenming Hu, Tsu-Jae King-Liu and Jeffrey Bokor) at the University of California, Berkeley! Intel simply had the revenues to outspend the other market players on chip fab equipment and IP licensing, as well as the money to have the processes tweaked and brought to market before the others.
As the CPU/SoC market has outgrown x86-only CPUs/SoCs, and with the GPU market on top of that, the independent fab industry has had the funds for the past few years to invest in the latest technologies, so now they are catching up to Intel's lead with FinFET processes of their own. The ARM-based market and the GPU market have contributed far more revenue to the independent chip fab industry than AMD's x86 needs ever could, so it's the GPU and ARM markets that have generated the revenues letting the independent fabs catch up to Intel!
Good post.
Good post.
I think the bigger problem is
I think the bigger problem is that they've been building boring CPUs at 14nm and they're going to build boring CPUs at 10nm. Start doing something interesting for a change, Intel.
Does it really matter? Intel
Does it really matter? Intel has only managed small performance increases since the first Core series i7 was released. Sure, they have done good things with energy conservation, but that does not help users who need every bit of performance they can get. What is the technical reason preventing multiple i7s from being used in parallel in a desktop computer? With ARM-based mobile processors from Apple, Samsung, Qualcomm and others making massive performance gains every year and now rapidly encroaching on desktop processors, why is Intel holding back multiprocessing and major performance increases?