More Talks About Process Technology

Subject: Editorial, General Tech | December 8, 2013 - 04:11 AM

Josh Walrath titled the intro of his "Next Gen Graphics and Process Migration: 20nm and Beyond" editorial: "The Really Good Times are Over". Moore's Law predicts that, with each ~2 year generation, we will be able to double the transistor count of our integrated circuits. It does not, however, set a price.
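That doubling cadence compounds quickly. As a rough illustration (the starting figure of ~1.4 billion transistors, roughly a 2013-era desktop CPU, and the clean two-year cadence are illustrative assumptions, not quoted specs):

```python
def project_transistors(start_count, start_year, end_year, period=2):
    """Project a transistor count forward, assuming one doubling
    every `period` years (an idealized Moore's Law cadence)."""
    generations = (end_year - start_year) // period
    return start_count * 2 ** generations

# Five doublings over a decade: 1.4 billion -> 44.8 billion.
print(project_transistors(1_400_000_000, 2013, 2023))
```

The point of the exercise is the exponent: a decade of on-schedule shrinks buys a 32x transistor budget, which is exactly why missing even one generation matters so much.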


A look into GlobalFoundries.

"Moore's Law is expensive," remarked Intel's Tom Kilroy during his Computex 2013 keynote. Intel spends about $12 billion USD in capital, every year, to keep the transistors coming. It shows: the company is significantly ahead of its peers in process technology. Intel is also profitable enough to amortize those research and development expenses across numerous products and services.

The benefits of a process shrink are typically three-fold: increased performance, decreased power consumption, and lower cost per chip (as a single wafer is better utilized). Chairman and CTO of Broadcom, Henry Samueli, told reporters that manufacturing complexity is pushing chip developers into a situation where one of those three benefits must be sacrificed for the other two.
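The cost-per-chip benefit comes from wafer utilization: a shrink puts more dies on the same fixed-cost wafer. A back-of-envelope sketch (the $5,000 wafer cost and the die sizes are made-up round numbers, not foundry figures, and the dies-per-wafer estimate is a crude textbook approximation that ignores yield):

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Crude dies-per-wafer estimate: wafer area over die area,
    minus a simple correction for partial dies lost at the edge."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

WAFER_COST = 5000  # assumed round number, USD per 300 mm wafer

# A shrink that roughly halves die area roughly halves cost per die.
for die_mm2 in (200, 100):
    n = dies_per_wafer(300, die_mm2)
    print(f"{die_mm2} mm^2 die: {n} dies, ~${WAFER_COST / n:.2f} each")
```

Samueli's point is that this arithmetic stops working in your favor when the per-wafer cost of a new node rises faster than the die count does.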

You are suddenly no longer searching for an overall better solution. You are searching for a solution that is more optimized in some respects but carries inherent tradeoffs in others.

He expects GlobalFoundries and TSMC to catch up to Intel and "the cost curve should come back to normal". Still, he sees another wall coming up when we hit the 5nm point (you can count the width or height of these transistors, in atoms, using two hands) and even more problems beyond that.


Image Credit: IONAS

From my perspective: at some point, we will need to say goodbye to electronic integrated circuits. Theorists are already working out how to build integrated circuits from non-electronic materials. For instance, toward the end of my Physics undergraduate degree, my thesis adviser was working on nonlinear optics within photonic crystals: waveguides that transmit optical-frequency light rather than radio-frequency electric waves. To be clear, his research was not on optical integrated circuits themselves, but that is beside the point.

Humanity is great at solving problems when its back is against the wall. But which problem will we tackle?

Power consumption? Cost? Performance?

Source: ITWorld

December 8, 2013 | 02:44 PM - Posted by Anonymous (not verified)

We are living in an exciting, yet terrifying time. We will either see something new and revolutionary, or simply marginal progress at a vastly slower pace. This is the time when great things happen in technology, and I can't wait.

December 8, 2013 | 03:40 PM - Posted by Anonymous (not verified)

With new process node engineering costs taxing even Intel's fat wallet, Intel will have only a few more years to take advantage of its manufacturing process lead. Look for competition in the many-core area from ARM 64-bit designs that will be using AMD or licensed Nvidia graphics.
AMD will be developing custom ARM-based APUs the same way Apple has: by licensing the ARM 64-bit instruction set and producing custom ARM cores. Only AMD will be creating an ARM 64-bit many-core APU with HSA and hUMA and AMD GPU graphics/GPU acceleration. These APUs will have 8 or more ARM cores to compete with lower-end Haswell SKUs, in mobile devices at first; then, by adding more ARM 64-bit cores and a custom on-die interconnect fabric, these devices will be able to scale to laptop/desktop platforms just by adding cores. Apple will probably be the first to take its custom ARM designs to low-end laptop devices, starting with a many-core ARM part paired with AMD or Nvidia discrete GPUs, or later possibly licensing Nvidia integrated GPU designs and going with an Apple CPU/Nvidia GPU on-die solution.

With the ability to pack more ARM cores per unit of die space than x86, high-speed intelligent on-die bus fabrics, and a software base fully implementing HSA/parallel programming, single-threaded performance will not be an issue, as all software will be able to use multicore CPUs and GPUs for general-purpose computational workloads. With Moore's Law in a stagnant phase, and everyone catching up in the process node/multicore software area, expect the start of a new battle between CISC and RISC designs on the many-core front.
