
IDF 2006: Quad Core, Future Architectures and Robson Technology

Author: Ryan Shrout
Manufacturer: Intel

Justin Rattner, CTO

During Justin Rattner's keynote, which focused on R&D, Intel shifted away from discussion of upcoming cores and toward the areas where Intel is helping the rest of the PC and server ecosystem solve its problems.  One of the biggest: the huge power demands of large data centers.



The discussion started with the idea of moving from the current 55-70% efficient power supplies to a streamlined 90%+ efficient power supply.  You can see in the image above that the potential 90%+ PSU is much simpler looking, as it features only a single 12V output rail.  For that to be feasible, platform designers would need to do away with the currently required 3.3V and 5V rails, and that obviously can't happen overnight.
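To put those efficiency figures in perspective, here is a quick sketch of how much wall power a server draws at each efficiency level; the 300 W load is a hypothetical example, not a figure from Intel's presentation.

# Wall power and waste heat for a hypothetical 300 W DC load at the two
# efficiency levels mentioned above (the load figure is illustrative only).
dc_load_watts = 300.0

for efficiency in (0.65, 0.90):
    wall_power = dc_load_watts / efficiency   # power drawn from the outlet
    waste_heat = wall_power - dc_load_watts   # lost as heat during conversion
    print(f"{efficiency:.0%} PSU: {wall_power:.0f} W from the wall, "
          f"{waste_heat:.0f} W turned into heat")

At 65% efficiency roughly a third of the input power becomes heat before it ever reaches the components; at 90% that waste drops to about a tenth.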



Another potential area to make up some power costs is in the conversion from AC power to DC power.  This slide shows the numerous conversions between AC and DC that occur in most major data centers as power travels from the grid, through a UPS, into the power supply, and finally to the components.  The problem is that with every conversion, efficiency is lost (raising costs) in the form of heat (raising cooling costs!), so Intel and its partners are looking at the advantages of DC power distribution.



Here you can see that the AC power still comes in from the main power producers, but at the UPS level it is converted to DC and kept that way right through to the PC unit.  There is a lot less conversion, which saves power.
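The math behind this is simple multiplication: every conversion stage takes its cut, so end-to-end efficiency is the product of the per-stage efficiencies.  Here is a minimal sketch; the per-stage numbers are assumptions for illustration, not values from Intel's slides.

import math

# Illustrative per-stage efficiencies (assumptions, not Intel's figures)
ac_chain = [0.90, 0.97, 0.70]   # UPS (AC-DC-AC), distribution, server PSU (AC-DC)
dc_chain = [0.92, 0.97, 0.90]   # front-end rectifier (AC-DC), DC bus, server DC-DC

# End-to-end efficiency is the product of every conversion stage in the path
print(f"AC distribution: {math.prod(ac_chain):.0%} of grid power reaches the boards")
print(f"DC distribution: {math.prod(dc_chain):.0%} of grid power reaches the boards")

The exact numbers depend entirely on the equipment involved; the point is that removing or improving even one conversion stage lifts the whole chain.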



How much?  Well, Intel had a full rack of servers on hand that could run on both AC and DC power input to show us.  You can see that running on the typical AC power chain that servers and data centers use now, total power consumption was 3837 watts.  Switching to the higher-efficiency DC model lowered power consumption to 3333 watts, a savings of just over 500 watts, or roughly 13%.  That could add up to a lot of money saved for companies that own and operate tens of thousands of server nodes.
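That percentage comes straight from the two readings on the rack:

# Savings implied by the two wattage readings from Intel's demo rack
ac_watts = 3837
dc_watts = 3333

savings = (ac_watts - dc_watts) / ac_watts
print(f"Switching to DC saves {ac_watts - dc_watts} W per rack, or {savings:.1%}")
# -> Switching to DC saves 504 W per rack, or 13.1%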


Terabits of I/O with Silicon Photonics


Another R&D area that Rattner discussed was silicon-based photonics: basically, the ability to create a laser using only silicon and electrons.  This was previously thought to be impossible, which forced the cost of optical networking connections to remain very high.  With a silicon-based option, optical data transmission could become very inexpensive.



The basic premise involves several layers of substrate that are applied in a specific order, with small channels cut out of a layer near the middle.  Sending an electrical charge across the silicon chip then produces light in the etched channel at a great enough scale to create a laser with significant range.



Building silicon-based receptors is a relatively easy task, and with both pieces you have the components of a very fast and cheap way to transmit huge amounts of data.  In my terascale processor article, we talk more about photonics as the method Intel plans to use to feed processors with 80+ cores enough data to stay busy.



This slide shows what all the fuss is about: a 50x improvement in data transmission rates without any increase in power required.


