2D semiconductors anyone?

Subject: General Tech | July 19, 2016 - 12:37 PM |
Tagged: 2d, molybdenum sulphide, moores law, graphene

Over at Nanotechweb is an article on some rather impressive research being done to create what are, for all intents and purposes, almost two dimensional transistors.  The process used by the researchers created transistors made up of two three-atom-thick MoS2 layers, both slightly overlapped with graphene, sandwiched between two one-atom-thick graphene layers.  The trick is in the use of graphene, itself unsuitable for use as a transistor but perfect for interconnects thanks to its conductivity.  Read on to learn more about these researchers and the process they are working on, including a link to their publication in Nature.

"Researchers in the US have succeeded in chemically assembling the electronic junctions between a 2D semiconductor (molybdenum sulphide) and graphene, and have made an atomic transistor with good properties. They have also assembled the heterostructures into 2D logic circuits, such as an NMOS inverter with a voltage gain as high as 70."

Here is some more Tech News from around the web:

Tech Talk

Source: Nanotechweb
Manufacturer: PC Perspective
Tagged: moores law, gpu, cpu

Are Computers Still Getting Faster?

It looks like CES is starting to wind down, which makes sense because it ended three days ago. Now that we're mostly caught up, I found a new video from The 8-Bit Guy. He doesn't really explain any old technologies in this one. Instead, he poses an open question about computer speed. He was able to have a functional computing experience on a ten-year-old Apple laptop, which made him wonder if the rate of computer advancement is slowing down.

I believe that he (and his guest hosts) made great points, but also missed a few important ones.

One of his main arguments is that software seems to have slowed down relative to hardware. I don't believe that is true, but it is looking in the right area. PCs these days are more than capable of handling just about any 2D user interface we would want, and they do so with plenty of overhead left for inefficient platforms and sub-optimal programming (relative to the 80's and 90's, at the very least). The areas that require extra horsepower usually involve large batches of many related tasks. GPUs are key in this area, and they are keeping up as fast as they can, despite some stagnation in fabrication processes and difficulty (at least before HBM takes hold) in keeping up with memory bandwidth.

For the last five to ten years or so, CPUs have been evolving toward efficiency as GPUs are adopted for the tasks that need to scale up. I'm guessing that AMD, when they designed the Bulldozer architecture, hoped that GPUs would be adopted much more aggressively, but even as graphics devices, they now have a huge effect on Web, UI, and media applications.

These are also tasks that can scale well between devices by lowering resolution (and so forth). The primary thing that a main CPU thread needs to do is figure out the system's state and keep the graphics card fed before the frame-train leaves the station. In my experience, that doesn't scale well (although you can sometimes reduce the number of tracked objects for games and so forth). Moreover, it is easier to add GPU performance than single-threaded CPU performance, because increasing frequency and single-threaded IPC is more complicated than laying out more duplicated blocks of shaders. These factors combine to give lower-end hardware a similar experience in the most noticeable areas.

So, up to this point, we discussed:

  • Software is often scaling in ways that are GPU (and RAM) limited.
  • CPUs are scaling down in power more than up in performance.
  • GPU-limited tasks can often be approximated with smaller workloads.
    • Software gets heavier, but it doesn't need to be "all the way up" (ex: resolution).
    • Some latencies are hard to notice anyway.

Back to the Original Question

This is where “Are computers still getting faster?” can be open to interpretation.

Tasks are diverging from one class of processor into two, and both have separate industries, each with their own, multiple goals. As stated, CPUs are mostly progressing in power efficiency, which extends a (presumably) sufficient amount of performance downward to multiple types of devices. GPUs are definitely getting faster, but they can't do everything. At the same time, RAM is plentiful, but its contribution to performance can be approximated by paging unused chunks to the hard disk or, more recently on Windows, compressing them in place. Newer computers with extra RAM won't help much as long as any single task only uses a manageable amount of it -- unless you look at it from a viewpoint that cares about multi-tasking.

In short, computers are still progressing, but the paths are now forked and winding.

Moore's Law Is Fifty Years Old!

Subject: General Tech, Graphics Cards, Processors | April 19, 2015 - 02:08 PM |
Tagged: moores law, Intel

While he was the director of research and development at Fairchild Semiconductor, Gordon E. Moore predicted that the number of components in an integrated circuit would double every year. Later, this time-step would slow to every two years; you can occasionally hear people talk about eighteen months too, but I am not sure who derived that number. A few years later, he went on to found Intel with Robert Noyce, and the company now spends tens of billions of dollars annually to keep up with the prophecy.
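To put those cadences in perspective, here is a minimal back-of-the-envelope sketch in Python. The 2,300-component starting point (roughly the Intel 4004 in 1971) is my own illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope Moore's Law projection (illustrative only).
# Assumption: ~2,300 components in 1971, roughly the Intel 4004.

def projected_components(start_count, start_year, target_year, doubling_period_years):
    """Project a component count forward under a fixed doubling cadence."""
    doublings = (target_year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

for period, label in [(1.0, "every year"), (1.5, "every 18 months"), (2.0, "every two years")]:
    count = projected_components(2_300, 1971, 2015, period)
    print(f"Doubling {label}: ~{count:,.0f} components by 2015")
```

After a few decades, the gap between the one-year and two-year cadences makes it obvious why the slower figure is the one that stuck.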

It works out for the most part, but we have been running into physical issues over the last few years. One major issue is that, with our process technology dipping into the single- and low double-digit nanometers, we are running out of physical atoms to manipulate. The distance between silicon atoms in a solid at room temperature is about 0.5nm; a 14nm product has features containing about 28 atoms, give or take a few in rounding error.
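As a quick sanity check on that arithmetic, here is a trivial sketch using the same ~0.5nm spacing figure; the list of node sizes is just for illustration, and treating a node name as a literal feature width is itself a simplification:

```python
# Rough atoms-per-feature estimate using the ~0.5 nm silicon atom spacing
# quoted above. Node names are treated as literal feature widths here,
# which is a simplification of how modern process nodes are defined.

SILICON_ATOM_SPACING_NM = 0.5

for node_nm in (22, 14, 10, 7):
    atoms = node_nm / SILICON_ATOM_SPACING_NM
    print(f"{node_nm}nm feature: roughly {atoms:.0f} atoms across")
```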

Josh has a good editorial that discusses this implication with a focus on GPUs.

It has been a good fifty years since the start of Moore's Law. Humanity has been developing plans for how to cope with the eventual end of silicon lithography process shrinks. We will probably transition to smaller atoms and molecules and later consider alternative technologies like photonic crystals, which route light in the hundreds of terahertz through a series of waveguides that make up an integrated circuit. Another interesting thought: will these technologies fall in line with Moore's Law in some way?

Source: Tom Merritt
Manufacturer: Epic Games

The Truth

There are few people in the gaming industry that you simply must pay attention to when they speak.  One of them is John Carmack, co-founder of id Software, creator of Doom, and a friend of the site.  Another is Epic Games' Tim Sweeney, another pioneer in the field of computer graphics, who brought us the magic of Unreal before bringing the rest of the gaming industry the Unreal Engine.

At DICE 2012, a trade show for game developers to demo their wares and learn from each other, Sweeney gave a talk on the future of computing hardware.  (You can see the source of my information and slides here at Gamespot.) Many pundits, media, and even developers have brought up the idea that the next console generation, which we know is coming, will be the last - that we will have reached the point in our computing capacity where gamers and designers are comfortable with the quality and realism provided.  Forever.

Think about that a moment; has anything ever appeared so obviously crazy?  Yet, in a world where gaming has seemed to regress into the handheld spaces of iPhone and iPad, many would have you believe that it is indeed the case.  Companies like NVIDIA and AMD that spend billions of dollars developing new high-powered graphics technologies would simply NOT do so anymore and instead focus only on low power.  Actually...that is kind of happening with NVIDIA Tegra and AMD's move to APUs, but both claim that the development of leading graphics technology is what allows them to feed the low end - the sub-$100 graphics cards, SoC for phones and tablets and more.

Sweeney started the discussion by teaching everyone a little about human anatomy. 

The human eye has been studied quite extensively, and the amount of information we know about it would likely surprise you.  With 120 million monochrome receptors and 5 million color receptors, the eye and brain are able to do what even our most advanced cameras cannot.

Continue reading our story on the computing needs for visual computing!!

Take that Moore! Electron beam etching set to take us to the 10nm process

Subject: General Tech | February 15, 2012 - 01:45 PM |
Tagged: photolithography, moores law, MAPPER, etching, electron lithography

Josh has covered the lithography process in depth in several of his processor reviews, watching the shrink from triple-digit to double-digit processes as the manufacturers refine existing processes and invent new ways of etching smaller transistors and circuits.  We've also mentioned Moore's Law several times, a written observation by Gordon E. Moore that has proven accurate far beyond his initial 10 year estimate for the continuation of the trend, which noted that "the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965".  It is a measure of density, not processing power as many intarweb denizens interpret it.

With UV light being the solution that most companies currently implement and expect to use for the near future, the single-digit process seems out of reach, as the wavelength of UV light can only be pushed so short without very expensive workarounds.  That is why the news from MAPPER Lithography of Delft, The Netherlands is so exciting.  They've found a way to use directed electron beams to etch circuitry and are testing 14nm and 10nm processes to the standards expected by industry.  This may be the process that takes us below 9nm and extends the doubling of transistor density for a few years to come.  Check out more about the process at The Register and check out the video below.

"An international consortium of chip boffins has demonstrated a maskless wafer-baking technology that they say "meets the industry requirement" for next-generation 14- and 10-nanometer process nodes.

Current chip-manufacturing lithography uses masks to guide light onto chip wafers in order to etch a chip's features. However, as process sizes dip down to 20nm and below, doubling up on masks begins to become necessary – an expensive proposition."

Here is some more Tech News from around the web:

Tech Talk

 

Source: The Register