Subject: General Tech | September 27, 2017 - 12:58 PM | Jeremy Hellstrom
Tagged: moores law, nvidia, jen-hsun huang
You've heard this one before, though not from Jen-Hsun Huang of NVIDIA, who has a vested interest in seeing Moore's Law finally relegated to computing history. NVIDIA is pushing GPUs as a better alternative to CPUs for a variety of heavy computational lifting. Volta has been adopted by many large companies, and Huang also just announced TensorRT 3, a programmable inference accelerator with applications in self-driving cars, robotics, and numerous other tasks previously best done with a CPU. DigiTimes quotes Jen-Hsun as saying "while number of CPU transistors has grown at an annual pace of 50%, the CPU performance has advanced by only 10%", more or less accurate in broad strokes but certainly not a death rattle yet.
Intel has a different opinion of course, reporting Moore's Law to be perfectly healthy just last Tuesday.
"Nvidia founder and CEO Jensen Huang has said that with the emergence of GPU computing following the decline of the CPU era, Moore's Law has come to an end, stressing that his company's GPU-centered ecosystem has won support from China's top-five AI (artificial intelligence) players."
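Huang's quoted growth rates are easy to sanity-check with a little compounding; a minimal sketch, with the ten-year window being my own illustrative assumption rather than anything from the quote:

```python
# Compound the annual rates from the quote over an assumed ten-year window:
# transistor counts at 50% per year vs. CPU performance at 10% per year.
years = 10
transistor_growth = 1.50 ** years   # roughly 57.7x over a decade
performance_growth = 1.10 ** years  # roughly 2.6x over a decade
print(f"transistors: {transistor_growth:.1f}x, performance: {performance_growth:.1f}x")
```

The gap between those two curves is the core of Huang's argument: density keeps compounding while delivered CPU performance does not.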
Here is some more Tech News from around the web:
- Guru3D Rig of the Month - September 2017
- Docs ran a simulation of what would happen if really nasty malware hit a city's hospitals. RIP :( @ The Register
- Deloitte is a sitting duck: Key systems with RDP open, VPN and proxy 'login details leaked' @ The Register
- watchOS 4 breathes new life into fitness side of the Apple Watch @ Ars Technica
- iPhone X vs Galaxy Note 8 specs comparison @ The Inquirer
- EWin Racing Champion Series Gaming Chair Review @ NikKTech
Subject: General Tech | July 19, 2016 - 12:37 PM | Jeremy Hellstrom
Tagged: 2d, molybdenum sulphide, moores law, graphene
Over at Nanotechweb is an article on some rather impressive research being done to create transistors that are, for all intents and purposes, almost two dimensional. The process used by the researchers created transistors made up of two three-atom-thick MoS2 layers, both slightly overlapped with graphene, sandwiched between two one-atom-thick graphene layers. The trick is in the use of graphene, itself unsuitable for use as a transistor but perfect for interconnects thanks to its conductance. Read on to learn more about these researchers and the process they are working on, including a link to their publication in Nature.
"Researchers in the US have succeeded in chemically assembling the electronic junctions between a 2D semiconductor (molybdenum sulphide) and graphene, and have made an atomic transistor with good properties. They have also assembled the heterostructures into 2D logic circuits, such as an NMOS inverter with a voltage gain as high as 70."
Here is some more Tech News from around the web:
- Good gravy, Toshiba QLC flash chips are getting closer @ The Register
- Boffins unveil 500TB/in2 disk. Yeah, it's made of chlorine. -196˚C, why? @ The Register
- Seagate unveils 10TB monsters for PC users with out-of-control Steam libraries @ The Inquirer
- How to scam $750,000 out of Microsoft Office: Two-factor auth calls to premium-rate numbers @ The Register
- Netflix Stock Price Tanks As Customers Quit Over Higher Prices @ Slashdot
- Sonic 3D Printer Auto Bed Leveling Makes a Swoosh @ Hack a Day
Are Computers Still Getting Faster?
It looks like CES is starting to wind down, which makes sense because it ended three days ago. Now that we're mostly caught up, I found a new video from The 8-Bit Guy. He doesn't really explain any old technologies in this one. Instead, he poses an open question about computer speed. He was able to have a functional computing experience on a ten-year-old Apple laptop, which made him wonder if the rate of computer advancement is slowing down.
I believe that he (and his guest hosts) made great points, but also missed a few important ones.
One of his main arguments is that software seems to have slowed down relative to hardware. I don't believe that is true, but I believe it looks in the right area. PCs these days are more than capable of doing just about anything we would want in terms of 2D user interfaces, and do so with a lot of overhead for inefficient platforms and sub-optimal programming (relative to the '80s and '90s at the very least). The areas that require extra horsepower are usually doing large batches of many related tasks. GPUs are key in this area, and they are keeping up as fast as they can, despite some stagnation with fabrication processes and difficulty (at least before HBM takes hold) in keeping up with memory bandwidth.
For the last five to ten years or so, CPUs have been evolving toward efficiency as GPUs are being adopted for the tasks that need to scale up. I'm guessing that AMD, when it designed the Bulldozer architecture, hoped that GPUs would be adopted much more aggressively, but even as graphics devices, they now have a huge effect on Web, UI, and media applications.
These are also tasks that can scale well between devices by lowering resolution (and so forth). The primary thing that a main CPU thread needs to do is figure out the system's state and keep the graphics card fed before the frame-train leaves the station. In my experience, that doesn't scale well (although you can sometimes reduce the number of tracked objects for games and so forth). Moreover, it is easier to add GPU performance than single-threaded CPU performance, because increasing frequency and single-threaded IPC is more complicated than laying out more duplicated blocks of shaders. These factors combine to give lower-end hardware a similar experience in the most noticeable areas.
So, up to this point, we discussed:
- Software is often scaling in ways that are GPU (and RAM) limited.
- CPUs are scaling down in power more than up in performance.
- GPU-limited tasks can often be approximated with smaller workloads.
- Software gets heavier, but it doesn't need to be "all the way up" (ex: resolution).
- Some latencies are hard to notice anyway.
Back to the Original Question
This is where “Are computers still getting faster?” can be open to interpretation.
Tasks are diverging from one class of processor into two, and both have separate industries, each with their own multiple goals. As stated, CPUs are mostly progressing in power efficiency, which extends a (presumably sufficient) amount of performance downward to multiple types of devices. GPUs are definitely getting faster, but they can't do everything. At the same time, RAM is plentiful, but its contribution to performance can be approximated by paging unused chunks to the hard disk or, more recently on Windows, compressing them in place. Newer computers with extra RAM won't help as long as any single task only uses a manageable amount of it -- unless it's seen from a viewpoint that cares about multi-tasking.
In short, computers are still progressing, but the paths are now forked and winding.
Subject: General Tech, Graphics Cards, Processors | April 19, 2015 - 02:08 PM | Scott Michaud
Tagged: moores law, Intel
While he was the director of research and development at Fairchild Semiconductor, Gordon E. Moore predicted that the number of components in an integrated circuit would double every year. Later, this time-step would slow to every two years; you can occasionally hear people talk about eighteen months too, but I am not sure who derived that number. A few years after the prediction, he would go on to found Intel with Robert Noyce, where tens of billions of dollars are now spent annually to keep up with the prophecy.
It works out for the most part, but we have been running into physical issues over the last few years. One major issue is that, with our process technology dipping into the single- and low double-digit nanometers, we are running out of physical atoms to manipulate. The distance between silicon atoms in a solid at room temperature is about 0.5nm; a 14nm product has features containing about 28 atoms, give or take a few in rounding error.
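That atom-counting generalizes to other nodes; a quick back-of-the-envelope sketch, assuming the same ~0.5nm silicon atom spacing from the paragraph above (the node sizes below are just illustrative):

```python
# Approximate atoms across a process feature, assuming ~0.5 nm
# spacing between silicon atoms in a solid at room temperature.
ATOM_SPACING_NM = 0.5

def atoms_across(feature_nm):
    return feature_nm / ATOM_SPACING_NM

for node in (14, 10, 7):
    print(f"{node} nm feature: ~{atoms_across(node):.0f} atoms")
```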
It has been a good fifty years since the start of Moore's Law, and humanity has been developing plans for how to cope with the eventual end of silicon lithography process shrinks. We will probably transition to smaller atoms and molecules and later consider alternative technologies like photonic crystals, which route light in the hundreds of terahertz through a series of waveguides that make up an integrated circuit. Another interesting thought: will these technologies fall in line with Moore's Law in some way?
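Fifty years of Moore's Law is striking when written out as arithmetic; a minimal sketch, assuming a strict two-year doubling cadence (a simplification of the real, uneven history):

```python
# Component count after t years of Moore's Law, assuming a strict
# two-year doubling cadence (an idealized model, not the real history).
def moore_projection(initial_count, years, doubling_period=2):
    return initial_count * 2 ** (years / doubling_period)

# Fifty years is 25 doublings: about a 33-million-fold increase.
print(moore_projection(1, 50))
```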
There are few people in the gaming industry that you simply must pay attention to when they speak. One of them is John Carmack, co-founder of id Software, creator of Doom, and a friend of the site. Another is Epic Games' Tim Sweeney, another pioneer in the field of computer graphics who brought us the magic of Unreal before bringing the rest of the gaming industry the Unreal Engine.
At DICE 2012, a trade show for game developers to demo their wares and learn from each other, Sweeney gave a talk on computing hardware and its future. (You can see the source of my information and slides here at Gamespot.) Many pundits, media, and even developers have brought up the idea that the next console generation that we know is coming will be the last - we will have reached the point in our computing capacity where gamers and designers will be comfortable with the quality and realism provided. Forever.
Think about that a moment; has anything ever appeared so obviously crazy? Yet, in a world where gaming has seemed to regress into the handheld spaces of iPhone and iPad, many would have you believe that it is indeed the case. Companies like NVIDIA and AMD that spend billions of dollars developing new high-powered graphics technologies would simply NOT do so anymore and instead focus only on low power. Actually...that is kind of happening with NVIDIA Tegra and AMD's move to APUs, but both claim that the development of leading graphics technology is what allows them to feed the low end - the sub-$100 graphics cards, SoC for phones and tablets and more.
Sweeney started the discussion by teaching everyone a little about human anatomy.
The human eye has been studied quite extensively, and the amount of information we know about it would likely surprise you. With 120 million monochrome receptors and 5 million color receptors, the eye and brain are able to do what even our most advanced cameras cannot.
Subject: General Tech | February 15, 2012 - 01:45 PM | Jeremy Hellstrom
Tagged: photolithography, moores law, MAPPER, etching, electron lithography
Josh has covered the lithography process in depth in several of his processor reviews, watching the shrink from triple-digit to double-digit processes as the manufacturers refine existing techniques and invent new ways of etching smaller transistors and circuits. We've also mentioned Moore's Law several times, a written observation by Gordon E. Moore that has proven accurate far beyond his initial 10-year estimate for the continuation of the trend; he saw that "the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965". It is a measure of density, not processing power as many intarweb denizens interpret it.
With UV light being the solution that most companies currently implement and expect to use for the near future, the single-digit process seems out of reach, as the wavelength of UV light can only be pushed so short without very expensive workarounds. That is why the news from MAPPER Lithography of Delft, The Netherlands is so exciting. They've found a way to use directed electron beams to etch circuitry and are testing out 14nm and 10nm processes, doing it to the standards expected by industry. This may be the process that takes us below 9nm and extends the doubling of transistor density for a few years to come. Check out more about the process at The Register and check out the video below.
"An international consortium of chip boffins has demonstrated a maskless wafer-baking technology that they say "meets the industry requirement" for next-generation 14- and 10-nanometer process nodes.
Current chip-manufacturing lithography uses masks to guide light onto chip wafers in order to etch a chip's features. However, as process sizes dip down to 20nm and below, doubling up on masks begins to become necessary – an expensive proposition."
Here is some more Tech News from around the web:
- Ultrabooks less desirable in Europe, say Taiwan makers @ DigiTimes
- HP gives sysadmins a little mobility @ The Register
- Bulldozer Wprime and SuperPI records broken by XSR writer @ XSReviews