Subject: Processors | January 24, 2016 - 12:19 PM | Sebastian Peak
Tagged: Tigerlake, rumor, report, processor, process node, Intel, Icelake, cpu, Cannonlake, 10 nm
A report from financial website The Motley Fool discusses Intel's plan to introduce three architectures at the 10 nm node, rather than the expected two. This comes after news that Kaby Lake will remain at the present 14 nm, interrupting Intel's two-year manufacturing cadence.
(Image credit: wccftech)
"Management has told investors that they are pushing to try to get back to a two-year cadence post-10-nanometer (presumably they mean a two-year transition from 10-nanometer to 7-nanometer), however, from what I have just learned from a source familiar with Intel's plans, the company is working on three, not two, architectures for the 10-nanometer node."
Intel's first 10 nm processor architecture will be known as Cannonlake, with Icelake expected to follow about a year later. With Tigerlake expected to be the third architecture built on 10 nm, and not arriving until "the second half of 2019", we probably won't see 7 nm from Intel until the second half of 2020 at the earliest.
It appears that the days of two-year, two-product process nodes are numbered for Intel, as the report continues:
"If all goes well for the company, then 7-nanometer could be a two-product node, implying a transition to the 5-nanometer technology node by the second half of 2022. However, the source that I spoke to expressed significant doubts that Intel will be able to return to a two-years-per-technology cycle."
(Image credit: The Motley Fool)
It will be interesting to see how players like TSMC, themselves "planning to start mass production of 7-nanometer in the first half of 2018", will fare moving forward as Intel's process development (apparently) slows.
Subject: Graphics Cards, Processors | January 19, 2016 - 11:38 PM | Scott Michaud
Digitimes is reporting on statements that were allegedly made by TSMC co-CEO, Mark Liu. We are currently seeing 16nm parts come out of the foundry, and that node is expected to be used in the next generation of GPUs, replacing the long-running 28nm node that launched with the GeForce GTX 680. (It's still unannounced whether AMD and NVIDIA will use 14nm FinFET from Samsung or GlobalFoundries, or 16nm FinFET from TSMC.)
Update (Jan 20th, @4pm EST): A couple of minor corrections. The Radeon HD 7970 launched at 28nm first, by a couple of months. I just remember NVIDIA getting swamped in delays because it was a new node, so that's probably why I thought of the GTX 680. Also, AMD announced during CES that they will use GlobalFoundries to fab their upcoming GPUs, which I apparently missed. We suspect that NVIDIA will use TSMC, and have assumed that for a while, but it hasn't been officially announced yet (if ever).
According to their projections, which (again) are filtered through Digitimes, the foundry expects to have 7nm in the first half of 2018. They also expect to introduce extreme ultraviolet (EUV) lithography methods with 5nm in 2020. Given that solid silicon has a lattice constant of ~0.54nm at room temperature, a 7nm feature spans only about 13 of those lattice cells, and a 5nm feature about 9.
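For the curious, the back-of-the-envelope arithmetic behind those figures is trivial, assuming the ~0.54 nm silicon lattice constant quoted above:

```python
# Back-of-the-envelope: how many silicon lattice cells span a process node's
# headline feature size? Uses the ~0.54 nm room-temperature lattice constant.
SI_LATTICE_NM = 0.543  # silicon lattice constant, in nanometers

for node_nm in (14, 7, 5):
    cells = node_nm / SI_LATTICE_NM
    print(f"{node_nm} nm spans about {round(cells)} lattice cells")
```

Running this gives roughly 26, 13, and 9 cells for 14nm, 7nm, and 5nm respectively, which is where the "running out of atoms" worry comes from.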
We continue the march toward the end of silicon lithography.
Even if the statement is correct, much can happen between now and then. It wouldn't be the first time that I've seen a major foundry believe that a node would be available, only to end up having it delayed. I wouldn't hold my breath, but I might cross my fingers if my hands were free.
At the very least, we can assume that TSMC's roadmap is 16nm, 10nm, 7nm, and then 5nm.
Subject: Processors | January 17, 2016 - 02:20 AM | Scott Michaud
Tagged: Windows 8.1, Windows 7, windows 10, Skylake, microsoft, kaby lake, Intel, Bristol Ridge, amd
Microsoft has not been doing much to put out the fires in comment threads all over the internet. The latest flare-up involves hardware support with Windows 7 and 8.x. Currently unreleased architectures, such as Intel's Kaby Lake and AMD's Bristol Ridge, will only be supported on Windows 10. This is despite Windows 7 and Windows 8.x being supported until 2020 and 2023, respectively. Microsoft does not believe that they need to support older hardware, though.
This brings us to Skylake. These processors are out, but Microsoft considers them “transition” parts. Microsoft provided PC World with a list of devices that will be given Windows 7 and Windows 8.x drivers, which enable support until July 17, 2017. Beyond that date, only a handful of “most critical” updates will be provided until the official end of life.
I am not sure what the cut-off date is for Skylake processors that do not appear on Microsoft's list; those could be deprecated at any time, which is especially a problem for units that have already been sold.
As I hinted earlier, this will probably reinforce the opinion that Microsoft is doing something malicious with Windows 10. As Peter Bright of Ars Technica reports, Windows 10 does not exactly have an equivalent in the server space yet, which makes you wonder what that support cycle will be like. If they can continue to patch Skylake-based servers in Windows Server builds that are derived from Windows 7 and Windows 8.x, like Windows Server 2012 R2, then why are they unwilling to port those changes to the base operating system? If they will not patch current versions of Windows Server, because the Windows 10-derived version still isn't out yet, then what will happen with server farms, like Amazon Web Services, when Xeon v5s are suddenly incompatible with most Windows-based OS images? While this will, no doubt, be taken way out of context, there is room for legitimate commentary about this whole situation.
Of course, supporting new hardware on older operating systems can be difficult, and not just for Microsoft. Peter Bright also noted that Intel has similarly spotty driver coverage, although that mostly applies to Windows Vista, which, while still in extended support for another year, doesn't have a significant base of users who are unwilling to switch. The point remains, though, that Microsoft could be doing a favor for their hardware vendor partners.
I'm not sure whether that would be less concerning, or more.
Whatever the reason, this seems like a very silly, stupid move on Microsoft's part, given the current landscape. Windows 10 can become a great operating system, but users need to decide that for themselves. When users are pushed, and an adequate reason is not provided, they will start to assume things. Chances are, it will not be in your favor. Some may put up with it, but others might continue to hold out on older platforms, maybe even including older hardware.
Other users may be able to get away with Windows 7 VMs on a Linux host.
Subject: Processors | January 14, 2016 - 02:26 PM | Jeremy Hellstrom
Tagged: opteron a1100, amd
The chip once known as Seattle has arrived from AMD: the Opteron A1100 Series, built around up to eight 64-bit ARM Cortex-A57 cores. The chips have up to 4 MB of shared L2 cache and 8 MB of L3 cache, with an integrated dual-channel memory controller that supports up to 128 GB of DDR3 or DDR4 memory. For connectivity, you get two 10Gb Ethernet ports, 8 lanes of PCIe 3.0, and up to 14 SATA3 devices.
As you can see above, the TDPs range from 25W to 32W, perfect for power-conscious data centres. The SoftIron Overdrive 3000 systems will use the new A1100 chips, and AMD is working with Silver Lining Systems to integrate SLS’ fabric technology for interconnecting systems.
TechARP has posted a number of slides from AMD's presentation, or you can head straight over to AMD to get the scoop. You won't see these chips on the desktop, but new server chips are great news for AMD's bottom line in the coming year. They also speak well of AMD's continued innovation: combining low-power, low-cost 64-bit ARM cores with their interconnect technologies opens up a new market for AMD.
Subject: Processors | January 11, 2016 - 06:26 PM | Sebastian Peak
Tagged: rumor, report, FM2+, carrizo, Athlon X4, amd
According to a report published by CPU World, a pair of unreleased AMD Athlon X4 processors appeared in a supported CPU list on Gigabyte's website (since removed) long enough to give away some information about these new FM2+ models.
Image credit: CPU World
The CPUs in question are the Athlon X4 835 and Athlon X4 845, 65W quad-core parts that are both based on AMD's Excavator core, according to CPU World. The part numbers are AD835XACI43KA and AD845XACI43KA, which the CPU World report interprets:
"The 'I43' letters and digits in the part number signify Socket FM2+, 4 CPU cores, and 1 MB L2 cache per module, or 2MB in total. The last two letters 'KA' confirm that the CPUs are based on Carrizo design."
The report further states that the Athlon X4 835 will operate at 3.1 GHz, with 3.5 GHz for the X4 845. No Turbo Core frequency information is known for these parts.
Are Computers Still Getting Faster?
It looks like CES is starting to wind down, which makes sense because it ended three days ago. Now that we're mostly caught up, I found a new video from The 8-Bit Guy. He doesn't really explain any old technologies in this one. Instead, he poses an open question about computer speed. He was able to have a functional computing experience on a ten-year-old Apple laptop, which made him wonder if the rate of computer advancement is slowing down.
I believe that he (and his guest hosts) made great points, but also missed a few important ones.
One of his main arguments is that software seems to have slowed down relative to hardware. I don't believe that is true, but it's looking in the right area. PCs these days are more than capable of handling just about any 2D user interface we would want, and can do so with plenty of overhead left for inefficient platforms and sub-optimal programming (relative to the '80s and '90s, at the very least). The areas that require extra horsepower usually involve large batches of many related tasks. GPUs are key here, and they are keeping up as fast as they can, despite some stagnation in fabrication processes and difficulty (at least before HBM takes hold) in keeping up with memory bandwidth.
For the last five to ten years or so, CPUs have been evolving toward efficiency, while GPUs are being adopted for the tasks that need to scale up. I'm guessing that AMD, when they designed the Bulldozer architecture, hoped that GPUs would be adopted much more aggressively; but even as graphics devices, they now have a huge effect on web, UI, and media applications.
These are also tasks that can scale well between devices by lowering resolution (and so forth). The primary thing that a main CPU thread needs to do is figure out the system's state and keep the graphics card fed before the frame-train leaves the station. In my experience, that doesn't scale well (although you can sometimes reduce the number of tracked objects for games and so forth). Moreover, it is easier to add GPU performance than single-threaded CPU performance, because increasing frequency and single-threaded IPC is more complicated than laying out more duplicated blocks of shaders. These factors combine to give lower-end hardware a similar experience in the most noticeable areas.
So, up to this point, we have discussed:
- Software is often scaling in ways that are GPU (and RAM) limited.
- CPUs are scaling down in power more than up in performance.
- GPU-limited tasks can often be approximated with smaller workloads.
- Software gets heavier, but it doesn't need to be "all the way up" (ex: resolution).
- Some latencies are hard to notice anyway.
Back to the Original Question
This is where “Are computers still getting faster?” can be open to interpretation.
Tasks are diverging from one class of processor into two, and both have separate industries, each with their own, multiple goals. As stated, CPUs are mostly progressing in power efficiency, which extends a (presumably sufficient) amount of performance downward to more types of devices. GPUs are definitely getting faster, but they can't do everything. Meanwhile, RAM is plentiful, but its contribution to performance can be approximated by paging unused chunks to the hard disk or, more recently on Windows, compressing them in place. Extra RAM in a newer computer won't help as long as each individual task only uses a manageable amount of it -- unless you care about multi-tasking.
In short, computers are still progressing, but the paths are now forked and winding.
Subject: Graphics Cards, Processors | January 9, 2016 - 07:00 AM | Scott Michaud
Tagged: ubisoft, quad-core, pc gaming, far cry primal, dual-core
If you remember back when Far Cry 4 launched, it required a quad-core processor. It would block your attempts to launch the game unless it detected four CPU threads, either native quad-core or dual-core with two SMT threads per core. This has naturally been hacked around by the PC gaming community, but it is not supported by Ubisoft. It's also, apparently, a bad experience.
The follow-up, Far Cry Primal, will be released in late February. Oddly enough, it has similar, but maybe slightly lower, system requirements. I'll list them, and highlight the differences.
Minimum:
- 64-bit Windows 7, 8.1, or 10 (basically unchanged from 4)
- Intel Core i3-550 (down from i5-750)
- or AMD Phenom II X4 955 (unchanged from 4)
- 4GB RAM (unchanged from 4)
- 1GB NVIDIA GTX 460 (unchanged from 4)
- or 1GB AMD Radeon HD 5770 (down from HD 5850)
- 20GB HDD Space (down from 30GB)
Recommended:
- Intel Core i7-2600K (up from i5-2400S)
- or AMD FX-8350 (unchanged from 4)
- 8GB of RAM (unchanged from 4)
- NVIDIA GeForce GTX 780 (up from GTX 680)
- or AMD Radeon R9 280X (down from R9 290X)
While the CPU changes are interesting, the opposing directions of the recommended GPUs are fascinating. Either the parts are within Ubisoft's QA margin of error, or they increased the GPU load but optimized for AMD better than in Far Cry 4, yielding a net gain in performance (which would also explain the slight bump in CPU power required to feed the extra content). Either way, it's just a guess.
Back on the CPU topic, though, I would be interested to see how Pentium Anniversary Edition parts perform. I wonder whether they removed the four-thread requirement and, especially if hacks are still needed, whether the game is playable anyway.
That is, in a month and a half.
Subject: Graphics Cards, Processors | January 8, 2016 - 02:38 AM | Scott Michaud
Tagged: Intel, kaby lake, linux, mesa
Quick post about something that came to light over at Phoronix. Someone noticed that Intel published a handful of PCI device IDs for graphics processors to Mesa and libdrm. It will take a few months for graphics drivers to catch up, but this suggests that Kaby Lake will be released relatively soon.
It also gives us hints about what Kaby Lake will be. The published batch spans six tiers of performance: GT1 has five IDs, GT1.5 has three IDs, GT2 has six IDs, GT2F has one ID, GT3 has three IDs, and GT4 has four IDs. Adding them up, we see that Intel plans 22 GPU devices. The Phoronix post lists what those device IDs are, but that is probably not interesting for our readers. Whether some of those devices overlap in performance or numbering is unclear, but overlap would make sense given how few SKUs Intel usually provides. (I have zero experience in GPU driver development.)
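As a trivial sanity check, the per-tier counts reported from the Mesa/libdrm commits do add up to 22:

```python
# Kaby Lake GPU device-ID counts per tier, as reported from the Mesa/libdrm
# commits spotted by Phoronix. Summing them reproduces the 22-device figure.
tiers = {"GT1": 5, "GT1.5": 3, "GT2": 6, "GT2F": 1, "GT3": 3, "GT4": 4}
total = sum(tiers.values())
print(total)  # prints 22
```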
Subject: Processors, Mobile | January 6, 2016 - 10:56 PM | Scott Michaud
Tagged: xiaomi, Intel, atom
So this rumor cites anonymous source(s) that leaked info to Digitimes. That said, it aligns with things that I've suspected in a few other situations. We'll discuss this throughout the article.
Xiaomi, a popular manufacturer of mobile devices, is breaking into the laptop space. One model was spotted on pre-order in China with an Intel Core i7 processor. According to the aforementioned leak, Intel has agreed to bundle an additional Intel Atom processor with every Core i7 that Xiaomi orders. Use Intel in a laptop, and they can use Intel in an x86-based tablet for no additional cost.
A single grain of salt...
Image Source: Wikipedia
While it's not an explicit practice, we've been seeing hints of similar initiatives for years now. A little over a year ago, Intel's mobile group reported revenue of ~$1 million, offset by ~$1 billion in losses. We have also seen phones like the ASUS ZenFone 2, which has amazing performance at a seemingly impossible $199 / $299 price point. I'm not going to speculate on what the actual relationships are, but it sounds more complicated than a listed price per tray.
And that's fine, of course. I know comments will claim the opposite, either claiming that x86 is unsuitable for mobile devices or alleging that Intel is doing shady things. In my view, it seems like Intel has products that they believe can change established mindsets if given a chance. Personally, I would be hesitant to get an x86-based developer phone, but that's because I would only want to purchase one, and I'd prefer to target the platform that the majority uses. It's that type of inertia that probably frustrates Intel, but they can afford to compete against it.
It does make you wonder how long Intel plans to make deals like this -- again, if they exist.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Processors | December 28, 2015 - 09:03 PM | Scott Michaud
Tagged: optical, photonics
A typical integrated circuit pushes electrical voltages across pathways, with transistors and other elements modifying them. Interpret those voltages as mathematical values and logical instructions and, congratulations, you have created a processor, memory, and so forth. You don't need to use electricity for this, though. In fact, Charles Babbage and Ada Lovelace famously attempted to perform computation with purely mechanical state.
Image Credit: University of Colorado
Chip contains optical (left) and electric (top and right) circuits.
One possible follow-up is the photonic integrated circuit, which routes light through optical waveguides rather than typical electrical traces. The prototype from the University of Colorado Boulder (and UC Berkeley) seems to use photonics just for communication, with an electrical IC handling the computation. The advantages are high bandwidth, high density, and low power.
This sort of technology has been investigated for several years. My undergraduate Physics thesis involved computing light transfer through defects in a photonic crystal, using them to create 2D waveguides. With all the talk of silicon fabrication approaching its limits (14nm features span only a couple dozen silicon atoms), this could be a new direction for innovation.
And honestly, wouldn't you want to overclock your PC to 400+ THz? Make it go plaid for ludicrous speed. (Yes, this paragraph is a joke.)