How Intel Job Cuts and Restructuring Affect Enthusiasts

Subject: Processors | April 21, 2016 - 06:44 PM |
Tagged: restructure, Intel

Earlier this week Intel announced a major restructuring that will result in the loss of up to 12,000 jobs by the middle of 2017, an amount equal to approximately 11% of the company's workforce. I've been sitting on the news for a while, trying to decide what I could add to the hundreds of reports on it, and honestly, I haven't come to any definitive conclusion. But here goes.

It's obviously worth noting the humanitarian part of this announcement - 12,000 people will be losing their jobs. I feel for them and wish them luck finding employment quickly. It sucks to see anyone lose their job, and perhaps more so at a company that is still so profitable and innovative.

intelofficelogo.jpg

The reasons for the restructuring are obviously complex, but the major concern is the shift in focus towards IoT (Internet of Things) and cloud infrastructure as the primary growth drivers. 

The data center and Internet of Things (IoT) businesses are Intel’s primary growth engines, with memory and field programmable gate arrays (FPGAs) accelerating these opportunities – fueling a virtuous cycle of growth for the company. These growth businesses delivered $2.2 billion in revenue growth last year, and made up 40 percent of revenue and the majority of operating profit, which largely offset the decline in the PC market segment.

That last line is the one that might be the most concerning for enthusiasts and builders that read PC Perspective. The decline of the PC market has been a constant hum in the back of our minds for the better part of 10 years. Everyone from graphics card vendors to motherboard manufacturers, along with any other company whose products depend on the consumer PC to stay relevant, has been worried about what will happen as the PC continues its southward spiral.

But it's important to point out that Intel has done this before, taking the stance that the consumer PC is bad business. Remember the netbook craze and the rise of the Atom product line? When computers were "fast enough" for people to open up a browser and get to their email? At that point Intel had clearly pushed the enthusiast and high performance computing market to the back burner. The same thing happened when management pushed Intel into the mobile space, competing directly with the likes of Qualcomm in a market for which it didn't quite have the product portfolio.

Then something happened - PC gaming proved to be a growth segment after all. Intel started to realize that high end components mattered, and it made attempts to recapture the market's mind share (it never lost the market share). That is where unlocked processors in notebooks and "anniversary edition" CPUs were born, in the labs of an Intel where gamers and enthusiasts mattered. Hell, the entire creation of the Devil's Canyon platform was predicated on the idea that the enthusiast community mattered.

slides01.jpg

I thought we were moving in the right direction. But it appears we have another setback. Intel is going to downplay the value and importance of the market that literally defines and decides what every other consumer buys. Enthusiasts are the trend setters, the educators and the influencers. When families and friends and co-workers ask for suggestions for new phones, tablets and notebooks, they ask us. 

Maybe Intel is just in another cycle, another loop about the fate of the PC and what it means. Did tablets and the iPad kill off the notebook? Did mobile games on your iPhone keep users from flocking to PC games? Have the PS4 or Xbox One destroyed the market for PC-based gaming and VR? No. 

The potential worry now is that one of these times, as Intel feigns disinterest in the PC, it may stick.

Source: Intel

Sony plans PlayStation NEO with massive APU hardware upgrade

Subject: Graphics Cards, Processors | April 19, 2016 - 03:21 PM |
Tagged: sony, ps4, Playstation, neo, giant bomb, APU, amd

Based on a new report from Giant Bomb, Sony is set to release a new console this year with upgraded processing power and a focus on 4K capabilities, codenamed NEO. We have been hearing for several weeks that both Microsoft and Sony were planning partial-generation upgrades, but it appears that details of Sony's update have started leaking out, if you believe the reports.

Giant Bomb isn't known for tossing around speculation and tends to only report details it can safely confirm. Austin Walker says "multiple sources have confirmed for us details of the project, which is internally referred to as the NEO." 

ps4gpu.jpg

The current PlayStation 4 APU
Image source: iFixIt.com

There are plenty of interesting details in the story, including Sony's determination not to split the user base across multiple consoles: developers will be required to ship both a mode for the "base" PS4 and one for NEO. But most interesting to us is the possible hardware upgrade.

The NEO will feature a higher clock speed than the original PS4, an improved GPU, and higher bandwidth on the memory. The documents we've received note that the HDD in the NEO is the same as that in the original PlayStation 4, but it's not clear if that means in terms of capacity or connection speed.

...

Games running in NEO mode will be able to use the hardware upgrades (and an additional 512 MiB in the memory budget) to offer increased and more stable frame rate and higher visual fidelity, at least when those games run at 1080p on HDTVs. The NEO will also support 4K image output, but games themselves are not required to be 4K native.

Giant Bomb even has details on the architectural changes.

                       Shipping PS4                   PS4 "NEO"
  CPU                  8 Jaguar cores @ 1.6 GHz       8 Jaguar cores @ 2.1 GHz
  GPU                  AMD GCN, 18 CUs @ 800 MHz      AMD GCN+, 36 CUs @ 911 MHz
  Stream Processors    1152 SPs (~HD 7870 equiv.)     2304 SPs (~R9 390 equiv.)
  Memory               8GB GDDR5 @ 176 GB/s           8GB GDDR5 @ 218 GB/s

(We actually did a full video teardown of the PS4 on launch day!)

If the Compute Unit count in the GB report is right, then the PS4 NEO system will have 2,304 stream processors running at 911 MHz, giving it performance nearing that of a consumer Radeon R9 390 graphics card. The R9 390 has 2,560 SPs running at around 1.0 GHz, so while the NEO would be slower, it would be a substantial upgrade over the current PS4 hardware and the Xbox One. Memory bandwidth on NEO would still be much lower than a desktop add-in card (218 GB/s vs 384 GB/s on the R9 390).
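For a rough sanity check on those claims, here is some quick napkin math on peak FP32 throughput (stream processors × clock × 2 FLOPs per cycle), using the rumored NEO figures from the table above; treat it as back-of-the-envelope estimation, not a benchmark.

    # Rough peak FP32 throughput: stream processors * clock * 2 FLOPs (FMA) per cycle.
    # Clock and SP counts are the rumored/spec figures quoted above, not confirmed hardware.

    def peak_tflops(stream_processors, clock_mhz):
        return stream_processors * clock_mhz * 1e6 * 2 / 1e12

    gpus = {
        "PS4 (shipping)":    (1152, 800),
        "PS4 NEO (rumored)": (2304, 911),
        "Radeon R9 390":     (2560, 1000),
    }

    for name, (sps, mhz) in gpus.items():
        print(f"{name:20s} ~{peak_tflops(sps, mhz):.2f} TFLOPS")

    # Approximate output: 1.84, 4.20, and 5.12 TFLOPS respectively -- NEO more than
    # doubles the original PS4 but still trails a desktop R9 390.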

DSC02539.jpg

Could Sony's NEO platform rival the R9 390?

If the NEO hardware is based on the Grenada / Hawaii GPU design, there are some interesting questions to ask. With the push into 4K that we expect with the upgraded PlayStation, it would be painful if the GPU didn't natively support HDMI 2.0 (4K @ 60 Hz). With the modularity of current semi-custom APU designs, it is likely that AMD could swap out the display controller on NEO for one that supports HDMI 2.0, even though no shipping consumer graphics card in the 300-series does so.

It is also POSSIBLE that NEO is based on the upcoming AMD Polaris GPU architecture, which supports HDR and HDMI 2.0 natively. That would be a much more impressive feat for both Sony and AMD, as we have yet to see Polaris released in any consumer GPU. Couple that with the variables of 14/16nm FinFET process production and you have a complicated production pipe that would need significant monitoring. It would potentially lower cost on the build side and lower power consumption for the NEO device, but I would be surprised if Sony wanted to take a chance on the first generation of tech from AMD / Samsung / Global Foundries.

However, if you look at recent rumors swirling about the June announcement of the Radeon R9 480 using the Polaris architecture, it is said to have 2,304 stream processors, perfectly matching the NEO specs above.

polaris-5.jpg

New features of the AMD Polaris architecture due this summer

There is a lot Sony and game developers could do with roughly twice the GPU compute capability on a console like NEO. This could make the PlayStation VR a much more comparable platform to the Oculus Rift and HTC Vive, though the necessity to work with the original PS4 platform might hinder the upgrade path.

The other obvious use is to upgrade the image quality and/or rendering resolution of current games and games in development, or simply to improve the frame rate, an area where many current-generation console titles have been slipping.

In the documents we’ve received, Sony offers suggestions for reaching 4K/UltraHD resolutions for NEO mode game builds, but they're also giving developers a degree of freedom with how to approach this. 4K TV owners should expect the NEO to upscale games to fit the format, but one place Sony is unwilling to bend is on frame rate. Throughout the documents, Sony repeatedly reminds developers that the frame rate of games in NEO Mode must meet or exceed the frame rate of the game on the original PS4 system.

There is still plenty to read in the Giant Bomb report, and I suggest you head over and do so. If you thought the summer was going to be interesting solely because of new GPU releases from AMD and NVIDIA, it appears that Sony and Microsoft have their own agenda as well.

Source: Giant Bomb

AMD Pre-Announces 7th Gen A-Series SOC

Subject: Processors | April 5, 2016 - 10:30 AM |
Tagged: mobile, hp, GCN, envy, ddr4, carrizo, Bristol Ridge, APU, amd, AM4

Today AMD is “pre-announcing” their latest 7th generation APU.  Codenamed “Bristol Ridge”, this new SOC is based on the Excavator architecture featured in the previous Carrizo series of products.  AMD said very little about what is new and different in Bristol Ridge as compared to Carrizo, but they did provide a few nice hints.

br_01.png

They were able to provide a die shot of the new Bristol Ridge APU, and at first glance there appear to be some interesting differences between it and the previous Carrizo. Unfortunately, there really are no changes that we can see from this shot. Those new functional units that you are tempted to speculate about? For some reason AMD decided to widen out the shot of this die. Those extra units around the border? They are the adjacent dies on the wafer. I was bamboozled at first, but happily Marc Sauter pointed it out to me. No new functional units for you!

carrizo_die.jpg

This is the Carrizo shot. It is functionally identical to what we see with Bristol Ridge.

AMD appears to be using the same 28 nm HKMG process from GLOBALFOUNDRIES.  This is not going to give AMD much of a jump, but from what we hear in the industry, GLOBALFOUNDRIES and others have put an impressive amount of work into several generations of 28 nm products.  TSMC is on its third iteration of the node, which has improved power and clock capabilities.  GLOBALFOUNDRIES has continued to improve its particular process, and Bristol Ridge is likely to be the last APU built on that node.

br_02.png

All of the competing chips are rated at 15 watts TDP. Intel has the compute advantage, but AMD is cleaning up when it comes to graphics.

The company has also continued to improve upon their power gating and clocking technologies to keep TDPs low, yet performance high.  AMD recently released the Godavari APUs, which exhibit better clocking and power characteristics than the previous Kaveri.  Little was done to the actual design; rather, improved process tech and better clock control algorithms achieved these advances.  It appears as though AMD has continued this trend with Bristol Ridge.

We are likely not seeing per-clock increases; rather, higher and longer-sustained clock speeds provide the performance boost between Carrizo and Bristol Ridge.  In these benchmarks AMD is using 15 watt TDP products.  These are mobile chips, and any power improvements will show up as significant gains in overall performance.  Bristol Ridge is still a native quad core part with what looks to be an 8 CU GCN graphics portion.

br_03.png

Again, with all three products at a 15 watt TDP, we can see that AMD is squeezing every bit of performance it can out of the 28 nm process and their Excavator based design.

The basic core and GPU designs look relatively unchanged, but obviously a lot of tweaks were applied to deliver better performance at comparable TDPs.

AMD is announcing this along with the first product that will feature the APU: the HP Envy X360.  This convertible offers some very nice features and looks to be one of the better implementations AMD has seen using its latest APUs.  Carrizo had some wins, but taking market share back from Intel in the mobile space has been torturous at best. AMD obviously hopes that Bristol Ridge in the sub-35 watt range will continue to fight for the company in this important market.  Perhaps one of the more interesting features is the option for a PCIe SSD.  Hopefully AMD will send out a few samples so we can see what a more “premium” convertible can do with AMD silicon.

br_04.png

The HP Envy X360 convertible in all of its glory.

Bristol Ridge will be coming to the AM4 socket infrastructure in what appears to be a Computex timeframe.  These parts will of course feature higher TDPs than what we are seeing here with the 15 watt unit that was tested.  It seems at that time AMD will announce the full lineup from top to bottom and start seeding the market with AM4 boards that will eventually house the “Zen” CPUs that will show up in late 2016.

Source: AMD

Intel officially ends the era of "tick-tock" processor production

Subject: Processors | March 22, 2016 - 09:08 PM |
Tagged: Intel, tick tock, tick-tock, process technology, kaby lake

This should come as little surprise to readers who have followed news about Kaby Lake, Intel's extension of the Skylake architecture that officially broke nearly a decade of tick-tock processor design. With tick-tock, Intel would alternate in subsequent years between a new processor microarchitecture (Sandy Bridge, Ivy Bridge, etc.) and a new process technology (45nm, 32nm, 22nm, etc.). According to this story over at Fool.com, Intel is officially ending that pattern of production.

From the company's latest 10-K filing:

"We expect to lengthen the amount of time we will utilize our 14 [nanometer] and our next-generation 10 [nanometer] process technologies, further optimizing our products and process technologies while meeting the yearly market cadence for product introductions."

ticktockout.JPG

It is likely that the graphic above, which showcases the change from tick-tock to the new cadence, isn't "to scale", and we may see more than three steps in each iteration along the way. Intel still believes that it has and will continue to have the best process technology in the world, and that its processors will benefit.

Continuing further, the company indicates that "this competitive advantage will be extended in the future as the costs to build leading-edge fabrication facilities increase, and as fewer semiconductor companies will be able to leverage platform design and manufacturing."

intel-roadmap-5q-002-1920x1080.jpg

Kaby Lake details leaking out...

As Scott pointed out in our discussions about this news, it might mean consumers will see advantages in longer socket compatibility going forward, though I would still see this as a net negative for technology. As process technology improvements slow down, whether due to complexity or a lack of competition in the market, we will see less innovation in key areas of performance and power consumption.

Source: Fool.com

ARM Partners with TSMC to Produce SoCs on 7nm FinFET

Subject: Processors | March 15, 2016 - 04:52 PM |
Tagged: TSMC, SoC, servers, process technology, low power, FinFET, datacenter, cpu, arm, 7nm, 7 nm FinFET

ARM and TSMC have announced a collaboration on 7 nm FinFET process technology for future SoCs. Under the multi-year agreement between the companies, products produced on this 7 nm FinFET process are intended to expand ARM’s reach “beyond mobile and into next-generation networks and data centers”.

tsmc-headquarters.jpg

TSMC Headquarters (Image credit: AndroidHeadlines)

So when can we expect to see 7nm SoCs on the market? The report from The Inquirer offers this quote from TSMC:

“A TSMC spokesperson told the INQUIRER in a statement: ‘Our 7nm technology development progress is on schedule. TSMC's 7nm technology development leverages our 10nm development very effectively. At the same time, 7nm offers a substantial density improvement, performance improvement and power reduction from 10nm’.”

Full press release after the break.

Source: ARM

Tesla Motors Hires Peter Bannon of Apple

Subject: Graphics Cards, Processors | February 29, 2016 - 11:48 PM |
Tagged: tesla motors, tesla, SoC, Peter Bannon, Jim Keller

When we found out that Jim Keller had joined Tesla, we were a bit confused. He is highly skilled in processor design, and he moved to a company that does not design processors. Kind of weird, right? There are two possibilities that leap to mind: either he wanted to try something new in life and Elon Musk hired him for his general management skills, or Tesla wants to get more involved in the production of its SoCs, possibly even designing its own.

tesla-2016-logo2.png

Now Peter Bannon, who was a colleague of Jim Keller at Apple, has been hired by Tesla Motors. Chances are, both of them were not independently interested in an abrupt career change that led them to the same company; that seems highly unlikely, to say the least. So it appears that Tesla Motors wants experienced chip designers in house. What for? We don't know. This is a lot of talent to hire just to look over the shoulders of NVIDIA and other SoC partners, or to make sure Tesla has the upper hand in negotiations. Jim Keller is at Tesla as their “Vice-President of Autopilot Hardware Engineering.” We don't know what Peter Bannon's title will be.

And then, if Tesla Motors does get into creating its own hardware, we wonder what it will do with it. The company has a history of open development and of releasing patents (etc.) to the public. That said, SoC design is a heavily patent-encumbered field, depending on what they're specifically doing, which we have no idea about.

Source: Electrek

MWC 2016: MediaTek Announces Helio P20 True Octa-Core SoC

Subject: Processors, Mobile | February 22, 2016 - 04:11 PM |
Tagged: TSMC, SoC, octa-core, MWC 2016, MWC, mediatek, Mali-T880, LPDDR4X, Cortex-A53, big.little, arm

MediaTek might not be well-known in the United States, but the company has been working to expand from China, where it had a 40% market share as of June 2015, into the global market. While 2015 saw the introduction of the 8-core Helio P10 and the 10-core Helio X20 SoCs, the company continues to expand its lineup, today announcing the Helio P20 SoC.

Helio_P20.jpg

There are a number of differences between the recent SoCs from MediaTek, beginning with the CPU core configuration. This new Helio P20 is a “True Octa-Core” design, but rather than a big.LITTLE configuration it’s using 8 identically-clocked ARM Cortex-A53 cores at 2.3 GHz. The previous Helio P10 used a similar CPU configuration, though clocks were limited to 2.0 GHz with that SoC. Conversely, the 10-core Helio X20 uses a tri-cluster configuration, with 2x ARM Cortex-A72 cores running at 2.5 GHz, along with a typical big.LITTLE arrangement (4x Cortex-A53 cores at 2.0 GHz and 4x Cortex-A53 cores at 1.4 GHz).

Another change affecting MediaTek’s new SoC, and the industry at large, is the move to smaller process nodes. The Helio P10 was built on 28 nm HPM, and this new P20 moves to 16 nm FinFET. As with the Helio P10 and Helio X20 (a 20 nm part), this SoC is produced at TSMC, this time using their 16FF+ (FinFET Plus) technology. This should provide up to “40% higher speed and 60% power saving” compared to the company’s previous 20 nm process found in the Helio X20, though of course real-world results will have to wait until handsets are available to test.

The Helio P20 also takes advantage of LPDDR4X, and is “the world’s first SoC to support low power double data rate random access memory” according to MediaTek. The company says this new memory provides “70 percent more bandwidth than the LPDDR3 and 50 percent power savings by lowering supply voltage to 0.6v”. Graphics are powered by ARM’s high-end Mali-T880 GPU, clocked at an impressive 900 MHz. The all-important modem connectivity includes Cat. 6 LTE with 2x carrier aggregation for speeds of up to 300 Mbps down and 50 Mbps up. The Helio P20 also supports up to 4K/30 video decode with H.264/H.265 support, and the 12-bit dual camera ISP supports up to 24 MP sensors.

Specs from MediaTek:

  • Process: 16nm
  • Apps CPU: 8x Cortex-A53, up to 2.3GHz
  • Memory: Up to 2 x LPDDR4X 1600MHz (up to 6GB) + 1x LPDDR3 933MHz (up to 4GB) + eMMC 5.1
  • Camera: Up to 24MP at 24FPS w/ZSD, 12bit Dual ISP, 3A HW engine, Bayer & Mono sensor support
  • Video Decode: Up to 4Kx2K 30fps H.264/265
  • Video Encode: Up to 4Kx2K 30fps H.264
  • Graphics: Mali T-880 MP2 900MHz
  • Display: FHD 1920x1080 60fps. 2x DSI for dual display
  • Modem: LTE FDD TDD R.11 Cat.6 with 2x20 CA. C2K SRLTE. L+W DSDS support
  • Connectivity: WiFiac/abgn (with MT6630). GPS/Glonass/Beidou/BT/FM.
  • Audio: 110dB SNR & -95dB THD

It’s interesting to see SoC makers experiment with less complex CPU designs after a generation of multi-cluster (big.LITTLE) SoCs, as even the current flagship Qualcomm SoC, the Snapdragon 820, has reverted to a straight quad-core design. The P20 is expected to be in shipping devices by the second half of 2016, and we will see how this configuration performs once some devices using this new P20 SoC are in the wild.

Full press release after the break:

Source: MediaTek

Extreme Overclocking of Skylake (7.02566 GHz)

Subject: Processors | February 7, 2016 - 02:00 AM |
Tagged: Skylake, overclocking, asrock, Intel, gskill

I recently came across a post at PC Gamer that looked at the extreme overclocking leaderboard of the Skylake-based Intel Core i7-6700K. Obviously, these competitions will probably never end as long as higher numbers are possible on parts that are interesting for one reason or another. Skylake is the new chip on the liquid nitrogen block. It cannot reach frequencies as high as its predecessors, but teams still compete to get as high as possible on that specific SKU.

overclock-2016-skylake6700k7ghz.jpg

The current single-thread world record for the Intel Core i7-6700K is 7.02566 GHz, achieved with a voltage of 4.032V. For comparison, the i7-6700K typically runs at around 1.3V under load. This record was apparently set about a month ago, on January 11th.

This is obviously a huge increase, about three-fold more voltage for the extra 3 GHz. For comparison, the current world record over all known CPUs is the AMD FX-8370 with a clock of 8.72278 GHz. Many Pentium 4-era processors make up the top 15 places too, as those parts were designed for high clock rates with relatively low IPC.
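For a sense of scale, here is the napkin math behind those comparisons, using the stock i7-6700K base clock of 4.0 GHz and the roughly 1.3 V load voltage mentioned above:

    # Back-of-the-envelope comparison of the record run vs. stock i7-6700K figures
    # quoted above (4.0 GHz base clock, ~1.3 V typical load voltage).

    record_clock_ghz, record_voltage = 7.02566, 4.032
    stock_clock_ghz, stock_voltage = 4.0, 1.3

    print(f"Clock uplift:   {record_clock_ghz / stock_clock_ghz:.2f}x "
          f"(+{record_clock_ghz - stock_clock_ghz:.2f} GHz)")
    print(f"Voltage uplift: {record_voltage / stock_voltage:.2f}x")

    # Roughly 1.76x the clock (an extra ~3 GHz) for about 3.1x the reported voltage.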

The rest of the system used G.SKILL Ripjaws 4 DDR4 RAM, an ASRock Z170M OC Formula motherboard, and an Antec 1300W power supply. It used an NVIDIA GeForce GT 630 GPU, which offloaded graphics from the integrated chip but otherwise interfered as little as possible. They also used Windows XP, because why not, I guess? I assume that it does the least amount of work to boot, allowing a quicker verification, but that is only a guess.

Source: HWBot

ASRock Releases BIOS to Disable Non-K Skylake Overclocking

Subject: Processors | February 5, 2016 - 04:44 PM |
Tagged: Intel, Skylake, overclocking, cpu, Non-K, BCLK, bios, SKY OC, asrock, Z170

ASRock's latest batch of motherboard BIOS updates removes the SKY OC function, which permitted overclocking of non-K Intel processors via BCLK (base clock).

20151215-8.jpg

The news comes amid speculation that Intel had pressured motherboard vendors to remove such functionality. Intel's unlocked K parts (i5-6600K, i7-6700K) will once again be the only options for Skylake overclocking on ASRock Z170 boards (assuming prior BIOS versions are no longer available), and with no Pentium G3258 this generation, Intel no longer offers a budget-friendly option for enthusiasts looking to push their CPU past factory specs.

3386ebeb-34f8-4a83-9909-0e29985f4712.jpg

(Image credit: Hexus.net)

It sounds like now would be a good time to archive that SKY OC-enabled BIOS file if you've downloaded it - or simply refrain from this BIOS update. What remains to be seen, of course, is whether other vendors will follow suit and disable BCLK overclocking of non-K processors. This had become a popular feature on a number of Z170 motherboards on the market, but ASRock may have been in too weak a position to battle Intel on this issue.

Source: Hexus

So That's Where Jim Keller Went To... Tesla Motors...

Subject: General Tech, Processors, Mobile | January 29, 2016 - 10:28 PM |
Tagged: tesla, tesla motors, amd, Jim Keller, apple

Jim Keller, a huge name in the semiconductor industry for his work at AMD and Apple, recently left AMD before the launch of the Zen architecture. This made us nervous, because when a big name leaves a company before a product launch, it could either be that their work is complete... or they're evacuating before a stink-bomb detonates and the whole room smells like rotten eggs.

jim_keller.jpg

It turns out a third option is possible: Elon Musk offers you a job making autonomous vehicles. Jim Keller's job title at Tesla will be Vice President of Autopilot Hardware Engineering. I could see this position being enticing, to say the least, even if you are confident in your previous employer's upcoming product stack. It doesn't mean that AMD's Zen architecture will be either good or bad, but it nullifies the predictions made when Jim Keller left AMD, at least until further notice.

We don't know who approached who, or when.

Another point of note: Tesla Motors currently uses NVIDIA Tegra SoCs in their cars, and NVIDIA is (obviously) a competitor of Jim Keller's former employer, AMD. It sounds like Jim Keller is moving into a somewhat different role than he had at AMD and Apple, but it could be interesting if Tesla starts taking chip design in-house, to customize the chip to their specific needs and take away responsibilities from NVIDIA.

The first time he was at AMD, he was the lead architect of the Athlon 64 processor, and he co-authored x86-64. When he worked at Apple, he helped design the Apple A4 and A5 processors, which were the first two that Apple created in-house; the first three iPhone processors were Samsung SoCs.

Report: Intel Tigerlake Revealed; Company's Third 10nm CPU

Subject: Processors | January 24, 2016 - 05:19 PM |
Tagged: Tigerlake, rumor, report, processor, process node, Intel, Icelake, cpu, Cannonlake, 10 nm

A report from financial website The Motley Fool discusses Intel's plan to introduce three architectures at the 10 nm node, rather than the expected two. This comes after news that Kaby Lake will remain at the present 14 nm, interrupting Intel's 2-year manufacturing tech pace.

intel_10nm.jpg

(Image credit: wccftech)

"Management has told investors that they are pushing to try to get back to a two-year cadence post-10-nanometer (presumably they mean a two-year transition from 10-nanometer to 7-nanometer), however, from what I have just learned from a source familiar with Intel's plans, the company is working on three, not two, architectures for the 10-nanometer node."

Intel's first 10 nm processor architecture will be known as Cannonlake, with Icelake expected to follow about a year afterward. With Tigerlake expected to be the third architecture built on 10 nm, and not coming until "the second half of 2019", we probably won't see 7 nm from Intel until the second half of 2020 at the earliest.

It appears that the days of two-year, two product process node changes are numbered for Intel, as the report continues:

"If all goes well for the company, then 7-nanometer could be a two-product node, implying a transition to the 5-nanometer technology node by the second half of 2022. However, the source that I spoke to expressed significant doubts that Intel will be able to return to a two-years-per-technology cycle."

intel-node-density_large.png

(Image credit: The Motley Fool)

It will be interesting to see how players like TSMC, themselves "planning to start mass production of 7-nanometer in the first half of 2018", will fare moving forward as Intel's process development (apparently) slows.

TSMC Allegedly Wants 5nm by 2020

Subject: Graphics Cards, Processors | January 20, 2016 - 04:38 AM |
Tagged: TSMC

Digitimes is reporting on statements that were allegedly made by TSMC co-CEO Mark Liu. We are currently seeing 16nm parts come out of the foundry, and that node is expected to be used in the next generation of GPUs, replacing the long-running 28nm node that launched with the GeForce GTX 680. (It's still unannounced whether AMD and NVIDIA will use 14nm FinFET from Samsung or GlobalFoundries, or 16nm FinFET from TSMC.)

Update (Jan 20th, @4pm EST): Couple minor corrections. Radeon HD 7970 launched at 28nm first by a couple of months. I just remember NVIDIA getting swamped in delays because it was a new node, so that's probably why I thought of the GTX 680. Also, AMD announced during CES that they will use GlobalFoundries to fab their upcoming GPUs, which I apparently missed. We suspect that NVIDIA will use TSMC, and have assumed that for a while, but it hasn't been officially announced yet (if ever).

tsmc.jpg

According to their projections, which (again) are filtered through Digitimes, the foundry expects to have 7nm in the first half of 2018. They also expect to introduce extreme ultraviolet (EUV) lithography methods with 5nm in 2020. Given that solid silicon has a lattice spacing of ~0.54nm at room temperature, 7nm transistor features will span only about 13 atoms, and 5nm features about 9.
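If you want to check those atom counts yourself, the arithmetic is just the feature size divided by silicon's lattice constant; a minimal sketch covering TSMC's assumed roadmap nodes:

    # Approximate number of silicon lattice spacings spanned by a given feature size,
    # using the ~0.543 nm lattice constant of crystalline silicon at room temperature.

    SILICON_LATTICE_NM = 0.543

    for node_nm in (16, 10, 7, 5):
        print(f"{node_nm} nm feature ~ {node_nm / SILICON_LATTICE_NM:.0f} lattice spacings")

    # 16 nm -> ~29, 10 nm -> ~18, 7 nm -> ~13, 5 nm -> ~9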

We continue the march toward the end of silicon lithography.

Even if the statement is correct, much can happen between now and then. It wouldn't be the first time that I've seen a major foundry believe a node would be available on schedule, only to have it delayed. I wouldn't hold my breath, but I might cross my fingers if my hands were free.

At the very least, we can assume that TSMC's roadmap is 16nm, 10nm, 7nm, and then 5nm.

Source: Digitimes

Skylake and Later Will Be Withheld Windows 7 / 8.x Support

Subject: Processors | January 17, 2016 - 07:20 AM |
Tagged: Windows 8.1, Windows 7, windows 10, Skylake, microsoft, kaby lake, Intel, Bristol Ridge, amd

Microsoft has not been doing much to put out the fires in comment threads all over the internet. The latest flare-up involves hardware support with Windows 7 and 8.x. Currently unreleased architectures, such as Intel's Kaby Lake and AMD's Bristol Ridge, will only be supported on Windows 10. This is despite Windows 7 and Windows 8.x being supported until 2020 and 2023, respectively. Microsoft does not believe it needs to support new hardware on those older operating systems, though.

windows-10-bandaid.png

This brings us to Skylake. These processors are out, but Microsoft considers them “transition” parts. Microsoft provided PC World with a list of devices that will be given Windows 7 and Windows 8.x drivers, which enable support until July 17, 2017. Beyond that date, only a handful of “most critical” updates will be provided until the official end of life.

I am not sure what the cut-off date for unsupported Skylake processors is, though; that is, Skylake processors that do not line up with Microsoft's list could be deprecated at any time. This is especially a problem for the ones that are potentially already sold.

As I hinted earlier, this will probably reinforce the opinion that Microsoft is doing something malicious with Windows 10. As Peter Bright of Ars Technica reports, Windows 10 does not exactly have an equivalent in the server space yet, which makes you wonder what that support cycle will be like. If they can continue to patch Skylake-based servers in Windows Server builds that are derived from Windows 7 and Windows 8.x, like Windows Server 2012 R2, then why are they unwilling to port those changes to the base operating system? If they will not patch current versions of Windows Server, because the Windows 10-derived version still isn't out yet, then what will happen with server farms, like Amazon Web Services, when Xeon v5s are suddenly incompatible with most Windows-based OS images? While this will, no doubt, be taken way out of context, there is room for legitimate commentary about this whole situation.

Of course, supporting new hardware on older operating systems can be difficult, and not just for Microsoft at that. Peter Bright also noted that Intel has similarly spotty driver coverage, although that mostly applies to Windows Vista, which, while still in extended support for another year, doesn't have a significant base of users who are unwilling to switch. The point remains, though, that Microsoft could be doing a favor for their hardware vendor partners.

I'm not sure whether that would be less concerning, or more.

Whatever the reason, this seems like a very silly, stupid move on Microsoft's part, given the current landscape. Windows 10 can become a great operating system, but users need to decide that for themselves. When users are pushed, and an adequate reason is not provided, they will start to assume things. Chances are, it will not be in your favor. Some may put up with it, but others might continue to hold out on older platforms, maybe even including older hardware.

Other users may be able to get away with Windows 7 VMs on a Linux host.

Source: Ars Technica

Meet the new AMD Opteron A1100 Series SoC with ARM onboard

Subject: Processors | January 14, 2016 - 07:26 PM |
Tagged: opteron a1100, amd

The chip once known as Seattle has arrived from AMD as the Opteron A1100 Series, built around up to eight 64-bit ARM Cortex-A57 cores.  The chips will have up to 4 MB of shared L2 cache and 8 MB of L3 cache, with an integrated dual-channel memory controller that supports up to 128 GB of DDR3 or DDR4 memory.  For connectivity you get two 10Gb Ethernet ports, 8 lanes of PCIe 3.0 and up to 14 SATA3 devices.

a1100models.PNG

As you can see above, the TDPs range from 25W to 32W, perfect for power-conscious data centres.  The SoftIron Overdrive 3000 systems will use the new A1100 chips, and AMD is working with Silver Lining Systems to integrate SLS’ fabric technology for interconnecting systems.

a1100.PNG

TechARP has posted a number of slides from AMD's presentation, or you can head straight over to AMD to get the scoop.  You won't see these chips on the desktop, but new server chips are great news for AMD's bottom line in the coming year.  They also speak well of AMD's continued innovation; combining low-powered, low-cost 64-bit ARM cores with its interconnect technologies opens up a new market for AMD.

Full PR is available after the break.

Source: AMD

Report: AMD Carrizo FM2+ Processor Listing Appears Online

Subject: Processors | January 11, 2016 - 11:26 PM |
Tagged: rumor, report, FM2+, carrizo, Athlon X4, amd

According to a report published by CPU World, a pair of unreleased AMD Athlon X4 processors appeared in a supported CPU list on Gigabyte's website (since removed) long enough to give away some information about these new FM2+ models.

Athlon_X4_835_and_845.jpg

Image credit: CPU World

The CPUs in question are the Athlon X4 835 and Athlon X4 845, 65W quad-core parts that are both based on AMD's Excavator core, according to CPU World. The part numbers are AD835XACI43KA and AD845XACI43KA, which the CPU World report interprets:

"The 'I43' letters and digits in the part number signify Socket FM2+, 4 CPU cores, and 1 MB L2 cache per module, or 2MB in total. The last two letters 'KA' confirm that the CPUs are based on Carrizo design."

The report further states that the Athlon X4 835 will operate at 3.1 GHz, with 3.5 GHz for the X4 845. No Turbo Core frequency information is known for these parts.

Source: CPU-World

Far Cry Primal System Requirements Slightly Lower than 4?

Subject: Graphics Cards, Processors | January 9, 2016 - 12:00 PM |
Tagged: ubisoft, quad-core, pc gaming, far cry primal, dual-core

If you remember back when Far Cry 4 launched, it required a quad-core processor. It would block your attempts to launch the game unless it detected four CPU threads, either native quad-core or dual-core with two SMT threads per core. This has naturally been hacked around by the PC gaming community, but it is not supported by Ubisoft. It's also, apparently, a bad experience.

ubisoft-2015-farcryprimal.jpg

The follow-up, Far Cry Primal, will be released in late February. Oddly enough, it has similar, but maybe slightly lower, system requirements. I'll list them, and highlight the differences.

Minimum:

  • 64-bit Windows 7, 8.1, or 10 (basically unchanged from 4)
  • Intel Core i3-550 (down from i5-750)
    • or AMD Phenom II X4 955 (unchanged from 4)
  • 4GB RAM (unchanged from 4)
  • 1GB NVIDIA GTX 460 (unchanged from 4)
    • or 1GB AMD Radeon HD 5770 (down from HD 5850)
  • 20GB HDD Space (down from 30GB)

Recommended:

  • Intel Core i7-2600K (up from i5-2400S)
    • or AMD FX-8350 (unchanged from 4)
  • 8GB of RAM (unchanged from 4)
  • NVIDIA GeForce GTX 780 (up from GTX 680)
    • or AMD Radeon R9 280X (down from R9 290X)

While the CPU changes are interesting, the opposing directions of the recommended GPUs are fascinating. Either the parts are within Ubisoft's QA margin of error, or they increased the GPU load but were able to optimize for AMD better than in Far Cry 4, for a net gain in performance (which would also explain the slight bump in CPU power required to feed the extra content). Of course, either way is just a guess.

Back on the CPU topic though, I would be interested to see the performance of Pentium Anniversary Edition parts. I wonder whether they removed the two-thread lock, and, especially if hacks are still required, whether it is playable anyway.

That is, in a month and a half.

Source: Ubisoft

Intel Pushes Device IDs of Kaby Lake GPUs

Subject: Graphics Cards, Processors | January 8, 2016 - 07:38 AM |
Tagged: Intel, kaby lake, linux, mesa

Quick post about something that came to light over at Phoronix. Someone noticed that Intel published a handful of PCI device IDs for graphics processors to Mesa and libdrm. It will take a few months for graphics drivers to catch up, although this suggests that Kaby Lake will be releasing relatively soon.

intel-2015-linux-driver-mesa.png

It also gives us hints about what Kaby Lake will be. Of the published batch, there will be six tiers of performance: GT1 has five IDs, GT1.5 has three IDs, GT2 has six IDs, GT2F has one ID, GT3 has three IDs, and GT4 has four IDs. Adding them up, we see that Intel plans 22 GPU devices. The Phoronix post lists what those device IDs are, but that is probably not interesting for our readers. Whether some of those devices overlap in performance or numbering is unclear, but it would make sense given how few SKUs Intel usually provides. I have zero experience in GPU driver development.
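For the curious, the tally works out like this (tier names and ID counts as reported by Phoronix; the actual hex IDs are omitted here):

    # Count of Kaby Lake graphics device IDs per performance tier, as reported
    # from the Mesa/libdrm patches covered by Phoronix.

    kabylake_gpu_ids = {
        "GT1": 5,
        "GT1.5": 3,
        "GT2": 6,
        "GT2F": 1,
        "GT3": 3,
        "GT4": 4,
    }

    print(f"{len(kabylake_gpu_ids)} tiers, {sum(kabylake_gpu_ids.values())} device IDs total")
    # -> 6 tiers, 22 device IDs total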

Source: Phoronix

Rumor: Intel and Xiaomi "Special Deal"

Subject: Processors, Mobile | January 7, 2016 - 03:56 AM |
Tagged: xiaomi, Intel, atom

So this rumor cites anonymous source(s) that leaked info to Digitimes. That said, it aligns with things that I've suspected in a few other situations. We'll discuss this throughout the article.

Xiaomi, a popular manufacturer of mobile devices, is breaking into the laptop space. One model was spotted on pre-order in China with an Intel Core i7 processor. According to the aforementioned leak, Intel has agreed to bundle an additional Intel Atom processor with every Core i7 that Xiaomi orders. Use Intel in a laptop, and they can use Intel in an x86-based tablet for no additional cost.

Single_grain_of_table_salt_(electron_micrograph).jpg

A single grain of salt...
Image Source: Wikipedia

While it's not an explicit practice, we've been seeing hints of similar initiatives for years now. A little over a year ago, Intel's mobile group reported quarterly revenue of roughly $1 million, offset by roughly $1 billion in losses. We have also seen phones like the ASUS ZenFone 2, which has amazing performance at a seemingly impossible $199 / $299 price point. I'm not going to speculate on what the actual relationships are, but it sounds more complicated than a listed price per tray.

And that's fine, of course. I know comments will claim the opposite, either claiming that x86 is unsuitable for mobile devices or alleging that Intel is doing shady things. In my view, it seems like Intel has products that it believes can change established mindsets if given a chance. Personally, I would be hesitant to get an x86-based developer phone, but that's because I would only want to purchase one, and I'd prefer to target the platform that the majority has. It's that type of inertia that probably frustrates Intel, but they can afford to compete against it.

It does make you wonder how long Intel plans to make deals like this -- again, if they exist.


Source: Digitimes

Photonic IC Created by University of Colorado Boulder

Subject: Processors | December 29, 2015 - 02:03 AM |
Tagged: optical, photonics

A typical integrated circuit pushes electrical voltage across pathways, with transistors and such modifying it. When you interpret those voltages as mathematical values and logical instructions, then congratulations, you have created a processor, memory, and so forth. You don't need to use electricity for this, though. In fact, Charles Babbage and Ada Lovelace's historical work was an attempt to perform computation on mechanical state.

ucolorado-2015-opticalic.jpg

Image Credit: University of Colorado
Chip contains optical (left) and electric (top and right) circuits.

One possible follow-up is photonic integrated circuits. These route light through optical waveguides rather than typical electrical traces. The prototype made by the University of Colorado Boulder (and UC Berkeley) seems to use photonics just to communicate, with an electrical IC for the computation. The advantages are high bandwidth, high density, and low power.

This sort of technology has been investigated for several years. My undergraduate thesis in Physics involved computing light transfer through defects in a photonic crystal, using them to create 2D waveguides. With all the talk of silicon fabrication approaching its limits, as 14nm transistor features are made up of only around two dozen atoms, this could be a new direction for innovation.

And honestly, wouldn't you want to overclock your PC to 400+ THz? Make it go plaid for ludicrous speed. (Yes, this paragraph is a joke.)

Intel Adds New Processors to Broadwell and Skylake Lineups

Subject: Processors | December 28, 2015 - 12:00 PM |
Tagged: skylake-u, Skylake, mobile cpu, Intel, desktop cpu, core i7, core i5, core i3, Broadwell

As reported by CPU World, Intel has added a total of eight new processors to the 5th-gen “Broadwell” and 6th-gen “Skylake” CPU lineups, with new mobile and desktop models appearing in Intel’s price lists. The models include Core and Celeron parts, and range from dual core (five with Hyper-Threading) to a new quad-core i5:

CPU_World_chart.png

Chart of new Intel models from CPU-World

“Intel today added 8 new Broadwell- and Skylake-based microprocessors to the official price list. New CPUs have unusual model numbers, like i5-6402P and i5-5200DU, which indicates that they may have different feature-set than the mainstream line of desktop and mobile CPUs. Intel also introduced today Celeron 3855U and 3955U ultra-low voltage models.”

It is unclear if the desktop models (Core i3-6098P, Core i5-6402P) listed will enter the retail channel, or if they are destined for OEM applications. The report points out these models have a P suffix “that was used to signify the lack of integrated GPU in older generations of Core i3/i5 products. There is a good chance that it still means just that”.

Source: CPU-World