Earlier this week Intel announced a major restructuring that will result in the loss of 12,000 jobs over the next several weeks, an amount equal to approximately 11% of the company's workforce. I've been sitting on the news for a while, trying to decide what I could add to the hundreds of reports already out there, and honestly, I haven't come to any definitive conclusion. But here goes.
It's obviously worth noting the human side of this announcement – 12,000 people will be losing their jobs. I feel for them and wish them luck finding employment quickly. It sucks to see anyone lose their job, and maybe more so at a company that is still so profitable and innovative.
The reasons for the restructuring are obviously complex, but the major theme is a shift in focus toward IoT (Internet of Things) and cloud infrastructure as the primary growth drivers. As Intel's announcement puts it:

"The data center and Internet of Things (IoT) businesses are Intel's primary growth engines, with memory and field programmable gate arrays (FPGAs) accelerating these opportunities – fueling a virtuous cycle of growth for the company. These growth businesses delivered $2.2 billion in revenue growth last year, and made up 40 percent of revenue and the majority of operating profit, which largely offset the decline in the PC market segment."
That last line is the one that might be the most concerning for the enthusiasts and builders who read PC Perspective. The decline of the PC market has been a constant hum in the back of our minds for the better part of 10 years. Everyone from graphics card vendors to motherboard manufacturers, and anyone else whose products depend on the consumer PC to be relevant, has been worried about what will happen as the PC continues its southward spiral.
But it's important to point out that Intel has done this before, has taken the stance that the consumer PC is bad business. Remember the netbook craze and the rise of the Atom product line? When computers were "fast enough" for people to open up a browser and get to their email? At that point Intel had clearly pushed the enthusiast and high performance computing market to the back burner. The same thing happened when management pushed Intel into the mobile space, competing directly with the likes of Qualcomm in a market where it didn't quite have the product portfolio to compete.
Then something happened – PC gaming proved to be a growth segment after all. Intel started to realize that high end components mattered, and it made attempts to recapture the market's mind share (it never lost the market share). That is where unlocked processors in notebooks and "anniversary edition" CPUs were born: in the labs of Intel, where gamers and enthusiasts mattered. Hell, the entire creation of the Devil's Canyon platform was predicated on the idea that the enthusiast community mattered.
I thought we were moving in the right direction, but it appears we have another setback. Intel is going to downplay the value and importance of the market that literally defines and decides what every other consumer buys. Enthusiasts are the trend setters, the educators and the influencers. When family, friends and co-workers want suggestions for new phones, tablets and notebooks, they ask us.
Maybe Intel is just in another cycle, another loop about the fate of the PC and what it means. Did tablets and the iPad kill off the notebook? Did mobile games on your iPhone keep users from flocking to PC games? Have the PS4 or Xbox One destroyed the market for PC-based gaming and VR? No.
The potential worry now is that one of these times, as Intel feigns disinterest in the PC, it may stick.
Old Hardware Enthusiasts
Old hardware enthusiasts, whom you implicitly equate with Intel's base, aren't necessarily the trendsetters. They are the ones with the money. If you haven't been rooting old phones, playing in Linux and using Raspberry Pis – none of which requires much additional cost – you are hardly a trendsetter anymore, no?
Wintel is dead. Well, not dead, but it hasn't been interesting for a long, long time.
Which brings us to E3, Sony, the PS4.5 or whatever, and the rumors of no PS5. That could be your x86 lifeforce moving forward. I'm sure MS will play that game as well as they have played the users.
The PC still matters, just not to the masses anymore
The Personal Computer is dead
The Personal Computer is dead because of communist technology such as cloud computing, where the user is enslaved in subscriptions (according to Steve Wozniak) and is losing control of their data.
Moreover, processor design lacks disruptive technology (like the integrated FPU once was), while lithography has reached the limits of silicon.
Actually, the integrated GPU in an APU is a step forward in processor design, but I think it's too tied to video games to be useful to programmers. The real fusion would be a vector processor unit for general purpose programming. One day maybe someone will come out with an IEEE754-like standard for vector computing. From my POV the SIMD instruction set is a failure, since no programming language defines primitive types that use it easily with a significant performance reward.
Intel has no disruptive
Intel has no disruptive processor designs, only evolutions. There are, however, disruptive CPU designs, such as the Mill architecture. Don't expect big corporations to disrupt themselves.
APUs can be trivially programmed with C++ AMP, OpenCL and HSA solutions. It depends on what you mean by "useful to programmers." They essentially target the same kind of code as SIMD, so if your program can use more of that, then they are useful to programmers.
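To make that concrete, here is a minimal C++ AMP sketch – illustration only, assuming MSVC (C++ AMP is a Microsoft extension) and a DirectX 11 capable GPU or APU:

```cpp
#include <amp.h>     // C++ AMP (Microsoft Visual C++)
#include <vector>
using namespace concurrency;

// Scales every element of v on the GPU/APU; the runtime handles data movement.
void scale(std::vector<float>& v, float factor) {
    array_view<float, 1> av(static_cast<int>(v.size()), v);
    parallel_for_each(av.extent, [=](index<1> i) restrict(amp) {
        av[i] *= factor;   // runs as a data-parallel kernel
    });
    av.synchronize();      // copy results back to the host vector
}
```

OpenCL and HSA end up expressing the same data-parallel loop, just with more boilerplate around it.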
Why would we need an IEEE754 standard for Vector Computing? And what is Vector Computing? SIMD?
We are already very good at SIMD cases, with Intel’s AVX. It is trivially easy to go down that route, compared to The Mill, which parallelizes 31 instructions per clock cycle.
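For reference, this is what the AVX route looks like with intrinsics – a minimal sketch, assuming an x86 compiler with AVX enabled (e.g. g++ -mavx) and, for brevity, an array length that is a multiple of 8:

```cpp
#include <immintrin.h>   // AVX intrinsics

// Adds two float arrays eight lanes at a time using 256-bit AVX registers.
// A real version would also handle the tail when n is not a multiple of 8.
void add_avx(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);               // load 8 floats
        __m256 vb = _mm256_loadu_ps(b + i);               // load 8 more
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb)); // add, store
    }
}
```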
Now, our biggest hurdle to making faster computers isn't really the CPU; it is the memory interface. Our CPUs are constantly starved. It is useful to be able to run the CPU a little faster than memory, but not with the large gap we have today. We need to solve the latency and bandwidth issues of the memory system next.
And with as many programming languages as there are, there are some that natively support SIMD. Javascript has just added support, and JAI will have native support for common SIMD layouts. (JAI is still in development, so it is not the best example.) There are more than 2,000 programming languages, though; what you really mean is that none of the popular ones do, right?
Most of these depend on compilers to optimize their code to do this.
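As a minimal sketch of that, here is the kind of plain scalar loop that GCC, Clang and MSVC will typically auto-vectorize on their own at higher optimization levels (e.g. g++ -O3 -march=native):

```cpp
// Independent iterations over contiguous memory: a textbook candidate
// for compiler auto-vectorization. No intrinsics, no assembly.
void scale(float* data, float factor, int n) {
    for (int i = 0; i < n; ++i) {
        data[i] *= factor;   // the compiler emits SSE/AVX instructions here
    }
}
```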
As I said programmers can’t
As I said, programmers can't easily use the SIMD instruction set without assembly language, which is painful for the new generation of programmers. What's more, you mostly waste processor time moving data around in memory before the SIMD instructions ever execute.
Why would we need an IEEE754-like standard for vector computing? For the same reason we need IEEE754 for floating-point computing: performance at lower labour cost (the programmer's time). Make it easy for programmers to use, and performance will follow. Make it hard to use, and optimization will seldom follow.
SIMD support in Javascript is a waste of time, since you would do better to use a compiled language to get performance (C, C++, Java, etc.). Scripting languages aren't made for efficient programming but for prototyping, or for lazy programmers (or, worse, incompetent ones). The Unix philosophy prefers short binaries to fat scripts, and I think that's the right thing to do.
Programmers can use SIMD
Programmers can use SIMD without assembly. They let the compiler do it for them.
In JAI, you create a data structure just as in C and C++, and then use one keyword to tell the compiler the structure's layout in memory, which makes efficient use of SIMD even easier.
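The layout point is easiest to see in C++ terms. A hypothetical sketch (not JAI syntax): a structure-of-arrays (SoA) layout keeps each field contiguous in memory, which is exactly what wide SIMD loads want, unlike the more natural array-of-structures (AoS):

```cpp
#include <vector>

// Array-of-structures (AoS): x, y, z are interleaved in memory, so loading
// four consecutive x values into a SIMD register requires a gather.
struct ParticleAoS { float x, y, z; };

// Structure-of-arrays (SoA): each field is contiguous, so four x values sit
// side by side and can be fetched with a single wide load.
struct ParticlesSoA {
    std::vector<float> x, y, z;
};
```

JAI's keyword, as described above, automates this transformation: you write AoS-style code and get SoA-style memory.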
You didn’t answer what Vector Compute is.
IEEE754 is a standard for floating point number handling. It defines a standard for highly technical purposes to solve problems in specific problem domains. It isn’t useful for all domains, and isn’t directly available on all CPUs.
Are you equating Vector Compute with SIMD? Vector mathematics don’t need a standard like IEEE754. IEEE754 is focused on a specific computer issue, representing floating point numbers in a binary computer.
Vector math is a language issue, not a data format issue. These two things exist at different levels. I am not sure why you think a standard like that would save programmers time: many languages have built-in vector types, and for those that don't, there are libraries that implement such features.
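For illustration, the library route can be as small as this – a hypothetical three-component vector type in C++, not any particular library's API (GLM, Eigen and friends provide polished versions of the same idea):

```cpp
// A minimal 3-component vector with overloaded operators, the sort of
// type math libraries ship ready-made.
struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};

// Usage: midpoint of two points.
// Vec3 mid = (a + b) * 0.5f;
```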
A compiled language is not an option in every use case, such as on the web, where the same code must run on x86, ARM, MIPS, PowerPC and others. Scripting languages are made for the benefit of programmers with specific needs. Need to run a piece of code in a browser? JS. Need to allow third parties to create UI in your game? Lua. Need access to a REPL and a large base of scientific libraries? Python. Or Haskell. Or R.
There are very valid reasons to choose each of these languages, and laziness or incompetence isn’t the language’s fault.
The Unix philosophy prefers programs that do one task and do it well, which doesn't say anything about big or small binaries or scripts. The Unix world standardized on the sh shell precisely because it can be scripted. (Bash is the most common sh-compatible shell, and most scripts are written in Bash for that reason.) Many of the earliest popular commands in the Unix/POSIX world were themselves shell scripts.
Indeed, programmers can
Indeed, programmers can automagically assume that compiler auto-vectorization is efficient, but in reality it's not!
SIMD is a failure since, even with an ubercool library to hide the assembly instructions, you have to move floats in the right order from the stack into an SSE register before doing the actual math operation, with little benefit. With a vector format standardized the way IEEE754 standardizes floating-point numbers, you would manipulate a vector object directly.
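For concreteness, this is the choreography being criticized, sketched with SSE intrinsics (illustration only; assumes x86 and four-float SSE registers):

```cpp
#include <xmmintrin.h>   // SSE intrinsics

// Data must be staged into a register, operated on, and written back out
// before the result is usable anywhere else.
void add4(const float* a, const float* b, float* out) {
    __m128 va = _mm_loadu_ps(a);              // move 4 floats into a register
    __m128 vb = _mm_loadu_ps(b);              // move 4 more
    _mm_storeu_ps(out, _mm_add_ps(va, vb));   // one add, then move back out
}
```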
What don't you understand about vector computing? Vectors are in wide use in multimedia applications (image processing, 3D modeling, etc.), which demand most of the computing power available in hardware. You seem to be thinking, like others before you, that a dedicated vector unit is as specialized as an FPU or GPU – but then why did Intel integrate its 80387 chip into its mainstream processors?
A compiled language is always an option, since the user's machine needs a processor to run the web browser anyway. The concept of web applications built on a scripting language like Javascript is a waste of the user's computing power for the profit of the webmaster, who doesn't want to invest time and hardware to run binary CGI on the web server. Moreover, most web scripters never master real programming concepts as long as it is easier to copy and paste buggy code found on the web. As you said, the need for scripting languages comes from programmers, and many of them maintain their laziness with scripting languages. 😛
Intel is shedding jobs in
Intel is shedding jobs ahead of the recent announcements from Google and others about using licensed POWER8/POWER9 server SKUs in their server farms. Intel is about to be hit with more serious competition in the server/workstation/HPC market, with POWER9s paired with Nvidia's GPU accelerators. Added to that, Intel will be facing AMD's new interposer-based server/HPC/workstation APU variants as AMD tries to re-enter the server market with both x86 and custom (K12) ARM server parts starting in 2017 and beyond. AMD has cut its workforce to the bone and is operating on a low-overhead business model already geared for growth, so AMD will be able to forgo the higher margins that a top-heavy Intel needs just to keep operating while producing the high rate of return its shareholders expect. Intel has even had to hide its mobile division's losses inside a larger accounting structure within another of its operating units.
The competition across all of Intel's product offerings and markets will get even fiercer once the POWER8/POWER9 third-party licensed IP reaches critical mass and economies of scale. The licensed-IP business model has been embraced to an extent by IBM via the OpenPOWER third-party licensees, who build their own POWER8/POWER9 systems for sale on the open market; and, as in the licensed ARM market, margins for server SKUs will be much lower owing to the more decentralized and competitive nature of licensed-IP markets, ARM- and POWER-based alike.
The one good thing to come from this is that server SKUs from all concerned will have to arrive at lower, more competitive pricing, so enthusiast SKUs will have to come down in price lest enthusiasts switch to the server variants as they become more affordable. The other good news with the OpenPOWER licensed IP is that Nvidia, which owns no x86 license, could begin to brand some OpenPOWER-based gaming SKUs of its own and no longer sit behind AMD's x86 license advantage, should it choose the OpenPOWER licensing option. Nvidia is already heavily integrating its GPU accelerators with POWER8 and POWER9 based server/HPC systems.
AMD, with its newer APU-on-an-interposer technology, will be in a position to offer both custom ARM-based server APUs on an interposer and x86-based HPC/workstation and exascale APUs on an interposer. And AMD's use of interposer technology to build APUs with very wide, very high effective-bandwidth CPU-to-GPU connection fabrics on the interposer package will let it offer the highest CPU-to-single-GPU connection speeds, thanks to the interposer silicon substrate's ability to be etched with tens of thousands of traces from a CPU (fabricated on its own die) to a GPU (fabricated on its own die), or to any other processor die (FPGA/DSP/other). These server variants will certainly be adapted into consumer gaming variants; it is simply a matter of time before AMD does so, after using the HPC/workstation/server markets and government exascale grants to fund the R&D and generate the revenues that will let it bring high-end gaming APUs on an interposer to the gaming market.
Things may not look so good for Intel's high-margin cash-cow server business, but that's only a problem for Intel's stockholders. As for gaming or any other market: the more competition the better!
This is very true. This has
This is very true. This has everything to do with where Intel makes its money: the datacenter and supercomputer business and the race to exascale. They have competition now, and that is a good thing. The people who don't get it likely have no idea what EMIB, Knights Hill or Altera are, either.
I think a lot of people saw
I think a lot of people saw this coming when Intel announced their new tick, tock, tock strategy a few months back.
Affordable process node
Affordable process node shrinks in X and Y are coming to an end as Moore's Law (an economic observation as much as a technical one) runs out, so it's time to go into Z and start stacking things. Sure, things can be made a little smaller in X and Y, but that will cost too much and eat all the profits, so Intel started the Ultrabook initiative, gimping down laptop SOC performance to milk more profit. Look everybody, let's tout gimped-down Ultrabook SOCs at ultra-high pricing: dual core i7 U/M series variants yielding more SOCs per wafer, sold at the same high prices as the quad core i7s of the earlier generation of laptop SKUs, as if the entire PC/laptop market were made up of Apple slack-jaws! Never mind the GPUs on these SOCs, where Intel's not-so-good graphics and graphics driver support cost far more, dollar for dollar, than AMD's much better graphics, because Intel had the laptop OEM market by the 8-ball.
So most non-Apple laptop consumers never fell for that thin-and-light gimping at high profit margins, and the overall laptop market is shrinking, with very little left of the good old laptop-as-desktop-replacement offerings of years past. There are mostly crappy Ultrabook(TM) form factor selections that no one but a fool would want, what with all the thermal throttling and such, when they already have older laptop SKUs with much better thermals and better overall performance than any Ultrabook/thin-and-light junk! Even AMD has fallen victim, as the entire laptop case supply chain's economy of scale has switched over to making these cutout-bin, thermally restricted Ultrabook/thin-and-light parts for the whole laptop market, forcing non-Apple laptop fans into the same low-powered, overpriced, one-size-fits-all slack-jawed Ultrabook/thin-and-light abominations via Intel's decades of unhealthy monopolistic control over the PC/laptop SOC/CPU market.
Bad karma is coming around to bite Intel in its A$$, and the mobile market OEMs will never let Intel put that ring through their noses the way the PC/laptop market did, with Intel siphoning off the PC/laptop OEMs' profits for decades!
I carried a “Dell brick”
I carried a “Dell brick” laptop around for a long time. I would not want to go back to that. It is terrible for people who actually travel. If you want desktop level gaming power, it just doesn’t come in a small and light package. The devices using desktop level components are generally close to 10 pounds. Thin and light laptops are the most common because that is what people want. There are also a lot of games with modest graphics that can be played on them. High performance is mostly only required for a specific segment of the gaming market, not even the entire gaming market, so “DTR” laptops have become a very niche product. They are still available, so what exactly are you complaining about?
I have an Ivy Bridge Core i7
I have an Ivy Bridge Core i7 quad core HP ProBook regular form factor laptop and it's plenty light enough for me to carry! It even comes with a discrete mobile GPU, so it can handle more powerful workloads while the integrated GPU handles lighter ones with the discrete GPU power-gated down. And the laptop cost much less than the current gimped-down-beyond-all-reason Ultrabook/thin-and-light ripoff crap on the market today. I got the laptop at a special price because it was the previous year's model, but it sure outperforms any current Intel Ultrabook SKU!
Regular form factor laptops for the WIN, Ultrabooks for the slack-jaws so easily separated from their money!
At last Intel will make
At last Intel will make processors for adults doing real work instead of toys for rich kids…
At last? Haven't been paying
At last? Haven't been paying attention for the last few decades?
The market for computing
The market for computing devices is all crazy. There are so many options out there. The fact that you can do email and web surfing on a device you can hold in your hand is amazing, and for some that is all they want. But on the other hand, many understand the need for powerful desktop-type hardware. I just built two new Intel "Skylake" systems for a couple of business clients, because they want them and understand that these types of computers are much more capable in the long term than, say, a laptop. So the tower PC as we have known it for decades isn't dead.
It isn’t dead, but it may
It isn’t dead, but it may become a saturated market with lots of competition and very low margins. Intel doesn’t seem to stick around in low margin markets.
I have seen a lot of
I have seen a lot of enthusiasts saying that the lack of performance increases in the CPU market is because of a lack of competition. That isn't the case, though; it is because of actual physical limitations. See the ExtremeTech article for some explanation:
http://www.extremetech.com/extreme/223022-the-myths-of-moores-law
There have been improvements, but nothing like in the past. I was still using a Core 2 Duo from 2006 up until recently, and it still handled everything I was using it for just fine. It wasn't quite as bad with GPUs, but we still saw significantly slower performance scaling compared to previous nodes. The low-power focus is not only due to the focus on mobile; it is also due to these processors being power limited. A Skylake core is down to around 10 to 12 mm², which includes the L2 cache and an L3 slice. That is absolutely tiny, only about 3.5 mm on a side. Getting the heat out of such a small area is difficult, and Intel isn't going to sell a CPU that requires a water cooler at stock speeds. This is why they sell K versions: if you want to buy much more expensive overclocking gear, they will support that, since the CPU is capable of greater speed, just not within the stock thermal budget.
This move by Intel seems like an admission that CMOS scaling really is at an end, with no replacement in sight. That obviously doesn't look good for the PC market, which will be facing saturation. I know a lot of people still using relatively old hardware. I see similar things happening in the smartphone market: they are running out of things to add to get people to upgrade, and new product launches are relatively boring.
GPUs may scale for a little while going forward. We finally have the 14 nm node, but I suspect that 10 nm will either be slow in coming or will arrive soon and not offer much scaling. This makes AMD's decisions seem very smart. They focused on system architecture with HBM, silicon interposers, and such. These changes in system architecture may be the main source of performance improvements going forward. I think we will see interposers with multiple small GPU dies rather than one large die. We may also see distributed memory of some type.
Intel has been trying to break into the mobile market for a while, and they have also, essentially, been pushing into the graphics market with IGPs. The mobile effort has been a disaster, really. The GPU/IGP picture is less clear. When we start switching to APUs, I don't know whether Intel will be able to compete on the graphics front; the CPU component will probably be mostly irrelevant going forward. With the lack of process tech scaling, all of the other companies will quickly catch up with Intel, which means a larger number of companies will be able to make performance-competitive CPUs.
It may be quite a while before a large number of companies can offer a competitive GPU, though. It sounds like Intel may be giving up on this. They would need to pour a huge amount of money into GPU development, and they may also need to pour a huge amount into system architecture development; if that takes a few years to catch up, then it is not going to work. AMD has been working on HBM and silicon interposers for years. Intel has HMC, but it is not suited to consumer-level graphics, and I haven't seen much else from Intel that can compete with HBM. HBM is a JEDEC standard, so they could use it, but they would still need a powerful GPU to take advantage of it. The GPU market is relatively low margin compared to the high-end CPU market; compare the price of a high-end Xeon processor with the price of a high-end GPU. Management may not want to go into the GPU market, so they may pull back into the server market. The HPC market, though, is going to be dominated by GPU-style compute. Intel may have ended up with a kind of dead-end GPU design due to decisions made years ago. That would be kind of like AMD's Bulldozer CPU line.
It’s not dead as such just a
It's not dead as such; a shift is just likely coming. But don't be surprised if, in the future, CPUs become even more irrelevant. GPUs will continue to push forward and add more functions onto the board itself, in turn relying less and less on the CPU to do anything. We might see GPUs adding their own CPU onto the PCB, and we may see a completely new approach to what we are now used to: motherboard, CPU, GPU, RAM. Some of those may end up being combined; hell, we may even see SSD-like storage built in as well.
PC is not dead, it wasn’t for
The PC is not dead; it hasn't been for the past 20 years, despite many theories.

Intel shouldn't be surprised by the slide in PC sales. It's their own fault.

Instead of pushing boundaries and creating something new and exciting every 2-3 years, they continue this ludicrous tick-tock cycle (and have now extended it, slowing progress even further), clipping coupons and filling corporate coffers while offering no progress whatsoever.

Give people something really good and they will buy it by the ton; offer a negligible difference and nobody will care.

Just look at the three steps P4 (S423/478) – C2D (S775) – HEDT (S1366). Each of these provided a gargantuan improvement in performance and workflow. Now look at everything after X58. There is minimal difference between Nehalem and Haswell-E. Granted, in heavily threaded tasks H-E will run circles around Nehalem, but most people don't do that, and for gaming it only matters when you run a heavily CPU-bound game. My indestructible X58 is still fully functional, and there is little performance difference between it and a 5930K. Yes, video rendering shows H-E's claws, but for everyday work the difference is minimal.

Sad for the folks who got the kick. That's corporation life: they don't care about people or progress, all that matters is $.
They say (power/leakage)
They say (power/leakage) scaling really ended around 65 nm or 45 nm, which would explain the wall around X58. And even most of X58's jump came from the on-die memory controller, which the Core 2 Duo still lacked, not necessarily from the process or the processor otherwise. That was a one-time gimme.
https://en.wikipedia.org/wiki/Dennard_scaling#Breakdown_of_Dennard_scaling_around_2006
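As a back-of-envelope sketch of why that breakdown matters (standard CMOS scaling arithmetic, nothing specific to X58):

```latex
% Dynamic power of CMOS logic:
\[ P \approx C V^{2} f \]
% Classic Dennard scaling: shrink dimensions (hence capacitance C) and
% voltage V by 1/k while raising frequency f by k:
\[ P' \approx \frac{C}{k}\Bigl(\frac{V}{k}\Bigr)^{2}(k f) = \frac{P}{k^{2}} \]
% Die area also shrinks by 1/k^2, so power density stays constant.
% Once V (and leakage) stopped scaling, each shrink raised power density
% instead, which is the wall described above.
```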
The reason ARM hasn't appeared to hit the same wall at 65 nm is that they still had a lot more 'low-hanging fruit' on the table, and they've been steadily spending their power envelope to achieve those gains too (Intel hit its power envelope long ago: 130 W desktop, etc.).
GPUs are in a similar boat: no real incentive to be super power-efficient per transistor until the last few years, much later than Intel's first serious attempt with Banias in March of 2003.
A materials or chemistry breakthrough may provide the key to true ‘next generation’ performance for a future CPU…
“Intel Lays Off 12,000 After
“Intel Lays Off 12,000 After Seeking Visas to Import 14,523 Foreign Professionals Since 2010”
http://www.breitbart.com/big-government/2016/04/21/intel-lays-off-12k-looking-import-11600-foreign-workers-since-2010/
It looks like US universities
It looks like US universities suck and students overpaid for their CS degrees…
US workers want too much
US workers want too much money, and companies have been bringing in H-1B visa workers for many years. See what happens when you do not have an economic depression every once in a while: depressions get rid of a lot of the paper wealth that makes living too expensive in first-world countries! So the real bad guys are the ones who stopped that depression from happening, along with the needed flushing of all that on-paper wealth down the drain, the wealth that causes the overpricing of housing and other markets in first-world countries!