Intel Allegedly Working to Replace Sandy Bridge

Subject: Processors | January 2, 2017 - 05:33 PM
Tagged: sandy bridge, Intel

OC3D claims that Intel is working on a substantially new architecture, targeting somewhere around the 2019 or 2020 time frame. As with AMD’s Bulldozer, the several architectures that followed the initial release were all built on the same basic assumptions, with tweaks for better IPC, fewer bottlenecks, and so forth. Intel has likewise kept the same fundamentals since Sandy Bridge, although theirs aligned much better with how x86 applications were actually being developed.


According to the report, Intel’s new architecture is expected to remove some old instructions, which will make it less compatible with applications that use these commands. This is actually very similar to what AMD was attempting to do with Bulldozer... to a point. AMD projected that applications would scale well to multiple cores, and use GPUs for floating-point operations; as such, they designed cores in pairs, and decided to eliminate redundant parts, such as half of the floating-point units. Hindsight being 20/20, we now know that developers didn’t change their habits (and earlier Bulldozer parts were allegedly overzealous with cutting out elements in a few areas, too).

In Intel’s case, from what we hear at the moment, their cuts should be less broad than AMD’s. Rather than projecting a radical shift in programming, they’re just going to trim the fat from their existing instruction set, unless there are bigger changes planned over the next couple years of development. As for the unlucky applications that use these instructions, OC3D speculates that either Intel or the host operating systems will provide some emulation method, likely in software.

If the things they cut haven’t been used in several years, then you can probably get acceptable performance in the applications that require them via emulation. On the other hand, a bad decision could choke the processor in the same way that Bulldozer, especially the early variants, did for AMD. On the other-other hand, Intel has something that AMD didn’t: the market-share to push (desktop) developers in a given direction. On the fourth hand, which I’ll return to its rightful owner, I promise, we don’t know how much the “(desktop)” clause will translate to overall software in two years.

Right now, it seems like x86 is successfully holding off ARM in performance-critical, consumer applications. If that continues, then Intel might be able to push x86 software development, even if they get a little aggressive like AMD did five-plus-development-time years ago.

Source: OC3D



January 2, 2017 | 05:35 PM - Posted by Arc (not verified)

They would be a lousy company if they weren't actively trying to make better chips.

January 2, 2017 | 05:46 PM - Posted by this_is_eric

Thanks for finally getting your act together, AMD!

January 2, 2017 | 06:36 PM - Posted by Anonymous (not verified)

This new architecture will produce an ASTOUNDING!!! 7% IPC increase instead of the 2%-5% in refreshes and node shrinks.

January 3, 2017 | 12:23 AM - Posted by Mike D (not verified)

The way I see it, Intel feels they have milked us long enough. Now, with AMD looming to take their share of the market and with PC sales declining, it's time to excite it again... not too much, just enough to stir and stimulate us.

January 2, 2017 | 07:23 PM - Posted by Anonymous (not verified)

Wait.. I thought Sandy Bridge was the last CPU Intel was ever going to make....

January 2, 2017 | 07:50 PM - Posted by Anonymous (not verified)

Intel has more to worry about competing with Power9s and the OpenPower licensing that is allowing companies like Google and others to license Power9 IP and design their own custom Power9 CPU with GPU accelerator solutions. The current custom ARM designs are nowhere near as wide-order superscalar as the x86 designs and the Power8/Power9 designs. I’m sure AMD could field a very powerful custom ARM design if K12 has SMT capabilities like the x86 and Power8/Power9 CPUs. The Power9s are coming in SMT4 and SMT8 variants, while the consumer x86 CPUs support only SMT2. There are currently no custom ARM designs that make use of SMT, so that keeps the custom ARM designs behind. Maybe that custom ARM-based Fujitsu exaflop supercomputer will have more execution resources and the new ARM ISA extension called SVE (Scalable Vector Extension).

AMD is a founding member of OpenCAPI along with IBM and others, so AMD will be making future GPUs that can use OpenCAPI and interface with Power9 CPUs! So it's not only Nvidia and NVLink that can get some of that Power9 accelerator business. Apple makes the widest-order superscalar custom ARMv8-A design, but it lacks any SMT capabilities, and AMD’s K12 is nowhere to be found in any of AMD’s current announcements; maybe AMD is too busy with Zen/Ryzen and Vega at the moment, so K12 may still be alive.

January 2, 2017 | 08:26 PM - Posted by Macintux (not verified)

Is this old news or something? Sandy Bridge was replaced several generations ago by Ivy Bridge.

https://en.wikipedia.org/wiki/Sandy_Bridge

January 2, 2017 | 10:37 PM - Posted by serpico (not verified)

Either reread the first paragraph of Scott's post or the second paragraph of the linked article.

January 2, 2017 | 11:01 PM - Posted by Scott Michaud

We're talking about fundamental changes, like AMD going from Bulldozer to Zen, rather than Bulldozer to Piledriver (etc.).

January 3, 2017 | 03:19 AM - Posted by Anonymous (not verified)

It might be better to title the post "Intel replacing Core architecture", as that's the architecture Intel have based all post-P4 (which was Netburst) CPUs on.

January 3, 2017 | 03:20 AM - Posted by Anonymous (not verified)

/s/architecture/microarchitecture

January 3, 2017 | 04:09 PM - Posted by Scott Michaud

The switch from Nehalem to Sandy Bridge was quite huge. Also, Core, Core 2, and "first-generation Core iX" are considered descendants of P6, which is in the old Pentium M line.

(See last two paragraphs of the linked article, from September 2010).

This isn't to say that Nehalem, which integrated the memory controller, and the others weren't big changes. The last full redesign was Sandy Bridge, though... the oddly named "Second-Generation Core iX" series... and Ivy Bridge / Haswell / Broadwell / Skylake / Kaby Lake / Coffee Lake / Cannonlake / Icelake / Tigerlake are adjustments to it.

January 2, 2017 | 10:02 PM - Posted by gamerk2 (not verified)

This would actually be a significant change for Intel; they've never once broken HW compatibility with previous generations of chips [you can still run in 286 protected mode].

January 2, 2017 | 11:04 PM - Posted by Scott Michaud

Pretty much. I'm wondering if it's going to be like "AMD is dropping 3DNow!" or something much more drastic (like SIMD prior to AVX).

January 2, 2017 | 11:08 PM - Posted by John H (not verified)

It sounds like more of a specialized data center chip.. unless it can easily emulate the other instructions or maybe have a dedicated tiny core for them (atom) I would see this in parallel with "legacy compatible" chips..

January 3, 2017 | 01:05 AM - Posted by Anonymous (not verified)

Most likely Intel will just drop the iGPU. That would be a much bigger change than dropping a few instructions.

January 3, 2017 | 07:42 AM - Posted by jabbadap (not verified)

Hmm let see... Nope. Why on earth would they do that.

January 3, 2017 | 11:10 AM - Posted by odizzido (not verified)

Because their igpu eats half the die and for a lot of people it's just a waste.

January 3, 2017 | 11:36 AM - Posted by Mikey (not verified)

You do realize their business doesn't cater to the PC gaming crowd, right?

Almost every office employee, except for the few who need a discrete GPU for their work, uses an integrated one. The business space represents a great deal of their money. The iGPU is going nowhere.

January 3, 2017 | 01:01 PM - Posted by Anonymous (not verified)

The integrated GPU has little to do with the micro-architecture; it is more of an implementation detail. They already make the same cores without the integrated GPU for Xeon and Extreme Edition parts (the Extreme Edition parts are salvaged Xeon parts with cores and/or cache blocks disabled). Even fat AMD64 cores are tiny these days (10 to 12 square mm), so they have plenty of room for an integrated GPU and other things. The system controller hub with memory controllers and pci-e root complex takes quite a bit of die area. The IGP is not limiting CPU core performance.

January 3, 2017 | 03:46 PM - Posted by Anonymous (not verified)

Intel is still trying to shoehorn its x86 CISC ISA designs down into the lower power-usage range of RISC ISA based designs to compete with ARM! But Intel does not realize that the mobile-market OEMs will have nothing to do with Intel and its nefarious market practices; they will stick with ARM and keep full control over their vital SOC supply chains. Hell, the 28nm ARM designs were beating Intel's 14nm x86 designs in that power-usage metric long before the ARM designs went 16nm/14nm FinFET, and now they're starting on 10nm.

It’s too late, Intel; that market has passed you by. Better watch out for Zen/Ryzen and the third-party OpenPower (Google, others) Power9 licensee market, because that’s going to go after the PC and HPC/workstation/supercomputer business also.

January 3, 2017 | 08:40 AM - Posted by Master Chen (not verified)

This sounds like VERY bad news for the worldwide emulation scene, because the majority of hardcore coders and hackers out there tend to be VERY conservative when it comes down to utilizing specific sets of instructions. If Intel starts cutting some of the more often-used/popular ones (like MMX, SSE, and SSE2), this WILL break compatibility with the absolute majority of older emulation software, rendering such famous programs as NullDC and/or Snes9x completely useless simply because they won't run at all on those "new Intel processors". This can only lead to utter chaos in the emulation community and a massive migration to AMD/MCST/VIA/Raspberry Pi/Loongson/ARM/Qualcomm, if such a thing actually happens.

January 3, 2017 | 01:56 PM - Posted by serpico (not verified)

If it's a popular instruction then why would Intel get rid of it?

January 3, 2017 | 11:28 AM - Posted by Anonymous (not verified)

In a previous news article, Intel is actually looking to use AMD iGPUs in this upcoming generation.

Interesting times for the cpu world

I'm actually quite curious to how the upcoming AMD Ryzen APUs are going to work with HBM on die

January 3, 2017 | 04:04 PM - Posted by Geforcepat (not verified)

I think WCCFTECH broke the story first.
