Intel's 8 Cores meal comes with a Coffee Refresh

Subject: General Tech | June 15, 2018 - 02:38 PM |
Tagged: Intel, rumour, coffee lake refresh

Kudos to Intel for choosing to name their Coffee Lake Refresh exactly that, instead of adding an 'S' to the end of the name. The refresh is rumoured to include an 8-core mainstream model, which would somewhat narrow AMD's current lead. The rumours The Inquirer heard also suggest a 22-core high-end model is a possibility, an interesting count if nothing else. This would come at a cost, however: a run of Coffee Lake Refresh suggests that Cannon Lake may need a little work before it can be fired off.

"The schedule states that Intel will launch the 8-core chip as an extension of the existing Coffee Lake family of processors in a few months' time, but it will be named the Coffee Lake Refresh, not the Coffee Lake S as previously speculated."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer

June 15, 2018 | 05:09 PM - Posted by elites2012

I really love when Intel brings out new stuff. It shows how they can price gouge and people will still buy it. The i7-7700K was a 4-core with a high-end price. You get less product for more money.

June 15, 2018 | 07:05 PM - Posted by Hood

People who think AMD has Intel running scared should try to explain why AMD is dropping their prices across the board. Nobody will believe that they don't like money, so I guess it'll be the usual - "they love all their customers so much, they're doing it to show they care about us". Let me take a stab at it - AMD is dropping all their CPU prices because Ryzen/TR is not selling well, because high core counts are just not relevant to the home desktop PC market, especially gaming (the reason Intel avoided this route for so long).
I'm not saying there's anything wrong with Ryzen, just that AMD pushed into a market that barely exists, hoping to create demand after the fact (always a bad business model). It hasn't really worked out for them like they hoped, and they sacrificed momentum in the GPU market in order to concentrate resources on Ryzen. Recently AMD's CPU market share has dropped back to around 20%, after a surge to ~23% in Q2 2017. So why should Intel be worried?

June 15, 2018 | 11:16 PM - Posted by thecoreiryzen780XEunlockedextremeaniversaryeditionftw (not verified)

For a long time, Intel has had a significant IPC advantage. Even prior to the release of Sandy Bridge, AMD couldn't match Intel's best offerings. I think Intel has surreptitiously leveraged that advantage, and that in turn has kind of shaped the market we see today.

The release of Zen has made that advantage almost negligible. Now, I think what we're seeing is AMD attempting to beat Intel at its own game, leveraging one of its own advantages: scalability.

Will Intel allow the amount of time required for the market to adapt? I wouldn't bet against them. However, another question to ask is: can they even do anything about it? I hope so, but that 5GHz tech demo and this 5GHz 8086K chip don't inspire much confidence.

And I feel any momentum AMD had in the GPU market was lost after the release of the 7970 back in Dec of 2011. They've been outmuscled ever since. The console contracts are nice.

June 16, 2018 | 01:31 AM - Posted by James

That is complete BS. Hardware always has to lead software. Companies aren't going to put in the work to actually do serious multithreaded code without an installed base. They have to target the lowest common denominator. That was really 2 to 4 cores up until recently. That means there isn't very much multi-threaded software in the consumer space except stuff that's trivially parallelizable, like video encoding. Games can take advantage of 2 to 4 cores with little to no multithreading, since the OS, video driver, and other stuff can use those cores. Intel has probably delayed multi-threaded games by about 4 to 5 years by keeping the mainstream on 2- and 4-core chips.

We could have had mainstream 8-core chips at 22 nm easily. We in fact had 8-core AMD chips at 28 nm; they just didn't perform well in single-threaded applications. Intel sticking with 2 and 4 cores had nothing to do with it being better for gaming. They did it because they could keep selling tiny 4-core chips for $300 to $400. That also allowed them to keep ridiculously high prices on Xeon parts for the professional market, where more cores have always been useful.

At this point, I would doubt whether a 4-core part will even be able to keep up with next-generation GPUs. Single-thread performance is massively overrated. I use up to 32-core machines at work, where a single core is only 3.125 percent of the available CPU power. For apps that have multi-threading, there is no comparison; a few hundred more MHz is nothing. We are getting multi-threaded game engine development now, partly due to the Xbox One with its 8 low-power cores. DX12 and Vulkan will help a lot in helping CPUs keep up with next-gen GPUs. Single-thread performance is a losing game. It has run out of steam; the single-thread performance increases have been minimal for many years now. How is a CPU with 4 cores, which haven't really scaled much for close to 10 years, going to keep up with GPUs that have continued scaling? I wouldn't recommend anyone get a 4-core machine at this point unless it is for email and web surfing.
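The per-core arithmetic in this comment (one core of a 32-core box is 1/32 = 3.125% of the total), and the way a trivially parallel job like video encoding spreads across cores, can be sketched in Python. `encode_chunk` is a hypothetical stand-in for real per-segment work:

```python
from concurrent.futures import ProcessPoolExecutor

def per_core_share(n_cores: int) -> float:
    """Percentage of total CPU power contributed by a single core."""
    return 100.0 / n_cores

def encode_chunk(chunk: list) -> int:
    """Hypothetical stand-in for encoding one independent video segment."""
    return sum(chunk)  # placeholder work

if __name__ == "__main__":
    print(per_core_share(32))  # -> 3.125
    # Independent chunks spread cleanly across cores: no shared state,
    # so this is the "trivially parallelizable" case described above.
    chunks = [list(range(1000)) for _ in range(8)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(encode_chunk, chunks))
    print(results == [499500] * 8)  # -> True
```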

June 16, 2018 | 05:17 AM - Posted by Anonymouse (not verified)

"That is complete BS. Hardware always has to lead software. Companies aren't going to put in the work to actually do serious multithreaded code without an installed base. "

The problem is simply that most workloads can't be usefully threaded.

We've had multi-core CPUs for well over a decade. That install base has been the norm for all that time, over multiple versions of Windows, multiple versions of Office, multiple versions of Adobe CS, multiple versions of Unreal and Unity, and so on. Throwing out "but now we have /even more/ cores, everything is totally different!" is just pure nonsense.

And don't give me "but we've hit an IPC wall now, people will be FORCED to scale to more cores!" either. We hit the 5nm gate length limit many years (and process nodes) ago too.
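The claim that most workloads can't be usefully threaded is essentially Amdahl's law: the serial fraction caps the speedup no matter how many cores you add. A minimal sketch:

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Amdahl's law: overall speedup when only parallel_fraction
    of the work can be spread across n_cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A workload that is only 50% parallelizable never reaches 2x:
print(round(amdahl_speedup(0.50, 8), 2))     # -> 1.78
print(round(amdahl_speedup(0.50, 1000), 2))  # -> 2.0
# A 95% parallel workload, by contrast, still rewards more cores:
print(round(amdahl_speedup(0.95, 8), 2))     # -> 5.93
```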

June 16, 2018 | 09:06 AM - Posted by msroadkill612

Well, it's odd then that games, for example, have improved performance considerably since they adjusted to the new post-Ryzen 4+ core norm for the mainstream.

Intel shills are shrill about IPC because it's a lone corner case where they do have a trifling edge. They are not above calling a barely measurable difference "better".

Far more generally important is AMD's radically better threaded performance/$.

June 16, 2018 | 10:50 AM - Posted by Isaac Johnson

And AMD shills constantly shrill about multithreading because...blah blah blah...

You're all equally annoying to the rest of us who just want a good CPU and don't care what color box it comes in.

June 16, 2018 | 11:31 AM - Posted by HowNowGoldenServerCowWithFallingMarginsForChipzilla (not verified)

Most people want a CPU at an affordable price with some reasonable price/performance metric. Then you have the ePEEN needs of some gamers who want sycophant bragging rights more than any rational gameplay metrics.

Some folks want their PCs for more than just limited single-threaded gaming workloads, and a lot of the newer DX12/Vulkan gaming titles are making good use of all those cores/threads, with the OS bloat running on any otherwise unneeded cores. So yes, cores/threads matter for gaming too, when modern titles make use of the latest graphics APIs and there are plenty of extra cores/threads available to handle the OS/system bloat without stealing as much from the games.

Most rational folks care about price and price/performance, and in most gaming the GPU is more of a bottleneck than the CPU, at least for PC gamers playing at 1440p and above!

June 16, 2018 | 11:19 AM - Posted by HowNowGoldenServerCowWithFallingMarginsForChipzilla (not verified)

"The problem is simply that most workloads can't be usefully threaded."

This is BS, as there are plenty of workloads that can make use of all the cores/threads they can get with no problems. Intel sacrificed security in the name of IPC, and look at all the mitigations that are slowing things down now that Intel has had to try and fix things.

Intel's process node engineers are not at fault for the lack of system security at the hardware level; that's on the CPU architecture engineering that placed IPC above security in the branch units, caches, and other units that were tuned for IPC and not security.

Intel's 10nm is an interesting process, but those problems are the fault of Intel trying to get too much density too soon, and that's going to have to be fixed before Intel's mass production at 10nm becomes economically viable.

Intel's management has a history of messing things up once Intel gets a large lead in process node technology, by not doing enough to fix any architectural deficiencies. Now Intel's process node lead is hardly a lead at all, with the third-party fab industry catching up and going smaller.

Intel is not going to catch up quickly at 10nm, and that server market share is going to be a problem for Intel, with AMD, IBM/OpenPower, and the ARM server competition getting more share in the same and different server workloads. Intel can do little with its current high overheads to compete on a price/performance metric against the competition, as AMD has narrowed the IPC gap to close to the margin of error. IBM/Nvidia via OpenPower is getting the flagship supercomputer design wins, and even AMD will be bidding for some of the next supercomputer systems. The Japanese and some EU supercomputers are going with custom ARM designs, and others are looking at custom ARM-ISA designs as well.

Intel cannot afford to see its gross margins fall below 55%, while AMD actually shows a profit at the low gross margin rate of around 36%; that's how streamlined and low-overhead AMD's business operation has become after so many years of running on fumes. Now AMD can only grow, while Intel has to start making painful cuts in order to become more price/performance competitive in the most competitive market Intel has ever had to experience! And 2019 will be even worse for Intel's high-margin server golden cow business.

June 16, 2018 | 01:13 PM - Posted by FallingRetailPricesIsBestThisSummerIntoFall (not verified)

So the bargains on first-generation Ryzen are getting better!

The price/performance metrics need to be re-run at the review websites that do those sorts of things, because the first generation of Ryzen is now being priced to clear inventory. Folks that do not need the latest and greatest are in for some nice deals.

"[Sale] (CPU)Massive Ryzen 7 1st Gen Price Drops! 1800X - $240, 1700X - $220, 1700 - $200 on Newegg and Amazon newegg.com

submitted 8 hours ago by T1beriu" [6/16/2018]

https://www.reddit.com/r/Amd/comments/8ri3ip/massive_ryzen_7_1st_gen_pri...

June 18, 2018 | 08:21 AM - Posted by Johan Steyn (not verified)

You are so clueless about how business works. AMD could have double the performance and would still need to be cheaper. Changing market sentiment takes time. They need market share, and pricing is the way to get it.

So go feed your Intel monster; that is what fanboys do.

June 18, 2018 | 10:34 AM - Posted by WhyMe (not verified)

Exactly ^^this^^. It's about gaining market share and rebuilding the image of a company that a lot of non-tech-literate people probably didn't know was still around; market momentum counts for much more than makes logical sense.

June 19, 2018 | 04:52 PM - Posted by elites2012

They actually do have Intel a bit scared. I got this info directly from several employees at Intel. Their sales have dropped a lot, and Intel is scurrying to find counters to AMD's new line-ups. If they were not scared, then why did they release an X line and drop it, then rush Coffee Lake? The specs for Coffee Lake now are not what Intel wanted. Since AMD has put a thorn in their side, they need to regain ground. Also look at the failure rate on the 10nm AND the Coffee Lake.

June 16, 2018 | 12:57 PM - Posted by YearsOfNefariousBusinessPracticesToppedOfWithGPP (not verified)

Folks, go and watch the latest video at AdoredTV and get the goods on the green-eyed goblin! It's a tour de force on Big Green's nefarious business history.

That said, where is the Zbox with AMD's Raven Ridge inside? It was announced 6 months ago but is currently MIA!

Monopolies are limiting consumer choice and holding back technology advancements!

June 18, 2018 | 04:54 AM - Posted by Othertomperson (not verified)

Wow, he's released another hit piece on a competitor to AMD? That hack never does that... I'm not sure if he is an undisclosed paid shill for AMD, or if he really is just stupid enough to dedicate his entire online presence to selling products for a company that doesn't care about him. Either way it's pathetic.

June 18, 2018 | 08:24 AM - Posted by Johan Steyn (not verified)

Talking about a paid shill. You clearly have never watched one of AdoredTV's videos, otherwise you would have known that he bashed AMD extremely hard for Vega.

Please stop doing what you are doing, it is embarrassing for you.

June 18, 2018 | 09:37 AM - Posted by ProMarketsMoneyTalksLouderThanGamersNickelsAndDimes (not verified)

From a gaming perspective, AMD only designed Vega 64 to compete with the GTX 1080 (GP104-based, 64 ROPs max), and that single Vega 10 base die tapeout was all that AMD could afford at the time. It serves as the base die for the professional compute/AI MI25/Instinct and Radeon Pro WX 9100 SKUs, and the same tapeout was also used for the consumer/gaming Vega 64/56 variants.

Looking at the standard GPU development timeline shows that the single Vega 10 tapeout had its design frozen long before Nvidia decided to take its professional GP102 base die, with its 96 available ROPs, and tape out the GTX 1080 Ti, which makes use of 88 of GP102's 96 available ROPs.

To this day the average gamer cannot tell the difference between a GPU micro-arch and a base die tapeout that makes use of shader cores based on that micro-arch (Vega, for example). Vega the micro-arch is very successful for AMD: the integrated Vega graphics in the Raven Ridge SKUs is very efficient, in spite of what folks think.

That one Vega 10 base die tapeout is AMD's only fault, in that Vega 10 is rather shader-heavy and ROP-sparse, with the Vega 10 base die at the time only able to offer 64 ROPs max, just like Nvidia's GP104 base die. So Nvidia did not beat AMD with its Pascal GPU micro-arch; Nvidia beat AMD with the available base die tapeouts. Nvidia had five of them (GP100, GP102, GP104, GP106, GP108), several with more available ROPs with which to throw out higher FPS at the higher pixel fill rates that more ROPs provide, versus AMD's single Vega 10 base die tapeout (64 ROPs max), which was all AMD had the funds to develop at that point in time.

AMD's Vega 56 has the exact same number of shader cores and TMUs as the GTX 1080 Ti, but the Vega 56 only has 64 ROPs max available from that binned Vega 10 base die tapeout. If AMD could have allowed for up to 96 available ROPs on Vega 10's tapeout, then its gaming variants could have competed with any GP102-based Nvidia variant, but AMD's management was looking for compute/shader cores more than ROPs/pixel fill rates in AMD's first Vega micro-arch based Vega 10 base die tapeout. And look at how much in sales that compute-heavy Vega 10 base die earned for AMD, in spite of its lack of available ROPs to compete with the GP102-based GTX 1080 Ti.
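For a rough sense of what those ROP counts mean: peak pixel fill rate is ROPs × clock. The boost clocks below are approximate reference figures assumed for illustration, not numbers from the comment:

```python
def peak_fill_rate_gpix(rops: int, clock_ghz: float) -> float:
    """Peak pixel fill rate in Gpixels/s: ROP count times clock speed."""
    return rops * clock_ghz

# Vega 56: 64 ROPs at roughly 1.47 GHz boost (assumed clock)
# GTX 1080 Ti: 88 ROPs at roughly 1.58 GHz boost (assumed clock)
print(round(peak_fill_rate_gpix(64, 1.47), 1))  # ~94 Gpix/s
print(round(peak_fill_rate_gpix(88, 1.58), 1))  # ~139 Gpix/s
```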

Gamers measure a GPU's success on FPS/pixel fill rates from ROPs and not compute, but in spite of the gamers being disappointed, the miners were willing to pay even more for the Vega 10 variants than for the Nvidia GP102-based GTX 1080 Ti. Vega 56/64 compete very well against the GP104-based Nvidia variants, as Vega was designed to do at that time. AMD cannot be blamed for having the funds for only one desktop GPU base die tapeout; AMD was betting its existence on its Zen CPU micro-arch, and that's where the money went.

Vega 10 was and is a very successful tapeout for AMD in the compute/AI markets and the gaming/mining markets, and will always be successful from the financial standpoint of AMD's investors. Vega the GPU micro-arch will continue to be used in newer tapeouts in Raven Ridge SKUs and most likely the next generation of gaming consoles as well.

And Vega 20, a new compute/AI-oriented base die tapeout, is incoming, fabbed on a 7nm process node, with the Vega GPU micro-arch tweaked with some AI-oriented instruction extensions and more DP FP units per shader core to offer a 1:2 DP-to-SP FP compute ratio for more DP FP compute. So Vega 20 is also a more compute/AI-oriented design (where the REAL money is) for AMD to complement AMD's Epyc CPUs in the HPC compute/AI markets.

Vega is still making AMD millions if not billions of dollars in revenue in the long run, along with Epyc's multi-billions, as AMD takes more server/cloud services market share from Intel.

Gamers are just mad because they do not have the money to compete with the miners and the professional markets for GPUs, but that's Lisa Su's plan for returning AMD to long-term profitability, and the professional CPU/GPU compute/AI markets are where the big revenue potential is and always will be, compared to consumer/gaming, for AMD, Nvidia, Intel, and anyone else in the processor market.

AMD is even getting dual Vega 10 die SKUs out for the compute/AI markets and virtual GPU user markets, where one GPU can be logically subdivided into many logical GPUs for cloud GPU processing among many different VMs/users. Professional visualization markets are growing too, in addition to the compute/AI markets, and the revenues are much higher there than in any consumer-only market. Just ask Lisa Su, or Nvidia's JHH; ditto for Intel's CEO.

June 17, 2018 | 12:50 AM - Posted by Dark_wizzie

Wow, discussion here is a special brand of cancer.

June 18, 2018 | 05:25 AM - Posted by Spunjji

Straw men and sock puppets abound in these dark woods

June 17, 2018 | 06:53 AM - Posted by HeyDingleDonglesSchlongleHere (not verified)

https://fuse.wikichip.org/news/1285/intel-launches-cannon-lake/

Cannon Lake is already out. Intel and AMD, as well as the rest of the industry, are facing an increasing cost per transistor now instead of a decreasing one, so that's part of the reason they need to keep nodes around for a long time.

A lot of people also don't seem to realize that the decrease in node size doesn't do that much for performance, IPC or otherwise. The biggest issue with modern CPUs is that the rest of the system can't keep up with the cores.

CPUs for normal desktops and even most servers still use DDR DIMMs, although Fujitsu has been using HMC on its PrimeHPC FX100 since 2015 to great effect.

Desktop processors don't have huge caches either, even though the low clock speed Broadwell desktop CPUs showed that a 3GHz CPU with eDRAM L4$ can perform as well as a 4.2GHz CPU without it. IBM has hundreds of MB of cache per CPU SCM in their Z14 processors, and even Intel offers LGA3647 CPUs with low core count but huge caches, and has offered Xeons like that for a while.

Once CPUs stop being a die shrink of the previous version, add more memory bandwidth for a better Byte/FLOP ratio and add more on die or off die L3 and L4 caches, you'll see a huge performance increase, but that costs money.
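The Byte/FLOP balance mentioned above can be made concrete as a bytes-per-FLOP ratio. The figures below are illustrative assumptions (dual-channel DDR4-2666, an 8-core CPU at 3 GHz with two AVX2 FMA units per core), not measurements:

```python
def bytes_per_flop(mem_bw_gb_s: float, peak_gflop_s: float) -> float:
    """Machine balance: memory bandwidth available per unit of peak compute."""
    return mem_bw_gb_s / peak_gflop_s

# Dual-channel DDR4-2666: 2 channels x 8 bytes x 2.666 GT/s
mem_bw = 2 * 8 * 2.666   # ~42.7 GB/s
# 8 cores x 3 GHz x 32 SP FLOP/cycle (2 AVX2 FMA units, 8 lanes, 2 ops/FMA)
compute = 8 * 3.0 * 32   # 768 GFLOP/s
print(round(bytes_per_flop(mem_bw, compute), 3))  # -> 0.056
```

At roughly 0.056 bytes of bandwidth per FLOP, the cores starve on any workload that streams data rather than reusing it from cache, which is exactly why bigger caches and faster memory matter more than another die shrink.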

People don't want to pay for more performance usually, and when they do, they have more money than sense and buy more cores instead of better cores. Well, better x86 cores don't really exist right now.

There needs to be a push toward more cache and memory bandwidth before people will see the huge gains they want in applications without rewriting them to scale across a ridiculous number of cores. Also, concurrency in threading needs to happen; otherwise having a bunch of threads is meaningless.

June 18, 2018 | 05:42 AM - Posted by Martin (not verified)

It is just a fight about timing. Intel is betting on multithreaded software not going too mainstream until they can catch up on core counts (i.e. before the current bunch of CPUs become irrelevant), and AMD is betting otherwise.

People buying CPUs to be future-proof beyond a couple of years tend to be a small minority. These same people are very vocal, though, which is working very well for AMD.

June 18, 2018 | 08:30 AM - Posted by Johan Steyn (not verified)

"suggests that Cannon Lake may need a little work before it can be fired off."

A little work? Have you guys been on an island somewhere? Intel themselves made it clear that they are looking at the second half of next year, and you call it "a little work." Come on guys, if you keep this up, I might believe you are trying to make Intel look good. Oh sorry, you already have done that....
