Subject: Processors | March 20, 2018 - 04:33 PM | Ryan Shrout
Tagged: ryzenfall, masterkey, fallout, cts labs, chimera, amd
AMD CTO Mark Papermaster published a blog post today that both acknowledges the security vulnerabilities first described in a CTS Labs report last week and lays the foundation for the mitigations to come. Though the company had already acknowledged the report, and at least one independent security firm had validated the claims, we had yet to hear from AMD officially on the potential impact and what fixes might be possible for these concerns.
In the write-up, Papermaster is careful to call out the short window AMD was given with this information, quoting “less than 24 hours” from the time it was notified to the time the story was public on news outlets and blogs across the world. For readers who may not follow the security landscape closely, it is important to note that these issues have no relation to the Spectre and Meltdown flaws affecting the industry, and that what CTS found has nothing to do with the Zen architecture itself. Instead, the problems revolve around the embedded secure processor; while that is an important distinction moving forward, from a customer's practical point of view it is one and the same.
AMD states that it has “rapidly completed its assessment and is in the process of developing and staging the deployment of mitigations.” Rapidly is an understatement: going from blindsided to an organized response is a delicate process, and AMD has proven its sincerity with the priority it placed on this.
Papermaster goes on to mention that all of these exploits require administrative access to the machine being infected, a key differentiator from the Spectre/Meltdown vulnerabilities. The post points out that “any attacker gaining unauthorized administrative access would have a wide range of attacks at their disposal well beyond the exploits identified in this research.” I think AMD does an excellent job of threading the needle in this post, balancing the seriousness of these vulnerabilities against the overzealous hype created at their initial release and the accompanying financial bullshit that followed.
AMD provides an easy-to-understand table breaking down the vulnerabilities, the potential impact of each security risk, and what the company sees as its mitigation path. Both sets that affect the secure processor in the Ryzen and EPYC designs are addressable with a firmware update for the secure unit itself, distributed through a standard BIOS update. For the Promontory chipset issue, AMD is combining a BIOS update with additional work alongside ASMedia to enhance security.
That is the end of the update from AMD for now. In my view, the company is doing a satisfactory job addressing the problems on what must be an insanely accelerated timetable. I do wish AMD were willing to offer more specific timelines for the distribution of those security patches, and for how long we should expect to wait to see them arrive as BIOS updates for consumer and enterprise customers. For now, we'll monitor the situation and look for further input from AMD, CTS, or third-party security firms to see if the risks laid out ever materialize.
For what could have been a disastrous week for AMD, the company has pivoted to a controlled, well-executed plan. Despite the hype and hysteria that may have started with stock-shorting and buzzwords, the outlook for the AMD processor family looks stable.
Subject: Processors | March 15, 2018 - 10:29 AM | Ryan Shrout
Tagged: spectre, meltdown, Intel, cascade lake, cannon lake
In a continuing follow-up to the spectacle surrounding the Meltdown and Spectre security vulnerabilities disclosed in January, Intel announced that it has provided patches and updates addressing 100% of the products it has launched in the last five years. The company also revealed its plan for updated chip designs that will address both the security and performance concerns surrounding the vulnerabilities.
Intel hopes that by releasing new chips to address the security and performance questions quickly it will cement its position as the leader in the enterprise compute space. Customers like Amazon, Microsoft, and Google that run the world’s largest data centers are looking for improved products to make up for the performance loss and assurances moving forward that a similar situation won’t impact their bottom line.
For current products, the patches provide mitigations for the security flaws in the form of operating system updates (for Windows and Linux) and what are called microcode updates: small-scale firmware that adjusts how a processor handles instructions. Distributed by Intel OEMs (system vendors and component providers) as well as Microsoft, the patches have seemingly negated the risks to consumer and enterprise customer data, but with a questionable impact on performance.
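On Linux, one way to verify that a microcode update actually took effect is to compare the revision the kernel reports in /proc/cpuinfo before and after patching. The sketch below is a minimal, illustrative parser; the sample text and revision value are made up, not taken from any specific system.

```python
# Sketch: extract the microcode revision Linux reports per core in
# /proc/cpuinfo, to confirm a vendor/OS microcode update actually applied.

def microcode_revisions(cpuinfo_text):
    """Return the set of microcode revisions listed in /proc/cpuinfo text."""
    revisions = set()
    for line in cpuinfo_text.splitlines():
        # Lines look like: "microcode\t: 0x84"
        if line.startswith("microcode"):
            revisions.add(line.split(":", 1)[1].strip())
    return revisions

sample = """processor\t: 0
microcode\t: 0x84
processor\t: 1
microcode\t: 0x84
"""

print(microcode_revisions(sample))  # {'0x84'}

# On a live Linux system you would read the real file instead:
# with open("/proc/cpuinfo") as f:
#     print(microcode_revisions(f.read()))
```

If every core reports the same, newer revision after a BIOS or OS update, the microcode patch is in place.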
The mitigations cause the processors to operate differently than originally designed and will cause performance slowdowns on some workloads. These performance degradations are the source of the handful of class-action lawsuits hanging over Intel’s head and are a potential sore spot for its relationship with partners. Details on the performance gaps from the security mitigations have been sparse from Intel, with only small updates posted on corporate blogs. And because the problem has been so widespread, covering the entire Intel product line of the last 10 years, researchers are struggling to keep up.
The new chips that Intel is promising will address both security and performance considerations in silicon rather than software, and will be available in 2018. For the data center this is the Cascade Lake server processor, and for the consumer and business markets this is known as Cannon Lake. Both will include what Intel is calling “virtual fences” between user and operating system privilege levels and will create a significant additional obstacle for potential vulnerabilities.
The chips will also lay the groundwork for future security improvements, providing a method to more easily update processor security through patching.
By moving the security mitigations from software (both operating system and firmware) into silicon, Intel is reducing the performance impact that the Spectre and Meltdown mitigations cause on select computing tasks. Assurances that future generations of parts won't suffer a performance hit are good news for Intel and its customer base, but I don't think currently afflicted customers will be satisfied with the assertion that they need to buy updated Intel chips to avoid the penalty. It will be interesting to see how, if at all, the legal disputes are affected.
The speed at which Intel is bringing updated chips to market is an impressive engineering feat and indicates a top-level directive to get this fixed as quickly as possible. In the span of just 12 months (from Intel's apparent notification of the vulnerabilities to the expected release of this new hardware) the company will have integrated fairly significant architectural changes. While this may have been a costly move for the company, it is a drop in the bucket compared to the potential cost of lowered customer trust or partner migration to competing AMD processors.
For its part, AMD has had its own security issues pop up this week from a research firm called CTS Labs. While there are extenuating circumstances that cloud the release of the information, AMD does now have a template for how to quickly and effectively address a hardware-level security problem, if it exists.
The full content of Intel's posted story on the subject is included below:
Hardware-based Protection Coming to Data Center and PC Products Later this Year
By Brian Krzanich
In addressing the vulnerabilities reported by Google Project Zero earlier this year, Intel and the technology industry have faced a significant challenge. Thousands of people across the industry have worked tirelessly to make sure we delivered on our collective priority: protecting customers and their data. I am humbled and thankful for the commitment and effort shown by so many people around the globe. And, I am reassured that when the need is great, companies – and even competitors – will work together to address that need.
But there is still work to do. The security landscape is constantly evolving and we know that there will always be new threats. This was the impetus for the Security-First Pledge I penned in January. Intel has a long history of focusing on security, and now, more than ever, we are committed to the principles I outlined in that pledge: customer-first urgency, transparent and timely communications, and ongoing security assurance.
Today, I want to provide several updates that show continued progress to fulfill that pledge. First, we have now released microcode updates for 100 percent of Intel products launched in the past five years that require protection against the side-channel method vulnerabilities discovered by Google. As part of this, I want to recognize and express my appreciation to all of the industry partners who worked closely with us to develop and test these updates, and make sure they were ready for production.
With these updates now available, I encourage everyone to make sure they are always keeping their systems up-to-date. It’s one of the easiest ways to stay protected. I also want to take the opportunity to share more details of what we are doing at the hardware level to protect against these vulnerabilities in the future. This was something I committed to during our most recent earnings call.
While Variant 1 will continue to be addressed via software mitigations, we are making changes to our hardware design to further address the other two. We have redesigned parts of the processor to introduce new levels of protection through partitioning that will protect against both Variants 2 and 3. Think of this partitioning as additional “protective walls” between applications and user privilege levels to create an obstacle for bad actors.
These changes will begin with our next-generation Intel® Xeon® Scalable processors (code-named Cascade Lake) as well as 8th Generation Intel® Core™ processors expected to ship in the second half of 2018. As we bring these new products to market, ensuring that they deliver the performance improvements people expect from us is critical. Our goal is to offer not only the best performance, but also the best secure performance.
But again, our work is not done. This is not a singular event; it is a long-term commitment. One that we take very seriously. Customer-first urgency, transparent and timely communications, and ongoing security assurance. This is our pledge and it’s what you can count on from me, and from all of Intel.
Subject: Cases and Cooling, Processors | March 9, 2018 - 02:45 PM | Jeremy Hellstrom
Tagged: amd, Threadripper, tim, ryzen
If you are looking for advice on how to install and cool a Threadripper, [H]ard|OCP has quickly become the site to reference. They have benchmarked the majority of waterblocks compatible with AMD's big chip and published videos on how to install it on your motherboard. Today the chip is out again, this time getting a manually applied TIM facial. Check out Kyle's tips on getting ready to coat your chip and the best way to spread the TIM to ensure even cooling.
"AMD's Threadripper has shown to be a very different CPU in all sorts of ways and this includes how you install the Thermal Interface Material as well should you be pushing your Threadripper's clocks beyond factory defaults. We show you what techniques we have found to give us the best temperatures when overclocking. "
Here are some more Processor articles from around the web:
- How To Install the AMD Threadripper CPU @ [H]ard|OCP
- AMD Ryzen 3 2200G & Ryzen 5 2400G APU Review @ Neoseeker
- The AMD Ryzen 3 2200G With Radeon Vega 8 @ TechARP
- AMD Ryzen 3 2200G + Ryzen 5 2400G Linux CPU Performance, 21-Way Intel/AMD Comparison @ Phoronix
- The AMD Ryzen 5 2400G With Radeon RX Vega 11 @ TechARP
- AMD Ryzen 5 2400G Linux Gaming Benchmarks @ Phoronix
Subject: Processors | February 21, 2018 - 11:22 AM | Ryan Shrout
Tagged: amd, ryzen, EPYC, embedded, ryzen v1000, epyc 3000
Continuing its effort to bring modern processor and graphics designs to as many of its targeted market segments as possible, AMD today announced two new families that address the embedded processor space. The company has already seen double-digit year-over-year growth in embedded revenue, but the release of the Epyc Embedded 3000 and Ryzen Embedded V1000 families creates significant additional opportunity.
Embedded markets are unique from traditional consumer and enterprise channels as they address areas from military and aerospace applications to networking hardware and storage devices to retail compute and even casino and arcade gaming. These markets tend to be consistent and stable without the frequent or dramatic swings in architectural preference or market share that we often witness in consumer PCs. As AMD continues to grow and look for stable sources of adjacent income, embedded processors are a critical avenue and one that I believe AMD has distinct advantages in.
Research firm IDC estimates the market AMD can address with this pair of chip families at $14-15B annually. The largest portion of that ($11-12B) covers the storage and networking infrastructure systems the Epyc 3000 line will target. The remainder, including IoT gateways, medical systems, and casino gaming hardware, is the purview of the Ryzen V1000.
Competitors in this space include Intel (with its Xeon D series and Core family of chips) and many Arm-based designs that focus on low-power integration. Intel has the most to lose from AMD's embedded expansion, since the shared x86 architecture and compatibility mean customers can move between the platforms more easily. AMD is positioning both parts directly against Intel with claimed advantages in value and performance, hoping to move embedded customers to a combined AMD solution.
The Ryzen V1000 family combines the company’s recent processor and graphics architectures on a single chip, similar in function to the consumer Ryzen design that was released for notebook and desktop PCs. For the embedded customers and devices being targeted, this marks a completely new class of product with two key benefits over competing solutions. First, it allows for smaller and cooler system designs (critical for the cramped working environments of the embedded space) while increasing maximum performance.
Second, the V1000 allows integrators to move from a combination of an Intel processor and a separate, discrete graphics chip to a single-chip design. This both raises AMD's ASP (average selling price), increasing revenue and potential margin, and lowers the total price customers pay for system components.
While AMD struggles to promote the value of higher-performance graphics on its new processors to consumer and business buyers, where it holds a significant advantage over Intel, in embedded markets that additional graphics performance is well understood. Casino gaming often drives multiple high-resolution displays from a single device, with demand for high-quality 3D rendering that the V1000 can now provide in a single chip. The same is true of medical imaging hardware, including ultrasound machines for women's healthcare and cardiovascular diagnostics.
The Epyc Embedded 3000 family does not include integrated graphics on-chip and instead offers higher core performance and performance per dollar compared to competing Intel solutions. AMD believes that the Epyc 3000 will double the total addressable market for the company when it comes to networking and storage infrastructure.
AMD has previously disclosed its partnership with Cisco, which included AMD-built processor options for some families of switches and other networking gear. As demand for edge computing grows (systems that sit near the consumer or enterprise edge of a network to handle the computational needs of high-speed networks), AMD is offering a compelling alternative to the Intel Xeon family of processors.
Both the Epyc 3000 and Ryzen V1000 chips represent the first time AMD has targeted embedded customers with specific features and capabilities at the hardware level. During the design phase of its Zen CPU and Vega graphics architectures, business unit leaders included capabilities like integrated multi-port 10-gigabit networking, support for four 4K display outputs, ECC memory (error correction for mission-critical applications), and embedded-specific interfaces for external connectivity.
While these were not needed for the consumer segments of the market, and weren’t exposed in those hardware launches, they provide crucial benefits for AMD customers when selecting a chip for embedded markets.
Subject: Processors | February 19, 2018 - 08:33 PM | Scott Michaud
Tagged: amd, Zen, Zen 2
WCCFTech found some rumors (scroll down near the bottom of the linked article) about AMD’s upcoming “Rome” generation of EPYC server processors. The main point is that users will be able to buy up to 64 cores (128 threads) on a single packaged processor. This increase in core count will likely be due to the process node shrink from 14nm down to GlobalFoundries’ 7nm. This is not the same as the upcoming second-generation Zen processors, which are built on 12nm and expected to ship in a few months.
Rome is probably not coming until 2019.
But when it does… up to 128 threads. Also, if I’m understanding WCCFTech’s post correctly, AMD will produce two different dies for this product line: one with 12 cores per die (x4 for 48 cores per package) and one with 16 cores per die (x4 for 64 cores per package). This is interesting because it suggests AMD expects to sell enough volume to warrant multiple chip designs, rather than making a single flagship die and filling in lower SKUs through bin sorting, disabling the cores that require abnormally high voltage at a given clock and selling the result with a lower core count. (That will still happen too, as usual, but from two intended designs instead of just the flagship.)
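The rumored die math above is easy to sanity-check. This small sketch (assuming four dies per package and two SMT threads per core, as the rumor describes) computes the per-package totals:

```python
# Back-of-the-envelope check of the rumored Rome configurations: four dies
# per package, 12 or 16 cores per die, and SMT giving two threads per core.

DIES_PER_PACKAGE = 4
THREADS_PER_CORE = 2  # SMT

def package_totals(cores_per_die):
    """Return (cores, threads) for a package built from the given die."""
    cores = DIES_PER_PACKAGE * cores_per_die
    return cores, cores * THREADS_PER_CORE

for cores_per_die in (12, 16):
    cores, threads = package_totals(cores_per_die)
    print(f"{cores_per_die}-core die -> {cores} cores / {threads} threads per package")
# 12-core die -> 48 cores / 96 threads per package
# 16-core die -> 64 cores / 128 threads per package
```

The 16-core die reproduces the headline 64-core/128-thread figure; the 12-core die fills out the stack below it.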
If it works out as AMD plans, this could be an opportunity to capture prime market share from Intel and its Xeon processors. The second die might let AMD into second-tier servers with an even more cost-efficient part, because a 12-core die will bin better than a 16-core one and, as mentioned, yields more chips per wafer anyway.
Again, this is a common practice from a technical standpoint; the interesting part is that it could work out well for AMD from a strategic perspective. The timing and market might be right for EPYC in various classes of high-end servers.
Subject: Processors | February 16, 2018 - 08:52 AM | Sebastian Peak
Tagged: tim, thermal paste, Ryzen 5 2400G, ryzen, overclocking, der8aur, delidding, APU, amd
Overclocker der8auer has posted a video demonstrating the delidding process of the AMD Ryzen 5 2400G, and his findings on its effect on temperatures and overclocking headroom.
The delidded Ryzen 5 2400G (image credit der8auer via YouTube)
The full video is embedded below:
The results are interesting, but disappointing from an overclocking standpoint, as he was only able to increase his highest frequency by 25 MHz. Thermals were far more impressive, as the liquid metal used in place of the factory TIM did lower temps considerably.
Here are his temperature results for both the stock and overclocked R5 2400G:
The process was actually quite straightforward, and used an existing Intel delidding tool (the Delid Die Mate 2) along with a small piece of acrylic to spread the force against the PCB.
Delidding the Ryzen 5 2400G (image credit der8auer via YouTube)
The Ryzen 5 2400G is using thermal paste and is not soldered, which enables this process to be reasonably safe - or as safe as delidding a CPU and voiding your warranty ever is. Is it worth it for lower temps and slight overclocking gains? That's up to the user, but integration of an APU like this invites small form-factors that could benefit from the lower temps, especially with low-profile air coolers.
Subject: Processors | February 13, 2018 - 03:10 PM | Jeremy Hellstrom
Tagged: 2200G, 2400G, amd, raven ridge, ryzen, Zen
Ryan covered the launch of AMD's new Ryzen 5 2400G and Ryzen 3 2200G, which you should have already checked out. The current options on the market offer more setup variations and tests than there is time in the day, which is why you should check out the links below for a full view of how these new APUs perform. For instance, The Tech Report tested with DDR4-3200 CL14 RAM, which AMD's architecture can take advantage of. As far as productivity and CPU-bound tasks are concerned, Intel's i5-8400 does come out on top; however, it is a different story for the Vega APU. The 11 CUs of the 2400G perform at the same level as, or slightly better than, a GT 1030, which could make it very attractive for a gamer on a budget.
"AMD's Ryzen 5 2400G and Ryzen 3 2200G bring Raven Ridge's marriage of Radeon Vega graphics processors and Zen CPU cores to the desktop. Join us as we see what a wealth of new technology in one chip means for the state of gaming and productivity performance from the same socket."
Here are some more Processor articles from around the web:
- AMD Ryzen R3 2200G & R5 2400G Raven Ridge APU @ Modders-Inc
- AMD Ryzen 3 2200G With Radeon Vega 8 @ TechARP
- AMD Ryzen 3 2200G 3.5 GHz with Vega 8 Graphics @ TechPowerUp
- AMD Ryzen 5 2400G & Ryzen 3 2200G @ Techspot
- AMD Ryzen 5 2400G & Ryzen 3 2200G Raven Ridge @ Kitguru
- AMD Ryzen 3 2200G and Ryzen 5 2400G @ Guru of 3D
- AMD Ryzen 5 2400G 3.6 GHz with Vega 11 Graphics @ TechPowerUp
Subject: Processors | February 7, 2018 - 09:01 AM | Tim Verry
Tagged: Xeon D, xeon, servers, networking, micro server, Intel, edge computing, augmented reality, ai
Intel announced a major refresh of its Xeon D System on a Chip processors aimed at high density servers that bring the power of the datacenter as close to end user devices and sensors as possible to reduce TCO and application latency. The new Xeon D 2100-series SoCs are built on Intel’s 14nm process technology and feature the company’s new mesh architecture (gone are the days of the ring bus). According to Intel the new chips are squarely aimed at “edge computing” and offer up 2.9-times the network performance, 2.8-times the storage performance, and 1.6-times the compute performance of the previous generation Xeon D-1500 series.
Intel has managed to pack into the SoC up to 18 Skylake-based processing cores; QuickAssist Technology co-processing (for things like hardware-accelerated encryption/decryption); four DDR4 memory channels addressing up to 512 GB of DDR4-2666 ECC RDIMMs; four integrated Intel 10 Gigabit Ethernet controllers; 32 lanes of PCI-E 3.0; and 20 lanes of flexible high-speed I/O configurable as up to 14 SATA 3.0 ports, four USB 3.0 ports, or 20 additional PCI-E lanes. Of course, the SoCs also support Intel’s Management Engine, hardware virtualization, HyperThreading, Turbo Boost 2.0, and AVX-512 instructions with one FMA (fused multiply-add) unit.
Suffice it to say, there is a lot going on with these new chips, which represent a big step up in capabilities (and TDPs), further bridging the gap between the Xeon E3 v5 and Xeon E5 families and the new Xeon Scalable Processors. Xeon D is aimed at datacenters where power and space are limited; while the soldered SoCs are single-socket (1P) designs, high density is achieved by filling racks with as many single-processor Mini ITX boards as possible. Xeon D does not quite match the per-core clockspeeds of the “proper” Xeons, but it has significantly more cores than Xeon E3 and much lower TDPs and cost than Xeon E5.
Its many lower-clocked, lower-power cores excel at bursty tasks such as serving websites, where many threads may be maintained for long periods without needing much processing power, and the cores can turbo boost to meet demand when new page requests come in. For example, Facebook uses Xeon D processors to serve its front-end websites in its Yosemite OpenRack servers, where each server rack holds 192 Xeon D 1540 SoCs (four Xeon D boards per 1U sled) for 1,536 Broadwell cores. Other applications include edge routers, network security appliances, self-driving vehicles, and augmented reality processing clusters.
The autonomous vehicle use case is perhaps the best illustration of just what edge computing is. Rather than fighting the laws of physics to transfer sensor data back to a datacenter for processing and then back to the car in time for it to act safely on the results, edge computing brings most of the processing, networking, and storage power as close as possible to both the input sensors and the device (and human) that relies on accurate and timely data to make decisions.
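The Yosemite density figure quoted above is simple arithmetic worth verifying: the Xeon D 1540 is an 8-core part, so 192 of them per rack gives the stated core count. A quick check:

```python
# Sanity-checking the Facebook Yosemite density figure: 192 Xeon D 1540
# SoCs per rack, each an 8-core Broadwell part.

SOCS_PER_RACK = 192   # four boards per 1U sled, per the article
CORES_PER_SOC = 8     # Xeon D 1540 core count

total_cores = SOCS_PER_RACK * CORES_PER_SOC
print(total_cores)  # 1536
```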
As far as specifications go, Intel’s new Xeon D lineup includes 14 processor models broken into three main categories. The Edge Server and Cloud SKUs include eight-, twelve-, and eighteen-core options with TDPs ranging from 65W to 90W. Interestingly, the 18-core Xeon D does not feature the integrated 10 GbE networking of the lower-end models, though it supports higher DDR4 memory frequencies. The two remaining classes of Xeon D SoCs are the “Network Edge and Storage” and “Integrated Intel QuickAssist Technology” SKUs. These are roughly similar, with two eight-core, one 12-core, and one 16-core processor each (the former also has a quad-core that isn’t present in the latter category), but there is a big differentiator in clockspeeds. It seems customers will have to choose between clockspeed and QuickAssist acceleration (up to 100 Gbps): the chips with QAT are clocked much lower than those without the co-processor hardware, which makes sense, since the TDPs are similar and clocks had to be sacrificed to maintain the same core count. Thanks to the updated architecture, Intel is encroaching a bit on the per-core clockspeeds of the Xeon E3s and E5s, though once turbo boost comes into play the Xeon Ds can’t compete.
The flagship Xeon D 2191 offers two more cores (four additional threads) than the previous Broadwell-based flagship Xeon D 1577, as well as higher clockspeeds: 1.6 GHz base versus 1.3 GHz, and 2.2 GHz turbo versus 2.1 GHz. The Xeon D 2191 does lack the integrated networking, though. Comparing the two refreshed 16-core Xeon Ds to the 16-core Xeon D 1577, Intel has managed to increase clocks significantly (up to 2.2 GHz base and 3.0 GHz boost versus 1.3 GHz base and 2.1 GHz boost), double the number of memory channels and network controllers, and raise the maximum memory capacity from 128 GB to 512 GB. All those increases did come at a cost in TDP, though, which went from 45W to 100W.
Xeon D has always been an interesting platform, both for enthusiasts running VM labs and home servers and for big-data enterprise clients building and serving up the 'next big thing' from the astonishing amounts of data people create and consume daily. (Intel estimates a single self-driving car will generate as much as 4TB of data per day, the average person in 2020 will generate 1.5 GB of data per day, and VR recordings such as NFL True View will generate up to 3TB a minute!) With Intel ramping up core count, per-core performance, and I/O, the platform is starting not only to bridge the gap between the single-socket Xeon E3 and dual-socket Xeon E5, but to claim a place of its own in the fast-growing server market.
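To put those data-volume estimates in perspective, it helps to convert them into sustained throughput. A rough sketch (assuming decimal terabytes and an even spread over time, which real workloads won't have):

```python
# Rough conversions of Intel's data-volume estimates into sustained rates.
# Decimal units (1 TB = 10^12 bytes) and a flat average are assumed.

TB = 10**12  # bytes

car_bytes_per_s = 4 * TB / 86_400    # 4 TB per day, 86,400 s in a day
trueview_bytes_per_s = 3 * TB / 60   # 3 TB per minute

print(f"self-driving car: ~{car_bytes_per_s / 10**6:.0f} MB/s sustained")
print(f"True View capture: ~{trueview_bytes_per_s / 10**9:.0f} GB/s sustained")
```

The car works out to roughly 46 MB/s averaged over a day, while True View's 3TB a minute is a staggering 50 GB/s while recording, which makes clear why that kind of ingest lives next to serious server hardware.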
I am looking forward to seeing how Intel's partners and the enthusiast community take advantage of the new chips and what new projects they will enable. It will also be interesting to see the responses from AMD (e.g. Snowy Owl and, to a lesser extent, Great Horned Owl at the low and niche ends, as it has fewer CPU cores but a built-in GPU) and the various Arm partners (Qualcomm Centriq, X-Gene, Ampere, etc.*) as they vie for this growing market with higher-powered SoC options in 2018 and beyond.
- New Intel Xeon D Broadwell Processors Aimed at Low Power, High Density Servers
- Intel Xeon Scalable Processor Launch - New Architecture, New Platform for Data Center
- Qualcomm Centriq 2400 Arm-based Server Processor Begins Commercial Shipment
- Today's bonus AMD rumour: Starship, Naples, Zeppelin and a flock of Owls
*Note that X-Gene and Ampere are now both backed by the Carlyle Group: MACOM sold X-Gene to Project Denver Holdings, and Ampere is the Carlyle-backed venture led by ex-Intel employees.
Subject: Processors | February 5, 2018 - 04:28 PM | Jeremy Hellstrom
Tagged: final fantasy xv, round up
The new iteration of Final Fantasy sports some hefty recommended specs, including a Core i7-3770 or FX-8350 powering your system. TechSpot decided to test a variety of CPUs to see how they performed in tandem with a GTX 1080 Ti. With 14 CPUs represented, including several generations of Intel chips and a representative from each of the three Ryzen lines, they ran through a battery of benchmarks. The tests quickly showed that if you are running a quad-core CPU clocked lower than 4 GHz, from either vendor, you are not going to have a good time. Check out the full results to see if your system can handle it, or whether you should be shopping for a Ryzen 5 or 7, or perhaps a higher-end Coffee Lake if Intel is your cup of tea.
"Today we're checking out Final Fantasy XV CPU performance using the new standalone benchmark released ahead of next month's PC launch. The reason we want to look at CPU performance first is because the game is extremely CPU intensive, far more so than we were expecting."
Here are some more Processor articles from around the web:
- AMD AOCC 1.1 Code Compiler Speeds Up Performance On Zen CPUs @ Phoronix
- 6-core/12-thread Core i7 for $200, i7-5820K Revisited @ TechSpot
- The Fastest Linux Distribution For Ryzen: A 10-Way Linux OS Comparison On Ryzen 7 & Threadripper @ Phoronix
Subject: Processors | January 22, 2018 - 09:40 PM | Scott Michaud
Tagged: spectre, meltdown, Intel
A couple of weeks ago, Intel acknowledged reports that firmware updates for Spectre and Meltdown resulted in reboots and other stability issues. At the time, they still suggested that end-users should apply the patch regardless. They have since identified the cause and their recommendation has changed: OEMs, cloud service providers, system manufacturers, software vendors, and end users should stop deploying the firmware until a newer solution is released.
The new blog post also states that an early version of the updated patch has been created. Testing on the updated firmware started over the weekend, and it will be published shortly after that process has finished.
According to their security advisory, an earlier patch that addressed both Spectre variant 1 and Meltdown did not exhibit the stability and reboot issues. This suggests that something went wrong with the Spectre variant 2 mitigation, which leaves plenty of room for tea-leaf readers to speculate about what exactly broke in the patch. Ultimately it doesn't matter, though, because new code will be available soon.
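For readers who want to check where their own Linux systems stand, recent kernels expose per-vulnerability mitigation status as one-line files under /sys/devices/system/cpu/vulnerabilities. The sketch below parses such status strings; the sample values are illustrative, not from any particular machine:

```python
# Sketch: summarize the Spectre/Meltdown mitigation state that recent
# Linux kernels expose as one-line files under
# /sys/devices/system/cpu/vulnerabilities. Sample values are illustrative.

def vulnerable(status_line):
    """A status file starting with 'Vulnerable' means no mitigation is active."""
    return status_line.startswith("Vulnerable")

sample_status = {
    "meltdown": "Mitigation: PTI",
    "spectre_v1": "Mitigation: __user pointer sanitization",
    "spectre_v2": "Vulnerable: Minimal generic ASM retpoline",
}

for name, status in sample_status.items():
    print(f"{name}: {'VULNERABLE' if vulnerable(status) else 'mitigated'}")

# On a live system, read the real files instead, e.g.:
# from pathlib import Path
# for p in Path("/sys/devices/system/cpu/vulnerabilities").iterdir():
#     print(p.name, p.read_text().strip())
```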
Subject: Motherboards, Processors | January 19, 2018 - 01:39 PM | Sebastian Peak
Tagged: small form-factor, SFF, pentium, motherboard, mini ITX, Intel Pentium Silver, Intel, integrated CPU, gigabyte, gemini lake, fanless, embedded, celeron
GIGABYTE has announced motherboards for the new Gemini Lake platform featuring built-in Intel Pentium Silver and Intel Celeron processors. These fanless J/N series motherboards also offer the company's trademark "Ultra Durable" components and customizable performance settings.
As to the Gemini Lake platform, here are some of the details as reported by CNXSoft at last month's CPU launch:
"The models include two Pentium Silver quad-core processors, the N5000 for mobile and the J5005 for desktop, and four Celeron dual/quad-core processors, the N4000 & N4100 for mobile and the Celeron J4005 & J4105 for desktop.
All processors share the same 4MB cache, which will help with performance improvement, and dual-channel DDR4-2400/LPDDR4-2400 memory. Pentium processors come with Intel UHD Graphics 605 clocked up to 750/800 MHz, and Celeron processors are instead equipped with UHD Graphics 600 at up to 650/750 MHz, with the exact frequency depending on the model."
Image credit: CNXSoft
"[our] newest J/N series motherboards utilize a fanless cooling solution and the built-in Intel Gemini Lake processors make them perfect for compact, mainstream builds. The motherboards support HDMI 2.0 4K at 21:9 resolution for high definition video quality. Integrated PCIe Gen2 x2 M.2 slots supporting high speed NVMe SSD allows for fast data transfer speeds. The board's native Intel WIFI via the M.2 Connector along with an independently sold Intel CNVi wireless networking solution can make way for impressive wireless connectivity exceeding 1 gigabit per second, traditionally found in wired connections. Additionally, its support for M.2 SATA SSD, UDIMM DDR4 modules rated for 2400MHz, and noise free configurations makes it a perfect option for school, business, and home usage."
Pricing and availability were not specified in the press release (full PR after the break).
Subject: Processors | January 18, 2018 - 01:17 PM | Sebastian Peak
Tagged: update, spectre, security, restart, reboot, processor, patch, meltdown, Intel, cpu
The news will apparently get worse before it gets any better for Intel, as the company updated their security recommendations for the Spectre/Meltdown patches for affected CPUs to address post-patch system restart issues. Specifically, Intel notes that the current patches may introduce issues in some configurations, though the company does not recommend discontinuing use of such updates:
"Intel recommends that these partners, at their discretion, continue development and release of updates with existing microcode to provide protection against these exploits, understanding that the current versions may introduce issues such as reboot in some configurations."
Image credit: HotHardware
The recommendation section of the security bulletin, updated yesterday (January 17, 2018), is reproduced below:
- Intel has made significant progress in our investigation into the customer reboot sightings that we confirmed publicly last week
- Intel has reproduced these issues internally and has developed a test method that allows us to do so in a predictable manner
- Initial sightings were reported on Broadwell and Haswell based platforms in some configurations. During due diligence we determined that similar behavior occurs on other products including Ivy Bridge, Sandy Bridge, Skylake, and Kaby Lake based platforms in some configurations
- We are working toward root cause
- While our root cause analysis continues, we will start making beta microcode updates available to OEMs, Cloud service providers, system manufacturers and Software vendors next week for internal evaluation purposes
- In all cases, the existing and any new beta microcode updates continue to provide protection against the exploit (CVE-2017-5715) also known as “Spectre Variant 2”
- Variants 1 (Spectre) and Variant 3 (Meltdown) continue to be mitigated through system software changes from operating system and virtual machine vendors
- As we gather feedback from our customers we will continue to provide updates that improve upon performance and usability
Intel recommendations to OEMs, Cloud service providers, system manufacturers and software vendors
- Intel recommends that these partners maintain availability of existing microcode updates already released to end users. Intel does not recommend pulling back any updates already made available to end users
- NEW - Intel recommends that these partners, at their discretion, continue development and release of updates with existing microcode to provide protection against these exploits, understanding that the current versions may introduce issues such as reboot in some configurations
- NEW - We further recommend that OEMs, Cloud service providers, system manufacturers and software vendors begin evaluation of Intel beta microcode update releases in anticipation of definitive root cause and subsequent production releases suitable for end users
Intel recommendations to end users
- Following good security practices that protect against malware in general will also help protect against possible exploitation until updates can be applied
- For PCs and Data Center infrastructure, Intel recommends that patches be applied as soon as they are available from your system manufacturer, and software vendors
- For data center infrastructure, Intel additionally recommends that IT administrators evaluate potential impacts from the reboot issue and make decisions based on the security profile of the infrastructure
Intel has worked with operating system vendors, equipment manufacturers, and other ecosystem partners to develop software updates that can help protect systems from these methods. End users and systems administrators should check with their operating system vendors and apply any available updates as soon as practical.
The full list of affected processors from Intel's security bulletin follows:
- Intel® Core™ i3 processor (45nm and 32nm)
- Intel® Core™ i5 processor (45nm and 32nm)
- Intel® Core™ i7 processor (45nm and 32nm)
- Intel® Core™ M processor family (45nm and 32nm)
- 2nd generation Intel® Core™ processors
- 3rd generation Intel® Core™ processors
- 4th generation Intel® Core™ processors
- 5th generation Intel® Core™ processors
- 6th generation Intel® Core™ processors
- 7th generation Intel® Core™ processors
- 8th generation Intel® Core™ processors
- Intel® Core™ X-series Processor Family for Intel® X99 platforms
- Intel® Core™ X-series Processor Family for Intel® X299 platforms
- Intel® Xeon® processor 3400 series
- Intel® Xeon® processor 3600 series
- Intel® Xeon® processor 5500 series
- Intel® Xeon® processor 5600 series
- Intel® Xeon® processor 6500 series
- Intel® Xeon® processor 7500 series
- Intel® Xeon® Processor E3 Family
- Intel® Xeon® Processor E3 v2 Family
- Intel® Xeon® Processor E3 v3 Family
- Intel® Xeon® Processor E3 v4 Family
- Intel® Xeon® Processor E3 v5 Family
- Intel® Xeon® Processor E3 v6 Family
- Intel® Xeon® Processor E5 Family
- Intel® Xeon® Processor E5 v2 Family
- Intel® Xeon® Processor E5 v3 Family
- Intel® Xeon® Processor E5 v4 Family
- Intel® Xeon® Processor E7 Family
- Intel® Xeon® Processor E7 v2 Family
- Intel® Xeon® Processor E7 v3 Family
- Intel® Xeon® Processor E7 v4 Family
- Intel® Xeon® Processor Scalable Family
- Intel® Xeon Phi™ Processor 3200, 5200, 7200 Series
- Intel® Atom™ Processor C Series
- Intel® Atom™ Processor E Series
- Intel® Atom™ Processor A Series
- Intel® Atom™ Processor x3 Series
- Intel® Atom™ Processor Z Series
- Intel® Celeron® Processor J Series
- Intel® Celeron® Processor N Series
- Intel® Pentium® Processor J Series
- Intel® Pentium® Processor N Series
We await further updates and developments from Intel, system integrators, and motherboard partners.
Subject: Processors | January 8, 2018 - 07:24 PM | Jeremy Hellstrom
Tagged: meltdown, security, linux, nvidia
Thanks to a wee tech conference going on, performing a wide gamut of testing on the effect of the Meltdown patch is taking some time. Al has performed benchmarks focusing on the performance impact the patch has on your storage subsystem, which proved to be very minimal. Phoronix are continuing their Linux testing, the latest of which focuses on the impact the patch has on NVIDIA GPUs, specifically the GTX 1060 and GTX 1080 Ti. The performance delta they see falls within measurement error; in other words, there is no measurable impact after the patch is installed. For now, the patch seems to hit scientific applications and hosting providers hardest, those running high-I/O workloads and large numbers of virtual machines. The cure for Meltdown is nowhere near as bad as what it protects against for most users ... pity the same cannot be said for Spectre.
"Earlier this week when news was still emerging on the "Intel CPU bug" now known as Spectre and Meltdown I ran some Radeon gaming tests with the preliminary Linux kernel patches providing Kernel Page Table Isolation (KPTI) support. Contrary to the hysteria, the gaming performance was minimally impacted with those open-source Radeon driver tests while today are some tests using the latest NVIDIA driver paired with a KPTI-enabled kernel."
Here are some more Processor articles from around the web:
- Patched Desktop PC: Meltdown & Spectre Benchmarked @ Techspot
- Benchmarking Linux With The Retpoline Patches For Spectre @ Phoronix
- Battle of the 16-cores: Intel’s Core i9-7960X vs. AMD’s Threadripper 1950X @ Techgage
Subject: Processors | January 8, 2018 - 12:00 AM | Jim Tanous
Tagged: Threadripper, ryzen, processor, price cut, cpu, CES 2018, CES, amd
AMD announced today a price drop for most of its Ryzen processor lineup, making the company's multi-core-focused parts even more competitive with Intel in terms of cost-to-performance. While not every Ryzen and Threadripper processor is seeing a price reduction, many parts are being cut by up to 30 percent.
| Processor | Cores/Threads | Previous SEP | New SEP | Percent Reduction |
| --- | --- | --- | --- | --- |
| Ryzen 7 1800X | 8/16 | $499 | $349 | -30.1% |
| Ryzen 7 1700X | 8/16 | $399 | $309 | -22.5% |
| Ryzen 7 1700 | 8/16 | $329 | $299 | -9.1% |
| Ryzen 5 1600X | 6/12 | $249 | $219 | -12.0% |
| Ryzen 5 1600 | 6/12 | $219 | $189 | -13.7% |
| Ryzen 5 1500X | 4/8 | $189 | $174 | -7.9% |
| Ryzen 5 2400G | 4/8 | N/A | $169 | N/A |
| Ryzen 3 1300X | 4/4 | $129 | $129 | N/A |
| Ryzen 3 2200G | 4/4 | N/A | $99 | N/A |
Note also in the price chart the new "G" series Ryzen APUs with integrated Radeon Vega graphics. Check pcper.com for more info on this new part.
Some of the new prices are already reflected, and in some cases reduced further, at retailers like Amazon.
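The percent-reduction column follows directly from the previous and new suggested e-tail prices (SEPs); a quick sketch that recomputes it from the chart's values (the results agree with the listed figures to within a tenth of a percent, and the new "G" APUs are skipped since they have no previous SEP):

```python
# Recompute the percent reductions from the previous and new SEPs in the
# price chart. The "G" series APUs are new parts with no previous SEP.
ryzen_seps = {
    "Ryzen 7 1800X": (499, 349),
    "Ryzen 7 1700X": (399, 309),
    "Ryzen 7 1700":  (329, 299),
    "Ryzen 5 1600X": (249, 219),
    "Ryzen 5 1600":  (219, 189),
    "Ryzen 5 1500X": (189, 174),
}

def percent_reduction(previous, new):
    """Percent change from the previous SEP to the new one (negative = a cut)."""
    return (new - previous) / previous * 100

for part, (old, new) in ryzen_seps.items():
    print(f"{part}: ${old} -> ${new} ({percent_reduction(old, new):.1f}%)")
```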
To determine the new prices, AMD performed comparative price testing with its online retail partners last quarter, and determined that these new prices were the best balance between performance and value.
With second generation Ryzen processors not scheduled to launch until later this spring, the price drop not only helps AMD move existing inventory, it also keeps the company at the top of enthusiasts' minds in the midst of the fallout around the recent processor security issues, one of which primarily affects Intel processors.
Subject: Processors | January 8, 2018 - 12:00 AM | Ryan Shrout
Tagged: Zen+, Zen, ryzen 2000, ryzen, CES 2018, CES, amd
During AMD’s CES 2018 Tech Day, CEO Lisa Su announced the plans for the second-generation Ryzen processor roll-out in April. This is the revised design that has been rumored for months, with a process technology change and slight tweaks to features.
Details are expectedly short, but what we know is that these parts will move from 14nm process technology to 12nm at GlobalFoundries. AMD is calling the design “Zen+” and this is NOT Zen 2; that is coming next year. You should expect higher clocks for Ryzen 2000-series processors and improvements to Precision Boost that will enable more consistent and gradual clock speed shifts in workloads of interest like gaming.
Also on the roadmap now are updated Threadripper processors with the same “Zen+” enhancements, coming out in 2H of 2018.
The great news for enthusiasts who have already bought into AMD’s current-generation platform is that existing motherboards will support this processor update, as long as you have the associated BIOS. Motherboards are already being updated today for the channel (to support the Ryzen APU launch), so there should be little concern about compatibility come April.
However, there IS a new chipset coming with “Zen+”, the AMD X470. Information on it is also slim, but it includes some optimizations and fixes. AMD had growing pains with the initial set of motherboard releases including power concerns and routing issues, both of which are addressed with the new design.
That’s all we know for now, but I am excited to get my hands on the Ryzen second-generation processors this spring to see how much performance and behavior has changed. Intel has definitely changed the landscape since Ryzen’s first release in March of 2017, so enthusiasts should welcome the return of this back-and-forth competition cycle.
Subject: Processors | January 8, 2018 - 12:00 AM | Ryan Shrout
Tagged: Zen, Vega, ryzen, CES 2018, CES, APU, amd, 2400G, 2200G
Though AMD might not use the term APU anymore, that’s what we are looking at today. The Ryzen + Vega processor (single die implementation, to be clear) for desktop solutions will begin shipping February 12 and will bring high-performance integrated graphics to low cost PCs. Fully titled the “AMD Ryzen Desktop Processor with Radeon Vega Graphics”, this new processor will utilize the same AM4 socket and motherboards that have been shipping since March of 2017. Finally, a good use for those display outputs!
Though enthusiasts might have little interest in these parts, it is an important step for AMD. Building a low-cost PC with a Ryzen CPU has been difficult due to the requirement of a discrete graphics card. Nearly all of Intel’s processors have integrated graphics, and though we might complain about the performance it provides in games, the truth is that the value of not needing another component is crucial for reducing costs.
Without an APU that had both graphics and the company’s greatly improved Zen CPU architecture, AMD was leaving a lot of potential sales on the table. Also, the market for entry-level gaming in small form factor designs is significant.
Two models will be launching: the Ryzen 5 2400G and Ryzen 3 2200G. Clock speeds are higher than what exists on the Ryzen 5 1400 and Ryzen 3 1200 and match the core and thread count. The 2400G includes 11 Compute Units (704 stream processors) and the 2200G has 8 CUs (512 stream processors). The TDP of both is 65 watts.
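As with prior GCN-based parts, each Vega compute unit contains 64 stream processors, so the SP counts quoted above follow directly from the CU counts; a trivial sketch:

```python
# Each Vega compute unit (CU) contains 64 stream processors (SPs), so the
# stream-processor counts follow directly from the CU counts quoted above.
SPS_PER_CU = 64

def stream_processors(compute_units):
    return compute_units * SPS_PER_CU

for name, cus in [("Ryzen 5 2400G", 11), ("Ryzen 3 2200G", 8)]:
    print(f"{name}: {cus} CUs -> {stream_processors(cus)} stream processors")
```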
The pricing gives AMD some impressive placement. The $169 Ryzen 5 2400G will offer much better graphics performance than the Core i5-8400, which costs $30 more (based on current pricing), and equivalent performance to a Core i5-8400 plus NVIDIA GT 1030 discrete solution that costs more than $100 extra.
Looking at CPU performance, the new Ryzen processors post higher scores than the units they are replacing, while adding Vega graphics capability at matching or lower prices.
AMD even went as far as to show the overclocking headroom that the Ryzen APU can offer. During an on-site demo we saw the Ryzen 5 2400G improve its 3DMark score by 39% through memory frequency and GPU clock speed increases. Moving the GPU clock from ~1100 MHz to 1675 MHz will mean a significant increase in power consumption, and I do question the size of the audience that wants to overclock an APU. Still – cool to see!
The Ryzen CPU with Vega graphics is a product we all expected to see; it’s the first perfect marriage of AMD’s revitalized CPU division and its considerable advantage in integrated graphics. It has been a long time since one of AMD’s APUs appeared interesting to me and stoked my desire to build a low-cost, mainstream gaming system. Look for reviews in just a few short weeks!
Subject: Processors | January 4, 2018 - 01:15 PM | Jeremy Hellstrom
Tagged: linux, spectre, meltdown, Intel
As the Linux patch for the Intel kernel issue is somewhat more mature than the Windows patch, which was just pushed out, and because the patch may have more impact on hosting solutions than gaming machines, we turn to Phoronix for test results. Their testing overview looks at both Intel and AMD, as the PTI patch can be installed on AMD systems, and it is not a bad idea to do so. The results are somewhat encouraging: CPUs with PCID (Process Context ID), such as Sandy Bridge and newer, seem to see little effect from the patch; network performance seems unchanged; and Xeons see far less of an effect across the board than desktop machines. That is not to say there is no impact whatsoever: in synthetic benchmarks that make frequent system calls or depend on optimized access to the kernel, they did see slowdowns; thankfully those workloads are not common for enthusiast software. Expect a lot more results from both Windows and Linux over the coming weeks.
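On Linux, whether a chip exposes PCID (and the newer INVPCID instruction, which further reduces the cost of the PTI patch) is visible in its `/proc/cpuinfo` feature flags. A minimal sketch of how you might check; the `cpu_flags` helper and the sample text are illustrative, not a standard API or output from any real machine:

```python
# Check whether a CPU reports the PCID (and INVPCID) features that let the
# KPTI/PTI patch avoid full TLB flushes on every kernel entry. Parses
# /proc/cpuinfo-style text; the sample below is illustrative only.

def cpu_flags(cpuinfo_text):
    """Return the set of feature flags from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

sample = "processor : 0\nflags\t\t: fpu vme pse pcid invpcid sse2\n"
flags = cpu_flags(sample)
print("PCID supported:", "pcid" in flags)
print("INVPCID supported:", "invpcid" in flags)
```

On a real system you would feed it `open("/proc/cpuinfo").read()` instead of the sample string.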
"2018 has been off to a busy start with all the testing around the Linux x86 PTI (Page Table Isolation) patches for this "Intel CPU bug" that potentially dates back to the Pentium days but has yet to be fully disclosed. Here is the latest."
Here are some more Processor articles from around the web:
- Testing Windows 10 Performance Before and After the Meltdown Flaw Emergency Patch @ TechSpot
- 2nd-Gen Core i7 vs. 8th-Gen Core i7: RIP Sandy Bridge @ Techspot
- Intel Core i7 8700k @ Modders-Inc
- Ryzen Mobile Finally Arrives: AMD Ryzen 5 2500U @ Techspot
- Intel Core i9-7900X 3.3 GHz @ TechPowerUp
- The Best CPUs: This is what you should get @ Techspot
Subject: Processors | January 3, 2018 - 08:17 PM | Ryan Shrout
Tagged: Intel, amd, arm, meltdown, spectre, security
The following story was originally posted on ShroutResearch.com.
UPDATE 1 - 8:25pm
Just before the closing bell on Wednesday, Intel released a statement responding to the security issues brought up in this story. While acknowledging that these new security concerns do exist, the company went out of its way to insinuate that AMD, Arm Holdings, and others were at risk. Intel also states that performance impact on patched machines “should not be significant and will be mitigated over time.”
Intel’s statement is at least mostly accurate, though the released report from the Google Project Zero group responsible for finding the security vulnerability goes into much more detail. The security issue concerns a feature called “speculative execution,” in which a processor tries to predict work that will be needed ahead of time in order to speed up processing tasks. The paper details three variants of this particular vulnerability, the first of which applies to Intel, AMD, Arm, and nearly every other modern processor architecture. This variant is easily patched and should have near-zero effect on performance.
The second variant is deeply architecture-specific, meaning attackers would need unique code for each different Intel or AMD processor. This exploit should be exceedingly rare in the wild, and AMD goes as far as to call it a “near-zero” risk for its systems.
The third variant is where things are more complex, and where the claim that AMD processors are not susceptible is confirmed. This one is the source of the leaks that filtered out ahead of disclosure and was the subject of the story below. In its statement, AMD makes clear that, due to architectural design differences in its products, its past and present processors are not at risk.
The final outlook from this story looks very similar to how it did early on Wednesday though with a couple of added wrinkles. The security report released by Project Zero indicates that most modern hardware is at risk though to different degrees based on the design of the chips themselves. Intel is not alone in this instance, but it does have additional vulnerabilities that other processor designs do not incur. To insinuate otherwise in its public statement is incorrect.
As for performance impact, most of the initial testing and speculation is likely exaggerating how it will change the landscape, if at all. Neither Intel nor AMD sees a “doomsday” scenario of regressing computing performance because of this security patch.
At the end of 2017, Intel CEO Brian Krzanich said his company would be going through changes in the New Year, becoming more aggressive, and taking the fight to its competitors in new and existing markets. It seems that BK will have his first opportunity to prove out this new corporate strategy with a looming security issue that affects nearly 10 years of processors.
A recently revealed hardware bug in Intel processors is coming to light as operating system vendors like Microsoft and the Linux community scramble to update platforms to avoid potential security concerns. This bug has been rumored for some time, with updates to core Linux software packages indicating that a severe vulnerability was being fixed, but with comments redacted when published. Security flaws are often kept secret to avoid being exploited by attackers until software patches are available to correct them.
This hardware-level vulnerability allows user-mode applications (those run by general consumers or businesses) to potentially gain access to kernel-level memory space, an area handled exclusively by the operating system that can contain sensitive information like passwords, biometrics, and more. An attacker could use this flaw to potentially access other user-mode application data, compromising entire systems by bypassing the operating system's built-in protections.
At a time when Intel is being pressured from many different angles and markets, this vulnerability and hardware bug comes at an incredibly inopportune time. AMD spent its 2017 releasing competitive products in the consumer space with Ryzen and the enterprise space with EPYC. The enterprise markets in particular are at risk for Intel. The EPYC processors already offered performance and pricing advantages, and now AMD can showcase security, as none of its processors are affected by the same vulnerability that Intel is saddled with. Though the enterprise space works in cycles, and AMD won’t see an immediate uptick in sales, I would be surprised if this did not push more cloud providers and large-scale server deployments to look at AMD's offerings.
At this point, only the Linux community has publicly discussed the fixes taking place, with initial patches going out earlier this week. Much of the enterprise and cloud ecosystem runs on Linux-based platforms, and securing these systems against attack is a crucial step. Microsoft has yet to comment publicly on what its software updates will look like, when they will be delivered, and what impact they might have on consumer systems.
While hardware and software vulnerabilities are common in today’s connected world, there are two key points that make this situation more significant. First, this is a hardware bug, meaning that it cannot be fixed or addressed completely without Intel making changes to its hardware design, a process that can take months or years to complete. As far as we can tell, this bug will affect ALL Intel processors released in the last decade or more, including enterprise Xeon processors and consumer Core and Pentium offerings. And as Intel has been the dominant market leader in both the enterprise and consumer spaces, there are potentially hundreds of millions of affected systems in the field.
The second differentiating point for this issue is that the software fix could impact the performance of systems. Initial numbers have claimed as much as a 30% reduction in performance, but those results are likely worst-case scenarios. Some early testing of updated Linux platforms indicates performance could decrease by 6-20% depending on the application. Other testing of consumer workloads, including gaming, shows almost no performance impact. Linux founder and active developer Linus Torvalds says the performance impact will range from nothing to “double-digit slowdowns.”
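For context, slowdown percentages like these are just before/after throughput comparisons; a minimal sketch of how such a figure is derived, using made-up numbers rather than actual Meltdown-patch results:

```python
# How a "percent slowdown" figure is derived from before/after benchmark
# throughput. The numbers here are invented for illustration; they are not
# actual Meltdown-patch measurements.

def slowdown_pct(before, after):
    """Percent drop in throughput after patching (positive = regression)."""
    return (before - after) / before * 100

# hypothetical throughput (ops/sec or fps) before and after the patch
workloads = {
    "gaming (GPU-bound)": (144.0, 143.0),     # barely affected
    "syscall-heavy I/O":  (100_000, 82_000),  # worst-case-style regression
}
for name, (before, after) in workloads.items():
    print(f"{name}: {slowdown_pct(before, after):.1f}% slower")
```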
Even though the true nature of this vulnerability is still tied behind non-disclosure agreements, it is unlikely that there will be a double-digit performance reduction on servers at a mass scale when these updates are pushed out. Intel is aware of this vulnerability and has been for some time, and financially it would need to plan for any kind of product replacement or reimbursement campaign it might undertake with partners and customers.
Subject: General Tech, Processors | December 12, 2017 - 04:52 PM | Tim Verry
Tagged: training, nnp, nervana, Intel, flexpoint, deep learning, asic, artificial intelligence
Intel recently provided a few insights into its upcoming Nervana Neural Network Processor (NNP) on its blog. Built in partnership with deep learning startup Nervana Systems which Intel acquired last year for over $400 million, the AI-focused chip previously codenamed Lake Crest is built on a new architecture designed from the ground up to accelerate neural network training and AI modeling.
The full details of the Intel NNP are still unknown, but it is a custom ASIC with a Tensor-based architecture placed on a multi-chip module (MCM) along with 32GB of HBM2 memory. The Nervana NNP supports optimized, power-efficient Flexpoint math, and interconnectivity is a major focus of this scalable platform. Each AI accelerator features 12 processing clusters (with an as-yet-unannounced number of "cores" or processing elements) paired with 12 proprietary inter-chip links that are 20 times faster than PCI-E, four HBM2 memory controllers, a management-controller CPU, as well as standard SPI, I2C, GPIO, PCI-E x16, and DMA I/O. The processor is designed to be highly configurable and to meet both model and data parallelism goals.
The processing elements are all software controlled and can communicate with each other over high-speed bidirectional links at up to a terabit per second. Each processing element has more than 2MB of local memory, and the Nervana NNP has 30MB of local memory in total. Memory accesses and data sharing are managed by QoS software, which controls adjustable bandwidth over multiple virtual channels with multiple priorities per channel. Processing elements can send and receive data between each other and the HBM2 stacks locally, as well as off-die to processing elements and HBM2 on other NNP chips. The idea is to allow as much internal sharing as possible and to keep data stored and transformed locally: this saves precious HBM2 bandwidth (1TB/s) for pre-fetching upcoming tensors, reduces the hops and resulting latency incurred by round trips to HBM2 when transferring data between cores and/or processors, and saves power. This setup also helps Intel achieve an extremely parallel and scalable platform where multiple Nervana NNP co-processors on the same and remote boards effectively act as one massive compute unit!
Intel's Flexpoint format is also at the heart of the Nervana NNP and allegedly allows Intel to achieve results similar to FP32 with twice the memory bandwidth while being more power efficient than FP16. Flexpoint is used for the scalar math required for deep learning and uses fixed-point 16-bit multiply and addition operations with a shared 5-bit exponent. Unlike FP16, Flexpoint uses all 16 bits of the word for the mantissa and passes the exponent in the instruction. The NNP architecture also features zero-cycle transpose operations and optimizations for matrix multiplication and convolutions to make efficient use of silicon.
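The exact Flexpoint encoding has not been published, but the general shared-exponent idea can be sketched: store every value in a tensor as a 16-bit integer mantissa and share a single power-of-two exponent across the whole tensor. The sketch below is an illustration under those stated assumptions, not Intel's actual implementation:

```python
# Sketch of a shared-exponent block format in the spirit of Flexpoint:
# each value becomes a 16-bit integer mantissa, and one power-of-two
# exponent is shared by the entire tensor. Illustrative only; the real
# Flexpoint encoding is proprietary.
import math

INT16_MAX = 32767

def encode(values):
    """Pick a shared exponent so the largest value fits in int16, then quantize."""
    peak = max(abs(v) for v in values)
    # smallest exponent such that peak / 2**exp <= INT16_MAX
    exp = math.ceil(math.log2(peak / INT16_MAX)) if peak > 0 else 0
    mantissas = [round(v / 2**exp) for v in values]
    return mantissas, exp

def decode(mantissas, exp):
    return [m * 2**exp for m in mantissas]

tensor = [100000.0, -2500.0, 42.0]
mants, exp = encode(tensor)
print(mants, exp)
print(decode(mants, exp))  # close to the originals, with shared-exponent rounding
```

The trade-off is visible immediately: values near the tensor's peak round-trip exactly, while small values in the same tensor lose precision to the shared scale, which is why the format suits tensors whose values share a similar dynamic range.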
Software control allows users to dial in the performance for their specific workloads, and since many of the math operations and data movement are known or expected in advance, users can keep data as close to the compute units working on that data as possible while minimizing HBM2 memory accesses and data movements across the die to prevent congestion and optimize power usage.
Intel is currently working with Facebook and hopes to have its deep learning products out early next year. The company may have axed Knights Hill, but it is far from giving up on this extremely lucrative market as it continues to push toward exascale computing and AI. Intel is targeting a 100x increase in neural network performance by 2020, which is a tall order, but Intel throwing its weight around in this ring should give GPU makers pause: such an achievement could cut heavily into their GPGPU-powered entries in a market that is only just starting to heat up.
You won't be running Crysis or even Minecraft on this thing, but soon enough you might be using software on your phone for augmented reality, or in your autonomous car, that runs inference routines on a neural network trained on one of these chips! It's specialized and niche, but still very interesting.
- Intel Launches Stratix 10 FPGA With ARM CPU and HBM2
- Intel's Nervana chip targets Nvidia on artificial intelligence
- New AI products will Crest Computex
- Intel to Ship FPGA-Accelerated Xeons in Early 2016
- Intel Kills Knights Hill, Will Launch Xeon Phi Architecture for Exascale Computing @ ExtremeTech
- NVIDIA Discusses Multi-Die GPUs
Subject: Processors | December 3, 2017 - 03:16 PM | Scott Michaud
Tagged: Intel, Cannonlake, 10nm
According to Fudzilla’s unnamed, “well-placed” sources, Intel could have already launched a 10nm CPU, but they are waiting until yields get better. This comment can be parsed in multiple ways. If they mean that “yeah, we could have a 10nm part out, but not covering our entire product stack and our yields would be so bad that we’d have shortages for several months” then, well, yeah. That is a bit of a “duh” comment. Intel can technically make a 10nm product if you don’t care about yields, supply, and intended TDP.
If, however, the comment means something along the lines of “we currently have a worst-case yield of 85%, but we’re waiting until we cross 90%” then… I doubt it’s true (or, at least, it’s not the whole truth). Coffee Lake is technically (if you count Broadwell) their fourth named 14nm architecture. I would expect that Intel’s yields would need to be less-than-mediocre to delay 10nm for this long. Their reactions to AMD seem to be a knee-jerk “add cores” with a little “we’re still the best single-threaded tech” on the side. Also, they look as though they have fallen behind the other fabs, which mostly ship 10nm for mobile.
I doubt Intel would let all that stigma propagate just to get a few extra percent yield at launch.
Of course, I could be wrong. It just seems like the “we’re waiting for better yields” argument is a little more severe than the post is letting on. They would have pushed out a product by now if it was viable-but-suboptimal, right? That would have been the lesser of two evils, right?