Subject: Processors | March 20, 2018 - 04:33 PM | Ryan Shrout
Tagged: ryzenfall, masterkey, fallout, cts labs, chimera, amd
AMD’s CTO Mark Papermaster released a blog today that acknowledges the security vulnerabilities first shown in a CTS Labs report last week while also laying the foundation for the mitigations to be released. Though the company had already acknowledged the report, and at least one other independent security firm had validated the claims, we had yet to hear from AMD officially on the potential impact and what fixes might be possible for these concerns.
In the write-up, Papermaster is clear to call out the short period of time AMD was given with this information, quoting “less than 24 hours” from the time it was notified to the time the story was public on news outlets and blogs across the world. It is important to note, for those who may not follow the security landscape closely, that this has no relation to the Spectre and Meltdown issues affecting the industry, and that what CTS found has nothing to do with the Zen architecture itself. Instead, the problem revolves around the embedded security protocol processor; while that is an important distinction moving forward, from a practical standpoint for customers the two are one and the same.
AMD states that it has “rapidly completed its assessment and is in the process of developing and staging the deployment of mitigations.” Rapidly is an understatement – going from blindsided to an organized response is a delicate process and AMD has proven its level of sincerity with the priority it placed on this.
Papermaster goes on to mention that all these exploits require administrative access to the computer being infected, a key differentiator from the Spectre/Meltdown vulnerabilities. The post points out that “any attacker gaining unauthorized administrative access would have a wide range of attacks at their disposal well beyond the exploits identified in this research.” I think AMD does an excellent job threading the needle in this post balancing the seriousness of these vulnerabilities with the overzealous hype that was created upon their initial release and the accompanying financial bullshit that followed.
AMD provides an easy-to-understand table with a breakdown of the vulnerabilities, the potential impact of each security risk, and what the company sees as its mitigation capability. Both sets that affect the secure processor in the Ryzen and EPYC designs are addressable with a firmware update for the secure unit itself, distributed through a standard BIOS update. For the Promontory chipset issue, AMD is combining a BIOS update with continued work alongside ASMedia to enhance the security updates.
That is the end of the update from AMD at this point. In my view, the company is doing a satisfactory job addressing the problems on what must be an insanely accelerated timetable. I do wish AMD were willing to offer more specific timelines for the distribution of those security patches, and how long we should expect to wait to see them in the form of BIOS updates for consumer and enterprise customers. For now, we’ll monitor the situation and look for other input from AMD, CTS, or secondary security firms to see if the risks laid out ever materialize.
For what could have been a disastrous week for AMD, it has pivoted to provide a controlled, well-executed plan. Despite the hype and hysteria that might have started with stock-shorting and buzzwords, the plight of the AMD processor family looks stable.
Subject: Processors | March 15, 2018 - 10:29 AM | Ryan Shrout
Tagged: spectre, meltdown, Intel, cascade lake, cannon lake
In continuing follow up from the spectacle that surrounded the Meltdown and Spectre security vulnerabilities released in January, Intel announced that it has provided patches and updates that address 100% of the products it has launched in the last 5 years. The company also revealed its plan for updated chip designs that will address both the security and performance concerns surrounding the vulnerabilities.
Intel hopes that by releasing new chips to address the security and performance questions quickly it will cement its position as the leader in the enterprise compute space. Customers like Amazon, Microsoft, and Google that run the world’s largest data centers are looking for improved products to make up for the performance loss and assurances moving forward that a similar situation won’t impact their bottom line.
For current products, patches provide mitigations for the security flaws in the form of operating system updates (for Windows and Linux) and what are called microcode updates, small-scale firmware that adjusts how a processor handles instructions. Distributed by Intel OEMs (system vendors and component providers) as well as Microsoft, the patches have seemingly negated the risks to consumer and enterprise customer data, but with a questionable impact on performance.
The mitigations cause the processors to operate differently than originally designed and will cause performance slowdowns on some workloads. These performance degradations are the source of the handful of class-action lawsuits hanging over Intel’s head and are a potential sore spot for its relationship with partners. Details on the performance gaps from the security mitigations have been sparse from Intel, with only small updates posted on corporate blogs. And because the problem has been so widespread, covering the entire Intel product line of the last 10 years, researchers are struggling to keep up.
The new chips that Intel is promising will address both security and performance considerations in silicon rather than software, and will be available in 2018. For the data center this is the Cascade Lake server processor, and for the consumer and business markets this is known as Cannon Lake. Both will include what Intel is calling “virtual fences” between user and operating system privilege levels and will create a significant additional obstacle for potential vulnerabilities.
The chips will also lay the ground work and foundation for future security improvement, providing a method to more easily update the security of the processors through patching.
By moving the security mitigations from software (both operating system and firmware) into silicon, Intel is reducing the performance impact that Spectre and Meltdown cause on select computing tasks. Assurances that future generations of parts won’t suffer from a performance hit is good news for Intel and its customer base, but I don’t think currently afflicted customers will be satisfied at the assertion they need to buy updated Intel chips to avoid the performance penalty. It will be interesting to see how, if at all, the legal disputes are affected.
The speed at which Intel is releasing updated chips to the market is an impressive engineering feat, and it indicates a top-level directive to get this fixed as quickly as possible. In the span of just 12 months (from Intel’s apparent notification of the security vulnerability to the expected release of this new hardware) the company will have integrated fairly significant architectural changes. While this may have been a costly move for the company, it is a drop in the bucket compared to the potential costs of lowered consumer trust or partner migration to competitive AMD processors.
For its part, AMD has had its own security issues pop up this week from a research firm called CTS Labs. While there are extenuating circumstances that cloud the release of the information, AMD does now have a template for how to quickly and effectively address a hardware-level security problem, if it exists.
The full content of Intel's posted story on the subject is included below:
Hardware-based Protection Coming to Data Center and PC Products Later this Year
By Brian Krzanich
In addressing the vulnerabilities reported by Google Project Zero earlier this year, Intel and the technology industry have faced a significant challenge. Thousands of people across the industry have worked tirelessly to make sure we delivered on our collective priority: protecting customers and their data. I am humbled and thankful for the commitment and effort shown by so many people around the globe. And, I am reassured that when the need is great, companies – and even competitors – will work together to address that need.
But there is still work to do. The security landscape is constantly evolving and we know that there will always be new threats. This was the impetus for the Security-First Pledge I penned in January. Intel has a long history of focusing on security, and now, more than ever, we are committed to the principles I outlined in that pledge: customer-first urgency, transparent and timely communications, and ongoing security assurance.
Today, I want to provide several updates that show continued progress to fulfill that pledge. First, we have now released microcode updates for 100 percent of Intel products launched in the past five years that require protection against the side-channel method vulnerabilities discovered by Google. As part of this, I want to recognize and express my appreciation to all of the industry partners who worked closely with us to develop and test these updates, and make sure they were ready for production.
With these updates now available, I encourage everyone to make sure they are always keeping their systems up-to-date. It’s one of the easiest ways to stay protected. I also want to take the opportunity to share more details of what we are doing at the hardware level to protect against these vulnerabilities in the future. This was something I committed to during our most recent earnings call.
While Variant 1 will continue to be addressed via software mitigations, we are making changes to our hardware design to further address the other two. We have redesigned parts of the processor to introduce new levels of protection through partitioning that will protect against both Variants 2 and 3. Think of this partitioning as additional “protective walls” between applications and user privilege levels to create an obstacle for bad actors.
These changes will begin with our next-generation Intel® Xeon® Scalable processors (code-named Cascade Lake) as well as 8th Generation Intel® Core™ processors expected to ship in the second half of 2018. As we bring these new products to market, ensuring that they deliver the performance improvements people expect from us is critical. Our goal is to offer not only the best performance, but also the best secure performance.
But again, our work is not done. This is not a singular event; it is a long-term commitment. One that we take very seriously. Customer-first urgency, transparent and timely communications, and ongoing security assurance. This is our pledge and it’s what you can count on from me, and from all of Intel.
Subject: Cases and Cooling, Processors | March 9, 2018 - 02:45 PM | Jeremy Hellstrom
Tagged: amd, Threadripper, tim, ryzen
If you are looking for advice on how to install and cool a Threadripper, [H]ard|OCP has quickly become the site to reference. They've benchmarked the majority of waterblocks that are compatible with AMD's big chip and published videos on how to install it on your motherboard. Today the chip is out again, this time getting a manually applied TIM treatment. Check out Kyle's tips on getting ready to coat your chip and the best way to spread the TIM to ensure even cooling.
"AMD's Threadripper has shown to be a very different CPU in all sorts of ways and this includes how you install the Thermal Interface Material as well should you be pushing your Threadripper's clocks beyond factory defaults. We show you what techniques we have found to give us the best temperatures when overclocking. "
Here are some more Processor articles from around the web:
- How To Install the AMD Threadripper CPU @ [H]ard|OCP
- AMD Ryzen 3 2200G & Ryzen 5 2400G APU Review @ Neoseeker
- The AMD Ryzen 3 2200G With Radeon Vega 8 @ TechARP
- AMD Ryzen 3 2200G + Ryzen 5 2400G Linux CPU Performance, 21-Way Intel/AMD Comparison @ Phoronix
- The AMD Ryzen 5 2400G With Radeon RX Vega 11 @ TechARP
- AMD Ryzen 5 2400G Linux Gaming Benchmarks @ Phoronix
It's clear by now that AMD's latest CPU releases, the Ryzen 3 2200G and the Ryzen 5 2400G, are compelling products. We've already taken a look at them in our initial review, as well as investigated how memory speed affects the graphics performance of the integrated GPU, but it seemed there was something missing.
Recently, it's been painfully clear that GPUs excel at more than just graphics rendering. With the rise of cryptocurrency mining, OpenCL and CUDA performance are as important as ever.
Cryptocurrency mining certainly isn't the only application where having a powerful GPU can help system performance. We set out to see how much of an advantage the Radeon Vega 11 graphics in the Ryzen 5 2400G provided over the significantly less powerful UHD 630 graphics in the Intel i5-8400.
| Test System Setup | |
|---|---|
| CPU | AMD Ryzen 5 2400G, Intel Core i5-8400 |
| Motherboard | Gigabyte AB350N-Gaming WiFi, ASUS STRIX Z370-E Gaming |
| Memory | 2 x 8GB G.SKILL FlareX DDR4-3200 (all memory running at 3200 MHz) |
| Storage | Corsair Neutron XTi 480 SSD |
| Graphics Card | AMD Radeon Vega 11 Graphics, Intel UHD 630 Graphics |
| Graphics Drivers | AMD 17.40.3701 |
| Power Supply | Corsair RM1000x |
| Operating System | Windows 10 Pro x64 RS3 |
Before we take a look at some real-world examples of where a powerful GPU can be utilized, let's look at the relative power of the Vega 11 graphics on the Ryzen 5 2400G compared to the UHD 630 graphics on the Intel i5-8400.
SiSoft Sandra is a suite of benchmarks covering a wide array of system hardware and functionality, including an extensive range of GPGPU tests, which we are looking at today.
Comparing the raw shader performance of the Ryzen 5 2400G and the Intel i5-8400 provides a clear snapshot of what we are dealing with. In every precision category, the Vega 11 graphics in the AMD part are significantly more powerful than the Intel UHD 630 graphics. This all combines to provide a 175% increase in aggregate shader performance over Intel for the AMD part.
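One reasonable way to roll the per-precision gaps into a single figure like the 175% quoted above is a geometric mean of the AMD/Intel score ratios, so no one category dominates. The sketch below uses hypothetical scores purely to illustrate the arithmetic; these are not the actual Sandra numbers from this review:

```python
from math import prod

# Hypothetical Sandra-style GPGPU shader scores (Mpix/s) -- illustrative
# placeholders only, not the measured results from this article.
scores = {
    "half-float":   {"vega11": 1100.0, "uhd630": 400.0},
    "single-float": {"vega11": 1050.0, "uhd630": 380.0},
    "double-float": {"vega11":   70.0, "uhd630":  26.0},
}

# Ratio of AMD to Intel in each precision category.
ratios = [v["vega11"] / v["uhd630"] for v in scores.values()]

# Geometric mean of the ratios, expressed as a percent increase.
geomean = prod(ratios) ** (1.0 / len(ratios))
print(f"aggregate advantage: {(geomean - 1.0) * 100.0:.0f}%")
```

With these placeholder scores the aggregate lands in the same ballpark as the review's 175% figure, which is the point: a consistent ~2.7x gap in every precision class compounds into one headline number.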
Now that we've taken a look at the theoretical power of these GPUs, let's see how they perform in real-world applications.
Subject: Processors | February 21, 2018 - 11:22 AM | Ryan Shrout
Tagged: amd, ryzen, EPYC, embedded, ryzen v1000, epyc 3000
Continuing its push to bring modern processor and graphics designs to as many of its targeted market segments as possible, AMD announced today two new families that address the embedded processor space. The company has already seen double-digit year-over-year growth in revenue from embedded markets, but the release of the Epyc Embedded 3000 and Ryzen Embedded V1000 families creates significant additional opportunity for the company.
Embedded markets are unique from traditional consumer and enterprise channels as they address areas from military and aerospace applications to networking hardware and storage devices to retail compute and even casino and arcade gaming. These markets tend to be consistent and stable without the frequent or dramatic swings in architectural preference or market share that we often witness in consumer PCs. As AMD continues to grow and look for stable sources of adjacent income, embedded processors are a critical avenue and one that I believe AMD has distinct advantages in.
Research firm IDC estimates the market that AMD can address with this pair of chip families at $14-15B annually. The largest portion of that ($11-12B) covers the storage and networking infrastructure systems that the Epyc 3000 line will target. The remainder includes IoT gateways, medical systems, and casino gaming hardware, and is the purview of the Ryzen V1000.
Competitors in this space include Intel (with its Xeon D-series and Core family of chips) and many Arm-based designs that focus on low power integration. Intel has the most potential for immediate negative impact with AMD’s expansion in the embedded markets as the shared architecture and compatibility mean customers can more easily move between platforms. AMD is positioning both parts directly against Intel with proposed advantages in value and performance, hoping to move embedded customers to the combined AMD solution.
The Ryzen V1000 family combines the company’s recent processor and graphics architectures on a single chip, similar in function to the consumer Ryzen design that was released for notebook and desktop PCs. For the embedded customers and devices being targeted, this marks a completely new class of product with two key benefits over competing solutions. First, it allows for smaller and cooler system designs (critical for the cramped working environments of the embedded space) while increasing maximum performance.
Second, the V1000 allows integrators to move down from a combination of an Intel processor and a separate, discrete graphics chip to a single-chip design. This raises the ASP (average selling price) for AMD, increasing revenue and potential margin, while lowering the total price that customers pay for system components.
While AMD has struggled to promote the value of the higher-performance graphics on its new processors to the consumer and business space, where it holds a significant advantage over Intel, in the embedded markets that additional graphics performance is well understood. Casino gaming often drives multiple high-resolution displays from a single device, with demand for high-quality rendered 3D graphics, which the V1000 can now provide in a single-chip design. The same is true of medical imaging hardware, including ultrasound machines for women’s healthcare and cardiovascular diagnostics.
The Epyc Embedded 3000 family does not include integrated graphics on-chip and instead offers higher core performance and performance per dollar compared to competing Intel solutions. AMD believes that the Epyc 3000 will double the total addressable market for the company when it comes to networking and storage infrastructure.
AMD has previously disclosed its partnership with Cisco, which included AMD-built processor options for some families of switches and other networking gear. As demand for edge computing grows (systems that sit near the consumer or enterprise side of a network to aid the computational needs of high-speed networks), AMD is offering a compelling counter to the Intel Xeon family of processors.
Both the Epyc 3000 and Ryzen V1000 chips represent the first time AMD has targeted embedded customers with specific features and capabilities at the hardware level. During the design phase of its Zen CPU and Vega graphics architecture, business unit leaders included capabilities like multiple 10-gigabit network integration, support of four 4K display outputs, ECC memory (error correction capability for mission-critical applications), and unique embedded-based interfaces for external connectivity.
While these were not needed for the consumer segments of the market, and weren’t exposed in those hardware launches, they provide crucial benefits for AMD customers when selecting a chip for embedded markets.
Subject: Processors | February 19, 2018 - 08:33 PM | Scott Michaud
Tagged: amd, Zen, Zen 2
WCCFTech found some rumors (scroll down near the bottom of the linked article) about AMD’s upcoming “Rome” generation of EPYC server processors. The main point is that users will be able to buy up to 64 cores (128 threads) on a single packaged processor. This increase in core count will likely be due to the process node shrink, from 14nm down to GlobalFoundries’ 7nm. This is not the same as the upcoming second-generation Zen processors, which are built on 12nm and expected to ship in a few months.
Rome is probably not coming until 2019.
But when it does… up to 128 threads. Also, if I’m understanding WCCFTech’s post correctly, AMD will produce two different dies for this product line: one with 12 cores per die (x4 for 48 cores per package) and one with 16 cores per die (x4 for 64 cores per package). This is interesting because it suggests AMD expects to sell enough volume to warrant multiple chip designs, rather than making a single flagship and filling in SKUs by bin sorting and disabling the cores that require abnormally high voltage for a given clock rate. (That will happen too, as usual, but from two intended designs instead of just the flagship.)
If it works out as AMD plans, this could be an opportunity to take prime market share away from Intel and its Xeon processors. The second chip might let AMD get into second-tier servers with an even more cost-efficient part, because a 12-core die will bin better than a 16-core one and yield more dies from a wafer anyway.
Again, this is a common practice from a technical standpoint; the interesting part is that it could work out well for AMD from a strategic perspective. The timing and market might be right for EPYC in various classes of high-end servers.
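To make the binning argument above concrete, here is a back-of-the-envelope sketch using a simple Poisson yield model: the fraction of defect-free dies is e^(-A·D) for die area A and defect density D. The die areas and defect density below are invented for illustration; AMD has published none of these figures:

```python
from math import exp, pi

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Fraction of dies expected to be defect-free: Y = e^(-A*D)."""
    return exp(-die_area_mm2 * defects_per_mm2)

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Crude gross-die estimate (ignores edge-loss geometry)."""
    wafer_area = pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2)

D = 0.002  # assumed defects per mm^2 -- purely illustrative
for name, area in (("12-core die", 160), ("16-core die", 210)):
    good = dies_per_wafer(area) * poisson_yield(area, D)
    print(f"{name} (~{area} mm^2): ~{good:.0f} good dies per wafer")
```

Whatever the real numbers are, the shape of the result holds: the smaller die fits more candidates on a wafer and each candidate is more likely to be clean, so a dedicated 12-core design compounds both advantages.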
Memory speed is not a factor that the average gamer thinks about when building their PC. For the most part, memory performance hasn't had much of an effect on modern processors running high-speed memory such as DDR3 and DDR4.
With the launch of AMD's Ryzen processors last year, a platform emerged that was more sensitive to memory speeds. By running Ryzen processors with higher-frequency and lower-latency memory, users can see significant performance improvements, especially in 1080p gaming scenarios.
However, the Ryzen processors are not the only ones to exhibit this behavior.
Gaming on integrated GPUs is a perfect example of a memory-starved situation. Take, for instance, the new AMD Ryzen 5 2400G and its Vega-based GPU cores. On a full Vega 56 or 64 card, these Vega cores are fed by blazingly fast HBM2 memory. However, due to constraints such as die space and cost, this processor does not integrate HBM.
Instead, the CPU portion and the graphics portion of the APU must both depend on the same pool of DDR4 system memory. DDR4 is significantly slower than memory traditionally found on graphics cards, such as GDDR5 or HBM, so APU performance is usually memory limited to some extent.
In the past, we've done memory speed testing with AMD's older APUs; however, with the launch of the new Ryzen and Vega-based R3 2200G and R5 2400G, we decided to take another look at this topic.
For our testing, we are running the Ryzen 5 2400G at three different memory speeds: 2400 MHz, 2933 MHz, and 3200 MHz. While the maximum supported JEDEC memory standard for the R5 2400G is 2933, the memory provided by AMD for our processor review supports overclocking to 3200 MHz just fine.
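For context on how much bandwidth each of those settings actually buys, the peak theoretical figure for DDR4 is just transfer rate times bus width times channel count. A quick sketch (the GDDR5/HBM2 comparison points in the comments are typical assumed figures, not measurements from this article):

```python
def ddr4_bandwidth_gbs(mt_per_s, channels=2, bus_bytes=8):
    """Peak theoretical bandwidth in GB/s: MT/s x 8 bytes/transfer x channels."""
    return mt_per_s * bus_bytes * channels / 1000

# The three memory speeds tested on the Ryzen 5 2400G, dual channel.
for speed in (2400, 2933, 3200):
    print(f"DDR4-{speed}: {ddr4_bandwidth_gbs(speed):.1f} GB/s")

# Assumed comparison points for discrete-card memory (illustrative):
#   GDDR5, 128-bit bus at 7 GT/s  -> about 112 GB/s
#   HBM2 on Vega 56 (2048-bit)    -> about 410 GB/s
```

Even at 3200 MT/s the shared pool tops out around 51 GB/s, and the CPU cores and Vega 11 GPU are splitting it, which is why each step up in memory speed can translate so directly into gaming performance.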
Subject: Processors | February 16, 2018 - 08:52 AM | Sebastian Peak
Tagged: tim, thermal paste, Ryzen 5 2400G, ryzen, overclocking, der8aur, delidding, APU, amd
Overclocker der8auer has posted a video demonstrating the delidding process of the AMD Ryzen 5 2400G, and his findings on its effect on temperatures and overclocking headroom.
The delidded Ryzen 5 2400G (image credit der8auer via YouTube)
The full video is embedded below:
The results are interesting, but disappointing from an overclocking standpoint, as he was only able to increase his highest frequency by 25 MHz. Thermals were far more impressive, as the liquid metal used in place of the factory TIM did lower temps considerably.
Here are his temperature results for both the stock and overclocked R5 2400G:
The process was actually quite straightforward, and used an existing Intel delidding tool (the Delid Die Mate 2) along with a small piece of acrylic to spread the force against the PCB.
Delidding the Ryzen 5 2400G (image credit der8auer via YouTube)
The Ryzen 5 2400G is using thermal paste and is not soldered, which enables this process to be reasonably safe - or as safe as delidding a CPU and voiding your warranty ever is. Is it worth it for lower temps and slight overclocking gains? That's up to the user, but integration of an APU like this invites small form-factors that could benefit from the lower temps, especially with low-profile air coolers.
Subject: Processors | February 13, 2018 - 03:10 PM | Jeremy Hellstrom
Tagged: 2200G, 2400G, amd, raven ridge, ryzen, Zen
Ryan covered the launch of AMD's new Ryzen 5 2400G and Ryzen 3 2200G, which you should have already checked out. The current options on the market offer more setup variations and tests than there is time in the day, which is why you should check out the links below to get a full view of how these new APUs perform. For instance, The Tech Report benchmarked using DDR4-3200 CL14 RAM, which AMD's architecture can take advantage of. As far as productivity and CPU-bound tasks go, Intel's i5-8400 does come out on top; however, it is a different story for the Vega APU. The 11 CUs of the 2400G perform at the same level as or slightly better than a GT 1030, which could make this very attractive for a gamer on a budget.
"AMD's Ryzen 5 2400G and Ryzen 3 2200G bring Raven Ridge's marriage of Radeon Vega graphics processors and Zen CPU cores to the desktop. Join us as we see what a wealth of new technology in one chip means for the state of gaming and productivity performance from the same socket."
Here are some more Processor articles from around the web:
- AMD Ryzen R3 2200G & R5 2400G Raven Ridge APU @ Modders-Inc
- AMD Ryzen 3 2200G With Radeon Vega 8 @ TechARP
- AMD Ryzen 3 2200G 3.5 GHz with Vega 8 Graphics @ TechPowerUp
- AMD Ryzen 5 2400G & Ryzen 3 2200G @ Techspot
- AMD Ryzen 5 2400G & Ryzen 3 2200G Raven Ridge @ Kitguru
- AMD Ryzen 3 2200G and Ryzen 5 2400G @ Guru of 3D
- AMD Ryzen 5 2400G 3.6 GHz with Vega 11 Graphics @ TechPowerUp
Raven Ridge Desktop
As we approach the one-year anniversary of the release of the Ryzen family of processors, the full breadth of the releases AMD put forth inside of 12 months is more apparent than ever. Though I feel like I have written summations of 2017 for AMD numerous times, it still feels like an impressive accomplishment as I reflect for today’s review. Starting with the Ryzen 7 family of processors targeting enthusiasts, AMD iterated through Ryzen 5, Ryzen 3, Ryzen Threadripper, Ryzen Pro, EPYC, and Ryzen Mobile.
Today, though it is labeled as part of a 2000 series, we are completing what most would consider the first full round of the Ryzen family. As the first consumer desktop APUs (AMD’s term for a processor with tightly integrated on-die graphics), the Ryzen 5 2400G and the Ryzen 3 2200G look very much like the Ryzen parts before them and like the Ryzen mobile APUs that we previously looked at in notebook form. In fact, from an architectural standpoint, these are the same designs.
Before diving into the hardware specifications and details, I think it is worth discussing the opportunity that AMD has with the Ryzen with Vega graphics desktop parts. By most estimates, more than 30% of the desktop PCs sold around the world ship without a discrete graphics card installed. This means they depend on the integrated graphics in the processor to handle general compute and any and all gaming that might happen locally. Until today, AMD has been unable to address that market with its current family of Ryzen processors, as they require discrete graphics solutions.
While most of our readers fall into the camp of not just using a discrete solution but requiring one for gaming purposes, there are a lot of locales and situations where the Ryzen APU is going to provide more than enough graphics horsepower. The emerging markets in China and India, for example, are regularly using low-power systems with integrated graphics, often based on Intel HD Graphics or previous generation AMD solutions. These gamers and consumers will see dramatic increases in performance with the Zen + Vega solution that today’s processor releases utilize.
Let’s not forget about secondary systems, small form factor designs, and PCs designed for your entertainment center as possible outlets for Ryzen APUs, even for the most hardcore of enthusiasts. Mom or Dad need a new PC for basic tasks on a budget? Again, AMD is hoping to make the case today for those sales.