Beating AMD and Analyst Estimates
January 30th has rolled around and AMD has released their Q4 2017 results. The results were positive and somewhat unexpected. I had been curious how the company fared and was waiting for these results to compare them to the relatively strong quarter that Intel experienced. At the Q3 earnings call AMD was not entirely bullish about how Q4 would go. They knew that it was going to be a down quarter compared to an unexpectedly strong third quarter, but they were unsure exactly how it would pan out. The primary reason Q4 was not going to be as strong was the lower royalty income that AMD was expecting from their Semi-Custom Group. Q4 has traditionally been weak for that group, as all of the buildup for the holiday season comes from Q1 and Q2 ramping of the physical products that are integrated into consoles.
The results exceeded AMD’s and analysts’ expectations. Analysts were expecting revenue in the $1.39B range, but actual revenue came in at a relatively strong $1.48B. Not only was the quarter stronger than expected, but AMD was able to pull out another positive net income of $61M. It has been a while since AMD was able to post back-to-back profitable quarters. This allowed AMD to finish the year with a net profit of $43M, where in 2016 AMD posted a loss of $497M. Revenue for 2017 as a whole was $1.06B higher than in 2016. AMD has been historically lean in terms of expenses for the past few years, and a massive boost in revenue has allowed them to invest in R&D as well as more aggressively ramp up their money-making products to compete more adequately with Intel, who is having their own set of issues right now with manufacturing and security.
Another Strong Quarter for the Giant
This afternoon Intel released their Q4 2017 financial results. The quarter was higher in revenue than analysts expected. The company made $17.1B US in revenue and recorded non-GAAP earnings of $1.08 per share. On the surface it looks like Intel had another good quarter, one that was expected by the company and analysts alike. Underneath the surface these results reveal a few more interesting things about the company as well as the industry it exists in.
We have been constantly hearing about how the PC market is weak and how it will start to negatively affect those companies whose primary products go into these machines. Intel did see a 2% drop in revenue year on year from their Client Computing Group, but it certainly did not look like a collapse. We can also speculate that part of the drop is from a much more competitive AMD and their strong performing Ryzen processors. These indications point to the PC market still being fairly stable and robust, even though it isn't growing at the rate it once did.
The Data Center Group was quite the opposite. It grew around 20% over the same timespan. Intel did not provide more detail, but it seems that datacenters and cloud computing are still growing at a tremendous rate. With the proliferation of low power devices yet increased computing needs, data centers are continuing to expand and purchase the latest and greatest CPUs from Intel. AMD's EPYC has not been rolled out aggressively so far, but 2H 2018 should shed a lot more light on where this part of the market is going.
Subject: Processors | November 16, 2017 - 04:38 PM | Jeremy Hellstrom
Tagged: amd, EPYC, 7401P
AMD's new EPYC server chips range in price from around $4000 for the top end 32 core 7601 to around $500 for the 8 core 7251, with the $1000, 24 core EPYC 7401P sitting towards the middle of the family. Phoronix have tested quite a few of these processors, today focusing on the aforementioned 7401P, testing it against several other EPYC processors, a number of Xeon E3 and E5 models, and a Xeon Gold and a Silver. To say that AMD showed up Intel in multithreaded performance is somewhat of an understatement, as you can see in their benchmarks. Indeed, in many cases you need around $5000 worth of Intel CPU to compete with the 7401P, and even then Intel lags behind in many tests. The only shortcoming of the 7401P is that it can only be run in single socket configurations; not that you necessarily need two of these chips!
"We've been looking at the interesting AMD EPYC server processors recently from the high-end EPYC 7601 to the cheapest EPYC 7251 at under $500 as well as the EPYC 7351P that offers 16 cores / 32 threads for only about $750. The latest EPYC processor for testing at Phoronix has been the EPYC 7401P, a 24 core / 48 thread part that is slated to retail for around $1075 USD."
Here are some more Processor articles from around the web:
- AMD EPYC 7551 @ Phoronix
- Core i5-8400 vs. Overclocked Ryzen 5 1600 @ TechSpot
- In Hindsight: Some of the Worst CPU/GPUs Purchases of 2017 @ TechSpot
- The Latest In Our Massive Linux Benchmarking Setup - November 2017 @ Phoronix
- i7-2600K vs. i7-8700K - Is Upgrading Worthwhile? @ Hardware Canucks
Subject: Editorial | October 25, 2017 - 12:43 PM | Josh Walrath
Tagged: Vega, Threadripper, sony, ryzen, Q3, microsoft, EPYC, earnings, amd, 2017
Expectations for AMD’s Q3 earnings were not exactly sky high, but they were trending towards the positive. It seems that AMD exceeded those expectations. The company announced revenue of $1.64 billion, up significantly from the expected $1.52 billion that was the consensus on The Street.
The company also showed a $71 million (GAAP), $110 million (non-GAAP) net for the quarter, which is a 300% increase from a year ago. The reasons for this strong quarter are pretty obvious. Ryzen has been performing well on the desktop since its introduction last spring and sales have been steady, with a marked increase in ASPs. The latest Vega GPUs are competitive in the marketplace, but it does not seem as though AMD has been able to provide as many of these products as they would like. Add to that the effect of coin mining on the prices and availability of these latest AMD graphics units. Perhaps a bigger boost to the bottom line is the introduction of the Epyc and Threadripper CPUs to the mix.
Part of this good news is the bittersweet royalties from the console manufacturers. Both Sony and Microsoft have refreshed their consoles in the past year, and Microsoft is about to release the new Xbox One X to consumers. This has provided a strong boost to AMD’s semi-custom business, but these boosts are also strongly seasonal. The downside is of course that when orders trail off, royalty checks take a severe beating. Consoles have a longer ramp up due to system costs and integration as compared to standalone CPUs or video cards. Microsoft and Sony ordered production of these new parts several quarters ago, so revenue from those royalties typically shows up a quarter sooner than when actual product starts shipping. The lion’s share of royalties is therefore paid in Q3 so that there is adequate supply of consoles for the strong Q4/Holiday season. Since Q1 of the next year is typically the softest quarter, the number of parts ordered by Sony/Microsoft is slashed significantly to make sure that as much of the Holiday inventory as possible is sold rather than left sitting.
Ryzen continues to be strong due to multiple factors. It has competitive single and multi-core performance in a large variety of applications as compared to Intel’s latest. It has a much smaller die size than previous AMD parts such as Bulldozer/Piledriver/Phenom II, so they can fit more chips on a wafer and thereby lower overall costs while maximizing margins. Their product mix is very good from the Ryzen 3 to the Ryzen 7 parts, but is of course still missing the integrated graphics Ryzen parts that are expected either late this year or early next. Overall Ryzen has made AMD far more competitive and the marketplace has rewarded the company.
Vega is in an interesting spot. There have been many rumors that, given the manufacturing costs of the chip (GPU and HBM) along with the board implementations, these products are actually being sold at a small loss. I find that hard to believe, but my gut does not tell me that AMD is making good margins on the product either. This could account for what is generally seen as lower than expected units in the market as well as correspondingly higher prices than expected. The Vega products are competitive with NVIDIA’s 1070 and 1080 products, but those parts are finally starting to settle down closer to MSRP with adequate supplies available for purchase. HBM is an interesting technology with some very acute advantages over standard GDDR5/GDDR5X. However, it seems that both the cost and implementation of HBM at this point in time are still not competitive with having gone the more traditional memory route.
There is no doubt that AMD has done very well this quarter due to the wide variety of parts that are available to consumers. The news is not all great though, and AMD expects Q4 revenues to be down around 15%. This is not exactly unexpected due to the seasonal nature of console sales and the resulting loss of royalties in what should otherwise be a strong quarter. We can still expect AMD to ship plenty of Ryzen parts as well as Vega GPUs. We can also surmise that the integrated Ryzen/Vega APUs, and any potential mobile parts based on those products, will have only a limited impact this quarter.
Q3 was a surprise for many, and a pleasant one at that. While the drop in Q4 is not unexpected, it does sour a bit of the news that AMD has done so well. The share price of AMD has taken a hit due to this news, but we will start to see a clearer picture of how AMD is competing in their core spaces as well as what kind of uptick we can expect from richer Epyc sales throughout the quarter. Vega is still a big question for many, but Holiday season demand will likely keep those products limited and higher in price.
AMD’s outlook overall is quite positive, and we can expect a refresh of Zen desktop parts sometime in 1H 2018 due to the introduction of GLOBALFOUNDRIES' 12nm process, which should give a clock and power uplift to the Zen design. There should also be a little bit of cleanup in the Zen design, much as Piledriver was optimized from Bulldozer. Add in the advantages of the new process and we should see AMD more adequately compete with the Coffee Lake products from Intel, which should be very common by then.
Subject: General Tech | September 21, 2017 - 12:43 PM | Alex Lustenberg
Tagged: z270, windows 10, WD, video, toshiba, ShadowPlay, ryzen, podcast, nvidia, nuc, msi, max-q, Intel, gs63vr, GLOBALFOUNDRIES, gigabyte, EPYC, ansel, 2500U, 12TB
PC Perspective Podcast #468 - 09/21/17
Join us for discussion of AMD Raven Ridge rumors, and Intel's and GLOBALFOUNDRIES' new fabrication technology!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Josh Walrath, Sebastian Peak, Allyn Malventano
Peanut Gallery: Ken Addison, Alex Lustenberg
Program length: 1:39:59
Week in Review:
News items of interest:
Hardware/Software Picks of the Week
Subject: Systems | September 19, 2017 - 06:57 PM | Jeremy Hellstrom
Tagged: 7351P, amd, EPYC
Patrick Kennedy of Serve The Home has just published his server-centric EPYC test results, and in his own words, "while AMD is very competitive at the high-end, its mainstream offerings are competing with de-featured Xeon Silver CPUs and absolutely obliterate what Intel is offering."
The EPYC 7351P, which should sell for roughly $750, was tested against Intel's Xeon Silver 4108, which runs about $440, in various server applications such as GROMACS, OpenSSL, and even a chess benchmark. The tests were done with single socket EPYCs, the "P" series, which are offered at a significant discount compared to AMD's dual socket family, benchmarked against Intel's Xeon Silver in both single and dual socket configurations. The only time the Xeons' performance came close to the single socket 7351P was when they were configured in dual socket systems, and even then AMD's EPYC chip came out on top, often by a significant margin.
Raw performance is not the only advantage AMD offers with EPYC; the feature set also far outstrips the somewhat watered-down Xeon Silver family. The single socket 7351P offers 128 PCIe lanes while a dual socket Xeon Silver system can only offer 96, and EPYC can handle up to 2TB of DDR4-2666 across its eight channel memory controller whereas Intel is limited to 1.5TB of DDR4-2400 in a dual socket server, with the Silver line also lacking the dual AVX-512 units and Omni-Path fabric support found further up Intel's stack.
Intel does have some advantages that come with the maturity of their platform, including superb NVMe hotswap support as well as QuickAssist, and they do have higher end Xeon Gold chips which include the aforementioned features that the Xeon Silver line lacks; however, those are also significantly more expensive than EPYC.
You can expect more tests to appear in the future, as STH invested a lot of money in new hardware to test, and since the tests can take days to complete there will be some delay before they have good data to share. It is looking very positive for AMD's EPYC family; they offer an impressive amount of value for the money, and it will be interesting to see how Intel reacts.
Subject: Processors | September 18, 2017 - 05:13 PM | Jeremy Hellstrom
Tagged: linux, EPYC 7601, EPYC
Phoronix have been hard at work testing out AMD's new server chip, specifically the 2.2/2.7/3.2GHz EPYC 7601 with 32 physical cores. The frequency numbers now have a third member, which is the top frequency all 32 cores can hit simultaneously; for this processor that would be 2.7GHz. Benchmarking server processors is somewhat different from testing consumer CPUs; gaming performance matters far less than performance in specific productivity applications. Phoronix started their testing of EPYC, in both NUMA and non-NUMA configurations, comparing against several Xeon models, and the performance delta is quite impressive, sometimes leaving even a system with dual Xeon Gold 6138s in the dust. They also followed up with a look at how EPYC compares to Opteron, AMD's last server offering. The evolution is something to behold.
"By now you have likely seen our initial AMD EPYC 7601 Linux benchmarks. If you haven't, check them out, EPYC does really deliver on being competitive with current Intel hardware in the highly threaded space. If you have been curious to see some power numbers on EPYC, here they are from the Tyan Transport SX TN70A-B8026 2U server. Making things more interesting are some comparison benchmarks showing how the AMD EPYC performance compares to AMD Opteron processors from about ten years ago."
Here are some more Processor articles from around the web:
- Core i7 vs. Ryzen 5 with Vega 64 & GTX 1080 @ TechSpot
- AMD Threadripper 1950X Linux Benchmarks @ Phoronix
- The Top 5 Best CPUs of All Time @ [H]ard|OCP
Subject: Cases and Cooling | August 3, 2017 - 08:41 PM | Scott Michaud
Tagged: noctua, amd, Threadripper, EPYC
Noctua has announced three new heatsinks for AMD’s new high-end CPU platforms, Threadripper and EPYC. If you’ve been following the company, or Morry’s motherboard reviews, then you know that these coolers are huge (and effective).
Apparently the main difference is the contact surface, 70mm x 56mm, to accommodate the processor’s large package. AMD connects multiple dies together with their Infinity Fabric, which results in a huge total surface area. The cooler comes in three sizes, corresponding to the fan that’s intended to be used with it: 140mm (NH-U14S TR4-SP3), 120mm (NH-U12S TR4-SP3), and 92mm (NH-U9S TR4-SP3).
The two “smallest” sizes, NH-U12S and NH-U9S, are both expected to retail for $69.90 USD, so I guess choose whichever makes the most sense for your system. The largest one, the NH-U14S, is $10 more expensive at $79.90 USD. They should be available by the end of the month.
Subject: Editorial | July 25, 2017 - 10:48 PM | Josh Walrath
Tagged: Vega, Threadripper, ryzen, RX, Results, quarterly earnings, Q2 2017, EPYC, amd
The big question that has been going through the minds of many is how much marketshare did AMD take back, and how would that affect the bottom line? We now know the answer to the second half of that question, but it is still up in the air how much AMD has taken from Intel. We know that they have, primarily due to the amount of money that AMD has made. Now we just need to find out how much.
Q2 revenue surpassed the expectations of both the Street and AMD's own predictions. It was not a mind-blowing quarter, but it was a solid one for what had been a slowly sinking AMD. Q2 is of course very important for AMD, as it is the first full quarter of revenue from Ryzen parts and saw the introduction of the refreshed RX 500 series of GPUs.
The Ryzen R7 and R5 parts have been well received by press and consumers alike. While not a completely overwhelming product in every aspect as compared to Intel’s product stack, Ryzen does offer an incredibly strong dollar-per-thread value proposition. Consumers can purchase an 8 core/16 thread part with competitive clock speeds and performance for around $300 US. That same price point from Intel will give a user better single threaded and gaming performance, but falls short at 4 cores/8 threads.
The latest RX series of GPUs are slightly faster refreshes of the previous RX 400 series of cards and exist in the same price range of those previous cards. These have been popular with AMD enthusiasts as they deliver solid performance for the price. They are also quite popular with the coin miners due to the outstanding hash rate that they offer at their respective price points as compared to NVIDIA GPUs.
AMD ended up reporting GAAP revenue of $1.22B with a net income of -$16M. Non-GAAP net income came in at a positive $19M. This is a significant boost from Q1 figures which included a revenue of $984M and a net income of -$73M. The tail end of Q1 did include some Ryzen sales, but not nearly enough to offset the losses that they accumulated. These beat out the Street numbers by quite a bit, hence the uptick in AMD’s share price after hours.
The server/semi-custom group did well, but is still down some 5% as compared to last year. This is primarily due to seasonal weaknesses with the consoles. Microsoft will be ramping up production of their Xbox One X and AMD will start to receive royalties from that production later this year. AMD has seen its share of the server and datacenter market tumble from years past to where it now sits at 1% or below. AMD expects to change this trend with EPYC and has recorded the initial revenue from EPYC datacenter processor shipments.
We cannot emphasize enough how much the CPU/GPU group has grown over the past year. Revenue from that group has increased by 51% since last year. We do need to temper that with the reality that at that time AMD had not released the new RX series of GPUs, nor did they have Ryzen. Instead, it was all R5/R7 3x0 and Fury products as well as the FX CPUs based on Piledriver and Excavator cores. It would honestly be hard for things to get worse than that point in time. Still, a 51% improvement with Ryzen and the RX 5x0 series of chips is greater than anyone really expected. We must also consider that Q2 is still one of the slowest quarters in a year.
AMD expects next quarter to grow well beyond expectations. The company is estimating that revenue will grow by 23%, plus or minus 3%. If this holds true, AMD will be looking at a $1.5B quarter, something that has not been seen for some time (especially post foundry split). The product stack that they will continue to introduce is quite impressive. AMD will continue with the Ryzen R7 and R5 parts, but will also introduce the first R3 parts for the budget market. RX Vega will be introduced next week at Siggraph. Threadripper will be released to the wild, as will the X399 chipset. EPYC is already shipping and they expect that product to grow steadily. Ryzen Pro and then the mobile APUs will follow later in the 2nd half of the year. Semi-custom will get a boost when Microsoft starts shipping Xbox One X consoles.
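The guidance arithmetic checks out; a quick sketch of the math (the 23% ± 3% growth figures are AMD's guidance and the $1.22B base is the reported Q2 GAAP revenue, both from above; the rest is just multiplication):

```python
# Projecting Q3 2017 revenue from AMD's guidance of 23% growth, +/- 3%,
# applied to the reported Q2 GAAP revenue of $1.22B.
q2_revenue = 1.22  # billions USD

low = q2_revenue * (1 + 0.23 - 0.03)   # bottom of guidance range
mid = q2_revenue * (1 + 0.23)          # midpoint
high = q2_revenue * (1 + 0.23 + 0.03)  # top of guidance range

print(f"guidance range: ${low:.2f}B to ${high:.2f}B, midpoint ${mid:.2f}B")
# -> guidance range: $1.46B to $1.54B, midpoint $1.50B
```

The midpoint lands right on the $1.5B figure quoted above.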
What a change a year makes. Lisa Su and the gang have seemingly turned the boat around with a lot of smart moves, a lot of smart people, and a lot of effort. They are not exactly on Easy Street yet, but they are moving in the right direction. Ryzen has been a success with press and consumers and puts them on a level plane with Intel in overall performance and power. The RX series continues to be popular and is selling well (especially with miners). AMD still has not caught up with demand for those parts, but I get the impression that they are being fairly conservative there, not flooding the market with RX chips in case coin mining bottoms out again. The demand is at least making miners and retailers happy, though it could be causing some hard feelings among AMD enthusiasts who just want a gaming card at a reasonable price.
AMD continues to move forward and has recorded an impressive quarter. Next quarter, if it falls in line with expectations, should help return AMD to profitability with some real momentum moving forward in selling product to multiple markets where it has not been a power for quite some time. The company has been able to tread water for the past few years, but has planned far enough ahead to actually release competitive products at good prices to regain marketshare and achieve profitability again. 2017 has been a good year for AMD, and it looks to continue to Q3 and Q4.
Performance not two-die four.
When designing an integrated circuit, you are attempting to fit as much complexity as possible within your budget of space, power, and so forth. One harsh limitation for GPUs is that, while your workloads could theoretically benefit from more and more processing units, the number of usable chips from a batch shrinks as designs grow, and the reticle limit of a fab’s manufacturing node is basically a brick wall.
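To make that yield pressure concrete, here is a minimal sketch using the classic Poisson yield model, yield = exp(−area × defect density). The wafer size and defect density below are my own illustrative assumptions, not figures from NVIDIA or any fab:

```python
import math

WAFER_DIAMETER_MM = 300   # assumed: standard 300mm wafer
DEFECTS_PER_MM2 = 0.001   # assumed defect density, purely illustrative

def dies_per_wafer(die_area_mm2: float) -> float:
    """Rough candidate die count, ignoring edge loss and scribe lines."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return wafer_area / die_area_mm2

def good_dies(die_area_mm2: float) -> float:
    """Expected working dies under a Poisson yield model."""
    yield_rate = math.exp(-die_area_mm2 * DEFECTS_PER_MM2)
    return dies_per_wafer(die_area_mm2) * yield_rate

# Doubling the die area more than halves the number of usable chips,
# because the candidate count drops linearly AND yield drops exponentially.
for area in (100, 200, 400, 800):
    print(f"{area:>4} mm^2: {good_dies(area):7.1f} good dies")
```

Real fabs use more elaborate models (Murphy, Seeds, negative binomial) and closely guarded defect densities, but the qualitative point stands: usable chips per wafer fall off faster than linearly as the die grows, and the reticle limit caps die size outright.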
What’s one way around it? Split your design across multiple dies!
NVIDIA published a research paper discussing just that. In their diagram, they show two examples. In the first diagram, the GPU is a single, typical die that’s surrounded by four stacks of HBM, like GP100; the second configuration breaks the GPU into five dies, four GPU modules and an I/O controller, with each GPU module attached to a pair of HBM stacks.
NVIDIA ran simulations to determine how this chip would perform and, in various workloads, found that it out-performed the largest possible single-chip GPU by about 45.5%. They also scaled up the single-chip design until it had the same number of compute units as the multi-die design, even though this wouldn’t work in the real world because no fab could actually lithograph it. Regardless, that hypothetical, impossible design was only ~10% faster than the actually-possible multi-chip one, showing that the overhead of splitting the design is only around that much, according to their simulation. The multi-chip design was also faster than the multi-card equivalent by 26.8%.
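Since those three comparisons use different baselines, it helps to normalize everything to the largest buildable monolithic GPU. This is just my own arithmetic on the percentages quoted above, not numbers taken directly from the paper:

```python
# Relative performance, normalized so the largest buildable single die = 1.0,
# using the three percentage deltas reported from NVIDIA's simulations.
largest_monolithic = 1.0
multi_chip = largest_monolithic * 1.455   # MCM beats the buildable die by ~45.5%
hypothetical_giant = multi_chip * 1.10    # unbuildable scaled-up die, ~10% faster
multi_card = multi_chip / 1.268           # MCM beats the multi-card setup by 26.8%

print(f"multi-chip module:      {multi_chip:.3f}x")
print(f"hypothetical giant die: {hypothetical_giant:.3f}x")
print(f"multi-card equivalent:  {multi_card:.3f}x")
```

Put this way, the ordering is clear: the impossible giant die sits on top, the multi-chip module lands within ~10% of it, and even the multi-card setup beats a single buildable die.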
While NVIDIA’s simulations, run on 48 different benchmarks, have accounted for this, I still can’t visualize how this would work in an automated way. I don’t know how the design would automatically account for fetching data that’s associated with other GPU modules, as this would probably be a huge stall. That said, they spent quite a bit of time discussing how much bandwidth is required within the package, and figures of 768 GB/s to 3TB/s were mentioned, so it’s possible that it’s just the same tricks as fetching from global memory. The paper touches on the topic several times, but I didn’t really see anything explicit about what they were doing.
If you’ve been following the site over the last couple of months, you’ll note that this is basically the same approach AMD is taking with Threadripper and EPYC. The main difference is that CPU cores are isolated, so sharing data between them is explicit. In fact, when that product was announced, I thought, “Huh, that would be cool for GPUs. I wonder if it’s possible, or if it would just end up being Crossfire / SLI.”
Apparently not? It should be possible?
I should note that I doubt this will be relevant for consumers. The GPU is the most expensive part of a graphics card. While the thought of four GP102-level chips working together sounds great for 4K (which is 4x1080p in resolution) gaming, quadrupling the expensive part sounds like a giant price-tag. That said, the market of GP100 (and the upcoming GV100) would pay five-plus digits for the absolute fastest compute device for deep-learning, scientific research, and so forth.
The only way I could see this working for gamers is if NVIDIA finds the sweet-spot for performance-to-yield (for a given node and time) and they scale their product stack with multiples of that. In that case, it might be cost-advantageous to hit some level of performance, versus trying to do it with a single, giant chip.
This is just my speculation, however. It’ll be interesting to see where this goes, whenever it does.