Subject: General Tech | February 25, 2019 - 12:48 PM | Jeremy Hellstrom
Tagged: vega 10, Sunny Cove, rumour, Iris Plus Graphics 940, Intel, ice lake
If you liked Jim's example of a bad chart on the podcast, you are going to love these leaked Intel Ice Lake graphics benchmarks. At their root, the as-yet-unreleased Iris Plus Graphics 940 portion of the APU is faster than AMD's Vega 10, which was released in 2017. This should not shock anyone.
The numbers at The Inquirer show just how much salt you should take this with. The frequently posted 77.41% performance advantage comes from comparing a coming generation of GPU against a previous one, and it drops to about 44% when a specific test which favours Intel is excluded. Remember that AMD and Intel both have tests which favour their architecture, and keep that in mind when you are reading PR from either company.
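As a quick illustration of why one lopsided sub-test matters so much to a headline average, here is a sketch with entirely made-up scores (these are not the real GFXBench numbers):

```python
# Hypothetical benchmark scores; the outlier test skews the mean advantage.
baseline = {"Test A": 100, "Test B": 100, "Test C": 100, "Outlier": 100}
contender = {"Test A": 130, "Test B": 125, "Test C": 135, "Outlier": 320}

def avg_advantage(new, old, exclude=()):
    """Mean per-test advantage of `new` over `old`, in percent."""
    tests = [t for t in new if t not in exclude]
    return sum((new[t] / old[t] - 1) * 100 for t in tests) / len(tests)

print(avg_advantage(contender, baseline))                       # 77.5
print(avg_advantage(contender, baseline, exclude=("Outlier",)))  # 30.0
```

One test with a 3.2x result is enough to more than double the reported average lead, which is exactly the dynamic the Inquirer's numbers suggest.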
When you compare Intel's scores to AMD's current Vega 11, the advantage drops to a hair under 2%, and Intel falls behind entirely when you don't order a Manhattan.
"The incoming part, also referred to as the Iris Plus Graphics 940, is, on average, 77.41 per cent faster than Gen9 in the GFXBench 5.0 benchmark and around 62.97 per cent faster than AMD's Vega 10 graphics."
Here is some more Tech News from around the web:
- Microsoft Announces HoloLens 2 Mixed Reality Headset For $3,500 @ Slashdot
- ZX Spectrum Vega+ 'backer'? Nope, you're now a creditor – and should probably act fast @ The Register
- SD Association Unveils microSD Express Format That Promises Transfer Speeds of Up To 985 MB/s @ Slashdot
- Linus Torvalds pulls pin, tosses in grenade: x86 won, forget about Arm in server CPUs, says Linux kernel supremo @ The Register
- LG Announces G8 ThinQ Smartphone That Uses 'Advanced Palm Vein Authentication' Tech To Unlock @ Slashdot
- OnePlus 5G phone first look: Firm shows off Snapdragon 855 prototype @ Ars Technica
- You can now run Android on the Nintendo Switch (but you probably don't want to) @ The Inquirer
Subject: Mobile | January 24, 2018 - 12:20 PM | Ken Addison
Tagged: vega APU, vega 8, vega 10, swift 3, ryzen mobile, raven ridge, Lenovo, ideapad 720s, amd, acer, 2700u, 2500U
Last October, when AMD launched their mobile-oriented Ryzen Processor with Radeon Vega Graphics product line (Raven Ridge), they talked about several different notebooks that would be shipping with these new parts. However, up until now, there has only been one officially launched and shipping product—the HP Envy x360.
We have an article on the performance of the Ryzen 5 2500U and the HP Envy x360 coming very soon, but today Ryzen Mobile-enabled notebooks have become available to order from both Acer and Lenovo.
First, we'll take a look at Acer's offering, the Swift 3.
For anyone who might be familiar with Acer's current notebook offerings, the Ryzen Swift 3 will seem very similar. From the photos, it appears to be nearly identical to its 8th Generation Intel equipped counterpart. That's certainly not a negative though, as I have been impressed with the Intel variant during some recent testing.
|Acer Swift 3|
|Screen||15.6” FHD (1920 x 1080) IPS Display|
|CPU||Ryzen 5 2500U||Ryzen 7 2700U|
|GPU||Integrated Radeon Vega 8||Integrated Radeon Vega 10|
|RAM||8GB DDR4 Dual Channel (non-upgradable)|
|Storage||256GB SSD||512GB SSD|
|Network||802.11ac Dual Band 2x2 MU-MIMO|
|Connections||1 x USB 3.1 Gen 1 Type-C|
|Battery||48Wh, "Up to 8 Hours Battery Life"|
As far as specs are concerned, Acer seems to be checking all of the boxes. RAM will ship in a dual-channel configuration (although we don't know at what speed it will be running; likely 2133 or 2400), but it will not be user-replaceable according to questions answered by an Acer representative on their Amazon listing.
Additionally, Acer seems to be the only notebook maker set to ship the Ryzen 7 2700U variant. Not only does the 2700U give users a 200MHz higher base clock on the CPU portion, but the GPU sees a significant bump. The 2700U gets an upgrade from Vega 8 graphics with 512 stream processors running at 1100MHz to Vega 10 graphics with 640 stream processors at 1300MHz. This should provide a nice performance boost for the extra $200 Acer is asking.
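Using the quoted shader counts and clocks, and the standard GCN figure of two FP32 operations per stream processor per clock, a quick back-of-the-envelope calculation shows roughly where that bump lands on paper:

```python
# Theoretical peak FP32 throughput: stream processors x 2 ops x clock.
def peak_tflops(stream_processors, clock_mhz):
    return stream_processors * 2 * clock_mhz * 1e6 / 1e12

vega8 = peak_tflops(512, 1100)   # Vega 8 in the Ryzen 5 2500U
vega10 = peak_tflops(640, 1300)  # Vega 10 in the Ryzen 7 2700U
print(f"Vega 8: {vega8:.2f} TFLOPS, Vega 10: {vega10:.2f} TFLOPS")
print(f"Theoretical uplift: {(vega10 / vega8 - 1) * 100:.0f}%")
```

That works out to roughly a 48% theoretical uplift; real-world gains will depend heavily on memory bandwidth and thermal headroom in a thin chassis.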
The Acer Swift 3 is set to start shipping on February 9th from Amazon.
Next up is Lenovo, with their Ideapad 720S.
The only 13" Ryzen Mobile option to be announced, the Lenovo Ideapad 720S also shares a lot of design DNA with Lenovo's Intel counterparts.
|Lenovo Ideapad 720S|
|Processor||AMD Ryzen 5 2500U|
|Graphics||Integrated Radeon Vega 8|
|Memory||8GB DDR4-2133 (Single Channel)|
|Screen||13.3-in 1920x1080 IPS|
|Storage||512GB PCIe SSD|
|Camera||720p / Dual Digital Array Microphone|
|Wireless||802.11AC (1x1) + Bluetooth® 4.1|
|Connections||2 x USB 3.0, 1 x USB 3.0 Type-C (DP & Power Delivery), 1 x USB 3.0 Type-C (DP)|
|Battery||48Wh, "Up to 9.5 hours battery life"|
|Dimensions||12.0" x 8.4" x 0.5" / 305.9 x 213.8 x 13.6 mm|
|Weight||2.5 lbs (1.14 kg)|
|OS||Windows 10 Home|
|Price||$1049 - Lenovo.com|
Disappointingly, the Lenovo Ideapad 720S will ship only in a single memory channel configuration. This will significantly affect the performance of the integrated graphics, as it is highly dependent on memory bandwidth. I wouldn't expect the memory to be user upgradable either; it's likely a single DIMM's worth of memory soldered onto the motherboard.
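To put numbers on that, here is the peak-bandwidth arithmetic for a 64-bit (8-byte) DDR4 channel, assuming the DDR4-2133 speed Lenovo lists:

```python
# Peak DDR4 bandwidth: transfer rate (MT/s) x 8 bytes per channel.
def ddr4_bandwidth_gbs(mt_per_s, channels):
    return mt_per_s * 8 * channels / 1000  # GB/s

single = ddr4_bandwidth_gbs(2133, 1)  # ~17.1 GB/s
dual = ddr4_bandwidth_gbs(2133, 2)    # ~34.1 GB/s
print(single, dual)
```

Halving the bandwidth feeding an integrated GPU that shares that pool with the CPU is a meaningful handicap, which is why the single-channel decision stings.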
Curiously, although AMD listed a 2700U variant of the Ideapad 720S in their slides in October, those models have yet to be seen. However, we've seen this before from Lenovo, where they start by shipping a single SKU in the most popular configuration and then fill out the rest of the options shortly after.
The Lenovo Ideapad 720S is available to order now directly from Lenovo, with an estimated shipping date of 5-7 business days.
At a price premium above the Acer Swift 3, the Ideapad 720S seems like a hard sell given its lack of dual-channel memory. However, for users who may be set on a 13" screen size, it appears it will be the only option.
Overall, I am excited to see more AMD-powered options in the thin-and-light notebook category, and I look forward to getting our hands on some of these new models soon!
Subject: Graphics Cards | December 1, 2017 - 02:48 PM | Tim Verry
Tagged: xfx, vega 10, Vega, RX VEGA 64, RX Vega 56, double edition, amd
Not content to let Asus have all the fun with X shaped products, graphics card manufacturer XFX is prepping two new Vega graphics cards that feature a cut-out backplate and cooler shroud that resembles a stretched-out X. XFX has, so far, only released a few pictures of the card but they do show off most of the card including the top edge, cooler, and backplate.
XFX has opted for a short PCB that extends slightly past the first cooling fan. The card is a dual-slot design with a large heatsink and two large red fans, and as a result a bit less than half of the cooler extends past the PCB. Cooling should not be an issue thanks to the liberal use of heat pipes (I think there are five main copper heat pipes), but the cooler hanging so far past the PCB means the two 8-pin PCI-E power connectors end up in the middle of the cooler (the middle of the X shape), which is not ideal for cable management (I am still waiting for someone to put the PCI-E power connectors on the back edge closest to the motherboard!). With a bit of modding it might be possible to hide the wires under the shroud and route them around the card, as one of the photos suggests there is a bit of a gap between the heatsink and the shroud/backplate.
The design is sure to be divisive, with some people loving it and others hating it, but XFX has put quite a bit of work into it. The red fans are surrounded by a stylized black shroud with a carbon fiber texture, while the top edge holds the red XFX logo. The backplate in particular looks great, with a black and grey design with red accents that features numerous cutouts for extra ventilation.
Display outputs are standard with three DisplayPort and one HDMI out.
TechPowerUp and Videocardz are reporting that the card will come in both RX Vega 56 and RX Vega 64 variants. Unfortunately, while XFX has gone all out on the custom cooling and backplate, they are not pushing any clock speeds past factory settings. The RX Vega 56 Double Edition clocks in at 1156 MHz base and 1471 MHz boost on the GPU with 1600 MHz on the 8GB of HBM2 memory, while the XFX RX Vega 64 Double Edition is also stock clocked at 1247 MHz base, 1546 MHz boost, and 1890 MHz memory. It is not all bad news though; with such a beefy cooler, enthusiasts should be able to overclock the chips themselves at least a bit (depending on how lucky they are in the silicon lottery), although it does mean that XFX isn't guaranteeing anything. Top-end overclocks on the Vega 64 version may also be more limited than on other custom cards, since it includes only two 8-pin power connectors (which makes me wonder what, if anything, XFX has done with the VRMs versus reference).
XFX has not yet revealed pricing or availability for their custom RX Vega cards.
What are your thoughts on the X design?
Subject: Processors | July 31, 2017 - 03:18 PM | Jeremy Hellstrom
Tagged: vega 64, vega 56, vega 10, Vega, radeon, amd, X399, Threadripper, ryzen, 1950x, 1920x, 1900x
Just in case you wanted to relive this weekend's event, or you feel that somehow Ryan missed a detail when he was describing Threadripper or Vega, here is a roundup of other coverage. The Tech Report contrasts the Vega 64 and Vega 56 with a few older NVIDIA cards as well as more modern ones, giving you a sense of the recent evolution of the GPU. They also delve a bit into the pricing and marketing strategies AMD has chosen, which you can check out here.
"AMD's Radeon RX Vega graphics cards are finally here in the form of the RX Vega 64 and RX Vega 56. Join us as we see what AMD's new high-end graphics cards have in store for gamers."
Here are some more Processor articles from around the web:
- AMD Radeon RX Vega GPU Specs and Pricing Revealed @ [H]ard|OCP
- AMD Radeon RX Vega Preview @ techPowerUp
- AMD Vega Microarchitecture Technical Overview @ techPowerUp
- AMD Ryzen Threadripper Specs and Pricing Revealed @ [H]ard|OCP
- AMD's Ryzen Threadripper 1950X, Threadripper 1920X, and Threadripper 1900X CPUs revealed @ The Tech Report
RX Vega is here
Though we are still a couple of weeks from availability and benchmarks, today we finally have the details on the Radeon RX Vega product line. That includes specifications, details on the clock speed changes, pricing, some interesting bundle programs, and how AMD plans to attack NVIDIA through performance experience metrics.
There is a lot going on today and I continue to have less time to tell you about more products, so I'm going to defer a story on the architectural revelations that AMD made to media this week and instead focus on what I think more of our readers will want to know. Let's jump in.
Radeon RX Vega Specifications
Though the leaks have been frequent and getting closer to reality, as it turns out AMD was in fact holding back quite a bit of information about the positioning of RX Vega for today. Radeon will launch the Vega 64 and Vega 56 today, with three different versions of the Vega 64 on the docket. Vega 64 uses the full Vega 10 chip with 64 CUs and 4096 stream processors. Vega 56 will come with 56 CUs enabled (get it?) and 3584 stream processors.
Pictures of the various product designs have already made it out to the field including the Limited Edition with the brushed anodized aluminum shroud, the liquid cooled card with a similar industrial design, and the more standard black shroud version that looks very similar to the previous reference cards from AMD.
|RX Vega 64 Liquid||RX Vega 64 Air||RX Vega 56||Vega Frontier Edition||GTX 1080 Ti||GTX 1080||TITAN X||GTX 980||R9 Fury X|
|GPU||Vega 10||Vega 10||Vega 10||Vega 10||GP102||GP104||GM200||GM204||Fiji XT|
|Base Clock||1406 MHz||1247 MHz||1156 MHz||1382 MHz||1480 MHz||1607 MHz||1000 MHz||1126 MHz||1050 MHz|
|Boost Clock||1677 MHz||1546 MHz||1471 MHz||1600 MHz||1582 MHz||1733 MHz||1089 MHz||1216 MHz||-|
|Memory Clock||1890 MHz||1890 MHz||1600 MHz||1890 MHz||11000 MHz||10000 MHz||7000 MHz||7000 MHz||1000 MHz|
|Memory Interface||2048-bit HBM2||2048-bit HBM2||2048-bit HBM2||2048-bit HBM2||352-bit G5X||256-bit G5X||384-bit||256-bit||4096-bit (HBM)|
|Memory Bandwidth||484 GB/s||484 GB/s||484 GB/s||484 GB/s||484 GB/s||320 GB/s||336 GB/s||224 GB/s||512 GB/s|
|TDP||345 watts||295 watts||210 watts||300 watts||250 watts||180 watts||250 watts||165 watts||275 watts|
|Peak Compute||13.7 TFLOPS||12.6 TFLOPS||10.5 TFLOPS||13.1 TFLOPS||10.6 TFLOPS||8.2 TFLOPS||6.14 TFLOPS||4.61 TFLOPS||8.60 TFLOPS|
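The peak compute column follows directly from shader count and boost clock, since these architectures execute two FP32 operations per shader per clock. A quick sketch to sanity-check the AMD entries in the table:

```python
# Peak FP32 compute: shaders x 2 ops per clock x boost clock (MHz).
def peak_tflops(shaders, boost_mhz):
    return shaders * 2 * boost_mhz / 1e6

print(peak_tflops(4096, 1677))  # RX Vega 64 Liquid, ~13.7 TFLOPS
print(peak_tflops(3584, 1471))  # RX Vega 56, ~10.5 TFLOPS
```

Note that with the new "typical" boost clock definition described below, these peak figures are closer to what the cards should actually sustain than AMD's previous ratings were.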
If you are a frequent reader of PC Perspective, you have already seen our reviews of the Vega Frontier Edition air cooled and liquid cards, so some of this is going to look very familiar. Looking at the Vega 64 first, we need to define the biggest change to the performance ratings of the RX and FE versions of the Vega architecture. When we listed the “boost clock” of the Vega FE cards, and really any Radeon cards previous to RX Vega, we were referring to the maximum clock speed of the card in its out-of-box state. This was counter to the method NVIDIA used for its “boost clock” rating, which pointed towards a “typical” clock speed the card would run at in a gaming workload. Essentially, the NVIDIA method gave consumers a more realistic look at how fast the card would be running, while AMD was marketing the theoretical peak with perfect thermals and perfect workloads. This, to be clear, never happened.
With the RX Vega cards and their specifications, the “boost clock” is now a typical clock rate. AMD has told me that this is what they estimate the average clock speed of the card will be during a typical gaming workload with a typical thermal and system design. This is great news! It means that gamers will have a more realistic indication of performance, both theoretical and expected, and the listings on the retailers and partner sites will be accurate. It also means that just looking at the spec table above will give you an impression that the performance gap between Vega FE and RX Vega is smaller than it will be in testing. (This is, of course, if AMD’s claims are true; I haven’t tested it myself yet.)
Subject: Graphics Cards | January 8, 2017 - 03:53 AM | Tim Verry
Tagged: vega 11, vega 10, navi, gpu, amd
During CES, AMD showed off demo machines running Ryzen CPUs and Vega graphics cards, and gave the world a bit of information on the underlying architecture of Vega in an architectural preview that you can read about (or watch) here. AMD's Vega GPU is coming, and it is poised to compete with NVIDIA on the high end (an area that has been left to NVIDIA for a while now) in a big way.
Thanks to Videocardz, we have a bit more info on the products we might see this year and what we can expect in the future. Specifically, the slides suggest that Vega 10, the first GPUs to be based on the company's new architecture, may be available by the end of the first half of 2017. Following that, a dual GPU Vega 10 product is slated for release in Q3 or Q4 of 2017, and a refreshed GPU on a smaller process node with more HBM2 memory, called Vega 20, is due in the second half of 2018. The leaked slides also suggest that Navi (Vega's successor) might launch as soon as 2019 and will come in two variants, Navi 10 and Navi 11 (with Navi 11 being the smaller, less powerful GPU).
The 14nm Vega 10 GPU allegedly offers up 64 NCUs, as much as 12 TFLOPS of single precision compute, and 750 GFLOPS of double precision. Half precision performance is twice that of FP32 at 24 TFLOPS (which would be good for things like machine learning). The NCUs allegedly run FP16 at 2x and DPFP at 1/16 the FP32 rate. If each NCU has 64 shaders like Polaris 10 and other GCN GPUs, then we are looking at a top-end Vega 10 chip having 4096 shaders, rivaling Fiji. Further, Vega 10 supposedly has a TDP of up to 225 watts.
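Those leaked rates are internally consistent, as a quick check shows (taking the 12 TFLOPS FP32 figure as the baseline):

```python
# Check the leaked precision ratios: FP16 at 2x FP32, DPFP at 1/16 FP32.
fp32_tflops = 12.0
fp16_tflops = fp32_tflops * 2          # 24 TFLOPS half precision
fp64_gflops = fp32_tflops / 16 * 1000  # 750 GFLOPS double precision
print(fp16_tflops, fp64_gflops)
```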
For comparison, the 28nm 8.9 billion transistor Fiji-based R9 Fury X ran at 1050 MHz with a TDP of 275 watts and had a rated peak compute of 8.6 TFLOPS. While we do not know clock speeds of Vega 10, the numbers suggest that AMD has been able to clock the GPU much higher than Fiji while still using less power (and thus putting out less heat). This is possible with the move to the smaller process node, though I do wonder what yields will be like at first for the top end (and highest clocked) versions.
Vega 10 will be paired with two stacks of HBM2 memory on package which will offer 16GB of memory with memory bandwidth of 512 GB/s. The increase in memory bandwidth is thanks to the move to HBM2 from HBM (Fiji needed four HBM dies to hit 512 GB/s and had only 4GB).
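The bandwidth arithmetic works out if each HBM2 stack runs its 1024-bit interface at 2.0 Gbps per pin; note that the per-pin rate here is inferred from the quoted 512 GB/s, not stated in the slides:

```python
# Memory bandwidth: bus width in bytes x per-pin data rate (Gbps).
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

vega10 = bandwidth_gbs(2048, 2.0)  # two HBM2 stacks (1024-bit each)
fiji = bandwidth_gbs(4096, 1.0)    # four first-gen HBM stacks
print(vega10, fiji)
```

Both come out to 512 GB/s, which is exactly why HBM2 lets Vega 10 match Fiji's bandwidth with half the stacks.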
The slide also hints at a "Vega 10 x2" in the second half of the year, presumably a dual GPU product. The slide states that Vega 10 x2 will have four stacks of HBM2 (1TB/s), though it is not clear whether they are simply adding the two stacks per GPU to claim the 1TB/s number or whether each GPU will have four stacks. The latter is unlikely, as there does not appear to be room on the package for two more stacks per GPU, and I am not sure they could make the package big enough to fit them. Even if we assume they really mean 2x 512 GB/s per GPU (and maybe they can get more out of that in specific workloads across both GPUs), the doubling of cores and potential compute performance will be big. This is going to be a big number crunching and machine learning card, as well as a gaming card of course. Clock speeds will likely have to be much lower compared to the single GPU Vega 10 (especially with the stated TDP of 300W), and workloads won't scale perfectly, so potential compute performance will not be quite 2x but should still be a decent per-card boost.
Raja Koduri holds up a Vega GPU at CES 2017 via eTeknix
Moving into the second half of 2018, the leaked slides suggest that a Vega 20 GPU will be released on a 7nm process node with 64 CUs, paired with four stacks of HBM2 for 16 GB or 32 GB of memory and 1TB/s of bandwidth. Interestingly, the shaders will be set up such that the GPU can still do half precision calculations at twice the single precision rate, but will not take nearly the hit on double precision that Vega 10 does, running at 1/2 the single precision rate rather than 1/16. The GPU(s) will use between 150W and 300W of power, and it seems these are set to be the real professional and workstation workhorses. At Vega 10's 12 TFLOPS of single precision, a 1/2-rate DPFP would work out to 6 TFLOPS of double precision, which is not bad (and Vega 20 would hopefully exceed this thanks to faster clocks and architecture improvements).
Beyond that, the slides mention Navi's existence and that it will come in Navi 10 and Navi 11 but no other details were shared which makes sense as it is still far off.
You can see the leaked slides here. In all, it is an interesting look at potential Vega 10 and beyond GPUs, but definitely keep in mind that this is leaked information, allegedly from an internal presentation that likely showed the graphics processors in their best possible/expected light. It does add a bit more fuel to the fire of excitement for Vega though, and I hope that AMD pulls it off, as my unlocked 6950 is no longer supported and it is only a matter of time before new games perform poorly or not at all!
- AMD Vega GPU Architecture Preview: Redesigned Memory Architecture
- CES 2017: AMD Vega Running DOOM at 4K
- AMD GPU Roadmap: Capsaicin Names Upcoming Architectures
- AMD's Raja Koduri talks moving past CrossFire, smaller GPU dies, HBM2 and more.
High Bandwidth Cache
Apart from AMD's other new architecture due out in 2017, its Zen CPU design, no product has had as much build-up and excitement surrounding it as its Vega GPU architecture. After the world learned that Polaris would be a mainstream-only design, released as the Radeon RX 480, the focus for enthusiasts came straight to Vega. It has been on the public facing roadmaps for years and signifies the company's return to the world of high end GPUs, something it has been missing since the release of the Fury X in mid-2015.
Let’s be clear: today does not mark the release of the Vega GPU or products based on Vega. In reality, we don’t even know enough to make highly educated guesses about the performance without more details on the specific implementations. That being said, the information released by AMD today is interesting and shows that Vega will be much more than simply an increase in shader count over Polaris. It reminds me a lot of the build to the Fiji GPU release, when the information and speculation about how HBM would affect power consumption, form factor and performance flourished. What we can hope for, and what AMD’s goal needs to be, is a cleaner and more consistent product release than how the Fury X turned out.
The Design Goals
AMD began its discussion about Vega last month by talking about the changes in the world of GPUs and how data sets and workloads have evolved over the last decade. No longer are GPUs only worried about games; instead they must address professional, enterprise, and scientific workloads. Even more interestingly, just as we have discussed the growing gap between CPU performance and memory bandwidth, AMD posits that the gap between memory capacity and GPU performance is a significant hurdle and a limiter to performance and expansion. Game installs, professional graphics sets, and compute data sets continue to skyrocket. Game installs now regularly exceed 50GB, while compute workloads can exceed petabytes. Even as we saw GPU memory capacities increase from megabytes to gigabytes, reaching as high as 12GB in high end consumer products, AMD thinks there should be more.
Coming from a company that chose to release a high-end product limited to 4GB of memory in 2015, it’s a noteworthy statement.
The High Bandwidth Cache
Bold enough to claim a direct nomenclature change, Vega 10 will feature an HBM2-based high bandwidth cache (HBC) along with a new memory hierarchy to call it into play. This HBC will be a collection of memory on the GPU package, just as we saw on Fiji with the first HBM implementation, and will be measured in gigabytes. Why the move to calling it a cache will be covered below. (But can't we all get behind the removal of the term “frame buffer”?) Interestingly, this HBC doesn't have to be HBM2; in fact, I was told that you could expect to see other memory systems on lower cost products going forward, and cards that integrate this new memory topology with GDDR5X or some equivalent seem assured.
Subject: Graphics Cards | December 12, 2016 - 04:05 PM | Jeremy Hellstrom
Tagged: vega 10, Vega, training, radeon, Polaris, machine learning, instinct, inference, Fiji, deep neural network, amd
Ryan was not the only one at AMD's Radeon Instinct briefing covering their shot across NVIDIA's HPC products. The Tech Report just released their coverage of the event and the tidbits AMD provided about the MI25, MI8 and MI6; no relation to a certain British governmental department. They focus a bit more on the technologies incorporated to accelerate GEMM operations and point out that AMD's top card is not matched by an NVIDIA equivalent, as the GP100 GPU does not come as an add-in card. Pop by to see what else they had to say.
"Thus far, Nvidia has enjoyed a dominant position in the burgeoning world of machine learning with its Tesla accelerators and CUDA-powered software platforms. AMD thinks it can fight back with its open-source ROCm HPC platform, the MIOpen software libraries, and Radeon Instinct accelerators. We examine how these new pieces of AMD's machine-learning puzzle fit together."
Here are some more Graphics Card articles from around the web:
- The Complete AMD Radeon Instinct Tech Briefing @ Tech ARP
- Chill With Radeon Software Crimson ReLive Edition @ Techgage
- Radeon Software Crimson ReLive Edition—an overview @ The Tech Report
- AMD Radeon Crimson ReLive Drivers @ techPowerUp
- AMD talk to KitGuru about Crimson ReLive
- We retest Radeon Chill @ The Tech Report
- MSI RX 480 Gaming X 8G Review @ OCC
- NVIDIA GeForce GTX 1080 PCI-Express Scaling @ techPowerUp
AMD Enters Machine Learning Game with Radeon Instinct Products
NVIDIA has been diving into the world of machine learning for quite a while, positioning itself and its GPUs at the forefront of artificial intelligence and neural net development. Though the strategies are still filling out, I have seen products like the DIGITS DevBox place a stake in the ground of neural net training, and platforms like Drive PX perform inference tasks on those neural nets in self-driving cars. Until today AMD has remained mostly quiet on its plans to enter and address this growing and complex market, instead depending on the compute prowess of its latest Polaris and Fiji GPUs to make a general statement on their own.
The new Radeon Instinct brand of accelerators based on current and upcoming GPU architectures will combine with an open-source approach to software and present researchers and implementers with another option for machine learning tasks.
The statistics and requirements that come along with the machine learning evolution in the compute space are mind-boggling. More than 2.5 quintillion bytes of data are generated daily and stored on phones, PCs and servers, both on-site and through cloud infrastructure. That includes 500 million tweets, 4 million hours of YouTube video, 6 billion Google searches and 205 billion emails.
Machine intelligence is going to allow software developers to address some of the most important areas of computing for the next decade. Automated cars depend on deep learning to train, medical fields can utilize this compute capability to more accurately and expeditiously diagnose and find cures to cancer, security systems can use neural nets to locate potential and current risk areas before they affect consumers; there are more uses for this kind of network and capability than we can imagine.