More views from Raven Ridge, Ryzen 3 2200G and Ryzen 5 2400G roundup

Subject: Processors | February 13, 2018 - 03:10 PM |
Tagged: 2200G, 2400G, amd, raven ridge, ryzen, Zen

Ryan covered the launch of AMD's new Ryzen 5 2400G and Ryzen 3 2200G, which you should have already checked out.  The current options on the market offer more setup variations and tests than there are hours in the day, which is why you should check out the links below to get a full view of how these new APUs perform.  For instance, The Tech Report benchmarked with DDR4-3200 CL14 RAM, which AMD's architecture can take advantage of.  As far as productivity and CPU-bound tasks go, Intel's i5-8400 does come out on top; however, it is a different story for the Vega graphics.  The 11 CUs of the 2400G perform at the same level as, or slightly better than, a GT 1030, which could make this very attractive for a gamer on a budget. 



"AMD's Ryzen 5 2400G and Ryzen 3 2200G bring Raven Ridge's marriage of Radeon Vega graphics processors and Zen CPU cores to the desktop. Join us as we see what a wealth of new technology in one chip means for the state of gaming and productivity performance from the same socket."

Here are some more Processor articles from around the web:



February 13, 2018 | 04:16 PM - Posted by ProductivityAcceleratedOnGPUs (not verified)

I thought that a lot of the office productivity applications were starting to make use of OpenCL-accelerated math on the GPU's cores for spreadsheet math and other sorts of GPU-accelerated GPGPU usage.

And the computing industry as a whole is starting to make use of AI/AI-accelerated algorithms on the GPU's cores across a wide variety of workloads.

Maybe there can be some testing of more of that GPGPU functionality in the future. I'm starting to read about a lot of research into AI-assisted spell checkers and other grammar and translation sorts of workloads accelerated on the GPUs in phones, PCs, and laptops.

This work is done by first running the heavy AI/neural-net training on powerful cluster systems, with the trained AI then transferred to a variety of devices. So once the AI is trained using powerful computing clusters, that trained neural net is loaded onto a PC's, laptop's, or phone's GPU or AI processor and used to accelerate various features like contextual spelling/grammar checking and other word-processor/office functionality.

Trained AI functionality is even being used in graphics applications (Photoshop and others), where trained AIs can instantly identify any humans and animals in an image for selection or filtering, so the background can be replaced or filtered (blurred), along with other such effects. Smartphones are getting a lot of dedicated AI processing units, but integrated and discrete GPUs can also do AI sorts of processing on the GPU's cores.

So maybe those CPU-only productivity benchmarks are becoming a little dated, and AI usage needs to be looked into as well: testing APUs/SoCs and discrete GPUs on the AI ability built into office application suites and other productivity software, non-gaming graphics, and even gaming graphics, if AIs running on the GPU can be of help.
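As a toy illustration of the "train on a cluster, deploy the frozen net" pattern the comment describes (all weights and function names below are invented for illustration), on-device inference reduces to a few matrix multiplies — exactly the data-parallel work a GPU or NPU accelerates:

```python
# Toy sketch: on-device inference is just forward passes through frozen
# weights. The weights below are invented; in practice they would come
# from training on a large cluster and then be shipped to the device.

def relu(xs):
    """Element-wise rectified linear activation."""
    return [max(0.0, x) for x in xs]

def dense(inputs, weights, biases):
    """One fully connected layer: y = W.x + b."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# A frozen two-layer net (illustrative weights only)
W1 = [[0.5, -0.2], [0.1, 0.9]]
b1 = [0.0, 0.1]
W2 = [[1.0, -1.0]]
b2 = [0.0]

def infer(x):
    hidden = relu(dense(x, W1, b1))
    return dense(hidden, W2, b2)

print(infer([1.0, 2.0]))
```

Each `dense` call is an independent dot product per output neuron, which is why this workload maps so naturally onto the wide SIMD arrays of a GPU.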

February 13, 2018 | 07:24 PM - Posted by psuedonymous

"I thought that a lot of the office productivity applications were starting to make use of OpenCL-accelerated math on the GPU's cores for spreadsheet math and other sorts of GPU-accelerated GPGPU usage."

Nope. After many years of the HSA consortium pushing it, there has simply been no appetite for GPU acceleration in standard everyday programs. GPGPU is just not a good fit for those sorts of tasks.

February 13, 2018 | 09:57 PM - Posted by FoolsRushInToEfficientlyGetFootInsertedIntoMouth (not verified)

This has nothing to do with the HSA Foundation; OpenCL was started by Apple and handed over to the Khronos Group for continued management and development.

So Khronos has this, not the HSA Foundation (which is still around, by the way).

So once again you have failed to do your proper research and vetting on OpenCL and the standards body in control of OpenCL's development and maintenance.

You always do this, psuedonymous, and as a result your ramblings carry little weight among those with a proper amount of gray matter instead of lard between the ears!

Ditto for properly researching the loads of productivity software that makes use of OpenCL to accelerate tasks on the GPU. My goodness, you are a daft one!

February 13, 2018 | 05:23 PM - Posted by pdjblum

jeremy, has there been any indication that they will release an r7 version with an eight core cpu and the vega M they use on the intel chip, or a similar vega with the same number of cores as the vega M?

that chip along with the ones they released yesterday would satisfy the vast number of gamers and alleviate the discrete gpu shortage in a big way

now that i think about it more, that one has hbm 2 on it, so i guess the memory supply issue is why they have not

February 13, 2018 | 06:34 PM - Posted by MixAndMatchWithZenVegaOrIntelVega (not verified)

No HBM2 on the Ryzen 3 2200G or the Ryzen 5 2400G, so Vega's HBCC on these APUs has no HBM2 to use as High-Bandwidth Cache (HBC). AMD should have included some eDRAM on its APUs or made the integrated Vega GPU's L2 cache larger.

AMD needs to introduce an APU with some HBM2; even 1-2GB of HBM2 would be enough for 8-11 Vega nCUs. AMD should also have had a lower-cost version of HBM/HBM2 created with, say, a 512-bit-wide interface (divided into 4 independently operating 128-bit channels) instead of the full JEDEC HBM/HBM2 standard of a 1024-bit-wide interface divided into 8 independently operating 128-bit channels per HBM/HBM2 stack.
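The bandwidth trade-off behind that half-width suggestion is simple arithmetic. A minimal sketch, assuming a 2.0 Gb/s per-pin data rate (the figure for early HBM2 parts; the function name is mine):

```python
# Peak bandwidth of an HBM-style interface: width (bits) times the
# per-pin data rate, divided by 8 to convert bits to bytes.

def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s for a given interface width and per-pin rate."""
    return bus_width_bits * data_rate_gbps / 8

# Full JEDEC HBM2 stack: 1024-bit interface (8 x 128-bit channels)
full = peak_bandwidth_gbs(1024, 2.0)   # 256.0 GB/s per stack

# Hypothetical low-cost variant: 512-bit interface (4 x 128-bit channels)
half = peak_bandwidth_gbs(512, 2.0)    # 128.0 GB/s per stack

print(full, half)
```

Even the half-width variant would comfortably outrun the dual-channel DDR4 these APUs actually feed from, which is the commenter's point.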

Last year there were rumors about an AMD workstation-grade professional APU on an interposer, with a larger GPU die, a separate Zen cores die, and HBM2 stacks, aimed at the professional workstation market. But so far no news about that SKU this year.

Those Intel/Vega EMIB/MCM-based SKUs are not really APUs, as the semi-custom Vega discrete die is only wired to the Intel SoC via an x8 PCIe 3.0 connection across the package's low-density organic substrate, and the only part of the whole MCM making use of an embedded silicon interposer bridge chip is the Vega GPU die to HBM2 interface. So the whole Intel/Vega EMIB/MCM package is more like a mini-motherboard arrangement attached mezzanine-style to a larger PCB in that NUC image making the rounds online. The Intel SoC on that MCM also has its own on-die integrated graphics in addition to the Vega graphics out on the far end of the EMIB/MCM.

Vega M will be nice when it comes out, as 4GB of HBM2 will make for plenty of VRAM, with the Vega HBCC able to use HBM2 as a last-level cache for the GPU (HBM2 being the HBC). So expect some great mobile gaming where games can make use of texture/mesh data that exceeds the HBM2/VRAM's 4GB physical capacity.

That Intel/Vega EMIB/MCM is going to be a good performer, but at what price ($$$$)?

February 13, 2018 | 07:45 PM - Posted by pdjblum

thanks for the response

so is it possible to put a vega m class gpu on the same die as an r7 cpu and hbm2, or does it have to be a package with separate dies for the gpu and cpu?

obviously the 2200 and 2400 have no memory on board, but i gather it is possible to make a single die with memory too, or i hope so

i would buy one in a heart beat

February 13, 2018 | 11:00 PM - Posted by ManyPossibilitiesTooLittleMoneyCurrentlyButThatWillChange (not verified)

Yes, it is possible to create an APU on a silicon interposer package (Zeppelin dies, Vega die, and HBM2 stacks)! But the cost on the consumer side would be prohibitive, while cost is not as much of a concern on the workstation/professional side.

The first APU SKUs with HBM2 are more than likely to be branded under the Epyc/Radeon Pro "WX" branding, and thus require an Epyc/SP3 motherboard or some professional workstation APU motherboard variant that's not related to any consumer AMD motherboard SKUs.

Consumer APUs that use HBM2 may have to wait for Samsung/AMD and the others that make up the JEDEC HBM/HBM2/HBM-Next committees and working groups to work out any HBM#/low-cost-variant standard. Samsung is certainly interested in a low-cost version/variant of HBM#.

There is nothing stopping AMD from using interposer bridge chips in some fashion that does not require any Intel IP, so maybe AMD will be able to work something out by 2019 for the consumer market.

eDRAM is also an option at 7nm, as there should be enough space on any newer Zen/Vega APU to include it in place of HBM2. It all depends on the cost of HBM2/HBM3. I'd say that for any integrated-graphics Vega micro-arch, more L2 GPU cache may be the better option at 7nm, with only enough eDRAM to serve as HBC (High Bandwidth Cache) in place of any HBM2/HBM3. Hell, for the needs of only 11 Vega nCUs, the eDRAM could be as small as 250-500MB and allow the graphics to feed mostly from the eDRAM, with any memory transfers to and from the slower DIMM-based system DRAM managed in the background.

The L2 cache arrangement is different for Vega, as stated by the Vega whitepaper(1). So for any APU graphics, maybe enlarging the L2 cache, with a small amount of eDRAM (for VRAM) to take the place of HBM2, may be enough to allow Vega's HBCC IP to be enabled and used.

"To extract maximum benefit from “Vega’s” new cache
hierarchy, all of the graphics blocks have been made clients
of the L2 cache. This arrangement departs from previous
GCN-based architectures, where the pixel engines had their
own independent caches, and enables greater data re-use.
Since the GPU's L2 cache plays a central role in the new
memory hierarchy, GPUs based on the “Vega” architecture
are designed with generous amounts of it."(1)


(1) "Radeon's next-generation Vega architecture"

February 14, 2018 | 08:57 AM - Posted by Hifihedgehog (not verified)

I have been doing some digging and found that although current-generation AM4 motherboards lack formal HDMI 2.0 certification, the same may hold for them as for the many HDMI 1.4 cables that pass an HDMI 2.0 signal without a hitch: the boards' HDMI traces and connectors may well be agnostic to the differences, if any. Could you do a quick test to see if HDMI 2.0 signals work for the Raven Ridge APUs on the AM4 motherboards you have access to? For further reference on the topic, see this forum thread below:
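For context on why certification matters here, the signaling arithmetic is easy to check. A rough sketch (the function name is mine; the figures are the HDMI 1.4/2.0 maximum TMDS character rates and the CTA-861 pixel clock for 4K60 at 8 bpc):

```python
# HDMI carries three TMDS data channels, each moving 10 bits per pixel
# clock (8b/10b coding). Max TMDS character rate: 340 MHz for HDMI 1.4,
# 600 MHz for HDMI 2.0. 3840x2160@60 (8 bpc) needs a 594 MHz pixel clock.

def tmds_gbps(pixel_clock_mhz, channels=3, bits_per_channel=10):
    """Aggregate TMDS bit rate in Gb/s for a given pixel clock."""
    return pixel_clock_mhz * channels * bits_per_channel / 1000

uhd60_clock = 594  # MHz, 4K60 at 8 bpc

print(tmds_gbps(340))          # HDMI 1.4 ceiling: 10.2 Gb/s
print(tmds_gbps(600))          # HDMI 2.0 ceiling: 18.0 Gb/s
print(tmds_gbps(uhd60_clock))  # 4K60 requirement: 17.82 Gb/s
```

So 4K60 sits well above the HDMI 1.4 ceiling but just under the HDMI 2.0 one, which is why whether uncertified AM4 boards can carry the faster signal comes down to the physical quality of their traces and connectors.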
