AMD's Upcoming Socket AM4 Pictured with 1331 Pins

Subject: Processors | September 19, 2016 - 10:35 AM |
Tagged: Socket AM4, processor, FX, cpu, APU, amd, 1331 pins

A report from Hungarian site HWSW (cited by Bit-Tech) includes a close-up photo of the new AMD AM4 processor socket, and it looks like the socket will have 1331 pins (go ahead and count them, if you dare!).


Image credit: Bit-Tech via HWSW

AMD's newest socket will merge the APU and FX series CPUs into this new AM4 socket, unlike the previous generation which split the two between AM3+ and FM2+. This is great news for system builders, who now have the option of starting with an inexpensive CPU/APU, and upgrading to a more powerful FX processor later on - with the same motherboard.

The new socket will apparently require a new cooler design, contrary to early reports (yes, we got it wrong, too) that the AM4 socket would be compatible with existing AM3 cooler mounts (manufacturers could, of course, offer hardware kits for existing cooler designs). In any case, AMD's new socket brings even more of the delicate copper pins you love to try not to bend!

Source: Bit-Tech

September 19, 2016 | 10:57 AM - Posted by Anonymous (not verified)

and here's an AM4 board: http://support.hp.com/doc-images/803/c05254641.jpg

September 22, 2016 | 06:49 AM - Posted by Anonymous (not verified)

The original source, http://support.hp.com/ca-en/document/c05254568, says the board supports 65 W APUs/CPUs only, and memory speed is limited to DDR4-2133. According to the specifications, DDR4-2400 will be reduced to DDR4-2133.

September 19, 2016 | 11:02 AM - Posted by Anonymous (not verified)

They couldn't add 6 more pins? Seems like a huge oversight.

September 19, 2016 | 11:21 AM - Posted by boidsonly

I agree.

September 19, 2016 | 11:50 AM - Posted by Jann5s

+6, hilarious

September 19, 2016 | 02:06 PM - Posted by Anonymous (not verified)

I see what you did there ;)

September 19, 2016 | 11:03 AM - Posted by JohnGR

There were also pictures of the A12-9800 and some performance numbers:

http://wccftech.com/amd-bristol-ridge-a12-9800-am4-platform-performance/

September 19, 2016 | 11:50 AM - Posted by Anonymous (not verified)

Man, I wish they would get some more info on the laptop Bristol Ridge SKUs. I want every laptop OEM that uses a single channel to the DIMM DRAM called out for gimping this time around!

And still, the only AMD GCN-based laptops that I can find for sale at Micro Center only have GPUs up to the GCN 1.0 generation inside. Hopefully BR/AM4 will get more laptop design wins with dual-channel DDR4. But maybe I'll have to wait for Zen/Polaris before the laptop OEMs give AMD's APU SKUs the right kind of proper laptop treatment. Hopefully AMD will offer its mobile Zen/Polaris APUs with at least one stack of HBM2 so the laptop OEMs will not be able to starve the APU's graphics of bandwidth! Yes, a Zen/Polaris APU with one 4GB HBM2 stack and the rest of the memory supplied by DIMM-based DRAM, so a laptop OEM gimping AMD's APUs to a single DIMM channel would not/could not starve the APU's GPU/graphics of needed bandwidth!

Also in the news: "Southern Islands Support Will Come To AMDGPU On Linux 4.9"

https://www.phoronix.com/scan.php?page=news_item&px=AMDGPU-SI-Next-4.9

September 19, 2016 | 12:31 PM - Posted by Phartindust AMDRTP (not verified)

Bristol ridge goodness can be found starting from around $350 and up.

This is an A10-9600, 6GB (4+2) RAM, 1TB HDD, 15.6" touch-screen budget laptop.

http://www.bestbuy.com/site/hp-15-6-touch-screen-laptop-amd-a10-series-6...

September 19, 2016 | 01:55 PM - Posted by Anonymous (not verified)

I only want the top-of-the-line BR laptop APU with the most watts and the most CUs in the GPU! So it's the FX9830P, and that "30P" at the end indicates the 35-watt part, dual DDR4 channels only! I want nothing to do with any 15-watt laptop SKUs from AMD! And AMD had better get more technical information out in the same manner as Intel's ARK processor specification pages. You hear that, AMD? Get a proper web-based CPU/GPU processor specification database for your products and link to it from your main web page. I'm tired of being insulted by that marking designed website that AMD currently has! Get proper links to the information/specification sheets for your APU/CPU, as well as GPU, SKUs, written in a non-marketing-focused, properly technical manner!

Most of all, AMD, if you have to damn near give away the part, get some Linux OS laptop OEMs to build laptops using the FX9830P with dual-channel DDR4 DRAM and a discrete GPU option to go along with the integrated graphics in the FX9830P APU!

I want a real laptop, not any thin-and-light POS! F_CK thin and light! I want plenty of watts/cooling available for any laptop Blender 3D rendering that I may want to do on a real laptop, not an Ultrabook-style form factor from hell! And I want the laptop to come with generic GPU drivers that can be updated from AMD and not the laptop's OEM.

September 19, 2016 | 01:58 PM - Posted by Anonymous (not verified)

edit: marking designed
to: marketing designed

September 19, 2016 | 02:06 PM - Posted by JohnGR

And yes, there is a useless touch screen on it. HP continues to ask Intel's opinion on the final configuration of AMD laptops.

September 19, 2016 | 02:33 PM - Posted by Anonymous (not verified)

HP is really screwing up its ProBook line of SKUs with that thin-and-light crap! Give me a workhorse ProBook SKU with plenty of CPU/GPU compute! HP lost my repeat ProBook business by going thin and light and shoving AMD's APUs into business offerings starved of thermal/cooling headroom. HP should have offered a ProBook with the Carrizo FX8800P at 35 watts with dual-channel memory and a discrete GCN GPU option, but that never happened! And I'll bet the BR/AM4 (or whatever the laptop AM4 motherboard version is called) FX9830P (35-watt part) will only maybe be available in a gaming laptop with Windows 10 factory installed.

It looks like the entire Linux OS based OEM laptop market is under Intel's nefarious thumb with regard to offering any Linux laptops with AMD's APUs inside and better graphics options for the dollar from AMD.

Ultrabooks(TM) UltraSuck for real workloads, and Thin And Light means only one thing: GIMPED of potential!

September 20, 2016 | 09:00 PM - Posted by Anonymous (not verified)

More OEM gimping for Bristol Ridge. G-damn laptop OEMs!

"An Anecdotal Musing of Brick and Mortar AMD Notebook Offerings"

http://www.anandtech.com/show/10681/an-anecdotal-musing-of-brick-and-mor...

September 22, 2016 | 06:37 AM - Posted by Anonymous (not verified)

And it's still using Bulldozer-derived Excavator cores. Look at the Cinebench R15 numbers: just more APU-like CPU performance (example: http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/71760-inte... ), not even up to Core i5-6500 level, which is quite disappointing. It again shows that AMD's slides on Bristol Ridge performance are very misleading.

September 19, 2016 | 11:27 AM - Posted by Anonymous (not verified)

I have not kept up with all of the Zen/AM4 news, but have they said they plan to use the AM4 socket for an extended period of time?

Or are we going to see an "AM4+" after a couple of years?

Still very happy to see them go with a unified socket (CPU/APU).

September 19, 2016 | 12:02 PM - Posted by Anonymous (not verified)

The closest thing I can find to any assurance that an AM4+ socket won't be appearing any time soon is this quote from AMD's marketing.

"The AMD chips will use the AM4 platform, which is designed to last multiple generations, said Don Woligroski, global marketing manager for Desktop Processors at AMD."

http://venturebeat.com/2016/09/05/amd-launches-7th-generation-a-series-p...

September 19, 2016 | 12:59 PM - Posted by Casey (not verified)

There must be some guy at AMD constantly saying, "Can't we fit 6 more pins in? For the love of god, please."

September 21, 2016 | 12:36 PM - Posted by Anonymous (not verified)

They could just square off 2 of the corners with 3 more pins each; even if they're just fake pins, it would be worth it.

September 19, 2016 | 01:15 PM - Posted by Anonymous Nvidia User (not verified)

This must be part of AMD's new strategy to sell more CPUs, as bent pins are generally considered user install error and are not returnable. I'd have a professional install it or buy it preinstalled.

September 19, 2016 | 02:02 PM - Posted by JohnGR

I rarely read about bent pins on AMD CPUs. Someone has to be completely stupid to bend pins on an AMD CPU when inserting or removing it from the socket, unless he enjoys throwing them (literally throwing them) in a drawer without any box.

On the other hand, I have read plenty of posts like
"I know that I put the processor correctly into the socket, but the pins on the motherboard bent somehow"
and
"I sent my Intel board for RMA to that shop and they told me that they could not cover it because there were bent pins in the socket. I never noticed it."

From the small experience I have inserting Intel CPUs and securing coolers on them, AMD's platform is a walk in the park, while Intel's is a walk in hell.

September 22, 2016 | 06:43 AM - Posted by Anonymous (not verified)

Rubbish; bent pins are common with PGA chips like AMD's CPUs and APUs. One just needs to google "amd bent pins" and there are results aplenty. This often happens when installing/re-installing, changing coolers, or moving CPUs to a new mainboard.

September 19, 2016 | 02:59 PM - Posted by Anonymous (not verified)

Intel's costly dog food graphics, and Nvidia's compute-gimped GPUs at even more cost! It's time to fight back against the NivTel TRUST and get better APU value for the dollar! Just look at the Nvidia GTX 1060/1070 overclocks needed to get near the same FP flops as the RX 480, which is clocked much lower. The RX 480 uses more power, but only because it has more FP hardware and async compute hardware on its Polaris/GCN GPU. GimpVidia, the great Green Goblin Gimper of FP/async compute, and GimpTel, the dog food graphics foister that even a starving mangy dog would turn its nose away from, are at it again! Only a mindless git could bend a CPU pin in a ZIF processor socket!

September 19, 2016 | 07:05 PM - Posted by johnc (not verified)

dafuq did i just read

September 19, 2016 | 11:01 PM - Posted by Anonymous (not verified)

Mindless Git with bent pins!

September 20, 2016 | 07:23 AM - Posted by Bri (not verified)

If you bend pins that easily, you should probably not be mucking around in your computer's internals.

September 20, 2016 | 07:51 AM - Posted by Anonymous Nvidia User (not verified)

I install and build all my systems myself, with 25+ years of experience. I've never bent a pin. Most people buying an AMD product don't have that kind of experience. Plus, it's an awful lot of pins to line up correctly. Besides, pins are yesterday's news.

September 20, 2016 | 08:36 AM - Posted by JohnGR

Line them up correctly? What are you talking about? Are you manually aligning one pin at a time while inserting the CPU into the ZIF (that's Zero Insertion Force) socket?

September 20, 2016 | 03:57 PM - Posted by Anonymous Nvidia User (not verified)

How does the CPU get in the socket? Magic? It must be inserted by the user. ZIF has been around since I built my first 486 system. Unless they've changed things since I last upgraded, almost 3.5 years ago, you still align the CPU by a corner and gently drop it down into the pin holes of the motherboard's CPU mount. Pulling/pushing the ZIF lever unlocks and locks the CPU from/to the motherboard.

September 20, 2016 | 04:09 PM - Posted by Anonymous Nvidia User (not verified)

It's still the same even with AMD, as I found a guide on their site. Align pin 1 on the CPU with the pin-1 marker on the motherboard socket; the CPU must be lowered into the socket and seated. Even AMD mentions bent pins in their guide.

http://support.amd.com/en-us/kb-articles/pages/howtoreplaceamdcpunhsf.aspx

September 21, 2016 | 03:34 AM - Posted by JohnGR

As you said, "GENTLY DROP IT DOWN." Do you need 25+ years of experience for that?

September 23, 2016 | 09:55 PM - Posted by Anonymous (not verified)

Just line up the chip key and the chip flipping falls into place; if it doesn't just DROP RIGHT IN, it's not lined up right.

Do people really bend pins installing the proc?!? This is basic-level shit here.

September 19, 2016 | 01:55 PM - Posted by BlackDove (not verified)

IF YOU MADE IT 1337 PINS I MIGHT HAVE BOUGHT IT!

September 19, 2016 | 02:13 PM - Posted by CodeMonkeyX

I've been an Intel user for a while. I had no idea AMD was still using pins. :D I thought everyone had moved to the pads or whatever Intel uses.

September 19, 2016 | 02:59 PM - Posted by Anonymous (not verified)

Land Grid Arrays. :D

September 19, 2016 | 02:35 PM - Posted by heyitsme (not verified)

Where are the AM4 mobo specs?

September 19, 2016 | 09:01 PM - Posted by Anonymous (not verified)

With 1331 pins, it is time for AMD to wire one memory controller internally to a quantity of DDRx SDRAM and serialize the second one for external modules.

This way it could provide low access times with the internal DDRx SDRAM for small memory allocations (typically integers or pointers) and better data rates for fat memory allocations (typically pixel maps) with external modules.

Consequently, the socket would need far fewer pins and the mainboard would be easier to route on the PCB.
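To put rough numbers on the pin savings this idea implies, here is a back-of-the-envelope sketch. All the pin counts below are ballpark assumptions for illustration (not AM4 or any real socket's figures), and the serialized link is hypothetical:

```python
# Back-of-the-envelope pin budget for one memory channel.
# All figures are rough assumptions for illustration, not AM4 specifics.

ddr4_parallel = {
    "data (DQ)": 64,
    "ECC (optional)": 8,
    "strobes (DQS +/-)": 18,
    "address/command": 30,
    "control (CS/CKE/ODT/CK...)": 12,
}
parallel_pins = sum(ddr4_parallel.values())
print(f"parallel DDR4 channel, signal pins only: ~{parallel_pins}")

# A hypothetical serialized link: 8 differential lanes in each direction
# plus one clock pair (again, purely illustrative numbers).
serial_pins = 8 * 2 * 2 + 2
print(f"serialized link, signal pins only: ~{serial_pins}")
print(f"rough saving per channel: ~{parallel_pins - serial_pins} pins")
```

Even with these crude numbers (and ignoring the many power/ground pins both schemes need), a serialized channel would plausibly free up on the order of a hundred pins.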

September 20, 2016 | 12:21 AM - Posted by Anonymous (not verified)

One stack of HBM2 with at least 4GB would fill that role, with the GPU/CPU cores feeding mainly off the HBM2 stack and doing any transfers from a single channel of regular DDR4 memory in the background! That would cut down on the number of pins needed to connect the interposer-based CPU/GPU die to external DRAM. Any access to slower, external DIMM-based DRAM could happen in parallel, with the HBM2's 1024-bit interconnect split into several independent channels (already done in the JEDEC HBM/HBM2 standard) to service background transfers from the slower external DRAM/DIMM memory.

AMD could add some smaller amount of DDRx SDRAM to service the CPU cores and give the GPU priority access to the HBM2 for textures and GPU kernels. With the HBM2 plus a smaller amount of DDRx SDRAM to assist the CPU, the Zen-based CPU cores would do fine with a single channel of DDR4 DRAM, and the HBM2 would provide plenty of bandwidth for any integrated GPU with more than 8 CUs (a single stack of HBM2 could probably feed a 16-CU integrated GPU) while still having plenty of bandwidth left over to feed the Zen cores. Really, a single 4GB stack of HBM2 would be fine configured to act as an L4 cache for a 4-core Zen complex and a 16-CU integrated Polaris GPU, with the rest of the off-interposer DIMM/DDR4 DRAM connected by a single channel and the HBM2/L4 cache feeding directly into the Zen cores and Polaris CUs.

That single HBM2 stack would very easily allow AMD to build an interposer-based Zen/Polaris/HBM2 APU whose integrated GPU uses a Polaris 10 design with a 256-bit GPU bus, instead of Polaris 11-based (128-bit bus) integrated graphics. There is also the stack's possible 8GB option for even more HBM2-based memory, with even more room for GPU textures; with 8GB, most games/game engines and the OS would rarely need to touch the narrow, lower-bandwidth external DRAM at all, except maybe for some gaming/graphics usage. With HBM2 memory feeding the Zen/Polaris APU, there would be no performance penalty for having a single channel to external DIMM DRAM.
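The bandwidth numbers quoted in this thread check out arithmetically. A quick sketch, assuming the commonly quoted nominal rates (HBM2 at 2 Gb/s per pin, DDR4-2400 at 2400 MT/s on a 64-bit channel):

```python
# Peak-bandwidth arithmetic: one HBM2 stack vs one DDR4 channel.
# Nominal figures only; real sustained bandwidth is lower for both.

hbm2_bus_bits = 1024      # one stack: 8 channels x 128 bits
hbm2_rate_gbps = 2        # per-pin data rate in Gb/s
hbm2_gb_s = hbm2_bus_bits * hbm2_rate_gbps / 8
print(f"one HBM2 stack: {hbm2_gb_s:.0f} GB/s")

ddr4_bus_bits = 64        # one channel, no ECC
ddr4_mt_s = 2400          # DDR4-2400
ddr4_gb_s = ddr4_bus_bits * ddr4_mt_s / 8 / 1000
print(f"one DDR4-2400 channel: {ddr4_gb_s:.1f} GB/s")

print(f"ratio: ~{hbm2_gb_s / ddr4_gb_s:.0f}x")
```

That is 256 GB/s for the stack versus 19.2 GB/s for a single DDR4-2400 channel, roughly a 13x difference, which is why a single-channel DIMM configuration would matter so much less with HBM2 on package.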

September 20, 2016 | 09:02 AM - Posted by Anonymous (not verified)

HBM2 doesn't need any pins on the APU to be used, so what's the goal of your comment if not to blindly support AMD? What do you suggest to overcome the limits of DDR4 SDRAM and the complexity of routing a high-density socket on the PCB?

September 20, 2016 | 11:39 AM - Posted by Anonymous (not verified)

The HBM2 is wired up to the CPU/GPU or APU die and resides on the interposer along with the APU. So there is no requirement to go off the interposer to any externally connected DIMM-based DRAM for any HBM2 access by the APU, and no outside pins are required to get that one HBM2 stack, with its 1024-bit interface etched into the interposer's silicon substrate, directly wired to the APU die. The goal of wiring an APU die to a single HBM2 stack is to stop any OEMs (under Intel's influence) from having adverse effects on AMD's APUs by only making laptops with a single channel to regular DIMM-based DRAM.

The HBM2 could be made to act as an L4 cache; it would be on the interposer and unaffected by any single-channel gimping from laptop OEMs. So the APU would have its own on-interposer 4GB of HBM2 memory for the integrated GPU, and the Zen CPU cores too, to run from at a very high effective bandwidth over that 1024-bit connection (JEDEC HBM/HBM2 divides the 1024-bit connection into smaller independent channels). Now, 4GB of memory is not a whole lot, but a Zen/Polaris/HBM2-based APU on an interposer could still provide enough pins to run a single channel off the interposer to 12+ GB of external, slower, lower-bandwidth DDR4 DIMM-based DRAM. So that's a net saving of a single channel's worth of pins (64-128+ pins) for connecting to any outside DDR4 DIMM-based secondary RAM.

There is not much need for an APU with its own interposer-based 4GB/8GB HBM2 stack to have two channels to outside DIMM-based DRAM, as the single HBM2 stack could easily host the OS and game/game engine. That saves the need for as many pins. A single stack of HBM2 supports 256 GB/s of memory bandwidth, so 16 Polaris 10 CUs would have plenty of available bandwidth to run from; ditto for the 4 Zen cores. Now, this APU's memory controller would have to manage both the HBM2 and the single external channel to regular DIMM-based DRAM, but that would not be hard at all! The HBM2's data accesses could happen in parallel with any outside accesses to slower DIMM-based DRAM, so the memory controller could be designed to treat the HBM2 memory like a large multi-way set-associative L4 cache in front of a larger amount of slower DDR4 DIMM DRAM.

All of the Zen CPU cores and Polaris CUs would operate from the HBM2 memory, with any slower off-interposer memory accesses managed in the background by the APU's memory controller via HBM2/L4 cache algorithms implemented in the controller's hardware. In effect, any latency hiding for access to slower external/secondary DRAM could be done with the Zen/Polaris APU only ever needing to run from the HBM2, which always provides its 256 GB/s of effective bandwidth. AMD could maybe even have an HBM3 version ready for even more bandwidth, but for a Zen/Polaris APU with 16 GPU CUs in addition to the 4-core Zen complex, 256 GB/s from a single HBM2 stack would be plenty.
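The "multi-way set-associative L4 cache" behavior being described here can be sketched in miniature. This toy model (all sizes, names, and the LRU policy are invented purely for illustration, not any AMD design) shows the basic mechanics: split an address into tag/set/offset, probe the ways of one set, and fall back to slow memory on a miss with least-recently-used eviction:

```python
# Toy set-associative cache lookup, illustrating the "HBM2 as L4" idea.
# Sizes and policies are invented for illustration only.

LINE_BYTES = 64          # cache-line size in bytes
NUM_SETS = 4             # tiny set count for demo purposes
WAYS = 2                 # 2-way set associative

# cache[set_index] is an ordered list of (tag, data); front = most recent
cache = [[] for _ in range(NUM_SETS)]
backing_store = {}       # stands in for slow external DIMM DRAM

def access(addr):
    """Return (data, hit) for a byte address, updating LRU order."""
    line = addr // LINE_BYTES
    set_index = line % NUM_SETS
    tag = line // NUM_SETS
    ways = cache[set_index]
    for i, (t, data) in enumerate(ways):
        if t == tag:                      # hit: move entry to LRU front
            ways.insert(0, ways.pop(i))
            return data, True
    data = backing_store.get(line, 0)     # miss: fetch from slow memory
    ways.insert(0, (tag, data))
    if len(ways) > WAYS:                  # evict least-recently-used way
        ways.pop()
    return data, False

backing_store[0] = "line0"
_, hit1 = access(0)      # first touch: miss, fills the cache
_, hit2 = access(0)      # second touch: served from the cache
print(hit1, hit2)        # False True
```

A hardware L4 would of course track dirty lines, write-back, and coherence as well; the point is only that hit paths run at cache (HBM2) speed while misses go to the slow channel in the background.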

I’m a fan of technological innovation and if AMD is innovating then I’m a fan of that technological innovation. Computing is not sports, computing is science and there is no room for any sports types of irrational fanaticism in computing! The HBM/HBM2 technology developed in part by AMD and part by SK Hynix is definitely innovative. Just look at the “Green Team’s” usage of the HBM2 technology!

P.S. I'm happy with AM4's complement of pins, so maybe there could be that 4GB of HBM2 and two external channels to DDR4 memory. But even if the laptop OEMs only designed in a single external memory channel, the Zen/Polaris/HBM2 APU would mostly be running from the 4GB of HBM2 with its 256 GB/s of bandwidth.

September 20, 2016 | 07:58 PM - Posted by Anonymous (not verified)

I'm afraid you didn't answer my question; you only wrote a sequence of technical words without any logic, trying to look "smart"... What do you smoke, dude?

September 22, 2016 | 12:47 AM - Posted by Anonymous (not verified)

He answered your question perfectly.
You're too dumb to understand it.
Somehow that's HIS fault.

Seriously, just shut up.

September 20, 2016 | 04:38 PM - Posted by Anonymous (not verified)

It will be possible to use stacked memory on the CPU package with a DDR4 interface, though I don't know if you would have both at once. You would want the higher-speed memory to act as a cache, not just as a regular memory channel. Also, the controller probably isn't designed to run one channel at high speed and another at a much lower speed.

I don't know if we are really going to see much HBM outside of the enterprise market or the very high-end consumer market. The parts in the consumer market will probably just be cheaper versions of enterprise parts. It sounds like they will be making a cheaper version of HBM using wafer-level packaging, which should allow slower or higher-power HBM via a PCB rather than an expensive silicon interposer.

For consumer systems, the on-package stacked DRAM may be sufficient. An HSA system is more memory-efficient since you don't need to keep two copies of everything: one in CPU memory and one in GPU memory. We may not need a serialized interface for consumers; the on-package stacked memory plus a PCI-E-attached SSD (flash, X-Point, or other non-volatile storage) may be sufficient. It is unclear what the HPC industry will eventually move to. In my opinion, HBM and HMC are not really direct competitors. HBM (and similar tech, like what Intel is doing with EMIB-attached memory) is of limited capacity, but it can more easily reach much higher bandwidth than HMC. HMC can scale up to very large memory sizes, but it cannot supply as much bandwidth. They could complement each other, but HMC is not a JEDEC standard, so it will only be where Intel and Micron want it to be. There isn't much demand for it in the consumer space.

September 21, 2016 | 03:30 AM - Posted by Anonymous (not verified)

"An HSA system is more efficient on memory since you don't need to keep two copies of everything;"

A better solution doesn't mean it is the best solution. You still need to move data over the PCI-E bus to work with the CPU, even with pointers to the graphics memory. I think HSA is a transitional solution, and we need a real fusion between CPU and GPU, like the x87 FPU co-processor.

"We may not need a serialized interface for consumer."

I don't agree, but if you think the consumer doesn't need a faster computer...

"The on package stacked memory plus a PCI-e attached SSD (flash, x-point,!or other non-volatile) may be sufficient."

It's pretty hard to determine what is sufficient without goals and criteria to satisfy.
