AMD Sea Islands HD 8850 and 8870 Specifications Leaked

Subject: Graphics Cards | September 18, 2012 - 06:34 PM
Tagged: Sea Islands, oland, hd8870, hd8850, gpu, amd radeon, amd

AMD beat NVIDIA to the punch with its 7000-series “Southern Islands” graphics cards, and if the rumors hold true the company may well accomplish the same feat with its next-generation architecture. Codenamed Sea Islands, AMD’s 8800-series architecture is (allegedly) set to debut in the January 2013 time frame. Featuring DirectX 11 support, GPGPU and power-efficiency improvements, 3.4 billion transistors on a 28 nm process, and a rumored sub-$300 price, will the 8850 and 8870 win over enthusiasts?

AMD launched its Southern Islands graphics cards, built on the Graphics Core Next (GCN) architecture with the Pitcairn GPU, in March of this year. Since then NVIDIA has moved into the market with the GTX 660 and GTX 660 Ti, and budget gamers have plenty of options. However, yet another budget gaming GPU from AMD will arrive in just a few months if certain sources' leaks prove correct. The 8850 and 8870 graphics cards are rumored to launch in January 2013 for under $300 and to offer some significant performance and efficiency improvements. Both GPUs are based on the Oland variant of AMD’s Sea Islands architecture. As a point of reference, AMD’s 7850 and 7870 use the Pitcairn version of the Southern Islands architecture – thus Sea Islands is the overarching architecture and Oland is an actual chip based on it.

Sea Islands is essentially an improved and tweaked Graphics Core Next design. It will continue to use TSMC's 28 nm process, but will require less power than the 7000-series while being much faster. While the specifications for the top-end 8900-series are still up in the air, Videocardz claims sources in the know have supplied the following numbers for the mid-range 8850 and 8870 Oland cards.

Videocardz put together a table comparing AMD's current and future GPU series.

The GPU die size has reportedly increased to 270 mm^2 versus the 7850/7870’s 212 mm^2 die. The increase is the result of AMD packing in an additional 600 million transistors for a total of 3.4 billion. 3DCenter breaks the GPU down further, stating that the 8870 will feature 1792 shader units, 112 texture mapping units (TMUs), 32 ROPs, and a 256-bit memory interface. The 8850 graphics card will scale the Oland GPU down a bit further, with only 1536 shader units and 96 TMUs, but it keeps the 32 ROPs and 256-bit interface.
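
As a quick aside, those numbers imply Pitcairn weighs in at roughly 2.8 billion transistors, which makes the implied transistor density easy to work out. Here is a minimal sketch of that arithmetic in Python; the Pitcairn count is inferred from the totals above, not stated in the leak:

    # Implied transistor density from the leaked figures (both chips on TSMC 28 nm).
    # Pitcairn's 2.8 billion transistor count is inferred from "an additional 600
    # million for a total of 3.4 billion" -- the leak does not state it directly.
    chips = {
        "Pitcairn (HD 7850/7870)": (2.8e9, 212),  # (transistors, die size in mm^2)
        "Oland (HD 8850/8870)": (3.4e9, 270),
    }

    for name, (transistors, area_mm2) in chips.items():
        print(f"{name}: {transistors / area_mm2 / 1e6:.1f}M transistors per mm^2")
    # Pitcairn: ~13.2M per mm^2, Oland: ~12.6M per mm^2 -- density actually drops
    # slightly, so the extra area is not purely packed with denser logic.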

For comparison, here’s a handy table comparing the 8850/8870 to the current-generation 7850/7870 (which we recently reviewed).

                  Radeon HD 7850   Radeon HD 8850   Radeon HD 7870   Radeon HD 8870
Die Size          212 mm^2         270 mm^2         212 mm^2         270 mm^2
Shader Count      1024             1536             1280             1792
TMUs              64               96               80               112
ROPs              32               32               32               32
Memory Interface  256-bit          256-bit          256-bit          256-bit
Bandwidth         153.6 GB/s       192 GB/s         153.6 GB/s       192 GB/s

So while the memory bus width and ROP count stay the same, the larger die buys you more shaders and texture units along with a boost to overall memory bandwidth – sounds like an okay compromise to me!
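
For anyone wondering where the 192 GB/s figure comes from, GDDR5 bandwidth is just the bus width times the effective data rate. A back-of-the-envelope sketch follows; note that the 1500 MHz (6 Gbps effective) memory clock for the 8800 series is my assumption to make the leaked number work, not something the sources stated:

    # Peak GDDR5 bandwidth: (bus width in bits / 8) bytes per transfer, times the
    # effective data rate in Gbps. The 6.0 Gbps rate (1500 MHz GDDR5) is assumed;
    # the leak only lists the bandwidth totals.
    def bandwidth_gb_s(bus_width_bits: int, effective_rate_gbps: float) -> float:
        """Theoretical peak memory bandwidth in GB/s."""
        return bus_width_bits / 8 * effective_rate_gbps

    print(bandwidth_gb_s(256, 4.8))  # 153.6 GB/s -> HD 7850/7870 (1200 MHz GDDR5)
    print(bandwidth_gb_s(256, 6.0))  # 192.0 GB/s -> HD 8850/8870 (assumed 1500 MHz GDDR5)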

AMD has managed to increase clock speeds and GPGPU performance with Oland/Sea Islands as well. On the clockspeed front, the 8850 has base/boost GPU clockspeeds of 925 MHz and 975 MHz respectively, while the 8870 has base/boost clocks of 1050 MHz/1100 MHz. That is a nice improvement over the 7850’s 860 MHz and the 7870’s 1000 MHz. AMD is also adding its PowerTune with Boost functionality to the Oland-based graphics cards, which is a welcome addition.

The theoretical computational power of the graphics chips has been increased as well, by as much as 75% for single precision and 60% for double precision (7870 to 8870). Single precision performance rises to 2.99 TFLOPS on the 8850 (from 1.76 TFLOPS on the 7850) and to 3.94 TFLOPS on the 8870 (from 2.25 TFLOPS on the 7870). The single precision numbers are the ones relevant to gaming and to the GPU-accelerated applications consumers actually run. They are not representative of high performance computing (HPC) workloads where precision is important (think simulations and high-end mathematics); that is where the double precision numbers come in. The 8800 series gets a nice boost in potential double precision performance as well, topping out at 187.2 GFLOPS for the 8850 and 246 GFLOPS for the 8870, compared to the 7850’s 110 GFLOPS and the 7870’s 160 GFLOPS.
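
If you want to sanity-check those numbers yourself, GCN's peak single precision rate is simply shaders × 2 FLOPs (one fused multiply-add) per clock, and the leaked double precision figures line up with a 1/16 rate. A rough sketch; the 1/16 DP ratio is my assumption based on Pitcairn, not something the leak spells out:

    # Peak GCN throughput: each shader retires one FMA (2 FLOPs) per clock cycle.
    # Double precision at 1/16 the single precision rate is an assumption (it is
    # the ratio Pitcairn uses); the leak only lists the resulting totals.
    def sp_tflops(shaders: int, boost_mhz: int) -> float:
        """Peak single precision throughput in TFLOPS."""
        return shaders * 2 * boost_mhz / 1e6

    def dp_gflops(shaders: int, boost_mhz: int, dp_ratio: float = 1 / 16) -> float:
        """Peak double precision throughput in GFLOPS."""
        return sp_tflops(shaders, boost_mhz) * 1000 * dp_ratio

    print(sp_tflops(1536, 975), dp_gflops(1536, 975))    # ~2.99 TFLOPS, 187.2 GFLOPS (HD 8850)
    print(sp_tflops(1792, 1100), dp_gflops(1792, 1100))  # ~3.94 TFLOPS, ~246.4 GFLOPS (HD 8870)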

The sources also disclosed that while the 8850 would carry the same TDP (thermal design power) rating as the 7850, the higher-end 8870 would actually see a reduced 160W TDP versus the previous generation’s 175W. Unfortunately, no specific power draw numbers were mentioned, just that the cards are more power efficient, so it remains to be seen how much (if at all) less power the GPUs will need. The sources put the 8870 at the same performance level as the NVIDIA GeForce GTX 680, which would make this an amazing mid-range card if true – especially considering the rumored prices of $279 for the 8870 and $199 for the 8850. Granted, those prices are likely lower than what we will actually see if AMD does launch the cards in January, as the company will not have competition from NVIDIA’s 700 series right away.
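
Taking the leak at face value, the efficiency claim is easy to quantify as theoretical GFLOPS per watt of TDP. A crude sketch using only the figures above; TDP is a thermal ceiling rather than measured power draw, so treat this as illustrative at best:

    # Rough theoretical efficiency: single precision GFLOPS per watt of rated TDP.
    # TDP is not measured power draw, so this is only a paper comparison; both
    # input figures come straight from the leaked comparison table.
    leaked = {
        "HD 7870": (2250.0, 175.0),  # (SP GFLOPS, TDP in watts)
        "HD 8870": (3940.0, 160.0),
    }

    for card, (gflops, tdp_watts) in leaked.items():
        print(f"{card}: {gflops / tdp_watts:.1f} GFLOPS per watt")
    # HD 7870: ~12.9 GFLOPS/W vs HD 8870: ~24.6 GFLOPS/W -- nearly double, on paper.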

In some respects, the rumored specifications seem almost too good to be true, but I’m going to remain hopeful and am looking forward to not only seeing the mid-range Oland GPU coming out, but the unveiling of AMD’s top-end 8900 series (which should be amazing, based on the 8800-series rumors).

What do you think of the rumored 8850 and 8870 graphics cards from AMD? Will they be enough to tempt even NVIDIA fans?

Source: Videocardz
September 18, 2012 | 08:05 PM - Posted by icebug

I just need to decide if I want to buy a 7950 now and CrossFire it later when it's much cheaper, or wait through the next few grueling months to pick up an 8870 or 8950

September 18, 2012 | 08:19 PM - Posted by Tim Verry

Totally, I tend to upgrade every other, or every third (depending on how much life I can squeeze out of it), generation, and the 8950 looks like a good candidate to replace my 6950 :)

September 20, 2012 | 04:44 PM - Posted by aparsh335i (not verified)

6950 2GB card...best bang for the buck I've ever seen when unlocked to a 6970.

September 21, 2012 | 06:50 AM - Posted by Tim Verry

Yup, that's the one I have, a pseudo 6970 :). It's an XFX 6950 2GB reference version. I have it with unlocked shaders but running at 6950 clockspeeds. It can be clocked higher, but it gets a bit too warm for comfort (which doesn't help the AC in the hot summer heat).

September 21, 2012 | 05:05 PM - Posted by aparsh335i (not verified)

I was actually running mine fully unlocked and at stock 6970 speeds no problem (MSI reference version). At one point I even OC'd it for a friendly PCMark competition with a buddy of mine and had it clocked about as high as real 6970s OC to, which was a huge surprise. I figured it was stupid to leave it OC'd so high though, because all I was playing was MW2 at the time. Not exactly a graphics hog of a game.

September 22, 2012 | 04:45 AM - Posted by Tim Verry

hehe, yeah the 6950s had plenty of overclocking headroom so long as you didn't get a dud of a card. I managed to get mine to 1005 MHz core, and I could probably go a bit higher now that I'm using water cooling, but I'd mostly just be wasting power :). Once games come out that it can't handle at the settings I'm used to, I'll probably crank it up to wring enough life out of it to get me to an 8970 upgrade :D.

1005 MHz @ 1250 mV :)

https://docs.google.com/spreadsheet/ccc?key=0AslQjNW7LjDpdHBlMkpoMkxDZGN...

January 2, 2013 | 08:42 AM - Posted by Anonymous (not verified)

Same here, I've stuck with my 2x 5870s till now; they are finally starting to have trouble keeping up decent, fluid framerates in new games.

I was first thinking of getting a 78xx or 79xx series card, but hearing the 8xxx series was coming rather soon after, waiting wasn't a hard decision.

Even if the 8xxx series isn't a large improvement over the 7xxx series, it should still mean that the 7xxx series will drop even further in price, allowing for a cheap replacement of both my cards. But looking at the projected price point for the new generation, it wouldn't be that much bigger an investment to get two 8xxx series cards and mop the floor with everything to come for a few years again.

August 1, 2013 | 09:04 AM - Posted by Graelock (not verified)

The 7950 will still be very competitive throughout the entire 8000 series, so it's still a good choice. I still run a 6970; they are all high-end DX11 cards and are good. The only reason I am contemplating jumping over to 2x 7850s is for the slight performance boost, probably 10-15% vs my single 6970, at less than the power consumption of my overclocked 6970, with less ambient heat and reduced noise (my 6970 works overtime on its own and throws off a lot of heat and sound, but is blazing fast considering its age). I can run any game maxed. There's another part of me telling me to just pick up another 6970, but that would require a PSU upgrade, since they are power hungry. I would then give my wife my 6970 and let it run quiet and cool in her system.

August 1, 2013 | 09:08 AM - Posted by Graelock (not verified)

But even today, you can't go wrong with a 6xxx-8xxx AMD card, just like you can't go wrong with a 4xx-6xx NVIDIA card. They are all DX11. Personally, if you fit into any of these categories and are running a single card, at least consider purchasing a 2nd, 3rd, or 4th card (SLI/CrossFire) and just hold out until DX12.

September 18, 2012 | 09:12 PM - Posted by Anonymous Coward (not verified)

I think AMD will find that by cashing in on people's goodwill with their "fuck you" prices on the 7000 series cards, they have really only screwed themselves in the future. AMD graphics is not a premium brand, and they're about to find out the dollar store doesn't get to become Target simply by raising prices because they suddenly want to be "upscale".

Crash and burn AMD, crash and burn so someone can buy out the whole thing (large patent-holding companies never die, they just get bought) and fire all the management, starting with Rory "you've got to be kidding" Read.

September 18, 2012 | 11:21 PM - Posted by Ryan Stelly (not verified)

To the previous comment: you are a fucking retard. I dare you to give an answer.

September 19, 2012 | 03:09 PM - Posted by Anonymous Coward (not verified)

The butthurt is strong with this one...

September 20, 2012 | 04:45 PM - Posted by aparsh335i (not verified)

The "high" AMD prices have been gone for a long time...

September 22, 2012 | 12:02 PM - Posted by Anonymous (not verified)

WOW, so the butthurt trolls are out again; it happens every time AMD is about to release something or stomp on NVIDIA. I feel this time NVIDIA is really going to feel the pain. NVIDIA only has so many engineers, working on both Tegra and GPUs, whereas AMD has separate engineers for CPU and GPU who work together on the APU. We will see NVIDIA falling behind AMD every gen; we are already seeing this happen. People need to start selling their NVIDIA shares ASAP and buy AMD shares, which will soon rocket up.

September 25, 2012 | 07:19 AM - Posted by renz (not verified)

lol. I'm not sure if you're being serious or joking around

December 28, 2012 | 03:39 PM - Posted by renz_is_retarded_brainfuck (not verified)

Shut up, stupid miscarriage, and stop thinking, because it hurts the humanity.

September 18, 2012 | 11:56 PM - Posted by James (not verified)

I like the idea of a 20% to 25% price reduction on the same tier of cards in the next generation. If performance mirrors the increases in clock speeds and whatnot, it could be the new golden-boy card, like the GTX 460 Ti, in that $150 to $250 price point.

September 19, 2012 | 12:09 AM - Posted by Anonymous (not verified)

GL for 1500 MHz memory to achieve that BW.

September 19, 2012 | 01:54 AM - Posted by Tim Verry

Indeed, I was curious about that as well, hence the 'too good to be true' worry of mine :(

September 19, 2012 | 11:59 AM - Posted by Josh Walrath

Pretty much all of the NVIDIA GTX 680 cards feature 1500 MHz memory (6 GHz effective). The stuff is pretty common now. What isn't common is a memory controller that can actually run the chips at those speeds. NVIDIA spent a LOT of time getting their controller right and able to run at those speeds. AMD just has some catching up to do, but the memory chips are there.

September 19, 2012 | 01:58 PM - Posted by Tim Verry

Ah, you're right, the GTX 680 is a 256-bit bus; I was thinking it was wider than that. In that case, this should be doable if AMD follows NVIDIA's lead :)

September 19, 2012 | 02:29 AM - Posted by Daniel Meier (not verified)

If the price point is fair, I will gladly get one of these cards and throw my GTX 570 away.

September 19, 2012 | 03:31 AM - Posted by Tim Verry

hehe, feel free to toss it my way ;) lol!

September 20, 2012 | 04:51 PM - Posted by aparsh335i (not verified)

And as some of you may remember, NVIDIA's GTX 680 is not actually their real flagship; that would be GK110, which they have only put into their Tesla line so far. The GTX 680 was supposed to be their mid-range part, but it was such a leap in performance that they didn't need to release the bigger/badder GPU and could make more money on the cheaper one.

Rumors rumors rumors :)

September 21, 2012 | 06:54 AM - Posted by Tim Verry

Hmm, true, GK110 does look really beastly from what I've seen specs-wise :). I'm assuming that will be the 700 series?

September 21, 2012 | 05:08 PM - Posted by aparsh335i (not verified)

Who knows...I mean, I don't know anyone that works for NVIDIA (cough cough). But I wouldn't be surprised if they have already redeveloped GK110 into something better for the 700 series...I mean, they finished that chip months ago and have only released it as Tesla.

September 22, 2012 | 11:58 AM - Posted by Tim Verry

hehe :)

Yeah, it would be really weird for NV to spend all that money developing GK110 and then only use it for Tesla. Granted, they probably make more money per Tesla card than per gaming card (just my guess), but if they could also use it as the basis of a 700 series gaming lineup, I would think that would be the smart thing to do :).

September 25, 2012 | 07:28 AM - Posted by renz (not verified)

Yup. I take it GK110 chips with some defective cores will most likely end up as GeForce or Quadro parts.

September 25, 2012 | 07:27 AM - Posted by renz (not verified)

Based on the specs, a fully enabled GK110 will have 15 SMX units, and IMO NVIDIA can make several models based on that chip alone. I already think that the next NVIDIA flagship might not use a fully enabled GK110.

January 31, 2013 | 01:13 PM - Posted by Anonymous (not verified)

fuck you
