According to a report from VideoCardz (via Overclock.net/ChipHell), high-quality images have leaked of the upcoming GP104 die, which is expected to power the GeForce GTX 1070 graphics card.
Image credit: VideoCardz.com
"This GP104-200 variant is supposedly planned for GeForce GTX 1070. Although it is a cut-down version of GP104-400, both GPUs will look exactly the same. The only difference being modified GPU configuration. The high quality picture is perfect material for comparison."
A couple of interesting things emerge from this die shot: the relatively small size of the GPU (die size estimated at 333 mm²), and the assumption, based on a previously leaked photo of the die on a PCB, that it will use conventional GDDR5 memory.
Alleged photo of GP104 using GDDR5 memory (Image credit: VideoCardz via ChipHell)
"Leaker also says that GTX 1080 will feature GDDR5X memory, while GTX 1070 will stick to GDDR5 standard, both using 256-bit memory bus. Cards based on GP104 GPU are to be equipped with three DisplayPorts, HDMI and DVI."
While this is no doubt disappointing to those anticipating HBM with the upcoming Pascal consumer GPUs, the move isn't all that surprising considering the consistent rumors that GTX 1080 would use GDDR5X.
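For a rough sense of what those bus figures imply, peak memory bandwidth is just bus width times per-pin data rate. This is a back-of-the-envelope sketch; the per-pin rates below are typical published figures for GDDR5 and the initially announced GDDR5X rate, not confirmed specs for these cards.

```python
# Back-of-the-envelope peak memory bandwidth: (bus width in bits / 8) * per-pin data rate.
# The per-pin rates here are typical/announced figures, not confirmed specs for GP104 cards.

def peak_bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak theoretical bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits / 8 * gbps_per_pin

print(peak_bandwidth_gb_s(256, 7.0))   # GDDR5 at 7 Gbps   -> 224.0 GB/s
print(peak_bandwidth_gb_s(256, 10.0))  # GDDR5X at 10 Gbps -> 320.0 GB/s
```

On the same 256-bit bus, 10 Gbps GDDR5X over 7 Gbps GDDR5 is roughly a 1.4x improvement, which is why the initial GDDR5X parts are expected to fall well short of the 2x figure sometimes quoted.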
Is the lack of HBM (or HBM2) enough to make you skip this generation of GeForce GPU? This author points out that AMD's Fury X – the first GPU to use HBM – was still unable to beat a GTX 980 Ti in many tests, even though the 980 Ti uses conventional GDDR5. Memory is obviously important, but the core defines the performance of the GPU.
If NVIDIA has made improvements to performance and efficiency we should see impressive numbers, but this might be a more iterative update than originally expected, which only gives AMD more of a chance to win market share with its upcoming Radeon 400-series GPUs. It should be an interesting summer.
And the wait continues…
Might be next year before the HBM2 variants come out, unless anybody wants the GDDR5X variants this year.
Yeah, I was holding out for HBM2 as well, partially because of the space and power savings. Technically my SFF case can fit a full 12-inch card, but that really wouldn’t be a fun installation. If there’s no killer app this year, then it’s time to hit the GPU upgrade snooze button again.
Not like the Radeon 400 series will be any less iterative.
I’m holding out for HBM2 myself. I don’t expect any revolutionary difference in performance though. I just prefer to get top end cards.
Will get a GTX 1080 at launch or thereabouts to help with Rift games. The GTX 970 is getting long in the tooth; it’s been a good card, but it’s time for an upgrade.
Still on the 670, and I’m waiting for HBM2 and the Vive to mature a bit before upgrading. Not many titles these days require a very high-end card.
Long in the tooth?
It’s not even 2 years old!!!
That’s ancient in computer time, for one.
Secondly, GM200 and 204 were 28nm.
On the subject of Fury X being “unable” to beat a GTX 980 Ti…
http://www.techpowerup.com/reviews/Gigabyte/GTX_980_Ti_XtremeGaming/23.html
AMD’s performance at 4K is stellar. I think you might want to revise that statement. NVIDIA always launches strong out of the gate, but since GCN AMD has been able to improve or maintain performance over time.
I said "in many tests", not all tests. You will always be able to pick out a scenario where one consistently beats the other. The point is that HBM was not the world-beater that some expected it to be. GDDR5 is actually pretty good, and it turns out the core of the GPU has a lot to do with performance.
I'll cite just one example of the Fury X vs. 980 Ti at 4K, from Ryan's review:
https://pcper.com/reviews/Graphics-Cards/AMD-Radeon-R9-Fury-X-4GB-Review-Fiji-Finally-Tested/Grand-Theft-Auto-V
The 980 Ti was 15% faster in this scenario. Can you go elsewhere and find more favorable numbers for the Fury X? Sure. What were the settings used for the game in those tests? Are they identical to ours? We received comments right away pointing out that other sites had better results with the Fury X. There will always be differences, often based on detail settings.
In general, the 980 Ti is a faster card.
Just look at the GPUBoss.com comparison. Search “980 Ti vs Fury X GPU Boss” on Google (including the quotation marks) and you’ll see clearly that the 980 Ti is superior overall.
Measure the power usage of HBM relative to GDDR5/5X and you will see why server/HPC users will get first crack at the available supplies of HBM-based SKUs, well before HBM/HBM2 is available in large enough numbers for the consumer market. It’s all supply and demand, and it’s not really Nvidia’s or AMD’s fault so much as simple supply economics. On top of that, the server/HPC market can pay a HIGHER price for HBM products because they need so much of it, and they spend so much on electricity that they can recoup the higher cost of HBM through power savings alone, since they use hundreds of thousands of GPU accelerators. Most likely server/workstation APUs/SoCs on an interposer using HBM are coming as well; those will arrive in 2017 and beyond!
GPU Boss is useful for next to nothing…
Really? Just say that it’s your personal card, and STFU, Nvidia dickhead.
lol… too much anger
“In general, the 980 Ti is a faster card.”
Is that because you got one?
“In general, the 980 Ti is a faster card.”
I’m sorry, but that’s not true. GM200 is a really crippled architecture, and when it comes to raw GPU performance, Fiji XT is the faster GPU.
“In general, the 980 Ti is a faster card”
Just piss off.
“In general, the 980 Ti is a faster card.”
Fuck off, Nvidia fanboy; you know very well that it’s not true.
So it beat it in one bracket? And then an alternate version of the 980 Ti kicked its ass?
Nobody plays PC games at 4K with all the eye candy to get what, 20-30 fps, lmfao, unless it’s some old DX9 game from years ago where you can at least get 60 fps.
The sweet spot is 1440p with all the details cranked up, and an overclocked 980 Ti beats the Fury X any day! Now drop the Fury X vs. 980 Ti debate already; it’s dumb at this point.
Ah, good ol’ TPU, the go-to site for AMD fans looking for ammo.
4K on a single card. Oookay… let’s highlight this to present the Fury X in the best light.
No.
The 1440p chart is more relevant for the typical usage scenarios for both the 980 Ti and the Fury X in single GPU configs.
Why are you comparing a super-OC card with a stock Fury X?
Never expected HBM on GP104; these will most likely be 8 GB. The cost and 4 GB limit of HBM put it in an awkward position, and HBM2 isn’t ready and will most likely cost even more than HBM at launch, so it will appear only in the top-end products.
Now I think Micron can do 8 GB of HBM1.
As far as I know, Micron is not making HBM. They are making GDDR5x. HBM was a project between AMD and SK-Hynix. They are both JEDEC standards, but I don’t know if anyone can make HBM. The standard specifies things external to the die stack, but not the internals. Micron does have some experience with stacked memory due to HMC, but I don’t know if that is directly applicable to designing HBM stacks. Samsung is apparently making HBM2; they have some experience with stacked memory die due to their work on Wide IO, which is similar to HBM, but aimed at low power for mobile use. It was designed to stack on top of an SOC package, rather than on an interposer.
I haven’t seen much indication that we will see more than 1 GB HBM1 stacks. It would be great if they could, since buying a high-end card with only 4 GB seems like a bad idea to me. If they can’t increase the per-die capacity, then the only option seems to be to use more stacks and increase the width of the interface. It would be great if they could do 6 stacks for 6 GB and 768 GB/s to fill the gap until we get HBM2, but that seems unlikely. They would need to expand the interface to 6144 bits.
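The stack arithmetic above can be sketched out quickly. This assumes first-generation HBM’s 1024-bit interface per stack at 1 Gbps per pin, and 1 GB per stack as on current Fiji-based cards; those per-stack figures are assumptions based on the shipping HBM1 parts, not a statement about what future stacks could do.

```python
# HBM1 scaling arithmetic: each stack adds a 1024-bit interface at ~1 Gbps/pin.
# Assumes 1 GB capacity per stack, as on current Fiji-based cards.
BITS_PER_STACK = 1024
GBPS_PER_PIN = 1.0
GB_PER_STACK = 1

def hbm1_config(stacks: int):
    """Return (interface width in bits, peak bandwidth in GB/s, capacity in GB)."""
    width = stacks * BITS_PER_STACK
    bandwidth = width / 8 * GBPS_PER_PIN
    return width, bandwidth, stacks * GB_PER_STACK

print(hbm1_config(4))  # Fury X today: (4096, 512.0, 4) -> 512 GB/s, 4 GB
print(hbm1_config(6))  # hypothetical 6-stack part: (6144, 768.0, 6) -> 768 GB/s, 6 GB
```

A hypothetical 6-stack part lands exactly on the 6144-bit, 768 GB/s, 6 GB configuration mentioned above.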
lol, I meant SK Hynix; I have no clue why my brain went to Micron. But I saw slides from Hynix when they announced HBM2, and they had slides about 8 GB HBM1. So if I am not mistaken, HBM1 isn’t limited to 4 GB, and there is still the possibility of having HBM1 on GPUs unless there is a technical issue with size or height or whatever.
I’m inclined to stick with the 980 Ti until the “Ti equivalent” Pascal card arrives (which will probably be near the end of this year or early next year), even if the ‘1080’ is marginally faster. If it’s dramatically faster, OR if Pascal has some VR-specific features that noticeably impact performance now or may do so in the near future (e.g. Maxwell’s high-resolution shading), then it might be worth it.
“Maxwell’s high-resolution shading”
Brainfart, I meant MULTI-resolution shading (having multiple viewports populated by the same draw calls).
This only needs to keep up with the PS4 Neo games, since ports are all that get released.
No need to come out with the best product when you can come out with two, and people like Silverbullet will buy both. LOL
GDDR5X will still offer a (2x?) memory bandwidth improvement, won’t it? The key things for a new generation are price, performance, and power consumption. The GTX 970 was equivalent to a GTX 780 Ti when it came out. A GTX 1070 would need to have 980 Ti performance at 970 prices and power consumption. A GTX 1080 would need to be above that performance level by 20-30%.
It may be able to offer 2x the bandwidth. That seems to be based on projections, not necessarily existing devices. I doubt the initial products will be anywhere close to 2x. It is going to take a lot more power and waste a lot of die area compared to HBM.
317 mm² is a big chip for a first lineup on a die shrink, and that’s not even the 1080 (which would be what, 400?). With AMD’s Polaris 10 at 232 mm², that’s a really big difference in size, which leaves us with two scenarios:
1- TSMC is much more mature and allows great yields, which will probably translate into RIP AMD.
2- Or Nvidia has density issues forcing them to go bigger, which will translate into much more expensive cards compared to AMD, and basically RIP Nvidia.
Well, this is exciting. I hope AMD doesn’t screw this up; this is practically their last chance to grab some market share back and get out of the red.
I believe the GeForce X70 and X80 are typically on the same die; the X70 parts just have some bits disabled. If the 1070 die is 317 mm², the 1080 will more than likely be the same size unless Nvidia does something different.
Not really. Back in the Fermi days Nvidia came out with a 500+ mm² chip from the get-go. It depends on what strategy the company employs. AMD always goes for a smaller die and favors denser transistors. When TSMC did not come out with 20nm HP, you saw their die sizes go above 400 mm², whereas previous flagships like the 5870 and 6970 were much smaller than that. Even the 7970 is below 400 mm². Nvidia has changed their strategy a bit since Kepler, but their flagship is usually still above 500 mm² in size. Now they go even bigger, into the 600 mm² range.
People have been debating transistor die size and relative cost for years, but in the end we don’t even know if AMD and Nvidia pay the same price for the same die size.
Over 300 mm² is quite large for a 16 nm die. They could do 600 mm² on 28 nm because it is a very mature process that has been tweaked for close to 5 years now. A lot of rumors indicate that yields on processes at 20 nm and below are not that good. This also will probably not improve much compared to past process technologies, due to the inherent physical issues involved. This is probably why AMD has been talking about multi-GPU setups. We are probably going to get multiple GPUs on a single interposer, because it will be more economical to make smaller dies and place them on an interposer than to make a single, much larger, die.
Nvidia has gone for big die early, but that results in products like the Titan X: great performance, but massively overpriced. It was mostly marketing until they built up enough stock of salvaged parts to make a real product. A 980 Ti is a “real product”, but it is still a very low-volume, high-priced part. This strategy has served Nvidia well, since people read reviews of these high-priced parts, and then buy Nvidia’s lower-end parts, even though they are seldom the best value. If you want to pay $1500 for Nvidia’s latest and greatest marketing part, then that is your business.
AMD GPUs fail 4-10x as often as Nvidia GPUs…
GDDR5X is fine. Besides, you get more memory bandwidth, speed, and capacity at lower cost. That way, it’s easier to stream high-resolution textures in today’s games. I can’t wait to upgrade from my GTX 680, which has been around for 4 years now!
HBM? Wait for Vega / big Pascal GP100/102 next year. Hoping for the full 60 SMs on big Pascal, and at least a 450 mm² chip at 250 W TDP.
I will wait until they come up with cards that can handle full 4K at well over 60 fps before upgrading next.
GDDR5 is pathetic. It’s kind of sick how AMD is ahead of NVidia on memory technology. I guess they really do not want to be king of the GPU as time moves forward.
If this is similar to other node shrinks, the 1080 should be between 20 and 30% faster than the 980 Ti at a lower price, just one year later… which just reiterates what a poor value flagships usually are.
I dunno, they’re not so bad anymore. Node shrinks are so far apart now that they might just milk 14/16nm for the next five years and not really get much higher. When node shrinks were yearly or bi-yearly, the flagships were definitely poor value.
Even if they don’t have a smaller process to increase performance, they may be able to do a lot with the system architecture. For AMD, Vega will be HBM2. Their next generation after Vega (Navi) lists scalability and next-gen memory. We don’t know much more, but I suspect that Navi may be multi-GPU by default and use distributed memory of some type. While we were stuck at 28 nm for a long time, they still managed to scale performance by going with very large dies. Going forward, they may not be able to depend on process shrinks, but silicon interposers may offer a way to go bigger. Don’t expect the high end to stand still even if we don’t have process shrinks.
I would wager they’re holding out on the HBM/HBM2 implementation for this core (if they’re going to do it) so they can drop a Ti variant. I’d wager they ran into either yield or stability issues that made them shift to this plan, so now they have a few more months to get the kinks worked out.
What plan? If you are talking about HBM2, it’s not really up to them. It’s a new tech: Micron hasn’t started mass production yet, and Samsung just started production a few weeks back. There is no big volume yet for mass-produced GPUs, and we have to wait until the end of the year or early 2017.
Getting HBM2 this soon after HBM1 was a bit of wishful thinking. HBM2 isn’t just larger and faster; it has several new features as well. We have hardly seen any HBM1 products yet. I am still wondering if they will make an HBM1 system with over 4 GB. There doesn’t seem to be anything in the specification that sets a hard limit on the capacity. It does specify the size of the package, since it specifies the micro-bump interface areas on the bottom of the stack. That size limitation, plus the stack height limitation, may effectively limit the capacity unless they are willing to go with a wider interface. The capacity issue may be overblown though; they seem to have put some work into memory savings, and the Fury seems to do very well at 4K.
I’m using a 780 Ti that I got at its launch; I’ll probably pick this up when it comes out. I’d like to wait for the Ti version, but I really want the additional memory and faster GPU for 1440p gaming.
If this is true, then the 1080 will likely be worth buying if you have a 780 or lower, or need more RAM. If the 1070 is 256-bit GDDR5 and not X, then that’s going to be pretty pathetic.
I hope my 780 lasts until 2017, when I can get a GP100 GeForce.
Lol, and to think I bought a new-model Lenovo computer with an Intel graphics card in it, and it struggles to play Empire Earth 2. Hmmmm, I think I was scammed.
Intel graphics card??? You mean an integrated GPU. That means you can pop in one of these babies when they come out.