It should come as no surprise how the GTX 1070 Ti performs: better than a GTX 1070 but not quite as fast as a GTX 1080 … unless you overclock. With the push of two buttons Ryan was able to hit 1987 MHz, which surpasses your average GTX 1080 by a fair margin. Hardware Canucks saw 2088 MHz when they overclocked, along with a memory speed of 8.9 Gbps, which pushed performance past the reference GTX 1080 in many games. Their benchmark suite encompasses a few different games, so you should check to see if your favourites are there.
The real hope for this launch was that prices would change — not so much the actual prices you pay, but the MSRPs of cards from both AMD and NVIDIA. For now that has not happened, but perhaps soon it will, though Bitcoin hitting $7,000 does not help.
"NVIDIA’s launch of their new GTX 1070 Ti is both senseless and completely sensible depending on which way you tend to look at things. The emotional among you are going to wonder why NVIDIA is even bothering to introduce a new product into a lineup that’s more than a year old."
Here are some more Graphics Card articles from around the web:
- GeForce GTX 1070 Ti Founder Edition @ Guru of 3D
- GeForce GTX 1070 Ti 2-way FCAT SLI @ Guru of 3D
- MSI GeForce GTX 1070 Ti Gaming @ Guru of 3D
- Palit GeForce GTX 1070 Ti Super Jetstream @ Guru of 3D
- Nvidia GTX 1070 Ti review: A fine graphics card—but price remains high @ Ars Technica
- GTX 1070 Ti Review- 35 Games benchmarked @ BabelTechReviews
- MSI GTX 1070 Ti Gaming 8 GB @ TechPowerUp
- NVIDIA GeForce GTX 1070 Ti Founders Edition 8 GB @ TechPowerUp
- A Quick Look At NVIDIA’s GeForce GTX 1070 Ti @ Techgage
- MSI GTX 1070 Ti Gaming 8G @ Kitguru
- Palit GTX 1070 Ti Super JetStream 8 GB @ TechPowerUp
- Palit GTX 1070 Ti Super JetStream @ Kitguru
- The NVIDIA GeForce GTX 1080 Ti Founders Edition @ TechARP
- MSI GeForce GTX 1080 Ti GAMING X TRIO @ [H]ard|OCP
- Sapphire RX VEGA 64 Limited Edition @ Modders-Inc
- The AMD Radeon RX Vega 64 @ TechARP
Nice Star Wars reference, Jeremy. I think it’s safe to say the 1070 Ti hates us, lol.
Are these GTX 1080 GPUs that were slightly defective, and thus have been piling up for a year or so to the point where they can justify a new card?
Or GTX 1080 GPUs intentionally modified? Or do we have both things going on?
I didn’t see OC GTX 1080 results to compare in the same review; however, the CUDA core and texture unit counts differ by only 5%.
But memory bandwidth is also only 80% of what most GTX 1080s have, so it’s hard to guess what the average game performance would be.
I’d guess OC GTX 1080 vs OC GTX 1070 Ti would be a 0% to 15% difference, with closer to a 7% average difference.
So it should be within 5% if memory bandwidth were the same, but it is not. Thus performance should be worse in games at higher resolutions, where memory bandwidth matters a lot more. Hence maybe up to a 15% worst-case difference at 4K due to memory, but that’s only a guess.
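The 80% figure follows from the reference memory specs — a quick sanity check, assuming the reference configurations (256-bit bus on both cards, 8 Gbps GDDR5 on the 1070 Ti vs 10 Gbps GDDR5X on the 1080):

```python
# Peak memory bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps).
# Reference specs assumed; overclocked cards will differ.
def bandwidth_gbps(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

gtx_1070_ti = bandwidth_gbps(256, 8.0)   # GDDR5  -> 256.0 GB/s
gtx_1080    = bandwidth_gbps(256, 10.0)  # GDDR5X -> 320.0 GB/s

print(gtx_1070_ti, gtx_1080, gtx_1070_ti / gtx_1080)  # 256.0 320.0 0.8
```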
Nvidia is getting every last bit of revenue it can out of those extra Pascal shaders on the GP104 dies that came off the diffusion lines at TSMC without enough working shaders to become at least a GTX 1080. So Nvidia spins up a GTX 1070 Ti from the binned dies that at least come close to the GTX 1080’s number of shaders, enough to make the GTX 1070 Ti’s shader count. Now Nvidia can sell more GP104-based dies within the RX Vega 56 performance range without having to cut the GTX 1080’s price at all. And TSMC’s wafer production for the best-binned GTX 1080 grade dies can probably yield a little better overclocks for Nvidia’s AIB partners, so it’s just extra gravy for Nvidia’s Thanksgiving before the Volta microarchitecture GV104 dies are needed in 2018.
Nvidia probably held back these dies for months before Vega 64 and 56 arrived, and instead of binning them farther down to a GTX 1070, Nvidia let these stocks of binned dies that just barely missed having the GTX 1080’s complement of working shaders build up. And Nvidia’s binning process management software was loaded with every one of those dies’ working shader counts, giving Nvidia a complete statistical range of shader counts with which to decide on the exact number of shaders needed for the GTX 1070 Ti to perform better than the Vega 56 on average. This GP104/1070 Ti binning process targeted shader counts that were just enough for those DX11 titles that are still more abundant than the DX12/Vulkan titles where Vega 56 may compete with even the GTX 1080 under the new graphics APIs.
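The binning logic described above can be sketched as a simple classification by working shader count. The CUDA core counts below are the published GP104 SKU specs; the sorting process itself is the commenter’s speculation, so take this as illustrative only:

```python
# Illustrative sketch: a GP104 die is sold as the highest SKU whose full
# shader complement it can enable. CUDA core counts are the published specs;
# the binning process itself is speculation.
GP104_BINS = [
    ("GTX 1080",    2560),
    ("GTX 1070 Ti", 2432),
    ("GTX 1070",    1920),
]

def bin_die(working_shaders):
    """Return the best SKU a die with this many working shaders can become."""
    for sku, required in GP104_BINS:
        if working_shaders >= required:
            return sku
    return "scrap / salvage"

print(bin_die(2560))  # GTX 1080
print(bin_die(2500))  # GTX 1070 Ti -- barely missed the GTX 1080 bin
print(bin_die(2000))  # GTX 1070
```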
Nvidia can still sell the new Pascal/GP104 bins (1070 Ti) using mostly DX11 benchmarks and not cannibalise GTX 1080 AIB SKU sales at all until Volta is needed to compete with any RX Vega refreshes that arrive in 2018. And the newest GTX 1080 AIB GP104 bins are probably now overclocking higher on those AIB SKUs, still making Nvidia more money. Nvidia has a larger pool of base die variants than AMD’s one Vega 10-based die variant, so Nvidia has always had a few more hold cards to put out better-performing variants (GTX 1070 Ti) if needed to compete with AMD.
Now to wait for the DX12 and Vulkan benchmarks to be more thoroughly done between Vega 56 and the GTX 1070 Ti, while further reducing the trust of any website that spends too much time on DX11 benchmarks and forgets to thoroughly test all of the DX12/Vulkan titles available with Vega 56 and the GTX 1070 Ti. There are still no AIB custom Vega 56 or Vega 64 SKUs to test yet, so that’s another round of benchmarking to look forward to before some refreshes arrive in 2018. It will probably not be until any Vega refreshes arrive at 12nm that Nvidia will feel the need to drop Volta designs onto the market. Nvidia has had so many more Pascal/GP104 die bins in the hold with which to compete with Vega.
At this point AMD needs to be working on some discrete mobile Vega GPU variants and getting its Raven Ridge desktop SKUs to market, as the integrated graphics market is dominated by Intel, with Nvidia having no x86 license with which to compete in the x86 ISA-based SOC/APU market. So AMD’s best Vega graphics chance this year and into next may be with integrated Vega graphics on lower-cost laptops and desktop APUs beginning in 2018.
I’m wanting a gaming laptop with a desktop Raven Ridge inside, and if laptop OEMs can get a Ryzen 7 1700 into a laptop form factor then they will easily be able to get a desktop Raven Ridge APU inside a laptop, and that’s something Intel will really have a hard time competing with on the integrated graphics front, even compared to the Ryzen 7 2700U and Ryzen 5 2500U that are mostly for thin-and-light laptops. Hopefully AMD’s desktop APUs will also show up in mini desktop designs in the Mac Mini form factor range, and maybe that will be a better option than any laptop if the mini desktop SKUs come in barebones AM4-socketed versions for the home system builder market.
(I know APUs are off topic, but be warned that most laptops will have woefully insufficient DDR4 memory bandwidth, such as a single stick of 2400 MHz memory.)
As for DX12/Vulkan, it’s important to understand there aren’t many of those titles. Even the ones with DX12 support have it tacked on, though DOOM with Vulkan might be the closest to showing what we can expect.
VEGA has a lot of benefits that should be realized in the long run, though, so a VEGA 56 with an Asus Strix cooler is arguably the best long-term card to buy (if the price is right).
While AMD cards – especially Vega – put on a good show in idTech6 games – both DOOM and the latest Wolfenstein – this is not thanks to Vulkan. It is due to some rather AMD-specific extensions being properly implemented: shader intrinsics, FP16 for Wolfenstein, and a couple more.
DX12/Vulkan themselves have so far been very underwhelming. There are a couple of real games where either of them shows an (often conditional or arguable) benefit – Rise of the Tomb Raider, DOOM/Wolfenstein, Hitman. The only game unequivocally better with DX12 is Sniper Elite 4. That’s it.
For AMD, DX12 often shows a “performance increase”, but in almost all cases this is simply because AMD’s DX11 performance is still incredibly awful.
Do you have any good examples of why DX12 support is “tacked on”? I have seen this argument around for a long while but no real explanation of why this would be the case.
DX12 has been out for over 2 years now (and known to developers for 3). The adoption is not as widespread as some expected, and it does not really seem to be changing either. Ever since its introduction Microsoft has been saying that the DX11 path will remain there for developers who do not want to tackle the more rewarding but also more complex lower-level API.
Both AMD and Nvidia have registered extensions to the Vulkan API for their hardware-specific functionality, and any of that functionality, if adopted by the other GPU makers, can then become part of the Vulkan standard. So AMD getting better performance by having something in its GPU hardware that Nvidia lacks is exactly what needs to happen – ditto for Nvidia – to allow competition to occur. Whatever GPU hardware feature AMD is currently using, Nvidia is free to add similar functionality to its GPUs, and this is what drives the GPU makers towards improving their hardware or being left behind.
Developers not taking a chance on any new hardware features, so that gaming performance just stagnates between the two competing GPU producers, are not doing gaming any good! That just removes the motivation for the GPU makers to improve or be left behind. And that improve-or-be-left-behind motivation is what drives innovation to begin with.
Vulkan is an extensible graphics API: the GPU makers can create and register their hardware-specific extensions to Vulkan, and there is even a path for vendor-specific Vulkan extension functionality to be adopted into the larger Vulkan standard, no longer registered as a vendor extension but considered part of the standard.
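That promotion path is visible in Vulkan’s own naming convention: vendor extensions carry an author prefix (VK_AMD_*, VK_NV_*), multi-vendor extensions use VK_EXT_*, and Khronos-ratified ones use VK_KHR_* before eventually being folded into the core spec. A toy classifier over real registered extension names (the classifier itself is just an illustration, not part of any Vulkan tooling):

```python
# Classify Vulkan extension names by their registered author prefix.
# The extension names below are real; the tiering function is illustrative.
def extension_tier(name):
    prefix = name.split("_")[1]  # names follow VK_<author>_<feature>
    if prefix == "KHR":
        return "ratified cross-vendor (Khronos)"
    if prefix == "EXT":
        return "multi-vendor"
    return f"vendor-specific ({prefix})"

for ext in ["VK_AMD_shader_ballot",
            "VK_NV_mesh_shader",
            "VK_EXT_descriptor_indexing",
            "VK_KHR_swapchain"]:
    print(ext, "->", extension_tier(ext))
```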
Folks are not adopting DX12 because DX12 is tied to Windows 10, and Windows 7 still has a larger user base. I’d expect that by 2020 DX12 is not going to have a chance compared to the cross-platform-friendly Vulkan graphics API, and Vulkan already has a larger install base in the mobile market than DX12 will ever have. Look for more desktop games to support Vulkan and be able to run under any Linux distro while also being usable under Windows 7/8.1! So DX12 looks like it will never see as wide an adoption rate as the Vulkan API, which will be used by Nintendo’s Switch, Sony’s PS4, and the entire Android/Linux-kernel-based phone/tablet ecosystem, desktop Linux gaming included.
And DX12 games may still be able to run under the Vulkan API, as there are developers creating DX-to-Vulkan translation layers, so DX12 (in development) and DX11 games can currently be run under WINE/Vulkan via Wine Staging.
Actually, AMD cards are selling at retail now. Maybe the story could get updated?
https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&Description=AMD+RX+VEGA&N=-1&isNodeId=1
I don’t like silicon. It’s coarse and rough, and irritating, and it gets everywhere. Not like silicone; silicone is soft, and smooth.