According to an anonymous source speaking to WCCFTech, AMD is preparing a 20nm-based graphics architecture that is expected to release in April or May. Originally, the site predicted that these graphics devices, which they call the R9 300 series, would be available in February or March. The reason for this “delay” is said to be massive demand for 20nm production.
The source also claims that NVIDIA will skip 20nm entirely and instead opt for 16nm when it becomes available (said to be mid or late 2016). The expectation is that NVIDIA will answer AMD's new graphics devices with a higher-end Maxwell part that is still at 28nm. Earlier rumors, based on a leaked SiSoftware entry, claim 3072 CUDA cores clocked between 1.1 GHz and 1.39 GHz. If true, this would give it between 6.75 and 8.54 TeraFLOPs of performance, the higher of which is right around the advertised performance of a GeForce Titan Z (but in a single compute device that does not require the distribution of work that SLI was created to automate).
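For reference, those TeraFLOP figures follow from the standard peak-throughput formula for GPUs (cores × 2 ops per cycle for a fused multiply-add × clock speed). This little sketch just reproduces the arithmetic from the rumored specs:

```python
# Peak single-precision throughput estimate for the rumored 3072-core Maxwell.
# Core count and clock range come from the leaked SiSoftware entry cited above.

def sp_teraflops(cuda_cores: int, clock_ghz: float) -> float:
    """Peak SP TFLOPs: cores * 2 ops/cycle (FMA) * clock (GHz) / 1000."""
    return cuda_cores * 2 * clock_ghz / 1000

low = sp_teraflops(3072, 1.10)   # ~6.76 TFLOPs at the low rumored clock
high = sp_teraflops(3072, 1.39)  # ~8.54 TFLOPs at the high rumored clock
print(f"{low:.2f} to {high:.2f} TFLOPs")
```

This is theoretical peak, of course; sustained throughput in real workloads is always lower.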
Will this strategy work in NVIDIA's favor? I don't know. 28nm is a fairly stable process at this point, which will probably allow them to produce chips that can be bigger and more aggressively clocked. On the other hand, they pretty much need to rely upon bigger, more aggressively clocked chips to be competitive with AMD's slightly more modern architecture. Previous rumors also hint that AMD is looking at water-cooling for their reference card, which might place yet another handicap against NVIDIA, although cooling is not an area that NVIDIA struggles in.
I expect a few ppl to get their panties in a bunch over me saying this, but is this another timetable AMD will miss yet again?
Depends. They could just be playing it safe by opting for 20nm because 16nm might have some unexpected delays, or maybe GloFo and Samsung are giving AMD a good deal on the 20nm wafers. Too hard to tell right now. All this being said, if the AMD GPUs come to market in mid 2015 then they will be ahead of Nvidia anyway. As a blanket statement, the whole industry is going to start missing process targets, including Intel, who has a really strong wafer supply.
Knowing 16nm Nvidia GPUs are coming out in the months after AMD's 20nm part could keep ppl from buying as well, since there could be price drops that come with it, and people will want to see what kind of performance Nvidia brings to the table.
Your avatar should say “Are you Nvidiot?”
As soon as I read the first post, I got that WCCFTech vibe, but it’s good to see a process node shrink. The difference between 20nm and 16nm is not that great, so maybe Ronald will show up and the three-ring circus will begin.
With all the fabs heading towards 14nm, at least there will be more parity among process nodes, and then everybody will have to get more out of reengineering their CPU/GPU/SOC microarchitectures rather than simply relying on process node shrinks to provide better power usage and performance. Maybe the SOC/GPU makers will focus more on drivers, and on getting better graphics capabilities into their SOC/APU products, as well as the desktop variants.

I’m looking forward to a tablet with the PowerVR Wizard and its ray tracing ability built into the GPU; hopefully the discrete GPUs will get this ability also, now that the process node shrinks will allow for more functionality on the GPU’s die. AMD needs to get a custom ARMv8 ISA based competitor to Nvidia’s Tegra K1 Denver and force more innovation in the tablet SOC market. Nvidia has the lead, outside of the closed hardware/software ecosystem of Apple’s tablet SOC, so the OEM market needs a competing product to keep Nvidia on its toes.
I’ll take whatever shrink they can give, as long as I can look forward to better laptop/mobile discrete GPUs, with good driver support for OpenGL, Mantle, DX, whatever, on both Windows and Linux OSs, and that includes Win. 7. Having Mantle available for 7 is great news, but Mantle support for Linux needs to get here ASAP.
Retail NVIDIA 16nm cards will probably be out 10-12 months after AMD's 20nm cards hit shelves. Only people who currently have OC 780 Ti cards will be waiting that long. Everyone else either already upgraded to a 970/980 or snagged some really cheap 290/290X cards.
Yeah, I keep hoping they step up with a happy surprise, but FreeSync seems to be well behind schedule and this seems to be the same. Now Nvidia can sit on the real Maxwell enthusiast flagship GPU, the 210 I gather, for a good while longer.
Hard to root against Nvidia’s only competition, but AMD is doing everything it can to alienate us.
I am not waiting on AMD; I am buying a pair of GTX 970s for SLI. I would buy a pair of R9 290Xs, but the power requirements are too high. I have a new 850 watt PSU from EVGA, and I am not changing the PSU for a CrossFire setup. AMD missed the boat for my money this upgrade cycle. I am not waiting; I need something now.
That was more the chip fabs that caused the miss, not AMD.
Apple grabbed all the bleeding-edge fab capacity from the fabs, and the fab partners are more responsible for getting their processes working with their clients’ designs. Apple had the big bucks to throw at the fabs, and offered them some financial assistance in return for first dibs on the capacity. Those tall stacks of Benjamins scream.
It’s a bit more complicated than that. Apple uses a smaller chip; ARM CPUs are low power and small, so that part got worked out. The process at the larger sizes needed for GPUs was delayed a bit.
HUH? An 850 watt PSU can easily drive two 290Xs!
Edit:
The standard version of a 290X has one 6-pin and one 8-pin PCI-E connector. The math goes: (75×2 from the mobo PCI-E slots) + (150×2 from the 8-pin PCI-E) + (75×2 from the 6-pin PCI-E) = 600 watts. That leaves you 250 watts for the CPU, SSDs, HDDs and fans, so unless you run an AMD FX-9590 or an obscene amount of drives and fans, you will be just fine.
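As a sanity check on the math above, here is the same power budget sketched out, assuming the reference-spec connector limits (75 W from the slot, 150 W per 8-pin, 75 W per 6-pin):

```python
# Worst-case connector-limit power budget for two reference R9 290X cards
# on an 850 W PSU, using the figures from the comment above.

PSU_WATTS = 850
SLOT, EIGHT_PIN, SIX_PIN = 75, 150, 75  # per-connector maximums in watts

per_card = SLOT + EIGHT_PIN + SIX_PIN   # 300 W maximum draw per card
gpu_total = 2 * per_card                # 600 W for a CrossFire pair
headroom = PSU_WATTS - gpu_total        # 250 W left for CPU, drives, fans
print(per_card, gpu_total, headroom)
```

Note these are connector limits, not typical draw; real cards usually pull less than the connectors allow.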
AMD only used water in the past because their only option was to raise clocks. It also added to the cost of the card. If AMD uses water with 16nm flagships I’ll be shocked.
^20nm
Why would you be surprised? Intel chips have shown us that a move to a 3D chip design makes the temperatures spike more aggressively.
I don’t see the point in any of this…
Until there are 21:9 OLED f/a/gsync monitors out there, I don’t see the point in ANY video cards.
I am not expecting GPUs at 20nm. AMD could make a few low-end GPUs for the mobile or desktop market at 20nm, but I don’t expect something major like a new midrange or high-end part on that process.
The only problem with reporting from WCCFTech is that they have such a history of making stuff up that when you repost their rumours and talk about them, you give them more credence than they deserve. It could be true, but WCCFTech has a terrible reputation because they make stories like this up for the clicks.
Honestly it’s best to avoid them as a source; they are even less accurate than Fudzilla and SemiAccurate.
I disagree with Paul K.
Accurate or not, I agree with him about WCCFTech’s clickbait tactics. To them it is better to have something to put out rather than nothing at all. So what they do is pick up every rumor they can get on the net, make an article out of it, and then add their own speculation. Then all of a sudden we hear a new rumor that was never discussed on any other forum before. If it’s about GPUs, I believe VCZ more.
I agree as well. When I saw this linked to WCCFTech, the source was anonymous. Who would have such access to both sides and their future plans?
AMD doing what they always do: riding the coattails of NVidia and falling off constantly.
AMD FTF
AMD got no boost from Nvidia, and does just fine considering their limited financial resources. Really, the coattail riding was mostly from that x86 license that IBM forced Intel to give to AMD and others at the beginning of the PC market, and it’s a good thing IBM did this, otherwise there would be no PC market outside of Intel’s control. If only IBM had forced the cross-licensing of the OS it used on its PCs, or bought the CP/M OS outright, and forced M$ into more competition, things would be much better OS-wise.

They all rode the coattails of IBM at one time. Hopefully the Power8s that are up for ARM-style licensing will find their way into some gaming PCs once the third-party licensees get ahold of a Power8 license, and some more IBM coattail riding can happen on the licensed-IP side, giving Intel some serious competition for the CPU market. It would be great if those Nintendo rumors pan out and AMD gets to integrate its graphics with a Power8-based Nintendo gaming console. How long would it take others, maybe even AMD and Nvidia, to begin offering their own APUs/SOCs with a licensed Power8 design? They’re doing it with ARM, so a licensed Power8-based product from AMD, Nvidia, and others is not out of the question.
Guessing Scott missed the WCCFTech SiSoft leak that had AMD's Fiji scoring way higher than the leaked full-blown Maxwell.
Nvidia's big Maxwell leak had a high score of 55.2GB
AMD's Fiji leak had a high score of 63.6GB
http://wccftech.com/amd-fiji-r9-390x-specs-leak/
Looking at the dates of the SiSoft leaks, AMD's higher score came a week earlier.
Well, even so, it is still speculation. Is there a way to really know that the info is not fake (even the article from WCCFTech itself bears a rumor tag)? And PCPer, for one, is not a rumor/speculation site. They post rumors from time to time, but it's not their bread and butter like it is for WCCFTech. You can’t expect PCPer to post every rumor and speculation out there.
Scott is basing his Maxwell prediction of between 6.75 and 8.54 TeraFLOPs of performance on rumors. If you believe that rumor, you kind of have to believe that AMD has a better-performing product based on the scores.
If you also take the Chiphell leaks, they all point to AMD having a faster product than Nvidia.
Unless you want to be selective about which rumors you believe.
A rumor is a rumor; there is nothing selective about it. Selective in which rumor to believe? Since when did a rumor become a fact? I’ve seen some people take all of this speculation as straight fact.
ChipHell has been on the mark for most leaks, including the 780 and 980, so I would take their word over everyone else’s. The problem is that AMD is faster according to ChipHell, so nVidia people are moving the goalposts or in denial. We will find out soon, but I definitely believe one of the two companies will start getting far ahead now that they have started using different foundries.
Well, ChipHell has been very accurate in the past for the 680, 780 and 980. So why are they suddenly wrong now? Is it because they say AMD is going to wreck nVidia?
Oh come on, this has nothing to do with AMD vs Nvidia. Accurate or not, rumors are rumors. Can you prove something like Captain Jack really exists? Is there real fact to back it up right now? If there is, then give me the link: not from a leak article or leak source, but one that comes directly from AMD themselves. As long as it is not something official from AMD, it is not final.
Yeah, AMD has yet to make a chip that is both fast and power efficient. That Captain Jack leak shows both, which puts some doubt on it. With the R9 285 Tonga, AMD claimed a 60-watt drop in TDP, but really it was only a 10-watt drop. So I will not believe it until AMD puts the chip out and proves it legit, and not fudged numbers or someone else’s attempt to BS people.
Like Nvidia is doing by publishing average power usage numbers.
Got it. No bias here.
Actually I didn't see that.
As for reporting on rumours, it's complicated. We don't want to be spouting crap, but they can still be interesting for our readers even if they don't always work out. It's not even a matter of being first — personally, I'm a few days behind most stories anyway (unless we're given advanced notice or we luck out).
Any word on when AMD is going to release the Mantle public SDK? They promised to release it this year, right? We still have a day or so before 2014 ends.
Well, I hope AMD doesn’t release the Mantle SDK for free, because they spent a lot of money and time on this tech. AMD is a company, not a charity, OK!
AMD already claimed they would, and if they pull out and don’t do it, well, that kind of puts a fork in it as dead. If AMD tries to charge for what they claimed they would release for free, fewer companies will have a monetary incentive to put it in their games; they just won’t.
The tech ONLY works on AMD GPUs, so if they charge for it, any chance of others adopting it ends. Before you say PhysX works only on Nvidia GPUs: that’s not completely true, as it can run on the CPU, though much slower.
If you say so, then you must also agree with Nvidia’s proprietary approach.
Um, last I checked, right out of the gate most ALL of Nvidia’s stuff does work on AMD cards, or will work in a system with an AMD GPU, unlike Mantle, where it’s locked to AMD GPUs only. AMD said they would make it open source over a year ago, but we have yet to see that happen.
The other thing with AMD: they let PR talk about everything they are working on, even when they don’t have a working product yet. Nvidia, from what I’ve seen, stays quiet about what they are working on until they have a working one ready to show off. AMD should take that to heart, instead of ranting about what they are going to do and in the end not living up to everything, or even having it ready when they claim they will.
If you say so, then you must also agree with Nvidia’s proprietary approach.
So yeah, I've been reading about this…
Couple of things. 20 nm planar is bad for large GPUs; voltage/frequency scaling is not appropriate for such a design. I had hoped that someone would have gone for 20 nm FDSOI planar during this time, as it has slightly superior characteristics to Intel's 22 nm TriGate (aka FinFET). That would have been a logical jump for the GPU guys, as they would have had really nice density scaling and some major improvements in frequency and power. Alas, nobody went that direction for this particular node, apart from some small runs at ST Micro and some rumors that GLOBALFOUNDRIES was looking in that direction for RF applications.
AMD might go 20 nm planar for a smaller, low power design. Not for a big GPU though. Next gen Kabini variants are more appropriate for such a process. Apple's latest SOCs are all low power and smaller designs than what typically goes for a GPU or a desktop processor today.
I think there is some credence to AMD looking at GF for 28 nm SHP production of GPUs, but you have to wonder how much extra work they need in the design stage to make that effective. TSMC and GF certainly do not share common standard cells between their 28 nm processes, so redesigns for each production line are needed. They could potentially get a 10% improvement in power performance over TSMC's process, plus a few percent improvement in density. It certainly will not be the jump that they were hoping for, but given the power consumption of the latest GCN designs, every extra bit will count when competing with the latest Maxwell GPUs from NVIDIA.