Subject: Graphics Cards | March 14, 2019 - 01:33 PM | Jeremy Hellstrom
Tagged: video card, turing, rtx, nvidia, gtx 1660 ti, gtx 1660, gtx 1060, graphics card, geforce, GDDR5, gaming, 6Gb
Sebastian has given you a look at the triple slot EVGA GTX 1660 XC Black as well as the dual fan, dual slot MSI GTX 1660 GAMING X, both doing well in benchmarks, especially when overclocked. The new GTX 1660 does come in other shapes and sizes, like the dual slot, single fan GTX 1660 StormX OC 6G from Palit, which The Guru of 3D reviewed. Do not underestimate it because of its diminutive size: the Boost Clock is 1830 MHz out of the box, and with some tweaking it will sit around 2070 MHz with the GDDR5 pushed up to 9800 MHz.
Check out even more models below.
"We review a GeForce GTX 1660 that is priced spot on that 219 USD marker, the MSRP of the new non-Ti model, meet the petite Palit GeForce GTX 1660 StormX OC edition. Based on a big single fan and a small form factor you should not be fooled by its looks. It performs well on all fronts, including cooling acoustic levels."
Here are some more Graphics Card articles from around the web:
- MSI GeForce GTX 1660 VENTUS XS 6G OC @ Guru of 3D
- Palit GeForce GTX 1660 StormX 6 GB @ TechPowerUp
- MSI GeForce GTX 1660 Ti GAMING X 6G @ Guru of 3D
- Zotac GeForce GTX 1660 Twin Fan 6 GB @ TechPowerUp
- EVGA GTX 1660 XC takes on the Red Devil RX 590 in 41 @ BabelTechReviews
- EVGA GTX 1660 XC Ultra 6 GB @ TechPowerUp
- MSI GTX 1660 Gaming X 6G @ Kitguru
- ZOTAC GAMING GeForce RTX 2070 MINI @ Modders-Inc
Turing at $219
NVIDIA has introduced another midrange GPU with today’s launch of the GTX 1660. It joins the GTX 1660 Ti as the company’s answer to high frame rate 1080p gaming, and hits a more aggressive $219 price point, with the GTX 1660 Ti starting at $279. What has changed, and how close is this 1660 to the “Ti” version launched just last month? We find out here.
RTX and Back Again
We are witnessing a shift in branding from NVIDIA, as GTX was supplanted by RTX with the introduction of the 20 series, only to see “RTX” give way to GTX as we moved down the product stack beginning with the GTX 1660 Ti. This has been a potentially confusing change for consumers used to the annual uptick in series number. Most recently we saw the 900 series move logically to 1000 series (aka 10 series) cards, so when the first 2000 series cards were released it seemed as if the 20 series would be a direct successor to the GTX cards of the previous generation.
But RTX ended up being more of a feature-level designation, and not so much a new branding for GeForce cards as we had anticipated. No, GTX is here to stay it appears, and what then of the RTX cards and their real-time ray tracing capabilities? Here the conversation changes to focus on higher price tags and the viability of early adoption of ray tracing tech, and enter the internet's outspoken individuals who decry ray tracing and, even more so, DLSS: NVIDIA's proprietary deep learning secret sauce that has seemingly become as controversial as the Genesis planet in Star Trek III.
| | GTX 1660 | GTX 1660 Ti | RTX 2060 | RTX 2070 | GTX 1080 | GTX 1070 | GTX 1060 6GB |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Base Clock | 1530 MHz | 1500 MHz | 1365 MHz | 1410 MHz | 1607 MHz | 1506 MHz | 1506 MHz |
| Boost Clock | 1785 MHz | 1770 MHz | 1680 MHz | 1620 MHz | 1733 MHz | 1683 MHz | 1708 MHz |
| Memory | 6GB GDDR5 | 6GB GDDR6 | 6GB GDDR6 | 8GB GDDR6 | 8GB GDDR5X | 8GB GDDR5 | 6GB GDDR5 |
| Memory Data Rate | 8 Gbps | 12 Gbps | 14 Gbps | 14 Gbps | 10 Gbps | 8 Gbps | 8 Gbps |
| Memory Bandwidth | 192.1 GB/s | 288.1 GB/s | 336.1 GB/s | 448.0 GB/s | 320.3 GB/s | 256.3 GB/s | 192.2 GB/s |
| Die Size | 284 mm² | 284 mm² | 445 mm² | 445 mm² | 314 mm² | 314 mm² | 200 mm² |
| Process Tech | 12 nm | 12 nm | 12 nm | 12 nm | 16 nm | 16 nm | 16 nm |
So what is a GTX 1660 minus the “Ti”? A hybrid product of sorts, it turns out. The card is based on the same TU116 GPU as the GTX 1660 Ti, and while the Ti features the full version of TU116, this non-Ti version has two of the SMs disabled, bringing the count from 24 to 22. This results in a total of 1408 CUDA cores - down from 1536 with the GTX 1660 Ti. This 128-core drop is not as large as I was expecting from the vanilla 1660, and with the same memory specs the capabilities of this card would not fall far behind - but this card uses the older GDDR5 standard, matching the 8 Gbps speed and 192 GB/s bandwidth of the outgoing GTX 1060, and not the 12 Gbps GDDR6 and 288.1 GB/s bandwidth of the GTX 1660 Ti.
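Since both cards share a 192-bit bus, the bandwidth gap comes down entirely to per-pin data rate. A quick back-of-envelope sketch of the arithmetic (peak theoretical figures only, ignoring real-world overheads):

```python
def peak_bandwidth_gbs(data_rate_gbps, bus_width_bits):
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width in bytes."""
    return data_rate_gbps * bus_width_bits / 8

# GTX 1660: 8 Gbps GDDR5 on a 192-bit bus
print(peak_bandwidth_gbs(8, 192))   # 192.0 GB/s
# GTX 1660 Ti: 12 Gbps GDDR6 on the same 192-bit bus
print(peak_bandwidth_gbs(12, 192))  # 288.0 GB/s
```

The 50% bandwidth deficit, not the 8% CUDA core cut, is where the non-Ti card gives up most of its ground.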
Subject: Graphics Cards | May 23, 2018 - 06:21 PM | Tim Verry
Tagged: pascal, nvidia, GP107, GDDR5, budget
NVIDIA recently and quietly launched a new budget graphics card that neatly slots itself between the GTX 1050 and the GTX 1050 Ti. The new GTX 1050 3GB, as the name suggests, features 3GB of GDDR5 memory. The card is closer to the GTX 1050 Ti than the name would suggest, however, as it uses the same 768 CUDA cores rather than the 640 of the GTX 1050 2GB. The GDDR5 memory is where the card differs from the GTX 1050 Ti, though: NVIDIA has cut the number of memory controllers by one, along with the corresponding ROPs and cache, meaning the new GTX 1050 3GB has a smaller memory bus and less memory bandwidth than both the GTX 1050 2GB and the GTX 1050 Ti 4GB.
Specifically, the GTX 1050 with 3GB GDDR5 has a 96-bit memory bus that when paired with 7 Gbps GDDR5 results in maximum memory bandwidth of 84 GB/s versus the other previously released cards' 128-bit memory buses and 112 GB/s of bandwidth.
Clock speeds on the new GTX 1050 3GB are a good bit higher than the other cards', though, with the base clock starting at 1392 MHz (the boost clock of the 1050 Ti) and boosting up to 1518 MHz. Thanks to the clock speed bumps, the theoretical GPU performance of 2.33 TFLOPS is actually higher than that of the GTX 1050 Ti (2.14 TFLOPS) and the existing GTX 1050 2GB (1.86 TFLOPS), though the reduced memory bus (and the loss of a small amount of ROPs and cache) will hold the card back from surpassing the Ti variant in most workloads – NVIDIA needs to maintain product segmentation somehow!
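Those theoretical TFLOPS figures follow from cores × 2 FLOPs per clock (one fused multiply-add) × boost clock. A quick sketch of the arithmetic using the numbers above:

```python
def peak_tflops(cuda_cores, boost_clock_mhz):
    """Single-precision peak: cores x 2 FLOPs per clock (FMA) x boost clock."""
    return cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12

print(round(peak_tflops(768, 1518), 2))  # 2.33 TFLOPS - GTX 1050 3GB
print(round(peak_tflops(768, 1392), 2))  # 2.14 TFLOPS - GTX 1050 Ti
```

With identical core counts, the 3GB card's entire compute edge over the Ti is the ~9% clock bump; the narrower memory bus is what claws that advantage back in practice.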
| | NVIDIA GTX 1050 2GB | NVIDIA GTX 1050 3GB | NVIDIA GTX 1050 Ti 4GB | AMD RX 560 4GB |
| --- | --- | --- | --- | --- |
| GPU Cores | 640 | 768 | 768 | 896 or 1024 |
| TFLOPS | 1.86 | 2.33 | 2.14 | up to 2.6 |
| Memory | 2GB GDDR5 | 3GB GDDR5 | 4GB GDDR5 | 2GB or 4GB GDDR5 |
| Memory Clockspeed | 7 Gbps | 7 Gbps | 7 Gbps | 7 Gbps |
| Memory Bandwidth | 112 GB/s | 84 GB/s | 112 GB/s | 112 GB/s |
| TDP | 75W | 75W | 75W | 60W to 80W |
The chart above compares the specifications of the GTX 1050 3GB with the GTX 1050 and GTX 1050 Ti on the NVIDIA side and with the AMD RX 560, which appears to be its direct competitor based on pricing. The new 3GB GTX 1050 should compete well with AMD's Polaris 11 based GPU as well as NVIDIA's own cards in the budget gaming space. Hopefully the downside of a reduced memory bus will at least dissuade cryptocurrency miners from adopting this card as an entry level miner for Ethereum and other alt coins, giving gamers a chance to buy something a bit better than the GTX 1050 and RX 550 level at close to MSRP while the miners fight over the Ti and higher variants with more memory and compute units.
NVIDIA did not release formal pricing or release date information, but the cards are expected to launch in June and prices should be around $160 to $180 depending on retailer and extra things like fancier coolers and factory overclocks.
What are your thoughts on the GTX 1050 3GB? Is it the bastion of hope budget gamers have been waiting for? hehe. Looking around online, it seems pricing for these budget cards has somewhat returned to sane levels, and hopefully alternative options like these aimed at gamers will help further stabilize the market for those of us DIYers who want to game more than mine. I do wish that NVIDIA had changed the name a bit to better differentiate the card – maybe the GTX 1050G or something – but oh well. I suppose as long as the 640 CUDA core GTX 1050 never gets 3GB of GDDR5, gamers will at least be able to tell the cards apart by the amount of memory listed on the box or website.
Subject: Graphics Cards | February 3, 2018 - 05:00 PM | Tim Verry
Tagged: RX 580, msi, GDDR5, factory overclocked, amd, 8gb
MSI is updating its Radeon RX 580 Armor series with a new MK2 variant (in both standard and OC editions) that features an updated cooler with red and black color scheme and a metal backplate along with Torx 2.0 fans.
The graphics card is powered by a single 8-pin PCI-E power connection and has two DisplayPort, two HDMI, and one DVI display outputs. MSI claims the MK2 cards use its Military Class 4 hardware, including high end solid capacitors. The large heatsink features three copper heatpipes and a large aluminum fin stack. It appears that the cards use the same PCB as the original Armor series, but it is not clear from MSI's site if they have done anything different with the power delivery.
The RX 580 Polaris GPU runs at a slight factory overclock out of the box, with a boost clock of up to 1353 MHz (reference is 1340 MHz) for the standard edition and up to 1366 MHz for the RX 580 Armor MK2 OC Edition. The OC edition can further clock up to 1380 MHz when run in OC mode using the company's software utility (enthusiasts can attempt to go beyond that, but MSI makes no guarantees). Both cards come with 8GB of GDDR5 memory clocked at the reference 8 GHz.
MSI did not release pricing or availability, but expect these cards to be difficult to find and priced well above MSRP when they are in stock. If you have a physical Microcenter near you, it might be worth watching for one of these cards there for a chance of getting one closer to MSRP.
Subject: Editorial, Graphics Cards | May 18, 2016 - 01:18 PM | Tim Verry
Tagged: rumor, Polaris, opinion, HDMI 2.0, gpu, gddr5x, GDDR5, GCN, amd, 4k
While NVIDIA's Pascal has held the spotlight in the news recently, it is not the only new GPU architecture debuting this year. AMD will soon be bringing its Polaris-based graphics cards to market for notebooks and mainstream desktop users. While several different code names have been thrown around for these new chips, they are generally referred to as Polaris 10 and Polaris 11. AMD's Raja Koduri stated in an interview with PC Perspective that the numbers used in the naming scheme hold no special significance, but eventually Polaris will be used across the entire performance lineup (low end to high end graphics).
Naturally, there are going to be many rumors and leaks as the launch gets closer. In fact, Tech Power Up recently came into a number of interesting details about AMD's plans for Polaris-based graphics in 2016 including specifications and which areas of the market each chip is going to be aimed at.
Citing the usual "industry sources" familiar with the matter (take that for what it's worth, but the specifications do not seem out of the realm of possibility), Tech Power Up revealed that there are two lines of Polaris-based GPUs that will be made available this year. Polaris 10 will allegedly occupy the mid-range (mainstream) graphics option in desktops as well as being the basis for high end gaming notebook graphics chips. On the other hand, Polaris 11 will reportedly be a smaller chip aimed at thin-and-light notebooks and mainstream laptops.
Now, for the juicy bits of the leak: the rumored specifications!
AMD's "Polaris 10" GPU will feature 32 compute units (CUs) which TPU estimates – based on the assumption that each CU still contains 64 shaders on Polaris – works out to 2,048 shaders. The GPU further features a 256-bit memory interface along with a memory controller supporting GDDR5 and GDDR5X (though not at the same time heh). This would leave room for cheaper Polaris 10 derived products with less than 32 CUs and/or cheaper GDDR5 memory. Graphics cards would have as much as 8GB of memory initially clocked at 7 Gbps. Reportedly, the full 32 CU GPU is rated at 5.5 TFLOPS of single precision compute power and runs at a TDP of no more than 150 watts.
Compared to the existing Hawaii-based R9 390X, the upcoming R9 400 series Polaris 10 GPU has fewer shaders and less memory bandwidth. The memory is clocked 1 GHz higher, but the 256-bit memory bus is half the width of the 390X's 512-bit GDDR5 bus, which results in 224 GB/s of memory bandwidth for Polaris 10 versus 384 GB/s for Hawaii. The R9 390X has a slight edge in compute performance at 5.9 TFLOPS versus Polaris 10's 5.5 TFLOPS; however, the Polaris 10 GPU uses much less power and easily wins at performance per watt. It almost reaches the same single precision compute performance at nearly half the power, which is impressive if it holds true!
| | R9 390X | R9 390 | R9 380 | R9 400-Series "Polaris 10" |
| --- | --- | --- | --- | --- |
| GPU Code name | Grenada (Hawaii) | Grenada (Hawaii) | Antigua (Tonga) | Polaris 10 |
| Rated Clock | 1050 MHz | 1000 MHz | 970 MHz | ~1343 MHz |
| Memory Clock | 6000 MHz | 6000 MHz | 5700 MHz | 7000 MHz |
| Memory Bandwidth | 384 GB/s | 384 GB/s | 182.4 GB/s | 224 GB/s |
| TDP | 275 watts | 275 watts | 190 watts | 150 watts (or less) |
| Peak Compute | 5.9 TFLOPS | 5.1 TFLOPS | 3.48 TFLOPS | 5.5 TFLOPS |
| MSRP (current) | ~$400 | ~$310 | ~$199 | $ unknown |
Note: Polaris GPU clocks estimated using the assumption of 5.5 TFLOPS peak compute and an accurate shader count. (Thanks Scott.)
Another comparison that can be made is to the Radeon R9 380 which is a Tonga-based GPU with similar TDP. In this matchup, the Polaris 10 based chip will – at a slightly lower TDP – pack in more shaders, twice the amount of faster clocked memory with 23% more bandwidth, and provide a 58% increase in single precision compute horsepower. Not too shabby!
Likely, a good portion of these gains comes from the move to a smaller process node and FinFET "tri-gate" style transistors on the Samsung/GlobalFoundries 14LPP manufacturing process, though AMD has also made some architecture tweaks and hardware additions to the GCN 4.0 based processors. A brief high level introduction was to be given today in a webinar for AMD's partners (though AMD has said preemptively that no technical nitty-gritty details will be divulged yet). (Update: Tech Altar summarized the partner webinar. Unfortunately, there were no major reveals other than that AMD will not limit AIB partners from pushing for the highest factory overclocks they can get.)
Moving on from Polaris 10 for a bit, Polaris 11 is rumored to be a smaller GCN 4.0 chip that will top out at 14 CUs (estimated 896 shaders/stream processors) and 2.5 TFLOPS of single precision compute power. These chips aimed at mainstream and thin-and-light laptops will have 50W TDPs and will be paired with up to 4GB of GDDR5 memory. There is apparently no GDDR5X option for these, which makes sense at this price point and performance level. The 128-bit bus is a bit limiting, but this is a low end mobile chip we are talking about here...
| | R7 370 | R7 400 Series "Polaris 11" |
| --- | --- | --- |
| GPU Code name | Trinidad (Pitcairn) | Polaris 11 |
| Rated Clock | 925 MHz base (975 MHz boost) | ? MHz |
| Memory | 2 or 4GB | 4GB |
| Memory Clock | 5600 MHz | ? MHz |
| Memory Bandwidth | 179.2 GB/s | ? GB/s |
| TDP | 110 watts | 50 watts |
| Peak Compute | 1.89 TFLOPS | 2.5 TFLOPS |
| MSRP (current) | ~$140 (less after rebates and sales) | $? |
Note: Polaris GPU clocks estimated using the assumption of 2.5 TFLOPS peak compute and an accurate shader count. (Thanks Scott.)
Fewer details were unveiled concerning Polaris 11, as you can see from the chart above. From what we know so far, it should be a promising successor to the R7 370 series even with the memory bus limitation and lower shader count, as the GPU should be clocked higher (M series mobile variants might also have more shaders than the 370 and lower mobile series) and carries a much lower TDP for at least equivalent, if not decently increased, performance. The lower power usage in particular will be hugely welcome in mobile devices, as it should result in longer battery life under the same workloads. I picked the R7 370 as the comparison because it has 4 gigabytes of memory, doesn't have that many more shaders, and, being a desktop chip, may be more familiar to readers. Polaris 11 also appears to sit between the R7 360 and R7 370 in terms of shader count and other features, but is allegedly going to be faster than both of them while using (at least on paper) less than half the power.
Of course these are still rumors until AMD makes Polaris officially, well, official with a product launch. The claimed specifications appear reasonable though, and based on that there are a few important takeaways and thoughts I have.
The first thing on my mind is that AMD is taking an interesting direction here. NVIDIA has chosen to start its new generation at the top, announcing "big Pascal" GP100 and actually launching the GP104-based GTX 1080 (one of its highest end consumer cards) yesterday, then introducing lower end products over the course of the year. AMD has opted for the opposite approach: it will start closer to the lower end with a mainstream notebook chip and a high end notebook/mainstream desktop GPU (Polaris 11 and 10, respectively), then flesh out its product stack over the following year (remember, Raja Koduri stated Polaris and GCN 4 would be used across the entire product stack), building up to bigger and higher end GPUs and finally topping off with its highest end consumer (and professional) GPUs based on "Vega" in 2017.
This means that for some time after both architectures launch, AMD's and NVIDIA's newest GPUs will not be directly competing with each other – I'm not sure if that was planned by either company or is just how it worked out from each following its own GPU philosophy (I suspect the latter). Eventually they should meet in the middle (maybe late this year?) with a mid-range desktop graphics card, and it will be interesting to see how they stack up at similar price points and hardware levels. Then, once "Vega" based GPUs hit (sadly, probably in time for NVIDIA's big Pascal to launch, heh – I'm not sure if Vega is only a Fury X replacement or reaches beyond that to compete with a 1080 Ti or even GP100), we should see GCN 4 on the new smaller process node square up against NVIDIA and its 16nm Pascal products across the entire lineup. Which will have the better performance, and which will win out in power usage, performance per watt, and performance per dollar? All questions I wish I knew the answers to, but sadly do not!
Speaking of price and performance/$... Polaris is actually looking pretty good so far at hitting much lower TDPs and power usage targets while delivering at least similar performance if not a good bit more. Both AMD and NVIDIA appear to be bringing out GPUs better than I expected to see as far as technological improvements in performance and power usage (these die shrinks have really helped even though from here on out that trend isn't really going to continue...). I hope that AMD can at least match NV in these areas at the mid range even if they do not have a high end GPU coming out soon (not until sometime after these cards launch and not really until Vega, the high end GCN GPU successor). At least on paper based on the leaked information the GPUs so far look good. My only worry is going to be pricing which I think is going to make or break these cards. AMD will need to price them competitively and aggressively to ensure their adoption and success.
I hope that doing the rollout this way (starting with lower end chips) helps AMD iron out the new smaller process node, and that good yields let them price aggressively here and eventually at the high end!
I am looking forward to more information on AMD's Polaris architecture and the graphics cards based on it!
- AMD Capsaicin GDC Live Stream and Live Blog TODAY!!
- AMD GPU Roadmap: Capsaicin Names Upcoming Architectures
- AMD's Raja Koduri talks moving past CrossFire, smaller GPU dies, HBM2 and more.
- AMD High-End Polaris Expected for 2016
- CES 2016: AMD Shows Polaris Architecture and HDMI FreeSync Displays
I will admit that I am not 100% up on all the rumors and I apologize for that. With that said, I would love to hear what your thoughts are on AMD's upcoming GPUs and what you think about these latest rumors!
Subject: Graphics Cards | April 22, 2016 - 10:16 AM | Sebastian Peak
Tagged: rumor, report, pascal, nvidia, leak, graphics card, gpu, gddr5x, GDDR5
According to a report from VideoCardz (via Overclock.net/Chip Hell), high quality images of the upcoming GP104 die have leaked; the chip is expected to power the GeForce GTX 1070 graphics card.
Image credit: VideoCardz.com
"This GP104-200 variant is supposedly planned for GeForce GTX 1070. Although it is a cut-down version of GP104-400, both GPUs will look exactly the same. The only difference being modified GPU configuration. The high quality picture is perfect material for comparison."
A couple of interesting things emerge from this die shot: the relatively small size of the GPU (die size estimated at 333 mm²), and the assumption that it will use conventional GDDR5 memory, based on a previously leaked photo of the die on a PCB.
Alleged photo of GP104 using GDDR5 memory (Image credit: VideoCardz via ChipHell)
"Leaker also says that GTX 1080 will feature GDDR5X memory, while GTX 1070 will stick to GDDR5 standard, both using 256-bit memory bus. Cards based on GP104 GPU are to be equipped with three DisplayPorts, HDMI and DVI."
While this is no doubt disappointing to those anticipating HBM with the upcoming Pascal consumer GPUs, the move isn't all that surprising considering the consistent rumors that GTX 1080 would use GDDR5X.
Is the lack of HBM (or HBM2) enough to make you skip this generation of GeForce GPU? This author points out that AMD's Fury X - the first GPU to use HBM - was still unable to beat a GTX 980 Ti in many tests, even though the 980 Ti uses conventional GDDR5. Memory is obviously important, but the core defines the performance of the GPU.
If NVIDIA has made improvements to performance and efficiency we should see impressive numbers, but this might be a more iterative update than originally expected - which only gives AMD more of a chance to win marketshare with their upcoming Radeon 400-series GPUs. It should be an interesting summer.
Subject: Graphics Cards, Memory | January 22, 2016 - 11:08 AM | Ryan Shrout
Tagged: Polaris, pascal, nvidia, jedec, gddr5x, GDDR5, amd
Though information about the technology has been making the rounds over the last several weeks, GDDR5X finally got official with an announcement from JEDEC this morning. The JEDEC Solid State Technology Association is, as Wikipedia tells us, an "independent semiconductor engineering trade organization and standardization body" responsible for creating memory standards. Getting the official nod from the org means we are likely to see implementations of GDDR5X in the near future.
The press release is short and sweet. Take a look.
ARLINGTON, Va., USA – JANUARY 21, 2016 –JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced the publication of JESD232 Graphics Double Data Rate (GDDR5X) SGRAM. Available for free download from the JEDEC website, the new memory standard is designed to satisfy the increasing need for more memory bandwidth in graphics, gaming, compute, and networking applications.
Derived from the widely adopted GDDR5 SGRAM JEDEC standard, GDDR5X specifies key elements related to the design and operability of memory chips for applications requiring very high memory bandwidth. With the intent to address the needs of high-performance applications demanding ever higher data rates, GDDR5X is targeting data rates of 10 to 14 Gb/s, a 2X increase over GDDR5. In order to allow a smooth transition from GDDR5, GDDR5X utilizes the same, proven pseudo open drain (POD) signaling as GDDR5.
“GDDR5X represents a significant leap forward for high end GPU design,” said Mian Quddus, JEDEC Board of Directors Chairman. “Its performance improvements over the prior standard will help enable the next generation of graphics and other high-performance applications.”
JEDEC claims that, while using the same signaling type as GDDR5, GDDR5X is able to double the per-pin data rate to 10-14 Gb/s. In fact, based on leaked slides about GDDR5X from October, JEDEC actually calls GDDR5X an extension to GDDR5, not a new standard. How does GDDR5X reach these new speeds? By doubling the prefetch from 32 bytes to 64 bytes. This will require a redesign of the memory controller for any processor that wants to integrate it.
Image source: VR-Zone.com
As for usable bandwidth, though no figures are quoted directly, the increase would likely be much smaller than the per-pin numbers in the press release suggest. Because the memory bus width remains unchanged and GDDR5X simply grabs twice the chunk size in prefetch, we should expect an incremental change. Power efficiency isn't mentioned either, and that was one of the driving factors in the development of HBM.
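To make the prefetch point concrete, here is a simplified model of where the per-pin numbers come from (the 1.0 GHz array clock is a hypothetical for illustration; real parts run at various clocks, which is how the spec spans 10-14 Gb/s):

```python
def per_pin_rate_gbps(array_clock_ghz, prefetch_n):
    """Simplified model: per-pin data rate = internal array clock x prefetch depth."""
    return array_clock_ghz * prefetch_n

# GDDR5 uses an 8n prefetch (32-byte access granularity on a 32-bit chip);
# GDDR5X doubles that to 16n (64 bytes). At the same array clock, doubling
# the prefetch doubles the per-pin data rate:
print(per_pin_rate_gbps(1.0, 8))   # 8.0 Gb/s  - GDDR5
print(per_pin_rate_gbps(1.0, 16))  # 16.0 Gb/s - GDDR5X
```

Since the bus width stays the same, total card bandwidth scales by the same factor as the per-pin rate and nothing more, which is why the real-world gain is incremental rather than transformative.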
Performance efficiency graph from AMD's HBM presentation
I am excited about any improvement in memory technology that will increase GPU performance, but I can tell you that from my conversations with both AMD and NVIDIA, no one appears to be jumping at the chance to integrate GDDR5X into upcoming graphics cards. That doesn't mean it won't happen with some version of Polaris or Pascal, but it seems that there may be concerns other than bandwidth that keep it from taking hold.
Subject: Graphics Cards | November 7, 2015 - 04:46 PM | Sebastian Peak
Tagged: tonga, rumor, report, Radeon R9 380X, r9 285, graphics card, gpu, GDDR5, amd
AMD will reportedly be launching their latest performance graphics card soon, and specs for this rumored R9 380X have now been reported at VR-Zone (via Hardware Battle).
(Image credit: VR-Zone)
Here are the full specifications from this report:
- GPU Codename: Antigua
- Process: 28 nm
- Stream Processors: 2048
- GPU Clock: Up to 1000 – 1100 MHz (exact number not known)
- Memory Size: 4096 MB
- Memory Type: GDDR5
- Memory Interface: 256-bit
- Memory Clock: 5500 – 6000 MHz (exact number not known)
- Display Output: DisplayPort 1.2, HDMI 1.4, Dual-Link DVI-D
The launch date is reportedly November 15, and the card will (again, reportedly) carry a $249 MSRP at launch.
The 380X would build on the existing R9 285
Compared to the R9 280X, which also offers 2048 stream processors, a boost clock up to 1000 MHz, and 6000 MHz GDDR5, the R9 380X would lose memory bandwidth due to the move from a 384-bit memory interface to 256-bit. The actual performance won't be exactly comparable, however, as the core (Antigua, previously Tonga) shares more in common with the R9 285 (Tonga), though the R9 285 only offered 1792 stream processors and 2 GB of GDDR5.
You can check out our review of the R9 285 here to see how it performed against the R9 280X, and it will certainly be interesting to see how this R9 380X will fare if these specifications are accurate.
Subject: Graphics Cards | June 7, 2015 - 07:51 PM | Sebastian Peak
Tagged: r9 390x, leak, hbm, hawaii, GDDR5, Fiji, amd
On the XFX R9 290X Double Dissipation product page something very curious appears when you scroll all the way down to the bottom…
What’s this image over here on the right, I wonder…
Well would you look at that. The box is clearly labeled for an AMD Radeon R9 390X with 8GB of GDDR5 memory, further indicating that the upcoming GPU will in fact be a Hawaii rebrand, and that the HBM-based flagship Fiji GPU we keep hearing about (and seeing pictures of) will have a new name. Whether that ends up being R9 490X or a name like "Fury" we will soon find out. As it is, it looks like we know at least part of what to expect from AMD's gaming event at E3 on June 16.
Hmm. What might this be about??
Of course we will have complete coverage when any official announcement is made, but for now enjoy the accidental product reveal!
Update: XFX has removed the R9 390X images from their R9 290X DD product page, but not before numerous sites took their own screenshots before posting the news as well. There has been some disagreement about what the leaked photos actually reveal, or if anything has genuinely been "confirmed", but it seems likely that the product named 390X will be a rebranded 290X with 8GB of GDDR5.