Subject: General Tech, Graphics Cards | February 18, 2014 - 09:03 AM | Scott Michaud
Tagged: nvidia, gtx titan black, geforce titan, geforce
NVIDIA has just announced the GeForce GTX Titan Black. Based on the full high-performance Kepler (GK110) chip, it is mostly expected to be a lower-cost development platform for GPU compute applications. All 2,880 single precision (FP32) CUDA cores and 960 double precision (FP64) units are unlocked, yielding 5.1 TeraFLOPS of 32-bit floating point and 1.7 TeraFLOPS of 64-bit floating point performance. The chip contains 1,536 KB of L2 cache and will be paired with 6GB of video memory on the board.
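Those peak figures follow directly from the core counts and the clock speed. A quick sketch of the arithmetic, assuming the Titan Black's published 889 MHz base clock (the article quotes only the resulting TFLOPS numbers):

```python
# Peak shader throughput = cores x 2 ops/clock (fused multiply-add) x clock.
# The 889 MHz base clock is an assumption from NVIDIA's published specs.

def peak_tflops(cores: int, clock_mhz: float) -> float:
    """Peak FMA throughput in TFLOPS: each core retires 2 FLOPs per cycle."""
    return cores * 2 * clock_mhz / 1e6

fp32 = peak_tflops(2880, 889)   # all 2,880 FP32 CUDA cores enabled
fp64 = peak_tflops(960, 889)    # all 960 FP64 units enabled (1/3 rate)

print(f"FP32: {fp32:.1f} TFLOPS")   # ~5.1
print(f"FP64: {fp64:.1f} TFLOPS")   # ~1.7
```

Note the 3:1 ratio of FP32 to FP64 units on GK110 is exactly why the double precision rate is one third of the single precision rate.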
The original GeForce GTX Titan launched last year, almost to the day. Also based on the GK110 design, it featured full double precision performance with only one SMX disabled. Of course, no product at the time contained a fully-enabled GK110 processor. The first product with all 15 SMX units active was the Quadro K6000, announced in July but only available in the fall. It was followed by the GeForce GTX 780 Ti (with a fraction of its FP64 performance) in November, and the fully powered Tesla K40 less than two weeks after that.
For gaming, this card is expected to perform comparably to the GTX 780 Ti... unless you can find a use for the extra 3GB of memory. Games see little benefit from the extra 64-bit floating point performance because the majority of their calculations are done at 32-bit precision.
The NVIDIA GeForce GTX Titan Black is available today at a price of $999.
What we know about Maxwell
I'm going to go out on a limb and guess that many of you reading this review would not normally have been as interested in the launch of the GeForce GTX 750 Ti if a specific word hadn't been mentioned in the title: Maxwell. It's true, the launch of the GTX 750 Ti, a mainstream graphics card that will sit at the $149 price point, marks the first public release of the new NVIDIA GPU architecture code named Maxwell. It is a unique move for the company to start at this particular point with a new design, but as you'll see in the changes to the architecture as well as the limitations, it all makes a certain amount of sense.
For those of you that don't really care about the underlying magic that makes the GTX 750 Ti possible, you can skip this page and jump right to the details of the new card itself. There I will detail the product specifications, performance comparison and expectations, etc.
If you are interested in learning what makes Maxwell tick, keep reading below.
The NVIDIA Maxwell Architecture
When NVIDIA first approached us about the GTX 750 Ti, they were very light on details about the GPU powering it. Even though it was confirmed to be built on Maxwell, the company hadn't yet decided whether to do a full architecture deep dive with the press. In the end, they landed somewhere between the full detail we are used to getting with a new GPU design and their original, passive stance. It looks like we'll have to wait for the enthusiast-class GPU release to really get the full story, but I think the details we have now paint the picture quite clearly.
During the course of designing the Kepler architecture, and then implementing it in the Tegra line in the form of the Tegra K1, NVIDIA's engineering team developed a better sense of how to improve the performance and efficiency of the basic compute design. Kepler was a huge leap forward compared to the likes of Fermi, and Maxwell promises to be equally revolutionary. NVIDIA wanted to address GPU power consumption while also finding ways to extract more performance from the architecture at the same power levels.
The logic of the GPU design remains similar to Kepler: a Graphics Processing Cluster (GPC) houses Streaming Multiprocessors (SMs) built from a large number of CUDA cores (stream processors).
GM107 Block Diagram
Readers familiar with the look of Kepler GPUs will instantly see changes in the organization of the various blocks of Maxwell. There are more divisions, more groupings, and fewer CUDA cores "per block" than before. As it turns out, this reorganization is part of how NVIDIA improved performance and power efficiency with the new GPU.
Subject: General Tech, Graphics Cards | February 14, 2014 - 06:02 PM | Scott Michaud
Tagged: supply shortage, shortage, R9 290X, podcast, litecoin, dogecoin, bitcoin
UPDATE (Feb 14th, 11pm ET): As a commenter has pointed out below, suddenly, as if by magic, Newegg has lowered prices on its in-stock R9 290X cards by $200. That means you can currently find them for $699 - only $150 over the expected MSRP. Does that change anything we said above or in the video? Not really. It only lowers the severity.
I am curious to know if this was done by Newegg voluntarily due to pressure from news stories such as these, lack of sales at $899 or with some nudging from AMD...
If you have been keeping up with our podcasts and reviews, you will know that AMD cards are great compute devices for their MSRP. This is something that cryptocurrency mining assigns a value to. Run enough hashing work and you are rewarded with newly created tokens (or a fee from validated transactions). Some people have decided that GPUs are worth more for that purpose than their MSRP, so retailers raise prices and people still buy them.
Currently, the cheapest R9 290X is selling for $900. That is a 64% increase over AMD's intended $549 MSRP - and AMD is not even the one receiving the extra money!
This shortage also affects other products such as Corsair's 1200W power supply. Thankfully, only certain components are necessary for mining (mostly GPUs and a lot of power) so at least we are not seeing the shortage spread to RAM, CPUs, APUs, and so forth. We noted a mining kit on Newegg which was powered by a Sempron processor. This line of cheap and low-performance CPUs has not been updated since 2009.
We have kept up with GPU shortages, historically. We did semi-regular availability checks during the GeForce GTX 680 and 690 launch windows. The former was out of stock for over two months after its launch. Those also sometimes strayed from their MSRP, slightly.
Be sure to check out the clip (above) for a nice, 15-minute discussion.
Subject: Graphics Cards | February 13, 2014 - 02:31 PM | Jeremy Hellstrom
Tagged: radeon, r7 265, pitcairn, Mantle, gpu, amd
Some time in late February or March you will be able to purchase the R7 265 for around $150, a decent price for an entry-level GPU that will benefit those currently dependent on the graphics portion of an APU. That raises the question of its performance, and whether this Pitcairn refresh will really benefit a gamer on a tight budget. Hardware Canucks tested it against the two NVIDIA cards closest in price: the GTX 650 Ti Boost, which is almost impossible to find, and the GTX 660 2GB, which costs $40 more than the R7 265's MSRP. The GTX 660 is faster overall, but on price-to-performance the R7 265 is the more attractive offering. Of course, with NVIDIA's Maxwell release just around the corner, this could change drastically.
If you already caught Ryan's review, you might have missed the short video he just added on the last page.
"AMD's R7 265 is meant to reside in the space between the R7 260X and R9 270, though performance is closer to its R9 sibling. Could this make it a perfect budget friendly graphics card?"
Here are some more Graphics Card articles from around the web:
- AMD updates Radeon R7 series with R7 265 GPU, promising 25 percent more power @ The Inquirer
- Sapphire Radeon R7 265 2 GB @ techPowerUp
- Sapphire R7 265 Dual X @ Kitguru
- Gigabyte Windforce Radeon R9 280X OC Video Card Review @ HiTech Legion
- XFX Radeon R9 290 Double Dissipation @ Benchmark Reviews
- Sapphire R7 260X 2GB OC 2x DVI Video Card Review @ Legit Reviews
- Sapphire R9-290X Tri-X “Sapphire Takes a Shot at Cooling the Monster” Review! @ Bjorn3D
- Asus R9 290 Direct CU II OC @ Kitguru
- Sapphire R9 290X Tri-X OC 4 GB @ techPowerUp
- GIGABYTE R9 290X WindForce OC Review @ Hardware Canucks
- AMD Mantle BF4 and StarSwarm Testing Part 2 @ Legit Reviews
- Gigabyte GTX 780 Ti GHz Edition 3GB @ eTeknix
- MSI GeForce GTX 780 Ti GAMING 3G @ [H]ard|OCP
Straddling the R7 and R9 designation
It is often said that the sub-$200 graphics card market is crowded. It will get even more so over the next 7 days. Today AMD is announcing a new entry into this field, the Radeon R7 265, which seems to straddle the line between their R7 and R9 brands. The product is much closer in its specifications to the R9 270 than to the R7 260X. As you'll see below, it is built on a very familiar GPU architecture.
AMD claims that the new R7 265 brings a 25% increase in performance to the R7 line of graphics cards. In my testing, this does turn out to be true and also puts it dangerously close to the R9 270 card released late last year. Much like we saw with the R9 290 compared to the R9 290X, the less expensive but similarly performing card might make the higher end model a less attractive option.
Let's take a quick look at the specifications of the new R7 265.
Based on the Pitcairn GPU, a part that made its debut with the Radeon HD 7870 and HD 7850 in early 2012, this card has 1024 stream processors running at 925 MHz, equating to 1.89 TFLOPS of total peak compute power. Unlike the other R7 cards, the R7 265 has a 256-bit memory bus and will come with 2GB of GDDR5 memory running at 5.6 GHz. The card requires a single 6-pin power connection but has a peak TDP of 150 watts - pretty much the maximum deliverable by the PCI Express slot and one 6-pin connector combined. And yes, the R7 265 supports DX 11.2, OpenGL 4.3, and Mantle, just like the rest of the AMD R7/R9 lineup. It does NOT, however, support TrueAudio or the new CrossFire XDMA engine.
| | Radeon R9 270X | Radeon R9 270 | Radeon R7 265 | Radeon R7 260X | Radeon R7 260 |
|---|---|---|---|---|---|
| GPU Code name | Pitcairn | Pitcairn | Pitcairn | Bonaire | Bonaire |
| Rated Clock | 1050 MHz | 925 MHz | 925 MHz | 1100 MHz | 1000 MHz |
| Memory Clock | 5600 MHz | 5600 MHz | 5600 MHz | 6500 MHz | 6000 MHz |
| Memory Bandwidth | 179 GB/s | 179 GB/s | 179 GB/s | 104 GB/s | 96 GB/s |
| TDP | 180 watts | 150 watts | 150 watts | 115 watts | 95 watts |
| Peak Compute | 2.69 TFLOPS | 2.37 TFLOPS | 1.89 TFLOPS | 1.97 TFLOPS | 1.53 TFLOPS |
The table above compares the current AMD product lineup, from the R9 270X down to the R7 260, with the R7 265 directly in the middle. Several specifications make the 265 a much closer relation to the R9 270/270X than to anything below it, even though the R7 265 has four fewer compute units (256 fewer stream processors) than the R9 270. The biggest differentiator is the 256-bit memory bus it retains: its 179 GB/s of memory bandwidth is 72% higher than the R7 260X's 104 GB/s! That will drastically improve performance relative to the rest of the R7 products. Pay no mind to the 260X's higher peak compute rating; in real-world testing it never translated into a lead over the R7 265.
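The table's compute and bandwidth figures can be sanity-checked from first principles: peak compute is stream processors x 2 FLOPs x clock, and bandwidth is bus width (in bytes) x effective memory clock. A quick sketch, using the R7 265's article-quoted 1,024 stream processors:

```python
# Sanity-check the spec table above.

def tflops(sps: int, clock_mhz: float) -> float:
    """Peak compute: each stream processor does 2 FLOPs (FMA) per cycle."""
    return sps * 2 * clock_mhz / 1e6

def bandwidth_gbs(bus_bits: int, mem_mhz: float) -> float:
    """Memory bandwidth: bus width in bytes x effective data rate."""
    return bus_bits / 8 * mem_mhz / 1e3

print(tflops(1024, 925))          # R7 265  -> ~1.89 TFLOPS
print(bandwidth_gbs(256, 5600))   # R7 265  -> ~179 GB/s
print(bandwidth_gbs(128, 6500))   # R7 260X -> ~104 GB/s
```

The 72% bandwidth gap falls straight out of the bus widths: the R7 265 keeps Pitcairn's 256-bit bus while Bonaire-based R7 parts make do with 128 bits.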
Subject: Graphics Cards | February 10, 2014 - 12:00 AM | Ryan Shrout
Tagged: radeon, R7, hd 7770, amd, 250x
With the exception of the R9 290X, the R9 290, and the R7 260X, AMD's recent branding campaign with the Radeon R7 and R9 series of graphics cards is really just a reorganization and rebranding of existing parts. When we reviewed the Radeon R9 280X and R9 270X, both were well-known entities, though this time with lower price tags to sweeten the pot.
Today, AMD is continuing the process of building the R7 graphics card lineup with the R7 250X. If you were looking for a new ASIC, maybe one that includes TrueAudio support, you are going to be let down. The R7 250X is essentially the same part that was released as the HD 7770 in February of 2012: Cape Verde.
AMD calls the R7 250X "the successor" to the Radeon HD 7770, and it is targeting the 1080p gaming landscape at the $99 price point. For those keeping track at home, the Radeon HD 7770 GHz Edition parts are currently selling for the same price. The R7 250X will be available in both 1GB and 2GB variants with a 128-bit GDDR5 memory bus running at 4.5 GHz. The card requires a single 6-pin power connection, and we expect a TDP of 95 watts.
Here is a table that details the current product stack of GPUs from AMD under $140. It's quite crowded as you can see.
| | Radeon R7 260X | Radeon R7 260 | Radeon R7 250X | Radeon R7 250 | Radeon R7 240 |
|---|---|---|---|---|---|
| GPU Code name | Bonaire | Bonaire | Cape Verde | Oland | Oland |
| Rated Clock | 1100 MHz | 1000 MHz | 1000 MHz | 1050 MHz | 780 MHz |
| Memory | 2GB | 2GB | 1 or 2GB | 1 or 2GB | 1 or 2GB |
| Memory Clock | 6500 MHz | 6000 MHz | 4500 MHz | 4600 MHz | 4600 MHz |
| Memory Bandwidth | 104 GB/s | 96 GB/s | 72 GB/s | 73.6 GB/s | 28.8 GB/s |
| TDP | 115 watts | 95 watts | 95 watts | 65 watts | 30 watts |
| Peak Compute | 1.97 TFLOPS | 1.53 TFLOPS | 1.28 TFLOPS | 0.806 TFLOPS | 0.499 TFLOPS |
The current competition from NVIDIA rests in the hands of the GeForce GTX 650 and the GTX 650 Ti, a GPU itself released in late 2012. Since we already know what performance to expect from the R7 250X thanks to its pedigree, the AMD-provided numbers below aren't really that surprising.
AMD did leave out the GTX 650 Ti from the graph above... but no matter, we'll be doing our own testing soon enough, once our R7 250X cards find their way into the PC Perspective offices.
The AMD Radeon R7 250X will be available starting today, but if that is the price point you are shopping at, you might want to keep an eye out for sales on those remaining Radeon HD 7770 GHz Edition parts.
Subject: General Tech, Graphics Cards | February 7, 2014 - 03:54 AM | Scott Michaud
Tagged: sli, crossfire
I will not even call this a thinly-veiled rant. Linus admits it. To make a point, he assembled a $5000 PC running a pair of NVIDIA GeForce GTX 780 Ti graphics cards and another pair of AMD Radeon R9 290X graphics cards. While Bitcoin mining would likely utilize all four video cards well enough, games will not. Of course, he did not even mention the former application (thankfully).
Honestly, he's right. One of the reasons why I am excited about OpenCL (and its WebCL companion) is that it simply does not care about devices. Your host code manages the application but, when the jobs get dirty, it enlists help from an available accelerator by telling it to perform a kernel (think of it like a function) and share the resulting chunk of memory.
This can be an AMD GPU. This can be an NVIDIA GPU. This can be an x86 CPU. This can be an FPGA. If the host has multiple, independent tasks, it can be several of the above (and in any combination). OpenCL really does not care.
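The host/kernel split can be illustrated with a plain-Python sketch (deliberately not the real OpenCL API, which is C): the host enqueues a "kernel", a function run once per work item, on whatever device is available, then reads back the shared result buffer. The device names and function names here are illustrative only.

```python
# Conceptual sketch of OpenCL's execution model in plain Python.
# A real OpenCL host would use clEnqueueNDRangeKernel and buffers;
# here a loop stands in for the device running work items in parallel.

def saxpy_kernel(gid, a, x, y, out):
    """One work item: out[gid] = a * x[gid] + y[gid]."""
    out[gid] = a * x[gid] + y[gid]

def enqueue_nd_range(device, kernel, global_size, *args):
    """Host-side dispatch: run the kernel once per work item on `device`."""
    for gid in range(global_size):   # a real device runs these concurrently
        kernel(gid, *args)

x, y = [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]
out = [0.0] * 3                      # shared result buffer
# The same kernel runs unchanged on any device the host picks:
for device in ("amd_gpu", "nvidia_gpu", "x86_cpu", "fpga"):
    enqueue_nd_range(device, saxpy_kernel, len(x), 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0]
```

The point of the sketch is the shape of the model: the kernel never knows or cares which device runs it, which is exactly the device-agnosticism described above.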
To be fair, AMD is very receptive to open platforms. NVIDIA is less so, and they are honest about that, but they conform to standards when doing so benefits their users more than their proprietary alternatives would. I know that point can be taken multiple ways, and several will be hotly debated, but I really cannot find the words to narrow it down any better.
Despite the fragmentation in features, there is one thing to be proud of as a PC gamer. You may have different experiences depending on the components you purchase.
But, at least you will always have an experience.
Subject: General Tech, Graphics Cards | February 6, 2014 - 08:54 PM | Scott Michaud
Tagged: amd, radeon, R7 250X
The AMD Radeon R7 250X has been mentioned on a few different websites over the last day, one of which was tweeted by AMD Radeon Italia. The SKU, which bridges the gap between the R7 250 and the R7 260, is expected to have a graphics processor with 640 Stream Processors, 40 TMUs, and 16 ROPs. It should be a fairly silent launch, with 1GB and 2GB versions appearing soon for an expected price of around 90 Euros, including VAT.
Image Credit: Videocardz.com
The GPU is expected to be based on the 28nm Oland chip design.
While it may seem like a short, twenty-Euro jump from the R7 250 to the R7 260, single-precision performance nearly doubles, from around 800 GFLOPS to around 1,550 GFLOPS. If that metric is indicative of overall performance, there is quite a large gap in which to place a product.
We do not yet know official availability.
Subject: General Tech, Graphics Cards | February 6, 2014 - 01:44 PM | Jeremy Hellstrom
We have more news and it is good for Galaxy fans. The newest update states that they will be sticking around!
Good news GPU fans, the rumours that Galaxy's GPU team is leaving the North American market might be somewhat exaggerated, at least according to their PR Team.
This post appeared on Facebook and was quickly taken down again, perhaps for rewording, or perhaps it is a perfect example of the lack of communication that [H]ard|OCP cites in their story. Stay tuned; we will update you as soon as we hear more.
Party like it's 2008!
[H]ard|OCP have been following Galaxy's business model closely for the past year, as they have been seeing hints that the reseller just didn't get the North American market. Their concern grew as they tried and failed to contact Galaxy at the end of 2013; emails went unanswered, and advertising campaigns seemed to have all but disappeared. Even with this reassurance that Galaxy is not planning to leave the North American market, a lot of what [H] says rings true: given the stock and delivery issues Galaxy seemed to have over the past year, there is something going on behind the scenes. Still, it is not worth abandoning them completely and turning this into a self-fulfilling prophecy; they have been in this market for a long time and may just be getting ready to move forward in a new way. On the other hand, you might be buying a product which will not have warranty support in the future.
"The North American GPU market has been one that is at many times a swirling mass of product. For the last few years though, we have seen the waters calm in that regard as video card board partners have somewhat solidified and we have seen solid players emerge and keep the stage. Except now we seen one exit stage left."
Here is some more Tech News from around the web:
- Microsoft's new CEO: The technology isn't his problem @ The Register
- Oculus Releases Open Source Hardware @ Hack a Day
- HP retains the top spot in a declining PC market @ The Inquirer
- Is Intel Selling Bay Trail Chips Below Cost? @ Slashdot
- Lenovo hires ex-CIA bod to push through Moto deal @ The Register
Subject: General Tech, Graphics Cards | February 5, 2014 - 02:43 PM | Jeremy Hellstrom
Tagged: gaming, Mantle, amd, battlefield 4
Now that the new Mantle-enabled driver has been released, several sites have had a chance to try out the new API to see what effect it has on Battlefield 4. [H]ard|OCP took a stock XFX R9 290X paired with an i7-3770K and tested both single- and multiplayer BF4 performance; the pattern they saw led them to believe Mantle is more effective at relieving CPU bottlenecks than ones caused by the GPU. The performance increases they saw were greater at lower resolutions than at high resolutions. At The Tech Report, another XFX R9 290X was paired with an A10-7850K and an i7-4770K, and the systems' performance was compared in D3D as well as Mantle. To make the tests even more interesting, they also tested D3D with a GTX 780 Ti, which you should fully examine before deciding which performs best. Their findings were in line with [H]ard|OCP's, and they made the observation that Mantle is going to offer the greatest benefits to lower-powered systems, with not a lot to be gained by high-end systems with the current version of Mantle. Legit Reviews performed similar tests but also brought the Star Swarm demo into the mix, using an R7 260X for their GPU. You can catch all of our coverage by clicking on the Mantle tag.
"Does AMD's Mantle graphics API deliver on its promise of smoother gaming with lower-spec CPUs? We take an early look at its performance in Battlefield 4."
Here is some more Tech News from around the web:
- Humble Sid Meier Bundle announced: So much Civilisation! @ HEXUS
- HARD ONES: Three new PC games that are BLOODY DIFFICULT @ The Register
- Developers Reporting No Payments From Strategy First @ Rock, Paper, SHOTGUN