AMD Launches Another Graphics Card: Radeon R9 280

Subject: Graphics Cards | March 4, 2014 - 05:00 AM |
Tagged: radeon, r9 280, R9, hd 7950, amd

AMD continues to churn out its Radeon graphics card line. Out today, or so we are told, is the brand new Radeon R9 280! That's right kids, it's kind of like the R9 280X, but without the letter at the end.  In fact, do you know what it happens to be very similar to? The Radeon HD 7950. Check out the testing card we got in.

280-5.jpg

It's okay AMD, it's just a bit of humor...

Okay, let's put the jokes aside and talk about what we are really seeing here.  

The new Radeon R9 280 is the latest in the line of rebranding and reorganizing steps made by AMD in the move from the "HD" moniker to "R9/R7". As the image above would indicate, the specifications of the R9 280 are nearly 1:1 with those of the Radeon HD 7950 with Boost, released in August of 2012. We built a specification table below.

|                  | Radeon R9 280X | Radeon R9 280 | Radeon R9 270X | Radeon R9 270 | Radeon R7 265 |
|------------------|----------------|---------------|----------------|---------------|---------------|
| GPU Code name    | Tahiti         | Tahiti        | Pitcairn       | Pitcairn      | Pitcairn      |
| GPU Cores        | 2048           | 1792          | 1280           | 1280          | 1024          |
| Rated Clock      | 1000 MHz       | 933 MHz       | 1050 MHz       | 925 MHz       | 925 MHz       |
| Texture Units    | 128            | 112           | 80             | 80            | 64            |
| ROP Units        | 32             | 32            | 32             | 32            | 32            |
| Memory           | 3GB            | 3GB           | 2GB            | 2GB           | 2GB           |
| Memory Clock     | 6000 MHz       | 6000 MHz      | 5600 MHz       | 5600 MHz      | 5600 MHz      |
| Memory Interface | 384-bit        | 384-bit       | 256-bit        | 256-bit       | 256-bit       |
| Memory Bandwidth | 288 GB/s       | 288 GB/s      | 179 GB/s       | 179 GB/s      | 179 GB/s      |
| TDP              | 250 watts      | 250 watts     | 180 watts      | 150 watts     | 150 watts     |
| Peak Compute     | 4.10 TFLOPS    | 3.34 TFLOPS   | 2.69 TFLOPS    | 2.37 TFLOPS   | 1.89 TFLOPS   |
| MSRP             | $299           | $279          | $199           | $179          | $149          |
| Current Pricing  | $420 - Amazon  | ???           | $259 - Amazon  | $229 - Amazon | ???           |
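
If you are wondering where the Peak Compute and Memory Bandwidth figures come from, they fall straight out of the other rows: each GCN shader does two floating point operations per clock, and bandwidth is the effective memory clock times the bus width in bytes. A quick sanity check in C, using the R9 280 column:

```c
/* Sanity-check the R9 280 column: peak FP32 = shaders * 2 FLOPs/clock,
   bandwidth = effective memory clock * bus width in bytes. */
#include <stdio.h>

int main(void) {
    double shaders  = 1792, core_mhz = 933;   /* GPU Cores, Rated Clock */
    double mem_mhz  = 6000, bus_bits = 384;   /* Memory Clock, Interface */

    double tflops = shaders * 2.0 * core_mhz * 1e6 / 1e12;
    double gbps   = mem_mhz * 1e6 * (bus_bits / 8.0) / 1e9;

    printf("Peak compute: %.2f TFLOPS\n", tflops);  /* ~3.34 */
    printf("Bandwidth:    %.0f GB/s\n", gbps);      /* 288 */
    return 0;
}
```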

If you are keeping track, AMD should just about be out of cards to drag over to the new naming scheme. The R9 280 has a slightly higher top boost clock than the Radeon HD 7950 did (933 MHz vs. 925 MHz) but otherwise looks very similar. Oh, and apparently the R9 280 will require a 6+8 pin PCIe power combination while the HD 7950 needed only 6+6 pin; that raises the card's available power budget from 225 watts to 300 watts (75 watts from the slot plus 75 and 150 watts from the connectors). Despite that change, it is still built on the same Tahiti GPU that has been chugging along for years now.

The Radeon R9 280 continues to support an assortment of AMD's graphics technologies, including Mantle, PowerTune, CrossFire, Eyefinity, and DX11.2. Note that because we are looking at an ASIC that has been around for a while, you will not find XDMA or TrueAudio support.

280-3.jpg

The estimated MSRP of $279 is only $20 lower than the MSRP of the R9 280X, but you should take all pricing estimates from AMD with a grain of salt. The prices listed in the table above from Amazon.com were current as of March 3rd, and of course, we did see Newegg attempt to get people to buy R9 290X cards for $900 recently. AMD did use some interesting language on the availability of the R9 280 in its emails to me.

The AMD Radeon R9 280 will become available at a starting SEP of $279USD the first week of March, with wider availability the second week of March. Following the exceptional demand for the entire R9 Series, we believe the introduction of the R9 280 will help ensure that every gamer who plans to purchase an R9 Series graphics card has an opportunity to do so.

I like the intent behind this release from AMD: get more physical product into the channel to hopefully lower prices and enable more gamers to purchase the Radeon card they really want. However, until I see a swarm of parts on Newegg.com or Amazon.com at, or very close to, the MSRPs listed in the table above for an extended period, I think the effects of coin mining (and the rumors of GPU shortages) will continue to plague us. No one wants to see competition in the market and great options at reasonable prices for gamers more than we do!

280-1.jpg

AMD hasn't sent out any samples of the R9 280 as far as I know (at least we didn't get any) but the performance should be predictable based on its specifications relative to the R9 280X and the HD 7950 before it.  

Do you think the R9 280 will fix the pricing predicament that AMD finds itself in today, and if it does, are you going to buy one?

EVGA Launches GTX 750 and GTX 750 SC With 2GB GDDR5

Subject: Graphics Cards | March 3, 2014 - 07:33 PM |
Tagged: nvidia, maxwell, gtx 750, evga

EVGA recently launched two new GTX 750 graphics cards with 2GB of GDDR5 memory. The new cards include a reference clocked GTX 750 2GB and a factory overclocked GTX 750 2GB SC (Super Clocked).

EVGA GTX 750 2GB GDDR5 GPU.jpg

The new graphics cards are based around NVIDIA’s GTX 750 GPU with 512 Maxwell architecture CUDA cores. The GTX 750 is the little brother to the GTX 750 Ti we recently reviewed which has 640 cores. EVGA has clocked the GTX 750 2GB card’s GPU at reference clockspeeds of 1020 MHz base and 1085 MHz boost and memory at a reference speed of 1253 MHz. The “Super Clocked” GTX 750 2GB SC card keeps the memory at reference speeds but overclocks the GPU quite a bit to 1215 MHz base and 1294 MHz boost.

|           | EVGA GTX 750 2GB                     | EVGA GTX 750 2GB Super Clocked       |
|-----------|--------------------------------------|--------------------------------------|
| GPU       | 512 CUDA Cores (Maxwell)             | 512 CUDA Cores (Maxwell)             |
| GPU Base  | 1020 MHz                             | 1215 MHz                             |
| GPU Boost | 1085 MHz                             | 1294 MHz                             |
| Memory    | 2 GB GDDR5 @ 1253 MHz on 128-bit bus | 2 GB GDDR5 @ 1253 MHz on 128-bit bus |
| I/O       | 1 x DVI, 1 x HDMI, 1 x DP            | 1 x DVI, 1 x HDMI, 1 x DP            |
| TDP       | 55W                                  | 55W                                  |
| Price     | $129.99                              | $139.99                              |
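
As an aside, the bandwidth implied by that memory spec is easy to work out: GDDR5 moves four bits per pin per clock, so a 1253 MHz memory clock is an effective 5012 MT/s. A quick check:

```c
/* GDDR5 transfers 4 bits per pin per clock, so the 1253 MHz memory
   clock above is an effective 5012 MT/s across a 128-bit (16-byte) bus. */
#include <stdio.h>

int main(void) {
    double mem_clock_mhz = 1253.0;
    double effective_mts = mem_clock_mhz * 4.0;  /* quad data rate */
    double bus_bytes     = 128.0 / 8.0;

    printf("Bandwidth: %.1f GB/s\n", effective_mts * 1e6 * bus_bytes / 1e9);
    /* prints ~80.2 GB/s */
    return 0;
}
```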

Both cards have a 55W TDP sans any PCI-E power connector and utilize a single shrouded fan heatsink. The cards are short but occupy two PCI slots. The rear panel hosts one DVI, one HDMI, and one DisplayPort video output along with ventilation slots for the HSF. Further, the cards both support NVIDIA’s G-Sync technology.

EVGA GTX 750 2GB GDDR5 SC.jpg

The reference clocked GTX 750 2GB is $129.99 while the factory overclocked model is $139.99. Both cards are similar to their respective predecessors except for the additional 1GB of GDDR5 memory, which comes at a $10 premium and should help a bit at high resolutions.

Source: EVGA

Speaking of Passive Cooling: Tom's Hardware's GTX 750 Ti

Subject: General Tech, Graphics Cards | March 2, 2014 - 02:20 PM |
Tagged: passive cooling, maxwell, gtx 750 ti

The NVIDIA GeForce GTX 750 Ti is fast but also power efficient, enough so that Ryan found it a worthwhile upgrade for cheap desktops with cheap power supplies that were never intended for discrete graphics. Of course, that recommendation is about making the best of what you've got; better options probably exist if you are building a PC (or getting one built by a friend or a computer store).

toms-passive-geforce-gtx-750-ti-cooling,O-U-423390-22.jpg

Image Credit: Tom's Hardware

Tom's Hardware went another route: make it fanless.

After wrecking a passively-cooled Radeon HD 7750, which is probably a crime in Texas, they clamped its cooler onto the Maxwell-based GTX 750 Ti. While the cooler was designed for good airflow, they decided to leave it in a completely enclosed case without fans. Under load, the card reached 80°C within about twenty minutes. The driver backed off performance slightly, 1-3% depending on your frame of reference, but was able to maintain that target temperature.
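
That behavior is the GPU's temperature-target boost logic doing its job. As a rough illustration only (this is not NVIDIA's actual algorithm, and the sensor here is a simulated stand-in), a control loop of this general shape produces exactly that pattern of shaving a boost bin to hold 80°C:

```c
/* Toy model of a temperature-target boost loop: drop the clock one bin
   when the die crosses the target, creep back up when it cools.
   The sensor is a made-up stand-in, not a real NVIDIA driver call. */
#include <stdio.h>

#define TARGET_C    80
#define BASE_MHZ  1020
#define BOOST_MHZ 1085
#define STEP_MHZ    13   /* one boost bin */

/* Hypothetical sensor: pretend temperature scales with clock speed. */
static int read_gpu_temp_c(int clock_mhz) {
    return 60 + (clock_mhz - BASE_MHZ) / 3;
}

int main(void) {
    int clock = BOOST_MHZ;
    for (int tick = 0; tick < 8; tick++) {
        int t = read_gpu_temp_c(clock);
        if (t >= TARGET_C && clock - STEP_MHZ >= BASE_MHZ)
            clock -= STEP_MHZ;              /* throttle to hold the target */
        else if (t < TARGET_C - 2 && clock + STEP_MHZ <= BOOST_MHZ)
            clock += STEP_MHZ;              /* headroom: creep back up */
        printf("tick %d: %d C -> %d MHz\n", tick, t, clock);
    }
    return 0;
}
```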

Now, if only it accepted SLI, this person might be happy.

Sapphire Launches Low Profile R7 240 GPU For HTPCs

Subject: Graphics Cards | March 2, 2014 - 12:14 AM |
Tagged: sapphire, R7 240, htpc, SFF, low profile, steam os

Sapphire is preparing a new low profile Radeon R7 240 graphics card for home theater PCs and small form factor desktop builds. The new graphics card is a single slot design that uses a small heatsink-and-fan cooler shorter than the low profile PCI bracket, for assured compatibility with even extremely cramped cases.

The Sapphire R7 240 card pairs a 28nm AMD GCN-based GPU with 2GB of DDR3 memory. There are two HDMI 1.4a display outputs that each support 4K (4096 x 2160) resolutions. Specifically, this particular iteration of the Radeon R7 240 has 320 stream processors clocked at 730 MHz base and 780 MHz boost, 2GB of DDR3 memory clocked at 900 MHz on a 128-bit bus, 20 TMUs, and 8 ROPs, all within a power-sipping 30W TDP.

Sapphire Radeon R7 240 Low Profile Graphics Card for SFF Desktops and HTPCs.jpg

This low profile R7 240 is a sub-$100 part that can easily power a home theater PC or Steam OS streaming endpoint. Actually, the R7 240 itself can deliver playable frame rates at low quality settings and reduced resolutions, managing at least 30 average FPS in modern titles like Bioshock Infinite and BF4 according to this review. Another use case would be to add the card to an existing AMD APU-based system in Hybrid CrossFire (which has seen Frame Pacing fixes!) for a bit more gaming horsepower under a strict budget.

The card occupies a tight niche: it is only viable when you are constrained by budget and physical size, and you want to buy new rather than pick up a faster single card from an older generation on the used market. Still, it is nice to have options, and this will be one such new budget alternative. Exact pricing is not yet available, but it should be hitting store shelves soon. For an idea on pricing, the full height Sapphire R7 240 retails for around $70, so expect the new low profile variant to be around that price, if at a slight premium.

Video Perspective: Gaming on an Overclocked AMD A10-7850K APU

Subject: Graphics Cards, Processors | February 26, 2014 - 04:18 PM |
Tagged:

Overclocking the memory and GPU clock speeds on an AMD APU can greatly improve gaming performance - it is known.  With the new AMD A10-7850K in hand I decided to do a quick test and see how much we could improve average frame rates for mainstream gamers with only some minor tweaking of the motherboard BIOS.  

Using some high-end G.Skill RipJaws DDR3-2400 memory, we were able to push memory speeds on the Kaveri APU up to 2400 MHz, a 50% increase over the stock 1600 MHz rate.  We also increased the clock speed on the GPU portion of the A10-7850K from 720 MHz to 1028 MHz, a 42% boost.  Interestingly, as you'll see in the video below, the memory speed had a MUCH more dramatic impact on our average frame rates in-game.  

In the three games we tested for this video, GRID 2, Bioshock Infinite and Battlefield 4, total performance gain ranged from 26% to 38%.  Clearly that can make the AMD Kaveri APU an even more potent gaming platform if you are willing to shell out for the high speed memory.

|                                    | Stock    | GPU OC   | Memory OC | Total OC | Avg FPS Change |
|------------------------------------|----------|----------|-----------|----------|----------------|
| Battlefield 4 (1920x1080, Medium)  | 22.4 FPS | 23.7 FPS | 28.2 FPS  | 29.1 FPS | +29%           |
| GRID 2 (1920x1080, High + 2xAA)    | 33.5 FPS | 36.3 FPS | 41.1 FPS  | 42.3 FPS | +26%           |
| Bioshock Infinite (1920x1080, Low) | 30.1 FPS | 30.9 FPS | 40.2 FPS  | 41.8 FPS | +38%           |
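
If you want to check the math, the Avg FPS Change column appears to be the fully overclocked result against stock, truncated to a whole percent:

```c
/* Reproduce the Avg FPS Change column: total OC vs. stock, with the
   percentage truncated rather than rounded. */
#include <stdio.h>

int main(void) {
    const char *game[] = { "Battlefield 4", "GRID 2", "Bioshock Infinite" };
    double stock[]     = { 22.4, 33.5, 30.1 };
    double total_oc[]  = { 29.1, 42.3, 41.8 };

    for (int i = 0; i < 3; i++) {
        int pct = (int)(100.0 * (total_oc[i] / stock[i] - 1.0));
        printf("%-18s +%d%%\n", game[i], pct);   /* +29, +26, +38 */
    }
    return 0;
}
```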

DirectX 12 and a new OpenGL to challenge AMD Mantle coming at GDC?

Subject: Graphics Cards | February 26, 2014 - 03:17 PM |
Tagged: opengl, nvidia, Mantle, gdc 14, GDC, DirectX 12, DirectX, amd

UPDATE (2/27/14): AMD sent over a statement today after seeing our story.  

AMD would like you to know that it supports and celebrates a direction for game development that is aligned with AMD’s vision of lower-level, ‘closer to the metal’ graphics APIs for PC gaming. While industry experts expect this to take some time, developers can immediately leverage efficient API design using Mantle, and AMD is very excited to share the future of our own API with developers at this year’s Game Developers Conference.

Credit to Scott and his reader at The Tech Report for spotting this interesting news today!

It appears that both DirectX and OpenGL will see some changes announced at next month's Game Developers Conference in San Francisco. According to some information found in the session details, both APIs are trying to steal some of the thunder from AMD's Mantle, recently released with the Battlefield 4 patch. Mantle is an API built by AMD to enable more direct, lower level access to its GCN graphics hardware, allowing developers to code games that are more efficient and provide better performance for the PC gamer.

mantle.jpg

From the session titled DirectX: Evolving Microsoft's Graphics Platform we find this description (emphasis mine):

For nearly 20 years, DirectX has been the platform used by game developers to create the fastest, most visually impressive games on the planet.

However, you asked us to do more. You asked us to bring you even closer to the metal and to do so on an unparalleled assortment of hardware. You also asked us for better tools so that you can squeeze every last drop of performance out of your PC, tablet, phone and console.

Come learn our plans to deliver.

Another DirectX session hosted by Microsoft is titled DirectX: Direct3D Futures (emphasis mine): 

Come learn how future changes to Direct3D will enable next generation games to run faster than ever before!

In this session we will discuss future improvements in Direct3D that will allow developers an unprecedented level of hardware control and reduced CPU rendering overhead across a broad ecosystem of hardware. 

If you use cutting-edge 3D graphics in your games, middleware, or engines and want to efficiently build rich and immersive visuals, you don't want to miss this talk.

Now look at a line from our initial article on AMD Mantle when announced at its Hawaii tech day event:

It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software.

This is all sounding very familiar. It would appear that Microsoft has finally been listening to the development community and is working on the performance aspects of DirectX, likely due in no small part to the push of AMD and Mantle's development. An updated DirectX 12 that includes a similar feature set and similar performance changes would shift the market in a few key ways.

olddirectx.jpg

Is it time again for innovation with DirectX?

First and foremost, what does this do for AMD's Mantle in the near or distant future? For now, BF4 will still include Mantle support, as will games like Thief (update pending), but going forward, if these DX12 changes are as significant as I am being led to believe, then it would be hard to see anyone really sticking with the AMD-only route. Of course, if DX12 doesn't really address the performance and overhead issues in the same way that Mantle does, then all bets are off and we are back to square one.

Interestingly, OpenGL might also be getting into the ring with the session Approaching Zero Driver Overhead in OpenGL.

Driver overhead has been a frustrating reality for game developers for the entire life of the PC game industry. On desktop systems, driver overhead can decrease frame rate, while on mobile devices driver overhead is more insidious--robbing both battery life and frame rate. In this unprecedented sponsored session, Graham Sellers (AMD), Tim Foley (Intel), Cass Everitt (NVIDIA) and John McDonald (NVIDIA) will present high-level concepts available in today's OpenGL implementations that radically reduce driver overhead--by up to 10x or more. The techniques presented will apply to all major vendors and are suitable for use across multiple platforms. Additionally, they will demonstrate practical demos of the techniques in action in an extensible, open source comparison framework.

This description seems to indicate more about new or lesser known programming methods that can be used with OpenGL to lower overhead without the need for custom APIs or even DX12. These could be new modules from vendors or possibly a new revision to OpenGL - we'll find out next month.
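
For a taste of what that session will likely cover, one technique commonly associated with the zero-driver-overhead push is persistent buffer mapping via OpenGL 4.4's glBufferStorage: the CPU keeps one mapping alive and streams data through it instead of paying for a map/unmap round trip every frame. A rough sketch, assuming a GL 4.4 context and function loader are already set up:

```c
/* Persistent mapped buffer sketch (OpenGL 4.4). Assumes a current GL 4.4
   context and a loader such as glad providing the function pointers. */
#include <glad/glad.h>

static GLuint buf;
static void  *cpu_ptr;  /* stays valid for the buffer's lifetime */

void create_persistent_buffer(GLsizeiptr size) {
    const GLbitfield flags = GL_MAP_WRITE_BIT |
                             GL_MAP_PERSISTENT_BIT |  /* mapped while GL reads it */
                             GL_MAP_COHERENT_BIT;     /* writes visible without flushes */
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glBufferStorage(GL_ARRAY_BUFFER, size, NULL, flags);  /* immutable storage */
    cpu_ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
    /* Each frame: write vertex data through cpu_ptr, then draw. Fence with
       glFenceSync per region so you never overwrite data the GPU is still
       reading. */
}
```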

All of this leaves us with a lot of questions that will hopefully be answered when we get to GDC in mid-March.  Will this new version of DirectX be enough to reduce API overhead to appease even the stingiest of game developers?  How will AMD react to this new competitor to Mantle (or was Mantle really only created to push this process along)?  What time frame does Microsoft have on DX12?  Does this save NVIDIA from any more pressure to build its own custom API?

Gaming continues to be the driving factor of excitement and innovation for the PC!  Stay tuned for an exciting spring!

Source: Tech Report

AMD Catalyst 14.2 Beta V1.3 Driver Released

Subject: General Tech, Graphics Cards | February 25, 2014 - 11:46 AM |
Tagged: amd, Mantle, TrueAudio, Thief 4, thief

AMD released their Catalyst 14.2 Beta V1.3 graphics drivers today, coinciding with the launch of Thief. The game, developed by Eidos Montreal and published by Square Enix, is another entry in "Gaming Evolved" and their "Never Settle" promotion. Soon, it will also support Mantle and TrueAudio.

AMD-Catalyst.jpg

Being Thief's launch driver, it provides optimizations for both single-GPU and CrossFire customers in that title. It also provides fixes for other titles, especially Battlefield 4, which can now run Mantle with up to four GPUs. Battlefield 3 and 4 also support Frame Pacing on very high resolution (greater than 2560x1600) monitors in dual-card CrossFire. The driver further fixes a couple of bugs with CrossFire in DirectX 9 games, missing textures in Minecraft, and corruption in X-Plane.

The Catalyst 14.2 Beta V1.3 driver is available now at AMD's website.

Source: AMD

New Intel Graphics Drivers Further Spread Quick Sync Video

Subject: General Tech, Graphics Cards, Processors | February 25, 2014 - 10:33 AM |
Tagged: Ivy Bridge, Intel, iGPU, haswell

Recently, Intel released the 15.33.14.3412 (15.33.14.64.3412 for 64-bit) drivers for their Ivy Bridge and Haswell integrated graphics. The download was apparently published on January 29th while its patch notes are dated February 22nd. It features expanded support for Intel Quick Sync Video Technology, allowing certain Pentium and Celeron-class processors to access the feature, as well as an alleged performance increase in OpenGL-based games. Probably the most famous OpenGL title of our time is Minecraft, although I do not know if that specific game will see improvements (and if so, how much).

Intel-logo.svg_.png

The new driver enables Quick Sync Video for the following processors:

  • Pentium 3558U
  • Pentium 3561Y
  • Pentium G3220 (Unsuffixed/T/TE)
  • Pentium G3420 (Unsuffixed/T)
  • Pentium G3430
  • Celeron 2957U
  • Celeron 2961Y
  • Celeron 2981U
  • Celeron G1820 (Unsuffixed/T/TE)
  • Celeron G1830

Besides the addition for these processors and the OpenGL performance improvements, the driver obviously fixes several bugs in each of its supported OSes. You can download the appropriate drivers from the Intel Download Center.

Source: Intel

NVIDIA Coin Mining Performance Increases with Maxwell and GTX 750 Ti

Subject: General Tech, Graphics Cards | February 20, 2014 - 02:45 PM |
Tagged: nvidia, mining, maxwell, litecoin, gtx 750 ti, geforce, dogecoin, coin, bitcoin, altcoin

As we have talked about on several different occasions, altcoin mining (anything that is NOT Bitcoin specifically) is a force on the current GPU market whether we like it or not. Traditionally, miners have bought only AMD-based GPUs due to their performance advantage over the NVIDIA competition. However, with continued development of the cudaMiner application over the past few months, NVIDIA cards have been gaining performance in Scrypt mining.

The biggest performance change we've seen yet has come with a new version of cudaMiner released yesterday. This new version (2014-02-18) brings initial support for the Maxwell architecture, which was just released yesterday in the GTX 750 and 750 Ti. With support for Maxwell, mining starts to become a more compelling option with this new NVIDIA GPU.

With the new version of cudaMiner on the reference version of the GTX 750 Ti, we were able to achieve a hashrate of 263 KH/s, impressive when you compare it to the performance of the previous generation, Kepler-based GTX 650 Ti, which tops out at about 150 KH/s or so.

IMG_9552.JPG

As you may know from our full GTX 750 Ti review, the GM107 overclocks very well. We were able to push our sample to the highest configurable offset of +135 MHz, with an additional 500 MHz added to the memory frequency and a 31 mV bump to the voltage offset. All of this combined for a ~1200 MHz clockspeed while mining and an additional 40 KH/s or so of performance, bringing us to just under 300 KH/s with the 750 Ti.

perf.png

As we compare the 750 Ti to AMD GPUs and previous generation NVIDIA GPUs, we start to see how impressively this card stacks up considering its $150 MSRP. For less than half the price of the GTX 770, and roughly the same price as an R7 260X, you can achieve the same performance.

power.png

When we look at power consumption based on the TDP of each card, the comparison only becomes more impressive. At 60W, no card comes close to the performance of the 750 Ti when mining. This means you will spend less to run a 750 Ti than an R7 260X or GTX 770 for roughly the same hash rate.

perfdollar.png

Taking a look at the performance per dollar ratings of these graphics cards, we see the two top performers are the AMD R7 260X and our overclocked GTX 750 Ti.

perfpower.png

However, when looking at the performance per watt differences of the field, the GTX 750 Ti looks even more impressive. While most miners may think they don't care about power draw, it can help your bottom line: being able to buy a smaller, less expensive power supply moves up the payoff date for the hardware. This also bodes well for future Maxwell based graphics cards that we will likely see released later in 2014.
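
Using the numbers quoted above for the stock 750 Ti, the ratios behind those two charts are easy to reproduce:

```c
/* The ratios behind the charts, from the figures quoted in the text:
   stock GTX 750 Ti at 263 KH/s, $150 MSRP, 60 W TDP. */
#include <stdio.h>

int main(void) {
    double khs = 263.0, price = 150.0, watts = 60.0;

    printf("KH/s per dollar: %.2f\n", khs / price);  /* ~1.75 */
    printf("KH/s per watt:   %.2f\n", khs / watts);  /* ~4.38 */
    return 0;
}
```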

Continue reading our look at Coin Mining performance with the GTX 750 Ti and Maxwell!!

Was leading with a low end Maxwell smart?

Subject: Graphics Cards | February 19, 2014 - 01:43 PM |
Tagged: geforce, gm107, gpu, graphics, gtx 750 ti, maxwell, nvidia, video

We finally saw Maxwell yesterday, with a new design for the SMs, called SMM, each of which consists of four blocks of 32 dedicated, non-shared CUDA cores. In theory that should allow NVIDIA to pack more SMMs onto the GPU than they could with the previous SMX units. This new design was released on a $150 card, which means we don't really get to see what it is capable of yet. At that price it competes with AMD's R7 260X and R7 265, at least if you can find them at their MSRPs and not at inflated cryptocurrency levels. Legit Reviews contrasted the performance of two overclocked GTX 750 Ti cards with those two cards, as well as with the previous generation GTX 650 Ti Boost, on a wide selection of games to see how it stacks up performance-wise, which you can read here.

That is of course after you read Ryan's full review.

nvidia-geforce-gtx750ti-645x399.jpg

"NVIDIA today announced the new GeForce GTX 750 Ti and GTX 750 video cards, which are very interesting to use as they are the first cards based on NVIDIA's new Maxwell graphics architecture. NVIDIA has been developing Maxwell for a number of years and have decided to launch entry-level discrete graphics cards with the new technology first in the $119 to $149 price range. NVIDIA heavily focused on performance per watt with Maxwell and it clearly shows as the GeForce GTX 750 Ti 2GB video card measures just 5.7-inches in length with a tiny heatsink and doesn't require any internal power connectors!"


AMD Gaming Evolved App with Redeemable Prizes

Subject: General Tech, Graphics Cards | February 18, 2014 - 09:01 PM |
Tagged: raptr, gaming evolved, amd

The AMD Gaming Evolved App updates your drivers, optimizes your game settings, streams your gameplay to Twitch, accesses some social media platforms, and now gives prizes. Points are given for playing games using the app, optimizing game settings, and so forth. These can be exchanged for rewards ranging from free games to Sapphire R9-series graphics cards.

amd-raptr.jpg

This program has been in beta for a little while now, without the ability to redeem points. The system has been restructured to encourage using the entire app, lowering the accumulation rate for playing games and adding other goals. Beta participants do not lose all of their points; rather, their balances are rescaled more in line with the new system.

The Gaming Evolved prize program has launched today.

Press release after the teaser.

Source: raptr

NVIDIA Releases GeForce TITAN Black

Subject: General Tech, Graphics Cards | February 18, 2014 - 06:03 AM |
Tagged: nvidia, gtx titan black, geforce titan, geforce

NVIDIA has just announced the GeForce GTX Titan Black. Based on the full high-performance Kepler (GK110) chip, it is mostly expected to be a lower cost development platform for GPU computing applications. All 2,880 single precision (FP32) CUDA cores and 960 double precision (FP64) CUDA cores are unlocked, yielding 5.1 TFLOPS of single precision and 1.3 TFLOPS of double precision performance. The chip contains 1536 KB of L2 cache and will be paired with 6GB of video memory on the board.

nvidia-titan-black-2.jpg

The original GeForce GTX Titan launched last year, almost to the day. Also based on the GK110 design, it featured full double precision performance with only one SMX disabled. Of course, no component at the time contained a fully-enabled GK110 processor. The first product with all 15 SMX units active was not realized until the Quadro K6000, announced in July but only available in the fall. It was followed by the GeForce GTX 780 Ti (with a fraction of its FP64 performance) in November, and the fully powered Tesla K40 less than two weeks after that.

nvidia-titan-black-3.jpg

For gaming applications, this card is expected to have comparable performance to the GTX 780 Ti... unless you can find a use for the extra 3GB of memory. Games see little benefit from the extra 64-bit floating point performance because the majority of their calculations are done at 32-bit precision.

The NVIDIA GeForce GTX Titan Black is available today at a price of $999.

Source: NVIDIA

AMD Radeon R9 290X Hits $900 on Newegg. Thanks *coin

Subject: General Tech, Graphics Cards | February 14, 2014 - 03:02 PM |
Tagged: supply shortage, shortage, R9 290X, podcast, litecoin, dogecoin, bitcoin

UPDATE (Feb 14th, 11pm ET): As a commenter has pointed out below, suddenly, as if by magic, Newegg has lowered prices on the currently in stock R9 290X cards by $200.  That means you can currently find them for $699 - only $150 over the expected MSRP.  Does that change anything about what we said above or in the video?  Not really.  It only lowers the severity.

I am curious to know if this was done by Newegg voluntarily due to pressure from news stories such as these, lack of sales at $899 or with some nudging from AMD...

If you have been keeping up with our podcasts and reviews, you will know that AMD cards are great compute devices for their MSRP. This is something that cryptocurrency applies a value to: run a sufficient number of hashing tasks and you are rewarded with newly created tokens (or some fee from validated transactions). Some people seem to think that GPUs are more valuable for that purpose than their MSRP, so retailers raise prices and people still buy them.

amd-shortage-900.png

Currently, the cheapest R9 290X is being sold for $900, a 64% increase over AMD's intended $549 MSRP. AMD is not even the one receiving this extra money!

This shortage also affects other products such as Corsair's 1200W power supply. Thankfully, only certain components are necessary for mining (mostly GPUs and a lot of power) so at least we are not seeing the shortage spread to RAM, CPUs, APUs, and so forth. We noted a mining kit on Newegg which was powered by a Sempron processor. This line of cheap and low-performance CPUs has not been updated since 2009.

We have kept up with GPU shortages, historically. We did semi-regular availability checks during the GeForce GTX 680 and 690 launch windows. The former was out of stock for over two months after its launch. Those also sometimes strayed from their MSRP, slightly.

Be sure to check out the clip (above) for a nice, 15-minute discussion.

Pitcairn rides again in the R7 265

Subject: Graphics Cards | February 13, 2014 - 11:31 AM |
Tagged: radeon, r7 265, pitcairn, Mantle, gpu, amd

Sometime in late February or March you will be able to purchase the R7 265 for around $150, a decent price for an entry level GPU that will benefit those who are currently dependent on the GPU portion of an APU. This leads to the question of its performance, and whether this Pitcairn refresh will really benefit a gamer on a tight budget. Hardware Canucks tested it against the two NVIDIA cards closest in price: the GTX 650 Ti Boost, which is almost impossible to find, and the GTX 660 2GB, which is $40 more than the MSRP of the R7 265. The GTX 660 is faster overall, but when you look at the price to performance ratio the R7 265 is the more attractive offering. Of course, with NVIDIA's Maxwell release just around the corner this could change drastically.

If you already caught Ryan's review, you might have missed the short video he just added on the last page.

slides04.jpg

Crowded house

"AMD's R7 265 is meant to reside in the space between the R7 260X and R9 270, though performance is closer to its R9 sibling. Could this make it a perfect budget friendly graphics card?"


AMD Launches Radeon R7 250X at $99 - HD 7770 Redux

Subject: Graphics Cards | February 9, 2014 - 09:00 PM |
Tagged: radeon, R7, hd 7770, amd, 250x

With the exception of the R9 290X, the R9 290, and the R7 260X, AMD's recent branding campaign with the Radeon R7 and R9 series of graphics cards is really just a reorganization and rebranding of existing parts.  When we reviewed the Radeon R9 280X and R9 270X, both were well known entities though this time with lower price tags to sweeten the pot.  

Today, AMD is continuing the process of building the R7 graphics card lineup with the R7 250X.  If you were looking for a new ASIC, maybe one that includes TrueAudio support, you are going to be let down.  The R7 250X is essentially the same part that was released as the HD 7770 in February of 2012: Cape Verde.

02.jpg

AMD calls the R7 250X "the successor" to the Radeon HD 7770, and it's targeting the 1080p gaming landscape at the $99 price point. For those keeping track at home, the Radeon HD 7770 GHz Edition parts are currently selling for the same price. The R7 250X will be available in both 1GB and 2GB variants with a 128-bit GDDR5 memory bus running at 4.5 GHz. The card requires a single 6-pin power connection and we expect a TDP of 95 watts.

Here is a table that details the current product stack of GPUs from AMD under $140.  It's quite crowded as you can see.

|                  | Radeon R7 260X | Radeon R7 260 | Radeon R7 250X | Radeon R7 250 | Radeon R7 240 |
|------------------|----------------|---------------|----------------|---------------|---------------|
| GPU Code name    | Bonaire        | Bonaire       | Cape Verde     | Oland         | Oland         |
| GPU Cores        | 896            | 768           | 640            | 384           | 320           |
| Rated Clock      | 1100 MHz       | 1000 MHz      | 1000 MHz       | 1050 MHz      | 780 MHz       |
| Texture Units    | 56             | 48            | 40             | 24            | 20            |
| ROP Units        | 16             | 16            | 16             | 8             | 8             |
| Memory           | 2GB            | 2GB           | 1 or 2GB       | 1 or 2GB      | 1 or 2GB      |
| Memory Clock     | 6500 MHz       | 6000 MHz      | 4500 MHz       | 4600 MHz      | 4600 MHz      |
| Memory Interface | 128-bit        | 128-bit       | 128-bit        | 128-bit       | 128-bit       |
| Memory Bandwidth | 104 GB/s       | 96 GB/s       | 72 GB/s        | 73.6 GB/s     | 28.8 GB/s     |
| TDP              | 115 watts      | 95 watts      | 95 watts       | 65 watts      | 30 watts      |
| Peak Compute     | 1.97 TFLOPS    | 1.53 TFLOPS   | 1.28 TFLOPS    | 0.806 TFLOPS  | 0.499 TFLOPS  |
| MSRP             | $139           | $109          | $99            | $89           | $79           |

The current competition from NVIDIA rests in the hands of the GeForce GTX 650 and the GTX 650 Ti, a GPU that was itself released in late 2012. Since we already know what performance to expect from the R7 250X because of its pedigree, the AMD-provided numbers below aren't really that surprising.

01.jpg

AMD did leave the GTX 650 Ti out of the graph above... but no matter, we'll be doing our own testing soon enough, once our R7 250X cards find their way into the PC Perspective offices.

The AMD Radeon R7 250X will be available starting today, but if that is the price point you are looking at, you might want to keep an eye out for sales on those remaining Radeon HD 7770 GHz Edition parts.

Source: AMD

Linus Brings SLI and Crossfire Together

Subject: General Tech, Graphics Cards | February 7, 2014 - 12:54 AM |
Tagged: sli, crossfire

I will not even call this a thinly-veiled rant. Linus admits it. To make a point, he assembled a $5000 PC running a pair of NVIDIA GeForce GTX 780 Ti cards and another pair of AMD Radeon R9 290X graphics cards. While Bitcoin mining would likely utilize all four video cards well enough, games will not. Of course, he did not even mention the former application (thankfully).

No, his complaint was about vendor-specific features.

Honestly, he's right. One of the reasons why I am excited about OpenCL (and its WebCL companion) is that it simply does not care about devices. Your host code manages the application but, when the jobs get dirty, it enlists help from an available accelerator by telling it to perform a kernel (think of it like a function) and share the resulting chunk of memory.

This can be an AMD GPU. This can be an NVIDIA GPU. This can be an x86 CPU. This can be an FPGA. If the host has multiple, independent tasks, it can be several of the above (and in any combination). OpenCL really does not care.
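
To make that concrete, here is a minimal host-code sketch of that device-agnosticism: one query with CL_DEVICE_TYPE_ALL walks every accelerator the runtime knows about, whatever the vendor (error checking omitted for brevity):

```c
/* Enumerate every OpenCL device on the system, vendor-agnostic.
   Link against an OpenCL ICD loader (-lOpenCL); errors unchecked. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id plats[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, plats, &nplat);

    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devs[16];
        cl_uint ndev = 0;
        /* TYPE_ALL: AMD GPU, NVIDIA GPU, x86 CPU, FPGA -- one call */
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 16, devs, &ndev);

        for (cl_uint d = 0; d < ndev; d++) {
            char name[256];
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("platform %u, device %u: %s\n", p, d, name);
        }
    }
    return 0;
}
```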

The only limitation is whether tasks can effectively utilize all accelerators present in a machine. This might be the future we are heading for. It is the future I envisioned when I started designing my GPU-accelerated software rendering engine. In that case, I also envisioned the host code being abstracted into JavaScript - because when you jump into platform agnosticism, jump in!

Obviously, to be fair, AMD is very receptive to open platforms. NVIDIA is less so, and they are honest about that, but they conform to standards when doing so benefits their users more than their proprietary alternatives would. I know that point can be taken multiple ways, and several will be hotly debated, but I really cannot find the words to properly narrow it.

Despite the fragmentation in features, there is one thing to be proud of as a PC gamer. You may have different experiences depending on the components you purchase.

But, at least you will always have an experience.

AMD Radeon R7 250X Spotted

Subject: General Tech, Graphics Cards | February 6, 2014 - 05:54 PM |
Tagged: amd, radeon, R7 250X

The AMD Radeon R7 250X has been mentioned on a few different websites over the last day, one of which was tweeted by AMD Radeon Italia. The SKU, which bridges the gap between the R7 250 and the R7 260, is expected to have a graphics processor with 640 Stream Processors, 40 TMUs, and 16 ROPs. It should be a fairly silent launch, with 1GB and 2GB versions appearing soon for an expected price of around 90 Euros, including VAT.

AMD-Sapphire-R7-250X-620x620.jpg

Image Credit: Videocardz.com

The GPU is expected to be based on the 28nm Oland chip design.

While it may seem like a short, twenty Euro jump from the R7 250 to the R7 260, single-precision FLOPS performance actually doubles from around 800 GFLOPS to around 1550 GFLOPS. If that metric is indicative of overall performance, there is quite a large gap to place a product within.

We still do not know official availability.

Source: Videocardz

In a Galaxy far far away?

Subject: General Tech, Graphics Cards | February 6, 2014 - 10:44 AM |
Tagged:

***** Update *****

We have more news and it is good for Galaxy fans.  The newest update states that they will be sticking around!

galaxy2.png

Good news, GPU fans: the rumours that Galaxy's GPU team is leaving the North American market might be somewhat exaggerated, at least according to their PR team.

Streisand.png

This post appeared on Facebook and was quickly taken down again, perhaps for rewording, or perhaps it is a perfect example of the lack of communication that [H]ard|OCP cites in their story. Stay tuned as we update you as soon as we hear more.

box.jpg

Party like it's 2008!

[H]ard|OCP have been following Galaxy's business model closely for the past year, as they have been seeing hints that the reseller just didn't get the North American market. Their concern grew as they tried and failed to contact Galaxy at the end of 2013; emails went unanswered and advertising campaigns seemed to have all but disappeared. Even with this reassurance that Galaxy is not planning to leave the North American market, a lot of what [H] says rings true; given the stock and delivery issues Galaxy seemed to have over the past year, there is something going on behind the scenes. Still, it is not worth abandoning them completely and turning this into a self-fulfilling prophecy; they have been in this market for a long time and may just be getting ready to move forward in a new way. On the other hand, you might be buying a product which will not have warranty support in the future.

"The North American GPU market has been one that is at many times a swirling mass of product. For the last few years though, we have seen the waters calm in that regard as video card board partners have somewhat solidified and we have seen solid players emerge and keep the stage. Except now we seen one exit stage left."


Source: [H]ard|OCP

Focus on Mantle

Subject: General Tech, Graphics Cards | February 5, 2014 - 11:43 AM |
Tagged: gaming, Mantle, amd, battlefield 4

Now that the new Mantle enabled driver has been released, several sites have had a chance to try out the new API to see what effect it has on Battlefield 4. [H]ard|OCP took a stock XFX R9 290X paired with an i7-3770K and tested both single and multiplayer BF4 performance, and the pattern they saw led them to believe Mantle is more effective at relieving CPU bottlenecks than ones caused by the GPU; the performance increases they saw were greater at lower resolutions than at high resolutions. At The Tech Report, another XFX R9 290X was paired with an A10-7850K and an i7-4770K to compare the systems' performance in D3D as well as Mantle. To make the tests even more interesting they also tested D3D with a 780 Ti, which you should fully examine before deciding which performs best. Their findings were in line with [H]ard|OCP's, and they made the observation that Mantle is going to offer the greatest benefits to lower powered systems, with not a lot to be gained by high end systems on the current version of Mantle. Legit Reviews performed similar tests but also brought the Star Swarm demo into the mix, using an R7 260X for their GPU. You can catch all of our coverage by clicking on the Mantle tag.

bf4-framegraph.jpg

"Does AMD's Mantle graphics API deliver on its promise of smoother gaming with lower-spec CPUs? We take an early look at its performance in Battlefield 4."


NitroWare Tests AMD's Photoshop OpenCL Claims

Subject: General Tech, Graphics Cards, Processors | February 4, 2014 - 11:08 PM |
Tagged: photoshop, opencl, Adobe

Adobe has recently enhanced Photoshop CC to accelerate certain filters via OpenCL. AMD contacted NitroWare with this information and claims of an 11-fold performance increase with "Smart Sharpen" on Kaveri, specifically. The computer hardware site decided to test these claims on a Radeon HD 7850 using the test metrics that AMD provided.

Sure enough, he noticed a 16-fold gain in performance. Without OpenCL, the filter's loading bar was on screen for over ten seconds; with it enabled, there was no bar.

Dominic from NitroWare is careful to note that an HD 7850 is significantly higher performance than an APU (barring some weird scenario involving memory transfers or something). This might mark the beginning of Adobe's road to sensible heterogeneous computing outside of video transcoding. Of course, this will also be exciting for AMD. While they cannot keep up with Intel, thread for thread, they are still a heavyweight in terms of total performance. With Photoshop, people might actually notice it.