Manufacturer: AMD

Straddling the R7 and R9 designation

It is often said that the sub-$200 graphics card market is crowded, and it will get even more so over the next 7 days.  Today AMD is announcing a new entry into this field, the Radeon R7 265, which straddles the line between the company's R7 and R9 brands.  The product is much closer in its specifications to the R9 270 than it is to the R7 260X, and as you'll see below, it is built on a very familiar GPU architecture.

slides01.jpg

AMD claims that the new R7 265 brings a 25% increase in performance to the R7 line of graphics cards.  In my testing, that claim holds up, and it also puts the card dangerously close to the R9 270 released late last year.  Much like we saw with the R9 290 compared to the R9 290X, the less expensive but similarly performing card might make the higher end model a less attractive option.

Let's take a quick look at the specifications of the new R7 265.

slides02.jpg

Based on the Pitcairn GPU, a part that made its debut with the Radeon HD 7870 and HD 7850 in early 2012, this card has 1024 stream processors running at 925 MHz, equating to 1.89 TFLOPS of total peak compute power.  Unlike the other R7 cards, the R7 265 has a 256-bit memory bus and will come with 2GB of GDDR5 memory running at 5.6 GHz.  The card requires a single 6-pin power connection but has a peak TDP of 150 watts - pretty much the maximum that the PCI Express slot and one power connector can deliver (75 watts each).  And yes, the R7 265 supports DX 11.2, OpenGL 4.3, and Mantle, just like the rest of the AMD R7/R9 lineup.  It does NOT support TrueAudio or the new XDMA CrossFire engine.
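
If you want to check AMD's math on that figure, peak single-precision compute for these GCN parts is simply the stream processor count times the clock speed times two FLOPs per clock (one fused multiply-add).  Here is a quick sketch of the arithmetic - my own illustration, not an AMD tool:

```python
# Peak single-precision compute for a GCN GPU:
# stream processors x clock x 2 FLOPs per clock (one fused multiply-add).
def peak_tflops(stream_processors, clock_mhz):
    return stream_processors * clock_mhz * 1e6 * 2 / 1e12

print(peak_tflops(1024, 925))   # R7 265  -> ~1.89 TFLOPS
print(peak_tflops(1280, 925))   # R9 270  -> ~2.37 TFLOPS
print(peak_tflops(896, 1100))   # R7 260X -> ~1.97 TFLOPS
```

The same formula reproduces every Peak Compute entry in the table below.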

                 | Radeon R9 270X | Radeon R9 270 | Radeon R7 265 | Radeon R7 260X | Radeon R7 260
GPU Code name    | Pitcairn       | Pitcairn      | Pitcairn      | Bonaire        | Bonaire
GPU Cores        | 1280           | 1280          | 1024          | 896            | 768
Rated Clock      | 1050 MHz       | 925 MHz       | 925 MHz       | 1100 MHz       | 1000 MHz
Texture Units    | 80             | 80            | 64            | 56             | 48
ROP Units        | 32             | 32            | 32            | 16             | 16
Memory           | 2GB            | 2GB           | 2GB           | 2GB            | 2GB
Memory Clock     | 5600 MHz       | 5600 MHz      | 5600 MHz      | 6500 MHz       | 6000 MHz
Memory Interface | 256-bit        | 256-bit       | 256-bit       | 128-bit        | 128-bit
Memory Bandwidth | 179 GB/s       | 179 GB/s      | 179 GB/s      | 104 GB/s       | 96 GB/s
TDP              | 180 watts      | 150 watts     | 150 watts     | 115 watts      | 95 watts
Peak Compute     | 2.69 TFLOPS    | 2.37 TFLOPS   | 1.89 TFLOPS   | 1.97 TFLOPS    | 1.53 TFLOPS
MSRP             | $199           | $179          | $149          | $119           | $109

The table above compares the current AMD product lineup, ranging from the R9 270X to the R7 260, with the R7 265 sitting directly in the middle.  Some of these specifications make the 265 a much closer relation to the R9 270/270X cards than to anything below it, even though the R7 265 has four fewer compute units (256 fewer stream processors) than the R9 270.  The biggest differentiator is the 256-bit memory bus that persists from the R9 parts; the available memory bandwidth of 179 GB/s is 72% higher than the 104 GB/s of the R7 260X!  That alone should drastically improve performance compared to the rest of the R7 products.  Pay no mind to the peak compute rating of the 260X being higher than that of the R7 265; in real world testing the 265 consistently came out ahead.
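
For the curious, that bandwidth figure falls straight out of the bus width and the effective memory clock.  Another quick sketch of the arithmetic:

```python
# Memory bandwidth: effective GDDR5 clock (MHz) x bus width (bits) / 8 bits per byte.
def bandwidth_gb_s(effective_clock_mhz, bus_width_bits):
    return effective_clock_mhz * 1e6 * bus_width_bits / 8 / 1e9

r7_265  = bandwidth_gb_s(5600, 256)  # ~179.2 GB/s
r7_260x = bandwidth_gb_s(6500, 128)  # ~104.0 GB/s
print(r7_265 / r7_260x - 1)          # ~0.72 - the 72% advantage quoted above
```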

Continue reading our review of the new AMD Radeon R7 265 2GB Graphics Card!!

AMD Launches Radeon R7 250X at $99 - HD 7770 Redux

Subject: Graphics Cards | February 10, 2014 - 12:00 AM
Tagged: radeon, R7, hd 7770, amd, 250x

With the exception of the R9 290X, the R9 290, and the R7 260X, AMD's recent branding campaign with the Radeon R7 and R9 series of graphics cards is really just a reorganization and rebranding of existing parts.  When we reviewed the Radeon R9 280X and R9 270X, both were well known entities, though this time with lower price tags to sweeten the pot.

Today, AMD is continuing the process of building the R7 graphics card lineup with the R7 250X.  If you were looking for a new ASIC, maybe one that includes TrueAudio support, you are going to be let down.  The R7 250X is essentially the same part that was released as the HD 7770 in February of 2012: Cape Verde.

02.jpg

AMD calls the R7 250X "the successor" to the Radeon HD 7770, and it's targeting the 1080p gaming landscape at the $99 price point.  For those keeping track at home, the Radeon HD 7770 GHz Edition parts are currently selling for the same price.  The R7 250X will be available in both 1GB and 2GB variants with a 128-bit GDDR5 memory bus running at 4.5 GHz.  The card requires a single 6-pin power connection and we expect a TDP of 95 watts.

Here is a table that details the current product stack of GPUs from AMD under $140.  It's quite crowded as you can see.

                 | Radeon R7 260X | Radeon R7 260 | Radeon R7 250X | Radeon R7 250 | Radeon R7 240
GPU Code name    | Bonaire        | Bonaire       | Cape Verde     | Oland         | Oland
GPU Cores        | 896            | 768           | 640            | 384           | 320
Rated Clock      | 1100 MHz       | 1000 MHz      | 1000 MHz       | 1050 MHz      | 780 MHz
Texture Units    | 56             | 48            | 40             | 24            | 20
ROP Units        | 16             | 16            | 16             | 8             | 8
Memory           | 2GB            | 2GB           | 1 or 2GB       | 1 or 2GB      | 1 or 2GB
Memory Clock     | 6500 MHz       | 6000 MHz      | 4500 MHz       | 4600 MHz      | 4600 MHz
Memory Interface | 128-bit        | 128-bit       | 128-bit        | 128-bit       | 128-bit
Memory Bandwidth | 104 GB/s       | 96 GB/s       | 72 GB/s        | 73.6 GB/s     | 28.8 GB/s
TDP              | 115 watts      | 95 watts      | 95 watts       | 65 watts      | 30 watts
Peak Compute     | 1.97 TFLOPS    | 1.53 TFLOPS   | 1.28 TFLOPS    | 0.806 TFLOPS  | 0.499 TFLOPS
MSRP             | $139           | $109          | $99            | $89           | $79

The current competition from NVIDIA rests in the hands of the GeForce GTX 650 and the GTX 650 Ti, a GPU that was itself released in late 2012.  Since we already know what performance to expect from the R7 250X because of its pedigree, the AMD-provided numbers below aren't really that surprising.

01.jpg

AMD did leave out the GTX 650 Ti from the graph above... but no matter, we'll be doing our own testing soon enough, once our R7 250X cards find their way into the PC Perspective offices.

The AMD Radeon R7 250X will be available starting today, but if that is the price point you are looking at, you might want to keep an eye out for sales on those remaining Radeon HD 7770 GHz Edition parts.

Source: AMD

Linus Brings SLI and Crossfire Together

Subject: General Tech, Graphics Cards | February 7, 2014 - 03:54 AM
Tagged: sli, crossfire

I will not even call this a thinly-veiled rant. Linus admits it. To make a point, he assembled a $5000 PC running a pair of NVIDIA GeForce GTX 780 Ti GPUs alongside a pair of AMD Radeon R9 290X graphics cards. While Bitcoin mining would likely utilize all four video cards well enough, games will not. Of course, he did not even mention the former application (thankfully).

No, his complaint was about vendor-specific features.

Honestly, he's right. One of the reasons why I am excited about OpenCL (and its WebCL companion) is that it simply does not care about devices. Your host code manages the application but, when the jobs get dirty, it enlists help from an available accelerator by telling it to perform a kernel (think of it like a function) and share the resulting chunk of memory.

This can be an AMD GPU. This can be an NVIDIA GPU. This can be an x86 CPU. This can be an FPGA. If the host has multiple, independent tasks, it can be several of the above (and in any combination). OpenCL really does not care.
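
To make that model concrete, here is a minimal sketch of the pattern using the pyopencl bindings (the kernel and variable names are mine, purely illustrative). The same host code runs unmodified whether the context ends up wrapping an AMD GPU, an NVIDIA GPU, or a CPU:

```python
import numpy as np
import pyopencl as cl

# create_some_context() grabs whatever OpenCL device is available - AMD GPU,
# NVIDIA GPU, x86 CPU, even an FPGA with the right vendor platform installed.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.arange(1024, dtype=np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel (think of it like a function) runs once per element on the device.
prg = cl.Program(ctx, """
    __kernel void square(__global const float *a, __global float *out) {
        int i = get_global_id(0);
        out[i] = a[i] * a[i];
    }
""").build()
prg.square(queue, a.shape, None, a_buf, out_buf)

# Share the resulting chunk of memory back with the host.
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print(result[:4])  # [0. 1. 4. 9.]
```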

The only limitation is whether tasks can effectively utilize all of the accelerators present in a machine. This might be the future we are heading for. This is the future I envisioned when I started designing my GPU-accelerated software rendering engine. In that case, I also envisioned the host code being abstracted into JavaScript - because when you jump into platform agnosticism, jump in!

Obviously, to be fair, AMD is very receptive to open platforms. NVIDIA is less so, and they are honest about that, but they conform to standards when doing so benefits their users more than their proprietary alternatives would. I know that point can be taken multiple ways, and several will be hotly debated, but I really cannot find the words to properly narrow it.

Despite the fragmentation in features, there is one thing to be proud of as a PC gamer. You may have different experiences depending on the components you purchase.

But, at least you will always have an experience.

AMD Radeon R7 250X Spotted

Subject: General Tech, Graphics Cards | February 6, 2014 - 08:54 PM
Tagged: amd, radeon, R7 250X

The AMD Radeon R7 250X has been mentioned on a few different websites over the last day, one of which was tweeted by AMD Radeon Italia. The SKU, which bridges the gap between the R7 250 and the R7 260, is expected to have a graphics processor with 640 Stream Processors, 40 TMUs, and 16 ROPs. It should be a fairly quiet launch, with 1GB and 2GB versions appearing soon for an expected price of around 90 Euros, including VAT.

AMD-Sapphire-R7-250X-620x620.jpg

Image Credit: Videocardz.com

The GPU is expected to be based on the 28nm Cape Verde chip design; the 640 stream processor count is a match for the HD 7770, not the smaller Oland.

While it may seem like a short, twenty Euro jump from the R7 250 to the R7 260, single-precision performance actually doubles, from around 800 GFLOPS to around 1550 GFLOPS. If that metric is indicative of overall performance, there is quite a large gap to place a product within.

We still do not know official availability.

Source: Videocardz

In a Galaxy far far away?

Subject: General Tech, Graphics Cards | February 6, 2014 - 01:44 PM

***** UPDATE *****

We have more news and it is good for Galaxy fans.  The newest update states that they will be sticking around!

galaxy2.png

Good news, GPU fans: the rumours that Galaxy's GPU team is leaving the North American market might be somewhat exaggerated, at least according to their PR team.

Streisand.png

This post appeared on Facebook and was quickly taken down again, perhaps for rewording, or perhaps it is a perfect example of the lack of communication that [H]ard|OCP cites in their story.  Stay tuned as we update you as soon as we hear more.

box.jpg

Party like it's 2008!

[H]ard|OCP have been following Galaxy's business model closely for the past year, as they have been seeing hints that the reseller just didn't get the North American market.  Their concern grew as they tried and failed to contact Galaxy at the end of 2013; emails went unanswered and advertising campaigns seemed to have all but disappeared.  Even with this reassurance that Galaxy is not planning to leave the North American market, a lot of what [H] says rings true; given the stock and delivery issues Galaxy seemed to have over the past year, there is something going on behind the scenes.  Still, it is not worth abandoning them completely and turning this into a self-fulfilling prophecy; they have been in this market for a long time and may just be getting ready to move forward in a new way.  On the other hand, you might be buying a product which will not have warranty support in the future.

"The North American GPU market has been one that is at many times a swirling mass of product. For the last few years though, we have seen the waters calm in that regard as video card board partners have somewhat solidified and we have seen solid players emerge and keep the stage. Except now we seen one exit stage left."

Here is some more Tech News from around the web:

Tech Talk

Source: [H]ard|OCP

Focus on Mantle

Subject: General Tech, Graphics Cards | February 5, 2014 - 02:43 PM
Tagged: gaming, Mantle, amd, battlefield 4

Now that the new Mantle-enabled driver has been released, several sites have had a chance to try out the new API to see what effect it has on Battlefield 4.  [H]ard|OCP took a stock XFX R9 290X paired with an i7-3770K and tested both single and multiplayer BF4 performance; the pattern they saw led them to believe Mantle is more effective at relieving CPU bottlenecks than ones caused by the GPU.  The performance increases they saw were greater at lower resolutions than at high resolutions.  At The Tech Report, another XFX R9 290X was paired with an A10-7850K and an i7-4770K, and the systems' performance was compared in D3D as well as Mantle.  To make the tests even more interesting, they also tested D3D with a GTX 780 Ti, which you should fully examine before deciding which performs the best.  Their findings were in line with [H]ard|OCP's, and they made the observation that Mantle is going to offer the greatest benefits to lower powered systems, with not a lot to be gained by high end systems with the current version of Mantle.  Legit Reviews performed similar tests but also brought the Star Swarm demo into the mix, using an R7 260X for their GPU.  You can catch all of our coverage by clicking on the Mantle tag.

bf4-framegraph.jpg

"Does AMD's Mantle graphics API deliver on its promise of smoother gaming with lower-spec CPUs? We take an early look at its performance in Battlefield 4."

Here is some more Tech News from around the web:

Gaming

Manufacturer: PC Perspective
Tagged: Mantle, interview, amd

What Mantle signifies about GPU architectures

Mantle is a very interesting concept. From the various keynote speeches, it sounds like the API is being designed to address the current state (and trajectory) of graphics processors. GPUs are generalized and highly parallel computation devices which are assisted by a little bit of specialized silicon, when appropriate. The vendors have even settled on standards, such as IEEE-754 floating point numbers, which means that the driver has much less reason to shield developers from the underlying architectures.

Still, Mantle is currently a private technology for an unknown number of developers. Without a public SDK, or anything beyond the half-dozen keynotes, we can only speculate on its specific attributes. I, for one, have technical questions and hunches which linger unanswered or unconfirmed, probably until the API is suitable for public development.

Or, until we just... ask AMD.

amd-mantle-interview-01.jpg

The responses came from Guennadi Riguer, the chief architect for Mantle. In them, he discusses the API's usage as a computation language, the future of the rendering pipeline, and whether there will be a day when Crossfire-like benefits can come from leaving an older Mantle-capable GPU in your system when purchasing a new, also Mantle-supporting one.

Q: Mantle's shading language is said to be compatible with HLSL. How will optimizations made for DirectX, such as tweaks during shader compilation, carry over to Mantle? How much tuning will (and will not) be shared between the two APIs?

[Guennadi] The current Mantle solution relies on the same shader generation path that games use for DirectX, and includes an open-source component for translating DirectX shaders to a Mantle-accepted intermediate language (IL). This enables developers to quickly develop a Mantle code path without any changes to the shaders. This was one of the strongest requests we got from our ISV partners when we were developing Mantle.

AMD-mantle-dx-hlsl-GSA_screen_shot.jpg

Follow-Up: What does this mean, specifically, in terms of driver optimizations? Would AMD, or anyone else who supports Mantle, be able to re-use the effort they spent on tuning their shader compilers (and so forth) for DirectX?

[Guennadi] With the current shader compilation strategy in Mantle, the developers can directly leverage DirectX shader optimization efforts in Mantle. They would use the same front-end HLSL compiler for DX and Mantle, and inside of the DX and Mantle drivers we share the shader compiler that generates the shader code our hardware understands.

Read on to see the rest of the interview!

NitroWare Tests AMD's Photoshop OpenCL Claims

Subject: General Tech, Graphics Cards, Processors | February 5, 2014 - 02:08 AM
Tagged: photoshop, opencl, Adobe

Adobe has recently enhanced Photoshop CC to accelerate certain filters via OpenCL. AMD contacted NitroWare with this information and claims of 11-fold performance increases with "Smart Sharpen" on Kaveri, specifically. The computer hardware site decided to test these claims on a Radeon HD 7850 using the test metrics that AMD provided them.

Sure enough, he noticed a 16-fold gain in performance. Without OpenCL, the filter's loading bar was on screen for over ten seconds; with it enabled, there was no bar.

Dominic from NitroWare is careful to note that an HD 7850 is significantly higher performance than an APU (barring some weird scenario involving memory transfers or something). This might mark the beginning of Adobe's road to sensible heterogeneous computing outside of video transcoding. Of course, this will also be exciting for AMD. While they cannot keep up with Intel, thread for thread, they are still a heavyweight in terms of total performance. With Photoshop, people might actually notice it.

AMD Catalyst 14.1 Beta Available Now. Now, Chewie, NOW!

Subject: General Tech, Graphics Cards | February 1, 2014 - 11:29 PM
Tagged: Mantle, BF4, amd

AMD has released the Catalyst 14.1 Beta driver (even for Linux), but you should first read Ryan's review. This is a little rougher than what he expects of a beta from AMD. We are talking about crashes to desktop and freezes while loading a map on a single-GPU configuration - and Crossfire is a complete wash in his experience (although AMD acknowledges the latter in their release notes). According to AMD, there is even the possibility that the Mantle version of Battlefield 4 will render with your APU and ignore your dedicated graphics.

amd-bf4-mantle.jpg

If you are determined to try Catalyst 14.1, however, it does make a first step into the promise of Mantle. Some situations show slightly lower performance than DirectX 11, albeit with a higher minimum framerate, while other results impress with double-digit percentage gains.

Multiplayer in BF4, where the CPU is more heavily utilized, seems to benefit the most (thankfully).

If you understand the risk (in terms of annoyance and frustration), and still want to give it a try, pick up the driver from AMD's support website. If not? Give it a little more time for AMD to whack-a-bug. At some point, there should be truly free performance waiting for you.

Press release after the break!

Source: AMD
Manufacturer: AMD

A quick look at performance results

Late last week, EA and DICE released the long awaited patch for Battlefield 4 that enables support for the Mantle renderer.  This new API technology was introduced by AMD back in September, but unfortunately AMD wasn't quite ready for the patch's release with their Catalyst 14.1 beta driver.  I wrote a short article that previewed the new driver's features, its expected performance with the Mantle version of BF4, and commentary about the current state of Mantle.  You should definitely read that as a primer before continuing if you haven't yet.

Today, after really just a few short hours with a usable driver, I have only limited results.  Still, I know that you, our readers, clamor for ANY information on the topic, so I thought I would share what we have thus far.

Initial Considerations

As I mentioned in the previous story, the Mantle version of Battlefield 4 has the biggest potential to show advantages in situations where the game is more CPU limited.  AMD calls this the "low hanging fruit" for this early release of Mantle and claims that further optimizations will come, especially for GPU-bound scenarios.  That dependency on CPU limitations puts some non-standard requirements on our ability to showcase Mantle's performance capabilities.

bf42.jpg

For example, in the BF4 single player campaign, the level of the game and even the section of that level can show drastic swings in Mantle's capabilities.  Multiplayer matches will show more consistent CPU utilization (and thus could be improved by Mantle more reliably), though testing those levels in a repeatable, semi-scientific method is much more difficult.  And, as you'll see in our early results, I even found a couple of instances in which the Mantle API version of BF4 ran a smidge slower than the DX11 instance.

For our testing, we assembled two systems that differed in CPU performance in order to simulate the range of processors installed in consumers' PCs.  Our standard GPU test bed includes a Core i7-3960X Sandy Bridge-E processor specifically to remove the CPU as a bottleneck, and it has been included here today.  We added a system based on the AMD A10-7850K Kaveri APU, which presents a more processor-limited (especially per-thread) system overall and should help showcase Mantle's benefits more easily.

Continue reading our early look at the performance advantages of AMD Mantle on Battlefield 4!!