In a Galaxy far far away?

Subject: General Tech, Graphics Cards | February 6, 2014 - 01:44 PM |
Tagged:

***** UPDATE *****

We have more news and it is good for Galaxy fans.  The newest update states that they will be sticking around!

galaxy2.png

Good news, GPU fans: the rumours that Galaxy's GPU team is leaving the North American market might be somewhat exaggerated, at least according to their PR team. 

Streisand.png

This post appeared on Facebook and was quickly taken off again, perhaps for rewording or perhaps it is a perfect example of the lack of communication that [H]ard|OCP cites in their story.  Stay tuned as we update you as soon as we hear more.

box.jpg

Party like it's 2008!

[H]ard|OCP have been following Galaxy's business model closely for the past year, as they have been seeing hints that the reseller just didn't get the North American market.  Their concern grew as they tried and failed to contact Galaxy at the end of 2013; emails went unanswered and advertising campaigns seemed to have all but disappeared.  Even with this reassurance that Galaxy is not planning to leave the North American market, a lot of what [H] says rings true; with the stock and delivery issues Galaxy seemed to have over the past year, there is clearly something going on behind the scenes.  Still, it is not worth abandoning them completely and turning this into a self-fulfilling prophecy; they have been in this market for a long time and may just be getting ready to move forward in a new way.  On the other hand, you might be buying a product which will not have warranty support in the future.

"The North American GPU market has been one that is at many times a swirling mass of product. For the last few years though, we have seen the waters calm in that regard as video card board partners have somewhat solidified and we have seen solid players emerge and keep the stage. Except now we seen one exit stage left."

Here is some more Tech News from around the web:

Tech Talk

Source: [H]ard|OCP

Focus on Mantle

Subject: General Tech, Graphics Cards | February 5, 2014 - 02:43 PM |
Tagged: gaming, Mantle, amd, battlefield 4

Now that the new Mantle-enabled driver has been released, several sites have had a chance to try out the new API to see what effect it has on Battlefield 4.  [H]ard|OCP took a stock XFX R9 290X paired with an i7-3770K and tested both single and multiplayer BF4 performance; the pattern they saw led them to believe Mantle is more effective at relieving CPU bottlenecks than ones caused by the GPU.  The performance increases they saw were greater at lower resolutions than at high resolutions.  At The Tech Report, another XFX R9 290X was paired with an A10-7850K and an i7-4770K, and the systems' performance was compared in D3D as well as Mantle.  To make the tests even more interesting they also tested D3D with a 780 Ti, so examine the full results before deciding which performs best.  Their findings were in line with [H]ard|OCP's, and they made the observation that Mantle is going to offer the greatest benefits to lower powered systems, with not a lot to be gained by high end systems on the current version of Mantle.  Legit Reviews performed similar tests but also brought the Star Swarm demo into the mix, using an R7 260X for their GPU.  You can catch all of our coverage by clicking on the Mantle tag.
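The pattern all three sites saw falls out of a simple mental model: if the CPU and GPU work on frames in parallel, each frame costs whichever stage is slower, so trimming CPU-side API overhead only helps when the CPU is the bottleneck. Here is a toy sketch of that reasoning (our own simplification with invented numbers, not anyone's benchmark data):

```python
# Toy frame-time model: a frame costs the slower of the CPU and GPU
# stages, so cutting CPU-side API overhead only matters when the CPU
# is the limiting factor (i.e. at lower resolutions / weaker CPUs).

def fps(cpu_ms, gpu_ms):
    """Frames per second when CPU and GPU work overlap per frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

def mantle_gain(cpu_ms, gpu_ms, overhead_cut=0.4):
    """Hypothetical % gain from trimming 40% of the CPU-side cost."""
    before = fps(cpu_ms, gpu_ms)
    after = fps(cpu_ms * (1.0 - overhead_cut), gpu_ms)
    return (after / before - 1.0) * 100.0

# Low resolution: the GPU has an easy job, so the CPU limits the rate.
print(f"CPU-bound gain: {mantle_gain(cpu_ms=12.0, gpu_ms=6.0):.0f}%")
# High resolution: the GPU is saturated, so the same CPU savings vanish.
print(f"GPU-bound gain: {mantle_gain(cpu_ms=12.0, gpu_ms=25.0):.0f}%")
```

The 40% overhead cut is an arbitrary illustration, but the shape of the result matches what the reviews report: big gains when CPU-bound, next to nothing when GPU-bound.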

bf4-framegraph.jpg

"Does AMD's Mantle graphics API deliver on its promise of smoother gaming with lower-spec CPUs? We take an early look at its performance in Battlefield 4."

Here is some more Tech News from around the web:

Gaming

Manufacturer: PC Perspective
Tagged: Mantle, interview, amd

What Mantle signifies about GPU architectures

Mantle is a very interesting concept. From the various keynote speeches, it sounds like the API is being designed to address the current state (and trajectory) of graphics processors. GPUs are generalized and highly parallel computation devices which are assisted by a little bit of specialized silicon, when appropriate. The vendors have even settled on standards, such as IEEE-754 floating point numbers, which means that the driver has much less reason to shield developers from the underlying architectures.

Still, Mantle is currently a private technology for an unknown number of developers. Without a public SDK, or anything beyond the half-dozen keynotes, we can only speculate on its specific attributes. I, for one, have technical questions and hunches which linger unanswered or unconfirmed, probably until the API is suitable for public development.

Or, until we just... ask AMD.

amd-mantle-interview-01.jpg

Our response came from Guennadi Riguer, the chief architect for Mantle. In it, he discusses the API's usage as a computation language, the future of the rendering pipeline, and whether there will be a day where Crossfire-like benefits can occur by leaving an older Mantle-capable GPU in your system when purchasing a new, also Mantle-supporting one.

Q: Mantle's shading language is said to be compatible with HLSL. How will optimizations made for DirectX, such as tweaks during shader compilation, carry over to Mantle? How much tuning will (and will not) be shared between the two APIs?

[Guennadi] The current Mantle solution relies on the same shader generation path that DirectX games use and includes an open-source component for translating DirectX shaders to Mantle's accepted intermediate language (IL). This enables developers to quickly develop a Mantle code path without any changes to the shaders. This was one of the strongest requests we got from our ISV partners when we were developing Mantle.

AMD-mantle-dx-hlsl-GSA_screen_shot.jpg

Follow-Up: What does this mean, specifically, in terms of driver optimizations? Would AMD, or anyone else who supports Mantle, be able to re-use the effort they spent on tuning their shader compilers (and so forth) for DirectX?

[Guennadi] With the current shader compilation strategy in Mantle, the developers can directly leverage DirectX shader optimization efforts in Mantle. They would use the same front-end HLSL compiler for DX and Mantle, and inside of the DX and Mantle drivers we share the shader compiler that generates the shader code our hardware understands.
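The answer above describes a pipeline where the two APIs diverge only in the middle. As a conceptual sketch (our own illustration with made-up function names, not AMD's actual toolchain), the flow looks something like this, which is why front-end and back-end tuning carries over:

```python
# Conceptual sketch of the shared shader pipeline described above:
# one HLSL front-end and one in-driver hardware back-end are shared;
# only the IL-translation step differs between the DX and Mantle paths.

def hlsl_front_end(source):
    # Shared HLSL compiler: parsing and optimization happen once,
    # benefiting both APIs.
    return f"optimized-bytecode({source})"

def dx_to_mantle_il(dx_bytecode):
    # The open-source DX-shader-to-Mantle-IL translator Guennadi mentions.
    return f"mantle-il({dx_bytecode})"

def hardware_back_end(il):
    # Shared in-driver compiler emitting code the GCN hardware understands.
    return f"gcn-isa({il})"

def compile_for_dx(source):
    return hardware_back_end(hlsl_front_end(source))

def compile_for_mantle(source):
    return hardware_back_end(dx_to_mantle_il(hlsl_front_end(source)))

shader = "my_pixel_shader"
print(compile_for_dx(shader))
print(compile_for_mantle(shader))
```

Because optimization effort lives in the shared stages, a tweak that helps the DX path lands in the Mantle path for free, which is the point of the answer.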

Read on to see the rest of the interview!

NitroWare Tests AMD's Photoshop OpenCL Claims

Subject: General Tech, Graphics Cards, Processors | February 5, 2014 - 02:08 AM |
Tagged: photoshop, opencl, Adobe

Adobe has recently enhanced Photoshop CC to accelerate certain filters via OpenCL. AMD contacted NitroWare with this information and claims of 11-fold performance increases with "Smart Sharpen" on Kaveri, specifically. The computer hardware site decided to test these claims on a Radeon HD 7850 using the test metrics that AMD provided them.

Sure enough, he noticed a 16-fold gain in performance. Without OpenCL, the filter's loading bar was on screen for over ten seconds; with it enabled, there was no bar.

Dominic from NitroWare is careful to note that an HD 7850 offers significantly higher performance than an APU (barring some weird scenario involving memory transfers or something). This might mark the beginning of Adobe's road to sensible heterogeneous computing outside of video transcoding. Of course, this will also be exciting for AMD. While they cannot keep up with Intel, thread for thread, they are still a heavyweight in terms of total performance. With Photoshop, people might actually notice it.

AMD Catalyst 14.1 Beta Available Now. Now, Chewie, NOW!

Subject: General Tech, Graphics Cards | February 1, 2014 - 11:29 PM |
Tagged: Mantle, BF4, amd

AMD has released the Catalyst 14.1 Beta driver (even for Linux) but you should, first, read Ryan's review. This is a little less than what he expects in a Beta from AMD. We are talking about crashes to desktop and freezes while loading a map on a single-GPU configuration - and Crossfire is a complete wash in his experience (although AMD acknowledges the latter in their release notes). According to AMD, there is even the possibility that the Mantle version of Battlefield 4 will render with your APU and ignore your dedicated graphics.

amd-bf4-mantle.jpg

If you are determined to try Catalyst 14.1, however, it does make a first step into the promise of Mantle. Some situations show slightly lower performance than DirectX 11, albeit with a higher minimum framerate, while other results impress with double-digit percentage gains.

Multiplayer in BF4, where the CPU is more heavily utilized, seems to benefit the most (thankfully).
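"Slightly lower performance but a higher minimum framerate" sounds contradictory until you compute both metrics from per-frame times. A minimal sketch (invented frame times, not our benchmark data) shows how a steadier renderer can lose on average while winning where it counts:

```python
# Average fps comes from the mean frame time; minimum fps comes from
# the single worst frame. A renderer with one bad hitch can post a
# higher average yet a far lower minimum than a steadier one.

def fps_stats(frame_times_ms):
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)
    worst_ms = max(frame_times_ms)
    return 1000.0 / avg_ms, 1000.0 / worst_ms  # (average fps, minimum fps)

d3d11  = [14.0, 14.0, 14.0, 40.0]  # fast on average, one nasty hitch
mantle = [21.0, 21.0, 21.0, 23.0]  # a touch slower, but far steadier

for name, times in (("D3D11", d3d11), ("Mantle", mantle)):
    avg_fps, min_fps = fps_stats(times)
    print(f"{name}: avg {avg_fps:.0f} fps, min {min_fps:.0f} fps")
```

With these toy numbers the D3D11 run wins the average while the Mantle run nearly doubles the minimum, which is the shape of result the driver notes describe.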

If you understand the risk (in terms of annoyance and frustration), and still want to give it a try, pick up the driver from AMD's support website. If not? Give it a little more time for AMD to whack-a-bug. At some point, there should be truly free performance waiting for you.

Press release after the break!

Source: AMD
Author:
Manufacturer: AMD

A quick look at performance results

Late last week, EA and DICE released the long-awaited patch for Battlefield 4 that enables support for the Mantle renderer.  This new API technology was introduced by AMD back in September.  Unfortunately, AMD wasn't quite ready for its release with their Catalyst 14.1 beta driver.  I wrote a short article that previewed the new driver's features, its expected performance with the Mantle version of BF4, and commentary about the current state of Mantle.  You should definitely read that as a primer before continuing if you haven't yet.  

Today, after just a few short hours with a usable driver, I have only limited results.  Still, I know that you, our readers, clamor for ANY information on the topic, so I thought I would share what we have thus far.

Initial Considerations

As I mentioned in the previous story, the Mantle version of Battlefield 4 has the biggest potential to show advantages in situations where the game is more CPU limited.  AMD calls this the "low hanging fruit" for this early release of Mantle and claims that further optimizations will come, especially for GPU-bound scenarios.  That dependency on CPU limitations puts some non-standard requirements on our ability to showcase Mantle's performance capabilities.

bf42.jpg

For example, the level of the game, and even the section of that level in the BF4 single player campaign, can show drastic swings in Mantle's capabilities.  Multiplayer matches will also show more consistent CPU utilization (and thus could be improved by Mantle), though testing those levels in a repeatable, semi-scientific method is much more difficult.  And, as you'll see in our early results, I even found a couple of instances in which the Mantle API version of BF4 ran a smidge slower than the DX11 instance.  

For our testing, we compiled two systems that differed in CPU performance in order to simulate the range of processors installed within consumers' PCs.  Our standard GPU test bed includes a Core i7-3960X Sandy Bridge-E processor specifically to remove the CPU as a bottleneck and that has been included here today.  We added in a system based on the AMD A10-7850K Kaveri APU which presents a more processor-limited (especially per-thread) system, overall, and should help showcase Mantle benefits more easily.

Continue reading our early look at the performance advantages of AMD Mantle on Battlefield 4!!

Video Perspective: Free to Play Games on the A10-7850K vs. Intel Core i3 + GeForce GT 630

Subject: Graphics Cards, Processors | January 31, 2014 - 04:36 PM |
Tagged: 7850k, A10-7850K, amd, APU, gt 630, Intel, nvidia, video

As a follow-up to our first video posted earlier in the week, which looked at the A10-7850K and the GT 630 from NVIDIA in five standard games, this time we compare the A10-7850K APU against the same combination of Intel and NVIDIA hardware in five of 2013's top free to play games.

UPDATE: I've had some questions about WHICH of the GT 630 SKUs were used in this testing.  Our GT 630 was this EVGA model that is based on 96 CUDA cores and a 128-bit DDR3 memory interface.  You can see a comparison of the three current GT 630 options on NVIDIA's website here.

If you are looking for more information on AMD's Kaveri APUs, you should check out my review of the A8-7600 part as well as our testing of Dual Graphics with the A8-7600 and a Radeon R7 250 card.

Need the Double D? XFX has the R9 290X for you!

Subject: Graphics Cards | January 30, 2014 - 03:15 PM |
Tagged: xfx, double d, R9 290X

The only thing more fun than an XFX Double Dissipation R9 290X is two of them in CrossFire, which is exactly what [H]ard|OCP just tested.  These cards sport the familiar custom cooler, though they are not overclocked, nor is [H] testing overclocking in this review; they will revisit the card in the future to do exactly that.  This review is about the CrossFire performance of these cards straight out of the box, and it is rather impressive.  When [H] tested 4K performance they could feel the frame pacing improvements the new driver brings, and they saw these cards outperform SLI'd GTX 780 Ti cards in every test, though not always by a huge margin.  The current selling price of these cards is about $100 above MSRP, but they still come in cheaper than the current NVIDIA card; these particular cards really show off what Hawaii is capable of.

1390541180GVyEsgrEO9_1_1.jpg

"Take two custom XFX R9 290X Double Dissipation Edition video cards, enable CrossFire, and let your jaw hit the floor. We will test this combination against the competition in a triple-display Eyefinity setup as well as 4K Ultra HD display gaming. We will find out if custom cards hold any advantage over the reference designed R9 290X."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP
Author:
Manufacturer: AMD

A troubled launch to be sure

AMD has released some important new drivers with drastic feature additions over the past year.  Remember back in August of 2013 when Frame Pacing was first revealed?  Today's Catalyst 14.1 beta release will actually complete the goals that AMD set for itself in early 2013 in regards to introducing (nearly) complete Frame Pacing technology integration for non-XDMA GPUs, while also adding support for Mantle and HSA capability.

Frame Pacing Phase 2 and HSA Support

When AMD released the first frame pacing capable beta driver in August of 2013, it added support to existing GCN designs (HD 7000-series and a few older generations) at resolutions of 2560x1600 and below.  While that definitely addressed a lot of the market, the fact was that CrossFire users were also amongst the most likely to have Eyefinity (3+ monitors spanned for gaming) or even 4K displays (quickly dropping in price).  Neither of those advanced display options was supported with any Catalyst frame pacing technology.

That changes today as Phase 2 of the AMD Frame Pacing feature has finally been implemented for products that do not feature the XDMA technology (found in Hawaii GPUs for example).  That includes HD 7000-series GPUs, the R9 280X and 270X cards, as well as older generation products and Dual Graphics hardware combinations such as the new Kaveri APU and R7 250.  I have already tested Kaveri and the R7 250 in fact, and you can read about its scaling and experience improvements right here.  That means that users of the HD 7970, R9 280X, etc., as well as those of you with HD 7990 dual-GPU cards, will finally be able to utilize the power of both GPUs in your system with 4K displays and Eyefinity configurations!

BF3_5760x1080_PLOT.png

This is finally fixed!!

As of this writing I haven’t had time to do more testing (other than the Dual Graphics article linked above) to demonstrate the potential benefits of this Phase 2 update, but we’ll be targeting it later in the week.  For now, it appears that you’ll be able to get essentially the same performance and pacing capabilities on the Tahiti-based GPUs as you can with Hawaii (R9 290X and R9 290). 
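What frame pacing actually measures is the evenness of frame delivery: two configurations averaging the same fps can feel very different if one alternates long and short frames. A minimal sketch of the idea (invented timestamps, not FCAT capture data) scores that evenness:

```python
# Frame pacing boils down to the spread of frame-to-frame intervals.
# An unpaced multi-GPU setup can deliver "runt" frames back-to-back,
# matching the paced setup's average fps while feeling much worse.

def frame_deltas(timestamps_ms):
    """Intervals between consecutive frame-delivery timestamps."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def pacing_stddev(timestamps_ms):
    """Standard deviation of frame intervals: 0 means perfect pacing."""
    deltas = frame_deltas(timestamps_ms)
    mean = sum(deltas) / len(deltas)
    var = sum((d - mean) ** 2 for d in deltas) / len(deltas)
    return var ** 0.5

paced   = [0, 16, 32, 48, 64, 80]  # even 16 ms cadence
unpaced = [0, 2, 32, 34, 64, 66]   # same span of time, alternating runts

print(f"paced stddev:   {pacing_stddev(paced):.1f} ms")
print(f"unpaced stddev: {pacing_stddev(unpaced):.1f} ms")
```

Both sequences deliver five frames over the same 80 ms span, but the unpaced one bunches them in pairs; the Phase 2 work is about driving that spread down at Eyefinity and 4K resolutions, where it previously wasn't addressed at all.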

Catalyst 14.1 beta is also the first public driver to add support for HSA technology, allowing owners of the new Kaveri APU to take advantage of the appropriately enabled applications like LibreOffice and the handful of Adobe apps.  AMD has since let us know that this feature DID NOT make it into the public release of Catalyst 14.1.

The First Mantle Ready Driver (sort of) 

According to AMD, Mantle has been in development for more than two years, and the newly released Catalyst 14.1 beta driver is the first to enable support for the revolutionary new API for PC gaming.  Essentially, Mantle is AMD's attempt at creating a custom API that will replace DirectX and OpenGL in order to more directly target the GPU hardware in your PC, specifically AMD's GCN (Graphics Core Next) designs. 

slide2_0.jpg

Mantle runs at a lower level than DX or OGL, accessing the hardware resources of the graphics chip more directly, and with that access it can better utilize the hardware in your system, both CPU and GPU.  In fact, the primary benefit of Mantle is going to be seen in the form of less API overhead and fewer bottlenecks, such as real-time shader compiling and code translation. 

If you are interested in the meat of what makes Mantle tick and why it was so interesting to us when it was first announced in September of 2013, you should check out our first deep-dive article written by Josh.  In it you’ll get our opinion on why Mantle matters and why it has the potential for drastically changing the way the PC is thought of in the gaming ecosystem.

Continue reading our coverage of the launch of AMD's Catalyst 14.1 driver and Battlefield 4 Mantle patch!!

Video Perspective: 2013 Games on the A10-7850K vs. Intel Core i3 + GeForce GT 630

Subject: Graphics Cards, Processors | January 29, 2014 - 03:44 PM |
Tagged: video, nvidia, Intel, gt 630, APU, amd, A10-7850K, 7850k

The most interesting aspect of the new Kaveri-based APUs from AMD, in particular the A10-7850K part, is how they improve mainstream gaming performance.  AMD has always stated that these APUs shake up the need for low-cost discrete graphics, and when we got the new APU in the office we ran a couple of quick tests to see how much validity there is to that claim.

In this short video we compare the A10-7850K APU against a combination of the Intel Core i3-4330 and GeForce GT 630 discrete graphics card in five of 2013's top PC releases.  I think you'll find the results pretty interesting.

UPDATE: I've had some questions about WHICH of the GT 630 SKUs were used in this testing.  Our GT 630 was this EVGA model that is based on 96 CUDA cores and a 128-bit DDR3 memory interface.  You can see a comparison of the three current GT 630 options on NVIDIA's website here.

If you are looking for more information on AMD's Kaveri APUs, you should check out my review of the A8-7600 part as well as our testing of Dual Graphics with the A8-7600 and a Radeon R7 250 card.