AMD Selects Asetek to Liquid Cool The World’s Fastest Graphics Card

Subject: Graphics Cards | April 8, 2014 - 03:51 PM |
Tagged: asetek, amd, r9 295x2

If you wondered where the custom cooler for the impressively powerful AMD Radeon R9 295X2 came from, then wonder no more.  The cooler was designed specifically for this card by Asetek, a veteran in cooling computer components with water.  You should keep that in mind the next time you think about picking up a third-party water cooler!


Asetek, the world’s leading supplier of computer liquid cooling solutions, today announced that its liquid cooling technology will be used to cool AMD’s latest flagship graphics card. The new AMD Radeon R9 295X2 is the world’s fastest graphics card. Boasting 8 gigabytes of memory and over 11 teraflops of computing power, the AMD Radeon R9 295X2 graphics card is the undisputed graphics performance champion.

“Today’s high-end graphic cards pack insane amounts of power into a very small area and removing that heat is no small task. Utilizing our liquid cooling for graphics cards unlocks new opportunities for performance and low noise,” said André Sloth Eriksen, Founder and CEO of Asetek. “The fact that AMD has chosen Asetek liquid cooling for their reference cooling design is a testament to the reliability and performance of our technology.”

The AMD Radeon R9 295X2 is the first graphics card reference design ever to ship with an advanced closed-loop water cooling system. The Asetek-developed liquid cooling system on the AMD Radeon R9 295X2 graphics card delivers significant benefits for the performance-hungry enthusiast, hardcore gamer or Bitcoin miner. Users will appreciate the low noise, low GPU and component temperatures, and blistering performance - right out of the box.

“As the most powerful graphics card offered to date, we knew we needed an outstanding custom cooling solution for the AMD Radeon R9 295X2 graphics card,” said Matt Skynner, corporate vice president and general manager, Graphics Business Unit, AMD. “Asetek’s liquid cooling embodies the efficient performance, reliability and reputation we were seeking in a partner. As GPUs become more powerful, the benefits of collaborating with Asetek and integrating our world-class technologies are clear.”

The AMD Radeon R9 295X2 graphics card utilizes Asetek’s proven, maintenance free, factory sealed liquid cooling technology to cool the two powerful GPUs. This liquid cooling design ensures continuous stability even under full load. The card is easy to install and fits in most computer cases on the market today. With more than 1.5 million units in the field today, Asetek liquid cooling provides worry free operation to gamers and PC enthusiasts alike.

Source: Asetek
Manufacturer: AMD

A Powerful Architecture

AMD likes to toot its own horn. Just take a look at the not-so-subtle marketing buildup to the Radeon R9 295X2 dual-Hawaii graphics card, released today. I had photos of me shipped to…me…overnight. My hotel room at GDC was also given a package which included a pair of small Pringles cans (chips) and a bottle of volcanic water. You may have also seen some photos posted of a mysterious briefcase with its side stickered with the silhouette of a Radeon add-in board.

This tooting is not without some validity though. The Radeon R9 295X2 is easily the fastest graphics card we have ever tested, and that says a lot given the hardware releases of the last 24 months. It's big, it comes with an integrated water cooler, and it requires some pretty damn specific power supply specifications. But AMD did not compromise on the R9 295X2 and, for that, I am sure that many enthusiasts will be elated. Get your wallets ready, though; this puppy will run you $1499.


Both AMD and NVIDIA have a history of producing high quality dual-GPU graphics cards late in the product life cycle. The most recent entry from AMD was the Radeon HD 7990, a pair of Tahiti GPUs on a single PCB with a triple-fan cooler. While a solid-performing card, the product was released at a time when AMD CrossFire technology was well behind the curve and, as a result, real-world performance suffered considerably. By the time the drivers and ecosystem were fixed, the HD 7990 was more or less on the way out. It was also notorious for some intermittent, but severe, overheating issues, documented by Tom's Hardware in one of the most harshly titled articles I've ever read. (Hey, Game of Thrones started again this week!)

The Hawaii GPU, first revealed back in September and selling today under the guise of the R9 290X and R9 290 products, is even more power hungry than Tahiti. Many in the industry doubted that AMD would ever release a dual-GPU product based on Hawaii as the power and thermal requirements would be just too high. AMD has worked around many of these issues with a custom water cooler and by placing specific power supply requirements on buyers, all without compromising on performance. This is the real McCoy.

Continue reading our review of the AMD Radeon R9 295X2 8GB Dual Hawaii Graphics Card!!

MSI's R9 290X GAMING 4G sports a variety of overclocked settings and a Twin Frozr IV

Subject: Graphics Cards | April 7, 2014 - 04:14 PM |
Tagged: msi, R9 290X GAMING 4G, amd, hawaii, R9 290X, Twin Frozr IV, factory overclocked

The familiar Twin Frozr IV cooler has been added to the R9 290X GPU on MSI's latest AMD graphics card.  The R9 290X GAMING 4G sports 4GB of GDDR5 running at an even 5GHz and a GPU that has three separate top speeds depending on the profile you choose: 1040 MHz in OC Mode, 1030 MHz in Gaming Mode and 1000 MHz in Silent Mode.  [H]ard|OCP also tried manually overclocking and ended up with a peak of 1130 MHz on the GPU and 5.4GHz for the GDDR5, not a bad bump over the factory overclock.  Check out the performance of the various speeds in their full review.
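For a sense of scale, here is a quick back-of-the-envelope calculation (my own arithmetic, not from the review) of how much headroom that manual overclock adds over MSI's fastest factory profile:

```python
# Rough arithmetic only; clock values are copied from the article above.
factory_gpu_mhz, manual_gpu_mhz = 1040, 1130
factory_mem_ghz, manual_mem_ghz = 5.0, 5.4

gpu_gain = 100 * (manual_gpu_mhz / factory_gpu_mhz - 1)
mem_gain = 100 * (manual_mem_ghz / factory_mem_ghz - 1)
print(f"GPU headroom over OC Mode: {gpu_gain:.1f}%")   # ~8.7%
print(f"GDDR5 headroom:            {mem_gain:.1f}%")   # ~8.0%
```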


"On our test bench today is MSI's newest high-end GAMING series graphics cards in the form of the MSI Radeon R9 290X GAMING 4G video card. We will strap it to our test bench and compare it to the MSI GeForce GTX 780 Ti GAMING 3G card out-of-box and overclocked to determine which card provides the best gameplay experience."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

Rumor: VESA Might Have Accepted AMD's FreeSync

Subject: General Tech, Displays | April 5, 2014 - 11:41 PM |
Tagged: vesa, freesync, DisplayPort, amd

According to the French website Hardware.fr, the VESA standards body has accepted AMD's FreeSync proposal as an extension of the DisplayPort 1.2a standard. FreeSync is the standards-based answer to NVIDIA's G-Sync, a mechanism that allows the monitor to time its refreshes according to the GPU driving it. At CES 2014, AMD claimed that the technology was already in development for mobile devices to save power (through less frequent monitor refreshes).


By presenting the image to the user only when the work is complete, you can avoid "tearing" and latency. Tearing is eliminated because the graphics card does not change the image while the monitor is trying to display it. Latency is reduced because a finished frame does not need to wait until the monitor is ready for its next refresh (a wait of up to one over the monitor's maximum refresh rate). It should also save power by reducing the refresh rate during slower scenes, such as an idle desktop, although that is less of a concern when you are plugged into a wall.
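To make the latency point concrete, here is a small toy model (my own illustration, not anything from the article or the spec) comparing how long a finished frame sits idle on a fixed 60 Hz display versus an idealized adaptive-refresh display:

```python
# Toy model: how long does a finished frame wait before scan-out?
# All timings are illustrative assumptions, not measurements.

def fixed_refresh_wait(frame_done_ms, refresh_hz=60.0):
    """A fixed-refresh monitor only starts a new scan-out on its own schedule,
    so a finished frame waits for the next refresh slot."""
    period_ms = 1000.0 / refresh_hz
    slots_passed = int(frame_done_ms // period_ms)
    next_scanout_ms = (slots_passed + 1) * period_ms
    return next_scanout_ms - frame_done_ms

def adaptive_refresh_wait(frame_done_ms):
    """Idealized adaptive sync: the monitor refreshes when the GPU says the
    frame is ready, so the extra wait collapses to (roughly) zero."""
    return 0.0

for done_ms in (5.0, 16.0, 17.0, 30.0):
    print(f"frame ready at {done_ms:5.1f} ms -> "
          f"fixed 60 Hz waits {fixed_refresh_wait(done_ms):5.2f} ms, "
          f"adaptive waits {adaptive_refresh_wait(done_ms):4.2f} ms")
```

A frame that finishes just after a refresh (the 17 ms case) waits nearly a full 16.7 ms on the fixed display, which is exactly the "one over the maximum refresh rate" worst case described above.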

What does this mean? Nothing yet, really, except that a gigantic standards body seems to approve.

Source: Hardware.fr
Manufacturer: Various

Athlon and Pentium Live On

Over the past year or so, we have taken a look at a few budget gaming builds here at PC Perspective. One of our objectives with these build guides was to show people that PC gaming can be cost competitive with console gaming, and at a much higher quality.

However, we haven't stopped pursuing our goal of the perfect inexpensive gaming PC, which is still capable of maxing out image quality settings on today's top games at 1080p.

Today we take a look at two new systems, featuring some parts which have been suggested to us after our previous articles.

                  AMD System                               Intel System
Processor         AMD Athlon X4 760K - $85                 Intel Pentium G3220 - $65
Cores / Threads   4 / 4                                    2 / 2
Motherboard       Gigabyte GA-F2A55M-HD2 - $60             ASUS H81M-E - $60
Graphics          MSI R9 270 Gaming - $180                 MSI R9 270 Gaming - $180
System Memory     Corsair 8GB DDR3-1600 (1x8GB) - $73      Corsair 8GB DDR3-1600 (1x8GB) - $73
Hard Drive        Western Digital 1TB Caviar Green - $60   Western Digital 1TB Caviar Green - $60
Power Supply      Cooler Master GX 450W - $50              Cooler Master GX 450W - $50
Case              Cooler Master N200 MicroATX - $50        Cooler Master N200 MicroATX - $50
Price             $560                                     $540

(Editor's note: If you don't already have a copy of Windows, and don't plan on using Linux or SteamOS, you'll need an OEM copy of Windows 8.1 - currently selling for $98.)
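As a quick sanity check on the totals in the table (a throwaway calculation using the prices listed above plus the editor's $98 Windows estimate):

```python
# Component prices copied from the table above: CPU, motherboard, graphics,
# memory, hard drive, power supply, case. The Windows price is the editor's
# estimate, not a vendor quote.
amd_parts   = [85, 60, 180, 73, 60, 50, 50]
intel_parts = [65, 60, 180, 73, 60, 50, 50]
windows_oem = 98

print("AMD build:  ", sum(amd_parts), "USD /", sum(amd_parts) + windows_oem, "USD with Windows")
print("Intel build:", sum(intel_parts), "USD /", sum(intel_parts) + windows_oem, "USD with Windows")
# -> 558 / 656 and 538 / 636; the table rounds the hardware totals to $560 and $540.
```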

These are low prices for a gaming computer, and feature some parts which many of you might not know a lot about. Let's take a deeper look at the two different platforms which we built upon.

The Platforms


First up is the AMD Athlon X4 760K. While you may not have known the Athlon brand was still being used on current parts, these chips occupy an interesting part of the market. On the FM2 socket, the 760K is essentially a high-end Richland APU with the graphics portion of the chip disabled.

What this means is that if you are going to pair your processor with a discrete GPU anyway, you can skip paying extra for the integrated GPU.

As for the motherboard, we went for an ultra inexpensive A55 option from Gigabyte, the GA-F2A55M-HD2. This board features the A55 chipset which launched with the Llano APUs in 2011. Because of this older chipset, the board does not feature USB 3.0 or SATA 6G capability, but since we are only concerned about gaming performance here, it makes a great bare bones option.

Continue reading our build guide for a gaming PC under $550!!!

Podcast #294 - Frame Rating Mantle in BF4, DirectX 12, Sub-$700 4K Monitors and more!

Subject: General Tech | April 3, 2014 - 10:30 AM |
Tagged: video, Samsung, podcast, Mantle, Glacer 240L, GDC 2014, frame rating, dx12, cooler master, BUILD 2014, BF4, amd, adata, 4k

PC Perspective Podcast #294 - 04/03/2014

Join us this week as we discuss Frame Rating Mantle in BF4, DirectX 12, Sub-$700 4K Monitors and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano

 
This podcast is brought to you by Coolermaster, and the CM Storm Pulse-R Gaming Headset!
 
Program length: 1:12:29
 
  1. Week in Review:
  2. 0:43:40 This podcast is brought to you by Coolermaster, and the CM Storm Pulse-R Gaming Headset
  3. News items of interest:
  4. Hardware/Software Picks of the Week:
    1. Allyn: Like MAME? Try MESS, and further - UME (systems list)
  5. Closing/outro

Be sure to subscribe to the PC Perspective YouTube channel!!

 

Manufacturer: AMD

BF4 Integrates FCAT Overlay Support

Back in September AMD publicly announced Mantle, a new lower level API meant to offer more performance for gamers and more control for developers fed up with the restrictions of DirectX. Without diving too much into the politics of the release, the fact that Battlefield 4 developer DICE was integrating Mantle into the Frostbite engine for Battlefield was a huge proof point for the technology. Even though the release was a bit later than AMD had promised us, coming at the end of January 2014, one of the biggest PC games on the market today had integrated a proprietary AMD API.

When I did my first performance preview of BF4 with Mantle on February 1st, the results were mixed but we had other issues to deal with. First and foremost, our primary graphics testing methodology, called Frame Rating, couldn't be used because of the change in API. Instead we were forced to use an in-game frame rate counter built by DICE, which worked fine but didn't give us the fine-grained data we really wanted to put the platform to the test. It worked, but we wanted more. Today we are happy to announce we have full support for our Frame Rating and FCAT testing with BF4 running under Mantle.

A History of Frame Rating

In late 2012 and throughout 2013, testing graphics cards became a much more complicated beast. Terms like frame pacing, stutter, jitter and runts were not in the vocabulary of most enthusiasts but became an important part of the story just about one year ago. Though complicated to fully explain, the basics are pretty simple.

Rather than using software on the machine being tested to measure performance, our Frame Rating system uses a combination of local software and external capture hardware. On the local system with the hardware being evaluated, we run a small piece of software called an overlay that draws small colored bars on the left-hand side of the game screen, changing color with each frame rendered by the game. Using a secondary system, we capture the output from the graphics card directly, intercepting it from the display output in real time in uncompressed form. With that video file captured, we then analyze it frame by frame, measuring the length of each colored bar, how long it stays on the screen, and how consistently the bars are displayed. This allows us to find the average frame rate but also how smoothly frames are presented, whether there are dropped frames, and whether there are jitter or stutter issues.
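As a rough illustration of that analysis step (a minimal sketch of the idea, not PC Perspective's actual tooling; the frame layout, overlay column position, and runt threshold here are all assumptions), the per-capture work boils down to reading the overlay column and counting how many scanlines each rendered frame occupied:

```python
import numpy as np

def overlay_runs(capture, bar_x=8):
    """capture: (height, width, 3) array for one captured video frame.
    Returns (color, scanline_count) runs found down the overlay column."""
    column = capture[:, bar_x, :]
    runs, start = [], 0
    for y in range(1, column.shape[0] + 1):
        if y == column.shape[0] or not np.array_equal(column[y], column[start]):
            runs.append((tuple(int(v) for v in column[start]), y - start))
            start = y
    return runs

def classify(runs, runt_scanlines=20):
    """Flag 'runts': rendered frames that occupy only a sliver of the display."""
    return [(color, count, "runt" if count < runt_scanlines else "ok")
            for color, count in runs]

# Fake 1080p capture: most of the scan-out came from one frame, plus a 10-line runt.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
frame[:1070] = (255, 0, 0)
frame[1070:] = (0, 255, 0)
print(classify(overlay_runs(frame)))
```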


Continue reading our first look at Frame Rating / FCAT Testing with Mantle in Battlefield 4!!

GDC 2014: Shader-limited Optimization for AMD's GCN

Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 29, 2014 - 10:45 PM |
Tagged: gdc 14, GDC, GCN, amd

While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors are dominated by compute cores, sometimes thousands of them. Video drivers are complex packages of software, and one of their many tasks is converting your scripts, known as shaders, into machine code for the hardware. If this machine code is efficient, it can mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.


Emil Persson of Avalanche Studios, probably known best for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, due to its existence in both console and PC, while bringing up older GPUs for examples. Yes, he has many snippets of GPU assembly code.

AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like, with resources mapped to memory (and caches of said memory) rather than "slots" (although drivers and APIs often pretend those relics still exist), and with vectors mostly treated as collections of scalars, and so forth. Tricks that attempt to pack operations together into vectors, such as using dot products, can simply put unnecessary restrictions on the compiler and optimizer... since the hardware breaks those vector operations back down into the very same component-by-component ops you thought you were avoiding.

Basically, and it makes sense coming from GDC, this talk rarely glosses over points. It goes over execution speed of one individual op compared to another, at various precisions, and which to avoid (protip: integer divide). Also, fused multiply-add is awesome.

I know I learned.

As a final note, this returns to the discussions we had prior to the launch of the next-generation consoles. Developers are learning how to make their shader code much more efficient on GCN and that could easily translate to leading PC titles. Especially with DirectX 12 and Mantle, which lighten the CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and in terms of Intel vs AMD, it might help). But honestly, I believe that it will be trends like this presentation which prove more significant... even if behind the scenes. Of course developers were always having these discussions, but now console developers will probably be talking about only one architecture - and that is a lot of people talking about very few things.

This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.

Taking the A10-7850K out for a spin and leaving marks on the bench

Subject: Processors | March 27, 2014 - 12:44 PM |
Tagged: Kaveri, APU, amd, A10-7850K

It is about time we took a look at AMD's new flagship processor, the A10-7850K Kaveri chip running at 3.7GHz, or 4GHz at full boost, with 4 Steamroller CPU cores and 8 GCN graphics compute units.  While we are still shy on HSA benchmarks at the moment, HiTech Legion did have a chance to do some Mantle testing with the APU alone and paired with a discrete GPU, which showed off some of the benefits of Mantle.  They also reached a decent overclock, a hair shy of 4.5GHz on air, which is not too shabby for a processor that costs under $200.  Check out the full review here.


"AMD has launched their fourth generation of APU, codenamed “Kaveri”. Kaveri boasts increased processor power coupled with advanced Radeon graphics but there are other technologies, such as HSA, that balance memory loads via “compute” to both the CPU and GPU."

Here are some more Processor articles from around the web:


AMD FirePro W9100 Announced: Doing Work in Hawaii.

Subject: General Tech, Graphics Cards | March 26, 2014 - 02:43 PM |
Tagged: amd, firepro, W9100

The AMD FirePro W9100 has been announced, bringing the Hawaii architecture to non-gaming markets. First seen in the Radeon R9 series of graphics cards, the chip has the capacity for 5 TeraFLOPS of single-precision (32-bit) performance and 2 TeraFLOPS of double-precision (64-bit). The card also has 16GB of GDDR5 memory to support it. From the raw numbers, this is slightly more capacity than either the Titan Black or Quadro K6000 in all categories. It will also drive six 4K monitors (or three at 60Hz) per card, and AMD supports up to four W9100 cards in a single system.
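Those throughput figures line up with the usual shader-count arithmetic. As a rough check (the 2816 stream processors and ~930 MHz clock below are the commonly cited W9100 specs, assumed here rather than taken from this article):

```python
# Peak FLOPS = stream processors x clock x 2 (one fused multiply-add per clock).
stream_processors = 2816
clock_ghz = 0.93                      # assumed engine clock

sp_tflops = stream_processors * clock_ghz * 2 / 1000.0
dp_tflops = sp_tflops / 2             # Hawaii-based FirePro runs FP64 at 1/2 the FP32 rate
print(f"single precision: ~{sp_tflops:.2f} TFLOPS")   # ~5.24
print(f"double precision: ~{dp_tflops:.2f} TFLOPS")   # ~2.62
```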


Professional users may be looking for several things in their graphics cards: compute performance (either directly or through licensed software such as Photoshop, Premiere, Blender, Maya, and so forth), several high-resolution monitors (or digital signage units), and/or a lot of graphics performance. The W9100 is basically the top of the stack, covering all three of these requirements.


AMD also announced a system branding initiative called "AMD FirePro Ultra Workstation". They currently have five launch partners, Supermicro, Boxx, Tarox, Silverdraft, and Versatile Distribution Services, which will have workstations available under this program. The list of components for a "Recommended" certification is: two eight-core 2.6 GHz CPUs, 32GB of RAM, four PCIe 3.0 x16 slots, a 1500W Platinum PSU, and a case with nine expansion slots (to allow four W9100 GPUs along with one SSD or SDI interface card).


Also, while the company has heavily discussed OpenCL in their slide deck, they have not mentioned specific versions. As such, I will assume that the FirePro W9100 supports OpenCL 1.2, like the R9-series, and not OpenCL 2.0 which was ratified back in November. This is still a higher conformance level than NVIDIA, which is at OpenCL 1.1.

Currently no word about pricing or availability.

Source: AMD