MSI's R9 290X GAMING 4G sports a variety of overclocked settings and a Twin Frozr IV cooler

Subject: Graphics Cards | April 7, 2014 - 07:14 PM |
Tagged: msi, R9 290X GAMING 4G, amd, hawaii, R9 290X, Twin Frozr IV, factory overclocked

The familiar Twin Frozr IV cooler has been added to the R9 290X GPU on MSI's latest AMD graphics card.  The R9 290X GAMING 4G sports 4GB of GDDR5 running at an even 5GHz and a GPU with three separate top speeds depending on the profile you choose: 1040 MHz in OC Mode, 1030 MHz in Gaming Mode and 1000 MHz in Silent Mode.  [H]ard|OCP also tried manual overclocking and reached a peak of 1130 MHz on the GPU and 5.4 GHz for the GDDR5, not a bad bump over the factory overclock.  Check out the performance at the various speeds in their full review.

1396151094av674gYKyI_1_6_l.jpg

"On our test bench today is MSI's newest high-end GAMING series graphics cards in the form of the MSI Radeon R9 290X GAMING 4G video card. We will strap it to our test bench and compare it to the MSI GeForce GTX 780 Ti GAMING 3G card out-of-box and overclocked to determine which card provides the best gameplay experience."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Rumor: VESA Might Have Accepted AMD's FreeSync

Subject: General Tech, Displays | April 6, 2014 - 02:41 AM |
Tagged: vesa, freesync, DisplayPort, amd

According to French website hardware.fr, the VESA standards body has accepted AMD's FreeSync proposal into the DisplayPort 1.2a standard as an extension. FreeSync is the standards-based answer to NVIDIA's G-Sync, a mechanism that lets the monitor time its refreshes to the GPU driving it. At CES 2014, AMD claimed that the underlying technology was already in development for mobile devices as a power-saving measure (less frequent monitor refreshes).

vesa-logoBlack.png

By presenting the image to the user only when the work is complete, you can avoid "tearing" and latency. Tearing is eliminated because the graphics card no longer changes the image while the monitor is drawing it. Latency is reduced because the GPU does not need to wait until the monitor is ready, a wait of up to one full refresh interval (16.7 ms on a 60 Hz panel). It should also save power by lowering the refresh rate during slower scenes, such as an idle desktop, though that is less of a concern when you are plugged into a wall.
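
To put a number on that wait, here is a toy model (purely illustrative, not any vendor's implementation; the function name and frame times are our own) of the extra latency a fixed 60 Hz refresh adds to a finished frame:

```python
# Toy model of the latency argument above. With fixed vsync, a finished
# frame idles until the next scheduled refresh; with adaptive refresh the
# monitor updates as soon as the frame is ready, so the wait is ~0.
import math

def vsync_wait_ms(render_ms, refresh_hz=60.0):
    period = 1000.0 / refresh_hz                      # 16.67 ms at 60 Hz
    next_refresh = math.ceil(render_ms / period) * period
    return next_refresh - render_ms                   # idle time before display

for render in (5.0, 17.0, 30.0):
    print(f"{render:4.1f} ms render -> +{vsync_wait_ms(render):5.2f} ms vsync wait at 60 Hz")
```

A 17 ms frame, for example, just misses the 16.67 ms refresh window and sits for another ~16.3 ms before it is shown; with FreeSync-style timing it would be displayed immediately.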

What does this mean? Nothing yet, really, except that a gigantic standards body seems to approve.

Source: Hardware.fr
Author:
Manufacturer: Various

Athlon and Pentium Live On

Over the past year or so, we have taken a look at a few budget gaming builds here at PC Perspective. One of our objectives with these build guides was to show people that PC gaming can be cost competitive with console gaming, and at a much higher quality.

However, we haven't stopped pursuing our goal of the perfect inexpensive gaming PC: one still capable of maxing out image quality settings in today's top games at 1080p.

Today we take a look at two new systems, featuring some parts which have been suggested to us after our previous articles.

Component       | AMD System                             | Intel System
Processor       | AMD Athlon X4 760K - $85               | Intel Pentium G3220 - $65
Cores / Threads | 4 / 4                                  | 2 / 2
Motherboard     | Gigabyte F2A55M-HD2 - $60              | ASUS H81M-E - $60
Graphics        | MSI R9 270 Gaming - $180               | MSI R9 270 Gaming - $180
System Memory   | Corsair 8GB DDR3-1600 (1x8GB) - $73    | Corsair 8GB DDR3-1600 (1x8GB) - $73
Hard Drive      | Western Digital 1TB Caviar Green - $60 | Western Digital 1TB Caviar Green - $60
Power Supply    | Cooler Master GX 450W - $50            | Cooler Master GX 450W - $50
Case            | Cooler Master N200 MicroATX - $50      | Cooler Master N200 MicroATX - $50
Price           | $560                                   | $540

(Editor's note: If you don't already have a copy of Windows, and don't plan on using Linux or SteamOS, you'll need an OEM copy of Windows 8.1 - currently selling for $98.)

These are low prices for a gaming computer, and the builds feature some parts that many of you might not know a lot about. Let's take a deeper look at the two platforms we built upon.

The Platforms

IMG_9973.JPG

First up is the AMD Athlon X4 760K. While you may not have known the Athlon brand was still in use on current parts, these chips occupy an interesting part of the market. Sitting on the FM2 socket, the 760K is essentially a high-end Richland APU with the graphics portion of the chip disabled.

What this means is that if you are going to pair your processor with a discrete GPU anyway, you can skip paying extra for the integrated GPU.

As for the motherboard, we went with an ultra-inexpensive option from Gigabyte, the GA-F2A55M-HD2. This board uses the A55 chipset, which launched with the Llano APUs in 2011. Because of this older chipset, the board offers neither USB 3.0 nor SATA 6G, but since we are only concerned with gaming performance here, it makes a great bare-bones option.

Continue reading our build guide for a gaming PC under $550!!!

Podcast #294 - Frame Rating Mantle in BF4, DirectX 12, Sub-$700 4K Monitors and more!

Subject: General Tech | April 3, 2014 - 01:30 PM |
Tagged: video, Samsung, podcast, Mantle, Glacer 240L, GDC 2014, frame rating, dx12, cooler master, BUILD 2014, BF4, amd, adata, 4k

PC Perspective Podcast #294 - 04/03/2014

Join us this week as we discuss Frame Rating Mantle in BF4, DirectX 12, Sub-$700 4K Monitors and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano

 
This podcast is brought to you by Cooler Master, and the CM Storm Pulse-R Gaming Headset!
 
Program length: 1:12:29
 
  1. Week in Review:
  2. 0:43:40 This podcast is brought to you by Cooler Master, and the CM Storm Pulse-R Gaming Headset
  3. News items of interest:
  4. Hardware/Software Picks of the Week:
    1. Allyn: Like MAME? Try MESS, and further - UME (systems list)
  5. Closing/outro

Be sure to subscribe to the PC Perspective YouTube channel!!

 

Author:
Manufacturer: AMD

BF4 Integrates FCAT Overlay Support

Back in September, AMD publicly announced Mantle, a new lower-level API meant to offer more performance for gamers and more control for developers fed up with the restrictions of DirectX. Without diving too far into the politics of the release, the fact that Battlefield 4 developer DICE was integrating Mantle into the Frostbite engine behind Battlefield was a huge proof point for the technology. Even though the release came a bit later than AMD had promised us, arriving at the end of January 2014, one of the biggest PC games on the market today had integrated a proprietary AMD API.

When I published my first performance preview of BF4 with Mantle on February 1st, the results were mixed, but we had other issues to deal with. First and foremost, our primary graphics testing methodology, called Frame Rating, couldn't be used because of the API change. Instead we were forced to rely on an in-game frame rate counter built by DICE, which worked fine but didn't give us the fine-grained data we really wanted to put the platform to the test. It worked, but we wanted more. Today we are happy to announce that we have full support for our Frame Rating and FCAT testing with BF4 running under Mantle.

A History of Frame Rating

In late 2012 and throughout 2013, testing graphics cards became a much more complicated beast. Terms like frame pacing, stutter, jitter and runts were not in the vocabulary of most enthusiasts, but they became an important part of the story just about one year ago. Though the topic is complicated to explain fully, the basics are pretty simple.

Rather than using software on the machine being tested to measure performance, our Frame Rating system uses a combination of local software and external capture hardware. On the system with the hardware being evaluated, we run a small piece of software called an overlay that draws small colored bars on the left-hand side of the game screen, changing color with each frame the game renders. Using a secondary system, we capture the output of the graphics card directly, intercepting it from the display output in real time in uncompressed form. With that video file captured, we then analyze it frame by frame, measuring the length of each colored bar, how long it stays on screen, and how consistently the bars are displayed. This lets us find not only the average frame rate but also how smoothly frames are presented, whether any frames are dropped, and whether there are jitter or stutter issues.
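
To make that analysis step concrete, here is a minimal sketch of how a capture could be processed, assuming the uncompressed video has already been reduced to the overlay color index visible on each scanline of every captured refresh. The 16-color palette, the 21-scanline runt threshold, and all names here are illustrative assumptions, not the actual FCAT tooling:

```python
from collections import Counter

PALETTE = 16          # assume the overlay cycles through 16 colors, one per frame
RUNT_THRESHOLD = 21   # frames shown for fewer scanlines count as "runts"
SCANLINES = 1080      # rows per captured refresh at 1080p

def analyze(refreshes, refresh_hz=60.0):
    """refreshes: one list per captured refresh, holding the overlay color
    index seen on each scanline. Returns aggregate frame statistics."""
    shown = Counter()              # frame number -> scanlines displayed
    dropped = 0
    frame, prev = 0, None
    for rows in refreshes:
        for color in rows:
            if prev is not None and color != prev:
                step = (color - prev) % PALETTE
                dropped += step - 1      # skipped colors never reached the screen
                frame += step
            shown[frame] += 1
            prev = color
    seconds = len(refreshes) / refresh_hz
    return {
        "avg_fps": len(shown) / seconds,
        "runt_frames": sum(1 for n in shown.values() if n < RUNT_THRESHOLD),
        "dropped_frames": dropped,
        # on-screen time per frame, from the share of scanlines it occupied
        "frame_ms": {f: n / SCANLINES / refresh_hz * 1000.0
                     for f, n in shown.items()},
    }
```

The per-frame times in the result are what expose stutter: a steady game shows nearly equal scanline counts per frame, while runts and drops show up as tiny or missing entries.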

screen1.jpg

Continue reading our first look at Frame Rating / FCAT Testing with Mantle in Battlefield 4!!

GDC 2014: Shader-limited Optimization for AMD's GCN

Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 30, 2014 - 01:45 AM |
Tagged: gdc 14, GDC, GCN, amd

While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors are dominated by compute cores, sometimes thousands of them. Video drivers are complex packages of software, and one of their many tasks is compiling your shaders into machine code for the hardware. If that machine code is efficient, it can mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.

amd-gcn-unit.jpg

Emil Persson of Avalanche Studios, probably best known for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, due to its presence in both consoles and PCs, while bringing up older GPUs for comparison. Yes, it includes many snippets of GPU assembly code.

AMD's GCN architecture is actually quite interesting, especially dissected as it was in this presentation. It is simpler than its ancestors and much more CPU-like: resources are mapped to memory (and caches of that memory) rather than "slots" (although drivers and APIs often pretend those relics still exist), and vectors are mostly treated as collections of scalars, and so forth. Tricks that attempt to pack instructions into vectors, such as using dot products, can simply impose unnecessary restrictions on the compiler and optimizer... because the hardware breaks those vector operations back down into the very same component-by-component ops you thought you were avoiding.

Basically, and it makes sense coming from GDC, this talk rarely glosses over the details. It goes over the execution speed of individual ops relative to one another, at various precisions, and notes which to avoid (protip: integer divide). Also, fused multiply-add is awesome.
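
To give a flavor of the scalarization point, here is a plain-Python illustration (not GPU code; the GCN instruction names in the comments are the only reference to real hardware) of why hand-packing scalar math into a dot product gains nothing:

```python
# On GCN, a 3-component dot product lowers to one multiply followed by one
# multiply-add per remaining component (v_mul_f32 + v_mac_f32/v_fma_f32),
# which is exactly the same work as the "unvectorized" scalar form.
def dot3(a, b):
    acc = a[0] * b[0]
    acc = a[1] * b[1] + acc   # one multiply-add per component
    acc = a[2] * b[2] + acc
    return acc

print(dot3((1.0, 2.0, 3.0), (4.0, 5.0, 6.0)))  # 32.0
```

So rewriting scalar math to route through a dot product only constrains the optimizer without saving a single ALU op.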

I know I learned.

As a final note, this returns to the discussions we had prior to the launch of the next-generation consoles. Developers are learning how to make their shader code much more efficient on GCN, and that could easily translate to leading PC titles. Especially with DirectX 12 and Mantle, which lighten CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side of the problem. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and in terms of Intel vs AMD, it might help). But honestly, I believe that trends like this presentation will prove more significant... even if behind the scenes. Of course developers were always having these discussions, but now console developers will mostly be talking about a single architecture - that is a lot of people talking about very few things.

This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.

Taking the A10-7850K out for a spin and leaving marks on the bench

Subject: Processors | March 27, 2014 - 03:44 PM |
Tagged: Kaveri, APU, amd, A10-7850K

It is about time we took a look at AMD's new flagship processor, the A10-7850K Kaveri chip running at 3.7 GHz, or 4 GHz at full boost, with 4 Steamroller CPU cores and 8 Hawaii GPU cores.  While we are still shy on HSA benchmarks at the moment, HiTech Legion did have a chance to do some Mantle testing with the APU alone and paired with a discrete GPU, which showed off some of the benefits of Mantle.  They also reached a decent overclock, a hair shy of 4.5 GHz on air, which is not too shabby for a processor that costs under $200.  Check out the full review here.

tech1.jpg

"AMD has launched their fourth generation of APU, codenamed “Kaveri”. Kaveri boasts increased processor power coupled with advanced Radeon graphics but there are other technologies, such as HSA, that balance memory loads via “compute” to both the CPU and GPU."

Here are some more Processor articles from around the web:

Processors

AMD FirePro W9100 Announced: Doing Work in Hawaii.

Subject: General Tech, Graphics Cards | March 26, 2014 - 05:43 PM |
Tagged: amd, firepro, W9100

The AMD FirePro W9100 has been announced, bringing the Hawaii architecture to non-gaming markets. First seen in the Radeon R9 series of graphics cards, the chip is capable of 5 TeraFLOPs of single-precision (32-bit) compute and 2 TeraFLOPs of double-precision (64-bit). The card also carries 16GB of GDDR5 memory to support it. On the raw numbers, that is slightly more than either the Titan Black or Quadro K6000 in every category. Each card will also drive six 4K monitors (or three at 60Hz), and AMD supports up to four W9100 cards in a single system.

amd-firepro-w9100.jpg

Professional users may be looking for several things in a graphics card: compute performance (either directly or through licensed software such as Photoshop, Premiere, Blender, Maya, and so forth), support for several high-resolution monitors (or digital signage units), and/or sheer graphics performance. The W9100 sits at the top of the stack and covers all three of these requirements.

amd-firepro-w9100-2.jpg

AMD also announced a system branding initiative called "AMD FirePro Ultra Workstation". The company currently has five launch partners (Supermicro, Boxx, Tarox, Silverdraft, and Versatile Distribution Services) that will offer workstations under this program. The component list for a "Recommended" certification is: two eight-core 2.6 GHz CPUs, 32GB of RAM, four PCIe 3.0 x16 slots, a 1500W Platinum PSU, and a case with nine expansion slots (to allow four W9100 GPUs along with one SSD or SDI interface card).

amd-firepro-w9100-3.jpg

Also, while the company has heavily discussed OpenCL in their slide deck, they have not mentioned specific versions. As such, I will assume that the FirePro W9100 supports OpenCL 1.2, like the R9-series, and not OpenCL 2.0 which was ratified back in November. This is still a higher conformance level than NVIDIA, which is at OpenCL 1.1.

Currently no word about pricing or availability.

Source: AMD

GDC wasn't just about DirectX; OpenGL was also a hot topic

Subject: General Tech | March 24, 2014 - 12:26 PM |
Tagged: opengl, nvidia, gdc 14, GDC, amd, Intel

DX12 and its Mantle-like qualities garnered the most interest from gamers at GDC, but an unusual trio of companies was also pushing a different API.  OpenGL has been around for over 20 years and has waged a long war against Direct3D, a war which may be intensifying again.  Representatives from Intel, AMD and NVIDIA all took to the stage to praise the new OpenGL standard, suggesting that with a tweaked OpenGL implementation developers could expect performance increases of 7 to 15 times.  The Inquirer has embedded an hour-long video in their story; check it out to learn more.

slide-1-638.jpg

"CHIP DESIGNERS AMD, Intel and Nvidia teamed up to tout the advantages of the OpenGL multi-platform application programming interface (API) at this year's Game Developers Conference (GDC)."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer
Author:
Manufacturer: NVIDIA

DX11 could rival Mantle

The big story at GDC last week was Microsoft's reveal of DirectX 12 and the future of the dominant API for PC gaming.  There was plenty of build-up to the announcement, with Microsoft's DirectX team posting teasers and starting up a Twitter account for the occasion. I hosted a live blog from the event which included pictures of the slides. It was the most successful event of this type we have run, with literally thousands of people joining in the conversation. Along with the debates over the similarities to AMD's Mantle API and the timeline for the DX12 release, there are plenty of stories to be told.

After the initial session, I wanted to set up meetings with both AMD and NVIDIA to discuss what had been shown and get some feedback on the GPU giants' planned directions.  NVIDIA presented us with a very interesting set of data focused not only on the future with DX12 but also on the present with DirectX 11.

15.jpg

The reason for the topic is easy to decipher - AMD has built up the image of Mantle as the future of PC gaming and, with a full 18 months before Microsoft's DirectX 12 is released, how developers and gamers respond will have an important impact on the market. NVIDIA doesn't like to talk about Mantle directly, but it obviously feels the need to address the questions in a roundabout fashion. During our time with NVIDIA's Tony Tamasi at GDC, the discussion centered as much on OpenGL and DirectX 11 as anything else.

What are APIs and why do you care?

For those who might not really understand what DirectX and OpenGL are, a bit of background first. APIs (application programming interfaces) provide an abstraction layer between hardware and software applications.  An API delivers a consistent programming model (though the language can vary) across different hardware vendors' products and even across hardware generations.  APIs expose hardware feature sets spanning a wide range of complexity, letting programmers use the hardware without necessarily knowing it in great detail.
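
As a bare-bones sketch of that abstraction idea (all names here are invented for illustration and correspond to no real API or driver), the application is written once against the interface, and each vendor supplies its own translation underneath:

```python
from abc import ABC, abstractmethod

class GraphicsAPI(ABC):
    """The stable interface the game programs against."""
    @abstractmethod
    def draw(self, mesh: str) -> str: ...

class VendorADriver(GraphicsAPI):
    def draw(self, mesh: str) -> str:
        return f"vendor-A command packet for {mesh}"   # A's hardware format

class VendorBDriver(GraphicsAPI):
    def draw(self, mesh: str) -> str:
        return f"vendor-B command packet for {mesh}"   # different hardware, same call

def render_scene(api: GraphicsAPI):
    # the game never needs to know which hardware it is running on
    return [api.draw(m) for m in ("terrain", "character")]

print(render_scene(VendorADriver()))
print(render_scene(VendorBDriver()))
```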

Over the years, APIs have developed and evolved but still retain backwards compatibility.  Companies like NVIDIA and AMD can improve DirectX implementations to increase performance or efficiency without adversely (usually at least) affecting other games or applications.  And because the games use that same API for programming, changes to how NVIDIA/AMD handle the API integration don’t require game developer intervention.

With the release of AMD Mantle, the idea of a "low level" API has been planted in the minds of gamers and developers.  The term "low level" can mean many things, but in general it describes an API that is more direct, has a thinner set of abstraction layers, and requires less translation from code to hardware.  The goal is to reduce the overhead (performance cost) that APIs naturally impose for these translations.  With that performance reclaimed, the CPU cycles can be used by the program (game) or idled to improve battery life. In certain cases, GPU throughput can also increase where API overhead was holding the video card back.
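
The overhead argument can be sketched with a toy simulation (no real graphics API is involved; every name below is invented): pay per-call validation on every draw, or record the translated commands once and replay them, which is the submission style thinner APIs encourage:

```python
# Toy simulation of API overhead. "Immediate" submission pays a
# validation/translation cost on every call; a pre-recorded command
# list pays it once and replays almost for free.
import time

def validated_draw(state, i):
    assert state["pipeline"] is not None     # stand-in for per-call driver checks
    return ("packet", state["pipeline"], i)  # stand-in for command translation

state = {"pipeline": "opaque-pass"}
N = 100_000

t0 = time.perf_counter()
recorded = [validated_draw(state, i) for i in range(N)]   # every frame pays this
t1 = time.perf_counter()
replayed = list(recorded)                                 # replaying the command list
t2 = time.perf_counter()

print(f"immediate submission: {(t1 - t0) * 1e3:6.1f} ms for {N} draws")
print(f"command-list replay:  {(t2 - t1) * 1e3:6.1f} ms for the same draws")
```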

Passing additional control to the game developers, away from the API or GPU driver developers, gives those coders additional power and improves the ability for some vendors to differentiate. Interestingly, not all developers want this kind of control as it requires more time, more development work, and small teams that depend on that abstraction to make coding easier will only see limited performance advantages.

This transition to lower level APIs is being driven by the widening performance gap between CPUs and GPUs.  NVIDIA provided the images below.

04.jpg

On the left we see performance scaling in terms of GFLOPS, and on the right the metric is memory bandwidth. Clearly the performance of NVIDIA's graphics chips (as well as AMD's) has far outpaced what the best Intel desktop processors have been able to deliver, and that gap means the industry needs to innovate to find ways to close it.

Continue reading NVIDIA Talks DX12, DX11 Efficiency Improvements!!!