AMD Catalyst 14.7 Release Candidate 3

Subject: Graphics Cards | August 14, 2014 - 04:20 PM |
Tagged: catalyst 14.7 RC3, beta, amd

A new Catalyst Release Candidate has arrived, and as with the previous driver it no longer supports Windows 8.0 or the WDDM 1.2 driver model, so upgrade to Windows 7 or Windows 8.1 before installing.  AMD will eventually release a driver that supports WDDM 1.1 under Windows 8.0 for those who do not upgrade.

AMD-Catalyst-12-11-Beta-11-7900-Modded-Driver-Crafted-for-Performance.jpg

Feature Highlights of the AMD Catalyst 14.7 RC3 Driver for Windows

  • Includes all improvements found in the AMD Catalyst 14.7 RC driver

  • Display interface enhancements to improve 4k monitor performance and reduce flickering.
  • Improvements apply to the following products:
    • AMD Radeon R9 290 Series
    • AMD Radeon R9 270 Series
    • AMD Radeon HD 7800 Series
  • Even with these improvements, cable quality and other system variables can affect 4K performance. AMD recommends using DisplayPort 1.2 HBR2 certified cables with a length of 2m (~6 ft) or less when driving 4K monitors.
  • Wildstar: AMD Crossfire profile support
  • Lichdom: Single GPU and Multi-GPU performance enhancements
  • Watch Dogs: Smoother gameplay on single GPU and Multi-GPU configurations

Feature Highlights of the AMD Catalyst 14.7 RC Driver for Windows

  • Includes all improvements found in the AMD Catalyst 14.6 RC driver
    • AMD CrossFire and AMD Radeon Dual Graphics profile update for Plants vs. Zombies
    • Assassin's Creed IV - improved CrossFire scaling (3840x2160 High Settings) up to 93%
    • Collaboration with AOC has identified non-standard display timings as the root cause of 60Hz SST flickering exhibited by the AOC U2868PQU panel on certain AMD Radeon graphics cards.
    • A software workaround has been implemented in AMD Catalyst 14.7 RC driver to resolve the display timing issues with this display. Users are further encouraged to obtain newer display firmware from AOC that will resolve flickering at its origin.
    • Users are additionally advised to utilize DisplayPort-certified cables to ensure the integrity of the DisplayPort data connection.

Feature Highlights of the AMD Catalyst 14.6 RC Driver for Windows

  • Plants vs. Zombies (Direct3D performance improvements):
    • AMD Radeon R9 290X - 1920x1080 Ultra – improves up to 11%
    • AMD Radeon R9 290X - 2560x1600 Ultra – improves up to 15%
    • AMD Radeon R9 290X CrossFire configuration (3840x2160 Ultra) - 92% scaling
  • 3DMark Sky Diver improvements:
    • AMD A4-6300 – improves up to 4%
    • Enables AMD Dual Graphics/AMD CrossFire support
  • Grid Auto Sport: AMD CrossFire profile
  • Wildstar: Power Xpress profile
    • Performance improvements for smoother gameplay
    • Performance improves up to 24% at 2560x1600 on the AMD Radeon R9 and R7 Series of products for both single GPU and multi-GPU configurations.
  • Watch Dogs: AMD CrossFire – Frame pacing improvements
  • Battlefield Hardline Beta: AMD CrossFire profile

Known Issues

  • Running Watch Dogs with a R9 280X CrossFire configuration may result in the application running in CrossFire software compositing mode
  • Enabling Temporal SMAA in a CrossFire configuration when playing Watch Dogs will result in flickering
  • AMD CrossFire configurations with AMD Eyefinity enabled will see instability with Battlefield 4 or Thief when running Mantle
  • Catalyst Install Manager text is covered by Express/Custom radio button text
  • Express Uninstall does not remove C:\Program Files\(AMD or ATI) folder
Source: AMD

Intel and Microsoft Show DirectX 12 Demo and Benchmark

Subject: General Tech, Graphics Cards, Processors, Mobile, Shows and Expos | August 13, 2014 - 06:55 PM |
Tagged: siggraph 2014, Siggraph, microsoft, Intel, DirectX 12, directx 11, DirectX

Along with GDC Europe and Gamescom, Siggraph 2014 is going on in Vancouver, BC, and Intel had a DirectX 12 demo at its booth there. The scene, containing 50,000 asteroids, each in its own draw call, was developed on both Direct3D 11 and Direct3D 12 code paths, and could apparently be switched between them while the demo was running. Intel claims to have measured both power and frame rate.

intel-dx12-LockedFPS.png

Variable power to hit a desired frame rate, DX11 and DX12.

The test system is a Surface Pro 3 with an Intel HD 4400 GPU. A bit of digging shows this would make it the i5-based Surface Pro 3. Removing another shovel-load of mystery, that means the Intel Core i5-4300U: two cores, four threads, a 1.9 GHz base clock, up to a 2.9 GHz turbo clock, 3MB of cache, and (of course) the Haswell architecture.

While not top-of-the-line, it is also not bottom-of-the-barrel. It is a respectable CPU.

Intel's demo on this processor shows a significant power reduction in the CPU, and even a slight decrease in GPU power, for the same target frame rate. If power was not throttled, Intel's demo goes from 19 FPS all the way up to a playable 33 FPS.

Intel will discuss more during a video interview, tomorrow (Thursday) at 5pm EDT.

intel-dx12-unlockedFPS-1.jpg

Maximum power in DirectX 11 mode.

For my contribution to the story, I would like to address the first comment on the MSDN article. It claims that this is just an "ideal scenario" of a scene that is bottlenecked by draw calls. The thing is: that is the point. Sure, a game developer could optimize the scene to (maybe) instance objects together, and so forth, but that is unnecessary work. Why should programmers, or worse, artists, need to spend so much of their time building art so that it can be batched together into fewer, bigger commands? Would it not be much easier, and all-around better, if the content could be developed as it most naturally comes together?
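To make that batching trade-off concrete, here is a minimal, hypothetical Direct3D 11-style sketch. None of these names, buffers, or shaders come from Intel's demo; they only illustrate the idea. The "naive" path issues one draw call per asteroid, which is exactly the draw-call-bound case the comment complains about; the "batched" path packs per-object data into one resource and submits a single instanced draw.

```cpp
// Illustrative sketch of the draw-call trade-off; not Intel's demo code.
#include <d3d11.h>
#include <DirectXMath.h>
#include <vector>

struct PerObjectData {                 // per-asteroid constants (world matrix)
    DirectX::XMFLOAT4X4 world;
};

// "Naive" path: one draw call per object. Easy to author, but the CPU cost of
// validating and submitting N calls is the overhead Direct3D 12 aims to shrink.
void DrawNaive(ID3D11DeviceContext* ctx,
               ID3D11Buffer* perObjectCB,
               const std::vector<PerObjectData>& objects,
               UINT indexCount)
{
    for (const PerObjectData& obj : objects) {
        ctx->UpdateSubresource(perObjectCB, 0, nullptr, &obj, 0, 0);
        ctx->VSSetConstantBuffers(0, 1, &perObjectCB);
        ctx->DrawIndexed(indexCount, 0, 0);   // 50,000 objects -> 50,000 calls
    }
}

// "Batched" path: per-object data lives in one big buffer (exposed here as a
// shader resource view) and a single instanced call draws every asteroid.
void DrawBatched(ID3D11DeviceContext* ctx,
                 ID3D11ShaderResourceView* perObjectSRV,
                 UINT indexCount,
                 UINT instanceCount)
{
    ctx->VSSetShaderResources(0, 1, &perObjectSRV);
    ctx->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);  // one call
}
```

The batched path is faster on Direct3D 11, but it forces content to be authored so that objects share geometry and materials; the argument above is that a cheaper submission path makes the naive version acceptable more often.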

That, of course, depends on how much performance improvement we will see from DirectX 12, compared to theoretical maximum efficiency. If pushing two workloads through a DX12 GPU takes about the same time as pushing one double-sized workload, then developers can simply implement whatever solution is most direct.

intel-dx12-unlockedFPS-2.jpg

Maximum power when switching to DirectX 12 mode.

If, on the other hand, pushing two workloads is 1000x slower than pushing a single, double-sized one, but DirectX 11 was 10,000x slower, then it could be less relevant, because developers will still need to do their tricks in those situations. The closer the two get, the fewer occasions on which strict optimization is necessary.

If there are any DirectX 11 game developers, artists, and producers out there, we would like to hear from you. How much would a (let's say) 90% reduction in draw call latency (which is around what Mantle claims) give you, in terms of fewer required optimizations? Can you afford to solve problems "the naive way" now? Some of the time? Most of the time? Would it still be worth it to do things like object instancing and fewer, larger materials and shaders? How often?
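For a rough sense of scale, here is a small back-of-the-envelope sketch. The 40 µs per-call CPU cost is an assumed, illustrative number, not a measured figure from Mantle, Intel, or Microsoft; the point is only how a 90% reduction stretches the draw-call budget inside a 60 Hz frame.

```cpp
#include <cstdio>

int main() {
    const double frameBudgetMs = 1000.0 / 60.0;   // ~16.67 ms per frame at 60 Hz
    const double costPerCallUs = 40.0;            // assumed CPU cost per draw call
    const double reducedCostUs = costPerCallUs * (1.0 - 0.90);  // 90% cheaper submission

    const double callsBefore = frameBudgetMs * 1000.0 / costPerCallUs;
    const double callsAfter  = frameBudgetMs * 1000.0 / reducedCostUs;

    std::printf("CPU-bound draw calls per 60 Hz frame: ~%.0f before, ~%.0f after\n",
                callsBefore, callsAfter);
    return 0;
}
```

Under those assumptions the budget goes from a few hundred calls per frame to a few thousand, which is the difference between needing aggressive batching and being able to draw many objects "the naive way".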

To boldly go where no 290X has gone before?

Subject: Graphics Cards | August 13, 2014 - 03:11 PM |
Tagged: factory overclocked, sapphire, R9 290X, Vapor-X R9 290X TRI-X OC

As far as factory overclocks go, the 1080MHz core and 5.64GHz RAM on the new Sapphire Vapor-X 290X are impressive and take the prize for the highest factory overclock on this card that [H]ard|OCP has seen yet.  That didn't stop them from pushing it to 1180MHz and 5.9GHz after a little work, which is even more impressive.  At both the factory and manual overclocks the card handily beat the reference model, and the manually overclocked benchmarks could meet or beat the overclocked MSI GTX 780 Ti GAMING 3G OC card.  Speed is not the only good feature: Intelligent Fan Control keeps two of the three fans from spinning when the GPU is under 60C, which vastly reduces the noise produced by this card.  It is currently selling for $646, lower than the $710 the GeForce currently sells for.

1406869221rJVdvhdB2o_1_6_l.jpg

"We take a look at the SAPPHIRE Vapor-X R9 290X TRI-X OC video card which has the highest factory overclock we've ever encountered on any AMD R9 290X video card. This video card is feature rich and very fast. We'll overclock it to the highest GPU clocks we've seen yet on R9 290X and compare it to the competition."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP
Manufacturer: ASUS

The Waiting Game

NVIDIA G-Sync was announced at a media event held in Montreal way back in October, and promised to revolutionize the way the display and graphics card worked together to present images on the screen. It was designed to remove hitching, stutter, and tearing -- almost completely. Since that fateful day in October of 2013, we have been waiting. Patiently waiting. We were waiting for NVIDIA and its partners to actually release a monitor that utilizes the technology and that can, you know, be purchased.

In December of 2013 we took a look at the ASUS VG248QE monitor, the display for which NVIDIA released a mod kit to allow users who already owned it to upgrade to G-Sync compatibility. It worked, and I even came away impressed. I noted in my conclusion that, “there isn't a single doubt that I want a G-Sync monitor on my desk” and, “my short time with the NVIDIA G-Sync prototype display has been truly impressive…”. That was nearly 7 months ago, and I don’t think anyone at that time really believed it would be THIS LONG before real monitors began to show up in the hands of gamers around the world.

IMG_9328.JPG

Since NVIDIA’s October announcement, AMD has been on a marketing path with a technology they call “FreeSync” that claims to be a cheaper, standards-based alternative to NVIDIA G-Sync. They first previewed the idea of FreeSync on a notebook device during CES in January and then showed off a prototype monitor in June during Computex. Even more recently, AMD has posted a public FAQ that gives more details on the FreeSync technology and how it differs from NVIDIA’s creation; it has raised something of a stir with its claims on performance and cost advantages.

That doesn’t change the product that we are reviewing today of course. The ASUS ROG Swift PG278Q 27-in WQHD display with a 144 Hz refresh rate is truly an awesome monitor. What did change is the landscape, from NVIDIA's original announcement until now.

Continue reading our review of the ASUS ROG Swift PG278Q 2560x1440 G-Sync Monitor!!

Time to update your Gallium3D

Subject: General Tech, Graphics Cards | August 6, 2014 - 10:34 AM |
Tagged: radeon, Gallium3D, catalyst 14.6 Beta, linux, ubuntu 14.04

The new Gallium3D code is up against the closed-source Catalyst 14.6 Beta, running under Ubuntu 14.04 on both the 3.14 and 3.16 Linux kernels, giving Phoronix quite a bit of testing to do.  They have numerous cards in their test, ranging from an HD 6770 to an R9 290, though unfortunately there are no Gallium3D results for the R9 290 as it will not function until the release of the Linux 3.17 kernel.  Overall, the 14.6 Beta remains the best performer, but the open source alternative is quickly closing the gap.

image.php_.jpg

"After last week running new Nouveau vs. NVIDIA proprietary Linux graphics benchmarks, here's the results when putting AMD's hardware on the test bench and running both their latest open and closed-source drivers. Up today are the results of using the latest Radeon Gallium3D graphics code and Linux kernel against the latest beta of the binary-only Catalyst driver."

Here is some more Tech News from around the web:

Tech Talk

Source: Phoronix
Manufacturer: ASUS

Experience with Silent Design

In the periods between major GPU releases, companies like ASUS have the ability to really dig down and engineer truly unique products. With the expanded time between launches from either NVIDIA or AMD, these products have continued evolving to offer better features and experiences than any graphics card before them. The ASUS Strix GTX 780 is exactly one of those solutions – taking a GTX 780 GPU originally released in May of last year and twisting it into a new design that offers better cooling, a better power design, and lower noise levels.

With the Strix GTX 780, ASUS intended to create a card that is perfect for high-end PC gamers without crossing into the realm of bank-breaking prices. They chose the GeForce GTX 780 GPU from NVIDIA at a significant price drop from the GTX 780 Ti, with only a modest performance drop. They doubled the reference memory capacity from 3GB to 6GB of GDDR5, to assuage any buyer’s worry that 3GB isn’t enough for multi-screen Surround gaming or 4K gaming. And they changed the cooling solution to offer near-silent operation in “low impact” gaming titles.

The ASUS Strix GTX 780 Graphics Card

The ASUS Strix GTX 780 card is a pretty large beast, both in physical size and in performance. The cooler is a slightly modified version of the very popular DirectCU II thermal design used in many custom-built ASUS graphics cards. It has a heat dissipation area more than twice that of the reference NVIDIA cooler and uses larger fans that can spin slower (and quieter) while delivering the improved cooling capacity.

IMG_0325.JPG

Out of the box, the ASUS Strix GTX 780 will run at an 889 MHz base clock and a 941 MHz Boost clock, a fairly modest increase over the 863/900 MHz rates of the reference card. Obviously, with much better cooling and a lot of work done on the PCB of this custom design, users will have a lot of headroom to overclock on their own, but I continue to implore companies like ASUS and MSI to up the ante out of the box! One area where ASUS does impress is with the memory – the Strix card features a full 6GB of GDDR5 running at 6.0 GHz, twice the capacity of the reference GTX 780 (and even GTX 780 Ti) cards. If you had any concerns about Surround or 4K gaming, know that memory capacity will not be a problem. (Though raw compute power may still be.)

Continue reading our review of the ASUS Strix GTX 780 6GB Graphics Card!!

Rumor: NVIDIA GeForce GTX 880 Actually Coming in September?

Subject: General Tech, Graphics Cards | August 3, 2014 - 01:59 PM |
Tagged: nvidia, maxwell, gtx 880

Just recently, we posted a story claiming that NVIDIA was preparing to launch high-end Maxwell in the October/November time frame. Apparently, that was generous. The graphics company is now said to be announcing its GeForce GTX 880 in mid-September, with availability coming later in the month. It is expected to be based on the GM204 architecture (which previous rumors claim is 28nm).

nvidia-geforce.png

It is expected that the GeForce GTX 880 will be available with 4GB of video memory, with an 8GB version possible at some point. As someone who runs multiple (five) monitors, I can tell you that 2GB is not enough for my use case. Windows 7 says the same. It kicks me out of applications to tell me that it does not have enough video memory. That alone would be enough reason for me to get more GPU memory.

We still do not know how many CUDA cores will be present in the GM204 chip, or whether the GeForce GTX 880 will have all of them enabled (but I would be surprised if it didn't). Without any way to derive its theoretical performance, we cannot compare it against the GTX 780 or 780 Ti. It could be significantly faster, it could be marginally faster, or it could be somewhere in between.

But we will probably find out within two months.

Source: Videocardz

AMD Releases FreeSync Information as a FAQ

Subject: General Tech, Graphics Cards, Displays | July 29, 2014 - 06:02 PM |
Tagged: vesa, nvidia, g-sync, freesync, DisplayPort, amd

Dynamic refresh rates have two main purposes: save power by only forcing the monitor to refresh when a new frame is available, and increase animation smoothness by synchronizing to draw rates (rather than "catching the next bus" at 16.67ms, on the 16.67ms, for 60 Hz monitors). Mobile devices prefer the former, while PC gamers are interested in the latter.
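To put numbers on that "catching the next bus" analogy, here is a small, self-contained sketch; the frame completion times are made up for illustration. With a fixed 60 Hz refresh, a frame that misses a refresh boundary waits for the next 16.67 ms tick, while a dynamic-refresh display (within its supported range) can start scan-out as soon as the frame is ready.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double refreshMs = 1000.0 / 60.0;                    // fixed 60 Hz refresh interval
    const double frameReadyMs[] = { 5.0, 17.0, 30.0, 51.0 };   // illustrative completion times

    for (double ready : frameReadyMs) {
        // Fixed refresh: the frame waits for the next multiple of ~16.67 ms.
        double fixedDisplayMs    = std::ceil(ready / refreshMs) * refreshMs;
        // Dynamic refresh: scan-out can begin as soon as the frame is done.
        double adaptiveDisplayMs = ready;
        std::printf("frame ready at %5.1f ms -> fixed 60 Hz shows it at %5.1f ms, adaptive at %5.1f ms\n",
                    ready, fixedDisplayMs, adaptiveDisplayMs);
    }
    return 0;
}
```

A frame finished at 17 ms, for example, waits until the 33.3 ms refresh on a fixed 60 Hz panel but is shown immediately with a dynamic refresh, which is where the smoothness benefit for PC gamers comes from.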


NVIDIA was first to make this public with G-Sync. AMD responded with FreeSync, starting with a proposal that was later ratified by VESA as DisplayPort Adaptive-Sync. AMD, then, took up "Project FreeSync" as an AMD "hardware/software solution" to make use of DisplayPort Adaptive-Sync in a way that benefits PC gamers.

Today's news is that AMD has just released an FAQ which explains the standard much more thoroughly than they have in the past. For instance, it clarifies the distinction between DisplayPort Adaptive-Sync and Project FreeSync. Prior to the FAQ, I thought that FreeSync became DisplayPort Adaptive-Sync, and that was that. Now, it is sounding a bit more proprietary, just built upon an open, VESA standard.

If interested, check out the FAQ at AMD's website.

Source: AMD

NVIDIA 340.52 Drivers Are Now Available

Subject: General Tech, Graphics Cards | July 29, 2014 - 05:27 PM |
Tagged: nvidia, geforce, graphics drivers, shield tablet, shield

Alongside the NVIDIA SHIELD Tablet launch, the company has released its GeForce 340.52 drivers. This version allows compatible devices to use GameStream, and it is also optimized for Metro: Redux and Final Fantasy XIV (China).

nvidia-geforce.png

The driver supports GeForce 8-series graphics cards and later. As a reminder, for GPUs that are not based on the Fermi architecture (or newer), 340.xx will be your last driver version. NVIDIA does intend to provide extended support for 340.xx (and earlier) drivers until April 1st, 2016. But when Fermi, Kepler, and Maxwell move on to 343.xx, Tesla and earlier will not. That said, most of the content of this driver is aimed at Kepler and later. Either way, the driver itself is available for those pre-Fermi cards.

I should also mention that a user on Anandtech's forums noted the removal of Miracast from NVIDIA's documentation. NVIDIA has yet to comment, although it is still very short notice at this point.

Source: NVIDIA

This high end multi-GPU 4k showdown includes overclocking

Subject: Graphics Cards | July 29, 2014 - 11:27 AM |
Tagged: asus, gtx 780, R9 290X DC2 OC, sli, crossfire, STRIX GTX 780 OC 6GB, R9 290X

We have seen [H]ard|OCP test ASUS' STRIX GTX 780 OC 6GB and R9 290X DirectCU II before, but this time they have been overclocked and paired up for a 4K showdown.  For a change, NewEgg gives the price advantage to AMD, $589 versus $599 at the time of writing (with odd blips in prices on Amazon).  The GTX 780 has been set to 1.2GHz core and 6.6GHz memory while the 290X runs at 1.1GHz and 5.6GHz; keep in mind dual GPU setups may not reach the same frequencies as single cards.  Read on for their conclusions and decide whether you prefer to brag about a higher overclock or to have better overall performance.

14060235239aDa7rbLPT_1_1_l.jpg

"We take the ASUS STRIX GTX 780 OC 6GB video card and run two in SLI and overclock both of these at 4K resolutions to find the ultimate gameplay performance with 6GB of VRAM. We will also compare these to two overclocked ASUS Radeon R9 290X DirectCU II CrossFire video cards for the ultimate VRAM performance showdown."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP