Google I/O 2014: Android Extension Pack Announced

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | July 7, 2014 - 04:06 AM |
Tagged: tegra k1, OpenGL ES, opengl, Khronos, google io, google, android extension pack, Android

Sure, this is a little late. Honestly, when I first heard the announcement, I did not see much news in it. The slide from the keynote (below) showed four points: Tessellation, Geometry Shaders, Computer [sic] Shaders, and ASTC Texture Compression. I thought tessellation and geometry shaders were part of the OpenGL ES 3.1 spec, like compute shaders. This led to my immediate reaction: "Oh cool. They implemented OpenGL ES 3.1. Nice. Not worth a news post."

google-android-opengl-es-extensions.jpg

Image Credit: Blogogist

Apparently, they were not part of the ES 3.1 spec (although compute shaders are). My mistake. It turns out that Google is cooking up their own vendor-specific extensions. This is quite interesting, as it adds functionality to the API without the developer needing to target a specific GPU vendor (INTEL, NV, ATI, AMD), wait for approval from the Architecture Review Board (ARB), or use multi-vendor extensions (EXT). In other words, it sounds like developers can target Google's extensions without knowing the actual hardware.

Hiding the GPU vendor from the developer is not the only reason for Google to host their own vendor extension. The added features are mostly from full OpenGL. This makes sense, because the pack was announced with NVIDIA and their Kepler-based Tegra K1 SoC. Full OpenGL compatibility was NVIDIA's selling point for the K1, due to its heritage as a desktop GPU. But instead of requiring apps to be programmed with full OpenGL in mind, Google's extension pack brings those features to OpenGL ES 3.1. If developers want to dip their toes into full OpenGL, they can add a few Android Extension Pack features to their existing ES engine.
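In practice, an app discovers the pack by checking the GL extension string rather than probing for a specific GPU. Here is a minimal sketch of that check in Python, assuming the extension token GL_ANDROID_extension_pack_es31a and an extension string obtained elsewhere (on a real device it would come from glGetString(GL_EXTENSIONS)):

```python
def has_android_extension_pack(extension_string: str) -> bool:
    """Return True if the Android Extension Pack token appears in a
    space-separated GL extension string (as reported by the driver)."""
    return "GL_ANDROID_extension_pack_es31a" in extension_string.split()

# Hypothetical extension string a driver might report:
reported = "GL_OES_texture_npot GL_ANDROID_extension_pack_es31a GL_EXT_sRGB"
print(has_android_extension_pack(reported))  # True
```

The point of the single token is exactly what the article describes: one check covers tessellation, geometry shaders, and ASTC together, instead of one check per vendor extension.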

Epic Games' Unreal Engine 4 "Rivalry" Demo from Google I/O 2014.

The last feature, ASTC Texture Compression, is an interesting one. Apparently the Khronos Group, owners of OpenGL, were looking for a new generation of texture compression technology. NVIDIA suggested their ZIL technology, while ARM and AMD proposed "Adaptive Scalable Texture Compression". ARM and AMD won, although the Khronos Group stated that the competition between ARM and NVIDIA made both proposals better than either would have been in isolation.

Android Extension Pack is set to launch with "Android L". The next release of Android is not currently associated with a snack food. If I were their marketer, I would block out the next three versions as 5.x, and name them (L)emon, then (M)eringue, and finally (P)ie.

Would I do anything with the two skipped letters before pie? (N)(O).

ASUS STRIX GTX 780 OC 6GB in SLI, better than a Titan and less expensive to boot!

Subject: Graphics Cards | July 4, 2014 - 01:40 PM |
Tagged: STRIX GTX 780 OC 6GB, sli, crossfire, asus, 4k

Multiple monitor and 4K testing of the ASUS STRIX GTX 780 OC cards in SLI is not about the 52MHz out-of-box overclock but about the 12GB of VRAM your system will have. Apart from an issue with BF4, [H]ard|OCP tested the STRIX pair against a pair of reference GTX 780s and R9 290X cards at resolutions of 5760x1200 and 3840x2160. The extra RAM made the STRIX shine in comparison to the reference card: not only was the performance better, but [H] could raise many of the graphical settings. Even so, it was not enough to push its performance past the R9 290X cards in CrossFire. One other takeaway from this review is that even 6GB of VRAM is not enough to run Watch_Dogs with Ultra textures at these resolutions.

1402436254j0CnhAb2Z5_1_20_l.jpg

"You’ve seen the new ASUS STRIX GTX 780 OC Edition 6GB DirectCU II video card, now let’s look at two of these in an SLI configuration! We will explore 4K and NV Surround performance with two ASUS STRIX video cards for the ultimate high-resolution experience and see if the extra memory helps this GPU make better strides at high resolutions."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Intel's Knights Landing (Xeon Phi, 2015) Details

Subject: General Tech, Graphics Cards, Processors | July 2, 2014 - 03:55 AM |
Tagged: Intel, Xeon Phi, xeon, silvermont, 14nm

Anandtech has just published a large editorial detailing Intel's Knights Landing. Mostly, it is stuff that we already knew from previous announcements and leaks, such as one by VR-Zone from last November (which we reported on). Officially, few details were given back then, except that it would be available as either a PCIe-based add-in board or as a socketed, bootable, x86-compatible processor based on the Silvermont architecture. Its many cores, threads, and 512-bit registers are each pretty weak compared to Haswell, for instance, but combine to deliver about 3 TFLOPS of double-precision performance.

itsbeautiful.png

Not enough graphs. Could use another 256...

The best way to imagine it is running a PC with a modern, Silvermont-based Atom processor -- only with up to 288 logical processors listed in your Task Manager (72 physical cores with four-way Hyper-Threading).

The main limitation of GPUs (and similar coprocessors), however, is memory bandwidth. GDDR5 is often the main bottleneck of compute performance and just about the first thing to be optimized. To compensate, Intel is packaging up to 16GB of stacked DRAM on the package itself. This RAM is based on "Hybrid Memory Cube" (HMC), developed by Micron Technology and supported by the Hybrid Memory Cube Consortium (HMCC). While the actual memory used in Knights Landing is derived from HMC, it uses a proprietary interface customized for Knights Landing. Its bandwidth is rated at around 500GB/s. For comparison, the NVIDIA GeForce Titan Black has 336.4GB/s of memory bandwidth.
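To put that gap in perspective, the Titan Black's figure falls straight out of the usual GDDR5 arithmetic (effective data rate times bus width). The sketch below is back-of-the-envelope math, not vendor specifications:

```python
def gddr5_bandwidth_gbps(data_rate_gtps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: effective transfer rate (GT/s) times bus width in bytes."""
    return data_rate_gtps * (bus_width_bits / 8)

# GeForce Titan Black: 7 GT/s effective data rate on a 384-bit bus
titan_black = gddr5_bandwidth_gbps(7.0, 384)  # 336.0 GB/s
knights_landing = 500.0                       # Intel's rated on-package bandwidth, GB/s

print(f"Knights Landing advantage: {knights_landing / titan_black:.2f}x")
```

Roughly a 1.5x advantage over one of the fastest GDDR5 configurations of the day, which is why the on-package memory is the headline feature here.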

Intel and Micron have worked together in the past. In 2006, the two companies formed "IM Flash" to produce the NAND flash for Intel and Crucial SSDs. Crucial is Micron's consumer-facing brand.

intel-knights-landing.jpg

So the vision for Knights Landing seems to be the bridge between CPU-like architectures and GPU-like ones. For compute tasks, GPUs edge out CPUs by crunching through bundles of similar tasks at the same time, across many (hundreds or even thousands of) compute units. The difference with (at least socketed) Xeon Phi processors is that, unlike most GPUs, Intel does not rely upon APIs, such as OpenCL, and drivers to translate a handful of functions into bundles of GPU-specific machine language. Instead, especially if the Xeon Phi is your system's main processor, it will run standard, x86-based software. The software will just run slowly, unless it is capable of vectorizing itself and splitting across multiple threads. Obviously, OpenCL (and other APIs) would make this parallelization easy, by their host/kernel design, but it is apparently not required.

It is a cool way that Intel arrives at the same goal, based on their background. Especially when you mix-and-match Xeons and Xeon Phis on the same computer, it is a push toward heterogeneous computing -- with a lot of specialized threads backing up a handful of strong ones. I just wonder if providing a more-direct method of programming will really help developers finally adopt massively parallel coding practices.

I mean, without even considering GPU compute, how efficient is most software at splitting into even two threads? Four threads? Eight threads? Can this help drive heterogeneous development? Or will this product simply try to appeal to those who are already considering it?
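Amdahl's law makes that worry concrete: if only a fraction of a program can actually run in parallel, even 288 threads barely help. This is an illustrative calculation with made-up fractions, not a benchmark of any real workload:

```python
def amdahl_speedup(parallel_fraction: float, threads: int) -> float:
    """Upper bound on speedup when only `parallel_fraction` of the work scales
    across `threads` threads (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / threads)

# A program that is only half parallelizable never doubles in speed,
# no matter how many of Knights Landing's 288 threads it gets.
for p in (0.50, 0.95):
    print(f"p={p}: 2 threads -> {amdahl_speedup(p, 2):.2f}x, "
          f"288 threads -> {amdahl_speedup(p, 288):.2f}x")
```

Even at 95% parallel code, 288 threads deliver under a 19x speedup, which is the heart of the question above: the hardware is only as useful as the software's ability to split itself up.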

Source: Intel

AMD Catalyst 14.6 RC is now available

Subject: Graphics Cards | June 24, 2014 - 07:00 PM |
Tagged: amd, beta, Catalyst 14.6 RC

Starting with AMD Catalyst 14.6 Beta, AMD will no longer support Windows 8.0 (and the WDDM 1.2 driver), so Windows 8.0 users should upgrade to Windows 8.1. AMD Catalyst 14.4 will continue to work on Windows 8.0.

The WDDM 1.1 Windows 7 driver currently works on Win 7 and in a future release will be used to install updated drivers under Windows 8.0.

Features of the latest Catalyst include:

  • Plants vs. Zombies (Direct3D performance improvements):
    • AMD Radeon R9 290X - 1920x1080 Ultra – improves up to 11%
    • AMD Radeon R9 290X - 2560x1600 Ultra – improves up to 15%
    • AMD Radeon R9 290X CrossFire configuration (3840x2160 Ultra) - 92% scaling
  • 3DMark Sky Diver improvements:
    • AMD A4 6300 – improves up to 4%
    • Enables AMD Dual Graphics / AMD CrossFire support
  • GRID Autosport: AMD CrossFire profile
  • Wildstar:
    • Power Xpress profile
    • Performance improvements for smoother gameplay
  • Watch Dogs: AMD CrossFire – Frame pacing improvements
  • Battlefield Hardline Beta: AMD CrossFire profile

Get the driver and more information right here.

images.jpg

Source: AMD

AMD Planning Open Source GameWorks Competitor, Mantle for Linux

Subject: Graphics Cards | June 19, 2014 - 10:35 AM |
Tagged: video, richard huddy, radeon, openworks, Mantle, freesync, amd

On Tuesday, AMD's newly minted Gaming Scientist, Richard Huddy, stopped by the PC Perspective office to talk about the current state of the company's graphics division. The entire video of the interview is embedded below, and several of the points made are quite interesting and newsworthy. During the discussion we hear about Mantle on Linux, a timeline for opening Mantle publicly, and a surprising new idea for a competitor to NVIDIA's GameWorks program.

Richard is new to the company but not new to the industry, having started with 3Dlabs many years ago and taken jobs at NVIDIA, ATI, and Intel before now returning to AMD. The role of Gaming Scientist is to interface directly with game developers and make sure that the GPU hardware designers are working hand in hand with future, high-end graphics technology. In essence, Huddy's job is to make sure AMD continues to innovate on the hardware side to facilitate innovation on the software side.

AMD Planning an "OpenWorks" Program

(33:00) After the volume of discussion surrounding the NVIDIA GameWorks program and its potential to harm the gaming ecosystem by not providing source code in an open manner, Huddy believes that the answer to the problem is simply for NVIDIA to release the SDK with source code publicly. Whether or not NVIDIA takes that advice remains to be seen, but if they don't, it appears that AMD is going down the road of creating its own competing solution that is open and flexible.

The idea of OpenFX, or OpenWorks as Huddy refers to it, is to create an open-source repository for gaming code and effects examples that can be updated, modified, and improved upon by anyone in the industry. AMD would be willing to start the initiative by donating its entire SDK to the platform and then invite other software developers, as well as other hardware developers, to add to or change the collection. The idea is to create a competitor to what GameWorks accomplishes, but in a license-free and open way.

gameworks.jpg

NVIDIA GameWorks has been successful; can AMD OpenWorks derail it?

Essentially the "OpenWorks" repository would work in a similar way to a Linux group where the public has access to the code to submit changes that can be implemented by anyone else. Someone would be able to improve the performance for specific hardware easily but if performance was degraded on any other hardware then it could be easily changed and updated. Huddy believes this is how you move the industry forward and how you ensure that the gamer is getting the best overall experience regardless of the specific platform they are using.

"OpenWorks" is still in the planning stages and AMD is only officially "talking about it" internally. However, bringing Huddy back to AMD wasn't done without some direction already in mind, and it would not surprise me at all if this was essentially a done deal. Huddy believes that other hardware companies like Qualcomm and Intel would participate in such an open system, but the real question is whether or not NVIDIA, as the discrete GPU market share leader, would be willing to participate as well.

Still, this initiative continues to show the differences between the NVIDIA and AMD style of doing things. NVIDIA prefers a more closed system that it has full control over to perfect the experience, to hit aggressive timelines and to improve the ecosystem as they see it. AMD wants to provide an open system that everyone can participate in and benefit from but often is held back by the inconsistent speed of the community and partners. 

Mantle to be Opened by end of 2014, Potentially Coming to Linux

(7:40) The AMD Mantle API has been an industry-changing product; I don't think anyone can deny that. Even if you don't own AMD hardware or don't play any of the games currently shipping with Mantle support, the refocusing on a higher-efficiency API has impacted NVIDIA's direction with DX11, Microsoft's plans for DX12, and perhaps even Apple's direction with Metal. But for a company that pushes the idea of open standards so heavily, AMD has yet to offer up Mantle source code in a similar fashion to its standard SDK. As it stands right now, Mantle is only given to a group of software developers in the beta program and is specifically tuned for AMD's GCN graphics hardware.

mantlepic.jpg

Huddy reiterated that AMD has made a commitment to release a public SDK for Mantle by the end of 2014, which would allow any other hardware vendor to create a driver that could run Mantle game titles. If AMD lives up to its word and releases the full source code for it, then in theory NVIDIA could offer support for Mantle games on GeForce hardware, and Intel could support those same games on Intel HD Graphics. There would be no license fees and no restrictions at all.

The obvious question is whether or not any other IHV would choose to do so, both for competitive reasons and because of the proximity of DX12's release in late 2015. Huddy agrees with me that the pride of these other hardware vendors may prevent them from considering Mantle adoption, though the argument can be made that the work required to implement it properly might not be worth the effort with DX12 (and its very similar feature set) around the corner.

(51:45) When asked about AMD's input on SteamOS and its commitment to the gamers that see that as the future, Huddy mentioned that AMD was considering, but not promising, bringing the Mantle API to Linux. If the opportunity exists, says Huddy, to give the gamer a better experience on that platform with the help of Mantle, and developers ask AMD for that support, then AMD will at the very least "listen to that." It would be incredibly interesting to see a competitor API in the landscape of Linux, where OpenGL is essentially the only game in town.

AMD FreeSync / Adaptive Sync Benefits

(59:15) Huddy discussed the differences, as he sees them, between NVIDIA's G-Sync technology and the AMD option originally called FreeSync but now officially called Adaptive Sync as part of the DisplayPort 1.2a standard. Besides the obvious difference of added hardware and licensing costs, Adaptive Sync is apparently going to be easier to implement, as the maximum and minimum frequencies are negotiated by the display and the graphics card when the monitor is plugged in. G-Sync requires a white list in the NVIDIA driver to work today; as long as NVIDIA keeps that list updated, the impact on gamers buying panels should be minimal. But with DP 1.2a and properly implemented Adaptive Sync monitors, once a driver supports the negotiation, it doesn't require knowledge of the specific model beforehand.

freesync1.jpg

AMD demos FreeSync at Computex 2014

According to Huddy, the new Adaptive Sync specification will go as high as 240 Hz and as low as 9 Hz; these are specifics that were not known before today. Of course, not every panel (and maybe no panel) will support that extreme a range for variable refresh rate technology, but it leaves a lot of potential for improved panel development in the years to come. More likely, you'll see Adaptive Sync-ready displays listing ranges closer to 30-60 Hz or 30-80 Hz initially.
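Those refresh rates translate directly into the per-frame deadlines a panel has to tolerate, which shows why the quoted range is so aggressive. A quick conversion (simple arithmetic, not figures from the spec itself):

```python
def frame_time_ms(refresh_hz: float) -> float:
    """Time between display refreshes, in milliseconds, at a given rate."""
    return 1000.0 / refresh_hz

# The full 9-240 Hz spec range versus a likely first-generation 30-60 Hz panel
for low, high in ((9, 240), (30, 60)):
    print(f"{low}-{high} Hz: frames may arrive {frame_time_ms(high):.1f} ms "
          f"to {frame_time_ms(low):.1f} ms apart")
```

A 9 Hz floor means the panel would have to hold an image for over 111 ms without refreshing, which is why a range that wide is more of a ceiling for future panel development than a near-term product description.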

Prototypes of FreeSync monitors will be going out to some media in the September or October time frame, while public availability will likely occur in the January or February window. 

How does AMD pick game titles for the Never Settle program?

(1:14:00) Huddy describes the fashion in which games are vetted for inclusion in the AMD Never Settle program. The company looks for games that have a good history of course, but also ones that exemplify the use of AMD hardware. Games that benchmark well and have reproducible results that can be reported by AMD and the media are also preferred. Inclusion of an integrated benchmark mode in the game is also a plus as it more likely gets review media interested in including that game in their test suite and also allows the public to run their own tests to compare results. 

Another interesting note: games included in bundles are often picked based on restrictions in certain countries. Germany, for example, has very strict guidelines for violence in games, and thus add-in card partners would much prefer a well-known racing game to an ultra-bloody first-person shooter.

Closing Thoughts

First and foremost, a huge thanks to Richard Huddy for making time to stop by the offices and talk with us. And especially for allowing us to live stream it to our fans and readers. I have had the privilege to have access to some of the most interesting minds in the industry, but they are very rarely open to having our talks broadcast to the world without editing and without a precompiled list of questions. For allowing it, both AMD and Mr. Huddy have gained some respect! 

There is plenty more discussed in the interview including AMD's push to a non-PC based revenue split, whether DX12 will undermine the use of the Mantle API, and how code like TressFX compares to NVIDIA GameWorks. If you haven't watched it yet I think you'll find the full 90 minutes to be quite informative and worth your time.

UPDATE: I know that some of our readers, and some contacts at NVIDIA, took note of Huddy's comments about TressFX from our interview. Essentially, NVIDIA denied that TressFX was actually made available before the release of Tomb Raider. When I asked AMD for clarification, Richard Huddy provided me with the following statement.

I would like to take the opportunity to correct a false impression that I inadvertently created during the interview.

Contrary to what I said, it turns out that TressFX was first published in AMD's SDK _after_ the release of Tomb Raider.

Nonetheless the full source code to TressFX was available to the developer throughout, and we also know that the game was available to NVIDIA several weeks ahead of the actual release for NVIDIA to address the bugs in their driver and to optimize for TressFX.

Again, I apologize for the mistake.

That definitely paints a somewhat different picture of the circumstances around the release of TressFX with the rebooted Tomb Raider title. NVIDIA's complaint that "AMD was doing the same thing" holds a bit more weight. Since Richard Huddy was not with AMD at the time of this arrangement, I can see how he would mix up the specifics, even after being briefed by other staff members.

END UPDATE

If you want to be sure you don't miss any more of our live streaming events, be sure to keep an eye on the schedule on the right hand side of our page or sign up for our PC Perspective Live mailing list right here.

PCPer Live! Interview with AMD's Richard Huddy June 17th, 4pm ET / 1pm PT

Subject: General Tech, Graphics Cards | June 18, 2014 - 05:08 PM |
Tagged: video, richard huddy, live, amd

UPDATE: Did you miss the live event? Well, there's good news and bad news. First, the bad: you can't win any of those prizes we discussed. The good: you can watch the replay posted below!

AMD recently brought back Richard Huddy in the role of Gaming Scientist, acting as the information conduit between hardware development, the software and driver teams and the game developers that make our industry exciting. 

Richard stopped by the offices of PC Perspective to talk about several subjects including his history in the industry (including NVIDIA and Intel), Mantle and other low-level APIs, the NVIDIA GameWorks debate, G-Sync versus FreeSync and a whole lot more.

This is an interview that you won't want to miss! 

On June 3rd it was announced that Richard Huddy, an industry stalwart and veteran of ATI, NVIDIA, and Intel, would be rejoining AMD as Chief Gaming Scientist.

Interesting news is crossing the ocean today as we learn that Richard Huddy, who has previously had stints at NVIDIA, ATI, AMD, and most recently Intel, is teaming up with AMD once again. Richard brings with him years of experience and innovation in the world of developer relations and graphics technology, and is often called "the Godfather of DirectX." By bringing him back, AMD wants to prove to the community it is taking PC gaming seriously.

richardhuddy.jpg

Richard Huddy will be stopping by the PC Perspective offices on June 17th for a live, on-camera interview that you can watch unfold on PC Perspective's Live page. Though we plan to talk about anything and everything centered on gaming and PC hardware, we have a few hot-button topics we know we want to ask about. Those include the AMD versus NVIDIA dispute over GameWorks, AMD's developer relations and the Gaming Evolved program, how AMD feels about the current status of Adaptive Sync (G-Sync-like features), and much more.

We want to take your questions as well, which is one of the reasons for this post. Richard has agreed to answer as many inquiries as possible in our allotted time and to help make this easier, we are asking our readers to give us their questions and input in the comments section of this news post. We will still take live questions in the chat room during the event, but if your question is here then you have a much better chance of that being seen and addressed.

If the intensity of these topics wasn't enough to entice you to watch the live stream, then how about this? We have a massive prize pool provided by AMD that is unmatched in our live stream history! Here's the list:

  • 1x AMD Radeon R9 295X2 8GB Graphics Card plus a power supply!
  • 1x MSI Radeon R9 280X
  • 1x Sapphire Radeon R9 280
  • 1x MSI Radeon R9 270
  • 1x HIS Radeon R9 270
  • 1x Sapphire R7 260X
  • 15x Never Settle Forever codes

Yup, that's all correct; no typos there. All you have to do is be on the PC Perspective Live! page during the stream on June 17th! We will be giving all of this hardware away to those watching the interview.

pcperlive.png

AMD's Richard Huddy Interview and Q&A

4pm ET / 1pm PT - June 17th

PC Perspective Live! Page

How can you be sure you are here at the right time? If you want some additional security besides just setting your own alarm, you can sign up for our PC Perspective Live mailing list, a simple email list that is used ONLY for these types of live events. Just head over to this page, give us your name and email address, and we'll let you know before we start the event!

I am very excited to talk with Richard again and I think that anyone interested in PC gaming is going to want to take part in this discussion!


Source: PCPer Live!

ASUS got a brand new brand; meet STRIX

Subject: Graphics Cards | June 17, 2014 - 03:29 PM |
Tagged: asus, STRIX GTX 780 OC 6GB, DirectCU II, silent, factory overclocked

The new ASUS STRIX series, which currently has only one member, is a custom-built card designed for silent operation: the fans do not spin up until the GPU hits 65°C. ASUS also doubled the RAM on the first model, the GTX 780 OC 6GB, which should help with 4K gaming, and applied a 52MHz out-of-box overclock to the GPU. [H]ard|OCP had a chance to try out this new card and test it against the R9 290X and a standard GTX 780. Considering its $100 price premium, this card needs to perform significantly better than the base GTX 780 and in line with the R9 290X, which it does out of the box.

Of course, the first thing you do with a silent card is attempt to overclock it until it screams, which [H] did, managing to get the GPU up to 1.215GHz on air for noticeable improvements. Stay tuned for 4K and SLI results in the near future.

1402436254j0CnhAb2Z5_1_11_l.jpg

"We take the new ASUS STRIX GTX 780 OC 6GB video card and push it to its limits while overclocking. We will compare performance overclocked with a GeForce GTX 780 Ti, AMD Radeon R9 290X and find out what gameplay improvements overclocking allows. This card isn't just silent, its got overclocking prowess too."

Here are some more Graphics Card articles from around the web:

Graphics Cards

 

Source: [H]ard|OCP

AMD Restructures. Lisa Su Is Now COO.

Subject: Editorial, General Tech, Graphics Cards, Processors, Chipsets | June 13, 2014 - 06:45 PM |
Tagged: x86, restructure, gpu, arm, APU, amd

According to VR-Zone, AMD reworked their business last Thursday, sorting each of their projects into two divisions and moving some executives around. The company is now segmented into the "Enterprise, Embedded, and Semi-Custom Business Group" and the "Computing and Graphics Business Group". The company used to be divided between "Computing Solutions", which handled CPUs, APUs, chipsets, and so forth; "Graphics and Visual Solutions", which is best known for GPUs but also contains console royalties; and "All Other", which was... everything else.

amd-new2.png

Lisa Su, former general manager of global business, has moved up to Chief Operating Officer (COO), along with other changes.

This restructure makes sense for a couple of reasons. First, it pairs some unprofitable ventures with other, highly profitable ones. AMD's graphics division has been steadily adding profitability to the company while its CPU division has been mostly losing money. Second, "All Other" is about as nebulous as a name can get. Instead of having three unbalanced divisions, one of which makes no sense to someone glancing at AMD's quarterly earnings reports, they should now have two roughly equal segments.

At the very least, it should look better to an uninformed investor. Someone who does not know the company might look at the sheet and assume that, if AMD divested from everything except graphics, that the company would be profitable. If, you know, they did not know that console contracts came into their graphics division because their compute division had x86 APUs, and so forth. This setup is now more aligned to customers, not products.

Source: VR-Zone

GeForce GTX Titan Z Overclocking Testing

Subject: Graphics Cards | June 12, 2014 - 06:17 PM |
Tagged: overclocking, nvidia, gtx titan z, geforce

Earlier this week I posted a review of the NVIDIA GeForce GTX Titan Z graphics card, a dual-GPU Kepler GK110 part that currently sells for $3000. If you missed that article you should read it first and catch up, but the basic summary was that, for PC gamers, it's slower than, and twice the price of, AMD's Radeon R9 295X2.

During that article though I mentioned that the Titan Z had more variable clock speeds than any other GeForce card I had tested. At the time I didn't go any further than that since the performance of the card already pointed out the deficit it had going up against the R9 295X2. However, several readers asked me to dive into overclocking with the Titan Z and with that came the need to show clock speed changes. 

My overclocking was done through EVGA's PrecisionX software and we measured clock speeds with GPU-Z. The first step in overclocking an NVIDIA GPU is to simply move up the Power Target sliders and see what happens. This tells the card that it is allowed to consume more power than it would normally be allowed to, and then thanks to GPU Boost technology, the clock speed should scale up naturally. 

titanzoc.jpg

Click to Enlarge

And that is exactly what happened. I ran through 30 minutes of looped testing with Metro: Last Light at stock settings, with the Power Target at 112%, with the Power Target at 120% (the maximum setting) and then again with the Power Target at 120% and the GPU clock offset set to +75 MHz. 

That 75 MHz offset was the highest setting we could get to run stable on the Titan Z, which brings the Base clock up to 781 MHz and the Boost clock to 951 MHz. Though, as you'll see in our frequency graphs below, the card was still reaching well above that.

clockspeedtitanz.png

Click to Enlarge

This graph shows clock rates of the GK110 GPUs on the Titan Z over the course of 25 minutes of looped Metro: Last Light gaming. The green line is the stock performance of the card without any changes to the power settings or clock speeds. While it starts out well enough, hitting clock rates of around 1000 MHz, it quickly dives and by 300 seconds of gaming we are often going at or under the 800 MHz mark. That pattern is consistent throughout the entire tested time and we have an average clock speed of 894 MHz.

Next up is the blue line, generated by simply moving the power target from 100% to 112%, giving the GPUs a little more thermal headroom to play with. The results are impressive, with a much more consistent clock speed. The yellow line, for the power target at 120%, is even better with a tighter band of clock rates and with a higher average clock. 

Finally, the red line represents the 120% power target with a +75 MHz offset in PrecisionX. There we see a clock speed consistency matching the yellow line but offset up a bit, as we have been taught to expect with NVIDIA's recent GPUs. 

clockspeedtitan-avg.png

Click to Enlarge

The result of all this data comes together in the bar graph here that lists the average clock rates over the entire 25 minute test runs. At stock settings, the Titan Z was able to hit 894 MHz, just over the "typical" boost clock advertised by NVIDIA of 876 MHz. That's good news for NVIDIA! Even though there is a lot more clock speed variance than I would like to see with the Titan Z, the clock speeds are within the expectations set by NVIDIA out of the gate.

Bumping up that power target though will help out gamers that do invest in the Titan Z quite a bit. Just going to 112% results in an average clock speed of 993 MHz, a 100 MHz jump worth about 11% overall. When we push that power target up even further, and overclock the frequency offset a bit, we actually get an average clock rate of 1074 MHz, 20% faster than the stock settings. This does mean that our Titan Z is pulling more power and generating more noise (quite a bit more actually) with fan speeds going from around 2000 to 2700 RPM.
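The percentages above fall straight out of the averaged clock rates; a quick sanity check of the arithmetic:

```python
def pct_gain(new_mhz: float, baseline_mhz: float) -> float:
    """Percentage improvement of `new_mhz` over `baseline_mhz`."""
    return (new_mhz - baseline_mhz) / baseline_mhz * 100.0

stock = 894.0           # average clock at stock settings, MHz
pt_112 = 993.0          # average clock at 112% power target, MHz
pt_120_offset = 1074.0  # 120% power target plus +75 MHz offset, MHz

print(f"112% power target: +{pct_gain(pt_112, stock):.0f}%")
print(f"120% + offset:     +{pct_gain(pt_120_offset, stock):.0f}%")
```

The 99 MHz bump at 112% rounds to the "100 MHz jump worth about 11%" quoted above, and the fully overclocked configuration lands right at 20%.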

MetroLL_2560x1440_OFPS.png

MetroLL_2560x1440_PER.png

MetroLL_3840x2160_OFPS.png

MetroLL_3840x2160_PER.png

At both 2560x1440 and 3840x2160, in the Metro: Last Light benchmark we ran, the added performance of the Titan Z does put it at the same level as the Radeon R9 295X2. Of course, it goes without saying that we could also overclock the 295X2 a bit further to improve ITS performance, but this is an exercise in education.

IMG_0270.JPG

Does it change my stance or recommendation for the Titan Z? Not really; I still think it is overpriced compared to the performance you get from AMD's offerings and from NVIDIA's own lower priced GTX cards. However, it does lead me to believe that the Titan Z could have been fixed, offering performance at least on par with the R9 295X2, had NVIDIA been willing to break PCIe power specs and increase noise.

UPDATE (6/13/14): Some of our readers seem to be pretty confused about things so I felt the need to post an update to the main story here. One commenter below mentioned that I was one of "many reviewers that pounded the R290X for the 'throttling issue' on reference coolers" and thinks I am going easy on NVIDIA with this story. However, there is one major difference that he seems to overlook: the NVIDIA results here are well within the rated specs. 

When I published one of our stories looking at clock speed variance of the Hawaii GPU in the form of the R9 290X and R9 290, our results showed that the clock speeds of these cards were dropping well below the rated clock speed of 1000 MHz. Instead I saw clock speeds that reached as low as 747 MHz and stayed near the 800 MHz mark. The problem with that was in how AMD advertised and sold the cards, using only the phrase "up to 1.0 GHz" in its marketing. I recommended that AMD begin selling the cards with a rated base clock and a typical boost clock instead of labeling them only with the, at the time, totally incomplete "up to" rating. In fact, here is the exact quote from this story: "AMD needs to define a "base" clock and a "typical" clock that users can expect." Ta da.

The GeForce GTX Titan Z though, as we look at the results above, is rated and advertised with a base clock of 705 MHz and a boost clock of 876 MHz. The clock speed comparison graph at the top of the story shows the green line (the card at stock) never dropping to that 705 MHz base clock while averaging 894 MHz. That average is ABOVE the rated boost clock of the card. So even though the GPU is changing between frequencies more often than I would like, the clock speeds are within the bounds set by NVIDIA. That was clearly NOT THE CASE when AMD launched the R9 290X and R9 290. If NVIDIA had sold the Titan Z with only the specification of "up to 1006 MHz" or something like it, then the same complaint would be made. But it was not sold that way.

The card isn't "throttling" at all, in fact, as someone suggests below. That term insinuates that it is running below a rated performance level. It is acting in accordance with the GPU Boost technology that NVIDIA designed.
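The distinction argued above reduces to a simple check against the advertised base clock; a minimal sketch (the clock values come from the measurements and specifications discussed in this story):

```python
# Sketch of the distinction drawn above: a card is "throttling" only
# if it runs below its advertised base clock; anything between base
# and boost is normal GPU Boost behavior.
def is_throttling(observed_mhz, base_clock_mhz):
    return observed_mhz < base_clock_mhz

# Titan Z: 705 MHz base, 876 MHz boost, 894 MHz measured average.
print(is_throttling(894, 705))  # False -- within spec

# The R9 290X, as originally marketed, advertised only "up to 1.0 GHz"
# with no base clock, so its ~800 MHz average had no advertised floor
# to be judged against.
```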

Some users seem concerned about temperature: the Titan Z will hit 80-83C in my testing, both stock and overclocked, and simply scales the fan speed to compensate accordingly. Yes, overclocked, the Titan Z gets quite a bit louder but I don't have sound level tests to show that. It's louder than the R9 295X2 for sure but definitely not as loud as the R9 290 in its original, reference state.

Finally, some of you seem concerned that I was restricted by NVIDIA on what we could test and talk about on the Titan Z. Surprise, surprise, NVIDIA didn't send us this card to test at all! In fact, they were kind of miffed when I did the whole review and didn't get into showing CUDA benchmarks. So, there's that.

The R9 280 versus the GTX 760 in a photo finish

Subject: Graphics Cards | June 9, 2014 - 02:53 PM |
Tagged: amd, r9 280, msi, R9 280 GAMING OC, factory overclocked

[H]ard|OCP has just posted a review of MSI's factory overclocked R9 280 GAMING OC card (a rebadged HD 7950), which carries a 67 MHz overclock on the GPU out of the box, bringing it up to the 280X's default speed of 1 GHz.  With a bit of work that can be increased; [H]'s testing was also done at 1095 MHz with the RAM raised to 5.4 GHz, which was enough to take its performance just beyond the stock GTX 760 it was pitted against.  Considering the near-equal performance and pricing of these cards, the decision as to which way to go can be based on bundled games or personal preference.

1401660056twsPHhtr7E_1_1.jpg

"Priced at roughly $260 we have the MSI R9 280 GAMING OC video card, which features pre-overclocked performance, MSI's Twin Frozr IV cooling system, and highest end components. We'll focus on performance when gaming at 1080p between this boss and the GeForce GTX 760 video card!"

Here are some more Graphics Card articles from around the web:

Graphics Cards

 

Source: [H]ard|OCP

AMD Demonstrates Prototype FreeSync Monitor with DisplayPort Adaptive Sync Feature

Subject: Graphics Cards, Displays | June 4, 2014 - 12:40 AM |
Tagged: gsync, g-sync, freesync, DisplayPort, computex 2014, computex, adaptive sync

AMD FreeSync is likely a technology or brand or term that is going to be used a lot between now and the end of 2014. When NVIDIA introduced variable refresh rate monitor technology to the world in October of last year, one of the immediate topics of conversation was the response that AMD was going to have. NVIDIA's G-Sync technology is limited to NVIDIA graphics cards and only a few (actually just one still as I write this) monitors actually have the specialized hardware to support it. In practice though, variable refresh rate monitors fundamentally change the gaming experience for the better.

freesync1.jpg

At CES, AMD went on the offensive and started showing press a hacked up demo of what they called "FreeSync", a similar version of the variable refresh technology working on a laptop. At the time, the notebook was a requirement of the demo because of the way AMD's implementation worked. Mobile displays have previously included variable refresh technologies in order to save power and battery life. AMD found that it could repurpose that technology to emulate the effects that NVIDIA G-Sync creates - a significantly smoother gaming experience without the side effects of Vsync.

Our video preview of NVIDIA G-Sync Technology

Since that January preview, things have progressed for the "FreeSync" technology. AMD took the idea to the VESA board responsible for the DisplayPort standard, and in April we found out that VESA had officially adopted the technology and called it Adaptive Sync.

So now what? AMD is at Computex and of course is taking the opportunity to demonstrate a "FreeSync" monitor with the DisplayPort 1.2a Adaptive Sync feature at work. Though they aren't talking about what monitor it is or who the manufacturer is, the demo is up and running and functions with frame rates wavering between 40 FPS and 60 FPS - the most crucial range of frame rates that can adversely affect gaming experiences. AMD has a windmill demo running on the system, perfectly suited to showing Vsync enabled (stuttering) and Vsync disabled (tearing) issues with a constantly rotating object. It is very similar to the NVIDIA clock demo used to show off G-Sync.

freesync2.jpg

The demo system is powered by an AMD FX-8350 processor and Radeon R9 290X graphics card. The monitor is running at 2560x1440 and is the very first working prototype of the new standard. Even more interesting, this is a pre-existing display that has had its firmware updated to support Adaptive Sync. That's potentially exciting news! Monitors COULD BE UPGRADED to support this feature, but AMD warns us: "...this does not guarantee that firmware alone can enable the feature, it does reveal that some scaler/LCD combinations are already sufficiently advanced that they can support some degree of DRR (dynamic refresh rate) and the full DPAS (DisplayPort Adaptive Sync) specification through software changes."

freesync3.jpg

The time frame for retail availability of monitors using DP 1.2a is up in the air, but AMD has told us that the end of 2014 is entirely reasonable. Based on the painfully slow release of G-Sync monitors into the market, AMD has less of a time hole to dig out of than we originally thought, which is good. What is not good news though is that this feature isn't going to be supported on the full range of AMD Radeon graphics cards. Only the Radeon R9 290/290X and R7 260/260X (and the R9 295X2 of course) will actually be able to support the "FreeSync" technology. Compare that to NVIDIA's G-Sync: it is supported by NVIDIA's entire GTX 700 and GTX 600 series of cards.

freesync4.jpg

All that aside, seeing the first official prototype of "FreeSync" is awesome and is getting me pretty damn excited about variable refresh rate technologies once again! Hopefully we'll get some more hands-on time (eyes-on, whatever) with a panel in the near future to really see how it compares to the experience that NVIDIA G-Sync provides. There is still the chance that the technologies are not directly comparable, and some in-depth testing will be required to validate them.

Richard Huddy Departs Intel, Rejoins AMD

Subject: Graphics Cards, Processors | June 3, 2014 - 02:10 PM |
Tagged: Intel, amd, richard huddy

Interesting news is crossing the ocean today as we learn that Richard Huddy, who has previously had stints at NVIDIA, ATI, AMD and most recently, Intel, is teaming up with AMD once again. Richard brings with him years of experience and innovation in the world of developer relations and graphics technology, and is often called "the Godfather of DirectX." By bringing him back, AMD wants to prove to the community that it is taking PC gaming seriously.

richardhuddy.jpg

The official statement from AMD follows:

AMD is proud to announce the return of the well-respected authority in gaming, Richard Huddy. After three years away from AMD, Richard returns as AMD's Gaming Scientist in the Office of the CTO - he'll be serving as a senior advisor to key technology executives, like Mark Papermaster, Raja Koduri and Joe Macri. AMD is extremely excited to have such an industry visionary back. Having spent his professional career with companies like NVIDIA, Intel and ATI, and having led the worldwide ISV engineering team for over six years at AMD, Mr. Huddy has a truly unique perspective on the PC and Gaming industries.

Mr. Huddy rejoins AMD after a brief stint at Intel, where he had a major impact on their graphics roadmap.  During his career Richard has made enormous contributions to the industry, including the development of DirectX and a wide range of visual effects technologies.  Mr. Huddy’s contributions in gaming have been so significant that he was immortalized as ‘The Scientist’ in Max Payne (if you’re a gamer, you’ll see the resemblance immediately). 

Kitguru has a video from Richard Huddy explaining his reasoning for the move back to AMD.

Source: Kitguru.net

This move points AMD in a very interesting direction going forward. The creation of the Mantle API and the debate around AMD's developer relations programs are going to be hot topics as we move into the summer and I am curious how quickly Huddy thinks he can have an impact.

I have it on good authority we will find out very soon.

Computex 2014: ASUS Announces ROG ARES III Water-Cooled Gaming Graphics Card

Subject: Graphics Cards | June 2, 2014 - 11:41 PM |
Tagged: computex, radeon, r9 295x2, Hawaii XT, dual gpu, computex 2014, ASUS ROG, asus, Ares, amd

The latest installment in the ASUS ARES series of ultra-powerful, limited-edition graphics cards has been announced, and the Ares III is set to be the “world’s fastest” video card.

IMG_20140602_190930.jpg

The ARES III features a full EK water block

The dual-GPU powerhouse is driven by two “hand-selected” Radeon Hawaii XT GPUs (R9 290X cores) with 8GB of GDDR5 memory. The card is overclockable, according to ASUS, and will likely arrive factory overclocked, as they claim it will be faster out of the box than the reference R9 295X2. The ARES III features a custom-designed EK water block, so unlike the R9 295X2, the end user will need to supply the liquid cooling loop.

ASUS claims that the ARES III will “deliver 25% cooler performance than reference R9 295X designs“, but to achieve this ASUS “highly” recommends a high flow rate loop with at least a 120x3 radiator “to extract maximum performance from the card,” and they “will provide a recommended list of water cooling systems at launch”.

Only 500 of the ARES III will be made, each individually numbered. No pricing has been announced, but ASUS says to expect it to cost more than an R9 295X2 ($1499) and less than a TITAN Z ($2999). The ASUS ROG ARES III will be available in Q3 2014.

For more Computex 2014 coverage, please check out our feed!

Source: ASUS

NVIDIA Launches GeForce Experience 2.1

Subject: General Tech, Graphics Cards | June 2, 2014 - 05:52 PM |
Tagged: nvidia, geforce, geforce experience, ShadowPlay

NVIDIA has just launched another version of their GeForce Experience, incrementing the version to 2.1. This release allows video of up to "2500x1600", which I assume means 2560x1600, as well as better audio-video synchronization in Adobe Premiere. Also, because why stop going after FRAPS once you start, it adds an in-game framerate indicator, along with push-to-talk recording for the microphone.

nvidia-geforce-experience.png

Another note: when GeForce Experience 2.0 launched, it introduced streaming of the user's desktop. This allowed recording of OpenGL and windowed-mode games by simply capturing an entire monitor. This mode was not capable of "Shadow Mode", which I believed was because NVIDIA thought users didn't want a constant rolling video taken of their desktop just in case they wanted to save a few minutes of it at some point. It turns out that I was wrong; the feature was coming, and it arrived with GeForce Experience 2.1.

GeForce Experience 2.1 is now available at NVIDIA's website, unless it already popped up a notification for you.

Source: NVIDIA

AMD Catalyst Driver 14.6 BETA released

Subject: Graphics Cards | May 28, 2014 - 07:17 PM |
Tagged: driver, Catalyst 14.6 beta, amd

Get the latest Catalyst for your Radeon!

images.jpg

  • Starting with AMD Catalyst 14.6 Beta, AMD will no longer support Windows 8.0 (and the WDDM 1.2 driver). Windows 8.0 users should upgrade (for free) to Windows 8.1 to take advantage of the new features found in the AMD Catalyst 14.6 Beta.
  • AMD Catalyst 14.4 will remain available for users who wish to remain on Windows 8.0. A future AMD Catalyst release will allow the WDDM 1.1 (Windows 7) driver to be installed under Windows 8.0 for those users unable to upgrade to Windows 8.1.
  • The AMD Catalyst 14.6 Beta Driver can be downloaded from the following link: AMD Catalyst 14.6 Beta Driver for Windows.
  • NOTE! This Catalyst driver is provided "AS IS", and under the terms and conditions of the End User License Agreement provided with it.

Featured Improvements

  • Performance improvements
    • Watch Dogs
      • AMD Radeon R9 290X – 1920x1080 4x MSAA – improves up to 25%
      • AMD Radeon R9 290X – 2560x1600 4x MSAA – improves up to 28%
      • AMD Radeon R9 290X CrossFire configuration (3840x2160, Ultra settings, 4x MSAA) – 92% scaling
    • Murdered: Soul Suspect
      • AMD Radeon R9 290X – 2560x1600 4x MSAA – improves up to 16%
      • AMD Radeon R9 290X CrossFire configuration (3840x2160, Ultra settings, 4x MSAA) – 93% scaling
  • AMD Eyefinity enhancements: Mixed Resolution Support
    • A new architecture providing brand new capabilities
    • Display groups can be created with monitors of different resolution (including difference sizes and shapes)
    • Users have a choice of how surface is created over the display group
      • Fill – legacy mode, best for identical monitors
      • Fit – create the Eyefinity surface using best available rectangular area with attached displays.
      • Expand – create a virtual Eyefinity surface using desktops as viewports onto the surface.
  • Eyefinity Display Alignment
    • Enables control over alignment between adjacent monitors
    • One-Click Setup: the driver detects the layout of extended desktops and can create an Eyefinity display group using that layout in one click!
  • New user controls for video color and display settings
  • Greater control over Video Color Management:
    • Controls have been expanded from a single slider for controlling Boost and Hue to per color axis
    • Color depth control for Digital Flat Panels (available on supported HDMI and DP displays)
    • Allows users to select different color depths per resolution and display
  • AMD Mantle enhancements
    • Mantle now supports AMD Mobile products with Enduro technology
    • Battlefield 4: AMD Radeon HD 8970M (1366x768; high settings) – 21% gain
    • Thief: AMD Radeon HD 8970M (1920x1080; high settings) – 14% gain
    • Star Swarm: AMD Radeon HD 8970M (1920x1080; medium settings) – 274% gain
    • Enables support for Multi-GPU configurations with Thief (requires the latest Thief update)

... and much more, grab it here.

Source: AMD

Futuremark Announces 3DMark Sky Diver Benchmark

Subject: General Tech, Graphics Cards | May 28, 2014 - 01:49 PM |
Tagged: benchmarking, 3dmark

HELSINKI, FINLAND – May 28, 2014 – Futuremark today announced 3DMark Sky Diver, a new DirectX 11 benchmark test for gaming laptops and mid-range PCs. 3DMark Sky Diver is the ideal test for benchmarking systems with mainstream DirectX 11 graphics cards, mobile GPUs, or integrated graphics. A preview trailer for the new benchmark shows a wingsuited woman skydiving into a mysterious, uncharted location. The scene is brought to life with tessellation, particles and advanced post-processing effects. Sky Diver will be shown in full at Computex from June 3-7, or find out more on the Futuremark website.

Jukka Mäkinen, Futuremark CEO said, "Some people think that 3DMark is only for high-end hardware and extreme overclocking. Yet millions of PC gamers rely on 3DMark to choose systems that best balance performance, efficiency and affordability. 3DMark Sky Diver complements our other tests by providing the ideal benchmark for gaming laptops and mainstream PCs."

3DMark - The Gamer's Benchmark for all your hardware
3DMark is the only benchmark that offers a range of tests for different classes of hardware:

  • Fire Strike, for high performance gaming PCs (DirectX 11, feature level 11)
  • Sky Diver, for gaming laptops and mid-range PCs (DirectX 11, feature level 11)
  • Cloud Gate, for notebooks and typical home PCs (DirectX 11 feature level 10)
  • Ice Storm, for tablets and entry level PCs (DirectX 11 feature level 9)

With 3DMark, you can benchmark the full performance range of modern DirectX 11 graphics hardware. Where Fire Strike is like a modern game on ultra high settings, Sky Diver is closer to a DirectX 11 game played on normal settings. This makes Sky Diver the best choice for benchmarking entry level to mid-range systems and Fire Strike the perfect benchmark for high performance gaming PCs.

See 3DMark Sky Diver in full at Computex
3DMark Sky Diver will be on display on the ASUS, MSI, GIGABYTE, Galaxy, Inno3D, and G-Skill booths at Computex, June 3-7.

S.Y. Shian, ASUS Vice President & General Manager of Notebook Business Unit said,

"We are proud to partner with Futuremark to show 3DMark Sky Diver at Computex. Sky Diver helps PC gamers choose systems that offer great performance and great value. We invite everyone to visit our stand to experience 3DMark Sky Diver on a range of new ASUS products."

unnamed.jpg

Sky Diver will be released as an update for all editions of 3DMark, including the free 3DMark Basic Edition. 

Source: Futuremark

NVIDIA Finally Launches GeForce GTX Titan Z Graphics Card

Subject: Graphics Cards | May 28, 2014 - 11:19 AM |
Tagged: titan z, nvidia, gtx, geforce

Though delayed by a month, today marks the official release of NVIDIA's Titan Z graphics card, the dual-GK110 beast with the $3000 price tag. The massive card was shown for the first time in March at NVIDIA's GPU Technology Conference, and our own Tim Verry was on the ground to get the information.

The details remain the same:

Specifically, the GTX TITAN Z is a triple slot graphics card that marries two full GK110 (big Kepler) GPUs for a total of 5,760 CUDA cores, 480 TMUs, and 96 ROPs with 12GB of GDDR5 memory (6GB on a 384-bit bus per GPU). For the truly adventurous, it appears possible to SLI two GTX Titan Z cards using the single SLI connector. Display outputs include two DVI, one HDMI, and one DisplayPort connector.

The difference now of course is that all the clock speeds and pricing are official. 

titanzspecs.png

A base clock speed of 705 MHz with a Boost rate of 876 MHz places it well behind the individual GPU performance of a GeForce GTX 780 Ti or GTX Titan Black (rated at 889/980 MHz). The memory clock speed remains the same at 7.0 Gbps and you are still getting a massive 6GB of memory per GPU.

Maybe most interesting with the release of the GeForce GTX Titan Z is that NVIDIA seems to have completely fixated on non-DIY consumers with the card. We did not receive a sample of the Titan Z (nor did we get one of the Titan Black) and when I inquired as to why, NVIDIA PR stated that they were "only going to CUDA developers and system builders."

geforce-gtx-titan-z-3qtr.png

I think it is more than likely that after the release of AMD's Radeon R9 295X2 dual GPU graphics card on April 8th, with a price tag of $1500 (half of the Titan Z), the target audience was redirected. NVIDIA already had its eye on the professional markets that weren't willing to dive into the Quadro/Tesla lines (CUDA developers will likely drop $3k at the drop of a hat to get this kind of performance density). But a side benefit of creating the best flagship gaming graphics card on the planet was probably part of the story - and promptly taken away by AMD.

geforce-gtx-titan-z-bracket.png

I still believe the Titan Z will be an impressive graphics card to behold, both in terms of look and style and in terms of performance. But it would take the BIGGEST NVIDIA fans to pass up a pair of Radeon R9 295X2 cards in favor of a single GeForce GTX Titan Z. At least that is our assumption until we can test one for ourselves.

I'm still working to get my hands on one of these for some testing as I think the ultra high end graphics card coverage we offer is incomplete without it. 

Several of NVIDIA's partners are going to be offering the Titan Z, including EVGA, ASUS, MSI and Zotac. Maybe the most interesting, though, is EVGA's water-cooled option!

evgatitanz.jpg

So, what do you think? Anyone lining up for a Titan Z when they show up for sale?

AMD Catalyst 14.6 Beta Driver Now Available, Adds Mixed Resolution Eyefinity

Subject: General Tech, Graphics Cards | May 27, 2014 - 12:00 AM |
Tagged: radeon, R9, R7, eyefinity, amd

AMD has just launched their Catalyst 14.6 Beta drivers for Windows and Linux. This driver will contain performance improvements for Watch Dogs, launching today in North America, and Murdered: Soul Suspect, which arrives next week. On Linux, the driver now supports Ubuntu 14.04 and its installation process has been upgraded for simplicity and user experience.

amd-146-eyefinity.png

Performance improvements aside, the biggest new feature is support for Eyefinity with mixed resolutions. With Catalyst 14.6, you no longer need a grid of identical monitors. One example use case, suggested by AMD, is a gamer who purchases an ultra-wide 2560x1080 monitor. They will be able to add a pair of 1080p monitors on either side to create a 6400x1080 viewing surface.

amd-146-eyefinity2.png

If the monitors are very mismatched, the driver will allow users to letterbox to the largest rectangle contained by every monitor, or "expand" to draw the largest possible rectangle (which will lead to some assets drawing outside of any monitor). A third mode, fill, behaves like Eyefinity currently does. I must give AMD a lot of credit for leaving the choice to the user.
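To make the three modes concrete, here is a rough sketch of the surface each would produce for side-by-side monitors. The mode names are AMD's; the geometry is my own simplification, assuming the monitors are top-aligned and ignoring bezel compensation:

```python
# Sketch: Eyefinity surface size for monitors arranged side by side.
# "Fit" letterboxes to the largest rectangle every panel can show;
# "expand" draws a virtual surface covering all panels, with areas
# outside the shorter monitors going undisplayed. (Simplified model.)
monitors = [(1920, 1080), (2560, 1080), (1920, 1080)]  # AMD's example

def fit_surface(mons):
    return sum(w for w, h in mons), min(h for w, h in mons)

def expand_surface(mons):
    return sum(w for w, h in mons), max(h for w, h in mons)

print(fit_surface(monitors))     # (6400, 1080), matching AMD's example
print(expand_surface(monitors))  # same here; differs when heights vary
```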

Returning to performance with actual figures, AMD claims "up to" 25% increases in Watch Dogs at 1080p or 28% at 1600p, compared to the previous version. The new CrossFire profile also claims up to 99% scaling in that game, at 2560x1600 with 8x MSAA. Murdered: Soul Suspect will see "up to" 16% improvements on a single card, and "up to" 93% scaling. Each of these results was provided by AMD, which tested on Radeon R9 290X cards. If these CrossFire profiles (well, first, are indicative of actual performance, and) see 99% scaling across two cards, that is pretty remarkable.

amd-146-jpeg.png

A brief mention: AMD has also expanded their JPEG decoder to Kabini. Previously, it was available to Kaveri, as of Catalyst 14.1. This allows using the GPU to display images, with their test showing a series of images being processed in about half of the time. While not claimed by AMD, I expect that the GPU will also be more power-efficient (as the processor can go back to its idle state much quicker, despite activating another component to do so). Ironically, the three images I used for this news post are encoded in PNG. You might find that amusing.

AMD Catalyst 14.6 Beta Drivers should be now available at their download site.

Source: AMD

What if you can't afford that second R9 295X2?

Subject: Graphics Cards | May 26, 2014 - 05:16 PM |
Tagged: amd, radeon, r9 295x2, R9 290X

Through hard work or good luck you find yourself the proud owner of an R9 295X2 and a 4K display, but somehow the performance just isn't quite good enough.  You can't afford another X2; there is an R9 290X in your price range, but you just aren't sure if it will help your system out at all.  That is where [H]ard|OCP steps in, with a review proving that TriFire in this configuration does indeed work.  Not only does it work, it allows you to vastly increase your performance over a single 295X2, or to improve performance somewhat while raising your graphics settings to new highs.  For those using 5760x1200 Eyefinity, you probably already have your graphics options cranked; this upgrade will still offer you a near-linear increase in performance.  Not bad if you have the money to invest!

1399909966FHzxcUxVw3_1_3_l.jpg

"Will adding a single AMD Radeon R9 290X video card to the AMD Radeon R9 295X2 work? Will you get triple-GPU performance, ala TriFire CrossFire performance? This just might be a more financially feasible configuration for gamers versus QuadFire that provides a great gaming experience in Eyefinity and 4K resolutions."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

NVIDIA Tegra K1 Benchmarks Spotted

Subject: General Tech, Graphics Cards, Mobile | May 22, 2014 - 04:58 PM |
Tagged: tegra k1, nvidia, iris pro, iris, Intel, hd 4000

The Chinese tech site Evolife has acquired a few benchmarks for the Tegra K1. We do not know exactly where they got the system, but we know that it has 4GB of RAM and 12 GB of storage. Of course, this is the version with four ARM Cortex-A15 cores (not the upcoming, 64-bit version based on Project Denver). On 3DMark Ice Storm Unlimited, it was capable of 25737 points as a full system.

nvidia-k1-benchmark.jpg

Image Credit: Evolife.cn

You might remember that our tests with an Intel Core i5-3317U (Ivy Bridge), back in September, achieved a score of 25630 on 3DMark Ice Storm. Of course, that was using the built-in Intel HD 4000 graphics, not a discrete solution, but it still kept up for gaming. This makes sense, though. Intel HD 4000 (GT2) graphics has a theoretical performance of 332.8 GFLOPs, while the Tegra K1 is rated at 364.8 GFLOPs. Earlier, we said that its theoretical performance is roughly on par with the GeForce 9600 GT, although the Tegra K1 supports newer APIs.
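Those theoretical figures are just shader units × FLOPs per clock × clock speed; a quick sketch, where the clock speeds and per-clock throughputs are my assumptions based on published spec sheets rather than anything confirmed in the benchmark:

```python
# Theoretical single-precision GFLOPS = shader units * FLOPs/clock * GHz.
# Clock speeds and per-clock throughputs below are assumptions taken
# from public spec sheets, not measured values.
def gflops(units, flops_per_clock, ghz):
    return units * flops_per_clock * ghz

tegra_k1 = gflops(192, 2, 0.950)  # 192 CUDA cores, FMA = 2 FLOPs, ~950 MHz
hd_4000 = gflops(16, 16, 1.300)   # 16 EUs, 2x SIMD-4 MAD = 16 FLOPs, 1.3 GHz max

print(f"Tegra K1: {tegra_k1:.1f} GFLOPS")  # 364.8
print(f"HD 4000:  {hd_4000:.1f} GFLOPS")   # 332.8

# For scale: an original GTX Titan (2688 cores, ~837 MHz) works out to
# roughly 4500 GFLOPS, or about 12x the Tegra K1.
```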

Of course, Intel has released better solutions with Haswell. Benchmarks show that Iris Pro is able to play Battlefield 4 on High settings, at 720p, with about 30FPS. The HD 4000 only gets about 12 FPS with the same configuration (and ~30 FPS on Low). This is not to compare Intel to NVIDIA's mobile part, but rather compare Tegra K1 to modern, mainstream laptops and desktops. It is getting fairly close, especially with the first wave of K1 tablets entering at the mid-$200 USD MSRP in China.

As a final note...

There was a time when Tim Sweeney, CEO of Epic Games, said that the difference between high-end and low-end PCs "is something like 100x". Scaling a single game between those two performance tiers would be next to impossible. He noted that ten years earlier, that factor was more like "10x".

Now, an original GeForce Titan is about 12x faster than the Tegra K1, and they support the same feature set. In other words, it is easier to develop a game for the PC and a high-end tablet today than it was to develop a PC game for both high-end and low-end machines back in 2008. PC gaming is, once again, getting healthier.

Source: Evolife.cn