Author:
Manufacturer: Various

EVGA GTX 750 Ti ACX FTW

The NVIDIA GeForce GTX 750 Ti has been getting a lot of attention around hardware circles recently, and for good reason.  It remains interesting from a technology standpoint as it is the first, and still the only, Maxwell-based GPU available for desktop users.  Maxwell is a completely new architecture built with power efficiency (and Tegra) in mind. With it, the GTX 750 Ti is able to pack a lot of performance into a very small power envelope while still maintaining very high clock speeds.

IMG_9872.JPG

NVIDIA’s flagship mainstream part is also still the leader when it comes to performance per dollar in this segment (at least for as long as it takes AMD’s Radeon R7 265 to become widely available).  In a few cases we have noticed the long-standing shortages and price hikes from coin mining beginning to subside, which is great news for gamers but may also be bad news for NVIDIA’s GPUs in some areas.  Even if the R7 265 does become widely available, though, the GTX 750 Ti remains the best card you can buy that doesn’t require an external power connector, which puts it in a unique position for power-limited upgrades.

After our initial review of the reference card, and then an interesting look at how the card can be used to upgrade an older or underpowered PC, it is time to take a quick look at a set of three different retail cards that have made their way into the PC Perspective offices.

On the chopping block today we’ll look at the EVGA GeForce GTX 750 Ti ACX FTW, the Galaxy GTX 750 Ti GC and the PNY GTX 750 Ti XLR8 OC.  All of them are non-reference, all of them are overclocked, but you’ll likely be surprised how they stack up.

Continue reading our round up of EVGA, Galaxy and PNY GTX 750 Ti Graphics Cards!!

GDC 14: WebCL 1.0 Specification is Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 06:03 AM |
Tagged: WebCL, gdc 14, GDC

The Khronos Group has just ratified the WebCL 1.0 standard. The API is expected to provide a massive performance boost to web applications dominated by expensive functions that can be offloaded to parallel processors, such as GPUs and multi-core CPUs. The specification also defines an extension that allows WebCL to share buffers with WebGL.

WebCL_300_167_75.png

Frequent readers of the site might remember that I have a particular interest in WebCL. Based on OpenCL, it allows web apps to obtain a list of every available compute device and target each one with workloads. I have personally executed tasks on an NVIDIA GeForce GTX 670 discrete GPU and other jobs on my Intel HD 4000 iGPU, at the same time, using the WebCL prototype from Tomi Aarnio of Nokia Research. The same is true for users with multiple discrete GPUs installed in their system (even if they are not compatible with CrossFire or SLI, or are from different vendors altogether). This could be very useful for physics, AI, lighting, and other game middleware packages.
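
Enumerating every compute device and splitting one job across them is the pattern that makes this exciting. Actual WebCL only runs inside a browser with the prototype installed, so here is a rough Python sketch of the same dispatch pattern, with a thread pool standing in for the per-device command queues (all names here are illustrative, not part of any WebCL API):

```python
from concurrent.futures import ThreadPoolExecutor

def scale_chunk(chunk, factor):
    # Stand-in for a WebCL/OpenCL kernel: multiply every element.
    return [x * factor for x in chunk]

def run_on_devices(data, factor, num_devices=2):
    # Split the buffer into one chunk per "device" and dispatch the
    # chunks concurrently, the way WebCL lets a page queue the same
    # kernel on, say, a discrete GPU and an iGPU at once.
    size = -(-len(data) // num_devices)  # ceiling division
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=num_devices) as pool:
        results = pool.map(lambda chunk: scale_chunk(chunk, factor), chunks)
    return [x for chunk in results for x in chunk]
```

The important property, which WebCL shares, is that the chunks run independently and the results come back in submission order, so the caller can reassemble one buffer without caring which "device" handled which piece.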

Still, browser adoption might be rocky for quite some time. Google, Mozilla, and Opera Software were each involved in the working draft. This leaves both Apple and Microsoft notably absent. Even then, I am not sure how much interest exists within Google, Mozilla, and Opera to take it from a specification to a working feature in their browsers. Some individuals have expressed more faith in WebGL compute shaders than WebCL.

Of course, that can change with just a single "killer app", library, or middleware.

I do expect some resistance from the platform holders, however. Even Google has been pushing back on OpenCL support in Android in favor of their "Renderscript" abstraction. The performance of a graphics processor is also significant leverage for native apps. There is little, otherwise, that cannot be accomplished with Web standards except a web browser itself (and there are even some non-serious projects for that). If Microsoft can support WebGL, however, there is always hope.

The specification is available at the Khronos website.

Source: Khronos

GDC 14: EGL 1.5 Specification Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 06:02 AM |
Tagged: OpenGL ES, opengl, opencl, gdc 14, GDC, EGL

The Khronos Group has also released their ratified specification for EGL 1.5. This API is at the center of data and event management between other Khronos APIs. This version increases security, improves interoperability between APIs, and adds support for more operating systems, including Android and 64-bit Linux.

khronos-EGL_500_123_75.png

The headline on the list of changes is the promotion of EGLImage objects from the realm of extensions into EGL 1.5's core functionality, giving developers a reliable method of transferring textures and renderbuffers between graphics contexts and APIs. Second on the list is increased security around creating a graphics context, primarily designed for WebGL applications, which any arbitrary website can be. Further down the list is the EGLSync object, which allows closer cooperation between OpenGL (and OpenGL ES) and OpenCL: the GPU may not need CPU involvement when scheduling between tasks on the two APIs.

During the call, the representative also wanted to mention that developers have asked them to bring EGL back to Windows. While it has not happened yet, they have announced that it is a current target.

The EGL 1.5 spec is available at the Khronos website.

Source: Khronos

GDC 14: SYCL 1.2 Provisional Spec Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 06:01 AM |
Tagged: SYCL, opencl, gdc 14, GDC

To gather community feedback, The Khronos Group has released the provisional specification for SYCL 1.2. SYCL builds on OpenCL with the C++11 standard. The technology rests on another Khronos effort, SPIR, which allows the OpenCL C programming language to be mapped onto LLVM and its hundreds of compatible languages (and Khronos is careful to note that they intend for anyone to be able to make their own compatible alternative language).

khronos-SYCL_Color_Mar14_154_75.png

In short, SPIR allows many languages which can compile into LLVM to take advantage of OpenCL. SYCL is the specification for creating C++11 libraries and compilers through SPIR.

As stated earlier, Khronos wants anyone to make their own compatible language:

While SYCL is one possible solution for developers, the OpenCL group encourages innovation in programming models for heterogeneous systems, either by building on top of the SPIR™ low-level intermediate representation, leveraging C++ programming techniques through SYCL, using the open source CLU libraries for prototyping, or by developing their own techniques.

SYCL 1.2 targets OpenCL 1.2, and Khronos intends to develop the two alongside each other. Future releases are expected to support the latest OpenCL 2.0 specification and keep up with future developments.

The SYCL 1.2 provisional spec is available at the Khronos website.

Source: Khronos

Just Delivered: MSI Radeon R9 290X Lightning

Subject: Graphics Cards | March 18, 2014 - 12:58 PM |
Tagged: radeon, R9 290X, msi, just delivered, amd, 290x lightning, 290x

While Ryan may be en route to the Game Developers Conference in San Francisco right now, work must go on at the PC Perspective office. As it happens, my arrival at the office today was greeted by a massively exciting graphics card: the MSI Radeon R9 290X Lightning.

IMG_9901.JPG

While we first got our hands on a prerelease version of this card at CES earlier this year, we can now put the Lightning edition through its paces.

IMG_9900.JPG

To go along with this massive graphics card comes a massive box. Just like with the GTX 780 Lightning, MSI paid extra attention to the packaging to create a more premium-feeling experience than your standard reference design card.

IMG_9906.JPG

Comparing the 290X Lightning to the AMD reference design, it is clear how much engineering went into this card - the heatpipes and fins alone are as thick as the entire reference card. This, combined with a redesigned PCB and improved power management, should ensure that you never fall victim to the GPU clock variance issues of the reference design cards, and should give you one of the best overclocking experiences possible from the Hawaii GPU.

gpu-z.png

While I haven't had a chance to start benchmarking yet, I put it on the testbed and figured I would give a little preview of what you can expect from this card out of the box.

Stay tuned for more coverage of the MSI Radeon R9 290X Lightning and our full review, coming soon on PC Perspective!

Source: MSI

GDC 14: OpenGL ES 3.1 Spec Released by Khronos Group

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 17, 2014 - 06:01 AM |
Tagged: OpenGL ES, opengl, Khronos, gdc 14, GDC

Today, day one of Game Developers Conference 2014, the Khronos Group has officially released the 3.1 specification for OpenGL ES. The main new feature, brought over from OpenGL 4, is the addition of compute shaders. This opens GPGPU functionality to mobile and embedded devices for applications developed in OpenGL ES, especially if the developer does not want to add OpenCL.

The update is backward compatible with OpenGL ES 2.0 and 3.0 applications, allowing developers to add the new features, as available, to their existing apps. On the device side, most of the functionality is expected to arrive via a driver update.

opengl-es-logo.png

OpenGL ES, which stands for OpenGL for Embedded Systems (though it is rarely branded as such), delivers what Khronos considers the most important features of the graphics library to the majority of devices. The Khronos Group has been working toward merging ES with the "full" graphics library over time. The last release, OpenGL ES 3.0, was focused on becoming a direct subset of OpenGL 4.3. This release expands the feature space it occupies.

OpenGL ES also forms the basis for WebGL. The current draft of WebGL 2.0 uses OpenGL ES 3.0 although that was not discussed today. I have heard murmurs (not from Khronos) about some parties pushing for compute shaders in that specification, which this announcement puts us closer to.

The new specification also adds other features, such as the ability to issue a draw without CPU intervention. You could imagine a particle simulation, for instance, that wants to draw the result after its compute shader terminates. Shading is also less rigid: vertex and fragment shaders no longer need to be explicitly linked into a program before they are used. I inquired about the possibility that compute devices could be targeted (on devices with two GPUs) and possibly load balanced, in a method similar to WebCL, but no confirmation or denial was provided (although the representative did mention that it would be interesting for apps that fall somewhere in the middle of OpenGL ES and OpenCL).

The OpenGL ES 3.1 spec is available at the Khronos website.

Source: Khronos

AMD Radeon R9 Graphics Stock Friday Night Update

Subject: Graphics Cards | March 14, 2014 - 07:17 PM |
Tagged: radeon, R9 290X, r9 290, r9 280x, r9 280, amd

While sitting on the couch watching some college basketball I decided to start browsing Amazon.com and Newegg.com for some Radeon R9 graphics cards.  With all of the stock and availability issues AMD has had recently, this is a more frequent occurrence for me than I would like to admit.  Somewhat surprisingly, things appear to be improving for AMD at the high end of the product stack.  Take a look at what I found.

Card                              Amazon.com   Newegg.com
ASUS Radeon R9 290X DirectCU II   $599         -
Visiontek R9 290X                 $599         -
XFX R9 290X Double D              $619         -
ASUS R9 290 DirectCU II           $499         -
XFX R9 290 Double D               $499         -
MSI R9 290 Gaming                 $465         $469
PowerColor TurboDuo AXR9 280X     -            $329
Visiontek R9 280X                 $370         $349
XFX R9 280 Double D               -            $289
Sapphire Dual-X R9 280            -            $299
Sapphire R7 265                   $184         $149

msir9290.jpg

It's not perfect, but it's better.  I was able to find two R9 290X cards at $599, just $50 over the expected selling price of $549.  The XFX Double D R9 290X at $619 is pretty close as well.  The least expensive R9 290 I found was $465, though others remain about $100 over the suggested price.  Honestly, having the R9 290 and R9 290X only $100 apart, rather than the $150 gap AMD would like you to believe, is more realistic given how close the two SKUs are in performance.

Stepping a bit lower, the R9 280X (which is essentially the same as the HD 7970 GHz Edition) can be found for $329 and $349 on Newegg.  Those prices are just $30-50 more than the suggested pricing!  The brand new R9 280, similar in specs to the HD 7950, is starting to show up for $289 and $299, just $10-20 over what AMD told us to expect.
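
The markups above are simple subtraction against AMD's suggested prices; a quick sketch (suggested prices inferred from the figures quoted in this post):

```python
# AMD's suggested prices, as cited in the text above.
MSRP = {"R9 290X": 549, "R9 290": 399, "R9 280X": 299, "R9 280": 279}

def premium(card, street_price):
    """Dollars a listing sits above AMD's suggested price."""
    return street_price - MSRP[card]

print(premium("R9 290X", 599))  # 50
print(premium("R9 280X", 329))  # 30
print(premium("R9 280", 289))   # 10
```

Run against the table, the 290X listings carry the smallest relative premium while the cheapest R9 290 still sits over $60 above its suggested price.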

Finally, though not really a high end card, I did see that the R7 265 was showing up at both Amazon.com and Newegg.com for the second time since its announcement in February. For budget 1080p gamers, if you can find it, this could be the best card you can pick up.

What deals are you finding online?  If you have one worth adding here, let me know! Are the lack of availability and the high prices on AMD GPUs finally behind us?

Author:
Manufacturer: NZXT

Installation

When the Radeon R9 290 and R9 290X first launched last year, they were plagued by issues of overheating and variable clock speeds.  We looked at the situation several times over the course of a couple of months, and AMD tried to address the problem with newer drivers.  Those drivers did help stabilize the clock speeds (and thus performance) of the reference-built R9 290 and R9 290X cards, but they caused noise levels to increase as well.

The real solution was the release of custom-cooled versions of the R9 290 and R9 290X from AMD partners like ASUS, MSI and others.  The ASUS R9 290X DirectCU II model, for example, ran cooler, quieter, and more consistently than any of the numerous reference models we had our hands on.

But what about all those buyers that are still purchasing, or have already purchased, reference-style R9 290 and 290X cards?  Replacing the cooler on the card is the best choice, and thanks to our friends at NZXT we have a unique solution that combines a standard self-contained water cooler meant for CPUs with a custom-built GPU bracket.

IMG_9179_0.JPG

Our quick test will utilize one of the reference R9 290 cards AMD sent along at launch and two specific NZXT products.  The Kraken X40 is a standard self-contained CPU water cooler that sells for $100 on Amazon.com.  For our purposes, though, we are going to team it up with the Kraken G10, a $30 GPU-specific bracket that allows you to use the X40 (and other water coolers) on the Radeon R9 290.

IMG_9181_0.JPG

Inside the box of the G10 you'll find an 80mm fan, a back plate, the bracket to attach the cooler to the GPU and all necessary installation hardware.  The G10 will support a wide range of GPUs, though it is targeted toward the reference design of each:

NVIDIA : GTX 780 Ti, 780, 770, 760, Titan, 680, 670, 660Ti, 660, 580, 570, 560Ti, 560, 560SE
AMD : R9 290X, 290, 280X*, 280*, 270X, 270, HD 7970*, 7950*, 7870, 7850, 6970, 6950, 6870, 6850, 6790, 6770, 5870, 5850, 5830

That is a pretty impressive list, but NZXT cautions that custom-designed boards may interfere with the bracket.

IMG_9184_0.JPG

The installation process begins by removing the original cooler, which in this case just means a lot of small screws.  Be careful when removing the screws on the heatsink retention bracket: alternate between screws to take it off evenly.

Continue reading about how the NZXT Kraken G10 can improve the cooling of the Radeon R9 290 and R9 290X!!

AMD Teasing Dual GPU Graphics Card, Punks Me at Same Time

Subject: Graphics Cards | March 13, 2014 - 07:52 AM |
Tagged: radeon, amd

This morning I had an interesting delivery on my door step.

The only thing inside it was an envelope stamped TOP SECRET and this photo.  The package came from AMD's PR department, and the hashtag #2betterthan1 adorned the back of the picture.

twobetter.jpg

This original photo is from like....2004.  Nice, very nice AMD.

With all the rumors circling around the release of a new dual-GPU graphics card based on Hawaii, it seems that AMD is stepping up the viral marketing campaign a bit early.  Code-named 'Vesuvius,' a single card carrying two R9 290X GPUs seems crazy due to the high power consumption involved, but maybe AMD has been holding back its best, most power-efficient GPUs for such a release.

What do you think?  Can AMD make a dual-GPU Hawaii card happen?  How will this affect or be affected by the GPU shortages and price hikes still plaguing the R9 290 and R9 290X?  How much would you be willing to PAY for something like this?

Author:
Manufacturer: NVIDIA

Maxwell and Kepler and...Fermi?

Covering the landscape of mobile GPUs can be a harrowing experience.  Brands, specifications, performance, features and architectures can all vary from product to product, even inside the same family.  Rebranding is rampant from both AMD and NVIDIA and, in general, we are met with one of the most confusing segments of the PC hardware market.  

Today, with the release of the GeForce GTX 800M series from NVIDIA, we are getting all of the above in one form or another. We will also see performance improvements and the introduction of the new Maxwell architecture (in a few parts at least).  Along with the GeForce GTX 800M parts, you will also find the GeForce 840M, 830M and 820M offerings at lower performance, wattage and price levels.

slides01.jpg
With the new hardware comes a collection of new software for mobile users, including the innovative Battery Boost, which can increase unplugged gaming time by using frame rate limiting and other "magic" bits that NVIDIA isn't talking about yet.  ShadowPlay and GameStream also find their way to mobile GeForce users.
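
NVIDIA hasn't said how Battery Boost works internally, but the power argument behind any frame rate limiter is easy to sketch: capping the frame rate leaves idle time in each frame during which the GPU can drop to lower clocks. A hypothetical illustration of that budget, not NVIDIA's implementation:

```python
def frame_idle_ms(target_fps, render_ms):
    """Milliseconds of idle time left in each frame once rendering is
    done under a frame cap; that idle time is where the power savings
    of a limiter would come from.  Returns 0.0 when rendering already
    consumes the whole frame budget."""
    frame_budget_ms = 1000.0 / target_fps
    return max(0.0, frame_budget_ms - render_ms)

# A frame rendered in 8 ms leaves ~25.3 ms of idle time under a 30 FPS
# cap, but only ~8.7 ms if the GPU races ahead to 60 FPS.
```

The lower the cap relative to what the GPU could achieve, the larger the idle slice per frame, which is presumably what Battery Boost trades away to stretch battery life.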

Let's take a quick look at the new hardware specifications.

                   GTX 880M        GTX 780M     GTX 870M        GTX 770M
GPU Code name      Kepler          Kepler       Kepler          Kepler
GPU Cores          1536            1536         1344            960
Rated Clock        954 MHz         823 MHz      941 MHz         811 MHz
Memory             Up to 4GB       Up to 4GB    Up to 3GB       Up to 3GB
Memory Clock       5000 MHz        5000 MHz     5000 MHz        4000 MHz
Memory Interface   256-bit         256-bit      192-bit         192-bit
Features           Battery Boost   GameStream   Battery Boost   GameStream
                   GameStream      ShadowPlay   GameStream      ShadowPlay
                   ShadowPlay      GFE          ShadowPlay      GFE
                   GFE                          GFE

Both the GTX 880M and the GTX 870M are based on Kepler, keeping the same basic feature set and hardware specifications as their brethren in the GTX 700M line.  However, while the GTX 880M has the same CUDA core count as the 780M, the same cannot be said of the GTX 870M.  Moving from the GTX 770M to the 870M brings a significant 40% increase in core count as well as a jump in clock speed from 811 MHz (plus Boost) to 941 MHz.
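
That 40% figure checks out against the table above:

```python
def pct_increase(old, new):
    # Percentage change from old to new.
    return (new - old) / old * 100

print(pct_increase(960, 1344))  # CUDA cores, GTX 770M -> GTX 870M: 40.0
print(pct_increase(811, 941))   # base clock in MHz: a more modest ~16
```

So the 870M's generational gain comes mostly from the wider GPU, with the clock bump contributing a smaller share on top.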

Continue reading about the NVIDIA GeForce GTX 800M Launch and Battery Boost!!