Subject: General Tech, Graphics Cards | March 19, 2014 - 08:26 PM | Ryan Shrout
Tagged: live blog, gdc 14, dx12, DirectX 12, DirectX
UPDATE: If you are looking for the live blog information including commentary and photos, we have placed it in archive format right over here. Thanks!!
It is nearly time for Microsoft to reveal the secrets behind DirectX 12 and what it will offer PC gaming going forward. I will be in San Francisco for the session and will be live blogging from it as networking allows. We'll have a couple of other PC Perspective staffers chiming in as well, so it should be an interesting event for sure! We don't know how much detail Microsoft is going to get into, but we will all know soon.
Microsoft DirectX 12 Session Live Blog
Thursday, March 20th, 10am PDT
You can sign up for a reminder using the CoverItLive interface below or you can sign up for the PC Perspective Live mailing list. See you Thursday!
EVGA GTX 750 Ti ACX FTW
The NVIDIA GeForce GTX 750 Ti has been getting a lot of attention around hardware circles recently, and for good reason. It remains interesting from a technology standpoint as it is the first, and still the only, Maxwell-based GPU available for desktop users. Maxwell is a completely new architecture built with power efficiency (and Tegra) in mind. With it, the GTX 750 Ti is able to pack a lot of performance into a very small power envelope while still maintaining some very high clock speeds.
NVIDIA’s flagship mainstream part is also still the leader when it comes to performance per dollar in this segment (at least until AMD’s Radeon R7 265 becomes widely available). We have noticed a few cases where the long-standing shortages and price hikes from coin mining have dwindled, which is great news for gamers but may also be bad news for NVIDIA’s GPUs in some areas. Even if the R7 265 does become available, though, the GTX 750 Ti remains the best card you can buy that doesn’t require an external power connector. This puts it in a unique position for power-limited upgrades.
After our initial review of the reference card, and then an interesting look at how the card can be used to upgrade an older or underpowered PC, it is time to take a quick look at a set of three different retail cards that have made their way into the PC Perspective offices.
On the chopping block today we’ll look at the EVGA GeForce GTX 750 Ti ACX FTW, the Galaxy GTX 750 Ti GC and the PNY GTX 750 Ti XLR8 OC. All of them are non-reference, all of them are overclocked, but you’ll likely be surprised how they stack up.
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:03 AM | Scott Michaud
Tagged: WebCL, gdc 14, GDC
The Khronos Group has just ratified the WebCL 1.0 standard. The API is expected to provide a massive performance boost to web applications dominated by expensive functions that can be offloaded to parallel processors, such as GPUs and multi-core CPUs. The specification also defines an extension that allows WebCL to share buffers with WebGL.
Frequent readers of the site might remember that I have a particular interest in WebCL. Based on OpenCL, it allows web apps to obtain a list of every available compute device and target each one with workloads. I have personally executed tasks on an NVIDIA GeForce GTX 670 discrete GPU and other jobs on my Intel HD 4000 iGPU, at the same time, using the WebCL prototype from Tomi Aarnio of Nokia Research. The same is true for users with multiple discrete GPUs installed in their system (even if they are not compatible with CrossFire or SLI, or are from different vendors altogether). This could be very useful for physics, AI, lighting, and other game middleware packages.
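The enumeration path follows the OpenCL model. Below is a minimal sketch of how a page might list every available compute device; it is written against a generic `cl` parameter standing in for the browser's `webcl` global, and the helper name `listComputeDevices` is my own, not part of any specification:

```javascript
// Sketch: enumerate every compute device a WebCL-capable browser exposes.
// `cl` stands in for the `webcl` global; the calls mirror the WebCL 1.0
// draft (getPlatforms, getDevices, getInfo), but treat this as illustrative.
function listComputeDevices(cl) {
  var found = [];
  cl.getPlatforms().forEach(function (platform) {
    platform.getDevices(cl.DEVICE_TYPE_ALL).forEach(function (device) {
      found.push({
        platform: platform.getInfo(cl.PLATFORM_NAME),
        device: device.getInfo(cl.DEVICE_NAME)
      });
    });
  });
  return found;
}
```

A page could then pick, say, the discrete GPU for physics and the iGPU for AI from that list and build a separate command queue for each.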
Still, browser adoption might be rocky for quite some time. Google, Mozilla, and Opera Software were each involved in the working draft. This leaves both Apple and Microsoft notably absent. Even then, I am not sure how much interest exists within Google, Mozilla, and Opera to take it from a specification to a working feature in their browsers. Some individuals have expressed more faith in WebGL compute shaders than WebCL.
Of course, that can change with just a single "killer app", library, or middleware.
I do expect some resistance from the platform holders, however. Even Google has been pushing back on OpenCL support in Android in favor of their "Renderscript" abstraction. The performance of a graphics processor is also significant leverage for a native app. Otherwise, there is little that cannot be accomplished with Web standards, except perhaps a web browser itself (and there are even some non-serious projects for that). If Microsoft can support WebGL, however, there is always hope.
The specification is available at the Khronos website.
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:02 AM | Scott Michaud
Tagged: OpenGL ES, opengl, opencl, gdc 14, GDC, EGL
The Khronos Group has also released their ratified specification for EGL 1.5. This API is at the center of data and event management between other Khronos APIs. This version increases security, interoperability between APIs, and support for many operating systems, including Android and 64-bit Linux.
The headline change is the promotion of EGLImage objects from an extension into EGL 1.5's core functionality, giving developers a reliable method of transferring textures and renderbuffers between graphics contexts and APIs. Second on the list is increased security around creating a graphics context, primarily designed for WebGL, since any arbitrary website can host a WebGL application. Further down the list is the EGLSync object, which allows closer cooperation between OpenGL (and OpenGL ES) and OpenCL. The GPU may not need CPU involvement when scheduling between tasks on both APIs.
During the call, the representative also wanted to mention that developers have asked them to bring EGL back to Windows. While it has not happened yet, they have announced that it is a current target.
The EGL 1.5 spec is available at the Khronos website.
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:01 AM | Scott Michaud
Tagged: SYCL, opencl, gdc 14, GDC
To gather community feedback, The Khronos Group has released the provisional specification for SYCL 1.2. SYCL builds upon OpenCL with the C++11 standard. The technology rests on another Khronos platform, SPIR, which allows the OpenCL C programming language to be mapped onto LLVM and its hundreds of compatible languages (and Khronos is careful to note that they intend for anyone to be able to make their own compatible alternative language).
In short, SPIR allows many languages which can compile into LLVM to take advantage of OpenCL. SYCL is the specification for creating C++11 libraries and compilers through SPIR.
As stated earlier, Khronos wants anyone to make their own compatible language:
While SYCL is one possible solution for developers, the OpenCL group encourages innovation in programming models for heterogeneous systems, either by building on top of the SPIR™ low-level intermediate representation, leveraging C++ programming techniques through SYCL, using the open source CLU libraries for prototyping, or by developing their own techniques.
SYCL 1.2 supports OpenCL 1.2 and they intend to develop it alongside OpenCL. Future releases are expected to support the latest OpenCL 2.0 specification and keep up with future developments.
The SYCL 1.2 provisional spec is available at the Khronos website.
Subject: Graphics Cards | March 18, 2014 - 03:58 PM | Ken Addison
Tagged: radeon, R9 290X, msi, just delivered, amd, 290x lightning, 290x
While Ryan may be en route to the Game Developers Conference in San Francisco right now, work must go on at the PC Perspective office. As it happens, my arrival at the office today was greeted by a massively exciting graphics card: the MSI Radeon R9 290X Lightning.
While we first got our hands on a prerelease version of this card at CES earlier this year, we can now put the Lightning edition through its paces.
To go along with this massive graphics card comes a massive box. Just like with the GTX 780 Lightning, MSI paid extra attention to the packaging to create a more premium-feeling experience than your standard reference design card.
Comparing the 290X Lightning to the AMD reference design, it is clear how much engineering went into this card - the heatpipes and fins alone are as thick as the entire reference card. This, combined with a redesigned PCB and improved power management, should ensure that you never fall victim to the GPU clock variance issues of the reference design cards, and give you one of the best overclocking experiences possible from the Hawaii GPU.
While I haven't had a chance to start benchmarking yet, I put it on the testbed and figured I would give a little preview of what you can expect from this card out of the box.
Stay tuned for more coverage of the MSI Radeon R9 290X Lightning and our full review, coming soon on PC Perspective!
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 17, 2014 - 09:01 AM | Scott Michaud
Tagged: OpenGL ES, opengl, Khronos, gdc 14, GDC
Today, day one of Game Developers Conference 2014, the Khronos Group has officially released the 3.1 specification for OpenGL ES. The main new feature, brought over from OpenGL 4, is the addition of compute shaders. This opens GPGPU functionality to mobile and embedded devices for applications developed in OpenGL ES, especially if the developer does not want to add OpenCL.
The update is backward-compatible with OpenGL ES 2.0 and 3.0 applications, allowing developers to add features, as available, to their existing apps. On the device side, support is expected to arrive as a driver update in the majority of cases.
OpenGL ES, which stands for OpenGL for Embedded Systems (though it is rarely branded as such), delivers what Khronos considers the most important features of the graphics library to the majority of devices. The Khronos Group has been working toward merging ES with the "full" graphics library over time. The last release, OpenGL ES 3.0, was focused on becoming a direct subset of OpenGL 4.3. This release expands the feature-space it occupies.
OpenGL ES also forms the basis for WebGL. The current draft of WebGL 2.0 uses OpenGL ES 3.0 although that was not discussed today. I have heard murmurs (not from Khronos) about some parties pushing for compute shaders in that specification, which this announcement puts us closer to.
The new specification also adds other features, such as the ability to issue a draw without CPU intervention. You could imagine a particle simulation, for instance, that wants to draw the result after its compute shader terminates. Shading is also less rigid: vertex and fragment shaders no longer need to be explicitly linked into a program before they are used. I inquired about the possibility that compute devices could be targeted (on devices with two GPUs) and possibly load balanced, in a manner similar to WebCL, but no confirmation or denial was provided (although he did mention that it would be interesting for apps that fall somewhere in the middle of OpenGL ES and OpenCL).
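For that particle example, the new stage would be an ESSL 3.10 compute shader. Here is a minimal sketch, held in a JavaScript string purely for illustration (WebGL does not expose compute today); the buffer layout, workgroup size, and `dt` uniform are my own assumptions, and a native OpenGL ES 3.1 app would compile this source, call glDispatchCompute, and then issue the draw without reading the particle data back to the CPU:

```javascript
// Illustrative GLSL ES 3.10 compute shader that advances particle positions.
// The SSBO holding positions is bound at binding point 0; each invocation
// updates one particle, so a native app would dispatch ceil(count / 64) groups.
var particleComputeShader = [
  "#version 310 es",
  "layout(local_size_x = 64) in;",
  "layout(std430, binding = 0) buffer Particles {",
  "  vec4 position[];",
  "};",
  "uniform float dt;",
  "void main() {",
  "  uint i = gl_GlobalInvocationID.x;",
  "  position[i].y -= 9.8 * dt;  // simple gravity step",
  "}"
].join("\n");
```

Because the results stay in the buffer object, the subsequent draw can source vertex data from it directly, which is exactly the CPU-free round trip described above.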
The OpenGL ES 3.1 spec is available at the Khronos website.
Subject: Graphics Cards | March 14, 2014 - 10:17 PM | Ryan Shrout
Tagged: radeon, R9 290X, r9 290, r9 280x, r9 280, amd
While sitting on the couch watching some college basketball I decided to start browsing Amazon.com and Newegg.com for some Radeon R9 graphics cards. With all of the stock and availability issues AMD has had recently, this is a more frequent occurrence for me than I would like to admit. Somewhat surprisingly, things appear to be improving for AMD at the high end of the product stack. Take a look at what I found.
|Card||Amazon||Newegg|
|ASUS Radeon R9 290X DirectCU II||$599||-|
|Visiontek R9 290X||$599||-|
|XFX R9 290X Double D||$619||-|
|ASUS R9 290 DirectCU II||$499||-|
|XFX R9 290 Double D||$499||-|
|MSI R9 290 Gaming||$465||$469|
|PowerColor TurboDuo AXR9 280X||-||$329|
|Visiontek R9 280X||$370||$349|
|XFX R9 280 Double D||-||$289|
|Sapphire Dual-X R9 280||-||$299|
|Sapphire R7 265||$184||$149|
It's not perfect, but it's better. I was able to find two R9 290X cards at $599, which is just $50 over the expected selling price of $549. The XFX Double D R9 290X at $619 is pretty close as well. The least expensive R9 290 I found was $469, though others remain about $100 over the suggested price. In reality, having the R9 290 and R9 290X only $100 apart, rather than the $150 gap AMD would like you to believe in, is more realistic given how close the two SKUs are in performance.
Stepping a bit lower, the R9 280X (which is essentially the same as the HD 7970 GHz Edition) can be found for $329 and $349 on Newegg. Those prices are just $30-50 more than the suggested pricing! The brand new R9 280, similar in specs to the HD 7950, is starting to show up for $289 and $299, just over what AMD told us to expect.
Finally, though not really a high end card, I did see that the R7 265 was showing up at both Amazon.com and Newegg.com for the second time since its announcement in February. For budget 1080p gamers, if you can find it, this could be the best card you can pick up.
What deals are you finding online? If you guys have one worth adding here, let me know! Is the lack of availability and high prices on AMD GPUs finally behind us??
When the Radeon R9 290 and R9 290X first launched last year, they were plagued by overheating and variable clock speeds. We looked at the situation several times over the course of a couple of months, and AMD tried to address the problem with newer drivers. Those drivers did help stabilize clock speeds (and thus performance) of the reference-built R9 290 and R9 290X cards, but caused noise levels to increase as well.
The real solution was the release of custom cooled versions of the R9 290 and R9 290X from AMD partners like ASUS, MSI and others. The ASUS R9 290X DirectCU II model for example, ran cooler, quieter and more consistently than any of the numerous reference models we had our hands on.
But what about all those buyers who are still purchasing, or have already purchased, reference-style R9 290 and 290X cards? Replacing the cooler on the card is the best choice, and thanks to our friends at NZXT we have a unique solution that combines a standard self-contained water cooler meant for CPUs with a custom-built GPU bracket.
Our quick test will utilize one of the reference R9 290 cards AMD sent along at launch and two specific NZXT products. The Kraken X40 is a standard self-contained CPU water cooler that sells for $100 on Amazon.com. For our purposes, though, we are going to team it up with the Kraken G10, a $30 GPU-specific bracket that allows you to use the X40 (and other water coolers) on the Radeon R9 290.
Inside the box of the G10 you'll find an 80mm fan, a back plate, the bracket that attaches the cooler to the GPU, and all necessary installation hardware. The G10 supports a wide range of GPUs, though it is targeted toward the reference design of each:
NVIDIA : GTX 780 Ti, 780, 770, 760, Titan, 680, 670, 660Ti, 660, 580, 570, 560Ti, 560, 560SE
AMD : R9 290X, 290, 280X*, 280*, 270X, 270, HD 7970*, 7950*, 7870, 7850, 6970, 6950, 6870, 6850, 6790, 6770, 5870, 5850, 5830
That is a pretty impressive list, but NZXT cautions that custom-designed boards may interfere with the bracket.
The installation process begins by removing the original cooler, which in this case just means a lot of small screws. Be careful when removing the screws on the actual heatsink retention bracket, alternating between screws to take it off evenly.
Subject: Graphics Cards | March 13, 2014 - 10:52 AM | Ryan Shrout
Tagged: radeon, amd
This morning I had an interesting delivery on my door step.
The only thing inside it was an envelope stamped TOP SECRET and this photo. Coming from AMD's PR department, the hashtag #2betterthan1 adorned the back of the picture.
This original photo is from like....2004. Nice, very nice AMD.
With all the rumors circling around the release of a new dual-GPU graphics card based on Hawaii, it seems that AMD is stepping up the viral marketing campaign a bit early. Code-named 'Vesuvius', a dual R9 290X on a single card seems crazy due to its high power consumption, but maybe AMD has been holding back its best, most power-efficient GPUs for such a release.
What do you think? Can AMD make a dual-GPU Hawaii card happen? How will this affect or be affected by the GPU shortages and price hikes still plaguing the R9 290 and R9 290X? How much would you be willing to PAY for something like this?