GDC 14: EGL 1.5 Specification Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:02 AM |
Tagged: OpenGL ES, opengl, opencl, gdc 14, GDC, EGL

The Khronos Group has also released its ratified specification for EGL 1.5. This API sits at the center of data and event management between other Khronos APIs. The new version improves security, interoperability between APIs, and support for additional operating systems, including Android and 64-bit Linux.

khronos-EGL_500_123_75.png

The headline change is the promotion of EGLImage objects from extension status into EGL 1.5's core functionality, giving developers a reliable method of transferring textures and renderbuffers between graphics contexts and APIs. Second on the list is increased security around creating a graphics context, designed primarily for WebGL applications, which any arbitrary website can host. Further down the list is the EGLSync object, which allows closer cooperation between OpenGL (and OpenGL ES) and OpenCL: the GPU may not need CPU involvement when scheduling between tasks on both APIs.
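
For a sense of what the newly-core functionality looks like in practice, here is a minimal sketch of both features using the EGL 1.5 entry points. It assumes an already-initialized EGL 1.5 display and context plus a valid GL texture handle; the names are placeholders and error handling is omitted.

```cpp
// Minimal sketch: wrap a GL texture in an EGLImage (core in 1.5, no
// EGL_KHR_image extension required) and create a fence sync object.
// Requires EGL 1.5 headers; dpy/ctx/gl_texture are assumed to exist.
#include <EGL/egl.h>
#include <cstdint>

void share_and_fence(EGLDisplay dpy, EGLContext ctx, unsigned gl_texture) {
    // Expose the texture's storage to other contexts/APIs via EGLImage.
    EGLImage image = eglCreateImage(
        dpy, ctx, EGL_GL_TEXTURE_2D,
        (EGLClientBuffer)(std::uintptr_t)gl_texture, nullptr);

    // A fence sync signals once prior commands in the current context
    // complete; here we simply block client-side until it fires.
    EGLSync fence = eglCreateSync(dpy, EGL_SYNC_FENCE, nullptr);
    eglClientWaitSync(dpy, fence, EGL_SYNC_FLUSH_COMMANDS_BIT, EGL_FOREVER);

    eglDestroySync(dpy, fence);
    eglDestroyImage(dpy, image);
}
```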

During the briefing, the Khronos representative also mentioned that developers have asked for EGL to be brought back to Windows. While that has not happened yet, it was announced as a current target.

The EGL 1.5 spec is available at the Khronos website.

Source: Khronos

GDC 14: SYCL 1.2 Provisional Spec Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:01 AM |
Tagged: SYCL, opencl, gdc 14, GDC

To gather community feedback, the provisional specification for SYCL 1.2 has been released by The Khronos Group. SYCL builds upon OpenCL with the C++11 standard. This technology is built on another Khronos platform, SPIR, which allows the OpenCL C programming language to be mapped onto LLVM, with its hundreds of compatible languages (and Khronos is careful to note that it intends for anyone to be able to make their own compatible alternative language).

khronos-SYCL_Color_Mar14_154_75.png

In short, SPIR allows many languages which can compile into LLVM to take advantage of OpenCL. SYCL is the specification for creating C++11 libraries and compilers through SPIR.
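
To make the "C++11 single-source" idea concrete, here is a sketch of a trivial SYCL kernel. The syntax follows the general SYCL 1.2 style; the provisional specification's exact API may differ, so treat this as illustrative rather than definitive.

```cpp
// Illustrative SYCL sketch: a plain C++11 lambda becomes an OpenCL kernel,
// lowered through SPIR by the SYCL device compiler. Exact provisional-spec
// syntax may differ from what is shown here.
#include <CL/sycl.hpp>
#include <vector>

int main() {
    std::vector<float> data(1024, 1.0f);
    {
        // The buffer manages host<->device storage for the scope below.
        cl::sycl::buffer<float, 1> buf(data.data(),
                                       cl::sycl::range<1>(data.size()));
        cl::sycl::queue q;  // Default-selected OpenCL device.

        q.submit([&](cl::sycl::handler& cgh) {
            auto acc = buf.get_access<cl::sycl::access::mode::read_write>(cgh);
            cgh.parallel_for<class scale>(
                cl::sycl::range<1>(data.size()),
                [=](cl::sycl::id<1> i) { acc[i] *= 2.0f; });
        });
    }  // Buffer destruction copies the results back into 'data'.
    return 0;
}
```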

As stated earlier, Khronos wants anyone to make their own compatible language:

While SYCL is one possible solution for developers, the OpenCL group encourages innovation in programming models for heterogeneous systems, either by building on top of the SPIR™ low-level intermediate representation, leveraging C++ programming techniques through SYCL, using the open source CLU libraries for prototyping, or by developing their own techniques.

SYCL 1.2 supports OpenCL 1.2, and Khronos intends to develop it alongside OpenCL. Future releases are expected to support the latest OpenCL 2.0 specification and keep up with future developments.

The SYCL 1.2 provisional spec is available at the Khronos website.

Source: Khronos

Just Delivered: MSI Radeon R9 290X Lightning

Subject: Graphics Cards | March 18, 2014 - 03:58 PM |
Tagged: radeon, R9 290X, msi, just delivered, amd, 290x lightning, 290x

While Ryan may be en route to the Game Developers Conference in San Francisco right now, work must go on at the PC Perspective office. As it happens, my arrival at the office today was greeted by a massively exciting graphics card: the MSI Radeon R9 290X Lightning.

IMG_9901.JPG

While we first got our hands on a prerelease version of this card at CES earlier this year, we can now put the Lightning edition through its paces.

IMG_9900.JPG

To go along with this massive graphics card comes a massive box. Just like with the GTX 780 Lightning, MSI paid extra attention to the packaging to create a more premium-feeling experience than your standard reference design card.

IMG_9906.JPG

Comparing the 290X Lightning to the AMD reference design, it is clear how much engineering went into this card - the heatpipe and fins alone are as thick as the entire reference card. This, combined with a redesigned PCB and improved power management, should ensure that you never fall victim to the GPU clock variance issues of the reference design cards, and give you one of the best overclocking experiences possible from the Hawaii GPU.

gpu-z.png

While I haven't had a chance to start benchmarking yet, I put it on the testbed and figured I would give a little preview of what you can expect from this card out of the box.

Stay tuned for more coverage of the MSI Radeon R9 290X Lightning and our full review, coming soon on PC Perspective!

Source: MSI

GDC 14: OpenGL ES 3.1 Spec Released by Khronos Group

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 17, 2014 - 09:01 AM |
Tagged: OpenGL ES, opengl, Khronos, gdc 14, GDC

Today, day one of Game Developers Conference 2014, the Khronos Group has officially released the 3.1 specification for OpenGL ES. The main new feature, brought over from OpenGL 4, is the addition of compute shaders. This opens GPGPU functionality to mobile and embedded devices for applications developed in OpenGL ES, especially if the developer does not want to add OpenCL.

The update is backward-compatible with OpenGL ES 2.0 and 3.0 applications, allowing developers to add features, as available, to their existing apps. On the device side, the new functionality is expected to arrive as a driver update in the majority of cases.

opengl-es-logo.png

OpenGL ES, which stands for OpenGL for Embedded Systems (though it is rarely branded as such), delivers what Khronos considers the most important features from the graphics library to the majority of devices. The Khronos Group has been working toward merging ES with the "full" graphics library over time. The last release, OpenGL ES 3.0, was focused on becoming a direct subset of OpenGL 4.3. This release expands upon the feature-space it occupies.

OpenGL ES also forms the basis for WebGL. The current draft of WebGL 2.0 uses OpenGL ES 3.0 although that was not discussed today. I have heard murmurs (not from Khronos) about some parties pushing for compute shaders in that specification, which this announcement puts us closer to.

The new specification also adds other features, such as the ability to issue a draw without CPU intervention. You could imagine a particle simulation, for instance, that wants to draw the result after its compute shader terminates. Shading is also less rigid: vertex and fragment shaders no longer need to be explicitly linked into a single program before they are used. I inquired about the possibility that compute devices could be targeted (for devices with two GPUs) and possibly load balanced, in a method similar to WebCL, but no confirmation or denial was provided (although the representative did mention that it would be interesting for apps that fall somewhere in the middle of OpenGL ES and OpenCL).
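
To make the particle example concrete, here is a hedged sketch of how that flow looks with the new ES 3.1 entry points. The shader programs and buffer objects are assumed to already exist, and all of the names are placeholders.

```cpp
// Hedged sketch of a GPU-driven particle update and draw in OpenGL ES 3.1.
// Assumes compiled programs and created buffers; the compute shader is
// assumed to declare local_size_x = 64.
#include <GLES3/gl31.h>

void update_and_draw(GLuint compute_prog, GLuint draw_prog,
                     GLuint particle_ssbo, GLuint indirect_buf,
                     GLuint particle_count) {
    // Run the simulation: one work group per 64 particles.
    glUseProgram(compute_prog);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, particle_ssbo);
    glDispatchCompute(particle_count / 64, 1, 1);

    // Make the compute writes visible to vertex fetch and to the
    // indirect draw command before they are consumed.
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT |
                    GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT |
                    GL_COMMAND_BARRIER_BIT);

    // Draw whatever the GPU produced; the draw arguments live in a buffer
    // object, so the CPU never reads the particle count back.
    glUseProgram(draw_prog);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_buf);
    glDrawArraysIndirect(GL_POINTS, nullptr);
}
```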

The OpenGL ES 3.1 spec is available at the Khronos website.

Source: Khronos

AMD Radeon R9 Graphics Stock Friday Night Update

Subject: Graphics Cards | March 14, 2014 - 10:17 PM |
Tagged: radeon, R9 290X, r9 290, r9 280x, r9 280, amd

While sitting on the couch watching some college basketball I decided to start browsing Amazon.com and Newegg.com for some Radeon R9 graphics cards.  With all of the stock and availability issues AMD has had recently, this is a more frequent occurrence for me than I would like to admit.  Somewhat surprisingly, things appear to be improving for AMD at the high end of the product stack.  Take a look at what I found.

Card                              Amazon.com   Newegg.com
ASUS Radeon R9 290X DirectCU II   $599         -
Visiontek R9 290X                 $599         -
XFX R9 290X Double D              $619         -
ASUS R9 290 DirectCU II           $499         -
XFX R9 290 Double D               $499         -
MSI R9 290 Gaming                 $465         $469
PowerColor TurboDuo AXR9 280X     -            $329
Visiontek R9 280X                 $370         $349
XFX R9 280 Double D               -            $289
Sapphire Dual-X R9 280            -            $299
Sapphire R7 265                   $184         $149

msir9290.jpg

It's not perfect, but it's better.  I was able to find two R9 290X cards at $599, which is just $50 over the expected selling price of $549.  The XFX Double D R9 290X at $619 is pretty close as well.  The least expensive R9 290 I found was $465, but others remain about $100 over the suggested price.  In fact, having the R9 290 and R9 290X only $100 apart, as opposed to the $150 that AMD would like you to believe, is more realistic based on how close the two SKUs are in performance.  

Stepping a bit lower, the R9 280X (which is essentially the same as the HD 7970 GHz Edition) can be found for $329 and $349 on Newegg.  Those prices are just $30-50 more than the suggested pricing!  The brand new R9 280, similar in specs to the HD 7950, is starting to show up for $289 and $299; $10 over what AMD told us to expect.

Finally, though not really a high end card, I did see that the R7 265 was showing up at both Amazon.com and Newegg.com for the second time since its announcement in February. For budget 1080p gamers, if you can find it, this could be the best card you can pick up.

What deals are you finding online?  If you have one worth adding here, let me know! Are the availability problems and high prices on AMD GPUs finally behind us?

AMD Teasing Dual GPU Graphics Card, Punks Me at Same Time

Subject: Graphics Cards | March 13, 2014 - 10:52 AM |
Tagged: radeon, amd

This morning I had an interesting delivery on my doorstep.

The only thing inside it was an envelope stamped TOP SECRET and this photo.  Coming from AMD's PR department, the hashtag #2betterthan1 adorned the back of the picture.  

twobetter.jpg

This original photo is from like....2004.  Nice, very nice AMD.

With all the rumors circling around the release of a new dual-GPU graphics card based on Hawaii, it seems that AMD is stepping up the viral marketing campaign a bit early.  Code-named 'Vesuvius', a single card with dual R9 290X GPUs seems crazy due to high power consumption, but maybe AMD has been holding back its best, most power-efficient GPUs for such a release.

What do you think?  Can AMD make a dual-GPU Hawaii card happen?  How will this affect or be affected by the GPU shortages and price hikes still plaguing the R9 290 and R9 290X?  How much would you be willing to PAY for something like this?

Win a GeForce GTX 750 Ti by Showing Off Your Upgrade-Worthy Rig!

Subject: General Tech, Graphics Cards | March 11, 2014 - 09:06 PM |
Tagged: nvidia, gtx 750 ti, giveaway, geforce, contest

UPDATE:  We have our winners! Congrats to the following users, who submitted upgrade-worthy PCs and will each be shipped a free GeForce GTX 750 Ti courtesy of NVIDIA! 

  • D. Todorov
  • C. Fogg
  • K. Rowe
  • K. Froehlich
  • D. Aarssen

When NVIDIA launched the GeForce GTX 750 Ti this month, it convinced us to give this highly efficient graphics card a chance to upgrade some off-the-shelf, underpowered PCs.  In a story that we published just a week ago, we were able to convert three pretty basic and pretty boring computers into impressive gaming PCs by adding in the $150 Maxwell-based graphics card.

gateway-bioshock.png

If you missed the video we did on the upgrade process and results, check it out here.

Now we are going to give our readers the chance to do the same thing to their PCs.  Do you have a computer in your home that is just not up to the task of playing the latest PC games?  Then this contest is right up your alley.

IMG_9552.JPG

Prizes: 1 of 5 GeForce GTX 750 Ti Graphics Cards

Your Task: You are going to have to do a couple of things to win one of these cards in our "Upgrade Story Giveaway."  We want to make sure these cards are going to those of you who can really use one, so here is what we are asking for (you can find the form to fill out right here):

  1. Show us your PC that is in need of an upgrade!  Take a picture of your machine with this contest page on the screen or something similar and share it with us.  You can use Imgur.com to upload your photo if you need some place to put it.  An inside shot would be good as well.  Place the URL for your image in the appropriate field in the form below.
  2. Show us your processor and integrated graphics that need some help!  That means you can use a program like CPU-Z to view the processor in your system and then GPU-Z to show us the graphics setup.  Take a screenshot of both of these programs so we can see what hardware you have that needs more power for PC gaming!  Place the URL for that image in the correct field below.
  3. Give us your name and email address so we can contact you for more information if you win!
  4. Leave us a comment below to let me know why you think you should win!!
  5. Subscribing to our PCPer Live! mailing list or even our PCPer YouTube channel wouldn't hurt either...

That's pretty much it!  We'll run this promotion for 2 weeks with a conclusion date of March 13th. That should give you plenty of time to get your entry in.

Good luck!!

Microsoft, Along with AMD, Intel, NVIDIA, and Qualcomm, Will Announce DirectX 12 at GDC 2014

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 5, 2014 - 08:28 PM |
Tagged: qualcomm, nvidia, microsoft, Intel, gdc 14, GDC, DirectX 12, amd

The announcement of DirectX 12 has been given a date and time via a blog post on the Microsoft Developer Network (MSDN) blogs. On March 20th at 10:00am (I assume PDT), a few days into the 2014 Game Developers Conference in San Francisco, California, the upcoming specification should be detailed for attendees. Apparently, four GPU manufacturers will also be involved with the announcement: AMD, Intel, NVIDIA, and Qualcomm.

microsoft-dx12-gdc-announce.jpg

As we reported last week, DirectX 12 is expected to target increased hardware control and decreased CPU overhead for added performance in "cutting-edge 3D graphics" applications. Really, this is the best time for it. Graphics processors have mostly settled into being highly efficient co-processors for parallel data, with some specialized logic for geometry and video tasks. A new specification can relax the demands placed on video drivers and thus keep the GPU (or GPUs, in Mantle's case) loaded and utilized.

But, to me, the most interesting part of this announcement is the nod to Qualcomm. Microsoft values DirectX as leverage over other x86 and ARM-based operating systems. With Qualcomm, clearly Microsoft believes that either Windows RT or Windows Phone will benefit from the API's next version. While it will probably make PC gamers nervous, mobile platforms will benefit most from reducing CPU overhead, especially if it can be spread out over multiple cores.

Honestly, that is fine by me. As long as Microsoft returns to treating the PC as a first-class citizen, I do not mind them helping mobile, too. We will definitely keep you up to date as we know more.

Source: MSDN Blogs

AMD Radeon R9 290X shows up for $549 on Newegg. Is the worst behind us?

Subject: Graphics Cards | March 4, 2014 - 03:38 PM |
Tagged: radeon, R9 290X, hawaii, amd, 290x

Yes, I know it is only one card.  And yes, I know that this could sell out in the next 10 minutes and amount to nothing, but I was so interested, excited, and curious about this that I wanted to put together a news post.  I just found a Radeon R9 290X card selling for $549 on Newegg.com.  That is the normal, regular, non-inflated, expected retail price.

WAT.

290xwat.jpg

You can get a Powercolor AXR9 290X with 4GB of memory for $549 right now, likely only if you hurry.  That same GPU on Amazon.com will cost you $676.  This same card at Newegg.com has been as high as $699:

290xwat2.jpg

Again - this is only one card on one site, but the implications are positive.  This is also a reference design card, rather than one of the superior offerings with a custom cooler.  After that single card, the next lowest price is $629, followed by a couple at $649 and then more at $699.  We are still waiting to hear from AMD on the issue: what its response is, and whether it can actually do anything to fix it.  It seems plausible, but maybe not likely, that the draw of coin mining has reached a peak (and who can blame the miners) and the pricing of AMD GPUs could stabilize.  Maybe.  It's classified.

But for now, if you want an R9 290X, Newegg.com has at least one option that makes sense.

AMD Launches Another Graphics Card: Radeon R9 280

Subject: Graphics Cards | March 4, 2014 - 08:00 AM |
Tagged: radeon, r9 280, R9, hd 7950, amd

AMD continues to churn out its Radeon graphics card line. Out today, or so we are told, is the brand new Radeon R9 280! That's right kids, it's kind of like the R9 280X, but without the letter at the end.  In fact, do you know what it happens to be very similar to? The Radeon HD 7950. Check out the testing card we got in.

280-5.jpg

It's okay AMD, it's just a bit of humor...

Okay, let's put the jokes aside and talk about what we are really seeing here.  

The new Radeon R9 280 is the latest in the line of rebranding and reorganizing steps made by AMD in the move from the "HD" moniker to "R9/R7". As the image above would indicate, the specifications of the R9 280 are nearly 1:1 with those of the Radeon HD 7950 with Boost, released in August of 2012. We built a specification table below.

                  Radeon R9 280X   Radeon R9 280   Radeon R9 270X   Radeon R9 270   Radeon R7 265
GPU Code name     Tahiti           Tahiti          Pitcairn         Pitcairn        Pitcairn
GPU Cores         2048             1792            1280             1280            1024
Rated Clock       1000 MHz         933 MHz         1050 MHz         925 MHz         925 MHz
Texture Units     128              112             80               80              64
ROP Units         32               32              32               32              32
Memory            3GB              3GB             2GB              2GB             2GB
Memory Clock      6000 MHz         6000 MHz        5600 MHz         5600 MHz        5600 MHz
Memory Interface  384-bit          384-bit         256-bit          256-bit         256-bit
Memory Bandwidth  288 GB/s         288 GB/s        179 GB/s         179 GB/s        179 GB/s
TDP               250 watts        250 watts       180 watts        150 watts       150 watts
Peak Compute      4.10 TFLOPS      3.34 TFLOPS     2.69 TFLOPS      2.37 TFLOPS     1.89 TFLOPS
MSRP              $299             $279            $199             $179            $149
Current Pricing   $420 (Amazon)    ???             $259 (Amazon)    $229 (Amazon)   ???
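
As a sanity check on the Peak Compute row: GCN shaders execute one fused multiply-add (two floating-point operations) per core per clock, so peak throughput is simply 2 x cores x clock. A quick sketch using the table's figures:

```cpp
// Sanity-checking the Peak Compute row: GCN executes one FMA (two FLOPs)
// per shader core per clock, so peak TFLOPS = 2 x cores x clock.
#include <cstdio>

int main() {
    struct { const char* name; int cores; double mhz; } cards[] = {
        {"R9 280X", 2048, 1000.0}, {"R9 280", 1792, 933.0},
        {"R9 270X", 1280, 1050.0}, {"R9 270", 1280, 925.0},
        {"R7 265",  1024,  925.0},
    };
    for (const auto& c : cards) {
        double tflops = 2.0 * c.cores * (c.mhz * 1e6) / 1e12;
        printf("%-8s %.2f TFLOPS\n", c.name, tflops);  // e.g. R9 280: 3.34
    }
    return 0;
}
```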

If you are keeping track, AMD should just about be out of cards to drag over to the new naming scheme. The R9 280 has a slightly higher top boost clock than the Radeon HD 7950 did (933 MHz vs. 925 MHz) but otherwise looks very similar. Oh, and apparently the R9 280 will require a 6+8 pin PCIe power combination while the HD 7950 was only 6+6 pin. Despite that change, it is still built on the same Tahiti GPU that has been chugging along for years now.  

The Radeon R9 280 continues to support an assortment of AMD's graphics technologies, including Mantle, PowerTune, CrossFire, Eyefinity, and DirectX 11.2. Note that because we are looking at an ASIC that has been around for a while, you will not find XDMA or TrueAudio support.

280-3.jpg

The estimated MSRP of $279 is only $20 lower than the MSRP of the R9 280X, but you should take all pricing estimates from AMD with a grain of salt. The prices listed in the table above from Amazon.com were current as of March 3rd, and of course, we did see Newegg attempt to get people to buy R9 290X cards for $900 recently. AMD did use some interesting language on the availability of the R9 280 in its emails to me.

The AMD Radeon R9 280 will become available at a starting SEP of $279USD the first week of March, with wider availability the second week of March. Following the exceptional demand for the entire R9 Series, we believe the introduction of the R9 280 will help ensure that every gamer who plans to purchase an R9 Series graphics card has an opportunity to do so.

I like AMD's intent with this release - get more physical product into the channel to hopefully lower prices and enable more gamers to purchase the Radeon card they really want. However, until I see a swarm of parts on Newegg.com or Amazon.com at, or very close to, the MSRPs listed in the table above for an extended period, I think the effects of coin mining (and the rumors of GPU shortages) will continue to plague us. No one wants to see competition in the market, and great options at reasonable prices for gamers, more than we do!

280-1.jpg

AMD hasn't sent out any samples of the R9 280 as far as I know (at least we didn't get any) but the performance should be predictable based on its specifications relative to the R9 280X and the HD 7950 before it.  

Do you think the R9 280 will fix the pricing predicament that AMD finds itself in today, and if it does, are you going to buy one?

EVGA Launches GTX 750 and GTX 750 SC With 2GB GDDR5

Subject: Graphics Cards | March 3, 2014 - 10:33 PM |
Tagged: nvidia, maxwell, gtx 750, evga

EVGA recently launched two new GTX 750 graphics cards with 2GB of GDDR5 memory. The new cards include a reference clocked GTX 750 2GB and a factory overclocked GTX 750 2GB SC (Super Clocked).

EVGA GTX 750 2GB GDDR5 GPU.jpg

The new graphics cards are based around NVIDIA’s GTX 750 GPU with 512 Maxwell architecture CUDA cores. The GTX 750 is the little brother to the GTX 750 Ti we recently reviewed which has 640 cores. EVGA has clocked the GTX 750 2GB card’s GPU at reference clockspeeds of 1020 MHz base and 1085 MHz boost and memory at a reference speed of 1253 MHz. The “Super Clocked” GTX 750 2GB SC card keeps the memory at reference speeds but overclocks the GPU quite a bit to 1215 MHz base and 1294 MHz boost.

            EVGA GTX 750 2GB                 EVGA GTX 750 2GB Super Clocked
GPU         512 CUDA Cores (Maxwell)         512 CUDA Cores (Maxwell)
GPU Base    1020 MHz                         1215 MHz
GPU Boost   1085 MHz                         1294 MHz
Memory      2 GB GDDR5 @ 1253 MHz (128-bit)  2 GB GDDR5 @ 1253 MHz (128-bit)
I/O         1 x DVI, 1 x HDMI, 1 x DP        1 x DVI, 1 x HDMI, 1 x DP
TDP         55W                              55W
Price       $129.99                          $139.99

Both cards have a 55W TDP, need no PCI-E power connector, and utilize a single shrouded fan heatsink. The cards are short but occupy two PCI slots. The rear panel hosts one DVI, one HDMI, and one DisplayPort video output, along with ventilation slots for the HSF. Further, both cards support NVIDIA’s G-Sync technology.

EVGA GTX 750 2GB GDDR5 SC.jpg

The reference clocked GTX 750 2GB is $129.99 while the factory overclocked model is $139.99. Both cards are similar to their respective predecessors except for the additional 1GB of GDDR5 memory, which comes at a $10 premium and should help a bit at higher resolutions.

Source: EVGA

Speaking of Passive Cooling: Tom's Hardware's GTX 750 Ti

Subject: General Tech, Graphics Cards | March 2, 2014 - 05:20 PM |
Tagged: passive cooling, maxwell, gtx 750 ti

The NVIDIA GeForce GTX 750 Ti is fast but also power efficient, enough so that Ryan found it a worthwhile upgrade for cheap desktops with cheap power supplies that were never intended for discrete graphics. Of course, that recommendation is about making the best of what you've got; better options probably exist if you are building a PC (or getting one built by a friend or a computer store).

toms-passive-geforce-gtx-750-ti-cooling,O-U-423390-22.jpg

Image Credit: Tom's Hardware

Tom's Hardware went another route: make it fanless.

After wrecking a passively cooled Radeon HD 7750, which is probably a crime in Texas, they clamped its cooler onto the Maxwell-based GTX 750 Ti. While the cooler was designed for good airflow, they decided to leave it in a completely enclosed case without fans. Under load, the card reached 80 C within about twenty minutes. The driver backed off performance slightly, 1-3% depending on your frame of reference, but was able to maintain that target temperature.

Now, if only it supported SLI, this person might be happy.

Sapphire Launches Low Profile R7 240 GPU For HTPCs

Subject: Graphics Cards | March 2, 2014 - 03:14 AM |
Tagged: sapphire, R7 240, htpc, SFF, low profile, steam os

Sapphire is preparing a new low profile Radeon R7 240 graphics card for home theater PCs and small form factor desktop builds. The new graphics card is a single-slot design with a small heatsink-and-fan cooler that is shorter than the low profile PCI bracket, for assured compatibility with even extremely cramped cases.

The Sapphire R7 240 card pairs a 28nm AMD GCN-based GPU with 2GB of DDR3 memory. There are two HDMI 1.4a display outputs, each supporting 4K (4096 x 2160) resolution. Specifically, this particular iteration of the Radeon R7 240 has 320 stream processors clocked at 730 MHz base and 780 MHz boost, along with 2GB of DDR3 memory clocked at 900 MHz on a 128-bit bus. The card further has 20 TMUs and 8 ROPs, and a power-sipping 30W TDP.

Sapphire Radeon R7 240 Low Profile Graphics Card for SFF Desktops and HTPCs.jpg

This low profile R7 240 is a sub-$100 part that can easily power a home theater PC or Steam OS streaming endpoint. Actually, the R7 240 itself can deliver playable frame rates with low quality settings and lowered resolutions, averaging at least 30 FPS in modern titles like Bioshock Infinite and BF4, according to this review. Another use case would be to add the card to an existing AMD APU-based system in Hybrid CrossFire (which has seen Frame Pacing fixes!) for a bit more gaming horsepower under a strict budget.

The card occupies a tight niche: it is only viable in situations constrained by a tight budget, physical size, and the requirement to buy new rather than pick up a faster single card from an older generation on the used market. Still, it is nice to have options, and this will be one such new budget alternative. Exact pricing is not yet available, but it should be hitting store shelves soon. For an idea of pricing, the full height Sapphire R7 240 retails for around $70, so expect the new low profile variant to be around that price, if at a slight premium.

Video Perspective: Gaming on an Overclocked AMD A10-7850K APU

Subject: Graphics Cards, Processors | February 26, 2014 - 07:18 PM |
Tagged:

Overclocking the memory and GPU clock speeds on an AMD APU can greatly improve gaming performance - it is known.  With the new AMD A10-7850K in hand I decided to do a quick test and see how much we could improve average frame rates for mainstream gamers with only some minor tweaking of the motherboard BIOS.  

Using some high-end G.Skill RipJaws DDR3-2400 memory, we were able to push memory speeds on the Kaveri APU up to 2400 MHz, a 50% increase over the stock 1600 MHz rate.  We also increased the clock speed on the GPU portion of the A10-7850K from 720 MHz to 1028 MHz, a 42% boost.  Interestingly, as you'll see in the video below, the memory speed had a MUCH more dramatic impact on our average frame rates in-game.  

In the three games we tested for this video - GRID 2, Bioshock Infinite, and Battlefield 4 - the total performance gain ranged from 26% to 38%.  Clearly that can make the AMD Kaveri APU an even more potent gaming platform if you are willing to shell out for the high speed memory.

                                     Stock      GPU OC     Memory OC   Total OC   Avg FPS Change
Battlefield 4 (1920x1080, Medium)    22.4 FPS   23.7 FPS   28.2 FPS    29.1 FPS   +29%
GRID 2 (1920x1080, High + 2xAA)      33.5 FPS   36.3 FPS   41.1 FPS    42.3 FPS   +26%
Bioshock Infinite (1920x1080, Low)   30.1 FPS   30.9 FPS   40.2 FPS    41.8 FPS   +38%
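
The outsized impact of the memory overclock makes sense, since Kaveri's integrated GPU shares dual-channel DDR3 bandwidth with the CPU and is frequently bandwidth-bound. Peak bandwidth scales linearly with transfer rate (assuming the usual 8-byte bus per channel and two channels):

```cpp
// Dual-channel DDR3 peak bandwidth: transfer rate (MT/s) x 8 bytes per
// channel x 2 channels. The 1600 -> 2400 jump is the 50% bandwidth
// increase described above.
#include <cstdio>

int main() {
    const double rates_mts[] = {1600.0, 2400.0};  // DDR3 transfer rates
    for (double rate : rates_mts) {
        double gbs = rate * 1e6 * 8.0 * 2.0 / 1e9;
        printf("DDR3-%.0f: %.1f GB/s\n", rate, gbs);  // 25.6 vs 38.4 GB/s
    }
    return 0;
}
```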

DirectX 12 and a new OpenGL to challenge AMD Mantle coming at GDC?

Subject: Graphics Cards | February 26, 2014 - 06:17 PM |
Tagged: opengl, nvidia, Mantle, gdc 14, GDC, DirectX 12, DirectX, amd

UPDATE (2/27/14): AMD sent over a statement today after seeing our story.  

AMD would like you to know that it supports and celebrates a direction for game development that is aligned with AMD’s vision of lower-level, ‘closer to the metal’ graphics APIs for PC gaming. While industry experts expect this to take some time, developers can immediately leverage efficient API design using Mantle, and AMD is very excited to share the future of our own API with developers at this year’s Game Developers Conference.

Credit over to Scott and his reader at The Tech Report for spotting this interesting news today!!

It appears that DirectX and OpenGL are going to be announcing some changes at next month's Game Developers Conference in San Francisco.  According to some information found in the session details, both APIs are trying to steal some of the thunder from AMD's Mantle, recently released with the Battlefield 4 patch.  Mantle is an API built by AMD to enable more direct, lower-level access to its GCN graphics hardware, allowing developers to code games that are more efficient, providing better performance for the PC gamer.

mantle.jpg

From the session titled DirectX: Evolving Microsoft's Graphics Platform we find this description (emphasis mine):

For nearly 20 years, DirectX has been the platform used by game developers to create the fastest, most visually impressive games on the planet.

However, you asked us to do more. You asked us to bring you even closer to the metal and to do so on an unparalleled assortment of hardware. You also asked us for better tools so that you can squeeze every last drop of performance out of your PC, tablet, phone and console.

Come learn our plans to deliver.

Another DirectX session hosted by Microsoft is titled DirectX: Direct3D Futures (emphasis mine): 

Come learn how future changes to Direct3D will enable next generation games to run faster than ever before!

In this session we will discuss future improvements in Direct3D that will allow developers an unprecedented level of hardware control and reduced CPU rendering overhead across a broad ecosystem of hardware. 

If you use cutting-edge 3D graphics in your games, middleware, or engines and want to efficiently build rich and immersive visuals, you don't want to miss this talk.

Now look at a line from our initial article on AMD Mantle, written when it was announced at AMD's Hawaii tech day event:

It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software.

This is all sounding very familiar.  It would appear that Microsoft has finally been listening to the development community and is working on the performance aspects of DirectX.  Likely spurred in no small part by AMD and Mantle's development, an updated DirectX 12 that includes a similar feature set and similar performance changes would shift the market in a few key ways.

olddirectx.jpg

Is it time again for innovation with DirectX?

First and foremost, what does this do for AMD's Mantle in the near or distant future?  For now, BF4 will still include Mantle support as will games like Thief (update pending) but going forward, if these DX12 changes are as specific as I am being led to believe, then it would be hard to see anyone really sticking with the AMD-only route.  Of course, if DX12 doesn't really address the performance and overhead issues in the same way that Mantle does then all bets are off and we are back to square one.

Interestingly, OpenGL might also be getting into the ring with the session Approaching Zero Driver Overhead in OpenGL:

Driver overhead has been a frustrating reality for game developers for the entire life of the PC game industry. On desktop systems, driver overhead can decrease frame rate, while on mobile devices driver overhead is more insidious--robbing both battery life and frame rate. In this unprecedented sponsored session, Graham Sellers (AMD), Tim Foley (Intel), Cass Everitt (NVIDIA) and John McDonald (NVIDIA) will present high-level concepts available in today's OpenGL implementations that radically reduce driver overhead--by up to 10x or more. The techniques presented will apply to all major vendors and are suitable for use across multiple platforms. Additionally, they will demonstrate practical demos of the techniques in action in an extensible, open source comparison framework.

This description seems to point to new or lesser-known programming methods that can be used with today's OpenGL to lower overhead, without the need for custom APIs or even DX12.  These could be new modules from vendors or possibly a new revision of OpenGL - we'll find out next month.
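
As a guess at the flavor of technique the abstract is hinting at (and only a guess), one widely discussed overhead reducer in current OpenGL is the persistently mapped buffer from GL 4.4's ARB_buffer_storage: map once, write into it every frame, and never pay for per-frame Map/Unmap driver work. A minimal sketch, with function-pointer loading and fence synchronization omitted:

```cpp
// One plausible "zero driver overhead" technique: persistently mapped
// buffers (GL 4.4 / ARB_buffer_storage). This is a guess at the session's
// subject matter, not a summary of it. Assumes a GL 4.4 context with
// entry points already resolved; fencing is omitted for brevity.
#define GL_GLEXT_PROTOTYPES
#include <GL/glcorearb.h>
#include <cstring>

void* create_persistent_buffer(GLuint* buffer, GLsizeiptr size) {
    const GLbitfield flags = GL_MAP_WRITE_BIT |
                             GL_MAP_PERSISTENT_BIT |
                             GL_MAP_COHERENT_BIT;
    glGenBuffers(1, buffer);
    glBindBuffer(GL_ARRAY_BUFFER, *buffer);
    // Immutable storage is what makes keeping the mapping alive legal.
    glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, flags);
    return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
}

// Per frame: write vertex data straight into the mapping. No glMapBuffer,
// no glUnmapBuffer, no driver synchronization point on this path.
void upload_frame(void* mapping, const void* vertices, std::size_t bytes) {
    std::memcpy(mapping, vertices, bytes);
}
```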

All of this leaves us with a lot of questions that will hopefully be answered when we get to GDC in mid-March.  Will this new version of DirectX be enough to reduce API overhead to appease even the stingiest of game developers?  How will AMD react to this new competitor to Mantle (or was Mantle really only created to push this process along)?  What time frame does Microsoft have on DX12?  Does this save NVIDIA from any more pressure to build its own custom API?

Gaming continues to be the driving factor of excitement and innovation for the PC!  Stay tuned for an exciting spring!

Source: Tech Report

AMD Catalyst 14.2 Beta V1.3 Driver Released

Subject: General Tech, Graphics Cards | February 25, 2014 - 02:46 PM |
Tagged: amd, Mantle, TrueAudio, Thief 4, thief

AMD released their Catalyst 14.2 Beta V1.3 graphics drivers today, coinciding with the launch of Thief. The game, developed by Eidos Montreal and published by Square Enix, is another entry in "Gaming Evolved" and their "Never Settle" promotion. Soon, it will also support Mantle and TrueAudio.

AMD-Catalyst.jpg

Being Thief's launch driver, it provides optimizations for both single-GPU and CrossFire customers in that title. It also provides fixes for other titles, especially Battlefield 4, which can now run Mantle with up to four GPUs. Battlefield 3 and 4 also support Frame Pacing on very high resolution (greater than 2560x1600) monitors in dual-card CrossFire. The driver also fixes a couple of bugs with CrossFire in DirectX 9 games, missing textures in Minecraft, and corruption in X-Plane.

The Catalyst 14.2 Beta V1.3 driver is available now at AMD's website.

Source: AMD

New Intel Graphics Drivers Further Spread Quick Sync Video

Subject: General Tech, Graphics Cards, Processors | February 25, 2014 - 01:33 PM |
Tagged: Ivy Bridge, Intel, iGPU, haswell

Recently, Intel released the 15.33.14.3412 (15.33.14.64.3412 for 64-bit) drivers for their Ivy Bridge and Haswell integrated graphics. The download was apparently published on January 29th, while its patch notes are dated February 22nd. It features expanded support for Intel Quick Sync Video Technology, allowing certain Pentium and Celeron-class processors to access the feature, as well as an alleged performance increase in OpenGL-based games. Probably the most famous OpenGL title of our time is Minecraft, although I do not know whether that specific game will see improvements (and if so, how much).

Intel-logo.svg_.png

The new driver enables Quick Sync Video for the following processors:

  • Pentium 3558U
  • Pentium 3561Y
  • Pentium G3220(Unsuffixed/T/TE)
  • Pentium G3420(Unsuffixed/T)
  • Pentium G3430
  • Celeron 2957U
  • Celeron 2961Y
  • Celeron 2981U
  • Celeron G1820(Unsuffixed/T/TE)
  • Celeron G1830

Besides the additions for these processors and the OpenGL performance improvements, the driver naturally fixes several bugs in each of its supported OSes. You can download the appropriate drivers from the Intel Download Center.

Source: Intel

NVIDIA Coin Mining Performance Increases with Maxwell and GTX 750 Ti

Subject: General Tech, Graphics Cards | February 20, 2014 - 05:45 PM |
Tagged: nvidia, mining, maxwell, litecoin, gtx 750 ti, geforce, dogecoin, coin, bitcoin, altcoin

As we have talked about on several different occasions, altcoin mining (anything that is NOT Bitcoin specifically) is a force on the current GPU market whether we like it or not. Traditionally, miners have only bought AMD-based GPUs, due to their performance advantage over the NVIDIA competition. However, with continued development of the cudaMiner application over the past few months, NVIDIA cards have been gaining performance in Scrypt mining.

The biggest performance change we've seen yet has come with a new version of cudaMiner released yesterday. This new version (2014-02-18) brings initial support for the Maxwell architecture, which was just released yesterday in the GTX 750 and 750 Ti. With support for Maxwell, mining starts to become a more compelling option with this new NVIDIA GPU.

With the new version of cudaMiner on the reference version of the GTX 750 Ti, we were able to achieve a hashrate of 263 KH/s, impressive when you compare it to the previous generation, Kepler-based GTX 650 Ti, which tops out at about 150 KH/s.

IMG_9552.JPG

As you may know from our full GTX 750 Ti review, the GM107 overclocks very well. We were able to push our sample to the highest configurable offset of +135 MHz, with an additional 500 MHz added to the memory frequency and a 31 mV bump to the voltage offset. All of this combined for a ~1200 MHz clock speed while mining and an additional 40 KH/s or so of performance, bringing us to just under 300 KH/s with the 750 Ti.

perf.png

As we compare the performance of the 750 Ti to AMD GPUs and previous-generation NVIDIA GPUs, we start to see how impressively this card stacks up considering its $150 MSRP. For less than half the price of the GTX 770, and roughly the same price as an R7 260X, you can achieve the same performance.

power.png

When we look at power consumption based on the TDP of each card, this comparison only becomes more impressive. At 60W, there is no card that comes close to the performance of the 750 Ti when mining. This means you will spend less to run a 750 Ti than an R7 260X or GTX 770 for roughly the same hash rate.

perfdollar.png

Taking a look at the performance per dollar ratings of these graphics cards, we see the two top performers are the AMD R7 260X and our overclocked GTX 750 Ti.

perfpower.png

However, when looking at the performance per watt differences across the field, the GTX 750 Ti looks even more impressive. While most miners may think they don't care about power draw, it can help your bottom line: by being able to buy a smaller, less expensive power supply, you move up the payoff date for the hardware.  This also bodes well for the future Maxwell-based graphics cards that we will likely see released later in 2014.  
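
For the curious, both derived metrics are easy to reproduce. The sketch below uses only the 750 Ti figures quoted in this article; the inputs for the other cards live in the charts, so they are not repeated here:

```cpp
// Reproducing the two chart metrics from the 750 Ti numbers quoted above:
// hash rate per watt (using TDP) and hash rate per dollar (using MSRP).
#include <cstdio>

int main() {
    struct { const char* name; double khs, watts, dollars; } cards[] = {
        {"GTX 750 Ti (stock)", 263.0, 60.0, 150.0},
        {"GTX 750 Ti (OC)",    300.0, 60.0, 150.0},  // ~1200 MHz core
    };
    for (const auto& c : cards) {
        printf("%-20s %5.2f KH/s per watt   %5.2f KH/s per dollar\n",
               c.name, c.khs / c.watts, c.khs / c.dollars);
    }
    return 0;
}
```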

Continue reading our look at Coin Mining performance with the GTX 750 Ti and Maxwell!!

Was leading with a low end Maxwell smart?

Subject: Graphics Cards | February 19, 2014 - 04:43 PM |
Tagged: geforce, gm107, gpu, graphics, gtx 750 ti, maxwell, nvidia, video

We finally saw Maxwell yesterday, with a new design for the SMs called SMM, each of which consists of four blocks of 32 dedicated, non-shared CUDA cores.  In theory that should allow NVIDIA to pack more SMMs onto the card than they could with the previous SMX units.  The new architecture debuted on a $150 card, which means we don't really get to see what it is capable of yet.  At that price it competes with AMD's R7 260X and R7 265, at least if you can find them at their MSRPs and not at inflated cryptocurrency levels.  Legit Reviews contrasted the performance of two overclocked GTX 750 Ti cards against those two cards, as well as the previous generation GTX 650 Ti Boost, across a wide selection of games to see how the new GPU stacks up performance-wise, which you can read here.

That is of course after you read Ryan's full review.

nvidia-geforce-gtx750ti-645x399.jpg

"NVIDIA today announced the new GeForce GTX 750 Ti and GTX 750 video cards, which are very interesting to use as they are the first cards based on NVIDIA's new Maxwell graphics architecture. NVIDIA has been developing Maxwell for a number of years and have decided to launch entry-level discrete graphics cards with the new technology first in the $119 to $149 price range. NVIDIA heavily focused on performance per watt with Maxwell and it clearly shows as the GeForce GTX 750 Ti 2GB video card measures just 5.7-inches in length with a tiny heatsink and doesn't require any internal power connectors!"

Here are some more Graphics Card articles from around the web:


AMD Gaming Evolved App with Redeemable Prizes

Subject: General Tech, Graphics Cards | February 19, 2014 - 12:01 AM |
Tagged: raptr, gaming evolved, amd

The AMD Gaming Evolved App updates your drivers, optimizes your game settings, streams your gameplay to Twitch, accesses some social media platforms, and now gives prizes. Points are given for playing games using the app, optimizing game settings, and so forth. These can be exchanged for rewards ranging from free games to Sapphire R9-series graphics cards.

amd-raptr.jpg

This program has been in beta for a little while now, without the ability to redeem points. The system has been restructured to encourage using the entire app, lowering the accumulation rate for playing games and adding other goals. Beta participants do not lose all of their points; rather, their balance is rescaled to be more in line with the new system.

The Gaming Evolved prize program has launched today.

Press release after the teaser.

Source: raptr