PNY's Customized OC series, decent value for great performance

Subject: Graphics Cards | August 25, 2014 - 03:57 PM |
Tagged: pny, gtx 780, gtx 780 ti, Customized OC, factory overclocked

PNY is not as heavily marketed as some GPU resellers in North America, but that doesn't mean they are not hard at work designing custom cards.  Hardware Canucks recently tried out the Customized OC GTX 780 and 780 Ti, both at their factory overclocks and pushed to the limit with manual overclocking.  Using EVGA's Precision overclocking tool, they pushed the GTX 780 to 1120MHz core and 6684MHz memory, and the Ti to an impressive 1162MHz core and 7800MHz memory.  Read on to see how effective the custom cooler proved to be, as it is also a major part of the Customized series.

"PNY's latest Customized series will be rolling through their GTX 780 and GTX 780 Ti lineups, bringing high end cooling and increased performance."

AMD Announces Radeon R9 285X and R9 285 Graphics Cards

Subject: Graphics Cards | August 23, 2014 - 10:46 AM |
Tagged: radeon, r9 285, R9, amd, 285

Today during AMD's live stream event celebrating 30 years of graphics and gaming, the company spent a bit of time announcing and teasing a new graphics card, the Radeon R9 285X and R9 285. Likely based on the Tonga GPU die, the specifications haven't been confirmed but most believe that the chip will feature 2048 stream processors, 128 texture units, 32 ROPs and a 256-bit memory bus.

To help raise money for the Child's Play charity, AMD currently has an AMD Radeon R9 285 listed on eBay. The listing shows an ASUS-built, Strix-style cooled retail card, with 2GB of memory being the only specification visible on the box.

The R9 285X and R9 285 will more than likely replace the R9 280X and R9 280, and we should see them shipping and available in very early September.

UPDATE: AMD showed specifications of the Radeon R9 285 during the live stream.

For those of you with eyes as bad as mine, here are the finer points:

  • 1,792 Stream Processors
  • 918 MHz GPU Clock
  • 3.29 TFLOPS peak performance
  • 112 Texture units
  • 32 ROPs
  • 2GB GDDR5
  • 256-bit memory bus
  • 5.5 GHz memory clock
  • 2x 6-pin power connectors
  • 190 watt TDP
  • $249 MSRP
  • Release date: September 2nd

These Tonga GPU specifications are VERY similar to those of the R9 280: 1,792 stream processors, 112 texture units, etc. However, the R9 280 has a wider memory bus (384-bit) but runs it at a 500 MHz lower effective frequency. Clock speeds on Tonga look like they are just slightly lower as well. Maybe most interesting is the frame buffer size drop from 3GB to 2GB.
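For a quick sanity check of those numbers: at two FLOPs per stream processor per clock (one fused multiply-add), 1,792 stream processors at 918 MHz works out to exactly the quoted 3.29 TFLOPS, and the bandwidth math shows what the narrower bus costs despite the faster memory. A minimal sketch of the arithmetic in C++ (the R9 280's 5.0 GHz effective memory clock is inferred from the "500 MHz lower" comparison above):

    // Peak compute and memory bandwidth for the quoted R9 285 specifications,
    // plus the bandwidth comparison against the R9 280 discussed above.
    #include <cstdio>

    int main() {
        const double stream_processors = 1792;
        const double core_clock_hz     = 918e6;
        // 2 FLOPs per stream processor per clock (one FMA)
        const double peak_flops = stream_processors * core_clock_hz * 2.0;

        // Bandwidth = (bus width in bytes) * effective memory data rate
        const double bw_285 = (256.0 / 8.0) * 5.5e9;  // 256-bit @ 5.5 GHz
        const double bw_280 = (384.0 / 8.0) * 5.0e9;  // 384-bit @ 5.0 GHz

        std::printf("R9 285 peak compute: %.2f TFLOPS\n", peak_flops / 1e12); // ~3.29
        std::printf("R9 285 bandwidth:    %.0f GB/s\n", bw_285 / 1e9);        // 176
        std::printf("R9 280 bandwidth:    %.0f GB/s\n", bw_280 / 1e9);        // 240
        return 0;
    }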

That's all we have for now, but I expect we'll have our samples in very soon and expect a full review shortly!

UPDATE 2: Apparently AMD hasn't said anything about the Radeon R9 285X, so for the time being, that still falls under the "rumor" category. I'm sure we'll know more soon though.

Source: AMD

PCPer Live! Recap - NVIDIA G-Sync Surround Demo and Q&A

Subject: Graphics Cards, Displays | August 22, 2014 - 08:05 PM |
Tagged: video, gsync, g-sync, tom petersen, nvidia, geforce

Earlier today we had NVIDIA's Tom Petersen in studio to discuss the retail availability of G-Sync monitors as well as to get hands on with a set of three ASUS ROG Swift PG278Q monitors running in G-Sync Surround! It was truly an impressive sight and if you missed any of it, you can catch the entire replay right here.

Even if seeing the ASUS PG278Q monitor again doesn't interest you (we have our full review of the monitor right here), you won't want to miss the very detailed Q&A that occurs, answering quite a few reader questions about the technology. Covered items include:

  • Potential added latency of G-Sync
  • Future needs for multiple DP connections on GeForce GPUs
  • Upcoming 4K and 1080p G-Sync panels
  • Can G-Sync Surround work through an MST Hub?
  • What happens to G-Sync when the frame rate exceeds the panel refresh rate? Or drops below minimum refresh rate?
  • What does that memory on the G-Sync module actually do??
  • A demo of the new NVIDIA SHIELD Tablet capabilities
  • A whole lot more!

Another big thank you to NVIDIA and Tom Petersen for coming out our way and for spending the time to discuss these topics with our readers. Stay tuned here at PC Perspective as we will have more thoughts and reactions to G-Sync Surround very soon!!

NVIDIA Live Stream: We Want Your Questions!

Subject: Graphics Cards, Displays, Mobile | August 21, 2014 - 05:23 PM |
Tagged: nvidia, video, live, shield, shield tablet, g-sync, gsync, tom petersen

Tomorrow at 12pm EDT / 9am PDT, NVIDIA's Tom Petersen will be stopping by the PC Perspective office to discuss some topics of interest. There has been no lack of topics floating around the world of graphics cards, displays, refresh rates and tablets recently, and I expect the show tomorrow to be incredibly interesting and educational.

We'll be doing demonstrations of G-Sync Surround (3 panels!) with the ASUS ROG Swift PG278Q display (our review here) and will also show off the SHIELD Tablet (we have a review of that too) with some multiplayer action. If you thought the experience with a single G-Sync monitor was impressive, you will want to hear what a set of three of them can be like.

NVIDIA Live Stream with Tom Petersen

9am PT / 12pm ET - August 22nd

PC Perspective Live! Page

The topic list is going to include (but is not limited to):

  • ASUS PG278Q G-Sync monitor
  • G-Sync availability and pricing
  • G-Sync Surround setup, use and requirements
  • Technical issues surrounding G-Sync: latency, buffers, etc.
  • Comparisons of G-Sync to Adaptive Sync
  • SHIELD Tablet game play
  • Altoids?

But we want your questions! Do you have burning issues that you think need to be addressed by Tom and the NVIDIA team about G-Sync, FreeSync, GameWorks, Tegra, tablets, GPUs and more? Nothing is off limits here, though obviously Tom may be cagey on future announcements. Please use the comments section on this news post below (registration not required) to ask your questions and we can organize them before the event tomorrow. We MIGHT even be able to come up with a couple of prizes to give away for live viewers as well...

See you tomorrow!!

An odd Q2 for tablets and PCs

Subject: General Tech, Graphics Cards | August 19, 2014 - 12:30 PM |
Tagged: jon peddie, gpu market share, q2 2014

Jon Peddie Research's latest Market Watch adds even more ironic humour to the media's continuing proclamations of the impending doom of the PC industry.  This quarter saw tablet sales decline while overall PC sales were up, and that was without any major releases to drive purchasers to adopt new technology.  While JPR does touch on the overall industry, this report is focused on the sale of GPUs and APUs and happens to contain some great news for AMD.  They saw their overall share of the market increase by 11% from last quarter, which works out to just over one percentage point of the entire market.  Intel saw a small rise in share and still holds the majority of the market, as PCs with no discrete GPU are more likely to contain Intel's chips than AMD's.  That leaves NVIDIA, who are still banking solely on discrete GPUs and saw their share decline by over 8% from last quarter, almost two percentage points of the total market.  Check out the other graphs in JPR's overview right here.

"The big drop in graphics shipments in Q1 has been partially offset by a small rise this quarter. Shipments were up 3.2% quarter-to-quarter, and down 4.5% compared to the same quarter last year."

Khronos Announces "Next" OpenGL & Releases OpenGL 4.5

Subject: General Tech, Graphics Cards, Shows and Expos | August 15, 2014 - 08:33 PM |
Tagged: siggraph 2014, Siggraph, OpenGL Next, opengl 4.5, opengl, nvidia, Mantle, Khronos, Intel, DirectX 12, amd

Let's be clear: there are two stories here. The first is the release of OpenGL 4.5 and the second is the announcement of the "Next Generation OpenGL Initiative". Both come from the same press release, but they are two different statements.

OpenGL 4.5 Released

OpenGL 4.5 expands the core specification with a few extensions. Compatible hardware with OpenGL 4.5 drivers is guaranteed to support these. This includes features like direct_state_access, which allows objects to be modified without first binding them to the context, and support for OpenGL ES 3.1 features that are traditionally missing from OpenGL 4, which makes it easier to port OpenGL ES 3.1 applications to OpenGL.
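As a rough illustration of what direct state access changes in practice, here is a minimal fragment contrasting the classic bind-to-edit pattern with its DSA equivalent. This is a sketch, not a complete program: it assumes a current OpenGL 4.5 context with function pointers already loaded (via glad, GLEW, or similar), and the vertex data is just a placeholder.

    // Placeholder vertex data for the example.
    const float vertices[] = { 0.0f, 0.5f, -0.5f, -0.5f, 0.5f, -0.5f };

    // Pre-4.5 style: the buffer must be bound to a target before it can be
    // filled, which disturbs whatever was bound to GL_ARRAY_BUFFER.
    GLuint buf = 0;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // OpenGL 4.5 direct state access: the object is created and filled by
    // name, without touching any context binding point.
    GLuint bufDSA = 0;
    glCreateBuffers(1, &bufDSA);
    glNamedBufferData(bufDSA, sizeof(vertices), vertices, GL_STATIC_DRAW);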

It also adds a few new optional extensions:

ARB_pipeline_statistics_query lets a developer ask the GPU what it has been doing. This could be useful for "profiling" an application (listing completed work to identify optimization points).

ARB_sparse_buffer allows developers to perform calculations on pieces of generic buffers without loading the whole buffer into memory. This is similar to ARB_sparse_texture... except that those are for textures. Buffers are useful for things like vertex data (and so forth).

ARB_transform_feedback_overflow_query is apparently designed to let developers choose whether or not to draw objects based on whether the buffer is overflowed. I might be wrong, but it seems like this would be useful for deciding whether or not to draw objects generated by geometry shaders.

KHR_blend_equation_advanced allows new blending equations between objects. If you use Photoshop, this would be "multiply", "screen", "darken", "lighten", "difference", and so forth. On NVIDIA's side, this will be directly supported on Maxwell and Tegra K1 (and later). Fermi and Kepler will support the functionality, but the driver will perform the calculations with shaders. AMD has yet to comment, as far as I can tell.
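For a sense of what those blend equations actually compute, here is a small, runnable sketch of the per-channel math behind the modes named above, with color values normalized to the 0..1 range. The real extension also handles premultiplied alpha and requires reading the destination pixel (in hardware on Maxwell and Tegra K1, via shaders on Fermi and Kepler, as noted above); this only shows the color math.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // "Photoshop style" per-channel blend modes: source s over destination d,
    // both normalized to [0, 1].
    double multiply(double s, double d)   { return s * d; }
    double screen(double s, double d)     { return s + d - s * d; } // 1-(1-s)(1-d)
    double darken(double s, double d)     { return std::min(s, d); }
    double lighten(double s, double d)    { return std::max(s, d); }
    double difference(double s, double d) { return std::fabs(s - d); }

    int main() {
        const double s = 0.8, d = 0.4; // arbitrary example channel values
        std::printf("multiply   %.2f\n", multiply(s, d));   // 0.32
        std::printf("screen     %.2f\n", screen(s, d));     // 0.88
        std::printf("darken     %.2f\n", darken(s, d));     // 0.40
        std::printf("lighten    %.2f\n", lighten(s, d));    // 0.80
        std::printf("difference %.2f\n", difference(s, d)); // 0.40
        return 0;
    }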

Image from NVIDIA GTC Presentation

For developers, NVIDIA has launched 340.65 (340.23.01 for Linux) beta drivers. If you are not looking to create OpenGL 4.5 applications, do not get this driver; you really should not have any use for it at all.

Next Generation OpenGL Initiative Announced

The Khronos Group has also announced "a call for participation" to outline a new specification for graphics and compute. They want it to give developers explicit control over CPU and GPU tasks, be multithreaded, have minimal overhead, use a common shader language, and undergo "rigorous conformance testing". This sounds a lot like the design goals of Mantle (and what we know of DirectX 12).

And really, from what I hear and understand, that is what OpenGL needs at this point. Graphics cards look nothing like they did a decade ago (or over two decades ago). They each have very similar interfaces and data structures, even if their fundamental architectures vary greatly. If we can draw a line in the sand, legacy APIs can be supported but not optimized heavily by the drivers. After a short time, available performance for legacy applications would be so high that it wouldn't matter, as long as they continue to run.

Add to that, next-generation drivers should be significantly easier to develop, considering the reduced error checking (and other responsibilities). As I said in Intel's DirectX 12 story, it is still unclear whether it will lead to enough of a performance increase to make most optimizations unnecessary, such as those which add workload or developer effort in exchange for queuing fewer GPU commands. We will need to wait for game developers to use it for a bit before we know.

AMD Catalyst 14.7 Release Candidate 3

Subject: Graphics Cards | August 14, 2014 - 07:20 PM |
Tagged: catalyst 14.7 RC3, beta, amd

A new Catalyst Release Candidate has arrived and, as with the previous driver, it no longer supports Windows 8.0 or the WDDM 1.2 driver model, so please upgrade to Windows 7 or Windows 8.1 before installing.  AMD will eventually release a driver that supports WDDM 1.1 under Windows 8.0 for those who do not upgrade.

Feature Highlights of the AMD Catalyst 14.7 RC3 Driver for Windows

  • Includes all improvements found in the AMD Catalyst 14.7 RC driver
  • Display interface enhancements to improve 4K monitor performance and reduce flickering.
  • Improvements apply to the following products:
    • AMD Radeon R9 290 Series
    • AMD Radeon R9 270 Series
    • AMD Radeon HD 7800 Series
  • Even with these improvements, cable quality and other system variables can affect 4K performance. AMD recommends using DisplayPort 1.2 HBR2 certified cables with a length of 2m (~6 ft) or less when driving 4K monitors.
  • Wildstar: AMD CrossFire profile support
  • Lichdom: Single GPU and Multi-GPU performance enhancements
  • Watch Dogs: Smoother gameplay on single GPU and Multi-GPU configurations

Feature Highlights of the AMD Catalyst 14.7 RC Driver for Windows

  • Includes all improvements found in the AMD Catalyst 14.6 RC driver
    • AMD CrossFire and AMD Radeon Dual Graphics profile update for Plants vs. Zombies
    • Assassin's Creed IV - improved CrossFire scaling (3840x2160 High Settings) up to 93%
    • Collaboration with AOC has identified non-standard display timings as the root cause of 60Hz SST flickering exhibited by the AOC U2868PQU panel on certain AMD Radeon graphics cards.
    • A software workaround has been implemented in the AMD Catalyst 14.7 RC driver to resolve the display timing issues with this display. Users are further encouraged to obtain newer display firmware from AOC that will resolve flickering at its origin.
    • Users are additionally advised to utilize DisplayPort-certified cables to ensure the integrity of the DisplayPort data connection.

Feature Highlights of the AMD Catalyst 14.6 RC Driver for Windows

  • Plants vs. Zombies (Direct3D performance improvements):
    • AMD Radeon R9 290X - 1920x1080 Ultra – improves up to 11%
    • AMD Radeon R9 290X - 2560x1600 Ultra – improves up to 15%
    • AMD Radeon R9 290X CrossFire configuration (3840x2160 Ultra) - 92% scaling
  • 3DMark Sky Diver improvements:
    • AMD A4-6300 – improves up to 4%
    • Enables AMD Dual Graphics/AMD CrossFire support
  • Grid Auto Sport: AMD CrossFire profile
  • Wildstar: Power Xpress profile
    • Performance improvements to improve smoothness of application
    • Performance improves up to 24% at 2560x1600 on the AMD Radeon R9 and R7 Series of products for both single GPU and multi-GPU configurations.
  • Watch Dogs: AMD CrossFire – Frame pacing improvements
  • Battlefield Hardline Beta: AMD CrossFire profile

Known Issues

  • Running Watch Dogs with a R9 280X CrossFire configuration may result in the application running in CrossFire software compositing mode
  • Enabling Temporal SMAA in a CrossFire configuration when playing Watch Dogs will result in flickering
  • AMD CrossFire configurations with AMD Eyefinity enabled will see instability with Battlefield 4 or Thief when running Mantle
  • Catalyst Install Manager text is covered by Express/Custom radio button text
  • Express Uninstall does not remove C:\Program Files\(AMD or ATI) folder

Source: AMD

Intel and Microsoft Show DirectX 12 Demo and Benchmark

Subject: General Tech, Graphics Cards, Processors, Mobile, Shows and Expos | August 13, 2014 - 09:55 PM |
Tagged: siggraph 2014, Siggraph, microsoft, Intel, DirectX 12, directx 11, DirectX

Along with GDC Europe and Gamescom, Siggraph 2014 is going on in Vancouver, BC. There, Intel had a DirectX 12 demo at its booth. The scene, containing 50,000 asteroids, each in its own draw call, was developed with both Direct3D 11 and Direct3D 12 code paths, which could apparently be switched while the demo was running. Intel claims to have measured both power and frame rate.

Variable power to hit a desired frame rate, DX11 and DX12.

The test system is a Surface Pro 3 with an Intel HD 4400 GPU. Doing a bit of digging, this would make it the i5-based Surface Pro 3. Removing another shovel-load of mystery, this would be the Intel Core i5-4300U with two cores, four threads, a 1.9 GHz base clock, up to a 2.9 GHz turbo clock, 3MB of cache, and (of course) based on the Haswell architecture.

While not top-of-the-line, it is also not bottom-of-the-barrel. It is a respectable CPU.

Intel's demo on this processor shows a significant power reduction in the CPU, and even a slight decrease in GPU power, for the same target frame rate. If power is not throttled, the demo goes from 19 FPS in DirectX 11 all the way up to a playable 33 FPS in DirectX 12.

Intel will discuss more during a video interview, tomorrow (Thursday) at 5pm EDT.

Maximum power in DirectX 11 mode.

For my contribution to the story, I would like to address the first comment on the MSDN article. It claims that this is just an "ideal scenario" of a scene that is bottlenecked by draw calls. The thing is: that is the point. Sure, a game developer could optimize the scene to (maybe) instance objects together, and so forth, but that is unnecessary work. Why should programmers, or worse, artists, need to spend so much of their time developing art so that it can be batched together into fewer, bigger commands? Would it not be much easier, and all-around better, if the content could be developed as it most naturally comes together?
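To make that trade-off concrete, the kind of batching being described looks roughly like this in Direct3D 11 terms. This is a hedged sketch, not working code: the Asteroid type, the asteroids container, the device context, and the helper functions UpdatePerObjectConstants and UpdateInstanceBuffer are all placeholders standing in for setup that is omitted here.

    // Naive submission: one draw call per asteroid. Each call pays CPU-side
    // runtime/driver overhead, which is exactly what this demo stresses.
    for (const Asteroid& a : asteroids) {
        UpdatePerObjectConstants(context, a);        // position, rotation, material
        context->DrawIndexed(a.indexCount, 0, 0);
    }

    // Batched submission: per-object data is packed into an instance buffer
    // once, and the entire asteroid field goes out in a single call.
    UpdateInstanceBuffer(context, asteroids);
    context->DrawIndexedInstanced(indexCountPerAsteroid,
                                  static_cast<UINT>(asteroids.size()), 0, 0, 0);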

That, of course, depends on how much performance improvement we will see from DirectX 12, compared to theoretical max efficiency. If pushing two workloads through a DX12 GPU takes about the same time as pushing one, double-sized workload, then it allows developers to, literally, use whatever solution is most direct.

Maximum power when switching to DirectX 12 mode.

If, on the other hand, pushing two workloads is 1000x slower than pushing a single, double-sized one, but DirectX 11 was 10,000x slower, then it could be less relevant because developers will still need to do their tricks in those situations. The closer it gets, the fewer occasions that strict optimization is necessary.

If there are any DirectX 11 game developers, artists, and producers out there, we would like to hear from you. How much would a (let's say) 90% reduction in draw call latency (which is around what Mantle claims) give you, in terms of fewer required optimizations? Can you afford to solve problems "the naive way" now? Some of the time? Most of the time? Would it still be worth it to do things like object instancing and fewer, larger materials and shaders? How often?

To boldly go where no 290X has gone before?

Subject: Graphics Cards | August 13, 2014 - 06:11 PM |
Tagged: factory overclocked, sapphire, R9 290X, Vapor-X R9 290X TRI-X OC

As far as factory overclocks go, the 1080MHz core and 5.64GHz RAM on the new Sapphire Vapor-X 290X are impressive and take the prize for the highest factory overclock [H]ard|OCP has seen on this card yet.  That didn't stop them from pushing it to 1180MHz and 5.9GHz after a little work, which is even more impressive.  At both the factory and manual overclocks the card handily beat the reference model, and the manually overclocked benchmarks could meet or beat the overclocked MSI GTX 780 Ti GAMING 3G OC card.  Speed is not the only good feature: Intelligent Fan Control keeps two of the three fans from spinning when the GPU is under 60C, which vastly reduces the noise this card produces.  It is currently selling for $646, lower than the $710 that the GeForce is currently selling for as well.
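The semi-passive fan behaviour described above is essentially threshold logic. Here is a hypothetical sketch of how that kind of control typically works; the 60C spin-up point comes from the description above, but the hysteresis margin is an assumption, since Sapphire does not document its exact behaviour.

    // Hypothetical threshold-based fan control with hysteresis: below the low
    // threshold the two extra fans stay parked, above the high threshold they
    // spin up, and in between they keep their previous state to avoid rapid
    // on/off cycling.
    bool UpdateExtraFans(double gpuTempC, bool fansRunning) {
        const double spinUpTempC   = 60.0; // stated threshold for the Vapor-X card
        const double spinDownTempC = 55.0; // assumed hysteresis margin
        if (gpuTempC >= spinUpTempC)   return true;   // spin the extra fans up
        if (gpuTempC <= spinDownTempC) return false;  // park them again
        return fansRunning;                           // otherwise keep current state
    }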

"We take a look at the SAPPHIRE Vapor-X R9 290X TRI-X OC video card which has the highest factory overclock we've ever encountered on any AMD R9 290X video card. This video card is feature rich and very fast. We'll overclock it to the highest GPU clocks we've seen yet on R9 290X and compare it to the competition."

Source: [H]ard|OCP

Author:
Manufacturer: ASUS

The Waiting Game

NVIDIA G-Sync was announced at a media event held in Montreal way back in October, and promised to revolutionize the way the display and graphics card worked together to present images on the screen. It was designed to remove hitching, stutter, and tearing -- almost completely. Since that fateful day in October of 2013, we have been waiting. Patiently waiting. We were waiting for NVIDIA and its partners to actually release a monitor that utilizes the technology and that can, you know, be purchased.

In December of 2013 we took a look at the ASUS VG248QE monitor, the display for which NVIDIA released a mod kit to allow users that already had this monitor to upgrade to G-Sync compatibility. It worked, and I even came away impressed. I noted in my conclusion that, “there isn't a single doubt that I want a G-Sync monitor on my desk” and, “my short time with the NVIDIA G-Sync prototype display has been truly impressive…”. That was nearly 7 months ago and I don’t think anyone at that time really believed it would be THIS LONG before the real monitors began to show up in the hands of gamers around the world.

Since NVIDIA’s October announcement, AMD has been on a marketing path with a technology they call “FreeSync” that claims to be a cheaper, standards-based alternative to NVIDIA G-Sync. They first previewed the idea of FreeSync on a notebook device during CES in January and then showed off a prototype monitor in June during Computex. Even more recently, AMD has posted a public FAQ that gives more details on the FreeSync technology and how it differs from NVIDIA’s creation; it has raised something of a stir with its claims on performance and cost advantages.

That doesn’t change the product that we are reviewing today of course. The ASUS ROG Swift PG278Q 27-in WQHD display with a 144 Hz refresh rate is truly an awesome monitor. What did change is the landscape, from NVIDIA's original announcement until now.

Continue reading our review of the ASUS ROG Swift PG278Q 2560x1440 G-Sync Monitor!!