A few days with some magic monitors
Last month, friend of the site and technology enthusiast Tom Petersen, who apparently does SOMETHING at NVIDIA, stopped by our offices to talk about G-Sync technology. A variable refresh rate feature added to new monitors through custom NVIDIA hardware, G-Sync has been a frequent topic of discussion here on PC Perspective.
The first monitor to ship with G-Sync is the ASUS ROG Swift PG278Q - a fantastic 2560x1440 27-in monitor with a 144 Hz maximum refresh rate. I recently wrote a glowing review of the display here, with the only real negative being its high price tag: $799. But when Tom stopped out to talk about the G-Sync retail release, he happened to leave a set of three of these new displays for us to mess with in a G-Sync Surround configuration. Yummy.
So what exactly is the current experience of using a triple G-Sync monitor setup if you were lucky enough to pick up a set? The truth is that the G-Sync portion of the equation works great but that game support for Surround (or Eyefinity for that matter) is still somewhat cumbersome.
In this quick impressions article I'll walk through the setup and configuration of the system and tell you about my time playing seven different PC titles in G-Sync Surround.
Subject: General Tech, Graphics Cards | September 3, 2014 - 09:28 PM | Scott Michaud
Tagged: amd, R9, r9 295x2, price cut
While not fully in effect yet, AMD is cutting $500 off of the R9 295X2 price tag, bringing it to $999 USD. Currently, there are two models available on Newegg USA at the reduced price, and one at Amazon for $1200. We expect to see other SKUs reduced soon as well. This puts the water-cooled R9 295X2 just below the cost of two air-cooled R9 290X graphics cards.
If you were interested in this card, now might be the time (if one of the reduced units is available).
Subject: General Tech, Graphics Cards | September 3, 2014 - 06:15 PM | Scott Michaud
Tagged: Matrox, firepro, cape verde xt gl, cape verde xt, cape verde, amd
Matrox, along with S3, develops GPU ASICs for use with desktop add-in boards, alongside AMD and NVIDIA. Last year, they sold fewer than 7000 units in a quarter, according to my math (rounding to 0.0% market share implies less than 0.05% of the total market, which was about 7000 units that quarter). Today, Matrox Graphics Inc. announced that they will use an AMD GPU on their upcoming product line.
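For the curious, that estimate falls out of simple rounding arithmetic; a quick sketch, assuming a total add-in-board market of roughly 14 million units that quarter (my working figure, not JPR's exact number):

```python
# A reported share of "0.0%" (rounded to one decimal place) implies the
# true share was below 0.05%.
total_units = 14_000_000   # assumed total add-in boards shipped that quarter
max_share = 0.0005         # 0.05%, the rounding threshold for "0.0%"

max_units = total_units * max_share
print(int(max_units))  # 7000
```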
While they do not mention a specific processor, they note that "the selected AMD GPU" will be manufactured on a 28nm process with 1.5 billion transistors. It will support DirectX 11.2, OpenGL 4.4, and OpenCL 1.2, and it will have a 128-bit memory bus.
Basically, it kind-of has to be Cape Verde XT (or XT GL) unless it is a new, unannounced GPU.
If it is Cape Verde XT, it would have about 1.0 to 1.2 TFLOPS of single-precision performance (depending on the chosen clock rate). Whatever clock rate is chosen, the chip contains 640 stream processors. It was first released in February 2012 with the Radeon HD 7770 GHz Edition. Again, this is assuming that AMD will not release a GPU refresh for that category.
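That TFLOPS range comes straight from the shader count: each stream processor performs a fused multiply-add (two floating point operations) per clock. A sketch, with the clock range being my assumption about what Matrox might choose:

```python
# Peak single precision = shaders x 2 ops (one FMA) x clock
shaders = 640                       # Cape Verde XT stream processors
for clock_ghz in (0.80, 0.95):      # plausible clock range (assumed)
    tflops = shaders * 2 * clock_ghz / 1000
    print(f"{clock_ghz} GHz -> {tflops:.2f} TFLOPS")
```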
Matrox will provide their PowerDesk software to configure multiple monitors, and it will work alongside AMD's professional graphics drivers. It is sad to see a GPU ASIC manufacturer throw in the towel, at least temporarily, but hopefully they can use AMD's technology to remain in the business with competitive products. Who knows: maybe they will make a return when future graphics APIs reduce the burden of driver and product development?
Tonga GPU Features
On December 22, 2011, AMD launched its first 28nm GPU, based on an architecture called GCN and built on silicon code-named Tahiti. That was the release of the Radeon HD 7970, and it was the beginning of an incredibly long adventure for PC enthusiasts and gamers. We eventually saw the HD 7970 GHz Edition and the R9 280/280X releases, all based on essentially identical silicon, keeping a spot in the market for nearly 3 years. Today AMD is launching the Tonga GPU and the Radeon R9 285, a new piece of silicon that shares many traits with Tahiti but adds support for some additional features.
Replacing the Radeon R9 280 in the current product stack, the R9 285 will step in at $249, essentially the same price. Buyers will be treated to an updated feature set, though, including options that were previously available only on the R9 290 and R9 290X (and R7 260X). These include TrueAudio, FreeSync, XDMA CrossFire and PowerTune.
Many people have been calling this architecture GCN 1.1, though AMD internally doesn't have a moniker for it. The move from Tahiti, to Hawaii and now to Tonga reveals a new design philosophy from AMD, one of smaller and more gradual steps forward as opposed to sudden, massive improvements in specifications. Whether this change was self-imposed or a result of the slowing of process technology advancement is really a matter of opinion.
Subject: Graphics Cards | August 25, 2014 - 03:57 PM | Jeremy Hellstrom
Tagged: pny, gtx 780, gtx 780 ti, Customized OC, factory overclocked
PNY is not as heavily marketed as some GPU resellers in North America, but that doesn't mean they are not hard at work designing custom cards. Hardware Canucks recently tried out the Customized OC GTX 780 and 780 Ti, testing both at the factory overclock and pushed to their limits with manual overclocking. Using EVGA's Precision overclocking tools, they pushed the GTX 780 to 1120 MHz core and 6684 MHz RAM, and the Ti to an impressive 1162 MHz core and 7800 MHz RAM. Read on to see how effective the custom cooler proved to be, as it is also a major part of the Customized series.
"PNY's latest Customized series will be rolling through their GTX 780 and GTX 780 Ti lineups, bringing high end cooling and increased performance."
Here are some more Graphics Card articles from around the web:
- PNY GeForce GTX 760 XLR8 OC @ [H]ard|OCP
- Gigabyte GeForce 750Ti Ultra Durable BLACK EDITION Video Card Review @ Madshrimps
- Diamond UGA USB 3.0/2.0 to DVI/HDMI/VGA Adapter Review @ OCC
- Graphics Card Overclocking Guide Featuring The AMD Gigabyte R9 270 @ eTeknix
- Sapphire R7 260X (100366-3L) Video Card Review @ Modders-Inc
- HIS R9 290 iPower IceQ X2 OC 4GB GDDR5 Video Card Review @ Madshrimps
Subject: Graphics Cards | August 23, 2014 - 10:46 AM | Ryan Shrout
Tagged: radeon, r9 285, R9, amd, 285
Today during AMD's live stream event celebrating 30 years of graphics and gaming, the company spent a bit of time announcing and teasing new graphics cards, the Radeon R9 285X and R9 285. The cards are likely based on the Tonga GPU die; the specifications haven't been confirmed, but most believe the chip will feature 2048 stream processors, 128 texture units, 32 ROPs and a 256-bit memory bus.
To help raise money for the Child's Play charity, AMD currently has a Radeon R9 285 listed on eBay. The listing shows an ASUS-built, Strix-style cooled retail card, with 2GB of memory being the only specification visible on the box.
The R9 285X and R9 285 will more than likely replace the R9 280X and R9 280, and we should see them shipping and available in very early September.
UPDATE: AMD showed specifications of the Radeon R9 285 during the live stream.
For those of you with eyes as bad as mine, here are the finer points:
- 1,792 Stream Processors
- 918 MHz GPU Clock
- 3.29 TFLOPS peak performance
- 112 Texture units
- 32 ROPs
- 2GB GDDR5
- 256-bit memory bus
- 5.5 GHz memory clock
- 2x 6-pin power connectors
- 190 watt TDP
- $249 MSRP
- Release date: September 2nd
These Tonga GPU specifications are VERY similar to those of the R9 280: 1,792 stream processors, 112 texture units, etc. However, the R9 280 has a wider 384-bit memory bus, though it runs at a 500 MHz lower effective memory frequency. Clock speeds on Tonga look to be just slightly lower as well. Maybe most interesting is the frame buffer size drop from 3GB to 2GB.
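The listed numbers check out with some quick arithmetic (peak FLOPS is shaders x 2 ops x clock, and bandwidth is bus width in bytes x effective memory clock); the R9 280 figures below are from its public specs:

```python
# Radeon R9 285 (Tonga) peak compute: shaders x 2 ops (one FMA) x GHz
tflops = 1792 * 2 * 0.918 / 1000
print(f"{tflops:.2f} TFLOPS")          # 3.29 TFLOPS, matching AMD's slide

# Memory bandwidth: bus width (bytes) x effective clock (GHz) = GB/s
r9_285_bw = 256 / 8 * 5.5              # 176 GB/s
r9_280_bw = 384 / 8 * 5.0              # 240 GB/s: wider bus, slower memory
print(r9_285_bw, r9_280_bw)
```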
That's all we have for now, but I expect we'll have our samples in very soon and expect a full review shortly!
UPDATE 2: Apparently AMD hasn't said anything about the Radeon R9 285X, so for the time being, that still falls under the "rumor" category. I'm sure we'll know more soon though.
Subject: Graphics Cards, Displays | August 22, 2014 - 08:05 PM | Ryan Shrout
Tagged: video, gsync, g-sync, tom petersen, nvidia, geforce
Earlier today we had NVIDIA's Tom Petersen in studio to discuss the retail availability of G-Sync monitors as well as to get hands on with a set of three ASUS ROG Swift PG278Q monitors running in G-Sync Surround! It was truly an impressive sight and if you missed any of it, you can catch the entire replay right here.
Even if seeing the ASUS PG278Q monitor again doesn't interest you (we have our full review of the monitor right here), you won't want to miss the very detailed Q&A, which answers quite a few reader questions about the technology. Covered items include:
- Potential added latency of G-Sync
- Future needs for multiple DP connections on GeForce GPUs
- Upcoming 4K and 1080p G-Sync panels
- Can G-Sync Surround work through an MST Hub?
- What happens to G-Sync when the frame rate exceeds the panel refresh rate? Or drops below minimum refresh rate?
- What does that memory on the G-Sync module actually do??
- A demo of the new NVIDIA SHIELD Tablet capabilities
- A whole lot more!
Another big thank you to NVIDIA and Tom Petersen for stopping out our way and for spending the time to discuss these topics with our readers. Stay tuned here at PC Perspective as we will have more thoughts and reactions to G-Sync Surround very soon!!
Subject: Graphics Cards, Displays, Mobile | August 21, 2014 - 05:23 PM | Ryan Shrout
Tagged: nvidia, video, live, shield, shield tablet, g-sync, gsync, tom petersen
Tomorrow at 12pm EDT / 9am PDT, NVIDIA's Tom Petersen will be stopping by the PC Perspective office to discuss some topics of interest. There has been no lack of topics floating around the world of graphics cards, displays, refresh rates and tablets recently, and I expect the show tomorrow to be incredibly interesting and educational.
On hand we'll be doing demonstrations of G-Sync Surround (3 panels!) with the ASUS ROG Swift PG278Q display (our review here) and also show off the SHIELD Tablet (we have a review of that too) with some multiplayer action. If you thought the experience with a single G-Sync monitor was impressive, you will want to hear what a set of three of them can be like.
NVIDIA Live Stream with Tom Petersen
9am PT / 12pm ET - August 22nd
The topic list is going to include (but is not limited to):
- ASUS PG278Q G-Sync monitor
- G-Sync availability and pricing
- G-Sync Surround setup, use and requirements
- Technical issues surrounding G-Sync: latency, buffers, etc.
- Comparisons of G-Sync to Adaptive Sync
- SHIELD Tablet game play
But we want your questions! Do you have burning issues that you think need to be addressed by Tom and the NVIDIA team about G-Sync, FreeSync, GameWorks, Tegra, tablets, GPUs and more? Nothing is off limits here, though obviously Tom may be cagey on future announcements. Please use the comments section on this news post below (registration not required) to ask your questions and we can organize them before the event tomorrow. We MIGHT even be able to come up with a couple of prizes to give away for live viewers as well...
See you tomorrow!!
Subject: General Tech, Graphics Cards | August 19, 2014 - 12:30 PM | Jeremy Hellstrom
Tagged: jon peddie, gpu market share, q2 2014
Jon Peddie Research's latest Market Watch adds even more ironic humour to the media's continuing proclamations of the impending doom of the PC industry. This quarter saw tablet sales decline while overall PC sales were up, and that was without any major releases to drive purchasers to adopt new technology. While JPR does touch on the overall industry, this report focuses on the sale of GPUs and APUs and happens to contain some great news for AMD: they saw their share increase by 11% over last quarter, which works out to just over a percentage point of the entire market. Intel saw a small rise in share as well, and it still holds the majority of the market, as PCs with no discrete GPU are more likely to contain Intel's chips than AMD's. That leaves NVIDIA, who are still banking solely on discrete GPUs and saw more than an 8% decline from last quarter, a drop of almost two percentage points in the total market. Check out the other graphs in JPR's overview right here.
"The big drop in graphics shipments in Q1 has been partially offset by a small rise this quarter. Shipments were up 3.2% quarter-to-quarter, and down 4.5% compared to the same quarter last year."
Here is some more Tech News from around the web:
- Open Source GPU Released @ Hack a Day
- BlackBerry slices off juiciest bits, bottles them in 'Tech Solutions' @ The Register
- LinuxCon and CloudOpen This Week in Chicago @ Linux.com
Subject: General Tech, Graphics Cards, Shows and Expos | August 15, 2014 - 08:33 PM | Scott Michaud
Tagged: siggraph 2014, Siggraph, OpenGL Next, opengl 4.5, opengl, nvidia, Mantle, Khronos, Intel, DirectX 12, amd
Let's be clear: there are two stories here. The first is the release of OpenGL 4.5 and the second is the announcement of the "Next Generation OpenGL Initiative". They both appear in the same press release, but they are two different statements.
OpenGL 4.5 Released
OpenGL 4.5 expands the core specification with a few extensions, and compatible hardware with OpenGL 4.5 drivers is guaranteed to support them. This includes features like direct_state_access, which allows modifying objects without binding them to a context, and support for OpenGL ES 3.1 features that are traditionally missing from OpenGL 4, which allows easier porting of OpenGL ES 3.1 applications to desktop OpenGL.
It also adds a few new extensions as an option:
ARB_pipeline_statistics_query lets a developer ask the GPU what it has been doing. This could be useful for "profiling" an application (list completed work to identify optimization points).
ARB_sparse_buffer allows developers to perform calculations on pieces of generic buffers without loading them entirely into memory. This is similar to ARB_sparse_texture... except that extension is for textures. Buffers are useful for things like vertex data (and so forth).
ARB_transform_feedback_overflow_query is apparently designed to let developers choose whether or not to draw objects based on whether the buffer is overflowed. I might be wrong, but it seems like this would be useful for deciding whether or not to draw objects generated by geometry shaders.
KHR_blend_equation_advanced allows new blending equations between objects. If you use Photoshop, this would be "multiply", "screen", "darken", "lighten", "difference", and so forth. On NVIDIA's side, this will be directly supported on Maxwell and Tegra K1 (and later). Fermi and Kepler will support the functionality, but the driver will perform the calculations with shaders. AMD has yet to comment, as far as I can tell.
Image from NVIDIA GTC Presentation
If you are a developer, NVIDIA has launched 340.65 (340.23.01 for Linux) beta drivers for developers. If you are not looking to create OpenGL 4.5 applications, do not get this driver. You really should not have any use for it, at all.
Next Generation OpenGL Initiative Announced
The Khronos Group has also announced "a call for participation" to outline a new specification for graphics and compute. They want it to allow developers explicit control over CPU and GPU tasks, be multithreaded, have minimal overhead, have a common shader language, and "rigorous conformance testing". This sounds a lot like the design goals of Mantle (and what we know of DirectX 12).
And really, from what I hear and understand, that is what OpenGL needs at this point. Graphics cards look nothing like they did a decade ago (or over two decades ago). They each have very similar interfaces and data structures, even if their fundamental architectures vary greatly. If we can draw a line in the sand, legacy APIs can be supported but not optimized heavily by the drivers. After a short time, available performance for legacy applications would be so high that it wouldn't matter, as long as they continue to run.
On top of that, next-generation drivers should be significantly easier to develop, considering the reduced error checking (and other responsibilities). As I said in Intel's DirectX 12 story, it is still unclear whether the new API will lead to enough of a performance increase to make most current optimizations, such as those which increase workload or developer effort in exchange for queuing fewer GPU commands, unnecessary. We will need to wait for game developers to use it for a while before we know.