Subject: General Tech, Graphics Cards | August 19, 2014 - 09:30 AM | Jeremy Hellstrom
Tagged: jon peddie, gpu market share, q2 2014
Jon Peddie Research's latest Market Watch adds even more ironic humour to the media's continuing proclamations of the impending doom of the PC industry. This quarter saw tablet sales decline while overall PC sales were up, and that was without any major releases to drive purchasers to adopt new technology. While JPR does touch on the overall industry, this report focuses on the sale of GPUs and APUs and happens to contain some great news for AMD. Their shipments increased 11% from last quarter, which translates to a gain of just over a percentage point of the entire market. Intel saw a small rise in share and still holds the majority of the market, as PCs with no discrete GPU are more likely to contain Intel's chips than AMD's. That leaves NVIDIA, which is still banking solely on discrete GPUs and saw over an 8% decline from last quarter, a drop of almost two percentage points in the total market. Check out the other graphs in JPR's overview right here.
"The big drop in graphics shipments in Q1 has been partially offset by a small rise this quarter. Shipments were up 3.2% quarter-to-quarter, and down 4.5% compared to the same quarter last year."
Here is some more Tech News from around the web:
- Open Source GPU Released @ Hack a Day
- BlackBerry slices off juiciest bits, bottles them in 'Tech Solutions' @ The Register
- LinuxCon and CloudOpen This Week in Chicago @ Linux.com
Subject: General Tech, Graphics Cards, Shows and Expos | August 15, 2014 - 05:33 PM | Scott Michaud
Tagged: siggraph 2014, Siggraph, OpenGL Next, opengl 4.5, opengl, nvidia, Mantle, Khronos, Intel, DirectX 12, amd
Let's be clear: there are two stories here. The first is the release of OpenGL 4.5 and the second is the announcement of the "Next Generation OpenGL Initiative". They both appear in the same press release, but they are two different announcements.
OpenGL 4.5 Released
OpenGL 4.5 expands the core specification with a few extensions. Compatible hardware, with OpenGL 4.5 drivers, is guaranteed to support these. This includes features like direct_state_access, which allows modifying objects without first binding them to the context, and support for OpenGL ES 3.1 features that are traditionally missing from OpenGL 4, which allows easier porting of OpenGL ES 3.1 applications to desktop OpenGL.
It also adds a few new extensions as an option:
ARB_pipeline_statistics_query lets a developer ask the GPU what it has been doing. This could be useful for "profiling" an application (list completed work to identify optimization points).
ARB_sparse_buffer allows developers to perform calculations on pieces of generic buffers without keeping the entire buffer resident in memory. This is similar to ARB_sparse_texture... except that that extension is for textures. Buffers are useful for things like vertex data (and so forth).
ARB_transform_feedback_overflow_query is apparently designed to let developers choose whether or not to draw objects based on whether the buffer is overflowed. I might be wrong, but it seems like this would be useful for deciding whether or not to draw objects generated by geometry shaders.
KHR_blend_equation_advanced allows new blending equations between objects. If you use Photoshop, this would be "multiply", "screen", "darken", "lighten", "difference", and so forth. On NVIDIA's side, this will be directly supported on Maxwell and Tegra K1 (and later). Fermi and Kepler will support the functionality, but the driver will perform the calculations with shaders. AMD has yet to comment, as far as I can tell.
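The Photoshop-style modes named above boil down to simple per-channel arithmetic. As a rough sketch of what KHR_blend_equation_advanced computes in hardware (or, on Fermi and Kepler, in shaders), here are the equations on normalized color channels; this is illustrative Python, not the extension's actual GLSL or API:

```python
# Per-channel blend equations for the advanced blend modes named above.
# s = source channel, d = destination channel, both normalized to [0, 1].
def multiply(s, d):
    return s * d                # darkens: white (1.0) is the identity

def screen(s, d):
    return s + d - s * d        # lightens: black (0.0) is the identity

def darken(s, d):
    return min(s, d)            # keeps the darker of the two channels

def lighten(s, d):
    return max(s, d)            # keeps the lighter of the two channels

def difference(s, d):
    return abs(s - d)           # distance between the two channels
```

Before the extension, getting these modes between arbitrary objects generally required rendering to a texture and doing a second pass, which is part of why direct hardware support on Maxwell and Tegra K1 is notable.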
Image from NVIDIA GTC Presentation
If you are a developer, NVIDIA has launched 340.65 (340.23.01 for Linux) beta drivers for developers. If you are not looking to create OpenGL 4.5 applications, do not get this driver. You really should not have any use for it, at all.
Next Generation OpenGL Initiative Announced
The Khronos Group has also announced "a call for participation" to outline a new specification for graphics and compute. They want it to allow developers explicit control over CPU and GPU tasks, be multithreaded, have minimal overhead, use a common shader language, and undergo "rigorous conformance testing". This sounds a lot like the design goals of Mantle (and what we know of DirectX 12).
And really, from what I hear and understand, that is what OpenGL needs at this point. Graphics cards look nothing like they did a decade ago (or over two decades ago). They each have very similar interfaces and data structures, even if their fundamental architectures vary greatly. If we can draw a line in the sand, legacy APIs can be supported but not optimized heavily by the drivers. After a short time, available performance for legacy applications would be so high that it wouldn't matter, as long as they continue to run.
Add to that, next-generation drivers should be significantly easier to develop, considering the reduced error checking (and other responsibilities). As I said in Intel's DirectX 12 story, it is still unclear whether it will lead to enough of a performance increase to make most optimizations, such as those which increase workload or developer effort in exchange for queuing fewer GPU commands, unnecessary. We will need to wait for game developers to use it for a while before we know.
Subject: Graphics Cards | August 14, 2014 - 04:20 PM | Jeremy Hellstrom
Tagged: catalyst 14.7 RC3, beta, amd
A new Catalyst Release Candidate has arrived and, as with the previous driver, it no longer supports Windows 8.0 or the WDDM 1.2 driver model, so upgrade to Win 7 or Win 8.1 before installing. For those who do not upgrade, AMD will eventually release a driver that supports WDDM 1.1 under Win 8.0.
Feature Highlights of the AMD Catalyst 14.7 RC3 Driver for Windows
- Includes all improvements found in the AMD Catalyst 14.7 RC driver
- Display interface enhancements to improve 4k monitor performance and reduce flickering.
- Improvements apply to the following products:
- AMD Radeon R9 290 Series
- AMD Radeon R9 270 Series
- AMD Radeon HD 7800 Series
- Even with these improvements, cable quality and other system variables can affect 4k performance. AMD recommends using DisplayPort 1.2 HBR2 certified cables with a length of 2m (~6 ft) or less when driving 4K monitors.
- Wildstar: AMD Crossfire profile support
- Lichdom: Single GPU and Multi-GPU performance enhancements
- Watch Dogs: Smoother gameplay on single GPU and Multi-GPU configurations
Feature Highlights of the AMD Catalyst 14.7 RC Driver for Windows
- Includes all improvements found in the AMD Catalyst 14.6 RC driver
- AMD CrossFire and AMD Radeon Dual Graphics profile update for Plants vs. Zombies
- Assassin's Creed IV - improved CrossFire scaling (3840x2160 High Settings) up to 93%
- Collaboration with AOC has identified non-standard display timings as the root cause of 60Hz SST flickering exhibited by the AOC U2868PQU panel on certain AMD Radeon graphics cards.
- A software workaround has been implemented in AMD Catalyst 14.7 RC driver to resolve the display timing issues with this display. Users are further encouraged to obtain newer display firmware from AOC that will resolve flickering at its origin.
- Users are additionally advised to utilize DisplayPort-certified cables to ensure the integrity of the DisplayPort data connection.
Feature Highlights of the AMD Catalyst 14.6 RC Driver for Windows
- Plants vs. Zombies (Direct3D performance improvements):
- AMD Radeon R9 290X - 1920x1080 Ultra – improves up to 11%
- AMD Radeon R9 290X - 2560x1600 Ultra – improves up to 15%
- AMD Radeon R9 290X CrossFire configuration (3840x2160 Ultra) - 92% scaling
- 3DMark Sky Diver improvements:
- AMD A4-6300 – improves up to 4%
- Enables AMD Dual Graphics/AMD CrossFire support
- Grid Auto Sport: AMD CrossFire profile
- Wildstar: Power Xpress profile
- Performance improvements to improve the smoothness of the application
- Performance improves up to 24% at 2560x1600 on the AMD Radeon R9 and R7 Series of products for both single GPU and multi-GPU configurations.
- Watch Dogs: AMD CrossFire – Frame pacing improvements
- Battlefield Hardline Beta: AMD CrossFire profile
Known Issues
- Running Watch Dogs with a R9 280X CrossFire configuration may result in the application running in CrossFire software compositing mode
- Enabling Temporal SMAA in a CrossFire configuration when playing Watch Dogs will result in flickering
- AMD CrossFire configurations with AMD Eyefinity enabled will see instability with Battlefield 4 or Thief when running Mantle
- Catalyst Install Manager text is covered by Express/Custom radio button text
- Express Uninstall does not remove the C:\Program Files\(AMD or ATI) folder
Subject: General Tech, Graphics Cards, Processors, Mobile, Shows and Expos | August 13, 2014 - 06:55 PM | Scott Michaud
Tagged: siggraph 2014, Siggraph, microsoft, Intel, DirectX 12, directx 11, DirectX
Along with GDC Europe and Gamescom, Siggraph 2014 is going on in Vancouver, BC. At it, Intel had a DirectX 12 demo at their booth. This scene, containing 50,000 asteroids, each in its own draw call, was developed with both Direct3D 11 and Direct3D 12 code paths and could apparently be switched while the demo is running. Intel claims to have measured both power and frame rate.
Variable power to hit a desired frame rate, DX11 and DX12.
The test system is a Surface Pro 3 with an Intel HD 4400 GPU. Doing a bit of digging, this would make it the i5-based Surface Pro 3. Removing another shovel-load of mystery, this would be the Intel Core i5-4300U with two cores, four threads, a 1.9 GHz base clock, up to a 2.9 GHz turbo clock, 3MB of cache, and (of course) based on the Haswell architecture.
While not top-of-the-line, it is also not bottom-of-the-barrel. It is a respectable CPU.
Intel's demo on this processor shows a significant power reduction in the CPU, and even a slight decrease in GPU power, for the same target frame rate. If power was not throttled, Intel's demo goes from 19 FPS all the way up to a playable 33 FPS.
Intel will discuss more during a video interview, tomorrow (Thursday) at 5pm EDT.
Maximum power in DirectX 11 mode.
For my contribution to the story, I would like to address the first comment on the MSDN article. It claims that this is just an "ideal scenario" of a scene that is bottlenecked by draw calls. The thing is: that is the point. Sure, a game developer could optimize the scene to (maybe) instance objects together, and so forth, but that is unnecessary work. Why should programmers, or worse, artists, need to spend so much of their time developing art so that it could be batched together into fewer, bigger commands? Would it not be much easier, and all-around better, if the content could be developed as it most naturally comes together?
That, of course, depends on how much performance improvement we will see from DirectX 12, compared to theoretical maximum efficiency. If pushing two workloads through a DX12 GPU takes about the same time as pushing one double-sized workload, then it allows developers to, literally, implement whatever solution is most direct.
Maximum power when switching to DirectX 12 mode.
If, on the other hand, pushing two workloads is 1000x slower than pushing a single, double-sized one, but DirectX 11 was 10,000x slower, then it matters less, because developers will still need to do their tricks in those situations. The closer the two get, the fewer occasions where strict optimization is necessary.
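The overhead argument above can be made concrete with a toy cost model: per-frame time is roughly (draw calls x per-call CPU overhead) plus the GPU work itself. The overhead figures below are invented for illustration, not measured numbers from Intel's demo:

```python
def frame_time_ms(draw_calls, per_call_overhead_ms, gpu_work_ms):
    """Toy model: CPU submission cost scales with draw calls; GPU work is fixed."""
    return draw_calls * per_call_overhead_ms + gpu_work_ms

# 50,000 draw calls, as in Intel's asteroid demo. Overhead values are made up:
# the point is only the shape of the curve, not the absolute numbers.
dx11 = frame_time_ms(50_000, 0.001, 10.0)    # 1 us per call   -> 60 ms/frame
dx12 = frame_time_ms(50_000, 0.0001, 10.0)   # 0.1 us per call -> 15 ms/frame
```

In this sketch, a 10x reduction in per-call overhead takes the frame from submission-bound to mostly GPU-bound, which is the regime where batching and instancing stop being mandatory.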
If there are any DirectX 11 game developers, artists, and producers out there, we would like to hear from you. How much would a (let's say) 90% reduction in draw call latency (which is around what Mantle claims) give you, in terms of fewer required optimizations? Can you afford to solve problems "the naive way" now? Some of the time? Most of the time? Would it still be worth it to do things like object instancing and fewer, larger materials and shaders? How often?
Subject: Graphics Cards | August 13, 2014 - 03:11 PM | Jeremy Hellstrom
Tagged: factory overclocked, sapphire, R9 290X, Vapor-X R9 290X TRI-X OC
As far as factory overclocks go, the 1080MHz core and 5.64GHz memory on the new Sapphire Vapor-X 290X are impressive, taking the prize for the highest factory overclock on this card [H]ard|OCP has seen yet. That didn't stop them from pushing it to 1180MHz and 5.9GHz after a little work, which is even more impressive. At both the factory and manual overclocks the card handily beat the reference model, and the manually overclocked benchmarks met or beat the overclocked MSI GTX 780 Ti GAMING 3G OC card. Speed is not the only good feature: Intelligent Fan Control keeps two of the three fans from spinning when the GPU is under 60C, which vastly reduces the noise this card produces. It is currently selling for $646, lower than the $710 the GeForce currently commands.
"We take a look at the SAPPHIRE Vapor-X R9 290X TRI-X OC video card which has the highest factory overclock we've ever encountered on any AMD R9 290X video card. This video card is feature rich and very fast. We'll overclock it to the highest GPU clocks we've seen yet on R9 290X and compare it to the competition."
Here are some more Graphics Card articles from around the web:
- Sapphire Radeon R7 260X CrossFire Review @HiTech Legion
- ASUS Radeon R9 270X DirectCU II TOP @ [H]ard|OCP
- ASUS R9 270 Direct CU II OC 2 GB Video Card Review @ Madshrimps
- EKWB ASUS GTX 780 Ti DCII OC Full Cover Water Block Review @ Madshrimps
- Zotac GTX 750 Zone Edition @ Hardware Heaven
- Palit GTX 750 Ti KalmX 2 GB @ techPowerUp
- Palit GTX750 Ti KalmX @ Kitguru
- PNY GTX 780 Ti XLR8 OC Single & SLI Review @ Hardware Canucks
- Gigabyte GeForce GTX Titan Black GHz Edition @ X-bit Labs
- ASUS Republic of Gamers Striker Platinum GTX 760 4GB SLI @ eTeknix
- PNY GTX 750 Ti XLR8 OC @ [H]ard|OCP
The Waiting Game
NVIDIA G-Sync was announced at a media event held in Montreal way back in October, and promised to revolutionize the way the display and graphics card work together to present images on the screen. It was designed to almost completely remove hitching, stutter, and tearing. Since that fateful day in October of 2013, we have been waiting. Patiently waiting. We were waiting for NVIDIA and its partners to actually release a monitor that utilizes the technology and that can, you know, be purchased.
In December of 2013 we took a look at the ASUS VG248QE monitor, the display for which NVIDIA released a mod kit to allow users that already had this monitor to upgrade to G-Sync compatibility. It worked, and I even came away impressed. I noted in my conclusion that, “there isn't a single doubt that I want a G-Sync monitor on my desk” and, “my short time with the NVIDIA G-Sync prototype display has been truly impressive…”. That was nearly 7 months ago and I don’t think anyone at that time really believed it would be THIS LONG before the real monitors began to show in the hands of gamers around the world.
Since NVIDIA’s October announcement, AMD has been on a marketing path with a technology they call “FreeSync” that claims to be a cheaper, standards-based alternative to NVIDIA G-Sync. They first previewed the idea of FreeSync on a notebook device during CES in January and then showed off a prototype monitor in June during Computex. Even more recently, AMD has posted a public FAQ that gives more details on the FreeSync technology and how it differs from NVIDIA’s creation; it has raised something of a stir with its claims on performance and cost advantages.
That doesn’t change the product that we are reviewing today of course. The ASUS ROG Swift PG278Q 27-in WQHD display with a 144 Hz refresh rate is truly an awesome monitor. What did change is the landscape, from NVIDIA's original announcement until now.
Subject: General Tech, Graphics Cards | August 6, 2014 - 10:34 AM | Jeremy Hellstrom
Tagged: radeon, Gallium3D, catalyst 14.6 Beta, linux, ubuntu 14.04
The new Gallium3D code is up against the closed-source Catalyst 14.6 Beta, running under Ubuntu 14.04 on both the 3.14 and 3.16 Linux kernels, giving Phoronix quite a bit of testing to do. They have numerous cards in their test, ranging from an HD 6770 to an R9 290, though unfortunately there are no Gallium3D results for the R9 290 as it will not function until the release of the Linux 3.17 kernel. Overall the 14.6 Beta still remains the best performer, but the open source alternative is quickly closing the gap.
"After last week running new Nouveau vs. NVIDIA proprietary Linux graphics benchmarks, here's the results when putting AMD's hardware on the test bench and running both their latest open and closed-source drivers. Up today are the results of using the latest Radeon Gallium3D graphics code and Linux kernel against the latest beta of the binary-only Catalyst driver."
Here is some more Tech News from around the web:
- Phison announces new quad-core SATA 6Gb/s SSD controller chip @ DigiTimes
- Microsoft KILLS Windows 8.1 Update 2 and Patch Tuesday @ The Register
- Yes, we know Active Directory cloud sync is a MESS, says Microsoft @ The Register
- Paypal ignores bug discovery that lets anyone bypass two factor authentication @ The Inquirer
- How To Emulate Rare and Retro Platforms on the Raspberry Pi @ MAKE:Blog
- European Rosetta Space Craft About To Rendezvous With Comet @ Slashdot
- How To Give Adobe Photoshop A Performance Boost With Your GPU @ Tech ARP
- How to Set up Server-to-Server Sharing in ownCloud 7 on Linux @ Linux.com
- It's official: You can now legally carrier-unlock your mobile in the US @ The Register
- Facebook goes down, people dial 911 @ The Register
- NikKTech And XSPC Worldwide Giveaway
Experience with Silent Design
In the time periods between major GPU releases, companies like ASUS have the ability to really dig down and engineer truly unique products. With the expanded time between major GPU releases, from either NVIDIA or AMD, these products have continued evolving to offer better features and experiences than any graphics card before them. The ASUS Strix GTX 780 is exactly one of those solutions – taking a GTX 780 GPU that was originally released in May of last year and twisting it into a new design that offers better cooling, better power and lower noise levels.
ASUS intended, with the Strix GTX 780, to create a card that is perfect for high end PC gamers, without crossing into the realm of bank-breaking prices. They chose to go with the GeForce GTX 780 GPU from NVIDIA, at a significant price drop from the GTX 780 Ti with only a modest performance drop. They doubled the reference memory capacity from 3GB to 6GB of GDDR5, to assuage any buyer's thoughts that 3GB wasn't enough for multi-screen Surround gaming or 4K gaming. And they changed the cooling solution to offer a near silent operation mode when used in "low impact" gaming titles.
The ASUS Strix GTX 780 Graphics Card
The ASUS Strix GTX 780 card is a pretty large beast, both in physical size and in performance. The cooler is a slightly modified version of the very popular DirectCU II thermal design used in many of the custom built ASUS graphics cards. It has a heat dissipation area more than twice that of the reference NVIDIA cooler and uses larger fans that can spin slower (and quieter) while still improving cooling capacity.
Out of the box, the ASUS Strix GTX 780 will run at an 889 MHz base clock and 941 MHz Boost clock, a fairly modest increase over the 863/900 MHz rates of the reference card. Obviously, with much better cooling and a lot of work done on the PCB of this custom design, users will have a lot of headroom to overclock on their own, but I continue to implore companies like ASUS and MSI to up the ante out of the box! One area where ASUS does impress is the memory – the Strix card features a full 6GB of GDDR5 running at 6.0 GHz, twice the capacity of the reference GTX 780 (and even GTX 780 Ti) cards. If you had any concerns about Surround or 4K gaming, know that memory capacity will not be a problem. (Though raw compute power may still be.)
Subject: General Tech, Graphics Cards | August 3, 2014 - 01:59 PM | Scott Michaud
Tagged: nvidia, maxwell, gtx 880
Just recently, we posted a story that claimed NVIDIA was preparing to launch high-end Maxwell in the October/November time frame. Apparently, that was generous. The graphics company is now said to be announcing its GeForce GTX 880 in mid-September, with availability coming later in the month. It is expected to be based on the GM204 chip (which previous rumors claim is 28nm).
It is expected that the GeForce GTX 880 will be available with 4GB of video memory, with an 8GB version possible at some point. As someone who runs multiple (five) monitors, I can tell you that 2GB is not enough for someone of my use case. Windows 7 says the same. It kicks me out of applications to tell me that it does not have enough video memory. This would be enough reason for me to get more GPU memory.
We still do not know how many CUDA cores will be present in the GM204 chip, or whether the GeForce GTX 880 will have all of them enabled (though I would be surprised if it didn't). Without any way to derive its theoretical performance, we cannot compare it against the GTX 780 or 780 Ti. It could be significantly faster, it could be marginally faster, or it could be somewhere in between.
But we will probably find out within two months.
Subject: General Tech, Graphics Cards, Displays | July 29, 2014 - 06:02 PM | Scott Michaud
Tagged: vesa, nvidia, g-sync, freesync, DisplayPort, amd
Dynamic refresh rates have two main purposes: save power by only forcing the monitor to refresh when a new frame is available, and increase animation smoothness by synchronizing to draw rates (rather than "catching the next bus" at 16.67ms, on the 16.67ms, for 60 Hz monitors). Mobile devices prefer the former, while PC gamers are interested in the latter.
Obviously, the video camera nullifies the effect.
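The "catching the next bus" analogy can be sketched in a few lines. In this simplified model (which ignores scanout time and assumes the frame is ready before its tick), a fixed-refresh monitor holds a finished frame until the next 16.67 ms boundary, while a dynamic-refresh monitor displays it as soon as it is done:

```python
import math

REFRESH_MS = 1000 / 60  # ~16.67 ms between fixed 60 Hz refresh "buses"

def displayed_at_fixed(render_done_ms):
    """Fixed refresh: a finished frame waits for the next refresh tick."""
    return math.ceil(render_done_ms / REFRESH_MS) * REFRESH_MS

def displayed_at_adaptive(render_done_ms):
    """Dynamic refresh: the monitor refreshes when the frame is ready."""
    return render_done_ms

# A frame finishing at 20 ms waits until the ~33.3 ms tick on a fixed display,
# but is shown immediately on an adaptive one.
```

The waiting time in the fixed case is exactly the judder G-Sync and Adaptive-Sync eliminate for gamers, and the skipped refreshes in the adaptive case are the power saving that mobile devices care about.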
NVIDIA was first to make this public with G-Sync. AMD responded with FreeSync, starting with a proposal that was later ratified by VESA as DisplayPort Adaptive-Sync. AMD, then, took up "Project FreeSync" as an AMD "hardware/software solution" to make use of DisplayPort Adaptive-Sync in a way that benefits PC gamers.
Today's news is that AMD has just released an FAQ which explains the standard much more thoroughly than they have in the past. For instance, it clarifies the distinction between DisplayPort Adaptive-Sync and Project FreeSync. Prior to the FAQ, I thought that FreeSync became DisplayPort Adaptive-Sync, and that was that. Now, it is sounding a bit more proprietary, just built upon an open, VESA standard.
If interested, check out the FAQ at AMD's website.