Subject: Graphics Cards | June 9, 2014 - 02:53 PM | Jeremy Hellstrom
Tagged: amd, r9 280, msi, R9 280 GAMING OC, factory overclocked
[H]ard|OCP has just posted a review of MSI's factory overclocked R9 280 GAMING OC card, a rebranded HD 7950 with a 67MHz overclock on the GPU out of the box, bringing it up to the 280X's default speed of 1GHz. With a bit of work that can be increased; [H]'s testing was also done at 1095MHz with the RAM raised to 5.4GHz, which was enough to take its performance just beyond the stock GTX 760 it was pitted against. Considering how evenly matched these cards are in both performance and price, the decision as to which to buy can come down to bundled games or personal preference.
"Priced at roughly $260 we have the MSI R9 280 GAMING OC video card, which features pre-overclocked performance, MSI's Twin Frozr IV cooling system, and highest end components. We'll focus on performance when gaming at 1080p between this boss and the GeForce GTX 760 video card!"
Here are some more Graphics Card articles from around the web:
- Gigabyte R7 250X OC 1GB GDDR5 @ Madshrimps
- HIS R7 250X iCooler 1GB GDDR5 @ Madshrimps
- PowerColor Radeon R9 295X2 Review @ OCC
- MSI Radeon R9 280 OC Review @ TechwareLabs
- Sapphire R9 290 Vapor-X OC Review @ Hardware Canucks
- AMD Kaveri Mobile APU Preview - FX-7600P with Radeon R7 Graphics @ Legit Reviews
- The Performance-Per-Watt, Efficiency Of GPUs On Open-Source Drivers @ Phoronix
- Testing 60+ Intel/AMD/NVIDIA GPUs On Linux With Open-Source Drivers @ Phoronix
- NVIDIA GeForce GT 740: I'd Rather Have Maxwell @ Phoronix
- NVIDIA’s GTX TITAN Z; GK110 Squared @ Hardware Canucks
Subject: Graphics Cards, Displays | June 4, 2014 - 12:40 AM | Ryan Shrout
Tagged: gsync, g-sync, freesync, DisplayPort, computex 2014, computex, adaptive sync
AMD FreeSync is a technology, brand, and term that is likely to be used a lot between now and the end of 2014. When NVIDIA introduced variable refresh rate monitor technology to the world in October of last year, one of the immediate topics of conversation was the response that AMD would have. NVIDIA's G-Sync technology is limited to NVIDIA graphics cards and only a few monitors (actually just one still as I write this) have the specialized hardware to support it. In practice though, variable refresh rate monitors fundamentally change the gaming experience for the better.
At CES, AMD went on the offensive and started showing press a hacked up demo of what they called "FreeSync", a similar version of the variable refresh technology working on a laptop. At the time, the notebook was a requirement of the demo because of the way AMD's implementation worked. Mobile displays have previously included variable refresh technologies in order to save power and battery life. AMD found that it could repurpose that technology to emulate the effects that NVIDIA G-Sync creates - a significantly smoother gaming experience without the side effects of Vsync.
Our video preview of NVIDIA G-Sync Technology
Since that January preview, things have progressed for the "FreeSync" technology. AMD took the idea to the VESA board responsible for the DisplayPort standard, and in April we found out that VESA had officially adopted the technology and called it Adaptive Sync.
So now what? AMD is at Computex and of course is taking the opportunity to demonstrate a "FreeSync" monitor with the DisplayPort 1.2a Adaptive Sync feature at work. Though they aren't talking about what monitor it is or who the manufacturer is, the demo is up and running and functions with frame rates wavering between 40 FPS and 60 FPS - the most crucial range of frame rates that can adversely affect gaming experiences. AMD has a windmill demo running on the system, perfectly suited to showing Vsync enabled (stuttering) and Vsync disabled (tearing) issues with a constantly rotating object. It is very similar to the NVIDIA clock demo used to show off G-Sync.
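The stutter the windmill demo exposes is easy to reason about with some quick math. Below is a minimal Python sketch (my own illustration, not AMD's demo code) of a double-buffered, Vsync-on pipeline: a steady 50 FPS render rate collapses to an effective 30 FPS on a fixed 60 Hz panel, because every finished frame must stall until the next refresh boundary.

```python
import math

REFRESH = 1000.0 / 60   # 16.67 ms per refresh at 60 Hz
RENDER = 1000.0 / 50    # 20 ms per frame at a steady 50 FPS

def present_times(frames, refresh=REFRESH, render=RENDER):
    """On-screen flip times (ms) with double-buffered Vsync: each frame
    waits for the next whole refresh boundary after rendering finishes."""
    times, t = [], 0.0
    for _ in range(frames):
        t += render                           # frame finishes rendering
        t = math.ceil(t / refresh) * refresh  # stall until next refresh
        times.append(t)
    return times

flips = present_times(8)
deltas = [round(b - a, 1) for a, b in zip(flips, flips[1:])]
print(deltas)  # every delta is ~33.3 ms: each frame is held two refreshes
```

With adaptive sync the panel refreshes when the frame is ready, so the same workload would display at a uniform 20 ms cadence (a true 50 FPS) instead.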
The demo system is powered by an AMD FX-8350 processor and Radeon R9 290X graphics card. The monitor is running at 2560x1440 and is the very first working prototype of the new standard. Even more interesting, this is a pre-existing display that has had its firmware updated to support Adaptive Sync. That's potentially exciting news! Monitors COULD BE UPGRADED to support this feature, but AMD warns us: "...this does not guarantee that firmware alone can enable the feature, it does reveal that some scalar/LCD combinations are already sufficiently advanced that they can support some degree of DRR (dynamic refresh rate) and the full DPAS (DisplayPort Adaptive Sync) specification through software changes."
The time frame for retail available monitors using DP 1.2a is up in the air but AMD has told us that the end of 2014 is entirely reasonable. Based on the painfully slow release of G-Sync monitors into the market, AMD has less of a time hole to dig out of than we originally thought, which is good. What is not good news though is that this feature isn't going to be supported on the full range of AMD Radeon graphics cards. Only the Radeon R9 290/290X and R7 260/260X (and the R9 295X2 of course) will actually be able to support the "FreeSync" technology. Compare that to NVIDIA's G-Sync: it is supported by NVIDIA's entire GTX 700 and GTX 600 series of cards.
All that aside, seeing the first official prototype of "FreeSync" is awesome and is getting me pretty damn excited about variable refresh rate technologies once again! Hopefully we'll get some more hands-on time (eyes-on, whatever) with a panel in the near future to really see how it compares to the experience that NVIDIA G-Sync provides. There is still the chance that the technologies are not directly comparable, and some in-depth testing will be required to validate that.
Subject: Graphics Cards, Processors | June 3, 2014 - 02:10 PM | Ryan Shrout
Tagged: Intel, amd, richard huddy
Interesting news is crossing the ocean today as we learn that Richard Huddy, who has previously had stints at NVIDIA, ATI, AMD and, most recently, Intel, is teaming up with AMD once again. Richard brings with him years of experience and innovation in the world of developer relations and graphics technology. By hiring the man often called "the Godfather of DirectX," AMD wants to prove to the community that it is taking PC gaming seriously.
The official statement from AMD follows:
AMD is proud to announce the return of the well-respected authority in gaming, Richard Huddy. After three years away from AMD, Richard returns as AMD's Gaming Scientist in the Office of the CTO - he'll be serving as a senior advisor to key technology executives, like Mark Papermaster, Raja Koduri and Joe Macri. AMD is extremely excited to have such an industry visionary back. Having spent his professional career with companies like NVIDIA, Intel and ATI, and having led the worldwide ISV engineering team for over six years at AMD, Mr. Huddy has a truly unique perspective on the PC and Gaming industries.
Mr. Huddy rejoins AMD after a brief stint at Intel, where he had a major impact on their graphics roadmap. During his career Richard has made enormous contributions to the industry, including the development of DirectX and a wide range of visual effects technologies. Mr. Huddy’s contributions in gaming have been so significant that he was immortalized as ‘The Scientist’ in Max Payne (if you’re a gamer, you’ll see the resemblance immediately).
Kitguru has a video from Richard Huddy explaining his reasoning for the move back to AMD.
This move points AMD in a very interesting direction going forward. The creation of the Mantle API and the debate around AMD's developer relations programs are going to be hot topics as we move into the summer and I am curious how quickly Huddy thinks he can have an impact.
I have it on good authority we will find out very soon.
Subject: Graphics Cards | June 2, 2014 - 11:41 PM | Sebastian Peak
Tagged: computex, radeon, r9 295x2, Hawaii XT, dual gpu, computex 2014, ASUS ROG, asus, Ares, amd
The latest installment in the ASUS ARES series of ultra-powerful, limited-edition graphics cards has been announced, and the Ares III is set to be the “world’s fastest” video card.
The dual-GPU powerhouse is driven by two “hand-selected” Radeon Hawaii XT GPUs (R9 290X cores) with 8GB of GDDR5 memory. The card is overclockable according to ASUS, and will likely arrive factory overclocked as they claim it will be faster out of the box than the reference R9 295X2. The ARES III features a custom-designed EK water block, so unlike the R9 295X2 (which ships with its own closed-loop cooler) the end user will need to supply the liquid cooling loop.
ASUS claims that the ARES III will “deliver 25% cooler performance than reference R9 295X designs”, but to achieve this ASUS “highly” recommends a high flow rate loop with at least a 120x3 radiator “to extract maximum performance from the card,” and they “will provide a recommended list of water cooling systems at launch”.
Only 500 of the ARES III will be made, each individually numbered. No pricing has been announced, but ASUS says to expect it to cost more than an R9 295X2 ($1499) but less than a TITAN Z ($2999). The ASUS ROG ARES III will be available in Q3 2014.
For more Computex 2014 coverage, please check out our feed!
Subject: General Tech, Graphics Cards | June 2, 2014 - 05:52 PM | Scott Michaud
Tagged: nvidia, geforce, geforce experience, ShadowPlay
NVIDIA has just launched another version of their GeForce Experience, incrementing the version to 2.1. This release allows video capture of up to "2500x1600", which I assume means 2560x1600, as well as better audio-video synchronization in Adobe Premiere. Also, because why stop going after FRAPS once you start, this version adds an in-game framerate indicator, along with push-to-talk for recording the microphone.
Another note: when GeForce Experience 2.0 launched, it introduced streaming of the user's desktop. This allowed recording of OpenGL and windowed-mode games by simply capturing an entire monitor. This mode was not capable of "Shadow Mode", which I believed was because they thought users didn't want a constant rolling video to be taken of their desktop in the event that they wanted to save a few minutes of it at some point. Turns out that I was wrong; the feature was coming and it arrived with GeForce Experience 2.1.
GeForce Experience 2.1 is now available at NVIDIA's website, unless it already popped up a notification for you.
Subject: Graphics Cards | May 28, 2014 - 07:17 PM | Jeremy Hellstrom
Tagged: driver, Catalyst 14.4 beta, amd
Get the latest Catalyst for your Radeon!
- Starting with AMD Catalyst 14.6 Beta, AMD will no longer support Windows 8.0 (and the WDDM 1.2 driver). Windows 8.0 users should upgrade (for free) to Windows 8.1 to take advantage of the new features found in the AMD Catalyst 14.6 Beta.
- AMD Catalyst 14.4 will remain available for users who wish to remain on Windows 8.0. A future AMD Catalyst release will allow the WDDM 1.1 (Windows 7) driver to be installed under Windows 8.0 for those users unable to upgrade to Windows 8.1.
- The AMD Catalyst 14.6 Beta Driver can be downloaded from the following links: AMD Catalyst 14.6 Beta Driver for Windows
- NOTE! This Catalyst Driver is provided "AS IS", and under the terms and conditions of the End User License Agreement provided with it.
- Performance improvements
- Watch Dogs performance improvements
- AMD Radeon R9 290X – 1920x1080 4x MSAA – improves up to 25%
- AMD Radeon R9 290X – 2560x1600 4x MSAA – improves up to 28%
- AMD Radeon R9 290X CrossFire configuration (3840x2160, Ultra settings, 4x MSAA) – 92% scaling
- Murdered: Soul Suspect performance improvements
- AMD Radeon R9 290X – 2560x1600 4x MSAA – improves up to 16%
- AMD Radeon R9 290X CrossFire configuration (3840x2160, Ultra settings, 4x MSAA) – 93% scaling
- AMD Eyefinity enhancements: Mixed Resolution Support
- A new architecture providing brand new capabilities
- Display groups can be created with monitors of different resolution (including difference sizes and shapes)
- Users have a choice of how surface is created over the display group
- Fill – legacy mode, best for identical monitors
- Fit – create the Eyefinity surface using best available rectangular area with attached displays.
- Expand – create a virtual Eyefinity surface using desktops as viewports onto the surface.
- Eyefinity Display Alignment
- Enables control over alignment between adjacent monitors
- One-Click Setup: driver detects layout of extended desktops
- Can create Eyefinity display group using this layout in one click!
- New user controls for video color and display settings
- Greater control over Video Color Management:
- Controls have been expanded from a single slider for controlling Boost and Hue to per color axis
- Color depth control for Digital Flat Panels (available on supported HDMI and DP displays)
- Allows users to select different color depths per resolution and display
- AMD Mantle enhancements
- Mantle now supports AMD Mobile products with Enduro technology
- Battlefield 4: AMD Radeon HD 8970M (1366x768; high settings) – 21% gain
- Thief: AMD Radeon HD 8970M (1920x1080; high settings) – 14% gain
- Star Swarm: AMD Radeon HD 8970M (1920x1080; medium settings) – 274% gain
- Enables support for Multi-GPU configurations with Thief (requires the latest Thief update)
... and much more, grab it here.
Subject: General Tech, Graphics Cards | May 28, 2014 - 01:49 PM | Jeremy Hellstrom
Tagged: benchmarking, 3dmark
HELSINKI, FINLAND – May 28, 2014 – Futuremark today announced 3DMark Sky Diver, a new DirectX 11 benchmark test for gaming laptops and mid-range PCs. 3DMark Sky Diver is the ideal test for benchmarking systems with mainstream DirectX 11 graphics cards, mobile GPUs, or integrated graphics. A preview trailer for the new benchmark shows a wingsuited woman skydiving into a mysterious, uncharted location. The scene is brought to life with tessellation, particles and advanced post-processing effects. Sky Diver will be shown in full at Computex from June 3-7, or find out more on the Futuremark website.
Jukka Mäkinen, Futuremark CEO said, "Some people think that 3DMark is only for high-end hardware and extreme overclocking. Yet millions of PC gamers rely on 3DMark to choose systems that best balance performance, efficiency and affordability. 3DMark Sky Diver complements our other tests by providing the ideal benchmark for gaming laptops and mainstream PCs."
3DMark - The Gamer's Benchmark for all your hardware
3DMark is the only benchmark that offers a range of tests for different classes of hardware:
- Fire Strike, for high performance gaming PCs (DirectX 11, feature level 11)
- Sky Diver, for gaming laptops and mid-range PCs (DirectX 11, feature level 11)
- Cloud Gate, for notebooks and typical home PCs (DirectX 11 feature level 10)
- Ice Storm, for tablets and entry level PCs (DirectX 11 feature level 9)
With 3DMark, you can benchmark the full performance range of modern DirectX 11 graphics hardware. Where Fire Strike is like a modern game on ultra high settings, Sky Diver is closer to a DirectX 11 game played on normal settings. This makes Sky Diver the best choice for benchmarking entry level to mid-range systems and Fire Strike the perfect benchmark for high performance gaming PCs.
See 3DMark Sky Diver in full at Computex
3DMark Sky Diver will be on display on the ASUS, MSI, GIGABYTE, Galaxy, Inno3D, and G-Skill booths at Computex, June 3-7.
S.Y. Shian, ASUS Vice President & General Manager of Notebook Business Unit said,
"We are proud to partner with Futuremark to show 3DMark Sky Diver at Computex. Sky Diver helps PC gamers choose systems that offer great performance and great value. We invite everyone to visit our stand to experience 3DMark Sky Diver on a range of new ASUS products."
Sky Diver will be released as an update for all editions of 3DMark, including the free 3DMark Basic Edition.
Subject: Graphics Cards | May 28, 2014 - 11:19 AM | Ryan Shrout
Tagged: titan z, nvidia, gtx, geforce
Though delayed by a month, today marks the official release of NVIDIA's Titan Z graphics card, the dual GK110 beast with the $3000 price tag. The massive card was shown for the first time in March at NVIDIA's GPU Technology Conference and our own Tim Verry was on the grounds to get the information.
The details remain the same:
Specifically, the GTX TITAN Z is a triple slot graphics card that marries two full GK110 (big Kepler) GPUs for a total of 5,760 CUDA cores, 448 TMUs, and 96 ROPs with 12GB of GDDR5 memory on a 384-bit bus (6GB on a 384-bit bus per GPU). For the truly adventurous, it appears possible to SLI two GTX Titan Z cards using the single SLI connector. Display outputs include two DVI, one HDMI, and one DisplayPort connector.
The difference now of course is that all the clock speeds and pricing are official.
A base clock speed of 705 MHz with a Boost rate of 876 MHz places it well behind the individual GPU performance of a GeForce GTX 780 Ti or GTX Titan Black (rated at 889/980 MHz). The memory clock speed remains the same at 7.0 Gbps and you are still getting a massive 6GB of memory per GPU.
Maybe most interesting with the release of the GeForce GTX Titan Z is that NVIDIA seems to have completely fixated on non-DIY consumers with the card. We did not receive a sample of the Titan Z (nor did we get one of the Titan Black) and when I inquired as to why, NVIDIA PR stated that they were "only going to CUDA developers and system builders."
I think it is more than likely that after the release of AMD's Radeon R9 295X2 dual GPU graphics card on April 8th, with a price tag of $1500 (half of the Titan Z), the target audience was redirected. NVIDIA already had its eye on the professional markets that weren't willing to dive into the Quadro/Tesla lines (CUDA developers will likely drop $3k at the drop of a hat to get this kind of performance density). But a side benefit of creating the best flagship gaming graphics card on the planet was probably part of the story - and promptly taken away by AMD.
I still believe the Titan Z will be an impressive graphics card to behold both in terms of look and style and in terms of performance. But it would take the BIGGEST NVIDIA fans to be able to pass up buying a pair of Radeon R9 295X2 cards for a single GeForce GTX Titan Z. At least that is our assumption until we can test one for ourselves.
I'm still working to get my hands on one of these for some testing as I think the ultra high end graphics card coverage we offer is incomplete without it.
Several of NVIDIA's partners are going to be offering the Titan Z including EVGA, ASUS, MSI and Zotac. Maybe the most interesting though is EVGA's water cooled option!
So, what do you think? Anyone lining up for a Titan Z when they show up for sale?
Subject: General Tech, Graphics Cards | May 27, 2014 - 12:00 AM | Scott Michaud
Tagged: radeon, R9, R7, eyefinity, amd
AMD has just launched their Catalyst 14.6 Beta drivers for Windows and Linux. This driver will contain performance improvements for Watch Dogs, launching today in North America, and Murdered: Soul Suspect, which arrives next week. On Linux, the driver now supports Ubuntu 14.04 and its installation process has been upgraded for simplicity and user experience.
Unless performance improvements are more important to you, the biggest feature is the support for Eyefinity with mixed resolutions. With Catalyst 14.6, you no longer need a grid of identical monitors. One example use case, suggested by AMD, is a gamer who purchases an ultra-wide 2560x1080 monitor. They will be able to add a pair of 1080p monitors on either side to create a 6400x1080 viewing surface.
If the monitors are very mismatched, the driver will allow users to letterbox to the largest rectangle contained by every monitor, or "expand" to draw the largest possible rectangle (which will lead to some assets drawing outside of any monitor). A third mode, fill, behaves like Eyefinity currently does. I must give AMD a lot of credit for leaving the choice to the user.
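The three modes reduce to simple geometry. Here is a hypothetical Python sketch of how each mode would size the surface for monitors arranged side by side (my own illustration of the described behavior, not AMD driver code; the resolutions are example values):

```python
# Sketch of the three Eyefinity surface modes for side-by-side monitors.
# "fit" letterboxes to the largest rectangle every display can cover,
# "expand" builds a virtual surface spanning the tallest display, and
# "fill" is the legacy mode that assumes identical monitors.

def surface(monitors, mode):
    """monitors: list of (width, height) tuples placed left to right."""
    total_w = sum(w for w, _ in monitors)
    if mode == "fit":       # largest rectangle contained by every display
        return total_w, min(h for _, h in monitors)
    if mode == "expand":    # displays become viewports onto a larger surface
        return total_w, max(h for _, h in monitors)
    if mode == "fill":      # legacy behavior, identical monitors assumed
        return total_w, monitors[0][1]
    raise ValueError(mode)

# AMD's example: a 2560x1080 ultra-wide flanked by two 1080p panels
setup = [(1920, 1080), (2560, 1080), (1920, 1080)]
print(surface(setup, "fit"))     # (6400, 1080), matching AMD's figure

# Mismatched heights: a 1440p panel between two 1080p panels
mixed = [(1920, 1080), (2560, 1440), (1920, 1080)]
print(surface(mixed, "fit"))     # (6400, 1080), letterboxed on the 1440p
print(surface(mixed, "expand"))  # (6400, 1440), edges draw off-screen
```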
Returning to performance with actual figures, AMD claims "up to" 25% increases in Watch Dogs at 1080p or 28% at 1600p, compared to the previous version. The new CrossFire profile also claims up to 99% scaling in that game, at 2560x1600 with 8x MSAA. Murdered: Soul Suspect will see "up to" 16% improvements on a single card, and "up to" 93% scaling. Each of these results were provided by AMD, which tested on Radeon R9 290X cards. If these CrossFire profiles (well, first, are indicative of actual performance, and) see 99% scaling across two cards, that is pretty remarkable.
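For reference, "scaling" in multi-GPU claims like these conventionally means how much of a full second card's worth of performance actually materializes. A quick sketch of that arithmetic, using hypothetical frame rates rather than AMD's data:

```python
# Conventional multi-GPU scaling: the second card's added throughput as a
# percentage of a single card's throughput. 100% would be a perfect 2x.

def crossfire_scaling(fps_one_gpu: float, fps_two_gpus: float) -> float:
    """Percent of a full second card's performance gained by adding it."""
    return (fps_two_gpus / fps_one_gpu - 1.0) * 100.0

# Hypothetical example: 40 FPS on one card, 79.6 FPS on a pair
print(round(crossfire_scaling(40.0, 79.6)))  # 99, i.e. within 1% of 2x
```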
A brief mention: AMD has also expanded their JPEG decoder to Kabini. Previously, it was available to Kaveri, as of Catalyst 14.1. This allows using the GPU to display images, with their test showing a series of images being processed in about half of the time. While not claimed by AMD, I expect that the GPU will also be more power-efficient (as the processor can go back to its idle state much quicker, despite activating another component to do so). Ironically, the three images I used for this news post are encoded as PNG. You might find that amusing.
AMD Catalyst 14.6 Beta Drivers should be now available at their download site.
Subject: Graphics Cards | May 26, 2014 - 05:16 PM | Jeremy Hellstrom
Tagged: amd, radeon, r9 295x2, R9 290X
Through hard work or good luck you find yourself the proud owner of an R9 295X2 and a 4K display but somehow the performance just isn't quite good enough. You can't afford another X2 though there is an R9 290X in your price range but you just aren't sure if it will help your system out at all. That is where [H]ard|OCP steps in with this review where they prove that tri-fire in this configuration does indeed work. Not only does it work, it allows you to vastly increase your performance over a 295X2 or to improve the performance somewhat while raising your graphics settings to new highs. For those using 5760x1200 Eyefinity you probably already have your graphics options cranked; this upgrade will still offer you a linear increase in performance. Not bad if you have the money to invest!
"Will adding a single AMD Radeon R9 290X video card to the AMD Radeon R9 295X2 work? Will you get triple-GPU performance, ala TriFire CrossFire performance? This just might be a more financially feasible configuration for gamers versus QuadFire that provides a great gaming experience in Eyefinity and 4K resolutions."
Here are some more Graphics Card articles from around the web:
- Sapphire R9 290X Vapor-X OC @ Kitguru
- Sapphire Vapor-X R9 290X - Cooling the Savage Beast @HiTech Legion
- MSI Radeon R9 280X Gaming 6 GB @ techPowerUp
- Sapphire R9 290X Vapor-X OC 4GB Video Card Review @ Legit Reviews
- CLUB3D R9 290X RoyalAce Superoverclock @ Kitguru
- AMD Radeon R9 290 On Ubuntu 14.04 With Catalyst Can Beat Windows 8.1 @ Phoronix
- Catalyst On Ubuntu 14.04 Linux Competes Well With Windows 8.1 @ Phoronix
- High-End NVIDIA GeForce vs. AMD Radeon Linux Gaming Comparison @ Phoronix
- Windows 8.1 Still Outperforms Linux With Latest Intel GPU Drivers @ Phoronix
- Raijintek Morpheus @ techPowerUp
- PNY GTX 780 XLR8 OC Edition @ [H]ard|OCP
- Palit GTX780 Jetstream 6GB SLi (Ultra HD 4K) @ Kitguru
- Gigabyte GeForce GTX 750 @ Hardware Secrets
- ASUS GeForce GTX 780 Ti DirectCU II OC @ X-bit Labs
Subject: General Tech, Graphics Cards, Mobile | May 22, 2014 - 04:58 PM | Scott Michaud
Tagged: tegra k1, nvidia, iris pro, iris, Intel, hd 4000
The Chinese tech site, Evolife, acquired a few benchmarks for the Tegra K1. We do not know exactly where they got the system from, but we know that it has 4GB of RAM and 12 GB of storage. Of course, this is the version with four ARM Cortex-A15 cores (not the upcoming, 64-bit version based on Project Denver). On 3DMark Ice Storm Unlimited, it was capable of 25737 points, full system.
Image Credit: Evolife.cn
You might remember that our tests with an Intel Core i5-3317U (Ivy Bridge), back in September, achieved a score of 25630 on 3DMark Ice Storm. Of course, that was using the built-in Intel HD 4000 graphics, not a discrete solution, but it still kept up for gaming. This makes sense, though. Intel HD 4000 (GT2) graphics has a theoretical performance of 332.8 GFLOPs, while the Tegra K1 is rated at 364.8 GFLOPs. Earlier, we said that its theoretical performance is roughly on par with the GeForce 9600 GT, although the Tegra K1 supports newer APIs.
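Those theoretical figures fall out of a simple peak-FLOPS formula: shader lanes × clock × 2 (one fused multiply-add per lane per cycle). A quick check in Python; the clocks below are the commonly cited values for these parts and should be treated as assumptions:

```python
# Peak FP32 throughput: ALU lanes x clock x 2 ops (one FMA) per cycle.

def peak_gflops(alu_lanes: int, clock_mhz: float) -> float:
    """Theoretical single-precision GFLOPs."""
    return alu_lanes * clock_mhz * 2 / 1000.0

# Tegra K1: 192 Kepler CUDA cores at an assumed 950 MHz GPU clock
print(peak_gflops(192, 950))   # 364.8 GFLOPs

# Intel HD 4000 (GT2): 16 EUs x 8 lanes = 128 lanes at 1300 MHz max turbo
print(peak_gflops(128, 1300))  # 332.8 GFLOPs
```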
Of course, Intel has released better solutions with Haswell. Benchmarks show that Iris Pro is able to play Battlefield 4 on High settings, at 720p, with about 30FPS. The HD 4000 only gets about 12 FPS with the same configuration (and ~30 FPS on Low). This is not to compare Intel to NVIDIA's mobile part, but rather compare Tegra K1 to modern, mainstream laptops and desktops. It is getting fairly close, especially with the first wave of K1 tablets entering at the mid-$200 USD MSRP in China.
As a final note...
There was a time when Tim Sweeney, CEO of Epic Games, said that the difference between high-end and low-end PCs "is something like 100x". Scaling a single game between the two performance tiers would be next-to impossible. He noted that ten years earlier, that factor was more like "10x".
Now, an original GeForce Titan is about 12x faster than the Tegra K1 and they support the same feature set. In other words, it is easier to develop a game for the PC and high-end tablet than it was to develop a PC game for high-end and low-end machines back in 2008. PC gaming is, once again, getting healthier.
Subject: Graphics Cards | May 15, 2014 - 06:16 PM | Ryan Shrout
Tagged: radeon, R9 290X, r9 290, r9 280x, r9 280, amd
Just the other day AMD sent out an email to the media to discuss the current pricing situation of the Radeon R9 series of graphics cards. This email started with the following statement.
You’ve seen many articles, discussions online about the AMD Radeon™ R9 lineup – especially chatter about pricing and availability. As we’ve talked about it before, the demand for the R9 lineup has been nothing but astonishing, and went well beyond our most optimistic expectations. That created a situation where gamers weren’t able to purchase their desired R9 graphics card.
Clearly AMD would not bring up the subject if the current situation was BAD news, so guess what? All seems to be back to normal (or expected) in terms of AMD Radeon R9 pricing and card availability. Take a look at the table below to get an idea of where Radeons currently stand.
| Card | Current price | SEP |
| --- | --- | --- |
| Radeon R9 295X2 | $1524 | $1499 |
| Radeon R9 290X | $549 | $529 |
| Radeon R9 290 | $379 | $399 |
| Radeon R9 280X | $289 | $299 |
| Radeon R9 280 | $249 | $249 |
| Radeon R9 270X | $199 | $189 |
| Radeon R9 270 | $169 | $179 |
There is one price change that differs from the products' launch - the SEP of the Radeon R9 280 has dropped from $279 to $249. Nothing dramatic but a nice change.
Maybe most interesting is this line from the AMD email.
Now that product is available and at suggested pricing, these prices will remain stable. No more madness like you saw in Q1.
That emphasis is AMD's. I'm not quite sure how the company thinks they can keep a tight control on pricing now if it wasn't able to do so before, but more than likely, with the rush for coin mining hardware somewhat dying off, the prediction will hold true. (As a side note, there appears to be some discounts to be found on used Radeon hardware these days...)
Of course the AMD bundling promotion known as Never Settle Forever is still going strong with these new prices as well. Scott wrote up a story detailing this latest incarnation of the promotion, and he and I both agree that while free is always great, the age of most of the titles in the program is a bit of a problem. But AMD did note in this email that they have "lined up a few brand new games to add to this promotion, and they'll [sic] be sharing more info with you in the next few weeks!"
Subject: General Tech, Graphics Cards, Processors, Mobile | May 15, 2014 - 05:02 PM | Scott Michaud
Tagged: nvidia, xaiomi, mipad, tegra k1
Tegra K1 is NVIDIA's new mobile processor and the first to implement the Kepler graphics architecture. In other words, it has all of the same graphics functionality as a desktop GPU, with 364 GigaFLOPs of performance (a little faster than a GeForce 9600 GT). This is quite fast for a mobile product. For instance, that amount of graphics performance could max out Unreal Tournament 3 at 2560x1600 and run Crysis at 720p. Being Kepler, it supports OpenGL 4.4, OpenGL ES 3.1, DirectX 11 and 12, and GPU compute languages.
Xiaomi is launching their MiPad in Beijing, today, with an 8-inch 2048x1536 screen and the Tegra K1. They will be available in June (for China) starting at $240 USD for the 16GB version and going up to $270 for the 64GB version. Each version has 2GB of RAM, an 8MP rear-facing camera, and a 5MP front camera.
Now, we wait and see if any Tegra K1 devices come to North America and Europe - especially at that price point.
Subject: General Tech, Graphics Cards | May 12, 2014 - 08:00 PM | Scott Michaud
Tagged: titan z, nvidia, gtx titan z, geforce
To a crowd of press and developers at their GTC summit, NVIDIA announced the GeForce GTX Titan Z add-in board (AIB). Each of the two fully unlocked GK110 GPUs has access to 6GB of GDDR5 memory (12GB total). The card was expected to be available on May 8th but has yet to surface. As NVIDIA has yet to comment on the situation, many question whether it ever will.
And then we get what we think are leaked benchmarks (note: two pictures).
One concern about the Titan Z was its rated 8 TeraFLOPs of compute performance. This is a fairly sizable reduction from the theoretical maximum of 10.24 TeraFLOPs of two Titan Black processors and even less than two first-generation Titans (9 TeraFLOPs combined). We expected that this is due to reduced clock rates. What we did not expect is for benchmarks to show the GPUs boost way above those advertised levels, and even beyond the advertised boost clocks of the Titan Black and the 780 Ti. The card was seen pushing 1058 MHz in some sections, which leads to a theoretical compute performance of 12.2 TeraFLOPs (6.1 TeraFLOPs per GPU) in single precision. That is a lot.
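The same peak-FLOPS arithmetic used throughout this paragraph (CUDA cores × clock × 2, for one FP32 fused multiply-add per core per cycle) reproduces each of the figures above:

```python
# Peak FP32 throughput for Kepler GK110-based cards.

def tflops(cuda_cores: int, clock_mhz: float) -> float:
    """Theoretical single-precision TFLOPs: cores x clock x 2 ops (FMA)."""
    return cuda_cores * clock_mhz * 2 / 1e6

print(round(tflops(5760, 705), 1))   # 8.1   - Titan Z rated at base clock
print(round(tflops(5760, 889), 2))   # 10.24 - two Titan Blacks at base
print(round(tflops(5760, 1058), 1))  # 12.2  - Titan Z at the observed boost
print(round(tflops(2880, 1058), 1))  # 6.1   - per GPU at that boost
```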
These benchmarks also show that NVIDIA has a slight lead over AMD's R9 295X2 in many games, except Battlefield 4 and Sleeping Dogs (plus 3DMark and Unigine). Of course, these benchmarks measure the software-reported frame rate and frame times, and those may or may not be indicative of actual performance. While the Titan Z appears to have a slight performance lead over the R9 295X2 (although a solid argument for an AMD performance win exists), it does so at double the cost (at its expected $3000 USD price point). That is not up for debate.
So, until NVIDIA says anything, the Titan Z is in limbo. I am sure there exist CUDA developers who await its arrival. Personally, I would just get three Titan Blacks, since you are going to need to manually schedule your workloads across multiple processors anyway (or 780 Tis if 32-bit arithmetic is enough precision). That is, of course, unless you cannot physically fit enough GeForce Titan Blacks in your motherboard and, as such, require two GK110 chips per AIB (but not enough to bother writing a cluster scheduling application).
Subject: General Tech, Graphics Cards, Mobile | May 7, 2014 - 02:26 AM | Scott Michaud
Tagged: Thunderbolt 2, thunderbolt, nvidia, GeForce GTX 780 Ti
Externally-attached GPUs have been a topic for many years now. Numerous companies have tried, including AMD and Lucid, but no solution has ever been a widely known and available product. Even as interfaces increase in bandwidth and compatibility with internal buses, it has never been something that a laptop salesperson could suggest to users who want to dock into a high-performance station at home. At best, we are seeing it in weird "coin mining" racks to hang way more GPUs above a system than could physically mount on the motherboard.
Apparently that has not stopped the DIY community, according to chatter on Tech Inferno forums. While the above video does not really show the monitor, MacBook Pro, and GPU enclosure at the same time, let alone all wired together and on, it seems reasonable enough. The video claims to give the MacBook Pro (running Windows 8.1) access to a GeForce GTX 780 Ti with fairly high performance, despite the reduced bandwidth. Quite cool.
Subject: Graphics Cards | May 6, 2014 - 03:36 AM | Tim Verry
Tagged: r9 295x2, powercolor, hawaii, dual gpu, devil 13
PowerColor has been teasing a new graphics card on its Facebook page. The photos show a macro shot of the Devil 13 logo along with captions hinting at the new card being a dual GPU monster, including one that refers to the upcoming Devil 13 as a "dual beast."
PowerColor's previous Devil 13 branded graphics card was the Radeon HD 7990 Devil 13, which contained two HD 7970 "Tahiti" GPUs on one PCB. Notably, AMD recently launched a new dual GPU reference design based around two R9 290X "Hawaii" GPUs called the R9 295X2. It is still rumor and speculation at this point, but the timing and leaked photos seem to point squarely at the upcoming Devil 13 being the first air cooled custom R9 295X2!
Adding credence to the rumors, leaked photos have appeared online with a PCB backplate that appears to match the backplate shown in the official teaser photo. The leaked photos show an absolutely beastly triple slot graphics card that places two GPUs in CrossFire on a single custom PCB, powered by four 8-pin PCI-E power connectors and cooled by a gargantuan HSF made up of an aluminum fin stack, multiple large diameter copper heatpipes, and three fans. The cooler and PCB are reinforced with brackets and a metal backplate to help keep the air cooler in place and the PCB from bending.
If the rumors hold true, PowerColor will be unveiling the first air cooled dual GPU R9 295X2 graphics card which is an impressive feat of engineering! Using four 8-pin PCI-E power connectors definitely suggests that aftermarket overclocking is encouraged and supported even if PowerColor does not end up factory overclocking their dual GPU beast.
For reference, the stock AMD R9 295X2 features two full Hawaii GPUs with 5,632 stream processors clocked at up to 1018 MHz, interfaced with 8GB of GDDR5 memory in total (4GB per GPU, each over its own 512-bit bus). AMD rates this configuration at 11.5 TFLOPS of single precision performance. The reference R9 295X2 has a 500W TDP and uses two 8-pin PCI-E power connectors.
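AMD's 11.5 TFLOPS rating checks out against those specs (again my own arithmetic): stream processors, like CUDA cores, retire two floating-point operations per cycle via a fused multiply-add.

```python
# Same rule for AMD stream processors: 2 FLOPs per cycle each, so
# TFLOPS = stream_processors * 2 * clock_MHz / 1e6.
# R9 295X2: 5,632 SPs total at up to 1018 MHz.
tflops = 5632 * 2 * 1018 / 1e6
print(round(tflops, 1))  # ~11.5, matching AMD's rating
```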
Please excuse me while I wipe the drool off of my keyboard...
Stay tuned to PC Perspective for more details on the mysterious dual GPU Devil 13 from PowerColor!
In the meantime, check out our full review of the R9 295X2 (and the Hawaii architecture) and what happens when you put two R9 295X2s in Quad CrossFire into a single system for 4K gaming goodness!
Subject: General Tech, Graphics Cards | May 5, 2014 - 05:03 PM | Scott Michaud
Tagged: nvidia, geforce experience, shield
NVIDIA has released version 2.0.1 of GeForce Experience. This update does not bring many new features, which is why it is a third-level increment to the version number, but it is probably worthwhile to download regardless. Its headline feature is a set of security enhancements to OpenSSL as used by remote GameStream on SHIELD. The update also claims to improve streaming quality and reduce audio latency.
While they do not seem to elaborate, I assume this is meant to fix Heartbleed, an exploit that allows an attacker to receive a small snapshot of active memory. If that is the case, it is unclear whether the SHIELD, the host PC during a game session, or both endpoints are affected.
The new GeForce Experience is available at the NVIDIA website. If it is running, it will also ask you to update it, of course.
Subject: Graphics Cards | May 2, 2014 - 01:29 AM | Tim Verry
Tagged: titan z, nvidia, gpgpu, gk110, dual gpu, asus
NVIDIA unveiled the GeForce GTX TITAN Z at the GPU Technology Conference last month, and the cards will be for sale soon from various partners. ASUS will be one of the first AIB partners to offer a reference TITAN Z.
The ASUS GTX TITAN Z pairs two full GK110-based GPUs with 12GB of GDDR5 memory. The graphics card houses a total of 5,760 CUDA cores, 480 texture mapping units (TMUs), and 96 ROPs. Each GK110 GPU interfaces with 6GB of GDDR5 memory via a 384-bit bus. ASUS is using reference clockspeeds with this card, which means 705 MHz base and up to 876 MHz GPU Boost for the GPUs and 7.0 GHz for the memory.
For comparison, the dual-GPU TITAN Z is effectively two GTX TITAN Black cards on a single PCB. However, the TITAN Black runs at 889 MHz base and up to 980 MHz GPU Boost. A hybrid water cooling solution may have allowed NVIDIA to maintain the clockspeed advantage, but doing so would compromise the only advantage the TITAN Z has over using two (much cheaper) TITAN Blacks in a workstation or server: card density. A small hit in clockspeed will be a manageable sacrifice for the target market, I believe.
The ASUS GTX TITAN Z has a 375W TDP and is powered by two 8-pin PCI-E power connectors. The new flagship dual GPU NVIDIA card has an MSRP of $3,000 and should be available in early May.
Subject: General Tech, Graphics Cards | May 1, 2014 - 08:00 AM | Scott Michaud
Tagged: Mantle, amd
As our readers are well aware, Mantle is available for use with a few games. Its compatibility began with the beta Catalyst 14.1 driver and an update for Battlefield 4. AMD was quite upfront about the technology, even granting a brief interview with Guennadi Riguer, Chief Architect of the API, to fill in a few of the gaps left from their various keynote speeches.
What is under lock and key, however, is the actual software development kit (SDK). AMD claimed that it was too immature for the public. It was developed in partnership with DICE, Oxide Games, and other established developers to fine-tune its shape, all the while making it more robust. That's fine. They have a development plan. There is nothing wrong with that. Today, while the SDK is still not public and remains under non-disclosure agreement, AMD is accepting applications from developers who want to enter the program.
If you want to develop a Mantle application or game, follow the instructions at their website for AMD to consider you. They consider it stable, performant, and functional enough for "a broader audience in the developer community".
AMD cites 40 developers already registered, up from seven (DICE, Crytek, Oxide, etc.).
If you are not a developer, then this news really does not mean too much to you -- except that progress is being made.
Subject: Editorial, General Tech, Graphics Cards | April 30, 2014 - 10:05 AM | Ryan Shrout
Tagged: hadron air, hadron, gtx 750, giveaway, evga, contest
Congrats to our winner: Pierce H.! Check back soon for more contests and giveaways at PC Perspective!!
In these good old United States of America, April 15th is a trying day. Circled on most of our calendars is the final deadline for paying up your bounty to Uncle Sam so we can continue to have things like freeway systems and universal Internet access.
But EVGA is here for us! Courtesy of our long time sponsor you can win a post-Tax Day prize pack that includes both an EVGA Hadron Air mini-ITX chassis (reviewed by us here) as well as an EVGA GeForce GTX 750 graphics card.
Nothing makes paying taxes better than free stuff that falls under the gift limit...
With these components under your belt you are well down the road to PC gaming bliss, upgrading your existing PC or starting a new one in a form factor you might not have otherwise imagined.
Competing for these prizes is simple and open to anyone in the world, even if you don't suffer the same April 15th fear that we do. (I'm sure you have your own worries...)
- Fill out the form at the bottom of this post to give us your name and email address, in addition to the reasons you love April 15th! (Seriously, we need some good ideas for next year to keep our heads up!) Note that leaving a standard comment on the post does not enter you in the contest, though you are welcome to comment too.
- Stop by our Facebook page and give us a LIKE (I hate saying that), head over to our Twitter page and follow @pcper and heck, why not check out our many videos and subscribe to our YouTube channel?
- Why not do the same for EVGA's Facebook and Twitter accounts?
- Wait patiently for April 30th when we will draw and update this news post with the winner's name and tax documentation! (Okay, probably not that last part.)
A huge thanks goes out to friends and supporters at EVGA for providing us with the hardware to hand out to you all. If it weren't for sponsors like this PC Perspective just couldn't happen, so be sure to give them some thanks when you see them around the In-tar-webs!!