Subject: General Tech, Graphics Cards | October 17, 2013 - 01:45 AM | Ryan Shrout
Tagged: video, nvidia, live blog, live
Last month it was AMD hosting the media out in sunny Hawaii for a #GPU14 press event. This week NVIDIA is hosting a group of media in Montreal for a two-day event built around "The Way It's Meant to be Played".
NVIDIA promises to have some very impressive software and technology demonstrations on hand, and you can take it all in with our live blog and (hopefully) live stream on our PC Perspective Live! page!
It starts at 10am ET / 7am PT so join us bright and early!! And don't forget to stop by tomorrow for an even more exciting Day 2!!
Subject: General Tech, Graphics Cards | October 16, 2013 - 10:00 PM | Scott Michaud
Tagged: FPGA, Altera
(Update 10/17/2013, 6:13 PM) Apparently I messed up inputting this into the website last night. To compare FPGAs with current hardware, the Altera Stratix 10 is rated at more than 10 TeraFLOPs compared to the Tesla K20X at ~4 TeraFLOPs or the GeForce Titan at ~4.5 TeraFLOPs. All figures are single precision. (end of update)
Field Programmable Gate Arrays (FPGAs) are not general purpose processors; they are not designed to perform any random instruction at any random time. If you have a specific set of instructions that you want performed efficiently, you can spend a couple of hours compiling your function(s) to an FPGA which will then be the hardware embodiment of your code.
This is similar to an Application-Specific Integrated Circuit (ASIC) except that, for an ASIC, it is the factory who bakes your application into the hardware. Many (actually, to my knowledge, almost all) FPGAs can even be reprogrammed if you can spare those few hours to configure them again.
Altera is a manufacturer of FPGAs and one of the few companies allowed access to Intel's 14nm fabrication facilities. Rahul Garg of AnandTech recently published a story which discussed compiling OpenCL kernels to FPGAs using Altera's compiler.
Now this is pretty interesting.
The design of OpenCL splits work between "host" and "kernel". The host application is written in some arbitrary language and follows typical programming techniques. Occasionally, the application will run across a large batch of instructions. A particle simulation, for instance, will require position information to be computed. Rather than having the host code loop through every particle and perform some complex calculation, what happens to each particle could be "a kernel" which the host adds to the queue of some accelerator hardware. Normally, this is a GPU with its thousands of cores chunked into groups of usually 32 or 64 (vendor-specific).
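To make the host/kernel split concrete, here is a minimal sketch in plain Python, not real OpenCL: the per-particle function stands in for a kernel, and the list comprehension stands in for enqueueing it on an accelerator. In real OpenCL the kernel body would be C-like source compiled with clBuildProgram and dispatched once over all particles with clEnqueueNDRangeKernel; everything else here is illustrative.

```python
# Illustrative sketch only (plain Python, not an OpenCL binding).
# `update_position` plays the role of "a kernel": the small unit of work
# the host hands off to accelerator hardware for every particle at once.

def update_position(pos, vel, dt):
    """The 'kernel': advance one particle's position by vel * dt."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

# "Host" side: set up the data, then enqueue the kernel per work-item.
# On a GPU this loop is replaced by a single enqueue call; the device
# runs thousands of these invocations in parallel.
particles = [((0.0, 0.0), (1.0, 2.0)),
             ((1.0, 1.0), (0.5, -0.5))]
dt = 0.1
new_positions = [update_position(pos, vel, dt) for pos, vel in particles]
print(new_positions)
```

The point of the pattern is that the kernel is a fixed, known piece of code, which is exactly what makes it a candidate for being baked into an FPGA.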
An FPGA, on the other hand, can lock itself to the specific set of instructions. It can decide to, within a few hours, configure some arbitrary number of compute paths and just churn through each kernel call until it is finished. The compiler knows exactly the application it will need to perform while the host code runs on the CPU.
This is obviously designed for enterprise applications, at least as far into the future as we can see. Current models are apparently priced in the thousands of dollars but, as the article points out, have the potential to out-perform a 200W GPU at just a tenth of the power. This could be very interesting for companies, perhaps a film production house, that want to install accelerator cards for subdivision surfaces or ray tracing but would like to develop the software in-house and occasionally update their code after business hours.
Regardless of the potential market, an FPGA-based add-in card simply makes sense for OpenCL and its architecture.
Subject: Graphics Cards | October 14, 2013 - 08:52 PM | Ryan Shrout
Tagged: xbox one, microsoft, Mantle, dx11, amd
Microsoft posted a new blog on its Windows site that discusses some of the new features of the latest DirectX on Windows 8.1 and the upcoming Xbox One. Of particular interest was a line that confirms what I have said all along about the much-hyped AMD Mantle low-level API: it is not compatible with Xbox One.
We are very excited that with the launch of Xbox One, we can now bring the latest generation of Direct3D 11 to console. The Xbox One graphics API is “Direct3D 11.x” and the Xbox One hardware provides a superset of Direct3D 11.2 functionality. Other graphics APIs such as OpenGL and AMD’s Mantle are not available on Xbox One.
What does this mean for AMD? Nothing really changes except some of the common online discussion about how easy it would now be for developers to convert games built for the console to the AMD-specific Mantle API. AMD claims that Mantle offers a significant performance advantage over DirectX and OpenGL by giving developers that choose to implement support for it closer access to the hardware without much of the software overhead found in other APIs.
This is what Mantle does. It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software. This lowers memory and CPU usage, it decreases latency, and because there are fewer “moving parts” AMD claims that they can do 9x the draw calls with Mantle as compared to DirectX. This is a significant boost in overall efficiency. Before everyone gets too excited, we will not see a 9x improvement in overall performance with every application. A single HD 7790 running in Mantle is not going to power 3 x 1080P monitors in Eyefinity faster than a HD 7970 or GTX 780 (in Surround) running in DirectX. Mantle shifts the bottleneck elsewhere.
I still believe that AMD Mantle could bring interesting benefits to the AMD Radeon graphics cards on the PC but I think this official statement from Microsoft will dampen some of the over excitement.
Also worth noting is this comment about the DX11 implementation on the Xbox One:
With Xbox One we have also made significant enhancements to the implementation of Direct3D 11, especially in the area of runtime overhead. The result is a very streamlined, “close to metal” level of runtime performance. In conjunction with the third generation PIX performance tool for Xbox One, developers can use Direct3D 11 to unlock the full performance potential of the console.
So while Windows and the upcoming Xbox One will share an API there will still be performance advantages for games on the console thanks to the nature of a static hardware configuration.
Subject: General Tech, Graphics Cards | October 11, 2013 - 04:47 PM | Scott Michaud
Tagged: amd, R9 290X
When you deal with the web, almost nothing is hidden. The browsers (and some extensions) have access to just about everything on screen and off. Anyone browsing a site or app can inspect the contents and even modify it. That last part could be key.
TechPowerUp got their hands on a screenshot of developer tools inspecting the Newegg website. A few elements are hidden on the right hand side within the "Coming Soon" container. One element, id of "singleFinalPrice", is set "visibility: hidden;" with a price as its contents. In the TechPowerUp screenshot, this price is listed as "729.99" USD.
Of course, good journalism means confirming this yourself. As of the time I checked, the value was listed as "9999.99" USD. This means one of two things: either Newegg changed the value to a placeholder after the leak was discovered, or the source of TechPowerUp's screenshot used those same developer tools to modify the content. It is actually a remarkably easy thing to do... here I changed the value to 99 cents by right-clicking on the element and modifying the HTML.
No, the R9 290X is not 99 cents.
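What TechPowerUp spotted can be reproduced with any HTML parser. The snippet below is a mock-up of the structure described in the article (an element with id "singleFinalPrice" hidden via CSS), not Newegg's actual page source, parsed with Python's standard html.parser to show that a "hidden" element is still fully present in the markup:

```python
# Sketch using a mocked-up fragment; the element id comes from the article,
# the surrounding markup is an assumption for illustration.
from html.parser import HTMLParser

MOCK_HTML = """
<div class="coming-soon">
  <span id="singleFinalPrice" style="visibility: hidden;">729.99</span>
</div>
"""

class HiddenPriceFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.price = None

    def handle_starttag(self, tag, attrs):
        # "visibility: hidden" only affects rendering; the element and its
        # text are still right there in the page source.
        if dict(attrs).get("id") == "singleFinalPrice":
            self.in_price = True

    def handle_data(self, data):
        if self.in_price and data.strip():
            self.price = data.strip()
            self.in_price = False

finder = HiddenPriceFinder()
finder.feed(MOCK_HTML)
print(finder.price)  # -> 729.99
```

The same visibility works in reverse, of course: anyone with developer tools can edit that text to any value before taking a screenshot.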
So I would take these screenshots with a grain of salt while we only have access to one source. That said, $729.99 does sound like a reasonable price point for AMD to release a Titan competitor. Of course, that is exactly what a hoaxer would want.
But, as it stands right now, I would not get your hopes up. An MSRP of ~$699-$749 USD sounds legitimate but we do not have even a second source, at the moment, to confirm that. Still, this might be something our readers would like to know.
Subject: General Tech, Graphics Cards, Systems | October 10, 2013 - 06:59 PM | Scott Michaud
Tagged: amd, nvidia, Intel, Steam Machine
This should be little-to-no surprise for viewers of our podcast, where this story was already discussed, but Valve has confirmed AMD and Intel graphics are compatible with Steam Machines. Doug Lombardi of Valve sent the statement by email to, apparently, multiple outlets including Forbes and Maximum PC.
Last week, we posted some technical specs of our first wave of Steam Machine prototypes. Although the graphics hardware that we've selected for the first wave of prototypes is a variety of NVIDIA cards, that is not an indication that Steam Machines are NVIDIA-only. In 2014, there will be Steam Machines commercially available with graphics hardware made by AMD, NVIDIA, and Intel. Valve has worked closely together with all three of these companies on optimizing their hardware for SteamOS, and will continue to do so into the foreseeable future.
Ryan and the rest of the podcast crew found the whole situation "odd". They could not understand why AMD referred the press to Doug Lombardi rather than circulating a canned statement from him. It was also unclear why NVIDIA has an exclusive on the beta program when AMD hardware will be commercially available in 2014.
As I said in the initial post: for what seems to be a deliberate refusal to commit to a specific hardware spec, why limit the prototypes to a single graphics vendor?
Subject: Graphics Cards | October 10, 2013 - 03:29 PM | Jeremy Hellstrom
Tagged: radeon, r9 270x, GCN, sapphire, toxic edition, factory overclocked
We saw the release of the reference R9s yesterday and today we get to see the custom models such as the Sapphire TOXIC R9 270X which Legit Reviews just finished benchmarking. The TOXIC sports a 100MHz overclock on both GPU and RAM as well as a custom cooler with three fans. While it remains a two slot GPU it is longer than the reference model and requires a full foot of clearance inside the case. Read on to see what kind of performance boost you can expect and how much further you can push this card.
"When it comes to discrete graphics, the $199 price point is known as the gamer’s sweet spot by both AMD and NVIDIA. This is arguably the front line in the battle for your money when it coming to gaming graphics cards. The AMD Radeon R9 270X is AMD’s offering to gamers at this competitive price point. Read on to see how it performs!"
Here are some more Graphics Card articles from around the web:
- Gigabyte Radeon R9 270X WindForce OC 2GB @ eTeknix
- ASUS Radeon R9 270X Direct CU II TOP 2GB @ eTeknix
- MSI Radeon R9 270X Hawk Edition Video Card Review @HiTech Legion
- Gigabyte R9 270X Windforce @ LanOC Reviews
- Sapphire R9 280X Toxic Edition OC 3GB @ Kitguru
- MSI Radeon R9 270X GAMING 2GB @ Benchmark Reviews
- AMD Radeon R9 280X / R9 270X from ASUS and MSI @ Hardware.info
- ASUS R9 270X Direct CU II TOP @ Kitguru
- Gigabyte Radeon R9 270X OC 2GB Video Card Review @ HiTech Legion
- ASUS R9 280X Matrix Platinum @ Kitguru
- Will it Crossfire? R9 280X & HD 7970 Scaling Tested @ Hardware Canucks
- AMD Radeon R9 280X Graphics Card Review @ Techgage
- AMD Radeon R7 260X Versus NVIDIA GeForce GTX 650 Ti Boost @ Legit Reviews
Subject: Editorial, General Tech, Graphics Cards | October 10, 2013 - 03:28 PM | Ryan Shrout
Tagged: podcast, nvidia, contest, batman arkham origins
UPDATE: We picked our winner for week 1 but now you can enter for week 2!!! See the new podcast episode listed below!!
Back in August NVIDIA announced that they would be teaming up with Warner Bros. Interactive to include copies of the upcoming Batman: Arkham Origins game with select NVIDIA GeForce graphics cards. While that's great and all, wouldn't you rather get one for free next week from PC Perspective?
Great, you're in luck! We have a handful of keys to give out to listeners and viewers of the PC Perspective Podcast. Here's how you enter:
- Listen to or watch episode #272 of the PC Perspective Podcast and listen for the "secret phrase" as mentioned in the show!
- Subscribe to our RSS feed for the podcast or subscribe to our YouTube channel.
- Fill out the form at the bottom of this podcast page with the "secret phrase" and you're entered!
I'll draw a winner before the next podcast and announce it on the show! We'll give away one copy in each of the next two weeks! Our thanks go to NVIDIA for supplying the Batman: Arkham Origins keys for this contest!!
No restrictions on winning, so good luck!!
Subject: Graphics Cards | October 9, 2013 - 12:32 PM | Jeremy Hellstrom
Tagged: R9 280X DirectCU II, R9 270X DirectCU II, R7 260X DirectCU II, R7 250, R7 240, Matrix R9 280X, asus
Editor's Note: Be sure to check out our full review of the new AMD Radeon R9 280X, R9 270X and R7 260X that includes the ASUS overclocked 280X!
Fremont, CA (October 8, 2013) - ASUS today announces the launch of its R9 200 and R7 200 Series graphics cards, powered by the latest AMD Radeon R9 and R7 series graphics-processing units (GPUs). As dedicated gamers have come to expect from Republic of Gamers (ROG), the new Matrix R9 280X graphics card features exclusive technologies, overclocked core speeds and performance-enhancing options.
The new R9 280X, R9 270X and R7 260X DirectCU II models are overclocked to perform faster than reference designs while also featuring DIGI+ voltage-regulator modules (VRMs) for a smooth and stable power supply and GPU Tweak software for tuning the graphics card. The new R7 250 and R7 240 cards benefit from many exclusive ASUS technologies and tools including Super Alloy Power components for superior stability, dust-proof fans for improved card lifespan and GPU Tweak.
Matrix — Push the limits
The Matrix R9 280X graphics cards benefit from a copper-based thermal design that conducts heat away from the GPU with greater efficiency. Compared to reference Radeon R9 280X designs, ROG Matrix R9 280X cards operate up to 20% cooler and three times (3X) quieter. Coupled with dual 100mm cooling fans, gamers can enjoy ultra-cool and stable game play with minimal noise. The Matrix R9 280X Platinum Edition’s core runs at a blistering 1100MHz — 100MHz higher than reference.
The Matrix R9 280X graphics card allows for overclocking on a purely hardware level, with VGA Hotwire connections and TweakIt for voltage control, a Turbo Fan button to crank the fan up to 100% and a Safe Mode button to instantly return the GPU to its factory BIOS. It also includes DIGI+ voltage-regulator modules (VRMs) for smooth and stable power, and GPU Tweak tuning software that allows users to squeeze the last drop of performance out of their graphics card.
DirectCU II — Faster, quieter and cooler, even in the heat of battle
ASUS DirectCU II cooling technology places highly conductive copper cooling pipes in direct contact with a card’s GPU so heat dissipates quickly and with greater efficiency. Compared with reference Radeon R9 and R7 designs, ASUS R9 280X, R9 270X and R7 260X cards with DirectCU II allow the latest AMD Radeon GPUs to run up to 20% cooler and three times (3X) quieter, so gamers can enjoy ultra-stable play with minimal noise.
ASUS R9 280X, R9 270X and R7 260X are all equipped with exclusive ASUS DIGI+ VRM with Super Alloy Power technology. Paired with Super Alloy Power solid-state capacitors, concrete-core chokes and hardened MOSFETs, DIGI+ VRM delivers multi-phase power and digital voltage regulation for increased graphics card stability and cleaner power, even during the most intense GPU activities.
The fans of ASUS R9 280X, R9 270X, and R7 260X DirectCU II are all dust-proof, reducing debris accumulation and retaining peak performance over a longer lifespan. In addition, ASUS R9 280X DirectCU II features exclusive CoolTech fan. This innovative fan’s hybrid blade and bearing design, with inner radial blower and outer flower-shaped blades, delivers multi-directional airflow to accelerate heat removal and maintain cooler and quieter operation.
R7 250 and R7 240 — Super Alloy Power components and dust-proof fans for superior stability and longevity
The ASUS R7 250 and R7 240 graphics cards both include exclusive Super Alloy Power technology. Super Alloy Power’s solid-state capacitors and hardened MOSFETs all withstand much greater stress and heat due to the application of specially-formulated materials — increasing reliability and overall card lifespan. Compared with reference designs, ASUS R7 250 and R7 240’s Super Alloy Power components deliver 35%-cooler operation and a lifespan that’s up to two-and-a-half times (2.5X) longer.
Additionally, the fans on the ASUS R7 250 and R7 240 are extremely resilient and dust-proof. These dust-proof fans ensure that even the smallest airborne particles are barred, reducing debris accumulation and retaining peak performance over a longer lifespan — typically improving lifespan by up to 25% compared to reference fans.
GPU Tweak — Easy overclocking and online streaming
The included ASUS GPU Tweak utility gives R9 280X, R9 270X, R7 260X, R7 250, and R7 240 users intuitive control over GPU and video-memory clock speeds and voltages, cooling-fan speeds and power-consumption thresholds – so they can overclock easily and with confidence. Users can create multiple performance profiles for on-demand switching of custom settings for different games.
GPU Tweak now includes Live Streaming, an online-streaming tool that lets users share on-screen action over the internet in real time – so others can watch live gaming sessions. It is even possible to add scrolling text, pictures and webcam images to the streaming window.
Subject: Graphics Cards | October 8, 2013 - 05:30 PM | Jeremy Hellstrom
Tagged: amd, GCN, graphics core next, hd 7790, hd 7870 ghz edition, hd 7970 ghz edition, r7 260x, r9 270x, r9 280x, radeon, ASUS R9 280X DirectCU II TOP
AMD's rebranded cards have arrived, though with a few improvements to the GCN architecture we already know so well. This particular release seems to be focused on price for performance, which is certainly not a bad thing in these uncertain times. The 7970 GHz Edition launched at $500 while the new R9 280X will arrive at $300, a rather significant price drop and one we hope doesn't damage AMD's bottom line too badly in the coming quarters. [H]ard|OCP chose the ASUS R9 280X DirectCU II TOP to test, with a custom PCB from ASUS and a mild overclock which helped it pull ahead of the 7970 GHz Edition. AMD has tended to lead off new graphics card families with the low and midrange models; we have yet to see the top-of-the-line R9 290X in action.
Ryan's review, including frame pacing, can be found right here.
"We evaluate the new ASUS R9 280X DirectCU II TOP video card and compare it to GeForce GTX 770 and Radeon HD 7970 GHz Edition. We will find out which video card provides the best value and performance in the $300 price segment. Does it provide better performance a than its "competition" in the ~$400 price range?"
Here are some more Graphics Card articles from around the web:
- AMD's Radeon R7 260X @ The Tech Report
- AMD's Radeon R9 280X and 270X @ The Tech Report
- AMD Radeon R9 270X & R7 260X Review @ Neoseeker
- AMD Radeon R7 260X 2GB @ eTeknix
- AMD Radeon R9 270X 2GB @ eTeknix
- AMD Radeon R7 260X, R9 270X and R9 280X @ Hardware.info
- Sapphire AMD Radeon R9 280X Vapor-X OC 3GB @ eTeknix
- Radeon R9 270X and R7 260X @ TechSpot
- AMD Radeon R9 270X & R7 260X @ Legion Hardware
- AMD Radeon R9 270X & R7 260X Review @ Hardware Canucks
- AMD Radeon R9 280X 3GB Review @ Hardware Canucks
- Sapphire R9 280X Vapor X @ Kitguru
- AMD R7 260X @ Kitguru
- AMD R9 270X @ Kitguru
Subject: General Tech, Graphics Cards, Cases and Cooling, Systems | October 4, 2013 - 07:19 PM | Scott Michaud
Tagged: valve, Steam Machine
Well, that did not take long.
Valve announced the Steam Machines barely over a week ago without being able to provide hardware specifications. While none of these prototypes will be available for purchase (the honor of taking your money is reserved for system builders and OEMs), Valve has now announced hardware specifications for its beta device.
The raw specifications, or range of them, are:
- GPU: NVIDIA GeForce Titan through GeForce GTX 660 (780 and 760 possible)
- CPU: Intel i7-4770 or i5-4570, or i3-something
- RAM: 16GB DDR3-1600 (CPU), 3GB GDDR5 (GPU)
- Storage: 1TB/8GB Hybrid SSHD
- Power Supply: 450W
- Dimensions: approx. 12" x 12.4" x 2.9"
Really the only reason I could see for the spread of performance is to avoid pressuring developers into targeting a single reference design. That makes it odd that every reference design contains an NVIDIA GPU; you would expect a company trying to encourage an open mind to avoid such a glaring omission. I could speculate about driver compatibility with SteamOS and media streaming, but even that feels far-fetched.
On the geeky side of things: the potential for a GeForce Titan is fairly awesome and, along with the minimum GeForce 660, is the first sign that I might be wrong about this whole media center extender thing. My expectation was that Valve would acknowledge some developers might want a streaming-focused device.
Above all, I somewhat hope Valve is a bit more clear to consumers with their intent... especially if their intent is to be unclear with OEMs for some reason.
Subject: General Tech, Graphics Cards | October 2, 2013 - 09:03 PM | Scott Michaud
Tagged: amd, linux
Last week, NVIDIA published documentation for Nouveau to heal wounds with the open source community. AMD has had a better reputation and intends to maintain it. On Tuesday, Alex Deucher published 9 PDF documents totaling 1,178 pages of register and acceleration documentation, along with 18 pages of HDA GPU audio programming details, compared to the 42 pages NVIDIA published.
Sure, a page to page comparison is meaningless, but it is clear AMD did not want to be outdone. This is especially true when you consider that some of these documents date back to early 2009. Still, reactionary or not, the open source community should accept the assistance with open arms... and open x86s?
I should note that these documents do not cover Volcanic Islands; they are for everything between Evergreen and Sea Islands.
Subject: General Tech, Graphics Cards | October 2, 2013 - 02:20 PM | Jeremy Hellstrom
Tagged: gaming, ARMA III
Forget Crysis: if you want to hammer your PC, pick up ARMA III and try turning up the settings! Even an i7-3770K @ 4.8GHz and GTX 780s in SLI struggle to render this game with all the graphical bells and whistles turned on. The close-up landscapes and objects are gorgeous with high quality textures, but to truly get into the feel of the game you need to be able to turn up the view distance and the number of displayed objects, as you can see from [H]ard|OCP's screenshots below. [H] spent a bit of time breaking down the best playable settings for numerous GPUs from NVIDIA and AMD, as well as showing you the impact that MSAA and PPAA have on visual quality and on your PC's performance. If you want to show off the superiority of a high end gaming machine then this is the game for you.
"ARMA III is our focus point for today. It features a large open world environment designed on a massive continent measuring 270 square kilometers. To go along side this massive continent is a max visibility range of 20km. Combine this with ARMA III's impressive looking graphics and we have a game that demands performance."
Here is some more Tech News from around the web:
- What Does It Meaaaaan: Half-Life 3 Trademarked @ Rock, Paper, SHOTGUN
- See CDP Explain The Mad Scope Of The Witcher 3 @ Rock, Paper, SHOTGUN
- GTA 5 Online goes live @ The Inquirer
- AMD spent as much as $8 million on EA/DICE Battlefield 4 deal @ HEXUS
- Co-op Sandbox FTL? – PULSAR Is The Most Exciting Game @ Rock, Paper, SHOTGUN
- EA SPORTS Madden NFL 25 @ Benchmark Reviews
Subject: Graphics Cards | September 30, 2013 - 06:30 PM | Jeremy Hellstrom
Tagged: graphics drivers, catalyst 13.10, beta, windows, linux
- Includes 32-bit single GPU and CrossFire game profile for Battlefield 4
- Total War: Rome 2 CrossFire profile update
- CrossFire frame pacing improvements for CPU-bound applications
- Resolves image corruption seen in Autodesk Inventor 2014
- Resolves intermittent black screen when resuming from a S3/S4 sleep state if the display is unplugged during the sleep state on systems supporting AMD Enduro Technology
- Updated AMD Enduro Technology application profiles
Profile highlights:
- Total War: Rome 2
- Battlefield 4
- Saints Row 4
- Splinter Cell Blacklist
- FIFA 14
Resolved issue highlights:
- System hang when running startx after setting up an Eyefinity desktop
- Permission issue with procfs on kernel 3.10
- System hang observed while running disaster stress test on Ubuntu 12.10
- Hang is observed when running Unigine on Linux
- AC/DC switching is not automatically detected
- Laptop backlight adjustment is broken
- Glxtest failures observed in log file with forcing on Anti-Aliasing
- Cairo-dock is broken
- Severe desktop corruption is observed when Compiz is enabled in certain cases
- glClientWaitSync is waiting even when timeout is 0
- C4Engine gets corruption with GL_ARB_texture_array enabled
Subject: General Tech, Graphics Cards | September 26, 2013 - 03:35 AM | Scott Michaud
Tagged: frame rating, frame pacing, amd
Scott Wasson of The Tech Report published an interview with Raja Koduri, head of Graphics Hardware and Software Development at AMD, just a few hours ago. Part of the interview discussed the frame pacing issues that we, as well as The Tech Report, have covered over the last year. In short, the news seems good for owners of Radeon graphics cards, future and even current.
The "Hawaii" powered Radeon R9 290 and R9 290X graphics cards are expected to handle CrossFire pacing acceptably at launch. Clearly, if there is ever a time to fix the problem, it would be in new hardware. Still, this is good news for interested customers; if all goes to plan, you are likely going to have a good experience out of the box.
Current owners of GCN-based video cards, along with potential buyers of the R9 280X and lower upcoming cards, will apparently need to wait for AMD to release a driver to fix these issues. However, this driver is not far off: Koduri (it is unclear whether this was on or off the record) intends for an autumn release. This driver is expected to cover frame pacing issues for CrossFire, Eyefinity, and 4K.
Koduri does believe the CrossFire issues were unfortunate and expresses a desire to fix the issue for his customers.
Keep checking PC Perspective for more information as it comes out!
Editor's Note: I just spoke with Raja Koduri as well and he basically reiterated everything that Scott noted in his story on The Tech Report. The upcoming 290X will have frame pacing fixed at Eyefinity and 4K resolutions at launch, while the cards below it in the R9 series, and users of Radeon HD 7000 cards (and likely beyond), will need some more time before the driver is ready. I'll be able to talk quite a bit more about the changes to BOTH architectures very shortly, so stay tuned for that.
Subject: General Tech, Graphics Cards, Shows and Expos | September 25, 2013 - 05:23 PM | Scott Michaud
Tagged: radeon, R9 290X, R9, R7, GPU14, amd
The next generation of AMD graphics processors is being announced this afternoon. AMD carefully mentioned this event is not a launch. We do not yet know, although I hope we will learn today, when you can give them your money.
When you can, you will have five products to choose from:
- R7 250
- R7 260X
- R9 270X
- R9 280X
- R9 290X
AMD only provides 3DMark Fire Strike scores for performance. I assume they are using the final score and not the "graphics score", although they were unclear.
The R7 250 is the low end card of the group with 1GB of GDDR5. Performance, according to 3DMark scores (>2000 on Fire Strike), is expected to be about two-thirds of what an NVIDIA GeForce GTX 650 Ti can deliver. Then again, that card retails for about $130 USD while the R7 250 has an expected retail price of less than $89 USD. This is a pretty decent offering which can probably play Battlefield 3 at 1080p if you keep the graphics quality settings somewhere around "medium". This is just my estimate, of course.
The R7 260X is the next level up. The RAM has been doubled over the R7 250 to 2GB of GDDR5 and its 3DMark score almost doubled, too (>3700 on Fire Strike). This puts it almost smack dab atop the Radeon HD 6970 while coming in about $20-30 USD cheaper: the R7 260X is expected to retail for $139 USD. A good price cut while keeping the architecture up to date.
The R9 270X is the low end of the high end parts. With 2GB of GDDR5 and a 3DMark Fire Strike score of >5500, it is aimed at the GeForce GTX 670. The R9 270X will retail for around $199 USD, which is about $120 USD cheaper than NVIDIA's offering.
The R9 280X should be pretty close to the 7970 GHz Edition. It will be about $90 cheaper, with an expected retail price of $299 USD. It also has a bump in frame buffer over the lower-tier R9 270X, containing 3GB of GDDR5.
Not a lot is known about the top end R9 290X, except that it will be the first gaming GPU to cross 5 TeraFLOPs of compute performance. For comparison, the GeForce Titan has a theoretical maximum of about 4.5 TeraFLOPs.
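As a sanity check on those figures: theoretical single-precision throughput is just shader count × clock × 2 (one fused multiply-add counts as two FLOPs per cycle). A quick sketch using the Titan's published specs (2,688 CUDA cores at an 837 MHz base clock) lands right at the ~4.5 TFLOPs quoted above; the 290X's shader count was not public at the time of writing, so its 5 TFLOPs claim cannot be checked the same way.

```python
# Theoretical peak single-precision compute: shaders x clock x FLOPs/cycle.
def peak_tflops(shaders, clock_mhz, flops_per_cycle=2):
    return shaders * clock_mhz * 1e6 * flops_per_cycle / 1e12

# GeForce Titan: 2688 CUDA cores, 837 MHz base clock (published specs).
titan = peak_tflops(2688, 837)
print(round(titan, 2))  # ~4.5 TFLOPs, matching the figure above
```

Crossing 5 TFLOPs would therefore imply some combination of more shaders and higher clocks than the Titan, whatever the exact split turns out to be.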
If you are interested in the R9 290X and Battlefield 4, you will be able to pre-order a limited edition package containing both products. Pre-orders open "from select partners" October 3rd. For how much? Who knows.
We will keep you informed as we are informed. Also, the announcement is still going on, so tune in!
Subject: Graphics Cards | September 25, 2013 - 03:08 PM | Ryan Shrout
Tagged: live, hawaii, amd
In case you didn't know, AMD is hosting a live stream to talk about the new AMD Hawaii series of GPUs. You should definitely be on our PC Perspective Live! page right now to participate and watch!
Subject: General Tech, Graphics Cards | September 25, 2013 - 02:59 AM | Scott Michaud
Tagged: nvidia, Nouveau, linux
AMD committed numerous updates to the open source driver community three months ago and has otherwise assisted the Linux community in the past. The same has not been true for NVIDIA. Despite a respectable (albeit lacking compared to Windows) proprietary driver for Linux, this GPU vendor was not adored by the community. They have not been accused of malice; it just seems to be about control over both the end-user experience and, of course, their secret sauce.
I, obviously, do not have a crystal ball of fortune telling (the journalist house of auction ran out and the gift shop is just too expensive), so the future extent of NVIDIA's involvement is anyone's guess. For now, their assistance includes 42 pages of Device Control Block documentation and proprietary developers answering questions on the Nouveau mailing list.
Many, from Ars Technica to our staff discussions at PC Perspective, note how the change of heart aligns with the SteamOS announcement. I do not really believe these events are related if only because I doubt NVIDIA would wait to contact developers until Valve spoke up. I would have to expect that SteamOS would not be a surprise to NVIDIA especially after Gabe Newell discussed Maxwell virtualization all the way back at CES.
You would think those conversations would have come about while Valve was working with NVIDIA on its game streaming technology; you know, allowing a single desktop to serve multiple games across multiple devices. Even still, you would think NVIDIA would just put even more effort into their proprietary driver rather than help Nouveau.
Either way, we will keep an ear out for NVIDIA involvement with the open source community.
Subject: Graphics Cards | September 21, 2013 - 11:58 PM | Ryan Shrout
Tagged: radeon, leak, hawaii, amd
What better way to spend your weekend than to comb over photos and graphs to try to figure out everything you can about the upcoming AMD Hawaii GPU, just days before AMD announces it during a live stream? A collection of leaks including pictures and benchmarks made their way onto the web (they have a way of doing that) from our friends in China. I spotted a post from our buddy Hassan at WCCFTech that detailed much of the information available so far.
The first picture was actually posted on Twitter by Johan Andersson, lead developer at DICE, with a not-too-vague comment about Hawaii and Volcanic Islands.
— Johan Andersson (@repi) September 21, 2013
A website with the convenient name of udteam.tistory.com posted images with quite a bit more detail including some with the cooler removed.
The GPU here is apparently going to be called the AMD Radeon R9-290X as AMD shifts to a completely new naming scheme with this generation. We already discussed an interview with AMD's Matt Skynner in which he said the die of Hawaii was 30% smaller than NVIDIA's GTX TITAN and would be more efficient per die area than the GeForce option.
Other specifications that have been compiled (that are still rumors really at this point) include a 512-bit memory interface (quad 128-bit controllers more than likely based on the memory layout), 4GB of GDDR5, 5+1 phase power and 8+6 pin power connections (very reasonable for a flagship). The die size is being estimated at 424 mm2 (larger than Radeon HD 7970 but smaller than TITAN) and price estimates are sitting at $599.
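If the 512-bit memory interface rumor holds, the peak memory bandwidth is simple arithmetic: bus width in bytes times the effective GDDR5 data rate. A quick sketch, with the caveat that the memory clock is an assumption on our part (the leaks do not specify one), compared against the known HD 7970 GHz Edition configuration for scale:

```python
def gddr5_bandwidth_gbps(bus_width_bits, effective_rate_gtps):
    """Peak memory bandwidth in GB/s: bytes per transfer times transfers per second."""
    return bus_width_bits / 8 * effective_rate_gtps

# Rumored Hawaii bus width; the 5 Gbps effective rate is an assumed, conservative figure.
hawaii = gddr5_bandwidth_gbps(512, 5.0)
# Radeon HD 7970 GHz Edition for reference: 384-bit bus at 6 Gbps.
tahiti = gddr5_bandwidth_gbps(384, 6.0)

print(f"Hawaii (rumored): {hawaii:.0f} GB/s")  # 320 GB/s
print(f"HD 7970 GHz Ed.:  {tahiti:.0f} GB/s")  # 288 GB/s
```

Even at a modest data rate, the wider bus would give Hawaii a healthy bandwidth advantage over Tahiti without needing exotic memory clocks, which may be the point of the quad 128-bit controller layout.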
We even found a couple of benchmarks claiming to show performance results of this new beast of a GPU. Though the name of the card on the result is blocked out, we are supposed to believe these are results from the AMD R9-290X, and they are impressive if true. In both of the graphs here, the new Hawaii GPU is faster than the $999 GeForce GTX TITAN at a significantly lower price!
All signs point to AMD's next 28nm GPU being a high end gamer's dream graphics card. That is, IF all these rumors and leaks turn out to be accurate. We still don't know key data points like the stream processor count, but we'll know it all in due time. (Maybe next week?) We still have concerns about the status of AMD's multi-GPU fixes, but if the company can get that worked out in time for this release, I expect AMD to make a big splash this fall with a revamped Radeon brand.
Subject: Graphics Cards | September 19, 2013 - 05:55 PM | Jeremy Hellstrom
Tagged: nvidia, msi, 650ti boost, Twin Frozr
To give you the full name, the MSI N650Ti TwinFrozr 2GD5/OC Boost Edition is $170 after MIR, whereas the HD 7850 that [H]ard|OCP chose as a comparison can be picked up for a mere $130 after rebate. That price difference means the NVIDIA card has to perform quite a bit better than the AMD card to win from a performance-per-dollar perspective. From the numbers in the review you can clearly see that the 650 Ti Boost is the better performing card, especially with the respectable overclock that [H] managed, which makes it the best card under $200. On the other hand, if your budget is tight, the performance gap is not as big as the price gap, which might make that HD 7850 the better choice.
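The value comparison boils down to frames per second per dollar. A minimal sketch of that metric, using the after-rebate prices from the review but with hypothetical placeholder FPS figures (the real numbers vary by game and are in [H]ard|OCP's charts):

```python
def fps_per_dollar(avg_fps, price_usd):
    """Simple value metric: higher is better."""
    return avg_fps / price_usd

# Prices after rebate, per the review; FPS values here are illustrative only.
cards = {
    "MSI GTX 650 Ti Boost OC": fps_per_dollar(60.0, 170.0),
    "Radeon HD 7850":          fps_per_dollar(50.0, 130.0),
}
for name, value in sorted(cards.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {value:.3f} FPS/$")
```

With these placeholder numbers the slower HD 7850 actually comes out ahead on FPS per dollar, which illustrates the article's point: a faster card is not automatically the better value when the price gap exceeds the performance gap.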
By the way, that NVIDIA card has a Boost clock, which means it might steal some of your megahertz away when it gets too hot. That is apparently a horrible experience, and if you somehow disable that feature and cook your GPU ... obviously that is not your fault.
"Today we evaluate MSI's high-end GeForce GTX 650 Ti BOOST line with the flagship overclocked Gaming Edition MSI N650Ti TF 2GD5/OC BE. With falling prices on AMD Radeon video cards we will compare it to the AMD Radeon HD 7850 to see which will emerge as the victor in the sub-$200 price price range."
Here are some more Graphics Card articles from around the web:
- MSI GTX 660 Gaming Video Card Review @ Ninjalane
- MSI GTX 660 N660 Gaming 2GD5/OC Video Card Review @ HiTech Legion
- MSI GTX780 Lightning 3GB @ Kitguru
- Budget video cards: AMD Radeon HD 7730 vs. Nvidia GeForce GT 640 GK208 @ Hardware.info
- ASUS GTX 760 DirectCU Mini 2 GB @ techPowerUp
- MSI GTX 780 Lightning Review @ Hardware Canucks
- ASUS GTX 670 DirectCU II Mini @ Bjorn3D
- Palit GTX760, GTX770 and GTX780 Super JetStream @ Kitguru
- MSI GeForce GTX 760 Twin Frozr Gaming OC Edition 2GB @ eTeknix
- ASUS GTX 780 DirectCU II OC @ Bjorn3D
- Palit GTX 780 Super JetStream 3 GB @ techPowerUp
- Gainward GTX 760 Phantom 2GB @ eTeknix
- EVGA GTX 770 4GB Dual Classified w/ ACX Cooler Review @ HiTech Legion
- XFX FX7850 Double Dissipation HD 7850 2GB @ eTeknix
- PowerColor Radeon HD 7730 1GB @ eTeknix
- Gigabyte Radeon HD 7870 2GB GHz Edition Video Card Review @ Legit Reviews
Subject: General Tech, Graphics Cards | September 19, 2013 - 12:57 PM | Jeremy Hellstrom
Tagged: graphics drivers, amd, catalyst 13.9
FEATURE HIGHLIGHTS OF AMD CATALYST™ 13.9
The AMD Catalyst 13.9 WHQL is AMD’s first logo certified driver for Windows 8.1. It does not include support for Frame Pacing or the very latest AMD CrossFire™ optimizations. AMD Catalyst 13.10 Beta includes additional performance improvements and fixes not found in AMD Catalyst 13.9 WHQL.
AMD’s first logo certified driver for Windows 8.1
Includes WDDM 1.3 support for:
- AMD Accelerated Processors (“Kabini” & “Temash”) for Desktop, Notebook or Tablet PCs, including: A4-1200, A4-1250, A4-5000, A4-5100, A4-5150, A6-1450, A6-5200, A6-5250, A6-5350, E1-2100, E1-2200, E1-2500, E1-2600, E1-2650, E2-3000, E2-3100
- AMD Accelerated Processors (“Richland”) for Desktop or Notebook PCs, including: A10-5700, A10-5745M, A10-5750M, A10-5757M, A10-5800B, A10-5800K, A8-5500, A8-5500B, A8-5545M, A8-5550M, A8-5557M, A8-5600K, A6-5345M, A6-5350M, A6-5357M, A6-5400B, A6-5400K, A4-5145M, A4-5150, A4-5300, A4-5300B
- AMD Accelerated Processors (“Trinity”) for Desktop or Notebook PCs, including: A10-4600M, A10-4655M, A10-4677M, A10-5700, A10-5800B, A10-5800K, A8-4500M, A8-4555M, A8-4557M, A6-4400M, A6-4455M, A6-5400B, A6-5400K, A4-4300M, A4-4355M, A4-5300, A4-5300B
- AMD Radeon HD 8000 Series
- AMD Radeon HD 7000 Series
- AMD Radeon HD 6000 Series
- AMD Radeon HD 5000 Series
Support for AMD Features:
- AMD Eyefinity
- AMD Dual Graphics/AMD CrossFire Technology
- AMD Overdrive
- AMD Catalyst Control Center/Vision Engine Control Center
OpenGL support for User Profiles and Catalyst Application Profiles: Users can now create per-application 3D settings profiles for OpenGL applications. OpenGL applications are now supported through Catalyst Application Profile updates (for single GPU and AMD CrossFire configurations).
AMD Enduro™ Technology enhancements: The AMD Catalyst Control Center now shows which applications are active on the Performance GPU and the Power saving GPU.