Subject: General Tech, Graphics Cards | October 23, 2013 - 07:30 PM | Scott Michaud
Tagged: amd, firepro
Currently AMD holds 18% market share with its FirePro line of professional GPUs. This compares to NVIDIA, who owns 81% with Quadro. I assume the "other" category is the sum of S3 and Matrox who, together, command the remaining 1% of the professional market (and just the professional market).
According to Jon Peddie of JPR, as reported by X-Bit Labs, AMD intends to wrestle back revenue it left unguarded for NVIDIA. "After years of neglect, AMD’s workstation group, under the tutorage of Matt Skyner, has the backing and commitment of top management and AMD intends to push into the market aggressively." They have already gained share this year.
During AMD's 3rd Quarter (2013) earnings call, CEO Rory Read outlined the importance of the professional graphics market.
We also continue to make steady progress in another of [our] growth businesses in the third quarter as we delivered our fifth consecutive quarter of revenue and share growth in the professional graphics area. We believe that we can continue to gain share in this lucrative part of the GPU market based on our product portfolio, design wins in flight, and enhanced channel programs.
On the same conference call (actually before and after the professional graphics sound bite), Rory noted AMD's renewed push into the server and embedded SoC markets with 64-bit x86 and 64-bit ARM processors. They will be the only company manufacturing both x86 and ARM solutions, which should be an interesting proposition for an enterprise in need of both. Why deal with two vendors?
Either way, AMD will probably be refocusing on the professional and enterprise markets for the near future. For the rest of us, this hopefully means that AMD has a stable (and confident) roadmap in the processor and gaming markets. If that is the case, a profitable Q3 is definitely a good start.
Subject: Graphics Cards | October 23, 2013 - 06:20 PM | Jeremy Hellstrom
Tagged: amd, overclocking, asus, ASUS R9 280X DirectCU II TOP, r9 280x
Having already seen what the ASUS R9 280X DirectCU II TOP can do at default speeds, the obvious next step, once they had time to fully explore the options, was for [H]ard|OCP to see just how far this GPU can overclock. To make a long story short, they went from a default clock of 1070MHz up to 1230MHz and pushed the RAM from 6.4GHz to 6.6GHz, though the voltage needed to be bumped from 1.2v to 1.3v. The actual frequencies are nowhere near as important as the effect on gameplay, though; to see those results you will have to click through to the full article.
"We take the new ASUS R9 280X DirectCU II TOP video card and find out how high it will overclock with GPU Tweak and voltage modification. We will compare performance to an overclocked GeForce GTX 770 and find out which card comes out on top when pushed to its overclocking limits."
Here are some more Graphics Card articles from around the web:
- Gigabyte R9 280X OC 3 GB @ techPowerUp
- HIS R9 280X iPower IceQ X2 Turbo Boost Clock 3GB Video Card Review @ Madshrimps
- AMD Radeon R9 290X Versus NVIDIA GeForce GTX 780 Benchmarks @ Legit Reviews
- XFX R9 280X Black OC Edition @ Kitguru
- AMD Radeon R9 280X Video Card Review w/ ASUS, XFX and MSI @ Legit Reviews
- HIS R9 280X iPower IceQ X² Turbo and R9 270X IceQ X² Turbo @ Legion Hardware
- Sapphire Radeon R9 270X Vapor-X @ Benchmark Reviews
- MSI R9 270X Hawk Review @ OCC
- Asus Matrix R9 280X Platinum @ LanOC Reviews
- ASUS R9 280X DirectCU II TOP 3 GB @ techPowerUp
- HIS Radeon R9 280X IceQ X2 @ Benchmark Reviews
- Sapphire Toxic Edition R9 270X Video Card Review @ HiTech Legion
- MSI Radeon R9 270X Gaming Video Card Review @ Ninjalane
- Sapphire Toxic R9 270X @ LanOC Reviews
- AMD Radeon 7000 and Radeon R200 Series Mixed CrossFire Testing @ Legit Reviews
- AMD Radeon R9 270X Graphics Card Review @ Techgage
- MSI R9 270X HAWK 2 GB @ techPowerUp
- HIS Radeon R9 270X IceQ X2 Turbo Boost @ Benchmark Reviews
- Asus R9 270X DirectCU II Top @ LanOC Reviews
- Diamond Multimedia Radeon 7870 7870PE52GV Review @ HCW
- AMD Radeon R9 270X On Linux @ Phoronix
- ASUS GTX760 DirectCU Mini OC @ Hardware.info
- Gigabyte GeForce GTX 770 OC / GTX 780 OC @ Hardware.info
- MSI N660 Gaming Review: affordable and silent GeForce GTX 660 @ Hardware.info
- Asus GTX 670 Direct CU Mini @ LanOC Reviews
- NVIDIA GeForce GTX 650 On Linux @ Phoronix
Subject: General Tech, Graphics Cards | October 23, 2013 - 12:21 AM | Scott Michaud
Tagged: nvidia, graphics drivers, geforce
Mid-June kicked up a storm of poop across the internet when IGN broke the story of AMD's optimizations for Frostbite 3. It was reported that NVIDIA would not receive sample code for those games until after they launched. The article was later updated with a statement from AMD: "... the AMD Gaming Evolved program undertakes no efforts to prevent our competition from optimizing for games before their release."
Now, I assume, the confusion was caused by the then-unannounced Mantle.
And, as it turns out, NVIDIA did receive the code for Battlefield 4 prior to launch. On Monday, the company launched its 331.58 WHQL-certified drivers, which are optimized for Batman: Arkham Origins and Battlefield 4. According to the release notes, you should even be able to use SLI out of the gate. If, on the other hand, you are a Civilization V player, HBAO+ should enhance your shadowing.
They also added a DX11 SLi profile for Watch Dogs... awkwarrrrrd.
Check out the blog at GeForce.com for a bit more information, read the release notes, or just head over to the drivers page. If you have GeForce Experience installed, it probably already asked you to update.
Subject: Graphics Cards, Displays | October 20, 2013 - 02:50 PM | Ryan Shrout
Tagged: video, tom petersen, nvidia, livestream, live, g-sync
UPDATE: If you missed our live stream today that covered NVIDIA G-Sync technology, you can watch the replay embedded below. NVIDIA's Tom Petersen stops by to talk about G-Sync in both high level and granular detail while showing off some demonstrations of why G-Sync is so important. Enjoy!!
Last week NVIDIA hosted press and developers in Montreal to discuss a couple of new technologies, the most impressive of which was NVIDIA G-Sync, a new monitor solution that looks to solve the eternal debate of smoothness against latency. If you haven't read about G-Sync and how impressive it was when first tested on Friday, you should check out my initial write up, NVIDIA G-Sync: Death of the Refresh Rate, which not only does that but also dives into the reason the technology shift was necessary in the first place.
G-Sync essentially functions by altering and controlling the vBlank signal sent to the monitor. In a normal configuration, vBlank is a combination of the vertical front and back porch and the necessary sync time. That timing is set at a fixed stepping that determines the effective refresh rate of the monitor; 60 Hz, 120 Hz, etc. What NVIDIA will now do in the driver and firmware is lengthen or shorten the vBlank signal as desired, sending it when one of two criteria is met:
- A new frame has completed rendering and has been copied to the front buffer. Sending vBlank at this time will tell the screen to grab data from the card and display it immediately.
- A substantial amount of time has passed and the currently displayed image needs to be refreshed to avoid brightness variation.
In current display timing setups, the submission of the vBlank signal has been completely independent from the rendering pipeline. The result was varying frame latency and either horizontal tearing or fixed refresh frame rates. With NVIDIA G-Sync creating an intelligent connection between rendering and frame updating, the display of PC games is fundamentally changed.
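To make those two criteria concrete, here is a minimal sketch of the decision logic in C. To be clear, this is my own illustration, not NVIDIA's driver or firmware code; the function names and the 6.9 ms / 33 ms hold times are assumptions chosen to mimic a 144 Hz G-Sync panel.

```c
/* Hypothetical sketch of the vBlank decision described above.
 * Names and timing values are assumptions for illustration only. */
#include <stdbool.h>
#include <stdio.h>

#define MIN_HOLD_MS  6.9   /* assumed floor: a 144 Hz panel can't refresh faster */
#define MAX_HOLD_MS 33.0   /* assumed ceiling before a forced refresh */

/* Returns true if vBlank should be sent now, given whether a new frame
 * has landed in the front buffer and how long the current image has
 * been held on screen. */
static bool should_send_vblank(bool new_frame_ready, double elapsed_ms)
{
    /* Criterion 1: a finished frame is waiting and the panel is ready
     * for another refresh -- display it immediately. */
    if (new_frame_ready && elapsed_ms >= MIN_HOLD_MS)
        return true;

    /* Criterion 2: the image has been held so long that the panel must
     * be refreshed anyway to avoid visible brightness variation. */
    return elapsed_ms >= MAX_HOLD_MS;
}

int main(void)
{
    /* Sample situations: (frame ready?, ms since the last vBlank). */
    struct { bool ready; double ms; } cases[] = {
        { true,  18.2 },  /* ~55 FPS frame just finished: refresh now   */
        { false, 18.2 },  /* nothing new yet: keep holding              */
        { false, 35.0 },  /* held too long: forced repeat refresh       */
        { true,   3.0 },  /* frame arrived faster than panel max: wait  */
    };
    for (int i = 0; i < 4; i++)
        printf("ready=%d elapsed=%.1fms -> vblank=%d\n",
               cases[i].ready, cases[i].ms,
               should_send_vblank(cases[i].ready, cases[i].ms));
    return 0;
}
```

Note how the first sample case maps directly to the variable refresh idea: a frame that took 18.2 ms gets displayed at that moment, effectively running the panel at 55 Hz for that frame.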
Every person that saw the technology, including other media members and even developers like John Carmack, Johan Andersson and Tim Sweeney, came away knowing that this was the future of PC gaming. (If you didn't see the panel that featured those three developers on stage, you are missing out.)
But it is definitely a complicated technology and I have already seen a lot of confusion about it in our comment threads on PC Perspective. To help the community get a better grasp and to offer them an opportunity to ask some questions, NVIDIA's Tom Petersen is stopping by our offices on Monday afternoon where he will run through some demonstrations and take questions from the live streaming audience.
Be sure to stop back at PC Perspective on Monday, October 21st at 2pm ET / 11am PT to discuss G-Sync, how it was developed and the various ramifications the technology will have in PC gaming. You'll find it all on our PC Perspective Live! page on Monday but you can sign up for our "live stream mailing list" as well to get notified in advance!
NVIDIA G-Sync Live Stream
11am PT / 2pm ET - October 21st
We also want your questions!! The easiest way to get them answered is to leave them for us here in the comments of this post. That will give us time to filter through the questions and get the answers you need from Tom. We'll take questions via the live chat and via Twitter (follow me @ryanshrout) during the event, but oftentimes there is a lot of noise to deal with.
So be sure to join us on Monday afternoon!
Subject: Graphics Cards | October 18, 2013 - 07:55 PM | Ryan Shrout
Tagged: video, tim sweeney, nvidia, Mantle, john carmack, johan andersson, g-sync, amd
If you weren't on our live stream from the NVIDIA "The Way It's Meant to be Played" tech day this afternoon, you missed a hell of an event. After the announcement of NVIDIA G-Sync variable refresh rate monitor technology, NVIDIA's Tony Tamasi brought one of the most intriguing panels of developers on stage to talk.
John Carmack, Tim Sweeney and Johan Andersson talked for over an hour, taking questions from the audience and even getting into debates amongst themselves in some instances. Topics included NVIDIA G-Sync of course, AMD's Mantle low-level API, the hurdles facing PC gaming and the direction each luminary is currently taking for future development.
If you are a PC enthusiast or gamer you are definitely going to want to listen and watch the video below!
Subject: General Tech, Graphics Cards | October 18, 2013 - 01:21 PM | Scott Michaud
Tagged: nvidia, GeForce GTX 780 Ti
So the really interesting news today was G-Sync but that did not stop NVIDIA from sneaking in a new high-end graphics card. The GeForce GTX 780 Ti follows the company's old method of releasing successful products:
- Attach a seemingly arbitrary suffix to a number
In all seriousness, we know basically nothing about this card. It is entirely possible that its architecture might not even be based on GK110. We do know it will be faster than a GeForce GTX 780, but we have no frame of reference with regard to the GeForce GTX Titan. Those two cards were already so close in performance that Ryan struggled to validate the 780's existence. Imagine how difficult it would be for NVIDIA to wedge yet another product into that gap.
And if it does outperform the Titan, what is its purpose? Sure, Titan is a GPGPU powerhouse if you want double-precision performance without purchasing a Tesla or a Quadro, but that is not really relevant for gamers yet.
We shall see, soon, when we get review samples in. You, on the other hand, will likely see more when the card launches mid-November. No word on pricing.
Subject: Graphics Cards | October 18, 2013 - 10:52 AM | Ryan Shrout
Tagged: variable refresh rate, refresh rate, nvidia, gsync, geforce, g-sync
UPDATE: I have posted a more in-depth analysis of the new NVIDIA G-Sync technology: NVIDIA G-Sync: Death of the Refresh Rate. Thanks for reading!!
UPDATE 2: ASUS has announced the G-Sync enabled version of the VG248QE will be priced at $399.
During a gaming event being held in Montreal, NVIDIA unveiled a new technology for GeForce gamers that the company is hoping will revolutionize the PC and its displays. Called NVIDIA G-Sync, this new feature combines changes to the graphics driver with changes to the monitor itself to alter the way refresh rates and Vsync have worked for decades.
With standard LCD monitors, gamers are forced to choose between a tear-free experience by enabling Vsync or playing a game with substantial visual anomalies in order to get the best and most efficient frame rates. G-Sync changes that by allowing a monitor to display refresh rates other than 60 Hz, 120 Hz or 144 Hz, etc. without the horizontal tearing normally associated with turning off Vsync. Essentially, G-Sync allows a properly equipped monitor to run at a variable refresh rate, which will improve the experience of gaming in interesting ways.
This technology will be available soon on Kepler-based GeForce graphics cards but will require a monitor with support for G-Sync; not just any display will work. The first launch monitor is a variation on the very popular 144 Hz ASUS VG248QE 1920x1080 display and, as we saw with 3D Vision, supporting G-Sync will require licensing and hardware changes. In fact, NVIDIA claims that the new logic inside the panel's controller is NVIDIA's own design, so you can obviously expect this to function only with NVIDIA GPUs.
DisplayPort is the only input option currently supported.
It turns out NVIDIA will actually be offering retrofit kits for current owners of the VG248QE at a yet-to-be-disclosed cost. The first retail units of G-Sync will ship as a monitor + retrofit kit, as production was just a bit behind.
Using a monitor with a variable refresh rate allows the game to display 55 FPS on the panel at 55 Hz without any horizontal tearing. It can also display 133 FPS at 133 Hz without tearing. Anything below the 144 Hz maximum refresh rate of this monitor will be running at full speed without the tearing associated with the lack of vertical sync.
The technology that NVIDIA is showing here is impressive when seen in person, and that is really the only way to understand the difference. High-speed cameras and captures will help, but much like 3D Vision, this is a feature that needs to be seen to be appreciated. How users will react to that roadblock remains to be seen.
Features like G-Sync show the gaming world that, without the restrictions of consoles, there are quite a few revolutionary steps that can be taken to maintain the PC gaming advantage well into the future. 4K displays were a recent example and now NVIDIA G-Sync adds to the list.
Be sure to stop back at PC Perspective on Monday, October 21st at 2pm ET / 11am PT as we will be joined in-studio by NVIDIA's Tom Petersen to discuss G-Sync, how it was developed and the various ramifications the technology will have in PC gaming. You'll find it all on our PC Perspective Live! page on Monday but you can sign up for our "live stream mailing list" as well to get notified in advance!
NVIDIA G-Sync Live Stream
11am PT / 2pm ET - October 21st
Subject: General Tech, Graphics Cards | October 17, 2013 - 06:56 PM | Scott Michaud
Tagged: nvidia, GTX 760 OEM
A pair of new graphics cards has been announced during the first day of "The Way It's Meant To Be Played Montreal 2013", both of which are intended for system builders to integrate into their products. Both cards fall under the GeForce GTX 760 branding, with the names "GeForce GTX 760 Ti (OEM)" and "GeForce GTX 760 192-bit (OEM)".
I will place the main specifications of both cards side-by-side with the default GeForce GTX 760 for a little bit of reference. Be sure to check out our benchmarks of the retail card.
| Specification | GTX 760 | GTX 760 192-bit (OEM) | GTX 760 Ti (OEM) |
| --- | --- | --- | --- |
| Base Clock | 980 MHz | 823 MHz | 915 MHz |
| Boost Clock | 1033 MHz | 888 MHz | 980 MHz |
| Memory Data Rate | 6.0 GT/s | 5.8 GT/s | 6.0 GT/s |
| vRAM (capacity) | 2 GB | 1.5 or 3 GB | 2 GB |
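Since GT/s is a per-pin data rate rather than bandwidth, a quick calculation converts it to peak GB/s. The bus widths here are my assumptions: 256-bit for the GTX 760 and the Ti (OEM), matching the retail card, and 192-bit for the aptly named 192-bit model.

```c
/* Peak memory bandwidth = data rate (GT/s) x bus width (bits) / 8.
 * Bus widths are assumptions, not confirmed specs: 256-bit for the
 * GTX 760 and 760 Ti (OEM), 192-bit per the 192-bit model's name. */
#include <stdio.h>

static double bandwidth_gbps(double data_rate_gt_s, int bus_bits)
{
    return data_rate_gt_s * bus_bits / 8.0;  /* GB/s */
}

int main(void)
{
    printf("GTX 760:               %.1f GB/s\n", bandwidth_gbps(6.0, 256));
    printf("GTX 760 192-bit (OEM): %.1f GB/s\n", bandwidth_gbps(5.8, 192));
    printf("GTX 760 Ti (OEM):      %.1f GB/s\n", bandwidth_gbps(6.0, 256));
    return 0;
}
```

If those widths hold, that works out to 192 GB/s for the retail card and the Ti, and 139.2 GB/s for the 192-bit model, which gives up roughly a quarter of the retail card's memory bandwidth.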
The GeForce GTX 760 is no slouch, and the GTX 760 Ti (OEM) especially seems to be pretty close in performance to the retail product. I could see this being a respectable addition to a Steam Machine. I still cannot understand why, like the gaming bundle, these cards were not announced during the keynote speech.
Or, for that matter, why no-one seems to be reporting on them.
Subject: General Tech, Graphics Cards | October 17, 2013 - 05:36 PM | Scott Michaud
Tagged: shield, nvidia, bundle
The live stream from NVIDIA this morning was full of technologies focused on the PC gaming ecosystem, including mobile (but still PC-like) platforms. Today they also announced a holiday gaming bundle for their GeForce cards, although that missed the stream for some reason.
If you purchase a GeForce GTX 770, 780, or Titan from a participating retailer (including online), you will receive Splinter Cell: Blacklist, Batman: Arkham Origins, and Assassin's Creed IV: Black Flag along with a $100-off coupon for an NVIDIA SHIELD.
If, on the other hand, you purchase a GTX 760, 680, 670, 660 Ti, or 660 from a participating retailer (again, including online), you will receive Splinter Cell: Blacklist and Assassin's Creed IV: Black Flag along with a $50-off coupon for the NVIDIA SHIELD.
The current price at Newegg for an NVIDIA SHIELD is $299 USD. A $100 discount pushes that down to $199, and $200 is a psychological barrier for videogame systems; customers tend to jump at anything under it. Reaching the sub-$200 price point could be a big deal even for customers not on the fence, especially when you consider PC streaming. Could be.
Assume you were already planning on upgrading your GPU. Would you be interested in adding in an NVIDIA SHIELD for an extra $199?
Subject: General Tech, Graphics Cards | October 17, 2013 - 04:37 PM | Scott Michaud
Tagged: radeon, R9 290X, amd
The NDA on AMD R9 290X benchmarks has not yet lifted, but AMD was in Montreal to provide two previews: BioShock Infinite and Tomb Raider, both at 4K (3840x2160). Keep in mind, these scores are provided by AMD and definitely do not represent results from our monitor-capture solution. Expect more detailed results from us later, as we do some Frame Rating.
The test machine used in both setups contains:
- Intel Core i7-3960X at 3.3 GHz
- MSI X79A-GD65
- 16GB of DDR3-1600
- Windows 7 SP1 64-bit
- NVIDIA GeForce GTX 780 (331.40 drivers) / AMD Radeon R9 290X (13.11 beta drivers)
The R9 290X is configured in its "Quiet Mode" during both benchmarks. This is particularly interesting to me, as I was unaware of such a feature (it has been a while since I last used a desktop AMD/ATi card). I would assume this is a fan and power profile that keeps noise levels as low as possible for some period of time. A quick Google search suggests this feature is new with the Radeon Rx200-series cards.
BioShock Infinite is quite demanding at 4K with ultra quality settings. Both cards maintain an average framerate above 30FPS.
AMD R9 290X "Quiet Mode": 44.25 FPS
NVIDIA GeForce GTX 780: 37.67 FPS
(Update 1: 4:44pm EST) AMD confirmed TressFX is disabled in these benchmark scores. It is, however, enabled if you are in Montreal to see the booth. (end of update 1)
Tomb Raider is also a little harsh at that resolution. Unfortunately, the results are ambiguous as to whether TressFX was enabled during the benchmarks: the summary explicitly claims TressFX is enabled, while the string of settings contains "Tressfx=off". Clearly, one of the two entries is a typo. We are currently trying to get clarification. In the meantime:
AMD R9 290X "Quiet Mode": 40.2 FPS
NVIDIA GeForce GTX 780: 34.5 FPS
Notice that neither of these results is compared against a GeForce Titan. Recent leaks suggest a retail price for AMD's flagship card in the low-$700 range. The GeForce GTX 780, on the other hand, resides in the $650-700 USD range.
It seems pretty clear, to me, that cost drove this comparison rather than performance.
Subject: General Tech, Graphics Cards | October 17, 2013 - 03:46 PM | Scott Michaud
Tagged: amd, radeon
In summary, "We don't know yet".
We do know of a story posted by Fudzilla which cited Roy Taylor, VP of Global Channel Sales, as a source confirming the reintroduction of Never Settle for the new "Rx200" Radeon cards. Adding credibility, Roy Taylor retweeted the story via his official account. This tweet is still there as I write this post.
The Tech Report, after publishing the story, was contacted by Robert Hallock of AMD Gaming and Graphics. The official word, now, is that AMD does not have any announcements regarding bundles for new products. He is also quoted as saying, "We continue to consider Never Settle bundles as a core component of AMD Gaming Evolved program and intend to do them again in the future".
So, I (personally) see promise that we will get a new Never Settle bundle. For the moment, AMD is officially silent on the matter. We also do not know (and, possibly, neither does AMD at this time) which games will be included, how many a user can claim, or whether there will even be a choice at all.
Subject: General Tech, Graphics Cards | October 17, 2013 - 01:45 AM | Ryan Shrout
Tagged: video, nvidia, live blog, live
Last month it was AMD hosting the media out in sunny Hawaii for a #GPU14 press event. This week NVIDIA is hosting a group of media in Montreal for a two-day event built around "The Way It's Meant to be Played".
NVIDIA promises some very impressive software and technology demonstrations, and you can take it all in with our live blog and (hopefully) live stream on our PC Perspective Live! page!
It starts at 10am ET / 7am PT so join us bright and early!! And don't forget to stop by tomorrow for an even more exciting Day 2!!
Subject: General Tech, Graphics Cards | October 16, 2013 - 10:00 PM | Scott Michaud
Tagged: FPGA, Altera
(Update 10/17/2013, 6:13 PM) Apparently I messed up inputting this into the website last night. To compare FPGAs with current hardware, the Altera Stratix 10 is rated at more than 10 TeraFLOPs compared to the Tesla K20X at ~4 TeraFLOPs or the GeForce Titan at ~4.5 TeraFLOPs. All figures are single precision. (end of update)
Field Programmable Gate Arrays (FPGAs) are not general purpose processors; they are not designed to perform any random instruction at any random time. If you have a specific set of instructions that you want performed efficiently, you can spend a couple of hours compiling your function(s) to an FPGA which will then be the hardware embodiment of your code.
This is similar to an Application-Specific Integrated Circuit (ASIC) except that, for an ASIC, it is the factory that bakes your application into the hardware. Many (actually, to my knowledge, almost all) FPGAs can even be reprogrammed if you can spare those few hours to configure them again.
Altera is a manufacturer of FPGAs, and one of the few companies allowed access to Intel's 14nm fabrication facilities. Rahul Garg of AnandTech recently published a story discussing the compilation of OpenCL kernels to FPGAs using Altera's compiler.
Now this is pretty interesting.
The design of OpenCL splits work between "host" and "kernel". The host application is written in some arbitrary language and follows typical programming techniques. Occasionally, the application will run across a large batch of instructions. A particle simulation, for instance, will require position information to be computed. Rather than having the host code loop through every particle and perform some complex calculation, what happens to each particle could be "a kernel" which the host adds to the queue of some accelerator hardware. Normally, this is a GPU with its thousands of cores chunked into groups of usually 32 or 64 (vendor-specific).
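To ground that host/kernel split, below is a minimal OpenCL host program in C with an embedded particle-update kernel. It is a generic sketch of the OpenCL model discussed in the article, not Altera's code: it targets whatever OpenCL device the runtime finds first (most likely a GPU) and omits error checking for brevity.

```c
/* Minimal OpenCL host/kernel split: the host stages data and enqueues
 * a per-particle "kernel" over the whole batch instead of looping
 * itself. Illustrative only; build with -lOpenCL on Linux. */
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <CL/cl.h>

#define N 1024

static const char *src =
    "__kernel void step(__global float *pos,\n"
    "                   __global const float *vel,\n"
    "                   const float dt) {\n"
    "    size_t i = get_global_id(0);\n"
    "    pos[i] += vel[i] * dt;   /* one particle per work-item */\n"
    "}\n";

int main(void)
{
    float pos[N], vel[N];
    for (int i = 0; i < N; i++) { pos[i] = 0.0f; vel[i] = 1.0f; }

    cl_platform_id plat;  cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Compile the kernel. On a GPU this takes milliseconds; on
     * Altera's toolchain this step becomes the hours-long offline
     * synthesis to a hardware configuration. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "step", NULL);

    cl_mem dpos = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 sizeof pos, pos, NULL);
    cl_mem dvel = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 sizeof vel, vel, NULL);
    float dt = 0.016f;
    clSetKernelArg(k, 0, sizeof dpos, &dpos);
    clSetKernelArg(k, 1, sizeof dvel, &dvel);
    clSetKernelArg(k, 2, sizeof dt, &dt);

    /* The host hands the whole batch to the accelerator in one call. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dpos, CL_TRUE, 0, sizeof pos, pos, 0, NULL, NULL);

    printf("pos[0] after one step: %f\n", pos[0]);  /* prints 0.016 */
    return 0;
}
```

The host's only job is to stage the data and enqueue the kernel over all N particles; where that kernel actually runs, and how it was compiled, is the accelerator's business.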
An FPGA, on the other hand, can lock itself to that specific set of instructions. Within a few hours, it can configure some arbitrary number of compute paths and then just churn through each kernel call until it is finished. The compiler knows exactly the workload the hardware will need to perform while the host code runs on the CPU.
This is obviously designed for enterprise applications, at least as far into the future as we can see. Current models are apparently priced in the thousands of dollars but, as the article points out, have the potential to out-perform a 200W GPU at just a tenth of the power. This could be very interesting for companies, perhaps a film production house, that want to install accelerator cards for sub-d surfaces or ray tracing but would like to develop the software in-house and occasionally update their code after business hours.
Regardless of the potential market, an FPGA-based add-in card simply makes sense for OpenCL and its architecture.
Subject: Graphics Cards | October 14, 2013 - 08:52 PM | Ryan Shrout
Tagged: xbox one, microsoft, Mantle, dx11, amd
Microsoft posted a new blog on its Windows site that discusses some of the new features of the latest DirectX on Windows 8.1 and the upcoming Xbox One. Of particular interest was a line that confirms what I have said all along about the much-hyped AMD Mantle low-level API: it is not compatible with Xbox One.
We are very excited that with the launch of Xbox One, we can now bring the latest generation of Direct3D 11 to console. The Xbox One graphics API is “Direct3D 11.x” and the Xbox One hardware provides a superset of Direct3D 11.2 functionality. Other graphics APIs such as OpenGL and AMD’s Mantle are not available on Xbox One.
What does this mean for AMD? Nothing really changes except some of the common online discussion about how easy it would now be for developers to convert games built for the console to the AMD-specific Mantle API. AMD claims that Mantle offers a significant performance advantage over DirectX and OpenGL by giving developers that choose to implement support for it closer access to the hardware without much of the software overhead found in other APIs.
This is what Mantle does. It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software. This lowers memory and CPU usage, it decreases latency, and because there are fewer “moving parts” AMD claims that they can do 9x the draw calls with Mantle as compared to DirectX. This is a significant boost in overall efficiency. Before everyone gets too excited, we will not see a 9x improvement in overall performance with every application. A single HD 7790 running in Mantle is not going to power 3 x 1080P monitors in Eyefinity faster than a HD 7970 or GTX 780 (in Surround) running in DirectX. Mantle shifts the bottleneck elsewhere.
I still believe that AMD Mantle could bring interesting benefits to the AMD Radeon graphics cards on the PC but I think this official statement from Microsoft will dampen some of the over excitement.
Also worth noting is this comment about the DX11 implementation on the Xbox One:
With Xbox One we have also made significant enhancements to the implementation of Direct3D 11, especially in the area of runtime overhead. The result is a very streamlined, “close to metal” level of runtime performance. In conjunction with the third generation PIX performance tool for Xbox One, developers can use Direct3D 11 to unlock the full performance potential of the console.
So while Windows and the upcoming Xbox One will share an API there will still be performance advantages for games on the console thanks to the nature of a static hardware configuration.
Subject: General Tech, Graphics Cards | October 11, 2013 - 04:47 PM | Scott Michaud
Tagged: amd, R9 290X
When you deal with the web, almost nothing is hidden. The browsers (and some extensions) have access to just about everything on screen and off. Anyone browsing a site or app can inspect the contents and even modify it. That last part could be key.
TechPowerUp got their hands on a screenshot of developer tools inspecting the Newegg website. A few elements are hidden on the right-hand side within the "Coming Soon" container. One element, with an id of "singleFinalPrice", is set to "visibility: hidden;" and has a price as its contents. In the TechPowerUp screenshot, this price is listed as "729.99" USD.
Of course, good journalism means confirming this yourself. As of the time I checked, the value is listed as "9999.99" USD. This means one of two things: either Newegg changed the value to a placeholder after the leak was discovered, or the source of TechPowerUp's screenshot used those same developer tools to modify the content. It is actually a remarkably easy thing to do... here I change the value to 99 cents by right-clicking on the element and modifying the HTML.
No, the R9 290X is not 99 cents.
So I would take these screenshots with a grain of salt while we only have access to one source. That said, $729.99 does sound like a reasonable price point for AMD to release a Titan competitor at. Of course, that is exactly what a hoaxer would want.
But, as it stands right now, I would not get your hopes up. An MSRP of ~$699-$749 USD sounds legitimate but we do not have even a second source, at the moment, to confirm that. Still, this might be something our readers would like to know.
Subject: General Tech, Graphics Cards, Systems | October 10, 2013 - 06:59 PM | Scott Michaud
Tagged: amd, nvidia, Intel, Steam Machine
This should be little-to-no surprise for the viewers of our podcast, as this story was discussed there, but Valve has confirmed AMD and Intel graphics are compatible with Steam Machines. Doug Lombardi of Valve commented by email to, apparently, multiple sources including Forbes and Maximum PC.
Last week, we posted some technical specs of our first wave of Steam Machine prototypes. Although the graphics hardware that we've selected for the first wave of prototypes is a variety of NVIDIA cards, that is not an indication that Steam Machines are NVIDIA-only. In 2014, there will be Steam Machines commercially available with graphics hardware made by AMD, NVIDIA, and Intel. Valve has worked closely together with all three of these companies on optimizing their hardware for SteamOS, and will continue to do so into the foreseeable future.
Ryan and the rest of the podcast crew found the whole situation "odd". They could not understand why AMD referred the press to Doug Lombardi rather than circulate a canned statement from him. It is also strange that NVIDIA has an exclusive on the beta program while AMD hardware will only be commercially available in 2014.
As I said in the initial post: for what seems to be a deliberate non-committal to any specific hardware spec, why limit the prototypes to a single graphics provider?
Subject: Graphics Cards | October 10, 2013 - 03:29 PM | Jeremy Hellstrom
Tagged: radeon, r9 270x, GCN, sapphire, toxic edition, factory overclocked
We saw the release of the reference R9s yesterday, and today we get to see the custom models, such as the Sapphire TOXIC R9 270X which Legit Reviews just finished benchmarking. The TOXIC sports a 100MHz overclock on both GPU and RAM as well as a custom cooler with three fans. While it remains a two-slot GPU, it is longer than the reference model and requires a full foot of clearance inside the case. Read on to see what kind of performance boost you can expect and how much further you can push this card.
"When it comes to discrete graphics, the $199 price point is known as the gamer’s sweet spot by both AMD and NVIDIA. This is arguably the front line in the battle for your money when it coming to gaming graphics cards. The AMD Radeon R9 270X is AMD’s offering to gamers at this competitive price point. Read on to see how it performs!"
Here are some more Graphics Card articles from around the web:
- Gigabyte Radeon R9 270X WindForce OC 2GB @ eTeknix
- ASUS Radeon R9 270X Direct CU II TOP 2GB @ eTeknix
- MSI Radeon R9 270X Hawk Edition Video Card Review @ HiTech Legion
- Gigabyte R9 270X Windforce @ LanOC Reviews
- Sapphire R9 280X Toxic Edition OC 3GB @ Kitguru
- MSI Radeon R9 270X GAMING 2GB @ Benchmark Reviews
- AMD Radeon R9 280X / R9 270X from ASUS and MSI @ Hardware.info
- ASUS R9 270X Direct CU II TOP @ Kitguru
- Gigabyte Radeon R9 270X OC 2GB Video Card Review @ HiTech Legion
- ASUS R9 280X Matrix Platinum @ Kitguru
- Will it Crossfire? R9 280X & HD 7970 Scaling Tested @ Hardware Canucks
- AMD Radeon R9 280X Graphics Card Review @ Techgage
- AMD Radeon R7 260X Versus NVIDIA GeForce GTX 650 Ti Boost @ Legit Reviews
Subject: Editorial, General Tech, Graphics Cards | October 10, 2013 - 03:28 PM | Ryan Shrout
Tagged: podcast, nvidia, contest, batman arkham origins
UPDATE: We picked our winner for week 1 but now you can enter for week 2!!! See the new podcast episode listed below!!
Back in August NVIDIA announced that they would be teaming up with Warner Bros. Interactive to include copies of the upcoming Batman: Arkham Origins game with select NVIDIA GeForce graphics cards. While that's great and all, wouldn't you rather get one for free next week from PC Perspective?
Great, you're in luck! We have a handful of keys to give out to listeners and viewers of the PC Perspective Podcast. Here's how you enter:
- Listen to or watch episode #272 of the PC Perspective Podcast and listen for the "secret phrase" as mentioned in the show!
- Subscribe to our RSS feed for the podcast or subscribe to our YouTube channel.
- Fill out the form at the bottom of this podcast page with the "secret phrase" and you're entered!
I'll draw a winner before the next podcast and announce it on the show! We'll give away one copy in each of the next two weeks! Our thanks go to NVIDIA for supplying the Batman: Arkham Origins keys for this contest!!
No restrictions on winning, so good luck!!
Subject: Graphics Cards | October 9, 2013 - 12:32 PM | Jeremy Hellstrom
Tagged: R9 280X DirectCU II, R9 270X DirectCU II, R7 260X DirectCU II, R7 250, R7 240, Matrix R9 280X, asus
Editor's Note: Be sure to check out our full review of the new AMD Radeon R9 280X, R9 270X and R7 260X that includes the ASUS overclocked 280X!
Fremont, CA (October 8, 2013) - ASUS today announces the launch of its R9 200 and R7 200 Series graphics cards, powered by the latest AMD Radeon R9 and R7 series graphics-processing units (GPUs). As dedicated gamers have come to expect from Republic of Gamers (ROG), the new Matrix R9 280X graphics card features exclusive technologies, overclocked core speeds and performance-enhancing options.
The new R9 280X, R9 270X and R7 260X DirectCU II models are overclocked to perform faster than reference designs while also featuring DIGI+ voltage-regulator modules (VRMs) for a smooth and stable power supply and GPU Tweak software for tuning the graphics card. The new R7 250 and R7 240 cards benefit from many exclusive ASUS technologies and tools including Super Alloy Power components for superior stability, dust-proof fans for improved card lifespan and GPU Tweak.
Matrix — Push the limits
The Matrix R9 280X graphics cards benefit from a copper-based thermal design that conducts heat away from the GPU with greater efficiency. Compared to reference Radeon R9 280X designs, ROG Matrix R9 280X cards operate up to 20% cooler and three times (3X) quieter. Coupled with dual 100mm cooling fans, the cards let gamers enjoy ultra-cool and stable game play with minimal noise. The Matrix R9 280X Platinum Edition’s core runs at a blistering 1100MHz — 100MHz higher than reference.
The Matrix R9 280X graphics card allows for overclocking on a purely hardware level, with VGA Hotwire connections and TweakIt for voltage control, a Turbo Fan button to crank the fans up to 100% and a Safe Mode button to instantly return the GPU to its factory BIOS. It also includes DIGI+ voltage-regulator modules (VRMs) for smooth and stable power, and GPU Tweak tuning software that allows users to squeeze the last drop of performance out of their graphics card.
DirectCU II — Faster, quieter and cooler, even in the heat of battle
ASUS DirectCU II cooling technology places highly conductive copper cooling pipes in direct contact with a card’s GPU so heat dissipates quickly and with greater efficiency. Compared with reference Radeon R9 and R7 designs, ASUS R9 280X, R9 270X and R7 260X with DirectCU II allow the latest AMD Radeon GPUs to run up to 20% cooler and three times (3X) quieter, so gamers can enjoy ultra-stable play with minimal noise.
ASUS R9 280X, R9 270X and R7 260X are all equipped with exclusive ASUS DIGI+ VRM with Super Alloy Power technology. Paired with Super Alloy Power solid-state capacitors, concrete-core chokes and hardened MOSFETs, DIGI+ VRM delivers multi-phase power and digital voltage regulation for increased graphics card stability and cleaner power, even during the most intense GPU activities.
The fans of ASUS R9 280X, R9 270X, and R7 260X DirectCU II are all dust-proof, reducing debris accumulation and retaining peak performance over a longer lifespan. In addition, the ASUS R9 280X DirectCU II features an exclusive CoolTech fan. This innovative fan’s hybrid blade and bearing design, with an inner radial blower and outer flower-shaped blades, delivers multi-directional airflow to accelerate heat removal and maintain cooler and quieter operation.
R7 250 and R7 240 — Super Alloy Power components and dust-proof fans for superior stability and longevity
The ASUS R7 250 and R7 240 graphics cards both include exclusive Super Alloy Power technology. Super Alloy Power’s solid-state capacitors and hardened MOSFETs all withstand much greater stress and heat due to the application of specially-formulated materials — increasing reliability and overall card lifespan. Compared with reference designs, ASUS R7 250 and R7 240’s Super Alloy Power components deliver 35%-cooler operation and a lifespan that’s up to two-and-a-half times (2.5X) longer.
Additionally, the fans on the ASUS R7 250 and R7 240 are extremely resilient and dust-proof. The dust-proof fans ensure that even the smallest airborne particles are barred, reducing debris accumulation and retaining peak performance over a longer lifespan — typically improving lifespan by up to 25% compared to reference fans.
GPU Tweak — Easy overclocking and online streaming
The included ASUS GPU Tweak utility gives R9 280X, R9 270X, R7 260X, R7 250, and R7 240 users intuitive control over GPU and video-memory clock speeds and voltages, cooling-fan speeds and power-consumption thresholds – so they can overclock easily with confidence. Users can create multiple performance profiles for on-demand switching of custom settings for different games.
GPU Tweak now includes Live Streaming, an online-streaming tool that lets users share on-screen action over the internet in real time – so others can watch live gaming sessions. It is even possible to add scrolling text, pictures and webcam images to the streaming window.
Subject: Graphics Cards | October 8, 2013 - 05:30 PM | Jeremy Hellstrom
Tagged: amd, GCN, graphics core next, hd 7790, hd 7870 ghz edition, hd 7970 ghz edition, r7 260x, r9 270x, r9 280x, radeon, ASUS R9 280X DirectCU II TOP
AMD's rebranded cards have arrived, though with a few improvements to the GCN architecture that we already know so well. This particular release seems to be focused on price for performance, which is certainly not a bad thing in these uncertain times. The 7970 GHz Edition launched at $500, while the new R9 280X will arrive at $300, a rather significant price drop and one which we hope doesn't damage AMD's bottom line too badly in the coming quarters. [H]ard|OCP chose to test the ASUS R9 280X DirectCU II TOP, with a custom PCB from ASUS and a mild overclock which helped it pull ahead of the 7970 GHz. AMD has tended to lead off new graphics card families with the low and midrange models; we have yet to see the top-of-the-line R9 290X in action.
Ryan's review, including frame pacing, can be found right here.
"We evaluate the new ASUS R9 280X DirectCU II TOP video card and compare it to GeForce GTX 770 and Radeon HD 7970 GHz Edition. We will find out which video card provides the best value and performance in the $300 price segment. Does it provide better performance a than its "competition" in the ~$400 price range?"
Here are some more Graphics Card articles from around the web:
- AMD's Radeon R7 260X @ The Tech Report
- AMD's Radeon R9 280X and 270X @ The Tech Report
- AMD Radeon R9 270X & R7 260X Review @ Neoseeker
- AMD Radeon R7 260X 2GB @ eTeknix
- AMD Radeon R9 270X 2GB @ eTeknix
- AMD Radeon R7 260X, R9 270X and R9 280X @ Hardware.info
- Sapphire AMD Radeon R9 280X Vapor-X OC 3GB @ eTeknix
- Radeon R9 270X and R7 260X @ TechSpot
- AMD Radeon R9 270X & R7 260X @ Legion Hardware
- AMD Radeon R9 270X & R7 260X Review @ Hardware Canucks
- AMD Radeon R9 280X 3GB Review @ Hardware Canucks
- Sapphire R9 280X Vapor X @ Kitguru
- AMD R7 260X @ Kitguru
- AMD R9 270X @ Kitguru