All | Editorial | General Tech | Graphics Cards | Networking | Motherboards | Cases and Cooling | Processors | Chipsets | Memory | Displays | Systems | Storage | Mobile | Shows and Expos
Subject: General Tech, Graphics Cards | October 18, 2013 - 01:21 PM | Scott Michaud
Tagged: nvidia, GeForce GTX 780 Ti
So the really interesting news today was G-Sync but that did not stop NVIDIA from sneaking in a new high-end graphics card. The GeForce GTX 780 Ti follows the company's old method of releasing successful products:
- Attach a seemingly arbitrary suffix to a number
In all seriousness, we know basically nothing about this card. It is entirely possible that it is not even based on GK110. We do know it will be faster than a GeForce 780, but we have no frame of reference with regard to the GeForce Titan. Those two cards were already so close in performance that Ryan struggled to validate the 780's existence. Imagine how difficult it would be for NVIDIA to wedge yet another product into that gap.
And if it does outperform the Titan, what is its purpose? Sure, Titan is a GPGPU powerhouse if you want double-precision performance without purchasing a Tesla or a Quadro, but that is not really relevant for gamers yet.
We shall see, soon, when we get review samples in. You, on the other hand, will likely see more when the card launches mid-November. No word on pricing.
Subject: Graphics Cards | October 18, 2013 - 10:52 AM | Ryan Shrout
Tagged: variable refresh rate, refresh rate, nvidia, gsync, geforce, g-sync
UPDATE: I have posted a more in-depth analysis of the new NVIDIA G-Sync technology: NVIDIA G-Sync: Death of the Refresh Rate. Thanks for reading!!
UPDATE 2: ASUS has announced the G-Sync enabled version of the VG248QE will be priced at $399.
During a gaming event being held in Montreal, NVIDIA unveiled a new technology for GeForce gamers that the company hopes will revolutionize the PC and its displays. Called NVIDIA G-Sync, this new feature combines changes to the graphics driver with changes to the monitor to alter the way refresh rates and Vsync have worked for decades.
With standard LCD monitors, gamers are forced to choose between a tear-free experience by enabling Vsync or playing with substantial visual anomalies in order to get the best and most efficient frame rates. G-Sync changes that by allowing a monitor to display refresh rates other than 60 Hz, 120 Hz, 144 Hz, etc. without the horizontal tearing normally associated with turning off Vsync. Essentially, G-Sync allows a properly equipped monitor to run at a variable refresh rate, which will improve the experience of gaming in interesting ways.
This technology will be available soon on Kepler-based GeForce graphics cards but will require a monitor with support for G-Sync; not just any display will work. The first launch monitor is a variation on the very popular 144 Hz ASUS VG248QE 1920x1080 display and as we saw with 3D Vision, supporting G-Sync will require licensing and hardware changes. In fact, NVIDIA claims that the new logic inside the panels controller is NVIDIA's own design - so you can obviously expect this to only function with NVIDIA GPUs.
DisplayPort is the only input option currently supported.
It turns out NVIDIA will actually be offering retrofit kits for current owners of the VG248QE at a yet-to-be-disclosed cost. The first retail sales of G-Sync will ship as a monitor plus retrofit kit, as production was just a bit behind.
Using a monitor with a variable refresh rate allows a game to display 55 FPS on the panel at 55 Hz without any horizontal tearing. It can also display 133 FPS at 133 Hz without tearing. Anything below the 144 Hz maximum refresh rate of this monitor will run at full speed without the tearing associated with the lack of vertical sync.
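A rough sketch of the arithmetic (the 60 Hz and 144 Hz figures come from the article; the rest is purely illustrative):

```python
# A 55 FPS game finishes a new frame roughly every 18.2 ms.
fps = 55
frame_time_ms = 1000 / fps

# A fixed 60 Hz panel scans out every ~16.7 ms, so frame completion and
# panel refresh constantly drift apart: the source of tearing (Vsync off)
# or stutter (Vsync on).
fixed_refresh_ms = 1000 / 60

# With G-Sync, the panel simply refreshes when the frame is ready,
# up to its 144 Hz maximum.
gsync_refresh_hz = min(fps, 144)

print(round(frame_time_ms, 2))     # 18.18
print(round(fixed_refresh_ms, 2))  # 16.67
print(gsync_refresh_hz)            # 55
```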
The technology NVIDIA is showing here is impressive when seen in person; and that is really the only way to understand the difference. High-speed cameras and captures will help but, much like 3D Vision, this is a feature that needs to be seen to be appreciated. How users react to that roadblock remains to be seen.
Features like G-Sync show the gaming world that, without the restrictions of consoles, revolutionary steps can still be taken to maintain the PC gaming advantage well into the future. 4K displays were a recent example and now NVIDIA G-Sync adds to the list.
Be sure to stop back at PC Perspective on Monday, October 21st at 2pm ET / 11am PT as we will be joined in-studio by NVIDIA's Tom Petersen to discuss G-Sync, how it was developed and the various ramifications the technology will have in PC gaming. You'll find it all on our PC Perspective Live! page on Monday but you can sign up for our "live stream mailing list" as well to get notified in advance!
NVIDIA G-Sync Live Stream
11am PT / 2pm ET - October 21st
Subject: General Tech, Graphics Cards | October 17, 2013 - 06:56 PM | Scott Michaud
Tagged: nvidia, GTX 760 OEM
A pair of new graphics cards has been announced during the first day of "The Way It's Meant To Be Played Montreal 2013," both of which are intended for system builders to integrate into their products. Both cards fall under the GeForce GTX 760 branding with the names "GeForce GTX 760 Ti (OEM)" and "GeForce GTX 760 192-bit (OEM)".
I will place the main specifications of both cards side by side with the default GeForce GTX 760 for a little bit of reference. Be sure to check out its benchmarks.
| | GTX 760 | GTX 760 192-bit (OEM) | GTX 760 Ti (OEM) |
| --- | --- | --- | --- |
| Base Clock | 980 MHz | 823 MHz | 915 MHz |
| Boost Clock | 1033 MHz | 888 MHz | 980 MHz |
| Memory Speed | 6.0 GT/s | 5.8 GT/s | 6.0 GT/s |
| vRAM (capacity) | 2 GB | 1.5 or 3 GB | 2 GB |
Neither card is a slouch and the GTX 760 Ti, especially, seems to be pretty close in performance to the retail product. I could see it being a respectable addition to a Steam Machine. I still cannot understand why, like the gaming bundle, these cards were not announced during the keynote speech.
Or, for that matter, why no-one seems to be reporting on them.
Subject: General Tech, Graphics Cards | October 17, 2013 - 05:36 PM | Scott Michaud
Tagged: shield, nvidia, bundle
The live stream from NVIDIA, this morning, was full of technologies focused around the PC gaming ecosystem including mobile (but still PC-like) platforms. Today they also announced a holiday gaming bundle for their GeForce cards although that missed the stream for some reason.
If you purchase a GeForce GTX 770, 780, or Titan from a participating retailer (including online), you will receive Splinter Cell: Blacklist, Batman: Arkham Origins, and Assassin's Creed IV: Black Flag along with a $100-off coupon for an NVIDIA SHIELD.
If, on the other hand, you purchase a GTX 760, 680, 670, 660 Ti, or 660 from a participating retailer (again, including online), you will receive Splinter Cell: Blacklist and Assassin's Creed IV: Black Flag along with a $50-off coupon for the NVIDIA SHIELD.
The current price at Newegg for an NVIDIA SHIELD is $299 USD; a $100 discount pushes that to $199. The $200 mark is a psychological barrier for videogame systems, and customers tend to jump at prices below it. Reaching the sub-$200 price point could be a big deal even for customers not already on the fence, especially when you consider PC streaming. Could be.
Assume you were already planning on upgrading your GPU. Would you be interested in adding in an NVIDIA SHIELD for an extra $199?
Subject: General Tech, Graphics Cards | October 17, 2013 - 04:37 PM | Scott Michaud
Tagged: radeon, R9 290X, amd
The NDA on AMD R9 290X benchmarks has not yet lifted, but AMD was in Montreal to provide two previews: BioShock Infinite and Tomb Raider, both at 4K (3840 x 2160). Keep in mind, these scores are provided by AMD and do not represent results from our monitor-capture solution. Expect more detailed results from us later, as we do some Frame Rating.
The test machine used in both setups contains:
- Intel Core i7-3960X at 3.3 GHz
- MSI X79A-GD65
- 16GB of DDR3-1600
- Windows 7 SP1 64-bit
- NVIDIA GeForce GTX 780 (331.40 drivers) / AMD Radeon R9 290X (13.11 beta drivers)
The R9 290X was configured in its "Quiet Mode" for both benchmarks. This is particularly interesting to me, as I was unaware of such a feature (it has been a while since I last used a desktop AMD/ATi card). I would assume this is a fan and power profile that keeps noise levels as low as possible for some period of time. A quick Google search suggests this feature is new with the Radeon Rx 200-series cards.
BioShock Infinite is quite demanding at 4K with ultra quality settings. Both cards maintain an average framerate above 30FPS.
AMD R9 290X "Quiet Mode": 44.25 FPS
NVIDIA GeForce GTX 780: 37.67 FPS
(Update 1: 4:44pm EST) AMD confirmed TressFX is disabled in these benchmark scores. It is, however, enabled in the demo if you are present in Montreal to see the booth. (end of update 1)
Tomb Raider is also a little harsh at this resolution. Unfortunately, the results are ambiguous as to whether TressFX was enabled throughout the benchmarks. The summary explicitly claims TressFX is enabled, while the string of settings contains "Tressfx=off". Clearly, one of the two entries is a typo. We are currently trying to get clarification. In the meantime:
AMD R9 290X "Quiet Mode": 40.2 FPS
NVIDIA GeForce GTX 780: 34.5 FPS
Notice that neither of these results is compared against a GeForce Titan. Recent leaks suggest a retail price for AMD's flagship card in the low-$700 range. The GeForce 780, on the other hand, resides in the $650-700 USD price range.
It seems pretty clear, to me, that cost drove this comparison rather than performance.
Subject: General Tech, Graphics Cards | October 17, 2013 - 03:46 PM | Scott Michaud
Tagged: amd, radeon
In summary, "We don't know yet".
We do know of a story posted by Fudzilla which cited Roy Taylor, VP of Global Channel Sales, as a source confirming the reintroduction of Never Settle for the new "Rx200" Radeon cards. Adding credibility, Roy Taylor retweeted the story via his official account. This tweet is still there as I write this post.
The Tech Report, after publishing the story, was contacted by Robert Hallock of AMD Gaming and Graphics. The official word, now, is that AMD does not have any announcements regarding bundles for new products. He is also quoted, "We continue to consider Never Settle bundles as a core component of AMD Gaming Evolved program and intend to do them again in the future".
So, I (personally) see promise that we will get a new Never Settle bundle. For the moment, AMD is officially silent on the matter. We also do not know (and, possibly, neither does AMD at this time) which games will be included, how many of them users can claim, or whether there will even be a choice at all.
Subject: General Tech, Graphics Cards | October 17, 2013 - 01:45 AM | Ryan Shrout
Tagged: video, nvidia, live blog, live
Last month it was AMD hosting the media out in sunny Hawaii for a #GPU14 press event. This week NVIDIA is hosting a group of media in Montreal for a two-day event built around "The Way It's Meant to be Played".
NVIDIA promises to have some very impressive software and technology demonstrations on hand, and you can take it all in with our live blog and (hopefully) live stream on our PC Perspective Live! page!
It starts at 10am ET / 7am PT so join us bright and early!! And don't forget to stop by tomorrow for an even more exciting Day 2!!
Subject: General Tech, Graphics Cards | October 16, 2013 - 10:00 PM | Scott Michaud
Tagged: FPGA, Altera
(Update 10/17/2013, 6:13 PM) Apparently I messed up inputting this into the website last night. To compare FPGAs with current hardware, the Altera Stratix 10 is rated at more than 10 TeraFLOPS, compared to the Tesla K20X at ~4 TeraFLOPS or the GeForce Titan at ~4.5 TeraFLOPS. All figures are single precision. (end of update)
Field Programmable Gate Arrays (FPGAs) are not general purpose processors; they are not designed to perform any random instruction at any random time. If you have a specific set of instructions that you want performed efficiently, you can spend a couple of hours compiling your function(s) to an FPGA which will then be the hardware embodiment of your code.
This is similar to an Application-Specific Integrated Circuit (ASIC) except that, for an ASIC, it is the factory that bakes your application into the hardware. Many (to my knowledge, almost all) FPGAs can even be reprogrammed if you can spare those few hours to configure them again.
Altera is a manufacturer of FPGAs. They are one of the few companies who were allowed access to Intel's 14nm fabrication facilities. Rahul Garg of Anandtech recently published a story which discussed compiling OpenCL kernels to FPGAs using Altera's compiler.
Now this is pretty interesting.
The design of OpenCL splits work between "host" and "kernel". The host application is written in some arbitrary language and follows typical programming techniques. Occasionally, the application will run across a large batch of instructions. A particle simulation, for instance, will require position information to be computed. Rather than having the host code loop through every particle and perform some complex calculation, what happens to each particle could be "a kernel" which the host adds to the queue of some accelerator hardware. Normally, this is a GPU with its thousands of cores chunked into groups of usually 32 or 64 (vendor-specific).
An FPGA, on the other hand, can lock itself to the specific set of instructions. It can decide to, within a few hours, configure some arbitrary number of compute paths and just churn through each kernel call until it is finished. The compiler knows exactly the application it will need to perform while the host code runs on the CPU.
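As a loose analogy (plain Python, not actual OpenCL code; the particle example comes from the paragraph above), the split looks like this: the kernel is the per-item function, and the host enqueues one invocation per item for the accelerator to churn through in parallel.

```python
def move_particle(particle, dt):
    """The "kernel": the calculation performed for a single particle."""
    position, velocity = particle
    return (position + velocity * dt, velocity)

# The "host": ordinary code that, upon hitting a large batch of identical
# work, hands one kernel invocation per particle to the accelerator
# (a GPU's grouped cores, or an FPGA's compiled compute paths).
particles = [(0.0, 1.0), (2.0, -0.5)]
particles = [move_particle(p, 0.5) for p in particles]
print(particles)  # [(0.5, 1.0), (1.75, -0.5)]
```

On a real accelerator the list comprehension becomes a queued kernel launch; the point of the FPGA approach is that the hardware itself is configured around that one kernel.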
This is obviously designed for enterprise applications, at least as far into the future as we can see. Current models are apparently priced in the thousands of dollars but, as the article points out, have the potential to outperform a 200W GPU at just a tenth of the power. This could be very interesting for companies, perhaps a film production house, that want to install accelerator cards for subdivision surfaces or ray tracing but would like to develop the software in-house and occasionally update their code after business hours.
Regardless of the potential market, an FPGA-based add-in card simply makes sense for OpenCL and its architecture.
Subject: Graphics Cards | October 14, 2013 - 08:52 PM | Ryan Shrout
Tagged: xbox one, microsoft, Mantle, dx11, amd
Microsoft posted a new blog on its Windows site that discusses some of the new features of the latest DirectX on Windows 8.1 and the upcoming Xbox One. Of particular interest was a line that confirms what I have said all along about the much-hyped AMD Mantle low-level API: it is not compatible with Xbox One.
We are very excited that with the launch of Xbox One, we can now bring the latest generation of Direct3D 11 to console. The Xbox One graphics API is “Direct3D 11.x” and the Xbox One hardware provides a superset of Direct3D 11.2 functionality. Other graphics APIs such as OpenGL and AMD’s Mantle are not available on Xbox One.
What does this mean for AMD? Nothing really changes except some of the common online discussion about how easy it would now be for developers to convert games built for the console to the AMD-specific Mantle API. AMD claims that Mantle offers a significant performance advantage over DirectX and OpenGL by giving developers that choose to implement support for it closer access to the hardware without much of the software overhead found in other APIs.
This is what Mantle does. It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software. This lowers memory and CPU usage, it decreases latency, and because there are fewer “moving parts” AMD claims that they can do 9x the draw calls with Mantle as compared to DirectX. This is a significant boost in overall efficiency. Before everyone gets too excited, we will not see a 9x improvement in overall performance with every application. A single HD 7790 running in Mantle is not going to power 3 x 1080P monitors in Eyefinity faster than a HD 7970 or GTX 780 (in Surround) running in DirectX. Mantle shifts the bottleneck elsewhere.
I still believe that AMD Mantle could bring interesting benefits to the AMD Radeon graphics cards on the PC but I think this official statement from Microsoft will dampen some of the over excitement.
Also worth noting is this comment about the DX11 implementation on the Xbox One:
With Xbox One we have also made significant enhancements to the implementation of Direct3D 11, especially in the area of runtime overhead. The result is a very streamlined, “close to metal” level of runtime performance. In conjunction with the third generation PIX performance tool for Xbox One, developers can use Direct3D 11 to unlock the full performance potential of the console.
So while Windows and the upcoming Xbox One will share an API there will still be performance advantages for games on the console thanks to the nature of a static hardware configuration.
Subject: General Tech, Graphics Cards | October 11, 2013 - 04:47 PM | Scott Michaud
Tagged: amd, R9 290X
When you deal with the web, almost nothing is hidden. The browsers (and some extensions) have access to just about everything on screen and off. Anyone browsing a site or app can inspect the contents and even modify it. That last part could be key.
TechPowerUp got their hands on a screenshot of developer tools inspecting the Newegg website. A few elements are hidden on the right hand side within the "Coming Soon" container. One element, id of "singleFinalPrice", is set "visibility: hidden;" with a price as its contents. In the TechPowerUp screenshot, this price is listed as "729.99" USD.
Of course, good journalism means confirming this yourself. As of the time I checked, the value is listed as "9999.99" USD. This means one of two things: either Newegg changed the value to a placeholder after the leak was discovered, or the source of TechPowerUp's screenshot used those same developer tools to modify the content. It is actually a remarkably easy thing to do... here I change the value to 99 cents by right-clicking on the element and modifying the HTML.
No, the R9 290X is not 99 cents.
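A minimal sketch of the trick (the element id and price come from the screenshot; the markup is a simplified stand-in for Newegg's actual page, parsed here with Python's standard library rather than a browser):

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for the hidden element in the "Coming Soon" container.
snippet = '<span id="singleFinalPrice" style="visibility: hidden;">729.99</span>'
element = ET.fromstring(snippet)
print(element.text)  # 729.99

# Anyone with developer tools can rewrite the value just as easily:
element.text = "0.99"
print(ET.tostring(element).decode())
```

The element carries its price whether or not it is visible, which is exactly why a screenshot of developer tools proves very little on its own.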
So I would take these screenshots with a grain of salt while we only have access to one source. That said, $729.99 does sound like a reasonable price point for AMD to launch a Titan competitor at. Of course, that is exactly what a hoaxer would want.
But, as it stands right now, I would not get your hopes up. An MSRP of ~$699-$749 USD sounds legitimate but we do not have even a second source, at the moment, to confirm that. Still, this might be something our readers would like to know.
Subject: General Tech, Graphics Cards, Systems | October 10, 2013 - 06:59 PM | Scott Michaud
Tagged: amd, nvidia, Intel, Steam Machine
This should be little-to-no surprise for the viewers of our podcast, as this story was discussed there, but Valve has confirmed AMD and Intel graphics are compatible with Steam Machines. Doug Lombardi of Valve commented by email to, apparently, multiple sources including Forbes and Maximum PC.
Last week, we posted some technical specs of our first wave of Steam Machine prototypes. Although the graphics hardware that we've selected for the first wave of prototypes is a variety of NVIDIA cards, that is not an indication that Steam Machines are NVIDIA-only. In 2014, there will be Steam Machines commercially available with graphics hardware made by AMD, NVIDIA, and Intel. Valve has worked closely together with all three of these companies on optimizing their hardware for SteamOS, and will continue to do so into the foreseeable future.
Ryan and the rest of the podcast crew found the whole situation "odd". They could not understand why AMD referred the press to Doug Lombardi rather than circulate a canned statement from him. It was also strange that NVIDIA had an exclusive on the beta program while AMD hardware only becomes commercially available in 2014.
As I have said in the initial post: for what seems to be deliberate non-committal to a specific hardware spec, why limit to a single graphics provider?
Subject: Graphics Cards | October 10, 2013 - 03:29 PM | Jeremy Hellstrom
Tagged: radeon, r9 270x, GCN, sapphire, toxic edition, factory overclocked
We saw the release of the reference R9s yesterday and today we get to see the custom models, such as the Sapphire TOXIC R9 270X which Legit Reviews just finished benchmarking. The TOXIC sports a 100MHz overclock on both GPU and RAM as well as a custom cooler with three fans. While it remains a two-slot GPU, it is longer than the reference model and requires a full foot of clearance inside the case. Read on to see what kind of performance boost you can expect and how much further you can push this card.
"When it comes to discrete graphics, the $199 price point is known as the gamer’s sweet spot by both AMD and NVIDIA. This is arguably the front line in the battle for your money when it comes to gaming graphics cards. The AMD Radeon R9 270X is AMD’s offering to gamers at this competitive price point. Read on to see how it performs!"
Here are some more Graphics Card articles from around the web:
- Gigabyte Radeon R9 270X WindForce OC 2GB @ eTeknix
- ASUS Radeon R9 270X Direct CU II TOP 2GB @ eTeknix
- MSI Radeon R9 270X Hawk Edition Video Card Review @HiTech Legion
- Gigabyte R9 270X Windforce @ LanOC Reviews
- Sapphire R9 280X Toxic Edition OC 3GB @ Kitguru
- MSI Radeon R9 270X GAMING 2GB @ Benchmark Reviews
- AMD Radeon R9 280X / R9 270X from ASUS and MSI @ Hardware.info
- ASUS R9 270X Direct CU II TOP @ Kitguru
- Gigabyte Radeon R9 270X OC 2GB Video Card Review @ HiTech Legion
- ASUS R9 280X Matrix Platinum @ Kitguru
- Will it Crossfire? R9 280X & HD 7970 Scaling Tested @ Hardware Canucks
- AMD Radeon R9 280X Graphics Card Review @ Techgage
- AMD Radeon R7 260X Versus NVIDIA GeForce GTX 650 Ti Boost @ Legit Reviews
Subject: Editorial, General Tech, Graphics Cards | October 10, 2013 - 03:28 PM | Ryan Shrout
Tagged: podcast, nvidia, contest, batman arkham origins
UPDATE: We picked our winner for week 1 but now you can enter for week 2!!! See the new podcast episode listed below!!
Back in August NVIDIA announced that they would be teaming up with Warner Bros. Interactive to include copies of the upcoming Batman: Arkham Origins game with select NVIDIA GeForce graphics cards. While that's great and all, wouldn't you rather get one for free next week from PC Perspective?
Great, you're in luck! We have a handful of keys to give out to listeners and viewers of the PC Perspective Podcast. Here's how you enter:
- Listen to or watch episode #272 of the PC Perspective Podcast and listen for the "secret phrase" as mentioned in the show!
- Subscribe to our RSS feed for the podcast or subscribe to our YouTube channel.
- Fill out the form at the bottom of this podcast page with the "secret phrase" and you're entered!
I'll draw a winner before the next podcast and announce it on the show! We'll give away one copy each week for the next two weeks! Our thanks go to NVIDIA for supplying the Batman: Arkham Origins keys for this contest!!
No restrictions on winning, so good luck!!
Subject: Graphics Cards | October 9, 2013 - 12:32 PM | Jeremy Hellstrom
Tagged: R9 280X DirectCU II, R9 270X DirectCU II, R7 260X DirectCU II, R7 250, R7 240, Matrix R9 280X, asus
Editor's Note: Be sure to check out our full review of the new AMD Radeon R9 280X, R9 270X and R7 260X that includes the ASUS overclocked 280X!
Fremont, CA (October 8, 2013) - ASUS today announces the launch of its R9 200 and R7 200 Series graphics cards, powered by the latest AMD Radeon R9 and R7 series graphics-processing units (GPUs). As dedicated gamers have come to expect from Republic of Gamers (ROG), the new Matrix R9 280X graphics card features exclusive technologies, overclocked core speeds and performance-enhancing options.
The new R9 280X, R9 270X and R7 260X DirectCU II models are overclocked to perform faster than reference designs while also featuring DIGI+ voltage-regulator modules (VRMs) for a smooth and stable power supply and GPU Tweak software for tuning the graphics card. The new R7 250 and R7 240 cards benefit from many exclusive ASUS technologies and tools including Super Alloy Power components for superior stability, dust-proof fans for improved card lifespan and GPU Tweak.
Matrix — Push the limits
The Matrix R9 280X graphics cards benefit from a copper-based thermal design that conducts heat away from the GPU with greater efficiency. Compared to reference Radeon R9 280X designs, ROG Matrix R9 280X cards operate up to 20% cooler and three times (3X) quieter. Coupled with dual 100mm cooling fans, the design lets gamers enjoy ultra-cool and stable game play with minimal noise. The Matrix R9 280X Platinum Edition’s core runs at a blistering 1100MHz — 100MHz higher than reference.
The Matrix R9 280X graphics card allows for overclocking on a purely hardware level with VGA Hotwire connections and TweakIt for voltage control, a Turbo Fan button to crank the fan up to 100% and a Safe Mode button to instantly reset the GPU to the factory BIOS. It also includes DIGI+ voltage-regulator modules (VRMs) for smooth and stable power, and GPU Tweak tuning software that allows users to squeeze the last drop of performance out of their graphics card.
DirectCU II — Faster, quieter and cooler, even in the heat of battle
ASUS DirectCU II cooling technology places highly conductive copper cooling pipes in direct contact with a card’s GPU so heat dissipates quickly and with greater efficiency. Compared with reference Radeon R9 and R7 designs, ASUS R9 280X, R9 270X and R7 260X with DirectCU II allow the latest AMD Radeon GPUs to run up to 20% cooler and three times (3X) quieter, so gamers can enjoy ultra-stable play with minimal noise.
ASUS R9 280X, R9 270X and R7 260X are all equipped with exclusive ASUS DIGI+ VRM with Super Alloy Power technology. Paired with Super Alloy Power solid-state capacitors, concrete-core chokes and hardened MOSFETs, DIGI+ VRM delivers multi-phase power and digital voltage regulation for increased graphics card stability and cleaner power, even during the most intense GPU activities.
The fans of ASUS R9 280X, R9 270X, and R7 260X DirectCU II are all dust-proof, reducing debris accumulation and retaining peak performance over a longer lifespan. In addition, ASUS R9 280X DirectCU II features exclusive CoolTech fan. This innovative fan’s hybrid blade and bearing design, with inner radial blower and outer flower-shaped blades, delivers multi-directional airflow to accelerate heat removal and maintain cooler and quieter operation.
R7 250 and R7 240 — Super Alloy Power components and dust-proof fans for superior stability and longevity
The ASUS R7 250 and R7 240 graphics cards both include exclusive Super Alloy Power technology. Super Alloy Power’s solid-state capacitors and hardened MOSFETs all withstand much greater stress and heat due to the application of specially-formulated materials — increasing reliability and overall card lifespan. Compared with reference designs, ASUS R7 250 and R7 240’s Super Alloy Power components deliver 35%-cooler operation and a lifespan that’s up to two-and-a-half times (2.5X) longer.
Additionally, the fans on the ASUS R7 250 and R7 240 are extremely resilient and dust-proof. The dust-proof fans ensure that even the smallest airborne particles are kept out, reducing debris accumulation and retaining peak performance over a longer lifespan — typically improving lifespan by up to 25% compared to reference fans.
GPU Tweak- Easy overclocking and online streaming
The included ASUS GPU Tweak utility gives R9 280X, R9 270X, R7 260X, R7 250, and R7 240 users intuitive control over GPU and video-memory clock speeds and voltages, cooling-fan speeds and power-consumption thresholds – so they can overclock easily with confidence. Users can create multiple performance profiles for on-demand switching of custom settings for different games.
GPU Tweak now includes Live Streaming, an online-streaming tool that lets users share on-screen action over the internet in real time – so others can watch live gaming sessions. It is even possible to add scrolling text, pictures and webcam images to the streaming window.
Subject: Graphics Cards | October 8, 2013 - 05:30 PM | Jeremy Hellstrom
Tagged: amd, GCN, graphics core next, hd 7790, hd 7870 ghz edition, hd 7970 ghz edition, r7 260x, r9 270x, r9 280x, radeon, ASUS R9 280X DirectCU II TOP
AMD's rebranded cards have arrived, though with a few improvements to the GCN architecture we already know so well. This particular release seems to be focused on price for performance, which is certainly not a bad thing in these uncertain times. The 7970 GHz Edition launched at $500, while the new R9 280X arrives at $300, a rather significant price drop and one we hope doesn't damage AMD's bottom line too badly in the coming quarters. [H]ard|OCP chose to test the ASUS R9 280X DirectCU II TOP, with a custom PCB from ASUS and a mild overclock that helped it pull ahead of the 7970 GHz Edition. AMD has tended to lead off new graphics card families with the low-end and midrange models, and we have yet to see the top-of-the-line R9 290X in action.
Ryan's review, including frame pacing, can be found right here.
"We evaluate the new ASUS R9 280X DirectCU II TOP video card and compare it to GeForce GTX 770 and Radeon HD 7970 GHz Edition. We will find out which video card provides the best value and performance in the $300 price segment. Does it provide better performance than its "competition" in the ~$400 price range?"
Here are some more Graphics Card articles from around the web:
- AMD's Radeon R7 260X @ The Tech Report
- AMD's Radeon R9 280X and 270X @ The Tech Report
- AMD Radeon R9 270X & R7 260X Review @ Neoseeker
- AMD Radeon R7 260X 2GB @ eTeknix
- AMD Radeon R9 270X 2GB @ eTeknix
- AMD Radeon R7 260X, R9 270X and R9 280X @ Hardware.info
- Sapphire AMD Radeon R9 280X Vapor-X OC 3GB @ eTeknix
- Radeon R9 270X and R7 260X @ TechSpot
- AMD Radeon R9 270X & R7 260X @ Legion Hardware
- AMD Radeon R9 270X & R7 260X Review @ Hardware Canucks
- AMD Radeon R9 280X 3GB Review @ Hardware Canucks
- Sapphire R9 280X Vapor X @ Kitguru
- AMD R7 260X @ Kitguru
- AMD R9 270X @ Kitguru
Subject: General Tech, Graphics Cards, Cases and Cooling, Systems | October 4, 2013 - 07:19 PM | Scott Michaud
Tagged: valve, Steam Machine
Well, that did not take long.
Valve announced the Steam Machines barely over a week ago but could not provide hardware specifications at the time. While none of these beta units will be available for purchase (the honor of taking your money is reserved for system builders and OEMs), Valve has announced hardware specifications for their beta device.
The raw specifications, or range of them, are:
- GPU: NVIDIA GeForce Titan through GeForce GTX 660 (780 and 760 possible)
- CPU: Intel i7-4770 or i5-4570, or i3-something
- RAM: 16GB DDR3-1600 (CPU), 3GB GDDR5 (GPU)
- Storage: 1TB/8GB Hybrid SSHD
- Power Supply: 450W
- Dimensions: approx. 12" x 12.4" x 2.9"
Really, the only reason I could see for the spread of performance is to avoid pressuring developers into targeting a single reference design. This is odd, though, since every reference design contains an NVIDIA GPU; you would expect a company that wants to encourage an open mind to avoid such a glaring omission. I could speculate about driver compatibility with SteamOS and media streaming, but even that feels far-fetched.
On the geeky side of things: the potential for a GeForce Titan is fairly awesome and, along with the minimum GeForce 660, is the first sign that I might be wrong about this whole media center extender thing. My expectation was that Valve would acknowledge some developers might want a streaming-focused device.
Above all, I somewhat hope Valve is a bit clearer with consumers about their intent... especially if their intent is to be unclear with OEMs for some reason.
Subject: General Tech, Graphics Cards | October 2, 2013 - 09:03 PM | Scott Michaud
Tagged: amd, linux
Last week, NVIDIA published documentation for Nouveau to heal wounds with the open source community. AMD had a better reputation and intends to maintain it. On Tuesday, Alex Deucher published nine PDF documents: 1,178 pages of register and acceleration documentation along with 18 pages of HDA GPU audio programming details, compared to the 42 pages NVIDIA published.
Sure, a page to page comparison is meaningless, but it is clear AMD did not want to be outdone. This is especially true when you consider that some of these documents date back to early 2009. Still, reactionary or not, the open source community should accept the assistance with open arms... and open x86s?
I should note that these documents do not cover Volcanic Islands; they are for everything between Evergreen and Sea Islands.
Subject: General Tech, Graphics Cards | October 2, 2013 - 02:20 PM | Jeremy Hellstrom
Tagged: gaming, ARMA III
Forget Crysis; if you want to hammer your PC, pick up ARMA III and try turning up the settings! Even an i7-3770K @ 4.8GHz and GTX 780s in SLI struggle to render this game with all the graphical bells and whistles turned on. The close-up landscapes and objects are gorgeous with high quality textures, but to truly get into the feel of the game you need to be able to turn up the view distance and number of displayed objects, as you can see from [H]ard|OCP's screenshots below. [H] spent a bit of time breaking down the best playable settings for numerous GPUs from NVIDIA and AMD, as well as showing you the impact that MSAA and PPAA have on visual quality and your PC's performance. If you want to show off the superiority of a high end gaming machine, this is the game for you.
"ARMA III is our focus point for today. It features a large open world environment designed on a massive continent measuring 270 square kilometers. To go along side this massive continent is a max visibility range of 20km. Combine this with ARMA III's impressive looking graphics and we have a game that demands performance."
Here is some more Tech News from around the web:
- What Does It Meaaaaan: Half-Life 3 Trademarked @ Rock, Paper, SHOTGUN
- See CDP Explain The Mad Scope Of The Witcher 3 @ Rock, Paper, SHOTGUN
- GTA 5 Online goes live @ The Inquirer
- AMD spent as much as $8 million on EA/DICE Battlefield 4 deal @ HEXUS
- Co-op Sandbox FTL? – PULSAR Is The Most Exciting Game @ Rock, Paper, SHOTGUN
- EA SPORTS Madden NFL 25 @ Benchmark Reviews
Subject: Graphics Cards | September 30, 2013 - 06:30 PM | Jeremy Hellstrom
Tagged: graphics drivers, catalyst 13.10, beta, windows, linux
Feature highlights of the AMD Catalyst 13.10 Beta driver include:
- Includes 32-bit single GPU and CrossFire game profile for Battlefield 4
- Total War: Rome 2 CrossFire profile update
- CrossFire frame pacing improvements for CPU-bound applications
- Resolves image corruption seen in Autodesk Inventor 2014
- Resolves intermittent black screen when resuming from a S3/S4 sleep state if the display is unplugged during the sleep state on systems supporting AMD Enduro Technology
- Updated AMD Enduro Technology application profiles
o Profile highlights:
- Total War: Rome 2
- Battlefield 4
- Saints Row 4
- Splinter Cell Blacklist
- FIFA 14
Resolved issue highlights:
- System hang when running startx after setting up an Eyefinity desktop
- Permission issue with procfs on kernel 3.10
- System hang observed while running disaster stress test on Ubuntu 12.10
- Hang is observed when running Unigine on Linux
- AC/DC switching is not automatically detected
- Laptop backlight adjustment is broken
- Glxtest failures observed in log file with forcing on Anti-Aliasing
- Cairo-dock is broken
- Severe desktop corruption observed when Compiz is enabled in certain cases
- glClientWaitSync is waiting even when timeout is 0
- C4 Engine gets corruption with GL_ARB_texture_array enabled
Subject: General Tech, Graphics Cards | September 26, 2013 - 03:35 AM | Scott Michaud
Tagged: frame rating, frame pacing, amd
Scott Wasson of The Tech Report conducted an interview a few hours ago with Raja Koduri, head of Graphics Hardware and Software Development at AMD. Part of the interview discussed the frame pacing issues we, as well as The Tech Report, have published about over the last year. In short, the news seems good for owners of Radeon graphics cards, future and even current.
The "Hawaii" powered Radeon R9 290 and R9 290X graphics cards are expected to handle CrossFire pacing acceptably at launch. Clearly, if there is ever a time to fix the problem, it would be in new hardware. Still, this is good news for interested customers; if all goes to plan, you are likely going to have a good experience out of the box.
Current owners of GCN-based video cards, along with potential buyers of the R9 280X and lower upcoming cards, will apparently need to wait for AMD to release a driver to fix these issues. However, this driver is not far off: Koduri, though it is unclear whether on or off the record, stated his intent for an autumn release. This driver is expected to cover frame pacing issues for CrossFire, Eyefinity, and 4K.
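The reason frame pacing matters is that it hides in plain sight: a badly paced CrossFire setup can report the same average FPS as a smooth single card while delivering frames in uneven bursts. A minimal sketch of that idea (the timestamps and numbers below are illustrative, not AMD's or anyone's actual capture methodology):

```python
def frame_intervals(timestamps_ms):
    """Frame-to-frame delivery intervals from presentation timestamps."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def pacing_report(timestamps_ms):
    """Average FPS plus the worst single-frame deviation from the mean interval."""
    intervals = frame_intervals(timestamps_ms)
    avg = sum(intervals) / len(intervals)
    worst = max(abs(i - avg) for i in intervals)
    return {
        "avg_fps": round(1000.0 / avg, 1),
        "worst_deviation_ms": round(worst, 2),
    }

# Evenly paced: a frame every ~16.7 ms (about 60 FPS)
smooth = [i * 16.7 for i in range(11)]

# Badly paced alternation: same average rate, but frames arrive
# in short/long pairs (5 ms, then 28.4 ms) -- the micro-stutter case
stutter = []
t = 0.0
for i in range(11):
    stutter.append(t)
    t += 5.0 if i % 2 == 0 else 28.4

print(pacing_report(smooth))
print(pacing_report(stutter))
```

Both sequences report the same average FPS, but the alternating one shows a worst-case deviation of over 10 ms per frame, which is exactly the kind of stutter an FPS counter alone never reveals.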
Koduri does believe the CrossFire issues were unfortunate and expresses a desire to fix the issue for his customers.
Keep checking PC Perspective for more information as it comes out!
Editor's Note: I just spoke with Raja Koduri as well and he basically reiterated everything that Scott noted in his story on The Tech Report as well. The upcoming 290X will have frame pacing at Eyefinity and 4K resolution at launch while the cards below that in the R9 series, and users of Radeon HD 7000 cards (and likely beyond) will need some more time before the driver is ready. I'll be able to talk quite a bit more about the changes to BOTH architectures very shortly so stay tuned for that.