NVIDIA Launches WHQL Drivers for Battlefield and Batman

Subject: General Tech, Graphics Cards | October 23, 2013 - 12:21 AM |
Tagged: nvidia, graphics drivers, geforce

Mid-June kicked up a storm of poop across the internet when IGN broke the story of AMD's optimization deal for Frostbite 3 games. It was reported that NVIDIA would not receive sample code for those games until after they launched. The article was later updated with a statement from AMD: "... the AMD Gaming Evolved program undertakes no efforts to prevent our competition from optimizing for games before their release."

Now, I assume, the confusion was caused by the then-unannounced Mantle.

nvidia-geforce.png

And, as it turns out, NVIDIA did receive the code for Battlefield 4 prior to launch. Monday, the company launched their 331.58 WHQL-certified drivers which are optimized for Batman: Arkham Origins and Battlefield 4. According to the release notes, you should even be able to use SLI out of the gate. If, on the other hand, you are a Civilization V player, HBAO+ should enhance your shadowing.

They also added a DX11 SLi profile for Watch Dogs... awkwarrrrrd.

Check out the blog post at GeForce.com for a bit more information, read the release notes, or just head over to the drivers page. If you have GeForce Experience installed, it probably already asked you to update.

Source: NVIDIA

PCPer Live! NVIDIA G-Sync Discussion with Tom Petersen, Q&A

Subject: Graphics Cards, Displays | October 20, 2013 - 02:50 PM |
Tagged: video, tom petersen, nvidia, livestream, live, g-sync

UPDATE: If you missed our live stream today that covered NVIDIA G-Sync technology, you can watch the replay embedded below.  NVIDIA's Tom Petersen stops by to talk about G-Sync in both high level and granular detail while showing off some demonstrations of why G-Sync is so important.  Enjoy!!

Last week NVIDIA hosted press and developers in Montreal to discuss a couple of new technologies, the most impressive of which was NVIDIA G-Sync, a new monitor solution that looks to solve the eternal debate of smoothness against latency.  If you haven't read about G-Sync and how impressive it was when first tested on Friday, you should check out my initial write up, NVIDIA G-Sync: Death of the Refresh Rate, that not only does that, but dives into the reason the technology shift was necessary in the first place.

G-Sync essentially functions by altering and controlling the vBlank signal sent to the monitor.  In a normal configuration, vBlank is a combination of the vertical front and back porches and the necessary sync time.  That timing is set at a fixed stepping that determines the effective refresh rate of the monitor; 60 Hz, 120 Hz, etc.  What NVIDIA will now do in the driver and firmware is lengthen or shorten the vBlank signal as desired and will send it when one of two criteria is met.

  1. A new frame has completed rendering and has been copied to the front buffer.  Sending vBlank at this time will tell the screen to grab data from the card and display it immediately.
  2. A substantial amount of time has passed and the currently displayed image needs to be refreshed to avoid brightness variation.

In current display timing setups, the submission of the vBlank signal has been completely independent from the rendering pipeline.  The result was varying frame latency and either horizontal tearing or fixed refresh frame rates.  With NVIDIA G-Sync creating an intelligent connection between rendering and frame updating, the display of PC games is fundamentally changed.
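To make those two criteria concrete, here is a toy sketch of the decision logic in C. The function name and the 33 ms maximum hold time are my own illustrative assumptions, not NVIDIA's actual driver or firmware code:

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative only: roughly when a G-Sync style controller might force a
 * redraw to avoid the brightness variation mentioned above. */
#define MAX_HOLD_MS 33.0

static bool should_send_vblank(bool new_frame_ready, double ms_since_last_vblank)
{
    if (new_frame_ready)                      /* criterion 1: fresh frame in the front buffer */
        return true;
    if (ms_since_last_vblank >= MAX_HOLD_MS)  /* criterion 2: image held too long, refresh it */
        return true;
    return false;                             /* otherwise, keep holding the current image */
}

int main(void)
{
    printf("%d\n", should_send_vblank(true,  5.0));   /* 1: new frame, refresh immediately */
    printf("%d\n", should_send_vblank(false, 10.0));  /* 0: nothing new, keep waiting */
    printf("%d\n", should_send_vblank(false, 40.0));  /* 1: forced refresh of the old frame */
    return 0;
}
```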

Every person that saw the technology, including other media members and even developers like John Carmack, Johan Andersson and Tim Sweeney, came away knowing that this was the future of PC gaming.  (If you didn't see the panel that featured those three developers on stage, you are missing out.)

gsync.jpg

But it is definitely a complicated technology and I have already seen a lot of confusion about it in our comment threads on PC Perspective.  To help the community get a better grasp and to offer them an opportunity to ask some questions, NVIDIA's Tom Petersen is stopping by our offices on Monday afternoon where he will run through some demonstrations and take questions from the live streaming audience.

Be sure to stop back at PC Perspective on Monday, October 21st at 2pm ET / 11am PT as we discuss G-Sync, how it was developed and the various ramifications the technology will have in PC gaming.  You'll find it all on our PC Perspective Live! page on Monday but you can sign up for our "live stream mailing list" as well to get notified in advance!

NVIDIA G-Sync Live Stream

11am PT / 2pm ET - October 21st

PC Perspective Live! Page

placeholder.png

We also want your questions!!  The easiest way to get them answered is to leave them for us here in the comments of this post.  That will give us time to filter through the questions and get the answers you need from Tom.  We'll take questions via the live chat and via Twitter (follow me @ryanshrout) during the event but often there is a lot of noise to deal with.

So be sure to join us on Monday afternoon!

John Carmack, Tim Sweeney and Johan Andersson Talk NVIDIA G-Sync, AMD Mantle and Graphics Trends

Subject: Graphics Cards | October 18, 2013 - 07:55 PM |
Tagged: video, tim sweeney, nvidia, Mantle, john carmack, johan andersson, g-sync, amd

If you weren't on our live stream from the NVIDIA "The Way It's Meant to be Played" tech day this afternoon, you missed a hell of an event.  After the announcement of NVIDIA G-Sync variable refresh rate monitor technology, NVIDIA's Tony Tamasi brought one of the most intriguing panels of developers on stage to talk.

IMG_0009.JPG

John Carmack, Tim Sweeney and Johan Andersson talk for over an hour, taking questions from the audience and even getting into debates amongst themselves in some instances.  Topics included NVIDIA G-Sync of course, AMD's Mantle low-level API, the hurdles facing PC gaming and what direction each luminary is currently on for future development.

If you are a PC enthusiast or gamer you are definitely going to want to listen and watch the video below!

NVIDIA GeForce GTX 780 Ti Released Mid-November

Subject: General Tech, Graphics Cards | October 18, 2013 - 01:21 PM |
Tagged: nvidia, GeForce GTX 780 Ti

So the really interesting news today was G-Sync but that did not stop NVIDIA from sneaking in a new high-end graphics card. The GeForce GTX 780 Ti follows the company's old method of releasing successful products:

  • Attach a seemingly arbitrary suffix to a number
  • ???
  • Profit!

nvidia-780-ti.jpg

In all seriousness, we know basically nothing about this card. It is entirely possible that its architecture is not even based on GK110. We do know it will be faster than a GeForce GTX 780 but we have no frame of reference with regard to the GeForce Titan. The two cards were already so close in performance that Ryan struggled to validate the 780's existence. Imagine how difficult it would be for NVIDIA to wedge yet another product in that gap.

And if it does outperform the Titan, what is its purpose? Sure, Titan is a GPGPU powerhouse if you want double-precision performance without purchasing a Tesla or a Quadro, but that is not really relevant for gamers yet.

We shall see, soon, when we get review samples in. You, on the other hand, will likely see more when the card launches mid-November. No word on pricing.

Source: NVIDIA

NVIDIA Announces G-Sync, Variable Refresh Rate Monitor Technology

Subject: Graphics Cards | October 18, 2013 - 10:52 AM |
Tagged: variable refresh rate, refresh rate, nvidia, gsync, geforce, g-sync

UPDATE: I have posted a more in-depth analysis of the new NVIDIA G-Sync technology: NVIDIA G-Sync: Death of the Refresh Rate.  Thanks for reading!!

UPDATE 2: ASUS has announced the G-Sync enabled version of the VG248QE will be priced at $399.

During a gaming event being held in Montreal, NVIDIA unveiled a new technology for GeForce gamers that the company is hoping will revolutionize the PC and its displays.  Called NVIDIA G-Sync, this new feature will combine changes to the graphics driver as well as changes to the monitor to alter the way refresh rates and Vsync have worked for decades.

gsync.jpg

With standard LCD monitors, gamers are forced to choose between a tear-free experience by enabling Vsync or playing a game with substantial visual anomalies in order to get the best and most efficient frame rates.  G-Sync changes that by allowing a monitor to display refresh rates other than 60 Hz, 120 Hz or 144 Hz, etc. without the horizontal tearing normally associated with turning off Vsync.  Essentially, G-Sync allows a properly equipped monitor to run at a variable refresh rate which will improve the experience of gaming in interesting ways.

gsync2.jpg

This technology will be available soon on Kepler-based GeForce graphics cards but will require a monitor with support for G-Sync; not just any display will work.  The first launch monitor is a variation on the very popular 144 Hz ASUS VG248QE 1920x1080 display and, as we saw with 3D Vision, supporting G-Sync will require licensing and hardware changes.  In fact, NVIDIA claims that the new logic inside the panel's controller is NVIDIA's own design - so you can obviously expect this to only function with NVIDIA GPUs. 

DisplayPort is the only input option currently supported. 

It turns out NVIDIA will actually be offering retrofit kits for current users of the VG248QE at some yet-to-be-disclosed cost.  The first retail sales of G-Sync will ship as a monitor + retrofit kit, as production was just a bit behind.

Using a monitor with a variable refresh rate allows the game to display 55 FPS on the panel at 55 Hz without any horizontal tearing.  It can also display 133 FPS at 133 Hz without tearing.  Anything below the 144 Hz maximum refresh rate of this monitor will be running at full speed without the tearing associated with the lack of vertical sync.
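For a quick worked example of why this matters: on a fixed 60 Hz panel, each refresh interval is 1000/60 ≈ 16.7 ms.  A frame that takes 21 ms to render must either tear mid-scan or, with Vsync on, wait for the next refresh boundary and stay on screen for two intervals (33.3 ms), an effective 30 FPS.  With a variable refresh, the panel simply redraws when the frame is ready, 21 ms later, or roughly 48 FPS.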

vsync04.jpg

The technology that NVIDIA is showing here is impressive when seen in person; and that is really the only way to understand the difference.  High speed cameras and captures will help but, much like 3D Vision, this is a feature that needs to be seen to be appreciated.  How users will react to that roadblock remains to be seen. 

Features like G-Sync show the gaming world that, without the restrictions of consoles, there are quite a few revolutionary steps that can be made to maintain the PC gaming advantage well into the future.  4K displays were a recent example and now NVIDIA G-Sync adds to the list. 

Be sure to stop back at PC Perspective on Monday, October 21st at 2pm ET / 11am PT as we will be joined in-studio by NVIDIA's Tom Petersen to discuss G-Sync, how it was developed and the various ramifications the technology will have in PC gaming.  You'll find it all on our PC Perspective Live! page on Monday but you can sign up for our "live stream mailing list" as well to get notified in advance!

NVIDIA G-Sync Live Stream

11am PT / 2pm ET - October 21st

PC Perspective Live! Page

Source: NVIDIA

NVIDIA GeForce GTX 760 OEMs Silently Announced?

Subject: General Tech, Graphics Cards | October 17, 2013 - 06:56 PM |
Tagged: nvidia, GTX 760 OEM

A pair of new graphics cards has been announced during the first day of "The Way It's Meant To Be Played Montreal 2013", both of which are intended for system builders to integrate into their products. Both cards fall under the GeForce GTX 760 branding with the names "GeForce GTX 760 Ti (OEM)" and "GeForce GTX 760 192-bit (OEM)".

nvidia-geforce-gtx-760-oem-style-3qtr.png

I will place the main specifications of both cards side by side with the stock GeForce GTX 760 for a little bit of reference. Be sure to check out its benchmark.

                     GTX 760     GTX 760 192-bit (OEM)   GTX 760 Ti (OEM)
  Shader Cores       1152        1152                    1344
  Base Clock         980 MHz     823 MHz                 915 MHz
  Boost Clock        1033 MHz    888 MHz                 980 MHz
  Memory Interface   256-bit     192-bit                 256-bit
  Memory Data Rate   6.0 GT/s    5.8 GT/s                6.0 GT/s
  vRAM (capacity)    2 GB        1.5 or 3 GB             2 GB
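For reference, those data rates convert to bandwidth as bus width × data rate ÷ 8: the two 256-bit cards at 6.0 GT/s move about 192 GB/s, while the 192-bit model at 5.8 GT/s manages roughly 139 GB/s.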

Neither card is a slouch and the GTX 760 Ti, especially, seems to be pretty close in performance to the retail product. I could see either being a respectable addition to a Steam Machine. I still cannot understand why, like the gaming bundle, these cards were not announced during the keynote speech.

Or, for that matter, why no-one seems to be reporting on them.

Source: NVIDIA

NVIDIA Holiday Gaming Bundle: Free Games and More!

Subject: General Tech, Graphics Cards | October 17, 2013 - 05:36 PM |
Tagged: shield, nvidia, bundle

The live stream from NVIDIA this morning was full of technologies focused on the PC gaming ecosystem, including mobile (but still PC-like) platforms. Today they also announced a holiday gaming bundle for their GeForce cards, although that missed the stream for some reason.

The bundle is separated into two tiers depending on your class of video card.

nvidia-geforce-gtx-holiday-bundle-with-shield-tiers-v2-640px.png

If you purchase a GeForce GTX 770, 780, or Titan from a participating retailer (including online), you will receive Splinter Cell: Blacklist, Batman: Arkham Origins, and Assassin's Creed IV: Black Flag along with a $100-off coupon for an NVIDIA SHIELD.

If, on the other hand, you purchase a GTX 760, 680, 670, 660 Ti, or 660 from a participating retailer (again, including online), you will receive Splinter Cell: Blacklist and Assassin's Creed IV: Black Flag along with a $50-off coupon for the NVIDIA SHIELD.

The current price at Newegg for an NVIDIA SHIELD is $299 USD. With a $100 discount, this pushes the price down to $199. The $200 price point is a psychological barrier for videogame systems; once a device dips under it, customers tend to jump. Reaching sub-$200 could be a big deal even for customers not already on the fence, especially when you consider PC streaming. Could be.

Assume you were already planning on upgrading your GPU. Would you be interested in adding in an NVIDIA SHIELD for an extra $199?

AMD Allows Two Early R9 290X Benchmarks

Subject: General Tech, Graphics Cards | October 17, 2013 - 04:37 PM |
Tagged: radeon, R9 290X, amd

The NDA on AMD R9 290X benchmarks has not yet lifted but AMD was in Montreal to provide two previews: BioShock Infinite and Tomb Raider, both at 4K (3840 x 2160). Keep in mind, these scores are provided by AMD and definitely do not represent results from our monitor-capture solution. Expect more detailed results from us, later, as we do some Frame Rating.

amd-gpu14-06.png

The test machine used in both setups contains:

  • Intel Core i7-3960X at 3.3 GHz
  • MSI X79A-GD65
  • 16GB of DDR3-1600
  • Windows 7 SP1 64-bit
  • NVIDIA GeForce GTX 780 (331.40 drivers) / AMD Radeon R9 290X (13.11 beta drivers)

The R9 290X is configured in its "Quiet Mode" during both benchmarks. This is particularly interesting, to me, as I was unaware of such a feature (it has been a while since I last used a desktop AMD/ATi card). I would assume this is a fan and power profile to keep noise levels as low as possible for some period of time. A quick Google search suggests this feature is new with the Radeon Rx200-series cards.

bioshock_screen2.jpg

BioShock Infinite is quite demanding at 4K with ultra quality settings. Both cards maintain an average framerate above 30 FPS.

AMD R9 290X "Quiet Mode": 44.25 FPS

NVIDIA GeForce GTX 780: 37.67 FPS

tomb_raider1.jpg

(Update 1: 4:44pm EST) AMD confirmed TressFX is disabled in these benchmark scores. It is, however, enabled at the booth if you are present in Montreal. (end of update 1)

Tomb Raider is also a little harsh at those resolutions. Unfortunately, the provided results are ambiguous about whether or not TressFX was enabled during the benchmarks. The summary explicitly claims TressFX is enabled, while the string of settings contains "Tressfx=off". Clearly, one of the two entries is a typo. We are currently trying to get clarification. In the meantime:

AMD R9 290X "Quiet Mode": 40.2 FPS

NVIDIA GeForce GTX 780: 34.5 FPS

Notice how neither of these results is compared to a GeForce Titan. Recent leaks suggest a retail price for AMD's flagship card in the low-$700 range. The GeForce GTX 780, on the other hand, resides in the $650-700 USD range.

It seems pretty clear, to me, that cost drove this comparison rather than performance.

Source: AMD

Still Not Settling? Is AMD Going to Continue Never Settle?

Subject: General Tech, Graphics Cards | October 17, 2013 - 03:46 PM |
Tagged: amd, radeon

In summary, "We don't know yet".

amd7990.jpg

We do know of a story posted by Fudzilla which cited Roy Taylor, VP of Global Channel Sales, as a source confirming the reintroduction of Never Settle for the new "Rx200" Radeon cards. Adding credibility, Roy Taylor retweeted the story via his official account. This tweet is still there as I write this post.

The Tech Report, after publishing the story, was contacted by Robert Hallock of AMD Gaming and Graphics. The official word, now, is that AMD does not have any announcements regarding bundles for new products. He is also quoted, "We continue to consider Never Settle bundles as a core component of AMD Gaming Evolved program and intend to do them again in the future".

So, I (personally) see promise that we will get a new Never Settle bundle. For the moment, AMD is officially silent on the matter. Also, we do not know (and, possibly, neither does AMD at this time) which games will be included, how many a user can claim, or if there will even be a choice at all.

Source: Tech Report

NVIDIA "The Way It's Meant to be Played" 2013 Press Event Live Blog

Subject: General Tech, Graphics Cards | October 17, 2013 - 01:45 AM |
Tagged: video, nvidia, live blog, live

Last month it was AMD hosting the media out in sunny Hawaii for a #GPU14 press event.  This week NVIDIA is hosting a group of media in Montreal for a two-day event built around "The Way It's Meant to be Played". 

twimtbp.png

NVIDIA promises some very impressive software and technology demonstrations will be on hand, and you can take it all in with our live blog and (hopefully) live stream on our PC Perspective Live! page.

It starts at 10am ET / 7am PT so join us bright and early!!  And don't forget to stop by tomorrow for an even more exciting Day 2!!

Altera Does FPGAs with OpenCL

Subject: General Tech, Graphics Cards | October 16, 2013 - 10:00 PM |
Tagged: FPGA, Altera

(Update 10/17/2013, 6:13 PM) Apparently I messed up inputting this into the website last night. To compare FPGAs with current hardware, the Altera Stratix 10 is rated at more than 10 TeraFLOPs compared to the Tesla K20X at ~4 TeraFLOPs or the GeForce Titan at ~4.5 TeraFLOPs. All figures are single precision. (end of update)

Field Programmable Gate Arrays (FPGAs) are not general purpose processors; they are not designed to perform any random instruction at any random time. If you have a specific set of instructions that you want performed efficiently, you can spend a couple of hours compiling your function(s) to an FPGA which will then be the hardware embodiment of your code.

This is similar to an Application-Specific Integrated Circuit (ASIC) except that, for an ASIC, it is the factory that bakes your application into the hardware. Many (actually, to my knowledge, almost all) FPGAs can even be reprogrammed if you can spare those few hours to configure them again.

14nmPressGraphic3.jpg

Altera is a manufacturer of FPGAs and one of the few companies allowed access to Intel's 14nm fabrication facilities. Rahul Garg of Anandtech recently published a story which discussed compiling OpenCL kernels to FPGAs using Altera's compiler.

Now this is pretty interesting.

The design of OpenCL splits work between "host" and "kernel". The host application is written in some arbitrary language and follows typical programming techniques. Occasionally, the application will run across a large batch of instructions. A particle simulation, for instance, will require position information to be computed. Rather than having the host code loop through every particle and perform some complex calculation, what happens to each particle could be "a kernel" which the host adds to the queue of some accelerator hardware. Normally, this is a GPU with its thousands of cores chunked into groups of usually 32 or 64 (vendor-specific).
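For a sense of what a kernel actually is, here is a minimal OpenCL C sketch of that particle example. The kernel and argument names are hypothetical, not from Altera's or anyone else's actual code:

```c
// Hypothetical kernel: advance every particle by one time step.
// The host enqueues this over N particles; a GPU maps each work-item
// onto its grouped cores, while Altera's compiler would instead
// synthesize dedicated compute pipelines for it on the FPGA.
__kernel void step_particles(__global float4 *position,
                             __global const float4 *velocity,
                             const float dt)
{
    size_t i = get_global_id(0);       // one work-item per particle
    position[i] += velocity[i] * dt;   // simple Euler integration
}
```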

OpenCL_Logo-thumb.png

An FPGA, on the other hand, can lock itself to that specific set of instructions. Within those few hours of compilation, it can configure some arbitrary number of compute paths and then just churn through each kernel call until it is finished. The compiler knows exactly the application the chip will need to perform while the host code runs on the CPU.

This is obviously designed for enterprise applications, at least as far into the future as we can see. Current models are apparently priced in the thousands of dollars but, as the article points out, have the potential to out-perform a 200W GPU at just a tenth of the power. This could be very interesting for companies, perhaps a film production house, that want to install accelerator cards for sub-d surfaces or ray tracing but would like to develop the software in-house and occasionally update their code after business hours.

Regardless of the potential market, an FPGA-based add-in card simply makes sense for OpenCL and its architecture.

Source: Anandtech

Microsoft Confirms AMD Mantle Not Compatible with Xbox One

Subject: Graphics Cards | October 14, 2013 - 08:52 PM |
Tagged: xbox one, microsoft, Mantle, dx11, amd

Microsoft posted a new blog on its Windows site that discusses some of the new features of the latest DirectX on Windows 8.1 and the upcoming Xbox One.  Of particular interest was a line that confirms what I have said all along about the much-hyped AMD Mantle low-level API: it is not compatible with Xbox One.

We are very excited that with the launch of Xbox One, we can now bring the latest generation of Direct3D 11 to console. The Xbox One graphics API is “Direct3D 11.x” and the Xbox One hardware provides a superset of Direct3D 11.2 functionality. Other graphics APIs such as OpenGL and AMD’s Mantle are not available on Xbox One.

mantle.jpg

What does this mean for AMD?  Nothing really changes except some of the common online discussion about how easy it would now be for developers to convert games built for the console to the AMD-specific Mantle API.  AMD claims that Mantle offers a significant performance advantage over DirectX and OpenGL by giving developers that choose to implement support for it closer access to the hardware without much of the software overhead found in other APIs.

Josh summed it up in a recent editorial.

This is what Mantle does.  It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software.  This lowers memory and CPU usage, it decreases latency, and because there are fewer “moving parts” AMD claims that they can do 9x the draw calls with Mantle as compared to DirectX.  This is a significant boost in overall efficiency.  Before everyone gets too excited, we will not see a 9x improvement in overall performance with every application.  A single HD 7790 running in Mantle is not going to power 3 x 1080P monitors in Eyefinity faster than a HD 7970 or GTX 780 (in Surround) running in DirectX.  Mantle shifts the bottleneck elsewhere.

I still believe that AMD Mantle could bring interesting benefits to AMD Radeon graphics cards on the PC but I think this official statement from Microsoft will dampen some of the overexcitement.

Also worth noting is this comment about the DX11 implementation on the Xbox One:

With Xbox One we have also made significant enhancements to the implementation of Direct3D 11, especially in the area of runtime overhead. The result is a very streamlined, “close to metal” level of runtime performance. In conjunction with the third generation PIX performance tool for Xbox One, developers can use Direct3D 11 to unlock the full performance potential of the console.

So while Windows and the upcoming Xbox One will share an API, there will still be performance advantages for games on the console thanks to the nature of a static hardware configuration.

Source: Microsoft

Rumor: Web Developer Tools Leaked the R9 290X Price?

Subject: General Tech, Graphics Cards | October 11, 2013 - 04:47 PM |
Tagged: amd, R9 290X

When you deal with the web, almost nothing is hidden. Browsers (and some extensions) have access to just about everything, on screen and off. Anyone browsing a site or app can inspect its contents and even modify them. That last part could be key.

amd-r9-290x-newegg01.jpg

TechPowerUp got their hands on a screenshot of developer tools inspecting the Newegg website. A few elements are hidden on the right-hand side within the "Coming Soon" container. One element, with an id of "singleFinalPrice", is set to "visibility: hidden;" with a price as its contents. In the TechPowerUp screenshot, this price is listed as "729.99" USD.

amd-r9-290x-newegg02.png

Of course, good journalism means confirming this yourself. As of the time I checked, this value is listed as "9999.99" USD. This means one of two things: either Newegg changed the value to a placeholder after the leak was discovered, or the source of TechPowerUp's screenshot used those same developer tools to modify the content. It is actually a remarkably easy thing to do... here I change the value to 99 cents by right-clicking on the element and modifying the HTML.

amd-r9-290x-newegg03.png

No, the R9 290X is not 99 cents.

So I would take these screenshots with a grain of salt while we only have access to one source. That said, $729.99 does sound like a reasonable price point for AMD to release a Titan-competitor at. Of course, that is exactly what a hoaxer would want.

But, as it stands right now, I would not get your hopes up. An MSRP of ~$699-$749 USD sounds legitimate but we do not have even a second source, at the moment, to confirm that. Still, this might be something our readers would like to know.

Source: TechPowerUp

Valve Confirms Steam Machines are not NVIDIA Exclusive

Subject: General Tech, Graphics Cards, Systems | October 10, 2013 - 06:59 PM |
Tagged: amd, nvidia, Intel, Steam Machine

This should be little-to-no surprise for the viewers of our podcast, as this story was discussed there, but Valve has confirmed AMD and Intel graphics are compatible with Steam Machines. Doug Lombardi of Valve commented by email to, apparently, multiple sources including Forbes and Maximum PC.

steam-os-machines.png

Last week, we posted some technical specs of our first wave of Steam Machine prototypes. Although the graphics hardware that we've selected for the first wave of prototypes is a variety of NVIDIA cards, that is not an indication that Steam Machines are NVIDIA-only. In 2014, there will be Steam Machines commercially available with graphics hardware made by AMD, NVIDIA, and Intel. Valve has worked closely together with all three of these companies on optimizing their hardware for SteamOS, and will continue to do so into the foreseeable future.

Ryan and the rest of the podcast crew found the whole situation "odd". They could not understand why AMD referred the press to Doug Lombardi rather than circulate a canned statement from him. It was also strange that NVIDIA had an exclusive on the beta program when AMD-equipped machines will not be commercially available until 2014.

As I have said in the initial post: for what seems to be deliberate non-committal to a specific hardware spec, why limit to a single graphics provider?

Source: Maximum PC

The custom R9s have arrived

Subject: Graphics Cards | October 10, 2013 - 03:29 PM |
Tagged: radeon, r9 270x, GCN, sapphire, toxic edition, factory overclocked

We saw the release of the reference R9s yesterday and today we get to see the custom models, such as the Sapphire TOXIC R9 270X which Legit Reviews just finished benchmarking.  The TOXIC sports a 100MHz overclock on both GPU and RAM as well as a custom cooler with three fans.  While it remains a two-slot GPU, it is longer than the reference model and requires a full foot of clearance inside the case.  Read on to see what kind of performance boost you can expect and how much further you can push this card.

sapphire-amd-r9-270x-645x486.jpg

"When it comes to discrete graphics, the $199 price point is known as the gamer’s sweet spot by both AMD and NVIDIA. This is arguably the front line in the battle for your money when it coming to gaming graphics cards. The AMD Radeon R9 270X is AMD’s offering to gamers at this competitive price point. Read on to see how it performs!"

Here are some more Graphics Card articles from around the web:

Graphics Cards

Win a copy of Batman: Arkham Origins courtesy of NVIDIA

Subject: Editorial, General Tech, Graphics Cards | October 10, 2013 - 03:28 PM |
Tagged: podcast, nvidia, contest, batman arkham origins

UPDATE: We picked our winner for week 1 but now you can enter for week 2!!!  See the new podcast episode listed below!!

Back in August NVIDIA announced that they would be teaming up with Warner Bros. Interactive to include copies of the upcoming Batman: Arkham Origins game with select NVIDIA GeForce graphics cards. While that's great and all, wouldn't you rather get one for free next week from PC Perspective?

batmanao.jpg

Great, you're in luck!  We have a handful of keys to give out to listeners and viewers of the PC Perspective Podcast.  Here's how you enter:

  1. Listen to or watch episode #272 of the PC Perspective Podcast and listen for the "secret phrase" as mentioned in the show!
  2. Subscribe to our RSS feed for the podcast or subscribe to our YouTube channel.
  3. Fill out the form at the bottom of this podcast page with the "secret phrase" and you're entered!

I'll draw a winner before the next podcast and announce it on the show!  We'll give away one copy in each of the next two weeks!  Our thanks goes to NVIDIA for supplying the Batman: Arkham Origins keys for this contest!!

No restrictions on winning, so good luck!!

ASUS Announces AMD Radeon R9 and R7 200 Series Graphics Cards

Subject: Graphics Cards | October 9, 2013 - 12:32 PM |
Tagged: R9 280X DirectCU II, R9 270X DirectCU II, R7 260X DirectCU II, R7 250, R7 240, Matrix R9 280X, asus

Editor's Note: Be sure to check out our full review of the new AMD Radeon R9 280X, R9 270X and R7 260X that includes the ASUS overclocked 280X!

Fremont, CA (October 8, 2013) - ASUS today announces the launch of its R9 200 and R7 200 Series graphics cards, powered by the latest AMD Radeon R9 and R7 series graphics-processing units (GPUs). As dedicated gamers have come to expect from Republic of Gamers (ROG), the new Matrix R9 280X graphics card features exclusive technologies, overclocked core speeds and performance-enhancing options.

The new R9 280X, R9 270X and R7 260X DirectCU II models are overclocked to perform faster than reference designs while also featuring DIGI+ voltage-regulator modules (VRMs) for a smooth and stable power supply and GPU Tweak software for tuning the graphics card. The new R7 250 and R7 240 cards benefit from many exclusive ASUS technologies and tools including Super Alloy Power components for superior stability, dust-proof fans for improved card lifespan and GPU Tweak.

image01.jpg

Matrix — Push the limits
The Matrix R9 280X graphics cards benefit from a copper-based thermal design that conducts heat away from the GPU with greater efficiency. Compared to reference Radeon R9 280X designs, ROG Matrix R9 280X cards operate up to 20% cooler and three times (3X) quieter. Coupled with dual 100mm cooling fans, gamers can enjoy ultra-cool and stable game play with minimal noise. The Matrix R9 280X Platinum Edition’s core runs at a blistering 1100MHz — 100MHz higher than reference.

The Matrix R9 280X graphics card allows for overclocking on a purely hardware level with VGA Hotwire connections and TweakIt for voltage control, a Turbo Fan button to crank the fan up to 100% and a Safe Mode button to instantly return the GPU to the factory BIOS. It also includes DIGI+ voltage-regulator modules (VRMs) for smooth and stable power, and GPU Tweak tuning software that allows users to squeeze the last drop of performance out of their graphics card.

image02.jpg

DirectCU II — Faster, quieter and cooler, even in the heat of battle
ASUS DirectCU II cooling technology places highly conductive copper cooling pipes in direct contact with a card’s GPU so heat dissipates quickly and with greater efficiency. Compared with reference Radeon R9 and R7 designs, ASUS R9 280X, R9 270X and R7 260X with DirectCU II allow the latest AMD Radeon GPUs to run up to 20% cooler and three times (3X) quieter, so gamers can enjoy ultra-stable play with minimal noise.

ASUS R9 280X, R9 270X and R7 260X are all equipped with exclusive ASUS DIGI+ VRM with Super Alloy Power technology. Paired with Super Alloy Power solid-state capacitors, concrete-core chokes and hardened MOSFETs, DIGI+ VRM delivers multi-phase power and digital voltage regulation for increased graphics card stability and cleaner power, even during the most intense GPU activities.

The fans of ASUS R9 280X, R9 270X, and R7 260X DirectCU II are all dust-proof, reducing debris accumulation and retaining peak performance over a longer lifespan. In addition, the ASUS R9 280X DirectCU II features an exclusive CoolTech fan. This innovative fan’s hybrid blade and bearing design, with an inner radial blower and outer flower-shaped blades, delivers multi-directional airflow to accelerate heat removal and maintain cooler and quieter operation.

image03.jpg

R7 250 and R7 240 — Super Alloy Power components and dust-proof fans for superior stability and longevity
The ASUS R7 250 and R7 240 graphics cards both include exclusive Super Alloy Power technology. Super Alloy Power’s solid-state capacitors and hardened MOSFETs all withstand much greater stress and heat due to the application of specially-formulated materials — increasing reliability and overall card lifespan. Compared with reference designs, ASUS R7 250 and R7 240’s Super Alloy Power components deliver 35%-cooler operation and a lifespan that’s up to two-and-a-half times (2.5X) longer.

Additionally, the fans on the ASUS R7 250 and R7 240 are extremely resilient and dust-proof. The dust-proof fans ensure that even the smallest airborne particles are barred, reducing debris accumulation and retaining peak performance over a longer lifespan — typically improving lifespan by up to 25% compared to reference fans.

GPU Tweak — Easy overclocking and online streaming
The included ASUS GPU Tweak utility gives R9 280X, R9 270X, R7 260X, R7 250, and R7 240 users intuitive control over GPU and video-memory clock speeds and voltages, cooling-fan speeds and power-consumption thresholds – so they can overclock easily with confidence. Users can create multiple performance profiles for on-demand switching of custom settings for different games.

GPU Tweak now includes Live Streaming, an online-streaming tool that lets users share on-screen action over the internet in real time – so others can watch live gaming sessions. It is even possible to add scrolling text, pictures and webcam images to the streaming window.

Source: ASUS

Hello again Tahiti

Subject: Graphics Cards | October 8, 2013 - 05:30 PM |
Tagged: amd, GCN, graphics core next, hd 7790, hd 7870 ghz edition, hd 7970 ghz edition, r7 260x, r9 270x, r9 280x, radeon, ASUS R9 280X DirectCU II TOP

AMD's rebranded cards have arrived, though with a few improvements to the GCN architecture that we already know so well.  This particular release seems to be focused on price for performance, which is certainly not a bad thing in these uncertain times.  The 7970 GHz Edition launched at $500, while the new R9 280X will arrive at $300; that is a rather significant price drop and one which we hope doesn't damage AMD's bottom line too badly in the coming quarters.  [H]ard|OCP chose the ASUS R9 280X DirectCU II TOP to test, with a custom PCB from ASUS and a mild overclock which helped it pull ahead of the 7970 GHz.  AMD has tended to lead off new graphics card families with the low and midrange models, so we have yet to see the top-of-the-line R9 290X in action.

Ryan's review, including frame pacing, can be found right here.

H_1381134778LXpy2pFUWk_1_6_l.jpg

"We evaluate the new ASUS R9 280X DirectCU II TOP video card and compare it to GeForce GTX 770 and Radeon HD 7970 GHz Edition. We will find out which video card provides the best value and performance in the $300 price segment. Does it provide better performance a than its "competition" in the ~$400 price range?"

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Steam Machine Specifications Revealed...?

Subject: General Tech, Graphics Cards, Cases and Cooling, Systems | October 4, 2013 - 07:19 PM |
Tagged: valve, Steam Machine

Well, that did not take long.

Valve announced the Steam Machines barely over a week ago and could not provide hardware specifications at the time. While none of these prototypes will be available for purchase (the honor of taking your money is reserved for system builders and OEMs), Valve has now announced hardware specifications for their beta device.

Rather, they announced a few of them?

steam-os-machines.png

The raw specifications, or range of them, are:

  • GPU: NVIDIA GeForce Titan through GeForce GTX 660 (780 and 760 possible)
  • CPU: Intel i7-4770 or i5-4570, or i3-something
  • RAM: 16GB DDR3-1600 (CPU), 3GB GDDR5 (GPU)
  • Storage: 1TB/8GB Hybrid SSHD
  • Power Supply: 450W
  • Dimensions: approx. 12" x 12.4" x 2.9"

Really the only reason I could see for the spread of performance is to not pressure developers into targeting a single reference design. This is odd, though, since every reference design contains an NVIDIA GPU; you would expect a company that wants to encourage an open mind to avoid such a glaring omission. I could speculate about driver compatibility with SteamOS and media streaming but even that feels far-fetched.

On the geeky side of things: the potential for a GeForce Titan is fairly awesome and, along with the minimum GeForce 660, is the first sign that I might be wrong about this whole media center extender thing. My expectation was that Valve would acknowledge some developers might want a streaming-focused device.

Above all, I somewhat hope Valve is a bit more clear to consumers with their intent... especially if their intent is to be unclear with OEMs for some reason.

AMD Publishes GPU Guides for Open Source Community

Subject: General Tech, Graphics Cards | October 2, 2013 - 09:03 PM |
Tagged: amd, linux

Last week, NVIDIA published documentation for Nouveau to heal wounds with the open source community. AMD has long had a better reputation there and intends to maintain it. On Tuesday, Alex Deucher published 9 PDF documents: 1178 pages of register and acceleration documentation, along with 18 pages of HDA GPU audio programming details, compared to the 42 pages NVIDIA published.

amd-new2.png

Sure, a page-to-page comparison is meaningless, but it is clear AMD did not want to be outdone. This is especially true when you consider that some of these documents date back to early 2009. Still, reactionary or not, the open source community should accept the assistance with open arms... and open x86s?

I should note that these documents do not cover Volcanic Islands; they are for everything between Evergreen and Sea Islands.

Source: AMD