Fall of a Titan, check out the R9 290X

Subject: Graphics Cards | October 24, 2013 - 02:38 PM |
Tagged: radeon, R9 290X, kepler, hawaii, amd

If you didn't stay up to watch our live coverage of the R9 290X release after the podcast last night, you missed a chance to have your questions answered, but you will be able to watch the recording later on.  The R9 290X arrived today, bringing 4K and CrossFire reviews as well as single GPU testing on many a site, including PCPer of course.  You don't just have to take our word for it; [H]ard|OCP was also putting together a review of AMD's Titan killer.  Their benchmarks included some games we haven't adopted yet, such as ARMA III.  Check out their results and compare them to ours; AMD really has a winner here.


"AMD is launching the Radeon R9 290X today. The R9 290X represents AMD's fastest single-GPU video card ever produced. It is priced to be less expensive than the GeForce GTX 780, but packs a punch on the level of GTX TITAN. We look at performance, the two BIOS mode options, and even some 4K gaming."


Source: [H]ard|OCP
Manufacturer: AMD

A bit of a surprise

Okay, let's cut to the chase here: it's late, we are rushing to get our articles out, and I think you all would rather see our testing results NOW rather than LATER.  The first thing you should do is read my review of the AMD Radeon R9 290X 4GB Hawaii graphics card which goes over the new architecture, new feature set, and performance in single card configurations. 

Then, you should continue reading below to find out how the new XDMA, bridge-less CrossFire implementation actually works in both single panel and 4K (tiled) configurations.


 

A New CrossFire For a New Generation

CrossFire has caused a lot of problems for AMD in recent months (and a lot of problems for me as well).  But AMD continues to make strides in correcting the frame pacing issues associated with CrossFire configurations, and the new R9 290X moves things forward another step.

Without the CrossFire bridge connector on the 290X, all of the CrossFire communication and data transfer occurs over the PCI Express bus that connects the cards to the rest of the system.  AMD claims that this new XDMA interface was designed with Eyefinity and UltraHD resolutions in mind (the subject of our most recent article).  By accessing the GPU's memory directly over PCIe, AMD claims it can alleviate the bandwidth and sync issues that were causing problems with Eyefinity and tiled 4K displays.

Even better, this updated version of CrossFire is said to be compatible with the frame pacing updates to the Catalyst driver that improve the multi-GPU experience for end users.


When an extra R9 290X accidentally fell into my lap, I decided to take it for a spin.  And if you have followed my graphics testing methodology over the past year, then you'll understand the importance of these tests.

Continue reading our article Frame Rating: AMD Radeon R9 290X CrossFire and 4K Preview Testing!!

Manufacturer: AMD

A slightly new architecture

Note: We also tested the new AMD Radeon R9 290X in CrossFire and at 4K resolutions; check out that full Frame Rating story right here!!

Last month AMD brought media, analysts, and customers out to Hawaii to talk about a new graphics chip coming out this year.  As you might have guessed from the location, the code name for this GPU was, in fact, Hawaii. It was targeted at the high end of the discrete graphics market to take on the likes of the GTX 780 and GTX TITAN from NVIDIA.

Earlier this month we reviewed the AMD Radeon R9 280X, R9 270X, and the R7 260X. None of these were based on that new GPU.  Instead, these cards were all rebrands and repositionings of existing hardware in the market (albeit at reduced prices).  Those lower prices made the R9 280X one of our favorite GPUs of the moment, as it offers a price-to-performance ratio currently unmatched by NVIDIA.

But today is a little different: today we are talking about a much more expensive product that has to live up to some pretty lofty goals and ambitions set forth by the AMD PR and marketing machine.  At $549 MSRP, the new AMD Radeon R9 290X will become the flagship of the Radeon brand.  The question is: to where does that ship sail?

 

The AMD Hawaii Architecture

To be quite upfront about it, the Hawaii design is very similar to that of the Tahiti GPU from the Radeon HD 7970 and R9 280X cards.  Based on the same GCN (Graphics Core Next) architecture that AMD assured us would be its long-term vision, Hawaii ups the ante in a few key areas while maintaining the same core.


Hawaii is built around Shader Engines, of which the R9 290X has four.  Each of these includes 11 CUs (compute units), and each CU holds four 16-wide SIMD arrays, for 64 stream processors per CU.  Doing the quick math brings us to a total stream processor count of 2,816 on the R9 290X.
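
If you want to check that math yourself, here it is spelled out as a quick Python sketch; the 16-lane SIMD width is the standard GCN figure and is assumed here rather than quoted from AMD's materials.

    # Hawaii shader math, spelled out (assumes the standard 16-lane GCN SIMD width)
    shader_engines = 4      # Shader Engines on the R9 290X
    cus_per_engine = 11     # compute units per Shader Engine
    simds_per_cu   = 4      # SIMD arrays per CU
    lanes_per_simd = 16     # stream processors per SIMD array (standard GCN width)

    print(shader_engines * cus_per_engine * simds_per_cu * lanes_per_simd)  # 2816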

Continue reading our review of the AMD Radeon R9 290X 4GB Graphics Card!!

AMD Aggressively Targets Professional GPU Market

Subject: General Tech, Graphics Cards | October 23, 2013 - 07:30 PM |
Tagged: amd, firepro

Currently AMD holds 18% market share with their FirePro line of professional GPUs. This compares to NVIDIA, which owns 81% with Quadro. I assume the "other" category is the sum of S3 and Matrox, who together command the remaining 1% of the professional market (just the professional market).

According to Jon Peddie of JPR, as reported by X-Bit Labs, AMD intends to wrestle back revenue it has left unguarded for NVIDIA. "After years of neglect, AMD’s workstation group, under the tutorage of Matt Skyner, has the backing and commitment of top management and AMD intends to push into the market aggressively." They have already gained share this year.


During AMD's 3rd Quarter (2013) earnings call, CEO Rory Read outlined the importance of the professional graphics market.

We also continue to make steady progress in another of growth businesses in the third quarter as we delivered our fifth consecutive quarter of revenue and share growth in the professional graphics area. We believe that we can continue to gain share in this lucrative part of the GPU market based on our product portfolio, design wins in flight, and enhanced channel programs.

On the same conference call (actually before and after the professional graphics sound bite), Rory noted their renewed push into the server and embedded SoC markets with 64-bit x86 and 64-bit ARM processors. They will be the only company manufacturing both x86 and ARM solutions, which should be an interesting proposition for an enterprise in need of both. Why deal with two vendors?

Either way, AMD will probably be refocusing on the professional and enterprise markets for the near future. For the rest of us, this hopefully means that AMD has a stable (and confident) roadmap in the processor and gaming markets. If that is the case, a profitable Q3 is definitely a good start.

Taking the ASUS R9 280X DirectCU II TOP as far as it can go

Subject: Graphics Cards | October 23, 2013 - 06:20 PM |
Tagged: amd, overclocking, asus, ASUS R9 280X DirectCU II TOP, r9 280x

Having already seen what the ASUS R9 280X DirectCU II TOP can do at default speeds, the obvious next step, once they had time to fully explore the options, was for [H]ard|OCP to see just how far this GPU can overclock.  To make a long story short, they went from a default clock of 1070MHz up to 1230MHz and pushed the RAM from 6.4GHz to 6.6GHz, though the voltage needed to be bumped from 1.2v to 1.3v.  The actual frequencies are nowhere near as important as the effect on gameplay, though; to see those results you will have to click through to the full article.
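
For a sense of how much headroom that represents, here is a quick back-of-the-envelope calculation, just a sketch using only the clocks quoted above:

    # Percentage gains from the overclock [H]ard|OCP reached
    core_default, core_oc = 1070, 1230   # MHz
    mem_default,  mem_oc  = 6.4, 6.6     # GHz effective

    print(f"core: +{(core_oc - core_default) / core_default:.1%}")   # core: +15.0%
    print(f"memory: +{(mem_oc - mem_default) / mem_default:.1%}")    # memory: +3.1%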


"We take the new ASUS R9 280X DirectCU II TOP video card and find out how high it will overclock with GPU Tweak and voltage modification. We will compare performance to an overclocked GeForce GTX 770 and find out which card comes out on top when pushed to its overclocking limits."


Source: [H]ard|OCP

NVIDIA Launches WHQL Drivers for Battlefield and Batman

Subject: General Tech, Graphics Cards | October 23, 2013 - 12:21 AM |
Tagged: nvidia, graphics drivers, geforce

Mid-June kicked up a storm of poop across the internet when IGN broke the story about AMD's optimizations for Frostbite 3. It was reported that NVIDIA would not receive sample code for those games until after they launched. The article was later updated with a statement from AMD: "... the AMD Gaming Evolved program undertakes no efforts to prevent our competition from optimizing for games before their release."

Now, I assume, the confusion was caused by the then-unannounced Mantle.


And, as it turns out, NVIDIA did receive the code for Battlefield 4 prior to launch. On Monday, the company launched their 331.58 WHQL-certified drivers, which are optimized for Batman: Arkham Origins and Battlefield 4. According to the release notes, you should even be able to use SLI out of the gate. If, on the other hand, you are a Civilization V player, HBAO+ should enhance your shadowing.

They also added a DX11 SLI profile for Watch Dogs... awkwarrrrrd.

Check out the blog at GeForce.com for a bit more information, read the release notes, or just head over to the drivers page. If you have GeForce Experience installed, it probably already asked you to update.

Source: NVIDIA

PCPer Live! NVIDIA G-Sync Discussion with Tom Petersen, Q&A

Subject: Graphics Cards, Displays | October 20, 2013 - 02:50 PM |
Tagged: video, tom petersen, nvidia, livestream, live, g-sync

UPDATE: If you missed our live stream today that covered NVIDIA G-Sync technology, you can watch the replay embedded below.  NVIDIA's Tom Petersen stops by to talk about G-Sync in both high level and granular detail while showing off some demonstrations of why G-Sync is so important.  Enjoy!!

Last week NVIDIA hosted press and developers in Montreal to discuss a couple of new technologies, the most impressive of which was NVIDIA G-Sync, a new monitor solution that looks to settle the eternal trade-off between smoothness and latency.  If you haven't read about G-Sync and how impressive it was when first tested on Friday, you should check out my initial write up, NVIDIA G-Sync: Death of the Refresh Rate, which not only covers that, but also dives into the reason the technology shift was necessary in the first place.

G-Sync essentially functions by altering and controlling the vBlank signal sent to the monitor.  In a normal configuration, vBlank is a combination of the vertical front and back porch and the necessary sync time.  That timing is set at a fixed stepping that determines the effective refresh rate of the monitor: 60 Hz, 120 Hz, etc.  What NVIDIA will now do in the driver and firmware is lengthen or shorten the vBlank signal as desired, sending it when one of two criteria is met.

  1. A new frame has completed rendering and has been copied to the front buffer.  Sending vBlank at this time tells the screen to grab data from the card and display it immediately.
  2. A substantial amount of time has passed and the currently displayed image needs to be refreshed to avoid brightness variation.
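
To make that decision logic concrete, here is a toy sketch of those two triggers in Python. Every name and constant below is illustrative only; NVIDIA has not published the actual module logic or its timing limits.

    import time

    MIN_INTERVAL = 1 / 144   # assumed: the panel's maximum refresh rate caps how soon vBlank can be sent
    MAX_INTERVAL = 1 / 30    # assumed timeout: refresh anyway to avoid brightness variation

    def gsync_loop(frame_ready, send_vblank, redraw_current_frame):
        """Toy model of the two vBlank triggers described above."""
        last_refresh = time.monotonic()
        while True:
            elapsed = time.monotonic() - last_refresh
            if frame_ready() and elapsed >= MIN_INTERVAL:
                # Criterion 1: a new frame is in the front buffer, so display it immediately
                send_vblank()
                last_refresh = time.monotonic()
            elif elapsed >= MAX_INTERVAL:
                # Criterion 2: too much time has passed, so refresh the current image
                redraw_current_frame()
                last_refresh = time.monotonic()
            else:
                time.sleep(0.0005)  # keep the sketch from busy-waiting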

In current display timing setups, the submission of the vBlank signal has been completely independent of the rendering pipeline.  The result was varying frame latency and either horizontal tearing or a fixed refresh rate.  With NVIDIA G-Sync creating an intelligent connection between rendering and frame updating, the display of PC games is fundamentally changed.
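
For reference, this is how that fixed stepping works out in the conventional case: the refresh rate is simply the pixel clock divided by the total pixel count, blanking included. Using the published CEA-861 1080p60 timing figures:

    # Fixed refresh rate from standard 1080p timing (CEA-861 figures)
    pixel_clock_hz = 148_500_000        # 148.5 MHz
    h_total = 1920 + 280                # active width + horizontal blanking = 2200
    v_total = 1080 + 4 + 5 + 36         # active height + front porch + sync + back porch = 1125

    print(pixel_clock_hz / (h_total * v_total))   # 60.0 Hz -- change the vBlank length and the rate changes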

Every person that saw the technology, including other media members and even developers like John Carmack, Johan Andersson and Tim Sweeney, came away knowing that this was the future of PC gaming.  (If you didn't see the panel that featured those three developers on stage, you are missing out.)


But it is definitely a complicated technology and I have already seen a lot of confusion about it in our comment threads on PC Perspective.  To help the community get a better grasp and to offer them an opportunity to ask some questions, NVIDIA's Tom Petersen is stopping by our offices on Monday afternoon where he will run through some demonstrations and take questions from the live streaming audience.

Be sure to stop back at PC Perspective on Monday, October 21st at 2pm ET / 11am PT as we discuss G-Sync, how it was developed, and the various ramifications the technology will have in PC gaming.  You'll find it all on our PC Perspective Live! page on Monday, but you can sign up for our "live stream mailing list" as well to get notified in advance!

NVIDIA G-Sync Live Stream

11am PT / 2pm ET - October 21st

PC Perspective Live! Page


We also want your questions!!  The easiest way to get them answered is to leave them for us here in the comments of this post.  That will give us time to filter through the questions and get the answers you need from Tom.  We'll take questions via the live chat and via Twitter (follow me @ryanshrout) during the event, but often times there is a lot of noise to deal with.

So be sure to join us on Monday afternoon!

John Carmack, Tim Sweeney and Johan Andersson Talk NVIDIA G-Sync, AMD Mantle and Graphics Trends

Subject: Graphics Cards | October 18, 2013 - 07:55 PM |
Tagged: video, tim sweeney, nvidia, Mantle, john carmack, johan andersson, g-sync, amd

If you weren't on our live stream from the NVIDIA "The Way It's Meant to be Played" tech day this afternoon, you missed a hell of an event.  After the announcement of NVIDIA G-Sync variable refresh rate monitor technology, NVIDIA's Tony Tomasi brought one of the most intriguing panels of developers on stage to talk.


John Carmack, Tim Sweeney and Johan Andersson talked for over an hour, taking questions from the audience and even getting into debates amongst themselves in some instances.  Topics included NVIDIA G-Sync of course, AMD's Mantle low-level API, the hurdles facing PC gaming, and the direction each luminary is currently taking for future development.

If you are a PC enthusiast or gamer you are definitely going to want to listen and watch the video below!

NVIDIA GeForce GTX 780 Ti Released Mid-November

Subject: General Tech, Graphics Cards | October 18, 2013 - 01:21 PM |
Tagged: nvidia, GeForce GTX 780 Ti

So the really interesting news today was G-Sync but that did not stop NVIDIA from sneaking in a new high-end graphics card. The GeForce GTX 780 Ti follows the company's old method of releasing successful products:

  • Attach a seemingly arbitrary suffix to a number
  • ???
  • Profit!


In all seriousness, we know basically nothing about this card. It is entirely possible that its architecture is not even based on GK110. We do know it will be faster than a GeForce 780, but we have no frame of reference with regard to the GeForce TITAN. The two cards were already so close in performance that Ryan struggled to validate the 780's existence. Imagine how difficult it would be for NVIDIA to wedge yet another product into that gap.

And if it does outperform the Titan, what is its purpose? Sure, Titan is a GPGPU powerhouse if you want double-precision performance without purchasing a Tesla or a Quadro, but that is not really relevant for gamers yet.

We shall see, soon, when we get review samples in. You, on the other hand, will likely see more when the card launches mid-November. No word on pricing.

Source: NVIDIA
Manufacturer: NVIDIA

Our Legacy's Influence

We are often creatures of habit.  Change is hard. And oftentimes legacy systems that have been in place for a very long time can shift and determine the angle at which we attack new problems.  This happens in the world of computer technology, but also outside the walls of silicon, and the results can be dangerous inefficiencies that threaten to limit our advancement in those areas.  Often our need to adapt new technologies to existing infrastructure can be blamed for stagnant development.

Take the development of the phone as an example.  The pulse-based phone system and the rotary dial slowed the adoption of touch-tone phones and forced manufacturers to include switches to select between pulse and tone dialing on phones for decades.

Perhaps a more substantial example is the railroad system, which has based its track gauge (the width between the rails) on transportation methods that existed before the birth of Christ.  Horse-drawn carriages pulled by two horses had an axle gap of 4 feet 8 inches in the 1800s, and thus the first railroads in the US were built with a track gauge of 4 feet 8 inches.  Today, the standard rail track gauge remains 4 feet 8 inches, despite the fact that a wider gauge would allow for more stability with larger cargo loads and would support higher speed vehicles.  But updating the existing infrastructure around the world would be so cost prohibitive that it is likely we will remain with that outdated standard.


What does this have to do with PC hardware, and why am I giving you an abbreviated history lesson?  There are clearly some examples of legacy infrastructure limiting our advancement in hardware development.  Solid state drives are held back by the current SATA-based storage interface, though we are seeing a movement to faster interconnects like PCI Express to alleviate this.  Some compute tasks are limited by the “infrastructure” of standard x86 processor cores, and the move to GPU compute has changed the direction of these workloads dramatically.

There is another area of technology that could be improved if we could just move past an existing way of doing things.  Displays.

Continue reading our story on NVIDIA G-Sync Variable Refresh Rate Technology!!