AMD Mantle Deep Dive Video from AMD APU13 Event

Subject: Graphics Cards | November 13, 2013 - 09:54 PM |
Tagged: video, Mantle, apu13, amd

While attending the AMD APU13 event, an annual developer conference the company uses to promote heterogeneous computing, I got to sit in on a deep dive on AMD Mantle, the new low-level API first announced in September.  Rather than attempt to re-explain what was already explained quite well, I decided to record the session on video and intermix the presented slides into a produced video for our readers.

The result is likely the best (and seemingly first) explanation of how Mantle actually works and what it does differently from existing APIs like DirectX and OpenGL.

Also, because we had some requests, I am embedding the live blog we ran during Johan Andersson's keynote from APU13.  Enjoy!

Video: Battlefield 4 Running on AMD A10 Kaveri APU and Image Decoder HSA Acceleration

Subject: Graphics Cards, Processors | November 12, 2013 - 06:10 PM |
Tagged: amd, Kaveri, APU, video, hsa

Yesterday at the AMD APU13 developer conference, the company showed off the upcoming Kaveri APU running Battlefield 4 entirely on its integrated graphics.  I was able to persuade the AMD guys to give me a more personal demo to share with our readers.  Several of the Kaveri APU's details were revealed this week:

  • Quad-core Steamroller x86
  • 512 Stream Processor GPU
  • 856 GFLOPS of theoretical performance
  • 3.7 GHz CPU clock speed, 720 MHz GPU clock speed
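As a sanity check, the 856 GFLOPS figure in the list above appears to be the GPU and CPU peak rates added together. Below is a minimal sketch of that arithmetic, assuming 2 FLOPs per stream processor per clock (FMA) and 8 FLOPs per Steamroller core per clock; both rates are my assumptions, not AMD-published numbers.

```python
# Rough sanity check of the 856 GFLOPS figure (FLOP-per-cycle rates are assumptions, see above)
gpu_gflops = 512 * 2 * 0.720   # 512 SPs x 2 FLOPs/cycle (FMA) x 0.72 GHz ~= 737 GFLOPS
cpu_gflops = 4 * 8 * 3.7       # 4 cores x 8 FLOPs/cycle x 3.7 GHz ~= 118 GFLOPS
print(round(gpu_gflops + cpu_gflops, 1))   # ~= 855.7 GFLOPS, in line with the quoted 856
```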

AMD wanted to be sure we pointed out in this video that the clock speeds used for the FLOP estimate may not be what the demo system was actually running at (likely a bit lower).  Also, the version of Battlefield 4 shown here is the standard retail build; further improvements from the driver team and the upcoming Mantle API implementation will likely bring even more performance to the APU.

The game was running at 1920x1080 with MOSTLY medium quality settings (lighting set to low), but the results still looked damn impressive and the frame rates were silky smooth.  Considering this is running on a desktop with integrated processor graphics, the gameplay experience is simply unmatched.

Memory in the system was running at 2133 MHz.

The second demo looks at the image decoding acceleration that AMD will enable on Kaveri APUs with a driver at release.  Essentially, as the demonstration in the video shows, AMD is overriding the built-in Windows JPEG decompression algorithm with a new one that uses HSA to run on both the x86 and SIMD (GPU) portions of the silicon.  The most strenuous demo, which used 22 MP images, saw a 100% increase in performance compared to the Kaveri CPU cores alone.
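Curious readers can roughly reproduce that kind of comparison by timing a batch of large JPEG decodes on their own machine. Below is a minimal sketch using Python and Pillow; the folder path is a placeholder, and this times Pillow's decoder rather than the Windows decode path AMD is actually replacing, so treat it only as a rough measurement harness.

```python
import glob, time
from PIL import Image  # pip install Pillow

paths = glob.glob("jpeg_samples/*.jpg")      # placeholder: a folder of large (e.g. 22 MP) JPEGs
start = time.time()
for path in paths:
    Image.open(path).load()                  # .load() forces the full decode, not just the header
elapsed = time.time() - start
print("%d images decoded in %.2f s" % (len(paths), elapsed))
```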

NVIDIA strikes back!

Subject: Graphics Cards | November 8, 2013 - 04:41 PM |
Tagged: nvidia, kepler, gtx 780 ti, gk110, geforce

Here is a roundup of reviews of what is now the fastest single GPU card on the planet, the GTX 780 Ti, built on a fully enabled GK110 chip.  Its 7 GHz GDDR5 is clocked faster than AMD's memory but runs on a 384-bit bus, narrower than the R9 290X's 512-bit interface, which raises some interesting questions about the performance of this card at high resolutions.  Are you willing to pay quite a bit more for better performance and a quieter card? Check out the performance deltas at [H]ard|OCP and see if that changes your mind at all.
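For context, peak memory bandwidth is simply the effective data rate multiplied by the bus width. A quick sketch of that arithmetic, using the 780 Ti's advertised 7 Gbps effective GDDR5 and the R9 290X's 5 Gbps on a 512-bit bus (the 290X figures come from its published specs, not this article):

```python
def peak_bandwidth_gb_s(effective_gbps_per_pin, bus_width_bits):
    # data rate per pin x number of pins / 8 bits per byte
    return effective_gbps_per_pin * bus_width_bits / 8.0

print(peak_bandwidth_gb_s(7.0, 384))  # GTX 780 Ti: 336.0 GB/s
print(peak_bandwidth_gb_s(5.0, 512))  # R9 290X:    320.0 GB/s
```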

You can see how it measures up in ISUs in Ryan's review as well.


"NVIDIA's fastest single-GPU video card is being launched today. With the full potential of the Kepler architecture and GK110 GPU fully unlocked, how will it perform compared to the new R9 290X with new drivers? Will the price versus performance make sense? Will it out perform a TITAN? We find out all this and more."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

AMD Releases Catalyst 13.11 Beta 9.2 Driver To Correct Performance Variance Issue of R9 290 Series Graphics Cards

Subject: Graphics Cards, Cases and Cooling | November 8, 2013 - 02:41 AM |
Tagged: R9 290X, powertune, hawaii, graphics drivers, gpu, GCN, catalyst 13.11 beta, amd, 290x

AMD recently launched its 290X graphics card, which is the new high-end single GPU solution using the GCN-based Hawaii architecture. The new GPU is rather large and incorporates an updated version of AMD's PowerTune technology that automatically adjusts clock speeds based on temperature while capping the fan at 40% of its maximum speed. Unfortunately, it seems that some 290X cards available at retail exhibited performance characteristics that varied from review units.

Retail versus Review Sample Performance Variance Testing.jpg

AMD has looked into the issue and released the following statement in response to the performance variances (which PC Perspective is looking into as well).

Hello, We've identified that there's variability in fan speeds across AMD R9 290 series boards. This variability in fan speed translates into variability of the cooling capacity of the fan-sink. The flexibility of AMD PowerTune technology enables us to correct this variability in a driver update. This update will normalize the fan RPMs to the correct values.

The correct target RPM values are 2200RPM for the AMD Radeon R9 290X "Quiet mode", and 2650RPM for the R9 290. You can verify these in GPU-Z. If you're working on stories relating to R9 290 series products, please use this driver as it will reduce any variability in fan speeds. This driver will be posted publicly tonight.

From the AMD statement, it seems to be an issue with fan speeds varying from card to card and causing the performance differences. With a GPU that is rated to run at up to 95C, a fan limited to 40% maximum, and dynamic clockspeeds, it is only natural that cards could perform differently, especially if case airflow is not up to par. On the other hand, the specific issue pointed out by other technology review sites (to my understanding, it was initially Tom's Hardware that reported on the retail vs review sample variance) is that the 40% setting on certain cards does not actually hit the RPM target that AMD intended.

AMD intended for the Radeon R9 290X's fan to run at 2200 RPM at the 40% setting in Quiet Mode, and the fan on the R9 290 (which has a maximum fan speed percentage of 47%) to spin at 2650 RPM. However, some cards' 40% values are not actually hitting those intended RPMs, which causes performance differences as the cooling varies and PowerTune adjusts clockspeeds accordingly.
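If the Catalyst fan percentage maps linearly onto a card's maximum fan speed (my assumption; AMD has not documented the exact curve), the intended targets imply similar maximum speeds for both coolers:

```python
# Hypothetical linear mapping from the Catalyst fan percentage to RPM (assumption, not AMD-documented)
def implied_max_rpm(target_rpm, percent_setting):
    return target_rpm / percent_setting

print(implied_max_rpm(2200, 0.40))   # R9 290X: ~5500 RPM at 100%
print(implied_max_rpm(2650, 0.47))   # R9 290:  ~5638 RPM at 100%
```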

Luckily, the issue is being worked on by AMD, and it is reportedly rectified by a driver update. The driver update ensures that the fans are actually spinning at the intended speed when set to the 40% (R9 290X) or 47% (R9 290) values in Catalyst Control Center. The new driver, which includes the fix, is version Catalyst 13.11 Beta 9.2 and is available for download now. 

If you are running an R9 290 or R9 290X in your system, you should consider updating to the latest driver to ensure you are getting the cooling (and, as a result, gaming) performance you are supposed to be getting.

Catalyst 13.11 Beta 9.2 is available from the AMD website.


Stay tuned to PC Perspective for more information on the Radeon R9 290 series GPU performance variance issue as it develops.

Image credit: Ryan Shrout (PC Perspective).

Source: AMD

NVIDIA Grid GPUs Available for Amazon EC2

Subject: General Tech, Graphics Cards, Systems | November 5, 2013 - 09:33 PM |
Tagged: nvidia, grid, AWS, amazon

Amazon Web Services allows customers (individuals, organizations, or companies) to rent servers of varying capabilities to match their needs. Many websites are hosted in Amazon's data centers, mostly because you can purchase different (or multiple) servers as your traffic fluctuates.

I, personally, sometimes use it as a game server for scheduled multiplayer events. The traditional method is spending $50-80 USD per month on a... decent... server running all day, every day, and using it a couple of hours per week. With Amazon EC2, we hosted a 200 player event (100 vs 100) by renting a dual-Xeon server (ironically the fastest single-threaded instance type) connected to Amazon's internet backbone by 10 Gigabit Ethernet. This server cost just under $5 per hour, all expenses considered. It was not much of a discount, but it ran like butter.

nvidia-grid-bracket.png

This leads me to today's story: NVIDIA GRID GPUs are now available at Amazon Web Services. Both companies hope their customers will use (or create services based on) these instances. Applications they expect to see include streamed games, CAD and media creation, and other server-side graphics processing. These Kepler-based instances, named "g2.2xlarge", will be available alongside the older Fermi-based Cluster Compute Instances ("cg1.4xlarge").

It is also noteworthy that the older Fermi-based Tesla servers are about 4x as expensive. GRID GPUs are based on GK104 (or GK107, but those are not available on Amazon EC2) and not the more compute-intensive GK110. It would probably be a step backwards for customers intending to perform GPGPU workloads for computational science or "big data" analysis. The newer GRID systems do not have 10 Gigabit Ethernet, either.

So what does it have? Well, I created an AWS instance to find out.
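For reference, spinning up one of these instances yourself takes only a few lines. Here is a minimal sketch using the boto library; the AMI ID and key pair name are placeholders you would replace with your own, and on-demand billing starts the moment the instance launches.

```python
import boto.ec2  # pip install boto; assumes AWS credentials are configured in your environment

conn = boto.ec2.connect_to_region("us-east-1")    # a region where g2.2xlarge is offered
reservation = conn.run_instances(
    "ami-xxxxxxxx",                # placeholder: any Windows or Linux AMI you want to test with
    instance_type="g2.2xlarge",    # the new GRID GPU instance type
    key_name="my-keypair",         # placeholder: your EC2 key pair
)
print(reservation.instances[0].id)  # remember to terminate the instance when you are done
```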

aws-grid-cpu.png

Its CPU is advertised as an Intel E5-2670 with 8 threads and 26 EC2 Compute Units (ECUs). This is particularly odd, as that particular CPU is an eight-core part with 16 threads; Amazon also usually rates it at 22 ECUs per 8 threads. This made me wonder whether the CPU is split between two clients or if Amazon disabled Hyper-Threading to push the clock rates higher (and ultimately led me to just log in to an instance and see). As it turns out, HT is still enabled and the processor registers as having 4 physical cores.

The GPU was slightly more... complicated.

aws-grid-gpu.png

The NVIDIA control panel apparently does not work over remote desktop, and the GPU registers as a "Standard VGA Graphics Adapter". Actually, two are visible in Device Manager, although one has the yellow exclamation mark of driver woe (random integrated graphics that wasn't disabled in the BIOS?). GPU-Z was not able to pick much up from it, but it was of some help.

Keep in mind: I did this without contacting either Amazon or NVIDIA. It is entirely possible that the OS I used (Windows Server 2008 R2) was a poor choice. OTOY, as part of this announcement, offers Amazon Machine Images (AMIs) for Linux and Windows installations integrated with their ORBX middleware.

I spot three key pieces of information: the base clock is 797 MHz, the memory size is 2990 MB, and the default drivers are Forceware 276.52 (??). The core and default clock rate, GK104 and 797 MHz respectively, are characteristic of the GRID K520 with its two GK104 GPUs clocked at 800 MHz. However, since the K520 gives each GPU 4GB and this instance only exposes 3GB of vRAM, I can tell that the product is slightly different.

I was unable to query the device's shader count. The K520 (similar to a GeForce 680) has 1536 per GPU which sounds about right (but, again, pure speculation).

I also tested the server with TCPing to measure its network latency versus the cluster compute instances. I did not do anything like Speedtest or Netalyzr. With a normal cluster instance I achieve about 20-25ms pings; with this instance I was more in the 45-50ms range. Of course, your mileage may vary and this should not be treated as any official benchmark. If you are considering using the instance for your product, launch one and run your own tests; it is not expensive. Still, it seems to be less responsive than the Cluster Compute instances, which is odd considering its intended gaming usage.
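The TCPing measurement is easy to reproduce: time how long a TCP connection to an open port on the instance takes to complete. A minimal sketch follows; the hostname and port are placeholders for whatever your instance actually exposes.

```python
import socket, time

def tcp_ping_ms(host, port, attempts=5):
    # Average TCP connect time to host:port, in milliseconds
    total = 0.0
    for _ in range(attempts):
        start = time.time()
        sock = socket.create_connection((host, port), timeout=2)
        total += time.time() - start
        sock.close()
    return total / attempts * 1000.0

print(tcp_ping_ms("ec2-xx-xx-xx-xx.compute-1.amazonaws.com", 3389))  # placeholder host, RDP port
```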

Regardless, now that Amazon has picked up GRID, we might see more services (be they consumer or enterprise) that utilize this technology. The new GPU instances start at $0.65/hr for Linux and $0.767/hr for Windows (excluding extra charges like network bandwidth) on demand. As always with EC2, if you will use these instances a lot, you can get reduced rates by paying a fee upfront.

Official press blast after the break.

Source: NVIDIA

(Nitroware) AMD Radeon R9 290X Discussion

Subject: General Tech, Graphics Cards | October 28, 2013 - 10:21 PM |
Tagged: R9 290X, amd

Hawaii launches and AMD sells out its inventory (all of it, in many cases). The Radeon R9 290X brought Titan-approaching performance to the $550-600 USD price point. Near and dear to our website, AMD also took the opportunity to address many of the Crossfire and Eyefinity frame pacing issues.

amd-gpu14-06.png

NitroWare also took a look at the card... from a distance, because they did not receive a review unit. The analysis is based on architectural concepts, such as the revisions AMD has made over the life of its Graphics Core Next architecture. The discussion goes back to the ATI Rage series of fixed-function hardware and ends with a comparison between the Radeon HD 7900 "Tahiti" and the R9 290X "Hawaii".

Our international viewers (or even curious North Americans) might also like to check out the work Dominic undertook compiling regional pricing and comparing those values to currency conversion data. There is more to an overview (or review) than benchmarks.

Source: NitroWare

NVIDIA Drops GTX 780, GTX 770 Prices, Announces GTX 780 Ti Price

Subject: Graphics Cards | October 28, 2013 - 09:29 AM |
Tagged: nvidia, kepler, gtx 780 ti, gtx 780, gtx 770, geforce

A lot of news coming from the NVIDIA camp today, including some price drops and price announcements. 

First up, the high-powered GeForce GTX 780 is getting dropped from $649 to $499, a $150 savings that brings the GTX 780 in line with the competition from AMD's new Radeon R9 290X launched last week.

Next, the GeForce GTX 770 2GB is going to drop from $399 to $329 to help it compete more closely with the R9 280X. 

r9290x.JPG

Even if you weren't excited about the R9 290X, you have to be excited by competition.

In a surprising turn of events, NVIDIA is now the company with the great GPU bundle deal as well!  Starting today you'll be able to get a free copy of Batman: Arkham Origins, Splinter Cell: Blacklist and Assassin's Creed IV: Black Flag with the GeForce GTX 780 Ti, GTX 780 and GTX 770.  If you step down to the GTX 760 or 660 you'll lose out on the Batman title.

SHIELD discounts are available as well: $100 off if you buy the upper tier GPUs and $50 off if you buy the lower tier.

UPDATE: NVIDIA just released a new version of GeForce Experience that enables ShadowPlay, the ability to use Kepler GPUs to record gameplay in the background with almost no CPU/system overhead.  You can see Scott's initial impressions of the software right here; it seems like it's going to be a pretty awesome feature.

bundle.png

Need more news?  The yet-to-be-released GeForce GTX 780 Ti is also getting a price - $699 based on the email we just received.  And it will be available starting November 7th!!

With all of this news, how does it change our stance on the graphics market?  Quite a bit, in fact.  The huge price drop on the GTX 780, coupled with the 3-game bundle, means that NVIDIA is likely offering the better hardware/software combo for gamers this fall.  Yes, the R9 290X is likely still a step faster, but now you can get the GTX 780, three great games and spend $50 less.

The GTX 770 is now poised to make a case for itself against the R9 280X as well with its $70 drop.  The R9 280X / HD 7970 GHz Edition was definitely the better option with a $100 price delta, but with only $30 separating the two competing cards, and the three free games, the advantage again likely falls to NVIDIA.

Finally, the price point of the GTX 780 Ti is interesting - if NVIDIA is smart they are pricing it based on comparable performance to the R9 290X from AMD.  If that is the case, then we can guess the GTX 780 Ti will be a bit faster than the Hawaii card, while likely being quieter and using less power too.  Oh, and again, the three game bundle. 

NVIDIA did NOT announce a GTX TITAN price drop which might surprise some people.  I think the answer as to why will be addressed with the launch of the GTX 780 Ti next month but from what I was hearing over the last couple of weeks NVIDIA can't make the cards fast enough to satisfy demand so reducing margin there just didn't make sense. 

NVIDIA has taken a surprisingly aggressive stance here in the discrete GPU market.  The need to address and silence critics that think the GeForce brand is being damaged by the AMD console wins is obviously potent inside the company.  The good news for us though, and the gaming community as a whole, is that just means better products and better value for graphics card purchases this holiday.

NVIDIA says these price drops will be live by tomorrow.  Enjoy!

Source: NVIDIA

Fall of a Titan, check out the R9 290X

Subject: Graphics Cards | October 24, 2013 - 02:38 PM |
Tagged: radeon, R9 290X, kepler, hawaii, amd

If you didn't stay up to watch our live release coverage of the R9 290X after the podcast last night, you missed a chance to have your questions answered, but you will be able to watch the recording later on.  The R9 290X arrived today, bringing 4K and Crossfire reviews as well as single GPU testing on many a site, including PCPer of course.  You don't just have to take our word for it; [H]ard|OCP was also putting together a review of AMD's Titan killer.  Their benchmarks included some games we haven't adopted yet, such as ARMA III.  Check out their results and compare them to ours; AMD really has a winner here.


"AMD is launching the Radeon R9 290X today. The R9 290X represents AMD's fastest single-GPU video card ever produced. It is priced to be less expensive than the GeForce GTX 780, but packs a punch on the level of GTX TITAN. We look at performance, the two BIOS mode options, and even some 4K gaming."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

AMD Aggressively Targets Professional GPU Market

Subject: General Tech, Graphics Cards | October 23, 2013 - 07:30 PM |
Tagged: amd, firepro

Currently AMD holds 18% market share with its FirePro line of professional GPUs. This compares to NVIDIA, which owns 81% with Quadro. I assume the "other" category is the sum of S3 and Matrox who, together, command 1% of the professional market (just the professional market).

According to Jon Peddie of JPR, as reported by X-Bit Labs, AMD intends to wrestle back revenue it has left unguarded for NVIDIA. "After years of neglect, AMD’s workstation group, under the tutorage of Matt Skyner, has the backing and commitment of top management and AMD intends to push into the market aggressively." They have already gained share this year.

W600-card.jpg

During AMD's 3rd Quarter (2013) earnings call, CEO Rory Read outlined the importance of the professional graphics market.

We also continue to make steady progress in another of growth businesses in the third quarter as we delivered our fifth consecutive quarter of revenue and share growth in the professional graphics area. We believe that we can continue to gain share in this lucrative part of the GPU market based on our product portfolio, design wins in flight, and enhanced channel programs.

On the same conference call (actually before and after the professional graphics sound bite), Rory noted their renewed push into the server and embedded SoC markets with 64-bit x86 and 64-bit ARM processors. They will be the only company manufacturing both x86 and ARM solutions which should be an interesting proposition for an enterprise in need of both. Why deal with two vendors?

Either way, AMD will probably be refocusing on the professional and enterprise markets for the near future. For the rest of us, this hopefully means that AMD has a stable (and confident) roadmap in the processor and gaming markets. If that is the case, a profitable Q3 is definitely a good start.

Taking the ASUS R9 280X DirectCU II TOP as far as it can go

Subject: Graphics Cards | October 23, 2013 - 06:20 PM |
Tagged: amd, overclocking, asus, ASUS R9 280X DirectCU II TOP, r9 280x

Having already seen what the ASUS R9 280X DirectCU II TOP can do at default speeds, the obvious next step, once they had time to fully explore the options, was for [H]ard|OCP to see just how far this GPU can overclock.  To make a long story short, they went from a default clock of 1070MHz up to 1230MHz and pushed the RAM from 6.4GHz to 6.6GHz, though the voltage needed to be bumped from 1.2v to 1.3v.  The actual frequencies are not nearly as important as the effect on gameplay; to see those results you will have to click through to the full article.
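Put in relative terms (my arithmetic, not [H]'s), that works out to roughly a 15% core overclock and a 3% memory overclock:

```python
core_gain = (1230 - 1070) / 1070.0 * 100   # ~15.0% increase on the core clock
mem_gain  = (6.6 - 6.4) / 6.4 * 100        # ~3.1% increase on the memory clock
print("core +%.1f%%, memory +%.1f%%" % (core_gain, mem_gain))
```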


"We take the new ASUS R9 280X DirectCU II TOP video card and find out how high it will overclock with GPU Tweak and voltage modification. We will compare performance to an overclocked GeForce GTX 770 and find out which card comes out on top when pushed to its overclocking limits."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP

NVIDIA Launches WHQL Drivers for Battlefield and Batman

Subject: General Tech, Graphics Cards | October 23, 2013 - 12:21 AM |
Tagged: nvidia, graphics drivers, geforce

Mid-June kicked up a storm of poop across the internet when IGN broke the story of AMD's optimizations for Frostbite 3. It was reported that NVIDIA would not receive sample code for those games until after they launched. The article was later updated with a statement from AMD: "... the AMD Gaming Evolved program undertakes no efforts to prevent our competition from optimizing for games before their release."

Now, I assume, the confusion was caused by the then-unannounced Mantle.

nvidia-geforce.png

And, as it turns out, NVIDIA did receive the code for Battlefield 4 prior to launch. On Monday, the company launched its 331.58 WHQL-certified drivers, which are optimized for Batman: Arkham Origins and Battlefield 4. According to the release notes, you should even be able to use SLI out of the gate. If, on the other hand, you are a Civilization V player: HBAO+ should enhance your shadowing.

They also added a DX11 SLI profile for Watch Dogs... awkwarrrrrd.

Check out the blog post at GeForce.com for a bit more information, read the release notes, or just head over to the drivers page. If you have GeForce Experience installed, it has probably already asked you to update.

Source: NVIDIA

PCPer Live! NVIDIA G-Sync Discussion with Tom Petersen, Q&A

Subject: Graphics Cards, Displays | October 20, 2013 - 02:50 PM |
Tagged: video, tom petersen, nvidia, livestream, live, g-sync

UPDATE: If you missed our live stream today that covered NVIDIA G-Sync technology, you can watch the replay embedded below.  NVIDIA's Tom Petersen stops by to talk about G-Sync in both high level and granular detail while showing off some demonstrations of why G-Sync is so important.  Enjoy!!

Last week NVIDIA hosted press and developers in Montreal to discuss a couple of new technologies, the most impressive of which was NVIDIA G-Sync, a new monitor solution that looks to solve the eternal debate of smoothness against latency.  If you haven't read about G-Sync and how impressive it was when first tested on Friday, you should check out my initial write up, NVIDIA G-Sync: Death of the Refresh Rate, that not only does that, but dives into the reason the technology shift was necessary in the first place.

G-Sync essentially functions by altering and controlling the vBlank signal sent to the monitor.  In a normal configuration, vBlank is a combination of the vertical front and back porch and the necessary sync time.  That timing is set at a fixed stepping that determines the effective refresh rate of the monitor; 60 Hz, 120 Hz, etc.  What NVIDIA will now do in the driver and firmware is lengthen or shorten the vBlank signal as desired, sending it when one of two criteria is met.

  1. A new frame has completed rendering and has been copied to the front buffer.  Sending vBlank at this time tells the screen to grab data from the card and display it immediately.
  2. A substantial amount of time has passed and the currently displayed image needs to be refreshed to avoid brightness variation.

In current display timing setups, the submission of the vBlank signal has been completely independent from the rendering pipeline.  The result was varying frame latency and either horizontal tearing or fixed refresh frame rates.  With NVIDIA G-Sync creating an intelligent connection between rendering and frame updating, the display of PC games is fundamentally changed.
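Expressed as pseudocode, the decision being made each cycle looks roughly like the sketch below. This is only my conceptual rendering of the two criteria above, not NVIDIA's actual firmware logic, and the maximum hold time is an assumed value.

```python
import time

MAX_HOLD_SECONDS = 1.0 / 30   # assumption: refresh at least this often to avoid brightness variation

def gsync_loop(frame_ready, send_vblank):
    # Conceptual sketch: issue vBlank when a new frame lands in the front buffer,
    # or when the current image has been held on screen for too long.
    last_refresh = time.time()
    while True:
        if frame_ready():                                    # criterion 1: new frame completed
            send_vblank()
            last_refresh = time.time()
        elif time.time() - last_refresh > MAX_HOLD_SECONDS:  # criterion 2: re-show the old frame
            send_vblank()
            last_refresh = time.time()
```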

Every person that saw the technology, including other media members and even developers like John Carmack, Johan Andersson and Tim Sweeney, came away knowing that this was the future of PC gaming.  (If you didn't see the panel that featured those three developers on stage, you are missing out.)

gsync.jpg

But it is definitely a complicated technology and I have already seen a lot of confusion about it in our comment threads on PC Perspective.  To help the community get a better grasp and to offer them an opportunity to ask some questions, NVIDIA's Tom Petersen is stopping by our offices on Monday afternoon where he will run through some demonstrations and take questions from the live streaming audience.

Be sure to stop back at PC Perspective on Monday, October 21st at 2pm ET / 11am PT to discuss G-Sync, how it was developed and the various ramifications the technology will have in PC gaming.  You'll find it all on our PC Perspective Live! page on Monday, but you can sign up for our "live stream mailing list" as well to get notified in advance!

NVIDIA G-Sync Live Stream

11am PT / 2pm ET - October 21st

PC Perspective Live! Page


We also want your questions!!  The easiest way to get them answered is to leave them for us here in the comments of this post.  That will give us time to filter through the questions and get the answers you need from Tom.  We'll take questions via the live chat and via Twitter (follow me @ryanshrout) during the event, but often there is a lot of noise to deal with.

So be sure to join us on Monday afternoon!

John Carmack, Tim Sweeney and Johan Andersson Talk NVIDIA G-Sync, AMD Mantle and Graphics Trends

Subject: Graphics Cards | October 18, 2013 - 07:55 PM |
Tagged: video, tim sweeney, nvidia, Mantle, john carmack, johan andersson, g-sync, amd

If you weren't on our live stream from the NVIDIA "The Way It's Meant to be Played" tech day this afternoon, you missed a hell of an event.  After the announcement of NVIDIA G-Sync variable refresh rate monitor technology, NVIDIA's Tony Tamasi brought one of the most intriguing panels of developers on stage to talk.


John Carmack, Tim Sweeney and Johan Andersson talked for over an hour, taking questions from the audience and even getting into debates amongst themselves at times.  Topics included NVIDIA G-Sync of course, AMD's Mantle low-level API, the hurdles facing PC gaming and the direction each luminary is currently taking for future development.

If you are a PC enthusiast or gamer you are definitely going to want to listen and watch the video below!

NVIDIA GeForce GTX 780 Ti Released Mid-November

Subject: General Tech, Graphics Cards | October 18, 2013 - 01:21 PM |
Tagged: nvidia, GeForce GTX 780 Ti

So the really interesting news today was G-Sync but that did not stop NVIDIA from sneaking in a new high-end graphics card. The GeForce GTX 780 Ti follows the company's old method of releasing successful products:

  • Attach a seemingly arbitrary suffix to a number
  • ???
  • Profit!

nvidia-780-ti.jpg

In all seriousness, we know basically nothing about this card. It is entirely possible that its architecture might not even be based on GK110. We do know it will be faster than a GeForce GTX 780, but we have no frame of reference with regard to the GeForce GTX TITAN. The two cards were already so close in performance that Ryan struggled to validate the 780's existence. Imagine how difficult it would be for NVIDIA to wedge yet another product into that gap.

And if it does outperform the Titan, what is its purpose? Sure, Titan is a GPGPU powerhouse if you want double-precision performance without purchasing a Tesla or a Quadro, but that is not really relevant for gamers yet.

We shall see, soon, when we get review samples in. You, on the other hand, will likely see more when the card launches mid-November. No word on pricing.

Source: NVIDIA

NVIDIA Announces G-Sync, Variable Refresh Rate Monitor Technology

Subject: Graphics Cards | October 18, 2013 - 10:52 AM |
Tagged: variable refresh rate, refresh rate, nvidia, gsync, geforce, g-sync

UPDATE: I have posted a more in-depth analysis of the new NVIDIA G-Sync technology: NVIDIA G-Sync: Death of the Refresh Rate.  Thanks for reading!!

UPDATE 2: ASUS has announced the G-Sync enabled version of the VG248QE will be priced at $399.

During a gaming event being held in Montreal, NVIDIA unveiled a new technology for GeForce gamers that the company hopes will revolutionize the PC and its displays.  Called NVIDIA G-Sync, this new feature combines changes to the graphics driver with changes to the monitor to alter the way refresh rates and Vsync have worked for decades.

gsync.jpg

With standard LCD monitors, gamers are forced to choose between a tear-free experience by enabling Vsync or playing with substantial visual anomalies in order to get the best and most efficient frame rates.  G-Sync changes that by allowing a monitor to display refresh rates other than 60 Hz, 120 Hz or 144 Hz, etc., without the horizontal tearing normally associated with turning off Vsync.  Essentially, G-Sync allows a properly equipped monitor to run at a variable refresh rate, which will improve the experience of gaming in interesting ways.

gsync2.jpg

This technology will be available soon on Kepler-based GeForce graphics cards but will require a monitor with support for G-Sync; not just any display will work.  The first launch monitor is a variation on the very popular 144 Hz ASUS VG248QE 1920x1080 display, and as we saw with 3D Vision, supporting G-Sync will require licensing and hardware changes.  In fact, NVIDIA claims that the new logic inside the panel's controller is NVIDIA's own design - so you can obviously expect this to only function with NVIDIA GPUs.

DisplayPort is the only input option currently supported. 

It turns out NVIDIA will actually be offering retrofit kits for current owners of the VG248QE at a yet-to-be-disclosed cost.  The first retail sales of G-Sync will ship as a monitor + retrofit kit, as production was just a bit behind.

Using a monitor with a variable refresh rate allows the game to display 55 FPS on the panel at 55 Hz without any horizontal tearing.  It can also display 133 FPS at 133 Hz without tearing.  Anything below the 144 Hz maximum refresh rate of this monitor will be running at full speed without the tearing associated with the lack of vertical sync.

vsync04.jpg

The technology that NVIDIA is showing here is impressive when seen in person, and that is really the only way to understand the difference.  High speed cameras and captures will help, but much like 3D Vision, this is a feature that needs to be seen to be appreciated.  How users will react to that roadblock remains to be seen.

Features like G-Sync show the gaming world that, without the restrictions of consoles, there are quite a few revolutionary steps that can be taken to maintain the PC gaming advantage well into the future.  4K displays were a recent example and now NVIDIA G-Sync adds to the list.

Be sure to stop back at PC Perspective on Monday, October 21st at 2pm ET / 11am PT as we will be joined in-studio by NVIDIA's Tom Petersen to discuss G-Sync, how it was developed and the various ramifications the technology will have in PC gaming.  You'll find it all on our PC Perspective Live! page on Monday, but you can sign up for our "live stream mailing list" as well to get notified in advance!

NVIDIA G-Sync Live Stream

11am PT / 2pm ET - October 21st

PC Perspective Live! Page

Source: NVIDIA

NVIDIA GeForce GTX 760 OEMs Silently Announced?

Subject: General Tech, Graphics Cards | October 17, 2013 - 06:56 PM |
Tagged: nvidia, GTX 760 OEM

A pair of new graphics cards have been announced during the first day of "The Way It's Meant To Be Played Montreal 2013", both of which are intended for system builders to integrate into their products. Both cards fall under the GeForce GTX 760 branding with the names "GeForce GTX 760 Ti (OEM)" and "GeForce GTX 760 192-bit (OEM)".

nvidia-geforce-gtx-760-oem-style-3qtr.png

I will place the main specifications of both cards side by side with the default GeForce GTX 760 for a little bit of reference. Be sure to check out its benchmarks.

                      GTX 760       GTX 760 192-bit (OEM)   GTX 760 Ti (OEM)
  Shader Cores        1152          1152                    1344
  Base Clock          980 MHz       823 MHz                 915 MHz
  Boost Clock         1033 MHz      888 MHz                 980 MHz
  Memory Interface    256-bit       192-bit                 256-bit
  Memory Data Rate    6.0 GT/s      5.8 GT/s                6.0 GT/s
  vRAM (capacity)     2 GB          1.5 or 3 GB             2 GB
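One thing the table hides is how much the narrower bus costs the 192-bit card in peak memory bandwidth (my arithmetic, not an NVIDIA-quoted spec):

```python
retail_gb_s  = 6.0 * 256 / 8.0   # retail GTX 760:          192.0 GB/s
oem_192_gb_s = 5.8 * 192 / 8.0   # GTX 760 192-bit (OEM):   139.2 GB/s
print("%.0f%% less peak bandwidth" % ((1 - oem_192_gb_s / retail_gb_s) * 100))  # ~28% less
```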

The GeForce GTX 760 is no slouch, and the GTX 760 Ti especially seems to be pretty close in performance to the retail product. I could see this being a respectable addition to a Steam Machine. I still cannot understand why, like the gaming bundle, these cards were not announced during the keynote speech.

Or, for that matter, why no-one seems to be reporting on them.

Source: NVIDIA

NVIDIA Holiday Gaming Bundle: Free Games and More!

Subject: General Tech, Graphics Cards | October 17, 2013 - 05:36 PM |
Tagged: shield, nvidia, bundle

The live stream from NVIDIA this morning was full of technologies focused on the PC gaming ecosystem, including mobile (but still PC-like) platforms. Today they also announced a holiday gaming bundle for their GeForce cards, although that missed the stream for some reason.

The bundle is separated into two tiers depending on your class of video card.

nvidia-geforce-gtx-holiday-bundle-with-shield-tiers-v2-640px.png

If you purchase a GeForce GTX 770, 780, or Titan from a participating retailer (including online), you will receive Splinter Cell: Blacklist, Batman: Arkham Origins, and Assassin's Creed IV: Black Flag along with a $100-off coupon for an NVIDIA SHIELD.

If, on the other hand, you purchase a GTX 760, 680, 670, 660 Ti, or 660 from a participating retailer (again, including online), you will receive Splinter Cell: Blacklist and Assassin's Creed IV: Black Flag along with a $50-off coupon for the NVIDIA SHIELD.

The current price at Newegg for an NVIDIA SHIELD is $299 USD. With a $100 discount, this pushes the price down to $199. The $200 mark is a barrier for videogame systems under which customers tend to jump, so reaching a sub-$200 price could be a big deal even for customers not already on the fence, especially when you consider PC streaming. Could be.

Assume you were already planning on upgrading your GPU. Would you be interested in adding in an NVIDIA SHIELD for an extra $199?

AMD Allows Two Early R9 290X Benchmarks

Subject: General Tech, Graphics Cards | October 17, 2013 - 04:37 PM |
Tagged: radeon, R9 290X, amd

The NDA on AMD R9 290X benchmarks has not yet lifted, but AMD was in Montreal to provide two previews: BioShock Infinite and Tomb Raider, both at 4K (3840 x 2160). Keep in mind, these scores are provided by AMD and definitely do not represent results from our monitor-capture solution. Expect more detailed results from us later, as we do some Frame Rating.

amd-gpu14-06.png

The test machine used in both setups contains:

  • Intel Core i7-3960X at 3.3 GHz
  • MSI X79A-GD65
  • 16GB of DDR3-1600
  • Windows 7 SP1 64-bit
  • NVIDIA GeForce GTX 780 (331.40 drivers) / AMD Radeon R9 290X (13.11 beta drivers)

The R9 290X is configured in its "Quiet Mode" for both benchmarks. This is particularly interesting to me, as I was unaware of such a feature (it has been a while since I last used a desktop AMD/ATi card). I would assume this is a fan and power profile to keep noise levels as low as possible for some period of time. A quick Google search suggests this feature is new with the Radeon Rx200-series cards.

bioshock_screen2.jpg

BioShock Infinite is quite demanding at 4K with ultra quality settings. Both cards maintain an average framerate above 30FPS.

AMD R9 290X "Quiet Mode": 44.25 FPS

NVIDIA GeForce GTX 780: 37.67 FPS

tomb_raider1.jpg

(Update 1: 4:44pm EST) AMD confirmed TressFX is disabled in these benchmark scores. It, however, is enabled if you are present in Montreal to see the booth. (end of update 1)

Tomb Raider is also a little harsh at this resolution. Unfortunately, the results are ambiguous as to whether TressFX was enabled throughout the benchmarks. The summary explicitly claims TressFX is enabled, while the string of settings contains "Tressfx=off". Clearly, one of the two entries is a typo. We are currently trying to get clarification. In the meantime:

AMD R9 290X "Quiet Mode": 40.2 FPS

NVIDIA GeForce GTX 780: 34.5 FPS
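Taken at face value (and remembering these are AMD-provided numbers), the R9 290X's lead works out to a similar margin in both titles:

```python
bioshock_gain   = (44.25 / 37.67 - 1) * 100   # ~17.5% faster than the GTX 780
tombraider_gain = (40.2 / 34.5 - 1) * 100     # ~16.5% faster than the GTX 780
print("BioShock +%.1f%%, Tomb Raider +%.1f%%" % (bioshock_gain, tombraider_gain))
```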

Notice that neither of these results is compared to a GeForce GTX TITAN. Recent leaks suggest a retail price for AMD's flagship card in the low-$700 range. The GeForce GTX 780, on the other hand, resides at the $650-700 USD price point.

It seems pretty clear, to me, that cost drove this comparison rather than performance.

Source: AMD

Still Not Settling? Is AMD Going to Continue Never Settle?

Subject: General Tech, Graphics Cards | October 17, 2013 - 03:46 PM |
Tagged: amd, radeon

In summary, "We don't know yet".

amd7990.jpg

We do know of a story posted by Fudzilla which cited Roy Taylor, VP of Global Channel Sales, as a source confirming the reintroduction of Never Settle for the new "Rx200" Radeon cards. Adding credibility, Roy Taylor retweeted the story via his official account. This tweet is still there as I write this post.

The Tech Report, after publishing the story, was contacted by Robert Hallock of AMD Gaming and Graphics. The official word, now, is that AMD does not have any announcements regarding bundles for new products. He is also quoted, "We continue to consider Never Settle bundles as a core component of AMD Gaming Evolved program and intend to do them again in the future".

So, I (personally) see promise that we will get a new Never Settle bundle. For the moment, AMD is officially silent on the matter. We also do not know (and, possibly, neither does AMD at this time) which games will be included, how many titles users can claim, or if there will even be a choice at all.

Source: Tech Report

NVIDIA "The Way It's Meant to be Played" 2013 Press Event Live Blog

Subject: General Tech, Graphics Cards | October 17, 2013 - 01:45 AM |
Tagged: video, nvidia, live blog, live

Last month it was AMD hosting the media out in sunny Hawaii for a #GPU14 press event.  This week NVIDIA is hosting a group of media in Montreal for a two-day event built around "The Way It's Meant to be Played". 

twimtbp.png

NVIDIA promises some very impressive software and technology demonstrations on hand, and you can take it all in with our live blog and (hopefully) live stream on our PC Perspective Live! page.

It starts at 10am ET / 7am PT so join us bright and early!!  And don't forget to stop by tomorrow for an even more exciting Day 2!!