MSI Z87I GAMING AC Motherboard and GTX 760 GAMING ITX Video Card

Subject: General Tech, Graphics Cards, Motherboards | December 3, 2013 - 09:02 PM |
Tagged: uppercase, msi, mini-itx

MSI is calling these products "Mini, but Mighty." These components are designed for the mini-ITX form factor, which is smaller than 7 inches in length and width. That size makes them very useful for home theater PCs (HTPCs) and other places where discretion is valuable. You also want these machines to be quiet, which MSI claims this product series is.

The name is also written in full uppercase so you imagine yourself yelling every time you read it.

msi-GAMING-moboandgpu.png

The MSI Z87I GAMING AC Motherboard comes with an Intel 802.11ac (hence, "GAMING AC", I assume) wireless adapter. For wired connections, it includes a Killer E2205 Ethernet adapter from Qualcomm's BigFoot Networks (even small PCs can be BigFoot). Also included is an HDMI 1.4 output capable of 4K video (HDMI 1.4 is limited to 30Hz output at 2160p).
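
For a rough sense of where that 30Hz limit comes from: HDMI 1.4 caps the TMDS clock at 340 MHz. The sketch below is our own back-of-the-envelope check, not anything from MSI; real timings add blanking overhead, so actual pixel clocks are 297 MHz for 2160p30 and 594 MHz for 2160p60, but even the raw pixel rate of 2160p60 blows past the cap.

```c
/* Back-of-the-envelope check on why HDMI 1.4 tops out at 30 Hz at 2160p.
 * Raw active-pixel rates only; real timings add blanking, so true pixel
 * clocks are higher still (297 MHz for 2160p30, 594 MHz for 2160p60). */
#include <stdio.h>

int main(void) {
    const double tmds_cap_mhz = 340.0; /* HDMI 1.4 maximum TMDS clock */
    const double h = 3840.0, v = 2160.0;

    for (int hz = 30; hz <= 60; hz += 30) {
        double raw_mhz = h * v * hz / 1e6; /* megapixels per second */
        printf("2160p%d: ~%.0f MHz raw pixel rate -> %s\n", hz, raw_mhz,
               raw_mhz <= tmds_cap_mhz ? "fits" : "exceeds the cap");
    }
    return 0; /* prints: 2160p30 ~249 MHz fits, 2160p60 ~498 MHz exceeds */
}
```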

Good features to have, especially for an HTPC build.

The other launch is the GTX 760 GAMING ITX video card, a miniature GeForce GTX 760 designed to fit in mini-ITX cases. If your box is a home theater PC, expect it to run just about any game at 1080p.

No information on pricing and availability yet. Check out the press release after the break.

Source: MSI

AMD A10-7850K and A10-7700K Kaveri Leaks Including Initial GPU Benchmarks

Subject: General Tech, Graphics Cards, Processors | December 3, 2013 - 01:12 AM |
Tagged: Kaveri, APU, amd

The launch and subsequent availability of Kaveri is scheduled for the CES time frame. The APU unites Steamroller x86 cores with several Graphics Core Next (GCN) compute units. The high-end offering, the A10-7850K, is capable of 856 GFLOPS of compute power (most of which, of course, comes from the GPU).

amd-kaveri-slide.png

Image/Leak Credit: Prohardver.hu

We now know about two SKUs: the A10-7850K and the A10-7700K. The two parts are quite similar, except that the higher model gets a 200 MHz CPU bump (3.8 GHz to 4.0 GHz) and 33% more GPU compute units (eight, up from six).

But how does this compare? The original source (prohardver.hu) claims that Kaveri will achieve an average 28 FPS in Crysis 3 on low at 1680x1050; this is a 12% increase over Richland. It also achieved an average 53 FPS with Sleeping Dogs on Medium which is 26% more than Richland.

These are healthy increases over the previous generation and they do not even account for HSA advantages. I am really curious what will happen if integrated graphics become accessible enough that game developers decide to target them for general compute applications. The reduction in latency (semi-wasted time bouncing memory between compute devices) might open this architecture up to workloads where it can really shine.

We will do our best to keep you up to date on this part, especially when it launches at CES.

Source: ProHardver

NVIDIA Launches GeForce Experience 1.8

Subject: General Tech, Graphics Cards | December 2, 2013 - 12:16 PM |
Tagged: nvidia, ShadowPlay

They grow up so fast these days...

GeForce Experience is NVIDIA's software package, often bundled with their driver updates, to optimize the experience of their customers. This could be adding interesting features, such as GPU-accelerated game video capture, or just recommending graphics settings for popular games.

geforce-experience-1-8-adjusted-optimal-settings-overclock.png

Version 1.8 adds many desired features lacking from the previous version. I always found it weird that GeForce Experience would recommend a single good baseline configuration for each game, and set it for you, but then force you to go into the game and tweak from there. It would be nice to see multiple presets, but that is not what we get; instead, we are now able to tweak the settings from within GeForce Experience. The baseline tries to provide a solid 40 FPS at the most computationally difficult moments, and you can tune the familiar performance-quality slider from there.

You are also able to set resolutions up to 3840x2160 and select whether you would like to play in windowed (including "borderless") mode.

geforce-experience-1-8-shadowplay-recording-time.png

Also, with ShadowPlay, Windows 7 users will be able to "shadow" the last 20 minutes of gameplay like their Windows 8 neighbors. You will also be able to combine your microphone audio with the in-game audio should you select it. I can see the latter feature being very useful for shoutcasters; apparently it captures VoIP communication and not just your microphone itself.

Still no streaming to Twitch.tv, but it is coming.

For now, you can download GeForce Experience from NVIDIA's GeForce website. If you want a little more detail first, you can check out their (much longer) blog post.

Intel Xeon Phi to get Serious Refresh in 2015?

Subject: General Tech, Graphics Cards, Processors | November 28, 2013 - 12:30 AM |
Tagged: Intel, Xeon Phi, gpgpu

Intel was testing the waters with their Xeon Phi co-processor. Based on the architecture designed for the original Pentium processors, it was released in six products ranging from 57 to 61 cores and 6 to 16GB of RAM, which led to double-precision performance of between 1 and 1.2 TFLOPS. It was fabricated using their 22nm tri-gate technology. All of this was under the Knights Corner initiative.

Intel_Xeon_Phi_Family.jpg

In 2015, Intel plans to have Knights Landing ready for consumption. A modified Silvermont architecture will replace the many simple (basically 15-year-old) cores of the previous generation; up to 72 Silvermont-based cores (each with 4 threads), in fact. It will also introduce the AVX-512 instruction set, which allows applications to vectorize eight 64-bit (double-precision float or long integer) or sixteen 32-bit (single-precision float or standard integer) values in a single 512-bit register.

In other words, packing a bunch of related problems into a single instruction.
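
To make that concrete, here is a minimal sketch (our own illustration, not Intel sample code) of AVX-512 compiler intrinsics adding eight doubles with one instruction; it assumes a compiler and CPU with AVX-512F support.

```c
/* Eight double-precision additions performed by a single AVX-512 instruction.
 * Illustrative only; requires AVX-512F hardware and a compiler flag such as
 * -mavx512f. */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    double a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    double b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    double c[8];

    __m512d va = _mm512_loadu_pd(a);    /* load 8 doubles into a 512-bit register */
    __m512d vb = _mm512_loadu_pd(b);
    __m512d vc = _mm512_add_pd(va, vb); /* one instruction, eight additions */
    _mm512_storeu_pd(c, vc);

    for (int i = 0; i < 8; i++)
        printf("%.0f ", c[i]);          /* 11 22 33 44 55 66 77 88 */
    printf("\n");
    return 0;
}
```

The 32-bit variant packs sixteen values per register instead.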

The most interesting part? Two versions will be offered: add-in boards (AIBs) and a standalone CPU. Because of its x86 heritage, the standalone part will not require a host CPU if your application is entirely suited to a MIC architecture; unlike a Tesla, it can boot existing and common OSes. It can also be paired with standard Xeon processors if you would like a few strong threads alongside the 288 (72 x 4) that the Xeon Phi provides.

And, while I doubt Intel would want to cut anyone else in, VR-Zone notes that this opens the door for AIB partners to make non-reference cards and manage some level of customer support. I'll believe a non-Intel branded AIB only when I see it.

Source: VR-Zone

Controversy continues to erupt over AMD's new GPU

Subject: Graphics Cards | November 27, 2013 - 01:44 PM |
Tagged: sapphire, radeon, R9 290X, hawaii, amd, 290x

Ryan is not the only one who felt it necessary to investigate the reports of differing performance between retail R9 290X cards and the ones sent out for review. Legit Reviews also ordered a retail card made by Sapphire and tested it against the card sent to them by AMD. As with our results, ambient temperature had more effect on the frequency of the retail card than it did on the press sample, with a 14% difference being common. Legit had another idea after they noticed that, while the BIOS version was the same on both cards, the part numbers differed. Find out what happened when they flashed the retail card to exactly match the press sample.

290x-cards-645x485.jpg

"The AMD Radeon R9 290X and R9 290 have been getting a ton of attention lately due to a number of reports that the retail cards are performing differently than the press cards that the media sites received. We have been following these stories for the past few weeks and finally decided to look into the situation ourselves."


So Apparently Some R9 290 Cards Can Flash into a 290X?

Subject: General Tech, Graphics Cards | November 26, 2013 - 12:18 AM |
Tagged: R9 290X, r9 290, amd

Multiple sites are reporting that some of AMD's Radeon R9 290 cards can be software-unlocked into 290Xs with a simple BIOS update. While the difference in performance is minor, free extra shader processors might be tempting for some existing owners.

"Binning" is when a manufacturer increases yield by splitting one product into several based on how they test after production. Semiconductor fabrication, specifically, is prone to constant errors and defects. Maybe only some of your wafers are not stable at 4 GHz but they can attain 3.5 or 3.7 GHz. Why throw those out when they can be sold as 3.5 GHz parts?

amd-gpu14-06.png

This is especially relevant to multi-core CPUs and GPUs. Hawaii XT has 2816 stream processors; a compelling product could be made even with a few of those shut down. The R9 290, for instance, enables 2560 of those cores. The remainder have been laser-cut or, at least, should have been.

Apparently, certain batches of Radeon R9 290s shipped with fully functional Hawaii XT chips that were only software-locked to 290 specifications. There have been reports that users of cards from multiple OEMs were able to flash a new BIOS to unlock the extra cores. Other batches, however, seem to be properly locked.

This could be interesting for lucky and brave users but I wonder why this happened. I can think of two potential causes:

  • Someone (OEMs or AMD) had too many 290X chips, or
  • The 290 launch was just that unprepared.

Either way, newer shipments should be properly locked even from affected OEMs. Again, not that it really matters given the performance differences we are talking about.

Source: WCCFTech

(JPR) NVIDIA Regains GPU Market Share

Subject: General Tech, Graphics Cards | November 22, 2013 - 03:26 PM |
Tagged: nvidia, jpr, amd

Jon Peddie Research (JPR) reports an 8% rise in quarter-to-quarter shipments of graphics add-in boards (AIBs) for NVIDIA and a decrease of 3% for AMD. This reverses the story from last quarter, when NVIDIA lost 8% and AMD gained. In all, NVIDIA holds well over half the market (64.5%).

But, why?

10-nv_logo.png

JPR attributed AMD's gains last quarter to consumers who added a discrete graphics solution to systems which already contained an integrated product. SLI and CrossFire were noted but pale in comparison. I expect Never Settle contributed heavily; this quarter, the free-games initiative was scaled back with the new GPU lineup, and for a decent amount of time nothing was offered.

At the same time, NVIDIA launched the GTX 780 Ti and their own game bundle. While I do not believe this promotion was as popular as AMD's Never Settle, it probably helped. That said, it is still too early to tell whether the Battlefield 4 promotion (or Thief's addition to the Silver Tier) will help AMD regain some ground.

The other vendors, Matrox and S3, were "flat to declining". Their story is the same as last quarter: they shipped fewer than (maybe far fewer than) 7,000 units. On the whole, add-in board shipments rose from last quarter; that quarter, however, was a 5.4% drop from the one before.

Source: JPR

Tokyo Tech Goes Green with KFC (NVIDIA and Efficiency)

Subject: General Tech, Graphics Cards, Systems | November 21, 2013 - 06:47 PM |
Tagged: nvidia, tesla, supercomputing

GPUs are very efficient in terms of operations per watt. Their architecture is best suited to a gigantic bundle of similar calculations (such as a set of operations applied to each entry of a large blob of data). These are also the tasks which take up the most computation time, especially for, not surprisingly, 3D graphics (where you need to do something to every pixel, fragment, vertex, etc.). It is also very relevant for scientific calculations, financial and other "big data" services, weather prediction, and so forth.

nvidia-submerge.png

Tokyo Tech's KFC achieves over 4 GFLOPS per watt of power draw from the 160 Tesla K20X GPUs in its cluster. That is about 25% more calculations per watt than the current leader of the Green500 (the CINECA Eurora system in Italy, at 3.208 GFLOPS/W).

One interesting trait: this supercomputer will be cooled by oil immersion. NVIDIA offers passively cooled Tesla cards which, as I understand it, are well suited to this sort of fluid system. I am fairly certain that they remove all of the fans before dunking the servers (I had figured the fans would be left on).

By the way, was it intentional to name computers dunked in giant vats of heat-conducting oil, "KFC"?

Intel has done a similar test, which we reported on last September, submerging numerous servers for over a year. Another benefit of being green is that you are not nearly as concerned about air conditioning.

NVIDIA is actually taking it to the practical market with another nice supercomputer win.


Source: NVIDIA

ASUS Announces the Republic of Gamers Mars 760 Graphics Card

Subject: Graphics Cards | November 20, 2013 - 10:52 AM |
Tagged: mars, asus, ROG MARS 760, gtx 760, dual gpu

Fremont, CA (November 19, 2013) - ASUS Republic of Gamers (ROG) today announced the MARS 760 graphics card featuring two GeForce GTX 760 graphics-processing units (GPUs) capable of delivering incredible gaming performance and ensuring ultra-smooth high-resolution gameplay. The MARS 760 even outpaces the GeForce GTX TITAN — with game performance that’s up to 39% faster overall. The MARS 760 is a two-slot card packed with exclusive ASUS technologies including DirectCU II for 20%-cooler and vastly quieter operation, DIGI+ voltage-regulator module (VRM) for ultra-stable power delivery and GPU Tweak, an easy-to-use utility that lets users safely overclock the two GTX 760 GPUs.

mars.jpg

Exclusive ASUS features provide cool, quiet, durable and stable performance

ASUS exclusive DirectCU II technology puts 8 highly-conductive copper cooling heatpipes in direct contact with both GPUs. These heatpipes provide extremely efficient cooling, allowing the MARS 760 to run 20% cooler and vastly quieter than reference GeForce GTX 690 cards. Dual 90mm dust-proof fans help to provide six times (6X) greater airflow than the reference design. And with 4GB of GDDR5 video memory, the ASUS ROG MARS 760 is capable of delivering visuals with incredibly high frame rates and no stutter, ensuring extremely smooth gameplay — even at WQHD resolutions. An attention-grabbing LED even illuminates as the MARS 760 is operating under load.

The MARS 760 is equipped with ROG’s acclaimed DIGI+ voltage-regulation module (VRM), featuring a 12-phase power design that reduces power noise by 30% and enhances efficiency by 15%. Custom sourced black metallic capacitors offer 20%-better temperature endurance for a lifespan that’s up to five times (5X) longer. The new card is built with extremely hardwearing polymerized organic-semiconductor capacitors (POSCAPs) and has an aluminum back-plate, further lowering power noise while increasing both durability and stability to unlock overclocking potential.

MARS-760_layer1-480x400.jpg

The exclusive GPU Tweak tuning tool allows quick, simple and safe control over clock speeds, voltages, cooling-fan speeds and power-consumption thresholds; GPU Tweak lets users push the two GTX 760 GPUs even further. The ROG edition of GPU Tweak included with the MARS 760 also enables detailed GPU load-line calibration and VRM-frequency tuning, allowing for the most extensive control and tweaking parameters in order to maximize overclocking potential — all adjusted via an attractive and easy-to-use graphical interface.

The GPU Tweak Streaming feature, the newest addition to the GPU Tweak tool, lets users share on-screen action over the internet in real time so others can watch live as games are played. It’s even possible to add a title to the streaming window along with scrolling text, pictures and webcam images.

SPECIFICATIONS:

MARS760-4GD5

  • NVIDIA GeForce GTX 760 SLI
  • PCI Express 3.0
  • 4096MB GDDR5 memory (2GB per GPU)
  • 1008MHz (1072MHz boosted) core speed
  • 6004 MHz (1501 MHz GDDR5) memory clock
  • 512-bit memory interface
  • 2560 x 1600 maximum DVI resolution
  • 2 x dual-link DVI-I output
  • 1 x dual-link DVI-D output
  • 1 x Mini DisplayPort output
  • HDMI output (via dongle)

Source: ASUS

NVIDIA Tesla K40: GK110b Gets a Career (and more vRAM)

Subject: General Tech, Graphics Cards | November 18, 2013 - 12:33 PM |
Tagged: tesla, nvidia, K40, GK110b

The Tesla K20X has ruled NVIDIA's headless GPU portfolio for quite some time now. The part is based on the GK110 chip with 192 shader cores disabled, like the GeForce Titan, and achieves 3.9 TFLOPS of compute performance (1.31 TFLOPS in double precision). Also like the Titan, the K20X offers 6GB of memory.

nvidia-k40-hero.jpg

The Tesla K40

So the layout was basically the following: GK104 ruled the gaming market except for the, in hindsight, oddly positioned GeForce Titan, which was basically a Tesla K20X without a few features like error correction (ECC). The Quadro K6000 was the only card to utilize all 2880 CUDA cores.

Then, at the recent G-Sync event, NVIDIA CEO Jen-Hsun Huang announced the GeForce GTX 780 Ti. This card uses the GK110b processor and incorporates all 2880 CUDA cores, albeit with reduced double-precision performance (for the 780 Ti, not for GK110b in general). So now we have Quadro and GeForce with the full-power Kepler; your move, Tesla.

And move they did: the Tesla K40 launched this morning, and it brought more than just cores.

nvidia-tesla-k40.png

A brief overview

The GeForce line is famous for GPU Boost, a feature absent from Tesla until now. It turns out that NVIDIA was paying attention to the feature but wanted to include it in a way that suited data centers. GeForce cards boost based on the status of the card: its temperature or its power draw. This is apparently unsuitable for data centers, because they would like every unit operating at very similar performance. The Tesla K40 has a base clock of 745 MHz but gives the data center two boost clocks that can be set manually: 810 MHz and 875 MHz.
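
For the curious, this is roughly how an administrator could pin a K40 to one of those states through NVML, NVIDIA's management library. A hedged sketch: the 875 MHz graphics clock comes from the announcement, while the 3004 MHz memory clock is our assumption based on the K40's 6 GHz effective GDDR5.

```c
/* Sketch: pin a Tesla K40 to its higher boost state via NVML application
 * clocks. Requires root privileges and a board that supports the feature;
 * link with -lnvidia-ml. The 3004 MHz memory clock is an assumption. */
#include <stdio.h>
#include <nvml.h>

int main(void) {
    nvmlDevice_t dev;

    if (nvmlInit() != NVML_SUCCESS) return 1;
    if (nvmlDeviceGetHandleByIndex(0, &dev) != NVML_SUCCESS) return 1;

    /* Request the higher of the two advertised boost clocks (875 MHz). */
    if (nvmlDeviceSetApplicationsClocks(dev, 3004, 875) == NVML_SUCCESS)
        printf("Application clocks set: 3004 MHz memory, 875 MHz graphics\n");
    else
        printf("Could not set application clocks on this board\n");

    nvmlShutdown();
    return 0;
}
```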

nvidia-telsa-k40-2.png

Relative performance benchmarks

The Tesla K40 also doubles the amount of RAM to 12GB. Of course, this allows the GPU to work on larger data sets without streaming data in from system memory, or worse.

There is currently no public information on pricing for the Tesla K40 but it is available starting today. What we do know are the launch OEM partners: ASUS, Bull, Cray, Dell, Eurotech, HP, IBM, Inspur, SGI, Sugon, Supermicro, and Tyan.

If you are interested in testing out a K40, NVIDIA has remotely hosted clusters that your company can sign up for at the GPU Test Drive website.

Press blast after the break!

Source: NVIDIA

AMD's Holiday Game + GPU Bundles

Subject: General Tech, Graphics Cards | November 14, 2013 - 04:54 PM |
Tagged: never settle forever, never settle, battlefield 4, amd

UPDATE (11/14/2013): After many complaints from the community about the lack of availability of graphics cards that actually HAD the Battlefield 4 bundle included with them, AMD is attempting to clarify the situation. In a statement sent through email, AMD says that the previous information sent to press "was not clear and has led to some confusion", which is definitely the case. While it was implied that all customers who bought R9 series graphics cards on or after November 13th would get a free copy of BF4, the truth is that "add-in-board partners ultimately decide which select AMD Radeon R9 SKUs will include a copy of BF4."

So, how are you to know which SKUs and cards are actually going to include BF4? AMD is trying hard to set up a landing page at http://amd.com/battlefield4 that will give gamers clear, and absolute, listings of which R9 series cards include the free copy of the game. When I pushed AMD for a timeline on exactly when these would be posted, the best I could get was "in the next day or two."

As for users that bought an R9 280X, R9 270X, R9 270, R9 290X or R9 290 after the announcement of the bundle program changes but DID NOT get a copy of BF4, AMD is going to try to help them out by offering up 1,000 Battlefield 4 keys over AMD's social channels. The cynic in me thinks this is another ploy to get more Facebook likes and Twitter followers, but in truth the logistics of verifying purchases at this point would be a nightmare for AMD. Though I don't have details on HOW they are going to distribute these keys, I certainly hope they find a way to target those users that were screwed over in this mess. Follow www.facebook.com/amdgaming or www.twitter.com/amdradeon for more information on this upcoming promotion.

AMD did send over a couple of links to cards that are currently selling with Battlefield 4 included, as an example of what to look for:

As far as I know, the board partners will also decide which online outlets to offer the bundle through, so even if you see the same SKU on Amazon.com, it may not come with Battlefield 4. It appears that, in this case and going forward, extreme caution is in order when looking for the right card for you.

END UPDATE (11/14/2013)

AMD announced the first Never Settle on October 22nd, 2012 with Sleeping Dogs, Far Cry 3, Hitman: Absolution, and 20% off of Medal of Honor: Warfighter. The deal was valued at around $170. It has exploded since then to become a choose-your-own-bundle across a variety of tiers.

This holiday bundle mostly breaks from that model.

AMD-holiday-bundle.png

Basically, apart from the R7 260X (I will get to that later), all applicable cards will receive Battlefield 4. This is a one-game promotion, unlike Never Settle. Still, it is one very good game that will soon be accelerated with Mantle in an upcoming patch, and it should be a good representative of Frostbite 3-based games for at least the next few years.

The qualifying cards are: R9 270, R9 270X, R9 280, R9 280X, R9 290, and R9 290X. They must be purchased from a participating retailer beginning November 13th.

The R7 260X is slightly different because it sticks closer to the familiar Never Settle formula. It will not get a free copy of Battlefield 4. Instead, the R7 260X will have access to two of six Never Settle Forever Silver Tier games: Hitman: Absolution, Sleeping Dogs, Sniper Elite V2, Far Cry 3: Blood Dragon, DiRT 3, and (for the first time) THIEF. It is possible that other Silver Tier Never Settle Forever owners who have yet to redeem their vouchers might qualify as well; I am not sure about that. Regardless, THIEF was chosen because the developer worked closely with AMD to support both Mantle and TrueAudio.

Since this deal half-updates Never Settle and half-doesn't... I am unsure what this means for the future of the bundle. AMD seems to be simultaneously supporting and disavowing it. My personal expectation is that AMD wants to continue with Never Settle but simply cut their margins too thin with this launch. This will be a good question to revisit later in the GPU lifecycle, when margins become more comfortable.

What do you think? Does AMD's hyper-aggressive hardware pricing warrant a temporary suspension of Never Settle? I mean, until today, these cards were being purchased without any bundle whatsoever.

Owners of qualifying R9-series cards (purchased after Nov 13 from participating retailers) can check out AMD's Battlefield 4 portal.

Qualifying R7 260X owners, on the other hand, can check out the Never Settle Forever portal.

Source: AMD

AMD Mantle Deep Dive Video from AMD APU13 Event

Subject: Graphics Cards | November 13, 2013 - 06:54 PM |
Tagged: video, Mantle, apu13, amd

While attending the AMD APU13 event, an annual developer conference the company uses to promote heterogeneous computing, I got to sit in on a deep dive on AMD Mantle, the new low-level graphics API first announced in September. Rather than attempt to re-explain what was explained quite well, I decided to record the session on video and intermix the presented slides into a produced video for our readers.

The result is likely the best (and seemingly first) explanation of how Mantle actually works and what it does differently from existing APIs like DirectX and OpenGL.

Also, because we had some requests, I am embedding the live blog we ran during Johan Andersson's keynote from APU13.  Enjoy!

Video: Battlefield 4 Running on AMD A10 Kaveri APU and Image Decoder HSA Acceleration

Subject: Graphics Cards, Processors | November 12, 2013 - 03:10 PM |
Tagged: amd, Kaveri, APU, video, hsa

Yesterday at the AMD APU13 developer conference, the company showed off the upcoming Kaveri APU running Battlefield 4 entirely on its integrated graphics. I was able to push the AMD guys along and get a slightly more personal demo to share with our readers. The Kaveri APU had some of its details revealed this week (we check the math on that FLOPS figure just below the list):

  • Quad-core Steamroller x86
  • 512 Stream Processor GPU
  • 856 GFLOPS of theoretical performance
  • 3.7 GHz CPU clock speed, 720 MHz GPU clock speed
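
Those numbers line up. A minimal sanity check (with our own assumptions about FLOPs per clock, not AMD's published breakdown) recovers the 856 GFLOPS figure almost exactly.

```c
/* Sanity check on the 856 GFLOPS claim from the spec list above.
 * Assumptions (ours): each stream processor does one fused multiply-add
 * (2 FLOPs) per clock, and the CPU side manages 8 FLOPs per core per clock. */
#include <stdio.h>

int main(void) {
    double gpu_gflops = 512 * 2 * 0.720; /* 512 SPs x 2 FLOPs x 0.72 GHz = 737.3 */
    double cpu_gflops = 4 * 8 * 3.7;     /* 4 cores x 8 FLOPs x 3.7 GHz  = 118.4 */
    printf("GPU %.1f + CPU %.1f = %.1f GFLOPS\n",
           gpu_gflops, cpu_gflops, gpu_gflops + cpu_gflops); /* ~855.7 */
    return 0;
}
```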

AMD wanted to be sure we pointed out in this video that the clock speeds used in the FLOPS estimate may not be what the demo system was actually running at (likely a bit lower). Also, the version of Battlefield 4 here is the standard retail version; further improvements from the driver team, plus the upcoming Mantle API implementation, will likely bring even more performance to the APU.

The game was running at 1920x1080 with MOSTLY medium quality settings (lighting set to low), but the results still looked damn impressive and the frame rates were silky smooth. Considering this is running on a desktop with integrated processor graphics, the gameplay experience is simply unmatched.

Memory in the system was running at 2133 MHz.

The second demo looks at the image-decoding acceleration that AMD is going to enable on Kaveri APUs at release via a driver. Essentially, as the video demonstrates, AMD is overriding the built-in Windows JPEG decompression algorithm with a new one that utilizes HSA to accelerate on both the x86 and SIMD (GPU) portions of the silicon. The most strenuous demo, which used 22 MP images, saw a 100% increase in performance compared to the Kaveri CPU cores alone.

NVIDIA strikes back!

Subject: Graphics Cards | November 8, 2013 - 01:41 PM |
Tagged: nvidia, kepler, gtx 780 ti, gk110, geforce

Here is a roundup of the reviews of what is now the fastest single-GPU card on the planet, the GTX 780 Ti, which uses a fully active GK110 chip. Its 7GHz GDDR5 is clocked faster than AMD's memory but sits on a 384-bit memory bus, narrower than the R9 290X's, which leads to some interesting questions about the performance of this card at high resolutions. Are you willing to pay quite a bit more for better performance and a quieter card? Check out the performance deltas at [H]ard|OCP and see if that changes your mind at all.

You can see how it measures up in ISUs in Ryan's review as well.

1383802230J422mbwkoS_1_10_l.jpg

"NVIDIA's fastest single-GPU video card is being launched today. With the full potential of the Kepler architecture and GK110 GPU fully unlocked, how will it perform compared to the new R9 290X with new drivers? Will the price versus performance make sense? Will it out perform a TITAN? We find out all this and more."


Source: [H]ard|OCP

AMD Releases Catalyst 13.11 Beta 9.2 Driver To Correct Performance Variance Issue of R9 290 Series Graphics Cards

Subject: Graphics Cards, Cases and Cooling | November 7, 2013 - 11:41 PM |
Tagged: R9 290X, powertune, hawaii, graphics drivers, gpu, GCN, catalyst 13.11 beta, amd, 290x

AMD recently launched its 290X graphics card, the new high-end single-GPU solution built on the GCN-based Hawaii architecture. The new GPU is rather large and incorporates an updated version of AMD's PowerTune technology, which automatically adjusts clockspeeds based on temperature while holding the fan to a maximum speed of 40%. Unfortunately, it seems that some 290X cards available at retail exhibited performance characteristics that varied from review units.

Retail versus Review Sample Performance Variance Testing.jpg

AMD has looked into the issue and released the following statement in response to the performance variances (which PC Perspective is looking into as well).

Hello, We've identified that there's variability in fan speeds across AMD R9 290 series boards. This variability in fan speed translates into variability of the cooling capacity of the fan-sink. The flexibility of AMD PowerTune technology enables us to correct this variability in a driver update. This update will normalize the fan RPMs to the correct values.

The correct target RPM values are 2200RPM for the AMD Radeon R9 290X "Quiet mode", and 2650RPM for the R9 290. You can verify these in GPU-Z. If you're working on stories relating to R9 290 series products, please use this driver as it will reduce any variability in fan speeds. This driver will be posted publicly tonight.

From the AMD statement, it seems to be an issue with fan speeds varying from card to card, causing the performance variances. With a GPU that is rated to run at up to 95C, a fan limited to a 40% maximum, and dynamic clockspeeds, it is only natural that cards could perform differently, especially if case airflow is not up to par. The specific issue pointed out by other technology review sites, however (per my understanding, it was initially Tom's Hardware that reported on the retail vs review sample variance), is that the 40% maximum on certain cards does not actually hit the RPM target that AMD intended.

AMD intended for the Radeon R9 290X's fan to run at 2200 RPM (40%) in Quiet Mode and the fan on the R9 290 (which has a maximum fan speed percentage of 47%) to spin at 2650 RPM. However, some cards' 40% settings are not actually hitting those intended RPMs, which causes performance differences as cooling suffers and PowerTune adjusts the clockspeeds accordingly.

Luckily, the issue has been addressed by AMD and is reportedly rectified by a driver update that ensures the fans actually spin at the intended speed when set to the 40% (R9 290X) or 47% (R9 290) values in Catalyst Control Center. The new driver, which includes the fix, is Catalyst 13.11 Beta 9.2 and is available for download now.

If you are running an R9 290 or R9 290X in your system, you should consider updating to the latest driver to ensure you are getting the cooling (and, as a result, gaming) performance you are supposed to be getting.

Catalyst 13.11 Beta 9.2 is available from the AMD website.


Stay tuned to PC Perspective for more information on the Radeon R9 290 series GPU performance variance issue as it develops.

Image credit: Ryan Shrout (PC Perspective).

Source: AMD

NVIDIA Grid GPUs Available for Amazon EC2

Subject: General Tech, Graphics Cards, Systems | November 5, 2013 - 06:33 PM |
Tagged: nvidia, grid, AWS, amazon

Amazon Web Services allows customers (individuals, organizations, or companies) to rent servers of various capabilities to match their needs. Many websites are hosted at their data centers, mostly because you can purchase different (or multiple) servers if you have big variations in traffic.

I, personally, sometimes use it as a game server for scheduled multiplayer events. The traditional method is spending $50-80 USD per month on a... decent... server that runs all day, every day, and gets used a couple of hours per week. With Amazon EC2, we hosted a 200-player event (100 vs 100) by purchasing a dual-Xeon server (ironically the fastest single-threaded instance) connected to Amazon's internet backbone by 10 Gigabit Ethernet. This server cost just under $5 per hour, all expenses considered. It was not much of a discount, but it ran like butter.
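
Here is the arithmetic behind that "not much of a discount" remark, as a quick sketch (numbers taken loosely from the paragraph above; the 4.33 weeks-per-month factor is our own rounding):

```c
/* Rough cost comparison: dedicated monthly server vs on-demand EC2 used
 * only for scheduled events. All figures approximate. */
#include <stdio.h>

int main(void) {
    double dedicated_per_month = 65.0;  /* midpoint of the $50-80 range */
    double ec2_per_hour = 5.0;          /* big instance, all-in         */
    double event_hours_per_week = 2.0;

    double ec2_per_month = ec2_per_hour * event_hours_per_week * 4.33;
    printf("Dedicated: $%.0f/mo, EC2: ~$%.0f/mo, break-even at %.0f h/mo\n",
           dedicated_per_month, ec2_per_month,
           dedicated_per_month / ec2_per_hour);
    return 0;
}
```

For a couple of hours per week the monthly totals come out close, hence "not much of a discount", but the EC2 box is vastly more powerful.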

nvidia-grid-bracket.png

This leads me to today's story: NVIDIA GRID GPUs are now available at Amazon Web Services. Both companies hope their customers will use (or create services based on) these instances. Applications they expect to see are streamed games, CAD and media creation, and other server-side graphics processing. These Kepler-based instances, named "g2.2xlarge", will be available alongside the older Fermi-based Cluster Compute Instances ("cg1.4xlarge").

It is also noteworthy that the older Fermi-based Tesla instances are about 4x as expensive. GRID GPUs are based on GK104 (or GK107, but those are not available on Amazon EC2) and not the more compute-focused GK110, so the new instances would probably be a step backwards for customers intending to run GPGPU workloads for computational science or "big data" analysis. The newer GRID systems do not have 10 Gigabit Ethernet, either.

So what does it have? Well, I created an AWS instance to find out.

aws-grid-cpu.png

Its CPU is advertised as an Intel E5-2670 with 8 threads and 26 Compute Units (CUs). This is particularly odd, as that CPU is eight-core with 16 threads; Amazon also usually rates it at 22 CUs per 8 threads. This made me wonder whether the CPU is split between two clients or whether Amazon disabled Hyper-Threading to push the clock rates higher (and it ultimately led me to just log in to an instance and see). As it turns out, HT is still enabled and the processor registers as having 4 physical cores.

The GPU was slightly more... complicated.

aws-grid-gpu.png

NVIDIA control panel apparently does not work over remote desktop and the GPU registers as a "Standard VGA Graphics Adapter". Actually, two are available in Device Manager although one has the yellow exclamation mark of driver woe (random integrated graphics that wasn't disabled in BIOS?). GPU-Z was not able to pick much up from it but it was of some help.

Keep in mind: I did this without contacting either Amazon or NVIDIA. It is entirely possible that the OS I used (Windows Server 2008 R2) was a poor choice. OTOY, as part of this announcement, offers Amazon Machine Images (AMIs) for Linux and Windows installations integrated with their ORBX middleware.

I spot three key pieces of information: the base clock is 797 MHz, the memory size is 2990 MB, and the default drivers are Forceware 276.52 (??). The GK104 core and 797 MHz default clock are characteristic of the GRID K520, which pairs two GK104 GPUs clocked at 800 MHz. However, since the K520 gives each GPU 4GB and this instance only has 3GB of vRAM, I can tell that the product is slightly different.

I was unable to query the device's shader count. The K520 (similar to a GeForce GTX 680) has 1536 per GPU, which sounds about right (but, again, pure speculation).
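
The CUDA runtime API could settle that question directly; below is a hedged sketch (assuming the instance's driver actually exposes the device to CUDA, which I did not verify) that reports the streaming multiprocessor count, from which Kepler GK104's 192 cores per SM gives the shader total.

```c
/* Query the GPU through the CUDA runtime instead of GPU-Z. On a GK104-class
 * part, each SM ("multiprocessor") contains 192 CUDA cores, so 8 SMs would
 * confirm the suspected 1536-shader configuration. Build with nvcc. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        printf("No CUDA device visible (driver or RDP issue?)\n");
        return 1;
    }
    printf("Name: %s\n", prop.name);
    printf("SMs: %d -> %d CUDA cores if Kepler GK104 (192 per SM)\n",
           prop.multiProcessorCount, prop.multiProcessorCount * 192);
    printf("Core clock: %.0f MHz, memory: %zu MB\n",
           prop.clockRate / 1000.0, prop.totalGlobalMem >> 20);
    return 0;
}
```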

I also tested the server with TCPing to compare its network latency with the Cluster Compute instances; I did not run anything like Speedtest or Netalyzr. With a normal cluster instance I achieve about 20-25ms pings; with this instance I was more in the 45-50ms range. Of course, your mileage may vary and this should not be used as any official benchmark. If you are considering using the instance for your product, launch an instance and run your own tests; it is not expensive. Still, it seems to be less responsive than Cluster Compute instances, which is odd considering its intended gaming usage.

Regardless, now that Amazon has picked up GRID, we might see more services (be they consumer or enterprise) which utilize this technology. The new GPU instances start at $0.65/hr for Linux and $0.767/hr for Windows (excluding extra charges like network bandwidth) on demand. As always with EC2, if you will use these instances a lot, you can get reduced rates by paying a fee upfront.

Official press blast after the break.

Source: NVIDIA

(Nitroware) AMD Radeon R9 290X Discussion

Subject: General Tech, Graphics Cards | October 28, 2013 - 07:21 PM |
Tagged: R9 290X, amd

Hawaii launched and AMD sold through their inventory (all of it, in many cases). The Radeon R9 290X brought Titan-approaching performance down to the $550-600 USD price point. Near and dear to our website, AMD also took the opportunity to address many of the Crossfire and Eyefinity frame-pacing issues.

amd-gpu14-06.png

Nitroware also took a look at the card... from a distance, because they did not receive a review unit. The analysis is built on concepts, such as the revisions AMD has made over the life of the Graphics Core Next architecture. The discussion goes back to the ATI Rage series of fixed-function hardware and ends with a comparison between the Radeon HD 7900 "Tahiti" and the R9 290X "Hawaii".

Our international viewers (or even curious North Americans) might also like to check out the work Dominic undertook compiling regional pricing and comparing those values to currency conversion data. There is more to an overview (or review) than benchmarks.

Source: NitroWare

NVIDIA Drops GTX 780, GTX 770 Prices, Announces GTX 780 Ti Price

Subject: Graphics Cards | October 28, 2013 - 06:29 AM |
Tagged: nvidia, kepler, gtx 780 ti, gtx 780, gtx 770, geforce

A lot of news coming from the NVIDIA camp today, including some price drops and price announcements. 

First up, the high-powered GeForce GTX 780 is getting dropped from $649 to $499, a $150 savings that brings the GTX 780 into line with the competition from AMD's new Radeon R9 290X, launched last week.

Next, the GeForce GTX 770 2GB is going to drop from $399 to $329 to help it compete more closely with the R9 280X. 

r9290x.JPG

Even if you weren't excited about the R9 290X, you have to be excited by competition.

In a surprising turn of events, NVIDIA is now the company with the great GPU bundle deal as well! Starting today you'll be able to get a free copy of Batman: Arkham Origins, Splinter Cell: Blacklist and Assassin's Creed IV: Black Flag with the GeForce GTX 780 Ti, GTX 780 and GTX 770. If you step down to the GTX 760 or 660 you'll lose out on the Batman title.

SHIELD discounts are available as well: $100 off if you buy the upper-tier GPUs and $50 off if you buy the lower tier.

UPDATE: NVIDIA just released a new version of GeForce Experience that enables ShadowPlay, the ability to use Kepler GPUs to record gameplay in the background with almost no CPU/system overhead. You can see Scott's initial impressions of the software right here; it seems like it's going to be a pretty awesome feature.

bundle.png

Need more news?  The yet-to-be-released GeForce GTX 780 Ti is also getting a price - $699 based on the email we just received.  And it will be available starting November 7th!!

With all of this news, how does it change our stance on the graphics market? Quite a bit, in fact. The huge price drop on the GTX 780, coupled with the three-game bundle, means that NVIDIA is likely offering the better hardware/software combo for gamers this fall. Yes, the R9 290X is likely still a step faster, but now you can get the GTX 780 and three great games, and spend $50 less.

The GTX 770 is now poised to make a case for itself against the R9 280X as well with its $70 drop. The R9 280X / HD 7970 GHz Edition was definitely the better option at the old $100 price delta, but with only $30 separating the two competing cards, and the three free games, again the advantage will likely fall to NVIDIA.

Finally, the price point of the GTX 780 Ti is interesting - if NVIDIA is smart they are pricing it based on comparable performance to the R9 290X from AMD.  If that is the case, then we can guess the GTX 780 Ti will be a bit faster than the Hawaii card, while likely being quieter and using less power too.  Oh, and again, the three game bundle. 

NVIDIA did NOT announce a GTX TITAN price drop, which might surprise some people. I think the answer as to why will be addressed with the launch of the GTX 780 Ti next month, but from what I was hearing over the last couple of weeks, NVIDIA can't make the cards fast enough to satisfy demand, so reducing margin there just didn't make sense.

NVIDIA has taken a surprisingly aggressive stance here in the discrete GPU market.  The need to address and silence critics that think the GeForce brand is being damaged by the AMD console wins is obviously potent inside the company.  The good news for us though, and the gaming community as a whole, is that just means better products and better value for graphics card purchases this holiday.

NVIDIA says these price drops will be live by tomorrow.  Enjoy!

Source: NVIDIA

Fall of a Titan, check out the R9 290X

Subject: Graphics Cards | October 24, 2013 - 11:38 AM |
Tagged: radeon, R9 290X, kepler, hawaii, amd

If you didn't stay up to watch our live release of the R9 290X after the podcast last night, you missed a chance to have your questions answered, but you will be able to watch the recording later on. The R9 290X arrived today, bringing 4K and Crossfire reviews as well as single-GPU testing on many a site, including PCPer of course. You don't just have to take our word for it: [H]ard|OCP was also putting together a review of AMD's Titan killer. Their benchmarks included some games we haven't adopted yet, such as ARMA III. Check out their results and compare them to ours; AMD really has a winner here.

1382088059a47QS23bNQ_4_8_l.jpg

"AMD is launching the Radeon R9 290X today. The R9 290X represents AMD's fastest single-GPU video card ever produced. It is priced to be less expensive than the GeForce GTX 780, but packs a punch on the level of GTX TITAN. We look at performance, the two BIOS mode options, and even some 4K gaming."


Source: [H]ard|OCP

AMD Aggressively Targets Professional GPU Market

Subject: General Tech, Graphics Cards | October 23, 2013 - 04:30 PM |
Tagged: amd, firepro

Currently, AMD holds 18% market share with their FirePro line of professional GPUs. This compares to NVIDIA, who owns 81% with Quadro. I assume the "other" category is the sum of S3 and Matrox who, together, command 1% of the professional market (just the professional market).

According to Jon Peddie of JPR, as reported by X-Bit Labs, AMD intends to wrestle back revenue it had left unguarded for NVIDIA to take. "After years of neglect, AMD’s workstation group, under the tutorage of Matt Skyner, has the backing and commitment of top management and AMD intends to push into the market aggressively." They have already gained share this year.

W600-card.jpg

During AMD's 3rd Quarter (2013) earnings call, CEO Rory Read outlined the importance of the professional graphics market.

We also continue to make steady progress in another of growth businesses in the third quarter as we delivered our fifth consecutive quarter of revenue and share growth in the professional graphics area. We believe that we can continue to gain share in this lucrative part of the GPU market based on our product portfolio, design wins in flight, and enhanced channel programs.

On the same conference call (actually both before and after the professional graphics sound bite), Rory noted their renewed push into the server and embedded SoC markets with 64-bit x86 and 64-bit ARM processors. They will be the only company manufacturing both x86 and ARM solutions, which should be an interesting proposition for an enterprise in need of both. Why deal with two vendors?

Either way, AMD will probably be refocusing on the professional and enterprise markets for the near future. For the rest of us, this hopefully means that AMD has a stable (and confident) roadmap in the processor and gaming markets. If that is the case, a profitable Q3 is definitely a good start.