Some Strong AMD FirePro Results in SPECviewperf 12

Subject: General Tech, Graphics Cards | December 19, 2013 - 07:23 PM |
Tagged: amd, firepro, SPECviewperf

SPECviewperf 12 is a benchmark for workstation components that attempts to measure the performance expected in professional applications. It is basically synthetic but is designed to quantify how well your system can handle Maya, for instance. AMD provided us with a press deck of benchmarks they ran, showing many strong FirePro results at the entry to mid-range levels.

AMD-FirePro-Specviewperf12.png

They did not include high-end results, which they justify with the quote, "[The] Vast majority of CAD and CAE users purchase entry and mid-range Professional graphics boards". That slide, itself, was titled, "Focusing Where It Matters Most". I will accept that, but I assume they ran those benchmarks too and wonder whether it would have been better to simply include them.

The cards AMD compared are:

  • Quadro 410 ($105) vs FirePro V3900 ($105)
  • Quadro K600 ($160) vs FirePro V4900 ($150)
  • Quadro K2000 ($425) vs FirePro W5000 ($425)
  • Quadro K4000 ($763) vs FirePro W7000 ($750)

In each of the pairings, about as equally priced as possible, AMD held a decent lead throughout the eight tests included in SPECviewperf 12. You could see the performance gap leveling off as prices rose, however.

Obviously a single benchmark suite should be just one data point when comparing two products. Still, these are pretty healthy performance numbers.

Source: AMD

Hardcoreware Reviews Intel Core i3-4340

Subject: General Tech, Graphics Cards, Processors | December 19, 2013 - 04:05 AM |
Tagged: Intel, haswell

In another review from around the net, Carl Nelson over at Hardcoreware tested the dual-core (four-thread) Intel Core i3-4340, based on the Haswell architecture. This processor slides into the $157 retail price point with a maximum frequency of 3.6GHz and an Intel HD 4600 iGPU clocked at 1150MHz. Obviously this is not intended as top-end performance but, of course, not everyone wants that.

hcw-core-i3-4340-review.jpg

Image Credit: Hardcoreware

One page which I found particularly interesting was the one which benchmarked Battlefield 4 rendering on the iGPU. The AMD A10-6790K (~$130) had a slightly lower 99th percentile frame time (characteristic of higher performance) but a slightly lower average frames per second (characteristic of lower performance). The graph of frame times shows that AMD is much more consistent than Intel. Perhaps the big blue needs a little Frame Rating? I would be curious to see what is causing the pretty noticeable (in the graph, at least) stutter. AMD's frame pacing seems to be very consistent, although this is obviously not a Crossfire scenario.
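
How can one part win on 99th percentile frame time but lose on average FPS? Average FPS is the reciprocal of the average frame time, so a few long frames barely move the average while dominating the 99th percentile. A quick sketch with made-up numbers (not Hardcoreware's data):

    \[
    \text{FPS}_{\text{avg}} = \frac{1000}{\bar{t}_{\text{frame}}\ (\text{ms})}; \qquad
    100 \times 20\,\text{ms} + 1 \times 100\,\text{ms} \;\Rightarrow\; \bar{t} \approx 20.8\,\text{ms}\ (\approx 48\ \text{FPS}),\quad t_{99} = 100\,\text{ms}.
    \]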

If you are shopping in the low-to-mid $100 range, be sure to check out his review. Also, of course, Kaveri should be coming next month, so that is something to look out for.

Source: Hardcoreware

ASUS Announces GTX 780 Ti DirectCU II Graphics Card

Subject: General Tech, Graphics Cards | December 18, 2013 - 04:25 AM |
Tagged: GeForce GTX 780 Ti, DirectCU II, asus

There have not been many custom coolers for top-end NVIDIA graphics cards as of late. Starting with the GeForce GTX 690, NVIDIA allegedly demands that AIB partners stick to the reference designs for certain models. Obviously, this is a problem: it limits the innovation realized by partners when they are forced to compete on fewer metrics (although the reference designs were pretty good regardless). This is especially true because the affected models are the upper high-end, where pricing is more flexible if the product is worth it.

ASUS-GTX780TI-DC2OC-3GD5-with-box.jpg

This is apparently not the case for the top-end GTX 780 Ti. ASUS has just announced the GeForce GTX 780 Ti DirectCU II graphics card. ASUS claims this will lead to 30% cooler operation at a third of the noise. A 6% bump in performance (as measured in Battlefield 4) will accompany that cooler and quieter operation, as the full GK110 GPU will boost to 1020MHz.

ASUS makes custom graphics cards for both AMD and NVIDIA. Be sure to check out our review of another high-end DirectCU II card, with 100% less NVIDIA, very soon. It will definitely be a great read and maybe even an excellent podcast topic.

Source: ASUS

GeForce Experience 1.8.1 Released with Twitch Streaming

Subject: General Tech, Graphics Cards | December 17, 2013 - 05:02 PM |
Tagged: nvidia, ShadowPlay, geforce experience

Another update to GeForce Experience brings another anticipated ShadowPlay feature. The ability to stream live gameplay to Twitch, hardware-accelerated by Kepler, was demoed at the NVIDIA event in Montreal in late October. They showed Batman: Arkham Origins streaming at 1080p 60FPS without capping or affecting the in-game output settings.

nvidia-shadowplay-twitch.png

GeForce Experience 1.8.1 finally brings that feature, in beta of course, to the general public. When set up, Alt + F8 will launch the Twitch stream and Alt + F6 will activate your webcam. Oh, by the way, one feature they kept from us (or at least me) until now is the ability to overlay your webcam atop your gameplay.

Nice touch, NVIDIA.

Of course, the upstream bandwidth requirements of video are quite high: 3.5Mbps on the top end, a more common 2Mbps happy medium, and a 0.75Mbps minimum. NVIDIA has been trying to ensure that your machine will not lag, but there is nothing a GPU can do about your internet connection.

GeForce Experience 1.8.1 is available now at the GeForce website.

Monster Madness: A First For Web Standards

Subject: General Tech, Graphics Cards | December 13, 2013 - 05:43 PM |
Tagged: webgl, ue4, UE3, asm.js

asm.js is a strict subset of JavaScript, intended for numerical calculations, which can be heavily optimized and easily output by compilers for other languages such as C++. Both Mozilla Firefox and Google Chrome optimize asm.js code, although there are differences in their implementations. In both cases, performance has become very close to that of the same applications compiled into native code for the host operating system.
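
To make the workflow concrete, here is a minimal sketch: a tiny C function of the sort a compiler such as Emscripten translates into asm.js. The exact build flags are an assumption on our part; they have varied between Emscripten releases.

    /* sum.c -- the kind of tight numeric loop asm.js is meant for.
     * Compiled with something like: emcc -O2 sum.c -o sum.js
     * The generated asm.js marks types with coercions (x|0 for an int,
     * +x for a double) so the browser can compile it ahead of time. */
    double sum(const double *values, int count)
    {
        double total = 0.0;
        for (int i = 0; i < count; i++)
            total += values[i];
        return total;
    }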

Its shortcoming is the difficulty and annoyance of writing it by hand (rather than compiling it from another language). In effect, the browser gains usage by encouraging the adoption of web standards as a compile target while discouraging authoring in them directly. You can see where the politics can enter.

Still, it makes for great demos, such as the cloth physics applet from James Long of Mozilla or, more amazingly, Unreal Engine 3. The upcoming UE4 is expected to be officially supported by Epic Games on asm.js (and obviously WebGL will be necessary too) but, of course, Epic will not prevent UE3 licensees from doing their own legwork.

NomNom Games, a group within Trendy Entertainment (Trendy is known for Dungeon Defenders), became the first company to release a commercial 3D title on these standards. Monster Madness, powered by Unreal Engine 3, runs in web browsers like Mozilla Firefox and Google Chrome without plugins (although it will fall back to Flash 11.6 if your browser does not support the web-native version). Monster Madness is a top-down, cel-shaded shoot-'em-up.

You can play, for free, with an anonymous token here. You can also visit their website to learn more about the closed beta for registered accounts. It is natively supported on Firefox, Chrome, and Opera. I am not entirely sure why IE11 is not supported, now that Microsoft supports WebGL, but there is probably a customer support or performance reason for it.

Source: Mozilla

Video Perspective: GPU Shortages and Litecoin Mining Discussion

Subject: Graphics Cards | December 12, 2013 - 05:20 PM |
Tagged: video, amd, radeon, hawaii, r9 290, R9 290X, bitcoin, litecoin, mining

If you already listened to this week's PC Perspective Podcast, then feel free to disregard this post.  For the rest of you - subscribe to our damned weekly podcast would you already?!?

In any event, I thought it might be interesting to extract this 6-minute discussion we had during last night's live-streamed podcast about how the emergence of Litecoin mining operations is driving up prices of GPUs, particularly the compute-capable R9 290 and R9 290X Hawaii-based cards from AMD.

Check out these prices currently on Amazon!

The price of the GTX 770 is a bit higher than it should be, while the GTX 780 and GTX 780 Ti are priced in the same range they have been for the last month or so.  The same cannot be said for the AMD cards listed here - the R9 280X is selling for at least $130 more than its expected MSRP, and you'll see quite a few going for much higher on Amazon, Ebay (thanks TR) and others.  The Radeon R9 290 has an MSRP of $399 from AMD, but the lowest price we found on Amazon was $499 and anything on Newegg.com is showing at the same price, but sold out.  The R9 290X is even more obnoxiously priced when you can find one.

03.jpg

Do you have any thoughts on this?  Do you think Litecoin mining is really causing these price inflations and what does that mean for AMD, NVIDIA and the gamer?

AMD Expects Beta Eyefinity Fix in January

Subject: General Tech, Graphics Cards | December 11, 2013 - 05:58 PM |
Tagged: frame pacing, frame rating, amd, southern islands, 4k, eyefinity, crossfire, microstutter

The frame pacing issue has been covered at our website for almost a year now. It stems from the original "microstutter" problem, which dates back over a year before we could quantify it. We like to use the term "Frame Rating" to denote the testing methodology we now use for our GPU tests.

amd-crossfire.JPG

AMD fared worse in these tests than NVIDIA (although even NVIDIA had some problems in certain configurations). They have dedicated a lot of man-hours to the problem, resulting in driver updates for certain scenarios. Crossfire while utilizing Eyefinity or 4K MST was one area they did not initially focus on. The issue has been addressed in Hawaii, and AMD asserted that previous cards would get a software fix soon.

The good news is that we have just received word from AMD that they plan on releasing a beta driver for Southern Islands and earlier GPUs (AMD believes it should work for anything that's not "legacy"). As usual, until it ships anything could change, but it looks good for now.

The beta "frame pacing" driver addressing Crossfire with 4K and Eyefinity, for supported HD-series and Southern Islands-based Rx cards, is expected to be public sometime in January.

Source: AMD

ASUS Teases R9 290X DirectCU II

Subject: General Tech, Graphics Cards | December 9, 2013 - 03:03 AM |
Tagged: R9 290X, DirectCU II, asus

The AMD Radeon R9 290X is a very good graphics processor whose reference design is marred by a few famous design choices. AMD specs the GPU to run at a maximum of 95C, perpetually, and it will push its frequency up to 1 GHz if it can stay at or under that temperature. Its cooler, in the typical "Quiet" default setting, is generally unable to hold that frequency for more than a handful of minutes. This led to countless discussions about what it means to be a default and what the component's actual specifications are.

ASUS-R9290X-DirectCU2.jpg

All along we note that custom designs from add-in board (AIB) partners could change everything.

ASUS seems to be the first to tease a custom solution. This card, based on their DirectCU II design, uses two fans and multiple 10mm nickel-plated heatpipes sitting directly atop the processor. The two fans should be able to move more air at a slower rate of rotation and thus be more efficient per decibel. The heatsink itself might also pull heat away more quickly. I am hoping that ASUS provisioned the part to remain at a stable 1GHz under default settings or perhaps even more!

The real test for Hawaii will be when the wave of custom editions washes ashore. We know the processor is capable of some pretty amazing performance figures when it can really open up. This, and other partner boards, could make for possibly the most interesting AIB round-up we have ever had.

No word, yet, on pricing or availability.

(Phoronix) Four Monitors on Linux: AMD and NVIDIA

Subject: General Tech, Graphics Cards | December 5, 2013 - 03:17 AM |
Tagged: linux, nvidia surround, eyefinity

Could four 1080p monitors be 4K on the cheap? Probably not... but keep reading.

phoronix-linux-4monitor.jpg

Image Credit: Phoronix

Phoronix published an article for users interested in quad-monitor gaming on Linux. Sure, you might think this is a bit excessive, especially considering the bezels meeting at the center of your screen. On the other hand, imagine you are playing a four-player split-screen game. That would definitely get some attention. Each player would be able to tilt their screen out of the view of their opponents while only using a single computer.

In his 8-page editorial, Michael Larabel tests the official and popular open-source drivers for both AMD and NVIDIA. The winner was NVIDIA's proprietary driver, while the open-source solution, Nouveau, fared the worst of the batch. This is the typical trade-off with NVIDIA. It was only recently that The Green Giant opened up documentation for the other chefs in the kitchen... so these results may change soon.

If you are interested in gaming on Linux, give the article a read.

Source: Phoronix

MSI Z87I GAMING AC Motherboard and GTX 760 GAMING ITX Video Card

Subject: General Tech, Graphics Cards, Motherboards | December 4, 2013 - 12:02 AM |
Tagged: uppercase, msi, mini-itx

MSI is calling these products, "Mini, but Mighty". These components are designed for the mini-ITX form factor, which is smaller than 7 inches in length and width. That size makes them very useful for home theater PCs (HTPCs) and other places where discretion is valuable. You also want these machines to be quiet, which MSI claims this product series is.

The name is also written in full uppercase so you imagine yourself yelling every time you read it.

msi-GAMING-moboandgpu.png

The MSI Z87I GAMING AC Motherboard comes with an Intel 802.11ac (hence, "GAMING AC", I assume) wireless adapter. If you prefer a wired connection, it also includes a Killer E2205 Ethernet adapter from Qualcomm's BigFoot Networks (even small PCs can be BigFoot). Also included is an HDMI 1.4 output capable of 4K video (HDMI 1.4 is limited to 30Hz output at 2160p).
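
That 30Hz ceiling comes down to pixel clock: HDMI 1.4 tops out at a 340 MHz TMDS clock, and the standard CEA-861 4K timings (total raster including blanking is 4400 x 2250) work out as follows:

    \[
    \text{2160p30}: 4400 \times 2250 \times 30\,\text{Hz} = 297\,\text{MHz} \le 340\,\text{MHz}, \qquad
    \text{2160p60} = 594\,\text{MHz} > 340\,\text{MHz}
    \]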

Good features to have, especially for an HTPC build.

The other launch is the GTX 760 GAMING ITX video card. This card is a miniature GeForce GTX 760 designed to fit in mini-ITX cases. If your box is a home theater PC, expect it to run just about any game at 1080p.

No information on pricing and availability yet. Check out the press release after the break.

Source: MSI

AMD A10-7850K and A10-7700K Kaveri Leaks Including Initial GPU Benchmarks

Subject: General Tech, Graphics Cards, Processors | December 3, 2013 - 04:12 AM |
Tagged: Kaveri, APU, amd

The launch and subsequent availability of Kaveri is scheduled for the CES time frame. The APU unites Steamroller x86 cores with several Graphics Core Next (GCN) compute units. The high-end offering, the A10-7850K, is capable of 856 GFLOPs of compute power (most of which, of course, comes from the GPU).
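
As a sanity check on that 856 figure, here is one plausible reconstruction (the 8 GCN compute units hold 512 shaders, each capable of two FLOPs per clock). The 720 MHz GPU clock, 3.7 GHz CPU clock, and 8 FLOPs per CPU core per cycle are assumptions on our part based on the leaked specifications, not numbers confirmed by AMD:

    \[
    \underbrace{512 \times 2 \times 0.72\ \text{GHz}}_{\text{GPU:}\ \approx 737\ \text{GFLOPs}}
    \; + \;
    \underbrace{4 \times 8 \times 3.7\ \text{GHz}}_{\text{CPU:}\ \approx 118\ \text{GFLOPs}}
    \; \approx \; 856\ \text{GFLOPs}
    \]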

amd-kaveri-slide.png

Image/Leak Credit: Prohardver.hu

We now know about two SKUs: the A10-7850K and the A10-7700K. Both parts are quite similar except that the higher model is given a 200 MHz CPU bump, 3.8 GHz to 4.0 GHz, and 33% more GPU compute units, 6 to 8.

But how does this compare? The original source (prohardver.hu) claims that Kaveri will achieve an average of 28 FPS in Crysis 3 on low at 1680x1050; this is a 12% increase over Richland. It also achieved an average of 53 FPS in Sleeping Dogs on Medium, which is 26% more than Richland.

These are healthy increases over the previous generation, but they do not even account for HSA advantages. I am really curious what will happen if integrated graphics become accessible enough that game developers decide to target them for general compute applications. The reduction in latency (semi-wasted time bouncing memory between compute devices) might open this architecture up to workloads where it can really shine.

We will do our best to keep you up to date on this part especially when it launches at CES.

Source: ProHardver

NVIDIA Launches GeForce Experience 1.8

Subject: General Tech, Graphics Cards | December 2, 2013 - 03:16 PM |
Tagged: nvidia, ShadowPlay

They grow up so fast these days...

GeForce Experience is NVIDIA's software package, often bundled with their driver updates, to optimize the experience of their customers. This could be adding interesting features, such as GPU-accelerated game video capture, or just recommending graphics settings for popular games.

geforce-experience-1-8-adjusted-optimal-settings-overclock.png

Version 1.8 adds many desired features lacking from the previous version. I always found it weird that GeForce Experience would recommend one good baseline of settings for a game, and set it for you, but force you to then go into the game and tweak from there. It would be nice to see multiple presets, but that is not what we get; instead, we are able to tweak the settings from within GeForce Experience. The baseline tries to provide a solid 40 FPS at the most computationally difficult moments. You can then tune the familiar performance-and-quality slider from there.

You are also able to set resolutions up to 3840x2160 and select whether you would like to play in windowed (including "borderless") mode.

geforce-experience-1-8-shadowplay-recording-time.png

Also, with ShadowPlay, Windows 7 users will now be able to "shadow" the last 20 minutes of gameplay like their Windows 8 neighbors. You will also be able to combine your microphone audio with the in-game audio should you select it. I can see the latter feature being very useful for shoutcasters. Apparently it allows capturing VoIP communication and not just your microphone itself.

Still no streaming to Twitch.tv, but it is coming.

For now, you can download GeForce Experience from NVIDIA's GeForce website. If you want to read a little more detail about it, first, you can check out their (much longer) blog post.

Intel Xeon Phi to get Serious Refresh in 2015?

Subject: General Tech, Graphics Cards, Processors | November 28, 2013 - 03:30 AM |
Tagged: Intel, Xeon Phi, gpgpu

Intel was testing the waters with their Xeon Phi co-processor. Based on the architecture designed for the original Pentium processors, it was released in six products ranging from 57 to 61 cores and 6 to 16GB of RAM. This led to double-precision performance of between 1 and 1.2 TFLOPs. It was fabricated using their 22nm tri-gate technology. All of this was under the Knights Corner initiative.

Intel_Xeon_Phi_Family.jpg

In 2015, Intel plans to have Knights Landing ready for consumption. A modified Silvermont architecture will replace the many simple (basically 15-year-old) cores of the previous generation; up to 72 Silvermont-based cores (each with 4 threads), in fact. It will introduce the AVX-512 instruction set. AVX-512 allows applications to vectorize operations on 8 64-bit (double-precision float or long integer) or 16 32-bit (single-precision float or standard integer) values at a time.

In other words, packing a bunch of related problems into a single instruction.
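
As a hypothetical illustration (AVX-512F intrinsics, assuming a toolchain that supports them), adding eight doubles per instruction could look like this:

    #include <immintrin.h>  /* AVX-512F intrinsics */

    /* Add two arrays of doubles, eight elements per instruction.
     * For brevity, this sketch assumes n is a multiple of 8. */
    void vadd(const double *a, const double *b, double *out, int n)
    {
        for (int i = 0; i < n; i += 8) {
            __m512d va = _mm512_loadu_pd(a + i);              /* load 8 doubles */
            __m512d vb = _mm512_loadu_pd(b + i);
            _mm512_storeu_pd(out + i, _mm512_add_pd(va, vb)); /* 8 sums, one add */
        }
    }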

The most interesting part? Two versions will be offered: add-in boards (AIBs) and a standalone CPU. Because of its x86 heritage, it will not require a host CPU if your application is entirely suited to the MIC architecture; unlike a Tesla, it is bootable with existing and common OSes. It can also be paired with standard Xeon processors if you would like a few strong threads alongside the 288 (72 x 4) that the Xeon Phi provides.

And, while I doubt Intel would want to cut anyone else in, VR-Zone notes that this opens the door for AIB partners to make non-reference cards and manage some level of customer support. I'll believe a non-Intel branded AIB only when I see it.

Source: VR-Zone

Controversy continues to erupt over AMD's new GPU

Subject: Graphics Cards | November 27, 2013 - 04:44 PM |
Tagged: sapphire, radeon, R9 290X, hawaii, amd, 290x

Ryan is not the only one who felt it necessary to investigate the reports of differing performance between retail R9 290X cards and the ones sent out for review.  Legit Reviews also ordered a retail card made by Sapphire and tested it against the card sent to them by AMD.  As with our results, ambient temperature had more effect on the frequency of the retail card than it did on the press sample, with a 14% difference being common.  Legit had another idea after they noticed that, while the BIOS version was the same on both cards, the part numbers differed.  Find out what happened when they flashed the retail card to exactly match the press sample.

290x-cards-645x485.jpg

"The AMD Radeon R9 290X and R9 290 have been getting a ton of attention lately due to a number of reports that the retail cards are performing differently than the press cards that the media sites received. We have been following these stories for the past few weeks and finally decided to look into the situation ourselves."

Here are some more Graphics Card articles from around the web:


So Apparently Some R9 290 Cards Can Flash into a 290X?

Subject: General Tech, Graphics Cards | November 26, 2013 - 03:18 AM |
Tagged: R9 290X, r9 290, amd

Multiple sites are reporting that some of AMD's Radeon R9 290 cards can be software-unlocked into 290Xs with a simple BIOS update. While the difference in performance is minor, free extra shader processors might be tempting for some existing owners.

"Binning" is when a manufacturer increases yield by splitting one product into several based on how they test after production. Semiconductor fabrication, specifically, is prone to constant errors and defects. Maybe only some of your wafers are not stable at 4 GHz but they can attain 3.5 or 3.7 GHz. Why throw those out when they can be sold as 3.5 GHz parts?

amd-gpu14-06.png

This is especially relevant to multi-core CPUs and GPUs. Hawaii XT has 2816 stream processors; a compelling product could be made even with a few of those shut down. The R9 290, for instance, permits 2560 of these cores. The remainder have been laser-cut or, at least, should have been.

Apparently certain batches of Radeon R9 290s shipped with fully functional Hawaii XT chips that were merely software-locked to 290 specifications. There have been reports that users of cards from multiple OEMs were able to flash a new BIOS to unlock the extra cores. However, other batches seem to be properly locked.

This could be interesting for lucky and brave users but I wonder why this happened. I can think of two potential causes:

  • Someone (OEMs or AMD) had too many 290X chips, or
  • The 290 launch was just that unprepared.

Either way, newer shipments should be properly locked even from affected OEMs. Again, not that it really matters given the performance differences we are talking about.

Source: WCCFTech

(JPR) NVIDIA Regains GPU Market Share

Subject: General Tech, Graphics Cards | November 22, 2013 - 06:26 PM |
Tagged: nvidia, jpr, amd

Jon Peddie Research (JPR) reports an 8% rise in quarter-to-quarter shipments of graphics add-in boards (AIBs) for NVIDIA and a decrease of 3% for AMD. This reverses the story from last quarter, where NVIDIA lost 8% and AMD gained. In all, NVIDIA holds over half the market (64.5%).

But, why?

10-nv_logo.png

JPR attributed AMD's gains last quarter to consumers who added a discrete graphics solution to systems which already contained an integrated product. SLI and Crossfire were noted but pale in comparison. I expect that Never Settle contributed heavily. This quarter, the free games initiative was scaled back with the new GPU lineup. For a decent amount of time, nothing was offered.

At the same time, NVIDIA launched the GTX 780 Ti and their own game bundle. While I do not believe this promotion was as popular as AMD's Never Settle, it probably helped. That said, it is still probably too early to tell whether the Battlefield 4 promotion (or Thief's addition to Silver Tier) will help AMD regain some ground.

The other vendors, Matrox and S3, were "flat to declining". Their story is the same as last quarter: they shipped less than (maybe much less than) 7,000 units. On the whole, add-in board shipments are rising from last quarter; that quarter, however, was a 5.4% drop from the one before.

Source: JPR

Tokyo Tech Goes Green with KFC (NVIDIA and Efficiency)

Subject: General Tech, Graphics Cards, Systems | November 21, 2013 - 09:47 PM |
Tagged: nvidia, tesla, supercomputing

GPUs are very efficient in terms of operations per watt. Their architecture is best suited to a gigantic bundle of similar calculations (such as a set of operations applied to each entry of a large blob of data). These are the tasks which also take up the most computation time, especially for, not surprisingly, 3D graphics (where you need to do something to every pixel, fragment, vertex, etc.). It is also very relevant for scientific calculations, financial and other "big data" services, weather prediction, and so forth.

nvidia-submerge.png

Tokyo Tech KFC achieves over 4 GigaFLOPs per watt of power draw from the 160 Tesla K20X GPUs in its cluster. That is about 25% more calculations per watt than the current leader of the Green500 (the CINECA Eurora System in Italy, with 3.208 GFLOPs/W).
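
The arithmetic on that claim checks out, taking the 4 GFLOPs/W figure at face value:

    \[
    \frac{4.0 - 3.208}{3.208} \approx 0.247 \approx 25\%
    \]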

One interesting trait: this supercomputer will be cooled by oil immersion. NVIDIA offers passively cooled Tesla cards which, according to my understanding of how this works, suit this fluid system very well. I am fairly certain that they remove all of the fans before dunking the servers (I had figured they would be left on).

By the way, was it intentional to name computers dunked in giant vats of heat-conducting oil, "KFC"?

Intel has done a similar test, which we reported on last September, submerging numerous servers for over a year. Another benefit of being green is that you are not nearly as concerned about air conditioning.

NVIDIA is actually taking it to the practical market with another nice supercomputer win.


Source: NVIDIA

ASUS Announces the Republic of Gamers Mars 760 Graphics Card

Subject: Graphics Cards | November 20, 2013 - 01:52 PM |
Tagged: mars, asus, ROG MARS 760, gtx 760, dual gpu

Fremont, CA (November 19, 2013) - ASUS Republic of Gamers (ROG) today announced the MARS 760 graphics card featuring two GeForce GTX 760 graphics-processing units (GPUs) capable of delivering incredible gaming performance and ensuring ultra-smooth high-resolution gameplay. The MARS 760 even outpaces the GeForce GTX TITAN — with game performance that’s up to 39% faster overall. The MARS 760 is a two-slot card packed with exclusive ASUS technologies including DirectCU II for 20%-cooler and vastly quieter operation, DIGI+ voltage-regulator module (VRM) for ultra-stable power delivery and GPU Tweak, an easy-to-use utility that lets users safely overclock the two GTX 760 GPUs.

mars.jpg

Exclusive ASUS features provide cool, quiet, durable and stable performance

ASUS exclusive DirectCU II technology puts 8 highly-conductive cooling copper heatpipes in direct contact with both GPUs. These heatpipes provide extremely efficient cooling, allowing the MARS 760 to run 20% cooler and vastly quieter than reference GeForce GTX 690 cards. Dual 90mm dust-proof fans help to provide six times (6X) greater airflow than reference design. And with 4GB of GDDR5 video memory, the ASUS ROG MARS 760 is capable of delivering visuals with incredibly high frame rates and no stutter, ensuring extremely smooth gameplay — even at WQHD resolutions. An attention-grabbing LED even illuminates as the MARS 760 is operating under load.

The MARS 760 is equipped with ROG’s acclaimed DIGI+ voltage-regulation module (VRM), featuring a 12-phase power design that reduces power noise by 30% and enhances efficiency by 15%. Custom sourced black metallic capacitors offer 20%-better temperature endurance for a lifespan that’s up to five times (5X) longer. The new card is built with extremely hardwearing polymerized organic-semiconductor capacitors (POSCAPs) and has an aluminum back-plate, further lowering power noise while increasing both durability and stability to unlock overclocking potential.

MARS-760_layer1-480x400.jpg

The exclusive GPU Tweak tuning tool allows quick, simple and safe control over clock speeds, voltages, cooling-fan speeds and power-consumption thresholds; GPU Tweak lets users push the two GTX 760 GPUs even further. The ROG edition of GPU Tweak included with the MARS 760 also enables detailed GPU load-line calibration and VRM-frequency tuning, allowing for the most extensive control and tweaking parameters in order to maximize overclocking potential — all adjusted via an attractive and easy-to-use graphical interface.

The GPU Tweak Streaming feature, the newest addition to the GPU Tweak tool, lets users share on-screen action over the internet in real time so others can watch live as games are played. It’s even possible to add a title to the streaming window along with scrolling text, pictures and webcam images.

SPECIFICATIONS:

MARS760-4GD5

  • NVIDIA GeForce GTX 760 SLI
  • PCI Express 3.0
  • 4096MB GDDR5 memory (2GB per GPU)
  • 1008MHz (1072MHz boosted) core speed
  • 6004 MHz (1501 MHz GDDR5) memory clock
  • 512-bit memory interface
  • 2560 x 1600 maximum DVI resolution
  • 2 x dual-link DVI-I output
  • 1 x dual-link DVI-D output
  • 1 x Mini DisplayPort output
  • HDMI output (via dongle)

Source: ASUS

NVIDIA Tesla K40: GK110b Gets a Career (and more vRAM)

Subject: General Tech, Graphics Cards | November 18, 2013 - 03:33 PM |
Tagged: tesla, nvidia, K40, GK110b

The Tesla K20X has ruled NVIDIA's headless GPU portfolio for quite some time now. The part is based on the GK110 chip with 192 shader cores disabled, like the GeForce Titan, and achieves 3.9 TeraFLOPs of compute performance (1.31 TeraFLOPs in double precision). Also like the Titan, the K20X offers 6GB of memory.

nvidia-k40-hero.jpg

The Tesla K40

So the layout was basically the following: GK104 ruled the gamer market except for the, in hindsight, oddly positioned GeForce Titan, which was basically a Tesla K20X without a few features like error correction (ECC). The Quadro K6000 was the only card to utilize all 2880 CUDA cores.

Then, at the recent G-Sync event, NVIDIA CEO Jen-Hsun Huang announced the GeForce GTX 780 Ti. This card uses the GK110b processor and incorporates all 2880 CUDA cores, albeit with reduced double-precision performance (for the 780 Ti, not for GK110b in general). So now we have Quadro and GeForce with the full-power Kepler; your move, Tesla.

And they did: the Tesla K40 launched this morning, and it brought more than just cores.

nvidia-tesla-k40.png

A brief overview

The GeForce launch was famous for its inclusion of GPU Boost, a feature absent from the Tesla line. It turns out that NVIDIA was paying attention to the feature but wanted to include it in a way that suited data centers. GeForce cards boost based on the status of the card, its temperature or its power draw. This is apparently unsuitable for data centers because they would like every unit operating at very similar performance. The Tesla K40 has a base clock of 745 MHz but gives the data center two boost clocks that can be manually set: 810 MHz and 875 MHz.

nvidia-telsa-k40-2.png

Relative performance benchmarks

The Tesla K40 also doubles the amount of RAM to 12GB. Of course, this allows the GPU to work on larger data sets without streaming data in from system memory, or worse.

There is currently no public information on pricing for the Tesla K40, but it is available starting today. What we do know are the launch OEM partners: ASUS, Bull, Cray, Dell, Eurotech, HP, IBM, Inspur, SGI, Sugon, Supermicro, and Tyan.

If you are interested in testing out a K40, NVIDIA has remotely hosted clusters that your company can sign up for at the GPU Test Drive website.

Press blast after the break!

Source: NVIDIA

AMD's Holiday Game + GPU Bundles

Subject: General Tech, Graphics Cards | November 14, 2013 - 07:54 PM |
Tagged: never settle forever, never settle, battlefield 4, amd

UPDATE (11/14/2013): After many complaints from the community about the lack of availability of graphics cards that actually HAD the Battlefield 4 bundle included with them, AMD is attempting to clarify the situation.  In a statement sent through email, AMD says that the previous information sent to press "was not clear and has led to some confusion", which is definitely the case.  While it was implied that all customers who bought R9 series graphics cards on or after November 13th would get a free copy of BF4, the truth is that "add-in-board partners ultimately decide which select AMD Radeon R9 SKUs will include a copy of BF4."

So, how are you to know which SKUs and cards will actually include BF4?  AMD is trying hard to set up a landing page at http://amd.com/battlefield4 that will give gamers clear, and absolute, listings of which R9 series cards include the free copy of the game.  When I pushed AMD for a timeline on exactly when these would be posted, the best I could get was "in the next day or two."

As for users who bought an R9 280X, R9 270X, R9 270, R9 290X or R9 290 after the announcement of the bundle program changes but DID NOT get a copy of BF4, AMD is going to try and help them out by offering up 1,000 Battlefield 4 keys over AMD's social channels.  The cynic in me thinks this is another ploy to get more Facebook likes and Twitter followers, but in truth the logistics of verifying purchases at this point would be a nightmare for AMD.  Though I don't have details on HOW they are going to distribute these keys, I certainly hope they find a way to target those users that were screwed over in this mess.   Follow www.facebook.com/amdgaming or www.twitter.com/amdradeon for more information on this upcoming promotion.

AMD did send over a couple of links to cards that are currently selling with Battlefield 4 included, as an example of what to look for.

As far as I know, the board partners will also decide which online outlets offer the bundle, so even if you see the same SKU on Amazon.com, it may not come with Battlefield 4. It appears that in this case, and going forward, extreme caution is in order when looking for the right card for you.

END UPDATE (11/14/2013)

AMD announced the first Never Settle on October 22nd, 2012 with Sleeping Dogs, Far Cry 3, Hitman: Absolution, and 20% off of Medal of Honor: Warfighter. The deal was valued at around $170. It has exploded since then to become a choose-your-own-bundle across a variety of tiers.

This bundle is mostly different.

AMD-holiday-bundle.png

Basically, apart from the R7 260X (I will get to that later), all applicable cards will receive Battlefield 4. This is a one-game promotion, unlike Never Settle. Still, it is one very good game that will soon be accelerated with Mantle in an upcoming patch. It should be a good example of games based on Frostbite 3 for at least the next few years.

The qualifying cards are: R9 270, R9 270X, R9 280, R9 280X, R9 290, and R9 290X. They must be purchased from a participating retailer beginning November 13th.

The R7 260X is slightly different because its deal is more akin to Never Settle. It will not get a free copy of Battlefield 4. Instead, the R7 260X will have access to two of six Never Settle Forever Silver Tier games: Hitman: Absolution, Sleeping Dogs, Sniper Elite V2, Far Cry 3: Blood Dragon, DiRT 3, and (for the first time) THIEF. It is possible that other Silver Tier Never Settle Forever owners, who have yet to redeem their voucher, might qualify as well. I am not sure about that. Regardless, THIEF was chosen because the developer worked closely with AMD to support both Mantle and TrueAudio.

Since this deal half-updates Never Settle and half-doesn't... I am unsure what this means for the future of the bundle. They seem to be simultaneously supporting and disavowing it. My personal expectation is that AMD wants to continue with Never Settle but they just cut their margins too thin with this launch. This will be a good question to revisit later in the GPU lifecycle when margins become more comfortable.

What do you think? Does AMD's hyper-aggressive hardware pricing warrant a temporary suspension of Never Settle? I mean, until today, these cards were being purchased without any bundle whatsoever.

Owners of qualifying R9-series cards (purchased after Nov 13 from participating retailers) can check out AMD's Battlefield 4 portal.

Qualifying R7 260X owners, on the other hand, can check out the Never Settle Forever portal.

Source: AMD