NVDA Cum Laude-ing Stanford a CUDA Center of Excellence

Subject: Editorial, General Tech, Graphics Cards | July 17, 2011 - 01:07 PM |
Tagged: stanford, nvidia, CUDA

NVIDIA has been pushing their CUDA platform for years now as a method to access your GPU for purposes far beyond the scope of flags and frags. We have seen what a good amount of heterogeneous hardware can do for a process with a hefty portion of parallelizable code, from encryption and Bitcoin mining to media processing and blurring the line between real-time and non-real-time 3D rendering. NVIDIA also recognizes the role that academia plays in training future programmers and thus strongly supports institutions that teach how to use GPU hardware effectively, especially when they teach how to use NVIDIA GPU hardware effectively. Recently, NVIDIA knighted Stanford as the latest member of its CUDA Center of Excellence round table.

GPUniversity.jpg

It will be $150 if you want it framed.

The list of CUDA Centers of Excellence now includes: Georgia Institute of Technology, Harvard School of Engineering, the Institute of Process Engineering at the Chinese Academy of Sciences, National Taiwan University, Stanford Engineering, TokyoTech, Tsinghua University, University of Cambridge, University of Illinois at Urbana-Champaign, University of Maryland, University of Tennessee, and the University of Utah. If you are interested in learning how to program GPUs, NVIDIA has just blessed one further choice. Whether that will sway many prospective students and faculty remains to be seen, but it makes for many amusing puns nonetheless.

Source: NVIDIA

The MSI N580GTX Lightning Once Again Breaks Three Graphics Card World Records

Subject: Graphics Cards | July 15, 2011 - 02:20 PM |
Tagged: overclocking, N580GTX Lightning XE 3GB, msi, LN2

CITY OF INDUSTRY, CA – July 15, 2011 – Since releasing its Lightning series graphics cards, world-renowned mainboard and graphics card manufacturer MSI has received universal praise from media and gamers alike for their outstanding design, rich use of components, powerful performance, and infinite overclocking potential. With the release of the N580GTX Lightning, which follows in the footsteps of previous-generation Lightnings, graphics card world records were sure to fall. Overclocking enthusiasts from all over have been running the N580GTX Lightning with LN2 for some extreme overclocking and once again broke the 3DMark 11, 3DMark Vantage, and Unigine Heaven (DX11) single-card, single-core world record scores. These considerable achievements demonstrate once again that only a Lightning can outshine a Lightning!

Lightning1.jpg

The triple world record king: the N580GTX Lightning
At the beginning of June, US overclocker Splave put the N580GTX Lightning under LN2 cooling and effortlessly set a Unigine Heaven (DX11) single-card, single-core world record with a high score of 2501.6. Then, he topped that score this week with a new world record score of 2562.51! Russia's overclocking ace Smoke also paired an N580GTX Lightning with LN2 for some extreme overclocking. In the past few days, with scores of 11,390 and 46,546 respectively, he broke both the 3DMark 11 and 3DMark Vantage single-card, single-core world records! This proves that the superlative performance of the Lightning series is its own best spokesman.

Most advanced design and Military Class II components create a king amongst record-breakers
The N580GTX Lightning, whether in terms of specifications or design, belongs in the highest class of products. The exclusive Power4 power supply architecture and triple overvoltage function strengthen overclocking stability and potential. Additionally, the Extreme OC Function, specifically designed for overclocking, ensures the graphics card can still function normally under extreme overclocking conditions. The choice of materials has also undergone careful consideration: high-quality, second-generation Military Class components provide the most stable user experience and optimum durability. In all aspects, MSI Lightning graphics cards demonstrate design and development capability and infinite overclocking potential.

N580GTX_Lightning-02.jpg

Source: MSI
Author:
Manufacturer: General

How much will these Bitcoin mining configurations cost you in power?

Earlier this week we looked at Bitcoin mining performance across a large range of GPUs, but we had many requests for estimates of the cost of the power to drive them.  At the time we were much more interested in the performance of these configurations, but now that we have that information and have started to look at the potential profitability of doing something like this, looking at the actual real-world cost of running a mining machine 24 hours a day, 7 days a week became much more important.

This led us to today's update where we will talk about the average cost of power, and thus the average cost of running our 16 different configurations, in 50 different locations across the United States.  We got our data from the U.S. Energy Information Administration website, where they provide average retail prices on electricity divided up by state and by region.  For use today, we downloaded the latest XLS file (which has slightly more updated information than the website as of this writing) and got to work with some simple math.
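As a rough illustration of the "simple math" involved, here is a minimal sketch of how a configuration's power draw and a state's average rate turn into a monthly electricity bill. The wattage and rate below are made-up placeholder values, not figures from our testing:

```python
# Rough sketch: estimating the cost of running a mining rig 24/7 for a month.
# The wattage and rate below are placeholders, not our measured data.

def monthly_power_cost(watts, cents_per_kwh, hours=24 * 30):
    """Cost in dollars of drawing `watts` continuously for `hours`."""
    kwh = watts * hours / 1000.0          # watt-hours -> kilowatt-hours
    return kwh * cents_per_kwh / 100.0    # cents -> dollars

# Example: a hypothetical 350 W system at 9.5 cents per kWh
print(f"${monthly_power_cost(350, 9.5):.2f} per month")   # -> $23.94 per month
```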

Here is how your state matches up:

kwh-1.jpg

kwh-2.jpg

The first graph shows the rates in alphabetical order by state, the second in order from the most expensive to the least.  First thing we noticed: if you live in Hawaii, I hope you REALLY love the weather.  And maybe it's time to look into that whole solar panel thing, huh?  Because Hawaii was SO FAR out beyond our other data points, we are going to leave it out of our calculations and instead ask residents and the curious to just basically double one of our groupings.

Keep reading to get the full rundown on how power costs will affect your mining operations, and why it may not make sense to mine AT ALL with NVIDIA graphics cards! 

Video Perspective: AMD A-series APU Dual Graphics Technology Performance

Subject: Graphics Cards, Processors | July 13, 2011 - 02:13 PM |
Tagged: llano, dual graphics, crossfire, APU, amd, a8-3850, 3850

Last week we posted a short video about the gaming performance of AMD's Llano-core A-series of APUs, and the response was so positive that we have decided to follow up with some other short looks at the processor's features and technologies.  For this video we decided to investigate the advantages and performance of Dual Graphics technology - the AMD APU's ability to combine the performance of a discrete GPU with the Radeon HD 6550D graphics integrated on the A8-3850 APU.

For this test we set our A8-3850 budget gaming rig to the default clock speeds and settings and used an AMD Radeon HD 6570 1GB as our discrete card of choice.  With a price hovering around $70, the HD 6570 would be a modest purchase for a user that wants to add some graphical performance to their low-cost system without stretching into enthusiast territory.

The test parameters were simple: we knew the GPU on the Radeon HD 6570 was a bit better than that of the A8-3850 APU so we compared performance of the discrete graphics card ALONE to the performance of the system when enabling CrossFire, aka Dual Graphics technology.  The results are pretty impressive:

You may notice that these scaling percentages are higher than those we found in our first article about Llano on launch day.  The reason is that we used the Radeon HD 6670 there and found that, while compatible according to AMD's guidelines, the HD 6670 overpowers the HD 6550D GPU on the APU, so the performance delta Dual Graphics provides is smaller by comparison.
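For clarity, the scaling percentages quoted here are simply the gain of the Dual Graphics result over the discrete card running alone; a quick sketch with made-up frame rates (not numbers from the video):

```python
# Percentage scaling of Dual Graphics over the discrete card running alone.
# The frame rates below are made-up placeholders, not results from our testing.

def scaling_percent(fps_discrete_alone, fps_dual_graphics):
    return (fps_dual_graphics - fps_discrete_alone) / fps_discrete_alone * 100.0

print(f"{scaling_percent(30.0, 42.0):.1f}% faster")   # -> 40.0% faster
```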

So, just as we said with our APU overclocking video, while adding in a discrete card like the HD 6570 won't turn your PC into a gaming machine centered on a $300 graphics card, it will definitely help performance by worthwhile amounts without anyone feeling like they are wasting the silicon on the A8-3850.

Source: AMD
Author:
Manufacturer: General

What is a Bitcoin?

This article looking at Bitcoins and the performance of various GPUs at mining them was really a big team effort at PC Perspective.  Props go out to Tim Verry for doing the research on the process of mining and helping to explain what Bitcoins are all about.  Ken Addison did a great job going through an allotment of graphics cards running GUIMiner and gathering the data you will see presented later.  Scott Michaud helped with some graphics and imagery, and I'm the monkey that just puts it all together at the end.

** Update 7/13/11 **  We recently wrote another piece on the cost of the power to run the Bitcoin mining operations used in this performance article.  Based on the individual prices of electricity in all 50 US states, we found that the cost of the power to run some cards exceeded the value of the Bitcoin currency based on today's exchange rates.  I would highly recommend you check out that story as well after giving this performance-based article a thorough reading.  ** End Update **

A new virtual currency called Bitcoin has been receiving a great deal of news fanfare, criticism and user adoption. The so-called cryptographic currency uses strong encryption methods to eliminate the need for trust when buying and selling goods over the Internet, in addition to a peer-to-peer distributed timestamp server that maintains a public record of every transaction to prevent double spending of the electronic currency. The aspects of Bitcoin that have caused the most criticism and the recent large rise in growth are, respectively, its inherent ability to anonymize the real-life identities of users (though the transactions themselves are public) and the ability to make money by supporting the Bitcoin network in verifying pending transactions through a process called “mining.” Privacy, security, cutting out the middle man, making it easy for users to do small casual transactions without fees, and the ability to be rewarded for helping to secure the network by mining are all selling points (pun intended) of the currency.
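To give a feel for what “mining” actually involves before you dive into the full article, here is a heavily simplified sketch of the proof-of-work idea: hashing a candidate block with different nonce values until the double SHA-256 hash falls below a difficulty target. The header string and target below are toy values chosen so the loop finishes quickly, nothing like real Bitcoin difficulty:

```python
import hashlib

# Toy proof-of-work loop. The header and target are illustrative only and are
# vastly easier than the real Bitcoin network's difficulty target.
def mine(header: bytes, target: int):
    nonce = 0
    while True:
        data = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = mine(b"toy block header", 2 ** 240)   # ~1 in 65,536 hashes qualifies
print(f"found nonce {nonce}: {digest}")
```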

When dealing with a more traditional and physical local currency, there is a need for both parties to trust the currency but not much need to trust each other, as handing over cash is fairly straightforward. One does not need to trust the other person as much as if it were a check, which could bounce. Once the cash has changed hands, the buyer cannot go and spend that money elsewhere as it is physically gone. Transactions over the Internet, however, greatly reduce the convenience of that local currency, and due to the series of tubes’ inability to carry cash through the pipes, services like PayPal as well as credit cards and checks are likely to be used in its place. While these replacements are convenient, they are also much riskier than cash, as fraudulent chargebacks and disputes are likely to occur, leaving the seller in a bad position. Due to this risk, sellers have to factor a certain percentage of expected fraud into their prices in addition to collecting as much personally identifiable information as possible. Bitcoin seeks to remedy these risks by bringing the convenience of a local currency to the virtual plane with irreversible transactions, a public record of all transactions, and the ability to trust strong cryptography instead of needing to trust people.

paymentpal.jpg

There are a number of security measures inherent in the Bitcoin protocol that assist with these security goals. Foremost, Bitcoin uses strong public and private key cryptography to secure coins to a user. Money is handled by a Bitcoin wallet, which is a program, such as the official Bitcoin client, that creates public/private key pairs that allow you to send and receive money; you are further able to generate new receiving addresses whenever you want within the client. The wallet.dat file is the record of all your key pairs, and thus your bitcoins, and contains 100 address/key pairs (though you are able to generate new ones beyond that). Then, to send money, one only needs to sign the bitcoin with their private key and send it to the recipient’s public key. This creates a chain of transactions secured by these public and private key pairs from person to person. Unfortunately, this cryptography alone is not able to prevent double spending, meaning that Person A could sign the bitcoin over to Person B with his private key, but could also do the same to Person C and so on. This issue is where the peer-to-peer and distributed computing aspects of the Bitcoin protocol come into play. By using a peer-to-peer distributed timestamp server, the Bitcoin protocol creates a public record of every transaction that prevents double spending of bitcoins. Once the bitcoin has been signed over to a public key (receiving address) with the user’s private key and the network confirms this transaction, the bitcoins can no longer be spent by Person A; the network has confirmed that the coin belongs to Person B now, and only Person B can spend it using their private key.
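As a simplified illustration of the signing step described above (not the actual Bitcoin transaction format), here is a sketch using the third-party python-ecdsa package, which supports the same secp256k1 curve Bitcoin uses; the transaction strings are obviously just placeholders:

```python
# Simplified illustration of signing a "send" message with a private key and
# verifying it with the matching public key. Real Bitcoin transactions have a
# much richer structure; this only demonstrates the public/private key idea.
from ecdsa import SigningKey, SECP256k1, BadSignatureError

sender_private = SigningKey.generate(curve=SECP256k1)   # stays in the wallet
sender_public = sender_private.get_verifying_key()      # shared with the network

transaction = b"send 1 BTC to <recipient address>"
signature = sender_private.sign(transaction)

# Anyone holding the public key can check the signature...
assert sender_public.verify(signature, transaction)

# ...and a tampered transaction fails verification.
try:
    sender_public.verify(signature, b"send 100 BTC to <attacker address>")
except BadSignatureError:
    print("forged transaction rejected")
```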

Keep reading our article that details the theories behind Bitcoins as well as the performance of modern GPUs in mining them!  

Powercolor's passively cooled and pricey HD6850 SCS3

Subject: Graphics Cards | July 12, 2011 - 02:19 PM |
Tagged: powercolor, passive cooling, hd6850

Powercolor's SCS3 HD6850 1GB GDDR5 graphics card is an odd beast, neither fish nor fowl but a strange hybrid of the two.  To passively cool an HD6850 you need a lot of metal, about four slots worth in fact, which makes it all but impossible to use this card in an HTPC or other SFF system.  That size also makes it rather hard to set up in a CrossFire system, and for extreme performance you need to think about adding a fan in close proximity to, if not attached to, the heatsink, which makes paying the extra money for this card a poor decision.  That said, Benchmark Reviews saw good performance and even managed a respectable overclock with this card, though even with good airflow through their case they saw troubling temperatures on occasion.  Even if you can't picture yourself picking up the card, it is worth clicking through just to see the heatsink.

BR_Powercolor_hd6850_scs3.jpg

"PowerColor's a familiar name to the AMD Radeon community. If they don't offer the widest variety of variations on AMD's reference designs, I don't know who does! They have no fewer than eight different versions of AMD's Radeon HD6850 card, ranging from factory-overclocked "PCS+" variants to a single-slot-cooler version to the one Benchmark Reviews is looking at today: the passively-cooled PowerColor SCS3 HD6850 1GB GDDR5. This card uses a massive fan-less heat sink to offer the performance of an HD6850 without any noise at all, and is certainly one of the highest-performing graphics cards I've ever seen with a passive cooler. Will this really work? How far will it overclock? Let's take a look."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Double the price but not double the performance: PowerColor designed a dual-GPU HD 6870

Subject: Graphics Cards | July 8, 2011 - 02:22 PM |
Tagged: powercolor, HD6870 X2

PowerColor's HD 6870 X2 is a non-standard card sporting a pair of HD6870 GPUs on one PCB, which unfortunately costs more than twice as much as a standard HD6870, though it is in the same ballpark as its competitor, the GTX580.  PowerColor really went out on a limb designing this card with no help from AMD, and as far as performance goes it is a success, as it can beat the GTX580's frame rate when gaming.  However, as far as pricing goes, it ranks with the GTX590 and HD6990 in that it is very expensive when looked at from a performance-per-dollar perspective.  For those enthusiasts that want the absolute best that is probably not going to matter, but if you have the slots you should consider buying a pair of reference HD 6870s instead.  Check out techPowerUp's full review if the $500 asking price hasn't scared you off.

tpu_6870x2.jpg

"PowerColor's exclusive HD 6870 X2 unifies two HD 6870 graphics processors on a single card. This approach offers performance that can compete with other high-end cards like GTX 580. In our testing we saw nice results that make this card a worthy alternative in today's high-end graphics card market."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: TechPowerUp

Video Perspective: AMD A-series APU Overclocking and Gaming Performance

Subject: Graphics Cards, Motherboards, Processors | July 6, 2011 - 08:15 PM |
Tagged: amd, llano, APU, a-series, a8, a8-3850, overclocking

We have spent quite a bit of time with AMD's latest processor, the A-series of APUs previously known as Llano, but something we didn't cover in the initial review was how overclocking the A8-3850 APU affected gaming performance for the budget-minded gamer.  Wonder no more!

In this short video we took the A8-3850 and pushed the base clock frequency from 100 MHz to 133 MHz, overclocking the CPU clock rate from 2.9 GHz to 3.6 GHz while also pushing the GPU frequency from 600 MHz up to 798 MHz.  All of the clock rates (including CPU, GPU, memory and north bridge) are based on that base frequency, so overclocking on the AMD A-series can be pretty simple provided the motherboard vendors supply the multiplier options to go with it.  We tested systems based on a Gigabyte and an ASRock motherboard, both with very good results to say the least.
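Since every derived clock is just the base clock times a multiplier, the arithmetic is easy to sketch. The 6x GPU multiplier below is inferred from the 600 MHz stock and 798 MHz overclocked figures above, so treat it as an approximation rather than a published specification:

```python
# On Llano, each clock is the base clock times a multiplier, so raising the
# base clock from 100 MHz to 133 MHz raises everything tied to it.

def derived_clock_mhz(base_clock_mhz, multiplier):
    return base_clock_mhz * multiplier

GPU_MULTIPLIER = 6   # inferred from 600 MHz / 100 MHz; approximate, not an official spec

print(derived_clock_mhz(100, GPU_MULTIPLIER))   # 600 MHz at stock
print(derived_clock_mhz(133, GPU_MULTIPLIER))   # 798 MHz overclocked
```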

We tested 3DMark11, Bad Company 2, Lost Planet 2, Left 4 Dead 2 and Dirt 3 to give us a quick overall view of the performance increases.  We ran the games at a 1680x1050 resolution and "Medium"-ish quality settings to find a base frame rate on the APU of about 30 FPS.  Then we applied our overclocked settings to see what gains we got.  Honestly, I was surprised by the results.

While overclocking a Llano-based gaming rig won't make it compete against $200 graphics cards, getting a nice 30% performance boost for a budget-minded gamer is basically a no-brainer if you are any kind of self-respecting PC enthusiast.

Source: AMD

I don't always game SLI, but when I do, I game dos 580GTX

Subject: Graphics Cards | July 4, 2011 - 06:10 PM |
Tagged: N580GTX Lightning XE 3GB, MSI lightning, msi, GTX580

Lightning strikes twice in this review from [H]ard|OCP, where a pair of MSI N580GTX Lightning XE 3GB cards in SLI is tested on a variety of games.  Apart from showing off the capabilities of a system with a $1200+ graphics subsystem, it demonstrates a very important example of scaling: the effect of doubling the GDDR5 available to the GTX580 GPU.  They've tested the 1.5GB version previously and so can compare that performance as well as contrast it with the HD 6970 in CrossFire.  Their testing showed that the extra memory does help, as long as you are using multiple monitors.  If you are using a single display, even a 2560x1600 30" display, you are wasting your money picking up the 3GB models; there simply aren't enough pixels to push to need 3GB per card.  The AMD setup did fall somewhat behind in performance, but since it costs about $500 less that does not make it a bad alternative.  Head over to [H] and prepare to feel jealous.

h_MSI_lightning_gtx5803gb.jpg

"Two MSI N580GTX Lightning XE 3GB video cards in SLI! Tested in NV Surround. We are going to find out if these 3GB video cards hold an advantage over two GeForce GTX 580's and two Radeon HD 6970's. This is the most VRAM per video card we have ever gamed with. We test with seven games, including the new F.3.A.R."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Just Delivered: ASUS ROG MATRIX GeForce GTX 580 1.5GB Graphics Card

Subject: Graphics Cards | July 1, 2011 - 03:03 PM |
Tagged: ROG, matrix, just delivered, GTX 580, asus

Just Delivered is a section of PC Perspective where we share some of the goodies that pass through our labs that may or may not see a review, but are pretty cool nonetheless.

We like big graphics cards and we can't deny...when the FedEx guy shows up with a great big box and....okay, sorry about that.  But it is true, we definitely love it when new GPUs find their way to our testing facilities.  Today is no different, as the delivery guy dropped off a box containing the ASUS Republic of Gamers (ROG) MATRIX GTX 580 1.5GB.

IMG_1084.JPG

We first saw this card at Computex in June and we are without a doubt preparing a full review of it in the next week or so, but we wanted to show it off right away - after all, we like to share the goodies that make their way to PC Perspective as often as we can.  At first glance you can easily tell that the ROG MATRIX GTX 580 is more than just your standard 580-based solution - it takes up three slots with its large cooler and uses dual 8-pin power connectors rather than an 8-pin and 6-pin combination.  

IMG_1086.JPG

It has some unique options, including buttons directly on the PCB that instantly put the fan at the full 100% speed and + and - keys for increasing and decreasing the GPU clock rate without the need to go into software.  Pretty damn cool!

IMG_1087.JPG

There are voltage measurement positions on the PCB and a Safe Mode button to instantly revert back to the standard clock rates if you have pushed the card too far - this will make things much easier for those overclockers that push things well past reasonable limits.

IMG_1096.JPG

The cooler is GARGANTUAN but keeps the temperatures reasonable while the card runs on a 19-phase PCB.

IMG_1093.JPG

ASUS MATRIX GTX 580 - Reference GTX 580 - ASUS ARES

This probably won't beat out the Radeon HD 6990 for the fastest graphics card around, but we are thoroughly expecting to be impressed in our full review.

Source: ASUS