Wish you CUDA had a GPGPU C++ template? Now you can!

Subject: General Tech, Graphics Cards | June 29, 2011 - 08:58 PM |
Tagged: gpgpu, CUDA

If you have seen our various news articles about how a GPU can be useful in many ways, and you are a developer yourself, you may be wondering how to get in on that action. Recently Microsoft showed off their competitor to OpenCL, known as C++ AMP, and AMD showed off some new tools designed to help OpenCL developers. Everything was dead silent on the CUDA front at the AMD Fusion Developer Summit, as expected, but that does not mean no one is helping developers who do not mind being tied to NVIDIA. An open-source project has been created to generate template files for programmers who want to do some of their computation in CUDA and would like a helping hand setting up the framework.


You may think the videocard is backwards, but clearly its DVI heads are in front.

The project was started by Pavel Kartashev and is a Java application that accepts form input and generates CUDA code to be imported into your project. The application generates the tedious skeleton code for defining variables and efficiently using the GPU architecture, leaving you to program the actual task to be accomplished. The author apparently plans to create a web-based version, which should be quite easy given the Java-based nature of his application. Personally, I would be more interested in the local application or a widget, leaving my web browser windows free for reference material. That said, I am sure someone would like this tool in their browser, possibly more people than share my preference.
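For readers who have not touched GPGPU programming, even a kernel that just adds two arrays requires a surprising amount of host-side ceremony: allocate device memory, copy inputs over, pick a launch configuration, launch, copy results back, and free everything. A minimal sketch of that boilerplate follows; the names are illustrative, not the generator's actual output.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Trivial kernel: each thread handles one array element.
__global__ void addKernel(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // Host-side buffers.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // The tedious part a generator automates: allocate, copy, launch, copy back, free.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // One thread per element, rounded up to whole 256-thread blocks.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    addKernel<<<blocks, threads>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %f\n", h_c[10]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Only the body of `addKernel` is the "actual process to be accomplished"; everything in `main` is the scaffolding a form-driven generator can write for you.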

 
If you are interested in contributing, either financially or through labor, he asks that you contact him through the email tied to his PayPal account (likely for spam reasons, so I can assume posting it here would be the opposite of helpful). The rest of us can sit back, enjoy our GPU-enabled applications, and bet on how long it will take NVIDIA to reach out to him. I've got all next week.

MSI's Cyclonically powerful HD6850

Subject: Graphics Cards | June 28, 2011 - 05:41 PM |
Tagged: msi, cyclone, factory overclocked, hd6850

While here at PC Perspective we have been busy with Hawks and Lightning, OCIA went for a Cyclone.  The particular whirlwind in this case is the MSI R6850 Cyclone PE/OC, which as the name implies sports a custom cooling solution and a factory overclock.  The custom cooler really makes this card stand out: instead of the full shroud we are used to seeing, there is a 90mm PWM-controlled 9-blade fan with the card itself fully exposed.  The overclock adds 85MHz to the GPU, taking it to 860MHz, and bumps the memory 100MHz to 1.1GHz (4.4GHz effective), which OCIA made even more impressive by overvolting the card with MSI's Afterburner software.  To MSI's credit, the card is priced similarly to other HD6850s, unlike many factory-overclocked cards which carry a premium price tag.  If you have less than $200 to spend on a GPU, this card might be for you.


"Discrete computer graphics are one of the toughest markets to keep current with. New graphics cores are released on a pretty frequent basis from both ATI and NVIDIA and with naming schemes that change nearly as often, it can be difficult to determine where each card stands in relation to others in the same price range. Today we will be taking a look at the MSI R6850 PE / OC graphics card, a mid-range GPU that was launched at the end of last year. Codenamed Barts, this GPU is built on a 40nm process with support for DirectX 11 & Open GL 4.0. Other notable features include HDMI 1.4a & DisplayPort 1.2 support as well as AMD Eyefinity multi-display technology and CrossfireX support."


Crysis 2: DirectX 11 free update released

Subject: Editorial, General Tech, Graphics Cards | June 27, 2011 - 04:44 PM |
Tagged: dx11, crysis 2

Last Wednesday we reported on the announcement of the Crysis 2 DX11 patch and high-resolution texture pack, due on the 27th of June. Looking at the calendar, it appears your graphics card just ran out of time to rule the roost. Clocking in at 546 megabytes for the DirectX 11 update and 1695 megabytes for the high-resolution texture pack, the new updates are not small, especially since that does not include the 1.9 patch itself. The big question is whether these updates will push the limits of your computer, and if so, is it worth it?


Can you run me now? … Hello?

VR-Zone benchmarked the new updates on an Intel Core i7-965 system paired with an NVIDIA GeForce GTX 580. We believe they accidentally swapped the labels on their Extreme Quality and Ultra Quality benchmarks, as ultra is the more intensive of the two settings; also, ultra should show the biggest difference between DX9 and DX11, since DX11 effects are not enabled at the extreme settings. (Update 6/28/2011: That is exactly what happened; VR-Zone has since fixed it.) Under that assumption you are looking at approximately 40 FPS for a 1080p experience with that test system and all the eye candy enabled, a drop of approximately 33% from its usual 60 FPS under extreme settings.

But how does it look? Read on for all of that detail.

Source: VR-Zone

Intel learns from Sandy Bridge mistakes, but is it enough?

Subject: General Tech, Graphics Cards, Processors | June 24, 2011 - 01:13 PM |
Tagged: linux, Ivy Bridge, Intel

Back when Sandy Bridge launched, Intel had some difficulty with Linux compatibility because their support software was not available far enough ahead of launch for distribution developers to roll it into their releases. As a result, users purchasing Sandy Bridge hardware were in for a frolic through third-party repositories unless they wished to wait four or five months for their distribution's next major version. This time Intel is pushing code out much earlier, though questions remain about whether they will fully make Ubuntu's 11.10 release.


You mean there's Intel... inside me?

Intel came down hard on themselves for their Sandy Bridge support. Jesse Barnes, an open-source Linux developer at Intel, posted on the Phoronix Forums his thoughts on the Sandy Bridge Linux issue:

"No, this is our job, and we blew it for Sandy Bridge. We're supposed to do development well ahead of product release, and make sure distros include the necessary code to get things working … Fortunately we've learned from this and are giving ourselves more time and planning better for Sandy Bridge's successor, Ivy Bridge."

Now, six months later, as support for Ivy Bridge is released and rolled into the necessary places, Intel appears to be more successful than last time. Much of the code that Intel needs to release for Ivy Bridge is already available and rolled into the Linux 3.0 kernel. A few features missed the deadline and must wait for the Linux 3.1 kernel. While Phoronix believes that Fedora 16 will still be able to roll in support in time, it is possible that Ubuntu 11.10 may not unless they back-port the changes to their distribution. That is obviously not something Intel would like to see happen, given all their recent extra effort.

Source: Phoronix

PowerColor's take on the ultimate HD6970

Subject: Graphics Cards | June 24, 2011 - 12:50 PM |
Tagged: powercolor, amd, HD6970, factory overclocked

Head over to [H]ard|OCP to meet the PowerColor PCS+ Radeon HD 6970, with a 60MHz bump on the GPU to 940MHz and memory at 1425MHz, a 50MHz bump, along with an improved cooler.  They also added some extras to the back of the card: a dual-link DVI-I port, a single-link DVI-I port, one HDMI port, and two mini-DisplayPort jacks, which will make setting up Eyefinity a breeze.  The boosted speed helped it overcome the GTX 570 in almost every benchmark; pity the same can be said of the price, as it costs more than NVIDIA's card without enough of a performance lead to justify the increased cost.


"PowerColor's highest-end Radeon HD 6970 is on our test bench today. The PCS+ Radeon HD 6970 has a respectable out-of-the-box overclock, a custom cooler, and a free game, but does it offer value for its price premium?"


Source: [H]ard|OCP

Glean a bit more from the AMD Fusion Developer Summit

Subject: Graphics Cards | June 20, 2011 - 01:38 PM |
Tagged: amd, Eric Demers, APU

The Tech Report was present for AMD graphics CTO Eric Demers' keynote at the Fusion Developer Summit near Seattle last week.  They captured quite a few of the slides on camera, which you can examine at the bottom of their article.  We have seen quite a bit of coverage on the next generation of AMD's Fusion processors, but how can you get sick of reading insider information?  Still no news on Bulldozer yet, though.


"At the Fusion Developer Summit here in Bellevue, Washington this morning, AMD Graphics CTO Eric Demers made some interesting revelations about his company's next graphics processor architecture. While he didn't talk about specific products, he did say this new core design will materialize inside all future AMD products with GPUs in them over the next few years."


Win a Radeon R9 295X2 and more!!

Subject: Graphics Cards | June 17, 2011 - 03:20 PM |
Tagged: live

Hey, you're here? You found the secret page and form to enter in our AMD live stream contest! Just fill out the information below and you're entered. Good luck!

Source: PCPer Live!

Microsoft reiterates stance on 'harmful' WebGL

Subject: General Tech, Graphics Cards, Mobile | June 17, 2011 - 04:35 AM |
Tagged: webgl, microsoft

Microsoft has made substantial efforts lately to increase their support of open standards, even to the point of giving them first-class treatment ahead of their home-grown formats. Internet Explorer 9 shows the best support for web standards such as HTML5, CSS, and JavaScript that the browser line has ever had. One feature set, however, has been outright omitted from Internet Explorer: WebGL. Microsoft recently made a more official statement on the subject, claiming the standard is harmful from a security standpoint.


WebGL: Heaven or Hell?

(Image from MrDoob WebGL demo; contains Lucy model from Stanford 3D repository)

WebGL is an API very similar to OpenGL ES 2.0, the API used for OpenGL features in embedded systems, particularly smartphones. The goal of WebGL is to provide a lightweight, CSS-obeying 3D and shader system for websites that require advanced 3D graphics, or even general-purpose calculations performed on the shader units of the client's GPU. Mozilla and Google currently support it in their public browsers, with Opera and Apple shipping support in the near future. Microsoft has stated that allowing third-party websites that level of access to the hardware is dangerous, as security vulnerabilities that formerly needed to be exploited locally can now be exploited from the web browser. This is an area Microsoft knows all too well from their past attempts at active(x)ly adding scripting functionality to the web browser, which evolved into a decade-long game of whack-a-mole for security holes.

But skeptics of Microsoft's position could easily point out that the company singled out the one standard based on OpenGL, a competitor to their still-cherished DirectX. Regardless of Microsoft's motives, this seems to put to rest the question of whether Microsoft will be working towards implementing WebGL in any release of Internet Explorer currently in development.

Do you think Microsoft is warning its competitors about its past ActiveX woes, or is this more politically motivated? Comment below (registration not required).

Source: Microsoft
Manufacturer: AMD

Introducing the AMD FSA

At AMD’s Fusion 11 conference, we were treated to a nice overview of AMD’s next-generation graphics architecture.  With the recent change in their lineup from the previous VLIW-5 setup (which powered their graphics chips from the Radeon HD 2900 through the latest “Barts” chip running the HD 6800 series) to the new VLIW-4 (HD 6900), many were not expecting much from AMD in terms of new and unique designs.  The upcoming “Southern Islands” parts were thought to be based on the current VLIW-4 architecture, featuring more performance and a few new features thanks to the die shrink to 28 nm.  It turns out that speculation is wrong.


In late Q4 of this year we should see the first iteration of this new architecture, which was detailed today by Eric Demers.  The overview covered some features that will not make it into this upcoming product, but everything will eventually be added in over the next three years or so.  Historically speaking, AMD has placed graphics first, with GPGPU/compute as the secondary function of their GPUs.  While we have had compute abilities since the Radeon X1800/X1900 series of products, AMD has not been as aggressive with compute as its primary competition.  From the G80 GPUs onward, NVIDIA has pushed compute harder and farther than AMD has.  With its mature CUDA development tools and the compute-heavy Fermi architecture, NVIDIA has been a driving force in this particular market.  Now that AMD has released two APU-based products (Llano and Brazos), they are starting to really push OpenCL, DirectCompute, and the recently announced C++ AMP.

Continue reading for all the details on AMD's Graphics Core Next!

AMD Fusion Developer Summit 2011: Live Blog

Subject: Editorial, Graphics Cards, Processors, Mobile, Shows and Expos | June 16, 2011 - 02:41 PM |
Tagged: llano, liveblog, fusion, APU, amd, AFDS


The AMD Fusion Developer Summit 2011 is set to begin at 11:30am ET / 8:30am PT and promises to bring some interesting and forward-looking news about the future of AMD's APU technology.  We are going to cover the keynotes LIVE right here throughout the week, so if you want to know what is happening AS IT HAPPENS, stick around!!

Source: PCPer