Subject: Graphics Cards | April 18, 2013 - 04:15 PM | Jeremy Hellstrom
Tagged: asus, HD 7850 DirectCU II
With a custom cooler, one DisplayPort, two DVI-I connectors, and one HDMI connector, and requiring only a single 6-pin PCIe power connector, the ASUS HD 7850 DirectCU II is a great blend of efficiency and flexibility for those looking for a card costing around $200. On the other hand, if you have no plans to overclock the card, the GTX 660 that [H]ard|OCP compared this card to is slightly more powerful, costs the same, and is a better choice for those planning on running dual GPUs. Check out the overclocked performance of this HD 7850 in the full review.
"ASUS has refreshed its AMD Radeon HD 7850 DirectCU II video card with DirectCU and DIGI+ VRM with Super Alloy Power, poised to give you a robust video card with an improved overclocking experience. We will see whether this new revision brings new value to the Radeon HD 7850 GPU and we will compare it to the NVIDIA GeForce GTX 660."
Here are some more Graphics Card articles from around the web:
- PowerColor Radeon HD 7850 PCS+ Review @ OCC
- Asus Radeon HD 7790 DirectCU II OC 1GB @ eTeknix
- XFX Radeon HD 7790 Black Edition OC 1GB @ eTeknix
- XFX R7790 Black Edition OC @ LanOC Reviews
- Club3D Radeon HD 7790 13Series 1GB @ eTeknix
- GIGABYTE Radeon HD 7790 1GB & NVIDIA GeForce GTX 650 Ti BOOST 2GB Review @ Techgage
- ASUS Radeon HD 7790 DirectCU II OC Overclocked @ Tweaktown
- 23 AMD Radeon HD 7870 / 7950 and Nvidia GeForce GTX 660 / 660 Ti graphics card round-up @ Hardware.info
- AMD Radeon Gallium3D More Competitive With Catalyst On Linux @ Phoronix
- Diamond BizView 750 @ LanOC Reviews
- History of the GPU, Part 3: The Nvidia vs. ATI era begins @ Techspot
- History of the Modern Graphics Processor, Part 4: The GPGPU era arrives @ TechSpot
- iXBT Labs Review: i3DSpeed, March 2013
- ASUS GTX670 DirectCU Mini OC @ Hardware.info
- NVIDIA GeForce GTX 650 Ti Boost Roundup @ Hardware Canucks
- GTX 650 Ti Boost SLI @ LanOC Reviews
- Palit GeForce GTX 650 Ti Boost 2GB OC @ Tweaktown
- ZOTAC GeForce GTX 650 TI Boost Edition @ Modders-Inc
- EVGA GTX 650 Ti Boost SC 2 GB @ techPowerUp
- MSI GeForce GTX 650 Ti BOOST 2GB Twin Frozr Edition Review @ Hi Tech Legion
- MSI GTX 650Ti Boost Video Card Review @ Ninjalane
- Gainward GeForce GTX 650 Ti Boost GS Dual & GTX 660 GS Dual @ Legion Hardware
Subject: Graphics Cards | April 16, 2013 - 10:24 PM | Ryan Shrout
Tagged: nvidia, metro last light, Metro
Late this evening we got word from NVIDIA about an update to its game bundle program for GeForce GTX 600 series cards. Replacing the previously running Free to Play bundle, which included $50 in credit for each of World of Tanks, Hawken, and Planetside 2, NVIDIA is moving back to a AAA game with Metro: Last Light.
Metro: Last Light is the sequel to 2010's surprise hit Metro 2033, and I am personally really looking forward to the game and to seeing how it can stress PC hardware like the first did.
This bundle is only good for GTX 660 cards and above; the GTX 650 Ti sticks with the $75 Free to Play credit offer.
NVIDIA today announced that gamers who purchase an NVIDIA GeForce GTX 660 or above will also receive a copy of the highly anticipated Metro: Last Light, published by Deep Silver and the sequel to the multi-award-winning Metro 2033. Metro: Last Light will be available May 14, 2013 within the US and May 17, 2013 across Europe.
The deal is already up and running on Newegg.com but with the release date of Metro: Last Light set at May 14th, you'll have just about a month to wait before you can get your hands on it.
How do you think this compares to AMD's currently running bundle with Bioshock Infinite and more? Did NVIDIA step up its game this time around?
Subject: Graphics Cards | April 16, 2013 - 03:01 PM | Ryan Shrout
Tagged: vsync, stutter, smoothness, microstutter, frame rating, animation
We are running a poll in conjunction with our Frame Rating: Visual Effects of Vsync on Gaming Animation story that compares animation smoothness between fixed 30 FPS and 60 FPS captures and Vsync enabled versions.
If you haven't read the story linked above, these questions won't make any sense to you so please go read it and then stop back here to answer the polls!
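For a concrete sense of what is being compared, the frame times behind those capture rates, and the way vsync quantizes them on a 60 Hz display, work out as simple arithmetic. This is a minimal sketch; the refresh rate and the sample render time are illustrative assumptions, not figures from the article:

```python
import math

# With vsync enabled on a 60 Hz display, a finished frame can only be
# shown on a refresh boundary, so effective frame times snap up to
# multiples of the refresh interval (~16.7, ~33.3, ~50.0 ms, ...).
REFRESH_HZ = 60
REFRESH_MS = 1000 / REFRESH_HZ  # ~16.67 ms per refresh

def frame_time_ms(fps: float) -> float:
    """Time each frame is on screen at a fixed frame rate, in ms."""
    return 1000 / fps

def vsync_frame_time_ms(render_ms: float) -> float:
    """Smallest multiple of the refresh interval >= the render time."""
    return math.ceil(render_ms / REFRESH_MS) * REFRESH_MS

print(frame_time_ms(60))          # ~16.67 ms per frame at 60 FPS
print(frame_time_ms(30))          # ~33.33 ms per frame at 30 FPS
print(vsync_frame_time_ms(20.0))  # a 20 ms frame waits for the 2nd refresh (~33.33 ms)
```

This is why a game that renders just slower than 16.7 ms per frame can feel like it dropped straight from 60 FPS to 30 FPS with vsync on.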
Subject: Graphics Cards | April 15, 2013 - 03:34 PM | Tim Verry
Tagged: rumor, nvidia, kepler, gtx 700, geforce 700, computex
Recent rumors suggest that NVIDIA will release its desktop-class GeForce 700 series of graphics cards later this year. The new cards will reportedly be faster than the currently available GTX 600 series but will likely remain based on the company's Kepler architecture.
According to information presented during NVIDIA's GTC keynote, its Kepler architecture will dominate 2012 and 2013, with Maxwell-based cards following in 2014. Notably absent from the slides are product names, meaning the publicly available information at least leaves open the possibility of a refreshed Kepler GTX 700 lineup in 2013.
Fudzilla further reports that NVIDIA will release the cards as soon as May 2013, with an official launch as early as Computex. Having actual cards available for sale by Computex is a bit unlikely, but a summer launch could be possible if the new 700 series is merely a tweaked Kepler-based design with higher clocks and/or lower power usage. The company is rumored to be accelerating the launch of the GTX 700 series on the desktop in response to AMD's heavy game-bundle marketing, which seems to be working well at persuading gamers to choose the red team.
What do you make of this rumor? Do you think a refreshed Kepler is coming this year?
Subject: Graphics Cards | April 14, 2013 - 07:59 PM | Tim Verry
Tagged: windforce, nvidia, gtx titan, gtx 680, gpu cooler, gigabyte
Earlier this week, PC component manufacturer Gigabyte showed off its new graphics card cooler at its New Idea Tech Tour event in Berlin, Germany. The new triple-slot cooler is built for this generation's highest-end graphics cards. It is capable of cooling cards with up to 450W TDPs while keeping them cooler and quieter than reference heatsinks.
The Gigabyte WindForce 450W cooler is a triple-slot design that combines a large heatsink with three 80mm fans. The heatsink features two aluminum fin arrays connected to the GPU block by three 10mm copper heatpipes. Gigabyte stated during the reveal that its cooler keeps an NVIDIA GTX 680 graphics card 2°C cooler and 23.3 dB quieter during a Furmark benchmark run. Further, the cooler will allow high-end cards, like the GTX Titan, to achieve higher (stable) boost clocks.
ComputerBase.de was on hand at Gigabyte's event in Berlin to snap shots of the upcoming GPU cooler.
The company has not announced which graphics cards will use the new cooler or when it will be available, but a Gigabyte GTX 680 and a custom-cooled Titan seem likely candidates, considering both cards were mentioned in the examples given in the presentation. Note that NVIDIA has so far prohibited AIB partners from putting custom coolers on the Titan, but other rumored custom-cooled Titan graphics cards suggest that the company will allow custom-cooled Titans to be sold at retail at some point. In addition to the top-end NVIDIA cards, I think a GTX 670 or GTX 660 Ti using this cooler would also be great, as it would likely be one of the quieter options available (you could spin the three 80mm fans much slower than the single reference fan and still get the same temperatures).
What do you think about Gigabyte's new 450W GPU cooler? You can find more photos over at Computer Base (computerbase.de).
Subject: Editorial, General Tech, Graphics Cards | April 14, 2013 - 02:22 AM | Scott Michaud
Tagged: never settle, never settle reloaded, amd, far cry 3
So when AMD reloaded their Never Settle bundles, they left an extra round in the barrel.
Some of my favorite games were given to me in a bundle with some piece of computer hardware. You might remember from the PC Perspective game night that I am a major fan of the Unreal Tournament franchise. My first Unreal Tournament game was an unexpected bonus when I purchased my first standalone GPU. My 166MHz Pentium computer also came bundled with Mechwarrior 2 and Wipeout.
As we discussed, AMD considers bundle offers a way to keep the software industry rolling forward. The quantity and quality of games in the recent Never Settle bundles certainly deserve credit where it is due. Bioshock: Infinite is a game that just about every PC gamer needs to experience, and there are about a half-dozen other great titles included in the promotion, depending upon which card or cards you purchase.
As it turns out, AMD negotiated with Ubisoft and added Far Cry 3: Blood Dragon to their Never Settle bundle. The coolest part is that AMD will retroactively email codes for this new title to anyone who has redeemed a Never Settle: Reloaded code.
So if you have ever Reloaded your Never Settle in the past, check your email as apparently you can Never Settle your reloads again.
Subject: Graphics Cards | April 13, 2013 - 10:07 PM | Tim Verry
Tagged: radeon hd7790, powercolor, GCN, amd, 7790
PowerColor launched a new factory overclocked graphics card recently that is a revision of a previous model. The PowerColor HD7790 OC V2 is based on AMD’s Graphics Core Next (GCN) architecture and measures a mere 180 x 150 x 38mm.
The AMD Radeon HD 7790 GPU features 896 stream processors, 56 texture units, and 16 ROP units. The GPU is clocked at 1000 MHz base and 1030 MHz boost, while the 1GB of GDDR5 memory runs at the 6Gbps reference speed. PowerColor has fitted the overclocked card with an aluminum heatsink cooled by a single 8mm copper heatpipe and a 70mm fan.
The new card features two DL-DVI, one HDMI, and one DisplayPort video outputs. Its model number is AX7790-1GBD5-DHV2/OC. According to Guru3D, the new/revised card is priced at 120 pounds sterling. However, considering the currently available OC (non-V2) card is $150, the revised card is likely to come in around that price when it hits US retailers.
Also: If you have not already, read our latest Frame Rating article to see how the Radeon HD 7790 graphics card stacks up against the competition!
Subject: Graphics Cards | April 8, 2013 - 07:33 PM | Jeremy Hellstrom
Tagged: gk106, gtx660, asus, GTX 660 DirectCU II OC
Not everyone can afford to spend $400+ on a GPU in one shot, but sometimes they can manage it if the purchase is split into two. For those considering a multi-GPU setup, it has become obvious from Ryan's testing that NVIDIA is the way to go. The 660 Ti is a favourite, but even it might be too rich for some people's wallets, which is why it is nice to see ASUS offer its GTX 660 DirectCU II OC for $215 after MIR. [H]ard|OCP just put up a review of this card, covering its FPS performance both out of the box and after they pushed the base clock almost as high as the original boost clock. If you are on a limited GPU budget, you should check out the full review.
"ASUS has delivered a factory overclocked GeForce GTX 660 DirectCU II OC to our doorstep to run through the wringer. We match this ASUS video card up against AMD's Radeon HD 7870 GHz Edition and Radeon HD 7850 to see which will prevail in the battle of the mainstream cards. There are good values at this price point."
Here are some more Graphics Card articles from around the web:
- Radeon HD 7790 Crossfire vs. GeForce GTX 660 Ti @ Legion Hardware
- ASUS GTX 650 Ti Boost Direct CU II OC 2 GB @ techPowerUp
- NVIDIA GeForce GTX 650 Ti Boost Video Cards in SLI @ Tweaktown
- ASUS GTX 670 DirectCU Mini 2 GB @ techPowerUp
- NVIDIA GeForce GTX 650 Ti Boost SLI @ techPowerUp
- Gigabyte GeForce GTX Titan @ iXBT Labs
- Asus GeForce GTX 650 Ti Boost OC Edition 2GB @ eTeknix
- NVIDIA GeForce GTX 650 Ti Boost @ Tweaktown
- ZOTAC GeForce GTX 650 Ti Boost 2 GB @ techPowerUp
- Inside the second with Nvidia's frame capture tools @ The Tech Report
- Frame Capture and Analysis Tools Review @ OCC
- Frametime tests 2.0: our take on the latest developments @ Hardware.info
- Club3D Radeon HD 7790 13Series CrossFire @ eTeknix
- Sapphire Radeon HD 7750 1GB Low Profile @ Tweaktown
- PowerColor PCS+ AX7850 2GBD5-2DHPP @ Bjorn3D
- Gigabyte GV-R779OC-1GD Review @ Neoseeker
- Radeon HD 7790 vs. GeForce GTX 650 Ti BOOST Video Card Review @ Hardware Secrets
- Sapphire Radeon 7870 XT With Boost (Tahiti LE) Review & Bioshock Infinite Giveaway @ HCW
- ASUS Radeon HD 7870 DirectCU II Review @ Custom PC Review
- Sapphire HD7850 OC @ FunkyKit
Subject: Editorial, General Tech, Graphics Cards, Systems, Mobile | April 7, 2013 - 10:21 PM | Scott Michaud
Tagged: DirectX, DirectX 12
Microsoft DirectX is a series of interfaces that programmers typically utilize when designing gaming or entertainment applications. Over time it became synonymous with Direct3D, the portion that handles most graphics processing by offloading those tasks to the video card. At one point, DirectX even handled networking through DirectPlay, although that has been handled by Games for Windows Live and other APIs since Vista.
AMD Corporate Vice President Roy Taylor was recently interviewed by the German press, "c't magazin". When asked about the future of "Never Settle" bundles, Taylor claimed that games such as Crysis 3 and Bioshock: Infinite keep their consumers happy and also keep the industry innovating.
Keep in mind, the article was translated from German so I might not be entirely accurate with my understanding of his argument.
In a slight tangent, he discussed how new versions of DirectX tend to spur demand for new graphics processors with more processing power and more RAM. He has not heard anything about DirectX 12 and, in fact, does not believe there will be one. As such, he is turning to bundled games to keep the industry moving forward.
Neowin, upon seeing this interview, reached out to Microsoft who committed to future "innovation with DirectX".
This exchange has obviously sparked a lot of... polarized... online discussion. One camp claims that Microsoft is abandoning the PC to gain a foothold in the mobile market, of which it has practically zero share, and that this is why they are dropping DirectX.
Unfortunately this does not make sense: DirectX would be one of the main advantages Microsoft has in the mobile market. Mobile devices have access to fairly decent GPUs, which can use DirectX to draw web pages and applications much more smoothly and power-efficiently than their CPU counterparts. If anything, DirectX would only increase in relevance if Microsoft were blindly making a play for mobile.
The major threat to DirectX is still quite far off on the horizon. At some point we might begin to see C++ AMP or OpenCL nibble away at what DirectX does best: offloading highly parallel tasks to specialized processing units.
Still, releases such as DirectX 11.1 are quite focused on back-end tweaks and adjustments. What do you think a DirectX 12 API would even do, that would not already be possible with DirectX 11?
Subject: General Tech, Graphics Cards | April 5, 2013 - 11:48 AM | Ryan Shrout
Tagged: premiere pro, opencl, firepro, amd, Adobe
As we prepare for the NAB show (National Association of Broadcasters) this week, AMD and Adobe have released a fairly substantial news release concerning the future of Premiere Pro, Adobe's flagship professional video editing suite.
Earlier today Adobe revealed some of its next generation professional video and audio products, including the next version of Adobe® Premiere Pro. Basically Adobe is giving users a sneak peek at the new features coming to the next versions of its software. And we’ve decided to give you a sneak peek too, providing a look at how the next version of Premiere Pro performs when accelerated by AMD FirePro™ 3D workstation graphics and OpenCL™ versus Nvidia Quadro workstation graphics and CUDA.
This will be the first time that OpenCL is used as the primary rendering engine for Premiere, something AMD has been hoping to see for many years. Previous versions of the software integrated support for NVIDIA's CUDA GPGPU programming model, and the Mercury Playback Engine was truly industry-changing for video production. However, because it used CUDA, AMD users were left out of these performance improvements in favor of the proprietary NVIDIA software solution.
Adobe's next version of Premiere Pro (though we aren't told when that will be released) switches from CUDA to OpenCL and the performance of the AMD GCN architecture is being shown off by AMD today.
Using 4K TIFF 24-bit sequence content, Microsoft Windows® 7 64-bit, an Intel Xeon E5530 @ 2.40 GHz, and 12GB of system memory, AMD compared several FirePro graphics cards (using OpenCL) against NVIDIA Quadro options (using CUDA). Ideally we would like to see some OpenCL NVIDIA benchmarks as well, but I assume we'll have to wait to test that here at PC Perspective.
AMD also claims that by utilizing OpenCL rather than CUDA, the AMD FirePro GPUs are running at a lower utilization, opening up more graphics processing power for other applications and development work.
While this performance testing is conducted on a pre-release version of the next Adobe Premiere Pro, we’re really pleased with the results. As with all of the professional applications we support, we’ll continue to make driver optimizations for Adobe Premiere Pro that can only help to improve the overall user experience and application performance. So if you’re considering a GPU upgrade as part of your transition to the next version of Adobe Premiere Pro, definitely consider taking a look at AMD FirePro™ 3D workstation graphics cards.
You can continue on to read the full press release from AMD and Adobe on the collaboration or check out the complete blog post posted on AMD.com.
Subject: Graphics Cards | April 3, 2013 - 11:24 AM | Tim Verry
Tagged: nvidia, kepler, gtx 660 Ti, 660 ti
Two new photos recently popped up on Cowcotland, showing off an unreleased "Dragon Edition" GTX 660 Ti graphics card from ASUS. The new card boasts some impressive factory overclocks on both the GPU and memory as well as a beefy heatsink and a new blue and black color scheme.
The ASUS GTX 660 Ti Dragon will feature a custom cooler with two fans and an aluminum heatsink. The back of the card includes a metal backplate to secure the cooler and help dissipate a bit of heat itself; however, there is also a cutout in the backplate, likely to make room for additional power management circuitry. The card also features the company's power phase technology, NVIDIA's GK104 GPU, and 2GB of GDDR5 memory. The graphics core is reportedly clocked at 1150MHz (no word on whether that is the base or boost figure) while the memory is overclocked to 6100MHz. For comparison, the reference GTX 660 Ti clocks are 915MHz base, 980MHz boost, and 6,000MHz memory. The new card will support DVI, DisplayPort, and HDMI video outputs.
There is no word on pricing or availability, but the Dragon looks like it will be one of the fastest GTX 660 Ti cards available when (if?) it is publicly released!
Subject: Graphics Cards | April 3, 2013 - 10:14 AM | Tim Verry
Tagged: nvidia, mini-itx, gtx 670, GK104, directcu mini, asus
ASUS has finalized the design for its Kepler-based DirectCU Mini graphics card. The new card combines NVIDIA's GTX 670 GPU and reference PCB with ASUS' own power management technology and a new, much smaller, air cooler. The new ASUS cooler has allowed the company to offer a card that is a mere 17cm long. Compared to traditional GTX 670 graphics cards with coolers at approximately 24cm, the DirectCU Mini is noticeably smaller.
The DirectCU Mini features a GTX 670 GPU clocked at 928MHz base and 1,006MHz boost. It also has 2GB of GDDR5 memory on a 256-bit bus. The card requires a single 8-pin PCI-E power connector. Video outputs include two DVI, one DisplayPort, and a single HDMI port. The ASUS cooler includes a copper vapor chamber and a single CoolTech fan. According to ASUS, the DirectCU Mini is up to 20% cooler and slightly quieter than previous GTX 670 cards despite the smaller form factor.
This new card will be a great addition to Mini-ITX-based systems, where saving space any way possible is key. It is nice to know that gamers will soon have the option of powering a small form factor LAN box with a GPU as fast as the GTX 670. Even better, water cooling enthusiasts will be happy to know that the card still uses a reference PCB, meaning it is compatible with existing water blocks made for the current crop of GTX 670 cards.
Pricing and availability have not been announced, but the small form factor-friendly GPU is now official and should be coming sometime soon.
Subject: Graphics Cards | April 2, 2013 - 07:50 PM | Ryan Shrout
Tagged: video, tahiti, radeon, never settle reloaded, live, crysis, bioshock infinite, amd
UPDATE: If you missed the live stream...sorry, better luck next time! However, you can still view the on-demand version below to see the Bioshock Infinite game play!
On April 2nd on the PC Perspective Live! page we will be streaming some game action from Bioshock Infinite. Easily the most well-received and best-reviewed game of the year, it has me more excited than any other game we have streamed to date!
We will be teaming up with AMD once again to provide a fun and exciting PCPer Game Stream that includes game demonstrations and of course, prizes and game keys for those that watch the event LIVE!
Bioshock Infinite Game Stream
5pm PT / 8pm ET - April 2nd
Warning: this one will DEFINITELY have mature language and content!!
The stream will be sponsored by AMD and its Never Settle Reloaded game bundles which we previously told you about. Depending on the AMD Radeon HD 7000 series GPU that you buy, you could get some amazing free games including:
Radeon HD 7900 Series
- FREE Crysis 3
- FREE Bioshock Infinite
Radeon HD 7800 Series
- FREE Bioshock Infinite
- FREE Tomb Raider
Radeon HD 7900 CrossFire Set
- FREE Crysis 3
- FREE Bioshock Infinite
- FREE Tomb Raider
- FREE Far Cry 3
- FREE Hitman: Absolution
- FREE Sleeping Dogs
AMD's Robert Hallock (@Thracks on Twitter) will be joining us via Skype to talk about the game's technology and performance considerations, as well as to help me with some co-op gaming!
Of course, just to sweeten the deal a bit we have some prizes lined up for those of you that participate in our Bioshock Infinite Game Stream:
- 1 x Gigabyte Radeon HD 7870 OC 2GB card
- 1 x MSI Radeon HD 7870 2GB card
- 3 x Combo codes for both Tomb Raider AND Bioshock Infinite
Pretty nice, huh? All you have to do to win is be present on the PC Perspective Live! page during the event, as we will announce both the contest/sweepstakes method AND the winners!
Stop in on April 2nd for some PC gaming fun!!
Subject: Graphics Cards | March 31, 2013 - 03:06 AM | Tim Verry
Tagged: GDC 13, sky 900, sky 700, sky 500, RapidFire, radeon sky, GCN, cloud gaming, amd
Earlier this week, AMD announced a new series of Radeon-branded cards–called Radeon Sky–aimed at the cloud gaming market. At the time, details on the cards were scarce, apart from the fact that they would use latency-reduction "secret sauce" tech called RapidFire and that the highest-end model would be the Radeon Sky 900. Thankfully, gamers will not have to wait until AFDS after all, as AMD has posted additional information and specifications to its website. At this point, pricing and the underlying details of RapidFire are the only aspects still unknown.
According to the AMD site, the company will release three Radeon Sky cards later this year, called Sky 500, Sky 700, and Sky 900. All three cards are passively cooled with aluminum fin heatsinks and are based on AMD's Graphics Core Next (GCN) architecture. At the high end is the Sky 900, which is a dual Tahiti graphics card clocked at 825 MHz. The Sky 900 features 1,792 stream processors per GPU for a total of 3,584. The card further features 3GB of GDDR5 RAM per GPU on a 384-bit interface for a total memory bandwidth of 480GB/s. AMD claims this dual-slot card draws up to 300W while under load. In many respects the Sky 900 is the Radeon equivalent of the company's professional FirePro S10000 graphics card. It has similar hardware specifications (including the 5.91 TFLOPS of single precision performance potential) but a higher TDP. The S10000 sells for $3,599, though whether AMD will price the gaming-oriented Sky 900 similarly is unknown.
The Sky 700 steps down to a single-GPU graphics card. This card features a single Tahiti GPU clocked at 900 MHz with 1792 stream processors and 6GB of GDDR5. The graphics card memory uses a 384-bit memory interface for a total memory bandwidth of 264GB/s. Although the Sky 700 is also a dual-slot card like the Sky 900, its cooler is smaller and it draws only 225W under load.
Finally, the Sky 500 represents the low end of the company's cloud gaming hardware lineup. It is the Radeon Sky equivalent to the company's consumer-grade Radeon HD 7870. The Sky 500 features a single Pitcairn GPU clocked at 950 MHz with 1280 stream processors, 4GB of GDDR5 on a 256-bit memory bus, and a rated 150W power draw under load. It further features 154GB/s of memory bandwidth and is a single slot graphics card.
|                   | Sky 900                 | Sky 700       | Sky 500         |
|-------------------|-------------------------|---------------|-----------------|
| GPU(s)            | Dual Tahiti             | Single Tahiti | Single Pitcairn |
| GPU Clockspeed    | 825 MHz                 | 900 MHz       | 950 MHz         |
| Stream Processors | 3584 (1792 per GPU)     | 1792          | 1280            |
| Memory            | 6GB GDDR5 (3GB per GPU) | 6GB GDDR5     | 4GB GDDR5       |
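The bandwidth figures quoted for the Sky cards follow directly from bus width and effective memory data rate. A quick sketch, where the per-card data rates are inferred from AMD's quoted bandwidth numbers rather than officially published:

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8 bytes) * data rate in GT/s.
# The effective data rates below are back-calculated from the bandwidth
# figures AMD quotes for each card; they are not official specifications.
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak memory bandwidth for one GPU, in GB/s."""
    return bus_width_bits / 8 * data_rate_gtps

print(memory_bandwidth_gbs(384, 5.5))      # Sky 700: 384-bit at 5.5 GT/s -> 264.0
print(2 * memory_bandwidth_gbs(384, 5.0))  # Sky 900: two GPUs at 5.0 GT/s -> 480.0 total
print(memory_bandwidth_gbs(256, 4.8))      # Sky 500: 256-bit at 4.8 GT/s -> ~154
```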
Additionally, the Radeon Sky cards all employ a technology called RapidFire that allegedly reduces latency immensely. As Ryan mentioned on the latest PC Perspective Podcast, the Radeon Sky cards are able to stream up to six games. RapidFire is still a mystery, but the company has indicated that one aspect of it is the use of AMD's Video Coding Engine (VCE) to encode the video stream on the GPU itself to reduce game latency. The Sky cards will output at 720p resolutions, and the Sky 700 can support either three games at 60 FPS or six games at 30 FPS.
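Notably, those two session configurations demand exactly the same total encoder throughput, which hints that a fixed VCE encode budget is being divided across streams. That is my inference, not something AMD has stated:

```python
# Total pixels per second the on-GPU encoder must process for each
# advertised Sky 700 configuration at 720p. The equality below suggests
# a fixed encode budget split across sessions (an inference only).
W, H = 1280, 720  # 720p frame dimensions

def encode_px_per_s(streams: int, fps: int) -> int:
    """Aggregate encoder throughput in pixels per second."""
    return W * H * fps * streams

print(encode_px_per_s(3, 60))  # three games at 60 FPS -> 165,888,000 px/s
print(encode_px_per_s(6, 30))  # six games at 30 FPS  -> 165,888,000 px/s
```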
In addition to working with cloud gaming companies Ubitus, G-Cluster, CiiNow, and Otoy, AMD has announced a partnership with VMware and Citrix. AMD is reportedly working to allow VMware ESX/ESXi and Citrix XenServer virtual machines to access the GPU hardware directly, which opens up the possibility of using Sky cards to run workstation applications or remote desktops with 3D support, much like NVIDIA's VCA and GRID technology (which that company showed off at GTC last week). Personally, I think the Sky cards may be late to the party, but they are a step in the right direction. Even if cloud gaming doesn't take off, the cards could still be used to great success by enterprise customers if AMD is able to allow direct access to the full graphics card hardware from within virtual machines!
More information on the Radeon Sky cards can be found on the AMD website.
Subject: General Tech, Graphics Cards | March 27, 2013 - 08:16 PM | Tim Verry
Tagged: sky graphics, sky 900, RapidFire, radeon sky, pc gaming, GDC, cloud gaming, ciinow, amd
AMD is making a new push into cloud gaming with a new series of Radeon graphics cards called Sky. The new cards feature a (mysterious) technology called "RapidFire" that allegedly provides "highly efficient and responsive game streaming" from servers to your various computing devices (tablets, PCs, Smart TVs) over the Internet. At this year's Games Developers Conference (GDC), the company announced that it is working with a number of existing cloud gaming companies to provide hardware and drivers to reduce latency.
AMD is working with Otoy, G-Cluster, Ubitus, and CiiNow. CiiNow in particular was heavily discussed by AMD, and can reportedly provide lower latency than cloud gaming competitor Gaikai. AMD Sky is, in many ways, similar in scope to NVIDIA's GRID technology which was announced last year and shown off at GTC last week. Obviously, that has given NVIDIA a head start, but it is difficult to say how AMD's technology will stack up as the company is not yet providing any specifics. Joystiq was able to obtain information on the high-end Radeon Sky graphics card, however (that's something at least...). The Sky 900 reportedly features 3,584 stream processors, 6GB of GDDR5 RAM, and 480 GB/s of bandwidth. Further, AMD has indicated that the new Radeon Sky cards will be based on the company's Graphics Core Next architecture.
I think it is safe to assume that the Sky cards will be sold to other cloud gaming companies. They will not be consumer cards, and AMD is not going to get into the cloud gaming business itself. Beyond that, AMD's Sky cloud gaming initiative is still a mystery. Hopefully more details will filter out between now and the AMD Fusion Developer Summit this summer.
Subject: Graphics Cards | March 26, 2013 - 07:41 PM | Jeremy Hellstrom
Tagged: nvidia, hd 7790, gtx 650 ti boost, gtx 650 Ti, gpu boost, gk106
Why Boost, you may ask? If you guessed that NVIDIA added its new Boost Clock feature to the card, you should win a prize, as that is exactly what makes the GTX 650 Ti Boost special. With a core GPU speed of 980MHz, boosting to 1033MHz and beyond, this card is actually aimed at competing with AMD's HD 7850, not the newly released HD 7790; at least the 2GB model is. Along with the boost in clock comes a wider memory pipeline and a corresponding increase in ROPs. The 2GB model should be about $170, right on the cusp between value and mid-range, but is the price of admission worth it? Get a look at the performance at [H]ard|OCP.
"NVIDIA is launching the GeForce GTX 650 Ti Boost today. This video card is priced in the $149-$169 price range, and should give the $150 price segment another shakedown. Does it compare to the Radeon HD 7790, or is it on the level of the more expensive Radeon HD 7850? We will find out in today's latest games, you may be surprised."
Here are some more Graphics Card articles from around the web:
- Nvidia's GeForce GTX 650 Ti Boost @ The Tech Report
- Nvidia GTX 650 Ti Boost 2GB @ LanOC Reviews
- NVIDIA and EVGA GeForce GTX 650 Ti BOOST Video Card Review @ Legit Reviews
- NVIDIA GeForce GTX 650Ti Boost Review @ OCC
- Nvidia GeForce GTX 650 Ti Boost @ Hardware.info
- Nvidia GeForce GTX 650 Ti Boost @ Bjorn3D
- NVIDIA Geforce GTX 650Ti Boost 2GB Edition Review @Hi Tech Legion
- EVGA GTX 650Ti BOOST 2GB Superclocked Review @Hi Tech Legion
- NVIDIA GeForce GTX 650 Ti Boost 2GB @ Tweaktown
- NVIDIA GeForce GTX 650 Ti Boost 2 GB @ techPowerUp
- NVIDIA GeForce GTX 650 Ti BOOST @ Benchmark Reviews
- NVIDIA GTX 650 Ti Boost 2GB Review @ Hardware Canucks
- NVIDIA Chips Comparison Table @ Hardware Secrets
- AMD ATI Chips Comparison Table @ Hardware Secrets
- Workstation Graphics Card Comparison Guide @ TechARP
- PowerColor Radeon HD 7790 Turbo Duo Review @ OCC
- PowerColor HD 7790 Turbo Duo 1 GB @ techPowerUp
- Sapphire HD7950 MAC Edition @ Kitguru
Subject: Graphics Cards | March 22, 2013 - 01:56 PM | Jeremy Hellstrom
Tagged: hd 7790, graphics core next, GCN, ea Islands, bonaire, amd
AMD is trying to fill a gap in its product line between the just-under-$200 HD 7850 and the ~$120 HD 7770 with a $150 card, the HD 7790. The naming scheme implies two GPUs, but this is not the case; it is a single Bonaire GCN chip with 896 stream processors, 56 texture units, and an impressive compute throughput of up to 1.79 TFLOPS thanks to some optimization of the GCN architecture. It has 1GB of GDDR5 at 6GHz effective and a GPU clock speed dependent on the model; in [H]ard|OCP's case, the ASUS Radeon HD 7790 DirectCU II OC runs at 1.075GHz. [H] gave it a Silver Award for being a vast improvement over the 7770 and good competition for the GTX 650 Ti, but feels the card needs to be faster.
This card also makes an appearance on our front page, with a lot of Frame Rating charts so you can see not only the raw FPS data you are used to, but also an in-depth look at how the game is going to 'feel' while you play.
"AMD is launching the Radeon HD 7790 today. This new video card should give the sub-$200 video card segment a kick in the pants. Will it provide enough performance for today's latest games at $149? We will find out, testing the new ASUS Radeon HD 7790 DirectCU II OC with no less than six of today's hottest games."
Here are some more Graphics Card articles from around the web:
- AMD's Radeon HD 7790 @ The Tech Report
- AMD Radeon HD 7790 review (incl. frametimes) @ Hardware.info
- AMD Radeon HD 7790 @ TechSpot
- AMD Radeon HD 7790 Review @ Hardware Canucks
- Sapphire Radeon HD 7790 Dual-X 1GB OC @ eTeknix
- Sapphire Radeon HD 7790 1GB Dual-X OC @ Tweaktown
- Sapphire HD 7790 1GB Graphics Card @ Bjorn3D
- Sapphire Radeon HD 7790 Dual-X OC Review @ OCC
- Sapphire HD 7790 Dual-X OC Video Card Review @ Hi Tech Legion
- AMD Radeon HD 7790 CrossFire @ techPowerUp
- ASUS HD 7790 DirectCU II OC @ Overclockers.com
- Sapphire HD 7790 Dual-X 1 GB @ techPowerUp
- AMD Radeon HD 7790 Video Card Review w/ Gigabyte & Sapphire @ Legit Reviews
- ASUS HD 7790 Direct CU II OC 1 GB @ techPowerUp
- Sapphire HD7790 OC @ Kitguru
- PowerColor PCS+ HD 7850 Radeon Graphic Card Review @ Pro-Clockers
- HIS Radeon HD 7850 iPower IceQ Turbo 4GB Video Card in CrossFire @ Tweaktown
- HIS Radeon HD 7770 iCooler 1GB Overclocked @ Tweaktown
- Mid-Range AMD Graphics Card Round-Up (HIS 7770 GHz / HIS 7850 / Sapphire 7850) @ Kitguru
- PowerColor PCS HD7870 MYST Video Card Review @ Legit Reviews
GTC 2013: Cortexica Vision Systems Talks About the Future of Image Recognition During the Emerging Companies Summit
Subject: General Tech, Graphics Cards | March 20, 2013 - 09:44 PM | Tim Verry
Tagged: video fingerprinting, image recognition, GTC 2013, gpgpu, cortexica, cloud computing
The Emerging Companies Summit is a series of sessions at NVIDIA's GPU Technology Conference (GTC) that gives the floor to CEOs from several up-and-coming technology startups. Earlier today, the CEO of Cortexica Vision Systems took the stage to talk briefly about the company's products and future direction, and to answer questions from a panel of industry experts.
If you tuned into NVIDIA's keynote presentation yesterday, you may have noticed the company showing off a new image recognition technology. That technology is being developed by a company called Cortexica Vision Systems. While it cannot perform facial recognition, it is capable of identifying everything else, according to the company's CEO, Ian McCready. Currently, Cortexica is employing a cluster of approximately 70 NVIDIA graphics cards, but it is capable of scaling beyond that. McCready estimates that about 100 GPUs and a CPU would be required by a company like eBay, should it want to implement Cortexica's image recognition technology in-house.
The Cortexica technology uses images captured by a camera (such as the one in your smartphone), which is then sent to Cortexica's servers for processing. The GPUs in the Cortexica cluster handle the fingerprint creation task while the CPU does the actual lookup in the database of known fingerprints to either find an exact match, or return similar image results. According to Cortexica, the fingerprint creation takes only 100ms, though as more powerful GPUs make it into mobile devices, it may be possible to do the fingerprint creation on the device itself, reducing the time between taking a photo and getting relevant results back.
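The split described above can be sketched in a few lines of code. Cortexica's actual fingerprinting algorithms are proprietary and unpublished, so this is a hypothetical stand-in: a simple difference hash plays the role of the GPU-side fingerprint creation, and a Hamming-distance search plays the role of the CPU-side database lookup.

```python
# Hypothetical sketch of a fingerprint-then-lookup pipeline. A difference
# hash ("dHash") stands in for Cortexica's fingerprint step; a linear
# Hamming-distance scan stands in for the database lookup.

def dhash(pixels):
    """Fingerprint a grayscale image (2D list of 0-255 values) as a bit list.

    Each bit records whether a pixel is brighter than its right-hand
    neighbour, which makes the fingerprint robust to uniform changes in
    brightness (camera exposure, lighting).
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

def lookup(query, database, max_distance=2):
    """Return the ID of the closest stored fingerprint, or None if nothing
    in the database is similar enough to the query."""
    best_id, best_dist = None, max_distance + 1
    for image_id, fingerprint in database.items():
        d = hamming(query, fingerprint)
        if d < best_dist:
            best_id, best_dist = image_id, d
    return best_id
```

In the real system the `dhash` step would run on the GPU cluster for throughput, while `lookup` runs on the CPU against the database of known fingerprints, returning either an exact match or similar images.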
The image recognition technology is currently being used by eBay Motors in the US, UK, and Germany. Cortexica hopes to find a home with fashion companies that would use the technology to let people identify, and ultimately purchase, clothing they photograph on television or in public. The technology can also perform 360-degree object recognition, identify logos as small as 0.4% of the screen, and identify videos. In the future Cortexica hopes to reduce latency, improve recognition accuracy, and add more search categories. Cortexica is also working on enabling an "always on" mobile device that will constantly be identifying everything around it, which is both cool and a bit creepy. With mobile chips like Logan and Parker coming in the future, Cortexica hopes to be able to do on-device image recognition, which would greatly reduce latency and allow the recognition technology to be used while not connected to the internet.
The number of photos taken is growing rapidly; as many as 10% of all photos stored "in the cloud" were taken last year alone. Even Facebook, with its massive data centers, is moving to a cold-storage approach to save on the electricity costs of storing and serving up those photos. And while some of these photos have relevant metadata, the majority do not; Cortexica claims its technology can get around that issue by identifying photos, as well as finding similar photos, using its algorithms.
Stay tuned to PC Perspective for more GTC coverage!
Additional slides are available after the break:
Subject: General Tech, Graphics Cards | March 20, 2013 - 01:47 PM | Tim Verry
Tagged: tesla, tegra 3, supercomputer, pedraforca, nvidia, GTC 2013, GTC, graphics cards, data centers
There is a lot of talk about heterogeneous computing at GTC, in the sense of adding graphics cards to servers. If you have HPC workloads that can benefit from GPU parallelism, adding GPUs gives you computing performance in less physical space, and using less power, than a CPU only cluster (for equivalent TFLOPS).
However, there was a session at GTC that actually took things to the opposite extreme. Instead of a CPU only cluster or a mixed cluster, Alex Ramirez (leader of Heterogeneous Architectures Group at Barcelona Supercomputing Center) is proposing a homogeneous GPU cluster called Pedraforca.
Pedraforca V2 combines NVIDIA Tesla GPUs with low power ARM processors. Each node is comprised of the following components:
- 1 x Mini-ITX carrier board
- 1 x Q7 module (which hosts the ARM SoC and memory)
  - Current config is one Tegra 3 @ 1.3GHz and 2GB DDR2
- 1 x NVIDIA Tesla K20 accelerator card (1170 GFLOPS)
- 1 x InfiniBand 40Gb/s card (Mellanox ConnectX-3)
- 1 x 2.5" SSD (SATA 3 MLC, 250GB)
The ARM processor is used solely for booting the system and facilitating GPU communication between nodes. It is not intended to be used for computing. According to Dr. Ramirez, in situations where running code on a CPU would be faster, it would be best to have a small number of Intel Xeon powered nodes to do the CPU-favorable computing, and then offload the parallel workloads to the GPU cluster over the InfiniBand connection (though this is less than ideal; Pedraforca is most efficient with datasets that can be processed solely on the Tesla cards).
While Pedraforca is not necessarily locked to NVIDIA's Tegra hardware, it is currently the only SoC that meets their needs. The system requires the ARM chip to have PCI-E support. The Tegra 3 SoC has four PCI-E lanes, so the carrier board is using two PLX chips to allow the Tesla and InfiniBand cards to both be connected.
The researcher stated that he is also looking forward to using NVIDIA's upcoming Logan processor in the Pedraforca cluster. It will reportedly be possible to upgrade existing Pedraforca clusters with the new chips by replacing the existing (Tegra 3) Q7 module with one that has the Logan SoC when it is released.
Pedraforca V2 has an initial cluster size of 64 nodes. The speaker was reluctant to provide TFLOPS performance numbers, as they would depend on the workload, but with 64 Tesla K20 cards it should provide respectable performance. The intent of the cluster is to save power costs by using a low power CPU. If your kernel and applications can run on the GPUs alone, there are noticeable power savings to be had by switching from a ~100W Intel Xeon chip to a lower-power (approximately 2-3W) Tegra 3 processor. If you have a kernel that needs to run on a CPU, it is recommended to run the OS on an Intel server and transfer just the GPU work to the Pedraforca cluster. Each Pedraforca node is reportedly under 300W, with the Tesla card accounting for the majority of that figure. Despite the limitations, and the niche nature of the workloads and software needed to get the full power-saving benefits, Pedraforca is certainly an interesting take on a homogeneous server cluster!
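Using the rough figures quoted above (~100W for a Xeon, 2-3W for a Tegra 3, 64 nodes), the host-CPU power savings are easy to estimate as a back-of-the-envelope calculation; actual draw will of course vary with load:

```python
# Back-of-the-envelope host-CPU power savings for a 64-node Pedraforca
# cluster, using the approximate figures quoted above.
xeon_watts = 100    # approximate draw of a server Xeon
tegra_watts = 3     # high end of the quoted 2-3W Tegra 3 range
nodes = 64

savings_per_node = xeon_watts - tegra_watts           # 97 W per node
cluster_savings_kw = savings_per_node * nodes / 1000  # roughly 6.2 kW

print(f"~{cluster_savings_kw:.1f} kW saved on host CPUs across {nodes} nodes")
```

Around 6 kW saved on host processors alone is significant, though it is still small next to the ~300W-per-node budget dominated by the Tesla cards, which is why the design only pays off when the workload truly lives on the GPUs.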
In another session, on the path to exascale computing, data center power use was listed as one of the biggest hurdles to reaching exaflop-level performance. While Pedraforca is not the answer to exascale, it should at least be a useful learning experience in wringing the most parallelism out of code and pushing GPGPU to its limits, and that research will help other clusters use GPUs more efficiently as researchers explore the future of computing.
The Pedraforca project built upon research conducted on Tibidabo, a multi-core ARM CPU cluster, and CARMA (CUDA on ARM development kit) which is a Tegra SoC paired with an NVIDIA Quadro card. The two slides below show CARMA benchmarks and a Tibidabo cluster (click on image for larger version).
Stay tuned to PC Perspective for more GTC 2013 coverage!
Subject: General Tech, Graphics Cards | March 19, 2013 - 06:52 PM | Tim Verry
Tagged: GTC 2013, tyan, HPC, servers, tesla, kepler, nvidia
Server platform manufacturer TYAN is showing off several of its latest servers aimed at the high performance computing (HPC) market. The new servers range in size from 2U to 4U chassis and hold up to 8 Kepler-based Tesla accelerator cards. The new product lineup consists of two motherboards and three bare-bones systems: the S7055 and S7056 are the motherboards, while the FT77-B7059, TA77-B7061, and FT48-B7055 are the bare-bones systems.
The TA77-B7061 is the smallest system, with support for two Intel Xeon E5-2600 processors and four Kepler-based Tesla accelerator cards. The FT48-B7055 has similar specifications but is housed in a 4U chassis. Finally, the FT77-B7059 is a 4U system with support for two Intel Xeon E5-2600 processors and up to eight Tesla accelerator cards. The S7055 supports a maximum of 4 GPUs while the S7056 can support two Tesla cards, though these are bare boards so you will have to supply your own cards, processors, and RAM (of course).
According to TYAN, the new Kepler-based HPC systems will be available in Q2 2013, though there is no word on pricing yet.
Stay tuned to PC Perspective for further GTC 2013 coverage!
Get notified when we go live!