Subject: Graphics Cards, Processors | January 23, 2013 - 02:42 PM | Ryan Shrout
Tagged: southern islands, sony, ps4, playstation 4, orbis, Kaveri, bulldozer, APU, amd
Earlier today a report from Kotaku.com posted some details about the upcoming PlayStation console, code-named Orbis and sometimes just called the PS4. Kotaku author Luke Plunkett got the information from a 90-page PDF that details the development kit, so the information is likely accurate, if incomplete. It discusses a new controller and a completely new account system, but I was mostly interested in the hardware details given.
We'll begin with the specs. And before we go any further, know that these are current specs for a PS4 development kit, not the final retail console itself. So while the general gist of the things you see here may be similar to what makes it into the actual commercial hardware, there's every chance some—if not all of it—changes, if only slightly.
This is key to keep in mind because here are the specs listed on the report:
- 8GB of system memory
- 2.2GB of graphics memory
- 4 module (8 core) AMD Bulldozer CPU
- AMD "R10xx" based GPU
- 4x USB 3.0 ports and 2x Ethernet connections
- Blu-ray drive
- 160GB HDD
- HDMI and optical audio output
We are essentially talking about an AMD FX-series processor with a Southern Islands-based discrete card, and I am nearly certain this will not match the configuration of the shipping system. Think about it - would a console maker really want a processor that draws more than 100 watts inside its box, in addition to a discrete GPU? I doubt it.
Instead, let's go with the idea that this development kit is simply meant to emulate the final specifications. More than likely we are looking at an APU solution that combines Bulldozer or Steamroller cores with GCN-based GPU SIMD arrays. The most likely candidate is Kaveri, a 28 nm product that meets both of those requirements; Josh recently discussed the future of Kaveri in a post during CES that is worth checking out. AMD has told us several times that Kaveri should be able to hit the 1.0 TFLOPS level of performance, which, compared to current discrete GPUs, would mean graphics performance similar to that of an underclocked Radeon HD 7770.
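As a rough sanity check on that 1.0 TFLOPS figure: peak single-precision throughput for a GCN part is two FLOPs (one fused multiply-add) per shader per clock. The HD 7770 shader count and clock below are the public retail numbers; the underclocked figure is our own back-of-envelope assumption, not an AMD spec.

```python
# Peak single-precision throughput: 2 FLOPs (one FMA) per shader per clock.
def peak_tflops(shaders, clock_mhz):
    return 2 * shaders * clock_mhz * 1e6 / 1e12

# Radeon HD 7770: 640 GCN shaders at 1000 MHz.
print(peak_tflops(640, 1000))  # -> 1.28 TFLOPS

# The same 640 shaders underclocked to ~780 MHz land right at the
# ~1.0 TFLOPS figure AMD has quoted for Kaveri (our estimate, not AMD's).
print(peak_tflops(640, 780))   # -> 0.9984 TFLOPS
```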
There is some room for doubt, though - Kaveri isn't supposed to be out until "late Q4," though it's possible that the PS4 will be its first customer. It is also possible that AMD is building a dedicated discrete GPU for the PS4 based on the GCN architecture that would be faster than the graphics performance expected from the Kaveri APU.
When speaking with our own Josh Walrath about this rumor, he tended to think that Sony and AMD would not use an APU but would instead combine a separate CPU and GPU on a single substrate, allowing for better yields than a combined APU part. To make up for the slower memory controller interface (on-substrate is not as fast as on-die), AMD might again utilize backside cache, just like the one used on the Xbox 360 today. With process technology improvements, it's not unthinkable to see that jump to 30 or 40MB of cache.
With the debate of a 2013 or 2014 release still up in the air, there is plenty of time for this to change still but we will likely know for sure after our next trip to Taipei.
Subject: Graphics Cards | January 22, 2013 - 03:47 PM | Jeremy Hellstrom
Tagged: nvidia, asus, GTX 670 DirectCU II 4GB, sli
When they first tried ASUS' new GTX 670 DirectCU II with 4GB of memory on its own, [H]ard|OCP had difficulty recommending it over a 7970, but they planned to try two cards in SLI to see if that would improve the comparative performance. The competitors are a pair of 2GB 670s, a pair of 3GB HD 7970s, a pair of 2GB 680s and, of course, two 4GB 670s, all powering a system at 5760x1200. Unfortunately, the quote from the conclusion spells out the results: "It's like putting beefy off-road tires on a Yugo." While the extra memory will let you use some higher graphics settings, overall you are still better off with HD 7970s or GTX 680s.
"We review two ASUS GeForce GTX 670 DirectCU II 4GB video cards in SLI under NV Surround resolutions. We'll answer the question as to the value and validity of 4GB of RAM on a GeForce GTX 670 GPU video card in SLI. Far Cry 3, Hitman Absolution, and all our other games will be taken to the extreme to get to the bottom of 4GB GTX 670 cards."
Here are some more Graphics Card articles from around the web:
- Nvidia Quadro K5000 Professional @ X-bit Labs
- Five-Way NVIDIA GeForce Comparison On Nouveau @ Phoronix
- Desktop Graphics Card Comparison @ TechARP
- Sapphire TRIXX Video Card Tweak Utility Overview @ Tweaktown
- HIS Radeon HD 7970 3GB IceQ X2 Overclocked @ Tweaktown
- VTX3D HD 7870 Black Edition 2 GB @ techPowerUp
- HIS HD 7970 IceQ X² & HD 7950 IceQ X² Review @ Hardware Canucks
- Sapphire HD7870 W/ Boost @ Kitguru
- PowerColor Radeon HD 7950 3GB PCS Overclocked @ Tweaktown
Subject: Graphics Cards | January 22, 2013 - 02:44 PM | Ryan Shrout
Tagged: nvidia, geforce, gk110, titan, rumor
A combination of rumors and news pieces found online and in some recent conversations with partners indicates that February will see the release of a new super-high-end graphics card from NVIDIA based on the GK110 GPU. Apparently using the name "Titan" based on a report from Sweclockers.com, this new single GPU card will feature 2688 CUDA cores, compared to the 1536 in the GeForce GTX 680.
If true, the name Titan likely refers to the Cray supercomputer of the same name, built using GK110-based Kepler Tesla cards. Sweclockers.com's sources also quote clocks for this new super-GPU: a 732 MHz core clock and a 5.2 GHz effective GDDR5 memory clock. While those numbers are low compared to the 1000+ MHz speeds of today's GK104 parts, this GPU would have 75% more compute units and presumably additional memory capacity as well. The 384-bit memory bus is a 50% increase over the GTX 680's 256-bit bus, which would indicate another big jump in performance over current cards. The CUDA core count of 2688 is indicative of a GK110 GPU with a single SMX disabled.
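Running quick arithmetic on the rumored numbers shows why the lower clock doesn't worry us. The GTX 680 figures below are its published specs; everything else follows directly from the rumored Titan numbers above:

```python
# Peak single-precision throughput: 2 FLOPs (one FMA) per CUDA core per clock.
def peak_tflops(cores, clock_mhz):
    return 2 * cores * clock_mhz * 1e6 / 1e12

# GDDR5 bandwidth: bus width (in bytes) times effective data rate.
def bandwidth_gbps(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

titan  = peak_tflops(2688, 732)    # ~3.94 TFLOPS (rumored specs)
gtx680 = peak_tflops(1536, 1006)   # ~3.09 TFLOPS (published specs)
print(titan / gtx680)              # ~1.27x, despite the lower core clock

print(bandwidth_gbps(384, 5200))   # ~249.6 GB/s vs ~192 GB/s on the GTX 680
```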
The NVIDIA Titan card will apparently be the replacement for the GeForce GTX 690, a dual-GK104 card launched in May of last year. The performance estimate for the Titan is approximately 85% of that GTX 690 and if the rumors are right it would see an $899 price tag.
Based on other conversations I have had recently, you should only expect the same partners that were able to sell the GTX 690 to stock this new GK110-based part. There won't be any modifications, and you will see very little differentiation in vendors' branding. If the dates are to be believed, a February 25th launch (or at least that week) is the current target.
Subject: Graphics Cards | January 22, 2013 - 01:02 PM | PCPer Staff
World of Warcraft: Mists of Pandaria Collector's Edition (PC/Mac) for $59.99 with free shipping (normally $80 - use coupon code: EMCXWVV99).
HP ENVY 4-1130us Core i5 14" Ultrabook (Quickship) for $549.99 with free shipping (normally $849.99 - use coupon code: SNOW100).
Dell S2740L 27" 1080p LED-backlit LCD Monitor for $278.99 with Free Shipping (normally $370 - use coupon code: DMFVW8?GQ27MDX).
55" LG 55LM4600 1080p 3D 120Hz LED HDTV for $849.99 with free shipping (normally $1,400).
46" Seiki SE461TS 1080p LCD HDTV for $398.00 (normally $500).
Pioneer SW-8 100-Watt Powered Subwoofer for $69.99 with free shipping (normally $160 - use coupon code: EMCXWVV75).
Zadro Personal Sunlight 365 Therapy for $59.99 plus free shipping (normally $77.99).
Shure Craft Kit (6 options) via Groupon for $8.00 (normally $19.99).
Subject: Graphics Cards | January 19, 2013 - 01:26 AM | Tim Verry
Tagged: water cooler, sealed loop, ROG ARES II, gpu cooler, asus, amd, 7970 ghz edition
ASUS has taken the wraps off of a new dual GPU graphics card that comes equipped with a sealed loop liquid cooler to keep the two overclocked 7970 GHz Edition GPUs frosty. The new ROG ARES II is a limited edition card that pairs the ARES II GPU with an Asetek-based cooler and rounds out the top-end of the company’s Republic of Gamers lineup.
The card itself features two AMD Radeon 7970 GHz Edition GPUs clocked at 1050 MHz base and 1100 MHz boost, 6GB of GDDR5 memory clocked at 1650 MHz, and ASUS’ DIGI+ 20-phase VRM with “Super Alloy Power” hardware. The ROG ARES II has a 500W TDP and uses three 8-pin PCI-E power connectors. The card measures 11.8” x 5.5” x 1.8”, not including the radiator.
The ROG ARES II includes one DVI-I, one DVI-D, and four DisplayPort video outputs. ASUS is also packing a DVI to HDMI adapter in the box.
The sealed loop water cooler is where the card sets itself apart, however. Based on an Asetek design, the ARES II cooler features a 120mm radiator and two CPU-style water blocks, one over each 7970 GHz Edition GPU. The loop runs from the radiator through both water blocks before returning to the radiator, which is paired with two 120mm fans. Curiously, the water cooler did not result in a single-slot design; instead, the ARES II has a somewhat bulky two-slot profile. According to ASUS, the water-cooled card will run up to 31 degrees Celsius cooler than the reference NVIDIA GTX 690 while being as much as 13% faster (though ASUS does not name the specific games or benchmarks).
ASUS has not released any pricing or availability information, but you can expect it to rival the price of PowerColor’s Devil 13 thanks to the sealed loop water cooler and ARES II hardware. Currently, ASUS is planning on producing a mere 1,000 liquid cooled ARES II cards, so be prepared to be fast on the mouse click upon release.
I would have liked to see a water cooler that was a bit more customized to the card. In particular, I think ASUS should have used a single water block covering both GPUs and the VRM area, which would have allowed ASUS to get rid of the fan on the card entirely. Nevertheless, the ARES II will be extremely fast, and hopefully run nice and cool even when overclocked. I’m interested in seeing a head-to-head between the ARES II and PowerColor Devil 13.
Read more about AMD’s Graphics Core Next architecture at PC Perspective.
Subject: Graphics Cards | January 18, 2013 - 11:33 AM | Tim Verry
Tagged: Radeon HD 7000, gpu, drivers, catalyst 13.1, amd
AMD recently released a new set of Catalyst graphics card drivers with Catalyst 13.1. The new drivers are WHQL (Microsoft certified) and incorporate all of the fixes contained in the 12.11 beta 11 drivers. The Radeon HD 7000 series will see the majority of the performance and stability tweaks with 13.1. Additionally, the Catalyst 13.1 suite includes a new 3D settings interface in Catalyst Control Center that allows per-application profile management. The Linux version of the Catalyst 13.1 drivers now officially support Ubuntu 12.10 as well.
Some of the notable performance tweaks for the HD 7000 series include:
- Improved CrossFire scaling in Call of Duty: Black Ops II.
- Up to a 25% performance increase in Far Cry 3 when using 8X MSAA.
- An 8% performance increase in Sleeping Dogs and StarCraft II.
- A 5% improvement in Max Payne 3.
Beyond the performance increases, AMD has fixed several bugs with the latest drivers. Some of the noteworthy fixes include:
- Fixed a system hang on X58 and X79 chipset-based systems using HD 7000-series GPUs.
- Fixed an intermittent hang with HD 7000-series GPUs in CrossFireX and Eyefinity configurations.
- Resolved a system hang in Dishonored on 5000 and 6000 series graphics cards.
- Resolved a video playback issue with Media Player Classic Home Cinema.
- Added Super Sample Anti-Aliasing support in the OpenGL driver.
AMD has also released a new standalone un-installation utility that will reportedly clean your system of AMD graphics card drivers to make way for newer versions. That utility can be downloaded here.
If you have a Radeon HD 7000-series card, it would be worth it to update your drivers ASAP. You can download the Catalyst 13.1 drivers on the AMD website.
You can find a full list of the performance tweaks and bug fixes in the Catalyst 13.1 release notes.
In our previous article and video, I introduced you to our upcoming testing methodology for evaluating graphics cards based not only on frame rates but also on frame smoothness and the efficiency of those frames. I showed off some of the new hardware we are using for this process and detailed how direct capture of a graphics card's output allows us to find interesting frame and animation anomalies using some Photoshop still frames.
Today we are taking that a step further and looking at a couple of captured videos that demonstrate a "stutter" and walking you through, frame by frame, how we can detect, visualize and even start to measure them.
This video takes a couple of examples of stutter in games, DiRT 3 and Dishonored to be exact, and shows what they look like in real time, at 25% speed and then finally in a much more detailed frame-by-frame analysis.
Obviously these are just a couple of examples of what a stutter is, and there are oftentimes less apparent in-game stutters that are even harder to see in video playback. Not to worry - this capture method is capable of catching those issues too, and we plan on diving into that "micro" level shortly.
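For readers curious how a stutter can be flagged numerically rather than by eye, here is a minimal sketch of the general idea: measure per-frame times and flag any frame that takes far longer than the typical frame. The threshold heuristic is our own illustration, not the actual analysis pipeline behind our capture hardware:

```python
# Flag "stutter" frames: any frame whose time is well above the typical
# (median) frame time for the run. Illustrative heuristic only.
from statistics import median

def find_stutters(frame_times_ms, threshold=2.0):
    typical = median(frame_times_ms)
    return [i for i, t in enumerate(frame_times_ms) if t > threshold * typical]

# A mostly smooth ~60 FPS run (16.7 ms/frame) with one 50 ms hitch at frame 3.
times = [16.7, 16.6, 16.8, 50.1, 16.7, 16.9]
print(find_stutters(times))  # -> [3]
```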
We aren't going to start talking about whose card and what driver is being used yet and I know that there are still a lot of questions to be answered on this topic. You will be hearing more quite soon from us and I thank you all for your comments, critiques and support.
Let me know below what you thought of this video and any questions that you might have.
Subject: General Tech, Graphics Cards | January 16, 2013 - 01:10 PM | Ryan Shrout
Tagged: stolen, nvidia, legal, Lawsuit, console, amd
Things might get interesting for a little while between AMD and NVIDIA again, as AMD has filed a complaint accusing recently departed employees, now at NVIDIA, of downloading and stealing 100,000 documents on the way out AMD's door.
The company alleges that Robert Feldstein, Manoo Desai, and Nicolas Kociuk collectively downloaded over 100,000 files onto external hard drives in the six months before leaving the company. All three and another manager, Richard Hagen, were accused of recruiting AMD employees after leaving for Nvidia.
The most senior of these employees is Robert Feldstein, who was the VP of Strategic Development at AMD before leaving for NVIDIA and was responsible for getting AMD inside the Nintendo Wii U as well as the upcoming Xbox and PlayStation consoles due out this year. To say that "stealing" Feldstein was a big win for NVIDIA would seem like a bad pun now, with these accusations on the table, but there, we said it.
After looking at the former employees' computers, AMD found that "Desai and Kociuk conspired with each other to misappropriate AMD's confidential, proprietary, and/or trade secret information; and/or to intentionally access AMD's protected computers, without authorization and/or in a way that exceeded their authorized access." And since Feldstein and Hagen were responsible for recruiting those former AMD employees, they allegedly broke the "no-solicitation of employees" agreement made before departure.
Obviously AMD hasn't come out with exactly what is in the 100,000 documents it alleges were stolen, but the company is hoping that the US District Court in Massachusetts will help it recover them with a restraining order forcing all four current NVIDIA employees to retain all AMD-related documents.
The unfortunate part for AMD is that if the document leak is real, the damage has likely already been done, and it will have to sue for damages down the road. NVIDIA could be in for a world of hurt if the court finds that it was actively requesting those documents from the four named in the complaint.
If you want to read all the legal source for this complaint, you can find it right here.
Subject: Graphics Cards | January 12, 2013 - 12:02 PM | Ryan Shrout
Tagged: CES, ces 2013, Intel, haswell, hd graphics, 650m, geforce, nvidia, dirt 3
While wandering around the Intel booth we were offered a demo of the graphics performance of the upcoming Haswell processor, due out in the middle of 2013. One of the big changes on this architecture will be another jump up in graphics performance, even more than we saw going from Sandy Bridge to Ivy Bridge.
On the left is the Intel Haswell system and on the right is a mobile system powered by the NVIDIA GeForce GT 650M. For reference, that discrete GPU has 384 cores and a 128-bit memory bus so we aren't talking about flagship performance here. Haswell GT3 graphics is rumored to have double the performance of the GT2 found in Ivy Bridge based on talks at IDF this past September.
While I am not able to report the benchmark results, I can tell you what I "saw" in my viewing. First, the Haswell graphics loaded the game more slowly than the NVIDIA card. That isn't a big deal and could change with driver updates closer to launch, but it is a lingering problem we have seen with Intel HD Graphics over the years.
During the actual benchmark run, both looked great at 1080p with the High quality preset. I did notice that during part of the level load the Haswell system seemed to "stutter" a bit and was a little less fluid in its animation. I did NOT notice that during the actual benchmark gameplay, though.
I also inquired with Intel's graphics team about how dedicated they were to providing updated graphics drivers for HD graphics users. They were defensive about their current output saying they have released quarterly drivers since the Sandy Bridge release but that perhaps they should be more vocal about it (I agree). While I tried to get some kind of formal commitment from them going forward to monthly releases with game support added within X number of days, they weren't willing to do that quite yet.
If AMD and NVIDIA discrete notebook (and low cost desktop) graphics divisions are to push an edge, game support and frequent updates are going to be the best place to start. Still, seeing Intel continue to push forward on the path of improved processor graphics is great if they can follow through for gamers!
PC Perspective's CES 2013 coverage is sponsored by AMD.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Graphics Cards, Shows and Expos | January 12, 2013 - 11:38 AM | Ryan Shrout
Tagged: CES, ces 2013, caustic, imagination, ray tracing, series2
We have talked with Caustic on several occasions over the past couple of years about their desire to build a ray tracing accelerator. Back in April 2009 we first met with Caustic, learning who they were and what the company's goals were; we saw early models of the CausticOne and CausticTwo and a demonstration of the capabilities of the hardware and software model.
While at CES this year we found the group at a new place - the Imagination Technologies booth - having been acquired since we last talked. Now named the Caustic Series2 OpenRL accelerator boards, the products are fully integrated ASICs rather than demonstration FPGAs.
This is the Caustic 2500; it will retail for $1,495 and includes a pair of RT2 chips and 16GB of memory. One of the benefits of the Caustic technology is that while you need a lot of memory, you do not need the expensive, fast memory like the GDDR5 used in today's graphics cards. By utilizing DDR2 memory, Imagination is able to put a whopping 16GB on the 2500 model.
A key benefit of the Caustic ray tracing accelerators is the simple software integration. You can see above that AutoDesk Maya 2013 is using the Caustic Visualizer as a viewport into the project, just as you would with any other preview rendering technique. The viewport software is also available for 3ds Max.
There is a lower-cost version of the hardware, the Caustic 2100, that uses a single chip and has half the memory for a $795 price tag. Both cards are shipping this month, and we are interested to see how quickly, and how eagerly, developers utilize this technology.