GK110 Gets a Lower Price Point
When NVIDIA released the GeForce GTX Titan in February there was a kind of collective gasp across the enthusiast base. Half of that intake of air was from people amazed at the performance they were seeing from a single GPU graphics card powered by the GK110 chip. The other half was from people aghast at the $1,000 price point NVIDIA launched it at. The GTX Titan was the fastest single GPU card in the world, without any debate, but with it came a cost we hadn't seen in some time. Even with the debate between it, the GTX 690, and the HD 7990, the Titan was likely my favorite GPU, cost being no concern.
Today we see an extension of GK110: NVIDIA has cut the chip back some and released a new card. The GeForce GTX 780 3GB is based on the same chip as the GTX Titan but with additional SMX units disabled, a lower CUDA core count, and less memory. As you'll soon see, the performance delta between it and the GTX 680 and Radeon HD 7970 GHz Edition is pretty impressive. The $650 price tag though - maybe not.
We held a live stream the day this review launched at http://pcper.com/live. You can see the replay that goes over our benchmark results and thoughts on the GTX 780 below.
The GeForce GTX 780 - A Cut Down GK110
As I mentioned above, the GTX 780 is a pared-down GK110 GPU, and for more information on that particular architecture you should really take a look at my original GTX Titan launch article from February. There is a lot more that differs on this part compared to GK104 than simple shader counts, but for gamers most of the focus will rest there.
The chip itself is a 7.1 billion transistor beast, though a card with the GTX 780 label actually utilizes fewer than that. Below you will find a couple of block diagrams that represent the reduced functionality of the GTX 780 versus the GTX Titan:
The Architectural Deep Dive
AMD officially unveiled their brand new Bobcat architecture to the world at CES 2011. This was a very important release for AMD in the low power market. Even though netbooks were a dying breed at that time, AMD saw a good uptick in sales due to the combination of price, performance, and power consumption offered by the new Brazos platform. AMD was of the opinion that a single CPU design could not span the entire power consumption spectrum of CPUs at the time, so Bobcat was designed to fill the space from 1 watt to 25 watts. Bobcat never was able to get down to that 1 watt point, but the Z-60 was a 4.5 watt part with two cores and the full 80 Radeon cores.
The Bobcat architecture was produced on TSMC’s 40 nm process; AMD eschewed the 32 nm HKMG/SOI process that was being readied for the upcoming Llano and Bulldozer parts. In hindsight, this was a good idea. Yields took a while to improve on GLOBALFOUNDRIES' new process, while the existing 40 nm process from TSMC was running at full speed. AMD was able to provide the market in fairly short order with good quantities of Bobcat based APUs. The product more than paid for itself, and while not exactly a runaway success that garnered many points of market share from Intel, it helped to provide AMD with some stability in the market. Furthermore, it provided a very good foundation for AMD when it comes to low power parts that are feature rich and offer competitive performance.
The originally planned Brazos update did not happen; instead AMD introduced Brazos 2.0, a process-improvement oriented product which featured slightly higher clock speeds but remained in the same TDP range. The uptake of this product was limited, and it was obviously a minor refresh meant to buoy purchases of the aging product. Competition was coming from low power Ivy Bridge based chips, as well as AMD’s new Trinity products which could reach TDPs of 17 watts. Brazos and Brazos 2.0 did find a home in low powered, but full sized, notebooks that were very inexpensive. Even heavily Intel-leaning manufacturers like Toshiba released Brazos based products in the sub-$500 market. The combination of good CPU performance and above average GPU performance made this a strong product in this particular market. It was so power efficient that only small batteries were typically needed, thereby further lowering the cost.
All things must pass, and Brazos is no exception. Intel has a slew of 22 nm parts that are encroaching on the sub-15 watt territory, ARM partners have quite a few products that are getting pretty decent in terms of overall performance, and the graphics on all of these parts are seeing some significant upgrades. The 40 nm based Bobcat products are no longer competitive with what the market has to offer. So at this time we are finally seeing the first Jaguar based products. Jaguar is not a revolutionary product, but it improves on nearly every aspect of performance and power usage as compared to Bobcat.
A Reference Platform - But not a great one
Believe it or not, AMD claims that the Brazos platform, along with the "Brazos 2.0" update the following year, was the company's most successful mobile platform in terms of sales and design wins. When it first hit the scene in late 2010, it was going head to head against the likes of Intel's Atom processor and the combination of Atom + NVIDIA ION, and winning. It was sold in mini-ITX motherboard form factors as well as small clamshell notebooks (gasp, dare we say...NETBOOKS?) and though it might not have gotten the universal attention it deserved, it was a great part.
With Kabini (and Temash as well), AMD is making another attempt to pull in some marketshare in the low power, low cost mobile markets. I have already gone over the details of the mobile platforms that AMD is calling Elite Mobility (Temash) and Mainstream (Kabini) in a previous article that launched today.
This article will quickly focus on the real-world performance of the Kabini platform as demonstrated by a reference laptop I received while visiting AMD in Toronto a few weeks ago. While this design isn't going to be available in retail (and I am somewhat thankful based on the build quality) the key is to look at the performance and power efficiency of the platform itself, not the specific implementation.
Kabini Architecture Overview
The building blocks of Kabini are four Jaguar x86 cores and 128 Radeon cores collected in a pair of Compute Units - similar in many ways to the CUs found in the Radeon HD 7000 series of discrete GPUs. Josh has written a very good article that focuses on the completely new architecture that is Jaguar and compares it to other processors, including AMD's previous core used in Brazos, the Bobcat core.
2013 Elite Mobility APU - Temash
AMD has a lot to say today. At an event up in Toronto this month we got to sit down with AMD’s marketing leadership and key engineers to learn about the company’s plans for 2013 mobility processors. This includes a refreshed high performance APU known as Richland that will replace Trinity as well as two brand new APUs based on Jaguar CPU cores and the GCN architecture for low power platforms.
Josh has put together an article that details the Jaguar + GCN design of Temash and Kabini and I have also posted some initial performance results of the Kabini reference system AMD handed me in May. This article will detail the plans that AMD has for each of these three mobile segments, starting with the newest entry, AMD’s Elite Mobility APU platform – Temash.
The goal of the APU, the combination of traditional x86 processing cores and a discrete-style graphics system, was to offer unparalleled performance in smaller and more efficient form factors. AMD believes that their leadership on the graphics front will give them a good-sized advantage in areas including performance tablets, hybrids, and small screen clamshells that may or may not be touch enabled. They acknowledge, though, that getting into the smallest tablets (like the Nexus 7) is not on the table quite yet and that content creation desktop replacements are probably outside the scope of Richland.
2013 Elite Mobility APU – Temash
AMD will have the first x86 quad-core SoC design with Temash and AMD thinks it will make a big splash in a relatively new market known as the “high performance” tablet.
Temash, built around Jaguar CPU cores and GCN graphics technology, will be able to offer fully accelerated video playback with transcode support, as well as features like image stabilization and Perfect Picture. Temash will also be the only SoC to offer support for DX11 graphics, and even though some games might not have the ability to show off added effects, there are quite a few performance advantages of DX11 over DX10/9. With a claimed GPU performance increase of more than 100%, you’ll be able to drive displays at 2560x1600 for productivity use and even take advantage of wireless display options.
Introduction and Design
While Lenovo hasn’t historically been known for its gaming PCs, it’s poised to make quite a splash with the latest entry in its IdeaPad line. Owing little to the company’s business-oriented roots, the Y500 aims to be all power—more so than any other laptop from the manufacturer to date—tactfully squeezed into a price tag that would normally be unattainable given the promised performance. But can it succeed?
Our Y500 review unit can be had for $1,249 at Newegg and other retailers, or for as low as $1,180 at Best Buy. Lenovo also sells customizable models, though the price is generally higher. Here’s the full list of specifications:
The configurations offered by Lenovo range fairly widely in price, starting as low as $849 for a model sporting 8 GB of RAM and a single GT 650M with 2 GB of GDDR5. The best value, however, is certainly the configuration that we received.
What’s so special about it? Well, apart from the obvious (powerful quad-core CPU and 16 GB RAM), this laptop actually includes two NVIDIA GeForce GT 650M GPUs (both with 2 GB GDDR5) configured in SLI. Seeing as it’s just a 15.6-inch model, how does it manage to do that? By way of a clever compromise: the exchange of the usual optical drive for an Ultrabay, something normally only seen in Lenovo’s ThinkPad line of laptops. So I guess the Y500 does owe a little bit of its success to its business-grade brethren after all.
In our review unit (and in the particular configuration noted above), this Ultrabay comes prepopulated with the second GT 650M, equipped with its own heatsink/fan and all. The addition of this GPU effectively launches the Y500 into high-end gaming laptop territory—at least on the spec sheet. Other options for the Ultrabay also exist (sold separately), including a DVD burner and a second hard drive. The bay is easily removable via a switch on the back of the PC (see below).
Introduction and Technical Specifications
With the Z77A-GD65 Gaming motherboard, MSI takes its award-winning Intel Z77-based board design and melds it with a Killer - a Killer NIC, that is. MSI integrated the Killer E2205 GigE NIC into the board's design for the ultimate solution for online gaming. The Killer NIC is well known in gaming circles for its superior hardware-based network traffic prioritization engine, making it a natural integration choice for a top-end gaming board. We put the MSI Z77A-GD65 Gaming through our rigorous suite of tests to measure its performance and were not disappointed. At a retail price of $179, this board is a steal.
In designing the Z77A-GD65 Gaming board, MSI provided a total of 12 digital power phases for the CPU. MSI packed this board full of features: SATA 2 and SATA 3 ports; a Killer E2205 GigE NIC; three PCI-Express x16 slots for up to tri-card support; and USB 2.0 and 3.0 ports.
Today, Gigabyte unveiled their Intel Z87-based board lineup to select members of the press at a live event held at their headquarters in City of Industry, CA. Their Z87 boards are broken down into four series - the Extreme OC series, the Gaming series, the Thunderbolt series, and the Standard series.
Intel Z87 motherboard lineup
Courtesy of GIGABYTE
Gigabyte builds both a new interface for their UEFI BIOS and a new power paradigm, dubbed Ultra Durable 5 Plus, into each of their Intel Z87 boards.
UEFI BIOS Enhancement
UEFI BIOS explanation
Courtesy of GIGABYTE
They also fully redesigned their UEFI BIOS interface to make it more customizable, easier to use, and to allow real-time feedback for settings changes.
Today, ASUS introduces their Intel Z87-based motherboard lineup with board refreshes across all of their product lines: ASUS (mainstream), Republic of Gamers (ROG), The Ultimate Force (TUF), and Workstation (WS). With the exception of the TUF and ROG lines, ASUS decided to introduce a new and improved color scheme for their boards - black and gold. The motherboard surfaces are black with gold colored heat sinks. While black and gold may not seem like the best match-up, don't judge the boards until you have seen them firsthand - the black and gold go very well together.
ASUS Maximus VI Gene
Courtesy of ASUS
Their ROG line will include the Maximus VI Extreme, the Maximus VI Formula, the Maximus VI Gene, and the Maximus VI Hero. All ROG boards feature the standard red and black color scheme common to that brand. Additionally, ASUS includes SupremeFX audio standard with all ROG boards along with their Sonic Radar on-screen overlay technology. Sonic Radar is an in-game overlay that can be used to accurately pinpoint game-based sound sources. For powering these boards, ASUS includes 60 amp-rated BlackWing chokes and NexFET MOSFETs with 90% power efficiency. Use of these power components was seen to reduce on-board temperatures in the ASUS labs by as much as 5 degrees Celsius.
ASUS Maximus VI Extreme
Courtesy of ASUS
ASUS upped the ante even more with their Maximus VI Extreme board by including the ASUS OC Panel. This panel includes a display and can be mounted in a 5.25" drive bay or used externally for real time voltage and temperature monitoring as well as tweaking of various frequency and voltage BIOS settings. The ASUS OC Panel is supported on all ROG boards and will be available for after-market purchase for the non-Extreme boards.
ASUS Maximus VI Hero
Courtesy of ASUS
The Maximus VI Hero motherboard is the newest member of the ROG line, branded as a more affordable solution for the gamer. This board is marketed as a head-to-head competitor for MSI's MPOWER board.
Seagate recently announced and released their third generation of laptop Solid State Hybrid Drives. Originally their hybrids carried the Momentus (laptop HDD) name forward, tacking on 'XT' to denote the on-board caching ability. The Momentus XT was introduced in 500GB (1st gen) and 750GB (2nd gen) models. The new line gets a new and simple title - Laptop SSHD.
In addition to the new name, we now have two capacity points available. The 'Laptop SSHD' retains the old 9.5mm form factor and now pushes a full 1TB of capacity, while the 'Laptop Thin SSHD' drops a platter and reduces available capacity to 500GB. The bonus with the 500GB model is that it maintains similar performance yet shaves off some thickness, making it Seagate's first 7mm hybrid.
Today we will take a look at the new Thin SSHD, comparing it to the performance of the older generation Seagate Hybrids, as well as to Intel's RST caching solution:
Power and Value
We have seen our fair share of mini-ITX cases and system builds over the last six months, including rigs from Digital Storm and AVADirect. They all attempt to offer a balance between performance, power, noise, and size, and some do it better than others. With the continued development of the mini-ITX form factor, more users than ever are realizing you can get nearly top-end gaming performance in a smaller package.
Today we are taking a look at the iBuyPower Revolt, in particular the Revolt R770, the highest-end base configuration of the system. Built around a small, but not tiny, PC chassis, iBuyPower is able to include some pretty impressive specifications:
- Intel Core i7-3770K processor
- Custom built Z77 mini-ITX motherboard
- NZXT CPU water cooler
- NVIDIA GeForce GTX 670 2GB graphics card
- 8GB DDR3-1600 memory
- 120 GB Intel 320 Series SSD
- 1TB Western Digital Blue hard drive
You get all of this in a case that is only 16-in x 16-in x 4.5-in, built with a glossy black and white color scheme. The company claims that the Revolt was "designed to be a gaming system for any location" including a home theater, a dorm room, or your study. It includes "vents and air channels positioned precisely to deliver cool ambient air exactly where it is needed" and an "integrated atmospheric lighting system [that] is customizable in color."
Check out our quick video review below and then follow on to the full post for more photos of the system and a quick check of performance!
Introduction and Features
Enermax has a well-earned reputation for delivering reliable power supplies, enclosures, and other accessories to the PC enthusiast market. Their new TriAthlor Series comprises three power supplies: 550W, 650W, and 700W models. We will be taking a detailed look at the TriAthlor 550W power supply in this review. All TriAthlor power supplies are certified for 80 Plus Bronze efficiency and feature modular cables and quiet operation. Ecomaster is the authorized US agent for Enermax branded products.
Enermax TriAthlor Series PSU Key Features:
(Courtesy of Enermax)
A much-needed architecture shift
It has been almost exactly five years since the release of the first Atom branded processors from Intel, starting with the Atom 230 and 330 based on the Diamondville design. Built for netbooks and nettops at the time, the Atom chips were a reaction to a unique market that the company had not planned for. While the early Atoms were great sellers, they were universally criticized by the media for slow performance and sub-par user experiences.
Atom has seen numerous refreshes since 2008, but they were all modifications of the simplistic, in-order architecture that was launched initially. With today's official release of the Silvermont architecture, the Atom processors see their first complete redesign from the ground up. With the focus on tablets and phones rather than netbooks, can Intel finally find a foothold in the growing markets dominated by ARM partners?
I should note that even though we are seeing the architectural reveal today, Intel doesn't plan on having shipping parts until late in 2013 for embedded, server, and tablet products, and not until 2014 for smartphones. Why the early reveal on the design then? I think that pressure from ARM-based designs (Krait, Exynos) as well as the upcoming release of AMD's own Kabini is forcing Intel's hand a bit. Certainly they don't want to be perceived as having fallen behind, and getting news about the potential benefits of their own x86 option out in public will help.
Silvermont will be the first Atom processor built on the 22nm process, leaving the 32nm designs of Saltwell behind. This also marks the beginning of a change in the Atom design process: the adoption of the tick-tock model we have seen on Intel's consumer desktop and notebook parts. Starting with the next node drop to 14nm, we'll see an annual cadence that first focuses on the node change, then an architecture change at the same node.
By keeping Atom on the same process technology as Core (Ivy Bridge, Haswell, etc), Intel can put more of a focus on the power capabilities of their manufacturing.
The Intel HD Graphics are joined by Iris
Intel gets a bad rap on the graphics front. Much of it is warranted, but a lot of it really comes down to poor marketing of the technologies and features they implement and improve on. When AMD or NVIDIA update a driver, fix a bug, or bring a new gaming feature to the table, they make sure that every single PC hardware website knows about it and thus that as many PC gamers as possible know about it. The same cannot be said about Intel - they are much more understated when it comes to tooting their own horn. Maybe that's because they are afraid of being called out on some aspects, or that they have a little bit of performance envy compared to the discrete options on the market.
Today might be the start of something new from the company though - a bigger focus on the graphics technology in Intel processors. More than a month before the official public unveiling of the Haswell processors, Intel is opening up about SOME of the changes coming to the Haswell-based graphics products.
We first learned about the changes to Intel's Haswell graphics architecture way back in September of 2012 at the Intel Developer Forum. It was revealed then that the GT3 design would essentially double theoretical output over the existing GT2 design found in Ivy Bridge. GT2 will continue to exist (though slightly updated) on Haswell, and only some versions of Haswell will actually step up to the higher-performing GT3 option.
In 2009 Intel announced a drive to increase graphics performance generation to generation at an exceptional level. Not long after they released the Sandy Bridge CPU and the most significant performance increase in processor graphics ever. Ivy Bridge followed after with a nice increase in graphics capability but not nearly as dramatic as the SNB jump. Now, according to this graphic, the graphics capability of Haswell will be as much as 75x better than the chipset-based graphics from 2006. The real question is what variants of Haswell will have that performance level...
I should note right away that even though we are showing you general performance data on graphics, we still don't have all the details on what SKUs will have what features on the mobile and desktop lineups. Intel appears to be trying to give us as much information as possible without really giving us any information.
Our first thoughts and impressions
Since first hearing about the Kickstarter project that raised nearly 2.5 million dollars from over 9,500 contributors, I have eagerly been awaiting the arrival of my Oculus Rift development kit. Not because I plan on quitting the hardware review business to start working on a new 3D, VR-ready gaming project but just because as a technology enthusiast I need to see the new, fun gadgets and what they might mean for the future of gaming.
I have read other users' accounts of their time with the Oculus Rift, including a great Q&A-style write-up from Ben Kuchera over at the Penny Arcade Report, but I needed my own hands-on time with the consumer-oriented VR (virtual reality) product. Having tried it for very short periods of time at both QuakeCon 2012 and CES 2013 (less than 5 minutes each), I wanted to see how it performed and, more importantly, how my body reacted to it.
I don't consider myself a person that gets motion sick. Really, I don't. I fly all the time, sit in the back of buses, ride roller coasters, watch 3D movies, and play fast-paced PC games on large screens. The only instances where I tend to get any kind of unease with motion are on what I call "roundy-round" rides, the kind that simply go in circles over and over - think The Scrambler or the Teacups at Disney World. How would I react to time with the Oculus Rift? That was my biggest fear...
For now I don't want to get into the politics of the Rift, like how John Carmack was initially a huge proponent of the project and then backed off on how close we might be to a higher-quality consumer version of the device. We'll cover those aspects in a future story. For now I only had time for some first impressions.
Watch the video above for a walk through of the development kit as well as some of the demos, as best can be demonstrated in a 2D plane!
Good effort goes a long way
The wait has been long and anxious for Heart of the Swarm, the expansion to 2010's StarCraft 2: Wings of Liberty. Blizzard originally hinted at a very rapid release schedule, which did not exactly come to fruition. The nearly three years of development time for Heart of the Swarm is longer than a single studio spends on a full Call of Duty title, although one could make a very credible argument that a Blizzard expansion requires more effort to create than said complete Call of Duty title.
But as Duke Nukem Forever demonstrated, a long time in development does not guarantee a fully baked product coming out the other end.
Blizzard games have always been highly entertaining albeit without deep artistic substance; their games are not first on the list for a university literature syllabus. But, there is a lot of room in life for engaging entertainment. In terms of the PC, Blizzard has always been one of the leading developers for the platform; they know how to deliver an exceptional PC experience if they choose to.
Our 4K Testing Methods
You may have recently seen a story and video on PC Perspective about a new TV that made its way into the office. Of particular interest is the fact that the SEIKI SE50UY04 50-in TV is a 4K television; it has a native resolution of 3840x2160. For those that are unfamiliar with the new upcoming TV and display standards, 3840x2160 is exactly four times the resolution of current 1080p TVs and displays. Oh, and this TV only cost us $1300.
In that short preview we validated that both NVIDIA and AMD current generation graphics cards support output to this TV at 3840x2160 using an HDMI cable. You might be surprised to find that HDMI 1.4 can support 4K resolutions, but it can do so only at 30 Hz (60 Hz 4K TVs most likely won't be available until 2014), half the 60 Hz refresh rate of most TVs and monitors. That doesn't mean we are limited to 30 FPS of performance though, far from it. As you'll see in our testing on the coming pages, we were able to push out much higher frame rates using some very high end graphics solutions.
I should point out that I am not a TV reviewer and I don't claim to be one, so I'll leave the technical merits of the display itself to others. Instead I will only report on my experiences with it while using Windows and playing games - it's pretty freaking awesome. The only downside I have found in my time with the TV as a gaming monitor thus far comes from combining the 30 Hz refresh rate with Vsync disabled. Because you are seeing fewer screen refreshes over the same amount of time than you would with a 60 Hz panel, all else being equal, you are getting twice as many "frames" of the game pushed to the monitor each refresh cycle. This means that the horizontal tearing associated with disabling Vsync will likely be more apparent than it would otherwise be.
I would likely recommend enabling Vsync for a tear-free experience on this TV once you are happy with performance levels, but obviously for our testing we wanted to keep it off to gauge performance of these graphics cards.
heterogeneous Uniform Memory Access
Several years back we first heard AMD’s plans for creating a uniform memory architecture that would allow the CPU to share address space with the GPU. The promise here is a very efficient architecture that provides excellent performance in a mixed environment of serial and parallel programming loads. When GPU computing came on the scene it was full of great promise. The idea of a heavily parallel processing unit that would accelerate both integer and floating point workloads could be a potential gold mine in a wide variety of applications. Alas, the results we have seen so far have not lived up to that promise. There are many problems with combining serial and parallel workloads between CPUs and GPUs, and a lot of this has to do with very basic programming and the communication of data between two separate memory pools.
CPUs and GPUs do not share common memory pools. Instead of simply passing pointers to tell each unit where data is stored, the current implementation of GPU computing requires the CPU to copy the contents of memory over to the GPU's standalone memory pool. This is time consuming and wastes cycles, and it increases programming complexity. Typically only very advanced programmers with a lot of expertise in this subject could write effective code that takes these limitations into consideration. The lack of unified memory between CPU and GPU has hindered the adoption of the technology for a lot of applications which could potentially use the massively parallel processing capabilities of a GPU.
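To make the contrast concrete, here is a minimal sketch of the two models. It uses NVIDIA's CUDA runtime purely because the copy-versus-shared-pointer difference is easy to show there in a few lines; AMD's hUMA is exposed through HSA and OpenCL rather than this API, so treat the code as an analogy for the concept, not as AMD's implementation.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Trivial kernel: scale every element of an array by 2.
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // --- Model 1: separate memory pools (GPU computing today, as described above) ---
    // The CPU must stage the data into the GPU's own memory, launch the kernel,
    // then copy the results back before it can read them.
    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev = nullptr;
    cudaMalloc((void **)&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);   // explicit CPU -> GPU copy
    scale<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);   // explicit GPU -> CPU copy
    cudaFree(dev);

    // --- Model 2: a single shared allocation (the idea behind hUMA) ---
    // One pointer is valid on both sides; no explicit staging copies are written.
    float *shared = nullptr;
    cudaMallocManaged((void **)&shared, bytes);
    for (int i = 0; i < n; ++i) shared[i] = 1.0f;            // CPU writes directly
    scale<<<(n + 255) / 256, 256>>>(shared, n);              // GPU works on the same pointer
    cudaDeviceSynchronize();
    printf("shared[0] = %f\n", shared[0]);                   // CPU reads the result directly
    cudaFree(shared);

    free(host);
    return 0;
}
```

The work done on the GPU is identical in both halves; the difference is how much staging and bookkeeping the CPU must perform before the GPU can touch the data. That staging overhead is exactly what hUMA aims to remove on an APU, where both processors already sit on the same physical memory.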
The idea for GPU compute has been around for a long time (comparatively). I still remember getting very excited about the idea of using a high end video card along with a card like the old GeForce 6600 GT to be a coprocessor which would handle heavy math operations and PhysX. That particular plan never quite came to fruition, but the idea was planted years before the actual introduction of modern DX9/10/11 hardware. It seems as if this step with hUMA could actually provide a great amount of impetus to implement a wide range of applications which can actively utilize the GPU portion of an APU.
Obsidian Series for under $100
If you need a case for your next PC build, the chances are good that Corsair has a model that you'll like. Ranging from the obscenely large Obsidian 900D to the $69 Carbide 200R and just about everything in between, Corsair has a ton of options. Today we are reviewing the brand new entrant to the Obsidian series, the 350D, which brings Corsair to the Micro-ATX form factor.
The Obsidian series is the flagship chassis line from Corsair and typically means you are getting the best of the best from the expanding components company. With an MSRP of just $99 you are definitely making some sacrifices on features and on size, limiting you to Micro-ATX or Mini-ITX motherboards and systems.
The front panel has an attractive brushed finish and is removable (along with the fan filter).
Connections up top include headphone and microphone jacks as well as a pair of USB 3.0 ports. The power button is right in the center with dual LEDs on each side. The reset button is just to the right of the mic port and is recessed enough to prevent accidental presses.
Jaguar Hits the Embedded Space
It has long been known that AMD has simply not had a lot of luck going head to head against Intel in the processor market. Some years back they worked on differentiating themselves, and in so doing have been able to stay afloat through hard times. The acquisitions that AMD has made in the past decade are starting to make a difference in the company, especially now that the PC market that they have relied upon for revenue and growth opportunities is suddenly contracting. This of course puts a cramp in AMD’s style, but with better than expected results in their previous quarter, things are not nearly as dim as some would expect.
Q1 was still pretty harsh for AMD, but they maintained their market share in both processors and graphics chips. One area that looks to get a boost is embedded processors. AMD has offered embedded processors for some time, but with the way the market is heading they look to really ramp up their offerings to fit a variety of applications and SKUs. The last generation of G-Series processors was based upon the Bobcat/Brazos platform. This two chip design (APU and media hub) came in a variety of wattages with good performance from both the CPU and GPU portions. While the setup looked pretty good on paper, it was not widely implemented because of the added complexity of a two chip design plus thermal concerns versus performance.
AMD looks to address these problems with one of their first true SoC designs. The latest G-Series SoCs are based upon the brand new Jaguar core from AMD. Jaguar is the successor to the successful Bobcat core, the low power core used in dual core processors with integrated DX11, VLIW5-based graphics. Jaguar improves CPU performance over Bobcat by 6% to 13% when clocked identically, and because it is manufactured on a smaller process node it is able to do so without using as much power. Jaguar can come in both dual core and quad core packages. The graphics portion is based on the latest GCN architecture.
The card we have been expecting
Despite all the issues that were brought up with our new graphics performance testing methodology we are calling Frame Rating, there is little debate in the industry that AMD is making noise once again in the graphics field - from the elaborate marketing and game bundles attached to Radeon HD 7000 series cards over the last year to the hiring of Roy Taylor, VP of sales and perhaps the company's most vocal supporter.
Along with the marketing, though, come plenty of technology and important design wins. With the dominance of the APU on the console side (Wii U, PlayStation 4, and the next Xbox), AMD is making sure that the familiarity with its GPU architecture there pays dividends on the PC side as well. Developers will be focusing on AMD's graphics hardware for 5-10 years with this console generation, and that could result in improved performance and feature support for Radeon graphics for PC gamers.
Today's release of the Radeon HD 7990 6GB Malta dual-GPU graphics card shows a renewed focus on the high-end graphics market, AMD's first since the release of the Radeon HD 7970 in January of 2012. And while you may have seen something for sale previously with the HD 7990 name attached, those were custom designs built by partners, not by AMD.
Both ASUS and PowerColor currently have high-end dual-Tahiti cards for sale. The PowerColor HD 7990 Devil 13 used the brand directly but ASUS' ARES II kept away from the name and focused on its own high-end card brands instead.
The "real" Radeon HD 7990 card was first teased at GDC in March and takes a much less dramatic approach to its design without being less impressive technically. The card includes a pair of Tahiti, HD 7970-class GPUs on a single PCB with 6GB of total memory. The raw specifications are listed here:
Considering there are two HD 7970 GPUs on the HD 7990, the doubling of the major specs shouldn't be surprising, though it is a little deceiving. There are 8.6 billion transistors, yes, but they are split as 4.3 billion on each GPU. Yes, there are 4096 stream processors, but only 2048 on each GPU, requiring software GPU scaling to increase performance. The same goes for texture fill rate, compute performance, memory bandwidth, etc. The same could be said for all dual-GPU graphics cards though.