Introduction and Technical Specifications
Courtesy of ASUS
The ASUS Maximus VI Formula is among the newest members of the Republic of Gamers (ROG) product line. From the design of its ROG Armor to its power regulation circuitry, the Maximus VI Formula takes the ASUS Z87 motherboard line to a whole new level. The Maximus VI Formula does not come cheap at an MSRP of $329.99, but it is a steal in light of all the high-end components ASUS packed under the hood.
Courtesy of ASUS
ASUS designed the Maximus VI Formula with a top-notch power delivery system, featuring an 8+2-phase digital power regulation system using BlackWing chokes, NexFET MOSFETs touting 90% efficiency, and 10k-rated Black Metallic capacitors. ASUS integrated the following features into the Maximus VI Formula's design: 10 SATA 3 ports; an M.2 (NGFF) SSD slot integrated into the ASUS mPCIe Combo II card; an Intel I217-V GigE NIC; an Intel 802.11ac Wi-Fi and Bluetooth controller, also on the mPCIe Combo II card; three PCI-Express x16 slots for up to tri-card support; three PCI-Express x1 slots; a 2-digit diagnostic LED display; on-board power, reset, CMOS clear, MemOK!, BIOS Flashback, ROG Connect, DirectKey, and BIOS switch buttons; Probelt voltage measurement points; OC Panel support; the SupremeFX Formula audio solution; the CrossChill hybrid air- and water-cooled VRM cooling solution; the ROG Armor overlay; and USB 2.0 and 3.0 port support.
Introduction, Specifications and Packaging
Intel launched their first consumer SSD more than five years ago. Their very first SSD, the X25-M, might have gotten off to a bit of a rocky start, but once the initial bugs were worked out, it proved to be an excellent example of what a 3Gb/sec SATA SSD was capable of. While the competition was using 4 or 8 flash channels, Intel ran circles around them with their 10-channel controller. It was certainly a great concept, and it most definitely had legs. The very same controller, with only minor tweaks, was able to hold its own all the way through into the enterprise sector, doing so even though the competition was moving to controllers capable of twice the throughput (SATA 6Gb/sec).
The various iterations featuring Intel's 10-channel controller, spanning the 20GB cache SSD (left), the original X25-M and X25-E (center), and finally the X25-M G2, SSD 320, and SSD 710 (right).
While the older controller was extremely nimble, it was bottlenecked by a slower interface than the competition, which had all moved to the more modern SATA 6Gb/sec link. Intel also moved into this space, but not with their own native controller silicon. The SSD 510 launched in 2011 with a Marvell controller, followed by the SSD 520 in 2012 with a SandForce controller. While Intel conjured up their own firmware for these models, their older, slower controller was still more nimble and reliable than those third-party solutions, evidenced by the fact that the SSD 710, an enterprise-spec SSD using the older 10-channel controller, launched in tandem with the consumer SSD 510.
Fast forward to mid-2013, when Intel finally introduced their own native SATA 6Gb/s solution. This controller dropped the channel count to a more standard figure of 8, and while it performed well, it was only available in Intel's new enterprise 'Data Center' line of SSDs. The SSD DC S3500 and SSD DC S3700 (reviewed here) were great drives, but they were priced too high for consumers. While preparing that review, I remember saying that the controller would make a great consumer unit if Intel could just make it cheaper and tune it for standard workloads. It appears that wish has just been granted. Behold the Intel SSD 730:
Introduction and Technical Specifications
Courtesy of Antec
Antec is an established company and brand name in the computer component space, offering quality solutions for everything from cases and power supplies to thermal paste and case-mounted fan controllers. Their latest foray is into the world of liquid cooling. The KUHLER H2O 1250 is their flagship liquid cooler, featuring an all-in-one dual-pump design, a 240mm x 120mm x 25mm aluminum radiator, and hardware monitoring support via the integrated USB cable and the included Antec Grid software. The KUHLER H2O 1250 comes standard with support for all current Intel and AMD CPU offerings. To gauge the performance of Antec's flagship cooler, we set it against several other high-performance liquid and air-based coolers. With a retail MSRP of $109.99, the KUHLER H2O 1250 commands a premium price for the premium features it has to offer.
Courtesy of Antec
The KUHLER H2O 1250 liquid cooler was designed for a single purpose: to keep your processor as cool as possible. Antec includes two pumps with the unit, one integrated into each fan. The top pump pulls liquid through the radiator and pushes it to the CPU block through the radiator outlet, while the bottom pump pulls water from the CPU block through the radiator inlet and pushes it through the radiator towards the top pump.
Introduction and Design
Alongside our T440s review unit was something slightly smaller and dear to our hearts: the latest entry in the ThinkPad X series of notebooks. Seeing as this very review is being typed on a Lenovo X220, our interest was piqued by the latest refinements to the formula. When the X220 was released, the thin-and-light trend was only just beginning to pick up steam leading into what eventually became today’s Ultrabook movement. Its 2012 successor, the ThinkPad X230, went on to receive our coveted (and rarely bestowed) Editor’s Choice Award, even in spite of a highly controversial keyboard change that sent the fanbase into a panic.
But all of that has since (mostly) blown over, primarily thanks to the fact that—in spite of the minor ergonomic adjustments required to accustom oneself to what was once a jarringly different keyboard design—the basic philosophy remained the same: pack as many powerful parts as possible into a 12.5-inch case while still maintaining good durability and battery life. These machines were every bit as capable as most other 13- and 14-inch notebooks of their time, and they were considerably smaller, too. About the only things they lacked were higher-resolution screens, discrete graphics, and quad-core CPUs.
But with the X240 (and the T440s), portability has truly taken center stage, suggesting a complete paradigm shift—however subtly—away from “powerful (and light)” and toward “light (and powerful)”. Coupled with Intel’s Haswell CPUs and Lenovo’s new Power Bridge dual-battery design, this will certainly yield great benefits in the realm of battery life. But that isn’t all that’s different: we also find a (once again) revamped keyboard, as well as a completely new touchpad design which finally dispenses with the physical buttons entirely. Like in the X230’s case, these changes have roiled the ThinkPad purists—but is it all just a matter of close-minded traditionalism? That’s precisely what we’ll discover today.
Mobile Gaming Powerhouse
Every once in a while, a vendor sends us a preconfigured gaming PC or notebook. We don't usually focus too much on these systems because so many of our readers are quite clearly DIY builders. Gaming notebooks are another beast, though: building a custom gaming notebook is a pretty tough task without a horrible amount of headaches. So, for users looking for a ton of gaming performance in a package that is mobile, going with a machine like the ORIGIN PC EON17-SLX is the best option.
As the name implies, the EON17-SLX is a 17-inch notebook that packs some really impressive specifications, including a Haswell processor and SLI GeForce GTX 780M GPUs.
| Component | ORIGIN PC EON17-SLX |
| --- | --- |
| Processor | Core i7-4930MX (Haswell) |
| Cores / Threads | 4 / 8 |
| Graphics | 2 x NVIDIA GeForce GTX 780M 4GB |
| System Memory | 16GB Corsair Vengeance DDR3-1600 |
| Storage | 2 x 120GB mSATA SSD (RAID-0), 1 x Western Digital Black 750GB HDD |
| Wireless | Intel 7260 802.11ac |
| Screen | 17-inch 1920x1080 LED Matte |
| Optical | 6x Blu-ray reader / DVD writer |
| Operating System | Windows 8.1 |
Intel's Core i7-4930MX is actually a quad-core Haswell-based CPU, not an Ivy Bridge-E part as you might guess from the part number. The GeForce GTX 780M GPUs each include 4GB of frame buffer (!!) and have very similar specifications to the desktop GTX 770. Even though they run at lower clock speeds, a pair of these GPUs will provide a ludicrous amount of gaming performance.
As you would expect for a notebook with this much compute performance, it isn't a thin-and-light. My scale reads 9.5 pounds for the laptop alone and over 12 pounds with the power adapter included. The profile images below show off not only many of the included features but also the size and form factor.
An Upgrade Project
When NVIDIA started talking to us about the new GeForce GTX 750 Ti graphics card, one of the key points they emphasized was the potential use for this first-generation Maxwell GPU to be used in the upgrade process of smaller form factor or OEM PCs. Without the need for an external power connector, the GTX 750 Ti provided a clear performance delta from integrated graphics with minimal cost and minimal power consumption, so the story went.
Eager to put this theory to the test, we decided to put together a project looking at the upgrade potential of off the shelf OEM computers purchased locally. A quick trip down the road to Best Buy revealed a PC sales section that was dominated by laptops and all-in-ones, but with quite a few "tower" style desktop computers available as well. We purchased three different machines, each at a different price point, and with different primary processor configurations.
The lucky winners included a Gateway DX4885, an ASUS M11BB, and a Lenovo H520.
Introduction and Technical Specifications
Courtesy of SilverStone
SilverStone Technology is a well-known brand with high-quality solutions in the form of everything from cases to case-mounted fan controllers and displays. They have also gone through several iterations of all-in-one CPU liquid cooling solutions, with their newest models being part of the Tundra Series. The Tundra Series TD02 liquid cooler is designed to cool CPUs of any make, including the latest offerings from both Intel and AMD. The cooler comprises a massive 2x120mm radiator and a copper base plate with an integrated pump. To best measure the TD02's performance, we set it against several other high-performance liquid and air-based coolers. With a retail MSRP of $129.99, the TD02 comes in at the higher end of the all-in-one cooler price range.
Courtesy of SilverStone
What we know about Maxwell
I'm going to go out on a limb and guess that many of you reading this review would not have been as interested in the launch of the GeForce GTX 750 Ti if a specific word hadn't been mentioned in the title: Maxwell. It's true, the launch of the GTX 750 Ti, a mainstream graphics card that will sit at the $149 price point, marks the first public release of the new NVIDIA GPU architecture code-named Maxwell. It is a unique move for the company to start a new design at this particular point in the product stack, but as you'll see in the changes to the architecture as well as its limitations, it all makes a certain bit of sense.
For those of you that don't really care about the underlying magic that makes the GTX 750 Ti possible, you can skip this page and jump right to the details of the new card itself. There I will detail the product specifications, performance comparison and expectations, etc.
If you are interested in learning what makes Maxwell tick, keep reading below.
The NVIDIA Maxwell Architecture
When NVIDIA first approached us about the GTX 750 Ti, they were very light on details about the GPU powering it. Even though it was confirmed to be built on Maxwell, the company hadn't yet determined whether it would do a full architecture deep dive with the press. In the end, they went somewhere in between the full detail we are used to getting with a new GPU design and that original, passive stance. It looks like we'll have to wait for the enthusiast-class GPU release to really get the full story, but I think the details we have now paint the picture quite clearly.
During the course of designing the Kepler architecture, and then implementing it in the Tegra line in the form of the Tegra K1, NVIDIA's engineering team developed a better sense of how to improve the performance and efficiency of the basic compute design. Kepler was a huge leap forward compared to the likes of Fermi, and Maxwell promises to be equally revolutionary. NVIDIA wanted to address GPU power consumption as well as find ways to extract more performance from the architecture at the same power levels.
The logic of the GPU design remains similar to Kepler: a Graphics Processing Cluster (GPC) houses Streaming Multiprocessors (SMs) built from a large number of CUDA cores (stream processors).
GM107 Block Diagram
Readers familiar with the look of Kepler GPUs will instantly see changes in the organization of the various blocks of Maxwell. There are more divisions, more groupings, and fewer CUDA cores "per block" than before. As it turns out, this reorganization was a key part of how NVIDIA improved performance and power efficiency with the new GPU.
It wouldn’t be February if we didn’t hear the Q4 FY14 earnings from NVIDIA! NVIDIA does have a slightly odd way of expressing their quarters, but in the end it is all semantics. They are not in fact living in the future, but I bet their product managers wish they could peer into the actual Q4 2014. No, the whole FY14 thing relates back to when they made their IPO and how they started reporting. To us mere mortals, Q4 FY14 actually represents Q4 2013. Clear as mud? Lord love the Securities and Exchange Commission and their rules.
The past quarter was a pretty good one for NVIDIA. They came away with $1.144 billion in gross revenue and a GAAP net income of $147 million, beating the Street’s estimate by a pretty large margin. In response, NVIDIA’s stock rose in after-hours trading. This has certainly been a trying year for NVIDIA and the PC market in general, but they seem to have come out on top.
NVIDIA beat estimates primarily on the strength of the PC graphics division. Many were focusing on the apparent decline of the PC market and assumed that NVIDIA would be dragged down by lower shipments. On the contrary, it seems as though the gaming market and add-in sales on the PC helped to solidify NVIDIA’s quarter. We can look at a number of factors that likely contributed to this uptick for NVIDIA.
Straddling the R7 and R9 designation
It is often said that the sub-$200 graphics card market is crowded. It will get even more so over the next 7 days. Today AMD is announcing a new entry into this field, the Radeon R7 265, which seems to straddle the line between their R7 and R9 brands. The product is much closer in its specifications to the R9 270 than it is to the R7 260X. As you'll see below, it is built on a very familiar GPU architecture.
AMD claims that the new R7 265 brings a 25% increase in performance to the R7 line of graphics cards. In my testing, this does turn out to be true and also puts it dangerously close to the R9 270 card released late last year. Much like we saw with the R9 290 compared to the R9 290X, the less expensive but similarly performing card might make the higher end model a less attractive option.
Let's take a quick look at the specifications of the new R7 265.
Based on the Pitcairn GPU, a part that made its debut with the Radeon HD 7870 and HD 7850 in early 2012, this card has 1024 stream processors running at 925 MHz, equating to 1.89 TFLOPS of total peak compute power. Unlike the other R7 cards, the R7 265 has a 256-bit memory bus and will come with 2GB of GDDR5 memory running at 5.6 GHz. The card requires a single 6-pin power connection and has a peak TDP of 150 watts, pretty much the maximum of the PCI Express slot and one 6-pin power connector combined. And yes, the R7 265 supports DX 11.2, OpenGL 4.3, and Mantle, just like the rest of the AMD R7/R9 lineup. It does NOT support TrueAudio or the new CrossFire DMA engine.
| | Radeon R9 270X | Radeon R9 270 | Radeon R7 265 | Radeon R7 260X | Radeon R7 260 |
| --- | --- | --- | --- | --- | --- |
| GPU Code name | Pitcairn | Pitcairn | Pitcairn | Bonaire | Bonaire |
| Rated Clock | 1050 MHz | 925 MHz | 925 MHz | 1100 MHz | 1000 MHz |
| Memory Clock | 5600 MHz | 5600 MHz | 5600 MHz | 6500 MHz | 6000 MHz |
| Memory Bandwidth | 179 GB/s | 179 GB/s | 179 GB/s | 104 GB/s | 96 GB/s |
| TDP | 180 watts | 150 watts | 150 watts | 115 watts | 95 watts |
| Peak Compute | 2.69 TFLOPS | 2.37 TFLOPS | 1.89 TFLOPS | 1.97 TFLOPS | 1.53 TFLOPS |
The table above compares the current AMD product lineup, ranging from the R9 270X to the R7 260, with the R7 265 directly in the middle. Some interesting specifications make the 265 a much closer relation to the R9 270/270X cards than to anything below it, even though the R7 265 has four fewer compute units (256 fewer stream processors) than the R9 270. The biggest performance differentiator is the 256-bit memory bus that persists: the available memory bandwidth of 179 GB/s is 72% higher than the 104 GB/s of the R7 260X! That will drastically improve performance compared to the rest of the R7 products. Pay no mind to the peak compute of the 260X being higher than that of the R7 265; in real-world testing that advantage never materialized.
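The bandwidth and peak-compute figures in the table fall straight out of the published specs, and it's worth seeing how. A quick sanity-check sketch (the stream processor counts used here, 1024 for the R7 265, are from the specs above; the GCN 2-FLOPs-per-clock FMA rate is an architectural assumption):

```python
def peak_tflops(stream_processors, clock_mhz):
    # GCN stream processors can each issue one fused multiply-add
    # (2 floating-point ops) per clock.
    return stream_processors * clock_mhz * 2 / 1e6

def bandwidth_gbs(bus_width_bits, effective_clock_mhz):
    # Bytes moved per transfer times the effective transfer rate.
    return (bus_width_bits / 8) * effective_clock_mhz / 1e3

# R7 265: 1024 stream processors at 925 MHz, 256-bit GDDR5 at 5.6 GHz effective
print(peak_tflops(1024, 925))    # 1.8944 -> the 1.89 TFLOPS in the table
print(bandwidth_gbs(256, 5600))  # 179.2  -> the 179 GB/s in the table
# vs. the R7 260X's 104 GB/s: 179.2 / 104 is about 1.72, the 72% cited above
```

The same math reproduces every row of the table, which is a handy way to see that the R7 265's advantage over the rest of the R7 line is almost entirely the wider memory bus, not raw compute.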
The Mini ITX Surge Continues
For years now the enthusiast crowd has been clamoring for Corsair to bring its case-building prowess down to the Mini ITX market, and with the Obsidian 250D it has done just that. By combining the design features that have made Corsair's cases so popular with the aesthetic touches of the most recent Obsidian lineup, the 250D is an interesting combination of size and performance.
The Corsair 250D is unlike most other Mini ITX designs out today in that it supports a lot of full-size components. You'll be able to use a standard ATX power supply, many self-contained water coolers, and full-size graphics cards, and you won't have to suffer through the most painful cable-routing aspects of other small form factor cases.
Introduction and Technical Specifications
Koolance EXT-440CU Liquid Cooling System
Courtesy of Koolance
Koolance CPU-380I CPU Water Block with Intel CPU mounting plate
Courtesy of Koolance
Koolance has effectively transformed itself from a minor player in the cooling community into a powerhouse at the forefront of high-performance liquid cooling products. Koolance recently released the EXT-440CU Liquid Cooling System, an apparatus integrating the cooling system's reservoir, pump, and radiator into a single aluminum assembly. In addition to the EXT-440CU unit, Koolance provided us with their CPU-380I CPU water block so we could test a complete kit. We tested the Koolance kit in conjunction with other all-in-one and air coolers to see how well it stacks up. The EXT-440CU Liquid Cooling System retails at an MSRP of $274.99, with the CPU-380I water block available for a $74.99 MSRP. While not the cheapest solution, the adage "you get what you pay for" rings true for this Koolance kit.
Koolance EXT-440CU Liquid Cooling System, side view
Courtesy of Koolance
Koolance EXT-440CU Liquid Cooling System, rear view
Courtesy of Koolance
Koolance CPU-380A CPU Water Block with AMD CPU mounting plate
Courtesy of Koolance
Introduction and Features
NZXT is introducing the H440 Mid-Tower case in their H Series line. The new H440 chassis will be available in two color schemes: white with black accents and black with red accents. Both versions exhibit clean lines and a sleek design. Gone are the 5.25” optical drive bays; in their place you get three 120mm intake fans. In addition to providing excellent case cooling with four included fans, the H440 is also very water-cooling friendly, with support for water-cooling radiators on the top, front, and rear of the case. The left side panel features a large acrylic window to showcase the motherboard area, and the H440 can support up to eight internal 3.5”/2.5” HDDs/SSDs (6+2).
NZXT H440 Mid-Tower Chassis
The lower section of the H440 case, which houses the power supply, is shrouded from view and provides a lot of room for cable management. The color accented shroud features a lighted NZXT logo and there are two LEDs built into the back panel to provide light when making connections; very nice.
The NZXT H440 Mid-Tower case comes with four NZXT FN V2 fans preinstalled: (3) 120mm intake fans in the front and (1) 140mm exhaust fan on the back. Dust filters are provided for the three front intake fans and also on the bottom of the case for the PSU intake fan. And up to three more 120mm fans (or two 140mm fans) can be added to the top panel if desired.
Here is what NZXT has to say about the H440 Mid-Tower case:
“The new H440 features a doorless, ODD-free front panel made entirely of steel while a large, full-view window reveals an interior specially engineered to make any build seamless and beautiful. The H440 ensures a hassle-free experience, allowing anybody to become an expert on clean cable management. The H440 comes standard with four of NZXT’s newly designed FN V2 case fans. An unheard of 3 x 120mm in front and 1 x 140mm in rear. And newly designed steel HDD drive trays can support up to eight internal 3.5”/2.5” HDDs/SSDs: 6+2. The H440 supports both 140mm and 120mm fans, the steel top and front panels come Kraken ready, fitting radiators up to 360mm in size to offer comprehensive water-cooling performance in a sleek, minimalist package.”
NZXT H440 Mid-Tower Case Key Features:
• Mid-tower PC case with sleek, clean styling
• Available in matte black with red accents or gloss white with black accents
• Four NZXT FN V2 case fans preinstalled
• Two removable dust filters (front intake and under PSU intake)
• Large side window
• Lighted NZXT logo on side and two LED lights on back panel
• Support for up to eight internal HDDs/SSDs
• Support for water-cooling radiators: front, top, and/or back
• Can mount radiators up to 360mm in length
• Cable routing cutouts with rubber grommets
• Large CPU backplate cut out for easy CPU cooler upgrades
• Top panel has 2 USB 3.0, 2 USB 2.0 and audio in/out ports
• Thumbscrew side panel removal for quick access
• Supports ATX, Micro-ATX, and Mini-ITX motherboards
• 2-Year warranty
What Mantle signifies about GPU architectures
Mantle is a very interesting concept. From the various keynote speeches, it sounds like the API is being designed to address the current state (and trajectory) of graphics processors. GPUs are generalized, highly parallel computation devices assisted by a little bit of specialized silicon where appropriate. The vendors have even settled on standards, such as IEEE 754 floating point arithmetic, which means that the driver has much less reason to shield developers from the underlying architectures.
Still, Mantle is currently a private technology for an unknown number of developers. Without a public SDK, or anything beyond the half-dozen keynotes, we can only speculate on its specific attributes. I, for one, have technical questions and hunches which linger unanswered or unconfirmed, probably until the API is suitable for public development.
Or, until we just... ask AMD.
The answers came from Guennadi Riguer, the chief architect of Mantle. In them, he discusses the API's usage as a computation language, the future of the rendering pipeline, and whether there will be a day when Crossfire-like benefits can be had by leaving an older Mantle-capable GPU in your system when purchasing a new, also Mantle-supporting one.
Q: Mantle's shading language is said to be compatible with HLSL. How will optimizations made for DirectX, such as tweaks during shader compilation, carry over to Mantle? How much tuning will (and will not) be shared between the two APIs?
[Guennadi] The current Mantle solution relies on the same shader generation path that DirectX games use and includes an open-source component for translating DirectX shaders to the Mantle-accepted intermediate language (IL). This enables developers to quickly develop a Mantle code path without any changes to the shaders. This was one of the strongest requests we got from our ISV partners when we were developing Mantle.
Follow-Up: What does this mean, specifically, in terms of driver optimizations? Would AMD, or anyone else who supports Mantle, be able to re-use the effort they spent on tuning their shader compilers (and so forth) for DirectX?
[Guennadi] With the current shader compilation strategy in Mantle, the developers can directly leverage DirectX shader optimization efforts in Mantle. They would use the same front-end HLSL compiler for DX and Mantle, and inside of the DX and Mantle drivers we share the shader compiler that generates the shader code our hardware understands.
Introduction and Design
Arguably some of the most thoughtful machines on the market are Lenovo’s venerable ThinkPads, which—while sporadically brave in their assertions—are still among the most conservative (yet simultaneously practical) notebooks available. What makes these notebooks so popular in the business crowds is their longstanding refusal to compromise functionality in the interest of form, as well as their self-proclaimed legendary reliability. And you could argue that such practical conservatism is what defines a good business notebook: a device which embraces the latest technological trends, but only with requisite caution and consideration.
Maybe it’s the shaky PC market, or maybe it’s the sheer onset of sexy technologies such as touch and clickpads, but recent ThinkPads have begun to show some uncommon progressivism, and unapologetically so. First, it was the complete replacement of the critically-acclaimed traditional ThinkPad keyboard with the Chiclet AccuType variety, a decision which irked purists but was eventually accepted by most. Along with that came the integrated touchpad buttons, which are still lamented by many users. Those alterations to the winning design were ultimately relatively minor, however, and for the most part, they’ve now been digested by the community. Now, though, with the T440s (as well as the rest of Lenovo’s revamped ThinkPad lineup), we’re seeing what will perhaps constitute the most controversial change of all: the substitution of the older touchpads with a “5-button trackpad”, as well as an optional touchscreen interface.
Can these changes help to keep the T440s on the cusp of technological progress, or has the design finally crossed the threshold into the realm of counterproductivity?
Compared with nearly any other modern notebook, these specs might not hold many surprises. But judged side-by-side with its T430s predecessor, there are some pretty striking differences. For starters, the T440s is the first in its line to offer only low-voltage CPU options. While our test unit shipped with the (certainly capable enough) Core i5-4200U—a dual-core processor with up to 2.6 GHz Turbo Boost clock rate—options range up to a Core i7-4600U (up to 3.3 GHz). Still, these options are admittedly a far cry from the i7-3520M with which top-end T430s machines were equipped. Of course, they also carry less than half the TDP, which is likely why the decision was made. Other notables are the lack of discrete graphics options (previously, users had the choice of either integrated graphics or an NVIDIA NVS 5200M) and the maximum supported memory of 12 GB. And, of course, there’s the touchscreen—which is not required, but rather merely an option. On the other hand, while we’re on the subject of the screen, this is also the first model in the series to offer a 1080p resolution, whether traditional or touch-enabled—which is very much appreciated indeed.
That’s a pretty significant departure from the design of the T430s, which—as it currently appears—could represent the last T4xxs model that will provide such powerhouse options at the obvious expense of battery life. Although some markets already have the option of the ThinkPad S440 to fill the Ultrabook void within the ThinkPad 14-inch range, that notebook can even be outfitted with discrete graphics. The T440s top-end configuration, meanwhile, consists of a 15W TDP dual-core i7 with integrated graphics and 12 GB DDR3 RAM. In other words, it’s powerful, but it’s just not in the same class as the T430’s components. What’s more important to you?
A quick look at performance results
Late last week, EA and DICE released the long-awaited patch for Battlefield 4 that enables support for the Mantle renderer. This new API technology was introduced by AMD back in September. Unfortunately, AMD wasn't quite ready for its release with their Catalyst 14.1 beta driver. I wrote a short article that previewed the new driver's features, its expected performance with the Mantle version of BF4, and commentary on the current state of Mantle. You should definitely read that as a primer before continuing if you haven't yet.
Today, after really just a few short hours with a useable driver, I have only limited results. Still, I know that you, our readers, clamor for ANY information on the topic. I thought I would share what we have thus far.
As I mentioned in the previous story, the Mantle version of Battlefield 4 has the biggest potential to show advantages in cases where the game is more CPU limited. AMD calls this the "low-hanging fruit" for this early release of Mantle and claims that further optimizations will come, especially for GPU-bound scenarios. That dependency on CPU limitations puts some non-standard requirements on our ability to showcase Mantle's performance capabilities.
For example, the level of the game, and even the section of that level in the BF4 single-player campaign, can show drastic swings in Mantle's capabilities. Multiplayer matches will show more consistent CPU utilization (and thus could be improved by Mantle), though testing those levels in a repeatable, semi-scientific method is much more difficult. And, as you'll see in our early results, I even found a couple of instances in which the Mantle API version of BF4 ran a smidge slower than the DX11 instance.
For our testing, we assembled two systems that differed in CPU performance in order to simulate the range of processors installed in consumers' PCs. Our standard GPU test bed includes a Core i7-3960X Sandy Bridge-E processor, chosen specifically to remove the CPU as a bottleneck, and it is included here today. We added a system based on the AMD A10-7850K Kaveri APU, which presents a more processor-limited (especially per-thread) system overall and should help showcase Mantle's benefits more easily.
A troubled launch to be sure
AMD has released some important new drivers with drastic feature additions over the past year. Remember back in August of 2013 when Frame Pacing was first revealed? Today’s Catalyst 14.1 beta release will actually complete the goals that AMD set for itself in early 2013 in regards to introducing (nearly) complete Frame Pacing technology integration for non-XDMA GPUs, while also adding support for Mantle and HSA capability.
Frame Pacing Phase 2 and HSA Support
When AMD released the first frame pacing capable beta driver in August of 2013, it added support to existing GCN designs (HD 7000-series and a few older generations) at resolutions of 2560x1600 and below. While that definitely addressed a lot of the market, the fact was that CrossFire users were also amongst the most likely to have Eyefinity (3+ monitors spanned for gaming) or even 4K displays (quickly dropping in price). Neither of those advanced display options were supported with any Catalyst frame pacing technology.
That changes today, as Phase 2 of the AMD Frame Pacing feature has finally been implemented for products that do not feature the XDMA technology (found in Hawaii GPUs, for example). That includes HD 7000-series GPUs, the R9 280X and 270X cards, as well as older generation products and Dual Graphics hardware combinations such as the new Kaveri APU and R7 250. In fact, I have already tested Kaveri and the R7 250, and you can read about the scaling and experience improvements right here. That means that users of the HD 7970, R9 280X, etc., as well as those of you with HD 7990 dual-GPU cards, will finally be able to utilize the power of both GPUs in your system with 4K displays and Eyefinity configurations!
This is finally fixed!!
As of this writing I haven’t had time to do more testing (other than the Dual Graphics article linked above) to demonstrate the potential benefits of this Phase 2 update, but we’ll be targeting it later in the week. For now, it appears that you’ll be able to get essentially the same performance and pacing capabilities on the Tahiti-based GPUs as you can with Hawaii (R9 290X and R9 290).
Catalyst 14.1 beta was also expected to be the first public driver to add support for HSA technology, allowing owners of the new Kaveri APU to take advantage of appropriately enabled applications like LibreOffice and a handful of Adobe apps. However, AMD has since let us know that this feature DID NOT make it into the public release of Catalyst 14.1.
The First Mantle Ready Driver (sort of)
According to AMD, Mantle has been in development for more than two years, and the newly released Catalyst 14.1 beta driver is the first to enable support for the revolutionary new API for PC gaming. Essentially, Mantle is AMD’s attempt at creating a custom API to replace DirectX and OpenGL, targeting the GPU hardware in your PC more directly, specifically AMD’s GCN (Graphics Core Next) designs.
Mantle runs at a lower level than DX or OGL, accessing the hardware resources of the graphics chip more directly, and with that access it can better utilize the hardware in your system, both CPU and GPU. In fact, Mantle's primary benefit comes from reduced API overhead and fewer bottlenecks such as real-time shader compiling and code translation.
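To make the overhead argument concrete, here is a toy cost model, not the actual Mantle or DirectX API, with purely hypothetical per-call costs. The idea it sketches is the one described above: a high-level API pays validation and translation costs on every draw call, while a thinner API submits pre-built command buffers far more cheaply.

```python
# Toy model of draw-call API overhead. Illustrative only: these are NOT real
# Mantle or DirectX calls, and the per-call microsecond costs are assumptions.

DRAW_CALLS = 5000  # draw calls per frame, an assumed workload

# Hypothetical per-call CPU costs in microseconds.
HIGH_LEVEL_OVERHEAD_US = 20.0   # state validation + translation on every call
THIN_API_OVERHEAD_US = 4.0      # pre-validated command buffer submission

def frame_cpu_time_ms(per_call_us: float, calls: int = DRAW_CALLS) -> float:
    """CPU time spent just issuing draw calls for one frame, in ms."""
    return per_call_us * calls / 1000.0

dx_style = frame_cpu_time_ms(HIGH_LEVEL_OVERHEAD_US)   # 100.0 ms/frame
thin_style = frame_cpu_time_ms(THIN_API_OVERHEAD_US)   # 20.0 ms/frame
print(f"high-level API: {dx_style:.1f} ms/frame of CPU submission cost")
print(f"thin API:       {thin_style:.1f} ms/frame of CPU submission cost")
```

Under these assumed numbers, the thin API leaves 80 ms of CPU time per frame free for game logic, which is exactly the kind of headroom a CPU-limited system like the A10-7850K stands to gain from.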
If you are interested in the meat of what makes Mantle tick and why it was so interesting to us when it was first announced in September of 2013, you should check out our first deep-dive article written by Josh. In it you’ll get our opinion on why Mantle matters and why it has the potential for drastically changing the way the PC is thought of in the gaming ecosystem.
Hybrid CrossFire that actually works
The road to redemption for AMD and its driver team has been a tough one. Since we first started to reveal the significant issues with AMD's CrossFire technology back in January of 2013, the Catalyst driver team has been hard at work on a fix, though I will freely admit it took longer than I would have liked to convince them that the issue was real. We saw the first steps of the fix in August of 2013 with the release of the Catalyst 13.8 beta driver. It supported DX11 and DX10 games at resolutions of 2560x1600 and under (no Eyefinity support) but was obviously still less than perfect.
In October, with the release of AMD's latest Hawaii GPU, the company took another step by reorganizing the internal architecture of CrossFire at the chip level with XDMA. The result was frame pacing that worked on the R9 290X and R9 290 at all resolutions, including Eyefinity, though it still left out older DX9 titles.
One thing that had not been addressed, at least not until today, were the issues surrounding AMD's Hybrid CrossFire technology, now known as Dual Graphics. This is the ability of an AMD APU with integrated Radeon graphics to pair with a low cost discrete GPU to improve graphics performance and gaming experiences. Recently, Tom's Hardware discovered that Dual Graphics suffered from the exact same scaling issues as standard CrossFire: frame rates in FRAPS looked good, but the actual perceived frame rate was much lower.
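The gap between a FRAPS number and what you actually see can be sketched with a little arithmetic. FRAPS counts every frame handed to the API, but when an unpaced dual-GPU setup alternates full frames with "runts" that are on screen for only a sliver of time, those runts add nothing to perceived smoothness. The frame times and runt threshold below are assumptions chosen to illustrate the pattern, not measurements.

```python
# Illustrative sketch of why FRAPS frame rates can overstate the experience.
# Frame times (ms) and the runt threshold are assumed values for illustration.

RUNT_THRESHOLD_MS = 2.0  # frames shown for less than this add no smoothness

def fraps_fps(frame_times_ms):
    """FPS as FRAPS would report it: every submitted frame counts."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def perceived_fps(frame_times_ms, threshold=RUNT_THRESHOLD_MS):
    """Only frames displayed longer than the threshold count as 'seen'."""
    kept = [t for t in frame_times_ms if t >= threshold]
    return 1000.0 * len(kept) / sum(frame_times_ms)

# Unpaced CrossFire-style pattern: a full frame, then a runt, repeating.
unpaced = [32.0, 1.0] * 30
print(round(fraps_fps(unpaced)))      # 61 FPS reported
print(round(perceived_fps(unpaced)))  # 30 FPS actually experienced
```

The reported number doubles the experienced one, which is precisely the behavior frame pacing exists to eliminate: spacing frames evenly so the two figures converge.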
A little while ago a new driver made its way into my hands under the name of Catalyst 13.35 Beta X, a driver that promised to enable Dual Graphics frame pacing with Kaveri and R7 graphics cards. As you'll see in the coming pages, the fix definitely is working. And, as I learned after doing some more probing, the 13.35 driver is actually a much more important release than it at first seemed. Not only is Kaveri-based Dual Graphics frame pacing enabled, but Richland and Trinity are included as well. And even better, this driver will apparently fix resolutions higher than 2560x1600 in desktop graphics as well - something you can be sure we are checking on this week!
Just as we saw with the first implementation of Frame Pacing in the Catalyst Control Center, with the 13.35 Beta we are using today you'll find a new set of options in the Gaming section to enable or disable Frame Pacing. The default setting is On, which makes me smile inside every time I see it.
The hardware we are using is the same basic setup from my initial review of the AMD Kaveri A8-7600 APU. That includes the A8-7600 APU, an ASRock A88X mini-ITX motherboard, 16GB of DDR3-2133 memory and a Samsung 840 Pro SSD. Of course, for our testing this time we needed a discrete card to enable Dual Graphics, and we chose the MSI R7 250 OC Edition with 2GB of DDR3 memory. This card will run you an additional $89 or so on Amazon.com. You could use either the DDR3 or GDDR5 versions of the R7 250, as well as the R7 240, but in our talks with AMD they seemed to think the R7 250 DDR3 was the sweet spot for this CrossFire implementation.
Both the R7 250 and the A8-7600 actually share the same shader count: 384 stream processors, or 6 Compute Units in the new nomenclature AMD is creating. However, the MSI card is clocked at 1100 MHz while the GPU portion of the A8-7600 APU runs at only 720 MHz.
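Those clock speeds translate into a meaningful compute gap despite the identical shader counts. A common back-of-the-envelope estimate for peak single-precision throughput on GCN is shaders × 2 ops per clock (FMA) × clock; treat the results as rough theoretical peaks, not measured performance.

```python
# Rough peak compute throughput for the two GPUs in this Dual Graphics pair.
# Formula: shaders * 2 ops/clock (fused multiply-add) * clock. Peak figures
# only; real-world performance depends on far more than GFLOPS.

def peak_gflops(shaders: int, clock_mhz: float) -> float:
    """Theoretical peak single-precision GFLOPS."""
    return shaders * 2 * clock_mhz / 1000.0

msi_r7_250 = peak_gflops(384, 1100)  # discrete card at 1100 MHz
a8_7600_igp = peak_gflops(384, 720)  # APU graphics at 720 MHz
print(f"MSI R7 250:  {msi_r7_250:.1f} GFLOPS")   # 844.8
print(f"A8-7600 IGP: {a8_7600_igp:.1f} GFLOPS")  # ~553
```

By this estimate, the discrete card brings roughly 50% more raw shader throughput to the pairing, which is why AMD considers it a sensible, rather than lopsided, partner for the APU.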
So the question is, has AMD truly fixed the issues with frame pacing with Dual Graphics configurations, once again making the budget gamer feature something worth recommending? Let's find out!
Introduction and Features
Corsair's new CS Series Modular PSUs include four models: the CS450M, CS550M, CS650M and CS750M. All of the power supplies in the CS Series feature modular cables, high efficiency (80 Plus Gold certified) and quiet operation. In addition, Corsair continues to offer a full line of high quality power supplies, memory components, cases, cooling components, SSDs and accessories for the PC market.
Here is what Corsair has to say about their CS Series Modular PSUs: “The CS Modular Series is designed for basic and midrange PCs, but offers features and performance traditionally reserved for higher-end models. 80 Plus Gold efficiency and a thermally controlled fan ensure quiet operation and lower energy use, and the modular, detachable cable set makes installations and upgrades faster and better looking.”
“80 Plus Gold rated efficiency saves you money on your power bill and produces less heat than less efficient power supplies. The flat black modular cables allow you to enjoy fast, neat builds. And, like all Corsair power supplies, CS Series Modular is built with high-quality components and is guaranteed to deliver clean, stable, continuous power.”
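Corsair's efficiency claim is easy to quantify. The percentages below are the 80 Plus certification targets at 50% load (Gold: 90%, Bronze: 85% at 115V); the 300 W load in this sketch is an assumption for illustration, not a Corsair figure.

```python
# Rough illustration of what 80 Plus Gold efficiency means at the wall.
# Efficiency values are the 80 Plus 50%-load targets at 115V (Gold 90%,
# Bronze 85%); the DC load is an assumed example, not a measured figure.

def wall_draw_watts(dc_load_w: float, efficiency: float) -> float:
    """AC power drawn from the outlet to supply a given DC load."""
    return dc_load_w / efficiency

load = 300.0  # assumed DC load in watts
gold = wall_draw_watts(load, 0.90)    # ~333 W at the wall
bronze = wall_draw_watts(load, 0.85)  # ~353 W at the wall
print(f"Gold: {gold:.0f} W, Bronze: {bronze:.0f} W, "
      f"saving about {bronze - gold:.0f} W")
```

Roughly 20 W less drawn at the wall for the same load means a lower power bill over time and, just as importantly for a quiet build, about 20 W less waste heat the fan has to deal with.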
Corsair CS Series Modular PSU Key Features: (from the Corsair website)