AM1 Walks New Ground
After Josh's initial review of the AMD AM1 Platform and the Athlon 5350, we received a few requests to look at gaming performance with a discrete GPU installed. Even though this platform isn't being aimed at gamers looking to play demanding titles, we started to investigate this setup anyway.
While Josh liked the ASUS AM1I-A Mini ITX motherboard he used in his review, its single PCI-E x1 slot makes it less than ideal for this situation.
Luckily we had the Gigabyte AM1M-S2H Micro ATX motherboard, which features a full-length PCI-E x16 slot as well as two x1 slots.
Don't be misled by the shape of the slot though: the AM1 chipset still only offers four lanes of PCI-Express 2.0, which means the graphics card will not be running at full bandwidth. However, the physical x16 slot makes it much easier to connect a discrete GPU without resorting to the ribbon riser cables that miners use.
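For a sense of scale, the bandwidth gap is easy to quantify. Here is a quick sketch assuming PCIe 2.0's 5 GT/s per-lane rate and 8b/10b encoding (real-world throughput will be somewhat lower due to protocol overhead):

```python
# Usable PCIe 2.0 bandwidth per link width.
# 5 GT/s per lane with 8b/10b encoding -> 500 MB/s per lane, per direction.
GT_PER_SEC = 5.0           # raw transfer rate per lane (gigatransfers/s)
ENCODING_EFFICIENCY = 0.8  # 8b/10b: 8 data bits for every 10 bits on the wire

def link_bandwidth_gbps(lanes):
    """Usable bandwidth in GB/s for a PCIe 2.0 link of the given width."""
    return lanes * GT_PER_SEC * ENCODING_EFFICIENCY / 8  # bits -> bytes

for lanes in (1, 4, 16):
    print(f"x{lanes}: {link_bandwidth_gbps(lanes):.1f} GB/s")
# x1: 0.5 GB/s, x4: 2.0 GB/s, x16: 8.0 GB/s
```

So the AM1 platform's x4 link offers a quarter of the bandwidth a full x16 slot would, which is worth keeping in mind when interpreting the results.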
Competition is a Great Thing
While doing some testing with the AMD Athlon 5350 Kabini APU to determine its flexibility as a low-cost gaming platform, we decided to run a handful of tests to measure something else that is getting a lot of attention right now: AMD Mantle and NVIDIA's 337.50 driver.
Earlier this week I posted a story that looked at performance scaling of NVIDIA's new 337.50 beta driver compared to the previous 335.23 WHQL. The goal was to assess the DX11 efficiency improvements that the company stated it had been working on and had implemented into this latest beta driver. In the end, we found some instances where games scaled by as much as 35% and 26%, but other cases showed little to no gain with the new driver. We looked at both single and multi-GPU scenarios, though mostly on high-end CPU hardware.
Earlier in April I posted an article looking at Mantle, AMD's answer to a lower level API that is unique to its ecosystem, and how it scaled on various pieces of hardware in Battlefield 4. This was the first major game to implement Mantle, and it remains the biggest name in the field. While we definitely saw some improvements in gaming experiences with Mantle, there was still work to be done on multi-GPU scaling and frame pacing.
Both parties in this debate were showing promise but obviously both were far from perfect.
While we were benchmarking the new AMD Athlon 5350 Kabini based APU, an incredibly low cost processor that Josh reviewed in April, it made sense to test out both Mantle and NVIDIA's 337.50 driver in an interesting side by side.
Let's see if I can start this story without sounding too much like a broken record when compared to the news post I wrote late last week on the subject of NVIDIA's new 337.50 driver. In March, while attending the Game Developers Conference to learn about the upcoming DirectX 12 API, I sat down with NVIDIA to talk about changes coming to its graphics driver that would affect current users with shipping DX9, DX10 and DX11 games.
As I wrote then:
What NVIDIA did want to focus on with us was the significant improvements that have been made on the efficiency and performance of DirectX 11. When NVIDIA is questioned as to why they didn’t create their own Mantle-like API if Microsoft was dragging its feet, they point to the vast improvements possible and made with existing APIs like DX11 and OpenGL. The idea is that rather than spend resources on creating a completely new API that needs to be integrated in a totally unique engine port (see Frostbite, CryEngine, etc.) NVIDIA has instead improved the performance, scaling, and predictability of DirectX 11.
NVIDIA claims that these fixes are not game specific and will improve performance and efficiency for a lot of GeForce users. Even if that is the case, we will only really see these improvements surface in titles that have addressable CPU limits or very low end hardware, similar to how Mantle works today.
In truth, this is something that both NVIDIA and AMD have likely been doing all along but NVIDIA has renewed purpose with the pressure that AMD's Mantle has placed on them, at least from a marketing and PR point of view. It turns out that the driver that starts to implement all of these efficiency changes is the recent 337.50 release and on Friday I wrote up a short story that tested a particularly good example of the performance changes, Total War: Rome II, with a promise to follow up this week with additional hardware and games. (As it turns out, results from Rome II are...an interesting story. More on that on the next page.)
Today I will be looking at a seemingly random collection of gaming titles, running on a reconfigured test bed we had in the office, in an attempt to get some idea of the overall robustness of the 337.50 driver and its advantages over the 335.23 release that came before it. Does NVIDIA have solid ground to stand on when it comes to the capabilities of current APIs over what AMD is offering today?
Introduction, Specifications, and Contents
Corsair has added another double-width liquid cooler to their growing lineup of all-in-one solutions with the H105, joining the existing H100i and larger H110 in this category.
Image courtesy of Corsair
Initially, the H105 might leave you scratching your head. It's listed on Corsair’s site with the same $119.99 MSRP as the H100i, and both are 240mm designs featuring the same high performance fans. The similarities end there, however, as the design of the H105 is more akin to Corsair's new 120mm H75 (which we recently reviewed) than to the existing 240mm H100/H100i. With the H75 already a solid price/performance pick in Corsair’s lineup - and the various other options still available - it's reasonable to wonder exactly where the H105 fits in.
While this new cooler uses the same pair of excellent SP120L PWM fans as the earlier H100i (and H80i), it's the radiator they are connected to that should separate the H105 from prior efforts. Corsair has implemented a much thicker 240mm rad with the H105 at 35mm (vs. only 27mm on earlier products), and this added thickness should have a noticeable impact on cooling performance, and possibly on fan noise as well.
Ultra-Speed RAM, APU-Style
In our review of the Kingston HyperX Predator 2666MHz kit, we discovered what those knowledgeable about Intel memory scaling already knew: for most applications, and specifically games, there is no significant advantage to increases in memory speed past the current 1600MHz DDR3 standard. But this was only half of the story. What about memory scaling with an AMD processor, and specifically an APU? To find out, we put AMD’s top APU, the A10-7850K, to the test!
Ready for some APU memory testing!
AMD has created a compelling option with their APU lineup, and the inclusion of powerful integrated graphics allows for interesting build options with lower power and space requirements, even making tiny mini-ITX gaming systems realistic. It’s this graphical prowess compared to any other onboard solution that creates an interesting value proposition for any gamer looking at a new low-cost build. The newest Kaveri APUs are getting a lot of attention, and they raise the question: is a discrete graphics card really needed for gaming at reasonable settings?
AMD Brings Kabini to the Desktop
Perhaps we are performing a study of opposites? Yesterday Ryan posted his R9 295X2 review, which covers the 500 watt, dual-GPU monster that will be retailing for $1499. A card that is meant only for the extreme enthusiast who has plenty of room in their case, plenty of knowledge about their power supply, and plenty of electricity and air conditioning to keep this monster at bay. The product that I am reviewing could not be more different: inexpensive, cool running, power efficient, and able to fit pretty much anywhere. These products can almost be viewed as polar opposites.
The interesting thing of course is that it shows how flexible AMD’s GCN architecture is. GCN can efficiently and effectively power the highest performing product in AMD’s graphics portfolio, as well as their lowest power offerings in the APU market. The performance scales very linearly when it comes to adding in more GCN compute cores.
The products that I am of course referring to are the latest Athlon and Sempron APUs, based on the Kabini architecture, which fuses Jaguar x86 cores with GCN compute cores. These APUs were announced last month, but we did not have the chance at the time to test them. Since then these products have popped up in a couple of places around the world, but this is the first time that reviewers have officially received product from AMD and their partners.
A Powerful Architecture
AMD likes to toot its own horn. Just take a look at the not-so-subtle marketing buildup to the Radeon R9 295X2 dual-Hawaii graphics card, released today. I had photos of me shipped to…me…overnight. My hotel room at GDC was also given a package which included a pair of small Pringles cans (chips) and a bottle of volcanic water. You may have also seen some photos posted of a mysterious briefcase, its side stickered with the silhouette of a Radeon add-in board.
This tooting is not without some validity though. The Radeon R9 295X2 is easily the fastest graphics card we have ever tested and that says a lot based on the last 24 months of hardware releases. It’s big, it comes with an integrated water cooler, and it requires some pretty damn specific power supply specifications. But AMD did not compromise on the R9 295X2 and, for that, I am sure that many enthusiasts will be elated. Get your wallets ready, though, this puppy will run you $1499.
Both AMD and NVIDIA have a history of producing high quality dual-GPU graphics cards late in the product life cycle. The most recent entry from AMD was the Radeon HD 7990, a pair of Tahiti GPUs on a single PCB with a triple fan cooler. While a solid performing card, the product was released in a time when AMD CrossFire technology was well behind the curve and, as a result, real-world performance suffered considerably. By the time the drivers and ecosystem were fixed, the HD 7990 was more or less on the way out. It was also notorious for some intermittent, but severe, overheating issues, documented by Tom’s Hardware in one of the most harshly titled articles I’ve ever read. (Hey, Game of Thrones started again this week!)
The Hawaii GPU, first revealed back in September and selling today under the guise of the R9 290X and R9 290 products, is even more power hungry than Tahiti. Many in the industry doubted that AMD would ever release a dual-GPU product based on Hawaii as the power and thermal requirements would be just too high. AMD has worked around many of these issues with a custom water cooler and placing specific power supply requirements on buyers. Still, all without compromising on performance. This is the real McCoy.
Introduction and Technical Specifications
Courtesy of SilverStone
SilverStone Technology is a well-known company in the enthusiast space, offering high-quality solutions from cases to case-mounted fan controllers and displays. SilverStone has also gone through several generations of CPU all-in-one liquid cooling solutions, with the Tundra series being the culmination of these past designs. The Tundra Series TD03 liquid cooler is designed to cool CPUs of any make, including the latest processor offerings from both Intel and AMD. The cooler consists of a thick 120mm radiator attached to a uni-body aluminum CPU block with an integrated copper base plate and pump. To best measure the TD03's performance, we set it against several other high-performance liquid- and air-based coolers. The TD03 carries a retail MSRP of $99.99, putting it at the higher end of the all-in-one cooler price range.
Athlon and Pentium Live On
Over the past year or so, we have taken a look at a few budget gaming builds here at PC Perspective. One of our objectives with these build guides was to show people that PC gaming can be cost competitive with console gaming, and at a much higher quality.
However, we haven't stopped pursuing our goal of the perfect inexpensive gaming PC: one that is still capable of maxing out image quality settings in today's top games at 1080p.
Today we take a look at two new systems, featuring some parts which have been suggested to us after our previous articles.
| | AMD System | Intel System |
| --- | --- | --- |
| Processor | AMD Athlon X4 760K - $85 | Intel Pentium G3220 - $65 |
| Cores / Threads | 4 / 4 | 2 / 2 |
| Motherboard | Gigabyte F2A55M-HD2 - $60 | ASUS H81M-E - $60 |
| Graphics | MSI R9 270 Gaming - $180 | MSI R9 270 Gaming - $180 |
| System Memory | Corsair 8GB DDR3-1600 (1x8GB) - $73 | Corsair 8GB DDR3-1600 (1x8GB) - $73 |
| Hard Drive | Western Digital 1TB Caviar Green - $60 | Western Digital 1TB Caviar Green - $60 |
| Power Supply | Cooler Master GX 450W - $50 | Cooler Master GX 450W - $50 |
| Case | Cooler Master N200 MicroATX - $50 | Cooler Master N200 MicroATX - $50 |
(Editor's note: If you don't already have a copy of Windows, and don't plan on using Linux or SteamOS, you'll need an OEM copy of Windows 8.1 - currently selling for $98.)
These are low prices for a gaming computer, and feature some parts which many of you might not know a lot about. Let's take a deeper look at the two different platforms which we built upon.
First up is the AMD Athlon X4 760K. While you may not have known the Athlon brand was still being used on current parts, it occupies an interesting part of the market. On the FM2 socket, the 760K is essentially a high-end Richland APU with the graphics portion of the chip disabled.
What this means is that if you are going to pair your processor with a discrete GPU anyway, you can skip paying extra for the integrated GPU.
As for the motherboard, we went for an ultra inexpensive A55 option from Gigabyte, the GA-F2A55M-HD2. This board features the A55 chipset which launched with the Llano APUs in 2011. Because of this older chipset, the board does not feature USB 3.0 or SATA 6G capability, but since we are only concerned about gaming performance here, it makes a great bare bones option.
Taking it all the way to 12!
Microsoft has been developing DirectX for around 20 years now. Back in the 90s, the hardware and software scene for gaming was chaotic at best. We had wonderful things like “SoundBlaster compatibility” and third-party graphics APIs such as Glide, S3G, PowerSGL, RRedline, and ATICIF. OpenGL was aimed more toward professional applications, and it took John Carmack and id Software, through GLQuake in 1997, to start the ball moving in that particular direction. There was a distinct need for standards across audio and 3D graphics that would de-fragment the industry for developers. DirectX was introduced with Windows 95, but the popularity of Direct3D did not really take off until DirectX 3.0, released in late 1996.
DirectX has had some notable successes, and some notable letdowns, over the years. DX6 provided a much-needed boost in 3D graphics, while DX8 introduced the world to programmable shading. DX9 was the longest-lived version, thanks to being the basis for the Xbox 360 console and its extended lifespan. DX11 added a bunch of features and made programming much simpler, all while improving performance over DX10. The low points? DX10 was pretty dismal due to the performance penalty on hardware that supported some of its advanced rendering techniques. DirectX 7 was around for little more than a year before giving way to DX8. DX1 and DX2? Yeah, those were very unpopular and problematic, due to the myriad changes in a modern operating system (Win95) as compared to the DOS-based world that game devs were used to.
Some four years ago, going by what NVIDIA has said, talks began about pursuing the development of DirectX 12. DX11 was released in 2009 and has been an excellent foundation for PC games. It is not perfect, though. There is still a significant hit to potential performance from a variety of factors, including a fairly inefficient hardware abstraction layer that relies on fast single-threaded CPU performance rather than leveraging the power of a modern multi-core/multi-thread unit. This limits how many objects can be represented on screen, as well as any operation that would bottleneck even the fastest CPU threads.
BF4 Integrates FCAT Overlay Support
Back in September AMD publicly announced Mantle, a new lower level API meant to offer more performance for gamers and more control for developers fed up with the restrictions of DirectX. Without diving too much into the politics of the release, the fact that Battlefield 4 developer DICE was integrating Mantle into the Frostbite engine for Battlefield was a huge proof point for the technology. Even though the release was a bit later than AMD had promised us, coming at the end of January 2014, one of the biggest PC games on the market today had integrated a proprietary AMD API.
When I did my first performance preview of BF4 with Mantle on February 1st, the results were mixed, but we had other issues to deal with. First and foremost, our primary graphics testing methodology, called Frame Rating, couldn't be used due to the change of API. Instead we were forced to use an in-game frame rate counter built by DICE, which worked fine but didn't give us the fine-grained data we really wanted to put the platform to the test. It worked, but we wanted more. Today we are happy to announce we have full support for our Frame Rating and FCAT testing with BF4 running under Mantle.
A History of Frame Rating
In late 2012 and throughout 2013, testing graphics cards became a much more complicated beast. Terms like frame pacing, stutter, jitter and runts were not in the vocabulary of most enthusiasts but became an important part of the story just about one year ago. Though complicated to fully explain, the basics are pretty simple.
Rather than using software on the machine being tested to measure performance, our Frame Rating system uses a combination of local software and external capture hardware. On the local system with the hardware being evaluated, we run a small piece of software called an overlay that draws small colored bars on the left-hand side of the game screen, changing color successively with each frame rendered by the game. Using a secondary system, we capture the output from the graphics card directly, intercepting it from the display output in real time in an uncompressed form. With that video file captured, we then analyze it frame by frame, measuring the length of each of those colored bars, how long they remain on the screen, and how consistently they are displayed. This allows us to find the average frame rate but also to find how smoothly the frames are presented, whether there are dropped frames, and whether there are jitter or stutter issues.
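The analysis step can be sketched in miniature. This is an illustrative toy, not our actual analysis pipeline: the color names and the 60 Hz capture rate are assumptions, and the real system measures bar heights within each captured frame rather than whole-frame colors.

```python
# Toy sketch of the Frame Rating idea: each rendered frame carries a distinct
# overlay color, so scanning the captured output frame by frame tells us how
# long each rendered frame actually stayed on screen.

CAPTURE_HZ = 60  # capture card sampling rate (assumed)

def frame_times_ms(observed_colors):
    """Collapse the per-capture-frame color sequence into per-rendered-frame
    display times in milliseconds. A color held across many capture frames
    indicates a stutter; in the real pipeline, a bar far shorter than one
    capture interval would be a 'runt'."""
    times = []
    run = 1
    for prev, cur in zip(observed_colors, observed_colors[1:]):
        if cur == prev:
            run += 1
        else:
            times.append(round(run * 1000.0 / CAPTURE_HZ, 1))
            run = 1
    times.append(round(run * 1000.0 / CAPTURE_HZ, 1))
    return times

# Four rendered frames: two steady, one stutter (held for 3 captures), one steady.
sequence = ["red", "lime", "blue", "blue", "blue", "red"]
print(frame_times_ms(sequence))  # [16.7, 16.7, 50.0, 16.7]
```

From a list like this, the average frame rate and the frame-to-frame variance (the smoothness) both fall out directly.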
Introduction, Specifications and Packaging
ADATA has been in the storage market for a good while now. I like to think of them as the patient underdog. They don't necessarily come out with the shiny new controller or flash technology. Instead they tend to sit back and wait for a given set of hardware to mature and drop in price a bit. Once that happens, they figure out how to package the matured technology into a device of relatively low cost as compared to the competition. They have done so again today, with their new Premier Pro SP920 lineup:
As hinted at earlier, this line does not use the newest Marvell controller, but as Marvell controllers have been very capable SATA 6Gb/sec units for a long time now, that is not necessarily a bad thing. In addition, Marvell controllers have a track record of gaining significant performance margins as their firmware matures, which makes ADATA's later entrance more of a good thing.
Introduction, Packaging, and Specifications
The BitFenix Colossus has grown into a family of enclosures, from the massive E-ATX original all the way down to their diminutive mini-ITX version. But somewhere in between there lies a case offering some impressive flexibility, while still retaining a small footprint.
As the PC industry has evolved over the last decade, the days of high-performance rigs requiring large towers and full-size ATX and E-ATX motherboards are gone. Of course there is still a market (and need) for full tower systems, and the majority of enthusiast motherboards available are still full ATX. But the evolution in process technology and platforms has allowed for more and more to be done within a smaller footprint, and the micro-ATX form factor has emerged as a solid option for anything from budget systems to extreme multi-GPU gaming powerhouses. Regardless of the path you choose, all of those sweet components need a home, and finding the right computer case has long been a very personal odyssey.
BitFenix entered the PC enclosure market in 2010 with the original Colossus, and since then they have grown into a respected brand with a large and differentiated product offering. From that first massive Colossus to the popular Prodigy mini-ITX, they have created an enclosure for just about any build. And while many cases specialize in one or two particular areas, once in a while you will find an enclosure that just begs for experimentation. The micro-ATX variant of the Colossus from BitFenix is just such a case. Every aspect of this small enclosure has been given a close look by BitFenix, and there are options galore for a variety of builds.
Introduction and Technical Specifications
Courtesy of Cooler Master
Cooler Master is known in the enthusiast community for their innovative designs, with product offerings ranging from cases to desktop and laptop cooling implements. Like many other manufacturers, Cooler Master offers its own line of all-in-one liquid cooling solutions. Unique to their Glacer 240L cooler is the ability to easily add additional cooling blocks into the base loop. The Glacer 240L has a fill port integrated into the base of the radiator for draining and refilling, and uses removable clamps on all connections for easy maintenance and tube reconfiguration. To measure the performance of the Glacer 240L, we set it against several other high-performance liquid- and air-based coolers. With a $139.99 MSRP, the Glacer 240L comes at a premium price.
Courtesy of Cooler Master
Courtesy of Cooler Master
Introduction and Features
The highly anticipated 450D mid-tower case is Corsair’s latest addition to their top of the line Obsidian Series and is the first new Obsidian case to be released in 2014. The Obsidian 450D mid-tower enclosure is positioned between Corsair’s 750D full-tower and 350D Micro-ATX enclosures and shares many of the same styling and design concepts of the 350D, 750D and 900D. The 450D is being introduced with an MSRP of $119.99 USD, which makes it considerably less expensive than Corsair’s classic Obsidian 650D mid-tower enclosure ($199.99 USD). It appears the new 450D may eventually become the successor to the 650D but we hope the 650D mid-tower case doesn’t go away any time soon as the two enclosures are still different enough to appeal to different users.
(Courtesy of Corsair)
And in addition to PC enclosures, Corsair continues to offer one of the largest selections of memory products, SSDs, power supplies, coolers, gaming peripherals, and PC accessories on the market today!
Here is what Corsair has to say about the Obsidian 450D PC case:
“The 450D Performance Mid-Tower PC case matches the iconic, brushed aluminum design of the Obsidian series with an increased focus on high-airflow, ensuring your system not only looks great, but runs cool.
Behind the 450D’s aluminum mesh intake grill are dual AF140L intake fans to direct airflow straight to a PC’s hottest component, the graphics card. The rear AF120L 120mm fan keeps the airflow moving smoothly and five other optional fan locations give you serious cooling flexibility. The 450D’s fan mounts also accommodate a wide range of water-cooling radiators, with room for up to a 360mm radiator in the roof, a 280mm radiator in the front, and a 240mm radiator in the floor.
The 450D also boasts all of the features that make the Obsidian Series a favorite among enthusiasts around the world. Easily accessible dust filters on the roof, front, and bottom ensure your system will stay looking its best, while modular tool-free 3.5”/2.5” hard disk mounts offer a wide range of storage options, or can be removed entirely to prioritize airflow.”
(Courtesy of Corsair)
Obsidian Series 450D Mid-Tower PC Case Key Points:
• Mid-tower PC case with clean, elegant styling
• Tool-free 2.5”, 3.5” and 5.25” drive installation
• Two AF140L intake fans and one AF120L exhaust fan
• Excellent airflow and low noise levels
• Support for water-cooling in a broad variety of configurations
• Support for 240mm, 280mm, and/or 360mm radiators
• Two dedicated 2.5” SSD drive sleds located behind motherboard
• Included modular (removable) drive cage supports three 2.5”/3.5” drives
• Optional drive cage adds support for three more 2.5”/3.5” drives
• Removable magnetic top filter provides a cleaner look
• Competitive price point
Introduction and Technical Specifications
Courtesy of ASUS
The ASUS ROG Poseidon GTX 780 video card is the latest incarnation of the Republic of Gamers (ROG) Poseidon series. Like the previous Poseidon series products, the Poseidon GTX 780 features a hybrid cooler, capable of both air- and liquid-based cooling for the GPU and onboard components. The ASUS ROG Poseidon GTX 780 graphics card comes with an MSRP of $599, a premium price for a premium card.
Courtesy of ASUS
In designing the Poseidon GTX 780 graphics card, ASUS packed in many of the premium components you would normally find as add-ons. The card also features motherboard-quality power delivery: a 10-phase digital power regulation system using ASUS DIGI+ VRM technology, coupled with Japanese black metallic capacitors. The Poseidon GTX 780 integrates the following into its design: a DisplayPort output, an HDMI output, dual DVI ports (one DVI-D and one DVI-I), an aluminum backplate, integrated G 1/4" threaded liquid ports, dual 90mm cooling fans, 6-pin and 8-pin PCIe-style power connectors, and integrated power connector LEDs and an ROG logo LED.
DX11 could rival Mantle
The big story at GDC last week was Microsoft’s reveal of DirectX 12 and the future of the dominant API for PC gaming. There was plenty of buildup to the announcement, with Microsoft’s DirectX team posting teasers and starting up a Twitter account for the occasion. I hosted a live blog from the event, which included pictures of the slides. It was the most successful of these events we have hosted, with literally thousands of people joining in the conversation. Along with the debates over the similarities of AMD’s Mantle API and the timeline for the DX12 release, there are plenty of stories to be told.
After the initial session, I wanted to set up meetings with both AMD and NVIDIA to discuss what had been shown and get some feedback on the GPU giants’ planned implementations. NVIDIA presented us with a very interesting set of data focused not only on the future with DX12, but also on the now of DirectX 11.
The reason for the topic is easy to decipher: AMD has built up the image of Mantle as the future of PC gaming and, with a full 18 months before the release of Microsoft’s DirectX 12, how developers and gamers respond will make an important impact on the market. NVIDIA doesn’t like to talk about Mantle directly, but it obviously feels the need to address the questions in a roundabout fashion. During our time with NVIDIA’s Tony Tamasi at GDC, the discussion centered as much on OpenGL and DirectX 11 as anything else.
What are APIs and why do you care?
For those who might not really understand what DirectX and OpenGL are, a bit of background first. APIs (application programming interfaces) are responsible for providing an abstraction layer between hardware and software applications. An API can deliver consistent programming models (though the language can vary) and do so across various hardware vendors' products and even between hardware generations. APIs can expose hardware feature sets that range widely in complexity, allowing users to access the hardware without necessarily knowing it in great detail.
Over the years, APIs have developed and evolved but still retain backwards compatibility. Companies like NVIDIA and AMD can improve DirectX implementations to increase performance or efficiency without adversely (usually at least) affecting other games or applications. And because the games use that same API for programming, changes to how NVIDIA/AMD handle the API integration don’t require game developer intervention.
With the release of AMD Mantle, the idea of a “low level” API has been placed in the minds of gamers and developers. The term “low level” can mean many things, but in general it is associated with an API that is more direct, has a thinner set of abstraction layers, and requires less translation from code to hardware. The goal is to reduce the overhead (performance hit) that APIs naturally impose through these translations. With that performance freed up, the CPU cycles can be used by the program (game) or left idle to improve battery life. In certain cases, GPU throughput can also increase where the API overhead is impeding the video card's progress.
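The overhead argument can be put in rough numbers. This back-of-the-envelope sketch uses invented per-call costs purely for illustration; it is not a measurement of any real API:

```python
# Why per-call API overhead caps draw calls: the CPU thread submitting work
# has a fixed time budget per frame, and every call through the API and
# driver stack eats a slice of it. Costs below are made-up illustrative
# numbers, not benchmarks.

FRAME_BUDGET_US = 1_000_000 / 60  # ~16,667 microseconds per frame at 60 FPS

def max_draw_calls(overhead_us_per_call):
    """How many draw calls fit in one frame at the given fixed per-call cost."""
    return int(FRAME_BUDGET_US // overhead_us_per_call)

high_level = max_draw_calls(10.0)  # thicker abstraction layer (assumed cost)
low_level = max_draw_calls(2.0)    # thinner, Mantle-style layer (assumed cost)
print(high_level, low_level)  # 1666 8333
```

Halve or fifth the per-call cost and the number of distinct objects a game can draw each frame scales up accordingly, which is exactly the pitch behind lower-level APIs.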
Passing additional control to the game developers, away from the API or GPU driver developers, gives those coders additional power and improves the ability for some vendors to differentiate. Interestingly, not all developers want this kind of control as it requires more time, more development work, and small teams that depend on that abstraction to make coding easier will only see limited performance advantages.
The reason for this transition to a lower-level API is the widening performance gap between CPUs and GPUs. NVIDIA provided the images below.
On the left we see performance scaling in terms of GFLOPS, and on the right the metric is memory bandwidth. Clearly the performance of NVIDIA's graphics chips (as with AMD’s) has far outpaced what the best Intel desktop processors have been able to deliver, and that gap means the industry needs to innovate to find ways to close it.
Introduction and Features
Cooler Master continues to offer a full line of cases, power supplies, and coolers along with numerous other accessories for PC enthusiasts. Today we will be taking a detailed look at Cooler Master’s new V Series 850W power supply.
(Courtesy of Cooler Master)
It has been quite some time since we last reviewed a Cooler Master power supply, and we were happy to see one show up recently on the test lab doorstep. We were even happier to learn that the new Cooler Master V Series, which includes three units (the V700, V850, and V1000), is built entirely by Seasonic, which has a stellar reputation for building some of the best PC power supplies on the planet. All three power supplies in the V Series incorporate fully modular cabling, high efficiency (80Plus Gold certified), high-quality Japanese-made capacitors, and a silent 135mm fan with a fluid dynamic bearing, and they come backed by Cooler Master’s 5-year warranty. In addition, the V Series power supplies deliver excellent voltage regulation (particularly on the +12V output) with minimal AC ripple and noise. The Cooler Master V850 power supply is currently selling for $169.99 (newegg.com, March 2014).
Here is what Cooler Master has to say about their new V Series PSUs:
“Developed to be the highest quality, Cooler Master carefully sourced every aspect of the V Series PSU line to produce a high-efficiency and stable power supply. It’s highly efficient even at low loads, extremely stable, and works well beyond the parameters of 80Plus specifications.”
Cooler Master V Series 850W PSU Key Features:
• Fully modular cable design
• Single 850W +12V output that delivers up to 70A (840W)
• 80Plus Gold Certified: up to 93% efficiency at 50% load
• Silent 135mm Fluid Dynamic Bearing (FDB) fan for low noise and longer life
• Six PCI-E 6+2 pin connectors to support high-end GPUs
• 100% high-quality Japanese capacitors ensure performance and reliability
• Reliable 5-Year warranty
Introduction and Background
Back in 2010, Intel threw a bit of a press event for a short list of analysts and reviewers out at their IMFT flash memory plant in Lehi, Utah. The theme and message of that event was the announcement of 25nm flash entering mass production. A few years have passed, and 25nm flash is fairly ubiquitous, with 20nm rapidly gaining as IMFT scales production even higher on the smaller process. Last week, Intel threw a similar event, but instead of showing off a die shrink or even announcing a new enthusiast SSD, they chose to take a step back and brief us on the design, engineering, and validation testing behind their flash storage product lines.
At the Lehi event, I did my best to make off with a 25nm wafer.
Many topics were covered at this new event at the Intel campus at Folsom, CA, and over the coming weeks we will be filling you in on many of them as we take the necessary time to digest the fire hose of intel (pun intended) that we received. Today I'm going to lay out one of the more impressive things I saw at the briefings, and that is the process Intel goes through to ensure their products are among the most solid and reliable in the industry.
EVGA GTX 750 Ti ACX FTW
The NVIDIA GeForce GTX 750 Ti has been getting a lot of attention in hardware circles recently, and for good reason. It remains interesting from a technology standpoint, as it is the first, and still the only, Maxwell-based GPU available for desktop users. It's a completely new architecture, built with power efficiency (and Tegra) in mind. With it, the GTX 750 Ti was able to push a lot of performance into a very small power envelope while still maintaining some very high clock speeds.
NVIDIA’s flagship mainstream part is also still the leader in performance per dollar in this segment (at least for as long as it takes for AMD’s Radeon R7 265 to become widely available). We have noticed a few cases where the long-standing shortages and price hikes from coin mining have dwindled, which is great news for gamers but may also be bad news for NVIDIA’s GPUs in some areas. Even if the R7 265 becomes available, though, the GTX 750 Ti remains the best card you can buy that doesn’t require a power connector. This puts it in a unique position for power-limited upgrades.
After our initial review of the reference card, and then an interesting look at how the card can be used to upgrade an older or under powered PC, it is time to take a quick look at a set of three different retail cards that have made their way into the PC Perspective offices.
On the chopping block today we’ll look at the EVGA GeForce GTX 750 Ti ACX FTW, the Galaxy GTX 750 Ti GC and the PNY GTX 750 Ti XLR8 OC. All of them are non-reference, all of them are overclocked, but you’ll likely be surprised how they stack up.