The Really Good Times are Over
We really do not realize how good we had it. Sure, we could apply that to budget surpluses and the time before the rise of global terrorism, but in this case I am talking about the predictable advancement of graphics due to both design expertise and improvements in process technology. Moore's law has been exceptionally kind to graphics. Looking back and plotting the course of these graphics companies, we see that they have actually outstripped Moore in terms of transistor density from generation to generation. Most of this is due to better tools and the expertise gained in what is still a fairly new endeavor compared to CPUs (the first true 3D accelerators were released in the 1993/94 timeframe).
The complexity of a modern 3D chip is truly mind-boggling. To get a good idea of where we came from, we must look back at the first generations of products that we could actually purchase. The original 3Dfx Voodoo Graphics consisted of a raster chip and a texture chip, each containing approximately 1 million transistors (give or take) and fabricated on the then-available .5 micron process (we shall call it 500 nm from here on out to give a sense of perspective against modern process technology). The chips were clocked between 47 and 50 MHz (though they could often be pushed to 57 MHz by adding "SET SST_GRXCLK=57" to the init file... by the way, SST stood for Sellers/Smith/Tarolli, the founders of 3Dfx). Released at the beginning of 1996, this revolutionary card could push out 47 to 50 megapixels per second and carried 4 MB of VRAM.
My first 3D graphics card was the Orchid Righteous 3D. Voodoo Graphics was really the first successful consumer 3D graphics card. Yes, there were others before it, but Voodoo Graphics had the largest impact of them all.
In 1998 3Dfx released the Voodoo 2, and it was a significant jump in complexity from the original. These chips were fabricated on a 350 nm process. There were three chips on each card: one raster chip and two texture chips. At the top end of the product stack were the 12 MB cards. The raster chip had 4 MB of VRAM available to it, while each texture chip had 4 MB of VRAM for texture storage. Not only did this product double the performance of Voodoo Graphics, it was able to run in single card configurations at 800x600 (as compared to the 640x480 maximum of Voodoo Graphics). This was also the time when NVIDIA was becoming a very aggressive competitor with the Riva TNT and ATI was about to ship the Rage 128.
Our Legacy's Influence
We are often creatures of habit. Change is hard. And oftentimes, legacy systems that have been in place for a very long time can shift and determine the angle at which we attack new problems. This happens in the world of computer technology, but also outside the walls of silicon, and the results can be dangerous inefficiencies that threaten to limit our advancement in those areas. Often our need to adapt new technologies to existing infrastructure can be blamed for stagnant development.
Take the development of the phone as an example. The pulse-based phone system and the rotary dial slowed the adoption of touch-tone phones and forced manufacturers to include switches to select between pulse and tone dialing on phones for decades.
Perhaps a more substantial example is the railroad system, which based its track gauge (the width between the rails) on transportation methods that existed before the birth of Christ. Horse-drawn carriages pulled by two horses had an axle gap of about 4 feet 8 inches, and thus the first railroads in the US were built with a similar track gauge. Today, the standard rail track gauge remains 4 feet 8.5 inches, despite the fact that a wider gauge would allow more stability for larger cargo loads and permit higher speed vehicles. But updating the existing infrastructure around the world would be so cost prohibitive that we will likely remain with that outdated standard.
What does this have to do with PC hardware, and why am I giving you an abbreviated history lesson? There are clearly some examples of legacy infrastructure limiting our advancement in hardware development. Solid state drives are held back by the current SATA-based storage interface, though we are seeing a move to faster interconnects like PCI Express to alleviate this. Some compute tasks are limited by the "infrastructure" of standard x86 processor cores, and the move to GPU compute has changed the direction of these workloads dramatically.
There is another area of technology that could be improved if we could just move past an existing way of doing things. Displays.
Introduction and Technical Specifications
XSPC Raystorm 750 EX240 Watercooling Kit
Courtesy of XSPC
XSPC has a well-known presence in the enthusiast water cooling community, with a track record for high performance and affordable water cooling products. Recently, XSPC released a new version of their DIY cooling kits, integrating their EX series radiators into the kit. They were kind enough to provide us with a sample of their Raystorm 750 EX240 Watercooling Kit with an included EX240 radiator. We tested this cooler in conjunction with other all-in-one and air coolers to see how well the XSPC kit stacks up. With a retail price of $149.99, this kit offers an affordable alternative to all-in-one coolers.
X2O 750 Dual Bayres/Pump V4 reservoir
Courtesy of XSPC
EX240 Dual Radiator
Courtesy of XSPC
Raystorm CPU Waterblock
Courtesy of XSPC
Introduction and Features
The 750D is Corsair’s latest addition to their top of the line Obsidian Series and is the third new Obsidian case for 2013. The new 750D is a full-tower enclosure that offers a little more room, enhanced cooling, and expanded drive mounting options compared to Corsair’s ever-popular 650D mid-tower enclosure. The 750D is being introduced at an MSRP of $159.99 USD, which also makes it a little less expensive than the 650D. In addition to PC enclosures, Corsair continues to offer one of the largest selections of memory products, SSDs, power supplies, coolers, gaming peripherals, and PC accessories currently on the market.
The 750D full-tower case is positioned midway between Corsair’s huge 900D Super-Tower and the 350D Micro-ATX enclosure, and shares many of the same styling and design features as the 900D and 350D. Corsair says the 750D is a successor to the 650D, but we hope the 650D mid-tower enclosure doesn’t go away any time soon, as the two enclosures are still different enough to appeal to different users.
(Courtesy of Corsair)
Get your wallet ready
While I was preparing for the release of Intel's Core i7 Ivy Bridge-E processors last month, ORIGIN PC approached me about a system review based on the new platform. Of course, I rarely pass up the opportunity to spend some time with unreasonably fast PC hardware, so I told them to send over something that would impress me.
This system did.
The ORIGIN PC Millennium custom configuration is one of the flagship offerings from the boutique builder and it will hit your wallet nearly as hard as it will your games and applications. What kind of hardware do you get for $4200 these days?
- ORIGIN PC Millennium
- Intel Core i7-4930K (OC to 4.5 GHz)
- ASUS Rampage IV Gene mATX motherboard
- Custom Corsair H100i 240mm water cooler
- 16GB (4 x 4GB) Corsair Vengeance DDR3-1866 memory
- 2 x NVIDIA GeForce GTX 780 3GB SLI
- 2 x Samsung 840 Pro 128GB SSD (RAID 0)
- 1TB Western Digital Black HDD
- Corsair AX1200i power supply
- Corsair Obsidian 350D case
- Windows 8
Our custom build was designed to pack as much processing power into as small a case as possible, and I think you'll find that ORIGIN did a bang-up job here. ORIGIN started with the Corsair 350D micro ATX chassis yet still fit in dual graphics cards and an overclocked IVB-E processor, and the results are going to impress.
The AMD Radeon R9 280X
Today marks the first step in a revamp of AMD's entire Radeon discrete graphics product stack. Between now and the end of 2013, AMD will completely cycle out Radeon HD 7000 cards and replace them with a new branding scheme. The "HD" branding is on its way out, and it makes sense. Consumers have moved on to UHD and WQXGA display standards; HD is no longer extraordinary.
But I want to be very clear and upfront with you: today is not the day that you’ll learn about the new Hawaii GPU that AMD promised would dominate the performance per dollar metrics for enthusiasts. The Radeon R9 290X will come a little bit down the road. Instead, today’s review will look at three other Radeon products: the R9 280X, the R9 270X and the R7 260X. None of these products are really “new”, though; instead they must be considered rebrands or repositionings.
There are some changes to discuss with each of these products, including clock speeds and more importantly, pricing. Some are specific to a certain model, others are more universal (such as updated Eyefinity display support).
Let’s start with the R9 280X.
AMD Radeon R9 280X – Tahiti aging gracefully
The AMD Radeon R9 280X is built from the exact same ASIC (chip) that powers the previous Radeon HD 7970 GHz Edition, with a few modest changes. The reference core clock speed of the R9 280X is actually about 50 MHz lower than that of the Radeon HD 7970 GHz Edition. The R9 280X GPU will hit a 1.0 GHz rate while the previous model reached 1.05 GHz; not much of a change, but an interesting decision for sure.
Because of that speed difference the R9 280X has a lower peak compute capability of 4.1 TFLOPS compared to the 4.3 TFLOPS of the 7970 GHz. The memory clock speed is the same (6.0 Gbps) and the board power is the same, with a typical peak of 250 watts.
Everything else remains the same as you know it on the HD 7970 cards. There are 2048 stream processors in the Tahiti version of AMD’s GCN (Graphics Core Next) architecture, 128 texture units and 32 ROPs, all fed by a 384-bit GDDR5 memory bus running at 6.0 Gbps. Yep, still with a 3GB frame buffer.
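The peak compute figures quoted above fall straight out of the shader count and clock: each GCN stream processor can retire one fused multiply-add (two floating-point operations) per cycle. A quick sketch of that arithmetic:

```python
def peak_tflops(stream_processors: int, clock_ghz: float) -> float:
    """Peak single-precision throughput, assuming one FMA (2 FLOPs)
    per stream processor per cycle."""
    return stream_processors * 2 * clock_ghz / 1000.0

print(round(peak_tflops(2048, 1.00), 1))  # R9 280X            -> 4.1
print(round(peak_tflops(2048, 1.05), 1))  # HD 7970 GHz Edition -> 4.3
```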
Introduction and Design
As we’re swimming through the veritable flood of Haswell refresh notebooks, we’ve stumbled across the latest in a line of very popular gaming models: the ASUS G750JX-DB71. This notebook is the successor to the well-known G75 series, which topped out at an Intel Core i7-3630QM with NVIDIA GeForce GTX 670MX dedicated graphics. Now, ASUS has jacked up the specs a little more, including the latest 4th-gen CPUs from Intel as well as 700-series NVIDIA GPUs.
Our ASUS G750JX-DB71 test unit features the following specs:
Of course, the closest comparison to this unit is the most recently reviewed MSI GT60-2OD-026US, which featured nearly identical specifications, apart from a 15.6” screen, a better GPU (a GTX 780M with 4 GB GDDR5), and a slightly different CPU (the Intel Core i7-4700HQ). In case you’re wondering what the difference is between the ASUS G750JX’s Core i7-4700MQ and the GT60’s i7-4700HQ, it’s very minor: the HQ features a slightly faster integrated graphics Turbo frequency (1.2 GHz vs. 1.15 GHz) and supports Intel Virtualization Technology for Directed I/O (VT-d). Since the G750JX doesn’t support Optimus, we won’t ever be using the integrated graphics, and unless you’re doing a lot with virtual machines, VT-d isn’t likely to offer any benefits, either. So for all intents and purposes, the CPUs are equivalent—meaning the biggest overall performance difference (on the spec sheet, anyway) lies with the GPU and the storage devices (where the G750JX offers more solid-state storage than the GT60). It’s no secret that the MSI GT60 burned up our benchmarks, so the real question is: how close does the ASUS G750JX come to its pedestal, and if the differences are considerable, are they justified?
At an MSRP of around $2,000 (though it can be found for around $100 less), the ASUS G750JX-DB71 competes directly with the likes of the MSI GT60, too (which is priced equivalently). The question, of course, is whether it truly competes. Let’s find out!
A new generation of Software Rendering Engines.
We have been busy with side projects here at PC Perspective over the last year. Ryan has nearly broken his back rating the frames. Ken, along with running the video equipment and "getting an education", developed a hardware switching device for Wirecast and XSplit.
My project, the "Perpetual Motion Engine", has involved researching and developing a GPU-accelerated software rendering engine. Now, to be clear, this is in very early development for the moment. The point is not to draw beautiful scenes. Not yet. The point is to show what OpenGL and DirectX do, and what limits are removed when you do the math directly.
Errata: BioShock uses a modified Unreal Engine 2.5, not 3.
In the above video:
- I show the problems with graphics APIs such as DirectX and OpenGL.
- I talk about what those APIs attempt to solve: finding color values for your monitor.
- I discuss the advantages of boiling graphics problems down to general mathematics.
- Finally, I prove the advantages of boiling graphics problems down to general mathematics.
I would recommend watching the video, first, before moving forward with the rest of the editorial. A few parts need to be seen for better understanding.
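To give a flavor of the "do the math directly" point, the core job of any rasterizer (hardware or software) is transforming a 3D position into a 2D pixel coordinate. This is a minimal illustrative sketch of a perspective divide and viewport mapping; it is not code from the Perpetual Motion Engine project, and the parameter names are my own:

```python
def project(x, y, z, fov_scale=1.0, width=1280, height=720):
    """Perspective-project a camera-space point onto a width x height screen.

    fov_scale folds the field-of-view term into a single factor;
    z must be > 0 (the point must sit in front of the camera).
    """
    ndc_x = (x * fov_scale) / z              # perspective divide
    ndc_y = (y * fov_scale) / z
    screen_x = (ndc_x + 1.0) * 0.5 * width   # map [-1, 1] to pixel columns
    screen_y = (1.0 - ndc_y) * 0.5 * height  # flip y: screen origin is top-left
    return screen_x, screen_y

# A point straight down the view axis lands at the screen center:
print(project(0.0, 0.0, 5.0))  # -> (640.0, 360.0)
```

A GPU pipeline performs exactly this kind of math (plus clipping, interpolation, and shading) behind the API; a software renderer simply writes it out explicitly.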
Introduction and Technical Specifications
Courtesy of MSI
The Z87 XPower board is the flagship motherboard in MSI's MPower product line. The board supports the latest generation of Intel LGA1150-based processors with all the over-engineered goodness you've come to expect from an MSI flagship board. The Z87 XPower sports the black and yellow theme of the product line, along with integrated LEDs to really make the board stand out. While its $439.99 retail price may seem a bit high, the stability, features, and raw power of the board make it a wise investment for those who want only the best.
Courtesy of MSI
Courtesy of MSI
Designed with a 32-phase digital power system and an eight-layer PCB, the MSI Z87 XPower is armed to take any amount of punishment you can throw at it. MSI incorporated a plethora of features into its massive XL-ATX form factor board: 10 SATA 6Gb/s ports; an mSATA 6Gb/s port; a Killer E2205 GigE NIC; an Intel 802.11n WiFi and Bluetooth adapter; five PCI-Express x16 slots for up to quad-card NVIDIA SLI or AMD CrossFire support; two PCI-Express x1 slots; Lucidlogix Virtu® MVP 2.0 support; onboard power, reset, BIOS reset, CPU ratio control, base clock control, OC Genie, power discharge, and Go2BIOS buttons; multi-BIOS and PCIe control switches; a 2-digit diagnostic LED display; 14 voltage check points; an independent audio subsystem PCB design; and USB 2.0 and 3.0 port support.
Courtesy of MSI
Introduction and Features
Be Quiet! has been a market leader for PC power supplies in Germany for seven years straight, and now they are bringing their value-minded Pure Power L8 series to North American markets. Earlier this year, we reviewed Be Quiet!’s top-of-the-line Dark Power Pro 10 850W PSU and found it to be an outstanding high-end, enthusiast grade power supply. Now we are going to take a look at the budget-oriented Pure Power L8 700W PSU. The Pure Power L8 series features a 120mm Be Quiet! SilentWings L8 fan, is certified for 80Plus Bronze efficiency, comes with fixed cables, and is backed by a 3-year warranty.
Be Quiet! is targeting the Pure Power L8 series at gaming systems with multi-GPU capability, silent PC builds, multimedia and home theater systems, and photo and video editing desktops.
Here is what Be Quiet! has to say about their Pure Power L8 700W PSU: “The Pure Power L8 700W provides true affordability, peerless dependability and best-in-class features – not cutting corners and settling for less. Pure Power L8 700W features rock-solid voltages, strong reliability, high efficiency and exceptional quiet – simply the best combination of features, performance and quality in the class – at a very popular price.”
Be Quiet! Pure Power L8 700W PSU Key Features:
• Exceptionally quiet operation: 120mm SilentWings fan
• 700W of continuous power output
• Two independent +12V rails for improved power stability
• 80Plus Bronze certification (up to 88% power conversion efficiency)
• Meets Energy Star 5.2 Guidelines
• Fulfills ErP 2013 Guidelines
• Ready for Intel Haswell platform
• Supports Intel’s Deep Power Down C6/C7 mode
• Sleeved cables for improved cooling and more attractive looks
• NVIDIA SLI Ready and AMD CrossFireX certified
• Four PCI-E connectors for multi-GPU support
• 3-Year warranty
If Microsoft was left to their own devices...
Microsoft's Financial Analyst Meeting 2013 set the stage, literally, for Steve Ballmer's last annual keynote to investors. The speech promoted Microsoft, its potential, and its unique position in the industry. He proclaimed, firmly, the company's desire to be a devices and services company.
The explanation, however, does not befit either industry.
Ballmer noted, early in the keynote, how Bing is the only notable competitor to Google Search. He wanted to make it clear, to investors, that Microsoft needs to remain in the search business to challenge Google. The implication is that Microsoft can fill the cracks where Google does not, or even cannot, and establish a business from that foothold. I agree. Proprietary products (which are not inherently bad by the way), as Google Search is, require one or more rivals to fill the overlooked or under-served niches. A legitimate business can be established from that basis.
It is the following, similar, statement which troubles me.
Ballmer later mentioned, in the same vein, how Microsoft is among the few companies making fundamental operating system investments. Like search, the implication is that operating systems are proprietary products which must compete against one another. This, albeit subtly, does not match their vision as a devices and services company. The point of a proprietary platform is to own the ecosystem, from end to end, and to derive your value from that control. The product is not a device; the product is not a service; the product is a platform. This makes sense to them because, from birth, they were a company which sold platforms.
A platform as a product is not a device, nor is it a service.
Another Next Unit of Computing
Just about a year ago, Intel released a new product called the Next Unit of Computing, or NUC for short. The idea was to allow Intel's board and design teams to bring the efficient performance of ultra low voltage processors to a creative desktop form factor. By taking what is essentially Ultrabook hardware and putting it in a 4-inch by 4-inch design, Intel is attempting to rethink what the "desktop" computer is and how the industry develops for it.
We reviewed the first NUC last year, based on the Intel Ivy Bridge processor, and came away with a surprising amount of interest in the platform. It was (and is) a bit more expensive than many consumers are willing to spend on such a "small" physical device, but the performance and feature set are compelling.
This time around, Intel has updated the 4x4 enclosure a bit and upgraded the hardware from Ivy Bridge to Haswell. That alone should result in a modest increase in CPU performance and quite a bit of increase in integrated GPU performance, courtesy of the Intel HD Graphics 5000. Other changes are on the table too; let's take a look.
The Intel D54250WYK NUC is a bare bones system that will run you about $360. You'll need to buy system memory and an mSATA SSD for storage (wireless is optional) to complete the build.
AMD is up to some interesting things. Today at AMD’s tech day, we discovered a veritable cornucopia of information. Some of it was pretty interesting (audio), some was discussed ad nauseam (audio, audio, and more audio), and one thing in particular was quite shocking. Mantle was the final, big subject that AMD was willing to discuss. Many assumed that the R9 290X would be the primary focus of this talk, but in fact it was very much an aside that was not discussed at any length. AMD basically said, “Yes, the card exists, and it has some new features that we are not going to really go over at this time.” Mantle, as a technology, is at once a logical step and an unforeseen one. So what does Mantle mean for users?
Looking back through the mists of time, when dinosaurs roamed the earth, the individual 3D chip makers all implemented low level APIs that allowed programmers to get closer to the silicon than higher level APIs such as Direct3D and OpenGL would allow. This was a very efficient way of doing things in terms of graphics performance. It was an inefficient way to do things for a developer writing code for multiple APIs. Microsoft and the Khronos Group had solutions in Direct3D and OpenGL that allowed programmers to develop for a single high level API comparatively simply. The developers could write code that would run on D3D/OpenGL, and the graphics chip manufacturers would write drivers that interfaced with Direct3D/OpenGL, which then went through a hardware abstraction layer to communicate with the hardware. The onus was then on the graphics vendors to create solid, high performance drivers that would work well with DirectX or OpenGL, so the game developer would not have to code directly for a multitude of current and older graphics cards.
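The layered call path described above can be caricatured in a few lines: every draw call passes from the application through the API runtime and the vendor driver before reaching the hardware, and each hop adds validation and translation work, while a low-level API shortens the chain. This is purely a conceptual sketch with hypothetical function names, not real API code:

```python
# Conceptual model of the graphics driver stack -- not real API code.
def hardware_submit(cmd):
    return f"GPU executes {cmd}"

def vendor_driver(call):
    # The driver translates the API-level call into native hardware commands.
    cmd = f"native({call})"
    return hardware_submit(cmd)

def d3d_runtime(call):
    # The runtime validates state, then forwards to the installed driver.
    assert call.startswith("draw")   # stand-in for per-call validation cost
    return vendor_driver(call)

# High-level path: app -> runtime -> driver -> hardware abstraction -> hardware
print(d3d_runtime("draw(mesh_0)"))

# Low-level (Mantle-style) path: the app talks nearly directly to the driver
print(vendor_driver("draw(mesh_0)"))
```

Both paths reach the same hardware; the difference is how many layers of validation and translation each call pays for along the way.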
Introduction and Features
Corsair offers a full line of high quality power supplies, memory components, cases, cooling components, SSDs and accessories to the PC market. Corsair's new RM Series includes six models: the RM450, RM550, RM650, RM750, RM850 and RM1000. All of the power supplies in the RM Series feature all-modular cables, an energy-efficient design (80 Plus Gold certified) and quiet operation, thanks to their ability to run without the cooling fan at up to 40% load. The RM Series offers many of the same features as the Corsair HX Series (fanless operation, Gold level efficiency, fully-modular cables) but is a little less expensive. And all RM Series power supplies are Corsair Link ready, which means you can monitor the PSU fan speed and +12V output right from your desktop if you have a Corsair Link system set up on your PC. Previously the Corsair Link option was only available on Corsair’s premium AX Series Digital power supplies.
Here is what Corsair has to say about their RM550 PSU we will be looking at in this review: “The Corsair RM550 is fully modular and optimized for silence and high efficiency. It’s built with low-noise capacitors and transformers, and Zero RPM Fan Mode ensures that the fan doesn’t even spin until the power supply is under heavy load. And with a fan that’s custom-designed for low noise operation, it’s whisper-quiet even when it’s pushed hard.
80Plus Gold rated efficiency saves you money on your power bill, and the low-profile black cables are fully modular, so you can enjoy fast, neat builds. And, like all Corsair power supplies, the RM550 is built with high-quality components and is guaranteed to deliver clean, stable, continuous power. Want even more? Connect it to your Corsair Link system (available separately) and you can even monitor fan speed and +12V current directly from your desktop.”
Corsair RM550 PSU Key Features:
• Silent, fan-less operation up to 40% load
• 80Plus Gold certified, delivering over 92% efficiency under real world loads
• Fully modular, low-profile flat cables help maximize case airflow
• Corsair Link ready!
• High-quality capacitors provide uncompromised performance and reliability
• Active PFC and Universal AC input (100-240 VAC)
• Safety: FCC, ICES, CE, C TUV US, RCM, TUV, CB, CCC, BSMI, GOST, ROHS, WEEE, KC, TUV-S
• 5-Year warranty and lifetime access to tech support and customer service
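The efficiency figure in the list above translates directly into wall draw: efficiency is DC output divided by AC input, so a Gold unit running at 92% turns roughly 8% of the wall power into heat. A quick illustrative calculation (the 92% figure is from the feature list; the load value is a hypothetical example):

```python
def wall_draw_watts(dc_load_w, efficiency):
    """AC power pulled from the wall for a given DC load,
    where efficiency = DC output / AC input."""
    return dc_load_w / efficiency

# A hypothetical 400 W DC load on a 92%-efficient supply:
ac = wall_draw_watts(400, 0.92)
print(round(ac, 1))        # ~434.8 W drawn from the wall
print(round(ac - 400, 1))  # ~34.8 W dissipated as heat inside the PSU
```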
Over the past few weeks, I have been developing a device that enables external control of Wirecast and XSplit. Here's a video of the device in action:
But first, a little bit of background information:
While the TriCaster from NewTek has made great strides in decreasing the cost of video switching hardware, and can be credited with some of the rapid expansion of live streaming on the Internet, it still requires an initial investment of about $20,000 at the entry level. Even though this is down from around 5x or 10x that cost just a few years ago for professional-grade hardware, it still presents a significant startup cost.
This brings us to my day job. For the past 4 years, I have worked here at PC Perspective. My job began as an internship helping to develop video content, but quickly expanded from there. Several years ago, we decided to make the jump to live content and started investing in the required infrastructure. Since we obviously didn't need to worry about the availability of PC hardware, we decided to go the software video switching route, as opposed to dedicated hardware like the TriCaster. At the time, we started experimenting with Wirecast and bought a few Blackmagic Intensity Pro HDMI capture cards for our Canon Vixia HV30 cameras. Overall, building a 6-core computer (a Core i7-980X in those days) with 3 capture cards resulted in an investment of about $2500.
The software route was not only a much cheaper initial investment, with an operation running for about 1/10th the cost of a TriCaster, but ultimately our setup was more expandable. If we had gone with a TriCaster we would have had a fixed number of inputs, but in this configuration we could add more inputs on the fly as long as we had available I/O on our computer.
Summary of Events
In January of 2013 I revealed a new testing methodology for graphics cards that I dubbed Frame Rating. At the time I was only able to talk about the process, which uses capture hardware to record the output directly from the DVI connections on graphics cards, but over the course of a few months I started to release data and information gathered using this technology. I followed up the story in January with a collection of videos that displayed some of the captured video and the kinds of performance issues and anomalies we were able to easily find.
My first full test results were published in February to quite a stir, and then finally in late March we released Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing, which changed forever the way graphics cards and gaming performance are discussed and evaluated.
Our testing proved that AMD CrossFire was not improving gaming experiences in the same way that NVIDIA SLI was. Also, we showed that other testing tools like FRAPS were inadequate in showcasing this problem. If you are at all unfamiliar with this testing process or the results it showed, please check out the Frame Rating Dissected story above.
At the time, we tested the 5760x1080 resolution using AMD Eyefinity and NVIDIA Surround, but found there were too many issues with our scripts and the results they were presenting to give reasonably assured performance metrics. Running AMD Eyefinity was obviously causing some problems, but I wasn’t quite able to pinpoint what they were or how severe they were. Instead, I posted graphs like this:
We were able to show NVIDIA GTX 680 performance and scaling in SLI at 5760x1080, but we were only able to give results for the Radeon HD 7970 GHz Edition in a single GPU configuration.
Since those stories were released, AMD has been very active. At first they were hesitant to believe our results, calling into question our processes and the ability of gamers to really see the frame rate issues we were describing. However, after months of work and pressure from quite a few press outlets, AMD released a 13.8 beta driver that offered a Frame Pacing option in the 3D controls, enabling frames to be evenly spaced out in multi-GPU configurations for a smoother gaming experience.
The results were great! The new AMD driver produced very consistent frame times and put CrossFire on a similar playing field to NVIDIA’s SLI technology. There were limitations, though: the driver only fixed DX10/11 games and only addressed resolutions of 2560x1440 and below.
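A simple way to see what "evenly spacing out frames" means numerically: the smoothness problem is not the average frame rate but the variation between consecutive frame times. A toy sketch of that metric, with invented timestamps for illustration:

```python
import statistics

def frame_times_ms(timestamps_ms):
    """Deltas between consecutive frame-presentation timestamps."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

# Two hypothetical runs with the SAME average frame rate (~60 FPS):
paced   = [0, 17, 34, 51, 68]   # even ~17 ms cadence
unpaced = [0, 5, 34, 39, 68]    # alternating 5 ms / 29 ms stutter

for name, ts in (("paced", paced), ("unpaced", unpaced)):
    deltas = frame_times_ms(ts)
    # Population standard deviation of frame times: 0 means perfectly smooth.
    print(name, statistics.pstdev(deltas))
```

Both runs average the same FPS, but the unpaced run's large frame-time deviation is exactly the stutter that FRAPS-style average-FPS numbers hide and that capture-based Frame Rating exposes.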
But the story doesn’t end there. CrossFire and Eyefinity are still very important in a lot of gamers’ minds, and with the constant price drops of 1920x1080 panels, more and more gamers are taking (or thinking of taking) the plunge into the world of Eyefinity and Surround. As it turns out, though, there are some more problems and complications with Eyefinity and high-resolution gaming (multi-head 4K) cropping up that deserve discussion.
Introduction and Technical Specifications
Courtesy of ECS
ECS seems to like gold and catchy titles. Building on the reputation of their previous generation Z77-based Golden Board series, ECS designed the Z87-based Z87H3-A2X Extreme to dominate the competition. ECS liberally spread the gold, giving all capacitors and connection pins a nice 24k coating. They even include a small fan hidden under the upper VRM heat sink for active heat pipe cooling. At an MSRP of $239.99, ECS priced the Z87H3-A2X Extreme board to aggressively compete with other manufacturers' high end offerings.
Courtesy of ECS
ECS integrated a full 12 digital power phases into the board's CPU power regulation system for optimal stability and power delivery under the most demanding circumstances. The features designed into the Z87H3-A2X Extreme include: nine SATA 6Gb/s ports; mPCIe/mSATA port; dual Realtek GigE NICs; three PCI-Express x16 slots for up to tri-card support; one PCI-Express x1 slot; one PCI slot; on-board power, reset, BIOS reset, Boot to BIOS, BIOS backup, 80P, and Quick-OC buttons; 3-digit diagnostic LED display; integrated voltage measurement points; and USB 2.0 and 3.0 port support. The 80P button configures what information displays on the diagnostic display once the board has successfully initialized.
Courtesy of ECS
Courtesy of ECS
As part of their L337 Gaming series of boards, ECS redesigned the power delivery and integrated cooling solutions for a better gaming experience. The Z87H3-A2X Extreme has the latest generation cooling technology, encompassed in ECS' QoolTech V. Additionally, all VRM-related power components have been cherry-picked by ECS for operational longevity under extreme duress.
Introduction and Features
Corsair’s Carbide Series currently includes eight models in different sizes, shapes, and colors (mostly black, but the 500R is also available in white), including the 200R, 300R, 330R, 400R, 500R, and the Air 540 high-airflow cube case.
In this review we are going to take a detailed look at Corsair’s Carbide Series 330R quiet mid-tower case. The 330R incorporates superior sound absorption material for quiet operation, numerous cooling options, and support for multiple, extended length VGA cards. The 330R enclosure features a full length, hinged front door and comes with one 140mm intake fan in the front and one 120mm exhaust fan on the back with five optional fan mounting locations along with support for liquid cooling radiators.
(Courtesy of Corsair)
Here is what Corsair has to say about their Carbide Series enclosures: “Corsair Carbide Series® mid-tower PC cases have the high-end features you need, and nothing you don’t. Designed to be the foundation of awesome yet approachable gaming PCs, they combine the latest technology and ergonomic innovations with lots of room to build and expand, and amazing cooling potential!”
(Courtesy of Corsair)
Carbide 330R Quiet Mid-Tower Case Key Features:
• Supports E-ATX, ATX, Micro ATX and Mini-ITX motherboards
• Extensive noise dampening material on the front door, side panels, and top panel to quiet noisy internal components
• Hinged front door is reversible, with angled air intakes to reflect internal noise away from the user
• Direct airflow to components – the front 140mm fan is unrestricted by hard drive cages and protected by a low-restriction dust filter
• Removable top panel, with top fan mounts pre-drilled for 240mm or 280mm fans and/or liquid cooling radiators
• Excellent cooling and low noise levels with up to five separate fan mounting locations
o Front: 140mm fan included (upgradable to dual 120mm or 140mm)
o Top: Dual 120mm or 140mm
o Rear: 120mm fan included
• USB 3.0 on front panel with internal motherboard connectors
• Four 3.5” / 2.5” hard drive bays with full SSD compatibility
• Three 5.25” front exposed drive bays
• Tool-free installation of 5.25” and 3.5” drives
• Up to 450mm (17.6”) of space for long graphics cards
• Up to 160mm (6.3”) of space for CPU coolers
• Cable routing cutouts to keep cables out of the airflow path
A Whole New Atom Family
This past spring I spent some time with Intel at its offices in Santa Clara to learn about a brand new architecture called Silvermont. Built for and targeted at low power platforms like tablets and smartphones, Silvermont was not simply another refresh of the aging Atom processors that were all based on Pentium cores from years ago; instead Silvermont was built from the ground up for low power consumption and high efficiency to compete against the juggernaut that is ARM and its partners. My initial preview of the Silvermont architecture had plenty of detail about the change to an out-of-order architecture, the dual-core modules that comprise it and the power optimizations included.
Today, during the annual Intel Developer Forum held in San Francisco, we are finally able to reveal the remaining details about the new Atom processors based on Silvermont, code named Bay Trail. Not only do we have new information about the designs, but we were able to get our hands on some reference tablets integrating Bay Trail and the new Atom Z3000 series of SoCs to benchmark and compare to offerings from Qualcomm, NVIDIA and AMD.
It should be no surprise to anyone that the “Intel Atom Processor” name has had a stigma attached to it almost since its initial release during the netbook craze. Atom was known for being slow and hastily put together, though it was still a very successful product in terms of sales. With each successive release and update, from Diamondville to Pineview to Cedarview, Atom improved, but only marginally. Even Medfield and Clover Trail were built around that legacy architecture, and it showed. Tablets and systems based on Clover Trail saw only moderate success and lukewarm reviews.
With Silvermont the Atom brand gets a second chance. Some may consider it a fifth or sixth chance, but Intel is sticking with the name. Silvermont as an architecture is incredibly flexible and will find its way into several Intel products like Avoton, Bay Trail, and Merrifield, in segments ranging from micro-servers to smartphones to convertible tablets. Not only that, but Intel is aware that Windows isn’t the only game out there anymore, and the company will support the architecture across Linux, Android, and Windows environments.
Atom has been in tablets for some time now, starting in September of last year when Clover Trail designs were announced during IDF. In February we saw the initial Android-based options filter out as well, again based on Clover Trail. They were okay, but really only stop-gaps to prove that Intel was serious about the space. The real test will be this holiday season with Bay Trail at the helm.
While we always knew these Bay Trail platforms were going to be branded as Atom, we now have the full details on the numbering scheme and productization of the architecture. The Atom Z3700 series will consist of quad-core SoCs with Intel HD Graphics (the same design as in the Core processor series, though with fewer compute units) and will support both Windows and Android operating systems. The Atom Z3600 series will be dual-core processors, still with Intel HD Graphics, targeted only at the Android market.
Retiring the Workhorses
There is an inevitable shift coming. Honestly, this has been quite obvious for some time, but it has just taken AMD a bit longer to get here than many have expected. Some years back we saw AMD release their new motto, “The Future is Fusion”. While many thought it somewhat interesting and trite, it actually foreshadowed the massive shift from monolithic CPU cores to their APUs. Right now AMD’s APUs are doing “ok” in desktops and are gaining traction in mobile applications. What most people do not realize is that AMD will be going all APU all the time in the very near future.
We can look over the past few years and see that AMD has been headed in this direction for some time, but they simply have not had all the materials in place to make this dramatic shift. To get a better understanding of where AMD is heading, how they plan to address multiple markets, and what kind of pressures they are under, we have to look at the two major non-APU markets that AMD is currently hanging onto by a thread. In some ways, timing has been against AMD, not to mention available process technologies.