ShadowPlay is NVIDIA's latest addition to their GeForce Experience platform. This feature allows their GPUs, starting with Kepler, to record game footage locally or (in a later update) stream it online through Twitch.tv. It requires a Kepler GPU because it is accelerated by the dedicated H.264 encoder built into that hardware. The goal is to constantly record game footage without any noticeable impact on performance; that way, the player can keep it running forever and have the opportunity to save moments after they happen.
Also, it is free.
I know that I have several gaming memories which come unannounced and leave undocumented. A solution like this is very exciting to me. Of course, a feature on paper is not the same as functional software in the real world. Thankfully, at least in my limited usage, ShadowPlay mostly lives up to its claims. I do not feel its impact on gaming performance. I am comfortable leaving it on at all times. There are issues, however, that I will get to soon.
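Conceptually, the "save moments after they happen" trick is just a rolling buffer: encoded video is continuously appended while the oldest data is discarded, so the recent past is always on hand. Here is a minimal sketch of the idea; the buffer length and chunk granularity are arbitrary illustrative choices, not NVIDIA's actual implementation:

```python
from collections import deque

class ShadowBuffer:
    """Toy model of a ShadowPlay-style rolling capture buffer.

    Encoded video chunks are appended continuously; once the buffer is
    full, the oldest chunk is silently dropped. "Saving a moment" just
    snapshots whatever the buffer currently holds.
    """
    def __init__(self, seconds_to_keep=300, chunk_seconds=1):
        self.chunks = deque(maxlen=seconds_to_keep // chunk_seconds)

    def on_encoded_chunk(self, chunk: bytes):
        # Called once per encoded chunk; deque discards the oldest
        # entry automatically when maxlen is reached.
        self.chunks.append(chunk)

    def save_clip(self) -> bytes:
        # Concatenate the retained chunks into a single clip.
        return b"".join(self.chunks)
```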
This first impression is based on my main system running the 331.65 (Beta) GeForce drivers recommended for ShadowPlay.
- Intel Core i7-3770, 3.4 GHz
- NVIDIA GeForce GTX 670
- 16 GB DDR3 RAM
- Windows 7 Professional
- 1920 x 1080 @ 120 Hz
- 3 TB USB 3.0 HDD (~50 MB/s file clone)
The two games tested are StarCraft II: Heart of the Swarm and Battlefield 3.
Introduction and Technical Specifications
Thermalright is an established brand in the CPU cooling arena, with a track record of innovative creations designed to best remove the heat from your prized CPU. The latest incarnation of their cooler line for Intel and AMD-based CPUs takes the form of the Silver Arrow SB-E Extreme, a massive nickel-plated copper cooler sporting two 140mm fans to aid in heat dispersal. We tested this cooler in conjunction with other all-in-one and air coolers to see how well the Thermalright cooler stacks up. With a retail price of $99.99, the cooler carries a premium price for the premium performance it offers.
Thermalright took their cooler design to a whole new level with the Silver Arrow SB-E Extreme. The cooler features a nickel-plated copper base and heat pipes with two massive thin-finned aluminum tower radiators to help with heat dissipation. The Silver Arrow SB-E Extreme contains eight 6mm diameter heat pipes that run through the copper base plate and terminate in the two aluminum tower radiators. The base plate itself is polished to a mirror-like finish, ensuring optimal mating between the base plate and CPU surfaces.
A bit of a surprise
Okay, let's cut to the chase here: it's late, we are rushing to get our articles out, and I think you all would rather see our testing results NOW rather than LATER. The first thing you should do is read my review of the AMD Radeon R9 290X 4GB Hawaii graphics card which goes over the new architecture, new feature set, and performance in single card configurations.
Then, you should continue reading below to find out how the new XDMA, bridge-less CrossFire implementation actually works in both single panel and 4K (tiled) configurations.
A New CrossFire For a New Generation
CrossFire has caused a lot of problems for AMD in recent months (and a lot of problems for me as well). But AMD continues to make strides in correcting the frame pacing issues associated with CrossFire configurations, and the new R9 290X raises the bar again.
Without the CrossFire bridge connector on the 290X, all of the CrossFire communication and data transfer occurs over the PCI Express bus that connects the cards to the rest of the system. AMD claims that this new XDMA interface was designed for Eyefinity and UltraHD resolutions (which were the subject of our most recent article on the subject). By accessing the memory of the GPU through PCIe, AMD claims that it can alleviate the bandwidth and sync issues that were causing problems with Eyefinity and tiled 4K displays.
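Some back-of-the-envelope math suggests why PCI Express has the headroom for this job. If we assume the slave GPU ships complete 32-bit RGBA frames to the master (real drivers may transfer less than this), the traffic looks like the following; these are rough estimates, not AMD-provided figures:

```python
# Back-of-the-envelope bandwidth check for bridgeless CrossFire.
# Assumption: fully rendered 32-bit frames cross the PCIe link.

def frame_traffic_gbs(width, height, bytes_per_pixel=4, fps=60):
    return width * height * bytes_per_pixel * fps / 1e9

print(f"1080p60: {frame_traffic_gbs(1920, 1080):.2f} GB/s")
print(f"1440p60: {frame_traffic_gbs(2560, 1440):.2f} GB/s")
print(f"4K60:    {frame_traffic_gbs(3840, 2160):.2f} GB/s")

# PCIe 3.0 x16 offers ~15.75 GB/s per direction after 128b/130b
# encoding, so even worst-case 4K60 frame traffic fits comfortably.
pcie3_x16 = 15.75
print(f"Headroom at 4K60: {pcie3_x16 - frame_traffic_gbs(3840, 2160):.2f} GB/s")
```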
Even better, this updated version of CrossFire is said to be compatible with the frame pacing updates to the Catalyst driver that improve multi-GPU experiences for end users.
When an extra R9 290X accidentally fell into my lap, I decided to take it for a spin. And if you have followed my graphics testing methodology over the past year, then you'll understand the importance of these tests.
A slightly new architecture
Last month AMD brought media, analysts, and customers out to Hawaii to talk about a new graphics chip coming out this year. As you might have guessed based on the location, the code name for this GPU was, in fact, Hawaii. It was targeted at the high end of the discrete graphics market to take on the likes of the GTX 780 and GTX TITAN from NVIDIA.
Earlier this month we reviewed the AMD Radeon R9 280X, R9 270X, and R7 260X. None of these were based on that new GPU. Instead, these cards were all rebrands and repositionings of existing hardware in the market (albeit at reduced prices). Those lower prices made the R9 280X one of our favorite GPUs of the moment, as it offers price-to-performance currently unmatched by NVIDIA.
But today is a little different: today we are talking about a much more expensive product that has to live up to some pretty lofty goals and ambitions set forth by the AMD PR and marketing machine. At $549 MSRP, the new AMD Radeon R9 290X will become the flagship of the Radeon brand. The question is: to where does that ship sail?
The AMD Hawaii Architecture
To be quite upfront about it, the Hawaii design is very similar to that of the Tahiti GPU from the Radeon HD 7970 and R9 280X cards. Based on the same GCN (Graphics Core Next) architecture AMD assured us would be its long term vision, Hawaii ups the ante in a few key areas while maintaining the same core.
Hawaii is built around Shader Engines, of which the R9 290X has four. Each of these includes 11 CUs (compute units), and each CU holds four 16-lane SIMD arrays, giving 64 stream processors per CU. Doing the quick math brings us to a total stream processor count of 2,816 on the R9 290X.
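If you want to verify that quick math yourself, it is just the product of those four numbers:

```python
# GCN arithmetic for Hawaii (R9 290X): 4 Shader Engines, 11 CUs per
# engine, four 16-lane SIMD arrays per CU.
shader_engines = 4
cus_per_engine = 11
simds_per_cu = 4
lanes_per_simd = 16

stream_processors = shader_engines * cus_per_engine * simds_per_cu * lanes_per_simd
print(stream_processors)  # 2816
```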
The Really Good Times are Over
We really do not realize how good we had it. Sure, we could apply that to budget surpluses and the time before the rise of global terrorism, but in this case I am talking about the predictable advancement of graphics due to both design expertise and improvements in process technology. Moore's law has been exceptionally kind to graphics. When we look back and plot the course of these graphics companies, we see that they have actually outstripped Moore in terms of transistor density from generation to generation. Most of this is due to better tools and the expertise gained in what is still a fairly new endeavor compared to CPUs (the first true 3D accelerators were released in the 1993/94 timeframe).
The complexity of a modern 3D chip is truly mind-boggling. To get a good idea of where we came from, we must look back at the first generations of products that we could actually purchase. The original 3Dfx Voodoo Graphics comprised a raster chip and a texture chip; each contained approximately 1 million transistors (give or take) and was made on the then-available .5 micron process (we shall call it 500 nm from here on out to give a sense of perspective with modern process technology). The chips were clocked between 47 and 50 MHz (though they could often be pushed to 57 MHz by going into the init file and adding “SET SST_GRXCLK=57”… by the way, SST stood for Sellers/Smith/Tarolli, the founders of 3Dfx). This revolutionary graphics card could push out 47 to 50 megapixels per second, carried 4 MB of VRAM, and was released in the beginning of 1996.
My first 3D graphics card was the Orchid Righteous 3D. Voodoo Graphics was really the first successful consumer 3D graphics card. Yes, there were others before it, but Voodoo Graphics had the largest impact of them all.
In 1998 3Dfx released the Voodoo 2, and it was a significant jump in complexity from the original. These chips were fabricated on a 350 nm process. There were three chips on each card: one raster chip and two texture chips. At the top end of the product stack were the 12 MB cards. The raster chip had 4 MB of VRAM available to it, while each texture chip had 4 MB of VRAM for texture storage. Not only did this product double the performance of Voodoo Graphics, it was able to run in single card configurations at 800x600 (as compared to the 640x480 maximum of Voodoo Graphics). This was around the same time that NVIDIA started to become a very aggressive competitor with the RIVA TNT and ATI was about to ship the Rage 128.
Our Legacy's Influence
We are often creatures of habit. Change is hard. And oftentimes legacy systems that have been in place for a very long time can shift and determine the angle at which we attack new problems. This happens in the world of computer technology, but also outside the walls of silicon, and the results can be dangerous inefficiencies that threaten to limit our advancement in those areas. Often our need to adapt new technologies to existing infrastructure can be blamed for stagnant development.
Take the development of the phone as an example. The pulse-based phone system and the rotary dial slowed the adoption of touch-tone phones and forced manufacturers to include switches to select between pulse and tone dialing for decades.
Perhaps a more substantial example is that of the railroad system, which based its track gauge (the width between the rails) on transportation methods that existed before the birth of Christ. In the 1800s, horse-drawn carriages pulled by two horses had an axle gap of about 4 feet 8.5 inches, and thus the first railroads in the US were built with a track gauge of 4 feet 8.5 inches. Today, the standard rail track gauge remains 4 feet 8.5 inches despite the fact that a wider gauge would allow for more stability with larger cargo loads and would permit higher speed vehicles. But updating the existing infrastructure around the world would be so cost prohibitive that we will likely stay with that outdated standard.
What does this have to do with PC hardware, and why am I giving you an abbreviated history lesson? There are clearly some examples of legacy infrastructure limiting our advancement in hardware development. Solid state drives are held back by the current SATA-based storage interface, though we are seeing movement to faster interconnects like PCI Express to alleviate this. Some compute tasks are limited by the “infrastructure” of standard x86 processor cores, and the move to GPU compute has changed the direction of those workloads dramatically.
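The SSD example is easy to put numbers on. SATA 6 Gb/s uses 8b/10b encoding, capping the link around 600 MB/s before protocol overhead, a ceiling modern controllers already bump against, while even a four-lane PCIe connection moves that ceiling dramatically. A rough comparison:

```python
# Interface ceilings from line rate and encoding, ignoring protocol
# overhead beyond the encoding itself.
sata3 = 6e9 * 8 / 10 / 8 / 1e6           # 6 Gb/s, 8b/10b -> MB/s
pcie2_lane = 5e9 * 8 / 10 / 8 / 1e6      # 5 GT/s, 8b/10b -> MB/s
pcie3_lane = 8e9 * 128 / 130 / 8 / 1e6   # 8 GT/s, 128b/130b -> MB/s

print(f"SATA 6Gb/s:  {sata3:.0f} MB/s")            # ~600 MB/s
print(f"PCIe 2.0 x4: {4 * pcie2_lane:.0f} MB/s")   # ~2000 MB/s
print(f"PCIe 3.0 x4: {4 * pcie3_lane:.0f} MB/s")   # ~3940 MB/s
```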
There is another area of technology that could be improved if we could just move past an existing way of doing things. Displays.
Introduction and Technical Specifications
XSPC Raystorm 750 EX240 Watercooling Kit
XSPC has a well-known presence in the enthusiast water cooling community, with a track record for high performance and affordable water cooling products. Recently, XSPC released a new version of their DIY cooling kits, integrating their EX series radiators. They were kind enough to provide us with a sample of their Raystorm 750 EX240 Watercooling Kit with the included EX240 radiator. We tested this kit in conjunction with other all-in-one and air coolers to see how well the XSPC kit stacks up. With a retail price of $149.99, this kit offers an affordable alternative to the all-in-one coolers.
X2O 750 Dual Bayres/Pump V4 reservoir
EX240 Dual Radiator
Raystorm CPU Waterblock
Introduction and Features
The 750D is Corsair’s latest addition to their top-of-the-line Obsidian Series and is the third new Obsidian case for 2013. The new 750D is a full-tower enclosure that offers a little more room, enhanced cooling, and expanded drive mounting options compared to Corsair’s ever-popular 650D mid-tower enclosure. The 750D is being introduced with an MSRP of $159.99 USD, which also makes it a little less expensive than the 650D. In addition to PC enclosures, Corsair continues to offer one of the largest selections of memory products, SSDs, power supplies, coolers, gaming peripherals, and PC accessories currently on the market.
The 750D full-tower case is positioned midway between Corsair’s huge 900D super-tower and 350D micro-ATX enclosures and shares many of the same styling and design features as the 900D and 350D. Corsair calls the 750D a successor to the 650D, but we hope the 650D mid-tower enclosure doesn’t go away any time soon, as the two enclosures are still different enough to appeal to different users.
Get your wallet ready
While I was preparing for the release of Intel's Core i7 Ivy Bridge-E processors last month, ORIGIN PC approached me about a system review based on the new platform. Of course, I rarely pass up the opportunity to spend some time with unreasonably fast PC hardware, so I told them to send something over that would impress me.
This system did.
The ORIGIN PC Millennium custom configuration is one of the flagship offerings from the boutique builder and it will hit your wallet nearly as hard as it will your games and applications. What kind of hardware do you get for $4200 these days?
- ORIGIN PC Millennium
- Intel Core i7-4930K (OC to 4.5 GHz)
- ASUS Rampage IV Gene mATX motherboard
- Custom Corsair H100i 240mm water cooler
- 16GB (4 x 4GB) Corsair Vengeance DDR3-1866 memory
- 2 x NVIDIA GeForce GTX 780 3GB SLI
- 2 x Samsung 840 Pro 128GB SSD (RAID 0)
- 1TB Western Digital Black HDD
- Corsair AX1200i power supply
- Corsair Obsidian 350D case
- Windows 8
Our custom build was designed to pack as much processing power into as small a case as possible, and I think you'll find that ORIGIN did a bang-up job here. By starting with the Corsair 350D micro-ATX chassis yet still including dual graphics cards and an overclocked IVB-E processor, ORIGIN has built a system whose results are going to impress.
The AMD Radeon R9 280X
Today marks the first step in a revamp of AMD's entire Radeon discrete graphics product stack. Between now and the end of 2013, AMD will completely cycle out Radeon HD 7000 cards and replace them with a new branding scheme. The "HD" branding is on its way out, and it makes sense: consumers have moved on to UHD and WQXGA display standards; HD is no longer extraordinary.
But I want to be very clear and upfront with you: today is not the day that you’ll learn about the new Hawaii GPU that AMD promised would dominate the performance per dollar metrics for enthusiasts. The Radeon R9 290X will be a little bit down the road. Instead, today’s review will look at three other Radeon products: the R9 280X, the R9 270X and the R7 260X. None of these products are really “new”, though, and instead must be considered rebrands or repositionings.
There are some changes to discuss with each of these products, including clock speeds and more importantly, pricing. Some are specific to a certain model, others are more universal (such as updated Eyefinity display support).
Let’s start with the R9 280X.
AMD Radeon R9 280X – Tahiti aging gracefully
The AMD Radeon R9 280X is built from the exact same ASIC (chip) that powers the previous Radeon HD 7970 GHz Edition, with a few modest changes. The core clock speed of the R9 280X is actually a little lower at reference rates than the Radeon HD 7970 GHz Edition, by about 50 MHz. The R9 280X GPU will hit a 1.0 GHz rate while the previous model reached 1.05 GHz; not much of a change, but an interesting decision for sure.
Because of that speed difference the R9 280X has a lower peak compute capability of 4.1 TFLOPS compared to the 4.3 TFLOPS of the 7970 GHz. The memory clock speed is the same (6.0 Gbps) and the board power is the same, with a typical peak of 250 watts.
Everything else remains the same as you know it from the HD 7970 cards. There are 2048 stream processors in the Tahiti version of AMD’s GCN (Graphics Core Next) architecture, 128 texture units, and 32 ROPs, all being fed by a 384-bit GDDR5 memory bus running at an effective 6.0 Gbps. Yep, still with a 3GB frame buffer.
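Those peak figures follow directly from the specs: each GCN stream processor can execute one fused multiply-add (two FLOPs) per clock, and memory bandwidth falls out of bus width times data rate. A quick back-of-the-envelope check:

```python
# Peak compute and bandwidth for the R9 280X from its published specs.
stream_processors = 2048
flops_per_clock = 2  # one fused multiply-add per SP per clock

print(stream_processors * flops_per_clock * 1.00e9 / 1e12)  # ~4.1 TFLOPS (R9 280X @ 1.0 GHz)
print(stream_processors * flops_per_clock * 1.05e9 / 1e12)  # ~4.3 TFLOPS (7970 GHz @ 1.05 GHz)

# 384-bit bus at an effective 6.0 Gbps per pin:
print(384 / 8 * 6.0)  # 288.0 GB/s of memory bandwidth
```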
Introduction and Design
As we’re swimming through the veritable flood of Haswell refresh notebooks, we’ve stumbled across the latest in a line of very popular gaming models: the ASUS G750JX-DB71. This notebook is the successor to the well-known G75 series, which topped out at an Intel Core i7-3630QM with NVIDIA GeForce GTX 670MX dedicated graphics. Now, ASUS has jacked up the specs a little more, including the latest 4th-gen CPUs from Intel as well as 700-series NVIDIA GPUs.
Our ASUS G750JX-DB71 test unit features the following specs:
Of course, the closest comparison to this unit is the recently-reviewed MSI GT60-2OD-026US, which featured nearly identical specifications apart from a 15.6” screen, a better GPU (a GTX 780M with 4 GB GDDR5), and a slightly different CPU (the Intel Core i7-4700MQ). In case you’re wondering what the difference is between the ASUS G750JX’s Core i7-4700HQ and the GT60’s i7-4700MQ, it’s very minor: the HQ features a slightly faster integrated graphics Turbo frequency (1.2 GHz vs. 1.15 GHz) and supports Intel Virtualization Technology for Directed I/O (VT-d). Since the G750JX doesn’t support Optimus, we won’t ever be using the integrated graphics, and unless you’re doing a lot with virtual machines, VT-d isn’t likely to offer any benefits, either. So for all intents and purposes, the CPUs are equivalent; that means the biggest overall performance difference (on the spec sheet, anyway) lies with the GPU and the storage devices (where the G750JX offers more solid-state storage than the GT60). It’s no secret that the MSI GT60 burned up our benchmarks, so the real question is: how close does the ASUS G750JX come to its pedestal, and if the differences are considerable, are they justified?
At an MSRP of around $2,000 (though it can be found for around $100 less), the ASUS G750JX-DB71 competes directly with the likes of the equivalently-priced MSI GT60. The question, of course, is whether it truly competes. Let’s find out!
A new generation of Software Rendering Engines.
We have been busy with side projects here at PC Perspective over the last year. Ryan has nearly broken his back rating the frames. Ken, along with running the video equipment and "getting an education", developed a hardware switching device for Wirecast and XSplit.
My project, "Perpetual Motion Engine", has been researching and developing a GPU-accelerated software rendering engine. Now, to be clear, this is in very early development for the moment. The point is not to draw beautiful scenes. Not yet. The point is to show what OpenGL and DirectX do, and what limits are removed when you do the math directly.
Errata: BioShock uses a modified Unreal Engine 2.5, not 3.
In the above video:
- I show the problems with graphics APIs such as DirectX and OpenGL.
- I talk about the problem those APIs attempt to solve: finding color values for your monitor.
- I discuss the advantages of boiling graphics problems down to general mathematics.
- Finally, I prove the advantages of boiling graphics problems down to general mathematics.
I would recommend watching the video first, before moving on to the rest of the editorial. A few parts need to be seen for better understanding.
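To make the "general mathematics" framing concrete before you dive in, here is a toy example of producing color values for a framebuffer with nothing but array math, no OpenGL or DirectX involved. It is purely illustrative and is not code from the Perpetual Motion Engine:

```python
import numpy as np

# A framebuffer is just an array; "rendering" is whatever math fills it.
width, height = 640, 360
y, x = np.mgrid[0:height, 0:width].astype(np.float32)

frame = np.zeros((height, width, 3), dtype=np.float32)
frame[..., 0] = x / width      # red ramps left to right
frame[..., 1] = y / height     # green ramps top to bottom
frame[..., 2] = 0.25           # constant blue

# Quantize to 8 bits per channel; any display path could scan this out.
frame8 = (frame * 255).astype(np.uint8)
print(frame8.shape, frame8.dtype)  # (360, 640, 3) uint8
```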
Introduction and Technical Specifications
The Z87 XPower board is the flagship motherboard in MSI's MPower product line. The board supports the latest generation of Intel LGA1150-based processors with all the over-engineered goodness you've come to expect from an MSI flagship board. The Z87 XPower sports the black and yellow theme of the product line, along with integrated LEDs to really make the board stand out. While its $439.99 retail price may seem a bit high, the stability, features, and raw power of the board make it a wise investment for those who want only the best.
Designed with a 32-phase digital power system and an eight-layer PCB, the MSI Z87 XPower is armed to take any amount of punishment you can throw at it. MSI incorporated a plethora of features into its massive XL-ATX form factor board:
• 10 SATA 6Gb/s ports plus an mSATA 6Gb/s port
• Killer E2205 GigE NIC and an Intel 802.11n WiFi and Bluetooth adapter
• Five PCI-Express x16 slots for up to quad-card NVIDIA SLI or AMD CrossFire support, plus two PCI-Express x1 slots
• Lucidlogix Virtu® MVP 2.0 support
• Onboard power, reset, BIOS reset, CPU ratio control, base clock control, OC Genie, power discharge, and Go2BIOS buttons
• Multi-BIOS and PCIe control switches
• 2-digit diagnostic LED display and 14 voltage check points
• Independent audio subsystem PCB design
• USB 2.0 and 3.0 port support
Introduction and Features
Be Quiet! has been a market leader in PC power supplies in Germany for seven years straight, and now they are bringing their value-minded Pure Power L8 series to North American markets. Earlier this year, we reviewed Be Quiet!’s top-of-the-line Dark Power Pro 10 850W PSU and found it to be an outstanding high-end, enthusiast-grade power supply. Now we are going to take a look at the budget-oriented Pure Power L8 700W PSU. The Pure Power L8 series features a 120mm Be Quiet! SilentWings L8 fan, is certified for 80Plus Bronze efficiency, comes with fixed cables, and is backed by a 3-year warranty.
Be Quiet! is targeting the Pure Power L8 series at multi-GPU gaming systems, silent PC builds, multimedia and home theater systems, and photo and video editing desktops.
Here is what Be Quiet! has to say about their Pure Power L8 700W PSU: “The Pure Power L8 700W provides true affordability, peerless dependability and best-in-class features – not cutting corners and settling for less. Pure Power L8 700W features rock-solid voltages, strong reliability, high efficiency and exceptional quiet – simply the best combination of features, performance and quality in the class – at a very popular price.”
Be Quiet! Pure Power L8 700W PSU Key Features:
• Exceptionally quiet operation: 120mm SilentWings fan
• 700W of continuous power output
• Two independent +12V rails for improved power stability
• 80Plus Bronze certification (up to 88% power conversion efficiency)
• Meets Energy Star 5.2 Guidelines
• Fulfills ErP 2013 Guidelines
• Ready for Intel Haswell platform
• Supports Intel’s Deep Power Down C6/C7 mode
• Sleeved cables for improved cooling and more attractive looks
• NVIDIA SLI Ready and AMD CrossFireX certified
• Four PCI-E connectors for multi-GPU support
• 3-Year warranty
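Those efficiency numbers translate directly into wall-socket draw and waste heat. As a quick illustration, using plausible per-load efficiencies in the 80Plus Bronze range (88% is the best case Be Quiet! quotes; these are not measurements of this unit):

```python
# What Bronze-class efficiency means at the wall for a 700W unit.
# Efficiency varies with load; the values below are illustrative.

def wall_draw(dc_out_watts, efficiency):
    ac_in = dc_out_watts / efficiency
    return ac_in, ac_in - dc_out_watts   # (wall draw, waste heat)

for load, eff in [(140, 0.82), (350, 0.88), (700, 0.85)]:
    ac, heat = wall_draw(load, eff)
    print(f"{load:>3} W DC out -> {ac:6.1f} W from the wall ({heat:5.1f} W as heat)")
```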
If Microsoft was left to their own devices...
Microsoft's Financial Analyst Meeting 2013 set the stage, literally, for Steve Ballmer's last annual keynote to investors. The speech promoted Microsoft, its potential, and its unique position in the industry. He proclaimed, firmly, their desire to be a devices and services company.
The explanation, however, does not befit either industry.
Ballmer noted, early in the keynote, how Bing is the only notable competitor to Google Search. He wanted to make it clear, to investors, that Microsoft needs to remain in the search business to challenge Google. The implication is that Microsoft can fill the cracks where Google does not, or even cannot, and establish a business from that foothold. I agree. Proprietary products (which are not inherently bad by the way), as Google Search is, require one or more rivals to fill the overlooked or under-served niches. A legitimate business can be established from that basis.
It is the following, similar, statement which troubles me.
Ballmer later mentioned, in the same vein, how Microsoft is among the few making fundamental operating system investments. Like search, the implication is that operating systems are proprietary products which must compete against one another. This, albeit subtly, does not match their vision as a devices and services company. The point of a proprietary platform is to own the ecosystem, from end to end, and to derive your value from that control. The product is not a device; the product is not a service; the product is a platform. This makes sense to them because, from birth, they were a company which sold platforms.
A platform as a product is not a device, nor is it a service.
Another Next Unit of Computing
Just about a year ago Intel released a new product called the Next Unit of Computing, or NUC for short. The idea was to allow Intel's board and design teams to bring the efficient performance of ultra-low-voltage processors to a desktop (and creative) form factor. By taking what is essentially Ultrabook hardware and putting it in a 4-in by 4-in design, Intel is attempting to rethink what the "desktop" computer is and how the industry develops for it.
We reviewed the first NUC last year, based on the Intel Ivy Bridge processor, and came away with a surprising amount of interest in the platform. It was (and is) a bit more expensive than many consumers are willing to spend on such a "small" physical device, but the performance and feature set are compelling.
This time around Intel has updated the 4x4 enclosure a bit and upgraded the hardware from Ivy Bridge to Haswell. That alone should result in a modest increase in CPU performance and quite a bit of increase in integrated GPU performance, courtesy of the Intel HD Graphics 5000. Other changes are on the table too; let's take a look.
The Intel D54250WYK NUC is a bare bones system that will run you about $360. You'll need to buy system memory and an mSATA SSD for storage (wireless is optional) to complete the build.
AMD is up to some interesting things. Today at AMD’s tech day, we discovered a veritable cornucopia of information. Some of it was pretty interesting (audio), some was discussed ad nauseam (audio, audio, and more audio), and one thing in particular was quite shocking. Mantle was the final, big subject that AMD was willing to discuss. Many assumed that the R9 290X would be the primary focus of this talk, but in fact it was very much an aside that was not discussed at any length. AMD basically said, “Yes, the card exists, and it has some new features that we are not going to really go over at this time.” Mantle, as a technology, is at once a logical step and an unforeseen one. So what does Mantle mean for users?
Looking back through the mists of time, when dinosaurs roamed the earth, the individual 3D chip makers all implemented low-level APIs that allowed programmers to get closer to the silicon than higher-level APIs such as Direct3D and OpenGL would allow. This was a very efficient way of doing things in terms of graphics performance; it was an inefficient way of doing things for a developer writing code for multiple APIs. Microsoft and the Khronos Group had solutions in Direct3D and OpenGL that allowed programmers to develop against a single high-level API relatively simply. The developers could write code that would run on D3D/OpenGL, and the graphics chip manufacturers would write drivers that would interface with Direct3D/OpenGL and then go through a hardware abstraction layer to communicate with the hardware. The onus was then on the graphics people to create solid, high-performance drivers that would work well with DirectX or OpenGL, so the game developer would not have to code directly for a multitude of current and older graphics cards.
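To illustrate why developers care, consider a crude model of CPU-side submission cost: every draw call that passes through a thick API and driver stack pays validation and translation overhead, and those microseconds add up in scenes with thousands of draws. The overhead numbers below are invented purely for illustration; they are not measurements of any real driver, nor of Mantle itself:

```python
# Toy model of why a thin, low-level API helps CPU-bound scenes.
# Per-call overheads are made up for illustration only.

def cpu_frame_time_ms(draw_calls, per_call_overhead_us):
    return draw_calls * per_call_overhead_us / 1000.0

for calls in (1_000, 5_000, 20_000):
    layered = cpu_frame_time_ms(calls, per_call_overhead_us=30)
    thin = cpu_frame_time_ms(calls, per_call_overhead_us=5)
    print(f"{calls:>6} draws: layered {layered:6.1f} ms vs thin {thin:5.1f} ms per frame")
```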
Introduction and Features
Corsair offers a full line of high quality power supplies, memory components, cases, cooling components, SSDs and accessories to the PC market. Corsair's new RM Series includes six models: the RM450, RM550, RM650, RM750, RM850 and RM1000. All of the power supplies in the RM Series feature all-modular cables, an energy-efficient design (80 Plus Gold certified) and quiet operation thanks to their ability to run without a cooling fan up to 40% load. The RM Series offers many of the same features as the Corsair HX Series (fanless operation, Gold-level efficiency, fully-modular cables) but is a little less expensive. And all RM Series power supplies are Corsair Link ready, which means you can monitor the PSU fan speed and +12V output right from your desktop if you have a Corsair Link system set up on your PC. Previously, the Corsair Link option was only available on Corsair’s premium AX Series Digital power supplies.
Here is what Corsair has to say about their RM550 PSU we will be looking at in this review: “The Corsair RM550 is fully modular and optimized for silence and high efficiency. It’s built with low-noise capacitors and transformers, and Zero RPM Fan Mode ensures that the fan doesn’t even spin until the power supply is under heavy load. And with a fan that’s custom-designed for low noise operation, it’s whisper-quiet even when it’s pushed hard.
80Plus Gold rated efficiency saves you money on your power bill, and the low-profile black cables are fully modular, so you can enjoy fast, neat builds. And, like all Corsair power supplies, the RM550 is built with high-quality components and is guaranteed to deliver clean, stable, continuous power. Want even more? Connect it to your Corsair Link system (available separately) and you can even monitor fan speed and +12V current directly from your desktop.”
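That Zero RPM behavior amounts to a simple load-based policy, presumably with some hysteresis so the fan doesn't chatter on and off around the threshold. Here is a hypothetical sketch of such a policy; the 40% trigger comes from Corsair's description, while the re-arm point and the fan curve are assumptions for illustration, not Corsair's firmware:

```python
# Hypothetical sketch of a Zero RPM fan policy with hysteresis.
# 40% threshold per Corsair's spec; 35% re-arm point and curve assumed.

RATED_WATTS = 550

def fan_rpm(load_watts, fan_running):
    load = load_watts / RATED_WATTS
    if not fan_running and load < 0.40:
        return 0                      # stay silent below 40% load
    if fan_running and load < 0.35:
        return 0                      # hysteresis: spin down below 35%
    # Simple linear curve once the fan is on.
    t = min(max((load - 0.35) / 0.65, 0.0), 1.0)
    return int(800 + 1200 * t)

rpm = 0
for watts in (100, 200, 230, 300, 450, 200, 180):
    rpm = fan_rpm(watts, fan_running=rpm > 0)
    print(f"{watts:>3} W load -> {rpm:4d} RPM")
```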
Corsair RM550 PSU Key Features:
• Silent, fan-less operation up to 40% load
• 80Plus Gold certified, delivering over 92% efficiency under real world loads
• Fully modular, low-profile flat cables help maximize case airflow
• Corsair Link ready!
• High-quality capacitors provide uncompromised performance and reliability
• Active PFC and Universal AC input (100-240 VAC)
• Safety: FCC, ICES, CE, C TUV US, RCM, TUV, CB, CCC, BSMI, GOST, ROHS, WEEE, KC, TUV-S
• 5-Year warranty and lifetime access to tech support and customer service
Over the past few weeks, I have been developing a device that enables external control of Wirecast and XSplit. Here's a video of the device in action:
But first, let's get into a little bit of background information:
While the TriCaster from NewTek has made great strides in decreasing the cost of video switching hardware, and can be credited with some of the rapid expansion of live streaming on the Internet, it still requires an initial investment of about $20,000 at the entry level. Even though this is down from around 5x or 10x that cost just a few years ago for professional-grade hardware, it still presents a significant startup cost.
This brings us to my day job. For the past 4 years I have worked here at PC Perspective. My job began as an intern helping to develop video content, but quickly expanded from there. Several years ago, we decided to make the jump to live content and started investing in the required infrastructure. Since we obviously didn't need to worry about the availability of PC hardware, we decided to go the software video switching route, as opposed to dedicated hardware like the TriCaster. At the time, we started experimenting with Wirecast and bought a few Blackmagic Intensity Pro HDMI capture cards for our Canon Vixia HV30 cameras. Overall, building a 6-core computer (Core i7-980X in those days) with 3 capture cards resulted in an investment of about $2500.
The advantages of the software route were not just a much cheaper initial investment (we had an operation running for about 1/10th the cost of a TriCaster); ultimately, our setup was also more expandable. If we had gone with a TriCaster we would have had a fixed number of inputs, but in this configuration we could add more inputs on the fly as long as we had available I/O on our computer.
Summary of Events
In January of 2013 I revealed a new testing methodology for graphics cards that I dubbed Frame Rating. At the time I was only able to talk about the process, which uses capture hardware to record the output directly from the DVI connections on graphics cards, but over the course of a few months I started to release data and information gathered with this technology. I followed up the story in January with a collection of videos that showed some of the captured video and the kinds of performance issues and anomalies we were able to easily find.
My first full test results were published in February to quite a bit of stir, and then finally in late March I released Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing, which forever changed the way graphics cards and gaming performance are discussed and evaluated.
Our testing proved that AMD CrossFire was not improving gaming experiences in the same way that NVIDIA SLI was. We also showed that other testing tools like FRAPS were inadequate for exposing this problem. If you are at all unfamiliar with this testing process or the results it produced, please check out the Frame Rating Dissected story above.
At the time, we tested the 5760x1080 resolution using AMD Eyefinity and NVIDIA Surround, but found there were too many issues and problems with our scripts and the results they were presenting to give reasonably assured performance metrics. Running AMD with Eyefinity was obviously causing some problems, but I wasn’t quite able to pinpoint what they were or how severe they might have been. Instead I posted graphs like this:
We were able to show NVIDIA GTX 680 performance and scaling in SLI at 5760x1080, but we were only giving results for the Radeon HD 7970 GHz Edition in a single GPU configuration.
Since those stories were released, AMD has been very active. At first they were hesitant to believe our results, calling into question our processes and the ability of gamers to really see the frame rate issues we were describing. However, after months of work and pressure from quite a few press outlets, AMD released a 13.8 beta driver that offered a Frame Pacing option in the 3D controls, enabling frames to be evenly spaced out in multi-GPU configurations for a smoother gaming experience.
The results were great! The new AMD driver produced very consistent frame times and put CrossFire on a similar playing field to NVIDIA’s SLI technology. There were limitations, though: the driver only fixed DX10/11 games and only addressed resolutions of 2560x1440 and below.
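"Evenly spaced frames" is easy to express as a statistic: take consecutive display timestamps and look at the spread of the frame-to-frame deltas. This toy calculation shows the kind of difference frame pacing makes; it is a simplified illustration, not our actual Frame Rating capture tooling:

```python
import itertools
import statistics

def frame_time_stats(timestamps_ms):
    """Frame-to-frame intervals and their spread from display timestamps."""
    deltas = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return statistics.mean(deltas), statistics.stdev(deltas)

# Roughly the same average frame rate, very different experience:
paced = list(itertools.accumulate([16.7] * 60))          # even pacing
unpaced = list(itertools.accumulate([5.0, 28.4] * 30))   # alternating stutter

for name, ts in (("paced", paced), ("unpaced", unpaced)):
    mean, stdev = frame_time_stats(ts)
    print(f"{name}: mean {mean:.2f} ms, stdev {stdev:.2f} ms")
```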
But the story doesn’t end there. CrossFire and Eyefinity are still very important in a lot of gamers’ minds, and with the constant price drops in 1920x1080 panels, more and more gamers are taking (or thinking of taking) the plunge into the world of Eyefinity and Surround. As it turns out, though, there are some more problems and complications with Eyefinity and high-resolution gaming (multi-head 4K) cropping up that deserve discussion.