Introduction and Technical Specifications
Courtesy of ASUS
It's been a couple of months since we last evaluated a Z77-based motherboard, so we are taking this opportunity to put ASUS's P8Z77-V Deluxe on our test bench and run it through our comprehensive real-world and synthetic benchmarks. This $279 board has been available for several months and supports the LGA 1155 platform, which includes Sandy Bridge and Ivy Bridge processors.
Courtesy of ASUS
There are many features to drool over on the ASUS P8Z77-V Deluxe, but my favorites include the board's unique power management features, Wi-Fi functionality with remote access, and customized UEFI BIOS. This board also includes other enhancements focused on faster USB 3.0 and PCIe 3.0 integration, as well as extra SATA 6Gb/s ports that provide double the bandwidth of the previous SATA 3Gb/s standard.
Introduction, Specifications and Packaging
Last week, Samsung flew me and a few peers from the storage review community out to Seoul, Korea. The event was the 2012 Samsung SSD Global Summit:
At this event, Samsung officially announced their new 840 Pro, which we were able to obtain early under NDA and therefore publish in concert with the announcement. The 840 Pro was largely an incremental improvement over their 830 Series. Newer, faster flash coupled with a higher-clocked controller did well to improve on an already excellent product.
As the event closed, we were presented with the second model of the lineup - the 840. This model, sans the 'Pro' moniker, is meant more for general consumer usage. The first mass marketed SSD to use Triple Level Cell (TLC) flash, it sacrifices some write speed and long-term reliability in favor of what should become considerably lower cost/GB as production ramps up to full capacity. TLC flash is the next step beyond MLC, which is in turn a step after SLC. Here's a graphic to demonstrate:
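The takeaway from that graphic is simple arithmetic: each additional bit stored per cell doubles the number of voltage states the controller must distinguish, which is where TLC's write-speed and endurance trade-offs come from. Here is a quick sketch using generic NAND figures (not Samsung's own numbers):

```python
# Generic NAND math (illustrative, not Samsung-specific figures):
# each extra bit per cell doubles the voltage states a cell must hold.
def nand_states(bits_per_cell: int) -> int:
    """Number of distinct charge levels needed for a given cell type."""
    return 2 ** bits_per_cell

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    print(f"{name}: {bits} bit(s)/cell -> {nand_states(bits)} voltage states")
```

More states packed into the same charge window means finer sensing and more program steps, which is why TLC writes more slowly and wears out sooner than MLC.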
Trinity Finally Comes to the Desktop
Trinity. Where to start? I find myself asking that question, as the road to this release has been somewhat tortuous. Trinity, as a product code name, came around in early 2011. The first working silicon was shown that summer. The first actual product release was the mobile part in late spring of this year. Throughout the summer, notebook designs based on Trinity started to trickle out. Today we cover the release of the desktop versions of this product.
AMD has certainly had its ups and downs when it comes to APU releases. Their first real APU was Zacate, based on the new Bobcat CPU architecture, and it was an unmitigated success for AMD. Llano, on the other hand, had a pretty rocky start. Production and supply issues made it far less of a success than hoped, and oddly enough those issues were not cleared up until late spring of this year. By then mobile Trinity was out and people were looking toward the desktop version of the chip. AMD saw the situation, and the massive supply of Llano chips it had, and decided to delay the introduction of desktop Trinity until a later date.
To say that expectations for Trinity are high is an understatement. AMD has been on the ropes for quite a few years in terms of CPU performance. While the Phenom II series was at least competitive with the Core 2 Duo and Quad chips, it did not match up well against the latest i7/i5/i3 series of parts. Bulldozer was supposed to erase the processor advantage Intel had, but it came out of the oven as a seemingly half-baked part. Piledriver was designed to succeed Bulldozer and is supposed to shore up the architecture to make it more competitive. Piledriver is the basis of Trinity, and it sports significant improvements in clockspeed, power consumption, and IPC (instructions per clock). People are hopeful that Trinity will be able to match the performance of current Ivy Bridge processors from Intel, or at least get close.
So does it match Intel? In ways, I suppose. How much better is it than Bulldozer? That particular answer is actually a bit surprising. Is it really that much of a step above Llano? Yet another somewhat surprising answer for that particular question. Make no mistake, Trinity for desktop is a major launch for AMD, and their continued existence as a CPU manufacturer depends heavily on this part.
PhysX Settings Comparison
Borderlands 2 is a hell of a game; we actually ran a 4+ hour live event on launch day to celebrate its release and played it after our podcast that week as well. When big PC releases occur we usually like to take a look at performance of the game on a few graphics cards as well to see how NVIDIA and AMD cards stack up. Interestingly, for this title, PhysX technology was brought up again and NVIDIA was widely pushing it as a great example of implementation of the GPU-accelerated physics engine.
What you may find unique in Borderlands 2 is that the game actually allows you to enable PhysX features at Low, Medium and High settings, with either NVIDIA or AMD Radeon graphics cards installed in your system. In past titles, like Batman: Arkham City and Mafia II, PhysX could only be enabled (or at least at higher settings) if you had an NVIDIA card. Many gamers that used AMD cards saw this as a slight and we tended to agree. But since we could enable it with a Radeon card installed, we were curious to see what the results would be.
Of course, don't expect the PhysX effects to be able to utilize the Radeon GPU for acceleration...
Borderlands 2 PhysX Settings Comparison
The first thing we wanted to learn was just how much difference you would see by moving from Low (the lowest setting, there is no "off") to Medium and then to High. The effects were identical on both AMD and NVIDIA cards and we made a short video here to demonstrate the changes in settings.
Or: the countdown to a fresh Start.
Over time – and not necessarily much of it – usage of a platform can become a marriage. I trusted Windows, née MS-DOS, with guardianship over all of my precious applications, which depend upon it. Chances are you too have trusted Microsoft or a similar proprietary platform holder to provide a household for your content.
It is time for a custody hearing.
These are the reasons why I still use Windows – and who could profit as home wreckers.
1st Reason – Games
The most obvious leading topic.
Computer games have been dominated by Windows for quite some time now. When you find a PC game at retail or online you will find either a Windows trademark or the occasional half-eaten fruit somewhere on the page or packaging.
One of the leading reasons for the success of the PC platform is the culture of backwards compatibility. Though the platform has been rumored dead ad infinitum, it still exists – surrounded by a wasteland of old, deprecated consoles. I still play games from past decades on their original platform.
Ahead of the release of Windows 8 and the onslaught of Windows 8-based tablets that will hit the market next month, Intel is taking the cover off the processor that many of these new devices will be powered by: the Intel Atom Z2760, previously known by the codename Clover Trail. Intel claims that the Atom Z2760 is the beginning of a completely new Atom direction, now a complete SoC (system-on-a-chip) design that lowers power requirements, extends battery life, and allows Intel's x86 architecture to find its way into smaller and more portable devices.
At its heart, Clover Trail is based on the same Saltwell CPU core design found in the Medfield processor powering a handful of smartphones over in Europe. That means the Atom lineup remains an in-order architecture with a dual-issue command structure - nothing incredibly revolutionary there.
Unlike Medfield, though, the Atom Z2760 is a dual-core design that still enables HyperThreading, exposing four threads to the operating system. The cores run at 1.8 GHz, and the chip includes 1MB of L2 cache divided evenly between the two cores. Memory is connected through a dual-channel 32-bit bus to low-power LPDDR2 running at 800 MHz, in capacities up to 2GB.
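From those numbers we can sketch the peak memory bandwidth on offer. Treating the quoted 800 MHz as the effective transfer rate (a common way these specs are stated, so take that as an assumption on our part):

```python
# Back-of-the-envelope peak bandwidth for the Atom Z2760's memory bus,
# using the figures above: dual-channel, 32-bit, LPDDR2 at 800 MT/s.
channels = 2
bus_width_bits = 32
transfers_per_sec = 800e6  # assuming 800 MHz is the effective data rate

bytes_per_transfer = bus_width_bits // 8  # 4 bytes per channel transfer
peak_gb_s = channels * bytes_per_transfer * transfers_per_sec / 1e9
print(f"Peak theoretical bandwidth: {peak_gb_s:.1f} GB/s")
```

That works out to 6.4 GB/s, modest next to desktop parts but in line with the tablet SoCs it competes against.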
Trinity's GPU Performance
Editor's Note: Right before the release of this story, some discussion had been ongoing at other hardware sites about the methods AMD employed with this NDA and release of information. Essentially, AMD allowed us to write about only the gaming benchmarks and specifications for the Trinity APU, rather than allowing the full gamut of results including CPU tests, power consumption, etc. Why? Obviously AMD wants to see a good message released about their product; by releasing info in stages they can at least allow a brief window for that.
Does it suck that they did this? Yes. Do I feel like we should have NOT published this because of those circumstances? Not at all. Information is information and we felt that getting it to you as soon as possible was beneficial. Also, because the parts are not on sale today we are not risking adversely affecting your purchasing decision with these limited benchmarks. When the parts DO go on sale, you will have our full review with all the positives and negatives laid out before you, in the open.
This kind of stuff happens often in our world - NVIDIA sent out GTX 660 cards but not GTX 650s because of lackluster performance, for example - and we balance it and judge it on a case-by-case basis. I don't think anyone looking at this story sees a "full review" and would think to make a final decision about ANY product from it. That's not the goal. But just as we sometimes show you rumored specs and performance numbers on upcoming parts before the NDAs expire, we did this today with Trinity - it just so happens it was with AMD's blessing.
AMD has graciously allowed us the chance to give readers a small glimpse at the performance of the upcoming A series APUs based on the Trinity processor. Today we are covering the SKUs that will be released, general gaming performance, and what kind of power consumption we are seeing as compared to the previous Llano processor and any Intel processor we can lay hands upon.
Trinity is based on the updated Piledriver architecture, which is an update to Bulldozer. Piledriver improves upon IPC by a small amount over Bulldozer, but the biggest impact is that of power consumption and higher clockspeeds. It was pretty well known that Bulldozer did not hit the performance expectations of both AMD and consumers. Part of this was due to the design pulling more power at the target clockspeeds than was expected. To remedy this, AMD lowered clockspeeds. Piledriver fixes most of those power issues, as well as sprinkles some extra efficiency into the design, so that clockspeeds can scale to speeds that will make these products more competitive with current Intel offerings.
The top end model that AMD will be offering of the socket FM2 processors (for the time being) is the A10 5800K. This little number is a dual module/quad core processor running at 3.8 GHz with a turbo speed of 4.2 GHz. We see below the exact model range of products that AMD will be offering. This does not include the rumored Athlon II editions that will have a disabled GPU onboard. Each module features 2 MB of L2 cache, for a total of 4 MB on the processor. The A10 series does not feature a dedicated L3 cache as the FX processors do. This particular part is unlocked as well, so expect some decent overclocking right off the bat.
The A10 5800K features the VLIW 4 based graphics portion, which is significantly more efficient than the previous VLIW 5 based unit in Llano (A8 3870K and brethren). Even though it features the same number of stream processors as the 3870K, AMD is confident that this particular unit is upwards of 20% faster than the previous model. This GPU portion is running at a brisk 800 MHz. The GPU core is also unlocked, so expect some significant leaps in that piece of the puzzle as well.
That is about all I can give out at this time, since this is primarily based on what we see in the diagram and what we have learned from the previous Trinity release (for notebooks).
Introduction and Features
EVGA might not be the first name that comes to mind when looking for a high-end power supply but they are about to change that with the introduction of the SuperNOVA NEX1500 Classified 1500W power supply. Not only is the SuperNOVA NEX1500 Classified the highest capacity power supply we have reviewed to date, it also comes bundled with EVGA’s SuperNOVA software that allows monitoring all of the power supply’s functions in real-time from your desktop.
EVGA was founded in 1999 with headquarters in Brea, California. They currently specialize in producing NVIDIA-based graphics adapters and Intel-based motherboards, and they are now expanding their product line to include enthusiast-grade power supplies, starting with the NEX1500; 750W and 650W models will follow.
The EVGA SuperNOVA NEX1500 Classified power supply can deliver up to 1500W combined load while operating on 120VAC mains and can be “overclocked” to 1650W if operated on 240VAC mains. It used to be that ~1200W DC output was about as high as you could go on 120VAC mains, but as overall PSU efficiency has increased, higher outputs are now possible. The NEX1500 supports either single or multiple +12V rail modes (DIP switch selectable) and can deliver up to 124A on the +12V rail (133A in OC mode). WOW – you can weld ½” steel plate with 120A!! (OK, I might have trouble keeping the arc stable at 12V, but that is still some serious current.) This bad-boy even comes with a handle and is backed by a 10-Year warranty. And just to tickle your interest, here are a few stats you might be wondering about:
• 1500W Continuous power output @50°C (1650W in OC mode)
• 124A +12V rail (133A in OC mode)
• SuperNOVA control and monitoring software included (USB interface)
• (19) PCI-E connectors and (2) EPS12V connectors
• OEM is Etasis Electronics Corp. (well-known in the server industry)
• MSRP $449.99 USD (available now)
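A little arithmetic against those stats shows just how much of the total capacity rides on the +12V rail. This is nothing more than watts = volts × amps, not anything from EVGA's documentation:

```python
# Sanity check on the +12V ratings listed above: P = V * I.
def rail_watts(volts: float, amps: float) -> float:
    """DC power available from a rail at the given current limit."""
    return volts * amps

stock = rail_watts(12.0, 124)  # stock +12V current limit
oc = rail_watts(12.0, 133)     # OC-mode +12V current limit
print(f"Stock +12V capacity: {stock:.0f} W of the 1500 W rating")
print(f"OC-mode +12V capacity: {oc:.0f} W of the 1650 W rating")
```

In other words, nearly the entire rated output is deliverable on +12V alone, which is exactly what a heavily loaded multi-GPU system wants.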
Here is what EVGA has to say about the new SuperNOVA NEX1500 Classified PSU:
“The EVGA NEX1500 Classified is the ultimate enthusiast power supply. Designed to support the toughest hardware, the EVGA NEX1500 Classified supports ground-breaking new features like SuperNOVA enthusiast software control, Overclock Mode that increases the maximum power output up to 1650W, and fully modular, individually sleeved cables.
You can also count on EVGA to provide the utmost reliability and performance, with 100% Japanese capacitors and a durable ball bearing fan. The NEX1500 Classified is designed from start to finish to be the best choice for today’s most demanding high-end computers. Get to the next level with the EVGA NEX1500 Classified Power Supply!"
EVGA SuperNOVA NEX1500 Classified 1500W PSU Key Features:
• SuperNOVA, exclusive power supply control and monitoring software
• Control and adjust +12V voltage for maximum overclocking potential
• Switch between single or multiple +12V rails for ultimate control
• Overclock Mode allows PSU to deliver up to 1650W with 230VAC input
• Unbeatable 10-Year Warranty and unparalleled EVGA Customer Support
• 80PLUS Gold certified, with up to 90% efficiency under typical loads
• Highest quality Japanese brand capacitors ensure long-term reliability
• Individually sleeved cables for outstanding looks and cable management
• Fully modular to reduce clutter and improve airflow
• NVIDIA SLI Certified
• Sanyo Denki ball bearing fan for exceptional reliability and quiet operation
• Universal AC input (100-240V) with Active PFC
• Heavy-duty Protections: OVP, UVP, OCP, OPP, SCP and OTP
• Dimensions: 150mm (W) x 86mm (H) x 200mm (L)
Introduction, Specifications and Packaging
Samsung has been at this SSD thing for quite some time now. The first SSD I bought was in fact a Samsung unit meant for an ultraportable laptop. Getting it into my desktop was a hack and a half, involving a ZIF to IDE adapter, which then passed through yet another adapter to convert to SATA. The drive was wicked fast at the time, and while it handily slaughtered my RAID-0 pair of 74GB VelociRaptors in random reads, any writes caused serious stuttering of the drive, and therefore the entire OS. I was clearly using the drive outside of its intended use, but hey, I was an early adopter.
Several SSDs later came the Intel X25-M. It was a great drive, but in its earliest form it was not without fault. Luckily, these kinks were worked out industry-wide, and everyone quickly accelerated their firmware optimizations to better handle random writes. Samsung took a few generations to get this under control. The first to truly get over this hump was the 830 Series, which launched earlier this year. It utilized a triple-core ARM9 CPU that was able to effectively brute-force heavy random write workloads. It also significantly increased the speed and nimbleness of the 830 across the board, which, combined with Samsung's excellent reliability record, quickly made it my most recommended series as of late.
...and now we have the 840 Series, which launched today. Well, technically it launched yesterday if you're reading from the USA. Here in Korea the launch started at 10 AM and spanned a day of product press briefings leading to the product NDA expiration at 8 PM Korea time. This review will focus on the 512GB capacity of the 840 Pro model. We will follow on with the 840 (non-pro) at a later date:
Read on for the full review!
Before Intel released the ultrabook standard, there were already laptops that were close to what Intel would envision, and while some had gained attention on their own, most were not given any special notice. One of these was the IdeaPad U series, a part of Lenovo’s consumer line-up which had long focused on thin and light design.
I reviewed one of those laptops, the Lenovo U260, in 2010. That 12.5-inch laptop weighed in at just 3.04 pounds and is - to this very day - among the thinnest and lightest laptops we’ve reviewed at PC Perspective.
Alas, the U260 was not long for this world, but its larger siblings live on. Now we’re taking a look at the U410, Lenovo’s 14-inch ultrabook and the largest product in the U-Series. Let’s see what kind of hardware it brings to this suddenly crowded category.
Well, there are no surprises here, but you shouldn’t have expected any. Intel’s moves to make cool, thin laptops more widespread have ironically robbed them of their excitement. They’re all roughly the same in size and weight, and they can all be equipped with identical Intel processors.
This makes it hard for any particular ultrabook - even those with a bloodline that starts prior to Intel’s ultrabook push - to stand out. Let’s see if the Lenovo IdeaPad U410 can conjure some magic.
Introduction and Features
Corsair continues to bring a full line of high quality power supplies, memory components, cases, cooling components, SSDs and accessories to market for the PC enthusiast and professional alike. Corsair's updated Professional Series HX power supplies include four models; the HX650, HX750, HX850 and HX1050. All of the power supplies in the Professional Series feature modular cables, premium quality components, an energy-efficient design (now 80 Plus Gold certified) and quiet operation; and they are backed by a 7-year warranty and lifetime access to Corsair's comprehensive technical support and customer service. The most obvious differences between the new models and the old Professional Series HX PSUs are the new 80 Plus Gold efficiency certification (upgraded from 80 Plus Silver) and the ability to operate in fanless-mode.
Here is what Corsair has to say about their new Professional Series HX PSUs:
"Legendary Performance and Reliability
Corsair Professional Series HX power supplies are designed for PC builders and upgraders who need a highly efficient, quiet, and supremely-reliable power supply, with a modular cable-set that makes installation a breeze.
Quiet Operation at Low Loads
Thanks to their highly-efficient design, Corsair Professional Series power supplies generate minimal heat, and are able to operate in a silent, fully-fanless mode at up to 20% of the PSU’s maximum load (170W for the HX850). This means that Professional Series HX PSUs will be completely silent when you’re performing less intensive tasks, such as web browsing or chatting in forums. And the thermally-controlled fan spins up gradually above 20% load, so that it still operates quietly during normal use and when gaming. Basic PC power supplies have fans that spin all the time your PC is on – whether you’re pushing your graphics card to the limit or just surfing the web – making them noisier and more intrusive.
Modular Cables for Easy Installation
Professional Series power supplies have a comprehensive modular cable set that allows you to use only the cables you need for your particular set of components. The benefits of this include a cleaner, neater installation, and that ‘professionally-built’ look, plus increased airflow through the case due to reduced cable clutter. The cables are also long enough to support full-tower cases.
80 PLUS Gold: High Efficiency – Low Heat
Efficiency is the measurement of how effectively a power supply converts AC power from your wall outlet to the DC power used by your PC’s components. If your power supply isn’t efficient, it will generate more heat, which requires more cooling and more fan noise. And, it might even affect your power bill.
Professional Series HX PSUs are among the most efficient on the market. Each model has 80 Plus Gold certification, which ensures up to 90% energy-efficiency. This helps to keep your PC cool and quiet, and it may even save you money too.
Professional Series HX PSUs are built with premium components, such as 105°C capacitors, and are capable of continuous power delivery at a temperature rating of 50°C, ensuring maximum performance and reliability even in the most demanding and hot-running performance PCs.
The Corsair Advantage
Corsair Professional Series PSUs are backed by a reassuring 7-year warranty and comprehensive customer support via telephone, email, forum and the Tech Support Express helpdesk."
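Corsair's two headline numbers, the 20% fanless threshold and the 80 Plus Gold efficiency figure, are easy to put into concrete terms. A rough sketch for the HX850 (the 90% figure is Corsair's "up to" claim, so treat the results as best-case estimates):

```python
# Putting Corsair's claims into concrete numbers for the HX850.
rated_watts = 850
fanless_fraction = 0.20   # fan stays off below 20% of rated load
gold_efficiency = 0.90    # "up to 90%" per the 80 Plus Gold claim

fanless_threshold = rated_watts * fanless_fraction
print(f"Fan stays off up to ~{fanless_threshold:.0f} W DC load")

# At a hypothetical 400 W DC load, estimate wall draw and waste heat:
dc_load = 400
wall_draw = dc_load / gold_efficiency
print(f"Wall draw ~{wall_draw:.0f} W, waste heat ~{wall_draw - dc_load:.0f} W")
```

That 170W figure matches Corsair's own example, and the efficiency math shows why a Gold unit runs cooler: only a few tens of watts are lost as heat at typical gaming loads.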
GK106 Completes the Circle
The releases of the various Kepler-based graphics cards have been interesting to watch from the outside. Though NVIDIA certainly spiced things up with the release of the GeForce GTX 680 2GB card back in March, and then with the dual-GPU GTX 690 4GB graphics card, for quite some time NVIDIA was content to leave the sub-$400 markets to AMD's Radeon HD 7000 cards. And, of course, NVIDIA's own GTX 500-series.
But gamers and enthusiasts are fickle beings - knowing that the GTX 660 was always JUST around the corner, many of you were simply not willing to buy into the GTX 560s floating around Newegg and other online retailers. AMD benefited greatly from this lack of competition and only recently has NVIDIA started to bring their latest generation of cards to the price points MOST gamers are truly interested in.
Today we are going to take a look at the brand new GeForce GTX 660, a graphics card with 2GB of frame buffer and a starting MSRP of $229. Coming in $80 under the GTX 660 Ti released just last month, does the more vanilla GTX 660 have what it takes to replicate the success of the GTX 460?
The GK106 GPU and GeForce GTX 660 2GB
NVIDIA's GK104 GPU is used in the GeForce GTX 690, GTX 680, GTX 670 and even the GTX 660 Ti. We saw the much smaller GK107 GPU with the GT 640 card, a release I was not impressed with at all. With the GTX 660 Ti starting at $299 and the GT 640 at $120, there was a WIDE gap in NVIDIA's 600-series lineup that the GTX 660 addresses with an entirely new GPU, the GK106.
First, let's take a quick look at the reference card from NVIDIA for the GeForce GTX 660 2GB - it doesn't differ much from the reference cards for the GTX 660 Ti and even the GTX 670.
The GeForce GTX 660 uses the same half-length PCB that we saw for the first time with the GTX 670 and this will allow retail partners a lot of flexibility with their card designs.
Apple Produces the new A6 for the iPhone 5
Today is the day the world gets introduced to the iPhone 5. I, of course, was very curious about what Apple would bring to market the year after the death of Steve Jobs. The excitement leading up to the iPhone announcement was somewhat muted compared to years past, and a lot of that could be attributed to what has been happening in the Android market. Companies like Samsung and HTC have released new high-end phones that are not only faster and more expansive than previous versions, but also work really well and are feature-packed. While the iPhone 5 will be another success for Apple, those somewhat dispassionate about the cellphone market will likely just shrug and say to themselves, “It looks like Apple caught up for the year, but too bad they didn’t introduce anything really groundbreaking.”
If there was one area many were anxiously awaiting, it was the SoC (system-on-a-chip) Apple would use for the iPhone 5. Speculation ranged from a fresh piece of silicon based on the A5X (faster clocks, smaller graphics portion) to a quad-core monster running at high speeds but still sipping power. It seems we actually got something in between. This is not a bad thing, but as we go forward we will likely see that the silicon again only matches what other manufacturers have been using since earlier this year.
Ah, IDF – the Intel Developer Forum. Almost every year–while I sit in slightly uncomfortable chairs and stare at outdated and color washed projector screens–information is passed on about Intel's future architectures, products and technologies. Last year we learned the final details about Ivy Bridge, and this year we are getting the first details about Haswell, which is the first architecture designed by Intel from the ground up for servers, desktops, laptops, tablets and phones.
While Sandy Bridge and Ivy Bridge were really derivatives of prior designs and thought processes, the Haswell design is something completely different for the company. Yes, the microarchitecture of Haswell is still very similar to Sandy Bridge (SNB), but the differences are more philosophical rather than technological.
Intel's target is a converged core: a single design that is flexible enough to be utilized in mobility devices like tablets while also scaling to the performance levels required for workstations and servers. They retain the majority of the architecture design from Sandy Bridge and Ivy Bridge including the core design as well as the key features that make Intel's parts unique: HyperThreading, Intel Turbo Boost, and the ring interconnect.
The three pillars that Intel wanted to address with Haswell were performance, modularity, and power innovations. Each of these has its own key goals including improving performance of legacy code (existing), and having the ability to extract greater parallelism with less coding work for developers.
Lenovo is one of several companies–including Acer and HP–that have embraced the ultrabook concept with both arms. Lenovo has not simply released a few high-end models similar to what ASUS has done. Rather, it has released a fleet of products including the U300/U310, the U410 and the U300S. And now there is a new high-end product in the lineup called the ThinkPad X1 Carbon.
There are several traits that mark the ThinkPad X1 Carbon as unique. This is the first ThinkPad to bear the ultrabook title, for example. Further, the 430u (which was initially announced all the way back in January at CES 2012) is not yet available. The X1 Carbon is also one of only a few laptops to use carbon fiber in its frame. And this laptop is the only ultrabook on the market with a TrackPoint. As I’ve mentioned before, unique design traits are kind of a big deal, and a peek inside the X1 reminds us why.
There’s nothing in that spec sheet that stands out. Yes, the Core i7 low-voltage processor will prove faster than the Core i5s in most competitors, but you can usually option up to an i7 if that’s what you desire. The solid state drive is completely standard for an ultrabook above $1000, and four gigabytes of RAM can be had in virtually any laptop on store shelves today–even those selling for $500.
This means Lenovo needs to bring something special if it wants to justify a premium price. You can buy this laptop right now for about $1250, but grabbing the upgrades found in our review unit raises the price to about $1500. That’s in line with the HP Envy 14 Spectre, an editor’s choice winner, and the ASUS Zenbook Prime UX31A, which would have won an editor’s choice if ASUS had a handle on its quality control. Let’s see if the Carbon can deal with these top-tier competitors.
I say let the world go to hell
… but I should always have my tea. (Notes From Underground, 1864)
You can praise video games as art to justify their impact on your life – but do you really consider them art?
Best before the servers are taken down, because you're probably not playing it after.
Art allows the author to express their humanity and permits the user to consider that perspective. We become cultured when we experiment with and to some extent understand difficult human nature problems. Ideas are transmitted about topics which we cannot otherwise understand. We are affected positively as humans in society when these issues are raised in a safe medium.
Video games, unlike most other mediums, encourage the user to coat the creation with their own expressions. The player can influence the content through their dialogue and decision-tree choices. The player can accomplish challenges in their own unique way and talk about it over the water cooler. The player can also embed their own content as a direct form of expression. The medium will also mature as we further learn how to leverage interactivity to open a dialogue for these artistic topics in completely new ways and not necessarily in a single direction.
Consciously or otherwise – users will express themselves.
With all of the potential for art that the medium allows it is a shame that – time and time again – the industry and its users neuter its artistic capabilities in the name of greed, simplicity, or merely fear.
Introduction and Externals
Corsair manufactures a wide variety of components and peripherals for PC enthusiasts. They essentially target the most enthusiastic customers in whatever market they enter – breaking the ice with the coldest and harshest critics, who are never above nitpicking faults and flaws. Despite tossing their first-generation products to the sharks, they perform uncharacteristically well for a new contender almost every time. They look before they leap.
The Corsair K60 and K90 were launched simultaneously and represent Corsair’s first attempt at producing a mechanical keyboard. Corsair has included media keys, a metal volume wheel, and a Windows-key lock on both keyboards if you find yourself yelling, “I HATE THIS KEY!” at your desktop because your game is now minimized and cannot receive your hatred.
Rubberized when down, not when up -- but stable either way.
I never said I wasn't one of the nitpickers.
Both keyboards are built around an aluminum chassis with a nonslip coating on each key. Each keycap has sharply defined edges compared to the more rounded edges found on the Razer BlackWidow and similar keyboards. Neither keyboard has rubberized tips on their ergonomic flaps, although slipping has not been an issue in my testing.
Introduction, Virtual V-Sync Testing
In my recent review of the Origin EON11-S portable gaming laptop, I noted that the performance of the laptop was far behind that of a larger 15.6” or 17.3” model. The laptop won a gold award despite this, as all laptops of this size are bound by physics, but it was an issue worth noting.
Origin surprised me by responding that they had something in the works that might buff up performance. This confused me. Were they going to cast a spell on it? Would they beam in a beefier GPU? What could they possibly do that would increase performance without changing the hardware?
Now I have the answer. It’s called Lucid VirtuMVP and it uses your existing integrated GPU to improve performance. As with Lucid’s other products, VirtuMVP makes it possible for two different GPUs – in this case, your integrated GPU and your discrete GPU – to work together. It’s not magic – just ingenuity. Let’s take a closer look.
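Conceptually, one of VirtuMVP's headline features (Virtual Vsync) lets the discrete GPU keep rendering as fast as it can while only the newest completed frame is handed to the display at each refresh, avoiding tearing without capping the render rate. Here is a toy simulation of that frame-selection idea – entirely illustrative, not Lucid's actual implementation:

```python
def frames_displayed(render_fps, refresh_hz, duration_s=1.0):
    """Simulate flipping the newest completed frame at each vblank."""
    # Times at which the discrete GPU finishes each frame.
    frame_times = [i / render_fps for i in range(int(render_fps * duration_s))]
    shown = []
    for v in range(int(refresh_hz * duration_s)):
        vblank = v / refresh_hz
        ready = [t for t in frame_times if t <= vblank]
        if ready:
            # Display the freshest frame available; older ones are discarded.
            shown.append(frame_times.index(ready[-1]))
    return shown

# dGPU renders at 150 fps, panel refreshes at 60 Hz: the panel still shows
# 60 frames per second, but each is the most recent one, keeping latency low.
print(len(frames_displayed(150, 60)))
```

The point of the sketch is that the renderer never stalls waiting for vblank (as classic V-sync forces it to), yet the display only ever sees whole frames.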
Introduction and Features
Manufacturers are continually looking for ways to differentiate their products from the rest of the field and highlight new or improved features to make you want to buy their products. Leave it to Corsair to actually come up with a truly new and unique power supply for the PC enthusiast market! The Corsair AX1200i is a smart PSU that incorporates a Digital Signal Processor (DSP) to deliver extremely clean, efficient power with the ability to make real-time adjustments to various internal parameters. The included Corsair Link software can be used to monitor and adjust performance, noise (fan speed), and Over Current Protection (OCP) settings.
It’s been almost two years since we reviewed Corsair’s flagship power supply, the Professional Series Gold AX1200. While the new AX1200i retains many of the original's features, it now adds 80 Plus Platinum efficiency certification and a built-in DACS (Data Acquisition and Control System) courtesy of Corsair Link technology.
(Courtesy of Corsair)
Here is what Corsair has to say about their new AX1200i Digital ATX power supply unit: "The revolutionary AX1200i is the first desktop PC power supply to use digital (DSP) control and Corsair Link to bring you an unprecedented level of monitoring and performance customization. The DSP in the AX1200i makes on-the-fly adjustments for incredibly tight voltage regulation, 80 PLUS Platinum efficiency, and clean, stable power.
Real-time monitoring and control with Corsair Link. Put the AX1200i under your control by connecting it directly to a USB header on your motherboard with the included cable, or to a Corsair Commander (available separately). Then, download the free Corsair Link Dashboard software for unrivaled power supply monitoring and control options.
Monitor power input and output, efficiency, fan speed, and internal temperature, directly from the Windows based application. Or, take it to the next level and set up and modify fan speed profiles, or even select from virtual “single rail” or “multi-rail” software modes, with selectable OCP points."
Corsair AX1200i Digital ATX PSU Key Features:
• Digital Signal Processor (DSP) for extremely clean and efficient power
• Corsair Link Integration for monitoring and adjusting performance
• 1,200 watts continuous power output (50°C)
• Dedicated single +12V rail with user-configurable virtual rails
• 80Plus Platinum certified, delivering up to 92% efficiency
• ZVS / ZCS technology for high efficiency
• Independent DC-to-DC converters
• Ultra quiet 140mm double ball bearing fan
• Silent, Fanless mode up to ~30% load
• Self-test switch to verify power supply functionality
• Premium quality components
• Fully modular cable system
• Conforms to ATX12V v2.31 and EPS 2.92 standards
• Universal AC input (90-264V) with Active PFC
• Over-current, over-voltage, under-voltage and short circuit protection
• Dimensions: 150mm (W) x 86mm (H) x 200mm (L)
• 7-Year warranty and legendary Corsair customer service
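To put the Platinum rating in perspective, a quick back-of-the-envelope calculation shows what 92% efficiency means in terms of wall draw and waste heat at full load (the real efficiency curve varies with load level and line voltage, so treat these as ballpark figures):

```python
def input_power(output_w, efficiency):
    """Wall draw required to deliver output_w at a given efficiency."""
    return output_w / efficiency

def waste_heat(output_w, efficiency):
    """Power lost as heat inside the PSU."""
    return input_power(output_w, efficiency) - output_w

# At the full 1200 W rating and the claimed 92% peak efficiency:
print(round(input_power(1200, 0.92)))  # ~1304 W drawn from the wall
print(round(waste_heat(1200, 0.92)))   # ~104 W dissipated as heat
```

Every point of efficiency matters at this wattage: an older 88%-efficient unit would shed roughly 60 W more heat at the same load, which is exactly why the AX1200i can afford a fanless mode at low loads.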
Multiple Contenders - EVGA SC
One of the most anticipated graphics card releases of the year occurred this month in the form of the GeForce GTX 660 Ti from NVIDIA, and as you would expect we were there on day one with an in-depth review of the card at reference speeds.
The GeForce GTX 660 Ti is based on GK104, and interestingly its specifications are nearly identical to those of the GTX 670. Both utilize 7 SMX units for a total of 1344 stream processors – or CUDA cores – and both run at reference clock speeds of 915 MHz base and 980 MHz Boost. Both include 112 texture units, though the GTX 660 Ti does see a drop in ROP count from 32 to 24. L2 cache also drops from 512KB to 384KB, along with a memory bus width reduction from 256-bit to 192-bit.
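That narrower bus is the cut that matters most, because it directly reduces peak memory bandwidth. Assuming the 6008 MT/s effective GDDR5 data rate both reference cards share, the difference works out as follows:

```python
def bandwidth_gbs(bus_width_bits, data_rate_mts):
    """Peak memory bandwidth in GB/s: (bus width in bytes) * transfers/sec."""
    return bus_width_bits / 8 * data_rate_mts / 1000

gtx670   = bandwidth_gbs(256, 6008)  # ~192.3 GB/s on the 256-bit bus
gtx660ti = bandwidth_gbs(192, 6008)  # ~144.2 GB/s on the 192-bit bus
print(round(gtx670, 1), round(gtx660ti, 1))
```

That is a 25% cut in memory bandwidth for a card with the same shader throughput, which is why the GTX 660 Ti tends to fall behind the GTX 670 most at high resolutions and with heavy anti-aliasing.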
We already spent quite a lot of time talking about the GTX 660 Ti compared to the other NVIDIA and AMD GPUs in the market in our review (linked above) as well as on our most recent episode of the PC Perspective Podcast. Today's story is all about the retail cards we received from various vendors including EVGA, Galaxy, MSI and Zotac. We are going to show you each card's design, the higher clocked settings that were implemented, performance differences between them and finally the overclocking comparisons of all four.