Introduction and Design
Contortionist PCs are a big deal these days as convertible models take the stage to help bridge the gap between notebook and tablet. But not everyone wants to drop a grand on a convertible, and not everyone wants a 12-inch notebook, either. Meanwhile, these same people may not wish to blow their cash on an underpowered (and far less capable) Chromebook or tablet. It’s for these folks that Lenovo has introduced the IdeaPad Flex 14 Ultrabook, which occupies a valuable middle ground between the extremes.
The Flex 14 looks an awful lot like a Yoga at first glance, with the same sort of acrobatic design and thoroughly IdeaPad styling (Lenovo calls it a “dual-mode notebook”). The specs are also similar to those of the x86 Yoga, though with its larger size (and later launch), the Flex also manages to assemble a slightly more powerful configuration:
The biggest internal differences here are the i5-4200U CPU, a 1.6 GHz Haswell model with a 15 W TDP and the ability to Turbo Boost (versus the Yoga 11S’ i5-3339Y, an Ivy Bridge chip with a marginally lower 13 W TDP and no Turbo Boost), the integrated graphics improvements that come with the newer CPU, and a few more ports made possible by the larger chassis. Well, that and the regression to a TN panel from the Yoga 11S’ much-appreciated IPS display, which is a bummer. Your wallet, meanwhile, will appreciate a $250 drop in price: our model, as configured here, retails for just $749 (versus the $999 Yoga 11S we reviewed a few months back).
You can actually score a Flex 14 for as low as $429 (as of this writing), by the way, but if you’re after any sort of respectable configuration, that price quickly climbs above the $500 mark. Ours is the least expensive option currently available with both a solid-state drive and an i5 CPU.
Streaming games straight from NVIDIA
Over the weekend NVIDIA released a December update for the SHIELD Android mobile gaming device that included a very interesting, and somewhat understated, new feature: Beta support for NVIDIA GRID.
You have likely heard of GRID before; NVIDIA has been pushing it as part of the company's vision of bringing GPU computing to every facet of every market. GRID was aimed at creating GPU-based server farms to enable mobile, streaming gaming for users across the country and around the world. While initially NVIDIA only talked about working with partners to launch streaming services based on GRID, they have obviously changed their tune slightly with this limited release.
If you own a SHIELD, and install the most recent platform update, you'll find a new icon in your NVIDIA SHIELD menu called GRID Beta. The first time you start this new application, it will attempt to measure your bandwidth and latency to offer up an opinion on how good your experience should be. NVIDIA is asking for at least 10 Mbps of sustained bandwidth, and wants round trip latency under 60 ms from your location to their servers.
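NVIDIA's stated minimums are simple enough that the app's verdict can be sketched as a pair of threshold checks. This is a hypothetical illustration using the figures quoted above; the function and its quality tiers are ours, not NVIDIA's actual code:

```python
# Hypothetical sketch of the kind of pass/fail check the GRID beta app
# performs. Thresholds come from NVIDIA's stated requirements: at least
# 10 Mbps sustained bandwidth and under 60 ms round-trip latency.

def grid_connection_quality(bandwidth_mbps: float, rtt_ms: float) -> str:
    """Classify a connection against NVIDIA's stated GRID beta minimums."""
    meets_bandwidth = bandwidth_mbps >= 10.0
    meets_latency = rtt_ms < 60.0
    if meets_bandwidth and meets_latency:
        return "good"
    if meets_bandwidth or meets_latency:
        return "marginal"
    return "poor"

print(grid_connection_quality(25.0, 45.0))  # both thresholds met: "good"
print(grid_connection_quality(8.0, 90.0))   # neither met: "poor"
```

A connection that meets only one of the two requirements (say, plenty of bandwidth but a long round trip from Kentucky to Northern California) lands in the middle tier, which matches the "playable but occasionally troubled" experience described below.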
Currently, servers are ONLY located in Northern California, so the further out you are, the more likely you are to run into problems. However, doing some testing in Kentucky and Ohio resulted in very playable gaming scenarios, though we did run into some connection problems that might be load-based or latency-based.
After the network setup portion, users are shown 8 different games they can try: Darksiders, Darksiders II, Street Fighter X Tekken, Street Fighter IV, Alan Wake, The Witcher 2, Red Faction: Armageddon, and Trine 2. You are free to play them free of charge during this beta, though you can be sure they will be removed and erased at some point; consider that fair warning. Saves work well, and we were able to save and resume games of Darksiders II on GRID easily and quickly.
Starting up the game was fast, about on par with starting up a game on a local PC, though obviously the server is loading it in the background. Once the game is up and running, you are met with some button mapping information provided by NVIDIA for that particular game (great addition) and then you jump into the menus as if you were running it locally.
NVIDIA Tegra Note Program
Clearly, NVIDIA’s Tegra line has not been as successful as the company had hoped and expected. The move for the discrete GPU giant into the highly competitive world of the tablet and phone SoCs has been slower than expected, and littered with roadblocks that were either unexpected or that NVIDIA thought would be much easier to overcome.
The truth is that this was always a long play for the company; success was never going to come overnight, and anyone who thought otherwise was deluded. Part of it has to do with the development cycle of the ARM ecosystem. NVIDIA is used to a rather quick development, production, marketing, and sales pattern thanks to its time in high performance GPUs, but the SoC world is quite different. By the time a device based on a Tegra chip is found in the retail channel, it has gone through an OEM development cycle, an NVIDIA SoC development cycle, and even an ARM Cortex CPU development cycle. The result is an extended time frame from initial product announcement to retail availability.
Partly due to this, and partly due to limited design wins in the mobile markets, NVIDIA has started to develop internally designed end-user devices that utilize its Tegra SoC processors. This has the benefit of being much faster to market – while most SoC vendors develop reference platforms during the normal course of business, NVIDIA is essentially going to perfect and productize them.
Introduction and Design
With few exceptions, it’s generally been taken for granted that gaming notebooks are going to be hefty devices. Portability is rarely the focus, with weight and battery life alike usually sacrificed in the interest of sheer power. But the MSI GE40 2OC—the lightest 14-inch gaming notebook currently available—seeks a compromise while retaining gaming prowess. Trending instead toward the form factor of a large Ultrabook, the GE40 is both stylish and manageable (and perhaps affordable at around $1,300)—but can its muscle withstand the reduction in casing real estate?
While it can’t hang with the best of the 15-inch and 17-inch crowd, in context with its 14-inch peers, the GE40’s spec sheet hardly reads like it’s been the subject of any sort of game-changing handicap:
One of the most popular CPUs for Haswell gaming notebooks has been the 2.4 GHz (3.4 GHz Turbo) i7-4700MQ. But the i7-4702MQ in the GE40 2OC is nearly as powerful (managing 2.2 GHz and 3.2 GHz respectively in those same areas), and it features a TDP that’s 10 W lower at just 37 W. That’s ideal for notebooks such as the GE40, which seek to provide a thinner case in conjunction with uncompromising performance. Meanwhile, the NVIDIA GTX 760M is no slouch, even if it isn’t on the same level as the 770s and 780s that we’ve been seeing in some 15.6-inch and 17.3-inch gaming beasts.
Elsewhere, it’s business as usual, with 8 GB of RAM and a 120 GB SSD rounding out the major bullet points. Nearly everything here is on par with the best of rival 14-inch gaming models with the exception of the 900p screen resolution (which is bested by some notebooks, such as Dell’s Alienware 14 and its 1080p panel).
ARM is Serious About Graphics
Ask most computer users from 10 years ago who ARM is, and very few would give the correct answer. Some well-informed people might mention “Intel” and “StrongARM” or “XScale”, but ARM remained a shadowy presence until the rise of the smartphone. Since then, ARM has built up its brand, much to the chagrin of companies like Intel and AMD. Partners such as Samsung, Apple, Qualcomm, MediaTek, Rockchip, and NVIDIA have all worked with ARM to produce chips based on the ARMv7 architecture, with Apple being the first to release ARMv8 (64-bit) SoCs. Chips based on the multitude of ARM architectures are likely the most widely shipped in the world, ranging from very basic processors to the very latest Apple A7 SoC.
The ARMv7 and ARMv8 architectures are very power efficient, yet provide enough performance to handle the vast majority of tasks performed on smartphones and tablets (as well as a handful of laptops). With the growth of visual computing, ARM has also dedicated itself to designing competent graphics portions for its chips. The Mali architecture aims to be an affordable option for those without access to their own graphics design groups (NVIDIA, Qualcomm), yet competitive with others that are willing to license their IP out (Imagination Technologies).
ARM was in fact one of the first to license out the very latest graphics technology to partners in the form of the Mali-T600 series of products. These modules were among the first to support OpenGL ES 3.0 (compatible with 2.0 and 1.1) and DirectX 11. The T600 architecture is very comparable to Imagination Technologies’ Series 6 and the Qualcomm Adreno 300 series of products. Currently NVIDIA does not have a unified mobile architecture in production that supports OpenGL ES 3.0/DX11, but they are adapting the Kepler architecture to mobile and will be licensing it to interested parties. Qualcomm does not license out Adreno after buying that group from AMD (Adreno is an anagram of Radeon).
Introduction and Design
As we’re swimming through the veritable flood of Haswell refresh notebooks, we’ve stumbled across the latest in a line of very popular gaming models: the ASUS G750JX-DB71. This notebook is the successor to the well-known G75 series, which topped out at an Intel Core i7-3630QM with NVIDIA GeForce GTX 670MX dedicated graphics. Now, ASUS has jacked up the specs a little more, including the latest 4th-gen CPUs from Intel as well as 700-series NVIDIA GPUs.
Our ASUS G750JX-DB71 test unit features the following specs:
Of course, the closest comparison to this unit is the recently reviewed MSI GT60-2OD-026US, which featured nearly identical specifications apart from a 15.6” screen, a better GPU (a GTX 780M with 4 GB GDDR5), and a slightly different CPU (the Intel Core i7-4700MQ). In case you’re wondering what the difference is between the ASUS G750JX’s Core i7-4700HQ and the GT60’s i7-4700MQ, it’s very minor: the HQ features a slightly faster integrated graphics Turbo frequency (1.2 GHz vs. 1.15 GHz) and supports Intel Virtualization Technology for Directed I/O (VT-d). Since the G750JX doesn’t support Optimus, we won’t ever be using the integrated graphics, and unless you’re doing a lot with virtual machines, VT-d isn’t likely to offer any benefits, either. So for all intents and purposes, the CPUs are equivalent—meaning the biggest overall performance difference (on the spec sheet, anyway) lies with the GPU and the storage devices (where the G750JX offers more solid-state storage than the GT60). It’s no secret that the MSI GT60 burned up our benchmarks—so the real question is, how close does the ASUS G750JX come to its pedestal, and if the differences are considerable, are they justified?
At an MSRP of around $2,000 (though it can be found for around $100 less), the ASUS G750JX-DB71 competes directly with the likes of the MSI GT60, too (which is priced equivalently). The question, of course, is whether it truly competes. Let’s find out!
A new generation of Software Rendering Engines.
We have been busy with side projects here at PC Perspective over the last year. Ryan has nearly broken his back rating frames. Ken, along with running the video equipment and "getting an education", developed a hardware switching device for Wirecast and XSplit.
My project, "Perpetual Motion Engine", has been researching and developing a GPU-accelerated software rendering engine. Now, to be clear, this is just in very early development for the moment. The point is not to draw beautiful scenes. Not yet. The point is to show what OpenGL and DirectX does and what limits are removed when you do the math directly.
Errata: BioShock uses a modified Unreal Engine 2.5, not 3.
In the above video:
- I show the problems with graphics APIs such as DirectX and OpenGL.
- I talk about what those APIs attempt to solve: finding color values for your monitor.
- I discuss the advantages of boiling graphics problems down to general mathematics.
- Finally, I prove the advantages of boiling graphics problems down to general mathematics.
I would recommend watching the video, first, before moving forward with the rest of the editorial. A few parts need to be seen for better understanding.
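To give a flavor of what "doing the math directly" means: the color an API ultimately hands to your monitor is just arithmetic on vectors. Here is a minimal, hypothetical illustration (ours, not taken from the video or from the Perpetual Motion Engine) of one classic piece of that math, diffuse shading, computed with nothing but plain Python:

```python
# A pixel's diffuse (Lambertian) color reduces to: base * max(0, N . L),
# where N is the surface normal and L points toward the light.
# No graphics API involved -- just dot products and a clamp.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def lambert_shade(normal, light_dir, base_color):
    """Return the shaded RGB color for one pixel."""
    n = normalize(normal)
    l = normalize(light_dir)
    intensity = max(0.0, dot(n, l))  # facing away from the light -> black
    return tuple(c * intensity for c in base_color)

# Surface facing the light head-on: full brightness.
print(lambert_shade((0, 0, 1), (0, 0, 1), (1.0, 0.5, 0.25)))
```

When the math is written out like this, nothing forces you into the API's fixed pipeline stages; you are free to restructure or generalize the computation however the problem demands, which is the point the video argues.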
If Microsoft was left to their own devices...
Microsoft's Financial Analyst Meeting 2013 set the stage, literally, for Steve Ballmer's last annual keynote to investors. The speech promoted Microsoft, its potential, and its unique position in the industry. He proclaimed, firmly, their desire to be a devices and services company.
The explanation, however, does not befit either industry.
Ballmer noted, early in the keynote, how Bing is the only notable competitor to Google Search. He wanted to make it clear, to investors, that Microsoft needs to remain in the search business to challenge Google. The implication is that Microsoft can fill the cracks where Google does not, or even cannot, and establish a business from that foothold. I agree. Proprietary products (which are not inherently bad by the way), as Google Search is, require one or more rivals to fill the overlooked or under-served niches. A legitimate business can be established from that basis.
It is the following, similar, statement which troubles me.
Ballmer later mentioned, along the same vein, how Microsoft is among the few making fundamental operating system investments. Like search, the implication is that operating systems are proprietary products which must compete against one another. This, albeit subtly, does not match their vision as a devices and services company. The point of a proprietary platform is to own the ecosystem, from end to end, and to derive your value from that control. The product is not a device; the product is not a service; the product is a platform. This makes sense to them because, from birth, they were a company which sold platforms.
A platform as a product is not a device, nor is it a service.
A Whole New Atom Family
This past spring I spent some time with Intel at its offices in Santa Clara to learn about a brand new architecture called Silvermont. Built for and targeted at low power platforms like tablets and smartphones, Silvermont was not simply another refresh of the aging Atom processors that were all based on Pentium cores from years ago; instead Silvermont was built from the ground up for low power consumption and high efficiency to compete against the juggernaut that is ARM and its partners. My initial preview of the Silvermont architecture had plenty of detail about the change to an out-of-order architecture, the dual-core modules that comprise it and the power optimizations included.
Today, during the annual Intel Developer Forum held in San Francisco, we are finally able to reveal the remaining details about the new Atom processors based on Silvermont, code named Bay Trail. Not only do we have new information about the designs, but we were able to get our hands on some reference tablets integrating Bay Trail and the new Atom Z3000 series of SoCs to benchmark and compare to offerings from Qualcomm, NVIDIA and AMD.
It should be no surprise to anyone that the name “Intel Atom Processor” has had a stigma attached to it almost since its initial release during the netbook craze. It was known for being slow and hastily put together, though it was still a very successful product in terms of sales. With each successive release and update, from Diamondville to Pineview to Cedarview, Atom improved, but only marginally. Even with Medfield and Clover Trail, the products were based around that legacy infrastructure, and it showed. Tablets and systems based on Clover Trail saw only moderate success and lukewarm reviews.
With Silvermont the Atom brand gets a second chance. Some may consider it a fifth or sixth chance, but Intel is sticking with the name. Silvermont as an architecture is incredibly flexible and will find its way into several Intel products like Avoton, Bay Trail and Merrifield and in segments from the micro-server to smartphones to convertible tablets. Not only that, but Intel is aware that Windows isn’t the only game out there anymore and the company will support the architecture across Linux, Android and Windows environments.
Atom has been in tablets for some time now, starting in September of last year with Clover Trail designs being announced during IDF. In February we saw the initial Android-based options filter out as well, again based on Clover Trail. They were okay, but really only stop-gaps to prove that Intel was serious about the space. The real test will be this holiday season with Bay Trail at the helm.
While we always knew these Bay Trail platforms were going to be branded as Atom, we now have the full details on the numbering scheme and productization of the architecture. The Atom Z3700 series will consist of quad-core SoCs with Intel HD graphics (the same design as in the Core processor series, though with fewer execution units) that will support Windows and Android operating systems. The Atom Z3600 series will be dual-core processors, still with Intel HD graphics, targeted only at the Android market.
Introduction and Design
It seems like only yesterday (okay, last month) that we were testing the IdeaPad Yoga 11, which was certainly an interesting device. That’s primarily because of what it represents: namely, the slow merging of the tablet and notebook markets. You’ve probably heard people proclaiming the death of the PC as we know it. Not so fast—while it’s true that tablets have eaten into the sales of what were previously low-powered notebooks and now-extinct netbooks, there is still no way to replace the utility of a physical keyboard and the sensibility of a mouse cursor. Touch-centric devices are hard to beat when entertainment and education are the focus of a purchase, but as long as productivity matters, we aren’t likely to see traditional means of input and a range of connectivity options disappear anytime soon.
The IdeaPad Yoga 11 leaned so heavily in the direction of tablet design that it was arguably more tablet than notebook. That is, it featured a tablet-grade SoC (the NVIDIA Tegra 3) as opposed to a standard Intel or AMD CPU, an 11” display, and phenomenal battery life that can only be compared to that of other ARM-based tablets. But, of course, with those allegiances come necessary concessions, not least of which is the inability to run x86 applications and the consequential half-baked experiment that is Windows RT.
Fortunately, there’s always room for compromise, and for those of us searching for something closer to a notebook than the original Yoga 11, we’re now afforded the option of the 11S. Apart from being nearly identical in terms of form factor, the $999 (as configured) Yoga 11S adopts a standard x86 chipset with Intel ULV CPUs, which allows it to run full-blown Windows 8. That positions it squarely in-between the larger x86 Yoga 13 and the ARM-based Yoga 11, which makes it an ideal candidate for someone hoping for the best of both worlds. But can it survive the transition, or do its compromises outstrip its gains?
Our Yoga 11S came equipped with a fairly standard configuration:
Unless you’re comparing to the Yoga 11’s specs, not much here stands out. The Core i5-3339Y is the first thing that jumps out at you; in exchange for the NVIDIA Tegra 3 ARM-based SoC of the original Yoga 11, it’s a much more powerful chip with a 13 W TDP and (thanks to its x86 architecture) the ability to run Windows 8 and standard Windows applications. Next on the list is the included 8 GB of DDR3 RAM—versus just 2 GB on the Yoga 11. Finally, there’s USB 3.0 and a much larger SSD (256 GB vs. 64 GB)—all valuable additions. One thing that hasn’t changed, meanwhile, is the battery size. Surely you’re wondering how this will affect the longevity of the notebook under typical usage. Patience; we’ll get to that in a bit! First, let’s talk about the general design of the notebook.
500GB on the go
Corsair seems to have its fingers in just about everything these days, so why not mobile storage, right? The Voyager Air is a multi-function device that Corsair describes as a "portable wireless drive, home network drive, USB drive, and wireless hub." This battery-powered device is meant to act as a mobile hard drive for users who need more storage on the go, whether on PCs and Macs or iOS and Android devices.
The Voyager Air can also act as a basic home NAS device with a Gigabit Ethernet connection on board for all the computers on your local network. And if you happen to have DLNA ready Blu-ray players or TVs nearby, they can access the video and audio stored on the Voyager Air as well.
Available in either red or black, in 500GB and 1TB capacities, the Voyager Air is slim and sleek, meant to be seen, not hidden in a closet.
The front holds the power switch and WiFi on/off switch as well as back-lit icons to check for power, battery life and connection status.
It has come to my attention that you are planning on producing and selling a device to be called “NVIDIA SHIELD.” It should be noted that even though it shares the same name, this device has no matching attributes of the super-hero comic-based security agency. Please adjust.
When SHIELD was previewed to the world at CES in January of this year, there were a hundred questions about the device. What would it cost? Would the build quality stand up to expectations? Would the Android operating system hold up as a dedicated gaming platform? After months of waiting, a SHIELD unit finally arrived in our offices in early July, giving us plenty of time (I thought) to really get a feel for the device and its strengths and weaknesses. As it turned out, though, it still seemed like an inadequate amount of time to really gauge this product. But I am going to take a stab at it, feature by feature.
NVIDIA SHIELD aims to be a mobile gaming platform based on Android with a flip-out touch-screen interface, a high-quality console-style integrated controller, and added features like PC game streaming and Miracast support.
Initial Unboxing and Overview of Product Video
At the heart of NVIDIA SHIELD is the brand new Tegra 4 SoC, NVIDIA’s latest entry into the world of mobile processors. Tegra 4 is a quad-core, ARM Cortex-A15 based SoC that includes a 5th A15 core, built on a low-power-optimized process technology, to run background and idle tasks using less power. This is very similar to what NVIDIA did with Tegra 3’s 4+1 technology, and to how ARM is tackling the problem with its big.LITTLE philosophy.
NVIDIA Finally Gets Serious with Tegra
Tegra has had an interesting run of things. The original Tegra 1 was utilized only by Microsoft in the Zune HD. Tegra 2 saw better adoption, but did not produce the design wins to propel NVIDIA to a leadership position in cell phones and tablets. Tegra 3 found a spot in Microsoft’s Surface, but that has turned out to be a far more bitter experience than expected. Tegra 4 so far has been integrated into a handful of products and is being featured in NVIDIA’s upcoming SHIELD product. It also hit some production snags that made it later to market than expected.
I think the primary issue with the first three generations of products is pretty simple: there was a distinct lack of differentiation from the other ARM-based products around. Yes, NVIDIA brought its graphics prowess to the market, but never in a form that distanced itself adequately from the competition. Tegra 2 boasted GeForce-based graphics, but we did not find out until later that it consisted of basically four pixel shaders and four vertex shaders, which had more in common with the GeForce 7800/7900 series than with any of the modern unified architectures of the time. Tegra 3 boasted a big graphical boost, but it came in the form of doubling the pixel shader units and leaving the vertex units alone.
While NVIDIA had very strong developer relations and a leg up on the competition in terms of software support, it was never enough to propel Tegra beyond a handful of devices. NVIDIA is trying to rectify that with Tegra 4 and the 72 shader units that it contains (still divided between pixel and vertex units). Tegra 4 is not perfect in that it is late to market and the GPU is not OpenGL ES 3.0 compliant. ARM, Imagination Technologies, and Qualcomm are offering new graphics processing units that are not only OpenGL ES 3.0 compliant, but also offer OpenCL 1.1 support. Tegra 4 does not support OpenCL. In fact, it does not support NVIDIA’s in-house CUDA. Ouch.
Jumping into a new market is not an easy thing, and invariably mistakes will be made. NVIDIA worked hard to build a solid foundation with its products, and certainly it had to learn to walk before it could run. Unfortunately, running effectively entails winning designs on the strength of outstanding features, performance, and power consumption, and NVIDIA was really only average in all of those areas. The company is hoping to change that. Its first salvo, a product offering features and support a step above the competition, is what we are talking about today.
Introduction and Design
With the release of Haswell upon us, we’re being treated to an impactful refresh of some already-impressive notebooks. Chief among the benefits are the much-championed battery life improvements—and while better power efficiency is obviously valuable where portability is a primary focus, beefier models can also benefit by way of increased versatility. Sure, gaming notebooks are normally tethered to an AC adapter, but when it’s time to unplug for some more menial tasks, it’s good to know that you won’t be out of juice in a couple of hours.
Of course, an abundance of gaming muscle never hurts, either. As the test platform for one of our recent mobile GPU analyses, MSI’s 15.6” GT60 gaming notebook is, for lack of a better description, one hell of a beast. Following up on Ryan’s extensive GPU testing, we’ll now take a more balanced and comprehensive look at the GT60 itself. Is it worth the daunting $1,999 MSRP? Does the jump to Haswell provide ample and economical benefits? And really, how much of a difference does it make in terms of battery life?
Our GT60 test machine featured the following configuration:
In case it wasn’t already apparent, this device makes no compromises. Sporting a desktop-grade GPU and a quad-core Haswell CPU, it looks poised to be the most powerful notebook we’ve tested to date. Other configurations exist as well, spanning various CPU, GPU, and storage options. However, all available GT60 configurations feature a 1080p anti-glare screen, discrete graphics (starting at the GTX 670M), Killer Gigabit LAN, and a case built from metal and heavy-duty plastic. They also come preconfigured with Windows 8, so the only way to get Windows 7 with your GT60 is to purchase it through a reseller that performs customizations.
Another Wrench – GeForce GTX 760M Results
Just recently, I evaluated some of the current processor-integrated graphics options using our new Frame Rating performance metric. The results were very interesting, proving Intel has done some great work with its new HD 5000 graphics option for Ultrabooks. You might have noticed that the MSI GE40 didn’t just come with integrated HD 4600 graphics; it also included a discrete NVIDIA GeForce GTX 760M. While that previous article focused on the integrated graphics of Haswell, Trinity, and Richland, I did find some noteworthy results with the GTX 760M that I wanted to investigate and present.
The MSI GE40 is a new Haswell-based notebook that includes the Core i7-4702MQ quad-core processor and Intel HD 4600 graphics. Along with it MSI has included the Kepler architecture GeForce GTX 760M discrete GPU.
This GPU offers 768 CUDA cores running at a 657 MHz base clock but can stretch higher with GPU Boost technology. It is configured with 2GB of GDDR5 memory running at 2.0 GHz.
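From those figures you can sketch a back-of-the-envelope peak compute number. The helper below is our own illustration (not an NVIDIA specification), counting a fused multiply-add as two floating-point operations per core per clock:

```python
# Rough peak single-precision throughput from the spec-sheet figures above:
# 768 CUDA cores at a 657 MHz base clock, 2 FLOPs per core per clock (FMA).

def peak_gflops(cuda_cores: int, clock_mhz: float, flops_per_core: int = 2) -> float:
    """Theoretical peak GFLOPS; real workloads land well below this ceiling."""
    return cuda_cores * clock_mhz * flops_per_core / 1000.0

print(peak_gflops(768, 657))  # ~1009 GFLOPS at base clock, higher with GPU Boost
```

That puts the 760M's base-clock ceiling at roughly one teraflop, a useful yardstick when comparing it against the 770M and 780M parts mentioned elsewhere in this review.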
If you didn’t read the previous integrated graphics article, linked above, some of the data presented here will spoil it for you, so you might want to get a baseline of information by reading through that first. Also, remember that we are using our Frame Rating performance evaluation system for this testing – a key differentiator from most other mobile GPU testing. In fact, it is that difference that allowed us to spot an interesting issue with the configuration we are showing you today.
If you are not familiar with the Frame Rating methodology, and how we had to change some things for mobile GPU testing, I would really encourage you to read this page of the previous mobility Frame Rating article for the scoop. The data presented below depends on that background knowledge!
Okay, you’ve been warned – on to the results.
Battle of the IGPs
Our long journey with Frame Rating, a new capture-based analysis tool to measure graphics performance of PCs and GPUs, began almost two years ago as a way to properly evaluate the real-world experiences for gamers. What started as a project attempting to learn about multi-GPU complications has really become a new standard in graphics evaluation and I truly believe it will play a crucial role going forward in GPU and game testing.
Today we take these Frame Rating methods and tools, which are elaborately detailed in our Frame Rating Dissected article, and apply them to a completely new market: notebooks. Even though Frame Rating was designed for high performance discrete desktop GPUs, the theory and science behind the entire process is completely applicable to notebook graphics, and even to the integrated graphics solutions on Haswell processors and Richland APUs. It can also measure the performance of discrete/integrated graphics combos from NVIDIA and AMD in a unique way that has already turned up some interesting results.
Even though neither side wants us to call it this, we are testing integrated graphics today. With the release of Intel’s Haswell processors (the Core i7/i5/i3 4000 series), the company has upgraded the graphics noticeably on several of its mobile and desktop products. In my first review of the Core i7-4770K, a desktop LGA1150 part, the integrated graphics now known as HD 4600 were only slightly faster than the graphics of the previous-generation Ivy Bridge and Sandy Bridge parts. Even though we had all the technical details of the HD 5000 and Iris / Iris Pro graphics options, no desktop parts actually utilize them, so we had to wait for some more hardware to show up.
When Apple held a press conference and announced new MacBook Air machines that used Intel’s Haswell architecture, I knew I could count on Ken to go and pick one up for himself. Of course, before I let him start using it for his own purposes, I made him sit through a few agonizing days of benchmarking and testing in both Windows and Mac OS X environments. Ken has already posted a review of the MacBook Air 11-in model ‘from a Windows perspective’ and in that we teased that we had done quite a bit more evaluation of the graphics performance to be shown later. Now is later.
So the first combatant in our integrated graphics showdown with Frame Rating is the 11-in MacBook Air: a small but powerful Ultrabook that sports more than 11 hours of battery life (in OS X, at least) and also includes the new HD 5000 integrated graphics. Behind that battery life is the GT3 variation of the new Intel processor graphics, which doubles the number of execution units compared to the GT2. The GT2 is the architecture behind the HD 4600 graphics found in nearly all of the desktop processors, and many of the notebook versions, so I am very curious how this comparison is going to stand.
Introduction and Design
As headlines mount championing the supposed shift toward tablets for the average consumer, PC manufacturers continue to devise clever hybrid solutions to try to lure those on the fence back toward more traditional machines. Following last year’s IdeaPad Yoga 13 and ThinkPad Twist, Lenovo shortly thereafter launched the smallest of the bunch, an 11.6” convertible tablet PC with a 5-point touch 720p IPS display.
Unlike its newer, more powerful counterpart, the Yoga 11S, the Yoga 11 runs Windows RT and features an NVIDIA Tegra 3 quad-core system on a chip (SoC). There are pros and cons to this configuration in contrast to the 11S. For starters, the lower-voltage, fanless design of the 11 guarantees superior battery life (something we’ll cover in detail in just a bit). It’s also consequently (slightly) smaller and lighter than the 11S, which gains a hair in height and weighs around a quarter pound more. But, as you’re probably aware, Windows RT doesn’t qualify as a fully-functional version of Windows; in fact, the Yoga 11’s versatility is constrained by the relatively meager selection of apps available in the Windows Store. The other obvious difference is the architecture and chipset: the Yoga 11’s phone- and tablet-grade ARM-based NVIDIA Tegra 3 is replaced on the 11S by Intel’s ULV Ivy Bridge Core processors.
But let’s forget about that for a moment. What it all boils down to is that these two machines, while similar in terms of design, are different enough (both in terms of specs and price) to warrant a choice between them based on your intended use. The IdeaPad Yoga 11 configuration we reviewed can currently be found for around $570 at retailers such as Amazon and Newegg. In terms of its innards:
If that looks an awful lot like the spec sheet of your latest smartphone, that’s because it nearly is. The Yoga 11 banks on the fact that such ARM-based SoCs have become powerful enough to run a modern personal computer comfortably, and by combining the strengths of an efficient, low-power chipset with the body of a notebook, it reaps benefits from both categories. Of course, there are trade-offs involved, starting with the 2 GB memory ceiling of the chipset and extending to the aforementioned limitations of Windows RT. So the ultimate question is, once those trade-offs are considered, is the Yoga 11 still worth the investment?
Apple has seen a healthy boost in computer sales and adoption since the transition to Intel-based platforms in 2006, but the MacBook line has far and away been the biggest beneficiary. Apple has come a long way, both from an engineering standpoint and in consumer satisfaction, since the long-retired iBook and PowerBook lines. This is especially evident when you look at its current product lineup and products like the 11” MacBook Air.
Even though it may not be the most popular opinion around here, I have been a Mac user since 2005 with the original Mac Mini, and I have used a MacBook as my primary computer since 2008. I switched to the 11” MacBook Air when it came out in 2011, and experienced the growing pains of using a low power platform as my main computer.
While I still have a desktop for the occasional video I edit at home, or the game I manage to find time to play, the majority of my day involves being portable, both in class and at the office, and I quickly grew to appreciate the 11” form factor and the portability it offers. However, I was quite dissatisfied with the performance and battery life my aging ultraportable offered. Desperate for improvements, I decided to see what two generations’ worth of Intel engineering afforded, and picked up the new Haswell-based 11” MacBook Air.
Since the 2010 redesign of the MacBook Air, the overall look and feel has stayed virtually the same; aside from the Mini DisplayPort connector on the side becoming a Thunderbolt connector in 2011, little has changed.
In this way, the 2013 MacBook Air should provide no surprises. The one visual difference I can spot is the upgraded microphone on the left side, now a stereo array, which means two grilles this time instead of one. However, the faults I found in the past with the MacBook Air have nothing to do with the aesthetics or build quality of the device, so I am not too disappointed by the design stagnation.
From an industrial design perspective, everything about this notebook feels familiar to me, which is a positive. I still believe that Apple’s trackpad implementation is the best I've used, and the backlit chiclet keyboard they have been using for years is a good compromise between thickness and key travel.
Kepler-based Mobile GPUs
Late last month, just before the tech world blew up from the mess that is Computex, NVIDIA announced a new line of mobile discrete graphics parts under the GTX 700M series label. At the time we simply posted some news and specifications about the new products but left performance evaluation for a later time. Today we have that for the highest-end offering, the GeForce GTX 780M.
As with most mobile GPU releases, it seems, the GTX 700M series is not really a new GPU and offers only cursory feature improvements. Based entirely on the Kepler architecture, the GTX 700M line will range from 1536 CUDA cores on the GTX 780M down to 768 cores on the GTX 760M.
The flagship GTX 780M is essentially a desktop GTX 680 in a mobile form factor with lower clock speeds. With 1536 CUDA cores running at 823 MHz (boosting higher depending on the notebook configuration) and a 256-bit memory controller running at an effective 5 GHz, the GTX 780M will likely be the fastest mobile GPU you can buy. (And we’ll be testing that claim in the coming pages.)
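As a quick sanity check on what that memory configuration implies, the bus width and effective data rate determine the theoretical peak bandwidth. This is our own back-of-the-envelope arithmetic, not an NVIDIA-quoted figure:

```python
# Back-of-the-envelope theoretical memory bandwidth (our arithmetic, not an
# NVIDIA-published spec): bus width in bytes times the effective transfer
# rate per pin.

def memory_bandwidth_gbs(bus_width_bits, effective_clock_ghz):
    """Theoretical peak bandwidth in GB/s."""
    return (bus_width_bits / 8) * effective_clock_ghz

# GTX 780M: 256-bit bus at an effective 5 GHz (5 Gbps per pin).
print(memory_bandwidth_gbs(256, 5.0))  # 160.0 GB/s
```

That 160 GB/s peak sits below the desktop GTX 680’s figure only because of the lower effective memory clock; the 256-bit bus width itself carries over unchanged.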
The GTX 760M, 765M and 770M offer a range of performance that scales down to 768 cores at 657 MHz. NVIDIA claims we’ll see the GTX 760M in systems as small as 14-in and below, with weights of around 2 kg, from vendors like MSI and Acer. For Ultrabooks and thinner machines you’ll have to step down to smaller, less power-hungry GPUs like the GT 750M and 740M, but even then we expect NVIDIA to offer much faster gaming performance than Haswell’s processor graphics.