If Microsoft were left to their own devices...
Microsoft's Financial Analyst Meeting 2013 set the stage, literally, for Steve Ballmer's last annual keynote to investors. The speech promoted Microsoft, its potential, and its unique position in the industry. He proclaimed, firmly, their desire to be a devices and services company.
The explanation, however, does not befit either industry.
Ballmer noted, early in the keynote, how Bing is the only notable competitor to Google Search. He wanted to make it clear, to investors, that Microsoft needs to remain in the search business to challenge Google. The implication is that Microsoft can fill the cracks where Google does not, or even cannot, and establish a business from that foothold. I agree. Proprietary products like Google Search (which are not inherently bad, by the way) require one or more rivals to fill the overlooked or under-served niches. A legitimate business can be established from that basis.
It is the following, similar, statement which troubles me.
Ballmer later mentioned, in the same vein, how Microsoft is among the few making fundamental operating system investments. Like search, the implication is that operating systems are proprietary products which must compete against one another. This, albeit subtly, does not match their vision as a devices and services company. The point of a proprietary platform is to own the ecosystem, from end to end, and to derive your value from that control. The product is not a device; the product is not a service; the product is a platform. This makes sense to them because, from birth, they were a company which sold platforms.
A platform as a product is not a device, nor is it a service.
A Whole New Atom Family
This past spring I spent some time with Intel at its offices in Santa Clara to learn about a brand new architecture called Silvermont. Built for and targeted at low power platforms like tablets and smartphones, Silvermont was not simply another refresh of the aging Atom processors that were all based on Pentium cores from years ago; instead Silvermont was built from the ground up for low power consumption and high efficiency to compete against the juggernaut that is ARM and its partners. My initial preview of the Silvermont architecture had plenty of detail about the change to an out-of-order architecture, the dual-core modules that comprise it and the power optimizations included.
Today, during the annual Intel Developer Forum held in San Francisco, we are finally able to reveal the remaining details about the new Atom processors based on Silvermont, code named Bay Trail. Not only do we have new information about the designs, but we were able to get our hands on some reference tablets integrating Bay Trail and the new Atom Z3000 series of SoCs to benchmark and compare to offerings from Qualcomm, NVIDIA and AMD.
It should be no surprise to anyone that the name “Intel Atom Processor” has had a stigma attached to it almost since its initial release during the netbook craze. It was known for being slow and hastily put together, though it was still a very successful product in terms of sales. With each successive release and update, Diamondville to Pineview to Cedarview, Atom was improved, but only marginally so. Even with Medfield and Clover Trail the products were based around that legacy infrastructure and it showed. Tablets and systems based on Clover Trail saw only moderate success and lukewarm reviews.
With Silvermont the Atom brand gets a second chance. Some may consider it a fifth or sixth chance, but Intel is sticking with the name. Silvermont as an architecture is incredibly flexible and will find its way into several Intel products like Avoton, Bay Trail and Merrifield and in segments from the micro-server to smartphones to convertible tablets. Not only that, but Intel is aware that Windows isn’t the only game out there anymore and the company will support the architecture across Linux, Android and Windows environments.
Atom has been in tablets for some time now, starting in September of last year with Clover Trail designs being announced during IDF. In February we saw the initial Android-based options also filter out, again based on Clover Trail. They were okay, but really only stop-gaps to prove that Intel was serious about the space. The real test will be this holiday season with Bay Trail at the helm.
While we always knew these Bay Trail platforms were going to be branded as Atom, we now have the full details on the numbering scheme and productization of the architecture. The Atom Z3700 series will consist of quad-core SoCs with Intel HD graphics (the same design as the Core processor series, though with fewer execution units) that will support Windows and Android operating systems. The Atom Z3600 series will be dual-core processors, still with Intel HD graphics, targeted only at the Android market.
Introduction and Design
It seems like only yesterday (okay, last month) that we were testing the IdeaPad Yoga 11, which was certainly an interesting device. That’s primarily because of what it represents: namely, the slow merging of the tablet and notebook markets. You’ve probably heard people proclaiming the death of the PC as we know it. Not so fast—while it’s true that tablets have eaten into the sales of what were previously low-powered notebooks and now-extinct netbooks, there is still no way to replace the utility of a physical keyboard and the sensibility of a mouse cursor. Touch-centric devices are hard to beat when entertainment and education are the focus of a purchase, but as long as productivity matters, we aren’t likely to see traditional means of input and a range of connectivity options disappear anytime soon.
The IdeaPad Yoga 11 leaned so heavily in the direction of tablet design that it arguably was more tablet than notebook. That is, it featured a tablet-grade SoC (the NVIDIA Tegra 3) as opposed to a standard Intel or AMD CPU, an 11” display, and a phenomenal battery life that can only be compared to the likes of other ARM-based tablets. But, of course, with those allegiances come necessary concessions, not least of which is the inability to run x86 applications and the consequential half-baked experiment that is Windows RT.
Fortunately, there’s always room for compromise, and for those of us searching for something closer to a notebook than the original Yoga 11, we’re now afforded the option of the 11S. Apart from being nearly identical in terms of form factor, the $999 (as configured) Yoga 11S adopts a standard x86 chipset with Intel ULV CPUs, which allows it to run full-blown Windows 8. That positions it squarely in-between the larger x86 Yoga 13 and the ARM-based Yoga 11, which makes it an ideal candidate for someone hoping for the best of both worlds. But can it survive the transition, or do its compromises outstrip its gains?
Our Yoga 11S came equipped with a fairly standard configuration:
Unless you’re comparing to the Yoga 11’s specs, not much about this stands out. The Core i5-3339Y is the first thing that jumps out at you; replacing the NVIDIA Tegra 3 ARM-based SoC of the original Yoga 11, it’s a much more powerful chip with a 13W TDP and (thanks to its x86 architecture) the ability to run Windows 8 and standard Windows applications. Next on the list is the included 8 GB of DDR3 RAM—versus just 2 GB on the Yoga 11. Finally, there’s USB 3.0 and a much larger SSD (256 GB vs. 64 GB)—all valuable additions. One thing that hasn’t changed, meanwhile, is the battery size. Surely you’re wondering how this will affect the longevity of the notebook under typical usage. Patience; we’ll get to that in a bit! First, let’s talk about the general design of the notebook.
500GB on the go
Corsair seems to have its fingers in just about everything these days, so why not mobile storage, right? The Voyager Air is a multi-function device that Corsair calls a "portable wireless drive, home network drive, USB drive, and wireless hub." This battery-powered device is meant to act as a mobile hard drive for users that need more storage on the go, including PC and Mac owners as well as iOS and Android users.
The Voyager Air can also act as a basic home NAS device with a Gigabit Ethernet connection on board for all the computers on your local network. And if you happen to have DLNA ready Blu-ray players or TVs nearby, they can access the video and audio stored on the Voyager Air as well.
Available in either red or black, with 500GB and 1TB capacities, the Voyager Air is slim and sleek, meant to be seen not hidden in a closet.
The front holds the power switch and WiFi on/off switch as well as back-lit icons to check for power, battery life and connection status.
It has come to my attention that you are planning on producing and selling a device to be called “NVIDIA SHIELD.” It should be noted that even though it shares the same name, this device has no matching attributes of the super-hero comic-based security agency. Please adjust.
When SHIELD was previewed to the world at CES in January of this year, there were a hundred questions about the device. What would it cost? Would the build quality stand up to expectations? Would the Android operating system hold up as a dedicated gaming platform? After months of waiting a SHIELD unit finally arrived in our offices in early July, giving us plenty of time (I thought) to really get a feel for the device and its strengths and weaknesses. As it turned out though, it still seemed like an inadequate amount of time to really gauge this product. But I am going to take a stab at it, feature by feature.
NVIDIA SHIELD aims to be a mobile gaming platform based on Android with a flip-out touch-screen interface, an integrated controller of console-quality design, and added features like PC game streaming and Miracast support.
Initial Unboxing and Overview of Product Video
At the heart of NVIDIA SHIELD is the brand new Tegra 4 SoC, NVIDIA’s latest entry into the world of mobile processors. Tegra 4 is a quad-core, ARM Cortex-A15 based SoC that includes a fifth A15 core, built on a lower-power-optimized process technology, to run background and idle tasks using less power. This is very similar to what NVIDIA did with Tegra 3’s 4+1 technology, and to how ARM is tackling the problem with its big.LITTLE philosophy.
NVIDIA Finally Gets Serious with Tegra
Tegra has had an interesting run of things. The original Tegra 1 was utilized only by Microsoft with Zune. Tegra 2 had a better adoption, but did not produce the design wins to propel NVIDIA to a leadership position in cell phones and tablets. Tegra 3 found a spot in Microsoft’s Surface, but that has turned out to be a far more bitter experience than expected. Tegra 4 so far has been integrated into a handful of products and is being featured in NVIDIA’s upcoming Shield product. It also hit some production snags that made it later to market than expected.
I think the primary issue with the first three generations of products is pretty simple. There was a distinct lack of differentiation from the other ARM based products around. Yes, NVIDIA brought their graphics prowess to the market, but never in a form that distanced itself adequately from the competition. Tegra 2 boasted GeForce based graphics, but we did not find out until later that it was comprised of basically four pixel shaders and four vertex shaders that had more in common with the GeForce 7800/7900 series than it did with any of the modern unified architectures of the time. Tegra 3 boasted a big graphical boost, but it was in the form of doubling the pixel shader units and leaving the vertex units alone.
While NVIDIA had very strong developer relations and a leg up on the competition in terms of software support, it was never enough to propel Tegra beyond a handful of devices. NVIDIA is trying to rectify that with Tegra 4 and the 72 shader units that it contains (still divided between pixel and vertex units). Tegra 4 is not perfect in that it is late to market and the GPU is not OpenGL ES 3.0 compliant. ARM, Imagination Technologies, and Qualcomm are offering new graphics processing units that are not only OpenGL ES 3.0 compliant, but also offer OpenCL 1.1 support. Tegra 4 does not support OpenCL. In fact, it does not support NVIDIA’s in-house CUDA. Ouch.
Jumping into a new market is not an easy thing, and invariably mistakes will be made. NVIDIA worked hard to make a solid foundation with their products, and certainly they had to learn to walk before they could run. Unfortunately, running effectively entails winning designs through outstanding features, performance, and power consumption, and NVIDIA was really only average in all of those areas. NVIDIA is hoping to change that, and their first salvo, a product whose features and support are a step above the competition's, is what we are talking about today.
Introduction and Design
With the release of Haswell upon us, we’re being treated to an impactful refresh of some already-impressive notebooks. Chief among the benefits are the much-championed battery life improvements—and while better power efficiency is obviously valuable where portability is a primary focus, beefier models can also benefit by way of increased versatility. Sure, gaming notebooks are normally tethered to an AC adapter, but when it’s time to unplug for some more menial tasks, it’s good to know that you won’t be out of juice in a couple of hours.
Of course, an abundance of gaming muscle never hurts, either. As the test platform for one of our recent mobile GPU analyses, MSI’s 15.6” GT60 gaming notebook is, for lack of a better description, one hell of a beast. Following up on Ryan’s extensive GPU testing, we’ll now take a more balanced and comprehensive look at the GT60 itself. Is it worth the daunting $1,999 MSRP? Does the jump to Haswell provide ample and economical benefits? And really, how much of a difference does it make in terms of battery life?
Our GT60 test machine featured the following configuration:
In case it wasn’t already apparent, this device makes no compromises. Sporting a desktop-grade GPU and a quad-core Haswell CPU, it looks poised to be the most powerful notebook we’ve tested to date. Other configurations exist as well, spanning various CPU, GPU, and storage options. However, all available GT60 configurations feature a 1080p anti-glare screen, discrete graphics (the GTX 670M and up), Killer Gigabit LAN, and a case built from metal and heavy-duty plastic. They also come preconfigured with Windows 8, so the only way to get Windows 7 with your GT60 is to purchase it through a reseller that performs customizations.
Another Wrench – GeForce GTX 760M Results
Just recently, I evaluated some of the current processor-integrated graphics options using our new Frame Rating performance metric. The results were very interesting, proving Intel has done some great work with its new HD 5000 graphics option for Ultrabooks. You might have noticed that the MSI GE40 didn’t just come with the integrated HD 4600 graphics but also included a discrete NVIDIA GeForce GTX 760M on board. While that previous article focused on the integrated graphics of Haswell, Trinity, and Richland, I did find some noteworthy results with the GTX 760M that I wanted to investigate and present.
The MSI GE40 is a new Haswell-based notebook that includes the Core i7-4702MQ quad-core processor and Intel HD 4600 graphics. Alongside them, MSI has included the Kepler-architecture GeForce GTX 760M discrete GPU.
This GPU offers 768 CUDA cores running at a 657 MHz base clock but can stretch higher with GPU Boost technology. It is configured with 2GB of GDDR5 memory running at 2.0 GHz.
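For a rough sense of what those numbers mean, peak single-precision throughput can be estimated from the core count and clock. This is only a back-of-the-envelope sketch, assuming the usual two floating-point operations (one fused multiply-add) per CUDA core per clock for Kepler-class parts; GPU Boost would push the real figure higher.

```python
# Estimate peak single-precision throughput from core count and clock.
# Assumes 2 FLOPs (one fused multiply-add) per CUDA core per clock,
# the usual convention for Kepler-class GPUs.
def peak_gflops(cuda_cores: int, clock_mhz: float) -> float:
    return cuda_cores * clock_mhz * 2 / 1000.0

# GTX 760M at its 657 MHz base clock: just over 1 TFLOP on paper.
print(round(peak_gflops(768, 657), 1))  # → 1009.2
```

Keep in mind this is a theoretical ceiling; real games never sustain it, which is exactly why we measure delivered frame times instead.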
If you didn’t read the previous integrated graphics article, linked above, some of the data presented there will be spoiled for you here, so you might want to get a baseline of information by reading through that first. Also, remember that we are using our Frame Rating performance evaluation system for this testing – a key differentiator from most other mobile GPU testing. And in fact it is that difference that allowed us to spot an interesting issue with the configuration we are showing you today.
If you are not familiar with the Frame Rating methodology, and how we had to change some things for mobile GPU testing, I would really encourage you to read this page of the previous mobility Frame Rating article for the scoop. The data presented below depends on that background knowledge!
Okay, you’ve been warned – on to the results.
Battle of the IGPs
Our long journey with Frame Rating, a new capture-based analysis tool to measure graphics performance of PCs and GPUs, began almost two years ago as a way to properly evaluate the real-world experiences for gamers. What started as a project attempting to learn about multi-GPU complications has really become a new standard in graphics evaluation and I truly believe it will play a crucial role going forward in GPU and game testing.
Today we use these Frame Rating methods and tools, which are elaborately detailed in our Frame Rating Dissected article, and apply them to a completely new market: notebooks. Even though Frame Rating was meant for high-performance discrete desktop GPUs, the theory and science behind the entire process is completely applicable to notebook graphics, and even to the integrated graphics solutions on Haswell processors and Richland APUs. It is also able to measure performance of discrete/integrated graphics combos from NVIDIA and AMD in a unique way that has already produced some interesting results.
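To make the difference from simple FPS counting concrete, here is a minimal illustration of the kind of statistics frame-time-based testing produces. The frame times below are hypothetical, and this is only a sketch of the underlying idea, not the actual Frame Rating capture pipeline, which analyzes overlay colors in recorded video.

```python
# Illustrative frame-time statistics: an average FPS number can hide
# stutter, which is the core idea behind frame-time-based testing.
# The sample data here is hypothetical.
def frame_stats(frame_times_ms):
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s
    # 99th-percentile frame time: the "worst" frames a player feels.
    p99 = sorted(frame_times_ms)[int(len(frame_times_ms) * 0.99) - 1]
    return avg_fps, p99

# 95 smooth frames at ~60 FPS plus 5 stutters at ~30 FPS.
times = [16.7] * 95 + [33.4] * 5
fps, p99 = frame_stats(times)
print(round(fps, 1), p99)  # → 57.0 33.4
```

Two systems could both report "57 FPS" on average while one has a far worse 99th-percentile frame time, and only the latter metric exposes the stutter a gamer actually experiences.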
Even though neither side wants us to call it this, we are testing integrated graphics today. With the release of Intel’s Haswell processor (the Core i7/i5/i3 4000 series) the company has upgraded the graphics noticeably on several of its mobile and desktop products. In my first review of the Core i7-4770K, a desktop LGA1150 part, the integrated graphics now known as the HD 4600 were only slightly faster than the graphics of the previous generation Ivy Bridge and Sandy Bridge. Even though we had all the technical details of the HD 5000 and Iris / Iris Pro graphics options, no desktop parts actually utilize them, so we had to wait for some more hardware to show up.
When Apple held a press conference and announced new MacBook Air machines that used Intel’s Haswell architecture, I knew I could count on Ken to go and pick one up for himself. Of course, before I let him start using it for his own purposes, I made him sit through a few agonizing days of benchmarking and testing in both Windows and Mac OS X environments. Ken has already posted a review of the MacBook Air 11-in model ‘from a Windows perspective’ and in that we teased that we had done quite a bit more evaluation of the graphics performance to be shown later. Now is later.
So the first combatant in our integrated graphics showdown with Frame Rating is the 11-in MacBook Air, a small but powerful Ultrabook that sports more than 11 hours of battery life (in OS X, at least) and includes the new HD 5000 integrated graphics. Alongside that battery life is the GT3 variation of the new Intel processor graphics, which doubles the number of execution units compared to the GT2. The GT2 is the architecture behind the HD 4600 graphics found in nearly all of the desktop processors, and many of the notebook versions, so I am very curious how this comparison is going to play out.
Introduction and Design
As headlines mount championing the supposed shift toward tablets for the average consumer, PC manufacturers continue to devise clever hybrid solutions to try and lure those who are on the fence toward more traditional machines. Along with last year’s IdeaPad Yoga 13 and ThinkPad Twist, Lenovo shortly thereafter launched the smallest of the bunch, an 11.6” convertible tablet PC with a 5-point touch 720p IPS display.
Unlike its newer, more powerful counterpart, the Yoga 11S, it runs Windows RT and features an NVIDIA Tegra 3 Quad-Core system on a chip (SoC). There are pros and cons to this configuration in contrast to the 11S. For starters, the lower-voltage, fanless design of the 11 guarantees superior battery life (something which we’ll cover in detail in just a bit). It’s also consequently (slightly) smaller and lighter than the 11S, which gains a hair on height and weighs around a quarter pound more. But, as you’re probably aware, Windows RT also doesn’t qualify as a fully-functional version of Windows—and, in fact, the Yoga 11’s versatility is constrained by the relatively meager selection of apps available on the Windows Store. The other obvious difference is architecture and chipset, where the Yoga 11’s phone- and tablet-grade ARM-based NVIDIA Tegra 3 is replaced on the 11S by Intel Core Ivy Bridge ULV processors.
But let’s forget about that for a moment. What it all boils down to is that these two machines, while similar in terms of design, are different enough (both in terms of specs and price) to warrant a choice between them based on your intended use. The IdeaPad Yoga 11 configuration we reviewed can currently be found for around $570 at retailers such as Amazon and Newegg. In terms of its innards:
If it looks an awful lot like the specs of your latest smartphone, that’s probably because it is. The Yoga 11 banks on the fact that such ARM-based SoCs have become powerful enough to run a modern personal computer comfortably—and by combining the strengths of an efficient, low-power chipset with the body of a notebook, it reaps benefits from both categories. Of course, there are trade-offs involved, starting with the 2 GB memory ceiling of the chipset and extending to the aforementioned limitations of Windows RT. So the ultimate question is, once those trade-offs are considered, is the Yoga 11 still worth the investment?
Apple has seen a healthy boost in computer sales and adoption since the transition to Intel-based platforms in 2006, but the MacBook line has far and away been the biggest beneficiary. Apple has come a long way, both from an engineering standpoint and a consumer satisfaction standpoint, since the long-retired iBook and PowerBook lines. This is especially evident when you look at their current product lineup and products like the 11” MacBook Air.
Even though it may not be the most popular opinion around here, I have been a Mac user since 2005 with the original Mac Mini, and I have used a MacBook as my primary computer since 2008. I switched to the 11” MacBook Air when it came out in 2011, and experienced the growing pains of using a low power platform as my main computer.
While I still have a desktop for the occasional video that I edit at home, or game I manage to find time to play, the majority of my day involves being portable, both in class and at the office, and I quickly grew to appreciate the 11” form factor as well as the portability it offers. However, I was quite dissatisfied with the performance and battery life that my aging ultraportable offered. Desperate for improvements, I decided to see what two generations' worth of Intel engineering afforded, and picked up the new Haswell-based 11” MacBook Air.
Since the redesign of the MacBook Air in 2010, the overall look and feel has stayed virtually the same. While the Mini DisplayPort connector on the side became a Thunderbolt connector in 2011, things are still pretty much the same.
In this way, the 2013 MacBook Air should provide no surprises. The one visual difference I can notice involves upgrading the microphone on the left side to a stereo array, causing there to be two grilles this time, instead of one. However, the faults I found in the past with the MacBook Air have nothing to do with the aesthetics or build quality of the device, so I am not too disappointed by the design stagnation.
From an industrial design perspective, everything about this notebook feels familiar to me, which is a positive. I still believe that Apple’s trackpad implementation is the best I've used, and the backlit chiclet keyboard they have been using for years is a good compromise between thickness and key travel.
Kepler-based Mobile GPUs
Late last month, just before the tech world blew up from the mess that is Computex, NVIDIA announced a new line of mobility discrete graphics parts under the GTX 700M series label. At the time we simply posted some news and specifications about the new products but left performance evaluation for a later time. Today we have that for the highest end offering, the GeForce GTX 780M.
As with most mobility GPU releases it seems, the GTX 700M series is not really a new GPU and only offers cursory feature improvements. Based completely on the Kepler line of parts, the GTX 700M will range from 1536 CUDA cores on the GTX 780M to 768 cores on the GTX 760M.
The flagship GTX 780M is essentially a desktop GTX 680 card in a mobile form factor with lower clock speeds. With 1536 CUDA cores running at 823 MHz (boosting to higher speeds depending on the notebook configuration) and a 256-bit memory controller running at 5 GHz, the GTX 780M will likely be the fastest mobile GPU you can buy. (And we’ll be testing that in the coming pages.)
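From the quoted figures, the theoretical memory bandwidth is easy to work out: a 256-bit bus moves 32 bytes per transfer, and at an effective 5 GT/s that comes to 160 GB/s. A quick sketch of the arithmetic:

```python
# Theoretical memory bandwidth from bus width and effective data rate:
# bytes per transfer (bus width / 8) times billions of transfers per second.
def mem_bandwidth_gbps(bus_width_bits: int, effective_gtps: float) -> float:
    return bus_width_bits / 8 * effective_gtps

# GTX 780M: 256-bit bus at an effective 5 GT/s.
print(mem_bandwidth_gbps(256, 5.0))  # → 160.0 GB/s
```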
The GTX 760M, 765M and 770M offer ranges of performance that scale down to 768 cores at 657 MHz. NVIDIA claims we’ll see the GTX 760M in systems as small as 14-in with weights at 2 kg or so from vendors like MSI and Acer. For Ultrabooks and thinner machines you’ll have to step down to smaller, less power hungry GPUs like the GT 750M and 740M, but even then we expect NVIDIA to have much faster gaming performance than the Haswell-based processor graphics.
A new era for computing? Or just a bit of catching up?
Early Tuesday, at 2am for viewers in eastern North America, Intel delivered their Computex 2013 keynote to officially kick off Haswell. Unlike ASUS the night prior, Intel did not announce a barrage of new products; the purpose was to promote future technologies and the new products of their OEM and ODM partners. In all, there was a pretty wide variety of discussed topics.
Intel carried on with the computational era analogy: the '80s were dominated by mainframes; the '90s were predominantly client-server; and the 2000s brought the internet to the forefront. While true, they did not explicitly mention how each era never actually died but rather just bled through: we still use mainframes, especially with cloud infrastructure; we still use client-server; and just about no one would argue that the internet has been displaced, despite its struggle against semi-native apps.
Intel believes that we are currently in the two-in-one era, by which they probably mean "multiple-in-one" given devices such as the ASUS Transformer Book Trio. They created a tagline, almost a mantra, illustrating their vision:
"It's a laptop when you need it; it's a tablet when you want it."
But before elaborating, they wanted to discuss their position in the mobile market. They believe they are becoming a major player in mobile, with key design wins and performance that beats some incumbent systems-on-a-chip (SoCs). The upcoming Silvermont architecture aims to fill in the gaps below Haswell, driving smartphones and tablets and stretching upward to include entry-level notebooks and all-in-one PCs. The architecture promises roughly three-fold more performance than the previous generation, or equivalent performance at about a fifth of the power.
Ryan discussed Silvermont last month; be sure to give his thoughts a browse for more depth.
Cortex-A12 fills a gap
Starting off Computex with an interesting announcement, ARM is talking about a new Cortex-A12 core that will attempt to address a performance gap in the SoC ecosystem between the A9 and A15. In the battle to compete with Krait and Intel's Silvermont architecture due in late 2013, ARM definitely needed to address the separation in performance and efficiency of the A9 and A15.
Source: ARM. Top to bottom: Cortex-A15, A12, A9 die size estimate
Targeted at mid-range devices that tend to be more cost (and thus die-size) limited, the Cortex-A12 will begin product sampling in late 2014, and you should see hardware for sale in early 2015.
Architecturally, the changes for the upcoming A12 core revolve around a move to a fully out-of-order, dual-issue design, including the integrated floating point units. The execution units are faster and the memory design has been improved, but ARM wasn't ready to talk about specifics with me yet; expect those later in the year.
ARM claims this results in a 40% performance gain for the Cortex-A12 over the Cortex-A9, tested in SPECint. Because product won't even start sampling until late in 2014 we have no way to verify this data yet or to evaluate efficiency claims. That time lag between announcement and release will also give competitors like Intel, AMD and even Qualcomm time to answer back with potential earlier availability.
A Reference Platform - But not a great one
Believe it or not, AMD claims that the Brazos platform, along with the "Brazos 2.0" update the following year, was the company's most successful mobile platform in terms of sales and design wins. When it first hit the scene in late 2010, it was going head to head against the likes of Intel's Atom processor and the combination of Atom + NVIDIA ION, and winning. It was sold in mini-ITX motherboard form factors as well as small clamshell notebooks (gasp, dare we say...NETBOOKS?) and though it might not have gotten the universal attention it deserved, it was a great part.
With Kabini (and Temash as well), AMD is making another attempt to pull in some marketshare in the low power, low cost mobile markets. I have already gone over the details of the mobile platforms that AMD is calling Elite Mobility (Temash) and Mainstream (Kabini) in a previous article that launched today.
This article will quickly focus on the real-world performance of the Kabini platform as demonstrated by a reference laptop I received while visiting AMD in Toronto a few weeks ago. While this design isn't going to be available in retail (and I am somewhat thankful based on the build quality) the key is to look at the performance and power efficiency of the platform itself, not the specific implementation.
Kabini Architecture Overview
The building blocks of Kabini are four Jaguar x86 cores and 128 Radeon cores collected in a pair of Compute Units - similar in many ways to the CUs found in the Radeon HD 7000 series discrete GPUs. Josh has written a very good article that focuses on the completely new architecture that is Jaguar and compares it to other processors, including AMD's previous core used in Brazos, the Bobcat core.
Introduction and Design
While Lenovo hasn’t historically been known for its gaming PCs, it’s poised to make quite a splash with the latest entry in its IdeaPad line. Owing little to the company’s business-oriented roots, the Y500 aims to be all power—more so than any other laptop from the manufacturer to date—tactically squeezed into a price tag that would normally be unattainable given the promised performance. But can it succeed?
Our Y500 review unit can be had for $1,249 at Newegg and other retailers, or for as low as $1,180 at Best Buy. Lenovo also sells customizable models, though the price is generally higher. Here’s the full list of specifications:
The configurations offered by Lenovo range fairly widely in price, starting as low as $849 for a model sporting 8 GB of RAM and a single GT 650M with 2 GB of GDDR5. The best value is certainly the configuration that we received, however.
What’s so special about it? Well, apart from the obvious (powerful quad-core CPU and 16 GB RAM), this laptop actually includes two NVIDIA GeForce GT 650M GPUs (both with 2 GB GDDR5) configured in SLI. Seeing as it’s just a 15.6-inch model, how does it manage to do that? By way of a clever compromise: the exchange of the usual optical drive for an Ultrabay, something normally only seen in Lenovo’s ThinkPad line of laptops. So I guess the Y500 does owe a little bit of its success to its business-grade brethren after all.
In our review unit (and in the particular configuration noted above), this Ultrabay comes prepopulated with the second GT 650M, equipped with its own heatsink/fan and all. The addition of this GPU effectively launches the Y500 into high-end gaming laptop territory—at least on the spec sheet. Other options for the Ultrabay also exist (sold separately), including a DVD burner and a second hard drive. The bay is easily removable via a switch on the back of the PC (see below).
A much needed architecture shift
It has been almost exactly five years since the release of the first Atom branded processors from Intel, starting with the Atom 230 and 330 based on the Diamondville design. Built for netbooks and nettops at the time, the Atom chips were a reaction to a unique market that the company had not planned for. While the early Atoms were great sellers, they were universally criticized by the media for slow performance and sub-par user experiences.
Atom has seen numerous refreshes since 2008, but they were all modifications of the simplistic, in-order architecture that was launched initially. With today's official release of the Silvermont architecture, the Atom processors see their first complete redesign from the ground up. With the focus on tablets and phones rather than netbooks, can Intel finally find a foothold in the growing markets dominated by ARM partners?
I should note that even though we are seeing the architectural reveal today, Intel doesn't plan on having shipping parts until late in 2013 for embedded, server, and tablet products, and not until 2014 for smartphones. Why the early reveal on the design then? I think that pressure from ARM-based designs (Qualcomm's Krait, Samsung's Exynos) as well as the upcoming release of AMD's own Kabini is forcing Intel's hand a bit. Certainly they don't want to be perceived as having fallen behind, and getting news about the potential benefits of their own x86 option out to the public will help.
Silvermont will be the first Atom processor built on the 22nm process, leaving the 32nm Saltwell designs behind. It also marks a shift in the Atom design process toward the tick/tock model we have seen with Intel's consumer desktop and notebook parts. Starting at the next node drop to 14nm, we'll see an annual cadence that first focuses on the node change, followed by an architecture change at the same node.
By keeping Atom on the same process technology as Core (Ivy Bridge, Haswell, etc), Intel can put more of a focus on the power capabilities of their manufacturing.
Introduction and Technical Specifications
Courtesy of Oyen Digital
Oyen Digital, a popular manufacturer of portable storage enclosures and devices, provided us with its MiniPro™ eSATA / USB 3.0 Portable Hard Drive enclosure for testing USB 3.0 enhanced mode on the ASUS P8Z77-I Deluxe motherboard. The enclosure pairs a 2.5" hard drive with USB 2.0, USB 3.0, and eSATA connectivity. We put it on the test bench with the ASUS P8Z77-I Deluxe board to probe the performance limits of the device. The MiniPro™ enclosure can be found at your favorite e-tailer for $39.95.
The MiniPro™ eSATA / USB 3.0 Portable Hard Drive enclosure is a simple aluminum enclosure supporting any 2.5" form factor hard drive up to SATA III speeds. The enclosure itself supports USB 2.0, USB 3.0, and eSATA connections. Because it uses the ASMedia ASM1053E chipset for USB 3.0 support, the enclosure supports both USB 3.0 normal mode transfer speeds and UASP (USB Attached SCSI Protocol) mode transfer speeds. UASP is a bulk transfer method for USB 3.0 connections that increases transfer speeds through parallel, simultaneous packet transfers. Per our sources at ASUS, UASP can be explained as follows:
The adoption of the SCSI Protocol in USB 3.0 provides its users with the advantage of having better data throughput than traditional BOT (Bulk-Only Transfer) protocol, all thanks to its streaming architecture as well as the improved queuing (NCQ support) and task management, which eliminated much of the round trip time between USB commands, so more commands can be sent simultaneously. Moreover, thanks to the multi-tasking aware architecture, the performance is further enhanced when multiple transfers occur.
The downside of UASP is that the receiving device (flash drive, external hard drive, etc.) must also be UASP-enabled for the protocol to work, which means checking your peripherals before purchase. However, since UASP is an industry standard, device support for the ASUS UASP implementation is not restricted to a particular controller manufacturer or device type, so the overall number of compatible peripherals should steadily grow.
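For readers wanting to confirm whether their own enclosure actually negotiated UASP, here is a minimal sketch for Linux: the kernel binds the "uas" driver to UASP-capable devices and falls back to "usb-storage" (legacy BOT) otherwise. The sample output line below is hypothetical; run `lsusb -t` on your own system to get real data.

```shell
# Hypothetical line captured from `lsusb -t` with a UASP enclosure attached:
sample='|__ Port 2: Dev 3, If 0, Class=Mass Storage, Driver=uas, 5000M'

# Driver=uas means the kernel negotiated UASP; Driver=usb-storage means
# the device fell back to the older Bulk-Only Transfer (BOT) protocol.
if echo "$sample" | grep -q 'Driver=uas'; then
    echo "UASP active"
else
    echo "BOT fallback"
fi
# prints: UASP active
```

The same check works for any vendor's enclosure, since the driver binding is decided by the device's reported protocol, not its brand.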
Technical Specifications (taken from the Oyen Digital website)
eSATA 6G (up to 6.0 Gbps)
USB 3.0 (up to 5.0 Gbps)
Drive compatibility: SATA III 2.5" HDD/SSD, up to 15mm height
OS support: Windows XP/Vista/7/8 & above; Mac OS 10.2 & above; Linux 2.4.22 & above
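To put the two interface ratings above in perspective, the usable ceiling is lower than the raw line rate because both eSATA 6G and USB 3.0 use 8b/10b line coding (10 line bits per 8 data bits). A quick back-of-the-envelope calculation:

```shell
# Theoretical peak payload throughput, assuming 8b/10b coding and
# ignoring protocol overhead: MB/s = Mbps * (8/10) / 8
echo "eSATA 6G: $((6000 * 8 / 10 / 8)) MB/s"   # prints: eSATA 6G: 600 MB/s
echo "USB 3.0:  $((5000 * 8 / 10 / 8)) MB/s"   # prints: USB 3.0:  500 MB/s
```

In practice a 2.5" hard drive tops out well below either figure, so the enclosure's interfaces are unlikely to be the bottleneck with spinning media.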
NVIDIA releases the GeForce GT 700M family
NVIDIA revolutionized gaming on the desktop with the release of its 600-series Kepler-based graphics cards in March 2012. With the release of the GeForce GT 700M series, Kepler enters the mobile arena to power laptops, ultrabooks, and all-in-one systems.
Today, NVIDIA introduces four new members to its mobile line: the GeForce GT 750M, the GeForce GT 740M, the GeForce GT 735M, and the GeForce GT 720M. These four new mobile graphics processors join the previously-released members of the GeForce GT 700M series: the GeForce GT 730M and the GeForce GT 710M. With the exception of the Fermi-based GeForce GT 720M, all of the newly-released mobile cores are based on NVIDIA's 28nm Kepler architecture.
Notebooks based on the GeForce GT 700M series will offer in-built support for the following new technologies:
Automatic Battery Savings through NVIDIA Optimus Technology
Automatic Game Configuration through the GeForce Experience
Automatic Performance Optimization through NVIDIA GPU Boost 2.0