Gaming laptops are something that most people are quick to reject as out of their price range, and there is a lot of sense in this train of thought. We know that laptop components are inherently lower performing than their desktop counterparts, and significantly more expensive. So spending more money for less powerful components seems like a bad trade-off to many gamers, even with the added benefit of portability.
However, we also seem to be in a bit of a plateau as far as generation-to-generation performance gain with desktop components. Midrange processors from a few generations ago are still more than capable of playing the vast majority of games, and even lower-end modern GPUs are able to game at 1080p.
So maybe it's time to take another look at the sub-$1000 gaming notebook options, and that's exactly what we are doing today with the Acer Aspire V 15 Nitro Black Edition.
The Aspire V Nitro is equipped with fairly modest components compared to what most people picture when they think of gaming laptops. Where machines such as the MSI GT70 Dominator or ASUS G751 seem to take the kitchen sink approach to mobile gaming, the Aspire V is a more carefully balanced option.
| Acer Aspire V 15 Nitro Black Edition | |
| --- | --- |
| Processor | Intel Core i7-4720HQ 2.6 GHz |
| Graphics Card | NVIDIA GTX 960M 4GB |
| Storage | 1 TB Hard Drive |
| Dimensions (W x D x H) | 15.34" x 10.14" x 0.86" - 0.94" |
Anchored by an Intel Core i7-4720HQ and a GTX 960M, the Aspire V Nitro isn't trying to reach the top of the mobile performance stack. A 15.6" display, 8GB of RAM, and a single 1TB spindle drive are all logical choices for a machine aimed at gaming on a budget.
While it's difficult for us to recommend buying any machine without an SSD these days, a 1TB drive is great for game storage on a machine like this. There are also other configuration options which add SATA M.2 SSDs alongside the 1TB drive, and we managed to open up our sample and install an SSD ourselves with little pain.
Business Model Based on Partnerships
*Alexandru Voica works for Imagination Technologies. His background includes research in computer graphics at the School of Advanced Studies Sant'Anna in Pisa and a brief stint as a CPU engineer, working on several high-profile 32-bit processors used in many mobile and embedded devices today. You can follow Alex on Twitter @alexvoica.*
Some months ago my colleague Rys Sommefeldt wrote an article offering his (deeply) technical perspective on how a chip gets made, from R&D to manufacturing. While his bildungsroman production covers a lot of the engineering details behind silicon production, it is light on the business side of things; and that is a good thing because it gives me the opportunity to steal some of his spotlight!
This article will give you a breakdown of the IP licensing model, describing the major players and the relationships between them. It is not designed to be a complete guide by any means and some parts might already sound familiar, but I hope it is a comprehensive overview that can be used by anyone who is new to product manufacturing in general.
The diagram below offers an analysis of the main categories of companies involved in the semiconductor food chain. Although I’m going to attempt to paint a broad picture, I will mainly offer examples based on the ecosystem formed around Imagination (since that is what I know best).
A simplified view of the manufacturing chain
Let’s work our way from left to right.
Traditionally, these are the companies that design and sell silicon IP. ARM and Imagination Technologies are perhaps the most renowned for their sub-brands: Cortex CPU + Mali GPU and MIPS CPU + PowerVR GPU, respectively.
Given the rapid evolution of the semiconductor market, such companies continue to evolve their business models beyond point solutions to become one-stop shops offering a wide variety of IP cores and platforms, comprising CPUs, graphics, video, connectivity, cloud software and more.
Qualcomm’s GPU History
Despite its market dominance, Qualcomm may be one of the least known contenders in the battle for the mobile space. While players like Apple, Samsung, and even NVIDIA are often cited as the most exciting and most revolutionary, none come close to the sheer sales, breadth of technology, and market share that Qualcomm occupies. Brands like Krait and Snapdragon have helped push the company into the top 3 semiconductor companies in the world, following only Intel and Samsung.
In July 1985, seven industry veterans came together in the den of Dr. Irwin Jacobs’ San Diego home to discuss an idea. They wanted to build “Quality Communications” (thus the name Qualcomm) and outlined a plan that evolved into one of the telecommunications industry’s great start-up success stories.
Though Qualcomm sold its own handset business to Kyocera in 1999, many of today’s most popular mobile devices are powered by Qualcomm’s Snapdragon mobile chipsets with integrated CPU, GPU, DSP, multimedia CODECs, power management, baseband logic and more. In fact the typical “chipset” from Qualcomm encompasses up to 20 different chips of different functions besides just the main application processor. If you are an owner of a Galaxy Note 4, Motorola Droid Turbo, Nexus 6, or Samsung Galaxy S5, then you are most likely a user of one of Qualcomm’s Snapdragon chipsets.
Before 2006, the mobile GPU as we know it today was largely unnecessary. Feature phones and “dumb” phones still made up the large majority of the market, with smartphones and mobile tablets still in the early stages of development. At this point all the visual data presented on the screen, whether on a small monochrome display or the color screen of a PDA, was drawn by a software renderer running on traditional CPU cores.
But by 2007, the first fixed-function, OpenGL ES 1.0 class of GPUs started shipping in mobile devices. These dedicated graphics processors were originally focused on drawing and updating the user interface on smartphones and personal data devices. Eventually these graphics units were used for what would be considered the most basic gaming tasks.
Introduction and Specifications
The ASUS Zenfone 2 is a 5.5-inch smartphone with a premium look and the specs to match. But the real story here is that it sells for just $199 or $299 unlocked, making it a tempting alternative to contract phones without the concessions often made with budget devices; at least on paper. Let's take a closer look to see how the new Zenfone 2 stacks up! (Note: a second sample unit was provided by Gearbest.com.)
When I first heard about the Zenfone 2 from ASUS I was eager to check it out given its mix of solid specs, nice appearance, and a startlingly low price. ASUS has created something that has the potential to transcend the disruptive nature of a phone like the Moto E, itself a $149 alternative to contract phones that we reviewed recently. With its premium specs to go along with very low unlocked pricing, the Zenfone 2 could be more than just a bargain device, and if it performs well it could make some serious waves in the smartphone industry.
The Zenfone 2 also features a 5.5-inch IPS LCD screen with 1920x1080 resolution (in line with an iPhone 6 Plus or the OnePlus One), and beyond the internal hardware ASUS has created a phone that looks every bit the part of a premium device that one would expect to cost hundreds more. In fact, without spoiling anything up front, I will say that the context of price won't be necessary to judge the merit of the Zenfone 2; it stands on its own as a smartphone, and not simply a budget phone.
The big question is going to be how the Zenfone 2 compares to existing phones, and with its quad-core Intel Atom SoC this is something of an unknown. Intel has been making a push to enter the U.S. smartphone market (with earlier products more widely available in Europe) and the Zenfone 2 marks an important milestone for both Intel and ASUS in this regard. The Z3580 SoC powering my review unit certainly sounds fast on paper with its 4 cores clocked up to 2.33 GHz, and no less than 4 GB of RAM on board (and a solid 64 GB onboard storage as well).
Introduction and Design
With the introduction of the ASUS G751JT-CH71, we’ve now got our first look at the newest ROG notebook design revision. The celebrated design language remains the same, and the machine’s lineage is immediately discernible. However, unlike the $2,000 G750JX-DB71 unit we reviewed a year and a half ago, this particular G751JT configuration is 25% less expensive at just $1,500. So first off, what’s changed on the inside?
(Editor's Note: This is NOT the recent G-Sync version of the ASUS G751 notebook that was announced at Computex. This is the previously released version, one that I am told will continue to sell for the foreseeable future and one that will come at a lower overall price than the G-Sync enabled model. Expect a review on the G-Sync derivative very soon!)
Quite a lot, as it turns out. For starters, we’ve moved all the way from the 700M series to the 900M series—a leap which clearly ought to pay off in spades in terms of GPU performance. The CPU and RAM remain virtually equivalent, while the battery has migrated from external to internal and enjoyed a 100 mAh bump in the process (from 5900 to 6000 mAh). So what’s with the lower price then? Well, apart from the age difference, it’s the storage: the G750JX featured both a 1 TB storage drive and a 256 GB SSD, while the G751JT-CH71 drops the SSD. That’s a small sacrifice in our book, especially when an SSD is so easily added thereafter. By the way, if you’d rather simply have ASUS handle that part of the equation for you, you can score a virtually equivalent configuration (chipset and design evolutions notwithstanding) in the G751JT-DH72 for $1750—still $250 less than the G750JX we reviewed.
Digging into a specific market
A little while ago, I decided to think about processor design as a game. You are given a budget of complexity, which is determined by your process node, power, heat, die size, and so forth, and the objective is to lay out features in the way that suits your goal and workload best. While not the topic of today's post, GPUs are a great example of what I mean. They make the assumption that in a batch of work, nearby tasks are very similar, such as the math behind two neighboring pixels on the screen. This assumption allows GPU manufacturers to save complexity by chaining dozens of cores together into not-quite-independent work groups. The circuit fits the work better, and thus it lets more get done in the same complexity budget.
Carrizo is aiming at a 63 million unit per year market segment.
This article is about Carrizo, though. This is AMD's sixth-generation APU, in a line that started with Llano's release in June 2011. For this launch, Carrizo is targeting the 15W and 35W power envelopes for $400-$700 USD notebook devices. AMD needed to increase efficiency on the same 28nm process that we have seen in their product stack since Kabini and Temash were released in May of 2013. They tasked their engineers to optimize the APU's design for these constraints, which led to dense architectures and clever features on the same budget of complexity, rather than smaller transistors or a bigger die.
15W was their primary target, and they claim to have exceeded their own expectations.
Backing up for a second. Beep. Beep. Beep. Beep.
When I met with AMD last month, I brought up the Bulldozer architecture with many individuals. I suspected that it was a quite clever design that didn't reach its potential because of external factors. As I said at the start of this editorial, processor design is a game and, if you can save complexity by knowing your workload, you can do more with less.
Bulldozer looked like it wanted to take a shortcut by cutting elements that its designers believed would be redundant going forward. First and foremost, two cores share a single floating point unit (FPU). While you need some floating point capacity, upcoming workloads could use the GPU, which is right there on the same die, for a massive increase in performance. As such, the complexity dedicated to every second FPU can be cut and used for something else. You can see this trend throughout various elements of the architecture.
Announced just this past June at last year’s Google I/O event, Android TV is a platform developed by Google, running Android 5.0 and higher, that aims to create an interactive experience for the TV. This platform can be built into a TV directly as well as into set-top style boxes, like the NVIDIA SHIELD we are looking at today. The idea is to bring the breadth of apps and content to the TV through the Android operating system in a way that is both convenient and intuitive.
NVIDIA announced SHIELD back in March at GDC as the first product to use the company’s latest Tegra processor, the X1. This SoC combines an 8-core big.LITTLE ARM processor design with a 256-core implementation of the NVIDIA Maxwell GPU architecture, providing GPU performance previously unseen in an Android device. I have already spent some time with the NVIDIA SHIELD at various events and the promise was clearly there to make it a leading option for Android TV adoption, but obviously there were questions to be answered.
Today’s article will focus on my early impressions with the NVIDIA SHIELD, having used it both in the office and at home for a handful of days. As you’ll see during the discussion there are still some things to be ironed out, some functionality that needs to be added before SHIELD and Android TV can really be called a must-buy product. But I do think it will get there.
And though this review will focus on the NVIDIA SHIELD, it’s impossible not to marry the success of SHIELD with the success of Google’s Android TV. The dominant use case for SHIELD is as a media playback device, with the gaming functionality as a really cool side project for enthusiasts and gamers looking for another outlet. For SHIELD to succeed, Google needs to prove that Android TV can improve over other integrated smart TV platforms as well as other set-top box platforms like Boxee, Roku and even the upcoming Apple TV refresh.
But first, let’s get an overview of the NVIDIA SHIELD device, pricing and specifications, before diving into my experiences with the platform as a whole.
Introduction: Improving Portable Sound
The Calyx PaT is very small USB DAC and headphone amp that can be used with PCs and mobile devices, offering the possibility of better sound from just about any digital source. So how does it sound? Let’s find out!
The PaT is a very interesting little device, to be sure. It rather resembles a large domino and weighs less than 1 ounce thanks to an ultra-light aluminum construction. It requires no battery or power source other than its micro USB connection, yet it provides sufficient power (0.8 V output) for in-ear monitors and efficient headphones through its 3.5mm headphone jack. Inside is a proprietary mix of DAC and amplifier circuitry, and like other products produced by Calyx, a Korean company with little presence in the United States, there is the promise of a dedication to great sound. Did Calyx pull it off with the diminutive PaT?
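As a rough sanity check (these numbers are our own back-of-the-envelope math, not anything from Calyx's spec sheet), the power an ideal 0.8 V output can deliver into a headphone load follows P = V²/Z, which illustrates why a battery-less device like this is better matched to in-ear monitors and efficient headphones than to high-impedance studio cans:

```python
# Back-of-the-envelope estimate only; assumes an ideal voltage source and
# ignores the amp's output impedance and current limits (neither is published).

def max_power_mw(v_rms, impedance_ohms):
    """Theoretical maximum power delivery in milliwatts: P = V^2 / Z."""
    return (v_rms ** 2) / impedance_ohms * 1000.0

# Typical loads: sensitive IEMs (16 ohm), efficient full-size headphones
# (32 ohm), and high-impedance studio headphones (300 ohm).
for z in (16, 32, 300):
    print(f"{z:>3} ohm load: {max_power_mw(0.8, z):.1f} mW")
```

At 300 Ω the theoretical power drops to roughly 2 mW, which is why pairing matters so much for a USB-powered dongle of this size.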
Improving Portable Sound
Outboard DACs and headphone amplifiers for computers and mobile devices are nothing new, with recent products like AudioQuest’s Dragonfly a prime example in the portable USB DAC market (though it offers no mobile support). When I first heard about the PaT during CES it was still in the prototype stage, but I was interested because of the Calyx name if nothing else, as I already owned the Calyx M DAP and had been quite honestly blown away by the sound.
So what need might I have for the interestingly-named PaT (pronounced "paat", meaning "bean" in reference to the small size), which is itself a DAC that requires another device to play music files? It wasn’t until I had the opportunity to speak with Calyx president Seungmok Yi during CES (via video chat as I couldn’t attend the show) that I started to realize that this could be a compelling product, not just for the $99 price tag - a bargain for an audiophile product - but because of how versatile the PaT can be. You don't have to identify as an "audiophile" to appreciate the clearer and more detailed sound of a good DAC, especially when so many of us simply haven't heard one (especially on mobile devices).
ARM Releases Cortex-A72 for Licensing
On February 3rd, ARM announced a slew of new designs, including the Cortex A72. Few details were shared with us at the time, but what we learned was that it could potentially redefine power and performance in the ARM ecosystem. Ryan was invited to London to participate in a deep dive of what ARM has done to improve its position against market behemoth Intel in the very competitive mobile space. Intel has a leg up on process technology with its 14nm Tri-Gate process, and it continues to work hard at making its x86 based processors more power efficient while still maintaining good performance. There are certain drawbacks, however, to using an ISA that is focused on high performance computing rather than being designed from scratch to provide good performance with excellent energy efficiency.
ARM has been on a pretty good roll with their Cortex A9, A7, A15, A17, A53, and A57 parts over the past several years. These designs have been utilized in a multitude of products and scenarios, with configurations that have scaled up to 16 cores. While each iteration has improved upon the previous, ARM is facing the specter of Intel’s latest generation of highly efficient x86 SoCs built on the 2nd gen 14nm Tri-Gate process. Several things have fallen into place for ARM to help them stay competitive, but we also cannot ignore the experience and design hours that have led to this product.
(Editor's Note: During my time with ARM last week it became very apparent that the company is not standing still, not satisfied with its current status. With competition from Intel, Qualcomm and others ramping up over the next 12 months in both mobile and server markets, ARM will be more dependent than ever on the evolution of its core and GPU designs to maintain advantages in performance and efficiency. As Josh will detail here, the Cortex-A72 appears to be an incredibly impressive design, and all indications, along with conversations I have had with others outside of ARM, suggest that it will be an incredibly successful product.)
Cortex A72: Highest Performance ARM Cortex
ARM has been ubiquitous in mobile applications since it first started licensing its designs in the 90s. Its cores were found everywhere, it seemed, but most people wouldn’t recognize the name ARM because these chips were fabricated and sold by licensees under their own names. Companies like TI, Qualcomm, Apple, DEC and others all licensed and adopted ARM technology in one form or another.
ARM’s importance grew dramatically with the introduction of increasingly complex cellphones and smartphones. They also gained attention through multimedia devices such as the Microsoft Zune. What was once a fairly niche company with low performance, low power offerings became the 800 pound gorilla in the mobile market, with billions of chips based on ARM technology sold yearly. To stay in that position ARM has worked aggressively to continually provide excellent power characteristics for their parts, but now they are really focusing on overall performance and capabilities to address not only the smartphone market, but also the higher performance computing and server spaces where they want a significant presence.
When I first was handed the Intel Compute Stick product at CES back in January, my mind began to race with a lot of questions. The first set were centered around the capabilities of the device itself: where could it be used, how much performance could Intel pack into it and just how many users would be interested in a product like this? Another set of questions was much more philosophical in nature: why was Intel going in this direction, does this mean an end for the emphasis on high performance componentry from Intel and who comes up with these darned part numbers?
I have since settled my mind on the issues surrounding Intel’s purpose with the Compute Stick and began to dive into the product itself. On the surface the Intel Compute Stick is a product entering late into a potentially crowded market. We already have devices like the Roku, Google Chromecast, the Apple TV, and even the Amazon Fire TV Stick. All of those devices share some of the targets and goals of the Compute Stick, but the one area where Intel’s product really stands out is flexibility. The Roku has the most pre-built applications and “channels” for a streaming media box. The Chromecast is dirt cheap at just $30 or so. Even Amazon’s Fire TV Stick is clearly the best choice for streaming Amazon’s own multimedia services. But the Intel Compute Stick can do all of those things – in addition to operating as a standalone PC with Windows or Linux. Anything you can do I can do better…
But it’s not a product without a few flaws, most of which revolve around the current state of operating system designs for TVs and larger displays. Performance obviously isn’t peeling the paint off any walls, as you would expect. But I still think that, at $150 with a full copy of Windows 8.1 with Bing, the Intel Compute Stick is going to find more fans than you might have first expected.
Battle of the Sixes, they call it
GameBench is a low-level application released in 2014 that attempts to bring the technical analysis and benchmarking capability of the PC to the mobile device. You might remember that I showed you some early results and discussed our use of the GameBench testing capability in my Dell Venue 8 7000 review a few months back; my understanding and practice of using the software was just beginning at that time and continues to grow as I spend time with the software.
The idea is simple yet powerful: GameBench allows Android users, and soon iOS users, the ability to monitor frame rates of nearly any game or 3D application that you can run on your phone or tablet to accurately measure real-world performance. This is similar to what we have done for years on the PC with FRAPS and allows us to gather average frames per second data over time. This is something that was previously unavailable to consumers or press for that matter and could be a very powerful tool for device to device comparisons going forward. The ability to utilize actual games and applications and gather benchmark data that is accurate to consumer experiences, rather than simply synthetic graphics tests that we have been forced to use in the past, will fundamentally change how we test and compare mobile hardware.
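To make the FRAPS comparison concrete, here is a minimal sketch of how average FPS is typically derived from per-frame timestamps. The function name and approach are illustrative only; GameBench's internal implementation is not public.

```python
# Illustrative only: GameBench's internals aren't documented here, but
# FRAPS-style tools reduce a capture of frame-present timestamps to
# summary statistics along these lines.

def fps_stats(timestamps):
    """Given monotonic frame timestamps in seconds, return
    (average_fps, worst_frame_time_ms)."""
    if len(timestamps) < 2:
        raise ValueError("need at least two frames")
    # Frame times are the deltas between consecutive frame presents.
    frame_times = [b - a for a, b in zip(timestamps, timestamps[1:])]
    elapsed = timestamps[-1] - timestamps[0]
    avg_fps = len(frame_times) / elapsed
    worst_ms = max(frame_times) * 1000.0
    return avg_fps, worst_ms

# A steady 60 Hz capture lasting one second (61 timestamps, 60 frames).
ts = [i / 60.0 for i in range(61)]
avg, worst = fps_stats(ts)
print(f"{avg:.0f} FPS average, {worst:.1f} ms worst frame")
```

The same capture can also yield minimum FPS and frame-time variability, which is what makes real-game data so much richer than a single synthetic benchmark score.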
Image source: GameBench.net
Today, GameBench itself released a small report meant to showcase some of the kinds of data the software can gather while also revealing early support for Apple’s iPhone and iPad devices. Primary competitors for the comparison include the Apple iPhone 6, the Samsung Galaxy S6, HTC One M9 and Motorola Nexus 6. I was able to get an early look at the report and offer some feedback, while sharing with our readers my views on the results.
GameBench tested those four devices in a total of 10 games:
- Asphalt 8: Airborne
- Real Racing 3
- Dead Trigger 2
- Kill Shot
- Modern Combat 5: Blackout
- Boom Beach
- XCOM: Enemy Unknown
- GTA: San Andreas
- Marvel: Contest of Champions
- Monument Valley
These games all vary in price and in play style, but they all are in the top 50 games lists for each platform and are known for their graphically intense settings and look.
Introduction and First Impressions
The ASUS X205 offers the full Windows 8.1 notebook experience for the cost of a Chromebook, and the design offers a surprising amount of polish for the price. Is this $199 Atom-powered notebook a viable solution as a daily driver? We're about to find out.
What do you use a laptop for? A thoughtful answer to this question can be the most important part of the process when selecting your next notebook PC, and if your needs are modest there are a growing number of very low-cost options on the market. For example, I personally do not play games on a laptop, typically alternating between web, email, and Microsoft Office. Thus for myself the most important aspects of a notebook PC become screen quality, keyboard, trackpad, and battery life. High performance is not of utmost importance, and I assure myself of at least speedy load times by always choosing (or installing) a solid-state hard drive. For those reasons when I first read the description and specifications of the ASUS X205 notebook, I took notice.
The X205 is a small notebook with an 11.6” display and 1366x768 resolution, essentially matching the form-factor of Apple's 11.6" MacBook Air. It is powered by a quad-core Intel Atom processor with 2GB of RAM, and onboard storage is solid-state - though limited to 32GB and of the slower eMMC variety (which is in keeping with many Chromebooks). There is adequate connectivity as well, with the expected wireless card and two USB 2.0 ports. One aspect of this design that intrigued me was the trackpad, which ASUS claims is using "smartphone technology", indicating a touchscreen digitizer implementation. Smoothness and accuracy are the biggest problems I find with most inexpensive notebook trackpads, and if this turns out to be a strong performer it would be a major boon to the X205's overall usability. I opted for the Microsoft Signature Edition of the X205TA, which carries the same $199 retail price but does not come preloaded with any trialware or other junk software.
At the outset this feels like a compelling product simply because it retails for the same price as an average Chromebook, but offers the flexibility of a full Windows 8.1 installation. Granted this is the “Windows 8.1 with Bing” version found on low-cost, low-power devices like this, but it offers the functionality of the standard version. While Chrome OS and Google's productivity apps are great for many people, the ability to install and run Windows applications made this highly preferable to a Chromebook for me. Of course beyond the operating system the overall experience of using the laptop will ultimately decide the viability of this inexpensive product, so without further preamble let's dive right into the X205TA notebook!
Motorola has released an updated version of their low-cost Moto E smartphone for 2015, adding faster hardware and LTE support to an unlocked device with an unsubsidized retail of just $149. In this review we'll examine this new phone to find out if there are any significant limitations given its bargain price.
There has been a trend toward affordability in smartphone pricing that accelerated in 2014 and has continued its pace to start this year. Of course expensive flagships still exist at their $500+ unsubsidized retail prices, but is the advantage of such a device worth the price premium? In most cases a customer in a retail space would be naturally drawn to the more expensive phones on display, with their large, sharp screens and thin designs that just look better by comparison. The longstanding $500 - $700 unsubsidized cost of the latest and greatest smartphones has made 2-year contract pricing a part of life for many, with contract offers and programs allowing users to lease or finance phones positioned as attractive alternatives to the high initial price. And while these high-end options can certainly reward the additional cost, there are rapidly diminishing returns on investment once we venture past the $200 mark with a mobile device. So it’s this bottom $200 of the full-price phone market which is so interesting, not just to myself, but to the future of smartphones as they become the commodity devices that the so-called “feature phones” once were.
One of the companies at the forefront of a lower-cost approach to smartphones is Motorola, now independent from Google after Motorola Mobility was sold to Lenovo in October of 2014. A year before the sale Motorola had released a low-cost smartphone called the Moto G, an interesting product which ran stock Android for a fraction of the cost of a Google Play edition or even Nexus device; though it was underpowered with decidedly low-end specs. After a redesign in 2014, however, the 2nd edition Moto G became a much more compelling option, offering a unique combination of low price, respectable hardware, a stock Android experience, and Motorola’s now trademark design language, to a market drowning in bloated MSRPs. There was just one problem: while the 2014 Moto G had solid performance and had (quite importantly) moved to a larger 5-inch screen with a higher 720x1280 resolution IPS panel, there was still no LTE support. Selling without a contract for just $179 unlocked made the lack of LTE at least understandable, but as carrier technology has matured the prevalence of LTE has made it an essential part of future devices - especially in 2015. Admittedly 3G data speeds are fast enough for many people, but the structure of the modern mobile data plan often leaves that extra speed on the table if one’s device doesn’t support LTE.
Way back in January of this year, while attending CES 2015 in Las Vegas, we wandered into the MSI suite without any idea of what new and exciting products we might see. Besides the GT80 notebook with a mechanical keyboard, the MSI GS30 Shadow was easily the most interesting and exciting technology there. Although MSI is not the first company to try this, the Shadow is the most recent attempt to combine the benefits of a thin and light notebook with a discrete, high performance GPU, available when the notebook is connected to its docking station.
The idea has always been simple but the implementation has always been complex. Take a thin, light, moderately powered notebook that is usable and high quality in its own right and combine it with the ability to connect a discrete GPU while at home for gaming purposes. In theory, this is the best of both worlds: a notebook PC for mobile productivity and gaming capability courtesy of an external GPU. But as the years have gone on, more companies try and more companies fail; the integration process is just never as perfect a mix as we hope.
Today we see if MSI and the GS30 Shadow can fare any better. Does the combination of a very high performance thin and light notebook and the GamingDock truly create a mobile and gaming system that is worth your investment?
The perfect laptop; it is every manufacturer’s goal. Obviously no one has gotten there yet (or we would have all stopped writing reviews of them). At CES this past January, we got our first glimpse of a new flagship Ultrabook from Dell: the XPS 13. It got immediate attention for some of the physical characteristics it included, like an ultra-thin bezel and a 13-in screen in the body of a typical 11-in laptop, all while being built in a sleek thin and light design. It’s not a gaming machine, despite what you might remember from the XPS line, but the Intel Core-series Broadwell-U processor keeps performance speedy in standard computing tasks.
As a frequent traveler that tends to err on the side of thin and light designs, as opposed to high performance notebooks with discrete graphics, the Dell XPS 13 is immediately compelling on a personal level as well. I have long been known as a fan of what Lenovo builds for this space, trusting my work machine requirements to the ThinkPad line for years and years. Dell’s new XPS 13 is a strong contender to take away that top spot for me and perhaps force me down the path of an upgrade of my own. So, you might consider this review as my personal thesis on the viability of said change.
The Dell XPS 13 Specifications
First, make sure as you hunt around the web for information on the XPS 13 that you are focusing on the new 2015 model. Much like we see from Apple, Dell reuses model names and that can cause confusion unless you know what specifications to look for or exactly what sub-model you need. Trust me, the new XPS 13 is much better than anything that existed before.
Introduction and Specifications
Had you asked me just a few years ago if 6-inch phones would not only be a viable option, but a dominant force in the mobile computing market, I would have likely rolled my eyes. At that time phones were small, tablets were big, and phablets were laughed at. Today, no one is laughing at the Galaxy Note 4, the latest iteration in Samsung’s created space of larger-than-you-probably-thought-you-wanted smartphones. Nearly all consumers are amazed by the size of the screen and the real estate this class of phone provides, but some are instantly put off by the way the phone feels in the hand – it can come off as foreign, cumbersome, and unusable.
In my time with the new Galaxy Note 4 – my first extended-use experience with a phone of this magnitude – I have come to see the many positive traits that a larger phone can offer. There are some trade-offs of course, including the pocket/purse viability debate. One thing beyond question is that a large phone means a big screen, one that can display a large amount of data, whether on a website or in a note-taking application. The extra screen real estate can instantly improve your productivity. To that end, Samsung also provides a multi-tasking framework that lets you run multiple programs in a side-by-side view, similar to what the original version of Windows 8 did. It might seem unnecessary for an Android device, but as soon as you find the situation where you need it, going back to a device without it can feel archaic.
A larger phone also means that there is more room for faster hardware, a larger camera sensor, and a bigger battery. Samsung even includes an active stylus called the S-Pen in the body of the device – something that few other modern tablets/phablets/phones feature.
Introduction and Design
Although the target market and design emphasis may be different, there is one thing consumer and business-grade laptops have in common: a drift away from processing power and toward portability and efficiency. At the risk of repeating our introduction for the massive MSI GT72 gaming notebook we reviewed last month, it seems that battery life, temperature, and power consumption get all the attention these days. And arguably, it makes sense for most people: it’s true that CPU performance gains have in years past greatly outstripped the improvements in battery life, and that likewise performance gains could be realized far more easily by upgrading storage device speed (such as by replacing conventional hard drives with solid-state drives) than by continuing to focus on raw CPU power and clock rates. As a result, we’ve seen many mobile CPU speeds plateauing or even dropping in exchange for a reduction in power consumption, while simultaneously cases have slimmed and battery life has jumped appreciably across the board.
But what if you’re one of the minority who actually appreciates and needs raw computing power? Fortunately, Lenovo’s ThinkPad W series still has you covered. This $1,500 workstation is the business equivalent of the consumer-grade gaming notebook. It’s one of the few designs where portability takes a backseat to raw power and ridiculous spec. Users shopping for a ThinkPad workstation aren’t looking to go unplugged all day long on an airplane tray table. They’re looking for power, reliability, and premium design, with function over form as a rule. And that’s precisely what they’ll get.
Beyond the fairly typical (and very powerful) Intel Core i7-4800MQ CPU—often found in gaming PCs and workstations—and just 8 GB of DDR3-1600 MHz RAM (single-channel), there is a 256 GB SSD and a unique feature to go along with the WQHD+ display panel: a built-in X-Rite Pantone color sensor, which can be used to calibrate the panel simply by closing the lid when prompted. How well this functions is another topic entirely, but at the very least, it’s a novel idea.
Finally, a SHIELD Console
NVIDIA is filling out the family of the SHIELD brand today with the announcement of SHIELD, a set-top box powered by the Tegra X1 processor. SHIELD will run Android TV and act as a game playing, multimedia watching, GRID streaming device. Selling for $199 and available in May of this year, there is a lot to discuss.
Odd naming scheme aside, the SHIELD looks to be an impressive little device, sitting on your home theater or desk and bringing a ton of connectivity and performance to your TV. Running Android TV means the SHIELD will have access to the entire library of Google Play media, including music, movies, and apps. SHIELD supports 4K video playback at 60 Hz thanks to an HDMI 2.0 connection and fully supports H.265/HEVC decode thanks to the Tegra X1 processor.
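To see why HDMI 2.0 matters for 4K at 60 Hz, a quick back-of-the-envelope calculation helps. The sketch below is ours, not NVIDIA's, and the HDMI link rates are the commonly cited maximum data rates after 8b/10b encoding (approximations, not exact spec values):

```python
# Rough check: uncompressed 4K60 pixel data vs. HDMI link capacity.
# Link-rate figures are the commonly cited post-8b/10b data rates.

def video_data_rate_gbps(width, height, fps, bits_per_pixel=24):
    """Raw (uncompressed) pixel data rate in Gbit/s, ignoring blanking."""
    return width * height * fps * bits_per_pixel / 1e9

HDMI_1_4_GBPS = 8.16   # ~10.2 Gbit/s TMDS minus 8b/10b overhead
HDMI_2_0_GBPS = 14.4   # ~18 Gbit/s TMDS minus 8b/10b overhead

uhd60 = video_data_rate_gbps(3840, 2160, 60)
print(f"4K60 8-bit RGB: {uhd60:.2f} Gbit/s")  # ~11.94 Gbit/s

# More than HDMI 1.4 can carry, comfortably within HDMI 2.0:
assert uhd60 > HDMI_1_4_GBPS
assert uhd60 < HDMI_2_0_GBPS
```

Even before accounting for blanking intervals, 4K60 pixel data alone overruns what HDMI 1.4 can deliver, which is why an HDMI 2.0 port is a prerequisite for a 4K60 set-top box.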
Here is a full breakdown of the device's specifications.
|NVIDIA SHIELD Specifications|
|Processor||NVIDIA® Tegra® X1 processor with 256-core Maxwell™ GPU with 3GB RAM|
|Video Features||4K Ultra-HD Ready with 4K playback and capture up to 60 fps (VP9, H.265, H.264)|
|Audio||7.1 and 5.1 surround sound pass-through over HDMI
High-resolution audio playback up to 24-bit/192kHz over HDMI and USB
High-resolution audio upsample to 24-bit/192kHz over USB|
|Wireless||802.11ac 2x2 MIMO 2.4 GHz and 5 GHz Wi-Fi|
|Interfaces||Two USB 3.0 (Type A)
MicroSD slot (supports 128GB cards)
IR receiver (compatible with Logitech Harmony)|
|Gaming Features||NVIDIA GRID™ streaming service|
|SW Updates||SHIELD software upgrades directly from NVIDIA|
|Power||40W power adapter|
|Weight and Size||Weight: 23oz / 654g
Height: 5.1in / 130mm
Width: 8.3in / 210mm
Depth: 1.0in / 25mm|
|OS||Android TV™, Google Cast™ Ready|
|In the box||NVIDIA SHIELD
NVIDIA SHIELD controller
HDMI cable (High Speed), USB cable (Micro-USB to USB)
Power adapter (includes plugs for North America, Europe, UK)|
|Requirements||TV with HDMI input, Internet access|
|Options||SHIELD controller, SHIELD remote, SHIELD stand|
Obviously the most important feature is the Tegra X1 SoC, built on an 8-core 64-bit ARM processor and a 256 CUDA core Maxwell architecture GPU. This gives the SHIELD set-top more performance than basically any other mobile part on the market, and demos showing Doom 3 and Crysis 3 running natively on the hardware drive the point home. With integrated HEVC decode support, the console is the first Android TV device to offer support for 4K video content at 60 FPS.
Even though storage is only coming in at 16GB, the inclusion of a MicroSD card slot enables expansion by as much as 128GB more for content and local games.
The first choice for networking will be the Gigabit Ethernet port, but the 2x2 dual-band 802.11ac wireless controller means that even those of us who don't have hardwired Internet going to our TV will be able to utilize all the performance and features of SHIELD.
Who Should Care? Thankfully, Many People
The Khronos Group has made three announcements today: Vulkan (their competitor to DirectX 12), OpenCL 2.1, and SPIR-V. Because there is actually significant overlap, we will discuss them in a single post rather than splitting them up. Each has a role in the overall goal to access and utilize graphics and compute devices.
Before we get into what everything is and does, let's give you a little tease to keep you reading. First, Khronos designs their technologies to be self-reliant. As such, while there will be some minimum hardware requirements, the OS pretty much just needs to have a driver model. Vulkan will not be limited to Windows 10 and similar operating systems. If a graphics vendor wants to go through the trouble, which is a gigantic if, Vulkan can be shimmed into Windows 8.x, Windows 7, possibly Windows Vista despite its quirks, and maybe even Windows XP. The words “and beyond” came up after Windows XP, but don't hold your breath for Windows ME or anything. Again, the further back in Windows versions you get, the larger the “if” becomes but at least the API will not have any “artificial limitations”.
Outside of Windows, the Khronos Group is the dominant API curator. Expect Vulkan on Linux, Mac, mobile operating systems, embedded operating systems, and probably a few toasters somewhere.
On that topic: there will not be a “Vulkan ES”. Vulkan is Vulkan, and it will run on desktop, mobile, VR, consoles that are open enough, and even cars and robotics. From a hardware side, the API requires a minimum of OpenGL ES 3.1 support. This is fairly high-end for mobile GPUs, but it is the first mobile spec to require compute shaders, which are an essential component of Vulkan. The presenter did not state a minimum hardware requirement for desktop GPUs, but he treated it like a non-issue. Graphics vendors will need to be the ones making the announcements in the end, though.
SoFIA, Cherry Trail Make Debuts
Mobile World Congress is traditionally dominated by Samsung, Qualcomm, HTC, and others, yet Intel continues to make inroads into the mobile market. Though the company has admittedly lost a lot of money during this growing process, Intel pushes forward with today's announcement of a trio of new processor lines that keep the Atom brand. The Atom x3, the Atom x5, and the Atom x7 will be the company's answer in 2015 for a wide range of products, starting at the sub-$75 phone market and stretching up to ~$400 tablets and all-in-ones.
There are some significant differences in these Atom processors, more than the naming scheme might indicate.
Intel Atom x3 SoFIA Processor
For years now we have questioned Intel's capability to develop a processor that could fit inside the thermal envelope required for a smartphone while also offering performance comparable to Qualcomm, MediaTek, and others. It seemed that the x86 architecture was a weight around Intel's ankles rather than a float lifting it up. Intel's answer was the development of SoFIA: (S)mart (o)r (F)eature phone with (I)ntel (A)rchitecture. The project started about two years ago, leading to the product announcements finally reaching us today. SoFIA parts are "designed for budget smartphones; SoFIA is set to give Qualcomm and MediaTek a run for their money in this rapidly growing part of the market."
The SoFIA processors are based on the same Silvermont architecture as the current generation of Atom processors, but they are more tuned for power efficiency. Originally planned as a dual-core-only option, Intel has actually built both dual-core and quad-core variants that will pair with varying modem options to create combinations that best fit target price points and markets. Intel has partnered with RockChip for these designs, even though the architecture is completely IA/x86 based. Production will be done on a 28nm process technology at an unnamed vendor, though you can expect that to mean TSMC. This allows RockChip access to the designs, helps accelerate development, and gets them released into the key markets that Intel is targeting.