The perfect laptop: it is every manufacturer’s goal. Obviously no one has gotten there yet (or we would have all stopped writing reviews of them). At CES this past January, we got our first glimpse of a new flagship Ultrabook from Dell: the XPS 13. It got immediate attention for its physical characteristics, like an ultra-thin bezel and a 13-in screen in the body of a typical 11-in laptop, all in a sleek, thin and light design. It’s not a gaming machine, despite what you might remember from the XPS line, but the Intel Core-series Broadwell-U processor keeps performance speedy in standard computing tasks.
As a frequent traveler who tends to err on the side of thin and light designs, as opposed to high-performance notebooks with discrete graphics, I find the Dell XPS 13 immediately compelling on a personal level as well. I have long been known as a fan of what Lenovo builds for this space, trusting my work machine requirements to the ThinkPad line for years and years. Dell’s new XPS 13 is a strong contender to take away that top spot for me and perhaps force me down the path of an upgrade of my own. So, you might consider this review as my personal thesis on the viability of said change.
The Dell XPS 13 Specifications
First, make sure as you hunt around the web for information on the XPS 13 that you are focusing on the new 2015 model. Much like we see from Apple, Dell reuses model names and that can cause confusion unless you know what specifications to look for or exactly what sub-model you need. Trust me, the new XPS 13 is much better than anything that existed before.
Introduction and Specifications
Had you asked me just a few years ago if 6-inch phones would be not only a viable option but a dominant force in the mobile computing market, I would have likely rolled my eyes. At that time phones were small, tablets were big, and phablets were laughed at. Today, no one is laughing at the Galaxy Note 4, the latest iteration in the larger-than-you-probably-thought-you-wanted smartphone space that Samsung created. Nearly all consumers are amazed by the size of the screen and the real estate this class of phone provides, but some are instantly put off by the way the phone feels in the hand: it can come off as foreign, cumbersome, and unusable.
In my time with the new Galaxy Note 4 – my first extended-use experience with a phone of this magnitude – I have come to see the many positive traits that a larger phone can offer. There are some trade-offs of course, including the pocket/purse viability debate. One thing beyond question is that a large phone means a big screen, one that can display a large amount of data whether that be on a website or in a note-taking application. The extra screen real estate can instantly improve your productivity. To that end, Samsung also provides a multi-tasking framework that lets you run multiple programs in a side-by-side view, similar to what the original version of Windows 8 did. It might seem unnecessary for an Android device, but as soon as you find the situation where you need it, going back to a device without it can feel archaic.
A larger phone also means that there is more room for faster hardware, a larger camera sensor, and a bigger battery. Samsung even includes an active stylus called the S-Pen in the body of the device – something that few other modern tablets/phablets/phones feature.
Introduction and Design
Although the target market and design emphasis may be different, there is one thing consumer and business-grade laptops have in common: a drift away from processing power and toward portability and efficiency. At the risk of repeating our introduction for the massive MSI GT72 gaming notebook we reviewed last month, it seems that battery life, temperature, and power consumption get all the attention these days. And arguably, it makes sense for most people: it’s true that CPU performance gains have in years past greatly outstripped the improvements in battery life, and that likewise performance gains could be realized far more easily by upgrading storage device speed (such as by replacing conventional hard drives with solid-state drives) than by continuing to focus on raw CPU power and clock rates. As a result, we’ve seen many mobile CPU speeds plateauing or even dropping in exchange for a reduction in power consumption, while simultaneously cases have slimmed and battery life has jumped appreciably across the board.
But what if you’re one of the minority who actually appreciates and needs raw computing power? Fortunately, Lenovo’s ThinkPad W series still has you covered. This $1,500 workstation is the business equivalent of the consumer-grade gaming notebook. It’s one of the few designs where portability takes a backseat to raw power and ridiculous spec. Users shopping for a ThinkPad workstation aren’t looking to go unplugged all day long on an airplane tray table. They’re looking for power, reliability, and premium design, with function over form as a rule. And that’s precisely what they’ll get.
Beyond the fairly typical (and very powerful) Intel Core i7-4800MQ CPU, often found in gaming PCs and workstations, and just 8 GB of DDR3-1600 RAM (single-channel) is a 256 GB SSD and a unique feature to go along with the WQHD+ display panel: a built-in X-Rite Pantone color sensor, which can be used to calibrate the panel simply by closing the lid when prompted. How well this functions is another topic entirely, but at the very least, it’s a novel idea.
Finally, a SHIELD Console
NVIDIA is filling out the family of the SHIELD brand today with the announcement of SHIELD, a set-top box powered by the Tegra X1 processor. SHIELD will run Android TV and act as a game playing, multimedia watching, GRID streaming device. It will sell for $199 when it becomes available in May of this year, and there is a lot to discuss.
Odd naming scheme aside, the SHIELD looks to be an impressive little device, sitting on your home theater or desk and bringing a ton of connectivity and performance to your TV. Running Android TV means the SHIELD will have access to the entire library of Google Play media including music, movies and apps. SHIELD supports 4K video playback at 60 Hz thanks to an HDMI 2.0 connection and fully supports H.265/HEVC decode thanks to the Tegra X1 processor.
Here is a full breakdown of the device's specifications.
|NVIDIA SHIELD Specifications|
|Processor||NVIDIA® Tegra® X1 processor with 256-core Maxwell™ GPU and 3GB RAM|
|Video Features||4K Ultra-HD ready with 4K playback and capture up to 60 fps (VP9, H.265, H.264)|
|Audio||7.1 and 5.1 surround sound pass-through over HDMI; high-resolution audio playback up to 24-bit/192kHz over HDMI and USB; high-resolution audio upsampling to 24-bit/192kHz over USB|
|Wireless||802.11ac 2x2 MIMO 2.4 GHz and 5 GHz Wi-Fi|
|Interfaces||Gigabit Ethernet; two USB 3.0 (Type A); MicroSD slot (supports 128GB cards); IR receiver (compatible with Logitech Harmony)|
|Gaming Features||NVIDIA GRID™ streaming service|
|SW Updates||SHIELD software upgrades directly from NVIDIA|
|Power||40W power adapter|
|Weight and Size||Weight: 23oz / 654g; Height: 5.1in / 130mm; Width: 8.3in / 210mm; Depth: 1.0in / 25mm|
|OS||Android TV™, Google Cast™ Ready|
|In the box||NVIDIA SHIELD; SHIELD controller; HDMI cable (High Speed); USB cable (Micro-USB to USB); power adapter (includes plugs for North America, Europe, UK)|
|Requirements||TV with HDMI input, Internet access|
|Options||SHIELD controller, SHIELD remote, SHIELD stand|
Obviously the most important feature is the Tegra X1 SoC, built around an 8-core 64-bit ARM processor and a 256 CUDA core Maxwell architecture GPU. This gives the SHIELD set-top more performance than basically any other mobile part on the market, and demos showing Doom 3 and Crysis 3 running natively on the hardware drive the point home. With integrated HEVC decode support, the console is the first Android TV device to offer support for 4K video content at 60 FPS.
Even though storage comes in at only 16GB, the inclusion of a MicroSD card slot enables expansion by as much as 128GB more for content and local games.
The first choice for networking will be the Gigabit Ethernet port, but the 2x2 dual-band 802.11ac wireless controller means that even those of us that don't have hardwired Internet going to our TV will be able to utilize all the performance and features of SHIELD.
Who Should Care? Thankfully, Many People
The Khronos Group has made three announcements today: Vulkan (their competitor to DirectX 12), OpenCL 2.1, and SPIR-V. Because there is actually significant overlap, we will discuss them in a single post rather than splitting them up. Each has a role in the overall goal to access and utilize graphics and compute devices.
Before we get into what everything is and does, let's give you a little tease to keep you reading. First, Khronos designs their technologies to be self-reliant. As such, while there will be some minimum hardware requirements, the OS pretty much just needs to have a driver model. Vulkan will not be limited to Windows 10 and similar operating systems. If a graphics vendor wants to go through the trouble, which is a gigantic if, Vulkan can be shimmed into Windows 8.x, Windows 7, possibly Windows Vista despite its quirks, and maybe even Windows XP. The words “and beyond” came up after Windows XP, but don't hold your breath for Windows ME or anything. Again, the further back in Windows versions you get, the larger the “if” becomes but at least the API will not have any “artificial limitations”.
Outside of Windows, the Khronos Group is the dominant API curator. Expect Vulkan on Linux, Mac, mobile operating systems, embedded operating systems, and probably a few toasters somewhere.
On that topic: there will not be a “Vulkan ES”. Vulkan is Vulkan, and it will run on desktop, mobile, VR, consoles that are open enough, and even cars and robotics. From a hardware side, the API requires a minimum of OpenGL ES 3.1 support. This is fairly high-end for mobile GPUs, but it is the first mobile spec to require compute shaders, which are an essential component of Vulkan. The presenter did not state a minimum hardware requirement for desktop GPUs, but he treated it like a non-issue. Graphics vendors will need to be the ones making the announcements in the end, though.
SoFIA, Cherry Trail Make Debuts
Mobile World Congress is traditionally dominated by Samsung, Qualcomm, HTC, and others, yet Intel continues to make inroads into the mobile market. Though the company has admittedly lost a lot of money during this growing process, Intel pushes forward with today's announcement of a trio of new processor lines that keep the Atom brand. The Atom x3, the Atom x5, and the Atom x7 will be the company's answer in 2015 for a wide range of products, starting at the sub-$75 phone market and stretching up to ~$400 tablets and all-in-ones.
There are some significant differences in these Atom processors, more than the naming scheme might indicate.
Intel Atom x3 SoFIA Processor
For years now we have questioned Intel's capability to develop a processor that could fit inside the thermal envelope required for a smartphone while also offering performance comparable to Qualcomm, MediaTek, and others. It seemed that the x86 architecture was a weight around Intel's ankles rather than a float lifting it up. Intel's answer was the development of SoFIA: (S)mart (o)r (F)eature phone with (I)ntel (A)rchitecture. The project started about two years ago, leading to the product announcements finally reaching us today. SoFIA parts are "designed for budget smartphones; SoFIA is set to give Qualcomm and MediaTek a run for their money in this rapidly growing part of the market."
The SoFIA processors are based on the same Silvermont architecture as the current generation of Atom processors, but they are tuned further for power efficiency. Originally planned as a dual-core-only option, Intel has actually built both dual-core and quad-core variants that will pair with varying modem options to create combinations that best fit target price points and markets. Intel has partnered with RockChip for these designs, even though the architecture is completely IA/x86 based. Production will be done on a 28nm process at an unnamed vendor, though you can expect that to mean TSMC. This allows RockChip access to the designs, helping accelerate development and release into the key markets that Intel is targeting.
Flagship. Premium. Best in class. These are the terms that Dell and Intel muttered to me during a conference call to discuss the new Dell Venue 8 7000 tablet. It’s a bullish claim and one that would likely have been received with a sideways eye roll or a shrug had I not been able to get a short amount of hands on time with the device at CES in January. The idea that Dell would develop an Android tablet that bests what more established brands like Nexus and Samsung have created, AND that that same tablet would be powered by an Intel processor rather than a Qualcomm, NVIDIA or Samsung chip would have seemed laughable last year. But after a solid three weeks with the Venue 8 7000 I am prepared to make the statement: this is my favorite tablet. Not my favorite Intel tablet, not my favorite Android tablet: just plain favorite.
The Venue 8 7000 combines style, design, technology and visuals that are simply unmatched by anything else in the Android world and rivals anything that Apple has created to date. There are a couple of warts centered around the camera and gaming performance that won’t drop your jaw, but for the majority of use cases the user experience is as exceptional as the looks.
Maybe best of all, this tablet starts at just $399 and is available today.
Dell Venue 8 7000 Specifications
Let’s begin the review by looking at the raw specifications of the Dell Venue 8 7000. Even though hardware specifications don’t tell a complete story of any device, especially a tablet that is based so much on experience, it is important to get a good baseline expectation.
|Dell Venue 8 7000 (Model 7840)|
|Processor||Intel Atom Z3580 Quad-Core 2.33 GHz|
|Screen||2560x1600 OLED 8.4-in (359 ppi)|
|Storage||MicroSD slot (up to 512GB)|
|Camera||8MP rear + dual 720p depth cameras|
|Wireless||Intel 7260 802.11ac 1x1 dual-band|
|Connection||USB 2.0 (power and data)|
|Dimensions||215.8mm x 124.4mm x 6mm (8.5" x 4.88" x 0.24")|
At the center of the Venue 8 7000 is the Intel Atom Z3580 quad-core processor with a peak clock rate of 2.33 GHz and a base clock rate of 500 MHz. The Z3580 is a 22nm processor based on the Moorefield platform and Silvermont architecture. I first got information about the Silvermont architecture back in May of 2013, so it seems a bit dated in some regards, but the performance and power efficiency are still there to compete with the rival options from ARM. The Venue 8 7000 includes an LPDDR3-1600 controller and 2GB of memory, a decent amount, though we are seeing quite a few smartphones with more system memory, like the OnePlus One.
New Features and Specifications
It is increasingly obvious that in the high-end smartphone and tablet market, much like we saw occur over the last several years in the PC space, consumers are becoming more concerned with features and experiences than just raw specifications. There is still plenty to drool over when looking at and talking about 4K screens in the palm of your hand, octa-core processors and mobile SoC GPUs measuring performance in hundreds of GFLOPS, but at the end of the day the vast majority of consumers want something that simply “wows” them.
As a result, device manufacturers and SoC vendors are shifting priorities for performance, features and how those are presented both to the public and to the media. Take this week’s Qualcomm event in San Diego, where a team of VPs, PR personnel and engineers walked me through the new Snapdragon 810 processor. Rather than showing slide after slide of comparative performance numbers against the competition, I was shown room after room of demos: Wi-Fi, LTE, 4K capture and playback, gaming capability, thermals, antenna modifications, etc. The goal is to showcase the experience of the entire platform, something Qualcomm has been providing for longer than just about anyone in this business, while educating consumers on the need for balance too.
As a 15-year veteran of the hardware space, my first reaction here couldn’t have been scripted any more precisely: a company that doesn’t show performance numbers has something to hide. But I was given time with a reference platform featuring the Snapdragon 810 processor in a tablet form factor, and the results show impressive increases over the 801 and 805 processors from the previous family. Rumors of the chip’s heat issues seem overblown, but that will be hard to prove for sure until we get retail hardware in our hands to confirm.
Today’s story will outline the primary feature changes of the Snapdragon 810 SoC, though there was so much detail presented at the event with such a short window of time for writing that I definitely won’t be able to get to it all. I will follow up the gory specification details with performance results compared to a wide array of other tablets and smartphones to provide some context to where 810 stands in the market.
It has been an abnormal week for us here at PC Perspective. Our typical review schedule has pretty much flown out the window, and the past seven days have been filled with learning, researching, retesting, and publishing. That might sound like the norm, but in these cases the process was initiated by tips from our readers. Last Saturday (24 Jan), a few things were brewing:
- Ryan was informed by NVIDIA that the memory layout of the GTX 970 was different than expected.
- The huge (now 168 page) overclock.net forum thread about the Samsung 840 EVO slowdown was once again gaining traction.
- Someone got G-Sync working on a laptop integrated display.
We had to do a bit of triage here of course, as we can only research and write so quickly. Ryan worked the GTX 970 piece as it was the hottest item. I began a few days of research and testing on the 840 EVO slowdown issue reappearing on some drives, and we kept tabs on that third thing, which at the time seemed really farfetched. With those first two items taken care of, Ryan shifted his efforts to GTX 970 SLI testing while I shifted my focus to finding out if there was any credence to this G-Sync laptop thing.
A few weeks ago, an ASUS Nordic Support rep inadvertently leaked an interim build of the NVIDIA driver. This was a mobile driver build (version 346.87) focused on their G751 line of laptops. One recipient of this driver link posted it to the ROG forum back on the 20th. A fellow by the name of Gamenab, owning the same laptop cited in that thread, presumably stumbled across this driver, tried it out, and was more than likely greeted by this popup after the installation completed:
Now I know what you’re thinking, and it’s probably the same thing anyone would think. How on earth is this possible? To cut a long story short, while the link to the 346.87 driver was removed shortly after being posted to that forum, we managed to get our hands on a copy of it, installed it on the ASUS G751 that we had in for review, and wouldn’t you know it we were greeted by the same popup!
Ok, so it’s a popup, could it be a bug? We checked NVIDIA control panel and the options were consistent with those of a G-Sync connected system. We fired up the pendulum demo and watched the screen carefully, passing the machine around the office to be inspected by all. We then fired up some graphics benchmarks that were well suited to show off the technology (Unigine Heaven, Metro: Last Light, etc), and everything looked great – smooth steady pans with no juddering or tearing to be seen. Ken Addison, our Video Editor and jack of all trades, researched the panel type and found that it was likely capable of a 100 Hz refresh rate. We quickly created a custom profile, hit apply, and our 75 Hz G-Sync laptop was instantly transformed into a 100 Hz G-Sync laptop!
Ryan's Note: I think it is important here to point out that we didn't just look at demos and benchmarks for this evaluation but actually looked at real-world gameplay situations. Playing through Metro: Last Light showed very smooth pans and rotation, Assassin's Creed played smoothly as well and flying through Unigine Heaven manually was a great experience. Crysis 3, Battlefield 4, etc. This was NOT just a couple of demos that we ran through - the variable refresh portion of this mobile G-Sync enabled panel was working and working very well.
At this point in our tinkering, we had no idea how or why this was working, but there was no doubt that we were getting a similar experience as we have seen with G-Sync panels. As I digested what was going on, I thought surely this can’t be as good as it seems to be… Let’s find out, shall we?
Introduction and Design
MSI’s unapologetically large GT70 “Dominator Pro” series of machines knows its audience well: for every gripe about the notebooks’ hulking size, a snicker and a shrug are returned by a community that rarely values portability as highly as the critics hired to judge it. These machines are built for power, first and foremost. While featherweight construction and manageable dimensions matter to those regularly tossing machines into their bags, MSI’s desktop replacements embrace the meaning of their classification: the flexibility of merely moving around the house with one’s gaming rig is reason enough to consider investing in one.
So its priorities are arguably well in line. But if you want to keep on dominating, regular updates are a necessity, too. And with the GT72 2QE, MSI takes it all up yet another notch: our review unit (GT72 2QE-208US) packs four SSDs in a RAID-0 array (as opposed to the GT70’s three), plus a completely redesigned case which manages to address some of our biggest complaints. Oh yeah, and an NVIDIA GTX 980M GPU with 8 GB GDDR5 RAM—the fastest mobile GPU ever. (You can find much more information and analysis on this GPU specifically in Ryan’s ever-comprehensive review.)
Of course, these state-of-the-art innards come at no small price: $2,999 as configured (around a $2,900 street price), or a few hundred bucks less with storage or RAM sacrifices—a reasonable trade-off considering the marginal benefits one gains from a quad-SSD array or 32 GB of RAM.
Core M 5Y70 Specifications
Back in August of this year, Intel invited me out to Portland, Oregon to talk about the future of processors and process technology. Broadwell is the first microarchitecture to ship on Intel's newest 14nm process technology and the performance and power implications of it are as impressive as they are complex. We finally have the first retail product based on Broadwell-Y in our hands and I am eager to see how this combination of technology is going to be implemented.
If you have not read through my article that dives into the intricacies of the 14nm process and the architectural changes coming with Broadwell, then I would highly recommend that you do so before diving any further into this review. Our Intel Core M Processor: Broadwell Architecture and 14nm Process Reveal story clearly explains the "how" and "why" for many of the decisions that determined the direction the Core M 5Y70 heads in.
As I stated at the time:
"The information provided by Intel about Broadwell-Y today shows me the company is clearly innovating and iterating on its plans set in place years ago with the focus on power efficiency. Broadwell and the 14nm process technology will likely be another substantial leap between Intel and AMD in the x86 tablet space and should make an impact on other tablet markets (like Android) as long as pricing can remain competitive. That 14nm process gives Intel an advantage that no one else in the industry can claim and unless Intel begins fabricating processors for the competition (not completely out of the question), that will remain a house advantage."
With a background on Intel's goals with Broadwell-Y, let's look at the first true implementation.
GeForce GTX 980M Performance Testing
When NVIDIA launched the GeForce GTX 980 and GTX 970 graphics cards last month, part of the discussion at our meetings also centered around the mobile variants of Maxwell. The NDA was a bit later though, and Scott wrote up a short story announcing the release of the GTX 980M and GTX 970M mobility GPUs. Both of these GPUs are based on the same GM204 design as the desktop cards, though, as you should have come to expect by now, they do so with lower specifications than the similarly-named desktop options. Take a look:
| ||GTX 980M||GTX 970M|
|CUDA Cores||1536||1280|
|Memory||Up to 4GB||Up to 3GB|
|Memory Rate||2500 MHz||2500 MHz|
Just like the desktop models, GTX 980M and GTX 970M are built on the 28nm process technology and are tweaked and built for power efficiency - one of the reasons the mobile release of this product is so interesting.
With a CUDA core count of 1536, the GTX 980M has 33% fewer shader cores than the desktop GTX 980, along with a slightly lower base clock speed. The result is a peak theoretical performance of 3.189 TFLOPs, compared to 4.6 TFLOPs on the desktop GTX 980. In fact, that is only slightly higher than the Kepler-based GTX 880M, which clocks in with the same CUDA core count (1536) but a TFLOP capability of 2.9. Bear in mind that the GTX 880M uses a different architecture than the GTX 980M; Maxwell's design advantages go beyond just CUDA core count and clock speed.
The GTX 970M is even smaller, with a CUDA core count of 1280 and peak performance rated at 2.365 TFLOPs. Also notice that the memory bus width has shrunk from 256-bit to 192-bit for this part.
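Those TFLOPs figures follow directly from core count and clock: each CUDA core can retire one fused multiply-add (two FLOPs) per cycle. A quick sketch of the arithmetic; note the base clocks below are back-computed from NVIDIA's quoted TFLOPs numbers, not official specifications:

```python
# Peak single-precision throughput = cores x 2 FLOPs (FMA) x clock.
def peak_tflops(cuda_cores: int, clock_mhz: float) -> float:
    """Theoretical peak in TFLOPs for a given core count and clock in MHz."""
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

# Clocks of ~1038 MHz and ~924 MHz reproduce the quoted figures:
print(round(peak_tflops(1536, 1038), 3))  # GTX 980M -> 3.189
print(round(peak_tflops(1280, 924), 3))   # GTX 970M -> 2.365
```

The same formula with 2048 cores at roughly 1126 MHz lands near the desktop GTX 980's 4.6 TFLOPs, which is a handy sanity check.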
As is typically the case with mobile GPUs, the memory speed of the GTX 980M and GTX 970M is significantly lower than the desktop parts. While the GeForce GTX 980 and 970 installed in your desktop PC have memory running at 7.0 GHz, the mobile versions run at 5.0 GHz in order to conserve power.
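That effective memory clock maps to bandwidth in a straightforward way: bus width in bytes multiplied by the transfer rate. A minimal sketch (the GB/s results are computed here from the bus widths and clocks in the text, not quoted by NVIDIA):

```python
# Bandwidth (GB/s) = (bus width in bits / 8 bytes) x effective rate in GT/s.
def mem_bandwidth_gbs(bus_width_bits: int, effective_rate_ghz: float) -> float:
    return bus_width_bits / 8 * effective_rate_ghz

print(mem_bandwidth_gbs(256, 5.0))  # GTX 980M (256-bit @ 5.0 GHz) -> 160.0
print(mem_bandwidth_gbs(192, 5.0))  # GTX 970M (192-bit @ 5.0 GHz) -> 120.0
print(mem_bandwidth_gbs(256, 7.0))  # desktop GTX 980 (256-bit @ 7.0 GHz) -> 224.0
```

So the mobile parts give up roughly 30% of the desktop cards' memory bandwidth in exchange for the power savings.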
From a feature set standpoint though, the GTX 980M/970M are very much the same as the desktop parts that I looked at in September. You will have support for VXGI, NVIDIA's new custom global illumination technology, Multi-Frame Sampled AA and, maybe most interestingly, Dynamic Super Resolution (DSR). DSR renders a game at a higher resolution and then uses a custom filter to downsample it back to your panel's native resolution. For mobile gamers using 1080p screens (as our test sample shipped with), this is a good way to utilize the power of your GPU in less demanding games while getting a surprisingly good image at the same time.
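The idea behind DSR can be illustrated with a toy example: render at a higher internal resolution, then filter down to the panel's native resolution. DSR's actual filter is, as NVIDIA describes it, a 13-tap Gaussian; the simple 2x2 box average below is purely an illustrative stand-in:

```python
# Toy downsample: average each 2x2 block of a row-major grid of pixel values,
# halving the resolution in each dimension (a crude stand-in for DSR's filter).
def downsample_2x(image):
    h, w = len(image), len(image[0])
    return [
        [(image[y][x] + image[y][x + 1] +
          image[y + 1][x] + image[y + 1][x + 1]) / 4
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

hi_res = [[0, 4], [8, 4]]      # a 2x2 "supersampled" tile of brightness values
print(downsample_2x(hi_res))   # -> [[4.0]]
```

Because each output pixel blends four rendered samples, edges and shader aliasing are smoothed out, which is why a DSR image at native resolution can look noticeably cleaner than a natively rendered one.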
If there is one message that I get from NVIDIA's GeForce GTX 900M-series announcement, it is that laptop gaming is a first-class citizen in their product stack. Before even mentioning the products, the company provided relative performance differences between high-end desktops and laptops. Most of the rest of the slide deck is showing feature-parity with the desktop GTX 900-series, and a discussion about battery life.
First, the parts. Two products have been announced: the GeForce GTX 980M and the GeForce GTX 970M. Both are based on the 28nm Maxwell architecture. In terms of shading performance, the GTX 980M has a theoretical maximum of 3.189 TFLOPs, and the GTX 970M is calculated at 2.365 TFLOPs (at base clock). On the desktop, this is very close to the GeForce GTX 770 and the GeForce GTX 760 Ti, respectively. This metric is most useful when you are compute-bound, such as at high resolution with complex shaders.
The full specifications are:
| ||GTX 980M||GTX 970M|
|Memory||Up to 4GB||Up to 3GB|
|Memory Rate||2500 MHz||2500 MHz|
As for the features, it should be familiar for those paying attention to both desktop 900-series and the laptop 800M-series product launches. From desktop Maxwell, the 900M-series is getting VXGI, Dynamic Super Resolution, and Multi-Frame Sampled AA (MFAA). From the latest generation of Kepler laptops, the new GPUs are getting an updated BatteryBoost technology. From the rest of the GeForce ecosystem, they will also get GeForce Experience, ShadowPlay, and so forth.
For VXGI, DSR, and MFAA, please see Ryan's discussion for the desktop Maxwell launch. Information about these features is basically identical to what was given in September.
BatteryBoost, on the other hand, is a bit different. NVIDIA claims that the biggest change is just raw performance and efficiency, giving you more headroom to throttle. Perhaps more interesting though, is that GeForce Experience will allow separate one-click optimizations for both plugged-in and battery use cases.
The power efficiency demonstrated with the Maxwell GPU in Ryan's original GeForce GTX 980 and GTX 970 review is even more beneficial for the notebook market, where thermal designs are physically constrained. Longer battery life, as well as thinner and lighter gaming notebooks, will see tremendous advantages from a GPU that can run at near peak performance on the maximum power output of an integrated battery. In NVIDIA's presentation, they mention that while notebooks on AC power can use as much as 230 watts, batteries tend to peak around 100 watts. The fact that a full-speed, desktop-class GTX 980 has a TDP of just 165 watts, compared to the 250 watts of a Radeon R9 290X, translates into notebook GPU performance that will more closely mirror its desktop brethren.
Of course, you probably will not buy your own laptop GPU; rather, you will be buying devices which integrate these. There are currently five designs across four manufacturers that are revealed (see image above). Three contain the GeForce GTX 980M, one has a GTX 970M, and the other has a pair of GTX 970Ms. Prices and availability are not yet announced.
Introduction and Design
A little over a year ago, we posted our review of the Lenovo Y500, which was a gaming notebook that leveraged not one, but two discrete video adapters (2 x NVIDIA GeForce GT 650M in SLI, to be exact) to achieve respectable gaming performance at a reasonable price point (around $1,200 at the time of the review).
Well—take away nearly a pound of weight (to 5.7 lbs), slim the case down to around an inch thick, update the chipset, and remove one video card, and you’ve got the Lenovo Y50 Touch, which ought to be able to improve upon the Y500 in nearly every area if the specifications add up to typical results. Here’s the full list of what our review unit includes:
While the GTX 860M (2 GB) is a far cry from, say, the GTX 880M (8 GB) we had the pleasure of testing in MSI’s GT70 2PE, it’s still a very capable card that should provide satisfactory results without breaking the bank (or the back). The rest of the spec sheet is conventional fare for a budget gaming notebook, with the only other surprise being the inclusion of a touchscreen—an option which replaces the traditional matte LCD panel in the standard Y50.
The configuration we received has already been slightly updated to include a CPU that’s a nudge better than the i7-4700HQ: the i7-4710HQ (which gains it 100 MHz in Turbo Boost clock rate). Otherwise, the specs are identical, and the street price is very close to that of the Y500 we originally reviewed: $1,139. Currently, an extra 10 bucks will also score you an external DVD+/-RW drive, and just 90 bucks more will boost your GTX 860M’s VRAM to 4 GB (from 2 GB) and your system RAM to 16 GB from 8 GB. That’s really not a bad deal at all.
One Small Step
While most articles surrounding the iPhone 6 and iPhone 6 Plus thus far have focused on user experience and larger screen sizes, our main questions about these new phones concern performance, and in particular the effect of Apple's transition to the 20nm process node for the A8 SoC. Naturally, I decided to put my personal iPhone 6 through our usual round of benchmarks.
First, let's start with 3DMark.
Comparing the 3DMark scores of the new Apple A8 to even the last-generation A7 shows a smaller improvement than we are used to seeing generation-to-generation with Apple's custom ARM implementations. And when you compare the A8 to something like the NVIDIA Tegra K1, which utilizes desktop-class GPU cores, the K1's overall score blows Apple out of the water. Even looking at the CPU-bound Physics score, the K1 is still the winner.
A 78% advantage in overall score when compared to the A8 shows just how much of a powerhouse NVIDIA has with the K1. (Though clearly power envelopes are another matter entirely.)
If we look at more CPU benchmarks, like the browser-based Google Octane and SunSpider tests, the A8 starts to shine more.
While the A8 edges out the A7 to be the best performing device and 54% faster than the K1 in SunSpider, the A8 and K1 are neck and neck in the Google Octane benchmark.
Moving back to a graphics-heavy benchmark, GFXBench's Manhattan test, the Tegra K1 holds a 75% performance advantage over the A8, even though the A8 is itself 36% faster than the previous A7 silicon.
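For reference, the percentage figures in these comparisons follow the standard relative-score calculation (higher score is better); a quick sketch with hypothetical scores, not our actual benchmark numbers:

```python
def percent_advantage(score_a, score_b):
    """Percent by which score_a exceeds score_b (higher is better)."""
    return (score_a / score_b - 1.0) * 100.0

# Hypothetical example: a device scoring 1780 vs. one scoring 1000
# holds a 78% overall-score advantage.
print(round(percent_advantage(1780, 1000)))  # 78
```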
These early results are certainly a disappointment compared to the usual generation-to-generation performance increase we see with Apple SoCs.
However, the other aspect to look at is power efficiency. With normal use I have noticed a substantial increase in the battery life of my iPhone 6 over the last-generation iPhone 5S. While this may be due in part to a small (about 1 Wh) increase in battery capacity, I think more of it can be credited to this being an overall more efficient device. Choices like sticking with a highly optimized dual-core CPU design and a quad-core GPU, as well as the move to the 20nm process node, all contribute to increased battery life while surpassing the performance of the last-generation Apple A7.
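Runtime roughly equals battery capacity divided by average power draw, which is why efficiency gains can outweigh a small capacity bump. A back-of-the-envelope sketch; the capacity and draw figures below are hypothetical, not measured values:

```python
def battery_hours(capacity_wh, avg_draw_w):
    """Estimated runtime in hours: energy capacity (Wh) over average draw (W)."""
    return capacity_wh / avg_draw_w

# A ~1 Wh capacity increase alone buys relatively little extra runtime;
# lowering the average draw (a more efficient SoC) buys noticeably more.
capacity_bump_only = battery_hours(6.9, 0.8) - battery_hours(5.9, 0.8)
bump_plus_efficiency = battery_hours(6.9, 0.7) - battery_hours(5.9, 0.8)
print(capacity_bump_only, bump_plus_efficiency)
```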
In that way, the A8 moves the bar forward for Apple and is a solid first attempt at using 20nm silicon technology at TSMC. There is strong potential that with further refined parts (like the expected A8X for the iPad revisions), Apple will be able to further surpass 28nm silicon in both performance and efficiency.
The notebook market of today barely resembles the notebook market of five years ago. People are spending less money on their computers than ever before, and even sub-$1,000 options are adequate for casual 1080p gaming. However, the high-end, boutique gaming notebook hasn’t been forgotten. Companies like Maingear still forge on, trying to provide a no-compromise portable gaming experience. Today, we look at the Maingear Pulse 17 gaming laptop.
The most striking feature of the Pulse 17 is the namesake 17-in display. While we are used to seeing gaming laptops fall in the 15-in-or-larger range, there is something to be said for opening up the Pulse and being greeted by a massive display with 1080p resolution. The choice of a 17-in display here also enables one of the most impressive parts of this notebook: its thinness.
When most people think about gaming laptops, their minds go to the gigantic bricks of the past. The Pulse 17 manages to provide gaming power at a thickness similar to the average ultrabook: 0.86”. In fact, the form factor is close to what I’d imagine a 17” MacBook Pro with Retina display would look like, if Apple decided to use a display that large.
Even though the screen size creates a large footprint for the Pulse 17, both the thinness and the 6 lb weight make this the first truly portable gaming laptop I have used.
Comparing the physical form of the Pulse 17 to a notebook like the ASUS G750JX, which we reviewed late last year, is almost comical. The G750 weighs in at 10 lbs and is just under 2” thick while toting similar hardware and performance to the Pulse 17.
Top: Maingear Pulse 17, Bottom: ASUS G750JX
Beyond physical attributes, the Pulse 17 has a lot to offer from a hardware standpoint. The Intel Core i7-4700HQ processor and NVIDIA GTX 765M GPU (as tested; it now ships with an 870M) mean that you’ll have all you need to play any modern game on the integrated 1080p display.
Storage is provided by a 1 TB hard drive as well as two 128 GB mSATA SSDs in SuperRAID 0 to provide maximum throughput.
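In ideal conditions, RAID 0 striping scales sequential throughput with the number of member drives, which is the rationale behind pairing the two mSATA SSDs (real-world gains are smaller, and the stripe fails if either drive does). A minimal sketch, with a hypothetical per-drive speed:

```python
def raid0_ideal_throughput(per_drive_mb_s, drive_count):
    """Ideal sequential throughput of a RAID 0 stripe: data is split evenly
    across members, so bandwidth scales linearly with drive count."""
    return per_drive_mb_s * drive_count

# Hypothetical mSATA SSD at 450 MB/s sequential read; two drives striped:
print(raid0_ideal_throughput(450, 2))  # 900 (MB/s)
```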
A Tablet and Controller Worth Using
An interesting thing happened a couple of weeks back while I was standing on stage at our annual PC Perspective Hardware Workshop during Quakecon in Dallas, TX. When NVIDIA offered up a SHIELD (now called the SHIELD Portable) for raffle, the audience cheered. And not just a little bit, but more than they did for nearly any other hardware offered up during the show, including motherboards, graphics cards, monitors, even complete systems. It took me aback a bit - NVIDIA SHIELD was a popular brand, a name that was recognized, and apparently a product that people wanted to own. You might not have guessed that based on the sales numbers SHIELD has put forward, though. Even though it appeared to have significant mind share, market share was lacking.
Today, though, NVIDIA prepares the second product in the SHIELD lineup, the SHIELD Tablet, a device the company hopes will improve on the SHIELD concept and encourage more users to sign on. It's a tablet (not a tablet with a controller attached), it has a more powerful SoC that can utilize different APIs for unique games, it can be more easily used in a 10-ft console mode, and SHIELD-specific features like Game Stream are included and enhanced.
The question, of course, is easy to put forward: should you buy one? Let's explore.
The NVIDIA SHIELD Tablet
At first glance, the NVIDIA SHIELD Tablet looks like just a tablet. That isn't a negative, though, as the SHIELD Tablet can and does act like a high-end tablet in nearly every way: performance, function, looks. We originally went over the entirety of the tablet's specifications in our first preview last week, but much of it bears repeating for this review.
The SHIELD Tablet is built around the NVIDIA Tegra K1 SoC, the first mobile silicon to implement the Kepler graphics architecture. That feature alone makes this tablet impressive because it offers graphics performance not seen in a form factor like this before. CPU performance is also improved over the Tegra 4 processor, but the graphics portion of the die sees the largest performance jump easily.
A 1920x1200 resolution 8-in IPS screen faces the user and brings the option of full 1080p content that was lacking on the first SHIELD Portable. The screen is bright and crisp, easily viewable even in bright lighting, whether for gaming or general use. Though the Xiaomi Mi Pad 7.9 had a higher 2048x1536 resolution screen, the form factor of the SHIELD Tablet is much more in line with what NVIDIA built with the Tegra Note 7.
Introduction and Design
The next candidate in our barrage of ThinkPad reviews is the ThinkPad Yoga, which, at first glance, might seem a little bit redundant. After all, we’ve already got three current-gen Yoga models to choose from between the Yoga 2 11- and 13-inch iterations and the Yoga 2 Pro top-end selection. What could possibly be missing?
Well, in fact, as is often the case when choosing between well-conceived notebook models, it isn’t so much about what’s missing as it is about priorities. Whereas the consumer-grade Yoga models all place portability, slimness, and aesthetics in the highest regard, the ThinkPad Yoga subscribes to a much more practical, business-oriented approach, which (nearly) always favors function over form. It’s a conversation we’ve had here at PC Perspective a thousand times before, but yet again, it is the core ThinkPad philosophy which separates the ThinkPad Yoga from other notebooks of its type. Suffice it to say that really the only reason to think of it as a Yoga at all is the unique hinge design and affiliated notebook/tablet convertibility; excepting that, it seems much closer to an X240 than anything in Lenovo’s current consumer-grade lineup. And carrying a currently-configurable street price of around $1,595, it’s positioned as such, too.
But it isn’t beyond reproach. Some of the same questionable decisions regarding design changes which we’ve covered in our recent ThinkPad reviews still apply to the Yoga. For instance, the much-maligned clickpad is back, bringing with it vivid nightmares of pointer jumpiness and click fatigue that were easily the biggest complaint about the T440s and X240 we recently reviewed. The big question today is whether these criticisms are impactful enough to disqualify the ThinkPad Yoga as a rational alternative to other ThinkPad convertibles and the consumer-grade Yoga models. It’s a tall order, so let’s tackle it.
First up, the specs:
While most of this list is pretty conventional, the astute might have already picked out one particular item which tops the X240 we recently reviewed: a possible 16 GB of dual-channel RAM. The X240 was limited to just 8 GB of single-channel memory thanks to a mere single SODIMM slot. The ThinkPad Yoga also boasts a 1080p screen with a Wacom digitizer pen—something which is clearly superior to our X240 review unit. Sadly missing, however, are the integrated Gigabit Ethernet port and the VGA port—and the mini DisplayPort has been replaced by a mini-HDMI, which ultimately is decidedly inferior.
SHIELD Tablet with new Features
It's odd how regularly these events seem to come. Almost exactly one year ago today, NVIDIA launched the SHIELD gaming device, a portable Android device with a permanently attached controller, all powered by the Tegra 4 SoC. It was a completely unique product that combined a 5-in touchscreen with a console-grade controller to build the best Android gaming machine you could buy. NVIDIA did its best to promote Android gaming as a secondary market to consoles and PCs, and frequent software updates kept the SHIELD nearly up to date with the latest Android releases.
As we approach the one year anniversary of SHIELD, NVIDIA is preparing to release another product to add to the SHIELD family of products: the SHIELD Tablet. Chances are, you could guess what this device is already. It is a tablet powered by Tegra K1 and updated to support all SHIELD software. Of course, there are some new twists as well.
The NVIDIA SHIELD Tablet is being targeted, as the slide above states, at being "the ultimate tablet for gamers." This is a fairly important point to keep in mind as we walk through the details of the SHIELD Tablet and its accessories, as there are certain areas where NVIDIA's latest product won't quite appeal to general-purpose tablet users.
Most obviously, this new SHIELD device is a tablet (and only a tablet). There is no permanently attached controller. Instead, the SHIELD controller will be an add-on accessory for buyers. NVIDIA has put a lot of processing power into the tablet as well as incredibly interesting new software capabilities to enable 10-ft use cases and even mobile Twitch streaming.
The First with the Tegra K1 Processor
Back in May, a Chinese company announced what was then the first and only product based on NVIDIA’s Tegra K1 SoC: the Xiaomi Mi Pad 7.9. Since then we have seen a couple of other products hit our news wire, including Google’s own Project Tango development tablet. But the Xiaomi is the first to actually be released, selling through 50,000 units in four minutes according to some reports. I happened to find one on Aliexpress.com, a Chinese e-commerce site, and after a few short days the DHL deliveryman dropped the Tegra K1-powered machine off at my door.
If you are like me, the Xiaomi name is a new one. A privately owned company from Beijing, Xiaomi has become one of China’s largest electronics companies since jumping into the smartphone market in 2011. The Mi Pad marks the company’s first attempt at a tablet device, and the partnership with NVIDIA to be an early seller of the Tegra K1 seems to be making waves.
The Tegra K1 Processor
The Tegra K1 SoC was first revealed at CES in January of 2014, and with it came a heavy burden of expectation from NVIDIA directly, as well as from investors and the media. The first SoC from the Tegra family to have a GPU built from the ground up by NVIDIA engineers, the Tegra K1 gets its name from the Kepler family of GPUs. It also happens to get the base of its architecture there as well.
The CPU cores of the Tegra K1 look very familiar: four ARM Cortex-A15 “r3” cores with 2MB of L2 cache, plus a fifth A15 core used for low-power situations. This 4+1 design is the same one introduced with the Tegra 4 processor last year and allows NVIDIA to implement its own unique take on a “big.LITTLE”-style design. Some slight modifications to the cores improve performance and efficiency, but not by much – the main CPU complex is very similar to the Tegra 4’s.
The focus of the Tegra K1 is the GPU, now powered by NVIDIA’s Kepler architecture. The K1 features 192 CUDA cores in a design very similar to a single SMX on today’s GeForce GTX 700-series graphics cards. This brings OpenGL ES 3.0 support but, much more importantly, OpenGL 4.4 and DirectX 11 integration. The ambition of bringing modern, quality PC gaming to mobile devices is closer than you ever thought possible with this product, and the demos I have seen running on reference designs are enough to leave your jaw on the floor.
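For a rough sense of scale for those 192 cores: peak single-precision throughput can be estimated as cores × 2 FLOPs (one fused multiply-add per clock) × clock speed. A sketch assuming a hypothetical ~950 MHz GPU clock, which the source does not confirm:

```python
def peak_fp32_gflops(cuda_cores, clock_ghz):
    """Peak FP32 throughput: each CUDA core can retire one FMA (2 FLOPs) per clock."""
    return cuda_cores * 2 * clock_ghz

# Tegra K1's single Kepler SMX (192 cores) at an assumed 0.95 GHz:
print(round(peak_fp32_gflops(192, 0.95)))  # 365
```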
By far the most impressive part of Tegra K1 is the implementation of a full Kepler SMX onto a chip that will be running well under 2 watts. While it has long been NVIDIA’s plan to merge the primary GPU architectures between mobile and discrete, this choice did not come without risk. When the company was building the first Tegra part, it essentially had to bet on where the world of mobile technology would be in 2015. NVIDIA might have continued to evolve the initial GPU IP used in Tegra 1, adding feature support and increasing the die area to improve overall GPU performance, but instead it opted to position a “merge point” with Kepler in 2014. The team at NVIDIA saw that it was within reach of the discontinuity point we are seeing today with Tegra K1, but in truth it had to suffer through the first iterations of Tegra GPU designs that it knew were inferior to the design coming with Kepler.
You can read much more on the technical detail of the Tegra K1 SoC by heading over to our launch article that goes into the updated CPU design, as well as giving you all the gore behind the Kepler integration.
By far the most interesting aspect of the Xiaomi Mi Pad 7.9 tablet is the decision to integrate the Tegra K1 processor. Performance and battery life comparisons with other 7 to 8-in tablets will likely not impact how it sells in China, but the results may mean the world to NVIDIA as it encourages other vendors to integrate the SoC.