GeForce GTX 980M Performance Testing
When NVIDIA launched the GeForce GTX 980 and GTX 970 graphics cards last month, part of the discussion at our meetings also centered around the mobile variants of Maxwell. The NDA for the mobile parts lifted a bit later, though, and Scott wrote up a short story announcing the release of the GTX 980M and GTX 970M mobility GPUs. Both of these GPUs are based on the same GM204 design as the desktop cards though, as you should have come to expect by now, with lower specifications than the similarly-named desktop options. Take a look:
| | GTX 980M | GTX 970M | GTX 980 | GTX 970 | GTX 880M |
|---|---|---|---|---|---|
| Memory | Up to 4GB | Up to 3GB | 4GB | 4GB | 4GB/8GB |
| Memory Rate | 2500 MHz (5.0 GT/s) | 2500 MHz (5.0 GT/s) | 7.0 GT/s | 7.0 GT/s | 2500 MHz (5.0 GT/s) |
Just like the desktop models, GTX 980M and GTX 970M are built on the 28nm process technology and are tweaked and built for power efficiency - one of the reasons the mobile release of this product is so interesting.
With a CUDA core count of 1536, the GTX 980M has 25% fewer shader cores than the desktop GTX 980, along with a slightly lower base clock speed. The result is a peak theoretical performance of 3.189 TFLOPs, compared to 4.6 TFLOPs on the GTX 980 desktop. In fact, that is only slightly higher than the Kepler-based GTX 880M, which clocks in with the same CUDA core count (1536) but a TFLOP capability of 2.9. Bear in mind that the GTX 880M is using a different architecture design than the GTX 980M; Maxwell's design advantages go beyond just CUDA core count and clock speed.
The GTX 970M is even smaller, with a CUDA core count of 1280 and peak performance rated at 2.365 TFLOPs. Also notice that the memory bus width has shrunk from 256-bit to 192-bit for this part.
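As a sanity check, these TFLOP ratings fall straight out of core counts and clocks, since each CUDA core can retire one fused multiply-add (two floating point operations) per cycle. Assuming base clocks of roughly 1038 MHz for the GTX 980M, 924 MHz for the GTX 970M, and 1126 MHz for the desktop GTX 980 (figures not listed in the table above, so treat them as approximations), the math lines up with the quoted numbers:

$$ 1536 \times 2 \times 1.038\,\text{GHz} \approx 3.19\ \text{TFLOPs (GTX 980M)} $$
$$ 1280 \times 2 \times 0.924\,\text{GHz} \approx 2.37\ \text{TFLOPs (GTX 970M)} $$
$$ 2048 \times 2 \times 1.126\,\text{GHz} \approx 4.61\ \text{TFLOPs (GTX 980)} $$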
As is typically the case with mobile GPUs, the memory speed of the GTX 980M and GTX 970M is significantly lower than the desktop parts. While the GeForce GTX 980 and 970 that you install in your desktop PC have memory running at 7.0 GHz, the mobile versions run at 5.0 GHz in order to conserve power.
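The bandwidth penalty is easy to put a number on: peak memory bandwidth is the effective data rate multiplied by the bus width in bytes.

$$ 5.0\ \text{GT/s} \times 256/8\ \text{bytes} = 160\ \text{GB/s (GTX 980M)} \quad \text{vs.} \quad 7.0 \times 32 = 224\ \text{GB/s (desktop GTX 980)} $$
$$ 5.0\ \text{GT/s} \times 192/8\ \text{bytes} = 120\ \text{GB/s (GTX 970M)} $$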
From a feature set standpoint though, the GTX 980M/970M are very much the same as the desktop parts that I looked at in September. You will have support for VXGI (NVIDIA's new custom global illumination technology), Multi-Frame Sampled AA and, maybe most interestingly, Dynamic Super Resolution (DSR). DSR allows you to render a game at a higher resolution and then use a custom filter to downsample it back to your panel's native resolution. For mobile gamers using 1080p screens (as our test sample shipped with), this is a good way to put the GPU's spare headroom to work in less demanding games while getting a surprisingly good image at the same time.
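Conceptually, the downsampling step works like the sketch below: render at a multiple of the native resolution, smooth, then resample down. This is only an illustration of the idea in Python with Pillow; the file names, 2x factor, and Gaussian filter choice are assumptions for the demo, not NVIDIA's actual driver-level filter, which runs on the GPU.

```python
# DSR-style downsampling sketch (requires: pip install pillow)
from PIL import Image, ImageFilter

NATIVE = (1920, 1080)                              # panel's native resolution
FACTOR = 2                                         # 2x2 DSR: render at 3840x2160
RENDER = (NATIVE[0] * FACTOR, NATIVE[1] * FACTOR)

def dsr_downsample(rendered: Image.Image) -> Image.Image:
    """Smooth a super-resolution frame, then resample it to the native size."""
    # Light Gaussian pre-filter stands in for DSR's smoothing step
    smoothed = rendered.filter(ImageFilter.GaussianBlur(radius=0.9))
    # High-quality resampling back down to the panel resolution
    return smoothed.resize(NATIVE, resample=Image.LANCZOS)

# Hypothetical captured frame rendered at 4x the native pixel count
frame = Image.open("frame_3840x2160.png")
assert frame.size == RENDER
dsr_downsample(frame).save("frame_1080p.png")
```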
If there is one message that I get from NVIDIA's GeForce GTX 900M-series announcement, it is that laptop gaming is a first-class citizen in their product stack. Before even mentioning the products, the company provided relative performance differences between high-end desktops and laptops. Most of the rest of the slide deck is showing feature-parity with the desktop GTX 900-series, and a discussion about battery life.
First, the parts. Two products have been announced: the GeForce GTX 980M and the GeForce GTX 970M. Both are based on the 28nm Maxwell architecture. In terms of shading performance, the GTX 980M has a theoretical maximum of 3.189 TFLOPs, and the GTX 970M is calculated at 2.365 TFLOPs (at base clock). On the desktop, this is very close to the GeForce GTX 770 and the GeForce GTX 760 Ti, respectively. This metric is most useful when you are shader-bound, at high resolutions with complex shaders.
The full specifications are:
| | GTX 980M | GTX 970M | GTX 980 | GTX 970 | GTX 880M |
|---|---|---|---|---|---|
| Memory | Up to 4GB | Up to 3GB | 4GB | 4GB | 4GB/8GB |
| Memory Rate | 2500 MHz (5.0 GT/s) | 2500 MHz (5.0 GT/s) | 7.0 GT/s | 7.0 GT/s | 2500 MHz (5.0 GT/s) |
As for the features, the list should be familiar to those paying attention to both the desktop 900-series and laptop 800M-series product launches. From desktop Maxwell, the 900M-series is getting VXGI, Dynamic Super Resolution, and Multi-Frame Sampled AA (MFAA). From the latest generation of Kepler laptops, the new GPUs are getting an updated BatteryBoost technology. From the rest of the GeForce ecosystem, they will also get GeForce Experience, ShadowPlay, and so forth.
For VXGI, DSR, and MFAA, please see Ryan's discussion for the desktop Maxwell launch. Information about these features is basically identical to what was given in September.
BatteryBoost, on the other hand, is a bit different. NVIDIA claims that the biggest change is just raw performance and efficiency, giving you more headroom to throttle. Perhaps more interesting though, is that GeForce Experience will allow separate one-click optimizations for both plugged-in and battery use cases.
The power efficiency demonstrated with the Maxwell GPU in Ryan's original GeForce GTX 980 and GTX 970 review is even more beneficial for the notebook market, where thermal designs are physically constrained. Thinner and lighter gaming notebooks, as well as longer battery life, become possible with a GPU that can run at near-peak performance within the maximum power output of an integrated battery. In NVIDIA's presentation, they mention that while notebooks on AC power can use as much as 230 watts, batteries tend to peak around 100 watts. The fact that a full speed, desktop-class GTX 980 has a TDP of 165 watts, compared to the 250 watts of a Radeon R9 290X, translates into notebook GPU performance that will more closely mirror its desktop brethren.
Of course, you probably will not buy your own laptop GPU; rather, you will be buying devices which integrate these. There are currently five designs across four manufacturers that are revealed (see image above). Three contain the GeForce GTX 980M, one has a GTX 970M, and the other has a pair of GTX 970Ms. Prices and availability are not yet announced.
Introduction and Design
A little over a year ago, we posted our review of the Lenovo Y500, which was a gaming notebook that leveraged not one, but two discrete video adapters (2 x NVIDIA GeForce GT 650M in SLI, to be exact) to achieve respectable gaming performance at a reasonable price point (around $1,200 at the time of the review).
Well—take away nearly a pound of weight (to 5.7 lbs), slim the case down to around an inch thick, update the chipset, and remove one video card, and you’ve got the Lenovo Y50 Touch, which ought to be able to improve upon the Y500 in nearly every area if the specifications add up to typical results. Here’s the full list of what our review unit includes:
While the GTX 860M (2 GB) is a far cry from, say, the GTX 880M (8 GB) we had the pleasure of testing in MSI’s GT70 2PE, it’s still a very capable card that should provide satisfactory results without breaking the bank (or the back). The rest of the spec sheet is conventional fare for a budget gaming notebook, with the only other surprise being the inclusion of a touchscreen—an option which replaces the traditional matte LCD panel in the standard Y50.
The configuration we received has already been slightly updated to include a CPU that’s a nudge better than the i7-4700HQ: the i7-4710HQ (which gains it 100 MHz in Turbo Boost clock rate). Otherwise, the specs are identical, and the street price is very close to that of the Y500 we originally reviewed: $1,139. Currently, an extra 10 bucks will also score you an external DVD+/-RW drive, and just 90 bucks more will boost your GTX 860M’s VRAM to 4 GB (from 2 GB) and your system RAM to 16 GB from 8 GB. That’s really not a bad deal at all.
One Small Step
While most articles surrounding the iPhone 6 and iPhone 6 Plus thus far have focused on user experience and the larger screen sizes, performance, and in particular the effect of Apple's transition to the 20nm process node for the A8 SoC, has been our main question regarding these new phones. Naturally, I decided to put my personal iPhone 6 through our usual round of benchmarks.
First, let's start with 3DMark.
Comparing the 3DMark scores of the new Apple A8 to even the last generation A7 shows a smaller improvement than we are used to seeing generation-to-generation with Apple's custom ARM implementations. And when you compare the A8 to something like the NVIDIA Tegra K1, which utilizes desktop-class GPU cores, the K1's overall score blows Apple out of the water. Even looking at the CPU-bound physics score, the K1 is still the winner.
A 78% performance advantage in overall score when compared to the A8 shows just how much of a powerhouse NVIDIA has with the K1. (Though clearly power envelopes are another matter entirely.)
If we look at more CPU benchmarks, like the browser-based Google Octane and SunSpider tests, the A8 starts to shine more.
While the A8 edges out the A7 to be the best performing device, and runs 54% faster than the K1 in SunSpider, the A8 and K1 are neck and neck in the Google Octane benchmark.
Moving back to a graphics-heavy benchmark, GFXBench's Manhattan test, the Tegra K1 holds a 75% performance advantage over the A8, though the A8 is itself 36% faster than the previous A7 silicon.
These early results are certainly a disappointment compared to the usual generation-to-generation performance increase we see with Apple SoCs.
However, the other aspect to look at is power efficiency. With normal use I have noticed a substantial increase in battery life on my iPhone 6 over the last generation iPhone 5S. While this may be due to a small (about 1 Wh) increase in battery capacity, I think more can be credited to this being an overall more efficient device. Choices like sticking to a highly optimized dual-core CPU design and quad-core GPU, as well as the move to the 20nm process node, all contribute to increased battery life while surpassing the performance of the last generation Apple A7.
In that way, the A8 moves the bar forward for Apple and is a solid first attempt at using 20nm silicon technology at TSMC. There is strong potential that, with further refined parts (like the expected A8X for the iPad revisions), Apple will be able to further surpass 28nm silicon in performance and efficiency.
The notebook market of today barely resembles the notebook market of 5 years ago. People are spending less money on their computers than ever before, and we find even sub-$1000 options are adequate for casual 1080p gaming. However, the high-end, boutique gaming notebook hasn't been forgotten. Companies like Maingear still forge on to provide a no-compromise portable gaming experience. Today, we look at the Maingear Pulse 17 gaming laptop.
The most striking feature of the Pulse 17 is the namesake 17-in display. While we are used to seeing gaming laptops fall in the 15-in or higher range, there is something to be said about opening up the Pulse and being greeted by a massive display with 1080p resolution. The choice of a 17-in display here also enables one of the most impressive parts of this notebook, the thickness.
When most people think about gaming laptops, their minds go to the gigantic bricks of the past. The Pulse 17 manages to provide gaming power at a thickness similar to the average ultrabook: 0.86”. In fact, the form factor is similar to what I’d imagine a 17” Retina MacBook Pro would be, if Apple decided to use a display that large.
Even though the screen size creates a large footprint for the Pulse 17, both the thickness and the 6 lb weight make this the first truly portable gaming laptop I have used.
Comparing the physical form of the Pulse 17 to a notebook like the ASUS G750JX, which we reviewed late last year, is almost comical. The G750 weighs in at 10lbs and just under 2” thick while toting similar hardware and performance to the Pulse 17.
Top: Maingear Pulse 17, Bottom: ASUS G750JX
Beyond physical attributes, the Pulse 17 has a lot to offer from a hardware standpoint. The Intel Core i7-4700HQ processor and NVIDIA GTX 765M GPU (as tested; it now ships with an 870M) mean that you’ll have all that you need to play any modern game on the integrated 1080p display.
Storage is provided by a 1TB hard drive, along with 2 x 128GB mSATA SSDs in SuperRAID 0 for maximum throughput.
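For a rough sense of why the striping matters: in RAID 0, sequential transfers alternate between the two SSDs, so throughput scales with drive count. Assuming roughly 500 MB/s of sequential throughput per mSATA drive (a typical figure for SATA 6 Gb/s SSDs of this era, not a number MSI quotes):

$$ 2 \times {\sim}500\ \text{MB/s} \approx 1\ \text{GB/s sequential} $$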
A Tablet and Controller Worth Using
An interesting thing happened a couple of weeks back, while I was standing on stage at our annual PC Perspective Hardware Workshop during Quakecon in Dallas, TX. When NVIDIA offered up a SHIELD (now called the SHIELD Portable) for raffle, the audience cheered. And not just a little bit, but more than they did for nearly any other hardware offered up during the show. That included motherboards, graphics cards, monitors, even complete systems. It kind of took me aback - NVIDIA SHIELD was a popular brand, a name that was recognized, and apparently, a product that people wanted to own. You might not have guessed that based on the sales numbers that SHIELD has put forward, though. Even though it appeared to have significant mind share, market share was something that was lacking.
Today though, NVIDIA prepares the second product in the SHIELD lineup, the SHIELD Tablet, a device the company hopes improves on the idea of SHIELD enough to encourage new users to sign on. It's a tablet (not a tablet with a controller attached), it has a more powerful SoC that can utilize different APIs for unique games, it can be more easily used in a 10-ft console mode, and SHIELD-specific features like GameStream are included and enhanced.
The question, of course, is easy to put forward: should you buy one? Let's explore.
The NVIDIA SHIELD Tablet
At first glance, the NVIDIA SHIELD Tablet looks like just a tablet. That actually isn't a negative selling point though, as the SHIELD Tablet can and does act like a high-end tablet in nearly every way: performance, function, looks. We originally went over the entirety of the tablet's specifications in our first preview last week, but much of it bears repeating for this review.
The SHIELD Tablet is built around the NVIDIA Tegra K1 SoC, the first mobile silicon to implement the Kepler graphics architecture. That feature alone makes this tablet impressive because it offers graphics performance not seen in a form factor like this before. CPU performance is also improved over the Tegra 4 processor, but the graphics portion of the die sees the largest performance jump easily.
A 1920x1200 resolution 8-in IPS screen faces the user and brings the option of full 1080p content that was lacking on the first SHIELD Portable. The screen is bright and crisp, easily viewable in bright lighting for gaming or use in lots of environments. Though the Xiaomi Mi Pad 7.9 had a 2048x1536 resolution screen, the form factor of the SHIELD Tablet is much more in line with what NVIDIA built with the Tegra Note 7.
Introduction and Design
The next candidate in our barrage of ThinkPad reviews is the ThinkPad Yoga, which, at first glance, might seem a little bit redundant. After all, we’ve already got three current-gen Yoga models to choose from: the Yoga 2 11- and 13-inch iterations and the top-end Yoga 2 Pro. What could possibly be missing?
Well, in fact, as is often the case when choosing between well-conceived notebook models, it isn’t so much about what’s missing as it is about priorities. Whereas the consumer-grade Yoga models all place portability, slimness, and aesthetics in the highest regard, the ThinkPad Yoga subscribes to a much more practical business-oriented approach, which (nearly) always favors function over form. It’s a conversation we’ve had here at PC Perspective a thousand times before, but yet again, it is the core ThinkPad philosophy which separates the ThinkPad Yoga from other notebooks of its type. Suffice it to say that really the only reason to think of it as a Yoga at all is the unique hinge design and affiliated notebook/tablet convertibility; excepting that, this seems much closer to an X240 than anything in Lenovo’s current consumer-grade lineup. And carrying a currently-configurable street price of around $1,595, it’s positioned as such, too.
But it isn’t beyond reproach. Some of the same questionable decisions regarding design changes which we’ve covered in our recent ThinkPad reviews still apply to the Yoga. For instance, the much-maligned clickpad is back, bringing with it vivid nightmares of pointer jumpiness and click fatigue that were easily the biggest complaint about the T440s and X240 we recently reviewed. The big question today is whether these criticisms are impactful enough to disqualify the ThinkPad Yoga as a rational alternative to other ThinkPad convertibles and the consumer-grade Yoga models. It’s a tall order, so let’s tackle it.
First up, the specs:
While most of this list is pretty conventional, the astute might have already picked out one particular item which tops the X240 we recently reviewed: a possible 16 GB of dual-channel RAM. The X240 was limited to just 8 GB of single-channel memory thanks to a mere single SODIMM slot. The ThinkPad Yoga also boasts a 1080p screen with a Wacom digitizer pen—something which is clearly superior to the panel in our X240 review unit. Sadly missing, however, are the integrated Gigabit Ethernet port and the VGA port—and the mini DisplayPort has been replaced by a mini-HDMI, which is decidedly inferior.
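Dual-channel operation matters more than the capacity bump, especially for integrated graphics that share main memory. Assuming DDR3-1600 modules (a reasonable guess for this class of machine, not a figure from the spec list):

$$ 2\ \text{channels} \times 64\ \text{bits} \times 1600\ \text{MT/s} \,/\, 8 = 25.6\ \text{GB/s}, \quad \text{versus } 12.8\ \text{GB/s single-channel} $$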
SHIELD Tablet with new Features
It's odd how regularly these events seem to come. Almost exactly one year ago today, NVIDIA launched the SHIELD gaming device, a portable Android machine with a controller permanently attached, all powered by the Tegra 4 SoC. It was a completely unique device that combined a 5-in touchscreen with a console-grade controller to build the best Android gaming machine you could buy. NVIDIA did its best to promote Android gaming as a secondary market to consoles and PCs, and the frequent software updates kept the SHIELD nearly up-to-date with the latest Android software releases.
As we approach the one year anniversary of SHIELD, NVIDIA is preparing to release another product to add to the SHIELD family of products: the SHIELD Tablet. Chances are, you could guess what this device is already. It is a tablet powered by Tegra K1 and updated to support all SHIELD software. Of course, there are some new twists as well.
The NVIDIA SHIELD Tablet is being targeted, as the slide above states, at being "the ultimate tablet for gamers." This is a fairly important point to keep in mind as we walk through the details of the SHIELD Tablet and its accessories, as there are certain areas where NVIDIA's latest product won't quite appeal to general-purpose tablet users.
Most obviously, this new SHIELD device is a tablet (and only a tablet). There is no permanently attached controller. Instead, the SHIELD controller will be an add-on accessory for buyers. NVIDIA has put a lot of processing power into the tablet as well as incredibly interesting new software capabilities to enable 10-ft use cases and even mobile Twitch streaming.
The First with the Tegra K1 Processor
Back in May, a Chinese company announced what was then the first and only product based on NVIDIA’s Tegra K1 SoC, the Xiaomi Mi Pad 7.9. Since then we have had a couple of other products hit our news wire, including Google’s own Project Tango development tablet. But the Xiaomi is the first to actually be released, selling through 50,000 units in four minutes according to some reports. I happened to find one on Aliexpress.com, a Chinese export retail website, and after a few short days the DHL deliveryman dropped the Tegra K1 powered machine off at my door.
If you are like me, the Xiaomi name was a new one. It is a privately owned company from Beijing that has become one of China’s largest electronics companies after jumping into the smartphone market in 2011. The Mi Pad marks the company’s first attempt at a tablet device, and the partnership with NVIDIA to be an early seller of the Tegra K1 seems to be making waves.
The Tegra K1 Processor
The Tegra K1 SoC was first revealed at CES in January of 2014, and with it came a heavy burden of expectation from NVIDIA directly, as well as from investors and the media. The first SoC from the Tegra family to have a GPU built from the ground up by NVIDIA engineers, the Tegra K1 gets its name from the Kepler family of GPUs. It gets the base of its architecture there as well.
The CPU side of the Tegra K1 looks very familiar: four ARM Cortex-A15 “r3” cores with 2MB of L2 cache, plus a fifth A15 core used for lower power situations. This 4+1 design is the same one introduced with the Tegra 4 processor last year and allows NVIDIA to implement its own unique take on a “big.LITTLE” design. Some slight modifications to the cores are included with Tegra K1 that improve performance and efficiency, but not by much – the main CPU is very similar to the Tegra 4.
The focus of the Tegra K1 is on the GPU, now powered by NVIDIA’s Kepler architecture. The K1 features 192 CUDA cores in a design very similar to a single SMX on today’s GeForce GTX 700-series graphics cards. This brings OpenGL ES 3.0 support but, much more importantly, OpenGL 4.4 and DirectX 11 integration. The ambition of bringing modern, quality PC gaming to mobile devices is closer than you ever thought possible with this product, and the demos I have seen running on reference designs are enough to leave your jaw on the floor.
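To put that single SMX in perspective, the same FLOPs math used for desktop cards applies. Assuming the roughly 950 MHz peak GPU clock NVIDIA has quoted for Tegra K1 (an assumption, not a figure from this article):

$$ 192 \times 2 \times 0.95\,\text{GHz} \approx 365\ \text{GFLOPs} $$

That is an order of magnitude below a desktop GTX 700-series card, but it arrives in a chip drawing only a couple of watts.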
By far the most impressive part of Tegra K1 is the implementation of a full Kepler SMX onto a chip that will be running well under 2 watts. While it has long been NVIDIA's plan to merge the primary GPU architectures between mobile and discrete, this choice did not come without risk. When the company was building the first Tegra part, it basically had to make a bet on where the world of mobile technology would be in 2015. NVIDIA might have continued to evolve the initial GPU IP that was used in Tegra 1, adding feature support and increasing the required die area to improve overall GPU performance, but instead it opted to position a “merge point” with Kepler in 2014. The team at NVIDIA saw that they were within reach of the discontinuity point we are seeing today with Tegra K1, but in truth they had to suffer through the first iterations of Tegra GPU designs that they knew were inferior to the design coming with Kepler.
You can read much more on the technical detail of the Tegra K1 SoC by heading over to our launch article that goes into the updated CPU design, as well as giving you all the gore behind the Kepler integration.
By far the most interesting aspect of the Xiaomi Mi Pad 7.9 tablet is the decision to integrate the Tegra K1 processor. Performance and battery life comparisons with other 7 to 8-in tablets will likely not impact how it sells in China, but the results may mean the world to NVIDIA as it courts other vendors to integrate the SoC.
When Magma Freezes Over...
Intel has confirmed that it approached AMD about access to the Mantle API. The discussion, despite being clearly labeled as "an experiment" by an Intel spokesperson, was initiated by Intel -- not AMD. According to AMD's Gaming Scientist, Richard Huddy, via PCWorld, AMD's response was, "Give us a month or two" and "we'll go into the 1.0 phase sometime this year," which only has about five months left in it. When the API reaches 1.0, anyone who wants to participate (including hardware vendors) will be granted access.
AMD inside Intel Inside???
I do wonder why Intel would care, though. Intel has the fastest per-thread processors, and their GPUs are not known to be workhorses held back by API call bottlenecks, either. That is not to say that I cannot see any reason at all, however...
Introduction and Design
It was only last year that we were singing the praises of the GT60, which was one of the fastest notebooks we’d seen to date. Its larger cousin, the GT70, features a 17.3” screen (versus the GT60’s 15.6”), faster CPUs and GPUs, and even better options for storage. Now, the latest iteration of this force to be reckoned with has arrived on our desks, and while its appearance hasn’t changed much, its performance is better than ever.
While we’ll naturally be spending a good deal of time discussing performance and stability in our article here, we won’t be dedicating much to casing and general design, as—for the most part—it is very similar to that of the GT60. On the other hand, one area on which we’ll be focusing particularly heavily is battery life, thanks solely to the presence of NVIDIA’s new Battery Boost technology. As the name suggests, this new feature employs power conservation techniques to extend the notebook’s battery life while gaming unplugged. This is accomplished primarily via frame rate limiting, a feature that has actually been available since the introduction of Kepler, but which until now has been buried within the advanced options available for such products. Battery Boost basically brings this to the forefront and makes it both accessible and the default.
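The core mechanism is simple enough to sketch. A frame rate limiter inserts idle time into the render loop so the GPU can drop to a lower power state instead of racing ahead. The Python below is a toy illustration of that idea (the 30 FPS target and timings are arbitrary), not NVIDIA's driver-level implementation:

```python
import time

TARGET_FPS = 30
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~33.3 ms per frame

def render_frame():
    """Stand-in for the game's actual rendering work."""
    time.sleep(0.010)  # pretend this frame took ~10 ms to draw

while True:
    start = time.perf_counter()
    render_frame()
    elapsed = time.perf_counter() - start
    # Sleep off the remainder of the frame budget. On real hardware, this
    # idle time is what lets the GPU clock down and save battery power.
    if elapsed < FRAME_BUDGET:
        time.sleep(FRAME_BUDGET - elapsed)
```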
Let’s take a look at what this bad boy is packing:
Not much commentary needed here; this table reads like a who’s who of computer specifications. Of particular note are the 32 GB of RAM, the 880M (of course), and the 384 GB SSD RAID array (!!). Elsewhere, it’s mostly business as usual for the ultra-high-end MSI GT notebooks, with a slightly faster CPU than the previous model we reviewed (the i7-4700MQ). One thing is guaranteed: it’s a fast machine.
Kaveri Goes Mobile
The processor market is in an interesting place today. At the high end of the market Intel continues to stand pretty much unchallenged, ranging from the $1000 Ivy Bridge-E to the $300 Haswell parts available for DIY users. The same could really be said for the mobile market - if you want a high performance part, the default choice continues to rest with Intel. But AMD has some interesting options that Intel can't match once you enter the world of the mainstream notebook. The APU was slow to develop, but it has placed AMD in a unique position, separated from Intel's processors by a more or less reversed compute focus. While Intel dominates in performance on the x86 side of things, the GPU in AMD's latest APUs continues to lead in gaming and compute performance.
The biggest problem for AMD is that the computing software ecosystem still has not caught up with the performance that a GPU can provide. With the exception of games, the GPU in a notebook or desktop remains underutilized. Certain software vendors are making strides - see the changes in video transcoding and image manipulation - but there is still a lot of ground for AMD to cover.
Today we are looking at the mobile version of Kaveri, AMD's latest entry into the world of APUs. This processor combines the latest AMD processor architecture with a GCN-based graphics design for a pretty advanced part. When the desktop version of this processor was released, we wrote quite a bit about the architecture and the technological advancements made in it, including its status as the first fully HSA-compliant processor. I won't be diving into the architecture details here since we covered them so completely back in January, just after CES.
The mobile version of Kaveri is basically identical in architecture with some changes for better power efficiency. The flagship part will ship with 12 Compute Cores (4 Steamroller x86 cores and 8 GCN cores) and will support all the same features of GCN graphics designs including the new Mantle API.
Early in the spring we heard rumors that the AMD FX brand was going to make a comeback! Immediately enthusiasts were thinking up ways AMD could compete against the desktop Core i7 parts from Intel; could it be with 12 cores? DDR4 integration?? As it turns out...not so much.
In many ways, the Google Nexus 7 has long been the standard of near perfection for an Android tablet. With a modest 7-inch screen, solid performance and low cost, the ASUS-built hardware has stood through one major revision as our top selection. Today though, a new contender in the field makes its way to the front of the pack in the form of the ASUS MeMO Pad 7 (ME176C). At $150, this new 7-inch tablet has almost all the hallmarks to really make an impact in the Android ecosystem. Finally.
The MeMO Pad 7 is not a new product family, though. It has existed with Mediatek processors for quite some time with essentially the same form factor. This new ME176C model makes some decisions that help it break into a new level of performance while maintaining the budget pricing required to really take on the likes of Google. By coupling the MeMO Pad brand with the Intel Bay Trail Atom processor, the two companies firmly believe they have a winner; but do they?
I have to admit that my time with the ASUS MeMO Pad 7 (ME176C) has been short; shorter than I would have liked in order to offer a truly definitive take on this mobile platform. I prefer to take the time to work a tablet into my daily work and home routines: reading, browsing, email, etc. This allows me to filter through any software intricacies that might make or break a purchasing decision. Still, I think the ASUS design is going to live up to my expectations and is worth every penny of the $150 price tag.
The ASUS MeMO Pad 7 has a 1280x800 resolution IPS screen. This 7-inch device is powered by the new Intel Atom Z3745 quad-core SoC with 1GB of memory and 16GB of on-board storage. The front facing camera is of the 2MP variety while the rear facing camera is 5MP - but you will likely be as disappointed in the image quality of the photos as I was. Connectivity options include the microUSB port for charging and data transfer along with 802.11b/g/n 2.4 GHz WiFi (sorry, no 5.0 GHz option here). Bluetooth 4.0 allows for low power data sync with other devices you might have and our model shipped with Android 4.4.2 already pre-installed.
The rear of the ASUS MeMO Pad is a pseudo rubber/plastic type material that is easy to grip while not leaving fingerprints behind - a solid combination. The center mounted camera lens takes decent pictures - but I can't put any more praise on it than that. It was easy to find image quality issues with photos even in full daylight. It's hard to know how disappointed to be considering the price, but the Nexus 7 has better optical hardware.
Upgrades from Anker
Last year we started to accumulate a large number of mobile devices around the office, including smartphones, tablets and even convertibles like the ASUS T100, all of which were charged over USB connections. While not a hassle when you are charging one or two units at a time, having 6+ on our desks on any given day started to become a problem for our less numerous wall outlets. Our solution last year was Anker's E150 25 watt wall charger, which we did a short video overview on.
It was great but had limitations, including different charging rates depending on the port you connected to, a limited output of 5 amps total across all five ports, and fixed outputs per port. Today we are taking a look at a pair of new Anker devices that implement smart ports, called PowerIQ, which enable the battery and wall charger to send as much power to the charging device as it requests, regardless of what physical port it is attached to.
We'll start with the updated Anker 40 watt 5-port wall charger and then move on to discuss the 3-port mobile battery charger, both of which share the PowerIQ feature.
Anker 40 watt 5-Port Wall Charger
The new Anker 5-port wall charger is actually smaller than the previous generation but offers superior specifications on all feature points. This unit can push out 40 watts total combined through all five USB ports: 5 volts at as much as 8 amps. We are told all 8 amps can in fact go through a single USB charging port if a device were to request that much, though nothing in our offices seems to draw above 2.3A.
Any USB port can be used for any device on this new model; it doesn't matter where it plugs in. This greatly simplifies things from a user experience point of view, as you don't have to hold the unit up to your face to read the tiny text that existed on the E150. With 8 amps spread across all five ports you should have more than enough power to charge all your devices at full speed. If you happen to have five iPads charging at the same time, that would exceed 8A and all the devices' charge rates would be a bit lower.
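The arithmetic behind that caveat, assuming the 2.1 A draw typical of full-size iPads (Anker does not publish this scenario, so treat it as an estimate):

$$ 5 \times 2.1\ \text{A} = 10.5\ \text{A} > 8\ \text{A}, \quad \text{so each port averages at most } 8/5 = 1.6\ \text{A} $$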
AMD Makes some Lemonade...
I guess we could say that AMD has been rather busy lately. It seems that a significant amount of the content on PC Perspective this month revolved around the AMD AM1 platform. Before that we had the Kaveri products and the R7 265. AMD also reported some fairly solid growth over the past year with their graphics and APU lines. Things are not as grim and dire as they once were for the company. This is good news for consumers, as they will continue to be offered competing solutions that vie for that hard-earned dollar.
AMD is continuing their releases for 2014 with the announcement of their latest low-power and mainstream mobile APUs. These are codenamed “Beema” and “Mullins”, but they are based on the year-old Kabini chip. This may cause a few people to roll their eyes, as AMD has had some fairly unimpressive refreshes in the past. We saw the rather meager increases in clockspeed and power consumption with Brazos 2.0 a couple of years back, and it looked like this would be the case again for Beema and Mullins.
I was again expecting said meager improvements in power consumption and clockspeeds that we had received all those years ago with Brazos 2.0. Turns out I was wrong. This is a fairly major refresh which does a few things that I did not think were entirely possible, and I’m a rather optimistic person. So why is this release surprising? Let us take a good look under the hood.
Maxwell and Kepler and...Fermi?
Covering the landscape of mobile GPUs can be a harrowing experience. Brands, specifications, performance, features and architectures can all vary from product to product, even inside the same family. Rebranding is rampant from both AMD and NVIDIA and, in general, we are met with one of the most confusing segments of the PC hardware market.
Today, with the release of the GeForce GTX 800M series from NVIDIA, we are getting all of the above in one form or another. We will also see performance improvements and the introduction of the new Maxwell architecture (in a few parts at least). Along with the GeForce GTX 800M parts, you will also find the GeForce 840M, 830M and 820M offerings at lower performance, wattage and price levels.
With some new hardware comes a collection of new software for mobile users, including the innovative Battery Boost that can increase unplugged gaming time by using frame rate limiting and other "magic" bits that NVIDIA isn't talking about yet. ShadowPlay and GameStream also find their way to mobile GeForce users as well.
Let's take a quick look at the new hardware specifications.
| | GTX 880M | GTX 780M | GTX 870M | GTX 770M |
|---|---|---|---|---|
| GPU Code name | Kepler | Kepler | Kepler | Kepler |
| Rated Clock | 954 MHz | 823 MHz | 941 MHz | 811 MHz |
| Memory | Up to 4GB | Up to 4GB | Up to 3GB | Up to 3GB |
| Memory Clock | 5000 MHz | 5000 MHz | 5000 MHz | 4000 MHz |
Both the GTX 880M and the GTX 870M are based on Kepler, keeping the same basic feature set and hardware specifications of their brethren in the GTX 700M line. However, while the GTX 880M has the same CUDA core count as the 780M, the same cannot be said of the GTX 870M. Moving from the GTX 770M to the 870M sees a significant 40% increase in core count as well as a jump in clock speed from 811 MHz (plus Boost) to 941 MHz.
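That 40% figure checks out against the published core counts: the GTX 770M carries 960 CUDA cores, and

$$ 960 \times 1.4 = 1344\ \text{CUDA cores (GTX 870M)} $$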
Introduction and Design
Alongside our T440s review unit was something slightly smaller and dear to our hearts: the latest entry in the ThinkPad X series of notebooks. Seeing as this very review is being typed on a Lenovo X220, our interest was piqued by the latest refinements to the formula. When the X220 was released, the thin-and-light trend was only just beginning to pick up steam leading into what eventually became today’s Ultrabook movement. Its 2012 successor, the ThinkPad X230, went on to receive our coveted (and rarely bestowed) Editor’s Choice Award, even in spite of a highly controversial keyboard change that sent the fanbase into a panic.
But all of that has since (mostly) blown over, primarily thanks to the fact that—in spite of the minor ergonomic adjustments required to accustom oneself to what was once a jarringly different keyboard design—the basic philosophy remained the same: pack as many powerful parts as possible into a 12.5-inch case while still maintaining good durability and battery life. These machines were every bit as capable as most other 13- and 14-inch notebooks of their time, and they were considerably smaller, too. About the only things they lacked were higher-resolution screens, discrete graphics, and quad-core CPUs.
But with the X240 (and the T440s), portability has truly taken center stage, suggesting a paradigm shift—however subtle—away from “powerful (and light)” and toward “light (and powerful)”. Coupled with Intel’s Haswell CPUs and Lenovo’s new Power Bridge dual-battery design, this should yield great benefits in the realm of battery life. But that isn’t all that’s different: we also find a (once again) revamped keyboard, as well as a completely new touchpad design which finally dispenses with the physical buttons entirely. As with the X230, these changes have roiled the ThinkPad purists—but is it all just a matter of close-minded traditionalism? That’s precisely what we’ll discover today.
Introduction and Design
Arguably some of the most thoughtful machines on the market are Lenovo’s venerable ThinkPads, which—while sporadically brave in their assertions—are still among the most conservative (yet simultaneously practical) notebooks available. What makes these notebooks so popular in the business crowds is their longstanding refusal to compromise functionality in the interest of form, as well as their self-proclaimed legendary reliability. And you could argue that such practical conservatism is what defines a good business notebook: a device which embraces the latest technological trends, but only with requisite caution and consideration.
Maybe it’s the shaky PC market, or maybe it’s the sheer onset of sexy technologies such as touch and clickpads, but recent ThinkPads have begun to show some uncommon progressivism, and unapologetically so. First, it was the complete replacement of the traditional, critically-acclaimed ThinkPad keyboard with the Chiclet AccuType variety, a decision which irked purists but eventually was accepted by most. Along with that came the integrated touchpad buttons, which are still lamented by many users. Those alterations to the winning design were ultimately relatively minor, however, and for the most part, they’ve now been digested by the community. Now, though, with the T440s (as well as the rest of Lenovo’s revamped ThinkPad lineup), we’re seeing what will perhaps constitute the most controversial change of all: the replacement of the older touchpads with a “5-button trackpad”, as well as an optional touchscreen interface.
Can these changes help to keep the T440s on the cusp of technological progress, or has the design finally crossed the threshold into the realm of counterproductivity?
Compared with nearly any other modern notebook, these specs might not hold many surprises. But judged side-by-side with its T430s predecessor, there are some pretty striking differences. For starters, the T440s is the first in its line to offer only low-voltage CPU options. While our test unit shipped with the (certainly capable enough) Core i5-4200U—a dual-core processor with up to 2.6 GHz Turbo Boost clock rate—options range up to a Core i7-4600U (up to 3.30 GHz). Still, these options are admittedly a far cry from the i7-3520M with which top-end T430s machines were equipped. Of course, it’s also less than half the TDP, which is likely why the decision was made. Other notables are the lack of discrete graphics options (previously, users had the choice of either integrated graphics or an NVIDIA NVS 5200M) and the maximum supported memory of 12 GB. And, of course, there’s the touchscreen—which is not required, but rather, is merely an option. On the other hand, while we’re on the subject of the screen, this is also the first model in the series to offer a 1080p resolution, whether traditional or touch-enabled—which is very much appreciated indeed.
That’s a pretty significant departure from the design of the T430s, which—as it currently appears—could represent the last T4xxs model to provide such powerhouse options at the obvious expense of battery life. Some markets already have the option of the ThinkPad S440 to fill the Ultrabook void within the ThinkPad 14-inch range, and that notebook can even be outfitted with discrete graphics. The T440s top-end configuration, meanwhile, consists of a 15W TDP dual-core i7 with integrated graphics and 12 GB of DDR3 RAM. In other words, it’s powerful, but it’s just not in the same class as the T430s’ components. What’s more important to you?
Lenovo introduces a unique form factor
Lenovo isn't a company that seems interested in slowing down. Just when you think the world of notebooks is getting boring, it releases products like the ThinkPad Tablet 2 and the Yoga 2 Pro. Today we are looking at another innovative product line from Lenovo: the Yoga Tablet 8 and Yoga Tablet 10. While the tablets share the Yoga branding seen in recent convertible notebooks, these are NOT Windows-based PCs - something that I fear might confuse some consumers.
Instead, this tablet pair is based on Android (4.2.2 at this point), which brings with it several advantages. First, the battery life is impressive, particularly with the 8-in version, which clocked in at more than 17 hours in our web browsing test! Second, the form factor of these units is truly unique and not only allows for larger batteries but also a more comfortable in-the-hand feel than I have had with any other tablet.
Check out the video overview below!
You can pick up the 8-in version of the Lenovo Yoga Tablet for just $199 while the 10.1-in model starts at $274.
The Lenovo Yoga Tablet is available in both 8-in and 10.1-in sizes, though the hardware is mostly identical between the two units, including screen resolution (1280x800) and SoC hardware (MediaTek quad-core Cortex-A7). The larger model does get an 8000 mAh battery (over the 6000 mAh in the 8-in), but that isn't enough to counterbalance the power draw of the larger screen.
The 1280x800 resolution is a bit lower than I would like but is perfectly acceptable on the 8-in version of the Yoga Tablet. On the 10-in model though the pixels are just too big and image quality suffers. These are currently running Android 4.2.2 which is fine, but hopefully we'll see some updates from Lenovo to more current Android versions.
Introduction and Design
We’re always on the hunt for good docking stations, and sometimes it can be difficult to locate one when you aren’t afforded the luxury of a dedicated docking port. Fortunately, with the advent of USB 3.0 and the greatly improved bandwidth that comes along with it, the options have become considerably more robust.
Today, we’ll take a look at StarTech’s USB3SDOCKHDV, more specifically labeled the Universal USB 3.0 Laptop Docking Station - Dual Video HDMI DVI VGA with Audio and Ethernet (whew). This docking station carries an MSRP of $155, though it is currently available at resellers such as Amazon for around $125, and it sits well above other StarTech options (such as the $100 USBVGADOCK2, which offers just one video output—VGA—10/100 Ethernet, and four USB 2.0 ports).
The big selling points of the USB3SDOCKHDV are its addition of three USB 3.0 ports and Gigabit Ethernet—but most enticingly, its purported ability to provide three total screens simultaneously (including the connected laptop’s LCD) by way of dual HD video output. This video output can be achieved by way of either HDMI + DVI-D or HDMI + VGA combinations (but not by VGA + DVI-D). We’ll be interested to see how well this functionality works, as well as what sort of toll it takes on the CPU of the connected machine.
Continue reading our review of the StarTech USB3SDOCKHDV USB 3.0 Docking Station!