New Features and Specifications
It is increasingly obvious that in the high-end smartphone and tablet market, much as we saw occur over the last several years in the PC space, consumers are becoming more concerned with features and experiences than with raw specifications. There is still plenty to drool over when looking at and talking about 4K screens in the palm of your hand, octa-core processors, and mobile SoC GPUs measuring performance in hundreds of GFLOPS, but at the end of the day the vast majority of consumers simply want a device that "wows" them.
As a result, device manufacturers and SoC vendors are shifting priorities for performance and features, and for how those are presented both to the public and to the media. Take this week's Qualcomm event in San Diego, where a team of VPs, PR personnel, and engineers walked me through the new Snapdragon 810 processor. Rather than showing slide after slide of performance numbers compared to the competition, I was shown room after room of demos: Wi-Fi, LTE, 4K capture and playback, gaming capability, thermals, antenna modifications, etc. The goal is to showcase the experience of the entire platform – something Qualcomm has been providing for longer than just about anyone in this business – while also educating consumers on the need for balance.
As a 15-year veteran of the hardware space, my first reaction here couldn't have been scripted any more precisely: a company that doesn't show performance numbers has something to hide. But I was given time with a reference platform featuring the Snapdragon 810 processor in a tablet form factor, and the results show impressive increases over the 801 and 805 processors from the previous family. Rumors of the chip's heat issues seem overblown, but that will be hard to prove for sure until we get retail hardware in our hands to confirm.
Today’s story will outline the primary feature changes of the Snapdragon 810 SoC, though there was so much detail presented at the event with such a short window of time for writing that I definitely won’t be able to get to it all. I will follow up the gory specification details with performance results compared to a wide array of other tablets and smartphones to provide some context to where 810 stands in the market.
It has been an abnormal week for us here at PC Perspective. Our typical review schedule has pretty much flown out the window, and the past seven days have been filled with learning, researching, retesting, and publishing. That might sound like the norm, but in these cases the process was initiated by tips from our readers. Last Saturday (24 Jan), a few things were brewing:
- Ryan was informed by NVIDIA that the memory layout of the GTX 970 was different than expected.
- The huge (now 168 page) overclock.net forum thread about the Samsung 840 EVO slowdown was once again gaining traction.
- Someone got G-Sync working on a laptop integrated display.
We had to do a bit of triage here of course, as we can only research and write so quickly. Ryan worked the GTX 970 piece as it was the hottest item. I began a few days of research and testing on the 840 EVO slowdown issue reappearing on some drives, and we kept tabs on that third thing, which at the time seemed really farfetched. With those first two items taken care of, Ryan shifted his efforts to GTX 970 SLI testing while I shifted my focus to finding out if there was any credence to this G-Sync laptop thing.
A few weeks ago, an ASUS Nordic Support rep inadvertently leaked an interim build of the NVIDIA driver. This was a mobile driver build (version 346.87) focused on their G751 line of laptops. One recipient of this driver link posted it to the ROG forum back on the 20th. A fellow by the name of Gamenab, who owns the same laptop cited in that thread, presumably stumbled across this driver, tried it out, and was more than likely greeted by this popup after the installation completed:
Now I know what you’re thinking, and it’s probably the same thing anyone would think. How on earth is this possible? To cut a long story short, while the link to the 346.87 driver was removed shortly after being posted to that forum, we managed to get our hands on a copy of it, installed it on the ASUS G751 that we had in for review, and wouldn’t you know it we were greeted by the same popup!
Ok, so it's a popup, could it be a bug? We checked NVIDIA Control Panel and the options were consistent with those of a G-Sync connected system. We fired up the pendulum demo and watched the screen carefully, passing the machine around the office to be inspected by all. We then fired up some graphics benchmarks that were well suited to show off the technology (Unigine Heaven, Metro: Last Light, etc.), and everything looked great – smooth, steady pans with no juddering or tearing to be seen. Ken Addison, our Video Editor and jack of all trades, researched the panel type and found that it was likely capable of a 100 Hz refresh. We quickly created a custom profile, hit apply, and our 75 Hz G-Sync laptop was instantly transformed into a 100 Hz G-Sync laptop!
Ryan's Note: I think it is important here to point out that we didn't just look at demos and benchmarks for this evaluation but actually looked at real-world gameplay situations. Playing through Metro: Last Light showed very smooth pans and rotation, Assassin's Creed played smoothly as well and flying through Unigine Heaven manually was a great experience. Crysis 3, Battlefield 4, etc. This was NOT just a couple of demos that we ran through - the variable refresh portion of this mobile G-Sync enabled panel was working and working very well.
At this point in our tinkering, we had no idea how or why this was working, but there was no doubt that we were getting a similar experience as we have seen with G-Sync panels. As I digested what was going on, I thought surely this can’t be as good as it seems to be… Let’s find out, shall we?
Introduction and Design
MSI’s unapologetically large GT70 “Dominator Pro” series of machines knows its audience well: for every gripe about the notebooks’ hulking sizes, a snicker and a shrug are returned by the community, who rarely value such items as portability as highly as the critics who are hired to judge based on them. These machines are built for power, first and foremost. While featherweight construction and manageable dimensions matter to those regularly tossing machines into their bags, by contrast, MSI’s desktop replacements recognize the meaning of their classification: the flexibility of merely moving around the house with one’s gaming rig is reason enough to consider investing in one.
So its priorities are arguably well in line. But if you want to keep on dominating, regular updates are a necessity, too. And with the GT72 2QE, MSI takes it all up yet another notch: our review unit (GT72 2QE-208US) packs four SSDs in a RAID-0 array (as opposed to the GT70’s three), plus a completely redesigned case which manages to address some of our biggest complaints. Oh yeah, and an NVIDIA GTX 980M GPU with 8 GB GDDR5 RAM—the fastest mobile GPU ever. (You can find much more information and analysis on this GPU specifically in Ryan’s ever-comprehensive review.)
Of course, these state-of-the-art innards come at no small price: $2,999 as configured (around a $2,900 street price), or a few hundred bucks less with storage or RAM sacrifices—a reasonable trade-off considering the marginal benefits one gains from a quad-SSD array or 32 GB of RAM.
Core M 5Y70 Specifications
Back in August of this year, Intel invited me out to Portland, Oregon to talk about the future of processors and process technology. Broadwell is the first microarchitecture to ship on Intel's newest 14nm process technology and the performance and power implications of it are as impressive as they are complex. We finally have the first retail product based on Broadwell-Y in our hands and I am eager to see how this combination of technology is going to be implemented.
If you have not read through my article that dives into the intricacies of the 14nm process and the architectural changes coming with Broadwell, then I would highly recommend that you do so before diving any further into this review. Our Intel Core M Processor: Broadwell Architecture and 14nm Process Reveal story clearly explains the "how" and "why" for many of the decisions that determined the direction the Core M 5Y70 heads in.
As I stated at the time:
"The information provided by Intel about Broadwell-Y today shows me the company is clearly innovating and iterating on its plans set in place years ago with the focus on power efficiency. Broadwell and the 14nm process technology will likely be another substantial leap between Intel and AMD in the x86 tablet space and should make an impact on other tablet markets (like Android) as long as pricing can remain competitive. That 14nm process gives Intel an advantage that no one else in the industry can claim and unless Intel begins fabricating processors for the competition (not completely out of the question), that will remain a house advantage."
With a background on Intel's goals with Broadwell-Y, let's look at the first true implementation.
GeForce GTX 980M Performance Testing
When NVIDIA launched the GeForce GTX 980 and GTX 970 graphics cards last month, part of the discussion at our meetings also centered around the mobile variants of Maxwell. The NDA was a bit later though and Scott wrote up a short story announcing the release of the GTX 980M and the GTX 970M mobility GPUs. Both of these GPUs are based on the same GM204 design as the desktop cards, though as you should have come to expect by now, do so with lower specifications than the similarly-named desktop options. Take a look:
| | GTX 980M | GTX 970M | GTX 980 | GTX 970 | GTX 880M |
|---|---|---|---|---|---|
| Memory | Up to 4GB | Up to 3GB | 4GB | 4GB | 4GB/8GB |
| Memory Rate | 2500 MHz | 2500 MHz | 7.0 GT/s | 7.0 GT/s | 2500 MHz |
Just like the desktop models, GTX 980M and GTX 970M are built on the 28nm process technology and are tweaked and built for power efficiency - one of the reasons the mobile release of this product is so interesting.
With a CUDA core count of 1536, the GTX 980M has 25% fewer shader cores than the desktop GTX 980 (which has 2048), along with a slightly lower base clock speed. The result is a peak theoretical performance of 3.189 TFLOPs, compared to 4.6 TFLOPs on the desktop GTX 980. In fact, that is only slightly higher than the Kepler-based GTX 880M, which clocks in with the same CUDA core count (1536) but a TFLOP capability of 2.9. Bear in mind that the GTX 880M uses a different architecture than the GTX 980M; Maxwell's design advantages go beyond just CUDA core count and clock speed.
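These peak figures fall straight out of the core count and clock: each CUDA core can retire one fused multiply-add (two floating-point operations) per cycle. A quick sketch, where the base clocks are back-calculated from the quoted TFLOP numbers rather than taken from NVIDIA's slides:

```python
def peak_tflops(cuda_cores: int, base_clock_mhz: float) -> float:
    """Peak single-precision throughput: cores x 2 FLOPs (one FMA) per clock."""
    return cuda_cores * 2 * base_clock_mhz * 1e6 / 1e12

# Clocks below are inferred from the quoted TFLOP figures, not official specs.
print(round(peak_tflops(1536, 1038), 3))  # GTX 980M -> 3.189
print(round(peak_tflops(1280, 924), 3))   # GTX 970M -> 2.365
print(round(peak_tflops(2048, 1126), 3))  # GTX 980 (desktop) -> ~4.6
```

The same arithmetic shows why the 880M trails despite identical core counts: Kepler's 1536 cores at a lower sustained clock simply move fewer FLOPs per second, before Maxwell's per-core efficiency gains are even considered.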
The GTX 970M is even smaller, with a CUDA core count of 1280 and peak performance rated at 2.365 TFLOPs. Also notice that the memory bus width has shrunk from 256-bit to 192-bit for this part.
As is typically the case with mobile GPUs, the memory speed of the GTX 980M and GTX 970M is significantly lower than the desktop parts. While the GeForce GTX 980 and 970 that install in your desktop PC will have memory running at 7.0 GHz, the mobile versions will run at 5.0 GHz in order to conserve power.
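The cost of the slower memory (and, for the 970M, the narrower bus) is easy to quantify: peak bandwidth is just the bus width in bytes times the effective transfer rate. A minimal sketch using the bus widths and rates given above:

```python
def mem_bandwidth_gbps(bus_width_bits: int, effective_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * effective_rate_gtps

print(mem_bandwidth_gbps(256, 5.0))  # GTX 980M (256-bit @ 5.0 GT/s): 160.0 GB/s
print(mem_bandwidth_gbps(192, 5.0))  # GTX 970M (192-bit @ 5.0 GT/s): 120.0 GB/s
print(mem_bandwidth_gbps(256, 7.0))  # desktop GTX 980 (256-bit @ 7.0 GT/s): 224.0 GB/s
```

So the 980M gives up roughly 29% of the desktop 980's memory bandwidth, and the 970M another 25% on top of that.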
From a feature-set standpoint, though, the GTX 980M/970M are very much the same as the desktop parts that I looked at in September. You will have support for VXGI (NVIDIA's new custom global illumination technology), Multi-Frame AA, and, maybe most interestingly, Dynamic Super Resolution (DSR). DSR allows you to render a game at a higher resolution and then use a custom filter to downsample it back to your panel's native resolution. For mobile gamers using 1080p screens (as our test sample shipped with), this is a good way to utilize the power of your GPU in less power-hungry games while getting a surprisingly good image at the same time.
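Conceptually, DSR is supersampling with a nicer resampling step: render at a multiple of the panel's pixel count, then filter blocks of rendered pixels down to native resolution. A minimal sketch of the idea using a plain box filter (a crude stand-in; NVIDIA's actual DSR filter is higher quality, so this only illustrates the mechanism):

```python
import numpy as np

def box_downsample(image: np.ndarray, factor: int) -> np.ndarray:
    """Average non-overlapping factor x factor pixel blocks down to one pixel.
    A simple stand-in for DSR's higher-quality resampling filter."""
    h, w, c = image.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

# "Render" a 3840x2160 frame, then downsample 2x per axis to the 1920x1080 panel.
frame = np.random.rand(2160, 3840, 3).astype(np.float32)
native = box_downsample(frame, 2)
print(native.shape)  # (1080, 1920, 3)
```

Each output pixel averages four rendered samples, which is why DSR output looks noticeably cleaner than native-resolution rendering on the same panel.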
If there is one message that I get from NVIDIA's GeForce GTX 900M-series announcement, it is that laptop gaming is a first-class citizen in their product stack. Before even mentioning the products, the company provided relative performance differences between high-end desktops and laptops. Most of the rest of the slide deck is showing feature-parity with the desktop GTX 900-series, and a discussion about battery life.
First, the parts. Two products have been announced: The GeForce GTX 980M and the GeForce GTX 970M. Both are based on the 28nm Maxwell architecture. In terms of shading performance, the GTX 980M has a theoretical maximum of 3.189 TFLOPs, and the GTX 970M is calculated at 2.365 TFLOPs (at base clock). On the desktop, this is very close to the GeForce GTX 770 and the GeForce GTX 760 Ti, respectively. This metric is most useful when you're compute bandwidth-bound, at high resolution with complex shaders.
The full specifications are:
| | GTX 980M | GTX 970M | GTX 980 | GTX 970 | GTX 880M |
|---|---|---|---|---|---|
| Memory | Up to 4GB | Up to 3GB | 4GB | 4GB | 4GB/8GB |
| Memory Rate | 2500 MHz | 2500 MHz | 7.0 GT/s | 7.0 GT/s | 2500 MHz |
As for the features, it should be familiar for those paying attention to both desktop 900-series and the laptop 800M-series product launches. From desktop Maxwell, the 900M-series is getting VXGI, Dynamic Super Resolution, and Multi-Frame Sampled AA (MFAA). From the latest generation of Kepler laptops, the new GPUs are getting an updated BatteryBoost technology. From the rest of the GeForce ecosystem, they will also get GeForce Experience, ShadowPlay, and so forth.
For VXGI, DSR, and MFAA, please see Ryan's discussion for the desktop Maxwell launch. Information about these features is basically identical to what was given in September.
BatteryBoost, on the other hand, is a bit different. NVIDIA claims that the biggest change is just raw performance and efficiency, giving you more headroom to throttle. Perhaps more interesting though, is that GeForce Experience will allow separate one-click optimizations for both plugged-in and battery use cases.
The power efficiency demonstrated with the Maxwell GPU in Ryan's original GeForce GTX 980 and GTX 970 review is even more beneficial for the notebook market, where thermal designs are physically constrained. Longer battery life, as well as thinner and lighter gaming notebooks, will see tremendous advantages from a GPU that can run at near peak performance on the maximum power output of an integrated battery. In NVIDIA's presentation, they mention that while notebooks on AC power can draw as much as 230 watts, batteries tend to peak around 100 watts. The fact that a full-speed, desktop-class GTX 980 has a TDP of just 165 watts, compared to the 250 watts of a Radeon R9 290X, translates into notebook GPU performance that more closely mirrors its desktop brethren.
Of course, you probably will not buy your own laptop GPU; rather, you will be buying devices which integrate these. There are currently five designs across four manufacturers that are revealed (see image above). Three contain the GeForce GTX 980M, one has a GTX 970M, and the other has a pair of GTX 970Ms. Prices and availability are not yet announced.
Introduction and Design
A little over a year ago, we posted our review of the Lenovo Y500, which was a gaming notebook that leveraged not one, but two discrete video adapters (2 x NVIDIA GeForce GT 650M in SLI, to be exact) to achieve respectable gaming performance at a reasonable price point (around $1,200 at the time of the review).
Well—take away nearly a pound of weight (to 5.7 lbs), slim the case down to around an inch thick, update the chipset, and remove one video card, and you’ve got the Lenovo Y50 Touch, which ought to be able to improve upon the Y500 in nearly every area if the specifications add up to typical results. Here’s the full list of what our review unit includes:
While the GTX 860M (2 GB) is a far cry from, say, the GTX 880M (8 GB) we had the pleasure of testing in MSI’s GT70 2PE, it’s still a very capable card that should provide satisfactory results without breaking the bank (or the back). The rest of the spec sheet is conventional fare for a budget gaming notebook, with the only other surprise being the inclusion of a touchscreen—an option which replaces the traditional matte LCD panel in the standard Y50.
The configuration we received has already been slightly updated to include a CPU that’s a nudge better than the i7-4700HQ: the i7-4710HQ (which gains it 100 MHz in Turbo Boost clock rate). Otherwise, the specs are identical, and the street price is very close to that of the Y500 we originally reviewed: $1,139. Currently, an extra 10 bucks will also score you an external DVD+/-RW drive, and just 90 bucks more will boost your GTX 860M’s VRAM to 4 GB (from 2 GB) and your system RAM to 16 GB from 8 GB. That’s really not a bad deal at all.
One Small Step
While most articles about the iPhone 6 and iPhone 6 Plus thus far have focused on user experience and the larger screen sizes, our main questions about these new phones concern performance, and in particular the effect of Apple's transition to the 20nm process node for the A8 SoC. Naturally, I decided to put my personal iPhone 6 through our usual round of benchmarks.
First, let's start with 3DMark.
Comparing the 3DMark scores of the new Apple A8 to even the last-generation A7 shows a smaller improvement than we are used to seeing generation-to-generation from Apple's custom ARM implementations. And when you compare the A8 to something like the NVIDIA Tegra K1, which utilizes desktop-class GPU cores, the K1's overall score blows Apple out of the water. Even looking at the CPU-bound physics score, the K1 is still the winner.
A 78% advantage in overall score when compared to the A8 shows just how much of a powerhouse NVIDIA has in the K1. (Though clearly power envelopes are another matter entirely.)
If we look at more CPU benchmarks, like the browser-based Google Octane and SunSpider tests, the A8 starts to shine more.
While the A8 edges out the A7 to be the best performing device and 54% faster than the K1 in SunSpider, the A8 and K1 are neck and neck in the Google Octane benchmark.
Moving back to a graphics-heavy benchmark, GFXBench's Manhattan test, the Tegra K1 has a 75% performance advantage over the A8, though the A8 is in turn 36% faster than the previous-generation A7 silicon.
These early results are certainly a disappointment compared to the usual generation-to-generation performance increase we see with Apple SoCs.
However, the other aspect to look at is power efficiency. With normal use I have noticed a substantial increase in battery life on my iPhone 6 over the last-generation iPhone 5S. While this may be due in part to a small (about 1 Wh) increase in battery capacity, I think more credit goes to this being an overall more efficient device. Choices like sticking with a highly optimized dual-core CPU design and quad-core GPU, as well as the reduction in process node to 20nm, all contribute to increased battery life while still surpassing the performance of the last-generation Apple A7.
In that way, the A8 moves the bar forward for Apple and is a solid first attempt at using TSMC's 20nm silicon technology. There is strong potential that, with further refined parts (like the expected A8X for the iPad revisions), Apple will be able to surpass 28nm silicon even further in performance and efficiency.
The notebook market of today barely resembles the notebook market of five years ago. People are spending less money on their computers than ever before, and even sub-$1,000 options are adequate for casual 1080p gaming. However, the high-end boutique gaming notebook hasn't been forgotten. Companies like Maingear still forge on, trying to provide a no-compromise portable gaming experience. Today, we look at the Maingear Pulse 17 gaming laptop.
The most striking feature of the Pulse 17 is the namesake 17-in display. While we are used to seeing gaming laptops fall in the 15-in or higher range, there is something to be said about opening up the Pulse and being greeted by a massive display with 1080p resolution. The choice of a 17-in display here also enables one of the most impressive parts of this notebook, the thickness.
When most people think about gaming laptops, their minds go to the gigantic bricks of the past. The Pulse 17, by contrast, manages to provide gaming power at a thickness similar to the average ultrabook: 0.86 inches. In fact, the form factor is similar to what I'd imagine a 17-inch MacBook Pro with Retina display would look like, if Apple decided to use a display that large.
Even though the screen size creates a large footprint for the Pulse 17, both the thickness and the 6 lb weight make this the first truly portable gaming laptop I have used.
Comparing the physical form of the Pulse 17 to a notebook like the ASUS G750JX, which we reviewed late last year, is almost comical. The G750 weighs in at 10 lbs and measures just under 2 inches thick, while toting similar hardware and performance to the Pulse 17.
Top: Maingear Pulse 17, Bottom: ASUS G750JX
Beyond physical attributes, the Pulse 17 has a lot to offer from a hardware standpoint. The Intel Core i7-4700HQ processor and NVIDIA GTX 765M GPU (as tested, it now ships with a 870M) mean that you’ll have all that you need to play any modern game on the integrated 1080p display.
Storage is provided by a 1TB Hard Drive, as well as 2x128GB mSATA SSDs in SuperRAID 0 to provide maximum throughput.
A Tablet and Controller Worth Using
An interesting thing happened a couple of weeks back, while I was standing on stage at our annual PC Perspective Hardware Workshop during Quakecon in Dallas, TX. When NVIDIA offered up a SHIELD (now called the SHIELD Portable) for raffle, the audience cheered. And not just a little bit, but more than they did for nearly any other hardware offered up during the show. That included motherboards, graphics cards, monitors, even complete systems. It kind of took me aback: NVIDIA SHIELD was a popular brand, a name that was recognized, and apparently a product that people wanted to own. You might not have guessed that based on the sales numbers SHIELD has put forward, though. Even though it appeared to have significant mind share, market share was something that was lacking.
Today though, NVIDIA prepares the second product in the SHIELD lineup, the SHIELD Tablet, a device the company hopes improves on the idea of SHIELD to encourage other users to sign on. It's a tablet (not a tablet with a controller attached), it has a more powerful SoC that can utilize different APIs for unique games, it can be more easily used in a 10-ft console mode and the SHIELD specific features like Game Stream are included and enhanced.
The question of course though is easy to put forward: should you buy one? Let's explore.
The NVIDIA SHIELD Tablet
At first glance, the NVIDIA SHIELD Tablet looks like a tablet. That actually isn't a negative selling point though, as the SHIELD Tablet can and does act like a high end tablet in nearly every way: performance, function, looks. We originally went over the entirety of the tablet's specifications in our first preview last week but much of it bears repeating for this review.
The SHIELD Tablet is built around the NVIDIA Tegra K1 SoC, the first mobile silicon to implement the Kepler graphics architecture. That feature alone makes this tablet impressive because it offers graphics performance not seen in a form factor like this before. CPU performance is also improved over the Tegra 4 processor, but the graphics portion of the die sees the largest performance jump easily.
A 1920x1200 resolution 7.9-in IPS screen faces the user, bringing the option of full 1080p content that was lacking on the first SHIELD Portable. The screen is bright and crisp, easily viewable in bright lighting for gaming or use in a wide range of environments. Though the Xiaomi Mi Pad 7.9 has a higher 2048x1536 resolution screen, the form factor of the SHIELD Tablet is much more in line with what NVIDIA built with the Tegra Note 7.
Introduction and Design
The next candidate in our barrage of ThinkPad reviews is the ThinkPad Yoga, which, at first glance, might seem a little bit redundant. After all, we’ve already got three current-gen Yoga models to choose from between the Yoga 2 11- and 13-inch iterations and the Yoga 2 Pro top-end selection. What could possibly be missing?
Well, in fact, as is often the case when choosing between well-conceived notebook models, it isn't so much about what's missing as it is about priorities. Whereas the consumer-grade Yoga models all place portability, slimness, and aesthetics in the highest regard, the ThinkPad Yoga subscribes to a much more practical, business-oriented approach, which (nearly) always favors function over form. It's a conversation we've had here at PC Perspective a thousand times before, but yet again, it is the core ThinkPad philosophy which separates the ThinkPad Yoga from other notebooks of its type. Suffice it to say that really the only reason to think of it as a Yoga at all is the unique hinge design and affiliated notebook/tablet convertibility; excepting that, this seems much closer to an X240 than anything in Lenovo's current consumer-grade lineup. And carrying a configurable street price of around $1,595, it's positioned as such, too.
But it isn’t beyond reproach. Some of the same questionable decisions regarding design changes which we’ve covered in our recent ThinkPad reviews still apply to the Yoga. For instance, the much-maligned clickpad is back, bringing with it vivid nightmares of pointer jumpiness and click fatigue that were easily the biggest complaint about the T440s and X240 we recently reviewed. The big question today is whether these criticisms are impactful enough to disqualify the ThinkPad Yoga as a rational alternative to other ThinkPad convertibles and the consumer-grade Yoga models. It’s a tall order, so let’s tackle it.
First up, the specs:
While most of this list is pretty conventional, the astute might have already picked out one particular item which tops the X240 we recently reviewed: a possible 16 GB of dual-channel RAM. The X240 was limited to just 8 GB of single-channel memory thanks to a mere single SODIMM slot. The ThinkPad Yoga also boasts a 1080p screen with a Wacom digitizer pen—something which is clearly superior to our X240 review unit. Sadly missing, however, are the integrated Gigabit Ethernet port and the VGA port—and the mini DisplayPort has been replaced by a mini-HDMI, which ultimately is decidedly inferior.
SHIELD Tablet with new Features
It's odd how regular these events seem to come. Almost exactly one year ago today, NVIDIA launched the SHIELD gaming device, which is a portable Android tablet attached to a controller, all powered by the Tegra 4 SoC. It was a completely unique device that combined a 5-in touchscreen with a console-grade controller to build the best Android gaming machine you could buy. NVIDIA did its best to promote Android gaming as a secondary market to consoles and PCs, and the frequent software updates kept the SHIELD nearly-up-to-date with the latest Android software releases.
As we approach the one year anniversary of SHIELD, NVIDIA is preparing to release another product to add to the SHIELD family of products: the SHIELD Tablet. Chances are, you could guess what this device is already. It is a tablet powered by Tegra K1 and updated to support all SHIELD software. Of course, there are some new twists as well.
The NVIDIA SHIELD Tablet is being targeted, as the slide above states, at being "the ultimate tablet for gamers." This is a fairly important point to keep in mind as we walk through the details of the SHIELD Tablet and its accessories, as there are certain areas where NVIDIA's latest product won't quite appeal to general-purpose tablet users.
Most obviously, this new SHIELD device is a tablet (and only a tablet). There is no permanently attached controller. Instead, the SHIELD controller will be an add-on accessory for buyers. NVIDIA has put a lot of processing power into the tablet as well as incredibly interesting new software capabilities to enable 10-ft use cases and even mobile Twitch streaming.
The First with the Tegra K1 Processor
Back in May, a Chinese company announced what was then the first and only product based on NVIDIA's Tegra K1 SoC, the Xiaomi Mi Pad 7.9. Since then we have had a couple of other products hit our news wire, including Google's own Project Tango development tablet. But the Xiaomi is the first to actually be released, selling through 50,000 units in four minutes according to some reports. I happened to find one on Aliexpress.com, a Chinese online retail site, and after a few short days the DHL deliveryman dropped the Tegra K1-powered machine off at my door.
If you are like me, the Xiaomi name was a new one. Xiaomi is a privately owned company from Beijing that has become one of China's largest electronics companies since jumping into the smartphone market in 2011. The Mi Pad marks the company's first attempt at a tablet device, and the partnership with NVIDIA to be an early seller of the Tegra K1 seems to be making waves.
The Tegra K1 Processor
The Tegra K1 SoC was first revealed at CES in January of 2014, and with it came a heavy burden of expectation from NVIDIA directly, as well as from investors and the media. The first SoC from the Tegra family to have a GPU built from the ground up by NVIDIA engineers, the Tegra K1 gets its name from the Kepler family of GPUs. It also happens to get the base of its architecture there as well.
The CPU cores of the Tegra K1 look very familiar: four ARM Cortex-A15 "r3" cores with 2MB of L2 cache, plus a fifth A15 core used for low-power situations. This 4+1 design is the same one introduced with the Tegra 4 processor last year and allows NVIDIA to implement its own unique style of "big.LITTLE" design. Some slight modifications to the cores improve performance and efficiency, but not by much – the main CPU is very similar to the Tegra 4's.
The focus of the Tegra K1 is the GPU, now powered by NVIDIA's Kepler architecture. The K1 features 192 CUDA cores in a design very similar to a single SMX on today's GeForce GTX 700-series graphics cards. That includes OpenGL ES 3.0 support but, much more importantly, OpenGL 4.4 and DirectX 11 integration. The ambition of bringing modern, quality PC gaming to mobile devices is closer than you ever thought possible with this product, and the demos I have seen running on reference designs are enough to leave your jaw on the floor.
By far the most impressive part of Tegra K1 is the implementation of a full Kepler SMX onto a chip that will be running well under 2 watts. While it has been the plan from NVIDIA to merge the primary GPU architectures between mobile and discrete, this choice did not come without some risk. When the company was building the first Tegra part it basically had to make a hedge on where the world of mobile technology would be in 2015. NVIDIA might have continued to evolve and change the initial GPU IP that was used in Tegra 1, adding feature support and increasing the required die area to improve overall GPU performance, but instead they opted to position a “merge point” with Kepler in 2014. The team at NVIDIA saw that they were within reach of the discontinuity point we are seeing today with Tegra K1, but in truth they had to suffer through the first iterations of Tegra GPU designs that they knew were inferior to the design coming with Kepler.
You can read much more on the technical detail of the Tegra K1 SoC by heading over to our launch article that goes into the updated CPU design, as well as giving you all the gore behind the Kepler integration.
By far the most interesting aspect of the Xiaomi Mi Pad 7.9 tablet is the decision to integrate the Tegra K1 processor. Performance and battery life comparisons with other 7 to 8-in tablets will likely not impact how it sells in China, but the results may mean the world to NVIDIA as it implores other vendors to integrate the SoC.
When Magma Freezes Over...
Intel confirms that they have approached AMD about access to their Mantle API. The discussion, despite being clearly labeled as "an experiment" by an Intel spokesperson, was initiated by Intel -- not AMD. According to AMD's Gaming Scientist, Richard Huddy, via PCWorld, AMD's response was, "Give us a month or two" and "we'll go into the 1.0 phase sometime this year" -- a year that only has about five months left in it. When the API reaches 1.0, anyone who wants to participate (including hardware vendors) will be granted access.
AMD inside Intel Inside???
I do wonder why Intel would care, though. Intel has the fastest per-thread processors, and their GPUs are not known as workhorses held back by API call bottlenecks either. That is not to say I cannot see any reason at all, however...
Introduction and Design
It was only last year that we were singing the praises of the GT60, one of the fastest notebooks we’d seen to date. Its larger cousin, the GT70, features a 17.3” screen (versus the GT60’s 15.6”), faster CPUs and GPUs, and even better options for storage. Now the latest iteration of this force to be reckoned with has arrived on our desks, and while its appearance hasn’t changed much, its performance is better than ever.
While we’ll naturally be spending a good deal of time discussing performance and stability in our article here, we won’t be dedicating much to the casing and general design, as—for the most part—it is very similar to that of the GT60. One area on which we will be focusing particularly heavily, however, is battery life, thanks to the presence of NVIDIA’s new Battery Boost technology. As the name suggests, this feature employs power conservation techniques to extend the notebook’s battery life while gaming unplugged. This is accomplished primarily via frame rate limiting, a capability that has actually been available since the introduction of Kepler, but which until now has been buried within the advanced options for such products. Battery Boost brings it to the forefront and makes it both accessible and enabled by default.
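The frame rate limiting that underpins Battery Boost is conceptually simple: render a frame, then sleep out the remainder of that frame's time budget so the GPU can idle instead of rendering extra, unseen frames. A minimal sketch in Python (the real implementation lives in NVIDIA's driver; this only illustrates the idea):

```python
import time

# Minimal sketch of frame rate limiting, the core idea behind Battery
# Boost: cap rendering at a target FPS so the GPU can sit idle (and
# save power) for the rest of each frame's time budget.
def run_capped(render_frame, target_fps: float = 30, frames: int = 3):
    budget = 1.0 / target_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < budget:
            # Finished early: sleep out the remainder of the budget
            # rather than immediately starting the next frame.
            time.sleep(budget - elapsed)

run_capped(lambda: None, target_fps=60, frames=3)
```

Lowering `target_fps` (Battery Boost defaults to a 30 FPS cap) lengthens the idle portion of each frame, which is where the power savings come from.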
Let’s take a look at what this bad boy is packing:
Not much commentary needed here; this table reads like a who’s who of computer specifications. Of particular note are the 32 GB of RAM, the 880M (of course), and the 384 GB SSD RAID array (!!). Elsewhere, it’s mostly business as usual for the ultra-high-end MSI GT notebooks, with a slightly faster CPU than the previous model we reviewed (the i7-4700MQ). One thing is guaranteed: it’s a fast machine.
Kaveri Goes Mobile
The processor market is in an interesting place today. At the high end Intel stands pretty much unchallenged, from the $1000 Ivy Bridge-E down to the $300 Haswell parts available to DIY users. The same could be said for the mobile market: if you want a high-performance part, the default choice continues to be Intel. But once you enter the world of the mainstream notebook, AMD has some interesting options that Intel can't match. The APU was slow to develop, but it has placed AMD in a unique position, separated from Intel's processors by a more or less inverted compute focus. While Intel dominates x86 performance, the GPU in AMD's latest APUs continues to lead in gaming and compute performance.
The biggest problem for AMD is that the software ecosystem still has not caught up with the performance a GPU can provide. With the exception of games, the GPU in a notebook or desktop remains underutilized. Certain software vendors are making strides - see the changes in video transcoding and image manipulation - but AMD still has a lot of ground to cover.
Today we are looking at the mobile version of Kaveri, AMD's latest entry into the world of APUs. This processor combines the latest AMD processor architecture with a GCN-based graphics design for a pretty advanced part. When the desktop version of this processor was released, we wrote quite a bit about the architecture and the technological advancements made in it, including its status as the first fully HSA-compliant processor. I won't be diving into the architecture details here since we covered them so completely back in January, just after CES.
The mobile version of Kaveri is basically identical in architecture with some changes for better power efficiency. The flagship part will ship with 12 Compute Cores (4 Steamroller x86 cores and 8 GCN cores) and will support all the same features of GCN graphics designs including the new Mantle API.
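AMD's "Compute Cores" figure is simply the sum of CPU cores and GPU Compute Units, and since each GCN Compute Unit contains 64 stream processors, the flagship part's shader count falls out directly. A quick sketch of the arithmetic (the 686 MHz GPU clock used for the GFLOPS estimate is an assumption for illustration, not a confirmed spec of the mobile flagship):

```python
# Breaking down AMD's "12 Compute Cores" marketing figure for the
# flagship mobile Kaveri part.
x86_cores = 4            # Steamroller CPU cores
gcn_cus = 8              # GCN Compute Units ("GPU compute cores")
shaders = gcn_cus * 64   # each GCN CU holds 64 stream processors

# Peak GPU GFLOPS = shaders * 2 FLOPs/cycle (FMA) * clock.
# The 686 MHz clock here is an illustrative assumption.
gpu_gflops = shaders * 2 * 686 / 1000

print(x86_cores + gcn_cus, "Compute Cores,", shaders, "stream processors")
```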
Early in the spring we heard rumors that the AMD FX brand was going to make a comeback! Immediately enthusiasts were thinking up ways AMD could compete against the desktop Core i7 parts from Intel; could it be with 12 cores? DDR4 integration?? As it turns out...not so much.
In many ways, the Google Nexus 7 has long been the standard of near perfection for an Android tablet. With a modest 7-inch screen, solid performance and low cost, the ASUS-built hardware has stood through one major revision as our top selection. Today though, a new contender in the field makes its way to the front of the pack in the form of the ASUS MeMO Pad 7 (ME176C). At $150, this new 7-inch tablet has almost all the hallmarks to really make an impact in the Android ecosystem. Finally.
The MeMO Pad 7 is not a new product family, though. It has existed with MediaTek processors for quite some time with essentially the same form factor. This new ME176C model makes some decisions that help it break into a new level of performance while maintaining the budget pricing required to really take on the likes of Google. By coupling the MeMO Pad brand with the Intel Bay Trail Atom processor, the two companies firmly believe they have a winner; but do they?
I have to admit that my time with the ASUS MeMO Pad 7 (ME176C) has been short; shorter than I would have liked to offer a truly definitive take on this mobile platform. I prefer to take the time to work the tablet into my daily work and home routines. Reading, browsing, email, etc. This allows me to filter through any software intricacies that might make or break a purchasing decision. Still, I think the ASUS design is going to live up to my expectations and is worth every penny of the $150 price tag.
The ASUS MeMO Pad 7 has a 1280x800 resolution IPS screen. This 7-inch device is powered by the new Intel Atom Z3745 quad-core SoC with 1GB of memory and 16GB of on-board storage. The front facing camera is of the 2MP variety while the rear facing camera is 5MP - but you will likely be as disappointed in the image quality of the photos as I was. Connectivity options include the microUSB port for charging and data transfer along with 802.11b/g/n 2.4 GHz WiFi (sorry, no 5 GHz option here). Bluetooth 4.0 allows for low power data sync with other devices, and our model shipped with Android 4.4.2 pre-installed.
The rear of the ASUS MeMO Pad is a pseudo rubber/plastic type material that is easy to grip while not leaving fingerprints behind - a solid combination. The center mounted camera lens takes decent pictures - but I can't put any more praise on it than that. It was easy to find image quality issues with photos even in full daylight. It's hard to know how disappointed to be considering the price, but the Nexus 7 has better optical hardware.
Upgrades from Anker
Last year we started to have a large number of mobile devices around the office, including smartphones, tablets and even convertibles like the ASUS T100, all of which charge over USB. While not a hassle when you are charging one or two units at a time, having 6+ on our desks on any given day started to become a problem for our far less numerous wall outlets. Our solution last year was Anker's E150 25 watt wall charger, which we covered in a short video overview.
It was great, but it had limitations: charging rates that differed depending on the port you connected to, a total output limited to 5 amps across all five ports, and fixed outputs per port. Today we are taking a look at a pair of new Anker devices that implement smart ports called PowerIQ, which allow the battery and wall charger to send each charging device as much power as it requests, regardless of which physical port it is attached to.
We'll start with the updated Anker 40 watt 5-port wall charger and then move on to discuss the 3-port mobile battery charger, both of which share the PowerIQ feature.
Anker 40 watt 5-Port Wall Charger
The new Anker 5-port wall charger is actually smaller than the previous generation but offers superior specifications on every front. This unit can push out 40 watts combined through its five USB ports: 5 volts at up to 8 amps. We are told all 8 amps could in fact flow through a single USB charging port if a device requested that much, though nothing in our office seems to draw more than 2.3 amps.
Any USB port can be used for any device on this new model; it doesn't matter where it plugs in. This greatly simplifies things from a user experience point of view, as you no longer have to hold the unit up to your face to read the tiny text that existed on the E150. With 8 amps spread across all five ports you should have more than enough power to charge all your devices at full speed. If you happen to have five iPads charging at the same time, that would exceed 8 amps and all of the devices' charge rates would be a bit lower.
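The shared-budget behavior described above can be modeled with a few lines of arithmetic: ports get what their devices request until the 8 amp total is exceeded, at which point everything is scaled back. Anker has not published PowerIQ's actual allocation logic, so this sketch only models the math, assuming a simple proportional scale-down:

```python
# Sketch of a shared current budget across charger ports: each port
# gets the current its device requests until the total budget is
# exceeded, then every port is scaled back proportionally. The real
# PowerIQ negotiation logic is Anker's and is not public.
def allocate(requests_amps, budget_amps=8.0):
    total = sum(requests_amps)
    if total <= budget_amps:
        return list(requests_amps)      # budget covers every request
    scale = budget_amps / total         # otherwise, throttle all ports
    return [r * scale for r in requests_amps]

print(allocate([2.4, 2.4, 1.0]))        # under budget: unchanged
print(allocate([2.4] * 5))              # five iPads: each scaled to ~1.6 A
```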
AMD Makes some Lemonade...
I guess we could say that AMD has been rather busy lately. It seems that a significant amount of the content on PC Perspective this month revolved around the AMD AM1 platform. Before that we had the Kaveri products and the R7 265. AMD also reported some fairly solid growth over the past year with their graphics and APU lines. Things are not as grim and dire as they once were for the company. This is good news for consumers as they will continue to be offered competing solutions that will vie for that hard earned dollar.
AMD is continuing their releases for 2014 with the announcement of their latest low-power and mainstream mobile APUs. These are codenamed “Beema” and “Mullins”, but they are based on the year-old Kabini chip. This may cause a few people to roll their eyes, as AMD has had some fairly unimpressive refreshes in the past. We saw rather meager improvements in clock speed and power consumption with Brazos 2.0 a couple of years back, and it looked like this would be the case again for Beema and Mullins.
I was again expecting said meager improvements in power consumption and clockspeeds that we had received all those years ago with Brazos 2.0. Turns out I was wrong. This is a fairly major refresh which does a few things that I did not think were entirely possible, and I’m a rather optimistic person. So why is this release surprising? Let us take a good look under the hood.
Maxwell and Kepler and...Fermi?
Covering the landscape of mobile GPUs can be a harrowing experience. Brands, specifications, performance, features and architectures can all vary from product to product, even inside the same family. Rebranding is rampant from both AMD and NVIDIA and, in general, we are met with one of the most confusing segments of the PC hardware market.
Today, with the release of the GeForce GTX 800M series from NVIDIA, we are getting all of the above in one form or another. We will also see performance improvements and the introduction of the new Maxwell architecture (in a few parts at least). Along with the GeForce GTX 800M parts, you will also find the GeForce 840M, 830M and 820M offerings at lower performance, wattage and price levels.
With some new hardware comes a collection of new software for mobile users, including the innovative Battery Boost that can increase unplugged gaming time by using frame rate limiting and other "magic" bits that NVIDIA isn't talking about yet. ShadowPlay and GameStream also find their way to mobile GeForce users as well.
Let's take a quick look at the new hardware specifications.
|               | GTX 880M  | GTX 780M  | GTX 870M  | GTX 770M  |
|---------------|-----------|-----------|-----------|-----------|
| GPU Code Name | Kepler    | Kepler    | Kepler    | Kepler    |
| Rated Clock   | 954 MHz   | 823 MHz   | 941 MHz   | 811 MHz   |
| Memory        | Up to 4GB | Up to 4GB | Up to 3GB | Up to 3GB |
| Memory Clock  | 5000 MHz  | 5000 MHz  | 5000 MHz  | 4000 MHz  |
Both the GTX 880M and the GTX 870M are based on Kepler, keeping the same basic feature set and hardware specifications of their brethren in the GTX 700M line. However, while the GTX 880M has the same CUDA core count as the 780M, the same cannot be said of the GTX 870M. Moving from the GTX 770M to the 870M sees a significant 40% increase in core count as well as a jump in clock speed from 811 MHz (plus Boost) to 941 MHz.
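Running the numbers from the figures above: the 770M-to-870M jump combines a 40% larger core count with the 811 MHz to 941 MHz clock increase, for roughly a 1.6x gain in theoretical shader throughput (real-world gains will of course be smaller):

```python
# Back-of-the-envelope scaling for the GTX 770M -> GTX 870M jump,
# using only the figures quoted above: a 40% core count increase
# and the 811 MHz -> 941 MHz base clock bump (Boost ignored).
core_scaling = 1.40
clock_scaling = 941 / 811
print(f"theoretical shader throughput: {core_scaling * clock_scaling:.2f}x")
```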