Introduction: Improving Portable Sound
The Calyx PaT is a very small USB DAC and headphone amp that can be used with PCs and mobile devices, offering the possibility of better sound from just about any digital source. So how does it sound? Let’s find out!
The PaT is a very interesting little device, to be sure. It rather resembles a large domino and weighs less than 1 ounce thanks to an ultra-light aluminum construction. It requires no battery or power source other than its micro USB connection, yet it provides sufficient power (0.8 V output) for in-ear monitors and efficient headphones through its 3.5mm headphone jack. Inside is a proprietary mix of DAC and amplifier circuitry, and like other products produced by Calyx, a Korean company with little presence in the United States, there is the promise of a dedication to great sound. Did Calyx pull it off with the diminutive PaT?
Improving Portable Sound
Outboard DACs and headphone amplifiers for computers and mobile devices are nothing new, with recent products like AudioQuest’s Dragonfly being a prime example in the portable USB DAC market (though it offers no mobile support). When I first heard about the PaT during CES it was still in the prototype stage, but I was interested because of the Calyx name if nothing else, as I already owned the Calyx M DAP and had been quite honestly blown away by the sound.
So what need might I have for the interestingly-named PaT (pronounced "paat", meaning "bean" in reference to the small size), which is itself a DAC that requires another device to play music files? It wasn’t until I had the opportunity to speak with Calyx president Seungmok Yi during CES (via video chat as I couldn’t attend the show) that I started to realize that this could be a compelling product, not just for the $99 price tag - a bargain for an audiophile product - but because of how versatile the PaT can be. You don't have to identify as an "audiophile" to appreciate the clearer and more detailed sound of a good DAC, especially when so many of us simply haven't heard one (particularly on mobile devices).
ARM Releases Cortex-A72 for Licensing
On February 3rd, ARM announced a slew of new designs, including the Cortex A72. Few details were shared with us, but what we learned was that it could potentially redefine power and performance in the ARM ecosystem. Ryan was invited to London to participate in a deep dive into what ARM has done to improve its position against market behemoth Intel in the very competitive mobile space. Intel has a leg up on process technology with their 14nm Tri-Gate process, and they are continuing to work hard at making their x86-based processors more power efficient while still maintaining good performance. There are certain drawbacks, however, to using an ISA that is focused on high-performance computing rather than one designed from scratch to provide good performance with excellent energy efficiency.
ARM has been on a pretty good roll with their Cortex A9, A7, A15, A17, A53, and A57 parts over the past several years. These designs have been utilized in a multitude of products and scenarios, with configurations that have scaled up to 16 cores. While each iteration has improved upon the previous, ARM is facing the specter of Intel’s latest generation of highly efficient x86 SoCs based on the 2nd gen 14nm Tri-Gate process. Several things have fallen into place for ARM to help them stay competitive, but we also cannot ignore the experience and design hours that have led to this product.
(Editor's Note: During my time with ARM last week it became very apparent that the company is not standing still, not satisfied with its current status. With competition from Intel, Qualcomm and others ramping up over the next 12 months in both the mobile and server markets, ARM will more than ever be dependent on the evolution of its core and GPU designs to maintain advantages in performance and efficiency. As Josh will detail here, the Cortex-A72 appears to be an incredibly impressive design, and all indications, along with conversations I have had with others outside of ARM, suggest that it will be an incredibly successful product.)
Cortex A72: Highest Performance ARM Cortex
ARM has been ubiquitous in mobile applications since it first started selling licenses for its designs in the 90s. Its cores were found everywhere, it seemed, but most people wouldn’t recognize the ARM name because these chips were fabricated and sold by licensees under their own names. Companies like TI, Qualcomm, Apple, DEC and others all licensed and adopted ARM technology in one form or another.
ARM’s importance grew dramatically with the introduction of increasingly complex cellphones and smartphones. It also gained attention through multimedia devices such as the Microsoft Zune. What was once a fairly niche company with low-performance, low-power offerings became the 800-pound gorilla of the mobile market. Billions of chips based on ARM technology are sold yearly. To stay in that position ARM has worked aggressively to continually provide excellent power characteristics in its parts, but now it is really focusing on overall performance and capabilities to address not only the smartphone market but also the higher-performance computing and server spaces where it wants a significant presence.
When I first was handed the Intel Compute Stick product at CES back in January, my mind began to race with a lot of questions. The first set were centered around the capabilities of the device itself: where could it be used, how much performance could Intel pack into it and just how many users would be interested in a product like this? Another set of questions was much more philosophical in nature: why was Intel going in this direction, does this mean an end for the emphasis on high performance componentry from Intel and who comes up with these darned part numbers?
I have since settled my mind on the issues surrounding Intel’s purpose with the Compute Stick and began to dive into the product itself. On the surface the Intel Compute Stick is a product entering late into a potentially crowded market. We already have devices like the Roku, Google Chromecast, the Apple TV, and even the Amazon Fire TV Stick. All of those devices share some of the targets and goals of the Compute Stick, but the one area where Intel’s product really stands out is flexibility. The Roku has the most pre-built applications and “channels” for a streaming media box. The Chromecast is dirt cheap at just $30 or so. Even Amazon’s Fire TV Stick is clearly the best choice for streaming Amazon’s own multimedia services. But the Intel Compute Stick can do all of those things – in addition to operating as a standalone PC with Windows or Linux. Anything you can do I can do better…
But it’s not a product without a few flaws, most of which revolve around the state of current operating system designs for TVs and larger displays. Performance obviously isn’t peeling the paint off any walls, as you would expect. But I still think that for $150 with a full copy of Windows 8.1 with Bing, the Intel Compute Stick is going to find more fans than you might have first expected.
Battle of the Sixes, they call it
GameBench is a low-level application released in 2014 that attempts to bring the technical analysis and benchmarking capability of the PC to the mobile device. You might remember that I showed you some early results and discussed our use of the GameBench testing capability in my Dell Venue 8 7000 review a few months back; my understanding and practice of using the software was just beginning at that time and continues to grow as I spend time with the software.
The idea is simple yet powerful: GameBench gives Android users, and soon iOS users, the ability to monitor frame rates in nearly any game or 3D application you can run on your phone or tablet, accurately measuring real-world performance. This is similar to what we have done for years on the PC with FRAPS, and it allows us to gather average frames-per-second data over time. This capability was previously unavailable to consumers, or the press for that matter, and it could be a very powerful tool for device-to-device comparisons going forward. The ability to use actual games and applications to gather benchmark data that reflects real consumer experiences, rather than the synthetic graphics tests we have been forced to use in the past, will fundamentally change how we test and compare mobile hardware.
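The measurement itself is simple to reason about. Here is a rough sketch of a FRAPS-style average (the function name and the millisecond-timestamp input are my own assumptions for illustration, not GameBench's actual API): frames rendered divided by elapsed wall-clock time.

```python
def average_fps(frame_times_ms):
    """FRAPS-style average: frames rendered divided by elapsed seconds.

    frame_times_ms is a list of per-frame presentation timestamps
    in milliseconds, in capture order.
    """
    if len(frame_times_ms) < 2:
        raise ValueError("need at least two frame timestamps")
    elapsed_s = (frame_times_ms[-1] - frame_times_ms[0]) / 1000.0
    return (len(frame_times_ms) - 1) / elapsed_s

# 61 timestamps spaced 1/60th of a second apart, i.e. a steady 60 FPS capture
stamps = [i * (1000.0 / 60.0) for i in range(61)]
print(round(average_fps(stamps), 1))  # → 60.0
```

Note that an average like this hides momentary stutter, which is one reason frame-time variability matters as much as the headline number.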
Image source: GameBench.net
Today, GameBench itself released a small report meant to showcase some of the kinds of data the software can gather while also revealing early support for Apple’s iPhone and iPad devices. Primary competitors for the comparison include the Apple iPhone 6, the Samsung Galaxy S6, HTC One M9 and Motorola Nexus 6. I was able to get an early look at the report and offer some feedback, while sharing with our readers my views on the results.
GameBench tested those four devices in a total of 10 games:
- Asphalt 8: Airborne
- Real Racing 3
- Dead Trigger 2
- Kill Shot
- Modern Combat 5: Blackout
- Boom Beach
- XCOM: Enemy Unknown
- GTA: San Andreas
- Marvel: Contest of Champions
- Monument Valley
These games all vary in price and in play style, but they are all in the top 50 games lists for each platform and are known for their graphically intense visuals.
Introduction and First Impressions
The ASUS X205 offers the full Windows 8.1 notebook experience for the cost of a Chromebook, and the design offers a surprising amount of polish for the price. Is this $199 Atom-powered notebook a viable solution as a daily driver? We're about to find out.
What do you use a laptop for? A thoughtful answer to this question can be the most important part of the process when selecting your next notebook PC, and if your needs are modest there are a growing number of very low-cost options on the market. For example, I personally do not play games on a laptop, typically alternating between web, email, and Microsoft Office. Thus for myself the most important aspects of a notebook PC become screen quality, keyboard, trackpad, and battery life. High performance is not of utmost importance, and I assure myself of at least speedy load times by always choosing (or installing) a solid-state drive. For those reasons when I first read the description and specifications of the ASUS X205 notebook, I took notice.
The X205 is a small notebook with an 11.6” display and 1366x768 resolution, essentially matching the form-factor of Apple's 11.6" MacBook Air. It is powered by a quad-core Intel Atom processor with 2GB of RAM, and onboard storage is solid-state - though limited to 32GB and of the slower eMMC variety (which is in keeping with many Chromebooks). There is adequate connectivity as well, with the expected wireless card and two USB 2.0 ports. One aspect of this design that intrigued me was the trackpad, which ASUS claims is using "smartphone technology", indicating a touchscreen digitizer implementation. Smoothness and accuracy are the biggest problems I find with most inexpensive notebook trackpads, and if this turns out to be a strong performer it would be a major boon to the X205's overall usability. I opted for the Microsoft Signature Edition of the X205TA, which carries the same $199 retail price but does not come preloaded with any trialware or other junk software.
At the outset this feels like a compelling product simply because it retails for the same price as an average Chromebook, but offers the flexibility of a full Windows 8.1 installation. Granted this is the “Windows 8.1 with Bing” version found on low-cost, low-power devices like this, but it offers the functionality of the standard version. While Chrome OS and Google's productivity apps are great for many people, the ability to install and run Windows applications made this highly preferable to a Chromebook for me. Of course beyond the operating system the overall experience of using the laptop will ultimately decide the viability of this inexpensive product, so without further preamble let's dive right into the X205TA notebook!
Motorola has released an updated version of their low-cost Moto E smartphone for 2015, adding faster hardware and LTE support to an unlocked device with an unsubsidized retail of just $149. In this review we'll examine this new phone to find out if there are any significant limitations given its bargain price.
There has been a trend toward affordability in smartphone pricing that accelerated in 2014 and has continued its pace to start this year. Of course expensive flagships still exist at their $500+ unsubsidized retail prices, but is the advantage of such a device worth the price premium? In most cases a customer in a retail space would be naturally drawn to the more expensive phones on display, with their large, sharp screens and thin designs that just look better by comparison. The longstanding $500 - $700 unsubsidized cost of popular smartphones has made 2-year contract pricing a part of life for many, with contract offers and programs that let users lease or finance phones positioned as attractive alternatives to the high initial price. And while these high-end options can certainly reward the additional cost, there are rapidly diminishing returns on investment once we venture past the $200 mark with a mobile device. So it’s this bottom $200 of the full-price phone market which is so interesting, not just to myself, but to the future of smartphones as they become the commodity devices that the so-called “feature phones” once were.
One of the companies at the forefront of a lower-cost approach to smartphones is Motorola, now independent from Google after Motorola Mobility was sold to Lenovo in October of 2014. A year before the sale Motorola had released a low-cost smartphone called the Moto G, an interesting product which ran stock Android for a fraction of the cost of a Google Play edition or even Nexus device, though it was underpowered with decidedly low-end specs. After a redesign in 2014, however, the 2nd edition Moto G became a much more compelling option, offering a unique combination of low price, respectable hardware, a stock Android experience, and Motorola’s now trademark design language, to a market drowning in bloated MSRPs. There was just one problem: while the 2014 Moto G had solid performance and had (quite importantly) moved to a larger 5-inch screen with a higher 720x1280 resolution IPS panel, there was still no LTE support. Selling without a contract for just $179 unlocked made the lack of LTE at least understandable, but as carrier technology has matured the prevalence of LTE has made it an essential part of future devices - especially in 2015. Admittedly 3G data speeds are fast enough for many people, but the structure of the modern mobile data plan often leaves that extra speed on the table if one’s device doesn’t support LTE.
Way back in January of this year, while attending CES 2015 in Las Vegas, we wandered into the MSI suite without any idea what new and exciting products we might see. Besides the GT80 notebook with its mechanical keyboard, the MSI GS30 Shadow was easily the most interesting and exciting technology there. Although MSI is not the first company to try this, the Shadow is the most recent attempt to combine the benefits of a thin and light notebook with a discrete, high-performance GPU when the former is connected to the latter's docking station.
The idea has always been simple but the implementation has always been complex. Take a thin, light, moderately powered notebook that is usable and high quality in its own right and combine it with the ability to connect a discrete GPU while at home for gaming purposes. In theory, this is the best of both worlds: a notebook PC for mobile productivity and gaming capability courtesy of an external GPU. But as the years have gone on, more companies try and more companies fail; the integration process is just never as perfect a mix as we hope.
Today we see if MSI and the GS30 Shadow can fare any better. Does the combination of a very high performance thin and light notebook and the GamingDock truly create a mobile and gaming system that is worth your investment?
The perfect laptop: it is every manufacturer’s goal. Obviously no one has gotten there yet (or we would have all stopped writing reviews of them). At CES this past January, we got our first glimpse of a new flagship Ultrabook from Dell: the XPS 13. It got immediate attention for some of its physical characteristics, like an ultra-thin bezel and a 13-in screen in the body of a typical 11-in laptop, all in a sleek thin and light design. It’s not a gaming machine, despite what you might remember from the XPS line, but the Intel Core-series Broadwell-U processor keeps performance speedy in standard computing tasks.
As a frequent traveler who tends to err on the side of thin and light designs, as opposed to high-performance notebooks with discrete graphics, I find the Dell XPS 13 immediately compelling on a personal level as well. I have long been known as a fan of what Lenovo builds for this space, trusting my work machine requirements to the ThinkPad line for years and years. Dell’s new XPS 13 is a strong contender to take away that top spot for me and perhaps force me down the path of an upgrade of my own. So, you might consider this review my personal thesis on the viability of said change.
The Dell XPS 13 Specifications
First, make sure as you hunt around the web for information on the XPS 13 that you are focusing on the new 2015 model. Much like we see from Apple, Dell reuses model names and that can cause confusion unless you know what specifications to look for or exactly what sub-model you need. Trust me, the new XPS 13 is much better than anything that existed before.
Introduction and Specifications
Had you asked me just a few years ago if 6-inch phones would be not only a viable option but a dominant force in the mobile computing market, I would have likely rolled my eyes. At that time phones were small, tablets were big, and phablets were laughed at. Today, no one is laughing at the Galaxy Note 4, the latest iteration in Samsung’s created space of larger-than-you-probably-thought-you-wanted smartphones. Nearly all consumers are amazed by the size of the screen and the real estate this class of phone provides, but some are instantly put off by the way the phone feels in the hand – it can come off as foreign, cumbersome, and unusable.
In my time with the new Galaxy Note 4 – my first extended-use experience with a phone of this magnitude – I have come to see the many positive traits that a larger phone can offer. There are some trade-offs of course, including the pocket/purse viability debate. One thing beyond question is that a large phone means a big screen, one that can display a large amount of data, whether that be on a website or in a note-taking application. The extra screen real estate can instantly improve your productivity. To that end Samsung also provides a multi-tasking framework that lets you run multiple programs in a side-by-side view, similar to what the original version of Windows 8 did. It might seem unnecessary for an Android device, but as soon as you find a situation where you need it, going back to a device without it can feel archaic.
A larger phone also means that there is more room for faster hardware, a larger camera sensor, and a bigger battery. Samsung even includes an active stylus called the S-Pen in the body of the device – something that few other modern tablets/phablets/phones feature.
Introduction and Design
Although the target market and design emphasis may be different, there is one thing consumer and business-grade laptops have in common: a drift away from processing power and toward portability and efficiency. At the risk of repeating our introduction for the massive MSI GT72 gaming notebook we reviewed last month, it seems that battery life, temperature, and power consumption get all the attention these days. And arguably, it makes sense for most people: it’s true that CPU performance gains have in years past greatly outstripped the improvements in battery life, and that likewise performance gains could be realized far more easily by upgrading storage device speed (such as by replacing conventional hard drives with solid-state drives) than by continuing to focus on raw CPU power and clock rates. As a result, we’ve seen many mobile CPU speeds plateauing or even dropping in exchange for a reduction in power consumption, while simultaneously cases have slimmed and battery life has jumped appreciably across the board.
But what if you’re one of the minority who actually appreciates and needs raw computing power? Fortunately, Lenovo’s ThinkPad W series still has you covered. This $1,500 workstation is the business equivalent of the consumer-grade gaming notebook. It’s one of the few designs where portability takes a backseat to raw power and ridiculous spec. Users shopping for a ThinkPad workstation aren’t looking to go unplugged all day long on an airplane tray table. They’re looking for power, reliability, and premium design, with function over form as a rule. And that’s precisely what they’ll get.
Beyond the fairly typical (and very powerful) Intel Core i7-4800MQ CPU—often found in gaming PCs and workstations—and just 8 GB of DDR3-1600 RAM (single-channel) are a 256 GB SSD and a unique feature to go along with the WQHD+ display panel: a built-in X-Rite Pantone color sensor, which can be used to calibrate the panel simply by closing the lid when prompted. How well this functions is another topic entirely, but at the very least, it’s a novel idea.
Finally, a SHIELD Console
NVIDIA is filling out the family of the SHIELD brand today with the announcement of SHIELD, a set-top box powered by the Tegra X1 processor. SHIELD will run Android TV and act as a game playing, multimedia watching, GRID streaming device. Selling for $199 and available in May of this year, there is a lot to discuss.
Odd naming scheme aside, the SHIELD looks to be an impressive little device, sitting on your home theater or desk and bringing a ton of connectivity and performance to your TV. Running Android TV means the SHIELD will have access to the entire library of Google Play media including music, movies and apps. SHIELD supports 4K video playback at 60 Hz thanks to an HDMI 2.0 connection and fully supports H.265/HEVC decode thanks to the Tegra X1 processor.
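That HDMI 2.0 requirement is easy to sanity-check with a back-of-envelope calculation. The ~14.4 Gbit/s effective HDMI 2.0 data rate used below is my own figure (18 Gbit/s line rate less 8b/10b encoding overhead), not something NVIDIA quoted:

```python
# Raw pixel bandwidth for 4K (3840x2160) at 60 Hz with 24-bit color.
width, height, refresh_hz, bits_per_pixel = 3840, 2160, 60, 24

raw_bps = width * height * refresh_hz * bits_per_pixel
print(f"{raw_bps / 1e9:.2f} Gbit/s")  # → 11.94 Gbit/s

# HDMI 1.4's ~8.16 Gbit/s effective rate can't carry this,
# while HDMI 2.0's ~14.4 Gbit/s can, hence the HDMI 2.0 port.
```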
Here is a full breakdown of the device's specifications.
| NVIDIA SHIELD Specifications | |
|---|---|
| Processor | NVIDIA® Tegra® X1 processor with 256-core Maxwell™ GPU and 3GB RAM |
| Video Features | 4K Ultra-HD Ready with 4K playback and capture up to 60 fps (VP9, H.265, H.264) |
| Audio | 7.1 and 5.1 surround sound pass-through over HDMI; high-resolution audio playback up to 24-bit/192kHz over HDMI and USB; high-resolution audio upsampling to 24-bit/192kHz over USB |
| Wireless | 802.11ac 2x2 MIMO 2.4 GHz and 5 GHz Wi-Fi |
| Interfaces | Two USB 3.0 (Type A); MicroSD slot (supports 128GB cards); IR receiver (compatible with Logitech Harmony) |
| Gaming Features | NVIDIA GRID™ streaming service |
| SW Updates | SHIELD software upgrades directly from NVIDIA |
| Power | 40W power adapter |
| Weight and Size | Weight: 23oz / 654g; Height: 5.1in / 130mm; Width: 8.3in / 210mm; Depth: 1.0in / 25mm |
| OS | Android TV™, Google Cast™ Ready |
| In the box | NVIDIA SHIELD; NVIDIA SHIELD controller; HDMI cable (High Speed); USB cable (Micro-USB to USB); power adapter (includes plugs for North America, Europe, UK) |
| Requirements | TV with HDMI input, Internet access |
| Options | SHIELD controller, SHIELD remote, SHIELD stand |
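The high-resolution audio row is less demanding than it might sound. A quick calculation (assuming uncompressed stereo PCM, my assumption rather than a stated spec) shows the bitrate involved:

```python
# Uncompressed PCM bitrate for 24-bit/192kHz stereo playback,
# the maximum high-resolution audio format the SHIELD supports.
sample_rate_hz = 192_000
bit_depth_bits = 24
channels = 2

bitrate_bps = sample_rate_hz * bit_depth_bits * channels
print(f"{bitrate_bps / 1e6:.3f} Mbit/s")  # → 9.216 Mbit/s
```

At under 10 Mbit/s, this is comfortably within what HDMI or USB 2.0, let alone USB 3.0, can carry.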
Obviously the most important feature is the Tegra X1 SoC, built on an 8-core 64-bit ARM processor and a 256 CUDA core Maxwell architecture GPU. This gives the SHIELD set-top more performance than basically any other mobile part on the market, and demos showing Doom 3 and Crysis 3 running natively on the hardware drive the point home. With integrated HEVC decode support, the console is the first Android TV device to offer support for 4K video content at 60 FPS.
Even though internal storage comes in at only 16GB, the inclusion of a MicroSD card slot enables expansion by as much as 128GB more for content and local games.
The first choice for networking will be the Gigabit Ethernet port, but the 2x2 dual-band 802.11ac wireless controller means that even those of us that don't have hardwired Internet going to our TV will be able to utilize all the performance and features of SHIELD.
Who Should Care? Thankfully, Many People
The Khronos Group has made three announcements today: Vulkan (their competitor to DirectX 12), OpenCL 2.1, and SPIR-V. Because there is actually significant overlap, we will discuss them in a single post rather than splitting them up. Each has a role in the overall goal to access and utilize graphics and compute devices.
Before we get into what everything is and does, let's give you a little tease to keep you reading. First, Khronos designs their technologies to be self-reliant. As such, while there will be some minimum hardware requirements, the OS pretty much just needs to have a driver model. Vulkan will not be limited to Windows 10 and similar operating systems. If a graphics vendor wants to go through the trouble, which is a gigantic if, Vulkan can be shimmed into Windows 8.x, Windows 7, possibly Windows Vista despite its quirks, and maybe even Windows XP. The words “and beyond” came up after Windows XP, but don't hold your breath for Windows ME or anything. Again, the further back in Windows versions you get, the larger the “if” becomes but at least the API will not have any “artificial limitations”.
Outside of Windows, the Khronos Group is the dominant API curator. Expect Vulkan on Linux, Mac, mobile operating systems, embedded operating systems, and probably a few toasters somewhere.
On that topic: there will not be a “Vulkan ES”. Vulkan is Vulkan, and it will run on desktop, mobile, VR, consoles that are open enough, and even cars and robotics. From a hardware side, the API requires a minimum of OpenGL ES 3.1 support. This is fairly high-end for mobile GPUs, but it is the first mobile spec to require compute shaders, which are an essential component of Vulkan. The presenter did not state a minimum hardware requirement for desktop GPUs, but he treated it like a non-issue. Graphics vendors will need to be the ones making the announcements in the end, though.
SoFIA, Cherry Trail Make Debuts
Mobile World Congress is traditionally dominated by Samsung, Qualcomm, HTC, and others, yet Intel continues to make inroads into the mobile market. Though the company has admittedly lost a lot of money in this growing process, Intel pushes forward with today's announcement of a trio of new processor lines that keep the Atom brand. The Atom x3, the Atom x5, and the Atom x7 will be the company's answer in 2015 for a wide range of products, starting at the sub-$75 phone market and stretching up to ~$400 tablets and all-in-ones.
There are some significant differences in these Atom processors, more than the naming scheme might indicate.
Intel Atom x3 SoFIA Processor
For years now we have questioned Intel's capability to develop a processor that could fit inside the thermal envelope that is required for a smartphone while also offering performance comparable to Qualcomm, MediaTek, and others. It seemed that the x86 architecture was a weight around Intel's ankles rather than a float lifting it up. Intel's answer was the development of SoFIA, (S)mart (o)r (F)eature phone with (I)ntel (A)rchitecture. The project started about 2 years ago leading to product announcements finally reaching us today. SoFIA parts are "designed for budget smartphones; SoFIA is set to give Qualcomm and MediaTek a run for their money in this rapidly growing part of the market."
The SoFIA processors are based on the same Silvermont architecture as the current generation of Atom processors, but they are further tuned for power efficiency. Originally planned as a dual-core-only option, Intel has actually built both dual-core and quad-core variants that will pair with varying modem options to create combinations that best fit target price points and markets. Intel has partnered with RockChip for these designs, even though the architecture is completely IA/x86 based. Production will be done on a 28nm process technology at an unnamed vendor, though you can expect that to mean TSMC. This partnership gives RockChip access to the designs, helps accelerate development, and allows release into the key markets that Intel is targeting.
Flagship. Premium. Best in class. These are the terms that Dell and Intel muttered to me during a conference call to discuss the new Dell Venue 8 7000 tablet. It’s a bullish claim and one that would likely have been received with a sideways eye roll or a shrug had I not been able to get a short amount of hands on time with the device at CES in January. The idea that Dell would develop an Android tablet that bests what more established brands like Nexus and Samsung have created, AND that that same tablet would be powered by an Intel processor rather than a Qualcomm, NVIDIA or Samsung chip would have seemed laughable last year. But after a solid three weeks with the Venue 8 7000 I am prepared to make the statement: this is my favorite tablet. Not my favorite Intel tablet, not my favorite Android tablet: just plain favorite.
The Venue 8 7000 combines style, design, technology and visuals that are simply unmatched by anything else in the Android world and rivals anything that Apple has created to date. There are a couple of warts centered around the camera and gaming performance that won’t drop your jaw, but for the majority of use cases the user experience is as exceptional as the looks.
Maybe best of all, this tablet starts at just $399 and is available today.
Dell Venue 8 7000 Specifications
Let’s begin the review by looking at the raw specifications of the Dell Venue 8 7000. Even though hardware specifications don’t tell a complete story of any device, especially a tablet that is based so much on experience, it is important to get a good baseline expectation.
| Dell Venue 8 7000 (Model 7840) | |
|---|---|
| Processor | Intel Atom Z3580 Quad-Core 2.33 GHz |
| Screen | 2560x1600 OLED 8.4-in (359 ppi) |
| Storage | MicroSD slot (up to 512GB) |
| Camera | 8MP Rear + Dual 720p Depth |
| Wireless | Intel 7260 802.11ac 1x1 Dual Band |
| Connection | USB 2.0 (power and data) |
| Dimensions | 215.8mm x 124.4mm x 6mm (8.5" x 4.88" x 0.24") |
At the center of the Venue 8 7000 is the Intel Atom Z3580 quad-core processor with a peak clock rate of 2.33 GHz and a base clock rate of 500 MHz. The Z3580 is a 22nm processor based on the Moorefield platform and Silvermont architecture. I first got information about the Silvermont architecture back in May of 2013, so it seems a bit dated in some regards, but the performance and power efficiency are still there to compete with rival options from ARM. The Venue 8 7000 includes an LPDDR3-1600 controller and 2GB of memory; a decent amount, but we are seeing quite a few smartphones with more system memory, like the OnePlus One.
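As a quick sanity check, the quoted 359 ppi figure for the panel follows directly from the resolution and the 8.4-inch diagonal:

```python
import math

# Pixel density = diagonal resolution in pixels / diagonal size in inches.
width_px, height_px, diagonal_in = 2560, 1600, 8.4

ppi = math.hypot(width_px, height_px) / diagonal_in
print(round(ppi))  # → 359
```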
New Features and Specifications
It is increasingly obvious that in the high-end smartphone and tablet market, much like we saw occur over the last several years in the PC space, consumers are becoming more concerned with features and experiences than with raw specifications. There is still plenty to drool over when looking at and talking about 4K screens in the palm of your hand, octa-core processors and mobile SoC GPUs measuring performance in hundreds of GFLOPS, but at the end of the day the vast majority of consumers simply want something that “wows” them.
As a result, device manufacturers and SoC vendors are shifting priorities for performance and features, and how those are presented to both the public and the media. Take this week's Qualcomm event in San Diego, where a team of VPs, PR personnel and engineers walked me through the new Snapdragon 810 processor. Rather than showing slide after slide of comparative performance numbers against the competition, I was shown room after room of demos: Wi-Fi, LTE, 4K capture and playback, gaming capability, thermals, antenna modifications, etc. The goal is to showcase the experience of the entire platform, something Qualcomm has been providing for longer than just about anyone in this business, while educating consumers on the need for balance too.
As a 15-year veteran of the hardware space my first reaction here couldn't have been scripted any more precisely: a company that doesn't show performance numbers has something to hide. But I was given time with a reference platform featuring the Snapdragon 810 processor in a tablet form-factor, and the results show impressive increases over the 801 and 805 processors from the previous family. Rumors of the chip's heat issues seem overblown, but that part will be hard to prove for sure until we get retail hardware in our hands to confirm.
Today’s story will outline the primary feature changes of the Snapdragon 810 SoC, though there was so much detail presented at the event with such a short window of time for writing that I definitely won’t be able to get to it all. I will follow up the gory specification details with performance results compared to a wide array of other tablets and smartphones to provide some context to where 810 stands in the market.
It has been an abnormal week for us here at PC Perspective. Our typical review schedule has pretty much flown out the window, and the past seven days have been filled with learning, researching, retesting, and publishing. That might sound like the norm, but in these cases the process was initiated by tips from our readers. Last Saturday (24 Jan), a few things were brewing:
- Ryan was informed by NVIDIA that the memory layout of the GTX 970 was different than expected.
- The huge (now 168 page) overclock.net forum thread about the Samsung 840 EVO slowdown was once again gaining traction.
- Someone got G-Sync working on a laptop integrated display.
We had to do a bit of triage here of course, as we can only research and write so quickly. Ryan worked the GTX 970 piece as it was the hottest item. I began a few days of research and testing on the 840 EVO slowdown issue reappearing on some drives, and we kept tabs on that third thing, which at the time seemed really farfetched. With the first two items taken care of, Ryan shifted his efforts to GTX 970 SLI testing while I shifted my focus to finding out if there was any credence to this G-Sync laptop thing.
A few weeks ago, an ASUS Nordic Support rep inadvertently leaked an interim build of the NVIDIA driver. This was a mobile driver build (version 346.87) focused on their G751 line of laptops. One recipient of this driver link posted it to the ROG forum back on the 20th. A fellow by the name of Gamenab, who owns the same laptop cited in that thread, presumably stumbled across this driver, tried it out, and was more than likely greeted by this popup after the installation completed:
Now I know what you’re thinking, and it’s probably the same thing anyone would think. How on earth is this possible? To cut a long story short, while the link to the 346.87 driver was removed shortly after being posted to that forum, we managed to get our hands on a copy of it, installed it on the ASUS G751 that we had in for review, and wouldn’t you know it we were greeted by the same popup!
Ok, so it’s a popup, could it be a bug? We checked NVIDIA control panel and the options were consistent with that of a G-Sync connected system. We fired up the pendulum demo and watched the screen carefully, passing the machine around the office to be inspected by all. We then fired up some graphics benchmarks that were well suited to show off the technology (Unigine Heaven, Metro: Last Light, etc), and everything looked great – smooth steady pans with no juddering or tearing to be seen. Ken Addison, our Video Editor and jack of all trades, researched the panel type and found that it was likely capable of 100 Hz refresh. We quickly created a custom profile, hit apply, and our 75 Hz G-Sync laptop was instantly transformed into a 100 Hz G-Sync laptop!
Ryan's Note: I think it is important here to point out that we didn't just look at demos and benchmarks for this evaluation but actually looked at real-world gameplay situations. Playing through Metro: Last Light showed very smooth pans and rotation, Assassin's Creed played smoothly as well and flying through Unigine Heaven manually was a great experience. Crysis 3, Battlefield 4, etc. This was NOT just a couple of demos that we ran through - the variable refresh portion of this mobile G-Sync enabled panel was working and working very well.
At this point in our tinkering, we had no idea how or why this was working, but there was no doubt that we were getting a similar experience as we have seen with G-Sync panels. As I digested what was going on, I thought surely this can’t be as good as it seems to be… Let’s find out, shall we?
Introduction and Design
MSI’s unapologetically large GT70 “Dominator Pro” series of machines knows its audience well: for every gripe about the notebooks’ hulking size, the community returns a snicker and a shrug, rarely valuing portability as highly as the critics hired to judge by it. These machines are built for power, first and foremost. Featherweight construction and manageable dimensions matter to those regularly tossing machines into their bags; MSI’s desktop replacements, by contrast, embrace the meaning of their classification: the flexibility of merely moving around the house with one’s gaming rig is reason enough to consider investing in one.
So its priorities are arguably well in line. But if you want to keep on dominating, regular updates are a necessity, too. And with the GT72 2QE, MSI takes it all up yet another notch: our review unit (GT72 2QE-208US) packs four SSDs in a RAID-0 array (as opposed to the GT70’s three), plus a completely redesigned case which manages to address some of our biggest complaints. Oh yeah, and an NVIDIA GTX 980M GPU with 8 GB GDDR5 RAM—the fastest mobile GPU ever. (You can find much more information and analysis on this GPU specifically in Ryan’s ever-comprehensive review.)
Of course, these state-of-the-art innards come at no small price: $2,999 as configured (around a $2,900 street price), or a few hundred bucks less with storage or RAM sacrifices—a reasonable trade-off considering the marginal benefits one gains from a quad-SSD array or 32 GB of RAM.
Core M 5Y70 Specifications
Back in August of this year, Intel invited me out to Portland, Oregon to talk about the future of processors and process technology. Broadwell is the first microarchitecture to ship on Intel's newest 14nm process technology and the performance and power implications of it are as impressive as they are complex. We finally have the first retail product based on Broadwell-Y in our hands and I am eager to see how this combination of technology is going to be implemented.
If you have not read through my article that dives into the intricacies of the 14nm process and the architectural changes coming with Broadwell, then I would highly recommend that you do so before diving any further into this review. Our Intel Core M Processor: Broadwell Architecture and 14nm Process Reveal story clearly explains the "how" and "why" for many of the decisions that determined the direction the Core M 5Y70 heads in.
As I stated at the time:
"The information provided by Intel about Broadwell-Y today shows me the company is clearly innovating and iterating on its plans set in place years ago with the focus on power efficiency. Broadwell and the 14nm process technology will likely be another substantial leap between Intel and AMD in the x86 tablet space and should make an impact on other tablet markets (like Android) as long as pricing can remain competitive. That 14nm process gives Intel an advantage that no one else in the industry can claim and unless Intel begins fabricating processors for the competition (not completely out of the question), that will remain a house advantage."
With a background on Intel's goals with Broadwell-Y, let's look at the first true implementation.
GeForce GTX 980M Performance Testing
When NVIDIA launched the GeForce GTX 980 and GTX 970 graphics cards last month, part of the discussion at our meetings also centered around the mobile variants of Maxwell. The NDA was a bit later though, and Scott wrote up a short story announcing the release of the GTX 980M and the GTX 970M mobility GPUs. Both of these GPUs are based on the same GM204 design as the desktop cards, though as you should have come to expect by now, they do so with lower specifications than the similarly-named desktop options. Take a look:
|   ||GTX 980M||GTX 970M||GTX 980||GTX 970||GTX 880M|
|Memory||Up to 4GB||Up to 3GB||4GB||4GB||4GB/8GB|
|Memory Rate||2500 MHz||2500 MHz||7.0 GT/s||7.0 GT/s||2500 MHz|
Just like the desktop models, GTX 980M and GTX 970M are built on the 28nm process technology and are tweaked and built for power efficiency - one of the reasons the mobile release of this product is so interesting.
With a CUDA core count of 1536, the GTX 980M has 33% fewer shader cores than the desktop GTX 980, along with a slightly lower base clock speed. The result is a peak theoretical performance of 3.189 TFLOPs, compared to 4.6 TFLOPs on the GTX 980 desktop. In fact, that is only slightly higher than the Kepler-based GTX 880M, which clocks in with the same CUDA core count (1536) but a TFLOP capability of 2.9. Bear in mind that the GTX 880M is using a different architecture design than the GTX 980M; Maxwell's design advantages go beyond just CUDA core count and clock speed.
The GTX 970M is even smaller, with a CUDA core count of 1280 and peak performance rated at 2.365 TFLOPs. Also notice that the memory bus width has shrunk from 256-bit to 192-bit for this part.
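Those peak figures follow from the standard single-precision formula: two FLOPs (one fused multiply-add) per CUDA core per clock. A small sketch that reproduces the quoted numbers, assuming base clocks of roughly 1038 MHz (GTX 980M) and 924 MHz (GTX 970M), which are not stated in the table above:

```python
def peak_tflops(cuda_cores: int, clock_mhz: float) -> float:
    """Peak single-precision throughput: 2 FLOPs (FMA) per core per clock."""
    return cuda_cores * 2 * clock_mhz / 1e6

# Assumed base clocks: ~1038 MHz (GTX 980M) and ~924 MHz (GTX 970M)
print(round(peak_tflops(1536, 1038), 3))  # -> 3.189 TFLOPS, as quoted
print(round(peak_tflops(1280, 924), 3))   # -> 2.365 TFLOPS, as quoted
```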
As is typically the case with mobile GPUs, the memory speed of the GTX 980M and GTX 970M is significantly lower than the desktop parts. While the GeForce GTX 980 and 970 that install in your desktop PC will have memory running at 7.0 GHz, the mobile versions will run at 5.0 GHz in order to conserve power.
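The bandwidth cost of that slower memory is easy to quantify: bus width in bytes times the effective transfer rate. A quick sketch of the arithmetic (the function name is my own):

```python
def memory_bandwidth_gbs(bus_width_bits: int, rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer times transfers/s."""
    return (bus_width_bits / 8) * rate_gtps

print(memory_bandwidth_gbs(256, 7.0))  # desktop GTX 980/970: 224.0 GB/s
print(memory_bandwidth_gbs(256, 5.0))  # GTX 980M:            160.0 GB/s
print(memory_bandwidth_gbs(192, 5.0))  # GTX 970M:            120.0 GB/s
```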
From a feature set standpoint, though, the GTX 980M/970M are very much the same as the desktop parts that I looked at in September. You will have support for VXGI, NVIDIA's new custom global illumination technology, Multi-Frame AA and, maybe most interestingly, Dynamic Super Resolution (DSR). DSR allows you to render a game at a higher resolution and then use a custom filter to downsample it back to your panel's native resolution. For mobile gamers that are using 1080p screens (as our test sample shipped with), this is a good way to utilize the power of your GPU on less power-hungry games, while getting a surprisingly good image at the same time.
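DSR factors are expressed as multiples of the native pixel count, so each screen dimension scales by the square root of the factor. A hedged sketch of that arithmetic (the example factors mirror the desktop driver's options and may differ on mobile):

```python
import math

def dsr_render_resolution(native_w: int, native_h: int, factor: float):
    """Render target size for a DSR factor (a multiple of native pixel count)."""
    scale = math.sqrt(factor)  # per-axis scale so total pixels grow by `factor`
    return round(native_w * scale), round(native_h * scale)

# 4.00x DSR on the 1080p panel our test sample shipped with
print(dsr_render_resolution(1920, 1080, 4.0))   # -> (3840, 2160)
print(dsr_render_resolution(1920, 1080, 2.25))  # -> (2880, 1620)
```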
If there is one message that I get from NVIDIA's GeForce GTX 900M-series announcement, it is that laptop gaming is a first-class citizen in their product stack. Before even mentioning the products, the company provided relative performance differences between high-end desktops and laptops. Most of the rest of the slide deck is showing feature-parity with the desktop GTX 900-series, and a discussion about battery life.
First, the parts. Two products have been announced: The GeForce GTX 980M and the GeForce GTX 970M. Both are based on the 28nm Maxwell architecture. In terms of shading performance, the GTX 980M has a theoretical maximum of 3.189 TFLOPs, and the GTX 970M is calculated at 2.365 TFLOPs (at base clock). On the desktop, this is very close to the GeForce GTX 770 and the GeForce GTX 760 Ti, respectively. This metric is most useful when you're compute bandwidth-bound, at high resolution with complex shaders.
The full specifications are:
|   ||GTX 980M||GTX 970M||GTX 980||GTX 970||GTX 880M|
|Memory||Up to 4GB||Up to 3GB||4GB||4GB||4GB/8GB|
|Memory Rate||2500 MHz||2500 MHz||7.0 GT/s||7.0 GT/s||2500 MHz|
As for the features, they should be familiar to those paying attention to both the desktop 900-series and the laptop 800M-series product launches. From desktop Maxwell, the 900M-series is getting VXGI, Dynamic Super Resolution, and Multi-Frame Sampled AA (MFAA). From the latest generation of Kepler laptops, the new GPUs are getting an updated BatteryBoost technology. From the rest of the GeForce ecosystem, they will also get GeForce Experience, ShadowPlay, and so forth.
For VXGI, DSR, and MFAA, please see Ryan's discussion for the desktop Maxwell launch. Information about these features is basically identical to what was given in September.
BatteryBoost, on the other hand, is a bit different. NVIDIA claims that the biggest change is just raw performance and efficiency, giving you more headroom to throttle. Perhaps more interesting though, is that GeForce Experience will allow separate one-click optimizations for both plugged-in and battery use cases.
The power efficiency demonstrated with the Maxwell GPU in Ryan's original GeForce GTX 980 and GTX 970 review is even more beneficial for the notebook market, where thermal designs are physically constrained. Longer battery life, as well as thinner and lighter gaming notebooks, will see tremendous advantages using a GPU that can run at near peak performance on the maximum power output of an integrated battery. In NVIDIA's presentation, they mention that while notebooks on AC power can use as much as 230 watts of power, batteries tend to peak around 100 watts. The fact that a full-speed, desktop-class GTX 980 has a TDP of 165 watts, compared to the 250 watts of a Radeon R9 290X, translates into notebook GPU performance that will more closely mirror its desktop brethren.
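To illustrate the point with NVIDIA's own numbers, consider how much of each desktop GPU's rated TDP a roughly 100 watt battery budget can feed. This is a deliberate oversimplification, since performance does not scale linearly with power and Maxwell's efficiency curve is the whole point, but the fraction works as a rough proxy:

```python
def budget_fraction(battery_budget_w: float, gpu_tdp_w: float) -> float:
    """Fraction of a GPU's rated TDP a power budget can supply (capped at 1.0)."""
    return min(1.0, battery_budget_w / gpu_tdp_w)

# ~100 W battery ceiling cited by NVIDIA
print(round(budget_fraction(100, 165), 2))  # GTX 980 (165 W TDP)  -> 0.61
print(round(budget_fraction(100, 250), 2))  # R9 290X (250 W TDP)  -> 0.4
```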
Of course, you probably will not buy your own laptop GPU; rather, you will be buying devices which integrate these. There are currently five designs across four manufacturers that have been revealed (see image above). Three contain the GeForce GTX 980M, one has a GTX 970M, and the other has a pair of GTX 970Ms. Prices and availability are not yet announced.