When Magma Freezes Over...
Intel confirms that they have approached AMD about access to their Mantle API. The discussion, despite being clearly labeled as "an experiment" by an Intel spokesperson, was initiated by them -- not AMD. According to AMD's Gaming Scientist, Richard Huddy, via PCWorld, AMD's response was, "Give us a month or two" and "we'll go into the 1.0 phase sometime this year" which only has about five months left in it. When the API reaches 1.0, anyone who wants to participate (including hardware vendors) will be granted access.
AMD inside Intel Inside???
I do wonder why Intel would care, though. Intel has the fastest per-thread processors, and their GPUs are not known as workhorses held back by API call overhead, either. That is not to say I can't see any reason, however...
Introduction and Design
It was only last year that we were singing the praises of the GT60, which was one of the fastest notebooks we’d seen to date. Its larger cousin, the GT70, features a 17.3” screen (versus the GT60’s 15.6”), faster CPUs and GPUs, and even better options for storage. Now, the latest iteration of this force to be reckoned with has arrived on our desks, and while its appearance hasn’t changed much, its performance is even better than ever.
While we’ll naturally be spending a good deal of time discussing performance and stability in our article here, we won’t be dedicating much to casing and general design, as, for the most part, it is very similar to that of the GT60. One area on which we’ll be focusing particularly heavily, however, is battery life, thanks solely to the presence of NVIDIA’s new Battery Boost technology. As the name suggests, this feature employs power conservation techniques to extend the notebook’s battery life while gaming unplugged. This is accomplished primarily via frame rate limiting, a capability that has actually been available since the introduction of Kepler, but which until now was buried within the drivers’ advanced options. Battery Boost brings it to the forefront and makes it both accessible and enabled by default.
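NVIDIA hasn't published Battery Boost's exact heuristics, but the frame-rate-limiting portion alone is straightforward: render a frame, then sleep away whatever is left of the frame budget rather than immediately rendering the next one. A minimal sketch of just that idea (the function and cap values here are illustrative, not NVIDIA's implementation):

```python
import time

def run_frame_loop(render_frame, fps_cap=30.0, frames=5):
    """Render frames, sleeping away leftover frame time instead of
    re-rendering immediately; the sleep is where the hardware can
    clock down and save power on battery."""
    frame_budget = 1.0 / fps_cap  # e.g. ~33.3 ms per frame at 30 FPS
    for _ in range(frames):
        start = time.monotonic()
        render_frame()
        elapsed = time.monotonic() - start
        if elapsed < frame_budget:
            # CPU/GPU idle for the remainder of the frame budget
            time.sleep(frame_budget - elapsed)

if __name__ == "__main__":
    # With a trivial "render" the loop spends nearly all its time asleep
    run_frame_loop(lambda: None, fps_cap=50.0, frames=5)
```

Uncapped, a fast GPU would instead spend that idle time rendering frames the panel may never display, burning power for no visible benefit.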
Let’s take a look at what this bad boy is packing:
Not much commentary needed here; this table reads like a who’s who of computer specifications. Of particular note are the 32 GB of RAM, the 880M (of course), and the 384 GB SSD RAID array (!!). Elsewhere, it’s mostly business as usual for the ultra-high-end MSI GT notebooks, with a slightly faster CPU than the previous model we reviewed (the i7-4700MQ). One thing is guaranteed: it’s a fast machine.
Kaveri Goes Mobile
The processor market is in an interesting place today. At the high end of the market Intel continues to stand pretty much unchallenged, ranging from the $1000 Ivy Bridge-E down to the $300 Haswell parts available to DIY users. The same could really be said for the mobile market: if you want a high-performance part, the default choice continues to rest with Intel. But AMD has some interesting options that Intel can't match once you enter the world of the mainstream notebook. The APU was slow to develop, but it has placed AMD in a unique position, with a compute focus that is more or less the reverse of Intel's. While Intel dominates x86 performance, the GPU in AMD's latest APUs continues to lead in gaming and compute performance.
The biggest problem for AMD is that the computing software ecosystem still has not caught up with the performance that a GPU can provide. With the exception of games, the GPU in a notebook or desktop remains underutilized. Certain software vendors are making strides - see the changes in video transcoding and image manipulation - but there is still ground AMD needs to cover.
Today we are looking at the mobile version of Kaveri, AMD's latest entry into the world of APUs. This processor combines the latest AMD processor architecture with a GCN-based graphics design for a pretty advanced part. When the desktop version of this processor was released, we wrote quite a bit about the architecture and the technological advancements made in it, including its distinction as the first processor that is fully HSA compliant. I won't be diving into the architecture details here since we covered them so thoroughly back in January just after CES.
The mobile version of Kaveri is basically identical in architecture with some changes for better power efficiency. The flagship part will ship with 12 Compute Cores (4 Steamroller x86 cores and 8 GCN cores) and will support all the same features of GCN graphics designs including the new Mantle API.
Early in the spring we heard rumors that the AMD FX brand was going to make a comeback! Immediately enthusiasts were thinking up ways AMD could compete against the desktop Core i7 parts from Intel; could it be with 12 cores? DDR4 integration?? As it turns out...not so much.
In many ways, the Google Nexus 7 has long been the standard of near perfection for an Android tablet. With a modest 7-inch screen, solid performance and low cost, the ASUS-built hardware has stood through one major revision as our top selection. Today though, a new contender in the field makes its way to the front of the pack in the form of the ASUS MeMO Pad 7 (ME176C). At $150, this new 7-inch tablet has almost all the hallmarks to really make an impact in the Android ecosystem. Finally.
The MeMO Pad 7 is not a new product family, though. It has existed with MediaTek processors for quite some time in essentially the same form factor. This new ME176C model makes some decisions that help it break into a new level of performance while maintaining the budget pricing required to really take on the likes of Google. By coupling the MeMO Pad brand with the Intel Bay Trail Atom processor, the two companies firmly believe they have a winner; but do they?
I have to admit that my time with the ASUS MeMO Pad 7 (ME176C) has been short; shorter than I would have liked to offer a truly definitive take on this mobile platform. I prefer to take the time to work the tablet into my daily work and home routines. Reading, browsing, email, etc. This allows me to filter through any software intricacies that might make or break a purchasing decision. Still, I think the ASUS design is going to live up to my expectations and is worth every penny of the $150 price tag.
The ASUS MeMO Pad 7 has a 1280x800 resolution IPS screen. This 7-inch device is powered by the new Intel Atom Z3745 quad-core SoC with 1GB of memory and 16GB of on-board storage. The front facing camera is of the 2MP variety while the rear facing camera is 5MP - but you will likely be as disappointed in the image quality of the photos as I was. Connectivity options include the microUSB port for charging and data transfer along with 802.11b/g/n 2.4 GHz WiFi (sorry, no 5.0 GHz option here). Bluetooth 4.0 allows for low power data sync with other devices you might have and our model shipped with Android 4.4.2 already pre-installed.
The rear of the ASUS MeMO Pad is a pseudo rubber/plastic type material that is easy to grip while not leaving fingerprints behind - a solid combination. The center mounted camera lens takes decent pictures - but I can't put any more praise on it than that. It was easy to find image quality issues with photos even in full daylight. It's hard to know how disappointed to be considering the price, but the Nexus 7 has better optical hardware.
Upgrades from Anker
Last year we started to accumulate a large number of mobile devices around the office, including smartphones, tablets and even convertibles like the ASUS T100, all of which charge over USB. While not a hassle when you are charging one or two units at a time, having 6+ on our desks on any given day started to become a problem for our far less numerous wall outlets. Our solution last year was Anker's E150 25 watt wall charger, which we covered in a short video overview.
It was great but had limitations including different charging rates depending on the port you connected it to, limited output of 5 Amps total for all five ports and fixed outputs per port. Today we are taking a look at a pair of new Anker devices that implement smart ports called PowerIQ that enable the battery and wall charger to send as much power to the charging device as it requests, regardless of what physical port it is attached to.
We'll start with the updated Anker 40 watt 5-port wall charger and then move on to discuss the 3-port mobile battery charger, both of which share the PowerIQ feature.
Anker 40 watt 5-Port Wall Charger
The new Anker 5-port wall charger is actually smaller than the previous generation but offers superior specifications across the board. This unit can push out more than 40 watts total combined through all five USB ports: 5 volts at as much as 8 amps. We are told all 8 amps can in fact flow through a single USB charging port if a device were to request that much, though nothing in our offices seems to draw more than 2.3 A.
Any USB port can be used for any device on this new model; it doesn't matter where it plugs in. This greatly simplifies things from a user experience point of view, as you don't have to hold the unit up to your face to read the tiny text that existed on the E150. With 8 amps spread across all five ports you should have more than enough power to charge all your devices at full speed. If you happen to have five iPads charging at the same time, that would exceed 8 A and all the devices' charge rates would be a bit lower.
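Anker hasn't published how PowerIQ actually arbitrates the shared budget, but the arithmetic of the five-iPad case is easy to illustrate. This toy model (not Anker's firmware) simply scales every port's request down proportionally once combined demand exceeds 8 A:

```python
BUDGET_AMPS = 8.0  # 5 V x 8 A = 40 W shared across all five ports
PORTS = 5

def allocate(requests_amps):
    """Grant each port's requested current, scaling everything down
    proportionally if combined demand exceeds the 8 A budget."""
    total = sum(requests_amps)
    if total <= BUDGET_AMPS:
        return list(requests_amps)
    scale = BUDGET_AMPS / total
    return [round(r * scale, 2) for r in requests_amps]

# Five iPads at ~2.1 A each request 10.5 A total, so each is throttled.
print(allocate([2.1] * PORTS))  # [1.6, 1.6, 1.6, 1.6, 1.6]
```

In practice each device negotiates its own rate and real chargers may not share perfectly evenly, but the takeaway matches the text: under full load each port gets somewhat less than its device would accept.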
AMD Makes some Lemonade...
I guess we could say that AMD has been rather busy lately. It seems that a significant amount of the content on PC Perspective this month revolved around the AMD AM1 platform. Before that we had the Kaveri products and the R7 265. AMD also reported some fairly solid growth over the past year with their graphics and APU lines. Things are not as grim and dire as they once were for the company. This is good news for consumers as they will continue to be offered competing solutions that will vie for that hard earned dollar.
AMD is continuing their releases for 2014 with the announcement of their latest low-power and mainstream mobile APUs. These are codenamed “Beema” and “Mullins”, but they are based on the year-old Kabini chip. This may cause a few people to roll their eyes, as AMD has had some fairly unimpressive refreshes in the past. We saw the rather meager improvements in clockspeed and power consumption with Brazos 2.0 a couple of years back, and it looked like this would be the case again for Beema and Mullins.
I was again expecting said meager improvements in power consumption and clockspeeds that we had received all those years ago with Brazos 2.0. Turns out I was wrong. This is a fairly major refresh which does a few things that I did not think were entirely possible, and I’m a rather optimistic person. So why is this release surprising? Let us take a good look under the hood.
Maxwell and Kepler and...Fermi?
Covering the landscape of mobile GPUs can be a harrowing experience. Brands, specifications, performance, features and architectures can all vary from product to product, even inside the same family. Rebranding is rampant from both AMD and NVIDIA and, in general, we are met with one of the most confusing segments of the PC hardware market.
Today, with the release of the GeForce GTX 800M series from NVIDIA, we are getting all of the above in one form or another. We will also see performance improvements and the introduction of the new Maxwell architecture (in a few parts at least). Along with the GeForce GTX 800M parts, you will also find the GeForce 840M, 830M and 820M offerings at lower performance, wattage and price levels.
With some new hardware comes a collection of new software for mobile users, including the innovative Battery Boost that can increase unplugged gaming time by using frame rate limiting and other "magic" bits that NVIDIA isn't talking about yet. ShadowPlay and GameStream also find their way to mobile GeForce users as well.
Let's take a quick look at the new hardware specifications.
| | GTX 880M | GTX 780M | GTX 870M | GTX 770M |
| --- | --- | --- | --- | --- |
| GPU Code name | Kepler | Kepler | Kepler | Kepler |
| Rated Clock | 954 MHz | 823 MHz | 941 MHz | 811 MHz |
| Memory | Up to 4GB | Up to 4GB | Up to 3GB | Up to 3GB |
| Memory Clock | 5000 MHz | 5000 MHz | 5000 MHz | 4000 MHz |
Both the GTX 880M and the GTX 870M are based on Kepler, keeping the same basic feature set and hardware specifications of their brethren in the GTX 700M line. However, while the GTX 880M has the same CUDA core count as the 780M, the same cannot be said of the GTX 870M. Moving from the GTX 770M to the 870M sees a significant 40% increase in core count as well as a jump in clock speed from 811 MHz (plus Boost) to 941 MHz.
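The raw numbers bear that out. Using NVIDIA's published CUDA core counts for these parts (960 for the GTX 770M and 1344 for the GTX 870M, which is the 40% jump described above) and Kepler's two floating-point operations per core per clock, a quick back-of-the-envelope comparison of peak shader throughput:

```python
def shader_throughput_gflops(cores, clock_mhz):
    # Kepler CUDA cores execute one FMA (2 FLOPs) per clock
    return cores * clock_mhz * 2 / 1000.0

gtx770m = shader_throughput_gflops(960, 811)   # ~1557 GFLOPS
gtx870m = shader_throughput_gflops(1344, 941)  # ~2529 GFLOPS
print(round(gtx870m / gtx770m, 2))  # 1.62
```

In other words, the combined core and clock increases put the GTX 870M's theoretical peak roughly 62% above the GTX 770M's; real-game gains will of course be smaller, since memory bandwidth and TDP limits also factor in.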
Introduction and Design
Alongside our T440s review unit was something slightly smaller and dear to our hearts: the latest entry in the ThinkPad X series of notebooks. Seeing as this very review is being typed on a Lenovo X220, our interest was piqued by the latest refinements to the formula. When the X220 was released, the thin-and-light trend was only just beginning to pick up steam leading into what eventually became today’s Ultrabook movement. Its 2012 successor, the ThinkPad X230, went on to receive our coveted (and rarely bestowed) Editor’s Choice Award, even in spite of a highly controversial keyboard change that sent the fanbase into a panic.
But all of that has since (mostly) blown over, primarily thanks to the fact that—in spite of the minor ergonomic adjustments required to accustom oneself to what was once a jarringly different keyboard design—the basic philosophy remained the same: pack as many powerful parts as possible into a 12.5-inch case while still maintaining good durability and battery life. These machines were every bit as capable as most other 13- and 14-inch notebooks of their time, and they were considerably smaller, too. About the only things they lacked were higher-resolution screens, discrete graphics, and quad-core CPUs.
But with the X240 (and the T440s), portability has truly taken center stage, suggesting a complete paradigm shift—however subtly—away from “powerful (and light)” and toward “light (and powerful)”. Coupled with Intel’s Haswell CPUs and Lenovo’s new Power Bridge dual-battery design, this will certainly yield great benefits in the realm of battery life. But that isn’t all that’s different: we also find a (once again) revamped keyboard, as well as a completely new touchpad design which finally dispenses with the physical buttons entirely. Like in the X230’s case, these changes have roiled the ThinkPad purists—but is it all just a matter of close-minded traditionalism? That’s precisely what we’ll discover today.
Introduction and Design
Arguably some of the most thoughtful machines on the market are Lenovo’s venerable ThinkPads, which—while sporadically brave in their assertions—are still among the most conservative (yet simultaneously practical) notebooks available. What makes these notebooks so popular in the business crowds is their longstanding refusal to compromise functionality in the interest of form, as well as their self-proclaimed legendary reliability. And you could argue that such practical conservatism is what defines a good business notebook: a device which embraces the latest technological trends, but only with requisite caution and consideration.
Maybe it’s the shaky PC market, or maybe it’s the sheer onset of sexy technologies such as touch and clickpads, but recent ThinkPads have begun to show some uncommon progressivism, and unapologetically so, too. First, it was the complete replacement of the traditional, critically-acclaimed ThinkPad keyboard with the Chiclet AccuType variety, a decision which irked purists but eventually was accepted by most. Along with that came the integrated touchpad buttons, which are still lamented by many users. Those alterations to the winning design were ultimately relatively minor, however, and for the most part, they’ve now been digested by the community. Now, though, with the T440s (as well as the rest of Lenovo’s revamped ThinkPad lineup), we’re seeing what will perhaps constitute the most controversial change of all: the replacement of the older touchpads with a “5-button trackpad”, as well as an optional touchscreen interface.
Can these changes help to keep the T440s on the cusp of technological progress, or has the design finally crossed the threshold into the realm of counterproductivity?
Compared with nearly any other modern notebook, these specs might not hold many surprises. But judged side-by-side with its T430s predecessor, there are some pretty striking differences. For starters, the T440s is the first in its line to offer only low-voltage CPU options. While our test unit shipped with the (certainly capable enough) Core i5-4200U, a dual-core processor with up to 2.6 GHz Turbo Boost clock rate, options range up to a Core i7-4600U (up to 3.3 GHz). Still, these options are admittedly a far cry from the i7-3520M with which top-end T430s machines were equipped. Of course, they also carry less than half the TDP, which is likely why the decision was made. Other notables are the lack of discrete graphics options (previously, users had the choice of either integrated graphics or an NVIDIA NVS 5200M) and the maximum supported memory of 12 GB. And, of course, there’s the touchscreen, which is not required but rather merely an option. On the other hand, while we’re on the subject of the screen, this is also the first model in the series to offer a 1080p resolution, whether traditional or touch-enabled, which is very much appreciated indeed.
That’s a pretty significant departure from the design of the T430s, which, as it currently appears, could represent the last T4xxs model to provide such powerhouse options at the obvious expense of battery life. Although some markets already have the option of the ThinkPad S440 to fill the Ultrabook void within the ThinkPad 14-inch range, that notebook can even be outfitted with discrete graphics. The T440s top-end configuration, meanwhile, consists of a 15 W TDP dual-core i7 with integrated graphics and 12 GB of DDR3 RAM. In other words, it’s powerful, but it’s just not in the same class as the T430s’ components. What’s more important to you?
Lenovo introduces a unique form factor
Lenovo isn't a company that seems interested in slowing down. Just when you think the world of notebooks is getting boring, it releases products like the ThinkPad Tablet 2 and the Yoga 2 Pro. Today we are looking at another innovative product from Lenovo, the Yoga Tablet 8 and Yoga Tablet 10. While the tablets share the Yoga branding seen in recent convertible notebooks, these are NOT Windows-based PCs - something that I fear might confuse some consumers.
Instead, this tablet pair is based on Android (4.2.2 at this point), which brings with it several advantages. First, the battery life is impressive, particularly with the 8-in version, which clocked in at more than 17 hours in our web browsing test! Second, the form factor of these units is truly unique and not only allows for larger batteries but also a more comfortable in-the-hand feel than I have had with any other tablet.
Check out the video overview below!
You can pick up the 8-in version of the Lenovo Yoga Tablet for just $199 while the 10.1-in model starts at $274.
The Lenovo Yoga Tablet is available in both 8-in and 10.1-in sizes, though the hardware is mostly identical between the two units, including screen resolution (1280x800) and SoC hardware (MediaTek quad-core Cortex-A7). The larger model does get an 8000 mAh battery (over the 6000 mAh in the 8-in), but that isn't enough to counterbalance the power draw of the larger screen.
The 1280x800 resolution is a bit lower than I would like but is perfectly acceptable on the 8-in version of the Yoga Tablet. On the 10-in model though the pixels are just too big and image quality suffers. These are currently running Android 4.2.2 which is fine, but hopefully we'll see some updates from Lenovo to more current Android versions.
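As a rough sanity check on that 17-hour web browsing result, you can back out the average system power draw from the battery capacity. Lenovo doesn't publish a watt-hour rating here, so the ~3.7 V nominal lithium cell voltage below is an assumption:

```python
def avg_draw_watts(capacity_mah, voltage_v, runtime_hours):
    """Back out average system power draw from battery capacity and runtime."""
    return capacity_mah / 1000.0 * voltage_v / runtime_hours

# 8-in Yoga Tablet: 6000 mAh at an assumed ~3.7 V nominal, 17 h browsing
print(round(avg_draw_watts(6000, 3.7, 17.0), 2))  # ~1.31 W average
```

An average draw on the order of 1.3 W is believable for a modest 1280x800 panel and a low-power Cortex-A7 SoC, which is exactly why this combination of big battery and unambitious hardware yields such long runtimes.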
Introduction and Design
We’re always on the hunt for good docking stations, and sometimes it can be difficult to locate one when you aren’t afforded the luxury of a dedicated docking port. Fortunately, with the advent of USB 3.0 and the greatly improved bandwidth that comes along with it, the options have become considerably more robust.
Today, we’ll take a look at StarTech’s USB3SDOCKHDV, more specifically labeled the Universal USB 3.0 Laptop Docking Station - Dual Video HDMI DVI VGA with Audio and Ethernet (whew). This docking station carries an MSRP of $155, though it is currently available at resellers such as Amazon for around $125. That places it well above other StarTech options, such as the $100 USBVGADOCK2, which offers just one video output (VGA), 10/100 Ethernet, and four USB 2.0 ports.
The big selling points of the USB3SDOCKHDV are its addition of three USB 3.0 ports and Gigabit Ethernet—but most enticingly, its purported ability to drive three total screens simultaneously (including the connected laptop’s LCD) by way of dual HD video output. That output can come from either an HDMI + DVI-D or an HDMI + VGA combination (but not VGA + DVI-D). We’ll be interested to see how well this functionality works, as well as what sort of toll it takes on the CPU of the connected machine.
Continue reading our review of the StarTech USB3SDOCKHDV USB 3.0 Docking Station!!!
Once known as Logan, now known as K1
NVIDIA has bet big on Tegra. Since the introduction of the SoC's first iteration, that much was clear. With the industry push to mobile computing and the decreased importance of the classic PC design, developing and gaining traction with a mobile processor was not only an expansion of the company’s portfolio but a critical shift in the mindset of a graphics giant.
The problem thus far is that while NVIDIA continues to enjoy success in the markets of workstation and consumer discrete graphics, the Tegra line of system-on-chip processors has faltered. Design wins have been tough to come by. Other companies with feet already firmly planted on this side of the hardware fence continue to innovate and seal deals with customers. Qualcomm is the dominant player for mobile processors, with Samsung, MediaTek, and others all fighting for the same customers NVIDIA needs. While press conferences and releases have been all smiles and sunshine since day one, the truth is that Tegra hasn’t grown at the rate NVIDIA had hoped.
Solid products based on NVIDIA Tegra processors have been released. The first Google Nexus 7 used the Tegra 3 processor, and was considered the best Android tablet on the market by most, until it was succeeded by the 2013 iteration of the Nexus 7 this year. Tegra 4 slipped backwards, though – the NVIDIA SHIELD mobile gaming device was the answer for a company eager to show the market they built compelling and relevant hardware. It has only partially succeeded in that task.
With today’s announcement of the Tegra K1, previously known as Logan or Tegra 5, NVIDIA hopes to once again spark a fire under partners and developers, showing them that NVIDIA’s dominance in PC graphics has clear benefits for the mobile segment as well. During a meeting with NVIDIA about the Tegra K1, Dan Vivoli, Senior VP of marketing and a 16-year employee, equated the release of the K1 to the original GeForce GPU. That is a lofty ambition and puts a lot of pressure on the entire Tegra team, not to mention the K1 product itself, to live up to.
Tegra K1 Overview
What we previously knew as Logan or Tegra 5 (and actually it was called Tegra 5 until just a couple of days ago) is now being released as the Tegra K1. The ‘K’ designation indicates the graphics architecture that powers the SoC, in this case Kepler. Also, it’s the first one. So, K1.
The CPU side of the Tegra K1 looks very familiar: four ARM Cortex-A15 “r3” cores with 2MB of L2 cache, plus a fifth A15 core used in low-power situations. This 4+1 design is the same one introduced with the Tegra 4 processor last year and allows NVIDIA to implement its own unique take on a “big.LITTLE” design. Tegra K1 includes some slight modifications to the cores that improve performance and efficiency, but not by much; the main CPU is very similar to the Tegra 4's.
NVIDIA also unveiled late last night another version of the Tegra K1 that replaces the quad A15 cores with two of the company's custom-designed Denver CPU cores. Project Denver, announced in early 2011, is NVIDIA's attempt at building its own core design based on the ARMv8 64-bit ISA. This puts that iteration of the Tegra K1 on the same level as Apple's A7 and Qualcomm's Krait processors: custom cores rather than off-the-shelf ARM designs. When these parts are finally available in the wild, it will be incredibly intriguing to see how well NVIDIA's architects did in the first true CPU design from the GPU giant.
Follow all of our coverage of the show at http://pcper.com/ces!
Introduction and Design
Contortionist PCs are a big deal these days as convertible models take the stage to help bridge the gap between notebook and tablet. But not everyone wants to drop a grand on a convertible, and not everyone wants a 12-inch notebook, either. Meanwhile, these same people may not wish to blow their cash on an underpowered (and far less capable) Chromebook or tablet. It’s for these folks that Lenovo has introduced the IdeaPad Flex 14 Ultrabook, which occupies a valuable middle ground between the extremes.
The Flex 14 looks an awful lot like a Yoga at first glance, with the same sort of acrobatic design and a thoroughly IdeaPad styling (Lenovo calls it a “dual-mode notebook”). The specs are also similar to that of the x86 Yoga, though with the larger size (and later launch), the Flex also manages to assemble a slightly more powerful configuration:
The biggest internal differences here are the i5-4200U CPU, which is a 1.6 GHz Haswell model with a TDP of 15 W and the ability to Turbo Boost (versus the Yoga 11S’ i5-3339Y, which is Ivy Bridge with a marginally lower TDP of 13 W and no Turbo Boost), the integrated graphics improvements that follow with the newer CPU, and a few more ports made possible by the larger chassis. Well, and the regression to a TN panel from the Yoga 11S’ much-appreciated IPS display, which is a bummer. Externally, your wallet will also appreciate a $250 drop in price: our model, as configured here, retails for just $749 (versus the $999 Yoga 11S we reviewed a few months back).
You can actually score a Flex 14 for as low as $429 (as of this writing), by the way, but if you’re after any sort of respectable configuration, that price quickly climbs above the $500 mark. Ours is the least expensive option currently available with both a solid-state drive and an i5 CPU.
Streaming games straight from NVIDIA
Over the weekend NVIDIA released a December update for the SHIELD Android mobile gaming device that included a very interesting, and somewhat understated, new feature: Beta support for NVIDIA GRID.
You have likely heard of GRID before; NVIDIA has been pushing it as part of the company's vision of bringing GPU computing into every facet and market. GRID is aimed at creating GPU-based server farms to enable mobile, streaming gaming for users across the country and around the world. While NVIDIA initially only talked about working with partners to launch streaming services based on GRID, they have obviously changed their tune slightly with this limited release.
If you own a SHIELD, and install the most recent platform update, you'll find a new icon in your NVIDIA SHIELD menu called GRID Beta. The first time you start this new application, it will attempt to measure your bandwidth and latency to offer up an opinion on how good your experience should be. NVIDIA is asking for at least 10 Mbps of sustained bandwidth, and wants round trip latency under 60 ms from your location to their servers.
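NVIDIA hasn't detailed how the SHIELD client scores your connection beyond those two published thresholds, but the basic gate is simple enough to sketch (this is an illustration, not NVIDIA's actual logic):

```python
MIN_BANDWIDTH_MBPS = 10.0  # NVIDIA's stated minimum sustained bandwidth
MAX_LATENCY_MS = 60.0      # stated round-trip latency ceiling to the servers

def grid_quality(bandwidth_mbps, latency_ms):
    """Rough pass/fail gate mirroring NVIDIA's published requirements."""
    if bandwidth_mbps < MIN_BANDWIDTH_MBPS:
        return "insufficient bandwidth"
    if latency_ms > MAX_LATENCY_MS:
        return "latency too high"
    return "ok"

print(grid_quality(25.0, 35.0))   # ok
print(grid_quality(25.0, 120.0))  # latency too high
```

Of note: bandwidth determines achievable stream quality, while latency determines input responsiveness, so a fat but distant connection can still fail the check.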
Currently, servers are ONLY located in Northern California, so the further away you are, the more likely you are to run into problems. However, doing some testing in Kentucky and Ohio resulted in very playable gaming scenarios, though we did run into some connection problems that might be load-based or latency-based.
After the network setup portion, users are shown 8 different games they can try: Darksiders, Darksiders II, Street Fighter X Tekken, Street Fighter IV, Alan Wake, The Witcher 2, Red Faction: Armageddon and Trine 2. You are free to play them free of charge during this beta, though you can be sure they will be removed and erased at some point; just a reminder. Saves work well, and we were able to save and resume games of Darksiders II on GRID easily and quickly.
Starting up the game was fast, about on par with starting up a game on a local PC, though obviously the server is loading it in the background. Once the game is up and running, you are met with some button mapping information provided by NVIDIA for that particular game (great addition) and then you jump into the menus as if you were running it locally.
NVIDIA Tegra Note Program
Clearly, NVIDIA’s Tegra line has not been as successful as the company had hoped and expected. The move for the discrete GPU giant into the highly competitive world of the tablet and phone SoCs has been slower than expected, and littered with roadblocks that were either unexpected or that NVIDIA thought would be much easier to overcome.
The truth is that this was always a long play for the company; success was never going to come overnight, and anyone who thought it would was deluded. Part of it has to do with the development cycle of the ARM ecosystem. NVIDIA is used to a rather quick development, production, marketing and sales pattern thanks to its time in high performance GPUs, but the SoC world is quite different. By the time a device based on a Tegra chip reaches the retail channel, it has had to go through an OEM development cycle, an NVIDIA SoC development cycle and even an ARM Cortex CPU development cycle. The result is an extended time frame from initial product announcement to retail availability.
Partly due to this, and partly due to limited design wins in the mobile markets, NVIDIA has started to develop internally designed end-user devices that utilize its Tegra SoC processors. This has the benefit of being much faster to market: while most SoC vendors develop reference platforms during the normal course of business, NVIDIA is essentially going to perfect and productize them.
Introduction and Design
With few exceptions, it’s generally been taken for granted that gaming notebooks are going to be hefty devices. Portability is rarely the focus, with weight and battery life alike usually sacrificed in the interest of sheer power. But the MSI GE40 2OC—the lightest 14-inch gaming notebook currently available—seeks to compromise while retaining the gaming prowess. Trending instead toward the form factor of a large Ultrabook, the GE40 is both stylish and manageable (and perhaps affordable at around $1,300)—but can its muscle withstand the reduction in casing real estate?
While it can’t hang with the best of the 15-inch and 17-inch crowd, in context with its 14-inch peers, the GE40’s spec sheet hardly reads like it’s been the subject of any sort of game-changing handicap:
One of the most popular CPUs for Haswell gaming notebooks has been the 2.4 GHz (3.4 GHz Turbo) i7-4700MQ. But the i7-4702MQ in the GE40 2OC is nearly as powerful (managing 2.2 GHz and 3.2 GHz in those same areas respectively), and it features a TDP that’s 10 W lower at just 37 W. That’s ideal for notebooks such as the GE40, which seek to provide a thinner case in conjunction with uncompromising performance. Meanwhile, the NVIDIA GTX 760M is no slouch, even if it isn’t on the same level as the 770s and 780s that we’ve been seeing in some 15.6-inch and 17.3-inch gaming beasts.
Elsewhere, it’s business as usual, with 8 GB of RAM and a 120 GB SSD rounding out the major bullet points. Nearly everything here is on par with the best of rival 14-inch gaming models with the exception of the 900p screen resolution (which is bested by some notebooks, such as Dell’s Alienware 14 and its 1080p panel).
ARM is Serious About Graphics
Ask most computer users from 10 years ago who ARM is, and very few would give the correct answer. Some well-informed people might mention “Intel” and “StrongARM” or “XScale”, but ARM remained a shadowy presence until we saw the rise of the smartphone. Since then, ARM has built up its brand, much to the chagrin of companies like Intel and AMD. Partners such as Samsung, Apple, Qualcomm, MediaTek, Rockchip, and NVIDIA have all worked with ARM to produce chips based on the ARMv7 architecture, with Apple being the first to release an ARMv8 (64-bit) SoC. Chips based on the multitude of ARM architectures are likely the most widely shipped in the world, ranging from very basic processors to the very latest Apple A7 SoC.
The ARMv7 and ARMv8 architectures are very power efficient, yet provide enough performance to handle the vast majority of tasks performed on smartphones and tablets (as well as a handful of laptops). With the growth of visual computing, ARM also dedicated itself to designing competent graphics portions of its chips. The Mali architecture aims to be an affordable option for licensees that lack in-house graphics design groups of their own (as NVIDIA and Qualcomm have), while remaining competitive with others that are willing to license out their IP (such as Imagination Technologies).
ARM was in fact one of the first to license out the very latest graphics technology to partners in the form of the Mali-T600 series of products. These modules were among the first to support OpenGL ES 3.0 (compatible with 2.0 and 1.1) and DirectX 11. The T600 architecture is very comparable to Imagination Technologies’ Series 6 and the Qualcomm Adreno 300 series of products. Currently NVIDIA does not have a unified mobile architecture in production that supports OpenGL ES 3.0/DX11, but it is adapting the Kepler architecture to mobile and will be licensing it to interested parties. Qualcomm has not licensed out Adreno since buying that group from AMD (Adreno is an anagram of Radeon).
Introduction and Design
As we’re swimming through the veritable flood of Haswell refresh notebooks, we’ve stumbled across the latest in a line of very popular gaming models: the ASUS G750JX-DB71. This notebook is the successor to the well-known G75 series, which topped out at an Intel Core i7-3630QM with NVIDIA GeForce GTX 670MX dedicated graphics. Now, ASUS has jacked up the specs a little more, including the latest 4th-gen CPUs from Intel as well as 700-series NVIDIA GPUs.
Our ASUS G750JX-DB71 test unit features the following specs:
Of course, the closest comparison to this unit is the most recently reviewed MSI GT60 2OD-026US, which featured nearly identical specifications, apart from a 15.6” screen, a better GPU (a GTX 780M with 4 GB GDDR5), and a slightly different CPU (the Intel Core i7-4700MQ). In case you’re wondering what the difference is between the ASUS G750JX’s Core i7-4700HQ and the GT60’s i7-4700MQ, it’s very minor: the HQ features a slightly faster integrated graphics Turbo frequency (1.2 GHz vs. 1.15 GHz) and supports Intel Virtualization Technology for Directed I/O (VT-d). Since the G750JX doesn’t support Optimus, we won’t ever be using the integrated graphics, and unless you’re doing a lot with virtual machines, VT-d isn’t likely to offer any benefits, either. So for all intents and purposes, the CPUs are equivalent—meaning the biggest overall performance difference (on the spec sheet, anyway) lies with the GPU and the storage devices (where the G750JX offers more solid-state storage than the GT60). It’s no secret that the MSI GT60 burned up our benchmarks—so the real question is, how close is the ASUS G750JX to its pedestal, and if the differences are considerable, are they justified?
At an MSRP of around $2,000 (though it can be found for around $100 less), the ASUS G750JX-DB71 competes directly with the likes of the MSI GT60, which is priced equivalently. The question, of course, is whether it truly competes. Let’s find out!
A new generation of Software Rendering Engines.
We have been busy with side projects, here at PC Perspective, over the last year. Ryan has nearly broken his back rating the frames. Ken, along with running the video equipment and "getting an education", developed a hardware switching device for Wirecast and XSplit.
My project, "Perpetual Motion Engine", involves researching and developing a GPU-accelerated software rendering engine. Now, to be clear, this is just in very early development for the moment. The point is not to draw beautiful scenes. Not yet. The point is to show what OpenGL and DirectX do and what limits are removed when you do the math directly.
Errata: BioShock uses a modified Unreal Engine 2.5, not 3.
In the above video:
- I show the problems with graphics APIs such as DirectX and OpenGL.
- I talk about the problem those APIs attempt to solve: finding color values for your monitor.
- I discuss the advantages of boiling graphics problems down to general mathematics.
- Finally, I prove the advantages of boiling graphics problems down to general mathematics.
I would recommend watching the video, first, before moving forward with the rest of the editorial. A few parts need to be seen for better understanding.
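To make the idea concrete, here is a minimal sketch of what "doing the math directly" means: computing a pixel's color value as plain arithmetic, with no graphics API in the loop. This is my own illustration, not code from the Perpetual Motion Engine; the function names and the choice of Lambertian (diffuse) shading are invented for the example.

```python
# A pixel's color is just a number you can compute directly.
# Here we evaluate Lambertian (diffuse) shading for one pixel:
# brightness = max(0, N . L), where N is the surface normal and
# L is the direction toward the light.

def normalize(v):
    """Scale a 3-vector to unit length."""
    length = (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5
    return (v[0] / length, v[1] / length, v[2] / length)

def shade(normal, light_dir, base_color):
    """Return the shaded RGB color for one pixel as plain math."""
    n = normalize(normal)
    l = normalize(light_dir)
    # Dot product clamped at zero: surfaces facing away get no light.
    intensity = max(0.0, n[0]*l[0] + n[1]*l[1] + n[2]*l[2])
    return tuple(int(c * intensity) for c in base_color)

# A surface facing straight up, lit from directly above, is fully lit:
print(shade((0, 1, 0), (0, 1, 0), (255, 128, 0)))  # (255, 128, 0)
```

A software renderer simply runs logic like this for every pixel itself, rather than describing the scene to a driver through API calls; that is the freedom (and the cost) the video explores.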
If Microsoft was left to their own devices...
Microsoft's Financial Analyst Meeting 2013 set the stage, literally, for Steve Ballmer's last annual keynote to investors. The speech promoted Microsoft, its potential, and its unique position in the industry. He proclaimed, firmly, the company's desire to be a devices and services company.
The explanation, however, does not befit either industry.
Ballmer noted, early in the keynote, how Bing is the only notable competitor to Google Search. He wanted to make it clear, to investors, that Microsoft needs to remain in the search business to challenge Google. The implication is that Microsoft can fill the cracks where Google does not, or even cannot, and establish a business from that foothold. I agree. Proprietary products (which are not inherently bad by the way), as Google Search is, require one or more rivals to fill the overlooked or under-served niches. A legitimate business can be established from that basis.
It is the following, similar, statement which troubles me.
Ballmer later mentioned, along the same vein, how Microsoft is among the few making fundamental operating system investments. Like search, the implication is that operating systems are proprietary products which must compete against one another. This, albeit subtly, does not match their vision as a devices and services company. The point of a proprietary platform is to own the ecosystem, from end to end, and to derive your value from that control. The product is not a device; the product is not a service; the product is a platform. This makes sense to them because, from birth, they were a company which sold platforms.
A platform as a product is not a device, nor is it a service.