Microsoft Surface Book 2-in-1 with Skylake and NVIDIA Discrete GPU Announced

Subject: Mobile | October 6, 2015 - 02:38 PM |
Tagged: video, surface book, surface, Skylake, nvidia, microsoft, Intel, geforce

Along with the announcement of the new Surface Pro 4, Microsoft surprised many with the release of the new Surface Book 2-in-1 convertible laptop. Sharing much of the same DNA as the Surface tablet line, the Surface Book adopts a more traditional notebook design while still adding enough to the formula to produce a unique product.


The pivotal part of the design (no pun intended) is the new hinge, a "dynamic fulcrum" design that looks great and also (supposedly) will be incredibly strong. The screen / tablet attachment mechanism is called Muscle Wire and promises secure attachment as well as ease of release with a single button.

An interesting aspect of the fulcrum design is that, when closed, the Surface Book screen and keyboard do not actually touch near the hinge; instead there is a small gap in this area. I'm curious how this will play out in real-world usage: it creates a natural angle for using the screen in its tablet form, but it may also end up catching coins, pens and other objects between the two sections.


The 13.5-inch screen has a 3000 x 2000 resolution (a 3:2 aspect ratio, obviously) with a 267 PPI pixel density. Just like the Surface Pro 4, it supports 10-point touch and uses Microsoft's exclusive PixelSense display technology for improved image quality.
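As a quick sanity check, the quoted pixel density follows directly from the resolution and the diagonal size (assuming the panel measures exactly 13.5 inches):

\[
\text{PPI} \;=\; \frac{\sqrt{3000^2 + 2000^2}}{13.5\ \text{in}}
\;=\; \frac{\sqrt{13{,}000{,}000}}{13.5}
\;\approx\; \frac{3605.6}{13.5}
\;\approx\; 267
\]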

While most of the hardware is included in the tablet portion of the device, the keyboard dock includes some surprises of its own. You get a set of two USB 3.0 ports, a full size SD card slot and a proprietary SurfaceConnect port for an add-on dock. But most interestingly, you'll find an optional discrete GPU from NVIDIA, an as-yet-unidentified GeForce GPU with 1GB (??) of memory. I have sent inquiries to Microsoft and NVIDIA for details on the GPU, but haven't heard back yet. Judging by the differences between the power adapters, we think it is a 30 watt GeForce GPU of some kind, but I'm more interested in how the GPU changes both battery life and performance.

UPDATE: Just got official word from NVIDIA on the GPU, but unfortunately it doesn't tell us much.

The new GPU is a Maxwell based GPU with GDDR5 memory. It was designed to deliver the best performance in ultra-thin form factors such as the Surface Book keyboard dock. Given its unique implementation and design in the keyboard module, it cannot be compared to a traditional 900M series GPU. Contact Microsoft for performance information.


Keyboard and touchpad performance looks to be impressive as well, with an integrated full glass trackpad, a backlit keyboard design and "class leading" key travel.

The Surface Book is powered by Intel Skylake processors, available in both Core i5 and Core i7 options, but does not offer Core m-based or Iris graphics options. Instead, the only integrated GPU on offer is the Intel HD 520.


Microsoft promises "up to" 12 hours of battery life on the Surface Book, though that claim was made with the Core i5 / 256GB / 8GB configuration, with no discrete GPU included.


Pricing on the Surface Book starts at $1499 but can reach as high as $2699 with the maximum performance and storage capacity options. 

Source: Microsoft
Manufacturer: NVIDIA

GPU Enthusiasts Are Throwing a FET

NVIDIA is rumored to launch Pascal in early (~April-ish) 2016, although some are skeptical that it will even appear before the summer. The design was finalized months ago, and unconfirmed shipping information claims that chips are being stockpiled, which is typical when preparing to launch a product. It is expected to compete against AMD's rumored Arctic Islands architecture, which, according to its own (equally rumored) numbers, will be very similar to Pascal.

This architecture is a big one for several reasons.


Image Credit: WCCFTech

First, it will jump two full process nodes. Current desktop GPUs are manufactured at 28nm, which NVIDIA first shipped with the GeForce GTX 680 all the way back in early 2012, but Pascal will be manufactured on TSMC's 16nm FinFET+ technology. Smaller features have several advantages, but a huge one for GPUs is the ability to fit more complex circuitry in the same die area. This means that you can include more copies of elements, such as shader cores, and do more in fixed-function hardware, like video encode and decode.

That said, we got a lot more life out of 28nm than we really should have. Chips like GM200 and Fiji are huge, relatively power-hungry, and complex, which is exactly what you do not want to manufacture when yields are low. I asked Josh Walrath, who is our go-to for analysis of fab processes, and he believes that FinFET+ is probably even more complicated today than 28nm was in the 2012 timeframe, when it launched for GPUs.

It's two full steps forward from where we started, but we've been tiptoeing since then.


Image Credit: WCCFTech

Second, Pascal will introduce HBM 2.0 to NVIDIA hardware. HBM 1.0 was introduced with AMD's Radeon Fury X, and it helped in numerous ways -- from smaller card size to a triple-digit percentage increase in memory bandwidth. The 980 Ti can talk to its memory at roughly 336 GB/s, while Pascal is rumored to push that to 1TB/s. Capacity won't be sacrificed, either. The top-end card is expected to contain 16GB of global memory, which is twice what any console has. This means less streaming, higher resolution textures, and probably even left-over scratch space for the GPU to generate content with compute shaders. Also, according to AMD, HBM is an easier architecture to communicate with than GDDR, which should mean savings in die space that could be used for other things.
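Those bandwidth figures line up with a bit of back-of-the-envelope math, using the GTX 980 Ti's published memory configuration and the HBM2 specification (not any confirmed Pascal numbers):

\[
\text{GTX 980 Ti (GDDR5):}\quad \frac{384\ \text{bit} \times 7\ \text{Gbps}}{8\ \text{bit/byte}} \approx 336\ \text{GB/s}
\]
\[
\text{HBM2 (four stacks):}\quad \frac{4 \times 1024\ \text{bit} \times 2\ \text{Gbps}}{8\ \text{bit/byte}} = 1024\ \text{GB/s} \approx 1\ \text{TB/s}
\]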

Third, the architecture includes native support for three levels of floating point precision. Maxwell, due to how limited 28nm was, saved on complexity by reducing 64-bit (double-precision) IEEE 754 floating point throughput to 1/32nd of the 32-bit rate, because FP64 values are rarely used in video games. This saved transistors, but was a huge, order-of-magnitude step back from the 1/3rd ratio found on the Kepler-based GK110. While it probably won't be back to the 1/2 ratio that was found in Fermi, Pascal should be much better suited for GPU compute.
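To put those ratios in perspective, here is what they mean for a hypothetical GPU with 6 TFLOPS of single-precision throughput (the numbers are illustrative, not any specific product's specification):

\[
\text{FP64 rate} = r \times \text{FP32 rate}:\qquad
r = \tfrac{1}{3}\ \Rightarrow\ 2.0\ \text{TFLOPS},\qquad
r = \tfrac{1}{32}\ \Rightarrow\ \approx 0.19\ \text{TFLOPS}
\]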


Image Credit: WCCFTech

Mixed precision could help video games too, though. Remember how I said it supports three levels? The third is 16-bit, which is half the size of the 32-bit format commonly used in video games. Sometimes, that is sufficient. If so, Pascal is said to do these calculations at twice the rate of 32-bit. We'll need to see whether enough games (and other applications) are willing to drop down in precision to justify the die space that these dedicated circuits require, but it should double the performance of anything that does.
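Whether 16-bit is "sufficient" comes down to the limits of the IEEE 754 half-precision format itself, which are worth keeping in mind:

\[
\text{FP16: } 1\ \text{sign} + 5\ \text{exponent} + 10\ \text{mantissa bits},\quad
\text{max} \approx 65504,\quad \varepsilon = 2^{-10} \approx 10^{-3}
\]
\[
\text{FP32: } 1 + 8 + 23\ \text{bits},\quad
\text{max} \approx 3.4\times10^{38},\quad \varepsilon = 2^{-23} \approx 1.2\times10^{-7}
\]

Roughly three significant decimal digits is plenty for things like color blending or particle motion, but it breaks down for world-space positions or long chains of accumulation, which is why the drop in precision has to be chosen per calculation rather than globally.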

So basically, this generation should provide the massive jump in performance that enthusiasts have been waiting for. Memory bandwidth and the amount of hardware that can fit onto the die are two of the major limits for most modern games and GPU-accelerated software, and Pascal pushes both forward at once. We'll need to wait for benchmarks to see how the theoretical maps to the practical, but it's a good sign.

Google's Pixel C Is A Powerful Convertible Tablet Running Android 6.0

Subject: Mobile | October 2, 2015 - 04:09 PM |
Tagged: Tegra X1, tablet, pixel, nvidia, google, android 6.0, Android

During its latest keynote event, Google unveiled the Pixel C, a powerful tablet with optional keyboard that uses NVIDIA’s Tegra X1 SoC and runs the Android 6.0 “Marshmallow” operating system.

The Pixel C was designed by the team behind the Chromebook Pixel. The Pixel C features an anodized aluminum body that looks (and reportedly feels) smooth, with clean lines and rounded corners. The tablet itself is 7mm thick and weighs approximately one pound. The front of the Pixel C is dominated by a 10.2” display with a resolution of 2560 x 1800 (308 PPI, 500 nits brightness), a wide sRGB color gamut, and a 1:√2 aspect ratio (which Google likened to the size and aspect ratio of an A4 sheet of paper). A 2MP front camera sits above the display, four microphones sit along the bottom edge, and a single USB Type-C port and two stereo speakers sit on the sides of the tablet. Around back, there is an 8MP rear camera and a bar of LED lights that lights up to indicate the battery charge level when you double tap it.


The keyboard is an important part of the Pixel C, and Google has given it special attention to make it part of the package. The keyboard attaches to the tablet using self-aligning magnets that are powerful enough to keep the display attached while holding it upside down and shaking it (not that you'd want to do that, mind you). It can be attached to the bottom of the tablet for storage and used like a slate, or you can attach the tablet to the back of the keyboard and lift the built-in hinge to use the Pixel C in laptop mode (the hinge can hold the display anywhere from 100 to 135 degrees). The internal keyboard battery is good for two months of use and can simply be recharged by closing the Pixel C like a laptop and allowing it to inductively charge from the tablet portion. The keyboard is around 2mm thick and nearly full size, with an 18.85mm key pitch, and the chiclet keys have 1.4mm of travel, similar to the Chromebook Pixel. There is no trackpad, but it does offer a padded palm rest, which is nice to see.


Internally, the Pixel C is powered by the NVIDIA Tegra X1 SoC, 3GB of RAM, and 32GB or 64GB of storage (depending on model). The 20nm Tegra X1 consists of four ARM Cortex-A57 and four Cortex-A53 CPU cores paired with a 256-core Maxwell GPU. The Pixel C is a major design win for NVIDIA, and the built-in GPU should be great for gaming on the go.
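For a rough sense of what that GPU brings, NVIDIA rates the Tegra X1 at about 1 TFLOPS of FP16 throughput; assuming the chip's roughly 1 GHz peak GPU clock, that figure follows from the core count and the usual two operations per core per clock (doubled again for packed FP16, as discussed in the Pascal piece above):

\[
256\ \text{cores} \times 2\ \tfrac{\text{FLOP}}{\text{core}\cdot\text{clock}} \times \sim\!1\ \text{GHz} \approx 0.5\ \text{TFLOPS (FP32)}
\;\;\Rightarrow\;\; \approx 1\ \text{TFLOPS (FP16)}
\]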

The Pixel C will be available in December ("in time for the holidays") for $499 for the base 32 GB model, $599 for the 64 GB model, and $149 for the keyboard.

First impressions, such as this hands-on by Engadget, seem to be very positive, describing sturdy yet sleek hardware that is comfortable to type on. While the hardware looks more than up to the task, the choice of operating system is a concern for me. Android is not the most productivity- and multitasking-friendly software. There are some versions of Android that enable multiple windows or side-by-side apps, but it has always felt rather clunky and limited in its usefulness. With that said, Computerworld's JR Raphael seems hopeful. He points out that the Pixel C is, in Batman fashion, not the hardware Android wants, but the hardware that Android needs (to move forward), and that it is primed for a future version of Android that is friendlier to such productive endeavors. Development versions of Android 6.0 included support for multiple apps running simultaneously side-by-side, and while that feature will not make the initial production code cut, it does show that Google is looking into pursuing it and possibly enabling it at some point. The Pixel C's aspect ratio is well suited to such app splitting, since it can display four windows that each keep the same aspect ratio.

I am not sure how well received the Pixel C will be by business users who have several convertible tablet options running Windows and Chrome OS. It certainly gives the iPad-and-keyboard combination a run for its money and is a premium alternative to devices like the Asus Transformers.

What do you think about the Pixel C, and in particular, it running Android?

Even if I end up being less-than-productive using it, I think I'd still want the sleek-looking hardware as a second machine, heh.

Source: Google

Podcast #369 - Fable Legends DX12 Benchmark, Apple A9 SoC, Intel P3608 SSD, and more!

Subject: General Tech | October 1, 2015 - 02:17 PM |
Tagged: podcast, video, fable legends, dx12, apple, A9, TSMC, Samsung, 14nm, 16nm, Intel, P3608, NVMe, logitech, g410, TKL, nvidia, geforce now, qualcomm, snapdragon 820

PC Perspective Podcast #369 - 10/01/2015

Join us this week as we discuss the Fable Legends DX12 Benchmark, Apple A9 SoC, Intel P3608 SSD, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom, and Allyn Malventano

Program length: 1:42:35

  1. Week in Review:
  2. 0:54:10 This episode of PC Perspective is brought to you by…Zumper, the quick and easy way to find your next apartment or home rental. To get started and to find your new home go to
  3. News item of interest:
  4. Hardware/Software Picks of the Week:
  5. Closing/outro

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Subject: General Tech
Manufacturer: NVIDIA

Setup, Game Selection

Yesterday NVIDIA officially announced the new GeForce NOW streaming game service, the culmination of the years-long beta and development process known as NVIDIA GRID. As I detailed in my story yesterday about the reveal, GeForce NOW is a $7.99/mo. subscription service that will offer on-demand, cloud-streamed games to NVIDIA SHIELD devices, including a library of 60 games for that $7.99/mo. fee in addition to 7 titles in the “purchase and play” category. NVIDIA claims several advantages that make GeForce NOW a step above other streaming gaming services such as PlayStation Now and OnLive, including load times, resolution and frame rate, combined local PC and streaming game support, and more.


I have been able to use and play with the GeForce NOW service on our SHIELD Android TV device in the office for the last few days, and I thought I would quickly go over my initial thoughts and impressions up to this point.

Setup and Availability

If you have an NVIDIA SHIELD Android TV (or a SHIELD Tablet) then the setup and getting started process couldn’t be any simpler for new users. An OS update is pushed that changes the GRID application on your home screen to GeForce NOW and you can sign in using your existing Google account on your Android device, making payment and subscription simple to manage. Once inside the application you can easily browse through the included streaming games or look through the smaller list of purchasable games and buy them if you so choose.


Playing a game is as simple as selecting a title from the grid list and hitting play.

Game Selection

Let’s talk about that game selection first. For $7.99/mo. you get access to 60 titles for unlimited streaming. I have included a full list below, originally posted in our story yesterday, for reference.

Continue reading my initial thoughts and an early review of GeForce NOW!!

NVIDIA Publishes DirectX 12 Tips for Developers

Subject: Graphics Cards | September 26, 2015 - 09:10 PM |
Tagged: microsoft, windows 10, DirectX 12, dx12, nvidia

Programming with DirectX 12 (and Vulkan, and Mantle) is a much different process than most developers are used to. The biggest change is how work is submitted to the driver. Previously, engines would bind attributes to a graphics API and issue one of a handful of “draw” commands, which turned the current state of the API into a message. Drivers would play around with queuing and manipulating these messages to optimize how the orders are sent to the graphics device, but the game developer had no control over that.


Now, the new graphics APIs are built more like command lists. Instead of bind, call, bind, call, and so forth, applications request queues to dump work into, and assemble the messages themselves. The APIs even allow these messages to be bundled together and sent as a whole. This gives direct control over memory and the ability to distribute a lot of the command generation across multiple CPU cores. An application is only as fast as its slowest (relevant) thread, so the ability to spread work out increases actual performance.
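To make that concrete, here is a minimal C++ sketch of the D3D12 submission model. It assumes the device, command queue, pipeline state, and root signature were already created at startup, and it skips the render target setup, resource barriers, and fence synchronization a real frame needs; the point is simply that commands are recorded into a list (on any thread) and only reach the GPU when the application explicitly submits the batch.

#include <windows.h>
#include <d3d12.h>

// Hypothetical helper: record a tiny batch of work and hand it to the GPU queue.
// Error handling (HRESULT checks) is omitted for brevity.
void RecordAndSubmit(ID3D12Device* device,
                     ID3D12CommandQueue* commandQueue,
                     ID3D12PipelineState* pipelineState,
                     ID3D12RootSignature* rootSignature)
{
    // Each recording thread owns its own allocator and command list.
    ID3D12CommandAllocator* allocator = nullptr;
    ID3D12GraphicsCommandList* cmdList = nullptr;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(&allocator));
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator, pipelineState, IID_PPV_ARGS(&cmdList));

    // "Bind, call" still happens, but nothing reaches the GPU yet:
    // we are only assembling the message buffer ourselves.
    cmdList->SetGraphicsRootSignature(rootSignature);
    cmdList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    // A real frame would also set render targets, viewports, scissors,
    // and vertex buffers before issuing draws.
    cmdList->DrawInstanced(3, 1, 0, 0);   // one illustrative triangle

    // Seal the list so it can be submitted (or re-submitted) as a single unit.
    cmdList->Close();

    // The application, not the driver, decides when this batch hits the queue.
    ID3D12CommandList* lists[] = { cmdList };
    commandQueue->ExecuteCommandLists(1, lists);

    // In real code you would wait on a fence before releasing these objects.
    cmdList->Release();
    allocator->Release();
}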

NVIDIA has created a large list of things that developers should do, and others that they should not, to increase performance. Pretty much all of them apply equally, regardless of graphics vendor, but there are a few NVIDIA-specific comments, particularly the ones about NvAPI at the end and a few labeled notes in the “Root Signatures” category.

The tips are fairly diverse, covering everything from how to efficiently use things like command lists, to how to properly handle multiple GPUs, and even how to architect your engine itself. Even if you're not a developer, it might be interesting to look over for clues about what makes the API tick.

Source: NVIDIA

The Fable of the uncontroversial benchmark

Subject: Graphics Cards | September 24, 2015 - 02:53 PM |
Tagged: radeon, nvidia, lionhead, geforce, fable legends, fable, dx12, benchmark, amd

By now you should have memorized Ryan's review of Fable's DirectX 12 performance on a variety of cards and hopefully tried out our new interactive IFU charts.  You can't always cover every card, as those who were brave enough to look at the CSV file Ryan provided might have come to realize.  That's why it is worth peeking at The Tech Report's review after reading through ours.  They have included an MSI R9 285 and XFX R9 390 as well as an MSI GTX 970, which may be cards you are interested in seeing.  They also spend some time looking at CPU scaling and the effect that has on AMD and NVIDIA's performance.  Check it out here.


"Fable Legends is one of the first games to make use of DirectX 12, and it produces some truly sumptuous visuals. Here's a look at how Legends performs on the latest graphics cards."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Manufacturer: Lionhead Studios

Benchmark Overview

When Microsoft approached me a couple of weeks ago with the chance to take an early look at an upcoming performance benchmark built on a DX12 game pending release later this year, I was of course excited for the opportunity. Our adventure into the world of DirectX 12 and performance evaluation started with the 3DMark API Overhead Feature Test back in March and was followed by the release of the Ashes of the Singularity performance test in mid-August. Both of these tests pinpointed one particular aspect of the DX12 API: the ability to improve CPU throughput and efficiency with higher draw call counts, enabling higher frame rates on existing GPUs.


This game and benchmark are beautiful...

Today we dive into the world of Fable Legends, an upcoming free-to-play title based on the world of Albion. This title will be released on the Xbox One and for Windows 10 PCs, and it will require the use of DX12. Though the game is not scheduled for release until Q4 of this year, Microsoft and Lionhead Studios allowed us early access to a dedicated performance test built on the UE4 engine and the world of Fable Legends. UPDATE: It turns out that the game will have a fall-back DX11 mode that will be enabled if the game detects a GPU incapable of running DX12.

This benchmark focuses more on the GPU side of DirectX 12 - on improved rendering techniques and visual quality rather than on the CPU scaling aspects that made Ashes of the Singularity stand out from other graphics tests we have utilized. Fable Legends is more representative of what we expect to see with the release of AAA games using DX12. Let's dive into the test and our results!

Continue reading our look at the new Fable Legends DX12 Performance Test!!

Phoronix Looks at NVIDIA's Linux Driver Quality Settings

Subject: Graphics Cards | September 22, 2015 - 09:09 PM |
Tagged: nvidia, linux, graphics drivers

In the NVIDIA driver control panel, there is a slider that controls Performance vs. Quality. On Windows, I leave it set to “Let the 3D application decide” and change my 3D settings individually, as needed. I haven't used NVIDIA's control panel on Linux much, mostly because I usually install Linux on my laptop, which runs an AMD GPU, but the Linux UI seems to put a little more weight on this setting.


Or is that GTux?

Phoronix decided to test how each of these settings affects a few titles, and the only benchmark they bothered reporting is Team Fortress 2. It turns out that the other titles see basically zero variance. TF2 saw a difference of 6 FPS though, from 115 FPS at High Quality to 121 FPS at Quality, a swing of roughly 5%. Oddly enough, Performance and High Performance performed worse than Quality.

To me, this sounds like NVIDIA has basically forgotten about the feature. It barely affects any title, the one game it measurably changes is from 2007, and it contradicts what the company is doing on other platforms. I predict that Quality is the default, which is the same as Windows (albeit with only 3 choices: “Performance”, “Balanced”, and the default “Quality”). If it is, you probably should just leave it there 24/7 in case NVIDIA has literally not thought about tweaking the other settings. On Windows, it is kind of redundant with GeForce Experience, anyway.

Final note: Phoronix has only tested the GTX 980. Results may vary elsewhere, but probably don't.

Source: Phoronix
Manufacturer: NVIDIA

Pack a full GTX 980 on the go!

For many years, a truly mobile gaming system has been attainable if you were willing to pay the premium for high performance components. But anyone who has done research in this field would tell you that, though they were named similarly, the mobile GPUs from both AMD and NVIDIA had a tendency to be noticeably slower than their desktop counterparts. A GeForce GTX 970M, for example, only had a CUDA core count slightly higher than the desktop GTX 960, while the true desktop GTX 970 carried 30% more cores than the 970M. So even though mobile performance was fantastic, desktop users continued to hold a dominant position over mobile gamers in PC gaming.
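For context, that 30% gap falls straight out of the published CUDA core counts (and clock speed differences widen it further): the desktop GTX 960 has 1024 cores, the GTX 970M has 1280, and the desktop GTX 970 has 1664.

\[
\frac{1664 - 1280}{1280} = 0.30 \quad\Rightarrow\quad \text{30\% more cores in the desktop GTX 970 than in the GTX 970M}
\]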

This fall, NVIDIA is changing that with the introduction of the GeForce GTX 980 for gaming notebooks. Notice I did not put an 'M' at the end of that name; it's not an accident. NVIDIA has found a way, through binning and component design, to cram the entirety of a GM204-based Maxwell GTX 980 GPU inside portable gaming notebooks.


The results are impressive and the implications for PC gamers are dramatic. Systems built with the GTX 980 will include the same 2048 CUDA cores and 4GB of GDDR5 running at 7.0 GHz, and will run at the same base and typical GPU Boost clocks as the reference GTX 980 cards you can buy today for $499+. And, while you won't find this GPU in anything called a "thin and light", 17-19" gaming laptops do allow for a level of gaming portability that no SFF PC can match.
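Since the memory configuration carries over unchanged, so does the bandwidth; with the desktop GTX 980's 256-bit bus and 7.0 GHz (effective) GDDR5, that works out to:

\[
\frac{256\ \text{bit} \times 7\ \text{Gbps}}{8\ \text{bit/byte}} = 224\ \text{GB/s}
\]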

So how did they do it? NVIDIA has found a way to get a desktop GPU with a 165 watt TDP into a form factor that has a physical limit of 150 watts (for the MXM module implementations at least) through binning, component selection and improved cooling. Not only that, but there is enough headroom to allow for some desktop-class overclocking of the GTX 980 as well.

Continue reading our preview of the new GTX 980 for notebooks!!