One year later and the Nvidia Shield becomes a Tablet

Subject: General Tech | July 22, 2014 - 10:58 AM |
Tagged: twitch, tegra k1, tegra, shield tablet, shield controller, shield, nvidia, grid, gamestream

Shame on you if you skipped Ryan's review of the new Shield; for those who have read it and are looking for a second opinion, you can check out The Tech Report and the other links below the fold.  To quickly recap: the controller is now optional, but you can connect up to four simultaneously for group gaming, the built-in 8" IPS display runs at 1920x1200, and you can output video to an external monitor at 1080p.  The 192 shader processors on the Tegra K1 SoC inside should have no problem with fast-paced action at these resolutions, and at launch there are almost a dozen games optimized for the K1.  The focus on gaming performance is obvious, but the inclusion of DirectStylus 2 for those who want to use the tablet for creating art adds an interesting extra feature, especially if it works with NVIDIA's ShadowPlay streaming technology, as live broadcasts of artists drawing have become quite popular in some crowds.  It will be very interesting to see this tablet compete against consoles and the soon-to-arrive Steamboxes.

tablet-controller2.jpg

"Just under a year since the release of the Shield Portable, Nvidia has announced a second member of the Shield family. As expected, it's the Shield Tablet, an Android slate with an emphasis on gaming. Like the Shield Portable before it, the Shield Tablet will sell direct from Nvidia, not from a partner company. The Shield Tablet extends Nvidia's Android gaming focus to a new form factor, making it one of the first tablets anywhere with a fairly pure gaming mission."

Here is some more Tech News from around the web:

Tech Talk

Subject: Mobile
Manufacturer: NVIDIA

SHIELD Tablet with new Features

It's odd how regularly these events seem to come. Almost exactly one year ago today, NVIDIA launched the original SHIELD gaming device, a portable Android device with a touchscreen attached to a controller, all powered by the Tegra 4 SoC. It was a completely unique device that combined a 5-inch touchscreen with a console-grade controller to build the best Android gaming machine you could buy. NVIDIA did its best to promote Android gaming as a secondary market to consoles and PCs, and frequent software updates kept the SHIELD nearly up to date with the latest Android releases.

As we approach the one year anniversary of SHIELD, NVIDIA is preparing to release another product to add to the SHIELD family of products: the SHIELD Tablet. Chances are, you could guess what this device is already. It is a tablet powered by Tegra K1 and updated to support all SHIELD software. Of course, there are some new twists as well.

03.jpg

The NVIDIA SHIELD Tablet is targeted, as the slide above states, at being "the ultimate tablet for gamers." This is a fairly important point to keep in mind as we walk through the details of the SHIELD Tablet and its accessories, as there are certain areas where NVIDIA's latest product won't quite appeal to general purpose tablet users.

Most obviously, this new SHIELD device is a tablet (and only a tablet). There is no permanently attached controller. Instead, the SHIELD controller will be an add-on accessory for buyers. NVIDIA has put a lot of processing power into the tablet, as well as some incredibly interesting new software capabilities to enable 10-foot (living room) use cases and even mobile Twitch streaming.

Continue reading our preview of the NVIDIA SHIELD Tablet and Controller powered by Tegra K1!!

Subject: Mobile
Manufacturer: Xiaomi

The First with the Tegra K1 Processor

Back in May, a Chinese company announced what was then the first and only product based on NVIDIA’s Tegra K1 SoC: the Xiaomi Mi Pad 7.9. Since then we have had a couple of other products hit our news wire, including Google’s own Project Tango development tablet. But the Xiaomi is the first to actually be released, selling through 50,000 units in four minutes according to some reports. I happened to find one on Aliexpress.com, a Chinese online marketplace, and after a few short days the DHL deliveryman dropped the Tegra K1-powered machine off at my door.

If you are like me, the Xiaomi name was a new one. Xiaomi is a privately owned company from Beijing that has become one of China’s largest electronics companies since jumping into the smartphone market in 2011. The Mi Pad marks the company’s first attempt at a tablet, and the partnership with NVIDIA to be an early adopter of the Tegra K1 seems to be making waves.

02.jpg

The Tegra K1 Processor

The Tegra K1 SoC was first revealed at CES in January of 2014, and with it came a heavy burden of expectation from NVIDIA directly, as well as from investors and the media. The first SoC in the Tegra family to have a GPU built from the ground up by NVIDIA engineers, the Tegra K1 gets its name from the Kepler family of GPUs. It gets the base of its architecture there as well.

The CPU complex of the Tegra K1 looks very familiar: four ARM Cortex-A15 “r3” cores and 2MB of L2 cache, with a fifth A15 core used for lower-power situations.  This 4+1 design is the same one introduced with the Tegra 4 processor last year and allows NVIDIA to implement its own unique take on ARM’s “big.LITTLE” approach.  Some slight modifications to the cores improve performance and efficiency, but not by much – the main CPU is very similar to that of the Tegra 4.

The focus of the Tegra K1 is on the GPU, now powered by NVIDIA’s Kepler architecture.  The K1 features 192 CUDA cores in a design very similar to a single SMX on today’s GeForce GTX 700-series graphics cards.  This brings OpenGL ES 3.0 support and, much more importantly, OpenGL 4.4 and DirectX 11 integration.  The ambition of bringing modern, quality PC gaming to mobile devices is closer than you ever thought possible with this product, and the demos I have seen running on reference designs are enough to leave your jaw on the floor.
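For a rough sense of what a single Kepler SMX in a mobile SoC means in raw numbers, here is a back-of-the-envelope sketch; the per-core FLOP count and the ~950 MHz peak clock are my assumptions for illustration, not figures from this article.

```python
# Rough peak-throughput estimate for the Tegra K1 GPU (assumptions noted below).
CUDA_CORES = 192               # one Kepler SMX
FLOPS_PER_CORE_PER_CLOCK = 2   # assumes one fused multiply-add per core per clock
GPU_CLOCK_GHZ = 0.95           # assumed peak GPU clock, not confirmed here

peak_gflops = CUDA_CORES * FLOPS_PER_CORE_PER_CLOCK * GPU_CLOCK_GHZ
print(f"Tegra K1 peak single-precision: ~{peak_gflops:.0f} GFLOPS")  # ~365 GFLOPS
```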

03.jpg

By far the most impressive part of Tegra K1 is the implementation of a full Kepler SMX onto a chip that will be running well under 2 watts.  While it has been the plan from NVIDIA to merge the primary GPU architectures between mobile and discrete, this choice did not come without some risk.  When the company was building the first Tegra part it basically had to make a hedge on where the world of mobile technology would be in 2015.  NVIDIA might have continued to evolve and change the initial GPU IP that was used in Tegra 1, adding feature support and increasing the required die area to improve overall GPU performance, but instead they opted to position a “merge point” with Kepler in 2014.  The team at NVIDIA saw that they were within reach of the discontinuity point we are seeing today with Tegra K1, but in truth they had to suffer through the first iterations of Tegra GPU designs that they knew were inferior to the design coming with Kepler.

You can read much more on the technical detail of the Tegra K1 SoC by heading over to our launch article that goes into the updated CPU design, as well as giving you all the gore behind the Kepler integration.

By far the most interesting aspect of the Xiaomi Mi Pad 7.9 tablet is the decision to integrate the Tegra K1 processor. Performance and battery life comparisons with other 7-to-8-inch tablets will likely not impact how it sells in China, but the results may mean the world to NVIDIA as the company works to convince other vendors to integrate the SoC.

Continue reading our review of the Xiaomi Mi Pad 7.9 tablet powered by Tegra K1!!

NVIDIA Counts Down to... Something "Ultimate".

Subject: General Tech, Mobile | July 20, 2014 - 12:41 AM |
Tagged: shield tablet, shield 2, shield, nvidia

In Europe, NVIDIA set up a simple, official page. Its title is "THE ULTIMATE IS COMING | NVIDIA". On it are the words, "The Ultimate Is Coming" as well as a countdown to 9 AM EDT on Tuesday, July 22nd. At the same time, North Americans get "Ultimate Quest", a text adventure game which ends on -- surprise -- Tuesday with a giveaway of "something big" for the first players who finish.

Allegedly leaked slide - Not Official!

NVIDIA-SHIELD-Tablet-2.jpg

All images credit: Videocardz (and they have more).

What is it? It is very likely the rumored SHIELD tablet, especially considering Videocardz has convincing slides which are definitely made in NVIDIA's style. What made SHIELD so unique was its controller form factor (and NVIDIA's software support). According to the slides, the controller will now be a wireless accessory to the base tablet. What will come standard is a stylus, the "DirectStylus 2 with 3D Paint". This seems like an odd addition, unless they have already planned use cases.

Allegedly leaked slide - Not Official!

NVIDIA-SHIELD-Tablet-5.jpg

As for the tablet? The slides claim Tegra K1, 2GB of RAM, 1920x1200 display, 5MP (HDR) front and rear cameras, and a MicroSD card slot. Previously, leaks suggested a 640x480 front-facing camera, which did not make sense to me. With the original SHIELD lacking a camera, it seemed very odd to relaunch with a bad one. 5MP, especially if it is a good sensor, is much more reasonable (especially for the front-facing one).

Allegedly leaked slide - Not Official!

NVIDIA-SHIELD-Tablet-4.jpg

The leaks also suggest interesting price points, especially for such a powerful tablet. These details can change at a moment's notice, though, so I won't dwell on them (apart from embedding the slide below). They do seem to be targeting the end of the month for North America, or mid-August for Europe, which is a very quick launch. Then again, it is ready enough to be a prize for a contest which ends on Tuesday.

Allegedly leaked slide - Not Official!

NVIDIA-SHIELD-Tablet-9.jpg

I expect we will see how much of this, if anything, holds up on Tuesday.

Source: NVIDIA

NVIDIA Preparing GeForce 800M (Laptop) Maxwell GPUs?

Subject: General Tech, Graphics Cards, Mobile | July 19, 2014 - 12:29 AM |
Tagged: nvidia, geforce, maxwell, mobile gpu, mobile graphics

Apparently, some hardware sites got their hands on an NVIDIA driver listing with several new product codes. They claim thirteen N16(P/E) chips are listed (although I count twelve (??)). While I do not have much knowledge of NVIDIA's internal product structure, the GeForce GTX 880M, based on Kepler, is apparently listed as N15E.

nvidiamaxwellroadmap.jpg

Things have changed a lot since this presentation.

These new parts will allegedly be based on the second-generation Maxwell architecture. Also, the source believes that these new GPUs will be in the GeForce GTX 800-series, possibly with the MX suffix that was last seen in October 2012 with the GeForce GTX 680MX. Of course, being a long-time PC gamer, the MX suffix does not exactly ring positive with my memory. It used to be the Ti line that you wanted and the MX line that you could afford. But who am I kidding? None of that is relevant these days. Get off my lawn.

Source: Videocardz

Podcast #308 - Intel and Mantle, XSPC Watercooling Kits, Quantum Dots, and more!

Subject: General Tech | July 10, 2014 - 10:17 AM |
Tagged: podcast, video, Intel, Mantle, amd, nvidia, XSPC, quantum dots, western digital, My Cloud Mirror, A10-7850K, Kaveri, arm, quakecon

PC Perspective Podcast #308 - 07/10/2014

Join us this week as we discuss Intel using Mantle, XSPC Watercooling Kits, Quantum Dots, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom, Allyn Malventano, and Morry Teitelman

Program length: 1:25:47

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

 

Fully Enabling the A10-7850K while Utilizing a Standalone GPU

Subject: Processors | July 9, 2014 - 02:42 PM |
Tagged: nvidia, msi, Luxmark, Lightning, hsa, GTX 580, GCN, APU, amd, A88X, A10-7850K

When I first read many of the initial AMD A10-7850K reviews, my primary question was how the APU would act if a different GPU was installed in the system and the CrossFire X functionality that AMD talked about was not utilized.  Typically, when a user installs a standalone graphics card on the AMD FM2/FM2+ platform, they disable the graphics portion of the APU.  They also have to uninstall the AMD Catalyst driver suite.  This leaves the APU as a CPU only, and all of that graphics silicon is left silent and dark.

apu_first.jpg

Who in their right mind would pair a high end graphics card with the A10-7850K? This guy!

Does this need to be the case?  Absolutely not!  The GCN-based graphics unit on the latest Kaveri APUs is pretty powerful when used in GPGPU/OpenCL applications.  The 4 cores/2 modules of CPU and 8 GCN compute units can push out around 856 GFLOPS when fully utilized.  We also must consider that the APU is the first fully compliant HSA (Heterogeneous System Architecture) chip, and it handles memory accesses much more efficiently than standalone GPUs.  The shared memory space with the CPU gets rid of a lot of the workarounds typically needed for GPGPU-type applications.  It makes sense that users would want to leverage the performance potential of a fully functioning APU while upgrading their overall graphics performance with a higher end standalone GPU.
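To show where a number like 856 GFLOPS comes from, here is a quick reconstruction of the arithmetic; the clocks and per-unit FLOP rates below are my own assumptions following AMD's usual marketing math, not figures taken from this article.

```python
# Reconstructing the ~856 GFLOPS figure for the A10-7850K (assumed clocks and FLOP rates).
gpu_shaders   = 512    # 8 GCN compute units x 64 shaders each (assumption)
gpu_clock_ghz = 0.72   # assumed GPU clock
gpu_gflops = gpu_shaders * 2 * gpu_clock_ghz   # 2 FLOPs per shader per clock (FMA)

cpu_cores     = 4
cpu_clock_ghz = 3.7    # assumed CPU clock
cpu_gflops = cpu_cores * 8 * cpu_clock_ghz     # assumes 8 single-precision FLOPs per core per clock

print(f"GPU ~{gpu_gflops:.0f} GFLOPS + CPU ~{cpu_gflops:.0f} GFLOPS "
      f"= ~{gpu_gflops + cpu_gflops:.0f} GFLOPS")   # -> ~737 + ~118 = ~856
```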

Getting this to work is very simple.  Assuming that the user has been using the APU as their primary graphics controller, they should update to the latest Catalyst drivers.  If the user is going to use an AMD card, then it would behoove them to totally uninstall the Catalyst driver and re-install it only after the new card is in place.  After this is completed, restart the machine, go into the UEFI, and change the primary video boot device from the integrated unit to PEG (PCI-Express Graphics).  Save the setting and shut down the machine.  Insert the new video card and attach the monitor cable(s) to it.  Boot the machine and either re-install the Catalyst suite if an AMD card is used, or install the latest NVIDIA drivers if that is the graphics choice.

Windows 7 and Windows 8 allow users to install multiple graphics drivers from different vendors.  In my case I utilized a last generation GTX 580 (the MSI N580GTX Lightning) along with the AMD A10 7850K.  These products coexist happily together on the MSI A88X-G45 Gaming motherboard.  The monitor is attached to the NVIDIA card and all games are routed through that since it is the primary graphics adapter.  Performance seems unaffected with both drivers active.

luxmark_setup.PNG

I find it interesting that the GPU portion of the APU is named "Spectre".  Who owns those 3dfx trademarks anymore?

When I load up Luxmark I see three entries: the APU (CPU and GPU portions), the GPU portion of the APU, and then the GTX 580.  Luxmark defaults to the GPUs.  We see these GPUs listed as “Spectre”, which is the GCN portion of the APU, and the NVIDIA GTX 580.  Spectre supports OpenCL 1.2 while the GTX 580 is an OpenCL 1.1 compliant part.
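For the curious, the same device list Luxmark shows can be enumerated with a few lines of OpenCL. This is just a sketch, assuming the pyopencl package and both vendors' OpenCL drivers are installed; it simply prints every platform and device the drivers expose (e.g. "Spectre" and the GTX 580).

```python
# List every OpenCL platform/device exposed by the installed drivers.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(f"{platform.name}: {device.name} "
              f"[{cl.device_type.to_string(device.type)}] - {device.version}")
```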

With both GPUs active I can successfully run the Luxmark “Sala” test.  The two units perform better together than when they are run separately.  Adding in the CPU does increase the score, but not by very much (my guess here is that the APU is going to be very memory bandwidth bound in such a situation).  Below we can see the results of the different units separate and together.

luxmark_results_02.png

These results make me hopeful about the potential of AMD’s latest APU.  It can run side by side with a standalone card, and applications can leverage the performance of this unit.  Now all we need is more HSA aware software.  More time and more testing is needed for setups such as this, and we need to see if HSA enabled software really does see a boost from using the GPU portion of the APU as compared to a pure CPU piece of software or code that will run on the standalone GPU.

Personally, I find the idea of a heterogeneous solution such as this appealing.  The standalone graphics card handles the actual graphics work, the CPU handles the code best suited to it, and the HSA software can then fully utilize the graphics portion of the APU in a very efficient manner.  Unfortunately, we do not have hard numbers on the handful of HSA-aware applications out there, especially when used in conjunction with standalone graphics.  We know in theory that this can work (and should work), but until developers get out there and really optimize their code for such a solution, we simply do not know if having an APU will really net the user big gains as compared to something like the i7-4770 or 4790 running pure x86 code.

full_APU_GPU.PNG

In the meantime, at least we know that these products work together without issue.  The mixed mode OpenCL results make a nice case for improving overall performance in such a system.  I would imagine with more time and more effort from developers, we could see some really interesting implementations that will fully utilize a system such as this one.  Until then, happy experimenting!

Source: AMD

Podcast #306 - Budget PC Shootout, the Coolermaster Elite 110, AMD GameWorks competitor

Subject: General Tech | June 26, 2014 - 11:36 AM |
Tagged: xeon, video, seiki, podcast, nvidia, msi, Intel, HDMI 2.0, gt70 2pe, gt70, gameworks, FX-9590, displayport 1.3, coolermaster, amd, 4k

PC Perspective Podcast #306 - 06/26/2014

Join us this week as we discuss our Budget PC Shootout, the Coolermaster Elite 110, an AMD GameWorks competitor and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom, and Allyn Malventano

Program length: 1:19:12

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

 

 

Subject: Mobile
Manufacturer: MSI

Introduction and Design

P4264361.jpg

It was only last year that we were singing the praises of the GT60, which was one of the fastest notebooks we’d seen to date. Its larger cousin, the GT70, features a 17.3” screen (versus the GT60’s 15.6”), faster CPUs and GPUs, and even better options for storage. Now, the latest iteration of this force to be reckoned with has arrived on our desks, and while its appearance hasn’t changed much, its performance is better than ever.

While we’ll naturally be spending a good deal of time discussing performance and stability in our article here, we won’t be dedicating much to casing and general design, as—for the most part—it is very similar to that of the GT60. On the other hand, one area on which we’ll be focusing particularly heavily is battery life, thanks solely to the presence of NVIDIA’s new Battery Boost technology. As the name suggests, this new feature employs power conservation techniques to extend the notebook’s battery life while gaming unplugged. This is accomplished primarily via frame rate limiting, a feature that has actually been available since the introduction of Kepler, but which until now has been buried within the advanced options available for such products. Battery Boost basically brings this to the forefront and makes it both accessible and default.
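To illustrate the general idea behind frame rate limiting (this is only a toy sketch of the concept, not NVIDIA's implementation), a capped render loop simply sleeps away whatever is left of each frame's time budget so the GPU can drop to a lower power state between frames:

```python
# Toy frame-rate limiter: cap the loop at a target FPS and idle out the remainder.
import time

TARGET_FPS = 30
FRAME_BUDGET = 1.0 / TARGET_FPS

def render_frame():
    pass  # stand-in for the real game/render workload

for _ in range(300):                        # ~10 seconds of frames at 30 FPS
    start = time.perf_counter()
    render_frame()
    elapsed = time.perf_counter() - start
    if elapsed < FRAME_BUDGET:
        time.sleep(FRAME_BUDGET - elapsed)  # idle time = lower GPU power draw
```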

Let’s take a look at what this bad boy is packing:

specs.png

Not much commentary needed here; this table reads like a who’s who of computer specifications. Of particular note are the 32 GB of RAM, the 880M (of course), and the 384 GB SSD RAID array (!!). Elsewhere, it’s mostly business as usual for the ultra-high-end MSI GT notebooks, with a slightly faster CPU than the previous model we reviewed (the i7-4700MQ). One thing is guaranteed: it’s a fast machine.

P4264350.jpg

Continue reading our review of the MSI GT70 2PE Gaming Notebook!!

GeForce GTX Titan Z Overclocking Testing

Subject: Graphics Cards | June 12, 2014 - 03:17 PM |
Tagged: overclocking, nvidia, gtx titan z, geforce

Earlier this week I posted a review of the NVIDIA GeForce GTX Titan Z graphics card, a dual-GPU Kepler GK110 part that currently sells for $3000. If you missed that article you should read it first and catch up, but the basic summary was that, for PC gamers, it's slower than, and twice the price of, AMD's Radeon R9 295X2.

In that article, though, I mentioned that the Titan Z had more variable clock speeds than any other GeForce card I had tested. At the time I didn't go any further than that, since the performance of the card already pointed out the deficit it had going up against the R9 295X2. However, several readers asked me to dive into overclocking with the Titan Z, and with that came the need to show clock speed changes.

My overclocking was done through EVGA's PrecisionX software and we measured clock speeds with GPU-Z. The first step in overclocking an NVIDIA GPU is to simply move up the Power Target sliders and see what happens. This tells the card that it is allowed to consume more power than it would normally be allowed to, and then thanks to GPU Boost technology, the clock speed should scale up naturally. 
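We logged our numbers with GPU-Z, but for readers who want to reproduce this kind of clock-over-time data on their own card, something like the following sketch works wherever the NVIDIA driver's nvidia-smi utility is available (an assumption on my part, and not the tool used for the graphs below):

```python
# Poll the current graphics clock once per second via nvidia-smi.
import subprocess
import time

def gpu_clocks_mhz():
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=clocks.current.graphics",
        "--format=csv,noheader,nounits",
    ])
    return [int(v) for v in out.decode().split()]  # one value per GPU (two on a Titan Z)

for second in range(1500):   # roughly 25 minutes of samples
    print(second, gpu_clocks_mhz())
    time.sleep(1)
```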

titanzoc.jpg

Click to Enlarge

And that is exactly what happened. I ran through 30 minutes of looped testing with Metro: Last Light at stock settings, with the Power Target at 112%, with the Power Target at 120% (the maximum setting) and then again with the Power Target at 120% and the GPU clock offset set to +75 MHz. 

That 75 MHz offset was the highest setting we could get to run stable on the Titan Z, which brings the base clock up to 781 MHz and the boost clock to 951 MHz. Though, as you'll see in our frequency graphs below, the card was still reaching well above that.

clockspeedtitanz.png

Click to Enlarge

This graph shows clock rates of the GK110 GPUs on the Titan Z over the course of 25 minutes of looped Metro: Last Light gaming. The green line is the stock performance of the card without any changes to the power settings or clock speeds. While it starts out well enough, hitting clock rates of around 1000 MHz, it quickly dives and by 300 seconds of gaming we are often going at or under the 800 MHz mark. That pattern is consistent throughout the entire tested time and we have an average clock speed of 894 MHz.

Next up is the blue line, generated by simply moving the power target from 100% to 112%, giving the GPUs a little more thermal headroom to play with. The results are impressive, with a much more consistent clock speed. The yellow line, for the power target at 120%, is even better with a tighter band of clock rates and with a higher average clock. 

Finally, the red line represents the 120% power target with a +75 MHz offset in PrecisionX. There we see a clock speed consistency matching the yellow line but offset up a bit, as we have been taught to expect with NVIDIA's recent GPUs. 

clockspeedtitan-avg.png

Click to Enlarge

The result of all this data comes together in the bar graph here that lists the average clock rates over the entire 25 minute test runs. At stock settings, the Titan Z was able to hit 894 MHz, just over the "typical" boost clock advertised by NVIDIA of 876 MHz. That's good news for NVIDIA! Even though there is a lot more clock speed variance than I would like to see with the Titan Z, the clock speeds are within the expectations set by NVIDIA out of the gate.

Bumping up that power target though will help out gamers that do invest in the Titan Z quite a bit. Just going to 112% results in an average clock speed of 993 MHz, a 100 MHz jump worth about 11% overall. When we push that power target up even further, and overclock the frequency offset a bit, we actually get an average clock rate of 1074 MHz, 20% faster than the stock settings. This does mean that our Titan Z is pulling more power and generating more noise (quite a bit more actually) with fan speeds going from around 2000 to 2700 RPM.
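The percentage gains quoted above fall straight out of the average clocks; here is a quick check using the averages from our runs:

```python
# Percentage gains over the 894 MHz stock average clock.
stock_mhz      = 894
avg_112_pct    = 993     # 112% power target
avg_120_offset = 1074    # 120% power target + 75 MHz offset

print(f"112% power target: +{(avg_112_pct / stock_mhz - 1) * 100:.0f}%")      # ~11%
print(f"120% + offset:     +{(avg_120_offset / stock_mhz - 1) * 100:.0f}%")   # ~20%
```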

MetroLL_2560x1440_OFPS.png

MetroLL_2560x1440_PER.png

MetroLL_3840x2160_OFPS.png

MetroLL_3840x2160_PER.png

At both 2560x1440 and 3840x2160, in the Metro: Last Light benchmark we ran, the added performance of the Titan Z does put it at the same level as the Radeon R9 295X2. Of course, it goes without saying that we could also overclock the 295X2 a bit further to improve ITS performance, but this is an exercise in education.

IMG_0270.JPG

Does it change my stance or recommendation for the Titan Z? Not really; I still think it is overpriced compared to the performance you get from AMD's offerings and from NVIDIA's own lower priced GTX cards. However, it does lead me to believe that the Titan Z could have been fixed, and could have offered performance at least on par with the R9 295X2, had NVIDIA been willing to break PCIe power specs and increase noise.

UPDATE (6/13/14): Some of our readers seem to be pretty confused about things so I felt the need to post an update to the main story here. One commenter below mentioned that I was one of "many reviewers that pounded the R290X for the 'throttling issue' on reference coolers" and thinks I am going easy on NVIDIA with this story. However, there is one major difference that he seems to overlook: the NVIDIA results here are well within the rated specs. 

When I published one of our stories looking at clock speed variance of the Hawaii GPU in the form of the R9 290X and R9 290, our results showed that the clock speeds of these cards were dropping well below the rated clock speed of 1000 MHz. Instead I saw clock speeds that reached as low as 747 MHz and stayed near the 800 MHz mark. The problem with that was in how AMD advertised and sold the cards, using only the phrase "up to 1.0 GHz" in its marketing. I recommended that AMD begin selling the cards with a rated base clock and a typical boost clock instead of labeling them only with the, at the time, totally incomplete "up to" rating. In fact, here is the exact quote from that story: "AMD needs to define a 'base' clock and a 'typical' clock that users can expect." Ta da.

The GeForce GTX Titan Z, though, as we look at the results above, is rated and advertised with a base clock of 705 MHz and a boost clock of 876 MHz. The clock speed comparison graph at the top of the story shows the green line (the card at stock) never dropping to that 705 MHz base clock while averaging 894 MHz. That average is ABOVE the rated boost clock of the card. So even though the GPU is changing between frequencies more often than I would like, the clock speeds are within the bounds set by NVIDIA. That was clearly NOT THE CASE when AMD launched the R9 290X and R9 290. If NVIDIA had sold the Titan Z with only a specification of "up to 1006 MHz" or something like that, then the same complaint would be made. But it is not sold that way.

The card isn't "throttling" at all, in fact, as someone specifies below. That term insinuates that it is going below a rated performance rating. It is acting in accordance with the GPU Boost technology that NVIDIA designed.

Some users seem concerned about temperature: the Titan Z will hit 80-83C in my testing, both stock and overclocked, and simply scales the fan speed to compensate accordingly. Yes, overclocked, the Titan Z gets quite a bit louder but I don't have sound level tests to show that. It's louder than the R9 295X2 for sure but definitely not as loud as the R9 290 in its original, reference state.

Finally, some of you seem concerned that I was restricted by NVIDIA on what we could test and talk about on the Titan Z. Surprise, surprise, NVIDIA didn't send us this card to test at all! In fact, they were kind of miffed when I did the whole review and didn't get into showing CUDA benchmarks. So, there's that.