Manufacturer: ASUS

Experience with Silent Design

In the lulls between major GPU releases, companies like ASUS have the time to really dig down and engineer truly unique products. As the gaps between new GPU launches from NVIDIA and AMD have grown, these custom designs have continued evolving to offer better features and experiences than any graphics card before them. The ASUS Strix GTX 780 is exactly one of those solutions – taking a GTX 780 GPU that was originally released in May of last year and reworking it into a new design that offers better cooling, better power delivery and lower noise levels.

ASUS intended, with the Strix GTX 780, to create a card that is perfect for high-end PC gamers without crossing into the realm of bank-breaking prices. The company chose NVIDIA's GeForce GTX 780 GPU, which sells at a significant discount to the GTX 780 Ti with only a modest performance deficit. It doubled the reference memory capacity from 3GB to 6GB of GDDR5, to assuage any buyer’s worry that 3GB isn’t enough for multi-screen Surround gaming or 4K gaming. And it changed the cooling solution to offer a near-silent operation mode when used in “low impact” gaming titles.

The ASUS Strix GTX 780 Graphics Card

The ASUS Strix GTX 780 card is a pretty large beast, both in physical size and in performance. The cooler is a slightly modified version of the very popular DirectCU II thermal design used in many of the custom-built ASUS graphics cards. It has a heat dissipation area more than twice that of the reference NVIDIA cooler and uses larger fans that can spin slower (and quieter) while still delivering improved cooling capacity.


Out of the box, the ASUS Strix GTX 780 runs at an 889 MHz base clock and a 941 MHz Boost clock, a fairly modest increase over the 863/900 MHz rates of the reference card. Obviously, with much better cooling and a lot of work done on the PCB of this custom design, users will have plenty of headroom to overclock on their own, but I continue to implore companies like ASUS and MSI to up the ante out of the box! One area where ASUS does impress is the memory – the Strix card features a full 6GB of GDDR5 running at 6.0 GHz, twice the capacity of the reference GTX 780 (and even GTX 780 Ti) cards. If you had any concerns about Surround or 4K gaming, know that memory capacity will not be a problem. (Though raw compute power may still be.)
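
For context on what that memory clock means for throughput: assuming the Strix keeps the reference GTX 780's 384-bit memory bus (a reasonable assumption, though not stated here), the peak bandwidth works out as follows.

```latex
\text{Bandwidth} = 6.0\,\text{Gbps} \times \frac{384\,\text{bit}}{8\,\text{bit/byte}} = 288\,\text{GB/s}
```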

Continue reading our review of the ASUS Strix GTX 780 6GB Graphics Card!!

Subject: Mobile
Manufacturer: NVIDIA

A Tablet and Controller Worth Using

An interesting thing happened a couple of weeks back while I was standing on stage at our annual PC Perspective Hardware Workshop during QuakeCon in Dallas, TX. When NVIDIA offered up a SHIELD (now called the SHIELD Portable) for raffle, the audience cheered. And not just a little bit, but more than they did for nearly any other hardware offered up during the show, including motherboards, graphics cards, monitors, even complete systems. It took me aback a bit - NVIDIA SHIELD was a popular brand, a name that was recognized and, apparently, a product that people wanted to own. You might not have guessed that based on the sales numbers SHIELD has put forward, though. Even though it appeared to have significant mind share, market share was lacking.

Today, though, NVIDIA is preparing the second product in the SHIELD lineup, the SHIELD Tablet, a device the company hopes will improve on the idea of SHIELD and encourage more users to sign on. It's a tablet (not a tablet with a controller attached), it has a more powerful SoC that can utilize different APIs for unique games, it can be more easily used in a 10-foot console mode, and SHIELD-specific features like GameStream are included and enhanced.

The question, of course, is easy to put forward: should you buy one? Let's explore.

The NVIDIA SHIELD Tablet

At first glance, the NVIDIA SHIELD Tablet looks like just another tablet. That actually isn't a negative, though, as the SHIELD Tablet can and does act like a high-end tablet in nearly every way: performance, function, looks. We originally went over the entirety of the tablet's specifications in our first preview last week, but much of it bears repeating for this review.


The SHIELD Tablet is built around the NVIDIA Tegra K1 SoC, the first mobile silicon to implement the Kepler graphics architecture. That feature alone makes this tablet impressive, because it offers graphics performance not seen in a form factor like this before. CPU performance is also improved over the Tegra 4 processor, but the graphics portion of the die easily sees the largest performance jump.


A 1920x1200 resolution 8-in IPS screen faces the user and brings the option of full 1080p content that was lacking with the first SHIELD Portable. The screen is bright and crisp, easily viewable in bright lighting for gaming or use in lots of environments. Though the 7.9-in Xiaomi Mi Pad has a higher 2048x1536 resolution screen, the form factor of the SHIELD Tablet is much more in line with what NVIDIA built with the Tegra Note 7.

Continue reading our review of the NVIDIA SHIELD Tablet and Controller!!

Manufacturer: NVIDIA

A powerful architecture

In March of this year, NVIDIA announced the GeForce GTX Titan Z at its GPU Technology Conference. It was touted as the world's fastest graphics card, with its pair of full GK110 GPUs, but it came with an equally stunning price of $2999. NVIDIA claimed it would be available by the end of April for gamers and CUDA developers to purchase, but it was pushed back slightly and released at the very end of May, going on sale for the promised $2999.

The specifications of the GTX Titan Z are damned impressive: 5,760 CUDA cores, 12GB of total graphics memory, and 8.1 TFLOPS of peak compute performance. But something happened between the announcement and the product release that perhaps NVIDIA hadn't accounted for. AMD's Radeon R9 295X2, a dual-GPU card with full-speed Hawaii chips on board, was released at $1499. I think it's fair to say that AMD took some chances that NVIDIA was surprised to see it take, including going the route of a self-contained water cooler and blowing past the PCI Express recommended power limits to offer a ~500 watt graphics card. The R9 295X2 was very fast, and I think it caught NVIDIA a bit off guard.
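
That 8.1 TFLOPS figure is the standard peak-throughput arithmetic: each CUDA core can retire one fused multiply-add (two floating point operations) per clock. Working backwards, it implies a base clock in the neighborhood of 705 MHz across both GPUs:

```latex
5760\,\text{cores} \times 2\,\tfrac{\text{FLOP}}{\text{clock}} \times 0.705\,\text{GHz} \approx 8.1\,\text{TFLOPS}
```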

As a result, the GeForce GTX Titan Z release was a bit quieter than most of us expected. Yes, the Titan Black had been released without sampling the gaming media, but that card was nearly a mirror of the GeForce GTX 780 Ti, just with a larger frame buffer, and the performance of that GPU was well known. For NVIDIA to release a flagship dual-GPU graphics card, admittedly the most expensive one I have ever seen with the GeForce brand on it, and NOT send out samples, was telling.

NVIDIA is adamant, though, that the primary target of the Titan Z is not just gamers but the CUDA developer who needs the most performance possible in as small a space as possible. For that specific user, one who doesn't quite have the budget to invest in a lot of Tesla hardware but wants to be able to develop and use CUDA applications with a significant amount of horsepower, the Titan Z fits the bill perfectly.
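
It's worth remembering that CUDA doesn't see the Titan Z as one 5,760-core device; it sees two GPUs that the developer manages explicitly. A minimal sketch of splitting work across both looks like this (the kernel, sizes, and names here are hypothetical, purely to illustrate the cudaSetDevice pattern):

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical kernel: scale a vector in place.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    int gpuCount = 0;
    cudaGetDeviceCount(&gpuCount);        // A Titan Z reports two devices.
    if (gpuCount > 2) gpuCount = 2;

    const int n = 1 << 20;                // Elements per GPU (illustrative).
    float *buf[2] = {nullptr, nullptr};

    // Kernel launches are asynchronous, so issuing work to each
    // device in turn keeps both GPUs busy at the same time.
    for (int d = 0; d < gpuCount; ++d) {
        cudaSetDevice(d);
        cudaMalloc(&buf[d], n * sizeof(float));
        cudaMemset(buf[d], 0, n * sizeof(float));
        scale<<<(n + 255) / 256, 256>>>(buf[d], n, 2.0f);
    }

    // Wait for both GPUs to finish, then clean up.
    for (int d = 0; d < gpuCount; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(buf[d]);
    }
    printf("Ran the kernel on %d GPU(s)\n", gpuCount);
    return 0;
}
```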

Still, the company was touting the Titan Z as "offering supercomputer class performance to enthusiast gamers" and telling gamers in launch videos that the Titan Z is the "fastest graphics card ever built" and that it was "built for gamers." So, interest piqued, we decided to review the GeForce GTX Titan Z.

The GeForce GTX TITAN Z Graphics Card

Cost and performance notwithstanding, the GeForce GTX Titan Z is an absolutely stunning looking graphics card. The industrial design that started with the GeForce GTX 690 (the last dual-GPU card NVIDIA released) and continued with the GTX 780 and Titan family lives on with the Titan Z.


The all-metal finish looks good and stands up to abuse, keeping the PCB straight even with the heft of the heatsink. There is only a single fan on the Titan Z, center mounted, with a large heatsink covering the two GPUs on opposite sides of it. The GeForce logo up top illuminates, as we have seen on all similar designs, which adds a nice touch.

Continue reading our review of the NVIDIA GeForce GTX Titan Z 12GB Graphics Card!!

Enter Tegra K1 CUDA Vision Challenge, Win Jetson TK1

Subject: General Tech | April 25, 2014 - 10:43 AM |
Tagged: nvidia, contest, jetson tk1, kepler

Attention enthusiasts, developers and creators. Are you working on a new embedded computing application?

Meet the Jetson TK1 Developer Kit. It’s the world’s first mobile supercomputer for embedded systems, putting unprecedented computing performance in a low-power, portable and fully programmable package.

Power, ports, and portability: the Jetson TK1 development kit.

It’s the ultimate platform for developing next-generation computer vision solutions for robotics, medical devices, and automotive applications.

And we’re giving away 50 of them as part of our Tegra K1 CUDA Vision Challenge.

In addition to the Tegra K1 processor, the Jetson TK1 DevKit is equipped with 2 GB of RAM, 16 GB of storage and a host of ports and connectivity options.

And, because it offers full support for CUDA, the most pervasive and easy-to-use parallel computing platform and programming model, it's much easier to program than the FPGAs, custom ASICs and DSPs that are typically used in today's embedded systems.
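
To illustrate that programming-model point: a task that would take pages of HDL on an FPGA is a few lines of familiar C-style code in CUDA. A minimal, hypothetical sketch that brightens an image buffer in parallel:

```cpp
#include <cuda_runtime.h>

// Each thread brightens one pixel; the hardware schedules the
// threads across the Tegra K1's 192 CUDA cores automatically.
__global__ void brighten(unsigned char *pixels, int count, int offset) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count) {
        int v = pixels[i] + offset;
        pixels[i] = v > 255 ? 255 : v;    // Clamp to the 8-bit range.
    }
}

void brightenOnGpu(unsigned char *hostPixels, int count) {
    unsigned char *d = nullptr;
    cudaMalloc(&d, count);
    cudaMemcpy(d, hostPixels, count, cudaMemcpyHostToDevice);
    brighten<<<(count + 255) / 256, 256>>>(d, count, 32);
    cudaMemcpy(hostPixels, d, count, cudaMemcpyDeviceToHost);
    cudaFree(d);
}
```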

Jetson TK1 is based on the Kepler computing architecture, the same technology powering today’s supercomputers, professional workstations and high-end gaming rigs. It has 192 CUDA cores delivering over 300 GFLOPS of performance, and it provides full support for OpenGL 4.4 and CUDA 6.0, as well as GPU-accelerated OpenCV.

Our Tegra K1 system-on-a-chip offers unprecedented power and portability.

Entering the Tegra K1 CUDA Vision Challenge is easy. Just tell us about your embedded application idea. All proposals must be submitted by April 30, 2014. Entries will be judged on innovation, impact on research or industry, public availability, and quality of work.


By the end of May, the top 50 submissions will each be awarded one of the first Jetson TK1 DevKits to roll off the production line, as well as access to technical support documents and assets.

The teams behind the five most noteworthy Jetson TK1 breakthroughs may get a chance to share their work at the NVIDIA GPU Technology Conference in 2015.

Source: NVIDIA

NVIDIA Launches Jetson TK1 Mobile CUDA Development Platform

Subject: General Tech, Mobile | March 25, 2014 - 06:34 PM |
Tagged: GTC 2014, tegra k1, nvidia, CUDA, kepler, jetson tk1, development

NVIDIA recently unified its desktop and mobile GPU lineups by moving to a Kepler-based GPU in its latest Tegra K1 mobile SoC. The move to the Kepler architecture has simplified development and enabled the CUDA programming model to run on mobile devices. One of the main points of the opening keynote earlier today was ‘CUDA everywhere,’ and NVIDIA has officially accomplished that goal, with CUDA-compatible hardware spanning servers, desktops, tablets, and embedded devices.

Speaking of embedded devices, NVIDIA showed off a new development board called the Jetson TK1. This tiny board features an NVIDIA Tegra K1 SoC at its heart along with 2GB of RAM and 16GB of eMMC storage. The Jetson TK1 supports a plethora of IO options, including an internal expansion port (GPIO compatible), SATA, one half-mini PCI-e slot, serial, USB 3.0, micro USB, Gigabit Ethernet, analog audio, and HDMI video output.

The NVIDIA Jetson TK1 mobile CUDA development board.

Of course, the Tegra K1 part pairs a quad-core (4+1) ARM CPU with a Kepler-based GPU sporting 192 CUDA cores. The SoC is rated at 326 GFLOPS, which enables some interesting compute workloads, including machine vision.
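
That 326 GFLOPS rating lines up with the usual Kepler peak math, counting one fused multiply-add (two FLOPs) per core per clock, and implies a GPU clock of roughly 850 MHz:

```latex
192\,\text{cores} \times 2\,\tfrac{\text{FLOP}}{\text{clock}} \times 0.85\,\text{GHz} \approx 326\,\text{GFLOPS}
```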

Computer vision on NVIDIA CUDA.

In fact, Audi has been utilizing the Jetson TK1 development board to power its self-driving prototype car (more on that soon). Other intended uses for the new development board include robotics, medical devices, security systems, and perhaps low-power compute clusters (such as an improved Pedraforca system). It can also be used as a simple desktop platform for testing and developing mobile applications for other Tegra K1 powered devices, of course.

NVIDIA VisionWorks at GTC 2014.

Beyond the hardware, the Jetson TK1 comes with the CUDA toolkit, an OpenGL 4.4 driver, and the NVIDIA VisionWorks SDK, which includes programming libraries and sample code for getting machine vision applications running on the Tegra K1 SoC.
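
For a flavor of what sits underneath those vision libraries, here is a hand-rolled sketch of one of the most common first stages in a machine vision pipeline, an RGBA-to-grayscale kernel. To be clear, this is illustrative CUDA, not code from the VisionWorks SDK:

```cpp
#include <cuda_runtime.h>
#include <cstdint>

// Convert an RGBA image to 8-bit grayscale with the standard luma
// weights; one thread per pixel, assuming R in x, G in y, B in z.
__global__ void rgbaToGray(const uchar4 *rgba, uint8_t *gray,
                           int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    uchar4 p = rgba[y * width + x];
    gray[y * width + x] =
        (uint8_t)(0.299f * p.x + 0.587f * p.y + 0.114f * p.z);
}

// Launch with a 2-D grid sized to cover the whole image.
void toGray(const uchar4 *dRgba, uint8_t *dGray, int w, int h) {
    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    rgbaToGray<<<grid, block>>>(dRgba, dGray, w, h);
}
```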

The Jetson TK1 is available for pre-order now at $192 and is slated to begin shipping in April. Interested developers can find more information on the NVIDIA developer website.


Manufacturer: NVIDIA

Maxwell and Kepler and...Fermi?

Covering the landscape of mobile GPUs can be a harrowing experience.  Brands, specifications, performance, features and architectures can all vary from product to product, even inside the same family.  Rebranding is rampant from both AMD and NVIDIA and, in general, we are met with one of the most confusing segments of the PC hardware market.  

Today, with the release of the GeForce GTX 800M series from NVIDIA, we are getting all of the above in one form or another. We will also see performance improvements and the introduction of the new Maxwell architecture (in a few parts at least).  Along with the GeForce GTX 800M parts, you will also find the GeForce 840M, 830M and 820M offerings at lower performance, wattage and price levels.


With the new hardware comes a collection of new software for mobile users, including the innovative Battery Boost, which can increase unplugged gaming time by using frame rate limiting and other "magic" bits that NVIDIA isn't talking about yet.  ShadowPlay and GameStream also find their way to mobile GeForce users.
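
NVIDIA hasn't detailed the mechanism, but the frame rate limiting half is straightforward to reason about: cap a game at 30 FPS and the GPU can finish each frame quickly, then drop to a low-power state instead of rendering frames the player never needed. Conceptually it looks something like the loop below, which is purely an illustration of frame capping, not NVIDIA's driver code:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Conceptual 30 FPS limiter: render a frame, then sleep out the rest
// of the 33.3 ms budget so the GPU can fall into a low-power state
// instead of rendering frames that only drain the battery.
void frameLimitedLoop(void (*renderFrame)(), std::atomic<bool> &running) {
    using clock = std::chrono::steady_clock;
    const auto frameBudget = std::chrono::microseconds(33333);  // ~30 FPS

    auto deadline = clock::now() + frameBudget;
    while (running.load()) {
        renderFrame();                        // GPU races to finish...
        std::this_thread::sleep_until(deadline);
        deadline += frameBudget;              // ...then idles until due.
    }
}
```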

Let's take a quick look at the new hardware specifications.

|                  | GTX 880M | GTX 780M | GTX 870M | GTX 770M |
|------------------|----------|----------|----------|----------|
| GPU Code Name    | Kepler   | Kepler   | Kepler   | Kepler   |
| GPU Cores        | 1536     | 1536     | 1344     | 960      |
| Rated Clock      | 954 MHz  | 823 MHz  | 941 MHz  | 811 MHz  |
| Memory           | Up to 4GB | Up to 4GB | Up to 3GB | Up to 3GB |
| Memory Clock     | 5000 MHz | 5000 MHz | 5000 MHz | 4000 MHz |
| Memory Interface | 256-bit  | 256-bit  | 192-bit  | 192-bit  |
| Features         | Battery Boost, GameStream, ShadowPlay, GFE | GameStream, ShadowPlay, GFE | Battery Boost, GameStream, ShadowPlay, GFE | GameStream, ShadowPlay, GFE |

Both the GTX 880M and the GTX 870M are based on Kepler, keeping the same basic feature set and hardware specifications as their brethren in the GTX 700M line.  However, while the GTX 880M has the same CUDA core count as the 780M, the same cannot be said of the GTX 870M.  Moving from the GTX 770M to the 870M brings a significant 40% increase in core count as well as a jump in clock speed from 811 MHz (plus Boost) to 941 MHz.
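
Multiplying the core count and clock gains together puts the theoretical shader throughput increase from GTX 770M to GTX 870M at just over 60%, before Boost clocks, memory bandwidth, or thermal limits enter the picture:

```latex
\frac{1344}{960} \times \frac{941\,\text{MHz}}{811\,\text{MHz}} \approx 1.40 \times 1.16 \approx 1.62
```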

Continue reading about the NVIDIA GeForce GTX 800M Launch and Battery Boost!!

Subject: Editorial
Manufacturer: NVIDIA

It wouldn’t be February if we didn’t hear the Q4 FY14 earnings from NVIDIA!  NVIDIA does have a slightly odd way of expressing its quarters, but in the end it is all semantics.  The company is not in fact living in the future, but I bet its product managers wish they could peer into the actual Q4 2014.  No, the whole FY14 thing relates back to when NVIDIA made its IPO and how it started reporting.  To us mere mortals, Q4 FY14 actually represents Q4 2013.  Clear as mud?  Lord love the Securities and Exchange Commission and their rules.


The past quarter was a pretty good one for NVIDIA.  The company came away with $1.144 billion in gross revenue and a GAAP net income of $147 million, beating the Street’s estimate by a pretty large margin.  In response, NVIDIA’s stock rose in after-hours trading.  This has certainly been a trying year for NVIDIA and the PC market in general, but the company seems to have come out on top.

NVIDIA beat estimates primarily on the strength of the PC graphics division.  Many were focusing on the apparent decline of the PC market and assumed that NVIDIA would be dragged down by lower shipments.  On the contrary, it seems as though the gaming market and add-in sales on the PC helped to solidify NVIDIA’s quarter.  We can look at a number of factors that likely contributed to this uptick for NVIDIA.

Click here to read the rest of NVIDIA's Q4 FY2014 results!

Rumor: NVIDIA GeForce GTX TITAN Black and GTX 790 Incoming

Subject: Graphics Cards | January 21, 2014 - 12:49 PM |
Tagged: rumor, nvidia, kepler, gtx titan black, gtx titan, gtx 790

How about some fresh graphics card rumors for your Tuesday afternoon?  The folks at VideoCardz.com have collected some information about two potential NVIDIA GeForce cards that are going to hit your pocketbook hard.  If you thought the mid-range GPU market was crowded, wait until you see the changes NVIDIA might have for you soon on the high end.

First up is the NVIDIA GeForce GTX TITAN Black Edition, a card that will actually have the same specifications as the GTX 780 Ti but with full-performance double precision floating point and a move from 3GB to 6GB of memory.  The all-black version of the GeForce GTX 700-series cooler is particularly awesome looking.


Image from VideoCardz.com

The new TITAN would sport the same GPU as the GTX 780 Ti, only the TITAN BLACK would have higher double precision computing performance, and thus more usable FP64 CUDA cores. The GTX TITAN Black Edition is also said to feature a 6GB memory buffer. This is twice as much as the GTX 780 Ti, and it pretty much confirms we won’t be seeing any 6GB Ti’s.

The rest is pretty well known: the TITAN BLACK has 2880 CUDA cores, 240 TMUs and 48 ROPs.

VideoCardz.com says this will come in at $999.  If true, this is a pure HPC play, as the GTX 780 Ti would still offer the same gaming performance for enthusiasts.
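
The math backs up the HPC angle. A full GK110 with double precision unlocked runs FP64 at 1/3 the FP32 rate (as on the original TITAN), while GeForce cards like the GTX 780 Ti are capped at 1/24 of FP32. Assuming a base clock near the 780 Ti's 875 MHz for both, the gap is dramatic:

```latex
\text{TITAN Black: } \frac{2880 \times 2 \times 0.875\,\text{GHz}}{3} \approx 1.7\,\text{TFLOPS}
\qquad
\text{GTX 780 Ti: } \frac{2880 \times 2 \times 0.875\,\text{GHz}}{24} \approx 0.21\,\text{TFLOPS}
```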

Secondly, there looks to be an upcoming dual-GPU graphics card using a pair of GK110 GPUs that will be called the GeForce GTX 790.  The specifications that VideoCardz.com says it has indicate that each GPU will have 2496 enabled CUDA cores and a smaller 320-bit memory interface with 5GB designated for each GPU.  Cutting back on the memory interface, shader counts and even clock speeds would allow NVIDIA to manage power consumption at the targeted 300 watt level.


Image from VideoCardz.com

Head over to VideoCardz.com for more information about these rumors but if all goes as they expect, you'll hear about these products quite a bit more in February and March.

What do you think?  Are these new $1000 graphics cards something you are looking forward to?  

Source: VideoCardz

Nvidia's renamed Tegra K1 SoC uses Denver and Kepler

Subject: General Tech | January 6, 2014 - 11:08 AM |
Tagged: tegra k1, tegra, SoC, nvidia, kepler, k1, cortex a15, CES, arm, A15

The Tegra K1, formerly known as Project Logan, is the first big news out of CES from NVIDIA and represents a bit of a change from what we were expecting.  The current belief was that the SoC would have four 28nm Cortex A15 processors, but that will be only one flavour of the K1; a Denver-based dual-core version will also be released.  Those ARMv8 64-bit processors will natively handle 64-bit applications, while the A15 version that The Tech Report had taken pictures of will be limited to 32-bit applications, though that will not matter in many mobile applications.  You should also check out Ryan's deep dive into the new Denver and Kepler version here.

die.jpg

"In early 2011, during a CES press event, Nvidia revealed its Project Denver CPU initiative. On Sunday evening, at another CES press conference, the company provided a glimpse of the first Denver-based processor: the Tegra K1. This next-generation SoC features dual Denver CPU cores clocked at up to 2.5GHz. The cores were designed by Nvidia, and they're compatible with the 64-bit ARMv8 instruction set. They have a seven-way superscalar pipeline and a hefty 192KB of L1 cache."



Subject: Mobile
Manufacturer: NVIDIA

Once known as Logan, now known as K1

NVIDIA has bet big on Tegra.  Since the introduction of the SoC's first iteration, that much was clear.  With the industry push to mobile computing and the decreased importance of the classic PC design, developing and gaining traction with a mobile processor was not only an expansion of the company’s portfolio but a critical shift in the mindset of a graphics giant. 

The problem thus far is that while NVIDIA continues to enjoy success in the markets for workstation and consumer discrete graphics, the Tegra line of system-on-chip processors has faltered.  Design wins have been tough to come by. Other companies with feet already firmly planted on this side of the hardware fence continue to innovate and seal deals with customers.  Qualcomm is the dominant player for mobile processors, with Samsung, MediaTek, and others all fighting for the same customers NVIDIA needs.  While press conferences and releases have been all smiles and sunshine since day one, the truth is that Tegra hasn’t grown at the rate NVIDIA had hoped.

Solid products based on NVIDIA Tegra processors have been released.  The first Google Nexus 7 used the Tegra 3 processor and was considered by most to be the best Android tablet on the market until it was succeeded by the 2013 iteration of the Nexus 7.  Tegra 4 slipped backwards, though – the NVIDIA SHIELD mobile gaming device was the answer from a company eager to show the market it could build compelling and relevant hardware.  It has only partially succeeded in that task.


With today’s announcement of the Tegra K1, previously known as Logan or Tegra 5, NVIDIA hopes to once again spark a fire under partners and developers, showing them that NVIDIA’s dominance in PC graphics has clear benefits for the mobile segment as well.  During a meeting with NVIDIA about the Tegra K1, Dan Vivoli, Senior VP of Marketing and a 16-year employee, equated the release of the K1 to that of the original GeForce GPU.  That is a lofty ambition, and it puts a lot of pressure on the entire Tegra team, not to mention the K1 product itself, to live up to.

Tegra K1 Overview

What we previously knew as Logan or Tegra 5 (and it actually was called Tegra 5 until just a couple of days ago) is now being released as the Tegra K1.  The ‘K’ designation indicates the graphics architecture that powers the SoC, in this case Kepler.  Also, it’s the first one.  So, K1.

The CPU cores of the Tegra K1 look very familiar: four ARM Cortex-A15 “r3” cores with 2MB of L2 cache, plus a fifth A15 core used for lower power situations.  This 4+1 design is the same one introduced with the Tegra 4 processor last year and allows NVIDIA to implement a unique style of “big.LITTLE” design.  Some slight modifications to the cores are included with the Tegra K1 that improve performance and efficiency, but not by much – the main CPU is very similar to the Tegra 4’s.

NVIDIA also unveiled late last night another version of the Tegra K1 that replaces the quad A15 cores with two of the company's custom-designed Denver CPU cores.  Project Denver, announced in early 2011, is NVIDIA's attempt at building its own core design based on the ARMv8 64-bit ISA.  This puts this iteration of the Tegra K1 on the same level as Apple's A7 and Qualcomm's Krait processors, both built on custom-designed cores.  When these parts are finally available in the wild, it will be incredibly intriguing to see how well NVIDIA's architects did in the first true CPU design from the GPU giant.

Continue reading about NVIDIA's new Tegra K1 SoC with Kepler-based graphics!
