Manufacturer: NVIDIA

NVIDIA releases the GeForce GT 700M family

NVIDIA revolutionized gaming on the desktop with the release of its 600-series Kepler-based graphics cards in March 2012. Now, with the GeForce GT 700M series, Kepler enters the mobile arena to power laptops, ultrabooks, and all-in-one systems.

Today, NVIDIA introduces four new members to its mobile line: the GeForce GT 750M, the GeForce GT 740M, the GeForce GT 735M, and the GeForce GT 720M. These four new mobile graphics processors join the previously released members of the GeForce GT 700M series: the GeForce GT 730M and the GeForce GT 710M. With the exception of the Fermi-based GeForce GT 720M, all of the newly released mobile cores are based on NVIDIA's 28nm Kepler architecture.

Notebooks based on the GeForce GT 700M series will offer built-in support for the following new technologies:

Automatic Battery Savings through NVIDIA Optimus Technology

02-optimus-tech-slide.PNG

Automatic Game Configuration through the GeForce Experience

03-gf-exp.PNG

Automatic Performance Optimization through NVIDIA GPU Boost 2.0

03-gpu-boost-20.PNG

Continue reading our release coverage of the NVIDIA GeForce GT 700M series!

GTC 2013: eyeSight Will Use GPUs To Improve Its Gesture Recognition Software

Subject: General Tech | March 31, 2013 - 08:43 PM |
Tagged: nvidia, lenovo yoga, GTC 2013, GTC, gesture control, eyesight, ECS

During the Emerging Companies Summit at NVIDIA's GPU Technology Conference, Israeli company EyeSight Mobile Technologies' CEO Gideon Shmuel took the stage to discuss the future of its gesture recognition software. He also provided insight into how EyeSight plans to use graphics cards to improve and accelerate the process of identifying, and responding to, finger and hand movements along with face detection.

GTC_ECS_EyeSight_Gideon Shmuel (2).jpg

EyeSight is a five-year-old company that has developed gesture recognition software that can be installed on existing machines (though it appears to be aimed more at OEMs than directly at consumers). It uses standard cameras, such as webcams, for its 2D input data and then derives a relative Z-axis from proprietary algorithms. This gives EyeSight essentially 2.5D of input data and, camera resolution and frame rate permitting, allows the software to identify and track finger and hand movements. EyeSight CEO Gideon Shmuel stated at the ECS presentation that the software is currently capable of "finger-level accuracy" at 5 meters from a TV.
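EyeSight's depth algorithms are proprietary, so the snippet below is a purely hypothetical illustration of how a relative Z cue could, in principle, be pulled from flat 2D camera input: track how the apparent size of a detected hand changes from frame to frame.

```python
# Purely hypothetical sketch -- EyeSight's actual depth algorithms are
# proprietary. One simple way a *relative* Z cue can be derived from flat 2D
# camera input is to track how the apparent size of a detected hand changes
# between frames: a hand that grows in the image is (roughly) moving toward
# the camera, and one that shrinks is moving away.

def relative_z_change(prev_bbox_area: float, curr_bbox_area: float) -> float:
    """Unitless relative depth change between two frames.

    Positive -> hand moved toward the camera (apparent size grew).
    Negative -> hand moved away from the camera (apparent size shrank).
    """
    if prev_bbox_area <= 0:
        return 0.0
    # Apparent linear size scales roughly with 1/distance, so area scales
    # roughly with 1/distance^2; the square root recovers the linear change.
    scale = (curr_bbox_area / prev_bbox_area) ** 0.5
    return scale - 1.0

# Example: a hand bounding box grows from 4000 to 5000 square pixels.
print(relative_z_change(4000, 5000))  # ~0.118 -> hand pushing toward the screen
```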

GTC_ECS_EyeSight_Gideon Shmuel (5).jpg

Gestures include using your fingers as a mouse to point at on-screen objects, waving your hand to turn pages, scrolling, and even giving hand-signal cues.

The software is not open source, and there are no plans to move in that direction. The company has 15 patents pending on its technology, several of which it managed to file before the US Patent Office changed from First to Invent to First Inventor to File (heh, which is another article...). The software will support up to 20 million hardware devices in 2013, and EyeSight expects the number of compatible camera-packing devices to increase to as many as 3.5 billion in 2015. Other features include the ability to transparently map EyeSight input to Android apps without users needing to muck with settings, and the ability to detect faces and "emotional signals" even in low light. According to the website, SDKs are available for Windows, Linux, and Android. On Windows, the software maps the gestures it recognizes to keyboard shortcuts to increase compatibility with many existing applications (so long as they support keyboard shortcuts), as sketched below.
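Here is a minimal sketch of that gesture-to-shortcut idea. The gesture names and the send_shortcut() helper are invented for illustration; EyeSight's SDK is closed source, so this only shows the general approach of translating recognized gestures into shortcuts that existing applications already understand.

```python
# Minimal sketch of the gesture-to-shortcut idea. The gesture names and the
# send_shortcut() helper are invented for illustration only.

GESTURE_TO_SHORTCUT = {
    "swipe_left":  "ctrl+pagedown",  # e.g. next page / next tab
    "swipe_right": "ctrl+pageup",    # e.g. previous page / previous tab
    "palm_push":   "space",          # e.g. play/pause in a media player
    "wave":        "alt+tab",        # e.g. switch applications
}

def send_shortcut(shortcut: str) -> None:
    # Placeholder: a real implementation would inject the key combination,
    # for example through the Win32 SendInput API.
    print(f"injecting shortcut: {shortcut}")

def handle_gesture(gesture: str) -> None:
    shortcut = GESTURE_TO_SHORTCUT.get(gesture)
    if shortcut is not None:
        send_shortcut(shortcut)

handle_gesture("swipe_left")  # -> injecting shortcut: ctrl+pagedown
```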

GTC_ECS_EyeSight_Gideon Shmuel (10).jpg

Currently, the EyeSight software runs mostly on the CPU, but the company is investing heavily in GPU support. Moving the processing to GPUs will allow the software to run faster and more power efficiently, especially on mobile devices (NVIDIA's Tegra platform was specifically mentioned). EyeSight's future roadmap includes using GPU acceleration to bolster the number of supported gestures, move image processing to the GPUs, add velocity and vector control inputs, and incorporate a better low-light filter (which will run on the GPU). Offloading work from the CPU also optimizes power management and frees CPU resources for the OS and other applications, which is especially important for mobile devices. Gideon Shmuel also stated that he wants to see the technology used on "anything with a display," from your smartphone to your air conditioner.

GTC_ECS_EyeSight_Gideon Shmuel (6).jpg

A basic version of the EyeSight input technology reportedly comes installed on the Lenovo Yoga convertible tablet. I think this software has potential and could provide the Minority Report-like interaction that many enthusiasts wish for. Hopefully, EyeSight can deliver on its claimed accuracy figures and OEMs will embrace the technology by integrating it into future devices.

EyeSight has posted additional video demos and information about its touch-free technology on its website.

Do you think this "touch-free" gesture technology has merit, or will this type of input remain limited to awkward integrations in console games?

Summary Thus Far

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons.  Here is the schedule:

Welcome to the second in our initial series of articles focusing on Frame Rating, our new graphics and GPU performance methodology that drastically changes how the community looks at single and multi-GPU performance.  In this article we are focusing on a different set of graphics cards: the highest-performing single-card options on the market, including the GeForce GTX 690 4GB dual-GK104 card, the GeForce GTX Titan 6GB GK110-based monster, and the Radeon HD 7990, though in an emulated form.  The HD 7990 was only recently officially announced by AMD at this year's Game Developers Conference, but the specifications of that hardware are going to closely match what we have here on the testbed today: a pair of retail Radeon HD 7970s in CrossFire.

titancard.JPG

Will the GTX Titan look as good in Frame Rating as it did upon its release?

If you are just joining this article series today, you have missed a lot!  If nothing else, you should read our initial full release article that details everything about the Frame Rating methodology and why we are making this change to begin with.  In short, we are moving away from using FRAPS for average frame rates or even frame times and are instead using a secondary hardware capture system to record all the frames of our gameplay as they would be displayed to the gamer, then doing post-process analysis on that recorded file to measure real-world performance.

Because FRAPS measures frame times at a different point in the game pipeline (closer to the game engine), its results can vary dramatically from what is presented to the end user on their display.  Frame Rating addresses that problem by recording video through a dual-link DVI capture card that emulates a monitor attached to the test system. By applying a unique overlay color to each frame the game produces, we can gather a new kind of information that tells a very different story.
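To make the idea a bit more concrete, here is a rough sketch (not our actual analysis tooling) of how a captured video with a per-frame overlay color could be turned into frame-time data. It assumes the overlay sits along the left edge of the image, the capture is lossless, and no more than one new game frame arrives per captured refresh; a real pipeline also has to handle multiple color bands appearing within a single captured refresh when the game runs faster than the display.

```python
# Rough sketch only: derive observed frame times from a captured video in
# which each game frame carries a unique overlay color along its left edge.
# Assumes a lossless 60 Hz capture and at most one new game frame per
# captured refresh. Uses OpenCV (cv2) for video decoding.
import cv2

CAPTURE_FPS = 60.0      # refresh rate the capture card records at
OVERLAY_COLUMN = 2      # x position sampled inside the overlay color bar

def observed_frame_times(path):
    """Return a list of observed frame times in milliseconds."""
    cap = cv2.VideoCapture(path)
    last_color = None
    refreshes = 0
    frame_times_ms = []
    while True:
        ok, image = cap.read()
        if not ok:
            break
        # Sample one pixel from the overlay bar near the top of the frame.
        color = tuple(int(c) for c in image[0, OVERLAY_COLUMN])
        if color == last_color:
            refreshes += 1                    # same game frame still on screen
        else:
            if last_color is not None:
                frame_times_ms.append(refreshes * 1000.0 / CAPTURE_FPS)
            last_color, refreshes = color, 1  # a new game frame appeared
    cap.release()
    return frame_times_ms

# Hypothetical usage:
# print(observed_frame_times("capture.avi"))
```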

card1.jpg

The capture card that makes all of this work possible.

I don't want to spend too much time on this part of the story here, as I already wrote a solid 16,000 words on the topic in our first article, and I think you'll really find the results fascinating.  So, please check out my first article on the topic if you have any questions before diving into these results today!

 

Test System Setup

  • CPU: Intel Core i7-3960X Sandy Bridge-E
  • Motherboard: ASUS P9X79 Deluxe
  • Memory: Corsair Dominator DDR3-1600 16GB
  • Hard Drive: OCZ Agility 4 256GB SSD
  • Sound Card: On-board
  • Graphics Cards: NVIDIA GeForce GTX TITAN 6GB, NVIDIA GeForce GTX 690 4GB, AMD Radeon HD 7970 CrossFire 3GB
  • Graphics Drivers: AMD 13.2 beta 7; NVIDIA 314.07 beta (GTX 690); NVIDIA 314.09 beta (GTX TITAN)
  • Power Supply: Corsair AX1200i
  • Operating System: Windows 8 Pro x64

 

On to the results! 

Continue reading our review of the GTX Titan, GTX 690 and HD 7990 using Frame Rating!!

Podcast #244 - Frame Rating Launch, HD 7790 vs. GTX 650Ti BOOST, and news from GDC

Subject: General Tech | March 28, 2013 - 03:47 PM |
Tagged: sli, podcast, pcper, nvidia, kepler, HD7790, GTX 650Ti BOOST, GCN, frame rating, crossfire, amd

PC Perspective Podcast #244 - 03/28/2013

Join us this week as we discuss the launch of Frame Rating, HD 7790 vs. GTX 650Ti BOOST, and news from GDC

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

This Podcast is brought to you by MSI!

Program length: 1:19:22

Podcast topics of discussion:
  1. Week in Review:
  2. News items of interest:
  3. 1:12:00 Hardware/Software Picks of the Week:
  4. 1-888-38-PCPER or podcast@pcper.com
  5. Closing/outro

 

How Games Work

 


Introduction

The process of testing games and graphics has been evolving even longer than I have been a part of the industry: 14+ years at this point. That transformation in benchmarking has been accelerating over the last 12 months. Typical benchmarks test some hardware against some software and look at the average frame rate that can be achieved. While access to frame times has been around for nearly the full life of FRAPS, it took an article from Scott Wasson at the Tech Report to really get the ball moving and investigate how each individual frame contributes to the actual user experience. I immediately began researching ways to test the performance actually perceived by the user, including the "microstutter" reported by many in PC gaming, and pondered how we might be able to test for it even more accurately.
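To see why an average frame rate alone can hide that perceived stutter, consider the purely hypothetical numbers below: both runs average exactly 50 FPS, yet one of them delivers a couple of frames that take roughly three times as long as the rest.

```python
# Hypothetical frame times (in milliseconds), purely to illustrate the point:
# both runs total 200 ms over 10 frames, so both average exactly 50 FPS, but
# the second run delivers two frames that take far longer than the rest.
smooth  = [20.0] * 10
stutter = [14.0] * 8 + [44.0, 44.0]

def avg_fps(frame_times_ms):
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def worst_frame(frame_times_ms):
    return max(frame_times_ms)

for name, run in (("smooth", smooth), ("stutter", stutter)):
    print(f"{name}: {avg_fps(run):.1f} FPS average, "
          f"worst frame {worst_frame(run):.1f} ms")

# smooth: 50.0 FPS average, worst frame 20.0 ms
# stutter: 50.0 FPS average, worst frame 44.0 ms
```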

The result of that research is being fully unveiled today in what we are calling Frame Rating – a completely new way of measuring and validating gaming performance.

The release of this story for me is like the final stop on a journey that has lasted nearly a complete calendar year.  I began releasing bits and pieces of this methodology on January 3rd with a video and short article that described our capture hardware and the benefits that directly capturing the output from a graphics card would bring to GPU evaluation.  After returning from CES later in January, I posted another short video and article that showcased some of the captured video and stepped through a recorded file frame by frame to show readers how capture could help us detect and measure stutter and frame time variance. 

card4.jpg

Finally, during the launch of the NVIDIA GeForce GTX Titan graphics card, I released the first results from our Frame Rating system and discussed how certain card combinations, in this case CrossFire against SLI, could drastically differ in perceived frame rates and performance while giving very similar average frame rates.  This article got a lot more attention than the previous entries, and that was expected: this method doesn't attempt to dismiss other testing options, but it is going to be pretty disruptive.  I think the remainder of this article will prove that. 

Today we are finally giving you all the details on Frame Rating: how we do it, what we learned, and how you should interpret the results we are providing.  I warn you up front, though, that this is not an easy discussion, and while I am doing my best to explain things completely, there are going to be more questions going forward and I want to see them all!  There is still much to do regarding graphics performance testing, even after Frame Rating becomes more common. We feel that continued dialogue with readers, game developers, and hardware designers is necessary to get it right.

Below is our full video that features the Frame Rating process, some example results, and some discussion of what it all means going forward.  I encourage everyone to watch it, but you will definitely need the written portion here to fully understand this transition in testing methods.  Subscribe to our YouTube channel if you haven't already!

Continue reading our analysis of the new Frame Rating performance testing methodology!!

NVIDIA Boosts the Sub-$200 market with the GTX 650 Ti Boost

Subject: Graphics Cards | March 26, 2013 - 07:41 PM |
Tagged: nvidia, hd 7790, gtx 650 ti boost, gtx 650 Ti, gpu boost, gk106

Why Boost, you may ask?  If you guessed that NVIDIA added its new Boost Clock feature to the card, you should win a prize, as that is exactly what makes this GTX 650 Ti special.  With a core GPU speed of 980MHz, boosting to 1033MHz and beyond, this card is actually aimed to compete with AMD's HD 7850, not the newly released HD 7790 (at least the 2GB model is).  Along with the boost in clock comes a wider memory bus and a corresponding increase in ROPs.  The 2GB model should be about $170, right on the cusp between value and mid-range, but is it worth the price of admission?  Get a look at the performance at [H]ard|OCP.

H_Specs.gif

"NVIDIA is launching the GeForce GTX 650 Ti Boost today. This video card is priced in the $149-$169 price range, and should give the $150 price segment another shakedown. Does it compare to the Radeon HD 7790, or is it on the level of the more expensive Radeon HD 7850? We will find out in today's latest games, you may be surprised."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Boxx Launches 3DBoxx 8950 Workstation

Subject: General Tech, Systems | March 26, 2013 - 06:18 PM |
Tagged: workstation, nvidia, GTC 2013, BOXX, 3dboxx 8950

Boxx Technologies recently launched a new multi-GPU workstation called the 3DBoxx 8950. It is aimed at professionals who need a fast system with beefy GPU accelerator cards so they can design and render at the same time. The 8950 is intended to be used with software from Autodesk and Dassault as well as renderers such as NVIDIA iray and V-Ray, among others.

8950 front.jpg

The Boxx 3DBoxx 8950 features two liquid-cooled Intel Xeon E5-2600 series processors (2GHz, 16 cores and 32 threads in total), up to 512GB of system memory (16 DIMM slots), and seven PCI-E slots (four of which accept dual-slot GPUs, while the remaining three are spaced for single-slot cards). A 1250W power supply (80 PLUS Gold) powers the workstation. An example configuration would include three Tesla K20 cards and one Quadro K5000: the Tesla cards would handle the computation while the Quadro powers the multi-display output. The chassis has room for eight 3.5" hard drives and a single externally accessible 5.25" drive. The 8950 workstation can be loaded with either Windows or Linux.

Rear IO on the 8950 workstation includes:

  • 5 x audio jacks
  • 1 x optical in/out
  • 4 x USB 2.0 ports
  • 1 x serial port
  • 2 x RJ45 jacks, backed by Intel Gigabit NICs

The system is available now, with pricing available upon request. You can find the full list of specifications and supported hardware configurations in this spec sheet (PDF).

Source: Boxx

NVIDIA's Project SHIELD is just about ready

Subject: General Tech | March 26, 2013 - 01:49 PM |
Tagged: tegra 4, tegra, shield, nvidia, Tegrazone

Remember Project Shield from CES and before?  The Inquirer has managed to get its hands on an actual console at the Game Developers Conference and played a bit of Need For Speed streamed from a PC to the Shield.  Project Shield itself is a Tegra 4-powered controller running Android 4.2, with an attached 5" 720p display and wireless connectivity.  The actual game is streamed wirelessly from a PC with a Kepler GPU via the Tegrazone application, so the real performance limit comes from latency, much like the streaming service once known as OnLive.  The Inq was not quite ready to toss its money at Project Shield, but it was close.

TheInq_nvidia-shield-console-front-540x334.JPG

"CHIP DESIGNER Nvidia caused something of a stir at CES when it announced the Project Shield handheld games console, and with its launch nearing, the firm is letting people try its first own-brand game console, which we managed to get our hands on at this week's GDC gaming conference in San Francisco."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer
Author:
Manufacturer: NVIDIA

The GTX 650 Ti Gets Boost and More Memory

In mid-October NVIDIA released the GeForce GTX 650 Ti based on GK106, the same GPU that powers the GTX 660, though with fewer enabled CUDA cores and GPC units.  At the time we were pretty impressed with the 650 Ti:

The GTX 650 Ti has more in common with the GTX 660 than it does the GTX 650, both being based on the GK106 GPU, but is missing some of the unique features that NVIDIA has touted of the 600-series cards like GPU Boost and SLI.

Today's release of the GeForce GTX 650 Ti BOOST actually addresses both of those missing features by moving even closer to the specification sheet found on the GTX 660 cards. 

Our video review of the GTX 650 Ti BOOST and Radeon HD 7790.

block1.jpg

Option 1: Two GPCs with Four SMXs

Just like we saw with the original GTX 650 Ti, there are two different configurations of the GTX 650 Ti BOOST; both have the same primary specifications but differ in which SMX is disabled on the full GK106 ASIC.  The new card still has 768 CUDA cores, but clock speeds increase from the original's 925 MHz to a 980 MHz base clock and a 1033 MHz typical boost clock.  The texture unit count remains the same at 64.
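For a rough sense of what those clock speeds mean on paper, here is a back-of-the-envelope estimate of peak single-precision throughput, assuming the usual Kepler figure of two floating-point operations (one FMA) per CUDA core per clock; these are theoretical estimates, not measured or vendor-quoted numbers.

```python
# Back-of-the-envelope peak single-precision throughput for the GTX 650 Ti
# BOOST, assuming 2 FLOPs (one FMA) per CUDA core per clock on Kepler.
# Rough theoretical estimates only, not measured numbers.
CUDA_CORES = 768
BASE_CLOCK_GHZ = 0.980
BOOST_CLOCK_GHZ = 1.033

def peak_gflops(cores: int, clock_ghz: float, flops_per_core_per_clock: int = 2) -> float:
    return cores * clock_ghz * flops_per_core_per_clock

print(f"base:  {peak_gflops(CUDA_CORES, BASE_CLOCK_GHZ):.0f} GFLOPS")   # ~1505
print(f"boost: {peak_gflops(CUDA_CORES, BOOST_CLOCK_GHZ):.0f} GFLOPS")  # ~1587
```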

Continue reading our review of the NVIDIA GeForce GTX 650 Ti BOOST graphics card!!

Get more out of Columbia with the all new GeForce 314.22 driver

Subject: General Tech | March 25, 2013 - 01:30 PM |
Tagged: bioshock infinite, geforce, GeForce 314.22, nvidia, gaming

Official_cover_art_for_Bioshock_Infinite.jpg

BioShock Infinite launches tomorrow and promises to be an exciting third installment in the award-winning franchise.

GeForce gamers today can get ready for a great Day 1 experience with BioShock Infinite by upgrading to our new GeForce 314.22 Game Ready drivers. These drivers are Microsoft WHQL-certified and available for download on GeForce.com.

Our software engineers have been working with Irrational Games over the past two years to optimize BioShock Infinite for GeForce users and, as a result, these drivers offer game-changing performance increases of up to 41 percent.

Also, with a single click in GeForce Experience, gamers can optimize the image quality in BioShock Infinite and have it instantly tuned to the capability of their PC’s hardware.

GeForce 314.22 drivers also offer several other significant performance increases in other current games. For more details, refer to the release highlights on the driver download pages and read the GeForce driver article on GeForce.com.

GeForce 314.22 Highlights

  • Delivers GeForce Game Ready experience for BioShock Infinite:
    • Up to 41% faster performance
    • Optimal game settings with GeForce Experience
    • Microsoft WHQL-certified
  • Increases gaming performance in other popular titles:
    • Up to 60% faster in Tomb Raider
    • Up to 23% faster in Sniper Elite V2
    • Up to 13% faster in Sleeping Dogs
  • Adds new SLI and 3D Vision profiles for upcoming games.

logo_geforce.png

Source: NVIDIA