Rumors About Upcoming NVIDIA GTX 680M Emerge

Subject: Graphics Cards | May 16, 2012 - 10:40 PM |
Tagged: nvidia, gtx 680m, gpu, mobile, kepler

Videocardz.com managed to get their hands on some rumored details about an upcoming NVIDIA mobile graphics card–the GTX 680M. According to rumors, the mobile chip will be launched at Computex 2012 in Taiwan next month.

alleged gtx680m.jpg

There aren’t many details about the mobile chip, but it is shaping up to be a scaled-down version of its Kepler-based GTX 680 desktop counterpart. The GTX 680M will have approximately half as many CUDA cores, at either 744 or 768 depending on the source. Either way, the card keeps the same 256-bit memory interface and can support SLI configurations. In addition, the 680M will be able to have up to 4GB of GDDR5 memory. Reportedly, it can use as much as 100 Watts of power.

When paired with an Intel Core i7 3720QM processor, the GPU was able to score 4,905 points in 3DMark 11’s Performance preset benchmark. It is supposed to be as much as 37 percent faster than the GTX 670M, which is not surprising considering that chip has only 336 CUDA cores and is clocked at 598 MHz (no word yet on what the GTX 680M will be clocked at).
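Taking the rumored numbers at face value, the claimed 37 percent speedup implies a GTX 670M score of roughly 3,580 points on the same benchmark. A quick back-of-the-envelope check (only the speedup figure comes from the rumor; the rest is arithmetic):

```python
# Back out the implied GTX 670M score from the rumored GTX 680M result.
gtx_680m_score = 4905    # 3DMark 11 Performance preset, rumored
claimed_speedup = 1.37   # "as much as 37 percent faster"

implied_670m_score = gtx_680m_score / claimed_speedup
print(round(implied_670m_score))  # roughly 3580 points
```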

No matter what the GTX 680M turns out to be, you can bet it will only be found in the highest end gaming notebooks where performance is more important than battery life. Until then, feel free to brush up on your Kepler architecture knowledge by visiting our GTX 680 (desktop) review.

Source: Videocardz
Author:
Manufacturer: NVIDIA

GK110 Specifications

When the Fermi architecture was first discussed in September of 2009 at the NVIDIA GPU Technology Conference, it marked an interesting turn for the company. Not only was NVIDIA releasing details about a GPU that wasn’t going to be available to consumers for another six months, it was also making clear that it was no longer building GPUs strictly for gaming – HPC and GPGPU were a defining target of all the company’s resources going forward.

Kepler on the other hand seemed to go back in the other direction with a consumer graphics release in March of this year without discussion of the Tesla / Quadro side of the picture. While the company liked to tout that Kepler was built for gamers I think you’ll find that with the information NVIDIA released today, Kepler was still very much designed to be an HPC powerhouse. More than likely NVIDIA’s release schedules were altered by the very successful launch of AMD’s Tahiti graphics cards under the HD 7900 brand. As a result, gamers got access to GK104 before NVIDIA’s flagship professional conference and the announcement of GK110 – a 7.1 billion transistor GPU aimed squarely at parallel computing workloads.

Kepler GK110

With the Fermi design, NVIDIA took a gamble and changed direction, betting that it could develop a microprocessor primarily intended for the professional markets while still appealing to the gaming markets that have sustained the company for the majority of its existence. While the GTX 480 flagship consumer card (and, to some degree, the GTX 580) had overheating and efficiency drawbacks for gaming workloads compared to AMD GPUs, the GTX 680 based on Kepler GK104 improved on them greatly. NVIDIA still designed Kepler for high-performance computing, though, with a focus this time on power efficiency as well as performance. We hadn’t seen the true king of this product line until today, however.

dieshot.jpg

GK110 Die Shot

Built on the 28nm process technology from TSMC, GK110 is an absolutely MASSIVE chip comprising 7.1 billion transistors, and though NVIDIA hasn’t given us a die size, it is likely coming close to the reticle limit of 550 square millimeters. NVIDIA is proud to call this chip the most ‘architecturally complex’ microprocessor ever built, and while impressive, that means there is potential for some issues when it comes to producing a chip of this size. This GPU will be able to offer more than 1 TFlop of double precision computing power at greater than 80% efficiency and 3x the performance per watt of Fermi designs.
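The "more than 1 TFlop" double precision claim is plausible from the core counts alone. A rough sketch of the theoretical peak, assuming all 15 SMX units enabled, a 1:3 FP64-to-FP32 execution ratio, and a hypothetical ~700 MHz core clock (NVIDIA has not announced clocks, so the last two figures are assumptions):

```python
# Rough theoretical FP64 peak for GK110 under assumed clock and FP64 ratio.
cuda_cores = 15 * 192          # 2880 FP32 CUDA cores across 15 SMX units
fp64_units = cuda_cores // 3   # assumed 1:3 double-precision rate
clock_hz = 700e6               # hypothetical core clock, not an announced spec
flops_per_unit = 2             # fused multiply-add counts as 2 FLOPs per cycle

peak_fp64_tflops = fp64_units * flops_per_unit * clock_hz / 1e12
print(round(peak_fp64_tflops, 2))  # ~1.34 TFLOPS, consistent with ">1 TFLOP"
```

Even with a couple of SMX units disabled for yield, a part in this range clears the 1 TFlop bar comfortably.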

02.jpg

Continue reading our overview of the newly announced NVIDIA Kepler GK110 GPU!

Live: First NVIDIA Kepler GK110 GPU Details

Subject: Graphics Cards | May 16, 2012 - 05:14 PM |
Tagged: nvidia, kepler, GTC 2012, gk110

We are posting live from the "Inside Kepler" talk at NVIDIA's GPU Technology Conference with details on the new GK110 GPU.  Here is what we know so far:

  • 7.1 billion transistors
  • 15 SMX (modified) units
  • 2880 available CUDA cores
  • Greater than 1 TFLOP FP64 (double precision) compute
  • 384-bit GDDR5 memory bus

01.jpg

The block diagram for the GK110 GPU

We will update this post with more photos and information as we have it!

02.jpg

Diagram of the updated SMX for GK110

Don't expect to see this GPU until at least Q4 of this year.

Check out our full Kepler GK110 deep-dive article we just posted!  

ZOTAC announces ZOTAC GeForce GT 630, GT 620 and GT 610 series

Subject: Graphics Cards | May 15, 2012 - 05:26 PM |
Tagged: zotac synergy, zotac, nvidia, gt 630, gt 620, GT 610, GK104, geforce, fermi

Zotac has released ten different graphics cards today: three GT 630s, three GT 620s and four GT 610s if you count the PCI version. Enjoy some of the benefits of the new GeForce 600 series without the price of the GTX 680 or 690. These cards are a mix of GF108, GF119 and GK107 chips – essentially rebrands of previous GT series cards rather than the all-new parts NVIDIA would prefer you believe they are.

630_4gb.jpg

They range from this 4GB GT 630 Synergy Edition, which gives you the ability to drive multiple monitors in a work environment...

zotax 610 PCIe.jpg

...to this 1GB PCI Express x1 GT 610 model for low power, low profile applications where an x16 slot just won't fit.

 

HONG KONG – May 15, 2012 – ZOTAC International, a global innovator and channel manufacturer of graphics cards, mainboards and mini-PCs, today expands the successful ZOTAC GeForce 600 series with new value offerings. The ZOTAC GeForce GT 630, GT 620 and GT 610 series deliver a savory taste of Microsoft DirectX 11 technologies for an outstanding visual computing experience.

“ZOTAC is pleased to bring the GeForce 600 series to value shoppers seeking a superior visual experience discrete graphics brings to computing,” said Carsten Berger, marketing director, ZOTAC International. “By installing one of our ZOTAC GeForce GT 630, GT 620 or GT 610 series graphics cards, users can experience faster video and image processing and perfect high-definition video playback with a simple upgrade.”

The ZOTAC GeForce GT 630, GT 620 and GT 610 series are available in a variety of configurations with 512MB, 1GB, 2GB and 4GB memory options in PCI Express 2.0 x16, PCI Express x1 or PCI interfaces, and active or passive cooling configurations to cater exclusively to all user computing needs.

It’s time to play with ZOTAC and the GeForce GT 630, GT 620 and GT 610 series.

zotac logo.jpg

General details

  • ZOTAC Expands successful GeForce 600 series
  • ZOTAC GeForce GT 630 series
    • 96 processor cores
    • 1GB, 2GB and 4GB memory configurations
    • 128-bit memory interface
  • ZOTAC GeForce GT 620 series
    • 96 processor cores
    • 1GB & 2GB memory configurations
    • 64-bit memory interface
  • ZOTAC GeForce GT 610 series
    • 48 processor cores
    • 512MB, 1GB & 2GB memory configurations
    • 64-bit memory interface
  • NVIDIA 3D Vision capable
  • NVIDIA Adaptive Vertical Sync
  • DirectX 11 technology & Shader Model 5.0
  • OpenGL 4.2 compatible
  • Hardware-accelerated Full HD video playback
  • Blu-ray 3D ready
  • Loss-less audio bitstream capable

zotac specs.png

Source: Zotac
Author:
Manufacturer: NVIDIA

NVIDIA puts its head in the clouds

Today at the 2012 NVIDIA GPU Technology Conference (GTC), NVIDIA took the wraps off a new cloud gaming technology that promises to reduce latency and improve the quality of streaming gaming using the power of NVIDIA GPUs.  Dubbed GeForce GRID, NVIDIA is offering the technology to online services like Gaikai and OTOY.  

01.jpg

The goal of GRID is to bring the promise of "console quality" gaming to every device a user has.  The term "console quality" is kind of important here as NVIDIA is trying desperately to not upset all the PC gamers that purchase high-margin GeForce products.  The goal of GRID is pretty simple though and should be seen as an evolution of the online streaming gaming that we have covered in the past–like OnLive.  Being able to play high quality games on your TV, your computer, your tablet or even your phone without the need for high-performance and power hungry graphics processors through streaming services is what many believe the future of gaming is all about. 

02.jpg

GRID starts with the Kepler GPU - what NVIDIA is now dubbing the first "cloud GPU" - that has the capability to virtualize graphics processing while being power efficient.  The inclusion of a hardware fixed-function video encoder is important as well as it will aid in the process of compressing images that are delivered over the Internet by the streaming gaming service. 

 

03.jpg

This diagram shows us how the Kepler GPU handles and accelerates the processing required for online gaming services.  On the server side, the necessary process for an image to find its way to the user is more than just a simple render to a frame buffer.  In current cloud gaming scenarios, the frame buffer has to be copied to main system memory, compressed on the CPU and then sent over the network connection.  With NVIDIA's GRID technology, that capture and compression happens in GPU memory, so the frame can be on its way to the gamer faster.
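As a rough illustration of why skipping the system-memory copy matters, consider a toy latency budget for one frame's server-side path. Every millisecond figure below is a hypothetical placeholder chosen only to show the structure of the two paths, not a measured or NVIDIA-supplied number:

```python
# Toy model of one frame's server-side latency in a cloud gaming pipeline.
# All timings are hypothetical placeholders, not real measurements.
classic_path = {
    "render": 16,                # render into the GPU frame buffer
    "copy_to_system_memory": 5,  # frame buffer -> CPU-visible memory
    "cpu_encode": 15,            # software H.264 compression on the CPU
}
grid_path = {
    "render": 16,                # same render step
    "gpu_encode": 5,             # fixed-function encoder reads GPU memory directly
}

print(sum(classic_path.values()), "ms vs", sum(grid_path.values()), "ms")
```

Whatever the real numbers turn out to be, eliminating both the copy stage and the slower software encode is where GRID's server-side latency savings come from.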

The result is an H.264 stream, compressed quickly and efficiently, sent out over the network to the end user on whatever device they are using. 

Continue reading our editorial on the new NVIDIA GeForce GRID cloud gaming technology!!

NVIDIA Introduces World's First Virtualized GPU, Accelerating Graphics for Cloud Computing

Subject: Shows and Expos | May 15, 2012 - 04:12 PM |
Tagged: NVIDIA VGX, nvidia, GTC 2012, virtual graphics, virtual machine

One of the more interesting announcements so far at GTC has been NVIDIA's wholehearted leap into desktop virtualization with the NVIDIA VGX series of add-on cards.  Not really a graphics card, and more specialized than a Tesla, a VGX board gives you a GPU-accelerated virtual machine.  If you are wondering why you would need that, consider a VM which can handle an Aero desktop and stream live HD video where the processing power comes not from the CPU but from a virtual GPU.  NVIDIA has paired the boards with a GPU Hypervisor that integrates with existing VM platforms to provide virtual GPU control, as well as another piece of software which allows you to pick and choose what graphics resources your users get.

nv-vgx-large.jpg

SAN JOSE, Calif.—GPU Technology Conference—May 15, 2012—NVIDIA today unveiled the NVIDIA VGX platform, which enables IT departments to deliver a virtualized desktop with the graphics and GPU computing performance of a PC or workstation to employees using any connected device.

With the NVIDIA VGX platform in the data center, employees can now access a true cloud PC from any device – thin client, laptop, tablet or smartphone – regardless of its operating system, and enjoy a responsive experience for the full spectrum of applications previously only available on an office PC.

NVIDIA VGX enables knowledge workers for the first time to access a GPU-accelerated desktop similar to a traditional local PC. The platform’s manageability options and ultra-low latency remote display capabilities extend this convenience to those using 3D design and simulation tools, which had previously been too intensive for a virtualized desktop.

Integrating the VGX platform into the corporate network also enables enterprise IT departments to address the complex challenges of “BYOD” – employees bringing their own computing device to work. It delivers a remote desktop to these devices, providing users the same access they have on their desktop terminal. At the same time, it helps reduce overall IT spend, improve data security and minimize data center complexity.

“NVIDIA VGX represents a new era in desktop virtualization,” said Jeff Brown, general manager of the Professional Solutions Group at NVIDIA. “It delivers an experience nearly indistinguishable from a full desktop while substantially lowering the cost of a virtualized PC.”

The NVIDIA VGX platform is part of a series of announcements NVIDIA is making today at the GPU Technology Conference (GTC), all of which can be accessed in the GTC online press room.

The VGX platform addresses key challenges faced by global enterprises, which are under constant pressure both to control operating costs and to use IT as a competitive edge that allows their workforces to achieve greater productivity and deliver new products faster. Delivering virtualized desktops can also minimize the security risks inherent in sharing critical data and intellectual property with an increasingly internationalized workforce.

NVIDIA VGX is based on three key technology breakthroughs:

  • NVIDIA VGX Boards. These are designed for hosting large numbers of users in an energy-efficient way. The first NVIDIA VGX board is configured with four GPUs and 16 GB of memory, and fits into the industry-standard PCI Express interface in servers.
  • NVIDIA VGX GPU Hypervisor. This software layer integrates into commercial hypervisors, such as the Citrix XenServer, enabling virtualization of the GPU.
  • NVIDIA User Selectable Machines (USMs). This manageability option allows enterprises to configure the graphics capabilities delivered to individual users in the network, based on their demands. Capabilities range from true PC experiences available with the NVIDIA standard USM to enhanced professional 3D design and engineering experiences with NVIDIA Quadro or NVIDIA NVS GPUs.

The NVIDIA VGX platform enables up to 100 users to be served from a single server powered by one VGX board, dramatically improving user density on a single server compared with traditional virtual desktop infrastructure (VDI) solutions. It sharply reduces such issues as latency, sluggish interaction and limited application support, all of which are associated with traditional VDI solutions.
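Taking the press release's density claim at face value, the per-user resource budget works out as follows (a simple division of the quoted figures, not anything NVIDIA has stated directly):

```python
# Implied per-user resources on one VGX board, from the press release's
# own numbers: up to 100 users, 4 GPUs, 16 GB of total board memory.
users_per_board = 100
gpus_per_board = 4
board_memory_gb = 16

users_per_gpu = users_per_board / gpus_per_board
memory_per_user_mb = board_memory_gb * 1024 / users_per_board
print(users_per_gpu, memory_per_user_mb)  # 25 users per GPU, ~164 MB each
```

Twenty-five users sharing one GPU and roughly 164 MB of graphics memory apiece is a budget aimed at knowledge-worker desktops, not 3D-heavy workloads, which is exactly the split the USM tiers below address.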

With the NVIDIA VGX platform, IT departments can serve every user in the organization – from knowledge workers to designers – with true PC-like interactive desktops and applications.

NVIDIA VGX Boards
NVIDIA VGX boards are the world’s first GPU boards designed for data centers. The initial NVIDIA VGX board features four GPUs, each with 192 NVIDIA CUDA architecture cores and 4 GB of frame buffer. Designed to be passively cooled, the board fits within existing server-based platforms.

The boards benefit from a range of advancements, including hardware virtualization, which enables many users who are running hosted virtual desktops to share a single GPU and enjoy a rich, interactive graphics experience; support for low-latency remote display, which greatly reduces the lag currently experienced by users; and, redesigned shader technology to deliver higher power efficiency.

NVIDIA VGX GPU Hypervisor
The NVIDIA VGX GPU Hypervisor is a software layer that integrates into a commercial hypervisor, enabling access to virtualized GPU resources. This allows multiple users to share common hardware and ensure virtual machines running on a single server have protected access to critical resources. As a result, a single server can now economically support a higher density of users, while providing native graphics and GPU computing performance.

This new technology is being integrated by leading virtualization companies, such as Citrix, to add full hardware graphics acceleration to their full range of VDI products.

vgx--hypervisor.jpg

NVIDIA User Selectable Machines
NVIDIA USMs allow the NVIDIA VGX platform to deliver the advanced experience of professional GPUs to those requiring them across an enterprise. This enables IT departments to easily support multiple types of users from a single server.

USMs allow better utilization of hardware resources, with the flexibility to configure and deploy new users’ desktops based on changing enterprise needs. This is particularly valuable for companies providing infrastructure as a service, as they can repurpose GPU-accelerated servers to meet changing demand throughout the day, week or season.

Source: NVIDIA

NVIDIA Pioneers New Standard for High Performance Computing with Tesla GPUs

Subject: Shows and Expos | May 15, 2012 - 03:43 PM |
Tagged: tesla, nvidia, GTC 2012, kepler, CUDA

SAN JOSE, Calif.—GPU Technology Conference—May 15, 2012—NVIDIA today unveiled a new family of Tesla GPUs based on the revolutionary NVIDIA Kepler GPU computing architecture, which makes GPU-accelerated computing easier and more accessible for a broader range of high performance computing (HPC) scientific and technical applications.

GTC_horizontal_376_large.jpg

The new NVIDIA Tesla K10 and K20 GPUs are computing accelerators built to handle the most complex HPC problems in the world. Designed with an intense focus on high performance and extreme power efficiency, Kepler is three times as efficient as its predecessor, the NVIDIA Fermi architecture, which itself established a new standard for parallel computing when introduced two years ago.

“Fermi was a major step forward in computing,” said Bill Dally, chief scientist and senior vice president of research at NVIDIA. “It established GPU-accelerated computing in the top tier of high performance computing and attracted hundreds of thousands of developers to the GPU computing platform. Kepler will be equally disruptive, establishing GPUs broadly into technical computing, due to their ease of use, broad applicability and efficiency.”

servers-workstations-on.png

The Tesla K10 and K20 GPUs were introduced at the GPU Technology Conference (GTC), as part of a series of announcements from NVIDIA, all of which can be accessed in the GTC online press room.

NVIDIA developed a set of innovative architectural technologies that make the Kepler GPUs high performing and highly energy efficient, as well as more applicable to a wider set of developers and applications. Among the major innovations are:

  • SMX Streaming Multiprocessor – The basic building block of every GPU, the SMX streaming multiprocessor was redesigned from the ground up for high performance and energy efficiency. It delivers up to three times more performance per watt than the Fermi streaming multiprocessor, making it possible to build a supercomputer that delivers one petaflop of computing performance in just 10 server racks. SMX’s energy efficiency was achieved by increasing its number of CUDA architecture cores by four times, while reducing the clock speed of each core, power-gating parts of the GPU when idle and maximizing the GPU area devoted to parallel-processing cores instead of control logic.
  • Dynamic Parallelism – This capability enables GPU threads to dynamically spawn new threads, allowing the GPU to adapt dynamically to the data. It greatly simplifies parallel programming, enabling GPU acceleration of a broader set of popular algorithms, such as adaptive mesh refinement, fast multipole methods and multigrid methods.
  • Hyper-Q – This enables multiple CPU cores to simultaneously use the CUDA architecture cores on a single Kepler GPU. This dramatically increases GPU utilization, slashing CPU idle times and advancing programmability. Hyper-Q is ideal for cluster applications that use MPI.

“We designed Kepler with an eye towards three things: performance, efficiency and accessibility,” said Jonah Alben, senior vice president of GPU Engineering and principal architect of Kepler at NVIDIA. “It represents an important milestone in GPU-accelerated computing and should foster the next wave of breakthroughs in computational research.”

NVIDIA Tesla K10 and K20 GPUs
The NVIDIA Tesla K10 GPU delivers the world’s highest throughput for signal, image and seismic processing applications. Optimized for customers in oil and gas exploration and the defense industry, a single Tesla K10 accelerator board features two GK104 Kepler GPUs that deliver an aggregate performance of 4.58 teraflops of peak single-precision floating point and 320 GB per second memory bandwidth.
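GK104 carries 1536 CUDA cores (as on the desktop GTX 680), so the quoted aggregate throughput implies a per-GPU clock of roughly 745 MHz. This is an inference from the press release's numbers, not an announced specification:

```python
# Infer the per-GPU clock implied by the Tesla K10's quoted peak throughput.
aggregate_sp_tflops = 4.58  # press release figure for the whole board
gpus = 2                    # two GK104 chips per K10 board
cores_per_gpu = 1536        # GK104 CUDA core count, as on the GTX 680
flops_per_core = 2          # fused multiply-add counts as 2 FLOPs per cycle

implied_clock_mhz = (aggregate_sp_tflops * 1e12
                     / (gpus * cores_per_gpu * flops_per_core) / 1e6)
print(round(implied_clock_mhz))  # ~745 MHz
```

That would put the Tesla parts well below the GTX 680's 1006 MHz base clock, consistent with the press release's emphasis on power efficiency over raw clock speed.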

The NVIDIA Tesla K20 GPU is the new flagship of the Tesla GPU product family, designed for the most computationally intensive HPC environments. Expected to be the world’s highest-performance, most energy-efficient GPU, the Tesla K20 is planned to be available in the fourth quarter of 2012.

The Tesla K20 is based on the GK110 Kepler GPU. This GPU delivers three times more double precision compared to Fermi architecture-based Tesla products and it supports the Hyper-Q and dynamic parallelism capabilities. The GK110 GPU is expected to be incorporated into the new Titan supercomputer at the Oak Ridge National Laboratory in Tennessee and the Blue Waters system at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.

“In the two years since Fermi was launched, hybrid computing has become a widely adopted way to achieve higher performance for a number of critical HPC applications,” said Earl C. Joseph, program vice president of High-Performance Computing at IDC. “Over the next two years, we expect that GPUs will be increasingly used to provide higher performance on many applications.”

Preview of CUDA 5 Parallel Programming Platform
In addition to the Kepler architecture, NVIDIA today released a preview of the CUDA 5 parallel programming platform. Available to more than 20,000 members of NVIDIA’s GPU Computing Registered Developer program, the platform will enable developers to begin exploring ways to take advantage of the new Kepler GPUs, including dynamic parallelism.

The CUDA 5 parallel programming model is planned to be widely available in the third quarter of 2012. Developers can get access to the preview release by signing up for the GPU Computing Registered Developer program on the CUDA website.

Source: NVIDIA

NVIDIA GPU Technology Conference Keynote Live Stream

Subject: General Tech, Graphics Cards, Shows and Expos | May 15, 2012 - 10:14 AM |
Tagged: nvidia, GTC 2012, live

Are you interested in GPUs?  Maybe GPU computing or even some cloud-based GeForce announcements?  Chances are then you'll want to tune in to the NVIDIA GPU Technology Conference keynote today at 10:30am PT / 1:30pm ET.  

gtc01.png

NVIDIA CEO Jen-Hsun Huang is expected to be on stage with three new announcements, one of which will likely be the GK110 GPU we have all been waiting to hear about.  Another has been teased as "a new major cloud gaming technology" while the third...well, I really have no idea. It should be exciting though, so tune in and watch along with us!

You can catch it all at http://www.gputechconf.com/!

What's up at the GTC? Check out this BOXX!

Subject: General Tech | May 14, 2012 - 03:31 PM |
Tagged: tesla, quadro, nvidia, maximus, GTC 2012, BOXX

There are many professional-level products to be seen at this year's GPU Technology Conference, one of the more impressive being NVIDIA Maximus technology, which takes the power of a Quadro and couples it with the new Tesla GPUs for impressive live rendering and CAD applications.  These products are not for gamers so much as for game designers and graphical artists, but the technology itself is still something to keep your eyes on. 

maximus_bnr.jpg

One of the vendors you will see is BOXX, with several different lines of computers designed for 3ds Max, CATIA V6 live rendering, SolidWorks and other professional-level HPC applications.  With an NVIDIA Quadro 6000 6 GB, a Tesla C2075 6 GB and a 240GB SSD for cache and programs, you will be rendering like never before.

Ryan will be at GTC, so keep an eye on the page for news from the show when it begins in the middle of this week.  NVIDIA's Maximus technology is sure to feature in some of these stories, but do keep in mind this is GTC and not GDC, so new game previews are unlikely, though new benchmark software and proof-of-concept game engines might appear.

gdcBOXX.png

"3DBOXX workstations featuring NVIDIA Maximus technology combine the visualization and interactive design capability of NVIDIA Quadro GPUs with the high-performance computing power of NVIDIA Tesla C2075 GPUs into a single system."


Source: BOXX

The GTX 670 and the Case of the Missing (and Returning) 4-Way SLI Support

Subject: Graphics Cards | May 11, 2012 - 04:57 PM |
Tagged: sli, nvidia, kepler, gtx 670, GK104, geforce

In our launch review of the GeForce GTX 670 2GB graphics card this week, we had initially mentioned that these $399 graphics cards would support SLI, 3-Way SLI and even 4-Way SLI configurations thanks to the pair of SLI connections on the PCB.  We received an update from NVIDIA later on that day that in fact it would NOT support 4-Way SLI.

07.JPG

The message from NVIDIA was pretty clear cut:

"As I’m sure you can imagine, we have to QA every feature that we claim support for and this takes a tremendous amount of time/resources. For the GTX 680 and GTX 690, we do support Quad SLI and take the time to QA it, as it makes sense for the extreme OC’ers and ultra-enthusiasts who are shooting to break world records."

My reply:

But with the similarities between the GTX 680 and the GTX 670, is there really any QA addition required to enable quad for 670? Seems like a cop-out to me man...

I saw it mostly as a reason to differentiate the GTX 670 and the GTX 680 with a feature since the performance between the cards was very similar; maybe too similar for NVIDIA's tastes with the $100 price difference.  

Well this afternoon we received some good news from our contact at NVIDIA:

"Change in plans.....we will be offering 4-Way SLI support for GTX 670 in a future driver."

So while the 301.34 driver will not support 4-Way configurations with the GTX 670, 4-Way SLI will in fact be enabled after all in a future version.  We'll be sure to keep you in the loop when that happens and the super-extreme enthusiasts can rejoice.  

This does go to show the fundamental differences between AMD's license-free and seemingly more "open" CrossFire technology and NVIDIA's for-fee SLI technology.  With enough feedback and prodding in the right direction, NVIDIA can and does do the right thing; just look at the success we had convincing them to support SLI on AMD CPU platforms last year.  

Feet to the fire everyone!