GTC 2013: Oculus VR Reveals Future of Oculus Rift at ECS

Subject: General Tech | April 1, 2013 - 02:37 AM |
Tagged: virtual reality, oculus vr, oculus rift, GTC 2013, gaming

Update: Yesterday, an Oculus representative reached out to clarify that the information presented was forward-looking and subject to change (naturally). Later in the day, the company also posted the following statement to its forum:

"The information from that presentation (dates, concepts, projections, etc...) represent our vision, ideas, and on-going research/exploration. None of it should be considered fact, though we'd love to have that projected revenue!"

You can find the full statement in this thread.

The original article text is below:

Oculus VR, the company behind the Oculus Rift virtual reality headset, took the stage at NVIDIA's GPU Technology Conference to talk about the state of its technology and where it is headed.


Oculus VR is a relatively new company founded by Palmer Luckey and managed by CEO Brendan Iribe, the former CPO of cloud gaming company Gaikai. Currently, Oculus VR is developing a wearable, 3D, virtual reality headset called the Oculus Rift. Initially launched via Kickstarter, the Oculus Rift hardware is now shipping to developers as the rev 1 developer kit. Oculus VR raised $2.55 million in 2012 and will manufacture 10,000 developer kits.


The developer kit has a resolution of 1280x800 and weighs 320g. It takes a digital video input via a DVI or HDMI cable (HDMI with an adapter). The goggles hold the display and internals, and a control box connects via a wire to provide power. It uses several pieces of hardware found in smartphones, and CEO Brendan Iribe even hinted that an ongoing theme at Oculus VR is that "if it's not in a cell phone, it's not in Oculus." It delivers a 3D experience with head tracking, but Iribe indicated that full-motion VR is coming in the future. For now, head tracking allows you to look around the game world, but "in five to seven or eight years" virtual reality setups that combine an Oculus Rift-like headset with an omni-directional treadmill would allow you to walk and run around the world in addition to simply looking around.
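As a rough illustration of what head tracking means on the software side, the sketch below turns a tracked head orientation into a first-person camera direction each frame. This is a generic, hypothetical example; the tracker readout and angle convention are invented for illustration and are not Oculus SDK calls:

```python
import numpy as np

def orientation_to_forward(yaw, pitch):
    """Convert tracked head yaw/pitch (radians) into a camera forward vector.

    Convention (hypothetical): yaw rotates about the vertical Y axis,
    pitch about the sideways X axis; -Z is "straight ahead".
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    rot_y = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw
    rot_x = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch
    return rot_y @ rot_x @ np.array([0.0, 0.0, -1.0])

# Each frame: read the headset's orientation and re-aim the camera.
for yaw_deg, pitch_deg in [(0, 0), (90, 0), (0, 45)]:  # simulated samples
    fwd = orientation_to_forward(np.radians(yaw_deg), np.radians(pitch_deg))
    print(f"yaw={yaw_deg:3d}, pitch={pitch_deg:3d} -> forward {np.round(fwd, 2)}")
```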

Beyond the immersion factor, a full-motion VR setup would reduce (and possibly eliminate) the phenomenon of VR sickness, where users wearing VR headsets for extended periods experience discomfort due to the disconnect between their perceived in-game movement and their (lack of) physical movement and inner-ear balance.


After the first developer kit, Oculus is planning to release a revised version and, ultimately, a consumer version. The consumer version is slated for a Q3 2014 launch. It will weigh significantly less (Oculus VR is aiming for around 200g) and will support 1080p 3D resolutions. The sales projections estimate 50,000 revision 2 developer kits in 2013 and at least 500,000 consumer versions of the Oculus Rift in 2014. Ambitious numbers, for sure, but if Oculus can nail down next-generation console support, reduce the weight of the headset, and increase the resolution, it is not out of the question.


With the consumer version, Oculus is hoping to offer both a base wired version and a higher-priced wireless Rift VR headset. Further, the company is working with developers of games and professional 3D creation software to get support for the VR headset. Team Fortress 2 support has been announced, for example (and there will even be an Oculus Rift hat, for gamers who are into hats). Additionally, Oculus is working to get support into the following software titles (among others):

  • AutoDesk 3D
  • TF2
  • DOTA 2
  • L4D
  • Half-Life
  • Warface
  • Minecraft
  • Fortnite
  • UT3
  • Hawken
  • Crysis

During the presentation, Iribe stated that graphics cards (he specifically mentioned the GTX 680) are finally in a place to deliver 3D with smooth frame rates at high-enough resolutions for immersive virtual reality.

Left: potential games with Oculus VR support. Right: Oculus VR CEO Brendan Iribe at ECS during GTC 2013.

Pricing on the consumer version of the VR headset is still unknown, but developers can currently pre-order an Oculus Rift developer kit on the Oculus VR site. In the past, the company has stated that consumers should hold off on buying a developer kit and wait for the consumer version of the Rift in 2014. If the company is able to deliver on its claims of a lighter headset with a higher-resolution screen and adjustable 3D effects (as on the 3DS, the level of stereo 3D can be adjusted or even turned off), I think it will be worth the wait. The deciding factor will then be software support. Hopefully developers will take to the VR technology and offer support for it in upcoming titles.

Are you excited for the Oculus Rift?

GTC 2013: eyeSight Will Use GPUs To Improve Its Gesture Recognition Software

Subject: General Tech | March 31, 2013 - 05:43 PM |
Tagged: nvidia, lenovo yoga, GTC 2013, GTC, gesture control, eyesight, ECS

During the Emerging Companies Summit at NVIDIA's GPU Technology Conference, Israeli company EyeSight Mobile Technologies' CEO Gideon Shmuel took the stage to discuss the future of its gesture recognition software. He also provided insight into how EyeSight plans to use graphics cards to improve and accelerate the process of identifying, and responding to, finger and hand movements along with face detection.


EyeSight is a five-year-old company that has developed gesture recognition software that can be installed on existing machines (though it appears to be aimed more at OEMs than directly at consumers). It uses standard cameras, such as webcams, for its 2D input data and then derives a relative Z-axis using proprietary algorithms. This gives EyeSight essentially 2.5D of input data and, camera resolution and frame rate permitting, allows the software to identify and track finger and hand movements. EyeSight CEO Gideon Shmuel stated at the ECS presentation that the software is currently capable of "finger-level accuracy" at 5 meters from a TV.
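EyeSight's depth algorithms are proprietary, but one well-known way to pull a relative Z out of a single 2D camera is the pinhole-camera relation: an object's apparent size shrinks in inverse proportion to its distance. Here is a minimal sketch of that idea, with the hand detector stubbed out and all numbers invented:

```python
def relative_depth(observed_width_px, reference_width_px):
    """Pinhole-camera approximation: apparent size ~ 1/distance, so the
    relative depth is the inverse size ratio (1.0 = calibration pose)."""
    return reference_width_px / observed_width_px

reference = 120.0  # hand bounding-box width (px) captured at calibration
for observed in (120.0, 60.0, 240.0):  # simulated per-frame detections
    z = relative_depth(observed, reference)
    print(f"hand width {observed:5.1f}px -> relative depth {z:.2f}x")
# A 60px hand reads as twice the calibration distance; 240px as half of it.
```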


Gestures include using your fingers as a mouse to point at on-screen objects, waving your hand to turn pages, scrolling, and even giving hand-signal cues.

The software is not open source, and there are no plans to move in that direction. The company has 15 patents pending on its technology, several of which it managed to file before the US Patent Office changed from First to Invent to First Inventor to File (heh, which is another article...). The software will support up to 20 million hardware devices in 2013, and EyeSight expects the number of compatible camera-packing devices to increase to as many as 3.5 billion in 2015. Other features include the ability to transparently map EyeSight input to Android apps without users needing to muck with settings, and the ability to detect faces and "emotional signals" even in low light. According to the website, SDKs are available for Windows, Linux, and Android. The software maps the gestures it recognizes to Windows shortcuts to increase compatibility with many existing applications (so long as they support keyboard shortcuts).
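The shortcut-mapping approach is simple to picture. Below is a hypothetical sketch of such a dispatch table; the gesture names and keystroke backend are invented for illustration (a real Windows implementation would synthesize the keys, e.g. via the Win32 SendInput API):

```python
# Hypothetical gesture -> keyboard shortcut table in the spirit of
# EyeSight's approach (names invented for illustration).
GESTURE_TO_SHORTCUT = {
    "swipe_left":  ("ctrl", "pgup"),    # previous tab/page in many apps
    "swipe_right": ("ctrl", "pgdn"),    # next tab/page
    "palm_push":   ("space",),          # play/pause in most media players
}

def on_gesture(name, send_keys):
    """Dispatch a recognized gesture to its mapped keyboard shortcut."""
    shortcut = GESTURE_TO_SHORTCUT.get(name)
    if shortcut:
        send_keys(shortcut)

on_gesture("swipe_right", send_keys=print)  # stub backend just prints
```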


Currently, the EyeSight software mostly runs on the CPU, but the company is investing heavily in GPU support. Moving the processing to GPUs will allow the software to run faster and more power efficiently, especially on mobile devices (NVIDIA's Tegra platform was specifically mentioned). EyeSight's roadmap includes using GPU acceleration to bolster the number of supported gestures, move image processing to the GPU, add velocity and vector control inputs, incorporate a better low-light filter (which will run on the GPU), and offload processing from the CPU to optimize power management and save CPU resources for the OS and other applications, which is especially important for mobile devices. Gideon Shmuel also stated that he wants to see the technology used on "anything with a display," from your smartphone to your air conditioner.


A basic version of the EyeSight input technology reportedly comes installed on the Lenovo Yoga convertible tablet. I think this software has potential, and would provide that Minority Report-like interaction that many enthusiasts wish for. Hopefully, EyeSight can deliver on its claimed accuracy figures and OEMs will embrace the technology by integrating it into future devices.

EyeSight has posted additional video demos and information about its touch-free technology on its website.

Do you think this "touch-free" gesture technology has merit, or will this type of input remain limited to awkward integration in console games?

GTC 2013: Fuzzy Logix Launches Tanay Rx for GPU Accelerating Analytic Models Programmed In R

Subject: General Tech | March 26, 2013 - 08:40 PM |
Tagged: GTC 2013, gpu analytics, gpgpu, fuzzy logix

Fuzzy Logix, a company that specializes in HPC data analytics, recently unveiled a new extension to its Tanay Zx library, called Tanay Rx, that will GPU accelerate analytic models written in R. R is a programming language commonly used by statisticians. It is reportedly relatively easy to program in, but suffers from an inherent lack of multi-threading performance and from memory limitations. With Tanay Rx, Fuzzy Logix is hoping to combine the performance benefits of its Tanay Zx libraries with the simplicity of R programming. According to Fuzzy Logix, Tanay Rx is "the perfect prescription to cure performance issues with R."


Tanay Zx allowed many programming languages to run models on the GPU via .net, .dll, or shared object calls, and the new Tanay Rx extension extends that functionality to statistical and analytic models run using R. Supported models include such data-intensive tasks as matrix operations, Monte Carlo simulations, data mining, and financial mathematics (equities, fixed income, and time series analysis). Fuzzy Logix claims to enable R users to run over 500 analytic models 10 to 100 times faster by harnessing the parallel processing power of graphics and accelerator cards such as NVIDIA's Quadro and Tesla cards, Intel's MIC, and AMD's FirePro cards.

As an example, Fuzzy Logix states that calculations for the intra-day risk of equity, interest rate, and FX options, amounting to approximately 1 billion future scenarios, can be performed in milliseconds on the GPU. While some conversions may be more involved, certain aspects of R code can be sped up simply by replacing R functions with Fuzzy Logix's own Tanay Rx functions.
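Fuzzy Logix has not published the Tanay Rx function signatures, so as a generic illustration of this workload class, here is a vectorized Monte Carlo pricer for a European call option (Python/NumPy standing in for R; a GPU-accelerated version would keep the same shape while generating the paths on the card):

```python
import numpy as np

def mc_european_call(s0, strike, rate, vol, t, n_paths, seed=0):
    """Monte Carlo price of a European call under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal price: S_T = S0 * exp((r - sigma^2/2)*T + sigma*sqrt(T)*Z)
    s_t = s0 * np.exp((rate - 0.5 * vol**2) * t + vol * np.sqrt(t) * z)
    payoff = np.maximum(s_t - strike, 0.0)
    return np.exp(-rate * t) * payoff.mean()  # discount the mean payoff

# One million scenarios on the CPU; the billion-scenario figure quoted
# above is this same pattern scaled out across GPUs.
print(mc_european_call(s0=100.0, strike=105.0, rate=0.01,
                       vol=0.20, t=1.0, n_paths=1_000_000))
```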

(Image: a sample Fuzzy Logix CUDA function, as per Fuzzy Logix's website.)

Industry solutions implementing Tanay Rx for the financial, healthcare, internet marketing, pharmaceutical, oil, gas, insurance, and other sectors are available now. More information on the company's approach to GPGPU analytics is available here.

Source: Fuzzy Logix

Boxx Launches 3DBoxx 8950 Workstation

Subject: General Tech, Systems | March 26, 2013 - 03:18 PM |
Tagged: workstation, nvidia, GTC 2013, BOXX, 3dboxx 8950

Boxx Technologies recently launched a new multi-GPU workstation called the 3DBoxx 8950. It is aimed at professionals who need a fast system with beefy GPU accelerator cards so they can design and render at the same time. The 8950 is intended to be used with applications from Autodesk and Dassault and with renderers such as NVIDIA iray and V-Ray, among others.


The Boxx 3DBoxx 8950 features two liquid-cooled Intel Xeon E5-2600 processors (2GHz, 16 cores, 32 threads), up to 512GB of system memory (16 DIMM slots), and seven PCI-E slots (four of which accept dual-slot GPUs; the remaining three are spaced for single-slot cards). A 1250W power supply (80 PLUS Gold) powers the workstation. An example configuration would include three Tesla K20 cards and one Quadro K5000: the Tesla cards would handle the computation while the Quadro powers the multi-display output. The chassis has room for eight 3.5" hard drives and a single externally-accessible 5.25" drive. The 8950 workstation can be loaded with either Windows or Linux.

Rear IO on the 8950 workstation includes:

  • 5 x audio jacks
  • 1 x optical in/out
  • 4 x USB 2.0 ports
  • 1 x serial port
  • 2 x RJ45 jacks, backed by Intel Gigabit NICs

The system is available now, with pricing available upon request. You can find the full list of specifications and supported hardware configurations in this spec sheet (PDF).

Source: Boxx

Podcast #243 - ASUS Crosshair V Formula-Z, MSI Z77A-G45 Thunderbolt, 2TB SSDs and more!

Subject: General Tech | March 21, 2013 - 11:54 AM |
Tagged: z77a-g45 thunderbolt, video, tegra, quadro, podcast, GTX 690, GTC 2013, DDR3-3000, Crosshair V Formula Z, 2tb ssd

PC Perspective Podcast #243 - 03/21/2013

Join us this week as we discuss the ASUS Crosshair V Formula-Z, MSI Z77A-G45 Thunderbolt, 2TB SSDs and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Josh Walrath, Jeremy Hellstrom, Allyn Malventano, Morry Teitelman, and sometimes Ken Addison

This Podcast is brought to you by MSI!

Program length: 1:18:24

Podcast topics of discussion:
  1. Week in Review:
  2. News items of interest:
  3. Closing:
    1. 1:09:50 Hardware / Software Pick of the Week
      1. Morry: Memory, always more memory - G.Skill Sniper 1866 16GB DDR3
  4. 1-888-38-PCPER or podcast@pcper.com


GTC 2013: Cortexica Vision Systems Talks About the Future of Image Recognition During the Emerging Companies Summit

Subject: General Tech, Graphics Cards | March 20, 2013 - 06:44 PM |
Tagged: video fingerprinting, image recognition, GTC 2013, gpgpu, cortexica, cloud computing

The Emerging Companies Summit is a series of sessions at NVIDIA's GPU Technology Conference (GTC) that gives the floor to CEOs from several up-and-coming technology startups. Earlier today, the CEO of Cortexica Vision Systems took the stage to talk briefly about the company's products and future direction, and to answer questions from a panel of industry experts.

If you tuned into NVIDIA's keynote presentation yesterday, you may have noticed the company showing off a new image recognition technology. That technology is being developed by a company called Cortexica Vision Systems. While it cannot perform facial recognition, it is capable of identifying everything else, according to the company's CEO, Ian McCready. Currently, Cortexica is employing a cluster of approximately 70 NVIDIA graphics cards, but the system is capable of scaling beyond that. McCready estimates that about 100 GPUs and a CPU would be required by a company like eBay, should they want to implement Cortexica's image recognition technology in-house.


The Cortexica technology uses images captured by a camera (such as the one in your smartphone), which are then sent to Cortexica's servers for processing. The GPUs in the Cortexica cluster handle the fingerprint creation task while the CPU does the actual lookup in the database of known fingerprints to either find an exact match or return similar image results. According to Cortexica, the fingerprint creation takes only 100ms, and as more powerful GPUs make it into mobile devices, it may become possible to do the fingerprint creation on the device itself, reducing the time between taking a photo and getting relevant results back.
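Cortexica's actual fingerprint is proprietary, but the division of labor it describes (GPUs condense each image into a compact fingerprint, while the CPU matches fingerprints against a database) can be sketched with a stand-in feature. Everything below is illustrative, not Cortexica's algorithm; a normalized intensity histogram plays the role of the fingerprint:

```python
import numpy as np

def fingerprint(image):
    """Stand-in fingerprint: a normalized 64-bin grayscale histogram.
    In Cortexica's system this step runs on the GPU cluster (~100 ms)."""
    hist, _ = np.histogram(image, bins=64, range=(0, 256))
    return hist / max(hist.sum(), 1)

def lookup(query_fp, database):
    """CPU-side matching: rank known images by fingerprint distance."""
    return sorted(database, key=lambda k: np.linalg.norm(query_fp - database[k]))

# Toy database of two "known" images plus a noisy re-shot of the first.
rng = np.random.default_rng(1)
car = rng.integers(0, 256, size=(64, 64))
db = {"car": fingerprint(car),
      "logo": fingerprint(rng.integers(100, 200, size=(64, 64)))}
query = np.clip(car + rng.integers(-5, 6, size=(64, 64)), 0, 255)
print(lookup(fingerprint(query), db))  # -> ['car', 'logo'], best match first
```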


The image recognition technology is currently being used by eBay Motors in the US, UK, and Germany. Cortexica hopes to find a home with fashion companies that would use the technology to let people identify, and ultimately purchase, clothing they photograph on television or in public. The technology can also perform 360-degree object recognition, identify logos as small as 0.4% of the screen, and identify videos. In the future, Cortexica hopes to reduce latency, improve recognition accuracy, and add more search categories. Cortexica is also working on enabling an "always on" mobile device that will constantly be identifying everything around it, which is both cool and a bit creepy. With mobile chips like Logan and Parker coming in the future, Cortexica hopes to be able to do on-device image recognition, which would greatly reduce latency and allow the use of the recognition technology while not connected to the internet.


The number of photos taken is growing rapidly; as many as 10% of all photos stored "in the cloud" were taken last year alone. Even Facebook, with its massive data centers, is moving to a cold-storage approach to save money on the electricity costs of storing and serving up those photos. And while some of these photos have relevant metadata, the majority do not. Cortexica claims that its technology can get around that issue by identifying photos, as well as finding similar photos, using its algorithms.


Stay tuned to PC Perspective for more GTC coverage!


CEO Jen-Hsun Huang Sells Windows RT... A Little Bit.

Subject: Editorial, General Tech, Processors, Shows and Expos | March 20, 2013 - 03:26 PM |
Tagged: windows rt, nvidia, GTC 2013

NVIDIA develops processors, but without an x86 license they are only able to power ARM-based operating systems. When it comes to Windows, that means Windows Phone or Windows RT. According to multiple OEMs, the latter segment of the market has seen disappointing sales, which Microsoft in turn blames on the OEMs, but the jolly green GPU company is not crying doomsday.


NVIDIA just skimming the Surface RT, they hope.

As reported by The Verge, NVIDIA CEO Jen-Hsun Huang was optimistic that Microsoft would eventually let Windows RT blossom. He noted how Microsoft very often "gets it right" at some point when they push an initiative. And it is true, Microsoft has a history of turning around perceived disasters across a variety of devices.

They also have a history of, as they call it, "knifing the baby."

I think there is a very real fear for some that Microsoft could consider Intel's latest offerings good enough to stop pursuing ARM. Of course, the more they pursue ARM, the more their business model will rely upon the-interface-formerly-known-as-Metro and likely all of its certification politics. As such, I think it is safe to say that I am watching the industry teeter on a fence with a bear on one side and a pack of rabid dogs on the other. On the one hand, Microsoft jumping back to Intel would allow them to perpetuate the desktop and all of the openness it provides. On the other hand, even if they stick with Intel they will likely just kill the desktop anyway, for the sake of user confusion and the security benefits of certification. We might just have fewer processor manufacturers when they do that.

So it could be that NVIDIA is confident that Microsoft will push Windows RT, or it could be that NVIDIA is pushing Microsoft to continue to develop Windows RT. Frankly, I do not know which would be better... or more accurately, worse.

Source: The Verge

GTC 2013: Pedraforca Is A Power Efficient ARM + GPU Cluster For Homogeneous (GPU) Workloads

Subject: General Tech, Graphics Cards | March 20, 2013 - 10:47 AM |
Tagged: tesla, tegra 3, supercomputer, pedraforca, nvidia, GTC 2013, GTC, graphics cards, data centers

There is a lot of talk about heterogeneous computing at GTC, in the sense of adding graphics cards to servers. If you have HPC workloads that can benefit from GPU parallelism, adding GPUs gives you computing performance in less physical space, and using less power, than a CPU-only cluster (for equivalent TFLOPS).

However, there was a session at GTC that actually took things to the opposite extreme. Instead of a CPU-only cluster or a mixed cluster, Alex Ramirez (leader of the Heterogeneous Architectures Group at the Barcelona Supercomputing Center) is proposing a homogeneous GPU cluster called Pedraforca.

Pedraforca V2 combines NVIDIA Tesla GPUs with low power ARM processors. Each node is comprised of the following components:

  • 1 x Mini-ITX carrier board
  • 1 x Q7 module (which hosts the ARM SoC and memory)
    • Current config is one Tegra 3 @ 1.3GHz and 2GB DDR2
  • 1 x NVIDIA Tesla K20 accelerator card (1170 GFLOPS)
  • 1 x InfiniBand 40Gb/s card (Mellanox ConnectX-3)
  • 1 x 2.5" SSD (SATA 3 MLC, 250GB)

The ARM processor is used solely for booting the system and facilitating GPU communication between nodes. It is not intended to be used for computing. According to Dr. Ramirez, in situations where running code on a CPU would be faster, it would be best to have a small number of Intel Xeon-powered nodes do the CPU-favorable computing and then offload the parallel workloads to the GPU cluster over the InfiniBand connection (though this is less than ideal; Pedraforca is most efficient with data sets that can be processed solely on the Tesla cards).


While Pedraforca is not necessarily locked to NVIDIA's Tegra hardware, Tegra is currently the only SoC that meets the project's needs: the system requires the ARM chip to have PCI-E support. The Tegra 3 SoC has four PCI-E lanes, so the carrier board uses two PLX chips to allow the Tesla and InfiniBand cards to both be connected.

The researcher stated that he is also looking forward to using NVIDIA's upcoming Logan processor in the Pedraforca cluster. It will reportedly be possible to upgrade existing Pedraforca clusters with the new chips by replacing the existing (Tegra 3) Q7 module with one that has the Logan SoC when it is released.

Pedraforca V2 has an initial cluster size of 64 nodes. While the speaker was reluctant to provide TFLOPS performance numbers, as they depend on the workload, 64 Tesla K20 cards should provide respectable performance. The intent of the cluster is to save power by using a low-power CPU. If your server kernel and applications can run on GPUs alone, there are noticeable power savings to be had by switching from a ~100W Intel Xeon chip to a lower-power (approximately 2-3W) Tegra 3 processor. If you have a kernel that needs to run on a CPU, it is recommended to run the OS on an Intel server and transfer just the GPU work to the Pedraforca cluster. Each Pedraforca node reportedly draws under 300W, with the Tesla card accounting for the majority of that figure. Despite the limitations, and the niche nature of the workloads and software necessary to get the full power-saving benefits, Pedraforca is certainly an interesting take on a homogeneous server cluster!
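The host-power arithmetic is easy to check against the approximate figures quoted above (a rough sketch only; real consumption varies with load):

```python
# Approximate host-CPU power savings across the 64-node cluster,
# using the figures quoted above (~100W Xeon vs ~3W Tegra 3).
xeon_w, tegra_w, nodes = 100, 3, 64
print(f"host-CPU savings: ~{(xeon_w - tegra_w) * nodes / 1000:.1f} kW")   # ~6.2 kW
print(f"cluster ceiling:  <{300 * nodes / 1000:.1f} kW at <300 W/node")   # <19.2 kW
```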


In another session, on the path to exascale computing, power use in data centers was listed as one of the biggest hurdles to reaching exaflop levels of performance. While Pedraforca is not the answer to exascale, it should at least be a useful learning experience in wringing the most parallelism out of code and pushing GPGPU to its limits, and that research will help other clusters use GPUs more efficiently as researchers explore the future of computing.

The Pedraforca project built upon research conducted on Tibidabo, a multi-core ARM CPU cluster, and CARMA (a CUDA-on-ARM development kit), which pairs a Tegra SoC with an NVIDIA Quadro card. The two slides below show CARMA benchmarks and a Tibidabo cluster.

Stay tuned to PC Perspective for more GTC 2013 coverage!


GTC 2013: TYAN Launches New HPC Servers Powered by Kepler-based Tesla Cards

Subject: General Tech, Graphics Cards | March 19, 2013 - 03:52 PM |
Tagged: GTC 2013, tyan, HPC, servers, tesla, kepler, nvidia

Server platform manufacturer TYAN is showing off several of its latest servers aimed at the high performance computing (HPC) market. The new servers range in size from 2U to 4U chassis and hold up to eight Kepler-based Tesla accelerator cards. The new product lineup consists of two motherboards and three bare-bones systems: the S7055 and S7056 are the motherboards, while the FT77-B7059, TA77-B7061, and FT48-B7055 are the bare-bones systems.


The TA77-B7061 is the smallest system, with support for two Intel Xeon E5-2600 processors and four Kepler-based Tesla accelerator cards. The FT48-B7055 has similar specifications but is housed in a 4U chassis. Finally, the FT77-B7059 is a 4U system with support for two Intel Xeon E5-2600 processors and up to eight Tesla accelerator cards. Of the bare motherboards, the S7055 supports a maximum of four GPUs while the S7056 supports two Tesla cards, though you will have to supply your own cards, processors, and RAM (of course).


According to TYAN, the new Kepler-based HPC systems will be available in Q2 2013, though there is no word on pricing yet.

Stay tuned to PC Perspective for further GTC 2013 Coverage!

GTC 2013: Jen-Hsun Huang Takes the Stage to Discuss NVIDIA's Future, New Hardware

Subject: General Tech, Graphics Cards | March 19, 2013 - 11:55 AM |
Tagged: unified virtual memory, ray tracing, nvidia, GTC 2013, grid vca, grid, graphics cards

Today, NVIDIA CEO Jen-Hsun Huang stepped on stage to present the GTC keynote. In the presentation (which was live streamed on the GTC website and archived here), NVIDIA discussed five major points, looking back over the past year and into the future of its mobile and professional products. In addition to the product roadmap, NVIDIA discussed the state of computer graphics and GPGPU software. Remote graphics and GPU virtualization were also on tap. Finally, towards the end of the keynote, the company revealed its first appliance, the NVIDIA GRID VCA. The culmination of NVIDIA's GRID and GPU virtualization technology, the VCA is a device that hosts up to 16 virtual machines, each of which can tap into one of 16 Kepler-based graphics processors (8 cards, 2 GPUs per card) to fully hardware accelerate software running on the VCA. Three new mobile Tegra parts and two new desktop graphics processors were also hinted at, with improvements to power efficiency and performance.


On the desktop side of things, NVIDIA's roadmap included two new GPUs. Following Kepler, NVIDIA will introduce Maxwell and then Volta. Maxwell will feature a new virtualized memory technology called Unified Virtual Memory. This tech will allow both the CPU and GPU to read from a single (virtual) memory store. Much as with the promise of AMD's Kaveri APU, Unified Virtual Memory should result in speed improvements in heterogeneous applications because data will not have to be copied between the GPU and CPU before it can be processed. Server applications will really benefit from the shared memory tech. NVIDIA did not provide details, but from the sound of it, the CPU and GPU will both continue to write to their own physical memory, with a layer of virtualized memory on top that allows the two (or more) different processors to read from each other's memory stores.
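NVIDIA did not show an API, but the copies Unified Virtual Memory is meant to eliminate are easy to picture. Below is a sketch of today's discrete-memory pattern, with CuPy standing in for any GPU array library (assumes an NVIDIA GPU and the cupy package; this is illustrative, not NVIDIA's UVM interface):

```python
import numpy as np
import cupy as cp  # stand-in GPU array library (requires an NVIDIA GPU)

host_data = np.random.rand(1_000_000).astype(np.float32)

device_data = cp.asarray(host_data)          # explicit host -> device copy
device_result = cp.sqrt(device_data) * 2.0   # kernel runs on the GPU
host_result = cp.asnumpy(device_result)      # explicit device -> host copy

# Under Unified Virtual Memory the two copy lines above would disappear:
# CPU and GPU would dereference one virtual address range, with pages
# migrated between the physical memories behind the scenes.
print(host_result[:4])
```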
Following Maxwell, Volta will be a physically smaller chip with more transistors (likely on a smaller process node). In addition to power efficiency improvements over Maxwell, it steps up memory bandwidth significantly. NVIDIA will use TSV (through-silicon via) technology to physically mount the graphics DRAM chips over the GPU (attached to the same silicon substrate electrically). According to NVIDIA, this TSV-mounted memory will achieve up to 1 terabyte per second of memory bandwidth, a notable increase over existing GPUs.


NVIDIA continues to pursue the mobile market with its line of Tegra chips, which pair an ARM CPU, an NVIDIA GPU, and an SDR modem. Two new mobile chips called Logan and Parker will follow Tegra 4. Both new chips will support the full CUDA 5 stack and OpenGL 4.3 out of the box. Logan will feature a Kepler-based graphics processor on the chip that can do "everything a modern computer ought to do," according to NVIDIA. Parker will have a yet-to-be-revealed graphics processor (a Kepler successor). This mobile chip will utilize 3D FinFET transistors. It will pack a greater number of transistors into a smaller package than previous Tegra parts (it will be about the size of a dime), and NVIDIA also plans to ramp up the frequency to wrangle more performance out of the mobile chip. NVIDIA has stated that Logan silicon should be completed towards the end of 2013, with the mobile chips entering production in 2014.


Interestingly, Logan has a sister chip that NVIDIA is calling Kayla. This mobile chip is capable of running ray tracing applications and features OpenGL geometry shaders. It can support GPGPU code and will be compatible with Linux.

NVIDIA has been pushing CUDA for several years now. The company has seen respectable adoption, growing from one Tesla supercomputer in 2008 to 50 supercomputers using its graphics cards, with 500 million CUDA processors on the market. There are now allegedly 640 universities working with CUDA and 37,000 academic papers on CUDA.


Finally, NVIDIA's hinted-at new product announcement was the NVIDIA GRID VCA, a GPU virtualization appliance that hooks into the network and can deliver up to 16 virtual machines running independent applications. These GPU-accelerated workspaces can be presented to thin clients over the network by installing the GRID client software on users' workstations. The specifications of the GRID VCA are rather impressive, as well.

The GRID VCA features:

  • 2 x Intel Xeon processors with 16 threads each (32 total threads)
  • 192GB to 384GB of system memory
  • 8 Kepler-based graphics cards, with two GPUs each (16 total GPUs)
  • 16 x GPU-accelerated virtual machines

The GRID VCA fits into a 4U case. It can deliver remote graphics to workstations and is allegedly fast enough that GPU-accelerated software feels equivalent to running on the local machine (at least over a LAN). The GRID Visual Computing Appliance will come in two flavors at different price points. The first will have 8 Kepler GPUs with 4GB of memory each, 16 CPU threads, and 192GB of system memory for $24,900. The other version will cost $34,900 and features 16 Kepler GPUs (4GB of memory each), 32 CPU threads, and 384GB of system memory. On top of the hardware cost, NVIDIA is also charging licensing fees. While both GRID VCA devices can support unlimited devices, the licenses cost $2,400 and $4,800 per year, respectively.


Overall, it was an interesting keynote, and the proposed graphics cards look to be offering up some unique and necessary features that should help hasten the day of ubiquitous general purpose GPU computing. The Unified Virtual Memory was something I was not expecting, and it will be interesting to see how AMD responds. AMD is already promising shared memory in its Kaveri APU, but I am interested to see the details of how NVIDIA and AMD will accomplish shared memory with dedicated graphics cards (and whether CrossFire/SLI setups will all share a single memory pool).

Stay tuned to PC Perspective for more GTC 2013 Coverage!