NVIDIA Will Present Global Impact Award And $150,000 Grant To Researchers At GTC 2015

Subject: General Tech | April 8, 2014 - 02:03 PM |
Tagged: research, nvidia, GTC, gpgpu, global impact award

During the GPU Technology Conference last month, NVIDIA introduced a new annual grant called the Global Impact Award. The $150,000 grant recognizes researchers using NVIDIA GPUs to address issues with worldwide impact, such as disease research, drug design, medical imaging, genome mapping, urban planning, and other "complex social and scientific problems."

[Image: NVIDIA Global Impact Award]

NVIDIA will present the Global Impact Award to the winning researcher or non-profit institution at next year's GPU Technology Conference (GTC 2015). Individual researchers, universities, and non-profit research institutions using GPUs as a significant enabling technology in their research are eligible for the grant. Both third-party and self-nominations (.doc form) are accepted, with nominated candidates evaluated on several factors, including level of innovation, social impact, and the current state of the research and its effectiveness in approaching the problem. Nominations are due by December 12, 2014, with finalists announced by NVIDIA on March 13, 2015. NVIDIA will then reveal the winner of the $150,000 grant at GTC 2015 (April 28, 2015).

The researcher, university, or non-profit firm can be located anywhere in the world, and the grant money can be assigned to a department, an initiative, or a single project. The massively parallel nature of modern GPUs makes them ideal for many types of research with scalable projects, and I think the Global Impact Award is a welcome incentive to encourage the use of GPGPU in applicable research projects. I am interested to see what the winner will do with the money and where the research leads.

More information on the Global Impact Award can be found on the NVIDIA website.

Source: NVIDIA

NVIDIA 337.50 Driver and GeForce Experience 2.0 Released

Subject: General Tech, Graphics Cards | April 7, 2014 - 06:01 AM |
Tagged: nvidia, geforce experience, directx 11

We knew that NVIDIA had an impending driver update promising DirectX 11 performance improvements. Launched today, 337.50 claims significant performance increases over the previous 335.23 version. What was a surprise is GeForce Experience 2.0. This version allows both ShadowPlay and GameStream to operate on notebooks. It also allows ShadowPlay to record, and apparently stream to Twitch, your Windows desktop (but not on notebooks). It also enables Battery Boost, discussed previously.

[Image: NVIDIA ShadowPlay desktop recording]

Personally, I find desktop streaming to be the headlining feature, although I rarely use laptops (and much less for gaming). It is especially useful for OpenGL titles, games that run in windowed mode, and the occasional screencast without paying for Camtasia or tinkering with CamStudio. If I were to make a critique, and of course I will, I would like the option to select which monitor gets recorded. As far as I can tell, its current behavior records the primary monitor.

I should also mention that, in my testing, "shadow recording" is not supported outside of fullscreen games. I'm guessing that NVIDIA believes its users would prefer not to have their desktops recorded unless recording is manually started and stopped; it seems like it had to have been a conscious decision. It does limit the feature's usefulness for OpenGL and windowed games, however.

This driver also introduces GameStream for devices outside of your home, discussed in the SHIELD update.

[Image: NVIDIA 337.50 SLI performance slide]

This slide shows SLI improvements, driver to driver, for the GTX 770 and the GTX 780 Ti.

As for the performance boost, NVIDIA claims up to 64% faster performance in configurations with one active GPU and up to 71% faster in SLI. It will obviously vary on a game-by-game and GPU-by-GPU basis. I do not have any benchmarks, besides a few examples provided by NVIDIA, to share. That said, it is a free driver. If you have a GeForce GPU, download it. It does complicate matters if you are deciding between AMD and NVIDIA, however.

Source: NVIDIA

GTC 2014: NVIDIA Launches Iray VCA Networked Rendering Appliance

Subject: General Tech, Graphics Cards | April 1, 2014 - 01:42 PM |
Tagged: VCA, nvidia, GTC 2014

NVIDIA launched a new visual computing appliance called the Iray VCA at the GPU Technology Conference last week. This new piece of enterprise hardware uses full GK110 graphics cards to accelerate the company's Iray renderer, which is used to create photorealistic models in various design programs.

[Image: NVIDIA Iray VCA]

The Iray VCA is a licensed appliance that combines NVIDIA hardware and software. On the hardware side of things, the Iray VCA is powered by eight graphics cards, dual processors (unspecified, but likely Intel Xeons based on their usage in last year's GRID VCA), 256GB of system RAM, and a 2TB SSD. Networking hardware includes two 10GbE NICs, two 1GbE NICs, and one InfiniBand connection. In total, the Iray VCA features 20 CPU cores and 23,040 CUDA cores. The GPUs are based on the full GK110 die and are paired with 12GB of memory each.

Even better, it is a scalable solution: companies can add additional Iray VCAs to the network, and the appliances reportedly accelerate the Iray renders done on designers' workstations transparently. NVIDIA reports that an Iray VCA is approximately 60 times faster than a Quadro K5000-powered workstation. Further, according to NVIDIA, 19 Iray VCAs working together amount to 1 petaFLOPS of compute performance, which is enough to render photorealistic simulations using 1 billion rays with up to hundreds of thousands of bounces.
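
For a sense of scale, the arithmetic behind those claims is easy to check. Here is a quick back-of-envelope sketch; the 2,880 cores per full GK110 die is the known spec, and the petaFLOPS figure is NVIDIA's claim:

    /* Back-of-envelope check on NVIDIA's Iray VCA figures.
       2,880 CUDA cores per full GK110 die is the known spec;
       the "19 VCAs = 1 petaFLOPS" claim is NVIDIA's. */
    #include <stdio.h>

    int main(void) {
        const int gpus_per_vca  = 8;
        const int cores_per_gpu = 2880;     /* full GK110 die */
        const int vcas          = 19;
        const double petaflops  = 1.0e15;   /* per NVIDIA */

        printf("CUDA cores per VCA: %d\n",
               gpus_per_vca * cores_per_gpu);          /* 23,040 */
        printf("Implied per-VCA throughput: ~%.1f TFLOPS\n",
               petaflops / vcas / 1.0e12);             /* ~52.6 */
        return 0;
    }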


The Iray VCA enables some rather impressive real-time renders of 3D models with realistic physical properties and lighting. The renders are light simulations that use ray tracing, global illumination, and other techniques to display photorealistic models using up to billions of rays of light. NVIDIA is positioning the Iray VCA as an alternative to physical prototyping, allowing designers to put together virtual prototypes that can be iterated and changed at significantly lower cost and in less time.


Iray itself is NVIDIA's GPU-accelerated photorealistic renderer, and the technology is used in a number of design software packages. The Iray VCA is meant to further accelerate the renderer by throwing massive amounts of parallel processing hardware at the resource-intensive problem over the network (the Iray VCAs can be installed at a data center or kept on site). Initially, the Iray VCA will support 3ds Max, Catia, Bunkspeed, and Maya, but NVIDIA is working on supporting all Iray-accelerated software with the VCA hardware.

[Image: Iray VCA renders a Honda car interior in real time at GTC 2014]

The virtual prototypes can be sliced and examined, and can even be placed in real-world environments by importing HDR photos. Jen-Hsun Huang demonstrated this by (virtually) placing Honda's vehicle model on the GTC stage.


In fact, one of NVIDIA’s initial partners with the Iray VCA is Honda. Honda is currently beta testing a cluster of 25 Iray VCAs to refine styling designs for cars and their interiors based on initial artistic work. Honda Research and Development System Engineer Daisuke Ide was quoted by NVIDIA as stating that “Our TOPS tool, which uses NVIDIA Iray on our NVIDIA GPU cluster, enables us to evaluate our original design data as if it were real. This allows us to explore more designs so we can create better designs faster and more affordably.”

The Iray VCA (PDF) will be available this summer for $50,000. The sticker price includes the hardware, the Iray license, and the first year of updates and maintenance. This is far from consumer technology, but it is an interesting product that may be used in the design process of your next car or other major purchase.

What do you think about the Iray VCA and NVIDIA's licensed hardware model?

Podcast #293 - NVIDIA Titan-Z, ASUS ROG Poseidon 780, News from OculusVR and more!

Subject: General Tech | March 27, 2014 - 11:42 AM |
Tagged: W9100, video, titan z, poseidon 780, podcast, Oculus, nvidia, GTC, GDC

PC Perspective Podcast #293 - 03/27/2014

Join us this week as we discuss the NVIDIA Titan-Z, ASUS ROG Poseidon 780, News from OculusVR and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano

 
This podcast is brought to you by Coolermaster, and the CM Storm Pulse-R Gaming Headset!
 
Program length: 1:19:03
  1. Week in Review:
    1. 0:10:45 Microsoft's DirectX 12 (Live Blog)
  2. 0:37:07 This podcast is brought to you by Coolermaster, and the CM Storm Pulse-R Gaming Headset
  3. News items of interest:
  4. Hardware/Software Picks of the Week:
    1. Josh: Certainly not a Skype Connection to the Studio
  5. Closing/outro

Be sure to subscribe to the PC Perspective YouTube channel!!

 

Connecting Pascal's triangle with the Maxwell Equations

Subject: General Tech | March 27, 2014 - 10:10 AM |
Tagged: pascal, nvlink, nvidia, maxwell, jen-hsun huang, GTC

Before we get to see Volta in action, NVIDIA is taking a half step and releasing the Pascal architecture, which will use Maxwell-like Streaming Multiprocessors and introduce stacked, or 3D, memory that will reside on the same substrate as the GPU. Jen-Hsun claimed this new type of memory will vastly increase the available bandwidth, provide two and a half times the capacity, and be four times as energy efficient, all at the same time. Along with the 3D memory announcement came the reveal of NVLink, an alternative interconnect which he claims will offer 5-12 times the bandwidth of PCIe and will be utilized by HPC systems. He announced that NVLink will feature eight 20Gbps lanes per block, or "brick" as NVIDIA is calling them, a figure The Tech Report used to make a quick calculation, arriving at an aggregate bandwidth of around 20GB/s per brick. Read on to see what else was revealed.
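
The Tech Report's brick arithmetic is simple to reproduce. A minimal sketch, ignoring any encoding or protocol overhead (which NVIDIA has not detailed):

    /* NVLink "brick" bandwidth: eight 20Gbps lanes, 8 bits per byte.
       Encoding/protocol overhead is ignored, as NVIDIA hasn't detailed it. */
    #include <stdio.h>

    int main(void) {
        const int lanes = 8;
        const double gbps_per_lane = 20.0;   /* per NVIDIA's announcement */
        printf("Aggregate: %.0f Gbps = %.0f GB/s per brick\n",
               lanes * gbps_per_lane,         /* 160 Gbps */
               lanes * gbps_per_lane / 8.0);  /* 20 GB/s  */
        return 0;
    }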

[Image: Pascal scaling]

"Today during his opening keynote at the Nvidia GPU Technology Conference, CEO Jen-Hsun Huang offered an update to Nvidia's GPU roadmap. The big reveal was about a GPU code-named Pascal, which will be a generation beyond the still-being-introduced Maxwell architecture in the firm's plans."

Here is some more Tech News from around the web:

Tech Talk

GTC 2014: NVIDIA Awards Startup Map-D $100,000 In Early Stage Challenge

Subject: General Tech | March 26, 2014 - 05:49 PM |
Tagged: remote graphics, nvidia, GTC 2014, gpgpu, emerging companies summit, ecs 2014, cloud computing

NVIDIA started the Emerging Companies Summit six years ago, and since then the event has grown in size and scope to identify and support technology companies that leverage (or plan to leverage) GPGPU computing to deliver innovative products. The ECS continues to be a platform for new startups to showcase their work at the annual GPU Technology Conference. NVIDIA provides support to the companies featured at ECS in the form of legal, development, and co-marketing assistance.

[Image: GTC 2014 ECS GPGPU technologies]

There was an interesting twist this year, though, in the form of the Early Stage Challenge, a new aspect of ECS in addition to the 'One to Watch' award. I attended the Emerging Companies Summit again this year and managed to snag some photos and participate in the Early Stage Challenge (disclosure: I voted for AudioStream TV).

[Image: GTC 2014 ECS Early Stage Challenge companies]

The 12 Early Stage Challenge contestants take the stage at once to await the vote tally.

During the challenge, 12 selected startup companies were each given eight minutes on stage to pitch their company and explain why their innovations deserved the $100,000 grand prize. The on-stage time was divided into a four-minute presentation and a four-minute Q&A session with the panel of judges (unlike last year, the audience was not part of the Q&A session this year due to time constraints).

After all 12 companies had their chance on stage, the panel of judges and the audience submitted their votes for the most innovative startup. The panel of judges included:

  • Scott Budman, Business & Technology Reporter, NBC
  • Jeff Herbst, Vice President of Business Development, NVIDIA
  • Jens Horstmann, Executive Producer & Managing Partner, Crestlight Venture Productions
  • Pat Moorhead, President & Principal Analyst, Moor Insights & Strategy
  • Bill Reichert, Managing Director, Garage Technology Ventures

The companies participating in the challenge included Okam Studio, MyCloud3D, Global Valuation, Brytlyt, Clarifai, Aerys, oMobio, ShiVa Technologies, IGI Technologies, Map-D, Scalable Graphics, and AudioStream TV. The companies are involved in machine learning, deep neural networks, computer vision, remote graphics, real-time visualization, gaming, and big data analytics.

After all the votes were tallied, Map-D was revealed to be the winner and received a check for $100,000 from NVIDIA Vice President of Business Development Jeff Herbst.

[Image: Map-D wins the ECS Early Stage Challenge]

Jeff Herbst presenting Map-D's CEO with the Early Stage Challenge grand prize check. From left to right: Scott Budman, Jeff Herbst, and Thomas Graham.

Map-D is a company that specializes in a scalable in-memory GPU database that promises millisecond queries served directly from GPU memory (with GPU memory bandwidth being the bottleneck) and very fast database inserts. The company is working with Facebook and PayPal to analyze data. In the case of Facebook, Map-D is being used to analyze status updates in real time to identify malicious behavior. The software can be scaled across eight NVIDIA Tesla cards to analyze a billion tweets in real time.
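
As a rough illustration of why memory bandwidth, rather than compute, is the limiting factor, consider a hypothetical bandwidth-bound table scan. The per-card bandwidth (Tesla K40-class) and per-tweet column width below are my assumptions, not Map-D's numbers:

    /* Hypothetical scan-time estimate for a GPU-resident column store.
       Assumptions (not from Map-D): ~288 GB/s per Tesla-class card,
       64 bytes of hot columns per tweet, ideal scaling across 8 cards. */
    #include <stdio.h>

    int main(void) {
        const double rows       = 1.0e9;    /* one billion tweets */
        const double bytes_each = 64.0;     /* assumed hot-column width */
        const double bw_per_gpu = 288.0e9;  /* assumed bytes/sec per card */
        const int gpus          = 8;

        double seconds = (rows * bytes_each) / (bw_per_gpu * gpus);
        printf("Full table scan: ~%.0f ms\n", seconds * 1e3); /* ~28 ms */
        return 0;
    }

Under those assumptions, a full pass over a billion records lands in the tens of milliseconds, which is consistent with the millisecond-query pitch.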

It is specialized software, but extremely useful within its niche. Hopefully the company puts the prize money to good use in furthering its GPGPU endeavors. Although there was only a single grand prize winner, I found all the presentations interesting and look forward to seeing where they go from here.

Read more about the Emerging Companies Summit (from last year) and keep track of new GTC 2014 articles by following the GTC 2014 tag @ PC Perspective.

Source: PC Perspective

NVIDIA Launches Jetson TK1 Mobile CUDA Development Platform

Subject: General Tech, Mobile | March 25, 2014 - 06:34 PM |
Tagged: GTC 2014, tegra k1, nvidia, CUDA, kepler, jetson tk1, development

NVIDIA recently unified its desktop and mobile GPU lineups by moving to a Kepler-based GPU in its latest Tegra K1 mobile SoC. The move to the Kepler architecture has simplified development and enabled the CUDA programming model to run on mobile devices. One of the main points of the opening keynote earlier today was 'CUDA everywhere,' and NVIDIA has officially accomplished that goal by having CUDA-compatible hardware from servers to desktops to tablets and embedded devices.

Speaking of embedded devices, NVIDIA showed off a new development board called the Jetson TK1. This tiny new board features an NVIDIA Tegra K1 SoC at its heart along with 2GB of RAM and 16GB of eMMC storage. The Jetson TK1 supports a plethora of I/O options, including an internal expansion port (GPIO compatible), SATA, one half mini-PCIe slot, serial, USB 3.0, micro USB, Gigabit Ethernet, analog audio, and HDMI video output.

[Image: NVIDIA Jetson TK1 mobile CUDA development board]

Of course, the Tegra K1 part pairs a quad-core (4+1) ARM CPU with a Kepler-based GPU sporting 192 CUDA cores. The SoC is rated at 326 GFLOPS, which enables some interesting compute workloads, including machine vision.
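
Working backward from that rating suggests the GPU clock, assuming the standard Kepler rate of two FLOPs (one fused multiply-add) per core per cycle:

    /* Implied Tegra K1 GPU clock from NVIDIA's 326 GFLOPS figure.
       Assumes 2 FLOPs (one FMA) per CUDA core per clock, standard for Kepler. */
    #include <stdio.h>

    int main(void) {
        const double gflops = 326.0;        /* NVIDIA's rating */
        const int cores = 192;
        const int flops_per_core_clock = 2; /* one FMA per clock */
        printf("Implied GPU clock: ~%.0f MHz\n",
               gflops * 1.0e3 / (cores * flops_per_core_clock)); /* ~849 MHz */
        return 0;
    }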

[Image: Computer vision on NVIDIA CUDA]

In fact, Audi has been utilizing the Jetson TK1 development board to power its self-driving prototype car (more on that soon). Other intended uses for the new development board include robotics, medical devices, security systems, and perhaps low-power compute clusters (such as an improved Pedraforca system). It can also be used as a simple desktop platform for testing and developing mobile applications for other Tegra K1-powered devices, of course.

[Image: NVIDIA VisionWorks at GTC 2014]

Beyond the hardware, the Jetson TK1 comes with the CUDA toolkit, an OpenGL 4.4 driver, and the NVIDIA VisionWorks SDK, which includes programming libraries and sample code for getting machine vision applications running on the Tegra K1 SoC.
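
To give a flavor of what 'CUDA everywhere' means in practice, below is a minimal SAXPY kernel of the kind the CUDA toolkit compiles for the TK1. This is my own sketch, not one of NVIDIA's bundled samples, and it assumes a CUDA 6-era toolchain with unified memory support:

    // Minimal CUDA sketch (not from NVIDIA's samples): the same kernel
    // source compiles for a desktop Kepler card or the Tegra K1.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));  // unified memory (CUDA 6+)
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();  // wait before reading results on the host

        printf("y[0] = %.1f (expect 4.0)\n", y[0]);
        cudaFree(x); cudaFree(y);
        return 0;
    }

That source-level portability between desktop Kepler and the K1 is precisely what makes the board useful as a mobile CUDA development target.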

The Jetson TK1 is available for pre-order now at $192 and is slated to begin shipping in April. Interested developers can find more information on the NVIDIA developer website.

 

NVIDIA SHIELD: New Features and Promotional Price Cut

Subject: General Tech, Graphics Cards, Mobile | March 25, 2014 - 12:01 PM |
Tagged: shield, nvidia

The SHIELD from NVIDIA is getting a software update which advances GameStream and TegraZone, and brings the Android OS itself up to KitKat. Personally, I find the GameStream enhancements most notable, as users can now access their home PC's gaming content outside of the home, as if it were a cloud server (but some other parts were interesting, too). Also, from now until the end of April, NVIDIA has temporarily cut the price down to $199.

[Image: NVIDIA SHIELD GameStream, slide 1]

Going into more detail: GameStream, now out of beta, streams games that are rendered on your gaming PC to your SHIELD. Typically, we have seen this model in "cloud" services, such as OnLive and Gaikai, which allow access to a set of games that run on their servers (with varying license models). The fear with these services is the lack of ownership, but the advantage is that the client device just needs enough power to decode an HD video stream.

[Image: NVIDIA SHIELD GameStream, slide 2]

In NVIDIA's case, the user owns both the server (their standard NVIDIA-powered gaming PC, which can now be a laptop) and the target device (the SHIELD). This technology was once limited to your own network (which definitely has its uses, especially for the SHIELD as a home theater device) but can now also be exposed over the internet. For this, NVIDIA recommends 5 megabit upload and download speeds - which is still a lot of upload bandwidth, even for 2014. In terms of performance, NVIDIA believes that it should live up to the expectations set by GRID. I do not have any experience with this, but others on the conference call took it as good news.
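
For context on that recommendation, here is what a sustained 5 megabit stream adds up to (a rough sketch only; actual GameStream bitrates depend on the user's settings):

    /* What a sustained 5 Mbps GameStream session amounts to.
       Rough figure only; the real bitrate varies with user settings. */
    #include <stdio.h>

    int main(void) {
        const double mbps = 5.0;        /* NVIDIA's recommendation */
        double MB_per_s = mbps / 8.0;   /* 0.625 MB/s */
        printf("~%.3f MB/s, ~%.1f GB per hour of play\n",
               MB_per_s, MB_per_s * 3600.0 / 1024.0);  /* ~2.2 GB/hour */
        return 0;
    }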

As for content, NVIDIA has expanded the number of supported titles to over a hundred, including new entries: Assassin's Creed IV, Batman: Arkham Origins, Battlefield 4, Call of Duty: Ghosts, Daylight, Titanfall, and Dark Souls II. They also claim that users can add other, unofficially supported apps for streaming; Halo 2: Vista was mentioned as an example. Frame rate and bitrate can now be set by the user. A Bluetooth mouse and keyboard can also be paired to the SHIELD for that input type through GameStream.

[Image: NVIDIA SHIELD checkbox comparison]

Yeah, I don't like checkbox comparisons either. It's just a summary.

A new TegraZone was also briefly mentioned; its main upgrade was apparently its library interface. There have also been a number of PC titles ported to Android recently, such as Mount & Blade: Warband.

The update is available now and the $199 promotion will last until the end of April.

Source: NVIDIA

Valve Ports Portal To NVIDIA Shield Gaming Handheld

Subject: General Tech | March 25, 2014 - 11:33 AM |
Tagged: Portal, GTC 2014, gaming, nvidia

During the opening keynote of NVIDIA's GTC 2014 conference, company CEO Jen-Hsun Huang announced that Valve had ported the ever-popular "Portal" game to the NVIDIA SHIELD handheld gaming platform.

The game appeared to run smoothly on the portable device, and is a worthy addition to the catalog of local games that can be run on the SHIELD.


Additionally, while the cake may still be a lie, portable gaming systems apparently are not, as Jen-Hsun Huang revealed that all GTC attendees will be getting a free SHIELD.

Stay tuned to PC Perspective for more information on all the opening keynote announcements and their implications for the future of computing!

GPU Technology Conference 2014 resources:

Keep up with GTC 2014 throughout the week by following the NVIDIA blog (blogs.nvidia.com) and the GTC tag on PC Perspective!

Manufacturer: ASUS

Introduction and Technical Specifications

Introduction

[Image: Card profile, courtesy of ASUS]

The ASUS ROG Poseidon GTX 780 video card is the latest incarnation of the Republic of Gamers (ROG) Poseidon series. Like the previous Poseidon series products, the Poseidon GTX 780 features a hybrid cooler capable of both air- and liquid-based cooling for the GPU and on-board components. The ASUS ROG Poseidon GTX 780 graphics card comes with an MSRP of $599, a premium price for a premium card.

[Image: Exploded view of the card, courtesy of ASUS]

In designing the Poseidon GTX 780 graphics card, ASUS packed in many of the premium components you would normally find as add-ons. The card features motherboard-quality power circuitry: a 10-phase digital power regulation system using ASUS DIGI+ VRM technology, coupled with Japanese black metallic capacitors. The Poseidon GTX 780 also has the following integrated into its design: a DisplayPort output, an HDMI output, dual DVI ports (one DVI-D and one DVI-I), an aluminum backplate, integrated G 1/4" threaded liquid ports, dual 90mm cooling fans, 6-pin and 8-pin PCIe-style power connectors, and integrated power connector LEDs and an ROG logo LED.

Continue reading our review of the ASUS ROG Poseidon GTX 780 graphics card!