GTC 2014: NVIDIA Awards Startup Map-D $100,000 In Early Stage Challenge

Subject: General Tech | March 26, 2014 - 05:49 PM |
Tagged: remote graphics, nvidia, GTC 2014, gpgpu, emerging companies summit, ecs 2014, cloud computing

NVIDIA started the Emerging Companies Summit six years ago, and since then the event has grown in size and scope to identify and support technology companies that leverage (or plan to leverage) GPGPU computing to deliver innovative products. The ECS continues to be a platform for new startups to showcase their work at the annual GPU Technology Conference. NVIDIA provides legal, development, and co-marketing support to the companies featured at ECS.

GTC 2014 ECS GPGPU Technologies.jpg

There was an interesting twist this year, though, in the form of the Early Stage Challenge, a new addition to ECS alongside the ‘One to Watch’ award. I attended the Emerging Companies Summit again this year and managed to snag some photos and participate in the Early Stage Challenge (disclosure: I voted for AudioStream TV).

GTC 2014 ECS Early Start Challenge Companies.jpg

The 12 Early Stage Challenge contestants take the stage at once to await the vote tally.

During the challenge, 12 selected startup companies were each given eight minutes on stage to pitch their company and make the case that their innovations deserved the $100,000 grand prize. The on-stage time was divided into a four-minute presentation and a four-minute Q&A session with the panel of judges (unlike last year, the audience was not part of the ECS Q&A session this year due to time constraints).

After all 12 companies had their chance on stage, the panel of judges and the audience submitted their votes for the most innovative startup. The panel of judges included:

  • Scott Budman, Business & Technology Reporter, NBC
  • Jeff Herbst, Vice President of Business Development, NVIDIA
  • Jens Horstmann, Executive Producer & Managing Partner, Crestlight Venture Productions
  • Pat Moorhead, President & Principal Analyst, Moor Insights & Strategy
  • Bill Reichert, Managing Director, Garage Technology Ventures

The companies participating in the challenge included Okam Studio, MyCloud3D, Global Valuation, Brytlyt, Clarifai, Aerys, oMobio, ShiVa Technologies, IGI Technologies, Map-D, Scalable Graphics, and AudioStream TV. They work in areas including machine learning, deep neural networks, computer vision, remote graphics, real-time visualization, gaming, and big data analytics.

After all the votes were tallied, Map-D was revealed to be the winner and received a check for $100,000 from NVIDIA Vice President of Business Development Jeff Herbst.

Map-D Wins ECS Early Start Challenge.jpg

Jeff Herbst presenting Map-D's CEO with the Early Stage Challenge grand prize check. From left to right: Scott Budman, Jeff Herbst, and Thomas Graham.

Map-D is a company that specializes in a scalable in-memory GPU database that promises millisecond queries served directly from GPU memory (with GPU memory bandwidth being the bottleneck) and very fast database inserts. The company is working with Facebook and PayPal to analyze data. In Facebook's case, Map-D is being used to analyze status updates in real time to identify malicious behavior. The software can be scaled across eight NVIDIA Tesla cards to analyze one billion tweets in real time.
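
Map-D's engine is not public, but the core idea is straightforward to sketch: keep a column of data resident in fast memory and answer filter/aggregate queries with a brute-force scan, so query time is set by memory bandwidth rather than compute. The C++ toy below (all names are mine, purely for illustration) shows the CPU version of such a scan; Map-D runs the equivalent scan out of GPU memory, which offers roughly an order of magnitude more bandwidth per card.

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

// Toy columnar scan: count records whose timestamp falls in a window.
// A GPU database answers queries with the equivalent scan run out of GPU
// memory, so throughput is limited by memory bandwidth, not arithmetic.
int main() {
    const std::size_t rows = 100'000'000;  // one column, 100M rows
    std::vector<std::uint32_t> timestamps(rows);
    std::mt19937 rng(42);
    std::uniform_int_distribution<std::uint32_t> dist(0, 86'400);
    for (auto& t : timestamps) t = dist(rng);

    const auto start = std::chrono::steady_clock::now();
    std::size_t hits = 0;
    for (const auto t : timestamps)        // sequential, bandwidth-bound
        hits += (t >= 3'600 && t < 7'200);
    const auto stop = std::chrono::steady_clock::now();

    const double seconds = std::chrono::duration<double>(stop - start).count();
    const double gigabytes = rows * sizeof(std::uint32_t) / 1e9;
    std::cout << hits << " hits, " << gigabytes / seconds << " GB/s scanned\n";
}
```

A scan like this tops out near main-memory bandwidth on a CPU; striping the same column across eight Tesla cards, each with a few hundred GB/s of memory bandwidth, is what turns a billion-row scan into a millisecond-scale operation.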

It is specialized software, but extremely useful within its niche. Hopefully the company puts the prize money to good use in furthering its GPGPU endeavors. Although there was only a single grand prize winner, I found all the presentations interesting and look forward to seeing where they go from here.

Read more about the Emerging Companies Summit (from last year) and keep track of new GTC 2014 articles by following the GTC 2014 tag @ PC Perspective.

Source: PC

NVIDIA Launches Jetson TK1 Mobile CUDA Development Platform

Subject: General Tech, Mobile | March 25, 2014 - 06:34 PM |
Tagged: GTC 2014, tegra k1, nvidia, CUDA, kepler, jetson tk1, development

NVIDIA recently unified its desktop and mobile GPU lineups by moving to a Kepler-based GPU in its latest Tegra K1 mobile SoC. The move to the Kepler architecture has simplified development and enabled the CUDA programming model to run on mobile devices. One of the main points of the opening keynote earlier today was ‘CUDA everywhere,’ and NVIDIA has officially accomplished that goal, with CUDA-compatible hardware spanning servers, desktops, tablets, and embedded devices.

Speaking of embedded devices, NVIDIA showed off a new development board called the Jetson TK1. This tiny board features an NVIDIA Tegra K1 SoC at its heart along with 2 GB of RAM and 16 GB of eMMC storage. The Jetson TK1 supports a plethora of I/O options, including an internal expansion port (GPIO compatible), SATA, one half-mini PCI-e slot, serial, USB 3.0, micro USB, Gigabit Ethernet, analog audio, and HDMI video output.

NVIDIA Jetson TK1 Mobile CUDA Development Board.jpg

Of course, the Tegra K1 pairs a quad-core (4+1) ARM CPU with a Kepler-based GPU containing 192 CUDA cores. The SoC is rated at 326 GFLOPS, which enables some interesting compute workloads, including machine vision.
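
That 326 GFLOPS figure is the usual peak-rate arithmetic: cores times FLOPs per cycle times clock speed. The announcement does not state the GPU clock, so the ~850 MHz value in the sketch below is an assumption chosen to match the quoted number:

```cpp
#include <iostream>

int main() {
    // Peak rate = cores x FLOPs per cycle x clock speed.
    // The ~0.85 GHz GPU clock is an assumption chosen to match NVIDIA's
    // quoted figure; the announcement does not state the actual clock.
    const int    cuda_cores      = 192;  // one Kepler SMX
    const int    flops_per_cycle = 2;    // one fused multiply-add = 2 FLOPs
    const double gpu_clock_ghz   = 0.85;
    std::cout << cuda_cores * flops_per_cycle * gpu_clock_ghz
              << " GFLOPS single-precision peak\n";  // prints ~326
}
```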

Computer Vision On NVIDIA CUDA.jpg

In fact, Audi has been utilizing the Jetson TK1 development board to power its self-driving prototype car (more on that soon). Other intended uses for the new development board include robotics, medical devices, security systems, and perhaps low-power compute clusters (such as an improved Pedraforca system). It can also be used as a simple desktop platform for testing and developing mobile applications for other Tegra K1 powered devices, of course.

NVIDIA VisionWorks GTC 2014.jpg

Beyond the hardware, the Jetson TK1 comes with the CUDA toolkit, an OpenGL 4.4 driver, and the NVIDIA VisionWorks SDK, which includes programming libraries and sample code for getting machine vision applications running on the Tegra K1 SoC.

The Jetson TK1 is available for pre-order now at $192 and is slated to begin shipping in April. Interested developers can find more information on the NVIDIA developer website.

 

NVIDIA SHIELD: New Features and Promotional Price Cut

Subject: General Tech, Graphics Cards, Mobile | March 25, 2014 - 12:01 PM |
Tagged: shield, nvidia

The SHIELD from NVIDIA is getting a software update that advances GameStream and TegraZone and brings the Android OS itself up to KitKat. Personally, the GameStream enhancements seem most notable, as the service now allows users to access their home PC's gaming content outside of the home, as if it were a cloud server (but some other parts were interesting, too). Also, from now until the end of April, NVIDIA has temporarily cut the price to $199.

nvidia-shield-gamestream-01.jpg

Going into more detail: GameStream, now out of beta, streams games that are rendered on your gaming PC to your SHIELD. Typically, we have seen this through "cloud" services, such as OnLive and Gaikai, which allow access to a set of games that run on their servers (with varying license models). The fear with these services is the lack of ownership, but the advantage is that the client device just needs enough power to decode an HD video stream.

nvidia-shield-gamestream-02.jpg

In NVIDIA's case, the user owns both the server (a standard NVIDIA-powered gaming PC, which can now be a laptop) and the target device (the SHIELD). This technology was once limited to your own network (which definitely has its uses, especially for the SHIELD as a home theater device) but can now also be exposed over the internet. For this technology, NVIDIA recommends 5 megabit upload and download speeds, which is still a lot of upload bandwidth, even for 2014. In terms of performance, NVIDIA believes it should live up to the expectations set by its GRID service. I do not have any experience with this, but others on the conference call took it as good news.

As for content, NVIDIA has expanded the number of supported titles to over a hundred, including new entries: Assassin's Creed IV, Batman: Arkham Origins, Battlefield 4, Call of Duty: Ghosts, Daylight, Titanfall, and Dark Souls II. They also claim that users can add other, unofficially supported apps for streaming; Halo 2: Vista was mentioned as an example. Frame rate and bitrate can now be set by the user, and a Bluetooth mouse and keyboard can be paired to the SHIELD for that input type through GameStream.

nvidia-shield-checkbox.jpg

Yeah, I don't like checkbox comparisons either. It's just a summary.

A new TegraZone was also briefly mentioned. Its main upgrade was apparently its library interface. There have also been a number of PC titles ported to Android recently, such as Mount and Blade: Warband.

The update is available now and the $199 promotion will last until the end of April.

Source: NVIDIA

Valve Ports Portal To NVIDIA Shield Gaming Handheld

Subject: General Tech | March 25, 2014 - 11:33 AM |
Tagged: Portal, GTC 2014, gaming, nvidia

During the opening keynote of NVIDIA's GTC 2014 conference, company CEO Jen-Hsun Huang announced that Valve had ported the ever-popular "Portal" game to the NVIDIA SHIELD handheld gaming platform.

The game appeared to run smoothly on the portable device, and is a worthy addition to the catalog of local games that can be run on the SHIELD.

DSC01456.JPG

Additionally, while the cake may still be a lie, portable gaming systems apparently are not, as Jen-Hsun Huang revealed that all GTC attendees will be getting a free SHIELD.

Stay tuned to PC Perspective for more information on all the opening keynote announcements and their implications for the future of computing!

GPU Technology Conference 2014 resources:

Keep up with GTC 2014 throughout the week by following the NVIDIA blog (blogs.nvidia.com) and the GTC tag on PC Perspective!

Manufacturer: ASUS

Introduction and Technical Specifications

Introduction

02-card-profile.jpg

Courtesy of ASUS

The ASUS ROG Poseidon GTX 780 video card is the latest incarnation of the Republic of Gamers (ROG) Poseidon series. Like the previous Poseidon series products, the Poseidon GTX 780 features a hybrid cooler capable of both air and liquid cooling for the GPU and onboard components. The ASUS ROG Poseidon GTX 780 graphics card comes with an MSRP of $599, a premium price for a premium card.

03-fly-apart-image.jpg

Courtesy of ASUS

In designing the Poseidon GTX 780 graphics card, ASUS packed in many of the premium components you would normally find as add-ons. The card features motherboard-quality power delivery: a 10-phase digital power regulation system using ASUS DIGI+ VRM technology coupled with Japanese black metallic capacitors. The Poseidon GTX 780 integrates the following features into its design: a DisplayPort output, an HDMI output, dual DVI ports (one DVI-D and one DVI-I), an aluminum backplate, integrated G 1/4"-threaded liquid ports, dual 90mm cooling fans, 6-pin and 8-pin PCIe-style power connectors, and LED-lit power connectors and ROG logo.

Continue reading our review of the ASUS ROG Poseidon GTX 780 graphics card!

GDC wasn't just about DirectX; OpenGL was also a hot topic

Subject: General Tech | March 24, 2014 - 09:26 AM |
Tagged: opengl, nvidia, gdc 14, GDC, amd, Intel

DX12 and its Mantle-like qualities garnered the most interest from gamers at GDC, but an odd trio of companies was also pushing a different API.  OpenGL has been around for over 20 years and has waged a long war against Direct3D, a war which may be intensifying again.  Representatives from Intel, AMD, and NVIDIA all took to the stage to praise the new OpenGL standard, suggesting that with a tweaked implementation of OpenGL, developers could expect to see performance increases of between 7 and 15 times.  The Inquirer has embedded an hour-long video in their story; check it out to learn more.

slide-1-638.jpg

"CHIP DESIGNERS AMD, Intel and Nvidia teamed up to tout the advantages of the OpenGL multi-platform application programming interface (API) at this year's Game Developers Conference (GDC)."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer
Manufacturer: NVIDIA

DX11 could rival Mantle

The big story at GDC last week was Microsoft’s reveal of DirectX 12 and the future of the dominant API for PC gaming.  There was plenty of build-up to the announcement, with Microsoft’s DirectX team posting teasers and starting up a Twitter account for the occasion. I hosted a live blog from the event, which included pictures of the slides. It was our most successful event of this type, with literally thousands of people joining in the conversation. Between the debates over the similarities to AMD’s Mantle API and the timeline for the DX12 release, there are plenty of stories to be told.

After the initial session, I wanted to set up meetings with both AMD and NVIDIA to discuss what had been shown and get some feedback on the planned direction for the GPU giants’ implementations.  NVIDIA presented us with a very interesting set of data focused both on the future with DX12 and on the present of DirectX 11.

15.jpg

The reason for the topic is easy to decipher: AMD has built up the image of Mantle as the future of PC gaming, and with a full 18 months before Microsoft’s DirectX 12 is released, how developers and gamers respond will have an important impact on the market. NVIDIA doesn’t like to talk about Mantle directly, but it obviously feels the need to address the questions in a roundabout fashion. During our time with NVIDIA’s Tony Tamasi at GDC, the discussion centered as much on OpenGL and DirectX 11 as anything else.

What are APIs and why do you care?

For those who might not really understand what DirectX and OpenGL are, a bit of background first. APIs (application programming interfaces) provide an abstraction layer between hardware and software applications.  An API can deliver a consistent programming model (though the language can vary) across various hardware vendors’ products and even between hardware generations.  APIs can expose hardware feature sets with a wide range of complexity while allowing programmers to use the hardware without necessarily knowing it in great detail.

Over the years, APIs have developed and evolved but still retain backwards compatibility.  Companies like NVIDIA and AMD can improve their DirectX implementations to increase performance or efficiency without adversely (usually, at least) affecting other games or applications.  And because games use that same API for programming, changes to how NVIDIA and AMD handle the API integration don’t require game developer intervention.

With the release of AMD Mantle, the idea of a “low level” API has been placed in the minds of gamers and developers.  The term “low level” can mean many things, but in general it is associated with an API that is more direct, has a thinner set of abstraction layers, and uses less translation from code to hardware.  The goal is to reduce the overhead (the performance hit) that APIs naturally impose for these translations.  With additional performance available, the CPU cycles can be used by the program (game) or be slept to improve battery life. In certain cases, GPU throughput can increase where API overhead is impeding the video card's progress.
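
To make that overhead concrete, consider the classic worst case: one driver-validated draw call per object. The C++/OpenGL sketch below (the Object type and uniform location are invented for illustration; a current GL context and a loader such as GLAD are assumed) contrasts that pattern with instancing, which enters the driver once for an entire batch:

```cpp
// A current GL context and a loader such as GLAD are assumed; the Object
// type and u_model uniform location are invented for illustration.
#include <glad/glad.h>
#include <vector>

struct Object {
    GLuint      texture;
    GLfloat     transform[16];
    GLsizei     indexCount;
    const void* indexOffset;
};

// High-overhead pattern: one state change plus one draw call per object.
// Every iteration re-enters the driver, which must revalidate state.
void drawNaive(const std::vector<Object>& scene, GLint u_model) {
    for (const Object& obj : scene) {
        glBindTexture(GL_TEXTURE_2D, obj.texture);
        glUniformMatrix4fv(u_model, 1, GL_FALSE, obj.transform);
        glDrawElements(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_INT,
                       obj.indexOffset);
    }
}

// Lower-overhead equivalent: per-object data lives in GPU buffers and the
// driver is entered once for the whole batch via instancing.
void drawBatched(GLsizei indexCount, GLsizei instanceCount) {
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                            nullptr, instanceCount);
}
```

A low-level API pushes the same idea further by letting the developer validate state up front, shrinking even the per-batch cost.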

Passing additional control to the game developers, away from the API or GPU driver developers, gives those coders additional power and improves the ability for some vendors to differentiate. Interestingly, not all developers want this kind of control as it requires more time, more development work, and small teams that depend on that abstraction to make coding easier will only see limited performance advantages.

This transition to a lower-level API is being driven by the widening performance gap between CPUs and GPUs.  NVIDIA provided the images below.

04.jpg

On the left we see performance scaling in terms of GFLOPS, and on the right the metric is memory bandwidth. Clearly the performance of NVIDIA's graphics chips (as well as AMD’s) has far outpaced what the best Intel desktop processors have been able to deliver, and that gap means the industry needs to innovate to find ways to close it.

Continue reading NVIDIA Talks DX12, DX11 Efficiency Improvements!

GDC 14: NVIDIA, AMD, and Intel Discuss OpenGL Speed-ups

Subject: General Tech, Shows and Expos | March 21, 2014 - 10:41 PM |
Tagged: opengl, nvidia, Intel, gdc 14, GDC, amd

So, for all the discussion about DirectX 12, the three main desktop GPU vendors (NVIDIA, AMD, and Intel) want to tell OpenGL developers how to tune their applications. Using OpenGL 4.2 and a few cross-vendor extensions (OpenGL is all about its extensions), a handful of known tricks can reduce driver overhead up to ten-fold and increase performance up to fifteen-fold. The talk is very graphics-developer-centric, but it basically describes a series of tricks known to accomplish feats similar to what Mantle and DirectX 12 promise.

opengl_logo.jpg

The 130-slide presentation is broken into a few sections, with each GPU vendor getting a decent chunk of time. On occasion, they would mention which implementation fares better with a given function call. The main point they wanted to drive home (since they repeated the slide three times, in three different fonts) is that none of this requires a new API. Everything exists and can be implemented right now. The real trick is knowing how not to poke the graphics library the wrong way.
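
The slides themselves are the authoritative reference, but two of the best-known tricks from this talk, persistent-mapped buffers and multi-draw indirect, look roughly like the following in C++ (the function names and parameters are mine; a GL 4.4 context and a loader are assumed):

```cpp
// A GL 4.4 context and a loader such as GLAD are assumed; function names
// and parameters here are mine, sketching the talk's two headline tricks.
#include <glad/glad.h>

// 1. Persistent-mapped buffer (ARB_buffer_storage): map the buffer once and
//    keep the pointer, eliminating per-frame Map/Unmap driver round trips.
void* createPersistentBuffer(GLsizeiptr size) {
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                             GL_MAP_COHERENT_BIT;
    glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, flags);   // immutable storage
    return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags); // stays mapped
}

// 2. Multi-draw indirect (ARB_multi_draw_indirect): thousands of draws are
//    described in a GPU buffer and submitted with a single driver entry.
void submitBatch(GLuint indirectBuffer, GLsizei drawCount) {
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr, drawCount, 0);
}
```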

The page also hosts a keynote from the recent Steam Dev Days.

That said, an advantage that I expect from DirectX 12 and Mantle is reduced driver complexity. Since the processors have settled into standards, I expect that drivers will not need to do as much unless the library demands it for legacy reasons. I am not sure how extending OpenGL will affect that benefit, as opposed to just isolating the legacy and building on a solid foundation, but I wonder if these extensions could be just as easy to maintain and optimize. Maybe they can be.

Either way, the performance figures do not lie.

Source: NVIDIA

Pre-Fermi NVIDIA GPUs Are End of Life After 340.xx Drivers

Subject: General Tech, Graphics Cards | March 21, 2014 - 09:43 PM |
Tagged: nvidia

NVIDIA's Release 340.xx GPU drivers for Windows will be the last to contain "enhancements and optimizations" for video cards based on architectures that predate Fermi. While NVIDIA will provide some extended support for 340.xx (and earlier) drivers until April 1st, 2016, those cards will not be able to install Release 343.xx (or later) drivers. Release 343 will only support Fermi, Kepler, and Maxwell-based GPUs.

nvidia-geforce.png

The company has a large table on its CustHelp website filled with product models that are pining for the fjords. In short, if the model is 400-series or higher (except the GeForce 405), then it is still fully supported. If you do have the GeForce 405, or anything 300-series and prior, then the GeForce Release 340.xx drivers will be the end of the line for you.

As for speculation, Fermi was NVIDIA's first modern GPU architecture. It transitioned to standards-based (IEEE 754, etc.) data structures, introduced L1 and L2 caches, and so forth. From our DirectX 12 live blog, we also noticed that the new graphics API will likewise begin support at Fermi. It feels to me that NVIDIA, like Microsoft, wants to shed the transition period and work on developing a platform built around that baseline.

Source: NVIDIA

ASUS custom built ROG MARS GTX 760

Subject: Graphics Cards | March 21, 2014 - 11:33 AM |
Tagged: ROG MARS 760, nvidia, gtx 760, GK104, asus

If you can afford to spend $1000 or more on a GPU, the ASUS ROG MARS GTX 760 is an interesting choice.  The two GTX 760 GPUs on this card are not cut down as we have seen on some other dual-GPU cards; indeed, ASUS even overclocked them to a base of 1006 MHz and a boost clock of 1072 MHz.  Ryan reviewed this card back in December, awarding it a Gold, and [H]ard|OCP is revisiting the card with a new driver and a different lineup of games.  They also awarded this unique ASUS card a Gold after it finished stomping on the AMD competition and the GTX 780 Ti.

1395035044ISFXADrH5Z_1_6_l.jpg

"The ASUS ROG MARS 760 is one of the most unique custom built video cards out on the market today. ASUS has designed a video card sporting dual NVIDIA GTX 760 GPUs on a single video card and given gamers something that didn't exist before in the market place. We will find out how it compares with the fastest video cards out there."

Here are some more Graphics Card articles from around the web:

Graphics Cards

 

Source: [H]ard|OCP