Subject: General Tech | May 9, 2013 - 07:50 PM | Tim Verry
Tagged: tegra 4, nvidia, grid, financial results
NVIDIA has released the results of its first fiscal quarter of 2014. Overall, NVIDIA had a positive first quarter, with total revenue of $954.7 million and net income of $77.9 million. During Q1 2014 the company announced its GRID VCA for enterprise customers and the Tegra 4 and Tegra 4i for the mobile market. NVIDIA's shareholders saw earnings per share (EPS) of 13 cents, up 30% versus the same quarter last year. Interestingly, NVIDIA has announced that it will be returning $1 billion to shareholders through increased dividends and share buybacks.
Q1 2014 is an interesting quarter: it is up year over year, but down significantly versus the previous quarter (Q4'13). NVIDIA's Q1'14 revenue of $954.7 million is up 3.2% YoY from $924.9 million in Q1'13, but down 13.7% from $1.1 billion in the previous quarter. The dip is likely attributable to the fact that Q1'14 is the quarter after the holiday rush at the end of Q4. Considering revenue is still up versus last year, the dip versus last quarter shouldn't be taken as a bad sign. Net income follows a similar pattern, down 55% versus last quarter's $174 million, but up 29% YoY (Q1'13 net income was $60.4 million).
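The revenue swings are easy to check from the reported figures. A quick sketch (the exact Q4'13 revenue of $1,106.9 million is an assumption; the article rounds it to $1.1 billion):

```python
# Sanity-check the revenue changes quoted above. All values are in
# millions of dollars; the Q4 FY2013 figure is assumed, not quoted.
revenue_q1_fy14 = 954.7
revenue_q1_fy13 = 924.9
revenue_q4_fy13 = 1106.9  # assumed exact value behind "$1.1 billion"

def pct_change(new, old):
    """Percentage change from old to new."""
    return (new - old) / old * 100

yoy = pct_change(revenue_q1_fy14, revenue_q1_fy13)
seq = pct_change(revenue_q1_fy14, revenue_q4_fy13)
print(f"Year over year: {yoy:+.1f}%")  # roughly +3.2%
print(f"Sequential:     {seq:+.1f}%")  # roughly -14%
```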
The financial results indicate that NVIDIA is continuing to grow and remains profitable. According to NVIDIA, the company expects operating expenses and revenue to increase in Q2'14 to $448 million and approximately $975 million, respectively. Further, NVIDIA expects growth to continue throughout 2014 as it launches new Tegra 4(i) SoCs and expands its server/business offerings with its GRID technologies.
You can find NVIDIA's full financial report on the company's website.
Subject: Cases and Cooling | April 19, 2013 - 08:46 AM | Tim Verry
Tagged: nzxt, case fan, fan controller, fan hub, cooling, grid
NZXT has announced that it is making its Grid fan hub available to the masses. No longer only available with certain NZXT cases, the Grid fan hub takes a single Molex power cable and provides 3-pin power outputs for up to ten fans.
The NZXT kit will come with the Grid hub, a 200mm long Molex power adapter, a single 200mm long (3-pin) female-to-female adapter cable, and two 200mm (3-pin) fan extension cables. NZXT is also including five black cable ties to assist with cable management.
Unfortunately, the Grid does not provide adjustable fan speeds. All fans connected to the hub will run at 100% unless other means (such as inline resistors) are used to slow them down. If full-speed fans don't bother you, and your motherboard does not have enough fan headers but you cannot justify a full fan controller, the Grid might be for you. For the price, it is serviceable in that regard.
Speaking of pricing, the Grid fan hub will be available soon with an MSRP of $11.99. More information is available on NZXT's product page.
Is the Grid something that you could see yourself using?
Subject: General Tech, Graphics Cards | March 19, 2013 - 02:55 PM | Tim Verry
Tagged: unified virtual memory, ray tracing, nvidia, GTC 2013, grid vca, grid, graphics cards
Today, NVIDIA's CEO Jen-Hsun Huang stepped on stage to present the GTC keynote. In the presentation (which was live-streamed on the GTC website and archived here), NVIDIA discussed five major points, looking back over 2013 and into the future of its mobile and professional products. In addition to the product roadmap, NVIDIA discussed the state of computer graphics and GPGPU software. Remote graphics and GPU virtualization were also on tap. Finally, towards the end of the keynote, the company revealed its first appliance with the NVIDIA GRID VCA. The culmination of NVIDIA's GRID and GPU virtualization technology, the VCA is a device that hosts up to 16 virtual machines, each of which can tap into one of 16 Kepler-based graphics processors (8 cards with two GPUs each) to fully hardware-accelerate software running on the VCA. Three new mobile Tegra parts and two new desktop graphics processors were also hinted at, with improvements to power efficiency and performance.
On the desktop side of things, NVIDIA's roadmap included two new GPUs. Following Kepler, NVIDIA will introduce Maxwell and then Volta. Maxwell will feature a new virtualized memory technology called Unified Virtual Memory. This tech will allow both the CPU and GPU to read from a single (virtual) memory store. Much as with the promise of AMD's Kaveri APU, Unified Virtual Memory should bring speed improvements to heterogeneous applications because data will not have to be copied between the GPU and CPU before it can be processed. Server applications in particular stand to benefit from the shared memory tech. NVIDIA did not provide details, but from the sound of it, the CPU and GPU will each continue to write to their own physical memory, with a layer of virtualized memory on top that allows the two (or more) processors to read from each other's memory stores.
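NVIDIA has not published implementation details, so any code here is speculative, but the basic idea of a virtual layer over two separate physical memories can be sketched as a toy model (all names below are invented for illustration, not NVIDIA's API):

```python
# Toy model of a unified virtual address space over two physical memories.
# Purely illustrative: each processor writes to its own physical store,
# while a shared translation table lets either side read without a copy.

class UnifiedVirtualMemory:
    def __init__(self):
        self.physical = {"cpu": {}, "gpu": {}}  # separate physical stores
        self.page_table = {}  # virtual address -> (device, physical address)

    def write(self, device, paddr, value, vaddr):
        # A processor writes into its own physical memory...
        self.physical[device][paddr] = value
        # ...and the virtual layer records where the data lives.
        self.page_table[vaddr] = (device, paddr)

    def read(self, vaddr):
        # Any processor can read through the virtual address; no explicit
        # CPU<->GPU copy is needed.
        device, paddr = self.page_table[vaddr]
        return self.physical[device][paddr]

uvm = UnifiedVirtualMemory()
uvm.write("gpu", paddr=0x10, value="result", vaddr=0x1000)
print(uvm.read(0x1000))  # the CPU side sees the GPU's data without a memcpy
```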
Following Maxwell, Volta will be a physically smaller chip with more transistors (likely a smaller process node). In addition to power efficiency improvements over Maxwell, it steps up memory bandwidth significantly. NVIDIA will use TSV (through-silicon via) technology to physically mount the graphics DRAM chips over the GPU (electrically attached to the same silicon substrate). According to NVIDIA, this TSV-mounted memory will achieve up to 1 terabyte per second of memory bandwidth, a notable increase over existing GPUs.
NVIDIA continues to pursue the mobile market with its line of Tegra chips, which pair an ARM CPU, an NVIDIA GPU, and an SDR (software-defined radio) modem. Two new mobile chips, called Logan and Parker, will follow Tegra 4. Both will support the full CUDA 5 stack and OpenGL 4.3 out of the box. Logan will feature an on-chip Kepler-based graphics processor that can do "everything a modern computer ought to do," according to NVIDIA. Parker will have a yet-to-be-revealed graphics processor (a Kepler successor). This mobile chip will use 3D FinFET transistors and will pack a greater number of transistors into a smaller package than previous Tegra parts (it will be about the size of a dime); NVIDIA also plans to ramp up the frequency to wrangle more performance out of the mobile chip. NVIDIA has stated that Logan silicon should be completed towards the end of 2013, with the mobile chips entering production in 2014.
Interestingly, Logan has a sister chip that NVIDIA is calling Kayla. This mobile chip is capable of running ray tracing applications and supports OpenGL geometry shaders. It can run GPGPU code and will be compatible with Linux.
NVIDIA has been pushing CUDA for several years now and has seen respectable adoption, growing from a single Tesla-based supercomputer in 2008 to having its graphics cards used in 50 supercomputers, with 500 million CUDA processors on the market. There are now allegedly 640 universities working with CUDA and 37,000 academic papers on CUDA.
Finally, NVIDIA's hinted-at new product announcement was the NVIDIA GRID VCA, a GPU virtualization appliance that hooks into the network and can deliver up to 16 virtual machines running independent applications. These GPU-accelerated workspaces can be presented to thin clients over the network by installing the GRID client software on users' workstations. The specifications of the GRID VCA are rather impressive as well.
The GRID VCA features:
- 2 x Intel Xeon processors with 16 threads each (32 total threads)
- 192GB to 384GB of system memory
- 8 Kepler-based graphics cards, with two GPUs each (16 total GPUs)
- 16 x GPU-accelerated virtual machines
The GRID VCA fits into a 4U case. It can deliver remote graphics to workstations and is allegedly fast enough that GPU-accelerated software feels equivalent to running it on the local machine (at least over a LAN). The GRID Visual Computing Appliance will come in two flavors at different price points. The first has 8 Kepler GPUs with 4GB of memory each, 16 CPU threads, and 192GB of system memory for $24,900. The other version costs $34,900 and features 16 Kepler GPUs (4GB of memory each), 32 CPU threads, and 384GB of system memory. On top of the hardware cost, NVIDIA is also charging licensing fees. While both GRID VCA models can support an unlimited number of client devices, the licenses cost $2,400 and $4,800 per year, respectively.
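Using the prices quoted above, the first-year outlay for each configuration (hardware plus one year of the license) works out as follows:

```python
# First-year cost of each GRID VCA configuration: hardware price plus one
# year of the per-system software license, per the figures quoted above.
configs = {
    "8-GPU VCA": {"hardware": 24_900, "license_per_year": 2_400},
    "16-GPU VCA": {"hardware": 34_900, "license_per_year": 4_800},
}

for name, c in configs.items():
    first_year = c["hardware"] + c["license_per_year"]
    print(f"{name}: ${first_year:,} in year one")
# 8-GPU VCA: $27,300 in year one
# 16-GPU VCA: $39,700 in year one
```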
Overall, it was an interesting keynote, and the proposed graphics cards look to offer some unique and necessary features that should help hasten the day of ubiquitous general-purpose GPU computing. Unified Virtual Memory was something I was not expecting, and it will be interesting to see how AMD responds. AMD is already promising shared memory in its Kaveri APU, but I am interested to see how NVIDIA and AMD will accomplish shared memory with dedicated graphics cards (and whether CrossFire/SLI setups will all have a single shared memory pool).
Stay tuned to PC Perspective for more GTC 2013 Coverage!
Subject: Graphics Cards, Shows and Expos | January 9, 2013 - 11:46 AM | Ryan Shrout
Tagged: video, nvidia, grid, cloud gaming, ces 2013, CES
Despite all the excitement about the NVIDIA Shield handheld gaming device at CES, the company was also heavily promoting its GRID cloud gaming technology, making it another company promising "game everywhere, on everything." NVIDIA's claims of lower latency, thanks to rendering and encoding on the same GPU, have yet to be verified, as the hands-on demos at the show were running on local servers (not exactly a real-world test).
NVIDIA isn't planning to release a self-branded service to the public, but instead wants to sell servers to ISPs and service providers to increase density (more games per server) and performance. No cloud gaming companies are currently using GRID technology, so it looks like we'll have to wait a bit longer to see its true capabilities.
PC Perspective's CES 2013 coverage is sponsored by AMD.
Follow all of our coverage of the show at http://pcper.com/ces!
NVIDIA puts its head in the clouds
Today at the 2012 NVIDIA GPU Technology Conference (GTC), NVIDIA took the wraps off a new cloud gaming technology that promises to reduce latency and improve the quality of streaming gaming using the power of NVIDIA GPUs. Dubbed GeForce GRID, NVIDIA is offering the technology to online services like Gaikai and OTOY.
The goal of GRID is to bring the promise of "console quality" gaming to every device a user has. The term "console quality" is important here, as NVIDIA is trying desperately not to upset all the PC gamers who purchase high-margin GeForce products. The goal of GRID is pretty simple, though, and should be seen as an evolution of the online streaming gaming we have covered in the past, such as OnLive. Being able to play high-quality games on your TV, computer, tablet, or even phone through streaming services, without the need for high-performance, power-hungry graphics processors, is what many believe the future of gaming is all about.
GRID starts with the Kepler GPU, which NVIDIA is now dubbing the first "cloud GPU," and its ability to virtualize graphics processing while remaining power efficient. The inclusion of a hardware fixed-function video encoder is important as well, since it aids in compressing the images that the streaming gaming service delivers over the Internet.
This diagram shows how the Kepler GPU handles and accelerates the processing required for online gaming services. On the server side, getting an image to the user takes more than a simple render to a frame buffer. In current cloud gaming scenarios, the frame buffer has to be copied to main system memory, compressed on the CPU, and then sent via the network connection. With NVIDIA's GRID technology, capture and compression happen in GPU memory, so the frame can be on its way to the gamer faster.
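The difference between the two server-side data paths can be summarized as a pair of stage lists (the stage names are this sketch's own, not NVIDIA's terminology):

```python
# Server-side path in a conventional cloud gaming setup, per the
# description above (stage names are illustrative).
traditional_path = [
    "render frame into GPU frame buffer",
    "copy frame buffer to system memory",
    "compress frame on the CPU",
    "send compressed frame over the network",
]

# With GRID, capture and compression stay on the GPU, removing the
# frame-buffer-to-system-memory copy from the latency chain.
grid_path = [
    "render frame into GPU frame buffer",
    "capture and compress frame on the GPU",
    "send compressed frame over the network",
]

assert "copy frame buffer to system memory" not in grid_path
print(f"Stages removed from the chain: {len(traditional_path) - len(grid_path)}")
```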
The result is an H.264 stream that is compressed quickly and efficiently, sent out over the network, and delivered to the end user on whatever device they are using.