Manufacturer: NVIDIA

Looking Towards the Professionals

This is a multi-part story for the NVIDIA Titan V:

Earlier this week we dove into the new NVIDIA Titan V graphics card and looked at its performance from a gaming perspective. Our conclusions were more or less what we expected: the card was on average ~20% faster than the Titan Xp and ~80% faster than the GeForce GTX 1080. But with that $3000 price tag, the Titan V isn't going to win over any enthusiasts.

What the Titan V is really meant for is the compute space: developers, coders, engineers, and professionals who use GPU hardware for research, for profit, or both. In that context, $2999 for the Titan V is simply an investment that needs to show value in select workloads. And though $3000 is still a lot of money, keep in mind that the NVIDIA Quadro GP100, the most recent Pascal-based part with full-rate double precision compute, still sells for well over $6000 today.


The Volta GV100 GPU offers 1:2 double precision performance, equating to 2560 FP64 cores. That is a HUGE leap over the GP102 GPU used in the Titan Xp, which uses a 1:32 ratio and gives us the equivalent of just 120 FP64 cores.

| | Titan V | Titan Xp | GTX 1080 Ti | GTX 1080 | GTX 1070 Ti | GTX 1070 | RX Vega 64 Liquid | Vega Frontier Edition |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPU Cores | 5120 | 3840 | 3584 | 2560 | 2432 | 1920 | 4096 | 4096 |
| FP64 Cores | 2560 | 120 | 112 | 80 | 76 | 60 | 256 | 256 |
| Base Clock | 1200 MHz | 1480 MHz | 1480 MHz | 1607 MHz | 1607 MHz | 1506 MHz | 1406 MHz | 1382 MHz |
| Boost Clock | 1455 MHz | 1582 MHz | 1582 MHz | 1733 MHz | 1683 MHz | 1683 MHz | 1677 MHz | 1600 MHz |
| Texture Units | 320 | 240 | 224 | 160 | 152 | 120 | 256 | 256 |
| ROP Units | 96 | 96 | 88 | 64 | 64 | 64 | 64 | 64 |
| Memory | 12GB | 12GB | 11GB | 8GB | 8GB | 8GB | 8GB | 16GB |
| Memory Clock | 1700 MHz | 11400 MHz | 11000 MHz | 10000 MHz | 8000 MHz | 8000 MHz | 1890 MHz | 1890 MHz |
| Memory Interface | 3072-bit HBM2 | 384-bit G5X | 352-bit G5X | 256-bit G5X | 256-bit | 256-bit | 2048-bit HBM2 | 2048-bit HBM2 |
| Memory Bandwidth | 653 GB/s | 547 GB/s | 484 GB/s | 320 GB/s | 256 GB/s | 256 GB/s | 484 GB/s | 484 GB/s |
| TDP | 250 watts | 250 watts | 250 watts | 180 watts | 180 watts | 150 watts | 345 watts | 300 watts |
| Peak Compute | 12.2 (base) / 14.9 (boost) TFLOPS | 12.1 TFLOPS | 11.3 TFLOPS | 8.2 TFLOPS | 7.8 TFLOPS | 5.7 TFLOPS | 13.7 TFLOPS | 13.1 TFLOPS |
| Peak DP Compute | 6.1 (base) / 7.45 (boost) TFLOPS | 0.37 TFLOPS | 0.35 TFLOPS | 0.25 TFLOPS | 0.24 TFLOPS | 0.17 TFLOPS | 0.85 TFLOPS | 0.81 TFLOPS |
| MSRP (current) | $2999 | $1299 | $699 | $499 | $449 | $399 | $699 | $999 |

The current AMD Radeon RX Vega 64 and the Vega Frontier Edition both ship with a 1:16 FP64 ratio, giving us the equivalent of 256 DP cores per card.
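
If you want to sanity-check the Peak Compute and Peak DP Compute rows in the table above, the math is straightforward: each CUDA core (or FP64 core) retires one fused multiply-add, i.e. two FLOPs, per clock. Here is a minimal sketch of that arithmetic, our own illustration using numbers straight from the table:

```python
# Peak TFLOPS = cores x 2 FLOPs/clock (fused multiply-add) x clock
def peak_tflops(cores, clock_mhz):
    return cores * 2 * clock_mhz * 1e6 / 1e12

# Titan V: FP32 on 5120 cores, FP64 on 2560 cores (1:2 ratio)
print(peak_tflops(5120, 1455))  # ~14.9 TFLOPS FP32 (boost)
print(peak_tflops(2560, 1455))  # ~7.45 TFLOPS FP64 (boost)

# Titan Xp: the 1:32 ratio leaves the equivalent of 120 FP64 cores
print(peak_tflops(3840, 1582))  # ~12.1 TFLOPS FP32
print(peak_tflops(120, 1582))   # ~0.38 TFLOPS FP64 (table rounds to 0.37)
```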

Test Setup and Benchmarks

Our testing setup remains the same from our gaming tests, but obviously the software stack is quite different. 

| PC Perspective GPU Testbed | |
| --- | --- |
| Processor | Intel Core i7-5960X Haswell-E |
| Motherboard | ASUS Rampage V Extreme X99 |
| Memory | G.Skill Ripjaws 16GB DDR4-3200 |
| Storage | OCZ Agility 4 256GB (OS), Adata SP610 500GB (games) |
| Power Supply | Corsair AX1500i 1500 watt |
| OS | Windows 10 x64 |
| Drivers | AMD: 17.10.2, NVIDIA: 388.59 |

Applications in use include:

  • Luxmark 
  • Cinebench R15
  • V-Ray
  • SiSoft Sandra GPU Compute
  • SPECviewperf 12.1
  • FAHBench

Let's not drag this out - I know you are hungry for results! (Thanks to Ken for running most of these tests for us!!)

Continue reading part 2 of our Titan V review on compute performance!!

Manufacturer: NVIDIA

A preview of potential Volta gaming hardware

This is a multi-part story for the NVIDIA Titan V:

As a surprise to most of us in the media community, NVIDIA launched a new graphics card to the world, the TITAN V. No longer sporting the GeForce brand, NVIDIA has returned the Titan line of cards to where it began – clearly targeted at the world of developers and general purpose compute. And if that branding switch isn’t enough to drive that home, I’m guessing the $2999 price tag will be.

Today’s article looks at the TITAN V from the angle that is likely most interesting to the majority of our readers, which also happens to be the angle that NVIDIA is least interested in us discussing. Though the card is targeted at machine learning and the like, there is little doubt in my mind that some crazy people will take on the $3000 price to see what kind of gaming power it can provide. After all, this marks the first time that a Volta-based GPU from NVIDIA has shipped in a form consumers can get their hands on, and the first time one has shipped with display outputs. (That’s kind of important if you want to build a PC around it…)


From a scientific standpoint, we wanted to look at the Titan V for the same reasons we tested the AMD Vega Frontier Edition cards upon their launch: using it to estimate how future consumer-class cards will perform in gaming. And, just as we had to do then, we purchased this Titan V from NVIDIA.com with our own money. (If anyone wants to buy this from me to recoup the costs, please let me know! Ha!)

| | Titan V | Titan Xp | GTX 1080 Ti | GTX 1080 | GTX 1070 Ti | GTX 1070 | RX Vega 64 Liquid | Vega Frontier Edition |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPU Cores | 5120 | 3840 | 3584 | 2560 | 2432 | 1920 | 4096 | 4096 |
| Base Clock | 1200 MHz | 1480 MHz | 1480 MHz | 1607 MHz | 1607 MHz | 1506 MHz | 1406 MHz | 1382 MHz |
| Boost Clock | 1455 MHz | 1582 MHz | 1582 MHz | 1733 MHz | 1683 MHz | 1683 MHz | 1677 MHz | 1600 MHz |
| Texture Units | 320 | 240 | 224 | 160 | 152 | 120 | 256 | 256 |
| ROP Units | 96 | 96 | 88 | 64 | 64 | 64 | 64 | 64 |
| Memory | 12GB | 12GB | 11GB | 8GB | 8GB | 8GB | 8GB | 16GB |
| Memory Clock | 1700 MHz | 11400 MHz | 11000 MHz | 10000 MHz | 8000 MHz | 8000 MHz | 1890 MHz | 1890 MHz |
| Memory Interface | 3072-bit HBM2 | 384-bit G5X | 352-bit G5X | 256-bit G5X | 256-bit | 256-bit | 2048-bit HBM2 | 2048-bit HBM2 |
| Memory Bandwidth | 653 GB/s | 547 GB/s | 484 GB/s | 320 GB/s | 256 GB/s | 256 GB/s | 484 GB/s | 484 GB/s |
| TDP | 250 watts | 250 watts | 250 watts | 180 watts | 180 watts | 150 watts | 345 watts | 300 watts |
| Peak Compute | 12.2 (base) / 14.9 (boost) TFLOPS | 12.1 TFLOPS | 11.3 TFLOPS | 8.2 TFLOPS | 7.8 TFLOPS | 5.7 TFLOPS | 13.7 TFLOPS | 13.1 TFLOPS |
| MSRP (current) | $2999 | $1299 | $699 | $499 | $449 | $399 | $699 | $999 |

The Titan V is based on the GV100 GPU, though with some tweaks that slightly lower performance and capability compared to the equivalent Tesla-branded hardware. While our add-in card iteration has the full 5120 CUDA cores enabled, the HBM2 memory bus is reduced from 4096-bit to 3072-bit, with one of the four stacks on the package disabled. This also drops memory capacity from 16GB to 12GB, and memory bandwidth to 652.8 GB/s.
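
That 652.8 GB/s figure falls straight out of the bus width and data rate, for anyone who wants to verify it. A quick illustrative calculation (our arithmetic, using the specs above):

```python
# HBM2 bandwidth = bus width (in bytes) x effective data rate per pin
bus_bits = 3072        # three of four stacks enabled (4096-bit on Tesla V100)
data_rate_gbps = 1.7   # 850 MHz memory clock, double data rate

bandwidth_gbs = bus_bits / 8 * data_rate_gbps
print(bandwidth_gbs)   # 652.8 GB/s, matching NVIDIA's spec
```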

Continue reading our gaming review of the NVIDIA Titan V!!

Video: What does a $3000 GPU look like? NVIDIA TITAN V Unboxing and Teardown!

Subject: Graphics Cards | December 12, 2017 - 07:51 PM |
Tagged: nvidia, titan, titan v, Volta, video, teardown, unboxing

NVIDIA launched the new Titan V graphics card last week, a $2999 part targeted not at gamers (thankfully) but instead at developers of machine learning applications. Based on the GV100 GPU and 12GB of HBM2 memory, the Titan V is an incredibly powerful graphics card. We have every intention of looking at the gaming performance of this card as a "preview" of potential consumer Volta cards that may come out next year. (This is identical to our stance on testing the Vega Frontier Edition cards.)

But for now, enjoy this unboxing and teardown video that takes apart the card to get a good glimpse of that GV100 GPU.

A couple of quick interesting notes:

  • This implementation has 25% of the memory and ROPs disabled, giving us 12GB of HBM2, a 3072-bit bus, and 96 ROPs.
  • Clock speeds in our testing look to be much higher than the base AND boost ratings.
  • So far, even though the price takes this out of the gaming segment completely, we are impressed with some of the gaming results we have found.
  • The cooler might LOOK the same, but it is definitely heavier than the cooler built for the Titan Xp.
  • Champagne. It's champagne colored.
  • Double precision performance is insanely good, spanking the Titan Xp and Vega so far in many tests.
  • More soon!


Source: NVIDIA

NVIDIA Launches Titan V, the World's First Consumer Volta GPU with HBM2

Subject: Graphics Cards | December 7, 2017 - 11:44 PM |
Tagged: Volta, titan, nvidia, graphics card, gpu

NVIDIA made a surprising move late Thursday with the simultaneous announcement and launch of the Titan V, the first consumer/prosumer graphics card based on the Volta architecture.


Like recent flagship Titan-branded cards, the Titan V will be available exclusively from NVIDIA for $2,999. Labeled "the most powerful graphics card ever created for the PC," Titan V sports 12GB of HBM2 memory, 5120 CUDA cores, and a 1455MHz boost clock, giving the card 110 teraflops of maximum compute performance. Check out the full specs below:

6 Graphics Processing Clusters
80 Streaming Multiprocessors
5120 CUDA Cores (single precision)
320 Texture Units
640 Tensor Cores
1200 MHz Base Clock
1455 MHz Boost Clock
850 MHz Memory Clock
1.7 Gbps Memory Data Rate
4608 KB L2 Cache
12288 MB HBM2 Total Video Memory
3072-bit Memory Interface
652.8 GB/s Total Memory Bandwidth
384 GigaTexels/sec Texture Rate (Bilinear)
12 nm Fabrication Process (TSMC 12nm FFN High Performance)
21.1 Billion Transistors
3 x DisplayPort, 1 x HDMI Connectors
Dual-Slot Form Factor
One 6-pin, One 8-pin Power Connector
600 Watt Recommended Power Supply
250 Watt Thermal Design Power (TDP)

The NVIDIA Titan V's 110 teraflops of compute performance compares to a maximum of about 12 teraflops on the Titan Xp, a greater than 9X increase in a single generation. Note that this is a very specific claim, though: it references the AI compute capability of the Tensor cores rather than what we traditionally measure for GPUs (single precision FLOPS). By that metric, the Titan V only offers a jump to roughly 14 TFLOPS. The addition of expensive HBM2 memory also adds to the high price compared to its predecessor.
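
To show where both headline numbers come from, here is a rough sketch of the arithmetic, assuming the commonly cited rate of 128 FLOPs per tensor core per clock (a 4x4x4 matrix fused multiply-add); this is our own illustration, not NVIDIA's math:

```python
# Traditional metric: FP32 FLOPS from the CUDA cores (2 FLOPs/clock via FMA)
fp32_tflops = 5120 * 2 * 1.455e9 / 1e12
print(fp32_tflops)    # ~14.9 TFLOPS at the rated boost clock

# Tensor metric: each tensor core performs a 4x4x4 matrix multiply-add
# per clock = 64 multiplies + 64 adds = 128 FLOPs
tensor_tflops = 640 * 128 * 1.455e9 / 1e12
print(tensor_tflops)  # ~119 TFLOPS at the rated boost; NVIDIA's 110 TFLOPS
                      # figure implies a slightly lower sustained clock
```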


The Titan V is available now from NVIDIA.com for $2,999, with a limit of 2 per customer. And hey, there's free shipping too.

Source: NVIDIA

NVIDIA's SC17 Keynote: Data Center Business on Cloud 9

Subject: Graphics Cards | November 13, 2017 - 10:35 PM |
Tagged: nvidia, data center, Volta, tesla v100

There have been a few NVIDIA datacenter stories popping up over the last couple of months. A month or so after Google started integrating Pascal-based Tesla P100s into their cloud, Amazon announced Tesla V100s for their rent-a-server service. NVIDIA has also announced Volta-based solutions available or coming from Dell EMC, Hewlett Packard Enterprise, Huawei, IBM, Lenovo, Alibaba Cloud, Baidu Cloud, Microsoft Azure, Oracle Cloud, and Tencent Cloud.


This apparently translates to boatloads of money. Eyeball-estimating from their graph, it looks as though NVIDIA has already made about 50% more from datacenter sales in the first three quarters of fiscal year 2018 than in all of last year.


They are seeing supercomputer design wins, too. Earlier this year, Japan announced that it would get back into supercomputing, having lost ground to other nations in recent years, with a giant, AI-focused system. It turns out that this design will use 4352 Tesla V100 GPUs to crank out 0.55 ExaFLOPS of (tensor mixed-precision) performance.
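
That figure lines up with each Tesla V100 being rated at roughly 125 tensor TFLOPS; a quick illustrative check of the arithmetic (our math, not an official breakdown):

```python
# 4352 Tesla V100s x ~125 tensor TFLOPS each
print(4352 * 125e12 / 1e18)  # ~0.544 ExaFLOPS, matching the ~0.55 claim
```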


As for product announcements, this one isn’t too exciting for our readers, but it should be very important for enterprise software developers. NVIDIA is creating optimized containers for various programming environments, such as TensorFlow and GAMESS, with their recommended blend of driver version, runtime libraries, and so forth, for various generations of GPUs (Pascal and higher). Moreover, NVIDIA claims that they will support these containers “for as long as they live”. Getting the right container for your hardware is just a matter of filling out a simple form and downloading the blob.

NVIDIA’s keynote is available on UStream, but they claim it will also be uploaded to their YouTube soon.

Source: NVIDIA

Podcast #474 - Optane 900P, Cord Cutting, 1070 Ti, and more!

Subject: General Tech | November 2, 2017 - 12:11 PM |
Tagged: Volta, video, podcast, PCI-e 4, nvidia, msi, Microsoft Andromeda, Memristors, Mali-D71, Intel Optane, gtx 1070 ti, cord cutting, arm, aegis 3, 8th generation core

PC Perspective Podcast #474 - 11/02/17

Join us for discussion on Optane 900P, Cord Cutting, 1070 Ti, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom, Allyn Malventano

Peanut Gallery: Ken Addison, Alex Lustenberg

Program length: 1:32:19

Podcast topics of discussion:
  1. Week in Review:
  2. News items of interest:
  3. Hardware/Software Picks of the Week
    1. 1:17:00 Ryan: Intel 900P Optane SSD
    2. 1:26:45 Allyn: Sony RX10 Mk IV. Pricey, but damn good.
  4. Closing/outro


NVIDIA Partners with AWS for Volta V100 in the Cloud

Subject: Graphics Cards | October 31, 2017 - 09:58 PM |
Tagged: nvidia, amazon, google, pascal, Volta, gv100, tesla v100

Remember last month? Remember when I said that Google’s introduction of Tesla P100s would be good leverage over Amazon, as the latter is still back in the Kepler days (because Maxwell was 32-bit focused)?

Amazon has leapfrogged them by introducing Volta-based V100 GPUs.

nvidia-2017-voltatensor.jpg

To compare the two parts: the Tesla P100 has 3584 CUDA cores, yielding just under 10 TFLOPS of single-precision performance. The Tesla V100, with its ridiculous die size, pushes that to over 14 TFLOPS. As with Pascal's GP100, it supports full 1:2:4 FP64:FP32:FP16 performance scaling. It also has access to NVIDIA’s tensor cores, which are specialized for 16-bit, 4x4 multiply-add matrix operations that are common in neural networks, both in training and inferencing.
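
For the curious, the operation each tensor core performs per clock amounts to D = A x B + C on small matrix tiles, with FP16 inputs and FP32 accumulation. A minimal numpy sketch of the semantics (illustrative only; the real hardware does this in fixed function, and CUDA exposes it through larger WMMA tiles):

```python
import numpy as np

# One tensor-core operation, conceptually: D = A @ B + C on 4x4 tiles,
# FP16 inputs multiplied together, accumulated into FP32
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C

# 64 multiplies + 64 adds = 128 FLOPs per tensor core, per clock
print(D.shape)  # (4, 4)
```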

Amazon allows up to eight of them at once (with their p3.16xlarge instances).

So that’s cool. While Google has again been quickly leapfrogged by Amazon, it’s good to see NVIDIA getting wins in multiple cloud providers. This keeps money rolling in that will fund new chip designs for all the other segments.

Source: Amazon

NVIDIA Teases Low Power, High Performance Xavier SoC That Will Power Future Autonomous Vehicles

Subject: Processors | October 1, 2016 - 06:11 PM |
Tagged: xavier, Volta, tegra, SoC, nvidia, machine learning, gpu, drive px 2, deep neural network, deep learning

Earlier this week at its first GTC Europe event in Amsterdam, NVIDIA CEO Jen-Hsun Huang teased a new SoC code-named Xavier that will be used in self-driving cars and feature the company's newest custom ARM CPU cores and Volta GPU. The new chip will begin sampling at the end of 2017 with product releases using the future Tegra (if they keep that name) processor as soon as 2018.


NVIDIA's Xavier is promised to be the successor to the company's Drive PX 2 system, which uses two Tegra X2 SoCs and two discrete Pascal MXM GPUs on a single water-cooled platform. These claims are even more impressive considering that NVIDIA is promising not only to replace those four processors, but to reportedly do so at 20W – less than a tenth of the TDP!

The company has not revealed all the nitty-gritty details, but they did tease out a few bits of information. The new processor will feature 7 billion transistors and will be based on a refined 16nm FinFET process while consuming a mere 20W. It can process two 8K HDR video streams and can hit 20 TOPS (NVIDIA's own rating for deep learning INT8 operations).

Specifically, NVIDIA claims that the Xavier SoC will use eight custom ARMv8 (64-bit) CPU cores (it is unclear whether these cores will be a refined Denver architecture or something else) and a GPU based on its upcoming Volta architecture with 512 CUDA cores. Also, in an interesting twist, NVIDIA is including a "Computer Vision Accelerator" on the SoC, though the company did not go into many details. This bit of silicon may explain how the ~300mm2 die with 7 billion transistors is able to match the 7.2 billion transistor Pascal-based Tesla P4 (2560 CUDA cores) graphics card at deep learning (tera-operations per second) tasks. That is, of course, in addition to the incremental improvements from moving to Volta and a new ARMv8 CPU architecture on a refined 16nm FF+ process.

| | Drive PX | Drive PX 2 | NVIDIA Xavier | Tesla P4 |
| --- | --- | --- | --- | --- |
| CPU | 2 x Tegra X1 (8 x A57 total) | 2 x Tegra X2 (8 x A57 + 4 x Denver total) | 1 x Xavier SoC (8 x Custom ARM + 1 x CVA) | N/A |
| GPU | 2 x Tegra X1 (Maxwell, 512 CUDA cores total) | 2 x Tegra X2 GPUs + 2 x Pascal GPUs | 1 x Xavier SoC GPU (Volta, 512 CUDA cores) | 2560 CUDA cores (Pascal) |
| TFLOPS | 2.3 TFLOPS | 8 TFLOPS | ? | 5.5 TFLOPS |
| DL TOPS | ? | 24 TOPS | 20 TOPS | 22 TOPS |
| TDP | ~30W (2 x 15W) | 250W | 20W | up to 75W |
| Process Tech | 20nm | 16nm FinFET | 16nm FinFET+ | 16nm FinFET |
| Transistors | ? | ? | 7 billion | 7.2 billion |

For comparison, the currently available Tesla P4 based on the Pascal architecture has a TDP of up to 75W and is rated at 22 TOPS. This would suggest that Volta is a much more efficient architecture (at least for deep learning and half precision)! I am not sure how NVIDIA is able to match its GP104 with only 512 Volta CUDA cores, though their definition of a "core" could have changed and/or the CVA processor may be responsible for closing that gap. Unfortunately, NVIDIA did not disclose what it rates the Xavier at in TFLOPS, so it is difficult to compare, and it may not match GP104 at higher precision workloads. It could be wholly optimized for INT8 operations rather than floating point performance. Beyond that I will let Scott dive into those particulars once we have more information!
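
The P4's 22 TOPS rating appears to line up with Pascal's DP4A instruction, which performs a 4-element INT8 dot product with accumulate per lane per clock, i.e. four times the operations of an FP32 FMA. A rough sketch of that relationship, noting that this is our interpretation of the public numbers and the ~1.3 GHz Xavier clock is purely our guess, not a disclosed spec:

```python
# Pascal DP4A: 4 multiplies + 4 adds = 8 INT8 ops/clock/lane,
# versus 2 FLOPs for an FP32 FMA -> 4x throughput ratio
p4_int8_tops = 5.5 * 4
print(p4_int8_tops)     # 22 TOPS, matching NVIDIA's rating for the Tesla P4

# If Xavier's 512 Volta cores used the same DP4A-style path at ~1.3 GHz:
xavier_gpu_tops = 512 * 8 * 1.3e9 / 1e12
print(xavier_gpu_tops)  # ~5.3 TOPS -- well short of 20, hence the suspicion
                        # that the CVA (or a changed "core" definition)
                        # makes up the difference
```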

Xavier is more of a teaser than anything, and the chip could very well change dramatically and/or not hit the claimed performance targets. Still, it sounds promising, and it is always nice to speculate over road maps. It is an intriguing chip and I am ready for more details, especially on the Volta GPU and just what exactly that Computer Vision Accelerator is (and whether it will be easy to program for). I am a big fan of the "self-driving car" and I hope that it succeeds. The trend certainly looks set to continue as Tesla, VW, BMW, and other automakers push the envelope of what is possible and plan future cars that will include smart driving assists and even cars that can drive themselves. The more local computing power we can throw at automobiles the better; while massive datacenters can be used to train the neural networks, local hardware to run them and make decisions is necessary (you don't want internet latency contributing to the decision of whether to brake or not!).

I hope that NVIDIA's self-proclaimed "AI Supercomputer" turns out to be at least close to the performance they claim! Stay tuned for more information as it gets closer to launch (hopefully more details will emerge at GTC 2017 in the US).

What are your thoughts on Xavier and the whole self-driving car future?


Source: NVIDIA
Manufacturer: NVIDIA

Is Enterprise Ascending Outside of Consumer Viability?

So a couple of weeks have gone by since the Quadro P6000 was announced and the new Titan X launched. With them, we received a new chip: GP102. Since Fermi, NVIDIA has labeled their GPU designs with a G, followed by a single letter for the architecture (F, K, M, or P for Fermi, Kepler, Maxwell, and Pascal, respectively), which is then followed by a three-digit number. The last digit is the most relevant one, however, as it separates designs by their intended size.


Typically, 0 corresponds to a ~550-600mm2 design, which is about as large a design as fabrication facilities can create without error-prone techniques, like multiple exposures (that is, trying to precisely overlap multiple exposures to form a larger integrated circuit). 4 corresponds to ~300mm2, although GM204 was pretty large at 398mm2, likely to increase the core count while remaining on a 28nm process. Higher numbers, like 6 or 7, fill out the lower-end SKUs until NVIDIA essentially stops caring about that generation. So when we moved to Pascal, jumping two whole process nodes, NVIDIA looked at their wristwatches and said “about time to make another 300mm2 part, I guess?”

The GTX 1080 and the GTX 1070 (GP104, 314mm2) were born.


NVIDIA already announced a 600mm2 part, though. The GP100 had 3840 CUDA cores, HBM2 memory, and an ideal ratio of 1:2:4 between FP64:FP32:FP16 performance. (A 64-bit chunk of memory can store one 64-bit value, two 32-bit values, or four 16-bit values, unless the register is attached to logic circuits that, while smaller, don't know how to operate on the data.) This increased ratio, even over Kepler's 1:6 FP64:FP32, is great for GPU compute, but wasted die area for today's (and tomorrow's) games. I'm predicting that it takes the wind out of Intel's sales, as Xeon Phi's 1:2 FP64:FP32 performance ratio is one of its major selling points, leading to its inclusion in many supercomputers.
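
The parenthetical above is easy to demonstrate: the same 8 bytes can be interpreted as one FP64, two FP32, or four FP16 values, which is why a 1:2:4 rate ratio is "ideal" for hardware that can reuse its datapaths across precisions. A quick illustration using Python's struct module:

```python
import struct

# The same 64-bit chunk of memory, three ways:
one_fp64  = struct.pack('<d', 3.14159)               # 1 x 64-bit double
two_fp32  = struct.pack('<2f', 3.14, 2.72)           # 2 x 32-bit floats
four_fp16 = struct.pack('<4e', 1.0, 2.0, 3.0, 4.0)   # 4 x 16-bit halves

# All three occupy exactly 8 bytes
print(len(one_fp64), len(two_fp32), len(four_fp16))  # 8 8 8
```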

Despite the HBM2 memory controller supposedly being smaller than the GDDR5(X) one, NVIDIA could still save die space while providing 3840 CUDA cores (a few of which are disabled on Titan X). The trade-off is that FP64 and FP16 performance had to decrease dramatically, from 1:2 and 2:1 relative to FP32 all the way down to 1:32 and 1:64. The new design comes in at 471mm2, although it is $200 more expensive than what the 600mm2 products, GK110 and GM200, launched at. Smaller dies provide more products per wafer and, better, the number of defective chips should be relatively constant.

That aside, it puts NVIDIA in an interesting position. Splitting the xx0-class chip into xx0 and xx2 designs allows NVIDIA to lower the cost of their high-end gaming parts, although it cuts out hobbyists who buy a Titan for double-precision compute. More interestingly, it leaves around 150mm2 for AMD to sneak in an FP32-centric design, giving them a shot at the performance crown.


On the other hand, as fabrication node changes are becoming less frequent, it's possible that NVIDIA could be leaving itself room for Volta, too. Last month, it was rumored that NVIDIA would release two architectures at 16nm, in the same way that Maxwell shared 28nm with Kepler. In this case, Volta, on top of whatever other architectural advancements NVIDIA rolls into that design, can also grow a little in size. At that time, TSMC would have better yields, making a 600mm2 design less costly in terms of waste and recovery.

If this is the case, we could see the GPGPU folks receiving a new architecture once for every second gaming (and professional graphics) architecture. That is, unless you are a hobbyist. If you are? I would need to be wrong, or NVIDIA would need to somehow bring their enterprise SKU down to an affordable price point. The xx0 class seems to have been pushed up and out of viability for consumers.

Or, again, I could just be wrong.

Podcast #409 - GTX 1060 Review, 3DMark Time Spy Controversy, Tiny Nintendo and more!

Subject: General Tech | July 21, 2016 - 12:21 PM |
Tagged: Wraith, Volta, video, time spy, softbank, riotoro, retroarch, podcast, nvidia, new, kaby lake, Intel, gtx 1060, geforce, asynchronous compute, async compute, arm, apollo lake, amd, 3dmark, 10nm, 1070m, 1060m

PC Perspective Podcast #409 - 07/21/2016

Join us this week as we discuss the GTX 1060 review, controversy surrounding the async compute of 3DMark Time Spy and more!!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

This episode of the PC Perspective Podcast is sponsored by Casper!

Hosts: Ryan Shrout, Allyn Malventano, Jeremy Hellstrom, and Josh Walrath

Program length: 1:34:57
  1. Week in Review:
  2. 0:51:17 This episode of the PC Perspective Podcast is sponsored by Casper!
  3. News items of interest:
  4. 1:26:26 Hardware/Software Picks of the Week
    1. Ryan: Sapphire Nitro Bot
    2. Allyn: klocki - chill puzzle game (also on iOS / Android)
  5. Closing/outro