Creatively testing GPUs with Google's Tilt Brush

Subject: Graphics Cards | August 23, 2016 - 01:43 PM |
Tagged: amd, nvidia, Tilt Brush, VR

[H]ard|OCP continues their foray into testing VR applications, this time moving away from games to try out Google's rather impressive Tilt Brush VR drawing application.  If you have yet to see this software in action, it is rather incredible, although you still require an artist's talent and practiced skills to create true 3D masterpieces.

Artistic merit may not be [H]'s strong suit, but testing how well a GPU can power VR applications certainly lies within their bailiwick.  Once again they tested five NVIDIA GPUs and a pair of AMD cards, checking for dropped frames and for reprojection caused by drops in FPS.
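For context on what those two metrics mean: the Vive refreshes at 90 Hz, giving the GPU roughly 11.1 ms to render each frame; miss that deadline and the compositor reprojects (re-warps) the previous frame instead. Below is a minimal sketch of how a frame-time log might be classified against that budget. The thresholds and log format are our own illustrative assumptions, not [H]'s actual methodology:

```python
# Hypothetical sketch: classify frame times against the HTC Vive's
# 90 Hz refresh (~11.1 ms budget). Not [H]ard|OCP's actual tooling.

REFRESH_HZ = 90
BUDGET_MS = 1000.0 / REFRESH_HZ  # ~11.11 ms per frame

def classify_frames(frame_times_ms):
    """Count frames that met the deadline vs. those that would
    trigger reprojection (one missed vsync) or drop entirely."""
    on_time, reprojected, dropped = 0, 0, 0
    for t in frame_times_ms:
        if t <= BUDGET_MS:
            on_time += 1
        elif t <= 2 * BUDGET_MS:   # missed one vsync: compositor reprojects
            reprojected += 1
        else:                      # missed two or more: effectively dropped
            dropped += 1
    return on_time, reprojected, dropped

print(classify_frames([9.8, 10.5, 12.3, 24.0, 11.0]))  # (3, 1, 1)
```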


"We are changing gears a bit with our VR Performance coverage and looking at an application that is not as GPU-intensive as those we have looked at in the recent past. Google's Tilt Brush is a virtual reality application that makes use of the HTC Vive head mounted display and its motion controllers to allow you to paint in 3D space."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP
Manufacturer: PC Perspective

Why Two 4GB GPUs Isn't Necessarily 8GB

We're trying something new here at PC Perspective. Some topics are difficult to explain cleanly without accompanying images, and we like to go fairly deep into specific topics, so we're hoping that we can provide educational cartoons that explain these issues.

This pilot episode is about load-balancing and memory management in multi-GPU configurations. There seems to be a lot of confusion around what was (and was not) possible with DirectX 11 and OpenGL, and even more confusion about what DirectX 12, Mantle, and Vulkan allow developers to do. It highlights three different load-balancing algorithms, and even briefly mentions what LucidLogix was attempting to accomplish almost ten years ago.
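To make the memory point concrete, here is a small sketch, our own illustration rather than anything taken from the video, of why alternate-frame rendering (AFR) under DirectX 11 tends to mirror resources on every GPU, while the explicit multi-adapter model in DirectX 12, Mantle, and Vulkan at least allows a developer to partition them:

```python
# Illustrative sketch (not from the video): AFR duplicates memory across
# GPUs, while explicit APIs (DX12/Vulkan) let developers partition
# resources instead of mirroring them.

def afr_memory(assets_mb, num_gpus):
    """Under AFR each GPU renders whole frames, so every GPU needs its
    own copy of (nearly) every resource: usable memory ~ one card."""
    return [sum(assets_mb)] * num_gpus

def partitioned_memory(assets_mb, num_gpus):
    """With explicit multi-adapter a developer *may* split resources
    (e.g., screen-space halves, shadow maps on one GPU), so copies
    are not necessarily duplicated."""
    per_gpu = [0] * num_gpus
    for i, size in enumerate(assets_mb):
        per_gpu[i % num_gpus] += size  # naive round-robin split
    return per_gpu

assets = [1200, 900, 600, 500]  # hypothetical texture/buffer sizes in MB
print(afr_memory(assets, 2))          # [3200, 3200] -> both 4GB cards nearly full
print(partitioned_memory(assets, 2))  # [1800, 1400] -> headroom on each card
```

This is the crux of the episode's title: under AFR, two 4GB cards still behave like one 4GB pool, because both copies hold the same data.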


If you like it, and want to see more, please share and support us on Patreon. We're putting this out not knowing if it's popular enough to be sustainable. The best way to see more of this is to share!

Open the expanded article to see the transcript below.

NVIDIA Officially Announces GeForce GTX 1060 3GB Edition

Subject: Graphics Cards | August 18, 2016 - 02:28 PM |
Tagged: nvidia, gtx 1060 3gb, gtx 1060, graphics card, gpu, geforce, 1152 CUDA Cores

NVIDIA has officially announced the 3GB version of the GTX 1060 graphics card, and it indeed contains fewer CUDA cores than the 6GB version.


The GTX 1060 Founders Edition

NVIDIA's product page now reflects the 3GB model, and board partners have begun announcing their versions. The MSRP of this 3GB version is set at $199, and availability of partner cards is expected in the next couple of weeks. The two versions will be designated only by their memory size, and no other capacities of either card are forthcoming.

                        GeForce GTX 1060 3GB    GeForce GTX 1060 6GB
Architecture            Pascal                  Pascal
CUDA Cores              1152                    1280
Base Clock              1506 MHz                1506 MHz
Boost Clock             1708 MHz                1708 MHz
Memory Speed            8 Gbps                  8 Gbps
Memory Configuration    3GB                     6GB
Memory Interface        192-bit                 192-bit
Power Connector         6-pin                   6-pin
TDP                     120W                    120W

As you can see from the table above, the only specification that has changed is the CUDA core count; the base/boost clocks, memory speed, memory interface, and TDP are identical. As for performance, NVIDIA says the 6GB version holds a 5% advantage over this lower-cost version, which at $199 is 20% less expensive than the $249 GTX 1060 6GB.
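A quick back-of-envelope check on those two figures: taking NVIDIA's 5% performance delta at face value, the 3GB card comes out ahead on performance per dollar at MSRP. A sketch of the arithmetic:

```python
# Back-of-envelope performance-per-dollar from NVIDIA's own figures:
# the 6GB card is ~5% faster, the 3GB card is 20% cheaper at MSRP.

price_3gb, price_6gb = 199, 249
perf_3gb = 1.00          # baseline
perf_6gb = 1.05          # NVIDIA's claimed advantage

print(price_3gb / price_6gb)         # ~0.80 -> 20% less expensive
print(perf_3gb / price_3gb * 1000)   # ~5.03 perf units per $1000
print(perf_6gb / price_6gb * 1000)   # ~4.22 perf units per $1000
```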

Source: NVIDIA

Podcast #413 - NVIDIA Pascal Mobile, ARM and Intel partner on 10nm, Flash Memory Summit and more!

Subject: Editorial | August 18, 2016 - 02:20 PM |
Tagged: video, podcast, pascal, nvidia, msi, mobile, Intel, idf, GTX 1080, gtx 1070, gtx 1060, gigabyte, FMS, Flash Memory Summit, asus, arm, 10nm

PC Perspective Podcast #413 - 08/18/2016

Join us this week as we discuss the new mobile GeForce GTX 10-series gaming notebooks, ARM and Intel partnering on 10nm, Flash Memory Summit and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: - Share with your friends!

Hosts:  Allyn Malventano, Sebastian Peak, Josh Walrath and Jeremy Hellstrom

Program length: 1:29:39
  1. Week in Review:
  2. This episode of PC Perspective is brought to you by Casper!! Use code “PCPER”
  3. News items of interest:
    1. 0:42:05 Final news from FMS 2016
  4. Hardware/Software Picks of the Week
    1. Ryan: VR Demi Moore
  5. Closing/outro

Subject: Editorial
Manufacturer: NVIDIA


It always feels a little odd covering NVIDIA's quarterly earnings because of how the company presents its financial calendar, which runs ahead of the calendar year.  No, we are not reporting from the future.  Yes, it can be confusing when comparing results and getting your dates mixed up.  Dates aside, NVIDIA did exceptionally well in a quarter that is typically the second weakest after Q1.

NVIDIA reported revenue of $1.43 billion, a jump from an already strong Q1 in which they took in $1.30 billion.  Compare this to the $1.027 billion reported by competitor AMD, which provides CPUs as well as GPUs.  NVIDIA sold a lot of GPUs along with other products; their primary money makers were consumer GPUs and the professional and compute markets, on which they have a virtual stranglehold at the moment.  The company's GAAP net income was a very respectable $253 million.


The release of the latest Pascal-based GPUs was the primary mover for the gains this quarter.  AMD has had a hard time competing with NVIDIA for market share.  The older Maxwell-based chips performed well against the entire line of AMD offerings, and typically did so with better power and heat characteristics.  Even though the GTX 970 was somewhat limited in its memory configuration compared to the AMD products (3.5 GB + 0.5 GB vs. a full 4 GB implementation), it was a top seller in its class.  The same could be said for the products up and down the stack.

Pascal was released at the end of May, but before that the company had already been shipping chips to its partners and building the "Founders Edition" models to its exacting specifications.  These were strong sellers from the end of May through the end of the quarter.  NVIDIA recently unveiled its latest Pascal-based Quadro cards, but we do not know how much of an impact those have had on this quarter.  NVIDIA has also been shipping, in very limited quantities, Tesla P100 based units to select customers and outfits.

Click to read more about NVIDIA's latest quarterly results!

Manufacturer: NVIDIA

Is Enterprise Ascending Outside of Consumer Viability?

So a couple of weeks have gone by since the Quadro P6000 (update: announced, not yet launched) and the new Titan X arrived. With them, we received a new chip: GP102. Since Fermi, NVIDIA has labeled their GPU designs with a G, followed by a single letter for the architecture (F, K, M, or P for Fermi, Kepler, Maxwell, and Pascal, respectively), which is then followed by a three-digit number. The last digit is the most relevant one, however, as it separates designs by their intended size.


Typically, 0 corresponds to a ~550-600mm2 design, which is about as large a design as fabrication plants can create without error-prone techniques, like multiple exposures (update for clarity: trying to precisely overlap multiple designs to form a larger integrated circuit). 4 corresponds to ~300mm2, although GM204 was pretty large at 398mm2, likely to increase the core count while remaining on the 28nm process. Higher numbers, like 6 or 7, fill out the lower-end SKUs until NVIDIA essentially stops caring for that generation. So when we moved to Pascal, jumping two whole process nodes, NVIDIA looked at their wristwatches and said "about time to make another 300mm2 part, I guess?"

The GTX 1080 and the GTX 1070 (GP104, 314mm2) were born.
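The naming scheme is regular enough to decode mechanically. The sketch below is our own illustration of the convention described above; the size classes are rough rules of thumb, not official NVIDIA figures:

```python
# Hypothetical decoder for NVIDIA's GPU code names as described above.
# Size classes are rough rules of thumb, not official specifications.

ARCHITECTURES = {"F": "Fermi", "K": "Kepler", "M": "Maxwell", "P": "Pascal"}
SIZE_CLASSES = {"0": "~550-600 mm^2 (largest practical die)",
                "2": "large die, cut-down compute (e.g., GP102)",
                "4": "~300 mm^2 midrange die",
                "6": "smaller mainstream die",
                "7": "entry-level die"}

def decode(codename):
    """Split e.g. 'GP104' into architecture and intended size class."""
    arch = ARCHITECTURES.get(codename[1], "unknown architecture")
    size = SIZE_CLASSES.get(codename[-1], "unknown size class")
    return f"{codename}: {arch}, {size}"

print(decode("GP104"))  # Pascal, ~300 mm^2 midrange die
print(decode("GM200"))  # Maxwell, largest practical die
```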


NVIDIA already announced a 600mm2 part, though. The GP100 has 3840 CUDA cores, HBM2 memory, and an ideal 1:2:4 ratio of FP64:FP32:FP16 performance. (A 64-bit chunk of memory can store one 64-bit value, two 32-bit values, or four 16-bit values, unless the register is attached to logic circuits that, while smaller, don't know how to operate on the packed data.) This increased ratio, even over Kepler's 1:6 FP64:FP32, is great for GPU compute, but wasted die area for today's (and tomorrow's) games. I'm predicting that it takes the wind out of Intel's sales, as Xeon Phi's 1:2 FP64:FP32 performance ratio is one of its major selling points, leading to its inclusion in many supercomputers.
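The packing argument in that parenthetical is easy to demonstrate: the same 64 bits can be viewed as one FP64 value, two FP32 values, or four FP16 values, which is where the ideal 1:2:4 rate ratio comes from. A minimal illustration using NumPy views:

```python
import numpy as np

# The same 8 bytes can be viewed as one float64, two float32s, or
# four float16s -- the storage argument behind the 1:2:4 ratio.

word = np.zeros(1, dtype=np.float64)   # one 64-bit value
print(word.view(np.float32).shape)     # (2,) -> two 32-bit values
print(word.view(np.float16).shape)     # (4,) -> four 16-bit values

# Hitting the 1:2:4 *rate* ratio additionally requires ALUs that can
# operate on the packed halves in parallel, which is the "logic
# circuits that don't know how to operate on the packed data" caveat.
```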

Despite the HBM2 memory controller supposedly being smaller than a GDDR5(X) controller, NVIDIA could still save die space overall while providing the same 3840 CUDA cores (a few of which are disabled on Titan X). The trade-off is that FP64 and FP16 performance decreased dramatically, from 1:2 and 2:1 relative to FP32 all the way down to 1:32 and 1:64. This new design comes in at 471mm2, although it launched $200 more expensive than the 600mm2 products, GK110 and GM200, did. Smaller dies provide more chips per wafer and, since the number of defects per wafer is relatively constant, a larger fraction of those chips come out good.
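The wafer economics behind that last sentence can be sketched with a toy model: candidates per 300 mm wafer scale inversely with die area, and under a classic Poisson yield model (Y = e^(-A*D0)) the smaller die also keeps a larger fraction of its candidates defect-free. The defect density below is an assumed figure for illustration only:

```python
import math

# Toy model of die-size economics. The defect density and the
# area-only dies-per-wafer estimate are illustrative assumptions.

WAFER_DIAMETER_MM = 300
DEFECTS_PER_CM2 = 0.1  # assumed defect density for a maturing process

def dies_per_wafer(die_area_mm2):
    """Crude candidate count ignoring edge loss and scribe lines."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area / die_area_mm2)

def poisson_yield(die_area_mm2):
    """Classic Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-(die_area_mm2 / 100) * DEFECTS_PER_CM2)

for area in (471, 600):  # GP102 vs. a GP100-class die
    good = dies_per_wafer(area) * poisson_yield(area)
    print(f"{area} mm^2: ~{dies_per_wafer(area)} candidates, "
          f"~{good:.0f} good dies per wafer")
```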

Anyway, that aside, it puts NVIDIA in an interesting position. Splitting the xx0-class chip into xx0 and xx2 designs allows NVIDIA to lower the cost of its high-end gaming parts, although it cuts out hobbyists who buy a Titan for double-precision compute. More interestingly, it leaves around 150mm2 of headroom for AMD to sneak in an FP32-centric design, leaving them a potential performance crown.


Image Credit: ExtremeTech

On the other hand, as fabrication node changes become less frequent, it's possible that NVIDIA is leaving itself room for Volta, too. Last month, it was rumored that NVIDIA would release two architectures at 16nm, in the same way that Maxwell shared 28nm with Kepler. In that case, Volta, on top of whatever other architectural advancements NVIDIA rolls into the design, could also grow a little in size. By then, TSMC would have better yields, making a 600mm2 design less costly in terms of waste and recovery.

If this is the case, we could see the GPGPU folks receiving a new architecture once every second gaming (and professional graphics) architecture. That is, unless you are a hobbyist. If you are, then either I need to be wrong, or NVIDIA needs to somehow bring its enterprise SKU down to an affordable price point. The xx0 class seems to have been pushed up and out of viability for consumers.

Or, again, I could just be wrong.

That old chestnut again? Intel compares their current gen hardware against older NVIDIA kit

Subject: General Tech | August 17, 2016 - 12:41 PM |
Tagged: nvidia, Intel, HPC, Xeon Phi, maxwell, pascal, dirty pool

There is a spat going on between Intel and NVIDIA over the slide below, as you can read about over at Ars Technica.  It seems that Intel has reached into the industry's bag of dirty tricks and polished off an old standby: testing its new hardware and software against older products from a competitor.  In this case the products tested were for high-performance computing: Intel's new Xeon Phi against NVIDIA's Maxwell, on an older version of the Caffe AlexNet benchmark.

NVIDIA points out not only that it would have done better than Intel had an up-to-date version of the benchmarking software been used, but also that the comparison should have been against its current architecture, Pascal.  This is not quite as bad as putting undocumented flags into compilers to reduce the performance of competitors' chips, or running predatory discount programs, but it shows that the computer industry continues to have only a passing acquaintance with fair play and honest competition.


"At this juncture I should point out that juicing benchmarks is, rather sadly, par for the course. Whenever a chip maker provides its own performance figures, they are almost always tailored to the strength of a specific chip—or alternatively, structured in such a way as to exacerbate the weakness of a competitor's product."

Here is some more Tech News from around the web:

Tech Talk

Source: Ars Technica

GIGABYTE Announces GeForce GTX 10 Series Gaming Laptops

Subject: Systems, Mobile | August 16, 2016 - 11:39 AM |
Tagged: Skylake, nvidia, notebook, laptop, Intel Core i7, gtx 1070, gtx 1060, gigabyte, gaming

GIGABYTE has refreshed their gaming laptop lineup with NVIDIA's GTX 10-series graphics, announcing updated versions of the P55 & P57 Series and the thin-and-light P35 & P37.


"GIGABYTE offers a variety of options based on preference while providing the latest GeForce® GTX 10 series graphics and the latest 6th Generation Intel Core i7 Processor for the power and performance to meet the growing demands of top tier applications, games, and Virtual Reality. With the superior performance GIAGBYTE also includes industry leading features such as M.2 PCIe SSD, DDR4 memory, USB 3.1 with Type-C connection, and HDMI 2.0."

The notebooks retain 6th-gen Intel (Skylake) Core processors, but now feature NVIDIA GeForce GTX 1070 and GTX 1060 GPUs.


Here's a rundown of the new systems from GIGABYTE, beginning with the Performance Series:


The GIGABYTE P57 Gaming Laptop

"The new 17” P57 is pulling no punches when it comes to performance, including the all-new, ultra-powerful NVIDIA® GeForce® GTX 1070 & 1060 Graphics. With a fresh GPU, come fresh ID changes. Along with its subtle style, curved lines and orange accents, comes all-new additional air intake ventilation above the keyboard to improve thermal cooling. The backlit keyboard itself supports Anti-Ghost with 30-Key Rollover. The Full HD 1920x1080 IPS display provides vivid and immersive visuals, while a Swappable Bay is included for user preference of an optical drive, an additional HDD, or weight reduction."

Next we have the thin-and-light ULTRAFORCE Gaming models:



"The new 17.3” P37 reiterates what ULTRAFORCE is all about. Despite being a 17” model, the P37 weights under 2.7kg and retains an ultra-thin and light profile being less than 22.5mm thin. Paired with extreme mobility is the NVIDIA GeForce GTX 1070 graphics. The display comes in both options of 4K UHD 3840x2160 and FHD 1920x1080, achieving high-res gaming thanks to the performance boost with the new graphics.

The P37 includes a hot-swappable bay for an additional HDD, ODD, or to reduce weight for improved mobility, forming a quad-storage system with multiple M.2 PCIe SSDs and HDDs. The Macro Keys on the left, together with the included Macro Hub software, allow up to 25 programmable macros for one-click execution in any game or application.

Powerful yet portable, the thinnest gaming laptop of the series, the 15.6” P35, also has either a UHD 3840x2160 or FHD 1920x1080 display, delivering perfect and vivid colors for an enhanced gameplay experience. Included in the Ultrabook-like chassis is the powerful all-new NVIDIA® GeForce GTX 1070 GPU. The P35 also features the iconic hot-swappable bay for flexible storage and the quad-storage system."


The P37 keyboard features macro keys

We will update with pricing and availability for these new laptops when known.


Lenovo Announces the IdeaCentre Y910 AIO Gaming Desktop

Subject: Systems | August 16, 2016 - 08:00 AM |
Tagged: PC, nvidia, Lenovo, Intel Core i7, IdeaCentre Y910, GTX 1080, gaming, desktop, all in one, AIO

Lenovo has announced a new all-in-one gaming desktop, and the IdeaCentre Y910 offers up to a 6th-generation Intel Core i7 processor and NVIDIA GeForce GTX 1080 graphics behind its 27-inch QHD display.


But this is no ordinary all-in-one, as Lenovo has designed the Y910 to be "effortlessly upgradeable":

"Designed to game, engineered to evolve, the IdeaCentreTM AIO Y910 is easy to upgrade –
no special tools needed. Simply press the Y button to pop out the back panel, for effortless swapping of your GPU, Memory or Storage."

The original spec sheet listed a 7th-gen Intel Core i7 processor, which, had it not been a typo, would have meant Intel Kaby Lake; the specs have since been corrected to 6th-gen Intel Core processors up to an i7. Exactly which SKU is inside the Y910 isn't clear just yet, and we'll update when we know for sure. It would be limited to 65 W based on the specified cooling, and notice that the CPU isn't on the list of user-upgradable parts (though upgrading it could still be possible).


Here's a rundown of specs from Lenovo:

  • Processor: Up to a 6th-generation Intel Core i7 Processor
  • Graphics: Up to NVIDIA GeForce GTX 1080 8 GB
  • Memory: Up to 32 GB DDR4
  • Storage: Up to 2 TB HDD + 256 GB SSD
  • Display: 27-inch QHD (2560x1440) near-edgeless
  • Audio: Integrated 7.1 Channel Dolby Audio, 5W Harman Kardon speakers
  • Webcam: 720p, Single Array Microphone
  • Networking: Killer DoubleShot WiFi / LAN
  • Rear Ports:
    • 2x USB 2.0
    • LAN
    • HDMI-in / HDMI-out
  • Side Ports:
    • 3x USB 3.0
    • 6-in-1 Card Reader (SD, SDHC, SDXC, MMC, MS, MS-Pro), Headphone, Microphone
  • Cooling: 65 W
  • Dimensions (W x L x H): 237.6 x 615.8 x 490.25 mm (9.35 x 24.24 x 19.3 inches)
  • Weight: Starting at 27 lbs (12.24 kg)


Update: The IdeaCentre Y910 starts at $1,799.99 for a version with the GTX 1070, and will be available in October.

Source: Lenovo

Lenovo IdeaCentre Y710 Cube: Small Form-Factor Gaming with Desktop Power

Subject: Systems | August 16, 2016 - 08:00 AM |
Tagged: small form-factor, SFF, nvidia, Lenovo, Killer Networking, Intel, IdeaCentre Y710 Cube, GTX 1080, gaming, gamescom, cube

Lenovo has announced the IdeaCentre Y710 Cube, a small form-factor system designed for gaming regardless of available space, and it can be configured with some very high-end desktop components for serious performance.


"Ideal for gamers who want to stay competitive no matter where they play, the IdeaCentre Y710 Cube comes with a built-in carry handle for easy transport between gaming stations. Housed sleekly within a new, compact cube form factor, it features NVIDIA’s latest GeForce GTX graphics and 6th Gen Intel Core processors to handle today’s most resource-intensive releases."

The Y710 Cube offers NVIDIA GeForce graphics up to the GTX 1080, and up to a 6th-generation Core i7 processor. (Though a specific processor number was not mentioned, this is likely the non-K Core i7-6700 CPU given the 65W cooler specified below).


Lenovo offers a pre-installed Xbox One controller receiver with the Y710 Cube to position the small desktop as a console alternative, and the machines are configured with SSD storage and feature Killer DoubleShot Pro networking (which combines the NIC and wireless card for better performance).

Specifications include:

  • Processor: Up to 6th Generation Intel Core i7 Processor
  • Operating System: Windows 10 Home
  • Graphics: Up to NVIDIA GeForce GTX 1080; 8 GB
  • Memory: Up to 32 GB DDR4
  • Storage: Up to 2 TB HDD + 256 GB SSD
  • Cooling: 65 W
  • Networking: Killer LAN / WiFi 10/100/1000M
  • Connectivity:
    • Video: 1x HDMI, 1x VGA
    • Rear Ports: 1x USB 2.0, 1x USB 3.0
    • Front Ports: 2x USB 3.0
  • Dimensions (L x D x H): 393.3 x 252.3 x 314.5 mm (15.48 x 9.93 x 12.38 inches)
  • Weight: Starting at 16.3 lbs (7.4 kg)
  • Carry Handle: Yes
  • Accessory: Xbox One Wireless Controller/Receiver (optional)


The IdeaCentre Y710 Cube is part of Lenovo's Gamescom 2016 announcement, and will be available for purchase starting in October. Pricing starts at $1,299.99 for a version with the GTX 1070.

Source: Lenovo