Subject: Graphics Cards | August 24, 2016 - 10:34 AM | Ryan Shrout
Tagged: nvidia, market share, jpr, jon peddie, amd
As reported by both Mercury Research and now by Jon Peddie Research, in a graphics add-in card market that dropped dramatically in Q2 2016 in terms of total units shipped, AMD has gained significant market share against NVIDIA.
| GPU Supplier | Market share this QTR | Market share last QTR | Market share last year |
|---|---|---|---|
| AMD | 29.9% | | 18.0% |
| NVIDIA | 70.0% | | 81.9% |
Source: Jon Peddie Research
Last year at this time, AMD was sitting at 18% market share in terms of units sold, an absolutely dismal result compared to NVIDIA's dominating 81.9%. Over the last couple of quarters we have seen AMD gain in this space, and keeping in mind that Q2 2016 does not include sales of AMD's new Polaris-based graphics cards like the Radeon RX 480, the jump to 29.9% is a big move for the company. As a result, NVIDIA falls back to 70% market share for the quarter, which is still a significant lead over AMD.
Numbers like that shouldn't be taken lightly - for AMD to gain 7 points of market share in a single quarter indicates a substantial shift in the market. This includes all add-in cards: budget, mainstream, enthusiast and even workstation-class products. One report I received says that NVIDIA card sales specifically dropped off in Q2, though the exact reason why isn't known, and AMD gained sales share as a kind of de facto result.
There are several other factors to watch with this data, however. First, graphics card sales dropped -20% in Q2 when compared to Q1. That is well above the average seasonal Q1-to-Q2 drop, which JPR puts at -9.7%. Much of this sell-through decrease is likely due to consumers anticipating the releases of both NVIDIA Pascal GPUs and AMD Polaris GPUs and delaying their purchases.
The NVIDIA GeForce GTX 1080 launched on May 27th and the GTX 1070 on June 10th. The company has made very bold claims about product sales of Pascal parts, so I am honestly very surprised that the overall market would drop the way it did in Q2 and that NVIDIA would fall behind AMD as much as it has. Q3 2016 may be the defining time for both GPU vendors, however, as it will show the results of the work put into the two new architectures and product lines. NVIDIA reported record profits recently, so it will be interesting to see how that matches up to unit sales.
Subject: Graphics Cards | August 23, 2016 - 01:43 PM | Jeremy Hellstrom
Tagged: amd, nvidia, Tilt Brush, VR
[H]ard|OCP continues their foray into testing VR applications, this time moving away from games to try out the rather impressive Tilt Brush VR drawing application from Google. If you have yet to see this software in action it is rather incredible, although you do still require an artist's talent and practical skills to create true 3D masterpieces.
Artistic merit may not be [H]'s strong suit, but testing how well a GPU can power VR applications certainly lies within their bailiwick. Once again they tested five NVIDIA GPUs and a pair of AMD cards for dropped frames and reprojection caused by a drop in FPS.
"We are changing gears a bit with our VR Performance coverage and looking at an application that is not as GPU-intensive as those we have looked at in the recent past. Google's Tilt Brush is a virtual reality application that makes use of the HTC Vive head mounted display and its motion controllers to allow you to paint in 3D space."
Here are some more Graphics Card articles from around the web:
- PowerColor Red Devil RX 470 Overclocking @ [H]ard|OCP
- MSI GeForce GTX 1060 OC 6 GB @ techPowerUp
- ASUS STRIX GAMING GTX 1070 OC @ eTeknix
- EVGA GeForce GTX 1070 FTW GAMING ACX 3.0 @ Bjorn3d
Why Two 4GB GPUs Isn't Necessarily 8GB
We're trying something new here at PC Perspective. Some topics are fairly difficult to explain cleanly without accompanying images. We also like to go fairly deep into specific topics, so we're hoping that we can provide educational cartoons that explain these issues.
This pilot episode is about load-balancing and memory management in multi-GPU configurations. There seems to be a lot of confusion around what was (and was not) possible with DirectX 11 and OpenGL, and even more confusion about what DirectX 12, Mantle, and Vulkan allow developers to do. It highlights three different load-balancing algorithms, and even briefly mentions what LucidLogix was attempting to accomplish almost ten years ago.
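The core memory point behind the episode's title can be sketched in a few lines of Python. This is a simplified illustration, not the video's own material, and the function names and the 1.5 GB duplicated-resource figure are hypothetical: under classic alternate-frame rendering (AFR), every texture and buffer is mirrored into both GPUs' local memory, while explicit APIs like DirectX 12, Mantle, and Vulkan let developers place unique resources per GPU.

```python
# Hypothetical sketch of multi-GPU memory capacity (illustrative numbers).

def usable_vram_afr(cards_gb):
    """Under AFR, each GPU renders whole frames, so every resource must be
    mirrored on every card. Usable capacity is the minimum, not the sum."""
    return min(cards_gb)

def usable_vram_explicit_split(cards_gb, duplicated_gb):
    """Explicit APIs (DX12/Vulkan) let developers keep unique resources on
    each GPU; only the shared/duplicated set is mirrored on every card."""
    unique = sum(gb - duplicated_gb for gb in cards_gb)
    return unique + duplicated_gb

print(usable_vram_afr([4, 4]))                  # 4, not 8
print(usable_vram_explicit_split([4, 4], 1.5))  # 6.5 with 1.5 GB mirrored
```

How much an engine can actually split depends entirely on its rendering algorithm, which is exactly the confusion the episode tries to clear up.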
If you like it, and want to see more, please share and support us on Patreon. We're putting this out not knowing if it's popular enough to be sustainable. The best way to see more of this is to share!
Subject: Graphics Cards | August 18, 2016 - 02:28 PM | Sebastian Peak
Tagged: nvidia, gtx 1060 3gb, gtx 1060, graphics card, gpu, geforce, 1152 CUDA Cores
NVIDIA has officially announced the 3GB version of the GTX 1060 graphics card, and it indeed contains fewer CUDA cores than the 6GB version.
The GTX 1060 Founders Edition
The product page on NVIDIA.com now reflects the 3GB model, and board partners have begun announcing their versions. The MSRP on this 3GB version is set at $199, and availability of partner cards is expected in the next couple of weeks. The two versions will be designated only by their memory size, and no other capacities of either card are forthcoming.
| | GeForce GTX 1060 3GB | GeForce GTX 1060 6GB |
|---|---|---|
| CUDA Cores | 1152 | 1280 |
| Base Clock | 1506 MHz | 1506 MHz |
| Boost Clock | 1708 MHz | 1708 MHz |
| Memory Speed | 8 Gbps | 8 Gbps |
As you can see from the above table, the only specification that has changed is the CUDA core count, with base/boost clocks, memory speed and interface, and TDP identical. As to performance, NVIDIA says the 6GB version holds a 5% performance advantage over this lower-cost version, which at $199 is 20% less expensive than the previous GTX 1060 6GB.
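As a quick sanity check on those figures (assuming the 6GB card's $249 MSRP, which is where the 20% claim comes from), the arithmetic works out:

```python
# $199 for the new 3GB card vs. the GTX 1060 6GB's $249 MSRP.
msrp_3gb, msrp_6gb = 199, 249
discount = (msrp_6gb - msrp_3gb) / msrp_6gb
print(f"{discount:.0%}")  # prints "20%": 20% cheaper for a claimed ~5% performance loss
```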
Subject: Editorial | August 18, 2016 - 02:20 PM | Ryan Shrout
Tagged: video, podcast, pascal, nvidia, msi, mobile, Intel, idf, GTX 1080, gtx 1070, gtx 1060, gigabyte, FMS, Flash Memory Summit, asus, arm, 10nm
PC Perspective Podcast #413 - 08/18/2016
Join us this week as we discuss the new mobile GeForce GTX 10-series gaming notebooks, ARM and Intel partnering on 10nm, Flash Memory Summit and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Allyn Malventano, Sebastian Peak, Josh Walrath and Jeremy Hellstrom
Week in Review:
This episode of PC Perspective is brought to you by Casper!! Use code “PCPER”
News items of interest:
0:42:05 Final news from FMS 2016
Hardware/Software Picks of the Week
It always feels a little odd covering NVIDIA's quarterly earnings due to how the company presents its financial calendar. No, we are not reporting from the future. Yes, it can be confusing when comparing results and getting your dates mixed up. Dates aside, NVIDIA did exceptionally well in a quarter that is typically the second weakest after Q1.
NVIDIA reported revenue of $1.43 billion, a jump from an already strong Q1 in which they took in $1.30 billion. Compare this to the $1.027 billion of competitor AMD, which sells CPUs as well as GPUs. NVIDIA sold a lot of GPUs as well as other products. Their primary money makers were consumer GPUs and the professional and compute markets, on which they have a virtual stranglehold at the moment. The company's GAAP net income is a very respectable $253 million.
The release of the latest Pascal-based GPUs was the primary mover for the gains this latest quarter. AMD has had a hard time competing with NVIDIA for market share. The older Maxwell-based chips performed well against the entire line of AMD offerings and typically did so with better power and heat characteristics. Even though the GTX 970 was somewhat limited in its memory configuration as compared to the AMD products (3.5 GB + 0.5 GB vs. a full 4 GB implementation), it was a top seller in its class. The same could be said for the products up and down the stack.
Pascal was released at the end of May, but the company had already been shipping chips to its partners as well as building the "Founders Edition" models to its exacting specifications. These were strong sellers from the end of May through the end of the quarter. NVIDIA recently unveiled their latest Pascal-based Quadro cards, but we do not know how much of an impact those have had on this quarter. NVIDIA has also been shipping, in very limited quantities, Tesla P100-based units to select customers and outfits.
Is Enterprise Ascending Outside of Consumer Viability?
So a couple of weeks have gone by since the Quadro P6000 (update: was announced) and the new Titan X launched. With them, we received a new chip: GP102. Since Fermi, NVIDIA has labeled their GPU designs with a G, followed by a single letter for the architecture (F, K, M, or P for Fermi, Kepler, Maxwell, and Pascal, respectively), which is then followed by a three digit number. The last digit is the most relevant one, however, as it separates designs by their intended size.
Typically, 0 corresponds to a ~550-600mm2 design, which is about as large a design as fabs can create without error-prone techniques, like multiple exposures (update for clarity: trying to precisely overlap multiple designs to form a larger integrated circuit). 4 corresponds to ~300mm2, although GM204 was pretty large at 398mm2, likely to increase the core count while remaining on the 28nm process. Higher numbers, like 6 or 7, fill out the lower-end SKUs until NVIDIA essentially stops caring for that generation. So when we moved to Pascal, jumping two whole process nodes, NVIDIA looked at their wristwatches and said "about time to make another 300mm2 part, I guess?"
The GTX 1080 and the GTX 1070 (GP104, 314mm2) were born.
NVIDIA already announced a 600mm2 part, though. The GP100 had 3840 CUDA cores, HBM2 memory, and an ideal ratio of 1:2:4 between FP64:FP32:FP16 performance. (A 64-bit chunk of memory can store one 64-bit value, two 32-bit values, or four 16-bit values, unless the register is attached to logic circuits that, while smaller, don't know how to operate on the data.) This increased ratio, even over Kepler's 1:6 FP64:FP32, is great for GPU compute, but wasted die area for today's (and tomorrow's) games. I'm predicting that it takes the wind out of Intel's sales, as Xeon Phi's 1:2 FP64:FP32 performance ratio is one of its major selling points, leading to its inclusion in many supercomputers.
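The parenthetical above about memory chunks can be demonstrated directly with Python's `struct` module, which supports half-precision floats: one FP64 value, two FP32 values, or four FP16 values each occupy the same 64 bits.

```python
import struct

# One 64-bit chunk of memory, three ways (little-endian):
fp64 = struct.pack('<d', 3.14159)                # 1 x FP64 double
fp32 = struct.pack('<2f', 1.5, -2.5)             # 2 x FP32 floats
fp16 = struct.pack('<4e', 1.0, 0.5, -1.0, 2.0)   # 4 x FP16 half floats

# All three encodings fill exactly 8 bytes (64 bits).
assert len(fp64) == len(fp32) == len(fp16) == 8
```

Whether the hardware can actually *operate* on the packed values at full rate is the part that costs die area, which is the trade-off GP102 makes.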
Even though the HBM2 memory controller is supposedly smaller than a GDDR5(X) one, NVIDIA could still save die space while providing 3840 CUDA cores (a few of which are disabled on Titan X). The trade-off is that FP64 and FP16 performance had to decrease dramatically, from 1:2 and 2:1 relative to FP32 all the way down to 1:32 and 1:64. This new design comes in at 471mm2, although it's $200 more expensive than what the 600mm2 products, GK110 and GM200, launched at. Smaller dies yield more products per wafer and, better still, since the number of defects per wafer should be relatively constant, a larger fraction of those dies survive.
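A back-of-the-envelope sketch shows why the smaller die matters economically. The defect density and the simple Poisson yield model below are illustrative assumptions, not NVIDIA's or TSMC's actual numbers, and the die count ignores edge losses and scribe lines.

```python
import math

def dies_per_wafer(die_mm2, wafer_diameter_mm=300):
    """Crude candidate-die count: wafer area divided by die area."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_mm2)

def poisson_yield(die_mm2, defects_per_cm2=0.2):
    """Classic Poisson yield model: bigger dies catch more defects."""
    return math.exp(-defects_per_cm2 * die_mm2 / 100)  # mm^2 -> cm^2

for size in (471, 600):  # GP102-class vs. GK110/GM200-class die areas
    good = dies_per_wafer(size) * poisson_yield(size)
    print(f"{size}mm^2: ~{dies_per_wafer(size)} candidates, "
          f"~{good:.0f} expected good dies per wafer")
```

Under these assumed numbers, the 471mm2 die both fits more candidates per wafer and survives defects at a higher rate than a 600mm2 die, compounding the cost advantage.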
Anyway, that aside, it puts NVIDIA in an interesting position. Splitting the xx0-class chip into xx0 and xx2 designs allows NVIDIA to lower the cost of their high-end gaming parts, although it cuts out hobbyists who buy a Titan for double-precision compute. More interestingly, it leaves around 150mm2 for AMD to sneak in a design that's FP32-centric, leaving them a potential performance crown.
Image Credit: ExtremeTech
On the other hand, as fabrication node changes are becoming less frequent, it's possible that NVIDIA could be leaving itself room for Volta, too. Last month, it was rumored that NVIDIA would release two architectures at 16nm, in the same way that Maxwell shared 28nm with Kepler. In this case, Volta, on top of whatever other architectural advancements NVIDIA rolls into that design, can also grow a little in size. At that time, TSMC would have better yields, making a 600mm2 design less costly in terms of waste and recovery.
If this is the case, we could see the GPGPU folks receiving a new architecture once every second gaming (and professional graphics) architecture. That is, unless you are a hobbyist. If you are? I would need to be wrong, or NVIDIA would need to somehow bring their enterprise SKU into an affordable price point. The xx0 class seems to have been pushed up and out of viability for consumers.
Or, again, I could just be wrong.
Subject: General Tech | August 17, 2016 - 12:41 PM | Jeremy Hellstrom
Tagged: nvidia, Intel, HPC, Xeon Phi, maxwell, pascal, dirty pool
There is a spat going on between Intel and NVIDIA over the slide below, as you can read about over at Ars Technica. It seems that Intel has reached into the industry's bag of dirty tricks and polished off an old standby: testing new hardware and software against older products from a competitor. In this case the products tested were for high-performance computing, Intel's new Xeon Phi against NVIDIA's Maxwell, run on an older version of the Caffe AlexNet benchmark.
NVIDIA points out not only that they would have beaten Intel if an up-to-date version of the benchmarking software had been used, but also that the comparison should have been against their current architecture, Pascal. This is not quite as bad as putting undocumented flags into compilers to reduce the performance of competitors' chips, or predatory discount programs, but it shows that the computer industry continues to have only a passing acquaintance with fair play and honest competition.
"At this juncture I should point out that juicing benchmarks is, rather sadly, par for the course. Whenever a chip maker provides its own performance figures, they are almost always tailored to the strength of a specific chip—or alternatively, structured in such a way as to exacerbate the weakness of a competitor's product."
Here is some more Tech News from around the web:
- USB Implementers Forum introduces branding for safe USB-C charging @ The Inquirer
- Some Windows 10 Anniversary Update: SSD freeze @ The Register
- Intel Project Alloy: all-in-one VR headset takes aim at Google's Project Daydream @ The Inquirer
- Wanna build your own drone? Intel emits Linux-powered x86 brains for DIY flying gizmos @ The Register
- Intel's Optane XPoint DIMMs pushed back – source @ The Register
Subject: Systems, Mobile | August 16, 2016 - 11:39 AM | Sebastian Peak
Tagged: Skylake, nvidia, notebook, laptop, Intel Core i7, gtx 1070, gtx 1060, gigabyte, gaming
GIGABYTE has refreshed their gaming laptop lineup with NVIDIA's GTX 10 series graphics, announcing updated versions of the P55 & P57 Series, and thin-and-light P35 & P37.
"GIGABYTE offers a variety of options based on preference while providing the latest GeForce® GTX 10 series graphics and the latest 6th Generation Intel Core i7 Processor for the power and performance to meet the growing demands of top tier applications, games, and Virtual Reality. With the superior performance GIAGBYTE also includes industry leading features such as M.2 PCIe SSD, DDR4 memory, USB 3.1 with Type-C connection, and HDMI 2.0."
The notebooks retain 6th-gen Intel (Skylake) Core processors, but now feature NVIDIA GeForce GTX 1070 and GTX 1060 GPUs.
Here's a rundown of the new systems from GIGABYTE, beginning with the Performance Series:
The GIGABYE P57 Gaming Laptop
"The new 17” P57 is pulling no punches when it comes to performance, including the all-new, ultra-powerful NVIDIA® GeForce® GTX 1070 & 1060 Graphics. With a fresh GPU, come fresh ID changes. Along with its subtle style, curved lines and orange accents, comes all-new additional air intake ventilation above the keyboard to improve thermal cooling. The backlit keyboard itself supports Anti-Ghost with 30-Key Rollover. The Full HD 1920x1080 IPS display provides vivid and immersive visuals, while a Swappable Bay is included for user preference of an optical drive, an additional HDD, or weight reduction."
Next we have the thin-and-light ULTRAFORCE Gaming models:
The ULTRAFORCE P35
"The new 17.3” P37 reiterates what ULTRAFORCE is all about. Despite being a 17” model, the P37 weights under 2.7kg and retains an ultra-thin and light profile being less than 22.5mm thin. Paired with extreme mobility is the NVIDIA GeForce GTX 1070 graphics. The display comes in both options of 4K UHD 3840x2160 and FHD 1920x1080, achieving high-res gaming thanks to the performance boost with the new graphics.
The P37 includes a hot-swappable bay for an additional HDD, ODD, or to reduce weight for improved mobility, forming a quad-storage system with multiple M.2 PCIe SSDs and HDDs. The Macro Keys on the left, together with the included Macro Hub software, allows up to 25 programmable macros for one-click execution in any games and applications
Powerful yet portable, the thinnest gaming laptop of the series, the 15.6” P35, also has either a UHD 3840x2160 or FHD 1920x1080 display, delivering perfect and vivid colors for an enhanced gameplay experience. Included in the Ultrabook-like chassis is the powerful all-new NVIDIA® GeForce GTX 1070 GPU. The P35 also features the iconic hot-swappable bay for flexible storage and the quad-storage system."
The P37 keyboard features macro keys
We will update with pricing and availability for these new laptops when known.
Subject: Systems | August 16, 2016 - 08:00 AM | Sebastian Peak
Tagged: PC, nvidia, Lenovo, Intel Core i7, IdeaCentre Y910, GTX 1080, gaming, desktop, all in one, AIO
Lenovo has announced a new all-in-one gaming desktop, and the IdeaCentre Y910 offers up to a ~~7th-generation~~ 6th-generation Intel Core i7 processor and NVIDIA GeForce GTX 1080 graphics behind its 27-inch QHD display.
But this is no ordinary all-in-one, as Lenovo has designed the Y910 to be "effortlessly upgradeable":
"Designed to game, engineered to evolve, the IdeaCentreTM AIO Y910 is easy to upgrade –
no special tools needed. Simply press the Y button to pop out the back panel, for effortless swapping of your GPU, Memory or Storage."
The specs include a 7th-gen Intel Core i7 processor, and if that's not a typo we're talking about Intel Kaby Lake here. Specs have been corrected as 6th-gen Intel Core processors up to an i7. Exactly what SKU might be inside the Y910 isn't clear just yet, and we'll update when we know for sure. It would be limited to 65 W based on the specified cooling, and notice that the CPU isn't on the list of user-upgradable parts (though it could still be possible).
Here's a rundown of specs from Lenovo:
- Processor: Up to a 6th-generation Intel Core i7 Processor
- Graphics: Up to NVIDIA GeForce GTX 1080 8 GB
- Memory: Up to 32 GB DDR4
- Storage: Up to 2 TB HDD + 256 GB SSD
- Display: 27-inch QHD (2560x1440) near-edgeless
- Audio: Integrated 7.1 Channel Dolby Audio, 5W Harman Kardon speakers
- Webcam: 720p, Single Array Microphone
- Networking: Killer DoubleShot WiFi / LAN
- Rear Ports:
- 2x USB 2.0
- HDMI-in / HDMI-out
- Side Ports:
- 3x USB 3.0
- 6-in-1 Card Reader (SD, SDHC, SDXC, MMC, MS, MS-Pro)
- Headphone, Microphone
- Cooling: 65 W
- Dimensions (W x L x H): 237.6 x 615.8 x 490.25 mm (9.35 x 24.24 x 19.3 inches)
- Weight: Starting at 27 lbs (12.24 kg)
Update: The IdeaCentre Y910 starts at $1,799.99 for a version with the GTX 1070, and will be available in October.