Bitmain could create headaches for NVIDIA, AMD, and Qualcomm

Subject: Graphics Cards | February 28, 2018 - 09:04 PM |
Tagged: bitmain, bitcoin, qualcomm, nvidia, amd

This article originally appeared in MarketWatch.

Research firm Bernstein recently published a report on the profitability of Bitmain Technologies, a secretive Chinese company with a huge impact on the bitcoin and cryptocurrency markets.

With estimated 2017 profits ranging from $3 billion to $4 billion, the size and scope of Beijing-based Bitmain is undeniable, with annual net income higher than some major tech players, including Nvidia and AMD. The privately held company, founded five years ago, has expanded its reach into many bitcoin-based markets, but most of its income stems from the development and sale of dedicated cryptocurrency mining hardware.

There is a concern that the sudden introduction of additional companies into the chip-production landscape could alter how other players operate. This includes the ability of Nvidia, AMD, Qualcomm and others to order chip production from popular semiconductor vendors at the prices necessary to remain competitive in their respective markets.

Bitmain makes most of its income through the development of dedicated chips used to mine bitcoin. These ASICs (application-specific integrated circuits) offer better performance and power efficiency than general-purpose products such as graphics chips from Nvidia and AMD. The Bitmain chips are then combined into systems called “miners” that can include as many as 250 chips in a single unit. Those are sold, at prices ranging from a few hundred to a few thousand dollars apiece, to large mining companies or individuals hoping to turn a profit from the speculative cryptocurrency markets.

Bitcoin mining giant

Bernstein estimates that as much as 70%-80% of the dedicated market for bitcoin mining is being addressed by Bitmain and its ASIC sales.

Bitmain has secondary income sources, including running mining pools (where groups of bitcoin miners share the workload of computing in order to turn a profit sooner) and cloud-based mining services where customers can simply rent mining hardware that exists in a dedicated server location. This enables people to attempt to profit from mining without the expense of buying hardware directly.
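To see why pooling helps, consider the arithmetic: a small miner's expected reward is the same whether it mines solo or in a pool, but solo payouts arrive as rare, lumpy block wins while a pool pays out a steady share. A minimal sketch in Python, using illustrative hashrate, fee, and reward figures rather than live network data:

```python
# Why mining pools pay out sooner: the expected reward is the same solo or pooled,
# but a pool turns rare, lumpy block wins into a steady trickle of shares.
# All figures are illustrative placeholders, not live network data.

BLOCKS_PER_YEAR = 144 * 365        # roughly one block every 10 minutes
BLOCK_REWARD_BTC = 12.5            # block subsidy at the time of writing (2018)

my_share = 14 / 20_000_000         # hypothetical 14 TH/s miner on a 20,000,000 TH/s network

# Solo mining: small chance of ever finding a block this year.
p_any_block = 1 - (1 - my_share) ** BLOCKS_PER_YEAR
expected_btc = my_share * BLOCKS_PER_YEAR * BLOCK_REWARD_BTC

# Pool mining: the same expected value, paid out steadily, minus a small pool fee.
POOL_FEE = 0.02
pooled_btc = expected_btc * (1 - POOL_FEE)

print(f"Solo: {p_any_block:.1%} chance of any payout this year (expected {expected_btc:.3f} BTC)")
print(f"Pool: ~{pooled_btc:.3f} BTC paid out in small, regular shares")
```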

bitmain.JPG

A Bitmain Antminer

The chip developer and mining hardware giant has key advantages for revenue growth and stability, despite the volatility of the cryptocurrency market. When Bitmain designs a new ASIC that can address a new currency or algorithm, or run a current coin algorithm faster than was previously possible, it can choose to build its Antminers (the brand for these units) and operate them at its own server farms, squeezing the profitability and advantage the faster chips offer on the bitcoin market before anyone else in the ecosystem has access to them.

As the difficulty of mining increases (which occurs as higher-performance mining options are released, lowering the profitability of older hardware), Bitmain can then start selling the new chips and associated Antminers to customers, moving revenue from mining directly to sales of mining hardware.

This pattern can be repeated for as long as chip development continues, giving Bitmain a tremendous amount of flexibility to balance revenue from different streams.
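The revenue-balancing act rests on a simple relationship: a miner's expected income is its share of total network hashrate multiplied by the block reward paid out each day, so every jump in network hashrate (difficulty) shrinks what existing hardware earns. A rough sketch of that estimate, again with placeholder figures rather than real network data:

```python
# Expected daily revenue for a dedicated miner; placeholder figures, not live data.

BLOCKS_PER_DAY = 144               # roughly one block every 10 minutes
BLOCK_REWARD_BTC = 12.5            # block subsidy at the time of writing (2018)

def daily_btc(miner_ths: float, network_ths: float) -> float:
    """Expected BTC/day = (miner hashrate / network hashrate) * blocks per day * reward."""
    return miner_ths / network_ths * BLOCKS_PER_DAY * BLOCK_REWARD_BTC

# A hypothetical 14 TH/s ASIC on a 20,000,000 TH/s network...
before = daily_btc(14, 20_000_000)
# ...earns half as much once the network hashrate (difficulty) doubles,
# which is when selling the hardware becomes more attractive than running it.
after = daily_btc(14, 40_000_000)
print(f"{before:.6f} BTC/day -> {after:.6f} BTC/day")
```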

Imagine how much more valuable the resources would be if one of the major graphics chip vendors exclusively used its latest graphics chips for its own services, like cloud compute, crypto mining and server-based rendering; that is the power that Bitmain holds over the bitcoin market.

Competing for foundry business

Clearly Bitmain is big business, and its impact goes well beyond the bitcoin space. Because its dominance in mining hardware depends on new chip designs and chip production, where performance and power efficiency are critical to building profitable hardware, it competes for the same foundry business as other fabless semiconductor giants. That includes Apple, Nvidia, Qualcomm, AMD and others.

Companies that build ASICs as part of their business model, including Samsung, TSMC, GlobalFoundries and even Intel to a small degree, look for customers willing to bid the most for the limited availability of production inventory. Bitmain is not restricted to a customer base that is cost-sensitive — instead, its customers are profit-sensitive. As long as the crypto market remains profitable, Bitmain can absorb the added cost of chip production.

Advantages over Nvidia, AMD and Qualcomm

Nvidia, AMD and Qualcomm are not as flexible. Although Nvidia can charge thousands of dollars for some of its most powerful graphics chips when targeting the enterprise and machine-learning markets, the wider gaming market is more sensitive to price changes. You can see that in the unrest in the gaming space as graphics card prices rise because inventory goes to miners rather than gamers. Neither AMD nor Nvidia will get away with selling graphics cards to partners at higher prices and, as a result, there is a potential for negative market growth in PC gaming.

If Bitmain uses the same foundry as others, and is willing to pay more for it to build their chips at a higher priority than other fabless semiconductor companies, then it could directly affect the availability and pricing for graphics chips, mobile phone processors and anything else built at those facilities. As a result, not only does the cryptocurrency market have an effect on the current graphics chip market for gamers by causing shortages, but it could also impact future chip availability if Bitmain (and its competitors) are willing to spend more for the advanced process technologies coming in 2018 and beyond.

Still, nothing is certain in the world of bitcoin and cryptocurrency. The fickle and volatile market means the profitability of Bitmain’s Antminers could be reduced, lessening the drive to pay more for chips and production. There is clearly an impact from sudden bitcoin value drops (from $20,000 to $6,000, as we saw this month) on mining hardware sales, both graphics chip-based and ASIC-based, but measuring and predicting that impact is a difficult venture.

Source: MarketWatch
Author:
Manufacturer: AMD

Overview

It's clear by now that AMD's latest CPU releases, the Ryzen 3 2200G and the Ryzen 5 2400G, are compelling products. We've already taken a look at them in our initial review and investigated how memory speed affects the graphics performance of the integrated GPU, but it seemed there was something missing.

Recently, it's been painfully clear that GPUs excel at more than just graphics rendering. With the rise of cryptocurrency mining, OpenCL and CUDA performance are as important as ever.

Cryptocurrency mining certainly isn't the only application where having a powerful GPU can help system performance. We set out to see how much of an advantage the Radeon Vega 11 graphics in the Ryzen 5 2400G provided over the significantly less powerful UHD 630 graphics in the Intel i5-8400.

DSC04637.JPG

Test System Setup

CPU              | AMD Ryzen 5 2400G / Intel Core i5-8400
Motherboard      | Gigabyte AB350N-Gaming WiFi / ASUS STRIX Z370-E Gaming
Memory           | 2 x 8GB G.SKILL FlareX DDR4-3200 (all memory running at 3200 MHz)
Storage          | Corsair Neutron XTi 480 SSD
Sound Card       | On-board
Graphics Card    | AMD Radeon Vega 11 Graphics / Intel UHD 630 Graphics
Graphics Drivers | AMD 17.40.3701 / Intel 23.20.16.4901
Power Supply     | Corsair RM1000x
Operating System | Windows 10 Pro x64 RS3

 

GPGPU Compute

Before we take a look at some real-world examples of where a powerful GPU can be utilized, let's look at the relative power of the Vega 11 graphics on the Ryzen 5 2400G compared to the UHD 630 graphics on the Intel i5-8400.

sisoft-screen.png

SiSoft Sandra is a suite of benchmarks covering a wide array of system hardware and functionality, including an extensive range of GPGPU tests, which we are looking at today. 

sandra1.png

Comparing the raw shader performance of the Ryzen 5 2400G and the Intel i5-8400 provides a clear snapshot of what we are dealing with. In every precision category, the Vega 11 graphics in the AMD part are significantly more powerful than the Intel UHD 630 graphics. This all combines to provide a 175% increase in aggregate shader performance over Intel for the AMD part. 
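For readers curious how an aggregate number like that 175% figure can be derived, here is a minimal sketch. Sandra's own weighting is its own, so the per-precision scores below are placeholders rather than our measured results, and a geometric mean simply stands in for whatever combination the benchmark actually uses.

```python
from statistics import geometric_mean  # Python 3.8+

# Placeholder per-precision throughput figures (GFLOPS/GIOPS); NOT our measured results.
vega11 = {"half": 3500.0, "single": 1750.0, "double": 110.0, "quad": 4.4}
uhd630 = {"half": 850.0,  "single": 420.0,  "double": 105.0, "quad": 3.0}

def aggregate(scores: dict) -> float:
    """Combine categories of very different magnitude with a geometric mean."""
    return geometric_mean(scores.values())

uplift = (aggregate(vega11) / aggregate(uhd630) - 1) * 100
print(f"Aggregate shader uplift: {uplift:.0f}%")
```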

Now that we've taken a look at the theoretical power of these GPUs, let's see how they perform in real-world applications.

Continue reading our look at the GPU compute performance of the Ryzen 5 2400G!

Author:
Manufacturer: AMD

Memory Matters

Memory speed is not a factor that the average gamer thinks about when building their PC. For the most part, memory performance hasn't had much of an effect on modern processors running high-speed memory such as DDR3 and DDR4.

With the launch of AMD's Ryzen processors last year, a platform emerged that is more sensitive to memory speed. By running Ryzen processors with higher-frequency, lower-latency memory, users can see significant performance improvements, especially in 1080p gaming scenarios.

However, the Ryzen processors are not the only ones to exhibit this behavior.

Gaming on integrated GPUs is a perfect example of a memory-starved situation. Take, for instance, the new AMD Ryzen 5 2400G and its Vega-based GPU cores. In a full Vega 56 or 64, these Vega cores are paired with blazingly fast HBM2 memory. However, due to constraints such as die space and cost, this processor does not integrate HBM.

DSC04643.JPG

Instead, the CPU portion and the graphics portion of the APU must both depend on the same pool of DDR4 system memory. DDR4 is significantly slower than the memory traditionally found on graphics cards, such as GDDR5 or HBM. As a result, APU performance is usually memory-limited to some extent.
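To put numbers on that gap, peak theoretical bandwidth is simply transfer rate multiplied by bus width. The sketch below uses nominal figures for dual-channel DDR4-3200 and a typical 256-bit GDDR5 card; these are illustrative assumptions, not measurements from this system.

```python
def peak_bandwidth_gbs(transfer_rate_mtps: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s = transfers per second * bytes per transfer."""
    return transfer_rate_mtps * (bus_width_bits / 8) / 1000

# Dual-channel DDR4-3200: 3200 MT/s across a 2 x 64-bit bus, shared by CPU and GPU.
ddr4 = peak_bandwidth_gbs(3200, 128)    # ~51.2 GB/s
# A typical GDDR5 card for comparison: 8 Gbps effective on a 256-bit bus, GPU-only.
gddr5 = peak_bandwidth_gbs(8000, 256)   # ~256 GB/s
print(f"DDR4-3200 dual channel: {ddr4:.1f} GB/s vs. 256-bit GDDR5: {gddr5:.1f} GB/s")
```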

In the past, we've done memory speed testing with AMD's older APUs; with the launch of the new Ryzen- and Vega-based R3 2200G and R5 2400G, we decided to take another look at this topic.

For our testing, we are running the Ryzen 5 2400G at three different memory speeds: 2400 MHz, 2933 MHz, and 3200 MHz. While the maximum supported JEDEC memory speed for the R5 2400G is 2933 MHz, the memory AMD provided for our processor review overclocks to 3200 MHz just fine.

Continue reading our look at memory speed scaling with the Ryzen 5 2400G!

NVIDIA Job Posting for Metal and OpenGL Engineer

Subject: Graphics Cards | February 18, 2018 - 02:54 PM |
Tagged: opengl, nvidia, metal, macos, apple

Just two days ago, NVIDIA published a job posting for a software engineer to “implement and extend 3D graphics and Metal”. Given that they specify the Metal API, and that they want applicants who are “Experienced with OSX and/or Linux operating systems”, it seems clear that this job would involve macOS and/or iOS.

First, if this appeals to any of our readers, the job posting is here.

Apple-logo.png

Second, and this is where it gets potentially news-worthy, is that NVIDIA hasn’t really done a whole lot on Apple platforms for a while. The most recent NVIDIA GPU to see macOS is the GeForce GTX 680. It’s entirely possible that NVIDIA needs someone to fill in and maintain those old components. If that’s the case? Business as usual. Nothing to see here.

The other possibility is that NVIDIA might be expecting a design win with Apple. What? Who knows. It could be something as simple as Apple’s external GPU architecture allowing the user to select their own add-in board. Alternatively, Apple could have selected an NVIDIA GPU for one or more product lines, which they have not done since 2013 (as far as I can tell).

Apple typically makes big announcements at WWDC, which is expected in early June, or around the back-to-school season in September. I’m guessing we’ll know by then at the latest if something is in the works.

Source: NVIDIA

Radeon Software Adrenalin Edition 18.2.2 Released

Subject: Graphics Cards | February 14, 2018 - 07:00 PM |
Tagged: amd, graphics drivers

AMD has published a new version of their Radeon Software Adrenalin Edition graphics drivers. This one focuses on Kingdom Come: Deliverance, Fortnite, and PlayerUnknown’s Battlegrounds. The first of these, Kingdom Come: Deliverance, is an action RPG from Deep Silver and Warhorse Studios. It is the studio’s first game, although its founders came from 2K and Bohemia Interactive.

amd-2016-rx480-candid.jpg

AMD is quoting frame rate increases in the range of ~3-4% with this driver, although PUBG can see up to 7% if you compare against 17.12.1. They don’t seem to list any fixes, although there are a handful of known issues, like FreeSync coming online during Google Chrome video playback, refreshing incorrectly and causing flicker. There’s also a system hang that could occur when twelve GPUs are performing a compute task. I WONDER WHAT CONDITIONS WOULD CAUSE THAT.

You can pick up the latest driver from AMD’s website.

Source: AMD

Valve Supporting AMD's GPU-Powered TrueAudio Next In Latest Steam Audio Beta

Subject: General Tech, Graphics Cards | February 7, 2018 - 09:02 PM |
Tagged: VR, trueaudio next, TrueAudio, steam audio, amd

Valve has announced support for AMD's TrueAudio Next technology in its Steam Audio SDK for developers. The partnership will allow game and VR application developers to reserve a portion of a GCN-based GPU's compute units for audio processing and, as a result, increase the quality and quantity of audio sources. AMD's OpenCL-based TrueAudio Next technology can run on CPUs as well, but its strength is the ability to run on a dedicated portion of the GPU. Reserving compute units improves frame times because audio threads are not competing with rendering for the same GPU resources during complex scenes, and it improves audio quality because the GPU can process complex audio scenes and convolutions much more efficiently than a CPU, especially as the number of sources and impulse responses increases.

AMD True Audio Next In Steam Audio.jpg

Steam Audio's TrueAudio Next integration is being positioned as an option for developers and an answer to increasing the level of immersion in virtual reality games and applications. While TrueAudio Next is not using ray tracing for audio, it is physics-based and can be used to great effect to create realistic scenes with large numbers of direct and indirect audio sources, ambisonics, increased impulse response lengths, echoes, reflections, reverb, frequency equalization, and HRTF (Head Related Transfer Function) 3D audio. According to Valve, indirect audio from multiple sources with convolution reverb is one of the most computationally intensive parts of Steam Audio, and TAN is able to handle it much more efficiently and accurately without affecting GPU frame times, while freeing the CPU up for the additional physics and AI tasks it is much better suited to anyway. Convolution is a way of modeling and filtering audio to create effects such as echoes and reverb. In the case of indirect audio, Steam Audio uses ray tracing to generate an impulse response (it measures the distance and path audio would travel from source to listener), and convolution is then used to generate a reverb effect which, while very accurate, can be quite computationally intensive, requiring hundreds of thousands of sound samples. Ambisonics further represent the directional nature of indirect sound, which helps to improve positional audio and the immersion factor, as sounds are modeled closer to how they behave in the real world.
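For those unfamiliar with the operation, convolution reverb boils down to weighting every output sample by a (possibly very long) impulse response, which is why the cost climbs so quickly with source count and impulse length. A tiny NumPy sketch of the idea follows; it is purely illustrative, as Steam Audio and TrueAudio Next use far more optimized, partitioned GPU implementations.

```python
import numpy as np

def convolution_reverb(dry: np.ndarray, impulse_response: np.ndarray) -> np.ndarray:
    """Apply reverb by convolving the dry signal with an impulse response."""
    wet = np.convolve(dry, impulse_response)
    return wet / np.max(np.abs(wet))      # normalize so the result does not clip

# Illustrative data: half a second of 48 kHz noise and a quarter-second decaying impulse response.
rate = 48_000
dry = np.random.randn(rate // 2).astype(np.float32)
decay = np.exp(-np.linspace(0.0, 8.0, rate // 4))
ir = (np.random.randn(rate // 4) * decay).astype(np.float32)

wet = convolution_reverb(dry, ir)
# Cost scales with len(dry) * len(ir) per source, per listener ear, every frame --
# which is why offloading many simultaneous sources to spare GPU compute units pays off.
print(wet.shape)
```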

GPU Convolution Performance versus CPU.png

GPU versus CPU convolution (audio filtering) performance. Lower is better.

In addition to being able to dedicate a portion (up to 20 to 25%) of a GPU's compute units to audio processing, developers can enable or disable TrueAudio processing, including the level of acoustic complexity and detail, on a scene-by-scene basis. Currently it appears that Unity, FMOD Studio, and C API-based engines can hook into Steam Audio and the TrueAudio Next features, but it remains up to developers to use the features and integrate them into their games.

Note that GPU-based TrueAudio Next requires a GCN-based graphics card of the RX 470, RX 480, RX 570, RX 580, R9 Fury, R9 Fury X, Radeon Pro Duo, RX Vega 56, or RX Vega 64 variety in order to work. That is a limiting factor in adoption, much like the various hair and facial tech is for AMD and NVIDIA on the visual side of things, where the question arises of whether the target market is large enough to encourage developers to put in the time and effort to enable an optional feature.

I do not pretend to be an audio engineer, nor do I play a GPU programmer on TV, but more options are always good, and I hope that developers take advantage of TrueAudio Next's resource reservation and GPU compute convolution algorithms to further the immersion factor of audio as much as they have the visual side of things. As VR continues to become more relevant, I think developers will have to start putting more emphasis on accurate and detailed audio, and that's a good thing for an aspect of gaming that has seemingly taken a backseat since Windows Vista.

What are your thoughts on the state of audio in gaming and Steam Audio's new TrueAudio Next integration?


Source: Valve

MSI Launches Radeon RX 580 Armor MK2 Graphics Cards

Subject: Graphics Cards | February 3, 2018 - 05:00 PM |
Tagged: RX 580, msi, GDDR5, factory overclocked, amd, 8gb

MSI is updating its Radeon RX 580 Armor series with a new MK2 variant (in both standard and OC editions) that features an updated cooler with red and black color scheme and a metal backplate along with Torx 2.0 fans.

MSI Radeon RX 580 Armor MK2 OC Graphics Card.png

The graphics card is powered by a single 8-pin PCI-E power connection and has two DisplayPort, two HDMI, and one DVI display outputs. MSI claims the MK2 cards use its Military Class 4 hardware, including high-end solid capacitors. The large heatsink features three copper heatpipes and a large aluminum fin stack. It appears that the cards use the same PCB as the original Armor series, but it is not clear from MSI’s site whether anything has been done differently with the power delivery.

The RX 580 Polaris GPU runs at a slight factory overclock out of the box, with a boost clock of up to 1353 MHz (reference is 1340 MHz) for the standard edition and up to 1366 MHz for the RX 580 Armor MK2 OC Edition. The OC edition can further clock up to 1380 MHz when run in OC mode using the company’s software utility (enthusiasts can attempt to go beyond that, but MSI makes no guarantees). Both cards come with 8GB of GDDR5 memory clocked at the reference 8 GHz.

MSI did not release pricing or availability, but expect these cards to be difficult to find and priced well above MSRP when they are in stock. If you have a physical Microcenter near you, it might be worth watching for one of these cards there for a chance at getting one closer to MSRP.

Source: MSI

Up next on things you can't have, the GIGABYTE AORUS GTX 1080 Ti Waterforce Xtreme

Subject: Graphics Cards | January 30, 2018 - 04:57 PM |
Tagged: gigabyte, aorus, gtx 1080 ti, waterforce extreme edition, watercooling, factory overclocked

On the odd occasion it is in stock, the GIGABYTE AORUS GTX 1080 Ti Waterforce Xtreme will cost you $1300 or more, about twice its MSRP. The liquid-cooled card does come factory overclocked: Gaming mode offers a 1607 MHz base and 1721 MHz boost clock, while OC mode runs a 1632 MHz base and 1746 MHz boost clock. [H]ard|OCP managed to hit an impressive 2038 MHz base and 2050 MHz boost with 11.6 GHz VRAM. Check out the full review to see what that did for its performance.

1516441630zbp5jc8wvb_1_3_l.png

"GIGABYTE has released a brand new All-In-One liquid cooled GeForce GTX 1080 Ti video card with the AORUS Waterforce Xtreme Edition video card. This video card gives the Corsair Hydro GFX liquid cooled video card some competition, with a higher out-of-box clock speed we’ll see how fast this video card is and if there is any room left for overclocking."


Source: [H]ard|OCP

SK Hynix Launches Its 8Gb GDDR6 Memory Running at 14 Gbps

Subject: Graphics Cards, Memory | January 24, 2018 - 11:04 PM |
Tagged: SK Hynix, graphics memory, gddr6, 8gb, 14Gbps

SK Hynix recently updated its product catalog and announced the availability of its eight gigabit (8 Gb) GDDR6 graphics memory. The new chips come in two SKUs and three speed grades, with the H56C8H24MJR-S2C parts operating at 14 Gbps and 12 Gbps and the H56C8H24MJR-S0C operating at 12 Gbps (but at a higher voltage than the -S2C SKU) and 10 Gbps. Voltages range from 1.25V for 10 Gbps, through either 1.25V or 1.35V for 12 Gbps, to 1.35V for 14 Gbps. Each 8 Gb GDDR6 memory chip holds 1 GB of memory and can provide up to 56 GB/s of per-chip bandwidth.

SK Hynix logo.jpg

While SK Hynix has a long way to go before competing with Samsung’s 18 Gbps GDDR6, its new chips are significantly faster than even its latest GDDR5, with the company still working on bringing 9 Gbps and 10 Gbps GDDR5 to market. As a point of comparison, its fastest 10 Gbps GDDR5 would have a per-chip bandwidth of 40 GB/s versus 56 GB/s for its 14 Gbps GDDR6. A theoretical 8GB graphics card with eight 8 Gb chips running at 10 Gbps on a 256-bit memory bus would have a maximum bandwidth of 320 GB/s. Replacing the GDDR5 with 14 Gbps GDDR6 in the same eight-chip, 256-bit bus configuration, the graphics card would hit 448 GB/s of bandwidth. In the Samsung story I noted that the Titan Xp runs twelve 8 Gb GDDR5X memory chips at 11.4 Gbps on a 384-bit bus for 547 GB/s of bandwidth. Replacing the G5X with GDDR6 running at 14 Gbps would ramp the bandwidth up to 672 GB/s.

Theoretical Memory Bandwidth

Chip Pin Speed | Per-Chip Bandwidth | 256-bit bus | 384-bit bus | 1024-bit (one package) | 4096-bit (4 packages)
10 Gbps        | 40 GB/s            | 320 GB/s    | 480 GB/s    | -                      | -
12 Gbps        | 48 GB/s            | 384 GB/s    | 576 GB/s    | -                      | -
14 Gbps        | 56 GB/s            | 448 GB/s    | 672 GB/s    | -                      | -
16 Gbps        | 64 GB/s            | 512 GB/s    | 768 GB/s    | -                      | -
18 Gbps        | 72 GB/s            | 576 GB/s    | 864 GB/s    | -                      | -
HBM2 (2 Gbps)  | 256 GB/s           | -           | -           | 256 GB/s               | 1 TB/s
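Every GDDR entry in the table above comes from the same arithmetic: pin speed times bus width divided by eight bits per byte. A short sketch that reproduces those columns; the 32-bit per-chip interface and the 1024-bit-per-stack HBM2 figure are the standard widths for these memory types.

```python
def bus_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int) -> float:
    """Total bandwidth in GB/s = per-pin rate (Gbps) * bus width (bits) / 8 bits per byte."""
    return pin_speed_gbps * bus_width_bits / 8

for pin_speed in (10, 12, 14, 16, 18):
    per_chip = bus_bandwidth_gbs(pin_speed, 32)   # GDDR5/GDDR6 chips expose a 32-bit interface
    bus_256 = bus_bandwidth_gbs(pin_speed, 256)   # e.g. eight chips on a 256-bit bus
    bus_384 = bus_bandwidth_gbs(pin_speed, 384)   # e.g. twelve chips on a 384-bit bus
    print(f"{pin_speed} Gbps: {per_chip:.0f} GB/s per chip, "
          f"{bus_256:.0f} GB/s @ 256-bit, {bus_384:.0f} GB/s @ 384-bit")

# HBM2: 2 Gbps per pin on a 1024-bit bus per stack = 256 GB/s; four stacks = 1 TB/s.
print(bus_bandwidth_gbs(2, 1024), "GB/s per HBM2 stack")
```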

GDDR6 is still a far cry from High Bandwidth Memory levels of performance, but it is much cheaper and easier to produce. With SK Hynix ramping up production and Samsung besting the fastest 16 Gbps G5X, it is likely that the G5X stop-gap will be wholly replaced by GDDR6, with products like SK Hynix's upgraded 10 Gbps GDDR5 picking up the low end. As more competition enters the GDDR6 space, prices should continue to come down and adoption should ramp up, with next-generation GPUs, game consoles, network devices, etc. using GDDR6 for all but the highest-tier prosumer and enterprise HPC markets.


AMD Hires Two Graphics Execs to Help Tackle NVIDIA

Subject: Graphics Cards | January 23, 2018 - 05:10 PM |
Tagged: amd, radeon, radeon technologies group, rtg

The following story was originally posted on ShroutResearch.com.

AMD announced today that it has hired two new executives to run its graphics division after the departure of Radeon Technologies Group’s previous lead. Raja Koduri left AMD in November to join Intel and launch its new Core and Visual Computing group, creating a hole in the leadership of this critical division at AMD. CEO Lisa Su filled in during Koduri’s sabbatical and subsequent exit, but the company had been searching for the right replacements since late last year.

Appointed as senior vice president and GM of the Radeon Technologies Group, Mike Rayfield comes to AMD from previous stints at both Micron and NVIDIA. Rayfield will cover all aspects of the business management of AMD’s graphics division, including consumer, professional, game consoles, and the semi-custom division that recently announced a partnership with Intel. At Micron he served as senior vice president of the Mobile Business Unit, responsible for the company’s direction in working with wireless technology providers (smartphones, tablets, etc.) across various memory categories. While at NVIDIA, Rayfield was general manager of the Mobile Business Unit, helping to create the Tegra brand and products. Though he was in a different division at the time, Rayfield’s knowledge of and experience in the NVIDIA organization may help AMD better address the graphics markets.

11471-amd-logo-1260x709.jpg

David Wang is now the senior vice president of engineering for the AMD Radeon Technologies Group and is responsible for the development of new graphics architectures, the hardware and software that integrate them, and the future strategy of where AMD will invest in graphics R&D. Wang is an alumnus of AMD, having worked as corporate vice president for graphics IP and chip development before leaving in 2012 for Synaptics. David has more than 25 years of graphics and silicon experience, starting at LSI Logic, then ArtX, and then ATI before it was acquired by AMD.

The hires come at a critical time for AMD. Though the processor division responsible for the Zen architecture and Ryzen/EPYC processors continues to make strong moves against the Intel-dominated space, NVIDIA’s stranglehold on the graphics markets for gaming, machine learning, and autonomous driving is expanding the gap between the graphics chip vendors. The Vega architecture was meant to close it (at least somewhat), but NVIDIA remains the leader in the space by a not-insignificant margin. Changing that is, and should be, AMD’s primary goal for the next few years.

AMD is hoping that by creating this two-headed spear of leadership for its Radeon graphics division, it can get the group back on track. Rayfield will be taking over all business aspects of the graphics portion of AMD, and that includes the addition of the semi-custom segment, previously part of the EESC (Enterprise, Embedded, and Semi-Custom) group under senior vice president Forrest Norrod. AMD believes that, with the growth of the enterprise segment around its EPYC processor family, and because the emphasis of the semi-custom group continues to be the advantage AMD holds in its graphics portfolio, the long-term strategy can be better executed with that group under the Radeon Technologies umbrella.

The return of Wang as the technical lead for the graphics division could bring significant positive momentum to a group that struggled in the weeks leading up to the release of its Vega architecture. The product family based on that technology underwhelmed and faced concerns over availability, pricing, and timing. Wang has a strong history in the graphics field, with experience going as far back as any high-level graphics executive in the business. While at ATI and AMD, Wang worked on architectures from 2002 through 2012, with several periods of graphics leadership under his belt. Competing against the giant that NVIDIA has become will be a challenge that requires significant technical knowledge and risk-taking, and Wang has the acumen to get it done.

AMD CEO Lisa Su expressed excitement and trust in the new graphics executives. “Mike and David are industry leaders who bring proven track records of delivering profitable business growth and leadership product roadmaps,” she says. “We enter 2018 with incredible momentum for our graphics business based on the full set of GPU products we introduced last year for the consumer, professional, and machine learning markets. Under Mike and David’s leadership, I am confident we will continue to grow the footprint of Radeon across the gaming, immersive, and GPU compute markets.”