PCPer Mailbag #24 - 12/29/2017

Subject: Editorial | December 29, 2017 - 09:00 AM |
Tagged: video, Ryan Shrout, pcper mailbag

It's time for the PCPer Mailbag, our weekly show where Ryan and the team answer your questions about the tech industry, the latest and greatest GPUs, the process of running a tech review website, and more!

On today's show, we say goodbye to 2017 with some unrelated questions:

00:44 - Advantages of a smaller process node?
03:08 - AMD GPUs better at cryptocurrency mining?
06:24 - Can overclocking harm your motherboard?
09:17 - Laptop CPUs die faster than desktop?
10:47 - Net Neutrality and Bitcoin?
11:59 - FP16 vs FP32 shaders?
14:14 - Where are all the HDR monitors?
16:01 - DDR4 vs. DDR5 memory?
18:32 - AMD Navi GPUs at CES?
20:38 - PCPer Mailbag setting too informal?

Want to have your question answered on a future Mailbag? Leave a comment on this post or in the YouTube comments for the latest video. Check out new Mailbag videos each Friday!

Be sure to subscribe to our YouTube Channel to make sure you never miss our weekly reviews and podcasts, and please consider supporting PC Perspective via Patreon to help us keep videos like our weekly mailbag coming!

Source: YouTube

December 29, 2017 | 09:52 AM - Posted by mouf

double posted Sir.

December 29, 2017 | 10:15 AM - Posted by Jim Tanous

Thanks for the heads up. I have no idea how I screwed that up.

December 29, 2017 | 02:27 PM - Posted by mouf

No worries.. I would blame Ryan anyway. :P

December 29, 2017 | 11:22 AM - Posted by TheGamingMarketIsNoLongerTheOnlyFocusForGPUsales (not verified)

Nvidia GeForce offers more ROPs for more frames flung!
AMD Radeon/GCN GPUs offer more shaders/shader cores for more compute.

AMD has more shaders, so more FP/INT units, while Nvidia has more ROPs and higher FPS.

Note: AMD having more shader cores was not mentioned, and Nvidia having more ROPs was not discussed! When performance was discussed, the implication was that Nvidia has more performance for gaming workloads (ROPs). It's not hard to win any FPS metric if the GPU SKU has 20% more ROPs, because those extra ROPs mean higher pixel fill rates, and that directly translates to higher FPS.
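The fill-rate arithmetic behind that claim is simple. Here is a quick sketch; the ROP counts and clock speed are hypothetical round numbers, not any real GPU's specs:

```python
# Peak pixel fill rate = ROP count x core clock.
# All numbers below are made-up examples, not real GPU specs.
def pixel_fill_rate_gpixels(rops: int, clock_ghz: float) -> float:
    """Return peak pixel fill rate in Gpixels/s."""
    return rops * clock_ghz

base = pixel_fill_rate_gpixels(64, 1.5)  # 96.0 Gpixels/s
more = pixel_fill_rate_gpixels(77, 1.5)  # ~20% more ROPs at the same clock

# At the same clock, ~20% more ROPs means ~20% more peak fill rate.
print(f"{(more / base - 1) * 100:.1f}% higher peak fill rate")
```

Of course, real FPS also depends on shader throughput, memory bandwidth, and the workload, so fill rate is only an upper bound on one stage of the pipeline.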

Miners want the most shaders/compute, and they are very much into undervolting/underclocking AMD's GPUs to get optimal performance per watt. Gamers want the most ROPs and overclocking/overvolting, because that means higher FPS.

Nvidia is more dependent on gamers, though that's about to change as Nvidia's professional/non-consumer market revenues are about to overtake its consumer gaming revenues. Nvidia has the vast majority of discrete gaming market sales, while AMD has a much smaller share. AMD's GPUs have always had more compute on the consumer side of the market relative to Nvidia, and up until the last few years of the coin-mining market, AMD's GPUs were relatively less expensive than Nvidia's GPUs.

It's really the same for CPUs, where more cores give more compute: the GPUs with the most shader cores have more FP/INT units available for mining compute as a GPU SKU's shader/core count goes higher. Nvidia is getting more mining sales because demand for AMD's GPUs has driven their prices higher, so high at times that price/performance and TCO metrics make Nvidia's GPUs more affordable. Miners are very much concerned with the TCO of their GPUs/hardware and the amount of time it takes for a specific GPU to pay for itself running coin-mining workloads 24/7/365.
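That payback calculation fits in a few lines. All figures here (card price, daily coin revenue, board power, electricity rate) are made-up examples, not real market data:

```python
# Hypothetical break-even estimate for a mining GPU.
# Every number in the example call is invented for illustration.
def breakeven_days(gpu_cost_usd: float, daily_coin_revenue_usd: float,
                   board_power_watts: float, usd_per_kwh: float) -> float:
    """Days of 24/7 mining until the card pays for itself."""
    daily_power_cost = board_power_watts / 1000 * 24 * usd_per_kwh
    daily_profit = daily_coin_revenue_usd - daily_power_cost
    if daily_profit <= 0:
        return float("inf")  # electricity eats the revenue; never pays off
    return gpu_cost_usd / daily_profit

# $500 card, $4.00/day in coins, 150 W board power, $0.12/kWh:
print(breakeven_days(500, 4.00, 150, 0.12))  # ~140 days
```

This is why performance per watt dominates the decision: lowering the power term shortens the payback period just as surely as raising the hash rate does.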

Look for wild swings in consumer GPU pricing tied to the price of whichever coins are producing the most return on the coin miners' investment in hardware and electricity.
Performance per watt and TCO figure greatly into the cost of GPUs and mining rigs; that's just the economics of coin mining, like any other form of mining.

The really successful miners are the ones that have fully amortized the cost of their hardware to the point where, no matter what happens in the market, they cannot lose money on their operation. Even if a miner chooses to fold the mining operation, the hardware still retains intrinsic resale value, whether sold as used GPUs to gamers or to other miners.

Miners cannot afford the professional GPU SKUs that Nvidia offers with more compute. With AMD traditionally offering just as much compute on its consumer GPU variants at a lower cost than its professional SKUs, those AMD GPUs were always going to be wanted as more affordable compute options for miners and anyone else who needs compute at a lower initial cost.

There appear to be no ASICs this time around tuned to one type of coin, so demand for GPUs that can be reprogrammed for many different types of coin/hashing will remain high! With the proliferation of different mining hash algorithms in the coin-mining market, dedicated ASICs are not tenable, and GPUs will remain in demand for the foreseeable future. The professional compute/AI markets are getting more of the top-end AMD Vega 10 dies produced, as that market pays more than any coin miner can realistically afford.

Look for AMD's modular Navi designs, where GPUs are made up of many smaller modular/scalable GPU chiplets, to relieve some of the production stress on AMD's chip-fab partners, as those chiplets' smaller size directly equates to higher die-per-wafer yields. That modular/scalable design will allow AMD to release new Navi-based SKUs that target all market segments, low end to high end, as well as professional markets. Look at how easy it was for AMD to take the modular/scalable Zeppelin die and create its entire line of consumer desktop/HEDT/professional CPU SKUs from that one die design!

AMD is also poised to take back more mobile/laptop market share with its x86-based Raven Ridge APUs, and that market is larger than the discrete GPU market, as the overall market figures show. AMD can take over the lower-end GPU market with integrated graphics products, and that represents much larger unit sales than the discrete GPU market numbers.

December 29, 2017 | 11:42 AM - Posted by Power (not verified)

Where are all the HDR monitors?

Here is Samsung CHG90 review: https://www.rtings.com/monitor/reviews/samsung/chg90-series-curved-gamin...

Except for the fact that it is not flat and not flicker-free, it seems decent.

December 29, 2017 | 12:39 PM - Posted by Jim Lahey

Hey bud, why don't VR manufacturers just hardware-upscale 1440p rendered scenes to 4K or 8K VR displays? The screen-door effect would be gone, and frame rates would be high enough to reduce motion sickness.

December 31, 2017 | 06:58 PM - Posted by Streetguru

You need more actual pixels in VR headsets to get rid of any screen-door effect.

Something like 4K per eye at 120 Hz is where we want to be.

You could then just upscale games from 1080p; you lose some sharpness, but you still get a solid VR image.

December 31, 2017 | 11:23 PM - Posted by Jim Lahey

That's my point, though: just use upscaling tech to take a 1080p or 1440p signal to 4K or 8K displays per eye. The frame rate stays the same (i.e., high) because the upscaling is done in hardware (like HDMI antialiasing).

December 29, 2017 | 10:59 PM - Posted by Goofus Maximus

Hmm. You could always just put up a special backdrop for the distracted one. I think it appropriate to set up a wall-o-ouija boards!

Such a long way to go, to arrive at such a lame joke...

December 31, 2017 | 06:56 PM - Posted by Streetguru

Why does no one understand what "Bottleneck" means for games and PC hardware?

Would you agree with the simplification of

CPUs bottleneck Refresh Rate
GPUs bottleneck Resolution/Quality

December 31, 2017 | 07:11 PM - Posted by stewrt


When will we see OLED computer monitors?
In the meantime, I'm looking for a 34" ultrawide IPS panel, 3440 x 1440, with G-Sync that reaches at least 144Hz. Any recommendations?


January 2, 2018 | 02:52 AM - Posted by Kareha

Here's a question for you, Ryan: can you explain why OpenGL is complete trash on AMD compared with Nvidia? If it's just poorly written drivers, why have they never been able to fix it over the years, compared to Nvidia, who have always seemed to have extremely good OpenGL performance? This has recently come to light again via the Cemu (Wii U emulator) community and the fact that it just runs terribly on AMD hardware compared with Nvidia hardware.

January 2, 2018 | 03:34 AM - Posted by Power (not verified)

AMD invested in Vulkan drivers in order not to spend as much as Nvidia does constantly fixing OpenGL driver shortcomings.

January 2, 2018 | 11:30 AM - Posted by brisa117


Can you give us an overview of what your article and/or video pipeline looks like? What is the typical turnaround time from start to finish?

