Podcast #344 - Intel SSD 750 Series, NZXT S340, an ASUS FreeSync Monitor and more!

Subject: General Tech | April 9, 2015 - 12:58 PM |
Tagged: video, podcast, nvidia, mg279q, Intel, gsync, gigabyte, freesync, ddr4-3400, corsair, compute stick, asus, amd, 750 series

PC Perspective Podcast #344 - 04/09/2015

Join us this week as we discuss the Intel SSD 750 Series, NZXT S340, an ASUS FreeSync Monitor and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

The NVIDIA, Samsung and Qualcomm saga has been cleared to continue

Subject: General Tech | April 7, 2015 - 01:27 PM |
Tagged: nvidia, qualcomm, Samsung, patents, sueball

A judge has ruled that six of the seven patent claims NVIDIA filed are valid and that the disputes will proceed to court for judgment.  The patents involve the use of graphics coprocessors in mobile devices and are worded in such a way that it does not matter whether those GPUs come from ARM, Imagination Technologies or Qualcomm.  This is not the end of the dispute, merely a pretrial ruling on whether the claims are valid and worth taking to trial.  Of course Qualcomm and Samsung dispute NVIDIA's claims, and in Samsung's case they have already launched a countersuit claiming NVIDIA has violated six of their own patents.  You can read about the history of this latest legal battle in the tech world, as well as today's ruling, over at The Register.


"Nvidia has won an important early victory in its ongoing patent litigation against Qualcomm and Samsung, with a judge in the US International Trade Commission ruling in Nvidia's favor as to the language of the disputed patents."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

Podcast #343 - DX12 Performance, Dissecting G-SYNC and FreeSync, Intel 3D NAND and more!

Subject: General Tech | April 2, 2015 - 01:16 PM |
Tagged: podcast, video, dx12, 3dmark, freesync, g-sync, Intel, 3d nand, 20nm, 28nm, micron, nvidia, shield, Tegra X1, raptr, 850 EVO, msata, M.2

PC Perspective Podcast #343 - 04/02/2015

Join us this week as we discuss DX12 Performance, Dissecting G-SYNC and FreeSync, Intel 3D NAND and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Subject: Editorial
Manufacturer: Various

Process Technology Overview

We have been very spoiled throughout the years.  We likely did not realize exactly how spoiled we were until it became obvious that the rate of process technology advances had hit a virtual brick wall.  Every 18 to 24 months, a new, faster, more efficient process node was opened up to fabless semiconductor firms, and with it came a new generation of products that would blow our hair back.  Now we are at a virtual standstill when it comes to new process nodes from the pure-play foundries.

Few expected the 28 nm node to live nearly as long as it has.  Some of the first cracks in the façade actually came from Intel.  Their 22 nm Tri-Gate (FinFET) process took a little longer to get off the ground than expected.  We also noticed some interesting electrical characteristics in the products built on that process.  Intel steered away from higher clockspeeds and focused on efficiency and architectural improvements rather than holding to generally acceptable TDPs and leapfrogging the competition on clockspeed alone.  Overclockers noticed that the newer parts did not reach the same clockspeed heights as previous products such as the 32 nm Sandy Bridge processors.  Whether this decision was intentional on Intel's part is debatable, but my gut feeling is that they were responding to the technical limitations of their 22 nm process.  Yields and bins likely dictated the maximum clockspeeds attained on these new products.  So instead of vaulting over AMD’s products, they just slowly started walking away from them.


Samsung is one of the first pure-play foundries to offer a working sub-20 nm FinFET product line. (Photo courtesy of ExtremeTech)

When 28 nm was released, the plan on the books was to transition to 20 nm products based on planar transistors, thereby bypassing the added expense of developing FinFETs.  It was widely expected that FinFETs would not be required to address the needs of the market.  Sadly, that did not turn out to be the case.  There are other factors as to why 20 nm planar parts are not common, but the limitations of the node have made it a relatively niche process appropriate for smaller, low power ASICs (like the latest Apple SoCs).  The Apple A8 is rumored to be around 90 mm², which is a far cry from the traditional midrange GPU that runs from 250 mm² to 400+ mm².

The essential difficulty of the 20 nm planar node appears to be a lack of power scaling to match the increased transistor density.  TSMC and others have successfully packed more transistors into every square millimeter compared to 28 nm, but the electrical characteristics did not scale proportionally.  Yes, there are per-transistor improvements, but when designers pack all of those transistors into a large design, power and voltage issues start to arise.  It takes more power to drive the bigger, denser chip, which generates more heat and pushes the TDP upward.  The GPU guys probably looked at this and figured out that while they could achieve a higher transistor density and a wider design, they would have to downclock the entire GPU to hit reasonable TDP levels.  Add in the yield and binning concerns of a new process, and the advantages of going to 20 nm would be slim to none at the end of the day.
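To put some rough numbers on why density without matching power scaling is a problem, here is a back-of-the-envelope sketch using the classic CMOS dynamic power approximation P ≈ a·C·V²·f.  Every figure in it (transistor counts, capacitance, voltage, clocks) is an invented round number for illustration only, not a measurement of any real 28 nm or 20 nm design.

```python
# Rough, illustrative model of why a density jump without a matching
# drop in per-transistor switching energy forces a clock reduction.
# All numbers below are made-up assumptions for illustration only.

def dynamic_power_watts(transistors, cap_per_transistor_f, voltage_v,
                        freq_hz, activity=0.1):
    """Classic CMOS dynamic power approximation: P = a * C * V^2 * f."""
    total_cap = transistors * cap_per_transistor_f
    return activity * total_cap * voltage_v ** 2 * freq_hz

# Hypothetical 28 nm midrange GPU: 3.5B transistors at 1.0 GHz.
p_28 = dynamic_power_watts(3.5e9, 0.5e-15, 1.0, 1.0e9)

# Hypothetical 20 nm shrink: ~1.9x the transistors, modest capacitance
# and voltage gains, same 1.0 GHz clock.
p_20_same_clock = dynamic_power_watts(6.5e9, 0.42e-15, 0.95, 1.0e9)

# To get back to the 28 nm power budget, the whole chip must be downclocked.
scale = p_28 / p_20_same_clock
print(f"28 nm design:            {p_28:6.1f} W")
print(f"20 nm, same clock:       {p_20_same_clock:6.1f} W")
print(f"clock needed for same W: {scale * 1000:6.0f} MHz")
```

Even with generous assumptions about per-transistor gains, the hypothetical 20 nm chip lands well above the 28 nm power budget until it is downclocked by nearly 30 percent, which mirrors the trade-off described above.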

Click here to read the rest of the 28 nm GPU editorial!

Rumor: New NVIDIA SHIELD Portable Coming with Tegra X1

Subject: Mobile | March 30, 2015 - 03:43 PM |
Tagged: Tegra X1, tegra, shield portable, shield, portable, nvidia

UPDATE (3/31/15): Thanks to another tip we can confirm that the new SHIELD P2523 will have the Tegra X1 SoC in it. From this manifest document you'll see the Tegra T210 listed (the same part marketed as X1) as well as the code name "Loki." Remember that the first SHIELD Portable device was code named Thor. Oh, so clever, NVIDIA.


Based on a rumor posted by Brad over at Lilliputing, it appears we can expect an updated NVIDIA SHIELD Portable device sometime later in 2015. According to both the Bluetooth and Wi-Fi certification websites, a device going by the name "NVIDIA Shield Portable P2523" has been submitted. There isn't a lot of detail though:

  • 802.11a/b/g/n/ac dual-band 2.4 GHz and 5 GHz WiFi
  • Bluetooth 4.1
  • Android 5.0
  • Firmware version 3.10.61

We definitely have a new device here, as the initial SHIELD Portable did not include 802.11ac support at all. And though no data is there to support it, you have to assume that NVIDIA would be using the new Tegra X1 processor in any new SHIELD devices coming out this year. I already previewed the new SHIELD console from GDC that utilizes that same SoC, but a portable unit might require a lower clocked, lower power version of the processor to help with heat and battery life.


There’s no information about the processor, screen, or other hardware. But if the new Shield portable is anything like the original, it’ll probably consist of what looks like an Xbox-style game controller with an attached 5 inch display which you can fold up to play games on the go.

And if it’s anything like the new NVIDIA Shield console, it could have a shiny new NVIDIA Tegra X1 processor to replace the aging Tegra 4 chip found in the original Shield Portable.

I wouldn’t be surprised if it also had a higher-resolution display, more memory, or other improvements.

Keep an eye out - NVIDIA may be making a push for even more SHIELD hardware this summer.

Source: Lilliputing

Saved so much using Linux you can afford a Titan X?

Subject: Graphics Cards | March 27, 2015 - 04:02 PM |
Tagged: gtx titan x, linux, nvidia

Perhaps somewhere out there is a Linux user who wants a TITAN X, and if there is, they will like the results of Phoronix's testing.  The card works perfectly straight out of the box with the latest 346.47 driver as well as the 349.12 beta; if you want to use Nouveau, then don't buy this card.  The TITAN X did not win any awards for power efficiency, but in OpenCL tests, synthetic OpenGL benchmarks and Unigine on Linux it walked away a clear winner.  Phoronix, and many others, hope that AMD is working on an updated Linux driver to accompany the new 300 series cards we will see soon, to make them more competitive on open source systems.

If you are sick of TITAN X reviews by now, just skip to their 22 GPU performance roundup of Metro Redux.


"Last week NVIDIA unveiled the GeForce GTX TITAN X during their annual GPU Tech Conference. Of course, all of the major reviews at launch were under Windows and thus largely focused on the Direct3D performance. Now that our review sample arrived this week, I've spent the past few days hitting the TITAN X hard under Linux with various OpenGL and OpenCL workloads compared to other NVIDIA and AMD hardware on the binary Linux drivers."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: Phoronix
Manufacturer: Various

It's more than just a branding issue

As a part of my look at the first wave of AMD FreeSync monitors hitting the market, I wrote an analysis of how the competing technologies of FreeSync and G-Sync differ from one another. It was a complex topic that I tried to state as succinctly as possible given the time constraints and the fact that the article was focused on FreeSync specifically. I'm going to include a portion of that discussion here, to recap:

First, we need to look inside the VRR window, the zone in which the monitor and AMD claim that variable refresh should work without tears and without stutter. On the LG 34UM67, for example, that range is 48-75 Hz, so frame rates between 48 FPS and 75 FPS should be smooth. Next we want to look above the window, at frame rates above the 75 Hz maximum refresh rate. Finally, and maybe most importantly, we need to look below the window, at frame rates under the minimum rated variable refresh target, which in this example is 48 FPS.

AMD FreeSync offers more flexibility for the gamer than G-Sync around this VRR window. Both above and below the variable refresh range, AMD allows gamers to continue to select a VSync enabled or disabled setting. That setting is handled just as it is today whenever your game's frame rate falls outside the VRR window. So, for our 34UM67 monitor example, if your game is capable of rendering at a frame rate of 85 FPS then you will either see tearing on your screen (if you have VSync disabled) or you will get a static frame rate of 75 FPS, matching the top refresh rate of the panel itself. If your game is rendering at 40 FPS, lower than the minimum of the VRR window, then you will again see either tearing (with VSync off) or the potential for stutter and hitching (with VSync on).

But what happens with this FreeSync monitor and a theoretical G-Sync monitor below the window? AMD’s implementation means that you get the option of disabling or enabling VSync.  For the 34UM67, as soon as your game's frame rate drops under 48 FPS you will either see tearing on your screen or you will begin to see hints of stutter and judder as the typical (and previously mentioned) VSync concerns crop up again. At lower frame rates (below the window) these artifacts will actually impact your gaming experience much more dramatically than at higher frame rates (above the window).
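To make those three zones concrete, here is a minimal sketch of the behavior described above for the 34UM67's 48-75 Hz window. The function is purely a hypothetical illustration of the rules laid out in the last few paragraphs, not anything from AMD's driver.

```python
# Hypothetical helper illustrating the three FreeSync zones described above,
# using the LG 34UM67's 48-75 Hz VRR window. Not actual driver logic.

def freesync_behavior(fps, vrr_min=48, vrr_max=75, vsync=True):
    if vrr_min <= fps <= vrr_max:
        return "inside VRR window: refresh matches frame rate, no tearing or stutter"
    if fps > vrr_max:
        return ("above window: frame rate capped at the panel's max refresh"
                if vsync else "above window: tearing")
    return ("below window: stutter/judder from classic VSync behavior"
            if vsync else "below window: tearing")

for fps in (85, 60, 40):
    print(f"{fps} FPS, VSync on : {freesync_behavior(fps, vsync=True)}")
    print(f"{fps} FPS, VSync off: {freesync_behavior(fps, vsync=False)}")
```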

G-Sync treats this “below the window” scenario very differently. Rather than reverting to VSync on or off, the module in the G-Sync display is responsible for auto-refreshing the screen when the frame rate dips below the minimum refresh of the panel, which would otherwise suffer from flicker. So, on a 30-144 Hz G-Sync monitor, we have measured that when the frame rate gets down to 29 FPS, the display is actually refreshing at 58 Hz, each frame being “drawn” one extra time to avoid pixel flicker while still maintaining tear-free and stutter-free animation. If the frame rate dips to 25 FPS, the screen draws at 50 Hz. If the frame rate drops to something more extreme like 14 FPS, we actually see the module drawing each frame four times, taking the refresh rate back up to 56 Hz. It’s a clever trick that preserves the goals of variable refresh and prevents a degradation of the gaming experience. But this method requires a local frame buffer and logic on the display controller to work, hence the current implementation in a G-Sync module.
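Here is a minimal sketch that reproduces the three measurements quoted above (29 FPS at 58 Hz, 25 FPS at 50 Hz, 14 FPS at 56 Hz). The rule it encodes, picking the smallest whole-number multiple that lifts the effective refresh above a comfort threshold, is our inference from those measurements, and the 45 Hz threshold is purely an assumption chosen to fit them; NVIDIA has not published the module's actual logic.

```python
# Sketch of the below-window frame multiplication we measured on a 30-144 Hz
# G-Sync panel. The "comfort threshold" is an assumption that fits our three
# data points; the real module's rule has not been published by NVIDIA.

def gsync_redraw(fps, panel_min=30, comfort_hz=45):
    """Return (multiplier, effective refresh in Hz) for a given frame rate."""
    if fps >= panel_min:
        return 1, fps                      # inside the VRR window: draw once
    multiplier = 1
    while fps * multiplier < comfort_hz:   # double, triple, quadruple...
        multiplier += 1
    return multiplier, fps * multiplier

for fps in (29, 25, 14):
    m, hz = gsync_redraw(fps)
    print(f"{fps} FPS -> each frame drawn {m}x, panel refreshing at {hz} Hz")
```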

As you can see, the topic is complicated. So Allyn and I (and an aging analog oscilloscope) decided to take it upon ourselves to try to understand and explain the implementation differences with the help of some science. The video below is where the heart of this story is focused, though I have some visual aids embedded after it.

Still not clear on what this means for frame rates and refresh rates on current FreeSync and G-Sync monitors? Maybe this will help.

Continue reading our story dissecting NVIDIA G-Sync and AMD FreeSync!!

Podcast #342 - FreeSync Launch, Dell XPS 13, Super Fast DDR4 and more!

Subject: General Tech | March 26, 2015 - 01:51 PM |
Tagged: XPS 13, video, Vector 180, usb 3.1, supernova, Silverstone, quadro, podcast, ocz, nvidia, m6000, gsync, FT05, freesync, Fortress, evga, dell, ddr4-3400, ddr4, corsair, broadwell-u, amd

PC Perspective Podcast #342 - 03/25/2015

Join us this week as we discuss the launch of FreeSync, Dell XPS 13, Super Fast DDR4 and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Sebastian Peak

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!


Manufacturer: Futuremark

Our first DX12 Performance Results

Late last week, Microsoft approached me to see if I would be interested in working with them and with Futuremark on the release of the new 3DMark API Overhead Feature Test. Of course I jumped at the chance, with DirectX 12 being one of the hottest discussion topics among gamers, PC enthusiasts and developers in recent history. Microsoft set us up with the latest iteration of 3DMark and the latest DX12-ready drivers from AMD, NVIDIA and Intel. From there, off we went.

First we need to discuss exactly what the 3DMark API Overhead Feature Test is (and also what it is not). The feature test will be a part of the next revision of 3DMark, which will likely ship around the time of the full Windows 10 release. Futuremark claims that it is the "world's first independent" test that allows you to compare the performance of three different APIs: DX12, DX11 and even Mantle.

It was almost one year ago that Microsoft officially unveiled the plans for DirectX 12: a move to a more efficient API that can better utilize the CPU and platform capabilities of future, and most importantly current, systems. Josh wrote up a solid editorial on what we believe DX12 means for the future of gaming, and in particular for PC gaming, that you should check out if you want more background on the direction DX12 has set.


One of DX12's keys to becoming more efficient is the ability for developers to get closer to the metal, a phrase indicating that game and engine coders can access more of the system's power (CPU and GPU) without having their hands held by the API itself. The most direct benefit of this, as we saw with AMD's Mantle implementation over the past couple of years, is an increase in the number of draw calls that a given hardware system can sustain in a game engine.
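As a rough, back-of-the-envelope illustration of why cutting per-call CPU overhead directly raises the draw call ceiling, consider the sketch below. The per-call costs are invented round numbers, not figures from the 3DMark test or from any specific driver.

```python
# Back-of-the-envelope illustration of draw call throughput. The per-call CPU
# costs below are invented round numbers, not measurements from 3DMark.

def max_draw_calls_per_frame(cpu_budget_ms, cost_per_call_us):
    """How many draw calls fit in the CPU time budget for one frame."""
    return int(cpu_budget_ms * 1000 / cost_per_call_us)

frame_budget_ms = 1000 / 60          # ~16.7 ms per frame at 60 FPS

# Hypothetical per-draw-call CPU costs for a "thick" vs a "thin" API.
for api, cost_us in (("high-overhead API", 40.0), ("low-overhead API", 2.5)):
    calls = max_draw_calls_per_frame(frame_budget_ms, cost_us)
    print(f"{api:18s}: ~{calls:,} draw calls per frame at 60 FPS")
```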

Continue reading our overview of the new 3DMark API Overhead Feature Test with early DX12 Performance Results!!

NVIDIA Quadro M6000 Announced

Subject: Graphics Cards | March 23, 2015 - 07:30 AM |
Tagged: quadro, nvidia, m6000, gm200

Alongside the Titan X, NVIDIA has announced the Quadro M6000. In terms of hardware, they are basically the same component: 12 GB of GDDR5 on a 384-bit memory bus, 3072 CUDA cores, and a reduction in double precision performance to 1/32nd of its single precision. The memory, but not the cache, is capable of ECC (error-correction) for enterprises who do not want a stray photon to mess up their computation. That might be the only hardware difference between it and the Titan X.


Compared to other Quadro cards, it loses some double precision performance as mentioned earlier, but it will be an upgrade in single precision (FP32). The add-in board connects to the power supply with just a single eight-pin plug. Technically, with its 250W TDP, it is slightly over the rating for one eight-pin PCIe connector, but NVIDIA told Anandtech that they're confident that it won't matter for the card's intended systems.
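For reference, the arithmetic behind that "slightly over": the PCIe specifications rate an eight-pin connector at 150W and the x16 slot at 75W, so the nominal budget is 225W against the card's 250W TDP. A trivial sketch of that comparison:

```python
# Quick power-budget arithmetic for the M6000's single 8-pin design.
# 150 W per 8-pin connector and 75 W from the slot come from the PCIe
# specifications; the 250 W TDP is the figure quoted above.

PCIE_SLOT_W = 75       # power available from the x16 slot
EIGHT_PIN_W = 150      # rated power of one 8-pin PCIe connector
CARD_TDP_W = 250       # Quadro M6000 TDP

budget = PCIE_SLOT_W + EIGHT_PIN_W
print(f"Nominal budget: {budget} W, TDP: {CARD_TDP_W} W, "
      f"overage: {CARD_TDP_W - budget} W")
```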

That is probably true, but I wouldn't put it past someone to do something spiteful given recent events.

The lack of double precision performance (IEEE 754 FP64) could be disappointing for some. While NVIDIA would definitely know their own market better than I do, I was under the impression that a common workstation system for GPU compute was a Quadro driving a few Teslas (such as two of these). It would seem weird for a company to have such a high-end GPU be paired with Teslas that have such a significant difference in FP64 compute. I wonder what this means for the Tesla line, and whether we will see a variant of Maxwell with a large boost in 64-bit performance, or if that line will be in an awkward place until Pascal.

Or maybe not? Maybe NVIDIA is planning to launch products based on an unannounced, FP64-focused architecture? The aim could be to let the Quadro handle the heavy FP32 calculations, while the customer could opt to add co-processors according to their double precision needs. It's an interesting thought as I sit here at my computer musing to myself, but then I immediately wonder why they did not announce it at GTC if that is the case. If that is the case, and honestly I doubt it because I'm just typing unfiltered thoughts here, you would think they would kind-of need to be sold together. Or maybe not. I don't know.

Pricing and availability are not currently known, except that the card is coming “soon”.

Source: Anandtech