Author:
Subject: Editorial
Manufacturer: ARM

Aggressively Pursuing New Markets

ARM has had a pretty fascinating history, but for most of its time on this Earth it has not been a very public facing company. After the release of the iPhone and ARM’s dominance in the mobile market, they decided to push their PR efforts up a few notches. Now we are finally able to see some of the inner workings of a company that was once a little-known, low-power CPU designer that licensed cores out to third parties.

arm_road_01.JPG

The company was not always as aggressive as what we are seeing now. The mobile space was for a long time dominated by multiple architectures that have all since faded away. ARM held steady with design improvements and good customer relations that ensured they would continue into the future. After the release of the original iPhone, the world changed. Happily for us, ARM changed as well. In previous years ARM would announce products, but they would be at least three years away and few people took notice of what the company was up to. I originally started paying attention to ARM because I thought their cores might have the ability to power mobile gaming and perhaps be integrated into future consoles, providing a unified architecture that these providers could lean on. This was back when the 3DS and PSP were still selling millions of units.

This of course never came to pass as I had expected, but at least ARM did make it into the Nintendo Switch. ARM worked hard to quickly get faster, more efficient parts out the door. They also went on a buying spree and acquired several graphics startups that would eventually contribute to the now quite formidable Mali GPU family of products. Today we have an extensive lineup of parts that can be bundled into a tremendous number of configurations. ARM has a virtual monopoly in the cellphone market because they have been willing to work with anyone who wants to license their designs, technologies, and architectures. This is actually a relatively healthy “monopoly” because the partners do the work to mix and match features to provide unique products to the marketplace. Architectural licensees like Apple, Qualcomm, and Samsung all differentiate their products as well and provide direct competition to the ARM-designed cores that are licensed to other players.

arm_road_02.JPG

Today we are seeing a new direction from ARM, one the company has never officially explored before. We have been given a roadmap of the next two generations of products from the company, intended to compete not only in the cellphone market but also in the laptop market. ARM has thrown down the gauntlet, and their sights are set on Intel and AMD. Not only is ARM showing us the codenames for these products, but also their relative performance.

Click here to read the entire ARM Roadmap Editorial!

Author:
Subject: Editorial
Manufacturer: Intel

2018: A banner year

Intel has a long history of generating tremendous amounts of revenue and income. This latest quarter is no exception. Intel has announced record Q1 revenues for this year and looks to continue that trend throughout 2018. AMD released their very positive results yesterday, but their finances are dwarfed by what Intel has brought to market. The company had revenue of $16.1 billion with a net income of $4.5 billion. Compared to AMD’s $1.625B revenue and $81M net income, we see that the massive gulf between these two companies will not be bridged anytime soon, whether by Intel falling or AMD gaining.

intc_logo.png

Intel has put its money to good use with a wide variety of products that stretch from the PC market to datacenters. While their low power and ultra-mobile strategies have been scaled back, and cancelled in some cases, their core markets are unaffected and they continue to make money hand over fist. The company has always been fundamentally sound in terms of finances and does not typically spend money recklessly. They continue to feature market-leading IPC with their product lines and can address multiple markets with the x86 products they have.

Click here to continue reading about Intel's Q1 results!

Author:
Subject: Editorial
Manufacturer: PCPerspective

The Experiment...

Introduction

OK, call me crazy (you wouldn’t be the first), but this is something I’ve wanted to try for years, and I bet I’m not the only one. Each time a new power supply comes across the lab bench with ever increasing output capacities, I find myself thinking, “I could weld with this beast.” Well, the AX1600i pushed me over the edge and I decided to give it a go; what could possibly go wrong?

2-Side-view.jpg

133.3 Amps on the +12V outputs!

The Corsair AX1600i digital power supply can deliver up to 133 Amps on the combined +12V rails, more than enough amperage for welding. There are dozens of PC power supplies on the market today that can deliver 100 Amps or more on the +12V output, but the AX1600i has another feature that might help make this project a success: the ability to manually set current limits on the +12V outputs. Because the AX1600i is a digital power supply whose +12V current limits can be adjusted via the Corsair Link data acquisition and control software, I might be able to select a desired amperage to weld with. Yes!
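As a quick sanity check on that number, the math is simply the unit's rated wattage spread across the 12 V rail. A rough sketch, assuming essentially all of the 1600 W rating is available on +12V (typical for modern high-end units):

# Back-of-the-envelope math behind the 133.3 A figure (illustrative only)
rated_watts = 1600            # AX1600i rated output
rail_volts = 12.0             # nominal +12V rail
max_amps = rated_watts / rail_volts
print(f"{max_amps:.1f} A available on the combined +12V rails")   # ~133.3 A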

Just because the AX1600i “can” deliver 133A doesn’t mean I want that much current available for welding. I typically only use that much power when I’m welding heavy steel pieces using ¼” rod. For this experiment I would like to be able to start out at a much lower amperage, and I’m hoping the Corsair Link software will provide that ability.

3-Setup.jpg

Stick Welding with a PC Power Supply!

My first thought was to try to adapt a TIG (Tungsten Inert Gas) welder for use with the AX1600i. I figured using a TIG torch (a Tungsten electrode shrouded with Argon gas instead of a flux coated rod) might give better control, especially at the lower voltages and currents where I plan to start testing. TIG welders are commonly used to weld small stainless steel parts and sheet metal. But then I remembered that a TIG welder's power supply has a high voltage pulse built in to initiate the plasma arc. Without that extra kick-start, it might be difficult to strike an arc without damaging the fine pointed tip of the Tungsten electrode. So I decided to just go with a conventional stick welding setup. The fact that PC power supplies put out DC is an advantage over the more common AC buzz-box arc welders, promising better arc stability and higher quality welds.

Modifications

Obviously, trying to convert a PC power supply into an arc welding power supply will require a few modifications. Here is a quick list of the main challenges I think we will have to overcome.

•    Higher capacity fan for better cooling
•    Terminate all the PSU’s +12V cables into welding leads
•    Disable the Short Circuit protection feature
•    Implement selecting the desired current output
•    Strike and maintain a stable arc with only 12 volts

Please continue reading our write up about Arc Welding with a PC Power Supply!!!

Author:
Subject: Editorial
Manufacturer: AMD

Much Ado About Nothing?

We live in a world seemingly fueled by explosive headlines. This morning we were welcomed with a proclamation that AMD has 13 newly discovered security flaws in their latest Ryzen/Zen chips that could potentially be showstoppers for the architecture, and for AMD’s hopes of regaining lost marketshare in the mobile, desktop, and enterprise markets. CTS-Labs released a report along with a website and videos explaining what these vulnerabilities are and how they can affect AMD and its processors.

amd_01.jpg

This is all of course very scary. It was not all that long ago that we found out about the Spectre/Meltdown threats, which seemingly are more dangerous to Intel than to its competitor. Spectre/Meltdown can be exploited by code that will compromise a machine without having elevated privileges. Parts of Spectre/Meltdown were fixed by firmware updates and OS changes, which had either no effect on performance or incurred upwards of 20% to 30% performance hits in certain workloads requiring heavy I/O usage. Intel is planning a hardware fix for these vulnerabilities later this year with new products. Current products have firmware updates available to them and Microsoft has already implemented a fix in software. Older CPUs and platforms (back to at least 4th Generation Core) have fixes as well, but they were rolled out a bit more slowly. So news of a new exploit affecting the latest AMD processors is something that causes concern among users, CTOs, and investors alike.

CTS-Labs have detailed four major vulnerabilities, named them, and provided fun little symbols for each: Ryzenfall, Fallout, Masterkey, and Chimera. The first three affect the CPU directly. Unlike Spectre/Meltdown, these vulnerabilities require elevated administrative privileges to be run. These are secondary exploits that require either physical access to the machine or logging on with enhanced admin privileges. Chimera affects the chipset, which was designed by ASMedia, and is installed via a signed driver. In a secured system where the attacker has no administrative access, these exploits are no threat. If a system has been previously compromised or physically accessed (e.g. forcing a firmware update via USB flashback functionality), then these vulnerabilities are there to be taken advantage of.

amd_02.jpg

AMD utilizes a “Secure Processor” in every CPU it makes. This is simply a licensed ARM Cortex-A5 that runs the internal secure OS/firmware, the same type of core that underpins ARM’s “TrustZone” security technology. In theory someone could compromise a server, install these exploits, and then remove the primary exploit so that on the surface it looks like the machine is operating as usual. The attackers would still have low level access to the machine in question, but it would be much harder to root them out.

Continue reading our thoughts on the AMD security concerns.

Author:
Subject: Displays
Manufacturer: Acer

When PC monitors made the mainstream transition to widescreen aspect ratios in the mid-2000s, many manufacturers opted for resolutions at a 16:10 ratio. My first widescreen displays were a pair of Dell monitors with a 1920x1200 resolution and, as time and technology marched forward, I moved to larger 2560x1600 monitors.

I grew to rely on and appreciate the extra vertical resolution that 16:10 displays offer, but as the production and development of "widescreen" PC monitors matured, it naturally began to merge with the television industry, which had long since settled on a 16:9 aspect ratio. This led to the introduction of PC displays with native resolutions of 1920x1080 and 2560x1440, keeping things simple for activities such as media playback but robbing consumers of pixels in terms of vertical resolution.

I was well-accustomed to my 16:10 monitors when the 16:9 aspect ratio took over the market, and while I initially thought that the 120 or 160 missing rows of pixels wouldn't be missed, I was unfortunately mistaken. Those seemingly insignificant pixels turned out to make a noticeable difference in terms of on-screen productivity real estate, and my 1080p and 1440p displays have always felt cramped as a result.
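For those keeping score, the row counts come straight from the resolution arithmetic; a quick illustrative calculation:

# Vertical pixels lost moving from 16:10 panels to their 16:9 counterparts (illustrative)
pairs = [((1920, 1200), (1920, 1080)),
         ((2560, 1600), (2560, 1440))]
for (w10, h10), (w9, h9) in pairs:
    lost = h10 - h9
    print(f"{w10}x{h10} -> {w9}x{h9}: {lost} rows lost ({lost / h10:.0%} of the height)")

In both cases the 16:9 panel gives up ten percent of the working height, which is exactly the kind of loss you notice in documents, timelines, and spreadsheets.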

I was therefore sad to see that the relatively new ultrawide monitor market continued the trend of limited vertical resolutions. Most ultrawides feature a 21:9 aspect ratio with resolutions of 2560x1080 or 3440x1440. While this gives users extra resolution on the sides, it maintains the same limited height options of those ubiquitous 1080p and 1440p displays. The ultrawide form factor is fantastic for movies and games, but while some find them perfectly acceptable for productivity, I still felt cramped.

Thankfully, a new breed of ultrawide monitors is here to save the day. In the second half of 2017, display manufacturers such as Dell, Acer, and LG launched 38-inch ultrawide monitors with a 3840x1600 resolution. Just like how the early ultrawides "stretched" a 1080p or 1440p monitor, the 38-inch versions do the same for my beloved 2560x1600 displays.

The Acer XR382CQK

I've had the opportunity to test one of these new "taller" displays thanks to a review loan from Acer of the XR382CQK, a curved 37.5-inch behemoth. It shares the same glorious 3840x1600 resolution as others in its class, but it also offers some unique features, including a 75Hz refresh rate, USB-C input, and AMD FreeSync support.

XR382CQK-desk.jpg

Based on my time with the XR382CQK, my hopes for those extra 160 rows of resolution were fulfilled. The height of the display area felt great for tasks like video editing in Premiere and referencing multiple side-by-side documents and websites, and the gaming experience was just as satisfying. And with its 38-inch size, the display is quite usable at 100 percent scaling.

windows-3840x1600.jpg

There's also an unexpected benefit for video content that I hadn't originally considered. I was so focused on regaining that missing vertical resolution that I initially failed to appreciate the jump in horizontal resolution from 3440px to 3840px. This is the same horizontal resolution as the consumer UHD standard, which means that 4K movies in a 21:9 or similar aspect ratio will be viewable in their full size with a 1:1 pixel ratio.
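A quick back-of-the-envelope check on that claim, assuming a typical 2.39:1 or 2.40:1 "scope" presentation for those films:

# How wide cinematic aspect ratios map onto a 3840x1600 panel (illustrative)
panel_w, panel_h = 3840, 1600
for ratio in (2.40, 2.39, 16 / 9):
    content_h = panel_w / ratio          # height of content shown at full panel width
    note = "fits 1:1" if content_h <= panel_h else "needs scaling or letterboxing"
    print(f"{ratio:.2f}:1 content at {panel_w}px wide is {content_h:.0f}px tall ({note})")

A 2.40:1 film at UHD width works out to exactly 1600 rows, so it fills the panel edge to edge, while 2.39:1 material is only a handful of rows over and 16:9 content still requires scaling or pillarboxing.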

Continue reading our look at 38-inch 3840x1600 ultrawide monitors!

Author:
Subject: Editorial
Manufacturer: AMD

Beating AMD and Analyst Estimates

January 30th has rolled around and AMD has released their Q4 2017 results. The results were positive and somewhat unexpected. I had been curious how the company fared and was waiting for these results to compare them to the relatively strong quarter that Intel experienced. At the Q3 earnings call AMD was not entirely bullish about how Q4 would go. They knew that it was going to be a down quarter as compared to an unexpectedly strong third quarter, but they were unsure how that was going to pan out. The primary reason Q4 was not going to be as strong was the expected dip in royalty income from their Semi-Custom Group. Q4 has traditionally been weak for that group, as all of the buildup for the holiday season comes from Q1 and Q2 rampings of the physical products that are integrated into consoles.

amd_logo_2.png

The results exceeded AMD’s and analysts’ expectations. Expectations were in the $1.39B range, but actual revenue came in at a relatively strong $1.48B. Not only was the quarter stronger than expected, but AMD was able to pull out another positive net income of $61M. It has been a while since AMD was able to post back-to-back profitable quarters. This allowed AMD to have a net positive year to the tune of $43M, where in 2016 AMD had a loss of $497M. 2017 as a whole brought in $1.06B more revenue than 2016. AMD has been historically lean in terms of expenses for the past few years, and a massive boost in revenue has allowed them to invest in R&D as well as more aggressively ramp up their money making products to compete more adequately with Intel, who is having their own set of issues right now with manufacturing and security.

Click here to continue reading about AMD's Q4 2017 Earnings analysis!

Author:
Subject: Editorial
Manufacturer: Intel

Another Strong Quarter for the Giant

This afternoon Intel released their Q4 2017 financial results. The quarter was higher in revenue than analysts expected. The company made $17.1B US in revenue and recorded non-GAAP earnings of $1.08 a share.  On the surface it looks like Intel had another good quarter, one that was expected by the company and others alike. Underneath the surface, these results show a few more interesting things about the company as well as the industry it exists in.

Intel-Swimming-in-Money.jpg

We have been constantly hearing about how the PC market is weak and how it will start to negatively affect those companies whose primary products go into these machines. Intel did see a 2% drop in revenue year on year from their Client Computing Group, but it certainly did not look to be a collapse. We can also speculate that part of the drop is from a much more competitive AMD and their strong performing Ryzen processors. These indications point to the PC market still being pretty stable and robust, even though it isn't growing at the rate it once did.

The Data Center Group was quite the opposite. It grew around 20% over the same timespan. Intel did not provide more detail, but it seems that datacenters and cloud computing are still growing at a tremendous rate. With the proliferation of low power devices yet increased computing needs, data centers are continuing to expand and purchase the latest and greatest CPUs from Intel. So far AMD's EPYC has not been rolled out aggressively, but 2H 2018 should shed a lot more light on where this part of the market is going.

Click to continue reading about Intel's Q4 2017 earnings!

Author:
Subject: Editorial
Manufacturer: Various

Quirks, Savings, and Conclusions

Welcome back to the third and final chapter in our recent cord cutting saga, in which the crew here at the PC Perspective office take a fresh look at dumping traditional cable and satellite sources for online and over-the-air content. We previously planned our cord cutting adventure with a look at the devices, software, and services that will replace our cable and satellite subscriptions, and then put that plan to action by deploying an NVIDIA SHIELD TV, Plex, and an HDTV tuner with antenna.

shield-remote-featured.jpg

Now, several weeks into this experiment, we wanted to take a step back to evaluate how the process went in practice, including a look at some of the challenges we failed to initially anticipate, projections of the increased Internet bandwidth usage that accompanies cord cutting (especially important for the many of you with home broadband usage caps), and finally a calculation of the initial and ongoing costs associated with cord cutting in order to determine if this whole process actually saves us any money.
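As a preview of the bandwidth side of that evaluation, the underlying math is simple; here is a rough sketch with assumed bitrates (our own illustrative numbers, not measured figures from the article):

# Rough streaming data usage estimate (illustrative bitrates, not measurements)
hours_per_day = 4
for label, mbps in (("HD stream (~5 Mbps)", 5), ("4K stream (~25 Mbps)", 25)):
    gb_per_month = mbps / 8 * 3600 * hours_per_day * 30 / 1000   # Mb/s -> GB over a month
    print(f"{label}: ~{gb_per_month:.0f} GB per month at {hours_per_day} hours/day")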

Read on for our debriefing of PC Perspective's 2017 cord cutting guide!

Author:
Subject: Editorial
Manufacturer:

Ultimate Cord Cutting Guide - Part 2: Installation & Configuration

We're back with Part 2 of our cord cutting series, documenting our experience with dumping traditional cable and satellite providers in exchange for cheaper and more flexible online and over-the-air content. In Part 1 we looked at the devices that could serve as our cord-cutting hub, the types of subscription content that would be available, and the options for free OTA and online media.

In the end, we selected the NVIDIA SHIELD as our central media device due to its power, capabilities, and flexibility. Now in Part 2 we'll walk through setting up the SHIELD, adding our channels and services, configuring Plex, and more!

nvidia-shield-featured2.jpg

Read on for Part 2 of our cord cutting experience!

Author:
Manufacturer: Intel

The Expected Unexpected

Last night we received word that Raja had resigned from AMD, during the sabbatical he began after the launch of Vega.  The initial statement had been that Raja would come back to resume his position at AMD in a December/January timeframe.  During this time there was some doubt as to whether Raja would in fact come back to AMD, as “sabbaticals” in the tech world often lead the individual to take stock of their situation and move on to what they consider to be greener pastures.

raja_ryan.JPG

Raja has dropped by the PCPer offices in the past.

Initially it was thought that Raja would take the time off and then eventually jump to another company and tackle the issues there.  This behavior is quite common in Silicon Valley and Raja is no stranger to it.  Raja cut his teeth on 3D graphics at S3, but in 2001 he moved to ATI.  While there he worked on a variety of programs including the original Radeon and the industry-changing Radeon 9700 series, finishing up with the strong HD 4000 series of parts.  During this time ATI was acquired by AMD and he became one of the top graphics gurus at that company.  In 2009 he quit AMD and moved on to Apple.  He was Director of Graphics Architecture at Apple, but little is known about what he actually did there.  During that time Apple utilized AMD GPUs and licensed Imagination Technologies graphics technology.  Apple could have been working on developing their own architecture at this point, which has recently shown up in the latest iPhone products.

In 2013 Raja rejoined AMD and became a corporate VP of Visual Computing, but in 2015 he was promoted to lead the Radeon Technologies Group after Lisa Su became CEO of the company. While there Raja worked to get AMD back on an even footing under pretty strained conditions. AMD had not had the greatest of years and had seen their primary moneymakers start taking on water.  AMD had competitive graphics for the most part, and the Radeon technology integrated into AMD’s APUs truly was class leading.  On the discrete side AMD was able to compare favorably to NVIDIA with the HD 7000 and later R9 200 series of cards.  After NVIDIA released their Maxwell based chips, AMD had a hard time keeping up.  The general consensus here is that the RTG group saw its headcount decreased by company-wide cuts as well as a decrease in R&D funds.

Continue reading about Raja Koduri joining Intel...

Providers and Devices

"Cutting the Cord," the process of ditching traditional cable and satellite content providers for cheaper online-based services, is nothing new. For years, consumers have cancelled their cable subscriptions (or declined to even subscribe in the first place), opting instead to get their entertainment from companies like Netflix, Hulu, and YouTube.

cord-cutting.jpg

But the recent introduction of online streaming TV services like Sling TV, new technologies like HDR, and the slow online adoption of live local channels has made the idea of cord cutting more complicated. While cord cutters who are happy with just Netflix and YouTube need not worry, what are the solutions for those who don't like the idea of high cost cable subscriptions but also want to preserve access to things like local channels and the latest 4K HDR content?

This article is the first in a three-part series that will look at this "high-end" cord cutting scenario. We'll be taking a look at the options for online streaming TV, access to local "OTA" (over the air) channels, and the devices that can handle it all, including DVR support, 4K output, and HDR compliance.

There are two approaches that you can take when considering the cord cutting process. The first is to focus on capabilities: Do you want 4K? HDR? Lossless surround sound audio? Voice search? Gaming?

The second approach is to focus on content: Do you want live TV or à la carte downloads? Can you live without ESPN, or must it and your other favorite networks still be available? Are you heavily invested in iTunes content? Perhaps most importantly for those concerned with the "Spousal Acceptance Factor" (SAF), do you want the majority of your content contained in a single app, which can prevent you and your family members from having to jump between apps or devices to find what you want?

While most people on the cord cutting path will consider both approaches to a certain degree, it's easier to focus on the one that's most important to you, as that will make other choices involving devices and content easier. Of course, there are those of us out there that are open to purchasing and using multiple devices and content sources at once, giving us everything at the expense of increased complexity. But most cord cutters, especially those with families, will want to pursue a setup based around a single device that accommodates most, if not all, of their needs. And that's exactly what we set out to find.

Read on for our overview and experience with cutting the cord in 2017.

Introduction, How PCM Works, Reading, Writing, and Tweaks

I’ve seen a bit of flawed logic floating around related to discussions about 3D XPoint technology. Some are directly comparing the cost per die to NAND flash (you can’t - 3D XPoint likely has fewer fab steps than NAND - especially when compared with 3D NAND). Others are repeating a bunch of terminology and element names without taking the time to actually explain how it works, and far too many folks out there can't even pronounce it correctly (it's spoken 'cross-point'). My plan is to address as much of the confusion as I can with this article, and I hope you walk away understanding how XPoint and its underlying technologies (most likely) work. While we do not have absolute confirmation of the precise material compositions, there is a significant amount of evidence pointing to one particular set of technologies. With Optane Memory now out in the wild and purchasable by folks wielding electron microscopes and mass spectrometers, I have seen enough additional information come across to assume XPoint is, in fact, PCM based.

XPoint.png

XPoint memory. Note the shape of the cell/selector structure. This will be significant later.

While we were initially told at the XPoint announcement event Q&A that the technology was not phase change based, there is overwhelming evidence to the contrary, and it is likely that Intel did not want to let the cat out of the bag too early. The funny thing about that is that both Intel and Micron were briefing on PCM-based memory developments five years earlier, and nearly everything about those briefings lines up perfectly with what appears to have ended up in the XPoint that we have today.

comparison.png

Some die-level performance characteristics of various memory types. source

The above figures were sourced from a 2011 paper and may be a bit dated, but they do a good job putting some actual numbers with the die-level performance of the various solid state memory technologies. We can also see where the ~1000x speed and ~1000x endurance comparisons with XPoint to NAND Flash came from. Now, of course, those performance characteristics do not directly translate to the performance of a complete SSD package containing those dies. Controller overhead and management must take their respective cuts, as is shown with the performance of the first generation XPoint SSD we saw come out of Intel:

gap.png

The ‘bridging the gap’ Latency Percentile graph from our Intel SSD DC P4800X review.
(The P4800X comes in at 10us above).
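To put the die-versus-drive distinction into rough numbers, here is an order-of-magnitude sketch. The split between controller and interface/software overhead is my own assumption for illustration, not a figure from the paper or from Intel; only the ~10us drive-level result comes from our own testing above:

# Why a ~100 ns class memory die can end up in a ~10 us class SSD (illustrative split)
die_read_ns = 100          # assumed order-of-magnitude PCM die read latency
controller_ns = 3000       # assumed controller / ECC / metadata overhead
interface_sw_ns = 7000     # assumed NVMe transfer + driver/OS stack overhead
total_us = (die_read_ns + controller_ns + interface_sw_ns) / 1000
print(f"~{total_us:.0f} us at the application level, dominated by everything around the die")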

There have been a few very vocal folks out there chanting 'not good enough', without the basic understanding that the first publicly available iteration of a new technology never represents its ultimate performance capabilities. It took NAND flash decades to make it into usable SSDs, and another decade before climbing to the performance levels we enjoy today. Time will tell if this holds true for XPoint, but given Micron's demos and our own observed performance of Intel's P4800X and Optane Memory SSDs, I'd argue that it is most certainly off to a good start!

XPoint Die.jpg

A 3D XPoint die, submitted for your viewing pleasure (click for larger version).

You want to know how this stuff works, right? Read on to find out!

Author:
Subject: Editorial
Manufacturer: AMD

Zen vs. 40 Years of CPU Development

Zen is nearly upon us.  AMD is releasing its next generation CPU architecture to the world this week and we saw CPU demonstrations and upcoming AM4 motherboards at CES in early January.  We have been shown tantalizing glimpses of the performance and capabilities of the “Ryzen” products that will presumably fill the desktop markets from $150 to $499.  I have yet to be briefed on the product stack that AMD will be offering, but we know enough to start thinking about how positioning and placement will be addressed by these new products.

zen_01.jpg

To get a better understanding of how Ryzen will stack up, we should probably take a look back at what AMD has accomplished in the past and how Intel has responded to some of the stronger products.  AMD has been in business for 47 years now and has been a major player in semiconductors for most of that time.  It really has only been since the 90s, when AMD started to battle Intel head to head, that people have become passionate about the company and their products.

The industry is a complex and ever-shifting one.  AMD and Intel have been two stalwarts over the years.  Even though AMD has had more than a few challenging years over the past decade, it still moves forward and expects to compete at the highest level with its much larger and better funded competitor.  2017 could very well be a breakout year for the company with a return to solid profitability in both CPU and GPU markets.  I am not the only one who thinks this considering that AMD shares that traded around the $2 mark ten months ago are now sitting around $14.

 

AMD Through 1996

AMD became a force in the CPU industry due to IBM’s requirement to have a second source for its PC business.  Intel originally entered into a cross licensing agreement with AMD to allow it to produce x86 chips based on Intel designs.  AMD eventually started to produce their own versions of these parts and became a favorite in the PC clone market.  Eventually Intel tightened down on this agreement and then cancelled it, but through near endless litigation AMD ended up with an x86 license deal with Intel.

AMD produced their own Am286 chip, which was the first real break from the second sourcing agreement with Intel.  Intel balked at sharing their 386 design with AMD and eventually forced the company to develop its own clean room version.  The Am386 was released in the early 90s, well after Intel had been producing those chips for years. AMD then developed their own Am486, which later morphed into the Am5x86.  The company made some good inroads with these speedy parts and typically clocked them faster than their Intel counterparts (eg. the Am486 at 40 MHz and 80 MHz vs. the Intel 486 DX33 and DX66).  AMD priced these parts lower so users could achieve better performance per dollar using the same chipsets and motherboards.

zen_02.jpg

Intel released their first Pentium chips in 1993.  The initial version was hot and featured the infamous FDIV bug.  AMD made some inroads against these parts by introducing the faster Am486 and Am5x86 parts that would achieve clockspeeds from 133 MHz to 150 MHz at the very top end.  The 150 MHz part was very comparable in overall performance to the Pentium 75 MHz chip and we saw the introduction of the dreaded “P-rating” on processors.

There is no denying that Intel continued their dominance throughout this time by being the gold standard in x86 manufacturing and design.  AMD slowly chipped away at its larger rival and continued to profit off of the lucrative x86 market.  Jerry Sanders III set the bar high for where he wanted the company to go, and he started down a much more aggressive path than many expected the company to take.

Click here to read the rest of the AMD processor editorial!

Author:
Subject: Editorial
Manufacturer: NVIDIA

NVIDIA Today?

It always feels a little odd covering NVIDIA’s quarterly earnings due to how they present their financial calendar.  No, we are not reporting from the future.  Yes, it can be confusing when comparing results and getting your dates mixed up.  Regardless of the dates, NVIDIA did exceptionally well in a quarter that is typically its second weakest after Q1.

NVIDIA reported revenue of $1.43 billion.  This is a jump from an already strong Q1 where they took in $1.30 billion.  Compare this to the $1.027 billion of its competitor AMD, which provides CPUs as well as GPUs.  NVIDIA sold a lot of GPUs as well as other products.  Their primary money makers were the consumer space GPUs and the professional and compute markets, on which they have a virtual stranglehold at the moment.  The company’s GAAP net income is a very respectable $253 million.

results.png

The release of the latest Pascal based GPUs was the primary mover for the gains this latest quarter.  AMD has had a hard time competing with NVIDIA for marketshare.  The older Maxwell based chips performed well against the entire line of AMD offerings and typically did so with better power and heat characteristics.  Even though the GTX 970 was somewhat limited in its memory configuration as compared to the AMD products (3.5 GB + .5 GB vs. a full 4 GB implementation), it was a top seller in its class.  The same could be said for the products up and down the stack.

Pascal was released at the end of May, but the company had been shipping chips to its partners as well as creating the “Founder’s Edition” models to its exacting specifications.  These were strong sellers throughout the end of May until the end of the quarter.  NVIDIA recently unveiled their latest Pascal based Quadro cards, but we do not know how much of an impact those have had on this quarter.  NVIDIA has also been shipping, in very limited quantities, the Tesla P100 based units to select customers and outfits.

Click to read more about NVIDIA's latest quarterly results!

Author:
Subject: Editorial
Manufacturer: ARM

A Watershed Moment in Mobile

This previous May I was invited to Austin to be briefed on the latest core innovations from ARM and their partners.  We were introduced to new CPU and GPU cores, as well as the surrounding technologies that provide the basis of a modern SOC in the ARM family.  We also were treated to more information about the process technologies that ARM would embrace with their Artisan and POP programs.  ARM is certainly far more aggressive now in their designs and partnerships than they have been in the past, or at least they are more willing to openly talk about them to the press.

arm_white.png

The big process news that ARM was able to share at this time was the design of 10nm parts using an upcoming TSMC process node.  This was fairly big news as TSMC was still introducing parts on their latest 16nm FF+ line.  NVIDIA had not even released their first 16FF+ parts to the world in early May.  Apple had dual sourced their 14/16 nm parts from Samsung and TSMC respectively, but these were based on LPE and FF lines (early nodes not yet optimized to LPP/FF+).  So the news that TSMC would have a working 10nm process in 2017 was important to many people.  2016 might be a year with some good performance and efficiency jumps, but it seems that 2017 would provide another big leap forward after years of seeming stagnation of pure play foundry technology at 28nm.

Yesterday we received a new announcement from ARM that shows an amazing shift in thought and industry inertia.  ARM is partnering with Intel to introduce select products on Intel’s upcoming 10nm foundry process.  This news is both surprising and expected.  It is surprising in that it happened as quickly as it did.  It is expected as Intel is facing a very different world than it had planned for 10 years ago.  We could argue that it is much different than they planned for 5 years ago.

Intel is the undisputed leader in process technologies and foundry practices.  They are the gold standard of developing new, cutting edge process nodes and implementing them on a vast scale.  This has served them well through the years as they could provide product to their customers seemingly on demand.  It also allowed them a leg up in technology when their designs may not have fit what the industry wanted or needed (Pentium 4, etc.).  It also allowed them to potentially compete in the mobile market with designs that were not entirely suited for ultra-low power.  x86 is a modern processor technology with decades of development behind it, but that development focused mainly on performance at higher TDP ranges.

intel.png

This past year Intel signaled their intent to move out of the sub 5 watt market and cede it to ARM and their partners.  Intel’s ultra mobile offerings just did not make an impact in an area that they were expected to.  For all of Intel’s advances in process technology, the base ARM architecture is just better suited to these power envelopes.  Instead of throwing good money after bad (in the form of development time, wafer starts, rebates) Intel has stepped away from this market.

This leaves Intel with a problem.  What to do with extra production capacity?  Running a fab is a very expensive endeavor.  If these megafabs are not producing chips 24/7, then the company is losing money.  This past year Intel has seen their fair share of layoffs and slowing down production/conversion of fabs.  The money spent on developing new, cutting edge process technologies cannot stop for the company if they want to keep their dominant position in the CPU industry.  Some years back they opened up their process products to select 3rd party companies to help fill in the gaps of production.  Right now Intel has far more production line space than they need for the current market demands.  Yes, there were delays in their latest Skylake based processors, but those were solved and Intel is full steam ahead.  Unfortunately, they do not seem to be keeping their fabs utilized at the level needed or desired.  The only real option seems to be opening up some fab space to more potential customers in a market that they are no longer competing directly in.

The Intel Custom Foundry Group is working with ARM to provide access to their 10nm HPM process node.  Initial production of these latest generation designs will commence in Q1 2017 with full scale production in Q4 2017.  We do not have exact information as to what cores will be used, but we can imagine that they will be Cortex-A73 and A53 parts in big.LITTLE designs.  Mali graphics will probably be the first to be offered on this advanced node as well due to the Artisan/POP program.  Initial customers have not been disclosed and we likely will not hear about them until early 2017.

This is a big step for Intel.  It is also a logical progression for them when we look over the changing market conditions of the past few years.  They were unable to adequately compete in the handheld/mobile market with their x86 designs, but they still wanted to profit off of this ever expanding area.  The logical way to monetize this market is to make the chips for those that are successfully competing here.  This will cut into Intel’s margins, but it should increase their overall revenue base if they are successful here.  There is no reason to believe that they won’t be.

Nehalem-Wafer_HR-crop-16x9-photo.jpg.rendition.intel_.web_.864.486.jpg

The last question we have is whether the 10nm HPM node will be identical to what Intel will use for their next generation “Cannonlake” products.  My best guess is that the foundry process will be slightly different and will not include some of the “secret sauce” that Intel will keep for themselves.  It will probably be a mobile focused process node that stresses efficiency rather than transistor switching speed.  I could be very wrong here, but I don’t believe that Intel will open up their process to everyone that comes to them hat in hand (AMD).

The partnership between ARM and Intel is a very interesting one that will benefit customers around the globe if it is handled correctly from both sides.  Intel has a “not invented here” culture that has both benefited it and caused it much grief.  Perhaps some flexibility on the foundry side will reap benefits of its own when dealing with very different designs than Intel is used to.  This is a titanic move from where Intel probably thought it would be when it first started to pursue the ultra-mobile market, but it is a move that shows the giant can still positively react to industry trends.

Manufacturer: NVIDIA

First, Some Background

 
TL;DR: NVIDIA's Rumored GP102

Based on two rumors, NVIDIA seems to be planning a new GPU, called GP102, that sits between GP100 and GP104. This changes how their product stack has flowed since Fermi and Kepler. GP102's performance, both single-precision and double-precision, will likely signal NVIDIA's product plans going forward.
  • GP100's ideal 1 : 2 : 4 FP64 : FP32 : FP16 ratio is inefficient for gaming
  • GP102 either extends GP104's gaming lead or bridges GP104 and GP100
  • If GP102 is a bigger GP104, the future is unclear for smaller GPGPU devs (unless GP100 can be significantly up-clocked for gaming)
  • If GP102 matches (or outperforms) GP100 in gaming, and has better than 1 : 32 double-precision performance, then GP100 would be the first time that NVIDIA has designed an enterprise-only, high-end GPU.
 

 

When GP100 was announced, Josh and I were discussing, internally, how it would make sense in the gaming industry. Recently, an article on WCCFTech cited anonymous sources, which should always be taken with a dash of salt, claiming that NVIDIA was planning a second chip, GP102, slotted between GP104 and GP100. As I was writing this editorial about it, relating it to our own speculation about the physics of Pascal, VideoCardz claims to have been contacted by the developers of AIDA64, seemingly on-the-record, also citing a GP102 design.

I will retell chunks of the rumor, but also add my opinion to it.

nvidia-titan-black-1.jpg

In the last few generations, each architecture had a flagship chip that was released in both gaming and professional SKUs. Neither audience had access to a chip that was larger than the other's largest of that generation. Clock rates and disabled portions varied by specific product, with gaming usually getting the more aggressive performance for slightly better benchmarks. Fermi had GF100/GF110, Kepler had GK110/GK210, and Maxwell had GM200. Each of these was available in Tesla, Quadro, and GeForce cards, especially Titans.

Maxwell was interesting, though. NVIDIA was unable to leave 28nm, which Kepler launched on, so they created a second architecture at that node. To increase performance without having access to more feature density, you need to make your designs bigger, more optimized, or more simple. GM200 was giant and optimized, but, to get the performance levels it achieved, also needed to be more simple. Something needed to go, and double-precision (FP64) performance was the big omission. NVIDIA was upfront about it at the Titan X launch, and told their GPU compute customers to keep purchasing Kepler if they valued FP64.
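To make those ratios concrete, here is a rough sketch of how the FP64 rate translates into theoretical throughput. The shader count and clock are round, assumed figures for illustration, not official specifications for any particular chip:

# Theoretical FP32/FP64 throughput at different double-precision ratios (illustrative)
fp32_cores = 3584            # assumed round figure for a large GPU
clock_ghz = 1.4              # assumed boost clock
fp32_tflops = fp32_cores * clock_ghz * 2 / 1000      # 2 FLOPs per FMA
for name, fp64_ratio in (("compute-oriented 1:2", 1 / 2), ("gaming-oriented 1:32", 1 / 32)):
    print(f"{name}: {fp32_tflops:.1f} TFLOPS FP32, {fp32_tflops * fp64_ratio:.2f} TFLOPS FP64")

The point is simply that full-rate FP64 hardware takes die area and power that contribute nothing to a gaming benchmark, which is why dropping it was an easy call for GM200.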

Fast-forward to Pascal.

Author:
Subject: Editorial, Mobile
Manufacturer: Samsung

Hardware Experience

Seeing Ryan transition from being a long-time Android user over to iOS late last year has had me thinking. While I've had hands on with flagship phones from many manufacturers since then, I haven't actually carried an Android device with me since the Nexus S (eventually, with the 4.0 Ice Cream Sandwich upgrade). Maybe it was time to go back in order to gain a more informed perspective of the mobile device market as it stands today.

IMG_4464.JPG

So that's exactly what I did. When we received our Samsung Galaxy S7 review unit (full review coming soon, I promise!), I decided to go ahead and put a real effort forth into using Android for an extended period of time.

Full disclosure, I am still carrying my iPhone with me since we received a T-Mobile locked unit, and my personal number is on Verizon. However, I have been using the S7 for everything but phone calls, and the occasional text message to people who only have my iPhone number.

Now one of the questions you might be asking is why I chose the Galaxy S7, of all devices, to make this transition with. Most Android aficionados would probably insist that I choose a Nexus device to get the best experience, the one that Google intends to provide when developing Android. While these people aren't wrong, I decided that I wanted to go with a more popular device as opposed to the more niche Nexus line.

Whether you like Samsung's approach or not, the fact is that they sell more Android devices than anyone else, and the Galaxy S7 will be their flagship offering for the next year or so.

Continue reading our editorial on switching from iOS to Android with the Samsung Galaxy S7!!

Author:
Subject: Editorial
Manufacturer: ARM

28HPCU: Cost Effective and Power Efficient

Have you ever been approached about something that, upon first hearing about it, just did not seem very exciting?  Then, upon digging into things, it became much more interesting?  This happened to me with this announcement.  At first blush, who really cares that ARM is partnering with UMC at 28 nm?  Well, once I was able to chat with the people at ARM, it turned out to be much more interesting than initially expected.

icon_arm.jpg

The new hotness in fabrication is the latest 14 nm and 16 nm processes from Samsung/GF and TSMC respectively.  It has been a good 4+ years since we last had a new process node that actually performed as expected.  The planar 22/20 nm products just were not entirely suitable for mass production.  Apple was one of the few to actually develop a part for TSMC’s 20 nm process that sold in the millions.  The main problem was a lack of power and speed scaling as compared to 28 nm processes.  Planar was a bad choice, but FinFET technologies could not be implemented in time to show up at that node from 3rd party manufacturers.

There is a problem with the latest process generations, though.  They are new, expensive, and production constrained.  Also, they may not be entirely appropriate for the applications that are being developed.  By comparison, 28 nm has several strengths.  These are mature processes with an excess of line space.  The major fabs are offering very competitive pricing structures for 28 nm as they see space being cleared up on the lines with higher end SOCs, GPUs, and assorted ASICs migrating to the new process nodes.

umc_01.png

TSMC has typically been on the forefront of R&D with advanced nodes.  UMC is not as aggressive with their development, but they tend to let others do some of the heavy lifting and then integrate the new nodes when it fits their pricing and business models.  TSMC is on their third generation of 28 nm.  UMC is on their second, but that generation encompasses many of the advanced features of TSMC’s 3rd generation so it is actually quite competitive.

Click here to continue reading about ARM, UMC, and the 28HPCU process!

Author:
Subject: Editorial
Manufacturer: AMD

Fighting for Relevance

AMD is still kicking.  While the results of this past year have been forgettable, they have overcome some significant hurdles and look like they are improving their position in terms of cutting costs while extracting as much revenue as possible.  There were plenty of ups and downs for this past quarter, but when compared to the rest of 2015 there were some solid steps forward here.

AMD-Logo.jpg

The company reported revenues of $958 million, which is down from $1.06 billion last quarter.  The company also recorded a $103 million loss, but that is down significantly from the $197 million loss the quarter before.  Q3 did have a $65 million write-down due to unsold inventory.  Though the company made far less in revenue, they also shored up their losses.  The company is still bleeding, but they have plenty of cash on hand to survive the next several quarters.  When we talk about non-GAAP figures, AMD reports a $79 million loss for this past quarter.

For the entire year AMD recorded $3.99 billion in revenue with a net loss of $660 million.  This is down from FY 2014 revenues of $5.51 billion and a net loss of $403 million.  AMD certainly is trending downwards year over year, but they are hoping to reverse that come 2H 2016.

amd-financial-analyst-day-2015-11-1024.jpg

Graphics continues to be solid for AMD as they increased their sales from last quarter, though they are down year on year.  Holiday sales were brisk, but with the high end Fury series being the only truly new card this season, its impact was not as great as the company would have seen from a new mid-range series like the recently introduced R9 380X.  The second half of 2016 will see the introduction of the Polaris based GPUs for both mobile and desktop applications.  Until then, AMD will continue to provide the current 28 nm lineup of GPUs to the market.  At this point we are under the assumption that AMD and NVIDIA are looking at the same timeframe for introducing their next generation parts due to process technology advances.  AMD already has working samples on Samsung’s/GLOBALFOUNDRIES’ 14nm LPP (Low Power Plus) process that they showed off at CES 2016.

Click here to continue reading about AMD's Q4 2015 and FY 2015 results!

Author:
Subject: Editorial
Manufacturer: Patreon
Tagged: video, patreon

Thank you for all you do!

Much of what I am going to say here is repeated from the description on our brand new Patreon support page, but I think a direct line to our readers is in order.

First, I think you may need a little back story. Ask anyone that has been doing online media in this field for any length of time and they will tell you that getting advertisers to sign on and support the production of "free" content has been getting more and more difficult. You'll see this proven out in the transition of several key personalities of our industry away from media into the companies they used to cover. And you'll see it in the absorption of some of our favorite media outlets, being purchased by larger entities with the promise of being able to continue doing what they have been doing. Or maybe you've seen it show up as more interstitial ads, road blocks, sponsored site sections, etc. 

At PC Perspective we've seen the struggle first hand, but I have done my best to keep as much of that influence as possible away from my team. We are not immune - several years ago we started doing site skins, something we didn't plan for initially. I do think I have done a better than average job keeping the lights on here though, so to speak. We have good sell through on our ad inventory and some of the best companies in our industry support the work we do. 

icon3.jpg

Some of the PC Perspective team at CES 2016

Let me be clear though - we aren't on the verge of going out of business. I am not asking for Patreon support to keep from having to fire anyone. We just want to maintain and grow our content library and capability, and it seemed like the audience that benefits from and enjoys that content might be the best place to start.

Some of you are likely asking yourselves if supporting PC Perspective is really necessary. After all, you can churn out a 400 word blog in no time! The truth is that high quality, technical content takes a lot of man hours, and those hours are expensive. Our problem is that, to advertisers, a page view is a page view; they don't really care how much time and effort went into creating the content on that page. If we spend 20 hours developing a way to evaluate variable refresh rate monitors with an oscilloscope, but put the results on a single page at pcper.com, we get the same amount of traffic as someone that just posts an hour's worth of gameplay experiences. Both are valuable to the community, but one costs a lot more to produce.

screen-04-framerating.jpg

Frame Rating testing methodology helped move the industry forward

The easy way out is to create click bait style content (have you seen the new Marvel trailer??!?) and hope for enough extra page views to make up the difference. But many people find the allure of the cheap/easy posts too strong and quickly devolve into press releases and marketing vomit. No one at PC Perspective wants to see that happen here.

Not only do we want to avoid a slide into that fate, but we want to improve on what we are doing, going further down the path of technical analysis with high quality writing and video content. Very few people are working on this kind of writing and analysis, yet it is vitally important to those of you who want the information to make critical purchasing decisions. And then you, in turn, pass those decisions on to others with less technical interest (brothers, mothers, friends). 

We have ideas for new regular shows including a PC Perspective Mailbag, a gaming / Virtual LAN Party show and even an old hardware post-mortem production. All of these take extra time beyond what each person has dedicated today and the additional funding provided by a successful Patreon campaign will help us towards those goals.

I don't want anyone to feel that they are somehow less of a fan of PC Perspective if you can't help - that's not what we are about and not what I stand for. Just being here, reading and commenting on our work means a lot to us. You can still help by spreading the word about stories you find interesting or even doing your regular Amazon.com shopping through our link on the right side bar.

But for those of you that can afford a monthly contribution, consider a "value for value" amount. How much do you think the content we have produced and will produce is worth to you? If that's $3/month, thank you! If that's $20/month, thank you as well! 

vidlogo.jpg

Support PC Perspective through Patreon

http://www.patreon.com/pcper

The team and I spent a lot of our time in the last several weeks talking through this Patreon campaign and we are proud to offer ourselves up to our community. PC Perspective is going to be here for a long time, and support from readers like you will help us be sure we can continue to improve and innovate on the information and content we provide.

Again, thank you so much for your support over the last 16 years!