Subject: General Tech
Manufacturer: ARM

New Products for 2017

PC Perspective was invited to Austin, TX on May 11 and 12 to participate in ARM’s yearly tech day. Also invited were a handful of editors and analysts who cover the PC and mobile markets. Those folks were all pretty smart, so it is confusing as to why they invited me. Perhaps word of my unique talent of screenshotting PDFs into near-unreadable JPGs preceded me? Regardless of the reason, I was treated to two full days of in-depth discussion of the latest generation of CPU and GPU cores, 10nm test chips, and information on new licensing options.

A73_formfactors.png

Today ARM is announcing their next CPU core, the Cortex-A73, and unwrapping the latest Mali-G71 graphics technology, along with supporting pieces such as the CCI-550 interconnect. It is a busy and important day for ARM, especially in light of Intel seemingly abandoning the low-power mobile market.

A73_boost.png

Cortex-A73

ARM previously announced the Cortex-A72 in February 2015, and since then it has appeared in most flagship mobile devices released in late 2015 and throughout 2016. The market continues to evolve, and changing workloads and form factors have pushed ARM to keep developing and improving their CPU technology.

A73_perf_comp_A72.png

The Sofia Antipolis, France design group is behind the new A73, while the previous several core architectures were developed by the Cambridge group. As such, the new design differs quite dramatically from the previous A72. I was somewhat taken aback by the differences in design philosophy between the two groups and the changes from the A72 to the A73, but with that context the generational jumps we have seen in the past make a bit more sense to me.

The marketplace is constantly changing when it comes to workloads and form factors. More and more complex applications are being ported to mobile devices, including hot technologies like AR and VR, along with 3D/360-degree video, greater-than-20 MP cameras, and 4K/8K displays and their video playback formats. Form factors, on the other hand, have continued to shrink, especially in overall thickness. We have relatively large screens on most premium devices, but designers have continued to make these phones thinner and thinner over the years. This puts a lot of pressure on ARM and their partners to increase performance while keeping TDPs in check, and even to reduce them so they more adequately fit the thermal envelope of these extremely thin devices.

A73_power_comp_A72.png


NVIDIA Releases Full Specifications for GTX 1070

Subject: Graphics Cards | May 18, 2016 - 04:49 PM |
Tagged: nvidia, pascal, gtx 1070, 1070, gtx, GTX 1080, 16nm FF+, TSMC, Founder's Edition

Several weeks ago when NVIDIA announced the new GTX 1000 series of products, we were given a quick glimpse of the GTX 1070.  This upper-midrange card is to carry a $379 price tag in retail form while the "Founder's Edition" will hit the $449 mark.  Today NVIDIA released the full specifications of this card on their website.

Interest in the GTX 1070 is incredibly high because of the potential performance of this card versus the previous generation. Price is also a big consideration here, as it is far easier to raise $379 than it is to make the jump to a GTX 1080 and shell out $599 once non-Founder's Edition cards are released. The GTX 1070 has all of the same features as the GTX 1080, but it takes a hit when it comes to clockspeed and shader units.

gtx_1070_launch.png

The GTX 1070 is a Pascal-based part fabricated on TSMC's 16nm FF+ node. It shares the same overall transistor count as the GTX 1080, but it is partially disabled: the GTX 1070 contains 1920 CUDA cores as compared to the 2560 cores of the 1080, with essentially one full GPC disabled to reach that number. The clockspeeds take a hit as well compared to the full GTX 1080, but the base clock for the 1070 is still an impressive 1506 MHz, with boost reaching 1683 MHz. This combination of shader count and clockspeed likely makes it a little faster than the older GTX 980 Ti. The rated TDP for the card is 150 watts with a single 8-pin PCI-E power connector, which means there should be some decent headroom when it comes to overclocking this card. Due to binning and yields, we may not see 2+ GHz overclocks with these cards, especially if NVIDIA cut down the power delivery system as compared to the GTX 1080. Time will tell on that one.
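
As a rough sanity check, peak FP32 throughput can be estimated as cores x 2 FLOPs per clock (one FMA) x clock speed. A minimal sketch in Python; the GTX 1080 and GTX 980 Ti figures use their published reference boost clocks (1733 MHz and 1075 MHz), and sustained real-world clocks will of course differ:

    # Peak FP32 throughput ~= CUDA cores * 2 FLOPs per clock (FMA) * clock (MHz),
    # converted to TFLOPS. Reference boost clocks; sustained clocks will vary.
    def peak_tflops(cuda_cores, boost_mhz):
        return cuda_cores * 2 * boost_mhz / 1e6

    print(peak_tflops(1920, 1683))  # GTX 1070:   ~6.5 TFLOPS
    print(peak_tflops(2560, 1733))  # GTX 1080:   ~8.9 TFLOPS
    print(peak_tflops(2816, 1075))  # GTX 980 Ti: ~6.1 TFLOPS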

The memory technology that NVIDIA is using for this card is not the cutting-edge GDDR5X or HBM, but rather the tried-and-true GDDR5. 8 GB of this memory sits on a 256-bit bus, but it is running at a very, very fast 8 Gbps, which gives 256 GB/s of overall bandwidth. When we combine this figure with the memory compression techniques implemented in the Pascal architecture, we can see that the GTX 1070 will not be bandwidth starved. We have no information on whether this generation of products will mirror what we saw with the previous-generation GTX 970 in terms of disabled memory controllers and the 3.5 GB/500 MB memory split that resulted from that unique memory subsystem.
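
The bandwidth math is straightforward: bus width in bytes times the per-pin data rate. A quick sketch (the second line assumes the GTX 1080's 10 Gbps GDDR5X for comparison):

    # Bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps).
    print(256 / 8 * 8)   # GTX 1070, GDDR5 at 8 Gbps:   256.0 GB/s
    print(256 / 8 * 10)  # GTX 1080, GDDR5X at 10 Gbps: 320.0 GB/s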

gtx_1070_launch2.png

Beyond those things, the GTX 1070 is identical to the GTX 1080 in terms of DirectX features, display specifications, decoding support, double-bandwidth SLI, etc. There is an obvious amount of excitement for this card considering its potential performance and price point. The Founder's Edition cards are expected to be available on June 10 at the $449 MSRP. I know many people are considering using these cards in SLI to deliver performance for half the price of last year's GTX 980 Ti, and from all indications they will be a significant upgrade for anyone using GTX 970s in SLI. With 4K monitors becoming more accessible, as well as Surround gaming, this could be a solid purchase for anyone looking to step up their game in those scenarios.

Source: NVIDIA
Subject: Processors
Manufacturer: ARM

10nm Sooner Than Expected?

It seems like only yesterday that we had the first major GPU released on 16nm FF+, and now we are talking about ARM being about to receive their first 10nm FF test chips! Well, in fact it was yesterday that NVIDIA formally released performance figures for the latest GeForce GTX 1080, which is based on TSMC’s 16nm FF+ process technology. TSMC is currently going full bore on their latest process node, producing the fastest graphics chip around. It has taken the foundry industry as a whole a lot longer than expected to develop FinFET technology, but now that they seemingly have that piece of the puzzle mastered, they are moving to the next process node at an accelerated rate.

arm_td01.png

TSMC’s 10nm FF is not well understood by press and analysts yet, but we gather that it is more of a marketing term than a true drop to 10 nm features. Intel has yet to move past 14nm and does not expect 10 nm production until well into next year, while TSMC is promising their version in the second half of 2016. We cannot assume that TSMC’s version will match what Intel will be doing in terms of geometries and electrical characteristics, but we do know that it is a step past TSMC’s 16nm FF products. Lithography will likely get a boost from triple-patterning exposure, and my guess is that the back end will also move away from the “20nm metal” stages that we see with 16nm. All in all, it should be an improvement over what we see with 16nm, but time will tell if it can match the performance and density of competing lines that bear the 10nm name from Intel, Samsung, and GLOBALFOUNDRIES.

ARM has a history of porting their architectures to new process nodes, but they are being a bit more aggressive here than we have seen in the past. It used to be that ARM would announce a new core or technology and it would take up to two years for it to reach the market; now we are seeing actual products hit the scene about nine months after the technology announcement. With the mobile market continuing to grow, we expect products to be brought to market even more quickly.

arm_td02.png

The company designed a simplified test chip to tape out and send to TSMC for test production on the aforementioned 10nm FF process. The chip was taped out in December 2015 and shipped to TSMC for mask production and wafer starts, and ARM is expecting the finished wafers to arrive this month.


Rumor: Apple's A11 SoC Reaches Tapeout at TSMC 10nm

Subject: Processors, Mobile | May 9, 2016 - 05:42 PM |
Tagged: apple, a11, 10nm, TSMC

Before I begin, the report comes from DigiTimes and they cite anonymous sources for this story. As always, a grain of salt is required when dealing with this level of alleged leak.

apple.png

That out of the way, rumor has it that Apple's A11 SoC has been taped out on TSMC's 10nm process node. This is still a little ways away from production, however. From here, TSMC should be providing samples of the now-finalized chip in Q1 2017 and starting production a few months later, with the chip landing in iOS devices somewhere in Q3/Q4. Knowing Apple, that will probably align with their usual release schedule -- around September.

DigiTimes also reports that Apple will likely make their split-production idea a recurring habit. Currently, the A9 processor is fabricated at TSMC and Samsung on two different process nodes (16nm for TSMC and 14nm for Samsung). They claim that two-thirds of A11 chips will come from TSMC.

Source: DigiTimes

ARM Partners with TSMC to Produce SoCs on 7nm FinFET

Subject: Processors | March 15, 2016 - 04:52 PM |
Tagged: TSMC, SoC, servers, process technology, low power, FinFET, datacenter, cpu, arm, 7nm, 7 nm FinFET

ARM and TSMC have announced their collaboration on 7 nm FinFET process technology for future SoCs. Under a multi-year agreement between the companies, products produced on this 7 nm FinFET process are intended to expand ARM’s reach “beyond mobile and into next-generation networks and data centers”.

tsmc-headquarters.jpg

TSMC Headquarters (Image credit: AndroidHeadlines)

So when can we expect to see 7nm SoCs on the market? The report from The Inquirer offers this quote from TSMC:

“A TSMC spokesperson told the INQUIRER in a statement: ‘Our 7nm technology development progress is on schedule. TSMC's 7nm technology development leverages our 10nm development very effectively. At the same time, 7nm offers a substantial density improvement, performance improvement and power reduction from 10nm’.”


Source: ARM

MWC 2016: MediaTek Announces Helio P20 True Octa-Core SoC

Subject: Processors, Mobile | February 22, 2016 - 04:11 PM |
Tagged: TSMC, SoC, octa-core, MWC 2016, MWC, mediatek, Mali-T880, LPDDR4X, Cortex-A53, big.little, arm

MediaTek might not be well-known in the United States, but the company has been working to expand from China, where it had a 40% market share as of June 2015, into the global market. While 2015 saw the introduction of the 8-core Helio P10 and the 10-core Helio X20 SoCs, the company continues to expand their lineup, today announcing the Helio P20 SoC.

Helio_P20.jpg

There are a number of differences between the recent SoCs from MediaTek, beginning with the CPU core configuration. The new Helio P20 is a “True Octa-Core” design: rather than a big.LITTLE configuration, it uses 8 identically-clocked ARM Cortex-A53 cores at 2.3 GHz. The previous Helio P10 used a similar CPU configuration, though clocks were limited to 2.0 GHz with that SoC. Conversely, the 10-core Helio X20 uses a tri-cluster configuration, with 2x ARM Cortex-A72 cores running at 2.5 GHz along with a typical big.LITTLE arrangement (4x Cortex-A53 cores at 2.0 GHz and 4x Cortex-A53 cores at 1.4 GHz).
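
To keep the three layouts straight, here they are summarized as data (a sketch; the core counts and clocks are simply those listed above):

    # CPU cluster layouts of the three MediaTek SoCs discussed above.
    # Each tuple is (core count, core type, clock in GHz).
    socs = {
        "Helio P10": [(8, "Cortex-A53", 2.0)],
        "Helio P20": [(8, "Cortex-A53", 2.3)],
        "Helio X20": [(2, "Cortex-A72", 2.5),
                      (4, "Cortex-A53", 2.0),
                      (4, "Cortex-A53", 1.4)],
    }

    for name, clusters in socs.items():
        cores = sum(count for count, _, _ in clusters)
        print(f"{name}: {cores} cores in {len(clusters)} cluster(s)")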

Another change, affecting both MediaTek’s new SoC and the industry at large, is the move to smaller process nodes. The Helio P10 was built on 28 nm HPM, while the new P20 moves to 16 nm FinFET. Just as with the Helio P10 and the Helio X20 (a 20 nm part), this SoC is produced at TSMC, in this case using their 16FF+ (FinFET Plus) technology. This should provide up to “40% higher speed and 60% power saving” compared to the company’s previous 20 nm process found in the Helio X20, though of course real-world results will have to wait until handsets are available to test.

The Helio P20 also takes advantage of LPDDR4X, and is “the world’s first SoC to support low power double data rate random access memory” according to MediaTek. The company says this new memory provides “70 percent more bandwidth than the LPDDR3 and 50 percent power savings by lowering supply voltage to 0.6v”. Graphics are powered by ARM’s high-end Mali-T880 GPU, clocked at an impressive 900 MHz. All-important modem connectivity includes Cat. 6 LTE with 2x carrier aggregation for speeds of up to 300 Mbps down and 50 Mbps up. The Helio P20 also supports up to 4K/30 video decode of H.264/H.265, and the 12-bit dual-camera ISP supports up to 24 MP sensors.
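
That 70 percent bandwidth claim roughly checks out if we assume standard 32-bit LPDDR channels at the clocks in the spec list below. This is our assumption, not a published Helio P20 detail, so treat it as a sketch:

    # Per-channel bandwidth (GB/s) = 2 transfers/clock (DDR) * clock * width in bytes.
    # Assumes 32-bit channels, typical for LPDDR3/LPDDR4X; the Helio P20's actual
    # channel configuration is not spelled out by MediaTek.
    def channel_gbs(clock_mhz, width_bits=32):
        return 2 * clock_mhz * 1e6 * (width_bits / 8) / 1e9

    lpddr3 = channel_gbs(933)    # ~7.5 GB/s per channel
    lpddr4x = channel_gbs(1600)  # ~12.8 GB/s per channel
    print(lpddr4x / lpddr3 - 1)  # ~0.71, i.e. roughly 70% more bandwidth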

Specs from MediaTek:

  • Process: 16nm
  • Apps CPU: 8x Cortex-A53, up to 2.3GHz
  • Memory: Up to 2 x LPDDR4X 1600MHz (up to 6GB) + 1x LPDDR3 933MHz (up to 4GB) + eMMC 5.1
  • Camera: Up to 24MP at 24FPS w/ZSD, 12bit Dual ISP, 3A HW engine, Bayer & Mono sensor support
  • Video Decode: Up to 4Kx2K 30fps H.264/265
  • Video Encode: Up to 4Kx2K 30fps H.264
  • Graphics: Mali-T880 MP2 900MHz
  • Display: FHD 1920x1080 60fps. 2x DSI for dual display
  • Modem: LTE FDD TDD R.11 Cat.6 with 2x20 CA. C2K SRLTE. L+W DSDS support
  • Connectivity: WiFi ac/abgn (with MT6630). GPS/Glonass/Beidou/BT/FM.
  • Audio: 110dB SNR & -95dB THD

It’s interesting to see SoC makers experiment with less complex CPU designs after a generation of multi-cluster (big.LITTLE) SoCs, as even the current flagship Qualcomm SoC, the Snapdragon 820, has reverted to a straight quad-core design. The P20 is expected to be in shipping devices by the second half of 2016, and we will see how this configuration performs once some devices using this new P20 SoC are in the wild.


Source: MediaTek

TSMC Allegedly Wants 5nm by 2020

Subject: Graphics Cards, Processors | January 20, 2016 - 04:38 AM |
Tagged: TSMC

Digitimes is reporting on statements that were allegedly made by TSMC co-CEO, Mark Liu. We are currently seeing 16nm parts come out of the foundry, and that node is expected to be used in the next generation of GPUs, replacing the long-running 28nm node that launched with the GeForce GTX 680. (It's still unannounced whether AMD and NVIDIA will use 14nm FinFET from Samsung or GlobalFoundries, or 16nm FinFET from TSMC.)

Update (Jan 20th, @4pm EST): Couple minor corrections. Radeon HD 7970 launched at 28nm first by a couple of months. I just remember NVIDIA getting swamped in delays because it was a new node, so that's probably why I thought of the GTX 680. Also, AMD announced during CES that they will use GlobalFoundries to fab their upcoming GPUs, which I apparently missed. We suspect that NVIDIA will use TSMC, and have assumed that for a while, but it hasn't been officially announced yet (if ever).

tsmc.jpg

According to their projections, which (again) are filtered through Digitimes, the foundry expects to have 7nm in the first half of 2018. They also expect to introduce extreme ultraviolet (EUV) lithography with 5nm in 2020. Given that solid silicon has a lattice constant of ~0.54 nm at room temperature, 7nm transistors would have features spanning about 13 atoms, and 5nm transistors about 9 atoms.
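
The atom counts fall straight out of dividing the node name by silicon's lattice constant (a sketch that takes the marketing "nm" label at face value, which, as noted elsewhere, modern node names often are not):

    # Atoms across a feature ~= feature size / silicon lattice constant (~0.543 nm).
    SI_LATTICE_NM = 0.543

    for node_nm in (7, 5):
        print(node_nm, round(node_nm / SI_LATTICE_NM))  # 7 -> 13 atoms, 5 -> 9 atoms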

We continue the march toward the end of silicon lithography.

Even if the statement is correct, much can happen between now and then. It wouldn't be the first time that I've seen a major foundry believe that a node would be available, only to end up having it delayed. I wouldn't hold my breath, but I might cross my fingers if my hands were free.

At the very least, we can assume that TSMC's roadmap is 16nm, 10nm, 7nm, and then 5nm.

Source: Digitimes

Rumors Surrounding the LG NUCLUN 2 SoC

Subject: Processors, Mobile | December 1, 2015 - 12:30 PM |
Tagged: TSMC, SoC, LG, Intel, arm

So this story came out of nowhere. Whether the rumors are true or false, I am stuck on how everyone seems to be talking about it with a casual deadpan. I spent a couple hours Googling whether I missed some big announcement that made Intel potentially fabricating ARM chips a mundane non-story. Pretty much all that I found was Intel allowing Altera to make FPGAs with embedded ARM processors in a supporting role, which is old news.

simpsons-2015-skinner-out-of-touch.jpg

Image Credit: Internet Memes...

The rumor is that Intel and TSMC were both vying to produce LG's NUCLUN 2 SoC. This part is said to house two quad-core ARM modules in a typical big.LITTLE formation. Samples were allegedly produced, with Intel's part (2.4 GHz) able to clock around 300 MHz higher than TSMC's offering (2.1 GHz). Clock rate is highly dependent upon the “silicon lottery,” so this is an area where production maturity can help. Intel's sample would also be manufactured at 14nm (versus 16nm from TSMC, although these numbers mean less than they used to). LG was also, again allegedly, interested in Intel's LTE modem. According to the rumors, LG went with TSMC because they felt Intel couldn't keep up with demand.

Now that the rumor has been reported... let's step back a bit.

I talked with Josh a couple of days ago about this post. He's quite skeptical (as am I) about the whole situation. First and foremost, it takes quite a bit of effort to port a design to a different manufacturing process. LG could do it, but it is questionable, especially for only the second chip the company has ever designed. Moreover, I still believe that Intel doesn't want to manufacture chips that directly compete with their own. x86 in phones is still not a viable business, but Intel hasn't given up on it, and you would think giving up would be a prerequisite for fabbing a competitor's ARM SoC.

So this whole thing doesn't seem right.

Source: Android

Podcast #369 - Fable Legends DX12 Benchmark, Apple A9 SoC, Intel P3608 SSD, and more!

Subject: General Tech | October 1, 2015 - 06:17 PM |
Tagged: podcast, video, fable legends, dx12, apple, A9, TSMC, Samsung, 14nm, 16nm, Intel, P3608, NVMe, logitech, g410, TKL, nvidia, geforce now, qualcomm, snapdragon 820

PC Perspective Podcast #369 - 10/01/2015

Join us this week as we discuss the Fable Legends DX12 Benchmark, Apple A9 SoC, Intel P3608 SSD, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom, and Allyn Malventano

Program length: 1:42:35

  1. Week in Review:
  2. 0:54:10 This episode of PC Perspective is brought to you by…Zumper, the quick and easy way to find your next apartment or home rental. To get started and to find your new home go to http://zumper.com/PCP
  3. News item of interest:
  4. Hardware/Software Picks of the Week:
  5. Closing/outro

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Apple Dual Sources A9 SOCs with TSMC and Samsung: Some Extra Thoughts

Subject: Processors | October 1, 2015 - 01:55 AM |
Tagged: TSMC, Samsung, FinFET, apple, A9, 16 nm, 14 nm

So the other day the nice folks over at Chipworks got word that Apple was in fact sourcing their A9 SOC from both TSMC and Samsung.  This is really interesting news on multiple fronts.  From the information gleaned, the two parts are the APL0898 (Samsung fabbed) and the APL1022 (TSMC).

These process technologies have been in the news quite a bit.  As we well know, it has been hard for any foundry not named Intel to get below 28 nm in an effective way.  Even Intel has had some pretty hefty issues with their march to sub-32 nm parts, but they have the resources and financial ability to push through a lot of these hurdles.  One of the bigger problems that affected the foundries was the decision to push FinFETs back further than initially planned: the idea was to hit 22/20 nm with planar transistors and save FinFET technology for 16/14 nm.

apple_a9.jpg

The Chipworks graphic that explains the differences between Samsung's and TSMC's A9 products.

There were many reasons why this did not work effectively for the majority of products that the foundries were looking to service with a 22/20 nm planar process.  Yes, there were many parts fabricated using these nodes, but none of them were the higher-power, higher-performance parts that typically garner headlines.  No CPUs, no GPUs, and only a handful of lower-power SOCs (most notably Apple's A8, which was around 89 mm² and consumed up to 5 to 10 watts at maximum).  The node just did not scale power effectively: it provided a smaller die size, but it did not significantly improve power efficiency and switching performance as compared to 28 nm high-performance nodes.

The information Chipworks has provided also suggests that Samsung's 14 nm FF process is more size-optimized than TSMC's 16 nm FF.  There was originally some talk about both nodes being very similar in overall transistor size and density, but Samsung has a slightly tighter design.  Neither of them is smaller than Intel's latest 14 nm process, which is going into its second-generation form; Intel still has a significant performance and size advantage over everyone else in the field.  Going back to size, the Samsung chip is around 96 mm² while the TSMC chip is 104.5 mm².  This is not a huge difference, but it does show that the Samsung process is a little tighter and can squeeze more transistors into each square millimeter than TSMC's.
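
Put in percentage terms (simple arithmetic on the Chipworks die measurements):

    # Relative die area of the two A9 variants, per Chipworks' measurements.
    samsung_mm2 = 96.0
    tsmc_mm2 = 104.5
    print((tsmc_mm2 / samsung_mm2 - 1) * 100)  # TSMC die is ~8.9% larger for the same chip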

In terms of actual power consumption and clock scaling, we have nothing to go on here.  Both chips are represented in the 6S and 6S Plus, and testing so far has not shown significant differences between the two SOCs.  In theory one could be performing better than the other, but we have not tested these chips at a low enough level to discern any major performance or power differences.  My gut feeling here is that Samsung's process is more mature and running slightly better than TSMC's, but the differences are going to be minimal at best.

The next piece of info that we can glean from this is that there just isn't enough line space for all of the chip companies that want to fabricate their parts with either Samsung or TSMC.  From a chip standpoint, a lot of work has to be done to port a design to two different process nodes.  While 14 nm and 16 nm are similar in overall size, and both use FinFETs, the standard cells and design libraries from Samsung and TSMC are going to be very different.  It is not a simple thing to port over a design; a lot of work has to be done in the design stage to make a chip work with both nodes.  I can tell you that there is no way that both chips are identical in layout.  This is not a "dumb port" where they just adjust the optics with the same masks and magically make the chips work right off the bat.  Each fab needs its own mask sets, each design has to be verified separately, and troubleshooting yields through metal-layer changes will differ between the two manufacturers.

In the end, this means there simply was not enough space at either TSMC or Samsung to handle the demand Apple was expecting.  Because Apple has deep pockets, they contracted both TSMC and Samsung to produce two very similar, but still different, parts.  Apple also likely outbid other major chip firms for what wafer capacity Samsung and TSMC do have, much to those firms' dismay.  I have no idea what is going on in the background with the likes of NVIDIA and AMD when it comes to line space for manufacturing their next-generation parts, though at least for AMD it seems that their partnership with GLOBALFOUNDRIES and its version of 14 nm FF is having a hard time taking off.  Eventually more production space will open up, yields and bins will improve, and Apple will stop taking up so much capacity, letting other products roll off the line.  In the meantime, enjoy that cutting-edge iPhone 6S/6S Plus with the latest 14/16 nm FF chips.

Source: Chipworks