Author:
Subject: Editorial
Manufacturer: NVIDIA

NVIDIA Today?

It always feels a little odd covering NVIDIA’s quarterly earnings because of how the company presents its financial calendar.  No, we are not reporting from the future.  Yes, it can be confusing when comparing results and getting your dates mixed up.  Whatever the calendar label says, NVIDIA did exceptionally well in a quarter that is typically the second weakest after Q1.

NVIDIA reported revenue of $1.43 billion.  This is a jump from an already strong Q1, where the company took in $1.30 billion.  Compare this to the $1.027 billion of its competitor AMD, which provides CPUs as well as GPUs.  NVIDIA sold a lot of GPUs as well as other products.  Its primary money makers were consumer GPUs and the professional and compute markets, where it has a virtual stranglehold at the moment.  The company’s GAAP net income is a very respectable $253 million.

results.png

The release of the latest Pascal based GPUs was the primary mover for the gains this latest quarter.  AMD has had a hard time competing with NVIDIA for market share.  The older Maxwell based chips performed well against the entire line of AMD offerings and typically did so with better power and heat characteristics.  Even though the GTX 970 was somewhat limited in its memory configuration as compared to the AMD products (3.5 GB + 0.5 GB vs. a full 4 GB implementation), it was a top seller in its class.  The same could be said for the products up and down the stack.

Pascal was released at the end of May, but the company had been shipping chips to its partners before that, as well as building the “Founder’s Edition” models to its exacting specifications.  These were strong sellers from the end of May through the end of the quarter.  NVIDIA recently unveiled their latest Pascal based Quadro cards, but we do not know how much of an impact those have had on this quarter.  NVIDIA has also been shipping, in very limited quantities, the Tesla P100 based units to select customers and outfits.

Click to read more about NVIDIA's latest quarterly results!

Xbox One S is Compact, Power Efficient, and (Slightly) Faster

Subject: General Tech | August 8, 2016 - 11:06 PM |
Tagged: xbox one s, xbox one, TSMC, microsoft, console, 16nm

Microsoft recently unleashed a smaller version of its gaming console in the form of the Xbox One S. The new "S" variant packs an internal power supply, a 4K Blu-ray optical drive, and a smaller (die shrunk) AMD SoC into a 40% smaller package. The new console is clad in all white with black accents and a circular vent on the left half of the top. A USB port and a pairing button have been added to the front, and the power and eject buttons are now physical rather than capacitive (touch sensitive).

Rear I/O remains similar to the original console and includes a power input, two HDMI ports (one input, one output), two USB 3.0 ports, one Ethernet, one S/PDIF audio out, and one IR out port. There is no need for the power brick anymore though as the power supply is now internal. Along with being 40% smaller, it can now be mounted vertically using an included stand. While there is no longer a dedicated Kinect port, it is still possible to add a Kinect to your console using an adapter.

Microsoft Xbox One S Gaming Console 4K Media BluRay UHD.jpg

The internal specifications of the Xbox One S remain consistent with the original Xbox One console, except that it will now be available in a 2TB model. The gaming console is powered by a nearly identical processor that is now 35% smaller thanks to being manufactured on a smaller 16nm FinFET process node at TSMC. While the chip is more power efficient, it still features the same eight Jaguar CPU cores clocked at 1.75 GHz and a 12 CU graphics portion (768 stream processors). Microsoft and AMD now support HDR and 4K resolutions and upscaling with the new chip. The graphics portion is where the new Xbox One S gets a bit interesting because it appears that Microsoft has given the GPU a bit of an overclock to 914 MHz. Compared to the original Xbox One's 853 MHz, this is a 7.1% increase in clockspeed. The increased GPU clock also results in increased bandwidth for the ESRAM (204 GB/s on the original Xbox One versus 219 GB/s on the Xbox One S).
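
Since the ESRAM bandwidth scales directly with the GPU clock, the figures above are easy to sanity-check. A quick back-of-the-envelope sketch (using only the numbers quoted in this article):

    # Sanity check of the Xbox One S GPU clock bump and ESRAM bandwidth
    original_clock_mhz = 853   # original Xbox One GPU clock
    new_clock_mhz = 914        # Xbox One S GPU clock

    clock_gain = new_clock_mhz / original_clock_mhz - 1
    print(f"GPU clock increase: {clock_gain:.1%}")   # roughly 7%

    # ESRAM bandwidth scales with the GPU clock
    original_esram_gb_s = 204  # GB/s on the original Xbox One
    scaled_esram_gb_s = original_esram_gb_s * new_clock_mhz / original_clock_mhz
    print(f"Scaled ESRAM bandwidth: {scaled_esram_gb_s:.0f} GB/s")  # ~219 GB/s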

According to Microsoft, the increased GPU clockspeed was necessary to render non-HDR versions of games for Game DVR, Game Streaming, and real-time screenshots. A nice side benefit is that the extra performance can result in improved gameplay in certain titles. In Digital Foundry's testing, Richard Leadbetter found this to be especially true in games with unlocked frame rates or in games that are locked to 30 FPS but where the original console could not hit 30 FPS consistently. The increased clock can be felt in slightly smoother gameplay and less screen tearing. For example, they found that the Xbox One S got up to 11% higher frame rates in Project Cars (47 FPS versus 44) and between 6% and 8% in Hitman. Further, they found that the higher clock helps performance when playing Xbox 360 games on the Xbox One in backwards compatibility mode, such as Alan Wake's American Nightmare.

The 2TB Xbox One S is available now for $400 while the 1TB ($350) and 500GB ($300) versions will be available on the 23rd. For comparison, the 500GB Xbox One (original) is currently $250. The Xbox One 1TB game console varies in price depending on game bundle.

What are your thoughts on the smaller console? While the ever-so-slight performance boost is a nice bonus, I definitely don't think that it is worth specifically upgrading for if you already have an Xbox One. If you have been holding off, though, now is the time to get a discounted original or the smaller S version! If you are hoping for more performance, definitely wait for Microsoft's Scorpio project or its competitor, the PlayStation 4 Neo (or, even better, a gaming PC, right!? hehe).

I do know that Ryan has gotten his hands on the slimmer Xbox One S, so hopefully we will see some testing of our own as well as a teardown (hint, hint!).

Source: Eurogamer
Author:
Subject: General Tech
Manufacturer: ARM

New Products for 2017

PC Perspective was invited to Austin, TX on May 11 and 12 to participate in ARM’s yearly tech day.  Also invited were a handful of editors and analysts who cover the PC and mobile markets.  Those folks were all pretty smart, so it is confusing as to why they invited me.  Perhaps word of my unique talent for screenshotting PDFs into near-unreadable JPGs preceded me?  Regardless of the reason, I was treated to two full days of in-depth discussion of the latest generation of CPU and GPU cores, 10nm test chips, and information on new licensing options.

A73_formfactors.png

Today ARM is announcing their next CPU core with the introduction of the Cortex-A73.  They are also unwrapping the latest Mali-G71 graphics technology.  Other technologies, such as the CCI-550 interconnect, are also being revealed.  It is a busy and important day for ARM, especially in light of Intel seemingly abandoning the low-power smartphone market.

A73_boost.png

Cortex-A73

ARM previously announced the Cortex-A72 in February 2015.  Since then it has appeared in most flagship mobile devices released in late 2015 and throughout 2016.  The market continues to evolve, and as such the workloads and form factors have pushed ARM to continue to develop and improve their CPU technology.

A73_perf_comp_A72.png

The Sofia Antipolis, France design group is behind the new A73.  The previous several core architectures had been developed by the Cambridge group.  As such, the new design differs quite dramatically from the previous A72.  I was actually somewhat taken aback by the differences in the design philosophy of the two groups and the changes between the A72 and A73, but the generational jumps we have seen in the past now make a bit more sense to me.

The marketplace is constantly changing when it comes to workloads and form factors.  More and more complex applications are being ported to mobile devices, including hot technologies like AR and VR.  Other drivers include 3D/360 degree video, greater than 20 MP cameras, and 4K/8K displays and their video playback formats.  Form factors, on the other hand, have continued to decrease in size, especially in overall height.  We have relatively large screens on most premium devices, but designers have continued to make these phones thinner and thinner throughout the years.  This has put a lot of pressure on ARM and their partners to increase performance while keeping TDPs in check, and even reducing them so they more adequately fit in the thermal envelope of these extremely thin devices.

A73_power_comp_A72.png

Click here to continue reading about ARM's Tech Day 2016!

NVIDIA Releases Full Specifications for GTX 1070

Subject: Graphics Cards | May 18, 2016 - 12:49 PM |
Tagged: nvidia, pascal, gtx 1070, 1070, gtx, GTX 1080, 16nm FF+, TSMC, Founder's Edition

Several weeks ago, when NVIDIA announced the new GTX 1000 series of products, we were given a quick glimpse of the GTX 1070.  This upper-midrange card will carry a $379 price tag in retail form while the "Founder's Edition" will hit the $449 mark.  Today NVIDIA released the full specifications of this card on their website.

Interest in the GTX 1070 is incredibly high because of the potential performance of this card vs. the previous generation.  Price is also a big consideration here, as it is far easier to raise $379 than it is to make the jump to the GTX 1080 and shell out $599 once non-Founder's Edition cards are released.  The GTX 1070 has all of the same features as the GTX 1080, but it takes a hit when it comes to clockspeed and shader units.

gtx_1070_launch.png

The GTX 1070 is a Pascal based part that is fabricated on TSMC's 16nm FF+ node.  It shares the same overall transistor count as the GTX 1080, but it is partially disabled.  The GTX 1070 contains 1920 CUDA cores as compared to the 2560 cores of the 1080.  Essentially one full GPC is disabled to reach that number.  The clockspeeds take a hit as well compared to the full GTX 1080.  The base clock for the 1070 is still an impressive 1506 MHz and boost reaches 1683 MHz.  This combination of shader count and clockspeed likely makes it a little bit faster than the older GTX 980 Ti.  The rated TDP for the card is 150 watts with a single 8-pin PCI-E power connector.  This means that there should be some decent headroom when it comes to overclocking this card.  Due to binning and yields, we may not see 2+ GHz overclocks with these cards, especially if NVIDIA cut down the power delivery system as compared to the GTX 1080.  Time will tell on that one.
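
To put that cut-down configuration in perspective, here is a rough sketch of where the core count and theoretical throughput land.  It assumes GP104's published layout of four GPCs, each with five SMs of 128 CUDA cores, and two FP32 operations per core per clock; treat it as a ballpark, not an official figure:

    # Rough sketch of the GTX 1070's cut-down GP104 configuration
    cores_per_sm = 128
    sms_per_gpc = 5
    full_gpcs = 4

    full_cores = full_gpcs * sms_per_gpc * cores_per_sm            # 2560 (GTX 1080)
    gtx1070_cores = (full_gpcs - 1) * sms_per_gpc * cores_per_sm   # 1920 with one GPC disabled

    boost_clock_ghz = 1.683
    peak_tflops = gtx1070_cores * 2 * boost_clock_ghz / 1000       # FMA = 2 FP32 ops per clock
    print(full_cores, gtx1070_cores, round(peak_tflops, 2))        # 2560 1920 6.46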

The memory technology that NVIDIA is using for this card is not the cutting edge GDDR5X or HBM, but rather the tried and true GDDR5.  8 GB of this memory sits on a 256-bit bus, but it is running at a very, very fast 8 Gbps.  This gives overall bandwidth in the 256 GB/sec region.  When we combine this figure with the memory compression techniques implemented in the Pascal architecture, we can see that the GTX 1070 will not be bandwidth starved.  We have no information on whether this generation of products will mirror what we saw with the previous generation GTX 970 in terms of disabled memory controllers and the 3.5 GB/500 MB memory split caused by that unique memory subsystem.
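
For those wondering where the 256 GB/sec figure comes from, it falls straight out of the bus width and per-pin data rate (a quick sketch using the numbers above):

    # GDDR5 bandwidth: bus width (bits) x data rate (Gbps per pin) / 8 bits per byte
    bus_width_bits = 256
    data_rate_gbps = 8

    bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
    print(f"{bandwidth_gb_s:.0f} GB/s")   # 256 GB/s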

gtx_1070_launch2.png

Beyond those things, the GTX 1070 is identical to the GTX 1080 in terms of DirectX features, display specifications, decoding support, double bandwidth SLI, etc.  There is an obvious amount of excitement for this card considering its potential performance and price point.  The Founder's Edition is supposedly available on June 10 for the $449 MSRP.  I know many people are considering using these cards in SLI to deliver performance for half the price of last year's GTX 980 Ti.  From all indications, these cards will be a significant upgrade for anyone using GTX 970s in SLI.  With greater access to monitors that hit 4K as well as Surround Gaming, this could be a solid purchase for anyone looking to step up their game in these scenarios.

Source: NVIDIA
Author:
Subject: Processors
Manufacturer: ARM

10nm Sooner Than Expected?

It seems like only yesterday that we had the first major GPU released on 16nm FF+, and now we are talking about ARM being about to receive their first 10nm FF test chips!  Well, in fact it was yesterday that NVIDIA formally released performance figures on the latest GeForce GTX 1080, which is based on TSMC’s 16nm FF+ process technology.  Currently TSMC is going full bore on their latest process node and producing the fastest graphics chip around.  It has taken the foundry industry as a whole a lot longer to develop FinFET technology than expected, but now that they have that piece of the puzzle seemingly mastered, they are moving to a new process node at an accelerated rate.

arm_td01.png

TSMC’s 10nm FF is not well understood by press and analysts yet, but we gather that it is more of a marketing term than a true drop to 10nm features.  Intel has yet to get past 14nm and does not expect 10nm production until well into next year.  TSMC is promising their version in the second half of 2016.  We cannot assume that TSMC’s version will match what Intel will be doing in terms of geometries and electrical characteristics, but we do know that it is a step past TSMC’s 16nm FF products.  Lithography will likely get a boost from triple patterning exposure.  My guess is that the back end will also move away from the “20nm metal” stages that we see with 16nm.  All in all, it should be an improved product over what we see with 16nm, but time will tell if it can match the performance and density of competing lines that bear the 10nm name from Intel, Samsung, and GLOBALFOUNDRIES.

ARM has a history of porting their architectures to new process nodes, but they are being a bit more aggressive here than we have seen in the past.  It used to be that ARM would announce a new core or technology and it would take up to two years for it to reach the market.  Now we are seeing technology announcements and actual products hitting the scene about nine months later.  With the mobile market continuing to grow, we expect products to come to market even more quickly.

arm_td02.png

The company designed a simplified test chip to tape out and send to TSMC for test production on the aforementioned 10nm FF process.  The chip was taped out in December 2015, and the design was shipped to TSMC for mask production and wafer starts.  ARM is expecting the finished wafers to arrive this month.

Click here to continue reading about ARM's test foray into 10nm!

Rumor: Apple's A11 SoC Reaches Tapeout at TSMC 10nm

Subject: Processors, Mobile | May 9, 2016 - 01:42 PM |
Tagged: apple, a11, 10nm, TSMC

Before I begin, the report comes from DigiTimes and they cite anonymous sources for this story. As always, a grain of salt is required when dealing with this level of alleged leak.

apple.png

That out of the way, rumor has it that Apple's A11 SoC has been taped out on TSMC's 10nm process node. This is still a little ways away from production, however. From here, TSMC should be providing samples of the now finalized chip in Q1 2017 and starting production a few months later, with the chip landing in iOS devices somewhere in Q3/Q4. Knowing Apple, that will probably align with their usual release schedule -- around September.

DigiTimes also reports that Apple will likely make their split-production idea a recurring habit. Currently, the A9 processor is fabricated at TSMC and Samsung on two different process nodes (16nm for TSMC and 14nm for Samsung). They claim that two-thirds of A11 chips will come from TSMC.

Source: DigiTimes

ARM Partners with TSMC to Produce SoCs on 7nm FinFET

Subject: Processors | March 15, 2016 - 12:52 PM |
Tagged: TSMC, SoC, servers, process technology, low power, FinFET, datacenter, cpu, arm, 7nm, 7 nm FinFET

ARM and TSMC have announced a collaboration on 7 nm FinFET process technology for future SoCs. Under a multi-year agreement between the companies, products produced on this 7 nm FinFET process are intended to expand ARM’s reach “beyond mobile and into next-generation networks and data centers”.

tsmc-headquarters.jpg

TSMC Headquarters (Image credit: AndroidHeadlines)

So when can we expect to see 7nm SoCs on the market? The report from The Inquirer offers this quote from TSMC:

“A TSMC spokesperson told the INQUIRER in a statement: ‘Our 7nm technology development progress is on schedule. TSMC's 7nm technology development leverages our 10nm development very effectively. At the same time, 7nm offers a substantial density improvement, performance improvement and power reduction from 10nm’.”

Full press release after the break.

Source: ARM

MWC 2016: MediaTek Announces Helio P20 True Octa-Core SoC

Subject: Processors, Mobile | February 22, 2016 - 11:11 AM |
Tagged: TSMC, SoC, octa-core, MWC 2016, MWC, mediatek, Mali-T880, LPDDR4X, Cortex-A53, big.little, arm

MediaTek might not be well-known in the United States, but the company has been working to expand from China, where it had a 40% market share as of June 2015, into the global market. While 2015 saw the introduction of the 8-core Helio P10 and the 10-core Helio X20 SoCs, the company continues to expand their lineup, today announcing the Helio P20 SoC.

Helio_P20.jpg

There are a number of differences between the recent SoCs from MediaTek, beginning with the CPU core configuration. The new Helio P20 is a “True Octa-Core” design, but rather than a big.LITTLE configuration it uses 8 identically-clocked ARM Cortex-A53 cores at 2.3 GHz. The previous Helio P10 used a similar CPU configuration, though clocks were limited to 2.0 GHz with that SoC. Conversely, the 10-core Helio X20 uses a tri-cluster configuration, with 2x ARM Cortex-A72 cores running at 2.5 GHz along with a typical big.LITTLE arrangement (4x Cortex-A53 cores at 2.0 GHz and 4x Cortex-A53 cores at 1.4 GHz).

Another change affecting MediaTek’s new SoC, and the industry at large, is the move to smaller process nodes. The Helio P10 was built on 28 nm HPM, and the new P20 moves to 16 nm FinFET. Just as with the Helio P10 and Helio X20 (a 20 nm part), this SoC is produced at TSMC, in this case using their 16FF+ (FinFET Plus) technology. This should provide up to “40% higher speed and 60% power saving” compared to the 20 nm process found in the Helio X20, though of course real-world results will have to wait until handsets are available to test.

The Helio P20 also takes advantage of LPDDR4X, and is “the world’s first SoC to support low power double data rate random access memory” according to MediaTek. The company says this new memory provides “70 percent more bandwidth than the LPDDR3 and 50 percent power savings by lowering supply voltage to 0.6v”. Graphics are powered by ARM’s high-end Mali-T880 GPU, clocked at an impressive 900 MHz. All-important modem connectivity includes Cat 6 LTE with 2x carrier aggregation for speeds of up to 300 Mbps down and 50 Mbps up. The Helio P20 also supports up to 4K/30 video decode with H.264/H.265 support, and the 12-bit dual camera ISP supports up to 24 MP sensors.
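
That 70 percent figure roughly lines up with the transfer rates in the spec list below: LPDDR4X at 1600 MHz (3200 MT/s per pin) versus LPDDR3 at 933 MHz (1866 MT/s per pin). A quick sketch, assuming equal bus width so that bandwidth scales with the per-pin rate:

    # Ballpark check of MediaTek's "70 percent more bandwidth" claim for LPDDR4X
    lpddr4x_clock_mhz = 1600   # from the spec list below
    lpddr3_clock_mhz = 933

    lpddr4x_mt_s = lpddr4x_clock_mhz * 2   # DDR transfers twice per clock: 3200 MT/s
    lpddr3_mt_s = lpddr3_clock_mhz * 2     # 1866 MT/s

    gain = lpddr4x_mt_s / lpddr3_mt_s - 1
    print(f"Per-pin transfer rate gain: {gain:.0%}")   # ~71%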

Specs from MediaTek:

  • Process: 16nm
  • Apps CPU: 8x Cortex-A53, up to 2.3GHz
  • Memory: Up to 2 x LPDDR4X 1600MHz (up to 6GB) + 1x LPDDR3 933MHz (up to 4GB) + eMMC 5.1
  • Camera: Up to 24MP at 24FPS w/ZSD, 12bit Dual ISP, 3A HW engine, Bayer & Mono sensor support
  • Video Decode: Up to 4Kx2K 30fps H.264/265
  • Video Encode: Up to 4Kx2K 30fps H.264
  • Graphics: Mali T-880 MP2 900MHz
  • Display: FHD 1920x1080 60fps. 2x DSI for dual display
  • Modem: LTE FDD TDD R.11 Cat.6 with 2x20 CA. C2K SRLTE. L+W DSDS support
  • Connectivity: WiFiac/abgn (with MT6630). GPS/Glonass/Beidou/BT/FM.
  • Audio: 110db SNR & -95db THD

It’s interesting to see SoC makers experiment with less complex CPU designs after a generation of multi-cluster (big.LITTLE) SoCs, as even the current flagship Qualcomm SoC, the Snapdragon 820, has reverted to a straight quad-core design. The P20 is expected to be in shipping devices by the second half of 2016, and we will see how this configuration performs once some devices using this new P20 SoC are in the wild.

Full press release after the break:

Source: MediaTek

TSMC Allegedly Wants 5nm by 2020

Subject: Graphics Cards, Processors | January 19, 2016 - 11:38 PM |
Tagged: TSMC

Digitimes is reporting on statements that were allegedly made by TSMC co-CEO Mark Liu. We are currently seeing 16nm parts come out of the foundry, and that node is expected to be used in the next generation of GPUs, replacing the long-running 28nm node that launched with the GeForce GTX 680. (It's still unannounced whether AMD and NVIDIA will use 14nm FinFET from Samsung or GlobalFoundries, or 16nm FinFET from TSMC.)

Update (Jan 20th, @4pm EST): Couple minor corrections. Radeon HD 7970 launched at 28nm first by a couple of months. I just remember NVIDIA getting swamped in delays because it was a new node, so that's probably why I thought of the GTX 680. Also, AMD announced during CES that they will use GlobalFoundries to fab their upcoming GPUs, which I apparently missed. We suspect that NVIDIA will use TSMC, and have assumed that for a while, but it hasn't been officially announced yet (if ever).

tsmc.jpg

According to their projections, which (again) are filtered through Digitimes, the foundry expects to have 7nm in the first half of 2018. They also expect to introduce extreme ultraviolet (EUV) lithography with 5nm in 2020. Given that solid silicon has a lattice spacing of ~0.54nm at room temperature, 7nm features would span only about 13 atoms, and 5nm features only about 9 atoms.
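
Those atom counts are just the quoted feature size divided by silicon's lattice constant, a rough sketch:

    # Rough atom-count estimate: feature size divided by silicon's lattice spacing
    silicon_lattice_nm = 0.543   # approximate lattice constant of silicon at room temperature

    for node_nm in (7, 5):
        atoms = node_nm / silicon_lattice_nm
        print(f"{node_nm}nm feature: ~{atoms:.0f} lattice spacings")   # ~13 and ~9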

We continue the march toward the end of silicon lithography.

Even if the statement is correct, much can happen between now and then. It wouldn't be the first time that I've seen a major foundry believe that a node would be available, only to end up having it delayed. I wouldn't hold my breath, but I might cross my fingers if my hands were free.

At the very least, we can assume that TSMC's roadmap is 16nm, 10nm, 7nm, and then 5nm.

Source: Digitimes

Rumors Surrounding the LG NUCLUN 2 SoC

Subject: Processors, Mobile | December 1, 2015 - 07:30 AM |
Tagged: TSMC, SoC, LG, Intel, arm

So this story came out of nowhere. Whether the rumors are true or false, I am stuck on how everyone seems to be talking about it with a casual deadpan. I spent a couple hours Googling whether I missed some big announcement that made Intel potentially fabricating ARM chips a mundane non-story. Pretty much all that I found was Intel allowing Altera to make FPGAs with embedded ARM processors in a supporting role, which is old news.

simpsons-2015-skinner-out-of-touch.jpg

Image Credit: Internet Memes...

The rumor is that Intel and TSMC were both vying to produce LG's NUCLUN 2 SoC. This part is said to house two quad-core ARM modules in a typical big.LITTLE formation. Samples were allegedly produced, with Intel's part (2.4 GHz) able to clock around 300 MHz faster than TSMC's offering (2.1 GHz). Clock rate is highly dependent upon the “silicon lottery,” so this is an area where production maturity can help. Intel's sample would also be manufactured at 14nm (versus 16nm from TSMC, although these numbers mean less than they used to). LG was also, again allegedly, interested in Intel's LTE modem. According to the rumors, LG went with TSMC because they felt Intel couldn't keep up with demand.

Now that the rumor has been reported... let's step back a bit.

I talked with Josh a couple of days ago about this post. He's quite skeptical (as I am) about the whole situation. First and foremost, it takes quite a bit of effort to port a design to a different manufacturing process. LG could do it, but it's questionable, especially for what is only the company's second chip ever. Moreover, I still believe that Intel doesn't want to manufacture chips that directly compete with its own. x86 in phones is still not a viable business, but Intel hasn't given up on it, and you would think giving up would be a prerequisite for fabbing a competitor's ARM SoC.

So this whole thing doesn't seem right.

Source: Android