Nvidia's renamed Tegra K1 SoC uses Denver and Kepler

Subject: General Tech | January 6, 2014 - 02:08 PM |
Tagged: tegra k1, tegra, SoC, nvidia, kepler, k1, cortex a15, CES, arm, A15

Project Logan, now known as the Tegra K1, is the first big news out of CES from NVIDIA and represents a bit of a change from what we were expecting.  The current belief was that the SoC would have four 28nm Cortex A15 processors, but that will be only one flavour of the K1; a Denver-based dual-core version will also be released.  Those ARMv8 64-bit processors will natively handle 64-bit applications, while the A15 version that The Tech Report had taken pictures of will be limited to 32-bit applications, though that will not matter in many mobile applications.  You should also check out Ryan's deep dive into the new Denver and Kepler version here.

die.jpg

"In early 2011, during a CES press event, Nvidia revealed its Project Denver CPU initiative. On Sunday evening, at another CES press conference, the company provided a glimpse of the first Denver-based processor: the Tegra K1. This next-generation SoC features dual Denver CPU cores clocked at up to 2.5GHz. The cores were designed by Nvidia, and they're compatible with the 64-bit ARMv8 instruction set. They have a seven-way superscalar pipeline and a hefty 192KB of L1 cache."

Here is some more Tech News from around the web:

Tech Talk

 

Coverage of CES 2014 is brought to you by AMD!

PC Perspective's CES 2014 coverage is sponsored by AMD.

Follow all of our coverage of the show at http://pcper.com/ces!

Author:
Subject: Mobile
Manufacturer: NVIDIA

Once known as Logan, now known as K1

NVIDIA has bet big on Tegra.  Since the introduction of the SoC's first iteration, that much was clear.  With the industry push to mobile computing and the decreased importance of the classic PC design, developing and gaining traction with a mobile processor was not only an expansion of the company’s portfolio but a critical shift in the mindset of a graphics giant. 

The problem thus far is that while NVIDIA continues to enjoy success in the markets of workstation and consumer discrete graphics, the Tegra line of silicon-on-chip processors has faltered.  Design wins have been tough to come by. Other companies with feet already firmly planted on this side of the hardware fence continue to innovate and seal deals with customers.  Qualcomm is the dominant player for mobile processors with Samsung, MediaTek, and others all fighting for the same customers NVIDIA needs.  While press conferences and releases have been all smiles and sunshine since day one, the truth is that Tegra hasn’t grown at the rate NVIDIA had hoped.

Solid products based on NVIDIA Tegra processors have been released.  The first Google Nexus 7 used the Tegra 3 processor, and was considered the best Android tablet on the market by most, until it was succeeded by the 2013 iteration of the Nexus 7 this year.  Tegra 4 slipped backwards, though – the NVIDIA SHIELD mobile gaming device was the answer for a company eager to show the market they built compelling and relevant hardware.  It has only partially succeeded in that task.

denver2.jpg

With today’s announcement of the Tegra K1, previously known as Logan or Tegra 5, NVIDIA hopes to once again spark a fire under partners and developers, showing them that NVIDIA’s dominance in the graphics field of the PC has clear benefits for the mobile segment as well.  During a meeting with NVIDIA about Tegra K1, Dan Vivoli, Senior VP of marketing and a 16-year employee, equated the release of the K1 to the original GeForce GPU.  That is a lofty ambition and puts a lot of pressure on the entire Tegra team, not to mention the K1 product itself, to live up to.

Tegra K1 Overview

What we previously knew as Logan or Tegra 5 (and actually it was called Tegra 5 until just a couple of days ago) is now being released as the Tegra K1.  The ‘K’ designation indicates the graphics architecture that powers the SoC, in this case Kepler.  Also, it’s the first one.  So, K1.

The CPU complex of the Tegra K1 looks very familiar and includes four ARM Cortex-A15 “r3” cores and 2MB of L2 cache, with a fifth A15 core used for lower power situations.  This 4+1 design is the same one introduced with the Tegra 4 processor last year and allows NVIDIA to implement its own unique take on the “big.LITTLE” style of design.  Some slight modifications to the cores are included with Tegra K1 that improve performance and efficiency, but not by much – the main CPU is very similar to the Tegra 4's.
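NVIDIA has never published the governor logic behind the 4+1 arrangement, but the basic idea is simple enough to sketch; the threshold value, function name, and cluster labels below are purely illustrative assumptions, not NVIDIA's implementation:

```python
# Illustrative sketch of 4+1 cluster selection (not NVIDIA's actual code).
# At light load, work runs on the single low-leakage companion core;
# past a load threshold, the four performance-tuned A15s take over.
# Because all five cores are Cortex-A15s, switching clusters needs no
# instruction-set translation, only a quick state migration.

def pick_cluster(cpu_load_pct, threshold_pct=20):
    """Return which cluster a hypothetical governor would run on."""
    if cpu_load_pct < threshold_pct:
        return "companion"  # one low-power A15
    return "main"           # four full-speed A15s

print(pick_cluster(5))   # companion
print(pick_cluster(75))  # main
```

The key contrast with a classic big.LITTLE pairing (e.g. A15 + A7) is that here both "clusters" use the same core design, just tuned differently for leakage versus speed.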

NVIDIA also unveiled late last night another version of the Tegra K1, one that replaces the quad A15 cores with two of the company's custom-designed Denver CPU cores.  Project Denver, announced in early 2011, is NVIDIA's attempt at building its own core design based on the ARMv8 64-bit ISA.  This puts this iteration of the Tegra K1 in the same class as custom designs like Apple's A7 and Qualcomm's Krait.  When these are finally available in the wild, it will be incredibly intriguing to see how well NVIDIA's architects did in the first true CPU design from the GPU giant.

Continue reading about NVIDIA's new Tegra K1 SoC with Kepler-based graphics!


ST Ericsson Shows off First FD-SOI Product

Subject: Editorial | January 16, 2013 - 09:41 PM |
Tagged: ST Ericsson, planar, PD-SOI, L8580, FinFET, FD-SOI, Cortex A9, cortex a15, arm

SOI has been around for some time now, but in partially depleted form (PD-SOI).  Quite a few manufacturers have utilized PD-SOI for their products, such as AMD and IBM (probably the two largest producers of SOI-based parts).  Oddly enough, Intel has shunned SOI wafers altogether.  One would expect Intel to spare no expense to have the fastest semiconductor-based chips on the market, but SOI did not provide enough advantages for the chip behemoth to outweigh the nearly 10% increase in wafer and production costs.  There were certainly quite a few interesting properties to PD-SOI, but Intel was able to find ways around bulk silicon’s limitations.  These non-SOI improvements include stress and strain engineering, low-K dielectrics, high-K metal gates, and now 3D FinFET technology.  Intel simply did not need SOI to achieve the performance they were looking for while still using bulk silicon wafers.

stlogo.jpg

Things started looking a bit grim for SOI as a technology a few years back.  AMD was starting to back out of utilizing SOI for sub-32 nm products, and IBM was slowly shifting away from producing chips based on their Power technology.  PD-SOI’s days seemed numbered.  And they are.  That is ok though, as the technology will see a massive uptake with the introduction of Fully Depleted SOI (FD-SOI) wafers.  I will not go into the technology in full right now, but expect another article in the future.  I mentioned in a tweet some days ago that in manufacturing, materials are still king.  This looks to hold true with FD-SOI.

Intel had to utilize 3D FinFETs at 22 nm because they simply could not get the performance out of bulk silicon and planar structures.  There are advantages and disadvantages to these structures.  The advantage is that better power characteristics can be attained without using exotic materials, all the while keeping bins high; the disadvantage is the increased complexity of wafer production with such structures.  It is arguable that the increase in complexity completely offsets the price premium of an SOI-based solution.  We have also seen with the Intel process that while power consumption is decreased as compared to the previous 32 nm process, the switching performance vs. power consumption is certainly not optimal.  Hence we have not seen Intel release Ivy Bridge parts that are clocked significantly faster than last generation's Sandy Bridge chips.

FD-SOI with planar structures at 22 nm and 20 nm promises improved power characteristics as compared to bulk/FinFET.  It also looks to improve overall power vs. clockspeed as compared to bulk/FinFET.  In a nutshell this means better power consumption as well as a jump in clockspeed as compared to previous generations.  Gate-first designs using FD-SOI could be very good, but industry analysts say that gate-last designs could be “spectacular”.

SOIConsortiumFDSOIBulk.jpg

So what does this have to do with ST Ericsson?  They are one of the first companies to show a product based on 28 nm FD-SOI technology.  The ARM-based NovaThor L8580 is a dual Cortex A9 design with the graphics portion being the IMG SGX544.  At first glance we would think that ST is behind the ball, as other manufacturers are releasing Cortex A15 parts which improve IPC by a significant amount.  Then we start digging into the details.

The fastest Cortex A9 designs that we have seen so far have been clocked around 1.5 GHz.  The L8580 can be clocked up to 2.5 GHz.  Whatever IPC improvements we see with A15 are soon washed away by the sheer clockspeed advantage that the L8580 has.  While it has been rumored that the Tegra 4 will be clocked up to 2 GHz in tablet form, ST is able to get the L8580 to 2.5 GHz in a smartphone.  NVIDIA utilizes a 5th core to improve low power performance, but ST was able to get their chip to run at 0.6v in low power mode.  This decrease in complexity combined with what appears to be outstanding electrical and thermal characteristics makes this a very interesting device.
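That clockspeed argument is easy to sanity check with back-of-the-envelope math, assuming performance scales roughly as IPC times clock; the ~40% A15-over-A9 IPC uplift used below is an assumed round number for illustration, not a measured figure:

```python
# Rough comparison of the 2.5 GHz A9-based L8580 against a 1.7 GHz
# Cortex A15 smartphone-class part (Exynos 5 Dual territory), under
# the simplifying assumption that perf ~ IPC x clock.
a9_ipc, a15_ipc = 1.0, 1.4            # A9 normalized to 1.0; ~40% uplift assumed
a9_clock_ghz, a15_clock_ghz = 2.5, 1.7

a9_score = a9_ipc * a9_clock_ghz      # 2.5
a15_score = a15_ipc * a15_clock_ghz   # ~2.38

# The L8580's clock advantage roughly cancels the A15's IPC advantage.
print(a9_score, round(a15_score, 2))
```

Under these assumptions the two land within a few percent of each other, which is the heart of ST Ericsson's pitch: A15-class performance out of a smaller, cheaper A9 design.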

The Cortex A9 cores are not the only parts to see an improvement in clockspeed and power consumption.  The well known and extensively used SGX544 graphics portion runs at 600 MHz in a handheld device, around 20% higher clocked than other comparable parts.

L8580.jpg

When we add all these things together we have a product that appears to be head and shoulders above current parts from Qualcomm and Samsung.  It also appears that these parts are comparable to, if not slightly ahead of, the announced next generation of parts from the Cortex A15 crowd.  So will ST Ericsson run away with the market and be included in every new handheld sold from now until the first 22/20 nm parts are released?  Unfortunately for ST Ericsson, this is not the case.  If there is an Achilles' heel to the L8580, it is production capability.  ST Ericsson started production on FD-SOI wafers this past spring, but it was processing hundreds of wafers a month vs. the thousands that are required for full scale production.  We can assume that ST Ericsson has improved this situation, but they are not exactly a powerhouse when it comes to manufacturing prowess.  They simply do not seem to have the FD-SOI production capacity to handle orders from more than a handful of cellphone and tablet manufacturers.

ST Ericsson has a very interesting part, and it certainly looks to prove the capabilities of FD-SOI when compared to competing products being produced on bulk silicon.  The NovaThor L8580 will gain some new customers with its combination of performance and power characteristics, even though it is using the “older” Cortex A9 design.  FD-SOI has certainly caught the industry's attention.  There are more FD-SOI factoids floating around that I want to cover soon, but these will have to wait.  For the time being ST Ericsson is on the cutting edge when it comes to SOI, and their proof of concept L8580 seems to have exceeded expectations.

Source: ST Ericsson

CES 2013: NVIDIA Officially Releases Tegra 4 SoC

Subject: General Tech | January 7, 2013 - 03:01 AM |
Tagged: tegra 4, SoC, nvidia, cortex a15, ces 2013, CES, arm

Details about NVIDIA’s latest system on a chip (SoC) for mobile devices leaked last month. On Sunday, NVIDIA officially released its Tegra 4 chip, and talked up more details on the new silicon.

Interestingly, the leaked information from the slide held true over the weekend when NVIDIA officially unveiled the chip during a press conference. The new chip is manufactured on a 28nm, low power, high-k metal gate process.  It features four ARM Cortex-A15 CPU cores running at up to 1.9GHz, one (additional) low power Cortex-A15 companion core, and an NVIDIA GeForce GPU with 72 cores (not a unified shader design, unfortunately). In addition, the Tegra 4 SoC includes the company’s i500 programmable soft modem and a number of fixed function hardware blocks used for audio and image processing.

According to Anandtech, the majority of GPU cores in the Tegra 4 are 20-bit pixel shaders, though exact specifications on the GPU are still unknown. Further, the i500 modem currently supports LTE UE Category 3 on the WCDMA band, with a Category 4 modem expected in the future.

Nvidia Tegra 4 SoC at CES 2013.jpg

Image Credit: Ars Technica, who attended the NVIDIA press conference.

Tegra 4 will support dual channel LP-DDR3 memory, USB 3.0, and a technology that NVIDIA is calling its Computational Photography Architecture that allegedly will allow real-time HDR imagery with still and video shoots.

According to NVIDIA, Tegra 4 will be noticeably faster than its predecessors and the competing SoCs from Apple and Qualcomm et al. When compared to the Nexus 10 (Samsung Exynos 5 SoC) and the stock Android web browser, the Tegra 4 device (Chrome browser) opened pages in 27 seconds versus the Nexus 10’s 50 second benchmark time. Users will have to wait for retail devices with Tegra 4 hardware for independent benchmarks, however. Thanks to the higher top-end clockspeed and beefier GPU, you can expect Tegra 4 to be faster than Tegra 3, but until reviewers get their hands on Tegra 4-powered devices it is difficult to say just how much faster it is.
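For perspective, NVIDIA's quoted numbers work out to roughly a 1.85x advantage; since these are vendor-supplied figures using different browsers on each device, treat the result as a marketing ceiling rather than a benchmark:

```python
# NVIDIA's quoted page-load demo: Tegra 4 device running Chrome
# vs. a Nexus 10 (Exynos 5) running the stock Android browser.
nexus10_seconds = 50.0
tegra4_seconds = 27.0

speedup = nexus10_seconds / tegra4_seconds
print(f"{speedup:.2f}x")  # ~1.85x under NVIDIA's demo conditions
```

Independent reviews with matched browsers would be needed before that ratio means much.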

Speaking of hardware, the Tegra 4 chip will most likely be used in tablets (and not smartphones). Here’s hoping we see some prototype Tegra 4 devices or product announcements later this week at CES.

Coverage of CES 2013 is brought to you by AMD!

PC Perspective's CES 2013 coverage is sponsored by AMD.

Follow all of our coverage of the show at http://pcper.com/ces!

Source: Ars Technica

Intel versus ARM; the hunting cry of a krayt dragon

Subject: Systems | January 4, 2013 - 07:59 PM |
Tagged: arm, Intel, krayt, atom, qualcomm, cortex a15, tegra 3

AnandTech managed to get their hands on a Samsung-designed ARM Cortex A15 powered tablet, which they compared to several competitors such as Intel's Atom, Qualcomm's Krait and NVIDIA's Tegra 3.  The test names may seem unfamiliar, with SunSpider, Kraken and RIABench providing the performance comparisons, though the power consumption tests will be familiar to all.  Read on to see how the next generation of chips from the main contenders for your mobile device spending compare.

GreaterKrayt-WOSW.jpg

"The previous article focused on an admittedly not too interesting comparison: Intel's Atom Z2760 (Clover Trail) versus NVIDIA's Tegra 3. After much pleading, Intel returned with two more tablets: a Dell XPS 10 using Qualcomm's APQ8060A SoC (dual-core 28nm Krait) and a Nexus 10 using Samsung's Exynos 5 Dual (dual-core 32nm Cortex A15). What was a walk in the park for Atom all of the sudden became much more challenging. Both of these SoCs are built on very modern, low power manufacturing processes and Intel no longer has a performance advantage compared to Exynos 5."

Here are some more Systems articles from around the web:

Systems

Source: AnandTech

NVIDIA Tegra 4 Details Revealed By Leaked Slide

Subject: Processors, Mobile | December 19, 2012 - 03:26 AM |
Tagged: wayne, tegra 4, SoC, nvidia, cortex a15, arm

Earlier this year, NVIDIA showed off a roadmap for its Tegra line of mobile system on a chip (SoC) processors. Namely, the next generation Tegra 4 mobile chip is codenamed Wayne and will be the successor to the Tegra 3.

Tegra 4 will use a 28nm manufacturing process and feature improvements to the CPU, GPU, and IO components. Thanks to a leaked slide that appeared on Chip Hell, we now have more details on Tegra 4.

NVIDIA Tegra 4 Leaked Slide.jpg

The 28nm Tegra 4 SoC will keep the same 4+1 CPU design* as the Tegra 3, but it will use ARM Cortex A15 CPU cores instead of the Cortex A9 cores used in the current generation chips. NVIDIA is also improving the GPU portion, and Tegra 4 will reportedly feature a 72 core GPU based on a new architecture. Unfortunately, we do not have specifics on how that GPU is set up architecturally, but the leaked slide indicates that the GPU will be as much as 6x faster than NVIDIA’s own Tegra 3. It will allegedly be fast enough to power displays with resolutions from 1080p @ 120Hz to 4K (refresh rate unknown). Don’t expect to drive games at native 4K resolution; however, it should run a tablet OS fine. Interestingly, NVIDIA has included hardware to accelerate VP8 and H.264 video at up to 2560x1440 resolutions.

Additionally, Tegra 4 will feature support for dual channel DDR3L memory, USB 3.0, and hardware accelerated security options including HDCP, Secure Boot, and DRM, which may make Tegra 4 an attractive option for Windows RT tablets.

The leaked slide has revealed several interesting details on Tegra 4, but it has also raised some questions on the nitty-gritty details. Also, there is no mention of the dual core variant of Tegra 4 – codenamed Grey – that is said to include an integrated Icera 4G LTE cellular modem. Here’s hoping more details surface at CES next month!

* NVIDIA's name for a CPU that features four ARM CPU cores and one lower power ARM companion core.

Source: Chip Hell

A $250 Dual Core Cortex A15 powered Chromebook from Samsung

Subject: Mobile | November 23, 2012 - 02:59 PM |
Tagged: ubuntu, Chromebook, cortex a15, Samsung, linux, exynos 5

At $250 this Samsung Chromebook costs less than most tablets or phones but can outperform previous A9-powered models as well as the Atom D525.  The processor is Samsung's Exynos 5, a dual core A15 chip running at 1.7GHz with ARM's Mali-T604 graphics, accompanied by 2GB of DDR3 and a 16GB SSD.  It can be loaded with Ubuntu 13.04 and offers a compelling and inexpensive alternative to Sleekbooks and Ultrabooks: it weighs 2.5lbs, measures 11.4" x 8.09" x 0.69", and promises over 6 hours of battery life.  Check out how it performs at Phoronix.

samsung-chromebook-12.jpg

"Google recently launched the Samsung Chromebook that for $249 USD features an 11-inch display, a 16GB SSD, a promise of 6.5-hour battery life, and is backed by a Samsung Exynos 5 SoC. The Samsung Exynos 5 packs a 1.7GHz dual-core ARM Cortex-A15 processor with ARM Mali-T604 graphics. With using this new ARM Cortex-A15 chip plus the Samsung Chromebook not being locked down so it can be loaded up with a Linux distribution like Ubuntu or openSUSE, it was a must-buy for carrying out some interesting Cortex-A15 Linux benchmarks. The Exynos 5 Dual in this affordable laptop packs an impressive performance punch."

Here are some more Mobile articles from around the web:

Mobile

Source: Phoronix
Author:
Subject: Processors
Manufacturer:

Apple Produces the new A6 for the iPhone 5

 

Today is the day the world gets introduced to the iPhone 5.  I of course was very curious about what Apple would be bringing to market the year after the death of Steve Jobs.  The excitement leading up to the iPhone announcement was somewhat muted as compared to years past, and a lot of that could be attributed to what has been happening in the Android market.  Companies like Samsung and HTC have released new high end phones that are not only faster and more expansive than previous versions, but they also worked really well and were feature packed.  While the iPhone 5 will be another success for Apple, those somewhat dispassionate about the cellphone market will likely just shrug and say to themselves, “It looks like Apple caught up for the year, but too bad they didn’t really introduce anything groundbreaking.”

a6_01.jpg

If there was one area that many were anxiously awaiting, it was that of the SOC (system on a chip) that Apple would use for the iPhone 5.  Speculation went basically from using a fresh piece of silicon based on the A5X (faster clocks, smaller graphics portion) to having a quad core monster running at high speeds but still sipping power.  It seems that we actually got something in between.  This is not a bad thing, but as we go forward we will likely see that the silicon again only matches what other manufacturers have been using since earlier this year.

Click here to read the entire article.

Samsung Launches Cortex A15-based Exynos 5 Dual Core SoC

Subject: Mobile | August 10, 2012 - 05:12 AM |
Tagged: Samsung, Exynos 5250, exynos 5, dual core arm, cortex a15

A few months back, Samsung debuted its latest Exynos 4 quad core mobile System on a Chip (SoC) based on four Cortex A9 cores. The company recently released details of its next generation Exynos processor, only this time it is a dual core variant. The Samsung Exynos 5 Dual (Exynos 5250) is packing the latest mobile ARM technology with two ARM Cortex A15 CPU cores and a Mali T604 graphics core.

samsung-exynos-5-dual.jpg

The dual core processor is running at 1.7 GHz and features the NEON fixed function hardware for accelerated video decoding. Further, the Mali T604 GPU is based on ARM’s new Midgard architecture. The T604 includes a unified shader design with support for OpenGL ES 3.0 and the full OpenCL 1.1 profile. Not too shabby for a mobile GPU!

The Exynos 5250 also sees an upgrade in memory bandwidth, from 6.4 GB/s in the Exynos 4 to 12.8 GB/s between the processor and two-port LPDDR3 memory at up to 800MHz. The increased memory bandwidth, along with the new–and more powerful–processor and graphics hardware, enables Samsung to offer support for much higher resolution displays, up to WQXGA or 2560x1600 pixels.
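The 12.8 GB/s figure checks out if you assume two 32-bit LPDDR3 channels transferring on both clock edges; the channel width here is our assumption, since Samsung's announcement only quotes the aggregate number:

```python
# Reconstructing Samsung's quoted 12.8 GB/s memory bandwidth.
clock_hz = 800e6                  # LPDDR3 clock quoted by Samsung
transfers_per_s = clock_hz * 2    # DDR: data moves on both clock edges
bytes_per_transfer = 4            # 32-bit channel width (assumed)
ports = 2                         # the "two port" LPDDR3 interface

bandwidth_gbs = transfers_per_s * bytes_per_transfer * ports / 1e9
print(bandwidth_gbs)  # 12.8
```

The same arithmetic with a single port yields the Exynos 4's 6.4 GB/s, so the doubling appears to come from the second memory port rather than a faster clock.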

Other features of the new Exynos 5 dual core processor include USB 3.0 support, wireless display support, and a claimed ability to play back 1080p video at 60 FPS using Google’s VP8 video decoder (no word on H.264 performance, though the ARM processor’s NEON hardware should handle those videos well enough). Using a feature called PSR mode, the GPU is also allegedly able to use 20-times less power when displaying a static image (such as a web page or ebook page).

exynos-5-dual-features.jpg

According to the Android Authority, the first product to be powered by the new Samsung Exynos 5 processor will likely be the company’s upcoming Galaxy Tab 11.6 tablet. Quad core variants of the Exynos 5 should come out following the successful dual core launch.

The Cortex A15-based mobile processor is packing some impressive specifications, and it will be interesting to see Exynos 5-powered devices. Specifically, it will be interesting to see how it stacks up compared to products like NVIDIA’s Tegra 3, TI’s OMAP 5, and even Samsung’s own Exynos 4 quad core SoC. Are you excited about the new dual core SoC?

ARM aims to make TSMC the Fab of choice for their customers

Subject: General Tech | April 16, 2012 - 01:47 PM |
Tagged: arm, TSMC, fab, cortex a15, cortex-a9, 28nm, 40nm

ARM has developed some optimizations for their chips, provided that the customer purchasing them uses TSMC to fabricate them.  ARM has licensed a large variety of fabrication companies to produce their chips, but with their familiarity with TSMC's 28nm and 40nm processes they have been able to introduce performance-enhancing optimizations specific to TSMC.  It could smack a bit of favouritism but is much more likely to stem from the volume of TSMC's production as well as the maturity of the 40nm process node.  The 28nm node could be a bit of a problem for ARM, as we have seen that TSMC is not having an easy time producing enough good dies for their customers; this is why you cannot buy a GTX 680.  As The Inquirer points out, if ARM wants to make sure their customers can get their hands on reasonable volumes of chips, they will want to create optimizations specific to other manufacturers sooner rather than later.

arm-cortex-a15.jpg

"CHIP DESIGNER ARM has released a slew of optimisation packs for Taiwan Semiconductor Manufacturing Company's (TSMC) 28nm and 40nm process nodes.

ARM, which licenses designs to many chip designers, including Qualcomm, Texas Instruments, Nvidia and Samsung, has given TSMC a boost by offering processor optimisation packs for the firm's 28nm and 40nm process nodes. ARM claims the optimisation packs for its Cortex-A5, Cortex-A7, Cortex-A9 and Cortex-A15 processor cores help designers make use of TSMC's process node nuances to get the most out of their designs."

Here is some more Tech News from around the web:

Tech Talk

 

Source: The Inquirer