Subject: General Tech | March 30, 2017 - 01:20 PM | Jeremy Hellstrom
Tagged: Intel, 14nm, 14 nm FinFET
At Intel's Technology and Manufacturing Day event in San Francisco there was a lot of talk about how Intel's 14nm process technology compares to the 16nm, 14nm, and 10nm offerings of its competitors. Investors and enthusiasts are curious whether Intel can hold its lead in process tech, as Samsung seems to be on track to release chips fabbed on a 10nm process before Intel will. Intel rightly pointed out that not all process tech is measured the same way and that pitch measurements give only one part of the picture; in other words, Samsung's transistors might not actually be smaller than Intel's.
The Tech Report was present at that meeting and has written up an in-depth look at what Intel means when it disputes the competition's claims, as well as the rationale behind its belief that the 14nm node still has a lot of life left in it.
"As process sizes grow smaller and smaller, Intel believes that the true characteristics of those technology advances are being clouded by an over-reliance on a single nanometer figure. At its Technology and Manufacturing Day this week, the company defended its process leadership and proposed fresh metrics that could more accurately describe what a given process is capable of."
Here is some more Tech News from around the web:
- Scientists Discover Way To Transmit Taste of Lemonade Over Internet @ Slashdot
- There's a Samsung Galaxy S8 Microsoft Edition, for some reason @ The Inquirer
- 'Trash-80' escapes the dustbin of history with new TRS-80 emulator @ The Register
- Beyond Zelda: The first month of Switch games acts as a promising crystal ball @ Ars Technica
- ZX Spectrum Vega Plus backers complain of months-long refund delays @ The Register
- Microsoft wants screaming Windows fans, not just users @ The Register
- GDC 2017 and NVIDIA Editor's Day Coverage @ Neoseeker
Subject: Processors | January 3, 2017 - 03:54 PM | Jeremy Hellstrom
Tagged: z270, overclocking, kaby lake, Intel, i7-7700k, core i7-7700k, 7th generation core, 7700k, 14nm
Having already familiarized yourself with Intel's new Kaby Lake architecture and the i7-7700K processor in Ryan's review, you may now be wondering how well the new CPU overclocks for others. [H]ard|OCP received three i7-7700Ks and three different Z270 motherboards and set about overclocking them in combination to see what frequency they could reach. Only one of the chips was ever stable at 5GHz, though it is reassuring that it managed that on all three motherboards; the remaining two would only hit 4.8GHz, which is still not a bad result. Drop by to see their settings in full detail.
"After having a few weeks to play around with Intel's new Kaby Lake architecture Core i7-7700K processors, we finally have some results that we want to discuss when it comes to overclocking and the magic 5GHz many of us are looking for, and what we think your chances are of getting there yourself."
Here are some more Processor articles from around the web:
- Intel's Core i7-7700K 'Kaby Lake' CPU @ The Tech Report
- Intel Kaby Lake i7-7700K & i5-7600K Review @ Hardware Canucks
- Intel Core i7-7700K vs 6700K: 22 Games, RX 480 & GTX 1080 @ techPowerUp
- Intel Kaby Lake Core i7-7700K Performance & Z270 Chipset Overview @ Techgage
- Intel 7th Generation Core i7 7700K Processor Review @ OCC
- Intel Kaby Lake Core i7-7700K IPC @ [H]ard|OCP
- Core i5-6400 @ Hardware Secrets
- FX-4300 @ Hardware Secrets
- AMD's New Ryzen CPU - SMT and IPC @ [H]ard|OCP
It probably doesn't surprise any of our readers that there has been a tepid response to the leaks and reviews that have come out about the new Core i7-7700K CPU ahead of the scheduled launch of Kaby Lake-S from Intel. Replacing the Skylake-based 6700K part as the new "flagship" consumer enthusiast CPU, the 7700K has quite a bit stacked against it. We know that Kaby Lake is the first "optimize" step in Intel's new tick-tock-optimize sequence, and thus there are few architectural changes to any portion of the chip. However, that does not mean that the 7700K and Kaby Lake in general don't offer new capabilities (HEVC) or performance (clock speed).
The Core i7-7700K is in an interesting spot as well with regard to motherboards and platforms. Nearly all motherboards that run the Z170 chipset will be able to run the new Kaby Lake parts without requiring an upgrade to the newly released Z270 chipset. However, the likelihood that any user on a Z170 platform today using a Skylake processor will feel the NEED to upgrade to Kaby Lake is minimal, to say the least. The Z270 chipset only offers a couple of new features compared to last generation, so the upgrade path is again somewhat limited in excitement.
Let's start by taking a look at the Core i7-7700K and how it compares to the previous top-end parts from the consumer processor line and then touch on the changes that Kaby Lake brings to the table.
With the beginning of CES just days away (as I write this), Intel is taking the wrapping paper off of its first gift of 2017 to the industry. As you can see from the slide above, more than just the Kaby Lake-S consumer socketed processors are launching today, though some components, including Iris Plus graphics implementations and quad-core notebook parts, will need to wait for another day.
For DIY builders and OEMs, Kaby Lake-S, now known as the 7th Generation Core Processor family, offers some changes and additions. First, we get a dual-core HyperThreaded processor with an unlocked designation in the Core i3-7350K. And, along with the aforementioned Z270 chipset, Kaby Lake will be the first platform compatible with Intel Optane memory. (To be extra clear, I was told that previous processors will NOT be able to utilize Optane in its M.2 form factor.)
Though we have already witnessed Lenovo announcing products using Optane, this is the first official discussion of it from Intel. Optane memory will be available in M.2 modules that can be installed on Z270 motherboards, improving snappiness and responsiveness. It seems this will launch later in the quarter, as we don't yet have any performance numbers or benchmarks demonstrating the advantages Intel touts. I know both Allyn and I are very excited to see how this differs from previous Intel caching technologies.
| | Core i7-7700K | Core i7-6700K | Core i7-5775C | Core i7-4790K | Core i7-4770K | Core i7-3770K |
| --- | --- | --- | --- | --- | --- | --- |
| Architecture | Kaby Lake | Skylake | Broadwell | Haswell | Haswell | Ivy Bridge |
| Socket | LGA 1151 | LGA 1151 | LGA 1150 | LGA 1150 | LGA 1150 | LGA 1155 |
| Base Clock | 4.2 GHz | 4.0 GHz | 3.3 GHz | 4.0 GHz | 3.5 GHz | 3.5 GHz |
| Max Turbo Clock | 4.5 GHz | 4.2 GHz | 3.7 GHz | 4.4 GHz | 3.9 GHz | 3.9 GHz |
| Memory Speeds | Up to 2400 MHz | Up to 2133 MHz | Up to 1600 MHz | Up to 1600 MHz | Up to 1600 MHz | Up to 1600 MHz |
| Cache (L4 Cache) | 8MB | 8MB | 6MB (128MB) | 8MB | 8MB | 8MB |
| System Bus | DMI3 - 8.0 GT/s | DMI3 - 8.0 GT/s | DMI2 - 6.4 GT/s | DMI2 - 5.0 GT/s | DMI2 - 5.0 GT/s | DMI2 - 5.0 GT/s |
| Graphics | HD Graphics 630 | HD Graphics 530 | Iris Pro 6200 | HD Graphics 4600 | HD Graphics 4600 | HD Graphics 4000 |
| Max Graphics Clock | 1.15 GHz | 1.15 GHz | 1.15 GHz | 1.25 GHz | 1.25 GHz | 1.15 GHz |
Subject: General Tech | November 5, 2016 - 07:01 AM | Scott Michaud
Tagged: Samsung, euv, 7nm, 14nm, 10nm
As the comments usually remind us, the smallest feature size varies in interpretation from company to company and from node to node. You cannot assume how Samsung compares with Intel, GlobalFoundries, or TSMC, better or worse, based on the nanometer rating alone. In fact, any specific fabrication process, compared to another, might be better in some ways yet worse in others.
With all of that in mind, Samsung has announced the progress they've made with 14nm, 10nm, and 7nm fabrication processes. First, they plan to expand 14nm production with 14LPU. I haven't been able to figure out what this specific branding stands for, but I'm guessing it's something like “Low Power Ultra” given that it's an engineering name and those are usually super literal (like the other suffixes).
As for the other suffixes: Samsung begins manufacturing a node with Low Power Early (LPE). From there, they improve upon their technique, providing higher performance and/or lower power, and call the new process Low Power Plus (LPP). LPC, which I believe stands for something like Low Power Cost (I haven't seen the acronym officially expanded), removes a few manufacturing steps to make the end product cheaper, and LPU is an extension of LPC with higher performance. Add the appropriate acronym as a suffix to the claimed smallest feature size, and you get the name of the node: xxLPX.
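The naming convention above can be captured in a tiny lookup; a sketch assuming the expansions given in the text (the LPC and LPU expansions are this article's guesses, not official Samsung names):

```python
# Samsung foundry node naming: claimed feature size (nm) + a
# process-maturity suffix. The LPC/LPU expansions are the article's
# guesses; Samsung has not officially expanded those acronyms.
SUFFIXES = {
    "LPE": "Low Power Early",  # first version of a node
    "LPP": "Low Power Plus",   # refined: higher perf and/or lower power
    "LPC": "Low Power Cost",   # fewer steps, cheaper (unofficial expansion)
    "LPU": "Low Power Ultra",  # LPC derivative with higher performance (guess)
}

def node_name(size_nm, suffix):
    """Build a node name like '14LPU' from size and suffix."""
    assert suffix in SUFFIXES, f"unknown suffix: {suffix}"
    return f"{size_nm}{suffix}"

print(node_name(14, "LPU"))  # 14LPU
print(node_name(10, "LPE"))  # 10LPE
```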
14LPU is still a ways out, though. Their second announcement, 10LPU, is expected to be their cost-reduction step for 10nm, which I interpret to mean they are omitting LPC from their 10nm production. You may think this is very soon, given that 10LPE just started mass production a few weeks ago, but this is actually quite an early announcement in terms of overall 10nm production. The process design kits (PDKs) for both 14LPU and 10LPU, which hardware vendors use to design their integrated circuits, won't ship until 2Q17, and products will be a while behind that.
To close out, Samsung reiterated that 7nm is planned to use extreme ultraviolet lithography (EUV). They have apparently created a wafer using 7nm EUV, but images do not seem to be provided.
Subject: Processors | September 2, 2016 - 01:39 AM | Tim Verry
Tagged: IBM, power9, power 3.0, 14nm, global foundries, hot chips
Earlier this month at the Hot Chips symposium, IBM revealed details on its upcoming Power9 processors and architecture. The new chips are aimed squarely at the data center and will be used for massive number crunching in big data and scientific applications in servers and supercomputer nodes.
Power9 is a big play from Big Blue that will help the company expand its presence in the Intel-ruled datacenter market. Power9 processors are due out in 2018 and will be fabricated at Global Foundries on a 14nm HP FinFET process. The chips feature eight billion transistors and utilize an “execution slice microarchitecture” that lets IBM combine “slices” of fixed point, floating point, and SIMD hardware into cores that support various levels of threading. Specifically, 2 slices make an SMT4 core and 4 slices make an SMT8 core. IBM will have Power9 processors with 24 SMT4 cores or 12 SMT8 cores (more on that later). Further, Power9 is IBM’s first processor to support its Power 3.0 instruction set.
According to IBM, its Power9 processors are 50% to 125% faster than the previous-generation Power8 CPUs, depending on the application tested. The performance improvement is thanks to a doubling of the number of cores as well as a number of other smaller improvements, including:
- A pipeline five cycles shorter than Power8's
- A single-instruction random number generator (RNG)
- Hardware assisted garbage collection for interpreted languages (e.g. Java)
- New interrupt architecture
- 128-bit quad-precision floating point and decimal math support
  - Important for the finance and security markets: massive databases and monetary math
  - IEEE 754 compliant
- CAPI 2.0 and NVLink support
- Hardware accelerators for encryption and compression
The Power9 processor features 120 MB of direct-attached eDRAM that acts as an L3 cache (256 GB/s). The chips offer up to 7TB/s of aggregate fabric bandwidth, which certainly sounds impressive, though that figure is everything added together. With that said, there is a lot going on under the hood. Power9 supports 48 lanes of PCI-E 4.0 (2 GB/s per lane per direction); 48 proprietary 25Gbps accelerator lanes, which will be used for NVLink 2.0 to connect to NVIDIA GPUs and for CAPI 2.0 (Coherent Accelerator Processor Interface) to connect to FPGAs, ASICs, and other accelerators or new memory technologies; and four 16Gbps SMP links (NUMA) used to combine four quad-socket Power9 boards into a single 16-socket “cluster.”
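As a rough sanity check on those I/O figures, the per-interface numbers can be added up from the lane counts and rates quoted above. Note that raw external I/O alone falls well short of the 7TB/s aggregate figure, which also counts memory, cache, and internal fabric traffic.

```python
# Back-of-envelope Power9 external I/O bandwidth, per direction, using
# the lane counts and rates quoted above.

# PCIe 4.0: 48 lanes at ~2 GB/s per lane per direction.
pcie_gb_s = 48 * 2            # 96 GB/s

# Proprietary 25 Gbps accelerator lanes (NVLink 2.0 / CAPI 2.0): 48 lanes.
# Convert gigabits to gigabytes by dividing by 8 (ignoring encoding overhead).
accel_gb_s = 48 * 25 / 8      # 150.0 GB/s

print(pcie_gb_s, accel_gb_s)  # 96 150.0
```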
These are processors built to scale and tackle big data problems. In fact, not only is Google interested in Power9 to power its services, but the US Department of Energy will be building two supercomputers using IBM’s Power9 CPUs and NVIDIA’s Volta GPUs. Summit and Sierra will offer between 100 and 300 petaflops of compute power and will be installed at Oak Ridge National Laboratory and Lawrence Livermore National Laboratory, respectively. There, they will enable researchers to visualize the internals of a virtual light water reactor, research methods to improve fuel economy, and delve further into bioinformatics.
The Power9 processors will be available in four variants that differ in the number of cores and the number of threads each core supports. The chips are broken down into Power9 SO (Scale Out) and Power9 SU (Scale Up), and each group has two processors depending on whether you need a greater number of weaker cores or a smaller number of more powerful cores. Power9 SO chips are intended for multi-core systems and will be used in servers with one or two sockets, while Power9 SU chips are for multi-processor systems with up to four sockets per board and up to 16 total sockets per cluster when four four-socket boards are linked together. Power9 SO uses DDR4 memory and supports a theoretical maximum of 4TB of memory (1TB with today’s 64GB DIMMs) at 120 GB/s of bandwidth, while Power9 SU uses IBM’s buffered “Centaur” memory scheme, which allows systems to address a theoretical maximum of 8TB of memory (2TB with 64GB DIMMs) at 230 GB/s. In other words, the SU series is Big Blue’s “big guns.”
A photo of the 24 core SMT4 Power9 SO die.
Here is where it gets a bit muddy. The processors are further broken down into SMT4 and SMT8 variants, and both Power9 SO and Power9 SU come in both options. There are Power9 CPUs with 24 SMT4 cores and CPUs with 12 SMT8 cores. IBM indicated that SMT4 (four threads per core) is suited to systems running Linux and virtualization with an emphasis on high core counts, while SMT8 (eight threads per core) is a better option for large logical partitions (one big system, versus partitioning the compute cluster into smaller VMs as above) and for running IBM’s hypervisor. In either case (24 SMT4 or 12 SMT8) the total number of threads is the same; you choose whether you want fewer “stronger” threads on each core or more (albeit weaker) threads per core, depending on which your workloads are optimized for.
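Either way, the total thread count per chip works out the same; a quick check of the arithmetic:

```python
# 24 SMT4 cores vs 12 SMT8 cores: identical total thread count per chip.
smt4_threads = 24 * 4  # 24 cores, 4 threads each
smt8_threads = 12 * 8  # 12 cores, 8 threads each
assert smt4_threads == smt8_threads
print(smt4_threads)  # 96
```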
Servers supporting Power9 are already under development by Google and Rackspace and blueprints are even available from the OpenPower Foundation. Currently, it appears that Power9 SO will emerge as soon as the second half of next year (2H 2017) with Power9 SU following in 2018 which would line up with the expected date for the Summit and Sierra supercomputer launches.
This is not a chip that will be showing up in your desktop any time soon, but it is an interesting high-performance processor! I will be keeping an eye out for updates from Oak Ridge.
Subject: Processors | July 28, 2016 - 02:47 PM | Tim Verry
Tagged: kaby lake, Intel, gt3e, coffee lake, 14nm
Intel will allegedly be releasing another 14nm processor following Kaby Lake (which is itself a 14nm successor to Skylake) in 2018. The new processors are code named "Coffee Lake" and will be released alongside low power runs of 10nm Cannon Lake chips.
Not much information is known about Coffee Lake outside of leaked slides and rumors, but the first processors, slated to launch in 2018, will be mainstream mobile chips in U and HQ flavors (15W to 28W and 35W to 45W TDPs, respectively). Of course, these processors will be built on a very mature 14nm process, with the usual small performance and efficiency gains beyond Skylake and Kaby Lake. The chips should have a better graphics unit, but perhaps more interesting is that the slides suggest Coffee Lake will be the first architecture where Intel brings "hexacore" (6 core) processors into mainstream consumer chips! The HQ-class Coffee Lake processors will reportedly come in two-, four-, and six-core variants with Intel GT3e-class GPUs, while the lower-power U-class chips top out at dual cores with GT3e-class graphics. This is interesting because Intel has previously held back six-core CPUs for its more expensive and higher-margin HEDT and Xeon platforms.
Of course, 2018 is also the year for Cannon Lake, which would have been the "tock" in Intel's old tick-tock schedule (now retired): the chips move to a smaller process node, and Intel will then improve on the 10nm process in future architectures. Cannon Lake is supposed to be built on the tiny 10nm node, and it appears the first chips on it will be ultra-low-power versions for laptops and tablets. Occupying the ULV platform's U-class (15W) and Y-class (4.5W) segments, Cannon Lake CPUs will be dual cores with GT2 graphics. These chips should sip power while delivering performance comparable to Kaby and Coffee Lake, perhaps even matching the Coffee Lake U processors!
Stay tuned to PC Perspective for more information!
Subject: Mobile | February 12, 2016 - 04:26 PM | Sebastian Peak
Tagged: X16 modem, qualcomm, mu-mimo, modem, LTE, Gigabit LTE, FinFET, Carrier Aggregation, 14nm
Qualcomm’s new X16 LTE Modem is the industry's first Gigabit LTE chipset to be announced, achieving speeds of up to 1 Gbps using 4x Carrier Aggregation. The X16 succeeds the recently announced X12 modem, improving on the X12's 3x Carrier Aggregation and moving from LTE CAT 12 to CAT 16 on the downlink, while retaining CAT 13 on the uplink.
"In order to make a Gigabit Class LTE modem a reality, Qualcomm added a suite of enhancements – built on a foundation of commercially-proven Carrier Aggregation technology. The Snapdragon X16 LTE modem employs sophisticated digital signal processing to pack more bits per transmission with 256-QAM, receives data on four antennas through 4x4 MIMO, and supports for up to 4x Carrier Aggregation — all of which come together to achieve unprecedented download speeds."
Gigabit speeds are only possible if multiple data streams are connected to the device simultaneously, and with the new X16 modem such aggregation is performed using LTE-U and LAA.
(Image via EE Times)
What does all of this mean? Aggregation is a term you'll see a lot as we progress into the next generation of cellular data technology, and with the X16 Qualcomm is emphasizing carrier aggregation over link aggregation. Essentially, Carrier Aggregation works by combining the carrier LTE data signal (licensed, high transmit power) with a shorter-range, shared-spectrum (unlicensed, low transmit power) LTE signal. When the signals are combined at the device (i.e., your smartphone), significantly better throughput is possible through this larger (aggregated) data ‘pipe’.
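Conceptually, the aggregated 'pipe' is roughly the sum of the component carriers' throughput. A toy illustration follows; the per-carrier rates are hypothetical values chosen so that 4x Carrier Aggregation lands near the X16's advertised 1 Gbps (real per-carrier rates depend on channel bandwidth, modulation, and MIMO configuration):

```python
# Toy carrier-aggregation model: total downlink throughput is roughly
# the sum of each component carrier's rate. The per-carrier numbers
# below are hypothetical, not Qualcomm-published figures.
carriers_mbps = [250, 250, 250, 250]  # 4x CA, hypothetical 250 Mbps each

total_mbps = sum(carriers_mbps)
print(f"{total_mbps} Mbps aggregate")  # 1000 Mbps aggregate
```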
Qualcomm lists the four main options for unlicensed LTE deployment as follows:
- LTE-U: Based on 3GPP Rel. 12, LTE-U targets early mobile operator deployments in the USA, Korea, and India, with coexistence tests defined by the LTE-U Forum
- LAA: Defined in 3GPP Rel. 13, LAA (Licensed Assisted Access) targets deployments in Europe, Japan, & beyond.
- LWA: Defined in 3GPP Rel. 13, LWA (LTE - Wi-Fi link aggregation) targets deployments where the operator already has carrier Wi-Fi deployed
- MulteFire: Broadens the LTE ecosystem to new deployment opportunities by operating solely in unlicensed spectrum without a licensed anchor channel
The X16 is also Qualcomm’s first modem to be built on a 14nm FinFET process, which Qualcomm says is highly scalable and will enable the company to evolve the modem product line “to address an even wider range of product, all the way down to power-efficient connectivity for IoT devices.”
Qualcomm has already begun sampling the X16, and expects the first commercial products in the second half of 2016.
Subject: General Tech | January 5, 2016 - 04:40 AM | Ken Addison
Tagged: podcast, video, CES, CES 2016, Lenovo, Thinkpad, x1 carbon, x1 yoga, nvidia, pascal, amd, Polaris, FinFET, 14nm
CES 2016 Podcast Day 1 - 01/05/16
CES is just beginning. Join us for announcements from Lenovo, NVIDIA Press Conference, new AMD GPUs and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Josh Walrath, Allyn Malventano, Ken Addison and Sebastian Peak
Program length: 1:11:05
AMD Polaris Architecture Coming Mid-2016
In early December, I was able to spend some time with members of the newly formed Radeon Technologies Group (RTG), which is a revitalized and compartmentalized section of AMD that is taking over all graphics work. During those meetings, I was able to learn quite a bit about the plans for RTG going forward, including changes for AMD FreeSync and implementation of HDR display technology, and their plans for the GPUOpen open-sourced game development platform. Perhaps most intriguing of all: we received some information about the next-generation GPU architecture, targeted for 2016.
Codenamed Polaris, this new architecture will be the 4th generation of GCN (Graphics Core Next), and it will be the first AMD GPU that is built on FinFET process technology. These two changes combined promise to offer the biggest improvement in performance per watt, generation to generation, in AMD’s history.
Though the amount of information provided about the Polaris architecture is light, RTG does promise some changes to the 4th iteration of its GCN design. Those include primitive discard acceleration, an improved hardware scheduler, better pre-fetch, increased shader efficiency, and stronger memory compression. We have already discussed in a previous story that the new GPUs will include HDMI 2.0a and DisplayPort 1.3 display interfaces, which offer some impressive new features and bandwidth. From a multimedia perspective, Polaris will be the first GPU to include support for h.265 4K decode and encode acceleration.
This slide shows quite a few changes coming to Polaris, most of which were never discussed in enough specific detail for us to report. Geometry processing and the memory controller stand out as potentially interesting to me – AMD’s Fiji design continues to lag behind NVIDIA’s Maxwell in tessellation performance, and we would love to see that shift. I am also very curious to see how the memory controller is configured across the entire Polaris lineup of GPUs – we saw the introduction of HBM (high bandwidth memory) with the Fury line of cards.
Subject: General Tech | December 22, 2015 - 02:07 PM | Jeremy Hellstrom
Tagged: amd, Samsung, 14nm, rumour
The talk around the watercooler includes a rumour that AMD may use Samsung to produce at least some of its 14nm chips in the coming year. If true, this caps a huge year for Samsung, which produces NVIDIA chips and recently picked up a contract with Apple to produce some of its A9 SoCs. The rumour still includes GLOBALFOUNDRIES as a source for APUs and GPUs, so Samsung would be a second source for working silicon, which we can hope will alleviate some of AMD's difficulty in maintaining supplies of products. It could also help fund Samsung's development of its 10nm FinFET node, which they claim will be in production by the end of 2016. As always, take the rumour for what it is, but if you want to learn more about what is being said you can pop over to The Inquirer.
"A report in South Korea's Electronic Times, which cited unknown sources, said that Samsung Electronics will start making new chips for AMD sometime next year."
Here is some more Tech News from around the web:
- Toshiba denies NAND exit report with 'no decision made' comment @ The Register
- 25 years ago: Sir Tim Berners-Lee builds world's first website @ The Register
- Facepalm time: Windows 10 security patch wipes custom Word autotext @ The Register
- Make Show-Stopping Netflix Socks @ MAKE:Blog