ARM Partners with Xilinx to Accelerate Path to 7nm

Subject: Processors | December 8, 2016 - 09:00 AM |
Tagged: Xilinx, TSMC, standard cells, layout, FinFET, EDA, custom cell, arm, 7nm

Today ARM is announcing a partnership with Xilinx to deliver design solutions for Xilinx's products on TSMC’s upcoming 7nm process node.  ARM has previously partnered with Xilinx on other nodes including 28, 20, and 16nm.  The partnership extends into design considerations to improve the time to market of complex parts and to rapidly synthesize new designs for cutting edge process nodes.

Xilinx is licensing the latest ARM Artisan Physical IP platform for TSMC’s 7nm process.  Artisan Physical IP is a set of libraries and tools that helps partners roll out complex designs far more rapidly than previous generations of products could.  ARM has specialized libraries and tools to help implement these designs on a variety of processes and achieve good results even with the shortest possible design timelines.

icon_arm.jpg

Chip design relies on two basic methodologies: custom cell and standard cell design.  Custom cell design allows for a tremendous amount of flexibility in layout and electrical characteristics, but it requires a lot of man-hours to complete even the simplest logic.  Custom cell designs typically draw less power and provide higher clockspeeds than standard cell designs.  Standard cells are like Legos in that the cells can be quickly laid out to create complex logic.  EDA (Electronic Design Automation) software can quickly place and route these cells.  GPUs lean heavily on standard cells and EDA software to get highly complex products out to market quickly.
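To make the Lego analogy a little more concrete, below is a toy Python sketch of standard-cell composition.  It is purely illustrative and assumes a made-up two-cell library; real flows describe logic in an HDL and hand placement and routing to commercial EDA tools rather than anything like this.

```python
# Toy illustration of standard-cell composition (not a real EDA flow).
# Each "cell" is a pre-characterized block from a library; a design is just
# instances of those cells wired together, which EDA software then places
# and routes on the die.

# Tiny, hypothetical standard-cell "library": name -> (logic function, relative area)
CELL_LIB = {
    "NAND2": (lambda a, b: int(not (a and b)), 1.0),
    "INV":   (lambda a: int(not a), 0.6),
}

used_cells = []  # track instantiated cells so we can total up relative area


def cell(name, *inputs):
    """Instantiate one library cell and evaluate its logic function."""
    func, area = CELL_LIB[name]
    used_cells.append((name, area))
    return func(*inputs)


def half_adder(a, b):
    """One-bit half adder built only from NAND2 and INV library cells."""
    n1 = cell("NAND2", a, b)
    s = cell("NAND2", cell("NAND2", a, n1), cell("NAND2", b, n1))  # XOR from NANDs
    c = cell("INV", n1)                                            # AND = INV(NAND)
    return s, c


print(half_adder(1, 1))  # -> (0, 1)
print("cells used:", len(used_cells),
      "| relative area:", sum(area for _, area in used_cells))
```

The designer never touches individual transistors here; the quality of the cell library and of the place-and-route tooling largely determines how close the result gets to a hand-tuned custom layout, which is exactly the gap ARM's semi-custom tooling tries to close.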

These two basic methods have netted good results over the years, but during that time we have seen implementations of standard cells become more custom in how they behave.  While not achieving full custom performance, these semi-custom approaches have achieved appreciable gains without requiring the man-hours of a fully custom design.

In this particular case ARM is delivering solid power and speed results through automated design that improves upon standard cells without taking on the downsides of a fully custom part.  This provides power and speed benefits without the extra power draw of a traditional standard cell implementation.  ARM further improves upon this with the ARM Artisan Power Grid Architect (PGA), which simplifies the development of the complex power grid that services a large and complex chip.

We have seen these types of advancements in the GPU world that NVIDIA and AMD enjoy talking about.  A better power grid allows the ASIC to perform at lower power envelopes due to less impedance.  The GPU vendors have also utilized High Density Libraries to pack in the transistors as tightly as possible and use less die space.  A smaller chip that requires less power is always a positive development over a larger chip of the same capabilities that requires more.  ARM looks to be doing its own version of these technologies and applying them to TSMC’s upcoming 7nm FinFET process.

TSMC is not releasing this process to mass production until at least 2018.  In 1H 2017 we will see some initial test and early production runs for a handful of partners, with full blown production of 7nm coming in 2018.  These early runs are increasingly reserved for companies working on low power devices.  Looking back at the 20/16/14 nm processes, they were initially used by designs that did not require a lot of power and ran at moderate clockspeeds.  We have seen a shift in who uses these new processes with the introduction of sub-28nm process nodes.  The complexity of the designs, process steps, materials, and libraries has pushed the higher performance and power hungry parts to a secondary position as the foundries attempt to get these next generation nodes up to speed.  It isn’t until many months of these low power parts have been pushed through that we see the adjustments and improvements needed for these next generation nodes to handle the higher power and clockspeed needs of products like desktop CPUs and GPUs.

Zynq-7015-module_large.jpg

ARM is certainly being much more aggressive in addressing next generation nodes and pushing their cutting edge products on them to allow for far more powerful mobile products that also exhibit improved battery life.  This step with 7nm and Xilinx will provide a lot of data to ARM and its partners downstream when the time comes to implement new designs.  Artisan will continue to evolve to allow partners to quickly and efficiently introduce new products on new nodes at an accelerated rate compared to years past.

Click to read the entire ARM post!

Source: ARM

Leaked Kaby Lake Sample Found and Overclocked

Subject: Processors | November 30, 2016 - 06:52 PM |
Tagged: kaby lake, Intel, core i7 7700k

Someone, who wasn’t Intel, seeded Tom’s Hardware an Intel Core i7-7700k, which is expected for release in the new year. This is the top end of the mainstream SKUs, bringing four cores (eight threads) to 4.2 GHz base, 4.5 GHz boost. Using a motherboard built around the Z170 chipset, they were able to clock the CPU up to 4.8 GHz, which is a little over 4% higher than the Skylake-based Core i7-6700k maximum overclock on the same board.
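As a quick sanity check of that figure (illustrative arithmetic only, and the 4.6 GHz value for the i7-6700K is an assumption implied by the quoted "a little over 4%", not a number pulled from the review):

```python
# Rough overclocking-headroom comparison; illustrative arithmetic only.
# The 4.6 GHz figure for the i7-6700K is an assumption implied by the
# "a little over 4%" claim, not a number taken from the source article.

kaby_oc_ghz = 4.8      # i7-7700K overclock reported in the article
skylake_oc_ghz = 4.6   # assumed i7-6700K max overclock on the same board

headroom_gain = kaby_oc_ghz / skylake_oc_ghz - 1
print(f"Kaby Lake OC advantage: {headroom_gain:.1%}")  # -> 4.3%
```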

intel-2016-7700k-tomshardware.jpg

Image Credit: Tom's Hardware
Lucky number i7-77.

Before we continue, these results are based on a single sample. (Update: @7:01pm -- Also, the motherboard they used has some known overclock and stability issues. They mentioned it a bit in the post, like why their BCLK is 99.65MHz, but I forgot to highlight it here. Thankfully, Allyn caught it in the first ten minutes.) This sample has retail branding, but Intel would not confirm that it performs like they expect a retail SKU would. Normally, pre-release products are labeled as such, but there’s no way to tell if this one part is some exception. Beyond concerns that it might be slightly different from what consumers will eventually receive, there is also a huge variation in overclocking performance due to binning. With a sample size of one, we cannot tell whether this chip has an abnormally high, or an abnormally low, defect count, which affects both power and maximum frequency.

That aside, if this chip is representative of Kaby Lake performance, users should expect an increase in headroom for clock rates, but it will come at the cost of increased power consumption. In fact, Tom’s Hardware states that the chip “acts like an overclocked i7-6700K”. Based on this, it seems like, unless they want an extra 4 PCIe lanes on Z270, Kaby Lake’s performance might already be achievable for users with a lucky Skylake.

I should note that Tom’s Hardware didn’t benchmark the iGPU. I don’t really see it used for much more than video encoding anyway, but it would be nice to see if Intel improved in that area, seeing as how they incremented the model number. Then again, even users who are concerned about that will probably be better off just adding a second, discrete GPU anyway.

Rumor: Leaked Zen Prices and SKUs

Subject: Processors | November 28, 2016 - 09:26 PM |
Tagged: amd, Zen, Summit Ridge

Guru3D got hold of a product list, which includes entries for AMD’s upcoming Zen architecture.

Four SKUs are thus rumored to exist:

  • Zen SR3: (65W, quad-core, eight threads, ~$150 USD)
  • Zen SR5: (95W, hexa-core, twelve threads, ~$250 USD)
  • Zen SR7: (95W, octo-core, sixteen threads, ~$350 USD)
  • Special Zen SR7: (95W, octo-core, sixteen threads, ~$500 USD)

The sheet also states that none of these are supposed to contain integrated graphics, just like the current FX line. There is some merit to using integrated GPUs for specific tasks, like processing video while the main GPU is busy or doing a rapid, massively parallel calculation without the latency of memory copies, but AMD is probably right not to waste resources, such as TDP, fighting our current lack of compatible software and viable use cases for these SKUs.

amd-2016-summit-ridge-guru3d.png

Image Credit: Guru3D

The sheet also contains benchmarks for Cinebench R15. While offline rendering is a task that really should be done on GPUs at this point, especially with permissive, strong, open-source projects like Cycles, CPU renderers do provide a good example of multi-core performance that scales. In this one test, the Summit Ridge 7 CPU ($350) roughly matches the Intel Core i7-6850K ($600), again, according to this one unconfirmed benchmark. It doesn’t list clock rates, but other rumors claim that the top-end chip will be around 3.2 GHz base, 3.5 GHz boost at stock, with manual overclocks exceeding 4 GHz.
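Taking those numbers at face value (and again, this is a single unconfirmed leak), the back-of-the-envelope value math is simple; the sketch below just restates the price comparison quoted above:

```python
# Back-of-the-envelope value comparison based on the leaked figures above.
# Assumes the Cinebench R15 results are roughly equal, per the rumor.

zen_sr7_price = 350     # rumored Summit Ridge SR7 price (USD)
i7_6850k_price = 600    # Intel Core i7-6850K price quoted above (USD)

price_ratio = zen_sr7_price / i7_6850k_price
print(f"Rumored SR7 costs {price_ratio:.0%} of the i7-6850K "
      "for roughly the same Cinebench R15 score")  # -> about 58%
```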

These performance figures suggest that Zen will not beat Skylake on single-threaded performance, but it might be close. That might not matter, however. CPUs, these days, are kind-of converging around a certain level of per-thread performance, and are differentiating with core count, price, and features. Unfortunately, there don’t seem to have been many leaks regarding enthusiast-level chipsets for Zen, so we don’t know if there will be compelling use cases yet.

Zen is expected early in 2017.

Source: Guru3D

Qualcomm Teases Snapdragon 835, built on Samsung 10nm FinFET

Subject: Processors, Mobile | November 17, 2016 - 07:30 AM |
Tagged: snapdragon, Samsung, qualcomm, FinFET, 835, 10nm

Though we are still months away from shipping devices, Qualcomm has announced that it will be building its upcoming flagship Snapdragon 835 mobile SoC on Samsung’s 10nm 2nd generation FinFET process technology. Qualcomm tells us that integrating the 10nm node in 2017 will keep it “the technology leader in mobile platforms” and this makes the 835 the world's first 10nm production processor.

“Using the new 10nm process node is expected to allow our premium tier Snapdragon 835 processor to deliver greater power efficiency and increase performance while also allowing us to add a number of new capabilities that can improve the user experience of tomorrow’s mobile devices.”

Samsung announced its 10nm FinFET process technology in October of this year and it sports some impressive specifications and benefits to the Snapdragon 835 platform. Per Samsung, it offers “up to a 30% increase in area efficiency with 27% higher performance or up to 40% lower power consumption.” For Qualcomm and its partners, that means a smaller silicon footprint for innovative device designs, including thinner chassis or larger batteries (yes, please).

qualcomm-logo.jpg

Other details on the Snapdragon 835 are still pending a future reveal, but Qualcomm says that 835 is in production now and will be shipping in commercial devices in the first half of 2017. We did hear that the new 10nm chip is built on "more than 3 billion transistors" - making it an incredibly complex design!

Image_Keith Kressin Qualcomm, Ben Suh Samsung with 10nm Snapdragon 835.jpeg

Keith Kressin SVP, Product Management, Qualcomm Technologies Inc and Ben Suh, SVP, Foundry Marketing, Samsung, show off first 10nm mobile processor, Snapdragon 835, in New York at Qualcomm's Snapdragon Technology Summit.

I am very curious to see how the market reacts to the release of the Snapdragon 835. We are still seeing new devices being released using the 820/821 SoCs, including Google’s own flagship Pixel phones this fall. Qualcomm wants to maintain leadership in the SoC market by innovating on both silicon and software but consumers are becoming more savvy to the actual usable benefits that new devices offer. Qualcomm promises features, performance and power benefits on SD 835 to make the case for your next upgrade.

Full press release after the break!

Source: Qualcomm

NVIDIA Tegra SoC powers new Nintendo Switch gaming system

Subject: Processors, Mobile | October 20, 2016 - 11:40 AM |
Tagged: Nintendo, switch, nvidia, tegra

It's been a hell of a 24 hours for NVIDIA and the Tegra processor. A platform that many considered dead in the water after its failure to find its way into smartphones or into an appreciable number of consumer tablets has had two major design wins revealed. First, it was revealed that NVIDIA is powering the new fully autonomous driving system in the Autopilot 2.0 hardware implementation in Tesla's current Model S, X and upcoming Model 3 cars.

Now, we know that Nintendo's long rumored portable and dockable gaming system called Switch is also powered by a custom NVIDIA Tegra SoC.

20-nintendo-switch-1200x923.jpg

We don't know much about the hardware that gives the Switch life, but NVIDIA did post a short blog with some basic information worth looking at. Based on it, we know that the Tegra processor powering this Nintendo system is completely custom and likely uses Pascal architecture GPU CUDA cores, though we don't know how many or how powerful they will be. It will likely exceed the performance of the Nintendo Wii U, which was only 0.35 TFLOPS and consisted of 320 AMD-based stream processors. How much faster we just don't know yet.

On the CPU side we assume that this is built using an ARM-based processor, most likely off-the-shelf core designs to keep things simple. Basing it on custom designs like Denver might not be necessary for this type of platform. 

Nintendo has traditionally used custom operating systems for its consoles and that seems to be what is happening with the Switch as well. NVIDIA mentions a couple of times how much work the technology vendor put into custom APIs, custom physics engines, new libraries, etc.

The Nintendo Switch’s gaming experience is also supported by fully custom software, including a revamped physics engine, new libraries, advanced game tools and libraries. NVIDIA additionally created new gaming APIs to fully harness this performance. The newest API, NVN, was built specifically to bring lightweight, fast gaming to the masses.

We’ve optimized the full suite of hardware and software for gaming and mobile use cases. This includes custom operating system integration with the GPU to increase both performance and efficiency.

The system itself looks pretty damn interesting, with the ability to switch (get it?) between a docked to your TV configuration to a mobile one with attached or wireless controllers. Check out the video below for a preview.

I've asked both NVIDIA and Nintendo for more information on the hardware side but these guys tend to be tight lipped on the custom silicon going into console hardware. Hopefully one or the other is excited to tell us about the technology so we can have some interesting specifications to discuss and debate!

UPDATE: A story on The Verge claims that Nintendo "took the chip from the Shield" and put it in the Switch. This is more than likely completely false; the Shield is a significantly dated product and that kind of statement could undersell the power and capability of the Switch and NVIDIA's custom SoC quite dramatically.

Source: Nintendo

Qualcomm Announces Snapdragon 653, 626, and 427 SoCs

Subject: Processors, Mobile | October 18, 2016 - 11:32 AM |
Tagged: SoC, Snapdragon 653, Snapdragon 626, Snapdragon 427, snapdragon, smartphone, qualcomm, mobile

Qualcomm has announced new 400 and 600-series Snapdragon parts, and these new SoCs (Snapdragon 653, 626, and 427) inherit technology found previously on the 800-series parts, including fast LTE connectivity and dual-camera support.

qualcomm-snapdragon-mobile-processor.jpg

The integrated LTE modem has been significantly upgraded for each of these SoCs, and Qualcomm lists these features for each of the new products:

  • X9 LTE with CAT 7 modem (300Mbps DL; 150Mbps UL) designed to provide users with a 50 percent increase in maximum uplink speeds over the X8 LTE modem.
  • LTE Advanced Carrier Aggregation with up to 2x20 MHz in the downlink and uplink
  • Support for 64-QAM in the uplink
  • Superior call clarity and higher call reliability with the Enhanced Voice Services (EVS) codec on VoLTE calls.

In addition to the new X9 modem, all three SoCs offer faster CPU and GPU performance, with the Snapdragon 653 (which replaces the 652) now supporting up to 8GB of memory - up from a max of 4GB previously. Each of the new SoCs also feature Qualcomm's Quick Charge 3.0 for fast charging.

SD_600_400.png

Full specifications for these new products can be found on the updated Snapdragon product page.

Availability of the new 600-series Snapdragon processors is set for the end of this year, so we could start seeing handsets with the faster parts soon, while the Snapdragon 427 is expected to ship in devices early in 2017.

Source: Qualcomm

Intel Launches Stratix 10 FPGA With ARM CPU and HBM2

Subject: Processors | October 10, 2016 - 02:25 AM |
Tagged: SoC, Intel, FPGA, Cortex A53, arm, Altera

 Intel and recently acquired Altera have launched a new FPGA product based on Intel’s 14nm Tri-Gate process featuring an ARM CPU, 5.5 million logic element FPGA, and HBM2 memory in a single package. The Stratix 10 is aimed at data center, networking, and radar/imaging customers.

The Stratix 10 is an Altera-designed FPGA (field programmable gate array) with 5.5 million logic elements and a new HyperFlex architecture that optimizes registers, pipelining, and critical paths (feed-forward designs) to increase core performance and deliver five times the logic density of previous products. Further, the upcoming FPGA SoC reportedly can run at twice the core performance of Stratix V or use up to 70% less power than its predecessor at the same performance level.

Intel Altera Stratix 10.jpg

The increases in logic density, clockspeed, and power efficiency are a combination of the improved architecture and Intel’s 14nm FinFET (Tri-Gate) manufacturing process.

Intel rates the FPGA at 10 TFLOPS of single precision floating point DSP performance and 80 GFLOPS/watt.

Interestingly, Intel is using an ARM processor to feed data to the FPGA chip rather than its own Quark or Atom processors. Specifically, the Stratix 10 uses an ARM CPU with four Cortex A53 cores as well as four stacks of on package HBM2 memory with 1TB/s of bandwidth to feed data to the FPGA. There is also a “secure device manager” to ensure data integrity and security.

The Stratix 10 is aimed at data centers and will be used in specialized tasks that demand high throughput and low latency. According to Intel, the processor is a good candidate for co-processor duty, offloading and accelerating encryption/decryption, compression/de-compression, or Hadoop tasks. It can also be used to power specialized storage controllers and networking equipment.

Intel has started sampling the new chip to potential customers.

Intel Altera Stratix 10 FPGA SoC.png

In general, FPGAs are great at highly parallelized workloads and are able to efficiently take huge amounts of inputs and process the data in parallel through custom programmed logic gates. An FPGA is essentially a program in hardware that can be rewired in the field (though depending on the chip it is not necessarily a “fast” process and it can take hours or longer to switch things up heh). These processors are used in medical and imaging devices, high frequency trading hardware, networking equipment, signal intelligence (cell towers, radar, guidance, etc.), bitcoin mining (though ASICs stole the show a few years ago), and even password cracking. They can be almost anything you want, which gives them an advantage over traditional CPUs and graphics cards, though cost and increased coding complexity are prohibitive.
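To illustrate the "program in hardware" idea a bit further: most FPGAs implement arbitrary logic with small look-up tables (LUTs) whose contents are loaded when the device is configured. Here is a minimal Python model of that concept; it is purely conceptual and has nothing to do with Altera's actual tools or the Stratix 10's HyperFlex fabric.

```python
# Conceptual model of FPGA-style look-up-table (LUT) logic, not real tooling.
# "Reprogramming" the device amounts to loading new truth tables into LUTs
# and changing how they are wired together.

class LUT4:
    """A 4-input look-up table: 16 stored bits define any 4-input function."""

    def __init__(self, truth_table_bits):
        assert len(truth_table_bits) == 16
        self.bits = truth_table_bits

    def eval(self, a, b, c, d):
        index = (a << 3) | (b << 2) | (c << 1) | d
        return self.bits[index]


# "Configure" one LUT as a 4-input AND, another as a 4-input XOR (parity)
and4 = LUT4([1 if i == 0b1111 else 0 for i in range(16)])
xor4 = LUT4([bin(i).count("1") & 1 for i in range(16)])

print(and4.eval(1, 1, 1, 1), and4.eval(1, 0, 1, 1))  # -> 1 0
print(xor4.eval(1, 0, 1, 1))                         # -> 1
```

A real FPGA has hundreds of thousands to millions of these elements plus programmable routing between them, which is why reconfiguring one is closer to compiling a circuit than loading a program.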

The Stratix 10 stood out as interesting to me because of its claimed 10 TFLOPS of single precision performance, which is reportedly the important metric when it comes to training neural networks. In fact, Microsoft recently began deploying FPGAs across its Azure cloud computing platform and plans to build the “world’s fastest AI supercomputer.” The Redmond-based company’s Project Catapult saw it deploy Stratix V FPGAs to nearly all of its Azure datacenters, and it is using the programmable silicon as part of an “acceleration fabric” in its “configurable cloud” architecture that will initially accelerate the company’s Bing search and AI research efforts and later be available to independent customers for their own applications.

It is interesting to see Microsoft going with FPGAs, especially as efforts to use GPUs for GPGPU and neural network training and inferencing duties have increased so dramatically over the years (with NVIDIA being the one pushing the latter). It may well be a good call on Microsoft’s part as it could enable better performance, and researchers would be able to code their AI accelerator platforms down to the gate level to really optimize things. Using higher level languages and cheaper hardware with GPUs does have a lower barrier to entry though. I suppose it will depend on just how much Microsoft is going to charge customers to use the FPGA-powered instances.

FPGAs are in kind of a weird middle ground and while they are definitely not a new technology, they do continue to get more complex and powerful!

What are your thoughts on Intel's new FPGA SoC?

Source: Intel

NVIDIA Teases Low Power, High Performance Xavier SoC That Will Power Future Autonomous Vehicles

Subject: Processors | October 1, 2016 - 06:11 PM |
Tagged: xavier, Volta, tegra, SoC, nvidia, machine learning, gpu, drive px 2, deep neural network, deep learning

Earlier this week at its first GTC Europe event in Amsterdam, NVIDIA CEO Jen-Hsun Huang teased a new SoC code-named Xavier that will be used in self-driving cars and feature the company's newest custom ARM CPU cores and Volta GPU. The new chip will begin sampling at the end of 2017 with product releases using the future Tegra (if they keep that name) processor as soon as 2018.

NVIDIA_Xavier_SOC.jpg

NVIDIA's Xavier is promised to be the successor to the company's Drive PX 2 system, which uses two Tegra X2 SoCs and two discrete Pascal MXM GPUs on a single water cooled platform. The claims are even more impressive when considering that NVIDIA is not only promising to replace those four processors with a single chip, but that it will reportedly do so at 20W – less than a tenth of the TDP!

The company has not revealed all the nitty-gritty details, but they did tease out a few bits of information. The new processor will feature 7 billion transistors and will be based on a refined 16nm FinFET process while consuming a mere 20W. It can process two 8k HDR video streams and can hit 20 TOPS (NVIDIA's own rating for deep learning int(8) operations).

Specifically, NVIDIA claims that the Xavier SoC will use eight custom ARMv8 (64-bit) CPU cores (it is unclear whether these cores will be a refined Denver architecture or something else) and a GPU based on its upcoming Volta architecture with 512 CUDA cores. Also, in an interesting twist, NVIDIA is including a "Computer Vision Accelerator" on the SoC as well, though the company did not go into many details. This bit of silicon may explain how the ~300mm2 die with 7 billion transistors is able to roughly match the 7.2 billion transistor Pascal-based Tesla P4 (2560 CUDA cores) graphics card at deep learning (tera-operations per second) tasks. That is, of course, in addition to the incremental improvements from moving to Volta and a new ARMv8 CPU architecture on a refined 16nm FF+ process.

| | Drive PX | Drive PX 2 | NVIDIA Xavier | Tesla P4 |
| --- | --- | --- | --- | --- |
| CPU | 2 x Tegra X1 (8 x A57 total) | 2 x Tegra X2 (8 x A57 + 4 x Denver total) | 1 x Xavier SoC (8 x custom ARM + 1 x CVA) | N/A |
| GPU | 2 x Tegra X1 GPUs (Maxwell, 512 CUDA cores total) | 2 x Tegra X2 GPUs + 2 x Pascal GPUs | 1 x Xavier SoC GPU (Volta, 512 CUDA cores) | 2560 CUDA cores (Pascal) |
| TFLOPS | 2.3 TFLOPS | 8 TFLOPS | ? | 5.5 TFLOPS |
| DL TOPS | ? | 24 TOPS | 20 TOPS | 22 TOPS |
| TDP | ~30W (2 x 15W) | 250W | 20W | up to 75W |
| Process Tech | 20nm | 16nm FinFET | 16nm FinFET+ | 16nm FinFET |
| Transistors | ? | ? | 7 billion | 7.2 billion |

For comparison, the currently available Tesla P4 based on its Pascal architecture has a TDP of up to 75W and is rated at 22 TOPs. This would suggest that Volta is a much more efficient architecture (at least for deep learning and half precision)! I am not sure how NVIDIA is able to match its GP104 with only 512 Volta CUDA cores though their definition of a "core" could have changed and/or the CVA processor may be responsible for closing that gap. Unfortunately, NVIDIA did not disclose what it rates the Xavier at in TFLOPS so it is difficult to compare and it may not match GP104 at higher precision workloads. It could be wholly optimized for int(8) operations rather than floating point performance. Beyond that I will let Scott dive into those particulars once we have more information!
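The efficiency gap is easier to see as deep learning operations per watt. A quick calculation using the figures in the table above (keeping in mind that these TDPs are vendor maximums, so this is a rough ceiling-to-ceiling estimate):

```python
# Rough TOPS-per-watt comparison using the figures quoted in the table above.
# TDPs are vendor maximums, so treat this as a ceiling-to-ceiling estimate.

chips = {
    "Drive PX 2": (24, 250),  # (deep learning TOPS, TDP in watts)
    "Xavier":     (20, 20),
    "Tesla P4":   (22, 75),
}

for name, (tops, watts) in chips.items():
    print(f"{name:>10}: {tops / watts:.2f} TOPS/W")

# Xavier works out to ~1.0 TOPS/W versus ~0.29 for the Tesla P4
# and ~0.10 for the Drive PX 2 platform.
```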

Xavier is more of a teaser than anything and the chip could very well change dramatically and/or not hit the claimed performance targets. Still, it sounds promising and it is always nice to speculate over road maps. It is an intriguing chip and I am ready for more details, especially on the Volta GPU and just what exactly that Computer Vision Accelerator is (and will it be easy to program for?). I am a big fan of the "self-driving car" and I hope that it succeeds. The momentum certainly looks set to continue as Tesla, VW, BMW, and other automakers push the envelope of what is possible and plan future cars that will include smart driving assists and even cars that can drive themselves. The more local computing power we can throw at automobiles the better, and while massive datacenters can be used to train the neural networks, local hardware to run them and make decisions is necessary (you don't want internet latency contributing to the decision of whether to brake or not!).

I hope that NVIDIA's self-proclaimed "AI Supercomputer" turns out to be at least close to the performance they claim! Stay tuned for more information as it gets closer to launch (hopefully more details will emerge at GTC 2017 in the US).

What are your thoughts on Xavier and the whole self-driving car future?

Source: NVIDIA

AMD A12-9800 Overclocked to 4.8 GHz

Subject: Processors | September 27, 2016 - 07:01 AM |
Tagged: overclock, Bristol Ridge, amd

Update 9/27 @ 5:10pm: Added a link to Anandtech's discussion of Bristol Ridge. It was mentioned in the post, but I forgot to add the link itself when I transferred it to the site. The text is the same, though.

While Zen is nearing release, AMD has launched the AM4 platform with updated APUs. They are based on an updated Excavator architecture, which we discussed during the Carrizo launch in mid-2015. Carrizo came about when AMD decided to focus heavily on the 15W and 35W power targets, delivering the best possible experience for the huge laptop market in the tasks those devices usually encounter, such as light gaming and media consumption.

amd-2016-a12-9800-overclock.png

Image Credit: NAMEGT via HWBot

Bristol Ridge, instead, focuses on the 35W and 65W thermal points. This will be targeted more at OEMs who want to release higher-performance products in the holiday time-frame, although consumers can purchase it directly, according to Anandtech, later in the year. I'm guessing it won't be pushed too heavily to DIY users, though, because they know that those users know Zen is coming.

It turns out that overclockers already have their hands on it, though, and it seems to take a fairly high frequency. NAMEGT, from South Korea, uploaded a CPU-Z screenshot to HWBot that shows the 28nm, quad-core part clocked at 4.8 GHz. The included images claim that this was achieved on air, using AMD's new stock “Wraith” cooler.

Source: HWBot

AMD's Upcoming Socket AM4 Pictured with 1331 Pins

Subject: Processors | September 19, 2016 - 10:35 AM |
Tagged: Socket AM4, processor, FX, cpu, APU, amd, 1331 pins

A report from Hungarian site HWSW (cited by Bit-Tech) has a close-up photo of the new AMD AM4 processor socket, and it looks like this will have 1331 pins (go ahead and count them, if you dare!).

socket_am4.jpg

Image credit: Bit-Tech via HWSW

AMD's newest socket will merge the APU and FX series CPUs into this new AM4 socket, unlike the previous generation which split the two between AM3+ and FM2+. This is great news for system builders, who now have the option of starting with an inexpensive CPU/APU, and upgrading to a more powerful FX processor later on - with the same motherboard.

The new socket will apparently require a new cooler design, which is contrary to early reports (yes, we got it wrong, too) that the AM4 socket would be compatible with existing AM3 cooler mounts (manufacturers could of course offer hardware kits for existing cooler designs). In any case, AMD's new socket takes more of the delicate copper pins you love to try not to bend!

Source: Bit-Tech

GlobalFoundries to Continue FD-SOI Tech, Adds 12nm “12FDX” Node To Roadmap

Subject: Processors | September 13, 2016 - 06:51 PM |
Tagged: GLOBALFOUNDRIES, FD-SOI, 12FDX, process technology

In addition to the company’s efforts to get its own next generation FinFET process technology up and running, GlobalFoundries announced that it will continue to pursue FD-SOI process technology with the addition of a 12nm FD-SOI ("FDX" in GlobalFoundries parlance) node to its roadmap, with a slated release of 2019 at the earliest.

GlobalFoundries.png

FD-SOI stands for Fully Depleted Silicon On Insulator and is a planar process technology that uses a thin insulator on top of the base silicon which is then covered by a very thin layer of silicon that is used as the transistor channel. The promise of FD-SOI is that it offers the performance of a FinFET node with lower power consumption and cost than other bulk processes. While the substrate is more expensive with FD-SOI, it uses 50% of the lithography layers and companies can take advantage of reportedly easy-to-implement body biasing to design a single chip that can fulfill multiple products and roles. For example, in the case of 22FDX – which should start rolling out towards the end of this year – GlobalFoundries claims that it offers the performance of 14nm FinFET at 28nm bulk pricing. 22FDX is actually a 14nm front end of line (FEOL) and 28nm back end of line (BEOL) combined. Notably, it purportedly uses 70% lower power than 28nm HKMG.

22FDX Body Biasing.jpg

A GloFo 22nm FD-SOI "22FDX" transistor.

The FD-SOI design offers lower static leakage and allows chip makers to use body biasing (where the substrate is polarized) to balance performance and leakage. Forward Body Biasing allows the transistor to switch faster and/or operate at much lower voltages. On the other hand, Reverse Body Biasing further reduces leakage and frequency to improve energy efficiency. Dynamic Body Biasing (video link) allows for things like turbo modes whereby increasing voltage to the back gate can increase transistor switching speed or reducing voltage can reduce switching speeds and leakage. For a process technology that is aimed at battery powered wearables, mobile devices, and various Internet of Things products, energy efficiency and being able to balance performance and power depending on what is needed is important.

Dyanmic Body Biasing.jpg

22FDX offers body biasing.

While the process node numbers themselves are not as interesting as the news that FD-SOI will continue (thanks to marketing mucking up naming heh), GlobalFoundries did share that 12FDX (12nm FD-SOI) will be a true full node shrink that will offer the performance of 10nm FinFET (presumably its own future FinFET tech, though they do not specify) with better power characteristics and lower cost than 16nm FinFET. I am not sure if GlobalFoundries is using theoretical numbers or comparing against TSMC’s process here, since they do not have their own 16nm FinFET process. Further, 12FDX will feature 15% higher performance and up to 50% lower power consumption than today’s FinFET technologies. The future process is aimed at the “cost sensitive mobile market” that includes IoT, automotive (entertainment and AI), mobile, and networking. FD-SOI is reportedly well suited for processors that combine both digital and analog (RF) elements as well.

Following the roll out of 22FDX, GlobalFoundries will be preparing its Fab 1 facility in Dresden, Germany for the 12nm FD-SOI (12FDX) process. The new process is slated to begin tapping out products in early 2019, which should mean products using those chips will hit the market in 2020.

The news is interesting because it indicates that there is still interest and research/development being made on FD-SOI and GlobalFoundries is the first company to talk about next generation process plans. Samsung and STMicroelectronics also support FD-SOI but have not announced their future plans yet.

If I had to guess, Samsung will be the next company to talk about future FD-SOI as the company continues to offer both FinFET and FD-SOI to its customers though they certainly do not talk as much about the latter. What are your thoughts on FD-SOI and its place in the market?

Also read: FD-SOI Expands, But Is It Disruptive? @ EETimes

Source: Tech Report

AMD Officially Launches Bristol Ridge Processors And Zen-Ready AM4 Platform

Subject: Motherboards, Processors | September 7, 2016 - 08:08 PM |
Tagged: Zen, Summit Ridge, Excavator, Bristol Ridge, amd, A12-9800

This week AMD officially took the wraps off of its 7th generation APU lineup that it introduced back in May. Previously known as Bristol Ridge, AMD is launching eight new processors along with a new desktop platform that finally brings next generation I/O to AMD systems.

Bristol Ridge maintains the Excavator CPU cores and GCN GPU cores of Carrizo, but on refreshed silicon with performance and power efficiency gains that will bring the architecture started by Bulldozer to an apex. These will be the last chips of that line, and they will be succeeded by AMD's new "Zen" architecture in 2017. For now though, Bristol Ridge delivers as much as 17% higher per thread CPU performance and 27% higher graphics performance while using significantly lower power than its predecessors. Further, AMD has been able to (thanks to various process tweaks that Josh talked about previously) hit some impressive clock speeds with these chips, enabling AMD to better compete with Intel's Core i5 offerings.

Bristol Ridge.png

At the top end AMD has the (65W) quad core A12-9800 running at 3.8 GHz base and 4.2 GHz boost, paired with GCN 3.0-based Radeon R7 graphics (that support VP9 and HEVC acceleration). These new Bristol Ridge chips are able to take advantage of DDR4 clocked up to 2400 MHz. For DIY PC builders planning to use dedicated graphics, AMD has the non-APU Athlon X4 950, which features four CPU cores at 3.5 GHz base and 3.8 GHz boost with a 65W TDP. While it is not clocked quite as high as its APU counterpart, it should still prove to be a popular choice for budget builds as a replacement for the venerable Athlon X4 860, and it can be paired with an AM4 motherboard that will be ready to accept a new Zen-based "Summit Ridge" CPU next year.

The following table lists the eight new 7th generation "Bristol Ridge" processors and their specifications. 

| Model | CPU Cores | CPU Clocks (Base / Boost) | GPU | GPU CUs | GPU Clock (Max) | TDP |
| --- | --- | --- | --- | --- | --- | --- |
| A12-9800 | 4 | 3.8 GHz / 4.2 GHz | Radeon R7 | 8 | 1,108 MHz | 65W |
| A12-9800E | 4 | 3.1 GHz / 3.8 GHz | Radeon R7 | 8 | 900 MHz | 35W |
| A10-9700 | 4 | 3.5 GHz / 3.8 GHz | Radeon R7 | 6 | 1,029 MHz | 65W |
| A10-9700E | 4 | 3.0 GHz / 3.5 GHz | Radeon R7 | 6 | 847 MHz | 35W |
| A8-9600 | 4 | 3.1 GHz / 3.4 GHz | Radeon R7 | 6 | 900 MHz | 65W |
| A6-9500 | 2 | 3.5 GHz / 3.8 GHz | Radeon R5 | 6 | 1,029 MHz | 65W |
| A6-9500E | 2 | 3.0 GHz / 3.4 GHz | Radeon R5 | 4 | 800 MHz | 35W |
| Athlon X4 950 | 4 | 3.5 GHz / 3.8 GHz | None | 0 | N/A | 65W |

Source: AMD

To expand on the performance increases of Bristol Ridge, AMD compared the A12-9800 to the previous generation A10-8850 as well as Intel's Core i5-6500. According to the company, the Bristol Ridge processor handily beats the older APU and is competitive with the Intel i5. Specifically, AMD found that the A12-9800 scored 3,521.25 in 3DMark 11 while the A10-8850 (95W Godavari) scored 2,880. Further, when compared in Cinebench R11.5 1T, the A12-9800 scored 1.21 versus the A10-8850's 1.06. Not bad when you consider that the new processor has a 30W lower TDP!

With that said, the comparison to Intel is perhaps most interesting to the readers. In this case, the A12-9800 is about where you would expect though that is not necessarily a bad thing. It does pull a bit closer to Intel in CPU and continues to offer superior graphics performance.

| Benchmark | AMD A12-9800 (65W) | Intel Core i5-6500 (65W) | AMD A10-8850 (95W) |
| --- | --- | --- | --- |
| 3DMark 11 Performance | 3,521.25 | 1,765.75 | 2,880 |
| PCMark 8 Home Accelerated | 3,483.25 | 3,702 | Not run |
| Cinebench R11.5 1T | 1.21 | Not run | 1.06 |

Source: AMD

Specifically, in 3DMark 11 Performance the A12-9800's score of 3,521.25 is quite a bit better than the Intel i5-6500's 1,765.75 result. However, in the more CPU focused PCMark 8 Home Accelerated benchmark the Intel comes out ahead with a score of 3,702 versus the AMD A12-9800's score of 3,483.25. If the price is right Bristol Ridge does not look too bad on paper, assuming AMD's testing holds true in independent reviews!
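For reference, the relative deltas implied by those (AMD-supplied) scores work out as follows; this is just arithmetic on the numbers above, not additional test data:

```python
# Relative deltas implied by AMD's published scores (vendor-supplied numbers,
# not independent test results).

a12_3dmark, i5_3dmark, a10_3dmark = 3521.25, 1765.75, 2880
a12_pcmark, i5_pcmark = 3483.25, 3702
a12_cinebench, a10_cinebench = 1.21, 1.06

print(f"3DMark 11:  A12-9800 vs i5-6500  {a12_3dmark / i5_3dmark - 1:+.0%}")   # ~+99%
print(f"3DMark 11:  A12-9800 vs A10-8850 {a12_3dmark / a10_3dmark - 1:+.0%}")  # ~+22%
print(f"PCMark 8:   A12-9800 vs i5-6500  {a12_pcmark / i5_pcmark - 1:+.0%}")   # ~-6%
print(f"Cinebench:  A12-9800 vs A10-8850 "
      f"{a12_cinebench / a10_cinebench - 1:+.0%}")                             # ~+14%
```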

The AM4 Platform

Alongside the launch of desktop 7th generation APUs, AMD is launching a new AM4 platform that supports Bristol Ridge and is ready for Zen APUs next year. The new platform finally brings new I/O technologies to AMD systems including PCI-E 3.0, NVMe, SATA Express, DDR4, and USB 3.1 Gen 2.

According to Digital Trends, AMD's AM4 desktop platform will span all the way from low end to enthusiast motherboards, and these boards will be powered by one of three new chipsets. The three new chipsets are the B350 for mainstream, A320 for "essential," and X/B/A300 for small form factor motherboards. Notably missing is any mention of an enthusiast chipset, but one is reportedly being worked on and will arrive closer to the launch of Zen-based processors in 2017.

The image below outlines the differences in the chipsets. Worth noting is that the APUs themselves will handle the eight lanes of PCI-E 3.0, dual channel DDR4, four USB 3.1 Gen 1 ports, and two SATA 6Gbps and two NVMe or PCI-E 3.0 storage devices. This leaves PCI-E 2.0, SATA Express, additional SATA 6Gbps, and USB 3.1 Gen 2 connection duties to the chipsets.

AMD Bristol Ridge AM4 Chipsets.jpg

As of today, AMD has only announced the availability of AM4 motherboards and 7th generation APUs for OEM systems (with design wins from HP and Lenovo so far). The company will be outlining the channel / DIY PC builder lineup and pricing at a later (to be announced) date.

I am looking forward to Zen, and in a way the timing of Bristol Ridge seems strange. On the other hand, it should do well for OEMs and hold them over until then (heh), and for enthusiasts / DIY builders, buying into Bristol Ridge knowing that they will be able to upgrade to Zen next year (while getting better-than-Carrizo performance with less power and possibly better overclocking) is not a bad option so long as the prices are right!

The full press blast is included below for more information on how they got their benchmark results.

Source: AMD

Leaked Geekbench Results: AMD Zen Performance Extrapolated

Subject: Processors | September 6, 2016 - 03:05 PM |
Tagged: Zen, single thread, geekbench, amd

Over the holiday weekend a leaked Geekbench result for an engineering sample AMD Zen processor got tech nerds talking. Other than the showcase that AMD presented a couple weeks back using the Blender render engine, the only information we have on performance claims comes from AMD touting a "40% IPC increase" over the latest Bulldozer derivative.

The results from Geekbench show performance from a two physical processor system and a total of 64 cores running at 1.44 GHz. Obviously that clock speed is exceptionally low; AMD demoed Summit Ridge running at 3.0 GHz in the showcase mentioned above. But this does give us an interesting data point with which to do some performance extrapolation. If we assume perfect clock speed scaling, we can guess at performance levels that AMD Zen might see at various clocks. 

I needed a quick comparison point and found this Geekbench result from a Xeon E7-8857 v2 running at 3.6 GHz. That is an Ivy Bridge based architecture, and though the system has 48 cores, we are only going to look at single threaded results to focus on the IPC story.

Obviously there are a ton of caveats with looking at data like this. It's possible that the AMD Zen platform was running in a very sub-optimal condition. It's possible that the BIOS and motherboard weren't fully cache aware (though I would hope that wouldn't be the case this late in the game). It's possible that the Linux OS was somehow holding back performance of the Zen architecture and needs updates. There are many reasons why you shouldn't consider this data a final verdict; but that doesn't make it any less interesting to see.

In the two graphs below I divide the collection of single threaded results from Geekbench into two halves and there are three data points for each benchmark. The blue line represents the Xeon Ivy Bridge processor running at 3.6 GHz. The light green line shows the results from the AMD Zen processor running at 1.44 GHz as reported by Geekbench. The dark green line shows an extrapolated AMD Zen performance result with perfect scaling by frequency. 
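The extrapolation itself is just proportional scaling with clock speed. A minimal sketch of the method is below; the 1.44 GHz and 3.0 GHz clocks are the ones mentioned above, while the score used is a placeholder, not an actual Geekbench sub-score.

```python
# Sketch of the clock-scaling extrapolation behind the "extrapolated" lines
# in the graphs below. Assumes perfect (linear) scaling with frequency,
# which is optimistic. The example score is a placeholder value.

measured_clock_ghz = 1.44   # Zen engineering sample clock from the leak
target_clock_ghz = 3.0      # clock AMD demoed Summit Ridge running at


def extrapolate(single_thread_score, measured=measured_clock_ghz,
                target=target_clock_ghz):
    """Scale a measured single-threaded score linearly with clock speed."""
    return single_thread_score * (target / measured)


example_score = 1000        # placeholder single-threaded sub-score
print(f"{example_score} @ {measured_clock_ghz} GHz -> "
      f"~{extrapolate(example_score):.0f} @ {target_clock_ghz} GHz")
# -> ~2083
```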

graph1.png

Continue reading our preview of AMD Zen single threaded performance!!

Source: Geekbench

IBM Prepares Power9 CPUs to Power Servers and Supercomputers In 2018

Subject: Processors | September 2, 2016 - 01:39 AM |
Tagged: IBM, power9, power 3.0, 14nm, global foundries, hot chips

Earlier this month at the Hot Chips symposium, IBM revealed details on its upcoming Power9 processors and architecture. The new chips are aimed squarely at the data center and will be used for massive number crunching in big data and scientific applications in servers and supercomputer nodes.

Power9 is a big play from Big Blue, and will help the company expand its presence in the Intel-ruled datacenter market. Power9 processors are due out in 2018 and will be fabricated at Global Foundries on a 14nm HP FinFET process. The chips feature eight billion transistors and utilize an “execution slice microarchitecture” that lets IBM combine “slices” of fixed point, floating point, and SIMD hardware into cores that support various levels of threading. Specifically, 2 slices make an SMT4 core and 4 slices make an SMT8 core. IBM will have Power9 processors with 24 SMT4 cores or 12 SMT8 cores (more on that later). Further, Power9 is IBM’s first processor to support its Power 3.0 instruction set.

IBM Power9.jpg

According to IBM, its Power9 processors are between 50% to 125% faster than the previous generation Power8 CPUs depending on the application tested. The performance improvement is thanks to a doubling of the number of cores as well as a number of other smaller improvements including:

  • A 5 cycle shorter pipeline versus Power8
  • A single instruction random number generator (RNG)
  • Hardware assisted garbage collection for interpreted languages (e.g. Java)
  • New interrupt architecture
  • 128-bit quad precision floating point and decimal math support
    • Important for finance and security markets, massive databases and money math.
    • IEEE 754
  • CAPI 2.0 and NVLink support
  • Hardware accelerators for encryption and compression

The Power9 processor features 120 MB of direct attached eDRAM that acts as an L3 cache (256 GB/s). The chips offer up to 7TB/s of aggregate fabric bandwidth, which certainly sounds impressive but is a number with everything added together. With that said, there is a lot going on under the hood. Power9 supports 48 lanes of PCI-E 4.0 (2 GB/s per lane per direction); 48 proprietary 25Gbps accelerator lanes, which will be used for NVLink 2.0 to connect to NVIDIA GPUs as well as to connect to FPGAs, ASICs, and other accelerators or new memory technologies using CAPI 2.0 (Coherent Accelerator Processor Interface); and four 16Gbps SMP links (NUMA) used to combine four quad socket Power9 boards into a single 16 socket “cluster.”

These are processors that are built to scale and tackle big data problems. In fact, not only is Google interested in Power9 to power its services, but the US Department of Energy will be building two supercomputers using IBM’s Power9 CPUs and NVIDIA’s Volta GPUs. Summit and Sierra will offer between 100 and 300 Petaflops of compute power and will be installed at Oak Ridge National Laboratory and Lawrence Livermore National Laboratory respectively. There, some of the projects they will tackle include enabling researchers to visualize the internals of a virtual light water reactor, researching methods to improve fuel economy, and delving further into bioinformatics.

The Power9 processors will be available in four variants that differ in the number of cores and number of threads each core supports. The chips are broken down into Power9 SO (Scale Out) and Power9 SU (Scale Up) and each group has two processors depending on whether you need a greater number of weaker cores or a smaller number of more powerful cores. Power9 SO chips are intended for multi-core systems and will be used in servers with one or two sockets while Power9 SU chips are for multi-processor systems with up to four sockets per board and up to 16 total sockets per cluster when four four socket boards are linked together. Power9 SO uses DDR4 memory and supports a theoretical maximum 4TB of memory (1TB with today’s 64GB DIMMS) and 120 GB/s of bandwidth while Power9 SU uses IBM’s buffered “Centaur” memory scheme that allows the systems to address a theoretical maximum of 8TB of memory (2TB with 64GB DIMMS) at 230 GB/s. In other words, the SU series is Big Blue’s “big guns.”

Power9 SO Die Shot Photo.jpg

A photo of the 24 core SMT4 Power9 SO die.

Here is where it gets a bit muddy. The processors are further broken down into SMT4 or SMT8 variants, and both Power9 SO and Power9 SU have both options. There are Power9 CPUs with 24 SMT4 cores and there are CPUs with 12 SMT8 cores. IBM indicated that SMT4 (four threads per core) is suited to systems running Linux and virtualization with an emphasis on high core counts. Meanwhile, SMT8 (eight threads per core) is a better option for large logical partitions (one big system versus partitioning the compute cluster out into smaller VMs as above) and running IBM’s Hypervisor. In either case (24 SMT4 or 12 SMT8) there is the same total number of threads, but you are able to choose whether you want fewer “stronger” threads on each core or more (albeit weaker) threads per core, depending on which your workloads are optimized for.

Servers supporting Power9 are already under development by Google and Rackspace and blueprints are even available from the OpenPower Foundation. Currently, it appears that Power9 SO will emerge as soon as the second half of next year (2H 2017) with Power9 SU following in 2018 which would line up with the expected date for the Summit and Sierra supercomputer launches.

This is not a chip that will be showing up in your desktop any time soon, but it is an interesting high performance processor! I will be keeping an eye on updates from Oak Ridge lab hehe.

Qualcomm Releases the Snapdragon 821 Mobile Processor

Subject: Processors, Mobile | August 31, 2016 - 07:30 AM |
Tagged: SoC, Snapdragon 821, snapdragon, SD821, qualcomm, processor, mobile, adreno

Qualcomm has officially launched the Snapdragon 821 SoC, an upgraded successor to the existing Snapdragon 820 found in such phones as the Samsung Galaxy S7.

snapdragon821_1.jpg

"With Snapdragon 820 already powering many of the premier flagship Android smartphones today, Snapdragon 821 is now poised to become the processor of choice for leading smartphones and devices for this year’s holiday season. Qualcomm Technologies’ engineers have improved Snapdragon 821 in three key areas to ensure Snapdragon 821 maintains the level of industry leadership introduced by its predecessor."

Specifications were previously revealed when the Snapdragon 821 was announced in July, with a 10% increase on the CPU clocks (2.4 GHz, up from the previous 2.2 GHz max frequency). The Adreno 530 GPU clock increases 5%, to 650 MHz from 624 MHz. In addition to improved performance from CPU and GPU clock speed increases, the SD821 is said to offer lower power consumption (estimated at 5% compared to the SD820), and offers new functionality including improved auto-focus capability.

snapdragon-821.jpg

From Qualcomm:

Enhanced overall user experience:

The Snapdragon 821 has been specifically tuned to support a more responsive user experience when compared with the 820, including:

  • Shorter boot times: Snapdragon 821 powered devices can boot up to 10 percent faster.
  • Faster application launch times: Snapdragon 821 can reduce app load times by up to 10 percent.
  • Smoother, more responsive user interactions: UI optimizations and performance enhancements designed to allow users to enjoy smoother scrolling and more responsive browsing performance.

Improved performance and power consumption:

  • CPU speeds increase: As we previously announced, the 821 features Qualcomm Kryo CPU speeds up to 2.4GHz, representing an up to 10 percent improvement in performance over Snapdragon 820.
  • GPU speeds increase: The Qualcomm Adreno GPU received a 5 percent speed increase over Snapdragon 820.
  • Power savings: The 821 is engineered to deliver an incremental 5 percent power savings when comparing standard use case models. This power savings can extend battery life and support OEMs interested in reducing battery size for slimmer phones.

New features and functionality:

  • Snapdragon 821 introduces several new features and capabilities, offering OEMs new options to create more immersive and engaging user experiences, including support for:
  • Snapdragon VR SDK (Software Development Kit): Offers developers a superior mobile VR toolset, provides compatibility with the Google Daydream platform, and access to Snapdragon 821’s powerful heterogeneous architecture. Snapdragon VR SDK supports a superior level of visual and audio quality and more immersive virtual reality and gaming experiences in a mobile environment.
  • Dual PD (PDAF): Offers significantly faster image autofocus speeds under a wide variety of conditions when compared to single PDAF solutions.
  • Extended Laser Auto-Focus Ranging: Extends the visible focusing range, improving laser focal accuracy over Snapdragon 820.
  • Android Nougat OS: Snapdragon 821 (as well as the 820) will support the latest Android operating system when available, offering new features, expanded compatibility, and additional security compared to prior Android versions.

Qualcomm says the ASUS ZenFone 3 Deluxe is the first phone to use this new Snapdragon 821 SoC while other OEMs will be working on designs implementing the upgraded SoC.

Source: Qualcomm

AMD's 7870 rides again, checking out the new cooler on the A10-7870K

Subject: Processors | August 22, 2016 - 05:37 PM |
Tagged: amd, a10-7870K

Let's leave aside the questionable naming and instead focus on the improved cooler on this ~$130 APU from AMD.  Neoseeker fired up the fun sized, 125W rated cooler on top of the A10-7870K and were pleasantly surprised at the lack of noise even under load.  Encouraged by the performance they overclocked the chip by 500MHz to 4.4GHz and were rewarded with a stable and still very quiet system.  The review focuses more on the improvements the new cooler offers as opposed to the APU itself, which has not changed.  Check out the review if you are considering a lower cost system that only speaks when spoken to.

14.jpg

"In order to find out just how much better the 125W thermal solution will perform, I am going to test the A10-7870K APU mounted on a Gigabyte F2A88X-UP4 motherboard provided by AMD with a set of 16 GB (2 x 8) DDR3 RAM modules set at 2133 MHz speed. I will then run thermal and fan speed tests so a comparison of the results will provide a meaningful data set to compare the near-silent 125W cooler to an older model AMD cooling solution."

Source: Neoseeker

GlobalFoundries Will Allegedly Skip 10nm and Jump to Developing 7nm Process Technology In House (Updated)

Subject: Processors | August 20, 2016 - 03:06 PM |
Tagged: Semiconductor, lithography, GLOBALFOUNDRIES, global foundries, euv, 7nm, 10nm

UPDATE (August 22nd, 11:11pm ET): I reached out to GlobalFoundries over the weekend for a comment and the company had this to say:

"We would like to confirm that GF is transitioning directly from 14nm to 7nm. We consider 10nm as more a half node in scaling, due to its limited performance adder over 14nm for most applications. For most customers in most of the markets, 7nm appears to be a more favorable financial equation. It offers a much larger economic benefit, as well as performance and power advantages, that in most cases balances the design cost a customer would have to spend to move to the next node.

As you stated in your article, we will be leveraging our presence at SUNY Polytechnic in Albany, the talent and know-how gained from the acquisition of IBM Microelectronics, and the world-class R&D pipeline from the IBM Research Alliance—which last year produced the industry’s first 7nm test chip with working transistors."

An unexpected bit of news popped up today via TPU that alleges GlobalFoundries is not only developing 7nm technology (expected), but that the company will skip production of the 10nm node altogether in favor of jumping straight from the 14nm FinFET technology (which it licensed from Samsung) to 7nm manufacturing based on its own in house design process.

Reportedly, the move to 7nm would offer 60% smaller chips at three times the design cost of 14nm, which is to say that this would be both an expensive and impressive endeavor. Aided by Extreme Ultraviolet (EUV) lithography, GlobalFoundries expects to be able to hit 7nm production sometime in 2020, with prototyping and small usage of EUV in the year or so leading up to it. The in house process tech is likely thanks to the research being done at the APPC (Advanced Patterning and Productivity Center) in Albany, New York along with the expertise of engineers and the design patents and technology (e.g. ASML NXE 3300 and 3300B EUV tools) picked up when it acquired IBM Microelectronics. The APPC is reportedly working simultaneously on research and development of manufacturing methods (especially EUV, where extremely short ultraviolet wavelengths of around 13.5nm are used to pattern silicon) and supporting production of chips at GlobalFoundries' "Malta" fab in New York.

APPC in Albany NY.jpg

Advanced Patterning and Productivity Center in Albany, NY where Global Foundries, SUNY Poly, IBM Engineers, and other partners are forging a path to 7nm and beyond semiconductor manufacturing. Photo by Lori Van Buren for Times Union.

Intel's Custom Foundry Group will start pumping out ARM chips in early 2017, followed by Intel's own 10nm Cannon Lake processors in 2018, and Samsung will be offering up its own 10nm node as soon as next year. Meanwhile, TSMC has reportedly already taped out 10nm wafers, will begin production in late 2016/early 2017, and claims that it will hit 5nm by 2020. With its rivals all expecting production of 10nm chips as soon as Q1 2017, GlobalFoundries will be at a distinct disadvantage for a few years and will have only its 14nm FinFET (from Samsung) and possibly its own 14nm tech to offer until it gets 7nm production up and running (hopefully!).

Previously, GlobalFoundries has stated that:

“GLOBALFOUNDRIES is committed to an aggressive research roadmap that continually pushes the limits of semiconductor technology. With the recent acquisition of IBM Microelectronics, GLOBALFOUNDRIES has gained direct access to IBM’s continued investment in world-class semiconductor research and has significantly enhanced its ability to develop leading-edge technologies,” said Dr. Gary Patton, CTO and Senior Vice President of R&D at GLOBALFOUNDRIES. “Together with SUNY Poly, the new center will improve our capabilities and position us to advance our process geometries at 7nm and beyond.” 

If this news turns out to be correct, this is an interesting move and it is certainly a gamble. However, I think that it is a gamble that GlobalFoundries needs to take to be competitive. I am curious how this will affect AMD though. While I had expected AMD to stick with 14nm for a while, especially for Zen/CPUs, will this mean that AMD will have to go to TSMC for its future GPUs, or will contract limitations (if any? I think they have a minimum amount they need to order from GlobalFoundries) mean that GPUs will remain at 14nm until GlobalFoundries can offer its own 7nm? I would guess that Vega will still be 14nm, but Navi in 2018/2019? I guess we will just have to wait and see!

Source: TechPowerUp

Intel Larrabee Post-Mortem by Tom Forsyth

Subject: Graphics Cards, Processors | August 17, 2016 - 01:38 PM |
Tagged: Xeon Phi, larrabee, Intel

Tom Forsyth, who is currently at Oculus, was once on the core Larrabee team at Intel. Just prior to Intel's IDF conference in San Francisco, which Ryan is at and covering as I type this, Tom wrote a blog post that outlined the project and its design goals, including why it didn't hit market as a graphics device. He even goes into the details of the graphics architecture, which was almost entirely in software apart from texture units and video out. For instance, Larrabee was running FreeBSD with a program, called DirectXGfx, that gave it the DirectX 11 feature set -- and it worked on hundreds of titles, too.

Intel_Xeon_Phi_Family.jpg

Also, if you found the discussion interesting, then there is plenty of content from back in the day to browse. A good example is an Intel Developer Zone post from Michael Abrash that discussed software rasterization, doing so with several really interesting stories.

IDF 2016: Intel Project Alloy Promises Untethered VR and AR Experiences

Subject: General Tech, Processors, Displays, Shows and Expos | August 16, 2016 - 01:50 PM |
Tagged: VR, virtual reality, project alloy, Intel, augmented reality, AR

At the opening keynote of this summer’s Intel Developer Forum, CEO Brian Krzanich announced a new initiative to enable a completely untethered VR platform called Project Alloy. Using Intel processors and sensors, the goal of Project Alloy is to move all of the necessary compute into the headset itself, including enough battery to power the device for a typical session, removing the need for a high powered PC and delivering a truly cordless experience.

01.jpg

This is indeed the obvious end-game for VR and AR, though Intel isn’t the first to demonstrate a working prototype. AMD showed the Sulon Q, an AMD FX-based system that was a wireless VR headset. It had real specs too, including a 2560x1440 OLED 90Hz display, 8GB of DDR3 memory, an AMD FX-8800P APU with R7 graphics embedded. Intel’s Project Alloy is currently using unknown hardware and won’t have a true prototype release until the second half of 2017.

There is one key advantage that Intel has implemented with Alloy: RealSense cameras. The idea is simple but the implications are powerful. Intel demonstrated using your hands and even other real-world items to interact with the virtual world. RealSense cameras use depth sensing to track hands and fingers very accurately, and with a device integrated into the headset and pointed out and down, Project Alloy prototypes will be able to “see” and track your hands, integrating them into the game and VR world in real-time.

02.jpg

The demo that Intel put on during the keynote definitely showed the promise, but the implementation was clunky and less than what I expected from the company. Real hands just showed up in the game, rather than being represented by rendered hands that track accurately, and it definitely put a schism in the experience. Obviously it’s up to the application developer to determine how your hands would actually be represented, but it would have been better to showcase that capability in the live demo.

03.jpg

Better than just tracking your hands, Project Alloy was able to track a dollar bill (why not a Benjamin Intel??!?) and use it to interact with a spinning lathe in the VR world. It interacted very accurately and with minimal latency – the potential for this kind of AR integration is expansive.

Those same RealSense cameras and data is used to map the space around you, preventing you from running into things or people or cats in the room. This enables the first “multi-room” tracking capability, giving VR/AR users a new range of flexibility and usability.

04.jpg

Though I did not get hands on with the Alloy prototype itself, the unit on-stage looked pretty heavy, pretty bulky. Comfort will obviously be important for any kind of head mounted display, and Intel has plenty of time to iterate on the design for the next year to get it right. Both AMD and NVIDIA have been talking up the importance of GPU compute to provide high quality VR experiences, so Intel has an uphill battle to prove that its solution, without the need for external power or additional processing, can truly provide the untethered experience we all desire.

Intel Will Release 14nm Coffee Lake To Succeed Kaby Lake In 2018

Subject: Processors | July 28, 2016 - 02:47 PM |
Tagged: kaby lake, Intel, gt3e, coffee lake, 14nm

Intel will allegedly be releasing another 14nm processor following Kaby Lake (which is itself a 14nm successor to Skylake) in 2018. The new processors are code named "Coffee Lake" and will be released alongside low power runs of 10nm Cannon Lake chips. 

Intel Coffe Lake to Coexist With Cannon Lake.jpg

Not much information is known about Coffee Lake outside of leaked slides and rumors, but the first processors slated to launch in 2018 will be mainstream mobile chips that will come in U and HQ mobile flavors, which are 15W to 28W and 35W to 45W TDP chips respectively. Of course, these processors will be built on a very mature 14nm process with the usual small performance and efficiency gains beyond Skylake and Kaby Lake. The chips should have a better graphics unit, but perhaps more interesting is that the slides suggest that Coffee Lake will be the first architecture where Intel will bring "hexacore" (6 core) processors into mainstream consumer chips! The HQ-class Coffee Lake processors will reportedly come in two, four, and six core variants with Intel GT3e class GPUs. Meanwhile the lower power U-class chips top out at dual cores with GT3e class graphics. This is interesting because Intel has previously held back the six core CPUs for its more expensive and higher margin HEDT and Xeon platforms.

Of course 2018 is also the year for Cannon Lake, which would have been the "tick" in Intel's old tick-tock schedule (which is no more) as the chips move to a smaller process node, with Intel then improving on the 10nm process in future architectures. Cannon Lake is supposed to be built on the tiny 10nm node, and it appears that the first chips on this node will be ultra low power versions for laptops and tablets. Occupying the ULV platform's U-class (15W) and Y-class (4.5W), Cannon Lake CPUs will be dual cores with GT2 graphics. These chips should sip power while giving comparable performance to Kaby and Coffee Lake, perhaps even matching the performance of the Coffee Lake U processors!

Stay tuned to PC Perspective for more information!