Tesla Motors Hires Peter Bannon of Apple

Subject: Graphics Cards, Processors | February 29, 2016 - 06:48 PM |
Tagged: tesla motors, tesla, SoC, Peter Bannon, Jim Keller

When we found out that Jim Keller had joined Tesla, we were a bit confused. He is highly skilled in processor design, yet he moved to a company that does not design processors. Kind of weird, right? Two possibilities leap to mind: either he wanted to try something new in life, and Elon Musk hired him for his general management skills, or Tesla wants to get more involved in the production of their SoCs, possibly even designing their own.

Now Peter Bannon, who was a colleague of Jim Keller at Apple, has been hired by Tesla Motors. Chances are slim that both of them independently decided on an abrupt career change that just happened to lead them to the same company. So it appears that Tesla Motors wants experienced chip designers in house. What for? We don't know. This is a lot of talent just to look over the shoulders of NVIDIA and other SoC partners to make sure Tesla has the upper hand in negotiations. Jim Keller is at Tesla as their “Vice-President of Autopilot Hardware Engineering.” We don't know what Peter Bannon's title will be.

And then, if Tesla Motors does get into creating their own hardware, we wonder what they will do with it. The company has a history of open development and releasing patents (etc.) into the public. That said, SoC design is a highly encumbered field, depending on what they're specifically doing, which we have no idea about.

Source: Electrek

MWC 2016: MediaTek Announces Helio P20 True Octa-Core SoC

Subject: Processors, Mobile | February 22, 2016 - 11:11 AM |
Tagged: TSMC, SoC, octa-core, MWC 2016, MWC, mediatek, Mali-T880, LPDDR4X, Cortex-A53, big.little, arm

MediaTek might not be well-known in the United States, but the company has been working to expand from China, where it had a 40% market share as of June 2015, into the global market. While 2015 saw the introduction of the 8-core Helio P10 and the 10-core Helio X20 SoCs, the company continues to expand their lineup, today announcing the Helio P20 SoC.

There are a number of differences between the recent SoCs from MediaTek, beginning with the CPU core configuration. This new Helio P20 is a “True Octa-Core” design, but rather than a big.LITTLE configuration it’s using 8 identically-clocked ARM Cortex-A53 cores at 2.3 GHz. The previous Helio P10 used a similar CPU configuration, though clocks were limited to 2.0 GHz with that SoC. Conversely, the 10-core Helio X20 uses a tri-cluster configuration, with 2x ARM Cortex-A72 cores running at 2.5 GHz alongside a typical big.LITTLE arrangement (4x Cortex-A53 cores at 2.0 GHz and 4x Cortex-A53 cores at 1.4 GHz).
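To keep the three layouts straight, here is a quick sketch tallying the clusters as described above (core counts and clocks are the reported figures; this is just an illustration, not an official MediaTek spec):

```c
/* Illustrative summary of the MediaTek cluster layouts described above. */
#include <stdio.h>

struct cluster { const char *core; int count; float ghz; };

int main(void)
{
    /* Helio P20: one "true octa-core" cluster. */
    struct cluster p20[] = { { "Cortex-A53", 8, 2.3f } };

    /* Helio X20: tri-cluster, big.LITTLE plus a third high-clocked pair. */
    struct cluster x20[] = { { "Cortex-A72", 2, 2.5f },
                             { "Cortex-A53", 4, 2.0f },
                             { "Cortex-A53", 4, 1.4f } };

    printf("P20: %dx %s @ %.1f GHz\n", p20[0].count, p20[0].core, p20[0].ghz);

    int total = 0;
    for (int i = 0; i < 3; i++) {
        printf("X20: %dx %s @ %.1f GHz\n", x20[i].count, x20[i].core, x20[i].ghz);
        total += x20[i].count;
    }
    printf("X20 total cores: %d\n", total);  /* 10 */
    return 0;
}
```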

Another change, affecting both MediaTek’s new SoC and the industry at large, is the move to smaller process nodes. The Helio P10 was built on 28 nm HPM, and this new P20 moves to 16 nm FinFET. Like the Helio P10 and Helio X20 (a 20 nm part), this SoC is produced at TSMC, in this case using their 16FF+ (FinFET Plus) technology. This should provide up to “40% higher speed and 60% power saving” compared to TSMC’s previous 20 nm process found in the Helio X20, though of course real-world results will have to wait until handsets are available to test.

The Helio P20 also takes advantage of LPDDR4X, and is “the world’s first SoC to support low power double data rate random access memory” according to MediaTek. The company says this new memory provides “70 percent more bandwidth than the LPDDR3 and 50 percent power savings by lowering supply voltage to 0.6v”. Graphics are powered by ARM’s high-end Mali-T880 GPU, clocked at an impressive 900 MHz. All-important modem connectivity includes Cat 6 LTE with 2x carrier aggregation for speeds of up to 300 Mbps down and 50 Mbps up. The Helio P20 also supports up to 4K/30 video decode with H.264/H.265 support, and the 12-bit dual camera ISP supports up to 24 MP sensors.
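MediaTek's power figure is at least plausible from first principles: dynamic I/O power scales roughly with the square of supply voltage, so dropping from LPDDR3's typical 1.2V rail to 0.6V should cut switching power substantially. A back-of-envelope sketch (the V-squared scaling and the 1.2V LPDDR3 figure are our assumptions, not MediaTek's math, and their 50 percent number presumably covers the whole memory subsystem):

```c
/* Back-of-envelope check on the LPDDR4X power claim.
   Assumes dynamic power ~ V^2 and a 1.2 V LPDDR3 I/O rail (assumption). */
#include <stdio.h>

int main(void)
{
    const double v_lpddr3  = 1.2;  /* typical LPDDR3 I/O voltage (assumption) */
    const double v_lpddr4x = 0.6;  /* LPDDR4X voltage from the announcement */

    double ratio = (v_lpddr4x * v_lpddr4x) / (v_lpddr3 * v_lpddr3);
    printf("I/O switching power: %.0f%% of LPDDR3 (%.0f%% saved)\n",
           ratio * 100.0, (1.0 - ratio) * 100.0);  /* 25%, 75% saved */
    return 0;
}
```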

Specs from MediaTek:

  • Process: 16nm
  • Apps CPU: 8x Cortex-A53, up to 2.3GHz
  • Memory: Up to 2x LPDDR4X 1600MHz (up to 6GB) + 1x LPDDR3 933MHz (up to 4GB) + eMMC 5.1
  • Camera: Up to 24MP at 24FPS w/ZSD, 12bit Dual ISP, 3A HW engine, Bayer & Mono sensor support
  • Video Decode: Up to 4Kx2K 30fps H.264/265
  • Video Encode: Up to 4Kx2K 30fps H.264
  • Graphics: Mali-T880 MP2 900MHz
  • Display: FHD 1920x1080 60fps. 2x DSI for dual display
  • Modem: LTE FDD TDD R.11 Cat.6 with 2x20 CA. C2K SRLTE. L+W DSDS support
  • Connectivity: WiFiac/abgn (with MT6630). GPS/Glonass/Beidou/BT/FM.
  • Audio: 110dB SNR & -95dB THD

It’s interesting to see SoC makers experiment with less complex CPU designs after a generation of multi-cluster (big.LITTLE) SoCs, as even the current flagship Qualcomm SoC, the Snapdragon 820, has reverted to a straight quad-core design. The P20 is expected to be in shipping devices by the second half of 2016, and we will see how this configuration performs once some devices using this new P20 SoC are in the wild.

Full press release after the break:

Source: MediaTek

Extreme Overclocking of Skylake (7.02566 GHz)

Subject: Processors | February 6, 2016 - 09:00 PM |
Tagged: Skylake, overclocking, asrock, Intel, gskill

I recently came across a post at PC Gamer that looked at the extreme overclocking leaderboard for the Skylake-based Intel Core i7-6700K. Competitions like these will probably never end as long as higher numbers are possible on parts that are interesting for one reason or another. Skylake is the new chip on the liquid nitrogen block. It cannot reach frequencies as high as its predecessors, but teams still compete to get as high as possible on that specific SKU.

The current single-threaded world record for the Intel Core i7-6700K is 7.02566 GHz, achieved with a reported voltage of 4.032V. For comparison, the i7-6700K typically runs at around 1.3V under load. This record was apparently set about a month ago, on January 11th.

This is obviously a huge increase: about three times the voltage for an extra 3 GHz. For comparison, the current world record across all known CPUs belongs to the AMD FX-8370, with a clock of 8.72278 GHz. Many Pentium 4-era processors occupy the top 15 places too, as those parts were designed for high clock rates at relatively low IPC.
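To put numbers on "three-fold," here is the arithmetic, assuming the i7-6700K's 4.0 GHz stock base clock (the voltages are the figures cited above):

```c
/* Rough ratios behind the record. The 4.0 GHz stock base clock of the
   i7-6700K is an assumption for the frequency delta. */
#include <stdio.h>

int main(void)
{
    const double v_stock = 1.3, v_record = 4.032;     /* volts */
    const double f_stock = 4.0, f_record = 7.02566;   /* GHz */

    printf("Voltage increase: %.2fx\n", v_record / v_stock);    /* ~3.10x */
    printf("Extra frequency:  %.2f GHz\n", f_record - f_stock); /* ~3.03 GHz */
    return 0;
}
```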

The rest of the system used G.SKILL Ripjaws 4 DDR4 RAM, an ASRock Z170M OC Formula motherboard, and an Antec 1300W power supply. It used an NVIDIA GeForce GT 630 GPU, which offloaded graphics from the integrated chip, but otherwise interfered as little as possible. They also used Windows XP, because why not I guess? I assume that it does the least amount of work to boot, allowing a quicker verification, but that is only a guess.

Source: HWBot

ASRock Releases BIOS to Disable Non-K Skylake Overclocking

Subject: Processors | February 5, 2016 - 11:44 AM |
Tagged: Intel, Skylake, overclocking, cpu, Non-K, BCLK, bios, SKY OC, asrock, Z170

ASRock's latest batch of motherboard BIOS updates removes the SKY OC function, which permitted overclocking of non-K Intel processors via BCLK (base clock) adjustment.

The news comes amid speculation that Intel had pressured motherboard vendors to remove such functionality. Intel's unlocked K parts (i5-6600K, i7-6700K) will once again be the only options for Skylake overclocking on ASRock Z170 boards (assuming prior BIOS versions are no longer available). And with no Pentium G3258 this generation, Intel no longer offers a budget-friendly option for enthusiasts looking to push their CPU past factory specs.

It sounds like now would be a good time to archive that SKY OC-enabled BIOS update file if you've downloaded it - or simply refrain from this BIOS update. What remains to be seen, of course, is whether other vendors will follow suit and disable BCLK overclocking of non-K processors. This had become a popular feature on a number of Z170 motherboards on the market, but ASRock may have been in too weak a position to battle Intel on the issue.

Source: Hexus

So That's Where Jim Keller Went To... Tesla Motors...

Subject: General Tech, Processors, Mobile | January 29, 2016 - 05:28 PM |
Tagged: tesla, tesla motors, amd, Jim Keller, apple

Jim Keller, a huge name in the semiconductor industry for his work at AMD and Apple, recently left AMD before the launch of the Zen architecture. This made us nervous, because when a big name leaves a company before a product launch, it could either be that their work is complete... or they're evacuating before a stink-bomb detonates and the whole room smells like rotten eggs.

It turns out a third option is possible: Elon Musk offers you a job making autonomous vehicles. Jim Keller's job title at Tesla will be Vice President of Autopilot Hardware Engineering. I could see this position being enticing, to say the least, even if you are confident in your previous employer's upcoming product stack. It doesn't tell us whether AMD's Zen architecture will be good or bad, but it nullifies the predictions made when Jim Keller left AMD, at least until further notice.

We don't know who approached whom, or when.

Another point of note: Tesla Motors currently uses NVIDIA Tegra SoCs in its cars, and NVIDIA is (obviously) a competitor of Jim Keller's former employer, AMD. It sounds like Jim Keller is moving into a somewhat different role than he had at AMD and Apple, but it could be interesting if Tesla starts taking chip design in-house, customizing silicon for their specific needs and taking responsibilities away from NVIDIA.

The first time he was at AMD, he was the lead architect of the Athlon 64 processor, and he co-authored the x86-64 instruction set. When he worked at Apple, he helped design the Apple A4 and A5 processors, which were the first two that Apple created in-house; the first three iPhone processors were Samsung SoCs.

Report: Intel Tigerlake Revealed; Company's Third 10nm CPU

Subject: Processors | January 24, 2016 - 12:19 PM |
Tagged: Tigerlake, rumor, report, processor, process node, Intel, Icelake, cpu, Cannonlake, 10 nm

A report from financial website The Motley Fool discusses Intel's plan to introduce three architectures at the 10 nm node, rather than the expected two. This comes after news that Kaby Lake will remain at the present 14 nm, interrupting Intel's two-year manufacturing cadence.

"Management has told investors that they are pushing to try to get back to a two-year cadence post-10-nanometer (presumably they mean a two-year transition from 10-nanometer to 7-nanometer), however, from what I have just learned from a source familiar with Intel's plans, the company is working on three, not two, architectures for the 10-nanometer node."

Intel's first 10 nm processor architecture will be known as Cannonlake, with Icelake expected to follow about a year afterward. With Tigerlake expected to be the third architecture built on 10 nm, and not coming until "the second half of 2019", we probably won't see 7 nm from Intel until the second half of 2020 at the earliest.

It appears that the days of two-year, two-product process nodes are numbered for Intel, as the report continues:

"If all goes well for the company, then 7-nanometer could be a two-product node, implying a transition to the 5-nanometer technology node by the second half of 2022. However, the source that I spoke to expressed significant doubts that Intel will be able to return to a two-years-per-technology cycle."

It will be interesting to see how players like TSMC, themselves "planning to start mass production of 7-nanometer in the first half of 2018", will fare moving forward as Intel's process development (apparently) slows.

TSMC Allegedly Wants 5nm by 2020

Subject: Graphics Cards, Processors | January 19, 2016 - 11:38 PM |
Tagged: TSMC

Digitimes is reporting on statements that were allegedly made by TSMC co-CEO Mark Liu. We are currently seeing 16nm parts come out of the foundry, and that node is expected to be used in the next generation of GPUs, replacing the long-running 28nm node that launched with the GeForce GTX 680. (It's still unannounced whether AMD and NVIDIA will use 14nm FinFET from Samsung or GlobalFoundries, or 16nm FinFET from TSMC.)

Update (Jan 20th, @4pm EST): A couple of minor corrections. The Radeon HD 7970 launched at 28nm first, by a couple of months; I just remember NVIDIA getting swamped in delays because it was a new node, so that's probably why I thought of the GTX 680. Also, AMD announced during CES that they will use GlobalFoundries to fab their upcoming GPUs, which I apparently missed. We suspect that NVIDIA will use TSMC, and have assumed so for a while, but it hasn't been officially announced yet (if ever).

According to their projections, which (again) are filtered through Digitimes, the foundry expects to have 7nm in the first half of 2018. They also expect to introduce extreme ultraviolet (EUV) lithography with 5nm in 2020. Given that solid silicon has a lattice spacing of ~0.54nm at room temperature, 7nm features will span about 13 atoms, and 5nm features only about 9.
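The arithmetic, for the curious, is simply feature size divided by lattice spacing (a deliberately crude measure; 0.543 nm is silicon's room-temperature lattice constant):

```c
/* Crude atoms-per-feature estimate: node size / silicon lattice spacing. */
#include <stdio.h>

int main(void)
{
    const double lattice_nm = 0.543;  /* silicon lattice constant at ~300 K */
    const double nodes[] = { 16.0, 10.0, 7.0, 5.0 };

    for (int i = 0; i < 4; i++)
        printf("%4.0f nm ~ %4.1f lattice cells\n",
               nodes[i], nodes[i] / lattice_nm);  /* 7nm ~ 12.9, 5nm ~ 9.2 */
    return 0;
}
```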

We continue the march toward the end of silicon lithography.

Even if the statement is correct, much can happen between now and then. It wouldn't be the first time I've seen a major foundry believe a node would arrive on schedule, only to have it delayed. I wouldn't hold my breath, but I might cross my fingers if my hands were free.

At the very least, we can assume that TSMC's roadmap is 16nm, 10nm, 7nm, and then 5nm.

Source: Digitimes

Skylake and Later Will Be Denied Windows 7 / 8.x Support

Subject: Processors | January 17, 2016 - 02:20 AM |
Tagged: Windows 8.1, Windows 7, windows 10, Skylake, microsoft, kaby lake, Intel, Bristol Ridge, amd

Microsoft has not been doing much to put out the fires in comment threads all over the internet. The latest flare-up involves hardware support on Windows 7 and 8.x. Currently unreleased architectures, such as Intel's Kaby Lake and AMD's Bristol Ridge, will only be supported on Windows 10. This is despite Windows 7 and Windows 8.x being supported until 2020 and 2023, respectively. Microsoft does not believe that this commitment extends to new hardware, though.

This brings us to Skylake. These processors are out, but Microsoft considers them “transition” parts. Microsoft provided PC World with a list of devices that will be given Windows 7 and Windows 8.x drivers, which enable support until July 17, 2017. Beyond that date, only a handful of “most critical” updates will be provided until the official end of life.

I am not sure what the cut-off date is for unsupported Skylake processors, though; that is, Skylake processors that do not line up with Microsoft's list could be deprecated at any time. This is especially a problem for parts that have already been sold.

As I hinted earlier, this will probably reinforce the opinion that Microsoft is doing something malicious with Windows 10. As Peter Bright of Ars Technica reports, Windows 10 does not exactly have an equivalent in the server space yet, which makes you wonder what that support cycle will be like. If they can continue to patch Skylake-based servers in Windows Server builds that are derived from Windows 7 and Windows 8.x, like Windows Server 2012 R2, then why are they unwilling to port those changes to the base operating system? If they will not patch current versions of Windows Server, because the Windows 10-derived version still isn't out yet, then what will happen with server farms, like Amazon Web Services, when Xeon v5s are suddenly incompatible with most Windows-based OS images? While this will, no doubt, be taken way out of context, there is room for legitimate commentary about this whole situation.

Of course, supporting new hardware on older operating systems can be difficult, and not just for Microsoft at that. Peter Bright also noted that Intel has similarly spotty driver coverage, although that mostly applies to Windows Vista, which, while still in extended support for another year, doesn't have a significant base of users who are unwilling to switch. The point remains, though, that Microsoft could be doing a favor for their hardware vendor partners.

I'm not sure whether that would be less concerning, or more.

Whatever the reason, this seems like a very silly, stupid move on Microsoft's part, given the current landscape. Windows 10 can become a great operating system, but users need to decide that for themselves. When users are pushed and an adequate reason is not provided, they will start to assume things, and chances are those assumptions will not be in Microsoft's favor. Some may put up with it, but others might continue to hold out on older platforms, maybe even including older hardware.

Other users may be able to get away with Windows 7 VMs on a Linux host.

Source: Ars Technica

Meet the new AMD Opteron A1100 Series SoC with ARM onboard

Subject: Processors | January 14, 2016 - 02:26 PM |
Tagged: opteron a1100, amd

The chip once known as Seattle has arrived from AMD: the Opteron A1100 Series, built around up to eight 64-bit ARM Cortex-A57 cores. The chips will have up to 4 MB of shared L2 cache and 8 MB of L3 cache, with an integrated dual-channel memory controller that supports up to 128 GB of DDR3 or DDR4 memory. For connectivity, you get two 10Gb Ethernet ports, 8 lanes of PCIe 3.0, and support for up to 14 SATA3 devices.

(Table: Opteron A1100 Series models and TDPs)

As you can see above, the TDPs range from 25W to 32W, perfect for power-conscious data centres. The SoftIron Overdrive 3000 systems will use the new A1100 chips, and AMD is working with Silver Lining Systems to integrate SLS’ fabric technology for interconnecting systems.

TechARP has posted a number of slides from AMD's presentation, or you can head straight over to AMD to get the scoop. You won't see these chips on the desktop, but new server chips are great news for AMD's bottom line in the coming year. They also speak well of AMD's continued innovation: combining low-power, low-cost 64-bit ARM cores with their interconnect technologies opens up a new market for AMD.

Full PR is available after the break.

Source: AMD

Report: AMD Carrizo FM2+ Processor Listing Appears Online

Subject: Processors | January 11, 2016 - 06:26 PM |
Tagged: rumor, report, FM2+, carrizo, Athlon X4, amd

According to a report published by CPU World, a pair of unreleased AMD Athlon X4 processors appeared in a supported CPU list on Gigabyte's website (since removed) long enough to give away some information about these new FM2+ models.

The CPUs in question are the Athlon X4 835 and Athlon X4 845, 65W quad-core parts that are both based on AMD's Excavator core, according to CPU World. The part numbers are AD835XACI43KA and AD845XACI43KA, which the CPU World report interprets:

"The 'I43' letters and digits in the part number signify Socket FM2+, 4 CPU cores, and 1 MB L2 cache per module, or 2MB in total. The last two letters 'KA' confirm that the CPUs are based on Carrizo design."

The report further states that the Athlon X4 835 will operate at 3.1 GHz, with 3.5 GHz for the X4 845. No Turbo Core frequency information is known for these parts.

Source: CPU-World

Far Cry Primal System Requirements Slightly Lower than 4?

Subject: Graphics Cards, Processors | January 9, 2016 - 07:00 AM |
Tagged: ubisoft, quad-core, pc gaming, far cry primal, dual-core

If you remember back when Far Cry 4 launched, it required a quad-core processor. It would block your attempts to launch the game unless it detected four CPU threads, either native quad-core or dual-core with two SMT threads per core. This has naturally been hacked around by the PC gaming community, but it is not supported by Ubisoft. It's also, apparently, a bad experience.
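We don't know how Ubisoft's check was implemented; a minimal sketch of that kind of logical-processor gate on Windows (purely illustrative, not Ubisoft's actual code) might look like this:

```c
/* Illustrative sketch of a launch gate on logical processor count.
   Not Ubisoft's code; just the kind of check being described above. */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);  /* dwNumberOfProcessors = logical CPU count */

    if (si.dwNumberOfProcessors < 4) {
        fprintf(stderr, "This game requires four CPU threads.\n");
        return 1;
    }
    printf("Detected %lu threads, launching...\n",
           (unsigned long)si.dwNumberOfProcessors);
    return 0;
}
```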

The follow-up, Far Cry Primal, will be released in late February. Oddly enough, it has similar, but maybe slightly lower, system requirements. I'll list them, and highlight the differences.

Minimum:

  • 64-bit Windows 7, 8.1, or 10 (basically unchanged from 4)
  • Intel Core i3-550 (down from i5-750)
    • or AMD Phenom II X4 955 (unchanged from 4)
  • 4GB RAM (unchanged from 4)
  • 1GB NVIDIA GTX 460 (unchanged from 4)
    • or 1GB AMD Radeon HD 5770 (down from HD 5850)
  • 20GB HDD Space (down from 30GB)

Recommended:

  • Intel Core i7-2600K (up from i5-2400S)
    • or AMD FX-8350 (unchanged from 4)
  • 8GB of RAM (unchanged from 4)
  • NVIDIA GeForce GTX 780 (up from GTX 680)
    • or AMD Radeon R9 280X (down from R9 290X)

While the CPU changes are interesting, the opposing directions of the recommended GPUs are fascinating. Either the parts are within Ubisoft's QA margin of error, or they increased the GPU load but were able to optimize for AMD better than in Far Cry 4, for a net gain in performance (which would also explain the slight bump in CPU power required to feed the extra content). Of course, either way is just a guess.

Back on the CPU topic, though, I would be interested to see the performance of Pentium Anniversary Edition parts. I wonder whether Ubisoft removed the thread-count lock, and, especially if hacks are still required, whether the game is playable anyway.

That is, in a month and a half.

Source: Ubisoft

Intel Pushes Device IDs of Kaby Lake GPUs

Subject: Graphics Cards, Processors | January 8, 2016 - 02:38 AM |
Tagged: Intel, kaby lake, linux, mesa

Quick post about something that came to light over at Phoronix. Someone noticed that Intel published a handful of PCI device IDs for graphics processors to Mesa and libdrm. It will take a few months for graphics drivers to catch up, although this suggests that Kaby Lake will be releasing relatively soon.

It also gives us hints about what Kaby Lake will be. The published batch spans six tiers of performance: GT1 has five IDs, GT1.5 has three IDs, GT2 has six IDs, GT2F has one ID, GT3 has three IDs, and GT4 has four IDs. Adding them up, we see that Intel plans 22 GPU devices. The Phoronix post lists the device IDs themselves, but those are probably not interesting for our readers. Whether some of those devices overlap in performance or numbering is unclear, but it would make sense given how few SKUs Intel usually provides (though I should note I have zero experience in GPU driver development).
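The tally, for anyone checking the math (tier counts are those reported by Phoronix):

```c
/* Tally of Kaby Lake GPU device IDs per tier, as reported by Phoronix. */
#include <stdio.h>

int main(void)
{
    struct { const char *tier; int ids; } kbl[] = {
        { "GT1", 5 }, { "GT1.5", 3 }, { "GT2", 6 },
        { "GT2F", 1 }, { "GT3", 3 }, { "GT4", 4 },
    };

    int total = 0;
    for (int i = 0; i < 6; i++) {
        printf("%-6s %d\n", kbl[i].tier, kbl[i].ids);
        total += kbl[i].ids;
    }
    printf("total  %d\n", total);  /* 22 planned GPU devices */
    return 0;
}
```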

Source: Phoronix

Rumor: Intel and Xiaomi "Special Deal"

Subject: Processors, Mobile | January 6, 2016 - 10:56 PM |
Tagged: xiaomi, Intel, atom

So this rumor cites anonymous source(s) that leaked info to Digitimes. That said, it aligns with things that I've suspected in a few other situations. We'll discuss this throughout the article.

Xiaomi, a popular manufacturer of mobile devices, is breaking into the laptop space. One model was spotted on pre-order in China with an Intel Core i7 processor. According to the aforementioned leak, Intel has agreed to bundle an additional Intel Atom processor with every Core i7 that Xiaomi orders. Use Intel in a laptop, and you can use Intel in an x86-based tablet for no additional cost.

(Image: a single grain of table salt, electron micrograph. Source: Wikipedia)

While it's not an explicit practice, we've been seeing hints of similar initiatives for years now. A little over a year ago, Intel's mobile group reported revenue of about $1 million, offset by roughly $1 billion in losses. We have also seen phones like the ASUS ZenFone 2, which has amazing performance at a seemingly impossible $199 / $299 price point. I'm not going to speculate on what the actual relationships are, but it sounds more complicated than a listed price per tray.

And that's fine, of course. I know comments will claim the opposite, either that x86 is unsuitable for mobile devices or alleging that Intel is doing shady things. In my view, it seems like Intel has products that they believe can change established mindsets if given a chance. Personally, I would be hesitant to get an x86-based developer phone, but that's because I would only want to purchase one and I'd prefer to target the platform that the majority has. It's that type of inertia that probably frustrates Intel, but they can afford to compete against it.

It does make you wonder how long Intel plans to make deals like this -- again, if they exist.

PC Perspective's CES 2016 coverage is sponsored by Logitech.

Follow all of our coverage of the show at http://pcper.com/ces!

Source: Digitimes

Photonic IC Created by University of Colorado Boulder

Subject: Processors | December 28, 2015 - 09:03 PM |
Tagged: optical, photonics

A typical integrated circuit pushes electrical voltage across pathways, with transistors and other components modifying it. When you interpret those voltages as mathematical values and logical instructions, then congratulations, you have created a processor, memory, and so forth. You don't need to use electricity for this, though. In fact, Charles Babbage and Ada Lovelace made history with their attempts to perform computation on mechanical state.

(Image credit: University of Colorado. The chip contains optical (left) and electric (top and right) circuits.)

One possible follow-up is the photonic integrated circuit, which routes light through optical waveguides rather than typical electric traces. The prototype made by the University of Colorado Boulder (and UC Berkeley) seems to use photonics just for communication, with an electrical IC handling the computation. The advantages are high bandwidth, high density, and low power.

This sort of technology has been investigated for several years. My undergraduate thesis in Physics involved computing light transfer through defects in a photonic crystal, using them to create 2D waveguides. With all the talk of silicon fabrication coming to its limits (14nm transistors are typically made of around two dozen atoms), this could be a new direction for innovation.

And honestly, wouldn't you want to overclock your PC to 400+ THz? Make it go plaid for ludicrous speed. (Yes, this paragraph is a joke.)

Intel Adds New Processors to Broadwell and Skylake Lineups

Subject: Processors | December 28, 2015 - 07:00 AM |
Tagged: skylake-u, Skylake, mobile cpu, Intel, desktop cpu, core i7, core i5, core i3, Broadwell

As reported by CPU World, Intel has added a total of eight new processors to the 5th-gen “Broadwell” and 6th-gen “Skylake” CPU lineups, with new mobile and desktop models appearing in Intel’s price lists. The models include Core and Celeron parts, and range from dual-core (five with Hyper-Threading) to a new quad-core i5:

(Chart of new Intel models from CPU-World)

“Intel today added 8 new Broadwell- and Skylake-based microprocessors to the official price list. New CPUs have unusual model numbers, like i5-6402P and i5-5200DU, which indicates that they may have different feature-set than the mainstream line of desktop and mobile CPUs. Intel also introduced today Celeron 3855U and 3955U ultra-low voltage models.”

It is unclear if the desktop models (Core i3-6098P, Core i5-6402P) listed will enter the retail channel, or if they are destined for OEM applications. The report points out these models have a P suffix “that was used to signify the lack of integrated GPU in older generations of Core i3/i5 products. There is a good chance that it still means just that”.

Source: CPU-World

Overclocking Locked Intel Skylake CPUs Possible - i3 6100 Benchmarked

Subject: Processors | December 11, 2015 - 02:08 PM |
Tagged: Skylake, overclocking, Intel, Core i3-6100, bios, BCLK, asrock

The days of Intel overclocking being limited to their more expensive unlocked parts appear to be over, as TechSpot has posted benchmarks from an overclocked Intel Core i3-6100 using a new (pre-release) BIOS update from ASRock.

"In overclocking circles it was recently noted that BCLK (base clock) overclocking might become a possibility in Skylake processors. Last night Asrock contacted us with an updated BIOS that enabled this. We jumped at the opportunity and have already tested and benched a Core i3-6100 Skylake CPU with a 1GHz overclock (4.7GHz) on air cooling."

The 1.0 GHz overclock was achieved with a 127 MHz base clock on the i3 processor, with a vcore of ~1.36V. Apparently the ASRock motherboard requires the processor's graphics portion to be disabled for overclocking with this method, so TechSpot used an NVIDIA GTX 960 for the test system. The results were impressive, as you might imagine.
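The math works out neatly if the chip kept its stock multiplier: the i3-6100 runs 3.7 GHz at the default 100 MHz BCLK, implying a 37x ratio (our inference; TechSpot reports the BCLK and final clock). A quick check:

```c
/* BCLK overclocking arithmetic: core clock = BCLK x multiplier.
   The 37x ratio is the i3-6100's stock multiplier (3.7 GHz at 100 MHz). */
#include <stdio.h>

int main(void)
{
    const double bclk_mhz = 127.0;
    const int multiplier = 37;

    double core_ghz = bclk_mhz * multiplier / 1000.0;
    printf("%.0f MHz x %d = %.3f GHz\n", bclk_mhz, multiplier, core_ghz);
    /* prints: 127 MHz x 37 = 4.699 GHz, i.e. the ~4.7 GHz reported */
    return 0;
}
```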

The following is a small sampling of the benchmark results available from the sourced TechSpot article:

(Benchmark charts from TechSpot: Cinebench R15 and Handbrake results)

The overclocked i3-6100 was able to come very close to the multi-threaded performance of the stock AMD FX-8320E (8-core) processor in Cinebench, with double the per-thread performance. Results from their Handbrake encode test were even better, with the overclocked i3-6100 essentially matching the performance of the Core i5-4430 processor tested.

Gaming was underwhelming, with very similar performance from the GTX 960 across all CPUs at the settings tested.

So what did the article say about this new overclocking-friendly BIOS availability? "We are told this updated BIOS for their Z170 motherboards will be available to owners very soon." It will be interesting to see if other vendors offer the same, as there are results out there using a SuperMicro board as well.

Source: TechSpot

AMD HSA Patches Hoping for GCC 6

Subject: Graphics Cards, Processors | December 8, 2015 - 08:07 AM |
Tagged: hsa, GCC, amd

Phoronix, the Linux-focused hardware website, highlighted patches for the GNU Compiler Collection (GCC) that implement HSA. This will allow newer APUs, such as AMD's Carrizo, to accelerate chunks of code (mostly loops) that have been tagged with a compiler directive as worth running on the GPU. While I have done some GPGPU development, many of the low-level specifics of HSA aren't areas where I have much experience.
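Concretely, the patches generate HSAIL for OpenMP 4 "target" regions, so code slated for the GPU would be marked roughly like this (a minimal sketch, not taken from the patch set itself):

```c
/* Minimal sketch of an OpenMP 4 target region, the construct the HSA
   patches offload. Compile with an HSA-enabled GCC and -fopenmp;
   without offloading support, the loop simply runs on the host. */
#include <stdio.h>

#define N (1 << 20)
static float a[N], b[N];

int main(void)
{
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* With HSA offloading, this loop is a candidate for the APU's GPU. */
    #pragma omp target map(tofrom: a) map(to: b)
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] += b[i];

    printf("a[42] = %.1f\n", a[42]);
    return 0;
}
```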

The patches have been managed by Martin Jambor of SUSE Labs. You can see a slideshow presentation of the work on the GNU website. Even though the feature freeze was about a month ago, they are apparently hoping that this will make it into the official GCC 6 release. If so, many developers around the world will be able to target HSA-compatible hardware in the first half of 2016. Technically, anyone can do so regardless, but they would need to use the unofficial branch on the GCC Subversion repository. This probably means compiling it themselves, and that branch might even be behind on a few features from other branches that were accepted into GCC 6.

Source: Phoronix

Intel Skylake Processors Can Bend Under Pressure, Damage CPU and LGA Socket

Subject: Processors | December 4, 2015 - 11:35 PM |
Tagged: Skylake, Intel, heatsink, damage, cpu cooler, Core i7 6700K, Core i5 6600K, bend, 6th generation, 3rd party

Some Intel 6th-gen "Skylake" processors have been damaged by the heatsink mounts of 3rd-party CPU coolers according to a report that began with pcgameshardware.de and has since made its rounds throughout PC hardware media (including the sourced Ars Technica article).

(Image: the highly-referenced pcgameshardware.de photo of a bent Skylake CPU)

The problem is easy enough to explain: Skylake has a notably thinner construction than earlier generations of Intel CPUs, and if enough pressure is exerted against these new processors, the green substrate can bend, causing damage not only to the CPU but also to the pins in the LGA 1151 socket.

The only way to prevent the possibility of a bend is to avoid overtightening the heatsink, but considering most compatible coolers on the market were designed for Haswell and earlier generations of Intel CPUs, this leaves users guessing what pressure might be adequate without potentially bending the CPU.

Intel has commented on the issue:

"The design specifications and guidelines for the 6th Gen Intel Core processor using the LGA 1151 socket are unchanged from previous generations and are available for partners and 3rd party manufacturers. Intel can’t comment on 3rdparty designs or their adherence to the recommended design specifications. For questions about a specific cooling product we must defer to the manufacturer."

It's worth noting that while Intel states that their "guidelines for the 6th Gen Intel Core processor using the LGA 1151 socket are unchanged from previous generations", it is specifically a change in substrate thickness that has caused the concerns. The problem is not limited to any specific brands, but certainly will be more of an issue for heatsink mounts that can exert a tremendous amount of pressure.

(Image: an LGA socket damaged by a bent Skylake CPU; credit: pcgameshardware)

From the Ars report:

"Noctua, EK Water Blocks, Scythe, Arctic, Thermaltake, and Thermalright, commenting to Games Hardware about the issue, suggested that damage from overly high mounting pressure is most likely to occur during shipping or relocation of a system. Some are recommending that the CPU cooler be removed altogether before a system is shipped."

Scythe is the first vendor to offer a solution to the issue, releasing this statement on their support website:

"Japanese cooling expert Scythe announces a change of the mounting system for Skylake / Socket 1151 on several coolers of its portfolio. All coolers are compatible with Skylake sockets in general, but bear the possibility of damage to CPU and motherboard in some cases where the PC is exposed to strong shocks (e.g. during shipping or relocation).This problem particularly involves only coolers which will mounted with the H.P.M.S. mounting system. To prevent this, the mounting pressure has been reduced by an adjustment of the screw set. Of course, Scythe is going to ship a the new set of screws to every customer completely free of charge! To apply for the free screw set, please send your request via e-mail to support@scythe.com or use the contact form on our website."

(Image: the thickness of Skylake (left) compared to Haswell (right); credit: pcgameshardware)

As the owner of an Intel Skylake i5-6600K, which I have been testing with an assortment of CPU coolers for upcoming reviews, I can report that my processor appears to be free of any obvious damage. I am particularly careful about pressure when attaching a heatsink, but there have been a couple of coolers (including the above-mentioned Scythe H.P.M.S. mounting system) that could easily have been tightened far beyond what was needed for a proper connection.

We will continue to monitor this situation and update as more vendors offer their response to the issue.

Source: Ars Technica

Rumors Surrounding the LG NUCLUN 2 SoC

Subject: Processors, Mobile | December 1, 2015 - 07:30 AM |
Tagged: TSMC, SoC, LG, Intel, arm

So this story came out of nowhere. Whether the rumors are true or false, I am stuck on how everyone seems to be talking about it with a casual deadpan. I spent a couple hours Googling whether I missed some big announcement that made Intel potentially fabricating ARM chips a mundane non-story. Pretty much all that I found was Intel allowing Altera to make FPGAs with embedded ARM processors in a supporting role, which is old news.

The rumor is that Intel and TSMC were both vying to produce LG's Nuclun 2 SoC. This part is said to house two quad-core ARM clusters in a typical big.LITTLE formation. Samples were allegedly produced, with Intel's part (2.4 GHz) able to clock around 300 MHz faster than TSMC's offering (2.1 GHz). Clock rate is highly dependent upon the “silicon lottery,” so this is an area where production maturity can help. Intel's sample would also be manufactured at 14nm (versus 16nm from TSMC, although these numbers mean less than they used to). LG was also, again allegedly, interested in Intel's LTE modem. According to the rumors, LG went with TSMC because they felt Intel couldn't keep up with demand.

Now that the rumor has been reported... let's step back a bit.

I talked with Josh a couple of days ago about this post. He's quite skeptical (as am I) about the whole situation. First and foremost, it takes quite a bit of effort to port a design to a different manufacturing process. LG could do it, but it is questionable, especially for only the second chip the company has ever designed. Moreover, I still believe that Intel doesn't want to manufacture chips that directly compete with their own. x86 in phones is still not a viable business, but Intel hasn't given up, and you would think that's a prerequisite.

So this whole thing doesn't seem right.

Source: Android

Intel to Ship FPGA-Accelerated Xeons in Early 2016

Subject: Processors | November 20, 2015 - 06:21 PM |
Tagged: xeon, Intel, FPGA

UPDATE (Nov 26th, 3:30pm ET): A few readers have mentioned that FPGAs take much less than hours to reprogram. I even received an email last night claiming that FPGAs can be reprogrammed in "well under a second." This differs from the sources I read when researching their OpenCL capabilities (for potential evolutions of projects) back in ~2013, but multiple sources, including some who claim personal experience with FPGAs, say the hours figure is not the case. I should note that I've never used an FPGA myself -- again, I was just researching them to see where some GPU-based projects could go.

Designing integrated circuits, as I've said a few times, is basically a game. You have a blank canvas that you can etch complexity into. The amount of “complexity” depends on your fabrication process, how big your chip is, the intended power, and so forth. Performance depends on how you use the complexity to compute actual tasks. If you know something special about your workload, you can optimize your circuit to do more with less. CPUs are designed to do basically anything, while GPUs assume similar tasks can be run together. If you will only ever run a single program, you can even bake some or all of its source code into hardware called an “application-specific integrated circuit” (ASIC), which is often used for video decoding, rasterizing geometry, and so forth.

(Image: an old Atom from back when Intel was partnered with Altera for custom chips)

FPGAs are circuits that can be configured for a specific application, but can also be reprogrammed later. Changing tasks can require a significant amount of time (sometimes cited as hours, though see the update above), but it is easier than reconfiguring an ASIC, which involves removing it from your system, throwing it in the trash, and printing a new one. FPGAs are not quite as efficient as a dedicated ASIC, but they're about as close as you can get without translating the actual source code directly into a circuit.

Intel, after purchasing FPGA manufacturer Altera, will integrate their technology into Xeons in Q1 2016. This will be useful for offloading specific tasks that dominate a server's total workload. According to PC World, they will be integrated as a two-chip package, where both the CPU and FPGA can access the same cache. I'm not sure what form of heterogeneous memory architecture Intel is using, but this would be a great example of a part that could benefit from in-place acceleration. You could imagine a simple function being baked into the FPGA to, I don't know, process large videos in very specific ways without expensive copies.

Again, this is not a consumer product, and may never be. Reprogramming an FPGA can take hours, and I can't think of too many situations where consumers will trade off hours of time to switch tasks with high performance. Then again, it just takes one person to think of a great application for it to take off.

Source: PCWorld