Subject: Graphics Cards, Processors | January 8, 2016 - 02:38 AM | Scott Michaud
Tagged: Intel, kaby lake, linux, mesa
Quick post about something that came to light over at Phoronix. Someone noticed that Intel published a handful of PCI device IDs for graphics processors to Mesa and libdrm. It will take a few months for graphics drivers to catch up, but this suggests that Kaby Lake will release relatively soon.
It also gives us hints about what Kaby Lake will be. The published batch spans six tiers of performance: GT1 has five IDs, GT1.5 has three, GT2 has six, GT2F has one, GT3 has three, and GT4 has four. Adding them up, we see that Intel plans 22 GPU devices. The Phoronix post lists what those device IDs are, but that is probably not interesting for our readers. Whether some of those devices overlap in performance or numbering is unclear, but overlap would make sense given how few SKUs Intel usually provides. (For full disclosure, I have zero experience in GPU driver development.)
Subject: Processors, Mobile | January 6, 2016 - 10:56 PM | Scott Michaud
Tagged: xiaomi, Intel, atom
So this rumor cites anonymous source(s) that leaked info to Digitimes. That said, it aligns with things that I've suspected in a few other situations. We'll discuss this throughout the article.
Xiaomi, a popular manufacturer of mobile devices, is breaking into the laptop space. One model was spotted on pre-order in China with an Intel Core i7 processor. According to the aforementioned leak, Intel has agreed to bundle an additional Intel Atom processor with every Core i7 that Xiaomi orders. Use Intel in a laptop, and you can use Intel in an x86-based tablet at no additional cost.
A single grain of salt...
Image Source: Wikipedia
While it's not an explicit practice, we've been seeing hints of similar initiatives for years now. A little over a year ago, Intel's mobile group reported revenues of roughly $1 million, offset by roughly $1 billion in losses. We would also see phones like the ASUS ZenFone 2, which has amazing performance at a seemingly impossible $199 / $299 price point. I'm not going to speculate on what the actual relationships are, but it sounds more complicated than a listed price per tray.
And that's fine, of course. I know comments will claim the opposite, either that x86 is unsuitable for mobile devices or that Intel is doing shady things. In my view, it seems like Intel has products that it believes can change established mindsets if given a chance. Personally, I would be hesitant to get an x86-based developer phone, but that's because I would only want to purchase one and I'd prefer to target the platform that the majority uses. It's that type of inertia that probably frustrates Intel, but they can afford to compete against it.
It does make you wonder how long Intel plans to make deals like this -- again, if they exist.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Processors | December 28, 2015 - 09:03 PM | Scott Michaud
Tagged: optical, photonics
A typical integrated circuit pushes electrical voltages across pathways, with transistors modifying them along the way. When you interpret those voltages as mathematical values and logical instructions, then congratulations: you have created a processor, memory, and so forth. You don't need to use electricity for this. In fact, Charles Babbage and Ada Lovelace famously attempted to perform computation with mechanical state.
Image Credit: University of Colorado
Chip contains optical (left) and electric (top and right) circuits.
One possible follow-up is the photonic integrated circuit, which routes light through optical waveguides rather than typical electrical traces. The prototype made by the University of Colorado Boulder (and UC Berkeley) seems to use photonics just to communicate, alongside an electrical IC for the computation. The advantages are high bandwidth, high density, and low power.
This sort of technology has been investigated for several years. My undergraduate thesis in Physics involved computing light transfer through defects in a photonic crystal, using them to create 2D waveguides. With all the talk of silicon fabrication reaching its limits (14nm features span only a few dozen atoms), this could be a new direction for innovation.
And honestly, wouldn't you want to overclock your PC to 400+ THz? Make it go plaid for ludicrous speed. (Yes, this paragraph is a joke.)
Subject: Processors | December 28, 2015 - 07:00 AM | Sebastian Peak
Tagged: skylake-u, Skylake, mobile cpu, Intel, desktop cpu, core i7, core i5, core i3, Broadwell
As reported by CPU World, Intel has added a total of eight new processors to the 5th-gen “Broadwell” and 6th-gen “Skylake” CPU lineups, with new mobile and desktop models appearing in Intel’s price lists. The models include Core and Celeron parts, ranging from dual-core (five with Hyper-Threading) to a new quad-core i5:
Chart of new Intel models from CPU-World
“Intel today added 8 new Broadwell- and Skylake-based microprocessors to the official price list. New CPUs have unusual model numbers, like i5-6402P and i5-5200DU, which indicates that they may have different feature-set than the mainstream line of desktop and mobile CPUs. Intel also introduced today Celeron 3855U and 3955U ultra-low voltage models.”
It is unclear if the desktop models (Core i3-6098P, Core i5-6402P) will enter the retail channel, or if they are destined for OEM applications. The report points out these models have a P suffix “that was used to signify the lack of integrated GPU in older generations of Core i3/i5 products. There is a good chance that it still means just that”.
Subject: Processors | December 11, 2015 - 02:08 PM | Sebastian Peak
Tagged: Skylake, overclocking, Intel, Core i3-6100, bios, BCLK, asrock
The days of Intel overclocking being limited to their more expensive unlocked parts appear to be over, as TechSpot has posted benchmarks from an overclocked Intel Core i3-6100 using a new (pre-release) BIOS update from ASRock.
Image credit: TechSpot
"In overclocking circles it was recently noted that BCLK (base clock) overclocking might become a possibility in Skylake processors. Last night Asrock contacted us with an updated BIOS that enabled this. We jumped at the opportunity and have already tested and benched a Core i3-6100 Skylake CPU with a 1GHz overclock (4.7GHz) on air cooling."
The 1.0 GHz overclock was achieved with a 127 MHz base clock on the i3 processor, with a vcore of ~1.36 V. Apparently the ASRock motherboard requires the processor's graphics portion to be disabled for overclocking with this method, and TechSpot used an NVIDIA GTX 960 for the test system. The results were impressive, as you might imagine.
The following is a small sampling of the benchmark results available from the sourced TechSpot article:
Image credit: TechSpot
Image credit: TechSpot
The overclocked i3-6100 was able to come very close to the multi-threaded performance of the stock AMD FX-8320E (8-core) processor in Cinebench, with double the per-thread performance. Results from their Handbrake encode test were even better, with the overclocked i3-6100 essentially matching the performance of the Core i5-4430 processor tested.
Gaming was underwhelming, with very similar performance from the GTX 960 from all CPUs at the settings tested.
Image credit: TechSpot
As for availability of this overclocking-friendly BIOS, the article notes: "We are told this updated BIOS for their Z170 motherboards will be available to owners very soon." It will be interesting to see whether other vendors follow suit, as there are also results out there using a SuperMicro board.
Subject: Graphics Cards, Processors | December 8, 2015 - 08:07 AM | Scott Michaud
Tagged: hsa, GCC, amd
Phoronix, the Linux-focused hardware website, highlighted patches for the GNU Compiler Collection (GCC) that implement HSA. This will allow newer APUs, such as AMD's Carrizo, to accelerate chunks of code (mostly loops) that have been tagged with a compiler directive as worth running on the GPU. While I have done some GPGPU development, many of the low-level specifics of HSA aren't areas where I have much experience.
The patches have been managed by Martin Jambor of SUSE Labs. You can see a slideshow presentation of their work on the GNU website. Even though the GCC feature freeze was about a month ago, they are apparently hoping that this will make it into the official GCC 6 release. If so, many developers around the world will be able to target HSA-compatible hardware in the first half of 2016. Technically, anyone can do so already, but they would need to use the unofficial branch in the GCC Subversion repository. That probably means compiling the compiler themselves, and the branch might even be behind on features from other branches that were accepted into GCC 6.
Subject: Processors | December 4, 2015 - 11:35 PM | Sebastian Peak
Tagged: Skylake, Intel, heatsink, damage, cpu cooler, Core i7 6700K, Core i7 6600K, bend, 6th generation, 3rd party
Some Intel 6th-gen "Skylake" processors have been damaged by the heatsink mounts of 3rd-party CPU coolers, according to a report that began with pcgameshardware.de and has since made its rounds throughout PC hardware media (including the sourced Ars Technica article).
The highly-referenced pcgameshardware.de image of a bent Skylake CPU
The problem is easy enough to explain: Skylake has a notably thinner package than earlier generations of Intel CPUs, and if enough pressure is exerted against these new processors the green substrate can bend, damaging not only the CPU but also the pins in the LGA 1151 socket.
The only way to prevent a bend is to avoid overtightening the heatsink, but considering most compatible coolers on the market were designed for Haswell and earlier generations of Intel CPU, this leaves users to guess what mounting pressure is adequate without potentially bending the CPU.
Intel has commented on the issue:
"The design specifications and guidelines for the 6th Gen Intel Core processor using the LGA 1151 socket are unchanged from previous generations and are available for partners and 3rd party manufacturers. Intel can’t comment on 3rdparty designs or their adherence to the recommended design specifications. For questions about a specific cooling product we must defer to the manufacturer."
It's worth noting that while Intel states that their "guidelines for the 6th Gen Intel Core processor using the LGA 1151 socket are unchanged from previous generations", it is specifically a change in substrate thickness that has caused the concerns. The problem is not limited to any specific brands, but certainly will be more of an issue for heatsink mounts that can exert a tremendous amount of pressure.
An LGA socket damaged from a bent Skylake CPU (credit: pcgameshardware)
From the Ars report:
"Noctua, EK Water Blocks, Scythe, Arctic, Thermaltake, and Thermalright, commenting to Games Hardware about the issue, suggested that damage from overly high mounting pressure is most likely to occur during shipping or relocation of a system. Some are recommending that the CPU cooler be removed altogether before a system is shipped."
Scythe has been the first vendor to offer a solution to the issue, releasing this statement on their support website:
"Japanese cooling expert Scythe announces a change of the mounting system for Skylake / Socket 1151 on several coolers of its portfolio. All coolers are compatible with Skylake sockets in general, but bear the possibility of damage to CPU and motherboard in some cases where the PC is exposed to strong shocks (e.g. during shipping or relocation).This problem particularly involves only coolers which will mounted with the H.P.M.S. mounting system. To prevent this, the mounting pressure has been reduced by an adjustment of the screw set. Of course, Scythe is going to ship a the new set of screws to every customer completely free of charge! To apply for the free screw set, please send your request via e-mail to email@example.com or use the contact form on our website."
The thickness of Skylake (left) compared to Haswell (right) (credit: pcgameshardware)
As the owner of an Intel Skylake i5-6600K, which I have been testing with an assortment of CPU coolers for upcoming reviews, I can report that my processor appears to be free of any obvious damage. I am particularly careful about pressure when attaching a heatsink, but there have been a couple (including the aforementioned Scythe H.P.M.S. mounting system) that could easily have been tightened far beyond what was needed for proper contact.
We will continue to monitor this situation and update as more vendors offer their response to the issue.
Subject: Processors, Mobile | December 1, 2015 - 07:30 AM | Scott Michaud
Tagged: TSMC, SoC, LG, Intel, arm
So this story came out of nowhere. Whether the rumors are true or false, I am stuck on how everyone seems to be talking about it with a casual deadpan. I spent a couple hours Googling whether I missed some big announcement that made Intel potentially fabricating ARM chips a mundane non-story. Pretty much all that I found was Intel allowing Altera to make FPGAs with embedded ARM processors in a supporting role, which is old news.
Image Credit: Internet Memes...
The rumor is that Intel and TSMC were both vying to produce LG's Nuclon 2 SoC. This part is said to house two quad-core ARM modules in a typical big.LITTLE formation. Samples were allegedly produced, with Intel's part (2.4 GHz) clocking around 300 MHz faster than TSMC's offering (2.1 GHz). Clock rate is highly dependent upon the “silicon lottery,” so this is an area where production maturity can help. Intel's sample would also be manufactured at 14nm (versus 16nm from TSMC, although these numbers mean less than they used to). LG was also, again allegedly, interested in Intel's LTE modem. According to the rumors, LG went with TSMC because they felt Intel couldn't keep up with demand.
Now that the rumor has been reported... let's step back a bit.
I talked with Josh a couple of days ago about this post. He's quite skeptical (as I am) about the whole situation. First and foremost, it takes quite a bit of effort to port a design to a different manufacturing process. LG could do it, but it is questionable, especially for only the company's second chip. Moreover, I still believe that Intel doesn't want to manufacture chips that directly compete with its own. x86 in phones is still not a viable business, but Intel hasn't given up on it, and you would think giving up would be a prerequisite for fabricating a competitor's ARM SoC.
So this whole thing doesn't seem right.
Subject: Processors | November 20, 2015 - 06:21 PM | Scott Michaud
Tagged: xeon, Intel, FPGA
UPDATE (Nov 26th, 3:30pm ET): A few readers have mentioned that FPGAs take much less than hours to reprogram. I even received an email last night claiming that FPGAs can be reprogrammed in "well under a second." This differs from the sources I read back in ~2013, when I was researching their OpenCL capabilities for potential evolutions of projects. That said, multiple sources, including one who claims personal experience with FPGAs, say that hours-long reprogramming is not the case. For full disclosure, I've never used an FPGA myself; again, I was just researching them to see where some GPU-based projects could go.
Designing integrated circuits, as I've said a few times, is basically a game. You have a blank canvas that you can etch complexity into. The amount of “complexity” depends on your fabrication process, how big your chip is, the intended power, and so forth. Performance depends on how you use the complexity to compute actual tasks. If you know something special about your workload, you can optimize your circuit to do more with less. CPUs are designed to do basically anything, while GPUs assume similar tasks can be run together. If you will only ever run a single program, you can even bake some or all of its source code into hardware called an “application-specific integrated circuit” (ASIC), which is often used for video decoding, rasterizing geometry, and so forth.
This is an old Atom back when Intel was partnered with Altera for custom chips.
FPGAs are circuits that can be configured for a specific application, but can also be reprogrammed later. Changing tasks requires a significant amount of time (sometimes hours), but it is easier than reconfiguring an ASIC, which involves removing it from your system, throwing it in the trash, and fabricating a new one. FPGAs are not quite as efficient as a dedicated ASIC, but they're about as close as you can get without translating the actual source code directly into a circuit.
Intel, after purchasing FPGA manufacturer Altera, will integrate their technology into Xeons in Q1 2016. This will be useful to offload specific tasks that dominate a server's total workload. According to PC World, they will be integrated as a two-chip package, where both the CPU and FPGA can access the same cache. I'm not sure what form of heterogeneous memory architecture Intel is using, but this would be a great example of a part that could benefit from in-place acceleration. You could imagine a simple function being baked into the FPGA to, say, process large videos in very specific ways without expensive copies.
Again, this is not a consumer product, and may never be. Reprogramming an FPGA can take hours, and I can't think of too many situations where consumers will trade off hours of time to switch tasks with high performance. Then again, it just takes one person to think of a great application for it to take off.
Subject: Processors | November 18, 2015 - 07:34 AM | Scott Michaud
Tagged: Xeon Phi, knights landing, Intel
The add-in board version of the Xeon Phi, which Intel aims at supercomputing audiences, has just launched. They also announced that this product will be available as a socketed processor, embedded in, as PC World states, “a limited number of workstations” by the first half of next year. The interesting part about these processors is that they combine a GPU-like architecture with the x86 instruction set.
Image Credit: Intel (Developer Zone)
In the case of next year's socketed Knights Landing CPUs, you can even boot your OS with it (and no other processor installed). It will probably be a little like running a 72-core Atom-based netbook.
To make it a little more clear, Knights Landing is a 72-core, 512-bit processor. You might wonder how that can compete against a modern GPU, which has thousands of cores, but those are not really cores in the CPU sense. GPUs crunch massive amounts of calculations by essentially tying several cores together, and doing other tricks to minimize die area per effective instruction. NVIDIA ties 32 instructions together and pushes them down the silicon. As long as they don't diverge, you can get 32 independent computations for very little die area. AMD packs 64 together.
Knights Landing does the same: its 512-bit registers can hold 16 single-precision (32-bit) values and operate on them simultaneously.
16 times 72 is 1152. All of a sudden, we're in shader-count territory. This is one of the reasons why they can achieve such high performance with “only” 72 cores, compared to the “thousands” that are present on GPUs. They're actually on a similar scale, just counted differently.
Update: (November 18th @ 1:51 pm EST) I just realized that, while I kept saying "one of the reasons", I never elaborated on the other points. Knights Landing also has four threads per core. So that "72 core" is actually "288 thread", with 512-bit registers that can perform sixteen 32-bit SIMD instructions simultaneously. While hyperthreading is not known to be 100% efficient, you could consider Knights Landing to be a GPU with 4608 shader units. Again, it's not the best way to count it, but it could sort-of work.
So in terms of raw performance, Knights Landing can crunch about 8 TeraFLOPs of single-precision math, or around 3 TeraFLOPs of double-precision (64-bit) math. This is around 30% faster than the Titan X in single precision, and around twice the performance of Titan Black in double precision. NVIDIA basically removed the FP64 compute units from Maxwell / Titan X, so Knights Landing is about 16x faster there, but that's not really a fair comparison; NVIDIA recommends Kepler for double-precision workloads.
So, interestingly, Knights Landing would be a top-tier graphics card (in terms of shading performance) if it were compatible with typical graphics APIs. Of course, it's not, and it will be priced far higher than, for instance, the AMD Radeon Fury X. Knights Landing isn't available on Intel ARK yet, but previous models are in the $2000 - $4000 range.
Subject: Processors, Systems | November 17, 2015 - 11:21 AM | Sebastian Peak
Tagged: Skylake, NUC6i5SYK, NUC6i5SYH, NUC6i3SYK, NUC6i3SYH, nuc, mini-pc, Intel, i5-6260U, i3-6100U
(Image credit: PCMag)
NUC systems sporting the latest Intel 6th-gen Skylake processors are coming, with the NUC6i5SYH, NUC6i5SYK, NUC6i3SYH, and NUC6i3SYK listed with updated Core i5 and i3 CPUs. As this is a processor refresh, the appearance and product nomenclature remain unchanged (unfortunately).
The four new Skylake Intel NUC models listed on Intel's product page
Here's Intel's description of the Skylake Core i5-powered NUC6i5SYH:
"Intel NUC Kit NUC6i5SYH is equipped with Intel’s newest architecture, the 6th generation Intel Core i5-6260U processor. Intel Iris graphics 540 with 4K display capabilities provides brilliant resolution for gaming and home theaters. NUC5i5SYH has room for a 2.5” drive for additional storage and an M.2 SSD so you can transfer your data at lightning speed. Designed for Windows 10, NUC6i5SYH has the performance to stream media, manage spreadsheets, or create presentations."
The NUC6i5SYH and NUC6i5SYK feature the i5-6260U, a dual-core, Hyper-Threaded 15W part with a base speed of 1.9 GHz and up to 2.8 GHz Turbo. It has 4 MB of cache and supports up to 32GB of 2133 MHz DDR4. The processor also provides Intel Iris graphics 540 (Skylake GT3e), which offers 48 Execution Units and 64 MB of dedicated eDRAM. The lower-end NUC6i3SYH and NUC6i3SYK models offer the i3-6100U, also a dual-core, Hyper-Threaded 15W part, but its speed is fixed at 2.3 GHz without Turbo Boost, and it offers the lesser Intel HD Graphics 520.
Availability and pricing are not yet known, but expect to see the new models for sale soon.
Subject: Processors | November 13, 2015 - 06:40 PM | Sebastian Peak
Tagged: X99, processor, LGA2011-v3, Intel, i7-6950X, HEDT, Haswell-E, cpu, Broadwell-E
Intel's high-end desktop (HEDT) processor line will reportedly be moving from Haswell-E to Broadwell-E soon, and with the move Intel will offer their highest consumer core count to date, according to a post at XFastest which WCCFtech reported on yesterday.
Image credit: VR-Zone
While it had been thought that Broadwell-E would feature the same core counts as Haswell-E (as seen on the leaked slide above), according to the report the upcoming flagship Core i7-6950X will be a massive 10 core, 20 thread part built using Intel's 14 nm process. Broadwell-E is expected to provide an upgrade to those running on Intel's current enthusiast X99 platform before Skylake-E arrives with an all-new chipset.
WCCFtech offered this chart in their report, outlining the differences between the HEDT generations (and providing a glimpse of the future Skylake-E variant):
Intel HEDT generations compared (Credit: WCCFtech)
It isn't all that surprising that one of Intel's LGA2011-v3 processors would arrive on desktops with 10 cores, as these are closely related to the Xeon server processors, and Haswell-based Xeon CPUs are already available with up to 18 cores. Those parts, however, are priced far beyond what even the extreme builder would find reasonable (not to mention being far less suited to a desktop build, based on motherboard compatibility). The projected $999 price tag for the 10-core Extreme Edition part would mark not only the first time an Intel desktop processor reached that core count, but also the lowest price to date for one of the company's 10-core parts (Xeon or otherwise).
Subject: Processors | November 12, 2015 - 01:22 PM | Jeremy Hellstrom
Tagged: linux, Skylake, Intel, i5-6600K, hd 530, Ubuntu 15.10
A great way to shave money off of a minimalist system is to skip buying a GPU and use the one present on modern processors, as well as to install Linux instead of buying a Windows license. The problem with doing so is that playing demanding games will be beyond your computer's ability, at least without turning off most of the features that make the game look good. To help you figure out what such a machine is capable of, there is this article from Phoronix. Their tests show that Windows 10 currently has a very large performance lead over the same hardware running Ubuntu, as the Windows OpenGL driver is superior to the open-source Linux driver. This may change sooner rather than later, but be aware that for now you will not get the most out of your Skylake's GPU on Linux.
"As it's been a while since my last Windows vs. Linux graphics comparison and haven't yet done such a comparison for Intel's latest-generation Skylake HD Graphics, the past few days I was running Windows 10 Pro x64 versus Ubuntu 15.10 graphics benchmarks with a Core i5 6600K sporting HD Graphics 530."
Here are some more Processor articles from around the web:
- Intel Core i5 6500: A Great Skylake CPU For $200, Works Well On Linux @ Phoronix
- CPU Battle - Old and High-End vs. New and Entry-Level @ Hardware Secrets
- Which is the faster CPU: old but high-end or entry-level and new? - Part 2 @ Hardware Secrets
- AMD FX 8320E CPU Review @ Neoseeker
Subject: Processors, Mobile | November 12, 2015 - 09:30 AM | Sebastian Peak
Tagged: SoC, smartphone, Samsung Galaxy, Samsung, mobile, Exynos 8890, Exynos 8 Octa, Exynos 7420, Application Processor
Coming just a day after Qualcomm officially launched their Snapdragon 820 SoC, Samsung is today unveiling their latest flagship mobile part, the Exynos 8 Octa 8890.
The Exynos 8 Octa 8890 is built on Samsung’s 14 nm FinFET process like the previous Exynos 7 Octa 7420, and is again based on a big.LITTLE configuration, though the big processing cores are a custom design this time around. The Exynos 7420 comprised four ARM Cortex A57 cores and four small Cortex A53 cores; while the small cores in the 8890 are again ARM Cortex A53, the big cores feature Samsung’s “first custom designed CPU based on 64-bit ARMv8 architecture”.
“With Samsung’s own SCI (Samsung Coherent Interconnect) technology, which provides cache-coherency between big and small cores, the Exynos 8 Octa fully utilizes benefits of big.LITTLE structure for efficient usage of the eight cores. Additionally, Exynos 8 Octa is built on highly praised 14nm FinFET process. These all efforts for Exynos 8 Octa provide 30% more superb performance and 10% more power efficiency.”
Another big advancement for the Exynos 8 Octa is the integrated modem, which provides Category 12/13 LTE with download speeds (with carrier aggregation) of up to 600 Mbps, and uploads up to 150 Mbps. This might sound familiar, as it mirrors the LTE Release 12 specs of the new modem in the Snapdragon 820.
Video processing is handled by the Mali-T880 GPU, moving up from the Mali-T760 found in the Exynos 7 Octa. The T880 is “the highest performance and the most energy-efficient mobile GPU in the Mali family”, with up to 1.8x the performance of the T760 while being 40% more energy-efficient.
Samsung will be taking this new SoC into mass production later this year, and the chip is expected to be featured in the company’s upcoming flagship Galaxy phone.
Full PR after the break.
Subject: Processors | November 6, 2015 - 10:09 AM | Sebastian Peak
Tagged: tape out, processors, GLOBALFOUNDRIES, global foundries, APU, amd, 14 nm FinFET
GlobalFoundries has today officially announced their success with sample 14 nm FinFET production for upcoming AMD products.
(Image credit: KitGuru)
GlobalFoundries licensed 14 nm LPE and LPP technology from Samsung in 2014, and were producing wafers as early as April of this year. At the time a GF company spokesperson was quoted in this report at KitGuru, stating "the early version (14LPE) is qualified in our fab and our lead product is yielding in double digits. Since 2014, we have taped multiple products and testchips and are seeing rapid progress, in yield and maturity, for volume shipments in 2015." Now they have moved past LPE (Low Power Early) to LPP (Low Power Plus), with new products based on the technology slated for 2016:
"AMD has taped out multiple products using GLOBALFOUNDRIES’ 14nm Low Power Plus (14LPP) process technology and is currently conducting validation work on 14LPP production samples. Today’s announcement represents another significant milestone towards reaching full production readiness of GLOBALFOUNDRIES’ 14LPP process technology, which will reach high-volume production in 2016."
GlobalFoundries was originally the manufacturing arm of AMD, and has continued to produce the company's processors since the spin-off in 2009. AMD's current desktop FX-8350 CPU was manufactured on 32 nm SOI, and more recently APUs such as the A10-7850K have been produced at 28 nm, both at GlobalFoundries. Intel's latest offerings, such as the flagship 6700K desktop CPU, are produced on Intel's 14nm process, and the success of 14LPP production at GlobalFoundries has the potential to bring AMD's new processors closer to parity with Intel (at least from a lithography standpoint).
Full PR after the break.
Subject: Processors | November 5, 2015 - 09:30 PM | Sebastian Peak
Tagged: SoC, report, processor, mobile apu, leak, FX-9830PP, cpu, Bristol Ridge, APU, amd
A new report points to an entry from the USB Implementers Forum which shows an unreleased AMD Bristol Ridge SoC.
(AMD via VideoCardz.com)
Bristol Ridge itself is not news, as the report at Computer Base observes (translation):
"A leaked roadmap had previously noted that Bristol Ridge is in the coming year soldered on motherboards for notebooks and desktop computers in special BGA package FP4."
(USB.org via Computer Base)
But there is something different about this chip: as the report points out, the model name FX-9830P pictured in the USB.org screen grab is consistent with the naming scheme for notebook parts, the highest current model being the FX-8800P (Carrizo), a 35W 4-thread Excavator part with 512 stream processors in its R7 GPU core.
(BenchLife via Computer Base)
No details are available other than information from a leaked roadmap (above), which points to Bristol Ridge as an FP4 BGA part for mobile, with a desktop variant for socket FM3 that would replace Kaveri/Godavari (and possibly still an Excavator part). New cores are coming in 2016, and we'll have to wait for additional details (or until more information inevitably leaks out).
Update, 11/06/15: WCCFtech expounds on the leak:
“Bristol Ridge isn’t just limited to mobility platforms but will also be featured on AM4 desktop platform as Bristol Ridge will be the APU generation available on desktops in 2016 while Zen would be integrated on the performance focused FX processors.”
WCCFtech’s report also included a link to this SiSoftware database entry for an engineering sample of a dual-core Stoney Ridge processor, a low-power mobile part with a 2.7 GHz clock speed. Stoney Ridge will reportedly succeed Carrizo-L for low-power platforms.
The report also provided this chart to reference the new products:
Subject: Processors | October 23, 2015 - 02:21 PM | Sebastian Peak
Tagged: Xeon D, SoC, rumor, report, processor, Pentium D, Intel, cpu
Intel's Xeon D SoC lineup will soon expand to include 12-core and 16-core options, after the platform launched earlier this year with the option of 4 or 8 cores for the 14 nm chips.
The report yesterday from CPU World offers new details on the refreshed lineup which includes both Xeon D and Pentium D SoCs:
"According to our sources, Intel have made some changes to the lineup, which is now comprised of 13 Xeon D and Pentium D SKUs. Even more interesting is that Intel managed to double the maximum number of cores, and consequentially combined cache size, of Xeon D design, and the nearing Xeon D launch may include a few 12-core and 16-core models with 18 MB and 24 MB cache."
The move is not unexpected as Intel initially hinted at an expanded offering by the end of the year (emphasis added):
"...the Intel Xeon processor D-1500 product family is the first offering of a line of processors that will address a broad range of low-power, high-density infrastructure needs. Currently available with 4 or 8 cores and 128 GB of addressable memory..."
Current Xeon D Processors
The new flagship Xeon D model will be the D-1577, a 16-core processor with between 18 and 24 MB of L3 cache (exact specifications are not yet known). These SoCs feature an integrated platform controller hub (PCH), I/O, and dual 10 Gigabit Ethernet, and the initial offerings had up to a 45W TDP. It would seem likely that a model with double the core count would either necessitate a higher TDP or simply target a lower clock speed. We should know more before too long.
For further information on Xeon D, please check out our previous coverage:
- New Intel Xeon D Broadwell Processors Aimed at Low Power, High Density Servers @ PC Perspective.
- Xeon D Podcast Discussion at 0:40:35 (YouTube or downloadable audio).
Subject: Processors | October 19, 2015 - 11:28 AM | Sebastian Peak
Tagged: Zen, SoC, processor, imac, APU, apple, amd
Rumor: Apple to Use AMD SoC for Next-Gen iMac
News about AMD has been largely depressing of late, with the introduction of the R9 Fury/Fury X and Nano graphics cards a bright spot in the otherwise tumultuous year that was recently capped by a $65 million APU write down. But one area where AMD has managed to earn a big win has been the console market, where their APUs power the latest machines from Microsoft and Sony. The combination of CPU and a powerful GPU on a single chip is ideal for those small form-factor designs, and likewise it would be ideal for a slim all-in-one PC. But an iMac?
Image credit: Apple
A report from WCCFtech today points to the upcoming Zen architecture from AMD as a likely power source for a potential custom SoC:
"A Semi-custom SOC x86 for the iMac would have to include a high performance x86 component, namely Zen, in addition to a graphics engine to drive the visual experience of the device. Such a design would be very similar to the current semi-custom Playstation 4 and XBOX ONE Accelerated Processing Units, combining x86 CPU cores with a highly capable integrated graphics solution."
Those who don't follow Apple may not know that the company switched almost exclusively to AMD graphics a short time ago, with NVIDIA solutions phased out of all discrete GPU models. Whether politically motivated or simply the result of AMD providing what Apple wanted from a hardware/driver standpoint, I can't say, but it's still a big win for AMD considering Apple's position as one of the largest computer manufacturers - even though its share of the highly fragmented PC market overall is very low. And while Apple has used Intel processors exclusively in its systems since transitioning away from IBM's PowerPC beginning in 2006, the idea of a custom AMD APU makes a lot of sense for the company, especially for its size- and heat-constrained iMac designs.
Image credit: WCCFtech
Whether or not you'd ever consider buying an iMac - or any other computer from Apple, for that matter - it's still important for the PC industry as a whole that AMD continues to find success and provide competition for Intel. Consumers can only benefit from the potential for improved performance and reduced cost if competition heats up between Intel and AMD, something we really haven't seen on the CPU front in a few years now. With CEO Lisa Su stating in their recent earnings call that AMD "had secured two new semi-custom design wins," it could very well be that we will see Zen in future iMacs, or in other all-in-one PCs for that matter.
Regardless, it will be exciting to see some good competition from AMD, even if we will have to wait quite a while for it. Zen isn't ready yet and we have no indication that any such product would be introduced until later next year. It will be interesting to see what Intel might do to compete given their resources. 2016 could be interesting.
Subject: Processors | October 12, 2015 - 12:24 PM | Sebastian Peak
Tagged: servers, qualcomm, processor, enterprise, cpu, arm, 24-core
Another player emerges in the CPU landscape: Qualcomm is introducing its first socketed processor for the enterprise market.
Image credit: PC World
A 24-core design based on 64-bit ARM architecture has reached the prototype phase, in a large LGA package resembling an Intel Xeon CPU.
From the report published by PC World:
"Qualcomm demonstrated a pre-production chip in San Francisco on Thursday. It's a purpose-built system-on-chip, different from its Snapdragon processor, that integrates PCIe, storage and other features. The initial version has 24 cores, though the final part will have more, said Anand Chandrasekher, Qualcomm senior vice president."
Image credit: PC World
Qualcomm built servers as proof-of-concept with this new processor, "running a version of Linux, with the KVM hypervisor, streaming HD video to a PC. The chip was running the LAMP stack - Linux, the Apache Web server, MySQL, and PHP - and OpenStack cloud software," according to PC World. The functionality of this design demonstrates the chip's potential to power highly energy-efficient servers, making an obvious statement about the potential cost savings for large data companies such as Google and Facebook.
Subject: Processors, Mobile | October 12, 2015 - 11:08 AM | Ryan Shrout
Tagged: iphone 6s, iphone, ios, google, apple, Android, A9
PC Perspective’s Android to iPhone series explores the opinions, views and experiences of the site’s Editor in Chief, Ryan Shrout, as he moves from the Android smartphone ecosystem to the world of the iPhone and iOS. Having been entrenched in the Android smartphone market for 7+ years, he intends the editorial series to be less a review of the new iPhone 6s than an exploration of how the current smartphone market compares to each side’s expectations.
Full Story Listing:
- Day 0: What to Expect
- Day 3: Widgets and Live Photos
- Day 6: Battery Life and Home Screens
- Day 17: SoC Performance
- Day 31: Battery Life and Closing
My iPhone experiment continues, running into the start of the third full week of only carrying and using the new iPhone 6s. Today I am going to focus a bit more on metrics that can be measured in graph form – and that means benchmarks and battery life results. But before I dive into those specifics I need to touch on some other areas.
The most surprising result of this experiment to me, even as I cross into day 17, is that I honestly don’t MISS anything from the previous ecosystem. I theorized at the beginning of this series that I would find applications or use cases that I had adopted with Android that could not be matched on iOS without some significant sacrifices. That isn’t the case – anything that I want to do on the iPhone 6s, I can. Have I needed to find new apps for taking care of my alarms or to monitor my rewards card library? Yes, but the alternatives for iOS are at least as good, and often I find there are more (and better) solutions. I think it is fair to assume that same feeling of equality would be prevalent for users going in the other direction, iPhone to Android, but I can’t be sure without another move back to Android sometime in the future. It may come to that.
My previous alarm app was replaced with Sleep Cycle
In my Day 3 post I mentioned my worry about the lack of Quick Charging support. Well, I don’t know why Apple doesn’t talk it up more, but the charging rate for the iPhone 6s and iPhone 6s Plus is impressive, and even more so when you pair them with the higher amperage charger that ships with iPads. Though purely non-scientific thus far, my through-the-day testing showed that I was able to charge the iPhone 6s Plus to 82% (from being dead after a battery test) in the span of 1.5 hours, while the OnePlus 2 was only at 35%. I realize the battery on the OnePlus 2 is larger, but based purely on how much use time you get for your charging time, the iPhones appear to be just as fast as any Android phone I have used.
Photo taking with the iPhone 6s still impresses me – more so with the speed than the quality. Image quality is fantastic, and we’ll do more analytical testing in the near future, but while attending events over the weekend, including a Bengals football game (5-0!) and a wedding, the startup process for the camera was snappy and the shutter speed never felt slow. I never thought “Damn, I missed the shot I wanted,” and that’s a feeling I’ve had many times over the last several years of phone use.
You don't want to miss photos like this!
There were a couple of annoyances that cropped up, including what I think is a decrease in accuracy of the fingerprint reader on the home button. In the last 4 days I have had more bouncing “try again” notices on the phone than in the entirety of use before that. It’s possible that the button has picked up additional oils from my hands, or maybe I am getting lazier about finger placement on the Touch ID sensor, but it’s hard to tell.