Qualcomm Enters Server CPU Space with 24-Core Socketed Processor

Subject: Processors | October 12, 2015 - 12:24 PM |
Tagged: servers, qualcomm, processor, enterprise, cpu, arm, 24-core

Another player emerges in the CPU landscape: Qualcomm is introducing its first socketed processor for the enterprise market.


Image credit: PC World

A 24-core design based on the 64-bit ARM architecture has reached the prototype phase, in a large LGA package resembling an Intel Xeon CPU.

From the report published by PC World:

"Qualcomm demonstrated a pre-production chip in San Francisco on Thursday. It's a purpose-built system-on-chip, different from its Snapdragon processor, that integrates PCIe, storage and other features. The initial version has 24 cores, though the final part will have more, said Anand Chandrasekher, Qualcomm senior vice president."


Image credit: PC World

Qualcomm built proof-of-concept servers with this new processor, "running a version of Linux, with the KVM hypervisor, streaming HD video to a PC. The chip was running the LAMP stack - Linux, the Apache Web server, MySQL, and PHP - and OpenStack cloud software," according to PC World. The functionality of this design demonstrates the chip's potential to power highly energy-efficient servers, making an obvious statement about the potential cost savings for large data companies such as Google and Facebook.

Source: PC World

Android to iPhone Day 17: SoC Performance

Subject: Processors, Mobile | October 12, 2015 - 11:08 AM |
Tagged: iphone 6s, iphone, ios, google, apple, Android, A9

PC Perspective’s Android to iPhone series explores the opinions, views and experiences of the site’s Editor in Chief, Ryan Shrout, as he moves from the Android smartphone ecosystem to the world of the iPhone and iOS. Having been entrenched in the Android smartphone market for 7+ years, the editorial series is less a review of the new iPhone 6s than it is an exploration of how the current smartphone market compares to each side’s expectations.


My iPhone experiment continues, running into the start of the third full week of only carrying and using the new iPhone 6s. Today I am going to focus a bit more on metrics that can be measured in graph form – and that means benchmarks and battery life results. But before I dive into those specifics I need to touch on some other areas.

The most surprising result of this experiment to me, even as I cross into day 17, is that I honestly don’t MISS anything from the previous ecosystem. I theorized at the beginning of this series that I would find applications or use cases I had adopted on Android that could not be matched on iOS without significant sacrifices. That isn’t the case – anything that I want to do on the iPhone 6s, I can. Have I needed to find new apps for managing my alarms or monitoring my rewards card library? Yes, but the iOS alternatives are at least as good, and oftentimes I find there are more (and often better) solutions. I think it is fair to assume that the same feeling of equality would be prevalent for users going in the other direction, iPhone to Android, but I can’t be sure without another move back to Android sometime in the future. It may come to that.


My previous alarm app was replaced with Sleep Cycle

In my Day 3 post I mentioned my worry about the lack of Quick Charge support. Well, I don’t know why Apple doesn’t talk it up more, but the charging rate for the iPhone 6s and iPhone 6s Plus is impressive, and even more so when you pair them with the higher-amperage charger that ships with iPads. Though purely non-scientific thus far, my through-the-day testing showed that I was able to charge the iPhone 6s Plus to 82% (from dead after a battery test) in the span of 1.5 hours, while the OnePlus 2 was only at 35%. I realize the battery in the OnePlus 2 is larger, but based purely on how much use time you get per minute spent charging, the iPhones appear to be just as fast as any Android phone I have used.
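As a rough sanity check on that impression, here's a back-of-the-envelope calculation. The rated capacities below (~2750 mAh for the iPhone 6s Plus, ~3300 mAh for the OnePlus 2) are my assumptions rather than measured figures, so treat the result as illustrative only:

```python
# Back-of-the-envelope: capacity added per hour over the same
# 1.5-hour charging window. Battery capacities are assumed values.
IPHONE_6S_PLUS_MAH = 2750   # assumed rated capacity
ONEPLUS_2_MAH = 3300        # assumed rated capacity
HOURS = 1.5

iphone_rate = IPHONE_6S_PLUS_MAH * 0.82 / HOURS   # charged to 82%
oneplus_rate = ONEPLUS_2_MAH * 0.35 / HOURS       # charged to 35%

print(f"iPhone 6s Plus: ~{iphone_rate:.0f} mAh/hour")
print(f"OnePlus 2:      ~{oneplus_rate:.0f} mAh/hour")
```

By this estimate the iPhone took on charge roughly twice as fast in absolute terms, despite its smaller battery, which matches the seat-of-the-pants result above.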

Photo taking with the iPhone 6s still impresses me – more so with the speed than the quality. Image quality is fantastic, and we’ll do more analytical testing in the near future, but while attending events over the weekend, including a Bengals football game (5-0!) and a wedding, the startup process for the camera was snappy and the shutter never felt slow. I never thought “Damn, I missed the shot I wanted,” and that’s a feeling I’ve had many times over the last several years of phone use.


You don't want to miss photos like this!

There were a couple of annoyances that cropped up, including what I think is a decrease in the accuracy of the fingerprint reader on the home button. In the last 4 days I have had more bouncing “try again” notices on the phone than in the entirety of use before that. It’s possible that the button has picked up additional oils from my hands, or maybe I am getting lazier about placing my finger on the Touch ID sensor, but it’s hard to tell.

Continue reading day 17 of my Android to iPhone editorial!!

Curious just what is special about the AMD Pro line of APUs?

Subject: Processors | October 5, 2015 - 04:48 PM |
Tagged: amd, PRO A12-8800B, Excavator, carrizo pro, Godavari Pro

AMD recently announced a Pro lineup of Excavator-based chips which, as far as specifications go, match the current Carrizo and Godavari lineups.  This was somewhat confusing, as at first glance there were no real features separating the Pro chips from their non-Pro cousins in the press material from AMD or HP.  Tech ARP posted the slides from the reveal, and they note one key feature that separates the two chip families and why businesses should be interested in them.  These are hand-picked dies taken from hand-picked wafers, which AMD chose because they represent the best of the chips they have fabbed.  You should expect performance free from any defects which made it past quality control, and if you are unlucky enough to somehow get a less-than-perfect chip, they come with a 36-month extended OEM warranty.

In addition to being hand picked, machines with an AMD Pro chip will also come with an ARM TrustZone-based AMD Secure Processor onboard.  If you use a mobile device with a TPM and crypto-processor onboard you will be familiar with the technology; AMD is the first to bring this open security platform to Windows-based machines.  Small business owners may also be interested in the AMD PRO Control Center, an inventory management client which will not cost as much as those designed for the enterprise and which, in theory, should be easier to use as well.

This news is of lesser interest to the gamer, but you never know; if you can secure one of these hand-picked chips, you may find it gives you a bit more headroom for tweaking than your average run-of-the-mill Godavari or Carrizo would.


"We will now only show you the presentation slides, we also recorded the entire conference call and created a special video presentation based on the conference call for you. We hope you enjoy our work."

Here are some more Processor articles from around the web:


Source: Tech ARP

Apple Dual Sources A9 SOCs with TSMC and Samsung: Some Extra Thoughts

Subject: Processors | September 30, 2015 - 09:55 PM |
Tagged: TSMC, Samsung, FinFET, apple, A9, 16 nm, 14 nm

So the other day the nice folks over at Chipworks got word that Apple was in fact sourcing their A9 SoC from both TSMC and Samsung.  This is really interesting news on multiple fronts.  From the information gleaned, the two parts are the APL0898 (Samsung fabbed) and the APL1022 (TSMC).

These process technologies have been in the news quite a bit.  As we well know, it has been hard for any foundry not named Intel to get under 28 nm in an effective way.  Even Intel has had some pretty hefty issues with its march to sub-32 nm parts, but it has the resources and financial ability to push through a lot of these hurdles.  One of the bigger problems that affected the foundries was the decision to push FinFETs back beyond what they were initially planning: the idea was to hit 22/20 nm with planar transistors and save FinFET technology for 16/14 nm.


The Chipworks graphic that explains the differences between Samsung's and TSMC's A9 products.

There were many reasons why this did not work in an effective way for the majority of products that the foundries were looking to service with a 22/20 nm planar process.  Yes, there were many parts fabricated on these nodes, but none of them were the higher-power, higher-performance parts that typically garner headlines.  No CPUs, no GPUs, and only a handful of lower-power SoCs (most notably Apple's A8, which was around 89 mm² and consumed up to 5 to 10 watts at maximum).  The node just did not scale power very effectively.  It provided a smaller die size, but it did not significantly increase power efficiency and switching performance as compared to 28 nm high-performance nodes.

The information Chipworks has provided also verifies that Samsung's 14 nm FF process is more size-optimized than TSMC's 16 nm FF.  There was originally some talk about both nodes being very similar in overall transistor size and density, but Samsung has a slightly tighter design.  Neither of them is smaller than Intel's latest 14 nm process, which is going into its second-generation form; Intel still has a significant performance and size advantage over everyone else in the field.  Going back to size, we see the Samsung chip is around 96 mm² while the TSMC chip is 104.5 mm².  This is not huge, but it does show that the Samsung process is a little tighter and can squeeze more transistors per square mm than TSMC's.
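The quoted die areas make that density gap easy to put a number on; a quick, illustrative calculation using the Chipworks figures above:

```python
# Relative area penalty of the TSMC A9 vs. the Samsung A9,
# using the die areas reported by Chipworks.
samsung_a9_mm2 = 96.0    # APL0898
tsmc_a9_mm2 = 104.5      # APL1022

extra_area = (tsmc_a9_mm2 - samsung_a9_mm2) / samsung_a9_mm2
print(f"The TSMC die is ~{extra_area:.1%} larger for the same design")
```

Roughly a 9% area difference for the same chip, which is meaningful for cost per wafer but, as noted above, not dramatic.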

In terms of actual power consumption and clock scaling we have nothing to go on here.  Both chips are represented in the 6S and 6S+, and testing so far has not shown significant differences between the two SoCs.  In theory one could be performing better than the other, but in reality we have not tested these chips at a low enough level to discern any major performance or power issues.  My gut feeling is that Samsung's process is more mature and running slightly better than TSMC's, but the differences are going to be minimal at best.

The next piece of info that we can glean from this is that there just isn't enough line space for all of the chip companies who want to fabricate their parts with either Samsung or TSMC.  From a chip standpoint, a lot of work has to be done to port a design to two different process nodes.  While 14 and 16 nm are similar in overall size and both use FinFETs, the standard cells and design libraries for Samsung and TSMC are going to be very different.  It is not a simple thing to port over a design; a lot of work has to be done in the design stage to make a chip work with both nodes.  I can tell you that there is no way that both chips are identical in layout.  It is not going to be a "dumb port" where they just adjust the optics with the same masks and magically make these chips work right off the bat.  Different mask sets for each fab, verification of both designs, and troubleshooting yields through metal-layer changes will be required for each manufacturer.

In the end this means that there simply was not enough space at either TSMC or Samsung to handle the demand that Apple was expecting.  Because Apple has deep pockets, it contracted both TSMC and Samsung to produce two very similar, but still different, parts.  Apple also likely outbid other major chip firms and locked down whatever wafer capacity Samsung and TSMC had, much to those firms' dismay.  I have no idea what is going on in the background with the likes of NVIDIA and AMD when it comes to line space for manufacturing their next-generation parts; at least for AMD, it seems that their partnership with GLOBALFOUNDRIES and its version of 14 nm FF is having a hard time taking off.  Eventually more production space will be made available, and yields and bins will improve.  Apple will stop taking up so much space and we can get other products rolling off the line.  In the meantime, enjoy that cutting-edge iPhone 6S/+ with the latest 14/16 nm FF chips.

Source: Chipworks

Oh Hey! Skylake and Broadwell Stock Levels Replenish

Subject: Processors | September 27, 2015 - 07:01 AM |
Tagged: Skylake, iris pro, Intel, Broadwell

Thanks to the Tech Report for pointing this out, but some recent stock level troubles with Skylake and Broadwell have been overcome. Both Newegg and Amazon have a few Core i7-6700Ks that are available for purchase, and both also have the Broadwell Core i7s and Core i5s with Iris Pro graphics. Moreover, Microcenter has stock of the Skylake processor at some of their physical stores with the cheapest price tag of all, but they do not have the Broadwell chips with Iris Pro (they are not even listed).


You'll notice that Skylake is somewhat cheaper than the Core i7 Broadwell, especially on Newegg. That is somewhat expected, as Broadwell with Iris Pro is a larger die than Skylake with an Intel HD 530. A bigger die means that fewer can be cut from a wafer, and thus each costs more (unless the smaller die has a relatively high defect rate to compensate, of course). Also, if you go with Broadwell, you will miss out on the Z170 chipset, because those chips still use Haswell's LGA-1150 socket.
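To put a sketch behind that die-size argument, here is the common first-order dies-per-wafer approximation. The die areas used are hypothetical round numbers for illustration, not official figures for these parts:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """First-order approximation: gross dies from a round wafer,
    minus a correction term for partial dies lost at the edge."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

# Hypothetical areas: a ~122 mm^2 Skylake-sized die vs. a ~168 mm^2
# Broadwell-with-Iris-Pro-sized die, on a standard 300 mm wafer.
small = dies_per_wafer(300, 122)
large = dies_per_wafer(300, 168)
print(small, large)
```

With everything else equal, the smaller die yields roughly 40% more candidate chips per wafer, which is the cost pressure described above; defects then compound the difference, since each one kills a larger fraction of a big-die wafer's value.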

On the other hand, despite being based on an older architecture and having much less thermal headroom, you can find some real-world applications that really benefit from the 128 MB of L4 cache that Iris Pro brings, even if the iGPU itself is unused; the graphics cache can be used by the main processor. In Project Cars, again according to The Tech Report, the i7-5775C measured a 5% higher frame rate than the newer i7-6700K -- when using a GeForce GTX 980. Granted, this was before the FCLK tweak on Skylake, so there are a few oranges mixed with our apples; PCIe rates might be slightly different now.

Regardless, they're all available now. If you were awaiting stock, have fun.

Intel Will Not Bring eDRAM to Socketed Skylake

Subject: Graphics Cards, Processors | September 17, 2015 - 09:33 PM |
Tagged: Skylake, kaby lake, iris pro, Intel, edram

Update: Sept 17, 2015 @ 10:30 ET -- To clarify: I'm speaking of socketed desktop Skylake. There will definitely be Iris Pro in the BGA options.

Before I begin, the upstream story has a few disputes that I'm not entirely sure about. The Tech Report published a post in September that cited an Intel spokesperson, who said that Skylake would not be getting a socketed processor with eDRAM (unlike Broadwell, which got one just before Skylake launched). This could be a big deal, because the fast, on-package cache can be used by the CPU as well as the iGPU. It is sometimes called “128MB of L4 cache”.


Later, ITWorld and others posted stories saying Intel killed off a Skylake processor with eDRAM, citing The Tech Report. Afterward, Scott Wasson claimed that a story, which may or may not be ITWorld's, had some “scrambled facts”, but wouldn't elaborate. Comparing the two articles doesn't really illuminate any massive, glaring issues, but I might just be missing something.

Update: Sept 18, 2015 @ 9:45pm -- So I apparently misunderstood the ITWorld article. They were claiming that Broadwell-C was discontinued, while The Tech Report was talking about Socketed Skylake with Iris Pro. I thought they both were talking about the latter. Moreover, Anandtech received word from Intel that Broadwell-C is, in fact, not discontinued. This is odd, because ITWorld said they had confirmation from Intel. My guess is that someone gave them incorrect information. Sorry that it took so long to update.

In the same thread, Ian Cutress of Anandtech asked whether The Tech Report benchmarked the processor after Intel tweaked its FCLK capabilities, which Scott did not (but is interested in doing). Intel enabled a slight frequency boost between the CPU and PCIe lanes after Skylake shipped, which naturally benefits discrete GPUs. Since the original claim was that Broadwell-C is better than Skylake-K for gaming, giving a 25% boost to GPU performance (or removing a 20% loss, depending on how you look at it) could tilt Skylake back above Broadwell. We won't know until it's benchmarked, though.
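The "25% boost vs. removing a 20% loss" phrasing describes the same change measured from two different baselines; a few lines make the arithmetic explicit:

```python
# A 20% loss and a 25% gain are the same change, measured from
# opposite ends: 1.00 -> 0.80 loses 20% of the full rate, while
# recovering 0.80 -> 1.00 is a gain of 1/0.8 - 1 = 25%.
full = 1.00      # relative GPU throughput with the corrected FCLK
capped = 0.80    # throughput with the slower pre-fix clock

loss = (full - capped) / full        # measured against the full rate
recovered_gain = full / capped - 1   # measured against the capped rate

print(f"{loss:.0%} loss removed == {recovered_gain:.0%} boost")
```

The percentages in the paragraph above are consistent with each other; they just use different denominators.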

Iris Pro and eDRAM, while skipping Skylake, might arrive in future architectures, such as Kaby Lake. It seems to have been demonstrated that, in some situations, and ones relevant to gamers at that, this eDRAM boost can help computation -- without even considering the compute potential of a better secondary GPU. One argument is that cutting the extra die room gives Intel better margins, which is almost definitely true, but I wonder how much attention Kaby Lake will get. Especially with AVX-512 and other features being debatably removed, it almost feels like Intel is treating this Tock like a Tick, since they didn't really get one with Broadwell, and Kaby Lake will be the architecture that leads us to 10nm. On the other hand, each of these architectures is developed by an independent team, so I might be wrong in comparing them serially.

AMD Releases App SDK 3.0 with OpenCL 2.0

Subject: Graphics Cards, Processors | August 30, 2015 - 09:14 PM |
Tagged: amd, carrizo, Fiji, opencl, opencl 2.0

Apart from manufacturers with a heavy first-party focus, such as Apple and Nintendo, hardware is useless without developer support. In this case, AMD has updated their App SDK to include support for OpenCL 2.0, with code samples. It also updates the SDK for Windows 10, Carrizo, and Fiji, but it is not entirely clear how.


That said, OpenCL is important to those two products. Fiji has a very high compute throughput compared to any other GPU at the moment, and its memory bandwidth is often even more important for GPGPU workloads. It is also useful for Carrizo, because parallel compute and HSA features are what make it a unique product. AMD has been creating first-party software and helping popular third-party developers such as Adobe, but a little support for the world at large could bring a killer application or two, especially from the open-source community.

The SDK has been available in pre-release form for quite some time now, but it has finally graduated out of beta. OpenCL 2.0 allows work to be generated on the GPU itself, which is especially useful for tasks whose follow-up work depends on previous results, since the GPU no longer has to round-trip through the CPU to enqueue it.

Source: AMD

This is your Intel HD530 GPU on Linux

Subject: Processors | August 26, 2015 - 02:40 PM |
Tagged: Skylake, Intel, linux, Godavari

Using the GPU embedded in the vast majority of modern processors is a good way to reduce the price of an entry-level system, as indeed is choosing Linux for your OS.  Your performance is not going to match that of a system with a discrete GPU, but with the newer GPU cores available you will be doing much better than in the old days of the IGP.  The first portion of Phoronix's review of the Skylake GPU covers the various driver versions you can choose from, while the rest compares Kaveri, Godavari, Haswell and Broadwell to the new HD 530 on Skylake CPUs.  Currently the Iris Pro 6200 present on Broadwell is still the best for gaming, though the A10-7870K Godavari's performance is also decent.  Consider one of those two chips now, or await Iris Pro's possible arrival on a newer socketed processor if you are in no hurry.


"Intel's Core i5 6600K and i7 6700K processors released earlier this month feature HD Graphics 530 as the first Skylake graphics processor. Given that Intel's Open-Source Technology Center has been working on open-source Linux graphics driver support for over a year for Skylake, I've been quite excited to see how the Linux performance compares for Haswell and Broadwell as well as AMD's APUs on Linux."




Source: Phoronix

Qualcomm Introduces Adreno 5xx Architecture for Snapdragon 820

Subject: Graphics Cards, Processors, Mobile | August 12, 2015 - 07:30 AM |
Tagged: snapdragon 820, snapdragon, siggraph 2015, Siggraph, qualcomm, adreno 530, adreno

Despite the success of the Snapdragon 805 and even the 808, Qualcomm’s flagship Snapdragon 810 SoC had a tumultuous lifespan.  Rumors and stories about the chip's inability to run in phone form factors without overheating and/or draining battery life were rampant, despite the company’s insistence that the problem was fixed with a very quick second revision of the part. Very few devices used the 810, and instead we saw more of the flagship smartphones using the slightly cut-back SD 808 or the SD 805.

Today at Siggraph, Qualcomm starts the reveal of its new flagship SoC, Snapdragon 820. As the event coinciding with the launch is a graphics-specific show, QC is focusing on a high-level overview of the graphics portion of the Snapdragon 820: the updated Adreno 5xx architecture and associated designs, and a new camera image signal processor (ISP) aimed at improving the quality of photos and video recording on our mobile devices.


A modern SoC from Qualcomm features many different processors working in tandem to impact the user experience on the device. While the only details we are getting today focus around the Adreno 530 GPU and Spectra ISP, other segments like connectivity (wireless), DSP, video processing and digital signal processing are important parts of the computing story. And we are well aware that Qualcomm is readying its own 64-bit processor architecture for the Kryo CPU rather than implementing the off-the-shelf cores from ARM used in the 810.

We also know that Qualcomm is targeting a “leading edge” FinFET process technology for SD 820 and, though we haven’t been able to confirm anything, it looks very likely that this chip will be built on the same Samsung 14nm line that built the Exynos 7420.

But over half of the processing on the upcoming Snapdragon 820 will focus on visual processing; from graphics to gaming to UI animations to image capture and video output, this chip’s die will be dominated by high-performance visuals.

Qualcomm’s list of target goals for SD 820 visuals reads as you would expect: perfection in every area. Wouldn’t we all love a phone or tablet that takes perfect photos every time, always focusing on the right things (or everything), with exceptional low-light performance? Though a lesser-known problem for consumers, having accurate color reproduction from capture, through processing, and on to the display would be a big advantage. And of course, we all want graphics performance that impresses and a user interface that is smooth and reliable while enabling new experiences we haven’t even thought of in the mobile form factor. Qualcomm thinks that Snapdragon 820 will be able to deliver on all of that.

Continue reading about the new Adreno 5xx architecture!!

Source: Qualcomm

We hear you like Skylake-U news

Subject: Processors | August 11, 2015 - 06:39 PM |
Tagged: skylake-u, Intel

Fanless Tech just posted slides of Skylake-U, the ultraportable version of Skylake, all of which have an impressively low TDP of 15W which can be reduced to 10W or, in some cases, all the way down to 7.5W.  As before, all are BGA mounted, which means you will not be able to upgrade, nor are you likely to see them in desktops; not necessarily a bad thing for this segment of the mobile market, but certainly worth noting.


There will be two i7 models and two i5 models along with a single i3 version; the top models, the Core i7-6600U and Core i5-6300U, sport a slightly increased frequency and support for vPro.  Those two models, along with the i7-6500U and i5-6200U, will have Intel HD Graphics 520 with base/boost frequencies of 300/1050 MHz for the i7s and 300/1000 MHz for the i5 and i3 chips.


Along with the Core models will come a single Pentium chip, the 4405U, and a pair of Celerons, the 3955U and 3855U.  They will have HD 510 graphics with clocks of 300/950 MHz, or 300/900 MHz for the Celerons, and you will see slight reductions in the PCIe and storage subsystems on the 4405U and 3855U.  The naming scheme is less confusing than some previous generations, a boon for those with family or friends looking for a new laptop who are perhaps not quite as obsessed with processors as we are.



Source: Fanless Tech

Khronos Group at SIGGRAPH 2015

Subject: Graphics Cards, Processors, Mobile, Shows and Expos | August 10, 2015 - 09:01 AM |
Tagged: vulkan, spir, siggraph 2015, Siggraph, opengl sc, OpenGL ES, opengl, opencl, Khronos

When the Khronos Group announced Vulkan at GDC, they mentioned that the API was coming this year, and that this date was intended to under-promise and over-deliver. Recently, fans were hoping that it would be published at SIGGRAPH, which officially began yesterday. Unfortunately, Vulkan has not been released. It does hold a significant chunk of the news, however. Also, it's not as if DirectX 12 is holding a commanding lead at the moment: the headers were public for only a few months, and the code samples are less than two weeks old.


The organization made announcements for six products today: OpenGL, OpenGL ES, OpenGL SC, OpenCL, SPIR, and, as mentioned, Vulkan. They wanted to make their commitment to all of their standards clear. Vulkan is urgent, but some developers will still want the framework of OpenGL: bind what you need to the context, then issue a draw, and, if you do it wrong, the driver will often clean up the mess for you anyway. The briefing was structured to make it evident that OpenGL is still on their mind, which is likely why they made sure three OpenGL logos greeted me in their slide deck as early as possible. They are also taking and closely examining feedback about who wants to use Vulkan or OpenGL, and why.

As for Vulkan, confirmed platforms have been announced. Vendors have committed to drivers on Windows 7, 8, and 10, Linux (including SteamOS), and Tizen (OS X and iOS are absent, though). Beyond all of that, Google will accept Vulkan on Android. This is a big deal because Google, despite its open nature, has been avoiding several Khronos Group standards. For instance, Nexus phones and tablets do not have OpenCL drivers, although Google isn't stopping third parties like Samsung and NVIDIA from rolling it into their devices. Direct support of Vulkan should help cross-platform development and, more importantly, target the many relatively slow cores in those devices' processors. This could even be of significant use for web browsers, especially on sites with a lot of simple 2D effects. Google is also contributing support from its drawElements Quality Program (dEQP), a conformance test suite that it bought back in 2014. Google is going to expand it to Vulkan so that developers will have more consistency between devices -- a big win for Android.


While we're not done with Vulkan, one of the biggest announcements is OpenGL ES 3.2, and it fits here nicely. At around the time that OpenGL ES 3.1 brought Compute Shaders to the embedded platform, Google launched the Android Extension Pack (AEP). This absorbed OpenGL ES 3.1 and added Tessellation, Geometry Shaders, and ASTC texture compression to it. It also created more tension between Google and cross-platform developers, who felt that Google was trying to pull its developers away from the Khronos Group. Today, OpenGL ES 3.2 was announced; it includes each of the AEP features, plus a few more (like “enhanced” blending). Better yet, Google will support it directly.

Next up are the desktop standards, before we finish with a resurrected embedded standard.

OpenGL has a few new extensions. One interesting one is the ability to assign locations to multi-samples within a pixel. There is a whole list of sub-pixel layouts, such as rotated grid and Poisson disc, and this extension apparently allows developers to choose among them, as certain algorithms work better or worse with certain geometries and structures. There have probably been vendor-specific extensions for this for a while, but now there is a ratified one. Another extension allows “streamlined sparse textures”, which helps manage data where the number of unpopulated entries outweighs the number of populated ones.

OpenCL 2.0 was given a refresh, too. It contains a few bug fixes and clarifications that will help it be adopted. C++ headers were also released, although I cannot comment much on them; I do not know what state OpenCL 2.0 was in before now.

And this is when we make our way back to Vulkan.


SPIR-V, the intermediate representation that runs on the GPU (or other offload device, including the other cores of a CPU) in OpenCL and Vulkan, is seeing a lot of community support. Projects are under way to allow developers to write GPU code in several interesting languages: Python, .NET (C#), Rust, Haskell, and many more. The slide lists nine that the Khronos Group knows about, but those four are pretty interesting. Again, this means you can write code in the aforementioned languages and have it run directly on a GPU. Curiously missing is HLSL, and the president of the Khronos Group agreed that it would be a useful language. The ability to cross-compile HLSL into SPIR-V would mean that shader code written for DirectX 9, 10, 11, and 12 could be compiled for Vulkan. He expects that it won't take long for such a project to start, and one might already be happening somewhere beyond his ability to Google. Regardless, those who are afraid to program in the C-like GLSL and HLSL shading languages might find C# and Python to be a bit more their speed, and those options seem to be happening through SPIR-V.

As mentioned, we'll end on something completely different.


For several years, OpenGL SC has been on hiatus. This group defines standards for graphics (and soon GPU compute) in “safety critical” applications. For the longest time, this meant aircraft. The dozens of planes (by which I assume they meant dozens of models of planes) that adopted this technology were fine with a fixed-function pipeline. It has been about ten years since OpenGL SC 1.0 launched, which was based on OpenGL ES 1.0. SC 2.0 is planned to launch in 2016 and will be based on the much more modern OpenGL ES 2 and ES 3 APIs, which allow pixel and vertex shaders. The Khronos Group is asking for participation to direct SC 2.0, as well as a future graphics and compute API that is potentially based on Vulkan.

The devices that this platform intends to target are: aircraft (again), automobiles, drones, and robots. There are a lot of ways that GPUs can help these devices, but they need a good API to certify against. It needs to withstand more than an Ouya, because crashes could be much more literal.

Photos and Tests of Skylake (Intel Core i7-6700K) Delidded

Subject: Processors | August 8, 2015 - 05:55 PM |
Tagged: Skylake, Intel, delid, CPU die, cpu, Core i7-6700K

PC Watch, a Japanese computer hardware website, acquired at least one Skylake i7-6700K and removed the heatspreader. With access to the bare die, they took some photos and tested a few thermal compound replacements, which quantifies how good (or bad) Intel's default thermal grease is. As evidenced by the launch of Ivy Bridge and, later, Devil's Canyon, the choice of thermal interface between the die and the lid can make a fairly large difference in temperatures and overclocking.


Image Credit: PC Watch

They chose the vice method for the same reason that Morry chose it in his i7-4770K delid article last year: it uses a slight amount of torque and external pressure or shock to pop the lid off the processor. Despite how it looks, this is considered less traumatic than using a razor blade to cut the seal, because human hands are not the most precise instruments and a slight miss could damage the PCB. PC Watch apparently needed to use a wrench to get enough torque on the vice, which is transferred to the processor as pressure.


Image Credit: PC Watch

Of course, Intel could always offer enthusiasts a choice of thermal compounds before the lid goes on, which would be the safest option of all. How about that, Intel?


Image Credit: PC Watch

With the lid off, PC Watch mentioned that the thermal compound seems to be roughly the same as Devil's Canyon's, which is quite good. They also noticed that the PCB is significantly thinner than Haswell's, dropping from about 1.1mm to about 0.8mm. For some benchmarks, they tested the chip with the stock interface, an aftermarket solution called Prolimatech PK-3, and a liquid metal alloy called Coollaboratory Liquid Pro.


Image Credit: PC Watch

At 4.0 GHz, PK-3 dropped the temperature by about 4 degrees Celsius, while Liquid Metal knocked it down 16 degrees. At 4.6 GHz, PK-3 continued to give a delta of about 4 degrees, while Liquid Metal widened its gap to 20 degrees. It reduced an 88 C temperature to 68 C!
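The reported deltas can be sanity-checked with a quick calculation (figures taken from PC Watch's 4.6 GHz results as quoted above):

```python
# Load temperatures at 4.6 GHz (degrees Celsius), per PC Watch's charts.
stock_c = 88       # Intel's default thermal grease
liquid_pro_c = 68  # Coollaboratory Liquid Pro

delta_c = stock_c - liquid_pro_c
print(delta_c)  # 20 degree improvement from the liquid metal interface
```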


Image Credit: PC Watch

There are obvious limits to how practical this is. If you were concerned about the longevity of your die, you probably wouldn't forcibly pry the heatspreader off its PCB in the first place. That would be like performing surgery on yourself to remove a perfectly healthy appendix, just in case. Also, from an overclocking standpoint, temperature headroom doesn't translate directly into frequency: twenty degrees is a huge gap, but even an extra hundred MHz could eat it up, depending on your die.

It's still interesting for those who try, though.

Source: PC Watch

Check out the architecture of Skylake and Sunrise Point

Subject: Processors | August 5, 2015 - 03:20 PM |
Tagged: sunrise point, Skylake, Intel, ddr4, Core i7-6700K, core i7, 6700k, 14nm

By now you have read through Ryan's review of the new i7-6700K and the ASUS Z170-A, as well as the related videos and testing; if not, we will wait while you flog yourself in punishment and finish the source material.  Now that you are ready, take a look at what some of the other sites thought about the new Skylake chip and Sunrise Point chipset.  For instance, [H]ard|OCP managed to beat Ryan's best overclock, hitting 4.7GHz/3600MHz at 1.32v vCore with some toasty but acceptable CPU temperatures.  The full review is worth a look, and if some of the rumours going around are true you should take H's advice: if you think you want one, buy it now.


"Today we finally get to share with you our Intel Skylake experiences. As we like to, we are going to focus on Instructions Per Clock / IPC and overclocking this new CPU architecture. We hope to give our readers a definitive answer to whether or not it is time to make the jump to a new desktop PC platform."

Here are some more Processor articles from around the web:


Source: [H]ard|OCP

Report: Intel Core i7-6700K and i5-6600K Retail Box Photos and Pricing Leak

Subject: Processors | August 3, 2015 - 10:58 AM |
Tagged: Skylake, leak, Intel, i7-6700K, Core i7-6700K

Leaked photos of what appear to be the full retail boxes of the upcoming Intel Core i7-6700K and i5-6600K "Skylake" unlocked CPUs have appeared on imgur, making the release of these processors feel ever closer.


Is this really the new box graphic for the unlocked i7?

While the authenticity of these photos can't be verified through any official channel, they certainly do look real. We have heard of Skylake leaks - a.k.a. Skyleaks - for a while now, and the rumors point to an August release for these new LGA 1151 chips (sorry LGA 1150 motherboard owners!).


Looks real. But we do live in a Photoshop world...

We only have about four weeks to wait at the most if an August release is, in fact, imminent. If not, I blame Jeremy for getting our hopes up with terms like Skyleak™. I encourage you to direct all angry correspondence to his inbox.


These boxes are very colorful (or colourful, if you will)

Update: A new report has emerged with US retail pricing for the upcoming Skylake lineup. Here is the chart from WCCFTech:


Chart taken from WCCFTech

The pricing of the top i7 part at $316 would be a welcome reduction from the current $339 retail of the i7-4790K. Now whether the 6700K can beat out that Devil's Canyon part remains to be seen. Doubtless we will have benchmarks and complete coverage once any official release is made by Intel for these parts.
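That price gap works out as follows (using the leaked $316 figure against the current 4790K retail price):

```python
# Leaked 6700K MSRP vs. current 4790K retail, in USD, from the report.
price_6700k = 316
price_4790k = 339

saving = price_4790k - price_6700k
print(saving, f"{saving / price_4790k:.1%}")  # a $23 (~6.8%) reduction
```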

Source: imgur

Iris Pro on Linux

Subject: Processors | July 31, 2015 - 03:37 PM |
Tagged: iris pro, Broadwell, linux, i7-5775C

The graphics core of new CPUs used to have issues on Linux at launch, but recently this has become much less of a problem.  The newly released Iris Pro on the i7-5775C follows this trend, as you can see in the benchmarks at Phoronix.  The OpenGL performance is a tiny bit slower overall on Linux, apart from OpenArena, but not enough to ruin your gaming experience.  With a new kernel on the horizon and a community working with the new GPU, you can expect the performance gap to narrow.  Low cost gaming on a Linux machine becomes more attractive every day.


"Resulting from the What Windows 10 vs. Linux Benchmarks Would You Like To See and The Phoronix Test Suite Is Running On Windows 10, here are our first benchmarks comparing the performance of Microsoft's newly released Windows 10 Pro x64 against Fedora 22 when looking at the Intel's OpenGL driver performance across platforms."

Here are some more Processor articles from around the web:



Source: Phoronix

AMD A8-7670K (Godavari) Launches with Steamroller

Subject: Processors | July 22, 2015 - 09:56 PM |
Tagged: amd, APU, Godavari, a8, a8-7670k

AMD's Godavari architecture is the last one based on Bulldozer, which will hold the company's product stack over until their Zen architecture arrives in 2016. The A10-7870K was added a month ago, with a 95W TDP at an MSRP of $137 USD. This involved a slight performance bump of +200 MHz at its base frequency and a +100 MHz higher Turbo than its predecessor when under high load. More interestingly, it does this at the same TDP and on the same basic architecture.


Remember that these are AMD's benchmarks.

The refresh has been expanded to include the A8-7670K. Some sites have reported that this uses the Excavator architecture as seen in Carrizo, but this is not the case. It is based on Steamroller. This product has a base clock of 3.6 GHz with a Turbo of up to 3.9 GHz. This is a +300 MHz Base and +100 MHz Turbo increase over the previous A8-7650K. Again, this is with the same architecture and TDP. The GPU even received a bit of a bump, too. It is now clocked at 757 MHz versus the previous generation's 720 MHz with all else equal, as far as I can tell. This should lead to a 5.1% increase in GPU compute throughput.
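The 5.1% figure follows directly from the clock bump, assuming throughput scales linearly with GPU clock at an identical shader configuration:

```python
# Godavari A8-7670K vs. A8-7650K iGPU clocks (MHz), from the article.
new_clock = 757
old_clock = 720

uplift_pct = (new_clock / old_clock - 1) * 100
print(f"{uplift_pct:.1f}%")  # 5.1% more GPU compute throughput, all else equal
```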

The A8-7670K just recently launched at an MSRP of $117.99. This roughly $20 saving should place it in a nice position below the A10-7870K for mainstream users.

Source: AMD

Meet the Intel Core i7-5775C Broadwell CPU

Subject: Processors | July 20, 2015 - 05:58 PM |
Tagged: Intel, i7-5775C, LGA1150, Broadwell, crystalwell

To keep things interesting and to drive tech reviewers even crazier, Intel has changed their naming scheme again: on the new Broadwell models, C now designates an unlocked CPU, as opposed to K.  Compared to the previous 4770K, the TDP is down to 65W from 84W, the L3 cache has shrunk from 8MB to 6MB, and both the base and turbo clocks have dropped 200MHz. It does have the Iris Pro 6200 graphics core, finally available on an LGA chip.  Modders Inc. took the opportunity to clock both the flagship Haswell and Broadwell chips to 4GHz to do a clock-for-clock comparison of the architectures.  Check out the review right here.


"While it is important to recognize one's strengths and leverage it as an asset, accepting shortcomings and working on them is equally as important for the whole is greater than the sum of its parts."

Here are some more Processor articles from around the web:


Source: Modders Inc

TSMC Plans 10nm, 7nm, and "Very Steep" Ramping of 16nm

Subject: Graphics Cards, Processors, Mobile | July 19, 2015 - 06:59 AM |
Tagged: Zen, TSMC, Skylake, pascal, nvidia, Intel, Cannonlake, amd, 7nm, 16nm, 10nm

Getting smaller features allows a chip designer to create products that are faster, cheaper, and consume less power. Years ago, most of them had their own production facilities but that is getting rare. IBM has just finished selling its manufacturing off to GlobalFoundries, which was spun out of AMD when it divested from fabrication in 2009. Texas Instruments, on the other hand, decided that they would continue manufacturing but get out of the chip design business. Intel and Samsung are arguably the last two players with a strong commitment to both sides of the “let's make a chip” coin.
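As a rough illustration of why smaller features make chips cheaper: in the ideal case, die area scales with the square of the feature-size ratio. Real nodes (marketing names especially, and TSMC's 16nm in particular, which reuses 20nm-class metal layers) fall well short of this, so treat the numbers as a back-of-the-envelope sketch only:

```python
# Idealized area scaling between process nodes: area ~ (feature size)^2.
# Real processes deviate significantly; this is only an illustration.
def ideal_area_ratio(new_nm: float, old_nm: float) -> float:
    return (new_nm / old_nm) ** 2

# The same design shrunk from 28nm to 16nm, in the ideal case:
print(ideal_area_ratio(16, 28))  # ~0.33, i.e. roughly a third of the area
```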


So where do these chip designers go? TSMC is the name that comes up most. Any given discrete GPU in the last several years has probably been produced there, along with several CPUs and SoCs from a variety of fabless semiconductor companies.

Several years ago, when the GeForce 600-series launched, TSMC's 28nm line led to shortages, which left GPUs out of stock for quite some time. Since then, 28nm has been the stable workhorse for countless high-performance products. Recent chips have been physically huge, thanks to the maturity of the process and its correspondingly low defect rates. The designers are anxious to get on smaller processes, though.

In a conference call at 2 AM (EDT) on Thursday, which is 2 PM in Taiwan, Mark Liu of TSMC announced that “the ramping of our 16 nanometer will be very steep, even steeper than our 20nm”. By that, they mean this year. Hopefully this translates to production that could be used for GPUs and CPUs early: AMD needs it to launch their Zen CPU architecture as early in 2016 as possible, and graphics cards have been stuck on 28nm for over three years. It's time.

Also interesting is how TSMC believes that they can hit 10nm by the end of 2016. If so, this might put them ahead of Intel. That said, Intel was also confident that they could reach 10nm by the end of 2016, right until they announced Kaby Lake a few days ago. We will need to see if it pans out. If it does, competitors could actually beat Intel to the market at that feature size -- although that could end up being mobile SoCs and other integrated circuits that are uninteresting for the PC market.

Following the announcement from IBM Research, 7nm was also mentioned in TSMC's call. Apparently they expect to start qualification in Q1 2017. That does not provide an estimate for production but, if their 10nm schedule is both accurate and representative of 7nm, that would put production somewhere in 2018. Note that I just speculated on an if of an if of a speculation, so take that with a mine of salt. There is probably a very good reason that this date wasn't mentioned in the call.

Back to the 16nm discussion, what are you hoping for most? New GPUs from NVIDIA, new GPUs from AMD, a new generation of mobile SoCs, or the launch of AMD's new CPU architecture? This should make for a highly entertaining comments section on a Sunday morning, don't you agree?

AMD Projects Decreased Revenue by 8% for Q2 2015

Subject: Graphics Cards, Processors | July 7, 2015 - 08:00 AM |
Tagged: earnings, amd

The projections for AMD's second fiscal quarter had revenue somewhere between flat and down 6%. The estimate as of July 6th falls below that entire range: AMD now expects revenue to be down 8% from the previous quarter, rather than the aforementioned 0 to 6%. This is attributed to weaker APU sales in OEM devices, though the company claims that channel sales are in line with projections.
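To make the revision concrete, here is the guidance expressed against a purely hypothetical prior-quarter revenue figure (illustrative only; not AMD's actual number):

```python
# Hypothetical prior-quarter revenue in millions USD (illustrative only).
prior_revenue_m = 1000.0

# Original guidance: flat to down 6%.
guided_low = prior_revenue_m * 0.94
guided_high = prior_revenue_m * 1.00

# Revised estimate: down 8%, below the entire guided range.
revised = prior_revenue_m * 0.92
print(guided_low, guided_high, revised)
```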


This is disappointing news for fans of AMD, of course. The next two quarters will be more telling, though. Q3 will count two of the launch months for Windows 10, which will likely bring a bunch of new and interesting devices and aligns well with the back-to-school season. We then get one more chance at a pleasant surprise in the fourth quarter and its holiday season, too. My intuition is that it won't be too much better than however Q3 ends up.

One extra note: AMD has also announced a “one-time charge” of $33 million USD related to a change in product roadmap. Rather than releasing designs at 20nm, they have scrapped those plans and will architect them for “the leading-edge FinFET node”. This might prove a small expense compared to the benefit of the jump in process technology: Intel is at 14nm and will likely be there for some time, and now AMD won't be stuck waiting at 20nm in the meantime.

Source: AMD

Report: No Stock Cooler Bundled with Intel Skylake-K Unlocked CPUs

Subject: Processors | June 26, 2015 - 12:32 PM |
Tagged: skylake-s, Skylake-K, Intel Skylake, cpu cooler

A report from Chinese-language site XFastest contains a slide reportedly showing Intel's cooling strategy for upcoming retail HEDT (High-end Desktop) Skylake "K" processors.


Typically, Intel CPUs (outside of the current high-end enthusiast segment on LGA2011) have been packaged with one of Intel's ubiquitous standard air coolers, and eliminating them from future unlocked "K" series SKUs makes sense. The slide indicates that a 135W cooling solution will be recommended, even though the TDP of the processors is still in the 91-95W range. The additional headroom is certainly advisable; arguably the stock cooler never should have been used with products like the 4770K and 4790K, which more than push its limits (and often reach 90 °C at load without overclocking, in my experience with these high-end chips).
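Assuming the figures from the leaked slide, the recommended cooler capacity implies a comfortable margin over rated TDP:

```python
# Figures from the reported slide (the 95W value is the upper end of the range).
tdp_w = 95             # rated processor TDP
cooler_rating_w = 135  # recommended cooling capacity

headroom_pct = (cooler_rating_w / tdp_w - 1) * 100
print(f"{headroom_pct:.0f}%")  # roughly 42% thermal headroom over rated TDP
```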

Aftermarket cooling (with AIO liquid CPU coolers in particular) has been essential for maximizing the performance of an unlocked CPU all along, so this news shouldn't affect the appeal of these upcoming CPUs for those interested in the latest Intel offerings (though it won't help enhance your collection of unused stock heatsinks).