GDC 15: Imagination Technologies Shows Vulkan Driver

Subject: Graphics Cards, Mobile, Shows and Expos | March 7, 2015 - 07:00 AM |
Tagged: vulkan, PowerVR, Khronos, Imagination Technologies, gdc 15, GDC

Possibly the most important feature of upcoming graphics APIs, albeit the least interesting for enthusiasts, is how much easier driver development will become. Many decisions and tasks that once lay on the shoulders of AMD, Intel, NVIDIA, and the rest will now be handed to game developers or made obsolete. Of course, you might think that game developers would oppose this burden, but (from what I understand) it is a weight they already bear, only today they deal with the symptoms instead of the root problem.


This also helps other hardware vendors become competitive. Imagination Technologies is definitely not new to the field: its graphics power the PlayStation Vita, many earlier Intel graphics processors, and the last couple of iPhones. Despite how abruptly the API came about, the company had a proof-of-concept driver present at GDC. The unfinished driver was running an OpenGL ES 3.0 demo that had been ported to the Vulkan API.

A screenshot of the CPU usage was also provided, which is admittedly heavily cropped and hard to read. The capture on the left claims 1.2% CPU load with a fairly flat curve, while the one on the right claims 5% and wobbles more. Granted, the wobble could be partially explained by differences in the time window they chose to profile.

According to Tom's Hardware, source code will be released “in the near future”.

Imagination Launches PowerVR GT7900, "Super-GPU" Targeting Consoles

Subject: Graphics Cards, Mobile | February 26, 2015 - 02:15 PM |
Tagged: super-gpu, PowerVR, Imagination Technologies, gt7900

As a preview to announcements and releases being made at both Mobile World Congress (MWC) and the Game Developers Conference (GDC) next week, Imagination Technologies took the wraps off a new graphics product it is calling a "super-GPU". The PowerVR GT7900 is the new flagship GPU in its Series7XT family, targeting a growing category called "affordable game consoles." Think of Android-powered set-top devices like the Ouya or Amazon's Fire TV.


PowerVR breaks up its GPU designs into unified shading clusters (USCs), and the GT7900 has 16 of them for a total of 512 ALU cores. Imagination has previously posted a great overview of its USC architecture design and how its designs compare to other GPUs on the market. Imagination claims that the GT7900 will offer "PC-class gaming experiences," though that is as ambiguous as the idea of the workload of a "console-level game." But with rated peak performance hitting over 800 GFLOPS in FP32 and 1.6 TFLOPS in FP16 (half-precision), this GPU does have significant theoretical capability.

                  PowerVR GT7900              Tegra X1
Vendor            Imagination Technologies    NVIDIA
FP32 ALUs         512                         256
FP32 GFLOPS       800                         512
FP16 GFLOPS       1600                        1024
GPU Clock         800 MHz                     1000 MHz
Process Tech      16nm FinFET+                20nm TSMC
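
As a quick sanity check on those peak numbers, here is a minimal sketch of the underlying arithmetic. It assumes the usual convention of one fused multiply-add (two FLOPs) per ALU per clock and a doubled FP16 rate from two-wide half-precision packing; neither assumption is spelled out in Imagination's announcement.

```python
# Rough peak-throughput math, assuming 2 FLOPs (one FMA) per ALU per clock
# and a 2x FP16 rate from packed half-precision - assumptions, not vendor-confirmed details.
def peak_gflops(alus: int, clock_mhz: float, flops_per_clock: int = 2) -> float:
    """Peak GFLOPS = ALUs x FLOPs-per-clock x clock in GHz."""
    return alus * flops_per_clock * clock_mhz / 1000.0

for name, alus, mhz in (("PowerVR GT7900", 512, 800), ("Tegra X1", 256, 1000)):
    fp32 = peak_gflops(alus, mhz)
    print(f"{name}: ~{fp32:.0f} GFLOPS FP32, ~{fp32 * 2:.0f} GFLOPS FP16")
```

That works out to roughly 819 GFLOPS FP32 and ~1.6 TFLOPS FP16 for the GT7900, which lines up with the rated figures above.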

Imagination also believes that PowerVR offers a larger portion of its peak performance for a longer period of time than the competition thanks to the tile-based deferred rendering (TBDR) approach that has been "refined over the years to deliver unmatched efficiency."


The FP16 performance number listed above matters because half-precision compute can run in a much more efficient manner, making it an aggressive power-saving option. A fair concern is how many applications, GPGPU or gaming, actually utilize the FP16 data type, but having support for it in the GT7900 allows developers to target it.

Other key features of the GT7900 include support for OpenGL ES 3.1 + AEP (Android Extension Pack), hardware tessellation and ASTC LDR and HDR texture compression standards. The GPU also can run in a multi-domain virtualization mode that would allow multiple operating systems to run in parallel on a single platform.


Imagination believes that this generation of PowerVR will "usher a new era of console-like gaming experiences" and will showcase a new demo at GDC called Dwarf Hall.

I'll be at GDC next week and have already set up a meeting with Imagination to talk about the GT7900, so I should have some hands-on experiences to report back with soon. I am continually curious about the market for these types of high-end "mobile" GPUs given the limited audience that Android consoles currently address. Imagination does claim that the GT7900 beats products with performance levels as high as the GeForce GT 730M discrete GPU - no small feat.

Intel Sheds Its Remaining Stake In Imagination Technologies

Subject: General Tech | February 25, 2015 - 08:56 PM |
Tagged: PowerVR, Intel, Imagination Technologies, igp, finance

Update: Currency exchange rates have been corrected. I'm sorry for any confusion!

Intel Foundation is selling off its remaining stake in UK-based Imagination Technologies (IMG.LN). According to JP Morgan, Intel is selling off 13.4 million shares (4.9% of Imagination Technologies) for 245 GBp each. Once all shares are sold, Intel will gross just north of $50.57 Million USD.
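
For anyone checking the math, here is a minimal sketch of the conversion; the ~1.54 USD/GBP exchange rate is an assumption inferred from the totals in the article, not a quoted figure.

```python
# Back-of-envelope check on the sale figure.
shares = 13_400_000       # 13.4 million shares
price_gbp = 2.45          # 245 pence per share
usd_per_gbp = 1.54        # assumed exchange rate at the time (not quoted in the article)

gross_gbp = shares * price_gbp
print(f"Gross: ~£{gross_gbp / 1e6:.2f}M (~${gross_gbp * usd_per_gbp / 1e6:.2f}M USD)")
```

That gives roughly £32.8 million, or just over $50 million at the assumed rate.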


Imagination Technologies' PowerVR Rogue Series 6XT GPU is used in Apple's A8-series chips.

Intel first invested in Imagination Technologies back in October of 2006 in a deal to gain access to the company's PowerVR graphics IP portfolio. Since then, Intel has been slowly moving away from PowerVR graphics in favor of its own internal HD Graphics GPUs. (Further, Intel sold off 10% of its IMG.LN stake in June of last year.) Even Intel's low-cost Atom line of SoCs has mostly moved to Intel GPUs, with the exception of the mobile Merrifield and Moorefield smartphone/tablet SoCs.

The expansion of Intel's own graphics IP, combined with Imagination Technologies' acquisition of MIPS, is reportedly the "inevitable" reason for the sale. According to The Guardian, industry analysts have speculated that, as it stands, Intel is a minor customer of Imagination Technologies, accounting for less than 5% of its graphics business (and a licensing agreement signed this year doesn't rule out PowerVR graphics permanently despite the sale). Imagination Technologies still has a decent presence in the mobile (ARM-based) space, with customers including Apple, MediaTek, Rockchip, Freescale, and Texas Instruments.

Currently, the company's stock price is sitting at 258.75 GBp (~$3.99 USD), which seems to indicate that the Intel sell-off was seen as "inevitable" and already priced in, or simply has not left investors all that concerned.

What do you think about the sale? Where does this leave Intel as far as graphics goes? Will we see Intel HD Graphics scale down to smartphones, or will the company go with a PowerVR competitor? Would Intel really work with ARM's Mali, Qualcomm's Adreno, or Samsung's rumored custom GPU cores? On that note, an Intel-powered smartphone with NVIDIA Tegra graphics would be amazing (hint, hint Intel!).

Apple A8 Die Shot Released (and Debated)

Subject: Graphics Cards, Processors, Mobile | September 29, 2014 - 01:53 AM |
Tagged: apple, a8, a7, Imagination Technologies, PowerVR

First, Chipworks released a die shot of the new Apple A8 SoC. It is based on the 20nm fabrication process from TSMC, whose entire capacity Apple allegedly bought. From there, a bit of a debate arose regarding what each group of transistors represents. All sources claim that it is based around a dual-core CPU, but the GPU is a bit polarizing.


Image Credit: Chipworks via Ars Technica

Most sources, including Chipworks, Ars Technica, Anandtech, and so forth believe that it is a quad-core graphics processor from Imagination Technologies. Specifically, they expect that it is the GX6450 from the PowerVR Series 6XT. This is a narrow upgrade over the G6430 found in the Apple A7 processor, which is in line with the initial benchmarks that we saw (and not in line with the 50% GPU performance increase that Apple claims). For programmability, the GX6450 is equivalent to a DirectX 10-level feature set, unless it was extended by Apple, which I doubt.


Image Source: DailyTech

DailyTech has their own theory, suggesting that it is a GX6650 that is horizontally-aligned. From my observation, their "Cluster 2" and "Cluster 5" do not look identical at all to the other four, so I doubt their claims. I expect that they heard Apple's 50% claims, expected six GPU cores as the rumors originally indicated, and saw cores that were not there.

Which brings us back to the question of, "So what is the 50% increase in performance that Apple claims?" Unless they had a significant increase in clock rate, I still wonder if Apple is claiming that their increase in graphics performance will come from the Metal API even though it is not exclusive to new hardware.

But from everything we have seen so far, it is just a handful of percent better.

MWC 2014: Allwinner launches UltraOcta A80 processor with big.LITTLE and PowerVR

Subject: Mobile | February 24, 2014 - 02:17 PM |
Tagged: ultraocta a80, PowerVR, MWC 14, MWC, big.little, Allwinner

The wheels keep turning from Mobile World Congress 2014 in Barcelona with Allwinner's announcement of the UltraOcta A80 SoC.  Dubbed the "world's first big.LITTLE octa-core (8-core) heterogeneous SoC to include PowerVR Series6 GPU technology", the UltraOcta A80 combines four Cortex-A7 and four Cortex-A15 cores in a single chip design.

The UltraOcta A80 is aimed at tablets, portable game consoles, set-top boxes, media players and other devices that require "premium performance" level parts.  The first devices will apparently hit "soon" but no other details were offered.  


ARM is definitely on board with Allwinner for this product as it is the poster child example of how the big.LITTLE design philosophy can be implemented to offer both high performance and low power results in one SoC.  This chip is built on a 28nm process technology and also includes high performance graphics with the PowerVR G6230 GPU.  This GPU includes two "clusters" for a total of 64 ALUs (called cores in other SoCs).  Keep in mind that this Series6 GPU is about half the performance of the G6400 series included in the iPhone 5s and even Intel's new Merrifield and Moorefield processors.

The Allwinner UltraOcta A80 will also support 4K video encode and decode with H.265 thrown in for good measure.  I am very curious to see the load on the SoC during these types of high quality video processes as the amount of acceleration on the chip isn't known yet.

View the full press release after the break!

MWC 2014: Intel Atom Moorefield and Merrifield officially unveiled

Subject: Processors, Mobile | February 24, 2014 - 04:00 AM |
Tagged: z3480, PowerVR, MWC 14, MWC, moorefield, merrifield, Intel, atom

Intel also announced an LTE-Advanced modem, the XMM 7260 at Mobile World Congress today.

Last May Intel shared with us details of its new Silvermont architecture, a complete revamp of the Atom brand with an out-of-order design and vastly improved performance per watt.  In September we had our first real hands-on with a processor built around Silvermont, code named Bay Trail.  The Atom Z37xx and Z36xx products were released and quickly found their way into products like the ASUS T100 convertible notebook.  In fact, both the Bay Trail processor and the ASUS T100 took home honors in our end-of-year hardware recognitions.

Today at Mobile World Congress 2014, Intel is officially announcing the Atom Z35xx and Z34xx processors based on the same Silvermont architecture, code named Moorefield and Merrifield respectively.  These new processors share the same power efficiency of Bay Trail and excellent performance but have a few changes to showcase.


Though there are many SKUs yet to be revealed for Merrifield and Moorefield, this comparison table gives you a quick idea of how the new Atom Z3480 compares to the previous generation, the Clover Trail+ based Atom Z2580.


The Atom Z3480 is a dual-core (single module) processor with a clock speed as high as 2.13 GHz.  Even though it doesn't have HyperThreading support, the new architecture is definitely faster than the previous product.  The cellular radio listed on this table is a separate chip, not integrated into the SoC - at least not yet.  PowerVR G6400 quad core/cluster graphics should deliver performance somewhere near that of the iPhone 5s, with support for OpenCL and RenderScript acceleration.  Intel claims that this PowerVR architecture will give Merrifield a 2x performance advantage over the graphics system in Clover Trail+.  A new image processor allows for 1080p60 video capture (vs 30 FPS before), and support for Android 4.4.2 is ready.


Most interestingly, the Merrifield and Moorefield SoCs do not use Intel's HD Graphics technology and instead return to the world of Imagination Technologies and the PowerVR IP.  Specifically, the Merrifield chip, the smaller of the two new offerings from Intel, uses the PowerVR G6400 GPU; the same base technology that powers the A7 SoC from Apple in the iPhone 5s.


A comparison between the Merrifield and Moorefield SoCs reveals the differences between what will likely be targeted smartphone and tablet processors.  The Moorefield part uses a pair of modules with a total of four cores, double that of Merrifield, and also includes a slightly higher performance PowerVR GPU option, the G6430.  

Intel has provided some performance results of the new Atom Z3480 using a reference phone, though of course, with all vendor provided benchmarks, take them as an estimate until some third parties get a hold of this hardware for independent testing.  


Looking at GFXBench 2.7, Intel estimates that Merrifield will run faster than the Apple A7 in the iPhone 5s and just slightly behind the Qualcomm Snapdragon 800 found in the Samsung Galaxy S4.  Moorefield, the SoC that adds slightly to GPU performance and doubles the CPU core count, would improve performance to best the Qualcomm result.


WebXPRT is a web application benchmark, and with it Intel's Atom Z3480 has the edge over both the Apple A7 and the Qualcomm Snapdragon 800.  Intel also states that it can meet these performance claims while offering better battery life than the Snapdragon 800 - interestingly, the Apple A7 was left out of those metrics.


Finally, Intel did dive into the potential performance improvements that support for 64-bit technology will offer when Android finally implements support.  While KitKat can run a 64-bit kernel, the user space is not yet supported, so benchmarking is a very complicated and limited process.  Intel was able to find instances of 16-34% performance improvements from the move to 64-bit on Merrifield.  We are still some time from 64-bit Android OS versions, but Intel claims it will have full support ready when Google makes the transition.

Both of these SoCs should be showing up in handsets and tablets by Q2.  Intel did have design wins for Clover Trail+ in a couple of larger smartphones but the company has a lot more to prove to really make Silvermont a force in the mobile market.  

NVIDIA Details Tegra 4 and Tegra 4i Graphics

Subject: Graphics Cards | February 25, 2013 - 08:01 PM |
Tagged: nvidia, tegra, tegra 4, Tegra 4i, pixel, vertex, PowerVR, mali, adreno, geforce


When Tegra 4 was introduced at CES there was precious little information about the setup of the integrated GPU.  We all knew that it would be a much more powerful GPU, but we were not entirely sure how it was set up.  Now NVIDIA has finally released a slew of whitepapers that deal with not only the GPU portion of Tegra 4, but also some of the low level features of the Cortex A15 processor.  For this little number I am just going over the graphics portion.


This robust looking fellow is the Tegra 4.  Note the four pixel "pipelines" that can output 4 pixels per clock.

The graphics units in the Tegra 4 and Tegra 4i share the same overall architecture; the 4i simply has fewer units, arranged slightly differently.  Tegra 4 comprises 72 units, 48 of which are pixel shaders.  These pixel shaders are VLIW-based VEC4 units.  The other 24 units are vertex shaders.  The Tegra 4i comprises 60 units: 48 pixel shaders and 12 vertex shaders.  We knew at CES that it was not a unified shader design, but we were still unsure of the overall makeup of the part.  There are some very good reasons why NVIDIA went this route, as we will soon explore.

If NVIDIA were to transition to unified shaders, it would increase the overall complexity and power consumption of the part.  Each shader unit would have to be able to handle both vertex and pixel workloads, which means more transistors are needed.  Simpler shaders focused on either pixel or vertex operations are more efficient at what they do, both in terms of transistors used and power consumption.  It is the same train of thought as using fixed-function units versus fully programmable ones: yes, programmability gives more flexibility, but the fixed-function unit is smaller, faster, and more efficient at its workload.


On the other hand here we have the Tegra 4i, which gives up half the pixel pipelines and vertex shaders, but keeps all 48 pixel shaders.

If there was one surprise here, it would be that the part is not completely OpenGL ES 3.0 compliant.  It is lacking one major function that is required for certification: this particular part cannot render at FP32 precision.  It has been quite a few years since we have heard of anything in the PC market not being able to do FP32, but it is quite common to skip it in the power- and transistor-conscious mobile market.  NVIDIA decided to go with an FP20 partial-precision setup.  They claim that, for all intents and purposes, it will not be noticeable to the human eye.  Colors will still be rendered properly and artifacts will be few and far between.  Remember back in the day when NVIDIA supported FP16 and FP32 while they chastised ATI for choosing FP24 with the Radeon 9700 Pro?  Times have changed a bit.  Going with FP20 is again a power and transistor saving decision.  The part still supports DX9.3 and OpenGL ES 2.0, but it is not fully OpenGL ES 3.0 compliant.  This is not to say that it does not support any 3.0 features; it in fact supports quite a bit of the functionality required by 3.0, but it is still not fully compliant.
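
To put the "not noticeable to the human eye" claim in perspective, here is a rough, purely illustrative sketch; it is not NVIDIA's actual FP20 encoding, just a generic reduced-mantissa quantizer compared against a single 8-bit color step.

```python
# Illustrative only: quantize a value to a float with a reduced mantissa and compare
# the rounding error against one 8-bit color step (1/255). Not the real FP20 format.
import math

def quantize(value: float, mantissa_bits: int) -> float:
    """Round value to the nearest float with the given number of fractional mantissa bits."""
    if value == 0.0:
        return 0.0
    exp = math.floor(math.log2(abs(value)))   # exponent of the leading bit
    step = 2.0 ** (exp - mantissa_bits)       # spacing between representable values
    return round(value / step) * step

color = 0.7137  # an arbitrary normalized color channel value
for bits in (10, 13, 23):  # FP16-class, roughly FP20-class, and FP32 mantissa widths
    err = abs(quantize(color, bits) - color)
    print(f"{bits:2d} mantissa bits -> error {err:.2e} ({err * 255:.4f} of an 8-bit color step)")
```

Even with a mantissa in the low teens of bits, the rounding error is a tiny fraction of one 8-bit color step, which is the gist of NVIDIA's argument.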

This will be an interesting decision to watch over the next few years.  The latest Mali 600 series, PowerVR 6 series, and Adreno 300 series solutions all support OpenGL ES 3.0.  Tegra 4 is the odd man out.  While most developers have no plans to go to 3.0 anytime in the near future, it will eventually be implemented in software.  When that point comes, then the Tegra 4 based devices will be left a bit behind.  By then NVIDIA will have a fully compliant solution, but that is little comfort for those buying phones and tablets in the near future that will be saddled with non-compliance once applications hit.


A list of the OpenGL ES 3.0 features that are actually present in Tegra 4; the lack of FP32, however, relegates it to ES 2.0-compliant status.

The core speed is increased to 672 MHz, well up from the 520 MHz in Tegra 3 (8 pixel and 4 vertex shaders).  The GPU can output four pixels per clock, double that of Tegra 3.  Once we consider the extra clock speed and pixel pipelines, the Tegra 4 increases pixel fillrate by 2.6x.  Pixel and vertex shading will get a huge boost in performance due to the dramatic increase of units and clockspeed.  Overall this is a very significant improvement over the previous generation of parts.
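
For the curious, a minimal sketch of where that 2.6x figure comes from, using the pixels-per-clock and clock speeds quoted above:

```python
# Pixel fillrate = pixels per clock x core clock.
def fillrate_mpix(pixels_per_clock: int, clock_mhz: float) -> float:
    return pixels_per_clock * clock_mhz

tegra3 = fillrate_mpix(2, 520)   # Tegra 3: 2 pixels/clock at 520 MHz
tegra4 = fillrate_mpix(4, 672)   # Tegra 4: 4 pixels/clock at 672 MHz
print(f"Tegra 3: {tegra3:.0f} Mpix/s, Tegra 4: {tegra4:.0f} Mpix/s, ratio {tegra4 / tegra3:.2f}x")
```

That yields about 1.04 Gpixels/s versus 2.69 Gpixels/s, or roughly 2.6x.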

The Tegra 4 can output to a 4K display natively, and that is not the only new feature for this part.  Here is a quick list:

2x/4x Multisample Antialiasing (MSAA)

24-bit Z (versus 20-bit Z in the Tegra 3 processor) and 8-bit Stencil

4K x 4K texture size incl. Non-Power-of-Two textures (versus 2K x 2K in the Tegra 3 processor) – for higher quality textures, and easier porting of full resolution textures from console and PC games to the Tegra 4 processor.  Good for high resolution displays.

16:1 Depth (Z) Compression and 4:1 Color Compression (versus none in the Tegra 3 processor) – this is lossless compression and is useful for reducing bandwidth to/from the frame buffer, and it is especially effective in antialiasing when processing multiple samples per pixel

Depth Textures

Percentage Closer Filtering for Shadow Texture Mapping and Soft Shadows

Texture border color to eliminate coarse MIP-level bleeding

sRGB for Texture Filtering, Render Surfaces and MSAA down-filter

Note: CSAA is no longer supported in Tegra 4 processors

This is a big generational jump, and now we only have to see how it performs against the other top end parts from Qualcomm, Samsung, and others utilizing IP from Imagination and ARM.

Source: NVIDIA

... and Intel officially jumps into the TV or set top box business too

Subject: General Tech | February 13, 2013 - 12:40 PM |
Tagged: Intel, tv, intel media, imagination, PowerVR, intel tv

Ryan spotted prototype Intel TV hardware at CES in the Imagination suite, and today Intel Media's Erik Huggers has confirmed that Intel will be producing some sort of set top box or TV for sale in the near future.  It will likely be in partnership with Imagination and their PowerVR technology, and might possibly be tied to the new NUC that Intel released recently, which would fit the definition of a set top box, plus keep your cat nice and warm.  The Inquirer did get confirmation that Intel will release hardware to compete with Apple and Google before the end of the year, but they would not specify exactly what that hardware would be.  Intel plans to set itself apart from Netflix and other content streamers by offering live TV streams, which will probably not make the company popular with established cable or satellite providers, but with Intel's deep pockets and the possibility of personalized advertising it could well steal customers away.


"CHIPMAKER Intel has confirmed that it is working on hardware to stream live and on-demand content to televisions in 2013."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer

CES 2013: Prototype Intel TV System Spotted at Imagination Suite

Subject: Systems | January 10, 2013 - 02:16 PM |
Tagged: CES, ces 2013, Intel, tv, intel media, imagination, PowerVR

While visiting with the folks at Imagination, responsible for the graphics system known as PowerVR found in many Apple and Samsung SoCs, we were shown a new, innovative way to watch TV.  This new system used an impressively quick graphic overlay, the ability to preview other channels before changing to them and even the ability to browse content on your phone and "toss" it to your TV. 


The software infrastructure is part of the iFeelSmart package, but the PowerVR team was demonstrating the performance and user experiences that its low-power graphics system could provide for future applications.  And guess what we saw connected to the TV?


With all of the information filtering out about Intel's upcoming dive into the TV ecosystem, it shouldn't be a surprise to find hardware like this floating around.  We aren't sure what kind of hardware Intel will actually end up using for the set top box expected later this year, but it is possible we are looking at an early development configuration right here.

Coverage of CES 2013 is brought to you by AMD!


Apple's A6 Processor Uses Hand Drawn ARM Cores to Boost Performance

Subject: General Tech, Processors, Mobile | September 27, 2012 - 12:26 PM |
Tagged: SoC, PowerVR, iphone, arm, apple, a6

Apple's latest smartphone was unveiled earlier this month, and just about every feature has been analyzed extensively by reviewers and expounded upon by Apple. However, the one aspect that remains a mystery is the ARM System on a Chip that is powering the iPhone 5. There has been a great deal of speculation, but officially Apple is not talking. The company has stated that the new processor is two times faster than its predecessor, but beyond that it will be up to reviewers to figure out what makes it tick.

After the press conference, PC Perspective's Josh Walrath researched what few hints there were on the new A6 processor and determined that there was a good chance it was an ARM Cortex A15-based design. Since then, some tidbits of information have come out that suggest otherwise. Developers for iOS discovered that the latest SDK exposes new functionality for the A6 processor, including new instruction sets. That discovery lent credence to the A6 possibly being a Cortex A15, but it did not prove it. Following that, Anandtech posted an article stating that it was not a licensed Cortex A15 design. Rather, the A6 was a custom Apple-developed chip that would, ideally, give users the same level of performance without needing significantly more power – and without waiting for a Cortex A15 chip to be manufactured.

Finally, thanks to the work of the enthusiasts over at Chipworks, we have physical proof that reveals details about Apple's A6 SoC. By stripping away the outer protective layers and placing the A6 die under a powerful microscope, they managed to get an 'up close and personal' look at the inside of the chip.


Despite the near-Jersey Shore (shudder) levels of drama between Apple and Samsung over the recent trade dress and patent infringement allegations, it seems that the two companies worked together to bring Apple's custom processor to market. The researchers determined that the A6 was based on Samsung's 32nm CMOS manufacturing process. It reads APL0589B01 on the inside, which suggests that it is of Apple's own design. Once the Chipworks team sliced open the processor further, they discovered proof that Apple really did craft a custom ARM processor.

In fact, Apple has created a chip with dual ARM CPU cores and three GPU cores (PowerVR). The CPU cores support the ARMv7s instruction set, and Apple has gone with a hand-drawn design. Rather than employ computer libraries to automatically lay out the logic in the processor, Apple and the engineers acquired through its purchase of PA Semi drew out the processor layout by hand. This chip has likely been in the works for a couple of years now, and the 96.71 mm² die will offer up some notable performance improvements.


It seems Apple has opted for an expensive custom chip rather than a licensed Cortex A15 design. That, combined with the hand-drawn layout, should give Apple a processor with better performance than its past designs without requiring significantly more power.

At a time when mobile SoC giant Texas Instruments is giving up on ARM chips for tablets and smartphones, and hand drawn designs are becoming increasingly rare (even AMD has given up), I have to give Apple props for going with a custom processor laid out by hand. I'm interested to see what the company is able to do with it and where they will go from here. 

Chipworks and iFixIt also took a look at the LTE modem, Wi-Fi chip, audio amplifier, and other aspects of the iPhone 5's internals, and it is definitely worth a read for the impressive imagery alone.

Source: iFixit