Subject: Graphics Cards, Mobile | December 4, 2018 - 03:00 AM | Scott Michaud
Tagged: series3nx-f, series3nx, PowerVR, neural network, Imagination Technologies, imagination
Imagination Technologies has just announced the Series3NX line of Neural Network Accelerator (NNA) architectures. These products are designs that system-on-a-chip (SoC) manufacturers can license and integrate into their own chips. The previous design, Series2NX, has seen some design wins, which Imagination says are “predominantly focused in the mobile and automotive markets”.
Actually, there are two announcements today: Series3NX and Series3NX-F.
The base NNA core is the Series3NX. Their press kit mentions five SKUs: the AX3125 at 0.6 trillion operations per second (TOPS), the AX3145 at 1.2 TOPS, the AX3165 at 2.4 TOPS, the AX3185 at 5 TOPS, and the AX3195 at 10 TOPS. Multiple of these cores can be integrated at the same time, which allows products with over 160 TOPS of performance. These designs are available now for licensing.
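As a quick sanity check on those numbers, the 160 TOPS figure works out to sixteen instances of the top-end AX3195; the core count here is our assumption, since Imagination only quotes the aggregate:

```python
# Peak throughput of the announced Series3NX SKUs, in TOPS
# (trillions of operations per second), from the press kit.
sku_tops = {
    "AX3125": 0.6,
    "AX3145": 1.2,
    "AX3165": 2.4,
    "AX3185": 5.0,
    "AX3195": 10.0,
}

# A hypothetical 16-core integration of the top-end AX3195 reaches
# the quoted "over 160 TOPS" mark.
cores = 16
aggregate = cores * sku_tops["AX3195"]
print(aggregate)  # 160.0
```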
This brings us to the Series3NX-F. This product combines a Series3NX core with a programmable, floating-point processor (based on the latest PowerVR Rogue architecture) and some RAM. This will be available to license in Q1 2019.
Subject: Graphics Cards, Mobile | December 4, 2018 - 03:00 AM | Scott Michaud
Tagged: PowerVR, Imagination Technologies
Imagination Technologies has just launched three new GPUs: the PowerVR 9XEP, the PowerVR 9XMP, and the PowerVR 9XTP. The 9XEP is designed for casual gaming and UI, the 9XMP is designed for mid-level mobile gaming, and the 9XTP is for high-end mobile-and-up.
The press release notes that, with the release of Fortnite and PUBG on mobile platforms, gaming is pushing devices toward larger GPUs. As a result, they have worked on gaming-centric features like anisotropic filtering to improve performance and image quality. They specifically mention a 2x performance boost in anisotropic filtering and a 4x increase in shadow sample performance on the 9XMP.
There are a lot of segments that these designs cover; check out Imagination’s slides above.
All three of these designs are available now for licensing.
Subject: Graphics Cards, Mobile | June 2, 2017 - 02:23 AM | Scott Michaud
Tagged: Imagination Technologies, PowerVR, ray tracing, ue4, vulkan
Imagination Technologies has published another video that demonstrates ray tracing with their PowerVR Wizard GPU. The test system, today, is a development card that is running on Ubuntu, and powering Unreal Engine 4. Specifically, it is using UE4’s Vulkan renderer.
The demo highlights two major advantages of ray traced images. The first is that, rather than applying a baked cubemap with screen-space reflections to simulate metallic objects, this demo calculates reflections with secondary rays. From there, it’s just a matter of hooking up the gathered information into the parameters that the shader requires and doing the calculations.
The second advantage is that it can do arbitrary lens effects, like distortion and equirectangular 360° projections. Rasterization, which projects 3D world coordinates into 2D coordinates on a screen, assumes that straight edges stay straight, and that causes problems as the field of view gets very large, especially at a full circle. Imagination Technologies acknowledges that workarounds exist, like breaking up the render into the six faces of a cube, but the best approximation is casting a ray per pixel and seeing what it hits.
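To sketch how per-pixel ray casting sidesteps the straight-edge assumption, here is a minimal equirectangular ray generator; the function name and conventions are ours, not from Imagination's demo:

```python
import math

def equirect_ray(px, py, width, height):
    """Map a pixel to a world-space ray direction for a full
    360-by-180-degree equirectangular projection.

    Longitude spans [-pi, pi] across the image width and latitude
    spans [-pi/2, pi/2] across the height. A ray tracer just casts
    one ray per pixel along this direction, so there is no
    straight-edge assumption to break down at wide fields of view.
    """
    lon = ((px + 0.5) / width - 0.5) * 2.0 * math.pi
    lat = (0.5 - (py + 0.5) / height) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The centre pixel of a 1024x512 image looks straight down +z;
# the direction is unit length by construction.
d = equirect_ray(511.5, 255.5, 1024, 512)
```

A rasterizer would need the six-cube-face workaround mentioned above to cover the same field of view; here every pixel simply gets its own direction.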
The demo was originally for GDC 2017, back in February, but the videos have just been released.
Subject: Graphics Cards, Mobile | April 3, 2017 - 06:18 PM | Scott Michaud
Tagged: apple, Imagination Technologies, PowerVR
This morning, Imagination Technologies Group released a press statement announcing that Apple Inc. intends to phase out their technology in 15 to 24 months. Imagination has doubts that Apple could have circumvented every piece of intellectual property, and they have requested proof from Apple that their new solution avoids all patents, trade secrets, and so forth. According to Imagination’s statement, Apple has, thus far, not provided that proof, and they don’t believe Apple’s claims.
On the one hand, it makes sense that Apple would not divulge its own trade secrets to a current partner and soon-to-be competitor until it’s necessary to do so. On the other hand, judging by previous stories, like the Intel / NVIDIA cross-license six years ago, GPUs are still a legal minefield for new players in the industry.
So, in short, Apple says they don’t need Imagination anymore, but Imagination calls bull.
From the financial side of things, Apple is a gigantic chunk of Imagination’s revenue. For the year ending on April 30th, 2016, Apple contributed about £60.7 million (~$75 million USD at today’s exchange rate) to Imagination Technologies’ revenue. Over that same period, Imagination Technologies’ entire revenue was £120.0 million ($149.8 million USD at today’s exchange rate).
To see how losing essentially half of your revenue can damage a company, I’ve included a screenshot of their current stock price (via Google Finance... and I apologize for the tall shot). It must be a bit scary to do business with Apple, given how much revenue they can add and subtract on a moment’s notice. I’m reminded of the iPhone 6 sapphire glass issue, where GT Advanced Technologies took on a half-billion dollars of debt to create sapphire for Apple, only to end up rejected in the end. In that case, though, Apple agreed to absolve the company of its remaining debt after GT liquidated its equipment.
As for Apple’s new GPU? It will be interesting to see how it turns out. Apple already has their own low-level graphics API, Metal, so they might have a lot to gain, although some macOS and iOS applications use OpenGL and OpenGL ES.
We’ll find out in less than two years.
Subject: Graphics Cards, Mobile, Shows and Expos | February 23, 2016 - 08:46 PM | Scott Michaud
Tagged: raytracing, ray tracing, PowerVR, mwc 16, MWC, Imagination Technologies
For the last couple of years, Imagination Technologies has been pushing hardware-accelerated ray tracing. One of the major problems in computer graphics is knowing what geometry and material corresponds to a specific pixel on the screen. Several methods exist, although typical GPUs crush a 3D scene into the virtual camera's 2D space and do a point-in-triangle test on it. Once they know where in the triangle the pixel is (if it is in the triangle at all), it can be colored by a pixel shader.
Another method is casting light rays into the scene, and assigning a color based on the material that each ray lands on. This is ray tracing, and it has a few advantages. First, it is much easier to handle reflections, transparency, shadows, and other effects where information is required beyond what the affected geometry and its material provides. There are usually ways around this, without resorting to ray tracing, but they each have their own trade-offs. Second, it can be more efficient for certain data sets. Rasterization, since it's based around a “where in a triangle is this point” algorithm, needs geometry to be made up of polygons; ray tracing only needs a way to test what a ray intersects, so other representations can be used directly.
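To make the contrast concrete, here is a minimal sketch (ours, not Imagination's code) of both approaches: the rasterizer's point-in-triangle test via edge functions, and a ray-shape intersection test that works on non-polygonal geometry like a sphere:

```python
def edge(ax, ay, bx, by, px, py):
    """Signed-area test: positive if P is to the left of edge A->B."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def point_in_triangle(p, a, b, c):
    """The rasterizer's core question: is this pixel inside the
    (screen-space, counter-clockwise) triangle?"""
    w0 = edge(*a, *b, *p)
    w1 = edge(*b, *c, *p)
    w2 = edge(*c, *a, *p)
    return w0 >= 0 and w1 >= 0 and w2 >= 0

# By contrast, a ray tracer only needs an intersection function per
# shape -- here a sphere -- so geometry need not be triangles at all.
def ray_hits_sphere(origin, direction, center, radius):
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    return b * b - 4.0 * a * c >= 0.0  # real roots => the ray hits

inside = point_in_triangle((1.0, 1.0), (0.0, 0.0), (4.0, 0.0), (0.0, 4.0))
hit = ray_hits_sphere((0.0, 0.0, -5.0), (0.0, 0.0, 1.0),
                      (0.0, 0.0, 0.0), 1.0)
```

Real hardware does both far more cleverly, of course, but the shape of the two questions is the point: one is tied to triangles, the other only to "what does this ray hit?".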
It also has the appeal of being what the real world sort-of does (assuming we don't need to model Gaussian beams). That doesn't necessarily mean anything, though.
At Mobile World Congress, Imagination Technologies once again showed off their ray tracing hardware, embodied in the PowerVR GR6500 GPU. This graphics processor has dedicated circuitry to calculate rays, and they use it in a couple of different ways. They presented several demos that modified Unity 5 to take advantage of their ray tracing hardware. One particularly interesting demo was a quick, seven-second video that added ray-traced reflections atop an otherwise rasterized scene.
It was a little too smooth, creating reflections that were too glossy, but that could probably be downplayed in the material (Update, Feb 24th @ 5pm: Car paint actually is that glossy. It's a different issue). Back when I was working on a GPU-accelerated software renderer, before Mantle, Vulkan, and DirectX 12, I was hoping to use OpenCL-based ray traced highlights on idle GPUs, if I didn't have any other purposes for them. Now though, those can be exposed to graphics APIs directly, so they might not be so idle.
The downside of dedicated ray tracing hardware is that, well, the die area could have been used for something else. Extra shaders, for compute, vertex, and material effects, might be more useful in the real world... or maybe not. Add in the fact that fixed-function circuitry already exists for rasterization, and it makes you balance gain for cost.
It could be cool, but it has its trade-offs, like anything else.
Subject: General Tech, Mobile, Shows and Expos | January 6, 2016 - 02:42 PM | Jeremy Hellstrom
Tagged: Series7XT Plus, PowerVR, hsa, GT7200 Plus, GT7400 Plus, CES
Update (Jan 7th, 2016 - Scott Michaud): Imagination sent us an updated diagram, and they wanted to clarify that there is "a 1:1 correspondence between the FP32 ALUs and the integer units." The updated diagram is just below.
Original article below
PowerVR GPUs are found in a variety of devices, from the PlayStation Vita to the last couple of iPhones, and were at one point the GPU in Intel APUs. Their latest offerings are the GT7200 Plus and the GT7400 Plus, both of which offer quite a few improvements over the previous generations, not least of which is wholesale adoption of heterogeneous computing and its various benefits, such as shared virtual memory.
These GPUs add INT16 and INT8 data paths, keeping the legacy INT32 paths for applications that require them. They have also adopted the OpenCL 2.0 API for heterogeneous computing, as well as OpenGL ES 3.2 and even Vulkan support. The GT7200 Plus is a dual-cluster configuration with 64 ALU cores, and the GT7400 Plus doubles that to a quad-cluster design with 128 ALU cores.
Along with the performance and feature upgrades comes a focus on upgrading the machine vision capabilities of the Rogue GPUs to be able to identify thousands of objects directly from the camera input stream in real-time. Check out their blog entry for more information on the new chips and if you want a refresher on the technology in these GPUs you can refer back to Ryan's article here.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Graphics Cards, Mobile, Shows and Expos | March 7, 2015 - 07:00 AM | Scott Michaud
Tagged: vulkan, PowerVR, Khronos, Imagination Technologies, gdc 15, GDC
Possibly the most important feature of upcoming graphics APIs, albeit the least interesting for enthusiasts, is how much easier driver development will become. So many decisions and tasks that once lay on the shoulders of AMD, Intel, NVIDIA, and the rest will now be given to game developers or made obsolete. Of course, you might think that game developers would oppose this burden, but (from what I understand) it is a weight they already bear, just while dealing with the symptoms instead of the root problem.
This also helps other hardware vendors become competitive. Imagination Technologies is definitely not new to the field: their graphics power the PlayStation Vita, many earlier Intel graphics processors, and the last couple of iPhones. Despite how abruptly the API came about, they had a proof-of-concept driver present at GDC. The unfinished driver was running an OpenGL ES 3.0 demo that had been converted to the Vulkan API.
A screenshot of the CPU usage was also provided, which is admittedly heavily cropped and hard to read. The one on the left claims 1.2% CPU load, with a fairly flat curve, while the one on the right claims 5% and seems to waggle more. Granted, the wobble could be partially explained by differences in the time they chose to profile.
According to Tom's Hardware, source code will be released “in the near future”.
Subject: Graphics Cards, Mobile | February 26, 2015 - 02:15 PM | Ryan Shrout
Tagged: super-gpu, PowerVR, Imagination Technologies, gt7900
As a preview to announcements and releases being made at both Mobile World Congress (MWC) and the Game Developers Conference (GDC) next week, Imagination Technologies took the wraps off of a new graphics product they are calling a "super-GPU". The PowerVR GT7900 is the new flagship GPU in its Series7XT family, targeting a growing category called "affordable game consoles." Think of Android-powered set-top devices like the Ouya or Amazon's Fire TV.
PowerVR breaks up its GPU designs into unified shading clusters (USCs) and the GT7900 has 16 of them for a total of 512 ALU cores. Imagination has previously posted a great overview of its USC architecture design and how you can compare its designs to other GPUs on the market. Imagination wants to claim that the GT7900 will offer "PC-class gaming experiences" though that is as ambiguous as the idea of a work load of a "console-level game." But with rated peak performance levels hitting over 800 GFLOPS in FP32 and 1.6 TFLOPS in FP16 (half-precision) this GPU does have significant theoretical capability.
|              | PowerVR GT7900 | Tegra X1  |
| ------------ | -------------- | --------- |
| GPU Clock    | 800 MHz        | 1000 MHz  |
| Process Tech | 16nm FinFET+   | 20nm TSMC |
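Those rated figures line up with the ALU count and clock, assuming each ALU retires one fused multiply-add (two floating-point operations) per cycle and the FP16 path runs at twice the FP32 rate; both assumptions are ours:

```python
alus = 512        # 16 USCs x 32 ALU cores each
clock_hz = 800e6  # rated 800 MHz GPU clock
fma_flops = 2     # one fused multiply-add = 2 FLOPs (our assumption)

fp32_gflops = alus * clock_hz * fma_flops / 1e9
fp16_tflops = fp32_gflops * 2 / 1e3  # assuming double-rate FP16

print(fp32_gflops)  # -> ~819.2, i.e. "over 800 GFLOPS"
print(fp16_tflops)  # -> ~1.64, i.e. "1.6 TFLOPS"
```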
Imagination also believes that PowerVR offers a larger portion of its peak performance for a longer period of time than the competition thanks to the tile-based deferred rendering (TBDR) approach that has been "refined over the years to deliver unmatched efficiency."
The FP16 performance number listed above is useful as an extreme power savings option where the half-precision compute operates in a much more efficient manner. A fair concern is how many applications, GPGPU or gaming, actually utilize the FP16 data type but having support for it in the GT7900 allows developers to target it.
Other key features of the GT7900 include support for OpenGL ES 3.1 + AEP (Android Extension Pack), hardware tessellation and ASTC LDR and HDR texture compression standards. The GPU also can run in a multi-domain virtualization mode that would allow multiple operating systems to run in parallel on a single platform.
Imagination believes that this generation of PowerVR will "usher in a new era of console-like gaming experiences" and will showcase a new demo at GDC called Dwarf Hall.
I'll be at GDC next week and have already set up a meeting with Imagination to talk about the GT7900, so I should have some hands-on experience to report back with soon. I am continually curious about the market for these types of high-end "mobile" GPUs, given the limited audience that the Android console market currently addresses. Imagination does claim that the GT7900 beats products with performance levels as high as the discrete GeForce GT 730M - no small feat.
Subject: General Tech | February 25, 2015 - 08:56 PM | Tim Verry
Tagged: PowerVR, Intel, Imagination Technologies, igp, finance
Update: Currency exchange rates have been corrected. I'm sorry for any confusion!
Intel is selling off its remaining stake in UK-based Imagination Technologies (IMG.LN). According to JP Morgan, Intel is selling 13.4 million shares (4.9% of Imagination Technologies) at 245 GBp each. Once all shares are sold, Intel will gross just north of $50.57 million USD.
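The gross proceeds follow directly from the stated share count and price; the GBP/USD rate below (~1.54, roughly where the pound traded in early 2015) is our assumption:

```python
shares = 13.4e6     # shares sold (4.9% of Imagination Technologies)
price_gbp = 2.45    # 245 GBp (pence) per share
gbp_to_usd = 1.54   # approximate early-2015 exchange rate (our assumption)

gross_gbp = shares * price_gbp       # ~GBP 32.8 million
gross_usd = gross_gbp * gbp_to_usd   # ~USD 50.6 million, matching the figure above
```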
Imagination Technologies' PowerVR Rogue Series 6XT GPU is used in Apple's A8-series chips.
Intel first invested in Imagination Technologies back in October of 2006 in a deal to gain access to the company’s PowerVR graphics IP portfolio. Since then, Intel has been slowly moving away from PowerVR graphics in favor of its own internal HD Graphics GPUs. (Further, Intel sold off 10% of its IMG.LN stake in June of last year.) Even Intel’s low-cost Atom line of SoCs has mostly moved to Intel GPUs, with the exception of the mobile Merrifield and Moorefield smartphone/tablet SoCs.
The expansion of Intel’s own graphics IP, combined with Imagination Technologies’ acquisition of MIPS, is reportedly the “inevitable” reason for the sale. According to The Guardian, industry analysts have speculated that, as it stands, Intel is a minor customer of Imagination Technologies, accounting for less than 5% of its graphics business (and a licensing agreement signed this year doesn’t rule out PowerVR graphics permanently, despite the sale). Imagination Technologies still has a decent presence in the mobile (ARM-based) space, with customers including Apple, MediaTek, Rockchip, Freescale, and Texas Instruments.
Currently, the company’s stock price is sitting at 258.75 GBp (~$3.99 USD), which seems to indicate that the “inevitable” Intel sell-off was already priced in, or that investors simply aren't that concerned.
What do you think about the sale? Where does this leave Intel as far as graphics goes? Will we see Intel HD Graphics scale down to smartphones or will the company go with a PowerVR competitor? Would Intel really work with ARM’s Mali, Qualcomm’s Adreno, or Samsung’s rumored custom GPU cores? On that note, an Intel powered smartphone with NVIDIA Tegra graphics would be amazing (hint, hint Intel!)
Subject: Graphics Cards, Processors, Mobile | September 29, 2014 - 01:53 AM | Scott Michaud
Tagged: apple, a8, a7, Imagination Technologies, PowerVR
First, Chipworks released a dieshot of the new Apple A8 SoC (stored at archive.org). It is based on the 20nm fabrication process from TSMC, which they allegedly bought the entire capacity for. From there, a bit of a debate arose regarding what each group of transistors represented. All sources claim that it is based around a dual-core CPU, but the GPU is a bit polarizing.
Image Credit: Chipworks via Ars Technica
Most sources, including Chipworks, Ars Technica, Anandtech, and so forth believe that it is a quad-core graphics processor from Imagination Technologies. Specifically, they expect that it is the GX6450 from the PowerVR Series 6XT. This is a narrow upgrade over the G6430 found in the Apple A7 processor, which is in line with the initial benchmarks that we saw (and not in line with the 50% GPU performance increase that Apple claims). For programmability, the GX6450 is equivalent to a DirectX 10-level feature set, unless it was extended by Apple, which I doubt.
Image Source: DailyTech
DailyTech has their own theory, suggesting that it is a GX6650 that is horizontally-aligned. From my observation, their "Cluster 2" and "Cluster 5" do not look identical at all to the other four, so I doubt their claims. I expect that they heard Apple's 50% claims, expected six GPU cores as the rumors originally indicated, and saw cores that were not there.
Which brings us back to the question of, "So what is the 50% increase in performance that Apple claims?" Unless they had a significant increase in clock rate, I still wonder if Apple is claiming that their increase in graphics performance will come from the Metal API even though it is not exclusive to new hardware.
But from everything we saw so far, it is just a handful of percent better.