Imagination Technologies Announces PowerVR 2NX NNA

Subject: Mobile | September 25, 2017 - 10:45 PM |
Tagged: Imagination Technologies, deep neural network

Imagination Technologies is known for developing interesting, somewhat offbeat hardware, such as GPUs with built-in ray tracing. In this case, the company is jumping into the neural network market with a PowerVR-branded accelerator. The PowerVR Series2NX Neural Network Accelerator targets massively parallel but low-precision tasks. AnandTech reports that the chip can even work at multiple bit depths on different layers of a single network, from 16-bit down to 12-, 10-, 8-, 7-, 6-, 5-, and 4-bit.


Image Credit: Imagination Technologies via AnandTech

Imagination seems to say that this is variable "to maintain accuracy". I'm guessing that tweaking your network in that way doesn't give an actual speed-up, but I honestly don't know.
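To make the bit-depth discussion concrete, here is a minimal sketch (my own illustration, not Imagination's implementation) of uniform quantization at the bit depths the chip supports, showing how the rounding error shrinks as precision grows:

```python
# Hypothetical illustration: round a weight to the nearest of 2**bits
# evenly spaced levels in [x_min, x_max] and measure the error at each
# of the bit depths the Series2NX reportedly supports.

def quantize(x, bits, x_min=-1.0, x_max=1.0):
    """Uniformly quantize x to 2**bits levels spanning [x_min, x_max]."""
    levels = 2 ** bits - 1
    step = (x_max - x_min) / levels
    q = round((x - x_min) / step)  # nearest quantization level
    return x_min + q * step

w = 0.337  # an arbitrary example weight
for bits in (16, 12, 10, 8, 7, 6, 5, 4):
    err = abs(quantize(w, bits) - w)
    print(f"{bits:2d}-bit: error = {err:.6f}")
```

Running this shows the error climbing as the bit depth drops, which is presumably why accuracy-sensitive layers would stay at higher precision while tolerant layers drop down.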

As for Imagination Technologies, they intend to have this in mobile devices for, as they suggest, photography and predictive text. They also list the usual suspects: VR/AR, automotive, surveillance, and so forth. They suggest that this technology will target TensorFlow Lite.

The PowerVR 2NX Neural Network Accelerator is available for licensing.


September 25, 2017 | 11:51 PM - Posted by extide

They are claiming up to 60% performance increase while maintaining 99% relative accuracy for final results by using variable precision internally. (Why else would you opt to use lower precision?) ;)

September 26, 2017 | 02:28 AM - Posted by Power (not verified)

"I honestly don’t know"
At least you are honest about it.

September 26, 2017 | 06:32 AM - Posted by willmore

Having done some variable-precision math for signal processing some time ago, I can tell you that the analysis you need to do to know what precision you need from one calculation to the next is extremely hard. For something as complex as a neural net calculation, you're quickly outside of the 'calculate how much I need' and into the 'try it with different precisions and see if it seems to work'.
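The empirical "try it and see" approach described above can be sketched as a brute-force search: quantize the weights at each candidate bit depth and keep the lowest one whose output stays within tolerance of the full-precision result. This toy example (my own names and numbers, not a real workflow) uses a single dot product as the stand-in for a layer:

```python
# Hypothetical sketch of an empirical precision search: find the smallest
# bit depth whose quantized result stays within a tolerance of the exact one.

def quantize(x, bits, lo=-1.0, hi=1.0):
    """Uniformly quantize x to 2**bits levels spanning [lo, hi]."""
    step = (hi - lo) / (2 ** bits - 1)
    return lo + round((x - lo) / step) * step

def dot(ws, xs):
    return sum(w * x for w, x in zip(ws, xs))

def smallest_safe_bits(ws, xs, tol=0.01, candidates=(4, 5, 6, 7, 8, 10, 12, 16)):
    """Return the lowest candidate bit depth keeping the dot product within tol."""
    exact = dot(ws, xs)
    for bits in candidates:
        approx = dot([quantize(w, bits) for w in ws], xs)
        if abs(approx - exact) <= tol:
            return bits
    return max(candidates)  # fall back to the highest precision tried

weights = [0.21, -0.47, 0.88, -0.05]
inputs = [0.5, 0.25, -0.75, 1.0]
print(smallest_safe_bits(weights, inputs))
```

Loosening the tolerance lets more layers drop to lower precision; tightening it pushes them back up, which matches the trial-and-error character of the analysis.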

Good luck with that.

September 26, 2017 | 09:16 AM - Posted by FineNeuralGranularityForBubba (not verified)

"16-bit, down to 12-, 10-, 8-, 7-, 6-, 5-, and 4-bit."

That's a lot of fine granularity from 16 bits on down to 4 bits, some in 1 bit increments between 8 and 4 with a 2 bit increment between 12 and 10 and one 4 bit jump from 12 to 16 with the smallest being 4 bits(16 states/synapses). So that's a varying degree of simulated synapse connections and/or state values available of varying capacities.

I guess that Imagination Technologies wants in on the facial recognition hardware business that will be needed for any phones that have to compete with Apple's A11 Bionic-based phones. What I want to know is how well all this AI/neural net hardware can do at simulating ray tracing and other graphics tasks, along with the usual edge detection that is done with these neural processing units. I'd think that this neural networking IP may just share some DNA with the PowerVR Wizard ray tracing IP that Imagination Technologies demonstrated a while back.

AI/neural networking is the next big application to be done in dedicated hardware, and even graphics effects can benefit from some deep learning IP.

September 26, 2017 | 09:19 AM - Posted by FineNeuralGranularityForBubba (not verified)

Edit: 12 and 10

to: 12, 10, 8
