Apple A11 Performance Review with the iPhone 8 Plus: Taking on Desktop?

Subject: Processors, Mobile
Manufacturer: Apple

GPU Benchmarks and Conclusion

Perhaps more interesting than the A11 Bionic's additional CPU horsepower is the inclusion of Apple's first in-house GPU - though the A10 Fusion contained a mysterious, custom GPU as well (discussed in detail in this excellent write-up from David Kanter last October). In any case, Apple has now officially severed ties with Imagination Technologies, maker of the PowerVR GPUs found in Apple's previous SoCs. Once again Apple has avoided divulging technical details about its mobile parts, so exact GPU specifications were not announced - though TechInsights has an ongoing teardown that indicates another 6-core GPU design.

Apple's A11 Bionic Chip (Image credit: TechInsights)

What is the result of Apple's secretive efforts? The A11 Bionic's GPU is extremely powerful. Just take a look at these results:

Offscreen 1920x1080 results with OpenGL ES 2.0 and 3.0 are staggering, easily surpassing not only Apple's previous best but even the Snapdragon 835's mighty Adreno 540 in the GFXBench tests.

Next we have a look at the graphics portion of Basemark OS II, which is another fully cross-platform mobile benchmark:

Another big victory for the A11 Bionic's GPU, with the A10 Fusion in second place. To me these results actually suggest an advantage for Apple based on its comparatively lower display resolutions, with flagship Android handsets such as the Pixel and GS8+ running at or above QHD. The offscreen 1080p tests from GFXBench above should provide a more balanced assessment of performance, as they are not dependent on native screen resolution.
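The resolution gap described above is easy to quantify. Here is a minimal Python sketch; the QHD figure of 2560x1440 is an assumed typical Android flagship resolution, not a measured value from these tests:

```python
# Pixel counts for the two rendering targets being compared.
resolutions = {
    "1080p (GFXBench offscreen)": (1920, 1080),
    "QHD (typical Android flagship)": (2560, 1440),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}
ratio = pixels["QHD (typical Android flagship)"] / pixels["1080p (GFXBench offscreen)"]

for name, count in pixels.items():
    print(f"{name}: {count:,} pixels")
print(f"QHD pushes {ratio:.2f}x the pixels of 1080p")  # ~1.78x
```

All else being equal, an onscreen test at QHD asks the GPU to shade roughly 78% more pixels per frame than the same scene at 1080p, which is why the offscreen results are the fairer comparison.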

3DMark is next, and here the balance shifts somewhat when using the standard versions of the tests. It's worth noting that Sling Shot is rendered at 1920x1080 before being scaled to the display resolution, and only by running these tests in "unlimited" mode can display scaling-related performance differences be avoided.

The A11 Bionic is still above Apple's previous best, but in this OpenGL ES 3.0 test Qualcomm's Adreno 540 remains on top. One aspect of Apple's platform that has been shifting over the last couple of years is the graphics API, as Apple skipped OpenGL ES 3.1 support in favor of its own Metal API. How does this compare? We ran the Sling Shot Extreme benchmark, which uses ES 3.1 on Android devices and Apple's proprietary Metal API on the iPhone.

Here the Adreno 540 is again on top, but the difference between last year's A10 Fusion and the new A11 Bionic GPU is remarkable. If you are an iOS user this is exciting news - though in this particular benchmark Apple trails Qualcomm's best by a significant margin. Part of this is likely because the Extreme version of Sling Shot is rendered at a full 2560x1440 before display scaling.

Next, we'll take a look at the 'unlimited' versions of both Sling Shot tests, and this time it is a new Galaxy Note8 providing the Snapdragon 835 results rather than the GS8+. The results are also more granular this time, with a look at each component of the benchmark score (graphics, physics):

Bypassing display resolution and scaling provides what should be a more accurate look at the comparative performance of these mobile platforms, and things are certainly a little closer between the Adreno 540 and the A11 Bionic GPU. Apple is on top with the standard Sling Shot test (1080p), and with the extreme test (1440p) the Adreno holds on for its first victory.

The Intel Challenge: Part 2

Finally, let's have a look at the A11 Bionic GPU against the same Intel Kaby Lake processor from the first page. Integrated processor graphics are unimpressive as a rule, but a matchup against a mobile GPU should be a little more interesting. Once again the Snapdragon 835 is included for reference:

The A11 Bionic provides significantly higher graphics performance to edge out the i5-7300U (Intel HD Graphics 620) in the overall score, though the physics test shows a clear advantage from Intel. The Snapdragon 835 lags far behind in all categories here. Clearly a new standard for mobile platforms has been set by Apple, and it will be interesting to see how the industry responds.


It's a safe bet to call the A11 Bionic the fastest mobile processor in the world. Of course, certain aspects of the A11 Bionic platform's performance will require extended hands-on testing, including network performance and battery life. The latter is an area where the iPhone 7’s A10 Fusion was disappointing, with lackluster overall battery life for a flagship device - both in controlled testing and everyday use. While Ryan has completed a battery test on an iPhone 8 Plus - with a very impressive 9 hours 40 minutes from our wi-fi benchmark (two hours better than our iPhone 7 Plus was able to achieve) - it remains to be seen if areas such as standby power draw are improved over the previous platform, and that will require some additional testing.

Externally not much has changed with the iPhone 8 and 8 Plus, which now have glass backs

A full review of the iPhone 8 Plus is upcoming; this author is now in possession of a new handset and will begin daily use over the next couple of weeks to see if the iPhone 8 is worth the upgrade over an iPhone 7 - though with the iPhone X coming by year's end it feels a bit like a stopgap. However, with fantastic performance from the A11 Bionic platform and an MSRP $200 lower than the iPhone X ($799 vs. $999 for a 64GB capacity), the iPhone 8 Plus might offer enough to be a compelling alternative - especially for anyone accustomed to the fingerprint sensor that is conspicuously absent from the upcoming flagship.

Stay tuned for our full iPhone 8 Plus review!

September 29, 2017 | 06:27 AM - Posted by khanmein

@Sebastian Peak, Can you make a comparison between iPhone 8 & iPhone 8 Plus? Thanks & cheers.

September 29, 2017 | 06:28 AM - Posted by JL77 (not verified)

Sorry, but saying that this SoC is taking on desktop-level performance is just headline grabbing. In no real terms is it anywhere near desktop performance. All the parts shown in comparison to the A11 chip are mobile chips, with the exception of the Core i5-7300U, which is designed to be a low-power CPU for laptops rather than a performance part for desktops. Being only a 2-core part, I would expect performance from it in line with a typical i3 CPU, which is hardly a fair comparison against a 6-core chip.

Now if the A11 performed like a 6-core desktop chip such as the Ryzen 1600 your headline could be justified, but as it is, it is nothing but click-bait.

September 29, 2017 | 06:32 AM - Posted by StompinRound (not verified)

But when you've been running SuSE on an IBM P3 for months as backup, even an iPad 3 feels like a boost

September 29, 2017 | 07:32 AM - Posted by JohnGR

You should pay more attention to the single-core performance. The Intel chip can boost to 3.5GHz, and Geekbench's single-thread scores make the Apple chip look ridiculously good compared to the Intel chip. The other scores are also good, and I wouldn't say the title is headline grabbing. Mobile-type processors are used in desktop computers (the AM1 platform from AMD, or these Intel processors in mini PCs), and there are plenty of desktop processors that probably perform much worse than this mobile i5 chip (dual-core FM2+ parts or Celerons).

September 29, 2017 | 11:33 AM - Posted by UnknownsForLayoutsPresently (not verified)

Since the Apple A7 Cyclone cores, Apple's custom micro-arch CPU designs running the ARMv8-A ISA have been twice as wide (in superscalar terms) as any of the ARM Holdings reference CPU core designs, or the other custom ARM micro-archs engineered to run the ARMv8-A ISA, with the exception of Nvidia's Denver custom ARMv8-A core designs. And the Denver designs use a bit more power than Apple's designs, and Denver never made it into any smartphone form factors.

The Apple A7/Cyclone core:

CPU Codename----------------Cyclone,
ARM ISA---------------------ARMv8-A(32/64),
Issue Width-----------------6 micro-ops,
Reorder Buffer Size---------192 micro-ops,
Branch Mispredict Penalty---16 cycles (14 – 19),
Integer ALUs----------------4,
Load/Store Units------------2,
Load Latency----------------4 Cycles,
Branch Units----------------2,
Indirect Branch Units-------1,
FP/NEON ALUs----------------3,
L1 Cache--------------------64KB I$ + 64KB D$,
L2 Cache--------------------1MB,
L3 Cache--------------------4MB,

So take this A11 design seriously (it's beyond the A7/Cyclone) and compare the A7's high-power cores to Intel's Haswell design with its 192 micro-op reorder buffer, and compare the numbers of int units and FP units, L/S units, caches, etc. The base CPU design for the Apple A7 is nearer to a desktop chip than to a mobile chip, and the newer Apple A series CPU designs have improved upon the A7's design. Apple did well for itself when it purchased P.A. Semi and all that CPU IP and those P.A. Semi engineers.

September 29, 2017 | 02:22 PM - Posted by jef (not verified)

I would agree that the desktop i5 will have much better performance than the mobile version of the i5. Unfortunately we do not have an externally cooled A11 to compare with here, so it is fair to compare mobile chips to mobile chips. We do not see power requirements here either; those could push us toward a much better comparison as well.
I would really like to see a power/performance comparison across all of the CPUs being tested here. Not an easy task, because for the phones you would be measuring complete phone power usage, and for the laptops the complete laptop power requirement.
Nothing is perfect.

September 29, 2017 | 06:42 AM - Posted by Martin (not verified)

Geekbench has its own problems. It does not compare well across architectures and, even more problematically, across different operating systems.

Also, the chip itself.
The A11 has 4.1 billion transistors on 87.66 mm2 at a 10nm process. I wonder what might be of similar size? Or what would the competition look like in terms of die size or transistor count?

Let's check wikipedia:
- 15-core Xeon Ivy Bridge-EX has 4.3 billion transistors on old process and huge size.
- Dual-core + GPU Iris Core i7 Broadwell-U has 1.9 billion transistors on 133 mm2 at 14nm process.
- Quad-core + GPU GT2 Core i7 Skylake K has 1.75 billion on 122mm2 at 14nm.

The A11 sounds pretty damn expensive. I guess there is a good reason for iPhone prices after all :D
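The die-size figures quoted above reduce to a rough transistor-density comparison. A back-of-the-envelope Python sketch, using only the numbers given in this thread (densities across different process nodes and chip types are indicative at best, since SoC and CPU dies mix logic, SRAM, and GPU in very different ratios):

```python
# (transistor count, die area in mm^2), as quoted above
chips = {
    "Apple A11 (10nm)": (4.1e9, 87.66),
    "Core i7 Broadwell-U (14nm)": (1.9e9, 133.0),
    "Core i7 Skylake K (14nm)": (1.75e9, 122.0),
}

for name, (transistors, area) in chips.items():
    density = transistors / area / 1e6  # millions of transistors per mm^2
    print(f"{name}: {density:.1f}M transistors/mm^2")
```

The A11 works out to roughly three times the transistor density of the 14nm Intel parts listed, consistent with a full node shrink plus a denser SoC layout.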

September 29, 2017 | 07:31 AM - Posted by Anonymous2 (not verified)

There's a lot more on the A11 than CPU cores and a GPU; it's called a system-on-chip for a reason. You can't really compare its die size to CPUs'.

September 29, 2017 | 07:49 AM - Posted by ppi (not verified)

Geekbench 4 (unlike version 3) uses the same workload across platforms, so one of the biggest criticisms of v3 has been addressed.

On the other hand, at least at the level of a desktop CPU, this is more a benchmark of frequency ramp-up from sleep. There's a two-second pause between each test (to prevent overheating), and the tests themselves hardly last over half a second. Nothing will ever report 100% CPU utilization when running GB4. When I last ran it, the OS decided it was a good time to run various background tasks like an AV scan or OS update...

So we're still in a world where this is a mobile-only test. Desktops (and notebooks) are generally snappy enough that what matters are the tasks you have to wait for. And that is something GB4 does not test.

The worst part is that, despite all the bickering about Geekbench, nobody has made a better fully cross-platform CPU-centric benchmark to date.

It would be very nice to see results of individual tests (not just integer, floating, but individual sub-tests).

September 29, 2017 | 01:51 PM - Posted by John Poole (not verified)

Geekbench founder here. In our testing the workload gap doesn't negatively affect desktop performance. You can disable the pause between workloads by using the --workload-gap switch (e.g., geekbench4 --workload-gap 0). On my 4C SKL system single-core scores were ~4% higher with the gap than without.

September 29, 2017 | 08:23 AM - Posted by Anonymous2 (not verified)

"Fastest mobile processor in the world", but you compared it against a low-mid range mobile Intel SKU, not a mobile i7...

September 29, 2017 | 10:12 AM - Posted by Sebastian Peak

Mobile = smartphone in this context. I can edit to clarify. Obviously there are far faster mobile parts on the x86 side, an easy example being the Intel Core i7-6700HQ found in some high end laptops.

September 29, 2017 | 02:24 PM - Posted by Anonymous2 (not verified)

Thanks for clarifying.

dat gpu tho...

Looking forward to Raven Ridge.

September 29, 2017 | 09:54 PM - Posted by CleanRoomDesignsNotSafeFromPatentsGranted (not verified)

Only gaming laptops mostly now, as the business laptops have been gimped with Ultrabook U/other series dual-core Intel i7/i5 SKUs, as opposed to when business-class laptops were readily available as HP ProBooks/other makers' business laptops with those quad-core HQ/QM i7 variants.

So much for the road-warrior class of business laptop as a desktop replacement: the 35-watt quad-core laptop SKUs started to disappear after Ivy Bridge was replaced by Haswell/Broadwell and then the many Intel "Lake" micro-arch based CPUs that are mostly for low-power laptops.

Intel appears to be bringing back its quad-core CPU offerings, but at 15 watts under the U/other series branding, running at much lower clocks to make that 15-watt limit.

I'm really hoping that AMD will at least have one quad-core Raven Ridge variant with a 35+ watt thermal rating so that Vega integrated graphics has some thermal breathing room, and I'd love for there to be HP ProBooks using APU parts with at least 35 watts of cooling solution provided, so the laptop will not throttle down excessively under load.

September 30, 2017 | 07:40 AM - Posted by Anonymous2 (not verified)

Vega isn't thermally constrained, it's voltage/power. Vega runs very cool and quiet under normal circumstances.

September 30, 2017 | 11:13 AM - Posted by DaftnessInAnonymous2KnowsNoBounds (not verified)

What the hell are you talking about? Go back to the desktop PC discussion, because the post you replied to is talking about laptops! And some (thin-and-light/ultrabook) laptops are restricted to 15 watts in those much-too-thin, gimped form factors.

Any graphics is going to be thermally constrained, and damn it, GPUs are needed for other things besides gaming, even in business laptops. Any APU from AMD and SoC from Intel will be constrained at 15 watts in those thin-and-light/ultrabook abominations.

September 29, 2017 | 10:54 AM - Posted by Old_and_Grumpy (not verified)

Why do PC reviews write that the OS is optimized for the hardware as if it were something bad? Why aren't all OSes like that? Benchmarks come in two main forms: synthetic and based on real-life apps. If those apps run better because of the OS, why is that negative? It shouldn't even be mentioned.

Most PC clickers don't understand this: x86 has never been the fastest. Just cheap, and it had Windows. Even mainstream benchmarks like SPEC are based on SPARC CPUs: "how many times faster" than the original SPARC CPU.

There is a huge thing with the A11: the removal of 32-bit. This is something x86 can't do today, since it's not real 64-bit but extensions of 32-bit. Over two years ago Apple stopped accepting 32-bit apps, so this has been a long process. With the removal of 32-bit in the A11, Apple added two more cores and got the power envelope to run all cores at the same time ("turbo"), while the A10 could only run two cores at a time - the A10's CPU scheduler could not even see more than two cores at once. Even with the extra cores and the other ASICs Apple has put in: removal of 32-bit + going to 10nm from 14/16nm = a 30% smaller chip.

Android will never catch up until they remove 32-bit. Today 32-bit on Android is like the CISC tax on x86. Intel could never compete with RISC on low power because the same performance on CISC needs 30% more die area. This has been Intel's secret all these years and why they spend 10 billion dollars each year on fabs: for decades they could eat the CISC tax by being two nodes ahead of the open foundries. Thanks to Apple/Sammy, that time is past. (The rapid node shrinks started with the A5 and Apple prepaying for Sammy's Texas fab. Since Sammy/Apple compete and sell large amounts of hardware, we have seen a new node almost every year since 2011. It actually saved AMD: Ryzen isn't amazing. Removing a GPU that takes 50% of die area, adding cores, and going from 28nm to 14nm FinFET should, just by following basic design rules, give a 50% increase in performance.)

The A10 was larger than Intel's high-end 6700K/7700K. It costs more to manufacture an A10 than a 4-core Intel. No wonder the performance per MHz is higher than Intel's. Billions of transistors - these are huge mobile chips. The A11 at 2.5GHz is comparable with a 3.5GHz Intel Xeon per MHz. It's just a fact.

So why don't we use ARM in mainstream PCs? Well, customers are prepared to pay for Intel. Look at all the silly BOMs of the iPhone 7/8 where they pin the CPU cost at 25 dollars, and all the PC/Fandroid crowd screams about how Apple is evil. Why is it normal to pay Intel 300 dollars plus the motherboard tax?

The other reason is that companies use profit margin as a goal. Parts that cost 500 dollars with a 20% profit margin are more fun than parts that cost 50 dollars with a 40% profit margin.

IT is in a strange place. Today's ASP on a PC is 400 dollars. Microsoft doesn't care about the price since their OS costs the same. Among PCs costing over 1000 dollars, Apple has a 90% market share. They like expensive PCs since each percentage point of margin is worth more. Therefore they have no incentive to make cheaper products with an A-series SoC.

In the end it is we consumers who are manhandled by this oligopoly.

Microsoft doesn't care about making Windows great. It has never been good and still has 95% market share. For decades their business model was making bad products to force paid upgrades. This works since we customers have no choice in most cases.

Just accept that RISC is faster, like it always has been. Don't make stupid excuses about an optimized OS or, like others, the "power envelope" (which is blatantly wrong for Apple's SoCs, but right for Qualcomm).

Remember that Windows is unique among all 100 different computer platforms. It's the only platform where 64-bit is slower than 32-bit (look at Tom's Hardware benches). The same program is 3% slower in 64-bit - which is why the 64-bit transition was made almost two decades later than on real computers, and over a decade after the Mac. In the real world with real 64-bit CPUs you get 30-40% more performance on the same code when comparing 32-bit and 64-bit compiles. Just look at the tests that were done when the Apple A7 went 64-bit. Same code: 30-40% faster in 64-bit.

Apple has announced that macOS will not run 32-bit programs in 2019. This is a hint of something huge. Either Apple moves to its A-series SoCs or, more likely, they will use the rumored new uarch from Intel that is not x86 compatible. By removing x86 compatibility, they can remove 32-bit from these Intel chips and make a real 64-bit chip that Apple, and hopefully Windows, will use later. Killing CISC and 32-bit would be a huge step forward. (And while dreaming: why can't Windows do the same as Apple - use a Unix kernel, get working computers, and put the great Windows GUI on top? They used the NT kernel to lock in customers. The era of competition is over. Linux will never beat Windows. Imagine zero viruses. (Yes - still 57 years of Unix and zero viruses, since YOU DON'T GIVE A VIRUS root, unlike Windows/Admin.))

September 29, 2017 | 12:19 PM - Posted by UnknownsForLayoutsPresently (not verified)

The x86 ISA is one butt-ugly CISC ISA that must support so much legacy software, which Apple does not have to support on its iOS devices, at least for the most part. So Apple controls the OS/hardware/API and application ecosystem on its iOS devices, and to a lesser degree its macOS/OSX devices, where some folks purchase Apple laptops/PCs and run Windows or Linux OS builds. Any iOS based devices can quickly get beyond any dependencies on legacy 32-bit code, so Apple's A11 hardware can be even more optimized for future uses based on the most up-to-date software/OS/hardware.

The x86 ISA only received its market share thanks to IBM taking that x86 8/16-bit ISA and using it in IBM's first PC, which put the PC name on the market. So with the x86 ISA over the years, it's more dependency than anything else that has kept the x86 8/16 and 32/64-bit ISA needed for so many years.

But even iOS 11 on the earlier tablet/phone hardware is not going to be as feature complete, because of the A11's new hardware functionality that did not exist before the A11. So there will still be some drawbacks for Apple's users in that regard, just as for every other device, mobile phone/tablet or PC/laptop based.

Hell, Apple did not even have iOS 11 installed on any of its in-store display devices the day of iOS 11's release, and those Apple Store employees are such drones that they are not allowed to answer the simplest question without pre-approved, restricted information from closed-mouthed Apple management. Hell, Apple does not even let its engineers present at the Hot Chips symposium now.

Apple is the North Korea of the technology industry when it comes to engineers speaking/presenting about Apple's CPU/GPU designs, with only Apple's marketing wonks speaking their marketing double-speak geared to the lowest common denominator.

September 29, 2017 | 12:48 PM - Posted by James

None of the modern ISAs in use are RISC now. It is a meaningless distinction. They are all huge instruction sets with all manner of very specialized instructions. I wouldn't even say that AMD64 is CISC either. While a lot of the old instructions may still be present, modern compilers will not issue many of them because they were too slow; they end up as just some microcode for backward compatibility. In some respects CISC-type instructions can be better: they represent a form of instruction compression. While some operations are broken into multiple micro-ops, other operations can fuse multiple operations into a single micro-op. Cache resources are of huge importance in modern processors, so saving some space in the L1 instruction cache can be an advantage. Most modern processors don't actually manage to fully utilize their instruction execution resources due to memory limitations; SMT is a way to try to make use of those resources.

Also, if most other things are kept constant, I would expect a performance drop from going 64-bit. It takes more space in the instruction cache with 64-bit instructions, and more space in the data cache with pointers all being 64-bit. Some architectures get around the instruction size by leaving instructions at 32-bit width and only increasing the pointer/int size to 64-bit. There is no inherent increase in performance just from going 64-bit; other things are figuring into it.
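The data-cache effect described above can be illustrated with quick arithmetic. A hedged Python sketch: the 4-byte vs 8-byte pointer widths, the 64-byte cache line, and the toy linked-list node are typical illustrative assumptions, not measurements of any particular CPU:

```python
CACHE_LINE = 64                  # bytes per cache line, typical for x86 and ARM
POINTER_32, POINTER_64 = 4, 8    # pointer widths in bytes

def node_size(ptr_width, payload=4):
    """Size of a linked-list node holding a 4-byte int plus a next
    pointer, rounded up to the pointer's natural alignment."""
    raw = payload + ptr_width
    return (raw + ptr_width - 1) // ptr_width * ptr_width

def nodes_per_line(ptr_width):
    return CACHE_LINE // node_size(ptr_width)

print(nodes_per_line(POINTER_32))  # 8 nodes per cache line in a 32-bit build
print(nodes_per_line(POINTER_64))  # 4 nodes per cache line in a 64-bit build
```

With alignment padding included, the 64-bit build fits only half as many of these nodes per cache line, so a pointer-chasing workload touches twice as many lines for the same data.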

September 29, 2017 | 04:19 PM - Posted by CISCdesignsTakeMoreTransistorsToImplementThanRISCdesigns (not verified)

I think that you do not know what you are talking about when you bring up that CISC front-end to "RISC" (micro-code) back-end attempt at comparison when discussing RISC versus CISC ISAs! It still takes more transistors to implement a CISC ISA versus a RISC ISA, and many of the RISC ISA assembly opcodes translate into a single micro-op for the majority of RISC ISA based instructions used in the RISC-based designs (ARM, MIPS, RISC-V - Nvidia uses a RISC-V based controller: Falcon = FAst Logic CONtroller) currently in use. RISC ISA based CPU core designs can also be scaled up well compared to CISC ISA core designs, which cannot be scaled down as far as the RISC ISA based designs can, if you take into account IBM's Power7, Power8, Power9 RISC ISA based server/HPC processor designs.

So with CISC ISAs comprising many more instructions that have to be implemented by even more micro-code/micro-op logic/transistors, and with the hardware (decoder logic, other logic) needed to handle all those micro-code instructions, CISC designs will still use more power and take up more die area to implement than any RISC ISA based CPU core designs.

The decoders for a CISC ISA CPU have to be larger, the micro-code ROMs have to be larger, and the overall higher transistor counts on CISC ISA CPU designs are going to use more power, unless Intel/AMD/other CISC ISA based CPU makers have some sort of technology that defies the laws of physics.

ARM/RISC ISA running cores can be smaller, and with the majority of the ARM ISA instructions decoding into single micro-ops, the instruction decoders will be less complex. There is one main reason that Intel's x86 based ISA designs failed to take the mobile and embedded controller/"IOT" markets, and the CISC ISA is one of the main reasons that the ARM/RISC based designs dominate the phone/tablet/embedded controller/IOT markets.

You can scale RISC ISA running core designs up into some very wide order superscalar cores like the Power8/Power9 RISC ISA based designs. The Power8 has 16 execution ports/pipelines and 8 instruction decoders, able to decode 8 and issue 10 instructions per clock cycle per core, with 8 threads per SMT8 core and 12 cores per die. That Power RISC ISA is the reason IBM could cram so many execution resources onto one Power8 CPU core. Any CISC/x86 design will have to make room for all the legacy instructions and transistors and the extra CISC ISA decoder/other unit space required to implement/service that x86 CISC ISA.

If you look at Apple's A7 and newer designs: with the A11, Apple fits 2 fat high-power cores and 4 smaller low-power cores, plus the associated Apple-designed GPU cores, on the Bionic die, as well as some other new IP functional blocks.

The CISC x86 ISA has too much dependency on legacy instructions that must be there for legacy software/OS support, and both Intel and AMD will have too much trouble trying to shoehorn x86 down into the phone form factor with the low power use/space savings necessary for any phone SKUs.

But at least AMD does have its Keller team's K12 custom ARMv8-A micro-arch to rely on if they decide to compete at a later time. AMD also already has its ARM ISA server software/ecosystem ducks in order with its current Seattle (ARM Holdings reference A57 core based) Opteron A1100 designs, which came to market beginning in January 2016. So AMD has some server customers and ecosystem partners that will want the K12 custom design to replace the Opteron A1100 in 2018 or later, and AMD has commitments to support its Opteron A1100 customers for 5-10 years.

With all this talk about Zen and x86, there is still a growing ARM server market in its nascent stages, and even Microsoft is beginning to try to make that work on ARM with some x86 to ARM ISA translation layers. So there will be more ARM RISC server designs incoming, maybe even an AMD K12 Epyc-branded custom ARM core server line using that K12 custom ARMv8-A micro-arch. AMD did push K12's release back to 2018, but who knows until 2018 arrives and AMD is forced by SEC 10-K requirements to update K12's status.

October 1, 2017 | 02:36 AM - Posted by James

Ah. A reading comprehension failure plus a personal attack. That would help me evaluate the strength of your point, if only I knew what the point of your long, rambling, off-topic posts actually is. If you want to have a discussion on computer architecture, then I would suggest that you go back to school or read some more books. My computer architecture class was a long time ago, but I still have the book. It is "Computer Organization & Design: The Hardware / Software Interface" by Patterson and Hennessy. It is a textbook, so it isn't exactly easy reading, but I remember it being quite good. The old version I have mostly used the MIPS architecture for examples, but the concepts are the same.

Good luck with whatever it is you are trying to do here.

October 1, 2017 | 11:54 AM - Posted by UglyCISCaButtUglyX86Monster (not verified)

That still does not preclude the fact that CISC ISAs take more transistors and die space to implement than RISC ISAs, and bringing up that "RISC"/micro-code back-end argument does not make the facts of the matter any different.

CISC ISAs are not used in the majority of mobile phone/tablet devices for some reasons, and those reasons are that CISC ISA based CPUs use more power and take up more die space to implement than the RISC based designs. And with the RISC ISA based designs fabbed at 28nm, the CISC designs were still having no success at competing with the RISC designs on power usage/lower thermals, even with the CISC designs fabbed at 14nm. The RISC ISA based designs can scale all the way down to the smallest MIPS/ARM based controllers in IOT devices and all the way up to the Power8/Power9 devices that can outperform the x86 designs in the server room.

The x86 ISA only had power over the markets because the software/OS ecosystems were so dependent on that rather ugly x86 ISA, and that was mostly because of an accidental selection process undertaken by IBM way back when IBM invented its Personal Computer (PC) design that gave birth to the PC market - ditto for how the PC/OS market became stuck with Windows for so many years. And now even Microsoft is trying to hedge its bets and get some ARM/RISC compatibility layers and its OS/API variants running on ARM/RISC based devices.

Keep your eyes out for RISC-V/RISC ISA based devices as the various OS/API SDKs and compilers add support for the RISC-V open ISA; even Nvidia is using RISC-V in their Falcon (FAst Logic CONtroller) controller IP.

That x86 is one butt-ugly/fugly ISA, and it's nowhere to be found in most of the phone/tablet SKUs on the market today; the Apple A series and hopefully the K12 designs from AMD will make x86 an even less appealing choice in the future. And that's because even the ARM/MIPS/RISC-V/other RISC based ISAs can have custom micro-archs beefed up in a similar way to the Power8/Power9 RISC ISA based micro-archs, and give that ugly x86 Frankenstein monster some even more powerful competition.

AMD better not think that x86-only is the way to go, or AMD will be stuck in a market that is shrinking. So AMD needs to continue to support its Opteron A1100 ARM server customers and replace the A1100 (based on the ARM Holdings reference A57 core design) with some custom ARM/K12 replacements, because that ARM server market is going to continue to grow, and the x86-only market is getting smaller every day with the competition from others' custom ARM designs and from IBM/third-party Power8/Power9 licensees - Google is a Power9 licensee.

It's no longer a one-ISA world, and most of the world's devices (phones/tablets/toasters) run on RISC ISA based processors.

September 29, 2017 | 10:57 AM - Posted by Odizzido2 (not verified)

Synthetic.... How uninteresting. I am hoping to see some real testing for the full review.

September 29, 2017 | 12:02 PM - Posted by Raiden (not verified)

Way to be original apple, what are the next cores going to be called, Sundowner and Jetstream?

September 29, 2017 | 01:06 PM - Posted by remc86007

With Windows on ARM coming soon, is it possible Intel will create hybrid x86/ARM processors, with a couple of high-powered ARM cores for tasks like web rendering that they are good at, and a couple of x86 cores for backwards compatibility and more complicated tasks?

September 29, 2017 | 10:55 PM - Posted by Photonboy

Windows 10 on ARM works by running x86 emulation software on an ARM CPU.

Hybrid x86/ARM doesn't make sense, since your efficiency just goes out the window. The SOLE POINT of having ARM is efficiency, so adding on some x86 CPUs makes no sense.

So anything running under x86 emulation should be a relatively low-demand application, given the emulation loss; frankly, many programs would work just fine that way, and not having to REWRITE code from x86 to ARM is a good thing for compatibility.

LACK OF DRIVERS, though, is going to be a problem.
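The emulation-loss point above can be illustrated with a toy sketch. This is only an illustration of the principle: Windows 10 on ARM actually uses dynamic binary translation with caching, not a simple interpreter like the hypothetical one below. The idea is that a guest program decoded and dispatched one instruction at a time pays a per-instruction overhead that natively compiled code does not.

```python
# Toy illustration of emulation overhead (hypothetical mini-ISA, not real x86).
# Both functions compute the sum 0 + 1 + ... + (n-1); the "emulated" version
# pays a decode/dispatch cost on every guest instruction.

def run_native(n):
    # "Native" path: the loop runs directly as host operations.
    total = 0
    for i in range(n):
        total += i
    return total

def run_emulated(n):
    # "Emulated" path: the same loop expressed as guest instructions
    # that an interpreter loop must decode one at a time.
    regs = {"r0": 0, "r1": 0}       # guest registers
    program = [
        ("add", "r0", "r1"),        # r0 += r1
        ("inc", "r1"),              # r1 += 1
        ("blt", "r1", n, 0),        # if r1 < n: jump to instruction 0
    ]
    pc = 0
    while pc < len(program):
        op = program[pc]            # fetch + decode overhead on every step
        if op[0] == "add":
            regs[op[1]] += regs[op[2]]
            pc += 1
        elif op[0] == "inc":
            regs[op[1]] += 1
            pc += 1
        elif op[0] == "blt":
            pc = op[3] if regs[op[1]] < op[2] else pc + 1
    return regs["r0"]
```

Both paths produce the same answer, but the emulated one executes several interpreter steps per unit of useful work, which is why emulated x86 code on an ARM host is best suited to undemanding applications.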

September 29, 2017 | 03:04 PM - Posted by agello24 (not verified)

So how is the battery life? No one uses an iPhone like they do a laptop or PC. You check your email, text, browse apps and news, and talk on the phone. Explain why you need all that power to do just that?

September 29, 2017 | 04:12 PM - Posted by Mememememe (not verified)

Please run SPECint (you'll need to compile it for the A11 using LLVM) so we can have a better idea of the low-level CPU characteristics.
GB is kinda awful.

September 29, 2017 | 04:32 PM - Posted by LDM (not verified)

I am not sure if you have noticed, but this is not a new GPU. It is a version of PowerVR: it exposes the OpenGL PowerVR extensions and it has a TBDR architecture.
Now, I don't know what sort of agreement Apple has with IMG, but Apple hasn't reinvented any wheel with this solution. And that is a fact.

September 29, 2017 | 09:32 PM - Posted by CleanRoomDesignsNotSafeFromPatentsGranted (not verified)

Maybe some of Imagination Technologies' patents have expired, or Apple has the same sorts of IP licensed from AMD, Nvidia, or others.

It is hard for anyone to design even a clean-room GPU without stepping on some GPU IP that AMD, Nvidia, or Imagination Technologies holds rights to in the form of patents granted by the USPTO/other patent offices.

So Apple has the money to make its in-house GPU designs legal, even if Apple has to license the rights. I'm sure that somewhere A11s are being examined under electron microscopes by those ChipWorks types of consultants that everybody in the industry contracts with to make sure others are not using any patented IP without paying. Even Intel has to license IP for its in-house GPU designs if they make use of unified shaders, because both AMD and Nvidia have patents granted for that.

The one reason both Nvidia and AMD are hesitant to take each other to court is that every time a patent is taken to court, that patent is at its greatest risk of being declared invalid. That's what happened to some of Nvidia's overly broad patent IP when Nvidia took Samsung and Qualcomm to court: after one Nvidia patent was invalidated by that court, Nvidia had to withdraw some of its other patent IP from consideration in the lawsuit.

So the USPTO can and does grant patents without doing the proper amount of vetting for prior art, and corporate patent attorneys can and do scour the patent filings, with the help of specialist engineers, looking for prior art so companies can attempt to work around patents without paying, with the knowledge that some patents, if taken to court, will more than likely be invalidated by that court.

So Apple has the money to hire the best patent lawyers and patent engineers/consultants to look at what the USPTO/other patent offices have on file; maybe some of Imagination Technologies' patents have expired, and the others Apple can license elsewhere.

September 30, 2017 | 04:22 PM - Posted by LDM (not verified)

It's not a matter of whether Apple can do it or not. They can, in the long run.
The layout of this chip has been analysed, and it is a 6-core GPU just like the previous one, the same design, just smaller.
It is Metal compatible, it has the IMG PVRTC extension, and it is a TBDR, just like the previous one.
This "in-house GPU" is more of another house's than Apple's.
Anyway, the point is that Apple still has to rely on the PowerVR design to make a fast GPU.

September 30, 2017 | 08:32 PM - Posted by DaftyDaftOnThatOne (not verified)

Really? And other than the use of 6 GPU "cores", what proof do you have? Just having 6 of something proves nothing about the design of those 6 GPU "cores". Nvidia is using tile-based rasterization also; look at David Kanter's article(1). And just maybe the 1990s PowerVR patents have expired, so Apple is free from some restrictions in dropping Imagination Technologies and going with an Apple in-house design.

Those API claims are even more LOL, as that's what APIs (Metal, Vulkan, etc.) are designed to offer, and sometimes a graphics API can implement a feature in software with an emulation layer, at the cost of performance; that can be done across any GPU hardware. Just look at Nvidia's software scheduler compared to AMD's hardware scheduler(s) in FX12/Vulkan, and AMD's recent Vega performance advantage in Forza 7.

Apple's in-house designs are going to be similar to other GPU designs because of the very nature of GPUs, so unless you have proof, you cannot really know.

And both Nvidia and AMD have similar patents for similar IP that Apple can license once those PowerVR patents expire! Apple never really used a fully reference PowerVR design in its later A series processors; Apple would heavily customize the reference PowerVR IP. A lot of the patents around GPUs are going to start to expire, as GPUs have now been around long enough for that process to begin.

Your 6 "core" argument is a rather daft one without any real references to back up your claim; ditto for any API support, which is oftentimes emulated in software for API features not implemented in a GPU's hardware.

"Tile-based rasterization is nothing new in graphics. The PowerVR architecture has used tile-based deferred rendering since the 1990’s, and mobile GPUs from ARM and Qualcomm also use various forms of tiling. However, tiling has repeatedly failed on the desktop. In the 1990’s, Gigapixel developed the GP-1 tiled-rendering GPU, before the company was acquired by 3dfx (in turn acquired by Nvidia). The PowerVR-based Kyro desktop GPU was released in 2001, but STMicro cancelled the product line. Microsoft also investigated tiling for Talisman, an entirely new graphics pipeline (including APIs and hardware) that was ultimately shelved." (1)


(1) "Tile-based Rasterization in Nvidia GPUs"

September 30, 2017 | 08:34 PM - Posted by DaftyDaftOnThatOne (not verified)

Edit: FX12/Vulkan
To: DX12/Vulkan

October 1, 2017 | 03:55 PM - Posted by LDM (not verified)

What proof do I have?

How about this one?

Or, if you prefer the Portuguese language, there you go:

Now common sense suggests that since all models prior to the A11 were PowerVR/TBDR, Apple has to provide a solution 100% compatible with the previous ones. We can't escape from that.
You also seem to not understand that if you enumerate the extensions, PVRTC will appear... I wonder why.
The reason is that whatever you like to call this GPU, it is still a PowerVR solution, simple as that.
Don't worry, Apple won't be sued; I presume they have paid IMG a lot to keep them quiet.

October 10, 2017 | 05:44 PM - Posted by Verne Arase (not verified)

I think Apple's already stated it's a three core GPU.

September 29, 2017 | 11:02 PM - Posted by MacRumorsOfARMedMacs (not verified)

More of those rumors from the rumor mills(1) surrounding Apple!

I'd rather see Apple contract with AMD's semi-custom division for a custom x86 Zen/Vega APU with a little HBM2 on the side for Apple's MacBooks, as that would still allow those MacBook users to run Windows/legacy software. I'm sure that AMD would be able to create some very nice custom APU designs with that space/power-saving HBM2 and a custom x86 ISA based APU that Apple could have AMD tailor to Apple's exact needs.

Apple will need at least a custom ARM core micro-arch design with SMT capability for laptop usage, to get the wasted CPU cycles on any non-SMT cores better utilized. So Apple has some years of engineering/validation ahead before it can make a clean break from x86-based designs in laptops. Apple's customers that develop for other OS/ecosystems still net Apple enough business for Apple to maybe delay a complete departure from the x86 ISA across its entire line of laptop offerings.


(1) "Apple Interested in Developing ARM-Based Mac Processors and iPhone Modems in House"

October 2, 2017 | 02:55 PM - Posted by Anonymous3358 (not verified)

Can you compare the A11 Bionic to the best other SoC from Apple, not the A10 but the A10X, in all the same benchmarks?

October 3, 2017 | 10:08 AM - Posted by Zegra (not verified)

@Sebastian Peak

No need to go to a 15 W CPU. You should compare it to the i7-7Y75 in the MacBook 10,1.

December 26, 2017 | 10:12 PM - Posted by Samuel Martinez (not verified)

It's funny to see posts where people hype the Apple A11 Bionic when you are comparing two GPUs through different APIs that work in different ways. In the benchmark, like you said, one uses OpenGL ES 3.1 and the other uses Metal. Metal is not only a low-level API, it is a multithreaded one, so you have all 6 cores of the A11 working; it's different with OpenGL ES, where one core carries all the load, plus it is not a low-level API. And as you can see in Sling Shot Extreme, the Adreno 540 is more powerful than the 3- or 6-core GPU in the A11. Soon we are going to have Sling Shot Extreme running on Vulkan.
