Apple’s A12X is similarly built on a 7-nanometer process, but it is physically larger than the A12. It packs 10 billion transistors into a seven-core GPU and an eight-core CPU. Single-core CPU performance is up to 35 percent faster than last year’s iPad Pro chip, and multicore performance is up to 90 percent faster. The GPU, meanwhile, is twice as fast, with improved tessellation and multilayer rendering performance. And there’s a new storage controller that can efficiently handle up to 1TB of storage.
All those innovations allow it to deliver up to 5 trillion operations per second and “all-day” battery life.
Apple says it delivers “Xbox One S-class” graphics performance in a package that is much smaller, and claims it’s faster than 92 percent of all portable PCs.
The A12X, like the A12, has Apple’s eight-core Neural Engine, which is designed for real-time machine learning tasks like recognizing faces.
The Neural Engine packs eight cores (up from two in the A11) and is capable of up to 5 trillion operations per second, compared with 500 billion for the last-gen Neural Engine. Also in tow is a smart compute system that automatically determines whether to run algorithms on the CPU, GPU, Neural Engine, or a combination of the three.
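Apple hasn’t published how this smart compute system decides where to run a given workload, but the idea — route each operation to the unit best suited to it — can be sketched in a few lines. Everything below (the `Op` type, `pick_unit`, and the size thresholds) is invented for illustration, not Apple’s actual heuristics:

```python
# Hypothetical sketch of compute-unit dispatch. The op kinds, the
# size threshold, and the routing rules are all illustrative guesses,
# not Apple's implementation.
from dataclasses import dataclass

@dataclass
class Op:
    kind: str   # e.g. "conv", "matmul", "branchy_control"
    size: int   # rough amount of work, in thousands of MACs

def pick_unit(op: Op) -> str:
    """Choose where to run an op: CPU, GPU, or Neural Engine (NE)."""
    if op.kind in ("conv", "matmul") and op.size >= 100:
        return "NE"    # large dense math suits the Neural Engine
    if op.kind in ("conv", "matmul"):
        return "GPU"   # smaller parallel work still beats the CPU
    return "CPU"       # control-heavy or irregular ops stay on the CPU

plan = [pick_unit(op) for op in
        [Op("conv", 500), Op("matmul", 20), Op("branchy_control", 1)]]
print(plan)  # ['NE', 'GPU', 'CPU']
```

The point of such a dispatcher is that a model’s layers rarely all favor one unit: big convolutions saturate the Neural Engine, while glue logic runs better on the CPU.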
Apps created with Core ML 2, Apple’s machine learning framework, can crunch numbers up to nine times faster on the A12X Bionic silicon with one-tenth of the power. Those apps launch up to 30 percent faster, too, thanks to algorithms that learn your usage habits over time.
Real-time machine learning-powered features enabled by the new hardware include Siri Shortcuts, which allows users to create and run app macros via custom Siri phrases; Memoji, customizable animated avatars designed to look like you; Face ID; and Apple’s augmented reality toolkit, ARKit 2.0.
Today’s news follows on the heels of Apple’s Core ML 2 announcement this summer.
Core ML 2 is 30 percent faster, Apple said at its Worldwide Developers Conference in June, thanks to a technique called batch prediction. Furthermore, Apple said the toolkit would let developers shrink the size of trained machine learning models by up to 75 percent through quantization.
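Quantization shrinks a model by storing weights at lower precision. The arithmetic behind the “up to 75 percent” figure is simple: replacing 4-byte float32 weights with 1-byte codes cuts storage by three-quarters. Here is a generic linear-quantization sketch (not Apple’s code or the Core ML API) showing the round trip and the size saving:

```python
# Generic 8-bit linear quantization sketch: map float weights onto the
# integer codes 0..255 with a scale and offset, then map them back.
# Illustrative only; this is not Core ML's implementation.

def quantize(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0
    codes = [round((w - lo) / scale) for w in weights]  # 8-bit codes 0..255
    return codes, scale, lo

def dequantize(codes, scale, lo):
    return [c * scale + lo for c in codes]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
codes, scale, lo = quantize(weights)
restored = dequantize(codes, scale, lo)

full_size = len(weights) * 4   # float32: 4 bytes per weight
quant_size = len(codes) * 1    # uint8: 1 byte per weight (scale/offset amortized)
print(1 - quant_size / full_size)                           # 0.75
print(max(abs(a - b) for a, b in zip(weights, restored)))   # small rounding error
```

The reconstruction error per weight is bounded by half the scale step, which is why quantized models usually lose little accuracy despite the large size reduction.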
Apple introduced Core ML in June 2017 alongside iOS 11. It lets developers run machine learning models on-device on an iPhone or iPad, and convert models from frameworks like XGBoost, Keras, LibSVM, scikit-learn, and Facebook’s Caffe and Caffe2. Core ML is designed to optimize models for power efficiency, and because inference happens entirely on-device, it doesn’t require an internet connection.