I arose on Tuesday to a flood of messages from friends and media sources regarding a leaked WWDC announcement: Apple would be developing a line of computers using its own ARM chipsets rather than the Intel x86 chip architecture Apple has used since 2006. This is not breaking news; the idea has been swirling around the rumor mill since the first pronouncements around Apple’s Worldwide Developers Conference in 2012.
The bigger surprise to me was the response online: Apple’s stock price jumped overnight, setting a new record high in spite of the pandemic and other economic indicators. But I understand. Apple has had a tenuous relationship with Intel over the years, with multiple Apple product deadlines missed due to Intel chip delays; Intel has not met a single chip product’s projected delivery deadline in over five years.
I doubt the conversion will be easy this time either. Apple’s 2006 changeover from PowerPC chips to the Intel x86 architecture was painful for professional users, disrupting workflows and deliverables for nearly six years after the projected launch, an issue compounded by many users’ reluctance to accept change. The failures during the transition were mainly in the parts of the workflow that get the least public attention: the plugins and drivers took the longest to fully update, and many of the smaller developers went out of business rather than suffer the costs of converting their entire codebase to work on an entirely different platform. I expect the same will happen again.
Changing the chip architecture is not an easy thing. The processors we work with are generally either RISC (Reduced Instruction Set Computing) chips, usually destined for consumer-facing products like phones and tablets, or CISC (Complex Instruction Set Computing) processors, found in most workstation-level products. Think of the chip instructions being sent for processing as you would any language: the reduced RISC instructions are simplified for increased performance speed and significantly reduced power consumption, while the more complex CISC instructions allow for richer, more flexible operations at the cost of higher power consumption, more heat, and increased costs to produce and to program.
In a reduced instruction set, the designer strives to limit the options available for branched decision making. A RISC chip only understands “rain” as the soft summer rain of May, not the bone-chilling sideways blast of a nor’easter or the percussive deluge of a summer thunderstorm, nor can it discern the differences between freezing rain, sleet, and hail. The CISC architecture can understand all of those nuances and decide between a light mist, a spring rain, or a full-on thunderstorm; it just requires the programmer to include those options in the codebase.
Apple has been working on this for more than a decade. Shedding Intel to embrace its own “A” series chipsets has been in the works for a long time. If you have a MacBook Pro, MacBook Air, MacBook, iMac or Mac mini made since 2017, chances are you have at least one of Apple’s “T” series chips tucked away in your machine. The T1 and T2 series Apple chips handle some of the most secure transactions on your devices, namely the Touch ID fingerprint scans stored in the “Secure Enclave,” while at the same time managing the Touch Bar’s “Control Strip” as a dynamically changing input device. Indications are that the foundation in the T2 chip will also accelerate hardware-enabled encode and decode of audio and video, so that my future Mac might finally be able to play back the H.265 HEVC compression my iPhone 11 Pro records in without transcoding first.
Eight years after the first trial balloons, Apple may actually make the jump this time. Because of the foundations of iOS, Apple controls a huge supply chain and cash reserves that did not exist for the struggling company of 2006. Supplying its own chips for its higher-performance systems will cut costs, reducing the cost per chip by more than half by some industry estimates. An Apple-designed chipset would speed the connection between GPU, CPU, Secure Enclave, and the system architecture in a way that could take us all to the future.
The tech Apple is pushing has already proven itself in millions of devices. It will streamline access to and control over disparate services such as accelerated I/O, enhanced security, and hardware-encoded and -decoded video playback, albeit with a loss of external functionality for items such as camera codecs and a wide range of legacy devices used in media and entertainment. It will make us better and faster, just not immediately at launch.
Now, if we can just get through the Catalina transition…