Author: Tronserve admin
Thursday 29th July 2021 10:50 AM
Shhh…Apple Very Quietly Acquires Edge-Focused AI Startup For $200 Million
Apple has somehow managed to stay out of the headlines in recent weeks with its acquisition of Seattle-based edge-AI company Xnor.ai for a reported $200 million. It’s an exciting development for the Apple fanbase, as it may mean we will soon see Xnor’s low-power AI algorithms for tasks like object detection in future iterations of the iPhone, iPad, and other devices.
For those unfamiliar with Xnor, the company is a spin-out from the Allen Institute for Artificial Intelligence. Prior to the deal with Apple, it had raised $14.6 million in funding over the three years it had been in operation. The company’s founders, Ali Farhadi and Mohammad Rastegari, are among the creators of YOLO, a popular neural network widely used for object detection.
EETimes editor Sally Ward-Foxton explains why Xnor was targeted by Apple:
Xnor’s solution for embedded processors is based on binarized neural networks (BNNs), which use binary values for activations and weights, instead of full precision values. This reduces model size and memory requirements. Xnor-net, the first binarized convolutional neural network, can detect objects in images using very little processing power while maintaining accuracy.
In other words, these models and techniques can be used for tasks like image processing on resource-constrained devices such as smartphones, security cameras, and other remote sensor nodes. It’s also beneficial from a privacy standpoint: all image data can be processed on the edge device itself rather than sent to the cloud, which avoids both high latency and the obvious privacy concerns of shipping raw footage off-device.
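To make the binarization idea concrete, here is a toy NumPy sketch (an illustration of the general BNN trick, not Xnor’s actual implementation): once weights and activations are mapped to {-1, +1}, a dot product reduces to counting agreements and disagreements, which hardware can do with cheap XNOR and popcount operations instead of multiplies.

```python
import numpy as np

def binarize(x):
    # Map real-valued weights/activations to {-1, +1} by sign.
    return np.where(x >= 0, 1, -1)

def binary_dot(a, b):
    # For vectors over {-1, +1}, the dot product equals
    # (number of matching positions) - (number of mismatches),
    # which maps to an XNOR followed by a popcount in hardware.
    matches = int(np.sum(a == b))
    mismatches = a.size - matches
    return matches - mismatches

# Toy example: the binarized dot product keeps the sign/trend of
# the full-precision one while using no multiplications at all.
w = np.array([0.9, -0.3, 0.4, -0.8])   # hypothetical weights
x = np.array([0.7, -0.2, 0.1, -0.5])   # hypothetical activations
wb, xb = binarize(w), binarize(x)
print(binary_dot(wb, xb))  # 4: all four binarized positions agree
```

Real BNNs such as XNOR-Net also carry per-layer scaling factors to recover accuracy, but the core saving is exactly this: 1-bit storage and bitwise arithmetic in place of 32-bit floats.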
A good case in point: one Xnor demo running on a Raspberry Pi Zero was capable of person detection at 8 frames per second. “That’s a 50-cent CPU, not normally considered a viable platform for edge inference,” said Xnor VP of Engineering Peter Zatloukal at the Embedded Vision Summit earlier this year.
What’s more, the company’s demos also included state-of-the-art person detection using deep learning on a $2 FPGA (Lattice ECP5). This demo ran person-detection inference at 32 frames per second while drawing just 48 mW (1.5 mJ per inference). The power requirements were so small that the demo was powered by ambient sunlight via a small solar harvester; it could run indefinitely with no external power input, Zatloukal explained.
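The quoted energy figure follows directly from the power draw and frame rate, as a quick sanity check shows:

```python
# Sanity-check the quoted FPGA demo numbers:
# energy per inference = power / inference rate.
power_w = 0.048   # 48 mW sustained draw
fps = 32          # inferences per second

energy_per_inference_mj = power_w / fps * 1000  # joules -> millijoules
print(energy_per_inference_mj)  # 1.5, matching the quoted 1.5 mJ
```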
Also worth pointing out: the company’s developer platform, AI2GO, comprises software development kits for embedded targets, along with pre-trained AI models optimized for those devices.