Last year, Qualcomm released a software tool for running machine learning algorithms on its hardware. The neural processing engine breaks algorithms into smaller pieces and then spreads them across the processor’s CPU, GPU and DSP cores to improve efficiency.
The neural processing engine is one way that the San Diego, California-based company is trying to push its existing hardware into the market for machine learning chips. On Wednesday, it released two new systems-on-chip for the Internet of Things that can execute tasks like image classification, facial recognition and object tracking without relying on the cloud.
The chips, QCS605 and QCS603, can be used in industrial and consumer security cameras, wearable cameras, virtual reality cameras and autonomous robots. With them, these devices can avoid the latency, security and bandwidth issues introduced by streaming data to corporate servers in the cloud, where most companies run machine learning algorithms.
Qualcomm’s Internet of Things strategy so far has been to shoehorn its Snapdragon chips for smartphones into car dashboard displays, wearables and surveillance cameras. Last year, the company reported that it was selling more than a million chips every day for these embedded applications, and it could package them with a broader range of chips if it succeeds in closing its $54.5 billion NXP Semiconductors deal.
Even though the new chips use the same Snapdragon building blocks, the hardware was assembled specifically for the Internet of Things, said Joseph Bousaba, vice president of Qualcomm’s smart home and consumer Internet of Things businesses, in an interview. The chips were manufactured using Samsung’s 10-nanometer technology, he said. Bousaba declined to discuss the prices of the chips.
The SoCs are based on a heterogeneous computing architecture, which brings together Qualcomm’s Kryo CPU, Adreno GPU and Hexagon DSP cores. They also support the neural processing engine, which is compatible with models created with TensorFlow, Caffe and other machine learning libraries as well as the Open Neural Network Exchange (ONNX) format.
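Qualcomm has not published the internals of its neural processing engine, but the idea of partitioning a network across heterogeneous cores can be illustrated with a toy sketch. Everything below is hypothetical, including the op-to-core mapping; it only shows the general pattern of routing each layer to the core type best suited to it.

```python
# Conceptual sketch (not Qualcomm's actual SDK): partition a network's
# layers across heterogeneous cores the way a neural processing engine
# might, routing each op to the core type it is assumed to run best on.

# Hypothetical mapping of op types to preferred cores.
PREFERRED_CORE = {
    "conv": "DSP",      # dense fixed-point math suits a DSP like Hexagon
    "pool": "DSP",
    "matmul": "GPU",    # large parallel multiplies suit a GPU like Adreno
    "softmax": "CPU",   # control-heavy ops fall back to the CPU
}

def partition(layers):
    """Assign each (name, op_type) layer to a core, defaulting to CPU."""
    return [(name, PREFERRED_CORE.get(op, "CPU")) for name, op in layers]

model = [("conv1", "conv"), ("pool1", "pool"),
         ("fc1", "matmul"), ("out", "softmax")]
print(partition(model))
```

A real engine would also weigh data-movement costs between cores, quantization support, and current load; the table lookup here stands in for that whole scheduling decision.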
“The combination of the Hexagon DSP and Adreno GPU are basically the A.I. engine behind the chips,” Bousaba told Electronic Design. “We have the flexibility and the programmability to run artificial intelligence in these chips versus the older ones we had.” The first commercial devices using the chips will be released by the end of the year, he said. Samples of the chips are currently available.
The artificial intelligence engine delivers 2.1 trillion operations per second at a single watt on inference algorithms, unfettering them from corporate servers in the cloud. By comparison, the Myriad X architecture designed by Intel provides a trillion operations per second, while Mobileye’s new generation of front-facing automotive camera chips handles 2.5 trillion operations per second.
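A quick back-of-the-envelope calculation puts those figures side by side. Note that only the Qualcomm number is quoted with a power figure, so a true per-watt comparison with the other chips is not possible from the article alone.

```python
# Throughput figures quoted above, in trillions of operations per second (TOPS).
qualcomm_tops, qualcomm_watts = 2.1, 1.0  # quoted as 2.1 TOPS at one watt
myriad_x_tops = 1.0                        # Intel Myriad X, raw throughput only
mobileye_tops = 2.5                        # Mobileye camera chip, raw throughput only

efficiency = qualcomm_tops / qualcomm_watts  # TOPS per watt
print(f"Qualcomm AI engine: {efficiency:.1f} TOPS/W")
print(f"Raw throughput vs Myriad X: {qualcomm_tops / myriad_x_tops:.1f}x")
print(f"Raw throughput vs Mobileye: {qualcomm_tops / mobileye_tops:.2f}x")
```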
As rivals like Intel and Mediatek, customers like Apple and Huawei, and startups like Mythic and Horizon Robotics pour hundreds of millions of dollars into custom chips for image recognition, Qualcomm is trying to wrench every shred of computing power out of its more conventional chips. Several years ago, the company stopped short of including an accelerator core, called a neural processing unit (NPU), in its Snapdragon line.
The QCS605 and QCS603 take advantage of an advanced image signal processor that can improve the low-light performance of security cameras as well as provide electronic image stabilization, staggered high dynamic range, and chromatic aberration correction. That way, the camera can feed clearer images of its surroundings to machine learning algorithms.
Bousaba said that Qualcomm had partnered with Sensetime, one of the world’s most valuable artificial intelligence startups, among others, to provide facial recognition and object tracking software that complements its new chips. It is not clear whether Qualcomm could provide similar software through its acquisition of the Amsterdam, Netherlands-based Scyfer last year.