Only two years ago, NXP Semiconductors seemed behind in the artificial intelligence race. After Qualcomm announced it would buy the company for over $40 billion, NXP’s chief executive Richard Clemmer admitted the company was not yet working on machine learning. Its chips also lacked the processing power to compete with Nvidia in supplying the brains of driverless car prototypes.
The Eindhoven, Netherlands-based company's strategy has changed as new applications for the technology emerge. Last month, it introduced a new software tool that lowers the bar for customers to integrate machine learning into consumer electronics, factory equipment and cars. The software improves how efficiently its embedded chips handle inference jobs.
This is only the first step, though. The company will integrate scalable artificial intelligence accelerators in its chips in 2019, said Gowri Chindalore, NXP’s head of technology and business strategy. NXP is currently weighing whether to build an accelerator from scratch or license another company’s cores to enter the market faster, he told Electronic Design. The deliberations have not been previously reported.
NXP is under pressure to show customers that embedded chips can handle machine learning tasks without being shackled to the cloud, where training and inference typically occur. The benefits include lower latency and tougher security—which are critical for applications like autonomous driving and industrial robots—as well as conserving power normally used to communicate with the cloud—critical for anything battery-powered.
The new tool, the latest in the company's EdgeScale platform, serves to compress machine learning models. The resulting inference engine can run inside the graphics processing units (GPUs) and digital-signal processors (DSPs) in its Cortex-A-based chips, which include the i.MX and Layerscape product lines. Using the software, customers can store algorithms trained with TensorFlow in the cloud and automatically deploy them to chips in the field.
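NXP has not published the internals of EdgeScale's compression step, but one common way to shrink a trained model for embedded inference is post-training weight quantization: storing float32 weights as int8 values plus a scale factor. A minimal, purely illustrative sketch of the idea:

```python
# Hypothetical sketch of symmetric int8 weight quantization, one common
# model-compression technique for embedded inference. This is illustrative
# only and does not represent EdgeScale's actual pipeline.

def quantize_int8(weights):
    """Map float weights onto int8 values plus a shared scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.003, 0.98]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# int8 storage needs 4x less memory than float32, at a small accuracy cost
```

The memory savings matter most on parts with tight on-chip SRAM budgets, which is exactly the constraint Rommel raises below for Cortex-M devices.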
“The platform is meant to help our customers learn about artificial intelligence,” said Martyn Humphries, NXP’s vice president of consumer and industrial i.MX processors. Each product has tradeoffs in terms of accuracy, power consumption and speed. The software spots the tradeoffs so that customers can choose the processor that works best for their application, Humphries said in an interview.
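The tradeoff-spotting Humphries describes amounts to a constrained selection problem: among the parts that fit a power and latency budget, pick the most accurate. A toy sketch of that logic, with made-up processor names and benchmark figures:

```python
# Illustrative only: how a tool might surface accuracy/power/speed tradeoffs
# so a customer can pick a processor. All figures below are invented.

benchmarks = {
    "Cortex-M class MCU": {"accuracy": 0.89, "power_mw": 150,  "latency_ms": 90},
    "Cortex-A class SoC": {"accuracy": 0.94, "power_mw": 2000, "latency_ms": 12},
    "SoC with GPU":       {"accuracy": 0.94, "power_mw": 6000, "latency_ms": 4},
}

def pick_processor(benchmarks, max_power_mw, max_latency_ms):
    """Return the most accurate part that fits the power and latency budget."""
    feasible = {name: b for name, b in benchmarks.items()
                if b["power_mw"] <= max_power_mw
                and b["latency_ms"] <= max_latency_ms}
    if not feasible:
        return None
    return max(feasible, key=lambda name: feasible[name]["accuracy"])

print(pick_processor(benchmarks, max_power_mw=3000, max_latency_ms=50))
# → Cortex-A class SoC
```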
“Simplicity and device-specific optimization will be critical for broader adoption in the embedded market given how fragmented the hardware industry is,” Chris Rommel, executive vice president of market research firm VDC Research, told Electronic Design. “And what is exciting to me is to think about how that could ultimately extend to their broader portfolio.”
Improving algorithm efficiency and increasing the memory budgeted to microcontrollers are other prerequisites before machine learning can be pushed to Cortex-M devices, Rommel added. That would require either lower memory prices or manufacturers increasing their bill-of-materials budgets. “All of this can and will happen with time, we just aren’t there yet,” he said.
Dozens of semiconductor startups are trying to muscle into Nvidia’s stronghold in data centers. But another battle is brewing over chips with the performance-per-watt needed for edge computing, where Nvidia has less command. NXP is fighting to stay ahead of new embedded rivals like Mythic, Syntiant and Intel’s Movidius unit, as well as established competitors like Infineon, Renesas and STMicroelectronics, by wringing as much computing power as possible from its chips.
“There are companies that want to provide this single chip and say that’s what you need for artificial intelligence. But they don’t know,” said Humphries, adding that the company is debating whether to sacrifice space inside its chips for neural network silicon, which would likely raise costs for customers. “We don’t put in what we don’t need,” said Humphries, and not every processor needs an accelerator inside.
NXP is assessing a number of neural network cores, including Arm’s Project Trillium as well as in-memory processors, but nothing has been determined yet. “The key factors for dedicated accelerators are cost-effective performance, power efficiency and scalability for evolving A.I. applications,” Chindalore said. He is focused on the “long-term viability” of the company’s machine learning strategy rather than winning short-term skirmishes.
Qualcomm’s acquisition could complicate the calculus. The deal has been tangled in recent trade negotiations between the Trump administration and China, and it is still not guaranteed to close. The San Diego, California-based company’s machine learning strategy has been almost as understated as NXP’s. Last year, after shelving a custom neural network core, Qualcomm started giving customers a new software tool that could cut neural networks into smaller parts that are then assigned to the heterogeneous cores inside its chips to boost efficiency.
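The general idea behind carving a network up for heterogeneous cores is to route each layer to the compute unit best suited to its operation type, then group consecutive same-unit layers to avoid shuttling data back and forth. A hedged sketch of that partitioning logic (the op-to-core mapping below is invented for illustration, not Qualcomm's actual tool):

```python
# Sketch of partitioning a network's layers across heterogeneous cores.
# The preferred-unit table is a made-up heuristic for illustration.

PREFERRED_UNIT = {
    "conv2d":  "dsp",   # dense multiply-accumulate work suits DSPs/GPUs
    "matmul":  "gpu",
    "pool":    "dsp",
    "softmax": "cpu",   # small, control-heavy ops stay on the CPU
}

def partition(layers):
    """Assign each layer to a unit, merging consecutive same-unit layers
    into one segment to minimize cross-core data transfers."""
    segments = []
    for op in layers:
        unit = PREFERRED_UNIT.get(op, "cpu")
        if segments and segments[-1][0] == unit:
            segments[-1][1].append(op)
        else:
            segments.append((unit, [op]))
    return segments

net = ["conv2d", "pool", "conv2d", "matmul", "softmax"]
print(partition(net))
# → [('dsp', ['conv2d', 'pool', 'conv2d']), ('gpu', ['matmul']), ('cpu', ['softmax'])]
```

Merging adjacent layers matters because each hand-off between cores costs memory bandwidth and latency, which can erase the gains from acceleration.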
Whatever NXP does will have broad implications. The company, which traces its roots to Philips Semiconductors in 1953, has over 25,000 customers and keeps its embedded chips in production significantly longer than is typical in consumer electronics. NXP – the world’s largest maker of automotive chips since its acquisition of Freescale Semiconductor – projects it will sell over 100 million Cortex-A processors in 2018.
NXP can use its wide reach to understand customer requirements and adjust its machine learning plans accordingly, said Nadine Manjaro of market research firm Compass Intelligence. Nvidia and Intel – which has poured billions into its acquisitions of Nervana Systems, Movidius and Mobileye – have more complete strategies, but the market is constantly changing, she said. “NXP has the resources to invest further.”
Specifically, the company can use its understanding of the automotive, networking, aerospace and other industries to build better chips for machine learning inference. “Given NXP’s primary markets in embedded, they are not really late,” Rommel told Electronic Design. He added that Nvidia’s position in the machine learning market is far from unassailable: “It is still early days."
Customers are already wading into machine learning. OrCam, a startup founded by the former chief executive of Mobileye, uses NXP’s i.MX 7 processor inside its wearable device for the elderly and visually impaired. The device can be attached to a pair of glasses, reading newspaper articles and other text aloud and remembering faces. The wearable uses an on-device deep learning algorithm instead of offloading inference tasks to the cloud.
NXP foresees customers using microcontrollers for simpler but similar tasks. At the NXP FTF Connects conference last month, the company showed that its Cortex-M4 devices could use machine learning to sense vibrational anomalies in industrial machines, so that factory owners can address issues sooner. Another processor that costs only a few dollars was used to identify foods placed in a microwave in a tenth of a second. Both chips, however, were running from a wall power source, not a battery.
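Vibration anomaly detection of the kind demonstrated on the Cortex-M4 can be remarkably lightweight: learn a baseline from normal-operation samples, then flag readings that drift too many standard deviations away. A minimal sketch, assuming a simple z-score threshold (this is illustrative, not NXP's demo code):

```python
# Minimal sketch of the kind of anomaly check a Cortex-M class part could
# run on vibration samples. Baseline data and threshold are illustrative.

import math

def fit_baseline(samples):
    """Learn mean and standard deviation from normal-operation vibration data."""
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean, math.sqrt(var)

def is_anomalous(reading, mean, std, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from the mean."""
    return abs(reading - mean) > threshold * std

baseline = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0]   # accelerometer magnitudes, g
mean, std = fit_baseline(baseline)
print(is_anomalous(1.02, mean, std))  # steady vibration → False
print(is_anomalous(2.5, mean, std))   # sudden spike → True
```

Because the model is just two floats and the per-sample check is a subtraction and a compare, it fits comfortably in the memory and compute budget of a microcontroller.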
Unplugging and lowering the cost of these devices would check off boxes for researchers like Pete Warden, who works on Google’s open-source machine learning tools. He foresees an artificially intelligent future in which microcontrollers cost only pennies but have the processing power to do things like understand basic voice commands like “on” and “off” or listen for a grasshopper’s chirping on farms. They would also be able to run for several years on a single battery charge.
“We’ve been talking about the potential of A.I. and neural networks in the embedded market for a decade,” said Rommel. “Now there finally is some momentum and economical processor performance to support it.” He added: “Target system memory availability is still going to be a gate for widespread A.I. implementation” as well as “further improvements in algorithm engineering and compression tools.”