(Image courtesy of Fritzchens Fritz).

Eta Compute Runs Neural Networks That Train Themselves

Nov. 6, 2018

Before machine learning algorithms can be used in factories to detect equipment malfunctions, or in cars to tell the difference between left- and right-turn arrows on their own, they need training. That currently takes place in data centers, where neural networks are shown hundreds or thousands of examples labeled with whatever they will need to tell apart. Once trained, the algorithms are programmed into embedded devices.

Eta Compute is trying something different. The company, which has raised more than $10 million in venture capital since it was founded in 2015, announced that its new microcontroller can support spiking neural networks that train themselves. The new Tensai chip can handle unsupervised learning while consuming less power than a hearing aid, enabling the continuous processing required for voice interfaces and predictive maintenance.

Nvidia currently dominates the market for machine learning chips. Its graphics processors are the gold standard for training neural networks on massive amounts of data. But Eta Compute is attempting to relocate machine learning to the embedded devices that do most of the data collection. That would cut down on the communications with the cloud, not only improving latency but also reducing the amount of information that needs cloud storage.

“We are expecting a shift from a more cloud-dominated space to a more edge-dominated one,” said Nara Srinivasa, Eta Compute’s chief technology officer and a former chief scientist at Intel, where he developed self-learning algorithms. “That basically changes the whole way we collect, process, save and transmit data. We want to send as little information to the cloud as possible while also reducing the area and power of the hardware.”

Eta Compute is fighting for position with market leaders including NXP Semiconductors and Renesas Electronics, as well as startups Greenwaves and Reduced Energy Microsystems. The company, which announced the Tensai microcontroller at Arm TechCon last month, is targeting devices that must run for years on tiny batteries without replacement, or on energy harvested from their surroundings instead of batteries.

The company said that samples of the new chip are currently available to potential customers. Tensai will enter volume production in the first quarter of next year, ahead of competing chips from Greenwaves and Syntiant. The product’s projected cost is in the $10 range. Eta Compute combined Tensai’s Arm Cortex-M3 MCU with low-power analog blocks and NXP CoolFlux DSP cores to handle machine learning jobs.

The cores are based on Eta Compute’s asynchronous DIAL architecture, which lowers the supply voltage that chips need to function. The chip supports subthreshold operation down to 0.2 volts, while most chips require a supply of around 0.9 volts. The frequency can be scaled up to 100 MHz depending on the application. An embedded power-management unit helps keep the frequency constant over process and temperature changes.
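As a rough illustration of why subthreshold operation matters, dynamic power in CMOS logic scales roughly with the square of the supply voltage. The sketch below compares an assumed 0.9-volt supply to the 0.2-volt subthreshold figure; the switched capacitance is a placeholder, and in practice the clock frequency would also drop at the lower voltage.

```python
# Rough illustration of why subthreshold operation saves power.
# Dynamic CMOS power scales roughly as P = C * V^2 * f; the capacitance
# value here is a placeholder, not an Eta Compute figure.

def dynamic_power(cap_farads, volts, freq_hz):
    """Approximate dynamic switching power in watts."""
    return cap_farads * volts ** 2 * freq_hz

C = 1e-9   # 1 nF of switched capacitance (illustrative)
f = 100e6  # 100 MHz clock, the chip's upper bound per the article

p_nominal = dynamic_power(C, 0.9, f)       # ~0.9 V, typical supply
p_subthreshold = dynamic_power(C, 0.2, f)  # 0.2 V subthreshold operation

print(f"nominal:      {p_nominal * 1e3:.1f} mW")
print(f"subthreshold: {p_subthreshold * 1e3:.2f} mW")
print(f"ratio:        {p_nominal / p_subthreshold:.0f}x lower dynamic power")
```

Even this simplified model shows roughly a 20x reduction in dynamic power from the voltage change alone.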

Tensai can also handle convolutional neural networks (CNNs), the main building block of deep learning algorithms. These networks are layers of nodes, each connected to nodes in neighboring layers. The weights that determine the strength of those connections are generated as the software is trained on reams of labeled data. The resulting model can be used to distinguish, for instance, a crow from a black cat or a black high heel.
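To make the notion of learned weights concrete, the sketch below slides a small weight filter over an image and computes the weighted sums that make up one CNN layer’s output. The filter values here are random stand-ins; in a trained network they would be fitted to labeled examples.

```python
import numpy as np

# Minimal sketch of a single CNN layer operation: a 3x3 weight filter
# slides over an image and produces a weighted sum at each position.
# The weights are random stand-ins for values learned from labeled data.

def conv2d_valid(image, kernel):
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.random((8, 8))              # tiny grayscale "image"
weights = rng.standard_normal((3, 3))   # stand-in for learned weights

feature_map = conv2d_valid(image, weights)
print(feature_map.shape)  # (6, 6)
```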

Deep learning is suited for applications with abundant data that can be labeled. But neural networks that do unsupervised learning are a better match for applications where the data is unstructured or scarce, like listening for unusual sounds in factory equipment. Eta Compute said that one of its spiking neural networks—more commonly called SNNs—can train itself to identify the word “smart” while ignoring the words “dumb” and “yellow.”
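Eta Compute has not detailed its training rule, but a common way spiking networks learn without labels is spike-timing-dependent plasticity (STDP), in which synapses whose inputs fire shortly before the neuron itself spikes are strengthened. The toy example below illustrates that idea; all parameters are illustrative, and it is not Eta Compute’s algorithm.

```python
import numpy as np

# Toy leaky integrate-and-fire neuron with an STDP-style update:
# synapses whose inputs were recently active when the neuron spikes get
# stronger, others decay slightly. A generic illustration of
# unsupervised spiking learning, not Eta Compute's actual method.

rng = np.random.default_rng(1)
n_inputs = 16
weights = rng.random(n_inputs) * 0.5
trace = np.zeros(n_inputs)     # recent pre-synaptic activity
v = 0.0                        # membrane potential
THRESHOLD, LEAK, LR = 1.0, 0.9, 0.05

for step in range(200):
    spikes = (rng.random(n_inputs) < 0.1).astype(float)  # random input spikes
    trace = 0.8 * trace + spikes                          # decaying input trace
    v = LEAK * v + weights @ spikes                       # integrate inputs
    if v >= THRESHOLD:                                    # output spike
        weights += LR * trace                             # strengthen recent inputs
        weights -= LR * 0.2                               # mild global depression
        np.clip(weights, 0.0, 1.0, out=weights)
        v = 0.0                                           # reset potential

print(np.round(weights, 2))
```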

Spiking neural networks also slash the amount of computation needed to reach these conclusions. First, the nodes inside them are more sparsely connected than in CNNs. Second, while CNN algorithms represent weights as 8-bit and 16-bit numbers, SNNs work with values of either one or zero. That lowers the cost of the multiply-and-accumulate (MAC) operations at the heart of machine learning.
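The arithmetic difference can be sketched as follows: with 8-bit activations every input needs a multiply and an add, while with sparse binary spikes the multiply disappears and only the active inputs are summed. This is an illustration of the cost argument, not a model of the Tensai datapath.

```python
import numpy as np

# Why binary spike values cheapen the multiply-accumulate: with 8-bit
# activations every input needs a multiply, but with {0, 1} spikes the
# "multiply" disappears and only the active inputs are summed.

rng = np.random.default_rng(2)
weights = rng.integers(-128, 128, size=256)

# Conventional layer: 8-bit activations -> 256 multiplies + 256 adds.
activations = rng.integers(0, 256, size=256)
dense_out = int(np.dot(weights, activations))

# Spiking layer: sparse binary events -> additions only, and only for
# the inputs that actually spiked (here roughly 10% of them).
spikes = rng.random(256) < 0.1
spiking_out = int(weights[spikes].sum())

print("dense MAC result:", dense_out, "| ops:", 2 * 256)
print("spiking result:  ", spiking_out, "| adds:", int(spikes.sum()))
```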

The result is embedded machine learning that requires less power and less data to reach its conclusions, according to Eta Compute’s vice president of sales and business development, Chet Jewan. Eta Compute gave a demonstration: a convolutional neural network needed a photograph with 100,000 pixels to identify a cheetah, while Eta Compute’s SNN algorithms identified it using fewer than 1,000.

Eta Compute’s chip could be paired with microphones in everything from wearables to household appliances to listen for simple voice commands like “On” or “Off.” The microcontroller only draws around 50 µA while listening for wake words and 500 µA during classification, the company said. It could also be used to compress raw images or audio before sending them to the cloud, reducing communications requirements for Internet of Things devices.
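As a back-of-the-envelope illustration of what those currents imply, the calculation below estimates how long a small coin cell could power such a workload. Only the 50 µA and 500 µA figures come from the article; the battery capacity and the classification duty cycle are assumptions.

```python
# Back-of-the-envelope battery-life estimate from the quoted currents.
# The 220 mAh coin-cell capacity and 1% classification duty cycle are
# assumptions for illustration; only the 50 uA / 500 uA draws come from
# the article.

LISTEN_UA = 50          # always-on wake-word listening
CLASSIFY_UA = 500       # active classification
DUTY_CLASSIFY = 0.01    # assume the chip classifies 1% of the time
BATTERY_MAH = 220       # typical CR2032 coin cell (assumed)

avg_ua = (1 - DUTY_CLASSIFY) * LISTEN_UA + DUTY_CLASSIFY * CLASSIFY_UA
hours = BATTERY_MAH * 1000 / avg_ua

print(f"average draw: {avg_ua:.1f} uA")
print(f"estimated life: {hours:.0f} hours (~{hours / 24 / 365:.1f} years)")
```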

“Eta Compute is not alone in addressing edge artificial intelligence, but [the company] is the first to offer an Arm-based MCU with a machine-learning engine,” said Robert Wheeler, technology analyst for market researcher The Linley Group. “The startup completes its solution with optimized neural-network software.” He added: “Its holistic approach should appeal to Internet of Things developers lacking AI expertise.”

Tensai’s support for unsupervised learning could also lower the barrier for companies adding artificial intelligence to tiny battery-powered devices. Renting computing power in the cloud for training and spending time labeling examples for the algorithm to study can add to development costs. “Most companies have very limited resources to label all the data they collect,” said Srinivasa. “It can be arduous.”
