
The next embedded frontier: machine learning-enabled MCUs

A new microcontroller promises hardware-assisted machine learning (ML) acceleration for Internet of Things (IoT) and industrial applications such as smart home, security surveillance, wearables, and robotics. That is expected to significantly lower the barrier to human-machine interaction and add contextual awareness to end applications.

Infineon’s high-end MCU with ML compute acceleration, PSoC Edge, targets a new class of responsive compute and control applications. Steve Tateosian, senior VP of microcontrollers at Infineon, calls it a game changer in compute performance on the hardware side. “It will lead to significant performance improvements when running neural network applications.”

He added that advanced ML applications have traditionally run in the cloud. “With PSoC Edge, tasks like natural language processing can be carried out locally on the device,” Tateosian said. “Developers will also get code and ML tool support for their applications.”

Tool enablement and software support infrastructure are crucial for ML-enabled MCUs, so Infineon has integrated the end-to-end ML tool suite from Imagimob, the Stockholm, Sweden-based startup it acquired earlier this year, into its ModusToolbox software platform. ModusToolbox provides a collection of development tools, libraries, and embedded runtime assets for embedded system developers.
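
Imagimob’s tool chain typically deploys trained models as C sources that drop into a ModusToolbox project. As a rough illustration only, the sketch below shows the streaming enqueue/dequeue pattern such generated edge-ML code tends to follow; the IMAI_* names, stub bodies, and feature/class counts are placeholders for illustration, not the documented Imagimob interface.

```cpp
// Rough illustration of calling generated edge-ML model code from an
// application loop. All names and stub bodies below are placeholders;
// a real ModusToolbox project would include and link the sources that
// the tool chain generates instead.
#include <array>
#include <cstdio>

// --- Placeholders standing in for a generated model.c / model.h -----------
constexpr int kNumFeatures = 3;   // e.g. accelerometer x/y/z (assumption)
constexpr int kNumClasses  = 4;   // e.g. gesture classes (assumption)

int IMAI_init() { return 0; }                            // set up model state
int IMAI_enqueue(const float* /*sample*/) { return 0; }  // push one sample
int IMAI_dequeue(float* scores) {                        // pop scores if ready
  for (int i = 0; i < kNumClasses; ++i) scores[i] = (i == 0) ? 1.0f : 0.0f;
  return 0;                                              // 0 = prediction ready
}
// ---------------------------------------------------------------------------

int main() {
  IMAI_init();

  std::array<float, kNumFeatures> sample{};  // would come from a sensor driver
  std::array<float, kNumClasses> scores{};

  for (int tick = 0; tick < 100; ++tick) {
    // Stream samples in; the model buffers a window internally and emits
    // one score vector per window.
    IMAI_enqueue(sample.data());
    if (IMAI_dequeue(scores.data()) == 0) {
      std::printf("prediction ready, class 0 score = %.2f\n", scores[0]);
    }
  }
  return 0;
}
```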

The PSoC Edge MCUs are based on a high-performance Arm Cortex-M55 processor complemented by Arm’s Helium vector extension for enhanced DSP and ML performance. The Cortex-M55 is paired with the Arm Ethos-U55, a neural processing unit (NPU) designed to accelerate ML inference in area-constrained embedded and IoT devices.
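
Infineon has not published PSoC Edge example code here, but the common way to drive an Ethos-U-class NPU from a Cortex-M application is TensorFlow Lite for Microcontrollers, with the NPU-mapped operators compiled offline (for example with Arm’s Vela compiler) into a single Ethos-U custom op. The sketch below is a minimal, generic illustration of that flow rather than PSoC Edge-specific code; the model array name, arena size, and operator list are assumptions.

```cpp
// Minimal, generic TensorFlow Lite for Microcontrollers inference sketch for
// a Cortex-M55 + Ethos-U55 target. Assumes the model was quantized to int8
// and compiled with Vela, and that the Ethos-U kernel/driver is in the build.
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Model flatbuffer produced offline; the symbol name is an assumption.
extern const unsigned char g_model_data[];

namespace {
constexpr int kTensorArenaSize = 64 * 1024;  // sized per model (assumption)
uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

int run_inference(const int8_t* input, int input_len,
                  int8_t* output, int output_len) {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Operators offloaded to the NPU appear as one Ethos-U custom op; anything
  // left on the CPU runs on the Cortex-M55 (CMSIS-NN / Helium) kernels.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddEthosU();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
  resolver.AddReshape();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  TfLiteTensor* in = interpreter.input(0);
  for (int i = 0; i < input_len; ++i) in->data.int8[i] = input[i];

  if (interpreter.Invoke() != kTfLiteOk) return -1;

  TfLiteTensor* out = interpreter.output(0);
  for (int i = 0; i < output_len; ++i) output[i] = out->data.int8[i];
  return 0;
}
```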

On the low-power side, a Cortex-M33 core is paired with Infineon’s ultra-low-power NNLite, a proprietary hardware accelerator for the neural networks used in ML applications. There is also ample on-chip memory, including non-volatile RRAM, along with support for high-speed, secured external memory. This mix of scalable compute and memory is backed by an ecosystem of software and tools that, according to Infineon, will support end-to-end ML development, from data collection to model deployment.
