Embedded World 2024: Edge AI to transform industry
Opening Embedded World 2024, the leading trade show for embedded technologies, Salil Raje, senior vice-president and general manager of the adaptive and embedded computing group at AMD, highlighted the importance of artificial intelligence (AI) at the edge.
He said that AI at the edge will be crucial to delivering the real-time performance, data security and customisation required in key applications such as autonomous driving, infotainment and robotics. He added that generative AI (GenAI) systems such as ChatGPT have captured the imagination unlike anything the technology industry has seen in the past 50 years.
Raje noted that AI and robotics advancements in healthcare, transportation, and homes are transforming convenience and quality of life, redefining how people interact with technology.
Among the innovations being rolled out, Raje cited seven key use cases where AI at the edge is proving transformational: healthcare and life sciences, smart retail, communications, smart cities, automotive, the digital home, and intelligent factories.
Citing a number of key examples, he said: “AI is allowing industrial robots to learn and act autonomously. In healthcare, AI-based exoskeletons are making tremendous progress. The convergence of robotics and AI is on the cusp of restoring lost limb functions for many individuals, dramatically improving their quality of life.
“[In automotive], AI assistants are on the verge of being introduced in vehicles. These AI will allow you to personalise your car from adjusting settings to making dinner reservations, reading a user manual or whatever [from inside] the car. This has fundamentally redefined convenience as we know it.”
AI at the edge will grow, he said, when the technology industry can deliver very high compute performance within a constrained environment. He added that the push towards AI at the edge comes down to three principal vectors: real-time processing, data security and privacy, and personalisation and customisation.
Yet that is not to say each industry will not face its own specific challenges. As AI is pushed right to the edge, it is igniting new requirements for decentralised intelligence, which means AI cannot depend on cloud and compute infrastructure, Raje cautioned. He observed that the challenges centre on power, data throughput, latency, accuracy, environmental temperature, safety, security, regulation, diverse workloads and frequency changes.
“The world of AI is innovating at a fairly rapid pace with more models and putting tremendous stress on computing. Harnessing the power of these continuously evolving models is posing extreme challenges, especially because with an edge the constraints are quite severe,” he said.
“As each of these applications changes, these constraints and requirements continue to change. Each one of these use cases and applications [places] different demands and different requirements on your edge application. In industrial, for example, safety may be more important than [power] consumption. In healthcare, accuracy is non-negotiable.”
The AMD exec pointed out that the key challenges in healthcare use cases would be safety, security, power and data accuracy. In industrial settings, they would be coping with diverse workloads, regulation, safety and accuracy. For automotive applications, latency, accuracy, safety and regulation would likely be paramount.
The net result is that designing edge AI applications requires “flawless” integration of multiple stages, including pre-processing, AI inference and post-processing, in a way that supports system scalability and adaptability.
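To make that pipeline structure concrete, the following is a minimal illustrative sketch of a three-stage edge vision pipeline. It is not AMD's or Subaru's implementation, and all names in it (preprocess, infer, postprocess, run_frame) are hypothetical placeholders assumed for the example.

```python
# Illustrative three-stage edge AI pipeline: pre-processing -> inference -> post-processing.
# All function names are hypothetical; a real deployment would call into an
# accelerator runtime rather than the dummy maths used here.
import numpy as np


def preprocess(frame: np.ndarray) -> np.ndarray:
    """Normalise a raw camera frame and add a batch dimension (placeholder)."""
    return (frame.astype(np.float32) / 255.0)[np.newaxis, ...]


def infer(tensor: np.ndarray) -> np.ndarray:
    """Stand-in for an accelerator inference call; returns a dummy score per frame."""
    return tensor.mean(axis=(1, 2, 3))


def postprocess(raw_output: np.ndarray) -> dict:
    """Turn the raw model output into an application-level result."""
    return {"object_detected": bool(raw_output[0] > 0.5)}


def run_frame(frame: np.ndarray) -> dict:
    """Run a single camera frame through all three stages in order."""
    return postprocess(infer(preprocess(frame)))


if __name__ == "__main__":
    # Feed a random dummy frame through the pipeline.
    dummy_frame = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
    print(run_frame(dummy_frame))
```

In a safety-critical design of the kind described in the keynote, each of these stages would additionally need to meet latency, accuracy and functional-safety constraints, which is why integrating them in a single device is presented as the hard part.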
Concluding with what could be a massive opportunity, not just with the global car giant but for the automotive industry as a whole, Raje announced that AMD is working with Subaru to build a camera-based vision perception pipeline, combining pre-processing, AI inference and post-processing with functional safety in a single device.