Tokyo University of Science sets pace in neural networks on edge IoT

by Pelican Press

Tokyo University of Science (TUS) has revealed it has developed a binarised neural network (BNN) scheme that uses ternary gradients to address the computational challenges of internet of things (IoT) edge devices.

The university said the breakthrough could pave the way to powerful IoT devices capable of using artificial intelligence (AI) to a greater extent, something that has notable implications for many rapidly developing fields. It cited the example of wearable health-monitoring devices that could become more efficient, smaller and more reliable without requiring constant cloud connectivity to function. Smart homes would be able to perform more complex tasks and operate in a more responsive way.

The researchers added that across these and other possible use cases, the proposed design could also reduce energy consumption, contributing to sustainability goals.

At the heart of the innovation was the introduction of a magnetic random access memory (MRAM)-based computing-in-memory architecture that the team said significantly reduced circuit size and power consumption. The design is said to have achieved near-identical accuracy and faster training times compared with traditional BNNs, making it a promising offering for efficient AI implementation in resource-limited devices, such as those used in IoT systems.

Explaining the background of the project, the research team said two broad technological fields have been developing at an increasingly fast pace over the past decade: AI and IoT. Engineers and researchers alike foresee a future in which IoT devices are ubiquitous, forming the foundation of a highly interconnected world.

Yet the research team warned that bringing AI capabilities to IoT edge devices presents a significant challenge. Artificial neural networks (ANNs) – one of the most important AI technologies of which BNNs are a subset – require substantial computational resources.

Meanwhile, IoT edge devices are inherently small, with limited power, processing speed and circuit space. Developing ANNs that can efficiently learn, deploy and operate on edge devices is a major hurdle.

In their latest study published in IEEE Access, Takayuki Kawahara and Yuya Fujiwara from the Tokyo University of Science revealed how they are working towards finding elegant solutions to this challenge, introducing a training algorithm for BNNs, as well as an innovative implementation of this algorithm in a computing-in-memory (CiM) architecture suitable for IoT devices.

“BNNs are ANNs that employ weights and activation values of only -1 and +1, and they can minimize the computing resources required by the network by reducing the smallest unit of information to just one bit,” said Kawahara. “However, although weights and activation values can be stored in a single bit during inference, weights and gradients are real numbers during learning, and most calculations performed during learning are real number calculations as well. For this reason, it has been difficult to provide learning capabilities to BNNs on the IoT edge side.”
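
To make this concrete, the following is a minimal, illustrative sketch (not the authors' code) of the asymmetry Kawahara describes: the forward pass can run on 1-bit values, but the latent weights and gradients used for learning remain real-valued.

```python
# Minimal sketch of a BNN layer: 1-bit values during inference,
# real-valued weights and gradients during training.
import numpy as np

def binarise(x):
    """Map real values to {-1, +1} with the sign function (0 mapped to +1)."""
    return np.where(x >= 0, 1.0, -1.0)

rng = np.random.default_rng(0)
latent_w = rng.normal(size=(4, 3))     # real-valued weights kept for training
activations = rng.normal(size=(1, 4))  # real-valued pre-activations

# Inference: only 1-bit values take part in the multiply-accumulate.
w_bin = binarise(latent_w)
a_bin = binarise(activations)
y = a_bin @ w_bin                      # in hardware this becomes XNOR + popcount

# Training: the gradient with respect to the latent weights is still a real
# number, which is what makes on-device learning expensive for edge hardware.
grad_y = rng.normal(size=y.shape)      # stand-in for an upstream gradient
grad_latent_w = a_bin.T @ grad_y       # full-precision update
latent_w -= 0.01 * grad_latent_w
```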

To overcome this, the researchers developed a training algorithm called ternary gradient BNN (TGBNN), featuring three key innovations. First, it employs ternary gradients during training, while keeping weights and activations binary. Second, they enhanced the straight-through estimator (STE), improving the control of gradient backpropagation to ensure efficient learning. Third, they adopted a probabilistic approach for updating parameters by leveraging the behaviour of magnetic RAM (MRAM) cells.
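
The sketch below is a hedged illustration of those three ideas only; the exact thresholds, STE variant and MRAM-based stochastic update rule used in TGBNN are not reproduced here, and the function names and parameters are hypothetical.

```python
# Illustrative-only sketch of the ternary-gradient idea (not the TGBNN code).
import numpy as np

def ternarise_gradient(grad, threshold=0.05):
    """Quantise a real-valued gradient to {-1, 0, +1}."""
    t = np.zeros_like(grad)
    t[grad > threshold] = 1.0
    t[grad < -threshold] = -1.0
    return t

def ste_backward(grad_out, pre_activation, clip=1.0):
    """Straight-through estimator: pass the gradient through the sign()
    non-linearity, but zero it where the input saturates."""
    return grad_out * (np.abs(pre_activation) <= clip)

def stochastic_update(latent_w, ternary_grad, p_update=0.1, rng=None):
    """Probabilistic parameter update: each weight changes only with
    probability p_update, loosely mimicking a stochastic memory-cell write."""
    rng = rng or np.random.default_rng()
    mask = rng.random(latent_w.shape) < p_update
    return latent_w - np.where(mask, ternary_grad, 0.0)
```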

Afterwards, the research team implemented the TGBNN algorithm in a CiM architecture – a modern design paradigm where calculations are performed directly in memory, rather than in a dedicated processor, to save circuit space and power. To realise this, they developed a completely new XNOR logic gate as the building block for an MRAM array. This gate uses a magnetic tunnel junction to store information in its magnetisation state.
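
Why XNOR? With weights and activations restricted to {-1, +1}, elementwise multiplication is equivalent to an XNOR on the sign bits and the accumulation reduces to a popcount. The short sketch below shows that arithmetic identity in software; the MRAM and magnetic tunnel junction circuitry itself is hardware and is not modelled here.

```python
# Binary dot product via XNOR + popcount: +1/-1 vectors packed as bit masks,
# with bit value 1 standing for +1 and 0 for -1.
def xnor_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed as bit masks."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # 1 wherever the signs agree
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n                      # +1 per match, -1 per mismatch

# Example: a = [+1, -1, +1, +1] -> 0b1011, w = [+1, +1, -1, +1] -> 0b1101
print(xnor_dot(0b1011, 0b1101, 4))  # -> 0 (two matches, two mismatches)
```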

“The results showed that our ternarised gradient BNN achieved an accuracy of over 88% using Error-Correcting Output Codes-based learning, while matching the accuracy of regular BNNs with the same structure and achieving faster convergence during training,” added Kawahara. “We believe our design will enable efficient BNNs on edge devices, preserving their ability to learn and adapt.”


