AI models can’t learn as they go along like humans do

by Pelican Press

AI programs quickly lose the ability to learn anything new

The algorithms that underpin artificial intelligence systems like ChatGPT can’t learn as they go along, forcing tech companies to spend billions of dollars to train new models from scratch. While this has been a concern in the industry for some time, a new study suggests there is an inherent problem with the way models are designed – but there may be a way to solve it.

Most AIs today are neural networks, systems inspired by how brains work and built from processing units known as artificial neurons. They typically go through distinct phases in their development. First, the AI is trained: an algorithm tunes the connections between its artificial neurons so that the network better reflects a given dataset. The AI can then be used to respond to new data, such as the text prompts people type into ChatGPT. However, once the model’s neurons have been set in the training phase, they can’t update and learn from new data.
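As a rough illustration of those two phases, here is a minimal sketch in Python using PyTorch. The tiny network, the random stand-in data and all the sizes are hypothetical choices for this example, not details from any real system.

```python
import torch
import torch.nn as nn

# A tiny hypothetical network: 10 inputs, one hidden layer, 2 outputs.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Phase 1: training. An algorithm (here, gradient descent) repeatedly
# adjusts the connections to better reflect the training dataset.
inputs = torch.randn(100, 10)            # stand-in training data
labels = torch.randint(0, 2, (100,))     # stand-in labels
for _ in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()

# Phase 2: use. The connections are now fixed; the model responds to
# new inputs but no longer learns from them.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 10))
```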

This means that most large AI models must be retrained if new data becomes available, which can be prohibitively expensive, especially when those new datasets consist of large portions of the entire internet.

Researchers have therefore looked for ways to get these models to incorporate new knowledge after the initial training, which would reduce costs, but it has been unclear whether they are capable of it.

Now, Shibhansh Dohare at the University of Alberta in Canada and his colleagues have tested whether the most common AI models can be adapted to learn continually. The team found that the models quickly lose the ability to learn anything new, with vast numbers of artificial neurons getting stuck at a value of zero after they are exposed to new data.

“If you think of it like your brain, then it’ll be like 90 per cent of the neurons are dead,” says Dohare. “There’s just not enough left for you to learn.”

Dohare and his team first trained AI systems on the ImageNet database, which consists of 14 million labelled images of simple objects such as houses or cats. But rather than training the AI once and then repeatedly testing it on the task of distinguishing between two images, as is standard, they retrained the model after each pair of images.

They tested a range of different learning algorithms in this way and found that after a couple of thousand retraining cycles, the networks lost the ability to learn and performed poorly, with many neurons appearing “dead”, outputting nothing but zero.
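To make the idea of “dead” neurons concrete, here is a hedged sketch of how one might count ReLU units that output zero for every input in a batch. The layer sizes and the random batch are hypothetical; this is not the paper’s measurement code.

```python
import torch
import torch.nn as nn

hidden = nn.Linear(10, 32)    # hypothetical hidden layer
batch = torch.randn(256, 10)  # stand-in batch of inputs

with torch.no_grad():
    acts = torch.relu(hidden(batch))  # activations, shape (256, 32)
    dead = (acts == 0).all(dim=0)     # True where a unit never fires
print(f"{int(dead.sum())} of {dead.numel()} units are dead on this batch")
```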

The team also trained AIs to simulate an ant learning to walk using reinforcement learning, a common method in which an AI is told what success looks like and works out the rules by trial and error. When they tried to adapt this technique for continual learning, retraining the model as the surfaces it walked on changed, they found that it, too, led to a significant loss of the ability to learn.

This problem seems inherent to the way these systems learn, says Dohare, but there is a possible way around it. The researchers developed an algorithm that randomly revives some dead neurons after each training round, and it appeared to reduce the drop in performance. “If a [neuron] has died, then we just revive it,” says Dohare. “Now it’s able to learn again.”
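The article doesn’t spell out the researchers’ algorithm, but a sketch of the general idea, reinitialising the incoming connections of dead units so they can fire and learn again, might look like the following. Everything here, from the layer to the revival criterion, is an illustrative assumption rather than the authors’ actual method.

```python
import torch
import torch.nn as nn

def revive_dead_units(hidden: nn.Linear, batch: torch.Tensor) -> int:
    """Reinitialise units that output zero for every example in `batch`."""
    with torch.no_grad():
        acts = torch.relu(hidden(batch))
        dead = (acts == 0).all(dim=0)  # boolean mask over hidden units
        if dead.any():
            fresh = nn.Linear(hidden.in_features, hidden.out_features)
            hidden.weight[dead] = fresh.weight[dead]  # fresh random weights
            hidden.bias[dead] = fresh.bias[dead]
    return int(dead.sum())

layer = nn.Linear(10, 32)
revived = revive_dead_units(layer, torch.randn(256, 10))
```

Run between training rounds, a step like this would give previously dead units fresh random connections, so signals can flow through them again and they can resume learning.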

The algorithm looks promising, but it will need to be tested on much larger systems before we can be sure it will help, says Mark van der Wilk at the University of Oxford.

“A solution to continual learning is literally a billion dollar question,” he says. “A real, comprehensive solution that would allow you to continuously update a model would reduce the cost of training these models significantly.”
