AI Neural Network Learns When It Should Not Be Trusted


TEHRAN (Tasnim) – Researchers have developed a way for deep learning neural networks to rapidly estimate confidence levels in their output.

MIT engineers expect this advance may eventually save lives, as deep learning is now widely deployed in everyday applications, the Express reported.

For example, a network’s level of certainty can be the difference between an autonomous vehicle determining that an intersection is definitely clear to proceed through and deciding “it’s probably clear, so stop just in case.”

The approach, developed by a team led by MIT PhD student Alexander Amini and dubbed “deep evidential regression”, accelerates the process and could lead to even safer AI technology.

He said: “We need the ability to not only have high-performance models, but also to understand when we cannot trust those models.

“This idea is important and applicable broadly. It can be used to assess products that rely on learned models.

“By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model.”
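For readers curious what this looks like in practice, the sketch below is a minimal illustration, not the team’s actual code, of an evidential regression head in PyTorch. The network’s final layer outputs the four parameters of a Normal-Inverse-Gamma distribution, from which the prediction, the expected data noise (aleatoric uncertainty) and the model’s own uncertainty (epistemic uncertainty) are read off in closed form; the class name, layer sizes and feature dimensions here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Final layer mapping features to the four parameters of a
    Normal-Inverse-Gamma distribution: gamma, nu, alpha, beta."""

    def __init__(self, in_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, 4)

    def forward(self, x):
        gamma, log_nu, log_alpha, log_beta = self.linear(x).chunk(4, dim=-1)
        nu = F.softplus(log_nu)              # nu > 0
        alpha = F.softplus(log_alpha) + 1.0  # alpha > 1
        beta = F.softplus(log_beta)          # beta > 0
        return gamma, nu, alpha, beta


def uncertainties(gamma, nu, alpha, beta):
    """Closed-form estimates from the NIG parameters, following the
    deep evidential regression formulation."""
    prediction = gamma                       # predicted target value
    aleatoric = beta / (alpha - 1.0)         # expected data noise
    epistemic = beta / (nu * (alpha - 1.0))  # model (epistemic) uncertainty
    return prediction, aleatoric, epistemic


# One forward pass yields the prediction and both uncertainty estimates.
features = torch.randn(8, 64)                # stand-in for an encoder's output
pred, aleatoric, epistemic = uncertainties(*EvidentialHead(64)(features))
```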

Amini adds that previous approaches to uncertainty analysis have been based on Bayesian deep learning.

This is a significantly slower process, a luxury that does not exist in the real world, where decisions can mean the difference between life and death.
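The source of the speed difference can be sketched roughly as follows, assuming a standard PyTorch regression model: a sampling-based estimate such as Monte Carlo dropout needs many stochastic forward passes, while an evidential model with a head like the one above returns its uncertainty from a single pass. The function names and sample count are illustrative assumptions, not the researchers’ implementation.

```python
import torch

def mc_dropout_uncertainty(model, x, n_samples: int = 30):
    """Sampling-based estimate: keep dropout active and run many
    stochastic forward passes (cost grows with n_samples)."""
    model.train()                                 # leave dropout switched on
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)    # prediction, spread

def evidential_uncertainty(model, x):
    """Evidential estimate: one deterministic pass returns distribution
    parameters from which uncertainty is read off directly."""
    model.eval()
    with torch.no_grad():
        gamma, nu, alpha, beta = model(x)
    return gamma, beta / (nu * (alpha - 1.0))     # prediction, epistemic uncertainty
```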

Amini said: “We’ve had huge successes using deep learning.

“Neural networks are really good at knowing the right answer 99 percent of the time.

“One thing that has eluded researchers is the ability of these models to know and tell us when they might be wrong.

“We really care about that one percent of the time, and how we can detect those situations reliably and efficiently.”

The researchers started with a challenging computer vision task to put their approach to the test.

They trained their neural network to analyse an image and estimate the depth at each pixel.

Self-driving cars use similar calculations to estimate their proximity to a pedestrian or another vehicle – no simple task.

As the researchers had hoped, the network projected high uncertainty for pixels where it predicted the wrong depth.

Amini said: “It was very calibrated to the errors that the network makes, which we believe was one of the most important things in judging the quality of a new uncertainty estimator.”
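One simple way to probe that behaviour, sketched below with hypothetical tensor names, is to check whether the predicted uncertainty actually rises where the prediction error rises, for instance via the correlation between per-pixel error and per-pixel uncertainty.

```python
import torch

def uncertainty_error_correlation(pred, target, epistemic) -> float:
    """Rough calibration check: a well-calibrated estimator should
    assign higher uncertainty where the absolute error is larger."""
    error = (pred - target).abs().flatten()
    uncertainty = epistemic.flatten()
    # Pearson correlation between per-pixel error and uncertainty
    return torch.corrcoef(torch.stack([error, uncertainty]))[0, 1].item()
```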

The test revealed the network’s ability to flag when users should not place full trust in its decisions.

In such examples, “if this is a health care application, maybe we don’t trust the diagnosis that the model is giving, and instead seek a second opinion,” Amini added.
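In code, such a deferral rule can be as simple as thresholding the uncertainty map; the threshold value and tensor shapes below are purely illustrative assumptions.

```python
import torch

def flag_unreliable(epistemic: torch.Tensor, threshold: float) -> torch.Tensor:
    """Boolean mask marking pixels whose epistemic uncertainty exceeds a
    chosen threshold; downstream code can discard those values, fall back
    to another sensor, or defer the decision to a human."""
    return epistemic > threshold

# Illustrative use with a per-pixel uncertainty map from a depth network.
epistemic = torch.rand(1, 1, 120, 160)           # stand-in uncertainty map
mask = flag_unreliable(epistemic, threshold=0.9)
print(f"{mask.float().mean().item():.1%} of pixels flagged as untrustworthy")
```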

Dr Raia Hadsell, a DeepMind artificial intelligence researcher not involved with the work, describes deep evidential regression as “a simple and elegant approach that advances the field of uncertainty estimation, which is important for robotics and other real-world control systems.”

She added: “This is done in a novel way that avoids some of the messy aspects of other approaches — [for example] sampling or ensembles — which makes it not only elegant but also computationally more efficient — a winning combination.”