How sure is sure? Incorporating human error into machine learning


SOURCE: HTTPS://WWW.SCIENCEDAILY.COM/
AUG 17, 2023

Researchers are developing a way to incorporate one of the most human of characteristics -- uncertainty -- into machine learning systems.

Human error and uncertainty are concepts that many artificial intelligence systems fail to grasp, particularly in systems where a human provides feedback to a machine learning model. Many of these systems are programmed to assume that humans are always certain and correct, but real-world decision-making includes occasional mistakes and uncertainty.

Researchers from the University of Cambridge, along with The Alan Turing Institute, Princeton, and Google DeepMind, have been attempting to bridge the gap between human behaviour and machine learning, so that uncertainty can be more fully accounted for in AI applications where humans and machines are working together. This could help reduce risk and improve trust and reliability of these applications, especially where safety is critical, such as medical diagnosis.

The team adapted a well-known image classification dataset so that humans could provide feedback and indicate their level of uncertainty when labelling a particular image. The researchers found that training with uncertain labels can improve these systems' ability to handle uncertain feedback, although human feedback can also cause the overall performance of these hybrid systems to drop. Their results will be reported at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society (AIES 2023) in Montréal.
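The paper describes the exact experimental setup; as a rough illustration of what "training with uncertain labels" can look like in practice, a classifier can be trained against soft label distributions (an annotator's stated confidence over classes) rather than hard one-hot targets. The sketch below is a hypothetical example of this general idea, not the authors' code; all names, shapes, and numbers are illustrative assumptions.

```python
# Hypothetical sketch: training an image classifier on soft (uncertain) human labels
# instead of hard one-hot labels. Names and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def soft_label_loss(logits, soft_targets):
    """Cross-entropy against a human-provided probability distribution over classes.

    logits:       (batch, num_classes) raw model outputs
    soft_targets: (batch, num_classes) annotator confidence per class, rows sum to 1
    """
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()

# Example: an annotator is 70% sure an image is class 2 and 30% sure it is class 5
# (10 classes total). The logits stand in for model(image) output.
logits = torch.randn(1, 10, requires_grad=True)
soft_target = torch.zeros(1, 10)
soft_target[0, 2], soft_target[0, 5] = 0.7, 0.3

loss = soft_label_loss(logits, soft_target)
loss.backward()  # gradients flow exactly as with ordinary cross-entropy training
```

With fully confident annotators the soft targets collapse to one-hot vectors and this reduces to standard cross-entropy training; the uncertainty only changes how much each class contributes to the loss.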

'Human-in-the-loop' machine learning systems -- a type of AI system that enables human feedback -- are often framed as a promising way to reduce risks in settings where automated models cannot be relied upon to make decisions alone. But what if the humans are unsure?

"Uncertainty is central in how humans reason about the world but many AI models fail to take this into account," said first author Katherine Collins from Cambridge's Department of Engineering. "A lot of developers are working to address model uncertainty, but less work has been done on addressing uncertainty from the person's point of view."

We are constantly making decisions based on the balance of probabilities, often without really thinking about it. Most of the time -- for example, if we wave at someone who looks just like a friend but turns out to be a total stranger -- there's no harm if we get things wrong. However, in certain applications, uncertainty comes with real safety risks.
