
In AI we trust?

18th November 2019
Anna Flockett

Artificial Intelligence is many things, just like human intelligence is. Intelligence, from the Latin “intelligere,” means the ability to go deep and broad as we “bind together” (“inter” and “ligare”) things that might apparently have no evident relationship. But by going too deep you lose the general applicability of a developing pattern, and by going too broad you lose meaningful correlations.

Guest blog written by Davide Ricci, Wind River.

The acts of developing a pattern, bringing things together, and learning all contribute to the creation of knowledge. And if you can explain something simply (efficiently) and elegantly (effectively) to somebody else, then you really have learned. The act of explaining reinforces our learning, shares knowledge, and builds trust.

When a machine learns, well, it’s called Machine Learning (ML), a form of AI. It starts with data, and from that data, going broad and deep, it develops knowledge that was not known before. So essentially ML augments the knowledge that humans are capable of building, especially when the data set is particularly big and time is lacking. Different ML algorithms exist that are very accurate and widely used. Yet far too many operate by brute force (well, if you were a modern powerful computer, wouldn’t you?) and build knowledge that is best represented as a “black box.” Essentially, if training has been properly conducted, then given the same input the same output is always generated, with great accuracy and repeatability. Yet the human is left to wonder: Why? How? Can you explain this to me, computer?
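
To make the “black box” point concrete, here is a minimal sketch, assuming scikit-learn is available; the dataset and model choice are illustrative only, not part of the original article.

# A small neural network trained on a public dataset: accurate and repeatable,
# but its learned "knowledge" is opaque to a human reader.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

sample = X_test[:1]
print(model.predict(sample))   # same input -> same output, every time
print(model.predict(sample))

# Inspecting the model yields only weight matrices; nothing here tells the
# human *why* this particular prediction was made.
print([w.shape for w in model.coefs_])

The two predict calls return the same answer every time, yet the model itself offers no explanation a human can follow.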

So this raises the question: Is such AI really useful if the knowledge it extracts cannot be explained, shared and/or ultimately trusted?

Well, certainly not in safety-critical markets. For example, we would not currently trust an AI system, however accurate it might be, to operate a drone that flies over our heads. We would certainly not trust an AI system that supports a doctor in making medical decisions if the doctor is left wondering, “Why would this data correlate, and why is the AI system recommending one treatment over another?” We would certainly not trust an AI system that is supposed to help us prevent possible harm when operating a remotely operated surgical robot, a dialysis machine, or an infusion pump.

Take IEC 62304 and the medical market as an example. The idea of Software of Unknown Provenance (SOUP) is baked into the standard. With SOUP, or third-party software, in mind, one may and should use third-party algorithms and knowledge, provided it is clear what such algorithms and knowledge do, and provided the risks of such algorithms misbehaving and failing are mitigated.

ML software and the knowledge it develops can certainly fall into the category of SOUP, or third-party software. Yet if ML behaves like a black box, how could we know what that box does? How could we mitigate the effect of a failure, or even prevent such a failure from happening?
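
One common mitigation pattern, offered here only as a sketch and not as a prescription from the standard, is to treat the black box as SOUP and wrap it behind a simple, deterministic runtime monitor that enforces independently specified safe bounds. The function and limit names below are hypothetical.

# Hypothetical safe envelope taken from clinical requirements, not from the model.
MAX_SAFE_ML_PER_HOUR = 250.0
MIN_SAFE_ML_PER_HOUR = 0.0

def recommend_infusion_rate(model, patient_features):
    # Ask the black-box model for a rate, but never let it leave the safe envelope.
    rate = float(model.predict([patient_features])[0])

    if not (MIN_SAFE_ML_PER_HOUR <= rate <= MAX_SAFE_ML_PER_HOUR):
        # The black box misbehaved: fall back to a deterministic, reviewable
        # default and flag the event for a human (the doctor) to decide.
        return MIN_SAFE_ML_PER_HOUR, "out_of_bounds: clinician review required"

    return rate, "ok"

The point is not the specific limits but the architecture: the safety argument rests on the small, explainable monitor, not on the opaque model it supervises.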

So, in conclusion: Yes, in AI we trust, as long as that AI is explainable and can support us in our daily decisions. After all, the doctor, and not the AI system, is responsible for saving human lives.

For more information on how Wind River technology is helping revolutionise the medical sector, click here.

Courtesy of Wind River. 
