
Series 10 – Episode 1 – Why engineers need explainable AI

11th November 2022
Paige West

Paige West speaks with Johanna Pingel, Product Marketing Manager at MathWorks, about explainable AI.

Pingel started out as a musician but quickly transitioned to a more technical field because she liked the interplay between music and technology. Also, making a living as a musician is very difficult!

For the past 10 years, she has been working on image processing, computer vision, and artificial intelligence applications.

Pingel is the Product Marketing Manager at MathWorks, a software company based just outside of Boston. People aren’t always familiar with MathWorks itself, but many know its software, MATLAB.

Decision-makers across industries and applications are increasingly relying on AI, yet AI models are often considered ‘black boxes,’ offering little visibility into how the model works. Pingel explains: “When you first start off with machine learning and deep learning techniques, you often get questions of, how does the algorithm work, or what is it basing its decision on? This normally comes up in response to showing something like a neural network or a machine learning model that takes an input and learns a mapping from that input to various outputs.

“As the model learns more features and gets more and more complex, it becomes a little harder to say what the model is learning. But the real problem comes when the model doesn’t work as expected. What do you do then, when you don’t have the language to translate what the model has learned into something you can explain?

“That is the challenge: the lack of visibility into black-box models.”

With the help of explainable AI, engineers can better understand how machine learning models arrive at their predictions and work towards instilling confidence in models.

“Explainable AI is trying to uncover the black-box nature of machine learning and deep learning models,” said Pingel.

Pingel goes on to discuss how an engineer can implement explainable AI, some of the challenges that come with it, and how users can overcome them.
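
For readers who want a concrete sense of what "explaining" a black-box model can look like, below is a minimal sketch in Python using permutation feature importance, a generic model-agnostic technique. The dataset, model, and library choices here are illustrative assumptions for this article, not MathWorks' or Pingel's specific workflow.

# Minimal sketch: permutation feature importance on a toy "black-box" model.
# Shuffling one input feature at a time and measuring the drop in accuracy
# gives a rough picture of which inputs the model relies on.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train an ordinary classifier on a toy dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# Permute each feature and record how much test accuracy drops.
rng = np.random.default_rng(0)
importances = []
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(baseline - model.score(X_perm, y_test))

# Report the five most influential features.
for j in np.argsort(importances)[::-1][:5]:
    print(f"feature {j}: accuracy drop {importances[j]:.3f}")

A large accuracy drop suggests the model's predictions depend heavily on that feature, which is one simple way to start translating what a model has learned into something an engineer can explain.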

To hear more about explainable AI and much more, you can listen to Electronic Specifier’s interview with Johanna Pingel on Spotify or Apple Podcasts.
