Jun 21, 2020

The case for self-explainable AI

Posted in categories: biotech/medical, information science, robotics/AI

For instance, suppose a neural network has labeled the image of a skin mole as cancerous. Is it because it found malignant patterns in the mole, or is it because of irrelevant elements such as image lighting, camera type, or the presence of some other artifact in the image, such as pen markings or rulers?

Researchers have developed various interpretability techniques that help investigate the decisions made by machine learning algorithms. But these methods are not enough to address AI’s explainability problem and create trust in deep learning models, argues Daniel Elton, a scientist who researches the applications of artificial intelligence in medical imaging.
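
One of the most common interpretability techniques referenced in this line of work is the gradient-based saliency map, which highlights the input pixels whose changes most affect the predicted class score. The snippet below is a minimal sketch of that idea, not code from Elton's paper: it assumes a pretrained torchvision ResNet-18 and uses a random tensor as a stand-in for a preprocessed image.

```python
# Minimal gradient-saliency sketch (hypothetical inputs; not from Elton's paper).
# A saliency map shows which pixels most influence the top class score,
# one common way to probe a classifier's decision.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-in for a preprocessed skin-lesion image, shape (1, 3, 224, 224).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

scores = model(image)                          # class logits
top_class = scores.argmax(dim=1).item()        # predicted class index
scores[0, top_class].backward()                # d(top score) / d(pixels)

# Per-pixel importance: max absolute gradient across the color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze()
print(saliency.shape)                          # torch.Size([224, 224])
```

Maps like this can hint at what a model attends to, but, as the discussion below notes, they still require a human to interpret the result rather than the model explaining itself.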

Elton discusses why we need to shift from techniques that interpret AI decisions to AI models that can explain their decisions themselves, as humans do. His paper, “Self-explaining AI as an alternative to interpretable AI,” recently published on the arXiv preprint server, expands on this idea.
