## Can A.I. Be Taught to Explain Itself?

Others involve psychological insight: One team at Rutgers is designing a deep neural network that, once it makes a decision, can then sift through its data set to find the example that best demonstrates why it made that decision.
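One way to picture that retrieval idea is a nearest-neighbor lookup: after the network decides, find the training example whose (learned) feature vector sits closest to the input's. This is only a minimal sketch of the general technique, not the Rutgers team's actual system; the example names and two-dimensional feature vectors below are invented for illustration.

```python
import math

# Hypothetical training set: names mapped to toy feature vectors, standing in
# for embeddings a trained network would actually produce.
train_examples = {
    "dog_photo_17": [0.9, 0.1],
    "dog_photo_42": [0.8, 0.2],
    "cat_photo_03": [0.1, 0.9],
}

def best_demonstrating_example(query, examples):
    """Return the training example closest to the query in feature space,
    i.e. the one that best demonstrates why the model decided as it did."""
    return min(examples, key=lambda name: math.dist(query, examples[name]))

print(best_demonstrating_example([0.82, 0.18], train_examples))  # dog_photo_42
```

The explanation here is indirect: the system does not articulate its reasoning, it simply points at precedent, the same way a person might justify a judgment by citing a similar case.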

Darrell, without a second thought, said, Sure — but you could make it explainable by once again lashing two deep neural networks together, one to do the task and one to describe it.
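The "two networks lashed together" architecture can be caricatured as a pipeline: one model produces the decision, a second model, trained separately, produces a description of that decision. Both functions below are crude stand-ins invented for illustration; real versions would each be deep networks.

```python
def task_network(features):
    # Stand-in for the first deep net: maps input features to a decision.
    return "dog" if features[0] > 0.5 else "cat"

def describer_network(features, decision):
    # Stand-in for the second deep net: trained to put the first net's
    # behavior into words. Here it just names the dominant input feature.
    salient = max(range(len(features)), key=lambda i: features[i])
    return f"labeled '{decision}' because feature {salient} dominated"

features = [0.9, 0.2]
decision = task_network(features)
print(describer_network(features, decision))
```

The catch, which the article's framing implies, is that the describer is itself an opaque model: its account of the task network is a learned guess, not a readout of the actual computation.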

Just like old-fashioned neural nets, deep neural networks seek to draw a link between an input on one end (say, a picture from the internet) and an output on the other end (“This is a picture of a dog”).
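At its smallest scale, that input-to-output link is a weighted sum squashed into a score. The sketch below is a single artificial neuron, not a deep network, and its weights are arbitrary toy values standing in for what training would learn.

```python
import math

# Toy weights and bias: assumed values, in place of learned parameters.
weights = [1.5, -2.0]
bias = 0.1

def predict(features):
    # Weighted sum of the input features, squashed to a probability...
    score = sum(w * x for w, x in zip(weights, features)) + bias
    prob = 1 / (1 + math.exp(-score))
    # ...then thresholded into a label on the other end.
    return "dog" if prob > 0.5 else "not a dog"

print(predict([0.8, 0.1]))  # dog
```

A deep network stacks many layers of such units, which is precisely why the link between input and output becomes hard to inspect: the decision is smeared across millions of weights rather than a few.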

The idea was to connect leading A.I. researchers with experts in data visualization and human-computer interaction to see what new tools they might invent to find patterns in huge sets of data.

The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I.

Then he poured that data into an open-source facial-recognition algorithm — a so-called deep neural network, built by researchers at Oxford University — and asked it to find correlations between people’s faces and the information in their profiles.

