
Explainable Artificial Intelligence

Artificial Intelligence (AI) systems have already proven to be quite useful to humans – from driving our cars to landing rovers on Mars. However, in some applications, both existing and envisioned, the current paradigm of building AI systems falls short.

Let us examine the example of an AI tutor that helps humans learn a new language. Although Google Translate doesn’t always do as well as our bilingual cousin, there is no doubt that such AI systems are getting better every day. It’s not hard, then, to imagine an AI that will one day help us learn languages. The direct benefit of having AI tutors will be that everyone has access to affordable, one-on-one education on their own schedule.

We envisioned and implemented such an AI tutor that teaches learners American Sign Language. A majority of sign language users are deaf and face social isolation due to language barriers. A smartphone AI system that can quickly and effectively teach sign language to their friends and family would help mitigate these issues to some extent, at least for those who are willing but currently unable to learn.

Learning, especially language learning, is an inherently interactive process in which feedback plays a vital role. While watching videos on YouTube might help you pick up a few signs, the lack of feedback hinders true learning. In theory, an AI system that teaches sign language can show a video of a sign, ‘watch’ a person perform it, and then give constructive feedback to aid the learning process.

A fundamental challenge to designing this is the ‘feedback’ bit. Current AI systems can indeed ‘watch’ a video and decide whether the sign performed was correct or not, but they have no way of telling ‘why’ it was incorrect. This is because the behavior of AI, unlike that of humans, is often not explainable. For instance, an AI can easily tell, with 98% confidence at that, that a given picture has a ‘cat’ in it, but ask it ‘why’ it thinks so, and it may not have a compelling answer.
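To make this concrete, here is a minimal sketch of what a typical classifier actually outputs: a softmax turns raw scores into a confidence, and nothing more. The labels and scores below are invented for illustration; a real model would produce the scores from learned weights.

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from an image classifier for three labels.
labels = ["cat", "dog", "bird"]
logits = [4.2, 0.3, -1.0]

probs = softmax(logits)
best_label, best_prob = max(zip(labels, probs), key=lambda p: p[1])
print(f"Prediction: {best_label} with confidence {best_prob:.0%}")
# The model reports a confidence, but nothing in this output says *why*
# the score for 'cat' was high -- that reasoning is buried in the weights.
```

The confidence number is easy to produce; an explanation of the decision is not, and that gap is exactly what XAI tries to close.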

Similarly, an AI sign language tutor may be able to tell you with reasonable confidence that you don’t know the sign for ‘cat’, but can it tell you exactly what your mistake was? In our project, learn2sign, we develop an AI system that can.

There are many issues related to building Explainable AI (XAI). The most fundamental is that we rarely understand exactly how or why these systems work.

Is the learning algorithm essentially seeing a world that corresponds to our own? Although we cannot explicitly program our AI systems, we should lay down a framework for them to follow so that, at a high level, we know why they are making the decisions they are making.

One approach to XAI is to build two systems: one for making the decision and another for explaining it. A second approach is to show examples similar to a given query and pass the burden of figuring out why the attempt was incorrect back to the learner. Yet another approach is to build the recognition system in a modular way so that explanations arise naturally.
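As a toy illustration of the second, example-based approach, here is a minimal nearest-neighbor sketch. Everything in it is invented for illustration: the 2-D feature vectors stand in for whatever representation a real system would extract from video, and the labels are placeholders.

```python
import math

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def explain_by_example(query, reference_signs, k=2):
    """Return the k reference performances closest to the query.

    Instead of stating *why* the attempt was wrong, the system shows
    the learner the most similar correct examples and lets them compare.
    """
    ranked = sorted(reference_signs,
                    key=lambda r: euclidean(query, r["features"]))
    return [r["label"] for r in ranked[:k]]

# Toy 2-D features (e.g., summarizing hand shape and movement).
references = [
    {"label": "cat (expert demo A)", "features": (0.9, 0.1)},
    {"label": "cat (expert demo B)", "features": (0.8, 0.2)},
    {"label": "dog (expert demo)",   "features": (0.1, 0.9)},
]

learner_attempt = (0.7, 0.3)
closest = explain_by_example(learner_attempt, references)
print(closest)
```

The appeal of this approach is that it requires no extra explanation model: the ‘explanation’ is simply the retrieved examples, ranked by similarity.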

There are benefits and trade-offs to each of these approaches. At Impact Lab, we are studying these trade-offs in order to design better-performing XAI systems.

Contacts:

Prajwal Paudyal ppaudyal at asu dot edu

Ayan Banerjee abaner3 at asu dot edu