A New Study Finds People Prefer Robots That Explain Themselves


Artificial intelligence is entering our lives in many ways – on our smartphones, in our homes, in our cars. These systems can help people make appointments, drive and even diagnose illnesses. But as AI systems continue to serve important and collaborative roles in people’s lives, a natural question is: Can I trust them? How do I know they will do what I expect?

Explainable AI (XAI) is a branch of AI research that examines how artificial agents can be made more transparent and trustworthy to their human users. Trustworthiness is essential if robots and people are to work together. XAI seeks to develop AI systems that humans find trustworthy – while still performing their designed tasks well.

At the Center for Vision, Cognition, Learning, and Autonomy at UCLA, we and our colleagues are interested in what factors make machines more trustworthy, and how well…
