Interaction Design for Explainable AI

OzCHI 2018 Workshop

Tuesday December 4th 2018 in Room 80.11.06 of RMIT Building 80

OzCHI 2018 Website

As artificial intelligence (AI) systems become increasingly complex and ubiquitous, they will be responsible for making decisions that directly affect individuals and society as a whole. Such decisions will need to be justified, both for ethical reasons and to maintain trust, but this has become difficult due to the 'black-box' nature many AI models have adopted. Explainable AI (XAI) can potentially address this problem by explaining the system's actions, decisions and behaviours to users. However, much research in XAI is done in a vacuum, relying only on the researchers' intuition of what constitutes a 'good' explanation while ignoring interaction and the human aspect.

This workshop invites researchers in the HCI community and related fields to engage in a discourse on human-centred approaches to XAI rooted in interaction, and to shed light on and spark discussion about interaction design challenges in XAI.

Invited Speaker: Associate Professor Tim Miller

Title: Explanation in Artificial Intelligence: Insights from the Social Sciences

Abstract:

In his seminal book 'The Inmates are Running the Asylum: Why High-Tech Products Drive Us Crazy And How To Restore The Sanity' [2004, Sams Indianapolis, IN, USA], Alan Cooper argues that a major reason why software is often poorly designed (from a user perspective) is that programmers are in charge of design decisions, rather than interaction designers. As a result, programmers design software for themselves, rather than for their target audience, a phenomenon he refers to as the 'inmates running the asylum'. I argue that the current hot topic of 'explainable artificial intelligence' risks a similar fate. Explainable artificial intelligence is the study of techniques to help people understand why algorithms have made particular decisions, with the aim of increasing trust in and transparency of systems employing these algorithms. While the re-emergence of explainable AI is positive, most of us as AI researchers are building explanatory agents for ourselves, rather than for the intended users. But explainable AI is more likely to succeed if researchers and practitioners understand, adopt, implement, and improve models from the vast and valuable bodies of research in philosophy, psychology, and cognitive science; and if evaluation of these models is focused more on people than on technology.

In this talk, I will demonstrate that most work in explainable AI is ignorant of the social sciences, argue why this is bad, and present several insights from the social sciences that are important for explanation in any subfield of artificial intelligence. The talk will be accessible to a general audience, and I hope it will be of particular interest to people working in artificial intelligence, social/cognitive science, or interaction design.

Bio:

Tim Miller is an associate professor in the School of Computing and Information Systems at The University of Melbourne. His primary area of expertise is artificial intelligence, with a particular emphasis on:

    • Human-AI interaction and collaboration
    • Explainable Artificial Intelligence (XAI)
    • Decision making in complex, multi-agent environments
    • Reasoning about action and knowledge using automated planning

His research is at the intersection of artificial intelligence, interaction design, and cognitive science/psychology. His areas of educational expertise are artificial intelligence, software engineering, and technology innovation. He also has extensive experience developing novel and innovative solutions with industry and defence collaborators, and is a member of the AI and Autonomy Lab in the school.

Accepted Papers

  1. But Why? Generating Narratives Using Provenance, Steven Wark, Marcin Nowina-Krowicki, Crisrael Lucero, Douglas Lange
    • Steven Wark, Marcin Nowina-Krowicki, Defence Science & Technology Group, Edinburgh, SA 5111, Australia.
    • Crisrael Lucero, Douglas Lange, Space and Naval Warfare Systems Center Pacific, San Diego, CA 92110, USA.
  2. Demand-Driven Transparency For Monitoring Intelligent Agents, Mor Vered, Piers Howe, Tim Miller, Liz Sonenberg, Eduardo Velloso
    • Mor Vered, Tim Miller, Liz Sonenberg, Eduardo Velloso, School of Computing and Information Systems, University of Melbourne, Australia.
    • Piers Howe, Melbourne School of Psychological Sciences, University of Melbourne, Australia.
  3. Transparency and Opacity in AI Systems: An Overview, Abdulrahman Baqais, Zubair Baig, Marthie Grobler
    • Abdulrahman Baqais, Freelance Researcher, Dhahran, Saudi Arabia.
    • Zubair Baig, Marthie Grobler, CSIRO, Data61, Melbourne, Australia.
  4. A Survey of Interpretable AI, Aneesha Bakharia
    • Aneesha Bakharia, Institute of Teaching and Learning Innovation, The University of Queensland, Australia.
  5. Designing Explainable AI Interfaces through Interaction Design Techniques, Joshua Newn, Ronal Singh, Prashan Madumal, Eduardo Velloso, Frank Vetere
    • Joshua Newn, Ronal Singh, Prashan Madumal, Eduardo Velloso, Frank Vetere, Microsoft Research Centre for Social Natural User Interfaces, School of Computing and Information Systems, University of Melbourne, Australia.
  6. A Grounded Dialog Model for Explainable Artificial Intelligence, Prashan Madumal, Tim Miller, Frank Vetere, Liz Sonenberg
    • Prashan Madumal, Frank Vetere, Microsoft Research Centre for Social Natural User Interfaces, School of Computing and Information Systems, University of Melbourne, Australia.
    • Tim Miller, Liz Sonenberg, School of Computing and Information Systems, University of Melbourne, Australia.

Program

Tuesday December 4, 9am - 5pm in Room 80.11.06 of RMIT Building 80

Time Paper/Event

9:00 - 9:10am Introduction

9:10 - 9:40am But Why? Generating Narratives Using Provenance

9:40 - 10:10am Demand-Driven Transparency For Monitoring Intelligent Agents

10:10 - 10:40am Coffee

10:40 - 11:10am Transparency and Opacity in AI Systems: An Overview

11:10 - 12:10pm INVITED TALK: Explanation in Artificial Intelligence: Insights from the Social Sciences

12:10 - 2:00pm Lunch

2:00 - 2:30pm Designing Explainable AI Interfaces through Interaction Design Techniques

2:30 - 3:00pm A Survey of Interpretable AI

3:00 - 3:30pm Coffee

3:30 - 4:00pm A Grounded Dialog Model for Explainable Artificial Intelligence

4:00 - 5:00pm Discussion

Call for Papers

The aim of this workshop is to address topics related to the design of human-computer interfaces for XAI. Effective knowledge transfer through an explanation depends on a combination of explanation dialogues, psychological and philosophical theories of explanation, and interfaces that can accommodate explanations. We aim to explore explanation interfaces that enable users to understand and interact with intelligent systems, ultimately promoting trust between users and intelligent systems.

We welcome multidisciplinary contributions that inform or intersect with XAI. These include but are not limited to:

    • Human-human interaction
    • Human-computer interfaces
    • Interaction design
    • Multimodal interaction for explanation
    • Theoretical approaches to explainability
    • Transparent AI
    • Fairness, accountability, and trust

We also seek submissions that contribute to answering some of the fundamental questions raised by other researchers in the field, such as:

    1. What is an explanation? What should they look like?
    2. What, when and how to explain?
    3. How to evaluate explanations and the way they are provided?

Submissions

Formatting guidelines:

We encourage participants to submit a paper (2-4 pages) describing their work on one or more of the topics mentioned above. Please use the CHI Extended Abstracts Format for formatting your paper. Papers will be peer reviewed and selected based on originality and relevance to the workshop themes. We particularly welcome interdisciplinary research.

Submission deadline: 19/10/2018

Notification of acceptance: 26/10/2018

Submission Email: xaiozchi2018@gmail.com

Register for OzCHI 2018 workshops here.

Date and Venue

Tuesday December 4th 2018 in Room 80.11.06 of RMIT Building 80

Questions?

Please send all emails to: xaiozchi2018@gmail.com

Organisers

Prashan Madumal

PhD Candidate

AI and Autonomy Lab

School of Computing and Information Systems,

University of Melbourne.

Dr Ronal Singh

Research Fellow

AI and Autonomy Lab

School of Computing and Information Systems,

University of Melbourne

Joshua Newn

PhD Candidate

Interaction Design Lab

School of Computing and Information Systems,

University of Melbourne

Prof Frank Vetere

Director

Interaction Design Lab

School of Computing and Information Systems,

University of Melbourne

References

[1] Malin Eiband, Hanna Schneider, Mark Bilandzic, Julian Fazekas-Con, Mareike Haug, and Heinrich Hussmann. 2018. Bringing Transparency Design into Practice. In 23rd International Conference on Intelligent User Interfaces. ACM, 211–223.

[2] Jonathan Dodge, Sean Penney, Claudia Hilderbrand, Andrew Anderson, and Margaret Burnett. 2018. How the Experts Do It: Assessing and Explaining Agent Behaviors in Real-Time Strategy Games. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Article 562, 12 pages. https://doi.org/10.1145/3173574.3174136

[3] Finale Doshi-Velez and Been Kim. 2017. Towards A Rigorous Science of Interpretable Machine Learning. (2017), 1–13. arXiv:1702.08608 http://arxiv.org/abs/1702.08608

[4] Maria Fox, Derek Long, and Daniele Magazzeni. 2017. Explainable Planning. IJCAI - Workshop on Explainable AI (2017).

[5] David Gunning. [n. d.]. Explainable Artificial Intelligence (XAI). DARPA/I2O, 1–18.

[6] Prashan Madumal, Tim Miller, Frank Vetere, and Liz Sonenberg. 2018. Towards a Grounded Dialog Model for Explainable Artificial Intelligence. arXiv e-prints (June 2018). arXiv:cs.AI/1806.08055

[7] Tim Miller. 2017. Explanation in Artificial Intelligence: Insights from the Social Sciences. (2017). arXiv:1706.07269 http://arxiv.org/abs/1706.07269

[8] Changhoon Oh, Jungwoo Song, Jinhan Choi, Seonghyeon Kim, Sungwoo Lee, and Bongwon Suh. 2018. I Lead, You Help but Only with Enough Details: Understanding User Experience of Co-Creation with Artificial Intelligence. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Article 649, 13 pages. https://doi.org/10.1145/3173574.3174223

News

  • (2018 November 25): Program now available.
  • (2018 November 25): Titles of accepted papers available online.
  • (2018 September 10): Paper submission deadline and process announced.
  • (2018 August 12): Web page up.