Ofra Amir did her PhD in computer science at Harvard, where she studied the interactions between humans and intelligent machines. In this episode we talk with Ofra about designing algorithms whose goal is to improve human performance in a given task, how to design metrics when some of your goals are not easily measurable, and how to explain the decisions of intelligent agents to humans.
Resources for the episode:
Explainable AI:
Last year's workshop on explainable AI:
http://home.earthlink.net/~dwaha/research/meetings/ijcai17-xai/
Last year's NIPS symposium on interpretable ML:
http://interpretable.ml/
ICML 2016's workshop on interpretability:
https://arxiv.org/html/1607.02531v2
The paper we talked about that summarizes an agent's actions:
https://drive.google.com/file/d/11BfYioYLDzTxsCr0QkMAYGPUDh-qW-6u/view
Human-Computer Interaction and AI:
A nice paper about the connection between HCI and AI:
https://pdfs.semanticscholar.org/e22b/e3642660d6a779e477124cae7cbfdfa5b0a5.pdf
A paper from a workshop about usable AI:
http://www.eecs.harvard.edu/~kgajos/papers/2008/kgajos-UsableAI08.pdf
A classic paper about mixed-initiative interfaces (interfaces that use some form of AI/ML), addressing issues such as how to handle uncertainty about user intent:
http://erichorvitz.com/chi99horvitz.pdf