CHI 2022 Conference Report

By Os Keyes, HCDE PhD student

Three years into the pandemic, the only certainty I’ve achieved is that I’m never getting used to remote conferences. This year’s CHI was hybrid – attendance was possible both remotely and in person – so in theory I could have been present in the flesh. In practice, with COVID numbers spiking and only a single workshop paper bringing me there, I chose to present from a distance, so I can’t really comment on the conference as a whole. The workshop in question was TRAIT: Trust and Reliance in Human-AI Teams. As the name suggests, it tackled the fundamental questions of trust involved in the deployment of AI: how is trust produced? When is it good, or bad? And what even is trust, anyway?

My paper, in collaboration with Ahmer Arif, was on the nature of trust (and particularly the role of vulnerability) – definitely on the more philosophical end of the presentations! But the breadth of the presented papers, covering everything from that philosophy to questions as applied and specific as “Shaping Trust in Machine Translation Suggestions Through AI-Assisted Context Building”, was a strength, not a weakness. We got a wide range of questions and feedback, and this is where virtual, written Q&A really shone as a format, far more than the in-person equivalent: presenters could leave with a record of the questions and answers to refer back to, which is particularly useful given the work-in-progress nature of workshop submissions.

Still: looking at the conference overall put something of a damper on my enthusiasm. The interest in trust within HCI doesn’t come from a vacuum; it’s motivated in part by the worry that other communities lack trust in us and our work, and that this might be for good reason. Addressing these concerns is laudable – but the decision-making around CHI, and its hosting, only validates them. Almost as soon as the conference was over, it became clear that the in-person element had produced a vast COVID outbreak – a possibility organisers had been warned about, and one that had kept many people (particularly disabled people) away in the first place. That those concerns were not taken seriously enough to change the plan, and that the result was hundreds of infections and an unknown number of serious outcomes as attendees headed back to their communities and campuses, feels like a microcosm of the same frustrating problems of power and minimisation that lead to such distrust in technology, and in technologists.