
HCDE research at CHI 2020


January 29, 2020

The 2020 ACM Conference on Human Factors in Computing Systems (CHI 2020) will be held April 25–30 in Oahu, Hawaii.

The annual CHI conference brings together researchers and practitioners to discuss the latest in interactive technology. Several faculty and PhD students from the Department of Human Centered Design & Engineering have been selected to present their latest research at this year's conference. Find details of papers authored by HCDE researchers below.

ABSTRACT
Smart speakers have become pervasive in family homes, creating the potential for these devices to influence parent-child dynamics and parenting behaviors. We investigate the impact of introducing a smart speaker to 10 families with children, over four weeks. We use pre- and post-deployment interviews with the whole family and in-home audio capture of parent-child interactions with the smart speaker for our analysis. Despite the smart speaker causing occasional conflict in the home, we observed that parents leveraged the smart speaker to further parenting goals. We found three forms of influence the smart speaker has on family dynamics: 1) fostering communication, 2) disrupting access, and 3) augmenting parenting. All of these influences arise from a communally accessible, stand-alone voice interface which democratizes family access to technology. We discuss design implications in furthering parenting practices and behaviors as the capabilities of the technology continue to improve.

 

ABSTRACT
Current approaches to AI and Assistive Technology (AT) often foreground task completion over other encounters such as expressions of care. Our paper challenges and complements such task-completion approaches by attending to the care work of access—the continual affective and emotional adjustments that people make by noticing and attending to one another. We explore how this work impacts encounters among people with and without vision impairments who complete tasks together. We find that bound up in attempts to get things done are concerns for one another and how well people are doing together. Reading this work through emerging disability studies and feminist STS scholarship, we account for two important forms of work that give rise to access: (1) mundane attunements and (2) non-innocent authorizations. Together these processes work as sensitizing concepts to help HCI scholars account for the ways that intelligent ATs both produce access while sometimes subverting people with disabilities.

 

ABSTRACT
We present a qualitative study with 16 deaf and hard of hearing (DHH) participants examining reactions to smartwatch-based visual + haptic sound feedback designs. In Part 1, we conducted a Wizard-of-Oz (WoZ) evaluation of three smartwatch feedback techniques (visual alone, visual + simple vibration, and visual + tacton) and investigated vibrational patterns (tactons) to portray sound loudness, direction, and identity. In Part 2, we visited three public or semi-public locations where we demonstrated sound feedback on the smartwatch in situ to examine contextual influences and explore sound filtering options. Our findings characterize uses for vibration in multimodal sound awareness, both for push notification and for immediately actionable sound information displayed through vibrational patterns (tactons). In situ experiences caused participants to request sound filtering—particularly to limit haptic feedback—as a method for managing soundscape complexity. Additional concerns arose related to learnability, possibility of distraction, and system trust. Our findings have implications for future portable sound awareness systems.

 

ABSTRACT
We introduce HomeSound, an in-home sound awareness system for Deaf and hard of hearing (DHH) users. Similar to the Echo Show or Nest Hub, HomeSound consists of a microphone and display, and uses multiple devices installed in each home. We iteratively developed two prototypes, both of which sense and visualize sound information in real-time. Prototype 1 provided a floorplan view of sound occurrences with waveform histories depicting loudness and pitch. A three-week deployment in four DHH homes showed an increase in participants’ home- and self-awareness but also uncovered challenges due to lack of line of sight and sound classification. For Prototype 2, we added automatic sound classification and smartwatch support for wearable alerts. A second field deployment in four homes showed further increases in awareness but misclassifications and constant watch vibrations were not well received. We discuss findings related to awareness, privacy, and display placement and implications for future home sound awareness technology.

 

ABSTRACT
Leveraging existing popular games such as Pokémon GO to promote health can engage people in healthy activities without sacrificing gaming appeal. However, little is known about what potential tensions arise from incorporating new health-related features to already existing and popular games and how to resolve those tensions from players’ perspectives. In this paper, we identify design tensions surrounding the appeals of Pokémon GO, perspectives on different health needs, and mobile health technologies. By conducting surveys and design workshops with 20 avid Pokémon GO players, we demonstrate four design tensions: (1) diverse goals and rewards vs. data accuracy, (2) strong bonds between players and characters vs. gaming obsession, (3) collaborative play vs. social anxiety, and (4) connection of in-real-life experiences with the game vs. different individual contexts. We provide design implications to resolve these tensions in Pokémon GO and discuss how to extend our findings to the broader context of health promotion in location-based games.

 

ABSTRACT
Beyond being the world’s largest social network, Facebook is for many also one of its greatest sources of digital distraction. For students, problematic use has been associated with negative effects on academic achievement and general wellbeing. To understand what strategies could help users regain control, we investigated how simple interventions to the Facebook UI affect behaviour and perceived control. We assigned 58 university students to one of three interventions: goal reminders, removed newsfeed, or white background (control). We logged use for 6 weeks, applied interventions in the middle weeks, and administered fortnightly surveys. Both goal reminders and removed newsfeed helped participants stay on task and avoid distraction. However, goal reminders were often annoying, and removing the newsfeed made some fear missing out on information. Our findings point to future interventions such as controls for adjusting types and amount of available information, and flexible blocking which matches individual definitions of ‘distraction’.

 

ABSTRACT
Users are fundamental to HCI. However, little is known about how HCI education introduces students to working with users, particularly those different from themselves. To better understand design students’ engagement, reactions, and reflections with users, we investigate a case study of a graduate-level 10-week prototyping studio course that partnered with a children’s co-design team. HCI students participated in two co-design sessions with children to design a STEM learning experience for youth. We conducted participant observations, interviews with 14 students, and analyzed final artifacts. Our findings demonstrate the communication challenges and strategies students experienced, how students observed issues of power dynamics, and students’ perceived value in engaging with users. We contribute empirical evidence of how HCI students directly interact with target users, principles for reflective HCI pedagogy, and highlight the need for more intentional investigation into HCI educational practice.

 

ABSTRACT
Automatically generated explanations of how machine learning (ML) models reason can help users understand and accept them. However, explanations can have unintended consequences: promoting over-reliance or undermining trust. This paper investigates how explanations shape users’ perceptions of ML models with or without the ability to provide feedback to them: (1) does revealing model flaws increase users’ desire to “fix” them; (2) does providing explanations cause users to believe—wrongly—that models are introspective, and will thus improve over time. Through two controlled experiments—varying model quality—we show how the combination of explanations and user feedback impacted perceptions, such as frustration and expectations of model improvement. Explanations without opportunity for feedback were frustrating with a lower quality model, while interactions between explanation and feedback for the higher quality model suggest that detailed feedback should not be requested without explanation. Users expected model correction, regardless of whether they provided feedback or received explanations.

 

ABSTRACT
We present Jubilee, an open-source hardware machine with automatic tool-changing and interchangeable bed plates. As digital fabrication tools have become more broadly accessible, tailoring those machines to new users and novel workflows has become central to HCI research. However, the lack of hardware infrastructure makes custom application development cumbersome. We identify a need for an extensible platform to allow HCI researchers to develop workflows for fabrication, material exploration, and other applications. Jubilee addresses this need. It can automatically and repeatably change tools in the same operation. It can be built with a combination of simple 3D-printed and readily available parts. It has several standard head designs for a variety of applications including 3D printing, syringe-based liquid handling, imaging, and plotting. We present Jubilee with a comprehensive set of assembly instructions and kinematic mount templates for user-designed tools and bed plates. Finally, we demonstrate Jubilee’s multi-tool workflow functionality with a series of example applications.

 

ABSTRACT
Wayfinding is a critical but challenging task for people who have low vision, a visual impairment that falls short of blindness. Prior wayfinding systems for people with visual impairments focused on blind people, providing only audio and tactile feedback. Since people with low vision use their remaining vision, we sought to determine how audio feedback compares to visual feedback in a wayfinding task. We developed visual and audio wayfinding guidance on smartglasses based on de facto standard approaches for blind and sighted people and conducted a study with 16 low vision participants. We found that participants made fewer mistakes and experienced lower cognitive load with visual feedback. Moreover, participants with a full field of view completed the wayfinding tasks faster when using visual feedback. However, many participants preferred audio feedback because of its shorter learning curve. Based on our findings, we propose design guidelines for wayfinding systems for low vision.