HCDE students earn second place in global IBM Service Design Challenge

Leah Pistorius
November 18, 2025

A team of HCDE master’s students has earned second place in the IBM Challenge as part of the 2025 Student Service Design Challenge, an international competition inviting students to design services that advance equality and inclusivity. Their project, Echo, addresses a growing need for transparency and ethical communication around the use of AI in speech-language pathology.

Echo provides a clear, accessible way for speech-language pathologists to explain how AI tools are used in therapy. The students identified a significant gap in patient understanding of emerging technologies and designed the platform to strengthen trust and support more collaborative decision-making between practitioners and the people they serve.

A challenge centered on equality and inclusivity

This year’s Student Service Design Challenge encouraged participants to rethink outdated systems, confront non-inclusive business models, and imagine services aligned with the UN Sustainable Development Goals.

HCDE master’s students Inhauck Choi, Alex Chung, Yu-Jie (Janet) Chen, Jen Zhang, and Gahui Yun received second place in the IBM Challenge of the 2025 Student Service Design Challenge for their project, Echo.

Echo stood out to the jury for its well-researched foundation, emphasis on trust-building, and attention to communities that often experience uncertainty or bias in healthcare technologies. The team centered their work on demystifying AI and ensuring that patients, particularly those from immigrant backgrounds, have the information they need to make informed decisions.

Grounded in research and real-world needs

Over the course of six months, HCDE students Yu-Jie (Janet) Chen, Alex Chung, Jen Zhang, Gahui Yun, and Inhauck Choi investigated AI bias in healthcare using a range of qualitative and quantitative research methods. Recognizing that AI in healthcare is both complex and highly technical, they began by interviewing stakeholders to map the broader ecosystem and identify challenges.

They then shifted to the local context, conducting interviews with professionals at the UW Speech and Hearing Clinic to understand how AI is currently integrated into speech-language pathology. These conversations helped them surface clinicians’ pain points, gaps in ethical guidance, and opportunities for clearer communication with patients.

Their research revealed consistent patterns: patients often encounter AI tools whose role in their care is unclear, have limited channels to raise concerns, and bring low AI literacy and trust to the experience. Clinicians, meanwhile, face time constraints, incomplete intake information, ethical uncertainty around AI usage, and unclear guidance on the limitations of emerging tools. Privacy risks and a lack of transparency were concerns shared by both groups.

Based on these findings, they developed a prototype of Echo with key service features and held a co-design workshop with a speech-language pathologist to refine the platform.

Designing toward a more equitable future

Echo’s final design supports both patients and clinicians: it gives patients a clear, accessible way to understand how AI is used in their care, and gives clinicians a structured, ethical process for communicating about AI within existing time and administrative constraints.

Intake form screenshot 

Echo's intake form is flexible, patient-friendly, and guided. It lets patients enter not only basic information but also detailed context about their multilingual background.

AI preferences on the intake form screenshot 

Unlike in the current experience, patients gain more agency over how AI is involved in their treatment by describing their preferences directly on the intake form.

Clinician interface screenshot 

On the clinician side, practitioners can easily access patients' contextual information and AI preferences.

Client profile screenshot 

Echo flags potential biases and risks related to each diagnostic tool and software platform, based on the information gathered during the intake process.

Echo insights interface screenshot

Clinicians can openly share their insights about different AI tools to guide one another, which can ultimately provide valuable data to help AI developers reduce bias.

Reflecting on their experience, the team shared: “Through multiple rounds of research and iteration, we deepened our understanding of what it takes to build human-centred AI in healthcare. Interviews revealed real-world gaps and design opportunities that shaped our direction and taught us to navigate complexity by grounding ideas in real needs. With invaluable guidance from mentors, this challenge helped us grow as a team to create meaningful impact.”

Watch a video about Echo here: 
