
David McDonald's Research Group Archives

This page contains an archive of the past five years of Directed Research Groups led by Professor McDonald.


Winter 2025

Temporality in Crisis Communication on TikTok

Directed by PhD candidate Julie Vera with support from Dr. David McDonald and Dr. Mark Zachry

In this DRG, we will investigate how users make sense of crisis events using TikTok comments by focusing on the temporal aspects of information seeking and sharing. Our research will explore how users navigate and understand emergent events in situations where information may not appear chronologically and/or lacks context. We also aim to understand how users employ temporal anchors and other references to make sense of evolving situations. Results from this study will contribute to research at the intersection of crisis informatics, social media studies, and collective sensemaking.

Students participating can expect to:

  1. Analyze collected TikTok comments and videos related to crisis events using both qualitative and quantitative methods
  2. Help develop and refine a framework for categorizing the temporal dimensions of TikTok comments
  3. Participate in reliability testing and manual validation of the framework

*Note that due to the nature of this research, some media may contain distressing or sensitive material.
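To give a flavor of the framework-driven coding this work involves, here is a minimal sketch of rule-based detection of temporal anchors in comment text. The categories and patterns below are hypothetical illustrations, not the DRG's actual framework, which will be developed and refined during the quarter.

```python
import re

# Hypothetical temporal-anchor categories (illustrative only).
TEMPORAL_PATTERNS = {
    "relative": re.compile(
        r"\b(\d+\s+(?:minutes?|hours?|days?)\s+ago|yesterday|last\s+(?:night|week))\b",
        re.IGNORECASE,
    ),
    "deictic": re.compile(r"\b(now|currently|at the moment|as of)\b", re.IGNORECASE),
    "event_anchor": re.compile(r"\b(before|after|during|since)\s+the\b", re.IGNORECASE),
}

def tag_temporal_anchors(comment: str) -> list[str]:
    """Return the temporal categories whose patterns match the comment."""
    return [name for name, pattern in TEMPORAL_PATTERNS.items() if pattern.search(comment)]

print(tag_temporal_anchors("I saw this 2 hours ago, before the evacuation started"))
# → ['relative', 'event_anchor']
```

In practice such rule-based tags would only seed the qualitative analysis; the reliability testing and manual validation steps are what establish whether a category scheme actually holds up.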

Autumn 2024

Prototyping with LLMs to Inform Design of AI Applications

Instructors:

  • Meena Devii Muralikumar (PhD Candidate, HCDE)
  • David W. McDonald (Professor, HCDE)

In this DRG, we will explore how we can leverage LLMs to simulate AI capabilities and inform the user experience of AI applications.

Prototyping for AI applications is not straightforward. Even when we employ Wizard of Oz techniques, they might not realistically simulate AI failures. 
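One lightweight way a prototype might surface realistic failures, sketched here purely as an assumption rather than the DRG's actual method, is to wrap a stub model so that a configurable fraction of responses are confidently wrong, which a human "wizard" tends not to reproduce:

```python
import random

def simulated_classifier(text: str, failure_rate: float = 0.3, seed=None) -> str:
    """Stub 'AI' for prototyping: usually returns the expected label, but
    injects a wrong-but-confident label at a configurable rate, so the
    prototype exposes how the UI handles model errors."""
    rng = random.Random(seed)
    # Toy sentiment rule standing in for a real model (hypothetical).
    correct = "positive" if "love" in text.lower() else "negative"
    if rng.random() < failure_rate:
        # Failure mode: return the opposite label with no warning.
        return "negative" if correct == "positive" else "positive"
    return correct
```

A design team could then run usability sessions against this stub and observe how participants react when the "AI" is wrong, before any real model exists.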

Model development is informed by user needs and requirements. However, users cannot easily envision and/or discuss realistic needs without using something tangible – especially for AI. Prior work in the literature refers to this as a chicken-and-egg problem.  

How might LLMs help address this? What differences can we observe when we leverage LLMs in the design process versus when we do not?

We will undertake this line of inquiry by selecting a specific AI-based use case, conducting UX research, and evaluating prototypes. For this DRG, we will focus on the first half of the double diamond design process: defining the problem.

This DRG will resemble a project-based course. Since we are driven by this overarching research question, we will also read relevant research articles and maintain a design journal.

We are looking for 8-12 students. Students will work in a group of designers and researchers to address an AI-related design problem. We will meet once a week to work together, engage with each other's work, discuss, and provide feedback. Students will have the opportunity to i) add work done as part of this DRG to their portfolio/resumes and ii) inform the research inquiry.


Spring 2024

Exploring LLMs for UX Design Critique

Your faculty hosts:

  • Dr. Tyler Fox
  • Dr. David W. McDonald

Can an LLM generate a design critique? Well … sure it can. But is it any good? How would you know?

User experience professionals will soon encounter all types of tools driven by LLMs, GPT, or AI that will claim to make their jobs easier, provide feedback, and assess usability of their designs. Having a clear understanding of what these tools can or cannot provide will be important to being an effective professional in the new world of UX.

This DRG will explore how GPT and LLMs can be used to provide important design feedback on early-stage design artifacts. Participants in this DRG will work with their own early-stage designs, perhaps something they have already designed. Those designs will then be used to examine the quality and effectiveness of different types of critique generated by LLMs. Students will qualitatively evaluate the LLM-generated critique to understand the different qualities of LLM feedback. During the quarter, students will work to engineer a ChatGPT prompt that will provide a selected form of critique.
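As a sense of what prompt engineering for critique might look like, here is a hypothetical starting-point template; the structure, wording, and helper function are illustrative assumptions, not the prompts students will actually develop in the DRG:

```python
# Hypothetical prompt template for eliciting a structured design critique.
CRITIQUE_PROMPT = """\
You are an experienced UX design critic. Critique the early-stage design
described below using this structure:
1. Strengths: what works and why, grounded in usability heuristics.
2. Concerns: specific issues, each tied to a heuristic or user goal.
3. Questions: what you would ask the designer before iterating.
Keep feedback concrete and reference specific elements of the design.

Design description:
{design_description}
"""

def build_critique_prompt(design_description: str) -> str:
    """Fill the template with a participant's own design description."""
    return CRITIQUE_PROMPT.format(design_description=design_description)

print(build_critique_prompt("A login screen for a mobile banking app"))
```

Part of the research question is precisely whether a structure like this produces critique that is useful, and how one would evaluate that.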


Winter 2024

Can GPT detect harassment, bullying and hate speech?

Meena Muralikumar (HCDE PhD candidate) & David McDonald (HCDE Professor)

Supportive, informative, and civil interactions are often important to the growth and long-term viability of any online community. Unfortunately, there will be times when some individuals harass, bully, or target others. In those situations, moderating the community becomes an important task.

Content moderation is a moving target. Adapting to this challenge requires monitoring and updating an understanding of hate speech, toxicity, and harassment. Re-training a dedicated and specific machine learning model requires additional labor, money, and time. Using a Generative Pre-trained Transformer (GPT) Large Language Model (LLM) shows promise in adapting to this challenge because of its natural language generation capabilities. 

How might we leverage LLM capabilities to detect toxic/hate speech? Could one customize LLMs using prompt engineering and few-shot training to fulfill specific moderation policies or to conform to human judgements? How would it compare to human judgements of toxic/hate speech? We will explore such questions in this DRG.

In this DRG, we will be working with OpenAI's GPT-4. We will be exploring both the moderation endpoint and how to customize GPT-4 for content moderation. The main objective of this DRG is to compare our results with human judgements and/or other popular toxicity-detecting classifiers such as Perspective, primarily using quantitative analysis methods.
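One common way to quantify such comparisons is an agreement statistic like Cohen's kappa. The sketch below is a minimal, self-contained illustration; the label data is made up, and the DRG's actual analysis may use other measures:

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two raters over the same items.

    Kappa corrects raw percent agreement for the agreement expected
    by chance given each rater's marginal label frequencies.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical labels: human annotations vs. model (e.g. GPT-4) outputs.
human = ["toxic", "ok", "ok", "toxic", "ok", "ok"]
model = ["toxic", "ok", "toxic", "toxic", "ok", "ok"]
print(round(cohens_kappa(human, model), 3))  # 5/6 raw agreement → kappa 0.667
```

The same routine could compare GPT-4's labels against a classifier such as Perspective (after thresholding its scores into categories), which is exactly the kind of quantitative comparison the DRG targets.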

What students can learn from this DRG:

  • Programming in Python, using Jupyter Lab Notebooks
  • Introductory methods for quantitative data analysis
  • Prompt engineering techniques for ChatGPT

Skills that would allow you to be successful in this DRG include:

  • Prior coursework programming with Python
  • A statistical methods course 

Winter 2023

Designing with Large Language Models for Debugging Assistance

Instructors:

  • Dr. Colin Clement (Microsoft)
  • Dr. David W. McDonald

Being good at programming is partly a function of what you are taught in a course and partly the experiences you gain. Debugging a program when something goes wrong is often based on hard won experiences.

What if we could make some aspects of debugging easier?

This DRG will consider how to improve the debugging experiences of novice programmers using large language models (LLMs) such as OpenAI's Codex, which can answer questions and offer edit suggestions leveraging both natural language and source code.

Software flaws or errors sometimes generate 'exceptions', which often contain code context and dubiously helpful error messages. This DRG will use the knowledge retrieval and synthesis behaviors of LLMs to offer suggestions to overcome such errors quickly, inside the development environment.

In the DRG students will develop interactive prototypes for an IDE to capture exceptions, interact with an LLM and display possible solutions. These interactive prototypes will explore the possible user experiences that will help novice programmers overcome challenges and learn to unblock themselves.
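A first step such a prototype might take, sketched here as an assumption rather than the project's actual design, is to capture an exception together with its traceback and package it as a query an LLM debugging assistant could answer. The function name and prompt wording below are hypothetical:

```python
import traceback

def capture_exception_context(exc: Exception) -> str:
    """Package an exception and its traceback into a natural-language
    query; an IDE prototype would send this to an LLM and render the
    returned suggestions next to the failing code."""
    tb = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
    return (
        "A novice programmer hit the following Python exception.\n"
        "Explain the likely cause and suggest a minimal fix.\n\n"
        f"{tb}"
    )

try:
    numbers = [1, 2, 3]
    print(numbers[5])  # deliberate bug: index out of range
except Exception as e:
    prompt = capture_exception_context(e)
    print(prompt)
```

The interactive-prototype work in the DRG would then focus on when and how to surface the model's answer so a novice can actually act on it.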

Students who are the best fit for this DRG will minimally:

  • Have had 2+ programming courses
  • Have experience with prototyping techniques
  • Have used Python

Autumn 2022

Analyzing How UX Practitioners Communicate AI-Enabled Apps

Help us analyze design pitches for proof of concept AI-enabled apps! We’ve collected design pitches (e.g. slide decks, documents) created by UX professionals who we challenged to prototype and pitch a proof-of-concept app that uses AI. This DRG will be focused on analyzing those artifacts to better understand how practitioners communicate to stakeholders the promises and challenges of AI in the context of the UX design process. Through our analysis, we hope to identify recommendations to better support UX practitioners when they communicate designs for AI-enabled apps. The end goal of this DRG is to submit a paper to the ACM Designing Interactive Systems (DIS) conference in early 2023. 

In this DRG, you will learn how to:

  • Systematically and qualitatively analyze visual artifacts created by UX practitioners to communicate an AI-enabled app
  • Identify trends and insights from the analysis
  • Turn trends and insights into concrete guidelines for practitioners as well as an academic paper

The benefits for you are:

  • Getting first-hand exposure to working at the intersection of AI and UX
  • Learning how UX practitioners communicate technical concepts so you can incorporate them into your own work
  • Helping to create guidelines or recommendations that make direct contributions to HCI/UX industry and academic communities
  • Obtaining DRG credits for fall quarter

Spring 2022

Designing TikTok Videos to Explain Wikipedia

Led by: Julie Vera, PhD Student, HCDE 
With guidance from faculty advisors Professor David McDonald and Professor Mark Zachry

We are looking for:

  • Up to 40 undergraduate or masters students

  • Folks with experience or a strong interest in TikTok, video production, or visual storytelling

  • Nice to have:

    • Interest in Wikipedia or other collaborative knowledge platforms

    • Interest in science communication or communication for public audiences

    • Interest in the design of learning or how-to experiences

  • You do not have to be an expert on Wikipedia to participate!


About the DRG:

In this DRG, we will be thinking of new ways to introduce students to Wikipedia as a concept and platform. We will be designing TikTok videos that explain some important features and concepts of Wikipedia so that new users feel equipped to contribute. We will follow a flexible design process to create short-form videos that are informative as well as fun and engaging. Participants in the DRG can expect to be lightly onboarded onto Wikipedia.

Students participating in the DRG will:

  • Conceptualize ways to introduce high-school and college-aged students to Wikipedia via TikTok

  • Get (lightly) onboarded onto Wikipedia

  • Think about what concepts are important to people who are just joining the platform

  • Use a “how might we” approach to design video material that addresses important Wikipedia concepts

  • Storyboard potential video content and collaborate with other students on audio and visual components

  • Prototype TikTok videos for public consumption

  • Respond to weekly reflection prompts about Wikipedia, content ideas, and design process


Expectations:

  • Attend weekly meetings (in person; Mondays, 4:30-5:30pm)

  • Work in the DRG for 2 CR (6 total hours a week, including “class” time) 

  • Later in the quarter, we may choose to be remote and asynchronous due to the nature of the work


Please contact Julie (jvera@uw.edu) if you have any questions.