
Course Project


Developing a generative AI bias reporting tool through user research, design, and testing.

Role

UX Researcher

 

Timeline

12 Weeks

Skills

Prototyping
Storyboard
Interviews

Affinity Diagram
Task Flow
User Tests


Tools

Figma
Miro
Gemini
ChatGPT

Team

Elin Jiang
Emma Im
Kat Karl

 


Exploration

The Problem
Biased algorithms can reinforce existing societal prejudices, and users who have no way to flag bias in artificial intelligence risk perpetuating this inequality.

Currently, there is a lack of open platforms where people can voice their everyday encounters with bias online, and especially a platform where they can see others' encounters with bias as well.
As our reliance on AI intensifies, it becomes imperative to ensure fairness and equity in the design and deployment of these technologies. We must prioritize the development of generative AI systems that are not only technologically advanced but are also shaped by an ethical commitment to inclusivity and equity.
— Zhou et al. 2024
Exploration
The Current Model
CMU HCII professor Dr. Hong Shen developed the application WeAudit TAIGA as a tool to highlight and report biases in generative AI. We decided to address the pain points of the tool through user research and create a solution that is usable and engaging.

Exploration

Usability Tests
To highlight pain points, these tests followed a direct, straightforward framework: participants worked through instructions so we could verify that the basic functionalities were executable.
1. Users were confused about where to go each time they were assigned a task: the UI was neither self-explanatory nor straightforward.
 
2. The log-in process was unclear and required a lot of back-and-forth navigation.
 
3. Users confused the "create thread" and "image generation" functionalities with the "report" button.
 


Negative observations outnumbered positive ones 17 to 5 during the testing process, with 3 additional neutral insights.
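As a quick sketch, the observation counts above can be tallied to see what share of all logged insights fell into each category (the counts are from our sessions; the code itself is purely illustrative):

```python
from collections import Counter

# Observation counts logged across the usability-test sessions.
observations = Counter(negative=17, positive=5, neutral=3)
total = sum(observations.values())  # 25 logged insights

# Share of each category: negative observations dominate at 68%.
shares = {kind: count / total for kind, count in observations.items()}
print(f"negative share: {shares['negative']:.0%}")  # negative share: 68%
```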

 


There seemed to be confusion among both participants and test administrators regarding the appearance and disappearance of the threads feature.


Users need a clearer, more direct overview or guide that helps them understand the website's purpose and how to navigate and flow through it.

Exploration

Heuristic Evaluation
I applied heuristic evaluation principles to flag areas of the interface that do not match usability guidelines.
Visibility of System Status 



Presence of two search bars: unclear whether a search has been made or not
Help and Documentation 




Three-step documentation for generating images, with no resource to troubleshoot
User Control and Freedom





After generating an image, there is no way to save the thread or return to the main page
User Control and Freedom




Users can name threads, tag them, and provide a description before posting

So how might we...

Ensure that users can draw on their diverse backgrounds and experiences to evaluate bias in AI-generated images and effectively frame an argument about AI bias?

User Research

User Survey Analysis
From a large survey with over 1,000 participants, I examined how exploring AI image-generation interfaces can surface biases in the outputs, and how users felt about those biases.

Perceptions of bias in generative AI outputs are significantly higher among minority groups, with 45% more individuals identifying the results as generally harmful compared to the majority group.


There was a consistent distribution of responses across gender categories, indicating widespread awareness and concern about the potential harm of gender bias in generative AI outputs.
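To make the "45% more" comparison concrete, here is a small sketch of how such a relative difference is computed. The counts below are hypothetical placeholders chosen only to illustrate the arithmetic, not the survey's actual data:

```python
# Hypothetical response counts (placeholders, not the real survey numbers).
majority = {"harmful": 120, "total": 600}  # 20% rated outputs harmful
minority = {"harmful": 116, "total": 400}  # 29% rated outputs harmful

maj_rate = majority["harmful"] / majority["total"]
min_rate = minority["harmful"] / minority["total"]

# Relative increase of the minority-group rate over the majority-group rate:
# (0.29 - 0.20) / 0.20 = 0.45, i.e. "45% more".
relative_increase = (min_rate - maj_rate) / maj_rate
print(f"{relative_increase:.0%}")  # 45%
```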

User Research

Consolidating Research
To "Walk the Wall," I first revisited our previous work developing a background understanding of generative AI, considering how I could visualize data on generative AI biases and user representations.
 

Key Takeaways

Parallel functionality to social media platforms for intuitive use

Provide guidance and a statement of purpose to calibrate users' approach to the website

Feedback and efficiency with the UI

Keep the interface simple, cohesive, and engaging

User Research

Learning About Our Users
At this point, I wanted to better understand the motivations and experiences users bring to generative AI use cases. I conducted user interviews to develop a persona and user profile.

Methods: Directed Storytelling, Semi-Structured Interview

 
INSIGHT 1: Bias recognition increases with firsthand experience

INSIGHT 2: Biases in AI reflect existing societal issues

INSIGHT 3: Social media can both exacerbate bias in AI and educate people about it

INSIGHT 4: Solutions can be found in transparency

INSIGHT 5: Reporting bias in AI tools is not simple

Design

Defining the Product 


 

Ensure that users can draw on their diverse backgrounds and experiences to evaluate bias in AI-generated images and effectively frame an argument about AI bias through the website/service.


Goal


Form


 

A more accessible and navigable interface, such as an app modeled on social media, to engage users and incentivize sharing and communication.


Design

Sketches

Design

Wireframes

Design

User Tests

POSITIVES

  • Most users thought our buttons were very clear and that uploading an image, comparing prompts, and using a prompt to generate an image were all straightforward tasks

  • Users found the reporting feature intuitive and accessible

TO WORK ON

  • Suggested adding a side-by-side comparison feature for original and AI-generated images to easily spot discrepancies or biases

  • There was some uncertainty about what exactly users would flag as biased and what they should consider to be biased

Design

Final Prototype

Design

High Fidelity Designs

Reflection

What I Learned

Working With Teams

How best to handle differences in design choices or interpretations of data

 

Time management and organizing schedules

Deep Dive into the User

Using multiple methods simultaneously to get a better sense of use cases

 

Compiling notes from different models and evaluation metrics to make sense of the information

Effective Testing

Anticipating questions that participants may have during testing

Reflection

With More Time

Conduct more usability tests to identify further improvements to the flow of the website, especially the interaction with the community pages.


Explore different platforms, such as a website redesign or a browser/website add-on; integrating bias reporting across multiple platforms is important.


Implement a feedback method so that users are notified of how their reports are being used to make changes and prevent further bias in generative AI platforms.
