

Human-AI Trust & Safety Research

Designing Safer Self-Disclosure
in an AI Mental Health Chatbot

Balancing empathy and safety to create a supportive mental health
chatbot for adolescents.

My Role
Lead Researcher
Project
PhD research study
Timeline
6 Months
Methods
Interviews, Surveys, Usability Testing, Moderation Analysis


Research Context

The Problem

Adolescents often find it easier to open up in private, immediate, and non-judgmental spaces, which is part of what makes conversational AI so compelling in emotionally sensitive contexts. But that same openness can also create risk. This project explored how to design an AI mental health chatbot that feels supportive and approachable while still maintaining psychological safety, clear boundaries, and responsible interaction design.
User Behaviour
Adolescents may feel more comfortable opening up to a chatbot than in face-to-face conversation. Privacy, immediacy, and a non-judgmental interaction can make self-disclosure feel easier.
Design Challenge
The chatbot needed to feel warm and supportive without becoming emotionally overreaching or unclear in its role. Even small decisions in tone and flow could affect how safe the interaction felt.
Why it Matters
In sensitive AI interactions, design shapes what users share, what they expect, and whether the experience supports safer reflection or creates avoidable psychological risk.


Research Goals and My Role

Project Goals & Research Questions 

To guide our design, we outlined key questions aimed at ensuring the chatbot would be effective, safe, and trustworthy for adolescent users in sensitive contexts.

How can the chatbot build trust while ensuring conversations remain private and safe?
What type of responses feel most supportive and helpful to adolescents?
How can we prevent the chatbot from providing incorrect or harmful advice?
What measures can prevent overdependence on the chatbot for emotional support?

My Role & Scope

I led the end-to-end research and design process for the mental health chatbot, from understanding user needs to shaping a safe and supportive conversational experience. My focus was on balancing empathy, clarity, and responsible design.

I conducted one-to-one interviews with 50 adolescents and 4 mental health professionals to understand how comfortable adolescents felt sharing emotions with a chatbot, and how both groups perceived chatbot-based support.


I conducted a survey to understand adolescents’ comfort with emotional disclosure, trust in chatbot support, and expectations from emotionally sensitive AI interactions. This helped identify broader patterns that complemented the interview findings.


I created journey maps to track how adolescents moved through the chatbot experience and to identify moments of comfort, hesitation, and risk. This helped translate research into clearer design priorities.


I developed wireframes to turn research findings into a safer and more intuitive chatbot experience. They helped structure the flow, clarify prompts, and improve how support and safety cues were presented.


I conducted usability tests to evaluate how adolescents interacted with the chatbot and where they experienced confusion or discomfort. This helped validate the flow and refine the design for clarity and safety.


Design Process

I translated the research findings into a conversation flow that helped adolescents begin sharing gradually, with less pressure and more emotional clarity.

Define disclosure touchpoints
Mapped the key emotional and behavioural moments where users were most likely to hesitate, disclose, or disengage.
Map conversational architecture
Structured the conversation flow to improve sequencing, pacing, and cognitive clarity across the interaction.
Develop wireframe logic
Translated the flow into wireframes to test hierarchy, interaction states, and disclosure-support patterns.
Iterate for safety and comprehension
Refined the interface to improve emotional safety, reduce ambiguity, and strengthen response clarity.
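The staged flow described above can be illustrated with a minimal sketch. This is a hypothetical, simplified model for illustration only, not the study's actual implementation: the stage names and prompts are placeholders showing how disclosure touchpoints might be sequenced so that sharing deepens gradually rather than all at once.

```python
# Illustrative sketch of a staged-disclosure conversation flow.
# Stage names and prompts are hypothetical, not from the actual chatbot.

STAGES = [
    ("warm_up", "Hi, how has your day been so far?"),
    ("check_in", "Is there anything on your mind you'd like to talk about?"),
    ("disclose", "Thanks for sharing. Would you like to say a bit more about that?"),
    ("support", "That sounds hard. I'm a chatbot, so a trusted adult or counsellor can also help."),
]

def next_prompt(stage_index: int) -> tuple[int, str]:
    """Return the next stage index and its prompt.

    Pacing is enforced by advancing one stage at a time, so deeper
    disclosure prompts only appear after lighter ones.
    """
    index = min(stage_index, len(STAGES) - 1)  # stay on the final stage once reached
    _name, prompt = STAGES[index]
    return index + 1, prompt
```

In a sketch like this, each stage corresponds to one of the disclosure touchpoints mapped in the research, and the fixed ordering is what gives the interaction its gradual, low-pressure pacing.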