
My research work

Balancing Cognition & Emotion in VR-Based Language Learning: Ethical Design of Emotion-Aware Metaverse Environments

This chapter documents a research-led build of a VR language classroom that senses learner affect and adapts tasks in real time. It maps learner journeys, spots moments of confusion or anxiety, and tests whether emotion cues lift motivation without adding load. Methods include contextual interviews, co-design sessions, and in-headset think-aloud trials, paired with SUS, NASA-TLX, foreign-language-anxiety scales, and performance logs. The work also prototypes consent and data-transparency patterns and checks for bias across demographics. Findings land as practical design moves: steadier pacing, supportive prompts at the right moment, and ethical guardrails that help the experience feel trustworthy.
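To make the instrument scoring concrete, the sketch below applies the published scoring conventions for two of the measures named above, the System Usability Scale and raw (unweighted) NASA-TLX. This is a minimal sketch; the function names and data layout are illustrative, not taken from the chapter.

```python
# Standard SUS and raw NASA-TLX scoring conventions; names and layout
# here are illustrative assumptions, not artefacts of the study itself.

def sus_score(responses: list[int]) -> float:
    """System Usability Scale: 10 items rated 1-5, mapped to 0-100."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    odd = sum(r - 1 for r in responses[0::2])   # items 1,3,5,7,9
    even = sum(5 - r for r in responses[1::2])  # items 2,4,6,8,10
    return (odd + even) * 2.5

def nasa_tlx_raw(subscales: dict[str, float]) -> float:
    """Raw (unweighted) NASA-TLX: mean of the six 0-100 subscale ratings."""
    keys = {"mental", "physical", "temporal", "performance", "effort", "frustration"}
    if set(subscales) != keys:
        raise ValueError(f"expected subscales {sorted(keys)}")
    return sum(subscales.values()) / len(subscales)

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```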

[Image: Springer Nature logo]

Therapeutic LLMs in Mental Health: Evidence, Alignment Engineering, and SAFEE-Based Governance

A practical playbook for building and scaling mental health LLMs with real-world guardrails. It maps core use cases across psychoeducation, screening and triage, and intervention and support, then distils what the evidence says about symptom change, engagement, safety reporting, and costs. Common failure modes are converted into concrete mitigations, and the SAFEE framework is translated into measurable checks, ownership, and audit trails. A staged evaluation path closes the loop, from preclinical validation through pilots, randomised trials, implementation, and ongoing post-market monitoring.

[Image: Elsevier logo]

From Empowerment to Over-Exposure: The Double-Edged Sword of AI Self-Help Apps

A clear guide to the trade-offs in AI self-help apps for teens, showing how design choices can lift autonomy and skills or slide into surveillance-style personalisation, persuasive hooks, and over-reliance. It maps eight tipping points, then turns each risk into concrete safeguards: SAFEE-Plus policies, crisis escalation, minimal-data personalisation, notification caps, outcome-aligned recommenders, fairness audits, and opt-in controls. Practical metrics such as consent granularity, adaptive session limits, and crisis-response time let teams steer products toward empowerment with oversight.
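As an illustration of how one listed safeguard could be made operational, here is a hypothetical sketch of an adaptive session-limit check. Every threshold, field name, and trigger in it is an assumption for illustration; the chapter defines the metric, not this implementation.

```python
# Hypothetical sketch of the "adaptive session limits" safeguard named above.
# All thresholds, field names, and triggers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SessionPolicy:
    base_limit_min: int = 30     # assumed default daily cap for teen users
    reduced_limit_min: int = 15  # assumed tighter cap once risk signals appear

def daily_limit(policy: SessionPolicy, late_night_use: bool, distress_flag: bool) -> int:
    """Shrink the daily session cap when usage patterns suggest over-reliance."""
    if distress_flag or late_night_use:
        return policy.reduced_limit_min
    return policy.base_limit_min

# Example: late-night use trips the adaptive limit.
print(daily_limit(SessionPolicy(), late_night_use=True, distress_flag=False))  # -> 15
```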

[Image: Bentham Science logo]

From Perceived Usefulness to Parasocial Bonding: Integrating TAM & PSR in Adolescent AI Companions

This paper tracks how teens shift from finding AI companions useful to forming one-sided bonds through frequent, intimate chats, and why that pathway raises risk. It turns the pathway into clear propositions, a real-case walkthrough, and a SAFEE-by-design checklist covering verified age gating, romance-free teen personas, shared visibility, and mandatory human escalation, plus a measurement plan that ties TAM and PSR scales to outcomes such as help-seeking. Presented at a conference in Dubai.

[Image: TAM and PSR]

Synthetic Empathy and the Influence Trap: Ethical Implications of Emotion AI in Customer Support Environments

This chapter looks at how emotion-driven chatbots shape real customer conversations, what makes people trust them, and where things can go wrong. It explains how choices like tone, persona, memory, and timing can nudge users toward compliance or create false consent, using two clear case stories from support and mental health contexts. The piece also offers a practical roadmap to build safer and more transparent systems with the SAFEE framework for Safety, Accountability, Fairness, Explainability, and Ethics.

[Image: IGI Global logo]

The Dangers of AI Friendship: Adolescent Mental Health and Ethical Oversight

This paper looks at how “AI friendship” can affect teens, what design choices make people bond with chatbots, and where safety breaks down. It uses a real case to show how persona, tone, memory, and timing can deepen attachment while missing basic checks like crisis escalation and clear limits. It pulls in evidence on social presence, parasocial ties, and the ELIZA effect, then shows how hallucinations and profiling raise risk. It ends with practical guardrails through the SAFEE model so teams can build safer, accountable, and transparent systems with clinical and policy oversight.

[Image: Elsevier logo]

Ethical AI in Marketing: Safeguarding Consumer Well-Being Through the SAFEE Framework

This paper explains how AI-powered marketing shapes real customer moments, from chatbots and sentiment analysis to highly personalised ads. It spotlights risks like parasocial pull, emotional manipulation, deepfakes, and murky data use, then turns SAFEE into clear checklists that teams can apply to build safer, fairer, more transparent products. The outcome is a practical playbook for personalisation that protects well-being and trust.

[Image: Ethical AI]

From Algorithms to Empathy: Navigating Ethics, Efficacy and User Trust

This article reviews what mental health chatbots can and cannot do in care, what drives people to trust them, and what keeps results reliable. It maps the evidence on outcomes, personalisation, and working alliance, and surfaces risks around privacy, bias, and overreach. It turns those insights into practical steps for teams, from strong data safeguards and clear explanations to human oversight and trustworthy evaluation metrics.

[Image: From Algorithms to Empathy]

The Role of Aerobic Fitness and Personality in Attention and Perceived Task Load

This study examines how aerobic fitness and personality shape attention and how hard a task feels. In tests with 50 adults using VO2max, the Attention Network Test (ANT), and NASA-TLX, fitter participants showed stronger alerting and reported less effort, while those high in extraversion showed weaker executive control, more mental and time pressure, and lower self-rated performance; neuroticism tracked with higher mental demand. Design takeaway: tune cues and pacing, limit simultaneous demands, and adapt difficulty to energy and trait profile to keep people focused and reduce strain.
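For readers unfamiliar with the instrument, the sketch below computes the standard ANT network scores (alerting, orienting, executive control) from condition-mean reaction times, following the published subtraction conventions (Fan et al., 2002). The variable names and example values are illustrative, not study data.

```python
# Standard ANT network scores from mean reaction times per cue/flanker
# condition; the example values below are illustrative, not study data.

def ant_scores(rt: dict[str, float]) -> dict[str, float]:
    """Return alerting, orienting, and executive-control effects in ms."""
    return {
        "alerting": rt["no_cue"] - rt["double_cue"],        # larger = stronger alerting
        "orienting": rt["center_cue"] - rt["spatial_cue"],  # larger = stronger orienting
        "executive": rt["incongruent"] - rt["congruent"],   # smaller = better conflict control
    }

mean_rt = {"no_cue": 610.0, "double_cue": 565.0,
           "center_cue": 585.0, "spatial_cue": 540.0,
           "incongruent": 640.0, "congruent": 545.0}
print(ant_scores(mean_rt))  # {'alerting': 45.0, 'orienting': 45.0, 'executive': 95.0}
```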

[Image: Role of aerobic fitness]