2025-06-02 / 8 min / AI + Mental Health + Students
AURA and building AI for sensitive student needs
The mental health chatbot that won a science fair also taught me where AI products must be careful.
AURA was one of my earliest serious AI projects: a chatbot for students that offered advice and tried to normalize asking for help. It won first place in the national science fair hosted by the University of West London, but the project also left me with a more cautious view of AI in sensitive domains.
The key realization was that supportive language is not the same as support. A chatbot can make a student feel heard in the moment, but the product has to be clear about what it can and cannot do. Escalation paths, careful disclaimers, and safe response patterns matter as much as the model output.
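To make that concrete, here is a minimal sketch of what a safety-first response wrapper can look like. The names (CRISIS_KEYWORDS, ESCALATION_MESSAGE, generate_reply) are hypothetical illustrations, not AURA's actual implementation; the point is that the escalation check and the disclaimer live outside the model call, so they cannot be skipped.

```python
# A minimal sketch of a safety-first chatbot wrapper.
# All names here are illustrative assumptions, not AURA's real code.

CRISIS_KEYWORDS = {"suicide", "self-harm", "hurt myself", "end my life"}

ESCALATION_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "I'm not a substitute for professional help. Please reach out to "
    "a counsellor, a trusted adult, or a local crisis line right away."
)

DISCLAIMER = "\n\n(I'm an automated assistant, not a mental health professional.)"


def generate_reply(message: str) -> str:
    """Placeholder for the model call; a real system would invoke an LLM here."""
    return "I hear you. Talking about this is a good first step."


def respond(message: str) -> str:
    """Check for crisis signals before generating, and always attach a disclaimer."""
    lowered = message.lower()
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        # Escalate instead of letting the model improvise a response.
        return ESCALATION_MESSAGE
    return generate_reply(message) + DISCLAIMER


if __name__ == "__main__":
    print(respond("I've been feeling really overwhelmed lately."))
```

The design choice worth noticing is that the boundary is enforced by the surrounding product code, not by hoping the model phrases things responsibly.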
At the time, I was excited by the possibility of making help more approachable. I still am. But AURA taught me that approachable products should not blur responsibility. The more vulnerable the user, the more precise the boundaries need to be.
That experience continues to influence how I build AI systems. I want tools that are useful and humane, but I also want the interface to tell the truth about the system's limits.
takeaways.
- Sensitive AI products need safety design from the first prototype.
- A warm tone does not replace professional support.
- Clear boundaries can make a product more trustworthy, not less.
related project.
AURA - Mental Health Chatbot - An AI chatbot for students offering advice and normalizing mental health help. Won first place in the national science fair hosted by the University of West London.