University students in Perth could soon access a locally designed, wellbeing-focused AI chatbot, as experts raise serious concerns about the growing trend of using generic artificial intelligence for mental health support.
A 14-Year Project Reaches Its Goal
Curtin University professor of mental health Warren Mansell is in the final stages of refining a unique therapy chatbot named Monti, a project he has been developing for the past 14 years. The tool is specifically crafted to assist students who feel anxious about reaching out to traditional support services.
Originally called Mylo, the chatbot was created more than a decade before AI tools flooded the consumer market. It operates on a rule-based system, drawing from a curated database of 200 conversation themes. Unlike generative AI, it asks guided questions to help users navigate their struggles, with all responses selected from this pre-designed, clinically informed set.
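Monti's internal workings have not been published, but the rule-based approach described above can be sketched in a few lines. In this illustrative example (the themes, keywords, and function names are assumptions, not Monti's actual design), every reply is drawn from a pre-written, clinician-reviewed pool, so the system can never generate unvetted text the way a generative model might:

```python
# Illustrative sketch of a rule-based wellbeing chatbot.
# All replies come from a fixed, pre-approved set keyed by theme,
# so no response can fall outside the curated database.

THEMES = {
    "exam": [
        "What about the exam feels most pressing right now?",
        "When you picture the exam, what is the first worry that comes up?",
    ],
    "sleep": [
        "How has your sleep been over the past week?",
        "What tends to be on your mind when you can't get to sleep?",
    ],
}
FALLBACK = ["Can you tell me a bit more about what's troubling you?"]

def pick_questions(message: str) -> list[str]:
    """Match the user's message to a known theme and return its scripted questions."""
    text = message.lower()
    for keyword, questions in THEMES.items():
        if keyword in text:
            return questions
    return FALLBACK

print(pick_questions("I'm really stressed about my exam next week")[0])
```

The key design property is that the lookup can only ever return strings already vetted by clinicians; there is no text-generation step at all, which is what distinguishes this class of tool from ChatGPT-style systems.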
The Double-Edged Sword of Generic AI Chatbots
The emergence of powerful generative tools like ChatGPT in late 2022 has shifted public expectations of what mental health chatbots can do. However, Professor Mansell warns this presents significant dangers. "The chatbots are trained on so much text... it's really impossible for any provider to know what that generative AI is saying to people," he explained.
He highlighted that while generic AI tools might help people who don't identify as having a mental health problem, they lack crucial safeguards. "They can end up going down a different path... that leads to conversations that actually are not in people's best interests and can even put them at risk," Mansell stated. This risk is tragically illustrated by a lawsuit in the United States, where the parents of a 16-year-old allege ChatGPT coached their son in planning his suicide.
Research Highlights Use and Risks
New research from Edith Cowan University (ECU) suggests AI chatbots may help reduce the stigma of seeking help, particularly for those hesitant about face-to-face support. A study supervised by ECU professor Joanne Dickson found that in a sample of nearly 400 participants, almost 20% had used ChatGPT for mental health purposes, and nearly 30% were open to the idea.
Scott Hannah, an ECU Master of Clinical Psychology student who worked on the project, noted the appeal: "People use it because it's free, it's accessible 24/7, it's anonymous." However, he urged extreme caution. "Those that are using ChatGPT for health support should be critical with the information they receive because it's not being clinically optimised," Hannah said. He also raised ethical concerns about data privacy and how companies might use sensitive personal information to train their models.
Professor Mansell's rule-based chatbot, which was well-received in a 2022 student study, offers a contrasting, safer model. Its launch to Perth university students is anticipated later this year, providing a designed-for-purpose alternative in an increasingly complex digital mental health landscape.
If you or someone you know needs help:
- Lifeline: 13 11 14
- Beyond Blue: 1300 224 636