Mental health and AI

24 Nov 2024 | By Naveed Rozais


Navigating the opportunities and risks of AI in mental health services


The rapid advancement of Artificial Intelligence (AI) has touched every aspect of human life, from everyday conveniences to complex professional tasks. AI’s transformative impact on how we live and work extends to healthcare, particularly mental health, and stands to reshape how mental health services are delivered and how individuals understand and manage their well-being.

The sensitive nature of mental health and mental health services, however, means that the application of AI in this space needs to be approached with care. 

The Sunday Morning Brunch recently attended the latest edition of Tech Circle – a talk series that aims to create a space for open dialogue and collaboration on emerging technologies, how they impact our lives and different industries, and how legal and policy frameworks can be strengthened to adapt to them. This edition focused on the intersection of technology, mental health, and AI. 

Titled ‘Mind Matters: Navigating Mental Health in the AI Era,’ the session saw licensed clinical psychologist and themindmanager.lk Founder Dr. Shanelle De Almeida; Cognitive Behavioural Therapy (CBT) therapist, management psychologist, and Edhati Consultancy Founder and Chief Executive Officer (CEO) Iranjali Unamboowe Abeyratne; and technology law trainer and educator Thanuki Goonesinghe share their perspectives on the role AI and related technology can play in the mental health field. 


The intersection of AI and mental health


AI’s potential in the mental health sector is vast, and in many cases it already plays a role that often goes overlooked. 

From chatbots offering counselling services to wearable devices monitoring stress levels, AI tools are being developed to provide quick, accessible, and affordable support. Yet, these advances raise fundamental questions about the nature of care, privacy, and human interaction. 

“AI is impacting different aspects of our lives, and that includes mental health. But there’s no global consensus on what AI even is,” Goonesinghe said, emphasising that this lack of consensus complicated how AI should be regulated, especially in mental health, a field already marked by complexity and sensitivity.

Offering her own perspective on defining AI, Dr. De Almeida explained that as AI mimicked human behaviour, it also magnified human flaws, presenting ethical dilemmas around privacy, accuracy, and the therapeutic relationship. The technology’s ability to influence emotions and behaviour must be approached with caution, particularly in vulnerable individuals. 

“It is a direct reflection of the best and worst parts of who we are,” she said, adding that AI’s potential lay in its capacity to supplement traditional mental health services, but it could not be expected to replace them. 


AI as a diagnostic tool


AI algorithms can analyse patterns in speech, text, and biometric data to detect signs of mental health disorders. This capability is already in use through apps and wearable devices that monitor symptoms such as anxiety and depression. 

These tools have democratised access to mental health resources, allowing individuals to receive initial evaluations without long waiting times. According to Goonesinghe, “tech has made access to information more democratised,” enabling people to seek help for “not just daily anxiety but also more serious issues”.

However, this accessibility brings up the issue of self-diagnosis. There is a fine line between informed self-awareness and the dangers of over-reliance on technology without professional guidance. AI cannot yet replicate the nuanced judgement of a trained therapist, a point Abeyratne highlighted when she discussed how “each therapist is unique, with their training, knowledge, skills, and very important personality characteristics”.

Chatbots are increasingly being used as a first step in mental health support, providing a non-judgemental space for individuals to express their thoughts. These tools are particularly valuable in regions where stigma around mental health still persists or where access to professionals is limited. 

Yet, the limitations are evident. Dr. De Almeida stated that AI should be an aid, not a replacement, noting that “autonomy is what matters” and that humans must retain control over their health decisions. 

Chatbots may provide initial comfort, but they are not equipped to deal with complex emotional issues, cultural sensitivities, or crisis situations. 

“If it’s on the low levels of diagnosis, for example, if someone has had a really bad day and it is a low level of concern, AI can be helpful as an AI therapist or assistant that can talk back to you,” Dr. De Almeida said. “But when it comes to high levels of diagnosis (the more serious forms of mental illness like Post-Traumatic Stress Disorder (PTSD), major depressive disorder, and so on), it needs to be used with a great deal of caution.” 

Dr. De Almeida stressed that this was especially true of serious disorders involving psychosis and delusions, as it was imperative that such patients be treated by a human doctor who could diagnose and treat these issues while paying attention to real-life context, something that AI was unable to do. 

The discussion also focused on wearables equipped with sensors that can track physical and emotional well-being by monitoring heart rates, sleep patterns, and stress levels. These devices provide real-time data, which can be used to develop personalised care plans – something Goonesinghe cautioned against, especially in light of how loosely AI is regulated. “Wearables collect information you may not want collected,” she said, raising questions about consent and the ethical use of sensitive health data.


The biggest pitfalls of AI in healthcare 


While AI promises significant benefits, several challenges must be carefully managed to avoid ethical pitfalls and ensure safe usage:

  • Privacy and data security: One of the main concerns with AI-driven mental health tools is privacy. Wearable devices, apps, and chatbots often collect vast amounts of sensitive data. If mishandled, this information could lead to breaches of confidentiality, a critical issue in mental health. 


The question of ‘could’ versus ‘should’ was also raised, with the panel questioning the need for a wearable to log heart rates and other functions when humans are capable of making many decisions independently, without relying on tech-driven insights. Protecting user data must be a top priority, with clear regulations on how data is collected, stored, and shared.

  • The role of human judgement: AI tools lack the human touch – something indispensable in mental healthcare. Therapists draw on their experience, intuition, and cultural understanding to tailor their approach to each individual, a capacity that AI does not yet possess. 


Goonesinghe raised the issue of “cultural sensitivity, which an app cannot capture for people in Sri Lanka and our communal lifestyle”. This limitation underscores the importance of using AI as a supplementary tool rather than as a replacement for human interaction.

  • Bias and discrimination in AI algorithms: The accuracy and fairness of AI-driven diagnoses and recommendations depend on the quality of data used to train these systems. If the data is biased, the AI’s output will be too, potentially reinforcing stereotypes or providing incorrect assessments. 


Goonesinghe drew attention to this with the example of the documentary ‘Coded Bias,’ which illustrates the limitations of facial recognition technology due to biased training data. In mental health, similar biases could have harmful consequences if not carefully monitored.

  • Over-reliance on AI technology: There is a danger in over-relying on AI for social and emotional needs, particularly in cultures that traditionally value strong community bonds. While AI can be programmed to simulate friendship or counselling, the impact of these relationships is still poorly understood. 


Abeyratne spoke of a case where a South Korean man took his own life after forming a relationship with an AI, illustrating the risks of blurring the line between the real and the virtual. Dr. De Almeida stressed that “AI can assist, but should not take over our autonomy,” underscoring the importance of maintaining human agency.


Developing AI-sensitive frameworks for healthcare


Brunch also reached out to the Information and Communication Technology Agency (ICTA), which noted that as AI was still in its very early stages, no official policies had been established as yet. 

A National AI Strategy, which includes immediate, short-term, and long-term actions, has been developed by a multi-stakeholder task force and was published on the Ministry of Technology’s website in September for public consultation. However, specific policies, such as those related to health or guidelines for app developers, are still under development. 

Moreover, the country is currently in a transition phase, with no clear agency ownership of the strategy’s implementation defined as yet. While this is expected to be defined more clearly in the next two weeks, the existing strategy and action plan can be found online for guidance, and European Union (EU) policies are suggested as a reference for defining responsible AI use.

Crafting effective policies for AI in mental healthcare requires a balanced approach. One segment of Tech Circle was a brainstorming exercise designed to build the critical thinking skills needed for better policy-making in this area. 

Key areas to consider in balancing the impact AI can have on mental health services and healthcare included: 

  • Clear standards for data privacy and security: These should include transparency on how data is collected, who has access to it, and how it is stored. The EU’s AI Act, which classifies certain AI applications as high-risk, is a step in the right direction. 
  • Multidisciplinary input in AI development: To develop AI tools that are both effective and ethical, a multidisciplinary approach is essential. This means involving professionals from diverse fields – neuropsychologists, ethicists, medical device specialists, and engineers – in the design and implementation process. 
  • Emphasis on human-centred AI design: Policies should prioritise AI that enhances rather than replaces human capabilities. This means designing systems that support healthcare professionals and offer tools for self-improvement without taking over the decision-making process. 

Goonesinghe warned that one of the key issues with AI was its current tendency to be viewed as a “substitute instead of a complementary tool”. Ethical guidelines should focus on maintaining AI as an aid to human judgement, not a replacement.

  • Education and awareness: Public education about the benefits and risks of AI in mental health is crucial. This includes helping individuals understand how to use AI tools responsibly and recognise their limitations. Many users are capable of making informed decisions without tech-driven guidance, suggesting a need for greater emphasis on self-awareness and discipline.


AI has the potential to revolutionise mental healthcare by making support more accessible, personalised, and timely. However, its integration into such a delicate field must be handled with care. A thoughtful, well-regulated approach – one that prioritises data privacy, respects human autonomy, and draws on multidisciplinary expertise – will be essential to harness AI’s benefits without compromising ethical standards.
