Restorative Robots: AI in the Therapy Room
And then we are left to live with what follows. - Kelly G. Wilson
This blog series has focused largely on digital intimacies in online spaces where personal, sensitive interactions between persons known and unknown play out - what might be called digital intimate publics (Dobson, Robards & Carah, 2018).
When harm arises in these spaces, what can the digital realm offer the person harmed? Inputting details of emotional distress into AI platforms like OpenAI's ChatGPT will quickly direct individuals to seek therapy with a qualified, skilled professional to make sense of their experiences - but many may not have easy access to such professionals, cannot afford them, or may not wish to attend. What then? Could digital spaces also provide direct support to individuals experiencing mental or emotional distress?
In a panel on the evolution of AI-driven psychotherapy on the Psyched for Mental Health podcast with Dr. Ed Billotti, Alastair van Heerden, research director of the Human and Social Development Programme at the Human Sciences Research Council in Pretoria, South Africa, raises interesting questions about what he sees as a gold rush to capitalise on the need for intimacy in all its forms in the digital sphere, driven by the affordances of the AI revolution, with particular implications for AI-driven therapies:
‘Social media’s whole hook was attention. And so this first wave of digital tools we’ve seen have all been optimising for attention, trying to get you to scroll on Instagram. Whereas the AI revolution is likely to try and hook into intimacy and connection and have people connecting with products and tools by forming bonds’ - Alastair van Heerden.
Like many of us, I suppose, I've "spoken" to AI about some of the things that have troubled my heart in recent years, including some of the challenges described in earlier blogs.
What's been remarkable to me as an end-user is how a simple prompt fed to ChatGPT can spit out a very reasonable set of recommendations for how to make sense of something so personal, so intimate, so difficult.
Simple prompts, freely available in the public domain, can be used to guide how AI responds to difficult content - for example, asking the AI to imagine that it is an older, wiser version of the self, revisiting a difficult moment, looking at the pain in the person's eyes and offering validation or wisdom.
They can also enable fresh perspectives on online harm. For example, where a person is struggling to understand the behaviour of someone who has harmed them, distressing messages can be fed into a chatbot and analysed to offer additional or different viewpoints on what is happening in the interaction - a different set of eyes to validate or to challenge, with less intensity of emotion. A sketch of how such a prompt might look is below.
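To make the idea concrete, here is a minimal sketch of how the "older, wiser self" prompt described above might be sent to a general-purpose chat model via the OpenAI Python library. The prompt wording, model name and helper function are illustrative assumptions on my part - this is not a clinical tool or a validated protocol.

```python
# A minimal sketch (not a clinical tool): asking a chat model to respond as an
# "older, wiser self" to a difficult memory. Prompt wording and model choice
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

OLDER_WISER_SELF_PROMPT = (
    "Imagine you are an older, wiser version of the person writing to you. "
    "They are revisiting a difficult moment. Look at the pain in their eyes "
    "and offer validation and gentle wisdom, without judgement."
)

def reframe(difficult_memory: str) -> str:
    """Return the model's compassionate reframing of a difficult memory."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of model
        messages=[
            {"role": "system", "content": OLDER_WISER_SELF_PROMPT},
            {"role": "user", "content": difficult_memory},
        ],
    )
    return response.choices[0].message.content

print(reframe("I still replay the messages they sent me that night..."))
```

The same pattern could, in principle, serve the "different set of eyes" use above, simply by swapping the system prompt for one that asks the model to describe a distressing exchange of messages in neutral terms.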
This mimics the types of activities that can occur in cognitive behavioural therapies like Acceptance and Commitment Therapy (ACT), such as cognitive defusion - stepping back from one's thoughts and viewing them more flexibly - or self-as-context, a perspective of looking at the behaviour of the self from a safe distance, with friendliness and compassion.
Van Heerden's wife, herself a psychotherapist, found that when she engaged in such exercises with the clinical AI chatbot Wysa, she was left in tears. She felt more validated by a generative pre-trained transformer than by colleagues, supervisors and therapists over a lifetime of work in the field.
So, can AI replace human psychotherapy? Would this be a good or bad thing?
Currently, ACT founder Steven C. Hayes is investigating the use of machine learning to analyse and formulate treatment plans for individuals, based on their input into an app called MindGrapher at the Institute for Better Health, linked to a repository of clinically validated psychological exercises in a separate paid app called Psychflex. The vision is to semi-automate the analysis of complex data sets provided by the client in order to tailor therapies to their individual profile so that, in Hayes' words, every voice can be heard and every voice will matter.
The reasoning behind this is pretty straightforward - there is a dearth of therapists to meet global need. As it is, access to therapy can be limited to those with the money, time and resources to attend. There's hope that AI could fill the gap and increase access for the most vulnerable at scale - and that where that change is driven by those with extensive expertise and experience in supporting vulnerable people, it can not only get help into the hands of those who need it, but also generate new and better data about how people live, think and need on a daily basis, shaping our understanding of the human mind and how to support it when it runs into trouble.
The idea here is that artificial intelligence can analyse data in ways human minds cannot - it has the potential to see things, and connections between things, that a human being, with their own trauma histories, biases and foibles, might miss, even when those things seem to be screaming out in the room to another pair of eyes. Hayes' ultimate vision is that working to address processes of change, rather than viewing people categorically (putting them in a diagnostic box), could radically transform psychological care, removing psychiatric diagnostic categories in their entirety (Hayes and Hofmann, 2020). Hayes believes that his idea of looking at processes rather than diagnoses precedes AI, and that AI is simply a tool to achieve it, but it's interesting to consider to what extent this thesis could gain traction or be used practically without it.
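As a purely illustrative sketch - not Hayes' actual system or method - the snippet below builds a toy correlation network over repeated daily self-report items, the kind of process-level structure that network approaches try to surface (Hofmann, Curtiss & Hayes, 2020). The item names, the random stand-in data and the edge threshold are all invented for illustration.

```python
# Illustrative sketch only: a toy correlation network over daily self-report
# items, gesturing at process-based, network-style analyses. Item names, data
# and the edge threshold are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
items = ["avoidance", "rumination", "connection", "mood"]

# Stand-in for 60 days of a client's daily ratings on 0-10 scales.
data = pd.DataFrame(rng.integers(0, 11, size=(60, len(items))), columns=items)

# Pairwise correlations between processes; strong links form the "network".
corr = data.corr()
threshold = 0.3  # arbitrary cut-off for drawing an edge
edges = [
    (a, b, round(corr.loc[a, b], 2))
    for i, a in enumerate(items)
    for b in items[i + 1:]
    if abs(corr.loc[a, b]) >= threshold
]

# With random stand-in data this list is usually empty; real ratings would be
# expected to show structure, e.g. avoidance and mood moving together.
print(edges)
```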
The strength that draws clinicians and researchers to apply AI and machine learning to clinical practice is that its powers of analysis extend to large datasets about the most intimate aspects of people's inner lives, spanning divides across cultures and continents. This is also, however, its greatest potential weakness and danger.
Who is responsible for this data, how is it being used now, and how will it be used in the future? What protections are needed to keep the deeply personal experience of engaging in therapy safe - who wants their secrets sold to the highest bidder to inform product placement?
Furthermore, what happens when AI makes a mistake, hallucinates, or gives advice that lands poorly with someone in emotional crisis? Who is responsible for any harm that arises? While Hayes and others cite compliance with current regulatory standards, what happens if those standards are changed to exploit these data sets in different ways?
Once the technology is out there, can it be constrained, and how? And will the fruits of AI data analysis extend to everyone, or will AI become the default, second-best generic therapist - efficient and available to all, but ultimately poorer quality than hybrid and blended models with expensive, highly trained human clinicians with kind eyes, available only to a wealthy elite? Will only the rich see kind eyes when they cry? Will this only widen an already gaping digital divide, and who will bear the impact?
AI is an emerging field, and at present there are more questions than answers - but those with expertise in both clinical applications and digital development can offer some recommendations:
1. Trained clinicians must retain ethical and professional oversight, and be able to establish and enforce ethical standards for AI in therapy - around privacy, confidentiality and data security - within their practices.
2. Foster responsible development with clinician oversight, and involve ethicists too.
3. Prioritise accessibility of effective AI-led mental health innovations across the digital divide - ensure human-led therapy and hybrid models are as available to those in most need as they are to the wealthy. Consider how to promote sufficient digital literacy across the population to make digital tools workable in any service.
4. Focus primarily on using AI to augment successful relational human therapies, not to replace them (e.g. to support homework exercises, rather than to be the main agent of change).
5. Engage in continuous, iterative research and evaluation of any AI-driven product, involving all relevant stakeholders and honouring user experience.
6. Consider cultural biases in both directions - both in what is fed to AI, and in how AI can support us to better understand commonalities and differences across disparate cultures.
Will these mitigate potential risks from robot therapy? Maybe, maybe not - but they seem a good start from a human perspective. But then again, what would we know?
References
Dobson, A. S., Robards, B., & Carah, N. (Eds.). (2018). Digital intimate publics and social media. Cham: Palgrave Macmillan.
Hayes, S. C., & Hofmann, S. G. (Eds.). (2020). Beyond the DSM: Toward a process-based alternative for diagnosis and mental health treatment. New Harbinger Publications.
Hofmann, S. G., Curtiss, J. E., & Hayes, S. C. (2020). Beyond linear mediation: Toward a dynamic network approach to study treatment processes. Clinical Psychology Review, 76, 101824.
https://hai.stanford.edu/news/blueprint-using-ai-psychotherapy
https://webshrink.com/general/general-podcast-episodes/ai-in-psychotherapy-where-technology-and-human-connection-intersect-psyched-for-mental-health-ep-7


