At first glance, the combo of technology and mental health doesn’t feel like a natural fit. Can you talk to us about why that became your field of study?
I was always excited by the prospect of psychology research, a career pursuing empirical answers to questions like “How do people think?” and “Why do they think the way they do?”
When I started out in psychology, I was working in a hospital doing clinical trial research. A lot of my reactions were along the lines of "Why are we still using paper-and-pencil forms to collect information about people? Is there not a better way?" I started thinking about applications of technology in the field. Over the course of a decade, that evolved into using AI and large language models to measure and intervene on conditions like depression.
When people think about AI and mental health, they immediately think chatbots. But your work considers ways AI could be applied beyond talk therapy.
There is a science of psychotherapy. As clinical psychologists and clinical scientists, we study which therapies are more effective than others and what makes a therapy effective. We also know that even when we have a therapy that's considered a gold standard treatment, the rates of uptake in the community are really poor. For example: we have two gold standard therapies for PTSD. They work really well. But very few therapists actually know how to do them. I think that with AI and technology, we can try to promote more evidence-based therapy practices.
I see a lot of promise in actually getting people the treatments we know work in a really scalable way, whether that's by helping make sure more therapists are using evidence-based therapies, supporting call-center treatments, or developing apps that give people access to concrete, actionable tools that the science supports.
There are people who seem to actually prefer chatting with robots rather than human therapists. Why do you think that is?
Anecdotally, I've heard that some individuals with autism spectrum disorder may prefer interacting with technology as opposed to a human. I've also heard anecdotes about people who would be very happy to see a human therapist, but time, money, or even the availability of a therapist is getting in the way. We're running a study where we're trying to get some more concrete and empirical answers about the rates of people seeking out chatbot therapy and why they're doing it.
One thing that I would add: there may be people who have a preference for chatbot therapy, but clinically speaking, we might not always want to follow the person's preferences. Let's say a person has social anxiety disorder. We have evidence that when socially anxious people avoid social situations, that helps keep the problem alive. It feeds the anxiety. We certainly want to take those things into account if we're talking about who uses chatbot therapy and who doesn't.