Q&A: Why mental health chatbots need strict safety guardrails

Ramakant Vempati, Wysa cofounder and president, discusses how the startup tests its AI-backed chatbot to monitor safety and quality.
By Emily Olsen
10:25 am

Wysa cofounders Jo Aggarwal and Ramakant Vempati (Photo: Wysa)

Mental health continues to be a leading clinical focus for digital health investors. There's plenty of competition in the space, but it's still a big challenge for the healthcare system: Many Americans live in areas with a shortage of mental health professionals, limiting access to care.

Wysa, maker of an AI-backed chatbot that aims to help users work through concerns like anxiety, stress and low mood, recently announced a $20 million Series B funding raise, not long after the startup received FDA Breakthrough Device Designation to use its tool to help adults with chronic musculoskeletal pain.

Ramakant Vempati, the company's cofounder and president, sat down with MobiHealthNews to discuss how the chatbot works, the guardrails Wysa uses to monitor safety and quality, and what's next after its latest funding round.

MobiHealthNews: Why do you think a chatbot is a useful tool for anxiety and stress? 

Ramakant Vempati: Accessibility has a lot to do with it. Early on in Wysa's journey, we received feedback from one housewife who said, "Look, I love this solution because I was sitting with my family in front of the television, and I did an entire session of CBT [cognitive behavioral therapy], and no one had to know." 

I think it really is privacy, anonymity and accessibility. From a product point of view, users may or may not think about it directly, but the safety and the guardrails we built into the product to make sure it's fit for purpose in that wellness context are an essential part of the value we provide. I think that's how you create a safe space. 

Initially, when we launched Wysa, I wasn't quite sure how this would do. When we went live in 2017, I was like, "Will people really talk to a chatbot about their deepest, darkest fears?" You use chatbots in a customer service context, like a bank website, and frankly, the experience leaves much to be desired. So, I wasn't quite sure how this would be received. 

I think five months after we launched, we got this email from a girl who said that this was there when nobody else was, and this helped save her life. She couldn't speak to anybody else, a 13-year-old girl. And when that happened, I think that was when the penny dropped, personally for me, as a founder.

Since then, we have gone through a three-phase evolution of going from an idea to a concept to a product or business. I think phase one has been proving to ourselves, really convincing ourselves, that users like it and they derive value out of the service. I think phase two has been to prove this in terms of clinical outcomes. So, we now have 15 peer-reviewed publications, either already published or in train right now. We are involved in six randomized controlled trials with partners like the NHS and Harvard. And then, we have the FDA Breakthrough Device Designation for our work in chronic pain.

I think all that is to prove and to create that evidence base, which also gives everybody else confidence that this works. And then, phase three is taking it to scale.

MHN: You mentioned guardrails in the product. Can you describe what those are?

Vempati: No. 1 is, when people talk about AI, there's a lot of misconception, and there's a lot of fear. And, of course, there's some skepticism. What we do with Wysa is that the AI is, in a sense, put in a box.

Where we use NLP [natural language processing], we are using NLU [natural language understanding] to understand user context: what users are talking about and what they're looking for. But when it responds to the user, the response is pre-programmed. The conversation is written by clinicians. So, we have a team of clinicians on staff who actually write the content, and we explicitly test for that. 
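To make that "AI in a box" pattern concrete, here is a minimal sketch, assuming a hypothetical intent classifier and a hand-written response library; it is not Wysa's actual code. The NLU layer only labels what the user is talking about, and the reply it selects is always clinician-authored text.

```python
# Hypothetical sketch of the "AI in a box" pattern: the NLU layer only
# labels what the user is talking about; the reply it picks is always a
# pre-written, clinician-approved script, never generated text.

CLINICIAN_SCRIPTS = {
    "anxiety": (
        "It sounds like things feel overwhelming right now. Would you like "
        "to try a short grounding exercise together?"
    ),
    "low_mood": (
        "Thank you for sharing that. Would it help to talk through what has "
        "been weighing on you?"
    ),
}


def classify_intent(message: str) -> str:
    """Stand-in for an NLU model that labels the user's intent."""
    lowered = message.lower()
    if "anxious" in lowered or "worried" in lowered:
        return "anxiety"
    if "sad" in lowered or "down" in lowered:
        return "low_mood"
    return "unknown"


def respond(message: str) -> str:
    intent = classify_intent(message)
    # Only clinician-written text is ever shown to the user.
    return CLINICIAN_SCRIPTS.get(intent, "Could you tell me a bit more?")
```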

So, the second part is, given that we don't use generative models, we are also very aware that the AI will never catch what somebody says 100%. There will always be instances where people say something ambiguous, or use nested or complicated sentences, and the AI models will not be able to catch them. In that context, whenever we write a script, we write with the intent that, when the AI doesn't understand what the user is saying, the response will not trigger anything harmful. It will do no harm.
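A minimal sketch of that "do no harm" fallback, with an assumed confidence threshold and fallback prompt rather than Wysa's real values: when the model is not confident about the user's meaning, no therapeutic script is triggered and a neutral clarifying prompt is returned instead.

```python
# Hypothetical sketch of the "do no harm" fallback: if the model is not
# confident about the user's meaning, no therapeutic script is triggered and
# a neutral, clinician-approved clarifying prompt is returned instead.

SAFE_FALLBACK = "I'm not sure I followed that. Could you tell me a bit more?"
CONFIDENCE_THRESHOLD = 0.7  # assumed value, for illustration only


def choose_response(intent: str, confidence: float, scripts: dict[str, str]) -> str:
    """Pick a scripted reply only when the NLU result is unambiguous."""
    if confidence < CONFIDENCE_THRESHOLD or intent not in scripts:
        # Ambiguous or unrecognized input: default to something that cannot harm.
        return SAFE_FALLBACK
    return scripts[intent]
```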

To do this, we also have a very formal testing protocol. And we comply with a safety standard used by the NHS in the U.K. We have a large clinical safety data set, built from the more than 500 million conversations we've now had on the platform. So, we have a huge set of conversational data. We have a subset of that data which we know the AI will never be able to catch. Every time we create a new conversation script, we test it with this data set: What if the user said these things? What would the response be? And then, our clinicians look at the response and the conversation and judge whether or not the response is appropriate. 
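Here is a hedged sketch of what such a regression test might look like; the CSV format, column name and function are illustrative assumptions, not Wysa's actual protocol. Each new conversation script is replayed against the clinical safety data set, and the resulting responses are queued for clinicians to judge.

```python
# Illustrative regression harness: replay a clinical-safety data set of
# hard-to-parse user messages through a new conversation script and collect
# the responses for clinicians to review. File format and names are assumed.

import csv
from typing import Callable


def replay_safety_set(script: Callable[[str], str], dataset_path: str) -> list[dict]:
    """Run every message in the safety data set through the new script."""
    review_queue = []
    with open(dataset_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # expects a 'message' column
            review_queue.append({
                "message": row["message"],
                "response": script(row["message"]),
                "clinician_verdict": None,  # filled in during manual review
            })
    return review_queue
```

In this sketch, clinicians would then mark each message and response pair as appropriate or not before the script goes live.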

MHN: When you announced your Series B, Wysa said it wanted to add more language support. How do you determine which languages to include?

Vempati: In the early days of Wysa, we used to have people writing in, volunteering to translate. We had somebody from Brazil write and say, "Look, I'm bilingual, but my wife only speaks Portuguese. And I can translate for you."

So, it's a hard question. Your heart goes out, especially for low-resource languages where people don't get support. But there's a lot of work required: it's not just translation, it's almost adaptation. It's almost like building a new product. So, you need to be very careful in terms of what you take on. And it's not just a static, one-time translation. You need to constantly watch it, make sure that clinical safety is in place, and it evolves and improves over time. 

So, from that point of view, there are a few languages we're considering, mainly driven by market demand and places where we are strong. So, it's a combination of market feedback and strategic priorities, as well as what the product can handle: languages where it is easier to use AI with clinical safety. 

MHN: You also noted that you're looking into integrating with messaging service WhatsApp. How would that integration work? How do you manage privacy and security concerns?

Vempati: WhatsApp is a very new concept for us right now, and we're exploring it. We are very, very cognizant of the privacy requirements. WhatsApp itself is end-to-end encrypted, but then, if you break the veil of anonymity, how do you do that in a responsible manner? And how do you make sure that you're also complying with all the regulatory standards? These are all ongoing conversations right now. 

But I think, at this stage, what I really do want to highlight is that we're doing it very, very carefully. There's a huge sense of excitement around the opportunity of WhatsApp because, in large parts of the world, in Asia and in Africa, that's the primary means of communication. 

Imagine people in communities which are underserved where you don't have mental health support. From an impact point of view, that's a dream. But it's early stage. 
