Q&A: Google on creating Pixel Watch's fall detection capabilities, part two

Edward Shi and Paras Unadkat, product managers at Google, tell MobiHealthNews what challenges they faced when developing the technology and what future versions of the Watch may include.
By Jessica Hagen

Photo courtesy of Google

Fall detection, announced for the Google Pixel Watch in March, uses sensors to determine whether a user has taken a hard fall, and then alerts emergency services either when prompted by the user or when no response from the user is received. 

In part two of our two-part series, Edward Shi, product manager on the personal safety team for Android and Pixel at Google, and Paras Unadkat, product manager and Fitbit product lead for wearable health and fitness sensing and machine learning at Google, discuss with MobiHealthNews the obstacles their teams faced when developing the technology, and how the Watch may evolve. 

MobiHealthNews: What were some challenges you met along the development pathway?

Paras Unadkat: Earlier on in the program, it was understanding how to detect falls in the first place. That was definitely a big challenge. Really getting that deep understanding, and building up that knowledge base, expertise and dataset, was quite difficult. 

And then, similarly, understanding how we could validate that this is actually working in the real world was quite a difficult problem. We were able to solve that through some of the different data collection approaches that we had, and by understanding how to scale our dataset.

We used a lot of simulations to get at that. We were able to collect a certain number of different fall types and a certain number of different free-living event types. But if we had a person who is 5'5" take a fall, how do we know that that's similar to a person who's 5'7" taking that same fall?

We were able to actually take that data and simulate those changes to a person's height and weight, and stuff like that, and use that to help us understand the impacts of these different parameters in our data. 
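To illustrate the kind of parameterized augmentation Unadkat describes, here is a minimal Python sketch. The height-based time scaling, the noise level and the function names are assumptions for illustration, not details of Google's pipeline.

```python
import numpy as np

def augment_fall_trace(accel: np.ndarray,
                       source_height_cm: float,
                       target_height_cm: float,
                       rng: np.random.Generator) -> np.ndarray:
    """Resample a recorded fall trace so it approximates a subject of a
    different height. Free-fall duration scales roughly with the square
    root of drop height, so the time axis is stretched accordingly; a
    small noise term keeps augmented traces from being exact copies."""
    time_scale = np.sqrt(target_height_cm / source_height_cm)
    n = len(accel)
    src_t = np.linspace(0.0, 1.0, n)
    dst_t = np.linspace(0.0, 1.0, int(round(n * time_scale)))
    stretched = np.interp(dst_t, src_t, accel)
    return stretched + rng.normal(0.0, 0.05, size=stretched.shape)

# E.g., turn a trace recorded from a 5'5" (165 cm) subject into an
# approximation of the same fall taken by a 5'7" (170 cm) subject.
rng = np.random.default_rng(0)
recorded = rng.normal(0.0, 1.0, 256)  # stand-in for a real accelerometer trace
simulated = augment_fall_trace(recorded, 165.0, 170.0, rng)
```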

So that was one of the big challenges, and one way that we approached it. And as we got closer to launch, we also ran into a bunch of challenges around the other side of the world: understanding what to do about phone settings, and how we actually make sure people get the help they need.

Edward Shi: Yeah, on our side, taking that handoff, we're essentially always trying to balance the speed with which we can get users help against mitigating any accidental triggers. 

We have a responsibility to both the user and, of course, the call-taker centers. If they get a lot of false calls, then they aren't able to help with real emergencies. So it was basically a matter of tweaking, and working closely with Paras on this.

What is our algorithm capable of? How do we tweak the experience to give users enough time to cancel, but then also not take too long to really call for help when help is needed? And then, of course, tweaking that experience when the call is actually made.
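To make that tradeoff concrete, here is a minimal sketch of a cancel-window flow. The class, the callback and the window duration are hypothetical; the interview does not describe the Pixel Watch's actual countdown behavior or timing.

```python
import threading

class FallEscalation:
    """Cancel-window flow: after a suspected hard fall, prompt the wearer
    and start a countdown; if it expires with no response, place the call."""

    def __init__(self, cancel_window_s: float, place_call):
        self.cancel_window_s = cancel_window_s  # hypothetical duration
        self.place_call = place_call            # e.g. dials emergency services
        self._timer = None

    def on_fall_detected(self):
        # In a real device, haptics and an on-screen prompt would fire here.
        self._timer = threading.Timer(self.cancel_window_s, self.place_call)
        self._timer.start()

    def on_user_response(self):
        # The user tapped "I'm OK": cancel the pending call.
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None

# A short window gets help fast; a longer one mitigates accidental calls.
escalation = FallEscalation(cancel_window_s=60.0,
                            place_call=lambda: print("Calling emergency services"))
```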

What precise information can we give to emergency call takers? What happens if a user is traveling? If they speak a specific language and they go to another region, what language does that region speak, and what language do those call takers understand? Those are the different challenges that we worked through once we'd taken that handoff from the algorithm.
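A toy illustration of the routing question Shi raises follows. The lookup table and fallback are illustrative assumptions; real emergency-number and language routing is far more granular than a per-country map.

```python
# Hypothetical per-region table; entries are illustrative, not verified
# dialing data for any production system.
EMERGENCY_INFO = {
    "US": {"number": "911", "call_taker_language": "en"},
    "FR": {"number": "112", "call_taker_language": "fr"},
    "JP": {"number": "119", "call_taker_language": "ja"},
}

def emergency_target(current_region: str) -> dict:
    """Route on the wearer's *current* region, not their home locale, so a
    traveler's automated message matches what local call takers understand."""
    return EMERGENCY_INFO.get(
        current_region,
        {"number": "112", "call_taker_language": "en"},  # assumed fallback
    )
```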

MHN: What does the next iteration of Pixel's fall detection look like?

Unadkat: We're constantly looking to improve the feature, improve our accuracy and improve the number of things that we're able to detect. I think a lot of that looks like scaling our datasets more and more, and building a deeper understanding of what fall events look like across different scenarios, different user groups and different populations that we serve. And really just pushing to detect more and more of these types of emergency events, and being able to get help in as many situations as we possibly can.

MHN: Do you have any examples?

Unadkat: A few things are in the works around events that are difficult for us to distinguish from non-falls. Generally speaking, the harder the impact of a fall, the easier it is to detect; the softer the impact, the harder it is to distinguish from something that is not a fall. Getting better at that can encompass a number of different things, from collecting more data in clinical settings in the future to leveraging different kinds of sensor configurations to detect that something has gone wrong.

An example of this is detecting somebody collapsing. It's a difficult thing to do, because the level of impact for that type of fall is not nearly as high as with, say, a fall off a ladder. We're able to do it, and we've been able to get better and better at it, but I think it's about continuing to improve on scenarios like that, so that people can really start to trust our device, and wearables as a whole, to have their back across a broad range of situations.
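A minimal sketch of the impact-magnitude intuition Unadkat describes is below. The thresholds and the triage labels are hypothetical placeholders, not values from Google's detector.

```python
import numpy as np

def peak_impact_g(accel_xyz: np.ndarray) -> float:
    """Peak acceleration magnitude, in g, over a window of 3-axis samples."""
    return float(np.linalg.norm(accel_xyz, axis=1).max())

def triage_event(accel_xyz: np.ndarray,
                 hard_threshold_g: float = 3.0,    # hypothetical value
                 soft_threshold_g: float = 1.8):   # hypothetical value
    """A very hard impact is an easy positive; a soft one (e.g. a collapse)
    needs more evidence, such as post-impact stillness, before it can be
    separated from everyday jostling."""
    peak = peak_impact_g(accel_xyz)
    if peak >= hard_threshold_g:
        return "likely_fall"
    if peak >= soft_threshold_g:
        return "needs_more_evidence"
    return "not_a_fall"

# A window of quiet wear (~1 g from gravity alone) should not trigger.
window = np.tile([0.0, 0.0, 1.0], (128, 1))
assert triage_event(window) == "not_a_fall"
```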

Shi: On our end, a lot of what we talk about is that we really want to make the best experience for users: making sure that they're able to get help quickly, while still feeling that, if there was an accidental trigger, they can cancel and don't panic in those situations. So I think those are the things that we really look at. 

And I know Paras mentioned a little bit about the data collection for improving the feature moving forward. One thing that we on the safety side are very much dedicated to is our users' privacy. So we recognize that, hey, we want to improve.

We need data to improve the safety features, but we made it very clear that it's an opt-in toggle for users, and they can, of course, turn it off. And any of the data that we do collect is used exclusively for improving these algorithms and nothing else. Privacy, and wanting to make sure our users feel protected both physically and in terms of their data, is something that we adhere to very strongly.
