Q&A: Google on creating Pixel Watch's fall detection capabilities, part one

Edward Shi and Paras Unadkat, product managers at Google, tell MobiHealthNews how their teams worked together to build the Pixel Watch's fall-detection features.
By Jessica Hagen
01:28 pm

Photo courtesy of Google

Tech giant Google announced in March that it added fall detection capabilities to its Pixel Watch, which uses sensors to determine if a user has taken a hard fall. 

If the watch doesn't sense movement from the user for around 30 seconds, it vibrates, sounds an alarm and displays prompts asking whether they're okay or need assistance. If no response is chosen within a minute, the watch notifies emergency services.
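That alert flow amounts to a short escalation sequence. The Python sketch below is purely illustrative: the `watch` object, its method names and the exact thresholds are assumptions made for readability, not the Pixel Watch's actual firmware interface.

```python
# Illustrative thresholds based on the behavior described above; the real
# values and APIs inside the Pixel Watch are not public.
STILLNESS_WINDOW_S = 30   # no movement sensed for roughly 30 seconds
RESPONSE_TIMEOUT_S = 60   # user has about a minute to respond

def run_fall_escalation(watch):
    """Hypothetical escalation loop after a hard fall is detected.

    `watch` stands in for whatever the real firmware provides; every
    method name here is an assumption for illustration only.
    """
    if watch.movement_detected(within_seconds=STILLNESS_WINDOW_S):
        return  # the user moved, so no alert is raised

    # Get the user's attention and ask whether they need help.
    watch.vibrate()
    watch.sound_alarm()
    watch.show_prompt(options=["I'm OK", "I need help"])

    response = watch.wait_for_response(timeout_seconds=RESPONSE_TIMEOUT_S)
    if response == "I'm OK":
        watch.dismiss_alert()
    else:
        # No response (or an explicit request for help) within a minute:
        # place the emergency call with the fall context and location.
        watch.call_emergency_services(reason="potential hard fall")
```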

In part one of our two-part series, Edward Shi, product manager on the personal safety team of Android and Pixel at Google, and Paras Unadkat, product manager and Fitbit product lead for wearable health/fitness sensing and machine learning at Google, sat down with MobiHealthNews to discuss the steps they and their teams took to create Pixel's fall detection technology. 

MobiHealthNews: Can you tell me about the process of developing fall detection?

Paras Unadkat: It was definitely a long journey. We started this off a few years ago, and the first thing was just, how do we even think about collecting a dataset and understanding, from a motion-sensor perspective, what does a fall look like?

So in order to do that, we consulted with a pretty large number of experts who worked in a few different university labs in different places. We consulted on the mechanics of a fall: What are the biomechanics? What does the human body look like? What do reactions look like when someone falls?

We collected a lot of data in controlled environments: induced falls, having people strapped into harnesses, having loss-of-balance events happen and seeing what that looked like. So that kind of kicked us off.

And we were able to start that process, building up that initial dataset to really understand what falls look like and really break down how we actually think about detecting and kind of analyzing fall data. 

We also kicked off a large data collection effort over multiple years, and it was collecting sensor data of people doing other non-fall activities. The big thing is distinguishing between what is a fall and what is not a fall.

And then, over the process of developing that, we also needed to figure out ways that we can actually validate this thing is working. So one thing that we did is we actually went down to Los Angeles, and we worked with a stunt crew and had a bunch of people take our finished product, test it out, and basically use that to validate that, across all these different activities people were taking part in, we were actually detecting falls.

And they were trained professionals, so they weren't hurting themselves to do it. We were actually able to detect all these different types of things. That was really cool to see.

MHN: So, you worked with stunt performers to actually see how the sensors were working?

Unadkat: Yeah, we did. So we just kind of had a lot of different fall types that we had people do and simulate. And, in addition to the rest of the data we collected, that kind of gave us this sort of validation that we were actually able to see this thing working in kind of real-world situations. 

MHN: How can it tell the difference between someone playing with their kid on the floor and hitting their hand against the ground, or something similar, and actually taking a substantial fall?

Unadkat: So there's a few different ways that we do that. We use sensor fusion between a few different types of sensors on the device, including the barometer, which can actually tell elevation change. So when you take a fall, you go from a certain level to a different level, and then you're on the ground.

We can also detect when a person has been kind of stationary and lying there for a certain amount of time. So that kind of feeds into our output of, like, okay, this person was moving, and they suddenly had a hard impact, and they weren't moving anymore. They probably took a hard fall and probably needed some help.

We also collected large datasets of people doing what we were talking about, like, free-living activities throughout the day, not taking falls, and added that into our machine learning model through these massive pipelines we've created to get all that data in and analyze it. And that, along with the other dataset of actual hard, high-impact falls, is what we're actually able to use to distinguish between those types of events.
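Taken together, those cues (a sudden impact, a barometric elevation change and a period of stillness) feed a model trained on both fall and non-fall sensor data. The sketch below is a loose illustration of that idea in Python; the feature choices, the scikit-learn classifier and every function name are stand-ins, not the model the Pixel Watch actually runs.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def extract_features(accel, baro):
    """Summarize a short sensor window into the cues described above:
    impact magnitude, elevation change and post-impact stillness.

    `accel` is an (N, 3) array of accelerometer samples and `baro` an (N,)
    array of barometric pressure readings; the features are illustrative.
    """
    magnitude = np.linalg.norm(accel, axis=1)
    peak_impact = magnitude.max()            # hard-impact cue
    pressure_rise = baro[-1] - baro[0]       # pressure rises as elevation drops
    after_impact = magnitude[magnitude.argmax():]
    stillness = after_impact.std() if len(after_impact) > 1 else 0.0
    return [peak_impact, pressure_rise, stillness]

def train_fall_model(fall_windows, non_fall_windows):
    """Fit a generic classifier on labeled windows: induced, high-impact
    falls versus free-living, non-fall activity."""
    X = [extract_features(a, b) for a, b in fall_windows + non_fall_windows]
    y = [1] * len(fall_windows) + [0] * len(non_fall_windows)
    model = GradientBoostingClassifier()
    model.fit(X, y)
    return model
```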

MHN: Is the Pixel continuously collecting data for Google to see how it's working within the real world to improve it?

Unadkat: We do have an option that users can opt into where, if they opt in, when they receive a fall alert, we receive data off their devices. We will be able to take that data, incorporate it into our model and improve the model over time. But it is something that, as a user, you'd have to manually go in and tap, "I want you to do this."

MHN: But if people are doing it, then it's just continuously going to be improved.

Unadkat: Yeah, exactly. That's the ideal. But we're continuously trying to improve all these models. And even internally continuing to collect data, continuing to iterate on it and validate it, increasing the number of use cases that we are able to detect, increasing our overall coverage, and decreasing the kind of false positive rates.

MHN: And Edward, what was your role in creating the fall-detection capabilities?

Edward Shi: Building on all the hard work that Paras and his team already did, essentially, the Android and Pixel safety team that we have is really focused on making sure users' physical wellbeing is protected. And so there was a great synergy there. And one of the features that we had launched before was car crash detection.

And so, in a lot of ways, they are very similar. When an emergency event is detected, in particular, a user may be unable to get help for themselves, depending on whether they're unconscious or not. How do we then escalate that? And then making sure, of course, false positives are minimized. In addition to all the work that Paras' team had already done to make sure we're minimizing false positives, how, in the experience, can we minimize that false-positive rate?

So, for instance, we check in with the user. We have a countdown. We have haptics, and then we also have an alarm sound going, all the UX, the user experience that we designed there. And then, of course, when we actually do make the call to emergency services, in particular, if the user is unconscious, how do we relay the necessary information for an emergency call taker to be able to understand what's going on, and then dispatch the right help for that user? And so that's the work that our team did. 

And then we also worked with emergency dispatch call-taker centers to test out our flow and validate: Hey, are we providing the necessary information for them to triage? Are they understanding the information? And would it be helpful for them in an actual fall event if we did place the call for the user?

MHN: What kind of information would you be able to garner from the watch to relay to emergency services?

Shi: Where we come into play is essentially after the algorithm has already done its beautiful work and said, "All right, we've detected a hard fall." Then, in our user experience, we don't make the call until we've given the user a chance to cancel it and say, "Hey, I'm okay." So, in this case, we're assuming that the user was unconscious and had taken a fall, or did not respond.

So when we make the call, we actually provide context to say, hey, the Pixel Watch detected a potential hard fall and the user did not respond, and we share the user's location as well. We keep it pretty succinct, because we know that succinct and concise information is optimal for them. But if they have the context that a fall has happened, that the user may have been unconscious, and the location, hopefully they can send help to the user quickly.
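As a rough illustration, the succinct context Shi describes (a potential hard fall, no response from the user, and a location) could be assembled into a short message along these lines; the helper name, fields and wording below are hypothetical.

```python
def build_emergency_context(latitude, longitude):
    """Hypothetical helper that formats the succinct context described above."""
    return (
        "Pixel Watch detected a potential hard fall. "
        "The user did not respond. "
        f"User location: {latitude:.5f}, {longitude:.5f}."
    )

# Example with made-up coordinates:
print(build_emergency_context(37.42206, -122.08410))
```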

MHN: How long did it take to develop?

Unadkat: I've been working on it for four years. Yeah, it's been a while. It was started a while ago. And, you know, we've had initiatives within Google to kind of understand the space, collect data and stuff like that even well before that, but this initiative started out a bit smaller and scaled upward from there.

In part two of our series, we'll explore challenges the teams faced during the development process and what future iterations of the Pixel Watch may look like. 
