Q&A: Microsoft's AI for Good Lab on AI biases and regulation

Juan Lavista Ferres, head of Microsoft's AI for Good Lab, sat down with MobiHealthNews to discuss his book on using AI ethically to benefit humanity.
By Jessica Hagen

Juan Lavista Ferres, head of Microsoft's AI for Good Lab

Photo courtesy of Microsoft's AI for Good Lab

The head of Microsoft's AI for Good Lab, Juan Lavista Ferres, co-authored a book offering real-world examples of how artificial intelligence can be used responsibly to benefit humankind.

Ferres sat down with MobiHealthNews to discuss his new book, AI for Good: Applications in Sustainability, Humanitarian Action and Health. He talked about ways to mitigate biases in the data fed into AI models and offered recommendations for regulators creating rules around AI use in healthcare.

MobiHealthNews: Can you tell our readers about Microsoft's AI for Good Lab?

Juan Lavista Ferres: The lab is a completely philanthropic initiative in which we partner with organizations around the world. We provide our AI skills, our AI technology and our AI knowledge, and they provide the subject matter experts.

We create teams combining those two efforts, and collectively we help them solve their problems. This is extremely important, because we have seen that AI can help many of these organizations and many of these problems, and, unfortunately, there is a big gap in AI skills, especially among nonprofit organizations, and even government organizations, working on these projects. Usually, they don't have the capacity or structure to hire or retain the talent that is needed, and that's why we decided to make a philanthropic investment to help the world with those problems.

We have a lab here in Redmond. We have a lab in New York. We have a lab in Nairobi. We also have people in Uruguay and postdocs in Colombia, and we work in many areas, health being one of them, and a very important area for us. We work a lot in medical imaging, like CT scans and X-rays, and in areas where we have a lot of unstructured data, such as text. We can use AI to help these doctors learn more or better understand the problems.

MHN: What are you doing to ensure AI is not causing more harm than good, especially when it comes to inherent biases within data?

Ferres: That is something that is in our DNA. It is fundamental for Microsoft. Even before AI became a trend in the last two years, Microsoft had been investing heavily in areas like responsible AI. Every project we have goes through a very thorough responsible AI review. That is also why it is so fundamental for us that we will never work on a project if we don't have a subject matter expert on the other side. And not just any subject matter experts, we try to pick the best. For example, we are working on pancreatic cancer with Johns Hopkins University. These are the best doctors in the world working on cancer.

The reason this is so critical, particularly as it relates to what you mentioned, is that these experts are the ones who best understand the data collection and any potential biases. But even with that, we go through our responsible AI review. We make sure that the data is representative. We just published a book about this.

MHN: Yes. Tell me about the book.

Ferres: In the first two chapters, I talk a lot about the potential biases and the risks of these biases, and, unfortunately, there are a lot of bad examples for society, particularly in areas like skin cancer detection. A lot of the skin cancer models have been trained on white people's skin, because, usually, that's the population that has more access to doctors. That is the population that is usually targeted for skin cancer screening, and that's why other skin tones are underrepresented in the data.

So, we do a very thorough review. Microsoft has been leading the way, if you ask me, on responsible AI. We have our chief responsible AI officer at Microsoft, Natasha Crampton.  

Also, we are a research organization, so we publish the results. We go through peer review to make sure that we're not missing anything, and, in the end, our partners are the ones who ultimately need to understand the technology.

Our job is to make sure that they understand all these risks and potential biases.

MHN: You mentioned the first couple of chapters discuss the issue of potential biases in data. What does the rest of the book address?

Ferres: The book has about 30 chapters. Each chapter is a case study, and there are case studies in sustainability and case studies in health. These are real case studies that we have worked on with partners. But in the first three chapters, I give a good review of some of the potential risks and try to explain them in a way that is easy for people to understand. I would say a lot of people have heard about biases and data-collection problems, but sometimes it's difficult for people to realize how easy it is for this to happen.

We also need to understand that, even from a bias perspective, the fact that you can predict something doesn't necessarily mean that it is causal. Predictive power doesn't imply causation. A lot of times people understand and repeat that correlation doesn't imply causation, but sometimes they don't grasp that predictive power also doesn't imply causation, and even explainable AI doesn't imply causation. That's really important for us. Those are some of the examples that I cover in the book.

MHN: What recommendations do you have for government regulators regarding the creation of rules for AI implementation in healthcare?

Ferres: I am not the right person to talk to about regulation itself, but I can tell you that, in general, it comes down to having a very good understanding of two things.

First, what is AI, and what is it not? What is the power of AI, and what is it not? I think having a very good understanding of the technology will always help you make better decisions. We do think that any technology can be used for good and can be used for bad, and in many ways, it is our societal responsibility to make sure that we use the technology in the best way, maximizing the probability that it will be used for good and minimizing the risk factors.

So, from that perspective, I think there's a lot of work on making sure people understand the technology. That's rule number one. 

Listen, we as a society need to have a better understanding of the technology. And what we see, and what I see personally, is that it has huge potential. We need to make sure we maximize that potential, but also make sure that we are using it right. And that requires governments, organizations, the private sector and nonprofits to start by understanding the technology, understanding the risks and working together to minimize those potential risks.
