Lloyd Price

AI models and tools can 'transform healthcare provision' - House of Commons SITC Report



Exec Summary:


The House of Commons Science, Innovation and Technology Committee began an inquiry on 20 October 2022 to examine: the impact of AI on different areas of society and the economy; whether and how AI and its different uses should be regulated; and the UK Government’s AI governance proposals.


The SITC received and published over 100 written submissions and took oral evidence from 24 individuals, including AI researchers, businesses, civil society representatives and individuals affected by the technology.


Medicine and healthcare are often said to be particularly well placed to benefit from the use of AI models and tools—in the 2023 AI white paper, improvements in NHS medical care are listed among the key societal benefits.


The three main benefits to healthcare identified by the SITC's inquiry to date are diagnostics, medical research and productivity.


1) Diagnostics - AI can be used in healthcare as a diagnostic tool, capable of processing data and predicting patient risks. Dr Manish Patel, CEO of Jiva.ai, described how his firm developed an algorithm to recognise potential prostate cancer tissue from MRI scans.


2) Medical Research - The global pharmaceutical company GSK described the impact of AI models and tools on medical research in similarly positive terms, and said that “… ultimately, AI will provide greater probability that the discovery and development of new medicines will be successful”.


3) Productivity - AI models and tools can transform healthcare provision, by assisting with diagnostics and, perhaps more significantly, freeing up time for the judgement of medical professionals by automating routine processes.


Artificial intelligence (AI) has been the subject of public, private and research sector interest since the 1950s. However, particularly since the emergence of so-called ‘large language models’ such as those underpinning ChatGPT, it has become a general-purpose, ubiquitous technology—albeit not one that should be viewed as capable of supplanting humans in all areas of society and the economy.
AI models and tools are capable of processing increasing amounts of data, and this is already delivering significant benefits in areas such as medicine, healthcare, and education. They can find patterns where humans might not, improve productivity through the automation of routine processes, and power new, innovative consumer products. However, they can also be manipulated, provide false information, and do not always perform as one might expect in messy, complex environments—such as the world we live in.



Diagnostics


AI can be used in healthcare as a diagnostic tool, capable of processing data and predicting patient risks. Dr Manish Patel, CEO of Jiva.ai, described how his firm developed an algorithm to recognise potential prostate cancer tissue from MRI scans. The Department of Health and Social Care has also invested in projects focused on using AI to detect other forms of cancer, and established an AI Diagnostic Fund “… to accelerate the deployment of the most promising AI imaging and decision support tools” across NHS Trusts.
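To make the diagnostic use case concrete, the sketch below shows the general shape of such a tool: a small convolutional network that takes a preprocessed MRI slice and returns a risk score. This is an illustrative toy only, not Jiva.ai's algorithm; the architecture, input size and random input are invented, and a real system would be trained on curated data, calibrated, validated and regulated as a medical device.

```python
# Illustrative toy only: a tiny convolutional classifier that scores one
# preprocessed MRI slice for "suspicious tissue". Architecture, input size
# and the random input tensor are invented for demonstration.
import torch
import torch.nn as nn

class ToyMRIClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: suspicious vs not

    def forward(self, x):
        x = self.features(x)                # (batch, 32, 1, 1)
        x = torch.flatten(x, 1)             # (batch, 32)
        return torch.sigmoid(self.head(x))  # score in [0, 1]

model = ToyMRIClassifier().eval()
mri_slice = torch.rand(1, 1, 224, 224)  # stand-in for a normalised MRI slice
with torch.no_grad():
    score = model(mri_slice).item()
print(f"Suspicious-tissue score: {score:.2f}")
```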


Dr Patel told the Committee that a key advantage was the speed at which these tools could help medical professionals reach a diagnosis, avoiding longer waits and the associated emotional and financial costs. Professor Delmiro Fernandez-Reyes, Professor of Biomedical Computing at University College London, said that it could also help relieve pressure on medical personnel by augmenting their work, speeding up referrals and preventing diseases from worsening.


Dr Patel noted that there is “… a very high barrier to entry” for companies offering AI diagnostic tools to healthcare providers, owing to the need for sufficiently representative training datasets, and a robust regulatory framework intended to ensure such tools are deployed safely.


Professor Mihaela van der Schaar of the University of Cambridge pointed to a longstanding trend of bias in medical data: “… at times the data we collect—not only at one point, but over time as interventions are made—is biased”. Dr Patel also said that the inherent bias within medical formulas such as the Body Mass Index could not automatically be programmed out by such tools. Given this, he told the Committee that the technology should be viewed as a way to augment rather than replace human expertise: “I don’t see that changing in the next 10 years. I think the medical community has to be confident that this technology works for them, and that takes time and it takes evidence”.
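The BMI point is easy to see in miniature. The snippet below is a toy illustration rather than anything drawn from the evidence: a bias baked into a formula carries straight through to any automated rule built on top of it, because BMI ignores body composition and a flag derived from it inherits the same blind spot.

```python
# Toy illustration: a bias baked into a formula propagates unchanged into an
# automated rule built on it. BMI = weight (kg) / height (m)^2; the 25 and 30
# cut-offs are the conventional WHO thresholds, not clinical advice.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def naive_risk_flag(weight_kg: float, height_m: float) -> str:
    value = bmi(weight_kg, height_m)
    if value >= 30:
        return "high"
    if value >= 25:
        return "elevated"
    return "typical"

# A muscular athlete and a sedentary person of the same height and weight get
# the same flag, because BMI ignores body composition; a model trained on
# BMI-derived labels inherits that blind spot rather than programming it out.
print(round(bmi(90, 1.85), 1), naive_risk_flag(90, 1.85))  # 26.3 elevated
```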


Medical Research


The inquiry also heard how AI models and tools can help deliver breakthroughs in medical research, such as drug discovery. Dr Andrew Hopkins, Chief Executive of Exscientia, a ‘pharmatech’ company, described how it used the technology to “… design the right drug and select the right patient for that drug”, and how this allowed for a complexity of analysis beyond the cognitive and computational capabilities of human researchers. He said that this analysis could be applied both to new drugs and to existing drugs that had previously failed clinical trials on efficacy grounds, with a view to repurposing them.


The global pharmaceutical company GSK described the impact of AI models and tools on medical research in similarly positive terms, and said that “… ultimately, AI will provide greater probability that the discovery and development of new medicines will be successful”.


The ability of AI models and tools to process substantial volumes of data, and to rapidly identify patterns that human researchers might take months to find or miss altogether, makes them potentially transformational for medical research. Whether through the development of new drugs or the repurposing of existing ones, the technology could reduce the investment required to bring a drug to market and bring personalised medicine closer to becoming a reality.
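As a rough intuition for the repurposing idea, the sketch below scores a handful of existing compounds against a target feature profile by simple set overlap. It is a deliberate toy under invented feature sets and compound names; it is not Exscientia's or GSK's method, and real pipelines combine far richer chemical, biological and clinical data.

```python
# Deliberately simplified sketch of one idea behind drug repurposing:
# rank existing compounds by how well their features overlap a target
# profile. Feature sets and compound names are invented.
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity between two feature sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

target_profile = {"kinase_pocket", "h_bond_donor", "aromatic_ring", "low_logp"}

candidates = {
    "compound_A": {"kinase_pocket", "aromatic_ring", "low_logp"},
    "compound_B": {"h_bond_donor", "high_logp"},
    "compound_C": {"kinase_pocket", "h_bond_donor", "aromatic_ring"},
}

# Highest-scoring existing compounds become candidates for repurposing trials.
for name, features in sorted(candidates.items(),
                             key=lambda kv: tanimoto(kv[1], target_profile),
                             reverse=True):
    print(name, round(tanimoto(features, target_profile), 2))
```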


Productivity


AI models and tools can also deliver benefits via the automation of existing processes—“… doing the dirty work” of improving logistics, as Professor van der Schaar phrased it.


She described how, during the covid-19 pandemic, tools were developed “… to predict how many beds and ventilators we would need and who would need them”, and argued that pursuing similar efficiencies should be the primary use case for AI in medicine and healthcare.
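A stripped-down version of that kind of capacity planning might look like the sketch below: fit a linear trend to a week of admissions and project bed demand a few days ahead. The figures are invented and the method is deliberately naive; the pandemic-era tools described to the Committee drew on far richer patient-level data.

```python
# Toy capacity forecast in the spirit of the pandemic-era tools described
# above: fit a linear trend to a week of admissions and project bed demand
# a few days ahead. Figures are invented; real models used patient-level data.
import numpy as np

admissions = np.array([42, 45, 51, 49, 58, 63, 70])  # daily admissions, last 7 days
days = np.arange(len(admissions))

slope, intercept = np.polyfit(days, admissions, deg=1)  # least-squares line

for day_ahead in range(1, 4):
    forecast = slope * (days[-1] + day_ahead) + intercept
    print(f"Day +{day_ahead}: roughly {forecast:.0f} beds needed")
```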


Professor Michael Osborne of the University of Oxford described AI models and tools as “… a way to automate away much of the tedious admin work that plagues frontline workers in the NHS today, particularly in primary healthcare”, and said that the technology could help medical professionals process letters and manage data.
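The "tedious admin" point can be illustrated with something as small as the sketch below, which pulls a few structured fields out of a free-text clinic letter with pattern matching. The letter, identifiers and patterns are entirely invented; production systems would use validated clinical NLP pipelines and proper information governance.

```python
# Minimal sketch of routine "letter processing": extract a few structured
# fields from a free-text clinic letter with simple pattern matching.
# The letter, identifiers and patterns are entirely invented.
import re

letter = """
Dear Dr Smith,
Re: Jane Doe, NHS No: 999 123 4567, DOB: 12/03/1968
Seen in clinic on 02/10/2023. Plan: repeat MRI in 6 months.
"""

patterns = {
    "nhs_number": r"NHS No:\s*([\d ]+\d)",
    "date_of_birth": r"DOB:\s*([\d/]+)",
    "clinic_date": r"Seen in clinic on\s*([\d/]+)",
}

extracted = {}
for field, pattern in patterns.items():
    match = re.search(pattern, letter)
    if match:
        extracted[field] = match.group(1)

print(extracted)  # e.g. {'nhs_number': '999 123 4567', ...}
```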


AI models and tools can transform healthcare provision, by assisting with diagnostics and, perhaps more significantly, freeing up time for the judgement of medical professionals by automating routine processes.


AI Governance Challenges


As governments across the world grapple with the question of whether and how AI should be governed, the UK is positioned as a centre of AI research and practice, with a reputation for creativity and international trust in its regulatory policy and institutions.


The global AI Safety Summit, to be held at Bletchley Park in November 2023, provides a golden opportunity for Britain to lead world thinking and practice on AI governance.


An interim report published by the Science, Innovation and Technology Committee in August 2023 sets out the Committee’s findings from its inquiry so far, and the twelve essential challenges that AI governance must meet if public safety and confidence in AI are to be secured.

The twelve challenges of AI governance that must be addressed by policymakers:

  1. The Bias challenge: AI can introduce or perpetuate biases that society finds unacceptable.

  2. The Privacy challenge: AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants.

  3. The Misrepresentation challenge: AI can allow the generation of material that deliberately misrepresents someone’s behaviour, opinions or character.

  4. The Access to Data challenge: The most powerful AI needs very large datasets, which are held by few organisations.

  5. The Access to Compute challenge: The development of powerful AI requires significant compute power, access to which is limited to a few organisations.

  6. The Black Box challenge: Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements.

  7. The Open-Source challenge: Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms.

  8. The Intellectual Property and Copyright challenge: Some AI models and tools make use of other people's content; policy must establish the rights of the originators of this content, and these rights must be enforced.

  9. The Liability challenge: If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.

  10. The Employment challenge: AI will disrupt the jobs that people do and that are available to be done. Policymakers must anticipate and manage the disruption.

  11. The International Coordination challenge: AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking.

  12. The Existential challenge: Some people think that AI is a major threat to human life. If that is a possibility, governance needs to provide protections for national security.


Engage with the HealthTech Community


HealthTech M&A Newsletter from Nelson Advisors - Market Insights & Analysis for Founders & Investors. Subscribe today! https://lnkd.in/e5hTp_xb


HealthTech M&A Advisory by Founders for Founders, Owners & Investors. Buy Side, Sell Side, Growth and Strategy mandates - Email lloyd@nelsonadvisors.co.uk


HealthTech Thought Leadership from Nelson Advisors - Industry Insights & Analysis for Founders, Owners & Investors. Visit https://www.healthcare.digital





