American Telemedicine Association Leaps Into Privacy and AI Policies

The computerization of daily life has evolved over the past few decades from a set of technical decisions into a matter of urgent public policy. The American Telemedicine Association (ATA) recently released two sets of principles that highlight its concerns about two prominent computing issues: privacy and AI.

Of course, privacy and AI both make front-page news these days. Concerns over the data protection of individuals date back to the 1970s, but the issue has taken on new urgency as evidence has emerged about the manipulation of voters and the exploitation of children. AI, for its part, became the subject of an international summit at Bletchley Park.

I talked to Kyle Zebley, senior vice president of public policy at the ATA, about the organization’s determination to enter these policy areas.

Data Policy in the Context of Telehealth Policy

The ATA has been publishing policy statements on complex technical topics since it launched a good 30 years ago. Progress on telemedicine proceeded at a snail's pace for most of that time. But I believe that the ATA's careful brick-laying, its forging of bonds with policy-makers and health care leaders, and its patient education made it easier for providers to adopt telemedicine quickly in the spring of 2020, when the COVID-19 pandemic required it.

The ATA has addressed telemedicine on many levels and in many forums during its existence. It talks to providers about the benefits of telemedicine, tries to ensure payers cover it, and promotes bills in state and federal legislatures, including through its advocacy arm, ATA Action. This is a huge set of tasks, yet at the ATA’s annual policy meeting at the end of 2022, it added a data policy workgroup to its activities.

Zebley describes the formation of the data policy workgroup as an organic development. At their policy conferences, ATA members like to discuss social trends that will affect their mission. In 2022, it became clear that technical developments were having a large impact on social issues and would inevitably come to affect telehealth. Hence the formation of the data policy workgroup and its two current teams on privacy and AI.

Principles Concerning Privacy

According to Zebley, the privacy team believes that the federal government must update HIPAA, which covers only organizations narrowly associated with traditional medical treatment, so that it regulates the many organizations and digital apps that collect personal health data. HIPAA was last revised in 2013.

At the same time, the ATA is worried about state-level legislation. Although each law may have good clauses, the proliferation of state laws can create what Zebley calls a “patchwork” of regulations for organizations to obey.

The impact may be most detrimental to small companies doing innovative work, which now have to make sure they conform to regulations in each state. Imagine if a small manufacturer of new medical devices had to win approval to market its device in every state, instead of just with the FDA.

Principles Concerning AI

Zebley says that the AI team is very bullish on the potential of AI. We all know that we're facing an escalating staffing crisis: retirement and burnout among providers, an aging population with more and more medical conditions, the entry of more people into the health care system through universal access, and the explosion of mental health and substance abuse problems all combine into a shortage of treatment options.

So AI can do a lot to support providers and remove burdens, without threatening employment at all. Still, AI must be employed ethically and effectively.

As with privacy, Zebley would like more federal action, making it unnecessary for individual states to regulate AI’s use.

Telemedicine Watchdogs

The two data policy teams completed their initial tasks in a pretty impressive time frame: about a year after the concept of the workgroup was first raised. Zebley said that the ATA found experts in numerous areas of technology and policy and collaborated with many partner organizations to form the teams.

The initial press releases from the privacy and AI teams state little that is new. In particular, the AI principles repeat what dozens of other organizations have said. I bet by now that a large language model could generate a good set of ethics for AI. Whether the large language model could train itself to obey these ethics is a larger question.

So the first public appearances of the privacy and AI teams are declarations of an intent to follow and contribute to these spaces, not novel statements in themselves. Zebley says that the data policy workgroup will be the “voice of the telehealth community” in ongoing policy work.

With the formation of these teams—and possibly others to follow—the ATA shows that it will devote the same attention to policy in computer technology as it does to laws and telemedicine adoption. (Personally, I’d like to see a team devoted to the user experience (UX) in telehealth.) Like all of us, as consumers and citizens, the ATA needs to educate itself about the opportunities and risks of emerging technologies and participate in policy.

About the author

Andy Oram

Andy is a writer and editor in the computer field. His editorial projects have ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. A correspondent for Healthcare IT Today, Andy also writes often on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM (Brussels), DebConf, and LibrePlanet. Andy participates in the Association for Computing Machinery's policy organization, named USTPC, and is on the editorial board of the Linux Professional Institute.
