Warner: Google can boost transparency, protect patient privacy with AI health care


Rebecca Barnabi
Artificial intelligence (© stock.adobe.com)

Reports are circulating of inaccuracies in a new medical chatbot.

Google began testing Med-PaLM 2 with customers, including the Mayo Clinic, in April 2023. Med-PaLM 2 answers medical questions, summarizes documents and organizes health data. The technology has shown promising results, but reports of repeated inaccuracies, and of Google’s own senior researchers expressing reservations about its readiness, are a concern. Much about Med-PaLM 2 remains unknown, including where the technology is being tested, what data sources it learns from, to what extent patients are aware of and can object to the use of AI in their treatment, and what steps Google can take to protect against bias.

U.S. Sen. Mark R. Warner of Virginia, chair of the Senate Select Committee on Intelligence, is encouraging Google CEO Sundar Pichai to provide more clarity on the company’s deployment of Med-PaLM 2.

In a letter, Warner expressed concern about the reported inaccuracies and called on Google to increase transparency, protect patient privacy and ensure ethical guardrails.

“While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors,” Warner wrote.

The letter also expresses concern that AI companies are prioritizing the race to establish market share over patient well-being. Warner has previously raised the alarm about Google skirting health privacy by training diagnostic models on sensitive health data without patients’ knowledge or consent.

“It is clear more work is needed to improve this technology as well as to ensure the health care community develops appropriate standards governing the deployment and use of AI,” Warner wrote.

A former technology entrepreneur, Warner has expressed concern about cyberattacks and misinformation online. In April, he directly expressed concerns to several AI CEOs, including Pichai, about the potential risks posed by AI and called on companies to ensure their products and systems are secure. Last month, he called on the Biden administration to work with AI companies to develop additional guardrails around the responsible deployment of AI.

Warner has introduced several pieces of legislation aimed at making tech more secure, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio and print ads.

Rebecca J. Barnabi is the national editor of Augusta Free Press. A graduate of the University of Mary Washington, she began her journalism career at The Fredericksburg Free-Lance Star. In 2013, she was awarded first place for feature writing in the Maryland, Delaware, District of Columbia Awards Program, and was honored by the Virginia School Boards Association’s 2019 Media Honor Roll Program for her coverage of Waynesboro Schools. Her background in newspapers includes writing about features, local government, education and the arts.