Google, you’re not unleashing ‘unproven’ AI medical bots on hospital patients, yeah?

Google is under pressure from a US lawmaker to explain how it trains and deploys its medical chatbot Med-PaLM 2 in hospitals.

Writing to the internet giant today, Senator Mark Warner (D-VA) also urged the web titan to not put patients at risk in a rush to commercialize the technology.

Med-PaLM 2 is based on Google’s large language model PaLM 2, and is fine-tuned on medical information. The system can generate written answers in response to medical queries, summarize documents, and retrieve data. Google introduced the model in April, and said a select group of Google Cloud customers were testing the software.

One of those testers is VHC Health, a hospital in Virginia affiliated with the Mayo Clinic, according to Senator Warner. In a letter to Google chief Sundar Pichai, Warner said he was concerned that generative AI raises “complex new questions and risks” particularly when applied in the healthcare industry.

“While AI undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors,” he wrote [PDF].

“This race to establish market share is readily apparent and especially concerning in the health care industry, given the life-and-death consequences of mistakes in the clinical setting, declines of trust in health care institutions in recent years, and the sensitivity of health information.”

In his letter the senator laid out a dozen sets of questions for Google’s executives to answer.

All rather good points that ought to be raised.

Large language models are prone to generating false information that sounds convincing, so one might well fear a bot confidently handing out harmful medical advice or wrongly influencing someone’s health decisions. The National Eating Disorders Association, for example, took its Tessa chatbot offline after it suggested people count calories, weigh themselves weekly, and monitor body fat – behaviors considered counterproductive to a healthy recovery.

A Google-DeepMind-authored research paper detailing Med-PaLM 2 admitted that the model’s answers “were not as favorable as physician answers” and scored poorly on accuracy and relevancy.

Warner wants Pichai to share more information about how the model is deployed in clinical settings, whether the mega-corp is collecting patient data from those testing its technology, and what data was used to train it.

He highlighted that Google has previously stored and analyzed patient data without patients’ explicit knowledge or consent, under deals with hospitals in the US and UK struck under the Project Nightingale banner.

“Google has not publicly provided documentation on Med-PaLM 2, including refraining from disclosing the contents of the model’s training data. Does Med-PaLM 2’s training corpus include protected health information?” he asked. 

A spokesperson for Google denied that Med-PaLM 2 was a chatbot as people know them today, and said the model was being tested by customers to explore how it can be useful to the healthcare industry. 

“We believe AI has the potential to transform healthcare and medicine and are committed to exploring with safety, equity, evidence and privacy at the core,” the representative told The Register in a statement. 

“As stated in April, we’re making Med-PaLM 2 available to a select group of healthcare organizations for limited testing, to explore use cases and share feedback – a critical step in building safe and helpful technology. These customers retain control over their data. Med-PaLM 2 is not a chatbot; it is a fine-tuned version of our large language model PaLM 2, and designed to encode medical knowledge.”

The spokesperson did not confirm whether Google would be responding to Senator Warner’s questions. ®
