Generative AI is coming for healthcare, but not everyone is happy about it

Image Credits: Nadezhda Fedronova/Getty Images

Generative AI, which can create and analyze images, text, audio, videos, and more, is increasingly making its way into healthcare, with support from big tech companies and startups alike.

Google Cloud, Google's cloud services and products division, is collaborating with Highmark Health, a Pittsburgh-based nonprofit healthcare company, on generative AI tools designed to personalize the patient intake experience. Amazon's AWS division says it is working with unnamed customers on a way to use generative AI to analyze medical databases for “social determinants of health.” And Microsoft Azure is helping build a generative AI system for Providence, the nonprofit healthcare network, to automatically triage messages sent from patients to caregivers.

Notable generative AI startups in healthcare include Ambience Healthcare, which is developing a generative AI app for clinicians; Nabla, an ambient AI assistant for practitioners; and Abridge, which creates analytics tools for medical documentation.

The widespread enthusiasm for generative AI is reflected in the investments flowing into generative AI efforts targeting healthcare. Collectively, generative AI healthcare startups have raised tens of millions of dollars in venture capital to date, and the vast majority of health investors say generative AI has significantly influenced their investment strategies.

But both professionals and patients are mixed about whether healthcare-focused generative AI is ready for prime time.

Generative AI may not be what people want

In Deloitte's latest survey, only about half (53%) of U.S. consumers said they thought generative AI could improve healthcare, for example by making it more accessible or shortening appointment wait times. Fewer than half said they expected generative AI to make medical care more affordable.

Andrew Borkowski, chief AI officer at the VA Sunshine Healthcare Network, the U.S. Department of Veterans Affairs' largest health system, doesn't think that pessimism is unwarranted. Borkowski warned that generative AI's deployment could be premature due to its “significant” limitations and the concerns around its efficacy.

“One of the main problems with generative AI is its inability to handle complex medical queries or emergency situations,” he told TechCrunch. “Its limited knowledge base – that is, the absence of up-to-date clinical information – and lack of human expertise make it unsuitable for providing comprehensive medical advice or treatment recommendations.”


Many studies indicate that these points are valid.

In a paper published in JAMA Pediatrics, ChatGPT, OpenAI's generative AI chatbot, which some healthcare organizations have piloted for limited use cases, was found to make errors diagnosing pediatric diseases 83% of the time. And in tests of OpenAI's GPT-4 as a diagnostic assistant, physicians at Beth Israel Deaconess Medical Center in Boston found that the model ranked the wrong diagnosis as its top answer nearly two times out of three.

Today's generative AI also struggles with the medical administrative tasks that are an integral part of clinicians' daily workflows. On MedAlign, a benchmark for evaluating how well generative AI can perform tasks like summarizing patient health records and searching across notes, GPT-4 failed in 35% of cases.

OpenAI and many other generative AI vendors warn against relying on their models for medical advice. But Borkowski and others say they could do more. “Relying solely on generative AI for healthcare could lead to misdiagnoses, inappropriate treatments or even life-threatening situations,” Borkowski said.

Jan Egger, who leads AI-guided therapies at the University of Duisburg-Essen's Institute for AI in Medicine and studies applications of the emerging technology to patient care, shares Borkowski's concerns. He believes the only safe way to use generative AI in healthcare today is under the close, watchful eye of a doctor.

“The results can be completely wrong, and it's getting harder and harder to maintain awareness of this,” Egger said. “Certainly, generative AI can be used, for example, for pre-writing discharge letters. But physicians have a responsibility to check it and make the final call.”
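A minimal sketch of that draft-then-review pattern, in Python: the drafting step below is a placeholder for any generative model call, and all names are hypothetical rather than any vendor's actual API.

```python
# Toy sketch of the doctor-in-the-loop pattern Egger describes: the model
# only ever produces a draft, and nothing is released without an explicit
# physician sign-off. draft_discharge_letter is a stand-in for a real
# generative model call; all names here are hypothetical.

def draft_discharge_letter(case_notes: str) -> str:
    """Placeholder for a generative model call returning a draft letter."""
    return f"DRAFT (pending physician review):\n{case_notes}"

def release_letter(draft: str, physician_signed_off: bool) -> str:
    """The final call stays with the doctor: unreviewed drafts go nowhere."""
    if not physician_signed_off:
        raise PermissionError("Discharge letter requires physician sign-off.")
    return draft

letter = draft_discharge_letter("Admitted 3/12, community-acquired pneumonia; ...")
print(release_letter(letter, physician_signed_off=True))
```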

Generative AI can perpetuate stereotypes

One particularly harmful way that generative AI in healthcare can go wrong is by perpetuating stereotypes.

In a 2023 study out of Stanford Medicine, a team of researchers tested ChatGPT and other generative AI-powered chatbots on questions about kidney function, lung capacity and skin thickness. Not only were ChatGPT's answers frequently wrong, the co-authors found, but the answers also repeated several long-held untrue beliefs about biological differences between Black and white people, falsehoods that are known to have led medical providers to misdiagnose health problems.


The irony is that the patients most likely to be discriminated against by generative AI in healthcare are also those most likely to use it.

People who lack healthcare coverage (people of color, by and large, according to a KFF study) are more willing to try generative AI for things like finding a doctor or mental health support, the Deloitte survey showed. If the AI's recommendations are marred by bias, it could exacerbate inequalities in treatment.

However, some experts argue that generative AI is improving in this regard.

In a Microsoft study published in late 2023, researchers said they achieved 90.2% accuracy on four challenging medical benchmarks using GPT-4. Vanilla GPT-4 couldn't reach this score. But, the researchers say, through prompt engineering (designing prompts to get GPT-4 to produce certain outputs) they were able to boost the model's score by up to 16.2 percentage points. (Microsoft, it's worth noting, is a major investor in OpenAI.)
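To make the term concrete, the sketch below asks a model the same question with a bare prompt and then with an engineered one, using the OpenAI Python client. The model name, prompts and question are illustrative assumptions; this is the general idea of prompt engineering, not the Microsoft study's actual method.

```python
# Illustration of prompt engineering: the same model, the same question,
# asked two ways. The model name, prompts and question are hypothetical;
# this is NOT the Microsoft study's method, just the general idea.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "A 6-year-old has fever, rash and swollen lymph nodes. Differential?"

# Bare prompt: the model answers in whatever form it likes.
bare = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": question}],
)

# Engineered prompt: role framing, step-by-step instruction and a fixed
# output format -- typical levers that prompt engineering pulls.
engineered = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You assist a licensed physician. Reason step by step, then "
                "list the three most likely diagnoses ranked by probability, "
                "one sentence of justification each. Flag emergencies first."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(bare.choices[0].message.content)
print(engineered.choices[0].message.content)
```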

Beyond chatbots

But asking a chatbot a question isn't the only thing generative AI is useful for. Some researchers say medical imaging could greatly benefit from the power of generative AI.

In July, a group of scientists unveiled a system called complementarity-driven deferral to clinical workflow (CoDoC) in a study published in Nature. The system is designed to figure out when medical imaging specialists should rely on AI for diagnoses versus traditional techniques. CoDoC performed better than specialists while reducing clinical workflows by 66%, according to the co-authors.
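The real CoDoC system is trained to make that deferral decision; the snippet below sketches only the simplest version of the underlying idea, deferring to a human whenever model confidence falls below a threshold, with invented numbers throughout.

```python
# Bare-bones sketch of confidence-based deferral, the idea behind CoDoC:
# accept the AI's read only when its confidence clears a threshold tuned
# on held-out data; otherwise route the case to a specialist. The real
# CoDoC learns this decision; threshold and scores here are invented.

DEFER_THRESHOLD = 0.85  # hypothetical, would be tuned for a target error rate

def route_case(ai_confidence: float) -> str:
    """Decide who makes the call for a single imaging case."""
    if ai_confidence >= DEFER_THRESHOLD:
        return "accept AI diagnosis"
    return "defer to specialist"

# Made-up confidence scores for a batch of cases.
for conf in (0.97, 0.91, 0.62, 0.88, 0.40):
    print(f"confidence={conf:.2f} -> {route_case(conf)}")
```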

And in November, a Chinese research team demoed PANDA, an AI model used to detect potential pancreatic lesions in X-rays. A study showed PANDA to be highly accurate in classifying these lesions, which are often detected too late for surgical intervention.

In fact, Arun Thirunavukarasu, a clinical research fellow at the University of Oxford, said there is “nothing unique” about generative AI that would preclude its deployment in healthcare settings.

“More mundane applications of generative AI technology are feasible in the short and medium term, including text correction, automatic documentation of notes and letters, and improved search features to optimize electronic patient records,” he said. “There is no reason why generative AI technology, if effective, could not be deployed in these sorts of roles immediately.”


“Rigorous science”

But while generative AI shows promise in specific, narrow areas of medicine, experts like Borkowski point to the technical and compliance hurdles that must be overcome before generative AI can be useful, and trusted, as an all-around assistive healthcare tool.

“Significant privacy and security concerns surround the use of generative AI in healthcare,” Borkowski said. “The sensitive nature of medical data and the potential for misuse or unauthorized access pose severe risks to patient confidentiality and trust in the healthcare system. Moreover, the regulatory and legal landscape surrounding the use of generative AI in healthcare is still evolving, and questions related to liability, data protection and the practice of medicine by non-human entities still need to be resolved.”

Even Thirunavukarasu, an optimist about generative AI in healthcare, says there must be “rigorous science” behind patient-facing tools.

“Especially without direct clinician oversight, there should be pragmatic randomized controlled trials demonstrating clinical benefit to justify the deployment of patient-facing generative AI,” he said. “Sound governance going forward is essential to catch any unforeseen harms following deployment at scale.”

The World Health Organization recently released guidelines calling for exactly this kind of science and human oversight of generative AI in healthcare, along with the introduction of auditing, transparency and impact assessments of this AI by independent third parties. The goal, the WHO spells out in its guidelines, is to encourage participation from a diverse cohort of people in the development of generative AI for healthcare and to give them opportunities to voice concerns and provide input throughout the process.

“Until concerns are adequately addressed and appropriate safeguards are in place, widespread implementation of medical AI could be…harmful to patients and the healthcare industry as a whole,” Borkowski said.


