The future of healthcare technology

8 June 2023

The future of healthcare technology: a robot to take your pulse.

• Generative AI is being used in healthcare settings already.
• Without nuance, it has been shown to go badly wrong.
• Nevertheless, the potential for AI in the future of healthcare is enormous.

The future of healthcare technology, as science fiction has occasionally fed it to us, feels like it’s drawing closer than it’s ever been.

But it’s not all plain sailing from here to cancer-zapping nanobots, robo-nurses, and medical tricorders. If we take a look at some of the latest ways in which tech has been used in the medical sector, we’ll see pros and cons, both already in the field and just around the corner.

Medicine might seem like an area that should be practiced only by well-trained humans. When we put our lives in their hands, after all, we understand that humans have a duty of care – and have been trained to perform it.

However, technology is responsible for huge medical advances, and it only makes sense that the tech world’s latest darling – artificial intelligence – is being trialled in the field.

Doctors, hospital executives and data scientists (put them in a bar and you’re halfway to a joke) all agree that artificial intelligence could help solve huge healthcare problems.

It’s already in use: the Mayo Clinic has combined AI and machine learning with clinical practice to improve care, applying the technology to radiology. A panel discussion on AI Adoption for Clinical Practice from the Mayo Clinic Platform Conference 2022 covers the work.

Unfortunately, healthcare systems are biased, and the new tools could perpetuate long-standing racial inequities in the delivery of healthcare. Because the models are trained on historical records, there’s scope for the same patterns to continue.

“If you mess this up, you can really, really harm people by entrenching systemic racism further into the health system,” said Dr. Mark Sendak, a lead data scientist at the Duke Institute for Health Innovation.

Healthcare organization Carbon Health has introduced an AI tool to generate medical records automatically. With a patient’s consent, meetings with the doctor will be recorded and the audio sent to Amazon’s AWS Transcribe Medical cloud service, which transcribes it.

The transcript – along with data from the patient’s medical records, including recent test results – is passed to an ML model that produces notes summarizing important information gathered in the consultation.

Company CEO Eren Bali said the software is directly integrated into the firm’s electronic health records (EHR) system and is powered by OpenAI’s latest language model, GPT-4.
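Carbon Health hasn’t published its code, and the details above are all we have to go on, but the described flow (record the visit, transcribe it with Transcribe Medical, then summarize it with GPT-4 alongside chart context) can be sketched in a few API calls. The snippet below is a minimal, hypothetical illustration of that flow using boto3 and OpenAI’s Python SDK; the bucket names, job name, prompt, and chart context are invented for the example, and it is not Carbon Health’s implementation.

    import json
    import time

    import boto3
    from openai import OpenAI

    # Hypothetical names for this example; none of these come from Carbon Health.
    AUDIO_URI = "s3://example-clinic-recordings/visit-1234.wav"
    OUTPUT_BUCKET = "example-clinic-recordings"
    OUTPUT_KEY = "transcripts/visit-1234.json"
    JOB_NAME = "visit-1234"

    transcribe = boto3.client("transcribe")
    s3 = boto3.client("s3")

    # Step 1: send the recorded consultation to AWS Transcribe Medical.
    transcribe.start_medical_transcription_job(
        MedicalTranscriptionJobName=JOB_NAME,
        LanguageCode="en-US",
        MediaFormat="wav",
        Media={"MediaFileUri": AUDIO_URI},
        OutputBucketName=OUTPUT_BUCKET,
        OutputKey=OUTPUT_KEY,
        Specialty="PRIMARYCARE",
        Type="CONVERSATION",
    )

    # Poll until the job finishes (a production system would react to an event instead).
    while True:
        job = transcribe.get_medical_transcription_job(
            MedicalTranscriptionJobName=JOB_NAME
        )["MedicalTranscriptionJob"]
        if job["TranscriptionJobStatus"] in ("COMPLETED", "FAILED"):
            break
        time.sleep(15)

    # Step 2: pull the transcript text out of the JSON that Transcribe wrote to S3.
    raw = s3.get_object(Bucket=OUTPUT_BUCKET, Key=OUTPUT_KEY)["Body"].read()
    transcript = json.loads(raw)["results"]["transcripts"][0]["transcript"]

    # Step 3: combine the transcript with context from the patient's chart and ask
    # GPT-4 to draft a consultation note. The chart context here is a placeholder;
    # in practice it would come from the EHR.
    chart_context = "Recent labs: [...]; active problems: [...]"

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a clinical scribe. Draft a concise visit note "
                        "from the transcript and chart context provided."},
            {"role": "user",
             "content": f"Chart context:\n{chart_context}\n\nVisit transcript:\n{transcript}"},
        ],
    )

    # The draft still goes to a physician for review and edits before it is signed.
    draft_note = response.choices[0].message.content
    print(draft_note)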

“The use of scribes and transcription services is standard in the healthcare industry, and a majority of patients provide consent to have their visit recorded by their provider,” a spokesperson told The Register on Monday.

The AI-generated text will still have to be reviewed by physicians, although Carbon Health claims that 88% of the information can be accepted without edits.

Theoretically, the tool will increase the number of patients a doctor’s office can see, as it can produce consultation summaries in four minutes, compared with the 16 it typically takes a human.

It’ll also cost less than paying a human to do the same work.

The future of healthcare technology: faster and cheaper

Last week the National Eating Disorders Association (NEDA) announced Tessa, a new chatbot feature on its helpline. The entire human staff (six paid workers and around 200 volunteers) was to be replaced by the chatbot.

Less than a week later, on Monday, June 5, the chatbot was pulled from the site. An eating disorder activist, Sharon Maxwell, had made an Instagram post sounding the alarm that the chatbot wasn’t helpful – in fact, it was actively dangerous to sufferers.

Despite stating that she had an eating disorder, Maxwell received weight loss tips from the chatbot. Initially, NEDA pushed back against the claims in an Instagram post of its own, which was deleted soon after Maxwell provided screenshots as proof.

While anyone can make an honest mistake, that initial pushback is symptomatic of the sometimes blind faith people are already putting in systems that use generative AI.

“It came to our attention [Monday] night that the current version of the Tessa chatbot, running the Body Positive program, may have given information that was harmful,” NEDA said in an Instagram post. “We are investigating this immediately and have taken down that program until further notice for a complete investigation.”

Vice President Lauren Smolar denied that the move to AI was prompted by the helpline staff’s decision to unionize. She told NPR that the organization was concerned about keeping up with demand from an increasing number of calls and long wait times – last year staff took nearly 70,000 calls.

She also stated that NEDA never intended the automated chat function to completely replace the human-powered call line (a claim that sits awkwardly alongside the fact that the staff were literally replaced).

Whether Tessa was a techno-union buster or not, technology is becoming a convenient scapegoat when things go wrong.

The AI equivalent of “the robot dog ate my homework” is becoming an ever more commonplace explanation – and it’s helping to redefine what people can expect from medical services.

A California-based company that sells a blood test kit which detects cancer has said it incorrectly informed roughly 400 customers that they might have cancer.

Coming in at $949, the Galleri test by Grail detects a marker for more than 50 types of cancer. Customers who paid out hard-earned money received a letter “stating incorrectly that a cancer signal was detected,” a spokeswoman told CBS MoneyWatch.  

The error was supposedly the fault of the vendor, PWN Health, and was put down to a “software configuration issue.” The point is that at that price – and with the terror and stress of a potential cancer diagnosis on the other side of it – the company should have a legal duty to ensure its software is configured correctly.

In a statement, PWN Health said the problem was down to “a misconfiguration of our patient engagement platform used to send templated communications to individuals.”

The robot dog ate my homework. Wrongly.

PWN Health also claimed that it had added processes to make sure such a mistake wouldn’t occur again, and that it began contacting the people who received the erroneous letters within 36 hours.

“The issue was in no way related to or caused by an incorrect Galleri laboratory test result.”

There’s no risk that the medical profession is going to be taken over by technology anytime soon, but while technological failures make headlines, it’s worth remembering the successes that are happening every day.

The future of healthcare technology isn’t exactly hanging in the balance – it’s here already.

No-one would argue against technology being put to medical use, but even as AI gets more capable, we should be questioning which roles we want it to take on.

Plus, in the name of preventing its heavily forecast world domination, maybe AI shouldn’t be privy to our innermost thoughts (or perhaps our physical weak points) just yet.

We may still be some way from emergency medical holograms – but AI in the medical profession is already here, for better and worse.