TechScape: Can AI Really Help Fix a Healthcare System in Crisis?

What if AI isn’t so great? What if we’ve overstated its potential to a downright dangerous degree? That’s the concern of leading cancer experts in the NHS, who warn that the health service is so obsessed with new technology that it’s putting patient safety at risk. From our story yesterday:

In a stark warning, the cancer experts say that “new solutions”, such as new diagnostic tests, have been wrongly touted as “miracle cures” for the cancer crisis, but that “none of the solutions address the fundamental problems of cancer as a systems problem”.

A “common misconception” among NHS leaders is the assumption that new technologies can reverse inequalities, the authors add. The reality is that tools such as AI “may create additional barriers for those with poor digital or health literacy”.

“We caution against technocentric approaches without a proper evaluation from an equity perspective,” the article concludes.

The paper, published in the Lancet Oncology, instead calls for a back-to-basics approach to cancer care. The proposals focus on solutions such as increasing staff numbers, redirecting research into less trendy areas including surgery and radiotherapy, and creating a dedicated technology transfer unit to ensure that treatments that have already been proven to work actually become part of routine care.

Set against these much-needed improvements, AI can look like an attractive distraction. The technology’s promise is that, within a few years, a radical increase in capacity will let AI perform healthcare tasks that currently cannot be performed at all, or that take hours of a highly trained specialist’s time. And the fear, experts say, is that this promise of the future will distract from the changes that are needed today.

It effectively positions AI as the latest example of “bionic duckweed”, a term coined by Stian Westlake in 2020 for the invocation, intentional or otherwise, of technology that may or may not arrive in the future as an argument against investing in the present. Elon Musk’s Hyperloop is perhaps the best-known example of bionic duckweed, first proposed more than a decade ago explicitly to discourage California from proceeding with plans to build a high-speed rail line.

(The term comes from a real-life case in the wild, where the British government was advised against electrifying railways in 2007 because “we might … have hydrogen-powered trains developed from bionic duckweed in 15 years’ time … we might have to remove the wires and it would all be wasted”. Seventeen years later, the UK continues to use diesel engines on non-electrified lines.)

But the article’s fears about AI, and about the general technophilia of healthcare, go beyond the possibility that the promised technology never arrives. Even if AI does make progress in the fight against cancer, without the right foundations it may be far less useful than it could be.

Back to the article, a quote from the lead author, oncologist Ajay Aggarwal:

AI is a workflow tool, but is it actually going to improve survival? Well, we have limited evidence of that so far. Yes, it’s something that could potentially help the workforce, but you still need people to take a patient’s history, draw blood, perform surgery, deliver bad news.

Even if AI is as good as we hope, it might not mean much for healthcare in the short term. Say AI can significantly speed up a radiologist’s work, diagnosing cancer earlier or faster after a scan: that won’t mean much if there are bottlenecks in the rest of healthcare. In the worst case, you might even see a kind of AI-driven denial-of-service attack, where the tech-driven parts of the workflow overwhelm the rest of the system.
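To make that bottleneck point concrete, here is a toy model in Python. It is my own sketch, not anything from the Lancet Oncology paper, and the capacity figures are invented: a two-stage pathway in which diagnosis feeds a fixed-capacity treatment stage. Doubling diagnostic throughput leaves the number of patients treated over a year unchanged; it only builds a queue.

```python
# Toy two-stage pathway: diagnosis feeds treatment.
# Treatment capacity, not diagnostic speed, caps throughput.

def weekly_flow(diagnoses_per_week: int, treatments_per_week: int, weeks: int):
    """Return (patients treated, treatment backlog) after the given number of weeks."""
    backlog = 0
    treated = 0
    for _ in range(weeks):
        backlog += diagnoses_per_week             # newly diagnosed patients join the queue
        done = min(backlog, treatments_per_week)  # fixed treatment capacity caps the flow
        treated += done
        backlog -= done
    return treated, backlog

print(weekly_flow(100, 100, 52))  # baseline:            (5200, 0)
print(weekly_flow(200, 100, 52))  # AI doubles scanning: (5200, 5200), same treated, year-long queue
```

A real pathway is vastly more complicated, but the arithmetic of the bottleneck is the same: extra speed upstream of a constraint becomes queue, not throughput.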

AI advocates hope that in the long run, systems will adapt to integrate the technology well. (Or, if you really believe in it, maybe it’s just a matter of waiting for AI to staff a hospital from start to finish.) But in the short term, it’s important not to assume that just because AI can perform some medical tasks, it can help save a healthcare system in crisis.

Digital government

The new DSIT secretary Peter Kyle at Downing Street. Photo: Tejas Sandhu/PA

Last week we looked at some ideas for what the new government could do around technology, and it’s looking good for at least one of those suggestions. The new science, innovation and technology secretary, Peter Kyle, has only been in office for a few days, but he’s already landed in my inbox. DSIT, he says, will:

Become the centre for digital expertise and delivery within government, and improve the way government and public services interact with citizens.

We will play a leading role, working with government, industry and the research community to improve Britain’s economic performance and strengthen our public services to improve people’s lives and opportunities through the application of science and technology.

Specifically, DSIT will “help civil servants improve their skills so they are better able to use digital technology and AI in their work on the frontline”. Last week we called on Labour to “take AI in government seriously”; it seems they already are.


Digital colleagues

Will your next new colleague be digital? Photo: Andriy Popov/Alamy

On the one hand, this is clearly a publicity stunt:

Lattice puts an AI worker through the same processes as a human worker starting a new role.

We add them to the personnel file and integrate them into our HRIS. We add them to the organizational chart so you can see where they fit into a team and department. We hire the AI employees and make sure they get the training they need for their role.

And we’re going to assign goals to this digital worker, to make sure that we hold them accountable to certain standards – just like we would with any other worker. This is going to be a huge learning opportunity for us and for the industry.

That’s Sarah Franklin, CEO of HR platform Lattice, talking about the company’s plans to put an AI worker through the same steps as a human hire. But if you want a glimpse of what success for AI would look like, this is not far off it.

Companies are bad at introducing new technology. If something works well enough, they often stick with it for years, even decades, and it’s a huge hurdle to get them to switch to a different way of doing things, even if the benefits seem great.

But they’re much better at bringing in new staff. They have to be: staff leave, retire, have children, or die. If you can make the process of bringing in an AI employee look more like hiring a new member of staff and less like adopting a new technology, you can dramatically expand the pool of companies that feel they can bring AI into their world.

To read the full version of the newsletter, subscribe and receive TechScape in your inbox every Tuesday.