Artificial intelligence programs that check medical images for evidence of cancer can be duped by hacks and cyberattacks, according to a new study.
That could lead to an incorrect diagnosis. An AI program helping to screen mammograms might say a scan is healthy when there are actually signs of cancer or incorrectly say that a patient does have cancer when they’re actually cancer free. Such hacks are not known to have happened in the real world yet, but the new study adds to a growing body of research suggesting healthcare organizations need to be prepared for them.
Hackers are increasingly targeting hospitals and healthcare institutions with cyberattacks. Most of the time, those attacks siphon off patient data (which is valuable on the black market) or lock up an organization’s computer systems until the organization pays a ransom. Both of those types of attacks can put patients at risk by disrupting hospital operations and delaying care.
But experts are also growing more worried about the potential for more direct attacks on people’s health. Security researchers have shown, for example, that hackers can remotely break into internet-connected medical devices, like insulin pumps, and tamper with how they operate.
Hacks that can change medical images and impact a diagnosis also fall into that category. In the new study on mammograms, published in Nature Communications, a research team from the University of Pittsburgh designed a computer program that tampered with breast X-rays in both directions: making scans that originally showed no signs of cancer look cancerous, and making scans that looked cancerous appear cancer-free. They then fed the tampered images to an artificial intelligence program trained to spot signs of breast cancer and asked five human radiologists to decide whether the images were real or fake.
Around 70 percent of the manipulated images fooled the program: the AI declared images manipulated to look cancer-free to be cancer-free, and declared images manipulated to look cancerous to show evidence of cancer. The radiologists fared unevenly; their accuracy at picking out the fake images ranged widely, from 29 percent to 71 percent.
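The paper describes its own image-manipulation technique, and those details aren’t reproduced here. As a generic illustration of how this class of attack works, though, the sketch below uses the well-known fast gradient sign method (FGSM) in PyTorch. The tiny model and random input are placeholders, not the study’s mammography model or data.

```python
# Minimal sketch of a gradient-based adversarial attack (FGSM).
# This is NOT the study's method, only an illustration of how a small,
# deliberately computed pixel change can flip a classifier's output.
import torch
import torch.nn as nn

# Stand-in binary classifier (cancer vs. no cancer) -- hypothetical.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

scan = torch.rand(1, 1, 224, 224)   # placeholder "mammogram"
true_label = torch.tensor([1])      # 1 = cancerous (assumed encoding)

# Compute the loss gradient with respect to the input pixels...
scan.requires_grad_(True)
loss = nn.functional.cross_entropy(model(scan), true_label)
loss.backward()

# ...then nudge every pixel slightly in the direction that increases
# the loss. Epsilon controls how visible the tampering is.
epsilon = 0.03
tampered = (scan + epsilon * scan.grad.sign()).clamp(0, 1).detach()

print("original prediction:", model(scan).argmax(dim=1).item())
print("tampered prediction:", model(tampered).argmax(dim=1).item())
```

The point of the sketch is that the perturbation is computed from the model’s own gradients, so it can be kept small enough to be hard for a human to notice while still changing the prediction.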
Other studies have also demonstrated the possibility that a cyberattack on medical images could lead to incorrect diagnoses. In 2019, a team of cybersecurity researchers showed that hackers could add or remove evidence of lung cancer from CT scans, and the altered images fooled both human radiologists and AI programs.
There are a number of reasons an attacker might want to manipulate medical images in this way, from targeting a specific patient to committing insurance fraud or extorting a healthcare institution.
Whatever the reason, demonstrations like this one show that healthcare organizations and people designing AI models should be aware that hacks that alter medical scans are a possibility. Models should be shown manipulated images during their training to teach them to spot fake ones, study author Shandong Wu, associate professor of radiology, biomedical informatics, and bioengineering at the University of Pittsburgh, said in a statement. Radiologists might also need to be trained to identify fake images.
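The defense Wu describes, mixing manipulated images into training so a model learns to handle them, is commonly known as adversarial training. Below is a minimal sketch, assuming the same FGSM-style attack as above; the function and variable names are placeholders, not anything from the study.

```python
# Sketch of adversarial training: train on clean and tampered copies of
# every batch so the model's decisions become robust to small
# pixel-level manipulation. All names here are illustrative.
import torch
import torch.nn as nn

def fgsm(model, images, labels, epsilon=0.03):
    """Generate tampered copies of a batch (same attack as above)."""
    images = images.detach().clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    return (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

def train_step(model, optimizer, images, labels):
    # Attack the current model first, then learn from both versions.
    adv_images = fgsm(model, images, labels)
    optimizer.zero_grad()
    loss = (nn.functional.cross_entropy(model(images), labels)
            + nn.functional.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```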
“We hope that this research gets people thinking about medical AI model safety and what we can do to defend against potential attacks,” Wu said.