By Erik L. Ridley, AuntMinnie staff writer

August 27, 2020 — The combination of an artificial intelligence (AI)-based computer-aided detection (CAD) algorithm with radiologist interpretation can detect more cases of breast cancer on screening mammograms than double reading by radiologists, according to research published online August 27 in JAMA Oncology.


Researchers from the Karolinska University Hospital in Stockholm, Sweden, retrospectively compared three commercially available AI models in a case-control study involving nearly 9,000 women who had undergone screening mammography. They found that one of the models demonstrated sufficient diagnostic performance to merit further prospective evaluation as an independent reader.

What’s more, the best results — 88.6% sensitivity and 93% specificity — were achieved when that algorithm’s output was combined with the first radiologist’s interpretation.

“Combining the first readers with the best algorithm identified more cases positive for cancer than combining the first readers with second readers,” wrote the authors, led by Dr. Mattie Salim. “No other examined combination of AI algorithms and radiologists surpassed this sensitivity level.”

The researchers used a study sample of 8,805 women ages 40 to 74 who had received screening mammography at their academic hospital from 2008 to 2015 and who did not have implants or prior breast cancer. All exams were performed on a full-field digital mammography system from Hologic.

Of these women from the public mammography screening program, 8,066 were a random sample of healthy controls and 739 were diagnosed with breast cancer. The 739 cancer cases comprised 618 screening-detected cancers and 121 clinically detected cancers. To mimic the 0.5% screening-detected cancer rate of the source screening cohort, the researchers used a stratified bootstrapping method to increase the simulated number of screenings to 113,663.
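As a rough illustration of that resampling step, the sketch below bootstraps the cancer and control strata (sampling with replacement) until the simulated cohort reaches the reported size and cancer rate. The variable names are hypothetical, and the study’s exact bootstrap procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Strata from the case-control sample described above.
n_screen_detected_cancers = 618
n_healthy_controls = 8_066

# Targets reported in the study: 113,663 simulated screenings with a
# 0.5% screening-detected cancer rate.
n_simulated = 113_663
n_cancer_target = round(0.005 * n_simulated)        # ~568 cancer exams
n_control_target = n_simulated - n_cancer_target    # remaining exams drawn from controls

# Bootstrap each stratum with replacement to its target size
# (hypothetical index arrays standing in for the real exam records).
cancer_draws = rng.integers(0, n_screen_detected_cancers, size=n_cancer_target)
control_draws = rng.integers(0, n_healthy_controls, size=n_control_target)

simulated_cohort = np.concatenate([cancer_draws, control_draws])
print(simulated_cohort.size)  # 113663
```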

The researchers then applied AI CAD software from three different vendors, which asked to remain anonymous. None of the algorithms had been trained on the mammograms in the study.

After processing the images, the CAD software provided a prediction score for each breast ranging from 0 (lowest suspicion) to 1 (highest suspicion). To enable comparison of the algorithms’ results with the recorded radiologist decisions, the researchers chose an algorithm output cutpoint that matched the specificity of the first-reader radiologists (96.6%) as closely as possible.
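A minimal sketch of that cutpoint selection, assuming an array of per-breast prediction scores for the healthy controls is available; the helper name and the score distribution below are hypothetical, not the study’s code.

```python
import numpy as np

def cutpoint_matching_specificity(control_scores: np.ndarray, target: float) -> float:
    """Return the score cutpoint whose specificity on healthy controls is
    closest to the target value (illustrative helper only)."""
    candidates = np.unique(control_scores)
    # Specificity at cutpoint t = fraction of controls scoring below t
    # (i.e., correctly left unflagged when exams scoring >= t are flagged).
    specificities = np.array([(control_scores < t).mean() for t in candidates])
    best = int(np.argmin(np.abs(specificities - target)))
    return float(candidates[best])

# Hypothetical prediction scores in [0, 1] for the 8,066 healthy controls.
rng = np.random.default_rng(1)
control_scores = rng.beta(1, 9, size=8_066)
cutpoint = cutpoint_matching_specificity(control_scores, target=0.966)
```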

Breast cancer detection performance
- First-reader radiologists: AUC n/a, sensitivity 77.4%, specificity 96.6%
- Second-reader radiologists: AUC n/a, sensitivity 80.1%, specificity 97.2%
- Double-reading consensus: AUC n/a, sensitivity 85%, specificity 98.5%
- AI algorithm #3: AUC 0.920, sensitivity 67.4%, specificity 96.7%
- AI algorithm #2: AUC 0.922, sensitivity 67%, specificity 96.6%
- AI algorithm #1: AUC 0.956, sensitivity 81.9%, specificity 96.6%
- AI algorithm #1 combined with first-reader radiologists: AUC n/a, sensitivity 88.6%, specificity 93%

The researchers noted that the difference in sensitivity between AI algorithm #1 and the other two algorithms was statistically significant (p < 0.001), as was the difference between AI algorithm #1 and the first readers (p = 0.03).
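The article does not spell out the combination rule, but one natural reading is that an exam is flagged if either the first reader or the algorithm flags it, which would be consistent with the combination’s higher sensitivity and lower specificity. A hedged sketch of that assumed “either flags” rule:

```python
import numpy as np

def or_rule_performance(reader_flags: np.ndarray, ai_flags: np.ndarray, labels: np.ndarray):
    """Sensitivity and specificity when an exam is considered positive if either
    the radiologist or the AI algorithm flags it (an assumed combination rule;
    the study's exact rule may differ)."""
    combined = reader_flags | ai_flags
    sensitivity = float(combined[labels == 1].mean())
    specificity = float((~combined[labels == 0]).mean())
    return sensitivity, specificity
```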

In an accompanying commentary, Dr. Constance Lehman, PhD, of Harvard Medical School in Boston, said that it’s now time to move beyond simulation and reader studies and enter the critical phase of rigorous, prospective clinical evaluation of AI.

“The need is great and a more rapid pace of research in this domain can be partnered with safe, careful, and effective testing in prospective clinical trials,” she wrote. “If AI models can sort women with cancer detected on their mammograms from those without cancer detected on their mammograms, the value of screening mammography can be made available and affordable to a large population of women globally who currently have no access to the life-saving potential of quality screening mammography.”


By Erik L. Ridley, AuntMinnieEurope staff writer

August 10, 2020 — Artificial intelligence (AI)-based software can reliably categorize a significant percentage of negative screening mammograms as normal, potentially decreasing mammography reading workload for radiologists by more than half, according to two presentations made at the recent ECR 2020 virtual meeting.

In separate studies that simulated outcomes from using AI algorithms to evaluate screening mammograms and set aside cases that are highly likely to be normal, Danish and U.S. researchers described how the software can reduce radiologists’ interpretation burden without negatively affecting cancer detection.

Pressure on radiologists

With the vast number of women enrolled in breast cancer screening programs worldwide, radiologists are under pressure to read a substantial number of mammograms — the majority of which are normal. Using AI to automatically identify a large share of these normal mammograms could improve the performance of the screening program while also making it more efficient, explained Andreas Lauritzen of the University of Copenhagen in Denmark.

In a retrospective study, the investigators assessed the potential clinical impact of using AI software to reduce the screening mammography workload, specifically examining cases where an AI system could substitute for two radiologists when mammograms are very likely to be normal, he noted.

The team analyzed 53,948 mammography exams acquired in the Danish Capital Region breast cancer screening program from November 1, 2012, to December 31, 2013, in women ages 50-70. All exams included four full-field digital mammography (FFDM) images — two mediolateral oblique and two craniocaudal views — that were acquired on a Mammomat Inspiration FFDM system (Siemens Healthineers). Two radiologists read each of the exams, with agreement established in consensus.

The 53,948 exams included 418 screening-detected cancers, 150 interval cancers, and 812 long-term cancers that were confirmed by mammography, ultrasound, and biopsy. There were also 1,306 exams that were noncancer recalls.

The researchers then retrospectively applied version 1.6 of the Transpara AI-based software (ScreenPoint Medical) to these exams. Transpara provides a score of 1-10 to indicate the likelihood of a visible malignancy.

In the simulated workflow, radiologists would not read exams deemed by the software to be very likely normal, while the remaining exams would be double-read as usual. The researchers then compared the outcomes of this experiment with the original screening outcomes.
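A minimal sketch of that simulated workflow, using made-up scores and labels purely for illustration (the study applied the real Transpara scores and recorded screening outcomes):

```python
import numpy as np

def simulate_ai_triage(scores: np.ndarray, cancer_labels: np.ndarray, threshold: float):
    """Set aside exams scoring at or below `threshold` as likely normal (not read
    by radiologists) and send the rest to double reading. Returns the workload
    reduction and the number of cancers in the set-aside pile (illustrative only)."""
    set_aside = scores <= threshold
    workload_reduction = float(set_aside.mean())
    missed_cancers = int((set_aside & (cancer_labels == 1)).sum())
    return workload_reduction, missed_cancers

# Hypothetical exam scores from 1 to 10 and cancer labels for 53,948 exams.
rng = np.random.default_rng(2)
scores = rng.uniform(1, 10, size=53_948)
labels = (rng.random(53_948) < 0.01).astype(int)
print(simulate_ai_triage(scores, labels, threshold=5))
```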

Lower workload

Lauritzen and colleagues found that the AI software yielded an area under the curve (AUC) of 0.95 for screening-detected cancers and 0.66 for interval cancers. If an AI score threshold of 5 were used, 32,054 exams would be considered likely normal and would not be read by radiologists, reducing the mammography reading workload by 59.42%.

Screening mammography program outcomes in the study of 53,948 screening mammograms from the Danish Capital Region
- Recall rate: 3.18% with the original screening workflow vs. 2.48% with an AI score threshold of 5 used to set aside likely normal exams
- Positive predictive value: 24.34% vs. 30.04%, respectively

With this strategy, 16 (3.83%) of the screening-detected cancers found during the normal double-reading process would have been missed. However, if the AI strategy were expanded to recall all women with an exam score > 9.96, 16 new cancers would be detected, including five interval cancers and 11 long-term cancers. These added detections would come at the cost of only 91 new noncancer recalls, Lauritzen said.
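As a rough sketch of that expanded strategy, exams scoring above the high cutpoint would be recalled in addition to whatever the radiologists decided; the function below is an assumed reading of the approach, not the study’s implementation.

```python
import numpy as np

def expanded_recall(scores: np.ndarray, radiologist_recall: np.ndarray,
                    high_cutpoint: float = 9.96) -> np.ndarray:
    """Recall an exam if the radiologists recalled it or its AI score exceeds
    the high cutpoint (an assumed reading of the expanded strategy)."""
    return radiologist_recall | (scores > high_cutpoint)
```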

This demonstrates that it’s possible to maintain a stable cancer detection rate and still avoid a large number of noncancer recalls, he pointed out.

“This study suggests that an AI system can be used to maintain safety of the breast screening program, possibly increase performance, while reducing the number of mammograms that have to be read by radiologists by a considerable amount,” Lauritzen said.
