Reduces the Complexity of Reporting for Screening and Diagnostic MRI Exams to Deliver Time-Saving and Patient Safety Benefits

RICHARDSON, Texas, November 12, 2020 –– Ikonopedia announced today the release of its newly updated next-generation breast MRI reporting module. The intuitive new interface is designed to reduce the complexity of reporting for screening and diagnostic MRI exams and is compliant with the ACR BI-RADS Atlas Fifth Edition.

The new breast MRI module leverages the intuitive icon-based interface of Ikonopedia’s Mammography and Ultrasound structured reporting modalities to deliver a variety of physician efficiency and patient safety benefits. Reporting capabilities have been expanded, and intuitive organization guides radiologists through BI-RADS criteria to reach an accurate, BI-RADS-compliant, and natural-sounding description of lesions. New functionality in the MRI diagnostic modality includes ten lesion assessment categories that adhere to BI-RADS. The MRI screening modality has been updated to include a new contrast selection dialog and to synchronize with the new MRI diagnostic modality.

The enhanced breast MRI module has also been optimized for AI input such as Qlarity Imaging’s QuantX, the first U.S. Food and Drug Administration (FDA)-cleared computer-aided diagnosis software for breast MRI analysis.

“We’ve been very pleased with the flexibility and efficiency gains from the intuitive user interface in the updated breast MRI reporting tools, particularly the ability to easily describe trackable entries while maintaining BI-RADS verbiage to create complex reports,” said Erica Guzalo, Section Chief, Breast Imaging, Sinai Health Chicago. “I also appreciate Ikonopedia’s dedication to continually help solve issues and implement new ideas that are beneficial to us, as users.”

“As we, as an industry, move towards more broadly adopting risk-based screening based on a woman’s personal risk and breast density, the utilization of breast MRI will continue to grow,” said Michael Vendrell, MD, co-founder of Ikonopedia. “This new module streamlines reporting workflow to deliver more accurate diagnoses, reduces the risk of reporting errors, and saves time as radiologists face increasing exam volume and data complexity. These are critical new capabilities to improve patient care and safety.”

Ikonopedia is an innovative structured breast reporting and MQSA management system designed to dramatically improve reporting efficiency and optimize facility operations. All findings are saved as discrete data, which allows Ikonopedia to prevent errors, maintain BI-RADS-compliant language, and automate many time-consuming processes. Ikonopedia makes it possible to eliminate laterality errors, automatically choose exam-appropriate patient letters, and pull forward findings from past exams, along with many other time-saving features.

Ikonopedia’s integrated risk assessment tool is now available in dozens of languages and risk data is used to create alerts for the radiologist, populate the clinical section of the report, and automatically update the patient letter. A high-risk patient alert identifies patients with a 20% or greater lifetime risk and information about the score is instantly viewable.

About Ikonopedia

Ikonopedia was founded by three expert breast imaging radiologists: László Tabár, MD, the author of six books in 10 languages on mammography and a world-renowned educator; A. Thomas Stavros, MD, the author of one of the most popular reference books in the field of breast ultrasound; and Michael J. Vendrell, MD, an expert in breast MRI and CAD design with extensive experience in breast-imaging software. For more information, visit


#   #   #

Media Contacts:
Emily Crane

By Erik L. Ridley, AuntMinnie staff writer

August 27, 2020 –– The combination of an artificial intelligence (AI)-based computer-aided detection (CAD) algorithm with radiologist interpretation can detect more cases of breast cancer on screening mammograms than double reading by radiologists, according to research published online August 27 in JAMA Oncology.
Researchers from the Karolinska University Hospital in Stockholm, Sweden, retrospectively compared three commercially available AI models in a case-control study involving nearly 9,000 women who had undergone screening mammography. They found that one of the models demonstrated sufficient diagnostic performance to merit further prospective evaluation as an independent reader.

What’s more, the best results — 88.6% sensitivity with 93% specificity — were achieved when utilizing that algorithm’s results along with the first radiologist interpretation.

“Combining the first readers with the best algorithm identified more cases positive for cancer than combining the first readers with second readers,” wrote the authors, led by Dr. Mattie Salim. “No other examined combination of AI algorithms and radiologists surpassed this sensitivity level.”

The researchers used a study sample of 8,805 women ages 40 to 74 who had received screening mammography at their academic hospital from 2008 to 2015 and who did not have implants or prior breast cancer. All exams were performed on a full-field digital mammography system from Hologic.

Of these women from the public mammography screening program, 8,066 were a random sample of healthy controls and 739 were diagnosed with breast cancer. These 739 cancer cases included 618 actual screening-detected cancers and 121 clinically detected cancers. In order to mimic the 0.5% screening-detected cancer rate in the source screening cohort, a stratified bootstrapping method was used to increase the simulated number of screenings to 113,663.
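The stratified-bootstrap step can be illustrated in a few lines. This is a minimal sketch of the general idea, not the study's actual code: cases and controls are resampled separately (the strata) so that the simulated cohort of 113,663 screenings contains the 618 screening-detected cancers at roughly the 0.5% rate of the source population. The array contents are stand-in identifiers, and the variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target simulated cohort size and strata sizes, taken from the article.
n_total = 113_663
n_cancer = 618                 # screening-detected cancers (~0.5% of cohort)
n_healthy = n_total - n_cancer

cases = np.arange(618)         # stand-ins for the 618 screening-detected cancers
controls = np.arange(8_066)    # stand-ins for the 8,066 healthy controls

# Resample each stratum with replacement to reach the simulated cohort size.
boot_cases = rng.choice(cases, size=n_cancer, replace=True)
boot_controls = rng.choice(controls, size=n_healthy, replace=True)

print(len(boot_cases) + len(boot_controls))  # 113663
```

Because each stratum is resampled independently, the cancer prevalence in every bootstrap replicate is fixed by construction rather than left to chance.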

The researchers then applied AI CAD software from three different vendors, each of which asked to remain anonymous. None of the algorithms had been trained on the mammograms in the study.

After processing the images, the CAD software provided a prediction score for each breast ranging from 0 (lowest suspicion) to 1 (highest suspicion). To enable comparison of the algorithms’ results with the recorded radiologist decisions, the researchers chose an algorithm output cutpoint whose specificity matched as closely as possible that of the first-reader radiologists (96.6%).
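The cutpoint-matching procedure described above can be sketched as follows. This is an illustrative implementation of the general technique (scan candidate thresholds on the healthy exams and keep the one whose specificity is closest to the target), assuming scores in [0, 1] as the article describes; the function name and approach are our own, not the study's code.

```python
import numpy as np

def cutpoint_for_specificity(scores_healthy, target_spec):
    """Return the score threshold whose specificity on healthy exams
    most closely matches target_spec (exams with score < threshold
    are called negative)."""
    best_t, best_gap = None, float("inf")
    for t in np.unique(scores_healthy):
        spec = np.mean(scores_healthy < t)   # fraction of healthy called negative
        gap = abs(spec - target_spec)
        if gap < best_gap:
            best_t, best_gap = t, gap
    return best_t
```

With a chosen threshold, any exam scoring at or above it counts as algorithm-positive, which is what allows the AI output to be tallied against the radiologists' recorded binary decisions.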

Breast cancer detection performance

Reader / algorithm                                   AUC      Sensitivity   Specificity
First-reader radiologists                            n/a      77.4%         96.6%
Second-reader radiologists                           n/a      80.1%         97.2%
Double reading consensus                             n/a      85%           98.5%
AI algorithm #3                                      0.920    67.4%         96.7%
AI algorithm #2                                      0.922    67%           96.6%
AI algorithm #1                                      0.956    81.9%         96.6%
AI algorithm #1 + first-reader radiologists          n/a      88.6%         93%

The researchers noted that the differences in sensitivity between AI algorithm #1 and the other two algorithms, and between AI algorithm #1 and the first readers, were statistically significant (p < 0.001 and p = 0.03, respectively).

In an accompanying commentary, Constance Lehman, MD, PhD, of Harvard Medical School in Boston, said that it’s now time to move beyond simulation and reader studies and enter the critical phase of rigorous, prospective clinical evaluation of AI.

“The need is great and a more rapid pace of research in this domain can be partnered with safe, careful, and effective testing in prospective clinical trials,” she wrote. “If AI models can sort women with cancer detected on their mammograms from those without cancer detected on their mammograms, the value of screening mammography can be made available and affordable to a large population of women globally who currently have no access to the life-saving potential of quality screening mammography.”
