Interview – Physics World, August 2020

Training a computer to hunt cancer

AI pioneer Maryellen Giger uses artificial intelligence to improve the accuracy of medical imaging. (Maryellen Giger)

Maryellen Giger, medical physicist and entrepreneur, talks to Margaret Harris about how she established the use of artificial intelligence in breast cancer imaging and helped develop the first FDA-approved computer-aided cancer detection and diagnosis systems

What sparked your initial interest in medical physics?

I was always interested in maths and physics growing up, and I majored in both subjects as an undergraduate. I went to Illinois Benedictine College, just outside Chicago. While it was a small university, it had great internship opportunities, and I spent three summers working at the Fermi National Accelerator Laboratory (Fermilab). At that time they had a neutron therapy system, so I spent one summer writing assembler code for some of the temperature controls within the centre. The other two summers I worked more on the hardware of beam diagnostics, and that’s how I found medical physics.

What did your PhD work focus on?

When I began my PhD at the University of Chicago, I decided that I wanted to go into diagnostic imaging. I conducted my dissertation research on evaluating the physical image quality of digital radiographs, which, interestingly, are now the only type of radiograph most people are accustomed to. Back then, in the early 1980s, we started with screen films. It would take an hour to digitize a single chest radiograph, a process that is now near-instantaneous.

Following my PhD, I spent a year as a postdoc, and then became faculty in the university’s department of radiology. I worked on analysing chest radiographs to detect lung nodules, and then on detecting mass lesions in screening mammograms. A chest radiograph that took an hour to digitize would then take four hours to process. That eventually led us to develop computer-aided detection algorithms for medical image analysis.

How did you and your colleagues pioneer the use of AI in breast cancer imaging?

The term computer-aided diagnosis originated with us in the 1980s and 1990s; it later split into computer-aided detection (CADe) and computer-aided diagnosis (CADx). Once a digital image of a mass has been captured, it needs to be interpreted – something that is usually done by radiologists, who make qualitative judgements based on their experience and knowledge. But a digital image contains a lot of information: a radiologist can only estimate the size and irregularity of a tumour, for example, whereas an AI algorithm can calculate them quantitatively. This helps radiologists detect cancer more quickly, and helps clinicians make more informed diagnoses. What we do is teach the AI how to analyse an image and what to look for.
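As a concrete illustration of that kind of quantitative feature extraction, here is a minimal sketch in Python – an illustrative assumption, not the lab’s actual code – that computes a lesion’s physical size and a circularity-based irregularity index from a binary segmentation mask. The pixel size and the particular irregularity definition are hypothetical choices.

import numpy as np
from skimage.measure import label, regionprops

def lesion_features(mask, pixel_mm=0.1):
    """Size and shape-irregularity of the lesion in a 2D boolean mask."""
    region = regionprops(label(mask.astype(int)))[0]  # assume one lesion
    area_mm2 = region.area * pixel_mm ** 2            # physical size
    # Circularity = 4*pi*A/P^2 is 1.0 for a perfect disc and drops for
    # spiculated margins, so (1 - circularity) serves as an irregularity cue.
    circularity = 4 * np.pi * region.area / region.perimeter ** 2
    return {"size_mm2": area_mm2, "irregularity": 1.0 - circularity}

# Toy check: a synthetic disc-shaped "lesion" should score as nearly regular
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
print(lesion_features(disc))

A near-circular mass scores an irregularity close to zero, while a spiculated margin raises it – mirroring the qualitative cue a radiologist uses.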

To understand the difference between detection and diagnosis, think of the Where’s Waldo? books. Screening mammography can be thought of as a thousand-page book in which you have to find Waldo – who appears on only five of those pages – in a finite amount of time. CADe is having a computer help you find items that have red and white stripes. Then, once you have found them, another program helps you determine whether those stripes belong to something random, like a bucket, or whether they are indeed Waldo – that is CADx.

In 1990 we patented our CADe methods for detecting abnormal areas in mammograms and chest radiographs. The patents were later licensed to a company called R2 Technology (which was acquired by Hologic in 2006). By 1998 R2 had translated our research, and its own further developments, into ImageChecker – the first CADe system approved by the US Food and Drug Administration (FDA).


Translation of your lab’s research also led to the first FDA-cleared CADx system to aid in cancer diagnosis – tell me about this.

In breast cancer screening, if something suspicious is found in the image, the patient may undergo another mammogram, or an ultrasound or MR scan. You end up with images from multiple modalities, and the radiologist then has to assess the likelihood that the lesion is cancerous, decide whether to request a biopsy, and decide how quickly to follow up. So we wondered how a computer could help process all that information to aid the radiologist’s decision-making. For breast MRIs, we quantitatively extracted various image characteristics, similar to those radiologists observe, and then created, trained and validated algorithms. We performed an in-house reader study and showed that radiologists performed better when given this aid.
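As a hedged sketch of that pipeline – with hypothetical features and synthetic labels, not the actual QuantX algorithm – one could train a simple classifier on extracted feature vectors, output a probability of malignancy for each held-out case, and summarize performance with the area under the ROC curve, the usual figure of merit in such reader studies.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical feature vectors per lesion: [size_mm2, irregularity, uptake]
benign    = rng.normal([ 80, 0.2, 0.3], [25, 0.12, 0.15], size=(n, 3))
malignant = rng.normal([110, 0.5, 0.6], [25, 0.12, 0.15], size=(n, 3))
X = np.vstack([benign, malignant])
y = np.r_[np.zeros(n), np.ones(n)]          # 0 = benign, 1 = malignant

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# The aid shown to the radiologist: an estimated likelihood of malignancy
p_malignant = clf.predict_proba(X_te)[:, 1]
print("ROC AUC on held-out cases:", roc_auc_score(y_te, p_malignant))

In practice the features would come from segmented clinical images and the validation would be far more rigorous, but the shape of the pipeline – quantitative features in, a calibrated likelihood out – is the one described above.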

We began the translation of our research in 2009, through the New Venture Challenge at the university’s Chicago Booth School of Business. The team included two MBA students, one medical student and a medical-physics student from my lab. Out of 111 teams, we made it to the final nine. After this, we created a company, Quantitative Insights (QI), which was later incubated at the Polsky Center for Entrepreneurship and Innovation. QI conducted a clinical reader study spanning multiple cases and scanner manufacturers, which was submitted to the FDA.

In 2017 QI received clearance for QuantX, the first FDA-cleared machine-learning-driven system to aid in cancer diagnosis. The system analyses breast MRIs and offers radiologists a score related to the likelihood that a tumour is benign or malignant, using AI algorithms based on those developed in my lab. Soon after, Paragon Biosciences – a Chicago-based life-science innovator – bought QuantX. In 2019 Paragon launched Qlarity Imaging, and units are now being sold and placed. I am still an adviser for the company.

How do you see AI being used in medical imaging in the future?

There are many needs in medical imaging that could benefit from AI. Some AI will contribute to creating better images for either human or computer vision, for example, in developing new tomographic reconstruction techniques. For interpretation, AI will be used to extract quantitative information from images, similar to what we did for CADe and CADx, but now also for concurrent reader systems and ultimately autonomous systems.

There are also many ancillary tasks that AI could help streamline for improved workflow efficiency, such as assessing whether an image is of sufficient quality to be interpreted – maybe even while the patient is still on the table. Monitoring the patient’s progress during treatment is also important: using computer-vision AI to extract information from, for example, an MRI could yield quantitative metrics of therapeutic response. I believe that in the future we need to watch AI grow and continue to develop AI methods for various medical tasks. Much still remains to be achieved.