Google AI system spots lung cancer before radiologists

A Google AI model could detect malignant lung nodules on low-dose chest CT (LDCT) scans with performance meeting or exceeding that of expert radiologists, according to a study co-authored by Google scientists. The model predicts lung cancer risk using a patient’s current and prior computed tomography volumes. When prior scans were unavailable, it outperformed six radiologists, with an 11% reduction in false positives and fewer false negatives.

Google and Northwestern scientists demonstrate the precision of a new deep-learning system for predicting lung cancer.

Deep learning, a form of AI, was able to detect malignant lung nodules on low-dose chest computed tomography (LDCT) scans with a performance meeting or exceeding that of expert radiologists, reports a new study from Google and Northwestern Medicine.

This deep-learning system provides automated image evaluation to enhance the accuracy of early lung cancer diagnosis, which could lead to earlier treatment. The system was compared against radiologists on LDCT scans from patients, some of whom had biopsy-confirmed cancer within a year. In most comparisons, the model performed as well as or better than the radiologists.

Deep learning is a technique that teaches computers to learn by example.

The deep-learning system also produced fewer false positives and fewer false negatives, which could lead to fewer unnecessary follow-up procedures and fewer missed tumors if it were used in a clinical setting.

The paper was published in Nature Medicine May 20.

“Radiologists generally examine hundreds of two-dimensional images or ‘slices’ in a single CT scan, but this new machine-learning system views the lungs in a single, huge three-dimensional image,” said study co-author Dr. Mozziyar Etemadi, a research assistant professor of anesthesiology at Northwestern University Feinberg School of Medicine and of engineering at the McCormick School of Engineering.

“AI in 3D can be much more sensitive in its ability to detect early lung cancer than the human eye looking at 2D images. This is technically ‘4D’ because it is not only looking at one CT scan, but at two (the current and prior scan) over time.

“In order to build the AI to view the CTs in this way, you need an enormous, Google-scale computing system. The concept is novel, but the actual engineering of it is also novel because of the scale.”

Etemadi leads his research team while also in anesthesiology residency training at Northwestern as part of a unique residency research track.

Etemadi’s dual roles allow research in his lab to traverse the technological and communications boundaries between healthcare and engineering. His lab is based inside one of the intensive care units at Northwestern Memorial Hospital to allow seamless communication among engineers, nurses, physicians and other care providers.

“This area of research is incredibly important, as lung cancer has the highest rate of mortality among all cancers and there are many challenges in the way of broad adoption of lung cancer screening,” said Shravya Shetty, technical lead at Google. “Our work examines ways AI can be used to improve the accuracy and optimize the screening process, in ways that could help with the implementation of screening programs. The results are promising and we look forward to continuing our work with partners and peers.”

Lung cancer is the most common cause of cancer-related death in the United States, resulting in an estimated 160,000 deaths in 2018. Large clinical trials across the United States and Europe have shown that chest screening can identify cancer and reduce death rates. However, high error rates and limited access to these screenings mean that many lung cancers are detected only at advanced stages, when they are hard to treat.

The deep-learning system uses both the primary CT scan and, whenever available, a prior CT scan from the patient as input. Prior CT scans are useful in predicting lung cancer malignancy risk because the growth rate of suspicious lung nodules can be indicative of malignancy. The model was trained using fully de-identified, biopsy-confirmed low-dose chest CT scans.
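The published model learns this growth signal end-to-end from the image volumes themselves. As a simpler, hand-rolled illustration of why prior scans matter, radiologists have long used the classical volume doubling time (VDT), where faster growth (a shorter VDT) is one conventional indicator of malignancy risk. This sketch is not part of the Google system; it only illustrates the underlying idea:

```python
import math

def volume_doubling_time(v1_mm3, v2_mm3, days_between):
    """Volume doubling time (in days) of a nodule measured on two scans.

    Assumes exponential growth: VDT = t * ln(2) / ln(V2 / V1).
    A shorter doubling time indicates faster growth, which is one
    conventional marker of elevated malignancy risk.
    """
    if v2_mm3 <= v1_mm3:
        return float("inf")  # stable or shrinking nodule: no doubling
    return days_between * math.log(2) / math.log(v2_mm3 / v1_mm3)

# A nodule that grows from 100 mm^3 to 200 mm^3 over 90 days
# has, by definition, a doubling time of 90 days.
print(round(volume_doubling_time(100, 200, 90)))  # → 90
```

A deep-learning system ingesting both volumes can, in principle, pick up this kind of interval change directly from the images, without an explicit nodule segmentation step.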

The novel system identifies both a region of interest and whether the region has a high likelihood of lung cancer.

The model outperformed six radiologists when previous CT imaging was not available and performed as well as the radiologists when prior imaging was available.

“The system can categorize a lesion with more specificity. Not only can we better diagnose someone with cancer, but we can also say if someone doesn’t have cancer, potentially saving them from an invasive, costly and risky lung biopsy,” Etemadi said.

Google scientists developed the deep-learning model and applied it to 2,763 de-identified CT scan sets provided by Northwestern Medicine to validate the system’s accuracy. The scientists found the artificial-intelligence-powered system was able to spot sometimes-minuscule malignant lung nodules with a model AUC of 0.94 on the test cases. The cases were pulled from the Northwestern Electronic Data Warehouse and other Northwestern Medicine data sources using complex, highly customized software engineered by Etemadi’s team.
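An AUC (area under the ROC curve) of 0.94 means that, roughly 94% of the time, the model assigns a higher malignancy score to a scan with cancer than to one without. A minimal sketch of how this metric is computed, using made-up scores rather than data from the study:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) case pairs in which the
    positive case receives the higher score (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical malignancy scores for four scans
# (label 1 = biopsy-confirmed cancer, 0 = no cancer).
print(auc([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0]))  # → 0.75
```

A perfect classifier scores every cancer case above every non-cancer case (AUC 1.0), while random guessing yields 0.5, which is why AUC is a standard way to compare a model against radiologists across all possible decision thresholds.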

“Most of the software we use as clinicians is designed for patient care, not for research,” Etemadi said. “It took over a year of dedicated effort by my entire team to extract and prepare data to help with this exciting project. The ability to collaborate with world-class scientists at Google, using their unprecedented computing capabilities to create something with the potential to save tens of thousands of lives a year is truly a privilege.”

The authors caution that these findings need to be clinically validated in large patient populations, but they say this model may assist in improving the management and outcome of patients with lung cancer.

The corresponding author on the paper is Dr. Daniel Tse, Google product manager.

