New AI Diagnostic Can Predict COVID-19 Without Testing

Researchers at King’s College London, Massachusetts General Hospital and health science company ZOE have developed an artificial intelligence diagnostic that can predict whether someone is likely to have COVID-19 based on their symptoms. Their findings are published today in Nature Medicine.

The AI model uses data from the COVID Symptom Study app to predict COVID-19 infection by comparing people’s symptoms with the results of traditional COVID tests. The researchers say this approach could help populations where access to testing is limited. Two clinical trials, in the UK and the US, are due to start shortly.

More than 3.3 million people globally have downloaded the app and are using it to report daily on their health status, whether they feel well or have any new symptoms such as persistent cough, fever, fatigue and loss of taste or smell (anosmia).

In this study, the researchers analysed data gathered from just under 2.5 million people in the UK and US who had been regularly logging their health status in the app, around a third of whom had logged symptoms associated with COVID-19. Of these, 18,374 reported having had a test for coronavirus, with 7,178 people testing positive.

The research team investigated which symptoms known to be associated with COVID-19 were most likely to accompany a positive test. They found a wider range of symptoms than is typical of colds and flu, and warn against focusing only on fever and cough. Loss of taste and smell (anosmia) was particularly striking: two thirds of users who tested positive for coronavirus infection reported this symptom, compared with just over a fifth of participants who tested negative. The findings suggest that anosmia is a stronger predictor of COVID-19 than fever, supporting anecdotal reports of loss of smell and taste as a common symptom of the disease.

The researchers then created a mathematical model that predicted with nearly 80% accuracy whether an individual is likely to have COVID-19 based on their age, sex and a combination of four key symptoms: loss of smell or taste, severe or persistent cough, fatigue and skipping meals. Applying this model to the entire group of over 800,000 app users experiencing symptoms predicted that just under a fifth of those who were unwell (17.42%) were likely to have COVID-19 at that time.
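
For illustration, a model of this kind can be sketched in a few lines of code. The example below assumes a logistic regression over age, sex and the four symptom flags, fitted to synthetic data; it shows the general approach rather than the study’s actual model or dataset.

```python
# Illustrative sketch of a symptom-based COVID-19 predictor.
# Assumes a logistic regression over age, sex and four binary symptom flags
# (loss of smell/taste, persistent cough, fatigue, skipped meals); the
# synthetic data below stands in for the app's real symptom logs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

X = np.column_stack([
    rng.integers(20, 80, n),      # age in years
    rng.integers(0, 2, n),        # sex (0/1)
    rng.integers(0, 2, (n, 4)),   # anosmia, cough, fatigue, skipped meals (0/1 each)
])

# Synthetic labels: anosmia (column 2) is weighted most heavily, mimicking the
# finding that loss of smell/taste is the strongest single predictor.
logit = -2.0 + 1.8 * X[:, 2] + 0.6 * X[:, 3] + 0.4 * X[:, 4] + 0.5 * X[:, 5]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```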

Researchers suggest that combining this AI prediction with widespread adoption of the app could help to identify those who are likely to be infectious as soon as the earliest symptoms start to appear, focusing tracking and testing efforts where they are most needed.

Professor Tim Spector from King’s College London said: “Our results suggest that loss of taste or smell is a key early warning sign of COVID-19 infection and should be included in routine screening for the disease. We strongly urge governments and health authorities everywhere to make this information more widely known, and advise anyone experiencing sudden loss of smell or taste to assume that they are infected and follow local self-isolation guidelines.”

Source: King’s College London

AI-supported Test for Very Early Signs of Glaucoma Progression

The test, supported by an artificial intelligence (AI) algorithm, could help accelerate clinical trials, and may eventually be used in detection and diagnosis, according to a Wellcome-funded study published today in Expert Review of Molecular Diagnostics.

Lead researcher Professor Francesca Cordeiro (UCL Institute of Ophthalmology, Imperial College London, and Western Eye Hospital Imperial College Healthcare NHS Trust) said: “We have developed a quick, automated and highly sensitive way to identify which people with glaucoma are at risk of rapid progression to blindness.”

Glaucoma, the leading global cause of irreversible blindness, affects over 60 million people worldwide, a number predicted to double by 2040 as the global population ages. Loss of sight in glaucoma is caused by the death of cells in the retina, at the back of the eye.

The test, called DARC (Detection of Apoptosing Retinal Cells), involves injecting into the bloodstream (via the arm) a fluorescent dye that attaches to retinal cells, and illuminates those that are in the process of apoptosis, a form of programmed cell death. The damaged cells appear bright white when viewed in eye examinations – the more damaged cells detected, the higher the DARC count.

One challenge with evaluating eye diseases is that specialists often disagree when viewing the same scans, so the researchers have incorporated an AI algorithm into their method.

In the Phase II clinical trial of DARC, the AI was used to assess 60 of the study participants (20 with glaucoma and 40 healthy control subjects). The AI was initially trained by analysing the retinal scans (after injection of the dye) of the healthy control subjects. The AI was then tested on the glaucoma patients.

Those taking part in the AI study were followed up 18 months after the main trial period to see whether their eye health had deteriorated.

The researchers were able to accurately predict progressive glaucomatous damage 18 months earlier than the current gold standard, OCT retinal imaging: every patient with a DARC count above a certain threshold was found to have progressive glaucoma at follow-up.
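
In practice, the prediction rule described here amounts to a simple cutoff on the DARC count. The toy example below shows how such a flag might be applied; the threshold value and patient data are hypothetical, as the study’s actual cutoff is not quoted in this article.

```python
# Toy illustration of a threshold rule on DARC counts. The cutoff value and
# patient counts below are hypothetical; the study's actual threshold is not
# quoted in this article.
DARC_THRESHOLD = 30  # hypothetical cutoff

darc_counts = {"patient_A": 12, "patient_B": 41, "patient_C": 28, "patient_D": 55}

for patient_id, count in darc_counts.items():
    status = "flag: at risk of rapid progression" if count > DARC_THRESHOLD else "below threshold"
    print(f"{patient_id}: DARC count {count} -> {status}")
```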

“These results are very promising as they show DARC could be used as a biomarker when combined with the AI-aided algorithm,” said Professor Cordeiro. She added that biomarkers – measurable biological indicators of disease state or severity – are urgently needed for glaucoma to speed up clinical trials, because the disease progresses slowly and it can take years for symptoms to change.

“What is really exciting, and actually unusual when looking at biological markers, is that there was a clear DARC count threshold above which all glaucoma eyes went on to progress,” she added.

First author Dr Eduardo Normando (Imperial College London and Western Eye Hospital Imperial College Healthcare NHS Trust) said: “Being able to diagnose glaucoma at an earlier stage, and predict its course of progression, could help people to maintain their sight, as treatment is most successful if provided at an early stage of the disease. After further research in longitudinal studies, we hope that our test could have widespread clinical applications for glaucoma and other conditions.”

The team is also applying the test to rapidly detect cell damage caused by conditions other than glaucoma, including other neurodegenerative diseases that involve the loss of nerve cells, such as age-related macular degeneration, multiple sclerosis and dementia.

The AI-supported technology has recently been approved by both the UK’s Medicines and Healthcare products Regulatory Agency and the USA’s Food and Drug Administration as an exploratory endpoint for testing a new glaucoma drug in a clinical trial.

The researchers are also assessing the DARC test in people with lung disease, and hope that by the end of this year, the test may help to assess people with breathing difficulties from COVID-19.

DARC is being commercialised by Novai, a newly formed company of which Professor Cordeiro is Chief Scientific Officer.

Source: University College London


Artificial Intelligence Might Help Spot, Evaluate Prostate Cancer

Amy Norton wrote . . . . . . . . .

In another step toward using artificial intelligence in medicine, a new study shows that computers can be trained to match human experts in judging the severity of prostate tumors.

Researchers found that their artificial intelligence system was “near perfect” in determining whether prostate tissue contained cancer cells. And it was on par with 23 “world-leading” pathologists in judging the severity of prostate tumors.

No one is suggesting computers should replace doctors. But some researchers do think AI technology could improve the accuracy and efficiency of medical diagnoses.

Typically, it works like this: Researchers develop an algorithm using “deep learning” — where a computer system mimics the brain’s neural networks. It’s exposed to a large number of images — digital mammograms, for example — and it teaches itself to recognize key features, such as signs of a tumor.
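
As a rough illustration of what such a deep-learning model looks like in code, the sketch below defines a small convolutional network that maps an image patch to a benign-versus-suspicious score; the architecture and input size are assumptions made for the example, not the network used in any of the studies mentioned.

```python
# Bare-bones convolutional image classifier of the kind described above.
# The architecture, input size and two-class output are illustrative
# assumptions, not the network used in any of the cited studies.
import torch
import torch.nn as nn

class TinyLesionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 2),  # scores for benign vs. suspicious
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyLesionCNN()
fake_batch = torch.randn(8, 1, 224, 224)  # 8 grayscale 224x224 image patches
logits = model(fake_batch)
print(logits.shape)  # torch.Size([8, 2])
```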

Earlier this month, researchers reported on an AI system that appeared to best radiologists in interpreting screening mammograms. Other studies have found that AI can outperform doctors in distinguishing harmless moles from skin cancer, and detecting breast tumor cells in lymph node samples.

The new study looked at whether it’s possible to train an AI system to detect and “grade” prostate cancer in biopsied tissue samples. Normally, that’s the work of clinical pathologists — specialists who examine tissue under the microscope to help diagnose disease and judge how serious or advanced it is.

It’s painstaking work and, to a certain degree, subjective, according to study leader Martin Eklund, a senior researcher at the Karolinska Institute in Sweden.

Then there’s the workload. In the United States alone, more than 1 million men undergo a prostate biopsy each year — producing more than 10 million tissue samples to be examined, Eklund’s team noted.

To create their AI system, the researchers digitized more than 8,000 prostate tissue samples from Swedish men ages 50 to 69, creating high-resolution images. They then exposed the system to roughly 6,600 images — training it to learn the difference between cancerous and noncancerous tissue.

Next came the test phase. The AI system was asked to distinguish benign tissue from cancer in the remaining samples, plus around 300 from men who’d had biopsies at Karolinska. The AI results, the researchers reported, were almost always in agreement with the original pathologist’s assessment.

And when it came to grading the severity of prostate tumors with what’s called a Gleason score, the AI system was comparable to the judgment of 23 leading pathologists from around the world.
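
One common way to summarize that kind of grading agreement is a weighted kappa statistic, which gives more credit to near-misses than to large disagreements. The snippet below shows how such a comparison might be computed on made-up grade lists; it is not the study’s analysis or data.

```python
# Quadratically weighted Cohen's kappa penalizes large grading disagreements
# more than near-misses. The grade lists below (Gleason grade groups 1-5) are
# invented for illustration and are not the study's data.
from sklearn.metrics import cohen_kappa_score

pathologist_grades = [1, 2, 2, 3, 4, 5, 3, 2, 1, 4]  # reference reads
ai_grades          = [1, 2, 3, 3, 4, 5, 3, 2, 1, 5]  # model predictions

kappa = cohen_kappa_score(pathologist_grades, ai_grades, weights="quadratic")
print(f"quadratically weighted kappa: {kappa:.2f}")
```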

Much work, however, remains. A next step, Eklund said, is to see how the AI system performs across different labs and different pathology scanners, which are used to create digital images.

But one day, he said, AI could be used in a number of ways — including as a “safety net” to make sure a pathologist didn’t miss a cancer. It might also improve efficiency by prioritizing suspicious biopsies that pathologists should examine sooner.

Studies like this are a necessary step toward incorporating AI into medical practice, said Dr. Matthew Hanna, a pathologist at Memorial Sloan Kettering Cancer Center in New York City.

But, he stressed, “there’s still a long road ahead.”

Hanna, who was not involved in the study, is also a spokesperson for the College of American Pathologists.

Like Eklund, he said that any AI system would have to be validated across different centers, and different pathology scanners. And ultimately, Hanna said, studies will need to show that such technology can be used effectively in pathologists’ real-world practice.

There are practical realities, too. At the moment, Hanna pointed out, only a minority of pathology labs use digital systems in patient care. That’s key because for any AI algorithm to work, there have to be digital images to analyze. Most often, pathologists still study tissue using the classic approach — glass slides and a microscope.

What’s clear is that machines won’t be replacing humans — at least in the foreseeable future.

“This technology is coming,” Hanna said. “But as opposed to replacing doctors, it will transform how they deliver care — hopefully for the better.”

The study was reported online in The Lancet Oncology.

Source: HealthDay


AI Beat Humans in Spotting Breast Tumors

Amy Norton wrote . . . . . . . . .

Machines can be trained to outperform humans when it comes to catching breast tumors on mammograms, a new study suggests.

Researchers at Google and several universities are working on an artificial intelligence (AI) model aimed at improving the accuracy of mammography screening. In the Jan. 1 issue of Nature, they describe the initial results: Computers, it seems, can beat radiologists both in detecting breast tumors and avoiding false alarms.

Compared with mammography results collected from routine practice, the computer model reduced false positives by 1.2% (at three U.K. hospitals) and 5.7% (at one U.S. center). “False positive” refers to a mammogram that is deemed abnormal, even though no cancer is present.

“That means we could, potentially, create less angst for patients,” said researcher Dr. Mozziyar Etemadi, an assistant professor at Northwestern University Feinberg School of Medicine, in Chicago.

Artificial intelligence also bested humans when it came to false negatives — where a mammogram is interpreted as normal despite the presence of a tumor. The algorithm reduced those cases by 2.7% in the United Kingdom, and by 9.4% in the United States.

Etemadi called the findings “exciting,” but also stressed that research into using AI in medicine is “still in its infancy.”

Nor will it be replacing humans any time soon. Instead, Etemadi explained, AI is seen as a “tool” to boost doctors’ efficiency and accuracy.

As an example, he said AI could be used to “re-order the queue” — so that instead of analyzing mammograms in the order they come in, radiologists could have certain images with suspicious findings flagged for priority review.

Mammography screening can detect breast cancer in its earliest stages, but it’s imperfect: According to the American Cancer Society, it misses about 20% of cancers. And if a woman gets a mammogram every year for 10 years, she has about a 50% chance of receiving a false positive at some point.
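
That cumulative figure follows from compounding the per-screen false-positive chance over repeated screens, as the quick check below shows; the per-screen rate of about 7% is an assumption chosen only to illustrate the arithmetic.

```python
# Quick check of the cumulative false-positive figure, assuming screens are
# roughly independent and a per-screen false-positive rate of about 7%
# (a value assumed here purely to illustrate the arithmetic).
p_false_positive = 0.07
n_screens = 10

p_at_least_one = 1 - (1 - p_false_positive) ** n_screens
print(f"chance of at least one false positive over {n_screens} screens: {p_at_least_one:.0%}")
# -> roughly 52%, in line with the ~50% quoted above
```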

The new study, funded by Google, is the latest to explore whether AI can help detect cancer.

Typically, it works like this: Researchers develop an algorithm using “deep learning” — where a computer system mimics the brain’s neural networks. It’s exposed to a large number of images — digital mammograms, for example — and it teaches itself to recognize key features, such as signs of a tumor.

Other studies have suggested that AI can outperform humans in diagnosing certain cancers. One found that computers bested dermatologists in distinguishing harmless moles from melanoma skin cancer. Another found that AI was typically better than pathologists at finding breast tumor cells in lymph node samples.

This latest AI model was “trained” by exposing it to mammograms from over 90,000 women whose outcomes were known. The researchers then tested the model on a separate dataset, involving mammograms from over 25,000 U.K. women and 3,000-plus U.S. women.

Overall, the model reduced false positive and false negative results. The improvement was greater in the United States. While it’s not certain why, Etemadi pointed to one potential reason: In the United Kingdom, it’s standard for two radiologists to analyze a mammogram, which generally improves the accuracy.

But while the AI model performed well in this “controlled environment,” it remains to be seen how it will work in the real world, said Dr. Stamatia Destounis.

She is a spokesperson for the Radiological Society of North America and a clinical professor of imaging sciences at the University of Rochester, in New York.

“What’s needed are clinical studies in real day-to-day practice to see if these findings can be reproduced,” Destounis said.

Even in this controlled setting, the AI model was not foolproof. It did not detect all cancers or eliminate false positives. And sometimes it lost out to humans.

In a separate experiment, the researchers pitted the AI model against six U.S. radiologists. Overall, the computer was better, but there were cases where the doctors correctly saw a tumor the machine missed.

So what did the AI model overlook? And what did it see that doctors didn’t? No one knows, Etemadi said.

“At this point, we can only observe the patterns,” he said. “We don’t know the ‘why.’”

But, he added, it all suggests the combined forces of human and machine would be better than either alone.

To Destounis, the prospect of a new tool to help spot breast cancer is “exciting.”

“I’m hopeful that AI will be another tool in our clinical practice to help radiologists identify breast cancer as early as possible — when the tumor is smallest and the treatment least invasive,” she said.

Source: HealthDay


Artificial Intelligence Beats Some Radiologists at Spotting Bleeds in the Brain

Dennis Thompson wrote . . . . . . . . .

Computer-driven artificial intelligence (AI) can help protect human brains from the damage wrought by stroke, a new report suggests.

A computer program trained to look for bleeding in the brain outperformed two of four certified radiologists, finding abnormalities in brain scans quickly and efficiently, the researchers reported.

“This AI can evaluate the whole head in one second,” said senior researcher Dr. Esther Yuh, an associate professor of radiology at the University of California, San Francisco. “We trained it to be very, very good at looking for the kind of tiny abnormalities that radiologists look for.”

Stroke doctors often say that “time is brain,” meaning that every second’s delay in treating a stroke results in more brain cells dying and the patient becoming further incapacitated.

Yuh and her colleagues hope that AI programmed to find trouble spots in a brain will be able to significantly cut down treatment time for stroke patients.

“Instead of having a delay of 20 to 30 minutes for a radiologist to turn around a CT scan for interpretation, the computer can read it in a second,” Yuh said.

Stroke is the fifth-leading cause of death in the United States, and is a leading cause of disability, according to the American Stroke Association.

There are two types of strokes: ones caused by burst blood vessels in the brain (hemorrhagic), and others that occur when a blood vessel becomes blocked (ischemic).

Yuh’s AI still needs to be tested in clinical trials and approved by the U.S. Food and Drug Administration, but other programs are already helping doctors speed up stroke treatment, said Dr. Christopher Kellner. He is director of the Intracerebral Hemorrhage Program at Mount Sinai, in New York City.

“We are already using AI-driven software to automatically inform us when certain CAT scan findings occur,” he said. “It’s already become, in just the last year, an essential part of our stroke work-up.”

An AI created by a company called Viz.ai is being used at Mount Sinai to detect blood clots that have caused a stroke by blocking the flow of blood to the brain, Kellner said.

Yuh and her team used a library of nearly 4,440 CT scans to train their AI to look for brain bleeding.

These scans are not easy to read, she said. They are low-contrast black-and-white images full of visual “noise.”

“It takes a lot of training to be able to read these — doctors train for years to be able to read these correctly,” Yuh said.

Her team trained its algorithm to the point that it could trace detailed outlines of abnormalities it found, demonstrating their location in a 3-D model of the brain being scanned.

They then tested the algorithm against four board-certified radiologists, using a series of 200 randomly selected head CT scans.

The AI slightly outperformed two radiologists, and slightly underperformed against the other two, Yuh said.
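
Head-to-head comparisons like this are usually reported as per-scan sensitivity (bleeds caught) and specificity (normal scans correctly cleared). The toy tally below shows how those figures are computed; the scan labels are invented rather than taken from the study.

```python
# Toy tally of per-scan sensitivity and specificity for a reader-vs-AI
# comparison. The scan-level labels are invented, not the study's data.
def sensitivity_specificity(truth, calls):
    tp = sum(t and c for t, c in zip(truth, calls))
    tn = sum((not t) and (not c) for t, c in zip(truth, calls))
    fp = sum((not t) and c for t, c in zip(truth, calls))
    fn = sum(t and (not c) for t, c in zip(truth, calls))
    return tp / (tp + fn), tn / (tn + fp)

truth    = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]  # 1 = scan contains a bleed
ai_calls = [1, 1, 0, 0, 0, 1, 0, 1, 0, 0]  # 1 = AI flags a bleed

sens, spec = sensitivity_specificity(truth, ai_calls)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```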

The AI found some small abnormalities that the experts missed. It also provided detailed information that doctors would need to determine the best treatment.

The computer program also provided this information with an acceptable level of false positives, Yuh said. That would minimize how much time doctors would need to spend reviewing its results.

Yuh suspects radiologists always will be needed to double-check the AI, but Kellner isn’t so sure.

“There will definitely be a point where there’s no human involved in the evaluation of the scans, and I think that’s not too far off, honestly,” he said. “I think, ultimately, a computer will be able to scan that faster and send out an alert faster than a human can.”

The new study was published in the Proceedings of the National Academy of Sciences.

Source: HealthDay