Researchers Create AI System that Turns Food Photos into Written Recipes

Sam Shead wrote:

Researchers at Facebook’s Artificial Intelligence Research (FAIR) group have developed a piece of AI software that can determine what ingredients were used to make a certain dish and describe how they were assembled.

The AI system — built by research scientist Adriana Romero and several others at FAIR’s Montreal lab — can look at a photo of some banana bread, for example, list the ingredients that went into it, and describe the method required to make it.

“Everyone is always taking pictures of their meals these days,” said Joelle Pineau, head of FAIR’s Montreal lab, in an interview. “Sometimes there’s ingredients you can see but there’s also ingredients you can’t always see, like sugar and salt and things like that,” she added. “So they train it [the AI] with pairs of images and recipes. But then when they test it they just give the image and it generates a recipe.”
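The train/test split Pineau describes, learning from (image, recipe) pairs and then generating a recipe from the image alone, can be illustrated with a toy model. FAIR's actual system uses neural networks; the sketch below substitutes a simple nearest-neighbour lookup over made-up image feature vectors, and every name and number in it is hypothetical.

```python
# Toy illustration of the setup Pineau describes: train on (image, recipe)
# pairs, then produce a recipe from an image alone at test time.
# A nearest-neighbour lookup stands in for FAIR's neural model.

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

class RecipeRetriever:
    def __init__(self):
        self.pairs = []  # (image feature vector, recipe) seen in training

    def train(self, image_features, recipe):
        self.pairs.append((image_features, recipe))

    def generate(self, image_features):
        # Test time: only the image is given; return the recipe paired
        # with the closest training image.
        closest = min(self.pairs,
                      key=lambda p: squared_distance(p[0], image_features))
        return closest[1]

model = RecipeRetriever()
model.train([0.9, 0.1], "banana bread: mash bananas; mix with flour, sugar, eggs; bake")
model.train([0.1, 0.9], "tomato soup: simmer tomatoes in stock; blend; season")

print(model.generate([0.8, 0.2]))  # nearest neighbour is the banana bread image
```

The point of the toy is the asymmetry Pineau highlights: the recipe is available only during training, never at test time.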

While some Facebook and Instagram users would undoubtedly enjoy this feature, Pineau said the social media giant doesn’t currently have any plans to roll out the recipe-generating AI.

Asked why FAIR developed the AI system, Pineau said: “We need to have machines that understand the world. Understand not just the visible in the world, but understand that when you have a cake there’s usually sugar in there.”

Source: Forbes

AI Could Predict Cognitive Decline Leading to Alzheimer’s Disease

A team of scientists has successfully trained a new artificial intelligence (AI) algorithm to make accurate predictions regarding cognitive decline leading to Alzheimer’s disease.

Dr. Mallar Chakravarty, a computational neuroscientist at the Douglas Mental Health University Institute, and his colleagues from the University of Toronto and the Centre for Addiction and Mental Health, designed an algorithm that learns signatures from magnetic resonance imaging (MRI), genetics, and clinical data. This specific algorithm can help predict whether an individual’s cognitive faculties are likely to deteriorate towards Alzheimer’s in the next five years.

“At the moment, there are limited ways to treat Alzheimer’s and the best evidence we have is for prevention. Our AI methodology could have significant implications as a ‘doctor’s assistant’ that would help stream people onto the right pathway for treatment. For example, one could even initiate lifestyle changes that may delay the beginning stages of Alzheimer’s or even prevent it altogether,” says Chakravarty, an Assistant Professor in McGill University’s Department of Psychiatry.

The findings, published in PLOS Computational Biology, used data from the Alzheimer’s Disease Neuroimaging Initiative. The researchers trained their algorithms using data from more than 800 people, ranging from healthy seniors to people with mild cognitive impairment to Alzheimer’s disease patients. They replicated their results within the study on an independently collected sample from the Australian Imaging, Biomarkers and Lifestyle Study of Ageing.

Can the predictions be improved with more data?

“We are currently working on testing the accuracy of predictions using new data. It will help us to refine predictions and determine if we can predict even farther into the future,” says Chakravarty. With more data, the scientists would be able to better identify those in the population at greatest risk for cognitive decline leading to Alzheimer’s.

According to the Alzheimer Society of Canada, 564,000 Canadians had Alzheimer’s or another form of dementia in 2016. That figure is expected to rise to 937,000 within 15 years.

Worldwide, around 50 million people have dementia and the total number is projected to reach 82 million in 2030 and 152 million in 2050, according to the World Health Organization. Alzheimer’s disease, the most common form of dementia, may contribute to 60–70% of cases. Presently, there is no truly effective treatment for this disease.

Source: McGill University

Chinese AI System Could Predict Diabetes 15 Years in Advance

Alice Shen wrote:

Doctors at a hospital in Shanghai are hoping a new artificial intelligence system will help them to identify patients at risk of developing diabetes up to 15 years in advance.

In tests the model, known as Ruining Knows Sugar, or Ruining Zhitang in Chinese, achieved an accuracy rate of 88 per cent, according to 4 Paradigm, the Beijing-based company that developed the software and has been working with medical staff at Ruijin Hospital in Shanghai since last year.

According to Tu Weiwei, a machine learning specialist at the tech company, the system was designed to identify those most at risk of developing type 2 diabetes – the most common form of the chronic disease – within the next three years.

It also gave risk forecasts for the next nine and 15 years as a reference, he said.

Ning Guang, a specialist in metabolic diseases and vice-president of Ruijin Hospital, said the new system used medical information from 170,000 individuals from across the country, some of whom had diabetes and others who did not.

The data, which was collected between 2010 and 2013 by the hospital’s diabetes research team, included gender, height, weight, blood sugar levels, smoking and drinking history, and education levels, he said.

The AI algorithm then used that information to make its predictions and “learned” from the results, Ning said.
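A risk model of the kind Ning describes takes tabular features such as blood sugar, weight and smoking history and outputs a probability. The company's actual model is not public; the sketch below is a from-scratch logistic regression on invented, pre-normalised rows, with every feature name and number purely illustrative.

```python
import math

# Invented sketch of a tabular risk model: normalised clinical features
# in, a diabetes risk probability out. Plain logistic regression trained
# by stochastic gradient descent on made-up data.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, labels, lr=0.5, epochs=2000):
    w, b = [0.0] * len(rows[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the log loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# columns: normalised fasting glucose, BMI / 40, smoker (0 or 1)
X = [[0.9, 0.8, 1], [0.8, 0.7, 1], [0.2, 0.4, 0], [0.3, 0.5, 0]]
y = [1, 1, 0, 0]
w, b = train(X, y)

risk = sigmoid(sum(wi * xi for wi, xi in zip(w, [0.85, 0.75, 1])) + b)
print(round(risk, 2))  # a high-glucose smoker profile scores near 1
```

The "learning from the results" Ning mentions corresponds to the gradient step: each prediction error nudges the weights so similar profiles score closer to their true label next time.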

Diabetes is one of the world’s most common and costly chronic illnesses, affecting one in 11 adults globally.

The ratio in China is similar, at about 10 per cent, but given the huge size of its population that means the country had about 110 million diabetes patients in 2016, according to figures from the International Diabetes Federation and World Health Organisation.

The prevalence of the disease had put a huge strain on China’s medical resources, Ning said.

“If the trend continues, we will need 100,000 more doctors,” he said.

Xu Aimin, a University of Hong Kong professor specialising in diabetes and cardiovascular diseases, said that early diagnosis of diabetes was key to providing timely treatment, prolonging lives and reducing the financial burden the disease places on the country.

“Without concerted action, the incidence is likely to increase,” he said. “Diabetes is not curable, but it can be prevented.”

Xu said also that type 2 diabetes was closely related to obesity and that people could reduce their risk of developing it by making healthy lifestyle choices.

The use of artificial intelligence to help predict and monitor diabetes is growing.

In June, American medical device company Medtronic, working with IBM Watson Health, released its Sugar.IQ app, which evaluates how a user’s blood sugar levels respond to variables such as food intake, insulin dosing and other daily routines.

Source: SCMP

Artificial Intelligence Can Predict Personality by Tracking the Eyes

It’s often been said that the eyes are the window to the soul, revealing what we think and how we feel. Now, new research reveals that your eyes may also be an indicator of your personality type, simply by the way they move.

Developed by the University of South Australia in partnership with the University of Stuttgart, Flinders University and the Max Planck Institute for Informatics in Germany, the research uses state-of-the-art machine-learning algorithms to demonstrate a link between personality and eye movements.

Findings show that people’s eye movements reveal whether they are sociable, conscientious or curious, with the software reliably recognising four of the Big Five personality traits: neuroticism, extroversion, agreeableness, and conscientiousness.

Researchers tracked the eye movements of 42 participants as they undertook everyday tasks around a university campus, and subsequently assessed their personality traits using well-established questionnaires.
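In outline, such a pipeline summarises raw gaze traces into statistics and hands those statistics to a classifier. The sketch below invents both the features and the data purely for illustration; the study used far richer gaze features and validated traits with questionnaires, not labels like these.

```python
import math

# Illustrative pipeline: raw (x, y) gaze samples -> summary features ->
# a classifier mapping features to a trait label. All data is invented.

def gaze_features(trace):
    # trace: list of (x, y) gaze points sampled over time
    steps = [math.dist(a, b) for a, b in zip(trace, trace[1:])]
    mean_step = sum(steps) / len(steps)
    # crude fixation proxy: fraction of samples with little movement
    still = sum(1 for s in steps if s < 1.0) / len(steps)
    return [mean_step, still]

def nearest_label(features, examples):
    # examples: list of (feature_vector, trait_label); 1-nearest-neighbour
    return min(examples, key=lambda e: math.dist(e[0], features))[1]

calm_trace = [(0, 0), (0.2, 0.1), (0.3, 0.1), (0.4, 0.2), (0.5, 0.2)]
busy_trace = [(0, 0), (5, 4), (1, 9), (8, 2), (3, 7)]
examples = [(gaze_features(calm_trace), "high conscientiousness"),
            (gaze_features(busy_trace), "high extroversion")]

test_trace = [(0, 0), (0.1, 0.1), (0.2, 0.1), (0.3, 0.2), (0.4, 0.2)]
print(nearest_label(gaze_features(test_trace), examples))
```

The two-stage shape (feature extraction, then learning) is what lets the approach work on uncontrolled, real-world recordings rather than fixed laboratory stimuli.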

UniSA’s Dr Tobias Loetscher says the study provides new links between previously under-investigated eye movements and personality traits and delivers important insights for emerging fields of social signal processing and social robotics.

“There’s certainly the potential for these findings to improve human-machine interactions,” Dr Loetscher says.

“People are always looking for improved, personalised services. However, today’s robots and computers are not socially aware, so they cannot adapt to non-verbal cues.

“This research provides opportunities to develop robots and computers so that they can become more natural, and better at interpreting human social signals.”

Dr Loetscher says the findings also provide an important bridge between tightly controlled laboratory studies and the study of natural eye movements in real-world environments.

“This research has tracked and measured the visual behaviour of people going about their everyday tasks, providing more natural responses than if they were in a lab.

“And thanks to our machine-learning approach, we not only validate the role of personality in explaining eye movement in everyday life, but also reveal new eye movement characteristics as predictors of personality traits.”

Source: University of South Australia



Artificial Intelligence Better Than Dermatologists at Catching Skin Cancers

Amy Norton wrote:

A computer can beat even highly experienced dermatologists in spotting deadly melanomas, researchers report.

The study is the latest to test the idea that “artificial intelligence” can improve medical diagnoses.

Typically, it works like this: Researchers develop an algorithm using “deep learning” — where the computer system essentially mimics the brain’s neural networks. It’s exposed to a large number of images — of breast tumors, for example — and it teaches itself to recognize key features.

The new study pitted a well-honed computer algorithm against 58 dermatologists, to see whether machines or humans were better at differentiating melanomas from moles.

It turned out the algorithm was usually more accurate. It missed fewer melanomas, and was less likely to misdiagnose a benign mole as cancer.

That does not mean computers will someday be diagnosing skin cancer on their own, said lead researcher Dr. Holger Haenssle, of the University of Heidelberg in Germany.

“I don’t think physicians will be replaced,” Haenssle said.

Instead, he explained, doctors could use artificial intelligence (AI) as a tool.

“In the future, AI may help physicians focus on the most suspicious skin lesions,” Haenssle said.

A patient might, for instance, undergo whole-body photography (a technology that’s already available), then have those images “interpreted” by a computer algorithm.

“In the next step,” Haenssle explained, “the physician may examine only those lesions labeled as ‘suspicious’ by the computer.”

Doctors already do skin exams with the help of a technology called dermoscopy — where a hand-held device is used to light and magnify the skin. Haenssle said AI could again be used to help analyze those images.

Dr. Mary Stevenson is an assistant professor of dermatology at NYU Langone Medical Center in New York City.

She agreed that the technology is not going to replace doctors, but could serve as an “aid.”

There are still questions to be answered, according to Stevenson, who was not involved in the research. For one, she said, this study focused only on differentiating melanoma from benign moles — and there is more to skin cancer diagnosis than that.

For the study, Haenssle’s team recruited 58 dermatologists from 17 countries. Over half had more than five years of experience and were considered “expert” level.

First, the doctors examined 100 dermoscopic images of either melanomas or harmless moles.

Four weeks later, they viewed those images again, this time with more information about the patients, such as their age and the position of the lesion on the body. That more closely reflected what doctors work with in the “real world.”

In the first phase, the doctors accurately caught melanomas nearly 87 percent of the time, on average; they correctly identified moles about 71 percent of the time.

The computer, however, did better: When it was tuned to have the same level of accuracy as doctors in detecting benign moles, the computer caught 95 percent of melanomas.

The doctors boosted their accuracy when they also had information about the patients. They caught 89 percent of melanomas, and accurately identified benign moles about 76 percent of the time.

The computer still outperformed them, though: At that same level of accuracy for catching melanoma, the computer correctly diagnosed about 83 percent of moles.
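The comparison above works by fixing an operating point: tune the algorithm's decision threshold until its specificity (benign moles correctly called benign) matches the doctors', then read off its sensitivity (melanomas caught). The small sketch below shows the mechanics with invented scores; the study's actual data and model are not reproduced here.

```python
# Invented scores illustrating the study's comparison method: match the
# algorithm's specificity to the doctors' (~71 per cent in phase one),
# then report its melanoma detection rate at that threshold.

def sensitivity_at_specificity(benign_scores, melanoma_scores, target_spec):
    """Find the lowest threshold keeping specificity >= target, and the
    sensitivity there. Scores at or above the threshold mean 'melanoma'."""
    best_t = None
    for t in sorted(set(benign_scores + melanoma_scores), reverse=True):
        spec = sum(s < t for s in benign_scores) / len(benign_scores)
        if spec >= target_spec:
            best_t = t  # keep lowering while specificity still qualifies
    sens = sum(s >= best_t for s in melanoma_scores) / len(melanoma_scores)
    return best_t, sens

benign = [0.1, 0.2, 0.3, 0.35, 0.4, 0.6, 0.7]   # made-up classifier scores
melanoma = [0.38, 0.65, 0.8, 0.9, 0.95]
t, sens = sensitivity_at_specificity(benign, melanoma, target_spec=5 / 7)
print(t, sens)  # threshold 0.6 gives specificity 5/7 and sensitivity 0.8
```

Lowering the threshold catches more melanomas but mislabels more benign moles; matching specificity first is what makes the human-versus-machine sensitivity numbers directly comparable.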

Haenssle said that in some parts of Germany, doctors are already using the algorithm tested in this study — in software sold by the company FotoFinder Systems GmbH. He has received fees from the company and others that market devices for skin cancer screening.

For now, traditional skin exams remain the standard of care.

Stevenson said she suggests people get one head-to-toe exam to inspect the skin for suspicious growths — and then talk to their doctor about how to follow up.

“I also recommend getting in front of a mirror once a month to do a self-exam,” Stevenson said.

The point is to spot any changes in the size, shape or color of a mole or other dark spot on the skin. According to Stevenson, some warning signs of melanoma include asymmetry in a growth, as well as irregular borders, uneven coloring and a large diameter (larger than a pencil eraser).

“When melanoma is caught early,” Stevenson said, “it is highly curable.”

Source: HealthDay