NASA Looks to Spice Up Astronaut Menu with Deep Space Food Production

Steve Gorman wrote . . . . . . . . .

In the 2015 sci-fi film “The Martian,” Matt Damon stars as an astronaut who survives on a diet of potatoes cultivated in human feces while marooned on the Red Planet.

Now a New York company that makes carbon-negative aviation fuel is taking the menu for interplanetary cuisine in a very different direction. Its innovation has put it in the finals of a NASA-sponsored contest to encourage development of next-generation technologies for meeting the food needs of astronauts.

Closely held Air Company of Brooklyn has pioneered a way of recycling carbon dioxide exhaled by astronauts in flight to grow yeast-based nutrients for protein shakes designed to nourish crews on long-duration deep-space missions.

“It’s definitely more nutritious than Tang,” said company co-founder and Chief Technology Officer Stafford Sheehan, referring to the powdered beverage popularized in 1962 by John Glenn when he became the first American to orbit Earth.

Sheehan, who has a doctorate in physical chemistry from Yale University, said he originally developed his carbon-conversion technology as a means of producing high-purity alcohols for jet fuel, perfume and vodka.

The NASA-sponsored Deep Space Food Challenge prompted Sheehan to modify his invention as a way of producing edible proteins, carbohydrates and fats from the same system.

TASTES LIKE … SEITAN

The resulting single-cell protein drink entered in NASA’s contest has the consistency of a whey protein shake, Sheehan said. He compared its flavor to that of seitan, a tofu-like food made from wheat gluten that originated in East Asian cuisine and has been adopted by vegetarians as a meat substitute.

“And you get that sweet-tasting, almost malted flavor to it,” Sheehan said in an interview.

Apart from protein drinks, the same process can be used to create more carbohydrate-heavy substitutes for breads, pastas and tortillas. For the sake of culinary variety, Sheehan said he sees his smoothie being supplemented on missions by other sustainably produced comestibles.

The company’s patented AIRMADE technology was one of eight winners announced by NASA this month in the second phase of its food competition, which carried $750,000 in prize money. A final round of the competition is still to come.

Other winners included: a bioregenerative system from a Florida lab to raise fresh vegetables, mushrooms and even insect larvae to be used as micronutrients; an artificial photosynthesis process developed in California to create plant- and fungal-based ingredients; and a gas-fermentation technology from Finland to produce single-celled proteins.

Up to $1.5 million in prize money will be divvied up among the eventual final winners of the contest.

While few if any are likely to earn a place in the Michelin Guide for fine dining, they represent a big leap forward from Tang and the freeze-dried snacks consumed by astronauts in the earliest days of space travel.

The new food-growing schemes are also more appetizing, and promise to be far more nutritious, than Matt Damon’s fictional poop-fertilized potatoes in “The Martian.”

“That was taking an idea to an extreme for a Hollywood movie,” said Ralph Fritsche, space crop production manager at NASA’s Kennedy Space Center in Florida, adding that human waste alone “is not the complete nutrient source that plants need to grow and thrive.”

Keeping astronauts well nourished for extended periods within the limited, zero-gravity confines of space vehicles in low-Earth orbit has long posed a challenge for NASA. For the past two decades, crews aboard the International Space Station have lived on a diet mostly of packaged meals, with some fresh produce delivered on regular re-supply missions.

ISS teams also have experimented with growing a number of vegetables in orbit, including lettuce, cabbage, kale and chile peppers, according to NASA.

But the imperative for self-contained, low-waste food production requiring minimal resources has become more pronounced as NASA sets its sights on returning astronauts to the moon and eventual human exploration of Mars and beyond.

Advances in space-based food production also have direct applications for feeding Earth’s ever-growing population in an era when climate change is making food more scarce and harder to produce, Fritsche said.

“Controlled environment agriculture, the first modules we deploy on the moon, will have some similarity to the vertical farms that we’ll have here on Earth,” Fritsche said.

Sheehan’s system starts by taking carbon dioxide gas scrubbed from the air breathed by astronauts and combining it with hydrogen gas extracted from water by electrolysis. The reaction yields an alcohol-and-water mixture, which is then fed to a small quantity of yeast to grow a renewable supply of single-celled proteins and other nutrients.

In essence, Sheehan said, the carbon dioxide and hydrogen form an alcohol feedstock for the yeast, “and the yeast is the food for the humans.”
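In simplified terms, and assuming ethanol as the intermediate alcohol (the article does not name the specific alcohol the system produces), the chain might be sketched as:

```latex
% Simplified sketch of the assumed reaction chain; ethanol is an
% illustrative choice, not confirmed by the article.
\begin{align*}
2\,\mathrm{H_2O} &\longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
  &&\text{(electrolysis supplies hydrogen)}\\
2\,\mathrm{CO_2} + 6\,\mathrm{H_2} &\longrightarrow \mathrm{C_2H_5OH} + 3\,\mathrm{H_2O}
  &&\text{(hydrogenation of exhaled carbon dioxide)}
\end{align*}
```

The diluted alcohol then serves as the carbon source on which the yeast grow, closing the loop from exhaled carbon dioxide back to edible protein.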

“We’re not re-inventing products,” Sheehan said, “we’re just making them in a more sustainable way.”

Source: Reuters


Wendy’s Partners with Pipedream to Pilot Industry-First Underground Delivery System for Mobile Orders

The Wendy’s Company today announced a new partnership with Pipedream, a hyperlogistics company, to pilot its underground autonomous robot system with the goal of delivering digital food orders from the kitchen to designated parking spots in seconds, for faster and more convenient pick-up experiences. As the first quick service restaurant (QSR) to pilot this cutting-edge technology, the partnership marks another bold step for Wendy’s in driving industry innovation as it strives to serve digital-forward customers with greater ease, speed and accuracy.

“We know that serving orders quickly and accurately leads to increased customer satisfaction,” said Deepak Ajmani, U.S. Chief Operations Officer, The Wendy’s Company. “Pipedream’s Instant Pickup system has the potential to unlock greater mobile order speed of service and accuracy, enabling us to consistently deliver hot and fresh Wendy’s products to our fans.”

Pipedream’s technology is designed to make digital order pick-up fast, reliable and invisible. By connecting the Wendy’s kitchen to an Instant Pickup portal positioned outside the restaurant, this first-of-its-kind delivery system aims to give digital customers a fast and convenient pick-up option without leaving their cars, while increasing efficiency for restaurant crew members by streamlining digital order pick-up points. The technology uses autonomous robots to transport meals underground and deliver them at the car-side Instant Pickup portal.

“At Wendy’s, we are consistently innovating to meet our customers however they choose to engage with us,” said Matt Spessard, Senior Vice President and Global Chief Technology Officer, The Wendy’s Company. “As mobile ordering preferences increase, we’re thrilled to be the first quick service restaurant to partner with Pipedream, leveraging their unique delivery technology and system with the goal of reinventing digital pick-ups to bring more Wendy’s to more people as quickly and efficiently as possible.”

“We’re proud to partner with an iconic, innovative brand like Wendy’s to bring the future of mobile order pick-up to the quick service industry,” said Garrett McCurrach, CEO, Pipedream. “By solving order handoff, the final leg of the digital experience, our Instant Pickup technology allows Wendy’s restaurant team members to focus on what matters: serving delicious, high-quality food and connecting with customers in this digital-first world.”

Wendy’s plans to integrate Pipedream’s industry-first underground delivery system into an existing restaurant later this year.

Source: Cision


Mind-Reading Technology Can Turn Brain Scans Into Language

Dennis Thompson wrote . . . . . . . . .

A mind-reading device seems like science fiction, but researchers say they’re firmly on the path to building one.

Using functional MRI (fMRI), a newly developed brain-computer interface can read a person’s thoughts and translate them into full sentences, according to a report published May 1 in Nature Neuroscience.

The decoder was developed to read a person’s brain activity and translate what they want to say into continuous, natural language, the researchers said.

“Eventually, we hope that this technology can help people who have lost the ability to speak due to injuries like strokes or diseases like ALS,” said lead study author Jerry Tang, a graduate research assistant at the University of Texas at Austin.

But the interface goes even further than that, translating into language whatever thoughts are foremost in a person’s mind.

“We also ran our decoder on brain responses while the user imagined telling stories and ran responses while the user watched silent movies,” Tang said. “And we found that the decoder is also able to recover the gist of what the user was imagining or seeing.”

Because of this, the decoder is capable of capturing the essence of what a person is thinking, if not always the exact words, the researchers said.

For example, at one point a participant heard the words, “I don’t have my driver’s license yet.” The decoder translated the thought as, “She has not even started to learn to drive yet.”

The technology isn’t at the point where it can be used on just anyone, Tang said.

Training the program required at least 16 hours of participation from each of the three people involved in the research, and Tang said the brain readings from one person can’t be used to inform the scans of another.

The actual scan also requires the cooperation of the person, and can be foiled by simple mental tasks that deflect a participant’s focus, he said.

Still, one expert lauded the findings.

“This work represents an advance in brain-computer interface research and is potentially very exciting,” said Dr. Mitchell Elkind, chief clinical science officer of the American Heart Association and a professor of neurology and epidemiology at Columbia University in New York City.

“The major advance here is being able to record and interpret the meaning of brain activity using a non-invasive approach,” Elkind explained. “Prior work required electrodes placed into the brain using open neurosurgery with the risks of infection, bleeding and seizures. This non-invasive approach using MRI scanning would have virtually no risk, and MRIs are done regularly in brain-injured patients. This approach can also be used frequently in healthy people as part of research, without introducing them to risk.”

Powerful results prompt warning that ‘mental privacy’ may be at risk

Indeed, the results of this study were so powerful that Tang and his colleagues felt moved to issue a warning about “mental privacy.”

“This could all change as technology gets better, so we believe that it’s important to keep researching the privacy implications of brain decoding, and enact policies that protect each person’s mental privacy,” Tang said.

Earlier efforts at translating brain waves into speech have used electrodes or implants to record impulses from the motor areas of the brain related to speech, said senior researcher Alexander Huth. He is an assistant professor of neuroscience and computer science at the University of Texas at Austin.

“These are the areas that control the mouth, larynx, tongue, etc., so what they can decode is how is the person trying to move their mouth to say something, which can be very effective,” Huth said.

The new process takes an entirely different approach, using fMRI to non-invasively measure changes in blood flow and blood oxygenation within brain regions and networks associated with language processing.

“So instead of looking at this kind of low-level like motor thing, our system really works at the level of ideas, of semantics, of meaning,” Huth said. “That’s what it’s getting at. This is the reason why what we get out is not the exact words that somebody heard or spoke. It’s the gist. It’s the same idea, but expressed in different words.”

The researchers trained the decoder by first recording the brain activity of the three participants as they listened to 16 hours of storytelling podcasts like the “Moth Radio Hour,” Tang said.

“This is over five times larger than existing language datasets,” he said. “And we use this dataset to build a model that takes in any sequence of words and predicts how the user’s brain would respond when hearing those words.”

The program mapped the changes in brain activity to semantic features of the podcasts, capturing the meanings of certain phrases and associated brain responses.
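Encoding models of this kind are commonly fit as regularized linear regressions from semantic features of the heard words (for example, language-model embeddings) to the measured signal. The sketch below assumes that standard approach rather than the authors’ exact method:

```python
from sklearn.linear_model import Ridge

def fit_encoding_model(word_features, fmri_responses, alpha=1.0):
    """Fit a model that predicts brain activity from word features.

    word_features:  (time_points, feature_dim) array of semantic features
                    for the words heard at each fMRI time point.
    fmri_responses: (time_points, voxels) array of recorded activity.
    """
    # Ridge regression is a common choice for fMRI encoding models,
    # since the features are high-dimensional and correlated.
    model = Ridge(alpha=alpha)
    model.fit(word_features, fmri_responses)
    return model  # model.predict(new_features) -> predicted brain response
```

Given such a model, any candidate sequence of words can be turned into a predicted brain response and compared against what was actually recorded.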

The investigators then tested the decoder by having participants listen to new stories.

Making educated guesses based on brain activity

The decoder essentially attempts to make an educated guess about what words are associated with a person’s thoughts, based on brain activity.
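In outline, that guessing can be framed as a guided search: a language model proposes candidate continuations of the transcript so far, an encoding model predicts the brain response each candidate would evoke, and only the best-matching candidates survive. A minimal sketch, with hypothetical `propose_continuations` and `predict_brain_response` callables standing in for models the article does not detail:

```python
import numpy as np

def decode_story(fmri_recording, propose_continuations, predict_brain_response,
                 beam_width=10, max_steps=50):
    """Hypothetical sketch of encoding-model-guided decoding."""
    beams = [""]  # candidate transcripts, starting empty
    for _ in range(max_steps):
        candidates = []
        for text in beams:
            # A language model proposes plausible next words...
            for continuation in propose_continuations(text):
                extended = (text + " " + continuation).strip()
                # ...and the encoding model predicts the fMRI activity
                # that hearing this transcript would evoke.
                predicted = predict_brain_response(extended)
                score = np.corrcoef(predicted, fmri_recording)[0, 1]
                candidates.append((score, extended))
        # Keep only the transcripts whose predicted responses best
        # match the actual recording: the "educated guesses."
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = [text for _, text in candidates[:beam_width]]
    return beams[0]
```

Because candidates are scored on predicted brain responses rather than exact wording, many phrasings with the same meaning score similarly, which is consistent with the decoder recovering the gist rather than the precise words.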

Using the participants’ brain activity, the decoder generated word sequences that captured the meanings of the new stories. It even generated some exact words and phrases from the stories.

One example of an actual versus a decoded story:

Actual: “I got up from the air mattress and pressed my face against the glass of the bedroom window expecting to see eyes staring back at me but instead finding only darkness.”

Decoded: “I just continued to walk up to the window and open the glass I stood on my toes and peered out I didn’t see anything and looked up again I saw nothing.”

The decoder specifically captured what a person was focused upon. When a participant actively listened to one story while another played simultaneously, the program identified the meaning of the story that had the listener’s focus, the researchers said.

To see whether the decoder was capturing thoughts rather than speech, the researchers also had participants watch silent movies while scanning their brain activity.

“There’s no language whatsoever. Subjects were not instructed to do anything while they were watching those videos. But when we put that data into our decoder, what it spat out is a kind of a description of what’s happening in the video,” Huth said.

The participants also were asked to imagine a story, and the device was able to predict the meaning of that imagined story.

“Language is the output format here, but whatever it is that we’re getting at is not necessarily language itself,” Huth said. “It’s definitely getting at something deeper than language and converting that into language, which is kind of at a very high level the role of language, right?”

Decoder is not yet ready for prime-time

Concerns over mental privacy led the researchers to further test whether participants could interfere with the device’s readings.

Certain mental exercises, like naming animals or thinking about a different story than the podcast, “really prevented the decoder from recovering anything about the story that the user was hearing,” Tang said.

The process still needs more work. The program is “uniquely bad” at pronouns, and requires tweaking and further testing to accurately reproduce exact words and phrases, Huth said.

It’s also not terribly practical since it now requires the use of a large MRI machine to read a person’s thoughts, the study authors explained.

The researchers are considering whether cheaper, more portable technology like EEG or functional near-infrared spectroscopy could be used to capture brain activity as effectively as fMRI, Tang said.

But they admit they were shocked by how well the decoder did wind up working, which led to their concerns over brain privacy.

“I think my cautionary example is the polygraph, which is not an accurate lie detector, but has still had many negative consequences,” Tang said. “So I think that while this technology is in its infancy, it’s very important to regulate what brain data can and cannot be used for. And then if one day it does become possible to gain accurate decoding without getting the person’s cooperation, we’ll have a regulatory foundation in place that we can build off of.”

Source: HealthDay


Study: ChatGPT Scores Nearly 50 per cent on Board Certification Practice Test for Ophthalmology

A study of ChatGPT found the artificial intelligence tool correctly answered fewer than half of the questions from a study resource commonly used by physicians preparing for board certification in ophthalmology.

The study, published in JAMA Ophthalmology and led by St. Michael’s Hospital, a site of Unity Health Toronto, found ChatGPT correctly answered 46 per cent of the questions when the test was first conducted in Jan. 2023. When researchers conducted the same test one month later, ChatGPT scored more than 10 percentage points higher.

The potential of AI in medicine and exam preparation has garnered excitement since ChatGPT became publicly available in Nov. 2022. It has also raised concerns about the potential for incorrect information and for cheating in academia. ChatGPT is free, available to anyone with an internet connection, and works in a conversational manner.

“ChatGPT may have an increasing role in medical education and clinical practice over time, however it is important to stress the responsible use of such AI systems,” said Dr. Rajeev H. Muni, principal investigator of the study and a researcher at the Li Ka Shing Knowledge Institute at St. Michael’s. “ChatGPT as used in this investigation did not answer sufficient multiple choice questions correctly for it to provide substantial assistance in preparing for board certification at this time.”

Researchers used a dataset of practice multiple choice questions from the free trial of OphthoQuestions, a common resource for board certification exam preparation. To ensure ChatGPT’s responses were not influenced by concurrent conversations, entries or conversations with ChatGPT were cleared prior to inputting each question and a new ChatGPT account was used. Questions that used images and videos were not included because ChatGPT only accepts text input.

Of 125 text-based multiple-choice questions, ChatGPT answered 58 (46 per cent) correctly when the study was first conducted in Jan. 2023. Researchers repeated the analysis in Feb. 2023, and ChatGPT’s performance improved to 58 per cent.
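As a quick check of those figures, taking the reported percentages at face value and assuming the February run used the same 125 questions (which the summary implies but does not state):

```python
questions = 125
jan_correct = 58

jan_pct = jan_correct / questions            # 0.464
print(f"Jan. 2023: {jan_correct}/{questions} = {jan_pct:.1%}")  # 46.4%

feb_pct = 0.58                               # reported February accuracy
# The one-month improvement is a gain in percentage points:
print(f"Gain: {(feb_pct - jan_pct) * 100:.0f} percentage points")  # 12
```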

“ChatGPT is an artificial intelligence system that has tremendous promise in medical education. Though it provided incorrect answers to board certification questions in ophthalmology about half the time, we anticipate that ChatGPT’s body of knowledge will rapidly evolve,” said Dr. Marko Popovic, a co-author of the study and a resident physician in the Department of Ophthalmology and Vision Sciences at the University of Toronto.

ChatGPT closely matched how trainees answer questions, and selected the same multiple-choice response as the most common answer provided by ophthalmology trainees 44 per cent of the time. ChatGPT selected the multiple-choice response that was least popular among ophthalmology trainees 11 per cent of the time, second least popular 18 per cent of the time, and second most popular 22 per cent of the time.

“ChatGPT performed most accurately on general medicine questions, answering 79 per cent of them correctly. On the other hand, its accuracy was considerably lower on questions for ophthalmology subspecialties. For instance, the chatbot answered 20 per cent of questions correctly on oculoplastics and zero per cent correctly from the subspecialty of retina. The accuracy of ChatGPT will likely improve most in niche subspecialties in the future,” said Andrew Mihalache, lead author of the study and undergraduate student at Western University.

Source: Unity Health Toronto


ChatGPT Rated as Better Than Real Doctors for Empathy, Advice

Alan Mozes wrote . . . . . . . . .

Only five months have passed since the world got its first taste of the ground-breaking artificial intelligence (AI) tool known as ChatGPT.

Promising a brave new world of human-machine connectivity, AI demonstrates near-instantaneous access to in-depth information on almost any subject, all in full conversational sentences, often delivered in a human-sounding voice.

A new study says health care may never be the same.

That’s the broad takeaway of groundbreaking research that tackled a potentially existential question: When it comes to providing patients with high-quality medical information — and delivering it with compassion and understanding — who does it better: ChatGPT or your doctor?

The answer: ChatGPT, by a mile.

In fact, after comparing doctor and AI responses to nearly 200 medical questions, a team of health care professionals concluded that nearly 80% of the answers from ChatGPT were more nuanced, accurate and detailed than those shared by physicians.

ChatGPT was no slouch on bedside manner, either. While less than 5% of doctor responses were judged to be “empathetic” or “very empathetic,” that figure shot up to 45% for answers provided by AI.

“For the first time, we compared AI and physicians’ responses to the same patient messages, and AI won in a landslide,” said study leader John Ayers, vice chief of innovation with the division of infectious disease and global public health at the Qualcomm Institute at University of California, San Diego.

“This doesn’t mean AI will replace your physician,” he stressed. “But it does mean a physician using AI can potentially respond to more messages with higher-quality responses and more empathy.”

ChatGPT: Counsel & compassion?

In one example cited in the study, AI was asked about the risk of blindness after an eye got splashed with bleach.

“I’m sorry to hear that you got bleach splashed in your eye,” ChatGPT replied, recommending rinsing the eye with clean water or saline solution as soon as possible.

“It is unlikely that you will go blind from getting bleach splashed in your eye,” the bot assured. “But it is important to take care of the eye and seek medical attention if necessary to prevent further irritation or damage.”

In comparison, a doctor replied to the question this way: “Sounds like you will be fine. You should flush the eye anytime you get a chemical or foreign body in the eye. You can also contact Poison Control 1-800-222-1222.”

Ayers and his colleagues pointed out that the COVID-19 pandemic led a growing number of patients to seek virtual health care. Doctors have seen a notable and sustained surge in emails, texts and hospital-portal messages from patients in need of health advice.

For their analysis, researchers sought out a random sampling of medical questions that had already been posted to the “AskDocs” forum on the social media platform Reddit.

The open forum has more than 450,000 members who regularly turn to it for moderated answers from verified physicians. Questions included concerns about whether swallowing a toothpick can be fatal; what to do about head swelling after bumping into a steel bar; and how to handle a lingering cough.

In all, 195 real patient-doctor exchanges were culled from the site. The original questions were posed again to ChatGPT.

Both doctor and ChatGPT responses were then submitted to panels of three licensed health care professionals, variously drawn from the fields of pediatrics, geriatrics, internal medicine, oncology, infectious disease and preventive medicine.

High marks for accuracy

The result: Nearly 8 out of 10 times ChatGPT answers were deemed to be of higher overall quality than the information previously shared by physicians responding to the social media forum.

Specifically, ChatGPT answers were longer — 168 to 245 words — than doctor responses, which were 17 to 62 words. Moreover, the proportion of ChatGPT responses rated either “good quality” or “very good quality” was nearly four times higher than that from doctors responding online.

The empathy gap — in ChatGPT’s favor — was even more striking, with the panel finding that AI responses were nearly 10 times more likely to be “empathetic” or “very empathetic” than those of physicians online.


‘Value-added’ medicine, not a replacement for doctors

Ayers said the findings suggest that AI “will potentially revolutionize public health.”

But doctors aren’t destined to become dinosaurs. In his view, the future of health care is a world in which doctors are assisted and enabled by AI, not replaced.

“All doctors are in the game for the right reason,” he noted. “Are they sorry that you have a headache? Do they want to give you good quality information? Yes and yes. But given their workload many doctors just don’t have the time to communicate everything they might want to say in an email. They are constrained.”

That’s why ChatGPT does so much better, Ayers said.

“AI messaging is not operating in a constraint,” he explained. “That’s the new value-added of AI-assisted medicine. Doctors will spend less time over verbs and nouns and conjugation, and more time actually delivering health care.”

Are there risks? Yes, said Ayers, who acknowledged that benefits highlighted in a study context don’t always translate to the real world.

“The risk is that we just willy-nilly turn this product on and market it,” he cautioned. “We do need to focus on patient outcomes, and make sure this technology has a positive impact on public health. But our study is very promising. And I’m pretty optimistic.”

Dr. Jonathan Chen, an assistant professor at the Center for Biomedical Informatics Research and the Division of Hospital Medicine at the Stanford University School of Medicine in Palo Alto, co-wrote an accompanying editorial.

“As a practicing physician myself, I recognize significant value in direct human interactions in the clinician-patient relationship,” he said.

But at the same time, “we are all also still human, which means we are not always as consistent, empathetic, polite and professional as we may aspire to be every day,” Chen noted. “And certainly we can’t be available 24/7 for all of the people who need our expertise and counsel in the way automated systems can.”

So while “many doctors say they can’t be replaced by chatbots, since they offer the human touch a bot does not,” Chen said the sobering truth is that people don’t own this space as much as they’d like to believe.

“For better and for worse, I easily foresee far more people receiving counseling from bots than live human beings in the not distant future,” he predicted.

Still, like Ayers, Chen suspects there’s far more to be gained than lost by the advent of AI.

“In clinical care, there is always too much work to do with too many patients to see,” he said. “An abundance of information, but a scarcity of time and human connection. While we must beware of unintended harms, I am more hopeful that advancing AI systems can take over many of the mundane paperwork, transcribing, documentation and other tasks that currently turn doctors into the most expensive data entry clerk in the hospital.”

The findings were published online in JAMA Internal Medicine.

Source: HealthDay

