What A Global Flavor Map Can Tell Us About How We Pair Foods


Each node in this network denotes an ingredient, the color indicates its food category, and node size reflects the ingredient's prevalence in recipes. Two ingredients are connected if they share a significant number of flavor compounds, and link thickness represents the number of shared compounds between the two ingredients.

Nancy Shute wrote . . . . . . . . .

There’s a reason why Asian dishes often taste so different from the typical North American fare: North American recipes rely on flavors that are related, while East Asian cooks go for sharp contrasts.

That’s the word from researchers at the University of Cambridge, who used a tool called network analysis to chart the relationship between chemical flavor compounds. They did it to test the widely believed notion that foods with compatible flavors are chemically similar.

It turns out that’s true in some regional cuisines, particularly in North America – think milk, egg, butter, cocoa, and vanilla. But in East Asia, cooks are more likely to combine foods with few chemical similarities – from shrimp to lemon to ginger, soy sauce, and hot peppers.

The scientists used 56,498 recipes to test their questions about the "rules" that may underlie recipes. (They mined Epicurious and Allrecipes as well as the Korean site Menupan.) They note that we rely on a very small number of recipes — around one million — compared with all the possible food and flavor combinations available to us — more than a trillion by their estimates.
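That trillion-plus figure is easy to sanity-check. A minimal sketch, assuming roughly 381 distinct ingredients (the count used in the underlying paper) and a typical recipe size of eight ingredients; both figures are assumptions for illustration:

```python
from math import comb

# Assumed figures, for illustration only: the underlying paper worked with
# roughly 381 distinct ingredients, and typical recipes combine around 8.
n_ingredients = 381
recipe_size = 8

possible = comb(n_ingredients, recipe_size)  # unordered ingredient sets
print(f"{possible:.2e} possible {recipe_size}-ingredient combinations")
# prints ~1.02e+16, comfortably "more than a trillion"
```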

To illustrate their findings, the scientists decided to show, not just tell. The result: A stunning chart showing which foods are chemical cousins, and which are flavor outliers. Cucumber stands apart, while cheeses cluster in a clique, as do fish. Cumin connects to ginger and cinnamon, while tomato stands in a strange subgroup with chickpeas, mint, cardamom and thyme.



Panel A lists the ingredients in two recipes, together with their flavor compounds. Each flavor compound is linked to the ingredients that contain it, forming a bipartite network. Panel B shows the flavor network, whose nodes are ingredients, linked if they share at least one flavor compound.
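The construction the caption describes is a standard bipartite projection, and a short sketch makes it concrete. The ingredient-compound sets below are invented for illustration (the real network was built from a database of over a thousand flavor compounds); the projection logic is the part that matters:

```python
import networkx as nx
from itertools import combinations

# Hypothetical ingredient -> flavor-compound sets, for illustration only.
compounds = {
    "tomato":   {"furaneol", "hexanal", "isovaleric acid"},
    "parmesan": {"isovaleric acid", "butyric acid"},
    "butter":   {"butyric acid", "diacetyl", "furaneol"},
    "coffee":   {"furaneol", "diacetyl"},
}

# Flavor network: nodes are ingredients, linked if they share >= 1 compound;
# the edge weight counts shared compounds (drawn as link thickness above).
G = nx.Graph()
for a, b in combinations(compounds, 2):
    shared = compounds[a] & compounds[b]
    if shared:
        G.add_edge(a, b, weight=len(shared))

for a, b, w in G.edges(data="weight"):
    print(f"{a} -- {b}: {w} shared compound(s)")
```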


The work appeared in Scientific Reports, an open-access journal from the Nature Publishing Group.

This network is beautiful enough to hang by the stove, and a big improvement on the usual eyeball-numbing illustrations in scientific journals.

But with people accustomed to eating food from around the world, those preferences may be shifting. A new survey declared salty caramel to be the hot North American flavor for 2012. Can garlic-soy caramels be far behind?

Source: NPR


Read also:

Flavor network and the principles of food pairing


The Scent of Flavor

Linda Bartoshuk wrote . . . . . . . . .

ARISTOTLE CONCLUDED that there are five elementary sensations: sight, hearing, touch—encompassing temperature, irritation, and pain—taste, and smell. He was mistaken.

When Aristotle sniffed an apple, he smelled it. When he bit into the apple and the flesh touched his tongue, he tasted it. But he overlooked something that caused 2,000 years of confusion. If Aristotle had plugged his nose when he tasted the apple, he might have noticed that the apple sensation disappeared leaving only sweetness and perhaps some sourness—depending on the apple. He might have decided that the apple sensation was entirely different from the sweet and sour tastes, and he might have decided that there are six elementary sensations. He didn’t. It was not until 1810 that William Prout, then a young student at the University of Edinburgh, plugged his nose and noticed that he could not taste nutmeg. He wrote,

[T]he sensation produced by the nutmeg or any other substance, when introduced into the mouth, and which ceases the moment the nostrils are closed, is really very different from taste, and ought to be distinguished by another name; that that name should be flavor [emphasis original], the one which seems most naturally and properly to designate it.

We now understand the anatomy of the nose and mouth. There is a conduit from the back of the mouth up into the nose called the retronasal space. When we swallow, volatiles released from foods in the mouth are forced through the retronasal space, up into the nose. The perception of those volatiles gives us flavor. If you plug your nose, air currents cannot move through the retronasal space and flavor is blocked.

If Aristotle had recognized flavor as a distinct sensation, he might have paid attention to how taste, flavor, and smell really work together. Taste handles the sensations evoked when nonvolatiles stimulate receptors on the tongue. Flavor and smell respond to volatiles that stimulate receptors in the nose and send signals up the olfactory nerve. But those signals are processed in different parts of the brain. Smell tells us about objects in the world around us and flavor tells us about foods in our mouths. Smell and flavor cannot both use the olfactory nerve at the same time; they must take turns. The brain needs to know which of the senses is using the nerve in order to send the input to the correct area. Sniffing appears to be the cue that signals smell. Taste appears to be the main cue that signals flavor. The evidence for this, documented below, took a long time to gather, but the search has yielded many important insights with clinical and commercial implications.

The Victim of an Illusion

ARISTOTLE’S MISTAKE is understandable when we consider that retronasal olfactory sensations, or flavors, seem to come from the mouth even though we know they come from the nose. Consider the following demonstration. Plug your nose and put a jelly bean in your mouth. Chew it up and swallow it while keeping your nose tightly closed. You will probably taste sweetness and perhaps a bit of sourness, depending on the jelly bean, but you will not perceive the flavor. That is, you won’t know if the jelly bean is lemon flavored, lime flavored, raspberry flavored, and so on. Now unplug your nose. Suddenly you will perceive the flavor. When you unplugged your nose, the volatiles released by chewing the jelly bean traveled up through the retronasal space into your nose and produced a signal in your olfactory nerve that traveled to your brain.

Think about that moment when you perceived the flavor of the jelly bean. You perceived that flavor as coming from your mouth. Even knowing that the volatiles travel into your nose and the flavor sensation comes from your olfactory nerve, you will still perceive it as coming from your mouth. In 1917, two psychologists, Harry Hollingworth and Albert Poffenberger, became fascinated by this illusion. In their book, The Sense of Taste, they explained the localization of flavor to the mouth as, “true largely because of the customary presence of sensations of pressure, temperature, movement, and resistance which are localized in the mouth.”

This conclusion went unchallenged for decades. Research elsewhere supported the idea that the touch sense controls the localization for other sensations. For example, touch controls the localization of thermal stimuli. To demonstrate this, place two quarters in your freezer to make them cold. Hold a third in your hand to make it body temperature. Arrange the three quarters on a flat surface with the body-temperature quarter in the middle. Touch the three quarters simultaneously with your index, middle, and ring fingers. All three quarters will feel cold. The touch sensations “capture” the cold sensation so that coldness seems to come from all three quarters.

The Localization of Flavor

WE ARE now able to anesthetize the chorda tympani taste nerve that mediates taste from the front, mobile part of the tongue. The chorda tympani nerve leaves the tongue in a common sheath with the trigeminal nerve, which mediates touch, temperature, irritation, and pain on the tongue. These nerves travel near the nerve mediating pain from lower teeth. When your dentist gives you an injection of lidocaine to block pain when filling a lower tooth, the nearby trigeminal and chorda tympani nerves are also anesthetized. As a result, your tongue becomes numb and you cannot taste on the side of the injection.

The chorda tympani and lingual nerves separate, and the chorda tympani passes through the middle ear, right behind the eardrum, before it travels to the brain. When otolaryngologists anesthetize the eardrum, they also inadvertently anesthetize the chorda tympani nerve.

As part of a study, we asked volunteers to sample yogurt and tell us where they perceived the flavor. The answer: from all around the inside of the mouth. Whether we anesthetized the chorda tympani taste nerve by dental injection—blocking taste and touch—or otolaryngological injection—blocking only taste—the result was the same. In both cases the flavor jumped to the unanesthetized side of the mouth. Our conclusion was that touch was less important: taste controls the perceptual localization of flavor to the mouth.

Is there any biological purpose served by the flavor localization illusion? Olfaction senses objects in the world outside of us, but also senses objects in our mouths. We perceptually localize smells to objects in the world around us. Perceptually localizing both taste and flavor to the mouth emphasizes both as attributes of food.

Taste and Flavor Distinction

PROUT’S INSIGHTFUL distinction between taste and flavor did not gain much traction. The only reference to it by his peers that I have ever found is a footnote written by his friend John Elliotson in his translation of a famous Latin text by Johann Friedrich Blumenbach, Institutiones Physiologicae (The Elements of Physiology). Prout gained his real renown for work in physical chemistry on the hydrogen atom. His work so impressed Ernest Rutherford that the proton was almost named the “prouton.”

Prout was not the only scientist to plug his nose in an effort to discover the origin of flavor. In France, just a few years after Prout, two other scientists, the anatomist Hippolyte Cloquet and the chemist Michel Eugène Chevreul, made similar observations. The Scottish philosopher Alexander Bain, one of the earliest to consider psychology a science, demonstrated his increasingly sophisticated understanding of flavor across the three editions of his book, The Senses and the Intellect. In the 1855 edition, “flavour” was “the mixed effect of taste and odour,” but in 1864, Bain noted that tastes are “the same whether the nostrils are opened or closed,” and flavor results when “odorous particles are carried into the cavities of the nose” and ceases when the nostrils are closed. As it turned out, these observations had almost as little impact as Prout’s.

The distinction between taste and flavor became blurred over the course of the twentieth century. The Arthur D. Little company in Boston was the first to market a method for flavor evaluation for the food industry. In 1945, Flavor, written by Ernest Crocker, a chemist working at Arthur D. Little, was published. Crocker used the word “flavor” to denote the aggregation of all the sensations evoked by eating: taste, olfaction, and touch; like Aristotle, Crocker lumped temperature, irritation, and pain in with touch. The sensations evoked when volatiles rise through the retronasal space into the nose were acknowledged to occur but were described simply as a “back entry” for the detection of odors.

Confusion about the sensation evoked by the travel of volatiles through that “back entry” is reflected in the terms used to describe it. We now use “retronasal olfaction,” but that term did not appear in a published paper until 1984. Prior to that, an array of terms had been suggested: “nose sensations,” “Gustatorisches Riechen” (gustatory smelling), “expiratory smelling,” “nasal chemoreception,” and “in-mouth olfaction,” to name the ones I’ve found.

Robert Moncrieff wrote The Chemical Senses in 1944. The updated edition published in 1960 was considered the standard text for graduate students in my era. Like the position taken at Arthur D. Little, Moncrieff wrote:

Flavour is a complex sensation. It comprises taste, odour, roughness or smoothness, hotness or coldness, and pungency or blandness. The factor which has the greatest influence is odour. If odour is lacking, then the food loses its flavour and becomes chiefly bitter, sweet, sour or saline.

At least Moncrieff argued that odor was the most important.

The International Organization for Standardization (ISO) is a federation of national standards groups; it sets standards reflecting the views of at least 75% of the member bodies voting. The ISO definition of flavor is short but far from sweet: “Flavour: complex combination of the olfactory, gustatory and trigeminal sensations perceived during tasting.” Dictionaries do much the same. Merriam-Webster defines “flavor” as, “The quality of something that affects the sense of taste,” and, “The blend of taste and smell sensations evoked by a substance in the mouth.”

Part of the reason for this confusion is that we lack a verb to describe the perception of flavor. Consider how we describe the sensations evoked by taste, smell, and flavor. I can say, “I taste sugar” and “I smell cinnamon,” but not “I flavor cinnamon.” Using “flavor” as a verb means to add flavor to something rather than to perceive the sensation of flavor. When we want to describe how we perceive the flavor of cinnamon we borrow “taste” and say, “I taste cinnamon.” This only adds to the problem.

An Aggregate of All Sensations

SOME EXPERTS who use “flavor” to describe the aggregate of all sensations evoked by eating have argued that this aggregation has a unitary property. That is, the sensations evoked by eating combine to create something that is different from any of them, i.e., an emergent property. The nature of emergent properties arising from combinations of different sensations has been addressed by Michael Kubovy and David Van Valkenburg.

An emergent property of an aggregate is a property that is not present in the aggregated elements. At room temperature, for example, water is a liquid, but the elements that compose it are both gases. Thus, at room temperature, the property liquid is an emergent property of water. There are two kinds of emergent properties: eliminative and preservative. When hydrogen and oxygen combine to form water, the properties of the elements, both being gases, are not observable; they are eliminated by the process of aggregation. In the human sciences, such eliminative emergent properties are also common: we can mix two colored lights, such as red and yellow, and observers will not be able to tell whether the orange light they observe is a spectral orange or a mixture. Thus, color mixture is an eliminative emergent property. Preservative emergent properties were first noticed in 1890 by Christian von Ehrenfels, who described a melody as being an emergent property of the set of notes comprising it. The notes can be heard; indeed they must be heard for the melody to be recognized. In a melody, the elements are preserved in the process of aggregation; indeed, the emergence of the melody is conditional upon the audibility of the elements.

Even when “flavor” is considered to emerge from the aggregate of all the sensations evoked by eating, most agree that those individual sensations remain perceptible. In The Psychology of Flavor, Richard Stevenson explicitly notes that flavor is a “preservative emergent property.”

I wish that Crocker and the Arthur D. Little company had coined a new name for the aggregation of the sensations evoked by eating. As a result of this oversight, we are left with two meanings for the word “flavor.” There is little that we can do about this now except to point out that “flavor” can be used to denote retronasal olfaction, or the emergent property of the aggregate of sensations evoked by eating. For the remainder of this article, “flavor” refers to retronasal olfaction.

The Lady Who Could Not Taste Lasagna

NUMEROUS STUDIES have shown that altering the intensity of taste alters the intensity of flavor. The first hint of this dynamic was observed in a patient who cut her tongue licking chocolate pudding out of a can with a sharp edge. I asked the patient to describe what she had lost. She told me that her mother-in-law was a superb Italian cook. She described the wonderful smell she experienced coming from her mother-in-law’s lasagna and the terrible disappointment she felt when she took a bite and perceived nothing. This insight caught my attention because I knew that if the patient could smell the lasagna, her olfactory system was intact, and she should have experienced retronasal olfaction—the flavor of the lasagna. I worried about the possibility that the woman was lying in order to get me to testify in court on her behalf. Indeed, she was then in the process of suing the manufacturer of the can that cut her tongue. Nonetheless, I found her account convincing.

I decided to see if I could duplicate her experience with anesthesia. I ate half a chocolate bar and perceived the usual chocolate sensation I had learned to love as a child. I then anesthetized my mouth by rinsing with the topical anesthetic Dyclone and ate the other half of the chocolate bar. Most of the chocolate sensation was gone. The patient who could no longer taste her mother-in-law’s lasagna was right: if taste is taken away, something goes awry with flavor.

One of my students, Derek Snyder, pursued this topic in his PhD thesis, working with clinical colleagues who used unilateral and bilateral injected anesthesia—dental and otolaryngological—as well as topically applied anesthesia to block taste in volunteers. Blocking taste on only one side of the tongue caused retronasal olfactory sensations to drop by 25%. Blocking taste on both sides led to a drop of 50%. Smell sensations were unchanged.

Some individuals experience much more intense taste sensations than do others because of genetic variation—we call these individuals “supertasters”— and some individuals experience altered taste sensations arising from clinical pathologies. The intensity of our taste sensations predicts the intensity of flavor sensations independent of the ability to smell. If supertasters and non-supertasters both sniff a bowl of chocolate pudding, the two groups will experience, on average, the same chocolate smell. But if both groups eat the pudding, the supertasters will experience the more intense chocolate flavor.

Two taste modifiers also reveal the link between taste and flavor. Gymnema sylvestre is an Indian herb that blocks sweet taste. Medicinal use of this herb dates back two thousand years in Ayurvedic medicine. The ability of Gymnema sylvestre to block sweetness was revealed to the Western world by a nineteenth-century Irish botanist, Michael Edgeworth, while he was working in India. On the advice of neighbors, he chewed the leaves of the plant and discovered he could not taste the sugar in his tea. In 1847, Edgeworth wrote a letter to a fellow botanist, telling him about Gymnema sylvestre. The letter was read at the Linnean Society in London and ultimately described in more detail in the Pharmaceutical Journal.

As part of a study, we made tea from Gymnema sylvestre leaves. Volunteers rinsed their mouths with this tea and then sampled maple syrup and chocolate kisses. The sweetness was substantially reduced, and the maple and chocolate sensations were substantially reduced as well. Recovery from the effects of Gymnema sylvestre also demonstrated the link. The sweetness of sugar syrups made with maple, orange, and raspberry flavors was blocked. As the ability to taste sweetness recovered from the effects of Gymnema sylvestre, the sensations of maple, orange, and raspberry recovered at essentially the same pace. Anyone who wants to experience the effects of Gymnema sylvestre can find it online.

The second taste modifier came from berries found on the Synsepalum dulcificum bush, commonly known as miracle fruit. These berries were first described in English by Archibald Dalzel in 1793. Trained as a physician, but not very successful at it, he found himself in need of money and turned to slave trading in Africa. Dalzel’s observations of the local life where he lived in Africa led to a book, The History of Dahomy, in which he describes a “miraculous berry” that can convert “acids to sweets.” Consumption of the berries was first mentioned more than a century earlier in Wilhelm Müller’s Die Africanische Auf der Guineishen Gold-Cust Gelegene Landschafft Fetu (The African Landscape of Fetu, Situated on the Guinea Gold Coast). The berries were presumably known and used long before that. One of the most interesting uses of miracle fruit in Africa during the nineteenth century was to sweeten palm wine that had turned sour during the long journey from distillation to market.

The glycoprotein responsible for the effects of miracle fruit remains intact when the berries are freeze dried. We asked volunteers to let freeze-dried tablets of miracle fruit dissolve on their tongues. The miracle fruit increased the sweetness of tomatoes and strawberries, both foods that contain acid. That increase in sweetness also increased the tomato and strawberry flavors. As is the case with Gymnema sylvestre, anyone wishing to experience these effects can purchase tablets made from the freeze-dried berries online.

Recently, I had a video call with a very important patient: a young woman who had lost the ability to taste, but still retained her sense of smell. Although she is unable to perceive flavors, there may still be a role for some trigeminal sensation. This patient is unable to feel the burn of chilis, but she can perceive touch on her tongue. Thus, there is still a chance that some trigeminal sensations may also open or close the flavor door.

Together, these studies show that taste and retronasal olfaction are distinct sensations that remain distinct even though their perceived intensities are altered in mixtures of the two. Presumably, this occurs in a part of the brain that receives input from both sensations. Taste is not perceptually a part of retronasal olfaction, but rather signals that an incoming olfactory signal should be processed as flavor rather than smell. The taste cue acts like a valve that lets the retronasal olfactory signal pass through or obstructs it to the degree the valve is open or shut.

Following the Scent of Flavor

RESEARCHERS IN the food industry knew as early as the 1950s that intensifying taste can intensify flavor. Rose Marie Pangborn, for example, showed that adding sucrose to apricot juice intensified the apricot sensation. The reverse effect, intensification of sweetness by retronasal perception of volatiles, was found a bit later. One of the earliest hints came from an experiment we undertook in 1977 in which the addition of ethyl butyrate (fruity flavor) increased the taste of saccharin. Another hint came from a horticultural study linking the sweetness of tomatoes to specific volatiles present in the tomatoes. In the following forty years, only a few volatiles were identified that could enhance sweetness, and the effects were quite small.

After leaving Yale for the sunny skies of the University of Florida in the early 2000s, I met Harry Klee, a botanist and world expert on the volatiles in tomatoes. Over the course of the twentieth century, tomatoes were bred to look and ship better with little regard for their palatability. This led to a decline in the flavors of tomatoes. Klee wanted to halt this process and restore highly palatable flavors to the tomato. Howard Moskowitz, a Harvard-trained sensory psychologist who had left academia for the food industry, was an expert at improving the flavors of food products using psychophysics and mathematics. His success with spaghetti sauce was chronicled by Malcolm Gladwell in a New Yorker article. I asked him if he would be willing to work with us on tomatoes.

Moskowitz was fascinated by the possibility of applying his techniques from marketing research to the natural world. We grew 150 different varieties of tomatoes that were mostly heirlooms, that is, tomatoes with a lot of genetic diversity. The tomatoes were analyzed for their chemical content—sugars, acids, volatiles—along with their sensory and hedonic properties—smell, taste, flavor, palatability. We used a method that provides valid comparisons for the perceived intensities of sensations across different people: essential when sensory intensities are to be associated with physical measures. The data were then put into a multiple regression model, allowing us to identify which tomatoes were liked the best and which constituents made them the most liked.
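In spirit, that last step is ordinary multiple regression: overall liking as the dependent variable, the measured constituents as predictors. A minimal sketch of the idea, with synthetic data standing in for real measurements (the column names and coefficients here are hypothetical; the study's actual model, scales, and constituent list were far richer):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: rows = tomato varieties, columns = measured constituents.
# Real studies measured sugars, acids, and dozens of volatiles per variety.
rng = np.random.default_rng(0)
X = rng.random((150, 3))  # e.g., [glucose, citric acid, some volatile]
liking = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 1.2 * X[:, 2] + rng.normal(0, 0.1, 150)

model = LinearRegression().fit(X, liking)
print("constituent weights:", model.coef_)  # which constituents drive liking
```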

On a whim, I used the data we had gathered to explore a different question: which constituents were contributing to sweetness? To my amazement, flavor—retronasal perception of the volatiles—was contributing substantially to sweetness. Checking individual volatiles identified those responsible. A cherry tomato called “Matina,” for example, contained less sugar than another called “Yellow Jelly Bean,” but the Matina was about twice as sweet as the Yellow Jelly Bean. The volatiles that enhanced sweetness were more abundant in Matina.

We then moved on to strawberries, oranges, and peaches. Each fruit produced a mostly new and different group of sweetness-enhancing volatiles, yielding almost 100 volatiles in total. One exception was blueberries, which contained very few volatiles that enhanced sweetness. When you taste sweetness in a blueberry, you are essentially tasting the sugar. When you taste sweetness in the other fruits we studied, some of the sweetness is coming from the sugar, but a lot of it originates in the volatiles that enhance the sweetness of the sugar. In the early years of studying volatile-enhanced sweetness, none of us had realized that some fruits contain many such volatiles. Each one may produce only a small effect, but the effects are cumulative.

Future Applications

SWEETNESS-ENHANCING volatiles are naturally found in fruits, but adding these volatiles to any food or beverage will add sweetness. Incidentally, sweetness-enhancing volatiles also work on artificial sweeteners. The concentrations of many sweet-enhancing volatiles in fruits are very low, making them a safe alternative to sugars and artificial sweeteners. Since the volatiles that enhance sweetness tend to be different in each fruit, the study of additional fruits will likely add to the list of those already identified.

Noam Sobel and his team have demonstrated that olfactory mixtures behave like mixtures of colored lights. Combinations involving odorants of equal perceived intensities suppress one another, resulting in a weak olfactory sensation they called “Laurax”—not to be confused with the famous Lorax described by Dr. Seuss. Laurax was also called “olfactory white” to emphasize its similarity to the white light that can result from color mixtures. In the terminology of Kubovy and Van Valkenburg, these are examples of eliminative emergence. This raises an interesting question: as we combine more and more volatiles that enhance sweetness, will their flavors cancel each other out while the sweetness increases? If so, volatile sweetening will have even more commercial applications.

The ability of volatiles to enhance taste is not limited to sweetness. A different group of volatiles enhances saltiness and is under study for its potential to reduce dependence on sodium. A few volatiles have also been identified that can enhance sourness and bitterness. This may tell us more about how this enhancement occurs in the brain, but these volatiles are unlikely to have as many applications as those that enhance sweetness and saltiness.

Volatile-enhanced tastes are also exciting for their clinical potential. Shortly before the COVID-19 pandemic began, I evaluated a patient who retained normal olfaction but had a reduced ability to taste sweetness. Adding sweetness-enhancing volatiles to sucrose allowed her to perceive normal sweetness. The sweetness-enhancing volatiles created a signal in her olfactory nerve that traveled to the area of the brain that processes sweetness, bypassing her damaged taste nerves. In theory, when we have identified enough taste-enhancing volatiles, we should be able to restore at least some taste perception to patients with taste nerve damage and intact olfaction.

Our love of sweet and salty tastes is at least partly hardwired into our brains. This source of pleasure is important in our lives. The interactions between the distinct sensations of taste and flavor have given us new tools to safeguard those pleasures while reducing our dependence on sugars, artificial sweeteners, and sodium.

Source: Inference


How Food Powers Your Body

James Somers wrote . . . . . . . . .

I’ve always been told that I have a fast metabolism. I stay thin no matter what I eat; it’s only in the past few years, as I’ve entered my mid-thirties, that I’ve experienced growing horizontally. I play squash a few times a week, run with a friend on Thursdays, and walk the dog. Otherwise I spend whole days at the computer, then sedentary on the couch, then asleep. And yet I stay lanky and get “hangry” easily; in the afternoons, after a hearty breakfast and two helpings at lunch, I go looking for another meal. I sometimes wake up hungry in the middle of the night. Where’s all the food going?

Our bodies require a lot of calories, and most of them are spent just keeping the machine running. You don’t particularly feel your liver, but sure enough it’s always there, liver-ing; likewise your kidneys, skin, gut, lungs, and bones. Our brains are major energy hogs, consuming around a fifth of our calorie intake despite accounting for just a fiftieth of our body weight on average. Possibly mine is less efficient than yours: I have an anxious cast of mind—I ruminate—and maybe this is like running in place. I sometimes feel sluggish while writing, after working a paragraph over in my head, and I used to assume that this meant I needed caffeine. Eventually, I discovered that a sandwich worked better. The effort of thinking had run my calories low, and it was time to throw another log on the fire.

Fire isn’t merely a metaphor for metabolism. In the eighteenth century, the French chemist Antoine-Laurent de Lavoisier conducted a series of ingenious experiments to prove that our life force was fire. First, he figured out what air was made of; he then, through precise measurements, showed that fire removed oxygen from the air and deposited it in the form of rust. Later, he made a device in which packed ice surrounded a compartment that could be filled with either a lighted flame or a small animal; by measuring how much ice melted, he could relate the energy burned by the flame to that “burned” by the creature. He even created a “respirometer,” an apparatus of tubes and gauges that measured a person’s precise oxygen consumption as they took on various tasks. He concluded that “respiration is nothing but a slow combustion of carbon and hydrogen, similar in all respects to that of a lamp or a lighted candle.” Both flames and living beings exchange energy and gases in what’s known as a combustion reaction. In fire, this reaction runs fast and out of control: energy is ripped from fuel with violent abandon, and nearly all of it is released immediately, as light and heat. But life is more methodical. Cells pluck energy from their fuel with exquisite control, directing every last drop toward their own minute purposes. Almost nothing is wasted.

Clearing up how exactly this is accomplished took another several hundred years. The breakthrough came in the nineteen-thirties, when a brilliant Hungarian chemist named Albert Szent-Györgyi made a study of pigeons’ breast muscle. The muscle, which was strong enough to keep the birds in flight, turned out to be metabolically hyperactive even after it had been pulverized. Szent-Györgyi put some ground-up tissue in a dish, then made careful measurements of the gas and heat emitted as he introduced various chemicals. He found that certain acids increased the muscle’s rate of metabolism more than five-fold. Strangely, these acids weren’t themselves consumed in the reactions: Szent-Györgyi could take as much out of the dish as he’d put in. The acids, he realized, participated in a kind of chemical roundabout, speeding up, or catalyzing, metabolism even as they were constantly being broken down and rebuilt.

A few years later, a German biochemist named Hans Krebs described this chemical cycle more completely, and today it’s known as the Krebs cycle. You may dimly remember the Krebs cycle from high-school biology class—or perhaps you forgot it right after the test. For a long time, the Krebs cycle was a symbol of what I disliked about school—a perfect emblem of boredom and bewilderment. Sitting at desks arranged in rows, we were told the monstrous names of its component parts—succinate, pyruvate, Acetyl-CoA, cytochrome c—while, on the blackboard, we counted NAD+s and FADH2s, and followed “redox” reactions as they “oxidized” or “reduced” elements. I memorized the diagrams in the textbook—arrows, small fonts, tiny plus and minus signs—without ever really understanding what the cycle was for. I was hardly alone in my incomprehension. In the thirty-eight-year run of the modern “Jeopardy!,” the Krebs cycle has been asked about only six times. It has stumped all three players onstage twice.

It’s a shame that organic chemistry has such dread associations, when really there’s so much beauty in it. As the biochemist Nick Lane writes, in his book “Transformer: The Deep Chemistry of Life and Death,” the Krebs cycle is particularly magical—it’s the foundation not just of metabolism but of all complex life on earth. And it’s not really that hard to grasp. Nowadays, even those of us who skipped A.P. Bio are conversant with genes; thanks to the pandemic, we may even know what we’re talking about when we use words like “protein” and “mRNA.” Lane argues that our DNA literacy is actually a form of genetic chauvinism. The secret of life isn’t entirely written in our genes; it also has to do with how we pull energy out of the world—with our ongoing, lifelong slow burn. Understanding the Krebs cycle is worth it because it helps you better understand what it means to be alive.

It’s through the Krebs cycle that we get energy from the food we eat. To grasp how the cycle works, it’s useful to remember what food is made of. Like everything else in the universe, the stuff we eat is made of atoms. An atom is like a little solar system, with a nucleus at its center. Electrons orbit the nucleus like planets circling a sun. (Although actually, according to quantum mechanics, you can’t know exactly where an electron is at any moment—and so really this orbit is less of a fixed path than a sort of cloud of possible positions.) There might be one electron or several within any given atom; they orbit at certain typical distances, known to chemists as orbital shells. Only a finite number of electrons can occupy an orbital shell at any one time: two in the first shell, eight in the second, eighteen in the third, thirty-two in the fourth, and so on—a pattern that defines how the rows of the periodic table are laid out. All of chemistry depends on the fact that electrons that aren’t part of fully filled shells are less stable, especially as they get farther from the nucleus. It’s as if an electron is not meant to wander too far from home.
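The capacities listed here follow a single rule: the n-th shell holds at most 2n² electrons, which is also why the rows of the periodic table have the widths they do. A quick check:

```python
# Shell capacities follow 2 * n**2: 2, 8, 18, 32, ...
for n in range(1, 5):
    print(f"shell {n}: up to {2 * n**2} electrons")
```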

From time to time, something bumps into an atom. If it’s a photon—a particle of light—then energy from the collision knocks an atom’s electrons into orbits that are more distant from the nucleus. These “high-energy” electrons are like marbles poised on the lip of a bowl—they want to release their potential energy by rolling back down toward the center or, if another atom is near, by spilling over into its bowl. Which way they fall depends on the precise balance of instabilities in each atom—in other words, which has the shell most desperate to be filled. When an atom poised to give up an energetic electron gets close to a neighbor eager to take it, that electron rolls from the lip of one bowl down into the other. In falling, it releases energy. However abstract this may seem, it is the very essence of life. Photons careening from the sun bang into electrons in chlorophyll in plants; a series of chemical reactions transfers those energized electrons from one atom to the next, until eventually they are stored up inside the sugars or starches in fruits, stalks, and seeds.

On a molecular level, a potato isn’t so different from petroleum: it contains molecules rich in high-energy electrons. Through our metabolism, we hope to capture the energy possessed by those electrons in a manageable way. Szent-Györgyi is often credited with saying that life is nothing but an electron looking for a place to rest; the marbles roll downhill, and life makes use of their force. The difficulty is that the electrons with the most energy available don’t just present themselves for the taking. Food is complicated and full of different molecules, many of which contain raw materials that we recycle into the physical structures of our cells. Finding the atoms that are especially dense with energy inside our food is like sifting through a heap of wrecked cars to find the still-charged batteries.

A surprising amount of this sifting happens before we even swallow our food, as the saliva in our mouths breaks down its starches. (Try spitting in a cup of Jell-O pudding and see what happens.) We start to feel sated well before we digest, because our mouths tell our brains that energy is coming and that it’s safe to release some short-term stores. In the meantime, acids in the stomach and enzymes in the small intestine start processing what has arrived. By the time they’re through, the energy-rich molecules in food have had their most restless electrons reshuffled and packed into glucose, a simple sugar. Glucose is like a chemical shipping container. It is an ideal electron transporter, in part because it is high-capacity, conveniently shaped, and easily opened up. It’s also unusually soluble, which means that it travels well through the bloodstream. And it consists only of carbon, oxygen, and hydrogen atoms. The latter two types of atoms are highly reactive—there’s a reason why tanks of hydrogen and oxygen are marked “flammable”—and many unstable electrons circle each atom of carbon, eager to move into other molecules. Our brains, whose parts have especially unpredictable energy requirements—as neurons fire, they create spikes in demand—depend almost exclusively on glucose for energy. Hummingbirds, which have the fastest metabolism of any animal and no time to spare to fuel their wingbeats, similarly feed on a mixture of pure glucose and sucrose.

When glucose reaches our cells, it is—unlike a shipping container—dismantled systematically. A series of reactions strips its highest-energy electrons and uses them to form a small “carrier molecule” known as an NADH. If glucose is like a shipping container, then NADHs are like delivery trucks. The process of loading the electrons into the trucks is called glycolysis. It’s ancient; in fact, it’s how yeast cells harvest energy. When glycolysis occurs in the absence of oxygen, it is known as fermentation. If your muscles are pushed to their limit and there’s not enough oxygen in your bloodstream, your cells ferment glucose as a stopgap measure for energy production.

If there is oxygen involved, the breaking down of glucose becomes much more refined. Oxygen is so hungry for electrons—its outer shell needs only two more to get a complete set—that in effect it pulls them all the way through the Krebs cycle, which is the real powerhouse of our metabolism. The cycle itself is complex, with sequences of chemical formulas that seem purpose-built to traumatize students. But, essentially, glucose is broken in two, and its halves are fed into a series of reactions that strip them for parts; the backbones are then reused for another turn of the cycle. The main thing is that, along the way, energy-rich electrons are peeled off and loaded up onto yet more NADHs—far more than in glycolysis alone. Almost no energy is lost to heat; instead, it is preserved and transformed. Any electron that had a high orbit in glucose is likewise poised at its full potential in NADH.
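The bookkeeping behind "far more than in glycolysis alone" comes from standard biochemistry textbooks rather than from the article itself, but a rough tally shows the scale. Assuming the commonly cited (and still debated) yields of about 2.5 ATP per NADH and 1.5 per FADH2:

```python
# Approximate textbook electron-carrier counts per molecule of glucose.
stages = {
    "glycolysis":         {"NADH": 2, "FADH2": 0, "ATP": 2},
    "pyruvate oxidation": {"NADH": 2, "FADH2": 0, "ATP": 0},
    "Krebs cycle":        {"NADH": 6, "FADH2": 2, "ATP": 2},
}
ATP_PER_NADH, ATP_PER_FADH2 = 2.5, 1.5  # commonly cited approximations

total = sum(
    s["ATP"] + s["NADH"] * ATP_PER_NADH + s["FADH2"] * ATP_PER_FADH2
    for s in stages.values()
)
print(f"~{total:.0f} ATP per glucose")  # ~32 by this tally, vs. 2 from glycolysis alone
```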

These NADH molecules will be further transformed. Inside a typical cell in your body are hundreds or thousands of mini-cells called mitochondria—structures believed to have descended from a free-floating bacterium that was ingested by one of our ancestors long ago and coöpted. A mitochondrion is divided into an internal and external chamber by a convoluted border with many folds, which create a huge surface area. Proteins protrude from this membrane like rabbits poking their heads through a hedge. These proteins capture an NADH, then pull its electrons through to the inner chamber, where they finally come to rest in molecules of oxygen. (When oxygen isn’t present, the electrons back up, and the work comes to a halt.) The movement of each electron is timed and arranged just so to cause a proton in the form of a hydronium ion, which is positively charged, to head in the opposite direction. At the moment that the protein pulls each electron inward, it also disgorges the proton, pushing it from the internal chamber to the external one. This extrusion happens everywhere across the membrane. The result is that many positively charged protons build up outside, separated by a wall from the negatively charged electrons held inside. An electrical field comes into being. Quite literally, each mitochondrion becomes a battery, waiting to discharge.

“This charge is awesome,” Lane writes in “Transformer.” The electrical field generated by the process, he explains, has a strength of around thirty million volts per metre—“equivalent to a bolt of lightning across every square nanometre of membrane.” At any moment, in each of your cells, the clouds are gathering, crackling with potential. And yet even this understates the absolute craziness of metabolism; it is wild what happens to those protons. Pulled by the electrical current, they desperately want to get back to the inside of the mitochondrion, where the electrons are. Their only way back, however, is to squeeze through tiny mushroom-shaped conduits that litter the membrane. In 1962, scientists discovered that these conduits are actually little turbines. Seen in minute detail through electron microscopes, they resemble waterwheels; the protons turn them as they pass.
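The arithmetic behind Lane's figure is worth seeing once. Assuming the commonly cited values of roughly 150 mV of membrane potential across an inner membrane about 5 nm thick (illustrative numbers, not drawn from the book):

$$
E \;=\; \frac{V}{d} \;\approx\; \frac{0.15\ \mathrm{V}}{5 \times 10^{-9}\ \mathrm{m}} \;=\; 3 \times 10^{7}\ \mathrm{V/m},
$$

that is, about thirty million volts per metre, simply because an ordinary-sounding voltage is dropped across a gap only a few nanometres wide.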

In hibernating bears and newborn humans, the turbines generate heat, which is stored in fat. More commonly, though, each turn of the wheel assembles a molecule of adenosine triphosphate, or ATP—the energy currency of our cells. By dint of its structure, ATP is extremely willing to give up its energy, but it is prevented from doing so by a few precisely controllable molecular speed bumps—like a loaded-up spring held fast with a lock. The generation of ATP amounts to the generation of order out of chaos. In our food, energy is stored in an arbitrary way. But each molecule of ATP is endowed with a standard amount of energy, created by the physical motion of a molecular gear. ATP is used in every kind of cell, where it’s converted into kinetic, chemical, or electrical energy. Our muscles contract when a protein called myosin climbs along a microfibre, crunching it more tightly—each step along the fibre costs one ATP. In our kidneys, ATP powers a chemical pump that recovers ions from our urine. In our brains, ATP endows neurons with their electrical charge. The thunderclouds in our mitochondria are bottled up, shipped, and uncorked.

Lane writes that the “proton motive force” of those little turbines is one of the few mechanisms present in all life forms. In you and me and everything that lives, high-energy electrons are stripped slowly of their verve. Metabolism achieves something miraculous: through painstaking atomic transformations, it extracts from practically any organic chemical a universal unit of energy, deployable in every corner of every cell, and it does this while wasting nothing. Life’s use of a standardized part like ATP is almost Taylorist; the efficiencies are unfathomable. A body ingests charged particles and sends them through tiny windmills; a brain crackling with a hundred trillion electric connections can be powered for a whole day by a sandwich.

It was bold of Lane to write an entire book about the Krebs cycle. Although “Transformer” is aimed at laypeople, it’s not a particularly easy read: there are diagrams of chemical reactions alongside talk of succinate, oxaloacetate, and the reduction of this and that. Reading it, I had to consult Wikipedia and Khan Academy. And yet Lane is passionate about the complex biochemistry he describes, in part because he thinks that understanding metabolism could help us understand a great deal more, from cancer to the origins of life.

Biologists have been somewhat gene-obsessed ever since the discovery of the double helix, in 1953. The central dogma of molecular biology—it is actually called that, the Central Dogma—puts information at the heart of life, and describes how it flows from DNA to RNA to proteins. In the nineties, the gene’s-eye view culminated in the multibillion-dollar Human Genome Project, which promised that genetic sequencing at great scale would answer many of biology and medicine’s most vexing questions. Cancer researchers, accordingly, have tended to take a gene-centric approach to studying the disease: one major effort in the style of the Human Genome Project, the Cancer Genome Atlas, has catalogued millions of potentially cancer-causing mutations across tens of thousands of genes. On the treatment side, the biggest breakthrough in recent memory, immune therapy, can involve genetically modifying immune-system cells so that they target tumors that express a unique DNA sequence. The approach has “really revolutionized therapy,” Raul Mostoslavsky, the scientific co-director of Massachusetts General Hospital’s cancer center, told me. But genes are only part of the story. “It’s very well established that unique features of metabolisms are key in cancer and aging,” Mostoslavsky said. In the past few decades, there has been “an explosion of research done in this area.” Perhaps because it is newer, and rooted in biochemistry rather than genetics, it has had less success working its way into the public imagination.

Much of the new work has centered on the Warburg effect, named for Otto Heinrich Warburg, a German biologist who won a Nobel prize for his research in cellular respiration. The Warburg effect describes the peculiar fact that cancer cells tend to behave as if they’re in a metabolic emergency. When normal cells are short on oxygen, the mitochondrial turbines slow; anaerobic glycolysis, or fermentation, takes over. What’s strange is that cancer cells do this even when oxygen is abundant. The Warburg effect is considered almost universal across cancers; one relatively common sign of a tumor’s presence is a buildup of lactate, caused by the cancer cells fermenting. It’s unclear whether this fermentation is a cause or consequence of the disease. Do cancer cells ferment because they are growing out of control—or is fermentation driving the growth?

Maybe it’s both, but Lane suspects we pay too little attention to the latter possibility. He argues that it might explain the outsized correlation between cancer and aging. From age twenty-four to fifty, your risk of cancer increases ninety-fold, and it continues to grow exponentially from there. A popular hypothesis holds that the root cause of this mounting risk is the accumulation of genetic mutations. But some scientists have argued that the rate of accumulation isn’t nearly fast enough to explain the extraordinary trajectory that cancer risk takes over a lifetime. Nor does the gene’s-eye view explain why some tumors stop growing when moved into a different environment. For Lane, these facts suggest that cancer is best thought of as a derangement of metabolism.

As you age, your mitochondria accumulate wear and tear. Often, the cause is inflammation—whether from disease, injury, or periods of stress. Inflammation itself becomes chronic with age, for reasons that are still not entirely understood. Meanwhile, a process known as mitophagy, in which old mitochondria are eaten by the body so that new ones can grow in their place, slows down. The result of all this is that our mitochondria get tired, and do a slightly worse job. “Overall,” Lane writes, “we have less energy, tend to gain in weight, find it harder to burst into explosive action and suffer from chronic low-grade inflammation.” (“Aging, eh!” he notes.) The conditions grow ripe for cancer: mitochondrial waste products start to pile up, as at a broken assembly line; perhaps, if it gets bad enough, a cell might believe that the backup is due to a lack of oxygen. Alarm signals will be sent to the nucleus to flip a series of epigenetic switches—“we’re suffocating!”—that put the cell into fermentation mode. In that mode, when glucose arrives, the priority becomes stripping it not for its high-energy electrons but for molecular building blocks. The cell reverts to one of its earliest programs, active during embryonic development, in which the prime directive is not to work but to grow. “What actually turns a cell cancerous?” Lane asks. A cancerous environment might “be induced by mutations, infections, low oxygen levels . . . or the decline in metabolism associated with aging itself.”

As a researcher, Lane’s primary interest is in the origin of life, and here, too, an emphasis on metabolism offers a dramatically revisionist account. When we think about how life started, we tend to tell ourselves a story about genes. We say that, in the beginning, shallow seaside pools were filled with a primordial chemical soup; among the chemicals was RNA, a single-stranded, less stable version of DNA. RNA had the ability to catalyze the construction of other molecules, and eventually a version came into existence that could catalyze its own copying. Some energy source must have powered these chemical reactions—perhaps lightning or ultraviolet light from the sun. Regardless, we say, once the copying began, mutations that led to faster or more robust replication won out. Metabolism emerged only later, when ancestors of our cells learned to digest other nearby organic chemicals.

This story was complicated somewhat by the discovery, in 1977, of life in some of the deepest, darkest parts of the ocean. Marine biologists found that huge tube worms were living in places with no light and no plants to eat. How were the worms surviving? It took decades, but scientists eventually uncovered the first link in this dark food chain. Crowds of primitive bacteria live alongside volcanic vents in the seafloor, and they are unusual for being “autotrophs.” The word describes the fact that these bacteria, like plants, build their biomass not by eating but directly from inorganic matter, such as molecules of carbon dioxide floating in water. For autotrophy to work, a steady energy source is required. Plants use sunlight. But these bacteria live in total darkness. How could they possibly be autotrophs?

It turns out that, at the interface between sea and mantle, salt water reacts with the earth in a process called serpentinization. Serpentinization produces energy-rich chemicals, and Lane speculates that they were the primordial energy source that powered the ancestors of the autotrophs. In our metabolisms, the Krebs cycle runs in one direction—food molecules go in, and energy comes out. But the cycle can actually spin both ways, like a turntable. The bacteria surrounding the deep-sea vents run the Krebs cycle in reverse, taking in energy from the vents and using it to assemble the matter of their bodies from simpler parts. They are like candles unburning. Only later, when membranes happened to enclose these reactions, would the need for RNA have arisen. As the first proto-cells floated away from the vents, they lost contact with their energy source; only those carrying the right kind of RNA would have had the tools necessary to survive. The RNA’s job would have been to help catalyze reactions that formerly depended on the vents. Over the next few billion years, the descendants of these primitive organisms would have begun spewing oxygen into the atmosphere as a waste product. Only then would the Krebs cycle as we know it have come into being: by reversing the metabolism of the autotrophs, an organism could take advantage of all that oxygen and turn its body into a kind of furnace. It was this reversal, Lane claims, that begat the Cambrian explosion, an enormous proliferation in the variety and complexity of life that took place some five hundred million years ago.

Any book about just one thing, especially if the author feels like it hasn’t gotten enough attention, runs the risk of becoming a theory of everything. The impression I got from “Transformer” was that the Krebs cycle was the key not just to life and its origins but to aging, cancer, and death. More likely it is just a part of all those things.

Still, there is something to be said for immersion. Recently, I spent a long weekend in a small rented house a few hours north of New York City. The whole time, I had metabolism on the brain. One morning, a friend and I drove to an outdoor restaurant for a late breakfast. The car was running low on fuel; so was I. While we waited for the server, I sat quietly, feeling a little sour and depressed. The sun was beating down on my back—electrons in the wrong form. It was only after the first few bites of my scrambled eggs that I felt the flood of glucose, and became myself again. I could picture what was happening inside my cells. The image would have appealed to an eighteenth-century philosopher: I was a clockwork man charging myself up through the spinning of a billion tiny waterwheels.

Later, back at the house, we played basketball in the driveway. How many ATPs does a jump shot cost? After making a run toward the basket for a layup, I thought about all it had taken to launch my body through the air: a voltage made of protons, a million simultaneous discharges across synaptic clefts. Every motion was an exquisitely controlled lightning strike.

After the game, in the late afternoon, we watched small birds outside the window, their heartbeats racing. I imagined the fastness of their world. If your metabolism speeds up enough, does time slow down? Is that why it’s so hard to catch a bug in your hands?

We decided to make s’mores that night. A friend and I built the fire. We gathered electrons from a woodpile nearby, set them loose with a little butane and a spark, then watched the sun go down. It was strange to imagine that energy from fusion ninety-two million miles away had now taken the form of a marshmallow. Happily, I popped one into my mouth. ♦

Source: The New Yorker


The Mysterious, Vexing, and Utterly Engrossing Search for the Origin of Eels

Christina Couch wrote . . . . . . . . .

Every three years, Reinhold Hanel boards a research ship and voyages to the only sea in the world that’s located in the middle of an ocean. The Sargasso, bounded by currents instead of land, is an egg-shaped expanse that takes up about two-thirds of the North Atlantic, looping around Bermuda and stretching east more than 1,000 kilometers. Dubbed the “golden floating rainforest” thanks to the thick tangles of ocher-colored seaweed that blanket the water’s surface, the Sargasso is a slowly swirling sanctuary for over 270 marine species. And each year, the eels arrive.

The European eel and the American eel—both considered endangered by the International Union for Conservation of Nature—make this extraordinary migration. The Sargasso is the only place on Earth where they breed. The slithery creatures, some as long as 1.5 meters, arrive from Europe, North America, including parts of the Caribbean, and North Africa, including the Mediterranean Sea. Hanel, a fish biologist and director of the Thünen Institute of Fisheries Ecology in Bremerhaven, Germany, makes his own month-long migration here alongside a rotating cast of researchers, some of whom hope to solve mysteries that have long flummoxed marine biologists, anatomists, philosophers, and conservationists: What happens when these eels spawn in the wild? And what can be done to help the species recover from the impacts of habitat loss, pollution, overfishing, and hydropower? Scientists say that the answers could improve conservation. But, thus far, eels have kept most of their secrets to themselves.

The idea that eels have sex at all is a fairly modern notion. Ancient Egyptians associated eels with the sun god Atum and believed they sprang to life when the sun warmed the Nile. In the fourth century BCE, Aristotle proclaimed that eels spontaneously generated within “the entrails of the earth” and that they didn’t have genitals.

The no-genital theory held for generations. Roman naturalist Pliny the Elder asserted that eels rubbed against rocks and that the scrapings of their dead skin “come to life.” Others credited eel provenance to everything from horses’ tails to dew drops on riverbanks. In medieval Europe, this presumed asexuality had real economic consequences and helped make the European eel a culturally important species, according to John Wyatt Greenlee, a medieval cartographic historian who wrote part of his dissertation on the subject. Frequent Christian holidays at the time required followers to adhere to church-sanctioned diets for much of the year. These prohibited adherents from eating “unclean” animals or meat that came from carnal acts, which could incite, as Thomas Aquinas put it, “an incentive to lust.” Fish were the exception, Greenlee says, and eels, given their abundance and “the fact that they just sort of appear and that nobody can find their reproductive organs at all,” appealed to anyone trying to avoid a sexy meal.

Eels could be practically anything to anyone: dinner or dessert; a cure for hangovers, drunkenness, or ear infections; material for wedding bands or magical jackets. They were even used as informal currency. Since yearly rent and taxes in medieval Europe were often due during Lent—the roughly 40-day period preceding Easter—and monasteries owned land people lived on, tenants sometimes paid with dried eels. Entire villages could pay 60,000 eels or more at once.

Eventually, spontaneous generation theories died. But eel genitals landed in the spotlight again after an Italian surgeon found ovaries in an eel from Comacchio, Italy, and the findings were published in the 18th century. The legitimacy of the so-called Comacchio eel remained in question for decades until an anatomist published a description of ovaries from a different Comacchio eel, launching a race to find testicles. Even the granddaddy of psychosexual development theory got involved: near the beginning of his career, in 1876, Sigmund Freud dissected at least 400 eels in search of gonads. It would be about another two decades before someone discovered a mature male eel near Sicily.

It’s no surprise that it took so long to find eel sex organs. There are more than 800 species, about 15 of which are freshwater varieties, and their bodies change so dramatically with age that scientists long thought the larvae were a different species than adult eels. Eels transform from eggs to transparent willow-leaflike larvae, to wormy see-through babies called glass eels, and onward until full size. Like most eel species, American and European eels don’t fully develop gonads until their last life stage, usually between 7 and 25 years in. Around that time, they leave inland fresh and brackish waters, where people can easily observe them, and migrate up to about 6,000 kilometers—roughly the distance from Canada’s easternmost tip to its westernmost—to the Sargasso.

By now, researchers have seen eels mate in lab settings, but they don’t know how this act plays out in the wild. The mechanisms that guide migration also remain somewhat enigmatic, as do the exact social, physical, and chemical conditions under which eels reproduce. Mature eels die after spawning, and larvae move to freshwater habitat, but when that happens and how each species finds its home continent are also unknown.

“We think that the European eel reproduces in the Sargasso Sea because this is the place where we have found the smaller larvae, but we have never found a European eel egg or the eels spawning,” says Estibaliz Díaz, a biologist at AZTI marine research center in Spain, who studies European eel population dynamics and management. “It’s still a theory that has not been proven.” The same applies to the American eel, and yet more questions remain about how many eels survive migration, what makes the Sargasso so singular, and how factors like climate change might affect it.

Both species have dropped in number, but researchers debate which threat is the biggest. Habitat loss is huge—humans have drained wetlands, polluted waters with urban and agricultural runoff, and built hydropower turbines that kill eels and dams that block the animals from migrating in or out of inland waters. Fishing further reduces eel numbers. Commercial fisheries for adult eels exist, but most eels consumed globally come from the aquaculture industry, which pulls young glass eels from the wild and raises them in farms. Together with the Japanese eel, which is also endangered, American and European eels are the three most commercially valuable eel species. While it’s legal to fish for all three, regulations on when, where, and how many eels can be sold vary between countries. The European Union requires member nations to close their marine fisheries for three consecutive months around the winter migration season each year—countries themselves determine exact dates—and prohibits trade outside of member countries, but these management efforts are undermined by black-market traders who illegally export more than 90 tonnes of European eels to Asia every year.

The International Union for Conservation of Nature (IUCN) lists European eels as critically endangered—populations have plummeted more than 90 percent compared with historical levels, and it’s “rather unclear,” as one report notes, whether the decline continues today. By counting glass eels in estuaries and inland waters, researchers found that eel numbers dropped precipitously between the 1980s and 2011, but plateaued afterward without clear cause. American eels are thought to be faring better—they’re considered endangered only by IUCN standards, not by other conservation and research groups—though their numbers have also decreased since the 1970s.

Captive breeding might one day reduce the aquaculture industry’s dependence on wild catches, but isn’t yet viable. Scientists must induce eel gonad development with synthetic hormones. It’s also hard to keep larvae alive. Many researchers believe that, in their natural habitat, larvae eat marine snow—a mélange of decaying organic matter suspended in the water that is impractical to reproduce at commercial scales. Illuminating what happens in the Sargasso could help guide better conservation measures. That’s why Reinhold Hanel heads to sea.

After three years of COVID-19-related delays, Hanel will send a research vessel on a 14-day trip from Germany to Bermuda in 2023. He’ll fly there and meet up with 11 other eel researchers, then he’ll spend about a month slowly traversing the southern Sargasso, recording ocean conditions, trawling for eel larvae with mesh plankton nets, and sampling for environmental DNA—genetic material shed from skin, mucus, and poop—to track eels by what they leave behind.

Hanel has led voyages like these since 2011. His main goal is to document the abundance of larvae and young eels and, secondarily, to identify possible locations for spawning. By sampling estuaries and inland waters, researchers can identify trends over time to figure out if glass eels in continental waters are increasing or not, but without comparing those trends with similar ones in the Sargasso, it’s impossible to judge whether either American or European eels are bouncing back. Meanwhile, protective regulations aren’t enough, Hanel contends. In 2007, the European Union mandated that member countries develop European eel recovery plans, but several prominent fishery and marine science organizations have criticized the particulars.

In tandem with other measures aimed at reducing eel mortality, provisions like closing fisheries make sense, Hanel says—last year, an international consortium of researchers, of which Hanel is a member, recommended closing fisheries until glass eel stocks recover. But other requirements aren’t rooted in research, including one to ensure 40 percent of adult eels survive to migrate from inland waters to the sea each year. “Scientists cannot say if 40 percent is sufficient to recover the stock,” Hanel says.

That’s why Hanel’s work is so important, says Martin Castonguay, a marine biologist and scientist emeritus at Fisheries and Oceans Canada, who has collaborated with Hanel. Financial obstacles often prevent eel scientists from conducting research outside of inland waters. Research vessels can cost anywhere from CAN $30,000 to $50,000 per day, or just under $1-million for a month-long trip, Castonguay says, requiring scientists to have hefty grants or government support to venture all the way to the Sargasso.
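
For scale, the arithmetic on that ship time is simple. The sketch below is my own back-of-the-envelope calculation in Python, using only the per-day figures quoted above and assuming a 30-day trip as a stand-in for “month-long”:

    # Back-of-the-envelope ship-time cost, using the day rates quoted above.
    # The 30-day trip length is an assumption standing in for "month-long."
    LOW_PER_DAY, HIGH_PER_DAY = 30_000, 50_000  # CAD per day of vessel time
    TRIP_DAYS = 30

    low, high = LOW_PER_DAY * TRIP_DAYS, HIGH_PER_DAY * TRIP_DAYS
    print(f"CAN ${low:,} to CAN ${high:,} per voyage")
    # -> CAN $900,000 to CAN $1,500,000, consistent with "just under
    #    $1-million" at the low end of the quoted day rate.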

Despite the barriers, scientists keep trying to find answers to how to help eels recover. They have planted hydroacoustic devices in hopes of tracking migrating eels by sound, pored over satellite photos, and injected eels with hormones to induce gonad development before releasing them into the Sargasso to try to study how deep beneath the surface they spawn. Back at home in the lab, they’ve developed algorithms to scan for and spot eels in sonar images of inland waters and built hyperbaric swimming tubes to observe how eels respond to changes in pressure and current strength. They’ve even tried to follow them with satellite transmitters.

In the mid-2010s, Castonguay and four other researchers sewed buoyant trackers to 38 American eels and released them off the coast of Nova Scotia. Every 15 minutes, the trackers recorded the depth at which the eels were swimming, the water temperature, and light levels. The sensors were designed to detach several months later and transmit the data along with the eels’ final location. Unfortunately, they detached before the eels reached any specific spawning locations, though one eel got as close as 100 to 150 kilometers from the spawning region. Still, “it was the first time that an [adult American] eel was documented in the Sargasso,” says Castonguay. Previously, only larvae had been found there. “We were extremely excited.”
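
That sampling scheme implies a sizable archive per animal. As a rough illustration only (the record layout, field names, example values, and deployment lengths below are my assumptions, not the study’s actual data format), here is what logging depth, temperature, and light every 15 minutes adds up to:

    # Hypothetical sketch of a pop-up tag's archive: one record of depth,
    # temperature, and light every 15 minutes, as described above.
    from dataclasses import dataclass

    @dataclass
    class TagRecord:
        minutes_since_release: int
        depth_m: float
        temperature_c: float
        light_level: float

    # An illustrative (made-up) single record:
    example = TagRecord(minutes_since_release=15, depth_m=420.0,
                        temperature_c=11.2, light_level=0.03)

    RECORDS_PER_DAY = 24 * 60 // 15  # one record every 15 minutes -> 96/day

    for months in (3, 6):  # assumed deployment lengths, for illustration
        print(f"{months} months ≈ {RECORDS_PER_DAY * 30 * months:,} records per eel")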

If more governments and research institutions were willing to spend the resources, Castonguay adds, these eels wouldn’t be so mysterious. Research on a similar species in Japan offers a case study for how that could work.

On the other side of the globe from the Sargasso, the Japanese eel makes a 3,000-kilometer annual migration from Japan and surrounding countries to the West Mariana Ridge in the western Pacific Ocean. With support from the Japanese government and other scientific institutions, researchers there have identified a spawning location, collected fertilized eggs, and tracked tagged eels swimming to their spawning area—all feats never attained in the Sargasso. They’ve found that Japanese eels spawn over a period of a few days before the new moon, at depths of 150 to 200 meters, and that spawning is triggered in part by temperature shifts that happen as eels move from deep to shallower water. Some eels, they learned, might spawn more than once during a spawning season.

Public outreach efforts have also been important, says University of Tokyo eel biologist Michael Miller. The researcher who led most of the eel work, Katsumi Tsukamoto—a University of Tokyo scientist emeritus known as Unagi Sensei, or Dr. Eel—has worked hard to raise the eels’ public profile. His findings have helped build the case that eels are “something other than just a meal,” Miller says. “It’s something [that’s] part of the Japanese culture and it’s worth conserving,” which has helped boost efforts to protect them.

Hanel is trying to do the same for the eels of the Sargasso and for other species. He speaks to the press and the public as often as he can. He believes, as many others do, that successfully conserving these creatures hinges on whether there’s a unified international effort to do so. But so long as data snapshots come only every few years, answers to questions about spawning and species well-being will stay hidden somewhere in the watery depths, just like the eels themselves.

Source: Hakai Magazine

Wearable Sensors Styled into T-shirts and Face Masks

Caroline Brogan wrote . . . . . . . . .

Imperial researchers have embedded new low-cost sensors that monitor breathing, heart rate, and ammonia into t-shirts and face masks.

Potential applications range from monitoring exercise, sleep, and stress to diagnosing and monitoring disease through breath and vital signs.

Spun from a new Imperial-developed cotton-based conductive thread called PECOTEX, the sensors cost little to manufacture. Just $0.15 buys a metre of thread, enough to seamlessly integrate more than ten sensors into clothing, and PECOTEX is compatible with industry-standard computerised embroidery machines.
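
Taken at face value, those numbers put the thread cost per sensor in the single-digit-cent range. The quick check below is my own arithmetic, reading “more than ten sensors” conservatively as ten:

    # Thread cost per sensor, from the figures quoted above.
    COST_PER_METRE = 0.15      # USD for one metre of PECOTEX
    SENSORS_PER_METRE = 10     # conservative reading of "more than ten"

    print(f"<= ${COST_PER_METRE / SENSORS_PER_METRE:.3f} of thread per sensor")
    # -> <= $0.015, i.e. about a cent and a half of thread per sensor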

First author of the research Fahad Alshabouna, PhD candidate at Imperial’s Department of Bioengineering, said: “The flexible medium of clothing means our sensors have a wide range of applications. They’re also relatively easy to produce which means we could scale up manufacturing and usher in a new generation of wearables in clothing.”

The researchers embroidered the sensors into a face mask to monitor breathing, a t-shirt to monitor heart activity, and textiles to monitor gases like ammonia, a component of the breath that can be used to detect liver and kidney function. The ammonia sensors were developed to test whether gas sensors could also be manufactured using embroidery.
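
In signal terms, a breathing sensor like the face-mask one produces a slowly oscillating trace (for instance, thread resistance changing as the mask flexes), from which a rate can be counted. The sketch below is a minimal illustration of that idea on synthetic data, with an assumed sampling rate and peak-detection thresholds; it is not the researchers’ actual processing pipeline:

    # Minimal sketch: estimate breaths per minute from an oscillating sensor
    # trace by counting peaks. All parameters are illustrative assumptions.
    import numpy as np
    from scipy.signal import find_peaks

    def breaths_per_minute(trace: np.ndarray, fs: float) -> float:
        x = trace - trace.mean()  # remove the DC offset
        # Assume successive breaths are >= 1.5 s apart (<= 40 breaths/min)
        peaks, _ = find_peaks(x, distance=int(1.5 * fs), prominence=x.std())
        return len(peaks) / (len(trace) / fs / 60.0)

    fs = 25.0                                        # assumed sampling rate, Hz
    t = np.arange(0, 60, 1 / fs)                     # 60 s of data
    trace = 1000 + 5 * np.sin(2 * np.pi * 0.25 * t)  # 0.25 Hz "breathing"
    trace += np.random.normal(0, 0.5, t.size)        # sensor noise
    print(f"~{breaths_per_minute(trace, fs):.1f} breaths per minute")  # ~15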

Fahad added: “We demonstrated applications in monitoring cardiac activity and breathing, and sensing gases. Future potential applications include diagnosing and monitoring disease and treatment, monitoring the body during exercise, sleep, and stress, and use in batteries, heaters, and anti-static clothing.”

The research is published in Materials Today.

Seamless sensors

Wearable sensors, like those on smartwatches, let us continuously monitor our health and wellbeing non-invasively. Until now, however, there has been a lack of suitable conductive threads, which explains why wearable sensors seamlessly integrated into clothing aren’t yet widely available.

Enter PECOTEX. Developed and spun into sensors by Imperial researchers, the material is machine washable, and is less breakable and more electrically conductive than commercially available silver-based conductive threads, meaning more layers can be added to create complex types of sensor.

The researchers tested the sensors against commercially available silver-based conductive threads during and after they were embroidered into clothing.

During embroidery, PECOTEX was more reliable and less likely to break, allowing for more layers to be embroidered on top of each other.

After embroidery, PECOTEX demonstrated lower electrical resistance than the silver-based threads, meaning it conducted electricity better.

Lead author Dr Firat Güder, also of the Department of Bioengineering, said: “PECOTEX is high-performing, strong, and adaptable to different needs. It’s readily scalable, meaning we can produce large volumes inexpensively using both domestic and industrial computerised embroidery machines.

“Our research opens up exciting possibilities for wearable sensors in everyday clothing. By monitoring breathing, heart rate, and gases, they can already be seamlessly integrated, and might even be able to help diagnose and monitor treatments of disease in the future.”

The embroidered sensors retained the intrinsic properties of the fabric such as wearability, breathability and feel-on-the-skin. They are also machine washable at up to 30°C.

Next, the researchers will explore new application areas like energy storage, energy harvesting and biochemical sensing for personalised medicine, as well as finding partners for commercialisation.

Source: Imperial College