Futurity
Research news from top universities.

What exactly makes for an endangered species?

“What makes for an endangered species classification isn’t always obvious,” says John A. Vucetich, professor at Michigan Technological University. Read on as he explains:

Lions and leopards are endangered species. Robins and raccoons clearly are not. The distinction seems simple until one ponders a question such as: How many lions would there have to be and how many of their former haunts would they have to inhabit before we’d agree they are no longer endangered?

To put a fine point on it: What is an endangered species? The quick answer: An endangered species is one at risk of extinction. Fine, except that questions about risk always come in shades and degrees: more risk and less risk.

Extinction risk increases as a species is driven to extinction from portions of its natural range. Most mammal species have been driven to extinction from half or more of their historic range because of human activities.

The query “What is an endangered species?” is quickly transformed into a far tougher question: How much loss should a species endure before we agree that the species deserves special protections and concerted effort for its betterment? My colleagues and I put a very similar question to nearly 1,000 (representatively sampled) Americans after giving them the information in the previous paragraph. The results, “What is an endangered species?: judgments about acceptable risk,” are published in Environmental Research Letters.

Three-quarters of those surveyed said a species deserves special protections if it has been driven to extinction from any more than 30% of its historic range. Not everyone was in perfect agreement; some were more accepting of losses. The survey results indicate that people more accepting of loss were less knowledgeable about the environment and self-identified as advocates for the rights of gun owners and landowners. Still, three-quarters of even this more loss-tolerant group thought special protections were warranted if a species had been lost from more than 41% of its former range.

These attitudes of the American public are aligned with the language of the US Endangered Species Act—the law for preventing species endangerment in the US. That law defines an endangered species as one that is “in danger of extinction throughout all or a significant portion of its range.”

But there might be a problem

Government decision-makers have tended to agree with the scientists they consult in judging what counts as acceptable risk and loss. These scientists express the trigger point for endangerment in very different terms. They tend to say a species is endangered if its risk of total and complete extinction exceeds 5% over 100 years.

Before human activities began elevating extinction risk, a typical vertebrate species would have experienced an extinction risk of 1% over a 10,000-year period. The extinction risk that decision-makers and their consultant experts have tended to consider acceptable (5% over 100 years) corresponds to an extinction risk many times greater than the extinction risk we currently impose on biodiversity! Experts and decision-makers—using a law designed to mitigate the biodiversity crisis—tend to allow for stunningly high levels of risk, while the law and the general public seem to accept only a far lower risk, one that would greatly mitigate the biodiversity crisis.
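To get a feel for the gap between the two risk standards, they can be put on a common timescale. This back-of-envelope sketch is not from the study itself; it assumes a constant hazard rate, which is an illustrative simplification:

```python
# Compare two extinction-risk standards on a common 100-year timescale,
# assuming a constant hazard rate (an illustrative simplification).

def risk_over(years, baseline_risk, baseline_years):
    """Re-express a risk quoted over baseline_years as an equivalent
    risk over `years`, assuming a constant hazard rate."""
    survival = (1 - baseline_risk) ** (years / baseline_years)
    return 1 - survival

# Background standard: 1% risk over 10,000 years, re-expressed per century.
background_per_century = risk_over(100, 0.01, 10_000)

# Threshold experts have tended to accept: 5% over 100 years.
accepted_per_century = 0.05

print(f"background risk per century: {background_per_century:.5%}")
# The accepted threshold works out to roughly 500 times the background rate.
print(f"accepted / background: {accepted_per_century / background_per_century:.0f}x")
```

Under this simplification, a 5%-per-century threshold is on the order of 500 times the 1%-per-10,000-years background, which conveys the scale of "many times greater" in the paragraph above.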

What’s going on?

One possibility is that experts and decision-makers are more accepting of the risks and losses because they believe greater protection would be impossibly expensive. If so, the American public may be getting it right, not the experts and decision-makers. Why? Because the law allows for two separate judgments. The first: Is the species endangered and therefore deserving of protection? The second: Can the American people afford that protection? Keeping those judgments separate is vital, because experts and decision-makers do not help the case that solving the biodiversity crisis requires more funding and effort when they grossly understate the problem—as they do when they judge endangerment to entail such extraordinarily high levels of risk and loss.

Facts and values

Another possible explanation for the judgments of experts and decision-makers comes from an earlier paper led by Jeremy Bruskotter of Ohio State University, who is also a collaborator on this paper. His team showed that experts tended to judge grizzly bear endangerment based not so much on their own independent expert judgment as on what they think (rightly or wrongly) their peers' judgment would be.

Regardless of the explanation, a good answer to the question “What is an endangered species?” is an inescapable synthesis of facts and values. Experts on endangered species have a better handle on the facts than the general public. However, there is cause for concern when decision-makers do not reflect the broadly held values of their constituents. An important possible explanation for this discrepancy in values is the influence of special interests on decision-makers and experts charged with caring for biodiversity.

Getting the answer right is of grave importance. If we do not know well enough what an endangered species is, then we cannot know well enough what it means to conserve nature, because conserving nature is largely—either directly or indirectly—about giving special care to endangered species until they no longer deserve that label.

About the study

The research was supported in part by the Humane Society of the United States (HSUS) and Center for Biological Diversity (CBD). Survey design, data analysis, and manuscript preparation took place without consulting HSUS or CBD.

Research collaborators include Jeremy T. Bruskotter of Ohio State University, Adam Feltz of University of Oklahoma, and Tom Offer-Westort, also of University of Oklahoma.

Source: Michigan Technological University

The post What exactly makes for an endangered species? appeared first on Futurity.

More than a week of keto might not be good for you

A ketogenic diet—which provides 99% of calories from fat and protein and only 1% from carbohydrates—produces health benefits in the short term, but negative effects after about a week, research in mice shows.

The results offer early indications that the keto diet could, over limited time periods, improve human health by lowering diabetes risk and inflammation. They also represent an important first step toward possible clinical trials in humans.

The keto diet has become increasingly popular as celebrities, including Gwyneth Paltrow, LeBron James, and Kim Kardashian, have touted it as a weight-loss regimen.

In the study, researchers found that the positive and negative effects of the diet both relate to immune cells called gamma delta T-cells, tissue-protective cells that lower diabetes risk and inflammation.

A keto diet tricks the body into burning fat, says lead author Vishwa Deep Dixit, professor of comparative medicine and of immunobiology at the Yale University School of Medicine. When the body’s glucose level goes down due to the diet’s low carbohydrate content, the body acts as if it is in a starvation state—although it is not—and begins burning fats instead of carbohydrates. This process in turn yields chemicals called ketone bodies as an alternative source of fuel. When the body burns ketone bodies, tissue-protective gamma delta T-cells expand throughout the body.

This reduces diabetes risk and inflammation, and improves the body’s metabolism, says Dixit. After a week on the keto diet, he says, mice show a reduction in blood sugar levels and inflammation.

But when the body is in this “starving-not-starving” mode, fat storage is also happening simultaneously with fat breakdown, the researchers found. When mice continue to eat the high-fat, low-carb diet beyond one week, Dixit says, they consume more fat than they can burn, and develop diabetes and obesity.

“They lose the protective gamma delta T-cells in the fat,” he says.

Long-term clinical studies in humans are still necessary to validate the anecdotal claims of keto’s health benefits.

“Before such a diet can be prescribed, a large clinical trial in controlled conditions is necessary to understand the mechanism behind metabolic and immunological benefits or any potential harm to individuals who are overweight and prediabetic,” Dixit says.

There are good reasons to pursue further study: According to the Centers for Disease Control, approximately 84 million American adults—or more than one out of three—have prediabetes (increased blood sugar levels), putting them at higher risk of developing type 2 diabetes, heart disease, and stroke. More than 90% of people with this condition don’t know they have it.

“Obesity and type 2 diabetes are lifestyle diseases,” Dixit says. “Diet allows people a way to be in control.”

With the latest findings, researchers now better understand the mechanisms at work in bodies sustained on the keto diet, and why the diet may bring health benefits over limited time periods.

“Our findings highlight the interplay between metabolism and the immune system, and how it coordinates maintenance of healthy tissue function,” says Emily Goldberg, the postdoctoral fellow in comparative medicine who discovered that the keto diet expands gamma-delta T cells in mice.

While the ideal length of the diet for health benefits in humans is a subject for later studies, Dixit says, discovering that keto is better in small doses is good news: "Who wants to be on a diet forever?"

The research was funded in part by grants from the National Institutes of Health. The research appears in Nature Metabolism.

Source: Brita Belli for Yale University

Is prosecutor bias really behind disparities in prison?

America’s prison populations are disproportionately filled with people of color, but prosecutors’ biases toward defendants’ race and class may not be the primary cause for those disparities, new research suggests.

The finding, which comes from a unique study involving hundreds of prosecutors across the US, counters decades of previous research. Those studies relied on pre-existing data, such as charges and punishments that played out in courtrooms.

In a 1993 study, for example, researchers found that prosecutors in Los Angeles were 1.59 times more likely to fully prosecute an African American defendant for crack-related charges than a white defendant. That likelihood was 2.54 times greater for Hispanic defendants compared to white defendants.

The new study, led by Christopher Robertson, a professor of law and associate dean for research and innovation at the James E. Rogers College of Law at the University of Arizona, involved a controlled experiment with prosecutors, asking them to examine the same hypothetical case but changing the race and class of the defendant.

How the study worked

The study, administered online, provided prosecutors with police reports describing a hypothetical crime, which the researchers designed with assistance from experienced prosecutors. All details of the case were the same except for the suspect’s race—either black or white—and occupation—fast-food worker or accountant—to indicate the suspect’s socioeconomic status. Roughly half of the prosecutors received one version of the case; the other half received the other.

The study allowed researchers to “really isolate the prosecutor’s decision-making in a way that mere observational research wouldn’t allow,” says Robertson.

The outcomes the study looked for included whether prosecutors charged a felony, whether they chose to fine the defendant or seek a prison sentence, and the proposed cost of the fine or length of the sentence.

“When we put all those together, we see the same severity of charges, fines, and sentences across all the conditions, whether the defendant was black, whether the defendant was white, whether the defendant had a high-class career or a low-class career,” Robertson says.

“Differences in the actual outcomes—in the actual behavior of the prosecutors—is what we would have expected if they were biased. But since we see no difference in the outcomes, we concluded that they were not substantially biased.”

Surprising results

Given previous research that indicated rampant bias drives criminal justice disparities, Robertson’s results may surprise many—just as they did the researchers.

“We were surprised at the bottom line,” he says.

Robertson offers one possible explanation for the unexpected result. “We conducted this study in 2017 and 2018 and prosecutors have been under a spotlight for some time,” he says. “They’ve been training and are aware of and are working hard to not be biased in their own decision-making.”

The results do not rule out race and class bias as factors in prosecutorial decision-making but suggest that policymakers committed to addressing systemic racism and classism in the legal system may be more successful seeking reforms in other areas.

“The disparities in outcomes are indisputable,” Robertson says. “As we go through the criminal justice system and think about what the right reforms are, the sheer bias of the prosecutor doesn’t seem to be the biggest one.”

Policy change

Robertson says policymakers may be better off focusing on disparities that occur before someone is even arrested, in areas such as economic development and education.

“Crime is associated with poverty, and race in America is associated with poverty, so I think some very front-end questions of social policy are really important,” he says. “At the same time, I think, on the back end, to shift the focus, there’s a growing consensus among people on the left and the right that our 40-year-long war on crime has been ineffectual in some ways and that we could make the criminal justice system much less severe and much less expensive and thereby reduce some of these same disparities.”

Robertson also stresses that his study’s results aren’t the final word on prosecutor bias—a problem that still needs addressing, he says. Even after these findings, he remains a proponent of blinding prosecutors to defendants’ race, a detail that is often not relevant to prosecutors after an arrest is made. Prosecutor blinding is the focus of Robertson’s next research project.

The paper appears in the Journal of Empirical Legal Studies. Additional coauthors are from the University of Utah and Penn State.

Source: Kyle Mittan for University of Arizona

Flight safety is at risk when communication breaks down

In Atlanta, where the world’s busiest airport fields thousands of flights each day, one researcher is working to improve communication and safety in global travel.

Eric Friginal, a professor of applied linguistics at Georgia State University, recently cowrote English in Global Aviation (Bloomsbury Press, 2019) with his former student Jennifer Roberts, now an aviation English specialist at Embry-Riddle Aeronautical University in Daytona Beach, Florida, and aviation expert Elizabeth Mathews.

Here, Friginal explains how language and communication affect flight safety worldwide:

1 step keeps lead out of water when swapping disinfectants

Adding orthophosphate to the water supply before switching drinking water disinfectants from free chlorine to chloramine can prevent lead contamination in certain situations, researchers report.

About 80% of water systems across the country use a disinfectant in drinking water that can produce undesirable byproducts, including chloroform. There is an alternative, but many cities have been afraid to use it.

That’s because in 2000, when the water authority in Washington, DC switched from free chlorine to chloramine, the nation watched as levels of lead in drinking water immediately shot up. They stayed up for four years while scientists determined the problem and implemented a solution.

In other cities that used free chlorine, Washington’s experience had a chilling effect; many have put off switching disinfectants, fearing their own lead crisis.

They may soon be able to safely make the switch, thanks to the new research.

Lead in water pipes

Because of its malleability and longevity, lead was the preferred material for service lines, the pipes that deliver water from a water main to homes, for the first half of the 20th century. As the pipes corrode in the presence of free chlorine, a certain type of lead, PbO2, can build up on their interior surfaces.

That buildup typically isn’t a problem. In fact, so long as free chlorine is in use as a disinfectant, the PbO2 is actually a positive, according to Daniel Giammar, professor of environmental engineering at the McKelvey School of Engineering at Washington University in St. Louis. This form of lead has a low solubility so it stays in a solid form on the pipes, instead of in the water.

PbO2 is not always so benign, however. “There is a potential risk because the solubility is only low if you keep using this type of chlorine,” Giammar says.

Switching to a different disinfectant such as chloramine—the mixture of chlorine and ammonia that Washington switched to in late 2000—causes the lead to become water soluble. The PbO2 then dissolves quickly and releases lead into the water system.

In Washington, researchers determined that adding a particular phosphate, called orthophosphate, to the system would create lead phosphate. This new material also has low solubility, so again the lead began to line the walls of the pipes instead of dissolving into drinking water.

“But forming the new, low-solubility coating takes time,” Giammar says. In the case of Washington, “the lead concentrations took months to come down.”

The solution had been identified and implemented, but residents continued to deal with lead in their water for months. “Our overarching question was, ‘Would they have had a problem if they had implemented the solution before they made the chlorine switch? What if they added orthophosphate before, as a preventative measure, and then they switched the disinfectant? Would they have had a problem?'”

Recreating Washington water

To find out, the researchers had to recreate 2000 in their lab. “We had to recreate the crisis, then watch the crisis happen and watch our proposed solution in parallel,” Giammar says. They sourced lead pipes, then recreated Washington water.

First author Yeunook Bae, a PhD student in Giammar’s lab, looped the water through a six-pipe system with free chlorine for 66 weeks to get the lead scales to form. Once they approximated those found in Washington, they divided the pipes into a study group and a control group.

Researchers then added orthophosphate to the water in three of the pipe systems, the study group, for 14 weeks.

Then, as the Washington water authority had done, researchers switched from free chlorine to chloramine in all six systems, looping the water through the pipes for more than 30 weeks.

The lead on the pipes that did not receive orthophosphate became soluble, as it had in Washington, leading to high lead levels in the water. In the pipes to which orthophosphate was added, “levels went from really low to still quite low,” Giammar says.

The researchers designed the experimental setup to let them remove small sections of pipe without disturbing the system. That allowed them to see just how quickly the switch to chloramine affected the system.

The regulatory action level the EPA has set for lead in drinking water is 15 micrograms of lead per liter of water.

Within five days of the switch, lead levels in the control pipes—those without orthophosphate—rose from five to more than 100 micrograms/liter. During the subsequent 30 weeks, levels never fell below 80 micrograms/liter.

In water treated with orthophosphate, levels remained below 10 micrograms/liter for the duration of the experiment.
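The contrast between the two groups can be sketched against the EPA action level. The readings below are illustrative values consistent with the ranges reported above, not the study's actual data:

```python
# Compare illustrative lead readings (micrograms per liter) against the
# EPA action level for lead in drinking water.
EPA_ACTION_LEVEL = 15  # ug/L

# Hypothetical readings consistent with the reported pattern:
control_pipes = [5, 100, 120, 95, 80]  # no orthophosphate: spike after the switch
treated_pipes = [5, 8, 9, 7, 6]        # orthophosphate added before the switch

def exceeds_action_level(readings, limit=EPA_ACTION_LEVEL):
    """Return the readings that exceed the action level."""
    return [r for r in readings if r > limit]

print(exceeds_action_level(control_pipes))  # -> [100, 120, 95, 80]
print(exceeds_action_level(treated_pipes))  # -> []
```

Every post-switch reading in the untreated pipes exceeds the action level, while the treated pipes stay below it throughout, mirroring the pattern the researchers observed.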

The researchers also learned something else: Because of the high levels of calcium in Washington’s water, adding orthophosphate did not result in a pure lead phosphate, but a calcium lead phosphate.

This surprise points to the uniqueness of each situation. Those who oversee water systems and are concerned about switching disinfectants can benefit not only from this study, according to Giammar, but also from their own studies, tailored to their specific water and environmental conditions.

Nevertheless, this finding can help guide decisions in the roughly 80% of American water systems that are still using free chlorine, including Chicago and New York City.

“Our next big step,” Giammar says, “is making sure places that are thinking about switching disinfectant know that the option is there to do it safely.”

The results of the study appear in Environmental Science & Technology.

Source: Washington University in St. Louis

When did animals first begin to bark, chirp, and moo?

The ability to vocalize goes back hundreds of millions of years and is associated with a nocturnal lifestyle, researchers tracing the evolution of acoustic communication report.

Imagine taking a hike through a forest or a stroll through a zoo and not a sound fills the air, other than the occasional chirp from a cricket. No birds singing, no tigers roaring, no monkeys chattering—and no human voices, either. Acoustic communication among vertebrate animals is such a familiar experience that it seems impossible to imagine a world shrouded in silence.

But when and why did the ability to shout, bark, bellow, or moo evolve in the first place?

Examining the evolutionary tree

The authors assembled an evolutionary tree for 1,800 species showing the evolutionary relationships of mammals, birds, lizards and snakes, turtles, crocodilians, and amphibians going back 350 million years. They obtained data from the scientific literature on the absence or presence of acoustic communication within each sampled species and mapped it onto the tree.

Applying statistical analytical tools, they tested several ideas: whether acoustic communication arose independently in different groups and when; whether it is associated with nocturnal activity; and whether it tends to be preserved in a lineage.
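The character data behind such an analysis can be pictured as a simple species-by-trait matrix. The sketch below is a naive, non-phylogenetic illustration (species names and scores are invented); the actual study uses phylogenetic comparative methods that account for shared ancestry:

```python
# Naive sketch of a character matrix: each species scored for two binary
# traits. Real analyses map these onto a phylogeny; here we just tally a
# 2x2 table of nocturnality vs. acoustic communication. All species names
# and trait scores are illustrative, not from the study.

species_traits = {
    "species_a": {"nocturnal": True,  "acoustic": True},
    "species_b": {"nocturnal": True,  "acoustic": True},
    "species_c": {"nocturnal": False, "acoustic": False},
    "species_d": {"nocturnal": False, "acoustic": True},
}

# Tally counts for each (nocturnal, acoustic) combination.
table = {(n, a): 0 for n in (True, False) for a in (True, False)}
for traits in species_traits.values():
    table[(traits["nocturnal"], traits["acoustic"])] += 1

print(table)
```

A raw tally like this ignores relatedness: two sister species are not independent data points, which is exactly why the authors needed phylogenetically informed statistics rather than a plain contingency test.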

The study reveals that the common ancestor of land-living vertebrates, or tetrapods, did not have the ability to communicate through vocalization—in other words, using their respiratory system to generate sound as opposed to making noise in other ways, such as clapping hands or banging objects together. Instead, acoustic communication evolved separately in mammals, birds, frogs, and crocodilians in the last 100-200 million years, depending on the group.

Nighttime acoustic communication

The study also shows that the origins of communication by sound are strongly associated with a nocturnal lifestyle. This makes intuitive sense, because once light is no longer available to show off visual cues such as color patterns to intimidate a competitor or attract a mate, transmitting signals by sound becomes an advantage.

Extrapolating from the species in the sample, the authors estimate that acoustic communication is present in more than two-thirds of terrestrial vertebrates. While some animal groups—birds, frogs, and mammals, for example—readily come to mind for their vocal talents, crocodilians as well as a few turtles and tortoises also have the ability to vocalize.

Interestingly, the researchers found that even in lineages that switched over to a diurnal (active-by-day) lifestyle, the ability to communicate via sound tended to be retained.

“There appears to be an advantage to evolving acoustic communication when you’re active at night, but no disadvantage when you switch to being active during the day,” says John J. Wiens, a professor of ecology and evolutionary biology at the University of Arizona. “We have examples of acoustic communication being retained in groups of frogs and mammals that have become diurnal, even though both frogs and mammals started out being active by night hundreds of millions of years ago.”

According to Wiens, birds kept on using acoustic communication even after becoming diurnal for the most part. Interestingly, many birds sing at dawn, as every birdwatcher can attest. Although speculative, it is possible that this “dawn chorus” behavior might be a remnant of the nocturnal ancestry of birds.

In addition, the research showed that acoustic communication appears to be a remarkably stable evolutionary trait. In fact, the authors raise the possibility that once a lineage has acquired the ability to communicate by sound, the tendency to retain that ability might be more stable than other types of signaling, such as conspicuous coloration or enlarged, showy structures.

Biodiversity and vocalization

In another unexpected result, the study revealed that the ability to vocalize does not appear to drive diversification (the rate at which a lineage evolves into new species), as it has long been believed to do.

To illustrate this finding, Wiens pointed to birds and crocodilians: Both lineages have acoustic communication and go back roughly 100 million years, but while there are close to 10,000 known bird species, the list of crocodilians doesn’t go past 25. And while there are about 10,000 known species of lizards and snakes, most go about their lives without uttering a sound, as opposed to about 6,000 mammalian species, 95% of which vocalize.

“If you look at a smaller scale, such as a few million years, and within certain groups like frogs and birds, the idea that acoustic communication drives speciation works out,” Wiens says. “But here we look at 350 million years of evolution, and acoustic communication doesn’t appear to explain the patterns of species diversity that we see.”

The authors point out that their findings likely apply not only to acoustic communication, but also to other evolutionary traits driven by the ecological conditions known to shape the evolution of species. While it had been previously suggested that ecology was important for signal evolution, it was thought to apply mostly to subtle differences among closely related species.

“Here, we show that this idea of ecology shaping signal evolution applies over hundreds of millions of years and to fundamental types of signals, such as being able to communicate acoustically or not,” Wiens says.

The paper appears in Nature Communications. Zhuo Chen, a visiting scientist from Henan Normal University in Xinxiang, China, is a coauthor of the study.

Source: University of Arizona

Can tiny doses of lithium treat Alzheimer’s disease?

Very low doses of a certain formulation of lithium can halt signs of advanced Alzheimer's disease and restore lost cognitive abilities, a new study with rats suggests.

The lithium in the study was in a formulation that facilitates passage to the brain. The findings appear in the Journal of Alzheimer’s Disease.

The value of lithium therapy for treating Alzheimer's disease remains controversial in scientific circles, largely because the information available until now has come from many different approaches, conditions, formulations, timings, and dosages of treatment. That makes the results difficult to compare.

In addition, continued treatment with high doses of lithium carries a number of serious adverse effects, which makes that approach impracticable for long-term treatments, especially in the elderly.

How much lithium?

Study leader Claudio Cuello of McGill University and graduate student Edward Wilson first investigated the conventional lithium formulation and initially gave rats a dosage similar to that used in clinical practice for mood disorders. The results of the initial tentative studies with conventional lithium formulations and dosage were disappointing, however, as the rats rapidly displayed a number of adverse effects.

That setback halted the approach until an encapsulated formulation of lithium showed some beneficial effects in a different study, involving a mouse model of Huntington's disease.

The researchers then applied the new lithium formulation to a transgenic rat model expressing mutated human proteins that cause Alzheimer's, an animal model they had created and characterized. This rat develops features of human Alzheimer's disease, including a progressive accumulation of amyloid plaques in the brain and concurrent cognitive deficits.

“Microdoses of lithium at concentrations hundreds of times lower than applied in the clinic for mood disorders were administered at early amyloid pathology stages in the Alzheimer’s-like transgenic rat. These results were remarkably positive and were published in 2017 in Translational Psychiatry and they stimulated us to continue working with this approach on a more advanced pathology,” says Cuello, part of the department of pharmacology and therapeutics.

Small doses, fewer adverse effects

Encouraged by those earlier results, the researchers set out to apply the same lithium formulation at later stages of the disease to their transgenic rat modeling neuropathological aspects of Alzheimer’s disease. This study found that beneficial outcomes in diminishing pathology and improving cognition are achievable at more advanced stages, akin to late preclinical stages of the disease, when amyloid plaques are already present in the brain and when cognition starts to decline.

“From a practical point of view our findings show that microdoses of lithium in formulations such as the one we used, which facilitates passage to the brain through the brain-blood barrier while minimizing levels of lithium in the blood, sparing individuals from adverse effects, should find immediate therapeutic applications,” says Cuello.

“While it is unlikely that any medication will revert the irreversible brain damage at the clinical stages of Alzheimer’s, it is very likely that a treatment with microdoses of encapsulated lithium should have tangible beneficial effects at early, preclinical stages of the disease.”

What’s next?

Cuello sees two avenues to build further on these most recent findings. The first involves investigating combination therapies using this lithium formulation in concert with other interesting drug candidates. To that end he is pursuing opportunities working with Sonia Do Carmo, a research associate in his lab.

Cuello also believes there is an excellent opportunity to launch initial clinical trials of this formulation with populations with detectable preclinical Alzheimer’s pathology or with populations genetically predisposed to Alzheimer’s, such as adult individuals with Down syndrome. While many pharmaceutical companies have moved away from these types of trials, Cuello hopes to find industrial or financial partners and eventually provide greater hope of an effective treatment for Alzheimer’s disease.

Source: McGill University

Tests suggest our ancestors could’ve eaten super hard food

Tests with orangutan teeth suggest our early human ancestors ate more hard plant foods than previously thought.

Scientists often look at microscopic damage to teeth to infer what an animal was eating. This new research—using experiments looking at microscopic interactions between food particles and enamel—demonstrates that even the hardest plant tissues scarcely wear down primate teeth. The results have implications for reconstructing diet, and potentially for our interpretation of the fossil record of human evolution, researchers say.

“We found that hard plant tissues such as the shells of nuts and seeds barely influence microwear textures on teeth,” says Adam van Casteren, lecturer in biological anthropology at Washington University in St. Louis and the first author of the study in Scientific Reports.

Traditionally, eating hard foods is thought to damage teeth by producing microscopic pits. “But if teeth don’t demonstrate elaborate pits and scars, this doesn’t necessarily rule out the consumption of hard food items,” van Casteren says.

Humans diverged from non-human apes about seven million years ago in Africa. The new study addresses an ongoing debate surrounding what some early human ancestors, the australopiths, were eating. These hominin species had very large teeth and jaws, and likely huge chewing muscles.

“All these morphological attributes seem to indicate they had the ability to produce large bite forces, and therefore likely chomped down on a diet of hard or bulky food items such as nuts, seeds, or underground resources like tubers,” van Casteren says.

But most fossil australopith teeth don’t show the kind of microscopic wear that would be expected in this scenario.

The researchers decided to test it out.

Teeth vs. seeds

Previous mechanical experiments had shown how grit—literally, pieces of quartz rock—produces deep scratches on flat tooth surfaces, using a device that mimicked the microscopic interactions of particles on teeth. But there was little to no experimental data on what happens to tooth enamel when it comes in contact with actual woody plant material.

For this study, the researchers attached tiny pieces of seed shells to a probe that they dragged across enamel from a Bornean orangutan molar tooth.

They made 16 “slides” representing contacts between the enamel and three different seed shells from woody plants that are part of modern primate diets. The researchers dragged the seeds against enamel at forces comparable to those of chewing.

The seed fragments made no large pits, scratches, or fractures in the enamel, the researchers found. There were a few shallow grooves, but the scientists saw nothing that indicated that hard plant tissues could contribute meaningfully to dental microwear. The seed fragments themselves, however, showed signs of degradation from being rubbed against the enamel.

This information is useful for anthropologists who are left with only fossils to try to reconstruct ancient diets.

Big jaws, blunt teeth

“Our approach is not to look for correlations between the types of microscopic marks on teeth and foods being eaten—but instead to understand the underlying mechanics of how these scars on tooth surface are formed,” van Casteren says. “If we can fathom these fundamental concepts, we can generate more accurate pictures of what fossil hominins were eating.”

So those big australopith jaws could have been put to use chewing on large amounts of seeds—without scarring teeth.

“And that makes perfect sense in terms of the shape of their teeth,” says Peter Lucas, a coauthor at the Smithsonian Tropical Research Institute, “because the blunt, low-cusped form of their molars is ideal for that purpose.”

“When consuming many very small hard seeds, large bite forces are likely to be required to mill all the grains,” van Casteren says. “In the light of our new findings, it is plausible that small, hard objects like grass seeds or sedge nutlets were a dietary resource for early hominins.”

Source: Washington University in St. Louis

The post Tests suggest our ancestors could’ve eaten super hard food appeared first on Futurity.

Family caregivers aren’t hearing ‘do you need help?’

10 hours 56 min ago

Health care workers don’t often ask family caregivers if they need support in managing older adults’ care, according to a new study.

Researchers say most caregivers surveyed for the study report that health care workers listen to them (88.8%) and ask about their understanding of older adults’ treatments (72.1%). But a much smaller proportion (28.2%) say health care workers always or usually ask them whether they need help in caring for the older adult.

The figure was significantly higher, 37.3%, for a subset of people caring for older adults with dementia.

The study, in JAMA Network Open, was an analysis of survey data from 1,916 caregivers, mostly spouses or other family members, who provide care to older adults with activity limitations living in community settings such as private homes, apartment buildings, or senior housing.

“These results suggest that we as a society could do a better job of supporting family caregivers, who are providing the lion’s share of day-to-day care to older adults with activity limitations,” says lead author Jennifer Wolff, professor in the health policy and management department at the Bloomberg School at Johns Hopkins University.

Nearly 20 million Americans are unpaid, usually in-family caregivers for adults over 64, according to the National Academies of Sciences, Engineering, and Medicine.

The care they provide often includes help with taking medication, bringing older adult patients to a health care facility, and assisting with other health care activities. Given these important functions, the interactions between these caregivers and health care workers can affect the quality of care for the older adult patient.

“It’s a potential point of intervention for improving care,” Wolff says.

To get a better picture of this caregiver/health care-worker interface, Wolff and colleagues analyzed 2017 survey data from the National Health and Aging Trends Study (NHATS) and the related National Study of Caregiving (NSOC), including 1,916 caregivers assisting 1,203 community-living, activity-limited older adults. The average caregiver age was 59. About 900 of these caregivers reported having interacted with health care workers of the older adult in the prior year, and also provided responses to key questions about those interactions.

The study results highlight the fact that caregivers are still largely disconnected from the health care system for older adults, which in turn suggests that there is the potential to improve the quality of care, Wolff says.

“That could mean identifying caregivers who could use care-related education and training or who simply need a break, for example, through temporary ‘respite care’ of the older adult patient.”

“We’re developing strategies to more effectively engage family caregivers in care delivery,” Wolff says.

Additional coauthors are from the University of Michigan and Johns Hopkins. The National Institute on Aging funded the work.

Source: Johns Hopkins University

The post Family caregivers aren’t hearing ‘do you need help?’ appeared first on Futurity.

Don’t just fake it: ‘Deep acting’ emotions pays off at work

Fri, 2020-01-24 15:26

Faking positive emotions for coworkers can do more harm than good, researchers say. Making an effort to actually feel them, however, can produce personal and professional benefits.

For a new study, researchers analyzed two types of emotion regulation that people use at work: surface acting and deep acting.

“Surface acting is faking what you’re displaying to other people. Inside, you may be upset or frustrated, but on the outside, you’re trying your best to be pleasant or positive,” says Allison Gabriel, associate professor of management and organizations in the Eller College of Management at the University of Arizona.

“Deep acting is trying to change how you feel inside. When you’re deep acting, you’re actually trying to align how you feel with how you interact with other people.”

The study surveyed working adults in a wide variety of industries including education, manufacturing, engineering, and financial services.

“What we wanted to know is whether people choose to engage in emotion regulation when interacting with their coworkers, why they choose to regulate their emotions if there is no formal rule requiring them to do so, and what benefits, if any, they get out of this effort,” Gabriel says.

Gabriel says when it comes to regulating emotions with coworkers, four types of people emerged from the study:

  • Nonactors, or those engaging in negligible levels of surface and deep acting
  • Low actors, or those displaying slightly higher surface and deep acting
  • Deep actors, or those who exhibited the highest levels of deep acting and low levels of surface acting
  • Regulators, or those who displayed high levels of surface and deep acting

In each study, nonactors made up the smallest group. The other three groups were similar in size.

The researchers identified several drivers for engaging in emotion regulation and sorted them into two categories: prosocial and impression management. Prosocial motives include wanting to be a good coworker and cultivating positive relationships. Impression management motives are more strategic and include gaining access to resources or looking good in front of colleagues and supervisors.

The team found that impression management motives drove regulators, in particular, while deep actors were significantly more likely to be motivated by prosocial concerns. This means that deep actors choose to regulate their emotions with coworkers to foster positive work relationships, as opposed to being motivated by gaining access to more resources.

“The main takeaway,” Gabriel says, “is that deep actors—those who are really trying to be positive with their coworkers—do so for prosocial reasons and reap significant benefits from these efforts.”

According to the researchers, those benefits include receiving significantly higher levels of support from coworkers, such as help with workloads and offers of advice. Deep actors also reported significantly higher levels of progress on their work goals and trust in their coworkers than the other three groups.

The data also show that mixing high levels of surface and deep acting results in physical and mental strain.

“Regulators suffered the most on our markers of well-being, including increased levels of feeling emotionally exhausted and inauthentic at work,” Gabriel says.

While some managers Gabriel spoke to during the course of her research still believe emotions have little to do with the workplace, the study results suggest there is a benefit to displaying positive emotions during interactions at work, she says.

“I think the ‘fake it until you make it’ idea suggests a survival tactic at work,” Gabriel says. “Maybe plastering on a smile to simply get out of an interaction is easier in the short run, but long term, it will undermine efforts to improve your health and the relationships you have at work.”

“In many ways, it all boils down to, ‘Let’s be nice to each other.’ Not only will people feel better, but people’s performance and social relationships can also improve.”

The research appears in the Journal of Applied Psychology. Additional coauthors are from Texas A&M University, the University of Arkansas, and Florida State University.

Source: University of Arizona

The post Don’t just fake it: ‘Deep acting’ emotions pays off at work appeared first on Futurity.

Maps and community are key to flood management

Fri, 2020-01-24 14:09

Community collaboration and high-resolution maps are key to effective flood risk management, according to a new study.

Researchers report a new process called “collaborative flood modeling” can successfully address the increasing threat of rising waters due to climate change, aging infrastructure, and rapid urban development.

“The impacts of flooding continue to escalate in the US and around the world, and the main culprit is urban growth in harm’s way, with communities underprepared to deal with extreme events that are getting more intense in a warming climate,” says Brett Sanders, professor of civil & environmental engineering at the University of California, Irvine, and lead author of the paper in the journal Earth’s Future.

“Our approach rests on making modern flood simulation technologies accessible and useful to everyone within at-risk communities.”

Collaborative flood management

Researchers put the method into practice during the Flood-Resilient Infrastructure & Sustainable Environments (FloodRISE) project. Beginning in 2013, FloodRISE teams worked in two Southern California coastal areas at risk of flooding—Newport Bay and the Tijuana River Valley—gathering data, conducting surveys, and holding face-to-face meetings with residents.

The technique showed considerable traction in the two regions. For example, following focus group meetings, Newport Beach managers asked FloodRISE engineers for flood elevation datasets to integrate into the geographic information systems the planning department uses.

And after joint sessions in the Tijuana River Valley, San Diego County officials called for additional flood hazard modeling related to proposed dredging, and Tijuana River National Estuarine Research Reserve authorities requested flood hazard information related to a planned road realignment and marsh restoration project.

Collaborative flood modeling combines the experiences and concerns of residents, landowners, government officials, and business leaders with the knowledge and technological capabilities of academic researchers to foster a shared understanding of flood risk.

A crucial element of any such effort is the iterative development of high-resolution flood maps, or visualizations, based on hydrologic models and the insights of people who have lived through past floods.

“When we integrate what people in vulnerable communities have learned with the expertise of civil engineers and social scientists, we create more accurate and more functional flood maps customized for the specific needs of a community,” says Richard Matthew, professor of urban planning & public policy and faculty director of the Blum Center for Poverty Alleviation.

“This takes us away from the one-size-fits-all approach to flood mapping that’s widely in use today. We also found that residents gain a much deeper and more shared awareness of flooding through these highly detailed maps. This is critical for stimulating productive dialogue and deliberation about how to manage risks.”

Skip the technical jargon

Collaborative flood modeling exercises such as those conducted through FloodRISE open flood-related decision-making to diverse groups of constituents, giving them helpful insights into the spatial extent, intensity, timing, chance, and consequences of flooding, Sanders says.

Researchers designed the approach to complement the flood insurance program that the Federal Emergency Management Agency administers, so that a wide range of planning, land use, and behavioral choices can draw on the best available science in the form of intuitive and accurate flood simulations.

“We have tried to focus on eliminating much of the technical terminology from our discussions with community members,” Sanders says. “We advocate a bottom-up alternative to what has become a top-down process of flood hazard mapping filled with technical jargon that’s short-circuiting important contemplation and dialogue about flooding.”

Communities have embraced the new process, Sanders says. “Existing maps that depict flood risks are difficult to interpret, with cryptic classifications such as ‘Zone X’ and ‘Zone VE.’ We found that communities were eager to have access to this more intuitive and useful information and were quick to adopt it, especially for planning and land management purposes.”

“Our next step is to combine these powerful visualization tools with socioeconomic data to anticipate how different types of severe flooding are likely to affect poor communities in California and then co-develop with them risk management strategies,” Matthew adds. “This is critical if we want to try to avoid the enormous and long-term devastation we’ve seen when large flood events affect poorer communities on the East Coast.”

The National Science Foundation funded the FloodRISE project.

Source: UC Irvine

The post Maps and community are key to flood management appeared first on Futurity.

Teen boys and girls now do equal amounts of housework

Fri, 2020-01-24 10:42

While women still do nearly twice as much housework as men, the division of labor between boys and girls doing household chores is nearly equal, according to a new study.

Researchers studied change in housework as well as “marketwork”—work done for pay outside the home—between 1983 and 2015. Core household chores include the daily drudgery of washing dishes, sweeping, and vacuuming.

It doesn’t include housework done by a cleaning service or home activities such as gardening and child care, says Frank Stafford, a professor of economics and research professor in the Survey Research Center at the Institute for Social Research at the University of Michigan.

In 1983, married men completed 6.4 hours of these chores and 40.1 hours of marketwork. In 2015, they completed 7.8 hours of housework, and 40.4 hours of marketwork. Married women’s hours of participation in the workforce rose from an average of 19.1 in 1983 to 28.2 hours in 2015, and their hours of household chores fell from 26.9 to 15.4 hours.

“In the old days, when young people got married, women increased housework substantially and decreased marketwork. It was the reverse for men. Now, it’s not quite such a dramatic reallocation,” Stafford says. “Women still do more, but not as much additional housework upon marriage as they did previously.”

In fact, the total number of hours of housework declined from 1983 to 2015. Married couples did 33.3 hours of housework in 1983 compared to 23.2 hours in 2015. Stafford says technology has something to do with it. Instead of washing dishes by hand, families more often use dishwashers now. More families buy prepared foods.

Fewer overall housework hours also shows up for kids in these families, Stafford says.

In 2002, teenage boys did 21.4 minutes of housework per day, while girls did 40.5 minutes of housework per day. But the amount of housework done by boys compared to girls has become relatively equal. In 2014, boys did 26.8 minutes of housework per day, while girls did 30 minutes.

It’s likely that boys’ and girls’ more equal division of household chores will continue into adulthood, continuing the long-term change to a more equal division of labor between men and women, Stafford says.

The total work hours—market hours plus housework hours—for both men and women remained relatively the same between 1983 and 2015. In 1983, married men spent 46.5 hours doing market and housework. Married women spent 46 hours both working outside the home and in the home. By 2015, married men spent a total of 48.2 hours working, while married women spent a total of 43.6 hours working. The proportion of housework hours fell, which means by 2015, more time was dedicated to work that brings money into the family.
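As a quick arithmetic check (this is only an illustration of the figures quoted in the article, not part of the study), the totals above follow directly from the reported weekly averages:

```python
# Reported average weekly hours for married adults, per the article:
# (marketwork hours, housework hours) by year and sex.
hours = {
    1983: {"men": (40.1, 6.4), "women": (19.1, 26.9)},
    2015: {"men": (40.4, 7.8), "women": (28.2, 15.4)},
}

for year, by_sex in hours.items():
    for sex, (market, house) in by_sex.items():
        total = round(market + house, 1)
        print(f"{year} married {sex}: {market} market + {house} house = {total} total")
```

Summing each pair reproduces the totals in the text: 46.5 and 46.0 hours in 1983, and 48.2 and 43.6 hours in 2015, with the housework share shrinking over time.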

“Total work has remained stable,” Stafford says. “We’re seeing this movement toward higher value marketwork and away from routine housework. From this shift we have greater economic contributions by women and substantial economic growth.”

Stafford and coauthor Ping Li, an economics researcher at South China Normal University, used data from the Panel Study of Income Dynamics to determine the amount of time people invested in housework. The paper will appear in the Journal of Time Use Research.

Source: University of Michigan

The post Teen boys and girls now do equal amounts of housework appeared first on Futurity.

Our pee could become fertilizer with low drug-resistance risk

Fri, 2020-01-24 10:35

Recycled and aged human urine can serve as a fertilizer with low risks of spreading antibiotic-resistant DNA, according to new research.

It’s a key finding in efforts to identify more sustainable alternatives to widely used fertilizers that contribute to water pollution. The high levels of nitrogen and phosphorus in these fertilizers can spur the growth of algae, which can threaten sources of drinking water.

Urine contains nitrogen, phosphorus, and potassium—key nutrients that plants need to grow. Today, municipal treatment systems don’t totally remove these nutrients from wastewater before releasing it into rivers and streams. At the same time, manufacturing synthetic fertilizer is expensive and energy intensive.

Over the last several years, a group of researchers has studied the removal of bacteria, viruses, and pharmaceuticals in urine to improve the safety of urine-derived fertilizers.

In this new study, they show that the practice of “aging” collected urine in sealed containers over several months effectively deactivates 99% of the antibiotic-resistance genes present in bacteria in the urine.

“Based on our results, we think that microorganisms in the urine break down the extracellular DNA in the urine very quickly,” says Krista Wigginton, associate professor of civil and environmental engineering at the University of Michigan and corresponding author of the study in the journal Environmental Science and Technology.

“That means that if bacteria in the collected urine are resistant to antibiotics and the bacteria die, as they do when they are stored in urine, the released DNA won’t pose a risk of transferring resistance to bacteria in the environment when the fertilizer is applied.”

Previous research has shown that antibiotic-resistant DNA can be found in urine, raising the question of whether fertilizers derived from it might carry over that resistance.

The researchers collected urine from more than 100 men and women and stored it for 12 to 16 months. During that period, ammonia levels in the urine increased, lowering acidity and killing most of the bacteria the donors had shed. Bacteria from urinary tract infections often harbor antibiotic resistance.

When the ammonia kills the bacteria, they dump their DNA into the solution. It’s these extracellular snippets of DNA that the researchers studied to see how quickly they would break down.

Urine has been utilized as a crop fertilizer for thousands of years, but has been getting a closer look in recent years as a way to create a circular nutrient economy. It could enable manufacturing of fertilizers in a more environmentally friendly way, reduce the energy required to manage nutrients at wastewater treatment plants, and create localized fertilizer sources.

“There are two main reasons we think urine fertilizer is the way of the future,” Wigginton says. “Our current agricultural system is not sustainable, and the way we address nutrients in our wastewater can be much more efficient.”

In their ongoing work, the team is moving towards agricultural settings. “We are doing field experiments to assess technologies that process urine into a safe and sustainable fertilizer for food crops and other plants, like flowers. So far, our experimental results are quite promising,” says Nancy Love, professor of civil and environmental engineering.

The National Science Foundation funded the work.

Source: University of Michigan

The post Our pee could become fertilizer with low drug-resistance risk appeared first on Futurity.

How high-protein diets could increase heart attack risk

Fri, 2020-01-24 10:06

High-protein diets may help people lose weight and build muscle, but a new study in mice suggests a downside: more plaque in the arteries.

Further, the new research shows that high-protein diets spur unstable plaque—the kind most prone to rupturing and causing blocked arteries. More plaque buildup in the arteries, particularly if it’s unstable, increases the risk of heart attack.

“There are clear weight-loss benefits to high-protein diets, which has boosted their popularity in recent years,” says senior author Babak Razani, an associate professor of medicine at the Washington University School of Medicine in St. Louis. “But animal studies and some large epidemiological studies in people have linked high dietary protein to cardiovascular problems. We decided to take a look at whether there is truly a causal link between high dietary protein and poorer cardiovascular health.”

The new study appears in the journal Nature Metabolism.

High-protein diets

The researchers studied mice fed a high-fat diet to deliberately induce atherosclerosis, or plaque buildup in the arteries. According to Razani, mice must eat a high-fat diet to develop arterial plaque. Some of the mice received a high-fat diet that was also high in protein; others were fed a high-fat, low-protein diet for comparison.

“A couple of scoops of protein powder in a milkshake or a smoothie adds something like 40 grams of protein—almost equivalent to the daily recommended intake,” Razani says. “To see if protein has an effect on cardiovascular health, we tripled the amount of protein that the mice received in the high-fat, high-protein diet—keeping the fat constant. Protein went from 15% to 46% of calories for these mice.”
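The quoted figures are roughly consistent as a back-of-envelope calculation: if protein calories are tripled while total calories are held fixed (that is, protein displaces an equal number of non-fat calories), a 15% protein share becomes 45%, close to the reported 46%. The sketch below is only this illustrative arithmetic, not the study’s actual diet formulation:

```python
# Protein as a fraction of total calories, per the article.
baseline_protein_share = 0.15

# Triple the protein calories while holding total calories constant
# (an assumption for illustration; fat is kept unchanged in the study).
new_share = baseline_protein_share * 3

print(f"protein share after tripling: {new_share:.0%}")  # 45%, close to the reported 46%
```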

The mice on the high-fat, high-protein diet developed worse atherosclerosis—about 30% more plaque in the arteries—than mice on the high-fat, normal-protein diet, despite the fact that the mice eating more protein did not gain weight, unlike the mice on the high-fat, normal-protein diet.

Unstable plaques

“This study is not the first to show a telltale increase in plaque with high-protein diets, but it offers a deeper understanding of the impact of high protein with the detailed analysis of the plaques,” Razani says. “In other words, our study shows how and why dietary protein leads to the development of unstable plaques.”

Plaque contains a mix of fat, cholesterol, calcium deposits, and dead cells. Past work by Razani’s team and other groups has shown that immune cells called macrophages work to clean up plaque in arteries. But the environment inside plaque can overwhelm these cells, and when such cells die, they make the problem worse, contributing to plaque buildup and increasing plaque complexity.

“In mice on the high-protein diet, their plaques were a macrophage graveyard,” Razani says. “Many dead cells in the core of the plaque make it extremely unstable and prone to rupture. As blood flows past the plaque, that force—especially in the context of high blood pressure—puts a lot of stress on it. This situation is a recipe for a heart attack.”

Focus on amino acids

To understand how high dietary protein might increase plaque complexity, Razani and his colleagues studied the path protein takes after it has been digested—broken down into its original building blocks, called amino acids.

Razani and his team found that excess amino acids from a high-protein diet activate a protein in macrophages called mTOR, which tells the cell to grow rather than go about its housecleaning tasks. The signals from mTOR shut down the cells’ ability to clean up the toxic waste of the plaque, and this sets off a chain of events that results in macrophage death. The researchers found that certain amino acids, especially leucine and arginine, were more potent in activating mTOR—and derailing macrophages from their cleanup duties, leading to cell death—than other amino acids.

“Leucine is particularly high in red meat, compared with, say, fish or plant sources of protein,” Razani says. “A future study might look at high-protein diets with different amino acid contents to see if that could have an effect on plaque complexity. Cell death is the key feature of plaque instability. If you could stop these cells from dying, you might not make the plaque smaller, but you would reduce its instability.

“This work not only defines the critical processes underlying the cardiovascular risks of dietary protein but also lays the groundwork for targeting these pathways in treating heart disease,” he says.

Support for the work came from the National Institutes of Health, the American Diabetes Association, the Washington University Diabetic Cardiovascular Disease Center and Diabetes Research Center, the Washington University Mass Spectrometry Core, and the Foundation for Barnes-Jewish Hospital.

Source: Washington University in St. Louis

The post How high-protein diets could increase heart attack risk appeared first on Futurity.

38% of mass shooters have committed domestic violence

Fri, 2020-01-24 08:50

A new study that measures the extent to which perpetrators of domestic violence also commit mass shootings suggests how firearm restrictions may prevent these tragedies.

Under federal law, people convicted of domestic violence misdemeanor crimes are prohibited from purchasing and possessing guns for the rest of their lives. But gaps in the system may allow potential mass shooters to slip through.

The United States currently averages 20 mass shootings per year.

“We found that 38% of known mass shooters had a history of domestic violence, either known to the justice system or mentioned in the media,” says April Zeoli, associate professor of criminal justice at Michigan State University and lead author of a new paper in Criminology & Public Policy.

“People should determine, in their state, whether it may be possible for people convicted of domestic violence to obtain a firearm.”

“Very few of those who committed mass shootings seemed to have firearm restrictions due to domestic violence; the fact that some of them had those restrictions suggests that we are not actually preventing purchase or possession of a gun as well as we could or need to be.”

Some cases of domestic violence never result in firearm restrictions because law enforcement is never involved, because the cases are not referred to prosecutors, because filed charges don’t qualify for firearm restrictions, or because the case doesn’t meet a relationship requirement needed to apply gun restrictions.

“In more than 20 states, a person convicted of misdemeanor domestic violence against a dating partner will not be restricted from firearm access—you must have lived together, be married, or have a child together to qualify for the restriction,” Zeoli says.

Researchers looked at the nearly 90 mass shootings that took place between 2014 and 2017. Zeoli and coauthor Jennifer Paruk cross-checked four separate mass shooting databases—from Everytown for Gun Safety, USA Today, Gun Violence Archive, and Mother Jones—and then used publicly available criminal records to see what other criminal charges the shooters had against them.

“The public sees media reports of mass shootings happening in movie theaters, schools, nightclubs, and beyond—these are the ones that keep us all up at night,” Zeoli says. “But the majority of these mass shootings involved intimate and family member victims.”

The researchers pinpoint ways—called “exit points” in the paper—that firearm restrictions failed to prevent a shooter from buying a gun, which include purchases made through private sales and a failure to report gun disqualifications to the criminal background check system.

“In the case of the Sutherland Springs Baptist church shooting, the shooter did in fact qualify for a gun restriction under federal law because of domestic violence,” Zeoli says. “However, the conviction was in military court, and the military never sent the conviction records to the background check system; so, when he went to buy a gun, nothing showed up on his record.”

Zeoli says she hopes that the findings inspire both the public and lawmakers to learn about their states’ laws, as well as the exit points she and Paruk found that can lead to a gun landing in the hands of the wrong type of person.

“The image you get of mass shootings in the media isn’t always the full picture,” Zeoli says. “People should determine, in their state, whether it may be possible for people convicted of domestic violence to obtain a firearm. Many of those exit points can be closed through legislation and better implementation of the law.

“My feeling—and my hope—is that we’ll continue to see states work to implement restrictions that keep dangerous individuals from gaining access to guns and prevent gun violence from happening.”

Source: Michigan State University

The post 38% of mass shooters have committed domestic violence appeared first on Futurity.

Why Rilke’s words still resonate (and appear in ‘Jojo Rabbit’)

Thu, 2020-01-23 16:03

The writings of Rainer Maria Rilke take a star turn in the film Jojo Rabbit, which has been nominated for six Academy Awards, including Best Picture.

The best-selling Austrian poet and novelist’s poem “Go to the Limits of Your Longing” and the letter “Requiem for a Friend”—a tribute to painter Paula Modersohn-Becker—appear in Jojo Rabbit. The film is a biting and poignant satire, in which a 10-year-old boy wrestles with nationalism and anti-Semitism as he comes of age in the closing months of WWII in Hitler’s Germany.

It’s not the first time poetry has made it onto the big screen. Verses have been adapted as film titles (The White Cliffs of Dover, Casey at the Bat, Howl), served as their centerpiece (Dead Poets Society), or been the catalyst for a poignant scene or powerful narrative, as in Sophie’s Choice (borrowing from Emily Dickinson), Four Weddings and a Funeral (W.H. Auden), or A Raisin in the Sun (Langston Hughes).

And this year, it’s the turn of Rilke (1875-1926).

So why Rilke, and why now? Ulrich Baer, professor of comparative literature, German, and English at New York University, has some answers.

The author of The Rilke Alphabet (Fordham, 2014) and Letters on Life: The Wisdom of Rainer Maria Rilke (Modern Library, 2005; audio book recorded by Ethan Hawke), Baer recently edited and translated Rilke’s correspondence in a pair of volumes: Letters to a Young Poet (Insel Verlag, 2018) and The Dark Interval: Rilke’s Letters on Loss, Grief, and Transformation (Modern Library, 2018; audio book recorded by singer Rosanne Cash).

Here, he considers why Rilke resonates on the big screen—in Jojo Rabbit as well as in earlier films such as Only You (1994) and Igby Goes Down (2002)—and in the culture at large:


Should you worry about the Wuhan coronavirus?

Thu, 2020-01-23 14:11

Since December, the coronavirus that originated in Wuhan, China, has killed at least 17 people and sickened close to 600.

Cases continue to spread globally, with one identified in Washington state.

In response to the evolving outbreak, the US Centers for Disease Control and Prevention has redirected US-bound travelers from Wuhan to five airports for screening: New York’s JFK, San Francisco, Los Angeles, Atlanta, and Chicago’s O’Hare.

The Chinese government has quarantined the city of Wuhan, shut down its airport and public transportation, and expanded the public transportation shutdown to at least four more cities.

Debra Chew is a former epidemic intelligence officer for the Centers for Disease Control and Prevention, an assistant professor of medicine at Rutgers New Jersey Medical School, and medical director for infection prevention and control at University Hospital in Newark, New Jersey.

Here, she discusses what we know about the new infectious disease and who is most at risk:


Lay counselors help kids in East Africa cope with trauma

Thu, 2020-01-23 13:40

Mental health therapy from trained community-based lay counselors improves trauma-related symptoms up to a year later, a clinical trial with more than 600 children in Kenya and Tanzania shows.

Researchers trained laypeople as counselors to deliver treatment in both urban and rural communities in Kenya and Tanzania, and evaluated the progress of children and their guardians through sessions of trauma-focused cognitive behavioral therapy.

The findings demonstrate that lower-cost, scalable mental health solutions are possible for a part of the world where mental health resources are scarce, researchers say.

An estimated 140 million children around the world have experienced the death of a parent, which can result in grief, depression, anxiety, and other physical and mental health conditions.

In low- and middle-income countries, many of those children end up living with relatives or other caregivers where mental health services are typically unavailable.

“Very few people with mental health needs receive treatment in most places in the world, including many communities in the US,” says Shannon Dorsey, a professor of psychology at the University of Washington and lead author of the paper in JAMA Psychiatry. “Training community members, or ‘lay counselors,’ to deliver treatment helps increase the availability of services.”

Talk it out

Cognitive behavioral therapy (CBT), a type of talk therapy, generally involves focusing on thoughts and behavior, and how changing either or both can lead to feeling better. When counselors use CBT for traumatic events, it involves talking about the events and related difficult situations instead of trying to avoid thinking about or remembering them.

Researchers have tested the approach before among children, and in areas with little access to mental health services, Dorsey says. However, the new study is the first clinical trial outside high-income countries to examine improvement in post-traumatic stress symptoms in children over time.

The research builds on work from coauthor Kathryn Whetten, a professor of public policy and global health at Duke University. Whetten has conducted longitudinal research in Tanzania and four other countries on the health outcomes of some 3,000 children who have lost one or both parents.

“They could see, as one child said, ‘I’m not the only one who worries about who will love me now that my mama is gone.'”

“While concerned with the material needs of the household, caregivers of the children repeatedly asked the study team to find ways to help with the children’s behavioral and emotional ‘problems’ that made it so that the children did not do well in school, at home, or with other children,” Whetten says.

“We knew these behaviors were expressions of anxiety that likely stemmed from their experiences of trauma. We therefore sought to adapt and test an intervention that could help these children succeed.”

Africa is home to about 50 million orphaned children, mostly due to HIV/AIDS and other health conditions.

Therapy normalizes experiences

For the Kenya-Tanzania study, two local nongovernmental organizations, Ace Africa and Tanzania Women Research Foundation, recruited counselors and trained them in CBT methods.

Based on researchers’ prior experience in Africa, the team adapted the CBT model and terminology, structuring it for groups of children in addition to one-on-one sessions. They referred to the sessions as a “class” offered in a familiar building such as a school, rather than as “therapy” in a clinic. The changes aimed to reduce stigma and boost participation in and comfort with the program, Dorsey says.

“Having children meet in groups naturally normalized their experiences. They could see, as one child said, ‘I’m not the only one who worries about who will love me now that my mama is gone.’ They also got to support each other in practicing skills to cope with feelings and to think in different ways in order to feel better,” she says.

“The children learned…you have to convert that relationship to one of memory, but it is still a relationship that can bring comfort.”

Counselors provided 12 group sessions over 12 weeks, along with 3 to 4 individual sessions per child. Caregivers participated in their own group sessions, a few individual sessions, and group activities with the children. Activities and discussions centered on helping participants process the death of a loved one: being able to think back on and talk about the circumstances surrounding the parent’s death, for example, and learning how to rely on memories as a source of comfort.

In one activity, children drew a picture of something their parent did with them, such as cooking a favorite meal or walking them to school. Even though children could no longer interact with the parent, the counselors explained that children could hold onto these memories and what they learned from their parents, like how to cook the meal their mother made, or the songs their father taught them.

“The children learned that you don’t lose the relationship,” Dorsey says. “You have to convert that relationship to one of memory, but it is still a relationship that can bring comfort.”

The guardians learned similar coping skills, she says. Relatives usually care for children who have experienced parental death, so they often are also grieving the loss of a family member while taking on the challenge of an additional child in the home.

Potential of lay counselors

Researchers interviewed participants at the conclusion of the 12-week program, and again six and 12 months later. Concurrently, they evaluated a control group of children who received the typical community services offered to orphaned children, such as free uniforms and other, mostly school-fee-related, help.

Improvement in children’s post-traumatic stress symptoms and grief was most pronounced in both urban and rural Kenya. Researchers attribute the success there partly to the greater adversity children faced, such as higher food scarcity and poorer child and caregiver health, and thus the noticeable gains that providing services could yield.

In contrast, in rural Tanzania, children in both the counseling and control groups showed similar levels of improvement, which researchers are now trying to understand. One possible explanation, Dorsey says, is that children and caregivers in Tanzania, and particularly in rural areas, may have been more likely to share with others in their village what they learned from therapy.

Even with the different outcomes in the two countries, the intervention by lay counselors who received training from experienced lay counselors shows the effectiveness and scalability of fostering a local solution, Dorsey says. Members of the research team are already working in other countries in Africa and Asia to help lay counselors train others in their communities to work with children and adults.

“If we grow the potential for lay counselors to train and supervise new counselors and provide implementation support to systems and organizations in which these counselors are embedded, communities can have their own mental health expertise,” Dorsey says.

“That would have many benefits, from lowering cost to improving the cultural and contextual fit of treatments.”

The National Institute of Mental Health funded the work. Additional coauthors are from Duke, the University of Washington, Johns Hopkins University, Ace Africa, the Tanzania Women Research Foundation, Drexel University College of Medicine, and Kilimanjaro Christian University Medical College.

Source: Duke University


The average American household wastes $1,866 of food per year

Thu, 2020-01-23 12:53

American households waste, on average, almost a third of the food they acquire, economists report. This food waste has an estimated aggregate value of $240 billion annually.

Divided among the roughly 128.6 million US households, the waste could cost the average household about $1,866 per year.

This inefficiency in the food economy has implications for health, food security, food marketing, and climate change, says Edward Jaenicke, professor of agricultural economics in the College of Agricultural Sciences at Penn State.

“Our findings are consistent with previous studies, which have shown that 30% to 40% of the total food supply in the United States goes uneaten—and that means that resources used to produce the uneaten food, including land, energy, water, and labor, are wasted as well,” Jaenicke says.

“But this study is the first to identify and analyze the level of food waste for individual households, which has been nearly impossible to estimate because comprehensive, current data on uneaten food at the household level do not exist.”

To overcome this limitation, researchers borrowed methodology from the fields of production economics—which models the production function of transforming inputs into outputs—and nutritional science, in which researchers use a person’s height, weight, gender, and age to calculate metabolic energy requirements to maintain body weight.

Healthy diets, more food waste?

In this new approach, Jaenicke and Yang Yu, doctoral candidate in agricultural, environmental, and regional economics, analyzed data primarily from 4,000 households that participated in the US Department of Agriculture’s National Household Food Acquisition and Purchase Survey, known as FoodAPS. The researchers treated the food-acquisition data from this survey as the “input.”

FoodAPS also collected biological measures of participants, allowing the researchers to apply formulas from nutritional science to determine basal metabolic rates and calculate the energy required for household members to maintain body weight, or the “output.”

The difference between the amount of food acquired and the amount needed to maintain body weight represents the production inefficiency in the model, which translates to uneaten, and therefore wasted, food.
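As a back-of-the-envelope illustration of this input-versus-output accounting (this is not the authors' estimation procedure, and the function names, the Mifflin-St Jeor energy equation, the activity factor, and all household numbers below are assumptions for the example):

```python
# Illustrative sketch: waste share = (food energy acquired - energy
# needed to maintain body weight) / food energy acquired.

def mifflin_st_jeor_bmr(weight_kg, height_cm, age_years, is_male):
    """Basal metabolic rate in kcal/day (Mifflin-St Jeor equation)."""
    s = 5 if is_male else -161
    return 10 * weight_kg + 6.25 * height_cm - 5 * age_years + s

def household_waste_share(daily_kcal_acquired, members, activity_factor=1.4):
    """Fraction of acquired food energy beyond what members need
    to maintain body weight, treated here as the waste estimate."""
    needed = sum(
        mifflin_st_jeor_bmr(w, h, a, m) * activity_factor
        for (w, h, a, m) in members
    )
    return max(0.0, (daily_kcal_acquired - needed) / daily_kcal_acquired)

# Hypothetical two-adult household acquiring 6,500 kcal of food per day:
members = [(80, 178, 40, True), (65, 165, 38, False)]
share = household_waste_share(6500, members)  # roughly 0.34 here
```

For these made-up numbers the share lands near the study's 31.9% average, but that is a coincidence of the chosen inputs, not a validation; the actual study infers inefficiency from the FoodAPS acquisition data and participants' measured energy requirements.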

“Based on our estimation, the average American household wastes 31.9% of the food it acquires,” Jaenicke says. “More than two-thirds of households in our study have food-waste estimates of between 20% and 50%. However, even the least wasteful household wastes 8.7% of the food it acquires.”

In addition, the researchers used demographic data collected as part of the survey to analyze the differences in food waste among households with a variety of characteristics.

For example, households with higher income generate more waste, and those with healthier diets that include more perishable fruits and vegetables also waste more food, according to the researchers, who report their findings in the American Journal of Agricultural Economics.

“It’s possible that programs encouraging healthy diets may unintentionally lead to more waste,” Jaenicke says. “That may be something to think about from a policy perspective—how can we fine-tune these programs to reduce potential waste.”

Plan before you shop

Household types associated with less food waste include those with greater food insecurity—especially those that participate in the federal SNAP food assistance program, previously known as “food stamps”—as well as those households with a larger number of members.

“People in larger households have more meal-management options,” Jaenicke says. “More people means leftover food is more likely to be eaten.”

In addition, the size of some grocery items may influence waste, he says. “A household of two may not eat an entire head of cauliflower, so some could be wasted, whereas a larger household is more likely to eat all of it, perhaps at a single meal.”

Other households with lower levels of waste include those that use a shopping list when visiting the supermarket and those that must travel farther to reach their primary grocery store.

“This suggests that planning and food management are factors that influence the amount of wasted food,” Jaenicke says.

3.3 gigatons of greenhouse gas

Beyond the economic and nutritional implications, reducing food waste could be a factor in minimizing the effects of climate change. Previous studies have shown that throughout its life cycle, discarded food is a major source of greenhouse gas emissions, the researchers say.

“According to the UN Food and Agriculture Organization, food waste is responsible for about 3.3 gigatons of greenhouse gas annually, which would be, if regarded as a country, the third-largest emitter of carbon after the US and China,” Jaenicke says.

The researchers suggest the findings can help fill the need for comprehensive food-waste estimates at the household level that can generalize to a wide range of household groups.

“While the precise measurement of food waste is important, it may be equally important to investigate further how household-specific factors influence how much food is wasted,” Jaenicke says. “We hope our methodology provides a new lens through which to analyze individual household food waste.”

The US Department of Agriculture’s National Institute of Food and Agriculture supported this work through its Agriculture and Food Research Initiative.

Source: Penn State


Can snails save coffee from fungus? It’s a risky idea

Thu, 2020-01-23 11:15

Could invasive snails save coffee from a devastating pest?

While conducting fieldwork in Puerto Rico’s central mountainous region in 2016, ecologists noticed tiny trails of bright orange snail poop on the undersurface of coffee leaves afflicted with coffee leaf rust, the crop’s most economically important pest.

Intrigued, they conducted field observations and laboratory experiments over the next several years and showed that the widespread invasive snail Bradybaena similaris, commonly known as the Asian tramp snail and normally a plant-eater, had shifted its diet to consume the fungal pathogen that causes coffee leaf rust, which has ravaged coffee plantations across Latin America in recent years.

Now the University of Michigan researchers are exploring the possibility that B. similaris and other snails and slugs, which are part of a large class of animals called gastropods, could be used as a biological control to help rein in coffee leaf rust. But as ecologists, they are keenly aware of the many disastrous attempts at biological control of pests in the past.

“This is the first time that any gastropod has been described as consuming this pathogen, and this finding may potentially have implications for controlling it in Puerto Rico,” says doctoral student Zachary Hajian-Forooshani, lead author of a paper in the journal Ecology.

“But further work is needed to understand the potential tradeoffs B. similaris and other gastropods may provide to coffee agroecosystems, given our understanding of other elements within the system,” says Hajian-Forooshani, whose advisor is ecologist John Vandermeer, a professor in the department of ecology and evolutionary biology.

Coffee rust in Puerto Rico

Vandermeer and ecologist Ivette Perfecto, a professor at the School for Environment and Sustainability, lead a team that has been monitoring coffee leaf rust and its community of natural enemies on 25 farms throughout Puerto Rico’s coffee-producing region.

Those natural enemies include fly larvae, mites, and a surprisingly diverse community of fungi living on coffee leaves, within or alongside the orange blotches that mark coffee leaf rust lesions. Hajian-Forooshani has been studying all of these natural enemies for his doctoral dissertation.

“Of all the natural enemies I have been studying, these gastropods in Puerto Rico most obviously and effectively clear the leaves of the coffee leaf rust fungal spores,” he says.

Chief among those gastropods is B. similaris, originally from Southeast Asia and now one of the world’s most widely distributed invasive land snails. It has a light brown shell that is 12 to 16 millimeters (roughly one-half to two-thirds of an inch) across.

A B. similaris snail on a coffee leaf infected with coffee leaf rust. (Credit: Hajian-Forooshani et al. in Ecology)

In the paper, Hajian-Forooshani, Vandermeer, and Perfecto describe experiments in which a single infected coffee leaf and a single B. similaris snail were kept together inside dark containers. After 24 hours, the number of coffee leaf rust fungal spores on the leaves had dropped by roughly 30%.

However, the snails were also responsible for a roughly 17% reduction in the number of lesions caused by another natural enemy of coffee leaf rust, the parasitic fungus Lecanicillium lecanii.

“With the data we are collecting now, we seek to find out if there are any apparent tradeoffs between these two consumers of the coffee leaf rust,” Hajian-Forooshani says. “For example, if the fungal parasite is especially efficient at reducing the rust, and the snail eats it along with the rust itself, that could be a tradeoff: promote the snail to control the rust and face the possibility that the snail eats too much of the other controlling factor.”

But would these snails become a huge problem?

In the paper, the authors say they’re cognizant of “the many disastrous attempts at classical biological control” in the past.

One of the best-known examples of a biological backfire was the introduction of the cane toad into Australia in the mid-1930s to control a beetle that was destroying sugar cane. Long story short, the cane toad was completely ineffective at controlling the beetle and became a pest in its own right by multiplying dramatically in the absence of natural enemies.

So, it’s too soon to tell if the fungus-eating appetite of B. similaris and other snails could be harnessed in the fight against coffee leaf rust. One big unanswered question: Do the fungal spores remain viable after they pass through the guts of the snails?

“The gastropods seem to reduce the number of spores on the leaf, but it’s not clear if the spores can still germinate in the excrement,” Hajian-Forooshani says. “Also, we don’t know how the effect of the gastropods on coffee leaf rust scales up to impact the pathogen dynamics at the farm or regional scale.”

And the potential role of gastropods in the fight against coffee rust elsewhere in Latin America remains unknown. But the researchers hope their findings in Puerto Rico will stimulate further research in other coffee-growing regions.

The US Department of Agriculture’s National Institute of Food and Agriculture supported the work.

Source: University of Michigan
