Futurity.org

Research news from top universities.

Sea levels have already cost Annapolis over $86K

8 hours 25 min ago

Rising sea levels due to climate change are already costing businesses in the city of Annapolis, Maryland, according to new research.

Miyuki Hino, a graduate student at Stanford University, and her colleagues found that downtown Annapolis, Maryland’s state capital, suffered a loss of 3,000 visits in 2017 due to high-tide flooding, which equates to a loss of somewhere between $86,000 and $172,000 in revenue.

“Small businesses in downtown Annapolis rely on visitors. By measuring the extent of the impact of flooding, we can understand the business case—how sea level rise is already impacting businesses’ experiences and profits,” says Samanthe Belanger, a coauthor and Stanford MBA student at the time of the study, which appears in Science Advances.

More floods, fewer customers

High-tide flooding, sometimes called nuisance or sunny-day flooding, happens when ocean waters rise above the levels that coastal infrastructure was designed for. Water sweeps in, filling streets and parking lots and preventing normal pedestrian and vehicle traffic.

Once relatively rare, high-tide flooding days have increased about 60 percent over the past 20 years, according to the National Oceanic and Atmospheric Administration. In 27 locations across the US, the number of high-tide flooding days went from an average of 2.1 days per year in the late 1950s to 11.8 during 2006 to 2010. By 2035, about 170 coastal communities are projected to experience 26 high-tide flooding days a year.

“As global temperatures and sea levels rise, high-tide flooding becomes more frequent,” says Hino, who is in the Emmett Interdisciplinary Program in Environment and Resources. “For coastal businesses, that means more days when customers might not be able to get to their store. Even though most floods only last for a few hours, their impacts can add up.”

Annapolis, home to the US Naval Academy, tops the list of cities experiencing increases in high-tide flooding. In the early 1960s, Annapolis had about four high-tide flooding days a year. In 2017, the small city on Chesapeake Bay experienced 63 high-tide flooding days.

“In Annapolis, as in many places, high-tide flooding is happening right in the heart of things. The historic district is a favorite among locals and tourists alike. It is now frequently flooding,” says Katharine Mach, a senior research scientist at Stanford’s School of Earth, Energy & Environmental Sciences and a coauthor of the study.

Visitors to City Dock

Researchers used parking meters, satellite imagery, interviews, and other data to determine how would-be customers were dissuaded from visiting a popular waterfront business district known as City Dock during flood hours. They found no evidence that customers returned after the floods subsided, or that they parked somewhere nearby and braved the water to make it to businesses.

“So often we think of climate change and sea level rise as these huge ideas happening at a global scale, but high-tide flooding is one way to experience these changes in your daily life just trying to get to your restaurant reservation,” Hino says.

In 2017, the loss to City Dock businesses due to flooding was less than 2 percent of annual visitors, but the researchers warn that could get a lot worse as sea levels continue to rise. The study projects that if sea level increases by 3 more inches, visits to City Dock would fall by about 4 percent. With 12 inches of sea level rise, visits would fall by about 24 percent, a figure that could mean hundreds of thousands of dollars in lost revenue.

The US Global Change Research Program’s Climate Science Special Report projects sea level rise of 0.5 to 1.2 feet by 2050, relative to the year 2000.

“What we’re finding is something many local leaders in coastal cities already know: the waters are rising up and surging into daily life. So are the costs,” Mach says. “Understanding the impacts for people today—at large and small scale—is an essential starting point for making smart adjustments to the risks.”

Mach is director of the Stanford Environment Assessment Facility at the Stanford Woods Institute for the Environment and a senior research scientist in Earth system science.

Additional coauthors are from Stanford, the Precourt Institute for Energy, and the United States Naval Academy.

Source: Stanford University


Lots of moms and dads leave STEM careers

9 hours 1 min ago

Nearly half of new moms and a quarter of new dads who work in STEM leave their full-time jobs after a baby’s arrival, according to a new study.

The findings show that 43 percent of new moms and 23 percent of new dads leave within four to seven years of the birth or adoption of their first child.

Women have been underrepresented in these male-dominated fields (science, technology, engineering, and math) for decades, especially as they move further up the career ladder, researchers say. Parenthood may contribute to the gender gap, in part, due to gender-related cultural expectations and workplace obstacles.

“Not only is parenthood an important driver of gender imbalance in STEM employment, both mothers and fathers appear to encounter difficulties reconciling care-giving with STEM careers,” says Erin Cech, assistant professor of sociology at the University of Michigan and lead author of the paper, which appears in the Proceedings of the National Academy of Sciences.

For the study, Cech and Mary Blair-Loy, professor of sociology at the University of California, San Diego, analyzed nationally representative longitudinal survey data from US STEM professionals collected between 2003 and 2010.

Some new mothers—about 1 in 10—continue working in STEM on a part-time basis, but that arrangement has its own drawbacks: businesses and universities typically pay part-time work substantially less per hour than full-time work, and part-time positions are less likely to include benefits like health care or opportunities for advancement.

“Our results indicate the need for employers to establish highly valued and well-paid part-time options as well as ramp-up policies that allow part-time STEM professionals to transition back into full-time work without long-term career penalties,” Blair-Loy says.

If parents leave the workforce, they are unlikely to return by the time their children are old enough to attend school, the researchers say.

“These findings point to the importance of cultural shifts within STEM to value the contributions of STEM professionals with children and the need for creative organizational solutions to help these skilled STEM professionals navigate new caregiving responsibilities alongside their STEM work,” Cech says.

“We need a cultural revolution within many fields to recognize and reward the full value of professionals who also care for children,” Blair-Loy says.

Source: University of Michigan


Fireflies inspire new energy-saving LED light bulbs

9 hours 7 min ago

Light-emitting diodes made with firefly-mimicking structures could improve efficiency, new research suggests.

The new type of LED lightbulb could one day light homes while reducing power bills, according to the findings in Optik.

“LED lightbulbs play a key role in clean energy,” says Stuart (Shizhuo) Yin, professor of electrical engineering at Penn State. “Overall commercial LED efficiency is currently only about 50 percent. One of the major concerns is how to improve the so-called light extraction efficiency of the LEDs. Our research focuses on how to get light out of the LED.”

Fireflies and LEDs face similar challenges in releasing the light they produce because the light can reflect backwards and get lost. One solution for LEDs is to texture the surface with microstructures—microscopic projections—that allow more light to escape. In most LEDs these projections are symmetrical, with identical slopes on each side.

Fireflies’ lanterns also have these microstructures, but with asymmetric sides that slant at different angles, giving a lopsided appearance.

Scanning electron microscope image of the asymmetric pyramids that researchers 3D nanoprinted. (Credit: Penn State)

“Later I noticed not only do fireflies have these asymmetric microstructures on their lanterns, but a kind of glowing cockroach was also reported to have similar structures on their glowing spots,” says Chang-Jiang Chen, doctoral student in electrical engineering and lead author of the study. “This is where I tried to go a little deeper into the study of light extraction efficiency using asymmetric structures.”

Using asymmetrical pyramids to create microstructured surfaces, the team found that they could improve light extraction efficiency to around 90 percent.

According to Yin, asymmetrical microstructures increase light extraction in two ways. First, the greater surface area of the asymmetric pyramids allows greater interaction of light with the surface, so it traps less light. Second, when light hits the two different slopes of the asymmetric pyramids there is a greater randomization effect of the reflections, which gives light a second chance to escape.

After the researchers used computer-based simulations to show that the asymmetric surface could theoretically improve light extraction, they next demonstrated this experimentally. Using nanoscale 3D printing, the team created symmetric and asymmetric surfaces and measured the amount of light emitted. As expected, the asymmetric surface allowed the release of more light.

The LED-based lighting market is growing rapidly as the demand for clean energy increases, and is estimated to reach $85 billion by 2024.

Another scanning electron microscope image of the symmetric pyramids. (Credit: Penn State)

“Ten years ago, you go to Walmart or Lowe’s, LEDs are only a small portion (of their lighting stock),” says Yin. “Now, when people buy lightbulbs, most people buy LEDs.”

LEDs are more environmentally friendly than traditional incandescent or fluorescent lightbulbs because they are longer-lasting and more energy efficient.

Two processes contribute to the overall efficiency of LEDs. The first is the production of light—the quantum efficiency—measured by how many of the electrons passing through the LED material are converted into light. Scientists have already optimized this part in commercial LEDs. The second process is getting the light out of the LED—called the light extraction efficiency.
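
As a rough way to see how the two processes combine (a back-of-the-envelope simplification, not a formula from the paper), overall efficiency can be treated as the product of the two:

```latex
% Illustrative decomposition of LED efficiency:
% overall efficiency ~= quantum efficiency x light extraction efficiency
\eta_{\text{overall}} \approx \eta_{\text{quantum}} \times \eta_{\text{extraction}}
```

On this reading, with quantum efficiency already near its practical ceiling, improvements in light extraction translate almost directly into improvements in overall efficiency.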

“The remaining things we can improve in quantum efficiency are limited,” says Yin. “But there is a lot of space to further improve the light extraction efficiency.”

In commercial LEDs, the textured surfaces are made on sapphire wafers. First, UV light is used to create a masked pattern on the sapphire surface that protects it from chemicals. When etching chemicals are then applied, they dissolve the sapphire around the pattern, creating the pyramid array.

“You can think about it this way, if I protect a circular area and at the same time attack the entire substrate, I should get a volcano-like structure,” explains Chen.

In conventional LEDs, the production process usually produces symmetrical pyramids because of the orientation of the sapphire crystals. According to Chen, the team discovered that if they cut the block of sapphire at a tilted angle, the same process would create the lopsided pyramids. The researchers altered just one part of the production process, suggesting they could easily apply the approach to commercial manufacture of LEDs.

The researchers have filed for a patent on this research.

“Once we obtain the patent, we are considering collaborating with manufacturers in the field to commercialize this technology,” says Yin.

Source: Penn State


Does nationalism begin with this little kid idea?

9 hours 17 min ago

Research finds that young children see national identity, in part, as biological—a perception that diminishes as they get older.

But despite these changes in views of nationality as we age, the work suggests the intriguing possibility that nationalist sentiments take root early in life.

“As children grow up, they continue to think an individual’s nationality is a stable aspect of their identity—not linked to biology, but nonetheless something that is informative about who they are as a person and reaching beyond the formalities of citizenship,” explains Andrei Cimpian, an associate professor in New York University’s psychology department and the senior author of the study in the Journal of Experimental Psychology: General.

“To speculate, it is possible that the nationalist sentiments seen among adults may be partly facilitated by psychological processes that are at work within the first decade of life.”

A source of meaning

The researchers note that despite a trend of increasing globalization in many aspects of modern life, nationality is a powerful source of meaning in people’s lives. They cite an American National Election Studies survey showing that 72 percent of Americans report that being an American is either “very important” or “extremely important” to their identity.

Moreover, the recent rise of nationalist ideologies in the United States and beyond reveals the central role of national groups in the psychological landscape of the 21st century.

In addition, previous studies have found that national groups shape people’s attitudes toward others. For example, stronger national identification predicts more negative attitudes toward immigrants and views of ethnic minorities as outsiders rather than as full-fledged citizens.

Kid questions

Given the influence of national group concepts over how people view themselves and others, Cimpian and first author Larisa Hussak sought to understand not only how these concepts are represented in people’s minds, but also what shape they take early in life and whether they evolve over time.

To do so, they conducted a series of experiments that included samples of American children aged 5 to 8. To compare children’s views with those of American adults, Cimpian and Hussak recruited adults, who were in their mid-30s on average, using Amazon’s Mechanical Turk—an online platform, commonly used in behavioral science research, in which individuals are compensated for completing small tasks.

In these experiments, the subjects were given a series of prompts aimed at probing their views on the factors behind national identity. For instance, they saw pictures of young children who were identified as Americans or other nationalities (e.g., Canadian) and were asked whether national group membership is manifested in a person’s biology (e.g., could it be detected “in their insides”?).

They were also asked if they believed this membership was informative about other aspects of a person (e.g., what games they liked to play at recess). In addition, they were asked what might explain differences in national traditions—for example, if they believed that Americans eat a lot of apple pie because of some inherent features of Americans or because of their history or circumstances.

The American advantage

The authors also sought to measure how American children think about the advantages they have by virtue of belonging to their national group. Specifically, they gauged whether children believed that inequalities favoring their group (i.e., Americans) are legitimate and fair.

The two items in this measure portrayed Americans as having an economic advantage over two different, unfamiliar non-American groups (e.g., “Americans tend to have a lot more money than Daxians” [a fictional nation]). To facilitate children’s understanding, the researchers showed them pictures of an American flag and a non-American flag while presenting the inequality information. The researchers asked children three questions about each inequality, in random order:

  • whether they thought it was fair that Americans had an advantage,
  • whether they thought the inequality was OK,
  • and whether Americans deserved their advantage.

Overall, the results reveal the following about how both children and adults saw national group membership:

  • it is something that is stable and hard to shed;
  • it is informative about a person’s behaviors and preferences;
  • and its meaning goes beyond the formalities of citizenship.

How meaningful is nationality?

“The early-developing belief that nationality tracks deep, meaningful aspects of the social world appears to remain a part of adults’ concepts of national identity, providing further insight into why these concepts are psychologically powerful,” observes Hussak, a doctoral student at the University of Illinois at Urbana-Champaign at the time of the research and now a consultant at EAB, a higher-education analytics firm.

Yet there were developmental differences as well. Young children were relatively more likely to assume that national identity has a physical or biological basis—something that can be detected in one’s body and passed on from one generation to the next. However, this aspect of children’s concepts of national identity diminished among the older children in the sample and was notably much less prevalent among adults.

In addition, and perhaps more significantly, as subjects aged, the belief that a person’s nationality is informative became increasingly likely to be linked to inequality-rationalizing attitudes: the stronger older children’s expectation was that a person’s nationality conveys rich information about them, the more accepting they were of status inequalities favoring their own national group.

“This work may provide a unique source of insight into current sociopolitical trends that prioritize national interests over globalization and cosmopolitanism,” says Cimpian.

This research, which took place at the Cognitive Development Lab at the University of Illinois at Urbana-Champaign and New York University, had support from a National Science Foundation Graduate Research Fellowship.

Source: New York University


Nanoparticle delivery service aids cartilage repair

12 hours 12 min ago

Researchers report a new way to deliver treatment for cartilage regeneration.

The nanoclay-based platform for sustained and prolonged delivery of protein therapeutics has the potential to improve the treatment of osteoarthritis, says study leader Akhilesh K. Gaharwar, assistant professor in the biomedical engineering department at Texas A&M University. The degenerative disease affects nearly 27 million Americans and results from breakdown of cartilage that can lead to damage of the underlying bone.

As America’s population ages, osteoarthritis is likely to become a bigger issue. One of the greatest challenges with treating osteoarthritis and subsequent joint damage is repairing the damaged tissue, especially as cartilage tissue is difficult to regenerate.

One method for repair or regeneration of damaged cartilage tissue is to deliver therapeutic growth factors—a special class of proteins that can aid in tissue repair and regeneration. However, current versions of growth factors break down quickly and require a high dose to achieve a therapeutic effect. Recent clinical studies have demonstrated significant adverse effects from this kind of treatment, including uncontrolled tissue formation and inflammation.

In the new study in ACS Applied Materials and Interfaces, Gaharwar’s lab reports designing two-dimensional mineral nanoparticles to deliver growth factors for a prolonged duration to overcome this drawback. These nanoparticles provide a high surface area and dual charged characteristics that allow for easy electrostatic attachment of growth factors.

“These nanoparticles could prolong delivery of growth factors to human mesenchymal stem cells, which are commonly utilized in cartilage regeneration,” Gaharwar says. “The sustained delivery of growth factors resulted in enhanced stem cell differentiation towards cartilage lineage and can be used for treatment of osteoarthritis.”

“By utilizing the nanoparticle for therapeutic delivery it is possible to induce robust and stable differentiation of stem cells,” says Lauren M. Cross, senior author of the study and research assistant in the biomedical engineering department. “In addition, prolonged delivery of the growth factor could reduce overall costs by reducing growth factor concentration as well as minimizing the negative side effects.”

The National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health and the National Science Foundation supported the work.

Source: Texas A&M University


Gold nanowires grow longer with vitamin C

12 hours 30 min ago

Some vitamin C helps turn small gold nanorods into fine gold nanowires, report researchers.

Common, mild ascorbic acid is the not-so-secret sauce that helped the Rice University lab of chemist Eugene Zubarev grow pure batches of nanowires from stumpy nanorods without the drawbacks of previous techniques.

“There’s no novelty per se in using vitamin C to make gold nanostructures because there are many previous examples,” Zubarev says. “But the slow and controlled reduction achieved by vitamin C is surprisingly suitable for this type of chemistry in producing extra-long nanowires.”

Details of the work appear in the journal ACS Nano.

The nanorods are about 25 nanometers thick at the start of the process, and they stay that thickness as they lengthen. Once they pass 1,000 nanometers in length, the objects are considered nanowires, and that matters. The wires’ aspect ratio—length over width—dictates how they absorb and emit light and how they conduct electrons. Combined with gold’s inherent metallic properties, that could enhance their value for sensing, diagnostic, imaging, and therapeutic applications.

Zubarev and lead author Bishnu Khanal, a Rice chemistry alumnus, succeeded in making their particles go far beyond the transition from nanorod to nanowire, theoretically to unlimited length.

The researchers also show that the process is fully controllable and reversible. That makes it possible to produce nanowires of any desired length, and thus the desired configuration for electronic or light-manipulating applications, especially those that involve plasmons, the light-triggered oscillation of electrons on a metal’s surface.

The nanowires’ plasmonic response is tunable to emit light from visible to infrared and theoretically far beyond, depending on their aspect ratios.

The process is slow, so it takes hours to grow a micron-long nanowire. “In this paper, we only reported structures up to 4 to 5 microns in length,” Zubarev says, “but we’re working to make much longer nanowires.”

The growth process only appeared to work with pentahedrally twinned gold nanorods, which contain five linked crystals. These five-sided rods—”Think of a pencil, but with five sides instead of six,” Zubarev says—are stable along the flat surfaces, but not at the tips.

“The tips also have five faces, but they have a different arrangement of atoms,” he says. “The energy of those atoms is slightly lower, and when new atoms are deposited there, they don’t migrate anywhere else.”

That keeps the growing wires from gaining girth. Every added atom increases the wire’s length, and thus the aspect ratio.

The nanorods’ reactive tips get help from a surfactant, CTAB, that covers the flat surfaces of nanorods. “The surfactant forms a very dense, tight bilayer on the sides, but it cannot cover the tips effectively,” Zubarev explains.

That leaves the tips open to an oxidation or reduction reaction. The ascorbic acid provides electrons that combine with gold ions and settle at the tips in the form of gold atoms. And unlike carbon nanotubes in a solution that easily aggregate, the nanowires keep their distance from one another.

“The most valuable feature is that it is truly one-dimensional elongation of nanorods to nanowires,” Zubarev says. “It does not change the diameter, so in principle we can take small rods with an aspect ratio of maybe two or three and elongate them to 100 times the length.”

He says the process should apply to other metal nanorods, including silver.

The National Science Foundation and Welch Foundation supported the research.

Source: Rice University


Parents need this skill for the frustrating teen years

12 hours 53 min ago

Parents who are less able to diminish their anger are more likely to resort, over time, to the use of harsh, punitive discipline and hostile conflict behavior toward their teenagers, research finds.

The field of adolescent psychology increasingly focuses on parents, with researchers asking how mothers and fathers control themselves—and their anger—in difficult interactions with their children.

“Discipline issues usually peak during toddlerhood and then again during adolescence, because both periods are really marked by exploration and figuring out who you are, and by becoming more independent,” says Melissa Sturge-Apple, a professor of psychology and dean of graduate studies at the University of Rochester.

Thinking on your feet

Yet the developmental changes during puberty and the transition to adolescence mean that parents need to adjust their parenting behaviors, she adds. Part of that adjustment is parents’ ability to think on their feet and navigate conflicts with flexibility as their teens strive for more autonomy and greater input in the decision-making processes.

Sturge-Apple is the lead author of the study about mothers’ and fathers’ capacity for self-regulation as well as hostile parenting during their child’s early adolescence. The study appears in the journal Development and Psychopathology.

In this study, Sturge-Apple and her colleagues looked at how mothers and fathers regulated their stress in response to conflict with their adolescent children. They then examined how the stress response affected their discipline of the child. The researchers measured parents’ physiological regulation using RMSSD, a common way to assess heart rate variability. The laboratory-based assessments took place roughly one year apart.
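
The article doesn’t define RMSSD: it is the root mean square of successive differences between consecutive interbeat (RR) intervals, with higher values generally reflecting greater heart rate variability. Here is a minimal sketch of the standard calculation (the sample values are made up for illustration, not data from the study):

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between consecutive
    interbeat (RR) intervals, in milliseconds. Higher values generally
    indicate greater vagally mediated heart rate variability."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR intervals (ms) recorded during a conflict discussion
print(rmssd([812, 790, 845, 801, 830]))  # ~39.6 ms
```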

Why ‘set shifting’ matters

The scientists also measured parents’ set-shifting capacity—that is, the parents’ ability to be flexible and to consider alternative factors, such as their child’s age and development.

“Set shifting is important because it allows parents to alter flexibly and deliberately their approaches to handling the changeable behaviors of their children in ways that help them to resolve their disagreements,” says coauthor Patrick Davies.

On average, fathers were not as good as mothers at set shifting and were less able to control their physiological anger response. As a result, they were more likely to think that their teen was intentionally difficult, or “just trying to push buttons,” which in turn guided their decisions about discipline.

However, the researchers found that fathers who were better at set shifting than others were also better able to counteract difficulties in physiological regulation. Episodes of physiological dysregulation, the team discovered, predicted an increase in parents’ angry responses over time—and set shifting, essentially, offsets that tendency.

“As we learn more, these findings may have important implications for building and refining parenting programs,” says Davies. “For example, there are exercises that help increase physiological regulation in ways that may ultimately reduce hostile parenting behaviors for mothers and fathers.”

Dad the ‘enforcer’?

An obvious deficit sparked the research: more than 99 percent of parent regulation studies have focused exclusively on mothers.

There’s an irony in past research studies’ almost exclusive focus on mothers.

“Dads are typically the enforcer in the family and this role may be difficult to override,” says Sturge-Apple. “Thus, the ability to be flexible in responses may help dads, more than moms, adjust to the changes of adolescence.”

The research, which included 193 fathers, mothers, and their young teenagers (ages 12 to 14), took place at the university’s Mt. Hope Family Center. The National Institute of Child Health and Human Development supported the work.

Source: University of Rochester


Teen girls who donate blood have higher anemia risk

14 hours 6 min ago

Adolescent girls who donate blood are at greater risk for iron deficiency and anemia and should take extra steps to keep their bodies’ iron stores up to recommended levels, researchers say.

The researchers recommend these girls consider oral iron supplements, longer intervals between donations, or donating platelets or plasma instead of whole blood.

“We’re not saying that eligible donors shouldn’t donate. There are already issues with the lack of blood supply,” says Aaron Tobian, professor of pathology, medicine, oncology, and epidemiology at the Johns Hopkins University School of Medicine and co-lead author of the paper, which appears in Transfusion.

“However, new regulations or accreditation standards could help make blood donation even safer for young donors.”

250 milligrams of iron

Each year, an estimated 6.8 million people in the United States donate blood, according to the American Red Cross, and adolescents are an increasing part of the donor pool, thanks to high school blood drives. In 2015, donors aged 16 to 18 made about 1.5 million blood donations.

Although donation is largely safe, adolescents are at a higher risk for acute donation-related problems, such as injuries from fainting during donation, the researchers say.

Blood donation can also increase the risk of iron deficiency: each whole blood donation removes about 200 to 250 milligrams of iron. Adolescents typically have lower blood volumes, so donating the same amount of blood as adults leads to a higher proportional loss of iron during donation. Girls are at even greater risk of iron deficiency than boys due to blood losses during monthly menstruation.

Previous studies show that being younger, being a girl, and donating more often are all associated with lower serum ferritin levels (a marker for total body iron levels) in blood donors. No previous study using nationally representative data, however, had compared the prevalence of iron deficiency and associated anemia between donor and non-donor populations, specifically in adolescents.

Vulnerable donors

The researchers analyzed data from the National Health and Nutrition Examination Survey, a long-running US Centers for Disease Control and Prevention study, which included collections of blood samples as well as questions about blood donation history, from 1999 to 2010.

The findings show that about 10.7 percent of the adolescents in the study had donated blood within the previous 12 months, compared with about 6.4 percent of adults. Mean serum ferritin levels were significantly lower among blood donors than among non-donors in both the adolescent (21.2 vs. 31.4 nanograms per milliliter) and the adult (26.2 vs. 43.7 nanograms per milliliter) populations.

The prevalence of iron deficiency anemia was 9.5 percent among adolescent donors and 7.9 percent among adult donors—both low numbers, but still significantly higher than that of non-donors in both age groups, which was 6.1 percent. Further, 22.6 percent of adolescent donors and 18.3 percent of adult donors had absent iron stores.

Collectively, the researchers say, the findings highlight the vulnerability of adolescent blood donors to associated iron deficiency.

Tobian and biostatistician Eshan Patel, the study’s co-lead author, note that some federal policies and regulations are already in place to protect donors of all ages from iron deficiency. For instance, donors undergo hemoglobin screening, must meet a minimum weight, and must wait eight weeks between whole blood donations.
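In code terms, the safeguards listed here amount to a simple eligibility gate. A toy sketch (the function and variable names are hypothetical; real eligibility rules are set by regulators and blood centers and include many more criteria):

```python
from datetime import date, timedelta
from typing import Optional

# Minimum wait between whole blood donations mentioned in the article
MIN_DONATION_INTERVAL = timedelta(weeks=8)

def may_donate_whole_blood(passed_hemoglobin_screen: bool,
                           meets_minimum_weight: bool,
                           last_donation: Optional[date],
                           today: date) -> bool:
    """Toy encoding of three safeguards: hemoglobin screening,
    a minimum weight, and an eight-week wait between donations."""
    waited_long_enough = (last_donation is None or
                          today - last_donation >= MIN_DONATION_INTERVAL)
    return passed_hemoglobin_screen and meets_minimum_weight and waited_long_enough

# Donated Jan 1; by Feb 19 only ~7 weeks have passed, so not yet eligible
print(may_donate_whole_blood(True, True, date(2019, 1, 1), date(2019, 2, 19)))  # False
```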

Source: Johns Hopkins University


Why working moms need ‘work-family justice’

Tue, 2019-02-19 19:43

Big changes in US policies and cultural attitudes are necessary to bring work-life balance to America’s working mothers and their families, argues sociologist Caitlyn Collins.

Stress and exhaustion dominate the lives of working mothers in the United States. And no wonder: Of all western industrialized countries, the US ranks dead last for policies that support working mothers and their families.

Unlike those in most European countries—and practically every other industrialized nation—American mothers have no access to federal paid parental leave and no minimum standard for vacation and sick days. Many American moms still struggle to raise families in a country with one of the highest gender wage gaps and the highest maternal and child poverty rates.

Despite these disparities, work-family conflict is not an inevitable feature of contemporary life, argues Collins, assistant professor of sociology at Washington University in St. Louis, and the author of a new book, Making Motherhood Work: How Women Manage Careers and Caregiving (Princeton University Press, 2019), that details how the United States’ exceptionally family-hostile public policies are hurting women and children.

In the book, Collins takes readers into the kitchens, living rooms, parks, cafés, office cubicles, and conference rooms where working mothers’ daily lives play out in Sweden, Germany, Italy, and the United States. She explores how 135 middle-class women navigate employment and motherhood given the different work-family policies available in each country.

Drawing on in-depth interviews with women in each country conducted over five years, Collins articulates just how crucial government policies and cultural support are to ensuring “work-family justice,” which she describes as an assurance that “every member of society has the opportunity and power to fully participate in both paid work and family care.”

The book offers a clear, research-based argument that the US is failing its mothers and families. America’s mothers don’t need more highly individualistic tips on achieving work-family balance. They need justice.

Unrealistic standards

Blueprints for achieving these changes emerge readily from her intimate examination of the daily lives of working mothers across the four countries. Through side-by-side comparisons, she demonstrates that improving the lives of mothers and their families in the US requires changes both in public policies and cultural attitudes.

In Sweden—renowned for its gender-equal policies—mothers assume they will receive support from their partners, employers, and the government. In the former East Germany, with its history of mandated employment, mothers don’t feel conflicted about working, but some curtail their work hours and ambitions.

Mothers in western Germany and Italy, where maternalist values are strong, face stigma for pursuing careers. Meanwhile, American working mothers stand apart for their guilt and worry.

And Collins reminds readers: these women are middle-class. They’re the proverbial canaries in a coal mine for mothers’ work-family conflict. Low-income women, too often racial/ethnic minorities, have far fewer resources to draw on and less support to reduce their stress than those Collins interviewed. So if middle-class mothers are engulfed in stress, less advantaged mothers’ difficulties are likely far more acute.

Policies alone, Collins discovers, cannot solve women’s struggles. Easing them will require a deeper understanding of cultural beliefs about gender equality, employment, and motherhood. With women held to unrealistic standards in all four countries, the best solutions demand that we redefine motherhood, work, and family.

Real stories

In the book, Collins writes of Samantha, a Washington, DC, lawyer, who before she had children heard that she could do anything, that she could be at the top of her field. That’s a “load of crap,” she says later. “I can’t do everything. If I keep all the balls in the air, I’m broken.”

Donetta, a professor in Rome, recalls how her supervisor warned her not to get pregnant or her career would be over. So “at work,” she explains, “you don’t even mention your family.… You are pretending you don’t have anything to do at home.”

From women in the western German cities of Munich, Stuttgart, and Heilbronn, Collins hears the terms “career whore” and Rabenmutter, or “raven mother,” which refers to a selfish woman who abandons her young in the nest to fly off and pursue a career.

Culture is key

Collins demonstrates that policies alone do not fully account for or solve women’s problems. Work-family policies, she argues, are symptomatic of larger cultural understandings of what is and isn’t appropriate for mothers, and as such they play a role in reproducing the existing social order.

She shows that what mothers want and expect regarding work and family depends on their social context, as do the solutions they employ to alleviate their stress. She highlights how the larger cultural context—including beliefs about gender equality, employment, and motherhood—is crucial for understanding and ameliorating mothers’ difficulties.

“Women’s perspectives should be central to any endeavors in the US to craft, advocate for, and implement work-family policy as a force for social change,” Collins says.

“By gaining firsthand knowledge of how working mothers combine paid work with child-rearing in countries with diverse policy supports, I expose both the promise and the limits of work-family policy for reducing mothers’ work-family conflict and achieving gender equality.”

“Working mothers’ struggles to reconcile employment and motherhood, as well as the policy solutions to resolve this conflict, are of urgent public concern,” adds Collins. “Our government depends on mothers. So why are we failing to support them?”

Source: Washington University in St. Louis


In tough times, single moms spend more on kids’ health

Tue, 2019-02-19 19:39

When money is tight, single mothers shift more of their health care dollars to their children and away from themselves, research shows.

An economic shock—such as the loss of a job, income, wealth, or health insurance—does not affect two-parent families the same way, according to the new study.

“In particular, we were interested in whether parents sacrifice their own health care spending in favor of spending for children during such times,” says lead author Alan Monheit, a professor of health economics and public policy at Rutgers School of Public Health and a research professor at the Center for State Health Policy at Rutgers Institute for Health, Health Care, and Aging.

“We sought to identify family types whose health care spending was especially vulnerable to changes in their economic status, and whether particular family members’ health care spending was at risk due to a loss in economic status.”

For the study, which appears in Review of Economics of the Household, researchers looked at total health care spending data from 8,960 families in the 2004 to 2012 Medical Expenditure Panel Survey (MEPS).

The findings show that both realized income losses and expectations of a decline in economic status, such as an increase in the national unemployment rate, have a significant impact on the health care spending decisions of single mothers compared to two-parent families.

“Single mothers shift health spending away from themselves and to their children,” Monheit says. “This shift occurs in single-mother families with regard to total family health care spending, and in some cases with regard to spending for ambulatory care, physician office-based care, and dental care. We find no such shifts in spending in two-parent families.”

The findings are consistent with altruistic behavior by single mothers toward their children, and they speak to the vulnerability of single-mother families compared to two-parent families.

“Our findings may reflect the constrained economic circumstances of many single-mother families, and the difficult trade-offs such families must make to maintain the welfare of their children,” Monheit says.

When families can’t afford regular health care, it can cause delayed treatment, declining health, and greater reliance on hospital emergency departments, which increases overall health care costs for society.

“Our research raises two important questions,” Monheit says.

“First, is the shift in spending in single-mother families in response to an economic shock likely to be transitory or longer-term in nature? Second, are existing public policy interventions, such as the Affordable Care Act’s Medicaid expansion and Medicaid/CHIP programs in non-expansion states, sufficient to address the health spending consequences of an income loss by single-mother families?”

Source: Rutgers University


Neurons work together to form our sense of texture

Tue, 2019-02-19 15:29

As neurons in somatosensory cortex process information about texture from sensors in the skin, they each respond differently to various features of a surface, creating a high-dimensional representation of texture in the brain, researchers report.

Our hands and fingertips are amazingly sensitive to texture. We can easily distinguish coarse sandpaper from smooth glass, but we also pick up more subtle differences across a wide range of textures, like the slick sheen of silk or the soft give of cotton. The somatosensory cortex is the part of the brain responsible for interpreting the sense of touch.

“Objects can have textures that we can describe in simple terms like rough or soft or hard. But they can also be velvety or cottony or furry,” says Sliman Bensmaia, associate professor of organismal biology and anatomy at the University of Chicago and senior author of the study.

A rotating drum covered with strips of different fabrics, sandpapers, and patterns to measure how the skin and nervous system respond to texture. (Credit: Matt Wood/U. Chicago)

“The variety of different adjectives you can use to describe texture just highlights that it’s a rich sensory space. So it makes sense that you need to have a rich neural space in the brain to interpret that too.”

Different feelings

In a 2013 study in PNAS, Bensmaia’s lab showed how different kinds of nerve fibers respond to different aspects of texture. Some nerves respond mainly to spatial elements of coarse textures, like the raised bumps of a Braille letter that create a pattern when pressed against the skin. Others respond to vibrations created when the skin rubs across fine textures, like fabrics, which account for the vast majority of textures we encounter in the real world.

In that study, Bensmaia and his colleagues used a rotating drum covered with strips of various coarse and fine textures, such as sandpaper, fabrics, and plastics. The drum ran the textures across the fingertips of Rhesus macaque monkeys, whose somatosensory system is similar to that of humans, while the researchers recorded the responses in the nerve.

For the new study, the researchers recorded the corresponding responses to the same textures directly from the brain, using electrodes implanted into the somatosensory cortex of the monkeys.

The new data show that the neurons respond in a highly idiosyncratic way to different aspects of texture. Some neurons respond to coarse features of a texture. Others respond to fine features, certain patterns of indentation in the skin, or any number of combinations in between. The researchers identified at least 20 different patterns of response.

Rich sensations

“Some of them map onto things we understand, like roughness or the spatial pattern of a texture,” Bensmaia says. “But then it becomes combinations of skin vibration coupled with patterns of skin deformation, things that are abstract and a little harder to describe.”

But these more abstract features of texture are what can make the difference in being able to distinguish between bedsheets with different thread counts. The researchers recorded responses to 55 different textures, and Bensmaia says he can tell which texture was used just by looking at the pattern of activity it generated in the brain.

“Velvet is going to excite one subpopulation of neurons more than another, and sandpaper is going to excite another overlapping population,” he says. “It’s this variety in the response that allows for the richness of the sensation.”

Prosthetic hands

Along with Nicho Hatsopoulos, who studies how the brain directs movement in the limbs, Bensmaia also pioneered research to build brain-controlled robotic prosthetic limbs. These devices work by implanting arrays of electrodes in the somatosensory cortex and areas of the brain that control movement. The electrodes pick up activity in neurons as the patient thinks about moving their own arm to direct the robotic arm to move accordingly. The prosthetic hand is fitted with sensors to detect sensations of touch, such as pressing on individual fingertips, which in turn generates electrical signals that stimulate the appropriate areas of the brain.

Theoretically, the same technique could recreate sensations of texture, but Bensmaia points out that the new study shows why this could be a challenging task.

The neurons that correspond to each fingertip are located in clearly defined areas of the somatosensory cortex, so it’s easier to stimulate the appropriate spot for a given touch. But neurons throughout the somatosensory cortex respond to texture inputs, and they’re mixed together. There’s no defined region of neurons that respond to sandpaper or the plastic keyboard of a laptop, for example.

“It’s going to be pretty challenging to be able to create textural sensations through electrical stimulation, because you don’t have these monolithic groups of neurons working together,” he says. “It’s very heterogeneous, which could make it difficult to implement in prosthetics. But that’s also how we get this rich sensation of texture in the first place.”

The research appears in the Proceedings of the National Academy of Sciences. Funding for the work came from the National Institute of Neurological Disorders and Stroke.

Source: University of Chicago


Red phosphorus barrier makes batteries safer

Tue, 2019-02-19 15:16

Scientists have taken the next step toward the deployment of powerful, rechargeable lithium metal batteries by making them safer and simpler to manufacture.

Researchers made test cells with a coat of red phosphorus on the separator that keeps the anode and cathode electrodes apart. The phosphorus acts as a spy for the management systems that charge and monitor batteries, detecting the formation of dendrites—protrusions of lithium that can cause batteries to fail.

Lithium metal anodes charge much faster and hold about 10 times more energy by volume than common lithium-ion anodes used in just about every electronic device on the market, including cellphones and electric cars. Anodes are one of two electrodes needed for battery operation.

But charging lithium-infused anodes forms dendrites that, if they reach the cathode, cause a short circuit and possibly a fire or explosion. When a dendrite reaches a red phosphorus-coated separator, the battery’s charging voltage changes. That tells the battery management system to stop charging.

Unlike other proposed dendrite detectors, the new strategy doesn’t require a third electrode.

“Manufacturing batteries with a third electrode is very hard,” says chemist James Tour, whose Rice University lab conducted the research. “We propose a static layer that gives a spike in the voltage while the battery is charging. That spike is a signal to shut it down.”

The red phosphorus layer had no significant effect on normal performance in experiments on test batteries.

The researchers built a transparent test cell with an electrolyte (the liquid or gel-like material between the electrodes and around the separator that allows the battery to generate a current) known to accelerate aging of the cathode and encourage dendrite growth. That let them monitor the voltage while they watched dendrites grow.

With an ordinary separator, they saw the dendrites contact and penetrate the separator with no change in voltage, a situation that would lead a normal battery to fail. But with the red phosphorus layer, they observed a sharp drop in voltage when the dendrites contacted the separator.

“As soon as a growing dendrite touches the red phosphorus, it gives a signal in the charging voltage,” says Tour, chair in chemistry and professor of computer science and of materials science and nanoengineering. “When the battery management system senses that, it can say, ‘Stop charging, don’t use.'”
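
In code terms, the detection logic described here amounts to watching the charging voltage for an abrupt change. A minimal sketch of the idea (the function name and threshold are illustrative assumptions, not the team’s actual battery management logic):

```python
def should_stop_charging(voltage_trace, jump_threshold=0.1):
    """Return True if consecutive charging-voltage samples change
    abruptly, as happens when a growing dendrite touches the red
    phosphorus layer. Threshold is illustrative, in volts."""
    for previous, current in zip(voltage_trace, voltage_trace[1:]):
        if abs(current - previous) >= jump_threshold:
            return True
    return False

# Example: a steady charge curve followed by a sudden voltage drop
print(should_stop_charging([3.70, 3.71, 3.72, 3.55]))  # True
```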

Last year, the lab introduced carbon nanotube films that appear to completely halt dendrite growth from lithium metal anodes.

“By combining the two recent advances, the growth of lithium dendrites can be mitigated, and there is an internal insurance policy that the battery will shut down in the unlikely event that even a single dendrite will start to grow toward the cathode,” Tour says.

“Literally, when you make a new battery, you’re making over a billion of them,” he says. “Might a couple of those fail? It only takes a few fires for people to get really antsy. Our work provides a further guarantee for battery safety. We’re proposing another layer of protection that should be simple to implement.”

The Air Force Office of Scientific Research supported the research, which appears in Advanced Materials.

Source: Rice University


‘Killer cells’ could lead to one-time flu vaccine

Tue, 2019-02-19 13:45

“Killer immune cells” can actually fight all strains of the influenza virus, report researchers. The finding could potentially lead to a universal, one-shot flu vaccine.

In the battle against the flu, these killer immune cells are like the body’s border control.

The white blood cells remember previous exposure to a flu strain and if they recognize an invader, they start an immune response to target and kill off the virus—stopping the infection.

But the three types of influenza virus that can infect humans—strains A, B, and C—are problematic. They circulate in the human population globally, and mutate every flu season.

The virus is smart. It mutates in order to hide from our immune system, which means we need a new flu vaccination against the latest strains every year. These mutations can also occur when the virus transmits between humans and animal hosts, like birds.

Strain A is usually associated with flu pandemics, while both A and B are associated with annual influenza epidemics. The less common strain C can be responsible for severe illness in children.

Outbreak in China

Despite hopes that scientists could use the “memories” of killer cells (formally known as CD8+T cells) to create a vaccine that would last for life, previous studies have shown that these cells could only mount a repeated attack against strain A.

“Our team has been fascinated by the killer cells for a long time,” says Katherine Kedzierska, a professor at the Peter Doherty Institute for Infection and Immunity at the University of Melbourne and lead author of the paper in Nature Immunology.

Working with scientists at Fudan University in China, researchers studied the immune responses of patients to the first outbreak of the avian-derived H7N9 influenza virus (bird flu) in China in 2013. The virus was contracted directly from birds, and a type A strain dominated the outbreak. More than 90 percent of infected people went to the hospital, and 35 percent of those people died.

Researchers discovered that patients who recovered within two to three weeks had robust killer CD8+T cell responses. Those who died had a diminished presence of them.

“So our next step was to discover how their protective mechanism worked, and if it had potential for a flu vaccine,” Kedzierska says.

Best target?

The flu virus is composed of a protein coat that covers the genetic code at its core. The team analyzed which parts of the flu virus commonly showed up in strains A, B, and C in order to find out which would be the best target for a universal vaccine.

When infected, our cells dissect the flu virus and use a protein called HLA to present parts of the virus (peptides) on the cell surface, alerting the immune system that it’s been compromised.

This HLA and viral peptide combination acts as a passport or unique identifier, known as an epitope. When killer cells recognize it, they are triggered to kill off the infected cell.

So researchers focused on which epitopes were common in all three flu strains.

“Our first experiments were like finding a needle in a haystack,” says PhD candidate Marios Koutsakos. “We started with 67,000 viral sequences to look for epitopes common among all the flu viruses.”

Common parts

Scientists eventually narrowed these tens of thousands of sequences down to three cross-reactive epitopes—that is, epitopes common to all flu viruses.

“We identified the parts of the virus that are shared across all flu strains, and sub-strains capable of infecting humans,” Koutsakos says.

The researchers used mass spectrometry for the work. “We determined the structure and chemical properties of different parts of the virus that had not changed in 100 years,” he says.

The researchers found the flu virus epitopes in blood samples taken from healthy humans, and influenza-infected adults and children. They then used the peptides responsible for activating the killer cells to conduct vaccination tests on mice.

“Our vaccination test studies revealed remarkably reduced levels of flu virus and inflammation in the airways in animal models,” Koutsakos says. “These results show that killer T cells provide unprecedented immunity across all flu viruses, a key component of a potential universal vaccine.”

The team has a patent on the discoveries, which means they can develop a universal influenza vaccine approach to reduce the impact of pandemic and seasonal influenza around the world.

Additional researchers are from the University of Melbourne; Monash University; the WHO Collaborating Centre for Reference and Research on Influenza; the Royal Melbourne Hospital; St Jude Children’s Research Hospital; Seqirus; Université Paris-Saclay Cachan; Federation University; St Vincent’s Institute; The Alfred Hospital; the University of New South Wales; the University of Sydney; and the Garvan Institute.

Source: University of Melbourne


Broken ‘brake’ makes it harder to stop drinking

Tue, 2019-02-19 13:27

A broken neurobiological mechanism might explain why a certain subset of people can’t stop themselves from drinking excessively, even in the face of nausea, dizziness, or loss of control.

Karen Szumlinski, a neuroscientist from the University of California, Santa Barbara, who investigates binge drinking and the stress that repeated overdrinking places on the brain, suggests a neurobiological mechanism might underpin this behavior.

Researchers have uncovered a mechanism in a small brain structure called the bed nucleus of the stria terminalis (BNST) that helps sense alcohol’s negative effects and modulates the urge to drink. When it doesn’t function properly, however, we lose the ability to perceive when we’ve had enough—or, perhaps, one too many—and we continue to drink.

“If a little bit of intoxication is making you nervous, the BNST is doing its job,” says Szumlinski, a coauthor of a paper in the Journal of Neuroscience.

Stop or keep going?

The urge for us to do virtually anything comes from signals that loop in and around our brains in areas that govern our perceptions, emotions, and desires. These in turn connect to our motor functions and create behaviors. This process involves a complex set of signaling pathways, involving many neurotransmitters, as well as their associated proteins and receptors.

The pathways examined in this study are specific to an area of the brain highly implicated in the interface between anxiety and motivation—the BNST, which is connected to, among other things, both the amygdala (which modulates fear and anxiety) and the nucleus accumbens (reward, aversion, motivation).

In previous studies, the researchers found that binge drinking elevates several aspects of signaling through the excitatory neurotransmitter glutamate in both the amygdala and the nucleus accumbens. Using a variety of experimental approaches, they also showed that this increased glutamate signaling drove excessive drinking. Along with the BNST, these regions form a subcircuit in the brain known as the extended amygdala.

“So in the amygdala the increased glutamate signaling is going to possibly generate negative emotions, and maybe you start feeling depressed or anxious, and then that will translate to a higher motivation to drink coming out of the nucleus accumbens,” Szumlinski says.

Alcoholism—addiction in general—is a shifting target that moves between the motivation toward the “feel-good” effects of the drug and the motivation to avoid unpleasant withdrawal symptoms or simply to feel normal again once dependency has been established.

The researchers initially presumed that because the BNST is connected to both structures, the function of high glutamate signaling in the BNST is similar to that of the nucleus accumbens and the amygdala. But instead they found it contains a “brake” mechanism, an adaptive response to limit alcohol consumption. And pumping that pedal is a scaffolding protein called Homer2.

As it turns out, Homer2’s effects on the amygdala and nucleus accumbens are opposite to those in the BNST.

When the brakes go out

“When we manipulated Homer2 (in mouse models)—when we knocked it down in the amygdala or the accumbens—the animals stopped binge drinking,” Szumlinski says. When they reduced the expression of Homer2 in the BNST, however, the animals binge drank more. And according to Szumlinski, a lot more.

“We know that the ability of Homer2 to interact with the glutamate receptors (mGlu5) can be regulated in a number of ways,” she continues. “And so we wanted to know: What other part(s) of the signaling pathway is interacting with Homer2, and how might that be contributing to the brake process in the BNST?”

They found their answer in an enzyme. Extracellular signal-regulated kinase (ERK, for short) is another of the usual suspects in the realm of addictive disorders. In a mouse model carrying a mutation in its mGlu5 receptors that kept ERK from phosphorylating them, the researchers found that the mutation had an unexpectedly large impact on alcohol preference and consumption.

“…if any kink happens in that little bit of signaling there, you lose the brakes.”

“Based on the available biochemical information at the time we started testing the mGlu5 mutant mice, we predicted a minimal impact on any behavior,” Szumlinski says. The receptor still worked, it just wasn’t sensitive to ERK, she explains. “But there was a huge impact on drinking behavior in a direction opposite to what we predicted.”

The mutant mice instead exhibited strong preferences for environments in which they experienced the effects of high-dose alcohol (doses that normal mice find aversive) and the mice consumed large amounts of high-dose alcohol under a number of different drinking procedures.

“So it really showed that something’s going on when you drink alcohol,” she says of this brake in the BNST. “You’re activating this enzyme ERK, which would normally phosphorylate the mGlu5 receptor, and help Homer2 bind better.

“All of this together serves as a brake to reduce or at least curb your alcohol consumption. But if any kink happens in that little bit of signaling there, you lose the brakes. Your brake line has been cut, and now you exhibit uncontrolled drinking behavior.”

Busted feedback loop

While all of that is playing out behaviorally, tampering with the BNST also seems to shut down or interfere with the typical aversive feedback that would normally prompt the drinker to stop—the nausea, dizziness, and lack of control.

Interestingly, Szumlinski adds, disrupting ERK-mGlu5 signaling also makes an animal overtly more drunk: compared to normal mice, the mGlu5 mutants lost their motor coordination on low doses of alcohol and remained asleep longer when administered higher doses. Typical mice find increased alcohol sensitivity aversive. The mGlu5 mutants, however, appear falling-over drunk to observers yet seem to interpret their situation as just fine.

It’s a jump to link the behavior of drunk lab mice and drunk people, notes Szumlinski, but there are connections that can be made in the array of complex brain processes that drive alcoholism.

“How we perceive how drunk we are is going to influence our subsequent drinking,” Szumlinski says. “Although their behavior is telling us they are completely intoxicated, maybe they don’t feel hammered. Or maybe when they’re feeling drunk, they don’t perceive that as a bad thing. Their awareness of their intoxicated state does not line up with their high-dose alcohol preference or their drinking behavior. And so presumably that might have something to do with BNST glutamate function.”

These results fly in the face of the widely accepted notion that a person’s sensitivity to alcohol dictates their likelihood of drinking, Szumlinski says.

“There’s a lot of literature, including lots of human data, that says if you are more sensitive to the intoxicating effects of alcohol, you are less likely to drink,” Szumlinski says. “We see this in the genetic literature with people who have various enzyme mutations. Examples of these sensitivities include, among other reactions, the flushing, headaches, or nasal congestion that some people experience when they consume alcohol.

“This study says you can be incredibly sensitive to the intoxicating effects of alcohol, but that doesn’t necessarily feed back on you the way it should,” she continues.

“And, presumably, the ability of that intoxication to signal to your body: ‘Hey, stop drinking,’ is somehow regulated by the BNST. The big questions now are: What is the identity of the neural circuit containing the BNST that allows the brakes to engage and how do ‘bad BNST brakes’ relate to Alcohol Use Disorder in the human condition?”

Additional researchers who contributed to the work are from UC Santa Barbara, the University of New South Wales, and the Johns Hopkins University School of Medicine.

Source: UC Santa Barbara


Could cockroaches really survive a nuclear apocalypse?

Tue, 2019-02-19 13:04

Many people believe that cockroaches could survive a nuclear bomb and the subsequent radiation exposure, but is that actually true?

The creepy crawlies do have a reputation for resilience, which media reports have suggested may stem from rumors that insects thrived in the aftermath of the atomic bombings of Hiroshima and Nagasaki.

But Tilman Ruff, a Nobel Laureate and professor in the School of Population and Global Health at the University of Melbourne who studies the health and environmental consequences of nuclear explosions, says he has yet to see any documented evidence that there were cockroaches scuttling through the rubble.

“I’ve certainly seen photographs of injured people in Hiroshima that have lots of flies around, and you do imagine some insects would have survived,” Ruff says. “But they still would have been affected, even if they appear more resistant than humans.”

Roaches’ bad rap

The TV series Mythbusters tested the cockroach survival theory in 2012 when they exposed cockroaches to radioactive material. The roaches survived longer than humans would have, but they all died at extreme levels of radiation.

Mark Elgar, a professor at the School of Biosciences, says the Mythbusters tests were incomplete because they only looked at how many days the cockroaches lived after exposure. They didn’t examine the cockroaches’ ability to produce viable eggs, which would ensure the continued survival of the species.

“There is some evidence that they seem quite resilient to gamma rays, although they are not necessarily the most resistant across insects.”

“You could argue,” Elgar adds, “that some ants, particularly those that dig nests deep into the ground, would be more likely to survive an apocalypse than cockroaches.”

“[American and German cockroaches’] habit of basically acting as an unpaid house cleaner horrifies people.”

Previous tests of insects subjected to radiation found that cockroaches, though six to 15 times more resistant than humans, would still fare worse than the humble fruit fly.

Elgar says the feral American and German species of cockroach—the ones you might recognize from your kitchen nooks and crannies—have given the other species a bad rap.

“I think our view of cockroaches is informed by our frequent interaction with the American and German cockroaches, which have spread throughout the world,” Elgar says. “Their habit of basically acting as an unpaid house cleaner horrifies people.”

There are more than 4,000 species of cockroaches, however, including native Australian cockroaches marked by iridescent colors and patterns.

“Some of the Australian bush cockroaches are really lovely looking insects, which might change people’s perspectives,” he says. “The Mardi Gras cockroach, for example, has got these lovely yellow patterns on its plates and bright blue legs with little black spots.”

Cockroaches breed quickly, lay large numbers of eggs, and are harder to kill with chemicals than other household insects—all traits that could contribute to the popular belief that they could withstand anything, even a nuclear bomb.

“They are quite well defended. If you try and squish a cockroach it usually gives off an unpleasant smell that acts as a pretty effective deterrent for anything attempting to capture them,” Elgar says. “They’re flat, so they can escape into places you can’t easily access.”

After the bombs fall

Cockroaches feed off the detritus of other living organisms, however, so Elgar questions whether they would be able to thrive without humans and other animals.

“For a while they’ll be able to eat dead bodies and other decaying material but, if everything else has died, eventually there won’t be any food. And they’re not going to make much of a living,” Elgar says.

“The reality is that very little, if anything, will survive a major nuclear catastrophe, so in the longer term, it doesn’t matter really whether you’re a cockroach or not.”

A nuclear explosion “knocks the electrons off atoms and changes the chemistry of things.”

Nuclear explosions affect living things in a range of ways, from the impact of the initial blast to the ionizing radiation released into the air.

Ionizing radiation affects all organisms because it permanently damages DNA, the complex molecular chains that determine who we are and what we pass on to others.

“It knocks the electrons off atoms and changes the chemistry of things,” says Ruff.

Low and prolonged doses of ionizing radiation can lead to diseases like cancer and increase the risk of a range of chronic conditions, particularly cardiovascular disease. High doses can kill cells.

Massive impact

Nuclear explosions are also especially damaging because radioactive substances can accumulate and recycle through the environment—in freshwater systems, the ocean, and the earth.

Radioactive substances also concentrate up the food chain, so animals at the top may contain levels of radioisotopes thousands of times higher than those in their environment. Even if an organism is less susceptible initially, it is still part of an ecosystem that has been damaged.
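
The compounding arithmetic behind such figures is straightforward. As a purely illustrative sketch (the ten-fold factor per step is an assumption for the example, not a number from the research):

```python
# If each step up the food chain concentrated a radioisotope ten-fold,
# an animal four concentration steps above the contaminated environment
# (e.g., water -> plankton -> small fish -> large fish -> predator)
# would carry 10**4 = 10,000 times the ambient level.
factor_per_step = 10
steps = 4
print(factor_per_step ** steps)  # 10000
```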

“The evidence from a disaster like Chernobyl is that every organism, from insects to soil bacteria and fungi to birds to mammals, would experience effects in proportion to the degree of contamination,” Ruff says.


“There’s less biological abundance, less species diversity, higher rates of genetic mutation, more tumors, more malformations, more cataracts in their eyes, shorter life spans, and reduced fertility in every biological system.”

In the past, scientists theorized that the more complex an organism, the more likely nuclear radiation was to affect it. By that logic, humans would fare worse and insects would do better.

But Ruff says that focusing on a single species misses the complexity of the biological environment and how we relate to one another, as well as interactions between multiple stresses at the same time.

“There’s all sorts of factors we have to look at. There are environmental factors. There are chronic exposures, effects across generations, and food shortages, for example,” he says. “The magnitude of effects of a nuclear explosion is far greater than what you might see in carefully controlled experiments and laboratory conditions.”

So, everything points to the conclusion that no, cockroaches ultimately wouldn’t survive a nuclear apocalypse.

Source: University of Melbourne


Nap time for teens might benefit their brains

Tue, 2019-02-19 11:07

Which is better for a teen who can’t get the recommended amount of rest: just 6.5 hours of sleep at night, or 5 hours at night plus a nap in the afternoon?

These different sleep schedules may have dissimilar effects on cognition and glucose levels, say researchers.

The handful of studies that have examined split sleep schedules with normal total sleep duration in working-age adults found that both schedules yield comparable brain performance. However, no study has looked at the impact of such schedules on brain function and glucose levels together, especially when total sleep is shorter than optimal. The latter is important because of links between short sleep and diabetes risk.

Split sleep

The researchers measured cognitive performance and glucose levels in students ages 15 to 19 during two simulated school weeks with short sleep on school days and recovery sleep on weekends. On school days, these students received either continuous sleep of 6.5 hours at night or split sleep (night sleep of 5 hours plus a 1.5-hour afternoon nap).

“We undertook this study after students who were advised on good sleep habits asked if they could split up their sleep across the day and night, instead of having a main sleep period at night,” says Michael Chee, director of the Centre for Cognitive Neuroscience, a professor in the neuroscience and behavioral disorders program at Duke-NUS Medical School, and one of the study’s senior authors.

“We found that compared to being able to sleep 9 hours a night, having only 6.5 hours to sleep in 24 hours degrades performance and mood. Interestingly, under conditions of sleep restriction, students in the split sleep group exhibited better alertness, vigilance, working memory, and mood than their counterparts who slept 6.5 hours continuously.

“This finding is remarkable as the measured total sleep duration over 24 hours was actually less in the former [split sleep] group,” Chee adds.

Glucose levels

However, for glucose tolerance, the continuous schedule appeared to be better. “While 6.5 hours of night sleep did not affect glucose levels, the split sleep group demonstrated a greater increase in 2 of 3 blood glucose measurements in response to the standardized glucose load in both simulated school weeks,” notes Joshua Gooley, associate professor in the neuroscience and behavioral disorders program, principal investigator at the Centre for Cognitive Neuroscience, and senior coauthor of the study.

Although further studies are necessary to see if this finding translates to a higher risk of diabetes later in life, the current findings indicate that beyond sleep duration, different sleep schedules can affect different facets of health and function in directions that are not immediately clear.

The findings appear in the journal SLEEP.

Source: Duke-NUS


Small research teams produce more new ideas

Tue, 2019-02-19 09:53

It’s common to hear that to work out a big research problem, you need a big team. A new analysis of more than 65 million papers, patents, and software projects suggests otherwise.

Researchers examined 60 years of publications and found that smaller teams are far more likely to introduce new ideas to science and technology, while larger teams more often develop and consolidate existing knowledge.

The findings suggest that experts should reassess recent trends in research policy and funding toward big teams.

“Big teams are almost always more conservative. The work they produce is like blockbuster sequels: very reactive and low-risk,” says coauthor James Evans, professor of sociology and director of the Knowledge Lab at the University of Chicago.

“Bigger teams are always searching the immediate past, always building on yesterday’s hits. Whereas the small teams, they do weird stuff—they’re reaching further into the past, and it takes longer for others to understand and appreciate the potential of what they are doing.”

Disrupt or develop?

For the new study, which appears in Nature, researchers collected 44 million articles and more than 600 million citations from the Web of Science database, 5 million patents from the US Patent and Trademark Office, and 16 million software projects from the GitHub platform. The researchers then computationally assessed each individual work in the massive dataset for how much it disrupted versus developed its field of science or technology.

“Intuitively, a disruptive paper is like the moon during the lunar eclipse; it overshadows the sun—the idea it builds upon—and redirects all future attention to itself,” says coauthor and postdoctoral researcher Lingfei Wu.

“The fact that most of the future works only cite the focal paper and not its references is evidence for the ‘novelty’ of the focal paper. Therefore, we can use this measure, originally proposed by Funk and Owen-Smith, as a proxy for the creation of new directions in the history of science and technology.”
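
In simplified form, the measure compares three groups of later papers: those citing only the focal paper (disrupting), those citing the focal paper along with its references (developing), and those citing only the references. A minimal Python sketch under that reading (the published index has refinements this omits, and all names here are illustrative):

```python
def disruption_score(focal, focal_refs, later_papers):
    """Simplified disruption score for one focal paper.

    focal: identifier of the focal paper.
    focal_refs: set of works the focal paper cites.
    later_papers: iterable of sets, each holding the works that one
        subsequent paper cites.
    """
    ni = nj = nk = 0  # cite focal only / focal and refs / refs only
    for refs in later_papers:
        cites_focal = focal in refs
        cites_sources = bool(refs & focal_refs)
        if cites_focal and not cites_sources:
            ni += 1  # eclipses the focal paper's sources
        elif cites_focal and cites_sources:
            nj += 1  # develops the focal paper alongside its sources
        elif cites_sources:
            nk += 1  # bypasses the focal paper entirely
    total = ni + nj + nk
    return (ni - nj) / total if total else 0.0

# A maximally disruptive paper: every later paper cites it, none cite its sources.
print(disruption_score("F", {"A", "B"}, [{"F"}, {"F", "C"}, {"F"}]))  # 1.0
```

Scores range from -1 (purely developing: every citer also cites the sources) to 1 (purely disrupting: the focal paper eclipses its sources entirely).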

The findings show that disruption declined dramatically with each additional team member. The same relationship appeared when the authors controlled for publication year, topic, or author, or tested subsets of the data, such as Nobel Prize-winning articles.

Even review articles, which simply aggregate the findings of previous publications, are more disruptive when authored by fewer individuals, the study shows.

Less is more

The main driver of the difference in disruption between large and small teams appears to be how each treats the history of its field. Larger teams are more likely to cite recent, highly cited research in their work, building upon past successes and acknowledging problems already in their field’s zeitgeist.

By contrast, smaller teams more often cite older, less popular ideas, a deeper and wider information search that creates new directions in science and technology.

“Small teams and large teams are different in nature,” Wu says. “Small teams remember forgotten ideas, ask questions, and create new directions, whereas large teams chase hotspots and forget less popular ideas, answer questions, and stabilize established paradigms.”

The analysis shows that both small and large teams play important roles in the research ecosystem, with the former generating new, promising insights that larger teams rapidly develop and refine.

Some experiments, such as the Large Hadron Collider or the search for dark energy, are so expensive that a single, massive collaboration is the only way to run them. But the authors argue that other complex scientific questions may be better pursued by an ensemble of independent, risk-taking small teams than by a large consortium.

“In the context of science, funders around the world are funding bigger and bigger teams,” Evans says. “What our research proposes is that you really want to fund a greater diversity of approaches. It suggests that if you really want to build science and technology, you need to act like a venture capitalist rather than a big bank—you want to fund a bunch of smaller and largely disconnected efforts to improve the likelihood of major, path-breaking success.”

“Most things are going to fail, or are not going to push the needle within a field. As a result, it’s really about optimizing failure,” Evans says. “If you want to do discovery, you have to gamble.”

Source: University of Chicago


The study of poop finally gets a name

Tue, 2019-02-19 09:25

Researchers have coined a new scientific term that means “excrement examined experimentally,” or, in simpler terms, the study of poop: in fimo.

Why, you might ask, do we need a scientifically accurate term based in Latin for the study of poop?

The answer is quite simple: because so many scientific words are based in Latin and there hasn’t been one for the experimental study of excrement, even though the scientific study of human waste is now at the forefront of biomedical research.

Our stools can tell us a lot about what’s going on inside our gastrointestinal systems because in excrement we find a diverse sampling of the bacteria populating our guts. And all of these microbes—scientifically referred to as the microbiota—can be quite important to human health.

For instance, when our bacterial composition is out of balance, the results can range from the mildly unpleasant to the wildly serious. There’s strong evidence that gut bacteria play important roles in weight gain, eating disorders, cancers, gut diseases, and even autism. Studying these bacteria has become extremely important. Yet, when discussing this work in scientific papers, researchers have not been as accurate in their terminology as scientists are known to be.

Going classic

When Aadra Bhatt, assistant professor of medicine at the University of North Carolina at Chapel Hill, realized the need for a proper term, she set out to find one. But as can happen while digging into Latin, the pursuit of a proper name for the experimental study of poop became soiled with impurities.

Bhatt enlisted the help of Luca Grillo, a classics professor at Notre Dame University, formerly of UNC-Chapel Hill, to help investigate the Latin roots of the word “manure.” It turns out those pesky Romans had no fewer than four Latin terms for “manure”—laetamen, merda, stercus, and fimus.

Bhatt and Grillo traced laetamen to the Latin root laetus, which means “fertile, rich, happy.”

“For all its cheerful associations with joy,” Grillo says, “we had to resist the temptation to use ‘in laetamine’ as our term of choice because it seems to have been more related to farm animal dung.”

Merda also didn’t smell right. It has remained unchanged in Romance languages as a reference to poop—merde in French, mierda in Spanish, and merda in Italian—possibly derived from a root word smerd/smord, from which came the Old English “stinkan” and the current English words “stink” and “stench.”

And the winner is…

With two terms down the drain, Bhatt researched stercus and fimus. Both are older than laetamen and were never used to refer to stench. Bhatt and Grillo found that the word stercus was used broadly in ancient times, including as a term of abuse. It also seems to share the root word from which “scatology” originated. And scatology refers to “obscene literature.”

Originally, Romans used the term fimus less than stercus, and fimus seemed to refer strictly to the use of manure in agriculture. But, the key Roman writers Virgil, Livy, and Tacitus used fimus and never stercus.

“Fimus, then, with its technical accuracy and literary ring, made us opt for ‘in fimo’ as our scientific term of choice for the experimental examination of excrement,” Bhatt says.

Sticks in the mud of modernity might ask, “Why not just use ‘fecal’ or ‘in feco’?”

Because both are wrong. Grillo says, “Faex never meant ‘excrement’ in Latin, and its derivative, ‘feces,’ did not enter English usage until the 17th century, when it first referred to the dregs at the bottom of a wine cask or other storage vessels.”

And so, Bhatt and colleagues have been using “fimus” and “in fimo” at international academic conferences to rave reviews.

They’re quite aware of the whimsical nature of this work. Therefore, just as some scientists have fun with their naming of model organisms—such as “Dumpy” for a mutant model of the classic worm C. elegans—Bhatt and colleagues have devised a playful term for the active enzymes they extract from their in fimo samples: poopernatant.

Their proposal appears in the journal Gastroenterology.

Source: UNC Chapel Hill


Rare type of tumors can’t handle cholesterol

Tue, 2019-02-19 08:56

Scientists have discovered a rare tumor type that is unable to synthesize cholesterol, a molecule without which cells can’t survive.

“These cells become dependent on taking up cholesterol from their environment, and we can use this dependency to design therapies that block cholesterol uptake,” says Kivanç Birsoy, assistant professor at Rockefeller University, who reports the findings in Nature.

Birsoy has long had a fascination with the rare cases in which cancers lose the ability to make key nutrients. Some types of leukemia, for example, are unable to synthesize the amino acid asparagine. As a first line of defense against these cancers, doctors give patients a drug known as asparaginase, which breaks down the amino acid, removing it from the blood. Without access to external stores of the nutrient, the cancer cells die.

Birsoy and his colleagues set out to look for other cancer types that might be vulnerable to cut-offs in nutrient supply. The researchers looked first to cholesterol, an essential ingredient for all dividing cells. Typically, cancer cells either make cholesterol themselves, or acquire it from the cellular environment, where it is present in the form of low-density lipoprotein (LDL).

The researchers placed 28 different cancer cell types in an environment that lacked cholesterol, and noted which ones survived. Cells associated with a rare type of lymphoma, known as ALK-positive ALCL, did not endure these conditions, suggesting that these cells could not synthesize cholesterol on their own.

When the researchers reviewed gene expression data from the cholesterol-dependent cell lines, they discovered that these cancers lacked an enzyme involved in the synthesis of cholesterol. Without this enzyme, the cells accumulated squalene, a poorly studied metabolite that acts as a precursor for cholesterol.

Though the inability to make cholesterol would seem to be purely a handicap, a buildup of squalene, Birsoy notes, may actually be beneficial to cancer cells. “These cells need to deal with oxidative stress in their environment. And we believe squalene is one way to increase antioxidant capacity,” he says.

In another experiment, the researchers knocked out the cancer cells’ LDL receptors, a primary means of absorbing external cholesterol. As a result, the cells had no access to the nutrient and died. This outcome points to a novel way to kill ALCL cells, which can become resistant to chemotherapy. “We think therapies that block uptake of cholesterol might be particularly effective against drug-resistant forms of ALCL,” says Birsoy.

Moving forward, the researchers plan to screen other cancers for similar vulnerabilities. Says Birsoy: “This is part of a larger strategy of looking for nutrient dependencies or deficiencies in various cancer types.”

Source: Rockefeller University


Can couches and vinyl floors make kids really sick?

Mon, 2019-02-18 16:11

Children who live in homes with all vinyl flooring or flame-retardant chemicals in the sofa have significantly higher concentrations of potentially harmful compounds in their blood or urine than children who live in homes that don’t, according to a new study.

The study shows that kids living in homes where the sofa in the main living area contains flame-retardant polybrominated diphenyl ethers (PBDEs) in its foam have a six-fold higher concentration of PBDEs in their blood serum than kids from homes whose sofas do not.

In laboratory tests, scientists have linked exposure to PBDEs to neurodevelopmental delays, obesity, endocrine and thyroid disruption, cancer, and other diseases.

In the new study, researchers found that children from homes with vinyl flooring in all areas had concentrations of benzyl butyl phthalate metabolite in their urine 15 times higher than those in children living with no vinyl flooring.

Experts have linked benzyl butyl phthalate to respiratory disorders, skin irritations, multiple myeloma, and reproductive disorders.

“SVOCs [semivolatile organic compounds] are widely used in electronics, furniture, and building materials and can be detected in nearly all indoor environments,” says Heather Stapleton, an associate professor of environmental health at Duke University’s Nicholas School of the Environment.

“Human exposure to them is widespread, particularly for young children who spend most of their time indoors and have greater exposure to chemicals found in household dust.”

“Nonetheless, there has been little research on the relative contribution of specific products and materials to children’s overall exposure to SVOCs,” she says.

To address that gap, the researchers began a three-year study in 2014 of in-home exposures to SVOCs among 203 children from 190 families.

“Our primary goal was to investigate links between specific products and children’s exposures, and to determine how the exposure happened—was it through breathing, skin contact, or inadvertent dust inhalation,” Stapleton says.

The researchers analyzed samples of indoor air, indoor dust, and foam collected from furniture in each of the children’s homes, along with a hand wipe sample, urine, and blood from each child.

“We quantified 44 biomarkers of exposure to phthalates, organophosphate esters, brominated flame retardants, parabens, phenols, antibacterial agents, and perfluoroalkyl and polyfluoroalkyl substances (PFAS),” Stapleton says.

Stapleton and colleagues presented the findings at the annual meeting of the American Association for the Advancement of Science. Additional researchers are from Duke, Boston University’s School of Public Health, and the Centers for Disease Control & Prevention.

Source: Duke University
