Futurity.org

Research news from top universities.

Can this diabetes drug prevent ‘pollution’ heart attacks?

Fri, 2018-10-12 11:05

Metformin, a safe and inexpensive drug used to treat type 2 diabetes, may decrease the risk of heart attacks and strokes triggered by air pollution, according to a new study.

It appears to work by reducing inflammation in the lungs that triggers clotting.

Metformin flips a switch in immune cells that reside in the lung and continuously sample the air we breathe. It prevents those immune cells, known as macrophages, from releasing dangerous molecules into the blood that promote heart attacks and strokes after pollution exposure.

“These findings suggest metformin as a potential therapy to prevent some of the premature deaths attributable to air pollution exposure worldwide,” says co-lead author Scott Budinger, professor of airway diseases and chief of pulmonary and critical care at Northwestern University Feinberg School of Medicine. Budinger is also a Northwestern Medicine pulmonary and critical care physician and a member of the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.

More than 100 million people take metformin worldwide. The drug works by targeting the mitochondria—the cell’s energy center—in lung macrophages. When air pollution particles get into the lungs, the mitochondria release hydrogen peroxide that promotes inflammation and clotting. Metformin slows down the mitochondria and the release of hydrogen peroxide.

“The simplest next step would be to validate our study with metformin in people in China or other places where exposure to high levels of air pollution is common to see if it reduces inflammation,” Budinger says.

China and India

Air pollution remains an enormous US public health problem, causing thousands of excess deaths in the Medicare population alone each year. The large majority of these deaths are due to heart attacks and strokes.

Because air pollution levels are about 10 times higher in China, India, and other parts of the developing world compared to the US, the global health impact of air pollution is much larger, Budinger notes.

In the study, published in a Cell Press journal, mice received a pediatric formulation of metformin in their drinking water for three days, at a concentration equivalent to the dose people take for diabetes. The mice were then exposed to air pollution from Chicago in a specially designed chamber that concentrates the particles to levels similar to those seen in China.

When mice were exposed to air pollution in the laboratory, their macrophages released an inflammatory molecule called IL-6, which has been linked to heart attacks and strokes. Metformin prevented the release of IL-6 and reduced the speed at which clots formed after an injury. The same findings were seen in lung macrophages from humans.


The findings are a result of Budinger’s more than 20-year collaboration with Northwestern
scientist Navdeep Chandel, who studies metformin and its effects on mitochondrial metabolism.

Three years ago, Chandel, a professor of medicine and of cell biology, showed how metformin inhibits cancer progression. Earlier studies had shown that the drug prevented cancer from progressing, but scientists didn’t fully understand how it worked. The researchers discovered that metformin slows mitochondrial metabolism to prevent the growth of cancer cells.

Slow down aging?

To prove that targeting the mitochondria in macrophages could prevent inflammation in response to pollution, Budinger and Chandel created mice whose lung macrophages lacked key mitochondrial proteins.

Like the mice treated with metformin, these mice were protected against pollution-induced inflammation. These results suggest that “metformin is a pharmacological way of doing the same thing,” Chandel says.

“We know it’s an anti-diabetic drug, it can be an anti-cancer drug, and now our study suggests it’s a reasonable anti-inflammatory drug.”

Now, the Chandel and Budinger labs are determining whether metformin can target mitochondrial metabolism to prevent or slow aging and age-related diseases including diabetes, inflammation, cancer, and neurodegeneration. In parallel, other scientists are planning to give metformin to people older than 65 to see if it can delay the onset of aging-related diseases in the Targeting Aging with Metformin (TAME) trial.

Other coauthors are from Northwestern and the University of Chicago. The National Institutes of Health, the Veterans Administration, and the US Department of Defense funded the work.

Source: Northwestern University


White Americans peg ‘illegal’ immigrants by country of origin

Fri, 2018-10-12 10:31

Many white Americans make assumptions about whether an immigrant is “illegal” based on country of origin. According to researchers, political rhetoric and stereotypes are driving those false notions.

A recent study shows that white Americans believe immigrants from Mexico, El Salvador, Syria, Somalia, and other countries President Donald Trump labeled “shithole” nations have no legal right to be in the United States. Just knowing an immigrant’s national origin is enough to believe they are probably undocumented, says Ariela Schachter, study co-author and an assistant professor of sociology at Washington University in St. Louis.

“Our study demonstrates that the white American public has these shared, often factually incorrect, stereotypes about who undocumented immigrants are,” Schachter says. “And this is dangerous because individuals who fit this ‘profile’ likely face additional poor treatment and discrimination because of suspicions of their illegality, regardless of their actual documentation.”

The study appears in the journal American Sociological Review.

The power of perception

The findings suggest that the mere perception of illegal status may be enough to place legal immigrants, and even US citizens, at greater risk for discrimination in housing and hiring, for criminal profiling and arrest by law enforcement, and for public harassment and hate crimes in the communities they now call home.

“When people form impressions about who they think is ‘illegal,’ they often do not have access to individuals’ actual documents. There have actually been a number of recent incidents in which legal immigrants and even US-born Americans are confronted by immigration authorities about their status. So these judgments seem to be based on social stereotypes. Our goal was to systematically uncover them,” says study co-author René D. Flores, an assistant professor of sociology at the University of Chicago.

From a broader sociological perspective, the researchers argue that an immigrant’s real standing in American society is shaped not just by legal documentation, but also by social perceptions.

“These findings reveal a new source of ethnic-based inequalities—’social illegality’—that may potentially increase law enforcement scrutiny and influence the decisions of hiring managers, landlords, teachers, and other members of the public,” they conclude in the research.

Jumping to conclusions

Conducted in November 2017, the experimental survey asked a representative sample of 1,500 non-Hispanic white Americans to guess whether a hypothetical immigrant was in the country illegally—and perhaps a threat worth reporting to authorities—based on a brief biographical sketch.

By systematically varying the immigrant’s nation of origin, education level, language skills, police record, gender, age, race, and other variables, researchers created a pool of nearly 7 million unique immigrant sketches that touched on a range of stereotypes. Each respondent was randomly assigned to view 10 of these unique sketches during the survey.
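To make the design concrete, the sketch below shows how crossing a handful of traits produces a large pool of profiles and how each respondent can be assigned a random draw of them. The attribute names and levels here are illustrative stand-ins, not the study’s exact wording or full design.

import random

# Illustrative attribute levels (stand-ins, not the study's actual wording).
attributes = {
    "origin": ["Mexico", "El Salvador", "Syria", "Somalia", "Canada", "Italy"],
    "education": ["less than high school", "high school diploma", "college degree"],
    "english": ["speaks little English", "speaks fluent English"],
    "police_record": ["no police record", "arrested once"],
    "gender": ["man", "woman"],
}

# The number of unique profiles is the product of the level counts.
total_profiles = 1
for levels in attributes.values():
    total_profiles *= len(levels)
print(total_profiles, "unique profiles from these attributes alone")

def draw_profiles(n):
    """Randomly assign n profiles to one respondent, as in a conjoint-style survey."""
    return [{trait: random.choice(levels) for trait, levels in attributes.items()}
            for _ in range(n)]

for profile in draw_profiles(10):  # each respondent in the study viewed 10 sketches
    print(profile)

Adding the rest of the traits the researchers varied, each with several levels, is what pushes the pool into the millions of unique sketches.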

Using complex statistical analysis, researchers estimated how much each of these individual immigrant traits and stereotypes influenced the assumptions of white Americans from various demographic backgrounds, geographic regions, and self-identified political affiliations.


Surprisingly, the study found that white Republicans and white Democrats jump to many of the same conclusions about the legal status of hypothetical immigrants—except when it comes to the receipt of government benefits.

Democrats rightfully recognize that in order to receive government benefits, immigrants must have legal documentation, whereas Republicans are more likely to suspect that receiving benefits marks an immigrant as illegal, even though by law undocumented immigrants are blocked from receiving federal benefits such as welfare.

Most tellingly, even the slightest hint of an immigrant with a criminal background has a huge effect on whether a white American suspects that the immigrant is in the country illegally.

“Saying an immigrant committed a crime had a larger impact on suspicions of illegality than saying they were, say, Mexican,” Schachter says. “This is true for both white Democrats and white Republicans. There’s a clear implication that the Trump administration’s rhetoric on immigrant criminality is driving these beliefs, which, again, are not based in reality. In fact, other research finds that undocumented immigrants are less likely to commit crimes than native-born Americans.”

‘Invisible illegality’

The study also demonstrates significant differences in how immigrants from various countries and differing social statuses are likely to be treated in the United States.

It found that white Americans seldom suspect European and Asian immigrants of being in the country illegally, even though undocumented immigrants from these regions now constitute almost 20 percent of the nation’s undocumented immigrant population of about 11 million. The study categorizes these immigrants as experiencing “invisible illegality” because their status is so rarely questioned.

Immigrants from Syria, Somalia, and other nations denigrated by the Trump administration during the “Muslim Ban” controversy also face higher suspicions of illegality, even though most have a documented legal right to be in the country based on refugee status.

Likewise, the study found immigrants from violence-torn El Salvador, many of whom have been granted the right to remain in the United States on a temporary basis, are significantly more likely to be suspected of illegality and reported to authorities, as compared to immigrants from Canada or Italy.

Source: Washington University in St. Louis


Antisocial kids may learn it from their parents

Fri, 2018-10-12 09:37

Less parental warmth and more harshness at home can affect how aggressive children become and whether they lack empathy and a moral compass, according to a new study.

Researchers studied 227 pairs of identical twins. They analyzed small differences in the parenting that each twin experienced to see whether these differences could predict the likelihood of antisocial behaviors emerging.

They discovered that the twin who experienced stricter or harsher treatment and less emotional warmth from parents had a greater chance of showing aggression and a lack of empathy and moral compass—a set of characteristics known as callous-unemotional (CU) traits.

Nurture vs. nature

Parents of the twins completed a 50-item questionnaire about the home environment. They also rated 24 statements, such as “I often lose my temper with my child” and “My child knows I love him/her,” to establish their levels of harshness and warmth. The researchers assessed child behavior by asking the mother to report on 35 traits related to aggression and CU traits.

“The study convincingly shows that parenting—and not just genes—contributes to the development of risky callous-unemotional traits,” says Luke Hyde, associate professor of psychology at the University of Michigan. “Because identical twins have the same DNA, we can be more sure that the differences in parenting the twins received affects the development of these traits.”

The work, which appears in the Journal of the American Academy of Child and Adolescent Psychiatry, is the latest in a series of studies from lead author Rebecca Waller, assistant professor of psychology at the University of Pennsylvania, and colleagues using observation to assess a variety of aspects of parenting.

The initial research, which considered a biological parent and child, confirmed that parental warmth plays a significant role in whether CU traits materialize.

“Some of the early work on callous-unemotional traits focused on their biological bases, like genetics and the brain, making the argument that these traits develop regardless of what is happening in a child’s environment, that parenting doesn’t matter,” says Waller. “We felt there must be something we could change in the environment that might prevent a susceptible child from going down the pathway to more severe antisocial behavior.”

Waller and Hyde teamed with S. Alexandra Burt, co-director of the Twin Registry at Michigan State University, on the study, using 6-to-11-year-old participants from a large, ongoing study of twins that Burt directs.

Changing behavior

A potential next step is to turn these findings into useable interventions for families trying to prevent a child from developing such traits or to improve troubling behaviors that have already begun, Waller says.

“From a real-world standpoint, creating interventions that work practically and are actually able to change behaviors in different types of families is complicated,” she says. “But these results show that small differences in how parents care for their children matters. Our focus now is on adapting already-successful parenting programs to include specific interventions focused on callous-unemotional traits as well.”

Though an intervention with parents could succeed, the researchers stress that the work isn’t blaming parents for their child’s CU or aggressive behaviors.


“Our previous work with adopted children also showed that genes do matter, and so there is a back and forth,” Hyde says. “Some children may be more difficult to parent. The most important message is that treatments that work with parents likely can help, even for the most at-risk children.”

The researchers acknowledge some limitations to the study, for example, that it skews heavily toward two-parent families, meaning the findings may not generalize as well to single-parent homes. It also assesses parenting measures and twin behaviors based solely on parents’ reports.

Yet despite these drawbacks, the researchers say the work broadens the understanding of how different forms of antisocial behavior, like aggression and callous-unemotional traits, emerge.

“This provides strong evidence that parenting is also important in the development of callous-unemotional traits,” Hyde says. “The good news is we know that treatments can help parents who may need extra support with children struggling with these dangerous behaviors.”

Funding for this research came from the National Institute of Mental Health and the Eunice Kennedy Shriver National Institute for Child Health and Human Development.

Source: University of Pennsylvania


Overlooked organ turns some ants into giant soldiers

Thu, 2018-10-11 16:24

A seemingly unimportant rudimentary organ that only appears briefly during the final stages of larval development may explain why some ants are tiny workers and others are huge soldiers, a new study finds.

The new research answers a question that so perplexed Charles Darwin that it led him to doubt his own theory of evolution.

Darwin wondered: if natural selection works at the level of the individual fighting for survival and reproduction, how can a single colony produce worker ants that are so dramatically different in size—from the minor workers with their small heads and bodies, to the large-headed soldiers with their huge mandibles—especially if, as in the genus Pheidole, they are sterile?

The answer, according to a paper in Nature, is that the colony itself generates soldiers and regulates the balance between soldiers and minor workers thanks to the seemingly useless organ. This organ appears only briefly during the final stages of larval development and only in some of the ants—the ones that will become soldiers.


“It was a completely unexpected finding. People had noticed that, during the development of soldiers, a seemingly useless rudimentary ‘organ’ would pop up and then disappear. But they assumed that it was just a secondary effect of the hormones and nutrition that were responsible for turning the larvae into soldiers,” says senior author Ehab Abouheif from McGill University’s biology department.

“What we discovered was that these rudimentary ‘organs’ are not a secondary effect of hormones and nutrition, but are instead responsible for generating the soldiers,” adds first author Rajendhran Rajakumar. “It is their passing presence that regulates the head and body of soldiers to grow at rapid rates, until you get these big-headed soldiers with huge mandibles and big bodies.”

Waiting in the wings

Abouheif has been studying wings in ants for the past twenty-three years. He was curious about the function of the wing imaginal discs, which appear transiently in the final stages of larval development among the soldier ants, even though the soldier ants never actually develop wings.

He and his team spent nine years in the lab using surgical and molecular techniques to cut away portions of the rudimentary wing discs from the larvae of soldier ants in the widespread and very diverse Pheidole genus. They discovered that doing so affected the growth of the head and the body: cutting away more of the imaginal wing discs produced soldier ants with correspondingly smaller heads and bodies.

It was clear confirmation that the rudimentary wing discs play a crucial role in the development of soldier ants.

Pheromones keep the rank and file

The researchers also discovered that the colony as a whole maintains the balance between soldiers and minor workers by regulating the growth of the rudimentary wing discs in larvae.

Earlier research had shown that the ratio of minor workers to soldiers remains constant in all colonies of the Pheidole genus, with a proportion of minor workers at 90 to 95 percent and soldier ants at 5 to 10 percent. The team has found that the soldier ants maintain this ratio by halting the growth of the rudimentary wing disc with an inhibitory pheromone when there are too many soldiers.

However, the colony is able to ramp up the number of soldier ants very quickly if it is under threat or the numbers of soldiers have dropped for some reason. This is because the rudimentary wing discs that play such a crucial role in regulating the number of soldier ants appear only in the final stages of larval development.

Other overlooked organs?

Based on his team’s discovery in ants, Abouheif proposes that rudimentary organs may play a much larger role in an organism’s development than had previously been imagined.

“Until now, people have assumed that these organs simply offer evidence of evolution and common descent, overlooking any current functions for them. Now that we know the crucial role played in Pheidole ant colonies by the rudimentary wing disc, it means that we will have to go back and look at other rudimentary organs in the same light. Who knows what scientists will discover?”

Konrad Lorenz Institute (KLI) fellowships, the Natural Sciences and Engineering Research Council of Canada (NSERC), and a Guggenheim Fellowship funded the research.

Source: McGill University


Lab-grown retinas reveal how the real thing forms

Thu, 2018-10-11 15:45

Biologists have grown human retina tissue from scratch to learn how the cells that let us see in color are made.

The work may lay the groundwork for therapies for eye diseases such as color blindness and macular degeneration. It also establishes lab-created “organoids”—artificially grown organ tissue—as a model to study human development on a cellular level.

“Everything we examine [in a retina organoid] looks like a normal developing eye, just growing in a dish,” says Robert Johnston, a developmental biologist at Johns Hopkins University. “You have a model system that you can manipulate without studying humans directly.”

The fate of stem cells

Johnston’s lab explores how a cell’s fate is determined—what happens in the womb to turn a developing stem cell into a cell with a specific function. In the retina research, he and his team focused on the development of cells that allow people to see blue, red, and green—the three cone photoreceptors in the human eye.

While most vision research is done on mice and fish, neither of those species has the dynamic daytime and color vision of humans. So Johnston’s team created the human eye tissue they needed from stem cells.

“Trichromatic color vision differentiates us from most other mammals,” says lead author Kiara Eldred, a graduate student. “Our research is really trying to figure out what pathways these cells take to give us that special color vision.”

Over months, as the cells grew in the lab and became full-blown retina tissue, the team found the blue-detecting cells materialized first, followed by the red- and green-detecting ones. In both cases, they found, the key to the molecular switch was the ebb and flow of thyroid hormone. Importantly, the thyroid gland, which of course wasn’t in the lab dish, didn’t control the level of this hormone, but the eye tissue itself did.

Once the researchers understood how the amount of thyroid hormone dictated whether the cells became blue or red and green receptors, they could manipulate the outcome, creating retinas that—if they had been part of a complete human eye—would have seen only blue, and others that would have detected green and red.

Insight into vision

The finding that thyroid hormone is essential for creating red-green cones provides insight into why pre-term babies, who have lowered thyroid hormone levels because they lack the maternal supply, have a higher incidence of vision disorders.

“If we can answer what leads a cell to its terminal fate, we are closer to being able to restore color vision for people who have damaged photoreceptors,” Eldred says. “This is a really beautiful question, both visually and intellectually—what is it that allows us to see color?”

These findings are a first step for the lab. In the future, the researchers would like to use organoids to learn even more about color vision and the mechanisms involved in the creation of other regions of the retina, such as the macula. Since macular degeneration is one of the leading causes of blindness in people, understanding how to grow a new macula could lead to clinical treatments.

“What’s exciting about this is our work establishes human organoids as a model system to study mechanisms of human development,” Johnston says. “What’s really pushing the limit here is that these organoids take nine months to develop just like a human baby. So what we’re really studying is fetal development.”

The research appears in the journal Science.

Additional researchers who contributed to the work are from Johns Hopkins; the Shiley Eye Institute of the University of California, San Diego; and the National Institutes of Health.

The Pew Charitable Trusts, the Howard Hughes Medical Institute, the National Science Foundation, and the National Institutes of Health funded the study.

Source: Johns Hopkins University


Online insomnia therapy offers round-the-clock benefits

Thu, 2018-10-11 14:57

Digital cognitive behavioral therapy (CBT) improves not only insomnia symptoms, but functional health, psychological well-being, and sleep-related quality of life, according to a year-long study involving 1,711 people.

A major limitation of insomnia treatments is the lack of providers to deliver CBT, but this study, which appears in JAMA Psychiatry, used an online platform that made it easily accessible to users. It also automated and tailored the treatment around the user’s sleep patterns.

There is a four-to-six-month wait for an insomnia patient to get an appointment in his sleep clinic, says coauthor Jason Ong, associate professor of neurology in sleep medicine at Northwestern University Feinberg School of Medicine. “We can reach many more patients with insomnia by using a digitally based program.”

Life essential

Previous research has identified insomnia as a risk factor for the development of mental health disorders, cardiovascular disease, and type 2 diabetes.

“Sleep ranks with air, water, and food as one of the essentials of life, yet 10 to 12 percent of the population doesn’t get enough of it due to insomnia,” says lead author Colin Espie, professor of sleep medicine at Oxford University and chief medical officer of Big Health, a provider of automated and personalized behavioral medicine programs for mental health.

“Our study suggests that digital medicine could be a powerful way to help millions of people not just sleep better, but achieve better mental and physical well-being as a result,” Espie says.

The study provides new evidence that the clinical benefits of digital CBT extend beyond sleep to also improve a person’s daytime functioning.

“Typically, what leads patients to seek treatment is when their insomnia begins to impact their quality of life or daytime functioning,” Ong says. “The fact that we saw improvements in both of these areas shows that the digital program has benefits around the clock.”

Before sleeping pills

Though people with insomnia have traditionally received treatment with pharmaceuticals, new guidelines the American College of Physicians published in 2016 recommend that CBT be used as a first-line treatment, ahead of sleeping pills.

For the study, participants received treatment using the Sleepio program and an associated iOS app. Sleepio, a digital sleep improvement program featuring CBT techniques that Espie designed, is a product of Big Health.

Delivery was structured into six sessions lasting an average of 20 minutes each, with participants having access to the intervention for up to 12 weeks. Researchers assessed participants online at 0 weeks (baseline), four weeks (mid-treatment), eight weeks (post-treatment), and 24 weeks (follow-up). Program content was based on CBT manuals and included behavioral, cognitive, and educational components.

“In clinical studies, dCBT has repeatedly achieved statistically significant and clinically meaningful results for outcomes including sleep, mental health, and daytime functioning,” Espie says. “Our latest results indicate that dCBT can be an effective, inexpensive way to help insomnia sufferers achieve better health over the long term through behavior change.”

Big Health (Sleepio) Ltd. funded the work. Grants awarded to the University of Oxford also provided funding.

Source: Northwestern University


Chewing gum may be a good way to get your vitamins

Thu, 2018-10-11 14:23

Chewing gum may be an effective delivery system for some vitamins, according to new research.

Nearly 15 percent of all chewing gum varieties sold promise to provide health-enhancing supplements to users, so researchers studied whether two vitamin-supplemented products were effective at delivering vitamins to the body.

The research marks the first time that researchers closely scrutinized vitamin delivery from chewing gum, according to Joshua Lambert, professor of food science in the College of Agricultural Sciences at Penn State. The findings, he suggests, indicate that chewing gum—a pleasant habit for many—could be a strategy to help reduce vitamin deficiency around the world, a problem described as an epidemic.

Just chew it

Even in the United States vitamin deficiency is a serious problem, with nearly one in 10 people over the age of 1 deficient in vitamins B6 and C, according to a recent analysis of the National Health and Nutrition Examination Survey.

“I was slightly surprised that no one had done a study like this before given the number of supplement-containing gum products on the market,” Lambert says. “But there is no requirement that nutritional gums be tested for efficacy, since they fall into the category of dietary supplements.”

To find out if supplemented gum contributes vitamins to chewers’ bodies, researchers had 15 people chew two off-the-shelf supplemented gums and measured the levels of eight vitamins released into their saliva. In a separate experiment with the same people, researchers measured levels of seven vitamins in their plasma.

The researchers used an identical gum product—minus the vitamin supplements—as a placebo in the study.

Lambert and colleagues found that gum released retinol (A1), thiamine (B1), riboflavin (B2), niacinamide (B3), pyridoxine (B6), folic acid, cyanocobalamin (B12), ascorbic acid (C), and alpha-tocopherol (E) into the saliva of participants who chewed the supplemented gums.

After chewing the supplemented gums, participants’ blood plasma vitamin concentrations, depending on which supplemented gum they chewed, were higher for retinol, by 75 to 96 percent; pyridoxine, 906 to 1,077 percent; ascorbic acid, 64 to 141 percent; and alpha-tocopherol, 418 to 502 percent, compared to the placebo.

Increasing vitamin levels

For the most part, the research demonstrates that levels of water-soluble vitamins such as vitamins B6 and C were higher in the plasma of participants who chewed supplemented gum compared to participants who chewed the placebo gum. In supplemented gum chewers, researchers also saw increases in the plasma of several fat-soluble vitamins such as the vitamin-A derivative retinol and the vitamin-E derivative alpha tocopherol.

That was the most significant finding of the study, Lambert points out. At least for the products tested, the chewers almost completely extracted water-soluble vitamins from the gum during the process of chewing. The fat-soluble vitamins were not completely released from the gum.

“Improving the release of fat-soluble vitamins from the gum base is an area for future development for the manufacturer,” he says.

Lambert offers one caution about the findings, which appear in the Journal of Functional Foods.

“This study was done in an acute setting—for a day we have shown that chewing supplemented gum bumps up vitamin levels in blood plasma,” he says. “But we haven’t shown that this will elevate plasma levels for vitamins long-term. Ideally, that would be the next study. Enroll people who have some level of deficiency for some of the vitamins in supplemented gum and have them chew it regularly for a month to see if that raises levels of the vitamins in their blood.”

Additional researchers are from Penn State and Vitaball, Inc. The one Vitaball employee involved in the work participated in the study design and in editing the manuscript, but was not involved in participant recruitment, sample collection, data analysis, or preparation of the manuscript. No other employee of Vitaball, Inc., participated in the study design and execution.

Vitaball Inc. and the US Department of Agriculture supported this research.

Source: Penn State


For women, higher BMI ups risk for early onset colon cancer

Thu, 2018-10-11 13:57

Women who are overweight or obese have up to twice the risk of developing colorectal cancer before age 50 as women who have what is considered a normal body mass index, according to a new study.

Overall rates of new colorectal cancer cases and deaths from the disease have decreased steadily since 1980 in the United States, largely owing to recommended colonoscopy screening starting at age 50.

But for reasons that remain unknown, new cases of, and deaths due to, both colon and rectal cancers have been increasing for younger adults ages 20 to 49.

The new study, which appears in JAMA Oncology, is among the first epidemiologic analyses of the potential contributors to early-onset colorectal cancer—those cases diagnosed in people younger than 50.

Obesity epidemic

Higher current BMI, BMI at 18 years of age, and weight gain since early adulthood are associated with increased risk of colorectal cancer under age 50, the findings show.

The study included data from 85,256 women ages 25 to 44 in the Nurses’ Health Study II, which began in 1989. Researchers collected detailed information on body weight throughout the life course, family and endoscopy histories, and lifestyle factors at study baseline and every two to four years. Through 2011, 114 participants were diagnosed with colorectal cancer before age 50.


“Our findings really highlight the importance of maintaining a healthy weight, beginning in early adulthood, for the prevention of early-onset colorectal cancer,” says co-senior author and cancer epidemiologist Yin Cao, assistant professor of surgery in the Public Health Sciences division at Washington University in St. Louis.

“We hypothesized that the obesity epidemic may partially contribute to this national and global concern in early-onset colorectal cancer rates, but we were surprised by the strength of the link and the contribution of obesity and weight change since early adulthood.”

Family history

Compared with women with the lowest BMIs (18.5-22.9 kilograms per square meter), women with the highest BMIs (greater than 30) had almost twice the risk of early-onset colorectal cancer.

According to the Centers for Disease Control and Prevention, the normal BMI range is 18.5-24.9 kilograms per square meter. BMIs from 25-29.9 are considered overweight, and BMIs greater than 30 are considered obese.
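For reference, BMI is weight in kilograms divided by the square of height in meters. The short sketch below simply applies the CDC cutoffs quoted above; it is an illustration, not part of the study’s analysis.

def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    """Classify a BMI value using the CDC ranges quoted above."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"      # 18.5-24.9
    if value < 30:
        return "overweight"  # 25-29.9
    return "obese"           # 30 and above

example = bmi(82, 1.65)                          # e.g., 82 kg at 1.65 m
print(round(example, 1), bmi_category(example))  # -> 30.1 obese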

Early-onset colorectal cancer remains relatively rare, at about eight cases per 100,000 people. But because this age group does not receive routine screening, people often don’t get a diagnosis until later stages of the disease, when it is much more difficult to treat.

The researchers estimated that roughly 22 percent of early-onset colorectal cancer cases could have been prevented had all participants had a normal BMI of 18.5-24.9. On a population scale, this represents thousands of potentially preventable colorectal cancer cases among younger people. Further, the higher risk of early-onset colorectal cancer associated with increasing BMI held even among women with no family history of the disease.

The American Cancer Society recently lowered the recommended age at which most people should undergo a first screening colonoscopy. The new guidelines recommend screening beginning at age 45, down from the previous recommendation of age 50.

Risk factor surrogate?

More research is necessary, including additional validation studies as well as cost-benefit analyses, to determine whether BMI should be considered in deciding the age at which an individual begins screening, or whether it could complement current screening efforts, Cao says.

The researchers emphasize that this is an association study—it doesn’t establish that increasing weight is a cause of early-onset colorectal cancer. It is possible BMI could be serving as a surrogate for other risk factors that may influence colorectal cancer risk, including metabolic syndrome and diabetes, which also have been increasing at the population level.

More studies are needed to uncover the best ways to identify young people at high risk of colorectal cancer at younger ages. And since this study was limited to women who were predominantly white, more research is necessary to see if these associations hold for men and for diverse racial and ethnic populations.

“Emerging data also suggest that early-onset colorectal cancer may be different on a molecular level from cases diagnosed at older ages,” Cao says. “Because early-onset colorectal cancer is rare, we need more collaborative research to understand why case rates and deaths among younger people are increasing and what could be done to slow them down.”

Additional researchers are from the Harvard T.H. Chan School of Public Health, Brigham and Women’s Hospital, and Massachusetts General Hospital at Harvard Medical School.

Source: Washington University in St. Louis


New tests offer faster Lyme disease detection

Thu, 2018-10-11 12:27

New techniques can detect an active infection with the Lyme disease bacteria faster than the three weeks it takes for the current indirect antibody-based tests, which have been the standard since 1994, researchers say.

Another advantage of the new tests is that a positive result in the blood indicates the infection is active and needs treatment, allowing quicker care that can help prevent long-term health problems.

The techniques detect DNA or protein from the Lyme disease bacteria Borrelia burgdorferi.


“These direct tests are needed because you can get Lyme disease more than once, features are often non-diagnostic, and the current standard FDA-approved tests cannot distinguish an active, ongoing infection from a past cured one,” says Steven Schutzer, a physician-scientist at Rutgers University New Jersey Medical School and lead author of the paper, which appears in Clinical Infectious Diseases.

“The problem is worsening because Lyme disease has increased in numbers to 300,000 per year in the United States and is spreading across the country and world,” Schutzer says.

Lyme disease signs frequently, but not always, include a red ring or bull’s eye skin rash. When there is no rash, a reliable laboratory test is necessary, and preferably one that indicates active disease.

The only FDA-approved Lyme disease tests rely on detecting antibodies that the body’s immune system makes in response to the disease. Such an antibody test indicates only exposure, past or present, not an active infection.

“The new tests that directly detect the Lyme agent’s DNA are more exact and are not susceptible to the same false-positive results and uncertainties associated with current FDA-approved indirect tests,” Schutzer says.

“It will not be surprising to see direct tests for Lyme disease join the growing list of FDA-approved direct tests for other bacterial, fungal, and viral infections that include Staphylococcus, Streptococcus, Candida, influenza, HIV, herpes, and hepatitis, among others.”

Additional coauthors are from Rutgers, Harvard University, Yale University, the National Institute of Allergy and Infectious Diseases, the FDA, the Centers for Disease Control and Prevention, and other institutions.

Source: Rutgers University


Californian whales may save their Russian cousins

Thu, 2018-10-11 09:28

Hope for an alarmingly low number of gray whales in the western Pacific Ocean might rest with their cousins to the east, according to a study of the animals’ genetic resources.

The population of gray whales that live along the coasts of California and Mexico is booming—numbering about 27,000. But there are only about 200 along the Russian coast. The new study helps explain why.

“At any one time, there is a huge disparity in the number of whales in each location,” says Andrew DeWoody, professor in the forestry and natural resources department at Purdue University.

“Some think that intense Russian and Japanese commercial whaling in the 1950s might have wiped out the entire population in the west. It’s possible then that a few survived and have been increasing in population. Or some might have dispersed from the east to make up today’s western population. It might also be a combination of the two,” DeWoody says.

Researchers compared the genotypes of 77 western and 135 eastern gray whales and found that the two populations have diverged genetically. The genotypes show distinct alleles—variations in genes—in each group. The separation is similar to levels sometimes seen between distinct subspecies.

Interbreeding whales

The analysis also shows, however, that there is some genetic admixture. People have observed whales traversing the Pacific Ocean, and genetic data show that there seems to be at least some interbreeding among the populations.

“That’s good news. If you have a tiny population, a critically endangered population as we see in the west, you want to maintain gene flow and make sure they don’t lose genetic diversity,” says Anna Brüniche-Olsen, a postdoctoral researcher in DeWoody’s lab. “There seems to be gene flow between the two. It might be that even though they’re different populations, they’re not completely separated.”

The findings, which appear in Biology Letters, reveal that both populations of whales have significant amounts of genetic diversity, which is especially important for the endangered western gray whales. The researchers will monitor the subgroups to see whether they continue to diverge or if the intermingling will lead to a loss of the distinct groups.

Better method

“Maybe they’ll become one single gene pool in the future,” DeWoody says. “Or they could be quite different and at some point won’t mate with one another any longer.”

The findings may also guide new standards for the International Union for Conservation of Nature, which maintains the Red List, a comprehensive list of endangered animals around the world. DeWoody is involved with a group in the IUCN that is developing guidelines for using genetics to determine the threat level for animals such as the gray whale.

“As we’ve seen with gray whales, it’s a lot more complicated than the number of animals in the west and the east,” DeWoody says. “Using genetics is going to prove to be a better method for understanding the population structures of endangered species and how those might be affected by human pressures or by natural processes such as ocean currents.”

Source: Purdue University


These 3D-printed parts ‘remember’ how we use them

Thu, 2018-10-11 09:03

Researchers have developed 3D-printed devices that can track and store their own use—without batteries or electronics.

Cheap and easily customizable, 3D-printed devices are perfect for assistive technology, like prosthetics or “smart” pill bottles that can help patients remember to take their daily medications. But these plastic parts don’t have electronics, which means they can’t monitor how patients are using them.

The new system uses a method called backscatter, through which a device can share information by reflecting signals that an antenna has transmitted to it.

“We’re interested in making accessible assistive technology with 3D printing, but we have no easy way to know how people are using it,” says coauthor Jennifer Mankoff, a professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. “Could we come up with a circuitless solution that could be printed on consumer-grade, off-the-shelf printers and allow the device itself to collect information? That’s what we showed was possible in this paper.”

Previously, the team developed the first 3D-printed objects that connect to Wi-Fi without electronics. These purely plastic devices can measure if a detergent bottle is running low and then automatically order more online.

“Using plastic for these applications means you don’t have to worry about batteries running out or your device getting wet. That can transform the way we think of computing,” says senior author Shyam Gollakota, an associate professor in the Allen School. “But if we really want to transform 3D printed objects into smart objects, we need mechanisms to monitor and store data.”

The researchers tackled the monitoring problem first. In a previous study, their system tracked movement in one direction, which works well for monitoring laundry detergent levels or measuring wind or water speed. But now they needed to make objects that could monitor bidirectional motion, like the opening and closing of a pill bottle.

“Last time, we had a gear that turned in one direction. As liquid flowed through the gear, it would push a switch down to contact the antenna,” says lead author Vikram Iyer, a doctoral student in the UW Department of Electrical & Computer Engineering. “This time we have two antennas, one on top and one on bottom, that can be contacted by a switch attached to a gear. So opening a pill bottle cap moves the gear in one direction, which pushes the switch to contact one of the two antennas. And then closing the pill bottle cap turns the gear in the opposite direction, and the switch hits the other antenna.”

Movement is captured when the switch contacts one of the two antennas. Both of the antennas are identical, so the team had to devise a way to decode which direction the cap was moving.

(Credit: Mark Stone/U. Washington)

“The gear’s teeth have a specific sequencing that encodes a message. It’s like Morse code,” says coauthor Justin Chan, a doctoral student in the Allen School. “So when you turn the cap in one direction, you see the message going forward. But when you turn the cap in the other direction, you get a reverse message.”
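As a rough illustration of that idea, a tooth pattern that differs from its own reversal lets the receiver tell direction just by checking whether the logged antenna contacts match the pattern forward or backward. The bit sequence and function below are hypothetical, not the team’s actual encoding or firmware.

# Hypothetical tooth pattern; the real encoding is the team's design choice.
TOOTH_PATTERN = [1, 1, 0, 1, 0, 0, 1, 0]

def decode_direction(contacts):
    """Compare the logged antenna contacts against the known tooth pattern."""
    if contacts == TOOTH_PATTERN:
        return "opening"   # pattern read forward: cap turned one way
    if contacts == TOOTH_PATTERN[::-1]:
        return "closing"   # pattern read backward: cap turned the other way
    return "unknown"       # noise, or only a partial rotation

print(decode_direction([1, 1, 0, 1, 0, 0, 1, 0]))  # -> opening
print(decode_direction([0, 1, 0, 0, 1, 0, 1, 1]))  # -> closing

Because the pattern is not a palindrome, a single full read is enough to distinguish opening from closing.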

In addition to tracking, for example, pill bottle cap movement, this same method can monitor how people use prosthetics, such as 3D-printed e-NABLE arms. These mechanical hands, which attach at the wrist, help children with hand abnormalities grasp objects. When children flex their wrists, cables on the hand tighten to make the fingers close. So the team 3D printed an e-NABLE arm with a prototype of their bidirectional sensor that monitors the hand opening and closing by determining the angle of the wrist.

The researchers also wanted to create a 3D-printed object that could store its usage information while out of Wi-Fi range. For this application, they chose an insulin pen that could monitor its use and then signal when it was getting low.

“You can still take insulin even if you don’t have a Wi-Fi connection,” Gollakota says. “So we needed a mechanism that stores how many times you used it. Once you’re back in the range, you can upload that stored data into the cloud.”

This method requires a mechanical motion, like the pressing of a button, and stores that information by rolling up a spring inside a ratchet that can only move in one direction. Each time someone pushes the button, the spring gets tighter. It can’t unwind until the user releases the ratchet, hopefully when in range of the backscatter sensor. Then, as the spring unwinds, it moves a gear that triggers a switch to contact an antenna repeatedly as the gear turns. Each contact is counted to determine how many times the user pressed the button.


These devices are only prototypes to show that it is possible for 3D printed materials to sense bidirectional movement and store data. The next challenge will be to take these concepts and shrink them so that they can work in real pill bottles, prosthetics, or insulin pens, Mankoff says.

“This system will give us a higher-fidelity picture of what is going on,” she says. “For example, right now we don’t have a way of tracking if and how people are using e-NABLE hands. Ultimately what I’d like to do with these data is predict whether or not people are going to abandon a device based on how they’re using it.”

The team will present its findings at the ACM Symposium on User Interface Software and Technology in Berlin.

The National Science Foundation and Google Faculty Awards funded the research.

Source: Sarah McQuate for University of Washington


How single cells can shed light on ‘fungal dark matter’

Wed, 2018-10-10 15:56

Researchers have developed a way to generate genomes from single cells of uncultivated fungi.

Fungi can be found on forest floors, in swamps, and in houses, ranging in size from smaller than the period on your smartphone’s keyboard to stretching over several city blocks.

Scientists estimate more than a million species live on this planet, but most of that diversity remains unknown because the fungi have avoided detection and scientists have not cultured them for study in laboratories.

Now, scientists are using single-cell genomics to expand the fungal tree of life. They tested the approach on several uncultivated fungal species representing early diverging fungi, the earliest evolutionary branches in the fungal genealogy that provide a repertoire of important and valuable gene products. The findings appear in Nature Microbiology.

Tracing the fungal family tree

“Most of the phylogenetic diversity represents early diverging fungi. We know from environmental DNA surveys that they’re common in many habitats, but they’re presumably microscopic so you really have to look for them,” says co-senior author of the study Tim James, an associate professor in the ecology and evolutionary biology department at the University of Michigan.

“We don’t know what they look like and we know we can’t culture them, since what you can culture is not representative of what you see in environmental DNA,” James says. “We would love to be able to look at a given sample and identify what the cells might look like, but we also want to look at the genomes of the organisms and infer what they’re like. That’s where single-cell genomics comes in.”


Through projects such as the US Department of Energy’s Joint Genome Institute’s 1,000 Fungal Genomes, researchers aim to expand the known fraction of fungal diversity with representative genome sequences for various lineages. Even with such efforts though, the majority of available genomes belong to just two major lineages, Ascomycota and Basidiomycota. The early-diverging lineages that are closer to the base of the Fungal Tree of Life have few representative genomes.

“Conceptually, this is a pilot project,” says Joint Genome Institute data scientist and first author Steven Ahrendt. “This is a similar idea to the approach JGI has taken with microbial dark matter—that the species are out there, but they don’t show up in plate-based culturing.”

‘Fungal dark matter’

The researchers applied the single-cell genomics approach to eight fungi, seven of which belong to the early diverging lineages Cryptomycota, Chytridiomycota, and Zoopagomycota. In addition, six of the seven fungi are mycoparasites, or fungi that attack other fungi. As such, they need to be able to infest the hosts without harming themselves. These species were grown in co-culture with their hosts, and then researchers isolated the spores of the parasites for sampling.

“That mycoparasitic lifestyle might be a factor in why these species are unculturable,” Ahrendt says.

Looking at the genomes of the six mycoparasites, the team found that essential metabolism genes for pathways involving thiamine, urea, and sulfate, among others, were missing, which could make culturing them difficult.

James notes that the fungi researchers used in the study came from a wide range of mycoparasitic strategies and represent a large amount of evolutionary time, which made it difficult for the team to identify novel sets of genes that could shed light on the mycoparasitic lifestyle. What the study really highlights, he adds, is that the single-cell approach is feasible for what he calls “fungal dark matter.”

The fungal single cells individually yielded anywhere from 6 percent to 88 percent of a genome, but combining the single cells yielded genome co-assemblies ranging from 73 percent to 99 percent complete.

Tweaking the pipeline

There are around 2,000 described species of early diverging fungi, and about 120,000 described species of the Ascomycota and Basidiomycota, James says.

“We’ve described maybe 5 percent of the fungal diversity, and we’re in an era where we can start to get at that missing piece of the diversity,” he says.

The single-cell genomics approach will be applied to a JGI Community Science Program proposal that James is leading and that involves 50 unknown early diverging fungi from aquatic environments.

“What I’d really like to see is people take up this approach and tweak the pipeline to fit different organismal groups,” he says. “This pilot just started the exploration by looking at unicellular aquatic organisms, and yet we have organisms in soil, in plants, and so on.”

“This work was a proof-of-principle that the single-cell genomics approach can reconstruct near-complete fungal genomes and provide insights into phylogenetic position and metabolic capacities of diverse unculturable species from environmental samples,” says JGI Fungal Program head and co-senior author Igor Grigoriev.

“Several genomes in this study represent the first references for fungal phyla containing mostly species that have not been or cannot be cultured. Having genome sequences and metabolic reconstructions of a broad diversity of uncultured fungal species enables us to better understand fungal evolution and expand the catalogs of genes, enzymes, and pathways,” Grigoriev says.

The genomes of the species referenced in the study are available in JGI’s fungal portal, MycoCosm, as well as on GenBank.

Additional researchers who contributed to this work are from University of California, Berkeley; Ottawa Hospital Research Institute, Canada; Aix-Marseille University, France; Institut National de la Recherche Agronomique, France; King Abdulaziz University, Saudi Arabia; and University of Florida.

Source: University of Michigan


Vaccinate humans to protect mosquitoes from malaria?

Wed, 2018-10-10 14:59

Researchers have devised a simple way to boost the efficacy of a new kind of malaria vaccine.

For decades, scientists have been trying to develop a vaccine that prevents mosquitoes from spreading malaria among humans.

This unique approach—in which immunized humans transfer anti-malarial proteins to mosquitoes when bitten—is called a transmission-blocking vaccine (TBV). A few malarial TBVs have shown promise, but researchers have not widely tested them due to unwanted side effects or limited effectiveness. The new research could change that.

If the new method is successful, it could help reduce the spread of the disease, which kills more than 400,000 people annually, mostly small children in sub-Saharan Africa.

“Malaria is a huge global problem. This approach—using a transmission-blocking vaccine—could be part of a suite of tools that we use to tackle the disease,” says Jonathan Lovell, associate professor of biomedical engineering at the University at Buffalo and lead author of the paper, which appears in Nature Nanotechnology.

Beyond bug nets

The case for using TBVs to fight malaria stems, in part, from how the disease spreads. Here is how it works: a mosquito carrying the disease bites a child and transmits the malaria parasite. Later, a non-infected mosquito bites the child, and this time it’s the child who passes the parasite to the mosquito. That mosquito later bites a new victim and infects them with the parasite.

The development of effective TBVs—combined with bug nets, insecticides, anti-parasitic drugs, and other types of vaccines—could help break this vicious cycle, proponents say. While a TBV would not directly prevent an immunized person from getting infected, the vaccine would reduce the odds that people living in that community get malaria, hopefully to zero.

Prior research has focused on techniques like genetic engineering and chemical binding of toxin proteins to boost TBV responses. Each strategy has potential, but both are time- and resource-intensive. The new biotechnology differs in its relative ease of assembly and overall effectiveness, Lovell says.

The malaria parasite’s life cycle includes numerous stages, and at different stages, different proteins make the best vaccine target antigens (the proteins against which a vaccine mounts an immune response). To purify these antigens for a vaccine, researchers often modify them with a small chain of amino acids called a polyhistidine tag.

Malaria-attacking antibodies

Researchers discovered that they could mix the antigens with nanoparticles containing small amounts of cobalt-porphyrin and phospholipid; the cobalt-porphyrin, similar in structure to vitamin B12, binds the nanoparticle to the antigens.

The resulting structure is a next-generation adjuvant, which is an immunological agent that enhances the efficacy of vaccines. The vaccine works by inducing humans to make malaria-attacking antibodies, which then move to the mosquito as it bites the immunized human.

Tests involving mice and rabbits showed that antibodies from a protein called Pfs25 effectively blocked the development of malaria-causing parasites inside the gut of mosquitoes. Additional tests paired the adjuvant with multiple malaria antigens, suggesting its promise for blocking the spread of malaria at numerous stages of the disease.

The next step is to prepare additional experiments that will justify moving the technology into human trials.

Additional coauthors are from Walter Reed Army Institute of Research, the National Institutes of Health, McGill University, and the PATH Malaria Vaccine Initiative. The PATH Malaria Vaccine Initiative, the National Institutes of Health, and the intramural program of the National Institute of Allergy and Infectious Diseases supported the work.

Source: University at Buffalo

The post Vaccinate humans to protect mosquitoes from malaria? appeared first on Futurity.

Group prenatal care cuts preterm birth risk

Wed, 2018-10-10 14:58

Researchers have discovered that group prenatal care for expecting mothers reduces the risks for preterm birth and low birth weight.

A new study of more than 9,000 women compared those who received either CenteringPregnancy or Expect With Me group prenatal care to those who received traditional one-on-one care.

Researchers found that group prenatal care patients had a 37 percent lower risk of having a preterm birth and a 38 percent lower risk of having a low birth weight baby than women receiving traditional one-on-one care. Better attendance at the group visits also led to more pronounced benefits.

“The health benefits of group prenatal care are enormous…”

Women with five or more group prenatal care visits had a 68 percent lower risk of having a preterm birth and a 66 percent lower risk of having a low birth weight baby than their peers receiving traditional care.

These findings come from the largest study to date comparing group prenatal care to traditional one-on-one care.

“The health benefits of group prenatal care are enormous,” says coauthor Jessica Lewis, deputy director of pregnancy research at Yale University’s School of Public Health. “Preterm birth and low birth weight are the second leading causes of infant mortality in the US, and cost more than $38 billion per year.”

Group prenatal care typically brings together 8 to 12 women for two-hour sessions on the same schedule as traditional prenatal care. Each patient gets a brief one-on-one check-up, and then most of the time is spent in a facilitated discussion on topics related to pregnancy and childbirth. Women receive 20 hours of care over the course of a pregnancy, compared to 2 hours in traditional care.

Prenatal care providers, who offer education and support while working to increase patient engagement, lead the groups. Expect With Me includes a social media platform, where women can continue to access resources, track their health metrics, and connect with other moms and providers between visits.

Previous studies of group prenatal care have primarily focused on young, low-income, minority women. The new study provides evidence that group prenatal care sharply reduces adverse birth outcomes for a diverse range of women, says lead author Shayna Cunningham, a research scientist. “We need to expand access to group prenatal care for all women to improve outcomes and eliminate health disparities.”

“Future analyses will aim to understand the mechanisms by which group prenatal care results in better outcomes,” Cunningham says.

The findings are published in the Journal of Women’s Health.

Additional coauthors are from the Yale School of Public Health and Vanderbilt University Medical Center. The United Health Foundation funded the study.

Source: Yale University

The post Group prenatal care cuts preterm birth risk appeared first on Futurity.

An upshot of having ADHD? ‘Outside the box’ thinking

Wed, 2018-10-10 12:49

People often believe those with Attention Deficit Hyperactivity Disorder face challenges that could hinder future employment, but a new study finds that adults with ADHD feel empowered doing creative tasks, which could help them on the job.

The tendency of individuals with ADHD—a mental disorder commonly diagnosed in childhood—to resist conformity and ignore typical information may be an asset in fields that value innovative and nontraditional approaches, such as marketing, product design, technology, and computer engineering, says study author Holly White, a researcher in the psychology department at the University of Michigan.

White studied a group of college students with and without ADHD and compared how they performed on laboratory creativity tasks. One imagination task asked participants to invent a new example of a common category that differs from existing examples: in the “alien fruit” invention task, a person must create an example of a fictional fruit that might exist on another planet but is different from fruits known to exist on Earth.

Individuals with ADHD may be less prone to design fixation, which is the tendency to get stuck in a rut or stick closely to what already exists when creating a new product.

In doing this creative task, non-ADHD participants often modeled their creations after specific common fruits—such as an apple or strawberry. Those creations were less innovative, White says. But in this study, participants with ADHD created “alien fruits” that differed more from typical Earth fruit and were more original, compared to non-ADHD participants.

The second creative task required participants to invent labels for new products in three categories without copying the examples provided. The ADHD group created labels that were more unique and less similar to the examples provided, compared to the non-ADHD group.

White says the results suggest that individuals with ADHD may be more flexible in tasks that require creating something new, and less likely to rely on examples and previous knowledge.

“As a result, the creative products of individuals with ADHD may be more innovative, relative to creations of non-ADHD peers,” she says. Individuals with ADHD may be less prone to design fixation, which is the tendency to get stuck in a rut or stick closely to what already exists when creating a new product, White says.

“This has implications for creative design and problem-solving in the real world, when the goal is to create or invent something new without being overly constrained by old models or ways of doing things,” she says.

The findings appear in the Journal of Creative Behavior.

Source: University of Michigan

The post An upshot of having ADHD? ‘Outside the box’ thinking appeared first on Futurity.

Efforts to save the Amazon threaten neighboring savanna

Wed, 2018-10-10 12:42

Protecting the Amazon rainforest from deforestation may just be shifting the damage to a less renowned neighbor, according to new research. The unintended consequences are profound.

Efforts to rein in agricultural activities in the Amazon have led to an 80 percent reduction in rainforest destruction between the early 2000s and 2015.

Yet farming and ranching have caused 6.6 times more destruction of natural vegetation in the nearby state of Tocantins, part of the Cerrado in central Brazil, without a corresponding rise in concern.

Robbing Peter to pay Paul?

“We are not saying reducing rainforest destruction in the Amazon shouldn’t get attention,” says Yue Dou, a research associate in the Center for Systems Integration and Sustainability (CSIS) at Michigan State University. “But attention has to be paid to the major destruction of another area, which also has significant biodiversity.”

The Cerrado is a Brazilian savanna of varied, wooded grasslands that covers more than 20 percent of the country. The Amazon’s rainforest terrain of towering, ancient broadleaf trees has wide appeal and international fascination. The Cerrado, though a global biodiversity hotspot, hasn’t commanded the same attention.

Both areas of Brazil have been farmed aggressively. Two supply-chain agreements, which banned purchasing soybeans grown on Amazonian lands after 2006 and beef raised on Amazon land deforested after 2009, vastly slowed deforestation. Researchers calculated that the policies reduced deforestation in the Amazon from 22,766 square miles to 11,013 square miles.

Yet destruction in the Cerrado surged as soybean farmers and cattle ranchers sought new places to produce highly demanded foods. In the state of Tocantins alone the conversion to agricultural land increased from 465 square miles to 3,067 square miles from 2007 to 2015.

Counterintuitive truths

The reasons behind the hidden impacts are complex and can be difficult to understand—it’s hard to realize success in one part of the country can spill over with setbacks in a neighboring area, the researchers say.

Colonization, road building, available infrastructure, and effectiveness of law enforcement are among the many moving parts that cause people to cut down natural vegetation and farm. Comparing the rainforest to Cerrado is also challenging.

That’s why the scientists worked with a telecoupling framework, an approach that integrates different disciplines and examines many different factors, allowing researchers to holistically understand ecological and socioeconomic interactions over distances.

“In our increasingly complex world, we need to look at problems in new ways that can reflect subtleties and truths that are counterintuitive,” says coauthor Jianguo “Jack” Liu, director at CSIS.

“Progress in sustainability must be genuine and we can’t allow ourselves to be blinded by success in one place at the expense of invisible impacts on other places. The telecoupling framework helps to bring together many different kinds of information to fully understand important change in our telecoupled world,” Liu says.

The findings appear in the Journal of Geographic Sciences.

Additional coauthors are from the State University of Campinas, Brazil and CSIS. The National Science Foundation and the São Paulo Research Foundation funded the work.

Source: Michigan State University

The post Efforts to save the Amazon threaten neighboring savanna appeared first on Futurity.

Implant dissolves into the body after it speeds nerve healing

Wed, 2018-10-10 11:56

Scientists have developed the first ever bioresorbable electronic medicine: a biodegradable wireless implant that speeds nerve regeneration and improves the healing of damaged nerves.

In a study with rats, the device delivered regular pulses of electricity to damaged peripheral nerves after a surgical repair process, accelerating the regrowth of nerves in the rats’ legs and enhancing the ultimate recovery of muscle strength and control.

The wireless device, about the size of a dime and the thickness of a sheet of paper, operates for about two weeks before naturally absorbing into the body.

The scientists envision that such transient engineered technologies could one day complement or replace pharmaceutical treatments for a variety of medical conditions in humans.

This type of technology, which the researchers refer to as a “bioresorbable electronic medicine,” provides therapy and treatment over a clinically relevant period of time and directly at the site where it’s needed, thereby reducing side effects or risks associated with conventional, permanent implants.

Open the window

“These engineered systems provide active, therapeutic function in a programmable, dosed format and then naturally disappear into the body, without a trace,” says co-senior author John A. Rogers, professor of materials science and engineering, biomedical engineering and neurological surgery in the McCormick School of Engineering and Northwestern University Feinberg School of Medicine. “This approach to therapy allows one to think about options that go beyond drugs and chemistry.”

While researchers haven’t tested the device in humans, the findings offer promise as a future therapeutic option for nerve injury patients. For cases requiring surgery, standard practice is to administer some electrical stimulation during the surgery to aid recovery. But until now, doctors have lacked a means to continuously provide that added boost at various time points throughout the recovery and healing process.

“We know that electrical stimulation during surgery helps, but once the surgery is over, the window for intervening is closed,” says co-senior author Wilson “Zack” Ray, an associate professor of neurosurgery, of biomedical engineering, and of orthopedic surgery at Washington University in St. Louis. “With this device, we’ve shown that electrical stimulation given on a scheduled basis can further enhance nerve recovery.”

Over the past eight years, Rogers and his lab have developed a complete collection of electronic materials, device designs, and manufacturing techniques for biodegradable devices with a broad range of options that offer the potential to address unmet medical needs.

When Ray and his colleagues at Washington University identified the need for electrical stimulation-based therapies to accelerate wound healing, Rogers and colleagues at Northwestern went to their toolbox and designed and developed a thin, flexible device that wraps around an injured nerve and delivers electrical pulses at selected time points for days before the device harmlessly degrades in the body.

A transmitter outside the body that acts much like a cellphone-charging mat powers and controls the device wirelessly. Rogers and his team worked closely with the Washington University team throughout the development process and animal validation.

The Washington University researchers then studied the bioresorbable electronic device in rats with injured sciatic nerves. This nerve sends signals up and down the legs and controls the hamstrings and muscles of the lower legs and feet.

They used the device to provide one hour per day of electrical stimulation to the rats for one, three, or six days or no electrical stimulation at all, and then monitored their recovery for the next 10 weeks.

Beyond the nervous system

The findings show that any electrical stimulation was better than none at all at helping the rats recover muscle mass and muscle strength. Further, the more days of electrical stimulation the rats received, the more quickly and thoroughly they recovered nerve signaling and muscle strength. Researchers found no adverse biological effects from the device and its reabsorption.

“Before we did this study, we weren’t sure that longer stimulation would make a difference, and now that we know it does, we can start trying to find the ideal time frame to maximize recovery,” Ray says. “Had we delivered electrical stimulation for 12 days instead of six, would there have been more therapeutic benefit? Maybe. We’re looking into that now.”

By varying the composition and thickness of the materials in the device, Rogers and colleagues can control the precise number of days it remains functional before the body absorbs it.

“This notion of transient electronic devices has been a topic of deep interest in my group for nearly 10 years—a grand quest in materials science, in a sense.”

New versions can provide electrical pulses for weeks before degrading. The ability of the device to degrade in the body takes the place of a second surgery to remove a non-biodegradable device, thereby eliminating additional risk to the patient.

“We engineer the devices to disappear,” Rogers says. “This notion of transient electronic devices has been a topic of deep interest in my group for nearly 10 years—a grand quest in materials science, in a sense. We are excited because we now have the pieces—the materials, the devices, the fabrication approaches, the system-level engineering concepts—to exploit these concepts in ways that could have relevance to grand challenges in human health.”

The research study also showed the device can work as a temporary pacemaker and as an interface to the spinal cord and other stimulation sites across the body. These findings suggest broad utility, beyond just the peripheral nervous system.

Source: Northwestern University

The post Implant dissolves into the body after it speeds nerve healing appeared first on Futurity.

Not all neurons die the same way in people with ALS

Wed, 2018-10-10 10:50

Researchers have discovered that two different kinds of motor neurons that die in people with amyotrophic lateral sclerosis (ALS) may not die the same way.

The research offers an important insight for understanding the disease and, eventually, finding a cure.

ALS is a surprisingly common disease that causes the death of motor neurons in the spine that control voluntary muscles such as those involved in walking, talking, chewing, or breathing.

In some people with ALS, neurons in the brain that issue commands to these spinal motor neurons also die. It’s not clear why both types of neurons are affected in some people with ALS, but not others.

“Our results raise the possibility that the glutamatergic neurons in the brains of some ALS patients die in ways that are somehow different than how the spinal cord neurons die,” says Anne Hart, a professor of neuroscience at Brown University and researcher at the Carney Institute for Brain Science. “Before, we all assumed that both kinds of neurons died the exact same way.”

One cure may not be enough

The implications could be significant, Hart says. This is the first clue that future treatments for spinal cord neurons might not cure all people with ALS, because they won’t help neurons the disease affects in the brain.

Though many cases of ALS don’t have a clear genetic component, about 1 percent of people with ALS have mutations in the gene for SOD1, a protein involved in breaking down naturally occurring free radicals from oxidative stress.

For the new study, which appears in PLOS Genetics, researchers precisely and selectively engineered C. elegans—transparent worms about as long as a pinhead—so that the worm’s SOD1 gene would carry mutations like those found in people with ALS.

“We certainly can’t prove this in worms, but it opens up a whole new way of looking at ALS.”

The results could explain why ALS only affects spinal neurons in some people, yet neurons in both the spine and the brain die in others.

More research needs to be done to see if the findings from worms will hold true in mammalian brains and lead to a better understanding of why neurons degenerate in people with ALS.

“We certainly can’t prove this in worms,” Hart says, “but it opens up a whole new way of looking at ALS.”

Long-term effort

The work in Hart’s lab to develop a new ALS model has been going on for years. Postdoctoral researcher Jill Yersak began engineering different ALS mutations into worms seven years ago. Worms have neurons very similar to human neurons and are less expensive and produce results faster than mice or other mammals, Hart says.

Graduate student Saba Baskoylu completed the project. Then, the research team ran numerous tests to see how the different patient versions of the SOD1 protein affected neuron function, motor neuron death, and worm behavior.

Researchers found that four patient gene mutations caused neurodegeneration after oxidative stress in a type of neuron similar to those in the human spine, likely through increased toxic protein accumulation that doesn’t happen with the normal protein.

However, two patient gene mutations also caused degeneration in a different type of neuron—similar to the neurons in the human brain—in part because the mutant protein no longer functioned correctly during oxidative stress.

The other kinds of neurons were healthy in the new ALS models, even after oxidative stress, which is very much like the specificity of neuron death in people with ALS, Hart says. In contrast, previous worm models weren’t very specific—the patient version of SOD1 could kill almost any worm neuron.

CRISPR editing

Researchers made these earlier worm models by adding extra copies of a patient disease gene to the worms, which would then express the patient version of SOD1 at high levels.

Now, thanks to new genetic tools like CRISPR/Cas9, directly editing the genes of worms and other animals is affordable and reliable. That allows scientists to make more precise models compared to simply adding extra copies of a gene, Hart says.

Hart’s goal was a more precise disease model that would let her group study the earliest events in ALS by using these tools to change one “letter” in the worms’ standard “blueprint” for the SOD1 protein. That way, the worms carry normal amounts of the protein and no extra gene copies.

The team accomplished their goal and discovered that the glutamatergic neurons—similar to the ALS-affected neurons in human brains—and the cholinergic neurons—similar to the spinal neurons—in worms degrade for different reasons. They will do more research on these worm models.

“We can now use these new ALS models to find other proteins and genes that we can use to stop neurodegeneration in worms,” Hart says.

She plans to use the models to test many different small molecules for potential therapeutic drugs and to find other genes whose inactivation will suppress neurodegeneration. Then, collaborators can test these genes in mice or human cell cultures, with the hope of helping people with ALS.

“ALS is complicated, you can see why it’s taking everyone a while to figure out what’s going on,” Hart says.

The ALS Finding A Cure Foundation, the ALS Association, and the Judith and Jean Pape Adams Charitable Foundation funded the work.

Source: Brown University

The post Not all neurons die the same way in people with ALS appeared first on Futurity.

Diamond tech destroys ‘forever chemicals’ in water

Wed, 2018-10-10 09:55

Researchers are developing a scalable treatment option for PFAS-contaminated wastewater.

More than 1.5 million Michigan residents have PFAS-tainted water, and potentially hundreds of sites nationwide are affected, with more being identified.

PFAS, or per- and polyfluoroalkyl substances, are colloquially known as “forever chemicals” because they are so difficult to break down. They are found in water supplies wherever people use flame retardants, waterproofing, or vapor suppressants. People can absorb PFAS through direct contact (drinking, bathing, swimming) or indirectly (eating meat or vegetables that have been exposed to PFAS).

A successful method to destroy PFAS has remained elusive because of their extremely tough chemical structure. And because the recalcitrant compounds are difficult to break down, they accumulate over time and have been linked to adverse health effects.

PFAS are so potent that even trace amounts are dangerous. Imagine three drops from an eye dropper in an Olympic-size pool. Those three drops are about equal to the EPA health advisory level for PFOA and PFOS (two types of PFAS) in drinking water, which is 70 parts per trillion.
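That analogy survives a quick back-of-the-envelope check. The sketch below assumes typical values for an Olympic pool (about 2.5 million liters) and a single eye-dropper drop (about 0.05 milliliters); these volumes are assumptions for illustration, not figures from the researchers:

```python
# Back-of-the-envelope check of the eye-dropper analogy.
# Pool and drop volumes are assumed typical values, not figures from the researchers.
pool_volume_liters = 2_500_000  # 50 m x 25 m x 2 m Olympic pool, roughly 2.5 million liters
drop_volume_liters = 0.05e-3    # one eye-dropper drop, roughly 0.05 mL
num_drops = 3

fraction = (num_drops * drop_volume_liters) / pool_volume_liters
parts_per_trillion = fraction * 1e12

print(f"Roughly {parts_per_trillion:.0f} parts per trillion")  # ~60 ppt, near the 70 ppt advisory level
```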

Now, researchers have a viable solution to treat PFAS-contaminated wastewater that’s ready for a pilot-scale investigation. The electrochemical oxidation system uses boron-doped diamond electrodes. The process breaks down the contaminants’ formidable molecular bonds, cleaning the water while systematically destroying the hazardous compounds.

“EO, or electrochemical oxidation, is a simple, clean, and effective method for destruction of PFAS and other co-contaminants as a complementary procedure to other wastewater treatment processes,” says Cory Rusinek, an electrochemist at the Michigan State University-Fraunhofer USA, Inc. Center for Coatings and Diamond Technologies. “If we can remove it from wastewater, we can reduce its occurrence in surface waters.”

Wastewater treatment is a multi-step process to remove contaminants or add beneficial chemicals to create safe, dischargeable water. The EO system has advanced to the laboratory scale, successfully removing PFAS from gallons of tainted water. While many electrodes have been investigated for EO, boron-doped diamond electrodes have shown the most promise for contaminant degradation, with a number of studies showing their ability to degrade PFAS.

The EO process systematically breaks down PFAS, transforming it from a hazardous material into carbon dioxide, water, and fluoride. The BDD electrodes are proving to be key workhorses in the process: even after hundreds of rounds of treatment, they show little, if any, wear.

Depending on its success and other advances, it’s possible that this process could eventually become a complementary component of a municipal drinking water system, the researchers say.

Source: Michigan State University

The post Diamond tech destroys ‘forever chemicals’ in water appeared first on Futurity.

‘Stationary waves’ fuel extreme wet and dry weather

Wed, 2018-10-10 09:32

New research that examines the role of stationary low- and high-pressure systems projects that global warming will spawn more extreme wet and dry weather around the world.

Those extremes include more frequent dry spells in the northwestern, central, and southern United States and in Mexico, and more frequent heavy rainfall events in south Asia, the Indochinese Peninsula, and southern China.

One reason: subtropical stationary waves in northern summers. These planet-spanning waves are composed of persistent high-pressure systems over the North Pacific and North Atlantic and persistent low-pressure systems over Eurasia and North America, the study says. The high-pressure systems provide persistent conditions for dry weather, while the low-pressure systems fuel wet weather.

The intensity of subtropical stationary waves during northern summers increased from 1979 to 2013, and projections suggest the increase will accelerate as the climate warms, the study says.

“Increasingly strong subtropical stationary waves play an important role in explaining the increase in extremely dry weather in North America and extremely wet weather in south and southeast Asia,” says lead author Jiacan Yuan, a postdoctoral associate in the earth and planetary sciences department at Rutgers University–New Brunswick and the university’s Institute of Earth, Ocean, and Atmospheric Sciences.

Subtropical stationary waves may serve as an important link connecting regional droughts and extreme rainfall events with global warming, the study says. Such extremes, which have increased significantly in recent decades because of a warming climate, can cause enormous economic losses and threaten lives.

Examples of extreme events include catastrophic floods in South Asia during the 2017 monsoon season, which killed about 1,300 people and affected more than 45 million, according to a United Nations Children’s Fund report. A severe drought afflicted Texas in 2011, with the Texas AgriLife Extension Service estimating direct agricultural losses at $5.2 billion.

The findings appear in the Journal of Climate.

Scientists from Duke University and Georgia Institute of Technology also contributed to the study.

Source: Rutgers University

The post ‘Stationary waves’ fuel extreme wet and dry weather appeared first on Futurity.