New research lends evidence to the idea that children learn the ability to understand basic grammar early in language development, rather than possessing it innately.
Michael Frank, associate professor of psychology at Stanford University, analyzed toddlers’ early language and found that rule-based grammatical knowledge emerges gradually, with a significant increase around the age of 24 months.
The new study in Psychological Science also points out the need to gather more data that track children’s speech over time, which would help make future research more precise.
“The ability of humans to acquire and use language is a big difference between us and other species, and it’s also one of the biggest scientific puzzles out there,” says Frank, who coauthored the study. “Studying language acquisition in children is one way for us to try to find out what makes us human.”

Imitating or understanding?
Previous research has shown that children use articles, such as “a” and “the,” early and in an overwhelmingly correct way. But, Frank says, it is difficult to sort out whether children are just imitating adults or if they actually understand that articles should be used before nouns like “dog” or “ball”—and can use them appropriately with new nouns that are unknown to them.
To address that difficulty, the team created a new statistical model to measure changes in a child’s grammar over time. The model relies on Bayesian inference, a method that helps estimate the level of certainty in results. In addition, it takes into account the relationship between what the child says and what the child has heard from adults, separating imitation from generalization.
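The paper’s actual model is more elaborate, but its core idea, weighing a child’s correct usage on never-heard noun combinations more heavily than combinations that could be copied from adults, and staying uncertain when counts are small, can be sketched with a simple Beta-Binomial posterior. All counts below are made up for illustration; none come from the study.

```python
def beta_posterior(successes, trials, alpha=1.0, beta=1.0):
    """Posterior mean and standard deviation of a success rate under a
    Beta(alpha, beta) prior with binomial data (uniform prior by default)."""
    a = alpha + successes
    b = beta + (trials - successes)
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5

# Hypothetical counts of correct article use ("a dog", "the ball", ...):
# combinations the child also heard from adults (consistent with imitation)
# vs. combinations the child never heard (requires a general rule).
mean_heard, sd_heard = beta_posterior(successes=40, trials=45)
mean_novel, sd_novel = beta_posterior(successes=12, trials=20)

# With only 3 attempts on novel combinations, the posterior stays wide --
# the model refuses to be confident, mirroring the paper's motivation.
mean_sparse, sd_sparse = beta_posterior(successes=2, trials=3)
```

The point of the sketch is the uncertainty behavior: the fewer utterances available for a child, the wider the posterior, so sparse recordings cannot support strong claims about that child’s grammar.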
Researchers applied this model to data sets available for 27 toddlers and found that rule-based grammatical knowledge in their speech wasn’t constant: it appeared more strongly in older children.
Frank says the statistical model allowed the team not only to analyze children’s language but also to avoid overly confident interpretations when there was too little data for a particular child.

Conflicting reports
The study underscored the fact that data on language development in children under two years old is lacking. To characterize children’s initial level of grammar use, Frank says, it’s critical for scientists to have a sophisticated analytical model as well as consistent recordings that start from the time children begin talking.
According to Frank, the current lack of data and the analytical challenges it presents have led researchers on opposite sides of the grammar debate to draw contradictory conclusions. For instance, two studies published in peer-reviewed journals in 2013 used similar data sets, but one inferred that grammatical knowledge is innate while the other concluded that grammar is a learned skill.
“People have very strong feelings about the question of innateness versus learning,” Frank says. “We really didn’t know what to expect because there were these conflicting reports out there.”
The team hopes that its statistical model, together with new data sets, will help move the debate forward.

An app to come
To help increase the pool of data, Frank and his colleagues are building an online database called Wordbank. The site aims to spur the gathering of data on children’s vocabulary and early language development and encourages researchers to share their data with different institutions and universities. Frank is also collaborating on a smartphone app for collecting early vocabulary data from parents.
“It’s going to take a tremendous amount of data to study this problem and build enough evidence for how children learn language,” Frank says. “We’re hoping that once we have those data, we can get a clearer picture of children’s early learning.”
Additional coauthors of this study are from the University of California, Berkeley; Stanford and the MIT Media Lab; and MIT. The National Science Foundation, the Alfred P. Sloan Research Fellowship, and the Center for Advanced Study in the Behavioral Sciences supported the work.
Source: Stanford University
Warming in the 21st century has reduced Colorado River flows by at least 0.5 million acre-feet—about the amount of water used by 2 million people for one year, a new study warns.
“This paper is the first to show the large role that warming temperatures are playing in reducing the flows of the Colorado River,” says Jonathan Overpeck, professor of geosciences and of hydrology and atmospheric sciences at the University of Arizona.
From 2000 to 2014, the river’s flows declined to only 81 percent of the 20th-century average, a reduction of about 2.9 million acre-feet of water per year. One acre-foot of water will serve a family of four for one year, according to the US Bureau of Reclamation.
From one-sixth to one-half of the 21st-century reduction in flow can be attributed to the higher temperatures since 2000. The new analysis shows that as temperatures continue to rise, Colorado River flows will continue to decline.
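These figures are mutually consistent, as a quick back-of-the-envelope check shows. The calculation below uses only numbers quoted in this article, not the study’s own analysis.

```python
# 2000-2014 flows were 81% of the 20th-century average, a deficit of
# about 2.9 million acre-feet per year (figures quoted in the article).
deficit_af = 2.9e6
flow_fraction = 0.81
century_average_af = deficit_af / (1 - flow_fraction)  # ~15.3 million acre-feet/yr

# One acre-foot serves a family of four for one year (US Bureau of
# Reclamation), so the ~0.5 million acre-feet attributed to warming
# equates to the annual water use of roughly:
warming_deficit_af = 0.5e6
people = warming_deficit_af * 4  # ~2 million people, as stated above
```

The implied 20th-century average of roughly 15 million acre-feet per year also matches the commonly cited long-term flow of the Colorado River.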
Current climate change models indicate temperatures will increase as long as humans continue to emit greenhouse gases into the atmosphere, but the projections of future precipitation are far less certain.

30% by midcentury
Forty million people rely on the Colorado River for water, according to the US Bureau of Reclamation. The river supplies water to seven US Western states plus the Mexican states of Sonora and Baja California.
“The future of the Colorado River is far less rosy than other recent assessments have portrayed,” says Bradley Udall, a senior water and climate scientist/scholar at Colorado State University’s Colorado Water Institute. “A clear message to water managers is that they need to plan for significantly lower river flows.” The study’s findings “provide a sobering look at future Colorado River flows.”
The Colorado River Basin has been in a drought since 2000. Previous research has shown the region’s risk of a megadrought—one lasting more than 20 years—rises as temperatures increase.
“We’re the first to make the case that warming alone could cause Colorado River flow declines of 30 percent by midcentury and over 50 percent by the end of the century if greenhouse gas emissions continue unabated,” Overpeck says.
The researchers began their study, published in the journal Water Resources Research, because Udall learned that recent Colorado flows were lower than managers expected given the amount of precipitation. The team wanted to provide water managers with insight into how future projections of temperature and precipitation for the Colorado River Basin would affect the river’s flows.
They began by looking at the drought years of 2000-2014. About 85 percent of the river’s flow originates as precipitation in the Upper Basin—the part of the river that drains portions of Wyoming, Utah, Colorado, and New Mexico. The team found that from 2000 to 2014, temperatures in the river’s Upper Basin were 1.6 degrees Fahrenheit (0.9 degree Celsius) higher than the average for the previous 105 years.
“A megadrought in this century will throw all our operating rules out the window.”
To see how increased temperatures might contribute to the reductions in the river’s flow observed since 2000, they reviewed and synthesized 25 years of research on how climate change has affected and will affect the region, and on how temperature and precipitation affect the river’s flows.
Water loss increases as temperatures rise because plants use more water, and higher temperatures increase evaporative loss from the soil and from the water surface and lengthen the growing season.

What about a megadrought?
In previous studies, researchers showed that current climate models simulate 20th-century conditions well, but the models cannot simulate the 20- to 60-year megadroughts known to have occurred in the past. Moreover, many of those models did not reproduce the current drought.
Those researchers and others suggest the risk of a multidecadal drought in the Southwest in the 21st century is much higher than climate models indicate and that as temperatures increase, the risk of such a drought increases.
“A megadrought in this century will throw all our operating rules out the window,” Udall says.
The findings show that all current climate models agree that temperatures in the Colorado River Basin will continue rising if the emission of greenhouse gases is not curbed. However, the models’ predictions of future precipitation in the basin have much more uncertainty.
“Even if the precipitation does increase, our work indicates that there are likely to be drought periods as long as several decades when precipitation will still fall below normal,” Overpeck says.
The new study suggests Colorado River flows will continue to decline. “I was surprised at the extent to which the uncertain precipitation aspects of the current projections hid the temperature-induced flow declines,” Udall says.
The US Bureau of Reclamation lumps temperature and precipitation together in its projections of Colorado River flow, he says. “Current planning understates the challenge that climate change poses to the water supplies in the American Southwest. My goal is to help water managers incorporate this information into their long-term planning efforts.”
The Colorado Water Institute, National Science Foundation, the National Oceanic and Atmospheric Administration, and the US Geological Survey funded the work.
Source: University of Arizona
We actively compete with our coworkers for a limited number of perks, including raises, promotions, bonuses, and recognition. But new research shows that, more often than not, people fall short in determining which coworkers might be trying to edge them out on the job.
“We looked at whether people understood what other people in the workplace thought of them,” says Hillary Anger Elfenbein, professor of organizational behavior at Washington University in St. Louis. “You tend to know who likes you. But, for negative feelings, including competitiveness, people had no clue.”
“You need to pay more attention to what people do rather than what they say.”
Elfenbein and colleagues ran two different studies during the course of their research, recently published in the journal Psychological Science.
In the first, they surveyed salespeople at a Midwestern car dealership where competition was both normal and encouraged. The second study included surveys from more than 200 undergraduate students in 56 separate project groups. All were asked similar questions about their coworkers and what they assumed those people thought of them. When the responses about competition were analyzed, the results were striking: while there were outliers, they completely canceled out.
In other words, coworkers have no clue about their competitive cohorts.
“Some people show their competitiveness, some people you can tell have it out for you, but others have it out for you and act like they’re your close friend,” Elfenbein says. “Those two effects wash out, and people on average have zero idea about who feels competitively toward them.”
The researchers offer two main reasons for the disconnect: First, people tend to mask outward feelings of competitiveness toward others in an effort to be polite. Also, the concept of reciprocity played a role.
“For liking, reciprocation is a good thing,” Elfenbein says. “You keep dates, you give gifts, you have shared, positive experiences. But to get the benefits of competition, such as promotions or perks, you don’t need it to be reciprocated. And when you don’t get that feeling back, it’s hard to gauge who’s truly competing against you.”
For a manager who wants a strong, cohesive team, transparency and uncrossable lines appear to be key to maintaining that balance, the researchers say.
“You want to promote a climate where there is friendly competition,” Elfenbein says. “At the car dealership, everybody knows they are competing against each other. Entire salaries can be based on performance. But if you create a climate where there are boundaries you don’t cross, you can make space for mutual healthy competition to be rewarded.”
As for the individual in the workplace who fears being blindsided by coworkers?
“You need to pay more attention to what people do rather than what they say,” Elfenbein says. “When people are too polite to say something to your face, you need a good, strong network that will let you know what other people really think.”
Coauthors of the study are Noah Eisenkraft from the University of North Carolina at Chapel Hill and Shirli Kopelman from the University of Michigan.
Teacher ratings of parental involvement early in a child’s academic career can accurately predict the child’s academic and social success, new research shows.
The findings show the importance of teacher-parent connections and also the need for training teachers on how to create effective relationships with all parents, says Keith Herman, a professor in the University of Missouri College of Education and co-director of the Missouri Prevention Center.
“It’s clear from years of research that teacher perceptions, even perceptions of which they are not aware, can greatly impact student success,” Herman says. “If a teacher has a good relationship with a student’s parents or perceives that those parents are positively engaged in their child’s education, that teacher may be more likely to give extra attention or go the extra mile for that student.
“If the same teacher perceives another child’s parents to be uninvolved or to have a negative influence on the child’s education, it likely will affect how the teacher interacts with both the child and the parent.”
“Negative perceptions often bring out negative behaviors.”
For their study, Herman and colleagues randomly assigned more than 100 teachers to receive a professional development program called the Incredible Years. The program aims to prepare teachers to develop more effective relationships with parents and students, and to improve their classroom management skills.
Teachers completed surveys about their more than 1,800 students and parents at the beginning and end of the school year, including questions about the quantity and quality of their relationships with parents and about the parents’ involvement in their children’s education. The researchers also collected ratings and observations on student behavior and academic performance.
Children whose parents were identified by teachers as more positively involved had higher levels of prosocial behaviors and more academic success. Additionally, the researchers found that parents who had children in classrooms where teachers received the training were more likely to develop more positive behaviors, including higher involvement and bonding with the teacher.
“Negative perceptions often bring out negative behaviors,” Herman says. “We also know, from this and prior studies, that teachers are more likely to report less comfort and alignment with parents whose children have academic and social problems, and parents from low-income and/or from racial or ethnic minority groups. In other words, often the families and students who need the most positive attention and support to re-engage them in education, are often the ones who are viewed the least favorably.
“Fortunately, this study shows that we can support teachers to improve their relationships with all parents, resulting in a better education for all children while also encouraging parents to become more involved in the education process.”
Herman and colleagues have successfully implemented a teacher-training program that improves teacher-parent relationships and creates more positive perceptions of parental involvement. Papers outlining this study and the teacher-training program have been accepted for publication in School Psychology Quarterly and the Journal of School Psychology.
Source: University of Missouri
People taking heartburn drugs called proton pump inhibitors—Prevacid, Prilosec, Nexium, and Protonix—may not be aware of kidney damage linked to the medications, research suggests.
The new study evaluated the use of PPIs in 125,000 patients. Results indicate that more than half of patients who develop chronic kidney damage while taking the drugs don’t experience acute kidney problems beforehand, meaning patients may not be aware of a decline in kidney function, according to researchers.
“Our results indicate kidney problems can develop silently and gradually over time…”
Therefore, people who take PPIs, and their doctors, should be more vigilant in monitoring use of these medications.
“The onset of acute kidney problems is not a reliable warning sign for clinicians to detect a decline in kidney function among patients taking proton pump inhibitors,” says Ziyad Al-Aly, the study’s senior author and an assistant professor of medicine at Washington University School of Medicine.
“Our results indicate kidney problems can develop silently and gradually over time, eroding kidney function and leading to long-term kidney damage or even renal failure. Patients should be cautioned to tell their doctors if they’re taking PPIs and only use the drugs when necessary.”
More than 15 million Americans suffering from heartburn, ulcers, and acid reflux have prescriptions for PPIs, which bring relief by reducing gastric acid. Many millions more purchase the drugs over the counter and take them without being under a doctor’s care.
The researchers—including first author Yan Xie, a biostatistician at the St. Louis Veterans Affairs—analyzed data from the Department of Veterans Affairs databases on 125,596 new users of PPIs and 18,436 new users of other heartburn drugs referred to as H2 blockers. The latter are much less likely to cause kidney problems but often aren’t as effective.
Over five years of follow-up, the researchers found that more than 80 percent of PPI users did not develop acute kidney problems, which often are reversible and are characterized by too little urine leaving the body, fatigue, and swelling in the legs and ankles.
However, more than half of the cases of chronic kidney damage and end-stage renal disease associated with PPI use occurred in people without acute kidney problems.
In contrast, among new users of H2 blockers, 7.67 percent developed chronic kidney disease in the absence of acute kidney problems, and 1.27 percent developed end-stage renal disease.
End-stage renal disease occurs when the kidneys can no longer effectively remove waste from the body. In such cases, dialysis or a kidney transplant is needed to keep patients alive.
“Doctors must pay careful attention to kidney function in their patients who use PPIs, even when there are no signs of problems,” cautions Al-Aly, who also is the VA’s associate chief of staff for research and education and co-director of the VA’s Clinical Epidemiology Center. “In general, we always advise clinicians to evaluate whether PPI use is medically necessary in the first place because the drugs carry significant risks, including a deterioration of kidney function.”
The study appears in Kidney International. Funding came from the US Department of Veterans Affairs.
The post On heartburn drugs? Kidney trouble could surprise you appeared first on Futurity.
The human heart beats more than 2.5 billion times in an average lifetime. Now, a new 3D organ-on-a-chip can mimic the heart’s amazing biomechanical properties.
“We created the I-Wire Heart-on-a-Chip so that we can understand why cardiac cells behave the way they do by asking the cells questions, instead of just watching them,” says John Wikswo, professor of living state physics, biomedical engineering, molecular physiology, biophysics, and physics at Vanderbilt University.
“We believe it could prove invaluable in studying cardiac diseases, drug screening, and drug development, and, in the future, in personalized medicine by identifying the cells taken from patients that can be used to patch damaged hearts effectively.”
The device and the results of initial experiments show that it faithfully reproduces the response of cardiac cells to two different drugs that affect heart function in humans. The findings appear in the journal Acta Biomaterialia. A companion article in the same issue presents a biomechanical analysis of the I-Wire platform that can be used for characterizing biomaterials for cardiac regenerative medicine.

Two millionths of a human heart
The unique aspect of the new device, which represents about two millionths of a human heart, is that it controls the mechanical force applied to cardiac cells. This allows the researchers to reproduce the mechanical conditions of the living heart, which is continually stretching and contracting, in addition to its electrical and biochemical environment.
“Heart tissue, along with muscle, skeletal, and vascular tissue, represents a special class of mechanically active biomaterials,” Wikswo says. “Mechanical activity is an intrinsic property of these tissues so you can’t fully understand how they function and how they fail without taking this factor into account.”
“Currently, we don’t have many models for studying how the heart responds to stress. Without them, it is very difficult to develop new drugs that specifically address what goes wrong in these conditions,” says Charles Hong, associate professor of cardiovascular medicine at Vanderbilt’s School of Medicine, who didn’t participate in the research but is familiar with it. “This provides us with a really amazing model for studying how hearts fail.”

I-Wire device with cardiac fiber shown in magnification window. (Credit: VIIBRE/Vanderbilt)
The I-Wire device consists of a thin thread of human cardiac cells 0.014 inches thick (about the size of 20-pound monofilament fishing line) stretched between two perpendicular wire anchors. The amount of tension on the fiber can be varied by moving the anchors in and out, and the tension is measured with a flexible probe that pushes against the side of the fiber.
The fiber is supported by wires and a frame in an optically clear well that is filled with liquid medium like that which surrounds cardiac cells in the body. The apparatus is mounted on the stage of a powerful optical microscope that records the fiber’s physical changes. The microscope also acts as a spectroscope that can provide information about the chemical changes taking place in the fiber. A floating microelectrode also measures the cells’ electrical activity.Drugs and toxins
The I-Wire system can be used to characterize how cardiac cells respond to electrical stimulation and mechanical loads and can be implemented at low cost, small size, and low fluid volumes, which make it suitable for screening drugs and toxins. Because of its potential applications, Vanderbilt University has patented the device.
Unlike other heart-on-a-chip designs, I-Wire allows the researchers to grow cardiac cells under controlled, time-varying tension similar to what they experience in living hearts. As a consequence, the heart cells in the fiber align themselves in alternating dark and light bands, called sarcomeres, which are characteristic of human muscle tissue. The cardiac cells in most other heart-on-a-chip designs do not exhibit this natural organization.
In addition, the researchers have determined that their heart-on-a-chip obeys the Frank-Starling law of the heart. The law, which was discovered by two physiologists in 1918, describes the relationship between the volume of blood filling the heart and the force with which cardiac cells contract. The I-Wire is one of the first heart-on-a-chip devices to do so.

Close-up view of the I-Wire cardiac fiber shown at two different levels of magnification. (Credit: VIIBRE/Vanderbilt)
To demonstrate the I-Wire’s value in determining the effects that different drugs have on the heart, scientists tested its response to two drugs known to affect heart function in humans: isoproterenol and blebbistatin. Isoproterenol is a medication used to treat bradycardia (slow heart rate) and heart block (a disruption of the heart’s electrical conduction). Blebbistatin inhibits contractions in all types of muscle tissue, including the heart.
The device faithfully reproduces the response of cardiac cells in a living heart, says Veniamin Sidorov, a research assistant professor who led the device’s development.
“Cardiac tissue has two basic elements: an active, contractile element and a passive, elastic element. By separating these two elements with blebbistatin, we successfully characterized the elasticity of the artificial tissue. By exposing it to isoproterenol, we tested its response to adrenergic stimulation, which is one of the main systems responsible for regulation of heart contractions.
“We found that the relationship between these two elements in the cardiac fiber is consistent with that seen in natural tissue. This confirms that our heart-on-a-chip model provides us with a new way to study the elastic response of cardiac muscle, which is extremely complicated and is implicated in heart failure, hypertension, cardiac hypertrophy and cardiomyopathy.”
The National Institutes of Health, the National Science Foundation, the Defense Threat Reduction Agency, the American Heart Association, and the Department of Veterans Affairs funded the work.
Source: Vanderbilt University
There’s no easy way to predict which teenager will become a problem drug user. While certain personality traits—impulsiveness for example—may signal danger, not every adolescent fits the description.
A new study in the journal Nature Communications suggests brain scans may be a way to tell which teen is bored, in a manner of speaking, by the promise of easy money, even when they might not realize it themselves.
Researchers sorted through an intriguing dataset covering, among other things, 144 European adolescents who scored high on a test of what’s called novelty seeking—roughly, the sorts of personality traits that might indicate someone is at risk for drug or alcohol abuse.
Novelty seeking isn’t inherently bad, says Brian Knutson, professor of psychology at Stanford University. On a good day, the urge to take a risk on something new can drive innovation.
But, on a bad day, it can lead people to drive recklessly, jump off cliffs, and ingest whatever someone hands out at a party. Psychologists know that teens who score high on tests of novelty seeking are, on average, a bit more likely to abuse drugs. The question was: could there be a better test, one both more precise and more individualized, that could tell whether novelty seeking might turn into something more destructive?
Researchers thought so—and suspected that a brain-scanning test called the Monetary Incentive Delay Task, or MID, could be the answer. Knutson had developed the task early in his career as a way of targeting a part of the brain now known to play a role in mentally processing rewards like money or the high of a drug.
For the test, people lie down in an MRI brain scanner to play a simple video game for points, which they can eventually convert to money. More important than the details of the game, however, is this: At the start of each round, each player gets a cue about how many points he stands to win during the round. It’s at that point that players start to anticipate future rewards. For most people, that anticipation alone is enough to kick the brain’s reward centers into gear.
This plays out differently—and a little puzzlingly—in adolescents who use drugs. Adolescents’ brains in general respond less when anticipating rewards, compared with adults’ brains. But that effect is even more pronounced when those kids use drugs, which suggests one of two things: Either drugs suppress brain activity, or the suppressed brain activity somehow leads youths to take drugs.
If it’s the latter, then Knutson’s task could predict future drug use. But no one was sure, mainly because little research had tracked brain activity in non-drug-using adolescents and compared it with their eventual drug use.
Christian Büchel, a professor of medicine at Universitätsklinikum Hamburg Eppendorf and coauthor of the current study, had already collected data on around 1,000 14-year-olds as they went through Knutson’s MID task.
The researchers had also followed up with each adolescent two years later to find out whether they’d become problem drug users—for example, if they smoked or drank on a daily basis or had ever used harder drugs like heroin. The researchers then focused their attention on 144 adolescents who hadn’t developed drug problems by age 14 but had scored in the top 25 percent on a test of novelty seeking.
Analyzing that data, Knutson and Büchel found they could correctly predict whether youngsters would go on to abuse drugs about two-thirds of the time based on how their brains responded to anticipating rewards—a substantial improvement over behavioral and personality measures, which correctly distinguished future drug abusers from other novelty-seeking 14-year-olds about 55 percent of the time or only a little better than chance.
“This is just a first step toward something more useful,” Knutson says. “Ultimately the goal—and maybe this is pie in the sky—is to do clinical diagnosis on individual patients” in the hope that doctors could stop drug abuse before it starts.
Source: Stanford University
If you take a walk in the desert on a moonlit night, you might see the animal kingdom’s top-performing athletes darting from flower to flower and hovering in midair: moths of the hawkmoth family.
Nectar-feeding moths, pollinating bats, and hummingbirds are masters at sustaining the most intense exercise of all animals. To extract nectar from a flower, they must hover in front of it before darting off to the next one. But how can these organisms perform such feats on a diet that’s mostly sugar?
New research not only offers an explanation, but also suggests that these animals stay healthy not despite, but because of, their sugary diet.
Oxygen, while necessary for life to exist, is a double-edged sword. The more we engage in intense aerobic exercise, such as hovering, the more oxygen reveals its ugly side in the form of reactive oxygen species—small reactive molecules that wreak havoc on cells.
Researchers in the lab of Goggy Davidowitz in the entomology department in the University of Arizona’s College of Agriculture and Life Sciences discovered that hawkmoths (also known as Manduca moths) have evolved a strategy that helps them minimize the muscle damage inflicted by the oxidative stress generated during sustained flight. The results appear in the journal Science.

80 cans of soda
“If you wanted to consume the equivalent amount of sugar that a moth takes up in one meal, you’d have to drink 80 cans of soda,” says Eran Levin, who led the research as a postdoctoral fellow in Davidowitz’s group. “It’s amazing that an animal can process such an amount of sugar in such a short time.”
“If we understand how the moth is doing it, you can find out how we do it. And we can learn about what goes wrong with our sugar consumption.”
Nectar-feeding moths and hummingbirds take in virtually no antioxidants with their diet, which raises the question of how they cope with the oxidative damage their muscles suffer during nightly foraging flights.
Two sophisticated pieces of equipment set up to work in tandem made it possible to study in great detail the metabolism of Manduca moths during sustained flight. The team found that the insects actually use the sugar in their diet to make their own antioxidants. They accomplish this by shunting the carbohydrates they consume to a metabolic pathway that evolved early on in the evolution of life: the pentose phosphate pathway.
Humans, too, have this pathway, but it cannot, on its own, produce all the antioxidants needed, which is why athletes drink antioxidant-laced sports drinks and parents tell their children to eat their veggies. Fitting this pattern, migrating birds often are observed eating berries and fruit—both rich in antioxidants—during stopovers.
“Manduca is a well-suited model system to study this metabolic pathway, which is the same for bacteria and sequoia trees,” Levin explains. “If we understand how the moth is doing it, you can find out how we do it. And we can learn about what goes wrong with our sugar consumption.”
‘Impossible’ results
During the flight experiments, the researchers noticed something strange: The measurements tracking how much oxygen the moths consumed and how much carbon dioxide they produced didn’t add up.
“If you burn all the sugar you’re eating, you expect the same ratio of carbon dioxide exhaled to oxygen consumed,” Levin says. “This is normal when you feed on carbohydrates, but we obtained results that shouldn’t have been possible according to the scientific literature.”
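The expectation Levin describes can be sketched numerically. The ratio of CO2 produced to O2 consumed (the respiratory exchange ratio, or RER) is about 1.0 when an animal burns pure carbohydrate and about 0.7 when it burns fat; values above 1.0 imply CO2 from a non-oxidative source. This is a minimal illustrative sketch, not the authors’ analysis, and the cutoff values are assumptions for illustration:

```python
def respiratory_exchange_ratio(vco2_ml, vo2_ml):
    """RER: volume of CO2 produced divided by volume of O2 consumed."""
    return vco2_ml / vo2_ml

def interpret(rer):
    """Rough fuel-mix reading of an RER value (illustrative cutoffs)."""
    if rer > 1.0:
        return "extra CO2: non-oxidative pathway likely"
    if rer >= 0.95:
        return "mostly carbohydrate"
    if rer <= 0.75:
        return "mostly fat"
    return "mixed fuels"

# An RER above 1.0, as the moth data implied, cannot come from
# burning sugar or fat alone.
print(interpret(respiratory_exchange_ratio(1.4, 1.0)))
```

Under this reading, the “impossible” moth measurements correspond to an RER that exceeds 1.0, pointing to a pathway such as the pentose phosphate pathway releasing CO2 without consuming oxygen.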
Reluctant to trust the data their moths were generating, Davidowitz contacted the manufacturer of the flight measurement apparatus. The CEO of the company came out, and after much troubleshooting, tinkering, and adjusting, the readings stayed the same.
One day, a colleague suggested flying a bumblebee in place of a moth, because bumblebees are known to burn only carbohydrates during flight. Sure enough, the machine spat out the expected values.
“That told us our data were correct,” Levin says. “They indicated that 40 percent of the carbon in our moth flight experiments had to come from something other than carbohydrates, so we looked for an explanation, and the only such pathway that would produce those results is the pentose phosphate pathway.”
It turned out that while flying, the moths were burning not only carbohydrates but fat as well. Within seconds of resting, they shunted their metabolism to the pentose phosphate pathway.
In addition to resolving the puzzle of measurements that exceeded what theory allows, the results answered the mystery of how a nectar-feeding organism avoids killing itself with oxidative stress.
“On our flight apparatus, moths fly about three miles a night on average,” Davidowitz says. “We don’t know how much they are actually flying in the wild.”
When moths burn lipids during intense exercise, they produce more reactive oxygen species that pose further danger to their flight muscles. “We think the tissue repair occurs when they rest, but we haven’t measured that,” Davidowitz says.
Protecting their muscles
Counterintuitively, moths that flew frequently and intensely had less oxidative cell damage than those that flew less, Levin says.
“There is this common notion out there where we tend to think that all animals that feed on sugar are very active and fast-living creatures,” he says. “But our experiments suggest that this is actually not the case. In fact, there is much more energy to be gained by burning fats, so we suggest these high-performing animals consume a sugar-heavy diet to protect their muscles from damage.”
Levin says he thinks the principles observed in Manduca moths apply to all animals, as similar respiratory values have been measured in marsupials, mammals, and birds. “But because they seemed to contradict theory, those measurements usually didn’t make it into the paper, or they were ascribed to lipid synthesis,” Levin says.
Adds Davidowitz: “We think the ability to shunt glucose through an ancient metabolic pathway has allowed animals that only feed on nectar to embark on long migrations, such as monarch butterflies, hummingbirds, and bats.”
Coauthors of the paper are from New Mexico State University in Las Cruces, New Mexico, and the University of Arizona’s School of Plant Sciences and BIO5 Institute. Funding came from the National Science Foundation.
Source: University of Arizona
Wastewater from oil and gas operations—including fracking for shale gas—at a West Virginia site altered microbes downstream, according to a new study.
The study, published in Science of the Total Environment, shows that wastewater releases, including briny water that contained petroleum and other pollutants, altered the diversity, numbers, and functions of microbes. The shifts in the microbial community indicated changes in their respiration and nutrient cycling, along with signs of stress.
The study also documented changes in antibiotic resistance in downstream sediments, but did not uncover hot spots, or areas with high levels of resistance. The findings point to the need to understand the impacts on microbial ecosystems from accidental releases or improper treatment of fracking-related wastewater. Moreover, microbial changes in sediments may have implications for the treatment and beneficial reuse of wastewater, the researchers say.
“My hope is that the study could be used to start making hypotheses about the impacts of wastewater,” says Nicole Fahrenfeld, lead author of the study and assistant professor in Rutgers University’s department of civil and environmental engineering. Much remains unknown about the impacts of wastewater from fracking, she adds.
“I do think we’re at the beginning of seeing what the impacts could be,” says Fahrenfeld, who works in the School of Engineering. “I want to learn about the real risks and focus our efforts on what matters in the environment.”
Fracking and its wastewater
Underground reservoirs of oil and natural gas contain water that is naturally occurring or injected to boost production, according to the US Geological Survey, whose scientists contributed to the study. During fracking, a fracturing fluid and a solid material are injected into an underground reservoir under very high pressure, creating fractures to increase the porosity and permeability of rocks.
Liquid pumped to the surface is usually a mixture of the injected fluids with briny water from the reservoir. It can contain dissolved salt, petroleum, and other organic compounds, suspended solids, trace elements, bacteria, naturally occurring radioactive materials, and anything injected into wells, the USGS says. Such water is recycled, treated, and discharged; spread on roads, evaporated, or infiltrated; or injected into deep wells.
Fracking for natural gas and oil, and the wastewater it produces, have increased dramatically in recent years. That could overwhelm local infrastructure and strain many parts of the post-fracking water cycle, including the storage, treatment, reuse, transportation, or disposal of the wastewater, according to the USGS.
Changes at Wolf Creek
For the new study, water and sediment samples were collected from tributaries of Wolf Creek in West Virginia in June 2014, including an unnamed tributary that runs through an underground injection control facility.
The facility includes a disposal well, which injects wastewater to 2,600 feet below the surface; brine storage tanks; an access road; and two now-closed lined ponds that were used to temporarily store wastewater so particles could settle before injection.
Water samples were shipped to Rutgers, where they were analyzed. Analysis of sediment samples took place at the Waksman Genomics Core Facility at Rutgers. The study generated a rich dataset from metagenomic sequencing, which pinpoints the genes in entire microbial communities, Fahrenfeld notes.
“The results showed shifts in the microbial community and antibiotic resistance, but this site doesn’t appear to be a new hot spot for antibiotic resistance,” she says. The use of biocides in some fracturing fluids raised the question of whether this type of wastewater could serve as an environment that is favorable for increasing antimicrobial resistance. Antimicrobial resistance detected in these sediments did not rise to the levels found in municipal wastewater—an important environmental source of antimicrobial resistance along with agricultural sites.
Antibiotics and similar drugs have been used so widely and for so long that the microbes the antibiotics are designed to kill have adapted to them, making the drugs less effective, according to the US Centers for Disease Control and Prevention. At least 2 million people become infected with antibiotic-resistant bacteria each year in the US, with at least 23,000 of them dying from the infections.
“We have this really nice dataset with all the genes and all the microbes that were at the site,” Fahrenfeld says. “We hope to apply some of these techniques to other environmental systems.”
Source: Rutgers University
The post Microbes differ downstream from fracking wastewater appeared first on Futurity.
A brain-to-computer hookup recently allowed people with severe limb weakness to type via direct brain control at the highest speeds and accuracy levels reported to date.
Two of the participants have amyotrophic lateral sclerosis, also called Lou Gehrig’s disease, and one has a spinal cord injury.
They each had one or two baby-aspirin-sized electrode arrays placed in their brains to record signals from the motor cortex, a region controlling muscle movement. The signals were transmitted to a computer via a cable and translated by algorithms into point-and-click commands guiding a cursor to characters on an onscreen keyboard.
Each participant, after minimal training, mastered the technique sufficiently to outperform the results of any previous test of brain-computer interfaces, or BCIs, for enhancing communication by people with similarly impaired movement. Notably, they achieved the typing rates without the use of automatic word-completion assistance common in electronic keyboarding applications nowadays, which likely would have boosted their performance.
One participant, Dennis Degray of Menlo Park, California, was able to type 39 correct characters per minute, equivalent to about eight words per minute.
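The equivalence between characters and words follows the common typing-speed convention (an assumption here, not stated in the study) that one “word” is five characters:

```python
def chars_to_wpm(chars_per_minute, chars_per_word=5):
    """Convert correct characters per minute to words per minute,
    using the conventional five-characters-per-word definition."""
    return chars_per_minute / chars_per_word

print(chars_to_wpm(39))  # 39 correct characters/min -> 7.8 wpm
```

That works out to 7.8 words per minute, matching the per-participant averages reported later in the article.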
This point-and-click approach could be applied to a variety of computing devices, including smartphones and tablets, without substantial modifications, the researchers say. Their findings appear in the journal eLife.
“Our study’s success marks a major milestone on the road to improving quality of life for people with paralysis,” says Jaimie Henderson, professor of neurosurgery at Stanford University, who performed two of the three device-implantation procedures at Stanford Hospital. The third took place at Massachusetts General Hospital.
“This study reports the highest speed and accuracy, by a factor of three, over what’s been shown before,” says co-senior author Krishna Shenoy, professor of electrical engineering. “We’re approaching the speed at which you can type text on your cellphone.”
“The performance is really exciting,” says former postdoctoral scholar Chethan Pandarinath, who now has a joint appointment at Emory University and the Georgia Institute of Technology as an assistant professor of biomedical engineering. “We’re achieving communication rates that many people with arm and hand paralysis would find useful. That’s a critical step for making devices that could be suitable for real-world use.”
Shenoy’s lab pioneered the algorithms used to decode the complex volleys of electrical signals fired by nerve cells in the motor cortex, the brain’s command center for movement, and convert them in real time into actions ordinarily executed by spinal cord and muscles.
“These high-performing BCI algorithms’ use in human clinical trials demonstrates the potential for this class of technology to restore communication to people with paralysis,” says postdoctoral scholar Paul Nuyujukian.
‘I was taking out the trash in the rain’
Millions of people with paralysis live in the United States. Sometimes their paralysis comes gradually, as occurs in ALS. Sometimes it arrives suddenly, as in Degray’s case.
Now 64, Degray became quadriplegic on October 10, 2007, when he fell and sustained a life-changing spinal-cord injury. “I was taking out the trash in the rain,” he says. Holding the garbage in one hand and the recycling in the other, he slipped on the grass and landed on his chin. The impact spared his brain but severely injured his spine, cutting off all communication between his brain and musculature from the head down. “I’ve got nothing going on below the collarbones,” he says.
Degray received two device implants at Henderson’s hands in August 2016. In several ensuing research sessions, he and the other two study participants, who underwent similar surgeries, were encouraged to attempt or visualize patterns of desired arm, hand, and finger movements. Resulting neural signals from the motor cortex were electronically extracted by the embedded recording devices, transmitted to a computer, and translated by Shenoy’s algorithms into commands directing a cursor on an onscreen keyboard to participant-specified characters.
The quick brown fox…
The researchers gauged the speeds at which the patients were able to correctly copy phrases and sentences—for example, “The quick brown fox jumped over the lazy dog.” Average rates were 7.8 words per minute for Degray and 6.3 and 2.7 words per minute, respectively, for the other two participants.
The investigational system used in the study, an intracortical brain-computer interface called the BrainGate Neural Interface System, represents the newest generation of BCIs. Previous generations picked up signals first via electrical leads placed on the scalp, then by being surgically positioned at the brain’s surface beneath the skull.
An intracortical BCI uses a tiny silicon chip, just over one-sixth of an inch square, from which protrude 100 electrodes that penetrate the brain to about the thickness of a quarter and tap into the electrical activity of individual nerve cells in the motor cortex.
Henderson likened the resulting improved resolution of neural sensing, compared with that of older-generation BCIs, to handing out applause meters to individual members of a studio audience rather than just stationing them on the ceiling, “so you can tell just how hard and how fast each person in the audience is clapping.”
24/7 wireless system
The day will come—closer to five than 10 years from now, Shenoy predicts—when a self-calibrating, fully implanted wireless system can be used without caregiver assistance, has no cosmetic impact, and can be used around the clock.
“I don’t see any insurmountable challenges,” he says. “We know the steps we have to take to get there.”
Degray, who continues to participate actively in the research, knew how to type before his accident but was no expert at it. He described his newly revealed prowess in the language of a video game aficionado.
“This is like one of the coolest video games I’ve ever gotten to play with,” he says. “And I don’t even have to put a quarter in it.”
Stanford research assistant Christine Blabe is also a study coauthor, as are BrainGate researchers from Massachusetts General Hospital and Case Western Reserve University.
Funding came from the National Institutes of Health, the Stanford Office of Postdoctoral Affairs, the Craig H. Neilsen Foundation, the Stanford Medical Scientist Training Program, Stanford BioX-NeuroVentures, the Stanford Institute for Neuro-Innovation and Translational Neuroscience, the Stanford Neuroscience Institute, Larry and Pamela Garlick, Samuel and Betsy Reeves, the Howard Hughes Medical Institute, the US Department of Veterans Affairs, the MGH-Dean Institute for Integrated Research on Atrial Fibrillation and Stroke, and Massachusetts General Hospital.
Stanford’s Office of Technology Licensing holds intellectual property on the intracortical BCI-related engineering advances made in Shenoy’s lab.
Source: Bruce Goldman for Stanford University
Scientists report that a metamaterial is the first to achieve the kind of performance predicted by theoretical bounds.
Its lightness, strength, and versatility suit it to a variety of applications, from buildings to vehicles to packaging and transport, says Jonathan Berger, mechanical engineer and materials scientist at the University of California, Santa Barbara. He developed the material in 2015 and reports these findings in a letter published in Nature.
The beauty of the solid foam, called Isomax—in this case loosely defined as a combination of a stiff substance and air pockets—lies in its internal geometry. Instead of the typical assemblage of bubbles or a honeycomb arrangement, its ordered cells are set apart by walls forming three-sided pyramids with a base, and octahedra reinforced inside with a “cross” of intersecting diagonal walls.
The combination of the pyramid and cross-shaped cells, says Berger, results in a structure that has low density—mostly air, in fact—yet is uncommonly strong for its mass.
“The Isomax geometry is maximally stiff in all directions,” explains Berger. Other geometries—a honeycomb, for instance—may be able to resist forces from one direction, but approach it from a different direction and the cell will collapse easily. Isomax’s cell structure makes it possible for the material to resist crushing and shearing forces without the need to make it heavier or denser.
However, for all the early interest that his proposed metamaterial generated and the computer modeling that supported his claims, Berger knew he couldn’t rest until science backed him up.
“There was obviously a lot of positive feedback, but for me as a scientist, it’s a bit too much hand-waving until you have something in a peer-reviewed journal,” Berger says.
And now his work is paying off.
Computer models and math on paper
“I carried out some simplified calculations of the stiffnesses of some of the foams and was able to see that the pencil-and-paper results agreed with the computer calculations,” explains coauthor and materials and mechanical engineering professor Robert McMeeking, whose research focuses on computational science and engineering as well as the mechanics of materials, including their fracture and durability. “This gave us confidence that the computer calculations were both correct and being formulated accurately.”
McMeeking’s calculations also proved that, in the case of the lightest weight foams, they were identifying the optimal geometries that enabled the foams to achieve the maximum possible stiffness. “That finding also meant that we could be sure that the computer calculations were also successfully obtaining the optimal geometries for the heavier weight foams, where pencil-and-paper calculations are almost impossible to carry out because they are much more complicated,” he says.
The closed-form solutions and equations developed to create the mechanical model of the metamaterial’s behavior “matched up beautifully” with the earlier computer models, says Berger.
Ultralight vehicles
Given its properties, Isomax “is going to be a very interesting metamaterial,” says coauthor Haydn N. G. Wadley of the University of Virginia. “It will also be an excellent thermal insulating and sound absorbing material. Potential applications for this ultralight material are likely to emerge in aerospace structures, for lightweighting automobiles and in many robotic machines, especially mobile types that carry their own power and must maneuver.”
The development could not have better timing. As resources become more limited and concern for energy efficiency grows, a material with this strength relative to its mass would require fewer resources to produce and less fuel to transport. Its simple geometry makes it versatile enough to fabricate for a variety of situations. Functionally graded, it can be used to create objects with varying stiffness from one end to the other, such as prosthetics and replacement joints, and the design is compatible with manufacturing methods from origami-like folding to bonding and 3D printing.
The study is one of a series of steps investigating the potential of this metamaterial. Berger and team are currently following up with experimental analysis and are looking into manufacturing methods that may allow for efficient fabrication.
More information about Isomax is available at Nama Development, a company Berger formed with the help of John Greathouse and UC Santa Barbara’s Technology Management Program.
Source: UC Santa Barbara
Scientists have figured out an Ice Age paradox and their findings add to mounting evidence that climate change could bring higher seas than most models predict.
Small spikes in the temperature of the ocean, rather than the air, likely drove the rapid disintegration cycles of the expansive ice sheet that once covered much of North America.
The behavior of this ancient ice sheet—called Laurentide—has puzzled scientists for decades because its periods of melting and splintering into the sea occurred at the coldest times in the last Ice Age. Ice should melt when the weather is warm, but that’s not what happened.
“We’ve shown that we don’t really need atmospheric warming to trigger large-scale disintegration events if the ocean warms up and starts tickling the edges of the ice sheets,” says Jeremy Bassis, associate professor of climate and space sciences and engineering at the University of Michigan.
“It is possible that modern-day glaciers, not just the parts that are floating but the parts that are just touching the ocean, are more sensitive to ocean warming than we previously thought.”
This mechanism is likely at work today on the Greenland ice sheet and possibly Antarctica. Scientists know this in part due to Bassis’ previous work. Several years ago, he came up with a new, more accurate way to mathematically describe how ice breaks and flows. His model has led to a deeper understanding of how the Earth’s store of ice could react to changes in air or ocean temperatures, and how that might translate to sea level rise.
Last year, other researchers used it to predict that melting Antarctic ice could raise sea levels by more than three feet, as opposed to the previous estimate that Antarctica would only contribute centimeters by 2100.
In the new study, published in the journal Nature, researchers applied a version of this model to the climate of the last Ice Age, which ended about 10,000 years ago. They used ice core and ocean-floor sediment records to estimate water temperature and how it varied. Their aim was to see if what’s happening in Greenland today could describe the behavior of the Laurentide Ice Sheet.
Scientists refer to these bygone periods of rapid ice disintegration as Heinrich events: Icebergs broke off the edges of Northern Hemisphere ice sheets and flowed into the ocean, raising sea level by more than 6 feet over the course of hundreds of years. As the icebergs drifted and melted, dirt they carried settled onto the ocean floor, forming thick layers that can be seen in sediment cores across the North Atlantic basin. These unusual sediment layers are what allowed researchers to first identify Heinrich events.
“Decades of work looking at ocean sediment records has shown that these ice sheet collapse events happened periodically during the last Ice Age, but it has taken a lot longer to come up with a mechanism that can explain why the Laurentide ice sheet collapsed during the coldest periods only. This study has done that,” says geochemist and coauthor Sierra Petersen, a research fellow in earth and environmental sciences.
The researchers set out to understand the timing and size of the Heinrich events. Through their simulations, they were able to predict both, and also to explain why some ocean warming events triggered Heinrich events and some did not. They even identified an additional Heinrich event that had previously been missed.
Heinrich events were followed by brief periods of rapid warming. The Northern Hemisphere warmed repeatedly by as much as 15 degrees Fahrenheit in just a few decades. The area would stabilize, but then the ice would slowly grow back to its breaking point over the next thousand years. Their model was able to simulate these events as well.
The new model takes into account how the Earth’s surface reacts to the weight of the ice on top of it. Heavy ice depresses the planet’s surface, at times pushing it below sea level. That’s when the ice sheets are most vulnerable to warmer seas. But as a glacier retreats, the solid Earth rebounds out of the water again, stabilizing the system. From that point the ice sheet can begin to expand again.
“There is currently large uncertainty about how much sea level will rise, and much of this uncertainty is related to whether models incorporate the fact that ice sheets break,” Bassis says. “What we are showing is that the models we have of this process seem to work for Greenland, as well as in the past, so we should be able to more confidently predict sea level rise.”
Portions of Antarctica have geography similar to Laurentide’s: the Pine Island and Thwaites glaciers, for example.
“We’re seeing ocean warming in those regions and we’re seeing these regions start to change. In that area, they’re seeing ocean temperature changes of about 2.7 degrees Fahrenheit,” Bassis says. “That’s a magnitude pretty similar to what we believe occurred in the Laurentide events, and what we saw in our simulations is that just a small amount of ocean warming can destabilize a region if it’s in the right configuration, and even in the absence of atmospheric warming.”
The National Science Foundation and the National Oceanic and Atmospheric Administration supported the work.
Source: University of Michigan
Cells within our bodies divide and change over time, with thousands of chemical reactions occurring within each cell daily. This makes it difficult for scientists to understand what’s happening inside. New nanostraws offer a non-disruptive way to find out.
A problem with the current method of cell sampling, called lysing, is that it ruptures the cell. Once the cell is destroyed, it can’t be sampled from again. This new sampling system relies on tiny tubes 600 times smaller than a strand of hair that allow researchers to sample a single cell at a time. The nanostraws penetrate a cell’s outer membrane, without damaging it, and draw out proteins and genetic material from the cell’s salty interior.
“It’s like a blood draw for the cell,” says Nicholas Melosh, an associate professor of materials science and engineering at Stanford University and senior author of a paper describing the work in the Proceedings of the National Academy of Sciences.
Mysteries inside cells
The nanostraw sampling technique, according to Melosh, will significantly impact our understanding of cell development and could lead to much safer and more effective medical therapies, because the technique allows for long-term, non-destructive monitoring.
“What we hope to do, using this technology, is to watch as these cells change over time and be able to infer how different environmental conditions and ‘chemical cocktails’ influence their development—to help optimize the therapy process,” Melosh says.
If researchers can fully understand how a cell works, then they can develop treatments that will address those processes directly. For example, in the case of stem cells, researchers are uncovering ways of growing entire, patient-specific organs. The trick is, scientists don’t really know how stem cells develop.
“For stem cells, we know that they can turn into many other cell types, but we do not know the evolution—how do they go from stem cells to, say, cardiac cells? There is always a mystery. This sampling technique will give us a clearer idea of how it’s done,” says Yuhong Cao, a graduate student and first author on the paper.
The sampling technique could also inform cancer treatments and answer questions about why some cancer cells are resistant to chemotherapy while others are not.
“With chemotherapy, there are always cells that are resistant,” says Cao. “If we can follow the intracellular mechanism of the surviving cells, we can know, genetically, their response to the drug.”
Tiny cell doorways
The sampling platform on which the nanostraws are grown is tiny—about the size of a gumball. It’s called the Nanostraw Extraction (NEX) sampling system, and it was designed to mimic biology itself.
In our bodies, cells are connected by a system of “gates” through which they send each other nutrients and molecules, like rooms in a house connected by doorways. These intercellular gates, called gap junctions, are what inspired Melosh six years ago, when he was trying to determine a non-destructive way of delivering substances, like DNA or medicines, inside cells. The new NEX sampling system is the reverse, observing what’s happening within rather than delivering something new.
“It’s a super exciting time for nanotechnology,” Melosh says. “We’re really getting to a scale where what we can make controllably is the same size as biological systems.”
Building the NEX sampling system took years to perfect. Not only did Melosh and his team need to ensure cell sampling with this method was possible, they needed to see that the samples were actually a reliable measure of the cell content, and that samples, when taken over time, remained consistent.
When the team compared their cell samples from the NEX with cell samples taken by breaking the cells open, they found that 90 percent of the samples were congruous. Melosh’s team also found that when they sampled from a group of cells day after day, certain molecules that should be present at constant levels remained the same, indicating that their sampling accurately reflected the cell’s interior.
With help from collaborators Sergiu P. Pasca, assistant professor of psychiatry and behavioral sciences, and Joseph Wu, professor of radiology, Melosh and coworkers tested the NEX sampling method not only with generic cell lines, but also with human heart tissue and brain cells grown from stem cells. In each case, the nanostraw sampling reflected the same cellular contents as lysing the cells.
The goal of developing this technology, according to Melosh, was to make an impact in medical biology by providing a platform that any lab could build. Only a few labs across the globe, so far, are employing nanostraws in cellular research, but Melosh expects that number to grow dramatically.
“We want as many people to use this technology as possible,” he says.
Funding for the work came from the National Institute of Standards and Technology, the Knut and Alice Wallenberg Foundation, the National Institutes of Health, Stanford Bio-X, the Progenitor Cell Biology Consortium, the National Institute of Mental Health, an MQ Fellow award, the Donald E. and Delia B. Baxter Foundation, and the Child Health Research Institute.
Source: Jackie Flynn for Stanford University
The post Nanostraw doesn’t destroy cells as it samples their guts appeared first on Futurity.
Parents are more likely to change their child’s lifestyle if schools offer educational materials alongside body mass index screening results, a new study shows.
Some parents in the study received only BMI results, while others had access to the Family Nutrition and Physical Activity screening tool, an online tool designed to help parents evaluate their home environments and practices.
“The FNPA assessment can be a good supplement to any school obesity prevention program and it is also useful for clinical evaluations,” says Greg Welk, professor of kinesiology at Iowa State University. “Some clinics are now using it during well-child visits so that pediatricians can advise parents about how to help their kids.”
Welk says the supplemental information appeared to help parents in the study understand BMI results, as well as identify strategies to take at home, such as offering more fruits and vegetables, limiting screen time, helping their child be more active, and making sure he or she gets enough sleep. The study, published in Childhood Obesity, analyzed nearly 1,500 parental surveys from 31 Pennsylvania elementary schools.
As of 2012, 21 states required schools to measure and collect BMI statistics. However, as researchers explained in the paper, a third of these schools did not require parental notification and only one-quarter had a policy regarding referrals. Welk says BMI is useful for school screening because it is quick and non-invasive. However, the statistics are of little use if not shared with parents.
“The use of BMI screening on a regular basis can help schools by providing information to help evaluate changes at the school level. It can also directly help individual children and parents to potentially identify growth patterns that may predispose youth to becoming overweight or obese,” Welk says.
The American Academy of Pediatrics and the Institute of Medicine have endorsed BMI screening for use in school assessments, but it is important to follow recommended practices for assessment and notification, he adds. Supplemental information such as the FNPA is also recommended since it gives parents information that they can use to help their child.
Obesity affects one in six children and teens in the US, according to the Centers for Disease Control and Prevention. The researchers say their results show there is potential to increase awareness and access to educational tools related to BMI reports. If this information prompts parents to make changes, it could help reduce obesity rates.
In the paper, the researchers cited prevalence estimates that indicate more than 17 percent of American youth are obese, but very few parents identify their own children as having weight problems. As few as 2 percent of parents with overweight children and 17 percent of parents with obese children describe their children as overweight.
Lisa Bailey-Davis with Geisinger Health System led the work, which was part of a larger study funded by the National Institutes of Health. Welk led the overall project along with former PhD student, Karissa Peyer, now at the University of Tennessee-Chattanooga.
Source: Iowa State University
The post Can this tool turn children’s BMI results into action? appeared first on Futurity.
Engineers have used high-performance computing to examine the best way to treat an aneurysm.
To reduce blood flow into aneurysms, surgeons often insert flow diverters—tiny tubes of woven metal, similar to stents—across the opening. The reduced blood flow into the aneurysm minimizes the risk of a rupture, researchers say.
But if the opening, or neck, of an aneurysm is large, surgeons will sometimes overlap two diverters to increase the density of the mesh over the opening. Another technique is to compress the diverter to increase the mesh density and block more blood flow.
A computational study published in the American Journal of Neuroradiology shows the best option is the single, compressed diverter—provided it produces a mesh denser than the two overlapped diverters, and that it covers at least half of the aneurysm opening.
“When doctors see the simulated blood flow in our models, they’re able to visualize it. They see that they need to put more of the dense mesh here or there to diffuse the jets (of blood), because the jets are dangerous,” says lead author Hui Meng, a mechanical engineering professor at the University at Buffalo.
Working with the university’s supercomputing facility, the Center for Computational Research, Robert Damiano and Nikhil Paliwal, both PhD candidates in Meng’s lab, used virtual models of three types of aneurysms—fusiform (balloons out on all sides), and medium and large saccular (balloons on one side)—and applied engineering principles to model the pressure and speed of blood flowing through the vessels.
The engineers modeled three diverter treatment methods—single non-compacted, two overlapped, and single compacted—and used computational fluid dynamics to determine how each would affect blood flow in and out of the aneurysm.
“We used equations from fluid mechanics to model the blood flow, and we used structural mechanics to model the devices,” Damiano says. “We’re working with partial differential equations that are complex and typically unsolvable by hand.”
These equations are converted to millions of algebraic equations and are solved using the supercomputer. The very small size of the mesh added to the need for massive computing power.
“The diverter mesh wires are 30 microns in diameter,” Paliwal says. “To accurately capture the physics, we needed to have a maximum of 10 to 15 micron grid sizes. That’s why it is computationally very expensive.”
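The discretization step Damiano describes can be illustrated with a toy example (not the team’s actual CFD solver): a one-dimensional Laplace equation reduced to one algebraic equation per grid point and solved by Jacobi iteration. Real simulations do the same in three dimensions with millions of unknowns, on grids fine enough to resolve 30-micron wires.

```python
def solve_laplace_1d(n=11, left=0.0, right=1.0, iters=20000):
    """Solve u'' = 0 on a 1D grid with fixed boundary values.

    The PDE becomes one algebraic equation per interior grid point,
    u[j] = (u[j-1] + u[j+1]) / 2, solved here by Jacobi iteration.
    """
    u = [0.0] * n
    u[0], u[-1] = left, right
    for _ in range(iters):
        new = u[:]
        for j in range(1, n - 1):
            new[j] = 0.5 * (u[j - 1] + u[j + 1])
        u = new
    return u

# The exact solution is a straight line between the boundary values,
# so the midpoint of an 11-point grid converges to 0.5.
```

In a production solver the unknowns are velocities and pressures on a 3D mesh, and the resulting systems are attacked with far more sophisticated iterative methods on the supercomputer.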
The models showed that compressing a diverter produced a dense mesh that covered 57 percent of a fusiform-shaped aneurysm. That proved more effective than overlapping two diverters.
The compacted diverter was less effective in saccular aneurysms. As diverters are compressed, they become wider and bump into the sides of the vessel, so they could not be compressed enough to cover an aneurysm’s small opening. Compression was more effective in a large-necked saccular aneurysm, producing a dense mesh that covered 47 percent of the opening.
Because a porous scaffold is needed to allow cell and tissue growth around the neck of the aneurysm, complete coverage using a solid diverter isn’t the best option, Paliwal says. Further, solid diverters could risk blocking off smaller arteries.
The team next would like to look back over hundreds of previous cases, to determine how blood flow was affected by the use of diverters. The idea is to build a database so that more definitive conclusions can be drawn.
“We’re going to look at and model previous cases, and hopefully we’ll have a way to determine the best treatment to cause the best outcome for new aneurysm cases,” Damiano says.
Source: University at Buffalo
The post Supercomputer tests ways to divert blood from aneurysm appeared first on Futurity.
What looks like a caterpillar chewing on a leaf or a beetle consuming fruit is likely a three-way battle that benefits most, if not all, of the players involved, research shows.
“Plants are subject to attack by an onslaught of microbes and herbivores, yet are able to specifically perceive the threat and mount appropriate defenses,” says Gary W. Felton, professor and head of entomology at Penn State. “But, herbivores can evade plant defenses by using symbiotic bacteria that deceive the plant into perceiving an herbivore threat as microbial, suppressing the plant’s defenses against herbivores.”
Felton’s research looked at two crop pests—tomato fruitworms and the Colorado potato beetle—plant reactions to the pests, and the microbes that they carry. He presented his findings on February 18 at the annual meeting of the American Association for the Advancement of Science in Boston. This broad look at herbivore-plant interactions takes into account the entire phytobiome—the plants, their environment, their predators, and the organisms that colonize them.
Tomato fruitworms may be the most important crop pest in North and South America. According to Felton, the caterpillar enjoys eating more than 100 different agricultural crops. Unfortunately, it likes to eat what we eat.
The Colorado potato beetle moved quickly across the US from Mexico in the mid-1800s and took only 20 to 30 years to reach New York and Long Island. It strips leaves down to the veins, leaving skeletal remains.
Plants have two lines of defense against these predators. One reaction, regulated by jasmonic acid, comes into play when insects chew on the plant’s leaves, stems, or fruit, damaging the plant and leaving insect saliva. The other is turned on when an insect regurgitates stomach contents containing microbes onto the plant, triggering the plant’s response to microbial pathogens, which uses salicylic acid.
When microbes—viruses and bacteria—are symbiotic companions of the insects, these pathways can be interrupted.
Virus injections
“Parasitoids (predatory insects) inject eggs into the caterpillar and the developing parasitoid eventually kills the caterpillar,” says Felton. “Along with the eggs, the parasitoid injects a symbiotic virus that knocks out the immune system of the caterpillar and kills the component in the caterpillar saliva that signals the plant that it is being attacked.”
When a parasitoid-infected tomato fruitworm attacks a plant, the plant does not realize the caterpillar is chewing on it, and none of the plant’s chemical defense systems activate. This benefits the caterpillar and the symbiotic microbe, but does not do much for the plant.
When the Colorado potato beetle—which likes potato plants, but will eat all the plants in the nightshade family—regurgitates its stomach contents onto a leaf, the bacteria from the beetle’s gut triggers the plant’s microbial response, but turns off the plant’s response to chewing. The bacteria are able to spread and the herbivore, the beetle, gets to strip the leaves without encountering the plant’s chemical response.
Better insecticides to come?
The Colorado potato beetle has already faced a whole sequence of insecticides and developed resistance to each.
“More than two decades ago, neonicotinoids became widely used against the beetles, and that worked,” says Felton. “But they may be losing their effect.”
The Colorado potato beetle suppresses the plant’s chewing response only when the beetles feed on tomatoes or potatoes, not when they feed on other members of the nightshade family like eggplants or peppers. The symbiotic bacteria only develop in the beetle gut when feeding on tomatoes and potatoes.
Understanding the interaction among plants, their predators, and the microbes that live in them, may help researchers understand how to control these pests.
“When we know more about all these microbe, herbivore, and plant interactions, we may be able to manipulate the system to make the plants manipulate the bacteria,” says Felton. “Probiotics (mixes of specific bacteria or viruses) could alter the gut microbiome to benefit the plant.”
Some microbes benefit the plants, increasing growth and enhancing the defensive systems, according to Felton. Some caterpillar bacteria seem to make some seeds germinate faster.
“Lots of companies are investing in beneficial natural plant microbes,” says Felton. “This could improve plant productivity.”
The National Science Foundation supported this work.
Source: Penn State
To contain and eradicate foot-and-mouth disease in cows, research suggests establishing how many animals can be vaccinated per day and tailoring controls accordingly.
A 2001 foot-and-mouth disease (FMD) outbreak cost the UK economy an estimated £8 billion and led to the culling of approximately seven million animals.
“There is always uncertainty in the likely effectiveness of any control strategy for an infectious disease outbreak. However in the case of FMD, if we can accurately determine the daily capacity to vaccinate animals, we can potentially save millions of pounds for the farming industry,” says Michael Tildesley of the University of Warwick School of Life Sciences.
Using a mathematical model of the UK farming landscape, Tildesley and colleagues simulated numerous infection scenarios—of varying severity and speed—calculating the most effective and efficient approaches to stem the spread of disease.
Many dangerous uncertainties exist when dealing with epidemics like FMD, such as the efficacy of vaccinations, the time it takes for livestock to become immune after receiving vaccines, and the number of vaccine doses available. Uncertainty leads to huge potential losses of both money and livestock.
This new FMD model demonstrates that the major uncertainty to be resolved is how many vaccine doses are available. If this is known, the infection can be contained efficiently—despite all other unknown factors.
By using the new FMD model and confirming what vaccination capacity exists, the UK could save up to £50 million in any future epidemic, and around 200,000 animals could be spared from culling. Furthermore, an outbreak managed with such tailored vaccination could generally be eradicated almost a week sooner.
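As a rough illustration of why daily capacity matters, here is a minimal SIR-style sketch with a hard cap on vaccinations per day. The structure and every parameter are invented for illustration and are far simpler than the Warwick model, which operates on the real UK farm landscape; this sketch also assumes vaccination grants immediate immunity.

```python
def outbreak_size(daily_vax_capacity, beta=0.3, gamma=0.1,
                  farms=10000, infected0=10, days=365):
    """Toy farm-level SIR model with capped daily vaccination.

    Vaccinated farms leave the susceptible pool immediately (a
    simplifying assumption); infected farms are removed at rate
    gamma. Returns the cumulative number of farms ever infected.
    """
    s, i = float(farms - infected0), float(infected0)
    total = float(infected0)
    for _ in range(days):
        new_inf = beta * s * i / farms               # new infections today
        vax = max(min(daily_vax_capacity, s - new_inf), 0.0)
        s -= new_inf + vax                           # vaccinated farms removed
        i += new_inf - gamma * i                     # recovery/culling at rate gamma
        total += new_inf
    return total

# With no vaccination the epidemic sweeps most farms; even a modest
# daily capacity shrinks the final outbreak substantially.
```

Under these made-up parameters, raising the daily cap always shrinks the outbreak, which is the intuition behind establishing capacity before tailoring controls.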
The paper appears in PLOS Computational Biology.
The research was carried out in collaboration with Penn State, Vanderbilt University, and the United States Geological Survey. Funding came from the Biotechnology and Biological Sciences Research Council and the US National Institutes of Health.
Source: University of Warwick
New research clarifies why some people seem to derive a heart-protective benefit from eating soy foods and others don’t.
Japanese men who are able to produce equol—a substance made by some types of “good” gut bacteria when they metabolize isoflavones (micronutrients found in dietary soy)—have lower levels of a risk factor for heart disease than people who can’t produce it, according to a new study in the British Journal of Nutrition.
“Scientists have known for some time that isoflavones protect against the buildup of plaque in arteries, known as atherosclerosis, in monkeys, and are associated with lower rates of heart disease in people in Asian countries,” says senior author Akira Sekikawa, associate professor of epidemiology at the University of Pittsburgh.
“We were surprised when a large trial of isoflavones in the US didn’t show the beneficial effects among people with atherosclerosis in Western countries. Now, we think we know why.”
All monkeys can produce equol, as can 50 to 60 percent of people in Asian countries. However, only 20 to 30 percent of people in Western countries can.
For the study, researchers recruited 272 Japanese men aged 40 to 49 and performed blood tests to find out if they were producing equol. After adjusting for other heart disease risk factors such as high blood pressure, cholesterol, smoking, and obesity, they found that the equol producers had 90 percent lower odds of coronary artery calcification, a predictor of heart disease, than the equol non-producers.
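“90 percent lower odds” corresponds to an odds ratio of roughly 0.1. As a sketch with hypothetical counts (not the study’s actual data, which were also adjusted for the other risk factors), the unadjusted calculation from a 2x2 table looks like this:

```python
def odds_ratio(exp_cases, exp_noncases, unexp_cases, unexp_noncases):
    """Unadjusted odds ratio from a 2x2 table: the odds of the
    outcome among the exposed divided by the odds among the
    unexposed."""
    return (exp_cases / exp_noncases) / (unexp_cases / unexp_noncases)

# Hypothetical counts: 5 of 100 equol producers show calcification
# versus 34 of 100 non-producers.
or_value = odds_ratio(5, 95, 34, 66)   # about 0.10, i.e. ~90% lower odds
```

The published estimate comes from a regression that adjusts for blood pressure, cholesterol, smoking, and obesity, so it is not a raw table calculation like this one.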
The daily intake of dietary isoflavones—found in traditional soy foods such as tofu, miso, and soymilk—is 25 to 50 milligrams in China and Japan, while it is less than 2 milligrams in Western countries. Equol is available as a supplement—bypassing the need for gut bacteria to produce it—though no clinical trials have been performed to determine a safe dosage for heart protective effects, or if it even does provide such protection.
“I do not recommend that people start taking equol to improve their heart health or for any other reason unless advised by their doctor,” Sekikawa says. “Much more study is needed.”
The researchers are now pursuing funding for a much larger observational study to expand on the findings and eventually a randomized clinical trial to examine the effect of taking equol on various medical conditions and diseases.
“Our discovery about equol may have applications far beyond heart disease,” Sekikawa says. “We know that isoflavones may be associated with protecting against many other medical conditions, including osteoporosis, dementia, menopausal hot flashes, and prostate and breast cancers. Equol may have an even stronger effect on these diseases.”
Additional researchers from the University of Pittsburgh and from Shiga University of Medical Science, Shimane University, and Keio University, all in Japan, are coauthors of the study.
The National Institutes of Health; Japanese Ministry of Education, Culture, Sports, Science and Technology; and a small grant from Pitt Public Health’s department of epidemiology funded the work.
Source: University of Pittsburgh
Researchers planning to use the CRISPR genome-editing system to produce designer gut bacteria may need to account for the dynamic evolution of the microbial immune system.
CRISPR is an acquired immune system that allows bacteria and other single-celled organisms to store snippets of DNA to protect themselves from viruses called phages. The system allows a cell to “remember” and mount a defense against phages it has previously battled.
Beginning in 2012, scientists discovered they could use CRISPR proteins to precisely edit the genomes of not only bacteria but also of animals and humans. That discovery captured Science magazine’s Breakthrough of the Year in 2015 and could eventually allow scientists to reprogram the cells of people with genetic diseases.
Despite rapid advances in the use of CRISPR for editing genomes, scientists still have many questions about how CRISPR defenses evolved in bacteria and other single-celled prokaryotic organisms. Michael Deem, a physicist and bioengineer from Rice University, was first drawn to CRISPR in 2010 and has created a number of computer models to explore CRISPR’s inner workings.
In a new study in the Journal of the Royal Society Interface, Deem and former graduate student Pu Han found there is a subtle interplay between phages and bacteria that can change, depending upon how often the two encounter one another and how quickly each evolves defenses against the other. The study documents a strange survival-extinction pattern between bacteria and phages that helps explain seemingly conflicting experimental results that have stymied CRISPR researchers.
Evolving together
“There’s a co-evolution between the phages and the bacteria,” Deem says. “The bacteria are incorporating DNA from the phages, and this allows the bacteria or their offspring to be protected against those phages.”
Like all living things, phages, which attack only single-celled organisms, are constantly evolving. Deem says the rate at which they mutate and change their DNA sequence is one variable that can affect how well CRISPR can recognize and fight them. Another factor that must be taken into account in modeling CRISPR is the limited space available for storing phage DNA: CRISPR is constantly acquiring new snippets and throwing out old ones. An additional parameter is the encounter rate, or how often the bacteria and phages come into contact.
“If we plot the results from a simple model that incorporates these parameters, we would see that the results fall into three regions, or phases: one where CRISPR wins out and drives the phages to extinction, one where the phages win and kill off the bacteria, and a third phase where the two coexist,” Deem says.
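A heavily simplified toy model can show how encounter and mutation rates push such a system between phases. Everything below—the populations, rates, and update rules—is invented for illustration and is far cruder than Deem and Han’s model, but it captures the qualitative ingredients: a finite spacer memory with add-and-drop dynamics, a mutation rate, and an encounter rate.

```python
import random

def simulate(encounter_rate, mutation_rate, steps=500, memory=3, seed=0):
    """Toy bacteria-phage dynamics with a finite CRISPR memory."""
    rng = random.Random(seed)
    bacteria, phages = 1000.0, 1000.0
    strain = 0        # label of the current dominant phage strain
    spacers = []      # CRISPR memory: the last `memory` strains seen
    for _ in range(steps):
        if rng.random() < mutation_rate:
            strain += 1                       # phage escapes recognition
        encounters = encounter_rate * phages
        if strain in spacers:                 # recognized: phages destroyed
            phages -= 0.5 * encounters
        else:                                 # unrecognized: phages spread
            bacteria -= 0.1 * encounters
            phages += 0.2 * encounters
            spacers = (spacers + [strain])[-memory:]  # add spacer, drop oldest
        bacteria = min(bacteria * 1.02, 2000.0)       # capped regrowth
        if phages < 1.0:
            return "phages extinct"
        if bacteria < 1.0:
            return "bacteria extinct"
    return "coexistence"

# Frequent encounters with a slowly mutating phage let CRISPR keep a
# current spacer and drive the phage extinct; rare encounters with a
# fast-mutating phage end in coexistence.
```

Sweeping the two rates and recording the outcome is how a phase diagram like the one Deem describes gets built.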
Physicists often use such phase diagrams to probe the dynamics of systems. By altering the encounter and mutation rates, scientists can explore how particular combinations drive the system from one phase to another.
Unexpected results
In the new study, Deem and Han, who is now a software engineer at Google, found that certain combinations of encounter and mutation rates produced an unexpected result: a five-region phase diagram where phages twice thrived and were twice nearly killed off, thanks to the complex interplay between the CRISPR add-drop rates and the rate at which the bacteria were exposed to phages.
“Generally speaking, we might expect that at high rates of exposure, the CRISPR immune system would drive the phages to extinction because it would encounter them often enough to have a current copy of their DNA in the CRISPR,” Deem says. “In our phase diagram, we refer to this as region four, and our first interesting finding is that while extinction is likely in this case, there is always a probability, which we can calculate, that the phages will escape and not go extinct. That natural variability is of interest.
“The second point is that as we lower the exposure rate of the phages to the bacteria, and there are now fewer phages infecting the bacteria per unit of time, the bacteria have decreased opportunities to acquire DNA from the phages, and the phages can now coexist with the bacteria,” he says. “We call this region three. So, we’ve gone from extinction to nonextinction, and we now have coexistence. That’s expected and very reasonable.
“Surprisingly, we found that lowering the exposure rate even more—a case in which the bacteria now have even fewer opportunities to copy DNA into the CRISPR—resulted in another phase where the phages were driven to extinction. That’s region two. And people would not have expected that.”
In examining this result, Deem and Han found that the second extinction event occurred because the infection rate and the bacterial growth rate were the same, and any bacteria that acquired immunity to the phages would reproduce quickly enough to out-compete both all other bacteria and the phages. In this extinction event, a single copy of viral DNA in the CRISPR allowed the bacteria to defeat the phages. This differed from region four—the high exposure case—where many copies of DNA in the CRISPR allowed multiple strains of bacteria to defeat the phages.
Phage controversy
Deem says the results help explain previous experimental results that have confused the CRISPR research community.
“There’s been some controversy about whether CRISPR can control phages and what circumstances lead to coexistence,” Deem says. “The reason for this is that various experiments have produced results from regions two, three, and four. Our results clarify the range of possibilities and confirm that this range has been at least partially measured.”
Deem says the findings apply only to CRISPR’s use in bacterial and prokaryotic systems. In cases where researchers are trying to use CRISPR gene-editing tools to modify those organisms or the phages that affect them, the dynamics should be taken into account.
“For example, people will eventually start editing the microbiome, the community of beneficial gut bacteria and phages that help keep people healthy,” Deem says. “There’s a great deal of work being done now on engineering the microbiome to make people more healthy, to control obesity or mood, for instance. For those interested in engineering the phage-microbiome interaction, it will be important to account for these co-evolutionary subtleties.”
Source: Rice University
The post Will phages complicate quest for designer gut bacteria? appeared first on Futurity.
Elusive planets and dim failed stars may be lurking around the edges of our solar system, and astronomers want the public’s help to hunt them down.
By using a new website called Backyard Worlds: Planet 9, anyone can help search for objects far beyond the orbit of our farthest planet, Neptune, by viewing brief “flipbook” movies made from images captured by NASA’s Wide-field Infrared Survey Explorer (WISE) mission. A faint spot seen moving through background stars might be a new and distant planet orbiting the sun or a nearby brown dwarf.
WISE’s infrared images cover the entire sky about six times over. This has allowed astronomers to search the images for faint, glowing objects that change position over time, which means they are relatively close to Earth. Objects that produce their own faint infrared glow would have to be large, Neptune-size planets or brown dwarfs, which are slightly smaller than stars.
Physicist Aaron Meisner, a postdoctoral researcher at the University of California, Berkeley, specializes in analyzing WISE images and has automated the search using computers, but he jumped at NASA astronomer Marc Kuchner’s idea to ask the public to eyeball the millions of WISE images. Scientists launched the planet and brown dwarf search February 15.
“Automated searches don’t work well in some regions of the sky, like the plane of the Milky Way galaxy, because there are too many stars, which confuses the search algorithm,” Meisner says. Last month he published the results of an automated survey of 5 percent of the WISE data, which revealed no new objects. But, online volunteers “using the powerful ability of the human brain to recognize motion” may be luckier.
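The epoch-to-epoch comparison—whether done by algorithm or by eye—boils down to differencing images of the same patch of sky. A minimal sketch (illustrative only, not the WISE pipeline, which must also handle noise, blending, and crowded star fields):

```python
def moving_sources(epoch1, epoch2, threshold=5):
    """Compare two images (2D lists of brightness values) of the same
    sky patch taken at different times. A source that moved shows up
    as one pixel that dimmed and another that brightened."""
    appeared, vanished = [], []
    for y, (row1, row2) in enumerate(zip(epoch1, epoch2)):
        for x, (a, b) in enumerate(zip(row1, row2)):
            if b - a > threshold:
                appeared.append((x, y))
            elif a - b > threshold:
                vanished.append((x, y))
    return appeared, vanished

# A faint source at column 1 of row 2 moves to column 3 between epochs.
epoch1 = [[0] * 5 for _ in range(5)]
epoch1[2][1] = 10
epoch2 = [[0] * 5 for _ in range(5)]
epoch2[2][3] = 10
```

In crowded regions like the galactic plane, thousands of static sources make this kind of thresholding unreliable, which is exactly where human pattern recognition can do better.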
“Backyard Worlds: Planet 9 has the potential to unlock once-in-a-century discoveries, and it’s exciting to think they could be spotted first by a citizen scientist,” he says.
“There are just over four light-years between Neptune, the farthest known planet in our solar system, and Proxima Centauri, the nearest star, and much of this vast territory is unexplored,” says Kuchner, the lead researcher and an astrophysicist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland.
“Because there’s so little sunlight, even large objects in that region barely shine in visible light. But by looking in the infrared, WISE may have imaged objects we otherwise would have missed.”
[Image: A very blue Neptune-like planet, dubbed Planet 9, may be lurking dozens of times farther from the sun than Pluto, as depicted in this artist’s rendering. (Credit: NASA)]
The Planet 9 debate
People have long theorized about unknown planets far beyond Neptune and the dwarf planet Pluto, but until recently there was no evidence to support the idea. Last year, however, Caltech astronomers Mike Brown and Konstantin Batygin found indirect evidence for the existence of an as-yet-unseen ninth planet in the solar system’s outer reaches.
This “Planet 9” would be similar in size to Neptune, but up to a thousand times farther from the sun than Earth, and would orbit the sun perhaps once every 15,000 years. It would be faint enough to have evaded discovery so far.
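The quoted orbital period is consistent with Kepler’s third law, which for a body orbiting the sun reduces to P(years) = a(AU)^1.5. A rough check, assuming a circular orbit (the specific distances below are illustrative):

```python
def orbital_period_years(a_au):
    """Kepler's third law for a body orbiting the sun:
    P^2 = a^3, with P in years and the orbital distance a in AU."""
    return a_au ** 1.5

# Earth, at 1 AU, takes 1 year. A body ~600 times farther out takes
# roughly 15,000 years, and one 1,000 times farther out (the upper
# end of the Planet 9 estimates) roughly 31,600 years.
```

So a ~15,000-year period corresponds to an orbit a few hundred times Earth’s distance from the sun, squarely in the range the Caltech work proposed.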
At the moment, the existence of Planet 9 is still under debate. Meisner thinks it’s more likely that volunteers will find brown dwarfs in the solar neighborhood. While Planet 9 would look very blue in WISE time-lapse animations, brown dwarfs would look very red and move across the sky more slowly.
WISE images have already turned up hundreds of previously unknown brown dwarfs, including the sun’s third- and fourth-closest known neighbors. Meisner hopes that the Backyard Worlds search will turn up a new nearest neighbor to our sun.
Be a coauthor
“We’ve pre-processed the WISE data we’re presenting to citizen scientists in such a way that even the faintest moving objects can be detected, giving us an advantage over all previous searches,” he says. Moving objects flagged by participants will be prioritized by the science team for later follow-up observations by professional astronomers. Participants will share credit for their discoveries in any scientific publications that result from the project.
The WISE telescope scanned the entire sky between 2010 and 2011, producing the most comprehensive survey at mid-infrared wavelengths currently available. With the completion of its primary mission, WISE was shut down in 2011, then reactivated in 2013 and given a new mission: assisting NASA’s efforts to identify potentially hazardous near-Earth objects—asteroids and comets in the vicinity of our planet. The mission was renamed the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE).
The new website uses all of the WISE and NEOWISE data to search for unknown objects in and beyond our own solar system, including the putative Planet 9. If Planet 9 exists and is as bright as some predict, it could show up in WISE data.
WISE is uniquely suited for discovering extremely cold brown dwarfs, which can be invisible to the biggest ground-based telescopes despite being very close, Meisner says.
“Brown dwarfs form like stars but evolve like planets, and the coldest ones are much like Jupiter,” says team member Jackie Faherty, an astronomer at the American Museum of Natural History in New York. “By using Backyard Worlds: Planet 9, the public can help us discover more of these strange rogue worlds.”
Backyard Worlds: Planet 9 is a collaboration among NASA, UC Berkeley, the American Museum of Natural History, Arizona State University, the Space Telescope Science Institute in Baltimore, and Zooniverse, a collaboration of scientists, software developers, and educators that collectively develops and manages citizen-science projects on the internet. Zooniverse will spread the word among its many citizen volunteers.
NASA’s Jet Propulsion Laboratory in Pasadena, California, manages and operates WISE, part of NASA’s Explorers Program.
Source: UC Berkeley
The post You could find Planet 9 in these ‘flipbook’ movies appeared first on Futurity.