braindecoder - NeuroNews

Featured
NeuroNews
16 February

Some people can concentrate for long stretches while others are easily distracted. According to new research, this may be because we all lie somewhere on an attention-distractibility spectrum, with people who suffer from Attention-Deficit Hyperactivity Disorder (ADHD) at one extreme.

Sophie Forster of the University of Sussex and Nilli Lavie of University College London recruited 100 healthy undergraduate volunteers, and asked them to perform a computerized visual search test in which they had to find a 'target' letter within a circle of others.

The test varied in difficulty, with some of the targets placed within a circle of similar letters, making them harder to find. In some of the trials, the letters were shown with an irrelevant and distracting cartoon image beside them.

The researchers found that their participants' performance varied widely. Some were far more easily distracted by the cartoons than others, as determined by the time it took them to correctly identify the target letter. The more difficult version of the test eliminated this distraction.

Afterwards, all the participants filled out a self-report form designed to measure the extent to which they experienced ADHD-like symptoms in their childhood. This revealed a close link between childhood symptoms and distractibility during the lab test, with those reporting more symptoms being the most easily distracted.

In a second experiment, 101 other volunteers were asked to perform a similar test, this time involving the names of superheroes and other fictional characters, and without a more difficult version. Confirming the results of the first experiment, Forster and Lavie found large individual differences in performance, which again were closely correlated with the participants' self-reports of ADHD-like symptoms in childhood.

"We find that peoples' self-reports of their ADHD-like symptoms in childhood predict the level of distraction they experience in adulthood," says Lavie. "That's why we think attention-distractibility is a long-lasting trait. Otherwise, how would childhood symptoms predict their performance?"

Lavie adds that attention-distractibility may be a 'core' personality trait that persists through life, and this could explain the observed individual differences in the extent to which healthy people can focus their attention.

Attention acts as a 'gateway' to information processing in the brain, because stimuli that aren't attended to do not enter into conscious awareness. People diagnosed with ADHD typically underachieve at school, and are more prone to accidents in daily life. Similarly, distractibility scores predict the likelihood of work and car accidents in healthy people. The observation that the more difficult version of the tests eliminated distraction in all participants, regardless of the extent of their self-reported childhood ADHD-like symptoms, suggests that cognitive training could help people to focus their attention more.

"We are now working on establishing the distraction test as a commercial product," says Lavie. "We are looking to help people with non-clinical ADHD symptoms to learn more about themselves."

Featured
NeuroNews
16 February

A rare side effect of smoking weed is experiencing psychosis or paranoia; researchers have identified a gene variant that predicts a person's risk for this happening.

In a new study of more than 400 healthy young stoners with no family history of psychosis, people with a variant of this gene experienced more mind-altering symptoms when they were high. Although such symptoms are temporary, they might signal higher risk for developing psychotic disorders in the future.

"To find that having this gene variant means that you are more prone to mind-altering affects of cannabis when you don't have psychosis gives us a clue as to how it increases risk in healthy people," coauthor Celia Morgan, of the University of Exeter, said in a statement. "Putting yourself repeatedly in a psychotic or paranoid state might be one reason why these people could go on to develop psychosis when they might not have done otherwise."

Previous research has suggested that cannabis can bump up the risk for psychotic disorders like schizophrenia in a small fraction of its users. People with a family history of psychosis are also more sensitive to the temporary mind-altering effects that can accompany marijuana use, such as hallucinations or delusions.

One variant of a gene called AKT1 may underlie this vulnerability. Among people who carry two copies of this version of the gene, a history of cannabis use is linked with heightened risk of developing psychosis.

The people in these studies had all experienced psychosis or schizophrenia, or had a sibling with schizophrenia. In the new experiment, researchers in the United Kingdom wanted to find out whether this gene might also exert an influence on stoners with no family history of psychosis.

The team gathered 422 people between the ages of 16 and 23 who used cannabis at least once a month, and told them to abstain for 24 hours before two testing days a week apart.

On one of those days, the participants stayed sober. They completed a few questionnaires and short-term memory tests, and provided samples of urine and hair (which offers a picture of drug use over the past several months).

On the other day, the scientists told the participants to get high. The participants prepped their cannabis (this experiment was BYO weed) and offered up a 0.3-gram sample to be analyzed for its composition.

"The participant then smoked cannabis in front of the experimenter who told participants to smoke at their usual inhalation rate, and to smoke as much as they would normally do to feel 'stoned.' At this point, the testing began," wrote the researchers, who published the findings today in the journal Translational Psychiatry.

The researchers used cheek swabs to find out whether participants had one, two, or zero copies of the variant of AKT1.

The more copies someone had of this gene variant, the more likely they were to have mind-altering symptoms while stoned. A person's gender, ethnicity, years of cannabis use, cannabis dependence or marijuana composition did not predict whether they would have these symptoms.

On the memory test, gender alone was related to how well people did. On the more difficult part of the test, women's performance took a hit when they were stoned.

"Animal studies have found that males have more of the receptors that cannabis works on in parts of the brain important in short term memory, such as the prefrontal cortex," Morgan said. "Our findings indicate that men could be less sensitive to the memory impairing effects of cannabis than females."

Cannabis did have stronger effects on the users who smoked less often, and a higher proportion of women than men were infrequent users. So it's possible that differences in how much men and women regularly toked up played a role in this disparity, too.

The findings are the first to connect the AKT1 gene in healthy people with a temporary psychosis-like response to marijuana, which scientists consider a marker for developing psychosis later on. This gene, and the proteins it codes for, could be investigated for developing new treatments for cannabis-induced psychosis, the researchers said.

Featured
NeuroNews
11 February

A pattern of light falls on your retina, and the information is partly processed there before being transmitted to the visual cortex at the back of the brain, passing through hierarchical stages of processing, each generating a successively more complex representation than the last.

But at which stage of this process do you become conscious of the scene you are viewing, and exactly when do you become aware of the objects within it?

According to one popular hypothesis, called the 'global workspace' model, consciousness is an all-or-nothing phenomenon, with awareness of an object emerging only when information has been processed extensively, and has been made available to the brain's working memory system.

New research, however, suggests we can be fully conscious of some features of an object, such as its color or shape, while being completely unaware of others. The study, just published in the journal Consciousness and Cognition, provides some evidence of what neuroscientists call 'partial awareness'.

James C. Elliot of the University of California, Santa Barbara and his colleagues used a phenomenon called the attentional blink to try to determine whether or not visual consciousness is all-or-nothing. First described in 1992, the attentional blink is our consistent failure to detect the second of two visual stimuli shown in quick succession, when it appears within about half a second of the first.

In one experiment, 16 undergraduate volunteers were asked to sit at a computer screen, which displayed rapid sequences of letters. The participants fixated on a cross in the middle of the screen, and then saw a white capital letter, followed in quick succession by a number of black, and then colored, letters. Across 300 trials, each participant was instructed to identify the white letter, and also the color of the first colored letter they saw afterwards.
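To make the structure of such a trial concrete, here is a minimal sketch, in Python, of a rapid serial visual presentation (RSVP) sequence of the kind described above. It is not the researchers' code: the stream length, the per-item duration, the set of lags between the white letter (T1) and the first colored letter (T2), and all names in the code are illustrative assumptions rather than details reported in the study.

import random
import string

# A rough sketch of one RSVP trial like the one described above (not the study's code).
# ITEM_MS, LAGS and the stream length are assumed, illustrative values.
ITEM_MS = 100                     # assumed duration of each letter, in milliseconds
LAGS = (1, 2, 3, 5, 7)            # assumed gaps (in items) between the two targets
COLORS = ("red", "green", "blue", "yellow")

def make_trial(rng: random.Random) -> dict:
    letters = rng.sample(string.ascii_uppercase, 15)  # a stream of distinct letters
    t1_pos = rng.randint(3, 6)                        # position of the white target (T1)
    lag = rng.choice(LAGS)
    t2_pos = t1_pos + lag                             # position of the first colored letter (T2)
    stream = []
    for i, letter in enumerate(letters):
        if i == t1_pos:
            color = "white"
        elif i >= t2_pos:
            color = rng.choice(COLORS)                # colored letters begin here
        else:
            color = "black"
        stream.append({"letter": letter, "color": color, "onset_ms": i * ITEM_MS})
    return {
        "stream": stream,
        "t1_letter": letters[t1_pos],                 # answer to "which white letter?"
        "t2_color": stream[t2_pos]["color"],          # answer to "what color came next?"
        "lag_ms": lag * ITEM_MS,                      # short lags are where the blink bites
    }

if __name__ == "__main__":
    trial = make_trial(random.Random(0))
    print(trial["t1_letter"], trial["t2_color"], trial["lag_ms"])

In the actual study, what mattered was how accuracy at reporting the color fell when the lag between the two targets was short, which is the attentional blink itself.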

The second experiment involved 26 more participants and was designed in exactly the same way, except that this time some were asked to identify the white letter in each sequence and then both name the next letter and report its color, while the rest were told to ignore the white letter and identify only the second letter and its color.

The participants experienced an attentional blink that left them aware of one feature of the target letters but not the other.

In the first experiment, the attentional blink significantly decreased the accuracy with which they could name the color of the second 'target' letter in each sequence, regardless of whether or not they could correctly identify the first letter. The only factor that seemed to affect the participants' performance was the time lag between the two visual stimuli in each trial, such that their accuracy increased with a longer delay.

This was confirmed by the second experiment. In some cases, the participants could name the target letter without correctly reporting its color, and in others they could correctly report its color without knowing what letter it was that they had seen. Again, the attentional blink made it difficult for the participants to both identify the second letter and correctly name what color it was, and their accuracy again improved as the time lag between the letters grew.

The findings thus provide evidence for partial visual awareness, and suggest that consciousness does not, after all, occur in an all-or-nothing fashion. Even when your brain seems to lack the resources to absorb everything that appears within a fraction of a second, it is still processing that information, and some of what your eyes have seen may enter your conscious mind.

"This is an interesting and novel study which challenges the assumption that consciousness is all-or-nothing, [and that] you either see something or you don't," says Steve Fleming, a cognitive neuroscientist at UCL.

He adds, however, that the findings are not conclusive. "The participants may have had similar awareness of both aspects [of the target letters] but failed to report one because of noise in the reporting process, so convergent evidence from other measures of awareness – such as visibility or confidence ratings – would be a useful next step."

Featured
NeuroNews
11 February

"When I wish to find out how wise, or how stupid, or how good, or how wicked is any one, or what are his thoughts at the moment, I fashion the expression of my face, as accurately as possible, in accordance with the expression of his, and then wait to see what thoughts or sentiments arise in my mind or heart, as if to match or correspond with the expression." – Edgar Allen Poe, "The Purloined Letter"

The above quotation from a Poe short story is attributed to a clever schoolboy who uses his empathetic skills to win guessing games. It's also the epigraph of a review article, published today in Trends in Cognitive Sciences, by University of Wisconsin psychologist Paula Niedenthal and colleagues about how mimicking facial expressions helps us to read emotions.

The schoolboy's strategy works, it turns out, and is something that humans do unconsciously when we interact. We mimic the facial expressions of others, and these movements call up the emotions that we feel when we make the same faces ourselves, the article explained. This sensorimotor simulation, as it's called, helps us to feel what another person might be feeling. For this reason, facial mimicry helps us to read others' emotions. (Our tendencies to empathize and mimic may also be the reason why emotions seem to be contagious.)

And the opposite may be true as well. Some evidence suggests that impairing the ability to mimic decreases our ability to discern emotions. For example, one study by Niedenthal and others found that the longer infant boys used pacifiers, which prevent emotional expressions of the mouth, the less they mimicked others' facial expressions and the worse they scored on a test of emotional intelligence as older children. (This effect was not seen in girls.) Another study found that college students wearing a boil-and-bite mouth guard, which hampered their ability to mimic, were worse at distinguishing between genuine and fake smiles than those who did not wear the guard.

How it works

The faces we make are external representations of what we feel inside. There's something to the idea that this also works in reverse: if you make an emotional expression, the associated emotions will follow. For example, if you force yourself to smile you might actually feel better. Conversely, in people who have facial paralysis, the degree to which the zygomatic muscle, which lifts the corners of the mouth, is immobilized predicts the severity of depression.

That's why when we mimic the countenance of another person, the same internal link to the associated feelings gets switched on and we also mimic those feelings—at least, our versions of them.

You might wonder why such machinations—unconscious, remember—are necessary to understand that a smile is happy or a frown is sad. It's because most expressions are not so stereotypical. "They are instead fleeting, subtle, and somewhat idiosyncratic facial gestures shaped by learning and culture," Niedenthal and her colleagues wrote. "Different emotions, attitudes, and intentions can be communicated with the slight changes in eyebrow position, head tilt, onset dynamic, or lip press." Through mimicry, we can understand these emotional expressions without naming or even thinking about them at all.

Yet facial mimicry is not absolutely required for understanding emotions. For example, people with Moebius syndrome, a weakness or paralysis of the facial muscles that prevents the formation of facial expressions, can still read others' emotions, Niedenthal told Braindecoder. Scientists aren't completely sure what comes first in the mental processing of another's expression—facial mimicry or emotional identification. One hypothesis is that rather than mimicry triggering emotions, facial mimicry could be a "spill-over" from an emotional identification with another person's expression. It's also possible that the motor and mental responses mutually influence each other, Niedenthal said.

What we mimic may affect what we see

The ability to mimic affects emotional perception not only in the more abstract, empathetic sense; it also seems to affect visual perception. In a study led by Niedenthal, wearing a gel-based face mask that hindered facial movements seemed to prevent participants from distinguishing between photos that varied subtly in emotional expression. In the study, participants were shown a single photo on a continuum from angry-to-sad. Then they were shown a pair of photos: the one they had just seen and another from the continuum, and asked to identify the photo they had already viewed. Those who wore the mask were less able to distinguish between the two photos than mask-less control participants, suggesting that mimicking an expression may be important for seeing it clearly.

Don't try this at home

If you get excited to try out mimicry during your next conversation, you may quickly find that it's hard to talk while mimicking someone who is listening. While conversations involve their own mimicry systems, Niedenthal said, these dynamics of facial mimicry apply more to situations where people are not talking—for example, an encounter with a stranger. In a scenario like this, the ability to read emotions is less about intellectual understanding or responding empathetically and more about behavior: for example, whether or not to approach another person.

"When someone is showing you with a smile that they are approachable and that they are not dangerous, and/or that they're sort of a member of the same category and similar to you, the most important thing that you get from that is not the label, 'oh, this is an affiliative kind of smile.' What's the most important thing is that you approach them," Niedenthal said. "Whereas a different kind of smile may push you away."

This kind of emotional understanding may be particularly useful, say, in a country where you don't speak the language, Niedenthal explained. "If you're in a different country where you can't communicate otherwise, then simulating the person's facial expression will give you that really subtle information about 'what does this really mean?'" She continued, "You don't say, 'that's a smile so that means we're gonna be friends.' That would be a kind of stupid thing to do, right? Instead, you adjust your behavior according to the meaning of the smile as you've simulated it."

An outstanding question, the authors wrote, is whether people could learn to improve their emotional understanding by improving their mimicry skills or by consciously mimicking others, as Poe's schoolboy character does.

"In that quote, he's talking about something that's done consciously," Niedenthal said. "In our research and in our theory, while somebody may try to do that consciously, we largely think of it as spontaneous or automatic and not intentional." Spontaneous and intentional expressions involve different areas of the brain, the authors wrote, so the effects of concerted and unconscious mimicry might not be the same.

So don't try facial mimicry at home. Or rather, don't worry: you're probably doing it already.

Featured
NeuroNews
10 February

How fit someone is in midlife may predict how their brain ages over the following decades, according to research published today in the journal Neurology.

In the study, which spanned more than 20 years, researchers saw that middle-aged people who had poor fitness ended up with smaller brain volume than their fitter peers.

Typically, the brain shrinks at a rate of about 0.2 percent a year in older age. Scientists have linked dementia and cognitive decline with an acceleration in this process. Fortunately, research has also shown that the brain is still plastic late in life, and that exercise can help older people stave off brain atrophy. In one study, for example, people between 55 and 80 years old were able to increase the volume of their hippocampus, a memory-related region, by 2 percent after one year with an aerobic walking group.

Less is known about how a person's fitness throughout life might relate to later brain aging. To find out, researchers asked 1,583 healthy people with an average age of 40 to run on a treadmill until they were too exhausted to continue or had reached a certain heart rate. To get a picture of each person's fitness level and exercise capacity, the researchers logged the amount of time people spent on the treadmill, as well as how their heart rate and blood pressure went up as they started to run.

Two decades later, the researchers repeated the treadmill test and took MRI scans of the participants' brains. The researchers excluded participants who developed heart disease or began taking beta-blocker medications to treat high blood pressure. Of the 1,094 remaining participants, those who'd been in poor shape in middle age had smaller brain volume than people who'd had better fitness. This difference persisted even after the researchers controlled for other factors like smoking.

"During the treadmill test, people with about 17 beats per minute higher heart rate or 14 units higher diastolic blood pressure had smaller brains later in life, at a rate that we would say was equal to one year accelerated aging," said coauthor Nicole Spartano, a postdoctoral fellow at the Boston University School of Medicine.

The older participants also took a second, shorter treadmill test so the researchers could check their heart rate and blood pressure again. An exaggerated surge in people's heart rate, which signals poorer fitness, was associated with a smaller brain. But this time their blood pressure response to exercise didn't have any association with brain size.

When the researchers repeated the analysis with the whole pool, this time including people with heart disease and those on blood pressure medication, the relationship between fitness and brain volume was more pronounced.

"Fitness may be especially important for prevention of brain aging in people with heart disease or at risk for heart disease," Spartano said.

This study showed a link between poor fitness in middle age and reduced brain volume later in life; the results don't tell whether the first directly causes the second.

"But from other studies we know that training programs that improve fitness may increase blood flow and oxygen delivery to the brain over the short term," Spartano said. "Over the course of a lifetime, improved blood flow may have an impact on brain aging and prevent cognitive decline in older age."

Genetics, which the researchers did not examine in this study, can also contribute to a person's fitness level. In the future, the team plans to track exercise and brain size at more frequent points, and to see whether the results translate to a more diverse population (this study included mostly white people of European ancestry).

"Instead of just using snap-shots of fitness or brain structure and function, we hope to be able to look at changes in these lifestyle factors and how they relate to changes in brain structure and function over time," Spartano said.

Featured
NeuroNews
09 February

The clear, bright yellow of a lemon is a far cry from the flat, waxy color of butter or the soft gold of champagne. But we recognize them all as yellow.

Is that because our visual brain categorizes the continuous spectrum of colors into groups? Or is it our language that determines how we perceive colors? After all, Isaac Newton's division of the visible spectrum included a blue-purple color called indigo. But indigo was later stripped of its status as a color. Isaac Asimov wrote in the 1970s, "It has never seemed to me that indigo is worth the dignity of being considered a separate color. To my eyes it seems merely deep blue."

To find out how we categorize so many different hues into groups of yellow, orange, green and so on, scientists turned to babies, who, unlike Newton and Asimov, are not yet influenced by language. As reported Monday in Proceedings of the National Academy of Sciences, it turns out that color categories might be clear to us long before we have the words for them.

"Color categories seem to be defined, to our surprise, before (or independent of) the acquisition of language," said coauthor Jiale Yang, a psychologist at Chuo University in Tokyo, in an email to Brain Decoder.

Humans can discern thousands of colors, at a conservative estimate. And we have many names for them, from magenta to saffron to cerulean to chartreuse. But each language breaks colors into only a few basic categories. These can range from 11 to 12 terms in most languages spoken in industrialized cultures to as few as two in others. English and Japanese have eleven: black, white, gray, pink, brown, red, orange, yellow, green, blue and purple.

To see whether these categories can develop before language and which brain areas encode them, Yang and his colleagues scanned the brains of both adults and infants ages 5 to 7 months. Using near-infrared spectroscopy (NIRS), a technique that is similar to fMRI but easier to use on babies, the team monitored activity in areas involved in color processing.

In one experiment, participants saw a grid of shapes that flashed between two different colors. When watching colors switch from blue to green, adults and babies had a spike in activity in a region called the occipito-temporal cortex, which is involved in visual processing. When the colors alternated between different shades of green, there was no boost. This suggested that different color categories are recognized in the visual cortex in infants.

To confirm that babies use similar categories to adults, the team did a second experiment. First they showed infants a smiling face pattern in one color. The babies then saw that face alongside a second one, identical save for its color. Sometimes the faces were two shades of green, sometimes two shades of blue and sometimes one was blue and the other green. In each pair, the two hues used were equally distant from each other on the color spectrum.

The researchers wanted to see how long the babies would spend looking at the new-color face. Babies are interested in novelty and usually look at new stimuli longer than old ones, so if they stared longer at that face, it would mean they recognized it as being different enough to merit further investigation. The babies spent significantly longer looking at the new face only when it belonged in a different color category from the one they'd first been shown.

And while babies showed they were aware of the difference between color categories, we also know they are likely able to distinguish color variations within each category. One line of evidence comes from a previous experiment, in which babies could find a green target on a background that was a different shade of green.

The new findings suggest that the ability to perceive different color categories might have an innate foundation across languages and cultures, the researchers concluded. But this doesn't mean we all group our colors the same way.

"The effect of language pressure for inter-personal communication…may modify the sensory categories later," Yang said. "The color categories we perceive are corresponding to the color lexicons we use, and more importantly, they may be different across languages."

In other words, Newton and Asimov's virtual debate on the existence of indigo is not yet solved. In fact, the color blue is particularly contested across cultures. Unlike English, the Russian language has different words for lighter blues (goluboy) and darker blues (siniy). For a Russian speaker, goluboy and siniy are as distinct from each other as yellow is from green.

"There is no single generic word for 'blue' in Russian that can be used to…adequately translate the title of this work from English to Russian," the authors of a 2007 paper called "Russian blues reveal effects of language on color discrimination" noted cheekily.

In their experiment, native English and Russian speakers had to quickly distinguish various shades of blue. Russian speakers were faster, compared with English speakers, in discriminating goluboy and siniy.

"The critical difference in this case is not that English speakers cannot distinguish between light and dark blues, but rather that Russian speakers cannot avoid distinguishing them: they must do so to speak Russian in a conventional manner," the researchers wrote. "The case of the Russian blues suggests that habitual or obligatory categorical distinctions made in one's language result in language-specific categorical distortions in objective perceptual tasks."

So we might start with a few universal color distinctions. But the ways they are fine-tuned by our language do end up affecting our perception.

Image: Jiale Yang, Chuo University

Featured
NeuroNews
08 February

It's the second week of February, and for many of us in the Northern Hemisphere that means we're hitting the limits of our ability to cope with winter. It's hard to get excited about anything—even hot chocolate—when the cold and dark seem to roll on forever.

We know that our mood can ebb and flow with the seasons. About 20 percent of Americans suffer from winter blues or full-blown seasonal affective disorder. But it's not just our mood that changes. Depending on the time of year, our brains may need to work harder to sustain certain cognitive processes, a recent experiment indicates.

When scientists in Belgium and the United Kingdom asked people to perform tasks requiring them to pay attention or work with new information, they did just as well in any season. But the brain activity behind these abilities rose and fell at different points in the year, the researchers reported today in the journal Proceedings of the National Academy of Sciences.

"Our brain is not the same with season," said coauthor Gilles Vandewalle, of the University of Liège in Belgium. "For most people this is likely not perceived. But potentially for people more vulnerable to season…what we see may contribute to the negative impact of season on mood. Depression depends on many parameters and what we see may be one of them."

Vandewalle and his team asked 28 healthy, young people in Belgium to spend four and a half days in a laboratory, staggered at different points between May 2010 and October 2011. After several days cloistered away from any season-dependent cues like sunlight, the participants had their brains scanned with fMRI during different activities.

In one task, people had to hit a button whenever they saw a stopwatch appear on a computer screen; this tested how well they could pay attention. In another, the participants had to listen to a stream of consonants and report whether a particular sound was the same as one they'd heard three noises earlier. This tested their ability to store, update and compare information in their short-term memory.

How people did on these tests did not seem to have any connection to what season it was outside. But how their brains responded to these challenges did vary.

While engrossed in the attention-related test, certain regions in people's brains crackled with the most activity in June and the least near the winter solstice. Among these areas were the thalamus and amygdala (which are involved in alertness), the frontal areas and hippocampus (which play a role in executive control, or the regulation of various thinking processes) and the precuneus (which plays a role in visuospatial attention).

While trying to recall and compare consonants, people's brains followed a different pattern, showing more activity in fall and less near the spring equinox. As in the attention task, frontal areas and the thalamus were involved. The insula, which plays a role in attention, executive processes and mood regulation, was also more active in autumn.

The researchers didn't find any variations in the participants' hormones, alertness or sleep that would explain the seasonal patterns of brain activity. The team concluded that the brain doesn't draw on the same resources year-round.

When brain areas involved in attention fire up near the summer solstice, one possible explanation is that more resources are free to be tapped, making the task easier. By contrast, "Lower activation in winter can mean that the brain is less efficient and that it will be more difficult to do the task, and in a way more costly," Vandewalle said. In winter, the brain might have to pull from other resources to get the same performance.

It's not clear why this is, though. Different messenger molecules might be more or less available at some times of year. Some studies have found more serotonin in people's cerebrospinal fluid and blood in summer, and more dopamine in fall, the researchers noted. A protein called brain-derived neurotrophic factor, which is involved in learning, also gets a boost in fall.

"With fMRI we only have access the activity that fluctuates with the task," Vandewalle said. "We don't know if baseline brain activity changes...So other parameters of brain function that we did not measure may fluctuate as well."

The many ways that seasons affect us deserve more investigation, he added. Our brain's seasonal calendar can't make us migrate or hibernate, like some mammals do. But studies have found that seasons might play a role in when people conceive, die or commit suicide. Seasons also influence when our brains flip on certain genes and how much we eat. And though we might not notice it, seasons might cause shifts in our blood pressure and cholesterol.

"If the brain is seasonal, other aspects of physiology may also be," Vandewalle said.

Featured
NeuroNews
08 February

In 1996, a 22-year-old Louisiana man confessed to raping, beating and murdering his 14-year-old step-cousin and leaving her body under a Mississippi River bridge. He was convicted of the crime and sentenced to death.

But Damon Thibodeaux was not a murderer. The details of the crime he confessed to didn't match what had actually happened, according to The Innocence Project, a legal group that works to exonerate the wrongfully convicted. Thibodeaux told police that he'd strangled his cousin with a gray or white wire from his car, for example. She'd actually been killed with a red wire, which was found near her body.

Thibodeaux is no isolated example; as many as 25 percent of people wrongfully sentenced to death in the United States (an estimated 4 percent of those on death row) falsely confessed to the crime, research suggests. Eighty-eight of the first 325 DNA exonerations by The Innocence Project involved a case where the suspect had confessed despite being innocent. But why would anyone confess to a crime they didn't commit?

A new study finds that one answer may be sleep deprivation. And it doesn't take much. After one night of lost sleep, almost 70 percent of people participating in a lab experiment made a false confession — urged on only by a couple of stern computer warnings.

University of California cognitive psychologist Elizabeth Loftus and her colleagues used a well-established laboratory method to evoke false confessions from study participants. They brought volunteers into the lab and had them do some computer questionnaires and tasks, all while warning them not to press the "escape" key, lest they erase data and wreck the experiment.

A week later, the participants — 88 in this study — returned and did more computer work after seeing another warning against pressing the "escape" key. Then they all spent the night in a sleep laboratory. Half of them got to sleep, while the other half were randomly assigned to stay up all night.

The next morning, participants filled out more questionnaires. At the end of the task, each person saw a screen stating that a research assistant had witnessed them pressing the forbidden "escape" key and asking them to confirm this account and sign the confession. If they refused, they saw the screen a second time, and were again urged to confess. In reality, none of the participants were guilty.

Despite this, the sleep-deprived had a hard time maintaining their innocence. While 82 percent of the well-rested participants resisted signing the first confession, only half of the sleep-deprived participants refused to sign. When urged a second time, 68.2 percent of the sleep-deprived participants falsely admitted to pressing the key, compared with 38.6 percent of the well-rested participants. The findings appear this week in the journal Proceedings of the National Academy of Sciences (PNAS).

"People don't want to do this very much if they're rested, because they didn't hit the key," Loftus told Brain Decoder. "But if they're not rested, the data sort of speak for themselves."

Participants were more likely to make a false confession if they scored high on tests of impulsive decision-making, the researchers found. And regardless of whether they'd stayed up all night or gotten shut-eye, self-reported sleepiness was linked to making a false confession. Those who said they were the sleepiest were 4.5 times more likely to confess than those who said they were not sleepy or only somewhat sleepy.

In the real world, suspects of crimes are often interrogated into the wee hours of the night, Loftus said. Prisoners at Guantanamo Bay were also subject to long hours of sleep deprivation, according to reports by the International Committee of the Red Cross and accounts by detainees.

Previous work by Loftus and her colleagues found that sleep deprivation makes eyewitnesses more susceptible to false memories. Suspects falsely confess for multiple reasons, Loftus said. Some are mentally unstable and want to be associated with a famous crime. Others see confession as the lesser of two evils. Someone accused of child abuse might confess if the police tell them that coming clean is the only way to see their kid again, Loftus said.

"There's maybe a little of that happening here," Loftus said of the latest research. "Maybe some of these sleep deprived people just want to go home and just don't feel like having a confrontation."

There's another, more insidious way that sleep deprivation can work, though. When pressured enough, people may actually believe that they are guilty. They start "remembering" the crime and even provide details about what they did and how they did it. Sleep deprivation might have made participants less confident in their own memories, Loftus said.

"Certainly sleep deprivation affects cognitive functioning, your speed of processing, your judgment," she said.

For eyewitnesses, experts advise interrogation by someone who doesn't know who the suspect is in the crime, in order to avoid memory contamination. Something like that would be harder to do when questioning suspects themselves, Loftus said, but there might be other ways to combat the suggestive effects of interrogation. One way to combat false confessions would be to videotape all interrogations, she said. The footage might be able to show if a suspect was exhausted or not, or if they were being badgered or otherwise pressured by the interrogators.

In the 1996 murder case, Thibodeaux recanted his confession the next day, and no physical evidence linked him to the crime. But confessions are powerful. As a study published in the journal Law and Human Behavior in 1997 (the year Thibodeaux was convicted) revealed, juries are more likely to convict a suspect who falsely confessed, even if they know the confession was untrue. Thibodeaux spent 16 years behind bars before DNA testing exonerated him. He was released in 2012.

Featured
NeuroNews
01 February

"For me, it's like looking at the world through a curtain of transparent static. Like the signal isn't coming in great on an old bunny-eared TV"—this is how one Reddit user describes his experience with "visual snow" syndrome.

Just like many other people with this syndrome, who see snow or TV-like static across their visual fields, the Reddit user reports that his symptoms get worse with less ambient light. He also notes they increase whenever he gets migraines—which have previously been associated with the syndrome. "During these auras, lights may jump around and blink, I'll see spots of red and blue pixels, etc.," he says.

Visual snow is a chronic condition, with many people reporting that their symptoms persist at all times, even when their eyes are closed. The syndrome is also frequently accompanied by other visual phenomena, such as the persistence of previously viewed images (a condition known as palinopsia) or perceived flashes of light, as well as non-visual symptoms such as tinnitus (ringing in the ears).

The severity of symptoms varies from patient to patient. "Most patients can cope with visual snow in daily life, it is annoying however," said Clare L. Fraser, of the University of Sydney, who has treated patients with the syndrome. "Some patients find doing any reading or studying nearly impossible as the 'snow' obscures what they are trying to read, so some patients have dropped out of university. I have other patients who made it through law school, and with some simple adjustments, have coped really well."

Fraser and her colleagues have now looked at the characteristics and potential treatment for the syndrome in a new study of 32 patients, published in the Journal of Clinical Neuroscience. The researchers found that the classic symptom of seeing fine, predominantly black and white static was reported in 91 percent of patients, while the remaining patients reported seeing fine chromatic static. The majority of the patients also experienced more than one additional visual phenomenon such as persistent after-images or star bursts and colored blobs. In addition, the researchers found that 63 percent of the patients experienced high-pitched tinnitus, 44 percent experienced migraine with or without aura, and some also had tremor and balance problems.


Most patients (65 percent) pointed to factors that aggravated their symptoms, such as high contrast text on a computer screen, darkness, exhaustion and stress. Thirty-five percent of the patients said that altering the ambient light helped them alleviate their symptoms, but many—57 percent—were not able to point to any strategies that could help them cope.

The researchers also tested the use of tinted lenses as a potential way to decrease the severity of symptoms in 12 of the patients. "This was the first study to report the use of tinted lenses as a potential mechanism to make the symptoms more tolerable," Fraser told Braindecoder. "Colored lenses have been used by other doctors for this condition, but it seemed that no one has studied the effect and mechanism in detail before." The researchers found the use of the lenses seemed to provide some relief to most of the patients in the small sample, particularly when they used lenses in the yellow-blue color spectrum.

The mechanisms behind visual snow are still unclear. Though its occurrence has been linked with stress, depression, migraines and previous use of hallucinogenic drugs, "no clear causative agent has been identified," the researchers said. In fact, scientists have recently suggested that visual snow syndrome is a unique clinical phenomenon that is distinct from migraine.

Overall, the thinking about possible mechanisms behind the syndrome is now shifting, noted Fraser. The results of the new study, for example, suggest that visual snow may be caused by an imbalance in inputs from two major visual pathways that receive input for color and motion, Fraser said. Such an imbalance is what neuroscientists call thalamocortical dysrhythmia, meaning the normal neural activity between the thalamus and various regions of the cortex is disrupted.

Colored tinted lenses such as the ones used in the new study "will be affecting the relative visual inputs along these two pathways by adjusting the color perception," she said. And the fact that certain colored lenses seem to alleviate the symptoms "points us to the hypothesis that visual syndrome is part of the thalamo-cortical dysrhythmias [and that] there is an imbalance in these inputs driving the condition, which is altered by wearing the lenses," Fraser said.

Featured
NeuroNews
25 January

Scientists in China report that they have genetically engineered monkeys that could be used to study autism in humans.

With this development, brain scientists might be a step closer to a better animal model for human psychiatric disorders that are tough to mimic in mice.

Researchers at the Chinese Academy of Sciences' Institute for Neurosciences in Shanghai genetically modified eight cynomolgus monkeys (also known as crab-eating macaques) so that they overexpressed the human gene MECP2 in the brain. In humans, MECP2 provides cells with instructions for producing a protein vital for normal brain function. Having extra copies of this gene leads to excess protein production, which can cause intellectual impairment, and MECP2 duplication is one of dozens of genetic mutations that have been implicated in autism spectrum disorder.

Though the genetically altered monkeys did not have the seizures or the severity of cognitive impairment typically seen in humans with MECP2 duplication syndrome, they did show some autism-like behaviors. They displayed an increase in repetitive movements, like running in circles; they had heightened anxiety levels; and they had decreased social interaction, spending less time sitting with other monkeys. These monkeys also passed along the mutation to their offspring, and the second generation of monkeys also showed the same behavioral symptoms.

The researchers, who published their findings in the journal Nature today, think their transgenic monkey model could be much more promising than rodent models for scientists who are trying to pin down the exact brain circuits responsible for the symptoms of autism and other psychiatric disorders.

"Currently researchers all over the world are using mostly rodent models to study human brain disorders," neuroscientist Zilong Qiu, one of the study authors, told reporters in a news briefing. "We have been concerned whether we can mimic the complicated symptoms of human autism patients in a mouse."

Scientists have recently reported a lot of success treating autism in mouse models, but they have not had as much success translating those treatments to humans, said Yong-hui Jiang, who was not involved in the study but researches the genetic basis for autism spectrum disorders at Duke University School of Medicine.

"It's easy to manipulate mice, but translating the behavior profile for mice to humans has been difficult," Jiang told Brain Decoder. "Mouse behavior is more distant from human behavior, and the non-human primate brain clearly has more similarities to the human brain."

Because it is time-intensive and expensive to raise monkeys with specific genetic mutations, the transgenic monkey model isn't likely to be used in studies of disorders where rodents can be used effectively. "It's still much more costly than mice," Mu-ming Poo, the director of the Institute for Neurosciences, said in a news briefing. "On the other hand, there's no choice."

Beyond autism, it is hard to faithfully reproduce the symptoms of other psychiatric disorders such as schizophrenia, bipolar disorder, depression and OCD in mice. Jiang said the new research demonstrates the feasibility of a non-human primate model.

"Hopefully we will have more and more research in this direction so we can really understand better the value of a non-human primate model for studying neuropsychiatric disorders in the future," Jiang said.

Already, Qiu and his colleagues are conducting brain imaging studies on their lab monkeys to try to identify the brain circuit deficiencies that lead to autism-like symptoms. Then, they hope to work on therapies that could reverse the negative effects of the MECP2 duplication.

MECP2 mutations only affect a small proportion of people with autism, but the hope is that this model will broadly apply to at least some variations of autism, said Sarika Peters, a psychologist at Vanderbilt University who was not involved in the study. She said that it will be interesting to see whether future studies with this model will allow scientists to study other symptoms involving eye contact and nonverbal gestures.

Featured
NeuroNews
12 January

Scientists have discovered a new use for a drug once dismissed as a safe but useless failure. This drug was no help against diseases that destroy neurons, but may instead bring speedy relief from depression, researchers reported this week in the journal Molecular Psychiatry.

In a recent experiment, the drug worked in as little as half an hour in mice, making them begin to act differently from those with symptoms of depression. This indicates that the drug may have a chance of becoming a new option for patients who don't improve on conventional antidepressants.

"The classic antidepressants all act the same way, and there are a lot of people who don't respond," said coauthor Solomon Snyder, a neuroscientist at the Johns Hopkins School of Medicine in Baltimore, Md. "And the main problems with all the antidepressants on the market is that it takes two to six weeks before they do anything. If you're suicidally depressed you can't sit around waiting six weeks."

The new drug is called CGP3466B (CGP for short) and its effects resemble those of ketamine, which is best known for its uses as an anesthetic in operating rooms and, recreationally, as a hallucinogen. Recently, scientists have turned to ketamine as an alternative treatment for depression, because it seems to bring relief from depressive symptoms within hours. For some patients, the relief seems to last for months. One problem with ketamine, though, is that at higher doses it can also cause mental distortions and psychosis.

To find a drug that could work like ketamine minus the side effects, scientists had to understand how ketamine works in the brain to bring about antidepressant effects. In the new research, Snyder and his team identified several proteins that both the redeemed drug and ketamine interfere with.

Many traditional antidepressants primarily affect the neurotransmitter serotonin. Ketamine takes another path. Previously, scientists had figured out that ketamine turns on a protein called mTOR. Scientists aren't sure how this protein is involved in brain functioning, or whether it is depleted in people with depression. But they were able to show that ketamine rouses mTOR, and that this is key to its effectiveness against depression.

"The mystery was, how does ketamine increase mTOR?" Snyder said. To find out, he and his colleagues dosed mouse nerve cells with ketamine. Ketamine doesn't directly impact mTOR, but its presence messes with a few other proteins that do. Ultimately, it stops the on-switch for mTOR from getting destroyed.

The researchers then turned to CGP, another molecule that can jam the same chain reaction that ketamine does. Earlier studies had tested CGP on neurodegenerative diseases such as Parkinson's but found it ineffective against those conditions. They did, however, show that CGP is safe in humans, has no notable side effects and works at minuscule concentrations.

"There weren't many people focusing on it…but we found it fascinating," Snyder said. "This is the most potent action of any drug I have encountered in 45 years of research with the exception of botulinum toxin [Botox]."

To watch CGP in action, Snyder and his team gave the drug to mice genetically engineered to have symptoms resembling depression. The mice then completed behavior tests that predict how well an antidepressant is working. Mice given CGP were more spirited than their depressed peers. After a dose of CGP, the mice were willing to spend more time trying to escape a pool of water and boldly traveled across new terrain to reach a snack. It takes traditional antidepressants 21 days to make a difference on this test; CGP kicked in after only half an hour.

It will take more research to find out whether the drug is effective in humans and how long the effects might last.

Compared with ketamine, "CGP is not being studied as extensively," Snyder said. "But…it's in the past been given in clinical trials, so if somebody wanted to pursue it now for depression it would be easy."

Featured
NeuroNews
07 January

Knowing that other people might gossip about your behavior may motivate you to become more generous, a new study suggests.

The link between gossip and eagerness to share your resources appears to be even more pronounced if the potential recipient of your generosity happens to have a lot of social connections, according to the new research.

"People are motivated to maintain a good reputation in their social environment," said study author Junhui Wu, of Vrije Universiteit Amsterdam in the Netherlands. At the same time, they tend to exchange and spread information about others' reputation in social networks, she said. "Thus, cues that others might gossip about our behavior should function to make us more generous and cooperative in order to secure a good reputation," Wu told Braindecoder.

Wu and her colleagues set out to examine the link between gossip and generosity in the context of social network connections. Social networks are frequently densely interconnected, which means that gossip communicated to just one person can spread widely throughout the network. These networks also tend to have a few people who have a lot of social connections; these people have more potential to spread gossip and influence others' reputation, compared with their peers who have fewer connections, the researchers noted.

In the experiments involving more than 600 people, the researchers used the concept of the so-called dictator game to measure the participants' generosity levels. As part of the game, the participants were asked how many of 100 lottery tickets they wanted to give to a potential recipient, with each ticket representing a chance to win a $2 bonus. The recipient or a third-party observer was connected to either none, one, four, or eight of the participants' future partners in a second game. The participants were also told that the recipients or the observers would have the opportunity to gossip to the future game partners.

The researchers found that the participants were consistently more generous with the lottery tickets when the recipient or observer had more connections to future partners, and therefore presented a higher risk of gossip. This tendency was brought about by the participants' increased concerns for their reputation during their interactions with others, according to the study, published in December in Evolution and Human Behavior.
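For readers who want to see the shape of the design, here is a minimal sketch, in Python, of the dictator-game setup described above. It is not the authors' materials: the class and function names (Allocation, all_conditions) are illustrative assumptions, and the example allocation at the end is purely hypothetical.

from dataclasses import dataclass
from itertools import product

# A minimal sketch of the dictator-game design described above (not the authors' code).
TICKETS = 100                    # each participant divides 100 lottery tickets
BONUS_PER_WIN = 2.00             # each ticket is one chance at a $2 bonus

GOSSIPER_ROLES = ("recipient", "third_party_observer")
CONNECTIONS = (0, 1, 4, 8)       # links to the participant's future game partners

@dataclass
class Allocation:
    gossiper: str                # who could gossip about the participant's choice
    connections: int             # how many future partners the gossiper can reach
    tickets_given: int           # the participant's choice, from 0 to 100

    @property
    def tickets_kept(self) -> int:
        return TICKETS - self.tickets_given

def all_conditions():
    """Enumerate the gossiper-role x connections conditions named in the article."""
    return list(product(GOSSIPER_ROLES, CONNECTIONS))

if __name__ == "__main__":
    for role, links in all_conditions():
        print(f"gossiper={role:22} connections={links}")
    # A purely hypothetical participant in the highest-gossip-risk condition:
    a = Allocation(gossiper="recipient", connections=8, tickets_given=45)
    print(f"gave {a.tickets_given} tickets, kept {a.tickets_kept}")

In these terms, the study's central result is that the number of tickets given rose as the gossiper's connections increased, whichever role the potential gossiper held.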

The researchers also examined whether there would be any differences in how two types of people, called prosocials and proselfs, might adjust their generosity in response to potential gossip. In general, prosocials tend to maximize outcomes of a given social interaction for themselves and others, whereas proselfs are only focused on their own benefit during interactions.

In this game, however, it turned out that both prosocials and proselfs increased their generosity in response to gossip. But there was a difference between the two types. Proselfs were more driven by concerns about their reputations compared with prosocials. "This reputational concern is more predictive of their generosity, compared to prosocials, who already show a relatively higher baseline level of generosity," Wu said.

But why do people care about their reputation in the first place, especially in a lab game? One theory is that the human mind, shaped over the course of evolution, encourages behaviors that maximized the chances of survival in the environment of our ancestors. Having a good reputation is beneficial because it helps one to stay included in the group and secure food and support.

The new results suggest that natural selection may therefore have shaped an information processing system that allows people to identify opportunities to promote a good reputation, for instance through showing off their generosity. "When people believe that their interaction partner or a third-party observer has more connections in social networks, and thus has more gossip capacity, they tend to be more generous to their interaction partner," Wu said.

Featured
NeuroNews
06 January

A loss of neurons in part of the hippocampus, a brain region related to memory, might explain some of the symptoms of schizophrenia. This area has long been overlooked because it is so tiny and hard to distinguish from surrounding parts of the hippocampus.

In an experiment, published today in the journal Neuron, scientists found that mice with a rodent version of schizophrenia were missing some of the neurons in this spot, leading to other changes that might account for the problems that those with schizophrenia can have relating to other people.

It's estimated that about 1 percent of adults have schizophrenia, a condition that tends to develop in early adulthood. Schizophrenia is famous for causing hallucinations and delusions, but it also involves social symptoms such as becoming withdrawn and showing an apparent flatness or lack of emotion.

"This is one of the defining features of schizophrenia and it is often present even before the full-blown disease," said coauthor Joseph Gogos, a neuroscientist at Columbia University and the Zuckerman Institute in New York City. "Part of it is linked to impaired social cognition, a mental operation that allows us to perceive the intent, the mood and the action of other people and respond appropriately."

Postmortem examinations of people who had schizophrenia have revealed a diminished number of neurons in a spot in the hippocampus called area CA2. This brain region helps us form memories of social interactions.

To investigate how an altered CA2 region might contribute to the disease, Gogos and his colleagues genetically engineered mice to have symptoms resembling those of schizophrenia. Specifically, they replicated a genetic deletion that has been found in subsets of people with a high risk of developing schizophrenia and similar disorders.

As the altered mice reached adulthood, they started to show symptoms of faulty social cognition. The rodents spent a long time investigating mice they had already met, indicating that they could not recall previous encounters.

A brain tissue examination of these mice showed the rodents had fewer neurons in area CA2 than their healthy peers did, leading to alterations in the activity and connections among the remaining neurons.

These changes—the loss of neurons and social symptoms—emerged as the mice entered young adulthood, which parallels the onset of schizophrenia in people, Gogos said.

"How might these changes lead to impaired social memory, we do not know but it is our intention to understand this," said Gogos. "Part of the explanation may have to do with the fact that CA2 contains high levels of vasopressin, a hormone that plays a role in social behaviors."

Featured
NeuroNews
05 January

Lumos Labs, the company behind the "brain training" program Lumosity, has agreed to pay $2 million to settle Federal Trade Commission charges that it deceived consumers.

For 10 years, the Lumosity program was marketed to consumers, some 70 million users in all, as a way to keep their brains sharp and healthy, despite a lack of conclusive scientific evidence for such claims.

"Lumosity preyed on consumers' fears about age-related cognitive decline, suggesting their games could stave off memory loss, dementia, and even Alzheimer's disease," said Jessica Rich, director of the FTC's Bureau of Consumer Protection in a press statement. "But Lumosity simply did not have the science to back up its ads."

Questions about the validity of claims about brain games have piled up in recent years. In 2014, a group of about 70 neuroscientists released an open letter warning that claims about positive effects of brain games on people's cognitive skills and brain health are not based on actual science.

Lumosity is one of the best-known brain training products in an industry that has been forecast to reach $6 billion in 2020. The program includes more than 40 games, each purportedly targeting a specific brain area. The company sold subscriptions for these games, with options ranging from monthly payments of $14.95 to lifetime memberships for $299.95.

According to the FTC's complaint, the company claimed that training with these games for a recommended three to four times a week "will improve performance on everyday tasks; will improve school, work, and athletic performance; will delay age-related decline in memory and protect against other age-related conditions such as mild cognitive impairment, dementia, and Alzheimer's disease; and will reduce cognitive impairment associated with the side effects of chemotherapy, post-traumatic stress disorder, traumatic brain injury, attention deficit hyperactivity disorder, Turner syndrome, stroke, and other health conditions."

The FTC also alleged that some consumer testimonials on the Lumosity website had been solicited through contests that promised significant prizes, including a free iPad and a round-trip to San Francisco.

In a statement to NBC News, Lumos Labs defended its products and said the settlement does not pertain to "the rigor of our research or the quality of the products." The company said it has made "strong contributions to the scientific community" and will not change its focus: "We remain committed to moving the science of cognitive training forward and contributing meaningfully to the field's community and body of research."

Under the settlement with the FTC, Lumos Labs has to notify customers who signed up for an auto-renewal plan and provide them a way to cancel their subscription. The settlement requires the company and its co-founders to have "competent and reliable scientific evidence before making future claims about any benefits for real-world performance, age-related decline, or other health conditions."

Featured
NeuroNews
16 December 2015

In an experiment called the rubber hand illusion, a person can feel sensations that arise from a dummy hand. Now a new study shows that, if you incorporate some gross poop-, vomit- and blood-like substances into the experiment, the illusion can also serve as a vehicle for transmitting the sensation of disgust.

The new results could one day help researchers improve treatment of a form of obsessive-compulsive disorder (OCD) that is characterized by excessive disgust and an overwhelming aversion to contamination, the researchers said.

The classic version of the rubber hand illusion provides clues to how the brain constructs our body image. In the experiment, a person puts both hands on a table, and a fake rubber hand is placed on their left side, close to their left hand. The rubber hand is separated from the real left hand by a small partition that prevents the person from seeing their real hand. The experimenter then strokes the subject's real hand and the fake hand simultaneously with paintbrushes. As a result, the subject has the illusion that the fake hand is their real hand.

The overwhelming aversion to certain things that some people with OCD feel shares some similarities with the sensation of disgust that most of us feel when we encounter something that grosses us out. Study author Baland Jalal, currently at the University of Cambridge, wanted to see what would happen if you put something disgusting on the fake hand while a person without OCD is having the illusion. "Will they also have disgust sensations arising from the fake hand?" he said he wondered.

To find out, Jalal and his colleagues at the University of California, San Diego, recruited 14 college students and created gross-looking substances that mimicked poop, blood and vomit. The people in the study were told those were real, so the researchers had to make sure they looked convincing. To create the fake poop, they used Nutella, peanut butter, chocolate and barbecue sauce. "Then we added artificial odor to make it really disgusting," Jalal told Braindecoder. For the fake blood, they used red ink, water and barbecue sauce, and to make the fake vomit, they used pieces of various foods including beans and rice.

The researchers first showed their creations to the participants to see which one each of them would find the most disgusting. The scientists then conducted the rubber hand illusion experiment in two different conditions: in one condition, the paintbrushes were moved simultaneously, and in the other, they were moved in an asynchronous manner. Typically, if the brushes are moved simultaneously, the rubber hand illusion is expected to be more intense, compared with the other condition. This was the experience of 11 of the 14 people in the study, and the remaining three were excluded, according to the study, published recently in PLOS ONE.

The researchers then repeated the rubber hand experiment with the 11 people, this time adding the substance each participant had rated as the most disgusting. They placed this substance, neatly served on a tissue or bandage, on the rubber hand, and proceeded to stroke each participant's real hand and the rubber hand, either simultaneously or asynchronously. It turned out that nine of the 11 people reported feeling more disgusted when their own hand was being brushed in sync with the rubber hand, compared with the asynchronous condition. "Disgust seemed to be integrated into their body image — it became a part of them," Jalal said.

Jalal said this experiment might one day be used in the treatment of people with OCD, whose condition is typically treated by exposing them to objects contaminated with whatever they find most aversive, and then preventing them from giving in to their compulsion to wash their hands. The thing is, a lot of people drop out of this type of treatment because they just can't tolerate direct exposure, he noted. The rubber hand illusion technique could therefore be used as an intermediate step to gradually and indirectly expose patients with OCD to the substance in question.

Featured
NeuroNews
14 December 2015

Our senses receive faint signals all the time, leading us to wonder: Was that a drop of rain I felt? Did I just smell smoke? Is that flicker deep in the subway tunnel the lights of an approaching train? Did my phone just chime?

"What we actually perceive is definitely not always a fixed version of what's out there in the world," said Daniel O'Connor, a neuroscientist at Johns Hopkins University in Baltimore. "What we perceive depends a lot on context."

Take the sense of touch, for example. Scientists have known for years that if you give animals a weak touch, they sometimes notice it and sometimes don't. The touch is calibrated to be the same every time, so at some point in the journey from the skin receptors through the rest of the nervous system, something must happen to the sensory information that determines whether the brain perceives it and how it interprets it.

This intervention happens not in the skin sensors that record touch in the first place but all the way up in the cerebral cortex, O'Connor and his team report in a study published December 7 in the journal Nature Neuroscience. Surprisingly, the brain area that processes sensory information and decides what we feel is not calling all the shots but is guided by an area higher up.

"It's the signals from higher brain areas that are decisive in determining what mice and—by inference what we—perceive when you get an ambiguous stimulus," O'Connor said.

To investigate how our brains process weak signals, O'Connor and his team trained mice to lick a spout when they felt their whiskers being flicked. Only when the scientists had tweaked the rodents' whiskers did the spout dispense a drop of water.

The researchers then repeatedly wiggled the mice's whiskers very, very slightly. Sometimes the mice would realize and lick the spout, sometimes not. As the mice reacted (or didn't) to the whisker wiggles, the researchers read their neurons' activity, starting with those that have projections all the way out to the whisker and record the mechanical whisker deflection signal. "They're the very first neurons in the sensory processing chain," O'Connor said.

Activity in those neurons never wavered; it was the same whether or not the mice felt the whisker touch and headed over to the waterspout. "That means that what is critical to the perception—the detection of the whisker wiggle—is not noise in the input," O'Connor said.

The team then followed the signal as it traveled from those first neurons to the rest of the brain. Activity in the thalamus, which relays sensory information to other parts of the brain, also did not predict whether the animal would pick up on the whisker flick.

It wasn't until the sensory signal reached the cerebral cortex that differences in brain activity emerged. In an area called the primary somatosensory cortex, the researchers observed more activity during those trials in which the mice sensed the whisker flick and headed to the waterspout.

It turned out that this surplus of activity in the primary somatosensory cortex was actually triggered by neurons in another area called the secondary somatosensory cortex. As suggested by its name, this region comes after the primary somatosensory cortex in the processing chain.

In cases where mice became aware of the whisker touch, messages from this area were traveling back to shape the perception formulated by the primary somatosensory cortex. "Basically they go from higher brain areas back to earlier areas," O'Connor said.

Now the scientists have to figure out what factors cause neurons in the secondary somatosensory cortex to fire or not in response to the same faint signal. One possibility is that it's the initial state of the brain right before the whisker flick that determines whether it would be felt—just like how a soft beep could be heard or go unnoticed depending on the noise levels in the environment.

In future projects, "We're reverse-engineering the actual brain circuit that determines why the same stimulus can produce different perceptions," O'Connor said. "By manipulating neural activity, we're trying to understand where that variability comes from."

Featured
NeuroNews
11 December 2015

One of the best metrics New Yorkers have for how good a book is: can it make me miss my stop on the subway? The best reads are so enthralling that you don't even hear the doors lurch open or the announcement reminding you what stop you've just arrived at.

"When we're highly engrossed in a task then we tend to miss sound information that we would have otherwise noticed," said Maria Chait, a cognitive neuroscientist at University College London.

Scientists call this experience inattentional deafness. In a new study published online December 8 in the Journal of Neuroscience, Chait and her colleagues have discovered what the phenomenon looks like in the brain. When our attention is focused on visually demanding tasks, there's a brief but sharp reduction of brain activity used to process noises. This drop happens very early on in our brain's handling of sound input, suggesting that early sound and visual processing is drawn from a shared pool of mental resources.

Inattentional deafness could be a liability for professionals who are bombarded by sensory messages as they try to do their jobs. "A surgeon might miss an important beep of a monitor or a control tower operator might miss a message on their headphones while they're focusing on the screen," Chait said. "So it's important to understand what gives rise to these situations."

Previous research has shown that asking people to focus very hard on a visual task makes them less likely to register irrelevant beeps from a machine. "We wanted to understand how is this behavioral effect reflected in the brain, at what point in the processing of the sound does it get suppressed?" Chait said.

To find out, she and her colleagues flashed scatterings of letters across a screen for one tenth of a second and asked people to report whether they'd seen an X or Z among the symbols. Sometimes the researchers also played brief tones while the letters flashed, having told the participants that any noises they might hear would be irrelevant blips caused by the machines working.

As the participants concentrated, the researchers scanned their brains with magnetoencephalography, or MEG, a technique that measures the magnetic fields generated by the changing electrical activity of neurons. This allowed the team to watch how these people's brains responded to the sounds.

Normally, sound information enters through the ear, passes through several areas in the midbrain and thalamus, then reaches the primary auditory cortex. Damage to this area makes people deaf and unaware of sounds, but they keep their reflex reactions to alarming sounds.

The output of the primary auditory cortex then reaches the secondary and tertiary auditory cortices, which are formed concentrically around one another and process different aspects of sounds—for example, Wernicke's area, which helps us understand language, resides in the secondary auditory cortex.

Then the results of the processing spill outwards to frontal areas involved in our conscious understanding of what we are hearing—identifying it, relating it to other sounds we've heard before, deciding how to react to it, and so on.

It was early in this process, in the secondary auditory cortex, that the researchers saw a difference in the brain activity of people who were concentrating hard on the letters.

The brain's characteristic response to a simple sound typically lasts around 250 milliseconds. But in people experiencing inattentional deafness, the brain response started to diverge 100 milliseconds after the onset of the sound, suggesting that's the point in the processing of the sound where attention, or the lack thereof, starts to affect the process.

"We know that by 100 milliseconds the sound hasn't reached awareness yet," Chait said. "This competition for attentional resources, that gives rise to inattentional deafness, happens much earlier than we thought."

This shows that the brain's auditory and visual systems share resources, and that focusing on one can make us briefly unable to perceive input from the other. "Even processing a very simple sound cannot be independent of other stuff that is going on in the brain at the same time," Chait said.

The good news is that, as risky as it might be to accidentally tune out sounds from our environment, we can't sustain the hyper-focused attention responsible for inattentional deafness. "We can deplete resources, but this depletion of resources can only last for a short duration," said Chait.

Featured
Animals
10 December 2015

Infidelity: it's a classic subject, and natural selection has assured that it persists, at least in one not-so-romantic animal model, the prairie vole. Though mostly monogamous, prairie voles occasionally stray from their partners. But these 'cheating' voles may just have poor memory, a new study suggests.

The researchers found that variation in a certain gene is linked with the voles' differing degrees of fidelity. The gene encodes the receptor for the hormone vasopressin, which can influence the voles' memory and learning, especially of places. So, a male prairie vole with fewer vasopressin receptors in the brain may simply have difficulty remembering he is wandering far outside his territory—into areas where he may have previously fought with male adversaries.

Researchers have been studying the mating habits of prairie voles since the 70s, when, during studies of vole populations, University of Illinois at Urbana-Champaign researcher Lowell Getz realized that he and his colleagues tended to catch pairs of male and female prairie voles in live traps together—they even caught the same pairs on multiple occasions. Since then, researchers have been trying to understand the factors behind the voles' sexual tendencies.

One such factor is vasopressin, a hormone produced in the brains of many mammals, including voles and humans. Although best known for its roles in retaining water in the body and constricting blood vessels, studies suggest this hormone also has roles in social and sexual behaviors. In 2008, evolutionary biologist and neuroscientist Steve Phelps, now at the University of Texas at Austin, and his colleagues found that the amount of vasopressin receptors in an area of the brain called the retrosplenial cortex (RSC) was correlated with prairie-vole promiscuity. Males that sired offspring with females other than their mates had lower levels of the receptor in the RSC; males faithful to their partners had higher levels of the receptor there.

"When males are in a social situation, they release vasopressin broadly in the brain, and depending on where the receptors are, that changes how they deal with the social situation," Phelps hypothesizes.

The RSC is part of a brain network that's involved in episodic memory, navigation and spatial learning. Vasopressin binding to its receptor in the RSC could possibly help prairie voles form memories of where they encountered other males in the past, the researchers think. Males with many vasopressin receptors in the RSC might have strong memories of such encounters and, therefore, be deterred from wandering outside their territory (and, consequently, from philandering). Conversely, males with fewer vasopressin-receptors in the RSC might stray because they have forgotten the risks—encountering and fighting with other males—of doing so, the researchers suggest.

In the new paper, published today in Science, Phelps and colleagues identify two different versions, or alleles, of the vasopressin-receptor gene: one associated with high expression of the vasopressin receptor in the RSC, and one associated with low expression of the gene there. Furthermore, they found genomic evidence that evolution has selected for the maintenance of these two distinct alleles, rather than favoring one over the other, a phenomenon called balancing selection.

"We showed that natural selection is actively keeping around variation in the brain," Phelps says.

Why maintain genes that promote opposite patterns of behavior, faithfulness and infidelity? Perhaps to allow prairie voles to adapt to changes in population density, which varies quite a bit in this species, the researchers suggest. They hypothesize that when population density is high, a roving vole is more likely to encounter available females; males also tend to be less successful at guarding their own mates from the advances of other males, Phelps says. When population densities are low, on the other hand, a male may be better off staying close to his own mate, since the likelihood of his finding other available females is lower.

Phelps and colleagues are currently testing this hypothesis by experimentally altering vole population densities and observing whether the frequencies of the two vasopressin-receptor alleles change accordingly.

Featured
NeuroNews
03 December 2015

A single traumatic experience is enough to rattle people hard, but it rarely takes them down. In the face of subsequent ordeals, however, we become more vulnerable: a new study suggests that a second trauma has a critical role in leading to posttraumatic stress disorder, a condition that can include flashbacks, nightmares and intense anxiety, enough to disrupt a person's life.

When scientists gave rats one painful electric shock, the rodents' memories and stress response temporarily became blunted but returned to normal as expected. But after a second unpredictable harrowing experience, the animals began to show fear and anxiety similar to what is seen in people with posttraumatic stress disorder (PTSD).

"Having repeated traumatic experiences is likely a critical factor for the development of posttraumatic stress disorder," said coauthor Cristina Alberini, a neuroscientist at New York University.

How exactly going through terrifying experiences leads to PTSD isn't clear, although stress and certain hormones contribute. Also unknown is why 10 to 20 percent of people who experience a trauma go on to develop PTSD, while others don't. What researchers have seen is that the condition is correlated with having a prior history of trauma.

The stress curve

Strangely, small amounts of stress actually boost performance and memory at first in people, which is the opposite effect of that wrought by a powerful trauma. "A little bit of stress increases performances up to an optimal level, after which increasing the level of stress is actually detrimental and is going to cause impairment in performance and memory," Alberini said. Scientists can plot this effect as an upside-down U-curve.

(image)
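
As a purely illustrative way to picture this inverted-U relationship, performance can be modeled as a quadratic function of stress; the numbers below are arbitrary and are not data from the study.

```python
# Illustrative inverted-U ("Yerkes-Dodson"-style) curve: performance rises with
# stress up to an optimum, then falls off. Arbitrary units, not study data.
def performance(stress: float, optimum: float = 5.0, peak: float = 100.0, width: float = 1.5) -> float:
    return peak - ((stress - optimum) / width) ** 2

for s in [0, 2, 5, 8, 10]:
    print(f"stress={s:>2}  performance={performance(s):6.2f}")
```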

Alberini and her colleagues found that this is the case in rats, too. They placed rats in a box that was lit on one side, dim on the other. Rats like to explore and prefer not to stay in brightly lit areas, so they quickly headed into the dark side of the enclosure. Upon entering the shaded side, the rats received a shock on their feet. The researchers then scooped them up and returned them to their home habitats.

Then the researchers measured how well the rats remembered their shock experience by having the animals revisit the enclosure and measuring how long they hovered in the lighted area before entering the spot that had shocked them before.

"If the memory is disrupted…they're going to go in even though they got a shock some days before," Alberini said.

For low-intensity shocks, the rats seemed to recall the distress just fine. They did the wise thing and avoided the likely dangerous dark side of the cage. But the rats that had received more intense shocks showed impaired memory: they didn't pause before visiting the dark side of the enclosure.

A second trauma

Having shown that rats follow the same inverted-U pattern that humans do in response to a first stressful experience, the researchers investigated what happened when the rats faced a second trauma. Would the forgetful rats remember the trauma once they were reminded with another electric shock?

A reminder shock could determine "whether memory is actually impaired in the sense that it has not been formed properly or whether the memory is there but inhibited," Alberini said. "If we give a reminder shock, if the memory was there but inhibited, it should actually reinstate."

Sure enough, rats that had experienced one traumatic intense shock were now spurred to remember it when faced with another. "The rats…don't want to go into the dark side of the chamber ever, meaning the memory was super-strong," Alberini said. "We measured the anxiety response and it was significantly increased in this group."

The role of uncertainty

The second shock actually came in two forms: sometimes quite predictably in the same dark corner of the cage, and sometimes in a completely different environment where the rats could not predict it.

When the reminder shock was unpredictable, it pushed the rats into showing symptoms that echo PTSD in humans. If the shock happened in the same area as before, the rats didn't become as upset as when it struck in a new context.

"Our conclusion was that two traumatic experiences…are actually critical for the development of these symptoms," Alberini said. This confirms and extends observations that a previous history of trauma is correlated with PTSD in people. "Previous history of trauma is…a loose description," she said. "In the clinical history it's very difficult to define when, how many, how expected and all of that."

Like people with PTSD, the rats also could not extinguish their distress. Normally, when rats are repeatedly exposed to an area that has shocked them before, if no further unpleasantness ensues the rats stop associating that area with foot shocks and end their avoidance of it.

This type of learning is called extinction, and the traumatized rats couldn't seem to manage it; their fear persisted, and even extended to new places that the rodents hadn't experienced before. "Animals exposed to the first traumatic experience and the reminder shock not only had a very strong memory and anxiety increase [but] also had an impaired extinction," Alberini said.

The inverted-U pattern of rats' and people's responses to an initial trauma may reflect the best way to deal with frightening experiences of different intensities. It is necessary to form a good memory of a relatively dangerous experience in order to avoid it in the future. But when the distress is immense, the memory impairment may act as a sort of shield, dampening fear-based learning until further trauma reveals it.

"Although a first traumatic experience elicits a blunted stress and memory expression, perhaps as protective measure, an unpredictable second traumatic experience, therefore multiple traumatic hits, critically contributes to generating behavioral responses typical of…PTSD," wrote Alberini and her colleagues, who published the findings December 1 in the Journal of Neuroscience.

"The good news is also that now we have a model [in rats] in which we can study this posttraumatic condition and hopefully test for treatments that may alleviate these problems," she said.

Featured
NeuroNews
01 December 2015

The idea of being able to direct your dreams is so appealing. You could fly or defeat the zombie hordes or perform ballet to an adoring crowd, if only you could realize that you were dreaming and seize control!

Unfortunately, there's no surefire way to spark lucid dreaming, which happens when the sleeper realizes that they aren't really awake. But, a new study suggests, pounding the snooze button might help. Scientists at Swansea University in Wales have reported that lucid dream frequency is associated with how often people use the snooze button on their alarm clocks, wake up during the night, and remember their dreams overall.

About half of us have a lucid dream at least once in our lives, and one in five people have them at least once a month. Though no single trick has a very high success rate, scientists have found that two methods can be somewhat effective at inducing lucid dreams when combined. To use the techniques, people must visualize becoming lucid before they go to bed. People must also set an alarm for an hour before their natural wake time and, when they are roused, focus on being lucid when they nod off again.

So, "Although the use of alarm clocks to elicit lucid dreams has thus been proposed, there have been no studies about the use of alarm clocks by lucid dreamers," wrote the researchers. They asked 84 people between the ages of 18 and 75 if they'd ever had a lucid dream, how often they could recall their dreams, how often they woke up during the night, and whether they used alarm clocks and snooze buttons.

It turned out that the participants who most often experienced lucid dreams were also more likely to hit the snooze button, woke up more often during the night, and could more often recall their dreams. On top of this, the researchers were able to use dream recall frequency and snooze button use to correctly guess whether someone had ever had a lucid dream for 78.6 percent of the participants.
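
The study's exact classification procedure is not described here, so as a purely illustrative sketch, a simple logistic regression on made-up data shows the general shape of such an analysis; the variable names and synthetic numbers below are ours, not the researchers'.

```python
# Illustrative only: the study reports that dream recall and snooze-button use
# together classified lucid dreamers correctly for 78.6 percent of participants,
# but its model is not reproduced here. This toy example uses logistic
# regression on synthetic data to show what such an analysis looks like.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 84                                    # same sample size as the study; data are synthetic
dream_recall = rng.integers(0, 7, n)      # e.g., mornings per week with a recalled dream
snooze_use = rng.integers(0, 2, n)        # 0 = never snoozes, 1 = snoozes

# Synthetic labels loosely tied to the predictors, purely for demonstration
p = 1 / (1 + np.exp(-(0.5 * dream_recall + 1.0 * snooze_use - 2.0)))
lucid_dreamer = rng.random(n) < p

X = np.column_stack([dream_recall, snooze_use])
model = LogisticRegression().fit(X, lucid_dreamer)
print("classification accuracy on the toy data:", model.score(X, lucid_dreamer))
```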

To make sure alarm clocks themselves weren't behind the snooze button effect, the researchers compared alarm clock users who also used the snooze button with those who didn't. The snooze-button users still had significantly more lucid dreams.

"Using the practice of alarm clock snoozing might increase the occurrence of lucid dreams," wrote the researchers, who published the findings November 30 in the journal Dreaming. "However, because this is a correlational study, it may be that lucid dreamers for some reason choose to snooze, rather than the causation being that snoozing causes lucid dreaming."

One reason for the link between snoozing and lucid dreams could be that interrupting someone's sleep makes them more likely to hit REM—the stage during which dreams most often occur—quickly when they drift off again.

Two drawbacks to the study are that the pool of people who enrolled wasn't very large, and they were asked to recall their dreams after the fact. In future, the researchers say, they'd like to pull in more people and experiment with snooze button use when people are actually sleeping.

Featured
NeuroNews
30 November 2015

You've probably heard by now that nature is pretty beneficial to your happiness. If, like half the world's population, you live in an urban space, you might even consider how soothing these elusive green spaces would be on your commute through packed subways or turnpikes clotted with cars.

It can feel frustrating for city dwellers to be cut off from nature when there's a mountain of evidence that nature is good for individual wellbeing, boosting mood, attention, memory and ability to cope with stress, not to mention health.

But these advantages are worth seeking out, and it turns out that they benefit you and your fellow metro riders and traffic jam inmates collectively. New research indicates that nature is good for society as well as for individuals. After surveying several thousand people, scientists in the United States and United Kingdom found that contact with nature was linked with more cohesive communities and lower crime rates.

Giving people this sense of social cohesion could be one way that nature boosts their happiness and willingness to contribute to their community. "Natural spaces foster a sense of relating to the outside world, which generalizes to a caring and closeness with other people," the researchers wrote.

Nature brings people out of their homes and offices and into communal spaces, encouraging them to interact. Natural spaces also make people feel more empathetic and possibly even more charitable—in one experiment, plants or pictures of nature were enough to make people more likely to give money to others in a game.

"Immersing oneself in natural environments is an inherently satisfying activity, one that promotes a sense of connection in people regardless of background and location," wrote the researchers, who published the new findings November 25 in the journal BioScience.

To put a more quantifiable measure on the connection between nature and communities, they surveyed about 2,000 adults between the ages of 22 and 65 in the United Kingdom. These people reported how urban or natural the views from their home were. They also rated the quality of the natural areas near them, along with how easy these oases were to reach and how often they visited.

The perceived quality, views and time spent in nature explained about 8 percent of the variation in how cohesive people felt their community was (this was judged by how likely people were to agree with statements like "I care about other people in my neighborhood" or "I feel that people within my neighborhood are on the 'same team'").

This perception of cohesiveness in turn correlated with other benefits, like individual happiness, higher workplace productivity and concern for the environment. By contrast, other attributes like age, income, gender and education combined explained only 3 percent of the variation.

Nature also accounted for 4 percent of the variation in crime rates. "The more nature in one's surroundings, the less crime was reported in the area," the researchers wrote. Nature was almost as powerful a predictor as socioeconomic deprivation, which accounts for 5 percent of the variance in crime.

Crime tends to be lower in communities where people support each other and are organized enough to coordinate neighborhood watches and other preventative measures. "The positive impact of local nature on neighbors' mutual support may discourage crime, even in areas lower in socioeconomic factors," the researchers wrote.

It's possible that other features of rural living could explain the links between nature and community that the researchers saw. In the countryside, people don't move in or leave as often, and neighbors often share similar ethnocultural backgrounds; this could all add to a sense of cohesion.

In future, the researchers also want to find out whether different aspects of nature have their own effects on a sense of community. Does it matter if the nature is a meadow, a forest or a lake? Is having a diverse community of plants and animals important? Do human-dominated landscapes like gardens make less of an impact than wild woodlands? And could these community benefits be even more important to old people, who are more likely to feel lonely and cut off, and to young people, who are exposed to more crime?

A sense of community is less tangible than wild-caught fish or timber. But, the new findings suggest, social cohesion and lower crime rates are likely among the many boons provided by nature that often go overlooked.

Featured
Odd Brains
30 November 2015

"I'm running through a forest, and a group of people is following me closely. I suddenly see flying faces coming towards me. I find it strange, confusing, impossible... it makes me question it all and I tell myself, as usual, that I must be dreaming…The faces have their mouths open and pass me by, brushing up against me. I try to touch them with my hand but I can't, then I try to touch the branches along the edge of the path and can't touch them either. My hand goes straight through. Everything goes through me."

This is how one of the bizarre and memorable dreams of a young man named Michel Chesi started. As he continued to run in the forest, he was suddenly stopped by a dark figure blocking his way. He felt like he had run into an invisible wall and fallen to the ground. He was terrified, thinking the figure was an embodiment of evil. He wanted to get out of the scene and, with a snap of the fingers, he found himself surrounded with total darkness. Then, a series of small illuminated letters appeared to him, hovering in the dark, spelling out the word "XANTIAZIZ." He tried to remember the letters to write them down in his diary, and that's when he seemed to wake up. Trying to grab a flashlight and a pen, he found there was nothing on his nightstand. The room was empty of any other furniture. Soon the dark figure started to permeate him.

He then woke up, this time for real, realizing that his previous "awakening" was part of a lucid dream. Chesi, now a 39-year-old resident of Marin-Epagnier in Switzerland, had such dreams from time to time when he was between 18 and 37 years old. The dreams were often accompanied by other bizarre experiences during which he had the sensation of leaving his body. Sometimes, Chesi would even experience "walking" in his neighborhood without physically leaving his bed.

Chesi's strange nightly experiences coincided with other symptoms such as vertigo attacks, dizziness, floating sensations while walking, and nausea, all of which suggested he might have a problem with his body's balance system.

"Over the years, I developed problems with my balance that were getting worse and worse and that's why I consulted a doctor," Chesi told Braindecoder.

The doctor, Dr. Dominique Vibert, a specialist in neurological disorders of the ear, examined Chesi and determined that his symptoms were due to damage to his right inner ear (a diagnosis called vestibulopathy), probably caused by a viral infection. The inner ear is part of the body's vestibular system, which plays a crucial role in maintaining our sense of balance.

During the course of treatment for his vestibular symptoms, Chesi told Vibert about the out-of-body experiences (OBEs) and lucid dreams he had been having. His OBEs occurred about three to four times a year, and his lucid dreams several times a month.

Curious to learn more about the mechanisms of these bizarre phenomena, Vibert and her colleagues decided to study Chesi's case, which they described in a recent report published in Multisensory Research. Based on their observations and the results of several experiments, the researchers attributed his experiences to a faulty multisensory integration, linked to his inner ear problem.

Combining senses

People normally rely on a combination of visual, vestibular, somatosensory and proprioceptive information in order to accurately position their bodies in space, and to position themselves in their bodies, explained study co-author Mariia Kaliuzhna of Ecole Polytechnique Fédérale de Lausanne in Switzerland.

To see whether Chesi could properly integrate signals coming from his vestibular system and visual system, the researchers did a series of experiments. In one, they put him in a centrifuge cockpit-style chair, which rotated his whole body. A computer monitor was positioned in front of his face, showing a 3D pattern of dots that simulated his self-rotation.

Then Chesi was rotated in the same direction twice and his task was to judge whether the second rotation was bigger or smaller than the first one. In one condition, the screen in front of him was blank and he could only perform the task based on his vestibular information. In another condition, the chair remained stationary, and only the movement of the dots induced the feeling of rotation, meaning that he could only perform the task based on visual information. Then, in the last condition, the chair and the dots moved together and he had to base his judgment on both of these stimuli.

(image) Schematic view of the experimental setup. M. Kaliuzhna et al. / Multisensory Research 28 (2015) 613–635

For healthy people, who properly combine the vestibular signals from body rotation and the visual signals from the moving dots, it's easier to give correct answers when the two types of information are present, like in the last condition, than when only one type, vestibular or visual, is available.

But this wasn't the case for Chesi. He actually performed worse on the test when the dots moved too.

"We show for the first time that somebody having OBE does not integrate visual and vestibular cues, that is, he does not combine information from these two sources in an appropriate way," Kaliuzhna told Braindecoder.

The researchers think that Chesi might also process somatosensory information in an altered way. In a second experiment, the team conducted an augmented version of the rubber hand illusion, called the Full Body illusion. They had Chesi lie down in a darkened room on a robotic device designed to gently stroke his back, while wearing a head-mounted display showing an image of a male body seen from the back. The touch of the robotic device was represented in the display as a red dot moving along the back of the body. The researchers then tested what would happen in two conditions: in the first, synchronous condition, the robotic device and the red dot moved together in the same way. In the second, asynchronous condition, a delay was introduced between the dot and the robotic device, so that the dot appeared in a different location from the one where the touch was being felt. It has been shown that, in the synchronous condition, participants feel they are closer to the virtual body than they actually are and identify more strongly with it, Kaliuzhna said.

"The interesting result this manipulation produced in the patient is the strong feeling to float in the air, as well as a tingling sensation in his legs and lower back, which was reminiscent to him of the sensation he has during this OBE," she said. "Our control subjects did not report such sensations."

Nocturnal voyages

Chesi believes his experiences actually started a long time ago, when he was very young, but in the form of night terrors. "I started to overcome these night terrors towards the end of my adolescence," he said. Instead, he tried to enjoy the bizarre world of his dreams.

During a typical OBE episode, Chesi would first wake up at night and look at the clock, close his eyes and fall asleep again. But then he would experience the sensation of falling in complete darkness. The sensation would stop and he would experience the feeling of floating in the air. At that point he would hear a crackling sound inside his head, open his eyes and experience the sensation of leaving his body and the bed. He would then be turned around 180 degrees to see himself lying on the bed. During most of his out-of-body experiences, he felt he was positioned under the ceiling of his bedroom or next to his bed. But from time to time he also experienced so-called distant disembodiment, and felt that he was walking around the neighborhood.

These episodes typically lasted about two to three minutes. When he woke up, he felt a tingling sensation across his body for a couple of minutes, and then felt invigorated.

These out-of-body experiences started out in lucid dreams, in which Chesi suddenly became conscious about his dreaming and felt that he could control the content of the dreams. He often tried to interact with the environment while dreaming — he tried to touch things, pass through walls or talk to people.

"In my experiences, I often tried to explore the dream environment in terms of sensations, especially touch, and the transition between lucid dreaming and disembodiment," Chesi said. "I believe that the transition from one to the other is caused by the brain that puts itself in an "alert" mode: the lucid dream is a reality that visually resembles the reality that the dreaming person knows in his daily life and his brain cannot tell the difference."

Brain signals, interrupted

The findings of the new study are in line with previous research that has linked a faulty integration of vestibular signals and sensory information with various illusory body perceptions such as feelings of depersonalization, illusory self-motion, room-tilt illusions and disembodiment. In one such study, scientists were actually able to disrupt this integration by electrically stimulating a brain region and induce OBE in a person.

In Chesi's case, it seems that a problem with the balance organs of the inner ear may have interfered with relaying body-related cues to the brain. This could occur at the level of the brainstem, and lead to a faulty integration of signals further up in the brain, such as in the temporo-parietal junction, Kaliuzhna said.

"Interestingly, one of the first relays that processes vestibular information is at the level of the brainstem; regions in the brainstem also regulate the sleep cycle and dream activity," she said.

As Chesi underwent treatment for his vestibular symptoms of vertigo and dizziness, the frequency of his lucid dreams and out-of-body experiences decreased. Although the dreams had started out as frightening experiences, over the years he had come to enjoy them. "In that world, everything is possible, the only limit is our imagination. It is a world where emotions are at least ten times more intense than in our reality and our engine to move around and to create is our thoughts." These days, he doesn't have them anymore, he said. "I am not relieved to have my lucid dreams and OBE disappear... actually, I miss it a lot today."



The interview with Michel Chesi was conducted in French and translated into English.

Featured
NeuroNews
24 November 2015

Scientists are using the parlor game Rock-Paper-Scissors (and its extended version, Rock-Paper-Scissors-Lizard-Spock) to develop better prosthetics that can read and act on the intentions of people who can't move.

In a new study, researchers asked a paralyzed man with electrodes implanted in his brain to imagine making different hand shapes from the game. The scientists were able to read patterns of brain activity and know what hand sign he was imagining.

It's the first time scientists have been able to decode hand shapes from a part of the brain called the posterior parietal cortex. This is the area of the brain where intentions about moving a limb take place, before being sent to the motor cortex, which is in charge of actually executing the movement. Reading movement intentions could allow paralyzed people to direct robot hands with their minds and form more dexterous, complex hand shapes than those allowed by taking the signals from the brain areas traditionally tapped for neuroprosthetics.

Where to read from?

Erik G. Sorto has been paralyzed from the neck down for more than a decade. A few months ago, the same team of researchers who produced the new study announced that implanting electrodes in Sorto's brain allowed him to control a robot limb well enough to pick up and sip from a cup with a straw in it.

(video)

Traditionally, scientists have implanted electrodes in the motor cortex to read signals of voluntary muscle movements. But in fact, when reaching out to grab a cup, we don't think about each muscle movement that makes up that action. We just "intend" to reach out and grab, and the brain works out the rest. Therefore, neuroprosthetics that use direct movement signals from the motor cortex end up making delayed, jerky movements because people are forced to focus on each individual muscle move.

So for Sorto, scientists turned to another part of the brain, the posterior parietal cortex. "It takes sensory information and processes it in a way that it can be used to plan actions that are then sent to motor cortex where they are executed," said coauthor Richard Andersen, a neuroscientist at the California Institute of Technology in Pasadena.

Because this area controls the intent to move, connecting electrodes to this region instead allows Sorto to make smoother movements. "This intent can be interpreted by smart computers that can handle the fine details of the robotic movement rather than relying on the brain to provide all this detailed information," Andersen said.

In nonhuman primates, scientists had previously decoded neuron activity from the posterior parietal cortex that corresponds with the intention to grasp an object. Andersen and his colleagues wanted to know whether more complicated hand shapes could also be read from the posterior parietal cortex in humans. This would allow for neuroprosthetics with better motor skills.

Rock, paper, scissors, neurons

For the new study, Andersen and his colleagues showed Sorto pictures of a rock, some paper, scissors, a lizard, or a photo of Leonard Nimoy in his role as Spock. Sorto then had to imagine making a corresponding hand shape.

For each hand shape, different groups of neurons became active. This meant that, based on which nerve cells were firing, the scientists could figure out what shape Sorto was imagining.
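
As a rough illustration of this decoding logic (synthetic data and simplified methods, not the study's actual recordings or pipeline), a classifier trained on patterns of firing rates can recover the imagined shape on held-out trials.

```python
# Toy illustration of population decoding (synthetic data, not the study's
# recordings): each imagined hand shape gets its own mean pattern of firing
# rates across neurons, and a classifier recovers the shape from single trials.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
shapes = ["rock", "paper", "scissors", "lizard", "spock"]
n_neurons, trials_per_shape = 50, 20

X, y = [], []
for label, shape in enumerate(shapes):
    mean_pattern = rng.normal(0.0, 1.0, n_neurons)          # shape-specific firing pattern
    trials = mean_pattern + rng.normal(0.0, 0.8, (trials_per_shape, n_neurons))  # trial noise
    X.append(trials)
    y.extend([label] * trials_per_shape)

X, y = np.vstack(X), np.array(y)
accuracy = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"cross-validated decoding accuracy on toy data: {accuracy:.2f}")
```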

The hand shapes that Sorto imagined forming for each symbol were not directly related to the hand shape he would have had to make to actually grasp its corresponding object. This means that the neurons whose activity Andersen and his team read can be used for tasks like communication. "The area can go beyond just the simplistic mapping of an object to a grasp," Andersen said.

This means that neuroprosthetics reading from the posterior parietal cortex could make hand shapes that are complicated and more abstract than those used to grab objects.

The team also saw two distinct populations of neurons become active for each of the hand shapes. "Some cells are coding the intent of the subject and others are coding what the subject is seeing," Andersen said.

Some of these neurons became active when Sorto saw a picture of a rock or lizard but did not fire when he just heard those words spoken aloud. The other population of neurons still spiked in activity if Sorto heard the word "rock" and had to imagine a fist shape. These cells "are more generally coding the imagined movement triggered by vision or hearing," Andersen said.

"This finding of two populations is important for separating attentional-visual components of neuronal response from motor-related components," he and his colleagues concluded in the paper, which they published November 18 in the Journal of Neuroscience. This "can be used to drive neuroprostheses in an intuitive manner."

Featured
NeuroNews
23 November 2015

The skills that enable one person to knuckle down and focus while another's mind wanders don't come down to one or even a handful of spots in the brain, a new study suggests. Scientists have identified brain networks that spread across many regions, with synchronous activity patterns that are linked to a person's ability to concentrate.

In the new research, published today in Nature Neuroscience, certain patterns of activity between different brain areas predicted high or low attention skills in adults, and also correctly pinpointed how severe kids' and teens' symptoms of ADHD were.

In past studies of attention, scientists have found regions in the frontal and parietal lobes that seem to be involved. But until now researchers haven't identified specific patterns of brain activity that can summarize overall attentional ability in different groups of people.

"We actually found that almost every part of the brain was involved, and connections that span the cortex and subcortical regions," said coauthor Monica Rosenberg, a cognitive neuroscientist at Yale University. "It was a lot more complicated and widespread than we had originally thought."

Rosenberg and her colleagues scanned the brains of 25 adults both when they rested quietly and when they tried to sustain focus for 30 minutes. Their chore was designed to be tedious, and involved watching image after image of cities and mountains and clicking a button each time they saw a city.

As the participants relaxed or struggled to pay attention, the researchers examined how activity fluctuated between different regions of their brains. The fMRI scans showed many brain regions that changed activity at the same time, suggesting a high "functional connectivity" among these regions. Of these, 757 connections were stronger in people with better attention, and 630 different connections were stronger in people with worse attention.
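
The paper's exact modeling is not reproduced here, but the broad logic of predicting a behavior from selected brain connections can be sketched as follows; the data are synthetic, and the selection threshold and simple linear fit are our own illustrative choices rather than the authors' procedure.

```python
# Rough sketch of connectivity-based prediction in the spirit described above
# (synthetic data; unlike the real study, no cross-validation is done here).
import numpy as np

rng = np.random.default_rng(2)
n_people, n_connections = 25, 1000
connectivity = rng.normal(size=(n_people, n_connections))      # one row per person
# Make the first 30 "connections" genuinely related to attention in this toy data
attention = connectivity[:, :30].sum(axis=1) + rng.normal(0, 1.0, n_people)

# 1. Correlate each connection's strength with attention across people
r = np.array([np.corrcoef(connectivity[:, j], attention)[0, 1] for j in range(n_connections)])

# 2. Keep the connections most positively or negatively related to attention
pos, neg = r > 0.4, r < -0.4

# 3. Summarize each person by the net strength of the selected connections
network_strength = connectivity[:, pos].sum(axis=1) - connectivity[:, neg].sum(axis=1)

# 4. A linear fit maps network strength onto predicted attention scores
slope, intercept = np.polyfit(network_strength, attention, 1)
predicted = slope * network_strength + intercept
print("in-sample correlation between predicted and observed attention:",
      np.corrcoef(predicted, attention)[0, 1])
```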

Though many parts of the brain seem to play a role in focus, a few trends stood out to the researchers. "Connections between the occipital lobe—the visual cortex at the back of the brain—and the motor cortex and cerebellum predicted better attention," Rosenberg said.

On the other hand, connections between the temporal and parietal lobes, and within the temporal lobe and cerebellum, predicted worse sustained attention. "So if you had really strong connections between your temporal lobes and between the hemispheres of your cerebellum then it was more likely you would have a hard time paying attention to something," Rosenberg said.

She and her colleagues wanted to find out if these same brain regions could also predict attentional skills in a completely different group of people. The researchers examined a dataset of 113 children and adolescents from China, some of whom had ADHD. "The kids we predicted would do really poorly on the [city and mountain] task had many or very severe symptoms of ADHD," Rosenberg said.

The team observed the same link between specific patterns of brain activity and the kids' performance on the attention test. "The fact that network models defined in healthy adults from the Yale–New Haven community predicted ADHD symptoms in children and adolescents from Beijing suggests meaningful overlap between the neural mechanisms that are important for sustained attention and the neural dysfunction that leads to an ADHD diagnosis," Rosenberg and her colleagues wrote.

It's not clear why connections between all these different areas indicate that a person is more or less adept at keeping their focus on track. It may be that everybody has a unique profile of brain connections involved in attention.

"From that connectivity profile, we can extract information about how well you pay attention to things," said Rosenberg. Perhaps, in future, "We can predict who's going to go on to develop an attention deficit…and then we could use that person's connectivity profile to develop some kind of training program or intervention to help them."