braindecoder - NeuroTech

Featured
NeuroNews
24 November 2015

Scientists are using the parlor game Rock-Paper-Scissors (and its extended version, Rock-Paper-Scissors-Lizard-Spock) to develop better prosthetics that can read and act on the intentions of people who can't move.

In a new study, researchers asked a paralyzed man with electrodes implanted in his brain to imagine making different hand shapes from the game. By reading the patterns of his brain activity, the scientists could tell which hand sign he was imagining.

It's the first time scientists have been able to decode hand shapes from a part of the brain called the posterior parietal cortex. This is the area of the brain where intentions about moving a limb form, before being sent to the motor cortex, which is in charge of actually executing the movement. Reading movement intentions could allow paralyzed people to direct robot hands with their minds and form more dexterous, complex hand shapes than are possible when signals are taken from the brain areas traditionally tapped for neuroprosthetics.

Where to read from?

Erik G. Sorto has been paralyzed from the neck down for more than a decade. A few months ago, the same team of researchers who produced the new study announced that implanting electrodes in Sorto's brain allowed him to control a robot limb well enough to pick up and sip from a cup with a straw in it.

(video)

Traditionally, scientists have implanted electrodes in the motor cortex to read signals of voluntary muscle movements. But in fact, when reaching out to grab a cup, we don't think about each muscle movement that makes up that action. We just "intend" to reach out and grab, and the brain works out the rest. As a result, neuroprosthetics that use direct movement signals from the motor cortex end up making delayed, jerky movements, because people are forced to focus on each individual muscle movement.

So for Sorto, scientists turned to another part of the brain, the posterior parietal cortex. "It takes sensory information and processes it in a way that it can be used to plan actions that are then sent to motor cortex where they are executed," said coauthor Richard Andersen, a neuroscientist at the California Institute of Technology in Pasadena.

Because this area controls the intent to move, connecting electrodes to this region instead allows Sorto to make smoother movements. "This intent can be interpreted by smart computers that can handle the fine details of the robotic movement rather than relying on the brain to provide all this detailed information," Andersen said.

In nonhuman primates, scientists had previously decoded neuron activity from the posterior parietal cortex that corresponds with the intention to grasp an object. Andersen and his colleagues wanted to know whether more complicated hand shapes could also be read from the posterior parietal cortex in humans. This would allow for neuroprosthetics with better motor skills.

Rock, paper, scissors, neurons

For the new study, Andersen and his colleagues showed Sorto pictures of a rock, some paper, scissors, a lizard, or a photo of Leonard Nimoy in his role as Spock. Sorto then had to imagine making a corresponding hand shape.

For each hand shape, different groups of neurons became active. This meant that, based on which nerve cells were firing, the scientists could figure out what shape Sorto was imagining.
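To get a feel for the decoding step, here is a minimal sketch of the general idea: classifying an imagined hand shape from a vector of neuron firing rates with an off-the-shelf classifier. The data are simulated and the setup is hypothetical, not the study's actual analysis pipeline.

```python
# Hypothetical sketch of the decoding idea: classify an imagined hand shape
# from a vector of neuron firing rates. This is NOT the study's pipeline;
# the data here are simulated for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
shapes = ["rock", "paper", "scissors", "lizard", "spock"]
n_neurons = 50

# Simulate training trials: each shape evokes its own mean firing pattern.
templates = rng.poisson(10, size=(len(shapes), n_neurons)).astype(float)
X = np.vstack([rng.normal(t, 2.0, size=(40, n_neurons)) for t in templates])
y = np.repeat(shapes, 40)

clf = LinearDiscriminantAnalysis().fit(X, y)

# Decode a new trial: a noisy instance of the "scissors" pattern.
trial = rng.normal(templates[2], 2.0)
print(clf.predict(trial.reshape(1, -1)))  # most likely "scissors"
```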

The hand shapes that Sorto imagined forming for each symbol were not directly related to the hand shape he would have had to make to actually grasp its corresponding object. This means that the neurons whose activity Andersen and his team read can be used for tasks like communication. "The area can go beyond just the simplistic mapping of an object to a grasp," Andersen said.

This means that neuroprosthetics reading from the posterior parietal cortex could make hand shapes that are complicated and more abstract than those used to grab objects.

The team also saw two distinct populations of neurons become active for each of the hand shapes. "Some cells are coding the intent of the subject and others are coding what the subject is seeing," Andersen said.

Some of these neurons became active when Sorto saw a picture of a rock or lizard but did not fire when he just heard those words spoken aloud. The other population of neurons still spiked in activity if Sorto heard the word "rock" and had to imagine a fist shape. These cells "are more generally coding the imagined movement triggered by vision or hearing," Andersen said.

"This finding of two populations is important for separating attentional-visual components of neuronal response from motor-related components," he and his colleagues concluded in the paper, which they published November 18 in the Journal of Neuroscience. This "can be used to drive neuroprostheses in an intuitive manner."

Featured
NeuroNews
18 November 2015

Scientists have come up with an idea for a new type of brain scan to home in on the region causing seizures in people with epilepsy.

These damaged areas, or lesions, are sometimes missed by traditional scans. The new method aims to locate these hard-to-find areas by using high resolution MRI to track amounts of a certain neurotransmitter in different parts of the brain, the researchers report in a pilot study.

About 65 million people worldwide have epilepsy. For one third of these people, medication does not control their seizures, and they can only find relief by undergoing surgery to remove the area with the lesion. But doctors can't always tell where in the brain the problem area is. Brain scans such as traditional MRI and EEG sometimes fail to pinpoint any seizure-causing region, even when combined.

"New neuroimaging techniques capable of detecting subtle lesions could potentially improve patient care and increase the chance of seizure freedom after surgery," wrote the researchers, a team at the University of Pennsylvania who published the findings in the October 14 issue of Science Translational Medicine.

The new technique, called GluCEST, picks up on patterns of glutamate, a neurotransmitter that relays messages through the brain. Normally, it quickly dissipates once its job is done. But previous research has found that glutamate builds up in people with epilepsy, causing overstimulation that leads to seizures.

The researchers tested it on four people with temporal lobe epilepsy who weren't responding to drug treatment. In each of these people, the scans found more glutamate in the hippocampus on one side of the brain than on the other. EEG tests confirmed that the hippocampus GluCEST singled out was in the same hemisphere as the epileptic region.
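The left-versus-right comparison boils down to a simple asymmetry calculation. Here is a rough illustration of that idea, with made-up numbers rather than the study's data:

```python
# Minimal sketch of the left-right comparison (illustrative numbers only):
# compare mean GluCEST signal in each hippocampus and compute an asymmetry
# index whose sign points to the presumed epileptic hemisphere.
import numpy as np

left_hippocampus = np.array([7.2, 7.5, 7.1, 7.8])   # hypothetical % contrast
right_hippocampus = np.array([5.9, 6.1, 6.0, 5.8])

l, r = left_hippocampus.mean(), right_hippocampus.mean()
asymmetry = (l - r) / ((l + r) / 2)
print(f"asymmetry index: {asymmetry:+.2f}")  # positive -> left side suspect
```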

The findings suggest the new method could develop into an effective technique for locating hard-to-spot lesions. However, they will have to be confirmed in more people.

Featured
NeuroNews
15 November 2015

In recent years scientists have gained the ability to identify where memories are registered in the brains of lab animals. They can manipulate those memories, erase them or even implant new ones. It may now be not too far-fetched an idea to ponder: Could we use such methods in humans, for example, to treat those suffering from PTSD? And if so, should we?

This is one of the questions explored tonight on "Breakthrough: Decoding the Brain," which airs at 9 p.m. ET/8 p.m. CT on National Geographic Channel. Director Brett Ratner and narrator Adrien Brody meet with scientists who are using innovative technologies to develop new treatments for neurological disorders. New tools to explore the brain have also helped scientists begin to unravel profound mysteries such as how memories are made and how the brain gives rise to consciousness.

"Decoding the Brain" is part of "Breakthrough," a series developed by GE and National Geographic Channel. Each episode explores scientific discoveries in brain sciences, longevity, water, energy, pandemics and cyborg technology.

(video)

Making and breaking memories

Changing or erasing memories is an old idea. But only in recent years have scientists become able to actually do it, thanks to optogenetics, a relatively new method that allows researchers to use light to switch neurons on and off.

Steve Ramirez is a memory researcher at the Massachusetts Institute of Technology who uses optogenetics to study the formation of memories as well as the possibilities for manipulating them. He and his colleagues have been able to erase memories or implant false memories in mice. He hopes this work could help develop new treatments for people suffering from PTSD.

So far, optogenetics has been used only in animals. But if it ever gets translated for use in humans, would it be ethical to use? The toll that a disorder like PTSD takes on a person's quality of life makes memory manipulation an appealing treatment as a last resort.

(video)

But being able to directly manipulate memories may have implications beyond just erasing the traumatic ones. What if, during the worst moments of your life, you could push a button and replay memories of happier times? Researchers have shown this could potentially lessen the damage that stressful situations inflict on the brain. In a study published earlier this year in Nature, Ramirez and his colleagues were able to keep track of happy memories as they were being registered in the brains of male mice during flirtation with female mice. Later, they rekindled the memories by activating the neurons holding them. The team showed that evoking positive memories made the animals more resilient when they were put under stress and reduced their depression-like behaviors.

A switch for consciousness?

Neurologist Mohamad Koubeissi is the Director of the Epilepsy Center at George Washington University in Washington, DC. His research aims to reduce the occurrence of seizures in people with epilepsy who don't respond to medications. To do this, Koubeissi implants electrodes deep in the brains of patients to control the activity of neurons whose abnormal firing leads to seizures.

Aside from the therapeutic potentials, this research also offers a unique chance for studying long-standing research questions by giving scientists direct access to the brain. And sometimes, it results in surprising discoveries. For example, when evaluating a patient, Koubeissi discovered that stimulating a small area of the brain called the claustrum can result in global disruption of consciousness. The patient had an electrode implanted in the claustrum, and was asked to read something from a book. When Koubeissi turned on electrical stimulation, the patient suddenly stopped reading and stared blankly. She later had no memory of what happened during the stimulation. When the stimulation was turned off, she resumed exactly where she had left off. This finding suggests that the claustrum is connected with other brain regions that together give rise to consciousness.

You can watch Koubeissi and his patient in this video of the first 5 minutes of tonight's episode:

(video)

Featured
NeuroTech
03 November 2015

Controlling live insects with a smartphone, or moving the arm of one person using the nerve signals of another, are the kinds of weird, madcap projects that neuroscientist Greg Gage and his colleagues work on at their Michigan-based startup Backyard Brains. And thanks to their DIY kits, you could try them out too.


Gage hopes that affordable DIY kits will bring neuroscience to the public and help spread basic understanding of the brain. "If someone wants to learn about the brain, they typically have no choice but to go to graduate school," Gage said. "This isn't the case in other areas of science. You can study the planets or stars with a cheap telescope, but most importantly, you don't have to get a PhD in astrophysics."

Gage himself started out as an electrical engineer working on circuit boards for a touchscreen kiosk company. A chance encounter kindled Gage's love for neuroscience and changed his life. It came during a visit to a lab at Leiden University in the Netherlands, "when I heard a neuron for the first time," Gage said.

Neurons signal by generating electrical impulses known as action potentials, or "spikes," which are turned into sound on the recording devices used to monitor neuronal activity. Gage heard a live recording of these spikes from a rat's motor cortex, the part of the brain involved in controlling movement.
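The conversion is conceptually simple: each spike becomes a brief click in an audio stream. Here is a rough sketch of that idea, with invented spike times; it is not any particular rig's implementation.

```python
# Rough sketch of "hearing" spikes: place a short click in an audio buffer
# at each spike time. Spike times are hypothetical.
import numpy as np
from scipy.io import wavfile

rate = 44100                       # audio samples per second
duration = 2.0                     # seconds of audio
spike_times = [0.10, 0.12, 0.50, 0.52, 0.54, 1.30]  # spike times (s)

audio = np.zeros(int(rate * duration), dtype=np.float32)
click = np.hanning(64).astype(np.float32)            # ~1.5 ms click envelope
for t in spike_times:
    i = int(t * rate)
    audio[i:i + click.size] += click

wavfile.write("spikes.wav", rate, audio)  # play to hear a pop at each spike
```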

"I heard a distinct popping sound every time the rat would move its right paw," Gage recalled. "I was mesmerized. I knew at that moment I wanted to be a neuroscientist."

"Later I would find out that I was not alone," Gage added. "Many neuroscientists claim hearing spikes was a deciding factor."

Gage quit his job and went back to grad school at the University of Michigan. "Everyone told me I was crazy to leave my well-paid and comfortable job," Gage said.

Given his delayed start at becoming a scientist, Gage did not want anyone else missing their calling, so while in grad school, he visited schools with his labmate Tim Marzullo to help explain scientific careers to students. They often wanted to show real spikes to kids, but they could not risk bringing their expensive lab equipment to school visits. Instead, they came up with a way to use off-the-shelf electronics to bring simple neuroscience experiments outside the lab.

"We set about building what we called the $100 spike," Gage said. "Could we build neuroscience equipment rugged enough that students could use it, and cheap enough that schools could afford it?"

Their flagship product, the SpikerBox, is a DIY kit to detect spikes in crickets, earthworms and other invertebrates found in a pet store. The device debuted at the Society for Neuroscience conference a few years ago and kickstarted Gage and Marzullo's careers at Backyard Brains.

Since then, Gage and his colleagues have developed other kits for experiments involving the nervous system. One is the RoboRoach, which they tout as the world's first commercially available cyborg. By sticking a pack of electronics onto a cockroach's back and attaching silver electrodes to its antennae, you can send commands to the insect over Bluetooth, wirelessly controlling the roach's movements by electrically stimulating its antennae.

"If you see a cockroach walking and you touch its antenna, it will turn in the other direction. That's called a wall-following behavior. With the RoboRoach kit, we are talking to the same neurons using small pulses of electricity, making the cockroach think it is touching something."

Taking the idea of remote-controlling a roach one step further, Gage and his colleagues then developed the Human-Human Interface, a simple kit that helps you control someone else's arm by moving your own arm. One side of the device picks up electrical activity of muscles as you flex your arm and the other side stimulates a nerve in the arm of another person and makes him flex his arm too.
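At its core, that logic is a threshold detector: rectify and smooth the sender's muscle signal, and switch stimulation on whenever the envelope crosses a threshold. Here is a toy sketch with simulated EMG; the numbers are illustrative, not the kit's actual firmware.

```python
# Toy sketch of the interface logic: rectify and smooth a muscle (EMG)
# signal, and "stimulate" whenever the envelope crosses a threshold.
# Signal values and threshold are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
emg = rng.normal(0, 0.1, 1000)            # baseline muscle noise
emg[400:600] += rng.normal(0, 1.0, 200)   # a burst: the sender flexes

envelope = np.convolve(np.abs(emg), np.ones(50) / 50, mode="same")
stimulate = envelope > 0.3                # on/off command to the stimulator

print("stimulation on during samples",
      stimulate.nonzero()[0].min(), "to", stimulate.nonzero()[0].max())
```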

Gage hopes such devices in the future can provide amateurs with all they need to conduct experiments at their homes and come up with their own discoveries. Such breakthroughs are already happening in mathematics and astronomy, "but not in neuroscience," Gage said. "Our goal is to be able to change that. We want real discoveries to happen in the home, using our gear."

Featured
NeuroNews
23 September 2015

Human brains can now be linked well enough for two people to play guessing games without speaking to each other, scientists report. The researchers hooked up several pairs of people to machines that connected their brains, allowing one to deduce what was on the other's mind.

One of the machines could decode information about brain activity to read a person's answers to different yes-or-no queries about the object she'd chosen. The other machine could take this incoming information and use it to stimulate a second player's visual cortex, letting him know how his partner had answered.

"We wanted something where people could interact, exchange questions and answers, and do this only with their minds," said Andrea Stocco, a neuroscientist at the University of Washington in Seattle who published the findings today in the journal PLOS ONE. "We have an interface that, though simple, can be used to collaborate in real time." This brain-to-brain interface technology could one day allow people to empathize or see each other's perspectives more easily by sending others concepts too difficult to explain in words, he said.

Stocco and his colleagues rounded up 10 people to play a modified version of "20 questions" in pairs. One member of the pair would choose an object, such as the city of San Francisco. The other, sitting in a lab a mile away, would be presented with a list of possible objects and asked to narrow it down by selecting questions such as "Is the city in North America?"

The question would appear on the computer screen of the respondent, who then "answered" by glancing at "yes" or "no" buttons, each paired with an LED light flashing either 13 or 12 times per second. The LED flashes were so similar that an observer could never distinguish between them, Stocco said. "But they produce signals in the visual part of the brain that are sufficiently different that the computer has no trouble understanding them."

An EEG cap on the participant's brain gathered these electrical signals so that the computer could determine whether the participant was gazing at the "yes" or "no" button.
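This frequency-tagging trick works because the visual cortex "follows" the flicker the eyes are fixed on, so the answer shows up as a spectral peak at 12 or 13 Hz. Here is a minimal sketch of how a computer could make that call, using synthetic EEG rather than the study's data:

```python
# Minimal sketch of the frequency-tagging idea: decide "yes" vs "no" by
# comparing EEG power at the two flicker frequencies. Synthetic data only.
import numpy as np

fs = 250                                   # EEG sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)                # 4-second gaze window
yes_hz, no_hz = 13, 12

# Simulate a participant gazing at the "yes" (13 Hz) light, plus noise.
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * yes_hz * t) + rng.normal(0, 1.0, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

def power_at(f):
    return spectrum[np.argmin(np.abs(freqs - f))]

answer = "yes" if power_at(yes_hz) > power_at(no_hz) else "no"
print(answer)
```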

This information was recorded and sent to the waiting inquirer, who was hooked up to a transcranial magnetic stimulation (TMS) machine. Only when the answer was "yes" did the machine stimulate the inquirer's visual cortex, causing them to see a blob or line appear in their field of vision. The inquirer could then ask more questions until they had figured out which object their partner had chosen.

Though a single wrong answer could lead them astray, the five pairs of players were able to guess the correct object in 72 percent of the games (players connected to a sham brain-to-brain interface only guessed correctly 18 percent of the time). Taking into account each question and response, the players had an overall accuracy of about 94 percent.

Ideally, brain-to-brain interfaces would one day allow one person to think about an object, say a hammer, and another to know this, along with the hammer's shape and what the first person wanted to use it for. "That would be the ideal type of complexity of information we want to achieve," Stocco said. "We don't know whether that future is possible."

But if it is, Stocco believes such brain-to-brain links could help people communicate concepts they have trouble putting into words: feelings, emotions, images, and very personal experiences. Someone who is terrified of airplanes could transmit feelings to someone who just doesn't get it. "If I give you my idea of an airplane you can understand why I'm terrified, and all these things that are beyond real language," Stocco said.

Alternatively, someone who has a condition such as anxiety could benefit from a calmer person's perspective on obstacles that feel all consuming. "We could make his brain or her brain understand that there is a different way to see the problem," Stocco said.

Featured
NeuroNews
15 September 2015

Scientists have developed a new way to control individual neurons of a nematode using ultrasonic waves.

The new technique, called Sonogenetics, uses the same type of waves used in medical sonograms, and may be a less invasive alternative to optogenetics, which uses light to activate or silence neurons.

Methods that enable scientists to manipulate the activity of specific neurons in a living animal can offer great insight into the workings of the nervous system. In optogenetics, neurons are genetically engineered to express light-sensitive proteins and react to light. However, optogenetics is invasive and requires surgery to implant a fiber optic cable to deliver the light to the cells. Ultrasound waves, on the other hand, can pass even through bone and reach cells deep inside the tissue.

"In contrast to light, low-frequency ultrasound can travel through the body without any scattering," said Sreekanth Chalasani , an assistant professor of molecular neurobiology at at the Salk Institute for Biological Studies in La Jolla, California. "This could be a big advantage when you want to stimulate a region deep in the brain without affecting other regions," said Stuart Ibsen, coauthor of the study, published today in the journal Nature Communications.

But it has been difficult to use ultrasound for targeting single neurons—the minimum focal zone of the ultrasound is larger than an individual cell and the waves stimulate a population of neurons.

To make ultrasound more suitable for studying individual cells, Chalasani and his colleagues used gas-filled microbubbles to amplify the waves. Next, they genetically modified the cells to render them sensitive to ultrasound. Specifically, they found a membrane ion channel, TRP-4, that can open up and activate the cell in response to the mechanical force of the ultrasound waves. The researchers added the TRP-4 channel to the neurons of C. elegans, making them react to ultrasound. As a result, the crawling nematode turned and switched direction with just a pulse of ultrasound.

The method has only been used in C. elegans neurons. Next, the team plans to test the approach in mice.

"The real prize will be to see whether this could work in a mammalian brain," Chalasani said.

Featured
NeuroNews
27 July 2015

We are increasingly trading our pens for keyboards; in the digital world, meanwhile, neural networks are learning handwriting.

As reported on FastCoDesign, in a new project Alex Graves at the University of Toronto has taught a neural network to generate text that looks like it was written by hand.

Neural networks loosely mimic the way the biological brain works: they learn what things should look like by passing information through a large number of interconnected processing elements (like neurons in the brain) to generate an output. When the output is repeatedly fed back into the system, the network can ultimately generate something new in its own style. For example, Google DeepDream's trippy images are made by a neural network trained on a large database of images. Similarly, Graves's network has been trained on a handwriting database containing many examples of handwritten letters.
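The feedback loop itself is easy to picture in code. Below is a toy recurrent network, untrained and with random weights, whose output at each step is fed back in as the next input. It illustrates the loop, not Graves's actual trained handwriting model, and the "pen offset" framing is an assumption for illustration.

```python
# Toy sketch of the feedback loop described above: a tiny recurrent net
# whose output at each step is fed back in as the next input. Weights are
# random (untrained), so this shows the loop, not Graves's trained model.
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hidden = 2, 16                     # input: a 2-D pen offset (dx, dy)
W_in = rng.normal(0, 0.5, (n_hidden, n_in))
W_rec = rng.normal(0, 0.5, (n_hidden, n_hidden))
W_out = rng.normal(0, 0.5, (n_in, n_hidden))

h = np.zeros(n_hidden)
x = np.zeros(n_in)                         # start the pen at rest
stroke = []
for _ in range(100):                       # generate 100 pen offsets
    h = np.tanh(W_in @ x + W_rec @ h)      # update hidden state
    x = W_out @ h                          # output becomes the next input
    stroke.append(x)

print(np.cumsum(stroke, axis=0)[:5])       # first few pen positions
```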

You can test out Graves's Neural Network Handwriting Generator for yourself. Just type in some text, and let the neural network develop its own handwriting style.

(image)

Featured
NeuroTech
24 July 2015

What do you think of when you see the word Titanic? Maybe you recall the ill-fated cruise liner, or the 1997 movie starring Leonardo DiCaprio. If you're a linguist, you might think of the Greek roots of the word, harkening back to the titans of lore. No matter the associations the word inspires, there's a good chance that your brain processes it differently than anyone else's. Recently, a team of researchers from Binghamton University has figured out a way to pick up on those unique signals of neural processing to identify a person's "brainprint," which could be used as a hyper-secure login for technological devices.

The brainprint that the researchers detect is an echo of brain signals to a specific stimulus, "like ripples in water when you drop a pebble in," says Sarah Laszlo, a professor of psychology and linguistics at Binghamton University and one of the researchers behind a recent series of experiments.

Take, for example, a stimulus like a word on a screen, as Laszlo tested in her first study, published recently in the journal Neurocomputing. Participants were hooked up to an electroencephalogram (EEG) so that the researchers could see the patterns of electrical signals generated by the brain. The researchers found that the signal activates a system in the left anterior temporal lobe, thought to play a key role in semantic memory, a part of our long-term memory independent of our personal experiences that remembers things like the names of colors or what LOL stands for.

The reason why people process the same stimuli differently isn't exactly clear, Laszlo says. "This is speculation, but especially in the case of responses to words, there's a difference in people's connection between neurons, not the shape or speed [of those connections]. For the visual system, it's the shape of the brain—people differ a lot in the shape of their visual cortex." And though these connections can change a bit over time, the major structures stay pretty constant; Laszlo has done some previous work taking EEGs of children for several years starting in Pre-K, and Laszlo says she could "easily tell which kid was which from year to year" based on just their EEGs.

Everyone's semantic memory is a little different, too. So it makes sense that there are lots of factors that can affect a person's brainprint, Laszlo says—the person's fluency in the language in which the test is conducted, whether she is left or right handed, whether her parents are left or right handed, her age, the size of her brain, if she's alert or tired, if she's interested in the task at hand, if she's stressed or calm.

In Laszlo's first experiment, the researchers saw that each person's EEG readout, a line punctuated with peaks and valleys sampling brain activity every few milliseconds, was just a little different. And when they repeated the test with the same words and participants six months later, a computer algorithm was able to match the two readouts with 94 percent accuracy.
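Matching readouts taken months apart can be as simple as correlating waveforms and picking the best match. Here is a hypothetical sketch of that idea with simulated signals; the team's actual algorithm is not described here.

```python
# Hypothetical sketch of brainprint matching: correlate a fresh EEG readout
# against each enrolled template and identify the owner as the best match.
# All waveforms here are simulated.
import numpy as np

rng = np.random.default_rng(4)
n_people, n_samples = 5, 500

templates = rng.normal(0, 1, (n_people, n_samples))   # enrolled readouts
probe = templates[3] + rng.normal(0, 0.5, n_samples)  # person 3, months later

scores = [np.corrcoef(probe, t)[0, 1] for t in templates]
print("identified as person", int(np.argmax(scores)))  # -> 3
```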

Devices that can read biometrics with 94 percent accuracy are good, but not good enough, Laszlo says. In the world of consumer tech a device needs to be accurate 100 percent of the time. "You wouldn't want to be locked out of your phone," she says.

So in a second experiment, the results of which have not been published yet, Laszlo and her collaborators tried using different types of stimuli that might elicit a stronger signal in the brain. That meant showing images about which people have very strong opinions—actor Benedict Cumberbatch, for example, or sushi. And they found their algorithm's accuracy drastically improved.

"What got us to 100 percent is combining different kinds of information, like vocabulary and images, with participants' feelings about facial attractiveness of celebrities," Laszlo says. "We try to elicit reactions, and that seems to give us edge."

Even with this high accuracy rate, putting on and setting up the cap for the EEG to take the brainprint takes a minimum of 15 minutes, so it isn't really ideal for devices like personal computers or cell phones. But Laszlo envisions that the brainprint could be used for high-security access points that currently require extra biometrics, such as retinal scans or voiceprints. For these applications, the brainprint could even be a better biometric because it's cancelable—if a hacker captures a person's brainprint for one word or photo used for access, the system administrators can just change the stimulus, and the hacker can't anticipate the resulting brainprint. Plus, Laszlo says, "The brainprint would be robust to threat of violent attack. Retinal scans can still work if you kill someone and take their eyeball. You cannot take someone's brain, because once they're dead their brain stops working." A brainprint is also bulletproof: a person held at gunpoint would display obvious signs of stress, and the sign-in probably wouldn't work, or could set off some sort of alarm.

Now, Laszlo and her team are investigating just how hard it would really be for a brainprint to be hacked, in order to pursue a grant from the National Science Foundation. "I have a research assistant training to have his brain activity impersonate someone else's," she says. "He spends two hours per day, twice per week sitting in front of a computer that stimulates him visually in sync with a target person's brain activity."

The training is working—sort of. "To start out with his brain activity was 60 percent [similar to] his target, and training has boosted him to 80 percent. But to fool the system he has to be 99 percent."

If her team is awarded the grant from the NSF, the hacking attempts will be even more rigorous. But for now, the brainprint seems to be hack-proof. If the tests go well and the engineers make the technology sleeker, it could be implemented in about three years. "Right now it's in the sweet spot—the training works but not well enough to fool the system," Laszlo says. "You can try [to hack the system] but you won't succeed."

Featured
Brain Candy
07 July 2015

Every machine and piece of technology we use, from cars to refrigerators to smartphones, started as an idea, designed and shaped in the human brain before it existed in the real world. However, once built and used, the machines can shape our brains in turn. Here are five effects that modern technology is having on the brain.

We almost never dream in black and white anymore

In the 1950s, dream researchers commonly thought people dreamed mostly in black and white. Now, however, people mostly dream in color. In 2002, researcher Eric Schwitzgebel at the University of California at Riverside found that of 67 volunteers he questioned, nearly two-thirds said they dreamed in color, none said they dreamed in black and white, more than a quarter said they dreamed in both, and about a sixth said they did not know.

People dreamed in color well before the 1950s too, with ancient Greek philosophers Aristotle and Epicurus both noting colors existing during dreams. Schwitzgebel concluded that the rise of black and white films and television during the first half of the 20th century likely explained the results seen during the 1950s.

The advent of color video is eliminating grayscale dreams. In 2008, researcher Eva Murzyn, then at the University of Dundee in Scotland, found that adults over the age of 55 who grew up with black and white TV and movies were more likely to dream in black and white, while adults under the age of 25 dreamed mostly in color.

The Internet is changing our memory

Google and other search engines are changing the way our brains remember information. In 2011, Columbia University psychologist Betsy Sparrow and her colleagues found that we often forget things we are confident we can find on the Internet, and instead are more likely to remember things we think are not available online.

Essentially, our brains are now adapting to rely on the Internet for memory in much the same way they have long relied on family and friends to support our memories. We are remembering less through knowing information itself and more through knowing where to find it. Sparrow suggests that in the future this shift may lead teachers to focus less on memorization and more on imparting greater understanding of ideas and ways of thinking.

Glowing screens are making us sleep less

Ninety percent of Americans now use some kind of electronic device at least a few nights per week within an hour of bedtime. In 2014, scientists found that 12 volunteers who used an iPad in dim light before bed for five consecutive days showed suppressed levels of melatonin, a brain chemical that helps control sleep. Compared with people who read a paper book instead, the iPad readers took nearly 10 minutes longer to fall asleep, experienced a significantly lower amount of REM sleep, and reported feeling less wakeful in the morning.

Biological clocks such as the human body's circadian rhythms depend largely on cues such as light in the environment. The screens of smartphones, tablets, laptops and desktops emit light that is bluer than natural light, and previous research suggests that bluer light may have a greater impact on sleep and circadian rhythms.

Smartphones are rewiring our brains

Today nearly two-thirds of Americans own a smartphone, and the touchscreens we use to control them are reshaping the brain.

Smartphones make people use their fingertips, and especially their thumbs, in new ways. Scientists found greater electrical activity in certain brain areas of smartphone users when their thumb, index and middle fingertips were touched, in comparison to people who still used old-school mobile phones. The amount of activity in the brain linked with the thumb and index fingertips was directly proportional to the intensity of phone use, as recorded by phone data.

Video games can boost some mental skills

Just as playing sports can improve the body, so too does research suggest that playing video games can enhance the brain. For instance, a number of studies have found that first-person shooter games such as Medal of Honor, Unreal Tournament 2004 and Call of Duty 2 can train the brain to process visual data more quickly and keep track of more items moving in one's field of vision, as well as better detect contrast between objects in dim environments in a way that could prove useful when driving at night. Research also suggests that real-time strategy games such as Starcraft might enhance the ability to multi-task.

Still, video games might not always change brains for the better. Although first-person shooter games such as Halo might boost visual skills, they might also reduce a player's ability to control impulsive behavior.

Featured
NeuroTech
21 May 2015

Scientists have successfully converted human blood into cells that go on to become neurons of many types, according to a new study.

The technique may make it easier for researchers to study sensory neurons and their role in pain perception.

Reprogramming blood may also be simpler and less time-consuming than other techniques that produce stem cells from skin, the researchers said in their study published online today in Cell Reports.

The team added a cocktail of molecules to fresh and frozen blood, coaxing it to become cells that develop into neurons. These forerunner cells became the types of neurons found in the central nervous system (the brain and spinal cord) and the peripheral nervous system (which is laced through the rest of the body), as well as support cells called glia.

The scientists could get 1 million sensory neurons from a blood sample, which could be used in studies focusing on developing and testing drugs on these cells, the researchers said.

The method could make it easier to study the neurons in the peripheral nervous system, which are responsible for sending sensory information to the brain about pressure, temperature and pain. Currently, studying the role of these neurons in pain perception is challenging, coauthor Mickie Bhatia said in a press release. "Unlike blood, a skin sample or even a tissue biopsy, you can't take a piece of a patient's neural system. It runs like complex wiring throughout the body."

Many current painkillers work on both parts of the nervous system, leading to grogginess. "You don't want to feel sleepy or unaware, you just want your pain to go away," said Bhatia, a stem cell scientist at McMaster University in Hamilton, Canada. "But, up until now, no one's had the ability and required technology to actually test different drugs to find something that targets the peripheral nervous system and not the central nervous system."

The practice of reprogramming mature cells into stem cells has been evolving since scientists first pulled it off with mice in 2006.


Featured
NeuroTech
19 June 2015

When the first guests arrive at Nagasaki's Hotel Henn Na in July this year, they will be greeted and served by robots. In a similar approach, Toshiba's android Aiko recently held a short-term role greeting customers at a department store in Tokyo. Customers were comfortable approaching Aiko to ask for directions and even the receptionist who normally holds the post felt her android colleague was doing a reasonable job.

There is no doubt that developments in android technology mean that robots now look more lifelike than ever. It is even possible to imagine mistaking a robot such as Aiko for a human, at least at first glance. However, encountering near-human agents may not always be a comfortable experience.

Hotel Henn Na claims it is looking to explore the elements that "personify" hotels and to recreate that by using person-like entities. The big question is how people interact with robots, such as Aiko, that have a human appearance. Such interactions require considerably more than a robot with an apparently human face.

(video)

Our research at the Open University has considered how it might feel to interact with a robot receptionist. It asks whether the properties of a robot receptionist's appearance and behaviour can make us feel comfortable, or decidedly uncomfortable. Our work into the so-called uncanny valley effect also suggests that a key aspect of such interactions will be the ability of the robot to reproduce and convey realistic emotions, particularly through its facial expressions.

The uncanny valley effect is now a well-known relationship between how "human-like" a robot is and how comfortable people feel when interacting with it. There is a point where something can seem too close to human to be comfortable – and our research has explored some explanations for why this might occur. For a long time, research in this area was mainly focused on the practical aspects of how near-human agents appear and on the techniques that might make them look more realistic and acceptable enough to bridge this uncanny valley.

Our early research had observed that some of the eeriest and most unsettling near-human agents were those with exaggerated features, such as a "cute" doll with dark, wide and blank eyes. Building on the idea of disturbing facial features, our more recent research looked at the role of emotions in the uncanny valley effect and considered whether that eeriness may actually be a result of those near-human faces being unable to present realistic emotional expressions.

Most of the research into this area has considered how artificial faces can be made to appear more human-like, but we looked at how even human faces can appear eerie under certain conditions. We have found that when parts of photographs of faces posing emotional expressions were combined, certain combinations of emotions evoked a sense of eeriness. This effect was particularly strong when happy mouths were paired with scared or angry eyes.

In other words, encountering a face where the mouth is clearly smiling, but the eyes are not displaying the same emotion, can be quite disturbing. It follows that unless a robot face can display emotions accurately and appropriately, just as a human would, a human viewer could be left with a distinct feeling of unease.

So, it will be a long time before human receptionists are superseded. Even now that the design of robot faces is at a point where they can be made to appear human-like, robots will need better abilities to perform convincing emotional interactions.

Creating near-human agents that can portray genuinely believable emotional expressions presents a highly complex technological challenge. So while this tension between appearance and emotional interaction may currently confine robots to the uncanny valley, it could also offer a way out.

An alternative would be for robot designers to avoid trying to mimic human faces too closely until technology has progressed to allow feelings to be represented realistically as well. Otherwise, our expectations about emotions would be raised by robots' human-like appearance and then not met by their expressive ability. Choosing to make robots less human in appearance could avoid a constant race in which robots that appear more human-like also require ever more convincing emotional abilities, offering a way around the uncanny valley.

(image)

Easier to love?

This article was written by Stephanie Lay and Graham Pike. It first appeared on The Conversation and is republished here under a Creative Commons license.


Featured
NeuroNews
11 June 2015

When we have to fumble through a familiar place in the dark, we may be relying on neurons called grid cells to guide us.

In order to navigate, we have to be able to orient ourselves in space. One way of doing this is to calculate where we are based on how far we've moved from a starting point. Scientists have found clusters of neurons called grid cells in some mammal species that help them chart their path. The cells fire in patterns that represent different locations in the environment. So far, grid cells have been found in rats, mice, bats and monkeys.
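Path integration itself is just vector bookkeeping: keep a running sum of your displacements, and the way home is the negative of that sum. Here is a minimal sketch of the idea, including how a miscalibrated distance estimate (one assumption of what a rescaled enclosure might cause) would throw the homing response off:

```python
# Minimal sketch of path integration: sum each leg of an outbound journey;
# the vector back to the start is the negative of that running sum.
import numpy as np

legs = np.array([[3.0, 0.0],     # east 3 m (hypothetical journey)
                 [0.0, 4.0],     # north 4 m
                 [-1.0, 1.0]])   # a final northwest-ish leg

position = legs.sum(axis=0)      # where you are relative to the start
homing_vector = -position        # direction and distance back home
print(homing_vector, np.linalg.norm(homing_vector))

# If the brain's distance estimate is off by a gain factor (as a rescaled
# enclosure might cause), the homing response under- or overshoots:
gain = 1.4                       # environment stretched by 40 percent
print(-gain * position)          # misses the true starting point
```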

To see whether humans use a similar neural system to find their way, Timothy McNamara, a psychologist at Vanderbilt University, and his team turned to virtual reality. Participants in the experiment donned virtual reality goggles and navigated a landscape marked by posts of light, which disappeared once participants reached them. When the participants arrived at the last post, they had to return to where they started.

The return trip took place in darkness, so the participants would have to find their way based on where they remembered the first post being. For the first three trips, the dimensions of the landscape remained the same. But in the final expedition, the researchers created a warped version of the virtual landscape by changing one of the dimensions by 40 percent.

The participants messed up in ways that suggest they are using grid cells or something like them. Even when they didn't realize that the landscape had changed, the new size of their enclosure threw them off course, and they ended up at a spot significantly farther from the target than they did when the enclosure remained the same.

"When the enclosure increases in size they tend to undershoot and when it decreases they tend to overshoot," McNamara said in a statement.

These findings suggest that people's estimates of distance are skewed when the dimensions of a familiar place change. How much more people err in their route when the space changes roughly matches what researchers would expect to see if we depend on grid-like cells.

"We still can't say for certain that people use a grid-cell system to navigate," said McNamara, who published the findings today in Current Biology. "But we can say that, if people use a different system, it seems to behave in exactly the same way."

Featured
NeuroNews
10 June 2015

Scientists have developed an electronic mesh that can be injected into the brains of living mice. This technology will allow researchers to track the action of many individual neurons simultaneously.

To better understand the workings of the brain, scientists have developed probes to measure the changes in electricity, or action potentials, that signal neurons firing. But it's easy to damage the very tissue you want to monitor: previous attempts using probes made from silicon, metal, carbon and other materials diminished the density of neurons around the site where the probe was inserted.

The new mesh, built from electrodes and flexible polymer wires, is rolled up to fit into a syringe with a needle just 100 micrometers in diameter. After injection, it unfurls to most of its original size. The mesh is designed not to damage neurons—it is almost as flexible as living tissue, so it won't harm neurons when the recipient moves around.

The researchers implanted the meshes into two brain regions (the hippocampus and lateral ventricle) of live mice. When the team checked 5 weeks later, the rodents didn't appear to have had an immune response to the foreign objects.

What did happen was that the mesh integrated with the network of surrounding neurons. The researchers used the new technology to record activity in the rodents' brains, including the action potentials created by individual neurons.

The success the ultra-fine mesh had in recording the workings of neurons without damaging them indicates that injectable electronics could be used to monitor neurons after brain injuries, the researchers (collaborators from Harvard University and China's National Center for Nanoscience and Technology) wrote June 8 in Nature Nanotechnology, where they published the findings.