After the British painter Samuel Palmer lost his 19-year-old son in 1861, he called the death "the catastrophe of my life." In a letter to a friend, he described the anguish:
"I dreamed one night that dear More was alive again and that after throwing my arms round his neck and finding beyond all doubt that I had my living son in my embrace – we went thoroughly into the subject, and found that the death and funeral at Abinger had been fictitious. For a second after waking the joy remained – then came the knell that wakes me every morning – More is dead! More is dead!"
Palmer's tragedy probably resonates with anyone who has known loss. Sadness, from the profound grief of a parent to the transient moodiness of a child, is a universal emotion. It also, on its face, doesn't make a great deal of sense.
Why would an emotion that prompts people to cry, mope, lose their appetites and withdraw from the world do any good? What's the point?
Psychologists have been exploring those questions since the days of Sigmund Freud, and they've discovered some very compelling answers. In fact, sadness – at least the mild kind – might help us process memories, navigate uncertain social situations and even stamp out the cognitive biases that color our judgments. More provocatively, the benefits of sadness might even explain the horror of major depression.
The evolution of sadness
It's simple to imagine how emotions evolved to keep our ancestors safe. A Paleolithic person without a healthy sense of fear would likely fall off a cliff or get mauled by a bear before passing on their genetic lineage; likewise, pleasure prompts us to engage in behaviors crucial to successful reproduction, like eating and sex.
Evolutionary theorists think sadness might play its own survival role. In work done in the 1940s and 1950s, John Bowlby, a British psychologist, developed the theory of attachment. This theory, still influential today, held that infants and children are motivated to stay close to their caregivers in order to maintain their survival. In healthy attachment, the caregiver is sensitive to the child's needs, and the child feels comfortable roaming a safe distance away from that caregiver, knowing their base of safety is nearby.
Sadness, from this point of view, is the emotion that makes attachment work. Sadness accompanies loss (such as the loss of a parental figure), and encourages the sad person to remedy that loss (by finding out where Mom went). Of course, if the missing person has died, the loss can't be remedied.
"Loss of a loved person is one of the most intensely painful experiences any human being can suffer," Bowlby wrote in his 1980 book, " Attachment and Loss, Volume III: Loss, Sadness and Depression."
In this view, sadness is the price we pay for our ability to form bonds with one another. The price can be high indeed; Bowlby cites a study of 56 Swedish mothers who had lost an infant, which found that one to two years later, one-third of them had severe psychiatric disorders such as depression and anxiety.
Sadness might also be a way to navigate loss in a social world. Some theorists think the emotion evolved as a cry for help. Tears, in particular, might be a way of saying, "Hey, I'm not okay." In a 2009 paper in the journal Evolutionary Psychology, for example, Tel Aviv University researchers suggest that tears signal vulnerability and promote bonding. And a 2013 study in the same journal flashed teary faces on a screen for mere milliseconds, too little time for the tears to register consciously. The study found that, whether a face was sad or neutral, tears prompted participants to rate the person as more in need of social support than the same face without tears — even though the participants hadn't consciously noticed that the people in the photos were crying.
The benefits of feeling blue
The sadness accompanying a great loss is deep and often debilitating. A mildly bad mood, on the other hand, might be beneficial.
"Mild sadness seems to function as an alarm signal, indicating that the current situation is new, unfamiliar and challenging," said Joe Forgas, a psychologist at the University of New South Wales in Australia.
This signal seems to put the brain on high alert. Sad people have a more accurate eye for details, Forgas and his colleagues have found, and they're more reliable eyewitnesses to confusing events. They're also less likely to fall into common cognitive traps, like believing that a handsome person must be nice (the halo effect) or remembering initial information better than information added later (the primacy effect).
In some cases, sadness might even make you a better person. In one 2010 study, Forgas and his team asked participants to play the dictator game, in which one person is given a certain amount of cash and then must decide how much to keep and how much to pass on to a partner. Typically, people do end up giving some money away, suggesting that we're not entirely driven by greed or self-interest.
In Forgas' version of the game, some participants were in a happy mood, and some were sad. Surprisingly, the people who were feeling down shared more of their bounty than the sunny, chipper types. The researchers suspect that sad people were in an externally-oriented state and were thus more concerned with social norms and what their partner might think of them than the happy participants.
Going deeper: why does depression exist?
If sadness has some clear benefits, the picture gets much murkier when it comes to depression. Major depression makes people less accurate at identifying others' emotions, impairs working memory and damages cognitive control, as reviewed in a 2007 paper in the journal Emotion.
Major depression can also lead to death — it's a major risk factor for suicide. Which raises a question: Why would such a debilitating condition occur so frequently? According to the National Institute of Mental Health (NIMH), some 6.7 percent of U.S. adults experienced a major depressive episode in 2013. The lifetime risk of experiencing depression hovers somewhere around 16 percent. The average age of onset is 32, and 3.3 percent of teens have experienced serious depression, according to the NIMH. The high numbers have evolutionary theorists asking questions.
"What's really weird is to see a healthy 20-year-old person with no sign of infection, no sign of injury, with a severe brain dysfunction at very high rates," said Ed Hagen, director of the bioanthropology lab at Washington State University Vancouver. Depression dwarfs other brain disorders such as schizophrenia, which affects about 1 percent of people over the lifetime.
"We don't see any other major dysfunction in any other organ at these high rates in otherwise healthy [young people]" Hagen said. "Could it really be true that our brains are dysfunctional at this high rate at a very young age?"
In Hagen's view, the frequency of depression suggests that the disorder is telling us something important: It's like the pain of a broken ankle, signaling a deeper problem that needs to be fixed.
"If you walked around on a broken ankle as if nothing was wrong, you'd make your ankle a lot worse," Hagen said. "If you did something stupid to break that ankle, you need to think about what you did and don't do it again next time." Psychic pain, he said, may serve the same purpose. Even suicide attempts, he said, might be an extreme way to signal to others that something is deeply wrong and must change quickly.
This isn't to say that depression shouldn't be treated, or that people can simply bootstrap their way out of a depressive episode; instead, Hagen argues that prescribing antidepressants without exploring the triggers of the depression through therapy is short-sighted.
"That would have the same negative consequences as giving people a bunch of Percocet but not actually fixing the broken ankle," he said.
Not everyone agrees with this evolutionary line of thinking. Many researchers think of depression as less like the pain of a broken ankle and more like cancer. Instead of an unpleasant but helpful signal, they say, depression is normal sadness run amok, much like cancer is normal cell proliferation that has gotten out of control. Depression is complex, with many genetic pathways to the disease, these researchers point out. And when the genetics are complicated, even traits that are more harmful than helpful can persist.
"Depression is a very serious illness, leading to an inability to cope and often leading to suicide," Forgas said. "Any beneficial effects it may have are likely to be coincidental and minor compared to its overwhelming cost."
There is a third metaphor that might explain high rates of depression. It's possible, Hagen said, that depression is like diabetes, or obesity — a reflection of the ways in which our modern environment fails to match up with the ancestral conditions in which our brains evolved. Free access to the fatty, sugary food our brains evolved to crave has led to high obesity rates. Likewise, depression is linked to urban living, which is on the rise. Modern life is also more isolated and less active than our ancestors' existence. These factors might help explain why depression is relatively rampant.
"Either depression has some function," Hagen said, "or there's a pretty severe mismatch between the modern environment and the environment we evolved in."
While the relative benefits of depression are very much up for debate, the benefits of sadness deserve the spotlight. The "cultural obsession" with seeking constant happiness is a mistake, Forgas said.
"Human beings have a variety of evolved affective responses for good reason, and we should accept that mild negative affect is part of life and should be accepted as such," he said. "The popular tendency in the media to represent positive affect as the only acceptable state to be in actually creates more suffering, since it posits an unattainable standard, paradoxically likely to cause more frustration and misery."
In my dreams, I'm wandering a vast maze of high school hallways, knowing that class starts at any minute. It's the first day of school — I can't be late. But I have no idea where my first class is. My schedule has gone missing. I keep wandering around, quietly panicking, knowing class is starting but powerless to figure out what to do.
And then I wake up in a cold sweat and remember I haven't been in high school in more than a decade.
My school anxiety dreams are far from unique. People even further from graduation day than I am report nightmares about showing up unprepared for exams, losing their locker combination or discovering they never actually graduated from college. Though research suggests these dreams fade with age, they are still common among people in their 50s and beyond. In fact, studies in Canada, the United States and Japan regularly find that dreams about school, teachers and studying rank among the most common subjects for dreams.
But why? Why should lockers and hallway mazes and course credits haunt us long after we've slipped from the educational system's grasp?
To answer that question, I first had to find out how frequent school dreams really are.
Turns out, there's a scientific questionnaire for that — and it reveals that the prevalence of school dreams is no illusion. The Typical Dream Questionnaire, developed in the 1950s, lists 55 dream themes and asks research participants to check each that they've experienced, and to identify the most common. School-related dreams rank near the top of the list across several cultures. In a 2003 study of Canadian college students, dreams of schools, teachers and studying ranked 4th-most-common, after dreams of being chased, dreams of sex and dreams of falling. The dream of failing an exam came in 10th. In studies conducted in the 1950s with American and Japanese college students, 71 percent of Americans and 86 percent of Japanese reported school-related dreams, which again made these dreams the 4th-most-common type reported, after being attacked or pursued, falling, and trying again and again to do something.
University of Montreal psychologist Tore Nielsen and colleagues have administered the Typical Dream Questionnaire to more than a thousand American, Japanese and Canadian college students, as well as to sleep-disordered patients of all ages. They've found that these percentages are remarkably consistent, as Nielsen and colleagues wrote in a commentary in the journal Behavioral and Brain Sciences in 2000. Across all populations, the researchers wrote, school and studying dreams rank 3rd-most-common, with 73 percent of people reporting experiencing them. Another 47 percent say they've dreamt about failing an exam, making those dreams 10th-most-common on the list.
Other common dreams contain themes that overlap with anxiety-ridden school dreams. Dreams of being late rank 5th overall, for example, and dreams of trying again and again to do something rank 6th.
Jonathan Barclay, a 38-year-old from Oklahoma (and, full disclosure, my brother-in-law) illustrates this overlap well. Barclay has school anxiety dreams at least twice a month, he said, and they often center on frustrating attempts to get something done: "forgetting to go to class, having car trouble so I can't get to class, wandering around looking for my classroom, forgetting what time the class starts or on what days it occurs, not being able to find the guidance office to get my complete schedule … the list goes on and on."
"I really hate these dreams," Barclay added.
Less-scientific surveys also find lots of people dreaming not-so-sweet school dreams. Three years ago, Lauri Loewenberg, an author and dream analyst based in Florida, surveyed 5,000 dream enthusiasts through her thedreamzone.com newsletter and found that school dreams ranked as the second-most-common (right after dreams of one's partner cheating). The three most common school dream themes, she said, were being unable to find a classroom or locker; being unprepared for a test; and having to retake classes or credits.
The reminiscence bump
All of these findings might suggest that we're collectively holding on to a lot of high school trauma. Or you could go with the pop-psychology interpretations of these dreams: If you're dreaming about not getting credit for college biology, for example, maybe it means you feel like you're not getting enough credit for your contributions at work.
"Job stress, job issues usually manifest in school dreams more than they do in dreams about your actual job," Loewenberg said.
Certainly, some people say they get school anxiety dreams more often when real life is stressful (though others say they come at random). But either way, why school? Why not job interviews or client presentations or family reunions or any of the other myriad scenarios in which adults could humiliate themselves?
The reason might have to do with a quirk of memory called the reminiscence bump. In older people, memories from late adolescence and early adulthood tend to be the strongest, according to studies stretching back to the 1980s. Researchers aren't sure why these years loom so large. Perhaps the novelty of these years lends itself to sharper memories; perhaps this time period is so crucial to self-identity that the details stand out; or perhaps we're cognitively at our sharpest in youth, so memories are encoded more effectively.
There's some evidence that the reminiscence bump might play into our dreams. Older people (age 60 to 77) report more dreams with references to their early adolescence and adulthood than to childhood or later adulthood, according to a 2005 study. However, it's not clear whether most "typical dreams" follow that pattern, or whether positive and negative emotions surrounding these youthful experiences might influence their dream recall, the University of Montreal's Nielsen wrote in a later study.
Anecdotally, other teen memories do intrude in later dreams. Loewenberg said that people tell her that they often dream of their first loves, even years later when they're happily married.
"It's like our first experiences in life kind of get imprinted into our subconscious and become part of who we are," she said.
The continuity hypothesis
Michael Schredl, the head of the sleep laboratory at the Central Institute for Mental Health in Mannheim, Germany, isn't buying the reminiscence bump theory. Rather, he thinks the universal experience of going to school and being tested is simply a convenient way for the brain to express real-world anxieties.
"The examination dreams are triggered by current life situations that have similar emotional qualities," Schredl said.
This notion fits with the "continuity hypothesis," which holds that dreams reflect people's waking concerns. Although this hypothesis is widely accepted among members of the public, scientists actually debate it quite a bit. Some researchers concede that yes, people often dream about quotidian events, but argue that dreams that are more complex or bizarre don't fit that pattern. One alternative dream hypothesis is called the activation-synthesis hypothesis; it holds that as we enter REM sleep, neurochemical changes occur. The brain then throws together strange narratives and bizarre imagery in an effort to make sense of the biological changes it's experiencing.
Very little research has looked at whether seemingly bizarre dreams might be metaphorical, as non-academic dream analysts like Loewenberg argue. One 1969 study published in the Archives of General Psychiatry, however, suggested they can be, in the weirdest way possible: The researchers had young men watch erotic films and then report their dreams. Those who'd watched the films said they'd dreamt about more phallic imagery than those who'd watched non-raunchy movies.
Many people who have school dreams tie them to their specific histories and anxieties. Mike Cronin, a reporter at the Asheville Citizen-Times, used to ignore his math homework all semester in both high school and college, only to cram at the last minute. Today, at 46, he's haunted by dreams of going about his life, knowing that math homework is piling up. The dread in these dreams builds like a rollercoaster car being pulled to the top of the incline, Cronin said. Others who reported their school anxiety dreams to Braindecoder made a point of mentioning that they graduated with honors or had multiple graduate degrees.
In fact, school anxiety dreams might sometimes be beneficial — when they're relevant to school-related tasks at hand. A 2014 study queried would-be medical students about their dreams in the nights leading up to a huge qualifying exam for medical school. Of 719 students who responded to surveys, 60 percent dreamt of the exam the night before taking it. In 78 percent of those dreams, the students dreamt of forgetting answers, being late, or otherwise screwing up.
But a more surprising finding also emerged: Students who dreamt about the test the night before got better scores, the researchers reported in the journal Consciousness and Cognition. It's possible that the more high-strung students both studied harder and worried more about the exam, so much so that it penetrated their dreams, the researchers wrote.
Or, more provocatively, perhaps the dream episodes helped the students "rehearse" for the stressful event, allowing them to work out their anxiety in a safe place.
"The dramatization of concerns during dreams may train the brain," the researchers wrote.
The question of whether dream "practice" counts is an open one, and researchers are even studying whether athletes could rehearse physical movements in their sleep. But the next time you wake up panicking about your college course credits, give yourself a break. Maybe the reason you relive these moments in sleep is the same reason you really did graduate in your waking life.
And whatever the case may be, remember that you're not alone.
One of the things that artificial neural networks share with us is that their "brain" can be something of a black box. So in an attempt to better understand the inner workings of an image classification network, Google engineers turned the network upside down, created trippy images that took the internet by storm, and then put the source code on GitHub so that anyone can try their hand at understanding how the code works.
One of the several groups jumping at the opportunity to peek into an artificial neural network was Deepdreamr. Thousands of people submitted their photographs to the group's website and got a beautiful, trippy image back. "We gained a lot of knowledge about how the code itself works while we were working with it," team member James Bateman said.
Meanwhile, the neural net learned more and more from all the user-submitted images, adding every little new thing it saw to its library. "It wouldn't be the brain it is without the community because they fed it all the images. We couldn't possibly put that many images into it ourselves," team member Roz Woolverton said.
So we asked the kind people of Deepdreamr to feed some biological brain images to the machine and see how the artificial neural network tries to make sense of the neurons.
This is the Deep Dream version of one of the first drawings of a neuron, by Otto Deiters, published in 1865:
An artificial neural network simulates actual networks of neurons and how they are connected to each other. The network is trained by processing millions of images, gradually adjusting its network parameters until it classifies images correctly. Deep Dream hasn't been trained to recognize neurons, at least not extensively. Rather, it's been shown a large number of animal images, mainly dogs. So naturally, it tends to see dogs everywhere, like in this drawing of a dog's olfactory bulb, by Camillo Golgi in 1875:
And this is what happened to the drawing of cells in a pigeon's cerebellum, by Santiago Ramón y Cajal, 1899:
Deep Dream started as an experiment with Google's image recognition software. The engineers wanted to understand what the network actually "sees" at the intermediate levels of its processing. So they fed the software noisy pictures and asked it to recognize patterns in the images. Through repetition, the network enhanced any pattern it saw, until the image was completely altered. This is how in Deep Dream even a small worm can be turned into a dragon. The original image for the results below is of a glowing C. elegans, made by neurobiologist Martin Chalfie in the 1990s. The worm glows because scientists inserted a piece of DNA coding for green fluorescent protein into the neurons of the worm, causing the cells to produce the glowing protein and light up under the blue light.
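That amplification loop is easy to sketch in code. The example below is purely conceptual: it uses a tiny, untrained PyTorch network as a stand-in for the trained classifier (the released Deep Dream code used a large pretrained image-recognition model), and the layer index and step sizes are made up for illustration. The core move is the same, though: pick a layer, ask what it responds to, and nudge the image to produce more of it.

```python
# Conceptual sketch of the Deep Dream loop, not Google's released code:
# a tiny untrained network stands in for the trained classifier.
import torch
import torch.nn as nn

# Stand-in for a trained vision model: a few convolutional layers.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
chosen_layer = 3  # the intermediate layer whose "vision" we want to amplify

def forward_to(x, layer_idx):
    """Run the image through the network, stopping at the chosen layer."""
    for i, layer in enumerate(model):
        x = layer(x)
        if i == layer_idx:
            break
    return x

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise

for step in range(50):
    activation = forward_to(image, chosen_layer)
    # "Whatever you see there, show more of it": push the activation up.
    activation.norm().backward()
    with torch.no_grad():
        image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.clamp_(0, 1)   # keep pixel values in a valid range
        image.grad.zero_()
```

In the real system, the model has been trained on millions of photos, many of them of dogs, which is why the chosen layer keeps "finding" eyes, snouts and swirling textures rather than noise.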
The following image is the Deep Dream version of a slice of an adult mouse brain, lit up thanks to fluorescent protein producing neurons:
(Original image via Frontiers in Neural Circuits)
Neural networks consist of several layers of artificial neurons. The first layers, which take in the raw image, are tasked with extracting its basic features, such as edges and corners. Each layer then sends its results to the next layer, which looks for progressively higher-level features of the image, like shapes and objects. The highest layers put those results together to recognize a complex object such as an animal or a building.
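As a rough illustration of that hierarchy, the toy stack below (an assumed architecture, not the network Google used) runs a dummy image through a few stages and prints the shape of the features each one produces; spatial detail shrinks as you go up while the representation becomes more abstract.

```python
# Toy illustration of a layered feature hierarchy, not the real Deep Dream net:
# early stages respond to simple local patterns, later ones to larger structure.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),    # early: edges and corners
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),   # middle: textures, simple shapes
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # late: object-like parts
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),                           # top: class scores ("dog", "building", ...)
)

x = torch.rand(1, 3, 64, 64)  # a dummy input image
for i, layer in enumerate(net):
    x = layer(x)
    print(i, type(layer).__name__, tuple(x.shape))
```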
The following image is taken from one of the intermediate layers of the neural net. The original image is of neurons in a mouse hippocampus labeled using the Brainbow method:
(Original image by Jeff Lichtman and colleagues via Center for Brain Science)
This is how Golgi's drawing looks at the same layer:
Recently, scientists figured out how to make brains transparent using a method called CLARITY. The method allows scientists to look at glowing neurons in an animal's whole, three-dimensional brain without having to cut the brain into slices. This is what the machine saw:
(Original image by Karl Deisseroth and colleagues, Stanford University)
Finally, we had the machine take a look at a beautiful sci-art work by neuroscientist Luis de la Torre-Ubieta of UCLA. The original image consists of 5.3-micrometer thick slices of transparent mouse brain with green-glowing neurons that are color-coded by their depth from red on top to orange, yellow, purple, blue and green at the very bottom.
Anyone can see the face of Jesus on a slice of toast, a rocky face on Mars, or a man in the moon. But although seeing nonexistent faces is generally quite common, some people may be more likely than others to experience the phenomenon: personality, sex, and emotional state may influence our tendency to perceive faces and other meaningful patterns when they don't actually exist, according to new research presented earlier this month at the 19th annual meeting of the Association for the Scientific Study of Consciousness.
This phenomenon, called pareidolia, occurs as a result of how the brain works. One area of the brain, called the right fusiform face area, is specialized to process true faces, and the same area also activates when people see a face pattern inside noise. This universal experience has led to countless photographs and the $28,000 sale of an old grilled cheese sandwich, and it was famously exploited by the Swiss psychologist Hermann Rorschach to develop his inkblot test, which is still used today to gauge the mental state of psychiatric patients or to probe personality traits more generally.
In the new study, Norimichi Kitagawa of the NTT Communication Science Laboratories in Tokyo and his colleagues designed an experiment to test whether one's personality traits and emotional state can affect the tendency to experience pareidolia, and whether the characteristics of the pareidolic images that people experience might predict their personality and emotional state.
The researchers recruited 166 healthy undergraduate students and asked them to complete the Ten Item Personality Inventory and the Positive and Negative Affect Scale, questionnaires designed to assess the 'big five' personality traits and emotional mood, respectively. They showed all the participants the same pattern of random dots, asking them to report whatever shapes they saw within it, and also to trace the shapes they saw onto the pattern with a pen.
They found that some of the participants had a greater tendency than others to perceive meaningful shapes in the random pattern of dots – including not only faces, but also animals and plants – and that this tendency was correlated with particular traits and moods.
Overall, those with a higher neuroticism score were more likely to experience pareidolia, as were those who reported being in a less negative mood. But the physical characteristics of the pareidolic images they perceived were not related to the participants' personality traits or emotional states. The researchers also found that women were more likely than men to experience pareidolia.
Exactly why certain traits might make one more susceptible to pareidolia is still unclear, but Kitagawa and his colleagues propose that an evolutionary purpose may have been behind women's higher tendency for pareidolia. Females are often physically weaker than males, and this, the researchers say, may have made them more sensitive to meaningful stimuli within noise, better enabling them to detect predators in a forest.
Neurotic people tend to be less emotionally stable than others, and this, too, may make them tend to see meaningful patterns that aren't actually there. Likewise, certain moods may increase the tendency to see such patterns. "We think positive moods enhance creativity," says Kitagawa, "so people with higher positive mood scores may find more possible interpretations of the dots."
You're going to die one day. You belong to the only species that is consciously aware of this simple, universally true fact. And just being reminded that you will someday die can manipulate your behavior now — in ways that seemingly have little to do with death.
Five to 10 minutes after reading this article, once you've pushed conscious thoughts of death out of your mind, you'll probably become more interested in being famous, more likely to support a charismatic leader, and potentially more interested in having children. You may also be less likely to approve of breastfeeding and more supportive of war. At least, this is what studies have found.
The reason? According to theorists in the field of Terror Management, all of these attitude changes serve to bolster us in the face of our mortality. When death hovers at the edge of consciousness, humans strive to push it down.
"In order to function with psychological equanimity in the world, we humans have to believe there's something more, that we're not just these creatures that are fated to obliteration upon death," said Jeff Greenberg, a psychologist at the University of Arizona.
Greenberg, along with psychologists Tom Pyszczynski and Sheldon Solomon, is one of the inventors of Terror Management Theory, the idea that the need to cope with death influences a wide range of human behaviors. The three are also authors of the recent book "The Worm at the Core: On the Role of Death in Life" (Random House, 2015).
The trio first formally proposed their theory in 1986, after reading the work of Ernest Becker, an anthropologist and author of "The Denial of Death," which won a Pulitzer Prize in 1974. In the book, Becker theorized that much of human behavior is driven by the search for immortality, both literal (belief in an afterlife) and figurative (the desire to leave a mark on the world or be part of something greater than ourselves).
"We realized that he was making a lot of sense," Greenberg said.
The researchers came up with the evocative name for the theory during a conference in 1984, Greenberg recalled. "We wanted to convey the essence of what it was about, and clearly that was the terror of death," he said. He "blurted out" the name Terror Management Theory to his colleague Sheldon Solomon, and they both laughed in approval and added it to their presentation.
"In the 1980s, numerous elder statespersons of the field chided us on the lurid nature of the theory's name!" Greenberg wrote in an email. "Didn't bother us, though."
The psychologists had a name for their theory. But, being scientists, they also needed proof to back it up.
The psychology of death
Becker held that people tamp down their anxiety about death by boosting their self-esteem, assuring themselves that they have value in a meaningful universe. One way to do this is to link yourself to something bigger: your own culture or worldview.
To test this idea, Greenberg and his colleagues recruited a group of municipal court judges and asked some of them to jot down feelings about their own deaths. Shortly after, the judges were given a hypothetical case in which they had to set bond for a woman arrested for prostitution.
Thinking about death had a huge effect on the judges' decisions, the team reported in October 1989 in the Journal of Personality and Social Psychology. Judges who hadn't pondered death set bond at $50, on average. Judges who had thought about their own mortality set bond nearly 10 times higher — at $455. Presumably, thoughts of death made the judges cling tighter to their own worldviews, which were of the law-and-order type.
Follow-up experiments showed that this effect was unique to death; people don't respond this way when confronted with thoughts of pain or failure. The effect is also largely subconscious. When people ponder death consciously, they tend to be rational about it, said Pyszczynski, who works at the University of Colorado, Colorado Springs.
"You think, 'I really need to make an appointment to get this thing on my skin checked out,' or 'I really need to get more exercise or quit smoking,'" he said.
It's when the thoughts of death are at the edge of awareness that the mind starts its terror-denying gymnastics routine.
"People deal with the problem of death with things that have no relation to death whatsoever," Pyszczynski said.
For example: breastfeeding. A 2007 study found that after people were reminded of death, they became more negative toward breastfeeding in public and less welcoming of a person described as breastfeeding in a nearby room. Breastfeeding, the researchers found, reminded people of their animal nature, or "creatureliness." Another study, published in the Journal of Personality and Social Psychology in 2014, found that when reminded of their own creatureliness with thoughts of pregnancy, menstruation and breastfeeding, women became more likely to objectify themselves. Essentially, the researchers suggested, they tried to view themselves as objects in order to deny their own mortality.
On the other hand, multiple studies have shown that thoughts of death increase people's enthusiasm for having children — and even change their attitudes towards baby names. A 2011 study published in The Journal of Research in Personality found that after being reminded of death, people became more interested in naming a child after themselves. What better way to live on than through a namesake?
Thinking about mortality is also associated with an increased desire for fame, according to research by Greenberg and others. People reminded of their deaths desired fame more, were more interested in having a star in the sky named after them and even cast a kinder eye on paintings supposedly done by Johnny Depp (the celebrity of choice in this particular experiment).
Circling the wagons
Perhaps the most striking effect of thinking about one's death, however, is that it makes humans more insular.
The theory, Greenberg said, works like this: As children, we're completely helpless. We learn quickly that to keep our parents' love and protection, we have to behave in certain ways and uphold certain values. As we age and become aware of bigger and bigger threats in the world, our parents are less able to play that protective role.
"What we do is we transfer that function of providing psychological security to larger structures," Greenberg said. That might mean God, country, or concepts like freedom and democracy.
Thus, when death threatens, we cling to those values ever tighter. This has some interesting effects. A 2011 study in the journal Current Directions in Psychological Science found that thoughts of mortality made people more likely to support a charismatic leader who shares one's worldview. In a 2006 study, working with Iranian researchers, Pyszczynski and his colleagues found that Iranian students in a control condition preferred a peace-preaching person to a suicide bomber, but mortality reminders made the participants grow more supportive of terrorist attacks against Americans. And Americans, particularly conservatives, who were reminded of the 9/11 attacks were more supportive of war against Iraq and even of nuclear attack against threatening nations, according to 2011 research. Studies in Israel echoed these findings.
"We have something in common with our enemies, in that we're all driven by the basic fear that comes from being human," Pyszczynski said.
But Terror Management isn't all bad. Thinking of death can increase people's charitable behavior, Greenberg said. And wealthy people often strive to leave their mark on the world through charitable giving or foundations that do good.
"It has the most to do with the nature of your worldview," Greenberg said. If your worldview suggests positive ways to leave your mark on the world, thinking of death will drive you to do good. If it suggests negative ways, like terrorism or infamy, thinking of death is likely to have bad outcomes.
Understanding Terror Management makes it easier to grasp why people can believe things that seem, to outsiders, completely absurd. Studying the field has made Greenberg feel less judgmental — and more self-aware of the ways in which he tries to be of value in the world.
"The truth is that we're all insecure," he said. "We're all dealing with the vulnerability of the knowledge of our own mortality."
Imagine sitting in a chair, flexing your wrist at random. As far as you know, you decide to flex your wrist, and then you flex it. Decide, flex. Decide, flex.
But what if you found out that while you're sitting peacefully, not planning your next movement, your brain is ahead of the game? What if, in fact, our decisions are made in our subconscious before our conscious mind ever becomes aware of them?
Those are ideas raised by studies stretching back into the 1980s — studies that have led many in the neuroscience community to argue that free will is nothing but an illusion. Our actions, according to this line of thought, are determined mostly subconsciously. When we become conscious of our intentions, we can usually give a reasonable explanation for our choices, but that's all retrospective. There is no "ghost in the machine" swooping in to make decisions independently of our brain activity — how could there be, when we are the sum of our brain activity?
But don't sound the death knell for free will just yet. Despite tantalizing evidence that our brains are doing much more than we're aware of, free will may still exist. It just hasn't been proven yet.
Goodbye, free will?
The idea that we might need to prove something as self-evident as free will probably seems odd. As humans, we feel with certainty that we're making conscious choices about our day-to-day lives.
In fact, though, we can conduct a wide array of activities without consciousness at all.
"Most of what our brains do is unconscious," said Thalia Wheatley, a neuroscientist at Dartmouth College. "I'm not conscious about every word I'm about to say … I'm not conscious of the fact that I'm pacing around my kitchen right now and my feet aren't tripping."
But these are the activities we do with a brain on autopilot. What about the things we intentionally attend to and decisions we make explicitly? A series of experiments carried out in the 1980s by psychologist Benjamin Libet suggest that even our "conscious" choices may not be so conscious, after all. Libet asked subjects to watch a clock face with a spinning dial, and to occasionally, and randomly, flex a wrist (later experiments were also done with pushing a button and other similarly simple actions).
The participants were then told to gauge, as best they could, where the dial on the clock was pointing when they made the decision to move. All of this was done while the researchers monitored the electrical activity of the participants' brains with electroencephalography (EEG).
Typically, Libet found, people reported making the decision to move about 200 milliseconds before they actually did. Surprisingly, though, the electrical activity in the motor areas of the brain began ramping up about 550 milliseconds before the movement — some 350 milliseconds prior to the conscious decision to move.
"It was seen as like the brain version of determinism," said John-Dylan Haynes, a neuroscientist at the Bernstein Center for Computational Neuroscience in Berlin. "The brain prepares a decision for you, then when it comes to you making up your mind consciously, the dice have already been cast."
Predicting the future
In 2008, Haynes and his colleagues published a study that pushed back the throw of the dice even further. In this study, participants had their brains scanned by functional magnetic resonance imaging (fMRI) while they were given a choice between pushing a button with their left hand or a button with their right hand. The researchers wanted to know what sort of brain activity might be going on in areas beyond just the motor-planning regions studied in Libet's work.
The researchers found that two regions became more active (as measured by an uptick in blood flow to those areas) before a person became conscious of their button-pushing choice. One was Brodmann area 10, which is at the very front of the prefrontal cortex, behind the forehead, and is generally used in executive control. The other active region was part of the parietal cortex, in the upper middle region of the brain. The parietal cortex is involved with sensory integration.
Most striking of all, though, was when those regions became active: at least 7 seconds prior to making a conscious choice to move — a gap 20 times larger than the previously measured 350 milliseconds in the motor-planning regions. What's more, fMRI imaging includes a slight time delay, so that means the activity could have started as much as 10 seconds before the conscious decision point.
The researchers could even predict the choice of button the person would make with about 60 percent accuracy based on the brain activity alone — hardly a crystal ball, but better than pure chance.
The free will debates
These experiments don't bode well for free will, but don't go robbing a liquor store and blaming your subconscious just yet.
The implications of these studies are hotly contested. Many, like neuroscientist Sam Harris, believe that our choices are entirely opaque to us, driven by the complex interaction of our genes and environment, and thus out of our control. "You can do what you decide to do, but you cannot decide what you will decide to do," Harris writes in his book, "Free Will" (Free Press, 2012).
Wheatley, too, doubts that true free will could exist, given what we know about the brain.
"You are your neurons," she said. "There's nothing else up there. When you really think about the brain, the physical system, it becomes very difficult to think about where choice comes in, where free will comes in."
Even consciousness, Wheatley notes, doesn't guarantee free will. You can be conscious of a choice, but could you have truly made another decision, given the brain activity that led to that choice?
Others are not so convinced. No experiment done so far rules out free will, argues Alfred Mele, a philosopher at Florida State University and author of "Free: Why Science Hasn't Disproved Free Will" (Oxford University Press, 2014).
There are two ways to think about free will, Mele said. One is that if a person is sane, rational and uncoerced, and they can make a decision, then they have free will. This is the definition on which our court system is based.
"It's clear that most people, some of the time, do make such decisions, so they would have free will, according to this conception of it," Mele said.
The free will most neuroscientists are discussing is a bit more nuanced. The idea is that, given everything leading up to a conscious choice — external influences, brain activity, prior experiences — people can still make a different choice.
"It could all be shaped, and probabilities could be influenced by the environment and so on, but when you make your decision, there are different possible ways you could go," Mele said.
Even when measuring very trivial decisions with the most accurate technology (electrodes actually inserted into the brain, as part of treatment for severe epilepsy), researchers can only predict decisions based on brain activity about 80 percent of the time. It could be that the equipment isn't good enough to yield perfect predictions, Mele said. Or it could be that there is a certain amount of randomness present in the brain – randomness that represents free will.
It's clear that a decision process begins in the brain far before we're aware of it, Haynes said. But what isn't as clear is whether that process can be stopped, and thus it isn't clear that these experiments rule out free will.
"Can you interfere with this process at any point in time? What are your chances of stopping this process?" Haynes asked. He and his colleagues are in the process of submitting a paper to a peer-reviewed journal on just this subject.
"All I can say is that the data we have at the moment suggests that people can control this process all the way right until the end," Haynes said.
Why free will matters
Free will or not, we are built to assume agency over our actions. It's not surprising to feel a need to solve the mystery of whether something that feels so intertwined with our being is actually real. But all of this free will debate also matters because it turns out people behave very differently when they think they're not accountable for their behavior.
In 2008, psychologists Kathleen Vohs and Jonathan Schooler published a study in the journal Psychological Science in which they asked people to take a math test on a computer. Because of a computer glitch, the participants were told, the answers to the questions would appear onscreen unless they quickly pressed the space bar before each question. Prior to the test, some of the participants read articles telling them that science had disproven free will.
Those participants, primed to disbelieve free will, were more likely to cheat by not pressing the space bar. Similar studies showed other bad behavior. A 2009 study by Florida State University psychologist Roy Baumeister found that when people were told they had no free will, they became more aggressive towards others, serving them spicy hot sauce even after being told that the recipients disliked spicy food.
On the other hand, the belief that there is no free will can also make people less punitive and less likely to seek vengeance, according to a 2014 article published in the journal Psychological Science.
"When you confront people and say, 'Oh, you don't have free will,' they can act badly," Wheatley said. But that's not the whole story, she said. "They can also become more compassionate."
The next step for neuroscience, Wheatley said, should be to investigate ever-more complicated decisions. The criticism of work like Libet's and Haynes' tends to run along the lines of "so what?" she said, because the choices are so primitive. Haynes' recent work, she said, has been delving into more complex choices.
"These decisions are still not 'where to go to college' or 'who to marry,' but they're getting more interesting," she said.
How does that 3-pound lump of electrochemical jelly inside your head form, store and recall an apparently infinite number of memories? This is one of the most enduring mysteries of neuroscience and an endless source of fascination for those investigating how the brain works.
But despite years of rigorous scientific investigation, we are still not exactly sure how memories are formed and where in the brain they reside. Research in the 1970s began to unravel the mystery, and eventually led to a theory about how neurons co-operate to store memories. A memory, the theory holds, is a pattern of neuronal connections. But this widely accepted idea is now being challenged by some recent experimental findings, which point to the neurons themselves as the storage sites for memories.
The current view of how memories are formed originates in a hugely influential 1949 book called The Organization of Behavior, by the psychologist Donald Hebb. "When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it," he wrote, "some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." In other words, synapses, or connections between neurons, become stronger and easier to form when cells are repeatedly active at the same time.
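Hebb's rule is often paraphrased as "cells that fire together wire together," and it translates into a very simple update rule. The sketch below is a deliberately minimal, idealized rendering of that idea in Python, not a model of real synapses: a single connection weight grows whenever the two model neurons happen to be active on the same trial.

```python
# Minimal, idealized Hebbian update: the A -> B connection strengthens only
# on trials where both model cells are active together.
# Purely illustrative; real synaptic plasticity is far more complex.
import numpy as np

rng = np.random.default_rng(0)
learning_rate = 0.1
w = 0.0  # strength of the A -> B connection

for trial in range(100):
    a_fires = rng.random() < 0.5           # cell A fires on about half the trials
    p_b = 0.8 if a_fires else 0.1          # B is far more likely to fire when A does
    b_fires = rng.random() < p_b
    w += learning_rate * float(a_fires) * float(b_fires)

print(f"A -> B weight after 100 trials: {w:.2f}")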
Hebb's idea was way ahead of its time. It was not until more than 20 years later that physiologists Tim Bliss and Terje Lømo discovered such a mechanism. They found that simultaneous, repetitive electrical stimulation of pairs of neurons in slices of rabbit brain increased the efficiency of the signaling between the cells, strengthening the synapses at which they communicate with each other for many hours after the initial stimulation.
This process, which Bliss and Lømo called long-term potentiation (LTP), has been studied intensively ever since, and today it is widely believed to be the neural basis of learning and memory. Studying the sea slug Aplysia californica, neuroscientist Eric Kandel and his colleagues found that when a short-term memory is formed, there's a transient increase in the strength of existing synapses within a group of neurons. For a long-term memory to form, the expression of genes in neurons changes, resulting in the production of new proteins and permanent synaptic connections. This work earned Kandel a share of the Nobel Prize in Physiology or Medicine in 2000, and ever since, the general consensus has been that memories reside in the pattern of synaptic connections and are subsequently recalled by reactivation of the same pattern of activity within a network of neurons.
But new research suggests that synapses aren't the center of action when it comes to memory — instead, rather than being held in synaptic constellations, long-term memories may be actually stored in neurons.
David Glanzman of the University of California, Los Angeles and his colleagues dissected sensory and motor neurons from sea slugs and grew them in Petri dishes. Under these conditions, the cells spontaneously form new connections to each other, and the researchers also added the neurotransmitter serotonin to the dishes. This induces a simple form of learning by strengthening the new connections.
When they looked at the cells 48 hours later, they found that the new connections had disappeared; so, too, had other connections that existed before the treatment with serotonin. But they also noticed that the cells had formed other new synapses, so that the overall number of connections between the cells remained the same. This suggests that the cells had retained information about the number of synapses they had formed, and that the precise pattern of connections is not essential for storing the memories.
In another set of experiments, the researchers sensitized the sea slugs to mild electric shocks, so that they would reflexively withdraw when exposed to them later. They then treated the creatures with a drug that inhibits protein synthesis. This prevents synapse formation, and should thus eliminate the creatures' memory of the shocks.
Remarkably, though, they found that the memories remained intact, or were reactivated in some way — even though the synapses thought to store them had been destroyed, the sea slugs still withdrew when exposed to more electric shocks later on.
These findings, the researchers said, suggest that established memories are not maintained by synapses, but instead are stored through changes within the cells themselves.
However, the findings don't mean that synapses are not needed at all, Glanzman says. "Think of the cell body or nucleus of neurons as the brain of the concert pianist, and synaptic connections as his or her hands and fingers. Memory is the music. There is no music without the pianist's hands and fingers, but the pianist's knowledge of how to play Chopin, for example, does not reside there."
The idea that the nervous system may be able to regenerate lost synaptic connections is a radical one, Glanzman says. If the findings extend to mammals and humans, one implication is that developing drugs to target synapses and erase traumatic memory (for treating PTSD) would be futile. On the other hand, in people with Alzheimer's disease, which destroys synapses, the lost memories may still exist as long as the neurons are still alive, Glanzman says.
"This is highly provocative work… [that challenges] the idea that synapses store long-term memories," says Steve Ramirez, a memory researcher at the Massachusetts Institute of Technology, "but it's never fully clear that what happens in a dish actually recapitulates what happens in the brain."
It's well known that protein synthesis, the recycling of the synaptic vesicles that store neurotransmitter molecules, and various other cellular processes are involved in memory, and it's possible that some kinds of memories are more dependent on these processes than on events taking place at synapses.
"Memories that involve highly complex learning procedures might differentially recruit cellular and synaptic processes," says Ramirez, "but this is an under-explored area in our field, and the biological basis of different kinds of memories is a tantalizing beast to study."
Neurons may also have as yet unknown mechanisms by which they can retain information about their connectivity patterns even when their synapses are abolished. Moreover, some kinds of memories may be more reliant on cellular processes than others.
"Memories indeed recruit synapses, but they also alter the biochemical cocktails existing in neurons, the physiological bolts of micro-lightning that neurons fire, and their overall architecture," says Ramirez. "All these processes seem necessary for memory formation, storage, and retrieval, but the question is, which are sufficient to reinstate particular memories?"
"Long-term memory storage may recruit global cellular changes to alter a brain cell in a long-lasting manner," he adds, "whereas short-term memories may recruit reverberant synaptic activity."
Sea slugs are popular with memory researchers because they have a relatively simple nervous system consisting of just 20,000 neurons. Even though they are separated from humans and other mammals by more than 500 million years of evolution, they share the same molecular machinery for memory. But they may nevertheless have memory mechanisms that aren't found in the human brain.
The conflicting results may therefore point to processes that are unique to sea slugs, or more generally to invertebrates. "Aplysia is beautiful because we can really go in and mechanistically study defined circuits with known structure and unparalleled control over the system as a whole," says Ramirez, "but the kinds of memories they store tend to be more reflexive and not the rich repertoire of memories that humans have [so we should] extrapolate with caution."
But Glanzman finds it very unlikely that he has discovered a novel memory mechanism specific to sea slugs. Generally, basic physiological mechanisms are preserved throughout evolution — and there's no reason to think this one is an exception.
"I am an unregenerate Darwinian in this regard. I believe that all of the basic cellular and molecular mechanisms of learning and memory will prove to be general, from worms to man," Glanzman said.
The brain is, as the old cliché goes, the most complex object in the known universe, and our understanding of how it works is at best rudimentary. "The idea that synapses store long-term memories is an overly simplistic model of a highly complex phenomenon," says Ramirez. And so this new controversy may simply reflect our profound ignorance of brain function.
This month, a jury sentenced Dzhokhar Tsarnaev to death for his role in the 2013 bombing of the Boston Marathon. The sentence raised an outcry from those who oppose the death penalty — particularly in Boston, where few support capital punishment — and a corresponding backlash from those who favor it.
No one, however, was suggesting Tsarnaev go free. Not punishing a convicted murderer? The idea seems alien, even disgusting.
Humans crave justice. We need it. As the most socially cooperative creatures on the planet, we open ourselves up to exploitation and danger every time we trust others, and so we carry expectations of fair play. As the Boston case shows, we may not always agree on what justice is, but the idea is woven deep into our minds. Researchers suspect that the urge to punish is built into our brains, and studies show that meting out a well-deserved punishment tickles our neural reward centers.
Humans start making judgments about crime and punishment young. A 2011 study published in Developmental Science found that even at only 3 months of age, babies preferred looking at characters that helped others over characters that hindered others. For infants this young, looking at an object or person predicts their likelihood of interacting with it, suggesting that babies are drawn to those who are cooperative and pro-social.
Further studies, many conducted at the Yale University Infant Cognition Center, show that babies are surprisingly sophisticated in making moral judgments. Take another 2011 study, this one published in Proceedings of the National Academy of Sciences. Yale researchers and their colleagues showed 5-month-olds and 8-month-olds puppet shows in which one puppet either helped or hindered another.
Next, this helping or hindering puppet was shown playing with a ball and dropping it, and a third puppet was shown either taking the ball or returning it to its owner. The 5-month-old babies were found to prefer the puppet who gave the ball back, regardless of whether the ball's owner was a helper or a hinderer — they just liked seeing nice interactions. But at 8 months, babies only preferred that the ball's owner get the ball back if that puppet had previously been helpful. In other words, 8-month-old babies liked to see do-gooders rewarded and wrongdoers punished.
Justice in the brain
So the building blocks of justice are in place at a young age. But where do they sit in the brain? Studies have pointed at a number of brain areas that seem to react to unfairness — including the insula, anterior cingulate cortex and temporoparietal junction — and yet another set of regions, including the prefrontal cortex, that process the judgment.
In one study, published in Science in 2004, researchers scanned people's brains with positron emission tomography (PET) while they decided whether and how much to punish another person.
The study used a simple economic game to set up the punishment scenario. Two players, A and B, are both given $10. A can send her $10 to B; if she does, the amount will be quadrupled, and B will get $40. B can then choose to split the $50 total between himself and A, or can be greedy and keep all the money.
But there's a catch for B: If he keeps all the money, Player A gets an opportunity to punish him. In some cases, the researchers arranged the experiment so that A would have to pay a price to punish B; in others, the punishment was free. Similarly, the punishment was sometimes symbolic and sometimes cost B real money.
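To make the game's structure concrete, here is a minimal sketch of its payoff logic in Python. It uses the dollar amounts described above, but the function and parameter names are illustrative rather than taken from the study, and the real experiment fixed the punishment conditions instead of leaving them as free parameters.

```python
# Minimal sketch of the trust-and-punishment game described above.
# Dollar amounts follow the article; everything else is illustrative.

def play_round(a_sends, b_shares, a_punishes,
               punish_cost_to_a=0, punish_cost_to_b=0):
    """Return the final payoffs (a_total, b_total) for one round."""
    a_total, b_total = 10, 10          # both players start with $10

    if a_sends:
        a_total -= 10                  # A transfers her $10 ...
        b_total += 40                  # ... which arrives quadrupled, so B holds $50

        if b_shares:
            split = b_total // 2       # B splits the $50 total evenly with A
            b_total -= split
            a_total += split
        elif a_punishes:
            # B kept everything, so A may punish; in the experiment the punishment
            # was sometimes free and symbolic, sometimes costly and real
            a_total -= punish_cost_to_a
            b_total -= punish_cost_to_b

    return a_total, b_total

# Example: B keeps the money, and A pays $2 to strip $10 from B
print(play_round(a_sends=True, b_shares=False, a_punishes=True,
                 punish_cost_to_a=2, punish_cost_to_b=10))   # -> (-2, 40)
```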
In all cases, the researchers found that a deep-brain area called the anterior dorsal striatum became active. This spot, particularly a section called the caudate, is known for making decisions that could lead to rewards. Intriguingly, the more active the caudate, the more likely participants were to punish harshly, the researchers found.
"High caudate activity seems to be responsible for a high willingness to punish, which suggests that caudate activation reflects the anticipated satisfaction from punishing defectors," they wrote.
The satisfaction of a punishment well-delivered highlights an important angle of the whole justice business: It's inextricably tangled up with our emotions.
Imagine you're told of one mountain climber's murder of another, carried out by cutting his rigging. In one scenario, the victim is simply described as dying from his injuries. In another, you hear that every bone in his body was broken and that his "screams are muffled by thick, foamy blood flowing from his mouth."
As you might imagine, when participants in a 2014 Nature Neuroscience study heard the latter description, they became much more keen to punish the murderer than when they heard the less-detailed version. Functional magnetic resonance imaging (fMRI) of the participants' brains as they made their decision offered a hint why: graphic descriptions of harm stimulate the amygdala, the almond-shaped brain region involved in emotional processing and fear. The graphic details also boosted the chatter between the amygdala and the lateral prefrontal cortex, an important region for decision-making.
Unsurprisingly, when people were told the climber's death was an accident, they were much less likely to want to punish his climbing partner — even if they heard a graphic description of the injuries. In these situations, medial prefrontal and dorsomedial areas of the brain leapt into action, suppressing the amygdala and its cognitive fear-mongering. The brain, in other words, has a built-in brake on our thirst for punishment.
It's this push-and-pull between leniency and retribution that makes up our sense of justice. And that's where things get messy.
Should a killer like Tsarnaev be led to the execution chamber? Or spend his life in near-isolation in a supermax prison? Not even his victims agree. The parents of an 8-year-old killed in the bombing asked for capital punishment to be taken off the table. Other victims who lost limbs, along with the father of a 29-year-old woman who died in the bombing, told reporters they were pleased with the sentence.
Where you stand on Tsarnaev's death sentence and other moral questions may be a matter of gut feeling. Psychologists have found ample evidence that people make moral judgments first by intuition, only going back later to fill in a rationale. And in doing so, they don't always treat all the evidence equally. A famous 1979 study on attitudes toward the death penalty, for example, found that people accepted information that supported their prior convictions about capital punishment without question. When presented with evidence that contradicted their position, by contrast, they picked it apart. In other words, people cherry-picked what information they would seriously evaluate.
"The reasoning process is more like a lawyer defending a client than a judge or scientist seeking truth," social psychologist Jonathan Haidt, a professor at New York University, wrote in a 2001 paper on the topic.
People usually fail to notice that their reasoning is muddied by bias and emotion, said Peter Ditto, a psychologist at the University of California, Irvine, who studies what he calls "hot cognition."
"We think we're Joe Friday, just thinking about things dispassionately, but there is overwhelming evidence that that's not the way cognition works, " Ditto said.
Questions of morality are ripe for this sort of emotion-driven processing, Ditto said, because humans are wired to care deeply about what is right and what is wrong.
That our brains try to cling to their beliefs even in the face of contradictory evidence may not even be a bad thing. Our view of the world requires some sort of consistency, and we have to use the past to inform the future, Ditto said. We also need to pay close attention to unusual or unwelcome information, because it could constitute a threat. Unfortunately, as a result, that often means that we over-scrutinize information that doesn't fit our worldview, while letting supporting information glide by, unexamined.
Add to this emotional stew the "just world fallacy," which is the cognitive bias that tells us the world is inherently just and thus people reap what they sow, and the concept of true justice gets murky fast. Nevertheless, even if no one can agree on what it is, the belief in justice is part of our brains — and perhaps an indispensable part of who we are.
In 2012, seven-year-old Charlotte Neve had a brain hemorrhage while she was sleeping. She was rushed to the hospital where doctors performed surgery to stop the bleeding, but afterwards she had several seizures and slipped into a coma from which no one was sure she would recover. Charlotte's mother, Leila, was at her bedside listening to the radio when Adele's hit "Rolling In The Deep" started playing. Leila and Charlotte had sung the song together many times and, as Leila sang along to her unconscious daughter, she saw Charlotte smile. The doctors were stunned. Over the next two days, Charlotte recovered more of her faculties—she could talk, focus on colors, and get out of bed.
Though cases like Charlotte's are rare, music is one of several triggers that have been known to occasionally rouse patients from unconsciousness. The exact mechanisms are still a mystery, but using new technologies, researchers can observe better than ever before the ways in which music can stimulate key parts of the unconscious brain.
There are two main reasons why someone would go into a minimally conscious state: a traumatic injury from an outside force, or deprivation of blood or oxygen to the brain, as in cardiac arrest or stroke. On the surface, these events appear to have the same result, but they affect the brain tissue in different ways. In traumatic brain injuries, the most common cause of comas, the injury usually affects the axons, the long fibers that transmit signals from one brain area to another. But in the case of a stroke or cardiac arrest, a lack of oxygen damages the neurons themselves. That means that the prognosis for people whose comas have different causes is very different: "Neuronal injury in general is more likely irreversible, whereas axonal injury may be partially reversible," said James Bernat, neuroscience professor at the Dartmouth Medical School. Most patients in a vegetative state do not recover if they have shown no progress after a year.
Whether or not a patient will come out of a coma—and how long it will take—depends on factors such as the patient's age and what parts of the brain are damaged. Sometimes a person will spontaneously snap out of a coma if the brain has recovered enough to do so, Bernat said. But other times, parts of the brain essential to its function work just fine, but are isolated because the area around them is damaged. There might be a way to "stimulate a stimulatable area" that's still intact using less common neural pathways, Bernat said. Exposing a patient to different types of stimuli could help tap into those uncommon pathways, "and perhaps familiar music is one route," he added.
In recent years, researchers have been able to use newer tools like fMRI and PET scans to gaze inside the brains of comatose patients to assess their state of consciousness. In the first of many such studies, published in 2000, researchers put patients with "disorders of consciousness" (such as coma, vegetative state) in an fMRI machine and saw a significant increase in brain activity when the patient's name was called. Another study from 2004 found that infant cries elicited a similar amount of stimulation.
Earlier this year, a team of French researchers took the research a step further, publishing a study that looked more closely at how music affects the unconscious brain. They selected 13 patients who were in comas for different reasons, including trauma and cardiac arrest. For half the patients, the researchers played an excerpt of their preferred music; the other half listened to continuous background noise. Then, while measuring the electrical activity in the patients' brains with an electroencephalograph (EEG), the researchers called the patient's name. The patients who had listened to music first had much more electrical activity in their brains than those who had listened to noise, indicating that the music was perhaps stimulating the brain in a way other types of sounds couldn't. Those with traumatic brain injuries showed the most dramatic response. "These findings demonstrate for the first time that music has a beneficial effect on cognitive processes of patients with disorders of consciousness," the study authors wrote.
The researchers hypothesized that it is the autobiographical quality of music, its connection to our emotions, that elicits such a strong neurological response from patients. Our love of music could be wired so deep in the brain that it lingers underneath unconsciousness. While music itself is not a foolproof way to rouse someone from unconsciousness, it can lead to more activity in the brain, which could help stimulate pathways that might have been damaged before, or even activate alternate routes to parts of the brain essential to our consciousness.
As a result of the trauma, Charlotte was left with partial blindness and memory loss. In retrospect, her doctors agree that her brain must have already been healing imperceptibly before waking from the coma—as Bernat points out, the music didn't actually cause the brain to reconstruct the damaged pathways. But Charlotte's brain has continued to heal on its own; not long after she was sent home from the hospital, she was able to ride her bike and returned to dance class.
In the early 19th century, neuroanatomist Franz Joseph Gall believed that the cerebellum, the little attachment to the brain that packs more than half of the neurons in our head, was the "organ of the instinct of reproduction." The bigger it is, the stronger our libido.
But if you've ever lost your balance, or staggered home from a party after a few too many drinks, you'll know what happens when it isn't working properly. The cerebellum, which means little brain in Latin, has critical roles in controlling and coordinating movement, functions that it performs automatically and unthinkingly. Without a cerebellum, you would have a hard time walking straight or learning to ride a bike.
But some researchers now believe that the humble little brain has roles beyond just fine-tuning movement and may also contribute to higher mental functions such as thought and emotions.
"Most neurologists are still convinced that the cerebellum does not do more than coordination of sensorimotor functions," says neurologist Peter Mariën of Vrije Universiteit in Brussels, "but there are now thousands of well-designed studies that you can't neglect, and I think that's abundant evidence."
A thinking cerebellum
The roles traditionally ascribed to the cerebellum come mostly from animal experiments performed throughout the 19th century, which showed that cerebellar lesions invariably caused some kind of motor disturbance on the same side of the body as the damage.
Later, the neurologist Gordon Holmes studied more than 40 soldiers who returned from World War I with cerebellar injuries due to gunshot wounds. He noted that the effects of the lesions "fall almost exclusively upon the motor system," observations that confirmed the presumed role of the cerebellum and strengthened this traditional view.
Image shows a cross-section of human cerebellum. From Gray's Anatomy.
Sitting behind the brainstem at the top of the spinal cord, the cerebellum is perfectly positioned to read movement-related information as it enters and leaves the brain. It has two hemispheres, each consisting of a single sheet of tissue that is tightly folded like an accordion from front to back. These contain a number of distinct cell types, including Purkinje neurons, the largest cells in the brain, and granule cells, the smallest. Despite accounting for just 10 percent of the total volume of the brain, it contains more cells than all the other parts put together.
Cells in the cerebellum are arranged in a highly orderly manner. Granule cells have a single fiber that runs from left to right, while Purkinje cells have a large, flat and elaborately branched dendritic tree positioned perpendicularly to them. Each granule cell fiber thus runs through and connects with the dendrites of numerous Purkinje cells, each of which forms connections with hundreds of thousands of granule cell fibers.
Image: A drawing of Purkinje cell by Santiago Ramón y Cajal, 1899.
This arrangement of cells seems perfectly suited to coordinating the activities of muscle groups. The dense network of cells acts as a final step that smooths out rough, clumsy motor commands and allows for fine, precise movements.
But the organ could be doing the same thing to cognitive functions.
Evidence began to emerge about 25 years ago, when neuroscientists started using functional neuroimaging techniques such as fMRI and PET to study brain activity, and sometimes found that the cerebellum is activated during language and problem-solving tasks.
These findings were dismissed as anomalies, but since then, hundreds more such studies have reported activity in the cerebellum during mental tasks, leading one researcher to state that human functional neuroimaging has "generated compelling evidence that the human cerebellum responds to cognitive task demands… without setting out to do so."
Another line of evidence comes from clinical studies of people born without a cerebellum. Conditions like this are extremely rare and, not surprisingly, children who lack a cerebellum have great difficulty learning to walk, keeping their balance and performing fine movements. But apparently they also exhibit some differences in their intellectual and emotional skills.
Image shows brain scans of a person born without a cerebellum. Credit: Feng Yu et al.
Neurologist Jeremy Schmahmann of Massachusetts General Hospital and his colleagues have studied several dozen patients with rare diseases that affect the cerebellum. Their research suggests that damage to the rear portion of the cerebellum can impair executive functions such as planning, abstract reasoning and working memory, whereas damage to the anterior portion may lead to only small deficits in executive and visuospatial functions. They argue that these observations constitute a previously unrecognized condition, which they have named cerebellar cognitive affective syndrome.
A need for more evidence
Not everyone is taken by the idea of the thinking little brain.
"I'm not convinced by the clinical evidence," says Mitchell Glickstein, emeritus professor of neuroscience at University College London. "The lesions aren't restricted to the cerebellum, and these patients probably have vascular damage that's being ignored." He adds that he has seen clear data from children who have had cerebellar tumors removed: "They have a tendency towards more labile emotions, but there's not a hint of any intellectual deficit."
Glickstein is also skeptical of the neuroimaging and connectivity data showing that the cerebellum lights up during nonmotor tasks. "Many of the frontal areas connected to the cerebellum control eye movements," he says. "Almost everything we do is preceded by eye movements, but nobody ever controls for that in their experiments."
Or, the cerebellum could be just looking out for the next motor function it has to manage, Glickstein said. "It wants to know what's going to happen in the next few seconds of your life. It wants to organize, arrange and plan movements without thinking about them."
According to Glickstein, the evidence that the cerebellum plays a role in cognition is rather weak. "If I saw an unequivocal cerebellar lesion associated with an unequivocal cognitive deficit, then I'd believe it, but I haven't seen anything like this so far."
Mariën says the problem lies in how and when patients with cerebellar lesions are tested. "The brain might be able to compensate for cerebellar lesions quite quickly," he says. "If you see patients one or two months after a cerebellar stroke, there's a big chance that there's already been some compensation."
A little brain that does a little bit of everything
If the cerebellum has a role in cognition, it seems to be a discreet one. Patients show deficits that are too subtle to detect using conventional tests designed to assess cortical functions, Mariën says. "There's a need to develop fine-grained tools that are able to discriminate its functional roles."
More recent functional neuroimaging studies have found that multiple circuits connect the cerebellum to areas of the prefrontal cortex known to be involved in complex cognition, which suggests that the organ may be able to modify the activity of prefrontal areas.
And it is possible that a disconnect between the cerebellum and these areas could lead to cognitive symptoms. For example, Mariën and his colleagues recently reported the case of a patient with cognitive symptoms linked with a brain lesion in the brainstem. The lesion was in a spot that contains numerous pathways connecting the cerebellum to the rest of the brain.
"The cerebellum on itself does not have a seat for cognition," Mariën said. "But parts of it are connected with primary areas in the brain involved in cognition. If you disconnect the cerebellum of input from these areas, then its modulatory role is lost."
Looking at the cerebellum as part of a network rather than a stand-alone attachment to the brain could perhaps reveal the true nature of this old part of the brain.
"I expect new diagnostic tools and the further development of neuroimaging techniques will further our insights into its cognitive functions."
As an animal lover with a special affinity for cats, every few weeks I receive an article from a concerned friend, claiming that the Toxoplasma parasite in cats is making us crazy and killing more than a million people each year by driving them to car crashes, pushing them to violent suicides, boosting their risk of brain cancer, or destining their brains for schizophrenia or neurosis.
So cats are obviously the devil and turning me into a crazy person.
But I've always been too proud of the incredibly complex human brain to buy into this idea. And I've always had a nagging skepticism — this just sounds too bad to be true. Like anything else in science, we need extraordinary evidence for such extraordinary claims about a mind-controlling parasite being the source of all evil.
So how certain are we that the Toxoplasma parasite is really able to invade our brains and take control?
"This indeed is a controversial area of investigation," said David J. Bzik, a professor of microbiology and immunology at Geisel School of Medicine at Dartmouth College in New Hampshire. "The evidence that Toxoplasma affects human behavior is extremely dubious and sketchy at this time, with the exception of an immunocompromised person."
As it turns out, even when it comes to mice and rats, what the Toxoplasma parasite actually does to its host is unclear and remains a matter of debate among researchers in the field. Some argue that theories about this parasite need a "rethink," and some go as far as asking whether the widespread narrative of the Toxoplasma parasite is merely fiction.
The story of Toxoplasma, and what it really does to rats
Here's what we know about the single-celled parasite Toxoplasma gondii (T. gondii): it lives and reproduces in cats' intestines, but it can infect other animals and humans who eat undercooked or raw contaminated pork, lamb or other meat, or come in contact with feces from an infected cat. It's estimated to infect 30 percent of the world's population. In unborn babies and in people with compromised immune systems, the infection can be devastating, but among healthy people, very few show any symptoms because their immune system efficiently attacks the parasite and keeps it in a "latent form" inside cysts, unable to cause illness, according to the Centers for Disease Control and Prevention.
From here, a huge debate begins over a controversial theory. The most outspoken advocate on one side is Jaroslav Flegr, a Czech scientist who suspects our minds have been taken over by parasites. The theory Flegr subscribes to holds that if the Toxoplasma parasite finds itself in a rat, it needs to get back into a cat's intestines, because that's the only place it can reproduce. So in a bid for survival, the parasite manipulates the behavior of the infected rat in order to increase its chances of being caught and eaten by a cat. For example, it changes the rat's brain so that the animal becomes less terrified of cats and even attracted to feline odor.
This is an appealing story for sure, and a widely accepted view, but it lacks solid evidence.
"In rodents, one of the theories has been that dopamine levels are increased in the brain following infection. The evidence for this is sketchy at best, and a recent paper from a prominent Toxoplasma lab, by Dr. David Sibley at Washington University in St. Louis, could not replicate these findings," Bzik said. Moreover, it's been proposed that the effects of Toxoplasma cysts on the brain's dopamine system are due to an increase in an enzyme called tyrosine hydroxylase, but Sibley's study didn't find evidence for this connection.
So what does the Toxoplasma parasite really do to rats? In an article published in Trends in Parasitology in 2013, Andrew Thompson and his colleagues from the School of Veterinary and Biomedical Science at Murdoch University in Australia present a round-up of the findings about Toxoplasma infection in rats.
They found that studies have reported a range of behavioral changes in rats infected by the Toxoplasma parasite. Some of these changes increased rodents' risk of getting hunted by cats, including increased activity level, decreased anxiety, impaired motor performance and reaction time, and inappropriate responses to cat odors.
However, the researchers found that these findings are not consistent across all studies. Some studies didn't find a change in rodents' behavior that would turn them into easier prey for cats.
Moreover, some studies also observed behavioral changes that are not related to enhanced predation, including impaired learning and memory, and changes in dominance, social interaction and mate choice. (The researchers provide a complete list of these observations in a reader-friendly table in the article.)
Could these behavioral changes be coincidental? The "perfect" story of the Toxoplasma parasite and how it makes rodents less afraid of cats has made it tempting to believe that a logical evolutionary mechanism is behind the phenomenon. But given how inconsistent the observations are, there's reason to think that what Toxoplasma does to rodent behavior may happen by pure chance, the researchers said.
Another interesting piece of evidence comes from a different parasite, called Eimeria vermiformis. Studies have found that mice infected with E. vermiformis, too, seem less afraid of cat odor. But in this case, unlike the Toxoplasma parasite, E. vermiformis doesn't need to get back to cats to reproduce. In fact, if the mouse is caught by a cat, the parasite will die too.
Together, these findings suggest it's possible that if infected rodents become less afraid of cat odor, it's just a coincidental side effect of a general reduction in anxiety and fearfulness, the researchers said. It may have nothing to do with Toxoplasma's master plan for survival.
Mice and men are somewhat different
Virtually all we know about Toxoplasma infection effects in the brain comes from studies in rodents.
Humans and rodents have a lot in common, but findings from animal studies don't always directly translate to humans. Moreover, Toxoplasma infection looks quite different in rodents. It visibly takes over their brain tissue, and their immune system doesn't seem to do a great job at keeping the parasite at bay.
"There is no doubt that Toxoplasma infection of mice alters their behavior but it does need to be kept in mind that Toxoplasma infections in rodents can be severe and chronic and this can cause abnormal brain pathology from primary infection or from the recurrent infections that can and do occur in the brain of rodents," Bzik said.
In contrast, humans with a healthy immune system control Toxoplasma infections extremely well, Bzik said. "Immune control of Toxoplasma in human brains is excellent and you only see re-emergent infections in severe immune deficiency," he said. Even in HIV/AIDS, in which the virus gradually destroys the immune system, Toxoplasma is actually one of the last infections to occur. A reinfection in a person with a compromised immune system causes dangerous, often lethal encephalitis, and patients can show behavioral effects due to direct brain damage.
In a person with normal immune control, the parasite continues to exist in a latent form, in small cysts inside body tissues, such as muscles and the brain. But it is unclear to what extent such cysts reside in brain tissue of asymptomatic people, because, as one researcher put it, it is hard to find healthy volunteers for brain biopsy.
"There is usually no reason to look so it would have to be a retrospective study," said Gustavo Arrizabalaga, an assistant professor at Indiana University.
Some evidence that latent parasites exist in the brain comes from occasional case reports of people who get an infection when their immune system gets temporarily shut down for an organ transplant or cancer treatment.
"The fact that these individuals get toxoplasmosis in the brain would indicate that these people harbored the cysts when they were healthy," Arrizabalaga said.
What about observational studies?
Consistent with Flegr's argument, there's a host of studies linking Toxoplasma infection with outcomes in humans, such as higher risk for car accidents, brain cancer, schizophrenia and suicide. (One study linked the parasite with a positive outcome, having better control of actions.)
However, the findings of these studies are correlational, and don't prove a cause-and-effect relationship, which makes it difficult to draw definitive conclusions. For example, someone with schizophrenia might struggle to keep up good hygiene and so may be more likely to acquire toxoplasmosis.
Also, there may be other factors behind the link.
"People who are infected with Toxoplasma are most likely the ones who are more likely to eat raw meat and to be infected," Bzik said. "So the behavior association is just as likely to be due to a pre-existing behavioral change linked to the desire to eat raw meat."
Moreover, some of the studies suffer from methodological problems, such as small sample sizes or a failure to control for confounding factors. And the results are sometimes open to different interpretations. For example, in a study that linked the parasite to suicide risk, researchers looked at 517 women who had tried to kill themselves, 78 of whom had used violent methods such as guns. The researchers found that women who had once been infected with Toxoplasma were 1.8 times more likely to attempt suicide by violent means than uninfected women. Eighteen of the women in the whole sample succeeded at killing themselves, and eight of them had once been infected.
"The scientific problem with every one of these studies is that the populations studied are too small to gain meaningful insights," Bzik said. "With the human stories, there currently is no hard or definitive evidence that Toxoplasma causes behavioral changes at this time. But it makes for really nice and sometimes fearful stories that are widely publicized."
It could be that these findings are true. It could be that my cat-controlled brain is trying to convince you otherwise. And remember that Toxoplasma is a nasty parasite that can cause severe disease in vulnerable people. But clearly, before making a final call on how it affects our brains, we should pursue more evidence. And before understanding the effects of Toxoplasma on human psychology, at the very least, its behavior in rodents should be better studied, researchers argue.
"In light of the questionable assumptions and the inconsistent evidence that underlie the accepted dogma, we believe the effect of T. gondii on rodent behavior is not yet well understood," Thompson and his colleagues wrote.
"Given that research into human behavior is based at least partly on findings in rodents, it is vital that we have a good understanding of how rodent behavior is affected by T. gondii, before we extrapolate to other species."
Reality is real, but what you see, that's all in your head.
For at least ten years, Jennifer Aniston has been, in a way, part of a furious debate among neuroscientists about how we encode our memories.
Her involvement began in 2005, when UCLA neurosurgeons were preparing a patient with debilitating epilepsy for surgery. The doctors had implanted electrodes into the patient's hippocampus, a brain structure known to be critical for memory, to locate the abnormal tissue causing his seizures. As the patient was lying awake, waiting for the electrical storm to sweep across his brain, researchers ran some exploratory experiments and discovered something remarkable: a single neuron that only fired in response to images of the actress.
It seemed as if this one neuron in the patient's entire brain was devoted to Jennifer Aniston.
The researchers then found even more cells that fired in response to particular celebrities or objects, such as Halle Berry, Luke Skywalker, the White House or the Eiffel Tower.
It seemed that the scientists had discovered "grandmother cells," also called gnostic cells, the hypothetical neurons whose name was coined by cognitive scientist Jerry Lettvin in the late 1960s during a thought experiment. Single cell recordings from the brains of cats and monkeys had just been developed, and scientists learned that the brain processed visual information hierarchically, with cells at each successive stage becoming more and more specific in what they respond to. Perhaps, at the top of the hierarchy, there was a single, highly specific neuron that would only respond to pictures of a single person, and thus enable you to recognize your grandmother.
Do we all have a Jennifer Aniston neuron? (Does she have one too?)
Recordings taken from the brains of epileptic patients seemed to be consistent with the idea of grandmother cells. However, it also seemed that such cells code for concepts, rather than individuals. The Jennifer Aniston neuron, for example, also fired to Lisa Kudrow, her co-star in the TV series "Friends," and the Luke Skywalker neuron also responded to images of Yoda. Rodrigo Quian Quiroga, who led the study, dubbed such neurons "concept cells."
But it seems unlikely that there's only one cell coding for a memory, be it a celebrity or a concept. What would happen if this cell died? And do we even have enough neurons to represent all the concepts we encounter in the world? Quiroga and his colleagues proposed that it is more likely that a relatively small number of neurons, say, 20,000 neurons, code for a memory, and the so-called grandmother cells they found belonged to this "sparse network."
While not as extreme as having a single neuron devoted to each celebrity you know, the idea of sparse networks contradicted an alternative theory, which held that memories are distributed across many millions of neurons.
How sparse are our memory holders?
The question of sparse versus distributed memory networks was recently put to the test in a new study, published April 1 in the Journal of Neuroscience, and the findings suggest the truth may be somewhere in between.
Andre Valdez of the Barrow Neurological Institute in Phoenix, Arizona and his colleagues recorded the responses of 432 single cells in the brains of 21 epileptics while they viewed photographs of animals, landmarks and people, which were chosen to closely match those used in the earlier studies.
The researchers found that many of these cells responded to more than one of the objects in the photos. A significant proportion of the cells fired in response to multiple objects, while only a few acted like grandmother cells and responded selectively to just one.
"A substantial fraction of the neurons encoded the objects that were presented, and the majority of those encoded two or more objects," says Peter Steinmetz, senior author of the study. "Prior research argued in favor of localist, Grandmother cell-like coding, [but here] the coding appears more distributed."
The earlier experiments involved showing patients the same images over and over again, up to 50 times, but this time the researchers adopted a slightly different approach. "We decided to use the same type of stimuli, but showed them a limited number of times to see what the coding would look like in a more natural state," Steinmetz explains.
Thus, one possible explanation for the differences between the results is that each study looked at different stages of the process by which memories are encoded. New memories are thought to be encoded by "fuzzy" neuronal networks that later refine their responses to become more selective.
"The idea would be that hippocampal neurons initially code more broadly but then become more tuned to things that need to be remembered," says Steinmetz. Neuronal activity would thus appear more distributed during earlier stages, and become more localized as the cells fine-tune their responses. "The question is how distributed is the coding?"
One way the researchers are planning to address this is to compare what is called population sparseness, or the activity of a small fraction of neurons in a population in a given time frame, with lifetime sparseness, or the activity of single cells over time as they respond to small sets of images.
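To make the two measures concrete, here is a small illustrative sketch, not the study's actual analysis pipeline, that treats the recordings as a binary cells-by-images response matrix and computes a simple version of each quantity: the fraction of cells active for each image (population sparseness) and the fraction of images that drive each cell (lifetime sparseness). The matrix is randomly generated, so the numbers mean nothing beyond illustrating the bookkeeping.

```python
import numpy as np

# Illustrative sketch only: a binary (cells x images) response matrix,
# where True means a cell fired significantly to that image.
rng = np.random.default_rng(0)
responses = rng.random((432, 100)) < 0.05    # 432 cells, 100 images (made up)

# Population sparseness: for each image, what fraction of cells respond?
population_sparseness = responses.mean(axis=0)

# Lifetime sparseness: for each cell, what fraction of images does it respond to?
lifetime_sparseness = responses.mean(axis=1)

print("mean fraction of cells active per image:", population_sparseness.mean())
print("mean fraction of images driving each cell:", lifetime_sparseness.mean())
```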
Steinmetz spent eight years collecting single cell recordings from epileptic patients at the Barrow Neurological Institute in Phoenix, and now directs the newly established Nakamoto Brain Research Institute in Tempe, Arizona, which is dedicated to analyzing the data.
"We will be examining some of our existing data in terms of [population and lifetime coding] in the future, and we will also be examining whether multiple neurons tend to fire together during encoding and learning," he says. "We have over 3 terabytes of data to work on, so analyzing it should keep us busy for quite a while."
In a virtual reality experiment, scientists made people feel as if their bodies were transparent.
Designers have taken inspiration from the wrinkly surface of the brain and microscopic images of neurons to make dazzling jewelry, clothing and housewares.
In a bold move, MIT neuroscientist Nancy Kanwisher went bald to teach students a brain anatomy lesson:
Kanwisher first uses a 3D model of the brain to show some of the special areas that are dedicated to specific mental functions, such as recognizing faces or processing language. Then, in a surprising move, she shaves away all the "damn hair" in the way and has a grad student (who happens to be a talented artist) draw the cortex directly on her scalp, so that you can better see where these brain areas sit on an actual living person. And now, you may never forget.
Besides giving creative brain talks, Kanwisher studies the functional organization of the brain. Many of our mental functions depend on collaborative networks in the brain, with a lot of cross-talk between regions. But there are some cortical regions that are highly specialized for specific tasks, such as recognizing faces, written words, bodies and places. Kanwisher's lab is focused on studying such specialized areas and exploring what mental tasks "get their own special patch of cortex," and what that may tell us about the architecture of the brain and mind.
The spider may be itsy bitsy but its brain is nothing short of amazing. With just a poppy-seed-sized noggin, these arthropods employ sophisticated hunting methods, can find their way out of complicated labyrinths, and some have mating dance moves that take the breath away from their fellow spiders.
"Spiders are very smart, that's why we're studying them," says Ronald Hoy, a professor of neurobiology and behavior at Cornell University. "They use visual cues to steer by, and the kind of mazes that they can solve is considered to be pretty impressive for an invertebrate."
Jumping spider attacking in mid air. Credit: Gil Menda, Paul Shamble, Tsevi Beatus.
To be able to do such fine mental tasks, spiders need an elaborate nervous system. But they don't have much space in their tiny bodies and there's a limit to how small neurons can get and still function.
Perhaps as a solution to space limits, some small spiders have brains that spill out all the way into their legs. Scientists have discovered that the central nervous systems of the smallest spiders fill up almost 80 percent of their total body cavity, including about a quarter of the space inside their legs. In general, spiders have a two-part body in which the head and the thorax are fused into one big segment called the cephalothorax.
Insects, on the other hand, have a separate head, a separate thorax and a separate abdomen; in spiders, the nervous system is concentrated toward the front end of the combined cephalothorax. This means that in spiders, there's much more fusion of the nervous system than in most insects.
In terms of wiring, however, spiders follow the same sorts of rules found in both vertebrates and invertebrates.
"If you look at a section of spider brain you'll find that there are clusters of cell bodies with a cabling of the axons going from one part to another part and that's true of insects and that's true of us too," Hoy says. "Things are just more compact in a spider's brain because you're packing a normal head brain into the thoracic ganglion."
A male Phidippus audax, a common jumping spider in North America. Credit: Gil Menda
Another amazing feature of some spiders is their sophisticated visual systems. Jumping spiders, for example, have eight eyes, giving them a nearly 360 degree panoramic view, with two front-facing eyes that are as acute as human eyes. The visual combo allows these hunters to pursue and pounce on prey, much like cats do. But an interesting question for scientists is how the spider brain actually processes the visual information.
Hoy is part of one of the first teams to record the activity of neurons in a spider brain, a monumental feat because the insides of spider bodies are under pressure, like air in a balloon, and even the smallest incisions could make everything squirt out, leaving the critter to die. Using a very fine electrode that made a fast-healing hole in the spider's head, Hoy's team successfully punctured the tiny brain of a jumping spider and recorded neuron responses associated with visual cues, such as flies, their natural prey. This gave the team an unprecedented look into the microscopic brain that processes information from the jumping spider's eight eyes.
Researchers can now utilize this method to learn more about the complex noodle of these tiny critters. In over 250 years of spider research, scientists have identified more than 44,500 species of spiders, but estimate there are at least as many yet to be discovered.
"There are very few studies on spider brains," Hoy says. "We think it's really exciting that now it's possible to record spider brains and that others will follow up and really start to study it in more detail."
Now, let's all watch a peacock spider dance:
As humans, we're proud of our big brains. But our brains aren't nearly the biggest on Earth. That distinction belongs to the sperm whale, whose brain on average weighs 17 pounds, more than five times as much as a human's. Elephants also have large brains that weigh on average 11 pounds. However, these animals are huge — what happens if one takes into account brain size relative to body size?
One way to compare brains is the encephalization quotient (EQ), the ratio between actual brain size and that expected from the species' body size. An EQ above 1.0 indicates a brain larger than expected for the size of the animal. Humans are at the top of this measurement, with an EQ of about 7.5. Elephants are at about 2.0, chimpanzees at 2.5, and bottlenose dolphins at 4.14.
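As a rough worked example of how an EQ is computed, the sketch below divides an animal's actual brain mass by the brain mass expected for its body size. The allometric baseline (Jerison's widely cited relation, expected brain mass ≈ 0.12 × body mass in grams raised to the two-thirds power) and the body and brain masses are ballpark assumptions on my part, not figures from this article, so the printed values are only approximate.

```python
# Rough worked example of the encephalization quotient (EQ).
# The baseline relation and the masses below are ballpark assumptions,
# not figures from the article; treat the printed EQs as illustrative.

def eq(brain_g, body_g):
    expected_brain_g = 0.12 * body_g ** (2 / 3)   # Jerison-style allometric baseline
    return brain_g / expected_brain_g

animals = {
    "human":              (1350, 62_000),         # ~3 lb brain, ~62 kg body
    "bottlenose dolphin": (1600, 200_000),
    "elephant":           (5000, 3_000_000),      # ~11 lb brain
    "sperm whale":        (7800, 40_000_000),     # ~17 lb brain
}

for name, (brain_g, body_g) in animals.items():
    print(f"{name:>18}: EQ ~ {eq(brain_g, body_g):.1f}")
# Prints roughly: human ~7, dolphin ~4, elephant ~2, sperm whale below 1,
# which is why relative measures tell a different story than raw brain weight.
```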
Still, the EQ alone is not enough to capture the intricacies of the brains of different species. Some brains are more convoluted, or wrinkly, than others, packing a larger surface area into a smaller space. Some brains contain more neurons than others by packing them more tightly into the same space. And then there are the specific patterns of organization in the brain, which enable some species to do things that humans simply can't.
In addition to being big, dolphin brains are highly elaborated and complex, with a cerebral cortex that has more folds and wrinkles than our own.
Emory University's Lori Marino and colleagues have examined the fossil evidence and determined that the ancestors of modern whales and dolphins experienced a sudden increase in brain size, which occurred 10 million years after the animals moved to a fully aquatic lifestyle. The new environment could have spurred a boost in dolphins' brain power, but the researchers suspect that the evolution of cetaceans' large brains was in fact driven by their complex social lives.
In the wild, dolphins live in large, complex groups and form different kinds of relationships, including long-term bonds, alliances, and cooperative networks that rely on cognitive abilities such as learning and memory. Some field studies have documented cultural learning of dialects, foraging sites, and foraging strategies, such as the use of specific tools. Dolphins' social skills and culture likely depend upon a sophisticated and complex communication system, involving vocal, visual, and tactile signals — all supported by a big, elaborate brain.
Jacopo Annese, director of The Brain Observatory, University of California San Diego, handles a dolphin brain.
An Elephant Never Forgets
When it comes to land animals, elephants possess both the largest brains and the greatest volume of cerebral cortex. But their EQ, while still above 1.0, isn't as impressive as that of cetaceans.
Benjamin Hart, a distinguished professor at the University of California, Davis School of Veterinary Medicine, argues that EQ is not the most relevant way to understand elephant brains. He suggests a better measure is the total amount of cerebral cortex that is not dedicated to body size-related functions (such as innervating skin and muscle) and various sensory systems. When these are subtracted, estimates suggest that elephants still have about double the volume of cerebral cortex available for higher mental activities that humans do.
The mental abilities elephants excel at include long-term spatial and social memory and empathy-related behaviors. Elephants range over many hundreds of miles, and must remember where vegetation and water resources are located over this range, and even when different resources are seasonally available.
"Elephants travel long distances and lots of time can go by between them visiting different areas of their range," Hart says. "For instance, if a severe drought hits, the matriarch, who's been around a lot longer, can recall where there was water outside their normal range 35 years earlier and lead her group there."
Drawing by Lorena Kaz via Frontiers in Human Neuroscience
Elephants also have keen social memories, recognizing the individual calls of over 100 different elephants from various families and clans. Over the years, numerous observations of wild elephants have found them to be highly empathic and sensitive. They form strong social bonds, console other elephants when they are upset, defend vulnerable individuals from danger, remove foreign objects from other elephants, and assist elephants who are disabled or whose mobility has been impaired. They even appear to grieve for dead family members and show more interest in the bones of elephants than the bones of other species.
Elephants' unique mental abilities may be explained not by the size of their brains, but by differences in the microscopic anatomy of their neurons and the types of connections between neurons.
In elephants, the neurons of the cerebral cortex are large — second only in size to those of the sperm whale — and they are spread further apart than those of humans and apes. Many of those neurons send connections to distant cortical areas. The primate brain shows more connectivity between nearby neurons. In contrast, the elephant brain appears to be more global and less compartmentalized into local areas than the primate brain, Hart says.
"The elephant brain not only differs in size, it differs in its makeup," Hart says. "It's put together differently than the primate brain."
Animals such as elephants and dolphins show that big brains support big mental abilities. No matter how you measure their brains, there's no denying the cognitive capacities of these creatures are impressive.
Every summer a mysterious neurological illness sweeps into communities in the Muzaffarpur district of Bihar state in India. Some 200 children, most of them under 5, are shaken awake with seizures in the middle of the night, appearing mentally confused and not fully conscious. The disease kills between a third and half of the children and leaves the rest brain-damaged.
This sudden malady has puzzled scientists for at least 20 years.
Multiple investigations failed to find a cause for the illness. It clearly affected the brain, but patients didn't show the typical signs of an infection. Researchers looked for viruses and checked for exposure to pesticides, but nothing was found to be the culprit.
Now a recent investigation by the Indian health authorities and the Centers for Disease Control and Prevention (CDC) has zeroed in on lychee fruits - or more specifically a toxin in the fruit - as a likely culprit.
Muzaffarpur is a lychee-producing region, and researchers had suspected something in the fruit might be causing the outbreaks, as they always coincide with the month-long harvesting season.
During the recent investigations, which spanned 2013 and 2014, investigators found an underlying explanation for patients' neurological symptoms. They noticed that victims of the mystery illness have severely low blood sugar levels when they arrive at the hospital. The brain needs an adequate supply of glucose to function and maintain cellular activities; when blood sugar is too low, the result can be seizures, coma, permanent brain damage or death.
The researchers identified the illness as a "hypoglycemic encephalopathy," according to their report, published in the 30 January issue of Morbidity and Mortality Weekly Report.
Upon finding that low blood sugar was at the heart of what was happening to patients, doctors started treating patients by giving them intravenous dextrose to raise their blood sugar. As a result, the mortality rate was reduced from 44 percent in 2013 to 31 percent in 2014.
The investigators believe that a toxin in lychee may be causing the crash in blood glucose levels. A compound found in lychee seeds, called methylenecyclopropylglycine, is known to cause low blood sugar in animal studies. Moreover, outbreaks of similar sudden neurologic illnesses occurring in lychee-growing regions of Bangladesh and Vietnam have been reported.
It may be surprising that lychee can cause such a severe crash in blood sugar, but experts have pointed out that victims were undernourished children who had low glucose stores. Heavy ingestion of the fruit by these children could result in toxic hypoglycemic syndrome.
To confirm the lychee hypothesis, the researchers are testing the urine and blood of patients for the lychee compound, while also looking for any signs of other toxins such as pesticides or heavy metals.
For now, public health recommendations are focused on reducing death rate by urging affected families to seek medical care, and ensuring hospitals and doctors can quickly identify and treat low blood sugar in ill children, the investigators said.
Imagine you get back home late after a really long, exhausting day, and you finally get to crash on your bed. You are just on the brink of falling asleep. That's when it starts — you feel like your head is about to burst as you begin to hear a disturbing, loud noise that sounds like a gunshot or an explosion, and it's making you really scared.
This is exactly the experience of people with a disorder aptly named the exploding head syndrome.
"Bangs on a tin tray," "enormous roar," "whiplash," and "fireworks" are some of the terms used to describe the sounds reported by people with the condition, according to studies.
Now, it turns out the syndrome may be fairly common in college students. A new study published March 13 in the Journal of Sleep Research shows that about 16 to 18 percent of the 211 students in the study had experienced the condition at least once in their lives, and some had experienced it repeatedly.
"Some researchers have speculated that it is a rare condition that only occurs in older adults, but results from the new study indicate that it is not uncommon in younger, college-aged individuals," says study author Brian A. Sharpless, an assistant professor of psychology at Washington State University.
During an episode, a person may get a rapid heartbeat and may also start sweating. About 10 percent of the people with the syndrome also experience "alarming flashes of light" that go along with the explosive sound, Sharpless said. In some people, symptoms occur when they are about to wake up, as opposed to when they are about to fall asleep, and for many the experience can be quite unsettling.
"People report feeling almost on the verge of panic," Sharpless said.
In the most severe case Sharpless has heard of, a woman experienced about seven episodes of exploding head syndrome per night. "Clearly this was very disruptive to sleep and quite troubling," he said.
Right now, there are a lot of unknowns when it comes to the prevalence of the disorder, due to the scarcity of data that's available. "It's partly because people who experience the syndrome may be hesitant to report these symptoms to doctors for fear of embarrassment or being viewed as 'crazy,' and many doctors don't specifically ask about these episodes due to the perceived rarity of them," Sharpless said.
Moreover, the majority of scientific literature on the topic consists of individual case reports, as opposed to studies that assess large numbers of people, he said.
Although it is not clear what exactly causes the disorder, several theories have been proposed over the years. For instance, some researchers have suggested that it may have something to do with ear dysfunctions. Others have said it could occur as a side effect of withdrawal from certain antidepressants (SSRIs) or psychoactive drugs called benzodiazepines, Sharpless wrote in a review published in 2014 in the journal Sleep Medicine Reviews.
According to the most popular theory, the disorder may be linked to neuronal dysfunction in the brainstem – the stalk at the base of the brain that connects it to the spinal cord – that occurs when a person is transitioning from wakefulness to sleep.
Examples of successful treatment that have previously been reported in case reports involved the use of the tricyclic antidepressant clomipramine or calcium channel blockers, according to Sharpless' review. Simply talking to patients and educating them about the condition may also be helpful.
The good news is that, "for most people, it is just an unusual experience that isn't going to be a problem in their lives," Sharpless said.
"For those of you who are troubled by it, it probably wouldn't hurt to consult with a sleep specialist," he recommended.
The relationship between marijuana use and sleep is complicated. But a new study finds that daily pot smokers have sleep and activity rhythms that are more in sync with the natural light-dark cycle provided by the rising and setting sun.
Previous research has found hints that marijuana may affect the human circadian rhythm — the internal clock that "tells" us when we should sleep. Studies have been scarce, but in a 2010 study on mice, the authors found that pot appeared to disturb the animals' circadian rhythm. The investigators hypothesized that the disturbance may help explain the altered sense of time that is often reported by people who smoke marijuana.
Some human studies have shown that pot smokers may spend more time in bed than non-smokers. However, smokers also appear to have shorter overall sleep times than non-smokers, according to other research. All these findings lend support to the idea that circadian rhythms in marijuana users might be altered in one way or another.
In the new study, published in March in Chronobiology International, the researchers followed 17 pot smokers and 13 non-smokers for three weeks and looked at their overall circadian rhythm, sleep and activity patterns. The smokers in the new study had used pot daily for at least a year. The researchers found that the circadian rhythm in pot users was in fact more in sync with the daily light-dark cycle, compared with non-users.
The researchers noted that they do not know whether this would also be the case in people who smoke pot occasionally.
It is also not clear why there seems to be a link between chronic marijuana use and an increased adaptability to the natural light-dark cycle, but the researchers have some ideas.
One of them is related to the concept of so-called "zeitgebers" – external cues that keep our internal clock in sync with the natural light-dark cycle, which is dictated by the rotating earth. Two important zeitgebers are the sun and artificial light, says study author Jeff Dyche of James Madison University.
"When we step outside and the light hits our eyes, it cues our brain as to the time of day," Dyche says. Another zeitgeber is for instance eating breakfast at a certain time or going to work at a certain time, he adds. So the researchers suspect that marijuana may also act as a zeitgeber, as users often choose to smoke it at the same time of the day.
"Almost all our pot users were using late in the evening, and this cued them to go to bed," Dyche says. Smoking in the evening may be the typical trend among marijuana users according to the findings of a 2006 study.
On the other hand, the non-smokers in the study did not have that extra zeitgeber in the form of pot, and perhaps that's why "their rhythms, on average, were not as good," Dyche says.
Interestingly, previous research has shown that other drugs, such as cocaine, can also impact the circadian rhythm. "This is likely because most people that do drugs every day tend to use them at about the same time," Dyche says. "Thus, the drug itself becomes a cue, or zeitgeber, to the user."
The reverse-wiring of the human eye has long been a mystery. Now researchers find how this curious wiring might actually enhance our vision.
YOUR BRAIN: ANSWERED