
10 Myths About The Mind

It’s high time we put the most enduring myths about human behavior to bed, and see the mind—and the world—as it is.

Guy Billout

Myths come in many shades. Some notions are baldly incorrect: There is no evidence that humans use only 10 percent of their brains, for instance. Then there are misconceptions that contain a modicum of truth, or were once widely believed by experts. Some gain traction because they promise up-by-the-bootstraps solutions and a heavy dose of self-determination: It became faddish to tout 10,000 hours of practice as a surefire path to expertise. Plenty of misconceptions involve cleaving people into discrete categories, artificial distinctions that belie the complexity of the human mind. And there are myths that serve as hedges against an unjust world: If multiple intelligences existed and everyone excelled at something, the world would be a bit more fair. It’s high time we put the most enduring myths about human behavior to bed, and see the mind—and the world—as it is.

1. BIRTH ORDER

Personality is not shaped by whether one is a firstborn, the youngest, or an only child.

You’ve heard it from grade school on: Firstborn children become strong-willed, dominant adults. And as parental helpers when younger sibs come along, the eldest grow to be the most conscientious of the bunch. Younger siblings, seeking a place in the family, become experimenters, less conformist and conventional than are firstborns. These are among the ideas proposed by psychologists who have argued that the spots children occupy in the family pecking order have lasting effects on who they are.

Careful testing of these hypotheses, however, finds little to no evidence for them. An investigation published this year found no support for the posited effect of birth order on the propensity to take risks. In 2015, German psychologists analyzed data from thousands of people in the U.S., U.K., and Germany and found no significant correlations between birth order and traits such as agreeableness, conscientiousness, or imagination. In another study that year, psychologists Rodica Damian and Brent Roberts found only very small associations between birth order and personality, and some of them contradicted previous theorizing (later-borns, for example, were not more agreeable than firstborns).

The idea that being an older or younger sibling molds one’s personality seems intuitive, says Roberts, a psychologist at the University of Illinois: “It’s very hard to get out of your own experience—‘I do know about that one rebellious third child and that really responsible older child who took care of the other kids.’” But testing for a real effect, he says, means accounting, for instance, for what older and younger siblings are each like at the same age. A younger child may seem more rebellious today, but could cool down by the time he’s as old as the firstborn is now.

One birth order finding that might actually hold up is a slight IQ advantage in firstborns. The German team found a roughly 1.5 IQ-point increase, on average, for each position earlier in the birth order. They also found somewhat higher self-reported intellect ratings in firstborns. Why this might be is not yet clear. And even this finding might not be universal: A recent study of an Indonesian sample did not find any link between birth order and intelligence.

Guy Billout

2. SEX ADDICTION

It’s not an excuse for cheating, because it rarely (if ever) occurs.

People repeatedly cheat on their partners, engage in risky sex acts at a cost to their mental and physical well-being, and blow up their lives for a one-night stand. Sex can be damaging. But can it be addictive? You won’t find “sex addiction” in the current Diagnostic and Statistical Manual of Mental Disorders (DSM-5), used to make diagnoses in the U.S. The World Health Organization has added “compulsive sexual behavior disorder” to its diagnostic guide, but does not use the “A” word.

Whether a habit fits into the category of addiction can be judged against six criteria, says Mark Griffiths, a psychologist at Nottingham Trent University in the U.K. who researches addiction. The object in question—heroin, gambling, pornography, sex, or something else—is used to modify one’s mood, consumes one’s thoughts even in its absence, and presents a clear internal or interpersonal conflict. Critically, addiction leads to biological tolerance, so that the quantity of the substance or activity needed to achieve the same effect increases over time, and withdrawal involves psychological or physiological suffering—signs like irritability, nausea, and stomach cramps. A true addict is also at risk for relapse.

With regard to sexual behavior, Griffiths says, “the number of people who would actually reach all of my criteria are few and far between.” In many cases of supposed behavioral addiction, whether centered on sex, gambling, exercise, or another activity, “people are engaged in problematic behavior rather than addictive behavior.”

Adding to those criteria, Duke University psychiatrist Allen Frances, who chaired the DSM-IV Task Force, says that in genuine cases of addiction, “something that might have given pleasure at the beginning no longer gives pleasure but can’t be stopped.” For most people, including those for whom sex creates problems, the sexual act itself remains pleasurable. “There may be a very few people who just can’t stop, and it’s ruining their lives and family,” Frances says. But he cautions that overuse of the “sex addiction” label—including by individuals who are caught in affairs and may be eager to deflect blame—risks “turning bad behavior into a mental disorder.”

3. LEFT-BRAINED OR RIGHT-BRAINED?

You do not have a dominant brain hemisphere.

Are you creative, prone to sudden bursts of insight? Or perhaps your thinking is more deliberate and logical? A popular idea suggests that the right hemisphere dominates in the brains of intuitive thinkers, whereas analytical thinkers are “left-brained.”

The right and left hemispheres do specialize in different mental functions. But the notion that individuals rely more heavily on one or the other glosses over the complexity of the left-right relationship.

“The best-documented differences tend to be subtle,” notes neuroscientist Stephen Kosslyn, a professor emeritus at Harvard University. In the mythical left-brained/right-brained scheme, the left hemisphere facilitates language, while the right handles perception. “But in fact,” he explains, “language is distributed across the hemispheres. At least in right-handed people, the left hemisphere is typically better at using grammar when producing and understanding language, whereas the right hemisphere is better at parsing tone of voice to understand intent,” such as whether a speaker is joking. Likewise, perception involves both sides of the brain. Neuroimaging research, according to Kosslyn, shows that these processes recruit both hemispheres. Brain structure and function vary between individuals, and a left-right division is too blunt to capture that variation.

The myth, which has its roots in experiments with split-brain patients, persists in part because dichotomies are easy to grasp. “It makes sense that we have left and right parts of our brains and, analogous to our hands, that they have different capabilities,” states Kosslyn. However, while left-handed you may be, left-brained you are not.

4. LEARNING STYLES

Tailoring education to “visual learners” or “auditory learners” doesn’t make sense.

Some students, if asked, might say that they prefer studying a concept using illustrations, while others prefer verbal lessons. That does not mean, however, that the students will actually learn the material better given one mode or another. The idea that educators should match their instruction to students’ individual learning styles—which are often divided into visual, auditory, and kinesthetic or tactile categories—has been around for decades. But scientific reviews have found scant justification for the practice.

The endorsement of the learning-styles myth by many teachers may stem from “their (correctly) noticing how often one student may achieve enlightenment from an approach that seems useless for another student,” University of California, San Diego psychologist Harold Pashler and colleagues suggested in a 2009 paper. Students do of course differ in ability, and the manner of instruction can make a difference—certain students could potentially benefit from more structured teaching, for example. What studies have failed to show is that, for students in general, educators are more successful if they target putatively hands-on learners, auditory learners, or visual learners with distinct styles of instruction rather than giving all students the same kind of lesson—or one that involves different elements, such as a combination of words and visuals.

Researchers caution that the myth could even impede learning, as when “people think they themselves are limited and are not going to be able to learn in certain ways,” says University of Michigan psychologist Susan Gelman, who co-authored a recent study on people’s beliefs about the concept of learning styles. “Or, they may not try to learn a certain skill because they think it doesn’t match the way their brain works.”

Critics of the learning-styles concept stress that there are evidence-backed techniques for enhancing learning that could apply to virtually everyone. Some strategies work well for most students, says psychologist Shaylene Nancekivell, the lead author of the recent study. For example, she says, “a lot of students who struggle don’t realize that they have to practice retrieving information, not just taking it in.” There’s solid evidence that quizzing can aid memory. Unsubstantiated ideas about what differentiates students could distract from what boosts all of them.

Guy Billout

5. MULTIPLE INTELLIGENCES

“Intrapersonal intelligence,” “musical-rhythmic intelligence,” and others are not equivalent to intelligence as captured by an IQ test.

A knack for writing musical hooks is a valuable gift, and there is no doubt that it relies on cognitive ability. But attributing that skill to a specific, musical form of intelligence muddies the well-established construct of general intelligence. General intelligence, which IQ tests reliably assess, has proven a robust predictor of such life outcomes as educational attainment and later success.

The theory of multiple intelligences, introduced in the 1980s by Harvard developmental psychologist Howard Gardner, posed a popular challenge to the construct of general intelligence. Gardner proposed eight different types of intelligence, from linguistic and spatial to interpersonal, kinesthetic, and musical—and, a later addition, the capacity to make distinctions about the natural world.

“There is virtually no empirical support for it—never has been,” says Richard Haier, an emeritus professor at the University of California, Irvine, and author of The Neuroscience of Intelligence. “People have tried to develop tests for the so-called independent intelligences, and when they give these to a group of people, the scores on the various tests are correlated with each other, just like all other mental tests of ability.” Gardner, for his part, describes the different categories in his theory as “relatively independent.” He argues that his theory was based on “empirical evidence,” but not “on experimental evidence, because it can’t be proved or disproved by the usual experimental methods.”

It’s true, as Gardner and others point out, that there are different cognitive abilities on which the same individual can score relatively high or low. But there is an underlying association between mental abilities, and it is key to the concept of general intelligence, or what intelligence researchers call g. The g factor becomes evident in statistical analysis of individuals’ scores on different cognitive tests—if a person scores relatively high on one, she also tends to perform relatively well on the others. “The g factor is a good estimate of abstract reasoning ability, and when most people talk about who’s intelligent and who isn’t, they’re talking about abstract reasoning ability,” Haier says. Intelligence is not the only individual difference that predicts how one will fare in school and beyond—other traits, such as conscientiousness, have important roles to play. But the evidence for and predictive power of general intelligence is empirically strong.
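To make the statistical point concrete, here is a minimal simulation (purely illustrative; the five tests and their loadings are invented, not drawn from any study cited here) of how a single underlying factor yields the pattern Haier describes, in which every pair of test scores is positively correlated:

```python
# Illustrative sketch: one latent "general ability" factor produces
# uniformly positive correlations among test scores, the signature of g.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 5

g = rng.normal(size=(n_people, 1))               # latent general ability
loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.5])   # hypothetical g-loadings
noise = rng.normal(size=(n_people, n_tests))
scores = g * loadings + noise * np.sqrt(1 - loadings**2)

corr = np.corrcoef(scores, rowvar=False)
print(corr.round(2))  # every off-diagonal correlation is positive

# The largest eigenvalue's share of the correlation matrix approximates
# how much variance a single general factor captures across the battery.
eigvals = np.linalg.eigvalsh(corr)
print(f"first factor share: {eigvals[-1] / n_tests:.0%}")
```

Factor analysis of real test batteries runs this logic in reverse: the all-positive correlation matrix is observed first, and g is the single factor that best accounts for it.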

“Educators still love the concept of multiple intelligences, and they generally don’t like the g concept so much because it implies that there are certain cognitive limitations that students have,” Haier says. Needless to say, the world would be more just if that were not the case. “But the way I see it, educators could make good use of knowing about the g factor and what it means.”

6. 10,000 HOURS OF PRACTICE

Deliberate practice alone will not an expert make.

Practice anything—playing the flute, solving equations, writing fluently—and you stand to get better at it. To claim membership in the elite ranks of any field, practice is indispensable. But that does not mean that extensive, focused practice on its own bridges the gap between those who are merely good at something and those who are truly great.

In 1993, Florida State University psychologist K. Anders Ericsson and colleagues conducted a study in which violin students, who were sorted into three discrete tiers, were asked to estimate how much practice they had accumulated to that point. For the highest tier of violinists, the average estimate—at age 20—was about 10,000 hours, higher than the averages for the two lower groups. That and other studies have been cited as evidence for the importance of deliberate practice in achievement. Indeed, the researchers suggested that their practice-centric framework accounted for “the major facts about the nature and scarcity of exceptional performance,” without relying on innate ability.

“Characteristics once believed to reflect innate talent,” they wrote, “are actually the result of intense practice extended for a minimum of 10 years.” In his 2008 bestseller, Outliers, journalist Malcolm Gladwell summarized Ericsson’s work and coined the “10,000-Hour Rule,” declaring 10,000 hours “the magic number of greatness.”

Ericsson himself hit the brakes on this rule after it had become a cultural mainstay. The idea that a certain amount of practice automatically makes someone an expert, Ericsson wrote in 2013, is a “popularized but simplistic view of our work circulated on the internet.” Further, recent research has challenged the very claim that deliberate practice—let alone a predetermined amount—is the most salient factor in achieving high-level performance. In a 2014 meta-analysis, psychologist Brooke Macnamara and colleagues examined more than 80 studies of performance across domains that included sports, music, and education. They found that deliberate practice accounted for at most about one-quarter of performance differences. In a later paper, they reported that among athletes classified as “elite,” practice explained only about 1 percent of the variation in performance.
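A note on what "accounted for about one-quarter of performance differences" means: it refers to variance explained, the square of the correlation between practice and performance. A minimal sketch (the numbers here are invented for illustration, not Macnamara's data):

```python
# Illustrative only: a practice-performance correlation of r = 0.5
# corresponds to r^2 = 25% of performance variance explained.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
practice = rng.normal(size=n)  # standardized practice hours
# Performance = 0.5 * practice + noise, scaled so total variance is 1
performance = 0.5 * practice + np.sqrt(0.75) * rng.normal(size=n)

r = np.corrcoef(practice, performance)[0, 1]
print(f"r = {r:.2f}, variance explained = {r**2:.0%}")  # ~25%
```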

“When you’re explaining performance differences between individuals, practice is almost always important, but it doesn’t account for everything,” says Macnamara, now at Case Western Reserve University. She points to a study of chess players that found that some qualified for a World Chess Federation title after as few as 3,000 hours of practice, while others required more than 20,000 hours.

What matters besides practice? It depends on the pursuit, of course, but “intelligence seems to be important,” Macnamara says, as do “the age at which you start, the type of training, if you have a coach or not, and your working memory capacity. A number of factors play a role.”

One hour of practice is not necessarily going to result in the same amount of gain for two different athletes or musicians. “These myths tend to fall prey to the single-cause fallacy—we have this complex situation, but let’s explain it with one pithy, simple explanation,” she says. “And that’s always going to fall short.”

Guy Billout

7. MALE AND FEMALE BRAINS

The brains and minds of men and women differ in important ways.

Few would deny that men and women differ physically: While tall, muscular women abound, men are, on average, taller than women and have much greater grip strength. But their brains and behavior reflect no significant differences, argue many people, including some psychologists. Increasingly, it seems, it is de rigueur to reject or downplay psychological differences between the sexes—despite substantial scientific evidence that they exist.

Women tend to engage in more altruistic behavior and rate higher on certain measures of empathy. Men, on average, perform better on tasks in which they mentally rotate an object, while women can better remember the location of objects. Evolutionary theorists postulate that sex differences arose because male and female hominids faced different reproductive and survival pressures.

Men are also much more likely to be diagnosed with autism spectrum disorder, for instance, while rates of mood disorders and Alzheimer’s disease are higher among women. These sex differences can have important implications for understanding and treating disorders.

A recent review of sex differences in vulnerability to stress examined findings in humans and nonhuman animals on molecular as well as behavioral levels. Among those findings, notes co-author and Virginia Tech neuroscientist Georgia Hodes, is that “boys and girls, particularly adolescents, had different responses to experiencing post-traumatic stress disorder. Girls had internalizing symptoms, such as self-blame, and boys engaged much more in externalizing behavior,” including acting disruptively. It could be useful, she says, for adults to recognize that the same disorders can produce considerably different symptoms.

Sex differences can be important in the development of medications, too, Hodes points out—past efforts show that a drug tested on male animals won’t necessarily work for human females.

“No one’s saying that men and women are completely different beings. There is probably more that overlaps than is different,” Hodes says. “But we need to understand these differences. I think it becomes especially important when you’re trying to develop better treatments.”

8. THE DEPRESSION GENE

There is no single gene for depression, schizophrenia, or any other psychiatric disorder.

The likelihood that one will experience anxiety, an episode of major depression, or an autism spectrum diagnosis is to some degree dictated by genetics. “You have a much higher risk of having depression if you have a sibling or parent with depression, and the same is true for schizophrenia and basically all psychiatric conditions,” says Kevin Mitchell, a neurogeneticist at Trinity College Dublin. Accordingly, since at least the early 1990s, researchers in psychiatry have searched for specific genes that have a major influence on a person’s risk for mental disorders.

But scientists have failed to turn up reliable evidence that any single, common genetic variant matters much when it comes to mental illness. Earlier this year, a research team reported that none of 18 “candidate genes” for major depression—genes that past studies had suggested might have a meaningful association with the disorder—showed any such associations.

Today, geneticists know that it’s not one or even a handful of alleles that predispose people toward particular conditions, but a sea of them; psychiatric disorders are highly polygenic. “There may be rare mutations that put someone at high risk for these conditions, but the effects are modified by the genetic background that people have,” says Mitchell. That background is made up of hundreds or thousands of gene variants that appear widely in the population but to differing degrees in each individual. Each one contributes, at most, a very small amount to a person’s risk for having a disorder. The connection between one’s genetic profile and vulnerability is highly complex.

Also complicating the picture, Mitchell explains, is that specific genetic variants influence risk for a range of different disorders, rather than connecting one-to-one. “They’re not so specific. And that fits the epidemiology, because if your sibling has schizophrenia, your risk of schizophrenia is increased, as is your risk of depression. Your risk of ADHD is increased. Your risk of autism is increased. There’s a shared genetic basis for all these things.”

As the architecture of complex disorders is mapped, the many genetic variants associated with each disorder are weighted to create polygenic risk scores. Someday, when these predictive scores are refined and widely used, the candidate gene myth will be dispelled once and for all.
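As a rough sketch of how such a score is computed (everything below is hypothetical; real scores use effect sizes estimated from genome-wide association studies on far larger variant panels):

```python
# Hypothetical polygenic risk score: many common variants, each with a
# tiny effect, summed into a single weighted score for one person.
import numpy as np

rng = np.random.default_rng(2)
n_variants = 1000

# Per-variant effect sizes: individually tiny (invented for illustration;
# real weights come from genome-wide association studies)
weights = rng.normal(loc=0.0, scale=0.01, size=n_variants)

# One person's genotype: 0, 1, or 2 copies of each risk allele
genotype = rng.integers(0, 3, size=n_variants)

# The polygenic risk score is simply the weighted sum across variants
prs = float(np.dot(weights, genotype))
print(f"polygenic risk score: {prs:.3f}")
```

No single variant dominates the sum, which is the point: risk emerges from the aggregate, not from a "depression gene."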

9. THE FIVE STAGES OF GRIEF

People do not grieve in a set, predictable manner.

Denial, anger, bargaining, depression, acceptance. Those who have heard of these “five stages of grief” might understandably assume that they compose a roadmap to death and dying. Those coping with a death, the concept suggests, struggle to process it, grow angry, plead with God to prevent or undo it, sink into sadness, and, eventually, come around to living with the loss.

In reality, grief is not so regimented. Even the originator of the five stages, psychiatrist Elisabeth Kübler-Ross—who first used them to describe how terminally ill patients anticipate their own death—lamented in a later book that the stages had been “very misunderstood,” and noted that not everyone experiences all of the stages or follows them in the same order.

There is no doubt that bereaved people become angry and depressed about a loss, and “people do have a hard time believing it’s real,” says George Bonanno, a professor of clinical psychology at Columbia University. “What really happens is that people have to reconcile all of the memories and expectations in their life. Your brain is a prediction organ, so it predicts that that person is still there. You have to update your world, and that’s painful and difficult.” But he and other researchers reject the idea that such experiences should be treated as stages. “The problem,” he says, “is that when people don’t go through those stages, they start to worry they’re doing something wrong.”

Grieving people take many different paths; some clearly recover from loss more easily than do others. A months- or years-long stretch of grief-related symptoms that impedes normal functioning—such as intense emotional distress and painful yearning for the deceased—can signal that treatment is in order. “If you’re doing really badly after many, many months, then you have a pathology and you need help,” Bonanno says. “But most people don’t—it’s only about 10 percent.” His research on trajectories of recovery has found that a majority of study participants, months after a loved one’s death, show few or no symptoms. “It’s natural to be really upset when someone you care about dies,” he says. For most of us, he argues, that high intensity wanes. The idea of stages of grief, on the other hand, refuses to die.

Guy Billout

10. ATTACHMENT STYLE

Early interactions with parents do not critically determine how people relate to others when grown.

While many individuals have little trouble getting close to romantic partners, trusting them, confiding in them, and relying on them as a base of support, not everyone relates to others in this way. Some have a strong aversion to becoming entwined; others are prone to anxiety about how much they can truly rely on someone. Psychologists use the term “attachment style” to describe how individuals differ in the degrees of avoidance or anxiety they exhibit in their relationships and whether they are relatively secure or insecure about their bonds.

Attachment theory started as an exploration of the relationship between infants and caregivers, and studies suggested that some children show markedly anxious or avoidant behaviors after being separated from their parents. Given the appearance of such differences in early life, a common misconception about adult attachment styles is that they are essentially based on how one related to and was treated by parents as a child. But the connection between how one is raised and how one turns out is not as simple as it might look.

“People think of attachment, in some ways, as an inoculation,” says Jay Belsky, a researcher of child development at the University of California, Davis. “That if you’re secure or insecure as an infant or a toddler, that sets things up for the rest of your life. And that’s just an exaggeration.” The continuity between attachment characteristics in childhood and adulthood varies among individuals—and early neglect or abuse can, of course, cause long-term harm. But long-term studies find that average correlations between early and later attachment-related measures are moderate at most.

Early relationships could have a “developmental legacy,” Belsky says, but it depends on what follows. “Imagine that I am an insecure child, but I encounter teachers in a school system who are wonderfully supportive and caring and are patient and attentive toward me. Those experiences can reshape my internal working model, how I perceive, think about, and respond to the world.”

Attachment researchers such as R. Chris Fraley, of the University of Illinois at Urbana-Champaign, take a similarly broad view. “The reason why any one person may be insecure as an adult could be due to something in his or her recent history, such as experiencing a devastating breakup,” he explains. “Or it could be explained by a history of interpersonal interactions.”

Also, a suboptimal experience with a parent might affect the parent-child bond without coloring other relationships. “We know, for example, that children who do not grow up in the same household as one of their parents do not have as secure a relationship with that parent as children who do,” Fraley notes.

As is the case with other characteristics, multiple factors—including one’s genes—shape one’s orientation to parents or partners. It’s not simply a matter of whether a parent was sufficiently sensitive and caring. “I compare human development to meteorology,” Belsky says. “You might say, ‘It’s really humid, so it’s going to rain,’ but whether it rains depends on multiple factors. We, too, have a lot of moving parts.”
