Did you know that people who rely on food stamps are less likely to make six-figure salaries than those who don’t? Or that people who don’t attend AA meetings are more likely to suffer from liver disease than those who do? It’s true. And you should be very concerned.
Jean Twenge’s story in this month’s Atlantic (Have Smartphones Destroyed a Generation?) is a textbook example of conflating correlation with causation. Her central thesis concerns a possible link between depressing teen statistics and the twin evils of smartphones and social media.
The trouble began five years ago, coinciding with what Ms. Twenge considers to be a seminal event: “It was exactly the moment when the proportion of Americans who owned a smartphone surpassed 50 percent.” That’s interesting, but do you know what else happened in 2012? A madman shot up an elementary school in Connecticut. A U.S. ambassador was assassinated in Libya. A roving robot successfully landed on Mars. London hosted the Olympic Games. Another madman shot up a movie theater in Colorado. Will Ms. Twenge write stories about these events as well? (More importantly: If not, why not?)
So right out of the gate she’s looking for a cause to fit an effect. She cleverly couches what amounts to an unsubstantiated claim within an unappetizing soufflé of unrelated causal statistics and devil’s advocate arguments. She uses sneaky words like “suggests” and “likely” and “unmistakable” to strike fear in the hearts of young parents. She throws out one set of correlated behaviors and statistics after another in such a way that clearly betrays her bias. She does all this—she walks right up to the line and stands on it—but she never actually comes out and says what the nervous reader will be tempted to think. (Notice the question mark at the end of her headline.) She is being very clever about not overselling her case.
Look, I understand she’s probably working with limited data. We don’t have statistical windows into every little bit of our lives, especially from the last few years. And studying the effects of technology on behavior takes time—decades, even. But it’s always regrettable to see an alarmist conflate correlation with causation to rack up book sales. And make no mistake—Ms. Twenge is selling a book.
Her story makes for a very long game of spot-the-logical-fallacy. Really it’s just one fallacy, but she breaks out some smoke and mirrors along the way to throw off the reader. Let’s dive in.
Recent research suggests that screen time, in particular social-media use, does indeed cause unhappiness.
Notice the word “suggests”.
Teens who spend three hours a day or more on electronic devices are 35 percent more likely to have a risk factor for suicide, such as making a suicide plan… One piece of data that indirectly but stunningly captures kids’ growing isolation, for good and for bad.
I wish I had a dollar for every time she used “more likely” or “less likely”, or some similar phrase, to scare the reader into her corner. Likelihood is not on par with causality. (I demonstrate this in my opening paragraph.) Just because a particular behavior is more or less likely among some subset of the population that happens to engage in some activity does not in itself mean a damn thing. Also, “stunning” or not, there’s no mistaking the meaning of the word “indirectly”.
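Since I keep coming back to this point, here’s a toy simulation that makes it concrete. Every number below is invented for illustration; none of it is Twenge’s data. A hidden trait (call it “distress”) nudges people toward both heavy device use and depression, while device use itself does nothing:

```python
import random

random.seed(42)

# Invented numbers, illustration only. A hidden trait ("distress")
# drives BOTH heavy device use and depression risk. Device use itself
# has zero causal effect on depression in this model.
N = 100_000
heavy, light = [], []
for _ in range(N):
    distress = random.random()                          # hidden confounder
    is_heavy = random.random() < 0.2 + 0.5 * distress   # distress -> screen time
    depressed = random.random() < 0.1 + 0.3 * distress  # distress -> depression
    (heavy if is_heavy else light).append(depressed)

risk_heavy = sum(heavy) / len(heavy)
risk_light = sum(light) / len(light)
print(f"Heavy users are {100 * (risk_heavy / risk_light - 1):.0f}% "
      "'more likely' to be depressed, with no causal link anywhere.")
```

The heavy users come out substantially “more likely” to be depressed even though, by construction, screen time causes nothing at all. That is the entire guarantee a bare “X percent more likely” statistic gives you.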
The correlations between depression and smartphone use are strong enough to suggest that more parents should be telling their kids to put down their phone.
“Strong” correlations might “suggest” things, but that’s all they do.
Teens who spend more time on social media also spend more time with their friends in person, on average—highly social teens are more social in both venues, and less social teens are less so. But at the generational level, when teens spend more time on smartphones and less time on in-person social interactions, loneliness is more common. So is depression. Once again, the effect of screen activities is unmistakable.
“More common” is the insidious “more likely” by a different name. An “unmistakable effect” is subjective and doesn’t live within driving distance of a causal effect. It’s curious that she goes out of her way here to point out a statistic that runs counter to her thesis (“Teens who spend more time on social media also spend more time with their friends in person”). There’s a method to this particular kind of madness, and it brings us to the ticklish topic of hedging.
Here Ms. Twenge plays devil’s advocate just enough to lull the reader into a false sense of security.
Of course, these analyses don’t unequivocally prove that screen time causes unhappiness; it’s possible that unhappy teens spend more time online.
Of course she could be wrong. What reasonable person wouldn’t admit this? She attempts to use this particular case (“it’s possible that unhappy teens spend more time online”) in a tortured proof by contradiction later in the same paragraph to bolster her insinuation. It doesn’t work.
Depression and suicide have many causes; too much technology is clearly not the only one.
Look how intellectually honest she’s being!
And the teen suicide rate was even higher in the 1990s, long before smartphones existed.
She pretty much pulls the rug out from under her own feet with this one.
Again, it’s difficult to trace the precise paths of causation.
She’s going to try anyway.
Smartphones could be causing lack of sleep, which leads to depression, or the phones could be causing depression, which leads to lack of sleep. Or some other factor could be causing both depression and sleep deprivation to rise. But the smartphone, its blue light glowing in the dark, is likely playing a nefarious role.
Here she artfully allows that she could be wrong, but there’s our old friend, the word “likely”. (And don’t get me started on the word “nefarious”.)
Significant effects on both mental health and sleep time appear after two or more hours a day on electronic devices.
This is one of the few sentences in the article that outlines an actual causal link between two observed behaviors. I don’t have any reason to doubt the claim, but it’s a rather insidious way to inspire confidence in the rest of the article, which relies almost wholly on correlative behaviors. And everyone knows the best place to hide a lie—or an unsubstantiated claim—is between two truths.
One study asked college students with a Facebook page to complete short surveys on their phone over the course of two weeks. They’d get a text message with a link five times a day, and report on their mood and how much they’d used Facebook. The more they’d used Facebook, the unhappier they felt, but feeling unhappy did not subsequently lead to more Facebook use.
This is the mangled attempt at a proof by contradiction that I mentioned earlier. Here she’s trying to suggest that because the causal arrow has been shown in this case not to point in one direction—unhappiness doesn’t necessarily cause more social media use—it must point in the other. She plays a bit of devil’s advocate along the way, but there’s a problem: Just because you can show that a causal arrow doesn’t point in one direction, it doesn’t follow that it must point in the other. In fact, the findings of this Facebook experiment have no bearing—none whatsoever!—on the existence of causal arrows in the first place.
Eighth-graders who are heavy users of social media increase their risk of depression by 27 percent, while those who play sports, go to religious services, or even do homework more than the average teen cut their risk significantly.
Oy. The causal link between exercise and improved mood (and less depression) is well documented, but has literally nothing to do with social media and smartphone use. It’s deeply disingenuous to link the two. What’s more, these may be overlapping circles in a Venn diagram; it’s possible that depressed teens who use smartphones and social media also go to church and play sports. The recent uptick in wearable technology nudges us to get out and move around in ways that weren’t possible just a few years ago. We can even track our workouts and share them online. And by the way, “increase their risk” is the insidious “more likely” by another name.
It may be a comfort, but the smartphone is cutting into teens’ sleep: Many now sleep less than seven hours most nights. Sleep experts say that teens should get about nine hours of sleep a night; a teen who is getting less than seven hours a night is significantly sleep deprived. Fifty-seven percent more teens were sleep deprived in 2015 than in 1991. In just the four years from 2012 to 2015, 22 percent more teens failed to get seven hours of sleep. The increase is suspiciously timed, once again starting around when most teens got a smartphone.
Oh look! More correlative behavior. Notice her use of the term “suspiciously timed”. Ms. Twenge boldly claims that “the smartphone is cutting into teens’ sleep”, but once again offers no evidence to back up the assertion (sorry, the insinuation) of a causal arrow.
/ / /
It’s interesting that she starts out her article with an anecdote about helicopter parenting. Right at the outset of her story is another possible cause of these alarming behaviors. And of course human behavior is shaped by a complex stew of contributing causes. Who knows, maybe teenagers are generally rattled by the twin school shootings of 2012. Maybe they really don’t like Barack Obama. Maybe it’s something in the water they drink. Smartphones and social media are disruptive forces to be sure, but I wish we could talk about them in less shrill ways. Ms. Twenge is sounding alarms that have been around for generations. Remember when home computers and always-on internet were supposed to make us feel more cut off from each other? Or what about the personal digital assistant? Or the calculator? Or the printing press? (Those hateful newspapers—soon no one will have conversations on trains!)
Ms. Twenge should be ashamed of this blatant emotional manipulation marinating in slippery statistics and logical fallacies, but something tells me she sleeps just fine.
One thought on “No, Smartphones Haven’t Destroyed A Generation”