By Galen Panger
If you ever have a chance to visit Facebook’s headquarters, one of the things that will jump out at you is the posters. There are so many posters. In big, red all-caps, they practically yell at you from the walls. “PROCEED AND BE BOLD,” bellows one. “STAY FOCUSED AND KEEP SHIPPING,” shouts another. “MOVE FAST AND BREAK THINGS” is perhaps the most famous of these primal screams.
It’s all part of an open propaganda effort to keep employees on their toes, to keep the company’s hacker values in plain sight. It sounds hokey, and it sort of is, but it also feels empowering. “FAIL HARDER,” urges one, without caveat. Well, okay, if you insist.
One of the cool things about the mind control effort is that employees get to participate in it. And so if your team thinks up a mantra it wants the company to adopt, it’s easy to organize a field trip for you and your colleagues to go ahead and print up a bunch and then plaster them all over campus.
When I did a stint at Facebook a couple summers ago, I was a mere intern on the user experience research team, but I decided to go ahead and organize a trip for me and all the other researchers, including the data scientists, to print up our own mantra. It would be softer than the others, less aggressive. And it would emphasize something that we researchers were supposed to know acutely: that there were humans on the other end when the company failed, when things moved too fast, and when things were broken.
Though it was still red, it wasn’t all-caps, and it was in a softer cursive. Not everybody liked it, but it spread, spawned other posters, and made a difference to some of the folks who came later.
Facebook’s hacker culture is truly something to behold, but it also means the company constantly steps in it. Facebook upsets a lot of people, and often. The culture makes the company more impervious to criticism, but in some ways also maladapted and unempathetic. And so Facebook and its culture will seemingly always be a source of tension — for us users, as well as for the people who work there.
On a return trip to HQ a few weeks ago, while waiting in the lobby, I noticed two new posters I hadn’t seen before. As if to underscore the company’s cultural tension, right next to a poster blaring “RUTHLESS PRIORITIZATION” was another quietly suggesting, almost pleading, “Empathy: Have Some.”
Fail Harder
A lot has been said about the Facebook emotion experiment and the ensuing backlash in recent weeks. For a study about emotional contagion, the spiraling outrage was awfully ironic. Once again, Facebook had stepped in it.
Surveying the damage on Techmeme, I found it hard to pin the outrage on any one cause. Is there something about psychology research generally that creeps people out? If so, I don’t really blame them. Psychologists love to point out unsettling things about human nature, like the fact that we’re vulnerable to persuasion and manipulation.
Did people feel betrayed by the lack of informed consent? You know, in psychology research, when people find out they’ve been an unwitting experimental subject, it’s not uncommon for them to feel duped. They’re at least surprised. The only distinction is that academics who experiment on subjects without getting their consent first usually tell people about it immediately afterward. They debrief the subjects and answer questions. They unruffle ruffled feathers. They may allow a subject to remove his or her data from the experiment. In some cases, they even offer follow-up services. Given that Facebook did nothing to inform subjects or make them feel whole again, it’s hard to blame folks for feeling violated.
The experiment also forced many people to contemplate, for the first time, the kind of persuasive power Facebook might surreptitiously wield around the world given its size and scale. You don’t have to be a Facebook skeptic to be concerned about a demonstration of that kind of power. Unlike the voting experiment Facebook ran in 2008, there were no visible signs here that an experiment was occurring, and no one would have known if the company hadn’t published the results. So yeah, I think it’s normal for people to feel a little unnerved.
And perhaps the experiment was another cold reminder that “we are the product.” One of the stated reasons Facebook conducted the experiment was to understand how emotions on Facebook might make the site more or less engaging. If Facebook can keep us glued to it longer, and therefore make more money, by amping up or down certain emotions from our friends, would it do this? Are the emotions we express in earnest just another commodity in the Facebook economy? Something about that feels like a loss.
On the other side of the firestorm were people who couldn’t see how the experiment was any different from your run-of-the-mill psychology experiment. Or, alternatively, how it was different from the widespread Internet practice of A/B testing, where you experiment with different variations of a website to see which is most effective at persuading visitors to buy, or download, or whatever the site’s goal is. Some of these experiments feel blatantly manipulative, like the headlines that are constantly tested and retested on visitors to see which ones will get them to click. We have a word for headlines like this: “click-bait.” But nobody ever hands out consent forms.
The every-which-way quality of the reaction, I think, comes in part from the fact that the study crossed academic and corporate boundaries, two areas with different ethical standards. It was unclear which standard to hold the company to. Facebook’s study received the academic imprimatur but, it seems, didn’t have to play by the same rules as other academic research. That said, given the backlash, the study now bears something of an ethical disclaimer.
Empathy: Have Some
Amidst the firestorm, one thing that no one commented on or seemed to acknowledge was the other reason Facebook conducted the experiment. According to Adam Kramer, the study’s lead author, Facebook was hoping to understand more about how it impacts our emotional well-being. Is it true, as is commonly feared, that the ostensibly happy world of Facebook actually makes us unhappy? That Facebook makes us worse off?
Responding to the furor over the study, Kramer wrote the following:
The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product. We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook.
If you were a researcher at Facebook, one of the greatest sources of tension in your job would probably be evidence that the product you’re pushing to half the world’s population actually causes them to feel “negative or left out.” That would be a pretty epic fail for a company that wants to “make the world more open and connected.”
I believe that Kramer is concerned with addressing the popular worry that Facebook makes us unhappy. Not just because I’ve met him but because, in the study, he seems adamant about refuting it. In discussing his findings, Kramer asserts that the study “stands in contrast to theories that suggest viewing positive posts by friends on Facebook may somehow affect us negatively, for example, via social comparison.”
Ruthless Prioritization
There’s good reason for Kramer and the other researchers at Facebook to be concerned that the ostensibly happy world of Facebook might actually be making us worse off. The logic of social comparison implies that when we see all the happy things people say about their lives on Facebook, we naturally compare ourselves to that and, if we don’t measure up, can end up feeling worse about our own lives.
The thing is, given the way people on Facebook tend to share only the good stuff about their lives, and not the bad stuff, we may all have the opportunity to feel like we don’t measure up. Adding to the effect, Facebook’s algorithms heavily promote big social comparison-inducing announcements from our friends. “I just bought a house!” they say, or “I just got a job at Facebook!” or “Look at me in front of the Taj Mahal!” Even the number of Likes on our friends’ posts is a probable source of social comparison.
One highly-publicized study on this topic suggests that the travel and leisure photos we post are a particular source of envy for our friends. Next time you hear someone say they’re posting vacation photos on Facebook just to make their friends jealous, let them know they probably will.
Another study employed text message surveys to measure how often people used Facebook and how they felt as a result, and found that the more people used Facebook at one time point, the worse they felt at the next. The authors of this study also finger social comparison as a culprit.
Social comparison is a good hypothesis, particularly because it’s been the subject of reams of research in the social sciences, dating back to the 1950s when Leon Festinger first proposed the concept. Social comparison has been demonstrated through a range of methods to affect our happiness, from brain imaging research on momentary rewards in the brain to socioeconomic surveys on longer-term life satisfaction. In fact, social comparison is often posited as the solution to what’s known as the “Easterlin Paradox,” which finds that, while our happiness increases with our income, societies that get richer do not tend to get happier.
This paradox results because, as Richard Layard writes in the UN World Happiness Report, people in a growing economy are only getting richer in absolute terms; on average, they do not become richer in comparison to others. Surprisingly but consistently, the satisfaction we receive from our own incomes depends a great deal on the incomes of others around us. We feel worse about our lives when the people we compare ourselves to are better off than us, and better when we’re better off.
This research on social comparison doesn’t necessarily paint humanity in the most glamorous light given what we’re all raised to think about coveting thy neighbor’s wife, or house. But social comparison appears to be fundamental to human nature, and if it can help explain the Easterlin Paradox, perhaps it can tell us something about the Facebook paradox, too.
There are other possibilities. Kramer’s study also mentions an “alone together” theory, wherein being constantly connected to one another actually makes us feel more lonely — that the connectedness of Facebook and other virtual means is a poor substitute for real, face-to-face connection.
Others might say that Facebook enables a kind of social transparency that isn’t always good for us. We can now see more vividly when our friends are hanging out without us, when our recent ex has found someone new, or when someone we thought was reasonable is actually kind of an ideologue. In contrast to social comparison, where Facebook is providing us with a slanted or artificial view of our friends’ lives that makes us feel bad, here, seeing the truth about our friends’ lives is what makes us feel bad. Perhaps it’s better if we don’t know certain truths about our friends.
Another major area of concern about Facebook pertains to loneliness. The number of people in this country who say they have someone to confide in has declined dramatically over the past few decades, leading to a rise in loneliness. Facebook may owe its success in part to this long-term trend. But while Facebook has enabled us to maintain a semblance of connection with an enormous number of people, perhaps it has distracted us from developing and maintaining the few, close friendships that really count for our well-being. Because when it comes to combating loneliness, quality trumps quantity. But Facebook is all about quantity.
Yet another hypothesis is that we can’t figure out how to stop using Facebook, but also don’t find it particularly worthwhile or meaningful, causing us to feel worse after an extended period of use.
Facebook has engineered the site through many experiments to be as engaging as possible, even addictive, but to the detriment of users’ ability to regulate their own use. This might be why some feel compelled to take a “vacation” from Facebook.
The tech industry at large, much like the food industry, has gotten much smarter over the years about how to create habit-forming products. Academics call all these new smartphone- and app-driven urges “checking habits.”
No one, perhaps, has been more successful at fostering these habits than Facebook: on average, we now spend twice as much time on the service as we did last year. Indeed, part of the rationale for the Facebook experiment itself was to see what effect emotions might have on our engagement. At Facebook, there is probably nothing more important than engagement. “Ruthless Prioritization,” after all.
Probably the best set of evidence to date about Facebook’s impact on our well-being comes from one of Facebook’s own researchers, Moira Burke. Her less publicized but more academically acclaimed set of studies on the well-being impacts of Facebook use suggests that we shouldn’t look at Facebook as one monolithic experience. Rather, it’s a set of different experiences with different effects on our well-being.
Employing back-end usage data from Facebook, Burke finds that while conversations with friends are associated with a number of benefits for our well-being (much like real-life conversations), looking through the News Feed isn’t. In fact, she finds that greater passive consumption over time, controlling for individual predispositions, is associated with lower perceived social support, lower bridging social capital (feeling part of a broader community), and marginally lower positive affect, higher depression, and higher stress.
All of these findings present challenges to Facebook, but to me, the findings about social support are the toughest. Burke’s research suggests that the more people browse News Feed over time, the more they begin to agree with statements like “I feel that there is no one I can share my most private worries and fears with,” and the less they agree with statements like, “When I need suggestions on how to deal with a personal problem, I know someone I can turn to.” Given the decline we’ve seen over the past few decades in the number of people who say they have someone to confide in, the idea that the Facebook News Feed could make this worse is a concern. And when you consider that lonely people are more likely to use News Feed, there’s a possibility this decline in social support might be somewhat self-reinforcing.
Overall, while these negative findings are not the focus of Burke’s research, they are striking. They suggest that we’re uplifted by messages directed to us, but not so much by messages from friends directed to no one in particular.
Burke’s recommendation to Facebook users is to change how they use the site. On Facebook, “the outcomes that you experience really depend on how you interact with people,” she told NPR, suggesting that users might want to reach out to one another more and think about browsing the News Feed less.
This seems like good advice, until you remember that Facebook is built around News Feed and designed to encourage you to engage with it. It’s the centerpiece of Facebook’s business model. It’s the first thing you see when you log on, and it appears to be where the majority of activity on the service comes from. With Facebook Home, the company sought to make News Feed even more pervasive by bringing it to our smartphone lock screens.
There are no official statistics on time spent in directed communication versus passive consumption, but Burke’s research (conducted in 2011) provides some clues. While Burke’s median subject received about 100 total comments, messages and Wall posts over a month, they loaded News Feed nearly 800 times and viewed 140 profiles. This suggests a high ratio of passive consumption to directed communication. In my own survey work looking at heavy users of Facebook in the U.S., users report spending about 62% of their time in passive consumption activities versus about 34% in directed communication.
If directed communication is good for us but passive consumption is bad for us, and we spend less time conversing with our friends than observing them, then it’s plausible that Facebook is, on the whole, bad for us. Instead of connecting us, Facebook may be making it harder for us to fulfill our most basic social needs.
At the very least, it is a question worth Facebook’s attention.
Measure Twice, Cut Once
Facebook’s emotional contagion experiment arrives in this context, and the study seems partly intended to alleviate these fears that News Feed is making us unhappy. At a basic level, the logic of emotional contagion implies that like emotions cause like emotions. Therefore, if we assume that the Facebook News Feed is filled with more positive expressions than negative ones, then Facebook should on the whole be a force for spreading positive emotions, not negative ones. This is the pattern Adam Kramer and his co-authors say they’ve found.
With envy, it’s pretty easy to see how the reverse pattern might appear so that in some cases positive expressions make us feel negative. At other times, we might see that negative expressions make us feel positive, which is true for schadenfreude, or pleasure in another’s misfortune. Is Kramer right, though, that the dominant pattern on Facebook is for like emotions to cause like emotions?
Rather than experimentally increase the emotion in users’ News Feeds to see if they became more emotional as a result, Kramer’s experiment involved reducing the emotion in users’ News Feeds to see if they became less emotional. This makes interpreting the results a bit more difficult because reducing emotion seems counter to the idea of “contagion.” But I suppose it’s fine to go along with the assumption that decreasing emotion should have the same, only opposite, effect as increasing it. It’s sort of like studying the spread of cholera by removing a contaminated well.
Kramer selected some 700,000 users at random and assigned each of them to one of four groups. In the first group, he removed somewhere between 10–90% of the positive posts they would have seen in their News Feeds over one week, and in the second group he similarly removed 10–90% of negative posts they would have seen. The other two groups were control groups where he removed similar proportions of posts at random.
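To make the design concrete, here is a rough sketch in Python of how the four conditions and the omission step might work. To be clear, this is my own illustration, not Facebook’s code: the names, the way users are assigned, and the stand-in classifiers are all assumptions.

```python
import random

CONDITIONS = [
    "positivity_reduced",   # positive posts omitted
    "negativity_reduced",   # negative posts omitted
    "control_positive",     # a similar proportion of posts omitted at random
    "control_negative",
]

def assign(user_id):
    """Deterministically give each user a condition and an omission rate
    between 10% and 90%, as the paper describes."""
    rng = random.Random(user_id)
    return rng.choice(CONDITIONS), rng.uniform(0.10, 0.90)

def filter_feed(posts, condition, rate, is_positive, is_negative, rng=random):
    """Drop qualifying posts from a single load of News Feed with probability `rate`.
    `is_positive` and `is_negative` stand in for the LIWC-based classifier
    discussed below; the real control groups omitted a similar proportion of
    posts without regard to their emotional content."""
    kept = []
    for post in posts:
        qualifies = (
            (condition == "positivity_reduced" and is_positive(post))
            or (condition == "negativity_reduced" and is_negative(post))
            or condition.startswith("control")
        )
        if qualifies and rng.random() < rate:
            continue  # omitted from this viewing; it may still appear on later loads
        kept.append(post)
    return kept
```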
Kramer expected to see users who had positive posts removed become less positive in their own posts and correspondingly more negative. Similarly, Kramer expected to see users who had negative posts removed become less negative and more positive in their own posts. And indeed, this is the pattern Kramer believes he and his co-authors have found. A number of problems with the study and its methodology make it difficult to support that conclusion, however.
The first question about the study is whether anything notable happened. This was a common criticism. Although Facebook has tremendous scale, it doesn’t mean the scientific community should care about every effect the company can demonstrate. Neither should the company itself work on small stuff that barely moves the needle. Though Kramer said he removed a lot of emotion from users’ News Feeds (between 10–90% of positive or negative posts), he saw very little change in the emotions users subsequently expressed. All of the changes were 0.1% or less. That’s not 10% or 1% — that’s 0.1%.
So did something notable happen? It doesn’t seem that way, but then again Facebook wouldn’t have conducted this experiment if it knew what would happen. Nor do academics only consider a study notable if large effects are reported.
Still, the small effects raise important questions. Why were they so small? Though Kramer doesn’t offer much insight on this question, one possibility is that we’re immovable, unresponsive lumps on Facebook. Emotions might not be all that contagious on Facebook, either because we’re emotionally uninvested or because we don’t read News Feed very carefully. Or perhaps we plan our posts days in advance and don’t let the emotions of others influence what we say. Something along these lines could be the explanation, though I’m skeptical that Facebook has so little emotional power or that users are planning many posts in advance.
Kramer and his co-authors needed an accurate measure of emotion for two purposes: first, to determine whether a post in News Feed was positive or negative, and thus a candidate for removal; and second, to measure how positive or negative users became in their own posts as a result. Large errors on both ends could dramatically reduce the size of the effects Kramer could observe, or perhaps bias them in unknown ways.
For both jobs, Kramer used a simple word count technique. Posts in News Feed were classified as positive or negative if they contained at least one positive or negative word (it is unclear what happened when a combination of positive and negative words appeared). Then, to measure how positive or negative users became in their own posts, Kramer took the percentage of words in those posts that were positive and negative for each user, and averaged those percentages across all users in each condition.
Words were determined to be positive or negative using a dictionary provided by the Linguistic Inquiry and Word Count software, known as LIWC, last updated in 2007. About 47% of posts in the experiment contained positive words while about 22% of posts contained negative words, leaving 31% of posts with no emotional words at all, as defined by LIWC. Everything but the text of the posts was discarded for this analysis, including photos.
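To see how blunt an instrument this is, here is roughly what the word-count approach boils down to, sketched in Python with tiny stand-in word lists (the real LIWC 2007 dictionary has about 915 emotion words). The function names are mine, not LIWC’s.

```python
# Tiny stand-in lists; LIWC's actual dictionary is far larger.
POSITIVE_WORDS = {"good", "happy", "love", "great", "nice"}
NEGATIVE_WORDS = {"bad", "sad", "hate", "awful", "hurt"}

def tokenize(text):
    return [word.strip(".,!?\"'").lower() for word in text.split()]

def is_positive(post):
    """A post counts as positive if it contains at least one positive word."""
    return any(word in POSITIVE_WORDS for word in tokenize(post))

def is_negative(post):
    return any(word in NEGATIVE_WORDS for word in tokenize(post))

def emotion_percentages(posts):
    """Percent of a user's words that are positive and negative, which the study
    then averaged across all users in each condition."""
    words = [word for post in posts for word in tokenize(post)]
    if not words:
        return 0.0, 0.0
    positive = 100.0 * sum(word in POSITIVE_WORDS for word in words) / len(words)
    negative = 100.0 * sum(word in NEGATIVE_WORDS for word in words) / len(words)
    return positive, negative

# The word-in-isolation problem discussed next:
# is_positive("not good at all")  ->  True, because "good" is on the positive list
```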
On an intuitive level, there are many reasons to be skeptical about this technique. First, it was developed for the analysis of long form writing, which social media is anything but. Second, the dictionary was generated not by a statistical technique (such as machine learning), but rather by a handful of experts who consulted Roget’s Thesaurus, common psychological scales, and standard English dictionaries.
Third, though it’s plausible that certain words have clear emotional significance, the technique is hampered by the fact that it analyzes words in isolation. This means that not only would LIWC classify a phrase like “not good” as positive because it contains the word “good,” it also means LIWC cannot grasp the meaning of idioms, irony, sarcasm, slang, inside jokes or really any turn of phrase. All linguistic features unique to the Internet era such as emoticons, memes and hashtags are lost on LIWC, too.
Further, the dictionary has an emotional designation for only 915 words out of 170,000 currently in use in English, meaning there are probably some important gaps. Were those 31% of posts with no emotional words really unemotional? And because the technique discards everything but the text of the post, we’re essentially flying blind: no photos are included in the analysis despite how important they are on Facebook.
I’d be willing to suspend disbelief on this count if LIWC had a record as a valid measure of emotion in social media, meaning it has a connection to some “ground truth.” Ed Diener’s Satisfaction With Life scale, for example, is a self-reported measure of happiness that has been shown to correlate with such “ground truth” indicators as how often you smile, how your friends rate your happiness, the level of stress hormones in your blood, the strength of your immune system, and more. Most machine learning techniques, too, will provide some proof of their efficacy.
LIWC, in contrast, has a very mixed track record, though it continues to be widely used by academics for the analysis of emotion in social media. Many studies, like the Facebook experiment, take LIWC’s output completely at face value, with no effort to show it produces valid results for their data. Those studies that do report on even the most high-level validation procedures, however, show mixed results.
For example, when Kramer says that LIWC “correlates with self-reported and physiological measures of well-being,” he cites three studies. Two of these studies examine emotion in social media with LIWC, but contain no validation work. The third study, which looks at the contagion of negative emotion in instant messaging, finds that LIWC actually cannot tell the difference between groups sharing negative vs. neutral emotions.
Looking more broadly, one study compares a number of similar techniques and finds that LIWC is a middling performer, at best. It is consistently too positive in its ratings, even labeling the conversation in social media around the H1N1 disease outbreak as positive overall. Another study that looks at emotional contagion in instant messaging finds that, even when participants have been induced to feel sad, LIWC still thinks they’re positive.
When massaged enough or used in combination with other techniques, there’s evidence that LIWC can perform well, in some cases coming close to or even beating the performance of machine learning techniques. Used in raw form as in the Facebook experiment, however, it appears to be substantially inferior to machine learning. Further, we know next to nothing about how well LIWC performs in social media when it comes to the specific emotions under the big headings of positive and negative emotion. If it detects some negative emotions, like anger, better than others, like sadness, this too may bias what we learn from the Facebook experiment.
Even the dictionary’s own developers caution that “LIWC represents only a transitional text analysis program” in the shift to modern machine learning techniques. Why, then, with world-class engineering resources at Facebook’s disposal, are they still using LIWC?
Because of the error LIWC introduces into the measurement of emotion in Facebook posts, it’s hard to know what to make of the effects reported in the experiment. Kramer was able to find statistically significant results owing to the hundreds of thousands of participants in the experiment. But if the pattern of emotional contagion played out in the way Kramer believes it did, it was probably more potent than the tiny 0.1% changes reported. And if the pattern didn’t play out as expected, LIWC unfortunately may not be able to tell us.
At the very least, Kramer and his co-authors should have tested LIWC and shown how well it can perform on Facebook posts (whole posts, including those with photos). It really makes no sense to use LIWC without an understanding of how well it can do the job.
Back to Basics
When it comes down to it, the biggest problem with the Facebook experiment actually has less to do with the technique used to measure emotion in posts, and more to do with the use of posts in the first place. Are Facebook posts a good representation of the emotional consequences of News Feed?
In a word: no. Facebook posts are likely to be a highly biased representation of how Facebook makes people feel because Facebook posts are a highly biased representation of how we feel in general.
To understand this, let’s take a simple example. Social scientists have found that people are more likely to share content or information with others on the Internet when they’re emotionally aroused. Stuff that “goes viral” tends to be stuff that gets us excited, which is why viral sites like Upworthy and Buzzfeed are full of content geared to rile us up. This means that Facebook posts, like other means of sharing, are likely to be biased toward high-arousal emotions such as excitement, joy and anger, and biased away from low-arousal emotions like depression, sadness or calm.
Because of this arousal bias, the Facebook experiment is not set up to capture the effects of social comparison, or of feeling “left out” or “alone together” for that matter. These phenomena may be more likely to produce low-arousal feelings of sadness or depression, rather than anger. The result is that we keep these sad feelings to ourselves — we don’t broadcast them to our friends.
If Facebook posts better represented low-arousal emotions, we might see a different pattern of results in the experiment. Reducing positive emotions in News Feed might result in a decrease in both positive and negative posts, because not only do people experience less positive emotional contagion, they also experience less negative social comparison. Conversely, increasing positive emotions in News Feed might result in more of both positive and negative posts.
It might even be the case that increasing positive emotions in News Feed results in more negative feelings than positive. But because of the way the Facebook experiment is currently designed, we’d never find out.
When it comes right down to it, it doesn’t take much more than common sense or a basic grasp of the social sciences to understand that what we say on Facebook is a biased representation of how we feel, and that this bias seriously limits what we can learn from the Facebook experiment, as designed.
Looking at social situations in general, we know for example that there are powerful pressures to conform to the attitudes, feelings and beliefs of others. And so if we look at Facebook from this standpoint, it’s easy to see how the effects reported in the Facebook experiment might be due to conformity rather than genuine emotional contagion. Consciously or unconsciously, we may sense a certain emotional tone to our News Feeds and therefore adapt what we post, ever so slightly, so that we don’t stick out too much.
We also take great pains in social situations to present ourselves favorably to others — and nowhere more than on Facebook. Self-presentation or “impression management,” as a famous sociologist once put it, is in fact a pillar of social media research. This is because social networks like Facebook offer us unprecedented means to manage the impressions we give to others. We can tweak what we say before we say it. We can upload just the photos that are beautiful, or flattering, and we can retake that photo until we get it from just the right angle. We literally need admit no flaws. I woke up like this, says Beyoncé, and we have no evidence to doubt her.
At the same time, social networks like Facebook make it harder than ever to give favorable impressions to all of our friends at once. When we post on Facebook, most of us are speaking into relatively vast audiences of close family members, distant relatives, co-workers, colleagues, high school and college classmates, acquaintances we’ve met from every place we’ve ever been, and true friends we’ve collected from all the different parts of our lives.
And unless you do a lot of deleting, what you say on Facebook sticks around forever — making “context collapse,” as researchers call this flattening of many distinct audiences into one, something that happens not only across friend groups, but also across time.
So what are reasonable people to do? Well, we play up the good stuff about our lives, and play down the bad stuff. We project unbalanced portrayals of our lives — the ups, without so many downs. This is partly why social comparison is thought to be such a cause for concern on social media. Nobody really measures up to the selves we can project on Facebook.
We may also round the edges off of our thoughts, trying to find the “lowest common denominator” that enables us to manage the context collapse and speak to all of our friends at once. Often, however, we may simply self-censor. This is the implication of impression management and context collapse and, indeed, this is what social scientists find.
One of those social scientists happens to be Adam Kramer. In a study published last year, Kramer found a remarkable level of self-censorship on Facebook. Observing a random sample of millions of users, Kramer and his co-author found that about 33% of status updates are censored at the last minute. That is, people begin to type a post (at least 5 characters) but then decide to delete it. If we censor fully a third of what we want to express at the last minute, how much are we censoring before we even reach for the keyboard?
Comparing the rate of self-censorship of posts to comments, which are censored less than posts, Kramer reflects on the difficulty of appealing to all our friends at once:
Posts … make it hard to conceptualize an ‘audience’ relative to comments, because many posts (e.g. status updates) are undirected projections of content that might be read by anyone in one’s friend list. Conversely, comments are specifically targeted, succinct replies to a known audience. Even groups of users who are known to be comfortable with more self-disclosure are often only comfortable with such disclosure to a well-specified audience, such as close relationships, so it makes sense that users pay special attention to posts to ensure that content is appropriate for the “lowest common denominator.”
Given Kramer’s keen understanding of the “special attention” people pay to what they say in social media, and their tendency to speak to the “lowest common denominator,” it’s perplexing that he would fail to mention how the social context in which people are transmitting and receiving emotions might bias the results of his experiment. Instead, Kramer practically ignores the social context. It’s lousy social science.
Kramer does briefly mention one possible social bias, which is mimicry. Have you ever noticed when you’re hanging out with somebody and you start shaking your leg, that they start shaking their leg, too? Or when you start humming, they start humming? That’s mimicry. It’s thought to help us express our affinity for and similarity to others, and could be another reason we saw the pattern of results in the experiment that we did. Kramer quickly dispenses with mimicry, but uses some unclear logic to do so.
You’ll recall Kramer found that reducing the presence of one emotion caused less of that emotion and more of the other to be expressed. Kramer claims that this “cross-emotional encouragement effect,” as he puts it, “cannot be explained by mimicry alone.” But how? If it can’t be explained by mimicry, how can it be explained by emotional contagion? Kramer doesn’t say.
It’s true that there’s a hot debate among emotion researchers about whether positive and negative emotions behave like opposites (more of one implies less of the other), or if they’re more independent. But if positive and negative emotions behave like opposites for emotional contagion, it’s not clear why they wouldn’t for mimicry, or any other phenomenon with social and emotional components. Because Kramer’s logic is fraught on this point, it’s probably safe to say that we can’t rule mimicry out. I’m not sure how we can distinguish mimicry from emotional contagion when we’re interpreting text anyway.
When it comes to impression management and context collapse, if they make it so hard to say anything on Facebook, how come our News Feeds haven’t run dry? The two dynamics don’t predict that all sharing will cease, but they do predict people won’t feel empowered to share as much of their lives as they want to.
Some people may be more inclined to self-censor than others. People who are less conscientious, for example, may not try as hard to manage impressions and may self-censor less. Some simply may not care what certain groups of friends think, or in the moment they may forget who might all be listening. Some perhaps are fatalistic about the risks of sharing on Facebook (“this is the way the world is now”), while others might hope society is evolving to become more tolerant of mishaps. At the same time, some may suffer from more context collapse than others.
Facebook has succeeded despite these dynamics because it taps into our desire to connect to others, to belong and, perhaps most of all, to share our experiences with others. Whereas in normal conversation we might devote about a third of our speech to talking about ourselves or our personal relationships, in social media, this kind of speech appears to be the overwhelming majority. Being able to reflect and express ourselves with the possibility that others might listen is intrinsically rewarding.
Still, impression management and context collapse offer an opportunity for other social networks to emerge and thrive. Facebook’s famed network effects, which have drawn so many users to its platform, tend to increase context collapse. Each new ‘friend’ we make on Facebook adds complexity to the audience we speak to, which means each new social network or app to emerge offers a chance to start fresh.
And I don’t know about you, but I get some pretty unflattering photos from friends on Snapchat, which I think is probably a sign of something healthy. I think people can get a little tired of the Facebook dog and pony show.
Fixing the Facebook Experiment
The Facebook experiment demonstrates that the company continues to grapple with its impact on our social and emotional lives. But given the risks — of pissing people off, of making bad decisions with bad research — Facebook needs to get it right. This means improving the study’s research design, working harder on ethics, and remembering to have some empathy for people on the receiving end of the research.
To shore up the Facebook experiment’s research design, Facebook should work on fixing three problems: how it deals with social biases, how it measures emotion in posts, and how it accounts for its holistic impact on our well-being.
The most important thing Facebook could do to improve the methodology of the emotional contagion experiment would be to stop relying on the posts people broadcast to friends as a representation of the emotional consequences of News Feed. Instead, it should ask people directly and privately how they feel, using a technique psychologists call experience sampling.
Experience sampling involves randomly interrupting people as they go about their lives to ask how they’re feeling in the moment. It’s private, so it’s less subject to social biases. It does not rely on recollections, which can be off. And it solicits experiences evenly across time, rather than relying on only the moments or feelings people think to share.
Using experience sampling, Facebook could interrupt people at random times as they use Facebook properties, like News Feed, and ask them to privately report how they’re feeling. A small window or pop-up with the short survey should suffice. And while most experience sampling studies interrupt people multiple times a day, Facebook has so many users it need not be anywhere near as demanding.
The data from these private reports could then help Facebook understand the spread of emotion through the News Feed by interrupting users as they browse. If they experimentally amp up one emotion or another in users’ News Feeds, experience sampling should capture whether emotional contagion is occurring.
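In code, the idea is simple. Here is a hypothetical sketch of how a prompt might be triggered while someone browses; nothing below is a real Facebook API, and names like show_mood_survey and log_response are stand-ins for whatever survey and logging infrastructure would actually power this.

```python
import random
import time

SAMPLING_PROBABILITY = 0.001  # interrupt only a tiny fraction of News Feed loads
MOOD_QUESTION = "Right now, how are you feeling?"  # answered privately, 1-5

def maybe_sample_experience(user, show_mood_survey, log_response):
    """With prior consent, occasionally show a short, private mood prompt and
    record the answer alongside the user's experimental condition."""
    if not user.consented_to_research:          # consent is assumed to be stored on the user
        return
    if random.random() > SAMPLING_PROBABILITY:  # most feed loads are never interrupted
        return
    rating = show_mood_survey(MOOD_QUESTION, scale=range(1, 6))
    log_response(
        user_id=user.id,
        condition=user.experimental_condition,  # e.g., "positivity_reduced"
        surface="news_feed",
        mood=rating,
        timestamp=time.time(),
    )
```

The point of keeping the prompt rare and private is that the measurement itself shouldn’t become another social performance, or another annoyance.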
If implemented across Facebook properties, experience sampling could help Facebook evaluate literally any change in its user experience. To further investigate the dynamics of social comparison, for example, Facebook might experimentally amp up or down the number of posts with lots of Likes or comments, or might amp up or down the announcements of weddings, job promotions, or accomplishments. It could change the order of posts. It could amp up travel and leisure photos, study the effect of new profile photos, or see how posts from different types of people affect us, whether from exes, family members, or college friends. It could figure out whether people really do feel bad when they use Facebook for an extended period, and discover what the optimal amount of time is to use the site.
Experience sampling could also be used to determine how users fare outside Facebook as a result of changes on Facebook. Any of Facebook’s smartphone apps could be engineered to prompt users during the day to report on their emotional states, even when they’re not using the apps (thanks to the magic of push notifications). Experience sampling outside Facebook would help make Facebook’s research more holistic, helping to address the reality that people’s immediate emotional reactions to changes on Facebook may not reflect the longer-term effects of those changes on their lives.
Another reason to deploy an experience sampling infrastructure? Google’s doing it. Kramer could also revive his cool Gross National Happiness project, and put it on a much sounder scientific footing. But now I’m really dreaming.
A second big way to improve the emotional contagion experiment would be to stop relying on LIWC to classify individual posts as positive or negative. Instead, Facebook should ask users themselves to rate the emotions conveyed by their posts at the moment they post. Is the post positive? Negative? How much? How do they intend others to feel when they see it? This could be implemented as a pop-up similar to the experience sampling above, or it could be integrated into the status update box itself. Importantly, users could be asked to rate the emotional contents of their whole posts, inclusive of photos and all the nuances of language. And it would be private.
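Here is a sketch of the kind of record this could produce. Again, it’s hypothetical: the field names are my own, meant only to show that the rating travels with the post, stays private, and requires no text analysis at all.

```python
from dataclasses import dataclass, field
import time

@dataclass
class PostEmotionRating:
    post_id: str
    valence: int            # -3 (very negative) to +3 (very positive), chosen by the author
    intended_feeling: str   # how the author hopes friends will feel, e.g., "amused"
    rated_at: float = field(default_factory=time.time)
    private: bool = True    # never shown to friends; used only for research

def candidates_for_removal(ratings, emotion="positive"):
    """Select posts for the experiment using authors' own ratings instead of LIWC."""
    if emotion == "positive":
        return [r.post_id for r in ratings if r.valence > 0]
    return [r.post_id for r in ratings if r.valence < 0]
```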
Facebook already has a “Feeling” indicator users can include in their posts, and if it’s not already using this in research, it should. The indicator is likely a big improvement over LIWC — or maybe Facebook will surprise us all and use the indicator to validate LIWC. However, the feeling indicator is still part of the post, still public, and therefore still part of the performance. It offers no gradations, and may have other biases and limitations, like the order in which emotions are presented (don’t be surprised if this has a big influence), or the fact that many users neglect to use it. To be more rigorous, Facebook may want to implement a private, more “scientific” indicator instead.
Again, this would be useful for research at Facebook beyond the current study. Are all emotions contagious? Which are the most contagious? What emotions do users intend others to feel, and are they successful? Do changes in Facebook’s user experience affect the emotions and feelings users broadcast to their friends?
Linking these two suggestions — experience sampling while users browse News Feed and a quick survey at the status update box — would address the Facebook experiment’s problems with social bias and with measuring emotion in posts, two of its most important limitations. Experience sampling would help capture any emotional contagion users experience, while surveys at the status update box would help Kramer select positive and negative posts for display or removal.
A third suggestion would be for Kramer to imitate his colleague Moira Burke and her efforts to capture Facebook’s holistic impact on well-being. Burke includes a long battery of psychological scales in her work, which looks at many facets of well-being, from social support to life satisfaction to depression and stress. Burke administers the battery twice — once before the study period, and once after — and looks at any differences caused by different patterns of Facebook use. She also employs a relatively lengthy timeframe: a month.
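For a sense of what that kind of analysis looks like, here is a rough sketch with made-up data and assumed column names; the key move is regressing the follow-up score on usage while controlling for the baseline score, i.e., controlling for individual predispositions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant, a well-being scale administered
# before and after a month-long study period, plus logged Facebook use in between.
df = pd.DataFrame({
    "pre_social_support":     [3.8, 4.1, 2.9, 3.5, 4.4, 3.1],
    "post_social_support":    [3.6, 4.2, 2.5, 3.4, 4.5, 2.8],
    "passive_consumption":    [310, 80, 540, 200, 60, 450],   # e.g., News Feed stories viewed
    "directed_communication": [12, 45, 3, 20, 50, 8],         # e.g., comments and messages received
})

# Predict the follow-up score from usage, controlling for the baseline score.
model = smf.ols(
    "post_social_support ~ passive_consumption"
    " + directed_communication + pre_social_support",
    data=df,
).fit()

# In Burke's findings, the coefficient on passive consumption would be negative.
print(model.params)
```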
Though Facebook is all about moving fast, longer timeframes are probably better for capturing changes in well-being. It would be interesting to use Burke’s psychological battery to measure the holistic impact of amping emotions up or down in News Feed over a month. It could be that amping up positive emotions in News Feed causes deep and lasting happiness. Or maybe the opposite. Or maybe it’s more complicated.
The social comparison literature suggests, for example, that people who are unhappy feel particularly bad when exposed to unfavorable information about how they compare to others. Often people who are happier can fend off the bad feelings, but people who are feeling bad or depressed are more vulnerable. Experience sampling in concert with Burke’s battery of psychological scales would help us understand how users in different states of well-being fare on Facebook. It could be that amping up positive emotions in News Feed is harder on people who are less happy.
Getting Consent
Improving the Facebook experiment’s research design would have important benefits for its ethics, too. To survey users or ask how they’re feeling, Facebook would have to get consent first.
Experience sampling, for example, would be awfully off-putting for users who had no reason to expect it. Why is Facebook asking for my mood? Getting consent from users would ensure they’re not surprised, that they take questions seriously, and that they feel respected. It also allows people with special conditions, like chronic depression, to protect themselves and opt out.
Getting consent does come with some tradeoffs for Facebook, however. First, not everyone will consent. This matters because we want results to be representative of the population as a whole. If many people opt out, and not at random, the results may not reflect the Facebook population. Although this bias can be addressed by weighting the data, concern about low participation rates is probably one reason Kramer did not ask for consent for the Facebook experiment, and may not in the future.
Participation rates are something Facebook can work on, however. It can work on how it asks or how it presents the ask. Facebook can help people understand how it benefits from the research, or whether they might benefit, too. And it can provide small rewards. For example, many experience sampling studies give participants their data back after the study is over so that they might learn something about themselves, their moods and experiences. Small rewards, however token, help establish a give-and-take.
Another reason Facebook may shy away from getting consent in the future is that it could be a source of bad PR if a participant leaks the research to the press, or a member of the press is included in a study. Achieving high rates of participation will probably require that Facebook be more transparent about its research, whether it’s transparency for subjects only or transparency for the public at large. This is something of a risk for Facebook because, as we’ve seen, transparency can invite sensationalism. I do think Facebook can find a workable balance on this, though, which protects users, encourages participation, and protects Facebook, too.
In sum, getting consent for research likely to affect participants’ well-being is the right thing to do and it enables Facebook to do research of a much higher caliber. It’s not the only ethical consideration, however. What are the ground rules, for example, for companies who publish academic research about their own service’s impact on well-being? Is there a conflict of interest? Facebook clearly has an interest in combatting the notion that it makes people feel “negative or left out” given repeated press attention to the matter. Can we rely on the company to be impartial and to work hard to produce results that are unbiased? And would the study have been published if the results reflected more poorly on the company?
Many companies publish research in academic venues, and many of them are headline sponsors of those same venues. Facebook is the headline sponsor at CSCW, for example, which is one of the premier venues for research on how social media impacts us. It recruits young researchers for internships there and throws parties for academics who attend the conference. Personally, I think academics have a lot to gain from partnering with industry, but I won’t pretend the relationship is ethically uncomplicated. Academics are seen as the adjudicators of important social issues, and companies have a stake in getting to know — and influence — them.
Living with Facebook
As Facebook continues to grapple with its impact on our well-being, happily, there’s also some evidence that the company is moving away from its “you only have yourself to blame” stance when it comes to the user experience.
This week, the company announced steps to reduce the prominence of click-bait stories in News Feed. We can assume here that we’re to blame for the proliferation of click-bait headlines — after all, they wouldn’t be promoted by Facebook’s algorithms if we didn’t click on them. But Facebook realizes in this case that our behaviors don’t always lead to desirable outcomes. In fact, the company says, a recent survey found that users much prefer headlines that help them decide whether they want to read the full article over the sleazier click-bait headlines that only tease what the article is about.
It seems the company, in this case, recognizes that the outcomes we experience on the service depend as much on how Facebook is designed as they do on our own choices and behaviors. It’s not a big leap, then, for the company to make adjustments that reduce some of the negative well-being outcomes people experience on the service, even if their own clicks and Likes are the cause. Indeed, the company already has a set of tools to mediate disputes between users when they say things or post photos that are hurtful. Concern with users’ well-being has precedent at Facebook.
So what can we, as users, do in the meantime? First, we should be more aware of how much time we spend on Facebook, and recognize that the service, like many others, is designed to be habit-forming and addictive. Many people already deactivate their accounts when they need to focus and reduce distractions. I’m a fan of the site LifeHacker, which has a bunch of other ideas on how you can reduce your usage of Facebook and other distracting services. Use some of these, turn off notifications for Facebook, or delete the app from your phone if you want to spend less time on it.
Second, while Facebook’s PR paints the service as an intimate portrait of our lives, keep in mind that it’s not. It’s a performance, not a well-rounded view of our friends’ struggles, travails and ups and downs. Catch yourself when you start comparing your life unfavorably to someone on Facebook, and realize it’s not the full picture. Misery has more company than you think. And if you’re in a sour mood, consider avoiding the service for a while, because social comparisons that make you feel bad are much more likely when you’re down.
Third, take advantage of the features Facebook offers to curate your experience. If someone’s posts make you feel bad, unfollow them so they don’t appear in your News Feed anymore. Or unfriend them. Like and comment on posts that you want to see more of and ignore those you don’t. Let other people Like your friend’s new glamor shot.
Clicking, Liking and commenting are signals to both your friend and Facebook that you want to see more posts like that. Your friend feels rewarded when you interact with their post, and Facebook takes it as a cue to show you more of the same. Want to see more reflection and thoughtfulness in your News Feed, and less self-promotion or invective? Be mindful about what you Like. And use what you say in social media to promote the values you want to see.
Fourth, consider reducing your use of News Feed. As we’ve seen, spending more time on News Feed doesn’t seem to make us happier overall, and may make us worse off. The posts we broadcast to no one in particular, it seems, make no one in particular feel important to us. It’s normal and wonderful to share our successes with others, but try to do so in a way that doesn’t induce mass social comparison. One idea would be to use another service (like WhatsApp, Snapchat or, gasp! email) to designate specific recipients for your announcement. It lets those people know they’re important to you, and limits the social comparison your wider circle of Facebook friends may experience.
Fifth, invest more in fewer, deeper relationships with others. Facebook lets you maintain a connection with an enormous number of people who’ve crossed your path in life, but that connectedness is something of a mirage. When it comes to your well-being, make sure you have people to confide in — people who can be there for you, warts and all. Don’t let Facebook substitute for having real friends.
Finally, it’s time to put to bed the notion that “if you don’t like it you don’t have to use it” when it comes to Facebook. It’s not a workable suggestion for most people. Similarly, if you’re organizing a protest against Facebook, don’t ask people to stop using it. Facebook is too important for many of us to stop using completely — it’s how many of us get invited to events, it’s where many people look when they want to reach out and, though it has its issues, it’s a major realm of social interaction and social discussion online. Don’t tell people to stop using it.
Instead, if you want to protest Facebook, band together with others and agree not to post for a few days. Dry up News Feed. Starve the beast. Facebook’s big cash cow is News Feed, and News Feed is full of the things we say and do. The best way to express your civil disobedience against Facebook is to take a moment of silence. They can keep you hooked, but they can’t make you speak.
In the end, if there’s one thing that can be said about our relationship to disruptive technologies, it’s that we almost always feel tension about them. We become romanced by utopian visions of new technologies, but then wring our hands when they don’t turn out to be everything we hoped, and at times we can be downright dystopian about them. Google is making us stupid. GPS will drive you into a lake. Mark Zuckerberg promised an open and connected new world, and all we got was, well, Facebook.
New technology is never all good or all bad; it’s complex. That’s the terribly unsatisfying truth, most of the time. It means we have to work to understand technology in its complexity, and decide what we’ll do about its downsides. If Facebook can take its lessons from the backlash and improve the way it does research, both ethically and methodologically, I think it has a shot at making the service a better experience for everyone. On this point, I hope, Facebook will proceed and be bold.
Galen Panger is a Ph.D. candidate in the School of Information. Send him an email or tweet with your thoughts and comments.
Originally published on Medium.
"People over pixels" photo by Jolie O’Dell; emotions image photo by Booshoo; sad bear photo by Lintmachine; Beyoncé photo by Nonu; collapse photo by Phil Hilfiker; grateful emoticon courtesy of Facebook; "Proceed and be bold" image courtesy of Ben Barry.