In the years leading up to Trump’s election, traditional media gatekeepers found themselves shoved aside by trolls and tech companies who told us they were only giving us what we wanted. By Andrew Marantz
Fri 7 Feb 2020 06.00 GMT
In 2012, a small group of young men, former supporters of the libertarian Republican congressman Ron Paul, started a blog called The Right Stuff. They soon began calling themselves “post-libertarians,” although they weren’t yet sure what would come next. By 2014, they’d started to self-identify as “alt-right”. They developed a countercultural tone – arch, antic, floridly offensive – that appealed to a growing cohort of disaffected young men, searching for meaning and addicted to the internet. These young men often referred to The Right Stuff, approvingly, as a key part of a “libertarian-to-far-right pipeline”, a path by which “normies” could advance, through a series of epiphanies, toward “full radicalisation”. As with everything the alt-right said, it was hard to tell whether they were joking, half-joking or not joking at all.
The Right Stuff’s founders came up with talking points – narratives, they called them – that their followers then disseminated through various social networks. On Facebook, they posted Photoshopped images, or parody songs, or “countersignal memes” – sardonic line drawings designed to spark just enough cognitive dissonance to shock normies out of their complacency. On Twitter, the alt-right trolled and harassed mainstream journalists, hoping to work the referees of the national discourse while capturing the attention of the wider public. On Reddit and 4chan and 8chan, where the content moderation was so lax as to be almost non-existent, the memes were more overtly vile. Many alt-right trolls started calling themselves “fashy”, or “fash-ist”. They referred to all liberals and traditional conservatives as communists, or “degenerates”; they posted pro-Pinochet propaganda; they baited normies into arguments by insisting that “Hitler did nothing wrong”.
When I first saw luridly ugly memes like this, in 2014 and 2015, I wasn’t sure how seriously to take them. Everyone knows the most basic rule of the internet: don’t feed the trolls, and don’t take tricksters at their word. The trolls of the alt-right called themselves provocateurs, or shitposters, or edgelords. And what could be edgier than joking about Hitler? For a little while, I was able to avoid reaching the conclusion that would soon become obvious: maybe they meant what they said.
I spent about three years immersing myself in two worlds: the world of these edgelords – meta-media insurgents who arrayed themselves in opposition to almost all forms of traditional gatekeeping – and the world of the new gatekeepers of Silicon Valley, who, whether intentionally or not, afforded the gatecrashers their unprecedented power.
“The left won by seizing control of media and academia,” a blogger on The Right Stuff, using the pseudonym Meow Blitz, wrote in 2015. “With the internet, they lost control of the narrative.” By “the left”, he meant the whole standard range of American culture and politics – everyone who preferred democracy to autocracy, everyone who resisted the alt-right’s vision of a white American ethnostate.
For decades, Meow Blitz argued, this pluralistic worldview – the mainstream worldview – had gone effectively unchallenged, but now, by promoting their agenda on social media, he and his fellow propagandists could push the US in a more fascist-friendly direction. “Isis became the most powerful terrorist group in the world because of flashy internet videos,” he wrote. “If you’re alive in the year 2015 and you don’t understand the power of the interwebz you’re an idiot.”
To the post’s intended audience, this was supposed to be invigorating. To me, it was more like a faint whiff of sulphur that may or may not turn out to be a gas leak. The post was called “Right Wing Trolls Can Win”. Would the neofascists win? I had a hard time imagining it. Could they win? That was a different question. “The culture war is being fought daily from your smartphone,” the post continued. On this one point, at least, I had to agree with Meow Blitz. To change how we talk is to change who we are.
During the long 2016 presidential campaign, Donald Trump seemed to draw on pools of dark energy not previously observed within the universe of the American electorate. The mainstream media used the catchall term alt-right, which appealed to newspaper editors and TV-news producers who hoped to connote frisson and novelty without passing explicit judgment. Instead of denouncing the alt-right, reporters often described it as “divisive” or “racially charged”. They tried to present both sides neutrally, as journalistic convention seemed to require.
The definition of alt-right continued to expand. By the summer of 2016, it was such a big tent that it included any conservative or reactionary who was active online and too belligerently anti-establishment to feel at home in the Republican party – a category that included the Republican nominee for president. This was an oddly broad definition for what was supposed to be a fringe movement, and yet no one seemed eager to clear up the semantic confusion. The Clinton campaign played up the alt-right’s size and influence, while the alt-right was all too glad to be perceived as vast and menacing. There was no way to measure precisely how many Americans were alt-right, and there never would be. Estimates ranged from a few hundred to a few million. Still, what mattered was not the movement’s headcount, but its collective impact on the national vocabulary.
“We’re the platform for the alt-right,” Steve Bannon said in July 2016, when he was running the pro-Trump web tabloid Breitbart. Later that year, after leading the Trump campaign to victory and being tapped to serve as chief White House strategist, Bannon claimed that he’d only meant to align himself with an insurgent brand of civic nationalism, not with ethno-nationalism. Yet a core within the movement still insisted on a narrower definition of alt-right, one based on explicit antisemitism and white supremacy. This core had always existed; no one who was versed in the far-right blogosphere could have missed it.
Mainstream journalists, or at least the ones who were paying attention, were daunted by the financial precarity of their industry, the plummeting cultural authority of their institutions, and the unpredictable dynamics of social media outrage. The more these threats loomed, the more journalists clung to one of the few professional axioms that still seemed beyond dispute: in all matters of political opinion, a reporter should strive to remain neutral. This is true enough, for certain kinds of journalists, when applied to certain prosaic debates about tariffs and treaties. When it comes to core matters of principle, though, it’s not always possible to be both even-handed and honest. The plain fact was that the alt-right was a racist movement full of creeps and liars. If a newspaper’s house style didn’t allow its reporters to say so, at least by implication, then the house style was preventing its reporters from telling the truth.
Neutrality has never been a universal good, even in the simplest of times. In unusual times – say, when the press has been drafted, without its consent or comprehension, into a dirty culture war – neutrality might not always be possible. Some questions aren’t really questions at all. Should Muslim Americans be treated as real Americans? Should women be welcome in the workplace? To treat these as legitimate topics of debate is to be not neutral, but complicit. Sometimes, even for a journalist, there is no such thing as not picking a side.
In April 2014, looking for new story ideas, I attended a tech conference in a stylish hotel in Lower Manhattan. The conference was called F.ounders, a word that no one, including the founders of F.ounders, could decide how to pronounce. Half of us stammered over the stray full stop. The other half ignored it. It stood for nothing, apparently, except for the general concept of innovation.
At this point, Google owned almost 40% of the online advertising market, and Facebook owned another 10%. Some analysts were already warning that the two might constitute a duopoly. Both companies’ business models, especially Facebook’s, were built around microtargeting. Filter bubbles, in other words, were not a temporary bug but a central feature of social media. It was hard to see how social media could flourish without filter bubbles. If filter bubbles were bad for democracy, then, were Google and Facebook also bad for democracy?
It was a fair question, almost an obvious one, and yet the cultural vocabulary of the time did not allow most people to hold it in their heads for long. The Arab spring of 2011 had been organised, in part, via social media, and was often called the Twitter revolution. Mark Zuckerberg had been named Time’s person of the year in 2010; in the hagiographic cover photo, his eyes were oceanic and farseeing, dreaming up ingenious new ways to forge human bonds. If some movies and books portrayed him as shifty, even a bit ruthless, it was still possible to imagine that ruthlessness, in the tradition of Thomas Edison or Steve Jobs, was merely the cost of doing business. Zuckerberg’s motto, “Move fast and break things”, was generally treated as a sign of youthful insouciance, not of galling rapacity. Facebook’s users – more than a billion of them – seemed happy. Its investors were delighted. If social media wasn’t a good product, then why was it so successful?
At the time, it was still considered divisive (at swanky New York tech conferences, anyway) to wonder whether the be-hoodied young innovators of Silicon Valley might turn out to be robber barons. It was far more socially acceptable to extol the gleaming vehicle of technology – to gaze in amoral awe at its speed and vigour – than to ask precisely where it was headed, or whether it might one day hurtle off a cliff. Such questions had come to seem fusty and antidemocratic; people who spent too much time worrying about them were often dismissed as cranks or luddites. To a techno-optimist, there was only one way the vehicle could possibly be going: forward.
When it was founded in 2004, Facebook billed itself as “an online directory that connects people through social networks at colleges”. Within a few years, this self-description had morphed into a far more grandiose mission statement: “Facebook gives people the power to share and make the world more open and connected.” Mark Zuckerberg was careful not to call himself a gatekeeper. On the contrary, he portrayed himself as a Robin Hood figure, snatching power from the gatekeepers and redistributing it to the people, who could presumably be trusted to do the right thing.
The traditional gatekeeper media that held sway in the US in the middle of the 20th century was, inarguably, a deeply flawed system. The nation’s most prominent journalists, from celebrity newscasters to unheralded assignment editors, were, by and large, upper-middle-class white men in grey suits. Many were blinkered coastal elites, either too circumspect or too myopic to risk departing meaningfully from the socially acceptable narrative, even when elements of that narrative were misleading or flat-out false. But what if the fourth estate turned out to be, like democracy, the worst system except for all the others? If history was an arc bending inexorably toward justice, then there was no need to worry about any of this – technological