It was a momentous day for the peasants of the Januschau, a remote part of East Prussia. For the first time in their, or their fathers’, or their fathers’ fathers’ lives, they were called upon to vote. For centuries, they had been subjects—virtually possessions—of the Oldenburg family, with no voice and very few rights. Now, they were to partake in the incomprehensibly noble act of ruling themselves.
As they gathered around the local inn, which had hurriedly been converted into a polling station for the occasion, they saw that the new world retained quite a few elements of the old. The land inspector of the Oldenburg family was handing out sealed envelopes. They contained ballots that had already been marked. Most peasants did as they were told. They cast their first ever ballot without knowing who it was they were voting for.
One lone rebel dared to open his envelope. He immediately attracted the inspector’s fury. Striking the man with a cane, the land inspector shouted in indignation, “It’s a secret ballot, you swine!”
In most places, democracy’s pretense to let the people rule was a little more serious and the elite’s grip on the electoral process a little more tenuous. Even so, this story from the dawn of democracy encapsulates the basic deal that traditional elites offered to the people at the inception of our political system: “As long as you let us call the shots, we will pretend to let you rule.” It’s a deal that has proven phenomenally successful for two hundred and fifty years. Today, that deal is becoming increasingly difficult to sustain, and the reason is both unlikely and counterintuitive: the rise of the Internet and social media is making the ideological foundation of liberal democracy—which has had a tight hold over our imagination for the better part of two centuries—look increasingly brittle.
Until a few years ago, most observers were highly optimistic about the effect that the rise of the Internet and social media would have on our political system. In one of the most subtle early analyses of what he tellingly called “Liberation Technology,” Larry Diamond argued that new digital tools would empower “citizens to report news, expose wrongdoing, express opinions, mobilize protest, monitor elections, scrutinize government, deepen participation, and expand the horizons of freedom.” Diamond’s article was published in the summer of 2010. Within less than a year, its most optimistic predictions seemed to come true. Mass protests erupted in Tunisia, in Egypt, in Libya, and, finally, in Syria. In most of these countries, long-serving autocrats were ousted from power. And in each case, protesters had used social media to criticize the government, to chronicle attempted crackdowns, and to coordinate the time and location of their protests. Twitter, Andrew Sullivan wrote in The Atlantic, had proven to be a “critical tool for organizing.” In twenty-first-century conflict, Nicholas Kristof echoed in The New York Times, “government thugs firing bullets” would increasingly come up against the resistance of “young protesters firing ‘tweets.’”
The positive effects of digital technology were increasingly being felt closer to home, too. As Clay Shirky argued in Here Comes Everybody: The Power of Organizing Without Organizations, even in longstanding democracies like the United States the power of digital technology made it much easier for activists to coordinate. In the wake of the financial crisis, this seemed to play out in myriad forms. On the right, the Tea Party was inspired by a viral rant on CNBC, and made extensive use of online tools, from Meetup.com to email lists. On the left, Occupy Wall Street and Black Lives Matter heavily relied on social media to assemble and coordinate a loose network of activists all over the country. In every corner of the land, a reenergized public seemed to attest to the democratizing potential of social media.
As late as 2014 or 2015, the conventional wisdom on social media was overwhelmingly positive. Then came the rise of Donald Trump.
Throughout Trump’s unlikely campaign, it was obvious how important social media was to his ability to bypass the traditional gatekeepers of American politics. In an earlier age, most television networks would likely have refused to air his blatant lies or his tirades against immigrants, religious minorities, and political opponents. But thanks to Twitter, Donald Trump did not need the infrastructure of traditional media outlets. Instead, he could tweet messages directly to his millions of followers. Once he had done so, established broadcasters faced a stark choice: ignore the main subject of conversation and make themselves irrelevant—or discuss each tweet at length, thereby amplifying Trump’s message even as they ostensibly scrutinized it. Perhaps unsurprisingly, they chose the latter course of action.
Thanks in part to his use of Twitter—as well as his campaign’s shrewd (and possibly illegal) use of micro-targeting on Facebook—Donald Trump carried off a narrow victory. In the months that followed the 2016 election, the conventional wisdom flipped. If the Internet had been portrayed as a savior a few short years before, it now had to be the angel of death. Breathless claims about digital technology’s liberating potential turned into equally breathless prognostications of doom. Social media was declared the most dangerous foe of liberal democracy. “It’s time,” Farhad Manjoo of The New York Times wrote a few days after the election, “to start recognizing that social networks actually are becoming the world-shattering forces that their boosters long promised they would be—and to be unnerved, rather than exhilarated, by the huge social changes they could uncork… In a way, we are now living through a kind of bizarro version of the utopia that some in tech once envisioned would be unleashed by social media.”
Manjoo has a point: the negative potential of social media is all too real. Yet it is too simple to say that social media is a “world-shattering force,” sure to result in a terrible dystopia. The truth about social media is not that it is necessarily good or bad for liberal democracy. Nor is it that social media inherently strengthens or undermines tolerance. It is, rather, that social media closes the technological gap between insiders and outsiders.
Until a few decades ago, governments and big media companies enjoyed an oligopoly over the means of mass communication. As a result, they could set the standards of acceptable political discourse. At times, this meant marginalizing passionate critics of the status quo—and thus making it harder for the weak and powerless to make their voices heard. At other times, it meant declining to publish conspiracy theories, outright lies, or racist rants—and thus stabilizing liberal democracy.
With the rise of social media, this technological advantage has all but evaporated. As a result, the political and financial elites now face a set of much more powerful challengers: thanks to Twitter and Facebook, courageous voices pushing for change—like the students of Parkland, Florida, who have recently become leading voices in the movement for greater gun control—can make a huge mark on public discourse. But for the same reasons, the hucksters of hatred and the merchants of mendacity also have a much easier time undermining liberal democracies.
A lot of ink has been spilled in the past years about how to deal with these transformations. One response has been to put pressure on Twitter and Facebook to change their algorithms and enforce stricter community guidelines; this is the tack that most tech critics have taken in the United States. Another response has been to limit what can be said on social media platforms by coercive legislation; this is the stance that European governments have adopted, with remarkable speed. (In Germany, for example, the executives of social media platforms that fail to delete racially incendiary comments—or, for that matter, blasphemy—now face steep fines.)
Only time can tell whether either of these strategies will succeed in putting the genie back in the bottle. It is certainly imaginable that voters might prove willing to sacrifice unfettered Internet freedom in the hopes of achieving more political stability; that users of Facebook and Twitter will prove so attached to existing platforms that they will stick with them, even as stricter limits on their freedom of expression are enforced; and that hateful messages will effectively be marginalized once a combination of changed algorithms and human censors weed out the most blatant offenders. But it seems at least as plausible to think that Americans won’t be willing to compromise on their First Amendment rights; that they will decamp to more freewheeling alternatives if existing social media platforms are tamed; and, indeed, that more subtle, yet no less powerful, forms of hate will continue to spread on existing platforms even if its most outrageous manifestations are suppressed.
There is, then, a very real possibility that the rise of digital technology, and the concomitant spread of essentially costless communication, have set up a direct clash between two of our most cherished values: freedom of speech and the stability of our political system. This is the first way in which the Internet is threatening to end the hegemony of liberal democracy, and it explains why the public debate about how to regulate platforms like Twitter and Facebook has become so intense.
But as I set out to understand why, as my research with Roberto Foa shows, Americans are less invested in democracy than they were a few decades ago, and why this decline is especially precipitous among young people, I increasingly came to fear that the challenge is even more fundamental. If the rise of digital technology has shaken our political system to the core, the fact that once marginal voices can bypass traditional gatekeepers is not the only reason. Rather, the daily experience of liking and sharing posts on social media may habituate users to a simulated form of direct democracy that makes the existing institutions of representative democracy appear intolerably outmoded.
This raises a set of questions that are just as disturbing as the ones many of us have focused on in recent months: Could digital natives—reared on the direct efficacy of social media—simply be less willing to tolerate the slow, indirect workings of analogue institutions designed in the eighteenth century? And might they therefore be more resistant to accepting the democratic myth that has long underwritten the stability of the American Republic?
The political systems of countries like Great Britain and the United States were founded not to promote, but to oppose, democracy; they only acquired a democratic halo in retrospect, thanks to more recent claims that they allowed the people to rule. The credibility of those claims depends on what they are compared to. So long as the memory of absolute monarchy was recent, and a more directly democratic system seemed unfeasible, liberal democracies could claim to empower the people. This held true for the century or so during which democracy enjoyed an unprecedented ideological hegemony. In the age of the Internet, it no longer does. As a result, the democratic myth that helped to make our institutions look uniquely legitimate is losing its hold.
The undemocratic roots of our supposedly democratic institutions are clearly on display in Great Britain. Parliament was not designed to put power in the hands of the people; it was a blood-soaked compromise between a beleaguered monarch and the upper echelons of the country’s elite. Only when the franchise was gradually expanded over the course of the nineteenth and twentieth centuries did anybody come to think that this system of government might resemble a democracy. Even then, the widening of the franchise turned out to transform the system much less fully than both the advocates and the opponents of democratic reform had predicted.
Because the US was founded in a more ideologically self-conscious manner, the same history is even more evident here. For the Founding Fathers, the election of representatives, which we have come to regard as the most democratic way to translate popular views into public policy, was a mechanism for keeping the people at bay. Elections were, in the words of James Madison, meant to “refine and enlarge the public views, by passing them through the medium of a chosen body of citizens, whose wisdom may best discern the true interest of their country.” That this radically curtailed the degree to which the people could actually influence the government was no accident. “The public voice, pronounced by the representatives of the people,” Madison argued, “will be more consonant to the public good than if pronounced by the people themselves, convened for the purpose.” In short, the Founding Fathers did not believe a representative republic to be second best; they found it far preferable to the factious horrors of a true democracy.
It was only in the nineteenth century, as egalitarian sentiment rose on both sides of the Atlantic, that a set of entrepreneurial thinkers began to dress an ideologically self-conscious republic up in the unaccustomed robes of a born-again democracy. The very same institutions that had once been designed to exclude the people from any substantive share in the government were now commended for facilitating government “of the people, by the people, for the people.”
America increasingly came to be seen as a democracy. But even then, reality lagged far behind. Only gradually did the US make real improvements to its democratic process. With the ratification of the Fifteenth Amendment in 1870, “race, color, or previous condition of servitude” could no longer be used to deny citizens the right to vote (though, in practice, they often still were). The direct election of senators was established by the Seventeenth Amendment, ratified in 1913. Finally, the Nineteenth Amendment, ratified in 1920, established women’s suffrage, decreeing that “the right of citizens of the United States to vote shall not be denied or abridged… on account of sex.”
These reforms did make American institutions more democratic. But the transformation of the language we use to describe the institutions of American democracy has been much more far-reaching than the transformation of the institutions themselves. And crucial to that transformation was a story about the limits of democratic governance under modern conditions.
In ancient Athens, so the story went, the people—or at least those who were regarded as the people, which is to say adult male citizens—could rule directly because there were relatively few of them, because the territory of the state was so small, and because they had leisure to govern since so many of them owned slaves who took care of their daily needs. This is no longer the case in modern democracies. As John Adams noted, the people “can never act, consult, or reason together, because they cannot march five hundred miles, nor spare the time, nor find a space to meet.” In industrial nations that expanded over a huge territory, direct democracy was thought to be impossible.
This assumption allowed the democratic writers of the late nineteenth century to carry out a peculiar reinvention of American government. While representative institutions had been founded in ideological opposition to democracy, they were now re-described as the closest instantiation of that ideal possible under modern conditions. Thus, the founding myth of liberal-democratic ideology—the improbable fiction that representative government would facilitate the rule of the people—was born.
A man who puts new wine into old bottles, warns the Gospel of Luke, is likely to come to grief: “the new wine will burst the bottles, and be spilled, and the bottles shall perish.” The opposite comes closer to the truth for democracy. The rising tide of egalitarian sentiment during the nineteenth century should, by rights, have come into conflict with a set of avowedly aristocratic institutions. Instead, the fresh democratic packaging gave the representative institutions of the United States and the United Kingdom a new lease on life. The arrangement pleased the elites, who continued to get their way on the most important issues, as much as it pleased the egalitarians, who came to see it as a realization of their aspirations.
For a long century, the founding myth of liberal democracy retained sufficient footing in reality to keep a deep hold over the popular imagination, and to help one political system conquer half the globe. But that basis is now crumbling. With the advent of the Internet, John Adams’s worry about the people’s inability to deliberate together has come to seem quaint. It may be true that the people cannot march five hundred miles, nor find a place to meet. But why should they need to? If the people truly sought to govern themselves, they could easily do so. The physical agora of ancient Athens could be replaced by a virtual agora that would allow millions to debate and vote on policy proposals with even greater ease. As a result, citizens now have a much more instinctive sense that our democratic institutions are highly mediated. To previous generations, it might have seemed natural that the people would rule through parliamentary institutions, and elect their representatives by going to a polling station. To a generation raised on the digital, plebiscitary, and immediate voting of Twitter and Facebook, of Big Brother and American Idol, these institutions have come to seem strangely cumbersome. It should not come as a surprise, then, that this generation has also come to view them more and more critically.
Many of today’s citizens may not be as invested in the outcome of debates on public policy as they are in their favorite reality show. They may not even want their influence on government to be as immediate as their vote in a TV talent show. But they have a very clear model for what it feels like to have a real, direct impact. They know that if we wanted to design a system of government that truly allowed the people to rule, it would not look much like the representative democracy of today.
The rise of the Internet and social media has thus created a giant mismatch between the direct efficiency of our digital lives and the cumbersome inefficiency of our formal institutions—and that has accentuated the contrast between our system’s promise to let the people rule and the reality that the people rarely feel as though they can have a real impact on the most important decisions facing their country. As a result, the ideological foundations of our political system are rapidly eroding. The Internet threatens to end the hegemony of liberal democracy not only by amplifying the voice of a small band of haters and extremists, but also by alienating a much larger number of digital natives from the decidedly analogue institutions by which they are governed.
We have only just begun to face up to the first big corrosive influence of digital technology on our politics: the way in which social media has helped to mainstream extremists. We’re only beginning to understand how we can stop vast platforms like Facebook and Twitter from spreading hate and fake news—and whether that will even prove possible without sacrificing constitutive elements of our political system. Our uncertainty is even more radical with regard to the second big corrosive influence of digital technology on our politics: how digital technology undermines the democratic myth that has sustained our political system for the better part of two centuries. As a result, we have not even started to address the issue of how to make the democratic promise of our political system ring true for a new generation.
The frustrations that digital natives feel toward existing institutions could trigger a self-correcting mechanism: as more people demand new forms of participation, we may undertake ambitious reforms to make politics more responsive. And as politics becomes more responsive, digital natives may learn to make peace with the reformed institutions of liberal democracy. There are, undoubtedly, many ways to incorporate digital technology into governance in a productive way. Just as digital-native companies have shown the degree to which modern-day businesses are more successful when they are built around the opportunities and requirements of e-commerce—rather than simply designing a new website to complement their existing distribution channels—so, too, governments could radically reimagine how they serve their citizens. Similarly, experiments with forms of e-participatory budgeting, in which citizens make their own decisions about how money for municipal improvements should be spent, demonstrate that it is possible to introduce forms of direct democracy into local government.
But there are also strong reasons to suspect real limits to the ways in which technology might improve government. Citizens would certainly trust governments more if they could improve the speed and the efficiency with which administrations provide basic services to the public. But the widespread frustration with the state has less to do with excessive bureaucracy or overly cumbersome processes than it does with the underlying reality of the economy and the welfare state: what political scientists call the “performance legitimacy” of our political system has suffered from a combination of rising living costs, stagnating real wages, growing inequality, and dwindling social services.
More important, the real barrier to public participation in politics has always been a lack of interest, time, and expertise as much as a lack of technology. Even if it were easy to weigh in, even vote, on every decision made at the local, county, state, and federal level, most citizens would hardly marshal the enthusiasm to be so intimately engaged with such a wide variety of questions of public policy. Nor would most citizens miraculously develop the expertise to assess, for example, what kinds of regulations are needed to keep a power plant safe. For good reason, direct democracy has always had more adherents in theory than in practice.
A dozen years after the invention of the printing press, the new technology had not yet left the city of Mainz. Only a tiny portion of the world’s population had ever held a printed book in their hands. Most things were as yet unaffected by the coming revolution in communication and politics. A dozen years after the invention of Facebook, by contrast, the new technology has spread to every corner of the globe. Some two billion people actively use the platform. It is too early to tell whether the rise of social media will, with the benefit of dozens, or hundreds, of years of hindsight, turn out to change the world for better or for worse. But there can be little doubt that, in the short run—which is to say, for the rest of our lives—it will make for a more chaotic world.
In recent years, it has been the populists who have exploited the new technology most effectively to undermine the basic elements of liberal democracy. Unfettered by the constraints of the old media system, and buoyed by a growing popular cynicism about democracy’s promise, the demagogues have been willing to say whatever it takes to get elected—to flatter and deceive, to obfuscate, and even to incite hatred of their fellow citizens. Perhaps their rhetoric will prove to be unstoppable. As one state legislator recently pointed out to me, it is difficult for a rational politician to win a debate with a three-sentence answer when his rival is offering a one-sentence answer—especially when the other candidate can blast his simplistic take all over Twitter and Facebook.
But just as the pro-democracy activists who used social media to topple dictators underestimated how difficult the task of consolidating their victory would turn out to be, so the ascendant populists may yet find the technological future to be more challenging than they expect. “Whoever is winning at the moment,” George Orwell once wrote, “will always seem to be invincible.” Once the populists capture the government, and start to break many of their promises, they may be rudely reminded of social media’s potential to mobilize the newly empowered outsiders against their rule.
All is not lost. But to revitalize liberal democracy in the digital age, it will not be enough to think carefully about how to enforce privacy rights or stifle the most hateful voices on the Internet. We must also think anew about how to fill the democratic promise with meaning for a new generation that has lost the belief in the democratic myth that long provided legitimacy for our political system.