Political lies aren’t new, but the methods of spreading them are

Lie Machines

By Philip N. Howard

Yale. 204 pp. $26

Before Breitbart and 4chan mastered the art of the strategically incisive political lie, the Greek deity Zeus was already at the bleeding edge of fake news. In Book II of the Iliad, the god sends Agamemnon, king of the Achaeans, a dream urging him to dispatch his men into battle, promising that this will end Troy’s siege. But Zeus is lying. He knows an all-out attack will fail and hopes Agamemnon will get a thorough walloping. (The king doesn’t take the bait.)

It’s telling, too, that Zeus chose to speak through a dream: Long before Mark Zuckerberg or Jack Dorsey, Homer grasped that an intimate, direct channel of communication can stir folks to action with more ferocity than any unvarnished public appeal.

In the 20th century, the political lie piped directly into an individual’s sightline flourished like a tenacious weed. In the 1910s, the U.S. Committee on Public Information built public support for American involvement in World War I not just with films, pamphlets and posters, but also with the judiciously delivered fib, often fed to a newspaper. Forty years later, tobacco companies employed public relations giant Hill & Knowlton not to deny cigarettes’ carcinogenic effects, but to seed doubt about medical research and cast blame elsewhere. Legislation that might have saved hundreds of thousands of lives languished as a result.

All those not living in a cave know by now that social media platforms such as Facebook, Twitter, YouTube, Instagram and even Tinder have become vehicles for a veritable tsunami of mendacious and polarizing information. Debates rage about whether Russian troll farm efforts changed the outcome of the 2016 presidential election. (The best analysis suggests they did not.) The nonpartisan Brennan Center for Justice has already warned that Russian interventions in this year’s vote will be “more brazen” than in 2016.

But how much of this is really new? Has technology made the political lie any different from its Attic or Progressive-era precursor? Philip Howard, the author of “Lie Machines,” is unquestionably well-placed to illuminate this question. Director of the Oxford Internet Institute, Howard was asked in 2017 by the Senate Intelligence Committee to conduct a postmortem on the social media activities of the Russian Internet Research Agency. A seemingly modest operation run out of a nondescript St. Petersburg office with between 40 and 100 employees and a $10 million budget, the IRA – notice how even the name will baffle a standard search engine – has given President Vladimir Putin a large return on his small investment.

The work of Howard and his team was pivotal in clarifying the Russian strategy in 2016. They showed, for instance, how the IRA targeted Americans at the poles of the political spectrum, exacerbating their divisions, and flooded swing districts with misleading or inflammatory advertisements. Tens of millions of users viewed IRA ads.

Howard cautions against optimism that the quality of online political discourse will improve soon. To the contrary, a new tool kit for Web-based public lies has been tested by Russia and China, for use first at home and then against foreign foes. It is diffusing quickly to more nations. Howard counts five other countries – India, Iran, Pakistan, Saudi Arabia and Venezuela – that are using the same tool kit against overseas democratic publics. In 2020, there were “organized social media misinformation teams” working for parties and governments in some 70 countries. Howard spoke to firms in Poland and Brazil that are helping in those efforts and found robust competition among producers of the mediated lie. In this market, his analysis suggests, the incentives driving supply are unlikely to abate.

So regardless of whether any specific electoral result changed because of Russian interference, Howard is persuasive when he identifies the emergent phenomenon of “computational propaganda” as a serious threat to democracy. The tactic uses big-data analytics to tailor and target propaganda to individuals’ fears and weaknesses. This is a bit different from what retailers do. Amazon endeavors to know what you want before you even think of it. In contrast, the propagandist seeks to exploit your fears and change what you believe, and hence how you vote.

Much of Howard’s analysis is haunted by a hazy notion that new computational tools are just different versions of earlier forms of propaganda. But he also gives a more specific and persuasive argument that these tools are really something new: Like Agamemnon’s dream, computational propaganda allows organized political forces to mainline influence directly into voters’ minds, appealing not with reason but through manipulation of their emotions.

Social psychologists have long stressed the importance of personal networks in learning about the world. That finding matters here because many people encounter computational propaganda through posts shared by friends and family. Howard’s research showed that some 30 million people shared the IRA’s Facebook and Instagram posts. This diffusion enables the IRA and its ilk to tap directly into the critical psychological infrastructure at work in individual belief formation.

If the supply of computational propaganda is likely to grow, what can be done? Here, “Lie Machines” disappoints. Howard bemoans the ways in which the data economy has changed the balance of power between the citizenry and organized powers. He presses for mandatory reporting and auditing of social media. More vaguely, he gestures toward the possibility of unlocking the “real potential of social media platforms to support public life.” But this notion arrives on the book’s second-to-last page and is never given substance in the form of concrete, specific proposals.

Most profoundly, Howard skips a beat by focusing solely on supply and ignoring demand. Literature once more helps us grasp the difficulty he evades. Shakespeare, in his play “Othello,” has the villainous Iago explain how he persuaded the titular character to murder his wife on a false suspicion of adultery. Says Iago of his lies: They were “no more/ Than what he found himself was apt and true.” So too of computational propaganda today. Its power lies not solely in advances in algorithmic techniques but in its grasp of the distinctive vanities of its targets. Like Homer, Shakespeare understood what we tend too quickly to forget: It is our vanities that make us vulnerable to persuasion of all sorts. But they are difficult to remedy precisely because they also make us who we are.
