Tuesday, July 2, 2019

The Web Is A Low-Trust Society

Back in 1992, Robert Putnam et al. published Making Democracy Work: Civic Traditions in Modern Italy, contrasting the social structures of Northern and Southern Italy. For historical reasons, the North has a high-trust structure whereas the South has a low-trust structure. The low-trust environment in the South led to the rise of the Mafia and persistently poor economic performance. Subsequent effects include the rise of Silvio Berlusconi.

Now, in The Internet Has Made Dupes - and Cynics - of Us All, Zeynep Tufekci applies the same analysis to the Web:
Online fakery runs wide and deep, but you don’t need me to tell you that. New species of digital fraud and deception come to light almost every week, if not every day: Russian bots that pretend to be American humans. American bots that pretend to be human trolls. Even humans that pretend to be bots. Yep, some “intelligent assistants,” promoted as advanced conversational AIs, have turned out to be little more than digital puppets operated by poorly paid people.

The internet was supposed to not only democratize information but also rationalize it—to create markets where impartial metrics would automatically surface the truest ideas and best products, at a vast and incorruptible scale. But deception and corruption, as we’ve all seen by now, scale pretty fantastically too.
Below the fold, some commentary.

Tufekci summarizes the contrast between high- and low-trust societies thus:
At some point, the typical response to this onslaught of falsehood is to say, lol, nothing matters. But when so many of us are reaching this point, it really does matter. Social scientists distinguish high-trust societies (ones where you can expect most interactions to work) from low-trust societies (ones where you have to be on your guard at all times). People break rules in high-trust societies, of course, but laws, regulations, and norms help to keep most abuses in check; if you have to go to court, you expect a reasonable process. In low-trust societies, you never know. You expect to be cheated, often without recourse. You expect things not to be what they seem and for promises to be broken, and you don’t expect a reasonable and transparent process for recourse. It’s harder for markets to function and economies to develop in low-trust societies. It’s harder to find or extend credit, and it’s risky to pay in advance.

The internet is increasingly a low-trust society—one where an assumption of pervasive fraud is simply built into the way many things function.
Indeed. This has been a theme of many of my recent posts. Following Putnam, Tufekci points out that:
People do adapt to low-trust societies, of course. Word-of-mouth recommendations from familiar sources become more important. Doing business with family and local networks starts taking precedence, as reciprocal, lifelong bonds bring a measure of predictability. Mafia-like organizations also spring up, imposing a kind of accountability at a brutal cost.

Ultimately, people in low-trust societies may welcome an authoritarian ruler, someone who will impose order and consequences from on high. Sure, the tyrant is also corrupt and cruel; but the alternative is the tiring, immiserating absence of everyday safety and security. During the reign of Kublai Khan, it was said that “a maiden bearing a nugget of gold on her head could wander safely throughout the realm.” The Great Khan required absolute submission, but even repression has some seeming perks.
Adapting to the absence of trust by embracing authoritarian rulers seems to be happening in Western "democracies". Wikipedia describes Silvio Berlusconi thus:
Berlusconi was the first person to assume the premiership without having held any prior government or administrative offices. He is known for his populist political style and brash, overbearing personality. ... Supporters emphasize his leadership skills and charismatic power, his fiscal policy based on tax reduction, and his ability to maintain strong and close foreign relations with both the United States and Russia. In general, critics address his performance as a politician, and the ethics of his government practices in relation to his business holdings. Issues with the former include accusations of having mismanaged the state budget and of increasing the Italian government debt. The second criticism concerns his vigorous pursuit of his personal interests while in office, including benefitting from his own companies' growth due to policies promoted by his governments, having vast conflicts of interest due to ownership of a media empire with which he has restricted freedom of information and finally, being blackmailed as leader because of his turbulent private life.
Britain appears about to have an unelected (and unelectable) Prime Minister foisted on the country. Wikipedia describes Boris Johnson thus:
Johnson is a controversial figure in British politics and journalism. Supporters have praised him as an entertaining, humorous, and popular figure with appeal beyond traditional Conservative voters. Conversely, he has been criticised by figures on both the left and right, who have accused him of elitism, cronyism, dishonesty, laziness, and using racist and homophobic language. Johnson is the subject of several biographies and a number of fictionalised portrayals.
His ex-editor describes him thus:
I have known Johnson since the 1980s, when I edited the Daily Telegraph and he was our flamboyant Brussels correspondent. I have argued for a decade that, while he is a brilliant entertainer who made a popular maître d’ for London as its mayor, he is unfit for national office, because it seems he cares for no interest save his own fame and gratification.

Tory MPs have launched this country upon an experiment in celebrity government, matching that taking place in Ukraine and the US, and it is unlikely to be derailed by the latest headlines. The Washington Post columnist George Will observes that Donald Trump does what his political base wants “by breaking all the china”. We can’t predict what a Johnson government will do, because its prospective leader has not got around to thinking about this. But his premiership will almost certainly reveal a contempt for rules, precedent, order and stability.

A few admirers assert that, in office, Johnson will reveal an accession of wisdom and responsibility that have hitherto eluded him, not least as foreign secretary. This seems unlikely, as the weekend’s stories emphasised. Dignity still matters in public office, and Johnson will never have it. Yet his graver vice is cowardice, reflected in a willingness to tell any audience whatever he thinks most likely to please, heedless of the inevitability of its contradiction an hour later.
Does any of this sound familiar to an American ear? These charlatans seem impedance-matched to the Web, especially when, as in both Trump's and Johnson's cases, they are assisted by Rupert Murdoch, Mark Zuckerberg, Jack Dorsey and Vladimir Putin.

[Chart: left-right positions of US and European parties, from Sahil Chinoy's What Happened to America's Political Center of Gravity?, The New York Times]
The authoritarian trend of the US right was under way before Trump, as documented in What Happened to America’s Political Center of Gravity? by Sahil Chinoy. That data only goes to 2016, but Trump has obviously accelerated the trend, since Republicans who voice skepticism about Trump now instantly become persona non grata. Chinoy writes, and the diagram above shows, that even before Trump took office:
The Republican Party leans much farther right than most traditional conservative parties in Western Europe and Canada, according to an analysis of their election manifestos. It is more extreme than Britain’s Independence Party and France’s National Rally (formerly the National Front), which some consider far-right populist parties. The Democratic Party, in contrast, is positioned closer to mainstream liberal parties.
What can be done? Zeynep Tufekci's suggestion is:
There are better ways of beating back the tide of deception. They involve building the kinds of institutions and practices online that have historically led to fair, prosperous, open societies in the physical world. Better rules and technologies that authenticate online transactions; a different ad-tech infrastructure that resists fraud and preserves privacy; regulations that institute these kinds of changes into law: Those would be a start. It’s hard to believe we’ve let it get this far, but here we are. Right now, everyone knows the internet is fake. The problem is that, lol, all of this matters.
A more detailed set of suggestions comes from Ann M. Ravel, Samuel C. Woolley and Hamsini Sridharan in Principles and Policies to Counter Deceptive Digital Politics. They start by setting out six principles:
  1. Transparency
  2. Accountability
  3. Standards
  4. Coordination
  5. Adaptability
  6. Inclusivity
They use these principles to propose immediate policies in five areas:
  1. Campaign Finance
  2. Data Usage and Privacy
  3. Automated and Fake Accounts
  4. Platform Liability
  5. Multisector Infrastructure
And longer-term systemic changes in four areas:
  1. Global Cooperation
  2. Research and Development
  3. Media and Civic Education
  4. Competition
These are all worthy proposals to enhance the trustworthiness of the Web, although some would sacrifice user privacy for that goal. For example, among the techniques for bot suppression are reCAPTCHAs, such as Google's "I'm not a robot" box.

But, as Katharine Schwab reports in Google’s new reCAPTCHA has a dark side, the more effective the reCAPTCHA is at discriminating between humans and bots, the more effective it is at tracking the humans:
Google analyzes the way users navigate through a website and assigns them a risk score based on how malicious their behavior is. ...
According to tech statistics website Built With, more than 650,000 websites are already using reCaptcha v3; overall, at least 4.5 million websites use reCaptcha, including 25% of the top 10,000 sites.
...
But this new, risk-score based system comes with a serious trade-off: users’ privacy.

According to two security researchers who’ve studied reCaptcha, one of the ways that Google determines whether you’re a malicious user or not is whether you already have a Google cookie installed on your browser. It’s the same cookie that allows you to open new tabs in your browser and not have to re-log in to your Google account every time. But according to Mohamed Akrout, a computer science PhD student at the University of Toronto who has studied reCaptcha, it appears that Google is also using its cookies to determine whether someone is a human in reCaptcha v3 tests. Akrout wrote in an April paper about how reCaptcha v3 simulations that ran on a browser with a connected Google account received lower risk scores than browsers without a connected Google account.
The Google cookie allows Web sites to ask "Papers, Please" before letting visitors enter. It is the Web equivalent of a passport, and it lets Google track you across the Web to sell you to advertisers.
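
To make the risk-score mechanism concrete, here is a minimal sketch of the server-side half of a reCAPTCHA v3 check, in Python using the requests library. The /siteverify endpoint and the "success" and "score" response fields are Google's documented API; the secret key, threshold, and surrounding wiring are illustrative only. Note that Google's documentation defines the score the other way round from Schwab's framing: 1.0 means most likely human, 0.0 most likely bot.

import requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
SECRET_KEY = "your-secret-key"  # placeholder, not a real key

def recaptcha_score(token):
    """Ask Google to score a token produced by the client-side JS."""
    result = requests.post(
        VERIFY_URL,
        data={"secret": SECRET_KEY, "response": token},
        timeout=5,
    ).json()
    if not result.get("success"):
        return 0.0  # invalid or expired token: treat as maximally bot-like
    return result.get("score", 0.0)  # 0.0 = likely bot, 1.0 = likely human

def allow_visitor(token):
    # Each site picks its own threshold, trading false rejections of
    # humans against admitting more bots; Google's examples use 0.5.
    return recaptcha_score(token) >= 0.5

The privacy cost is invisible in this code: the scoring happens entirely on Google's side, using whatever signals (including its cookies) Google chooses.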

Both Tufekci's and Ravel et al.'s proposals face three major hurdles. First, they are a US-centric response to a problem that they recognize is global:
In order to protect democracy at the global level, the U.S. must convene and take part in international conversations about deceptive digital politics. Democracies around the world are grappling with computational propaganda, but despite the global scale and interconnectedness of the problem, there has been little international coordination to resolve it. Several countries have attempted to regulate the problem on their own, with varying degrees of success. Some have taken markedly authoritarian approaches, further destabilizing democracy rather than safeguarding it.
Second, the parties that control the government in the US, the UK, and several other countries, and the senior management of the major platforms, understand that they owe their positions to the very techniques these proposals attack. So the prospects for enacting Tufekci's or Ravel et al.'s ideas at a global, or even just a US, level are non-existent. The idea that, under the Trump administration, the US would "convene and take part in international conversations about deceptive digital politics" is beyond laughable.

Third, and more fundamentally, they all approach the problem of inadequate trustworthiness by attempting to reduce the amount of untrustworthy content people see. In effect, they propose censoring (or, more politely, "moderating") the Web. We have a lot of experience showing that content moderation at Web scale doesn't work.

The alternative approach is to adapt to the flood of untrustworthy content by making people less susceptible to it, by inoculating them against it. Cory Doctorow reports on two experiments suggesting that inoculation might work. First, in A "Fake News Game" that "vaccinates" players against disinformation, he writes:
Bad News is a free webgame created by two Cambridge psych researchers; in a 15-minute session, it challenges players to learn about and deploy six tactics used in disinformation campaigns ("polarisation, invoking emotions, spreading conspiracy theories, trolling people online, deflecting blame, and impersonating fake accounts").

The game was created to test the hypothesis that learning how these techniques worked would make players more discerning when they were encountered in the wild. To evaluate the proposition, players are quizzed before and after the game and asked to evaluate the credibility of a series of tweets.

In their analysis of the results, the study authors make the case that the game is indeed capable of "vaccinating" players against disinformation. During the three months that the study ran, 43,687 subjects played the game. These subjects (a "convenience sample") were self-selecting and skewed older, educated, male and liberal, but "the sample size still allowed us to collect relatively large absolute numbers of respondents in each category."
The paper describing the research is Fake news game confers psychological resistance against online misinformation by Jon Roozenbeek and Sander van der Linden. The abstract states:
The spread of online misinformation poses serious challenges to societies worldwide. In a novel attempt to address this issue, we designed a psychological intervention in the form of an online browser game. In the game, players take on the role of a fake news producer and learn to master six documented techniques commonly used in the production of misinformation: polarisation, invoking emotions, spreading conspiracy theories, trolling people online, deflecting blame, and impersonating fake accounts. The game draws on an inoculation metaphor, where preemptively exposing, warning, and familiarising people with the strategies used in the production of fake news helps confer cognitive immunity when exposed to real misinformation. We conducted a large-scale evaluation of the game with N = 15,000 participants in a pre-post gameplay design. We provide initial evidence that people’s ability to spot and resist misinformation improves after gameplay, irrespective of education, age, political ideology, and cognitive style.
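
As an aside, the pre-post design is easy to picture in code. The following toy analysis uses invented ratings, not the study's data, to show the shape of the comparison: the same players rate the credibility of fake tweets before and after play, and a paired test checks whether the ratings dropped.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_players = 200

# Invented 1-7 credibility ratings of fake tweets, before and after play.
pre = rng.normal(loc=4.2, scale=1.0, size=n_players).clip(1, 7)
post = (pre - rng.normal(loc=0.5, scale=0.8, size=n_players)).clip(1, 7)

# Paired test: did the same players rate the fakes as less credible
# after learning the six manipulation techniques?
t, p = stats.ttest_rel(pre, post)
print(f"mean pre={pre.mean():.2f}, post={post.mean():.2f}, t={t:.2f}, p={p:.2g}")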
Second, in Internet users are wising up to persuasive "nudge" techniques, Doctorow writes:
marketing techniques don't age well: once you encounter a trick a few times, it starts to lose power, in the way that so many phenomena regress to the mean. Behavioral marketers know that they can prolong the efficacy of these techniques with "intermittent reinforcement" (that is, using each technique sparingly, at random intervals, which make them more resistant to our ability to grow accustomed to them), but marketers have a collective action problem, a little dark-side Tragedy of the Commons: it's in the advertising industry's overall interest to limit the use of techniques so that we don't get accustomed to them, but any given marketer knows that if they don't use the technique to exhaustion, some other marketer will, so each marketer "overgrazes" the land (that is, us), in order to beat the others.
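
Doctorow's "overgrazing" is a textbook prisoner's dilemma, and a toy model makes the structure explicit. Every number below (the decay constant, the usage levels) is invented for illustration:

import math

N = 10                 # marketers sharing one audience
LIGHT, HEAVY = 1, 10   # uses of the technique per marketer
DECAY = 0.04           # habituation per unit of total exposure

def payoff(own_use, others_use):
    """One marketer's return when all the others choose others_use."""
    total = own_use + (N - 1) * others_use
    effectiveness = math.exp(-DECAY * total)  # audience habituation
    return own_use * effectiveness

print("all restrained: ", round(payoff(LIGHT, LIGHT), 3))  # ~0.67
print("lone overgrazer:", round(payoff(HEAVY, LIGHT), 3))  # ~4.68
print("all overgrazing:", round(payoff(HEAVY, HEAVY), 3))  # ~0.18

Overusing the technique dominates restraint for any single marketer (4.68 beats 0.67), yet when everyone overuses, each collects far less (0.18) than if all had shown restraint, which is exactly the dark-side Tragedy of the Commons Doctorow describes.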
The report on which Doctorow's piece is based is Consumers Are Becoming Wise to Your Nudge by Simon Shaw:
Two thirds of the British public (65 percent) interpreted examples of scarcity and social proof claims used by hotel booking websites as sales pressure. Half said they were likely to distrust the company as a result of seeing them (49 percent). Just one in six (16 percent) said they believed the claims.

The results surprised us. We had expected there to be cynicism among a subgroup—perhaps people who booked hotels regularly, for example. The verbatim commentary from participants showed people see scarcity and social proof claims frequently online, most commonly in the travel, retail, and fashion sectors. They questioned the truth of these ads, but were resigned to their use:

It’s what I’ve seen often on hotel websites—it’s what they do to tempt you.

Have seen many websites do this kind of thing so don’t really feel differently when I do see it.

In a follow up question, a third (34 percent) expressed a negative emotional reaction to these messages, choosing words like contempt and disgust from a precoded list. Crucially, this was because they ascribed bad intentions to the website. The messages were, in their view, designed to induce anxiety:

… almost certainly fake to try and panic you into buying without thinking.

I think this type of thing is to pressure you into booking for fear of losing out and not necessarily true.
In Two tactics effectively limit the spread of science denialism, Kathleen O'Grady describes similar inoculation processes:
when the results of all six experiments were combined to create a larger, more-powerful data set, the overall picture was that both topic and technique rebuttals worked equivalently well. The researchers also discovered that the combined rebuttals had no additional benefit.

In other words, it's effective to either present audiences with accurate facts or describe the rhetorical techniques that had been used to spread misinformation.
The argument here is similar to the one I made in my three-part series on Michael Nelson's CNI Keynote. Doing what can be done to reduce untrustworthiness on the Web can only go so far if it isn't to become wholesale censorship. So enhancing Web users' skepticism is necessary, even though doing so adapts to, rather than alleviates, the low-trust environment of the Web.
