- How to Regulate (and Not Regulate) Social Media by Jack Balkin
- Bipartisan legislation would force Big Tech to allow interoperability with small competitors by Cory Doctorow
- The Good And The Bad Of The ACCESS Act To Force Open APIs On Big Social Media by Mike Masnick
- Testimony by Maciej Cegłowski to the Senate Committee on Banking, Housing, and Urban Affairs for their hearing on Privacy Rights and Data Collection in a Digital Economy
- A Framework for Regulating Competition on the Internet by Ben Thompson.
- A Better Internet Is Waiting for Us by Annalee Newitz
Prof. Jack M. Balkin of Yale Law School starts by asking the right question:
To understand how to regulate social media you must first understand why you want to regulate it.

He then sets out his answer:
We should regulate social media companies because they are key institutions in the twenty-first century digital public sphere. A public sphere does not work properly without trusted and trustworthy intermediate institutions that are guided by professional and public-regarding norms.

Balkin's idea of "responsible" social media companies is that:
The current economic incentives of social media companies hinder them from playing this crucial role and lead them to adopt policies and practices that actually undermine the health and vibrancy of the digital public sphere.
The point of regulating social media is to create incentives for social media companies to become responsible and trustworthy institutions that will help foster a healthy and vibrant digital public sphere. It is equally important to ensure that there are a large number of different kinds of social media companies, with diverse affordances, value systems, and innovations.
Social media perform their public functions well when they promote these three central values: political democracy, cultural democracy, and the growth and spread of knowledge. More generally, a healthy, well-functioning digital public sphere helps individuals and groups realize these three central values of free expression. A poorly functioning public sphere, by contrast, undermines political and cultural democracy, and hinders the growth and spread of knowledge.

How would social media behave differently in Balkin's environment of:
trusted and trustworthy intermediate institutions that are guided by professional and public-regarding norms

Balkin's answer appears to be that they would impose "civility norms":
Generally speaking, the free speech principle allows the state to impose only a very limited set of civility norms on public discourse, leaving intermediate institutions free to impose stricter norms in accord with their values. This works well if there are many intermediate institutions. The assumption is that in a diverse society with different cultures and subcultures, different communities will create and enforce their own norms, which may be stricter than the state’s. I believe that a diversity of different institutions with different norms is a desirable goal for the public sphere in the twenty-first century too. But I also believe that there is a problem ... when only one set of norms is enforced or allowed. If private actors are going to impose civility norms that are stricter than what governments can impose, it is important that there be many different private actors imposing these norms, reflecting different cultures and subcultures, and not just two or three big companies.

Balkin points out that the requirement to impose "civility norms" makes treating the platforms as public utilities unconstitutional in the US:
if social media companies are treated as state actors and have to abide by existing free speech doctrines—at least in the United States—they will simply not be able to moderate effectively. Facebook and Twitter’s community standards, for example, have many content-based regulations that would be unconstitutional if imposed by government actors. Even if one eliminated some of these rules the minimum requirements for effective online moderation would violate the First Amendment.

So, if just creating the National Public Radio of social media doesn't address the problem, what, in Balkin's view, would?
Treating social media companies as state actors or as public utilities does not solve the problems of the digital public sphere. One might create a public option for social media services, but this, too, cannot serve as a general solution to the problems that social media create. Instead, this essay describes three policy levers that might create better incentives for privately-owned companies: (1) antitrust and competition law; (2) privacy and consumer protection law; and (3) a careful balance of intermediary liability and intermediary immunity rules.

How would Balkin use these policy levers?
Antitrust and competition law

Balkin's goals in this area are:
First, competition policy should aim at producing many smaller companies. You might think of this as a sort of social media federalism.

Second, we want to prevent new startups from being bought up early. This helps innovation. It prevents large companies from buying up potential competitors and killing off innovations that are not consistent with their current business models.

Third, competition policy should seek to separate different functions that are currently housed in the same company. This goal of separation of functions is different from a focus on questions of company size and market share.

These are all good, and inter-dependent. They match the emerging consensus on the ill-effects of the US abandonment of anti-trust enforcement, for example Lina M. Khan's The Separation of Platforms and Commerce, or Sam Long's Monopoly Power and the Malfunctioning American Economy (for the Institutional Investor!), and even the House anti-trust chairman.
Privacy and consumer protection law

Balkin wants to treat the platforms as "information fiduciaries":
Information fiduciaries have three basic duties towards the people whose data they collect: a duty of care, a duty of confidentiality, and a duty of loyalty. The fiduciary model is not designed to directly alter content moderation practices, although it may have indirect effects on them. Rather, the goal of a fiduciary model is to change how digital companies, including social media companies, think about their end users and their obligations to their end users. Currently, end users are treated as a product or a commodity sold to advertisers. The point of the fiduciary model is to make companies stop viewing their end users as objects of manipulation—as a pair of eyeballs attached to a wallet, captured, pushed and prodded for purposes of profit.

Again, imposing the information fiduciary model on the platforms is probably a good idea, but neither Balkin nor I am clear about what the resulting change in business models would be. Presumably Balkin believes that the platforms would have to stop "selling users' personal information". But at least Facebook argues that it doesn't do this; it merely sells advertisers the ability to target ads using, but never accessing, Facebook's database of personal information. Legal experts are skeptical and, given its history of lying, any pronouncement from Facebook should be taken with several grains of salt.
This has important consequences for how companies engage in surveillance capitalism. If we impose fiduciary obligations, even modest ones, business models will have to change, and companies will have to take into account the effects of their practices on the people who use their services.
Intermediary liability and intermediary immunity rules

Currently, social media platforms (and many other Internet services) have intermediary immunity via Section 230 of the Communications Decency Act, which:
says that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. The protected intermediaries include not only regular Internet Service Providers (ISPs), but also a range of "interactive computer service providers," including basically any online service that publishes third-party content.

Balkin proposes to use intermediary immunity as a lever:
Because the current broad scope of intermediary immunity is not required by the First Amendment or the free speech principle more generally, governments should use the offer of intermediary immunity as a lever to get social media companies to engage in public-regarding behavior. In particular, one should use intermediary immunity as a lever to get social media companies to accept fiduciary obligations toward their end users.

He would also condition immunity on obligations of due process and transparency:

Governments might also condition intermediary immunity on accepting obligations of due process and transparency. Social media companies currently have insufficient incentives to invest in moderation services and to ensure that their moderators are treated properly. ... But governments should also create incentives for platforms to invest in increasing the number of moderators they employ as well as providing more due process for end-users. ... In short, I don’t want to scrap intermediary immunity. I want to use it to create incentives for good behavior.

Under the DMCA, platforms have distributor liability for copyright infringement, which Balkin describes thus:

Distributor liability means that companies are immune from liability until they receive notice that content is unlawful. Then they have to take down the content within a particular period of time or else they are potentially vulnerable to liability (although they may have defenses under substantive law).

Balkin suggests imposing distributor liability, in addition to copyright, for disapproved content such as "revenge porn" and paid advertisements.
My Critique

It is hard to disagree with Balkin's high-level analysis, and his three-way classification of the levers available to the US government. Nevertheless, I have a number of problems with his proposals.
Balkin's diagnosis of the problem to be remedied appears to be two-fold:
- There are too few platforms (Facebook and its subsidiaries, Twitter, YouTube).
- They impose anti-social "civility norms" (e.g. refusing to take down false political ads).
Rogue archivist Carl Malamud had posted filmmaker Frank Capra's classic Prelude to War on YouTube. If you're unfamiliar with Prelude to War, it's got quite a backstory. During World War II, the US government decided that, in order to build up public support for the war, it would fund Hollywood to create blatant American propaganda. They had Frank Capra, perhaps Hollywood's most influential director during the 1930s, produce a bunch of films under the banner "Why We Fight." The very first of these was "Prelude to War."

I.e. YouTube classifies it as one of the "videos that promote or glorify Nazi ideology" and leaves Malamud no avenue of recourse. Which is clearly ridiculous:
The film, which gives a US government-approved history of the lead up to World War II includes a bunch of footage of Adolf Hitler and the Nazis. Obviously, it wasn't done to glorify them. The idea is literally the opposite. However, as you may recall, last summer when everyone was getting mad (again) at YouTube for hosting "Nazi" content, YouTube updated its policies to ban "videos that promote or glorify Nazi ideology." We already covered how this was shutting down accounts of history professors. And, now, it's apparently leading them to take US propaganda offline as well.
Malamud received a notice saying the version of "Prelude to War" that he had uploaded had been taken down for violating community guidelines. He appealed and YouTube has rejected his appeal, apparently standing by its decision that an anti-Nazi US propaganda film financed by the US government and made by famed director Frank Capra... is against the site's community guidelines.
Of course, as Malamud himself points out, what's particularly ridiculous is that this isn't the only version of the film he's uploaded. So while that one is still down, another one is still there. You can watch it now. Well, at least until YouTube decides this one also violates community standards.

The above is by Mike Masnick, who has long argued Masnick's Impossibility Theorem, namely that it is impossible to do content moderation at scale well. There are two main reasons:
- The amount of content to be moderated is enormous. I can effectively moderate this blog; I'm the only one who can post, and I moderate all comments, rejecting most as spam. But YouTube alone ingests about 30,000 hours of video each hour. Assuming an 8-hour workday they would need 90,000 humans, or 90% of their full-time employees, watching the uploads. Of course, they don't want to waste full-timers on this task, so they use contractors working in appalling conditions and suffering PTSD. Even then they can only review a tiny fraction of the incoming flood of content.
- The question of whether each individual content upload violates "civility norms" is subjective. As Carl Malamud's case shows, automated rules such as "no images of Hitler" don't cut it. So the platforms need their content moderators to be broadly educated, aware of the context of the content, and motivated to take care in reviewing. This isn't what you get from minimum-wage contractors, so the platforms are going to allow almost everything through and respond minimally to the complaints they receive.
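The headcount arithmetic above can be sketched as a back-of-envelope check, using the upload and workday figures quoted in the text (not official YouTube numbers):

```python
# Back-of-envelope check of the moderation headcount estimate.
# Figures are those assumed in the text, not YouTube's own numbers.
UPLOAD_HOURS_PER_HOUR = 30_000      # hours of video ingested every hour
WORKDAY_HOURS = 8                   # hours one moderator can watch per day

# Video uploaded per day, in hours
daily_upload_hours = UPLOAD_HOURS_PER_HOUR * 24

# Moderators needed to watch all of it as it arrives
moderators_needed = daily_upload_hours / WORKDAY_HOURS

print(f"{moderators_needed:,.0f} moderators needed")  # 90,000 moderators needed
```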
Any regulatory approach that places responsibility for effective enforcement of "civility norms" on the platforms can succeed only by vastly reducing the amount of content that must adhere to those norms.
Balkin's use of intermediary immunity as a lever would certainly greatly increase the cost of content moderation, and thus greatly reduce the amount of content to be moderated:
Governments might also condition intermediary immunity on accepting obligations of due process and transparency. Social media companies currently have insufficient incentives to invest in moderation services and to ensure that their moderators are treated properly. ... But governments should also create incentives for platforms to invest in increasing the number of moderators they employ as well as providing more due process for end-users. They should also require companies to hire independent inspectors or ombudsmen to audit the company’s moderation practices on a regular basis.

Let's make an estimate of how big this effect would be. In 2017, YouTube's global net revenue was $7.8B so, assuming 30K hours/hour uploaded, each uploaded hour "earned" about $30. Out of that needs to come the unknown costs of ingesting, storing, and streaming the content, and Google's 22% margins. But it looks like they could afford a $15/hour minimum wage with zero overheads to watch the incoming videos (and do a lousy job of moderation).
But if Google were liable for errors and omissions in moderation, they would need to spend a lot more per hour. So the response would likely be to charge creators for uploading their videos, perhaps around $1/minute. This would certainly greatly reduce the rate of video upload. Unfortunately, the reduction would be unlikely to fall mostly on the socially undesirable content. An example would be one of my favorite genres to watch while exercising, "train driver's eye" videos such as the hours-long labors of love by Don Coffey.
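The revenue-per-hour and upload-fee arithmetic above can be sketched as follows (the 2017 revenue, upload rate, and $1/minute charge are the figures assumed in the text):

```python
# Revenue "earned" per uploaded hour, using the 2017 figures in the text.
NET_REVENUE_2017 = 7.8e9            # YouTube global net revenue, $
UPLOAD_HOURS_PER_HOUR = 30_000      # hours of video ingested every hour

hours_per_year = UPLOAD_HOURS_PER_HOUR * 24 * 365        # 262.8M hours/year
revenue_per_uploaded_hour = NET_REVENUE_2017 / hours_per_year  # ~$29.7

# The suggested $1/minute upload charge, per hour of video:
fee_per_uploaded_hour = 1.0 * 60                         # $60/hour

# The fee would exceed what an average uploaded hour currently "earns",
# which is why it would greatly reduce the rate of upload.
print(f"${revenue_per_uploaded_hour:.0f} earned vs ${fee_per_uploaded_hour:.0f} charged")
```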
You could argue that whole genres of content need no moderation. What could be the problem with a "driver's eye" video? As residents of Palo Alto know only too well, suicide by train is all too frequent, and traumatizing for the driver.
Section 230 immunity is under sustained attack, most recently from Donald Trump's Roy Cohn. For a robust defense, see David French's The Growing Threat to Free Speech Online.
Despite this, Balkin seems detached from reality when he describes the "offer of intermediary immunity". Intermediary immunity is something social media platforms have had under law for more than two decades, and upon which they have built their entire business model. The government cannot in practice "offer" something that companies spending tens of millions a year on lobbying have had for decades.
On the other hand, Mike Masnick points out that removing intermediary liability might have the opposite effect to the critics' goal:
removing Section 230 or making companies more liable for failing to moderate their platforms literally removes their incentives to "mitigate unlawful behavior." Because the most widely accepted standard pre-CDA 230 was that sites had to have knowledge to become liable. Thus, removing 230 creates more incentive for sites to stop looking, to stop mitigating, and to let everything flow.

Balkin does understand that in this environment there are also perverse incentives reducing the amount of content, which he calls "collateral censorship":
Because companies can’t supervise everything that is being posted on their sites, once they face the prospect of intermediary liability they will take down too much content, because it is not their speech and they have insufficient incentives to protect it.

But he greatly underestimates the damage that results from imposing distributor liability, as we can see from the flourishing ecosystem of YouTube copyright trolls, and the widespread use of DMCA takedowns to suppress criticism. Distributor liability should not be imposed unless the penalties for making false claims greatly exceed the benefits accruing to true claims. He suggests that platforms would be liable only for the content of ads:
This logic does not apply in the same way, however, for paid advertisements. Companies actively solicit paid advertisements—indeed, this is how social media companies make most of their money. As a result, even with distributor liability, companies still have incentives to continue to run ads. These incentives lessen (although they do not completely eliminate) the problems of collateral censorship.

Balkin is right that distributor liability for ads has fewer externalities. But it doesn't seem to be practical given the way ad placement systems work in the real world, by the system of real-time auctions described, for example, in Maciej Cegłowski's What Happens Next Will Amaze You. In a world of diverse, competing platforms advertisers would presumably need such a system even more than they do now. Especially given the counter-intuitive behavior I described in Advertising Is A Bubble.
In the probable case, the result of applying Balkin's levers would be that the platforms would have much less content against which to advertise, presumably making the ads less valuable, and their need to moderate the ads would lead the platforms to charge more, reducing the (largely illusory) benefit of advertisers' spending. So not just less content, and thus less happy users, but also fewer ads, and thus happier users but less profitable platforms.
I'm skeptical that the bulk of the anti-social content Balkin believes violates "civility norms" is paid ads. Sophisticated information warriors know that "organic" content is far more effective, and have many techniques for disguising their weaponry so it doesn't need paid placement.
In this hypothetical world of diverse, interoperable platforms, suppose the US platforms did charge enough for uploads to cover the cost of moderation. Part of the diversity among platforms would be the jurisdiction to which they were subject. Interoperable competitors would arise that, not doing business in the US, did not have to charge for uploads because they didn't need effective moderation. Balkin acknowledges this problem, then assumes it away:
Can the U.S. do this on its own? After all, anything we do in the U.S. will be affected by what other countries and the E.U. do. Today, the E.U., China, and the U.S. collectively shape much of Internet policy. They are the three Empires of the Internet, and other countries mostly operate in their wake. Each Empire has different values and incentives, and each operates on the Internet in a different way.

He also ignores another serious practical difficulty when he writes:
Existing judge-made doctrines of antitrust law might not be the best way to achieve these ends, because they are not centrally concerned with these ends. We might need new statutes and regulatory schemes that focus on the special problems that digital companies pose for democracy.

It is true that, both on the left and on the right, politicians and academics are arguing for reform of anti-trust legislation. But absent a change of control in the Senate and the White House, and a massive purge of Federalist Society judges, it is hard to see such legislation being passed against the overwhelming lobbying resources the platforms could deploy against an existential threat.
In this context Benedict Evans' fascinating How to lose a monopoly: Microsoft, IBM and anti-trust makes three important observations:
- A big rich company, a company that dominates the market for its product, and a company that dominates the broader tech industry are three quite different things. Market cap isn’t power.
- IBM ruled mainframes and Microsoft ruled PCs, and when those things were the centre of tech, that gave them dominance of the broader tech industry. When the focus of tech moved away from mainframes and then PCs, IBM and then Microsoft lost that dominance, but that didn’t mean they stopped being big companies. We just stopped being scared of them.
- For both IBM and Microsoft, market power in one generation of tech didn’t give them market power in the next, and anti-trust intervention didn’t have much to do with it. It doesn’t matter how big your castle is if the trade routes move somewhere else.
Today, it’s quite common to hear the assertion that our own dominant tech companies - Google, Facebook et al - will easily and naturally transfer their dominance to any new cycle that comes along. This wasn’t true for IBM or Microsoft, the two previous generations of tech dominance, but then there’s another assertion - that this was because of anti-trust intervention, especially for Microsoft. This tends to be said as though it can be taken for granted, but in fact it’s far from clear that this is actually true.

Although I agree with Evans' analysis of history showing that IBM and Microsoft failed to transfer their monopoly to new fields, I'm skeptical that it applies to the present. The reason is that for both IBM and Microsoft, anti-trust enforcement was a reality they lived. So they were inhibited from buying up potential competitors. For Google and Facebook, anti-trust enforcement is a myth; they can buy up anyone they want with no fear of the Justice Department. So, despite Evans' analysis, it seems likely that the current monopolists will enjoy their ill-gotten gains for much longer than IBM and Microsoft did.
Balkin might also think about the ethics of mandating that platforms provide vast numbers of jobs so awful that moderators have to sign away their right to sue for getting PTSD.
Finally, there is an unresolved conflict at the heart of Balkin's (and Doctorow's and Masnick's) advocacy of interoperability. The point of interoperability is to present users with a seamless view across diverse social media platforms, thus at least partially obscuring the provenance of content. The point of regulation is to create:
trusted and trustworthy intermediate institutions that are guided by professional and public-regarding norms

If this is to have the desired effect, users need to be aware of the provenance, relying on content from "trusted and trustworthy intermediate institutions".
Anna Merlan reports on another platform totally incapable of enforcing its "social norms" in Here Are the Most Common Airbnb Scams Worldwide:
"At the end of October, former VICE senior staff writer Allie Conti shared her story of a disastrous vacation to Chicago, where she tumbled into a nationwide scam run by a prolific grifter (or grifters), which exploited Airbnb’s loosely written rules and even looser enforcement.
Conti’s investigation revealed a platform with serious problems policing itself, and sought to uncover the people who’d figured out ways to profit from that disarray. She ultimately traced the nexus of her own scam experience back to a company that used fake profiles and reviews to conceal a variety of wrongs—from last-minute property switches, to units with sawdust on the floor and holes in the wall."
Because, under the DMCA, no-one is penalized for false copyright claims except the victims (who have no effective recourse), we get ludicrous situations like the one described by Timothy Geigner in YouTube Takes Down Live Stream Over Copyright Claim...Before Stream Even Starts. The claimant was CNN, who emerged unscathed, despite claiming copyright over something they had never seen, and which would have contained none of their copyright material had they seen it.
After reading Dominic Rushe's $15bn a year: YouTube reveals its ad revenues for the first time I need to correct the arithmetic above:
"assuming 30K hours/hour uploaded, each uploaded hour "earned" about $57. Out of that needs to come the unknown costs of ingesting, storing, and streaming the content, and Google's 22% margins. But it looks like they could afford a $15/hour minimum wage with 50% overheads to watch the incoming videos (and do a lousy job of moderation)."
Kate Cox shows how devoted Facebook is to enforcing "civility norms" in What it takes to get a hate page off Facebook: A letter from the state AG:
"Almost a year after state officials formally asked Facebook to take action to remove a racist and anti-Semitic group page, the globe-spanning social network has finally taken the page down.
The offices of New Jersey Attorney General Gurbir Grewal and New Jersey Governor Phil Murphy acknowledged Facebook's action against the page, "Rise Up Ocean County," in a joint announcement Wednesday."
Don’t Use the Word ‘Did’ or a Dumb Anti-Piracy Company Will Delete You From Google reports on yet another example of how impossible content moderation at scale is.