- Convince advertisers that it is an effective means of manipulating the behavior of the mass of the population.
- Avoid regulation by convincing governments that it is not an effective means of manipulating the behavior of the mass of the population.
Facebook has tried to finesse the dilemma by lying to both sides:
- As Max Read describes in How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually, they have lied to advertisers about what their money is buying, exaggerating the effectiveness of their platform:
Metrics should be the most real thing on the internet: They are countable, trackable, and verifiable, and their existence undergirds the advertising business that drives our biggest social and search platforms. Yet not even Facebook, the world’s greatest data-gathering organization, seems able to produce genuine figures. In October, small advertisers filed suit against the social-media giant, accusing it of covering up, for a year, its significant overstatements of the time users spent watching videos on the platform (by 60 to 80 percent, Facebook says; by 150 to 900 percent, the plaintiffs say). According to an exhaustive list at MarketingLand, over the past two years Facebook has admitted to misreporting the reach of posts on Facebook Pages (in two different ways), the rate at which viewers complete ad videos, the average time spent reading its “Instant Articles,” the amount of referral traffic from Facebook to external websites, the number of views that videos received via Facebook’s mobile site, and the number of video views in Instant Articles.

Josh Marshall has more on this side of the dilemma in How Facebook Punked and then Gut Punched the News Biz.
Roger McNamee's devastating I Mentored Mark Zuckerberg. I Loved Facebook. But I Can't Stay Silent About What's Happening describes how they have lied to governments, minimizing the effectiveness of their platform:
when confronted with evidence that disinformation and fake news had spread over Facebook and may have influenced a British referendum or an election in the U.S., Facebook followed a playbook it had run since its founding: deny, delay, deflect, dissemble. Facebook only came clean when forced to, and revealed as little information as possible. Then it went to Plan B: apologize, and promise to do better.

Trevor Timm has more on this side of the dilemma in How Facebook Borrows From the NSA Playbook.
Facebook, its CEO Mark Zuckerberg, its COO Sheryl Sandberg, its public relations people, and its engineers have lied. They have lied repeatedly. They have lied exhaustively. They have lied so much they've lost track of their lies, and then lied about them.

By any measure, Facebook as an organization has knowingly, willingly, purposefully, and repeatedly lied. And two reports this week demonstrate that the depth of its lying was even worse than we previously imagined. The two reports were by Gabriel J.X. Dance, Michael LaForgia and Nicholas Confessore in the New York Times:
- As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants, which McCarthy summarizes thus:
Facebook cut data-exchange deals with all sorts of companies based on this premise: give them what they want, and in return they would be hauled onto Zuckerberg's internet reservation.
For example, Yahoo! got real-time feeds of posts by users' friends – reminding us of Cambridge Analytica gathering information on millions of voters via a quiz app, and using it to target them in contentious political campaigns in the US and Europe.
Microsoft's Bing was able to access the names of nearly all Facebook users’ friends without permission, and Amazon was able to get at friends' names and contact details. Russian search engine Yandex had Facebook account IDs at its fingertips, though it claims it didn't even know this information was available. Facebook at first told the New York Times Yandex wasn't a partner, and then told US Congress it was.
- Facebook Gave Device Makers Deep Access to Data on Users and Friends, which McCarthy summarizes thus:
Facebook got in bed with smartphone manufacturers, such as Apple, Amazon, BlackBerry, Microsoft, and Samsung. Facebook secretly gave the device makers access to each phone user's Facebook friends' profiles, when the handheld was linked to its owner's account, bypassing protections.
Faced with evidence of its data-sharing agreements where – let's not forget this – Facebook provided third parties access to people's personal messages, and more importantly to their contacts lists and friends' feeds, the company claims it broke no promises because it defined the outfits it signed agreements with as "service providers." And so, according to Facebook, it didn't break a pact it has with the US government's trade watchdog, the FTC, not to share private data without permission, and likewise not to break agreements it has with its users.

Trying to sell diametrically opposed stories to two different audiences just isn't effective PR. Lying to both sides at the same time just makes it safe to assume that, whatever Facebook is admitting to, the truth is far worse.
As for the question of potential abuse of personal data handed to third parties, Facebook amazingly used the same line that it rolled out when it attempted to deflect the Cambridge Analytica scandal: that third parties were beholden to Facebook's rules about using data. But, of course, Facebook doesn't check or audit whether that is the case.
Cory Doctorow reports on a wonderful example of Facebook's speaking out of both sides of its mouth:
Though Facebook's lobbying associations spent the whole debate over the EU Copyright Directive arguing (correctly) that algorithmic filters to catch copyright infringement would end up blocking mountains of legitimate speech (while still letting through mountains of infringement), Facebook secretly told the EU Commission that it used filters all the time, had utmost confidence in them, and couldn't see any problems with their use.

Doctorow's piece is based on Laura Kayali's Inside Facebook’s fight against European regulation, a detailed analysis of Facebook's lobbying of the EU based on documents released following a freedom of information request.
Natasha Lomas' The case against behavioral advertising is stacking up provides another example of Facebook's duplicity, based on research by Carnegie Mellon University professor of IT and public policy, Alessandro Acquisti, which showed that:
behaviourally targeted advertising had increased the publisher’s revenue but only marginally. At the same time they found that marketers were having to pay orders of magnitude more to buy these targeted ads, despite the minuscule additional revenue they generated for the publisher.

The alternative to targeted ads is DuckDuckGo-style contextual ads. Lomas continues:
“What we found was that, yes, advertising with cookies — so targeted advertising — did increase revenues — but by a tiny amount. Four per cent. In absolute terms the increase in revenues was $0.000008 per advertisment,” Acquisti told the hearing. “Simultaneously we were running a study, as merchants, buying ads with a different degree of targeting. And we found that for the merchants sometimes buying targeted ads over untargeted ads can be 500% times as expensive.”
If Acquisti’s research is to be believed ... there’s little reason to think [contextual] ads would be substantially less effective than the vampiric microtargeted variant that Facebook founder Mark Zuckerberg likes to describe as “relevant”.

Given Facebook's history of lying about metrics, even if it did produce internal data showing that the targeted ads were worth the cost, why should advertisers believe it? And if the internal data doesn't show that, Facebook can't afford to release it.
The ‘relevant ads’ badge is of course a self-serving concept which Facebook uses to justify creeping on users while also pushing the notion that its people-tracking business inherently generates major extra value for advertisers. But does it really do that? Or are advertisers buying into another puffed up fake?
Facebook isn’t providing access to internal data that could be used to quantify whether its targeted ads are really worth all the extra conjoined cost and risk. While the company’s habit of buying masses of additional data on users, via brokers and other third party sources, makes for a rather strange qualification. Suggesting things aren’t quite what you might imagine behind Zuckerberg’s drawn curtain.
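The asymmetry in Acquisti's findings is easy to put in concrete terms. A minimal back-of-the-envelope sketch: the 4% revenue uplift and the up-to-fivefold advertiser cost multiple come from the research quoted above, while the $2 CPM baseline (i.e. $0.002 of revenue per untargeted impression) is a hypothetical figure chosen only for illustration:

```python
# Back-of-the-envelope sketch of Acquisti's findings: targeting raises
# publisher revenue only marginally while advertisers may pay far more.
# The 4% uplift and 5x cost multiple are from the research quoted above;
# the $0.002-per-impression baseline is hypothetical.

def publisher_gain(revenue_per_untargeted_ad: float, uplift: float = 0.04) -> float:
    """Extra revenue the publisher sees per targeted impression."""
    return revenue_per_untargeted_ad * uplift

def advertiser_cost(untargeted_price: float, multiple: float = 5.0) -> float:
    """What the advertiser may pay for the targeted version of the same ad."""
    return untargeted_price * multiple

base = 0.002  # hypothetical $2 CPM baseline
print(f"publisher gains ${publisher_gain(base):.7f} per impression")
print(f"advertiser pays up to ${advertiser_cost(base):.4f} instead of ${base:.4f}")
```

The point the sketch makes is the one Lomas draws: the extra money advertisers spend on targeting goes somewhere, but very little of it reaches the publisher.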
Cory Doctorow reports on another of Facebook's unending attempts to prevent outsiders finding out about what it is actually doing as opposed to what it says it is doing:
Propublica is one of many organizations, mainly nonprofits, whose "ad transparency" tools scrape Facebook ads and catalog them, along with the targeting data that exposes who is paying for which messages to be shown to whom.

Facebook previously warned these watchdogs that their tools violated Facebook's terms of service and that their days were numbered, and now Facebook has made technical changes to its site that lock out ad transparency tools operated by Propublica, Mozilla, and Who Targets Me.

Facebook says that its limited database of political ads makes the independent tools redundant, but Facebook's own database contains important omissions, including ads from the NRA, ads targeting Bernie Sanders supporters, and more. Moreover, this database is only available in three of the dozens of countries where Facebook is a significant political player.

So Facebook is lying about its database of political ads, too. ProPublica's full report is here.
It is actually hard to keep up with Facebook. The very same day that Doctorow penned the report above, he followed up with Project Atlas: Facebook has been secretly paying Iphone users to install an all-surveilling "VPN" app. This is an area where Facebook has form:
Facebook previously faced disgrace and crisis when it was revealed that Onavo, a so-called VPN app, was actually grabbing a huge tranche of data from users; Apple subsequently removed Onavo from its app store.

Undeterred, Facebook launched "Project Atlas":
The program recruits users aged 13 to 35, and has been running since 2016. Facebook confirmed that it uses the app to "gather data on usage habits."

How is the app gathering data?
The "Facebook Research" VPN is an app that circumvents Apple's ban on certain kinds of surveillance by cloaking itself as a beta app and distributing through the Applause, Betabound and Utest services, rather than Apple's App Store: users get up to $20/month, plus referral fees, to run the app, which comes with a man-in-the-middle certificate that lets Facebook intercept "private messages in social media apps, chats from in instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed."So Facebook appears to be lying to the "users aged 13 to 35" by not telling them what data it is collecting, and it was lying to Apple:
Facebook does not distribute the "Research" app through Apple's own beta-test program, choosing instead to launder it through third parties. Facebook is pretty clearly violating Apple's policies in doing this.
Not many 13-year-old employees at Facebook, so Apple banned the app:

Apple says Facebook broke an agreement it made with Apple by publishing a “research” app for iPhone users that allowed the social giant to collect all kinds of personal data about those users, TechCrunch reported Tuesday. The app allowed Facebook to track users’ app history, their private messages, and their location data. Facebook’s research effort reportedly targeted users as young as 13 years old.

As of last summer, apps that collect that kind of data are against Apple’s privacy guidelines. That means Facebook couldn’t make this research app available through the App Store, which would have required Apple approval.

Instead, Facebook apparently took advantage of Apple’s “Developer Enterprise Program,” which lets approved Apple partners, like Facebook, test and distribute apps specifically for their own employees. In those cases, the employees can use third-party services to download beta versions of apps that aren’t available to the general public.
Apple won’t let Facebook distribute the app anymore ... Apple’s statement also mentions that Facebook’s “certificates” — plural — have been revoked. That implies Facebook cannot distribute other apps to employees through this developer program right now, not just the research app.

As Rob Price reported, actual Facebook employees were dependent upon the certificates too:
Apple revoked the developer certificate that Facebook was using to power the research apps — and, in doing so, crippled Facebook's workforce.
This is because the same certificate that authenticated the research apps was also used in the key internal Facebook apps its tens of thousands of employees use every day.
The move dramatically escalated tensions between Facebook and Apple, and has left Facebook employees unable to communicate with colleagues, access internal information, and even use company transportation.
Affected apps include internal builds of Workplace, Facebook's internal version of Facebook for employee communications; Workplace Chat; Instagram; Messenger; "and other internal apps like Mobile Home and the Ride app."
TechCrunch has also discovered that:
Google has been running an app called Screenwise Meter, which bears a strong resemblance to the app distributed by Facebook Research that has now been barred by Apple, TechCrunch has learned.

However, at least:
In its app, Google invites users aged 18 and up (or 13 if part of a family group) to download the app by way of a special code and registration process using an Enterprise Certificate. That’s the same type of policy violation that led Apple to shut down Facebook’s similar Research VPN iOS app, which had the knock-on effect of also disabling usage of Facebook’s legitimate employee-only apps — which run on the same Facebook Enterprise Certificate — and making Facebook look very iffy in the process.
Unlike Facebook, Google is much more upfront about how its research data collection programs work, what’s collected and that it’s directly involved. It also gives users the option of “guest mode” for when they don’t want traffic monitored, or someone younger than 13 is using the device.

I suppose that's something, but if Facebook's app violates Apple's policies, so does Google's.