- Convince advertisers that it is an effective means of manipulating the behavior of the mass of the population.
- Avoid regulation by convincing governments that it is not an effective means of manipulating the behavior of the mass of the population.
Facebook has tried to finesse the dilemma by lying to both sides:
- As Max Read describes in How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually, they have lied to advertisers about what their money is buying, exaggerating the effectiveness of their platform:
Metrics should be the most real thing on the internet: They are countable, trackable, and verifiable, and their existence undergirds the advertising business that drives our biggest social and search platforms. Yet not even Facebook, the world’s greatest data–gathering organization, seems able to produce genuine figures. In October, small advertisers filed suit against the social-media giant, accusing it of covering up, for a year, its significant overstatements of the time users spent watching videos on the platform (by 60 to 80 percent, Facebook says; by 150 to 900 percent, the plaintiffs say). According to an exhaustive list at MarketingLand, over the past two years Facebook has admitted to misreporting the reach of posts on Facebook Pages (in two different ways), the rate at which viewers complete ad videos, the average time spent reading its “Instant Articles,” the amount of referral traffic from Facebook to external websites, the number of views that videos received via Facebook’s mobile site, and the number of video views in Instant Articles.
Josh Marshall has more on this side of the dilemma in How Facebook Punked and then Gut Punched the News Biz.
Roger McNamee's devastating I Mentored Mark Zuckerberg. I Loved Facebook. But I Can't Stay Silent About What's Happening describes how they have lied to governments, minimizing the effectiveness of their platform:
when confronted with evidence that disinformation and fake news had spread over Facebook and may have influenced a British referendum or an election in the U.S., Facebook followed a playbook it had run since its founding: deny, delay, deflect, dissemble. Facebook only came clean when forced to, and revealed as little information as possible. Then it went to Plan B: apologize, and promise to do better.
Trevor Timm has more on this side of the dilemma in How Facebook Borrows From the NSA Playbook.
Facebook, its CEO Mark Zuckerberg, its COO Sheryl Sandberg, its public relations people, and its engineers have lied. They have lied repeatedly. They have lied exhaustively. They have lied so much they've lost track of their lies, and then lied about them.
By any measure, Facebook as an organization has knowingly, willingly, purposefully, and repeatedly lied. And two reports this week demonstrate that the depth of its lying was even worse than we previously imagined. The two reports were by Gabriel J.X. Dance, Michael LaForgia and Nicholas Confessore in the New York Times:
- As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants, which The Register's Kieren McCarthy summarizes thus:
Facebook cut data-exchange deals with all sorts of companies based on this premise: give them what they want, and in return they would be hauled onto Zuckerberg's internet reservation.
For example, Yahoo! got real-time feeds of posts by users' friends – reminding us of Cambridge Analytica gathering information on millions of voters via a quiz app, and using it to target them in contentious political campaigns in the US and Europe.
Microsoft's Bing was able to access the names of nearly all Facebook users’ friends without permission, and Amazon was able to get at friends' names and contact details. Russian search engine Yandex had Facebook account IDs at its fingertips, though it claims it didn't even know this information was available. Facebook at first told the New York Times Yandex wasn't a partner, and then told US Congress it was.
- Facebook Gave Device Makers Deep Access to Data on Users and Friends, which McCarthy summarizes thus:
Facebook got in bed with smartphone manufacturers, such as Apple, Amazon, BlackBerry, Microsoft, and Samsung. Facebook secretly gave the device makers access to each phone user's Facebook friends' profiles, when the handheld was linked to its owner's account, bypassing protections.
Faced with evidence of its data-sharing agreements where – let's not forget this – Facebook provided third parties access to people's personal messages, and more importantly to their contacts lists and friends' feeds, the company claims it broke no promises because it defined the outfits it signed agreements with as "service providers." And so, according to Facebook, it didn't break a pact it has with the US government's trade watchdog, the FTC, not to share private data without permission, and likewise not to break agreements it has with its users.
As for the question of potential abuse of personal data handed to third parties, Facebook amazingly used the same line that it rolled out when it attempted to deflect the Cambridge Analytica scandal: that third parties were beholden to Facebook's rules about using data. But, of course, Facebook doesn't check or audit whether that is the case.
Trying to sell diametrically opposed stories to two different audiences just isn't effective PR. Lying to both sides at the same time makes it safe to assume that, whatever Facebook is admitting to, the truth is far worse.
Cory Doctorow reports on a wonderful example of Facebook's speaking out of both sides of its mouth:
Though Facebook's lobbying associations spent the whole debate over the EU Copyright Directive arguing (correctly) that algorithmic filters to catch copyright infringement would end up blocking mountains of legitimate speech (while still letting through mountains of infringement), Facebook secretly told the EU Commission that it used filters all the time, had utmost confidence in them, and couldn't see any problems with their use.
Doctorow's piece is based on Laura Kayali's Inside Facebook’s fight against European regulation, a detailed analysis of Facebook's lobbying of the EU, drawn from documents released following a freedom of information request.
Natasha Lomas' The case against behavioral advertising is stacking up provides another example of Facebook's duplicity, based on research by Alessandro Acquisti, Carnegie Mellon University professor of IT and public policy, which showed that:
behaviourally targeted advertising had increased the publisher’s revenue but only marginally. At the same time they found that marketers were having to pay orders of magnitude more to buy these targeted ads, despite the minuscule additional revenue they generated for the publisher.
The alternative to targeted ads is DuckDuckGo-style contextual ads. Lomas continues:
“What we found was that, yes, advertising with cookies — so targeted advertising — did increase revenues — but by a tiny amount. Four per cent. In absolute terms the increase in revenues was $0.000008 per advertisment,” Acquisti told the hearing. “Simultaneously we were running a study, as merchants, buying ads with a different degree of targeting. And we found that for the merchants sometimes buying targeted ads over untargeted ads can be 500% times as expensive.”
If Acquisti’s research is to be believed ... there’s little reason to think [contextual] ads would be substantially less effective than the vampiric microtargeted variant that Facebook founder Mark Zuckerberg likes to describe as “relevant”.
The ‘relevant ads’ badge is of course a self-serving concept which Facebook uses to justify creeping on users while also pushing the notion that its people-tracking business inherently generates major extra value for advertisers. But does it really do that? Or are advertisers buying into another puffed up fake?
Facebook isn’t providing access to internal data that could be used to quantify whether its targeted ads are really worth all the extra conjoined cost and risk. While the company’s habit of buying masses of additional data on users, via brokers and other third party sources, makes for a rather strange qualification. Suggesting things aren’t quite what you might imagine behind Zuckerberg’s drawn curtain.
Given Facebook's history of lying about metrics, even if it did produce internal data showing that the targeted ads were worth the cost, why should advertisers believe it? And if the internal data doesn't show that, Facebook can't afford to release it.
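To see why Acquisti's numbers are so damning, a back-of-envelope calculation helps. This is a sketch only: the 4% uplift, the $0.000008 per-ad gain, and the up-to-5x price multiplier come from the quoted study; the untargeted ad price below is a hypothetical figure I've chosen purely for illustration.

```python
# Figures from Acquisti's study (as quoted by Lomas):
TARGETING_UPLIFT = 0.04     # targeting raised publisher revenue ~4%
GAIN_PER_AD = 0.000008      # absolute publisher gain per ad, in dollars
PRICE_MULTIPLIER = 5        # targeted ads can cost advertisers up to 5x

# Implied baseline revenue per untargeted ad: gain / uplift
baseline_revenue = GAIN_PER_AD / TARGETING_UPLIFT   # $0.0002 per ad

# Hypothetical untargeted price of $1 CPM, i.e. $0.001 per impression
untargeted_price = 0.001
advertiser_extra_cost = untargeted_price * (PRICE_MULTIPLIER - 1)  # $0.004

print(f"implied baseline revenue per ad: ${baseline_revenue:.4f}")
print(f"publisher's gain per targeted ad: ${GAIN_PER_AD:.6f}")
print(f"advertiser's extra cost per ad:   ${advertiser_extra_cost:.4f}")
print(f"extra cost is {advertiser_extra_cost / GAIN_PER_AD:.0f}x "
      f"the publisher's gain")
```

Under these assumed prices the advertiser's extra spend is hundreds of times the publisher's extra revenue; whatever the exact figures, the gap between what advertisers pay for targeting and what publishers earn from it is the asymmetry Lomas highlights.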
Cory Doctorow reports on another of Facebook's unending attempts to prevent outsiders from finding out what it is actually doing, as opposed to what it says it is doing:
Propublica is one of many organizations, mainly nonprofits, whose "ad transparency" tools scrape Facebook ads and catalog them, along with the targeting data that exposes who is paying for which messages to be shown to whom.
Facebook previously warned these watchdogs that their tools violated Facebook's terms of service and that their days were numbered and now Facebook has made technical changes to its site that locks out ad transparency tools operated by Propublica, Mozilla, and Who Targets Me.
Facebook says that its limited database of political ads makes the independent tools redundant, but Facebook's own database contains important omissions, including ads from the NRA, ads targeting Bernie Sanders supporters, and more. Moreover, this database is only available in three of the dozens of countries where Facebook is a significant political player.
So Facebook is lying about its database of political ads, too. ProPublica's full report is here.
It is actually hard to keep up with Facebook. The very same day that Doctorow penned the report above, he followed up with Project Atlas: Facebook has been secretly paying Iphone users to install an all-surveilling "VPN" app. This is an area where Facebook has form:
Facebook previously faced disgrace and crisis when it was revealed that Onavo, a so-called VPN app, was actually grabbing a huge tranche of data from users; Apple subsequently removed Onavo from its App Store.
Undeterred, Facebook launched "Project Atlas":
The program recruits users aged 13 to 35, and has been running since 2016. Facebook confirmed that it uses the app to "gather data on usage habits."
How is the app gathering data?
The "Facebook Research" VPN is an app that circumvents Apple's ban on certain kinds of surveillance by cloaking itself as a beta app and distributing through the Applause, Betabound and Utest services, rather than Apple's App Store: users get up to $20/month, plus referral fees, to run the app, which comes with a man-in-the-middle certificate that lets Facebook intercept "private messages in social media apps, chats from in instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed."So Facebook appears to be lying to the "users aged 13 to 35" by not telling them what data it is collecting, and it was lying to Apple:
Facebook does not distribute the "Research" app through Apple's own beta-test program, choosing instead to launder it through third parties. Facebook is pretty clearly violating Apple's policies in doing this.
Not many 13-year-old employees at Facebook, so Apple banned the app:
Apple says Facebook broke an agreement it made with Apple by publishing a “research” app for iPhone users that allowed the social giant to collect all kinds of personal data about those users, TechCrunch reported Tuesday. The app allowed Facebook to track users’ app history, their private messages, and their location data. Facebook’s research effort reportedly targeted users as young as 13 years old.
As of last summer, apps that collect that kind of data are against Apple’s privacy guidelines. That means Facebook couldn’t make this research app available through the App Store, which would have required Apple approval.
Instead, Facebook apparently took advantage of Apple’s “Developer Enterprise Program,” which lets approved Apple partners, like Facebook, test and distribute apps specifically for their own employees. In those cases, the employees can use third-party services to download beta versions of apps that aren’t available to the general public.
Apple won’t let Facebook distribute the app anymore ... Apple’s statement also mentions that Facebook’s “certificates” — plural — have been revoked. That implies Facebook cannot distribute other apps to employees through this developer program right now, not just the research app.
As Rob Price reported, actual Facebook employees were dependent upon the certificates too:
Apple revoked the developer certificate that Facebook was using to power the research apps — and, in doing so, crippled Facebook's workforce.
This is because the same certificate that authenticated the research apps was also used in the key internal Facebook apps its tens of thousands of employees use every day.
The move dramatically escalated tensions between Facebook and Apple, and has left Facebook employees unable to communicate with colleagues, access internal information, and even use company transportation.
Affected apps include internal builds of Workplace, Facebook's internal version of Facebook for employee communications; Workplace Chat; Instagram; Messenger; "and other internal apps like Mobile Home and the Ride app."
TechCrunch has also discovered that:
Google has been running an app called Screenwise Meter, which bears a strong resemblance to the app distributed by Facebook Research that has now been barred by Apple, TechCrunch has learned.
However, at least:
In its app, Google invites users aged 18 and up (or 13 if part of a family group) to download the app by way of a special code and registration process using an Enterprise Certificate. That’s the same type of policy violation that led Apple to shut down Facebook’s similar Research VPN iOS app, which had the knock-on effect of also disabling usage of Facebook’s legitimate employee-only apps — which run on the same Facebook Enterprise Certificate — and making Facebook look very iffy in the process.
Unlike Facebook, Google is much more upfront about how its research data collection programs work, what’s collected and that it’s directly involved. It also gives users the option of “guest mode” for when they don’t want traffic monitored, or someone younger than 13 is using the device.
I suppose that's something, but if Facebook's app violates Apple's policies, so does Google's.
Benedict Carey's This Is Your Brain Off Facebook reports on a major study of the psychological effects of quitting Facebook:
"So what happens if you actually do quit? A new study, the most comprehensive to date, offers a preview.
Expect the consequences to be fairly immediate: More in-person time with friends and family. Less political knowledge, but also less partisan fever. A small bump in one’s daily moods and life satisfaction. And, for the average Facebook user, an extra hour a day of downtime.
The study, by researchers at Stanford University and New York University, helps clarify the ceaseless debate over Facebook’s influence on the behavior, thinking and politics of its active monthly users, who number some 2.3 billion worldwide."
Ron Amadeo reports that what is sauce for the goose is sauce for the gander:
"According to a report from The Verge, Apple has shut down Google's internal iOS apps for doing the exact same thing Facebook was doing—distributing enterprise apps outside of the company."
Roger McNamee's book, Zucked: Waking Up to the Facebook Catastrophe is well worth reading.
In German Regulators Just Outlawed Facebook's Whole Ad Business, Emily Dreyfuss describes how Germany's cartel office cuts through Facebook's obfuscation (my emphasis):
"Facebook’s massively lucrative advertising model relies on tracking its one billion users—as well as the billions on WhatsApp and Instagram—across the web and smartphone apps, collecting data on which sites and apps they visit, where they shop, what they like, and combining all that information into comprehensive user profiles. Facebook has maintained that collecting all this data allows the company to serve ads that are more relevant to users’ interests. Privacy advocates have argued that the company isn’t transparent enough about what data it has and what it does with it. As a result, most people don’t understand the massive trade-off they are making with their information when they sign up for the “free” site.
On Thursday, Germany’s Federal Cartel Office, the country’s antitrust regulator, ruled that Facebook was exploiting consumers by requiring them to agree to this kind of data collection in order to have an account, and has prohibited the practice going forward."
The headline says it all in Matthew Hughes' Facebook lets you search for pictures of your female friends, but not your male ones:
"If you’re feeling an overwhelming sense of deja vu, you’re not alone. The predecessor to Facebook was a deeply unsavory site called Facemash that allowed Harvard University students to rate their female colleagues based on perceived physical attractiveness."
To no-one's great surprise, in Gambling, porn, and piracy on iOS: Apple’s enterprise certificate woes continue Samuel Axon reports that Google and Facebook are far from the only abusers of certificates from Apple's Enterprise Developer Program:
"the Reuters report describes the use of enterprise certificates to distribute pirated versions of popular iOS software like Minecraft, Spotify, and Pokémon Go. ... The distributors impersonate legitimate businesses to gain access to Apple’s enterprise certification program and tools. ... Earlier this week, a TechCrunch investigation also discovered a "dozen hardcore-pornography apps and a dozen real-money gambling apps that escaped Apple's oversight."
Signing up developers is the priority for the program, not auditing whether they are abiding by the agreements they signed. And the agreements almost certainly absolve Apple of any responsibility for what is done with the certificates.
The UK House of Commons isn't happy with Facebook:
"Facebook deliberately broke privacy and competition law and should urgently be subject to statutory regulation, according to a devastating parliamentary report denouncing the company and its executives as “digital gangsters”.
The final report of the Digital, Culture, Media and Sport select committee’s 18-month investigation into disinformation and fake news accused Facebook of purposefully obstructing its inquiry and failing to tackle attempts by Russia to manipulate elections."
The Economist speculates What would happen if Facebook were turned off?:
"Were Mark Zuckerberg to turn off his creation, another, similar platform might be propelled to dominance. But the Facebook era could instead be the product of unique, fleeting historical circumstances. In that case, a sunnier social-network ecology might be achievable—if only the citizens of Facebook could be nudged to seek something better."
Lauren Finer's Facebook reportedly gets deeply personal info, such as ovulation times and heart rate, from some apps starts:
"Facebook receives highly personal information from apps that track your health and help you find a new home, testing by The Wall Street Journal found. Facebook can receive this data from certain apps even if the user does not have a Facebook account, according to the Journal."
Of course, Facebook had nothing to do with this; it merely buried the data collection in its SDK:
"A Facebook spokesperson told CNBC, "Sharing information across apps on your iPhone or Android device is how mobile advertising works and is industry standard practice. The issue is how apps use information for online advertising. We require app developers to be clear with their users about the information they are sharing with us, and we prohibit app developers from sending us sensitive data. We also take steps to detect and remove data that should not be shared with us."
So that's all OK, isn't it.
Inevitably, Facebook was lying about who was using their "research" app. Josh Constine reports that Facebook admits 18% of Research spyware users were teens, not <5%:
"In the response from Facebook’s VP of US public policy Kevin Martin, the company admits that (emphasis ours) “At the time we ended the Facebook Research App on Apple’s iOS platform, less than 5 percent of the people sharing data with us through this program were teens. Analysis shows that number is about 18 percent when you look at the complete lifetime of the program, and also add people who had become inactive and uninstalled the app.” So 18 percent of research testers were teens. It was only less than 5 percent when Facebook got caught. Given users age 13 to 35 were eligible for Facebook’s Research program, 13 to 18 year olds made of 22 percent of the age range. That means Facebook clearly wasn’t trying to minimize teen involvement, nor were they just a tiny fraction of users."
No-one should believe a word Facebook says about anything.
The Register's Kieren McCarthy has an overview of the myriad lies Facebook told about its "research" app.
Revealed: Facebook’s global lobbying against data privacy laws by Carole Cadwalladr and Duncan Campbell is based on documents from the Six4Three case that the UK Parliament seized. It reveals that:
"Facebook has targeted politicians around the world – including the former UK chancellor, George Osborne – promising investments and incentives while seeking to pressure them into lobbying on Facebook’s behalf against data privacy legislation, an explosive new leak of internal Facebook documents has revealed.
The documents, which have been seen by the Observer and Computer Weekly, reveal a secretive global lobbying operation targeting hundreds of legislators and regulators in an attempt to procure influence across the world, including in the UK, US, Canada, India, Vietnam, Argentina, Brazil, Malaysia and all 28 states of the EU."
Sue Halpern's Mark Zuckerberg’s Plans to Capitalize on Facebook’s Failures is a suitably skeptical take on Zuckerberg's plan for a privacy-focused (part of) Facebook:
"In Wednesday’s announcement, Zuckerberg, who is famous for shambling apologies, seemed to have finally run out of mea culpas. “I understand that many people don’t think Facebook can or would even want to build this kind of privacy-focused platform,” he wrote, “because frankly we don’t currently have a strong reputation for building privacy protective services.” On that point, at least, he’s right."
See also Isobel Asher Hamilton's Mark Zuckerberg's boast about Facebook's data storage was torn apart by human rights groups:
"In his blockbuster blog, Mark Zuckerberg boasted that Facebook has chosen not to store data in countries that "have a track record of violating human rights like privacy or freedom of expression."
But just months ago, Facebook announced plans to open a data center in Singapore, a country with a poor record when it comes to freedom of expression, according to human rights groups."
In A New Privacy Constitution for Facebook Bruce Schneier and Adam Shostack lay out what Zuckerberg would need to do to deliver on his "pivot to privacy":
"There is ample reason to question Zuckerberg’s pronouncement: The company has made — and broken — many privacy promises over the years. And if you read his 3,000-word post carefully, Zuckerberg says nothing about changing Facebook’s surveillance capitalism business model. All the post discusses is making private chats more central to the company, which seems to be a play for increased market dominance and to counter the Chinese company WeChat.
In security and privacy, the devil is always in the details — and Zuckerberg’s post provides none. But we’ll take him at his word and try to fill in some of the details here. What follows is a list of changes we should expect if Facebook is serious about changing its business model and improving user privacy."
The first link is to Ryan Nakashima's Promises, promises: Facebook's history with privacy, an 11-year timeline of broken privacy promises from a year ago.
Facebook’s Data Deals Are Under Criminal Investigation continues the New York Times' investigation of Facebook's data-sharing deals:
"Federal prosecutors are conducting a criminal investigation into data deals Facebook struck with some of the world’s largest technology companies, intensifying scrutiny of the social media giant’s business practices as it seeks to rebound from a year of scandal and setbacks.
A grand jury in New York has subpoenaed records from at least two prominent makers of smartphones and other devices, according to two people who were familiar with the requests and who insisted on anonymity to discuss confidential legal matters. Both companies had entered into partnerships with Facebook, gaining broad access to the personal information of hundreds of millions of its users."
See also Facebook has been down for hours, Instagram is back in service.
Kieren McCarthy's Facebook blames 'server config change' for 14-hour outage. Someone run that through the universal liar translator is appropriately skeptical of Facebook's sole tweet about their outage:
"The other big question is how a "server configuration change" led to not just Facebook but also its Messenger, WhatsApp, and Instagram services going down. That would strongly suggest that Facebook has either connected them up or attempted to connect them up at a low level, merging them into one broad platform. In January, it emerged that CEO Mark Zuckerberg had ordered that his instant-chat applications and social network be intertwined. And this month, Zuck alluded to this in an otherwise aimless blog post: "With all the ways people also want to interact privately, there's also an opportunity to build a simpler platform that's focused on privacy first."
So, that "server configuration change" may have been more conspiracy than cockup, a move to bring together Facebook's individual components. An effort so large and complex, it resulted in 14 hours of downtime. That may help explain why the biz is being so secretive about the cause of the outage. Bringing together everything under one roof is certainly one way to avoid potential regulatory break-up."
Carole Cadwalladr's Facebook faces fresh questions over when it knew of data harvesting looks like yet another case of Facebook's inability to tell the truth:
"Facebook has repeatedly refused to say when its senior executives, including Zuckerberg, learned of how Cambridge Analytica had used harvested data from millions of people across the world to target them with political messages without their consent. But Silicon Valley insiders have told the Observer that Facebook board member Marc Andreessen, the founder of the venture capital firm Andreessen Horowitz and one of the most influential people in Silicon Valley, attended a meeting with [Christopher] Wylie held in Andreessen Horowitz’s office two years before he came forward as a whistleblower."
Brian Krebs reports that Facebook Stored Hundreds of Millions of User Passwords in Plain Text for Years. Facebook claims:
"Facebook says an ongoing investigation has so far found no indication that employees have abused access to this data. ... the investigation so far indicates between 200 million and 600 million Facebook users may have had their account passwords stored in plain text and searchable by more than 20,000 Facebook employees. The source said Facebook is still trying to determine how many passwords were exposed and for how long, but so far the inquiry has uncovered archives with plain text user passwords dating back to 2012.
My Facebook insider said access logs showed some 2,000 engineers or developers made approximately nine million internal queries for data elements that contained plain text user passwords."
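For contrast, the long-standard practice Facebook failed to follow is to store only a salted, deliberately slow hash of each password, so that neither employees with log access nor attackers with a database dump can recover the passwords themselves. A minimal sketch using only Python's standard library (illustrative of the technique, not of any scheme Facebook uses):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only this pair is stored, never the password."""
    salt = os.urandom(16)  # unique per user, defeats precomputed tables
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**26)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the candidate password and compare."""
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=2**26)
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("guess", salt, digest)
```

The point of the slow, memory-hard scrypt derivation is that even if the (salt, digest) pairs leak, brute-forcing each password is expensive; plain-text storage, by contrast, makes every logged or archived copy an instant compromise.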
Julia Carrie Wong's Facebook acknowledges concerns over Cambridge Analytica emerged earlier than reported describes yet another example of Facebook's inability to tell the truth:
Facebook employees were aware of concerns about “improper data-gathering practices” by Cambridge Analytica months before the Guardian first reported, in December 2015, that the political consultancy had obtained data on millions from an academic. The concerns appeared in a court filing by the attorney general for Washington DC and were subsequently confirmed by Facebook.
The new information “could suggest that Facebook has consistently mislead [sic]” British lawmakers “about what it knew and when about Cambridge Analytica”, tweeted Damian Collins, the chair of the House of Commons digital culture media and sport select committee (DCMS) in response to the court filing."
Cecilia Kang's The Mounting Federal Investigations Into Facebook lists the FTC, the SEC, the DoJ, the US Attorney for the Eastern District of NY, and HUD as the agencies currently investigating Facebook.
All you need to know is in the headline of Kevin Poulsen's ‘Beyond Sketchy’: Facebook Demanding Some New Users’ Email Passwords. Well, why not, if you can get away with it?
Rob Price reports on Facebook's use of coin-op media to spread their lies in Facebook is partnering with a big UK newspaper to publish sponsored articles downplaying 'technofears' and praising the company:
"Facebook has found a novel solution to the never-ending deluge of negative headlines and news articles criticizing the company: Simply paying a British newspaper to run laudatory stories about it.
Facebook has partnered with The Daily Telegraph, a broadsheet British newspaper, to run a series of features about the company, Business Insider has found — including stories that defend it on hot-button issues it has been criticised over like terrorist content, online safety, cyberbullying, fake accounts, and hate speech."
Jake Kanter's Facebook's activist shareholders are making another dramatic bid to oust Mark Zuckerberg and abolish the firm's share structure explains why:
"Zuckerberg's weighty power is why activist shareholders want to abolish the share structure. At the annual investor meeting, they will have the chance to vote on a proposal, which calls for the introduction of "fair and appropriate mechanisms through which disproportionate rights of Class B shareholders could be eliminated."
It said: "Fake news, election interference, and threats to our democracy -- shareholders need more than deny, deflect, and delay. We urge shareholders to vote FOR a recapitalization plan for all outstanding stock to have one vote per share."
It is not clear which investor has drawn up the proposal, but Facebook again calls for it to be dismissed by shareholders, as they have during the last five annual meetings. "We believe that our capital structure is in the best interests of our stockholders and that our current corporate governance structure is sound and effective," it said."
Today's drops in the bucket are Mark Zuckerberg leveraged Facebook user data to fight rivals and help friends, leaked documents show by Olivia Solon and Cyrus Farivar:
"Mark Zuckerberg oversaw plans to consolidate the social network’s power and control competitors by treating its users’ data as a bargaining chip, while publicly proclaiming to be protecting that data, according to about 4,000 pages of leaked company documents largely spanning 2011 to 2015 and obtained by NBC News."
and 15 Months of Fresh Hell Inside Facebook by Nicholas Thompson and Fred Vogelstein:
"If you want to promote trustworthy news for billions of people, you first have to specify what is trustworthy and what is news. Facebook was having a hard time with both. To define trustworthiness, the company was testing how people responded to surveys about their impressions of different publishers. To define news, the engineers pulled a classification system left over from a previous project—one that pegged the category as stories involving “politics, crime, or tragedy.”
That particular choice, which meant the algorithm would be less kind to all kinds of other news—from health and science to technology and sports—wasn’t something Facebook execs discussed with media leaders in Davos. And though it went through reviews with senior managers, not everyone at the company knew about it either."
Today's drop in the bucket is Facebook admits harvesting contacts of the 1.5m email passwords it asked for:
"A few weeks ago, we learned that Facebook asked for the personal email passwords of some users logging in. Today, it admits that it used the passwords to harvest 1.5m users' email contacts without consent. Facebook claims that doing this was "unintentional," despite contact harvesting being the plainly obvious purpose of demanding people's email passwords and notifications in Facebook informing users that their contacts were being imported."
As Cory Doctorow writes:
"Facebook's cycle of promises and lies depends upon journalistic objectivity being warped into a perverse assumption of Facebook's good faith. When we fail to report each privacy abuse in the context of all the other ones, we simply fail."
Another drop in the bucket for today! msmash at /. reports:
"Last month, Facebook disclosed that hundreds of millions of users on its platform had their account passwords stored in plain text -- in some cases going back to 2012 -- and searchable by thousands of Facebook employees. Today, the company quietly updated that blog post to reveal that Instagram users are also impacted."
What is the "credibility" of which you speak?
Jon Brodkin's Facebook fights to “shield Zuckerberg” from punishment in US privacy probe starts:
"Federal Trade Commission officials are discussing whether to hold Facebook CEO Mark Zuckerberg personally accountable for Facebook's privacy failures, according to reports by The Washington Post and NBC News. Facebook has been trying to protect Zuckerberg from that possibility in negotiations with the FTC, the Post wrote.
Federal regulators investigating Facebook are "exploring his past statements on privacy and weighing whether to seek new, heightened oversight of his leadership," the Post reported, citing anonymous sources who are familiar with the FTC discussions."
The headline of Cory Doctorow's Facebook has hired the Patriot Act's co-author and "day-to-day manager" to be its new general counsel tells you everything you need to know about today's drop in the bucket.
Yesterday's drop in the bucket is Zuck it up: Facebook hit with triple whammy of legal probes, action in Canada, US, Ireland. The three are:
"The Office of the Privacy Commissioner of Canada ... has emitted a report from that investigation: it concludes that Facebook ran roughshod over Canada’s privacy laws by failing to obtain proper consent from its users to ultimately share that profile data with Cambridge Analytica, and failing to protect user information. Now, OPC wants to take Facebook to court to, hopefully, force it to comply with Canadian law."
"BREAKING: We're launching an investigation into Facebook's unauthorized collection of 1.5M of their users’ email contact databases.
Facebook has repeatedly demonstrated a lack of respect for consumer information while at the same time profiting from mining that data.
— NY AG James (@NewYorkStateAG) April 25, 2019"
"Ireland’s Data Protection Commission declared an investigation into Facebook’s practices to see whether they violate Europe's GDPR. And what’s more, it’s for a separate incident: logging hundreds of millions of user account passwords in plain text in its servers"
Today's drop in the bucket comes from Philip Michaels' If Facebook Wants Our Trust, Mark Zuckerberg Must Resign:
"Consider the F8 conference from a year ago that began with a prolonged apology over the Cambridge Analytica scandal, where a data mining firm got a hold of the personal data of 50 million Facebook users. Zuckerberg began his keynote promising changes at Facebook, such as using artificial intelligence to identify fake accounts, instituting new rules for ad transparency, and trying to keep fake and misleading news from filling up your Facebook feed. Zuckerberg also promised a Clear History feature that would let you easily delete information about apps and websites you've interacted with, sort of like erasing your browser history.
So what's changed in the last year? Well, the Clear History feature never launched — it's coming later in 2019, Facebook now says — but the company showed off its commitment to safeguarding data in far more telling ways."
Which is followed by a 10-item list of some of the more egregious privacy violations in the last year. Michaels concludes:
"So when F8 begins Tuesday, expect an extensive if not especially detailed talk about how this time, Facebook's really going to get privacy right. Expect developers in attendance to applaud. Expect the reporters there to dutifully take down every one of Zuckerberg's pronouncements.
And expect another apology a few months later when word of another breach leaks out."
Today's drop in the bucket comes from Charlie Warzel, whose If a $5 Billion Fine Is Chump Change, How Do You Punish Facebook? points out that:
"As BuzzFeed News pointed out on Wednesday, in just one hour of after-hours trading after signaling its impending $3 billion to $5 billion fine, Facebook’s market capitalization increased by $40 billion."
"The only potential crisis then seems reserved for us, the users. If the regulators fail to meet the moment they’ll have wasted a rare opportunity for real accountability. Worse yet, quietly prompting Facebook to write a check for an amount it’s already safely ferreted away will be an admission that regulators are out of ideas when it comes to the company’s unprecedented power and reach."
Mike Masnick's Content Moderation At Scale Is Impossible: Facebook Still Can't Figure Out How To Deal With Naked Breasts concludes:
"rules will never encompass every possible situation, and we'll continue to see stories like this basically forever. We keep saying that content moderation at scale is impossible to do well, and part of that is because of stories like this. You can't create rules that work in every case, and there are more edge cases than you can possibly imagine."
Today's drop in the bucket comes from Munsif Vengattil and Paresh Dave with Facebook 'labels' posts by hand, posing privacy questions:
"Over the past year, a team of as many as 260 contract workers in Hyderabad, India has ploughed through millions of Facebook Inc photos, status updates and other content posted since 2014. ... Details of the effort were provided by multiple employees at outsourcing firm Wipro Ltd over several months. ... The Wipro work is among about 200 content labeling projects that Facebook has at any time, employing thousands of people globally, company officials told Reuters. Many projects are aimed at “training” the software that determines what appears in users’ news feeds and powers the artificial intelligence underlying many other features. ... But one former Facebook privacy manager, speaking on condition of anonymity, expressed unease about users’ posts being scrutinized without their explicit permission. The European Union’s year-old General Data Protection Regulation (GDPR) has strict rules about how companies gather and use personal data and in many cases requires specific consent.
“One of the key pieces of GDPR is purpose limitation,” said John Kennedy, a partner at law firm Wiggin and Dana who has worked on outsourcing, privacy and AI.
If the purpose is looking at posts to improve the precision of services, that should be stated explicitly, Kennedy said. Using an outside vendor for the work could also require consent, he said."
Kaya Yurieff provides today's drop in the bucket with Instagram still doesn't have vaccine misinformation under control:
"Two months after Facebook pledged to fight vaccine misinformation on its platforms, and in the midst of a measles outbreak in New York City, Instagram is still serving up posts from anti-vaccination accounts and anti-vaccination hashtags to anyone searching for the word "vaccines."
An Instagram spokesperson would not elaborate on why anti-vax accounts appear high up in search results, but said the company is working to find solutions for search results and hashtags which contain a high percentage of vaccine misinformation.
"They're doing the minimum to give the optics of some level of social corporate responsibility, without wanting to take on the anti-vaccine movement," said Peter Hotez, professor and dean of the National School of Tropical Medicine at Baylor College of Medicine."
Today's drop in the bucket comes from Facebook co-founder Chris Hughes in It’s Time to Break Up Facebook:
"Mark is a good, kind person. But I’m angry that his focus on growth led him to sacrifice security and civility for clicks. I’m disappointed in myself and the early Facebook team for not thinking more about how the News Feed algorithm could change our culture, influence elections and empower nationalist leaders. And I’m worried that Mark has surrounded himself with a team that reinforces his beliefs instead of challenging them."
Mike Masnick gives Chris Hughes's op-ed a good fisking in Facebook Co-Founder Chris Hughes Calls For Facebook's Breakup... But Seems Confused About All The Details. The key paragraph is:
"Moving to a world of protocols, in which you could move away from Facebook but still have access to the content and people who stayed on Facebook is the real solution. It removes the lock-in that is the true issue of dominance. And, unless antitrust somehow forces that to happen, a separate Instagram doesn't solve any of these issues."
Nick Clegg is Facebook's Vice-President for Global Affairs and Communications. His previous greatest achievement was as Leader of the Liberal Democrats in the UK Parliament, and Deputy Prime Minister in a coalition with the Conservatives under Prime Minister David Cameron. His tenure was such a success that his party went from 57 to 8 seats.
So he's obviously the right person to push back against Chris Hughes' New York Times op-ed. Clegg's op-ed, Breaking Up Facebook Is Not the Answer claims that "companies should be held accountable for their actions" but studiously avoids Hughes' main point that Zuckerberg's sole personal control of the company makes holding it accountable effectively impossible.
Overall, Clegg's op-ed reads just as one would expect from a failed politician on the lobbying gravy-train.
Cory Doctorow recommends an hour-long talk at Berkeley's i-School by Alex Stamos, formerly Chief Security Officer at Facebook, and now at Stanford:
"Stamos describes the crisis that giant platforms face in trying to balance out anti-abuse (which benefits from spying on users) with privacy and compliance with government regulation, and how they game those contradictions to let them do a terrible job on every front while sidestepping blame.
Stamos reveals the internal debate about moderation and bad speech -- harassment, extremist recruiting, disinformation -- at the platforms, and blames the platforms' unwillingness to make this dialog public for their crisis of credibility."
Sam Biddle's Thanks to Facebook, Your Cellphone Company Is Watching You More Closely Than Ever is today's drop in the bucket. Biddle reveals that:
"A confidential Facebook document reviewed by The Intercept shows that the social network courts carriers, along with phone makers — some 100 different companies in 50 countries — by offering the use of even more surveillance data, pulled straight from your smartphone by Facebook itself.
Offered to select Facebook partners, the data includes not just technical information about Facebook members’ devices and use of Wi-Fi and cellular networks, but also their past locations, interests, and even their social groups. This data is sourced not just from the company’s main iOS and Android apps, but from Instagram and Messenger as well. The data has been used by Facebook partners to assess their standing against competitors, including customers lost to and won from them, but also for more controversial uses like racially targeted ads."
Cory Doctorow's Facebook's Dutch Head of Policy lied to the Dutch parliament about election interference points to <a href="https://www.bitsoffreedom.nl/2019/05/21/facebook-lies-to-dutch-parliament-about-election-manipulation/">today's drop in the bucket</a> from Dutch activists Bits of Freedom:
"Wednesday May 15, 2019, Facebook’s Head of Public Policy for the Netherlands spoke at a round table in the House of Representatives about data and democracy. The Facebook employee reassured members of parliament that Facebook has implemented measures to prevent election manipulation. He stated: 'You can now only advertise political messages in a country, if you’re a resident of that country'."
But, as they show, that wasn't true.
Kieren McCarthy provides today's drop in the bucket with Revealed: Facebook, Google's soft-money 'blackmail' to stall Euro fake news crackdown:
"director-general of the consumer advocacy group BEUC, Monique Goyens – alleged that senior Facebook staff threatened to create problems if the group of experts pursued an effort to investigate whether the US tech giant was abusing its market power. "We were blackmailed," she bluntly summarized.
If this market abuse probe had continued, it would have encouraged the EU competition commissioner to examine whether Facebook and Google's business models had enabled the spread of fake news. Facebook didn't want that to happen, and according to another member who has remained anonymous, the biz considered pulling its financial support from organizations the experts were representing as a way of killing the idea."
Ryan Grenoble has today's drop in the bucket with 5 Percent Of Facebook Is Fake:
"About 5% of Facebook’s monthly active users are fake.
The figure may not sound like much, but given Facebook’s 2.4 billion monthly active users, that’s an astounding 120 million active users on the platform who are not who ― or even what ― they claim to be.
The social media platform included the figure in its Community Standards Enforcement report, the third edition of which it published Thursday."
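Grenoble's arithmetic is easy to check; a minimal sketch, using only the figures quoted above:

```python
# Figures from Facebook's Community Standards Enforcement report,
# as quoted by Grenoble above.
monthly_active_users = 2_400_000_000   # 2.4 billion monthly active users
fake_fraction = 0.05                   # ~5% estimated fake

fake_accounts = int(monthly_active_users * fake_fraction)
print(f"{fake_accounts:,} fake monthly active users")  # 120,000,000
```

One in every twenty "users" advertisers are paying to reach.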
Another drop in the bucket today! Josh Marshall's Facebook Holds Out for Phony Pelosi Vids shows that while YouTube has taken down the fake "drunk Pelosi" videos, Facebook:
"continue to derive ad and data revenues from the video and curry favor with the President of the United States by keeping it online."
Kara Swisher weighs in on Facebook's ridiculous excuses for not suppressing the "drunk Nancy Pelosi" video:
"Facebook’s product policy and counterterrorism executive, Monika Bickert, drew the short straw and had to try to come up with a cogent justification for why Facebook was helping spew ugly political propaganda.
“We think it’s important for people to make their own informed choice for what to believe,” she said in an interview with CNN’s Anderson Cooper. “Our job is to make sure we are getting them accurate information.”
This is ridiculous. The only thing the incident shows is how expert Facebook has become at blurring the lines between simple mistakes and deliberate deception, thereby abrogating its responsibility as the key distributor of news on the planet."
Bennett Cyphers' Fines Aren’t Enough: Here’s How the FTC Can Make Facebook Better has three useful suggestions for the FTC:
- Stop Third-Party Tracking
- Don’t Merge WhatsApp, Instagram, and Facebook Data
- Stop Data Broker-Powered Ad Targeting
Today's drop in the bucket comes from White Nationalist Groups Banned By Facebook Are Still On The Platform. Jane Lytvynenko, Craig Silverman and Alex Boutilier report that (my emphasis):
"Over a month after Facebook announced a ban of a number of white nationalist, white supremacist, and other hate groups, they are still on the platform and continue to use it for recruitment.
After the deadly attacks in Christchurch, New Zealand, where the gunman went live on Facebook for several minutes before killing 51 people, the platform announced several bans around the world of known extremist groups. The bans were imperfect.
In Canada, many of the groups remained on the platform. In the US, an early announcement of the ban allowed those who were de-platformed to ask their supporters to follow them elsewhere. Now researchers are saying some of the banned groups are still active on Facebook, and attempts to report them have been ignored by the company.
“Facebook likes to make a PR move and say that they’re doing something but they don’t always follow up on that,” Megan Squire, an Elon University computer science professor who researches online extremism, told a joint BuzzFeed News–Toronto Star investigation.
Kevin Chan, one of Facebook’s global policy directors, said while they proactively removed some hate groups, the company also relies on users, journalists, and other sources to report when banned personalities make it back on the platform."
But it turned out that, for example:
"In November last year, [Squire] flagged an Indiana-based Klan group that used the page to organize events and rallies.
The report was never acted on by Facebook, she said. When she learned the group was holding a rally in Ohio last weekend, Squire checked her Facebook support inbox and said her report from last year was still under review.
“This is literally the Klan we’re talking about,” she said."
John Oates reports on today's drop in the bucket:
"A court in Delaware has backed investors who want to see internal emails and other documents relating to how Facebook handed data on 50 million users to Cambridge Analytica.
The ruling (PDF) said that shareholders provided enough evidence to support a claim that failures by senior management and board members at the social network may have allowed the illegal data slurp to happen.
The judgement noted that at the time of the breach Facebook was already under Federal Trade Commission decree and had promised to strengthen its data protection policies as a result."
The best example yet of Facebook's Catch-22 comes from Carrie Mihalcik's Facebook reportedly thinks there's no 'expectation of privacy' on social media:
"Facebook on Wednesday reportedly argued that it didn't violate users' privacy rights because there's no expectation of privacy when using social media.
"There is no invasion of privacy at all, because there is no privacy," Facebook counsel Orin Snyder said during a pretrial hearing to dismiss a lawsuit stemming from the Cambridge Analytica scandal, according to Law 360.
The company reportedly didn't deny that third parties accessed users' data, but it instead told US District Judge Vince Chhabria that there's no "reasonable expectation of privacy" on Facebook or any other social media site.
The social network's legal argument comes as the world's largest social network is more publicly trying to convince people that it knows how to protect their personal information. Earlier this month, Facebook COO Sheryl Sandberg said she and CEO Mark Zuckerberg will do "whatever it takes" to keep people safe on Facebook."
And another drop in the bucket from the WSJ:
"There’s a kids’ iOS app called Curious World that, not surprisingly, stars the cute little pants-less monkey. Turns out, the app was collecting my son’s age, name and every book he tapped, and sending that data to Facebook Inc.
The company’s response? Whoopsies!
“There was some rogue code in the app that was mistakenly sending this data,” said Abhi Arya, the chief executive of Curious World. He says this info was not used by the company or Facebook. In response to this reporting, the company is updating the software.
Sharing information businesses know about children under 13 is prohibited by Facebook’s Terms of Service, a Facebook spokesman said. Apple says it is investigating the situation."
So what exactly had Facebook done to enforce their Terms of Service? Or Apple, for that matter?
"my colleague Mark Secada and I tested 80 apps, most of which are promoted in Apple’s App Store as “Apps We Love.” All but one used third-party trackers for marketing, ads or analytics. The apps averaged four trackers apiece.
Some apps send personal data without ever informing users in their privacy policies, others just use industry-accepted—though sometimes shady—ad-tracking methods. As my colleague Sam Schechner reported a few months ago (also with Mark’s assistance), many apps send info to Facebook, even if you’re not logged into its social networks. In our new testing, we found that many also send info to other companies, including Google and mobile marketers, for reasons that are not apparent to the end user."
So why is it OK for companies to outsource enforcement of the Terms of Service to journalists?
In his New York Times op-ed A Brief History of How Your Privacy Was Stolen, Roger McNamee continues to ask good questions:
"Why is it legal for service providers to comb our messages and documents for economically valuable data? Why is it legal for third parties to trade in our most private information, including credit card transactions, location and health data, and browsing history? Why is it legal to gather any data at all about minors? Why is it legal to trade predictions of our behavior?"
Today's drops in the bucket come from Jake Kanter's Facebook shareholder revolt gets bloody: Powerless investors vote overwhelmingly to oust Mark Zuckerberg as chairman:
"Some 68% of ordinary investors, those who are not part of management or the board, want to oust Zuckerberg as chairman and bring in an independent figure to chair Facebook's board. This was a significant increase on the 51% who voted in favor of an almost identical proposal last year."
And a study from Veronica Marotta, Vibhanshu Abhishek, and Alessandro Acquisti entitled Online Tracking and Publishers’ Revenues: An Empirical Analysis:
"While the impact of targeted advertising on advertisers’ campaign effectiveness has been vastly documented, much less is known about the value generated by online tracking and targeting technologies for publishers – the websites that sell ad spaces. In fact, the conventional wisdom that publishers benefit too from behaviorally targeted advertising has rarely been scrutinized in academic studies. We investigate how the (un)availability of users’ cookies, which affects the ability of advertisers to perform behavioral targeting, impacts publishers’ revenues. ... We find that when a user’s cookie is available publisher’s revenue increases by only about 4%. This corresponds to an average increase of $0.00008 per advertisement. The results contribute to the current debate over online behavioral advertising, and how benefits accrued from tracking and targeting online consumers may be differentially allocated to various stakeholders in the advertising ecosystem."
Cory Doctorow and Mike Masnick both comment on the study. Doctorow writes:
"They found that despite the 40% "ad-tech" premium charged by behavioral ad companies, the ads only added about 4% to the revenue of the media companies that published them, meaning that behavioral advertising is a losing proposition. What's more, serving behaviorally targeted ads involves a great deal of expense in the form of compliance and liability, numbers that will only go up as more privacy laws are enacted."
In other words, DuckDuckGo generates only fractionally less revenue per ad than Google, despite its much simpler and more privacy-preserving business model. Masnick writes:
"As a site that relies on advertising to make money, this is hellishly frustrating. For years we've been pitching non-invasive, non-tracking ad campaigns for Techdirt. Over and over again we tell potential advertisers that people here would be much more open to paying attention to their ads if they promised not to do any tracking at all. And, over and over again companies (even those that initially express interest) decide to throw all their money at the big flashy adtech firms that promise to use "AI" and "machine learning" to better target their ads -- and get little in return for it."
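To see why behavioral targeting is "a losing proposition" for publishers, the study's figures support a back-of-the-envelope sketch. The 4% uplift and $0.00008-per-ad numbers are from the study quoted above, and the 40% premium is the figure Doctorow cites; the baseline price is implied from those, not reported, so treat this as an illustration rather than the paper's own calculation:

```python
# Back-of-the-envelope sketch of the Marotta/Abhishek/Acquisti numbers.
uplift_per_ad = 0.00008   # extra publisher revenue per ad with a cookie ($)
publisher_uplift = 0.04   # that uplift is ~4% of revenue per ad
adtech_premium = 0.40     # behavioral ads cost advertisers ~40% more

# Implied baseline revenue per untargeted ad.
baseline_price = uplift_per_ad / publisher_uplift   # ~$0.002

# Extra money the advertiser spends per targeted ad, versus the share
# of it that actually reaches the publisher; the rest stays with the
# ad-tech middlemen.
extra_advertiser_spend = baseline_price * adtech_premium
share_to_publisher = uplift_per_ad / extra_advertiser_spend

print(f"Implied baseline: ${baseline_price:.4f} per ad")
print(f"Publisher keeps {share_to_publisher:.0%} of the targeting premium")
```

On these numbers the publisher keeps about a tenth of the targeting premium, which is Doctorow's point in miniature.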
Today's third drop in the bucket comes from Max Read's Does Facebook Have a Leaks Problem?:
"What mastermind created the infamous, slowed-down video of Nancy Pelosi — the one intended to make her sound drunk that went viral across right-wing media two weeks ago? ... Over the weekend, in an article in the Daily Beast, security journalist Kevin Poulsen unmasked the mastermind ... and found “a Donald Trump superfan and occasional sports blogger from the Bronx” named Shawn Brooks. Brooks had posted the video to two Facebook politics pages he was an administrator of, earning himself “nearly $1,000 in shared ad revenue.” ... But there was one striking detail to the article: Poulsen had confirmed Brooks’s identity with the help of an anonymous “Facebook official.” Someone inside Facebook had, it seems, accessed Brooks’s theoretically private user activity and used it to confirm Poulsen’s story."
Today's drop in the bucket was noticed by msmash at /. Facebook Worries Emails Could Show Zuckerberg Knew of Questionable Privacy Practices quotes from a WSJ story (my emphasis):
"Within the company, the unearthing of the emails in the process of responding to a continuing federal privacy investigation has raised concerns that they would be harmful to Facebook -- at least from a public-relations standpoint -- if they were to become public, one of the people said. The potential impact of the internal emails has been a factor in the tech giant's desire to reach a speedy settlement of the investigation by the Federal Trade Commission, one of the people said. Facebook is operating under a 2012 consent decree with the agency related to privacy, and the emails sent around that time suggest that Mr. Zuckerberg and other senior executives didn't make compliance with the FTC order a priority, the people said.
It couldn't be determined exactly what emails the agency has requested and how many of them relate to Mr. Zuckerberg. The FTC investigation began more than a year ago after reports that personal data of tens of millions of Facebook users improperly wound up in the hands of Cambridge Analytica, a data firm that worked on President Trump's 2016 campaign. The FTC is investigating whether that lapse violated the 2012 consent decree with the agency in which Facebook agreed to better protect user privacy. Since the Cambridge Analytica affair, other privacy missteps have come to light, adding to Facebook's headaches."
Don't think an Englishman won't know an understatement when he sees one!
Today's drop in the bucket comes from Zack Whittaker in Facebook collected device data on 187,000 users using banned snooping app:
"Facebook obtained personal and sensitive device data on about 187,000 users of its now-defunct Research app, which Apple banned earlier this year after the app violated its rules.
The social media giant said in a letter to Sen. Richard Blumenthal’s office — which TechCrunch obtained — that it collected data on 31,000 users in the U.S., including 4,300 teenagers. The rest of the collected data came from users in India."
Sam Biddle's In Court, Facebook Blames Users for Destroying Their Own Right to Privacy is a wonderful example of Facebook's Catch-22. He contrasts Zuckerberg's statements on privacy:
"In April 2018, Facebook CEO Mark Zuckerberg sat before members of both houses of Congress and told them his company respected the privacy of the roughly two billion people who use it. “Privacy” remained largely undefined throughout Zuckerberg’s televised flagellations, but he mentioned the concept more than two dozen times, including when he told the Senate’s Judiciary and Commerce committees, “We have a broader responsibility to protect people’s privacy even beyond” a consent decree from federal privacy regulators, and when he told the House Energy and Commerce Committee, “We believe that everyone around the world deserves good privacy controls.” A year later, Zuckerberg claimed in interviews and essays to have discovered the religion of personal privacy and vowed to rebuild the company in its image."
With Facebook's lawyers' statements in court:
"An outside party can’t violate what you yourself destroyed, Snyder seemed to suggest. Snyder was emphatic in his description of Facebook as a sort of privacy anti-matter, going so far as to claim that “the social act of broadcasting your private information to 100 people negates, as a matter of law, any reasonable expectation of privacy.” You’d be hard-pressed to come up with a more elegant, concise description of Facebook than “the social act of broadcasting your private information” to people. So not only is it Facebook’s legal position that you’re not entitled to any expectation of privacy, but it’s your fault that the expectation went poof the moment you started using the site (or at least once you connected with 100 Facebook “friends”)."
Today's drop in the bucket is Facebook content moderators break NDAs to expose shocking working conditions involving gruesome videos and feces smeared on walls by Lauren Feiner, based on Bodies In Seats by Casey Newton. Feiner reports:
"Three former Facebook content moderators agreed to put themselves in legal jeopardy to expose the appalling working conditions they experienced while employed by a vendor for the tech giant, according to a new report by The Verge.
Workers reported a dirty office environment where they often find pubic hair and bodily waste around their desks. Conditions at the Tampa site are so strenuous that workers regularly put their health in danger, several people told The Verge. One worker kept a trash can by her desk to throw up while she was sick since she had already used all her allotted bathroom breaks. Cognizant is not required to offer sick leave in Florida. One man had a heart attack at his desk and died shortly after, The Verge reported, and the site has not yet gotten a defibrillator."
Today's drop in the bucket is Alex Hern's Facebook usage falling after privacy scandals, data suggests:
"Since April 2018, the first full month after news of the Cambridge Analytica scandal broke in the Observer, actions on Facebook such as likes, shares and posts have dropped by almost 20%, according to the business analytics firm Mixpanel.
This month a market research firm, eMarketer, reported a decline in Facebook usage in the US, saying the typical Facebook user spent 38 minutes a day on the site, down from 41 minutes in 2017.
“On top of that, Facebook has continued to lose younger users, who are spreading their time and attention across other social platforms and digital activities,” eMarketer said."
In The Separation of Platforms and Commerce, Lina Khan reviews a historical example of Facebook's lying, the bait-and-switch they imposed on publishers worried about installing the "Like" button:
"To assuage publishers’ concerns, Facebook maintained the perception that it would not use these plug-ins to monitor users for the purpose of selling advertising. Keen to harness Facebook’s expansive network to increase clicks, publishers flocked to the plug-ins. Within the first week of the rollout, over 50,000 websites installed Facebook’s social plug-ins, helping Facebook embed its code across the internet. Contrary to Facebook’s representations, researchers later exposed that Facebook was using the Like button code to track what users were reading or buying—even if a user hadn’t clicked the Like button and even if the user had logged out of Facebook. Despite facing public backlash for both its apparent deception and its pervasive surveillance, Facebook did not change course—perhaps because it no longer faced serious competition in the social network market."
Page 1003, my emphasis.
Jemima Kelly's Nick Clegg: “I'm not just providing a PR gloss” is today's example of Facebook lying:
"ICYMI, former deputy prime minister-turned-YouTube sensation-turned-Facebook flack Nick Clegg went on Radio 4's Today programme earlier on Monday to plead for more regulation for Facebook. (Yes, really. And no, of course this isn't about heading off calls for a break-up of the company.)
Clegg said his calls for more regulation were absolutely not about fighting off a potential “dismemberment” of the company."
Today's drop in the bucket is Inside The Secret Border Patrol Facebook Group Where Agents Joke About Migrant Deaths And Post Sexist Memes by A.C. Thompson:
"Members of a secret Facebook group for current and former Border Patrol agents joked about the deaths of migrants, discussed throwing burritos at Latino members of Congress visiting a detention facility in Texas on Monday and posted a vulgar illustration depicting Rep. Alexandria Ocasio-Cortez engaged in oral sex with a detained migrant, according to screenshots of their postings."
Almost half of all Border Patrol agents were members.
Another drop in the bucket today is Facebook abused to spread Remote Access Trojans since 2014 by Charlie Osborne:
"Facebook has been exploited to act as a distribution platform for a set of Remote Access Trojans (RATs) for years, researchers say.
According to Check Point Research, a "large-scale" campaign has been operating under Facebook's radar since at least 2014 throughout a campaign related to politics in Libya.
The aim of the operation has been to spread RATs including Houdini, Remcos, and SpyNote. Tens of thousands of victims from Libya, Europe, the US, and China are believed to have been compromised."
Facebook's response:
"These Pages and accounts violated our policies and we took them down after Check Point reported them to us. We are continuing to invest heavily in technology to keep malicious activity off Facebook, and we encourage people to remain vigilant about clicking on suspicious links or downloading untrusted software."
Yet again, the responsibility for preventing Facebook from spreading malware is outsourced to security firms, the press and the users. It simply isn't Facebook's responsibility to police their platform.
Tim Wu, in an interview with Nicholas Thompson, demolishes Mark Zuckerberg's arguments against the breakup of Facebook. Read the whole thing here:
"there’s a subtle idea where big tech starts promising it's going to do government's work for it: We’re going to provide security, we're going to fight Russia, and so forth. First of all, I don't think Facebook has a good track record of protecting this country against foreign attack. So if they're promising more of the same, I don't want to hear it. And I also think anyone who studies systems knows that centralized systems are dangerous, because they offer one big, giant target"
Tip of the hat to Cory Doctorow.
Today's drops in the bucket include:
- Rob Price's Mark Zuckerberg's security chief is leaving after an investigation into allegations of misconduct. Misconduct hardly describes it - the allegations "included accusations of "pervasive discriminatory conduct," "horrific levels of sexual harassment and battery," and of creating an environment in which support staff were repeatedly subjected to homophobic and transphobic diatribes — as well as allegations that Booth made racist remarks about Priscilla Chan, Zuckerberg's wife."
- 2nd Customs and Border Protection-connected secret Facebook group shows mocking images by Geneva Sands and Nick Valencia. It appears likely that, just as with the first Facebook group revealed by ProPublica, CBP management was well aware of this one:
"The issue is not new, however, for Customs and Border Protection. In 2018, a senior official warned all agency employees of potential discipline, after having been informed of a private Facebook group with inappropriate and offensive posts, according to a memo obtained by CNN."
- How Facebook Fought Fake News About Facebook by Mark Bergen and Kurt Wagner. Facebook's response:
“We didn’t use this internal tool to fight false news because that wasn’t what it was built for, and it wouldn’t have worked,” the spokeswoman wrote in an email. “The tool was built with simple technology that helped us detect posts about Facebook based on keywords, so we could consider whether to respond to product confusion on our own platform. Comparing the two is a false equivalence."
As the NYT's Kevin Roose tweeted:
"You could write a dissertation about this quote, and the difference between what FB considers "product confusion" (wrong stuff about us, which must be removed immediately) and "false news" (wrong stuff about other people, which is protected free speech)"
Aaron Greenspan's Facebook: Mark Zuckerberg’s Fake Accounts Ponzi Scheme argues that Facebook's lying is at the heart of the company's growth, because its growth has been based on fake accounts:
"In Singer v. Facebook, Inc.—a lawsuit filed in the Northern District of California alleging that Facebook has been telling advertisers that it can “reach” more people than actually exist in basically every major metropolitan area—the plaintiffs quote former Facebook employees, understandably identified only as Confidential Witnesses, as stating that Facebook’s “Potential Reach” statistic was a “made-up PR number” and “fluff.” Also, that “those who were responsible for ensuring the accuracy ‘did not give a shit.’” Another individual, “a former Operations Contractor with Facebook, stated that Facebook was not concerned with stopping duplicate or fake accounts.”
That’s probably because according to its last investor slide deck and basic subtraction, Facebook is not growing anymore in the United States, with zero million new accounts in Q1 2019, and only four million new accounts since Q1 2017. That leaves the rest of the world, where Facebook is growing fastest “in India, Indonesia, and the Philippines,” according to Facebook CFO David Wehner.
Wehner didn’t mention the fine print on page 18 of the slide deck, which highlights the Philippines, Indonesia and Vietnam as countries where there are “meaningfully higher” percentages of, and “episodic spikes” in, fake accounts. In other words, Facebook is growing the fastest in the locations worldwide where one finds the most fraud. In other other words, Facebook isn’t growing anymore at all—it’s shrinking."
The headline of Aarti Shahani's FTC To Hold Facebook CEO Mark Zuckerberg Liable For Any Future Privacy Violations is a bit misleading (my emphasis):
"Facebook CEO Mark Zuckerberg will have to personally answer to federal regulators under an agreement to settle a privacy case with the Federal Trade Commission that includes a $5 billion penalty for the giant social media company, the agency announced Wednesday. Separately, Facebook will pay $100 million to settle a case with the Securities and Exchange Commission for making misleading disclosures about the risk that users' data would be misused, the SEC said.
Under the FTC agreement, Zuckerberg will be required to submit quarterly compliance reports directly to the federal regulators and to Facebook's board of directors. If the Facebook co-founder or "designated compliance officers" violate the agreement, they could be subject to civil and criminal penalties, the FTC said."
The new privacy requirements in the settlement sound good, but I'm sure Facebook's lawyers can make mincemeat of them. Even if they can't, and the FTC levies the next $5B fine on Zuckerberg personally, it wouldn't make much of a dent in his estimated $80B net worth. $5B is about 20% of the increase in his net worth year-to-date. So it is pretty much a "cost of doing business" fine.
Kieren McCarthy reports that UK parliament sends snippy letter to Zuck and his poodle Clegg as it seems Facebook has been lying again:
"Facebook has been asked to explain "direct contradictions" in its testimony to the UK Parliament in light of new information revealed in a complaint from the US Securities and Exchange Commission (SEC) last week.
In a letter [PDF] sent from the chair of the Digital, Culture, Media and Sport Committee, Damian Collins, to former Parliamentarian and now Facebook's VP of communications, Nick Clegg, the lawmakers ask a series of precise questions, quoting from the SEC complaint, and ask Facebook to explain how it fits with the company's previous answers over the Cambridge Analytica scandal."
Trauma Counselors Were Pressured to Divulge Confidential Information About Facebook Moderators, Internal Letter Claims by Sam Biddle documents Facebook's continuing inability to come clean:
"Neither Facebook nor Accenture responded to questions about these allegations beyond their general denials."
Craig Silverman's headline sums it up - Facebook Said It Would Give Detailed Data To Academics. They’re Still Waiting.:
"In 2018, Facebook announced a partnership to provide data to academics to “help people better understand the broader impact of social media on democracy — as well as improve our work to protect the integrity of elections.”
But as of today, many of the academic teams remain on hold because Facebook has yet to provide key data required to conduct research into sharing patterns of fake and polarized news, among other projects. Facebook has also declined to provide some of the data it originally said it would offer, citing privacy concerns."
The researchers should have looked at the history of Facebook's public statements.
Libra: The known unknowns and unknown unknowns by Barry Eichengreen shows that Facebook either isn't revealing or hasn't thought through their plans for Libra:
"Details about Libra, Facebook’s planned global currency, are not easy to decipher. This may be because the currency’s own designers are making up things on the fly. More likely is that they have a reasonably complete plan but are reluctant to reveal it in order to avoid exciting regulators in the US and abroad. But no one knows for sure."
And Thomas Hale's ECB board member slams “cartel-like” Libra shows that central bankers don't like the little they do know. Here is Yves Mersch:
" Libra’s ecosystem is not only complex, it is actually cartel-like. To begin with, Libra coins will be issued by the Libra Association – a group of global players in the fields of payments, technology, e‑commerce and telecommunications. The Libra Association will control the Libra blockchain and collect the digital money equivalent of seignorage income on Libra. The Libra Association Council will take decisions on the Libra network’s governance and on the Libra Reserve, which will consist of a basket of bank deposits and short-term government securities backing Libra coins. Libra-based payment services will be managed by a fully owned subsidiary of Facebook, called Calibra. Finally, Libra coins will be exclusively distributed through a network of authorised resellers, centralising control over public access to Libra. With such a set-up, it is difficult to discern the foundational promises of decentralisation and disintermediation normally associated with cryptocurrencies and other digital currencies. On the contrary, similarly to public money Libra will actually be highly centralised, with Facebook and its partners acting as quasi-sovereign issuers of currency."
Kieren McCarthy reports that:
"Facebook has been caught bending the truth again – only this time it has been forced to out itself.
For years the antisocial media giant has claimed it doesn’t track your location, insisting to suspicious reporters and privacy advocates that its addicts “have full control over their data,” and that it does not gather or sell that data unless those users agree to it.
No one believed it. So, when it (and Google) were hit with lawsuits trying to get to the bottom of the issue, Facebook followed its well-worn path to avoiding scrutiny: it changed its settings and pushed out carefully worded explanations that sounded an awful lot like it wasn’t tracking you anymore. But it was. Because location data is valuable."
But the Facebook app was actually tracking location even when the user thought it wasn't running. Then Facebook discovered that Google's and Apple's latest OS updates are designed to make background location tracking highly visible:
"late on Monday, Facebook emitted a blog post in which it kindly offered to help users “understand updates” to their “device’s location settings.”
Facebook's post is a masterpiece of obfuscation. Go read McCarthy's analysis of it; you will thank me.
Davey Alba's Ahead of 2020, Facebook Falls Short on Plan to Share Data on Disinformation points out that:
"In April 2018, Mark Zuckerberg, Facebook’s chief executive, told Congress about an ambitious plan to share huge amounts of posts, links and other user data with researchers around the world so that they could study and flag disinformation on the site.
... [Zuckerberg] said he hoped “the first results” would come by the end of that year.
But nearly 18 months later, much of the data remains unavailable to academics because Facebook says it has struggled to share the information while also protecting its users’ privacy. And the information the company eventually releases is expected to be far less comprehensive than originally described.
Seven nonprofit groups that have helped finance the research efforts, including the Knight Foundation and the Charles Koch Foundation, have even threatened to end their involvement."
More fool the foundations for believing anything Facebook says. They should have looked at the record.
Shaun Nichols' sub-head - Zuck-bucks dead in the water as payment giants snub currency tech - says it all:
"The Facebook-backed Libra crypto-currency project was dealt a crushing blow Friday when eBay, Stripe, and others yanked their support.
The two corporations confirmed within the past hour or so they will withdraw from the Libra Association, joining PayPal, which pulled out earlier this week. Mastercard, Visa, and Latin America payment giant Mercardo Pago are also reportedly pulling out, bringing the total number of Libra backers reportedly ejecting to six out of 28 initially supporting the project."
"Visa has confirmed it is exiting Facebook's crypto-dosh for now.
“Visa has decided not to join the Libra Association at this time," a spokesperson told The Register"
Mark Zuckerberg's questioning by Rep. Katie Porter and Rep. Alexandria Ocasio-Cortez makes must-watch video. AOC in particular elicited another instance of Facebook's version of "truth".
Today is a two-fer in Facebook lies! See Jon Schwarz' For Some Reason, We Can’t Find a Single Leftist Mark Zuckerberg Invited to His Dinners With Pundits From “Across the Spectrum” for the details.
Carole Cadwalladr's What happened when Alexandria Ocasio-Cortez came face to face with Facebook’s Mark Zuckerberg addresses Facebook's lying head-on:
"Alexandria Ocasio-Cortez, the congresswoman for New York’s 14th district, got five minutes to grill [Zuckerberg].
“Mr Zuckerberg, what year and month did you personally become aware of Cambridge Analytica?”
This was the question that I and a small group of super-nerds, which includes MPs such as Damian Collins and Ian Lucas, have been banging on about for more than a year. Because Zuckerberg last testified to Congress in April 2018 but so many more facts have come to light since. Facts that cast serious doubts over that testimony. Doubts that have opened up a serious question: did he lie on oath to Congress?
This is what Ocasio-Cortez was trying to dig into. “I’m not sure of the exact time,” said Zuckerberg. “But it was probably around the time when it became public, I think it was around March of 2018, I could be wrong.”
The problem for Zuckerberg is not that this defies common sense, though it does. It’s that it opens up a whole new world of trouble for Facebook. To understand why, you need to go back to the four days that followed Facebook’s legal threat to us and then its middle-of-the-night press release. ... When Zuckerberg finally emerged, a story was in place. The problem for him is that this has steadily been unravelling ever since.
In July this year, a Securities and Exchange Commission (SEC) investigation revealed that employees knew that Cambridge Analytica, described as a “sketchy” business, had been “scraping” Facebook data in 2015, before even the Guardian’s first report.
Because we know Facebook lied. The SEC investigation says that. To us at the Observer, in fact “… when asked by reporters in 2017 about its investigation into the Cambridge Analytica matter, Facebook falsely claimed the company found no evidence of wrongdoing”."
Aaron Sorkin, who wrote the screenplay for the 2010 movie The Social Network, has an open letter to Mark Zuckerberg that is a powerful rebuttal to Zuckerberg's political ad policy:
"right now, on your website, is an ad claiming that Joe Biden gave the Ukrainian attorney general a billion dollars not to investigate his son. Every square inch of that is a lie and it’s under your logo. That’s not defending free speech, Mark, that’s assaulting truth.
You and I want speech protections to make sure no one gets imprisoned or killed for saying or writing something unpopular, not to ensure that lies have unfettered access to the American electorate."
Mike Masnick fact-checks Aaron Sorkin's open letter about Facebook's policy of not fact-checking political ads. Seems like nit-picking to me.
Today's entry in the log-of-lies is Leaked documents show Facebook leveraged user data to fight rivals and help friends by Olivia Solon and Cyrus Farivar. They report on a cache of 7000 pages of internal Facebook documents:
"Taken together, they show how Zuckerberg, along with his board and management team, found ways to tap Facebook users' data — including information about friends, relationships and photos — as leverage over the companies it partnered with. In some cases, Facebook would reward partners by giving them preferential access to certain types of user data while denying the same access to rival companies.
All the while, Facebook planned to publicly frame these moves as a way to protect user privacy, the documents show."
Today's entry in the log-of-lies comes from Salvador Rodriguez. In Facebook co-founder Chris Hughes doesn't recall Zuckerberg discussing the Iraq War at Harvard he reports that:
"Facebook co-founder Chris Hughes said on Friday that he doesn't recall Mark Zuckerberg ever discussing the Iraq War during the early days of the company, contradicting recent comments from the CEO tying the war to his views on free speech.
"I had never heard that before, and the internet had never heard that before," Hughes said at an event with the Bay Area Chapter of the American Constitution Society. "I don't remember ever talking about that with Mark."
Last month, Zuckerberg told an audience at Georgetown University that discussion about the Iraq War at Harvard, where he was a student, and on Facebook in its embryonic days, played a key role in his controversial positions on policing speech."
Via Xeni Jardin, today's entry in the log-of-lies:
"Facebook users, who really should reconsider their life choices right about now, are noticing that Facebook's iOS app is accessing their iPhone or iPad camera while they're doing completely unrelated things, like skimming their FB feed to catch up on what friends are posting. One workaround is to go into your permissions and revoke the app's right to use your camera."
Facebook's response:
"This sounds like a bug, we are looking into it."
Schrodinger’s Cab Firm: Uber’s Existential Crisis by John Bull of the excellent if intermittent London Reconnections blog goes into fascinating detail about Uber facing a similar Catch-22 to Facebook's:
"The tribunal had looked at the fundamental contradiction lurking at the heart of Uber and decided that they were having none of it. They were presented with ample evidence that Uber consistently referred to ‘our drivers’. The firm constantly told passengers that it was Uber who was carrying them around. Uber also seemed happy to claim to TfL and the London Assembly Transport Committee that it employed people too. Indeed the tribunal were presented with testimony from the same team of lawyers that had presented to the Transport Committee. In it Ms Bertram herself referred to Uber creating jobs and having drivers.
“To our considerable surprise,” the ruling says, “Ms Bertram attempted before us to dismiss this as a typographical error.”
All of this led to one simple, potentially devastating legal conclusion for Uber.
Facing an insatiable demand for lies, Facebook automated the process. Facebook Gives Workers a Chatbot to Appease That Prying Uncle by Sheera Frenkel and Mike Isaac reports that:
" Some Facebook employees recently told their managers that they were concerned about answering difficult questions about their workplace from friends and family over the holidays.
So just before Thanksgiving, Facebook rolled out something to help its workers: a chatbot that would teach them official company answers for dealing with such thorny questions."
Social media platforms leave 95% of reported fake accounts up, study finds by Kate Cox reports that:
"Through the four-month period between May and August of this year, the research team conducted an experiment to see just how easy it is to buy your way into a network of fake accounts and how hard it is to get social media platforms to do anything about it.
The research team spent €300 (about $332) to purchase engagement on Facebook, Instagram, Twitter, and YouTube, the report (PDF) explains. That sum bought 3,520 comments, 25,750 likes, 20,000 views, and 5,100 followers. They then used those interactions to work backward to about 19,000 inauthentic accounts that were used for social media manipulation purposes.
About a month after buying all that engagement, the research team looked at the status of all those fake accounts and found that about 80 percent were still active. So they reported a sample selection of those accounts to the platforms as fraudulent. Then came the most damning statistic: three weeks after being reported as fake, 95 percent of the fake accounts were still active."
So the platforms can't moderate content themselves, and they can't crowdsource it either. With a 5% chance that anything will happen, people won't bother reporting fake accounts.
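The study's takedown figures can be sanity-checked with quick back-of-the-envelope arithmetic, using only the numbers quoted above:

```python
# Back-of-the-envelope arithmetic on the study's reported figures.
accounts = 19_000              # inauthentic accounts the researchers traced
active_after_month = 0.80      # fraction still active a month after purchase
active_after_report = 0.95     # fraction still active 3 weeks after being reported

still_active = int(accounts * active_after_month)
takedown_rate = round(1 - active_after_report, 2)

print(still_active)    # 15200 accounts still active a month later
print(takedown_rate)   # 0.05 -- a report has a 5% chance of any effect
```

In other words, of roughly 19,000 fake accounts, about 15,200 survived untouched for a month, and even an explicit fraud report was acted on only one time in twenty.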
See also Facebook May Face Another Fake News Crisis in 2020 by Will Oremus:
"In October, a bogus news article claimed that House Speaker Nancy Pelosi had diverted $2.4 billion from Social Security to pay for the impeachment of Trump. “ENOUGH!!!!!!! NO ONE IS SUPPOSED TO TOUCH THAT MONEY!!!!” begins one top comment on the story in a Trump-Pence 2020 group. “Treasonous worthless bitch,” reads another. The article was shared on Facebook more than a million times."
Sara Morrison's Facebook is gearing up for a battle with California’s new data privacy law reports on Facebook's desperate attempt to pretend that the California Consumer Privacy Act doesn't mean what it says:
"According to the Wall Street Journal, Facebook will claim that it doesn’t sell the data that its web trackers collect; it simply provides a service to businesses and websites that install Pixel on their sites. Because of this, it believes its web trackers are exempt from CCPA’s regulations, which have exceptions for data exchanged with a “service provider” that is “necessary to perform a business purpose.”
Legal experts who spoke to Recode disagreed with Facebook’s interpretation."
Lauren Feiner's Facebook fails to convince lawmakers it needs to track your location at all times reports that, when Facebook lets users turn off tracking services, it still tracks them:
"Facebook was responding to an inquiry from Sen. Josh Hawley, R-Mo., and Sen. Chris Coons, D-Del., who asked Facebook last month to "respect" users' decisions to keep their locations private. In a letter dated December 12 that was released Tuesday, Facebook explained how it is able to estimate users' locations used to target ads even when they've chosen to reject location tracking through their smartphone's operating system.
Facebook said that even when location tracking is turned off, it can deduce users' general locations from context clues like locations they tag in photos as well as their devices' IP addresses. While this data is not as precise as Facebook would collect with location tracking enabled, the company said it uses the information for several purposes, including alerting users when their accounts have been accessed in an unusual place and clamping down on the spread of false information.
Facebook acknowledged it also targets ads based on the limited location information it receives when users turn off or limit tracking. Facebook doesn't allow users to turn off location-based ads, although it does allow users to block Facebook from collecting their precise location, the company wrote."
Kari Paul's Teen Vogue pulls glowing Facebook story after 'sponsored content' accusations is yet another report of Facebook's duplicity:
"The confusing saga began when the Condé Nast publication on Wednesday published an article titled How Facebook Is Helping Ensure the Integrity of the 2020 Election. The story focused largely on how five female Facebook employees were “taking measures to protect against foreign interference and stop the spread of misinformation”, providing the social media company an uncritical platform to defend its policy of refusing to censor political advertisements ahead of the 2020 elections."
It turns out that a Facebook executive wrote it.
Xeni Jardin reports that Facebook's loss of credibility may be starting to have a significant impact. Reuters summarizes a Wall Street Journal article entitled Media rating council says social-media giant could be denied accreditation:
"Facebook Inc is at risk of losing a key seal of approval that gives companies confidence they are getting what they pay for when it comes to advertising with the social-media giant, the Wall Street Journal reported on Friday.
The company failed to address advertiser concerns arising from a 2019 audit, concerning how Facebook measures and reports data about video advertisements, the Journal reported, citing a notice from the Media Rating Council (MRC)."
Jamie Powell's Facebook: presented without comment presents an extract from pleadings in a lawsuit about the basis on which Facebook charges advertisers:
"Documents now confirm senior executives knew for years Potential Reach was inflated and misleading – yet they failed to act, and even took steps to conceal the problem. One Facebook employee wrote, “My question lately is: how long can we get away with the reach overestimation?” In fall 2017, Facebook COO Sheryl Sandberg acknowledged in an internal email she had known about problems with Potential Reach for years. The Potential Reach Product Manager (Yaron Fidler) proposed a fix that would have decreased the Potential Reach numbers. But Facebook’s metrics leadership team rejected his proposal because the “revenue impact” for Facebook would be “significant.” Fidler responded, “it’s revenue we should have never made given the fact it’s based on wrong data.” Fidler’s proposals to fix the flawed metric were repeatedly rejected. Instead, Facebook developed talking points to deflect from the truth."
The Ad Contrarian's Heads Must Roll makes an important point:
"Let's be very clear. Facebook is a product of the advertising industry. The advertising and marketing industries are fully complicit with Facebook. Every bit of squalid work that Facebook does is at the behest of marketers.
Anyone in a position of authority in our industry who claims that he or she does not know what Facebook is up to is either a liar or an imbecile. Wait...I take that back. No one is that stupid. If they say they don't know, they're liars.
Over 98% of Facebook's revenue comes from advertising. It is time for the "leaders" of our industry -- the 4As, the ANA, the IAB, and the major holding companies and brands -- to be held accountable. They have encouraged, financed, and defended the creeps at Facebook (and the rest of the adtech squidocracy) too long. They have done nothing -- zero -- to attenuate the frightening effects of adtech's dangerous activities.
Our industry "leaders" have neglected their responsibilities and driven the trustworthiness of our industry into the toilet while fraud has run rampant and unscrupulous operators like Facebook have taken control."
Jude Karabus reports that Facebook fined £50m in UK for 'conscious' refusal to report info and 'deliberate failure to comply' during Giphy acquisition probe:
"The UK's Competition and Markets Authority (CMA) has smacked Facebook with a £50m ($68.7m) fine for "deliberately" not giving it the full picture about its ongoing $400m acquisition of gif-slinger Giphy.
The move – fingered by the CMA as a "major breach" – comes just weeks after the antisocial network dismissed the UK's regulator's initial findings as being based on "fundamental errors" and just hours after the US Dept of Justice and its Department of Labor announced separate agreements with the firm in which it will fork over $14.25m to settle allegations of discriminatory hiring practices."
Just the cost-of-doing-business as usual.