Thursday, June 28, 2018

Rate limits

Andrew Marantz writes in Reddit and the Struggle to Detoxify the Internet:
[On 2017's] April Fools’, instead of a parody announcement, Reddit unveiled a genuine social experiment. It was called r/Place, and it was a blank square, a thousand pixels by a thousand pixels. In the beginning, all million pixels were white. Once the experiment started, anyone could change a single pixel, anywhere on the grid, to one of sixteen colors. The only restriction was speed: the algorithm allowed each redditor to alter just one pixel every five minutes. “That way, no one person can take over—it’s too slow,” Josh Wardle, the Reddit product manager in charge of Place, explained. “In order to do anything at scale, they’re gonna have to coöperate."
The r/Place experiment successfully forced coöperation, for example with r/AmericanFlagInPlace drawing the Stars and Stripes, or r/BlackVoid trying to rub out everything:
Toward the end, the square was a dense, colorful tapestry, chaotic and strangely captivating. It was a collage of hundreds of incongruous images: logos of colleges, sports teams, bands, and video-game companies; a transcribed monologue from “Star Wars”; likenesses of He-Man, David Bowie, the “Mona Lisa,” and a former Prime Minister of Finland. In the final hours, shortly before the experiment ended and the image was frozen for posterity, BlackVoid launched a surprise attack on the American flag. A dark fissure tore at the bottom of the flag, then overtook the whole thing. For a few minutes, the center was engulfed in darkness. Then a broad coalition rallied to beat back the Void; the stars and stripes regained their form, and, in the end, the flag was still there.
What is important about the r/Place experiment? Follow me below the fold for an explanation.

Marantz wrote a long and very interesting article covering a lot of ground, but the r/Place part is a great example of how important rate limits are to the integrity of the Internet. This is a topic I've written about before, for example in 2014's What Could Possibly Go Wrong and in 2015's Brittle Systems. In the latter I wrote:
The design goal of almost all systems is to do what the user wants as fast as possible. This means that when the bad guy wrests control of the system from the user, the system will do what the bad guy wants as fast as possible. Doing what the bad guy wants as fast as possible pretty much defines brittleness in a system; failures will be complete and abrupt. In last year's talk at UC Berkeley's Swarm Lab I pointed out that rate limits were essential to LOCKSS, and linked to Paul Vixie's article Rate-Limiting State making the case for rate limits on DNS, NTP and other Internet services.
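The r/Place limit is about as simple as a rate limit gets: the server need only remember when each redditor last placed a pixel, and refuse anything sooner than five minutes later. A minimal sketch of that check, in Python (the function and the in-memory dict are my illustration, not Reddit's actual code):

```python
import time

COOLDOWN_SECONDS = 5 * 60            # one pixel every five minutes, as in r/Place
last_placement = {}                  # redditor id -> time of their last accepted pixel

def try_place_pixel(user_id, x, y, color, canvas):
    """Accept the pixel only if the user's cooldown has expired."""
    now = time.time()
    last = last_placement.get(user_id)
    if last is not None and now - last < COOLDOWN_SECONDS:
        return False                 # too soon: the rate limit rejects the update
    canvas[y][x] = color
    last_placement[user_id] = now
    return True
```

Even this trivial amount of per-identity state caps how fast any one account can affect the shared canvas, which is exactly what forced the coöperation Wardle wanted.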
In How China censors the net: by making sure there’s too much information, John Naughton reviews Censored: Distraction and Diversion Inside China’s Great Firewall by Margaret Roberts. The book is about how China "manages" the Internet:
Censorship 2.0 is based on the idea that there are three ways of achieving the government’s desire to keep information from the public – fear, friction and flooding. Fear is the traditional, analogue approach. It works, but it’s expensive, intrusive and risks triggering a backlash and/or the “Streisand effect” – when an attempt to hide a piece of information winds up drawing public attention to what you’re trying to hide (after the singer tried to suppress photographs of her Malibu home in 2003).

Friction involves imposing a virtual “tax” (in terms of time, effort or money) on those trying to access censored information. If you’re dedicated or cussed enough you can find the information eventually, but most citizens won’t have the patience, ingenuity or stamina to persevere in the search. Friction is cheap and unobtrusive and enables plausible denial (was the information not available because of a technical glitch or user error?).

Flooding involves deluging the citizen with a torrent of information – some accurate, some phoney, some biased – with the aim of making people overwhelmed. In a digital world, flooding is child’s play: it’s cheap, effective and won’t generate backlash. (En passant, it’s what Russia – and Trump – do.)
Friction is the technique behind "walled gardens". The defense against friction is net neutrality, which is why the big (ISP, content) companies hate it so much.

Note, as both r/Place and Vixie demonstrate, that the defense against flooding is rate limits, which implies keeping state. Here is Vixie:
Every reflection-friendly protocol mentioned in this article is going to have to learn rate limiting. This includes the initial TCP three-way handshake, ICMP, and every UDP-based protocol. In rare instances it's possible to limit one's participation in DDoS reflection and/or amplification with a firewall, but most firewalls are either stateless themselves, or their statefulness is so weak that it can be attacked separately. The more common case will be like DNS [Response Rate Limiting], where deep knowledge of the protocol is necessary for a correctly engineered rate-limiting solution applicable to the protocol. Engineering economics requires that the cost in CPU, memory bandwidth, and memory storage of any new state added for rate limiting be insignificant compared with an attacker's effort. Attenuation also has to be a first-order goal—we must make it more attractive for attackers to send their packets directly to their victims than to bounce them off a DDoS attenuator.
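The textbook way to keep that state cheaply is a token bucket: two numbers per client, refilled at the permitted rate, so the cost Vixie insists must stay insignificant really is just a dictionary lookup and a little arithmetic. A generic sketch (mine, not Vixie's or any DNS server's code; the rate and burst values are purely illustrative):

```python
import time

class TokenBucket:
    """Per-client limiter: `rate` requests per second, bursts of up to `burst`."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.updated = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False        # over the limit: drop, delay, or degrade this request

# All the state rate limiting needs is one small bucket per source.
buckets = {}

def allow_request(client_addr, rate=10, burst=20):
    bucket = buckets.setdefault(client_addr, TokenBucket(rate, burst))
    return bucket.allow()
```

The burst allowance is what distinguishes this from r/Place's fixed cooldown: clients can be briefly fast, but not sustainably fast.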
Internet routers could prevent many kinds of DDoS attacks by implementing Source Address Verification (SAV), which would stop attackers from spoofing their packets' source addresses. But they don't, which makes rate limiting all the currently stateless protocols essential:
This effort will require massive investment and many years. It is far more expensive than SAV would be, yet SAV is completely impractical because of its asymmetric incentives. Universal protocol-aware rate limiting (in the style of DNS RRL, but meant for every other presently stateless interaction on the Internet) has the singular advantage of an incentive model where the people who would have to do the work are actually motivated to do the work. This effort is the inevitable cost of the Internet's "dumb core, smart edge" model and Postel's law ("be conservative in what you do, be liberal in what you accept from others").
DNS RRL was the first of these efforts. Here is a simple explanation of how DNS RRL works:
If one packet with a forged source address arrives at a DNS server, there is no way for the server to tell it is forged. If hundreds of packets per second arrive with very similar source addresses asking for similar or identical information, there is a very high probability of those packets, as a group, being part of an attack. The RRL software has two parts. It detects patterns in arriving queries, and when it finds a pattern that suggests abuse, it can reduce the rate at which the replies are sent.
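The crucial difference from a generic per-client limit is the key: RRL-style limiters aggregate by (source network, response), so a flood of near-identical forged queries stands out while normal traffic is untouched. A much-simplified sketch of the idea (the threshold, the /24 aggregation and the "slip" behavior are my summary of how RRL-style limiters work, not BIND's actual implementation):

```python
import time
from collections import defaultdict

RESPONSES_PER_SECOND = 5   # illustrative threshold; real deployments tune this
SLIP = 2                   # answer every second excess query with a truncated reply

counters = defaultdict(lambda: [0.0, 0])   # (source /24, answer) -> [window start, count]
excess = defaultdict(int)

def classify(source_ip, answer):
    """Return 'send', 'slip' (truncated reply, forcing a real client onto TCP), or 'drop'."""
    prefix = ".".join(source_ip.split(".")[:3])    # aggregate likely-spoofed sources by /24
    key = (prefix, answer)
    now = time.time()
    window_start, count = counters[key]
    if now - window_start >= 1.0:                  # start a new one-second window
        counters[key] = [now, 1]
        return "send"
    counters[key][1] = count + 1
    if count + 1 <= RESPONSES_PER_SECOND:
        return "send"
    excess[key] += 1
    # Slipped replies keep legitimate resolvers working, while the victim whose
    # address was forged receives a trickle instead of an amplified flood.
    return "slip" if excess[key] % SLIP == 0 else "drop"
```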
Even before the 2016 elections, social media platforms were discovering that they are just as vulnerable to DDoS-style attacks as Internet services further down the stack.

Damn the torpedoes, full speed ahead. Andrew Marantz writes in Reddit and the Struggle to Detoxify the Internet:
Social-media executives claim to transcend subjectivity, and they have designed their platforms to be feedback machines, giving us not what we claim to want, nor what might be good for us, but what we actually pay attention to.
Attention begets attention, and the feedback loop accelerates. In Twitter CEO wants to study platform’s “health,” but is he ignoring the cancer? Sam Machkovech quotes Jack Dorsey:
We love instant, public, global messaging and conversation. It's what Twitter is, and it's why we're here. But we didn't fully predict or understand the real-world negative consequences. ... We aren't proud of how people have taken advantage of our service or our inability to address it fast enough.
"instant, public, global messaging and conversation" - Twitter and Reddit are clearly designed to go as fast as possible. Machkovech writes:
the current algorithm is designed solely to suggest and auto-forward content that is simply the busiest—the most liked, most seen stuff.
Actually, this isn't quite true. Twitter started trying a form of rate limiting more than a year ago. In Twitter Is Now Temporarily Throttling Reach Of Abusive Accounts, Alex Kantrowitz wrote:
Twitter is temporarily decreasing the reach of tweets from users it believes are engaging in abusive behavior via a new protocol that began rolling out last week.

The protocol temporarily prevents tweets from users Twitter deems abusive from being displayed to people who don't follow them, effectively reducing their reach. If the punished user mentions someone who doesn't follow them, for instance, that person would not see the tweet in their notifications tab. And if the punished user's followers retweet them, those retweets wouldn't be shown to people who don't follow them.

Those impacted by the new protocol are already tweeting screenshots of Twitter's emails detailing their punishments. "We've detected some potentially abusive behavior from your account," the emails read. "So only your followers can see your activity on Twitter for the amount of time shown below."
The protocol doesn't limit the rate at which the punished user can tweet, but it does limit the rate at which their tweets can spread via retweets. And it does have an essential feature of rate limits: because abuse detection is probabilistic, the penalty times out (sketched below). Kantrowitz describes other, non-rate-limiting efforts to curb abuse:
In 2017, Dorsey made curbing harassment Twitter’s top priority, and Twitter’s product team released anti-harassment features at an unprecedented pace for the notoriously slow-moving company. They collapsed tweets they thought might be abusive, they built anti-abuse filters into search, they started allowing users to mute people who hadn’t confirmed their email addresses, phone numbers, or were using default profile pictures. They introduced a mute filter that could be applied to specific words. They even killed the default profile photo, doing away with the eggs that had long been synonymous with toxic trolls.
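Returning to the reach limit itself, in rate-limiting terms it is just a per-account penalty with an expiry, consulted whenever a tweet or retweet would be shown to a non-follower. A minimal sketch (the names and the penalty duration are my assumptions, not Twitter's implementation):

```python
import time

PENALTY_SECONDS = 12 * 60 * 60   # assumed duration; Twitter's emails show the actual time

penalty_expires = {}             # account id -> time at which the reach limit lapses

def apply_reach_limit(account_id, duration=PENALTY_SECONDS):
    """Start (or extend) a temporary reach limit on a suspected abusive account."""
    penalty_expires[account_id] = time.time() + duration

def visible_to_non_follower(author_id, viewer_follows):
    """Hide a penalized author's tweets and retweets from non-followers until the penalty times out."""
    expires = penalty_expires.get(author_id)
    if expires is None or time.time() >= expires:
        return True              # no penalty, or it has already timed out
    return author_id in viewer_follows
```

The timeout is the important part; since abuse detection will sometimes be wrong, the harm done to a wrongly flagged account is bounded.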
Despite these efforts, Machkovech shows example after example of clearly abusive tweets from bogus accounts, enabled by the ease with which accounts can be created:
Dorsey pointed to the private research firm Cortico, who created a series of conversation "health" metrics based on its studies of Twitter data: shared attention; shared reality; variety; and receptivity. ... Dorsey's calls for conversation health metrics do not in any way appreciate the apparent next-level disruption tactic already being rolled out on Twitter this year: subtler, seemingly real accounts popping up with the express intent of passing those four metrics on their face. I have chronicled an apparent rise in this account type for the past few weeks at my own Twitter account, often finding accounts that have existed for as briefly as a few months or as long as nine years.
Lower down the network stack these are called "Sybil attacks"; the defense is to slow or restrict the activity of new identities, or to impose a proof-of-work to render account creation and/or early activity expensive. But, as Machkovech is seeing, these defenses aren't effective against well-resourced adversaries with long time horizons:
But it could be something even scarier: an effort to test and tease Twitter's systems and to harvest innocent bystanders' reactions, thereby dumping fuel into an artificial intelligence-powered botnet. 2018's Twitter is already confusing, in terms of verifying whether a drive-by poster is in any way legitimate. What happens if 2020's Twitter is inundated in tens of thousands of real-sounding, TOS-abiding robots—assuming Twitter still exists by then?
In the design of the LOCKSS system we referred to this as the "Wee Forest Folk" attack, one in which the attacker builds up a good reputation over time through a large number of low-value transactions, then cashes in over a short period via a small number of high-value transactions.
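The proof-of-work defense mentioned above is usually a hashcash-style puzzle: the cost is trivial for someone creating one genuine account, but adds up at the scale a Sybil or "Wee Forest Folk" attacker needs. A minimal sketch (the difficulty and the function names are illustrative assumptions, not any platform's actual mechanism):

```python
import hashlib
import os

DIFFICULTY_BITS = 20    # assumption: roughly a million hashes of work per new account

def make_challenge():
    """Server issues a random challenge when an account is created."""
    return os.urandom(16).hex()

def leading_zero_bits(digest):
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(challenge):
    """Client grinds for a nonce: cheap for one account, costly for thousands."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce
        nonce += 1

def verify(challenge, nonce):
    """Server checks the proof with a single hash."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS
```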

It seems that temporarily limiting the rate and reach of posts from new and suspected abusive identities, the analog of China's "friction", is necessary but not sufficient. It is also problematic for the platforms. Facilitating the onboarding of new customers is important to customer acquisition and retention. Abuse detection is fallible, and will inevitably annoy some innocent customers. Because the limits affect (at least some) actual people, they are more problematic than rate limits lower down the stack, which merely annoy software.

5 comments:

  1. "I asked Ms. Gadde in several different ways if there was anything Mr. Trump could tweet that might result in censure from the company. She declined to answer directly, pointing me instead to a January statement in which the company stated that blocking a world leader’s tweets “would hide important information people should be able to see and debate.”

    But what if that “important information” conflicts with Twitter’s mission to promote a healthy public conversation?

    Sooner or later, Twitter’s executives and employees are going to have to make a decision about which is more important, Mr. Trump’s tweets or the company’s desire to promote a healthy public conversation. It’s hard to see how both are tenable."

    The conclusion of Farhad Manjoo's Employee Uprisings Sweep Many Tech Companies. Not Twitter.

  2. In Tesla Short-Sellers Harass Pulitzer-Winning Journalist Into Deleting Twitter Account Due To Review, Kossack "Rei" reports:

    "The Tesla Model 3 has gotten no shortage of positive, sometimes glowing reviews from reviewers — but perhaps it was Dan’s credentials that made this review one step too far for the short-sellers attempting to take down Tesla. Or perhaps it was the fact that Musk retweeted him. ...
    Dan soon fell under a storm of attacks for his review, both in the comments section of the WSJ and more extensively on Twitter. He was accused of being duped with a “rigged” car; of being in Tesla’s pocket; of having bias against other brands; and a continuous onslaught of other attacks. Dan spent much of Friday and Saturday defending himself on Twitter against well-known short sellers such as Mark Spiegel and popular Seeking Alpha contributor Montana Skeptic.

    And then gave up. And deleted his account to terminate the harassment."

    Twitter has a rate limit problem.

  3. Andrea James' Twitter's NSFW porn spam nightmare for women with common names reports on another area where Twitter needs rate limits:

    "For at least a couple of years, Twitter has allowed one porn spam bot to clog up search results for common women's names, as well as for names of young female celebrities. It would not take a lot to create an algorithm to block this specific spam, but it's still here, because Twitter can't seem to address the platform's pervasive hostility to women.

    The porn spambots typically pump out two posts a minute with a random string of sex-related search terms, along with a short video always overlaid with the same text and translucent rectangle to avoid copyright flagging of the clips they use. As soon as you block one, another appears."

  4. The BBC reports on another rate limit issue:

    "The volume of disinformation on the internet is growing so big that it is starting to crowd out real news, the Commons Digital, Culture, Media and Sport Committee chairman has said.

    Tory MP Damian Collins said people struggle to identify "fake news".

    MPs in their committee report said the issue threatens democracy and called for tougher social network regulation."

  5. "A team of researchers at Duo Security has unearthed a sophisticated botnet operating on Twitter — and being used to spread a cryptocurrency scam. ... The team used Twitter’s API and some standard data enrichment techniques to create a large data set of 88 million public Twitter accounts, comprising more than half a billion tweets. ... The study led them into some interesting analysis of botnet architectures — and their paper includes a case study on the cryptocurrency scam botnet they unearthed (which they say was comprised of at least 15,000 bots “but likely much more”), and which attempts to syphon money from unsuspecting users via malicious “giveaway” link. ... ‘Attempts’ being the correct tense because, despite reporting the findings of their research to Twitter, they say this crypto scam botnet is still functioning on its platform — by imitating otherwise legitimate Twitter accounts, including news organizations (such as the below example), and on a much smaller scale, hijacking verified accounts" from Duo Security researchers’ Twitter ‘bot or not’ study unearths crypto botnet by Natasha Lomas. The Duo team's paper is here.
