Tuesday, September 18, 2018

Vint Cerf on Traceability

Vint Cerf's Traceability addresses a significant problem:
how to preserve the freedom and openness of the Internet while protecting against the harmful behaviors that have emerged in this global medium. That this is a significant challenge cannot be overstated. The bad behaviors range from social network bullying and misinformation to email spam, distributed denial of service attacks, direct cyberattacks against infrastructure, malware propagation, identity theft, and a host of other ills
Cerf's proposed solution is:
differential traceability. The ability to trace bad actors to bring them to justice seems to me an important goal in a civilized society. The tension with privacy protection leads to the idea that only under appropriate conditions can privacy be violated. By way of example, consider license plates on cars. They are usually arbitrary identifiers and special authority is needed to match them with the car owners ... This is an example of differential traceability; the police department has the authority to demand ownership information from the Department of Motor Vehicles that issues the license plates. Ordinary citizens do not have this authority.
Below the fold I examine this proposal and one of the responses.

The first thing to note is that while Cerf's license plate example seems good:
It is illegal to run a license plate check on someone else, regardless of the circumstances. Only a member of law enforcement can run a license plate or lookup license plate numbers to find vehicle owner information.

While there are companies that offer such things as a free license plate or tag number search, you should always be careful when dealing with this type of company and always read the fine print. If you see an online license plates search, It is most likely a scam.
it unfortunately illustrates one of the problems with trusting the police to remove the veil of "apparent anonymity". For example, Hack Attack, Nick Davies' astonishing account of the Murdoch press' "information operations", shows how dependent they were on bribing corrupt serving and former police officers:
the reporter who first told me about bribing police officers ... had spent years with the Daily Mail, probably the most hardline law-and-order newspaper in the country ... unless it is itself the offender. [He] told detailed stories of using a former detective as a go-between, to hand over envelopes of cash for serving officers, to persuade them to disclose material from police computers or from current investigations. ... It turned out that all this crime had built up slowly among numerous Fleet Street papers, quality and tabloid. It had reached the point in the early 2000s where several news desks had banned their reporters from commissioning investigators, not because so much of their work was illegal but simply because they were costing such a lot. ... two [Information Commissioner] reports described a network which for years had been run by a private investigator called Steve Whittamore, who ... had two men inside the Driver and Vehicle Licensing Agency.
Among those deanonymized by Whittamore were:
the owners of every car which was parked near a village green where the actor Hugh Grant was playing cricket.
Police corruption is a world-wide problem whose effects are probably worse than "social network bullying and misinformation". For a more recent example see last Thursday's New York Times:
It was a sweeping and complex criminal enterprise: brothels in Brooklyn, where 15-minute sexual encounters added up to more than $2 million in profits in a 13-month period, and nail salons in Queens, where managers, runners and agents placed bets in an old-school numbers racket.

And the mastermind was a retired New York City police detective who recruited at least seven police officers acting as foot soldiers, according to court documents charging the group on Thursday.
In the US, private investigators use license plate databases to provide "ordinary citizens" with deanonymization services for a price:
if a client asks us to trace a suspicious license plate, we log into a specialized database that supplies DMV/MVD data on vehicles in all 50 states.
Where do the databases get their information?

Government custodians aren't the only ones with a corruption problem. Another investigator described by Nick Davies:
had targeted the call centres of the main mobile phone companies by paying cash bribes to some staff there ... doubling their legitimate salary by selling confidential information.
Mobile phone companies, now the world's leading ISPs, see collecting and selling personal information as essential to their business model. Other platform companies face similar corruption problems:
Employees of Amazon, primarily with the aid of intermediaries, are offering internal data and other confidential information that can give an edge to independent merchants selling their products on the site, according to sellers who have been offered and purchased the data, brokers who provide it and people familiar with internal investigations.
On Dave Farber's IP list Lauren Weinstein responded to Cerf's article:
While I have frequently called for greater accountability in key aspects of Internet operations (in particular, public access to WHOIS domain data except in limited circumstances), I fear that in the general case Vint's Traceability proposal would mostly gladden the hearts of bad governmental players in countries such as China, Russia, and even here in the USA. It basically amounts to an escrowed identity system, a concept that has been widely and appropriately criticized in the encryption arena. Given that a significant degree of anonymity is crucial for human rights advocates and others who live in areas of the world that are routinely under government oppression, I do not see obvious ways that Vint's proposal could be implemented without innocent parties being even more at the mercy of oppressive governments than they are today.
Weinstein is right to stress the importance to free society of "a significant degree of anonymity". Cerf is right to point out that:
absolute anonymity is actually quite difficult to achieve ... and might not be absolutely desirable given the misbehaviors apparent anonymity invites.
The important point to note is the agreement that what we have now is a "significant degree" of "apparent anonymity". Recent events and research have illuminated the capabilities governments, corporations and malign actors have to deanonymize Internet actors. Examples include:
  • The fact that Robert Mueller's office was able to indict twelve named GRU operatives, justifying the indictments with copious details about their organizations and activities. This despite the fact that, as professional cyber intelligence operatives, the twelve would have been intensively trained in counter-surveillance.
  • Tor is the best widely-available tool for anonymous access to the Web. But, like all software, it has flaws. In Tor's case, these flaws often allow users to be deanonymized. For example, in Exploit vendor drops Tor Browser zero-day on Twitter, Catalin Cimpanu reports that:
    Zerodium, a company that buys and sells vulnerabilities in popular software, has published details today on Twitter about a zero-day vulnerability in the Tor Browser, a Firefox-based browser used by privacy-conscious users for navigating the web through the anonymity provided by the Tor network.

    In a tweet, Zerodium said the vulnerability is a full bypass of the "Safest" security level of the NoScript extension that's included by default with all Tor Browser distributions. ... Zerodium CEO Chaouki Bekrar provided more details about today's zero-day. ... "This Tor Browser exploit was acquired by Zerodium many months ago as a zero-day and was shared with our government customers."
    Not to mention the time when, with DOD funding:
    The Software Engineering Institute ("SEI") of Carnegie Mellon University (CMU) compromised the network in early 2014 by operating relays and tampering with user traffic.
    The deanonymized individual's information was supplied to the FBI.
  • Despite the fact that all transactions are recorded in a public database, it is widely assumed that Bitcoin and other cryptocurrencies provide "significant anonymity", especially when combined with "mixers" such as CoinJoin. But Bitcoin and Anonymity: Not So Much on CoinLab's blog reports that:
    CoinLab’s Patent for methods for deanonymizing Bitcoin wallets and transactions was released March 28 [2016]. We have held off on describing methodologies and impacts until the patent was published, but it seems past time to talk about what we can see using this technology, and possible impacts on Bitcoin businesses.
    Deanonymizing cryptocurrency users has been a fruitful research field. Examples include Toward De-Anonymizing Bitcoin by Mapping Users Location, When the cookie meets the blockchain: Privacy risks of web payments via cryptocurrencies and even Bitcoin over Tor isn't a Good Idea.
  • Correlating the vast amount of information collected and traded by trackers in the Internet advertising ecosystem allows platforms and their customers (and their customers' customers, ...) to deanonymize Web users with ease.  Jack Balkin and Jonathan Zittrain's A Grand Bargain to Make Tech Companies Trustworthy suggests making the platforms "information fiduciaries":
    Like older fiduciaries, [platforms] have become virtually indispensable. Like older fiduciaries, these companies collect a lot of personal information that could be used to our detriment. And like older fiduciaries, these businesses enjoy a much greater ability to monitor our activities than we have to monitor theirs. As a result, many people who need these services often shrug their shoulders and decide to trust them. But the important question is whether these businesses, like older fiduciaries, have legal obligations to be trustworthy. The answer is that they should.
    By restricting the flow of tracking data through the ecosystem, this proposal would reduce the number of companies with easy deanonymization capability, but it would cement the platforms' role as the one-stop-shop for corporate and government deanonymization services.
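The cryptocurrency research above mostly builds on one simple idea, the "common-input-ownership" heuristic: addresses that jointly fund a transaction are assumed to be controlled by the same wallet, so the public ledger can be collapsed into clusters of co-owned addresses. A minimal sketch, using invented transaction data rather than any real blockchain API:

```python
from collections import defaultdict

class UnionFind:
    """Disjoint-set structure for merging addresses into wallet clusters."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def cluster_addresses(transactions):
    """Common-input-ownership heuristic: co-spending inputs share an owner."""
    uf = UnionFind()
    for tx in transactions:
        inputs = tx["inputs"]
        for addr in inputs[1:]:
            uf.union(inputs[0], addr)
    clusters = defaultdict(set)
    for tx in transactions:
        for addr in tx["inputs"]:
            clusters[uf.find(addr)].add(addr)
    return list(clusters.values())

# Two hypothetical transactions; the shared input "A2" links all three addresses
txs = [
    {"inputs": ["A1", "A2"], "outputs": ["B1"]},
    {"inputs": ["A2", "A3"], "outputs": ["B2"]},
]
print(cluster_addresses(txs))  # one cluster: {"A1", "A2", "A3"}
```

Mixers such as CoinJoin deliberately violate this heuristic by pooling inputs from many users, which is why the deanonymization research combines it with timing, amount, and (as in the Tor paper) network-layer signals.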
In effect, Cerf's proposal is not that authorities be provided with a new capability to deanonymize Internet actors, but rather that some unspecified actions be taken to greatly reduce the cost of the capabilities they (and many others) already have. As I understand it, Weinstein's position is that explicit support for "differential traceability" would be a bad idea; that the cost of current deanonymization techniques restricts their application to cases where the returns justify the investment, thus providing "a significant degree of anonymity".

Currently, the cost of deanonymization is probably higher for law enforcement than for companies or bad actors. How could a large cost reduction be made available only to law enforcement users trusted not to abuse it? Explicit support for "differential traceability" has the same problem as the Clipper chip, Ray Ozzie's recent "Clear" proposal, government advocacy of encryption "backdoors", etc. The support would have to be implemented in widely-available software. It would have bugs, so even if law enforcement were incorruptible, the bad guys (whether criminals or hostile nation-state actors) would find ways to subvert it.

These would likely include both the ability to create new Sybil identities, and the ability to impersonate innocent users. The first vitiates the whole point of the system, the second makes it even more dangerous than the current situation. At least now everyone understands that attributing Internet actions to specific people is dangerous. But after implementing "differential traceability" the authorities would need to convince everyone that it was flawless, otherwise why did they go to all that trouble? So the general public's valuable skepticism about Internet identities would be undermined.

The significant problems Cerf lists could perhaps be somewhat mitigated if the cost and hassle factor for law enforcement of deanonymizing Internet malefactors were reduced. But law enforcement, especially international law enforcement, has limited resources and many tasks for good reason assigned higher priority than "social network bullying and misinformation". It is doubtful that such cost reduction would change these priorities much. Mueller's GRU indictments show that in important cases law enforcement can and does deanonymize bad actors; Facebook's deletion of material from the Internet Research Agency shows that, under pressure, companies do the same.

Unless priorities were changed enough to greatly raise malefactors' perception of the risks they run, efforts to implement explicit as opposed to implicit "differential traceability" would be ineffectual. The impossibility of restricting explicit "differential traceability" to law enforcement, and the fact that law enforcement's trustworthiness is highly variable, argue strongly against efforts to implement it.

Thanks to Lauren Weinstein for permission to quote his response in full.

4 comments:

  1. As you mentioned, perfect privacy is perhaps impossible. For instance, back in the 1970's Lance Hoffman published a paper (which seems not to be online) with a title something like "Extracting Personally Identifiable Information From Anonymous Databases". It described how easy it is to use focused searches on knowable, non-sensitive attributes (such as the degrees granted by particular schools) to zero-in on a single individual.

    In today's world of linking via chains of attributes it often is not that hard to link "anonymous" data from one source with "anonymous data" from another and eventually tie the whole thing to a named individual. A couple of years ago I watched as a skilled tracker (and a volunteer subject) used fairly generic and generally considered non-sensitive attributes to end up getting into some of the target's quite sensitive personally identifiable information - and this was done in a few minutes from a laptop during a conference session.
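    The attribute-linking attack described above can be sketched in a few lines; the datasets, field names and records here are all invented for illustration:

```python
def link(records_a, records_b, keys):
    """Return pairs of records that agree on every quasi-identifier in `keys`."""
    index = {}
    for r in records_b:
        index.setdefault(tuple(r[k] for k in keys), []).append(r)
    matches = []
    for r in records_a:
        for other in index.get(tuple(r[k] for k in keys), []):
            matches.append((r, other))
    return matches

# "Anonymous" data: no names, but non-sensitive quasi-identifiers remain
medical = [{"zip": "02138", "birth_year": 1954, "degree": "PhD",
            "diagnosis": "hypertension"}]
# A public list with names plus the same attributes
alumni = [{"zip": "02138", "birth_year": 1954, "degree": "PhD",
           "name": "J. Smith"},
          {"zip": "02139", "birth_year": 1954, "degree": "PhD",
           "name": "A. Jones"}]

hits = link(medical, alumni, keys=["zip", "birth_year", "degree"])
print(hits[0][1]["name"])  # prints "J. Smith" - the single match re-identifies the record
```

    With enough quasi-identifiers the candidate set shrinks to one, and the "anonymous" sensitive record is tied to a named individual.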

    Way back in the 1970's there was a lot of good thinking about privacy, but it has been effectively lost because it is found only on paper; it was never digitized. One good volume (which is online) is the 1973 HEW Report on Records, Computers and the Rights of Citizens - https://www.epic.org/privacy/hew1973report/

    That report, if I remember rightly, suggested that in addition to adding friction to data access that there should be some rules:

    1. That when possible information should be fuzzed to remove some digits of significance - like providing only zip codes rather than addresses, telephone prefixes rather than full numbers, grouping of numeric data into ranges, etc.

    2. That no access should itself be anonymous; that those making access should be required to state onto a permanent record their own identity and the reason for the access.

    3. When possible there should be notice to the data subject and an opportunity to challenge the access. (This is, of course, something that is very context sensitive - there will be many situations in which notice would be inappropriate.)

    4. That the one seeking access should bear all of the costs springing from that access.

    5. That there be some sort of contractual (with third party beneficiary rights) or other legal framework (such as fiduciary responsibilities) that define and constrain how the data obtained may be used and obligations for non-linking and destruction.
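    The generalization step in rule 1 is easy to sketch; the field names and bucket sizes here are illustrative assumptions, not taken from the HEW report:

```python
def fuzz_record(record):
    """Coarsen attributes before release: ZIP only, phone prefix only, age range."""
    decade = record["age"] // 10 * 10
    return {
        "zip": record["zip"][:5],             # 5-digit ZIP, no street address
        "phone_prefix": record["phone"][:7],  # area code + prefix, no line number
        "age_range": f"{decade}-{decade + 9}",
    }

raw = {"zip": "02138-2901", "phone": "617-555-0123", "age": 47}
print(fuzz_record(raw))
# {'zip': '02138', 'phone_prefix': '617-555', 'age_range': '40-49'}
```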

    When we (the collective "we") designed the internet we shied away from the hard issue of privacy. In the early net we all knew one another - it was somewhat like a club of acquaintances. It was known as far back as the mid 1970's that often times one wanted something on the net that could serve as a lord-of-identity against which access rights could be measured. But that clashed with fear of governments and abusive corporations - Watergate was fresh in everyone's memory, "1984" was in the future and there was concern that the book could become reality. (It is interesting that the Chair of the Senate Committee that addressed Watergate was a strong advocate of privacy in information systems.)

    --karl--
    Karl Auerbach

    ReplyDelete
  2. India has a biometric identity system that it claims is completely secure:

    "Claims made in the report about Aadhaar being vulnerable to tampering leading to ghost entries in Aadhaar database by purportedly bypassing operators’ biometric authentication to generate multiple Aadhaar cards is totally baseless."

    The report in question is the result of a 3-month investigation by Huffpost India that found:

    "The authenticity of the data stored in India's controversial Aadhaar identity database, which contains the biometrics and personal information of over 1 billion Indians, has been compromised by a software patch that disables critical security features of the software used to enrol new Aadhaar users, a three month-long investigation by HuffPost India reveals.

    The patch—freely available for as little as Rs 2,500 (around $35)— allows unauthorised persons, based anywhere in the world, to generate Aadhaar numbers at will, and is still in widespread use."

    The patch is pretty trivial:

    "Using the patch is as simple as installing the enrolment software on a PC, and replacing a folder of Java libraries using the standard Control C, Control V cut-paste commands familiar to any computer user.

    Once the patch is installed, enrolment operators no longer need to provide their fingerprint to use the enrolment software, the GPS is disabled, and the sensitivity of the iris scanner is reduced. This means that a single operator can log into multiple machines at the same time, reducing the cost per enrolment, and increasing their profits."

    ReplyDelete
  3. French police officer caught selling confidential police data on the dark web by Catalin Cimpanu illustrates the problem perfectly:

    "The officer stands accused of selling confidential information such as sensitive documents that made their way into the hands of cyber-criminals, ...

    French authorities also say the officer advertised a service to track the location of mobile devices based on a supplied phone number. He advertised the system as a way to track spouses or members of competing criminal gangs. Investigators believe Haurus was using the French police resources designed with the intention to track criminals for this service.

    He also advertised a service that told buyers if they were tracked by French police and what information officers had on them."

    French Cop Arrested For Selling Sensitive Law Enforcement Info On The Dark Web by Tim Cushing comments:

    "this is exactly why no government agency -- not to mention the private companies involved -- should be allowed to utilize encryption backdoors, as the EFF's Director of Cybersecurity, Eva Galperin, pointed out on Twitter. It's not just about the hundreds of malicious hackers who will see an inviting, new attack vector. It's that no one -- public or private sector -- can be completely trusted to never expose or misuse these avenues of access. And since that's a fact of life, sometimes the best solution is to remove the temptation."
    ReplyDelete
  4. Thomas Claburn reports:

    "A database search of car registrations appears to have outed more than 300 GRU agents.

    Following Thursday's report from Dutch and British authorities of a thwarted hacking attack involving four Russian nationals alleged to be officers in Russia's Main Directorate of the General Staff of the Armed Forces (GRU) cyber warfare unit, investigative news site Bellingcat and The Insider, a Russian crowdfunded news organization, used the names of the alleged perpetrators to identify 305 other potential GRU agents."

    ReplyDelete