Tuesday, December 18, 2018

Securing The Software Supply Chain

This is the second part of a series about trust in digital content that might be called:
Is this the real life?
Is this just fantasy?
The first part was Certificate Transparency, about how we know we are getting content from the Web site we intended to. This part is about how we know we're running the software we intended to. This question, how to defend against software supply chain attacks, has been in the news recently:
A hacker or hackers sneaked a backdoor into a widely used open source code library with the aim of surreptitiously stealing funds stored in bitcoin wallets, software developers said Monday.

The malicious code was inserted in two stages into event-stream, a code library with 2 million downloads that's used by Fortune 500 companies and small startups alike. Stage one, version 3.3.6, published on September 8, included a benign module known as flatmap-stream. Stage two was implemented on October 5 when flatmap-stream was updated to include malicious code that attempted to steal bitcoin wallets and transfer their balances to a server located in Kuala Lumpur.
See also here and here. The good news is that this was a highly specific attack against a particular kind of cryptocurrency wallet software; things could have been much worse. The bad news is that, however effective they may be against some supply chain attacks, none of the techniques I discuss below the fold would defend against this particular attack.

In an important paper entitled Software Distribution Transparency and Auditability, Benjamin Hof and Georg Carle from TU Munich use Debian's Advanced Package Tool (APT) as an example of a state-of-the-art software supply chain, and:
  • Describe how APT works to maintain up-to-date software on clients by distributing signed packages.
  • Review previous efforts to improve the security of this process.
  • Propose to enhance APT's security by layering a system similar to Certificate Transparency (CT) on top.
  • Detail the operation of their system's logs, auditors and monitors, which are similar to CT's in principle but different in detail.
  • Describe and measure the performance of an implementation of their layer on top of APT using the Trillian software underlying some CT implementations.
There are two important "missing pieces" in their system, and all its predecessors, which are the subjects of separate efforts:
  • Reproducible Builds.
  • Bootstrappable Compilers.

How APT Works

A system running Debian or another APT-based Linux distribution runs software it receives in "packages" that contain the software files, and metadata that includes dependencies. The packages' hashes can be verified against those in a release file, signed by the distribution publisher. Packages come in two forms, source and compiled. The source of a package is signed by the official package maintainer and submitted to the distribution publisher. The publisher verifies the signature and builds the source to form the compiled package, whose hash is then included in the release file.

The signature on the source package verifies that the package maintainer approves this combination of files for the distributor to build. The signature on the release file verifies that the distributor built the corresponding set of packages from approved sources and that the combination is approved for users to install.
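The hash check at the heart of this chain can be caricatured in a few lines. This is a toy sketch, not APT's actual file format: real Release files carry several hash algorithms, file sizes, and a detached GPG signature, and the names here (`parse_release`, `verify_package`) are invented for illustration.

```python
import hashlib

def parse_release(release_text):
    """Parse a simplified release file: one 'sha256 filename' entry per line."""
    expected = {}
    for line in release_text.strip().splitlines():
        digest, name = line.split()
        expected[name] = digest
    return expected

def verify_package(name, data, expected):
    """Check a downloaded package's SHA-256 against the (signed) release file."""
    if name not in expected:
        raise ValueError(f"{name} not listed in release file")
    return hashlib.sha256(data).hexdigest() == expected[name]

# Hypothetical one-package release file.
pkg = b"pretend this is a .deb"
release = "%s tool_1.0_amd64.deb" % hashlib.sha256(pkg).hexdigest()
expected = parse_release(release)
assert verify_package("tool_1.0_amd64.deb", pkg, expected)
assert not verify_package("tool_1.0_amd64.deb", pkg + b"tampered", expected)
```

The security of the whole scheme rests on the publisher's signature over the release file; the per-package hashes merely extend that trust to each download.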

Previous Work

It is, of course, possible for the private keys on which the maintainer's and distributor's signatures depend to be compromised:
Samuel et al. consider compromise of signing keys in the design of The Update Framework (TUF), a secure application updater. To guard against key compromise, TUF introduces a number of different roles in the update release process, each of which operates cryptographic signing keys.

The following three properties are protected by TUF. The content of updates is secured, meaning its integrity is preserved. Securing the availability of updates protects against freeze attacks, where an outdated version with known vulnerabilities is served in place of a security update. The goal of maintaining the correct combination of updates implies the security of meta data.
The goal of introducing multiple roles each with its own key is to limit the damage a single compromised key can do. An orthogonal approach is to implement multiple keys for each role, with users requiring a quorum of verified signatures before accepting a package:
Nikitin et al. develop CHAINIAC, a system for software update transparency. Software developers create a Merkle tree over a software package and the corresponding binaries. This tree is then signed by the developer, constituting release approval. The signed trees are submitted to co-signing witness servers.

The witnesses require a threshold of valid developer signatures to accept a package for release. Additionally, the mapping between source and binary is verified by some of the witnesses. If these two checks succeed, the release is accepted and collectively signed by the witnesses.

The system allows rotation of developer keys and witness keys, while the root of trust is an offline key. It also functions as a timestamping service, allowing for verification of update timeliness.
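The quorum rule at the core of CHAINIAC can be sketched abstractly. This toy stands in HMACs for real signatures and invents the `witness_sign`/`quorum_ok` names; the actual system uses collective public-key signing over Merkle trees, which this does not attempt to model.

```python
import hashlib, hmac

def witness_sign(key, release_hash):
    # Stand-in for a real signature (CHAINIAC uses collective signing).
    return hmac.new(key, release_hash, hashlib.sha256).digest()

def quorum_ok(release_hash, signatures, witness_keys, threshold):
    """Accept a release only if at least `threshold` known witnesses signed it."""
    valid = sum(
        1
        for wid, sig in signatures.items()
        if wid in witness_keys
        and hmac.compare_digest(sig, witness_sign(witness_keys[wid], release_hash))
    )
    return valid >= threshold

keys = {f"w{i}": bytes([i]) * 32 for i in range(5)}
h = hashlib.sha256(b"release-1.0").digest()
sigs = {w: witness_sign(k, h) for w, k in list(keys.items())[:3]}
assert quorum_ok(h, sigs, keys, threshold=3)   # three of five witnesses suffice
assert not quorum_ok(h, sigs, keys, threshold=4)
```

The point of the threshold is exactly the one made above: no single compromised key, whether a developer's or a witness's, can push a release on its own.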

CT-like Layer

Hof and Carle's proposal is to use verifiable logs, similar to those in CT, to ensure that malfeasance is detectable. They write:
Compromise of components and collusion of participants must not result in a violation of the following security goals remaining undetected. A goal of our system is to make it infeasible for the attacker to deliver targeted backdoors. For every binary, the system can produce the corresponding source code and the authorizing maintainer. Defined irregularities, such as a failure to correctly increment version numbers, also can be detected by the system.
As I understand it, this is accurate but somewhat misleading. Their system adds a transparency layer on top of APT:
The APT release file identifies, by cryptographic hash, the packages, sources, and meta data which includes dependencies. This release file, meta data, and source packages are submitted to a log server operating an append-only Merkle tree, as shown in Figure 2. The log adds a new leaf for each file.

We assume maintainers may only upload signed source packages to the archive, not binary packages. The archive submits source packages to one or more log servers. We further assume that the buildinfo files capturing the build environment are signed and are made public, e.g. by them being covered by the release file, together with other meta data.

In order to make the maintainers uploading a package accountable, a source package containing all maintainer keys is created and submitted into the archive. This constitutes the declaration by the archive, that these keys were authorized to upload for this release. The key ring is required to be append-only, where keys are marked with an expiry date instead of being removed. This allows verification of source packages submitted long ago, using the keys valid at the respective point in time.
Just as with CT, the log replies to each valid submission with a signed commitment, guaranteeing that it will shortly produce the signed root of a Merkle tree that includes the submission:
At release time, meta data and release file are submitted into the log as well. The log server produces a commitment for each submission, which constitutes its promise to include the submitted item into a future version of the tree. The log only accepts authenticated submissions from the archive. The commitment includes a timestamp, hash of the release file, log identifier and the log's signature over these. The archive should then verify that the log has produced a signed tree root that resolves the commitment. To complete the release, the archive publishes the commitments together with the updates. Clients can then proceed with the verification of the release file.

The log regularly produces signed Merkle tree roots after receiving a valid inclusion request. The signed tree root produced by the log includes the Merkle tree hash, tree size, timestamp, log identifier, and the log's signature.
The client now obtains from the distribution mirror not just the release file, but also one or more inclusion commitments showing that the release file has been submitted to one or more of the logs trusted both by the distributor and the client:
Given the release file and inclusion commitment, the client can verify by hashing that the commitment belongs to this release file and also verify the signature. The client can now query the log, demanding a current tree root and an inclusion proof for this release file. Per standard Merkle tree proofs, the inclusion proof consists of a list of hashes to recompute the received root hash. For the received tree root, a consistency proof is demanded to a previous known tree root. The consistency proof is again a list of hashes. For the two given tree roots, it shows that the log only added items between them. Clients store the signed tree root for the largest tree they have seen, to be used in any later consistency proofs. Setting aside split-view attacks, which will be discussed later, clients verifying the log inclusion of the release file will detect targeted modifications of the release.
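The inclusion proof described in the quote is the standard Certificate Transparency construction. A minimal sketch, assuming a power-of-two tree for brevity (real logs, per RFC 6962, handle arbitrary sizes and also provide the consistency proofs mentioned above):

```python
import hashlib

def h_leaf(data):
    return hashlib.sha256(b"\x00" + data).digest()   # RFC 6962 leaf prefix

def h_node(left, right):
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_root(leaves):
    """Root over the leaf hashes (power-of-two size assumed for brevity)."""
    level = [h_leaf(d) for d in leaves]
    while len(level) > 1:
        level = [h_node(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    level = [h_leaf(d) for d in leaves]
    proof = []
    while len(level) > 1:
        proof.append(level[index ^ 1])               # sibling at this level
        level = [h_node(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf_data, index, proof, root):
    node = h_leaf(leaf_data)
    for sib in proof:
        node = h_node(node, sib) if index % 2 == 0 else h_node(sib, node)
        index //= 2
    return node == root

files = [b"release", b"meta", b"src-a", b"src-b"]
root = merkle_root(files)
proof = inclusion_proof(files, 2)
assert verify_inclusion(b"src-a", 2, proof, root)        # logged file checks out
assert not verify_inclusion(b"src-evil", 2, proof, root)  # substitution is caught
```

Note that the proof is logarithmic in the tree size, which is why clients can afford to verify it on every update.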
Like CT, in addition to logs their system includes auditors, typically integrated with clients, and independent monitors regularly checking the logs for anomalies. For details, you need to read the paper, but some idea can be gained from their description of how the system detects two kinds of attack:
  • The Hidden Version Attack
  • The Split View Attack

The Hidden Version Attack

Hof and Carle describe this attack thus:
The hidden version attack attempts to hide a targeted backdoor by following correct signing and log submission procedures. It may require collusion by the archive and an authorized maintainer. The attacker prepares a targeted malicious update to a package, say version v1.2.1, and a clean update v1.3.0. The archive presents the malicious package only to the victim when it wishes to update. The clean version v1.3.0 will be presented to everybody immediately afterwards.

A non-targeted user is unlikely to ever observe the backdoored version, thereby drawing a minimal amount of attention to it. The attack however leaves an audit trail in the log, so the update itself can be detected by auditing.

A package maintainer monitoring uploads for their packages using the log would notice an additional version being published. A malicious package maintainer would however not alert the public when this happens. This could be construed as a targeted backdoor in violation of the stated security goals.
It is true that the backdoored package would be in the logs, but that in and of itself does not indicate that it is malign:
To mitigate this problem a minimum time between package updates can be introduced. This can be achieved by fixing the issuance of release files and their log submission to a static frequency, or by alerting on quick subsequent updates to one package.
There may be good reasons for releasing a new update shortly after its predecessor; for example a vulnerability might be discovered in the predecessor shortly after release.
In the hidden version attack, the attacker increases a version number in order to get the victim to update a package. The victim will install this backdoored update. The monitor detects the hidden version attack due to the irregular release file publication. There are now two cases to be considered. The backdoor may be in the binary package, or it may be in the source package.

The first case will be detected by monitors verifying the reproducible builds property. A monitor can rebuild all changed source packages on every update and check if the resulting binary matches. If not, the blame falls clearly on the archive, because the source does not correspond to the binary, which can be demonstrated by exploiting reproducible builds.

The second case requires investigation of the packages modified by the update. The source code modifications can be investigated for the changed packages, because all source code is logged. The fact that source code can be analyzed, and no analysis on binaries is required, makes the investigation of the hidden version alert simpler. The blame for this case falls on the maintainer, who can be identified by their signature on the source package. If the upload was signed by a key not in the allowed set, the blame falls on the archive for failing to authorize correctly.

If the package version numbers in the meta data are inconsistent, this constitutes a misbehavior by the submitting archive. It can easily be detected by a monitor. Using the release file the monitor can also easily ensure, by demanding inclusion proofs, that all required files have been logged.
Note that although their system's monitors detect this attack, and can correctly attribute it, they do so asynchronously. They do not prevent the victim installing the backdoored update.

The Split View Attack

The logs cannot be assumed to be above suspicion. Hof and Carle describe a log-based attack:
The most significant attack by the log or with the collusion of the log is equivocation. In a split-view or equivocation attack, a malicious log presents different versions of the Merkle tree to the victim and to everybody else. Each tree version is kept consistent in itself. The tree presented to the victim will include a leaf that is malicious in some way, such as an update with a backdoor. It might also omit a leaf in order to hide an update. This is a powerful attack within the threat model that violates the security goals and must therefore be defended. A defense against this attack requires the client to learn if they are served from the same tree as the others.
Their defense requires that there be multiple logs under independent administration, perhaps run by different Linux distributions. Each time a "committing" log generated a new tree root containing new package submissions, it would be required to submit a signed copy of the root to one or more "witness" logs under independent administration. The "committing" log would obtain commitments from the "witness" logs, and supply them to clients. Clients can then verify that the root they obtain from the "committing" log matches that obtained directly from the "witness" logs:
When the client now verifies a log entry with the committing log, it also has to verify that a tree root covering this entry was submitted into the witnessing log. Additionally, the client verifies the append-only property of the witnessing log.

The witnessing log introduces additional monitoring requirements. Next to the usual monitoring of the append-only operation, we need to check that no equivocating tree roots are included. To this end, a monitor follows all new log entries of the witnessing log that are tree roots of the committing log. The monitor verifies that they are all valid extensions of the committing log's tree history.
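The client-side cross-check is simple in principle: a root from the committing log is accepted only if the identical (tree size, root hash) pair also appears in an independent witness log. The fingerprint encoding below is invented for illustration; the paper's actual wire format differs.

```python
import hashlib

def tree_root_fingerprint(tree_size, root_hash):
    # What the committing log submits to (and clients fetch from) a witness log.
    return hashlib.sha256(tree_size.to_bytes(8, "big") + root_hash).digest()

def consistent_view(claimed_size, claimed_root, witness_entries):
    """Accept the committing log's root only if an identical (size, root)
    pair appears in an independent witness log."""
    return tree_root_fingerprint(claimed_size, claimed_root) in witness_entries

honest_root = hashlib.sha256(b"tree with 1000 leaves").digest()
witness = {tree_root_fingerprint(1000, honest_root)}
assert consistent_view(1000, honest_root, witness)
# A forked root shown only to the victim never reaches the witness log:
forked_root = hashlib.sha256(b"victim-only tree").digest()
assert not consistent_view(1000, forked_root, witness)
```

An equivocating log must therefore either show the victim's root to the witnesses, where monitors will catch the fork, or fail the client's check.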

Reproducible Builds

One weakness in Hof and Carle's actual implementation is in the connection between the signed package of source and the hashes of the result of compiling it. It is in general impossible to verify that the binaries are the result of compiling the source. In many cases, even if the source is re-compiled in the same environment the resulting binaries will not be bit-for-bit identical, and thus their hashes will differ. The differences have many causes, including timestamps, randomized file names, and so on. Of course, changes in the build environment can also introduce differences.

To enable binaries to be securely connected to their source, a Reproducible Builds effort has been under way for more than 5 years. Debian project lead Chris Lamb's 45-minute talk Think you're not a target? A tale of 3 developers ... provides an overview of the problem and the work to solve it using three example compromises:
  • Alice, a package developer who is blackmailed to distribute binaries that don't match the public source.
  • Bob, a build farm sysadmin whose personal computer has been compromised, leading to a compromised build toolchain in the build farm that inserts backdoors into the binaries.
  • Carol, a free software enthusiast who distributes binaries to friends. An evil maid attack has compromised her laptop.
As Lamb describes, eliminating all sources of irreproducibility from a package is a painstaking process because there are so many possibilities. They include non-deterministic behaviors such as iterating over hashmaps, parallel builds, timestamps, build paths, file system directory name order, and so on. The work started in 2013 with 24% of Debian packages building reproducibly; currently over 90% of Debian packages are reproducible. That is good, but 100% coverage is really necessary to provide security.
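The flavor of the fix is to pin every non-deterministic input. A sketch using Python's tarfile module, pinning timestamps (in the spirit of the reproducible-builds SOURCE_DATE_EPOCH convention) and sorting entry order, shows how two runs of a "build" then agree bit-for-bit; the `build_archive` helper is invented for illustration:

```python
import hashlib, io, tarfile, time

def build_archive(files, deterministic):
    """Pack `files` (name -> bytes) into an uncompressed tar and return its
    SHA-256. When `deterministic`, pin mtimes and owners and sort entries;
    otherwise use wall-clock mtimes and dict order, as naive builds do."""
    buf = io.BytesIO()
    names = sorted(files) if deterministic else list(files)
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name in names:
            info = tarfile.TarInfo(name)
            data = files[name]
            info.size = len(data)
            info.mtime = 0 if deterministic else time.time()
            info.uid = info.gid = 0
            tar.addfile(info, io.BytesIO(data))
    return hashlib.sha256(buf.getvalue()).hexdigest()

src = {"main.c": b"int main(void){return 0;}\n", "Makefile": b"all:\n"}
# Two deterministic "builds" produce identical bits, hence identical hashes:
assert build_archive(src, True) == build_archive(src, True)
```

A real package build has far more such inputs (compiler versions, locales, CPU feature detection, parallelism), which is why the Debian effort records the build environment in .buildinfo files.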

Bootstrappable Compilers

One of the most famous of the ACM's annual Turing Award lectures was Ken Thompson's 1984 Reflections On Trusting Trust (also here). In 2006, Bruce Schneier summarized its message thus:
Way back in 1974, Paul Karger and Roger Schell discovered a devastating attack against computer systems. Ken Thompson described it in his classic 1984 speech, "Reflections on Trusting Trust." Basically, an attacker changes a compiler binary to produce malicious versions of some programs, INCLUDING ITSELF. Once this is done, the attack perpetuates, essentially undetectably. Thompson demonstrated the attack in a devastating way: he subverted a compiler of an experimental victim, allowing Thompson to log in as root without using a password. The victim never noticed the attack, even when they disassembled the binaries -- the compiler rigged the disassembler, too.
Schneier was discussing David A. Wheeler's Countering Trusting Trust through Diverse Double-Compiling. Wheeler's subsequent work led to his 2009 Ph.D. thesis. To oversimplify, his technique involves the suspect compiler compiling its source twice, and comparing the output to that from a "trusted" compiler compiling the same source twice. He writes:
DDC uses a second “trusted” compiler cT, which is trusted in the sense that we have a justified confidence that cT does not have triggers or payloads
There are two issues here. The first is an assumption that the suspect compiler's build is reproducible. The second is the issue of where the "justified confidence" comes from. This is the motivation for the Bootstrappable Builds project, whose goal is to create a process for building a complete toolchain starting from a "seed" binary that is simple enough to be certified "by inspection". One sub-project is Stage0:
Stage0 starts with just a 280-byte Hex monitor and builds up the infrastructure required to start some serious software development. With zero external dependencies, with the most painful work already done and real languages such as assembly, forth and garbage collected lisp already implemented
The current 0.2.0 release of Stage0:
marks the first C compiler hand written in Assembly with structs, unions, inline assembly and the ability to self-host its C version, which is also self-hosting
There is clearly a long way still to go to a bootstrapped full toolchain.
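Wheeler's DDC comparison itself is simple once compilation is abstracted. The toy model below is invented for illustration: a compiler "binary" is a pair recording whether it carries a self-propagating infection, and compiling the compiler's own source propagates that infection, as in Thompson's attack.

```python
def run_compiler(compiler, source):
    """Toy model: a compiler binary is ('cc', infected); compiling the
    compiler's own source propagates the infection (Thompson's attack)."""
    _, infected = compiler
    return ("cc", infected) if source == "cc.c" else ("bin:" + source, infected)

def ddc_check(c_suspect, c_trusted, source):
    """Diverse double-compiling, after Wheeler (abstractly)."""
    stage1 = run_compiler(c_trusted, source)   # trusted compiler builds suspect source
    stage2 = run_compiler(stage1, source)      # that result rebuilds itself
    self_built = run_compiler(c_suspect, source)
    return stage2 == self_built                # equal => no self-propagating backdoor

clean, trojaned = ("cc", False), ("cc", True)
assert ddc_check(clean, clean, "cc.c")         # clean compiler passes
assert not ddc_check(trojaned, clean, "cc.c")  # trusting-trust backdoor is exposed
```

The comparison only works if, as noted above, the builds are reproducible and the trusted compiler really is trustworthy; hence the bootstrappable-builds effort to shrink the trusted seed to something auditable by inspection.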

A More Secure Software Supply Chain

A software supply chain based on APT enhanced with Hof and Carle's transparency layer, distributing packages reproducibly built with bootstrapped compilers, would be much more difficult to attack than current technology. Users of the software could have much higher confidence that the binaries they installed had been built from the corresponding source, and that no attacker had introduced functionality not evident in the source.

These checks would take place during software installation or update. Users would still need to verify that the software had not been modified after installation, perhaps using a tripwire-like mechanism, but this mechanism would have a trustworthy source of the hashes it needs to do its job.

Remaining Software Problems

Despite all these enhancements, the event-stream attack would still have succeeded. The attackers targeted a widely-used, fairly old package that was still being maintained by the original author, a volunteer. They offered to take over what had become a burdensome task, and the offer was accepted. Now, despite the fact that the attacker was just an e-mail address, they were the official maintainer of the package and could authorize changes. Their changes, being authorized by the official package maintainer, would pass unimpeded through even the enhanced supply chain.

First, it is important to observe that the goal of Hof and Carle's system is to detect targeted attacks, those delivered to a (typically small) subset of user systems. The event-stream attack was not targeted; it was delivered to all systems updating the package irrespective of whether they contained the wallet to be compromised. That their system is designed only to detect targeted attacks seems to me to be a significant weakness. It is very easy to design an attack, like the event-stream one, that is broadcast to all systems but is harmless on all but the targets.

Second, Hof and Carle's system operates asynchronously, so is intended to detect rather than prevent victim compromise. Of course, once the attack was detected it could be unambiguously attributed. But:
  • The attack would already have succeeded in purloining cryptocurrency from the target wallets. This seems to me to be a second weakness; in many cases the malign package would only need to be resident on the victim for a short time to exfiltrate critical data, or install further malware providing persistence.
  • Strictly speaking, the attribution would be to a private key. More realistically, it would be to a key and an e-mail address. In the case of an attack, linking these to a human malefactor would likely be difficult, leaving the perpetrators free to mount further attacks. Even if the maintainer had not, as in the event-stream attack, been replaced via social engineering, it is possible that their e-mail and private key could have been compromised.
The event-stream attack can be thought of as the organization-level analog of a Sybil attack on a peer-to-peer system. Creating an e-mail identity is almost free. The defense against Sybil attacks is to make maintaining and using an identity in the system expensive. As with proof-of-work in Bitcoin, the idea is that the white hats will spend more (compute more useless hashes) than the black hats. Even this has limits. Eric Budish's analysis shows that, if the potential gain from an attack on a blockchain is to be outweighed by its cost, the value of transactions in a block must be less than the block reward.

Would a similar defense against "Sybil" attacks on the software supply chain be possible? There are a number of issues:
  • The potential gains from such attacks are large, both because they can compromise very large numbers of systems quickly (event-stream had 2M downloads), and because the banking credentials, cryptocurrency wallets, and other data these systems contain can quickly be converted into large amounts of cash.
  • Thus the penalty for mounting an attack would have to be an even larger amount of cash. Package maintainers would need to be bonded or insured for large sums, which implies that distributions and package libraries would need organizational structures capable of enforcing these requirements.
  • Bonding and insurance would be expensive for package maintainers, who are mostly unpaid volunteers. There would have to be a way of paying them for their efforts, at least enough to cover the costs of bonding and insurance.
  • Thus users of the packages would need to pay for their use, which means the packages could neither be free, nor open source.
The FOSS (Free Open Source Software) movement will need to find other ways to combat Sybil attacks, which will be hard if the reward for a successful attack greatly exceeds the cost of mounting it. How to adequately reward maintainers for their essential but under-appreciated efforts is a fundamental problem for FOSS.

Hof and Carle's system shares one more difficulty with CT. Both systems are layered on top of an existing infrastructure, respectively APT and TLS with certificate authorities. In both cases there is a bootstrap problem, an assumption that as the system starts up there is not an attack already underway. In CT's case the communications between the CAs, Web sites, logs, auditors and monitors all use the very TLS infrastructure that is being secured (see here and here). This is also the case for Hof and Carle, plus they have to assume the lack of malware in the initial state of the packages.

Hardware Supply Chain Problems

All this effort to secure the software supply chain will be for naught if the hardware it runs on is compromised:
  • Much of what we think of as "hardware" contains software to which what we think of as "software" has no access or visibility. Examples include Intel's Management Engine, the baseband processor in mobile devices, complex I/O devices such as NICs and GPUs. Even if this "firmware" is visible to the system CPU, it is likely supplied as a "binary blob" whose source code is inaccessible.
  • Attacks on the hardware supply chain have been in the news recently, with the firestorm of publicity sparked by Bloomberg's probably erroneous reports of a Chinese attack on SuperMicro motherboards that added "rice-grain"-sized malign chips.
The details will have to wait for a future post.


Anonymous said...

Nit: in the last bullet point, I think you mean "Bloomberg", not "Motherboard".

David. said...

Thanks for correcting my fused neurons, Bryan!

David. said...

I really should have pointed out that this whole post is about software that is installed on your device. These days, much of the software that runs on your device is not installed, it is delivered via ad networks and runs inside your browser. As blissex wrote in this comment, we are living:

"in an age in which every browser gifts a free-to-use, unlimited-usage, fast VM to every visited web site, and these VMs can boot and run quite responsive 3D games or Linux distributions"

Ad blockers, essential equipment in this age, merely reduce the incidence of malware delivered via ad networks. Brannon Dorsey's fascinating experiments in malvertising are described by Cory Doctorow thus:

"Anyone can make an account, create an ad with god-knows-what Javascript in it, then pay to have the network serve that ad up to thousands of browsers. ... Within about three hours, his code (experimental, not malicious, apart from surreptitiously chewing up processing resources) was running on 117,852 web browsers, on 30,234 unique IP addresses. Adtech, it turns out, is a superb vector for injecting malware around the planet.

Some other fun details: Dorsey found that when people loaded his ad, they left the tab open an average of 15 minutes. That gave him huge amounts of compute time -- 327 full days, in fact, for about $15 in ad purchase."

David. said...

I regret not citing John Leyden's Open-source software supply chain vulns have doubled in 12 months to illustrate the scope of the problem:

"Miscreants have even started to inject (or mainline) vulnerabilities directly into open source projects, according to Sonatype, which cited 11 recent examples of this type of malfeasance in its study.

El Reg has reported on several such incidents including a code hack on open-source utility eslint-scope back in July."


"organisations are still downloading vulnerable versions of the Apache Struts framework at much the same rate as before the Equifax data breach, at around 80,000 downloads per month.

Downloads of buggy versions of another popular web application framework called Spring were also little changed since a September 2017 vulnerability, Sonatype added. The 85,000 average in September 2017 has declined only 15 per cent to 72,000 over the last 12 months."

David. said...

Catalin Cimpanu's Users report losing Bitcoin in clever hack of Electrum wallets describes a software supply chain attack that started around 21st December and netted around $750K "worth" of BTC.

David. said...

Popular WordPress plugin hacked by angry former employee is like the event-stream hack in that no amount of transparency would have prevented it. The disgruntled perpetrator apparently had valid credentials for the official source of the software:

"The plugin in question is WPML (or WP MultiLingual), the most popular WordPress plugin for translating and serving WordPress sites in multiple languages.

According to its website, WPML has over 600,000 paying customers and is one of the very few WordPress plugins that is so reputable that it doesn't need to advertise itself with a free version on the official WordPress.org plugins repository."

David. said...

The fourth annual report for the National Security Adviser from the Huawei Cyber Security Evaluation Centre Oversight Board in the UK is interesting. The Centre has access to the source code for Huawei products, and is working with Huawei to make the builds reproducible:

"3.15 HCSEC have worked with Huawei R&D to try to correct the deficiencies in the underlying build and compilation process for these four products. This has taken significant effort from all sides and has resulted in a single product that can be built repeatedly from source to the General Availability (GA) version as distributed. This particular build has yet to be deployed by any UK operator, but we expect deployment by UK operators in the future, as part of their normal network release cycle. The remaining three products from the pilot are expected to be made commercially available in 2018H1, with each having reproducible binaries."

David. said...

Huawei says fixing "the deficiencies in the underlying build and compilation process" in its carrier products will take five years.

David. said...

In Cyber-Mercenary Groups Shouldn't be Trusted in Your Browser or Anywhere Else, the EFF's Cooper Quintin describes the latest example showing why Certificate Authorities can't be trusted:

"DarkMatter, the notorious cyber-mercenary firm based in the United Arab Emirates, is seeking to become approved as a top-level certificate authority in Mozilla’s root certificate program. Giving such a trusted position to this company would be a very bad idea. DarkMatter has a business interest in subverting encryption, and would be able to potentially decrypt any HTTPS traffic they intercepted. One of the things HTTPS is good at is protecting your private communications from snooping governments—and when governments want to snoop, they regularly hire DarkMatter to do their dirty work.
DarkMatter was already given an "intermediate" certificate by another company, called QuoVadis, now owned by DigiCert. That's bad enough, but the "intermediate" authority at least comes with ostensible oversight by DigiCert."

Hat tip to Cory Doctorow.

David. said...

Gareth Corfield's Just Android things: 150m phones, gadgets installed 'adware-ridden' mobe simulator games reports on a very successful software supply chain attack:

"Android adware found its way into as many as 150 million devices – after it was stashed inside a large number of those bizarre viral mundane job simulation games, we're told.
Although researchers believed that the titles were legitimate, they said they thought the devs were “scammed” into using a “malicious SDK, unaware of its content, leading to the fact that this campaign was not targeting a specific country or developed by the same developer.”

David. said...

Kim Zetter's Hackers Hijacked ASUS Software Updates to Install Backdoors on Thousands of Computers is an excellent example of a software supply chain attack:

"Researchers at cybersecurity firm Kaspersky Lab say that ASUS, one of the world’s largest computer makers, was used to unwittingly install a malicious backdoor on thousands of its customers’ computers last year after attackers compromised a server for the company’s live software update tool. The malicious file was signed with legitimate ASUS digital certificates to make it appear to be an authentic software update from the company, Kaspersky Lab says."

David. said...

Sean Gallagher's UK cyber security officials report Huawei’s security practices are a mess reports on the latest report from the HCSEC Oversight Board. They still can't do reproducible builds:

"HCSEC reported that the software build process used by Huawei results in inconsistencies between software images. In other words, products ship with software with widely varying fingerprints, so it’s impossible to determine whether the code is the same based on checksums."

Which isn't a surprise; Huawei already said it'd take another five years. But I'd be more concerned that:

"One major problem cited by the report is that a large portion of Huawei’s network gear still relies on version 5.5 of Wind River’s VxWorks real-time operating system (RTOS), which has reached its “end of life” and will soon no longer be supported. Huawei has bought a premium long-term support license from VxWorks, but that support runs out in 2020."

And Huawei is rolling its own RTOS based on Linux. What could possibly go wrong?

David. said...

The latest software supply chain attack victim is bootstrap-sass via RubyGems, with about 28M downloads.

David. said...

It turns out that ShadowHammer Targets Multiple Companies, ASUS Just One of Them:

"ASUS was not the only company targeted by supply-chain attacks during the ShadowHammer hacking operation as discovered by Kaspersky, with at least six other organizations having been infiltrated by the attackers.

As further found out by Kaspersky's security researchers, ASUS' supply chain was successfully compromised by trojanizing one of the company's notebook software updaters named ASUS Live Updater which eventually was downloaded and installed on the computers of tens of thousands of customers according to experts' estimations."

David. said...

Who Owns Huawei? by Christopher Balding and Donald C. Clarke concludes that:

"Huawei calls itself “employee-owned,” but this claim is questionable, and the corporate structure described on its website is misleading."

David. said...

David A. Wheeler reports on another not-very-successful software supply chain attack:

"A malicious backdoor has been found in the popular open source software library bootstrap-sass. This was done by someone who created an unauthorized updated version of the software on the RubyGems software hosting site. The good news is that it was quickly detected (within the day) and updated, and that limited the impact of this subversion. The backdoored version ( was only downloaded 1,477 times. For comparison, as of April 2019 the previous version in that branch ( was downloaded 1.2 million times, and the following version (which duplicated was downloaded 1,700 times (that’s more than the subverted version!). So it is likely that almost all subverted systems have already been fixed."

Wheeler has three lessons from this:

1. Maintainers need 2FA.
2. Don't update your dependencies in the same day they're released.
3. Reproducible builds!
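Wheeler's second lesson can be made mechanical: before adopting a new dependency version, check that it has survived a quarantine window, giving the community time to spot a malicious upload. A minimal sketch; the seven-day window is an assumed policy, not something Wheeler prescribes, and a real tool would fetch publication timestamps from the registry's API.

```python
# Sketch of a "don't update same-day" policy for dependency versions.
from datetime import datetime, timedelta, timezone
from typing import Optional

# Assumed quarantine period for illustration only.
QUARANTINE = timedelta(days=7)


def safe_to_adopt(published_at: datetime, now: Optional[datetime] = None) -> bool:
    """A release is eligible only after it has survived the quarantine
    window. The bootstrap-sass backdoor was caught within a day, so
    even a short delay would have protected most downstream users."""
    now = now or datetime.now(timezone.utc)
    return now - published_at >= QUARANTINE
```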

David. said...

Andy Greenberg's A mysterious hacker gang is on a supply-chain hacking spree ties various software supply chain attacks together and attributes them:

"Over the past three years, supply-chain attacks that exploited the software distribution channels of at least six different companies have now all been tied to a single group of likely Chinese-speaking hackers. The group is known as Barium, or sometimes ShadowHammer, ShadowPad, or Wicked Panda, depending on which security firm you ask. More than perhaps any other known hacker team, Barium appears to use supply-chain attacks as its core tool. Its attacks all follow a similar pattern: seed out infections to a massive collection of victims, then sort through them to find espionage targets."

David. said...

Someone Is Spamming and Breaking a Core Component of PGP’s Ecosystem by Lorenzo Franceschi-Bicchierai reports on an attack on two of the core PGP developers, Robert J. Hansen and Daniel Kahn Gillmor:

"Last week, contributors to the PGP protocol GnuPG noticed that someone was “poisoning” or “flooding” their certificates. In this case, poisoning refers to an attack where someone spams a certificate with a large number of signatures or certifications. This makes it impossible for the PGP software that people use to verify its authenticity, which can make the software unusable or break. In practice, according to one of the GnuPG developers targeted by this attack, the hackers could make it impossible for people using Linux to download updates, which are verified via PGP."

The problem lies in the SKS keyserver:

"the SKS software was written in an obscure language by a PhD student for his thesis. And because of that, according to Hansen, “there is literally no one in the keyserver community who feels qualified to do a serious overhaul on the codebase.”

In other words, these attacks are here to stay."

David. said...

Dan Goodin's The year-long rash of supply chain attacks against open source is getting worse is a useful overview of the recent incidents pointing to the need for verifiable logs and reproducible builds. And, of course, for requiring developers to use multi-factor authentication.

David. said...

Catalin Cimpanu's Hacking 20 high-profile dev accounts could compromise half of the npm ecosystem is based on Small World with High Risks: A Study of Security Threats in the npm Ecosystem by Markus Zimmermann et al:

"Their goal was to get an idea of how hacking one or more npm maintainer accounts, or how vulnerabilities in one or more packages, reverberated across the npm ecosystem; along with the critical mass needed to cause security incidents inside tens of thousands of npm projects at a time.
The normal npm JavaScript package has an abnormally large number of dependencies -- with a package loading 79 third-party packages from 39 different maintainers, on average.

This number is lower for popular packages, which only rely on code from 20 other maintainers, on average, but the research team found that some popular npm packages (600) relied on code written by more than 100 maintainers.
"391 highly influential maintainers affect more than 10,000 packages, making them prime targets for attacks," the research team said. "If an attacker manages to compromise the account of any of the 391 most influential maintainers, the community will experience a serious security incident."

Furthermore, in a worst-case scenario where multiple maintainers collude, or a hacker gains access to a large number of accounts, the Darmstadt team said that it only takes access to 20 popular npm maintainer accounts to deploy malicious code impacting more than half of the npm ecosystem."
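The measurement Zimmermann et al. describe amounts to reverse-reachability in the dependency graph: compromising a maintainer compromises every package that transitively depends on anything they publish. A toy sketch of that computation; the package and maintainer names are invented for illustration, not drawn from npm.

```python
# Sketch: how many packages does compromising one maintainer reach?
from collections import defaultdict

# package -> its direct dependencies (a hypothetical miniature registry)
deps = {
    "app": ["web", "utils"],
    "web": ["utils", "stream"],
    "utils": [],
    "stream": [],
}
# maintainer -> packages they control (also hypothetical)
maintainers = {"alice": ["utils"], "bob": ["stream"]}


def dependents(package):
    """All packages that transitively depend on `package`."""
    reverse = defaultdict(set)
    for pkg, ds in deps.items():
        for d in ds:
            reverse[d].add(pkg)
    reached, frontier = set(), {package}
    while frontier:
        nxt = set()
        for p in frontier:
            for parent in reverse[p] - reached:
                reached.add(parent)
                nxt.add(parent)
        frontier = nxt
    return reached


def maintainer_reach(name):
    """Packages affected if `name`'s account is compromised:
    their own packages plus everything downstream of them."""
    reach = set()
    for pkg in maintainers[name]:
        reach |= {pkg} | dependents(pkg)
    return reach
```

Run over the real npm graph, this is the computation behind the "391 maintainers affect more than 10,000 packages" figure.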

David. said...

Five years after the Equation Group HDD hacks, firmware security still sucks by Catalin Cimpanu illustrates how far disk drive firmware security is ahead of the rest of the device firmware world:

"In 2015, security researchers from Kaspersky discovered a novel type of malware that nobody else had seen until then.

The malware, known as NLS_933.dll, had the ability to rewrite HDD firmware for a dozen HDD brands to plant persistent backdoors. Kaspersky said the malware was used in attacks against systems all over the world.

Kaspersky researchers claimed the malware was developed by a hacker group known as the Equation Group, a codename that was later associated with the US National Security Agency (NSA).

Knowing that the NSA was spying on their customers led many HDD and SSD vendors to improve the security of their firmware, Eclypsium said.

However, five years since the Equation Group's HDD implants were found in the wild and introduced the hardware industry to the power of firmware hacking, Eclypsium says vendors have only partially addressed this problem.

"After the disclosure of the Equation Group's drive implants, many HDD and SSD vendors made changes to ensure their components would only accept valid firmware. However, many of the other peripheral components have yet to follow suit," researchers said."

David. said...

Marc Ohm et al analyze supply chain attacks via open source packages in three repositories in Backstabber’s Knife Collection: A Review of Open Source Software Supply Chain Attacks:

"This paper presents a dataset of 174 malicious software packages that were used in real-world attacks on open source software supply chains, and which were distributed via the popular package repositories npm, PyPI, and RubyGems. Those packages, dating from November 2015 to November 2019, were manually collected and analyzed. The paper also presents two general attack trees to provide a structured overview about techniques to inject malicious code into the dependency tree of downstream users, and to execute such code at different times and under different conditions."

David. said...

Bruce Schneier's Survey of Supply Chain Attacks starts:

"The Atlantic Council has released a report that looks at the history of computer supply chain attacks."

The Atlantic Council also has a summary of the report entitled Breaking trust: Shades of crisis across an insecure software supply chain:

"Software supply chain security remains an under-appreciated domain of national security policymaking. Working to improve the security of software supporting private sector enterprise as well as sensitive Defense and Intelligence organizations requires a more coherent policy response together with industry and open source communities. This report profiles 115 attacks and disclosures against the software supply chain from the past decade to highlight the need for action and presents recommendations to both raise the cost of these attacks and limit their harm."

David. said...

Via my friend Jim Gettys, we learn of a major milestone in the development of a truly reproducible build environment. Last June Jan Nieuwenhuizen posted Guix Further Reduces Bootstrap Seed to 25%. The TL;DR is:

"GNU Mes is closely related to the Bootstrappable Builds project. Mes aims to create an entirely source-based bootstrapping path for the Guix System and other interested GNU/Linux distributions. The goal is to start from a minimal, easily inspectable binary (which should be readable as source) and bootstrap into something close to R6RS Scheme.

Currently, Mes consists of a mutual self-hosting scheme interpreter and C compiler. It also implements a C library. Mes, the scheme interpreter, is written in about 5,000 lines of code of simple C. MesCC, the C compiler, is written in scheme. Together, Mes and MesCC can compile a lightly patched TinyCC that is self-hosting. Using this TinyCC and the Mes C library, it is possible to bootstrap the entire Guix System for i686-linux and x86_64-linux."

The binary they plan to start from is:

"Our next target will be a third reduction by ~50%; the Full-Source bootstrap will replace the MesCC-Tools and GNU Mes binaries by Stage0 and M2-Planet.

The Stage0 project by Jeremiah Orians starts everything from ~512 bytes; virtually nothing. Have a look at this incredible project if you haven’t already done so."

In mid November Nieuwenhuizen tweeted:

"We just compiled the first working program using a Reduced Binary Seed bootstrap'ped*) TinyCC for ARM"

And on December 21 he tweeted:

"The Reduced Binary Seed bootstrap is coming to ARM: Tiny C builds on @GuixHPC wip-arm-bootstrap branch"

Starting from a working TinyCC, you can build the current compiler chain.

David. said...

Michael Larabel reports that openSUSE Factory Achieves Bit-By-Bit Reproducible Builds:

"Since last month openSUSE Factory has been producing bit-by-bit reproducible builds sans the likes of embedded signatures. OpenSUSE Tumbleweed packages for that rolling-release distribution are being verified for bit-by-bit reproducible builds.

SUSE/openSUSE is still verifying all packages are yielding reproducible builds but so far it's looking like 95% or more of packages are working out."

More details are in Jan Zerebecki's openSUSE Factory enabled bit-by-bit reproducible builds on the openSUSE blog.