The Atlantic Council has released a report that looks at the history of computer supply chain attacks. The Council has also published a summary of the report, entitled Breaking trust: Shades of crisis across an insecure software supply chain:
Software supply chain security remains an under-appreciated domain of national security policymaking. Working to improve the security of software supporting private sector enterprise as well as sensitive Defense and Intelligence organizations requires a more coherent policy response together with industry and open source communities. This report profiles 115 attacks and disclosures against the software supply chain from the past decade to highlight the need for action and presents recommendations to both raise the cost of these attacks and limit their harm.

Below the fold, some commentary on the report and more recent attacks.
The report divides its survey into five "trends":
- Deep Impact: State actors target the software supply chain and do so to great effect.
- Abusing Trust: Code signing is a deeply impactful tactic for compromising software supply chains as it can be used to undermine other related security schemes, including program attestation.
- Breaking the Chain: Hijacked updates are a common and impactful means of compromising supply chains and they recurred throughout the decade despite being a well-recognized attack vector.
- Poisoning the Well: Attacks on OSS were popular, but unnervingly simple in many cases.
- Downloading Trouble: App stores represent a poorly addressed source of risk to mobile device users as they remain popular despite years of evidence of security lapses.
Deep Impact
The report's examples of state-sponsored supply chain attacks include CCleaner, NotPetya, Kingslayer, SimDisk, and ShadowPad. They write:

States have used software supply chain attacks to great effect. Hijacked updates have routinely delivered the most crippling state-backed attacks, thanks in part to a continued failure to secure the code-signing process. And while concerns about the real-world ramifications of attacks on firmware, IoT devices, and industrial systems are warranted, these are far from novel threats. Stuxnet and other incidents have had physical impacts as early as 2012. Several of these incidents, like NotPetya and the Equifax data breach in 2017, impacted millions of users, showcasing the immense potential scale of software supply chain attacks and their strategic utility for states. For Russia, these attacks have meant access to foreign critical infrastructure, while for China, they have facilitated a massive and multifaceted espionage effort. This section discusses trends in known state software supply chain attacks supported by publicly reported attribution, focused on four actors: Russia, China, Iran, and North Korea. The data in this report also include incidents linked to Egypt, India, the United States, and Vietnam, for a total of 27 distinct attacks.

The US has mounted some of the most significant attacks of this kind:
Stuxnet — widely attributed to the United States and Israel — was one of the earliest examples to use stolen certificates and potentially one of a handful to leverage a hardware vector to compromise. Two campaigns were attributed to the US-attributed Equation Group (PassFreely and EquationDrug & GrayFish). In 2013, PassFreely enabled bypass of the authentication process of Oracle Database servers and access to SWIFT (Society for Worldwide Interbank Financial Telecommunication) money transfer authentication. In 2015, the EquationDrug & GrayFish programs used versions of nls_933w.dll to communicate with a C&C server to flash malicious copies of firmware onto a device’s hard disk drive, infecting 500 devices. Both attacks involved a supply chain service provider as a distribution vector.

Taiwan has long been a priority target for China's offensive cyber-warriors. Andy Greenberg's Chinese hackers have pillaged Taiwan’s semiconductor industry reports on one of their successes:
At the Black Hat security conference today, researchers from the Taiwanese cybersecurity firm CyCraft plan to present new details of a hacking campaign that compromised at least seven Taiwanese chip firms over the past two years. The series of deep intrusions—called Operation Skeleton Key due to the attackers' use of a "skeleton key injector" technique—appeared aimed at stealing as much intellectual property as possible, including source code, software development kits, and chip designs. And while CyCraft has previously given this group of hackers the name Chimera, the company's new findings include evidence that ties them to mainland China and loosely links them to the notorious Chinese state-sponsored hacker group Winnti, also sometimes known as Barium, or Axiom.

These weren't supply chain attacks, but Kim Zetter reports on a wildly successful one that was in Hackers Hijacked ASUS Software Updates to Install Backdoors on Thousands of Computers:
Researchers at cybersecurity firm Kaspersky Lab say that ASUS, one of the world’s largest computer makers, was used to unwittingly install a malicious backdoor on thousands of its customers’ computers last year after attackers compromised a server for the company’s live software update tool. The malicious file was signed with legitimate ASUS digital certificates to make it appear to be an authentic software update from the company, Kaspersky Lab says.

One reason for believing that this is a state-sponsored attack is that it was precisely targeted:
ASUS, a multi-billion dollar computer hardware company based in Taiwan that manufactures desktop computers, laptops, mobile phones, smart home systems, and other electronics, was pushing the backdoor to customers for at least five months last year before it was discovered, according to new research from the Moscow-based security firm.
The researchers estimate half a million Windows machines received the malicious backdoor through the ASUS update server, although the attackers appear to have been targeting only about 600 of those systems. The malware searched for targeted systems through their unique MAC addresses. Once on a system, if it found one of these targeted addresses, the malware reached out to a command-and-control server the attackers operated, which then installed additional malware on those machines.
Kaspersky Lab said it uncovered the attack in January after adding a new supply-chain detection technology to its scanning tool to catch anomalous code fragments hidden in legitimate code or catch code that is hijacking normal operations on a machine. The company plans to release a full technical paper and presentation about the ASUS attack, which it has dubbed ShadowHammer, next month at its Security Analyst Summit in Singapore. In the meantime, Kaspersky has published some of the technical details on its website.
The goal of the attack was to surgically target an unknown pool of users, which were identified by their network adapters’ MAC addresses. To achieve this, the attackers had hardcoded a list of MAC addresses in the trojanized samples and this list was used to identify the actual intended targets of this massive operation. We were able to extract more than 600 unique MAC addresses from over 200 samples used in this attack. Of course, there might be other samples out there with different MAC addresses in their list.

Kaspersky attributes this attack to the same state-sponsored group as Operation Skeleton Key:
Although precise attribution is not available at the moment, certain evidence we have collected allows us to link this attack to the ShadowPad incident from 2017. The actor behind the ShadowPad incident has been publicly identified by Microsoft in court documents as BARIUM. BARIUM is an APT actor known to be using the Winnti backdoor. Recently, our colleagues from ESET wrote about another supply chain attack in which BARIUM was also involved, that we believe is connected to this case as well.

The 2019 ESET report starts:
This is not the first time the gaming industry has been targeted by attackers who compromise game developers, insert backdoors into a game’s build environment, and then have their malware distributed as legitimate software. In April 2013, Kaspersky Lab reported that a popular game was altered to include a backdoor in 2011. That attack was attributed to perpetrators Kaspersky called the Winnti Group.
Yet again, new supply-chain attacks recently caught the attention of ESET Researchers. This time, two games and one gaming platform application were compromised to include a backdoor. Given that these attacks were mostly targeted against Asia and the gaming industry, it shouldn’t be surprising they are the work of the group described in Kaspersky’s “Winnti – More than just a game”.
Abusing Trust
This section discusses code signing:

Code signing is crucial to analyzing software supply chain attacks because, used correctly, it ensures the integrity of code and the identity of its author. Software supply chain attacks rely on the attacker’s ability to edit code, to pass it off as safe, and to abuse trust in the code’s author in a software environment full of third-party dependencies (so many that vendors are usually unaware of the full extent of their dependencies). If code signing were unassailable, there would be far fewer software supply chain attacks as applications could assert code was unmodified and coming straight from the authentic source without concern.

Code signing is indeed important, but it is far from unassailable. The first problem is that it is often incompletely implemented. For example, last February, Shaun Nichols' What do a Lenovo touch pad, an HP camera and Dell Wi-Fi have in common? They'll swallow any old firmware, legit or saddled with malware reported that:
Eclypsium said on Monday that, despite years of warnings from experts – and examples of rare in-the-wild attacks, such as the NSA's hard drive implant – devices continue to accept unsigned firmware. The team highlighted the TouchPad and TrackPoint components in Lenovo laptops, HP Wide Vision FHD computer cameras, and the Wi-Fi adapter in Dell XPS notebooks.

If some components of your laptop can be compromised, skilled intruders can probably compromise the whole machine. And, since Microsoft and the component manufacturers can't agree on who is responsible for implementing code signing for component firmware, it isn't going to get implemented.
...
Perhaps most frustrating is that these sorts of shortcomings have been known of for years, and have yet to be cleaned up. The Eclypsium team contacted Qualcomm and Microsoft regarding the Dell adapter – Qualcomm makes the chipset, Microsoft's operating system provides signature checks – and encountered a certain amount of buck-passing.
"Qualcomm responded that their chipset is subordinate to the processor, and that the software running on the CPU is expected to take responsibility for validating firmware," Eclypsium reports.
"They [Qualcomm] stated that there was no plan to add signature verification for these chips. However, Microsoft responded that it was up to the device vendor to verify firmware that is loaded into the device."
Next, the report's high-level description of the code signing process is correct as far as it goes:
Code signing is an application of public key cryptography and the trusted certificate system to ensure code integrity and source identity. When code is sent to an external source, it usually comes with a signature and a certificate. The signature is a cryptographic hash of the code itself that is encrypted with a private key. The consumer uses the public key associated with the code issuer to decrypt the hash and then hashes their copy of the code. If their hash matches the decrypted hash, they know their copy of the code is the same as the code that was hashed and encrypted by the author. The certificate is a combination of information that generally includes the name of the certificate authority (CA), the name of the software issuer, a creation and expiration date, and the public key associated with their private key. It provides the CA’s assurance to the consumer that the issuer of the code has the private key that is cryptographically connected to the listed public key. The software issuer alone possesses the private key.

Notice the assumptions in this explanation (a sketch of the mechanics follows the list):
- The CA is trustworthy.
- "The software issuer alone possesses the private key"
If you look into the security settings for your browser you will find a long list of trusted certificate authorities. ... Trust in "secure" connections to websites depends on trusting every single one of these organizations. ... The list includes, among many mysterious organizations I have no reason to trust:
- Comodo and RSA, both compromised in 2011, along with DigiNotar and StartSSL, which were in the list then.
- Experian, which today I find has been supplying the data on which an identity theft service is based.
- The governments of, among others, China, Japan, Taiwan and the Netherlands.

And that's not all. Many of these organizations have decided that because I trust them, I also trust organizations which they declare are trustworthy. As of 2010, the EFF found more than 650 organizations that Internet Explorer and Firefox trusted, including the US Dept. of Homeland Security. The idea that none of these organizations could ever be convinced to act against my interests is ludicrous.

CAs are so trustworthy that their certificates are routinely for sale to malefactors, as Dan Goodin wrote in 2018's One-stop counterfeit certificate shops for all your malware-signing needs:

Barysevich identified four such sellers of counterfeit certificates since 2011. Two of them remain in business today. The sellers offered a variety of options. In 2014, one provider calling himself C@T advertised certificates that used a Microsoft technology known as Authenticode for signing executable files and programming scripts that can install software. C@T offered code-signing certificates for macOS apps as well. ... "In his advertisement, C@T explained that the certificates are registered under legitimate corporations and issued by Comodo, Thawte, and Symantec—the largest and most respected issuers,"

Abuse of the trust users place in CAs is so routine that in 2012 Google started work on an approach that requires participation by certificate owners, specified in RFC 6962, and called Certificate Transparency (CT):
Google's Certificate Transparency project fixes several structural flaws in the SSL certificate system, which is the main cryptographic system that underlies all HTTPS connections. These flaws weaken the reliability and effectiveness of encrypted Internet connections and can compromise critical TLS/SSL mechanisms, including domain validation, end-to-end encryption, and the chains of trust set up by certificate authorities. If left unchecked, these flaws can facilitate a wide range of security attacks, such as website spoofing, server impersonation, and man-in-the-middle attacks.

I wrote about the details in Certificate Transparency:
Certificate Transparency helps eliminate these flaws by providing an open framework for monitoring and auditing SSL certificates in nearly real time. Specifically, Certificate Transparency makes it possible to detect SSL certificates that have been mistakenly issued by a certificate authority or maliciously acquired from an otherwise unimpeachable certificate authority. It also makes it possible to identify certificate authorities that have gone rogue and are maliciously issuing certificates.
The basic idea is to accompany the certificate with a hash of the certificate signed by a trusted third party, attesting that the certificate holder told the third party that the certificate with that hash was current. Thus in order to spoof a service, an attacker would have to both obtain a fraudulent certificate from a CA, and somehow persuade the third party to sign a statement that the service had told them the fraudulent certificate was current.

CT is about improving the trustworthiness of the certificates that secure the Web, but similar considerations apply to code-signing.
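A sketch of that two-signature check, with Ed25519 keys standing in for the CA and the log (CT's real Merkle-tree machinery is omitted; this shows only the countersignature idea):

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

ca_key, log_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
certificate = b"CN=example.com, pubkey=..., expiry=..."

ca_signature = ca_key.sign(certificate)                             # issued by the CA
log_signature = log_key.sign(hashlib.sha256(certificate).digest())  # countersigned by the log

def accept(cert: bytes, ca_sig: bytes, log_sig: bytes) -> bool:
    """Accept a certificate only if the CA signed it AND a log attests to it."""
    try:
        ca_key.public_key().verify(ca_sig, cert)
        log_key.public_key().verify(log_sig, hashlib.sha256(cert).digest())
        return True
    except InvalidSignature:
        return False

print(accept(certificate, ca_signature, log_signature))    # True
print(accept(b"CN=evil.example.com", ca_signature, log_signature))  # False
```

An attacker now has to compromise both the CA and the log, or publish the fraudulent certificate where monitors will see it.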
As for the idea that "The software issuer alone possesses the private key", this again isn't necessarily true in the real world. For example, in the case of the attack on ASUS:
The attackers used two different ASUS digital certificates to sign their malware. The first expired in mid-2018, so the attackers then switched to a second legitimate ASUS certificate to sign their malware after this.

Kamluk said ASUS continued to use one of the compromised certificates to sign its own files for at least a month after Kaspersky notified the company of the problem, though it has since stopped. But Kamluk said ASUS has still not invalidated the two compromised certificates, which means the attackers or anyone else with access to the un-expired certificate could still sign malicious files with it, and machines would view those files as legitimate ASUS files.

The attackers clearly had access to one of the machines on ASUS' internal network that used the private key to sign released binaries. These machines should have been air-gapped to ensure that only physical access could generate valid signatures. But that would be inconvenient. The care some vendors take to keep their private keys secret is illustrated by Catalin Cimpanu's Microsoft warns about two apps that installed root certificates then leaked the private keys:

The issue with the two HeadSetup apps came to light earlier this year when German cyber-security firm Secorvo found that versions 7.3, 7.4, and 8.0 installed two root Certification Authority (CA) certificates into the Windows Trusted Root Certificate Store of users' computers but also included the private keys for all in the SennComCCKey.pem file.
In a report published today, Secorvo researchers published proof-of-concept code showing how trivial it would be for an attacker to analyze the installers for both apps and extract the private keys.
Making matters worse, the certificates are also installed for Mac users, via HeadSetup macOS app versions, and they aren't removed from the operating system's Trusted Root Certificate Store during current HeadSetup updates or uninstall operations.
...
Sennheiser's snafu, tracked as CVE-2018-17612, is not the first of its kind. In 2015, Lenovo shipped laptops with a certificate that exposed its private key in a scandal that became known as Superfish. Dell did the exact same thing in 2016 in a similarly bad security incident that became known as eDellRoot.

"The software issuer alone possesses the private key" isn't the only concern about private keys. A leak of any of the private keys in the chain from the code-signing key to the CA will vitiate code-signing.
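Once the private key ships inside the installer, "extraction" barely deserves the name. A minimal sketch of what Secorvo's proof of concept amounts to (the file name is from their report; the key is treated as unencrypted here for brevity, an assumption, since the obfuscated passphrase was itself recoverable from the installer):

```python
from cryptography.hazmat.primitives import serialization

# Anyone with a copy of the installer has a copy of this file.
with open("SennComCCKey.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)

# 'private_key' can now sign arbitrary code or TLS handshakes. Any machine
# with the matching root certificate in its Trusted Root store will treat
# those signatures as legitimate.
```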
The report is correct when it states:
Attacks that compromise the code signing system are extremely potent—they get a victim’s system to run malicious code from an apparently legitimate source with significant authority to modify system files. As such, they have been on the rise recently and are a fundamental dimension of software supply chain attacks. ... Code-signing vulnerabilities are troubling because, if used successfully, they let attackers impersonate any trusted program and bypass some modern security tools. They also sow uncertainty about the authenticity of the very same patches and updates software developers use to fix security holes in their code.
In addition to stealing or forging certificates, attackers can also buy them from vendors online. These certificates could have been obtained through system access, but they can also come from mistakes (certificate authorities, or CAs, can be tricked into giving attackers legitimate certificates), insiders (employees with access to keys can sell them), or resellers (intermediate certificate issuers who require far less due diligence). The marketplaces are limited by the number of legitimate certificates issued, but they have been growing. The more expensive the certificate, the more trusted it is. The cost of a certificate is anywhere up to approximately $1,800 for an Extended Validation certificate issued by Symantec in 2016. CAs can revoke certificates that have been compromised and issue new ones, but the systems of reporting, intermediaries, and automatic revocation checks are slow to close security gaps. Moreover, without purchasing them off the black market and risking being defrauded, it is hard to know when certificates have actually been compromised until they are used in an attack.

The reason the cost of an attack is so low is that, as shown by Axel Arnbak et al in Security Collapse in the HTTPS Market, CAs' incentives aren't to be secure, they are to make more money. The easiest way to do that is to grow to be "too big to fail", so that even if you fail at security, customers have no choice but to continue to use you.
Breaking the Chain
This section discusses where along the supply chain most attacks take place, and points to updates to deployed software as the favorite:

Software updates are a major trust mechanism in the software supply chain and they are a popular vector to distribute attacks relying on compromised or stolen code-signing tools. Hijacking update processes is a key component of the more complex and harmful software supply chain attacks. The majority of hijacked updates in this report’s dataset were used to spread malicious code to third-party applications as new versions of existing software, and open-source dependencies spread malware inserted into open-source libraries.

As with the ASUS attack, the attackers need to sign their malware, so:
Hijacking updates generally requires accessing a certificate or developer account, versus app store and open-source attacks which can rely on unsigned attacker-made software. CCleaner is a great example. In September 2017, Cisco Talos researchers disclosed that the computer cleanup tool had been compromised for several months. Attackers had gained access to a Piriform developer account and used it to obtain a valid certificate to update the CCleaner installer to include malicious code that sent system information back to a command and control (C&C) server. The attack initially infected more than two million computers and downloaded a second, stealthier payload onto selected systems at nearly two dozen technology companies.

This last point is worth stressing. The most important defense is promptly applied patches. As with the anti-vax movement, the more malefactors can delay or prevent systems being patched, the more victims they can infect.
...
Hijacked updates rely on compromising build servers and code distribution tools. Rarely do attacks involve brute force assaults against cryptographic mechanisms. The implication is that many of the same organizational cybersecurity practices used to protect enterprise networks and ensure limited access to sensitive systems can be applied, if only more rigorously, here to address malicious updates. The update channel is a crucial one for defenders to distribute patches to users. Failure to protect this linkage could have dangerous ripple effects if users delay applying, or even begin to mistrust, these updates.
Poisoning the Well
This section discusses some of the particular issues of the open source software supply chain:

Attacks on OSS have grown more frequent in recent years. In February 2020, two accounts uploaded more than 700 packages to the Ruby repository and used typosquatting to achieve more than 100,000 downloads of their malware, which redirected Bitcoin payments to attacker-controlled wallets. Many of these attacks remain viable against users for weeks or months after software is patched because of the frequency with which open source projects patch and fail to notify users. Repositories and hubs can do more to help, providing easy-to-use tools for developers to notify users of changes and updates and shorten the time between when a vulnerability is fixed and users notified.
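Typosquatting works because repositories will register any unclaimed name, however close it is to a popular one. Even a simple edit-distance check at upload time would flag most such packages. A minimal sketch, with an illustrative (not real-policy) list of popular gem names:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

POPULAR = ["rspec", "rails", "nokogiri", "rack"]  # illustrative allow-list

def suspicious(new_name: str) -> bool:
    """Flag a new package whose name is one typo away from a popular one."""
    return any(0 < edit_distance(new_name, p) <= 1 for p in POPULAR)

print(suspicious("nokogirl"))  # True: one edit away from "nokogiri"
print(suspicious("mypackage"))  # False: not near any popular name
```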
On August 3rd, the Linux Foundation announced the formation of the Open Source Security Foundation. From their press release:

The OpenSSF is a cross-industry collaboration that brings together leaders to improve the security of open source software (OSS) by building a broader community with targeted initiatives and best practices. It combines efforts from the Core Infrastructure Initiative, GitHub’s Open Source Security Coalition and other open source security work from founding governing board members GitHub, Google, IBM, JPMorgan Chase, Microsoft, NCC Group, OWASP Foundation and Red Hat, among others. Additional founding members include ElevenPaths, GitLab, HackerOne, Intel, Okta, Purdue, SAFECode, StackHawk, Trail of Bits, Uber and VMware.

This is an encouraging development, however:
The initial technical initiatives will focus on:
- Vulnerability Disclosures
- Security Tooling
- Security Best Practices
- Identifying Security Threats to Open Source Projects
- Securing Critical Projects
- Developer Identity Verification

These are all good things to work on, but notice something missing? There's nothing about reproducible builds, without which, as I explained at length in Securing The Software Supply Chain, the connection between the source code in the repository and the actual binaries delivered through the software supply chain is vulnerable. In common with much of the open source codebase, the reproducible builds effort has been struggling through lack of funding commensurate with its critical importance:
eliminating all sources of irreproducibility from a package is a painstaking process because there are so many possibilities. They include non-deterministic behaviors such as iterating over hashmaps, parallel builds, timestamps, build paths, file system directory name order, and so on. The work started in 2013 with 24% of Debian packages building reproducibly. Currently, over 90% of Debian packages are reproducible. That is good, but 100% coverage is really necessary to provide security.
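As a toy illustration of the most common culprit, timestamps: a build that embeds the current time never reproduces, while honoring the SOURCE_DATE_EPOCH convention (a real mechanism adopted by the reproducible-builds effort) pins "now" to a value recorded in the package metadata. A sketch:

```python
import hashlib, os, time

def build(source: bytes) -> bytes:
    """A toy 'build' that embeds a timestamp, as many real toolchains do."""
    ts = os.environ.get("SOURCE_DATE_EPOCH", str(int(time.time())))
    return source + b"\nbuilt-at: " + ts.encode()

src = b"print('hello')"
first = hashlib.sha256(build(src)).hexdigest()
time.sleep(2)
print(first == hashlib.sha256(build(src)).hexdigest())  # False: the clock moved

os.environ["SOURCE_DATE_EPOCH"] = "1577836800"  # pinned in package metadata
first = hashlib.sha256(build(src)).hexdigest()
time.sleep(2)
print(first == hashlib.sha256(build(src)).hexdigest())  # True: reproducible
```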
Downloading Trouble
When asked why he robbed banks, Willie Sutton replied "that's where the money is". Similarly, for malware authors, app stores are like a watering hole where the vulnerable systems are:

As a consolidated venue, app stores simplify the user’s search for software that maximizes the value of their devices. However, as a high-traffic download center that connects end users to third-party developers, app stores have become a popular way to attack the software supply chain.

The report describes three kinds of app store attacks. First, apps developed by the attacker that they sneak past the app stores' screening processes:
...
The app store model continues to weather a storm of attacks due to the difficulty of vetting the third-party products at its heart. Malware obfuscation techniques have continued to evolve, and the volume of apps that need screening has also increased.
One example is the ExpensiveWall attack, named after one of several apps that attackers uploaded to the Google Play Store, Lovely Wallpaper, which provided background images for mobile users. The malicious software in the apps evaded Google Play’s screening system using encryption and other obfuscation techniques. The malware, once downloaded, charged user accounts and sent fraudulent premium SMS messages, driving revenue to the attackers.

Second, apps re-packaged by the attacker:
they take a legitimate app, add their own malicious code, and then bundle it as a complete item for download, usually on third-party sites. In one case, attackers repackaged Android versions of Pokémon Go to include the DroidJack malware, which could access almost all of an infected device’s data and send it to an attacker C&C server by simply asking for extra app permissions. The attackers placed the malware-infested app on third-party stores to take advantage of the game’s staggered release schedule, preying on users looking for early access to the game.

Note that since the attackers were unable to replace the legitimate version on the official app store, victims had to download the malware from unofficial sources.
Third, attacks against the Software Development Kits (SDKs) used to build apps distributed via the app store:
There is also publicly available evidence that groups compromise the software used to build apps, also known as software development kits (SDKs), allowing them to inject malware into legitimate apps as they are created. Whatever the model, malicious actors make an effort to obfuscate their malware to evade detection by app store curators. Compromising development tools used to build apps for those stores provides tremendous scale in a software supply chain attack. One example is the XcodeGhost malware, first detected early in the fall of 2015. Xcode is a development environment used exclusively to create iOS and OS X apps. Typically, the tool is downloaded directly from the App Store, but it is available elsewhere on the web outside Apple’s review. A version of Xcode found on Baidu Yunpan, a Chinese file-sharing service, came embedded with malicious logic that modified apps created in the development environment.

Note that, while the developers had to download the SDK from unofficial sources, the eventual victims downloaded the malware from the official app store, where the developers had unwittingly placed it.
...
Multiple apps created with XcodeGhost were accepted into Apple’s App Store, including malicious versions of WeChat, WinZip, and China Unicom Mobile Office, eventually impacting more than 500 million users. XcodeGhost aptly demonstrates the significance of app stores as a distribution method.
The huge potential for compromising SDKs illustrates the importance of reproducible builds, which allow multiple independent third parties to build software from the same source and ensure that the results are identical. If the vendor had downloaded a compromised SDK from an unofficial source but the third parties had correctly obtained it, the builds would differ. The kind of software supply chain security described by Benjamin Hof and Georg Carle in Software Distribution Transparency and Auditability is easier to implement in an open source environment, where truly independent auditors have access to the source. In a closed-source environment, third-party auditors would be under contract to the vendor, reducing their independence. For more details, see Securing The Software Supply Chain.
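The comparison the independent builders perform is just digest matching; all the difficulty lies in making the builds deterministic enough for it to succeed. A minimal sketch, with hypothetical artifact paths:

```python
import hashlib

def sha256_file(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Hypothetical: the same release built by the vendor and two outside parties.
artifacts = {
    "vendor":    "vendor/app-1.2.3.bin",
    "builder-a": "builder-a/app-1.2.3.bin",
    "builder-b": "builder-b/app-1.2.3.bin",
}
digests = {who: sha256_file(path) for who, path in artifacts.items()}

if len(set(digests.values())) == 1:
    print("all builds identical; tampering would have to hit every builder")
else:
    print("MISMATCH - someone's toolchain or SDK differs:", digests)
```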
Recommendations
The report understands that the fundamental problem is economic and social:

Software supply chain attacks and disclosures that exploit vulnerabilities across the lifecycle of software in this dataset are too cheap relative to the harm they can impose. For example, while software updates are the critical channel to bring updates and patches to software users, they have been hijacked consistently over the past decade. The problem is generally not developers brewing their own cryptographic protections or poorly implementing accepted protocols but failing to rigorously implement basic protections for their networks, systems, and build servers.

Many of the report's recommendations are, as often with national security reports, encrypted in bureaucratese. But this one is clear and to the point:
Open-source software constitutes core infrastructure for major technology systems and critical software pipelines. The absence of US public support to secure these products, at a cost point far below what is spent annually on other domains of infrastructure security, is a striking lapse. The US Congress should appropriate suitable funds, no less than $25 million annually, and unambiguous grant-making authority, to DHS CISA to support baseline security improvements in open-source security packages through a combination of spot grants up to $500,000 administrated in conjunction with the US Computer Emergency Response Team (US-CERT), and an open rolling grant application process.

As is this one:
The DoD should create a new nonprofit entity in the public domain supported by Carnegie Mellon University’s Software Engineering Institute (SEI) and Linux Foundation’s staff and networks. SEI should support the organization as a principal contractor funded via indefinite delivery/indefinite quantity contract from the DoD, at an amount not less than $10 million annually. The Linux Foundation should manage a priority list of software tools and open-source projects to invest in and support. This entity should support the long-term health of the software supply chain ecosystem by maintaining, improving, and integrating software tools ... making them freely available with expert integration assistance and other appropriate resources.

One would hope that it would be possible to convince DHS CISA and US-CERT to fund the reproducible builds effort!
Several of their recommendations refer to the importance of the "Software Bill of Materials" (SBOM). The reason is obvious: without accurate and machine-readable lists of what the software in a system consists of, it is hard to know whether the system has vulnerabilities. The Linux Foundation recently posted Healthcare industry proof of concept successfully uses SPDX as a software bill of materials format for medical devices:
Software Package Data Exchange (SPDX) is an open standard for communicating software bill of materials (SBOM) information that supports accurate identification of software components, explicit mapping of relationships between components, and the association of security and licensing information with each component. The SPDX format has recently been submitted by the Linux Foundation and the Joint Development Foundation to the JTC1 committee of the ISO for international standards approval.

They explain the importance of the SBOM thus:
A group of eight healthcare industry organizations, composed of five medical device manufacturers and three healthcare delivery organizations (hospital systems), recently participated in the first-ever proof of concept (POC) of the SPDX standard for healthcare use.
A Software Bill of Materials (SBOM) is a nested inventory or a list of ingredients that make up the software components used in creating a device or system. This is especially critical in the medical device industry and within healthcare delivery organizations to adequately understand the operational and cyber risks of those software components from their originating supply chain.

In a reproducible build environment, the SBOM needs to include not just the component packages, but also the entire toolchain used to build the released binaries, in case some of the tools have been compromised.
Some cyber risks come from using components with known vulnerabilities. Known vulnerabilities are a widespread problem in the software industry, such as known vulnerabilities in the Top 10 Web Application Security Risks from the Open Web Application Security Project (OWASP). Known vulnerabilities are especially concerning in medical devices since the exploitation of those vulnerabilities could lead to loss of life or maiming. One-time reviews don’t help, since these vulnerabilities are typically found after the component has been developed and incorporated. Instead, what is needed is visibility into the components of a medical device, similar to how food ingredients are made visible.
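The payoff of a machine-readable SBOM is that this "visibility" can be automated: cross-reference each (component, version) pair against an advisory feed. A deliberately tiny sketch with a hard-coded advisory table (real tooling would query feeds such as the NVD; the SBOM entries here are hypothetical, though OpenSSL 1.0.1f really is Heartbleed-vulnerable):

```python
# Hypothetical SBOM extract: (component, version) pairs for one device.
sbom = [("openssl", "1.0.1f"), ("zlib", "1.2.11"), ("busybox", "1.31.0")]

# Hypothetical advisory table; real tools query feeds such as the NVD.
advisories = {
    ("openssl", "1.0.1f"): "CVE-2014-0160 (Heartbleed)",
}

for name, version in sbom:
    if (name, version) in advisories:
        print(f"{name} {version}: known vulnerability {advisories[(name, version)]}")
```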
Don't get me started on the importance of this one:
The root of most software supply chain security protections is strong cryptography. Efforts by the United States and other governments to incentivize adoption of weak or compromised cryptographic standards work at cross-purposes with widespread industry practitioner and policy maker consensus. These efforts also present systemic threats to an already porous software supply chain. The legacy of US attempts to block the sale of “strong cryptography” is recurring attacks on weaker encryption standards and widespread access to previously controlled technologies. The United States and its allies should work in support of strong and accessible cryptographic standards and tools for developers.

This one is not encrypted but it is somewhat problematic:
The US Congress should extend final goods assembler liability to operators of major open-source repositories, container managers, and app stores. These entities play a critical security governance role in administering large collections of open-source code, including packages, libraries, containers, and images. Governance of a repository like GitHub or an app hub like the PlayStore should include enforcing baseline life cycle security practices in line with the NIST Overlay, providing resources for developers to securely sign, distribute, and alert users for updates to their software. This recommendation would create a limited private right of action for entities controlling app stores, hubs, and repositories above a certain size to be determined. The right would provide victims of attacks caused by code, which failed to meet these baseline security practices, a means to pursue damages against repository and hub owners. Damages should be capped at $100,000 per instance and covered entities should include, at minimum, GitHub, Bitbucket, GitLab, and SourceForge, as well as those organizations legally responsible for maintaining container registries and associated hubs, including Docker, OpenShift, Rancher, and Kubernetes.

The fact that software vendors use licensing to disclaim liability for the functioning of their products is at the root of the lack of security in systems. These proposals are plausible but I believe they would either be ineffective or, more likely, actively harmful. There is so much to write about them that they deserve an entire post to themselves.
Today's example of the failure of app stores to protect users is described by Dan Goodin in The accidental notary: Apple approves notorious malware to run on Macs:
ReplyDelete"When might an Apple malware protection pose more user risk than none at all? When it certifies a trojan as safe even though it sticks out like a sore thumb and represents one of the biggest threats on the macOS platform.
The world received this object lesson over the weekend after Apple gave its imprimatur to the latest samples of “Shlayer,” the name given to a trojan that has been among the most—if not the most—prolific pieces of Mac malware for more than two years. The seal of approval came in the form of a notarization mechanism Apple introduced in macOS Mojave to, as Apple put it, “give users more confidence” that the app they install “has been checked by Apple for malicious components.”
With the roll out of macOS Catalina, notarization became a requirement for all apps."
The problem of sorting the malware goats from the sheep is similar to the problem of content moderation which, at scale, doesn't work.
Steven J. Vaughan-Nichols provides a comprehensive overview of the open source community's response to the Biden administration's executive order on "cyber security" in Linux and open-source communities rise to Biden's cybersecurity challenge.
FBI sold phones to organized crime and read 27 million “encrypted” messages by Jon Brodkin reports that:
ReplyDelete"The Federal Bureau of Investigation created a company that sold encrypted devices to hundreds of organized crime syndicates, resulting in 800 arrests in 16 countries, law-enforcement authorities announced today. The FBI and agencies in other countries intercepted 27 million messages over 18 months before making the arrests in recent days, and more arrests are planned."
They created the entire supply chain!