New FDA cybersecurity guidelines are out. Join the webinar to learn more.

Who’s Responsible for Securing Custom Open Source Software?

Is open source software, one of the great wonders of our time, setting us up for one of the great cybersecurity blunders of our time? 

In an ongoing effort to develop more innovative technologies, companies are turning to open source libraries as a way to reliably bolster capabilities using fewer resources. By taking only part of an open source package’s code, software engineers are achieving impressive results with a more stable build, all while cutting development time. But this seemingly win-win scenario introduces some pressing questions, ones that, if left unaddressed, leave cracks that put an entire organization at the mercy of a hacker’s keystrokes.

To protect their organizations, Chief Information Security Officers (CISOs) and Chief Product Security Officers (CPSOs) need to answer a basic question: who is responsible for securing this custom open source software?

Where custom OSS security threats lie

When new features, capabilities, or even full versions are needed, software engineers are tasked with writing the code that brings them to life.

Since relying on open source software has been common practice in the software development industry for some time, various organizations have come together to maintain and patch software on an ongoing basis. This provides coverage for anyone using their code without each team needing to conduct maintenance on its own: when an update or patch is rolled out upstream, one backend update addresses the concern for everyone.

But this cut-and-dried example doesn’t take into account the ongoing trend of customizing existing open source software.

At times, open source packages cover a range of capabilities that go beyond the scope of a project. To simplify the process, developers cut and paste only the portions they need, leaving the rest of the code out. The problem is that these pieces of code, which are used to create customized open source software, are no longer the full OSS. They risk being left out of upstream updates, meaning that companies won’t know when their products contain new vulnerabilities that put their brand reputation, and possibly end-users’ lives, at risk.

Owning custom OSS security

To add to the complexity of this problem, organizations can outsource full or partial projects to third-party vendors. Each vendor has its own methods and open source libraries to choose from, introducing new uncertainties into an already confusing compilation of software’s greatest hits.

Cherry-picking from two safe open source packages may save time, but by chiseling out only the most relevant pieces, developers inherently forgo the protections that come from relying on well-maintained open source libraries.

Key questions about who is responsible for the ongoing monitoring, patching, and security needs of the product all have one simple answer: no one. 

Unless a clear agreement is reached between organizations and their vendors about who will maintain the software over time, companies must take full responsibility for securing their products, along with the custom open source software they rely on.

When writing proprietary code based on open source software, best practices must be outlined. At a minimum, CISOs and CPSOs should be informed of:

  • Code information – What code is being used, which libraries it was originally taken from, and where the latest vulnerability updates can be found.
  • Operational use – What is being done with this code once it is deployed?
  • Responsibility – Who is responsible for keeping this code up to date? If there is no clear stakeholder, one must be assigned internally.
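The three points above can be captured as a simple record attached to each piece of custom OSS. The sketch below is one way to structure that record; the field names and example values are hypothetical and not taken from any particular tool.

```python
from dataclasses import dataclass

# Sketch of the record a CISO/CPSO could be handed for each piece of
# custom OSS, covering code information, operational use, and
# responsibility. All names and values here are illustrative.
@dataclass
class CustomOssRecord:
    name: str              # internal name of the copied snippet
    source_library: str    # upstream library the code was taken from
    advisory_feed: str     # where the latest vulnerability updates come from
    operational_use: str   # what the code does once deployed
    owner: str             # internal stakeholder responsible for updates

record = CustomOssRecord(
    name="custom-json-parser",
    source_library="upstream-json-lib 4.2",
    advisory_feed="https://nvd.nist.gov",
    operational_use="Parses device telemetry before upload",
    owner="product-security-team",
)

print(record.owner)
```

Even a lightweight record like this makes the "no clear stakeholder" failure mode visible: if the `owner` field cannot be filled in, the gap is explicit rather than silent.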

Using SBOMs and VEX to address OSS blindspots

If developers rely on whole or partial OSS and plug it into customized code, both the code and the libraries are left vulnerable. What’s more, a piece of code written by a developer may lack the critical source data needed to properly protect the device, vendor agreement or not.

To keep a tight perimeter, cybersecurity leaders must be able to identify each component, its version, and its vulnerabilities in relation to the setup and architecture of their product. Creating a software bill of materials (SBOM) allows for the cataloging of this device information. The problem becomes clearer when multiple products run on thousands of components, each with its own sources, risks, and potential vulnerabilities. Suddenly, creating an SBOM goes from a small cybersecurity step to a great cybersecurity mountain.
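To make the idea concrete, here is a minimal sketch of what an SBOM can look like, loosely following the CycloneDX JSON format. This is a hand-trimmed subset rather than a complete, schema-valid document, and the product and component names are hypothetical.

```python
import json

# Simplified SBOM sketch in the spirit of the CycloneDX JSON format.
# The product ("acme-infusion-pump") and its components are
# hypothetical examples, not real inventory data.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "metadata": {"component": {"name": "acme-infusion-pump", "type": "device"}},
    "components": [
        {
            "type": "library",
            "name": "openssl",
            "version": "1.1.1w",
            # Provenance identifier so the component can be traced
            # back to its upstream project.
            "purl": "pkg:generic/openssl@1.1.1w",
        },
        {
            "type": "library",
            "name": "custom-json-parser",
            "version": "0.3.0",
            # Custom OSS: only part of an upstream library was reused,
            # so it must be cataloged explicitly to stay visible.
            "description": "Trimmed copy of an upstream JSON library",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```

Note that the custom, partially copied component gets its own entry; if it only existed as pasted code inside a proprietary module, no inventory would ever surface it.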

Cybellum’s Product Security Platform was developed to automate this tedious process at scale, allowing for the rapid generation, approval, and management of SBOMs. Once all internal components are known, a proper monitoring plan can come into play, and VEX reports can be generated to identify the risk to a specific device.
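A VEX (Vulnerability Exploitability eXchange) report records whether a known vulnerability actually affects a specific product. The sketch below loosely follows the shape of an OpenVEX statement; the CVE identifier and product reference are hypothetical examples, and a real document carries additional metadata.

```python
# Simplified VEX statement sketch, loosely following the OpenVEX
# shape. The CVE ID and product identifier are hypothetical.
vex_statement = {
    "vulnerability": {"name": "CVE-2023-0001"},           # hypothetical ID
    "products": ["pkg:generic/custom-json-parser@0.3.0"],
    # VEX status values include "not_affected", "affected",
    # "fixed", and "under_investigation".
    "status": "not_affected",
    # Justification: the vulnerable upstream code path was one of the
    # pieces trimmed out when the custom copy was made.
    "justification": "vulnerable_code_not_present",
}

print(vex_statement["status"])
```

This is exactly where custom OSS pays a dividend or exacts a price: knowing which upstream code was kept and which was cut is what lets a team assert "not affected" with confidence.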


With regular updates about the original open source package your custom OSS was based on, a cybersecurity blind spot is lifted: you can monitor proprietary code, open source code, and the gray areas in between that developers operate in.

Safer is better

Considering the uncertainty surrounding custom open source software, it is always better to take full responsibility for any components within your software. 

Continuously checking these customized OSS components against a known vulnerability database, such as the National Vulnerability Database (NVD), is the only way to ensure that your components are maintained with the same vigilance as their original versions. This can be done in two ways:

  • Option 1: Asset and SBOM cataloging – Treat the software component like any full open source package, even if only 30% of it was reused, and take full responsibility for monitoring it.
  • Option 2: Knowledge bases – Only take code from organizations or consortiums that continuously test their solutions, and subscribe to their notifications so that updates arrive regularly.
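Option 1 can be sketched as a simple matching loop between a component catalog and a local snapshot of a vulnerability feed. The feed entries and version sets below are hypothetical; a real integration would pull from the NVD API and use proper CPE and version-range matching rather than exact string comparison.

```python
# Sketch of Option 1: checking cataloged components against a local
# snapshot of a vulnerability feed such as the NVD. Entries and
# versions are hypothetical illustration data.
known_vulns = [
    {
        "cve": "CVE-2023-0001",              # hypothetical entry
        "package": "custom-json-parser",
        "affected_versions": {"0.2.0", "0.3.0"},
    },
]

components = [
    {"name": "custom-json-parser", "version": "0.3.0"},
    {"name": "openssl", "version": "1.1.1w"},
]

def find_matches(components, known_vulns):
    """Return (component name, CVE id) pairs where a cataloged
    component appears in the feed with a matching version."""
    hits = []
    for comp in components:
        for vuln in known_vulns:
            if (comp["name"] == vuln["package"]
                    and comp["version"] in vuln["affected_versions"]):
                hits.append((comp["name"], vuln["cve"]))
    return hits

print(find_matches(components, known_vulns))
```

The crucial precondition is the catalog itself: a trimmed copy of an open source library only shows up in this loop if it was entered into the SBOM under a name that the monitoring process knows how to map back upstream.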

Unless working with a trusted vendor who updates the archives and knowledge base surrounding a given piece of open source code, organizations must treat the code like full open source software. Only by taking full responsibility upon themselves can companies ensure the security of their final product and remain secure into the future.