Adopt MLSecOps for Secure Machine Learning at Scale

Given the complexity, sensitivity, and scale of a typical enterprise software stack, security has understandably always been a top concern for most IT teams. But in addition to the well-known security challenges facing DevOps teams, organizations must also consider a new source of security challenges: machine learning (ML).

ML adoption is skyrocketing across industries, with McKinsey finding that by the end of last year, 56% of companies had adopted ML in at least one business function. However, in the race to adoption, many organizations encounter the distinct security challenges that come with ML, along with broader challenges in deploying and operating it responsibly. This is especially true in newer contexts where machine learning is deployed at scale for use cases involving critical data and infrastructure.

ML-related security issues become particularly pressing when the technology operates in a real-world enterprise environment, given the magnitude of potential disruption posed by security breaches. Meanwhile, ML must also fit into the existing practices of IT teams and avoid becoming a source of bottlenecks and downtime for the business. Along with the principles governing the responsible use of AI, this means teams are adapting their workflows to build robust security practices into their ML workloads.

The rise of MLSecOps

To address these concerns, machine learning practitioners are working to adapt the practices they have developed for DevOps and IT security to large-scale ML deployment. That’s why professionals working in the industry are building a specialization that integrates security, DevOps, and ML: Machine Learning Security Operations, or “MLSecOps” for short. As a practice, MLSecOps brings together ML infrastructure, automation between development and operations teams, and security policies.

But what challenges does MLSecOps actually solve? And how?

The rise of MLSecOps has been driven by the growing importance of a wide range of security challenges facing the industry. To give a sense of the scope and nature of the issues MLSecOps has emerged to address, let’s examine two in detail: model endpoint access and supply chain vulnerabilities.

Model endpoint access

Varying levels of unrestricted access to machine learning models pose major security risks. The first and more intuitive level of access is “black-box” access: the ability to run inference against a model. While this is essential for models to be consumed by the applications and use cases that produce business value, unrestricted access to a model’s predictions can introduce various security risks.

An exposed model may be subject to an “adversarial” attack. In such an attack, a model is reverse-engineered to generate “adversarial examples”: inputs to the model with added statistical noise. This noise tricks the model into misinterpreting an input and predicting a different class than would intuitively be expected.

A classic example of an adversarial attack involves an image of a stop sign. When adversarial noise is added to the image, it can trick an AI-powered self-driving car into recognizing it as an entirely different sign, such as a “give way” sign, while it still looks like a stop sign to a human.

Example of a common adversarial attack on image classifiers. Image by Fabio Carrara, Fabrizio Falchi, Giuseppe Amato (ISTI-CNR), Rudy Becarelli and Roberto Caldelli (CNIT Research Unit at MICC ‒ University of Florence) via ERCIM.
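To make the mechanics concrete, here is a minimal sketch of one well-known way to generate such noise, the Fast Gradient Sign Method (FGSM). It assumes a differentiable PyTorch classifier and is purely illustrative; it is not the specific attack shown in the figure above.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.01):
    # `model` is any differentiable classifier, `image` a (1, C, H, W)
    # tensor with pixel values in [0, 1], `label` the true class index.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([label]))
    loss.backward()
    # Nudge every pixel a small step in the direction that increases the
    # loss; the result looks unchanged to a human but can flip the
    # model's prediction.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```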

Then there is “white-box” access, which involves access to a model’s internals at different stages of the model’s development. At a recent software development conference, we showed how it is possible to inject malware into a model that can trigger arbitrary and potentially malicious code when the model is deployed in production.
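The conference demonstration is not reproduced here, but one widely known vector is Python’s pickle format, which many ML tools use to serialize models; the sketch below shows why loading an untrusted “model” file can execute attacker-controlled code (the demonstration may have used a different mechanism).

```python
import pickle

class MaliciousModel:
    # pickle invokes __reduce__ during deserialization, so whatever
    # callable it returns runs the moment the "model" file is loaded.
    def __reduce__(self):
        import os
        return (os.system, ("echo arbitrary code executed at load time",))

payload = pickle.dumps(MaliciousModel())
pickle.loads(payload)  # runs the embedded command; never unpickle untrusted files
```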

Other challenges arise around data leakage. Researchers have successfully reverse-engineered training data from a model’s learned internal weights, which can lead to sensitive and/or personally identifiable data being leaked, potentially causing significant harm.
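Full model-inversion attacks are beyond the scope of a short example, but a related and simpler leakage signal is easy to demonstrate: models are often measurably more confident on the records they were trained on, which membership-inference attacks exploit. A minimal sketch using scikit-learn:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# The gap between these two averages is the leakage signal an attacker
# can use to guess whether a given record was in the training set.
print("avg confidence, training data:", model.predict_proba(X_train).max(axis=1).mean())
print("avg confidence, unseen data:  ", model.predict_proba(X_test).max(axis=1).mean())
```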

Supply chain vulnerabilities

Another security issue facing ML is one that much of the software industry also faces, namely the software supply chain issue. Ultimately, this problem comes down to the fact that an enterprise IT environment is incredibly complex and relies on many software packages to operate. And often, a breach in just one of these programs in an organization’s supply chain can compromise an otherwise fully secure setup.

In a non-ML context, consider the SolarWinds breach in 2020, which saw large swaths of the US federal government and corporate world compromised via a supply chain vulnerability. The incident heightened the urgency of strengthening the software supply chain across industries, especially given the role of open source software in the modern world; even the White House is now holding high-level summits on the issue.

Just as supply chain vulnerabilities can lead to a breach in any software environment, they can also compromise the ecosystem around an ML model. In this scenario, the effects can be even worse, given how much ML relies on open source advances and the complexity of models, including the downstream supply chain of libraries they need to operate.

For example, this month the long-established Ctx Python package on the PyPI open source repository was discovered to have been compromised with information-stealing code, with the compromised versions downloaded more than 27,000 times.

Since Python is one of the most popular languages for ML, supply chain compromises such as the Ctx breach are particularly pressing for ML models and their users. Anyone who maintains, contributes to, or uses software libraries has at some point encountered the challenges posed by second-, third-, fourth-level or deeper dependencies; for ML, these challenges can become much more complex.

Where does MLSecOps come from?

Something common to the two examples above is that although they are technical issues, solving them does not require new technology. Instead, these risks can be mitigated with existing processes and people, by holding both to high standards. I consider this to be the motivating principle behind MLSecOps: the centrality of strong processes in enabling ML for production environments.

For example, although we have covered only two high-level areas specific to ML models and code, there is also a wide range of challenges around the infrastructure of ML systems. Authentication and authorization best practices can protect model access and endpoints and ensure they are used only when needed. Model access, for instance, can take advantage of tiered authorization systems, which mitigate the risk of malicious parties gaining both black-box and white-box access. The role of MLSecOps here is to develop strong practices that harden access to models while inhibiting the work of data scientists and DevOps teams as little as possible, so teams can keep operating efficiently.
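As an illustration of what tiered authorization can look like, here is a hedged sketch assuming a FastAPI model service; the token-to-scope mapping is hypothetical and would normally come from an identity provider rather than a hard-coded dictionary.

```python
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

# Hypothetical mapping for the sketch; real scopes would be issued and
# validated by an identity provider.
TOKEN_SCOPES = {
    "inference-token": {"predict"},              # black-box access only
    "admin-token": {"predict", "read-weights"},  # white-box access too
}

def require_scope(scope: str):
    def checker(authorization: str = Header(...)):
        token = authorization.removeprefix("Bearer ").strip()
        if scope not in TOKEN_SCOPES.get(token, set()):
            raise HTTPException(status_code=403, detail="insufficient scope")
    return checker

@app.post("/predict", dependencies=[Depends(require_scope("predict"))])
def predict(payload: dict):
    return {"prediction": "..."}  # run model inference here

@app.get("/weights", dependencies=[Depends(require_scope("read-weights"))])
def weights():
    return {"weights_url": "..."}  # expose model internals only to trusted roles
```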

The same goes for the software supply chain: good MLSecOps requires teams to have a process in place to regularly check their dependencies, update them as needed, and act quickly when a vulnerability is flagged. The challenge for MLSecOps is to develop these processes and integrate them into the daily workflows of the rest of the IT team, largely automating them to reduce the time spent manually reviewing the software supply chain.
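A minimal sketch of such an automated check, scanning the installed environment against a hand-maintained advisory list. The package name and version numbers below are illustrative only; in practice teams would rely on a scanning tool backed by a real vulnerability database rather than a hard-coded list.

```python
from importlib.metadata import distributions

# Illustrative advisory list: package name -> versions known to be compromised.
KNOWN_BAD = {
    "ctx": {"0.2.2", "0.2.6"},  # hypothetical version numbers for the sketch
}

for dist in distributions():
    name = (dist.metadata["Name"] or "").lower()
    if dist.version in KNOWN_BAD.get(name, set()):
        print(f"ALERT: {name}=={dist.version} is a known-compromised release")
```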

While no ML model and its associated environment can be made entirely tamper-proof, what these examples have hopefully shown is that most security breaches occur because best practices are missing at different stages of the development lifecycle.

The role of MLSecOps is to intentionally build security into the infrastructure that oversees the end-to-end machine learning lifecycle, including the ability to identify vulnerabilities, determine how they can be fixed, and ensure those remediations fit into the day-to-day work of team members.

MLSecOps is an emerging field, with people working in and around it continuing to explore and define security vulnerabilities and best practices at every stage of the machine learning lifecycle. If you are an ML practitioner, now is a great time to contribute to the ongoing discussion as the field of MLSecOps continues to grow.

Alejandro Saucedo is the Technical Director of Machine Learning at Seldon.
