UK NCSC releases ‘vague’ security principles for machine learning models

The UK’s National Cyber Security Centre (NCSC) has published a set of security principles for developers and companies implementing machine learning models. An ML specialist who spoke with Tech Monitor said the principles represent a positive direction, but are “vague” on specifics.

The principles established by the NCSC provide a “direction of travel” rather than specific instructions. (Photo by Gorodenkoff/iStock)

The NCSC has developed its security principles as the role of machine learning and artificial intelligence grows in industry and society at large, from AI assistants in smartphones to the use of machine learning in healthcare. IBM’s most recent Global AI Adoption Index found that 35% of companies said they were using AI in their business, and a further 42% said they were exploring AI.

The NCSC says that as the use of machine learning grows, it is important for users to know that it is deployed safely and does not put personal safety or data at risk. “It turns out to be really difficult,” the agency said in a blog post. “It is these challenges, many of which have no simple solutions, that have motivated us to develop practical guidance in the form of our principles.”

This involved looking at attack techniques and defenses against potential security vulnerabilities, but also taking a more pragmatic approach: finding concrete ways to protect machine learning systems from being exploited in real-world environments.

The nature of machine learning models, which develop their behavior through automated analysis of data, makes them difficult to secure. “Because a model’s internal logic is data-driven, its behavior can be difficult to interpret, and it’s often difficult (if not impossible) to fully understand why it does what it does,” the NCSC blog post says.

This means that many machine learning components are deployed in networks and systems without the high level of security applied to non-automated tools, leaving large parts of the system opaque to cybersecurity professionals. As a result, some vulnerabilities can be missed, and the system is exposed both to conventional attacks and to vulnerabilities inherent to machine learning, which are present at every stage of the ML lifecycle.

Lack of transparency in machine learning models

The class of attacks designed to exploit these inherent weaknesses in machine learning is known as “adversarial machine learning” (AML), and understanding such attacks requires knowledge of multiple disciplines, including data science, cybersecurity, and software development.
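
To make that concrete, below is a minimal sketch of one well-known AML technique, the Fast Gradient Sign Method, which perturbs an input image just enough to change a classifier’s output. This is an illustration of the attack class, not an example taken from the NCSC guidance; the PyTorch model and the epsilon value are assumed placeholders.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Treat the input as a leaf tensor so gradients are taken with
    # respect to the pixels rather than the model weights.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the
    # loss; the result looks unchanged to a human but can flip the
    # model's prediction.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

A perturbation of this kind is typically invisible to the eye, which is precisely the sort of failure mode the NCSC principles aim to make developers aware of.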

The NCSC has produced a set of Security Principles for Systems Containing ML Components with the goal of raising awareness of AML attacks and defenses for anyone involved in developing, deploying, or decommissioning a system containing ML. The logic used by ML models and the data used to train the models can often be opaque, leaving security experts in the dark when it comes to inspecting them for security vulnerabilities.

The principles suggest designing for security when writing system requirements, securing the supply chain and ensuring data comes from a trusted source, and securing the infrastructure by applying trust controls to everything that goes into the development environment.

Assets should be tracked through documentation covering the creation, operation, and lifecycle management of models and datasets, and model architectures should be designed with security in mind.
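
As a small illustration of what “data from a trusted source” can look like in practice (a sketch of one possible approach, not an implementation prescribed by the NCSC), a training pipeline can verify dataset files against a trusted manifest of checksums before training begins. The manifest format here is a hypothetical example.

import hashlib
import json

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file in chunks so large datasets need not fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(manifest_path):
    # Hypothetical manifest format: {"train.csv": "<sha256 hex>", ...}
    with open(manifest_path) as f:
        manifest = json.load(f)
    for filename, expected in manifest.items():
        if sha256_of(filename) != expected:
            raise RuntimeError(f"checksum mismatch for {filename}; refusing to train")

Refusing to train on data that fails such a check is one way to reduce the risk of data-poisoning attacks via the supply chain.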

“ML isn’t necessarily more dangerous than any other logical element in a software system, but there are some nuances in ML models that should be appreciated,” says Nigel Cannings, founder of compliance solutions company Intelligent Voice.

“An ML system is built on data; the finished system represents valuable intellectual property, and in some cases the data used to train it also needs to be protected.”

This reflects concerns raised by the NCSC, which said that without open information about the data used to train machine learning algorithms, or the methods they use to reach their conclusions, it is difficult to spot vulnerabilities that could expose the system.

However, Cannings cautions that while the NCSC principles are a positive initiative, the lack of detail makes them less useful as a tool for communicating potential risks. “The NCSC principles are vague and provide general guidelines with a lot borrowed from conventional software cybersecurity,” he says. “They are not wrong, and emphasize the importance of educating developers and data scientists, but more detail could have been provided to communicate the risks.”

NCSC ML security principles a ‘direction of travel’

Developers and administrators are likely to take steps to protect their models if they are aware of the risks to which they may be exposed, Cannings explains, adding that “just as software engineering has evolved to be increasingly security-conscious, ML and MLOps will benefit from this practice too”.

The NCSC principles are more of a “direction of travel” than a set of guidelines or plans to follow, he says, and the exact steps taken will vary by model and change with research.

Todd R Weiss, analyst at Futurum Research, adds: “It is wise to consider all aspects of security as they relate to AI and ML, although both technologies can also help companies identify and solve technological challenges. Like so many things in life, AI and ML are also double-edged swords that can bring huge benefits as well as damage. These concerns must be weighed against their benefits as part of an overall IT infrastructure and business strategy.”

Despite these inherent risks, Weiss said AI and ML are “much more beneficial and useful as technologies in our world.” He argues, “Without AI and ML, incredibly powerful digital twins wouldn’t be possible, medical breakthroughs wouldn’t happen, and nascent metaverse communities wouldn’t be possible. There are many more examples of the benefits of AI and ML, and there will always be bad actors looking for ways to wreak havoc on all forms of technology.”

Weiss commended the NCSC for its ML security principles, as they “encourage awareness, acceptance, and critical reflection on these lingering concerns and can actively help companies take these issues to heart when using and exploring AI and ML”.

Read more: Meta has questions to answer about its responsible AI plans

Sherry J. Basler