How hackers are using AI and machine learning to target businesses

Cybersecurity has benefited from advances in machine learning and AI. Security teams today are inundated with data on potentially suspicious activity but are often left searching for needles in haystacks. AI helps defenders find the real threats in this data by recognizing patterns in network traffic, malware indicators, and user behavioral trends.

Unfortunately, attackers have found ways to turn these same advances in AI and machine learning against defenders. Ready access to cloud environments makes it simple to get started with AI and build powerful, high-performance learning models.

Let’s look at how hackers are using AI and machine learning to target businesses, as well as ways to prevent AI-driven cyberattacks.

3 ways attackers use AI against defenders

1. Test the success of their malware against AI-powered tools

Attackers can use machine learning in several ways. The first – and easiest – is to build their own machine learning environments and model their own malware and attack practices to determine the types of events and behaviors defenders are looking for.

Sophisticated malware, for example, can modify local system libraries and components, run processes in memory, and communicate with one or more domains belonging to an attacker's control infrastructure. Taken together, these activities form a profile known as the attacker's tactics, techniques and procedures (TTPs). Machine learning models can observe TTPs and use them to build detection capabilities.

By observing and predicting how TTPs are detected by security teams, adversaries can subtly and frequently modify indicators and behaviors to stay ahead of defenders who rely on AI-powered tools to detect attacks.
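
To make that concrete, here is a minimal sketch of the idea, assuming scikit-learn and entirely synthetic data: an attacker trains a stand-in detector on made-up TTP features, then repeatedly dials down a sample's observable behaviors until the model stops flagging it. The feature names, values, and model are hypothetical and do not represent any vendor's actual detection system.

# Hypothetical sketch: an attacker rehearsing malware behaviors against a
# stand-in detector. Feature names and data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy TTP features: [library modifications, in-memory processes, C2 callouts per hour]
benign = rng.normal(loc=[1.0, 2.0, 0.5], scale=1.0, size=(500, 3))
malicious = rng.normal(loc=[8.0, 10.0, 6.0], scale=1.0, size=(500, 3))
X = np.vstack([benign, malicious]).clip(min=0)
y = np.array([0] * 500 + [1] * 500)

# Stand-in for a defender's AI-powered detection model
detector = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Attacker's sample, then gradual "indicator dialing" until the model stops flagging it
sample = np.array([[8.0, 10.0, 6.0]])
for step in range(30):
    if detector.predict(sample)[0] == 0:
        print(f"evaded after {step} tweaks with behaviors {sample.round(2).tolist()[0]}")
        break
    sample = sample * 0.9  # tone down each observable behavior by 10% and re-test

In practice the probing would target whatever models or scanning services the attacker can reach, but the rehearsal loop is the same: modify, re-test, repeat.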

2. Poison the AI with inaccurate data

Attackers also use machine learning and AI to compromise environments by poisoning AI models with inaccurate data. Machine learning and AI models rely on properly labeled data samples to build accurate and repeatable detection profiles. By introducing benign files that look like malware or creating behavior patterns that turn out to be false positives, attackers can trick AI models into believing that attack behaviors are not malicious. Attackers can also poison AI models by introducing malicious files that AI training has labeled as safe.
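
A minimal sketch of this kind of label poisoning, assuming scikit-learn and synthetic data: a slice of the malicious training samples is relabeled as benign before the model is fit, and the share of real malware the model still catches typically drops. The data, proportions, and model choice are illustrative assumptions, not a description of any real detection pipeline.

# Hypothetical sketch of training-data poisoning by label flipping.
# All data is synthetic; no real detection pipeline is modeled.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (1000, 5)), rng.normal(1.5, 1.0, (1000, 5))])
y = np.array([0] * 1000 + [1] * 1000)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

def malware_recall(train_labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return recall_score(y_test, model.predict(X_test))  # share of real malware still caught

print("clean labels:   ", round(malware_recall(y_train), 3))

# Poison the training set: relabel 30% of the malicious samples as benign
poisoned = y_train.copy()
mal_idx = np.where(poisoned == 1)[0]
poisoned[rng.choice(mal_idx, size=int(0.3 * len(mal_idx)), replace=False)] = 0
print("poisoned labels:", round(malware_recall(poisoned), 3))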

3. Map existing AI models

Attackers are actively seeking to map the existing and developing AI models used by cybersecurity vendors and operations teams. By learning how the AI models work and what they do, adversaries can disrupt machine learning operations and models during their cycles. This can allow hackers to influence the model so that the system favors attackers and their tactics. It can also allow hackers to evade the models entirely by subtly modifying data so that it no longer matches known detection patterns.
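
One way to picture this mapping is model extraction: probe a detector that can only be queried, record its verdicts, and train a local surrogate that approximates its decision boundary, which can then be studied and attacked offline. The sketch below, assuming scikit-learn and synthetic data, uses a hypothetical "victim" model standing in for a defender's tool.

# Hypothetical sketch of black-box model extraction ("mapping" a detector).
# The victim model and all data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

# Defender's model, which the attacker can only query (e.g. via a scanning service)
X_private = np.vstack([rng.normal(0.0, 1.0, (500, 4)), rng.normal(2.5, 1.0, (500, 4))])
y_private = np.array([0] * 500 + [1] * 500)
victim = GradientBoostingClassifier(random_state=0).fit(X_private, y_private)

# Attacker sends probe samples and records the verdicts
probes = rng.uniform(-3, 6, size=(2000, 4))
verdicts = victim.predict(probes)

# A local surrogate trained on the query results approximates the victim's boundary
surrogate = DecisionTreeClassifier(max_depth=6, random_state=0).fit(probes, verdicts)

holdout = rng.uniform(-3, 6, size=(500, 4))
agreement = (surrogate.predict(holdout) == victim.predict(holdout)).mean()
print(f"surrogate agrees with the victim on {agreement:.0%} of held-out probes")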

How to defend against AI-driven attacks

Defending against AI-driven attacks is extremely difficult. Defenders must ensure that the labels attached to the data used in training and model development are accurate. The trade-off is that verifying label accuracy will likely shrink the datasets used to train the models, and smaller datasets do not help AI efficiency.
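
As one illustration of label auditing (a generic sketch, not any specific product's workflow), out-of-fold predictions can be used to flag training samples whose labels strongly disagree with the model consensus so a human can review them. The data, poisoning rate, and review threshold below are assumptions for demonstration only.

# Hypothetical sketch of auditing training labels before fitting a detection model:
# samples whose labels strongly disagree with cross-validated predictions get
# flagged for human review. Data and thresholds are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (600, 4)), rng.normal(2.5, 1.0, (600, 4))])
y = np.array([0] * 600 + [1] * 600)

# Simulate a small amount of poisoning: some malware mislabeled as benign
y_noisy = y.copy()
y_noisy[rng.choice(np.where(y == 1)[0], size=30, replace=False)] = 0

# Out-of-fold predicted probability of "malicious" for every training sample
proba = cross_val_predict(RandomForestClassifier(random_state=0), X, y_noisy,
                          cv=5, method="predict_proba")[:, 1]

# Flag samples labeled benign that the consensus model scores as very likely malicious
suspects = np.where((y_noisy == 0) & (proba > 0.9))[0]
print(f"{len(suspects)} labels flagged for manual review")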

For those building AI security detection models, introducing adversarial techniques and tactics during modeling can help align pattern recognition with tactics seen in the wild. Researchers at Johns Hopkins University have developed the TrojAI software framework to help generate AI models for Trojans and other malware patterns. MIT researchers have released TextFooler, a tool that does the same for natural language models and could be useful for building more resilient AI models that detect problems such as bank fraud.
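
The sketch below illustrates the general idea of adversarial training with scikit-learn and synthetic data: generate perturbed malicious samples that the current model misses, fold them back into the training set with their true labels, and refit. It is a generic illustration of the technique, not the TrojAI or TextFooler workflow, and every value in it is an assumption.

# Hypothetical sketch of adversarial training: find evasive variants of malicious
# samples that the current model misclassifies, add them back with their true
# label, and retrain. Generic illustration only; not a specific tool's API.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 1.0, (800, 4)), rng.normal(2.0, 1.0, (800, 4))])
y = np.array([0] * 800 + [1] * 800)

model = LogisticRegression(max_iter=1000).fit(X, y)

for round_ in range(3):
    mal = X[y == 1]
    # Perturb malicious samples toward the benign region (a crude evasion attempt)
    evasions = mal - 0.8 * rng.random(mal.shape)
    missed = evasions[model.predict(evasions) == 0]
    if len(missed) == 0:
        break
    # Fold the successful evasions back in, labeled with their true (malicious) class
    X = np.vstack([X, missed])
    y = np.concatenate([y, np.ones(len(missed), dtype=int)])
    model = LogisticRegression(max_iter=1000).fit(X, y)
    print(f"round {round_ + 1}: retrained on {len(missed)} evasive samples")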

As AI grows in prominence, attackers will seek to outpace defenders' efforts with research of their own. Keeping abreast of attackers' tactics is crucial for security teams that need to defend against them.

Sherry J. Basler