How safe is the data you use to train your machine learning model?

This article is based on the research paper "Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets". All credit for this research goes to the authors of that paper.


In today’s world, where machine learning and artificial intelligence are used across nearly every field, the value of data has grown dramatically, and a new risk to the data used to train these models has emerged. According to recent studies, training data is no longer secure: an adversary with nothing more than query access to a trained model can, in a variety of ways, infer and reconstruct the sensitive information the model was trained on.

According to these studies, an attacker can poison a machine learning model in order to reconstruct the data used to train it. Researchers from Google, the National University of Singapore, Yale-NUS College, and Oregon State University highlight how disturbing these attacks are. It was previously assumed that once a model had been trained, the original training data could be deleted, making it difficult for attackers to harvest sensitive information from the model alone. The new research, however, shows that it is quite feasible for an attacker to query the model for predictions, use those predictions to identify a pattern, and then use that pattern to reconstruct parts of the original training set.

Classic inference attacks consist only of observing the model's predictions, without influencing the training process. The researchers set out to measure how much more effective such attacks become when the adversary can also poison the training data, assessing the effectiveness and threat level of several forms of inference attack. They first considered membership inference attacks, which let an attacker determine whether a particular data record was part of the training set (see the sketch below). They also examined reconstruction attacks, which partially recreate the training data; against language models, for example, these attacks can generate sentences that closely overlap with the training text or complete a sentence given its beginning. The researchers found these attacks to be frighteningly successful, suggesting that cryptographic privacy technologies may not be adequate to protect the privacy of user data.
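
A minimal sketch of the core idea behind a membership inference attack is shown below, assuming black-box access to a scikit-learn-style classifier exposing `predict_proba`. It uses the common loss-thresholding heuristic: examples seen during training tend to have lower loss than unseen examples. The names (`target_model`, `THRESHOLD`, etc.) and the threshold calibration are illustrative assumptions, not the specific attack studied in the paper.

```python
# Loss-threshold membership inference sketch: guess that low-loss examples
# were members of the training set. Illustrative only.
import numpy as np

THRESHOLD = 0.5  # assumed: calibrated on shadow models or held-out data


def cross_entropy(probs: np.ndarray, label: int) -> float:
    """Per-example cross-entropy loss from the model's predicted probabilities."""
    return -float(np.log(probs[label] + 1e-12))


def is_likely_member(target_model, candidate_x: np.ndarray, candidate_y: int) -> bool:
    """Guess membership: training points tend to have lower loss than non-members."""
    probs = target_model.predict_proba(candidate_x.reshape(1, -1))[0]
    return cross_entropy(probs, candidate_y) < THRESHOLD
```

In practice, attackers calibrate the threshold (or train a separate attack classifier) using shadow models trained on data with a similar distribution, which is what makes these attacks practical without access to the real training set.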

This study identifies serious flaws in the data privacy of machine learning models. In particular, it shows that data leakage is significantly greater when an attacker is allowed to poison the training data than under regular inference attacks. While researchers develop more sophisticated inference attacks to reveal new vulnerabilities, a team at NUS has built an open-source tool to help analyze data leakage from AI models. The tool simulates membership inference attacks and uses them to quantify the level of risk; it helps identify weak points in a dataset and suggests techniques that can mitigate the leakage. The NUS team calls this tool ML Privacy Meter. Until such data privacy protection tools are properly integrated, no AI model is immune to inference attacks.
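
As a rough illustration of what such an auditing tool measures, the sketch below scores known members and non-members of the training set with a loss-based attack signal and reports the ROC AUC separating the two groups (0.5 means no measurable leakage, 1.0 means total leakage). This is a generic, hypothetical example, not the actual ML Privacy Meter API; the scikit-learn-style `predict_proba` interface is an assumption.

```python
# Quantify membership inference risk as the ROC AUC of a loss-based attack.
# Generic illustration of the auditing idea, not the ML Privacy Meter API.
import numpy as np
from sklearn.metrics import roc_auc_score


def leakage_auc(target_model, members_x, members_y, nonmembers_x, nonmembers_y) -> float:
    """ROC AUC of a loss-based membership attack; higher means more leakage."""
    def scores(xs: np.ndarray, ys: np.ndarray) -> np.ndarray:
        probs = target_model.predict_proba(xs)                 # assumed sklearn-style API
        losses = -np.log(probs[np.arange(len(ys)), ys] + 1e-12)
        return -losses                                         # lower loss => more member-like

    y_true = np.concatenate([np.ones(len(members_y)), np.zeros(len(nonmembers_y))])
    y_score = np.concatenate([scores(members_x, members_y),
                              scores(nonmembers_x, nonmembers_y)])
    return roc_auc_score(y_true, y_score)
```

An AUC close to 0.5 indicates the attack cannot distinguish members from non-members, while values approaching 1.0 indicate the model is leaking substantial information about its training data.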

Article: https://arxiv.org/pdf/2204.00032.pdf

References:

  • https://arxiv.org/pdf/1610.05820.pdf
  • https://arxiv.org/pdf/2204.00032.pdf
  • https://techxplore.com/news/2022-04-involve-poisoning-machine.html
  • https://github.com/privacytrustlab/ml_privacy_meter

Sherry J. Basler