Machine Learning Threats: Real World Examples

TL;DR

Yes, machine learning systems are vulnerable to attack, and these aren't just theoretical problems: real-world incidents have already happened. This guide explains five common threats and how to protect your models.

1. Data Poisoning Attacks

Data poisoning happens when attackers inject manipulated records into the training set of a machine learning model, for example by flipping labels or planting crafted samples. The model learns incorrect patterns, leading to misclassifications or other unwanted behaviour. It's like teaching someone wrong facts.
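
A minimal sketch of the idea using scikit-learn (the synthetic dataset, the logistic regression model, and the 10% flip rate are all illustrative choices): an attacker who can flip a slice of the training labels measurably degrades the model that ships.

```python
# Label-flipping poisoning sketch: train the same model on clean and on
# poisoned labels, then compare held-out accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Attacker flips the labels of 10% of the training examples.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=len(y_train) // 10, replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```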

2. Adversarial Examples

Adversarial examples are inputs specifically crafted to fool a machine learning model, even though they look almost identical to legitimate inputs to a human.
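
A small sketch of the classic fast gradient sign method (FGSM), here applied to a linear model so the input gradient has a closed form and no deep learning framework is needed. The dataset, model, and perturbation budget epsilon are illustrative, and a small epsilon won't flip every prediction.

```python
# FGSM sketch: for logistic regression the gradient of the log loss with
# respect to the input is (p - y) * w, so the attack is one line of numpy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

x, y_true = X[0], y[0]                    # one legitimate input
p = model.predict_proba([x])[0, 1]
grad = (p - y_true) * model.coef_[0]      # dLoss/dx for log loss

epsilon = 0.5                             # per-feature perturbation budget
x_adv = x + epsilon * np.sign(grad)       # step that increases the loss

print("original prediction:   ", model.predict([x])[0])
print("adversarial prediction:", model.predict([x_adv])[0])
print("max per-feature change:", np.abs(x_adv - x).max())
```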

3. Model Extraction Attacks

Model extraction attacks involve stealing the knowledge embedded within a machine learning model without access to its training data or parameters: the attacker queries the deployed model repeatedly and uses its responses to train a local copy.
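
In the simplest version, the attacker just calls the victim's prediction API and fits a surrogate on the answers. In the sketch below, the random forest victim, the random query distribution, and the surrogate choice are all assumptions made for illustration.

```python
# Model extraction sketch: the attacker only ever calls predict() on the
# victim, then fits a local copy on the (query, answer) pairs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X, y)   # the black box

# Attacker samples queries (here: random inputs) and records the responses.
rng = np.random.default_rng(0)
queries = rng.normal(size=(5000, 10))
stolen_labels = victim.predict(queries)

surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# How often does the stolen copy agree with the victim on real data?
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate/victim agreement: {agreement:.1%}")
```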

4. Evasion Attacks

Evasion attacks occur at inference time – after the model has been trained and deployed. Attackers modify inputs so that they bypass security checks or get misclassified.
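
A toy illustration against a naive Bayes spam filter: the attacker never touches the model, only the input at inference time. The tiny corpus and the character-swap trick are deliberately simplified stand-ins for real obfuscation techniques.

```python
# Evasion sketch: obfuscated spam words fall out of the filter's vocabulary,
# so the message is likely scored as ham even though a human still reads it
# as spam.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "win free money now", "claim your free prize", "cheap pills online",
    "meeting at noon today", "please review the report", "lunch with the team",
]
train_labels = [1, 1, 1, 0, 0, 0]    # 1 = spam, 0 = ham

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(train_texts, train_labels)

original = "win free money now"
evasive = "w1n fr3e m0ney please"    # spam tokens obfuscated, benign word added

print("original:", spam_filter.predict([original])[0])  # flagged as spam (1)
print("evasive: ", spam_filter.predict([evasive])[0])    # likely passes as ham (0)
```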

5. Supply Chain Attacks

Supply chain attacks compromise the third-party libraries, pre-trained models, or datasets used in your machine learning pipeline, so malicious code or data enters your system through a dependency you trust.
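
One practical mitigation is to pin and verify a checksum for every external artifact before it enters the pipeline. The sketch below is a minimal version of that idea; the file name, contents, and digest handling are placeholders rather than a complete supply-chain defence.

```python
# Checksum pinning sketch: record a SHA-256 digest when you first vet an
# artifact, then refuse to train if the bytes ever change.
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Stand-in for a dataset downloaded from an upstream source.
Path("train.csv").write_text("feature,label\n0.1,0\n0.9,1\n")
PINNED_DIGEST = sha256_of("train.csv")   # pin this value at vetting time

def load_verified(path: str, expected: str) -> bytes:
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(f"checksum mismatch for {path}: got {actual}")
    return Path(path).read_bytes()

data = load_verified("train.csv", PINNED_DIGEST)  # raises if the file was swapped
print("dataset verified:", len(data), "bytes")
```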
