Blog | G5 Cyber Security

New Framework Released to Protect Machine Learning Systems From Adversarial Attacks

Microsoft has released a framework to help security analysts detect, respond to, and remediate adversarial attacks against machine learning (ML) systems. The Adversarial ML Threat Matrix is an attempt to organize the different techniques employed by malicious adversaries to subvert ML systems. According to a Gartner report cited by Microsoft, 30% of all AI cyberattacks through 2022 are expected to leverage training-data poisoning, model theft, or adversarial samples to attack ML-powered systems. The release is the latest in a series of moves to secure AI against data poisoning and model-evasion attacks.
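To make the "adversarial samples" attack class concrete, here is a minimal, self-contained sketch of a fast-gradient-sign-method (FGSM) style evasion attack against a toy logistic-regression classifier. The weights and input values are invented for illustration; real attacks of this kind compute gradients against a victim's trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """FGSM-style evasion: nudge the input in the direction that
    increases the model's loss, degrading its prediction."""
    p = sigmoid(w @ x + b)              # model's predicted probability
    grad_x = (p - y_true) * w           # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)    # bounded, sign-only perturbation

# Hypothetical trained weights and a correctly classified input.
w = np.array([2.0, -1.5, 0.5])
b = 0.1
x = np.array([0.8, -0.4, 0.3])
y = 1.0  # true label

before = sigmoid(w @ x + b)
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
after = sigmoid(w @ x_adv + b)
print(f"confidence before: {before:.3f}, after: {after:.3f}")
```

Even this small perturbation noticeably erodes the model's confidence in the correct class, which is exactly the behavior the threat matrix catalogs under model-evasion techniques.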

Source: https://thehackernews.com/2020/10/adversarial-ml-threat-matrix.html