The Adversarial ML Threat Matrix is an extension of MITRE's ATT&CK framework for classifying attack techniques. The matrix draws on a variety of case studies to identify the tactics and techniques that attackers commonly use. The information should help not just the developers of ML systems but also the companies that use those systems to secure them, says Jonathan Spring, a senior member of the technical staff in the CERT Division of Carnegie Mellon University's Software Engineering Institute. Only three of the 28 companies surveyed by Microsoft believed they had the tools in place to secure their ML systems.
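As a rough illustration of how such a matrix organizes information, the minimal sketch below models a small slice of a tactics-to-techniques mapping as plain data. The specific tactic and technique names are placeholders chosen for illustration, not an authoritative copy of the matrix's contents.

```python
from typing import Dict, List

# Illustrative only: a tiny slice of a tactics -> techniques mapping.
# Tactic and technique names are placeholders, not official matrix entries.
threat_matrix: Dict[str, List[str]] = {
    "Reconnaissance": ["Acquire public ML artifacts", "Search victim-owned websites"],
    "ML Model Access": ["Inference API access", "Physical environment access"],
    "Impact": ["Evade ML model", "Erode ML model integrity"],
}

def techniques_for(tactic: str) -> List[str]:
    """Return the techniques recorded under a given tactic, if any."""
    return threat_matrix.get(tactic, [])

if __name__ == "__main__":
    for tactic, techniques in threat_matrix.items():
        print(f"{tactic}: {', '.join(techniques)}")
```

A defender might use a structure like this to map observed attacker behavior from an incident back to a named tactic, which is the kind of case-study-driven classification the matrix is meant to support.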