NIST Finalizes Cyber Attack Guidance for Adversarial Machine Learning
SOURCE: NATLAWREVIEW.COM
APR 11, 2025
by: Hunton Andrews Kurth's Privacy and Cybersecurity practice, Privacy and Information Security Law Blog
On March 24, 2025, the U.S. National Institute of Standards and Technology (“NIST”) published a report titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations” (the “Report”). The Report provides a taxonomy of concepts and defines terminology in the field of adversarial machine learning, identifies current challenges in the life cycle of AI systems, and describes methods for mitigating and managing the consequences of cyber attacks on such systems.
The Report states that it is directed primarily at those responsible for designing, developing, deploying, evaluating, and governing AI systems. It is designed to help secure AI applications against attacks that include adversarial manipulation (poisoning) of training data, adversarial inputs crafted to degrade the performance of AI systems, and malicious manipulation of or interaction with models to exfiltrate sensitive information from training data.
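The attack classes the Report catalogs correspond to well-known techniques in the adversarial machine learning literature. As a purely illustrative sketch (not taken from the Report itself), the following shows one evasion-style attack, the Fast Gradient Sign Method, in which an input is perturbed at inference time to degrade a model's prediction; the model, input tensors, and epsilon value are hypothetical placeholders.

```python
# Illustrative sketch of an evasion attack (Fast Gradient Sign Method).
# The classifier, inputs, and epsilon are hypothetical placeholders; the NIST
# Report describes this class of attack but does not prescribe this code.
import torch
import torch.nn as nn

def fgsm_adversarial_example(model: nn.Module,
                             x: torch.Tensor,
                             y: torch.Tensor,
                             epsilon: float = 0.03) -> torch.Tensor:
    """Perturb input x so that the model's loss on the true label y increases."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

Mitigations discussed in the adversarial machine learning literature for this attack class include adversarial training and input sanitization; the Report's taxonomy organizes such mitigations alongside the attacks they address.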