“RMF: A Risk Measurement Framework for Machine Learning Models” Paper
RMF (Risk Measurement Framework for ML Models) is a new tool designed to help teams assess how vulnerable their machine learning models are to real-world attacks. It evaluates how easy and how costly attacking a model would be, helping teams prioritize the right defenses. RMF is especially valuable for AI systems in security-critical sectors such as finance and healthcare, supporting safer, more resilient deployment of machine learning.
Find more here: https://dl.acm.org/doi/10.1145/3664476.3670867
