"Failure Modes in Machine Learning: Adversarial and unintentional" academic article

This is a pretty interesting read.

It will likely be relevant to a few other people on this forum including the ML specialists at Sense.

“…taxonomy of machine-learning failures that encompasses both mistakes and attacks, or – in their words – intentional and unintentional failure modes. It’s a good basis for threat modeling.” -Schneier

Link to his blog:

Microsoft link: https://docs.microsoft.com/en-us/security/failure-modes-in-machine-learning

Direct PDF link: https://arxiv.org/ftp/arxiv/papers/1911/1911.11034.pdf


Thanks for sharing. I’ll pass it along to the team as well.

Interesting taxonomy, but I think Microsoft is a little weak in the number and categorization of unintentional failure modes. I’m glad to see they are thinking about how to look at ML errors, but I think there are many other flavors that they either did not see or have overgeneralized into a non-prescriptive category. For instance, I see many errors related to model interference: two or more ML models interfering with each other when trying to make multiple predictions from a single training set.
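To make that concrete, here is a toy sketch (entirely my own, not from the paper) of one flavor of interference: two prediction tasks trained from the same data through shared parameters. Each task is perfectly learnable on its own, but joint training forces a compromise that degrades both. The data, targets, and the single shared weight are all hypothetical illustrations.

```python
# Hypothetical toy example of multi-task interference: two tasks share
# one parameter w, and their gradients pull it in opposite directions.

def mse(w, xs, ys):
    """Mean squared error of the linear prediction w * x."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def train(xs, ys_list, steps=500, lr=0.1):
    """Gradient descent on the summed loss of all tasks sharing weight w."""
    w = 0.5
    for _ in range(steps):
        grad = 0.0
        for ys in ys_list:
            grad += sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

xs = [-1.0, -0.5, 0.5, 1.0]
task_a = [x for x in xs]    # task A: y = x,  wants w = +1
task_b = [-x for x in xs]   # task B: y = -x, wants w = -1

w_a = train(xs, [task_a])              # converges near +1: task A alone is easy
w_joint = train(xs, [task_a, task_b])  # converges near 0: the compromise

print(mse(w_a, xs, task_a))      # near zero when trained alone
print(mse(w_joint, xs, task_a))  # much worse when trained jointly
```

The interference here is structural, not a bug: as long as the two prediction targets disagree and the parameter is shared, no training schedule fixes it. That is roughly the kind of error mode I don’t see captured cleanly in their unintentional categories.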