I received a message recently in which a leader observed that advances in ML/AI in a given field will “make us even more effective at our work.” While there may be an opportunity there, these technologies also present another target for mature and advanced hostile activity. In global financial services enterprises, we are all using them, and depending on the nature of your ML/AI application, attack-influenced outputs could have serious negative consequences. Abuse of ML/AI has been a thing for quite a while: machine learning as commonly implemented is vulnerable to what researchers call ‘wild patterns’ or ‘adversarial machine learning.’
Modern technologies based on pattern recognition, machine learning and data-driven artificial intelligence have been fooled by carefully perturbed inputs into delivering misleading outputs. Anant Jain’s primer, cited below, includes a useful set of illustrations, and Battista Biggio & Fabio Roli have published a more thorough history of the topic (also cited below). If you still can’t picture how this might happen today, see the recent news story referenced at the end of this post.
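To make the mechanism concrete: one of the best-known constructions surveyed in that literature, the fast gradient sign method, nudges each feature of an input x in whichever direction most increases the model’s loss:

    x_adv = x + ε · sign(∇x J(θ, x, y))

Here J is the model’s loss function, θ its parameters, and ε a perturbation budget kept small enough that x_adv still looks unremarkable to a human reviewer, even though the model’s output changes.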
If you are using any of the ML/AI-enabled, security-related platforms or services, you may want to invest in a quick read of the two papers cited at the end of this post. If you are actively involved in engineering the use of such a platform, you should also consider reviewing the resources from IBM below:
IBM publishes the Adversarial Robustness Toolbox, a suite of attacks against ML models & classifiers along with defensive methods and additional work on detection, and it remains an active project: https://github.com/IBM/adversarial-robustness-toolbox & https://adversarial-robustness-toolbox.readthedocs.io/en/latest/index.html
There are useful links to supporting resources throughout these materials.
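To make the risk tangible, here is a minimal sketch of the kind of experiment the toolbox supports: train an ordinary model, generate perturbed inputs against it, and measure how far accuracy falls. It assumes the toolbox’s 1.x Python API plus scikit-learn; the logistic-regression model, the iris dataset, and the eps budget are illustrative choices, not recommendations.

    # A minimal sketch using IBM's Adversarial Robustness Toolbox (ART).
    # Assumes ART 1.x ("pip install adversarial-robustness-toolbox") and
    # scikit-learn; model, dataset, and eps are illustrative choices only.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    from art.attacks.evasion import FastGradientMethod
    from art.estimators.classification import SklearnClassifier

    # Train an ordinary classifier on clean data.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Wrap the model for ART and generate adversarially perturbed inputs.
    classifier = SklearnClassifier(model=model)
    attack = FastGradientMethod(estimator=classifier, eps=0.3)
    X_adv = attack.generate(x=X_test)

    # The gap between these two numbers is the attack's effect.
    print("accuracy on clean inputs:    ", model.score(X_test, y_test))
    print("accuracy on perturbed inputs:", model.score(X_adv, y_test))

The perturbed inputs differ from the originals by at most eps per feature, yet the accuracy drop can be substantial; that asymmetry is exactly what the papers cited below describe.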
As these platforms & services become more important to your organization and to the success of your customers, pay special attention to the details of your implementation. We all know that hostile agents are good at manipulating humans, so we have invested in a broad spectrum of efforts to resist those attacks and to reduce the impact when they succeed. We all need to approach ML/AI-enabled technologies and services in an analogous way, and ensure that our vendors & partners are doing the same.
There is a good high-level primer on this topic at:
“Breaking neural networks with adversarial attacks — Are the machine learning models we use intrinsically flawed?”
By Anant Jain, 02-09-2019.
There is an excellent history of this topic at:
“Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning.”
By Battista Biggio & Fabio Roli, 2018.
This is not an abstract, niche concern. See: “Police have used celebrity look-alikes, distorted images to boost facial-recognition results, research finds.”
By Drew Harwell, The Washington Post, 05-16-2019.