Facial-Identification Bias and Error – Lesson For AI/ML-Enabled Security Services

February 2, 2019

Six months ago I published a short rant about the potential for material bias and unknown error in the many AI/ML-driven security services being pitched to global financial services enterprises. Since that time, and in the most general terms, my experiences in this field have been less than positive. The central themes I hear from proponents of these services seem to be: “you don’t get it, the very nature of AI/ML incorporates constant improvements,” and “you are just resistant to change.” There seems to be little appetite for investigating the design, implementation, and operational details needed to understand whether a given service would deliver cost- and risk-relevant protections — details that should be in the foreground of our efforts to protect trillions of dollars worth of other people’s money. They are not. “Buy in, buster.” Then a quick return to what seems central to our industry’s global workforce — distraction. Ugh.

Because of the scale of our operations and their interconnectedness with global economic activity, financial services risk management professionals need to do the work required to make ‘informed-enough’ decisions.

Recent assessments of leading facial-identification systems have shown that some incorporate material bias and error. In a manner analogous to facial recognition technologies, AI/ML-driven security analysis technology is coded, configured, and trained by humans, and so carries the same real potential for material bias and unknown error.

Expense pressures and an enduring faith in technology have delivered infrastructure complexity and attack surfaces that were unthinkable only a few years ago. Concurrently, hostile activity is more diverse and continues to grow in scale. We need to find new ways to deal with that complexity at scale in an operational environment that is in constant (often undocumented) flux. Saying “yes” to opaque AI/ML-enabled event/threat/vulnerability analysis services might be the right thing to do in some situations. Be prepared, though, for the day when your risk management operations are exposed to legal discovery… Will “I had faith in my vendor” be good enough to protect your brand? That seems like a sizable risk. Bias and error find their way into much of what we do. Attempting to identify and deal with them and their potential impacts has been part of global financial services risk management for decades. Don’t let AI/ML promoters eliminate that practice from your operations.

REFERENCES:
“Bias & Error In Security AI/ML.”
https://completosec.wordpress.com/2018/07/14/bias-error-in-security-ai-ml/

“Amazon facial-identification software used by police falls short on tests for accuracy and bias, new research finds.”
https://www.washingtonpost.com/technology/amazon-facial-identification-software-used-by-police-falls-short-on-tests-for-accuracy-and-bias-new-research-finds/2019/01/25/fa74fbb5-1079-4cc1-8cb4-23d71bd238e2_story.html
By Drew Harwell, 01-25-2019
