Twitter Reduces Privacy Options

April 11, 2020

The Electronic Frontier Foundation (EFF) provided some context for the recent change in Twitter privacy options.  I think that it is an excellent read and recommend it to anyone involved in Financial Services security — especially those involved in mobile application architecture, design, and implementation.

Their conclusion:

Users in Europe retain some level of control over their personal data in that they get to decide whether advertisers on Twitter can harvest user’s device identifiers. All other Twitter users have lost that right.

The more broadly available users’ device identifiers become — especially in the context of their behaviors (how they use their devices) — the harder it is to resist a range of attacks.  We already have a difficult time identifying customers, vendors, contractors, the people we work with, and leaders throughout our organizations.  We depend on all kinds of cues (formal and informal) for making trust decisions.  As the pool of data available to hostile agents grows along every relevant dimension (because if it is gathered, it will be sold and/or leaked), the more difficult it is for us to find information that only the intended/expected individual would know or would have.

Defending against competent social engineering is already a great challenge — and behaviors like Twitter’s* will make it more difficult.

Note: Twitter is hardly alone in its attraction to the revenue that this type of data sales brings in…


Breach May Indicate Quality Management Weaknesses

February 26, 2020

There is a new reason for concern about facial recognition technology, surveillance, and the error & bias inherent in their use.  The quality of the applications that make up these systems may be less well managed than one might assume or hope.

Clearview AI, a startup that has scraped social media platforms to compile billions of photos for its facial recognition technology, reported that:

…an intruder “gained unauthorized access” to its list of customers, to the number of user accounts those customers had set up, and to the number of searches its customers have conducted.
…that there was “no compromise of Clearview’s systems or network.”

Tor Ekeland, an attorney for the company, said what I read as the equivalent of ‘trust us & don’t worry about it’:

“Security is Clearview’s top priority, unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security.”

The company sells its services to hundreds of law-enforcement agencies & others.  The New York Times reported that Clearview’s app is being used by police to identify victims of child sexual abuse.

In one of their services, a user uploads a photo of a person, and the application replies with links to Internet-accessible photos on the platforms from which Clearview scraped them.  In another (not yet a public product), it appears that there are interfaces to augmented reality devices so the user might be able to identify every person they see.
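For readers who want a concrete sense of the mechanics, a service like this amounts to nearest-neighbor search over face embeddings. The sketch below is a toy illustration under that assumption — the embedding dimensions, the similarity threshold, and the index shape are all hypothetical, not Clearview’s actual implementation:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_matches(query, index, threshold=0.9):
    # index maps a source URL to the embedding of the face found there.
    # Return the URLs whose stored embedding is close enough to the query,
    # best match first.
    scored = [(url, cosine(query, emb)) for url, emb in index.items()]
    return [url for url, s in sorted(scored, key=lambda t: -t[1]) if s >= threshold]
```

The point of the sketch is that the “link back to the hosting platform” is just the payload attached to whichever stored embedding scores highest — which is why the accuracy of the matching step carries all the weight.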

So, what could go wrong?

Based on the available reporting and their lawyer’s statements, my assumptions include:

  • The company amasses billions of images of human faces along with metadata about each — which includes, but is not limited to, links to the original hosting location online.
  • The company sells their services to policing and security-related organizations world-wide.
  • Something went seriously wrong with the way that their application (and/or infrastructure) enforced access control — leading me to believe that the company has ineffective secure coding and/or secure code analysis practices.
  • The company states that we should accept their assertion that breaches of Clearview’s applications are just a part of doing business.

Application quality and management attitude/values matter.

Because Clearview’s decisions about which photos of a given human face are associated with other photos of the same individual can be used to identify criminal suspects, those decisions carry weight in criminal investigations and in the subsequent litigation & imprisonment…  If Clearview AI points an investigator to the wrong individual, the consequences can be extreme.  In that context — because we should not expect or tolerate unfounded or mistaken arrest or imprisonment — weak or otherwise ineffective application architecture, design, or implementation should be strongly resisted.  To me, nothing in Clearview’s public statements about the breach inspires confidence that they have that mandate-for-quality in their company’s DNA (you may read their statements differently).

Ineffective application development (security issues are one facet) can result in almost any kind of flaw — some of which could result in incidental or systemic errors in matching photos.  This has happened before — there have been examples of widely-used face-matching AI implementations being materially less accurate on images associated with a given race or gender.

There are other risks.  When used by some individuals (authorized or not), it seems reasonable to assume that Clearview’s system(s) will be used in ways that result in blackmail, coercion, or other types of attacks/threats.  This is not to imply that the company designed it for those purposes, just that it seems like a good fit.  (We tolerate the sale of handguns, axes, and steak knives even though they can play a key role in blackmail, coercion, or other types of attacks/threats as well.)  In part because of its global access and the ability of a hostile party to remain largely ‘unseen’, attacks that use Clearview’s applications are materially different from those involving other weapons.

In global financial services enterprises we deal with constant oversight of our risk management practices.  The best teams seem to be organized in ways that enhance the probability of strong and effective attack resistance over time — tolerating the challenges of evolving features, technology, operations, and attacks.  In my experience, it is often relatively easy to identify this type of team…

That is one end of a broad continuum of quality management applicable to any industry.  Some teams exist elsewhere on that continuum, and it is not always easy to peg where that might be for given organizations.  In the public facts and company statements associated with the recent Clearview breach, it does not look like they occupy the location on that continuum that we would hope.


“Facial-Recognition Company That Works With Law Enforcement Says Entire Client List Was Stolen.” By Betsy Swan, Feb. 26, 2020

“Clearview AI has billions of our photos. Its entire client list was just stolen.” By Jordan Valinsky, February 26, 2020

And for some broader background:

“The Secretive Company That Might End Privacy as We Know It.” By Kashmir Hill, Published Jan. 18, 2020 and Updated Feb. 10, 2020

“This man says he’s stockpiling billions of our photos.” By Donie O’Sullivan, February 10, 2020

Capital One Concerns Linked To Overconfidence Bias?

August 2, 2019

Earlier this week, on July 29, FBI agents arrested Paige “erratic” Thompson in connection with the downloading of ~30 GB of Capital One credit application data from a rented cloud server.

In a statement to the FBI, Capital One reported that an intruder executed a command that retrieved the security credentials for a web application firewall administrator account, used those credentials to list the names of data folders/buckets, and then copied (synced) them to buckets controlled by the attacker.  The incident appears to have affected approximately 100 million people in the United States and six million in Canada.

If you just want to read a quick summary of the incident, try “For Big Banks, It’s an Endless Fight With Hackers.” or “Capital One says data breach affected 100 million credit card applications.”

I just can’t resist offering some observations, speculation, and opinions on this topic.

Since as early as 2015 the Capital One CIO has hyped their being first to cloud, their cloud journey, and their cloud transformation, and has asserted that their customers’ data was more secure in the cloud than in their private data centers.  Earlier this year the company argued that moving to AWS will “strengthen your security posture” and highlighted their ability to “reduce the impact of compliance on developers” (22:00) — using AWS security services and the network of AWS security partners — software engineers and security engineers “should be one in the same.” (9:34)

I assume that this wasn’t an IT experiment, but an expression of a broader Capital One corporate culture, their values and ethics.  I also assume that there was/is some breakdown in their engineering assumptions about how their cloud infrastructure and its operations worked.  How does this happen?  Given the information available to me today, I wonder about the role of malignant group-think & echo chamber at work or some shared madness gripping too many levels of Capital One management.  Capital One has to have hordes of talented engineers — some of whom had to be sounding alarms about the risks associated with their execution on this ‘cloud first’ mission (I assume they attempted to communicate that it was leaving them open to accusations of ‘mismanaging customer data’, ‘inaccurate corporate communications,’ excessive risk appetite, and more).  There were lots of elevated risk decisions that managers (at various levels) needed to authorize…

Based on public information, it appears that:

  • The sensitive data was stored in a way that it could be read from the “local instance” in clear text (ineffective or absent encryption).
  • The sensitive data was stored on a cloud version of a file system, not a database (weaker controls, weaker monitoring options).
  • The sensitive data was gathered by Capital One starting in 2005 — which suggests gaps in their data life-cycle management (ineffective or absent data life-cycle management controls).
  • There were no effective alerts or alarms announcing unauthorized access to the sensitive data (ineffective or absent IAM monitoring/alerting/alarming).
  • There were no effective alerts or alarms announcing ‘unexpected’ or out-of-specification traffic patterns (ineffective or absent data communications or data flow monitoring/alerting/alarming).
  • There were no effective alerts or alarms announcing social media, forums, dark web, etc. chatter about threats to Capital One infrastructure/data/operations/etc. (ineffective or absent threat intelligence monitoring & analysis, and follow-on reporting/alerting/alarming).
  • Capital One’s conscious program to “reduce the compliance burden that we put on our developers” (28:23) may have obscured architectural, design, and/or implementation weaknesses from Capital One developers (a lack of security transparency, possibly overconfidence that developers understood their risk management obligations, and possible weaknesses in their secure software program).
  • Capital One ‘wrapped’ a gap in IAM vendor Sailpoint’s platform with custom integrations to AWS identity infrastructure (16:19) (potentially increasing the risk of misunderstanding or omission in this identity & access management ‘plumbing’).
  • There may have been application vulnerabilities that permitted the execution of server side commands (ineffective input validation, scrubbing, etc. and possibly inappropriate application design, and possible weaknesses in their secure code review practices and secure software training).
  • There may have been infrastructure configuration decisions that permitted elevated rights access to local instance meta-data (ineffective configuration engineering and/or implementation).
  • There must be material gaps or weaknesses in Capital One’s architecture risk assessment practices or in how/where they are applied, and/or they must have been incomplete, ineffective, or worse for a long time.
  • And if this was the result of ‘designed-in’ or systemic weaknesses at Capital One, there seems to be room to question whether their SEC filings about the effectiveness of their controls are supportable by the facts of their implementation and operational practices.
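Several of those bullets come down to input handling. Public reporting suggested the initial foothold was a server-side request forgery (SSRF) that reached the cloud instance metadata endpoint (169.254.169.254), which hands out temporary credentials. A minimal sketch of the kind of URL guard that blunts that class of attack — a simplification for illustration, not Capital One’s or AWS’s actual control — might look like:

```python
import ipaddress
from urllib.parse import urlsplit

# Address ranges a server-side fetcher should never reach on behalf of a user:
# link-local (where the instance metadata service lives), RFC 1918, loopback.
BLOCKED_NETS = [ipaddress.ip_network(n) for n in (
    "169.254.0.0/16", "10.0.0.0/8", "172.16.0.0/12",
    "192.168.0.0/16", "127.0.0.0/8",
)]

def is_safe_url(url: str) -> bool:
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https"):
        return False
    host = parts.hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Hostname rather than IP literal: production code must resolve it
        # and re-check the resulting address(es) to stop DNS-rebinding tricks.
        return True
    return not any(addr in net for net in BLOCKED_NETS)
```

A guard like this is only one layer; the bullets above argue that monitoring, encryption, and alerting layers all had to fail at the same time.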

In almost any context this is a pretty damning list.  Most of these are areas where global financial services enterprises are supposed to be experts.

Aren’t there also supposed to be internal systems in place to ensure that each financial services enterprise achieves risk-reasonable levels of excellence in each of the areas mentioned in the bullets above?  And where were the regulations & regulators that play a role in assuring that is the case?

How does an enormous, heavily-regulated financial services enterprise get into a situation like this?  There is a lot of psychological research suggesting that overconfidence is a widespread cognitive bias and I’ve read, for example, that it underpins what is sometimes called ‘organizational hubris,’ which seems like a useful label here.   The McCombs School of Business Ethics Unwrapped program defines ‘overconfidence bias’ as “the tendency people have to be more confident in their own abilities than is objectively reasonable.”  That also seems like a theme applicable to this situation.  Given my incomplete view of the facts, it seems like this may have been primarily a people problem, and only secondarily a technology problem.  There is probably no simple answer…

Is the Capital One case unique?  Could other financial services enterprises be on analogous journeys?

“Capital One Data Theft Impacts 106M People.” By Brian Krebs.
“Why did we pick AWS for Capital One? We believe we can operate more securely in their cloud than in our own data centers.” By Rob Alexander, CIO, Capital One, and
“For Big Banks, It’s an Endless Fight With Hackers.” By Stacy Cowley and Nicole Perlroth, 30 July 2019.
“Capital One says data breach affected 100 million credit card applications.” By Devlin Barrett.
“AWS re:Inforce 2019: Capital One Case Study: Addressing Compliance and Security within AWS (FND219)”
“Frequently Asked Questions.”
Overconfidence Bias defined:
Scholarly articles for cognitive bias overconfidence:
“How to Recognize (and Cure) Your Own Hubris.” By John Baldoni.


Cloud File Sync Requires New Data Theft Protections

June 28, 2018

Microsoft Azure File Sync has been slowly evolving since it was released last year.

The company also added “Azure:” [Azure Drive] to PowerShell to support discovery and navigation of all Azure resources including filesystems.

Azure File Sync helps users keep Azure File shares in sync with their Windows Servers. Microsoft promotes the happy path, where these servers are on-premises in your enterprise, but it supports syncing with endpoints of any trust level.

What are my concerns?

The combination makes it much easier to discover Azure-hosted data and data exfiltration paths, and then to set them up to automatically ship new data into or out of your intended environment(s).  In other words, it helps hostile parties introduce their data or their malware into your organization’s Azure-hosted file systems, or steal your data while leaving a minimum of evidence describing who did what.

Why would I say that?

Many roles across global Financial Services enterprises are engaging in architecture risk analysis (ARA) as part of their day-to-day activities.  If we approach this topic as if we were engaged in ARA fact finding, we might discover the following:

Too easy to share with untrustworthy endpoints:
It appears that anyone with the appropriate key (a string) can access a given Azure File Share from any Azure VM on any subscription. What could go wrong?
Microsoft customers can use shared access signatures (SAS) to generate tokens that have specific permissions, and which are valid for a specified time interval. These shared access signature keys are supported by the Azure Files (and File Sync) REST API and the client libraries.
A financial services approach might require that Azure File Shares on a given private Virtual Network be secured so that they are only available via that Virtual Network, using a private IP address on the same network.

Weak audit trail:
If you need to mount the Azure file share over SMB, you currently must use the storage account keys assigned to the underlying Azure File Storage.
As a result, in the Azure logs and file properties the user name for connecting to a given Azure File share is the storage account name regardless of who is using the storage account keys. If multiple users connect, they have to share an account. This seems to make effective auditing problematic.  It also seems to violate a broad range of commitments we all make to regulators, customers, and other constituencies.
This limitation may be changing. Last month Microsoft announced a preview of more identity and authorization options for interacting with Azure storage. Time will tell.

Missing link(s) to Active Directory:
Azure Files does not support Active Directory directly, so those sync’d shares don’t enforce your AD ACLs.
Azure File Sync preserves and replicates all discretionary ACLs, or DACLs, (whether Active Directory-based or local) to all server endpoints to which it syncs. Because those Windows Server instances can already authenticate with Active Directory, Microsoft sells Azure File Sync as safe-enough (…to address that happy path).  Unfortunately, Azure File Sync will synchronize files with untrusted servers — where all those controls can be ignored or circumvented.

Requires weakening your hardened endpoints:
Azure File Sync requires that Windows servers host the AzureRM PowerShell module, which currently requires Internet Explorer to be installed. …Hardened server no more…

Plans for public anonymous access:
Microsoft is planning to support public anonymous read access to files stored on Azure file storage via its REST interface.

Port 445 (again):
Azure file storage configuration is exposed via TCP port 445. Is it wise to begin opening up port 445 of your Microsoft cloud environment? Given the history of Microsoft vulnerabilities exposed on port 445, many will probably hesitate.

Goal of hosting Windows File Server in Azure:
Microsoft intends to deliver Azure Files in a manner that ensures parity with Windows File Server.

What other potential issues or concerns should we investigate?

  • Does the Azure File Storage REST interface resist abuse well enough to support its use in specified use cases (since each use case will have given risks and opportunities)?
  • Can a given use case tolerate risks associated with proposed or planned Microsoft upgrades to Azure File Storage REST, Azure File Sync, or Azure:?
  • Are there impacts on or implications for the way we need to manage our Azure AD?
  • Others?

What do you think?


Increasingly Difficult to Conduct Sensitive Business

May 11, 2018

Craig S. Smith updates us on some of the latest misuses of Alexa and Siri — with attackers “exploiting the gap between human and machine speech recognition.”  Using only audio, an attacker can mute your device and then begin issuing commands.  At a minimum, this is a data leakage challenge.  Depending on the configuration of your mobile device or your Apple/Amazon/Google tabletop device, those commands may appear to be coming from you — along with the authority that brings.  For some that translates into a risk worth considering.

Working on any type of truly confidential business around your voice-ready devices is increasingly risk-rich.  For global Financial Services enterprises, the scale of the risks seems to warrant keeping significant distance between all voice-aware devices and your key leaders, those with material finance approval authority, anyone working on core investing strategy or its hands-on execution — the list goes on.  Leaving all mobile devices outside Board of Directors meetings is common practice.  Maybe that practice needs to be expanded.

Read this short article and think about your exposures.



“Alexa and Siri Can Hear This Hidden Command. You Can’t.” By Craig S. Smith


Panera Bread Didn’t Take Security Seriously

April 3, 2018

I finally just read Brian Krebs and Dylan Houlihan on the 2017-2018 Panera Bread data breach of millions of customer records via unsafe APIs and applications.  This breach involved a collection of seriously flawed and insecure software wrapped in seriously flawed management.  Everyone in our business should read this and ensure that our leaders do too.  Could this happen to your organization?

Dylan Houlihan had a couple excellent recommendations.  He wrote:


  • “We need to collectively examine what the incentives are that enabled this to happen. I do not believe it was a singular failure with any particular employee. It’s easy to point to certain individuals, but they do not end up in those positions unless that behavior is fundamentally compatible with the broader corporate culture and priorities.”
  • “If you are a security professional, please, I implore you, set up a basic page describing a non-threatening process for submitting security vulnerability disclosures. Make this process obviously distinct from the, “Hi I think my account is hacked” customer support process. Make sure this is immediately read by someone qualified and engaged to investigate those reports, both technically and practically speaking. You do not need to offer a bug bounty or a reward. Just offering a way to allow people to easily contact you with confidence would go a long way.”



“No, Panera Bread Doesn’t Take Security Seriously.” By Dylan Houlihan, 04-02-2018

“ Leaks Millions of Customer Records.” By Brian Krebs, 04-02-2018

Recent US-CERT & FBI Alert A Good Read — Applicable to Us

March 19, 2018

The United States Computer Emergency Readiness Team (US-CERT) recently released an alert about sophisticated attacks against individuals and infrastructure that contained an excellent explanation of the series of attacker techniques that are applicable to all global Financial Services enterprises. Many of the techniques are possible and effective because of the availability of direct Internet connections. Absent direct Internet connectivity, many of the techniques detailed in the CERT alert would be ineffective.

Global Financial Services enterprises, responsible for protecting hundreds of billions, even trillions of dollars (other people’s money), are attractive cybercrime targets. We are also plagued by hucksters & hypesters who are attempting to transform our companies into what they claim will be disruptive, agile organizations using one or another technical pitch that simply translates into “anything, anywhere, anytime.”  The foundation of these pitches seems to be “Internet everywhere” or even “replace your inconvenient internal networks with the Internet” while eliminating those legacy, constraining security practices.

We can all learn from the details in this Alert.

From the alert:

[The] alert provides information on Russian government actions targeting U.S. Government entities as well as organizations in the energy, nuclear, commercial facilities, water, aviation, and critical manufacturing sectors. It also contains indicators of compromise (IOCs) and technical details on the tactics, techniques, and procedures (TTPs) used by Russian government cyber actors on compromised victim networks. DHS and FBI produced this alert to educate network defenders to enhance their ability to identify and reduce exposure to malicious activity.

DHS and FBI characterize this activity as a multi-stage intrusion campaign by Russian government cyber actors who targeted small commercial facilities’ networks where they staged malware, conducted spear phishing, and gained remote access into energy sector networks. After obtaining access, the Russian government cyber actors conducted network reconnaissance, moved laterally, and collected information pertaining to Industrial Control Systems (ICS).

Take the time to review it. Replace “industrial control systems” with your most important systems as you read.

For many of us, the material may be useful in our outreach and educational communications.

The 20-some recommendations listed in the “General Best Practices Applicable to this Campaign” section also seem applicable to Financial Services.

“Alert (TA18-074A) Russian Government Cyber Activity Targeting Energy and Other Critical Infrastructure Sectors.” Release date: March 15, 2018.

“Cyberattacks Put Russian Fingers on the Switch at Power Plants, U.S. Says.” By Nicole Perlroth and David E. Sanger, The New York Times.

Cloud Risk Assessment Challenge Thoughts

February 3, 2018

Technology is often at the center of efforts to sell new business models. From some perspectives, “Cloud” is more about new business models than about technology-enabled capabilities. Over the last decade or more, “cloud” marketers and hypists have constructed intricate structures of propaganda that trap the unwary in a matrix, a fog, a web of artifice and deceit.[1]  I think that a “cloud first” belief system is misused in ways that sometimes results in excessive risk-taking.  Belief systems are tricky to deal with and can cause some to dismiss or ignore inputs that might impact core tenets or dogma.

My reading and first hand experience lead me to believe that many are willing to migrate operations to other people’s computers — “the cloud” — without clearly evaluating impacts to their core mission, their risk management story-telling, and risk posture. Too many cloud vendors remain opaque to risk assessment, while leaning heavily on assertions of “compliance” and alignment with additionally hyped ancillary practices [containers, agile, encryption, etc.].

None of this rant implies that all Internet-centric service providers are without value. My core concern is with the difficulty in determining the risks associated with using one or another of them for given global Financial Services use cases.  That difficulty is only amplified when some involved exist within a reality-resisting “cloud first” belief system.

Because some “cloud” business models are exceptionally misaligned with global Financial Services enterprise needs and mandates, it is critically important to understand them. A given “cloud” vendor’s attack surface combined with a prodigious and undisciplined risk appetite can result in material misalignment with Financial Services needs. Again, this does not invalidate all “cloud” providers for all use cases, it elevates the importance of performing careful, thorough, clear-headed, evidence-informed risk assessments.  In our business, we are expected, even required, to protect trillions of dollars of other people’s money, to live up to our long and short term promises, and to comply with all relevant laws, regulations, and contracts.  And we are expected to do so in ways that are transparent enough for customers, prospects, regulators, and others to determine if we are meeting their expectations.

  • Evidence is not something to be used selectively to support beliefs.
  • Research is not hunting for justifications of existing beliefs.
  • Hunt for evidence. Use your cognitive capabilities to evaluate it.
  • Soberly analyze your beliefs.
  • Let the evidence influence your beliefs.
  • When needed, build new beliefs.[2]

Effective risk management has little room for anyone captured within a given belief system or abusing the power to create one’s own reality.

This remains a jumbled and unfinished thought that I will continue to evolve here.

What do you think?

[1] Derived from a phrase by Michelle Goldberg.
[2] Thank you Alex Wall, Johnston, IA. Author of a Letter to the Editor in the Feb 3, 2018 Des Moines Register.

Another Exfiltration Tool

January 30, 2018

It is a challenge to keep up with the free HTTPS-enabled data exfiltration tools available.  As security professionals in global Financial Services enterprises, we have obligations to exhibit risk-reasonable behaviors.  Resisting easy, “invisible” data theft is a core deliverable in our layered security services.

Google is offering a cool “Cloud Shell” that falls into the category I was thinking of when I wrote the paragraph above.  It is a highly-functional Linux shell that is available to anyone with https access to the Internet.  There are lots of good reasons for Google to offer this service.  And they require an active credit card for initial on-boarding — allowing some to argue that there are limits to the anonymity this service might deliver.  There are also lots of global Financial Services enterprise misuse cases.  Quick, easy, difficult-to-understand data exfiltration being the first that came to mind.  Hosting “trustworthy” command and control applications is another.  With Internet access, sudo, and persistent storage the only limitations seem to be the creativity of any given hostile party.
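One layered defense against this class of exfiltration path is strict egress filtering: outbound HTTPS from sensitive environments is permitted only to an approved list of destinations, so an unapproved shell or storage endpoint never gets a connection. A minimal sketch of the matching logic (the hostnames are hypothetical):

```python
# Hypothetical approved outbound destinations for a sensitive network zone.
APPROVED_EGRESS = {"api.vendor-a.example", "storage.vendor-b.example"}

def egress_allowed(host: str, allowlist=APPROVED_EGRESS) -> bool:
    host = host.strip().lower().rstrip(".")
    # Allow exact matches and subdomains of an approved name, but never a
    # bare substring match ("evil-api.vendor-a.example.attacker.test" must
    # not pass just because it contains an approved name).
    return any(host == ok or host.endswith("." + ok) for ok in allowlist)
```

Note that this only narrows the problem: the post’s point stands that when an approved destination (here, a sanctioned cloud provider) is itself a capable shell, allow-listing alone cannot distinguish business use from misuse.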

Financial Services brands managing trillions of dollars for others need to protect against the misuse of tooling like this.  The challenge is that some of us use Google Cloud services for one or another subset of our business activities. And in those approved contexts, that represents risk-reasonable behavior.

This situation is just another example of external forces driving our internal priorities in ways that will require a quick response, and will also induce ongoing risk management workload.

So it goes.


Google Cloud Shell:

Low Profile Office 365 Breach Reported

August 18, 2017

A couple years ago I wrote:

“I am told by many in my industry (and some vendors) that ‘if we put it in the cloud it will work better, cheaper, be safer, and always be available.’ Under most general financial services use cases (as opposed to niche functionality) that statement seems without foundation.”

Although many individuals have become more sophisticated in the ways they pitch ‘the cloud’ I still hear versions of this story on a fairly regular basis…

Today I learned about a recent Office 365 service outage that reminded me that issues concerning our use of ‘cloud’ technology and the commitments we in the Global Financial Services business make to our customers, prospects, marketers, investors, and regulators seem to remain very much unresolved.

What happened?

According to Microsoft, sometime before 11:30 PM (UTC) on August 3rd, 2017, the company introduced an update to the Activity Reports service in the Office 365 admin center which resulted in the usage reports of one tenant being displayed in another tenant’s administrative portal.

Some customer o365 administrators noticed that the reported email and SharePoint usage for their tenants had spiked. When they investigated, the o365 AdminPortal displayed activity for users from one or more AzureAD domains outside their tenant. In the most general terms, this was a breach. The breach displayed names and email addresses of those users along with some amount of service traffic detail — for example, that a given user sent 193 and received 467 messages, uploaded 9 documents to SharePoint, and read 45 documents in the previous week.
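The control that evidently failed here is conceptually simple: every usage row is attributed to a tenant, and the rendering path filters on that attribution — failing closed when attribution is missing. A toy sketch, with a hypothetical row shape (this is an illustration of the concept, not Microsoft’s actual reporting pipeline):

```python
def reports_for_tenant(rows, tenant_id):
    """Return only the usage rows belonging to tenant_id, failing closed."""
    scoped = []
    for row in rows:
        owner = row.get("tenant_id")
        if owner is None:
            # A row with no tenant attribution must never be displayed.
            raise ValueError("unattributed usage row")
        if owner == tenant_id:
            scoped.append(row)
    return scoped
```

The lesson for those of us assessing cloud services is that tenant isolation is a property of code paths like this one, re-validated on every update — not a one-time architectural fact.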

Some subset of those o365 customers reported the breach to Microsoft.

Microsoft reported that they disabled the Activity Reports service at 11:40 PM UTC the same day, and that they had a fix in place by 3:35 AM UTC.

Why should I care?

As Global Financial Services Enterprises we make a lot of promises (in varying degrees of formality) to protect the assets for which we are responsible and we promote our ethical business practices. For any one of our companies, our risk profile is rapidly evolving in concert with expanded use of a range of cloud services. Those risks appear in many forms. All of us involved in Global Financial Services need our security story-telling to evolve in alignment with the specific risks we are taking when we choose to operate one or another portion of our operations in ‘the cloud.’ In addition, our processes for detecting and reporting candidate “breaches” also need to evolve in alignment with our use of all things cloud.

In this specific situation it is possible that each of our companies could have violated our commitments to comply with the European GDPR (General Data Protection Regulation), had it happened in August 2018 rather than August 2017. We all have formal processes to report and assess potential breaches. Because of the highly-restricted access to Office 365 and Azure service outage details, it seems easy to believe that many of our existing breach detection and reporting processes are no longer fully functional.

Like all cloud stuff, o365 and Azure are architected, designed, coded, installed, hosted, maintained, and monitored by humans (as is their underlying infrastructure of many and varied types).
Humans make mistakes, they misunderstand, they miscommunicate, they obfuscate, they get distracted, they get tired, they get angry, they ‘need’ more money, they feel abused, they are overconfident, they believe their own faith-based assumptions, they fall in love with their own decisions & outputs, they make exceptions for their employer, they market their services using language disconnected from raw service-delivery facts, and more. That is not the whole human story, but this list attempts to poke at just some human characteristics that can negatively impact systems marketed as ‘cloud’ on which all of us perform one or another facet of our business operations.

I recommend factoring this human element into your thinking about the value proposition presented by any given ‘cloud’ opportunity. All of us will need to ensure that all of our security and compliance mandated services incorporate the spectrum of risks that come with those opportunities. If we let that risk management and compliance activity lapse for too long, it could put any or all of our brands in peril.

“Data Breach as Office 365 Admin Center Displays Usage Data from Other Tenants.”
By Tony Redmond, August 4, 2017

European GDPR (General Data Protection Regulations)
