Protect Location Data

January 15, 2019

Location tracking is sensitive business, one loaded with real and perceived risks and injustices. There are global financial services use cases where location data can help resist fraud, for example: “Is the customer present for the requested financial transaction?” Do so only with explicit customer consent.
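A sketch of that consent-first posture (the function names and the 50 km threshold are invented for illustration, not drawn from any vendor API): evaluate device-to-terminal proximity only when explicit consent is already on file, and return "no opinion" rather than "fraud" when it is not.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def location_supports_transaction(consent_on_file, device_loc, terminal_loc, max_km=50.0):
    # Without explicit consent, refuse to evaluate location at all.
    if not consent_on_file:
        return None  # "no location opinion", not "fraud"
    return haversine_km(*device_loc, *terminal_loc) <= max_km
```

The `None` return is the important design choice: absence of consent must not be silently treated as a fraud signal.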

Mobile phones constantly communicate with nearby cell phone towers to help telecom providers know where to route given calls and texts. From this, telecom companies also work out the phone’s approximate location.

Telecom companies doing business in the United States sell access to their customers’ location data, primarily to location aggregators, who then sell it on to others. Despite what carriers want us to believe, location data is neither tightly controlled nor carefully regulated.

Telecoms preach that they protect customer data and that location-based services require users’ notice and consent. Their behavior says otherwise.

Carefully vet any use of location data services in your organization, and then tightly limit access to the resulting data. In the long run, to do otherwise is a needless risk of brand damage.

Joseph Cox published an article on this topic last week at Motherboard. It is worth reviewing to better understand how location services can be misused.

If you already employ these services — review your practices.

“I Gave a Bounty Hunter $300. Then He Located Our Phone.” by Joseph Cox


Some Upgrades Are Not Optional – Engineer Them For The Reality of Your Operations

December 5, 2018

Kubernetes enables complexity at scale across cloud-enabled infrastructure.

Like any other software, it also is — from time to time — vulnerable to attack.

A couple of days ago I read about a CVSS v3.0 9.8 (critical) Kubernetes vulnerability:

“In all Kubernetes versions prior to v1.10.11, v1.11.5, and v1.12.3, incorrect handling of error responses to proxied upgrade requests in the kube-apiserver allowed specially crafted requests to establish a connection through the Kubernetes API server to backend servers, then send arbitrary requests over the same connection directly to the backend, authenticated with the Kubernetes API server’s TLS credentials used to establish the backend connection.”

In the default configuration, the Kubernetes API server permits all users (authenticated and unauthenticated) to perform the discovery API calls that enable this escalation. As a result, in too many implementations, anyone who knows about this hole can take command of a targeted Kubernetes cluster. Game over.
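A minimal triage sketch this implies, assuming you can obtain the server's reported version string (for example from `kubectl version`): compare it against the patched floors named in the announcement, v1.10.11, v1.11.5, and v1.12.3.

```python
# First patched release per minor line, per the CVE-2018-1002105 announcement.
PATCHED = {(1, 10): (1, 10, 11), (1, 11): (1, 11, 5), (1, 12): (1, 12, 3)}

def parse(version):
    # "v1.11.4" -> (1, 11, 4)
    return tuple(int(p) for p in version.lstrip("v").split("."))

def is_patched(version):
    v = parse(version)
    floor = PATCHED.get(v[:2])
    if floor is None:
        # Minor lines older than 1.10 received no fix; 1.13+ shipped patched.
        return v >= (1, 13)
    return v >= floor
```

Anything that returns `False` here belongs at the top of your upgrade queue.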

Red Hat is quoted as describing the situation as: “The privilege escalation flaw makes it possible for any user to gain full administrator privileges on any compute node being run in a Kubernetes pod. This is a big deal. Not only can this actor steal sensitive data or inject malicious code, but they can also bring down production applications and services from within an organization’s firewall.”

This level of criticality is a reminder of how important it is to engineer-in the ability to perform on-demand Kubernetes infrastructure upgrades during normal business operations. In situations like the one described above, those upgrades must occur without material impact on business operations. In real business infrastructure, that is a serious engineering challenge.
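A toy model of that engineering constraint, with invented names: roll through nodes one at a time, and refuse to drain a node whenever the remaining in-service capacity would fall below what the business load requires. Real clusters add surge capacity, health checks, and rollback, but the headroom check is the heart of it.

```python
def rolling_upgrade_plan(nodes, capacity_needed, upgrade_one):
    """Upgrade nodes one at a time, refusing to drain a node whenever the
    remaining in-service capacity would drop below the business requirement.
    `nodes` maps node name -> capacity units; `upgrade_one` does the patching."""
    in_service = dict(nodes)
    for name in list(nodes):
        remaining = sum(c for n, c in in_service.items() if n != name)
        if remaining < capacity_needed:
            raise RuntimeError(f"no headroom to drain {name}; add surge capacity first")
        del in_service[name]            # drain the node
        upgrade_one(name)               # apply the patched version
        in_service[name] = nodes[name]  # return it to service
    return "upgraded"
```

If this sketch raises on your first dry run, that is the "today is not the day" problem surfacing in miniature.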

Today is not the day to be asking “how will we upgrade all this infrastructure plumbing in real-time, under real business loads?”

The recent Kubernetes vulnerability is a reminder of how complex the attack surface of every global financial services enterprise has become. With that complexity comes a material obligation to understand your implementations and their operation under expected conditions, one of which is the occasional critical vulnerability that must be fixed immediately.

Oh joy.

Kubernetes Security Announcement – v1.10.11, v1.11.5, v1.12.3 released to address CVE-2018-1002105
CVE-2018-1002105 Detail
Kubernetes’ first major security hole discovered

MacOS Firewall Bypass Demos

August 9, 2018

“You’re not going to buy a car and expect it to fly.” Patrick Wardle, Chief Research Officer at Digita Security and founder of Objective-See, describing why he presented some of his research on macOS firewall bypasses.

That sort of makes sense. Nobody buys a Mac and expects it to resist attack, right?

In any case, we all have members of our workforce using Macs for non-trivial business operations. We need to clearly understand the attack surface and the Mac’s resistance to attack. Wardle provides a little help with that exercise in his Black Hat presentation: “Fire & Ice: Making and Breaking macOS Firewalls.”

Tom Spring has a useful summary of the presentation on ThreatPost: “Black Hat 2018: Patrick Wardle on Breaking and Bypassing MacOS Firewalls.”  It is worth a read.  There is no reason for me to echo its content here.


“Fire & Ice: Making and Breaking macOS Firewalls”

“Black Hat 2018: Patrick Wardle on Breaking and Bypassing MacOS Firewalls” By Tom Spring

Cloud Concentration Brings Operational Risks

July 18, 2018

The sales job included a theme that the Internet is so decentralized that anything named “cloud” simply inherits all the good ‘ilities’ that massive decentralization might deliver. A core tenet of that Internet storytelling is the resiliency decentralization supposedly guarantees, as in: no single outage can cause problems for your cloud-delivered services.

The reality is that this story is too often just nonsense. Google, Amazon, and Microsoft control enormous amounts of the infrastructure and operations upon which “cloud” services depend. When any of these three companies has an outage, broad swaths of “cloud-delivered” business services experience outages as well. If you are going to depend on cloud-enabled operations for your success, use care in how you define that success and how you communicate about service levels with your customers. This is especially problematic for industries where customers have been trained to expect extremely high service levels; global financial services enterprises fall into this category. Factor this risk into your business and technical plans.

Yesterday, Tuesday, July 17th, the Google services that provide computing, storage, and data management tools for companies failed. Customers who depended upon those foundational services failed with them. Snapchat and Spotify were two high-profile examples, but the outages were far more widespread. Google services that themselves depend upon the storage and data management tools also failed. It appears that Google Cloud Networking was knocked down, as the company reported: “We are investigating a problem with Google Cloud Global Load balancers returning 502s for many services including AppEngine, Stackdriver, Dialogflow, as well as customer Global Load Balancers.” This has broad impact because every customer attempting to deliver high-quality services depends on load balancers. It appears this networking issue caused downstream outages like those experienced by Breitbart and the Drudge Report.
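One defensive pattern the 502 symptom suggests, sketched here with invented endpoint names and a caller-supplied fetch function: treat a 502 from a provider's global load balancer as a cue to fail over to another region or provider, rather than as an error to surface to customers.

```python
def fetch_with_failover(endpoints, fetch):
    """Try each endpoint in turn; treat a 502 from a load balancer as a
    signal to fail over to the next region rather than an error to return.
    `fetch` is any callable returning (status_code, body)."""
    last_status = None
    for url in endpoints:
        status, body = fetch(url)
        if status != 502:
            return url, status, body
        last_status = status
    raise RuntimeError(f"all endpoints failing (last status {last_status})")
```

The hard part, as the outage counts below illustrate, is ensuring the fallback endpoints do not share the failed provider's infrastructure.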

This was the 5th non-trivial outage this year for Google’s AppEngine. Google’s ComputeEngine has had 7 outages this year, 12 for their Cloud Networking, 8 for their Stackdriver, 3 for Google Cloud Console, 3 for Cloud Pub/Sub, 4 for Google Kubernetes Engine, 2 for Cloud Storage, 4 for BigQuery, 2 for their DataStore, 2 for their Cloud Developer Tools, and 1 reported for their Identity & Security services.
To add insult to injury, it appears the Google Enterprise Support page may also have been down during the outage on the 17th…

Amazon also experienced service failures that were widely reported the day before (although they are not documented on the company’s service health dashboard).
Microsoft had “internal routing” failures that resulted in widespread service outages for over an hour on the 16th as well.

Plan this reality into your cloud-enabled strategies, architectures, designs, implementations, testing, monitoring, reporting, and contracts.

July 31, 2018 UPDATE:

One of my peers asked about the ability of legacy financial services IT shops to deliver ‘cloud-like’ service levels, arguing that all of our global financial services enterprises have material entrenched IT infrastructure that has service level issues as well.  It seems like a fair reaction, so here is my response:
My motivation for the essay above was driven by several observations:
(1) There are still individuals in our industry who seem to believe there is some magical power available to ‘cloud’ things that will make existing problems and constraints disappear. These individuals make me tired. I tried to highlight a little of the cloud vendors’ muggle nature.
(2) The three main cloud vendors I mention — Amazon, Google and Microsoft — provide what I perceive as selective and self-serving views into their service outages. They have outages, short and not-so-short, that are unreported on all of the outage-reporting interfaces available to me. That behavior seems to be designed into their development and infrastructure management practices as well as their sales/marketing practices. When they fail fast, it is on customers’ time and investments (ours), not only on their own (and, I believe, rapid recovery skills do not justify or compensate for any given outage…). I mentioned Amazon on this score, but my reading leads me to believe that all three exhibit this behavior.
(3) We all have ‘knobs’ or ‘levers’ available to us at our corporations to influence our service levels in ways that might help us distinguish ourselves from our competition. My working understanding is that service levels are just an expression of management will. I get it that those responsible for serious profit/loss decision-making have a fiendishly difficult role. That said, if it were important enough, our leaders would adjust the available management ‘knobs’ in ways that would deliver whatever was needed, on our systems or those owned and managed by others. I understand that there is a hornet’s nest of competing priorities and trade-offs that makes those decisions tricky. I also know that many of those ‘knobs’ are not available to our management teams in analogous ways for most of our ‘cloud’ vendor candidates.

Some argue that the right drivers exist to replace our systems with ‘cloud-enabled’ vendor services in combination with some amount of our own code and configuration. To live out that desire, most of us would need to re-architect much of our business practices, and in parallel the stacks of systems that enable them, if we expect to deliver competitive feature sets and accompanying service levels in the global financial services enterprise arena. A material chunk of that re-architecting would need to compensate for the outage patterns exhibited by the ecosystem of major and minor players involved, and, because this is my blog, for the new information security risk management challenges that path presents as well.

Google Cloud Status Dashboard

Amazon Service Health Dashboard

Azure Status History
Azure Status
Office365 Health

Google Cloud Has Disruption, Bringing Snapchat, Spotify Down
By Mark Bergen, July 17, 2018

Spotify, Snapchat, and more are down following Google Cloud incident (update: fixed)
Jeff Grubb, JULY 17, 2018

Google Cloud Platform fixes issues that took down Spotify, Snapchat and other popular sites
Chloe Aiello, 07-17-2018

[Update: Resolved] Google Cloud has been experiencing an outage, resulting in widespread problems with several services
By Ryne Hager, 07-17-2018

Google Enterprise Support page Outage Reference

Bias & Error In Security AI/ML

July 14, 2018

It is difficult to get through a few minutes today without the arrival of some sort of vendor spam touting the use of artificial intelligence and machine learning (AI/ML) to analyze event/threat/vulnerability data and then provide actionable guidance, or to perform or trigger actions themselves.

Global financial services enterprises have extreme risk analysis needs in the face of enormous streams of threat, vulnerability, and event data. While it might seem attractive to hook up with one or more of these AI/ML hypesters, think hard before incorporating these types of systems into your risk analysis pipelines. At some point they will be exposed in discovery, and at that point, is there risk to your brand?

In a manner analogous to facial recognition technologies, AI/ML-driven security analysis technology is coded, configured, and trained by humans, and so can incorporate material bias and unknown error.

Microsoft recently called for regulation of facial recognition technology and its application. I don’t know whether regulation is the appropriate path for AI/ML-driven security analysis technologies. We do, though, need to remain aware of the bias and error in our implementations, and protect our employers from unjustifiable liability risks on this front. Demand transparency and strong evidence of due diligence from your vendors, and test, test, test.


“Facial recognition technology: The need for public regulation and corporate responsibility.” Jul 13, 2018, by Brad Smith – President and Chief Legal Officer, Microsoft

“The Future Computed – Artificial Intelligence and its role in society.”
By Microsoft.

“Amazon’s Facial Recognition Wrongly Identifies 28 Lawmakers, A.C.L.U. Says.”
By Natasha Singer, July 26, 2018 (added/updated 07-27-2018)

Cloud File Sync Requires New Data Theft Protections

June 28, 2018

Microsoft Azure File Sync has been slowly evolving since its release last year.

The company also added the “Azure:” drive [Azure Drive] to PowerShell to support discovery and navigation of all Azure resources, including filesystems.

Azure File Sync helps users keep Azure File shares in sync with their Windows Servers. Microsoft promotes the happy path, where those servers are on-premises in your enterprise, but the service will sync with endpoints of any trust level.

What are my concerns?

The combination makes it much easier to discover Azure-hosted data and data exfiltration paths, and then to set them up to automatically ship new data into or out of your intended environment(s). In other words, it helps hostile parties introduce their data or their malware into your organization’s Azure-hosted file systems, or steal your data while leaving a minimum of evidence describing who did what.

Why would I say that?

Many roles across global Financial Services enterprises engage in architecture risk analysis (ARA) as part of their day-to-day activities. If we approach this topic as if we were engaged in ARA fact-finding, we might discover the following:

Too easy to share with untrustworthy endpoints:
It appears that anyone with the appropriate key (a string) can access a given Azure File Share from any Azure VM on any subscription. What could go wrong?
Microsoft customers can use shared access signatures (SAS) to generate tokens that have specific permissions, and which are valid for a specified time interval. These shared access signature keys are supported by the Azure Files (and File Sync) REST API and the client libraries.
A financial services approach might require that an Azure File share on a given private Virtual Network be secured so it is available only via a private IP address on that same network.
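For readers unfamiliar with how shared-access-signature schemes work under the hood, here is a conceptual sketch: the grant (resource, permissions, expiry) is HMAC-signed with the account key, so the server can verify a presented token statelessly. The string-to-sign format below is invented for illustration; it is NOT Azure's actual SAS wire format.

```python
import base64
import hashlib
import hmac

def sign_access_token(account_key_b64, resource_path, permissions, expiry_iso):
    # Bind the grant's fields together and sign them with the account key.
    # Field order and separator are illustrative, not the Azure format.
    string_to_sign = "\n".join([resource_path, permissions, expiry_iso])
    key = base64.b64decode(account_key_b64)
    sig = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(sig).decode("ascii")

def verify_access_token(account_key_b64, resource_path, permissions, expiry_iso, token):
    # Recompute and compare in constant time; any altered field fails.
    expected = sign_access_token(account_key_b64, resource_path, permissions, expiry_iso)
    return hmac.compare_digest(expected, token)
```

This is why narrow permissions and short expiry windows matter: the signature makes the token self-authorizing, so anyone holding it within its window has the access it names.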

Weak audit trail:
If you need to mount the Azure file share over SMB, you currently must use the storage account keys assigned to the underlying Azure File Storage.
As a result, in the Azure logs and file properties the user name for connecting to a given Azure File share is the storage account name regardless of who is using the storage account keys. If multiple users connect, they have to share an account. This seems to make effective auditing problematic.  It also seems to violate a broad range of commitments we all make to regulators, customers, and other constituencies.
This limitation may be changing. Last month Microsoft announced a preview of more identity and authorization options for interacting with Azure storage. Time will tell.

Missing link(s) to Active Directory:
Azure Files does not support Active Directory directly, so those synced shares do not enforce your AD ACLs.
Azure File Sync preserves and replicates all discretionary ACLs, or DACLs, (whether Active Directory-based or local) to all server endpoints to which it syncs. Because those Windows Server instances can already authenticate with Active Directory, Microsoft sells Azure File Sync as safe-enough (…to address that happy path).  Unfortunately, Azure File Sync will synchronize files with untrusted servers — where all those controls can be ignored or circumvented.

Requires weakening your hardened endpoints:
Azure File Sync requires that Windows servers host the AzureRM PowerShell module, which currently requires Internet Explorer to be installed. …Hardened server no more…

Plans for public anonymous access:
Microsoft is planning to support public anonymous read access to files stored on Azure file storage via its REST interface.

Port 445 (again):
Azure file storage configuration is exposed via TCP port 445. Is it wise to begin opening up port 445 of your Microsoft cloud environment? Given the history of Microsoft vulnerabilities exposed on port 445, many will probably hesitate.
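A crude way to inform that hesitation for your own perimeter, assuming the host names you probe are yours to test: check whether a TCP handshake to port 445 completes from outside the environment.

```python
import socket

def port_open(host, port=445, timeout=2.0):
    """Crude reachability probe: can we complete a TCP handshake to the port?
    Useful as a quick external check that SMB (445) is not exposed anywhere
    you did not intend. Only probe hosts you are authorized to test."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A scheduled sweep of your Azure-facing address ranges with a check like this is cheap insurance against configuration drift.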

Goal of hosting Windows File Server in Azure:
Microsoft intends to deliver Azure Files in a manner that ensures parity with Windows File Server.

What other potential issues or concerns should we investigate?

  • Does the Azure File Storage REST interface resist abuse well enough to support its use in specified use cases (since each use case will have given risks and opportunities)?
  • Can a given use case tolerate risks associated with proposed or planned Microsoft upgrades to Azure File Storage REST, Azure File Sync, or Azure:?
  • Are there impacts on or implications for the way we need to manage our Azure AD?
  • Others?

What do you think?


Increasingly Difficult to Conduct Sensitive Business

May 11, 2018

Craig S. Smith updates us on some of the latest misuses of Alexa and Siri, with attackers “exploiting the gap between human and machine speech recognition.” Using only audio, an attacker can mute your device and then begin issuing commands. At a minimum, this is a data leakage challenge. Depending on the configuration of your mobile device or your Apple/Amazon/Google tabletop device, those commands may appear to come from you, along with the authority that brings. For some, that translates into a risk worth considering.

Working on any type of truly confidential business around your voice-ready devices is increasingly risk-rich.  For global Financial Services enterprises, the scale of the risks seems to warrant keeping significant distance between all voice-aware devices and your key leaders, those with material finance approval authority, anyone working on core investing strategy or its hands-on execution — the list goes on.  Leaving all mobile devices outside Board of Directors meetings is common practice.  Maybe that practice needs to be expanded.

Read this short article and think about your exposures.



“Alexa and Siri Can Hear This Hidden Command. You Can’t.” By Craig S. Smith

