Oval Office Credential Theft is a Warning to All

October 13, 2018

IT architects cry foul about what they perceive as authentication fatigue… “Security makes users log in too many times, and this must stop!”
It seems that investments attempting to reduce the number of redundant workforce authentications are rarely justified using the language of risk management. If they were, we might find identity technology evolving in new ways.

At a recent Oval Office press event, Kanye West inadvertently exposed his iPhone passcode to a crowd of news cameras. As a result, the world knows his six-digit passcode is an unbroken series of zeros: “000000.” …not something too business-relevant on its face, except that it is an example of a class of credential leakage that is already an issue for global financial services enterprises.

Our increasingly mobile workforce enters passwords in the presence of live cameras with growing frequency. We should not assume that all of the resulting recordings are secured in ways that match our obligation to keep business user name/password pairs confidential. Because credentials can be monetized relatively easily, a percentage of these recorded credentials are and will be used in unauthorized ways. The only variables are which credentials, at what time(s), and by whom.

We should assume that this scenario will become increasingly common for every workforce role that is mobile-enabled.

Do you support your database, server, identity directory, or cloud admins, your financial officers, your investment or human resource professionals, your legal counsel, or any other roles with elevated rights to work where they choose? If yes, tick your risk-pool meter up again. Assign someone to investigate and report on how vulnerable your organization is to unauthorized credential re-use. Large, global financial services enterprises have complex attack surfaces for this use case. For most, this analysis will not be easy, and it will likely not result in a happy story.

Maybe authenticating less often is a part of addressing this risk.  I don’t know.  For some, multi-factor authentication plays a key role in resisting credential theft and reuse.  Some types of multi-factor authentication can resist some types of credential attacks. But “some” should not be a comforting qualifier. Working out the right mix of identity verification is going to require creativity, rigor, and serious risk management analysis — adult, professional investigation and analysis.
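
To make one of those “some” cases concrete: a time-based one-time password is a second factor that a camera can record but cannot profitably replay. Below is a minimal sketch of the TOTP algorithm (RFC 6238, HMAC-SHA1 variant) that many authenticator apps implement; the shared secret is a hypothetical placeholder, and a production deployment would use a vetted library and hardened secret storage rather than this illustration.

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, at: float = None, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian time counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


if __name__ == "__main__":
    shared_secret = b"hypothetical-per-user-secret"        # provisioned per user in practice
    print("code valid for roughly 30 seconds:", totp(shared_secret))
    # A code caught on camera expires within the step window,
    # unlike a static passcode such as "000000".
```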

Get started.

REFERENCES:

“The Cybersecurity 202: Kanye West is going to make password security great again.”
By Derek Hawkins, October 12, 2018
https://www.washingtonpost.com/news/powerpost/paloma/the-cybersecurity-202/2018/10/12/the-cybersecurity-202-kanye-west-is-going-to-make-password-security-great-again/


Facebook Identity Token Thefts Result in Breach

September 28, 2018

Facebook’s VP of Product Management, Guy Rosen, said today that a vulnerability in the company’s “View As” feature enabled attackers to steal users’ access tokens.  These tokens are presented to Facebook infrastructure in ways that allow users to remain authenticated and able to interact with their accounts over multiple sessions. Once in an attacker’s possession, these tokens would permit the attacker to impersonate actual users and take over Facebook accounts.  Facebook reported that the current risk mitigation is invalidating at least 40 million authentication tokens and temporarily turning off the “View As” feature while they “conduct a thorough security review.”  Rosen went on:

“This attack exploited the complex interaction of multiple issues in our code. It stemmed from a change we made to our video uploading feature in July 2017, which impacted “View As.” The attackers not only needed to find this vulnerability and use it to get an access token, they then had to pivot from that account to others to steal more tokens.”

Mr. Rosen wrote that Facebook engineers learned of this vulnerability three days ago, on Tuesday, September 25, and that almost 50 million Facebook accounts were affected.

Writing safe-enough user-facing software that must interact with a complex ecosystem of applications and APIs is a serious challenge for all of us.  In that context, this latest Facebook vulnerability should be a cautionary tale for all organizations implementing apps and APIs that incorporate persistent token-based authentication.  Could any of us market our way out of a situation where 40 or 50 million of our customers, marketers, and other partners were vulnerable to account takeover?

Too many architects and developers seem risk-inappropriately infatuated with pundit pronouncements about authentication fatigue and the “best practice” of extending any given authentication across multiple sessions using persistent tokens.  At our scale (a trillion dollars U.S. AUM each, give or take a few hundred billion), we in global financial services enterprises ought to be able to reason our way through the fog of happy-path chatter and engineer session management practices that meet the risk appetites of our various constituencies.
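
What might “risk-appropriate” session management look like in practice? Below is a minimal server-side sketch, offered as one hypothetical shape rather than anyone’s actual implementation: tokens carry both an absolute lifetime and an idle timeout, and the entire outstanding population can be revoked at once, which is the kind of lever Facebook had to pull here. The policy numbers are illustrative assumptions.

```python
import secrets
import time
from typing import Dict, Optional

ABSOLUTE_LIFETIME = 8 * 3600   # hypothetical policy: eight hours, then re-authenticate
IDLE_TIMEOUT = 15 * 60         # hypothetical policy: fifteen idle minutes, then re-authenticate


class SessionStore:
    """Server-side token registry with absolute and idle expiry plus bulk revocation."""

    def __init__(self) -> None:
        # token -> [user, issued_at, last_seen]
        self._sessions: Dict[str, list] = {}

    def issue(self, user: str) -> str:
        token = secrets.token_urlsafe(32)
        now = time.time()
        self._sessions[token] = [user, now, now]
        return token

    def validate(self, token: str) -> Optional[str]:
        """Return the user for a live token, or None if the token is unknown or expired."""
        entry = self._sessions.get(token)
        if entry is None:
            return None
        user, issued_at, last_seen = entry
        now = time.time()
        if now - issued_at > ABSOLUTE_LIFETIME or now - last_seen > IDLE_TIMEOUT:
            del self._sessions[token]
            return None
        entry[2] = now
        return user

    def revoke_all(self) -> int:
        """The bulk-invalidation lever: every outstanding token dies at once."""
        count = len(self._sessions)
        self._sessions.clear()
        return count


if __name__ == "__main__":
    store = SessionStore()
    t = store.issue("jdoe")
    print(store.validate(t))              # 'jdoe'
    print(store.revoke_all(), "tokens revoked")
    print(store.validate(t))              # None
```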

REFERENCES:

“Security Update.”
By Guy Rosen, VP of Product Management, Facebook, September 28, 2018



MacOS Firewall Bypass Demos

August 9, 2018

“You’re not going to buy a car and expect it to fly?” That is how Patrick Wardle, Chief Research Officer at Digita Security and founder of Objective-See, described why he presented some of his research on macOS firewall bypasses.

That sort of makes sense.  Nobody buys a Mac and expects it to resist attack?

In any case, we all have members of our workforce using Macs for non-trivial business operations.  We need to clearly understand the attack surface and the Mac’s resistance to attack.  Patrick Wardle provides a little help with that exercise in his Black Hat presentation, “Fire & Ice: Making and Breaking macOS Firewalls.”

Tom Spring has a useful summary of the presentation on ThreatPost: “Black Hat 2018: Patrick Wardle on Breaking and Bypassing MacOS Firewalls.”  It is worth a read.  There is no reason for me to echo its content here.

REFERENCES:

“Fire & Ice: Making and Breaking macOS Firewalls”
https://www.blackhat.com/us-18/briefings.html#fire-and-ice-making-and-breaking-macos-firewalls

“Black Hat 2018: Patrick Wardle on Breaking and Bypassing MacOS Firewalls” By Tom Spring
https://threatpost.com/patrick-wardle-breaks-and-bypasses-macos-firewalls/134784/


Cloud File Sync Requires New Data Theft Protections

June 28, 2018

Microsoft Azure File Sync has been slowly evolving since it was released last year.
https://azure.microsoft.com/en-us/blog/announcing-the-public-preview-for-azure-file-sync/ and
https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-deployment-guide?tabs=portal and https://azure.microsoft.com/en-us/roadmap/azure-file-sync/

The company also added “Azure:” [Azure Drive] to PowerShell to support discovery and navigation of all Azure resources including filesystems.
https://blogs.msdn.microsoft.com/powershell/2017/10/19/navigate-azure-resources-just-like-a-file-system/

Azure File Sync helps users keep Azure File shares in sync with their Windows Servers. Microsoft promotes the happy path, where those servers are on-premises in your enterprise, but the service will sync with endpoints regardless of their trust relationship.

What are my concerns?

The combination makes it much easier to discover Azure-hosted data and data exfiltration paths, and then to set them up to automatically ship new data into or out of your intended environment(s).  In other words, it helps hostile parties introduce their data or their malware into your organization’s Azure-hosted file systems, or steal your data while leaving a minimum of evidence describing who did what.

Why would I say that?

Many roles across global Financial Services enterprises are engaging in architecture risk analysis (ARA) as part of their day-to-day activities.  If we approach this topic as if we were engaged in ARA fact-finding, we might discover the following:

Too easy to share with untrustworthy endpoints:
It appears that anyone with the appropriate key (a string) can access a given Azure File Share from any Azure VM on any subscription. What could go wrong?
Microsoft customers can use shared access signatures (SAS) to generate tokens that carry specific permissions and are valid for a specified time interval. These shared access signature keys are supported by the Azure Files (and File Sync) REST API and the client libraries.
A financial services approach might require that Azure File shares attached to a given private Virtual Network be secured so that they are only available from within that Virtual Network, via a private IP address on the same network.
https://feedback.azure.com/forums/217298-storage/suggestions/5993281-azure-files-on-a-virtual-network
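
To make the contrast with a bare account key concrete, here is a toy illustration of the idea behind a shared access signature: an HMAC-signed grant that names the resource, the permissions, and an expiry, so the long-lived account key never has to be handed out. This is a conceptual sketch only; it deliberately does not reproduce Azure’s actual string-to-sign format, parameters, or SDK calls.

```python
import base64
import hashlib
import hmac
import time

ACCOUNT_KEY = b"hypothetical-account-key"   # stands in for the storage account key


def sign_grant(share: str, permissions: str, expiry_epoch: int, key: bytes = ACCOUNT_KEY) -> str:
    """Produce a scoped, time-limited token: resource + permissions + expiry, HMAC-signed."""
    payload = f"{share}|{permissions}|{expiry_epoch}"
    sig = hmac.new(key, payload.encode(), hashlib.sha256).digest()
    return payload + "|" + base64.urlsafe_b64encode(sig).decode()


def verify_grant(token: str, share: str, needed_permission: str, key: bytes = ACCOUNT_KEY) -> bool:
    """Reject tokens for other shares, missing permissions, expired windows, or bad signatures."""
    try:
        t_share, perms, expiry, sig = token.rsplit("|", 3)
    except ValueError:
        return False
    expected = hmac.new(key, f"{t_share}|{perms}|{expiry}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(base64.urlsafe_b64encode(expected).decode(), sig):
        return False
    return t_share == share and needed_permission in perms and time.time() < int(expiry)


# A read-only grant on one share, valid for one hour, versus the account key,
# which grants everything on every share until it is rotated.
token = sign_grant("finance-reports", "r", int(time.time()) + 3600)
print(verify_grant(token, "finance-reports", "r"))   # True
print(verify_grant(token, "finance-reports", "w"))   # False
```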

Weak audit trail:
If you need to mount the Azure file share over SMB, you currently must use the storage account keys assigned to the underlying Azure File Storage.
As a result, in the Azure logs and file properties the user name for connecting to a given Azure File share is the storage account name regardless of who is using the storage account keys. If multiple users connect, they have to share an account. This seems to make effective auditing problematic.  It also seems to violate a broad range of commitments we all make to regulators, customers, and other constituencies.
https://feedback.azure.com/forums/217298-storage/suggestions/33477253-keep-track-of-the-file-owner
This limitation may be changing. Last month Microsoft announced a preview of more identity and authorization options for interacting with Azure storage. Time will tell.
https://docs.microsoft.com/en-us/rest/api/storageservices/Authorization-for-the-Azure-Storage-Services and https://docs.microsoft.com/en-us/rest/api/storageservices/authenticate-with-azure-active-directory

Missing link(s) to Active Directory:
Azure Files does not support Active Directory directly, so those sync’d shares don’t enforce your AD ACLs.
Azure File Sync preserves and replicates all discretionary ACLs, or DACLs (whether Active Directory-based or local), to all server endpoints to which it syncs. Because those Windows Server instances can already authenticate with Active Directory, Microsoft sells Azure File Sync as safe enough (…to address that happy path).  Unfortunately, Azure File Sync will also synchronize files with untrusted servers — where all those controls can be ignored or circumvented.
https://docs.microsoft.com/en-us/azure/storage/files/storage-files-faq#security-authentication-and-access-control

Requires weakening your hardened endpoints:
Azure File Sync requires that Windows servers host the AzureRM PowerShell module, which currently requires Internet Explorer to be installed. …Hardened server no more…
https://feedback.azure.com/forums/217298-storage/suggestions/31909372-add-support-for-server-core-installations-for-azur

Plans for public anonymous access:
Microsoft is planning to support public anonymous read access to files stored on Azure file storage via its REST interface.
https://feedback.azure.com/forums/217298-storage/suggestions/7188650-allow-anonymous-public-access-to-azure-file-storag and https://docs.microsoft.com/en-us/rest/api/storageservices/authenticate-with-azure-active-directory

Port 445 (again):
Azure File storage is accessed over TCP port 445. Is it wise to begin opening up port 445 between your environment and the Microsoft cloud? Given the history of Microsoft vulnerabilities exposed on port 445, many will probably hesitate.
https://feedback.azure.com/forums/217298-storage/suggestions/15001032-allow-access-to-file-storage-configuration-to-use
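
Whichever way you decide, it is worth verifying what your egress posture actually is today. The small check below is roughly a Python equivalent of running PowerShell’s Test-NetConnection against port 445: it reports whether SMB traffic can leave a given host. The storage account host name is a placeholder.

```python
import socket


def tcp_reachable(host: str, port: int = 445, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Placeholder endpoint; substitute your own storage account's file endpoint.
    target = "mystorageaccount.file.core.windows.net"
    state = "open" if tcp_reachable(target) else "blocked or filtered"
    print(f"outbound TCP 445 to {target}: {state}")
```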

Goal of hosting Windows File Server in Azure:
Microsoft intends to deliver Azure Files in a manner that ensures parity with Windows File Server.
https://feedback.azure.com/forums/217298-storage/suggestions/19693045-automatically-mount-an-azure-file-share-to-a-windo

What other potential issues or concerns should we investigate?

  • Does the Azure File Storage REST interface resist abuse well enough to support its use in specified use cases (since each use case will have its own risks and opportunities)?
  • Can a given use case tolerate risks associated with proposed or planned Microsoft upgrades to Azure File Storage REST, Azure File Sync, or Azure:?
  • Are there impacts on or implications for the way we need to manage our Azure AD?
  • Others?

What do you think?



Another Exfiltration Tool

January 30, 2018

It is a challenge to keep up with the free HTTPS-enabled data exfiltration tools available.  As security professionals in global Financial Services enterprises, we have obligations to exhibit risk-reasonable behaviors.  Resisting easy, “invisible” data theft is a core deliverable in our layered security services.

Google is offering a cool “Cloud Shell” that falls into the category I was thinking of when I wrote the paragraph above.  It is a highly functional Linux shell that is available to anyone with HTTPS access to the Internet.  There are lots of good reasons for Google to offer this service.  And they require an active credit card for initial on-boarding — allowing some to argue that there are limits to the anonymity this service might deliver.  There are also lots of global Financial Services enterprise misuse cases.  Quick, easy, difficult-to-reconstruct data exfiltration is the first that came to mind.  Hosting “trustworthy” command-and-control applications is another.  With Internet access, sudo, and persistent storage, the only limitation seems to be the creativity of any given hostile party.

Financial Services brands managing trillions of dollars for others need to protect against the misuse of tooling like this.  The challenge is that some of us use Google Cloud services for one or another subset of our business activities. And in those approved contexts, that represents risk-reasonable behavior.
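
One modest, concrete control is to watch egress telemetry for interactive hosted-shell endpoints that are not tied to an approved identity. The sketch below scans whitespace-delimited web-proxy records for destinations matching a watch list; the host patterns, log format, and approved identities are illustrative assumptions, not a complete or current inventory.

```python
import re
from typing import Iterable, List, Tuple

# Illustrative host patterns for hosted interactive shells; maintain a real list
# from vendor documentation and your proxy vendor's category feeds.
WATCHED_PATTERNS = [re.compile(p) for p in (
    r"(^|\.)shell\.cloud\.google\.com$",
    r"(^|\.)ssh\.cloud\.google\.com$",
)]

APPROVED_USERS = {"svc-gcp-integration"}   # hypothetical sanctioned identities


def flag_shell_egress(log_lines: Iterable[str]) -> List[Tuple[str, str]]:
    """Return (user, host) pairs where an unapproved user reached a watched host.

    Assumes whitespace-delimited proxy records of the form:
    <timestamp> <user> <dest_host> <bytes_out>
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 4:
            continue
        _, user, host, _ = parts[:4]
        if user in APPROVED_USERS:
            continue
        if any(p.search(host) for p in WATCHED_PATTERNS):
            hits.append((user, host))
    return hits


if __name__ == "__main__":
    sample = [
        "2018-01-30T14:02:11Z jdoe shell.cloud.google.com 8388608",
        "2018-01-30T14:03:40Z svc-gcp-integration shell.cloud.google.com 1024",
    ]
    for user, host in flag_shell_egress(sample):
        print(f"review: {user} -> {host}")
```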

This situation is just another example of external forces driving our internal priorities in ways that will require a quick response, and will also induce ongoing risk management workload.

So it goes.

REFERENCE:

Google Cloud Shell: https://cloud.google.com/shell/


Low Profile Office 365 Breach Reported

August 18, 2017

A couple years ago I wrote:

“I am told by many in my industry (and some vendors) that ‘if we put it in the cloud it will work better, cheaper, be safer, and always be available.’ Under most general financial services use cases (as opposed to niche functionality) that statement seems without foundation.”

Although many individuals have become more sophisticated in the ways they pitch ‘the cloud,’ I still hear versions of this story on a fairly regular basis…

Today I learned about a recent Office 365 service outage that reminded me that the issues raised by our use of ‘cloud’ technology, and by the commitments we in the global financial services business make to our customers, prospects, marketers, investors, and regulators, remain very much unresolved.

What happened?

According to Microsoft, sometime before 11:30 PM (UTC) on August 3, 2017, the company introduced an update to the Activity Reports service in the Office 365 admin center that resulted in one tenant’s usage reports being displayed in another tenant’s administrative portal.

Some customer Office 365 administrators noticed that the reported email and SharePoint usage for their tenants had spiked. When they investigated, the Office 365 admin portal (https://portal.office.com/adminportal/) displayed activity for users from one or more Azure AD domains outside their tenant. In the most general terms, this was a breach. It exposed the names and email addresses of those users along with some amount of service traffic detail: for example, a user named X (with email address userNameX@domainY.com) sent 193 and received 467 messages, uploaded 9 documents to SharePoint, and read 45 documents in the previous week.

Some subset of those Office 365 customers reported the breach to Microsoft.

Microsoft reported that it disabled the Activity Reports service at 11:40 PM UTC the same day, and that it had a fix in place by 3:35 AM UTC.

Why should I care?

As Global Financial Services enterprises we make a lot of promises (in varying degrees of formality) to protect the assets for which we are responsible, and we promote our ethical business practices. For any one of our companies, our risk profile is rapidly evolving in concert with expanded use of a range of cloud services. Those risks appear in many forms. All of us involved in Global Financial Services need our security story-telling to evolve in alignment with the specific risks we are taking when we choose to operate one or another portion of our operations in ‘the cloud.’ In addition, our processes for detecting and reporting candidate “breaches” also need to evolve in alignment with our use of all things cloud.

In this specific situation it is possible that each of our companies could have violated our commitments to comply with the European GDPR (General Data Protection Regulations: http://www.eugdpr.org/), had it happened in August 2018 rather than August 2017. We all have formal processes to report and assess potential breaches. Because of the highly restricted access to Office 365 and Azure service outage details, it seems easy to believe that many of our existing breach detection and reporting processes are no longer fully functional.

Like all cloud stuff, o365 and Azure are architected, designed, coded, installed, hosted, maintained, and monitored by humans (as is their underlying infrastructure of many and varied types).
Humans make mistakes, they misunderstand, they miscommunicate, they obfuscate, they get distracted, they get tired, they get angry, they ‘need’ more money, they feel abused, they are overconfident, they believe their own faith-based assumptions, they fall in love with their own decisions & outputs, they make exceptions for their employer, they market their services using language disconnected from raw service-delivery facts, and more. That is not the whole human story, but this list attempts to poke at just some human characteristics that can negatively impact systems marketed as ‘cloud’ on which all of us perform one or another facet of our business operations.

I recommend factoring this human element into your thinking about the value proposition presented by any given ‘cloud’ opportunity. All of us will need to ensure that all of our security and compliance mandated services incorporate the spectrum of risks that come with those opportunities. If we let that risk management and compliance activity lapse for too long, it could put any or all of our brands in peril.

REFERENCES:
“Data Breach as Office 365 Admin Center Displays Usage Data from Other Tenants.”
By Tony Redmond, August 4, 2017
https://www.petri.com/data-breach-office-365-admin-center

European GDPR (General Data Protection Regulations)
http://www.eugdpr.org/


Workforce Mobility = More Shoulder Surfing Risk

July 20, 2017

An individual recently alerted me to an instance of sensitive information being displayed on an application screen in the context of limited or non-existent business value. There are a few key risk management issues here – if we ship data to a user’s screen there is a chance that:

  • it will be intercepted by unauthorized parties,
  • unauthorized parties will have stolen credentials and use them to access that data, and
  • unauthorized parties will view it on the authorized user’s screen.

Today I am most interested in the last use case — where traditional and non-traditional “shoulder surfing” is used to harvest sensitive data from users’ screens.

In global financial services, most of us have been through periods of data display “elimination” from legacy applications. In the last third of the 20th century in the United States, individuals’ Social Security Numbers (SSNs) evolved into an important component of customer identification. The SSN was a handy key to help distinguish one John Smith from another, and to help identify individuals whose names were more likely than others to be misspelled. Information Technology teams incorporated the SSN as a core component of individual identity across many U.S. industries. Over time, individuals’ SSNs became relatively liquid commodities and helped support a broad range of criminal income streams. After the turn of the century, regulations and customer privacy expectations evolved to make the use of SSNs for identification increasingly problematic. In response to that cultural change or to other trigger events (privacy breach being the most common), IT teams invested in large-scale activities to reduce dependence on SSNs where practical, and to resist SSN theft by tightening access controls to bulk data stores and by removing or masking SSNs from application user interfaces (‘screens’).

For the most part, global financial services leaders, application architects, and risk management professionals have internalized the concept of performing our business operations in a way that protects non-public data from ‘leaking’ into unauthorized channels. As our business practices evolve, we are obligated to continuously re-visit our alignment with data protection obligations. In software development, this is sometimes called architecture risk analysis (an activity that is not limited to formal architects!).

Risk management decisions about displaying non-public data on our screens need to take into account the location of those screens and the assumptions that we can reliably make about those environments. When we could depend upon the overwhelming majority of our workforce being in front of monitors located within workplace environments, the risks associated with ‘screen’ data leakage to unauthorized parties were often managed via line-of-sight constraints, building access controls, and “privacy filters” that were added to some individual’s monitors. We designed and managed our application user interfaces in the context of our assumptions about those layers of protection against unauthorized visual access.

Some organizations have embarked on “mobilizing” their operations — responding to advice that individuals and teams perform better when they are unleashed from traditional workplace constraints (like a physical desk, office, or other employer-managed workspace) as well as traditional workday constraints (like a contiguous 8, 10, or 12-hour day). Working from anywhere and everywhere, and doing so at any time, is pitched as an employee benefit as well as a business operations improvement play. These changes have many consequences. One important impact is the increasing frequency of unauthorized non-public data ‘leakage’ as workforce ‘screens’ are exposed in less controlled environments — environments with higher concentrations of non-workforce individuals as well as higher concentrations of high-power cameras. Whether they intend to or not, enterprises evolving toward “anything, anywhere, anytime” operations must either assume the risks of exposing sensitive information to bystanders through the screens used by their workforce, or take measures to deal effectively with those risks.

The ever more reliable assumption that our customers, partners, marketers, and vendors feel increasingly comfortable computing in public places such as coffee shops, lobbies, airports, and other transportation hubs drives up the threat of exposing sensitive information to unauthorized parties.

This is not your parent’s shoulder surfing…
With only modest computing power, sensitive information can be extracted from images delivered by high-power cameras. Inexpensive and increasingly ubiquitous multi-core machines, GPUs, and cloud computing make computing cycles accessible and affordable enough for criminals and seasoned hobbyists to extract sensitive information using off-the-shelf visual analysis tools.

This information exposure increases the risk of identity theft and theft of other business secrets, which may result in financial losses, espionage, and other forms of cybercrime.
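
For a rough sense of how low the technical bar is: recovering text from a photo of a screen takes a few lines of code and freely available OCR. The sketch below assumes the Pillow and pytesseract Python packages plus a local Tesseract install; the image file name and the SSN pattern are illustrative.

```python
import re

from PIL import Image          # assumes the Pillow package is installed
import pytesseract             # assumes pytesseract plus a local Tesseract OCR binary


def harvest_ssns(photo_path: str) -> list:
    """Extract text from a photo of a screen and pull out anything shaped like a U.S. SSN."""
    text = pytesseract.image_to_string(Image.open(photo_path))
    return re.findall(r"\b\d{3}-\d{2}-\d{4}\b", text)


if __name__ == "__main__":
    # Hypothetical input: a phone photo taken over someone's shoulder in a coffee shop.
    for ssn in harvest_ssns("over_the_shoulder.jpg"):
        print("exposed:", ssn)
```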

The dangers are real…
A couple of years ago, Michael Mitchell and An-I Andy Wang (Florida State University) and Peter Reiher (University of California, Los Angeles) wrote in “Cashtags: Protecting the Input and Display of Sensitive Data”:

The threat of exposing sensitive information on screen to bystanders is real. In a recent study of IT professionals, 85% of those surveyed admitted seeing unauthorized sensitive on-screen data, and 82% admitted that their own sensitive on-screen data could be viewed by unauthorized personnel at times. These results are consistent with other surveys indicating that 76% of the respondents were concerned about people observing their screens, while 80% admitted that they have attempted to shoulder surf the screen of a stranger.

The shoulder-surfing threat is worsening, as mobile devices are replacing desktop computers. More devices are mobile (over 73% of annual technical device purchases) and the world’s mobile worker population will reach 1.3 billion by 2015. More than 80% of U.S. employees continues working after leaving the office, and 67% regularly access sensitive data at unsafe locations. Forty-four percent of organizations do not have any policy addressing these threats. Advances in screen technology further increase the risk of exposure, with many new tablets claiming near 180-degree screen viewing angles.

What should we do first?
The most powerful approach to resisting data leakage via users’ screens is to stop sending that data to those at-risk application user interfaces.

Most of us learned that during our SSN cleanup efforts. In global financial services there were only the most limited use cases where an SSN was needed on a user’s screen. Eliminating SSNs from the data flowing out to those users’ endpoints was a meaningful risk reduction. Over time, the breaches that did not happen only because of SSN-elimination activities could represent material financial savings and advantage in a number of other forms (brand, goodwill, etc.).

As we review the non-public data used throughout our businesses, and begin sending to users’ screens only the data required for the immediate use case, it seems rational that we will find lots of candidates for simple elimination.
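
That review can start mechanically: define, per use case, the fields a screen actually needs, then strip or mask everything else server-side before it leaves the API. The sketch below is a minimal illustration of that idea; the field names and sample record are hypothetical.

```python
import re
from typing import Dict, Iterable

SSN_PATTERN = re.compile(r"\b(\d{3})-(\d{2})-(\d{4})\b")


def mask_ssn(value: str) -> str:
    """Render any embedded SSN as XXX-XX-1234, enough for confirmation, useless for a thief."""
    return SSN_PATTERN.sub(lambda m: f"XXX-XX-{m.group(3)}", value)


def minimize(record: Dict[str, str], allowed_fields: Iterable[str]) -> Dict[str, str]:
    """Send a screen only the fields its use case requires, masking what remains sensitive."""
    out = {}
    for field in allowed_fields:
        if field in record:
            out[field] = mask_ssn(record[field])
    return out


if __name__ == "__main__":
    customer = {                       # hypothetical back-office record
        "name": "John Smith",
        "ssn": "123-45-6789",
        "balance": "1,250,000.00",
        "notes": "callback re: SSN 123-45-6789",
    }
    # A service-desk screen that needs identity confirmation, not the full record:
    print(minimize(customer, ["name", "ssn"]))
    # {'name': 'John Smith', 'ssn': 'XXX-XX-6789'}
```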

For some cases where sensitive data may be required on ‘unsafe’ screens, Mitchell, Wang, and Reiher propose an interesting option (cashtags), but one beyond the scope of my discussion today.

REFERENCES:
“Cashtags: Protecting the Input and Display of Sensitive Data.”
By Michael Mitchell and An-I Andy Wang (Florida State University), and Peter Reiher (University of California, Los Angeles)
https://www.cs.fsu.edu/~awang/papers/usenix2015.pdf

