Ransomware and My Cloud

December 10, 2017

I just reviewed the descriptions of sample incidents associated with ransomware outlined in the “Top 10” review by Tripwire.

Ransomware attacks — malware that encrypts your data followed by the attacker attempting to extort money from you for the decryption secrets — are a non-trivial threat to most of us as individuals and all financial services enterprises.

Unfortunately for some, their corporate culture tends to grant workforce users broadly trusted access to vast collections of structured and unstructured business information.  That ‘default to trust’ enlarges the potential impact of a ransomware attack.

As global Financial Services security professionals, we need to resist the urge to share unnecessarily.

We need to quickly detect and respond to malware attacks in order to constrain their scope and impacts.  Because almost every global Financial Services enterprise represents a complex ecosystem of related and in some cases dependent operations, detection may involve many layers, technologies, and activities.  No single control is sufficient: not mature access/privilege management, patching, anti-virus, security event monitoring, or threat intelligence alone.

All of us also need to ensure that we have a risk-relevant post-ransomware attack data recovery capability that is effective across all our various business operations.

So, does the cloud make me safe from ransomware attack?  No.  Simply trusting your cloud vendor (or their hype squad) on this score does not reach the level of global Financial Services due diligence.  It seems safe to assert that for any given business process, the countless hardware, software, process, and human components that make up any cloud make it harder to resist and to recover from a ransomware attack.  And under many circumstances, the presence of cloud infrastructure — by definition, managed by some other workforce using non-Financial Services-grade endpoints — increases the probability of this family of malware attack.

 

REFERENCE:

“10 of the Most Significant Ransomware Attacks of 2017.” By David Bisson, December 10, 2017. https://www.tripwire.com/state-of-security/security-data-protection/cyber-security/10-significant-ransomware-attacks-2017/


Innovation and Secure Software

November 15, 2017

I sometimes get questions about the applicability of secure software standards and guidelines to work described as innovation or innovative.  Sometimes these interactions begin with an outright rejection of “legacy” risk management in the context of an emerging new “thing.”  I believe that under most circumstances, any conflict that begins here is voluntary and avoidable.  As global financial services organizations, our risk management obligations remain in force for mature & stable development projects as well as for innovation-oriented efforts.

In any discussion of innovation, I arrive with my own set of assumptions:

  • Innovation can occur at all levels of the human, business, operations, technology stack, and often requires concurrent change at multiple layers.
  • Innovation, in any context, does not invalidate our risk management obligations.
  • One of the most common and insidious innovation anti-patterns is constantly looking for the next hot tool that’s going to solve our/your problems.*

Software-centric innovation may create new gaps in your secure software standards/guidelines, or highlight existing ones.

If gaps in your existing secure software guidance leave the new “thing” out of phase and disconnected from that legacy, those gaps need to be closed.

Sometimes gaps like these appear because of changes in vocabulary.  This is generally an easy issue to deal with.  If all involved can agree on the trajectory of the innovative development, then you can begin with something as simple as a memo of understanding, with updates to secure software standards/guidelines to follow at a pace determined by the priority of that work (if there is a formal, fines & penalties regulatory compliance issue involved, it is a higher priority than if it were only an exercise in keeping your documentation up-to-date).

Other times an organization is introducing a new business process, a new type of business, or a new technology that does not map well to the existing concepts and/or assumptions expressed in your secure software standards/guidance.  An example of this situation occurred as we all began to invest in native mobile apps.  At that time, mobile app ecosystems did not incorporate many of the common security mechanisms and capabilities that had long been in place for server and desktop environments.  This type of change requires a mix of simple vocabulary and content change in corporate secure software standards/guidance.  Again, if those involved can agree on some fundamental assumptions about what the new software is doing and where it is executing, along with sharing an understanding of its external behaviors (passing data, resolving names, signaling, trusts, etc.), you can follow a multi-step process to get secure software guidance synchronized with your business environment: the first step is some sort of formal memo of understanding, followed by the research, collaboration, and writing required to get your secure software standards/guidelines and your business operations back into phase.

Is it possible that your enterprise could introduce something so alien and so disruptively new that there was just no connective tissue between that investment and your existing secure software guidance?  Sure.  What if financial services enterprises decided that they needed to begin building their own proprietary hardware (from the chips all the way up the stack to network I/O) to deal with the combination of gigantic data-sets, complex analyses, and extremely short timelines (throw in some ML & AI to add sex appeal)?  Our current generation of secure software standards/guidelines would not likely be well aligned with the risk management challenges presented by microchip architecture, design, and implementation, and by the tightly-coupled low-level software development that would likely be required to use them.  I would not be surprised if much of what we have currently published in our organizations would be virtually unrelatable to what would be needed to address the scenario above.  I think that the only businesslike path to dealing with that secure software challenge would be to acquire highly-specialized, experienced human resources to guide us through that kind of discontiguous evolution.  That would be a material challenge, but one that our business will not often face.

Given the current state of our secure software training and guidance resources, it seems like most of us in global Financial Services enterprises should expect to find that most innovation efforts are aligned-enough with the existing secure software standards/guidelines, or (less frequently) only somewhat out of sync because of differences in vocabulary, or misaligned underlying assumptions or concepts.  Those are expected as part of our evolving software-driven businesses and the evolution of hostile forces that our businesses are exposed to.

So, innovate!  Success in the global financial services marketplace is never guaranteed.  Dive into working through decision-making about your architecture, design, and implementation risk management obligations.  And finally, use the existing technical and human resources available to you to deal with any new risk management challenges along the way.

* Rough quote from: “Practical Monitoring: Book Review and Q&A with Mike Julian.” By Daniel Bryant, Nov 07, 2017. https://www.infoq.com/articles/practical-monitoring-mike-julian.


Two Mobile Security Resources

October 16, 2017

Although it should not have taken this long, I just ran across two relatively new resources for those doing Financial Services business in an environment infested with mobile devices.

If you are a mobile developer or if your organization is investing in mobile apps, I believe that you should [at a minimum] carefully and thoughtfully review the Study on Mobile Device Security by the U.S. Department of Homeland Security and the Adversarial Tactics, Techniques & Common Knowledge Mobile Profile by MITRE.  Both seem like excellent, up-to-date overviews. The MITRE publication should be especially valuable for Architecture Risk Analysis and threat analysis in virtually any mobile context.

REFERENCES:
“Study on Mobile Device Security.” April 2017, by the US Dept. of Homeland Security (125 pages). https://www.dhs.gov/sites/default/files/publications/DHS%20Study%20on%20Mobile%20Device%20Security%20-%20April%202017-FINAL.pdf
“Adversarial Tactics, Techniques & Common Knowledge Mobile Profile.” October 2017, by The MITRE Corporation https://attack.mitre.org/mobile/index.php/Main_Page

 


Workforce Mobility = More Shoulder Surfing Risk

July 20, 2017

An individual recently alerted me to an instance of sensitive information being displayed on an application screen in the context of limited or non-existent business value. There are a few key risk management issues here – if we ship data to a user’s screen there is a chance that:

  • it will be intercepted by unauthorized parties,
  • unauthorized parties will have stolen credentials and use them to access that data, and
  • unauthorized parties will view it on the authorized-user’s screen.

Today I am most interested in the last case — where traditional and non-traditional “shoulder surfing” is used to harvest sensitive data from users’ screens.

In global financial services, most of us have been through periods of data display “elimination” from legacy applications. In the United States during the last third of the 20th century, individuals’ Social Security Numbers (SSNs) evolved into an important component of customer identification. The SSN was a handy key to help distinguish one John Smith from another, and to help identify individuals whose names were more likely than others to be misspelled. Information Technology teams incorporated the SSN as a core component of individual identity across many U.S. industries. Over time, individuals’ SSNs became relatively liquid commodities and helped support a broad range of criminal income streams. After the turn of the century, regulations and customer privacy expectations evolved to make use of the SSN for identification increasingly problematic. In response to that cultural change or to other trigger events (privacy breach being the most common), IT teams invested in large-scale activities to reduce dependence on SSNs where practical, and to resist SSN theft by tightening access controls to bulk data stores and by removing or masking SSNs from application user interfaces (‘screens’).

For the most part, global financial services leaders, application architects, and risk management professionals have internalized the concept of performing our business operations in a way that protects non-public data from ‘leaking’ into unauthorized channels. As our business practices evolve, we are obligated to continuously re-visit our alignment with data protection obligations. In software development, this is sometimes called architecture risk analysis (an activity that is not limited to formal architects!).

Risk management decisions about displaying non-public data on our screens need to take into account the location of those screens and the assumptions that we can reliably make about those environments. When we could depend upon the overwhelming majority of our workforce being in front of monitors located within workplace environments, the risks associated with ‘screen’ data leakage to unauthorized parties were often managed via line-of-sight constraints, building access controls, and “privacy filters” that were added to some individual’s monitors. We designed and managed our application user interfaces in the context of our assumptions about those layers of protection against unauthorized visual access.

Some organizations have embarked on “mobilizing” their operations — responding to advice that individuals and teams perform better when they are unleashed from traditional workplace constraints (like a physical desk, office, or other employer-managed workspace) as well as traditional workday constraints (like a contiguous 8, 10, or 12-hour day). Working from anywhere and everywhere, at any time, is pitched as an employee benefit as well as a business operations improvement play. These changes have many consequences. One important impact is the increasing frequency of unauthorized non-public data ‘leakage’ as workforce ‘screens’ are exposed in less controlled environments — environments with higher concentrations of non-workforce individuals as well as higher concentrations of high-power cameras. Enterprises evolving toward “anything, anywhere, anytime” operations must either accept the risks of exposing sensitive information to bystanders through the screens used by their workforce, or take measures to deal with those risks effectively.

Our customers, partners, marketers, and vendors are increasingly comfortable computing in public places such as coffee shops, lobbies, airports, and other transportation hubs — and that comfort drives up the threat of exposing sensitive information to unauthorized parties.

This is not your parent’s shoulder surfing…
With only modest computing power, sensitive information can be extracted from images delivered by high-power cameras. Inexpensive and increasingly ubiquitous multi-core machines, GPUs, and cloud computing make computing cycles accessible and affordable enough for criminals and seasoned hobbyists to extract sensitive information with off-the-shelf visual analysis tools.

This information exposure increases the risks of identity theft and theft of other business secrets that may result in financial losses, espionage, as well as other forms of cyber crime.

The dangers are real…
A couple years ago Michael Mitchell and An-I Andy Wang (Florida State University), and Peter Reiher (University of California, Los Angeles) wrote in “Protecting the Input and Display of Sensitive Data:”

The threat of exposing sensitive information on screen to bystanders is real. In a recent study of IT professionals, 85% of those surveyed admitted seeing unauthorized sensitive on-screen data, and 82% admitted that their own sensitive on-screen data could be viewed by unauthorized personnel at times. These results are consistent with other surveys indicating that 76% of the respondents were concerned about people observing their screens, while 80% admitted that they have attempted to shoulder surf the screen of a stranger.

The shoulder-surfing threat is worsening, as mobile devices are replacing desktop computers. More devices are mobile (over 73% of annual technical device purchases) and the world’s mobile worker population will reach 1.3 billion by 2015. More than 80% of U.S. employees continue working after leaving the office, and 67% regularly access sensitive data at unsafe locations. Forty-four percent of organizations do not have any policy addressing these threats. Advances in screen technology further increase the risk of exposure, with many new tablets claiming near 180-degree screen viewing angles.

What should we do first?
The most powerful approach to resisting data leakage via users’ screens is to stop sending that data to those at-risk application user interfaces.

Most of us learned that during our SSN cleanup efforts. In global financial services there were only the most limited use cases where an SSN was needed on a user’s screen. Eliminating SSNs from the data flowing out to those users’ endpoints was a meaningful risk reduction. Over time, the breaches avoided by SSN-elimination activities could represent material financial savings as well as advantages in a number of other forms (brand, good-will, etc.).

As we review non-public data used throughout our businesses, and begin sending only the data required for the immediate use case to users’ screens, it seems rational that we will find many candidates for simple elimination.
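
As a minimal sketch of that kind of display-side data minimization (the function name and masked format are illustrative assumptions, not from any specific codebase), masking can happen server-side so the full value never reaches the screen:

```python
def mask_ssn(ssn: str) -> str:
    """Mask all but the last four digits of a U.S. SSN.

    The full value stays server-side; only the masked form is sent
    to the application user interface.
    """
    digits = "".join(ch for ch in ssn if ch.isdigit())
    if len(digits) != 9:
        raise ValueError("expected a 9-digit SSN")
    return "***-**-" + digits[-4:]


print(mask_ssn("078-05-1120"))  # ***-**-1120
```

The design choice matters: masking in the service layer, rather than in client-side display code, means the unmasked value is never transmitted to the at-risk endpoint in the first place.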

For some cases where sensitive data may be required on ‘unsafe’ screens, Mitchell, Wang, and Reiher propose an interesting option (cashtags), but one beyond the scope of my discussion today.

REFERENCES:
“Cashtags: Protecting the Input and Display of Sensitive Data.”
By Michael Mitchell and An-I Andy Wang (Florida State University), and Peter Reiher (University of California, Los Angeles)
https://www.cs.fsu.edu/~awang/papers/usenix2015.pdf


Insecure software is the root cause…

May 17, 2017

If you are involved in creating, maintaining, operating, or acquiring risk-appropriate software, this short blog post about the recent WannaCry ransomware exercise is worth reading.

https://blog.securitycompass.com/wannacry-and-the-elephant-in-the-room-c9b24cfee2bd


The Treacherous 12 – Cloud Computing Top Threats in 2016

April 25, 2017

The Cloud Security Alliance published “The Treacherous 12 – Cloud Computing Top Threats in 2016” last year.  I just saw it cited in a security conference presentation and realized that I had not shared this reference.  For those involved in decision-making about risk management of their applications, data, and operations, this resource has some value.  If you have not yet experienced a challenge to host your business in “the cloud”** it is likely you will in the future.

In my opinion, the Cloud Security Alliance is wildly optimistic about the business and compliance costs and the real risks associated with using shared, fluid, “cloud” services to host many types of global financial services business applications & non-public data.  That said, financial services is a diverse collection of business activities, some of which may be well served by credible “cloud” service providers (for example, but not limited to, some types of sales, marketing, and human resource activities).  In that context, the Cloud Security Alliance still publishes some content that can help decision-makers understand more about what they are getting into.

“The Treacherous 12 – Cloud Computing Top Threats in 2016” outlines what “experts identified as the 12 critical issues to cloud security (ranked in order of severity per survey results)”:

  1. Data Breaches
  2. Weak Identity, Credential and Access Management
  3. Insecure APIs
  4. System and Application Vulnerabilities
  5. Account Hijacking
  6. Malicious Insider
  7. Advanced Persistent Threats (APTs)
  8. Data Loss
  9. Insufficient Due Diligence
  10. Abuse and Nefarious Use of Cloud Services
  11. Denial of Service
  12. Shared Technology Issues

For each of these categories, the paper includes some sample business impacts, supporting anecdotes and examples, candidate controls that may help address given risks, and links to related resources.

If your role requires evaluating risks and opportunities associated with “cloud” anything, consider using this resource to help flesh out some key risk issues.

 

**Remember, as abstraction is peeled away “the cloud” is an ecosystem constructed of other people’s “computers” supported by other people’s employees…

REFERENCES:

Cloud Security Alliance:
https://cloudsecurityalliance.org

“The Treacherous 12 – Cloud Computing Top Threats in 2016”
https://downloads.cloudsecurityalliance.org/assets/research/top-threats/Treacherous-12_Cloud-Computing_Top-Threats.pdf


Do Not Use On-Line Services to Encode or Encrypt Secrets

March 17, 2017

I received an excellent reminder about protecting secrets from a developer this morning. His advice included:

In the course of development work, many of us need to encode or encrypt strings.  He had just bumped into a situation where teams were using an Internet-available, public service to base 64 encode OAuth key/secret pairs.  These OAuth “secrets” are used all over the Internet to authenticate against web service interfaces.  Too often they are static/permanent strings — which means that once stolen they are useful to anyone, hostile or otherwise, for long periods of time.  This type of authentication credential must be very carefully protected throughout its entire life-cycle.
[Please stick with me even if you are not familiar with base 64 or OAuth, because this is broadly reusable advice]

The specific site is not really important as it could have been one of thousands of other free data encoding/encrypting sites.

The risk issue is associated with the fact that the “free” encoding service cloud site knows the client’s source IP address (plus other endpoint/user-identifying metadata) and the secrets that the user inputs. Using that information, they can infer (with some confidence) that a given company is using these secrets, and can sometimes also infer what the secrets are used for by the structure of the inputs. Nothing on the Internet is truly free. We need to assume that these sites earn revenue by monetizing what they learn. Cyber-crime is a business, and it is often less expensive to buy information about specific or classes of candidate targets than to independently perform the initial reconnaissance. So we should expect that some percentage of what free sites learn ends up as inputs to cyber-crime planning and activities. In that context, our secrets would not remain secret — and our risks would be elevated. In addition, extruding secrets in this way would also violate company policy at every global Financial Services enterprise.

Lucky for all of us, there are easy alternatives to using Internet-available public services to encode/encrypt our secrets.

Encoding can be as simple as a PowerShell or Python one-liner:

powershell "[convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes(\"mySecret\"))"

or

python -c "import base64; print(base64.b64encode(b'mySecret').decode())"

Or you can use any other development language of choice to easily assemble a utility to encode secrets. This is not technically difficult or especially risky.
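
As a sketch of such a local utility (standard library only; the helper names are my own), the full encode/decode round trip stays on your own machine:

```python
import base64


def encode_secret(secret: str) -> str:
    """Base64-encode a secret locally -- nothing leaves this process."""
    return base64.b64encode(secret.encode("utf-8")).decode("ascii")


def decode_secret(encoded: str) -> str:
    """Reverse the encoding; remember that base64 is encoding, not encryption."""
    return base64.b64decode(encoded.encode("ascii")).decode("utf-8")


print(encode_secret("mySecret"))      # bXlTZWNyZXQ=
print(decode_secret("bXlTZWNyZXQ="))  # mySecret
```

Because the round trip is trivially reversible, base64 output deserves exactly the same protection as the original secret.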

Encrypting safely is a greater challenge. Understand your goals first. Once you know what you need to achieve, you can work with a professional to select a cryptosystem and coding/operational processes that should have a chance of meeting those goals. Cryptography can go wrong. Do not attempt to invent your own.
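
For illustration only (the iteration count and key length here are assumptions, not a recommendation tuned to your environment), even a routine task like deriving a key from a passphrase should lean on a vetted primitive such as PBKDF2 from the standard library rather than anything home-grown:

```python
import hashlib
import hmac
import os


def derive_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 32-byte key with PBKDF2-HMAC-SHA256, a vetted KDF."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"), salt, iterations)


salt = os.urandom(16)  # fresh random salt per secret; store it alongside the result
key = derive_key("correct horse battery staple", salt)

# Later verification: re-derive with the stored salt and compare in constant time.
assert hmac.compare_digest(key, derive_key("correct horse battery staple", salt))
print(len(key))  # 32
```

Even with vetted primitives, the advice above stands: settle your goals (confidentiality, integrity, key rotation, recovery) with a professional before committing to a cryptosystem.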

