Deception Has a Place in Secure Software

April 1, 2018

Deception has been standard military practice for millennia.  Attackers and defenders employ deception for a variety of goals:

Deceive – Cause a person to believe what is not true
Degrade – Temporary reduction in effectiveness
Delay – Slow the time of arrival of forces or capabilities
Deny – Withhold information about capabilities
Destroy – Enemy capability cannot be restored
Disrupt – Interrupt or impede capabilities or systems
Divert – Force adversary to change course or direction
Exploit – Gain access to systems to collect or plant information
Neutralize – Render adversary incapable of interfering with activity
Suppress – Temporarily degrade adversary/tool below level to accomplish mission

The U.S. military uses what they call a “See, Think, Do” deception methodology.

The core idea is to manipulate the cognitive processes in the deception target’s mind that result in targeting decisions and in adversary actions advantageous to our operations and our tactical or strategic goals.  This methodology tends to result in looping through the following three questions:

(1) What does the target of our deceptive activities see when they observe our operations?
(2) What conclusions does that target draw from those observations?
(3) What action may the target take as a result of the conclusions based upon those observations?

Successful deception operations are those that do more than make the target “believe” or “think” that the deception is true.  Success also needs to result in action(s) or inaction that supports our operational plan(s).

Deception tactics can target human attackers, their organizations, their code, or any set thereof.

It is standard practice across global financial services enterprise information security to implement layers of protections — never depending on only a single security device.  We are at a stage in the battle with global cybercrime that may demand we introduce deception as a new layer of defense.  When we architect, design, and implement our applications and systems, we may enhance our resistance to attack by employing tactics analogous to military deception to influence attackers and the hostile code they use.  This will not be quick or easy.
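As a concrete (if minimal) illustration of one deception tactic, consider honeytokens: decoy records planted in a data store that no legitimate workflow ever references, so any access to them is a high-confidence signal of hostile enumeration. The account identifiers and alerting below are assumptions for this sketch, not a prescribed implementation:

```python
import logging

# Hypothetical decoy identifiers planted alongside real records.
# Legitimate application workflows never reference these accounts.
HONEYTOKEN_ACCOUNTS = {"ACCT-000137", "ACCT-000942"}

def is_decoy_access(account_id: str) -> bool:
    """Return True and raise an alert when a planted decoy record is touched.

    Because no authorized process uses these identifiers, any hit is a
    near-zero-false-positive indicator that someone is walking the data.
    """
    if account_id in HONEYTOKEN_ACCOUNTS:
        logging.warning("Honeytoken touched: %s", account_id)
        return True
    return False
```

A check like this sits quietly inside normal data-access paths; its value comes from the attacker not knowing which records are bait.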

Who might you assign to this task?  Do not immediately regress to: “I wonder who is available.”  Like many security tasks, deception planning requires a rare skillset.  We build and deploy our software in ways that expose a multitude of interfaces.  That practice results in complex and often numerous abuse cases.  Our worker will need to understand and analyze that matrix from a number of perspectives, and to project others’ thinking and actions into the future.  We might expect them to:

  1. Understand each component’s deception and other information operations/influence capabilities.
  2. Be intimately familiar with their organization’s missions and focus.
  3. Understand the concepts of centers of gravity, calculated risk, initiative, security, and surprise.
  4. Understand friendly and adversary intelligence systems and how they function.
  5. Possess technical understanding of intelligence sensors, the platforms on which they deploy, their reporting capabilities, and associated processing methodologies.
  6. Understand the psychological and cultural factors that might influence the adversary’s planning and decision making.
  7. Understand potential adversaries’ planning and decision-making processes (both formal and informal).
  8. Understand the assets that are available to support the deception.

The challenge does not end there.  We live in a world of laws, regulations, contracts, and norms that will constrain our behaviors in ways that differ from what may be acceptable on other battlefields.  Our leaders and practitioners need to understand those limits and manage their activities in ways that align with our obligations.  This will require much more than technical and operational competence.  It requires a high level of maturity and a finely calibrated moral & ethical compass.  Superior deception campaigns will require careful planning, effective guard-rails, and serious management.
Darn!  Another difficult staffing challenge…
Get used to it.  If you want to deliver your applications to a user base anywhere on the Internet, or to run your business in the cloud — especially if you are a global financial services enterprise — you need to expand and enhance your threat resistance using deception.
What do you think?

REFERENCES:

“Influence Operations and the Internet: A 21st Century Issue Legal, Doctrinal, and Policy Challenges in the Cyber World.”
“JP 3-13.3, Operations Security.” 04 January 2012
http://www.dtic.mil/doctrine/concepts/concepts.htm

Recent US-CERT & FBI Alert A Good Read — Applicable to Us

March 19, 2018

The United States Computer Emergency Readiness Team (US-CERT) recently released an alert about sophisticated attacks against individuals and infrastructure that contained an excellent explanation of the series of attacker techniques that are applicable to all global Financial Services enterprises. Many of the techniques are possible and effective because of the availability of direct Internet connections. Absent direct Internet connectivity, many of the techniques detailed in the CERT alert would be ineffective.

Global Financial Services enterprises, responsible for protecting hundreds of billions, even trillions of dollars (other people’s money) are attractive cybercrime targets. We are also plagued by hucksters & hypesters who are attempting to transform our companies into what they claim will be disruptive, agile organizations using one or another technical pitch that simply translates into “anything, anywhere, anytime.”  The foundation of these pitches seems to be “Internet everywhere” or even “replace your inconvenient internal networks with the Internet” while eliminating those legacy security and constraining security practices.

We can all learn from the details in this Alert.

From the alert:

[The] alert provides information on Russian government actions targeting U.S. Government entities as well as organizations in the energy, nuclear, commercial facilities, water, aviation, and critical manufacturing sectors. It also contains indicators of compromise (IOCs) and technical details on the tactics, techniques, and procedures (TTPs) used by Russian government cyber actors on compromised victim networks. DHS and FBI produced this alert to educate network defenders to enhance their ability to identify and reduce exposure to malicious activity.

DHS and FBI characterize this activity as a multi-stage intrusion campaign by Russian government cyber actors who targeted small commercial facilities’ networks where they staged malware, conducted spear phishing, and gained remote access into energy sector networks. After obtaining access, the Russian government cyber actors conducted network reconnaissance, moved laterally, and collected information pertaining to Industrial Control Systems (ICS).

Take the time to review it. Replace “industrial control systems” with your most important systems as you read.

For many of us, the material may be useful in our outreach and educational communications.

The 20-some recommendations listed in the “General Best Practices Applicable to this Campaign” section also seem applicable to Financial Services.

REFERENCES
“Alert (TA18-074A) Russian Government Cyber Activity Targeting Energy and Other Critical Infrastructure Sectors.” Release date: March 15, 2018. https://www.us-cert.gov/ncas/alerts/TA18-074A

“Cyberattacks Put Russian Fingers on the Switch at Power Plants, U.S. Says.” By Nicole Perlroth and David E. Sanger, The New York Times. https://www.nytimes.com/2018/03/15/us/politics/russia-cyberattacks.html


Cloud Risk Assessment Challenge Thoughts

February 3, 2018

Technology is often at the center of efforts to sell new business models. From some perspectives, “Cloud” is more about new business models than about technology-enabled capabilities. Over the last decade or more, “cloud” marketers and hypesters have constructed intricate structures of propaganda that trap the unwary in a matrix, a fog, a web of artifice and deceit.[1]  I think that a “cloud first” belief system is misused in ways that sometimes result in excessive risk-taking.  Belief systems are tricky to deal with and can cause some to dismiss or ignore inputs that might impact core tenets or dogma.

My reading and firsthand experience lead me to believe that many are willing to migrate operations to other people’s computers — “the cloud” — without clearly evaluating impacts to their core mission, their risk management story-telling, and their risk posture. Too many cloud vendors remain opaque to risk assessment, while leaning heavily on assertions of “compliance” and alignment with additionally hyped ancillary practices [containers, agile, encryption, etc.].

None of this rant implies that all Internet-centric service providers are without value. My core concern is with the difficulty in determining the risks associated with using one or another of them for given global Financial Services use cases.  That difficulty is only amplified when some involved exist within a reality-resisting “cloud first” belief system.

Because some “cloud” business models are exceptionally misaligned with global Financial Services enterprise needs and mandates, it is critically important to understand them. A given “cloud” vendor’s attack surface combined with a prodigious and undisciplined risk appetite can result in material misalignment with Financial Services needs. Again, this does not invalidate all “cloud” providers for all use cases; it elevates the importance of performing careful, thorough, clear-headed, evidence-informed risk assessments.  In our business, we are expected, even required, to protect trillions of dollars of other people’s money, to live up to our long and short term promises, and to comply with all relevant laws, regulations, and contracts.  And we are expected to do so in ways that are transparent enough for customers, prospects, regulators, and others to determine if we are meeting their expectations.

  • Evidence is not something to be used selectively to support beliefs.
  • Research is not hunting for justifications of existing beliefs.
  • Hunt for evidence. Use your cognitive capabilities to evaluate it.
  • Soberly analyze your beliefs.
  • Let the evidence influence your beliefs.
  • When needed, build new beliefs.[2]

Effective risk management has little room for anyone captured within a given belief system or abusing the power to create one’s own reality.

This remains a jumbled and unfinished thought that I will continue to evolve here.

What do you think?

[1] Derived from a phrase by Michelle Goldberg.
[2] Thank you Alex Wall, Johnston, IA. Author of a Letter to the Editor in the Feb 3, 2018 Des Moines Register.


Ransomware and My Cloud

December 10, 2017

I just reviewed descriptions of sample incidents associated with ransomware outlined in the “Top 10” review by Tripwire.

Ransomware attacks — malware that encrypts your data followed by the attacker attempting to extort money from you for the decryption secrets — are a non-trivial threat to most of us as individuals and all financial services enterprises.

Unfortunately for some, their corporate culture tends to grant workforce users trusted access to vast collections of structured and unstructured business information.  That ‘default to trust’ enlarges the potential impacts of a ransomware attack.

As global Financial Services security professionals, we need to resist the urge to share unnecessarily.

We need to quickly detect and respond to malware attacks in order to constrain their scope and impacts.  Because almost every global Financial Services enterprise represents a complex ecosystem of related and in some cases dependent operations, detection may involve many layers, technologies, and activities.  It is not just mature access/privilege management, patching, anti-virus, or security event monitoring, or threat intelligence alone.
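One of those many detection layers can be sketched as a content-entropy check: files rewritten by ransomware tend toward the near-uniform byte distribution of ciphertext, while most business documents do not. The threshold below is an illustrative assumption, not a tuned production value:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; encrypted or compressed
    content approaches the 8.0 maximum."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag content whose byte distribution resembles ciphertext.

    A monitor watching for many files crossing this threshold in a
    short window — combined with other signals — can shorten the time
    to detect an encryption campaign in progress.
    """
    return shannon_entropy(data) >= threshold
```

By itself this check is noisy (legitimately compressed files also score high), which is exactly why the layered approach described above matters.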

All of us also need to ensure that we have a risk-relevant post-ransomware attack data recovery capability that is effective across all our various business operations.

So, does the cloud make me safe from ransomware attack?  No.  Simply trusting your cloud vendor (or their hype squad) on this score does not reach the level of global Financial Services due diligence.  It seems safe to assert that for any given business process, the countless hardware, software, process, and human components that make up any cloud make it harder to resist and to recover from ransomware attack.  And under many circumstances, the presence of cloud infrastructure — by definition, managed by some other workforce using non-Financial-Services-grade endpoints — increases the probability of this family of malware attack.

 

REFERENCE:

“10 of the Most Significant Ransomware Attacks of 2017.” By David Bisson, 12-10-2017. https://www.tripwire.com/state-of-security/security-data-protection/cyber-security/10-significant-ransomware-attacks-2017/


Two Mobile Security Resources

October 16, 2017

Although it should not have taken this long, I just ran across two relatively new resources for those doing Financial Services business in an environment infested with mobile devices.

If you are a mobile developer or if your organization is investing in mobile apps, I believe that you should [at a minimum] carefully and thoughtfully review the Study on Mobile Device Security by the U.S. Department of Homeland Security and the Adversarial Tactics, Techniques & Common Knowledge Mobile Profile by MITRE.  Both seem like excellent, up-to-date overviews. The MITRE publication should be especially valuable for Architecture Risk Analysis and threat analysis in virtually any mobile context.

REFERENCES:
“Study on Mobile Device Security.” April 2017, by the US Dept. of Homeland Security (125 pages). https://www.dhs.gov/sites/default/files/publications/DHS%20Study%20on%20Mobile%20Device%20Security%20-%20April%202017-FINAL.pdf
“Adversarial Tactics, Techniques & Common Knowledge Mobile Profile.” October 2017, by The MITRE Corporation https://attack.mitre.org/mobile/index.php/Main_Page

 


Low Profile Office 365 Breach Reported

August 18, 2017

A couple years ago I wrote:

“I am told by many in my industry (and some vendors) that ‘if we put it in the cloud it will work better, cheaper, be safer, and always be available.’ Under most general financial services use cases (as opposed to niche functionality) that statement seems without foundation.”

Although many individuals have become more sophisticated in the ways they pitch ‘the cloud’ I still hear versions of this story on a fairly regular basis…

Today I learned about a recent Office 365 service outage that reminded me that issues concerning our use of ‘cloud’ technology and the commitments we in the Global Financial Services business make to our customers, prospects, marketers, investors, and regulators seem to remain very much unresolved.

What happened?

According to Microsoft, sometime before 11:30 PM (UTC) on August 3rd 2017, the company introduced an update to the Activity Reports service in the Office 365 admin center which resulted in one tenant’s usage reports being displayed in another tenant’s administrative portal.

Some customer o365 administrators noticed that the reported email and SharePoint usage for their tenants had spiked. When they investigated, the o365 AdminPortal (https://portal.office.com/adminportal/) displayed activity for users from one or more AzureAD domains outside their tenant. In the most general terms, this was a breach. The breach displayed names and email addresses of those users along with some amount of service traffic detail, for example, user named X (having email address userNameX@domainY.com) sent 193 and received 467 messages, as well as uploaded 9 documents to SharePoint, and read 45 documents in the previous week.
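The underlying defect class here — a reporting query not constrained to the requesting tenant — can be illustrated with a minimal sketch. The data, field names, and tenant identifiers below are hypothetical; Microsoft has not published the actual cause:

```python
# Hypothetical in-memory report store shared across tenants, standing in
# for whatever multi-tenant backing store a reporting service uses.
REPORTS = [
    {"tenant": "contoso",  "user": "a@contoso.com",  "sent": 193},
    {"tenant": "fabrikam", "user": "b@fabrikam.com", "sent": 12},
]

def usage_report(requesting_tenant: str) -> list[dict]:
    """Return usage rows for one tenant only.

    The discipline that prevents cross-tenant leakage: every query against
    shared storage is filtered by the caller's tenant, and raw multi-tenant
    result sets are never handed back to a presentation layer.
    """
    return [row for row in REPORTS if row["tenant"] == requesting_tenant]
```

An update that drops or bypasses that one filter produces exactly the symptom the o365 administrators observed: another tenant’s users and traffic counts rendered in your portal.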

Some subset of those o365 customers reported the breach to Microsoft.

Microsoft reported that they disabled the Activity Reports service at 11:40 PM UTC the same day, and that they had a fix in place by 3:35 AM UTC.

Why should I care?

As Global Financial Services Enterprises we make a lot of promises (in varying degrees of formality) to protect the assets for which we are responsible, and we promote our ethical business practices. For any one of our companies, our risk profile is rapidly evolving in concert with expanded use of a range of cloud services. Those risks appear in many forms. All of us involved in Global Financial Services need our security story-telling to evolve in alignment with the specific risks we are taking when we choose to operate one or another portion of our operations in ‘the cloud.’ In addition, our processes for detecting and reporting candidate “breaches” also need to evolve in alignment with our use of all things cloud.

In this specific situation it is possible that each of our companies could have violated our commitments to comply with the European GDPR (General Data Protection Regulation: http://www.eugdpr.org/), had it happened in August 2018 rather than August 2017. We all have formal processes to report and assess potential breaches. Because of the highly-restricted access to Office 365 and Azure service outage details, it seems easy to believe that many of our existing breach detection and reporting processes are no longer fully functional.

Like all cloud stuff, o365 and Azure are architected, designed, coded, installed, hosted, maintained, and monitored by humans (as is their underlying infrastructure of many and varied types).
Humans make mistakes, they misunderstand, they miscommunicate, they obfuscate, they get distracted, they get tired, they get angry, they ‘need’ more money, they feel abused, they are overconfident, they believe their own faith-based assumptions, they fall in love with their own decisions & outputs, they make exceptions for their employer, they market their services using language disconnected from raw service-delivery facts, and more. That is not the whole human story, but this list attempts to poke at just some human characteristics that can negatively impact systems marketed as ‘cloud’ on which all of us perform one or another facet of our business operations.

I recommend factoring this human element into your thinking about the value proposition presented by any given ‘cloud’ opportunity. All of us will need to ensure that all of our security and compliance mandated services incorporate the spectrum of risks that come with those opportunities. If we let that risk management and compliance activity lapse for too long, it could put any or all of our brands in peril.

REFERENCES:
“Data Breach as Office 365 Admin Center Displays Usage Data from Other Tenants.”
By Tony Redmond, August 4, 2017
https://www.petri.com/data-breach-office-365-admin-center

European GDPR (General Data Protection Regulations)
http://www.eugdpr.org/


Workforce Mobility = More Shoulder Surfing Risk

July 20, 2017

An individual recently alerted me to an instance of sensitive information being displayed on an application screen in the context of limited or non-existent business value. There are a few key risk management issues here – if we ship data to a user’s screen there is a chance that:

  • it will be intercepted by unauthorized parties,
  • unauthorized parties will have stolen credentials and use them to access that data, and
  • unauthorized parties will view it on the authorized-user’s screen.

Today I am most interested in the last use case — where traditional and non-traditional “shoulder surfing” is used to harvest sensitive data from user’s screens.

In global financial services, most of us have been through periods of data display “elimination” from legacy applications. In the United States during the last third of the 20th century, individuals’ Social Security Numbers (SSNs) evolved into an important component of customer identification. The SSN was a handy key to help distinguish one John Smith from another, and to help identify individuals whose names were more likely than others to be misspelled. Information Technology teams incorporated the SSN as a core component of individual identity across many U.S. industries. Over time, individuals’ SSNs became relatively liquid commodities and helped support a broad range of criminal income streams. After the turn of the century, regulations and customer privacy expectations evolved to make use of the SSN for identification increasingly problematic. In response to that cultural change or to other trigger events (privacy breach being the most common), IT teams invested in large-scale activities to reduce dependence on SSNs where practical, and to resist SSN theft by tightening access controls to bulk data stores and by removing or masking SSNs in application user interfaces (‘screens’).
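The masking half of that cleanup can be sketched as a presentation-layer filter applied before any data reaches a screen. The regex assumes the common XXX-XX-XXXX formatting; real systems must also handle unformatted nine-digit strings and mask as close to the data source as practical:

```python
import re

def mask_ssn(text: str) -> str:
    """Replace all but the last four digits of any XXX-XX-XXXX SSN,
    so full SSNs never travel to an at-risk user interface."""
    return re.sub(r"\b\d{3}-\d{2}-(\d{4})\b", r"***-**-\1", text)
```

Filtering at the boundary like this means a shoulder surfer, a screenshot, or a compromised endpoint sees only the truncated value.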

For the most part, global financial services leaders, application architects, and risk management professionals have internalized the concept of performing our business operations in a way that protects non-public data from ‘leaking’ into unauthorized channels. As our business practices evolve, we are obligated to continuously re-visit our alignment with data protection obligations. In software development, this is sometimes called architecture risk analysis (an activity that is not limited to formal architects!).

Risk management decisions about displaying non-public data on our screens need to take into account the location of those screens and the assumptions that we can reliably make about those environments. When we could depend upon the overwhelming majority of our workforce being in front of monitors located within workplace environments, the risks associated with ‘screen’ data leakage to unauthorized parties were often managed via line-of-sight constraints, building access controls, and “privacy filters” that were added to some individual’s monitors. We designed and managed our application user interfaces in the context of our assumptions about those layers of protection against unauthorized visual access.

Some organizations have embarked on “mobilizing” their operations — responding to advice that individuals and teams perform better when they are unleashed from traditional workplace constraints (like a physical desk, office, or other employer-managed workspace) as well as traditional workday constraints (like a contiguous 8, 10, or 12-hour day). Working from anywhere and everywhere, and doing so at any time, is pitched as an employee benefit as well as a business operations improvement play. These changes have many consequences. One important impact is the increasing frequency of unauthorized non-public data ‘leakage’ as workforce ‘screens’ are exposed in less controlled environments — environments with higher concentrations of non-workforce individuals as well as higher concentrations of high-power cameras. Enterprises evolving toward “anything, anywhere, anytime” operations must either assume the risks of exposing sensitive information to bystanders through their workforce’s screens, or take measures to effectively deal with those risks.

The ever more reliable assumption that our customers, partners, marketers, and vendors feel increasingly comfortable computing in public places such as coffee shops, lobbies, airports and other types of transportation hubs, drives up the threat of exposing sensitive information to unauthorized parties.

This is not your parent’s shoulder surfing…
With only modest computing power, sensitive information can be extracted from images delivered by high-power cameras. Inexpensive and increasingly ubiquitous multi-core machines, GPUs, and cloud computing make computing cycles accessible and affordable enough for criminals and seasoned hobbyists to extract sensitive information via off-the-shelf visual analysis tools.

This information exposure increases the risks of identity theft and theft of other business secrets that may result in financial losses, espionage, as well as other forms of cyber crime.

The dangers are real…
A couple years ago Michael Mitchell and An-I Andy Wang (Florida State University), and Peter Reiher (University of California, Los Angeles) wrote in “Protecting the Input and Display of Sensitive Data:”

The threat of exposing sensitive information on screen to bystanders is real. In a recent study of IT professionals, 85% of those surveyed admitted seeing unauthorized sensitive on-screen data, and 82% admitted that their own sensitive on-screen data could be viewed by unauthorized personnel at times. These results are consistent with other surveys indicating that 76% of the respondents were concerned about people observing their screens, while 80% admitted that they have attempted to shoulder surf the screen of a stranger.

The shoulder-surfing threat is worsening, as mobile devices are replacing desktop computers. More devices are mobile (over 73% of annual technical device purchases) and the world’s mobile worker population will reach 1.3 billion by 2015. More than 80% of U.S. employees continues working after leaving the office, and 67% regularly access sensitive data at unsafe locations. Forty-four percent of organizations do not have any policy addressing these threats. Advances in screen technology further increase the risk of exposure, with many new tablets claiming near 180-degree screen viewing angles.

What should we do first?
The most powerful approach to resisting data leakage via users’ screens is to stop sending that data to those at-risk application user interfaces.

Most of us learned that during our SSN cleanup efforts. In global financial services there were only the most limited use cases where an SSN was needed on a user’s screen. Eliminating SSNs from the data flowing out to those users’ endpoints was a meaningful risk reduction. Over time, the breaches that did not happen only because of SSN-elimination activities could represent material financial savings and advantage in a number of other forms (brand, goodwill, etc.).

As we review non-public data used throughout our businesses, and begin the process of sending only what is required for the immediate use case to users’ screens, it seems likely that we will find many candidates for simple elimination.

For some cases where sensitive data may be required on ‘unsafe’ screens Mitchell, Wang, and Reiher propose an interesting option (cashtags), but one beyond the scope of my discussion today.

REFERENCES:
“Cashtags: Protecting the Input and Display of Sensitive Data.”
By Michael Mitchell and An-I Andy Wang (Florida State University), and Peter Reiher (University of California, Los Angeles)
https://www.cs.fsu.edu/~awang/papers/usenix2015.pdf

