New Technology and Service Options Do Not Trump Law and Regulations

May 16, 2017

A couple of weeks ago I received a letter from Wells Fargo. After some brokerage account details, it included a couple of paragraphs of disclosure about $2.5 million in penalties for failing to effectively protect business-related electronic records.  Wells Fargo has been having a rough time lately.  But this situation is so self-inflicted, and so likely to happen elsewhere as Financial Services organizations’ technology personnel attempt to demonstrate that they can “deliver more for less…”, that I thought it was worth sharing as a cautionary tale.

The disclosures outlined that the bank’s brokerage and independent wealth management businesses paid $1 million and another $1.5 million in fines & penalties because they failed to keep hundreds of millions of electronic documents in a “write once, read many” format — as required by the regulations under which they do business.

Federal securities laws and Financial Industry Regulatory Authority (FINRA) rules require that electronic storage media hosting certain business-related electronic records “preserve the records exclusively in a non-rewriteable and non-erasable format.” This type of storage media has long been referred to as WORM or “write once, read many” technology, because it prevents the alteration or destruction of the data it stores. The SEC has stated that these requirements are an essential part of the investor protection function because a firm’s books and records are the “primary means of monitoring compliance with applicable securities laws, including anti-fraud provisions and financial responsibility standards.”  Requiring WORM technology helps maintain the integrity of certain financial records.
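At the filesystem level, the write-once idea can be sketched in a few lines of Python. This is purely illustrative (the function name and record contents are mine); real WORM compliance storage relies on dedicated hardware or storage services, not file permissions:

```python
import os
import tempfile

def worm_write(path, data):
    """Write-once: refuse to touch an existing record, then drop write permission."""
    # O_EXCL makes creation atomic; a second attempt raises FileExistsError
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o444)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
    os.chmod(path, 0o444)  # read-only for everyone; deters casual alteration

record = os.path.join(tempfile.mkdtemp(), "trade-0001.rec")
worm_write(record, b"BUY 100 XYZ @ 42.10")
try:
    worm_write(record, b"tampered")
except FileExistsError:
    print("record is write-once")
```

The regulated requirement is of course stronger than this: even an administrator must not be able to alter or erase the record during its retention period, which is why file permissions alone never satisfy the rule.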

Over the past decade, the volume of sensitive financial data stored electronically has risen exponentially, and increasingly aggressive attempts to hack into electronic data repositories threaten inadequately protected records, further emphasizing the need to maintain records in WORM format. At the same time, in some financial services organizations, “productivity” measures have resulted in large-scale, internally-initiated customer fraud, posing yet another threat to inadequately protected records.

My letter resulted from a set of FINRA actions announced late last December that imposed fines against 12 firms totaling $14.4 million “for significant deficiencies relating to the preservation of broker-dealer and customer records in a format that prevents alteration.” In its December 21st press release, FINRA said that it “found that at various times, and in most cases for prolonged periods, the firms failed to maintain electronic records in ‘write once, read many,’ or WORM, format.”

FINRA reported that each of these 12 firms had technology, procedural and supervisory deficiencies that affected millions, and in some cases, hundreds of millions, of records core to the firms’ brokerage businesses, spanning multiple systems and categories of records. FINRA also announced that three of the firms failed to retain certain broker-dealer records the firms were required to keep under applicable record retention rules.

Brad Bennett, FINRA’s Executive Vice President and Chief of Enforcement, said, “These disciplinary actions are a result of FINRA’s focus on ensuring that firms maintain accurate, complete and adequately protected electronic records. Ensuring the integrity of these records is critical to the investor protection function because they are a primary means by which regulators examine for misconduct in the securities industry.”

FINRA reported 99 related “books and records” cases in 2016, which resulted in $22.5 million in fines. That seems like real money…

Failure to effectively protect these types of regulated electronic records may result in reputational (impacting brand & sales) and financial (fines & penalties) harm. Keep that in mind as vendors and hypesters attempt to sell us services that persist regulated data. New technology and service options do not supersede or replace the established laws and regulations under which our Financial Services companies operate.

REFERENCES:
“FINRA Fines 12 Firms a Total of $14.4 Million for Failing to Protect Records From Alteration.”
December 21, 2016
http://www.finra.org/newsroom/2016/finra-fines-12-firms-total-144-million-failing-protect-records-alteration

“Annual Eversheds Sutherland Analysis of FINRA Cases Shows Record-Breaking 2016.”
February 28, 2017
https://us.eversheds-sutherland.com/NewsCommentary/Press-Releases/197511/Annual-Eversheds-Sutherland-Analysis-of-FINRA-Cases-Shows-Record-Breaking-2016

“Is Compliance in FINRA’s Crosshairs?”
http://www.napa-net.org/news/technical-competence/regulatory-agencies/is-compliance-in-finras-crosshairs/

SEC Rule 17a-4 & 17a-3 of the Securities Exchange Act of 1934:
“SEC Rule 17a-4 & 17a-3 – Records to be made by and preserved by certain exchange members, brokers and dealers.” (vendor summary)
http://www.17a-4.com/regulations-summary/

“SEC Interpretation: Electronic Storage of Broker-Dealer Records.”
https://www.sec.gov/rules/interp/34-47806.htm

“(17a-3) Records to be Made by Certain Exchange Members, Brokers and Dealers.”
http://www.finra.org/industry/interpretationsfor/sea-rule-17a-3

“(17a-4) Records to be Preserved by Certain Exchange Members, Brokers and Dealers.”
http://www.finra.org/industry/interpretationsfor/sea-rule-17a-4


The Treacherous 12 – Cloud Computing Top Threats in 2016

April 25, 2017

The Cloud Security Alliance published “The Treacherous 12 – Cloud Computing Top Threats in 2016” last year.  I just saw it cited in a security conference presentation and realized that I had not shared this reference.  For those involved in decision-making about risk management of their applications, data, and operations, this resource has some value.  If you have not yet been challenged to host your business in “the cloud,”** it is likely you will be in the future.

In my opinion, the Cloud Security Alliance is wildly optimistic about the business and compliance costs and the real risks associated with using shared, fluid, “cloud” services to host many types of global financial services business applications & non-public data.  That said, financial services is a diverse collection of business activities, some of which may be well served by credible “cloud” service providers (for example, but not limited to, some types of sales, marketing, and human resource activities).  In that context, the Cloud Security Alliance still publishes some content that can help decision-makers understand more about what they are getting into.

“The Treacherous 12 – Cloud Computing Top Threats in 2016” outlines what “experts identified as the 12 critical issues to cloud security (ranked in order of severity per survey results)”:

  1. Data Breaches
  2. Weak Identity, Credential and Access Management
  3. Insecure APIs
  4. System and Application Vulnerabilities
  5. Account Hijacking
  6. Malicious Insider
  7. Advanced Persistent Threats (APTs)
  8. Data Loss
  9. Insufficient Due Diligence
  10. Abuse and Nefarious Use of Cloud Services
  11. Denial of Service
  12. Shared Technology Issues

For each of these categories, the paper includes some sample business impacts, supporting anecdotes and examples, candidate controls that may help address given risks, and links to related resources.

If your role requires evaluating risks and opportunities associated with “cloud” anything, consider using this resource to help flesh out some key risk issues.

 

**Remember, as abstraction is peeled away “the cloud” is an ecosystem constructed of other people’s “computers” supported by other people’s employees…

REFERENCES:

Cloud Security Alliance:
https://cloudsecurityalliance.org

“The Treacherous 12 – Cloud Computing Top Threats in 2016”
https://downloads.cloudsecurityalliance.org/assets/research/top-threats/Treacherous-12_Cloud-Computing_Top-Threats.pdf


Do Not Use On-Line Services to Encode or Encrypt Secrets

March 17, 2017

I received an excellent reminder about protecting secrets from a developer this morning. His advice included:

In the course of development work, many of us need to encode or encrypt strings.  He had just bumped into a situation where teams were using an Internet-available, public service to Base64-encode OAuth key/secret pairs.  These OAuth “secrets” are used all over the Internet to authenticate against web service interfaces.  Too often they are static/permanent strings, which means that once stolen they are useful to anyone, hostile or otherwise, for long periods of time.  This type of authentication credential must be very carefully protected throughout its entire life-cycle.
[Please stick with me even if you are not familiar with Base64 or OAuth, because this is broadly reusable advice.]
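For context, the Base64-encoded OAuth key/secret pair those teams needed is typically just an HTTP Basic-style Authorization header, and it is trivial to build locally. A Python 3 sketch with obviously fake placeholder values (never paste real credentials into a web form):

```python
import base64

client_id = "my-client-id"          # illustrative placeholder only
client_secret = "my-client-secret"  # illustrative placeholder only

# RFC 7617 style: "id:secret", Base64-encoded, prefixed with "Basic "
pair = f"{client_id}:{client_secret}".encode("utf-8")
auth_header = "Basic " + base64.b64encode(pair).decode("ascii")
print(auth_header)  # -> Basic bXktY2xpZW50LWlkOm15LWNsaWVudC1zZWNyZXQ=
```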

The specific site is not really important as it could have been one of thousands of other free data encoding/encrypting sites.

The risk issue is associated with the fact that the “free” encoding service cloud site knows the client’s source IP address (plus other endpoint/user-identifying metadata) and the secrets that the user inputs. Using that information, they can infer (with some confidence) that a given company is using these secrets, and can sometimes also infer what the secrets are used for by the structure of the inputs. Nothing on the Internet is truly free. We need to assume that these sites earn revenue by monetizing what they learn. Cyber-crime is a business, and it is often less expensive to buy information about specific or classes of candidate targets than to independently perform the initial reconnaissance. So we should expect that some percentage of what free sites learn ends up as inputs to cyber-crime planning and activities. In that context, our secrets would not remain secret — and our risks would be elevated. In addition, extruding secrets in this way would also violate company policy at every global Financial Services enterprise.

Lucky for all of us, there are easy alternatives to using Internet-available public services to encode/encrypt our secrets.

Encoding can be as simple as a PowerShell or Python one-liner:

powershell "[convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes(\"mySecret\"))"

or

python -c "import base64; print(base64.b64encode(b'mySecret').decode())"

Or you can use any other development language of choice to easily assemble a utility to encode secrets. This is not technically difficult or especially risky.
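A slightly richer Python 3 sketch, including the URL-safe Base64 variant that OAuth and JWT work often calls for (the helper names are mine):

```python
import base64

def b64(secret: bytes) -> str:
    """Standard Base64 -- what HTTP Basic / OAuth headers usually expect."""
    return base64.b64encode(secret).decode("ascii")

def b64_urlsafe(secret: bytes) -> str:
    """URL-safe variant (- and _ instead of + and /), common in JWTs and URLs."""
    return base64.urlsafe_b64encode(secret).decode("ascii")

print(b64(b"mySecret"))          # -> bXlTZWNyZXQ=
print(b64_urlsafe(b"mySecret"))  # same here; variants differ only when + or / appear
```

Either way, the encoding happens entirely on your own machine, and no third party learns your endpoint metadata or your secrets.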

Encrypting safely is a greater challenge. Understand your goals first. Once you know what you need to achieve, you can work with a professional to select a cryptosystem and coding/operational processes that should have a chance of meeting those goals. Cryptography can go wrong. Do not attempt to invent your own.
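If your goals fit a vetted high-level recipe, the local alternative can still be small. A sketch using Fernet from the widely reviewed third-party `cryptography` package (an assumption about your environment; the professional you consult may well select a different scheme):

```python
# pip install cryptography -- a vetted recipe; do not invent your own cryptosystem
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this in a real secrets store, not in code
token = Fernet(key).encrypt(b"mySecret")
assert Fernet(key).decrypt(token) == b"mySecret"
print("round trip ok")
```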


Another Example of How the Cloud Equals Other People’s Computers

March 2, 2017

I have a sticker on my laptop reminding me that “The cloud is just other people’s computers.” (from StickerMule)  There is no cloud magic.  If you extend your global Financial Services operations into the cloud, it needs to be clearly and verifiably aligned with your risk management practices, your compliance obligations, your contracts, and the assumptions of your various constituencies.  That is a tall order.  Scan the rest of this short outline and then remember to critically evaluate the claims of the hypesters & hucksters who sell “cloud” as the solution to virtually any of your challenges.

Amazon reminded all of us of that fact this week when maintenance on some of its cloud servers cascaded into a much larger multi-hour service outage.

No data breach.  No hack.  Nothing that suggests hostile intent.  Just a reminder that the cloud is a huge, distributed pile of “other people’s computers.”  Those computers have all the hardware and software engineering, operations, and life-cycle management challenges that your staff find in their own data centers.  A key difference, though, is that they operate at fantastic scale, are massively shared, and their architecture & operations may not align with global Financial Services norms and obligations.

Amazon reported that the following services were unavailable for up to two and a half hours on Tuesday morning (28 February 2017):

  • S3 storage
  • The S3 console
  • Amazon Elastic Compute Cloud (EC2) new instance launches
  • Amazon Elastic Block Store (EBS) volumes
  • AWS Lambda

This resulted in major customer outages.

Here is how Amazon described the outage:

  1. “…on the morning of February 28th. The Amazon Simple Storage Service (S3) team was debugging (a billing system) issue…”
  2. “At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process.”
  3. “Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.”
  4. “The servers that were inadvertently removed supported two other S3 subsystems.”
  5. “One of these subsystems, the index subsystem, manages the metadata and location information of all S3 objects in the region. This subsystem is necessary to serve all GET, LIST, PUT, and DELETE requests.”
  6. “The second subsystem, the placement subsystem, manages allocation of new storage and requires the index subsystem to be functioning properly to correctly operate. The placement subsystem is used during PUT requests to allocate storage for new objects.”
  7. “Removing a significant portion of the capacity caused each of these systems to require a full restart.”
  8. “While these subsystems were being restarted, S3 was unable to service requests.”
  9. “Other AWS services in the US-EAST-1 Region that rely on S3 for storage, including the S3 console, Amazon Elastic Compute Cloud (EC2) new instance launches, Amazon Elastic Block Store (EBS) volumes (when data was needed from a S3 snapshot), and AWS Lambda were also impacted while the S3 APIs were unavailable.”

There is no magic in the cloud. It is engineered and operated by people. Alignment between your corporate culture, your corporate compliance obligations, your contractual obligations, and those of your cloud providers is critical to your success in global Financial Services. If those cloud computers and the activities by armies of humans who manage them are not well aligned with your needs and obligations, then you are simply depending on “hope” — one of the most feeble risk management practices. You are warned — again.

What do you think?

REFERENCES:
“The embarrassing reason behind Amazon’s huge cloud computing outage this week.”
https://www.washingtonpost.com/news/the-switch/wp/2017/03/02/the-embarrassing-reason-behind-amazons-huge-cloud-computing-outage-this-week/
By Brian Fung, March 2

“Summary of the Amazon S3 Service Disruption in the Northern Virginia (US-EAST-1) Region.”
https://aws.amazon.com/message/41926/


Bumper Sticker from RBS

February 19, 2017

The research team at security & risk data aggregator (and more) Risk-Based Security (RBS) published a couple of their observations this month — observations that should be a reminder to all of us involved in global Financial Services risk management. RBS catalogs & analyzes thousands of breaches every year. They suggest that ad-hoc, personality-based, or other types of company-unique security practices put companies at a self-inflicted and avoidable level of risk. RBS researchers summarize their findings into a couple central themes:

  • “Breaches can happen at even the most security-conscious organizations.”
  • “The tenacity and skill of attackers when it comes to searching out weaknesses in organizational practices and processes is unrelenting.”

There are a couple key components to their follow-on recommendation:

  • Employ a methodical and risk-based approach to security management, where risk assessments incorporate both:
    • The organization’s security practices, and
    • Downstream risk posed by vendors, suppliers and other third parties.

To address these risks and add structure to day-to-day risk management work, RBS researchers recommend that we:

  • Define security objectives and
  • Select and implement security controls.

The memo describing RBS research observations is useful only at the highest level as a reminder in global Financial Services. It seems a little like recommending that we all continue to breathe, eat well, and sleep enough. Their guidance to leverage mature security frameworks “to create robust security programs based on security best practice” is a long bumper sticker. In Financial Services at global scale, we all know that, and our various constituencies regularly remind us of those types of requirements. Sometimes they even show up in our sales literature, contracts, and SEC filings.

Bumper stickers may still have their place. RBS observations and recommendations may not be what we need for implementation, but they can help with our elevator speeches & the tag-lines required in a number of frequently encountered risk management interactions.

Here is one way to summarize their observations and recommendations:

Breaches continue to happen. 
Attackers are tenacious and unrelenting. 
Employ risk-appropriate levels of rigor and 
risk-based prioritization in the application 
of your security practices, as well as in the 
downstream risk posed by vendors, suppliers, 
and other third parties.

RESOURCES:

“Risk Based Security, NIST and University of Maryland Team Up To Tackle Security Effectiveness.”
https://www.riskbasedsecurity.com/2017/02/risk-based-security-nist-and-university-of-maryland-team-up-to-tackle-security-effectiveness/
February 17, 2017; By RBS

Another key message in the blog was to highlight a joint research project by “NIST’s Computer Security Resource Center and the University of Maryland, known as the Predictive Analytics Modeling Project (http://csrc.nist.gov/scrm/pamp-assessment-faqs.htm). The aim of the project is to conduct the primary research needed in order to build tools that can measure the effectiveness of security controls.” The project web site also says their mission includes seeing “how your organization compares to peers in your industry.” For more on this project see “CyberChain” at: https://cyberchain.rhsmith.umd.edu/.


Make use of OWASP Mobile Top 10

February 14, 2017

The OWASP “Mobile Security Project” team updated their Mobile Top 10 vulnerability list this week. {In the process they broke some of their links; if you hit one, just use the 2015 content for now: https://www.owasp.org/index.php/Projects/OWASP_Mobile_Security_Project_-2015_Scratchpad}

I was in a meeting yesterday with a group reviewing one facet of an evolving proposal for Office 365 as the primary collaboration and document storage infrastructure for some business operations.

Office 365 in global Financial Services? Yup. Technology pundits-for-sale, tech wannabes, and some who are still intoxicated by their mobile technology have been effective in their efforts to sell “cloud-first.” One outcome of some types of “cloud-enabled” operations is the introduction of mobile client platforms. Even though global Financial Services enterprises tend to hold many hundreds of billions or trillions of other people’s dollars, some sell (even unmanaged) mobile platforms as risk appropriate and within the risk tolerance of all relevant constituencies… My working assumption is that those gigantic piles of assets and the power that can result from them necessarily attract a certain amount of hostile attention. That attention requires that our software, infrastructure, and operations be resistant enough to attack to meet all relevant risk management obligations (contracts, laws, regulations, and more). This scenario seems like a mismatch — but I digress.

So, we were attempting to work through a risk review of Mobile Skype for Business integration. That raised a number of issues, one being the risks associated with the software itself. The mobile application ecosystem is composed of software that executes & stores information locally on mobile devices as well as software running on servers in any number of safe and wildly-unsafe environments. Under most circumstances the Internet is in between. By definition this describes a risk-rich environment.

All hostile parties on earth are also attached to the Internet. As a result, software connected to the Internet must be sufficiently resistant to attack (where “sufficient” is associated with a given business and technology context). Mobile applications are hosted on devices and within operating systems having a relatively short history. I believe that they have tended to prize features and “cool” over effective risk management for much of that history (and many would argue that they continue to do so). As a result, the mobile software ecosystem has a somewhat unique vulnerability profile compared to software hosted in other environments.

The OWASP “Mobile Security Project” team’s research resulted in the Top 10 mobile vulnerabilities list below. I think it is a useful tool for those involved in thinking about writing or buying software for that ecosystem. You can use it in a variety of ways. Challenge your vendors to show you evidence (yes, real evidence) that they have dealt with each of these risks. You can do the same with your IT architects or anyone who plays the role of an architect for periods of time, then do it again with your developers and testers later. Business analysts, or those who act as one some of the time, should also work through adding these as requirements as needed.  Another way to use this Mobile Top 10 resource is to help you identify and think through the attack surface of existing or proposed mobile-enabled applications, infrastructure, and operations.

OK, I hope that provides enough context to make use of the resource below.

REFERENCES:

Mobile Top 10 2016-Top 10
https://www.owasp.org/index.php/Mobile_Top_10_2016-Top_10

M1 – Improper Platform Usage
https://www.owasp.org/index.php/Mobile_Top_Ten_2016-M1-Improper_Platform_Usage
This category covers misuse of a platform feature or failure to use platform security controls. It might include Android intents, platform permissions, misuse of TouchID, the Keychain, or some other security control that is part of the mobile operating system. There are several ways that mobile apps can experience this risk.

M2 – Insecure Data Storage
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M2-Insecure_Data_Storage  This new category is a combination of M2 + M4 from Mobile Top Ten 2014. This covers insecure data storage and unintended data leakage.

M3 – Insecure Communication
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M3-Insecure_Communication This covers poor handshaking, incorrect SSL versions, weak negotiation, cleartext communication of sensitive assets, etc.

M4 – Insecure Authentication
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M4-Insecure_Authentication This category captures notions of authenticating the end user or bad session management. This can include:
Failing to identify the user at all when that should be required
Failure to maintain the user’s identity when it is required
Weaknesses in session management

M5 – Insufficient Cryptography
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M5-Insufficient_Cryptography The code applies cryptography to a sensitive information asset. However, the cryptography is insufficient in some way. Note that anything and everything related to TLS or SSL goes in M3. Also, if the app fails to use cryptography at all when it should, that probably belongs in M2. This category is for issues where cryptography was attempted, but it wasn’t done correctly.

M6 – Insecure Authorization
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M6-Insecure_Authorization This is a category to capture any failures in authorization (e.g., authorization decisions in the client side, forced browsing, etc.). It is distinct from authentication issues (e.g., device enrolment, user identification, etc.).

If the app does not authenticate users at all in a situation where it should (e.g., granting anonymous access to some resource or service when authenticated and authorized access is required), then that is an authentication failure not an authorization failure.

M7 – Client Code Quality
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M7-Poor_Code_Quality
This was the “Security Decisions Via Untrusted Inputs”, one of our lesser-used categories. This would be the catch-all for code-level implementation problems in the mobile client. That’s distinct from server-side coding mistakes. This would capture things like buffer overflows, format string vulnerabilities, and various other code-level mistakes where the solution is to rewrite some code that’s running on the mobile device.

M8 – Code Tampering
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M8-Code_Tampering
This category covers binary patching, local resource modification, method hooking, method swizzling, and dynamic memory modification.

Once the application is delivered to the mobile device, the code and data resources are resident there. An attacker can either directly modify the code, change the contents of memory dynamically, change or replace the system APIs that the application uses, or modify the application’s data and resources. This can provide the attacker a direct method of subverting the intended use of the software for personal or monetary gain.

M9 – Reverse Engineering
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M9-Reverse_Engineering
This category includes analysis of the final core binary to determine its source code, libraries, algorithms, and other assets. Software such as IDA Pro, Hopper, otool, and other binary inspection tools give the attacker insight into the inner workings of the application. This may be used to exploit other nascent vulnerabilities in the application, as well as revealing information about back end servers, cryptographic constants and ciphers, and intellectual property.

M10 – Extraneous Functionality
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M10-Extraneous_Functionality Often, developers include hidden backdoor functionality or other internal development security controls that are not intended to be released into a production environment. For example, a developer may accidentally include a password as a comment in a hybrid app. Another example includes disabling of 2-factor authentication during testing.


Code Review Required for JavaScript Too

December 17, 2016

In the course of my professional activity, I have repeatedly bumped into well-meaning individuals (developers, architects, leaders, and more) who believe that browser-hosted JavaScript is not a real security concern (anything important to protecting our systems and data happens back on servers, right?)  This can lead to a certain amount of passivity during code-promotion code reviews.

I was just looking for a current copy of the Automated Penetration Testing Toolkit (APT2), happened to glance at one of the authors’ blogs, and was presented with another reason that JavaScript code review and accompanying risk management is still a core capability.

Much of the automated web application testing that I see day to day does a great job finding bugs.  It does not, though, do such a great job identifying new features that require only a little code to implement.

Professional pen tester Adam Compton offers some keystroke-logging JavaScript along with a primitive server to catch & log the output.  Adding a keystroke-logging “feature” to one of your JavaScript modules could involve only a few lines of code, but could enable any number of abuse cases, resulting in harm of varying scope & impact.  Monitoring for “feature” additions to your JavaScript is just another reason to keep that language on your code-review radar.
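One lightweight way to keep this on the radar is a review-time scan of changed JavaScript for keystroke-capture and exfiltration patterns. A crude, illustrative Python sketch (the pattern list is mine and nowhere near exhaustive; it flags candidates for human review, nothing more):

```python
import re

# Patterns that often accompany keylogging "features" in browser JavaScript
SUSPECT = [
    r"\bonkey(?:down|press|up)\b",
    r"addEventListener\(\s*['\"]key(?:down|press|up)['\"]",
    r"\b(?:XMLHttpRequest|fetch|navigator\.sendBeacon)\b",  # possible exfil paths
]

def flag_lines(js_source: str):
    """Return (line_number, line) pairs worth a closer look during code review."""
    hits = []
    for n, line in enumerate(js_source.splitlines(), 1):
        if any(re.search(p, line) for p in SUSPECT):
            hits.append((n, line.strip()))
    return hits

sample = "document.addEventListener('keydown', e => buf.push(e.key));"
print(flag_lines(sample))
```

A human still has to judge whether a flagged line is a legitimate UI handler or an unwanted “feature,” but even this level of automation keeps the question in front of reviewers.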

REFERENCES:

Automated Penetration Testing Toolkit (APT2)
https://github.com/MooseDojo/apt2

New Script/Tool: KeyLogging in JavaScript
http://blog.seedsofepiphany.com/2015/04/new-scripttool-keylogging-in-javascript.html

 

