Two Mobile Security Resources

October 16, 2017
Although it should not have taken this long, I just ran across two relatively new resources for those doing Financial Services business in an environment infested with mobile devices.

If you are a mobile developer or if your organization is investing in mobile apps, I believe that you should [at a minimum] carefully and thoughtfully review the Study on Mobile Device Security by the U.S. Department of Homeland Security and the Adversarial Tactics, Techniques & Common Knowledge Mobile Profile by MITRE. Both seem like excellent, up-to-date overviews. The MITRE publication should be especially valuable for Architecture Risk Analysis and threat analysis in virtually any mobile context.

REFERENCES:
“Study on Mobile Device Security.”
By the U.S. Department of Homeland Security, April 2017 (125 pages)
https://www.dhs.gov/sites/default/files/publications/DHS%20Study%20on%20Mobile%20Device%20Security%20-%20April%202017-FINAL.pdf

“Adversarial Tactics, Techniques & Common Knowledge Mobile Profile.”
By The MITRE Corporation, October 2017
https://attack.mitre.org/mobile/index.php/Main_Page

 


Low Profile Office 365 Breach Reported

August 18, 2017

A couple years ago I wrote:

“I am told by many in my industry (and some vendors) that ‘if we put it in the cloud it will work better, cheaper, be safer, and always be available.’ Under most general financial services use cases (as opposed to niche functionality) that statement seems without foundation.”

Although many individuals have become more sophisticated in the ways they pitch ‘the cloud,’ I still hear versions of this story on a fairly regular basis…

Today I learned about a recent Office 365 service outage that reminded me that issues concerning our use of ‘cloud’ technology and the commitments we in the Global Financial Services business make to our customers, prospects, marketers, investors, and regulators seem to remain very much unresolved.

What happened?

According to Microsoft, sometime before 11:30 PM (UTC) on August 3, 2017, the company introduced an update to the Activity Reports service in the Office 365 admin center which resulted in one tenant’s usage reports being displayed in another tenant’s administrative portal.

Some customer O365 administrators noticed that the reported email and SharePoint usage for their tenants had spiked. When they investigated, the O365 admin portal (https://portal.office.com/adminportal/) displayed activity for users from one or more Azure AD domains outside their tenant. In the most general terms, this was a breach. The breach displayed names and email addresses of those users along with some amount of service traffic detail; for example, that a user named X (with email address userNameX@domainY.com) sent 193 messages and received 467, uploaded 9 documents to SharePoint, and read 45 documents in the previous week.

Some subset of those O365 customers reported the breach to Microsoft.

Microsoft reported that it disabled the Activity Reports service at 11:40 PM UTC the same day, and that it had a fix in place by 3:35 AM UTC.

Why should I care?

As Global Financial Services Enterprises we make a lot of promises (in varying degrees of formality) to protect the assets for which we are responsible, and we promote our ethical business practices. For any one of our companies, the risk profile is rapidly evolving in concert with expanded use of a range of cloud services. Those risks appear in many forms. All of us involved in Global Financial Services need our security story-telling to evolve in alignment with the specific risks we are taking when we choose to operate one or another portion of our operations in ‘the cloud.’ In addition, our processes for detecting and reporting candidate “breaches” also need to evolve in alignment with our use of all things cloud.

In this specific situation, it is possible that each of our companies could have violated our commitments to comply with the European GDPR (General Data Protection Regulation: http://www.eugdpr.org/) had it happened in August 2018 rather than August 2017. We all have formal processes to report and assess potential breaches. Because of the highly-restricted access to Office 365 and Azure service outage details, it seems easy to believe that many of our existing breach detection and reporting processes are no longer fully functional.
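
As one concrete (and deliberately modest) illustration of how detection might keep pace: a tenant administrator could routinely scan exported Activity Reports for user principal names that do not belong to the tenant at all. The sketch below is a minimal example in Python, assuming a CSV export of the email activity report; the column name ‘User Principal Name’, the file name, and the list of expected domains are assumptions to verify against your own export.

```python
import csv

# Domains that legitimately belong to our tenant -- an assumption for this sketch.
EXPECTED_DOMAINS = {"ourfirm.com", "ourfirm.co.uk"}

def flag_foreign_users(report_csv_path: str) -> list:
    """Scan an exported Office 365 activity report and return user principal
    names that fall outside our expected tenant domains."""
    suspicious = []
    with open(report_csv_path, newline="", encoding="utf-8-sig") as handle:
        for row in csv.DictReader(handle):
            # 'User Principal Name' is the assumed column header; confirm it
            # against the actual export before relying on this check.
            upn = (row.get("User Principal Name") or "").strip().lower()
            if upn and upn.split("@")[-1] not in EXPECTED_DOMAINS:
                suspicious.append(upn)
    return suspicious

if __name__ == "__main__":
    hits = flag_foreign_users("email_activity_user_detail.csv")
    if hits:
        print(f"Possible cross-tenant exposure: {len(hits)} foreign user(s) in our usage report")
        # This is the point where the formal breach assessment and reporting process should start.
```

A check like this would not have prevented the August incident, but it could shorten the gap between the exposure and the start of a firm’s own breach-reporting clock.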

Like all cloud stuff, O365 and Azure are architected, designed, coded, installed, hosted, maintained, and monitored by humans (as is their underlying infrastructure of many and varied types).
Humans make mistakes, they misunderstand, they miscommunicate, they obfuscate, they get distracted, they get tired, they get angry, they ‘need’ more money, they feel abused, they are overconfident, they believe their own faith-based assumptions, they fall in love with their own decisions & outputs, they make exceptions for their employer, they market their services using language disconnected from raw service-delivery facts, and more. That is not the whole human story, but this list attempts to poke at just some human characteristics that can negatively impact systems marketed as ‘cloud’ on which all of us perform one or another facet of our business operations.

I recommend factoring this human element into your thinking about the value proposition presented by any given ‘cloud’ opportunity. All of us will need to ensure that all of our security and compliance mandated services incorporate the spectrum of risks that come with those opportunities. If we let that risk management and compliance activity lapse for too long, it could put any or all of our brands in peril.

REFERENCES:
“Data Breach as Office 365 Admin Center Displays Usage Data from Other Tenants.”
By Tony Redmond, August 4, 2017
https://www.petri.com/data-breach-office-365-admin-center

European GDPR (General Data Protection Regulation)
http://www.eugdpr.org/


Workforce Mobility = More Shoulder Surfing Risk

July 20, 2017

An individual recently alerted me to an instance of sensitive information being displayed on an application screen where it delivered limited or no business value. There are a few key risk management issues here – if we ship data to a user’s screen there is a chance that:

  • it will be intercepted by unauthorized parties,
  • unauthorized parties will have stolen credentials and use them to access that data, and
  • unauthorized parties will view it on the authorized-user’s screen.

Today I am most interested in the last case — where traditional and non-traditional “shoulder surfing” is used to harvest sensitive data from users’ screens.

In global financial services, most of us have been through periods of data display “elimination” from legacy applications. In the last third of the 20th century in the United States, individuals’ Social Security Numbers (SSNs) evolved into an important component of customer identification. The SSN was a handy key to help distinguish one John Smith from another, and to help identify individuals whose names were more likely than others to be misspelled. Information Technology teams incorporated the SSN as a core component of individual identity across many U.S. industries. Over time, individuals’ SSNs became relatively liquid commodities and helped support a broad range of criminal income streams. After the turn of the century, regulations and customer privacy expectations evolved to make use of the SSN for identification increasingly problematic. In response to that cultural change or to other trigger events (privacy breach being the most common), IT teams invested in large-scale activities to reduce dependence on SSNs where practical, and to resist SSN theft by tightening access controls to bulk data stores and by removing or masking SSNs on application user interfaces (‘screens’).
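
Masking at the presentation layer is straightforward; the harder work is finding every screen that needs it. As a minimal sketch (in Python, with an illustrative pattern and message text only), display-side masking can be as simple as:

```python
import re

# Matches SSNs written as 123-45-6789 or 123456789; an illustrative pattern only.
SSN_PATTERN = re.compile(r"\b(\d{3})-?(\d{2})-?(\d{4})\b")

def mask_ssn(text: str) -> str:
    """Replace all but the last four digits of anything that looks like an SSN."""
    return SSN_PATTERN.sub(lambda match: "***-**-" + match.group(3), text)

print(mask_ssn("Account holder SSN: 123-45-6789"))  # -> Account holder SSN: ***-**-6789
```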

For the most part, global financial services leaders, application architects, and risk management professionals have internalized the concept of performing our business operations in a way that protects non-public data from ‘leaking’ into unauthorized channels. As our business practices evolve, we are obligated to continuously re-visit our alignment with data protection obligations. In software development, this is sometimes called architecture risk analysis (an activity that is not limited to formal architects!).

Risk management decisions about displaying non-public data on our screens need to take into account the location of those screens and the assumptions that we can reliably make about those environments. When we could depend upon the overwhelming majority of our workforce being in front of monitors located within workplace environments, the risks associated with ‘screen’ data leakage to unauthorized parties were often managed via line-of-sight constraints, building access controls, and “privacy filters” added to some individuals’ monitors. We designed and managed our application user interfaces in the context of our assumptions about those layers of protection against unauthorized visual access.

Some organizations have embarked on “mobilizing” their operations — responding to advice that individuals and teams perform better when they are unleashed from traditional workplace constraints (like a physical desk, office, or other employer-managed workspace) as well as traditional workday constraints (like a contiguous 8, 10, or 12-hour day). Working from anywhere and everywhere, and doing so at any time, is pitched as an employee benefit as well as a business operations improvement play. These changes have many consequences. One important impact is the increasing frequency of unauthorized non-public data ‘leakage’ as workforce ‘screens’ are exposed in less controlled environments — environments with higher concentrations of non-workforce individuals as well as higher concentrations of high-power cameras. Enterprises evolving toward “anything, anywhere, anytime” operations must either accept the risk of inadvertently exposing sensitive information to bystanders through the screens used by their workforce, or take measures to deal effectively with that risk.

The increasingly safe assumption that our customers, partners, marketers, and vendors are comfortable computing in public places such as coffee shops, lobbies, airports, and other transportation hubs also drives up the threat of exposing sensitive information to unauthorized parties.

This is not your parents’ shoulder surfing…
With only modest computing power, sensitive information can be extracted from images delivered by high-power cameras. Inexpensive and increasingly ubiquitous multi-core machines, GPUs, and cloud computing make those computing cycles accessible and affordable enough for criminals and seasoned hobbyists to harvest sensitive information with off-the-shelf visual analysis tools.
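
To make the “off-the-shelf” point concrete, here is a minimal sketch in Python, assuming the Tesseract OCR engine is installed along with the pytesseract and Pillow packages; the photo file name is hypothetical. Anything legible in a captured frame is roughly one function call away.

```python
from PIL import Image
import pytesseract  # requires the Tesseract OCR engine to be installed locally

def text_from_photo(path: str) -> str:
    """Extract readable text from a photo of someone's screen."""
    return pytesseract.image_to_string(Image.open(path))

# 'screen_photo.jpg' is a hypothetical capture from a phone or long-lens camera.
print(text_from_photo("screen_photo.jpg"))
```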

This information exposure increases the risks of identity theft and theft of other business secrets that may result in financial losses, espionage, as well as other forms of cyber crime.

The dangers are real…
A couple years ago Michael Mitchell and An-I Andy Wang (Florida State University), and Peter Reiher (University of California, Los Angeles) wrote in “Protecting the Input and Display of Sensitive Data:”

The threat of exposing sensitive information on screen to bystanders is real. In a recent study of IT professionals, 85% of those surveyed admitted seeing unauthorized sensitive on-screen data, and 82% admitted that their own sensitive on-screen data could be viewed by unauthorized personnel at times. These results are consistent with other surveys indicating that 76% of the respondents were concerned about people observing their screens, while 80% admitted that they have attempted to shoulder surf the screen of a stranger.

The shoulder-surfing threat is worsening, as mobile devices are replacing desktop computers. More devices are mobile (over 73% of annual technical device purchases) and the world’s mobile worker population will reach 1.3 billion by 2015. More than 80% of U.S. employees continues working after leaving the office, and 67% regularly access sensitive data at unsafe locations. Forty-four percent of organizations do not have any policy addressing these threats. Advances in screen technology further increase the risk of exposure, with many new tablets claiming near 180-degree screen viewing angles.

What should we do first?
The most powerful approach to resisting data leakage via users’ screens is to stop sending that data to those at-risk application user interfaces.

Most of us learned that during our SSN cleanup efforts. In global financial services there were only the most limited use cases where an SSN was needed on a user’s screen. Eliminating SSNs from the data flowing out to those users’ endpoints was a meaningful risk reduction. Over time, the breaches that did not happen because of that SSN-elimination work could represent material financial savings and advantages in a number of other forms (brand, goodwill, etc.).

As we review the non-public data used throughout our businesses and begin sending to users’ screens only what is required for the immediate use case, it seems reasonable to expect that we will find many candidates for simple elimination.
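
One pattern that helps is to build an explicit display view for each use case rather than serializing whole records to the user interface. The sketch below is illustrative only (Python, with hypothetical field names): the sensitive fields never leave the server because the display payload is constructed from an allow-list of fields.

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    # Full internal record -- field names are hypothetical.
    name: str
    ssn: str
    date_of_birth: str
    account_balance: float

def to_display_view(record: CustomerRecord) -> dict:
    """Build the payload sent to the screen: only the fields the immediate
    use case requires, so there is nothing sensitive left to mask later."""
    return {
        "name": record.name,
        "account_balance": record.account_balance,
    }

record = CustomerRecord("Jane Doe", "123-45-6789", "1970-01-01", 1523.07)
print(to_display_view(record))  # SSN and date of birth never reach the endpoint
```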

For some cases where sensitive data may be required on ‘unsafe’ screens, Mitchell, Wang, and Reiher propose an interesting option (cashtags), but one beyond the scope of my discussion today.

REFERENCES:
“Cashtags: Protecting the Input and Display of Sensitive Data.”
By Michael Mitchell and An-I Andy Wang (Florida State University), and Peter Reiher (University of California, Los Angeles)
https://www.cs.fsu.edu/~awang/papers/usenix2015.pdf


Pirated Software and Network Segmentation

July 17, 2017

Global financial services enterprises face a complex web of risk management challenges. Sometimes finding the right grain for security controls can be a difficult problem. This can be especially problematic when there is a tendency to attribute specific risks to cultures or nations.

A couple months ago I read a short article on how WannaCry ransomware impacted organizations in China. Recently, while responding to a question about data communications connectivity and segmenting enterprise networks, I used some of the factoids in this article. While some propose material “savings” and “agility” enabled by uninhibited workforce communications and sharing, the global financial services marketplace imposes the need for rational/rationalized risk management and some level of due diligence evidence. Paul Mozur provides a brief vignette about some of the risks associated with what seems like China’s dependence on pirated software. Mr. Mozur argues that unlicensed Windows software is not being patched, so the vulnerability ecosystem in China is much richer for attackers than is found in societies where software piracy is less pronounced. Because of the scale of the issue, this seems like a valid nation-specific risk — one that might add some context to some individuals’ urges to enforce China-specific data communications controls.

Again, there is no perfect approach to identifying security controls at the right grain. Story-telling about risks works best with real and relevant fact-sets. This little article may help flesh out one facet of the risks associated with more-open, rather than more segmented data communications networks.

REFERENCES:
“China, Addicted to Bootleg Software, Reels From Ransomware Attack.”
By Paul Mozur, The New York Times, May 15, 2017
https://mobile.nytimes.com/2017/05/15/business/china-ransomware-wannacry-hacking.html


‘Best Practices’ IT Should Avoid

June 20, 2017

12 ‘Best Practices’ IT Should Avoid At All Costs.

A colleague mentioned this title and I could not resist scanning the list.

They offer support to some of the funnier Dilbert cartoons, AND they should spark some reflection (maybe more) for some of us working in Global Financial Services.

1. Tell everyone they’re your customer
2. Establish SLAs and treat them like contracts
3. Tell dumb-user stories
4. Institute charge-backs
5. Insist on ROI
6. Charter IT projects
7. Assign project sponsors
8. Establish a cloud computing strategy
9. Go Agile. Go offshore. Do both at the same time
10. Interrupt interruptions with interruptions
11. Juggle lots of projects
12. Say no or yes no matter the request

If any of these ring local (or ring true), then I strongly recommend Bob Lewis’ review of these ‘best practices.’

If any of them make you wince, you might want to read an excellent response to Mr. Lewis by Didier Pironet.

In any case, this seems like an important set of issues. Both authors do a good job reminding us that we should avoid simply repeating any of these ‘best practices’ without careful analysis & consideration.

REFERENCES:
“12 ‘Best Practices’ IT Should Avoid At All Costs.”
By Bob Lewis, June 13, 2017
http://www.cio.com/article/3200445/it-strategy/12-best-practices-it-should-avoid-at-all-costs.html

“12 ‘best practices’ IT should avoid at all costs – My stance.”
By Didier Pironet, June 19, 2017
https://www.linkedin.com/pulse/12-best-practices-should-avoid-all-costs-my-stance-didier-pironet


Insecure software is the root cause…

May 17, 2017

If you are involved in creating, maintaining, operating, or acquiring risk-appropriate software, this short blog post about the recent WannaCry ransomware episode is worth reading.

https://blog.securitycompass.com/wannacry-and-the-elephant-in-the-room-c9b24cfee2bd


New Technology and Service Options Do Not Trump Law and Regulations

May 16, 2017

A couple weeks ago I received a letter from Wells Fargo. After mentioning some brokerage account details, there were a couple paragraphs of disclosure about $2.5 million in penalties for failing to effectively protect business-related electronic records. Wells Fargo has been having a rough time lately. But this situation is so self-inflicted, and so likely to happen elsewhere as Financial Services organizations’ technology personnel attempt to demonstrate that they can “deliver more for less…”, that I thought it might be worth sharing as a cautionary tale.

The disclosures outlined that the bank’s brokerage and independent wealth management businesses paid $1 million and another $1.5 million in fines & penalties because they failed to keep hundreds of millions of electronic documents in a “write once, read many” format — as required by the regulations under which they do business.

Federal securities laws and Financial Industry Regulatory Authority (FINRA) rules require that electronic storage media hosting certain business-related electronic records “preserve the records exclusively in a non-rewriteable and non-erasable format.” This type of storage media has long been referred to as WORM, or “write once, read many,” technology that prevents the alteration or destruction of the data it stores. The SEC has stated that these requirements are an essential part of the investor protection function because a firm’s books and records are the “primary means of monitoring compliance with applicable securities laws, including anti-fraud provisions and financial responsibility standards.” Requiring WORM technology is associated with maintaining the integrity of certain financial records.
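
To make the “write once, read many” semantics concrete, here is a toy sketch in Python (names, retention period, and storage approach are all illustrative assumptions): records can be written exactly once and read freely, and deletion is refused until a retention date passes. Real deployments rely on purpose-built, regulator-accepted storage rather than application code, but the behavior being mandated is essentially this.

```python
from datetime import datetime, timedelta, timezone

class WormStore:
    """Toy 'write once, read many' store: no rewrites, no early deletion."""

    def __init__(self, retention_years: int = 6):
        self._records = {}
        self._retention = timedelta(days=365 * retention_years)

    def write(self, record_id: str, payload: bytes) -> None:
        if record_id in self._records:
            raise PermissionError(f"{record_id} already exists; WORM records cannot be rewritten")
        self._records[record_id] = (payload, datetime.now(timezone.utc) + self._retention)

    def read(self, record_id: str) -> bytes:
        return self._records[record_id][0]

    def delete(self, record_id: str) -> None:
        _, retain_until = self._records[record_id]
        if datetime.now(timezone.utc) < retain_until:
            raise PermissionError(f"{record_id} is under retention until {retain_until:%Y-%m-%d}")
        del self._records[record_id]

store = WormStore()
store.write("trade-ticket-1234", b"...record contents...")
print(store.read("trade-ticket-1234"))
# store.write("trade-ticket-1234", b"altered")  # would raise PermissionError
```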

Over the past decade, the volume of sensitive financial data stored electronically has risen exponentially and there have been increasingly aggressive attempts to hack into electronic data repositories, posing a threat to inadequately protected records, further emphasizing the need to maintain records in WORM format. At the same time, in some financial services organizations “productivity” measures have resulted in large scale, internally-initiated customer fraud, again posing a threat to inadequately protected records.

My letter resulted from a set of FINRA actions announced late last December that imposed fines against 12 firms for a total of $14.4 million “for significant deficiencies relating to the preservation of broker-dealer and customer records in a format that prevents alteration.” In its December 21st press release, FINRA said that it “found that at various times, and in most cases for prolonged periods, the firms failed to maintain electronic records in ‘write once, read many,’ or WORM, format.”

FINRA reported that each of these 12 firms had technology, procedural and supervisory deficiencies that affected millions, and in some cases, hundreds of millions, of records core to the firms’ brokerage businesses, spanning multiple systems and categories of records. FINRA also announced that three of the firms failed to retain certain broker-dealer records the firms were required to keep under applicable record retention rules.

Brad Bennett, FINRA’s Executive Vice President and Chief of Enforcement, said, “These disciplinary actions are a result of FINRA’s focus on ensuring that firms maintain accurate, complete and adequately protected electronic records. Ensuring the integrity of these records is critical to the investor protection function because they are a primary means by which regulators examine for misconduct in the securities industry.”

FINRA reported 99 related “books and records” cases in 2016, which resulted in $22.5 million in fines. That seems like real money…

Failure to effectively protect these types of regulated electronic records may result in reputational harm (impacting brand & sales) and financial harm (fines & penalties). Keep that in mind as vendors and hype-sters attempt to sell us services that persist regulated data. New technology and service options do not supersede or replace the established law and regulations under which our Financial Services companies operate.

REFERENCES:
“FINRA Fines 12 Firms a Total of $14.4 Million for Failing to Protect Records From Alteration.”
December 21, 2016
http://www.finra.org/newsroom/2016/finra-fines-12-firms-total-144-million-failing-protect-records-alteration

“Annual Eversheds Sutherland Analysis of FINRA Cases Shows Record-Breaking 2016.”
February 28, 2017
https://us.eversheds-sutherland.com/NewsCommentary/Press-Releases/197511/Annual-Eversheds-Sutherland-Analysis-of-FINRA-Cases-Shows-Record-Breaking-2016

“Is Compliance in FINRA’s Crosshairs?”
http://www.napa-net.org/news/technical-competence/regulatory-agencies/is-compliance-in-finras-crosshairs/

SEC Rule 17a-4 & 17a-3 of the Securities Exchange Act of 1934:
“SEC Rule 17a-4 & 17a-3 – Records to be made by and preserved by certain exchange members, brokers and dealers.” (vendor summary)
http://www.17a-4.com/regulations-summary/

“SEC Interpretation: Electronic Storage of Broker-Dealer Records.”
https://www.sec.gov/rules/interp/34-47806.htm

“(17a-3) Records to be Made by Certain Exchange Members, Brokers and Dealers.”
http://www.finra.org/industry/interpretationsfor/sea-rule-17a-3

“(17a-4) Records to be Preserved by Certain Exchange Members, Brokers and Dealers.”
http://www.finra.org/industry/interpretationsfor/sea-rule-17a-4

