‘Best Practices’ IT Should Avoid

June 20, 2017

12 ‘Best Practices’ IT Should Avoid At All Costs.

A colleague mentioned this title and I could not resist scanning the list.

They lend support to some of the funnier Dilbert cartoons, AND they should spark some reflection (maybe more) for those of us working in global Financial Services.

1. Tell everyone they’re your customer
2. Establish SLAs and treat them like contracts
3. Tell dumb-user stories
4. Institute charge-backs
5. Insist on ROI
6. Charter IT projects
7. Assign project sponsors
8. Establish a cloud computing strategy
9. Go Agile. Go offshore. Do both at the same time
10. Interrupt interruptions with interruptions
11. Juggle lots of projects
12. Say no or yes no matter the request

If any of these ring local (or ring true), then I strongly recommend Bob Lewis’ review of these ‘best practices.’

If any of them make you wince, you might want to read an excellent response to Mr. Lewis by Didier Pironet.

In any case, this seems like an important set of issues. Both authors do a good job of reminding us that we should avoid simply repeating any of them without careful analysis & consideration.

REFERENCES:
12 ‘Best Practices’ IT Should Avoid At All Costs.
http://www.cio.com/article/3200445/it-strategy/12-best-practices-it-should-avoid-at-all-costs.html
By Bob Lewis, 06-13-2017

12 ‘best practices’ IT should avoid at all costs – My stance.
https://www.linkedin.com/pulse/12-best-practices-should-avoid-all-costs-my-stance-didier-pironet
By Didier Pironet, 06-19-2017


​The Treacherous 12 – Cloud Computing Top Threats in 2016

April 25, 2017

The Cloud Security Alliance published “The Treacherous 12 – Cloud Computing Top Threats in 2016” last year.  I just saw it cited in a security conference presentation and realized that I had not shared this reference.  For those involved in decision-making about risk management of their applications, data, and operations, this resource has some value.  If you have not yet experienced a challenge to host your business in “the cloud,”** it is likely you will in the future.

In my opinion, the Cloud Security Alliance is wildly optimistic about the business and compliance costs, and about the real risks, associated with using shared, fluid “cloud” services to host many types of global financial services business applications & non-public data.  That said, financial services is a diverse collection of business activities, some of which may be well served by credible “cloud” service providers (for example, but not limited to, some types of sales, marketing, and human resource activities).  In that context, the Cloud Security Alliance still publishes some content that can help decision-makers understand more about what they are getting into.

“The Treacherous 12 – Cloud Computing Top Threats in 2016” outlines what “experts identified as the 12 critical issues to cloud security (ranked in order of severity per survey results)”:

  1. Data Breaches
  2. Weak Identity, Credential and Access Management
  3. Insecure APIs
  4. System and Application Vulnerabilities
  5. Account Hijacking
  6. Malicious Insider
  7. Advanced Persistent Threats (APTs)
  8. Data Loss
  9. Insufficient Due Diligence
  10. Abuse and Nefarious Use of Cloud Services
  11. Denial of Service
  12. Shared Technology Issues

For each of these categories, the paper includes some sample business impacts, supporting anecdotes and examples, candidate controls that may help address given risks, and links to related resources.
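
For teams that want to operationalize the paper, its per-category layout (threat, severity rank, candidate controls) maps naturally onto a small working threat register.  A minimal sketch follows; the ranks come from the survey ordering above, but the candidate controls and owners are illustrative assumptions, not the paper's own lists:

```python
# Hypothetical excerpt of a threat register modeled on the paper's layout.
# Ranks match the survey ordering above; controls and owners are invented
# here for illustration.
treacherous_12_excerpt = {
    "Insecure APIs": {
        "rank": 3,
        "candidate_controls": ["gateway authentication", "input validation", "rate limiting"],
        "owner": "app-security",
    },
    "Insufficient Due Diligence": {
        "rank": 9,
        "candidate_controls": ["vendor risk assessment", "contractual security clauses"],
        "owner": "vendor-management",
    },
}

# Walk the register in severity order so review meetings start with the worst.
for threat, entry in sorted(treacherous_12_excerpt.items(), key=lambda kv: kv[1]["rank"]):
    controls = ", ".join(entry["candidate_controls"])
    print(f'{entry["rank"]:>2}. {threat} ({entry["owner"]}): {controls}')
```

Keeping severity, controls, and an accountable owner in one structure makes it easier to show a regulator or auditor that each published threat category has a named response.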

If your role requires evaluating risks and opportunities associated with “cloud” anything, consider using this resource to help flesh out some key risk issues.

 

**Remember, as the abstraction is peeled away, “the cloud” is an ecosystem constructed of other people’s “computers” supported by other people’s employees…

REFERENCES:

Cloud Security Alliance:
https://cloudsecurityalliance.org

“The Treacherous 12 – Cloud Computing Top Threats in 2016”
https://downloads.cloudsecurityalliance.org/assets/research/top-threats/Treacherous-12_Cloud-Computing_Top-Threats.pdf


Another Example of How the Cloud = Other People’s Computers

March 2, 2017

I have a sticker on my laptop reminding me that “The cloud is just other people’s computers.” (from StickerMule)  There is no cloud magic.  If you extend your global Financial Services operations into the cloud, it needs to be clearly and verifiably aligned with your risk management practices, your compliance obligations, your contracts, and the assumptions of your various constituencies.  That is a tall order.  Scan the rest of this short outline and then remember to critically evaluate the claims of the hypesters & hucksters who sell “cloud” as the solution to virtually any of your challenges.

Amazon reminded all of us of that fact this week when maintenance on some of their cloud servers cascaded into a much larger, multi-hour service outage.

No data breach.  No hack.  Nothing that suggests hostile intent.  Just a reminder that the cloud is a huge, distributed pile of “other people’s computers.”  They have all the hardware and software engineering, operations, and life-cycle management challenges that your staff find in their own data centers.  A key difference, though, is that they are also of fantastic scale, massively shared, and their architecture & operations may not align with global Financial Services norms and obligations.

Amazon reported that the following services were unavailable for up to two and a half hours on Tuesday morning (28 Feb 2017):

  • S3 storage
  • The S3 console
  • Amazon Elastic Compute Cloud (EC2) new instance launches
  • Amazon Elastic Block Store (EBS) volumes
  • AWS Lambda

This resulted in major customer outages.

Here is how Amazon described the outage:

  1. “…on the morning of February 28th. The Amazon Simple Storage Service (S3) team was debugging (a billing system) issue…”
  2. “At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process.”
  3. “Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.”
  4. “The servers that were inadvertently removed supported two other S3 subsystems.”
  5. “One of these subsystems, the index subsystem, manages the metadata and location information of all S3 objects in the region. This subsystem is necessary to serve all GET, LIST, PUT, and DELETE requests.”
  6. “The second subsystem, the placement subsystem, manages allocation of new storage and requires the index subsystem to be functioning properly to correctly operate. The placement subsystem is used during PUT requests to allocate storage for new objects.”
  7. “Removing a significant portion of the capacity caused each of these systems to require a full restart.”
  8. “While these subsystems were being restarted, S3 was unable to service requests.”
  9. “Other AWS services in the US-EAST-1 Region that rely on S3 for storage, including the S3 console, Amazon Elastic Compute Cloud (EC2) new instance launches, Amazon Elastic Block Store (EBS) volumes (when data was needed from a S3 snapshot), and AWS Lambda were also impacted while the S3 APIs were unavailable.”

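Steps 2–4 above describe a classic operational failure mode: a fat-fingered parameter to a capacity-removal command took out far more servers than intended.  One common mitigation is a guardrail that refuses outsized removals outright.  The sketch below is illustrative only; the function, thresholds, and server names are hypothetical and are not Amazon's tooling:

```python
def remove_capacity(requested, fleet, max_fraction=0.10, min_remaining=3):
    """Refuse removals that would take out too much of a subsystem at once.

    requested/fleet are lists of server identifiers; thresholds are
    illustrative defaults, not anyone's production values.
    """
    if len(requested) > max_fraction * len(fleet):
        raise ValueError(
            f"refusing to remove {len(requested)} of {len(fleet)} servers: "
            f"exceeds {max_fraction:.0%} safety limit")
    if len(fleet) - len(requested) < min_remaining:
        raise ValueError("removal would drop fleet below minimum capacity")
    return [s for s in fleet if s not in set(requested)]

fleet = [f"s3-index-{i}" for i in range(100)]
print(len(remove_capacity(fleet[:5], fleet)))   # small removal allowed -> 95
try:
    remove_capacity(fleet[:40], fleet)          # fat-fingered removal blocked
except ValueError as e:
    print("blocked:", e)
```

The point is not the specific thresholds but the design choice: an “established playbook” executed by an authorized human is still one typo away from an outage unless the tooling itself bounds the blast radius.
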
There is no magic in the cloud. It is engineered and operated by people. Alignment between your corporate culture, your corporate compliance obligations, your contractual obligations, and those of your cloud providers is critical to your success in global Financial Services. If those cloud computers and the activities by armies of humans who manage them are not well aligned with your needs and obligations, then you are simply depending on “hope” — one of the most feeble risk management practices. You are warned — again.

What do you think?

REFERENCES:
“The embarrassing reason behind Amazon’s huge cloud computing outage this week.”
https://www.washingtonpost.com/news/the-switch/wp/2017/03/02/the-embarrassing-reason-behind-amazons-huge-cloud-computing-outage-this-week/
By Brian Fung, March 2

“Summary of the Amazon S3 Service Disruption in the Northern Virginia (US-EAST-1) Region.”
https://aws.amazon.com/message/41926/


Bumper Sticker from RBS

February 19, 2017

The research team at security & risk data aggregator (and more) Risk-Based Security (RBS) published a couple of their observations this month — observations that should be a reminder to all of us involved in global Financial Services risk management. RBS catalogs & analyzes thousands of breaches every year. They suggest that ad-hoc, personality-based, or other types of company-unique security practices put companies at a self-inflicted and avoidable level of risk. RBS researchers summarize their findings into a couple central themes:

  • “Breaches can happen at even the most security-conscious organizations.”
  • “The tenacity and skill of attackers when it comes to searching out weaknesses in organizational practices and processes is unrelenting.”

There are a couple key components to their follow-on recommendation:

  • Employ a methodical and risk-based approach to security management, where risk assessments incorporate both:
    • The organization’s security practices, and
    • Downstream risk posed by vendors, suppliers and other third parties.

To address these risks and add structure to day-to-day risk management work, RBS researchers recommend that we:

  • Define security objectives and
  • Select and implement security controls.
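
One way to make “risk-based” concrete is a simple scored register that ranks internal practice gaps and third-party exposures together, then drives control selection from the top of the list.  The sketch below is a hypothetical illustration of that prioritization, not RBS's methodology; the entries and the 1–5 scales are invented for the example:

```python
# Hypothetical risk register: likelihood and impact on 1-5 scales, with
# third-party exposure tracked alongside internal practice gaps.
risks = [
    {"name": "unpatched internet-facing service", "likelihood": 4, "impact": 4, "third_party": False},
    {"name": "vendor with weak access controls",  "likelihood": 3, "impact": 5, "third_party": True},
    {"name": "stale employee security training",  "likelihood": 2, "impact": 2, "third_party": False},
]

def score(risk):
    """A deliberately simple likelihood-times-impact score."""
    return risk["likelihood"] * risk["impact"]

# Risk-based prioritization: select and implement controls for the
# highest-scoring items first, whatever their origin.
for r in sorted(risks, key=score, reverse=True):
    origin = "third party" if r["third_party"] else "internal"
    print(f"{score(r):>2}  {origin:<11} {r['name']}")
```

Even a crude score like this forces the conversation the RBS researchers are asking for: vendor and supplier risk competes for the same remediation budget as internal risk, on the same scale.
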

The memo describing the RBS research observations is useful at the highest levels as a reminder in global Financial Services, even if it seems a little like recommending that we all continue to breathe, eat well, and sleep enough. Their guidance to leverage mature security frameworks “to create robust security programs based on security best practice” is a long bumper sticker. In Financial Services at global scale, we all know that, and our various constituencies regularly remind us of those types of requirements. Sometimes they even show up in our sales literature, contracts, and SEC filings.

Bumper stickers may still have their place. RBS observations and recommendations may not be what we need for implementation, but they can help with our elevator speeches & the tag-lines required in a number of frequently encountered risk management interactions.

Here is one way to summarize their observations and recommendations:

Breaches continue to happen. 
Attackers are tenacious and unrelenting. 
Employ risk-appropriate levels of rigor and 
risk-based prioritization in the application 
of your security practices, as well as in the 
downstream risk posed by vendors, suppliers 
and other third parties.

REFERENCES:

“Risk Based Security, NIST and University of Maryland Team Up To Tackle Security Effectiveness.”
https://www.riskbasedsecurity.com/2017/02/risk-based-security-nist-and-university-of-maryland-team-up-to-tackle-security-effectiveness/
February 17, 2017; By RBS

Another key message in the blog highlighted a joint research project by “NIST’s Computer Security Resource Center and the University of Maryland, known as the Predictive Analytics Modeling Project (http://csrc.nist.gov/scrm/pamp-assessment-faqs.htm). The aim of the project is to conduct the primary research needed in order to build tools that can measure the effectiveness of security controls.” The project web site also says their mission includes seeing “how your organization compares to peers in your industry.” For more on this project, see “CyberChain” at: https://cyberchain.rhsmith.umd.edu/.


Use care when describing how you do Financial Services security

March 3, 2016

Use care when describing how you do your Financial Services security.  This seems especially relevant as some in our industry attempt to drive down costs by extending their operations into low-cost, consumer-heritage cloud services and onto other types of opaque Internet platforms of all kinds.  Consultants, pundits, analysts, and hucksters are all attempting to make a living by selling schemes that incorporate one or many of these options.  What they tend to omit are the impacts that their ideas may have on the truthfulness of your public and contractual security assurances.

The Consumer Financial Protection Bureau (CFPB) just fined Dwolla $100,000 U.S. for misleading users about the company’s data security practices.  In addition, Dwolla must report virtually all security-related activities to the CFPB and request permission for certain types of security changes for the next 5 years.  The CFPB also put the Dwolla Board of Directors on notice that they must demonstrate more intense and more regular involvement in and oversight of Dwolla security measures and their effectiveness.

The CFPB also required Dwolla to implement a long list of measures to improve the safety and security of its operations and the consumer information that is stored on, or transmitted through, its network(s). [see pages 12-13 for just the initial summary]

A key mandate seems to be that these security measures must evolve as Dwolla grows.  The CFPB wrote that Dwolla must protect the confidentiality, integrity, and availability of sensitive consumer information with “administrative, technical, and physical safeguards appropriate to Respondent’s size and complexity, the nature and scope of Respondent’s activities, and the sensitivity of the personal information collected about consumers.”  So this is not a simple once-and-done mandate at all.

Dwolla operates an online payments-transfer network.

The CFPB said Dwolla misrepresented the security of its platform, which collects users’ personal information at account set up.  All Financial Services enterprises collect users’ personal information at account setup…

The CFPB wrote that Dwolla had failed to:

  • Adopt and implement data-security policies and procedures reasonable and appropriate for the organization;
  • Use appropriate measures to identify reasonably foreseeable security risks;
  • Ensure that employees who have access to or handle consumer information received adequate training and guidance about security risks;
  • Use encryption technologies to properly safeguard sensitive consumer information; and
  • Practice secure software development, particularly with regard to consumer-facing applications developed at an affiliated website, Dwollalabs. (Note: Under this heading, the CFPB also included ending the use of customer information in the non-production environment.)

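That last point, keeping real customer information out of non-production environments, is often implemented by pseudonymizing identifiers before data is copied to test systems.  The following is a minimal sketch, assuming an HMAC-based masking scheme; the key handling, field names, and sample record are hypothetical, not Dwolla's practice:

```python
import hashlib
import hmac

# Hypothetical masking key; a real deployment would pull this from a
# secret manager and rotate it, never hard-code it.
MASKING_KEY = b"non-production-masking-key"

def pseudonymize(value: str) -> str:
    """Replace a customer identifier with a stable, irreversible token so
    non-production copies never contain real consumer information."""
    digest = hmac.new(MASKING_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"masked-{digest[:12]}"

# Invented sample record; string fields are masked, numeric fields kept.
record = {"name": "Jane Customer", "ssn": "123-45-6789", "balance": 1042.17}
test_record = {k: pseudonymize(v) if isinstance(v, str) else v
               for k, v in record.items()}
print(test_record["ssn"].startswith("masked-"))  # True: no real SSN in test data
```

Because the token is deterministic, joins and test cases still work across masked tables, but nothing in the non-production environment can be reversed into a real identifier without the key.
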
Would your Financial Services organization hold up against a thorough review of these areas of secure operations?

In response, Dwolla wrote:

Dwolla was incorporating new ideas because we wanted 
to build a safer product, but at the time we may not have 
chosen the best language and comparisons to describe 
some of our capabilities. It has never been the 
company’s intent to mislead anyone on critical issues 
like data security. For any confusion we may have caused, 
we sincerely apologize.

In that blog entry, they go on to describe how they implement security today.  They use careful words to describe their current status and strategy.

Dwolla has been an optimistic, agile, cloud-friendly, fast-evolving financial services specialist company for years.  The CFPB fine is a signal that optimism and its close relative in some approaches to ‘risk management’ — hope — are not going to be tolerated as effective protections for customer personal information.  I understand that we must always attempt to better serve our customers (real and prospective) and partners, but keep this reminder about how ‘security cannot only be words’ in mind as you explore wildly hyped technology options with enthusiasts who promote them.

REFERENCES

Administrative Proceeding File No. 2016-CFPB-0007
In the Matter of: Dwolla, Inc. Consent Order

Dwolla: https://www.dwolla.com/

“We are Never Done.” http://blog.dwolla.com/we-are-never-done/

“Dwolla fined $100,000 for misleading data security claims.”
Federal agency orders D.M.-based financial technology firm to bolster security
Matthew Patane, The Des Moines Register, page 11A. 3/3/2016 (from the physical paper copy)

“CFPB Fines Fintech Firm Dwolla Over Data-Security Practices — Online-payment company agrees to improve how it protects customer data.”


FBI Director James Comey on Some China Risks

October 5, 2014

For a variety of reasons, it is often a challenge to generate the appropriate level of information security awareness in executive leadership.  For some, this has been especially true when the issues are associated with nation-state actors or a given culture.

For enterprises extending their operations into China, it may be difficult to build an effective risk-management message in the face of the virtually-intoxicating potential for growth and profit.

In that context, a recent interview with FBI Director James Comey included some unambiguous statements that might be helpful in framing some of the risks of integrating or extending your Financial Services operations into China. The interview was aired on the October 5, 2014 episode of 60 Minutes.

Scott Pelley: What countries are attacking the United States as we sit here in cyberspace?

James Comey: Well, I don’t want to give you a complete list. But I can tell you the top of the list is the Chinese. As we have demonstrated with the charges we brought earlier this year against five members of the People’s Liberation Army. They are extremely aggressive and widespread in their efforts to break into American systems to steal information that would benefit their industry.

Scott Pelley: What are they trying to get?

James Comey: Information that’s useful to them so they don’t have to invent. They can copy or steal to learn about how a company might approach negotiation with a Chinese company, all manner of things.

Scott Pelley: How many hits from China do we take in a day?

James Comey: Many, many, many. I mean, there are two kinds of big companies in the United States. There are those who’ve been hacked by the Chinese and those who don’t know they’ve been hacked by the Chinese.

Scott Pelley: The Chinese are that good?

James Comey: Actually, not that good. I liken them a bit to a drunk burglar. They’re kicking in the front door, knocking over the vase, while they’re walking out with your television set. They’re just prolific. Their strategy seems to be: We’ll just be everywhere all the time. And there’s no way they can stop us.

Scott Pelley: How much does that cost the U.S. economy every year?

James Comey: Impossible to count. Billions.

The entire transcript is available at:
http://www.cbsnews.com/news/fbi-director-james-comey-on-threat-of-isis-cybercrime/

REFERENCE:

Other Completosec Channel blog entries on this topic:
https://completosec.wordpress.com/category/china/


Third-Party Security Assessments – We Need a Better Way

July 6, 2014

“According to a February 2013 Ponemon Institute survey, 65% of organizations transferring consumer data to third-party vendors reported a breach involving the loss or theft of their information. In addition, nearly half of organizations surveyed did not evaluate their partners before sharing sensitive data.” [DarkReading]

Assessing the risks associated with extending Financial Services operations into vendor/partner environments is a challenge.  It often results in less-than-crisp indicators of more or less risk.  Identifying, measuring, and dealing with these risks with a risk-relevant level of objectivity is generally not cheap and often takes time — and sometimes it is just not practical using our traditional approaches.  Some approaches also only attempt to deal with a single point-in-time, which ignores the velocity of business and technical change.

There are a number of talented security assessment companies that offer specialized talent, experience, and localized access virtually world-wide.  The challenge is less about available talent, but of time/delay, expense, and risks that are sometimes associated with revealing your interest in any given target(s).

There are also organizations that attempt to replace a repetitive, labor-intensive process with a non-repetitive, labor-saving approach that may reduce operational expenses and may also support some amount of staff redeployment.  The Financial Services Roundtable/BITS has worked toward this goal for over a decade.  Their guidance is invaluable.  For those in the “sharing” club, it appears to work well when applied to a range of established vendor types.  It is also, though, a difficult fit for many situations where the candidate vendor/partners are all relatively new (some still living on venture capital) and are still undergoing rapid evolution.  Some types of niche, cloud-based specialty service providers fall easily into this category.  The incentive to invest in a “BITS compliant” assessment for these types of targets seems small, and any assessment’s lasting value seems equally small.

Some challenges are compounded by increasing globalization — for example, how do we evaluate the risks associated with a candidate vendor whose technical and infrastructure administrative support personnel are spread across Brazil, Costa Rica, the U.S. East & West coasts, Viet Nam, China, India, Georgia, Germany, and Ireland?  Culture still matters.  What a hassle…

None of that alters the fact that as global financial services organizations we have obligations to many of our stakeholders to effectively manage the risks associated with extending our operations into vendors’ environments and building business partnerships.

When the stakes are material – for example during merger or acquisition research – it is easy to understand the importance of investing in an understanding of existing and candidate third-party risks.  There are many other situations where it seems “easy” to understand that a third party security assessment is mandated.  Unfortunately, not all use cases seem so universally clear-cut.

When we are attempting to evaluate platform or vendor opportunities, especially when in the early stages of doing so, the time and expense associated with traditional approaches to full-bore third-party risk assessments are a mismatch.  Performing third-party risk assessments in-house can also reveal sensitive tactical or strategic planning which can negatively impact existing relationships, add unnecessary complexity to negotiations, or, in edge cases, even disrupt relationships with key regulators.  As an industry, we have got to get better at quick-turn-around third-party risk assessments that are “good-enough” for many types of financial services decision-making.

For years, “technicians” have been evaluating Internet-facing infrastructure for signals of effective technology-centric risk management practices – or for their absence.  Poorly configured or vulnerable email or DNS infrastructure, open SNMP services, “external” exposure of “internal” administrative interfaces, SSL configurations, public announcements of breaches, and more have been used by many in their attempts to read “signals” of stronger or weaker risk management practices.  A colleague just introduced me to a company that uses “externally-observable” data to infer how diligent a target organization is in mitigating technology-associated risks.  Based on a quick scan of their site, they tell a good story.*  I am interested in learning about anyone’s experience with this, or this type of service.
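
As a concrete instance of one such externally-observable signal, TLS certificate hygiene can be read without any access to the target's interior.  The sketch below is mine, not any vendor's method; the function names are invented, and a real rating service would weigh many signals beyond certificate expiry:

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(not_after: str) -> int:
    """Parse an OpenSSL-style notAfter string (e.g. 'Jan  1 00:00:00 2100 GMT')
    and return the number of days until the certificate expires."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def fetch_not_after(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Retrieve the notAfter field from a host's served certificate.
    (Live network call; not executed in this example.)"""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]

# Demonstrate the parsing logic with a fixed date rather than a live probe:
print(cert_days_remaining("Jan  1 00:00:00 2100 GMT") > 0)
```

A fleet of expired or nearly-expired certificates on a target's public perimeter does not prove weak risk management, but, like open SNMP or exposed administrative interfaces, it is a cheap, outside-in signal worth folding into a quick-turn-around assessment.
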

*I have no relationships with BitsightTech, financial or otherwise.

 

REFERENCES:

“BitSight Technologies Launches Information Security Risk Rating Service.” 9/10/2013
http://www.darkreading.com/bitsight-technologies-launches-information-security-risk-rating-service/d/d-id/1140452?

“Bits Framework For Managing Technology Risk For Service Provider Relationships.” November 2003 Revised In Part February 2010.
http://www.bits.org/publications/vendormanagement/TechRiskFramework0210.pdf

Shared Assessments.
https://sharedassessments.org/

The company a colleague mentioned to me…
http://www.bitsighttech.com/

