Another Example of How the Cloud = Other People’s Computers

March 2, 2017

I have a sticker on my laptop reminding me that “The cloud is just other people’s computers.” (from StickerMule)  There is no cloud magic.  If you extend your global Financial Services operations into the cloud, it needs to be clearly and verifiably aligned with your risk management practices, your compliance obligations, your contracts, and the assumptions of your various constituencies.  That is a tall order.  Scan the rest of this short outline and then remember to critically evaluate the claims of the hypesters & hucksters who sell “cloud” as the solution to virtually any of your challenges.

Amazon reminded all of us of that fact this week when routine debugging work on some of their cloud servers cascaded into a much larger service outage lasting roughly two and a half hours.

No data breach.  No hack.  Nothing that suggests hostile intent.  Just a reminder that the cloud is a huge, distributed pile of “other people’s computers.”  They have all the hardware and software engineering, operations, and life-cycle management challenges that your staff find in their own data centers.  A key difference, though, is that those computers operate at fantastic scale, are massively shared, and their architecture & operations may not align with global Financial Services norms and obligations.

Amazon reported that the following services were unavailable for up to two and a half hours on Tuesday morning (28 Feb 2017):

  • S3 storage
  • The S3 console
  • Amazon Elastic Compute Cloud (EC2) new instance launches
  • Amazon Elastic Block Store (EBS) volumes
  • AWS Lambda

This resulted in major customer outages.

Here is how Amazon described the outage:

  1. “…on the morning of February 28th. The Amazon Simple Storage Service (S3) team was debugging (a billing system) issue…”
  2. “At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process.”
  3. “Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.”
  4. “The servers that were inadvertently removed supported two other S3 subsystems.”
  5. “One of these subsystems, the index subsystem, manages the metadata and location information of all S3 objects in the region. This subsystem is necessary to serve all GET, LIST, PUT, and DELETE requests.”
  6. “The second subsystem, the placement subsystem, manages allocation of new storage and requires the index subsystem to be functioning properly to correctly operate. The placement subsystem is used during PUT requests to allocate storage for new objects.”
  7. “Removing a significant portion of the capacity caused each of these systems to require a full restart.”
  8. “While these subsystems were being restarted, S3 was unable to service requests.”
  9. “Other AWS services in the US-EAST-1 Region that rely on S3 for storage, including the S3 console, Amazon Elastic Compute Cloud (EC2) new instance launches, Amazon Elastic Block Store (EBS) volumes (when data was needed from a S3 snapshot), and AWS Lambda were also impacted while the S3 APIs were unavailable.”

There is no magic in the cloud. It is engineered and operated by people. Alignment between your corporate culture, your corporate compliance obligations, your contractual obligations, and those of your cloud providers is critical to your success in global Financial Services. If those cloud computers and the activities by armies of humans who manage them are not well aligned with your needs and obligations, then you are simply depending on “hope” — one of the most feeble risk management practices. You are warned — again.

What do you think?

REFERENCES:
“The embarrassing reason behind Amazon’s huge cloud computing outage this week.”
https://www.washingtonpost.com/news/the-switch/wp/2017/03/02/the-embarrassing-reason-behind-amazons-huge-cloud-computing-outage-this-week/
By Brian Fung, March 2, 2017

“Summary of the Amazon S3 Service Disruption in the Northern Virginia (US-EAST-1) Region.”
https://aws.amazon.com/message/41926/


Make Use of the OWASP Mobile Top 10

February 14, 2017

The OWASP “Mobile Security Project” team updated its Mobile Top 10 vulnerability list this week. {In the process they broke some of their links; if you hit one, just use the 2015 content for now: https://www.owasp.org/index.php/Projects/OWASP_Mobile_Security_Project_-2015_Scratchpad}

I was in a meeting yesterday with a group reviewing one facet of an evolving proposal for Office 365 as the primary collaboration and document storage infrastructure for some business operations.

Office 365 in global Financial Services? Yup. Technology pundits-for-sale, tech wannabes, and some who are still intoxicated by their mobile technology have been effective in their efforts to sell “cloud-first.” One outcome of some types of “cloud-enabled” operations is the introduction of mobile client platforms. Even though global Financial Services enterprises tend to hold many hundreds of billions or trillions of other people’s dollars, some sell (even unmanaged) mobile platforms as risk-appropriate and within the risk tolerance of all relevant constituencies… My working assumption is that those gigantic piles of assets, and the power that can result from them, necessarily attract a certain amount of hostile attention. That attention requires that our software, infrastructure, and operations be resistant enough to attack to meet all relevant risk management obligations (contracts, laws, regulations, and more). This scenario seems like a mismatch — but I digress.

So, we were attempting to work through a risk review of Mobile Skype for Business integration. That raised a number of issues, one being the risks associated with the software itself. The mobile application ecosystem is composed of software that executes & stores information locally on mobile devices as well as software running on servers in any number of safe and wildly-unsafe environments. Under most circumstances the Internet is in between. By definition this describes a risk-rich environment.

All hostile parties on earth are also attached to the Internet. As a result, software connected to the Internet must be sufficiently resistant to attack (where “sufficient” is associated with a given business and technology context). Mobile applications are hosted on devices and within operating systems having a relatively short history. I believe that they have tended to prize features and “cool” over effective risk management for much of that history (and many would argue that they continue to do so). As a result, the mobile software ecosystem has a somewhat unique vulnerability profile compared to software hosted in other environments.

The OWASP “Mobile Security Project” team research resulted in the Top 10 mobile vulnerabilities list below. I think it is a useful tool to support those involved in thinking about writing or buying software for that ecosystem. You can use it in a variety of ways. Challenge your vendors to show you evidence (yes, real evidence) that they have dealt with each of these risks. You can do the same with your IT architects or anyone who plays the role of an architect for periods of time — then do it again with your developers and testers later. Business analysts, or those who act as one some of the time, should also work through adding these as requirements as needed.  Another way to use this Mobile Top 10 resource is to help you identify and think through the attack surface of existing or proposed mobile-enabled applications, infrastructure, and operations.

OK, I hope that provides enough context to make use of the resource below.

REFERENCES:

Mobile Top 10 2016-Top 10
https://www.owasp.org/index.php/Mobile_Top_10_2016-Top_10

M1 – Improper Platform Usage
https://www.owasp.org/index.php/Mobile_Top_Ten_2016-M1-Improper_Platform_Usage
This category covers misuse of a platform feature or failure to use platform security controls. It might include Android intents, platform permissions, misuse of TouchID, the Keychain, or some other security control that is part of the mobile operating system. There are several ways that mobile apps can experience this risk.

M2 – Insecure Data Storage
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M2-Insecure_Data_Storage
This new category is a combination of M2 + M4 from Mobile Top Ten 2014. This covers insecure data storage and unintended data leakage.
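
To make that concrete, here is a minimal sketch (not from OWASP) of the difference between parking a credential in ordinary app storage and handing it to the platform keystore.  It uses Python and the third-party “keyring” package purely as an illustration; on a mobile device the equivalent would be the iOS Keychain or the Android Keystore, and the service and account names below are made up:

    # pip install keyring -- illustrative only; service/account names are hypothetical
    import keyring

    # Risky pattern: a credential written to ordinary, unprotected app storage.
    with open("api_token.txt", "w") as fh:
        fh.write("s3cr3t-token")

    # Better pattern: let the platform's protected credential store hold it.
    keyring.set_password("example-mobile-backend", "trading-app-user", "s3cr3t-token")
    token = keyring.get_password("example-mobile-backend", "trading-app-user")
    print(token is not None)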

M3 – Insecure Communication
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M3-Insecure_Communication
This covers poor handshaking, incorrect SSL versions, weak negotiation, cleartext communication of sensitive assets, etc.
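
As one small, hedged example of the “incorrect SSL versions / weak negotiation” point: a client can refuse to negotiate anything weaker than TLS 1.2 and insist on certificate and hostname verification.  The sketch below uses Python’s standard library and a hypothetical URL; mobile HTTP stacks expose equivalent settings:

    import ssl
    import urllib.request

    # The default context verifies the server certificate and hostname;
    # then refuse to negotiate anything older than TLS 1.2.
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2

    with urllib.request.urlopen("https://example.com/api/health", context=context) as resp:
        print(resp.status)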

M4 – Insecure Authentication
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M4-Insecure_Authentication
This category captures notions of authenticating the end user or bad session management. This can include:
  • Failing to identify the user at all when that should be required
  • Failure to maintain the user’s identity when it is required
  • Weaknesses in session management

M5 – Insufficient Cryptography
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M5-Insufficient_Cryptography
The code applies cryptography to a sensitive information asset. However, the cryptography is insufficient in some way. Note that anything and everything related to TLS or SSL goes in M3. Also, if the app fails to use cryptography at all when it should, that probably belongs in M2. This category is for issues where cryptography was attempted, but it wasn’t done correctly.
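
For illustration only (this is not OWASP’s sample code), the distinction is between attempting your own cipher construction and using a vetted, authenticated-encryption primitive.  A minimal Python sketch using the third-party “cryptography” package, with a made-up plaintext; key management is deliberately out of scope here:

    # pip install cryptography
    from cryptography.fernet import Fernet

    # In practice the key would live in a platform keystore, not in code or app storage.
    key = Fernet.generate_key()
    f = Fernet(key)

    token = f.encrypt(b"account=12345;balance=100.00")  # authenticated encryption
    assert f.decrypt(token) == b"account=12345;balance=100.00"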

M6 – Insecure Authorization
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M6-Insecure_Authorization
This is a category to capture any failures in authorization (e.g., authorization decisions in the client side, forced browsing, etc.). It is distinct from authentication issues (e.g., device enrolment, user identification, etc.).

If the app does not authenticate users at all in a situation where it should (e.g., granting anonymous access to some resource or service when authenticated and authorized access is required), then that is an authentication failure not an authorization failure.
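
A tiny, hypothetical server-side sketch of the distinction: the server decides entitlement from its own ownership data, rather than trusting whatever account identifier the mobile client asserts.  The names and the data-access helper below are invented for illustration:

    def load_statement(account_id: str) -> str:
        # Hypothetical data-access helper; stands in for a real repository call.
        return f"statement for {account_id}"

    def get_statement(session_user_id: str, requested_account_id: str, accounts_by_owner: dict) -> str:
        # Authorization happens server-side against server-held ownership data,
        # never against a client-supplied claim of ownership.
        owned = accounts_by_owner.get(session_user_id, set())
        if requested_account_id not in owned:
            raise PermissionError("authenticated user is not authorized for this account")
        return load_statement(requested_account_id)

    accounts_by_owner = {"alice": {"ACCT-100"}}
    print(get_statement("alice", "ACCT-100", accounts_by_owner))  # allowed
    # get_statement("alice", "ACCT-200", accounts_by_owner) would raise PermissionError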

M7 – Client Code Quality
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M7-Poor_Code_Quality
This was the “Security Decisions Via Untrusted Inputs” category, one of the lesser-used categories. This would be the catch-all for code-level implementation problems in the mobile client. That’s distinct from server-side coding mistakes. This would capture things like buffer overflows, format string vulnerabilities, and various other code-level mistakes where the solution is to rewrite some code that’s running on the mobile device.

M8 – Code Tampering
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M8-Code_Tampering
This category covers binary patching, local resource modification, method hooking, method swizzling, and dynamic memory modification.

Once the application is delivered to the mobile device, the code and data resources are resident there. An attacker can either directly modify the code, change the contents of memory dynamically, change or replace the system APIs that the application uses, or modify the application’s data and resources. This can provide the attacker a direct method of subverting the intended use of the software for personal or monetary gain.
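
One common, partial mitigation is an integrity check of shipped resources against digests recorded at build or signing time.  The sketch below is a deliberately simplified Python illustration (the expected digest is a placeholder); real tamper resistance on mobile also leans on platform code signing and runtime protections:

    import hashlib

    # Placeholder digest; a real build pipeline would generate and record this at signing time.
    EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def resource_is_intact(path: str) -> bool:
        with open(path, "rb") as fh:
            actual = hashlib.sha256(fh.read()).hexdigest()
        return actual == EXPECTED_SHA256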

M9 – Reverse Engineering
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M9-Reverse_Engineering
This category includes analysis of the final core binary to determine its source code, libraries, algorithms, and other assets. Software such as IDA Pro, Hopper, otool, and other binary inspection tools give the attacker insight into the inner workings of the application. This may be used to exploit other nascent vulnerabilities in the application, as well as revealing information about back end servers, cryptographic constants and ciphers, and intellectual property.

M10 – Extraneous Functionality
https://www.owasp.org/index.php?title=Mobile_Top_Ten_2016-M10-Extraneous_Functionality
Often, developers include hidden backdoor functionality or other internal development security controls that are not intended to be released into a production environment. For example, a developer may accidentally include a password as a comment in a hybrid app. Another example includes disabling of 2-factor authentication during testing.


Another Reason to Disbelieve The Apple Security Story

September 2, 2014

Some subset of any Financial Services organization’s workforce has BYOD fever.  For many in our business, that fever has infected one or more senior leaders who cannot be ignored.  In Financial Services, we are collectively responsible for protecting $trillions of other people’s assets.

Most of the BYOD fever seems to be associated with new mobile devices.  From what I can observe, many Financial Services organizations are emphasizing their attraction to Apple iPads over Android or other alternatives.  That behavior seems out of phase with our due diligence obligations.

Apple has invested what must be a tremendous amount of resources and effort in building an image that incorporates something like “trust me, but I will not respond to your requests for transparency…”  For some reason, that seems to work.  This is in spite of regular patching of vulnerabilities that could have been discovered during architectural analysis, design, coding, static code security analysis, QA, or penetration testing.  Apparently those activities do not incorporate effective secure software practices.

The latest example of Apple’s approach to software security made the news over the last weekend.  A vulnerability in iCloud enabled a trivial attack to discover the passwords of a number of targeted individuals.  Those passwords were then used to steal those users’ iCloud-“protected” personal files.  Apple did not enforce a “max attempts” threshold for failed attempts to log in to iCloud, which permitted attackers to pound away at the URL https://fmipmobile.icloud.com/fmipservice/device/$apple_id/initClient with basic auth attempts using scripts or malware that identified itself as ‘User-Agent’: ‘FindMyiPhone/376 CFNetwork/672.0.8 Darwin/14.0.0’.  An easy-to-understand proof-of-concept application is available on GitHub.

Remember, in Financial Services, implementing some type of failed-login governor has been standard practice for as long as we have been using the Internet for business.  Our constituents expect some type of “n-failed-login-attempts-and-you-are-locked-out.”  They may not consciously think through a detailed rationale; it is just a small but essential part of exhibiting a threshold level of Financial Services due diligence.  I assume that one possible root cause was that Apple engineers and architects reasoned that either nobody could format a basic auth HTTP POST with some JSON payload and sling it at their iCloud web service interface, or that their closed ecosystem and black-box approach to security implementations would keep their web service interface from being discovered.  Alternatively, they specified a max-failed-login-attempts feature in their iCloud designs, but Apple management directed them to remove it based on non-technical rationale.  There could be other root causes of this vulnerability, but with the resources available to Apple, none seem in alignment with their “trust us” story-telling.  Their iCloud authentication implementation was just not fit for Financial Services workforce operating environments — even as they argued that “iCloud is built with industry-standard security practices and employs strict policies to protect your data.”
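
For readers who want something concrete, here is a minimal sketch of the kind of failed-login governor described above.  It is illustrative only (thresholds, in-memory storage, and names are my own assumptions, not any vendor’s implementation); a production version would persist counters, alert on lockouts, and also weigh source IP and device signals:

    import time
    from collections import defaultdict

    MAX_ATTEMPTS = 5            # illustrative threshold
    LOCKOUT_SECONDS = 15 * 60   # illustrative lockout window

    _recent_failures = defaultdict(list)   # account id -> timestamps of recent failed logins

    def login_allowed(account_id: str) -> bool:
        now = time.time()
        recent = [t for t in _recent_failures[account_id] if now - t < LOCKOUT_SECONDS]
        _recent_failures[account_id] = recent
        return len(recent) < MAX_ATTEMPTS

    def record_failed_login(account_id: str) -> None:
        _recent_failures[account_id].append(time.time())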

Brute forcing passwords is a proven, decades-old practice that is highly effective unless resisted (because people, in large numbers, behave so predictably).  Financial Services-grade businesses understand this and implement and enforce policies that generally resist bald, brute force attacks.  It is a small, simple, basic, and essential characteristic of any Internet-ready system hosting non-public resources.  The fact that Apple implemented an Internet-facing authentication interface without resistance to brute force password attack, then failed to implement defense in depth (i.e., instrument the environment with effective IDS/IPS, identity fraud detection, and more) demonstrates — again — their unfitness for the Financial Services workforce environment.

Update 09-03-2014:

Could it be that Apple considers hackappcom’s proof of concept application and demonstrations of its use just another side-show?  They reacted to news about the celebrity data theft using what I read as legalistic and deflecting language:

“None of the cases we have investigated has resulted from any breach in any of Apple’s systems,” Nat Kerris, a company spokeswoman, said in a statement. “We are continuing to work with law enforcement to help identify the criminals involved.”

Update 09-06-2014:

Apple, via CEO Tim Cook, continued the “Apple ecosystem and its technology are safe” theme, blaming users for the recent iCloud vulnerabilities and their exploit, saying in a WSJ interview that Apple would ratchet up user-awareness communications about stronger and safer passwords, and apparently will not be investing in more effective engineering:

“When I step back from this terrible scenario that happened and say what more could we have done, I think about the awareness piece,” he said. “I think we have a responsibility to ratchet that up. That’s not really an engineering thing.”

Mr. Cook also said that Apple would begin sending users email messages and push notifications when certain AppleID events occur or when a user’s account data lands on a new device.

After a 40-hour investigation, Apple “concluded that there was no breach of its data servers. The company has said it discovered a number of celebrity accounts were compromised by targeted attacks…”

Sure.  Not ready for Financial Services.


REFERENCES:

“Hacker leaks dozens of nude celebrity pics in alleged iCloud hack.”
By Cody Lee, Aug 31, 2014
http://www.idownloadblog.com/2014/08/31/icloud-celeb-nude-pics-hack/

“Apple reportedly patches Find My iPhone vulnerability to hack Apple ID accounts.” By Christian Zibreg, Sep 1, 2014
http://www.idownloadblog.com/2014/09/01/icloud-hacking-patched-find-my-iphone/

“ibrute.” By hackappcom
https://github.com/hackappcom/ibrute

“Apple patches ‘Find My iPhone’ exploit.” By Adrian Kingsley-Hughes, Sep 1, 2014
http://www.zdnet.com/apple-patches-find-my-iphone-exploit-7000033171/

“iCloud: iCloud security and privacy overview.”
http://support.apple.com/kb/HT4865

“iCloud Keychain.”
http://www.slideshare.net/alexeytroshichev/icloud-keychain-38565363

“Privacy Collides With the Wild Web.” By Mike Isaac, Sep 2, 2014
http://mobile.nytimes.com/2014/09/03/technology/trove-of-nude-photos-sparks-debate-over-online-behavior.html?_r=0 (downloaded Wed 09/03/2014 5:46 AM)

“Apple Says It Will Add New iCloud Security Measures After Celebrity Hack.” By Brian X. Chen, Sep 4, 2014 http://mobile.nytimes.com/blogs/bits/2014/09/04/apple-says-it-will-add-new-security-measures-after-celebrity-hack/

“Tim Cook Says Apple to Add Security Alerts for iCloud Users — Apple CEO Denies a Lax Attitude Toward Security Allowed Hackers to Post Nude Photos of Celebrities.” By Daisuke Wakabayashi, Sep 5, 2014 http://online.wsj.com/news/article_email/tim-cook-says-apple-to-add-security-alerts-for-icloud-users-1409880977-lMyQjAxMTA0MDAwNDEwNDQyWj

“Apple Media Advisory: Update to Celebrity Photo Investigation.” http://www.apple.com/pr/library/2014/09/02Apple-Media-Advisory.html (downloaded Sat 09/06/2014 5:56 PM)


String-Matching in a Web Application Firewall [WAF]

April 2, 2009

In a review of a loaner web application firewall, a colleague noticed that it seemed to be regex-centric.  I asked if it was possible for him to extract a list of the regexes used by the appliance and then send me a copy of the set.  I really didn’t have much hope for getting the list, but there it was in my email that evening.

My colleague had set up a test environment with an application server, the WAF (web application firewall), a traditional firewall, and his client PC.  All traffic travelling between the client PC and the application server had to traverse the firewall and the WAF.  He reported that the WAF did a good job identifying and denying his cross site scripting (XSS) and SQL injection attacks on the web application, and did not exhibit excessive false positives during his testing.  It appeared to be equally capable regardless of attempts to attack form fields, hidden fields, or any type of POST or GET values.  Based on that finding alone, this appliance might be worth its cost for specific, known-vulnerable, high-risk situations where it is impossible to remediate the application weaknesses or those repairs will take an extended period of time.  Vulnerable vendor applications come to mind…

So, based on a quick review of the information in the email from my associate, the WAF organizes its regex patterns into the following categories: command injection, credit card numbers, cross site scripting (XSS), directory traversal, protected files, protected directories, forbidden UNC references, LDAP injection, restricted characters, quotes, carriage return line feed, carriage return, tab, SQL injection, and SSI injection.  Given the layout of this data, it was difficult for me to immediately determine whether there was a bias or focus on one area over another, so I spent some time reviewing the regexes more carefully.  They fall into the following distribution:

124 command injection,
92 SQL injection,
71 cross site scripting (XSS),
29 restricted characters,
14 SSI injection,
12 credit card numbers,
8 LDAP injection,
4 protected files,
4 protected directories,
2 directory traversal,
2 quotes,
2 carriage return line feed,
1 forbidden UNC references,
1 carriage return,
1 tab.

They appear to be written in a way that requires many of them to be used in combination with each other in order to fire positive alarms without too many false positives.  The appliance does not reveal how it assembles these combinations, or what rules it uses when it identifies certain patterns of rule-firing.
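
As a rough, hypothetical illustration of that combination idea (the patterns and threshold below are my own, not the appliance’s), an engine can require several pattern hits within a category before raising an alarm, which is one way to hold down false positives:

    import re

    # Hypothetical category patterns; a real appliance ships hundreds of these.
    CATEGORIES = {
        "sql_injection": [r"(?i)\bunion\b.+\bselect\b", r"(?i)\bor\b\s+1\s*=\s*1", r"--\s*$"],
        "xss":           [r"(?i)<script\b", r"(?i)onerror\s*=", r"(?i)javascript:"],
    }
    THRESHOLD = 2   # require at least two pattern hits in a category before alarming

    def categories_triggered(request_value: str) -> dict:
        hits = {}
        for name, patterns in CATEGORIES.items():
            matched = sum(1 for p in patterns if re.search(p, request_value))
            if matched >= THRESHOLD:
                hits[name] = matched
        return hits

    print(categories_triggered("id=1 OR 1=1 UNION SELECT password FROM users --"))
    # -> {'sql_injection': 3}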

Also, there are some vulnerability categories that were not represented in the regex category listing.  When I think of the common web application vulnerability lists (for example OWASP) along with my own environment, I wonder about the absence of XPath injection, cookie poisoning, some common types of URI parameter tampering, and inappropriate error messages.  There are more as well…

There are certain types of vulnerable applications for which it might be a poor match.  If the application vulnerabilities are clustered around inappropriate use of URI values for identification or access control, then it appears that this appliance would be especially difficult to configure to resist misuse of those URI-related vulnerabilities.

Based only on reading and some brief experimenting, it appears that WAFs may fit some difficult risk management issues that we already have or expect to see in the near future.  I am very interested in any experience that you might have employing web application firewalls in your infrastructure.


Need Cultural Change at Adobe – Vulnerabilities Too Numerous

February 25, 2009

From their long and growing list of products and services, Adobe appears to be attempting to dominate the rich, user-centric application, communications, and information-delivery environments.
(see: http://www.adobe.com/products/ and http://labs.adobe.com/)

They have been pumping out new functionality, new development environments, new languages, etc. at a pace that is difficult to imagine.  How do they manage the pool of energy and creativity required to initiate and maintain their current (accelerating) trajectory?

In financial services, “cool” and “new” are not unknown, but we need to manage them into business environments that must constantly demonstrate a threshold level of due care and due diligence.

Adobe products, new and old, keep getting hacked.  On the consumer/customer as well as corporate fronts, the latest include critical vulnerabilities in Flash/AIR/Flex and Adobe Reader/Acrobat.  Both involve remote exploit and the potential for executing arbitrary code on an end-user’s PC.  Because Flash and PDF files are found “everywhere” throughout the Internet, this set of vulnerabilities presents a particularly difficult risk equation for PC users — and for the information security personnel who serve them.

There have been at least 8 publicly disclosed vulnerabilities in Adobe Flash, and at least 6 in Adobe Reader/Acrobat, in the last year.  That extended a well-established tradition of vulnerabilities another year.

Because these Adobe products are found on virtually all Windows PCs, the culture at Adobe that generates and accepts this tradition of regularly vulnerable software must be modified.  We need to raise the volume of our input to Adobe on this topic, and consider going broader with this campaign, maybe even to investors.

What do you think?  What would work most effectively?

— References —
Many of the Adobe collection can be found at: http://www.adobe.com/products/ and http://labs.adobe.com/

Adobe Flash Player (Flex/Air as well) Multiple Vulnerabilities  (Feb 25, 2009 http://secunia.com/advisories/34012/ and http://labs.idefense.com/intelligence/vulnerabilities/display.php?id=773)
Adobe Reader/Acrobat JBIG2 Stream Array Indexing Vulnerability (Feb 2, 2009 http://www.kb.cert.org/vuls/id/905281 and http://secunia.com/advisories/33901/)