DeepFakes Also a Threat to Corporate Brands

December 16, 2019

The corporate risk management business just keeps expanding….

Attention to ‘deepfakes’ often focuses on elections or celebrities.

Misleading video or audio altered using artificial intelligence is also a threat to corporate brands.

These ‘deepfakes’ can be used for direct theft, to spread information meant to move a company’s stock price, or to undermine a company’s relationship with customers.

Companies in businesses that depend on customers’ trust could be harmed by videos or audio highlighting fabricated trust-violating misdeeds.

“Symantec identified at least three successful audio attacks on companies earlier this year, where scammers impersonated CEOs or chief financial officers’ voices, requesting an urgent transfer of funds. Millions of dollars were stolen from each business, whose names were not disclosed.”

Global financial services enterprises need to build or acquire muscle in this area now.

See Cat Zakrzewski’s recent outline of the topic at: “Businesses should be watching out for deepfakes too, experts warn.”

“The Technology 202: Businesses should be watching out for deepfakes too, experts warn.” By Cat Zakrzewski.

Capital One Concerns Linked To Overconfidence Bias?

August 2, 2019

Earlier this week, on July 29, FBI agents arrested Paige “erratic” Thompson in connection with the download of ~30 GB of Capital One credit application data from a rented cloud data server.

In a statement to the FBI, Capital One reported that an intruder executed a command that retrieved the security credentials for a web application firewall administrator account, used those credentials to list the names of data folders/buckets, and then to copy (sync) them to buckets controlled by the attacker.  The incident appears to have affected approximately 100 million people in the United States and six million in Canada.

If you just want to read a quick summary of the incident, try “For Big Banks, It’s an Endless Fight With Hackers.” or “Capital One says data breach affected 100 million credit card applications.”

I just can’t resist offering some observations, speculation, and opinions on this topic.

Since as early as 2015, Capital One’s CIO has hyped being first to the cloud, the company’s cloud journey, and its cloud transformation, and has asserted that customers’ data was more secure in the cloud than in their private data centers.  Earlier this year the company argued that moving to AWS will “strengthen your security posture,” highlighted their ability to “reduce the impact of compliance on developers” (22:00) — using AWS security services and the network of AWS security partners — and asserted that software engineers and security engineers “should be one in the same.” (9:34)

I assume that this wasn’t an IT experiment, but an expression of a broader Capital One corporate culture, its values and ethics.  I also assume that there was/is some breakdown in their engineering assumptions about how their cloud infrastructure and its operations worked.  How does this happen?  Given the information available to me today, I wonder about the role of malignant group-think and echo-chamber dynamics at work, or some shared madness gripping too many levels of Capital One management.  Capital One has to have hordes of talented engineers — some of whom had to be sounding alarms about the risks associated with their execution on this ‘cloud first’ mission (I assume they attempted to communicate that it was leaving them open to accusations of ‘mismanaging customer data,’ ‘inaccurate corporate communications,’ excessive risk appetite, and more).  There were lots of elevated-risk decisions that managers (at various levels) needed to authorize…

Based on public information, it appears that:

  • The sensitive data was stored in a way that it could be read from the “local instance” in clear text (ineffective or absent encryption).
  • The sensitive data was stored on a cloud version of a file system, not a database (weaker controls, weaker monitoring options).
  • The sensitive data was gathered by Capital One starting in 2005 — which suggests gaps in their data life-cycle management (ineffective or absent data life-cycle management controls).
  • There were no effective alerts or alarms announcing unauthorized access to the sensitive data (ineffective or absent IAM monitoring/alerting/alarming).
  • There were no effective alerts or alarms announcing ‘unexpected’ or out-of-specification traffic patterns (ineffective or absent data communications or data flow monitoring/alerting/alarming).
  • There were no effective alerts or alarms announcing social media, forums, dark web, etc. chatter about threats to Capital One infrastructure/data/operations/etc. (ineffective or absent threat intelligence monitoring & analysis, and follow-on reporting/alerting/alarming).
  • Capital One’s conscious program to “reduce the compliance burden that we put on our developers” (28:23) may have obscured architectural, design, and/or implementation weaknesses from Capital One developers (a lack of security transparency, possibly overconfidence that developers understood their risk management obligations, and possible weaknesses in their secure software program).
  • Capital One ‘wrapped’ a gap in IAM vendor Sailpoint’s platform with custom integrations to AWS identity infrastructure (16:19) (potentially increasing the risk of misunderstanding or omission in this identity & access management ‘plumbing’).
  • There may have been application vulnerabilities that permitted the execution of server side commands (ineffective input validation, scrubbing, etc. and possibly inappropriate application design, and possible weaknesses in their secure code review practices and secure software training).
  • There may have been infrastructure configuration decisions that permitted elevated rights access to local instance meta-data (ineffective configuration engineering and/or implementation).
  • There must be material gaps or weaknesses in Capital One’s architecture risk assessment practices, or in how and where they are applied — or those practices must have been incomplete, ineffective, or worse for a long time.
  • And if this was the result of ‘designed-in’ or systemic weaknesses at Capital One, there seems to be room to question whether their SEC filings about the effectiveness of their controls are supportable by the facts of their implementation and operational practices.

In almost any context this is a pretty damning list.  Most of these are areas where global financial services enterprises are supposed to be experts.
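Several of the bullets above describe absent monitoring and alerting. The underlying idea can be sketched in a few lines; the role names, threshold, and log-record shape below are hypothetical, not Capital One’s or any vendor’s actual formats:

```python
# Hypothetical sketch: flag cloud-storage access events that fall outside an
# expected baseline (role allow-list plus volume threshold). The role names,
# threshold, and record shape are illustrative, not any vendor's log format.

EXPECTED_ROLES = {"app-server-role", "etl-batch-role"}
MAX_OBJECTS_PER_HOUR = 1_000

def flag_anomalies(events):
    """Return (reason, event) pairs that should raise an alert."""
    alerts = []
    per_role_counts = {}
    for e in events:
        role = e["role"]
        per_role_counts[role] = per_role_counts.get(role, 0) + e["objects_read"]
        if role not in EXPECTED_ROLES:
            alerts.append(("unexpected-role", e))
        elif per_role_counts[role] > MAX_OBJECTS_PER_HOUR:
            alerts.append(("excessive-volume", e))
    return alerts

events = [
    {"role": "app-server-role", "objects_read": 10},   # normal traffic
    {"role": "waf-admin-role", "objects_read": 700},   # WAF admin reading data buckets
]
print(flag_anomalies(events))
```

A real control would baseline per-principal behavior from months of logs, but even a crude allow-list plus volume threshold would have flagged a web application firewall administrator account syncing entire data buckets.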

Aren’t there also supposed to be internal systems in place to ensure that each financial services enterprise achieves risk-reasonable levels of excellence in each of the areas mentioned in the bullets above?  And where were the regulations & regulators that play a role in assuring that is the case?

How does an enormous, heavily-regulated financial services enterprise get into a situation like this?  There is a lot of psychological research suggesting that overconfidence is a widespread cognitive bias and I’ve read, for example, that it underpins what is sometimes called ‘organizational hubris,’ which seems like a useful label here.   The McCombs School of Business Ethics Unwrapped program defines ‘overconfidence bias’ as “the tendency people have to be more confident in their own abilities than is objectively reasonable.”  That also seems like a theme applicable to this situation.  Given my incomplete view of the facts, it seems like this may have been primarily a people problem, and only secondarily a technology problem.  There is probably no simple answer…

Is the Capital One case unique?  Could other financial services enterprises be on analogous journeys?

“Capital One Data Theft Impacts 106M People.” By Brian Krebs.
“Why did we pick AWS for Capital One? We believe we can operate more securely in their cloud than in our own data centers.” By Rob Alexander, CIO, Capital One.
“For Big Banks, It’s an Endless Fight With Hackers.” By Stacy Cowley and Nicole Perlroth, 30 July 2019.
“Capital One says data breach affected 100 million credit card applications.” By Devlin Barrett.
“AWS re:Inforce 2019: Capital One Case Study: Addressing Compliance and Security within AWS (FND219)”
“Frequently Asked Questions.”
Overconfidence Bias defined:
Scholarly articles for cognitive bias overconfidence (Google Scholar).
“How to Recognize (and Cure) Your Own Hubris.” By John Baldoni.


Mobile App Password Reset Flaw – $500K Stolen

July 4, 2019

Logic flaw(s) combined with weak identity verification practices in “7Pay” – 7-Eleven’s new payment app – led to over $500,000 in fraudulent charges.

7Pay was only recently released. With it, individuals could pay for purchases using a linked/registered credit or debit card at Seven-Eleven stores in Japan. The exploit was initially discovered via a customer complaint about unauthorized credit/debit card transactions.

  • Fraudulent purchases were initially reported on July 2nd.
  • By July 3rd a Seven-Eleven in-house investigation had discovered a pattern of unauthorized 7Pay charges and began posting warnings.
  • By July 4th the company had documented “approximately 900” affected 7Pay app users with unauthorized charges totaling approximately $500K U.S. They suspended all charges from credit cards and debit cards plus charges from Seven Bank ATMs, and stopped accepting any new 7Pay app registrations.

A hostile agent or agents exploited a weak “password reset” feature in the 7Pay app. It required requesters to present “e-mail address, date of birth, phone number.”  What could go wrong?

Here is the flow:

  1. User navigates to the Reset Password screen.
  2. User enters the member ID (email address) associated with the target account and taps/clicks.
  3. The application presents a new screen requesting the birthdate and phone number associated with that account. This screen also permits entering a “destination e-mail address” to which a password reset link will be sent. The “destination e-mail address” appeared to have no restrictions, and no pre-established relationship with the account-holder/user was required. The attacker enters an email address under hostile control and taps/clicks.
  4. The 7Pay system responds by sending a password reset link to an email account under the hostile agent’s control.
  5. When the hostile party clicks/taps on the link & resets the password, the 7Pay system responds by sending password-reset notices to both the attacker’s and the authorized user’s email addresses.
    (A 7Pay feature defaulted the birthdate to 1 Jan 2019 for users during registration. The identification logic was materially weakened because a subset of the authorized 7Pay users accepted this default “birthdate.”)
  6. When the hostile party logs into the target account, the authorized user’s existing authenticated sessions are all automatically logged off. Now the authorized user has no active 7Pay sessions and does not know the account password.
    (At this point, the hostile agent can either sell the account information to others or begin charging against the victim’s account(s)).
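The fatal design choice in the flow above is step 3: the reset link goes wherever the requester points it. A minimal sketch of the flawed logic next to a safer variant (the account data and function names are hypothetical, invented for illustration):

```python
# Illustrative sketch of the flawed vs. corrected reset logic.
# Account records and function names are hypothetical.

ACCOUNTS = {
    "victim@example.com": {"birthdate": "2019-01-01", "phone": "080-1234-5678"},
}

def flawed_reset(member_id, birthdate, phone, destination_email):
    """7Pay-style logic: the link is sent to ANY address the requester supplies."""
    acct = ACCOUNTS.get(member_id)
    if acct and acct["birthdate"] == birthdate and acct["phone"] == phone:
        return f"reset link sent to {destination_email}"   # attacker-controlled!
    return "denied"

def safer_reset(member_id, birthdate, phone):
    """The link goes only to the address of record for the account."""
    acct = ACCOUNTS.get(member_id)
    if acct and acct["birthdate"] == birthdate and acct["phone"] == phone:
        return f"reset link sent to {member_id}"           # address on file
    return "denied"

# An attacker who knows (or guesses the default) birthdate and the phone number:
print(flawed_reset("victim@example.com", "2019-01-01", "080-1234-5678",
                   "attacker@evil.example"))
```

Note that the “knowledge” gate is identical in both versions; only the delivery address changes, and that one change is the difference between an annoyance and an account takeover.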

In addition to suspending the 7Pay app, Seven-Eleven Japan has promised to compensate all customers who experienced unauthorized transactions via the 7Pay system.
The company has also set up an emergency customer call center (team and incident-unique dial-in number).

The combination of email address + phone number + date-of-birth seems like a weak identification for candidate users of a financial transaction mobile app. The addition of a default birthdate value weakened this already weak design.

Adding a feature enabling users to send a password reset link to an arbitrary email address also seems like a relatively extreme outlier in financial transaction app design.

What could Seven-Eleven have done to avoid this situation?

  • They could have required a second factor in their identification process — for example, sending a ‘random’ text or voiced code to the mobile device on file for the account and then requiring input of that code on the password reset request screen before authorizing the current user to use the password reset functionality or before committing their inputs.
  • They could have required authorized users to create their own security question/answer pairs (maybe more than one) before creating/enabling their 7Pay accounts, then present the question & require the corresponding answer on the password reset request screen before authorizing the current user to use the password reset functionality or before committing their inputs.
  • They could have integrated a 3rd party (question-response) identity providing vendor into their mobile user identification process — driving the user through a short series of questions, requiring valid answers before authorizing the current user to use the password reset functionality or before committing their inputs.
  • They could have integrated with one or another top-tier 3rd party authentication service, inheriting some identity-proofing capabilities.
  • They could have added an optional strong, 3rd-party-enabled 2-factor authentication option (think hardware token).
  • They could have chosen to simply request a ‘birthdate’ from the user — coaching them to use a ‘random’ date rather than their real one.
  • They could have chosen to only send password-reset links to email addresses ‘on file’ that have been verified via some out-of-process means.
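The first bullet above, a one-time code sent to the phone on file, is straightforward to sketch. Everything here is illustrative; a production version would also rate-limit attempts and expire unused codes:

```python
# Sketch of an SMS one-time-code second factor for a password reset flow.
# The store, names, and SMS hook are hypothetical.
import hmac
import secrets

PENDING_CODES = {}  # member_id -> outstanding one-time code

def send_reset_code(member_id, phone_on_file):
    """Generate a random 6-digit code and (in real life) SMS it to the phone on file."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    PENDING_CODES[member_id] = code
    # sms_gateway.send(phone_on_file, code)  # hypothetical delivery hook
    return code  # returned here only so the sketch is testable

def verify_reset_code(member_id, submitted_code):
    """Single-use check; constant-time compare resists timing side channels."""
    expected = PENDING_CODES.pop(member_id, None)
    return expected is not None and hmac.compare_digest(expected, submitted_code)
```

Popping the code on first use makes it single-use: a second submission, even of the correct code, fails.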

What would you have done in this situation?

“Important notice about 7pay.” Seven & i Holdings Co., Ltd. 07-03-2019.

“About suspension of charge function by unauthorized access to “7pay (seven pay)” some accounts.” Seven & i Holdings Co., Ltd. Seven-Eleven Japan Co., Ltd. Seven Pay. 07-04-2019.

“7pay credit card abuse: two deadly possible weaknesses that third party hijacking may be.”  By Hiroshi Mikami, 07-04-2019 (via Google translate).

“7-Eleven Japanese customers lose $500,000 due to mobile app flaw.” By Catalin Cimpanu, July 4, 2019.


Some Upgrades Are Not Optional – Engineer Them For The Reality of Your Operations

December 5, 2018

Kubernetes enables complexity at scale across cloud-enabled infrastructure.

Like any other software, it also is — from time to time — vulnerable to attack.

A couple of days ago I read about a CVSS v3.0 ‘9.8’ (critical) Kubernetes vulnerability:

“In all Kubernetes versions prior to v1.10.11, v1.11.5, and v1.12.3, incorrect handling of error responses to proxied upgrade requests in the kube-apiserver allowed specially crafted requests to establish a connection through the Kubernetes API server to backend servers, then send arbitrary requests over the same connection directly to the backend, authenticated with the Kubernetes API server’s TLS credentials used to establish the backend connection.”

The default configuration for a Kubernetes API server’s Transport Layer Security (TLS) credentials grants all users (authenticated and unauthenticated) permission to perform discovery API calls that result in this escalation.  As a result, in too many implementations, anyone who knows about this hole can take command of a targeted Kubernetes cluster. Game over.
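The advisory lists v1.10.11, v1.11.5, and v1.12.3 as the patched releases. A quick, hedged sketch of checking a cluster’s reported version against those lines:

```python
# Sketch: is a given Kubernetes server version patched against CVE-2018-1002105?
# Patched releases per the advisory: v1.10.11, v1.11.5, v1.12.3 (and later lines).

PATCHED = {(1, 10): 11, (1, 11): 5, (1, 12): 3}

def is_patched(version: str) -> bool:
    major, minor, patch = (int(x) for x in version.lstrip("v").split("."))
    if (major, minor) in PATCHED:
        return patch >= PATCHED[(major, minor)]
    # 1.13+ shipped with the fix; anything older than 1.10 is vulnerable.
    return (major, minor) > (1, 12)

print(is_patched("v1.10.9"))   # False: vulnerable
print(is_patched("v1.12.3"))   # True: patched
```

Of course, knowing a cluster is vulnerable is the easy part; upgrading it under live business load is the engineering challenge discussed below.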

Red Hat is quoted as describing the situation as: “The privilege escalation flaw makes it possible for any user to gain full administrator privileges on any compute node being run in a Kubernetes pod. This is a big deal. Not only can this actor steal sensitive data or inject malicious code, but they can also bring down production applications and services from within an organization’s firewall.”

This level of criticality is a reminder about how important it is to engineer-in the ability to perform on-demand Kubernetes infrastructure upgrades during normal business operations. In situations like the one described above, these upgrades must occur without material impact on business operations.  In real business infrastructure that is a serious engineering challenge.

Today is not the day to be asking “how will we upgrade all this infrastructure plumbing in real-time, under real business loads?”

The recent Kubernetes vulnerability is a reminder of how complex the attack surface of every global financial services enterprise is. With that complexity comes a material obligation to understand your implementations and their operations under expected conditions — one of which is the occasional critical vulnerability that must be fixed immediately.

Oh joy.

Kubernetes Security Announcement – v1.10.11, v1.11.5, v1.12.3 released to address CVE-2018-1002105
CVE-2018-1002105 Detail
Kubernetes’ first major security hole discovered

Oval Office Credential Theft is a Warning to All

October 13, 2018

IT architects cry foul about what they perceive as authentication fatigue…  “Security makes users log in too many times and this must stop!”
It seems like investments attempting to reduce the number of redundant workforce authentications are not often justified using the language of risk management. If they were, we might find identity technology evolving in new ways.

In a recent Oval Office press event, Kanye West inadvertently exposed his iPhone passcode to a crowd of news cameras. As a result, the world knows his six-digit passcode is an unbroken series of zeros: “000000.” Not something too business-relevant by itself, except that it is an example of a class of credential leakage that is already an issue for global financial services enterprises.
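For perspective, the arithmetic on passcode keyspaces is trivial, and it is not kind to six digits:

```python
# Back-of-envelope keyspace arithmetic (illustration only).
six_digit_pins = 10 ** 6      # 1,000,000 possibilities
single_repeated = 10          # "000000", "111111", ... trivially guessed first
alnum_8 = 62 ** 8             # 8-char mixed-case alphanumeric, for contrast
print(six_digit_pins, alnum_8 // six_digit_pins)
```

An 8-character mixed-case alphanumeric secret is roughly 218 million times larger a search space, and that is before counting the human tendency toward values like “000000.”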

With increasing frequency, our increasingly mobile workforce enters passwords in the presence of live cameras. We should not assume that all the resulting recordings are secured in ways that match our obligation to keep business user name/password pairs confidential. Because credentials can be relatively easily monetized, a percentage of these recorded credentials are and will be used in unauthorized ways. The only variables are which credentials, at what time(s), and by whom.

We should assume that this scenario will become increasingly common for all workforce roles who are mobile-enabled.

Do you support your database, server, identity directory, or cloud admins, your financial officers, your investment or human resource professionals, your legal counsel, or any other roles with elevated rights to work where they choose? If yes, tick your risk-pool meter up again. Assign someone to investigate and report on how vulnerable your organization is to unauthorized credential re-use. Large, global financial services enterprises have complex attack surfaces for this use case. For most, this analysis will not be easy, and will likely not result in a happy story.

Maybe authenticating less often is a part of addressing this risk.  I don’t know.  For some, multi-factor authentication plays a key role in resisting credential theft and reuse.  Some types of multi-factor authentication can resist some types of credential attacks. But “some” should not be a comforting qualifier. Working out the right mix of identity verification is going to require creativity, rigor, and serious risk management analysis — adult, professional investigation and analysis.

Get started.


“The Cybersecurity 202: Kanye West is going to make password security great again.”
By Derek Hawkins, October 12, 2018

Facebook Identity Token Thefts Result in Breach

September 28, 2018

Facebook’s VP of Product Management, Guy Rosen, said today that a vulnerability in the company’s “View As” feature enabled attackers to steal users’ access tokens.  These tokens are presented to Facebook infrastructure in ways that allow users to remain authenticated and able to interact with their accounts over multiple sessions. Once in an attacker’s possession, these tokens would permit attackers to impersonate actual users and take over Facebook accounts.  Facebook reported that the current risk mitigation is invalidating at least 40 million authentication tokens and temporarily turning off the “View As” feature while they “conduct a thorough security review.”  He went on:

“This attack exploited the complex interaction of multiple issues in our code. It stemmed from a change we made to our video uploading feature in July 2017, which impacted “View As.” The attackers not only needed to find this vulnerability and use it to get an access token, they then had to pivot from that account to others to steal more tokens.”

Mr. Rosen wrote that Facebook engineers learned of this vulnerability three days ago, on Tuesday, September 25, and that almost 50 million Facebook accounts were affected.

Writing safe-enough user-facing software that must interact with a complex ecosystem of applications and APIs is a serious challenge for all of us.  In that context, this latest Facebook vulnerability should be a cautionary tale for all organizations implementing apps and APIs incorporating persistent token-based authentication.  Could any of us market our way out of a situation where 40 or 50 million of our customers, marketers, and other partners were vulnerable to account takeover?

Too many architects and developers seem risk-inappropriately infatuated by pundit pronouncements about authentication fatigue and the “best practice” of extending any given authentication across multiple sessions using persistent tokens.  At our scale (a trillion dollars U.S. AUM each, give or take a few hundred billion), global financial services enterprises ought to be able to reason our way through the fog of happy-path chatter to engineer session management practices that meet the risk appetites of our various constituencies.
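One way to reason about the blast radius of stolen tokens is to insist that every token be short-lived and centrally revocable. A minimal sketch, with hypothetical names and an in-memory store standing in for real session infrastructure:

```python
# Sketch: short-lived, centrally revocable session tokens.
# The in-memory store and function names are hypothetical.
import secrets
import time

TOKENS = {}  # token -> (user_id, expires_at)

def issue_token(user_id, ttl_seconds=3600):
    """Issue an unguessable token that expires after ttl_seconds."""
    token = secrets.token_urlsafe(32)
    TOKENS[token] = (user_id, time.time() + ttl_seconds)
    return token

def validate(token):
    """Return the user_id for a live token, else None."""
    entry = TOKENS.get(token)
    if entry is None:
        return None
    user_id, expires_at = entry
    if time.time() >= expires_at:
        del TOKENS[token]   # expired: drop it
        return None
    return user_id

def revoke_all():
    """Mass invalidation after a suspected theft."""
    TOKENS.clear()
```

Facebook’s mitigation, invalidating tens of millions of tokens at once, is the revoke_all move; short TTLs bound how long a stolen token stays useful even when the theft goes undetected.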


“Security Update.” By Guy Rosen, VP of Product Management



Cloud File Sync Requires New Data Theft Protections

June 28, 2018

Microsoft Azure File Sync has been slowly evolving since it was released last year.

The company also added “Azure:” [Azure Drive] to PowerShell to support discovery and navigation of all Azure resources including filesystems.

Azure File Sync helps users keep Azure File shares in sync with their Windows Servers. Microsoft promotes the happy-path, where these servers are on-premise in your enterprise, but supports syncing with endpoints of any trust relationship.

What are my concerns?

The combination makes it much easier to discover Azure-hosted data and data exfiltration paths, and then to set them up to automatically ship data into or out of your intended environment(s).  In other words, it helps hostile parties introduce their data or their malware into your organization’s Azure-hosted file systems, or steal your data while leaving a minimum of evidence describing who did what.

Why would I say that?

Many roles across global Financial Services enterprises engage in architecture risk analysis (ARA) as part of their day-to-day activities.  If we approach this topic as though we were engaged in ARA fact finding, we might discover the following:

Too easy to share with untrustworthy endpoints:
It appears that anyone with the appropriate key (a string) can access a given Azure File Share from any Azure VM on any subscription. What could go wrong?
Microsoft customers can use shared access signatures (SAS) to generate tokens that have specific permissions, and which are valid for a specified time interval. These shared access signature keys are supported by the Azure Files (and File Sync) REST API and the client libraries.
A financial services approach might permit Azure File Shares on a given private Virtual Network to be secured so they would only be available via that Virtual Network, using a private IP address on that same network.
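Azure’s shared access signatures are generated by Microsoft’s SDKs, but the shape of the idea, a time-limited HMAC over the granted permissions, can be sketched generically. This is not Azure’s actual algorithm; the names and token format here are invented for illustration:

```python
# Generic sketch of a SAS-like token: a time-limited HMAC over a resource
# and its granted permissions. NOT Azure's algorithm; format is invented.
import base64
import hashlib
import hmac
import time

SECRET = b"account-key-demo"  # stands in for a storage account key

def make_token(resource, permissions, ttl_seconds):
    expiry = int(time.time()) + ttl_seconds
    payload = f"{resource}|{permissions}|{expiry}"
    sig = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload.encode(), hashlib.sha256).digest()
    ).decode()
    return f"{payload}|{sig}"

def check_token(token, resource, needed_permission):
    try:
        res, perms, expiry, sig = token.split("|")
    except ValueError:
        return False
    payload = f"{res}|{perms}|{expiry}"
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload.encode(), hashlib.sha256).digest()
    ).decode()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged
    return res == resource and needed_permission in perms and time.time() < int(expiry)
```

The key point for the concern above: anyone holding the string holds the access, so scoping permissions narrowly and keeping expiry windows short is what limits the damage when a token leaks.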

Weak audit trail:
If you need to mount the Azure file share over SMB, you currently must use the storage account keys assigned to the underlying Azure File Storage.
As a result, in the Azure logs and file properties the user name for connecting to a given Azure File share is the storage account name regardless of who is using the storage account keys. If multiple users connect, they have to share an account. This seems to make effective auditing problematic.  It also seems to violate a broad range of commitments we all make to regulators, customers, and other constituencies.
This limitation may be changing. Last month Microsoft announced a preview of more identity and authorization options for interacting with Azure storage. Time will tell.

Missing link(s) to Active Directory:
Azure Files does not support Active Directory directly, so those sync’d shares don’t enforce your AD ACLs.
Azure File Sync preserves and replicates all discretionary ACLs, or DACLs, (whether Active Directory-based or local) to all server endpoints to which it syncs. Because those Windows Server instances can already authenticate with Active Directory, Microsoft sells Azure File Sync as safe-enough (…to address that happy path).  Unfortunately, Azure File Sync will synchronize files with untrusted servers — where all those controls can be ignored or circumvented.

Requires weakening your hardened endpoints:
Azure File Sync requires that Windows servers host the AzureRM PowerShell module, which currently requires Internet Explorer to be installed. …Hardened server no more…

Plans for public anonymous access:
Microsoft is planning to support public anonymous read access to files stored on Azure file storage via its REST interface.

Port 445 (again):
Azure file storage configuration is exposed via TCP port 445. Is it wise to begin opening up port 445 of your Microsoft cloud environment? Given the history of Microsoft vulnerabilities exposed on port 445, many will probably hesitate.

Goal of hosting Windows File Server in Azure:
Microsoft intends to deliver Azure Files in a manner that ensures parity with Windows File Server.

What other potential issues or concerns should we investigate?

  • Does the Azure File Storage REST interface resist abuse well enough to support its use in specified use cases (since each use case will have given risks and opportunities)?
  • Can a given use case tolerate risks associated with proposed or planned Microsoft upgrades to Azure File Storage REST, Azure File Sync, or Azure:?
  • Are there impacts on or implications for the way we need to manage our Azure AD?
  • Others?

What do you think?


Increasingly Difficult to Conduct Sensitive Business

May 11, 2018

Craig S. Smith updates us on some of the latest misuses of Alexa and Siri — with attackers “exploiting the gap between human and machine speech recognition.”  Using only audio, an attacker can mute your device and then begin issuing commands.  At a minimum, this is a data leakage challenge.  Depending on the configuration of your mobile device or your Apple/Amazon/Google tabletop device, those commands may appear to be coming from you — along with the authority that brings.  For some, that translates into a risk worth considering.

Working on any type of truly confidential business around your voice-ready devices is increasingly risk-rich.  For global Financial Services enterprises, the scale of the risks seems to warrant keeping significant distance between all voice-aware devices and your key leaders, those with material finance approval authority, anyone working on core investing strategy or its hands-on execution — the list goes on.  Leaving all mobile devices outside Board of Directors meetings is common practice.  Maybe that practice needs to be expanded.

Read this short article and think about your exposures.



“Alexa and Siri Can Hear This Hidden Command. You Can’t.” By Craig S. Smith


Recent US-CERT & FBI Alert A Good Read — Applicable to Us

March 19, 2018

The United States Computer Emergency Readiness Team (US-CERT) recently released an alert about sophisticated attacks against individuals and infrastructure that contained an excellent explanation of the series of attacker techniques that are applicable to all global Financial Services enterprises. Many of the techniques are possible and effective because of the availability of direct Internet connections. Absent direct Internet connectivity, many of the techniques detailed in the CERT alert would be ineffective.

Global Financial Services enterprises, responsible for protecting hundreds of billions, even trillions, of dollars (other people’s money), are attractive cybercrime targets. We are also plagued by hucksters & hypesters attempting to transform our companies into what they claim will be disruptive, agile organizations using one or another technical pitch that simply translates into “anything, anywhere, anytime.”  The foundation of these pitches seems to be “Internet everywhere,” or even “replace your inconvenient internal networks with the Internet,” while eliminating those legacy, constraining security practices.

We can all learn from the details in this Alert.

From the alert:

[The] alert provides information on Russian government actions targeting U.S. Government entities as well as organizations in the energy, nuclear, commercial facilities, water, aviation, and critical manufacturing sectors. It also contains indicators of compromise (IOCs) and technical details on the tactics, techniques, and procedures (TTPs) used by Russian government cyber actors on compromised victim networks. DHS and FBI produced this alert to educate network defenders to enhance their ability to identify and reduce exposure to malicious activity.

DHS and FBI characterize this activity as a multi-stage intrusion campaign by Russian government cyber actors who targeted small commercial facilities’ networks where they staged malware, conducted spear phishing, and gained remote access into energy sector networks. After obtaining access, the Russian government cyber actors conducted network reconnaissance, moved laterally, and collected information pertaining to Industrial Control Systems (ICS).

Take the time to review it. Replace “industrial control systems” with your most important systems as you read.

For many of us, the material may be useful in our outreach and educational communications.

The 20-some recommendations listed in the “General Best Practices Applicable to this Campaign” section also seem applicable to Financial Services.

“Alert (TA18-074A) Russian Government Cyber Activity Targeting Energy and Other Critical Infrastructure Sectors.” Release date: March 15, 2018.

“Cyberattacks Put Russian Fingers on the Switch at Power Plants, U.S. Says.” By Nicole Perlroth and David E. Sanger, The New York Times.

Make use of OWASP Mobile Top 10

February 14, 2017

The OWASP “Mobile Security Project” team updated their Mobile Top 10 vulnerability list this week. (In the process they broke some of their links; if you hit one, just use the 2015 content for now.)

I was in a meeting yesterday with a group reviewing one facet of an evolving proposal for Office 365 as the primary collaboration and document storage infrastructure for some business operations.

Office 365 in global Financial Services? Yup. Technology pundits-for-sale, tech wannabes, and some who are still intoxicated by their mobile technology have been effective in their efforts to sell “cloud-first.” One outcome of some types of “cloud-enabled” operations is the introduction of mobile client platforms. Even though global Financial Services enterprises tend to hold many hundreds of billions or trillions of other people’s dollars, some sell (even unmanaged) mobile platforms as risk-appropriate and within the risk tolerance of all relevant constituencies… My working assumption is that those gigantic piles of assets, and the power that can result from them, necessarily attract a certain amount of hostile attention. That attention requires that our software, infrastructure, and operations be resistant enough to attack to meet all relevant risk management obligations (contracts, laws, regulations, and more). This scenario seems like a mismatch — but I digress.

So, we were attempting to work through a risk review of Mobile Skype for Business integration. That raised a number of issues, one being the risks associated with the software itself. The mobile application ecosystem is composed of software that executes & stores information locally on mobile devices as well as software running on servers in any number of safe and wildly-unsafe environments. Under most circumstances the Internet is in between. By definition this describes a risk-rich environment.

All hostile parties on earth are also attached to the Internet. As a result, software connected to the Internet must be sufficiently resistant to attack (where “sufficient” is associated with a given business and technology context). Mobile applications are hosted on devices and within operating systems having a relatively short history. I believe that they have tended to prize features and “cool” over effective risk management for much of that history (and many would argue that they continue to do so). As a result, the mobile software ecosystem has a somewhat unique vulnerability profile compared to software hosted in other environments.

The OWASP “Mobile Security Project” team’s research resulted in the Top 10 mobile vulnerabilities list below. I think it is a useful tool for those involved in thinking about writing or buying software for that ecosystem. You can use it in a variety of ways. Challenge your vendors to show you evidence (yes, real evidence) that they have dealt with each of these risks. You can do the same with your IT architects or anyone who plays the role of an architect for periods of time, then do it again with your developers and testers later. Business analysts, or those who act as one some of the time, should also work through adding these as requirements as needed. Another way to use this Mobile Top 10 resource is to help you identify and think through the attack surface of an existing or proposed mobile-enabled application, its infrastructure, and its operations.

OK, I hope that provides enough context to make use of the resource below.


Mobile Top 10 2016 – Top 10

M1 – Improper Platform Usage
This category covers misuse of a platform feature or failure to use platform security controls. It might include Android intents, platform permissions, misuse of TouchID, the Keychain, or some other security control that is part of the mobile operating system. There are several ways that mobile apps can experience this risk.

M2 – Insecure Data Storage
This new category is a combination of M2 and M4 from the 2014 Mobile Top Ten. It covers insecure data storage and unintended data leakage.
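As an illustration of the alternative, here is a minimal sketch of sealing a secret before it ever touches local storage, using AES-GCM from `javax.crypto`. On a real device the key would come from a platform keystore (e.g., the hardware-backed Android Keystore); generating it in memory here is purely for demonstration, and the `SecureStore` class name is my own.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

// Sketch: never write secrets to local storage in cleartext.
// In production the key lives in a platform keystore; the in-memory
// key below exists only so the example is self-contained.
public class SecureStore {
    private static final int GCM_TAG_BITS = 128;

    public static byte[] seal(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);               // fresh IV per record
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[iv.length + ct.length];   // store IV alongside ciphertext
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    public static byte[] open(SecretKey key, byte[] sealed) throws Exception {
        byte[] iv = Arrays.copyOfRange(sealed, 0, 12);
        byte[] ct = Arrays.copyOfRange(sealed, 12, sealed.length);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        return c.doFinal(ct);                           // throws if tampered
    }

    public static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }
}
```

Because GCM authenticates as well as encrypts, a record that has been modified on disk fails to decrypt rather than returning garbage.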

M3 – Insecure Communication
This covers poor handshaking, incorrect SSL versions, weak negotiation, cleartext communication of sensitive assets, etc.
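A small sketch of what getting M3 right can look like in client code: restricting TLS to modern protocol versions and enabling hostname verification, the two checks mobile apps most often weaken or disable. The helper name `hardenedParameters` is hypothetical; a real app would apply these parameters to its HTTP client or socket factory.

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class TlsConfig {
    // Build client-side TLS parameters that refuse legacy protocol
    // versions and enforce that the server certificate matches the
    // hostname (the check apps most often turn off "temporarily").
    public static SSLParameters hardenedParameters() throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        SSLParameters p = ctx.getDefaultSSLParameters();
        p.setProtocols(new String[] {"TLSv1.2", "TLSv1.3"});
        p.setEndpointIdentificationAlgorithm("HTTPS"); // hostname verification
        return p;
    }
}
```

Certificate pinning and a vetted trust store would come on top of this; the sketch only covers the protocol-version and hostname-verification basics.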

M4 – Insecure Authentication
This category captures weaknesses in authenticating the end user and in session management. This can include:
Failing to identify the user at all when that should be required
Failure to maintain the user’s identity when it is required
Weaknesses in session management
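To make the session-management half concrete, here is a hedged sketch of server-side session issuance: tokens drawn from `SecureRandom` (never a counter or timestamp) and given a hard expiry. The `Sessions` class and its TTL are illustrative only; a production session store would also need persistence, rotation, and revocation.

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

// Sketch of server-side session handling: tokens must be
// unpredictable and must expire.
public class Sessions {
    private static final SecureRandom RNG = new SecureRandom();
    private final Map<String, Long> expiry = new HashMap<>();
    private final long ttlMillis;

    public Sessions(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public String issue() {
        byte[] raw = new byte[32];                      // 256 bits of entropy
        RNG.nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        expiry.put(token, System.currentTimeMillis() + ttlMillis);
        return token;
    }

    public boolean isValid(String token) {
        Long until = expiry.get(token);                 // unknown token: deny
        return until != null && System.currentTimeMillis() < until;
    }
}
```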

M5 – Insufficient Cryptography
The code applies cryptography to a sensitive information asset. However, the cryptography is insufficient in some way. Note that anything and everything related to TLS or SSL goes in M3. Also, if the app fails to use cryptography at all when it should, that probably belongs in M2. This category is for issues where cryptography was attempted, but it wasn’t done correctly.
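A compact demonstration of “cryptography attempted, but not done correctly”: AES in ECB mode encrypts identical plaintext blocks to identical ciphertext blocks, leaking structure even though the data is nominally encrypted. The class name is mine; the property it demonstrates belongs to ECB itself.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.Arrays;

// ECB has no IV and no chaining, so equal plaintext blocks always
// produce equal ciphertext blocks under the same key. Authenticated
// modes such as GCM (with a fresh IV) do not have this problem.
public class EcbLeak {
    public static boolean ecbLeaksEqualBlocks() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        byte[] twoIdenticalBlocks = new byte[32];       // 2 x 16-byte blocks of zeros
        Cipher ecb = Cipher.getInstance("AES/ECB/NoPadding");
        ecb.init(Cipher.ENCRYPT_MODE, key);
        byte[] ct = ecb.doFinal(twoIdenticalBlocks);

        byte[] block1 = Arrays.copyOfRange(ct, 0, 16);
        byte[] block2 = Arrays.copyOfRange(ct, 16, 32);
        return Arrays.equals(block1, block2);           // true: pattern leaked
    }
}
```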

M6 – Insecure Authorization
This category captures any failures in authorization (e.g., authorization decisions made on the client side, forced browsing, etc.). It is distinct from authentication issues (e.g., device enrolment, user identification, etc.).

If the app does not authenticate users at all in a situation where it should (e.g., granting anonymous access to some resource or service when authenticated and authorized access is required), then that is an authentication failure not an authorization failure.
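A minimal sketch of the principle: the server decides authorization from its own record of the user’s role and denies by default, ignoring anything the mobile client claims about itself. The role names, action names, and the `Authz` class are all hypothetical.

```java
import java.util.Map;
import java.util.Set;

// Authorization is decided server-side from the server's own record
// of the user's role, never from a role field supplied by the client.
public class Authz {
    private final Map<String, String> roleByUser;   // server-side source of truth
    private static final Set<String> ADMIN_ACTIONS =
            Set.of("delete_account", "export_all");

    public Authz(Map<String, String> roleByUser) { this.roleByUser = roleByUser; }

    public boolean allowed(String userId, String action) {
        String role = roleByUser.get(userId);        // look up, don't trust the client
        if (role == null) return false;              // unknown user: deny by default
        if (ADMIN_ACTIONS.contains(action)) return "admin".equals(role);
        return true;                                 // ordinary actions: any known user
    }
}
```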

M7 – Client Code Quality
This was formerly “Security Decisions Via Untrusted Inputs,” one of the lesser-used categories. It is the catch-all for code-level implementation problems in the mobile client, distinct from server-side coding mistakes. It captures things like buffer overflows, format string vulnerabilities, and various other code-level mistakes where the solution is to rewrite some code that’s running on the mobile device.
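One such code-level mistake survives even in memory-safe languages: integer overflow when computing a size from untrusted lengths, which can silently turn a bounds check into a no-op. A small sketch, with an illustrative 1 MiB cap, using `Math.addExact` to fail loudly instead:

```java
// If headerLen + bodyLen overflows int, a naive "total > MAX" check
// can pass on a huge request. Math.addExact throws on overflow.
public class SafeSize {
    static final int MAX_MESSAGE = 1 << 20;          // 1 MiB cap (illustrative)

    public static int checkedTotal(int headerLen, int bodyLen) {
        if (headerLen < 0 || bodyLen < 0)
            throw new IllegalArgumentException("negative length");
        int total = Math.addExact(headerLen, bodyLen);   // throws ArithmeticException on overflow
        if (total > MAX_MESSAGE)
            throw new IllegalArgumentException("message too large");
        return total;
    }
}
```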

M8 – Code Tampering
This category covers binary patching, local resource modification, method hooking, method swizzling, and dynamic memory modification.

Once the application is delivered to the mobile device, the code and data resources are resident there. An attacker can directly modify the code, change the contents of memory dynamically, change or replace the system APIs that the application uses, or modify the application’s data and resources. This can provide the attacker a direct method of subverting the intended use of the software for personal or monetary gain.
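As a sketch of the detection side, comparing a resource’s digest against a known-good value flags gross modification. On Android the real control is platform signature verification of the package; this digest comparison only illustrates the idea, and the `TamperCheck` class name is mine.

```java
import java.security.MessageDigest;

// Compare a bundled resource's SHA-256 digest against a value recorded
// at build time. Detects modification, not a substitute for platform
// code-signing checks.
public class TamperCheck {
    public static String sha256Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static boolean unmodified(byte[] resource, String expectedHex) throws Exception {
        return sha256Hex(resource).equals(expectedHex);
    }
}
```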

M9 – Reverse Engineering
This category includes analysis of the final core binary to determine its source code, libraries, algorithms, and other assets. Software such as IDA Pro, Hopper, otool, and other binary inspection tools give the attacker insight into the inner workings of the application. This may be used to exploit other nascent vulnerabilities in the application, as well as revealing information about back end servers, cryptographic constants and ciphers, and intellectual property.

M10 – Extraneous Functionality
Often, developers include hidden backdoor functionality or other internal development security controls that are not intended to be released into a production environment. For example, a developer may accidentally include a password as a comment in a hybrid app. Another example is disabling two-factor authentication during testing.
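One practical countermeasure is a release gate that fails the build or startup if debug-only switches are still enabled. A minimal sketch; the flag names are hypothetical:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Release-gate check: report any debug-only flags (test backdoors,
// 2FA bypasses, credential logging) still enabled in a build's config.
public class ReleaseGate {
    private static final List<String> FORBIDDEN_IN_PROD =
            List.of("debug.skip2fa", "debug.adminBackdoor", "debug.logCredentials");

    public static List<String> violations(Map<String, Boolean> flags) {
        return FORBIDDEN_IN_PROD.stream()
                .filter(f -> flags.getOrDefault(f, false))
                .collect(Collectors.toList());
    }
}
```

Wired into CI, a non-empty `violations` list would fail the release build before the extraneous functionality ever ships.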
