Another BYOD Security Challenge – User-Managed Remote Access Software

August 16, 2014

In a recent Defcon presentation, three researchers demonstrated that scanning the Internet — the entire Internet — is now a practical exercise.  That idea alone should force us all to re-frame how we measure the effectiveness of our infrastructure’s defensive posture — but that is not the topic of this post.  As part of their work, the team demonstrated the scale of unauthenticated remote access set up on business and personal endpoints reachable from the Internet.  As families acquire more and more Internet endpoints, it appears that someone in each household wants to access or manage some of them remotely.  This might be as simple as reaching the “office Mac” from a tablet on the couch, or as reckless as unauthenticated remote access to that home-office endpoint while traveling.  The use case matters less than the behavior itself.  If people are setting up unauthenticated remote access (or using default or easily-guessed passwords) on the endpoints they also want to bring to work, we all have a problem…

However ill-conceived, BYOD experiments, and even formal BYOD programs, seem to be a fever without a cure.  When a financial services workforce uses non-company endpoints, we inherit all the risks associated with their all-too-often unprofessional and uninformed management practices.  Now we have evidence that one facet of that behavior is the installation and use of unauthenticated remote access software.  There are a number of popular approaches.  The Defcon presentation appears to have focused on VNC (virtual network computing), but many other packages are used to support convenient remote access – Wikipedia lists dozens.

We need to train our workforce to limit their exposure (and the organization’s, via BYOD) to the risks associated with remote access software. At the very highest level, they need to understand that for any endpoint used for financial services work:

  1. Don’t run software (whatever it is) that is not really needed
  2. If you really need it, learn how to manage it and configure it to deliver only the features you need — in the context of end user-managed BYOD environments, running software you don’t understand is not risk-reasonable for performing financial services business (our regulators require, and our customers and partners expect, that we perform our business activities using risk management practices stronger than simple ‘hope’)
  3. If you need remote access, exercise the principle of least privilege
    1. Install and configure the software so that by default it is not turned on (if it is not running it will not support unintended remote access)
    2. Turn on your remote access software only when you need it, and then turn it off again as soon as is practical afterward
    3. Configure the remote access software to include a risk-reasonably short session timeout
    4. Permit only uniquely-authenticated users, each with a strong, unique, time-limited password
  4. Restrict remote access to your endpoint as tightly as possible
  5. Turn off all the remote access you can do without (a simple self-audit sketch follows this list)
  6. Use multiple layers of protection to implement defense in depth
    1. Run an endpoint firewall configured to reject all inbound communications attempts except those you explicitly authorize
    2. Don’t grant apps permissions that you don’t understand
    3. Don’t grant apps permissions that would enable access to business data or business communications
    4. Run one or more anti-malware packages
    5. Use security-centric web proxies
    6. Configure your browser(s) to their most paranoid settings
    7. Turn on your search engine’s ‘recommendation’ or anti-hostility service
    8. If your operating system supports it, perform your work in the absence of administrative rights (don’t make yourself equivalent to root or the local administrator)
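
To make the “turn it off” guidance testable, here is a minimal self-audit sketch in Python (an illustration of the idea, not a vetted tool; the port list is an assumption covering a few common packages) that checks whether anything on the endpoint is accepting connections on well-known remote access ports:

    import socket

    # Illustrative, NOT exhaustive: ports commonly used by remote access software.
    REMOTE_ACCESS_PORTS = {
        22: "SSH",
        3389: "RDP",
        5800: "VNC over HTTP",
        5900: "VNC",
        5901: "VNC display :1",
    }

    def audit_localhost(timeout=0.5):
        """Return the (port, name) pairs that accept a local TCP connection."""
        findings = []
        for port, name in sorted(REMOTE_ACCESS_PORTS.items()):
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex(("127.0.0.1", port)) == 0:
                    findings.append((port, name))
        return findings

    if __name__ == "__main__":
        found = audit_localhost()
        for port, name in found:
            print("WARNING: port %d (%s) is accepting connections" % (port, name))
        if not found:
            print("No common remote access ports are listening locally.")

A clean result proves little (software can listen on any port), but a warning is a concrete prompt to ask whether that service really needs to be running.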

In addition to end user education, and before permitting even the most limited BYOD experiment, financial services enterprises should configure their infrastructure to resist the use of virtually all known remote access software on those non-company devices.  The port-blocking and protocol recognition will not be perfect, but it will stop unauthorized use of the most casual installations.  We also need our SIEM infrastructure configured to alert, and staff ready to deal with those alerts.  In addition, using the same signature and/or correlation logic configured in the SIEM, those with widespread IPS infrastructure can block BYOD remote access attempts (at least in some scenarios).
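
As a sketch of what that detection might look like at its most basic (real deployments would use the SIEM/IPS vendor’s own signatures and interfaces; the subnet below is a hypothetical example), a sweep for endpoints answering on common remote access ports could feed alert lines to a log collector:

    import socket
    from ipaddress import ip_network

    BYOD_SUBNET = "192.168.10.0/28"        # hypothetical BYOD network segment
    WATCHED = {3389: "rdp", 5900: "vnc"}   # services to flag if found listening

    def sweep(subnet=BYOD_SUBNET, timeout=0.3):
        """Yield (host, port, service) for each watched service that answers."""
        for host in ip_network(subnet).hosts():
            for port, service in WATCHED.items():
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                    s.settimeout(timeout)
                    if s.connect_ex((str(host), port)) == 0:
                        yield str(host), port, service

    if __name__ == "__main__":
        for host, port, service in sweep():
            # In practice, ship these lines to the SIEM for alerting/correlation.
            print("ALERT remote-access-exposure host=%s port=%d service=%s"
                  % (host, port, service))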

All of the security measures required to deal with BYOD fever add expense that needs to be factored into the BYOD economic equation.  All of the new risks also need to be added to the overall enterprise risk management pool.  The impacts will differ from one organization to another.  For some, it seems reasonable to assume that the new costs and risks will far exceed any real benefits that BYOD could deliver in a financial services enterprise environment.

REFERENCES:

“Thousands of computers open to eavesdropping and hijacking.” By Lisa Vaas on August 15, 2014, http://nakedsecurity.sophos.com/2014/08/15/thousands-of-computers-open-to-eavesdropping-and-hijacking/

“Mass Scanning the Internet: Tips, Tricks, Results.” By Robert Graham, Paul McMillan, & Dan Tentler, https://www.defcon.org/html/defcon-22/dc-22-speakers.html#Graham 

“Comparison of remote desktop software.” From Wikipedia, http://en.wikipedia.org/wiki/Comparison_of_remote_desktop_software

“Principle of Least Privilege.” From Wikipedia, http://en.wikipedia.org/wiki/Principle_of_least_privilege

“Defense in depth.” From Wikipedia,
http://en.wikipedia.org/wiki/Defense_in_depth_%28computing%29


Catastrophes occur – Are we prepared?

July 23, 2014

Catastrophes occur.

Short-term incentives, goals, and the business practices that result from them tend to devalue preparing for low-frequency, high-impact events. In addition, human cognitive biases, like those generally called “availability” and “perception distortion” along with a host of others, tend to weaken attempts at effective long-term risk analysis. Because catastrophes occur, and because recovery requires activities materially different from dealing with more “normal” negative events, we are required to have plans in place to deal with them (or to have made sufficiently-informed decisions not to). In global Financial Services, I believe that major populations of our stakeholders assume that we are doing so.

This category of events includes, but is not limited to, earthquakes, floods, droughts, tsunamis, cyclones, and more. Some Financial Services organizations have attempted to address these natural (and some political) risks via geographic distribution of all critical functions — so that the loss of any given locality or region would remain below the threshold of “catastrophe.” That approach is not effective against other types of systemic vulnerabilities. Increasingly interconnected global business and technology infrastructure and operations have added new categories of potential catastrophe. It is likely that new vulnerabilities emerge from a greater degree of interdependency and interconnectedness than executive decision-makers understand. The combination of globalization in the Financial Services industry and Internet-enabled real-time reach is often highlighted as an opportunity to hedge risks through investment, vendor, partner, and customer diversity. The potential it also brings for strategic and enterprise-wide harm is not so well documented.

Internet “plumbing” like DNS and traffic routing is the product of relatively “ancient” architectures and, in some instances, incorporates decades-old code. Successful widespread exploitation of that plumbing could have catastrophic impacts on global financial services — virtually all of our markets depend upon real-time or near-real-time Internet connectivity. Sometimes the impact would be direct, but it would almost certainly also damage operations somewhere down the supply chain. Patching, disinfection, throttling, or containment at Internet scale is a challenge — one that we are not generally prepared for.

Successful targeted or widespread endpoint exploitation via one or another Internet pathogen has similar potential for catastrophic impacts — if a hostile agent can employ malware to gain partial or total control of our infrastructure and/or user endpoints, we don’t own our businesses anymore. That kind of asset transfer is something all financial services leaders need to be aware of. For many of us, even the failure of a single vendor/partner, or of a network of vendors/partners presenting a common interface, could result in materially-negative, even catastrophic consequences. What would happen to your organization if Amazon, Google, Bloomberg, Bank of New York (pick your large-scale partner) no longer had an effective Internet presence? How would your enterprise continue to function if broad categories of securities trading and/or settlement went dark because a systemic weakness in that “market” was exploited and “turned off?”

I believe that for most of us in Financial Services, this topic deserves more attention than it has generally been receiving.

The World Economic Forum [WEF] has been sponsoring work on this topic that might be a useful resource in getting this effort started, restarted, or enhanced at your organization.

In their 2014 “Global Risks Report,” the WEF authors argued that a myopic focus on quantitative risk-probability measures can disserve organizations. They also warned that leaning too heavily on “intuitive” thinking about risk can weaken an organization’s ability to deal with potentially-catastrophic risks.

I strongly recommend reading the 2014 WEF “Global Risks Report,” especially section 2, pages 38 through 47, where it focuses on cyber risks and strategies for managing global risks.

As a teaser, glance at their quick review of risk management and monitoring strategies below:

Risk-management strategies are guided by a firm’s risk appetite; the level of risk an organization is prepared to accept to achieve its objectives, such as profitability and safety goals. Often, although not always, there is a trade-off between profitability in times of normal operations and resilience in the face of negative events affecting the firm. Examples of risk management and monitoring strategies include:
  • Mitigation measures: Actions taken by the firm to reduce the likelihood and/or consequences of a negative event; for example, designing plants to withstand specific levels of natural disasters such as earthquakes, floods and hurricanes.
  • Accountability measures: Finding ways to incentivize individual employees not to cut corners in ways that would normally be undetectable but would increase a firm’s vulnerability in a crisis, such as failing to maintain back-ups. Some firms hire external consultants to assess how effectively they are mitigating risks identified as priorities.
  • Supply-chain diversification: Sourcing supplies and raw materials from multiple providers in different locations to minimize disruption if one link in the supply chain is broken. Another hedge against sudden unavailability of inputs is to maintain an excess inventory of finished products.
  • Avoiding less profitable risks: Firms may decide to drop activities altogether if they represent a small part of their overall business but a significant part of their risk profile.
  • Transferring the risk: In addition to the range of insurance products available — liability, property, business interruption — some large firms run their own “captive” insurance companies to distribute risks across their own different operations and subsidiaries.
  • Retaining the risk: When insurance is unobtainable or not cost-effective, firms can choose to set aside reserves to cover possible losses from low-probability risks.
  • Early warning systems: Some firms employ their own teams to scan for specific risks that may be brewing, from political crises, for example, to storms off the coast of Africa that may become hurricanes in the US in the next fortnight.
  • Simulations and tabletop exercises: Many firms simulate crisis situations; for example, by making critical staff unexpectedly unavailable and assessing how other employees cope. Such exercises can capture lessons to be integrated into the risk-management strategy.
  • Back-up sites: Many firms are set up so that if one or more factory or office becomes unusable, others are quickly able to assume the same functions.

    [Italics above quoted from: WEF, GRR 2014, page 44]

 

REFERENCES:
World Economic Forum – Global Risks Report 2014
http://www3.weforum.org/docs/WEF_GlobalRisks_Report_2014.pdf


Third-Party Security Assessments – We Need a Better Way

July 6, 2014

“According to a February 2013 Ponemon Institute survey, 65% of organizations transferring consumer data to third-party vendors reported a breach involving the loss or theft of their information. In addition, nearly half of organizations surveyed did not evaluate their partners before sharing sensitive data.” [DarkReading]

Assessing the risks associated with extending Financial Services operations into vendor/partner environments is a challenge.  It often yields less-than-crisp indicators of more or less risk.  Identifying, measuring, and dealing with these risks with a risk-relevant level of objectivity is generally not cheap and often takes time — and sometimes it is simply not practical using our traditional approaches.  Some approaches also address only a single point in time, which ignores the velocity of business and technical change.

There are a number of talented security assessment companies that offer specialized talent, experience, and localized access virtually world-wide.  The challenge is less about available talent than about time/delay, expense, and the risks sometimes associated with revealing your interest in any given target(s).

There are also organizations which attempt to replace a repetitive, labor-intensive process with a shared, labor-saving approach that may reduce operational expenses and support some amount of staff redeployment.  The Financial Services Roundtable/BITS has worked toward this goal for over a decade.  Their guidance is invaluable.  For those in the “sharing” club, it appears to work well when applied to a range of established vendor types.  It is, though, a difficult fit for many situations where the candidate vendors/partners are all relatively new (some still living on venture capital) and still undergoing rapid evolution.  Some types of niche, cloud-based specialty service providers fall easily into this category.  The incentive to invest in a “BITS-compliant” assessment for these types of targets seems small, and any assessment’s lasting value seems equally small.

Some challenges are magnified by increasing globalization – for example, how do we evaluate the risks associated with a candidate vendor whose technical and infrastructure administrative support personnel are spread across Brazil, Costa Rica, the U.S. East and West coasts, Vietnam, China, India, Georgia, Germany, and Ireland?  Culture still matters.  What a hassle…

None of that alters the fact that as global financial services organizations we have obligations to many of our stakeholders to effectively manage the risks associated with extending our operations into vendor’s environments and building business partnerships.

When the stakes are material – for example, during merger or acquisition research – it is easy to understand the importance of investing in an understanding of existing and candidate third-party risks.  There are many other situations where it seems “easy” to see that a third-party security assessment is mandated.  Unfortunately, not all use cases are so clear-cut.

When we are attempting to evaluate platform or vendor opportunities, especially in the early stages of doing so, the time and expense associated with traditional, full-bore third-party risk assessments are a mismatch.  Performing third-party risk assessments in-house can also reveal sensitive tactical or strategic planning, which can negatively impact existing relationships, add unnecessary complexity to negotiations, or, in edge cases, even disrupt relationships with key regulators.  As an industry, we have to get better at quick-turnaround third-party risk assessments that are “good enough” for many types of financial services decision-making.

For years, “technicians” have been evaluating Internet-facing infrastructure for signals of effective technology-centric risk management practices – or for their absence.  Poorly configured or vulnerable email or DNS infrastructure, open SNMP services, “external” exposure of “internal” administrative interfaces, weak SSL configurations, public announcements of breaches, and more have been used in attempts to read “signals” of stronger or weaker risk management practices.  A colleague just introduced me to a company that uses “externally-observable” data to infer how diligent a target organization is in mitigating technology-associated risks.  Based on a quick scan of their site, they tell a good story.*  I am interested in learning about anyone’s experience with this company, or with this type of service.

*I have no relationships with BitsightTech, financial or otherwise.
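
For a flavor of what “externally-observable” signals can look like (a toy example of the general idea, not a description of any vendor’s methodology; the hostname is a placeholder), even a short script can read a site’s negotiated TLS version and certificate expiry, two crude hygiene indicators:

    import socket
    import ssl
    from datetime import datetime

    def tls_signal(hostname, port=443, timeout=5.0):
        """Return (negotiated TLS version, certificate expiry) for a host."""
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
                # 'notAfter' looks like: 'Jun  1 12:00:00 2025 GMT'
                expiry = datetime.strptime(cert["notAfter"],
                                           "%b %d %H:%M:%S %Y %Z")
                return tls.version(), expiry

    if __name__ == "__main__":
        version, expiry = tls_signal("example.com")   # placeholder hostname
        print("protocol=%s certificate_expires=%s" % (version, expiry.date()))
        if (expiry - datetime.utcnow()).days < 30:
            print("signal: certificate near expiry, possible neglect")

An aging protocol version or a lapsing certificate is not proof of weak practices, but across a portfolio of observations these signals start to tell a story.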

 

REFERENCES:

“BitSight Technologies Launches Information Security Risk Rating Service.” 9/10/2013
http://www.darkreading.com/bitsight-technologies-launches-information-security-risk-rating-service/d/d-id/1140452?

“Bits Framework For Managing Technology Risk For Service Provider Relationships.” November 2003 Revised In Part February 2010.
http://www.bits.org/publications/vendormanagement/TechRiskFramework0210.pdf

Shared Assessments.
https://sharedassessments.org/

The company a colleague mentioned to me…
http://www.bitsighttech.com/


Mobile Malware Hits Bank Customers with Classic Ransom Scam

June 14, 2014

More than 100 million individuals use mobile banking apps in North America.  Given the primitive security capabilities of most mobile endpoints, that describes a material attack surface.

The mobile Trojan Svpeng was identified stealing mobile banking credentials by Kaspersky Labs almost a year ago.

The malware has continued to evolve since then, and since the start of this month it has been circulating as classic ransomware attacking Android-based mobile devices.

Initially it looks for banking applications from USAA, Citigroup, American Express, Wells Fargo, Bank of America, TD Bank, JPMorgan Chase, BB&T, and Regions Bank, and when it finds one or more, it forwards that information to a server under the cybercriminals’ control.

It then imitates a scan of the phone and announces that it has found prohibited content.  The malware blocks the phone, demands a payment of $200 to unblock it, and displays a photo of the user taken by the phone’s front camera.  Finally, the Trojan’s creators provide detailed directions for paying the ransom using ‘Green Dot’ MoneyPak vouchers.

Expect this model to continue evolving.  The team behind it understands how to get its malware onto individuals’ mobile devices, how to collect user credentials, and how to target mobile banking customers, and it appears to be building a database of endpoints and individuals that use specific banking apps.  It does not require much creativity to picture a business model where this information is sold to other hostile parties in an online datamart — crime, theft, and harm to follow…

This is another reason to enhance and actively manage the quality of your anti-fraud processes, algorithms, and infrastructure.

REFERENCES:

“Latest version of Svpeng targets users in US.”
Roman Unuchek, June 11, 2014
http://www.securelist.com/en/blog/8227/Latest_version_of_Svpeng_targets_users_in_US

“Kaspersky Lab detects mobile Trojan Svpeng: Financial malware with ransomware capabilities now targeting U.S. users”
June 11, 2014
http://www.kaspersky.com/about/news/virus/2014/Kaspersky-Lab-detects-mobile-Trojan-Svpeng-Financial-malware-with-ransomware-capabilities-now-targeting-US-users

“First Major Mobile Banking Security Threat Hits the U.S.”
By Penny Crosman , JUN 13, 2014
http://www.americanbanker.com/issues/179_114/first-major-mobile-banking-security-threat-hits-the-us-1068100-1.html


Exploits Violate OAuth 2.0 and OpenID Assumptions

May 14, 2014

Earlier this month, numerous outlets reported that Wang Jing, a PhD student in mathematics from Nanyang Technological University, uncovered serious security vulnerabilities in OAuth 2.0 and OpenID, the technologies used by many websites to authenticate users via third-party websites.

Almost all major providers of OAuth 2.0 and OpenID identity services are affected, including Facebook, Google, Yahoo, LinkedIn, Microsoft, PayPal, GitHub, QQ, Taobao, Weibo, VK, Mail.Ru, Sohu, and others.

Remember, OAuth 2.0 and OpenID are ways that third-party applications can support user authentication without maintaining a robust identity directory and the identity life-cycle processes that come with it. It is commonplace today to bump into offers to use your Facebook, Google, Twitter, or GitHub credentials on a third-party app or site.

That equation, an identity provider being used by a third-party client application, requires a certain level of trust among the three parties involved: provider, third party, and end user. The vulnerability uncovered by W. Jing shows how an attacker can exploit weaknesses in provider infrastructure and third-party applications to cause those applications to unintentionally act as a bridge between the provider and the attacker.

Some days it seems like input validation is the solution to almost every software security issue…*
More effective validation of inputs by third-party application developers and by the providers could deliver significant resistance to attackers. Whitelists of trusted sources, as well as more thorough verification procedures at the providers, could materially tamp down the risks associated with this class of vulnerabilities. The whitelist approach, though, would require a level of accuracy and maintenance that seems (at least to this author) unlikely to materialize without external incentives.
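
To make the whitelist point concrete, here is a minimal sketch of exact-match redirect URI validation (the registered callback is hypothetical, and real providers and frameworks have their own registration machinery); exact matching against pre-registered values is the behavior that closes the covert-redirect door:

    from urllib.parse import urlsplit

    # Hypothetical callback URI registered by a client application.
    REGISTERED_REDIRECT_URIS = {
        "https://app.example.com/oauth/callback",
    }

    def is_valid_redirect(redirect_uri):
        """Accept only exact, pre-registered HTTPS callbacks."""
        if urlsplit(redirect_uri).scheme != "https":
            return False
        # Deliberately an exact string match: loose "same domain" or prefix
        # matching is precisely what covert redirect attacks exploit.
        return redirect_uri in REGISTERED_REDIRECT_URIS

    assert is_valid_redirect("https://app.example.com/oauth/callback")
    assert not is_valid_redirect("https://app.example.com/redirect?to=evil.example")
    assert not is_valid_redirect("http://app.example.com/oauth/callback")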

This vulnerability is especially notable because:

  • It enables Open Redirect Attacks
  • It enables unauthorized access / identity fraud
  • It could lead to sensitive information leakage and/or customer information breaches
  • It has wide coverage: most of the major internet companies provide these types of authentication/authorization services — and some Financial Services organizations would like to offer these options as well, especially in (but not limited to) the mobile device environment.
  • It is difficult to “patch”

Some in the security community are playing down the potential impacts of this class of vulnerabilities, based on their assertion that “most of the websites using OAuth 2.0 and OpenID are social in nature,” or that the complexity of the exploit means the “majority of criminals won’t bother with it.” If you are in financial services and are using OAuth 2.0 and/or OpenID, think carefully through that logic in the context of your brand.

If you use any of those “sign in with my _____ ID” offerings, it seems rational for you to do some research to see if all the identity & authorization service providers involved are resistant to this new class of attack.

Some of the biggest identity-providers on earth are vulnerable to this new class of attack, including, but not limited to: facebook.com, google.com, linkedin.com, yahoo.com, live.com (Microsoft), vk.com, qq.com (Tencent), weibo.com (Sina), paypal.com, mail.ru, taobao.com (Alibaba), sina.com.cn (Sina), sohu.com, 163.com, github.com, alipay.com (Alibaba), and more.

For those of you who need to see the technical details, W. Jing has blog entries and YouTube videos detailing proof-of-concept attacks against each of the properties identified above on his blog at: http://tetraph.com/covert_redirect/oauth2_openid_covert_redirect.html#portfolio

* But not everyday…

REFERENCES:
“Covert Redirect Vulnerability Related to OAuth 2.0 and OpenID.”
By WANG Jing, May 2014.
http://tetraph.com/covert_redirect/oauth2_openid_covert_redirect.html

“OAuth and OpenID Vulnerable to Covert Redirect, But Have No Vulnerability.”
By Anthony M. Freed, 05-04-2014
http://www.tripwire.com/state-of-security/top-security-stories/oauth-and-openid-vulnerable-to-covert-redirect-but-have-no-vulnerability/

“Security Flaw Found In OAuth 2.0 And OpenID; Third-Party Authentication At Risk.”
By Tim Wilson, 05-04-2014
http://www.darkreading.com/security-flaw-found-in-oauth-20-and-openid-third-party-authentication-at-risk/

 


Graphic of DDoS Evolution 2013-2014

April 20, 2014
We are all attempting to figure out the right investments in DDoS resistance and mitigation.  In the fog of hype and vendor pitches, it is difficult to get perspective on what we need to be preparing for on this front.  In this situation, every hard-data resource has outsized value.
Incapsula recently released its “2013-2014 DDoS Threat Landscape Report.”  The findings outlined below are based on hundreds of attacks perpetrated against websites using Incapsula’s DDoS Mitigation service.  From that data, the report concludes (most of the following is quoted from the report):

Network (Layer 3 & 4) DDoS Attacks

  • Large SYN Floods account for 51.5% of all large-scale attacks
  • Almost one in every three attacks is above 20Gbps
  • 81% of attacks are multi-vector threats
  • Normal SYN flood & Large SYN flood combo is the most popular multi-vector attack (75%)
  • NTP reflection was the most common large-scale attack method in February 2014
2014: Emerging Trends
    • “Hit and Run” DDoS attacks: frequent short bursts of traffic, specifically designed to exploit the weakness of anti-DDoS services that depend on manual triggering (e.g., GRE tunneling to DNS re-routing). Hit and Run attacks are now changing the face of the anti-DDoS industry, pushing it towards “Always On” integrated solutions.
    • Multi-Vector Threats: 81% of all network attacks employed at least two different attack methods, with almost 39% using three or more different attack methods simultaneously.  Multi-vector tactics increase the attacker’s chance of success by targeting several different networking or infrastructure resources.  Combinations of different offensive techniques are also often used to create “smokescreen” effects, where one attack is used to create noise, diverting attention from another attack vector. Moreover, multi-vector methods enable attackers to exploit holes in a target’s security perimeter, causing conflicts in automated security rules and spreading confusion among human operators.
    • Attack Type Facilitates Growth: Today large scale DDoS attacks (20Gbps and above) already account for almost 33% of all network DDoS events. There is no doubt that the increasing adoption of these techniques will facilitate the growth of future volumetric network DDoS attacks, which could in turn drive an increase in investment in networking resources.  During January and February of 2014 a significant increase in the number of NTP Amplification attacks was noted. In fact, this reached the point where, in February, NTP Amplification attacks became the most commonly used attack vector for large scale network DDoS attacks.
    • Weapons of Choice: attackers’ most common “weapons of choice” were large SYN floods, NTP Amplification, and DNS Amplification
    • NTP DDoS is on the Rise

Application (Layer 7) DDoS Attacks

In the second half of 2013, Incapsula began to encounter a much more complex breed of DDoS offender, including browser-based bots which were immune to generic filtering methods and could only be stopped by a combination of customized security rules and reputation-based heuristics.  High volume is not essential: even a rate of 50-100 requests/second is enough to cripple most mid-sized websites, exceeding typical capacity margins.  (A naive illustration of rate-based request filtering appears after this summary.)
  • DDoS bot traffic is up by 240%:  On average, Incapsula recorded over 12 million unique DDoS bot sessions on a weekly basis, which represents a 240% increase over the same period in 2013.
  • More than 25% of all Botnets are located in India, China and Iran
  • USA is ranked number 5 in the list of “Top 10” attacking countries
  • 29% of Botnets attack more than 50 targets a month — 7% attack more than 100 per month.
  • 29.9% of DDoS bots can hold cookies.  In the fourth quarter of 2013, Incapsula reported the first encounter with browser-based DDoS bots that were able to bypass both JavaScript and Cookie challenges – the two most common methods of bot filtering.  This trend continues in 2014.
  • 46% of all spoofed user-agents are fake Baidu Bots (while 11.7% are fake Googlebots)
2014: Emerging Trends
    • Botnet Geo-Locations
    • “Shared Botnets”
    • Bots are Evolving
    • Common Spoofed User-Agents
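
To put the application-layer numbers above in perspective, the sliding-window counter below (a naive sketch only; Incapsula’s point is precisely that modern bots evade simple filters, so production defenses layer customized rules and reputation heuristics on top of checks like this) shows how little traffic it takes to exceed a 50-requests-per-second budget:

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 1.0
    MAX_REQUESTS_PER_WINDOW = 50      # illustrative per-client budget

    _history = defaultdict(deque)     # client ip -> recent request timestamps

    def is_flooding(client_ip, now=None):
        """Return True when a client exceeds the request budget for the window."""
        now = time.monotonic() if now is None else now
        window = _history[client_ip]
        window.append(now)
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > MAX_REQUESTS_PER_WINDOW

    if __name__ == "__main__":
        # Simulate a one-second burst: the 51st request trips the check.
        flags = [is_flooding("198.51.100.7", now=i * 0.01) for i in range(60)]
        print("first flagged request index:", flags.index(True))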

 

REFERENCES:

2013-2014 DDoS Threat Landscape Report
http://www.incapsula.com/images/blog/images/2013-14_ddos_threat_landscape.pdf

The findings above are summarized in a graphic from Incapsula: http://www.incapsula.com/images/blog/images/ddos-attack-trends-2013-2014.jpg

