Over-the-Air Mobile Hack

June 23, 2020

An iOS mobile device used by a Moroccan journalist appears to have been attacked using a ‘network-only’ approach to delivering NSO Group’s “Pegasus” spyware.

This is an attack vector that is difficult to defend against. High-value personnel (can you transfer $150M U.S.?) doing business in areas with an elevated risk of cyber attack should be factoring this attack vector into their defenses.

Illustration from The Star

Read the whole story by Marco Chown Oved at https://www.thestar.com/news/canada/2020/06/21/journalists-phone-hacked-by-new-invisible-technique-all-he-had-to-do-was-visit-one-website-any-website.html

REFERENCES:
“Journalist’s phone hacked by new ‘invisible’ technique: All he had to do was visit one website. Any website.”
By Marco Chown Oved, Sun., June 21, 2020
https://www.thestar.com/news/canada/2020/06/21/journalists-phone-hacked-by-new-invisible-technique-all-he-had-to-do-was-visit-one-website-any-website.html


Apple iOS Vuln via Mail

April 26, 2020

ZecOps announced a collection of iOS vulnerabilities in the iOS Mail app that enable hostile agents to run arbitrary code and to delete messages, and that appear to have existed since at least iOS 6…
So far, this has been described as a set of out-of-bounds write and heap-overflow vulnerabilities being used against targeted, high-value endpoints. My interpretation of ZecOps’ detailed write-up is that this qualifies as a remote, anonymous, arbitrary code execution vulnerability. As such, even if it must be targeted and even if it is not an ‘easy’ attack, global financial services organizations are targets of determined adversaries, so we need to take it seriously.

Apple responded by rejecting the idea that this represented an elevated risk for consumers because their security architecture was not violated and they found no evidence of impact to their customers — while engineering a fix that will be rolled out soon. Is it time to factor this elevated risk behavior (reject-but-fix) into our threat models?

The ZecOps headline was:

“The attack’s scope consists of sending a specially crafted email to a victim’s mailbox enabling it to trigger the vulnerability in the context of iOS MobileMail application on iOS 12 or maild on iOS 13. Based on ZecOps Research and Threat Intelligence, we surmise with high confidence that these vulnerabilities – in particular, the remote heap overflow – are widely exploited in the wild in targeted attacks by an advanced threat operator(s).”

For global financial services enterprises, the presence of hundreds of billions, even trillions of dollars in one or another digital form seems to make this risk rise to the level of relevance. This is especially true because of the effectiveness of Apple’s marketing techniques across broad categories of roles expected to populate our organizations — i.e., our staff and leaders often use Apple devices.

On one front “Apple’s product security and the engineering team delivered a beta patch (to ZecOps) to block these vulnerabilities from further abuse once deployed to GA.”

On another front, Apple also publicly rejected ZecOps’ claims about finding evidence of the exploit being used, saying the three issues “are insufficient to bypass iPhone and iPad security protections, and we have found no evidence they were used against customers.” If I read this assertion carefully, and in the context of potential future legal action or sales headwinds, it does not inspire confidence that the vulnerabilities were not real and exploitable as described — only that Apple rejects some narrowly-crafted subset of ZecOps’ announcement/analysis and still stands behind the effectiveness of some subset of the iOS architecture.

Apple’s full statement:

“Apple takes all reports of security threats seriously. We have thoroughly investigated the researcher’s report and, based on the information provided, have concluded these issues do not pose an immediate risk to our users. The researcher identified three issues in Mail, but alone they are insufficient to bypass iPhone and iPad security protections, and we have found no evidence they were used against customers. These potential issues will be addressed in a software update soon. We value our collaboration with security researchers to help keep our users safe and will be crediting the researcher for their assistance.”

The Apple echo-chamber kicked in to support the rejection in its most comprehensive and positive interpretation…

ZecOps’ summary of their findings includes (quoted):

  • The vulnerability allows remote code execution capabilities and enables an attacker to remotely infect a device by sending emails that consume significant amount of memory
  • The vulnerability does not necessarily require a large email – a regular email which is able to consume enough RAM would be sufficient. There are many ways to achieve such resource exhaustion including RTF, multi-part, and other methods
  • Both vulnerabilities were triggered in-the-wild
  • The vulnerability can be triggered before the entire email is downloaded, hence the email content won’t necessarily remain on the device
  • We are not dismissing the possibility that attackers may have deleted remaining emails following a successful attack
  • Vulnerability trigger on iOS 13: Unassisted (/zero-click) attacks on iOS 13 when Mail application is opened in the background
  • Vulnerability trigger on iOS 12: The attack requires a click on the email. The attack will be triggered before rendering the content. The user won’t notice anything anomalous in the email itself
  • Unassisted attacks on iOS 12 can be triggered (aka zero click) if the attacker controls the mail server
  • The vulnerabilities exist at least since iOS 6 – (issue date: September 2012) – when iPhone 5 was released
  • The earliest triggers we have observed in the wild were on iOS 11.2.2 in January 2018

Like any large-scale software vendor, Apple fixes a lot of bugs and flaws. I am not highlighting that as an issue — a certain number of bugs & flaws is expected in large-scale development efforts.  I think that it is important to keep in mind that iOS devices are regularly found in use in safety and critical infrastructure operations, which increases the importance of managing the software lifecycle in ways that minimize the number, scope, and nature of the bugs & flaws that make it into production.

Apple has a history of enthusiastically rejecting announcements of interesting, elevated-risk vulnerabilities using narrowly-crafted language likely to stand up to legal challenge while concurrently rolling out fixes — which often reads like a fairly overt admission of the vulnerability in question.
This behavior leaves me thinking that Apple has created a corporate culture that is impairing its ability to do effective threat modeling.  From the outside, it increasingly appears that Apple’s iOS trust boundaries are expected to match the corporation’s marketing expressions of its control architecture — ‘the happy path,’ where formal iOS isolation boundaries matter only in the ways defined in public to consumers, and other I/O channels are defined out of what matters…  If I am even a little correct, that cultural characteristic needs to be recognized and incorporated into our risk management practices.

Given the scale of their profits, Apple has tremendous resources that could be devoted to attack surface definition, threat modeling, and operational verification of their assumptions about the same. Many types of OOB Write and Heap-Overflow bugs are good targets for discovery by fuzz testing as well. Until recently I would have assumed that by this point in iOS & iPhone/iPad maturation, Apple had automation in place to routinely, regularly & thoroughly fuzz obvious attack vectors like inbound email message flow in a variety of different ways and at great depth.
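To make that concrete, here is a minimal sketch of that kind of harness — in Python, and purely as an illustration: the standard-library email parser stands in for Apple’s MobileMail/maild parsing code (which we obviously cannot exercise here), and the seed message and iteration count are arbitrary assumptions.

import random
from email import message_from_bytes
from email.policy import default

# A tiny seed message; a real corpus would hold many diverse MIME samples
# (multi-part, RTF attachments, malformed headers, and so on).
SEED = (
    b"From: a@example.com\r\n"
    b"To: b@example.com\r\n"
    b"Subject: seed\r\n"
    b"MIME-Version: 1.0\r\n"
    b"Content-Type: multipart/mixed; boundary=XYZ\r\n\r\n"
    b"--XYZ\r\nContent-Type: text/plain\r\n\r\nhello\r\n--XYZ--\r\n"
)

def mutate(data: bytes, n_flips: int = 8) -> bytes:
    """Randomly overwrite a few bytes in the seed message."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def run(iterations: int = 10_000) -> None:
    for i in range(iterations):
        sample = mutate(SEED)
        try:
            # Stand-in parse target; the interesting findings come from
            # pointing a harness like this at the code you actually ship.
            message_from_bytes(sample, policy=default)
        except Exception as exc:
            # Unexpected crashes or exceptions are the leads worth triaging.
            print(f"iteration {i}: {type(exc).__name__}: {exc}")

if __name__ == "__main__":
    run()

Real fuzzing campaigns add coverage feedback, sanitizers, and a large corpus of malformed MIME samples, but the loop is the same.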

This pattern of behavior has been exhibited long enough and consistently enough that it seems material for global financial services enterprises. So many of our corporations support doing large amounts of our business operations on iDevices. I think that we need to begin to factor this elevated risk behavior into our threat models. What do you think?

REFERENCES:
“You’ve Got (0-click) Mail!” By ZecOps Research Team, 04-20-2020
https://blog.zecops.com/vulnerabilities/youve-got-0-click-mail/

“Apple’s built-in iPhone mail app is vulnerable to hackers, research says.” By Reed Albergotti, 2020-04-23
https://www.washingtonpost.com/technology/2020/04/23/apple-hack-mail-iphone/

“Apple downplays iOS Mail app security flaw, says ‘no evidence’ of exploits — ‘These potential issues will be addressed in a software update soon’” By Jon Porter, 2020-04-24 https://www.theverge.com/2020/4/24/21234163/apple-ios-ipados-mail-app-security-flaw-statement-no-evidence-exploit


All Attack Vectors Remain Relevant

April 24, 2020

Cybersecurity researchers from ESET said yesterday that they have disabled parts of the “VictoryGate” botnet command & control infrastructure.  It had been directing the activity of a malware botnet of at least 35,000 compromised Windows systems.  The hostile agents had been using the Windows endpoints to mine Monero cryptocurrency since May 2019.  Most of the exploited endpoints are located in Latin America, with Peru accounting for 90% of them.  These hosts were owned by both public- and private-sector organizations, including financial institutions.

ESET researchers confirmed that USB drives were the main or only malware attack and propagation vector.

“The victim receives a USB drive that at some point was connected to an infected machine. It seemingly has all the files with the same names and icons that it contained originally. Because of this, the contents will look almost identical at first glance” …”When an unsuspecting user “opens” (i.e. executes) one of these files, an AutoIt script will open both the file that was intended, in addition to the initial module, both hidden by VictoryGate in a hidden directory”…

Replication through Removable Media has been an initial access path for attackers for as long as there has been removable media.

Financial services enterprises need to remain vigilant in their efforts to police their entire attack surface — including the boring, routine portions like USBs & USB port usage.
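For illustration only, here is a minimal sketch (Python) of a first-pass check an incident responder or help desk might run against a suspect USB drive — flagging hidden directories and executable files carrying document-style double extensions, the pattern ESET describes. The mount point, the extension list, and the ‘dot-prefix means hidden’ shortcut are all simplifying assumptions (on Windows, the hidden attribute needs its own check).

import sys
from pathlib import Path

# Extensions that commonly masquerade as "documents" on an infected drive.
SUSPICIOUS_EXTENSIONS = {".exe", ".scr", ".com", ".vbs", ".js", ".lnk"}

def scan_removable_drive(mount_point: str) -> None:
    root = Path(mount_point)
    for path in root.rglob("*"):
        # Hidden directories are a common hiding spot for the real payload.
        # (Dot-prefix only; Windows 'hidden' attributes need a separate check.)
        if path.is_dir() and path.name.startswith("."):
            print(f"[hidden dir] {path}")
        # A double extension like 'report.docx.exe' mimics the original file.
        elif path.is_file() and path.suffix.lower() in SUSPICIOUS_EXTENSIONS:
            label = "double ext" if len(path.suffixes) > 1 else "executable"
            print(f"[{label}] {path}")

if __name__ == "__main__":
    # e.g.  python usb_scan.py E:\   or   python usb_scan.py /media/usb0
    scan_removable_drive(sys.argv[1] if len(sys.argv) > 1 else ".")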

I think that the detailed ESET write-up is worth a read if you have any responsibility for attack surface management, or if you have a role in incident response.

REFERENCES:

“Following ESET’s discovery, a Monero mining botnet is disrupted.” By Alan Warburton, 23 Apr 2020. https://www.welivesecurity.com/2020/04/23/eset-discovery-monero-mining-botnet-disrupted/


Twitter Reduces Privacy Options

April 11, 2020

The Electronic Frontier Foundation (EFF) provided some context for the recent change in Twitter privacy options.  I think that it is an excellent read and recommend it to anyone involved in Financial Services security — especially those involved in mobile application architecture, design, and implementation.

Their conclusion:

Users in Europe retain some level of control over their personal data, in that they get to decide whether advertisers on Twitter can harvest their device identifiers. All other Twitter users have lost that right.

The more broadly available users’ device identifiers become — especially in the context of their behaviors (how they use their devices) — the greater the risks and the harder it becomes to resist a range of attacks.  We already have a difficult time identifying customers, vendors, contractors, the people we work with, and leaders throughout our organizations.  We depend on all kinds of cues (formal and informal) for making trust decisions.  As the pool of data available to hostile agents grows along every relevant dimension (because if it is gathered, it will be sold and/or leaked), it becomes more and more difficult for us to find information that only the intended/expected individual would know or would have.

Defending against competent social engineering is already a great challenge — and behaviors like Twitter’s* will make it more difficult.

Note: Twitter is hardly alone in its attraction to the revenue that this type of data sale brings in…

REFERENCE:

https://www.eff.org/deeplinks/2020/04/twitter-removes-privacy-option-and-shows-why-we-need-strong-privacy-laws


Capital One Concerns Linked To Overconfidence Bias?

August 2, 2019

Earlier this week, on July 29, FBI agents arrested Paige “erratic” Thompson in connection with the download of ~30 GB of Capital One credit application data from a rented cloud data server.

In a statement to the FBI, Capital One reported that an intruder executed a command that retrieved the security credentials for a web application firewall administrator account, used those credentials to list the names of data folders/buckets, and then copied (synced) them to buckets controlled by the attacker.  The incident appears to have affected approximately 100 million people in the United States and six million in Canada.
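As a hedged illustration of the kind of monitoring signal that was apparently missing — the time window is arbitrary, and the sketch assumes CloudTrail is enabled and boto3 credentials are configured — something as simple as the following surfaces recent ListBuckets calls along with the identity and source address behind each one, which would make an appliance role suddenly enumerating storage stand out.

import json
from datetime import datetime, timedelta, timezone

import boto3  # assumes AWS credentials and CloudTrail are already configured

def recent_list_buckets(hours: int = 24) -> None:
    """Print who called ListBuckets recently, and from which source address."""
    client = boto3.client("cloudtrail")
    start = datetime.now(timezone.utc) - timedelta(hours=hours)
    pages = client.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ListBuckets"}],
        StartTime=start,
    )
    for page in pages:
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])
            # A storage-enumeration call made with an appliance role's credentials,
            # from an unfamiliar source IP, is exactly the signal worth alerting on.
            print(event.get("Username"), detail.get("sourceIPAddress"), event["EventTime"])

if __name__ == "__main__":
    recent_list_buckets()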

If you just want to read a quick summary of the incident, try “For Big Banks, It’s an Endless Fight With Hackers.” or “Capital One says data breach affected 100 million credit card applications.”

I just can’t resist offering some observations, speculation, and opinions on this topic.

Since as early as 2015, the Capital One CIO has hyped being first to the cloud, their cloud journey, and their cloud transformation, and has asserted that their customers’ data was more secure in the cloud than in their private data centers.  Earlier this year the company argued that moving to AWS will “strengthen your security posture” and highlighted their ability to “reduce the impact of compliance on developers” (22:00) — using AWS security services and the network of AWS security partners — and that software engineers and security engineers “should be one in the same.” (9:34)

I assume that this wasn’t an IT experiment, but an expression of a broader Capital One corporate culture — their values and ethics.  I also assume that there was/is some breakdown in their engineering assumptions about how their cloud infrastructure and its operations worked.  How does this happen?  Given the information available to me today, I wonder about the role of malignant group-think and an echo chamber at work, or some shared madness gripping too many levels of Capital One management.  Capital One has to have hordes of talented engineers — some of whom had to be sounding alarms about the risks associated with the execution of this ‘cloud first’ mission (I assume they attempted to communicate that it was leaving the company open to accusations of ‘mismanaging customer data,’ ‘inaccurate corporate communications,’ excessive risk appetite, and more).  There were lots of elevated-risk decisions that managers (at various levels) needed to authorize…

Based on public information, it appears that:

  • The sensitive data was stored in a way that it could be read from the “local instance” in clear text (ineffective or absent encryption).
  • The sensitive data was stored on a cloud version of a file system, not a database (weaker controls, weaker monitoring options).
  • The sensitive data was gathered by Capital One starting in 2005 — which suggests gaps in their data life-cycle management (ineffective or absent data life-cycle management controls)
  • There were no effective alerts or alarms announcing unauthorized access to the sensitive data (ineffective or absent IAM monitoring/alerting/alarming).
  • There were no effective alerts or alarms announcing ‘unexpected’ or out-of-specification traffic patterns (ineffective or absent data communications or data flow monitoring/alerting/alarming).
  • There were no effective alerts or alarms announcing social media, forums, dark web, etc. chatter about threats to Capital One infrastructure/data/operations/etc. (ineffective or absent threat intelligence monitoring & analysis, and follow-on reporting/alerting/alarming).
  • Capital One’s conscious program to “reduce the compliance burden that we put on our developers” (28:23) may have obscured architectural, design, and/or implementation weaknesses from Capital One developers (a lack of security transparency, possibly overconfidence that developers understood their risk management obligations, and possible weaknesses in their secure software program).
  • Capital One ‘wrapped’ a gap in IAM vendor Sailpoint’s platform with custom integrations to AWS identity infrastructure (16:19) (potentially increasing the risk of misunderstanding or omission in this identity & access management ‘plumbing’).
  • There may have been application vulnerabilities that permitted the execution of server side commands (ineffective input validation, scrubbing, etc. and possibly inappropriate application design, and possible weaknesses in their secure code review practices and secure software training).
  • There may have been infrastructure configuration decisions that permitted elevated rights access to local instance meta-data (ineffective configuration engineering and/or implementation).
  • There must be material gaps or weaknesses in Capital One’s architecture risk assessment practices or in how/where they are applied, and/or they must have been incomplete, ineffective, or worse for a long time.
  • And if this was the result of ‘designed-in‘ or systemic weaknesses at Capital One, there seems to be room for questions about their SEC filings about the effectiveness of their controls supportable by the facts of their implementation and operational practices.

In almost any context this is a pretty damning list.  Most of these are areas where global financial services enterprises are supposed to be experts.

Aren’t there also supposed to be internal systems in place to ensure that each financial services enterprise achieves risk-reasonable levels of excellence in each of the areas mentioned in the bullets above?  And where were the regulations & regulators that play a role in assuring that is the case?

How does an enormous, heavily-regulated financial services enterprise get into a situation like this?  There is a lot of psychological research suggesting that overconfidence is a widespread cognitive bias and I’ve read, for example, that it underpins what is sometimes called ‘organizational hubris,’ which seems like a useful label here.   The McCombs School of Business Ethics Unwrapped program defines ‘overconfidence bias’ as “the tendency people have to be more confident in their own abilities than is objectively reasonable.”  That also seems like a theme applicable to this situation.  Given my incomplete view of the facts, it seems like this may have been primarily a people problem, and only secondarily a technology problem.  There is probably no simple answer…

Is the Capital One case unique?  Could other financial services enterprises be on analogous journeys?

REFERENCES:
“Capital One Data Theft Impacts 106M People.” By Brian Krebs. https://krebsonsecurity.com/2019/07/capital-one-data-theft-impacts-106m-people/
“Why did we pick AWS for Capital One? We believe we can operate more securely in their cloud than in our own data centers.” By Rob Alexander, CIO, Capital One, https://aws.amazon.com/campaigns/cloud-transformation/capital-one/ and https://youtu.be/0E90-ExySb8?t=212
“For Big Banks, It’s an Endless Fight With Hackers.” By Stacy Cowley and Nicole Perlroth, 30 July 2019. https://www.nytimes.com/2019/07/30/business/bank-hacks-capital-one.html
“Capital One says data breach affected 100 million credit card applications.” By Devlin Barrett. https://www.washingtonpost.com/national-security/capital-one-data-breach-compromises-tens-of-millions-of-credit-card-applications-fbi-says/2019/07/29/…
“AWS re:Inforce 2019: Capital One Case Study: Addressing Compliance and Security within AWS (FND219)” https://youtu.be/HJjhfmcrq1s
“Frequently Asked Questions.” https://www.capitalone.com/facts2019/2/
Overconfidence Bias defined: https://ethicsunwrapped.utexas.edu/glossary/overconfidence-bias
Scholarly articles for cognitive bias overconfidence: https://scholar.google.com/scholar?hl=en&as_sdt=1,16&as_vis=1&q=cognitive+bias+overconfidence&scisbd=1
“How to Recognize (and Cure) Your Own Hubris.” By John Baldoni. https://hbr.org/2010/09/how-to-recognize-and-cure-your

 


Input Validation Error Creates Widespread RCE Vulnerability

July 2, 2019

On June 3, 2019 Heiko Schlittermann announced “[CVE-2019-10149: Exim 4.87 to 4.91: Possible Remote Exploit].” The announcement emphasized that they believed there was no evidence of any active exploits at that time and that Exim version 4.92 was not vulnerable.
The Qualys security team had discovered the “remote command execution” vulnerability more than a week earlier during a code review of the latest Exim updates, named it “The Return of the WIZard,” and notified the Exim team.
The race began.
Trend Micro reported that Exim mail transfer agents [MTAs] are used by almost 57% of the internet’s email servers, so the overall attack surface presented by this vulnerability was enormous.
By June 5 at 15:15 UTC the Exim team released a fix to the public.

The Qualys security advisory includes an excellent technical description of the vulnerability and its exploit. Read it if that is of interest to you.
Successful exploitation did not require sophisticated shellcode or heroic coding. One attack path involved an attacker incorporating RCPT TO “${run{…}}@somedestination.com” into their email message (where “somedestination.com” is one of Exim’s relay_to_domains) — a low-tech attack without much resistance to abuse.

A simple and *harmless* example could be:

RCPT TO:<${run{\x2Fbin\x2Fsh\t-c\t\x22id\x3E\x3E\x2Ftmp\x2Fid\x22}}@somedestination.com>

Which is an *escaped* version of:

RCPT TO:<${run{/bin/sh -c "id>>/tmp/id"}}@somedestination.com>

Which would execute on the remote server:

/bin/sh -c "id>>/tmp/id"

The threat that this represents is limited only by the creativity and ability of attackers who choose to invest some energy in exploiting this vulnerability. Even if you have never managed a Linux/Unix server: under many expected circumstances, this is a relatively extreme risk.

The attacker would “own” your server inside your trusted data center or in your “safe” cloud environment, or they would “own” a trusted server inside your trusted vendor’s infrastructure, or they would “own” a server operated by your trusted cloud provider (who preach that they understand security better than you).

All of those scenarios are on the really-not-good part of your risk continuum.

Shodan searches showed millions of vulnerable services still running 10 days after the fix was released.
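If you run your own MTA, a first-pass check is trivial: Exim typically advertises its version in the SMTP greeting unless the banner has been customized. A minimal sketch in Python, assuming your server answers on port 25:

import socket
import sys

def smtp_banner(host: str, port: int = 25, timeout: float = 5.0) -> str:
    """Fetch the SMTP greeting; Exim usually includes its version string here."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        banner = sock.recv(1024).decode(errors="replace").strip()
        sock.sendall(b"QUIT\r\n")  # be polite before disconnecting
    return banner

if __name__ == "__main__":
    host = sys.argv[1] if len(sys.argv) > 1 else "localhost"
    banner = smtp_banner(host)
    print(banner)
    if "Exim" in banner:
        # 4.87 through 4.91 are in the vulnerable range for CVE-2019-10149.
        print("Exim detected - compare the advertised version against 4.92 or later")

A customized banner proves nothing either way, of course; verifying the installed package version on the host itself is the authoritative check.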

Nine days after the public announcement, Trend Micro outlined a range of active attacks against this vulnerability.
Similarly, Microsoft’s security team confirmed the presence of “an active Linux worm leveraging a critical Remote Code Execution (RCE) vulnerability, CVE-2019-10149, in Linux Exim email servers running Exim version 4.87 to 4.91.” The team also reminded Microsoft Azure customers that Linux IaaS instances running a vulnerable version of Exim are affected — customers using Azure virtual machines (VMs) are responsible for updating the operating systems and all applications running on those VMs.

Ten days later Pierluigi Paganini published a post about attacks trying to spread malware by abusing the CVE-2019-10149 vulnerability.

Today [02 July 2019], nearly a month after the public announcement, Shodan still lists 1,972,180 vulnerable Exim MTAs exposed to the open Internet, with the highest concentration in the United States. At this time there are 2,120,670 updated/patched servers in the U.S. and 927,894 vulnerable servers — almost 44% of the exposed servers hosted in the U.S. are running vulnerable versions of Exim and are candidates for remote, anonymous, arbitrary code/command execution.

How can so many organizations simply donate their infrastructure (their operations, their money, their customers, their brand, and more) to criminals?

REFERENCES:
Qualys Security Advisory — The Return of the WIZard: RCE in Exim (CVE-2019-10149): https://www.qualys.com/2019/06/05/cve-2019-10149/return-wizard-rce-exim.txt
CVE-2019-10149: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10149
CVE-2019-10149: Exim 4.87 to 4.91: possible remote exploit: https://www.openwall.com/lists/oss-security/2019/06/04/1
Exim: https://en.wikipedia.org/wiki/Exim
SECURITY ALERT: Remote Code Execution (RCE) Vulnerability in Exim MTA (CVE-2019-10149): https://success.trendmicro.com/solution/1122972-security-alert-remote-code-execution-rce-vulnerability-in-exim-mta-cve-2019-10149
Hacker Groups Pounce on Millions of Vulnerable Exim Servers. June 14, 2019: https://www.trendmicro.com/vinfo/be/security/news/cybercrime-and-digital-threats/-hacker-groups-pounce-on-millions-of-vulnerable-exim-servers
Prevent the impact of a Linux worm by updating Exim (CVE-2019-10149). June 14, 2019: https://blogs.technet.microsoft.com/msrc/2019/06/14/prevent-the-impact-of-a-linux-worm-by-updating-exim-cve-2019-10149/
Malware researchers at Cybaze-Yoroi ZLAB observed many attack attempts trying to spread malware abusing the CVE-2019-10149 issue. June 24, 2019 By Pierluigi Paganini: https://securityaffairs.co/wordpress/87523/hacking/cve-2019-10149-wizard-vulnerability.html


Spring Boot & jackson-databind Frustration

March 16, 2019

If you use the Spring Boot framework, you will likely bump into ‘Unsafe Deserialization’ issues highlighted in your static code security analysis results.  Dealing with this type of vulnerability tends to be one of the more labor-intensive chores for software development teams.

A remote, anonymous, arbitrary code execution vulnerability in the open source component ‘jackson-databind’ is the most common root cause for these issues because spring-boot-starter-actuator uses it as a dependency.  ‘Unsafe Deserialization’ is typically addressed by upgrading com.fasterxml.jackson.core:jackson-databind — but because it is tightly coupled with spring-boot-starter-actuator, that is problematic.  If your application tolerates it, the quickest fix is to upgrade Spring Boot to a version that uses jackson-databind version 2.9.8 (or above, as of today).  The ‘available’ versions of ‘safe-enough’ Spring Boot keep shrinking.  The key challenge is jackson-databind’s use of a blacklist to resist specific attack payloads.  Every time a new attack gadget is released, jackson-databind goes from ‘safe enough’ to CVSSv3 10 (whole-house-fire) overnight.  Because re-versioning Spring Boot components requires significant effort, there is a lag between any new jackson-databind release and the Spring Boot releases that incorporate it.

If the application (the whole Spring-enabled stack) under evaluation does not employ beans and does not expose any listening TCP ports (and no RMI, JMSInvoker, HTTPInvoker, etc., and you find no use of readObject(), readObjectNoData(), readResolve(), or readExternal() either), and you can produce some evidence of that, then that application may not be vulnerable.  It can be hard to prove a negative, and because of the depth & complexity of some Spring Boot-enabled applications (outside of the code that you write), that threshold can be a high bar.  So, circle back to: “If your application tolerates it, the quickest fix is to upgrade spring-boot-starter-actuator to a version that uses jackson-databind version 2.9.8 (or above).”
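One small thing that does help is knowing exactly which jackson-databind your built artifact actually ships. Below is a minimal sketch (Python) that walks a lib/ or exploded-archive directory and flags jackson-databind jars older than 2.9.8 — the directory layout and jar-naming convention are assumptions, and a real pipeline would pull the same answer from ‘mvn dependency:tree’ or your software composition analysis tooling.

import re
import sys
from pathlib import Path

MINIMUM = (2, 9, 8)  # the 'safe enough as of today' floor discussed above
JAR_PATTERN = re.compile(r"jackson-databind-(\d+)\.(\d+)\.(\d+).*\.jar$")

def flag_old_jackson(lib_dir: str) -> None:
    """Flag jackson-databind jars below the minimum version under a directory."""
    for jar in Path(lib_dir).rglob("jackson-databind-*.jar"):
        match = JAR_PATTERN.search(jar.name)
        if not match:
            continue
        version = tuple(int(part) for part in match.groups())
        status = "OK " if version >= MINIMUM else "OLD"
        print(f"[{status}] {jar}")

if __name__ == "__main__":
    flag_old_jackson(sys.argv[1] if len(sys.argv) > 1 else ".")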

REFERENCES:

CVE-2017-17485: FasterXML jackson-databind through 2.8.10 and 2.9.x through 2.9.3 allows unauthenticated remote code execution because of an incomplete fix for the CVE-2017-7525 deserialization flaw. This is exploitable by sending maliciously crafted JSON input to the readValue method of the ObjectMapper, bypassing a blacklist that is ineffective if the Spring libraries are available in the classpath. FROM: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-17485

Variants of this issue have been appearing and reappearing since 2011 (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-2894).

Also:
https://snyk.io/vuln/SNYK-JAVA-COMFASTERXMLJACKSONCORE-32111
https://nvd.nist.gov/vuln/detail/CVE-2018-7489
https://nvd.nist.gov/vuln/detail/CVE-2017-7525
https://fortiguard.com/encyclopedia/ips/46004


Some Upgrades Are Not Optional – Engineer Them For The Reality of Your Operations

December 5, 2018

Kubernetes enables complexity at scale across cloud-enabled infrastructure.

Like any other software, it also is — from time to time — vulnerable to attack.

A couple of days ago I read about a CVSS v3.0 ‘9.8’ (critical) Kubernetes vulnerability:

“In all Kubernetes versions prior to v1.10.11, v1.11.5, and v1.12.3, incorrect handling of error responses to proxied upgrade requests in the kube-apiserver allowed specially crafted requests to establish a connection through the Kubernetes API server to backend servers, then send arbitrary requests over the same connection directly to the backend, authenticated with the Kubernetes API server’s TLS credentials used to establish the backend connection.”

The default configuration for a Kubernetes API server’s Transport Layer Security (TLS) credentials grants all users (authenticated and unauthenticated) permission to perform discovery API calls that result in this escalation.  As a result, in too many implementations, anyone who knows about this hole can take command of a targeted Kubernetes cluster. Game over.

Red Hat is quoted as describing the situation as: “The privilege escalation flaw makes it possible for any user to gain full administrator privileges on any compute node being run in a Kubernetes pod. This is a big deal. Not only can this actor steal sensitive data or inject malicious code, but they can also bring down production applications and services from within an organization’s firewall.”
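Triage starts with knowing where each cluster sits relative to the patched releases (v1.10.11, v1.11.5, v1.12.3). A minimal sketch in Python, assuming kubectl is installed and the current context points at the cluster in question:

import json
import re
import subprocess

# First patched releases named in the Kubernetes announcement for CVE-2018-1002105.
PATCHED = {10: 11, 11: 5, 12: 3}  # 1.<minor> -> minimum patched level

def server_version():
    """Ask kubectl for the API server version (assumes a configured kubectl)."""
    out = subprocess.run(
        ["kubectl", "version", "--output=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    git_version = json.loads(out)["serverVersion"]["gitVersion"]  # e.g. 'v1.12.2'
    _major, minor, patch = git_version.lstrip("v").split(".")[:3]
    return int(minor), int(re.split(r"[-+]", patch)[0])

if __name__ == "__main__":
    minor, patch = server_version()
    needed = PATCHED.get(minor)
    if needed is None:
        print(f"1.{minor}.{patch}: not on a 1.10/1.11/1.12 line - check the release notes")
    elif patch >= needed:
        print(f"1.{minor}.{patch}: at or above the patched release")
    else:
        print(f"1.{minor}.{patch}: below 1.{minor}.{needed} - exposed to CVE-2018-1002105")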

This level of criticality is a reminder about how important it is to engineer-in the ability to perform on-demand Kubernetes infrastructure upgrades during normal business operations. In situations like the one described above, these upgrades must occur without material impact on business operations.  In real business infrastructure that is a serious engineering challenge.

Today is not the day to be asking “how will we upgrade all this infrastructure plumbing in real-time, under real business loads?”

The recent Kubernetes vulnerability is a reminder of how complex the attack surface of every global financial services enterprise is. With that complexity comes a material obligation to understand your implementations and their operations under expected conditions — one of which is the occasional critical vulnerability that must be fixed immediately.

Oh joy.

REFERENCES:
Kubernetes Security Announcement – v1.10.11, v1.11.5, v1.12.3 released to address CVE-2018-1002105
https://groups.google.com/forum/#!topic/kubernetes-announce/GVllWCg6L88
CVE-2018-1002105 Detail
https://nvd.nist.gov/vuln/detail/CVE-2018-1002105
Kubernetes’ first major security hole discovered
https://www.zdnet.com/article/kubernetes-first-major-security-hole-discovered/
CVE-2018-1002105
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1002105


Facebook Identity Token Thefts Result in Breach

September 28, 2018

Facebook’s VP of Product Management, Guy Rosen, said today that a vulnerability in the company’s “View As” feature enabled attackers to steal users’ access tokens.  These tokens are presented to Facebook infrastructure in ways that allow users to remain authenticated and able to interact with their accounts over multiple sessions. Once in an attacker’s possession, these tokens would permit attackers to impersonate actual users and enable Facebook account takeover.  Facebook reported that the current risk mitigation is invalidating at least 40 million authentication tokens and temporarily turning off the “View As” feature while they “conduct a thorough security review.”  He went on:

“This attack exploited the complex interaction of multiple issues in our code. It stemmed from a change we made to our video uploading feature in July 2017, which impacted “View As.” The attackers not only needed to find this vulnerability and use it to get an access token, they then had to pivot from that account to others to steal more tokens.”

Mr. Rosen wrote that Facebook engineers learned of this vulnerability three days ago on Tuesday, September 25 and that almost 50 million Facebook accounts were affected.

Writing safe-enough user-facing software that must interact with a complex ecosystem of applications and APIs is a serious challenge for all of us.  In that context, this latest Facebook vulnerability should be a cautionary tale for all organizations implementing apps and APIs that incorporate persistent token-based authentication.  Could any of us market our way out of a situation where 40 or 50 million of our customers, marketers, and other partners were vulnerable to account takeover?

Too many architects and developers seem risk-inappropriately infatuated by pundit pronouncements about authentication fatigue and the “best practice” of extending any given authentication across multiple sessions using persistent tokens.  At our scale (a trillion dollars U.S. AUM each, give or take a few hundred billion), global financial services enterprises ought to be able to reason our way through the fog of happy-path chatter to engineer session management practices that meet the risk appetites of our various constituencies.
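For illustration, here is a hedged sketch of the alternative argued for above — short-lived, opaque session tokens held server-side so they can be expired or revoked in bulk. Everything in it (names, the fifteen-minute TTL, the in-memory store) is illustrative and says nothing about how Facebook’s tokens actually work.

import secrets
import time

SESSION_TTL_SECONDS = 15 * 60  # illustrative: a short session, not a months-long token

class SessionStore:
    """Opaque, server-side sessions that can be expired or revoked in bulk."""

    def __init__(self):
        self._sessions = {}  # token -> (user_id, issued_at)

    def issue(self, user_id: str) -> str:
        token = secrets.token_urlsafe(32)
        self._sessions[token] = (user_id, time.time())
        return token

    def validate(self, token: str):
        record = self._sessions.get(token)
        if record is None:
            return None
        user_id, issued_at = record
        if time.time() - issued_at > SESSION_TTL_SECONDS:
            del self._sessions[token]  # expired: force re-authentication
            return None
        return user_id

    def revoke_all(self):
        """The 'invalidate every outstanding token' response Facebook had to take."""
        self._sessions.clear()

if __name__ == "__main__":
    store = SessionStore()
    t = store.issue("user-123")
    print(store.validate(t))   # -> user-123
    store.revoke_all()
    print(store.validate(t))   # -> None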

REFERENCES:

“Security Update.” By Guy Rosen, VP of Product Management


Ransomware and My Cloud

December 10, 2017

I just reviewed descriptions of sample incidents associated with ransomware outlined in the “Top 10” review by Tripwire.

Ransomware attacks — malware that encrypts your data followed by the attacker attempting to extort money from you for the decryption secrets — are a non-trivial threat to most of us as individuals and all financial services enterprises.

Unfortunately for some, their corporate culture tends to trust workforce users with access to vast collections of structured and unstructured business information.  That ‘default to trust’ enlarges the potential impact of a ransomware attack.

As global Financial Services security professionals, we need to resist the urge to share unnecessarily.

We need to quickly detect and respond to malware attacks in order to constrain their scope and impacts.  Because almost every global Financial Services enterprise represents a complex ecosystem of related and, in some cases, dependent operations, detection may involve many layers, technologies, and activities.  It is not just mature access/privilege management, patching, anti-virus, security event monitoring, or threat intelligence alone.
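As one small, concrete example of such a detection layer — a toy sketch only, with paths and polling interval as assumptions rather than recommendations — sacrificial ‘canary’ files placed on widely shared volumes can provide an early, loud signal, since bulk encryption rarely spares them:

import hashlib
import time
from pathlib import Path

# Illustrative canary locations; real deployments would spread these across
# the shares that ransomware is most likely to touch first.
CANARIES = [Path("/shared/finance/.canary_001.docx"), Path("/shared/hr/.canary_002.xlsx")]

def fingerprint(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def watch(interval_seconds: int = 60) -> None:
    baseline = {p: fingerprint(p) for p in CANARIES if p.exists()}
    while True:
        time.sleep(interval_seconds)
        for path, original in baseline.items():
            if not path.exists() or fingerprint(path) != original:
                # A canary that changes or disappears is a loud early signal
                # that something is rewriting files it should never touch.
                print(f"ALERT: canary modified or removed: {path}")

if __name__ == "__main__":
    watch()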

All of us also need to ensure that we have a risk-relevant post-ransomware attack data recovery capability that is effective across all our various business operations.

So, does the cloud make me safe from ransomware attack?  No.  Simply trusting your cloud vendor (or their hype squad) on this score does not reach the level of global Financial Services due diligence.  It seems safe to assert that, for any given business process, the countless hardware, software, process, and human components that make up any cloud just make it harder to resist and to recover from a ransomware attack.  And under many circumstances, the presence of cloud infrastructure — by definition, managed by some other workforce using non-Financial-Services-grade endpoints — increases the probability of this family of malware attack.

REFERENCE:

“10 of the Most Significant Ransomware Attacks of 2017.” By David Bisson, 12-10-2017. https://www.tripwire.com/state-of-security/security-data-protection/cyber-security/10-significant-ransomware-attacks-2017/​

