Android Endpoints Still Rule

October 30, 2021

Five and a half years ago I wrote about how wildly inaccurate assumptions about Apple market share affected software development decisions as well as ‘downstream’ customers and partners: https://completosec.wordpress.com/2016/04/20/recognize-the-fact-of-android-endpoints/

I continue to bump into this attitude about Apple mobile devices and it remains cringe-worthy. Because it happens so often, I assume other North American risk management professionals experience it as well. How can this be? I understand that iOS leads Android by roughly 15 points in the U.S. market, but the more than 7 billion people living outside the U.S. represent a huge and growing business opportunity for most global financial services enterprises. Discounting Android endpoints leaves a lot of opportunity unexplored.

In the last 5.5 years, global iOS market share has increased by roughly 10 points and Android by roughly 9 points, leaving them at 28.23% (iOS) and 71.08% (Android) as of October 2021.

If your business is attempting to expand in India or more broadly in Asia, or in South America, iOS market-share is smaller.

India (https://gs.statcounter.com/os-market-share/mobile/india):
Android == 96.03%
iOS == 3.16%
Asia (https://gs.statcounter.com/os-market-share/mobile/asia):
Android == 83.21%
iOS == 16.04%
South America (https://gs.statcounter.com/os-market-share/mobile/south-america):
Android == 88.23%
iOS == 11.46%

 As in 2016, I had to get that out of my system…


New Apps Do Not Have to Include Additional Risks

March 16, 2021

Development team members often ask narrow questions about vulnerabilities and mitigation options that are really about defining the scope of their risk management responsibilities.

Their concern is justified. Data breaches exposed 36 billion records in the first half of 2020, making it the “worst year on record” according to Risk Based Security. In 2020, web applications were involved in 43% of breaches, and 72% of breaches involved large business victims. Data breaches can erode customer/partner/investor affinity for your brand, they are expensive, and they may impact your compliance posture.

In that context, ‘getting application security right’ is an important mission — one that is foundational for any global financial services enterprise.

Developers often focus on features that seem, because of legacy momentum, to be well understood. In my experience, development teams working in a technology environment new to them tend to equate security with authentication and authorization features. For many, enterprise infrastructure performed both of those functions in the past, and too many developers coded as if “if you reached ‘here,’ you were already authenticated and authorized.” In common use cases today, many authentication and authorization challenges have become part of a given development team’s feature backlog. Unfortunately, they also tend to become the whole of the team’s security efforts.

Rapidly evolving end user expectations drive a lot of development teams to take on new technologies and methodologies. This results in lots of new decisions about how to perform what used to be routine security tasks — authentication (SSO), authorization, data handling, logging & alerting, and more. Too often only authentication & authorization make the cut during backlog grooming/prioritization. How does a large corporation with multiple lines of business and numerous subsidiaries deal responsibly with this type of change?

I don’t know. But I am certain that ignoring the situation or creating incentive structures that maximize technology diversity are counterproductive.

Regardless of whether a given interaction is stateless or associated with an established session, the frequency of endpoint and server compromise means that, for many business use cases, we are obligated to understand more about the current security context than the presented token(s) — representing a valid authenticated and authorized user — can offer.
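A minimal sketch of that idea — a valid token treated as necessary but not sufficient, with the surrounding security context consulted before authorizing — might look like the following. All names, signals, and thresholds here are illustrative assumptions, not a prescribed design:

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    token_valid: bool          # token signature & expiry already verified upstream
    device_compliant: bool     # e.g., a device-posture check (illustrative signal)
    ip_risk_score: float       # 0.0 (clean) .. 1.0 (known-hostile), from a reputation feed
    transaction_amount: float  # business context used for step-up decisions

def authorize(ctx: RequestContext, amount_threshold: float = 10_000.0) -> str:
    """Return 'allow', 'step-up', or 'deny' based on the token AND surrounding context."""
    if not ctx.token_valid:
        return "deny"      # a valid token is necessary...
    if ctx.ip_risk_score > 0.8 or not ctx.device_compliant:
        return "deny"      # ...but not sufficient: hostile context overrides it
    if ctx.transaction_amount > amount_threshold:
        return "step-up"   # require re-authentication for high-value operations
    return "allow"
```

The design point is the ordering: token validity gates entry, but the decision is made from the full context of the request.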

Two recent (and ongoing, record-setting) attacks — SolarWinds & Microsoft Exchange — both involved undeserved assumptions of trust.

One thread of thinking about how to address the risks posed by writing and operating applications exposed to hostile actors involves ‘zero-trust.’ A recent NIST publication outlines one take on how this idea may influence security posture.
I don’t know if ‘zero trust’ will be required in our industry. After scanning NIST SP 800-207, I put it in my ‘be careful what you ask for’ bucket — see pages 15-16/6-7 for a quick overview of what compliance with ‘zero trust’ means.

I believe that every organization needs to build some level of consensus on this topic in order to optimize investments in application development & acquisition, as well as in the selection and operation of the infrastructure & services upon which those applications depend.

Hoping that decentralized and independent development team creativity alone will deliver an effective application security posture is risk-inappropriate.

REFERENCES:

“NIST Special Publication 800-207 — Zero Trust Architecture”
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207.pdf


SolarWinds-Enabled Hacks Widespread

December 14, 2020

Hostile actors associated with Russian cyber-security organizations used SolarWinds Orion technology to gain unauthorized, long-running, elevated-rights access throughout the U.S. government and in as many as hundreds of Fortune 500 corporations. This access may have included the Office of the President of the United States.

There is no reason for me to copy the operational details here. There are some good write-ups in the REFERENCES section below.

I just wanted to add to their content with this abuse case:

These hostile actors are getting a lot of attention for data & secrets exfiltration. In global financial services enterprises, we move trillions of dollars a day. These hostile actors were able to acquire elevated rights credentials and move laterally for months. They had enough time to figure out the cash management, account management, portfolio management, and back room accounting processes as well as the chains of approvers required to authorize the maintenance of external target accounts and authorizations for the movement of funds/securities. If so motivated, it seems likely they could have moved large amounts of the financial assets for which we are responsible to target accounts of their choosing. If this did not happen, financial services organizations dodged a big one.

If that is the case, it was only ‘luck’ that protected the financial services industry. Luck is a terrible risk management tool/technique. This hack is a loud signal that our resistance to and detection of attacks need to be a lot better than they are today. The FireEye and Krebs references below include the types of details that support changes to help fill some of that gap.

REFERENCES:

“U.S. Treasury, Commerce Depts. Hacked Through SolarWinds Compromise.” By Brian Krebs, 14 Dec 2020.
https://krebsonsecurity.com/2020/12/u-s-treasury-commerce-depts-hacked-through-solarwinds-compromise/

“Highly Evasive Attacker Leverages SolarWinds Supply Chain to Compromise Multiple Global Victims With SUNBURST Backdoor.” By FireEye, 13 December 2020.
https://www.fireeye.com/blog/threat-research/2020/12/evasive-attacker-leverages-solarwinds-supply-chain-compromises-with-sunburst-backdoor.html

“Russian government hackers are behind a broad espionage campaign that has compromised U.S. agencies, including Treasury and Commerce.” By Ellen Nakashima and Craig Timberg, 13 Dec 2020.
https://www.washingtonpost.com/national-security/russian-government-spies-are-behind-a-broad-hacking-campaign-that-has-breached-us-agencies-and-a-top-cyber-firm/2020/12/13/d5a53b88-3d7d-11eb-9453-fc36ba051781_story.html

“Russian Hackers Broke Into Federal Agencies, U.S. Officials Suspect,” By David E. Sanger, 13 Dec 2020.
https://www.nytimes.com/2020/12/13/us/politics/russian-hackers-us-government-treasury-commerce.html

“Suspected Russian hackers spied on U.S. Treasury emails – sources.” By Christopher Bing, 13 Dec 2020.
https://www.reuters.com/article/us-usa-cyber-treasury-exclsuive/exclusive-u-s-treasury-breached-by-hackers-backed-by-foreign-government-sources-idUSKBN28N0PG

17 Dec 2020 Addition:
“Sunburst Backdoor: A Deeper Look Into The SolarWinds’ Supply Chain Malware.” By Sergei Shevchenko, 15 Dec 2020 https://blog.prevasio.com/2020/12/sunburst-backdoor-deeper-look-into.html


Cryptocurrency Implementation Matters

November 16, 2020

Today there were a lot of references to an article by Phil Muncaster about a $2M theft from Akropolis, a Gibraltar-based cryptocurrency savings and borrowing company. You can see more details there.

A hostile party “exploited a bug in the deposit logic of its SavingsModule smart contract to make off with a little over two million in DAI virtual currency.”

Across the industry, and especially in the broader technical and non-technical press, there seems to be too much focus on some of the narrowly-defined cryptocurrency technology security characteristics, and not enough on the oceans of technology that surround that core. In this case some deposit implementation details (and possibly design and architecture details as well) were vulnerable to abuse. Cryptocurrency-centric industries are not special. They need to build up risk management muscle and maintain it across their entire attack surface.
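To make the point concrete, here is a deliberately simplified Python sketch of a general deposit-accounting failure mode — crediting a depositor for what a token contract claims to have transferred rather than what actually arrived. This illustrates the pattern only; it is not the actual Akropolis SavingsModule logic, and all names are hypothetical:

```python
class Vault:
    """Toy savings pool illustrating a deposit-accounting flaw (not the real contract)."""
    def __init__(self):
        self.pool = 0.0    # tokens actually held by the pool
        self.credits = {}  # per-user credited balances

    def deposit_trusting(self, user, token, amount):
        self.pool += token.transfer_in(amount)  # may move fewer tokens than claimed
        # FLAWED: credits the *claimed* amount, not what actually arrived
        self.credits[user] = self.credits.get(user, 0.0) + amount

    def deposit_measured(self, user, token, amount):
        # SAFER: credit only the measured change in real holdings
        before = self.pool
        self.pool += token.transfer_in(amount)
        self.credits[user] = self.credits.get(user, 0.0) + (self.pool - before)

class HonestToken:
    def transfer_in(self, amount):
        return amount      # transfers exactly what was requested

class FakeToken:
    def transfer_in(self, amount):
        return 0.0         # a hostile token that transfers nothing
```

With `deposit_trusting`, a hostile caller supplying a FakeToken is credited a balance backed by nothing — exactly the kind of surrounding-logic flaw that narrow focus on cryptographic primitives misses.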

Global financial services enterprises all have internal and external proponents of expanding operations into parts of the cryptocurrency ecosystem. Don’t let hypesters, hucksters & grifters dictate your practices. There is no cryptocurrency magic. None. Building risk-appropriate businesses integrated with cryptocurrency technologies requires as much or more risk management talent and rigor as any other aspect of the financial services business.

REFERENCES:

“Crypto Firm Offers $200,000 Bug Bounty to Hacker Who Stole $2m.” By Phil Muncaster, 16 Nov 2020
https://www.infosecurity-magazine.com/news/crypto-firm-200k-bug-bounty-hacker/

“Open Letter To Akropolis Delphi Hacker.” By Akropolis, 13 Nov 2020
https://medium.com/@akropolisio/open-letter-to-akropolis-delphi-hacker-91e883667cc

“Harvest Finance Places Bounty on Hacker.” By Sarah Coble, 26 Oct 2020
https://www.infosecurity-magazine.com/news/harvest-finance-places-bounty-on/


GPT-3 as Threat Actor

September 10, 2020

A robot wrote this entire article. Are you scared yet, human?
By GPT-3 (and inputs & editing by theGuardian editorial staff)
https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3

It is important to review the editor’s note at the end — and for many, it might be best to read that section first.

From my perspective, this article reads like a university student essay.

Researchers have pointed out the potential for using GPT-3 to generate “misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting.”

Given the power of this technology and the ability to ‘tune’ models for given missions, it seems like this will be (or is) part of a powerful tool kit for campaigns that include misleading or otherwise manipulating humans at scale. It is not magic (see: Bloviator and mindless in the references below), but could provide a big assist to those threat actors who need serious scale.

In global financial services, it might be time to factor this technology in as a threat actor — or an augmented threat actor (augmented, for now, by a supporting human). It is going to take time and effort to acquire/build the skills required for this mission. Better to start today.

REFERENCES:

“A robot wrote this entire article. Are you scared yet, human?”
By GPT-3 (and inputs & editing by theGuardian editorial staff)
https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3

Generative Pre-trained Transformer 3 (GPT-3)
https://en.wikipedia.org/wiki/GPT-3

“Language Models are Few-Shot Learners”. arXiv:2005.14165.
https://arxiv.org/abs/2005.14165

“Learning to summarize from human feedback.”
https://arxiv.org/abs/2009.01325 and the supporting code: https://github.com/openai/summarize-from-feedback

“Spinning Up”
An educational resource produced by OpenAI that makes it easier to learn about deep reinforcement learning (deep RL).
https://github.com/openai/spinningup

“GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about.” by Gary Marcus & Ernest Davis
https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/

“OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless.” by Will Douglas Heaven
https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/

Update on 28 Nov 2020:
“Meet GPT-3. It Has Learned to Code (and Blog and Argue). — The latest natural-language system generates tweets, pens poetry, summarizes emails, answers trivia questions, translates languages and even writes its own computer programs.” By Cade Metz, Nov. 24, 2020
https://www.nytimes.com/2020/11/24/science/artificial-intelligence-ai-gpt3.html


Felony Charges for Uber Security Chief

August 20, 2020

Corporate lawyers & leaders may wish to define-away data breaches rather than deal with the reporting and impacts that may follow. They may also take more aggressive measures to avoid revealing data breaches — both proven and potential/probable. That has always been an elevated risk behavior.

Corporate teams continue to expose critically-important access keys and tokens on source code repositories like Github, BitBucket, GitLab and others. These keys can be used to access corporate virtual servers, storage, databases and more in AWS, GCS, Azure, and beyond. Dealing with this scenario can be tricky and may involve difficult ethical issues. Because it remains a regular occurrence across global corporate enterprises, one would expect that leaders would be better prepared for this situation.
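One modest mitigation is routinely scanning repositories for strings shaped like credentials before (and after) they are pushed. As a hedged sketch: long-lived AWS access key IDs commonly begin with "AKIA" followed by 16 uppercase alphanumerics, so even a coarse regular-expression screen catches many accidental commits (real scanners use many more patterns plus entropy checks):

```python
import re

# Coarse screen for AWS access key IDs: "AKIA" + 16 uppercase alphanumerics.
# This is one pattern among many a real secret scanner would apply.
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_candidate_keys(text: str) -> list:
    """Return strings that look like AWS access key IDs in a blob of text."""
    return AWS_KEY_ID.findall(text)
```

For example, scanning a config line containing AWS's documented example key `AKIAIOSFODNN7EXAMPLE` would flag it; running such a check in a pre-commit hook stops many leaks before they reach a public repository.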

Revisiting an Uber breach might be informative…

Uber tried to re-define one of their breach incidents in 2016 through a bug bounty program — paying the hackers $100K and asking them to sign nondisclosure agreements. Brandon Glover and Vasile Mereacre later pleaded guilty to the hack in U.S. federal court.

Glover and Mereacre breached Uber’s security controls using keys they found available on GitHub — where Uber employees had left them unprotected. The keys allowed the attackers to access Uber’s Amazon web servers, where it stored more than 50 million customer and driver accounts, including a pile of driver’s license numbers. Uber told its workforce that it was temporarily shutting down access to GitHub in order to hunt for and fix any additional exposed access keys.

Joe Sullivan, the security chief Uber fired in 2017, was charged in U.S. District Court in San Francisco today with attempting to conceal a data breach that exposed the email addresses and phone numbers of 57 million drivers and passengers, as well as driver’s license numbers for about 600,000 drivers.

The felony charges were related to the failure to promptly disclose the breach to employee and consumer victims.

Joe Sullivan led cybersecurity efforts at Facebook before becoming Uber’s chief security officer in 2015, and is now CISO at Cloudflare.

These behaviors have hard costs. To date, Uber has paid out $150 million for its behavior surrounding the 2016 breach.

Is your leadership ready to deal effectively & ethically with scenarios like these?

REFERENCES:
“Former Uber Security Chief Charged With Concealing Hack.” By Kate Conger, NYT, 2020-08-20
https://www.nytimes.com/2020/08/20/technology/joe-sullivan-uber-charged-hack.html

“Inside Uber’s $100,000 Payment to a Hacker, and the Fallout.” By Nicole Perlroth and Mike Isaac, NYT, 2018-01-18
https://www.nytimes.com/2018/01/12/technology/uber-hacker-payment-100000.html


Leaked User Data Increases Company and Investor Risks

August 6, 2020

I just read some articles about 386 million user records leaked from at least 18 companies.

It appears that a group called ShinyHunters was selling this data earlier this year for ten bitcoins (around $100,000 U.S. at the time). The 386 million user records from 18 companies have since leaked and become available to the public for free.

At least 440,000 user records from ProctorU are included among the data leaked to the public in late July. Nina Dillon Britton and Chuyi Wang reported that this data included “usernames, unencrypted passwords, legal names and full residential addresses.” ProctorU administers some exams for a range of financial advisors and investment fiduciaries. In the financial services industry, this type of leakage increases the risk of a range of social engineering attacks — which puts both company and investor funds at enhanced risk.

ProctorU sells invasive services that ‘supervise’ students taking exams, allowing proctors to remotely control students’ computers and require students to show their rooms on camera, & more. The company promotes their services as “integrity-in-action,” stating that “Our job is to deter and prevent any breach of integrity.” They offer statistics about the large numbers of interventions they impose on test takers. They sell their services to companies or educational organizations as a partnership “preserving the value and credibility of an institution or testing program” by providing “a way for you to hold us accountable. That two-way accountability is what allows a client-vendor relationship to become a true partnership. As partners, we can do a lot more together to improve your exams and testing programs.”

The way they sell their services and their behavior in the face of this breach seem out of phase. In a pair of tweets on August 6th, they acknowledged a breach of some type, saying the exposed data was limited to 2014 and earlier. I was unable to find any follow-up communications from ProctorU that might clarify the nature and scope of the data exposed in the breach.

Financial services threat-hunters need to factor this type of breach into their monitoring and analysis.

REFERENCES:
“Hacker leaks 386 million user records from 18 companies for free.” By Lawrence Abrams, July 28, 2020
https://www.bleepingcomputer.com/news/security/hacker-leaks-386-million-user-records-from-18-companies-for-free/

“Hackers publish Australian universities’ ProctorU data.” by Nina Dillon Britton and Chuyi Wang, August 5, 2020
http://honisoit.com/2020/08/hackers-publish-australian-universities-proctoru-data/

proctoru breach acknowledgement:
https://twitter.com/ProctorU/status/1291450175815286786

proctoru ‘Integrity-in-Action’ page:
https://www.proctoru.com/integrity-in-action
proctoru ‘5 Ways…’ page:
https://www.proctoru.com/industry-news-and-notes/five-ways-data-can-improve-your-program


Apple iOS Vuln via Mail

April 26, 2020

ZecOps announced a collection of vulnerabilities in the iOS Mail app that enable hostile agents to run arbitrary code and delete messages, present since at least iOS 6…
So far, this has been described as a set of out-of-bounds write and heap-overflow vulnerabilities being used against targeted, high-value endpoints. My interpretation of their detailed write-up is that this qualifies as a remote, anonymous, arbitrary code execution vulnerability. As such — even if it must be targeted, and even if it may not be an ‘easy’ attack — because global financial services organizations are targeted by determined adversaries, we need to take it seriously.

Apple responded by rejecting the idea that this represented an elevated risk for consumers because their security architecture was not violated and they found no evidence of impact to their customers — while engineering a fix that will be rolled out soon. Is it time to factor this elevated risk behavior (reject-but-fix) into our threat models?

The ZecOps headline was:

“The attack’s scope consists of sending a specially crafted email to a victim’s mailbox enabling it to trigger the vulnerability in the context of iOS MobileMail application on iOS 12 or maild on iOS 13. Based on ZecOps Research and Threat Intelligence, we surmise with high confidence that these vulnerabilities – in particular, the remote heap overflow – are widely exploited in the wild in targeted attacks by an advanced threat operator(s).”

For global financial services enterprises, the presence of hundreds of billions, even trillions of dollars in one or another digital form seems to make this risk rise to the level of relevance. This is especially true because of the effectiveness of Apple’s marketing techniques across broad categories of roles expected to populate our organizations — i.e., our staff and leaders often use Apple devices.

On one front “Apple’s product security and the engineering team delivered a beta patch (to ZecOps) to block these vulnerabilities from further abuse once deployed to GA.”

On another front, Apple publicly rejected ZecOps’ claims about finding evidence of the exploit being used, saying the issues “are insufficient to bypass iPhone and iPad security protections, and we have found no evidence they were used against customers.” Read carefully — and in the context of potential future legal action or sales headwinds — this assertion does not inspire confidence that the vulnerabilities were not real and exploitable as described; it says only that Apple rejects some narrowly-crafted subset of ZecOps’ announcement/analysis and that it still stands behind the effectiveness of some subset of the iOS architecture.

Apple’s full statement:

“Apple takes all reports of security threats seriously. We have thoroughly investigated the researcher’s report and, based on the information provided, have concluded these issues do not pose an immediate risk to our users. The researcher identified three issues in Mail, but alone they are insufficient to bypass iPhone and iPad security protections, and we have found no evidence they were used against customers. These potential issues will be addressed in a software update soon. We value our collaboration with security researchers to help keep our users safe and will be crediting the researcher for their assistance.”

The Apple echo-chamber kicked in to support the rejection in its most comprehensive and positive interpretation…

ZecOps’ summary of their findings includes (quoted):

  • The vulnerability allows remote code execution capabilities and enables an attacker to remotely infect a device by sending emails that consume significant amount of memory
  • The vulnerability does not necessarily require a large email – a regular email which is able to consume enough RAM would be sufficient. There are many ways to achieve such resource exhaustion including RTF, multi-part, and other methods
  • Both vulnerabilities were triggered in-the-wild
  • The vulnerability can be triggered before the entire email is downloaded, hence the email content won’t necessarily remain on the device
  • We are not dismissing the possibility that attackers may have deleted remaining emails following a successful attack
  • Vulnerability trigger on iOS 13: Unassisted (/zero-click) attacks on iOS 13 when Mail application is opened in the background
  • Vulnerability trigger on iOS 12: The attack requires a click on the email. The attack will be triggered before rendering the content. The user won’t notice anything anomalous in the email itself
  • Unassisted attacks on iOS 12 can be triggered (aka zero click) if the attacker controls the mail server
  • The vulnerabilities exist at least since iOS 6 – (issue date: September 2012) – when iPhone 5 was released
  • The earliest triggers we have observed in the wild were on iOS 11.2.2 in January 2018

Like any large-scale software vendor, Apple fixes a lot of bugs and flaws. I am not highlighting that as an issue.  A certain amount of bugs & flaws are expected in large scale development efforts.  I think that it is important to keep in mind that iOS devices are regularly found in use in safety and critical infrastructure operations, increasing the importance of managing the software lifecycle in ways that minimize the number, scope and nature of bugs & flaws that make it into production.

Apple has a history of enthusiastically rejecting the announcement of some interesting and elevated risk vulnerabilities using narrowly crafted language that would be likely to stand up to legal challenge while concurrently rolling out fixes — which often seems like a pretty overt admission of a given vulnerability.
This behavior leaves me thinking that Apple has created a corporate culture that impairs its ability to do effective threat modeling.  From the outside, it increasingly appears that Apple’s iOS trust boundaries are expected to match the corporation’s marketing expressions of its control architecture — ‘the happy path,’ where formal iOS isolation boundaries matter only in the ways defined in public to consumers, and other I/O channels are defined out of what matters…  If I am even a little correct, that cultural characteristic needs to be recognized and incorporated into our risk management practices.

Given the scale of their profits, Apple has tremendous resources that could be devoted to attack surface definition, threat modeling, and operational verification of their assumptions about the same. Many types of OOB Write and Heap-Overflow bugs are good targets for discovery by fuzz testing as well. Until recently I would have assumed that by this point in iOS & iPhone/iPad maturation, Apple had automation in place to routinely, regularly & thoroughly fuzz obvious attack vectors like inbound email message flow in a variety of different ways and at great depth.
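As a sketch of what such fuzzing automation involves — using Python’s lenient email parser as a stand-in target, since a real harness would drive MobileMail/maild itself — mutation fuzzing takes a well-formed seed message, flips random bytes, and feeds the result to the parser while watching for crashes:

```python
import random
from email.parser import BytesParser  # stand-in parser; a real harness targets the mail client

# A small, well-formed multipart seed message to mutate.
SEED = (b"From: a@example.com\r\nTo: b@example.com\r\n"
        b"Content-Type: multipart/mixed; boundary=xyz\r\n\r\n"
        b"--xyz\r\nbody\r\n--xyz--\r\n")

def mutate(data: bytes, rng: random.Random, n_flips: int = 8) -> bytes:
    """Randomly overwrite a few bytes of the seed message."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def fuzz(iterations: int = 1000, seed: int = 0) -> int:
    """Feed mutated messages to the parser; count unexpected exceptions."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(iterations):
        try:
            BytesParser().parsebytes(mutate(SEED, rng))
        except Exception:
            crashes += 1  # in a real harness: save the crashing input for triage
    return crashes
```

Production fuzzing would add coverage guidance, structure-aware mutations (RTF, multipart nesting), and crash triage — but even this crude loop shows how cheaply an inbound-mail attack vector can be exercised at depth.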

This pattern of behavior has been exhibited long enough and consistently enough that it seems material for global financial services enterprises. So many of our corporations support doing large amounts of our business operations on iDevices. I think that we need to begin to factor this elevated risk behavior into our threat models. What do you think?

REFERENCES:
“You’ve Got (0-click) Mail!” By ZecOps Research Team, 04-20-2020
https://blog.zecops.com/vulnerabilities/youve-got-0-click-mail/

“Apple’s built-in iPhone mail app is vulnerable to hackers, research says.” By Reed Albergotti, 2020-04-23
https://www.washingtonpost.com/technology/2020/04/23/apple-hack-mail-iphone/

“Apple downplays iOS Mail app security flaw, says ‘no evidence’ of exploits — ‘These potential issues will be addressed in a software update soon’” By Jon Porter, 2020-04-24 https://www.theverge.com/2020/4/24/21234163/apple-ios-ipados-mail-app-security-flaw-statement-no-evidence-exploit


Breach May Indicate Quality Management Weaknesses

February 26, 2020

There is a new reason for concern about facial recognition technology, surveillance, and the error & bias inherent in their use.  The quality of the applications that make up these systems may be less well managed than one might assume or hope.

Clearview AI — a startup that scrapes social media platforms and has compiled billions of photos for facial recognition technology — reported that:

…an intruder “gained unauthorized access” to its list of customers, to the number of user accounts those customers had set up, and to the number of searches its customers have conducted.
…that there was “no compromise of Clearview’s systems or network.”

Tor Ekeland, an attorney for the company, said what I read as the equivalent of ‘trust us & don’t worry about it’:

“Security is Clearview’s top priority, unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security.”

The company sells its services to hundreds of law-enforcement agencies & others.  The New York Times reported that Clearview’s app is being used by police to identify victims of child sexual abuse.

In one of their services, a user uploads a photo of a person, and the application replies with links to Internet-accessible photos on platforms where Clearview scraped them.  In another (not yet a public product), it appears that there are interfaces to augmented reality devices so the user might be able to identify every person they saw.

So, what could go wrong?

Based on the available reporting and their lawyer’s statements, my assumptions include:

  • The company amasses billions of images of human faces along with metadata about each — which include, but are not limited to links to the original hosting location on-line.
  • The company sells their services to policing and security-related organizations world-wide.
  • Something went seriously wrong with the way that their application (and/or infrastructure) enforced access control — leading me to believe that the company has ineffective secure coding and/or secure code analysis practices.
  • The company states that we should accept their assertion that breaches of Clearview’s applications are just a part of doing business.

Application quality and management attitude/values matter.

Because Clearview’s decisions about which photos of given human faces are associated with other photos representing the same individual can be used for identifying criminal suspects, they have more or less weight in criminal investigations and the subsequent litigation & imprisonment…  If Clearview AI points an investigator to the wrong individual, the consequences can be extreme.  In that context — because we should not expect or tolerate unfounded or mistaken arrest or imprisonment — weak or otherwise ineffective application architecture, design, or implementation should be strongly resisted.  To me, nothing in Clearview’s public statements about the breach inspire confidence that they have that mandate-for-quality in their company’s DNA (you may read their statements differently).

Ineffective application development (security issues are one facet) can result in almost any kind of flaw — some of which could result in incidental or systemic errors matching photos.  This has happened before — as there have been examples of widely-used face-matching AI implementations being materially less accurate on images associated with a given race or gender.

There are other risks.  When used by some individuals (authorized or not), it seems reasonable to assume that Clearview’s system(s) will be used in ways that result in blackmail, coercion, or other types of attacks/threats.  This is not to imply that the company designed them for those purposes, just that they seem like a good fit.  (We tolerate the sale of handguns, axes and steak knives even though they can also play a key role in blackmail, coercion, or other attacks/threats.)  But in part because of its global reach and the ability of a hostile party to remain largely ‘unseen,’ attacks that use Clearview’s applications are materially different from those other weapons.

In global financial services enterprises we deal with constant oversight of our risk management practices.  The best teams seem to be organized in ways that enhance the probability of strong and effective attack resistance over time — tolerating the challenges of evolving features, technology, operations, and attacks.  In my experience, it is often relatively easy to identify this type of team…

That is one end of a broad continuum of quality management applicable to any industry.  Some teams exist elsewhere on that continuum, and it is not always easy to peg where that might be for given organizations.  In the public facts and company statements associated with the recent Clearview breach, it does not look like they occupy the location on that continuum that we would hope.

REFERENCES:

“Facial-Recognition Company That Works With Law Enforcement Says Entire Client List Was Stolen.” By Betsy Swan, Feb. 26, 2020
https://www.thedailybeast.com/clearview-ai-facial-recognition-company-that-works-with-law-enforcement-says-entire-client-list-was-stolen

“Clearview AI has billions of our photos. Its entire client list was just stolen.” By Jordan Valinsky, February 26
https://www.cnn.com/2020/02/26/tech/clearview-ai-hack/index.html

And for some broader background:

“The Secretive Company That Might End Privacy as We Know It.” By Kashmir Hill, Published Jan. 18, 2020 and Updated Feb. 10, 2020
https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
and https://www.nytimes.com/2020/02/10/podcasts/the-daily/facial-recognition-surveillance.html

“This man says he’s stockpiling billions of our photos.” By Donie O’Sullivan, Mon February 10, 2020
https://www.cnn.com/2020/02/10/tech/clearview-ai-ceo-hoan-ton-that/index.html


Facts that support a certain level of concern about security and communications products.

February 12, 2020

Read Greg Miller’s excellent history of Crypto AG:

“The intelligence coup of the century — For decades, the CIA read the encrypted communications of allies and adversaries.”
By Greg Miller, Feb. 11, 2020
https://www.washingtonpost.com/graphics/2020/world/national-security/cia-crypto-encryption-machines-espionage/

Some Financial Services organizations used encryption technology from Crypto AG as well.

(Miller’s work is supported by documents and research from the George Washington University National Security Archive, https://nsarchive.gwu.edu/briefing-book/chile-cyber-vault-intelligence-southern-cone/2020-02-11/cias-minerva-secret;
a 1997 outline of this history, https://cryptome.org/jya/nsa-sun.htm;
as well as https://wikispooks.com/wiki/Crypto_AG and more.)
And think about it in the context of Edward Snowden’s releases:
https://en.wikipedia.org/wiki/Edward_Snowden

What a complicated world.