First American Leaked 885 Million Customer Documents

May 28, 2019

First American is one of the largest title companies in the U.S. — a Fortune 500 firm with revenues of more than $5 billion and more than 15,000 employees. The company also hosted its core customer data behind an application that permitted anonymous users to access almost 900 million digitized customer documents.

Customer data associated with real estate transactions dating back to 2003 was vulnerable to theft by simply submitting a document-specific URL. The full spectrum of information associated with the mortgage settlement process was unprotected, including names, bank account numbers, Social Security numbers, copies of driver’s licenses, and more.

This appears to have been a failure of architecture, design, and implementation. Each document was available to anyone on the Internet via a URL ending in a document-specific nine-digit number. The numbers appear to have been sequential — covering records from 2003 through at least May 23, 2019. The documents may have been encrypted at some stage in their life-cycle, but documents accessed via the Internet-facing application were not encrypted when downloaded.
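This flaw pattern is a classic insecure direct object reference (IDOR): once an attacker sees one valid URL, sequential identifiers make every other document reachable by simple arithmetic. A minimal sketch of the problem and the standard mitigation — per-document tokens drawn from a large random space — is below; the URL shape and function names are hypothetical, used only for illustration:

```python
import secrets

def sequential_urls(known_id, count):
    # With sequential nine-digit identifiers, one known URL lets an
    # attacker enumerate neighboring documents by incrementing the number.
    # (Hypothetical URL shape for illustration only.)
    return [f"https://example.com/docs/{known_id + i:09d}" for i in range(count)]

def unguessable_url():
    # Mitigation sketch: a random, high-entropy per-document token, so
    # knowing one URL reveals nothing about any other document.
    return f"https://example.com/docs/{secrets.token_urlsafe(32)}"

print(sequential_urls(900000000, 3))
print(unguessable_url())
```

Note that unguessable tokens are only a partial fix — the server should still authenticate the user and authorize access to each specific document on every request.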

This was not a recent change. There is strong evidence that the vulnerability had existed since March 2017.

“First American Financial Corp. Leaked Hundreds of Millions of Title Insurance Records.”
Brian Krebs, May 24, 2019.

ML/AI Brings Promises and Risks

May 16, 2019

I received a message recently in which the leader observed that advances in ML/AI in a given field will “make us even more effective at our work.”  While there may be an opportunity there, it also brings another target for mature and advanced hostile activity.  In global financial services enterprises, we are all using these technologies.  Depending on the nature of your ML/AI application, attack-influenced outputs could have serious negative consequences.  Abuse of ML/AI has been a thing for quite a while: machine learning as implemented is vulnerable to ‘wild patterns,’ also known as ‘adversarial machine learning.’

Modern technologies based on pattern recognition, machine learning and data-driven artificial intelligence have been fooled by carefully-perturbed inputs into delivering misleading outputs.  Here is a useful set of illustrations.  Battista Biggio & Fabio Roli published a more thorough history of this topic here.  If you still can’t picture how this might happen today, see a story in the news yesterday.
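To make “carefully-perturbed inputs” concrete, here is a minimal sketch of one of the best-known evasion techniques, the Fast Gradient Sign Method (FGSM), applied to a toy logistic classifier. The weights and input values are invented for illustration; the point is only that a small, targeted nudge in the direction of the loss gradient flips the model’s prediction:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    # FGSM: perturb the input in the sign direction of the gradient of the
    # cross-entropy loss w.r.t. x. For a logistic model that gradient is
    # (p - y) * w, where p is the predicted probability.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical linear classifier and a point it classifies correctly.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])   # score = 0.8 -> predicted class 1
y = 1.0                    # true label

x_adv = fgsm(x, y, w, b, eps=0.5)
print(sigmoid(w @ x + b) > 0.5)      # original prediction: class 1 (correct)
print(sigmoid(w @ x_adv + b) > 0.5)  # perturbed prediction: flips to class 0
```

Real attacks against deep models work the same way, just with gradients computed through the whole network — which is why the defensive tooling linked below matters.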

If you are using any ML/AI-enabled, security-related platforms or services, you may want to invest in a quick read of the two papers linked above.  If you are actively involved in engineering the use of such a platform, you should consider reviewing the resources from IBM below:

IBM published a suite of ML & classifier attacks and defensive methods, and some additional work on detection, and it remains an active project: https://github.com/IBM/adversarial-robustness-toolbox & https://adversarial-robustness-toolbox.readthedocs.io/en/latest/index.html

There are useful links to supporting resources throughout these materials.

As these platforms & services become more important to your organization — and to the success of your customers — pay special attention to the details of your implementation.  We all know that hostile agents are good at manipulating humans, so we have invested in a broad spectrum of efforts to resist those attacks and to reduce their impact when they succeed.  We all need to approach ML/AI-enabled technologies and services in an analogous way — and ensure that our vendors & partners are doing the same.


There is a good high-level primer on this topic at:
“Breaking neural networks with adversarial attacks — Are the machine learning models we use intrinsically flawed?”
By Anant Jain, 02-09-2019.

There is an excellent history of this topic at:
“Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning.”
By Battista Biggio & Fabio Roli

This is not an abstract, niche concern.  See: “Police have used celebrity look-alikes, distorted images to boost facial-recognition results, research finds.”
By Drew Harwell, 05-16-2019

Your Home Tech is Eavesdropping on You

May 6, 2019

Microphone-equipped smart devices from Amazon, Apple, & Google keep copies of what they record — and they probably record more than you think.

Geoffrey Fowler writes about this topic after listening to four years of his Alexa archive. He found thousands of fragments of his home-life, from mundane daily activities to sensitive conversations.

This can also reach into business activities — Mr. Fowler’s Alexa recorded a friend conducting a business deal.  And those recordings are discoverable in court.

Remember that Alexa’s, Siri’s, and Google Assistant’s ‘wake words’ are imperfectly detected…  You may be recording more than you intended.

You can listen to your own Alexa archive here: http://www.amazon.com/alexaprivacy

Mr. Fowler also illustrates the eight steps required to delete Amazon’s recordings from your Echo speaker in the Alexa app.

Other stuff around your home or workplace may also be recording your activities:

  • Your Nest thermostat (by Google) reports back to its servers in 15-minute increments about not only the climate in your house but also whether there’s anyone moving around (as determined by a presence sensor used to trigger the heat).
  • Your Philips Hue-connected lights track every time they’re switched on and off — data the company keeps forever if you connect to its cloud service (which is required to operate them with Alexa or Assistant).
  • Your Chamberlain MyQ garage opener lets the company keep — indefinitely — a record of every time your door opens or closes.
  • Your Sonos speakers track what albums, playlists or stations you’ve listened to, and when you press play, pause, skip or pump up the volume.
  • And the list goes on…


“Alexa has been eavesdropping on you this whole time.”
By Geoffrey A. Fowler, 05-06-2019

You can tell:
Amazon to delete the data it already has collected at: http://www.amazon.com/alexaprivacy
Google to change its Assistant recording settings at: https://myaccount.google.com/activitycontrols/audio
Apple to stop keeping Siri recordings: not possible — Apple doesn’t give you the ability to opt out of storing recordings of your audio.
