Linux Live Distros – Possible ‘Clean’ VPN Client

January 27, 2009

I have long considered customized Linux live distributions as candidates for trusted-enough remote VPN client end-points.  The fact that we could possibly make a standardized, read-only operating system, and then use evidence of its presence as one layer of our risk management infrastructure, remains attractive.  In any enterprise with a relatively large and diverse, non-technical end-user base, the available distributions have not been a good match: they did not deal simply and transparently with the diversity of hardware and end-user skill-sets that characterizes the enterprise remote end-point environment.  So why think about trust and remote VPN client end-points?

It is possible that our efforts to rationalize permitting large numbers of Internet-connected Windows clients remote access to sensitive corporate information assets will not be sustainable.  Criminal enterprises have found more and more effective ways to earn profit via information theft.  Sometimes they sell services to others who want specific information, but the range of “business” models is beginning to parallel that of their targets.  Many models depend on social engineering, some on malicious software, and many more on a combination.  In any case, end users and vulnerable software are the primary targets.

In the financial services industry, most remote clients run one or another Windows operating system.  The expense and effort required to purchase, deploy, and then manage new protective technologies…  The complexity of, and resistance to, enforcing any sort of configuration requirements on contractors, out-sourcers, and business partners…  The difficulty of reliably patching large numbers of intermittently-attached, non-corporate assets running Windows…  All of these present extreme challenges.  One of the Network Access Control models may help, but their expense, and the investments required to prepare and maintain compliant infrastructure, seem to keep them off most short lists of “next steps.”

Two new live Linux distributions were released recently: Ubuntu 8.10 (Intrepid Ibex) and Knoppix 6.0.0 (Adriane 1.1).

Both of these CDs suggest to me that there may be near-term hope for using stripped-down or hardened re-spins of main-line live Linux distributions as a key technology foundation for more trusted populations of remote, Internet-connected VPN end-points.  Both distributions worked well and without fuss on my consumer-class Toshiba laptop and an 802.11 ABG wireless network.  Ubuntu, a monster that “owns” an increasing share of the Linux desktop, functioned flawlessly and its wireless network setup was quick and easy, but it consumed significant memory before performing any useful work (approaching 1GB of RAM).  Knoppix, always a live distribution, also functioned flawlessly, its wireless network setup was easy and intuitive, and it used roughly 40% of the memory Ubuntu did (about 388MB versus 968MB used).  I’ll summarize the comparison below:

Laptop Hardware =
Wireless Network: Intel Wireless WiFi Link 3945ABG
CPU: Intel(R) Core(TM)2 CPU T5200 @ 1.60GHz, stepping 06
L1 cache: 64K; L2 cache: 2048K

Ubuntu uses a modern Gnome desktop windowing environment; Knoppix 6.0.0 ships with the lighter LXDE desktop.

Ubuntu 8.10:

Boot time = 3:30 to a usable desktop
Memory (one Firefox browser with 1 tab open + one terminal)
ubuntu@ubuntu:~$ top -n 2
top - 02:25:53 up 1:04, 8 users, load average: 0.00, 0.02, 0.00
Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.5%us, 0.2%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 2063268k total, 968084k used, 1095184k free, 90164k buffers
Swap: 0k total, 0k used, 0k free, 507016k cached
Boot command line options =
ubuntu@ubuntu:$ cat /proc/cmdline
BOOT_IMAGE=/casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper initrd=/casper/initrd.gz quiet splash --
Linux version =
ubuntu@ubuntu:$cat /proc/version
Linux version 2.6.27-7-generic (buildd@rothera) (gcc version 4.3.2 (Ubuntu 4.3.2-1ubuntu10) ) #1 SMP Fri Oct 24 06:42:44 UTC 2008
ubuntu@ubuntu:$ cat /proc/version_signature
Ubuntu 2.6.27-7.14-generic
Modules =
ubuntu@ubuntu:$ cat /proc/modules | wc -l
Network setup =
System-->Preferences-->Network Configuration, then the Wireless tab, and "add" a new interface, and complete the simple configuration options.  It worked the first time, attaching to my Linksys device via 802.11 WiFi at 54Mb/s using WPA2 Personal security, and fetching an IP address at connection time.
Browser = Firefox version 3.0.3

Knoppix 6.0.0:

Boot time = 1:33 to a usable desktop
Memory (one Iceweasel browser with 1 tab open + one terminal)
knoppix@Microknoppix:~$ top -n 2
top - 12:37:04 up 1:24, 0 users, load average: 0.09, 0.12, 0.22
Tasks: 92 total, 1 running, 91 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.1%us, 0.3%sy, 0.0%ni, 97.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 2064732k total, 387560k used, 1677172k free, 4972k buffers
Swap: 0k total, 0k used, 0k free, 166504k cached
Boot options =
knoppix@Microknoppix:~$ cat /proc/cmdline
ramdisk_size=100000 lang=en vt.default_utf8=0 apm=power-off vga=791 initrd=minirt.gz nomce elevator=anticipatory quiet loglevel=0 pci=routeirq BOOT_IMAGE=linux lang=us
Linux version =
knoppix@Microknoppix:~$ cat /proc/version
Linux version 2.6.28 (knopper@Koffer) (gcc version 4.3.2 (Debian 4.3.2-1) ) #4 SMP PREEMPT Sat Jan 3 09:16:41 CET 2009
knoppix@Microknoppix:~$ cat /proc/version_signature
cat: /proc/version_signature: No such file or directory
Modules =
knoppix@Microknoppix:~$ cat /proc/modules | wc -l

Network setup:
Menu–>System Tools–>Wavelan Configuration, then click on each of the configuration components and enter the values required by your network: SSID (the utility lists the SSIDs it has been able to identify), Encryption (WPA/WPA2/WEP/etc.), and DHCP/Static IP address. If you have configured the variables in that order, the network setup attempts to acquire an IP address, and if successful, the NetworkManager Applet (ver. 0.7.0) will change from a “disconnected” status to displaying 4 bars showing the signal strength of your network connection.
Browser = Iceweasel (ver. 3.0.5)

Both these distributions have significant positive characteristics in the context of building a trusted remote, Internet-connected VPN client end-point.  Knoppix was faster and consumed fewer resources.  Ubuntu has a large following and a broad base of end-user-friendly documentation.

I am curious about your consideration of, or experience with, this technology as a customized re-spin serving as a remote-access end-point.

UPDATE February 22, 2009: Add Debian GNU/Linux 5.0 (Lenny) to the viable-candidates list.  It is a modern, polished desktop.  It boots in about the same time as Ubuntu 8.10.  I tested with the Live i386 KDE version.  There are also AMD64 builds, as well as versions that boot into Gnome.  Network setup was as simple as in the two distributions above.  All three ought to be seriously considered as remote-access platforms in these economically challenging times.

Debian GNU/Linux 5.0 i386 Live KDE (Lenny):

Boot time = 3:22 to a usable desktop
Memory (one Firefox browser with 1 tab open + one terminal)

user@debian:~$ top -n 2
top - 20:09:56 up 5 min, 7 users, load average: 0.98, 1.68, 0.83
Tasks: 108 total, 2 running, 106 sleeping, 0 stopped, 0 zombie
Cpu(s): 1.0%us, 0.0%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 2066792k total, 550920k used, 1515872k free, 68284k buffers
Swap: 0k total, 0k used, 0k free, 266816k cached

— References —

Live Distributions:

Ubuntu 8.10:

Knoppix 6.0.0:

Debian GNU/Linux 5.0:

“Successful” Worms Still Interrupt

January 24, 2009

“Conficker” or “Downadup,” the latest major computer worm, is reported to now have infected as many as 10 million computers worldwide.  Attacked systems may slow enterprise Microsoft Active Directory Domain infrastructure, lock out users, disable security product update services, and block access to security-related web sites.  Two major attack or replication vectors are network shares and USB storage devices.  Ineffective malicious software resistance can cost your business plenty in lost productivity and clean-up effort and expense.  This resistance requires careful MS Windows configuration settings, thorough and timely patching of all MS Windows software, security-conscious application development, and up-to-date anti-virus suite(s).

Microsoft Malware Protection Center describes Win32/Conficker as “a worm that infects other computers across a network by exploiting a vulnerability in the Windows Server service (SVCHOST.EXE). If the vulnerability is successfully exploited, it could allow remote code execution when file sharing is enabled. Depending on the specific variant, it may also spread via removable drives and by exploiting weak passwords. It disables several important system services and security products and downloads arbitrary files.”

“Conficker” or “Downadup” is also known by other names:

  • Worm:Win32/Conficker.A (Microsoft)
  • Worm:Win32/Conficker.B (Microsoft)
  • Trojan:Win32/Conficker!corrupt (Microsoft)
  • Crypt.AVL (AVG)
  • Mal/Conficker-A (Sophos)
  • Trojan.Win32.Pakes.lxf (F-Secure)
  • Trojan.Win32.Pakes.lxf (Kaspersky)
  • Trojan.Win32.Agent.bccs (Kaspersky)
  • Trojan-Downloader.Win32.Agent.aqfw (Kaspersky)
  • W32.Downadup (Symantec)
  • W32.Downadup.B (Symantec)
  • WORM_DOWNAD.A (Trend Micro)
  • Win32/Conficker.A (CA)
  • W32/Conficker.worm (McAfee)
  • TA08-297A (other)
  • CVE-2008-4250 (other)
  • VU827267 (other)
  • Confickr (other)

Microsoft has released a version of the Malicious Software Removal tool (MSRT) that can help remove variants of Win32/Conficker, as have other anti-virus vendors.



—————- Update —————

“Conficker Remains an Issue.”  29 March 2009

A couple of months ago we were all still writing about Conficker.  It seems it just will not die.
It is now so ever-present that it made the business section of the Des Moines Register (“Troublesome Internet worm set to change tactics,” page 9B).
Then tonight it was the headliner on “60 Minutes” (“The Conficker Worm: What Happens Next?”).
Antivirus, corporate anti-malware web proxies, and personal firewalls cost us all a fortune to purchase and manage.

I will be asking my Symantec representative about readiness and execution today, as well as their strategy for not getting into this position again…
Symantec’s piece on “60 Minutes” was heavy on technology glitz and soft talk about how difficult the problem is — but how well protected users could be.

I am curious, are you doing anything new to deal with Conficker?

— References —

“Computer worm called ‘real threat,’” by Karen Middleton, The News-Courier:

F-Secure, “Where is Downadup?”

Microsoft Malware Protection Center :

Current Microsoft Malicious Software Removal Tool: and a related blog entry:

— Added 03/29/2009 —

60 Minutes:

Criminal Botnet Exploitation Pattern

January 22, 2009

The January “Linux Format” carried an excellent interview with Ross Anderson, professor of security engineering at Cambridge University, one of the founders of security economics as an academic discipline, and author of “Security Engineering: A Guide to Building Dependable Distributed Systems.”  In response to a series of questions about software quality and hackers, Dr. Anderson briefly summarized his explanation of the life-cycle of a compromised host.

"In the criminal underworld, there's a set of separate economic forces that determine what the exploitation pattern will look like. What, for example, are the economics of running a botnet? Well, we know that when machines are captured, typically hackers do such high-value exploits as they can – keyloggers for bank data, and that sort of thing – and then they go down the food chain. Compromised machines may end up being used to send spam, and then once they're blacklisted by all the spam filters, they'll end up being used for distributed denial-of-service attacks."

Maybe this explanation of some of the connections between criminal economics and what is happening on the PC front would be useful in our attempts to continue funding desktop and server protection?

— References —

Linux Format:

Interview, Part 1:

“Security Engineering: A Guide to Building Dependable Distributed Systems.”

Remote Desktop Collaboration and Data Leakage

January 20, 2009

I had a question about “data leakage” and “web conferencing.”   While the capabilities of given systems vary, there appears to be potential for:

  • Remote Control resulting in inappropriate access to resources and information, system damage, service outage, information modification, etc. — once a remote control session is established, the remote “guest” generally acts under the permissions of the “host” user’s credentials, and when the host user has administrative rights, the potential for damage is elevated.
  • Unauthorized access to information — where the “authorized user” shares this access with one or more others via application sharing, and/or remote control.
  • Bulk information theft via the “recording” features of the conferencing service — while it may seem inefficient, the quality of remote desktop and application sharing is now good enough that recording sessions where material amounts of sensitive information pass across the host’s screen is a practical theft channel.
  • Inappropriate retention of discoverable business interactions — in some industries, strict control of records retention is a critical capability, and remotely-recorded information cannot be managed using most standard corporate records retention practices.
  • Unauthorized recording of application/desktop sharing, voice, and video sessions — regardless of the volume of sensitive information involved, sometimes it is simply inappropriate to permit recording of some information.

Under most circumstances, when the threat of loss appears to be greater than the benefit of supporting a given service, that service is disabled.  Because the marketplace for desktop collaboration services and technologies is rapidly evolving, simply blocking access to them is probably impractical in most organizations.  Because of the types of technologies and implementations involved, Data Leakage Prevention (DLP) platforms may not be much help either.  Logging metadata about sessions in your event correlation engine might be useful, but many of the products below appear to operate without emitting this type of information. What are you doing at your organization?

The list below includes desktop collaboration services and technologies that incorporate a range of capabilities.  From an engineering perspective, this is a diverse collection.

Candidate Desktop Collaboration Technologies and Services:
1.      Adobe: Acrobat Connect Pro [SaaS and on-premises]
2.      Adobe: ConnectNow [free SaaS]
3.      Cisco: WebEx [SaaS] (WebEx Meeting Center, WebEx MeetMeNow, and WebEx Pay-Per-Use)
4.      Cisco: Unified MeetingPlace [On-premises]
5.      IBM: Sametime Unyte [SaaS] (IBM acquired WebDialogs August 2007)
6.      IBM: Lotus Sametime [On-premises]
7.      Microsoft: Office Live Meeting [SaaS] (acquired PlaceWare April 2003)
8.      Microsoft Office Communications Server (OCS) [On-premises]
9.      Citrix: GoToMeeting [SaaS]
10.     Oracle: Beehive  [SaaS]
11.     iLinc Communications
12.     Elluminate
13.     Genesys Conferencing
14.     BeamYourScreen
15.     SMART Technologies
16.     Convenos
17.     DigitalMeeting
18.     iVocalize
19.     Netviewer
20.     RHUB
21.     Saba
22.     TeamViewer
23.     Vyew
24.     WebConCentral
25.     Yugma
26.     Zoho
27.     Avaya
28.     Nortel
29.     Alcatel-Lucent
30.     WebHuddle
31.     Dimdim
32.     Novell Kablink (formerly SiteScape)

— References —

Desktop collaboration product list from Burton Group: “Web Conferencing: Getting Green with Web Conferencing,” v 1.0, 23 January 2009, by Bill Pray and Mike Gotta.

SQL Injection Questions About Code?

January 14, 2009

For most of us, there are times in our careers when we need to verify the work of others.

I was recently asked a question about the vulnerability of a Java application to SQL injection attacks.

Remember that SQL injection is a specialized form of injection attack that takes advantage of the syntax of SQL to inject commands that can read or modify a database, or otherwise compromise the meaning of the original query. It occurs when input is ineffectively filtered or when user input is not strongly typed. Either way, the rogue SQL is “unexpectedly” executed. In some cases this can result in spectacular amounts of unauthorized data access. Because of the potential for financial loss, legal liability, and damage to your company’s reputation, resisting SQL injection attacks should be an important component of your standard development practices.

Under most circumstances in a Java application server environment, this translates into vigorous white-list input validation, PreparedStatements, parameterized queries, stored procedures, and employing the principle of least privilege when executing SQL against the target database.
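As a minimal sketch of the white-list half of that advice, assuming a hypothetical item-name field whose legitimate values are short alphanumeric strings (the class name and the character policy below are illustrative, not from any particular framework):

```java
import java.util.regex.Pattern;

// Hypothetical sketch of white-list input validation: accept only the
// characters a field legitimately needs, and reject everything else
// before the value gets anywhere near a SQL statement.
public class InputValidator {
    // Illustrative policy: item names are 1-32 letters, digits, or hyphens.
    private static final Pattern ITEM_NAME = Pattern.compile("^[A-Za-z0-9-]{1,32}$");

    public static boolean isValidItemName(String input) {
        return input != null && ITEM_NAME.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidItemName("TwelveInchRuler"));          // prints true
        System.out.println(isValidItemName("x'; DROP TABLE items; --")); // prints false
    }
}
```

Validation like this complements, rather than replaces, PreparedStatements: the bound parameter stops the injection, while the white list rejects obviously hostile input early.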

The PreparedStatement object is used to send SQL statements to the database. A PreparedStatement is a special type of statement derived from the more general class, Statement.

Under extreme time pressure, thorough security code reviews are often impractical. Assessment of a particular application for resistance to SQL injection may translate into verifying only that PreparedStatement objects are used and, where rational, parameterized queries are employed.

Check for “Statement” declarations that begin any line (ignoring white space). Use grep or your favorite text-analysis tooling. You don’t want to find any.
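A sketch of that check in Java, assuming source lines read from your code base (the regex is the same one you would hand to grep, `^\s*Statement\b`; the class and method names here are illustrative):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Hypothetical sketch of the line check described above: flag any source
// line that begins (ignoring leading whitespace) with the raw Statement type.
public class StatementScan {
    private static final Pattern RAW_STATEMENT = Pattern.compile("^\\s*Statement\\b");

    public static List<String> flag(List<String> sourceLines) {
        return sourceLines.stream()
                .filter(line -> RAW_STATEMENT.matcher(line).find())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> lines = List.of(
            "    Statement s = connection.createStatement();",
            "    PreparedStatement ps = connection.prepareStatement(sql);");
        // Only the raw Statement line is flagged; PreparedStatement is fine.
        System.out.println(flag(lines).size()); // prints 1
    }
}
```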

Verify that the application does not contain SQL handling like that outlined in code fragment 1 below:

Code Fragment 1

Statement updateSomething = connection.createStatement();
updateSomething.executeUpdate("UPDATE items SET quantity = " + userInput); // user input concatenated directly into the SQL text

Something like code fragment 2 below would be a signal, not proof, that this application might include some SQL injection resistance.

Code Fragment 2

PreparedStatement updateSomething = connection.prepareStatement(
        "UPDATE items SET quantity = ? WHERE name = ?");
updateSomething.setInt(1, 50);
updateSomething.setString(2, "TwelveInchRuler");
updateSomething.executeUpdate();

— References —

Injection vulnerabilities:

SQL injection vulnerabilities:

From Sun Microsystems:

Application Authentication Vulnerable?

January 13, 2009

Early last week, a young hacker discovered that Twitter permitted an unlimited number of failed login attempts.  He wrote a program to try lots of possible passwords on a given account, and eventually got in.  This technique for gaining unauthorized access to someone’s account is generally referred to as a brute force password attack.

Before he was finished, representatives of a broad cross-section of North American society were involved: President-Elect Barack Obama, CNN correspondent Rick Sanchez, Digg founder Kevin Rose, Britney Spears, and Miley Cyrus, as well as the official feeds for Facebook, CBS News, Fox News, and more.

Could this happen at your company?   No?  Think again.  Ask about the details, and be persistent.

Depending on the business use case, a broadly-implemented best practice is to have the login system respond to successful and unsuccessful attempts (a bad password or an invalid account) with equal delay, to display a generic message on failure, and to limit the number of unsuccessful attempts permitted on any given account (for example, locking the account after 3 consecutive failures).
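A minimal sketch of the lockout half of that practice, assuming an in-memory failure counter and a 3-attempt limit (the class and method names are hypothetical; a real system would persist the counters and wrap this logic in the equal-delay, generic-message behavior described above):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: count consecutive failures per account and lock
// the account once the limit is reached.
public class LoginThrottle {
    private static final int MAX_FAILURES = 3;
    private final Map<String, Integer> failures = new HashMap<>();

    public boolean isLocked(String account) {
        return failures.getOrDefault(account, 0) >= MAX_FAILURES;
    }

    // Returns true only for a valid attempt against an unlocked account.
    // Callers should still reply with equal delay and the same generic
    // message whether the account is unknown, locked, or the password is bad.
    public boolean attempt(String account, boolean credentialsValid) {
        if (isLocked(account)) {
            return false;             // locked accounts reject even valid credentials
        }
        if (credentialsValid) {
            failures.remove(account); // success resets the counter
            return true;
        }
        failures.merge(account, 1, Integer::sum);
        return false;
    }

    public static void main(String[] args) {
        LoginThrottle throttle = new LoginThrottle();
        throttle.attempt("demo", false);
        throttle.attempt("demo", false);
        throttle.attempt("demo", false);
        System.out.println(throttle.isLocked("demo"));      // prints true
        System.out.println(throttle.attempt("demo", true)); // prints false
    }
}
```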

Bruce Schneier wrote a short note about the Twitter incident, which generated a long collection of comments.

Will your authentication system support multiple authentication attempts within a very small period of time?  What if someone sends thousands of authentication attempts on an account in a second?  Or two within .005 seconds? Even security-centric software like SSH has been vulnerable to well-timed authentication attacks.

Take the time to review the discussion that follows Bruce Schneier’s blog entry.  Or have your authentication guru do the same.  Maybe we are not as good at logging our customers, partners, and employees into our systems as we thought?

— References —

“Weak Password Brings ‘Happiness’ to Twitter Hacker” by Kim Zetter


Twitter background:

“Bad Password Security at Twitter” Bruce Schneier

New Attack Surfaces to Defend

January 9, 2009

Dell, Lenovo, Asus, LG, and Hewlett Packard have all recently been shipping PCs that include a traditional Microsoft Windows operating system (OS) and a second OS.  Many models use DeviceVM’s “SplashTop.”

This move is an attempt to provide users with an option to boot more rapidly into an environment that includes only a subset of the applications and features of their primary OS.  Boot time for this “smaller” OS is supposed to be 30 seconds or less.

At the same time, embedded developer Lineo is challenging this field by demonstrating a “quick-start” Linux OS, called “Warp,” capable of booting in under 3 seconds on a 400MHz ARM11 CPU.  This feat includes running Xorg, twm, xlogo, plus three xterms.

This may aid worker productivity.  Its value will only increase as worker mobility increases and as more business is performed and delivered via web applications.  If most of their work can be performed in a browser, these fast-boot OSs may become workers’ platform of choice.

Security professionals, prepare for dealing with this new attack surface now.  This will require resources, budget, and time.  Don’t wait until your Purchasing department or a senior officer buys a few dozen or a few tens of thousands of these new PCs at your company.

All these alternative, quick-boot OSs run software, accept and emit inputs, and store some amount of data.  As a result, they will fall prey to all manner of attack.  It will take some effort to learn how to harden, manage, patch, monitor, and report on the status of these new OSs.

— References —

SplashTop on a number of PC vendors’ new platforms:

Lineo press release for Warp:

Corporate Trust Hammered In India Too

January 7, 2009

Individuals and teams across scores of corporations in the U.S. ignored the rules of business and economics that help hold society together, or re-wrote them for their own short-term self-interest.  Societies across the earth are now paying for that behavior in one way or another.

That moral disease was not limited to the U.S. though.

Satyam (Satyam Computer Services Ltd.) is a global high-tech services corporation based in India.  I have bumped into staff and activities of this company from time to time over the last five years.  Their founder and corporate Chairman B. Ramalinga Raju resigned yesterday after releasing a description of financial fraud that he had led or materially participated in over those same five years.  It will result in suffering far beyond the corporation’s shareholders.  It also highlights the need for effective risk governance and risk management.  Mr. Raju could not have carried out financial fraud of this scale alone.  PWC, one of Satyam’s primary auditors, should come under especially thorough review.  The current story, while still evolving, goes something like…

Satyam Chairman B. Ramalinga Raju resigned yesterday admitting to falsifying company accounts and inflating revenue and profit figures over several years.  He had been with the company for more than 20 years.  During that time Satyam grew from a few individuals to 53,000 employees, having 185 of the Fortune 500 companies as customers and operations in 66 countries.  He ended what was a public confession with: “I am now prepared to subject myself to the laws of the land and face the consequences thereof.” [ page 5]

Satyam had:
1. inflated its operating profit for the three months ended Sept. 30, 2008 from 610 million rupees to 6.49 billion rupees ($136 million);
2. inflated revenue from 21.12 billion rupees to 27 billion rupees;
3. reported an operating margin of 24% that was actually 3%;
4. reported a non-existent cash balance of 50.4 billion rupees;
5. reported non-existent accrued interest of 3.76 billion rupees;
6. reported an understated liability of 12.3 billion rupees;
7. reported a debtor position of 4.9 billion rupees (compared with 26.51 billion rupees reflected in its books). [ pages 1-3]

“The gap in the Balance Sheet has arisen purely on account of inflated profits over a period of the last several years (limited only to Satyam standalone, books of subsidiaries reflecting true performance).  What started as a marginal gap between actual operating profit and the one reflected on the books of accounts continued to grow over the years.  It has attained unmanageable proportions as the size of the company operations grew significantly (annualized revenue run rate of Rs. 11,276 crore in the September quarter, 2008 and official reserves of Rs. 8,392 crore).  The differential in the real profits and the one reflected in the books was further accentuated by the fact that the company had to carry additional resources and assets to justify higher level of operations — thereby significantly increasing the costs.” [ page 2] {“crore” refers to ten million, and is a unit in the Indian numbering system}

“Every attempt made to eliminate the gap failed.  As the promoters held a small percentage of equity, the concern was that poor performance would result in a take-over, thereby exposing the gap.  It was like riding a tiger, not knowing how to get off without getting eaten.” [ page 2]

Mr. Raju Recommended:
…quickly explore some merger opportunities.
…restatement of accounts.
[ page 4]

— References —
Yahoo blog:
WSJ Article:
Raw Documents:

The Hindu:
Satyam plunges to all-time low.
Raju quits Satyam; admits to financial wrong-doings.
Stocks Tumble.
Satyam plunges to all-time low; down nearly 70 pc.
Satyam irregularities referred to serious fraud probe agency.
Satyam case not to impact economy: Plan panel

Browser As Your Company’s Outer-Most Application Edge

January 6, 2009

Rich Internet Applications deliver increasing functionality, and with it, increasing amounts of sensitive information, out to end-users’ browsers.  Too often this is a browser and client-platform wasteland, without control or consistency. How can we protect our information assets and brand?

More and more regulated personal or health-related information, more valuable intellectual property, and more corporate secrets are reaching our browsers.  As more of our application infrastructure is extended into end-user browsers, demonstrating a threshold level of due diligence is getting more complicated.

Remember when the threshold seemed to be the presence of a top-tier firewall at your Internet perimeter?  Or when a DMZ was enough?  Then hardened web servers, SSL encryption, infrastructure to provide increasingly sophisticated authentication schemes and session management, and more…  The latest battle-ground has been the applications themselves.

Browse the resources at the Open Web Application Security Project (OWASP), or google ‘web “application security” vulnerabilities 2008‘.  Application-layer vulnerabilities are consuming a greater percentage of the active Internet attack surface.  Microsoft recently reported that 90% of vulnerabilities discovered by researchers were in applications.  They also report that nearly 50% of all vulnerabilities are now rated HIGH severity or higher.

As we extend more of our application functionality, and more of our sensitive and valuable information, out of the enterprise into end-user browsers, how are we dealing with the risks associated with that environment?  The “Browser Security Handbook,” written and maintained by googler Michal Zalewski, is an extensive and exhaustive resource for your application architects, designers, coders, and quality assurance personnel, along with your application security engineers and assessment staff [more than 75 pages of lucid, often spartan text].  When control matters, the many differences across the many facets of browser technology need to be dealt with effectively. There is no magic to save us.  This is, and is going to continue to be, really hard work.  The additional challenge will be finding ways to wring competitive advantage and profits out of these investments in application security.

I believe that this handbook stands alone.  Contrary to what most of us would assume, much of this resource is simply excellent writing.  No waste, some beautiful sentences and paragraphs — even when writing about “Document Object Model” or “Browser-side Javascript.”  Michal Zalewski’s work is a joy to read.  Because this resource now exists, we all have one less excuse to avoid the inevitable slog through application security enhancements and upgrades, quality/vulnerability testing, and financing the whole endeavour.

— References —

Open Web Application Security Project.

Microsoft Security Intelligence Report volume 5 (January – June 2008)

“Browser Security Handbook.”

Why Should You Care about Information Security [Two]

January 5, 2009

People cheat.  Dan Ariely and fellow researchers at Harvard Business School, MIT, Princeton, UCLA, and Yale tempted a few thousand people to cheat in a set of controlled experiments involving timed math problems and pay for correct answers.  They found that, given the chance, about 50% of people cheated.  For those accountable for corporate resources — value — this should be an alarming percentage.

They also found that varying the risk of getting caught did not change the level of dishonesty, and that rationalizing and justifying dishonesty becomes substantially easier when cheating is one step removed from the final cash.  I work in the financial services industry, where individuals have access to relatively vast economic resources and the means to perform wire transfers, securities purchases and sales, and access to electronic markets worldwide.  Other individuals have access to data representing the personal information associated with millions of customers, along with technology to make that valuable data both “small” and “mobile.”  How can this happen?

We are currently living through a bitter economic trough that could possibly become a dark, heaving, transforming ocean of depression.  Some material measure of this was caused by layer after layer of rationalized and justified cheating.  Fancy new mortgages, bundled asset-backed securities, collateralised debt obligations, rating and hedge fund manipulations, and more.  Trillions of dollars and scores of valuable brands gone as the result of real humans’ conscious actions.  More than most civilians can imagine.  Engaging in immoral conduct and violating core values that maintain peace and support the healthy functioning of societies is still inappropriate, even in the absence of immediately-applicable laws, regulations, or policies that would explicitly forbid that behavior.

Only when researchers got participants to contemplate their own standards of honesty, by recalling the Ten Commandments or signing an honor code, did the willingness to cheat disappear.  In fact, it was eliminated entirely.  One lesson appears to be that our investments in security awareness, policy awareness, and formal periodic acknowledgement of employee understanding of these may need more focus.  We should all ensure that participation is mandatory across our corporations, top to bottom, and that it is totally unambiguous that participants need to understand what is going on and then testify to that fact.

— References —

“Conversation Starter — How Honest People Cheat.”
Dan Ariely, January 29, 2008, and in the Harvard Business Review, February 2008, page 24.
