Friday, December 23, 2011

DIY Lock Pick Set


Here's another very interesting post on lock-picking, in the same vein as making a padlock shim out of a soda can: How to make your own lock-picking tools from a windshield wiper. Of course, the skills that come with it, plus the ethics of knowing when it's acceptable to use them, are just as important (if not more so).

After you've picked that up, try a copy of Practical Lock Picking: A Physical Penetration Tester's Training Guide or The Complete Book of Locks and Locksmithing.

Tuesday, December 20, 2011

How to Shim Open a Padlock On the Cheap

This is a quick, cheap, and simple way to crack open a padlock with a homemade shim from a soda can. Not a new idea, but a well-described set of instructions.

When Professor Matt Blaze published "Safecracking for the Computer Scientist", he received all sorts of negative feedback. His paper described vulnerabilities that had been known to locksmiths for perhaps a century, yet were never fixed in later lock designs. Publishing this info does not harm the public, since criminals already know how to do this. But publishing allows consumers of these lock products to be wiser. Master Lock has certainly known about this vulnerability for decades. It's been floating around the internet at least since the dot-com boom days.

Wednesday, March 23, 2011

RSA SecurID Breach - Seed Record Threats

The following is a threat model that assumes the RSA SecurID seed records have been stolen by a sophisticated adversary, which is probably what happened.

But first, a word from our muse, Bruce Schneier, regarding what he titled back in 2005 as the "Failure of Two Factor Authentication":
Two-factor authentication isn't our savior. It won't defend against phishing. It's not going to prevent identity theft. It's not going to secure online accounts from fraudulent transactions. It solves the security problems we had ten years ago, not the security problems we have today.
[snip]
Here are two new active attacks we're starting to see:
  • Man-in-the-Middle attack. An attacker puts up a fake bank website and entices user to that website. User types in his password, and the attacker in turn uses it to access the bank's real website. Done right, the user will never realize that he isn't at the bank's website. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user's banking transactions while making his own transactions at the same time.
  • Trojan attack. Attacker gets Trojan installed on user's computer. When user logs into his bank's website, the attacker piggybacks on that session via the Trojan to make any fraudulent transaction he wants.
See how two-factor authentication doesn't solve anything? In the first case, the attacker can pass the ever-changing part of the password to the bank along with the never-changing part. And in the second case, the attacker is relying on the user to log in.
[Snip]
Two-factor authentication ... won't work for remote authentication over the Internet.
Bruce was absolutely right. We saw examples of that.

...

Now, let's put the pieces together: the active MITM attacks that Bruce described can be turned into an offline, passive attack.

In both cases above, the adversary has to act immediately to take over an authenticated session, using either the real-time MITM scenario or the trojan scenario. But let's assume that the "good guys" have by now read Bruce's article [in all reality, they probably haven't, hence their RSA SecurID investment] and have paid attention to the RSA jabber that says to watch for an increase in login attempts. In the examples Bruce describes, the adversary grabs the session and disconnects the valid user (possibly at the presentation layer, by taking over the session in malware that doesn't display what actions are occurring in the authenticated session).

However, let's assume the adversary lets the user keep his authenticated session. The adversary just monitors the credentials that are entered:
  1. The User ID, and
  2. The one-time-passcode (token's readout, a.k.a. "tokencode", plus the user's PIN)
"Relax," says the security administrator. "That's what these RSA SecurID thingies are for-- to make it meaningless when a bad guy eavesdrops on credentials."

Well, except in the case where the "bad guy" has all of the seed records for all RSA SecurID tokens ever sold.

Quoting from our article from yesterday:
Assume an adversary now has in their possession all of the seed records for all RSA SecurID tokens that are currently valid (which, based on the above and previous posts, seems very plausible). Assume they have sufficient computing hardware to mass-compute all of the tokencodes for all of the tokens represented by those seed records for a range of time (they obviously are well funded to earn the "Advanced Persistent Threat" name). This would be the output of the RSA SecurID algorithm taking all the future units of time as input, coupled with the serial numbers/seed records, to generate all of the output "hashes" for each RSA SecurID token that RSA has ever made. These mass-computed tokencodes for a given range of time would basically be one big rainbow table, a time-memory trade-off not too unlike using rainbow tables to crack password hashes.
[Snip]
Since tokencodes are only 6 digits long, and RSA has sold millions of tokens, the chances of a collision between one token's output and another's at a random point in time are significant. But phish the same user repeatedly (like asking for the "next tokencode") and the adversary can significantly narrow down which token belongs to which user, because different tokens must appear random and out of sync with each other (otherwise RSA SecurID would have much bigger problems). Do this selectively over a period of time against a high-value asset, and chances are the adversary's presence will go undetected, but the adversary will be able to determine exactly which token (serial number, i.e. seed record) belongs to the victim user.
So, now that the adversary has these "rainbow tables" of RSA SecurID tokencodes, and now that the active attacks Bruce described have morphed into a passive attack, all it will take is watching particular users create valid sessions-- maybe as little as a single attempt, depending upon the mathematics and randomness of the RSA SecurID token output, but more likely a handful of attempts. At that point, the adversary can impersonate the victim user at any point in the future.
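To make the shape of that attack concrete, here is a minimal sketch of the precompute-and-match idea, assuming the adversary holds every seed record. The real RSA SecurID algorithm is proprietary, so token_code() below is a stand-in keyed hash; the 60-second step, serial numbers, and seed values are invented for illustration.

```python
# Hypothetical sketch only: the real SecurID algorithm is proprietary, so
# token_code() is a stand-in keyed hash. Seeds, serials, and the 60-second
# step are invented for illustration.
import hashlib
import time

def token_code(seed: bytes, t: int, interval: int = 60) -> str:
    """Derive a 6-digit code from a seed and a time step (stand-in algorithm)."""
    step = t // interval
    digest = hashlib.sha256(seed + step.to_bytes(8, "big")).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

# "Stolen" seed records, keyed by token serial number (toy values).
seeds = {"000123456001": b"\x01" * 16, "000123456002": b"\x02" * 16}

def build_table(seeds, start, steps, interval=60):
    """Precompute every serial's code for a window of time steps (the "rainbow table")."""
    table = {}  # (time step, code) -> set of candidate serial numbers
    for serial, seed in seeds.items():
        for i in range(steps):
            t = start + i * interval
            table.setdefault((t // interval, token_code(seed, t, interval)), set()).add(serial)
    return table

table = build_table(seeds, int(time.time()), steps=24 * 60)  # one day of codes ahead

# A single observed (time, tokencode) pair narrows the field to the colliding serials.
now = int(time.time())
observed = token_code(seeds["000123456001"], now)
print(table.get((now // 60, observed), set()))  # {'000123456001'} plus any collisions
```

The table is precomputed offline, so the only online work per observed login is a dictionary lookup-- which is why a passive eavesdropper with the seed records needs so few observations.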

So, if RSA SecurID seed records are compromised, there is really not much advantage left in an RSA SecurID implementation. The threats are essentially the same as an adversary grabbing conventional passwords. The only difference is that a passive attack against compromised seed records may take multiple monitoring attempts, as opposed to a single event. But with simple malware, that won't be much more effort, especially against a high-value asset.

So given what we know, we can assume seed records were compromised. And given how little RSA is talking about it, we cannot really know how they are responding to it. Will they just distribute new tokens without compromised seed records, or will they do something much more significant? Based on what we know today, it makes more sense for an organization that is thinking about an RSA SecurID deployment to rely instead on conventional passwords (e.g. Microsoft Active Directory), and spend the extra money on monitoring for fraud and stronger identity validation for things like password resets.

Tuesday, March 22, 2011

More RSA SecurID Reactions

RSA released a new Customer FAQ regarding the RSA SecurID breach. Let's break it down ...
Customer FAQ
Incident Overview

1. What happened?

Recently, our security systems identified an extremely sophisticated cyber attack in progress, targeting our RSA business unit. We took a variety of aggressive measures against the threat to protect our customers and our business including further hardening our IT infrastructure and working closely with appropriate authorities.
Glad to see they didn't use the words "Advanced Persistent Threat" there.
2. What information was lost?

Our investigation to date has revealed that the attack resulted in certain information being extracted from RSA’s systems. Some of that information is related to RSA SecurID authentication products.
Hmmm. Seed Records possibly?
3. Why can’t you provide more details about the information that was extracted related to RSA SecurID technology?

Our customers’ security is our number one priority. We continue to provide our customers with all the information they need to assess their risk and ensure they are protected. Providing additional specific information about the nature of the attack on RSA or about certain elements of RSA SecurID design could enable others to try to compromise our customers’ RSA SecurID implementations.
[Emphasis added by Securology]
Whoa! Pause right there. Obviously they have allowed somebody from a Public/Customer Relations background to write this. This is not coming from anybody who *knows security*.

Like we mentioned previously, Kerckhoffs's Principle and Shannon's Maxim dictate that the DESIGN be open. These ideas are older than the Internet, and pretty much older than computing itself. So, disclosing the RSA SecurID DESIGN should have no adverse effect on customers with implementations unless the DESIGN is flawed to begin with.

Realistically, this is PR-speak for obfuscating details about what was stolen. All things point to seed records. Source code to on-premise implementations at customer sites shouldn't be affected, because those components aren't facing the Internet, and generally who cares about them? Yes, it's possible to hack the backend through things like XSS (think "Cross Site Printing"), but the state-of-the-art would be to compromise it from the outside using weaknesses found at RSA headquarters: seed records.
4. Does this event weaken my RSA SecurID solution against attacks?

RSA SecurID technology continues to be an effective authentication solution. To the best of our knowledge, whoever attacked RSA has certain information related to the RSA SecurID solution, but not enough to complete a successful attack without obtaining additional information that is only held by our customers. We have provided best practices so customers can strengthen the protection of the RSA SecurID information they hold. RSA SecurID technology is as effective as it was before against other attacks.
[Emphasis added by Securology.]
If it wasn't obvious that it's seed records yet, it should be screaming "SEED RECORDS" by this point. RSA SecurID is a two-factor authentication system, meaning you couple your RSA SecurID time-synchronized tokencode with a PIN/password. So, if the seed records are stolen, then the only way an adversary can impersonate you would be if he knew:
  1. Which RSA SecurID token is assigned to you (i.e. the serial number stored in the RSA SecurID database at the customer's site)
  2. Your PIN/password, which is the second factor (i.e. another piece of information stored at the customer's site).
More evidence that the RSA breach was seed records: the serial number and seed records give the adversary half the information needed, but the rest is stored on-site.
5. What constitutes a direct attack on an RSA SecurID customer?

To compromise any RSA SecurID deployment, an attacker needs to possess multiple pieces of information about the token, the customer, the individual users and their PINs. Some of this information is never held by RSA and is controlled only by the customer. In order to mount a successful direct attack, someone would need to have possession of all this information.


6. What constitutes a broader attack on an RSA SecurID customer?

To compromise any RSA SecurID deployment, the attacker needs to possess multiple pieces of information about the token, the customer, the individual users and their PINs. Some of this information is never held by RSA and is controlled only by the customer. In order to mount a successful direct attack, someone would need to have possession of all this information.

The broader attack we referenced most likely would be an indirect attack on a customer that uses a combination of technical and social engineering techniques to attempt to compromise all pieces of information about the token, the customer, the individual users and their PINs. Social engineering attacks typically target customers’ end users and help desks. Technical attacks typically target customers’ back end servers, networks and end user machines. Our prioritized remediation steps in the RSA SecurID Best Practices Guides are focused on strengthening your security against these potential broader attacks.
[Emphasis added by Securology]
This PR person is beginning to agree with us. Yes, the seed records are the hard part. If you are an RSA SecurID customer, assume the adversary has them, and now watch out for the pieces you control.
7. Have my SecurID token records been taken?
[Emphasis added by Securology.]
Yes, it's obvious they have.
For the security of our customers, we are not releasing any additional information about what was taken. It is more important to understand all the critical components of the RSA SecurID solution.

To compromise any RSA SecurID deployment, the attacker needs to possess multiple pieces of information about the token, the customer, the individual users and their PINs. Some of this information is never held by RSA and is controlled only by the customer. In order to mount a successful attack, someone would need to have possession of all this information.
This is beginning to look like a broken record.
8. Has RSA stopped manufacturing and/or distributing RSA SecurID tokens or other products?

As part of our standard operating procedures, while we further harden our environment some operations are interrupted. We expect to resume distribution soon and will share information on this when available.
Of course manufacturing/distribution has stopped. Of course anyone worried about security would have an SOP that says "stop shipping the crypto devices when the seed records are compromised." This is just more evidence that the seed records were compromised.
[...snipped for brevity...]
13. How can I monitor my deployment for unusual authentication activity?

To detect unusual authentication activity, the Authentication Manager logs should be monitored for abnormally high rates of failed authentications and/or “Next Tokencode Required” events. If these types of activities are detected, your organization should be prepared to identify the access point being used and shut them down.

The Authentication Manager Log Monitoring Guidelines has detailed descriptions of several additional events that your organization should consider monitoring.
[Emphasis added by Securology]
Warning about failed authentications and Next Tokencode events further indicates the seed records were stolen: those events would suggest the adversaries are submitting valid tokencodes with invalid PINs, or guessing tokencodes in order to determine a specific user's serial number (to match stolen seed records with a particular user).
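As a rough illustration of the monitoring RSA is recommending, here is a toy scan for those two event types. The log format and event names below are invented; the real Authentication Manager logs have their own schema, so treat this as a sketch of the idea rather than of the product.

```python
# Toy monitor for the kinds of events RSA suggests watching. The log line
# format and event names here are invented for illustration only.
from collections import Counter

SUSPICIOUS = {"AUTH_FAILED", "NEXT_TOKENCODE_REQUIRED"}
THRESHOLD = 5  # alert once a user crosses this many suspicious events

def scan(log_lines):
    counts = Counter()
    for line in log_lines:
        # assumed format: "<timestamp> <user> <event>"
        _, user, event = line.split(maxsplit=2)
        if event.strip() in SUSPICIOUS:
            counts[user] += 1
    return [user for user, n in counts.items() if n >= THRESHOLD]

sample = ["2011-03-22T10:00:01 alice AUTH_FAILED"] * 6
print(scan(sample))  # ['alice']
```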
14. How do I protect users and help desks against Social Engineering attacks such as targeted phishing?

Educate your users on a regular basis about how to avoid phishing attacks. Be sure to follow best practices and guidelines from sources such as the Anti-Phishing Working Group (APWG) at http://education.apwg.org/r/en/index.htm.

In addition, make sure your end users know the following:
  • They will never be asked for and should never provide their token serial numbers, tokencodes, PINs, passwords, etc.
Because giving that away is giving away the last parts of information that are "controlled only by the customer", i.e. the mapping of UserIDs to seed records via token serial numbers.
  • Do not enter tokencodes into links that you clicked in an email. Instead, type in the URL of the reputable site to which you want to authenticate
Because a phishing attack that captures a tokencode could be all that is needed to guess which serial number a user has: that moment in time could be recorded, and all seed records could be used in a parallel, offline attack to compute their tokencodes at that instant in time.

Assume an adversary now has in their possession all of the seed records for all RSA SecurID tokens that are currently valid (which, based on the above and previous posts, seems very plausible). Assume they have sufficient computing hardware to mass-compute all of the tokencodes for all of the tokens represented by those seed records for a range of time (they obviously are well funded to earn the "Advanced Persistent Threat" name). This would be the output of the RSA SecurID algorithm taking all the future units of time as input, coupled with the serial numbers/seed records, to generate all of the output "hashes" for each RSA SecurID token that RSA has ever made. These mass-computed tokencodes for a given range of time would basically be one big rainbow table, a time-memory trade-off not too unlike using rainbow tables to crack password hashes.

Then assume the adversaries can phish users into providing a tokencode into a false login prompt. Since tokencodes are only 6 digits long, and RSA has sold millions of tokens, the chances of a collision between one token's output and another's at a random point in time are significant. But phish the same user repeatedly (like asking for the "next tokencode") and the adversary can significantly narrow down which token belongs to which user, because different tokens must appear random and out of sync with each other (otherwise RSA SecurID would have much bigger problems). Do this selectively over a period of time against a high-value asset, and chances are the adversary's presence will go undetected, but the adversary will be able to determine exactly which token (serial number, i.e. seed record) belongs to the victim user. Or do it en masse quickly (think: social media) and it will harvest many user IDs mapped to serial numbers (seed records), which would be valuable on the black market-- especially for e-commerce banking applications.
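Under the same assumptions as the earlier sketch (stolen seed records and a stand-in for the proprietary algorithm), narrowing a victim's token from a few phished tokencodes is just set intersection over observations:

```python
# Hypothetical sketch: token_code() is the same stand-in keyed hash used above,
# NOT the real (proprietary) SecurID algorithm.
import hashlib

def token_code(seed: bytes, t: int, interval: int = 60) -> str:
    step = t // interval
    digest = hashlib.sha256(seed + step.to_bytes(8, "big")).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

def narrow_candidates(seeds, observations, interval=60):
    """seeds: {serial: seed bytes}; observations: phished (unix_time, code) pairs from one user."""
    remaining = set(seeds)
    for t, code in observations:
        remaining &= {s for s in remaining if token_code(seeds[s], t, interval) == code}
        if len(remaining) <= 1:
            break  # with ~10^6 possible codes, a handful of samples usually suffices
    return remaining
```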
It is also critical that your Help Desk Administrators verify the end user’s identity before performing any Help Desk operations on their behalf. Recommended actions include:

  • Call the end user back on a phone owned by the organization and on a number that is already stored in the system.
  • Send the user an email to a company email address. If possible, use encrypted mail.
  • Work with the employee's manager to verify the user's identity.
  • Verify the identity in person.
  • Use multiple open-ended questions from employee records (e.g., "Name one person in your group" or, "What is your badge number?"). Avoid yes/no questions.

Important: Be wary of using mobile phones for identity confirmation, even if they are owned by the company, as mobile phone numbers are often stored in locations that are vulnerable to tampering or social engineering.
[...snipped for brevity...]
The above is very decent advice, not unlike what we posted recently.


So, in summary: yeah, yeah, yeah, seed records were stolen. Little to no doubt about that now.

Friday, March 18, 2011

RSA SecurID Breach - Initial Reactions


RSA, the security division of EMC, was breached by a sophisticated adversary who stole something of value pertaining to RSA SecurID two factor authentication implementations. That much we know for certain.


It's probably also safe to say that RSA SecurID will be knocked at least a notch down from its place of unreasonably high esteem.


And it wouldn't hurt to take this as a reminder that there is no such thing as a perfectly secure system. Complexity wins every time and the adversary has the advantage.


First, note that the original Securology article entitled "Soft tokens aren't tokens at all" is still as valid as the day it was published over 3 years ago. CNET is reporting that RSA has sold 40 million hardware tokens and 250 million software tokens. That means that 86% of all RSA SecurID "tokens" (the "soft token" variety) are already wide open to all of the problems that an endpoint device has-- and more importantly, that 86% of the "two factor authentication" products sold and licensed by RSA are not really "two factor authentication" in the first place.


Second, we should note the principle from economics, so eloquently described by your mother as "don't put all your eggs in one basket": diversification. If your organization relies solely on RSA SecurID for security, you were on borrowed time to begin with. For those organizations, this event is just proof that "the emperor hath no clothes".


Third, the algorithm behind RSA SecurID is not publicly disclosed. This should be a red flag to anyone worth their salt in security. It is a direct violation of Kerckhoffs's Principle and Shannon's Maxim, which roughly state that only the encryption keys should be secret and that we should always assume an enemy knows (or can reverse engineer) the algorithm. There have been attempts in the past to reverse engineer the RSA SecurID algorithm, but those attempts are old and not necessarily the way the current version operates.


Fourth, it's probably the seed records that were stolen. The algorithm is essentially a black box taking as input a "seed record" and the current time, so either disclosure of the seed records or disclosure of the algorithm could be devastating to any system relying on RSA SecurID for authentication.

Hints that the "seed records" were stolen can be seen in this Network World article:
But there's already speculation that attackers gained some information about the "secret sauce" for RSA SecurID and its one-time password authentication mechanism, which could be tied to the serial numbers on tokens, says Phil Cox, principal consultant at Boston-based SystemExperts. RSA is emphasizing that customers make sure that anyone in their organizations using SecurID be careful in ensuring they don't give out serial numbers on secured tokens. RSA executives are busy today conducting mass briefings via dial-in for customers, says Cox. [emphasis added by Securology]
Suggesting to customers to keep serial numbers secret implies that seed records were indeed stolen.

When a customer deploys newly purchased tokens, the customer must import a file containing a digitally signed list of seed records associated with the serial numbers of the devices. From that point on, administrators assign a token by serial number, which is really just associating the seed record of the device with the user's future authentication attempts. Any time that user attempts to authenticate, the server takes the current time and the seed record and computes its own tokencode for comparison against the user's input. In fact, one known troubleshooting problem happens when the server and token drift out of time synchronization; NTP is usually the answer to that problem.
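A generic sketch of that server-side check looks something like the following. The real algorithm and its drift handling are proprietary; token_code() is the same kind of stand-in keyed hash used earlier, and the one-step drift window is an assumption for illustration.

```python
# Generic sketch of server-side verification: recompute the expected code from
# the stored seed and the current time, allowing a small clock-drift window.
# token_code() is a stand-in keyed hash, NOT the proprietary SecurID algorithm.
import hashlib
import time

def token_code(seed: bytes, t: int, interval: int = 60) -> str:
    step = t // interval
    digest = hashlib.sha256(seed + step.to_bytes(8, "big")).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

def verify(seed: bytes, submitted_code: str, drift_steps: int = 1, interval: int = 60) -> bool:
    """Accept the code if it matches the current time step or one within +/- drift_steps."""
    now = int(time.time())
    for offset in range(-drift_steps, drift_steps + 1):
        if token_code(seed, now + offset * interval, interval) == submitted_code:
            return True
    return False
```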

That server-side computation further strengthens the theory that seed records were stolen by the "advanced persistent threat", since any customer with a copy of the server-side components essentially has the algorithm, through common binary reversing techniques. The server's CPU must be able to compute the tokencode via the algorithm, therefore monitoring instructions as they enter the CPU will divulge the algorithm. This is not a new threat, and certainly nothing worthy of a new moniker. The most common example of reversing binaries is bypassing software licensing features-- it doesn't take a world-class threat to do that. What is much, much more likely is that RSA SecurID seed records were indeed stolen.

The only item of value that could be even more damaging might be the algorithm RSA uses to establish seed records and associate them with serial numbers. Assuming there is some repeatable process to that-- and it makes sense to believe there is, since that would make manufacturing those devices simpler-- then stealing that algorithm is like stealing all seed records: past, present, and future.

Likewise, even if source code is the item that was stolen, it's unlikely that any of that will translate into real attacks, since most RSA SecurID installations do not directly expose the RSA servers to the Internet. They're usually called upon by end-user-facing systems like VPNs or websites, and the Internet tier generally packages up the credentials and passes them along in a different protocol, like RADIUS. The only way a vulnerability in the stolen source code would become very valuable would be if an injection vulnerability were found in it, such as a malicious input passed in a username and password challenge that causes the back-end RSA SecurID systems to fail open, much like a SQL injection attack. It's possible, but it is much more probable that seed records were the stolen item of value.


How to Respond to the News
Lots of advice has been shared for how to handle this bad news. Most of it is good, but a couple items need a reality check.


RSA filed with the SEC and in their filing there is a copy of their customer support note on the issue. At the bottom of the form, is a list of suggestions:
  • We recommend customers increase their focus on security for social media applications and the use of those applications and websites by anyone with access to their critical networks.
  • We recommend customers enforce strong password and pin policies.
  • We recommend customers follow the rule of least privilege when assigning roles and responsibilities to security administrators.
  • We recommend customers re-educate employees on the importance of avoiding suspicious emails, and remind them not to provide user names or other credentials to anyone ...
  • We recommend customers pay special attention to security around their active directories, making full use of their SIEM products and also implementing two-factor authentication to control access to active directories.
  • We recommend customers watch closely for changes in user privilege levels and access rights using security monitoring technologies such as SIEM, and consider adding more levels of manual approval for those changes.
  • We recommend customers harden, closely monitor, and limit remote and physical access to infrastructure that is hosting critical security software.
  • We recommend customers examine their help desk practices for information leakage that could help an attacker perform a social engineering attack.
  • We recommend customers update their security products and the operating systems hosting them with the latest patches.
[emphasis added by Securology]

Unless RSA is sitting on some new way to shim into the Microsoft Active Directory (AD) authentication stacks (and they have not published it), there is no way to accomplish what they have stated there in bold. AD consists mainly of LDAP and Kerberos, with a sprinkling of a few other neat features (not going into those for brevity). LDAP/LDAPS (the secure SSL/TLS version) and Kerberos are both based on passwords as the secret used for authentication. They cannot simply be upgraded to use two-factor authentication.

Assuming RSA is suggesting installing the RSA SecurID agent for Windows on each Domain Controller in an AD forest, that still does not prevent access to making changes inside of AD Users & Computers. Any client must be able to talk Kerberos and LDAP to at least one Domain Controller for AD's basic interoperability to function, and the same firewall rules that allow those services will also allow authenticated and authorized users to browse and modify objects within the directory. What they're suggesting just doesn't seem to be possible, and it must have been written by somebody who doesn't understand the Microsoft Active Directory product line very well.


Securosis has a how-to-respond list on their blog:
Remember that SecurID is the second factor in a two-factor system… you aren’t stripped naked (unless you’re going through airport security). Assuming it’s completely useless now, here is what you can do:
  1. Don’t panic. We know almost nothing at this point, and thus all we can do is speculate. Until we know the attacker, what was lost, how SecurID was compromised (if it is), and the vector of a potential attack we can’t make an informed risk assessment.
  2. Talk to your RSA representative and pressure them for this information.
  3. Assume SecureID is no longer effective. Review passwords tied to SecurID accounts and make sure they are strong (if possible).
  4. If you are a high-value target, force a password change for any accounts with privileges that could be overly harmful (e.g. admins).
  5. Consider disabling accounts that don’t use a password or PIN.
  6. Set password attempt lockouts (3 tries to lock an account, or similar).
[Emphasis added by Securology]
To their first point, I think we can know what was lost: seed records. Without that, there would be no point in filing with the SEC and publicly disclosing that fact. Anybody can know their algorithm for computing one-time passwords by reversing the server side (see above). The only other component in the process is the current time, which is public information. The only private information is the seed record.

On point #4, if your organization is a high-value target, flagging RSA SecurID users to change the PINs or passwords associated with their accounts may not be a good idea: as the defense, you have to assume this sophisticated adversary already has your seed records and therefore could respond to the password-reset challenge himself. A better solution, if your organization is small, is to physically meet with high-value users and reset their credentials in person. If your organization is too large for that, then your only real option is to monitor user behavior for abnormalities-- which is where most of your true value should come from anyway.

This ties in well with their second suggestion-- pressuring your RSA contact for more information. In all likelihood, if our speculation that seed records were stolen is correct, then the only solution is to demand new RSA SecurID tokens from RSA to replace the ones you currently have. And if RSA is not quick to respond to that, it's for one of two reasons:
  1. This is going to financially hurt them in a very significant way and it's not easy to just mass produce 40 million tokens overnight, OR,
  2. RSA's algorithm for generating seed records and assigning them to token serial numbers is compromised, and they're going to need some R&D time to come up with a fix that doesn't break current customers who order new tokens under the new seed record generation scheme.

UPDATED TO ADD: Since all things indicate the seed records were compromised, and since Art Coviello's message is that no RSA customers should have reduced security as a result of the breach, that must mean RSA does not believe SecurID is worth the investment-- if losing the seed records does not reduce customers' security, then the tokens were not adding any security in the first place. After all, if RSA SecurID seed records were stolen, it effectively reduces any implementation to just a single factor: the PIN/passwords that are requested in addition to the tokencode. And who would buy all that infrastructure and hand out worthless digital keychains when they can get single-factor password authentication for super cheap with Microsoft's Active Directory?

Friday, February 18, 2011

Seven Types of Hackers

This could also be titled "Taxonomies are Difficult".
...


Roger Grimes at InfoWorld has a Seven Types of Hackers article. Taxonomies are generally tough to do, and I think Roger could improve upon his list a bit. Let's break it down ...



Malicious hacker No. 1: Cyber criminals
Professional criminals comprise the biggest group of malicious hackers, using malware and exploits to steal money. It doesn't matter how they do it, whether they're manipulating your bank account, using your credit card numbers, faking antivirus programs, or stealing your identity or passwords. Their motivation is fast, big financial gain.

The #1 problem I have with this label is that all of the activities in the list are typically "crimes" in most jurisdictions. Therefore, the people who participate in them are "criminals"-- but that is true of every other category on this list, so the label doesn't distinguish anything. And "Cyber" is an annoying word on many levels, though Joe Sixpack will associate the term with computers. I would have chosen "Petty Thieves" as a better label for this category.


Malicious hacker No. 2: Spammers and adware spreaders
Purveyors of spam and adware make their money through illegal advertising, either getting paid by a legitimate company for pushing business their way or by selling their own products. Cheap Viagra, anyone? Members of this group believe they are just "aggressive marketers." It helps them sleep at night.

I am not sure how "adware spreaders" works as a good taxonomy name, but I generally agree this is a legitimate category in and of itself.


Malicious hacker No. 3: Advanced persistent threat (APT) agents
Intruders engaging in APT-style attacks represent well-organized, well-funded groups -- often located in a "safe harbor" country -- and they're out to steal a company's intellectual property. They aren't out for quick financial gain like cyber criminals; they're in it for the long haul. Their dream assignment is to essentially duplicate their victim's best ideas and products in their own homeland, or to sell the information they've purloined to the highest bidder.

Malicious hacker No. 4: Corporate spies
Corporate spying is not new; it's just significantly easier to do, thanks to today's pervasive Internet connectivity. Corporate spies are usually interested in a particular piece of intellectual property or competitive information. They differ from APT agents in that they don't have to be located in a safe-harbor country. Corporate espionage groups aren't usually as organized as APT groups, and they are more focused on short- to midterm financial gains.

I find Category #3 ridiculously similar to Category #4. The only difference is whether they are freelance (#3) or directly on the payroll (#4). Either way, I'd collapse these two categories into a single category.


Malicious hacker No. 5: Hacktivists
Lots of hackers are motivated by political, religious, environmental, or other personal beliefs. They are usually content with embarrassing their opponents or defacing their websites, although they can slip into corporate-espionage mode if it means they can weaken the opponent. Think WikiLeaks.

Hacktivism may be a webism, but it's probably its own category-- political activism through criminal operations on computer systems.


Malicious hacker No. 6: Cyber warriors
Cyber warfare is a city-state against city-state exploitation with an endgame objective of disabling an opponent's military capability. Participants may operate as APT or corporate spies at times, but everything they learn is geared toward a specific military objective. The Stuxnet worm is a great example of this attack method.

I despise the term "cyber warrior" or its parent "cyber warfare". Call it what it is: militaries and their contractors attacking each other. Criminal operations involving computers for a militaristic goal. So a much better title: Military & Military Contractors.


Malicious hacker No. 7: Rogue hackers
There are hundreds of thousands of hackers who simply want to prove their skills, brag to friends, and are thrilled to engage in unauthorized activities. They may participate in other types of hacking (crimeware), but it isn't their only objective and motivation. These are the traditional stereotyped figures popularized by the 1983 film "War Games," hacking late at night, while drinking Mountain Dew and eating Doritos. These are the petty criminals of the cyber world. They're a nuisance, but they aren't about to disrupt the Internet and business as we know it -- unlike members of the other groups.

I'm also not a big fan of this label. It could just as easily be called "Internet Graffiti".


Taxonomies are difficult-- very difficult-- to lay down on paper (or bits). If I were grading this one, I'd give it about a B- or maybe a B. It's far from grade A material, but it has its entertainment value.

Sunday, January 30, 2011

Visualize Irony

What's the point of the heavy-duty chain and lock if one of the chain's links is just a zip-tie?

Thursday, July 1, 2010

Schneier vs PCI

Bruce Schneier just echoed what I wrote back in December 2008 that the encryption key management aspects of PCI 1.2 and earlier are flat-out, numb-skull retarded.

Here's an excerpt of what I said:
What the authors of the DSS were thinking was that PCI compliant merchants would implement cold war-esque missile silo techniques in which two military officers would each place a physical key into a control console and punch in their portion of the launch code sequence. This is technically possible to do with schemes like Adi Shamir's key splitting techniques. However, it rarely makes sense to do so.

Consider an automated e-commerce system. The notion of automation means it works on its own, without human interaction. If that e-commerce system needs to process or store credit card numbers, it will need to encrypt and decrypt them as transactions happen. In order to do those cryptographic functions, the software must have access to the encryption key. It makes no sense for the software to only have part of the key or to rely on a pair of humans to provide it a copy of the key. That defeats the point of automation.

If the pieces of the key have to be put together for each transaction, then a human would have to be involved with each transaction-- definitely not worth the expense! Not to mention an exploit of a vulnerability in the software could result in malicious software keeping a copy of the full key once it's unlocked anyway (because it's the software that does the crypto functions, not 2 people doing crypto in their heads or on pen and paper!).

If a pair of humans are only involved with the initial unlocking of the key, then the software gets a full copy of the key anyway. Any exploit of a vulnerability in the software could potentially read the key, because the key is in its running memory. So, on the one hand, there is no requirement for humans to be involved with each interaction, thus the e-commerce system can operate more cheaply than, say, a phone-order system or a brick-and-mortar retailer. However, each restart of the application software requires a set of 2 humans to be involved with getting the system back and online. Imagine the ideal low-overhead e-commerce retailer planning vacation schedules for its minimal staff around this PCI requirement! PCI essentially dictates that more staff must be hired! Or, that support staff that otherwise would NOT have access to a portion of the key (because they take level 1 calls or work in a different group) now must be trusted with a portion of it. More hands involved means more opportunity for collusion, which increases the risk by increasing the likelihood of an incident, which is NOT what the PCI folks are trying to accomplish!

The difference between a cold war missile silo and an e-commerce software application is the number of "secure" transactions each must have. Missile silos do not launch missiles at the rate of several hundred to several thousand an hour, but good e-commerce applications can take that many credit cards. When there are few (albeit more important) transactions like entering launch codes, it makes sense to require the attention of a couple different people.

So splitting the key such that an e-commerce software application cannot have the full key is stupid.
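For readers unfamiliar with the key-splitting scheme PCI seems to envision, here is a minimal Shamir-style sketch; the prime, threshold, and toy secret are illustrative, and this is not production crypto. The instructive part is the last line: reconstruct() hands the application the whole key back, which is exactly why splitting it buys an automated e-commerce system so little.

```python
# Minimal sketch of Shamir-style key splitting (the scheme PCI seems to envision).
# Parameters (prime, threshold, share count, toy secret) are illustrative only.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a toy secret

def split(secret: int, threshold: int, shares: int):
    """Split secret into `shares` points; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, shares + 1)]

def reconstruct(points):
    """Lagrange interpolation at x=0 -- note it yields the ENTIRE key."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(0xDEADBEEF, threshold=2, shares=2)  # two "launch officers"
assert reconstruct(shares) == 0xDEADBEEF           # ...but the app ends up holding the full key
```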
Here's an excerpt of what Bruce said:
Let's take a concrete example: credit card databases associated with websites. Those databases are not encrypted because it doesn't make any sense. The whole point of storing credit card numbers on a website is so it's accessible -- so each time I buy something, I don't have to type it in again. The website needs to dynamically query the database and retrieve the numbers, millions of times a day. If the database were encrypted, the website would need the key. But if the key were on the same network as the data, what would be the point of encrypting it? Access to the website equals access to the database in either case. Security is achieved by good access control on the website and database, not by encrypting the data.
It's nice to be validated from time to time, especially from the best.

Friday, May 21, 2010

Verisign Turns Yellow

On the heels of turning PGP Corp Yellow, now Verisign is turning Yellow as well: Symantec is acquiring Verisign, too.

These overpriced "security solutions" are going to go from bad to worse. I predict agile startups are going to crush them on their prices, since Symantec's goal is obviously to own the entire market with a one-size-fits-all approach, while some startups and smaller companies will probably better understand their customers' needs.

It's ironic how the PGP (distributed) model once fought strongly against the PKI (hierarchical, centralized) model. But now, thanks to deep pockets at Big Yellow, they'll be wearing the same uniform.

SSL and crypto are now commodities, so where are the commodity prices from PGP Corp, Verisign, and Symantec? Simple: they won't have them on their pricing lists.

I've ranted many times about both companies. PGP tries to sell you goods they admit won't solve the problems they're designed for (the "all bets are off when you lose physical control of the device" excuse). And Verisign tries to double-dip on premium "Extended Validation" SSL certs, ignoring their culpability in Certificate Authorities granting SSL certificates to frauds and phishers-- they want you to pay extra for their mistakes.

Do us all a favor and use open source or support their competitors who have true commodity prices.

Monday, March 29, 2010

SSL & Big Government. Where's Phil Zimmermann?

What an interesting year 2010 is already turning out to be in technology, politics, and life as we know it. More censorship battles are going on than ever before (e.g. Google vs. the Great Firewall of China), along with the possibility of governments further ramping up control over Internet traffic in their respective countries. Australia has content filters on all ISPs in the name of decency, but political dissident websites have slipped into the "indecent" categories. The UK and US are pushing harder to take control of private access to the Internet. Iran shuts down all Internet access within the country during elections. Now this: reports that governments are manipulating the hierarchical Certificate Authority model to eavesdrop on "secure" encrypted connections over the Internet-- and vendors are creating turn-key appliances to make it easy. Do "netizens" still have a Bill of Rights? Who's watching the watchers?

Enter exhibit A: "Has SSL become pointless?" An article on the plausibility of state-sponsored eavesdropping by political coercion of Certificate Authorities to produce duplicate (faked) SSL certificates for Big Brother devices.
In the draft of a research paper released today (PDF available here), Soghoian and Stamm tell the story of a recent security conference where at least one vendor touted its ability to be dropped seamlessly among a cluster of IP hosts, intercept traffic among the computers there, and echo the contents of that traffic using a tunneling protocol. The tool for this surveillance, marketed by an Arizona-based firm called Packet Forensics, purports to leverage man-in-the-middle attack strategies against SSL's underlying cryptographic protocol.
...
As the researchers report, in a conversation with the vendor's CEO, he confirmed that government agencies can compel certificate authorities (CAs) such as VeriSign to provide them with phony certificates that appear to be signed by legitimate root certificates.
...
The researchers have developed a threat model based on their discoveries, superimposing government agencies in the typical role of the malicious user. They call this model the compelled certificate creation attack. As Soghoian and Stamm write, "When compelling the assistance of a CA, the government agency can either require the CA to issue it a specific certificate for each Web site to be spoofed, or, more likely, the CA can be forced to issue an intermediate CA certificate that can then be re-used an infinite number of times by that government agency, without the knowledge or further assistance of the CA. In one hypothetical example of this attack, the US National Security Agency (NSA) can compel VeriSign to produce a valid certificate for the Commercial Bank of Dubai (whose actual certificate is issued by Etisalat, UAE), that can be used to perform an effective man-in-the-middle attack against users of all modern browsers."
There's more info from Wired on the subject as well.

All of this is calling for a return to our roots. Where's Phil Zimmermann when we need him now?

Phil created PGP (Pretty Good Privacy) during the political crypto export wars, creating the first implementation of the "web of trust" model, which is an alternative to the hierarchical model that Certificate Authorities use today in SSL Public Key Infrastructure (PKI). Firefox 3 already saw the introduction of some Web-of-Trust-like features for unsigned SSL certs. If you've ever browsed to an HTTPS site using a self-signed certificate, then you have probably seen the dialog box that asks if you would like to save the "exception" to trust that SSL certificate, which is very similar to how SSH works in the command line environment. Essentially, that's the basic premise behind the academic researchers' forthcoming "CertLock" Firefox add-on, extending the web-of-trust model to all SSL certs encountered and adding some decision support for which certificates to trust based on attribute/key changes.
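CertLock itself was not yet released when this was written, but the SSH-style behavior described above (remember the certificate you saw first, complain if it changes) is easy to sketch. The pin-store filename and the SHA-256 fingerprint choice below are illustrative assumptions, not how CertLock itself works.

```python
# Trust-on-first-use certificate pinning, in the spirit of the SSH known_hosts
# behavior described above (CertLock adds more decision support than this).
# The pin-store path and SHA-256 fingerprinting are illustrative choices.
import hashlib
import json
import ssl
from pathlib import Path

PIN_STORE = Path("cert_pins.json")

def fingerprint(host: str, port: int = 443) -> str:
    """Fetch the server's certificate and return a SHA-256 fingerprint of it."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def check(host: str) -> str:
    pins = json.loads(PIN_STORE.read_text()) if PIN_STORE.exists() else {}
    seen = fingerprint(host)
    if host not in pins:
        pins[host] = seen                      # first use: remember it
        PIN_STORE.write_text(json.dumps(pins))
        return "pinned on first use"
    if pins[host] != seen:
        return "WARNING: certificate changed since it was pinned"
    return "matches pinned certificate"

print(check("www.example.com"))
```

The point of the sketch is that the trust decision moves from a hundred-odd CAs to the user's own first encounter with the site, with a loud warning on any later change.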

In the hierarchical model we have today, a bunch of "authorities" tell us which SSL certificates to trust. One CA (Certificate Authority) could tell us a cert for somebank.com at IP address A.B.C.D is OK, while a second CA could assert that a completely different cert for somebank.com hosted at IP address E.F.G.H is also good. Who is the ultimate authority? You are. But you know that your grandmother may have a hard time telling which certs to trust, which is why this problem exists and exactly why the hierarchical model exists in the first place. In the Web-of-Trust model, there are no authorities. You trust companyA.com, and if companyA.com trusts companyB.com, you can automatically trust companyB.com, too (or not-- it's up to you). You build up links that represent people vouching for other people, just like real life. If you trust somebody who is not worthy of that trust, then bad things can happen, just like real life.

In the hierarchical model, you're basically outsourcing those trust decisions to third parties you've never met. You're asking all of them--at the same time-- to guarantee your banking connection is secure. Or your connection to Facebook is secure. Or your connection to a politically dissident web forum is secure. I repeat: you're asking every single CA, each of which is an organization of people that you have never met, to all make these decisions for you simultaneously. Does that sound crazy? You bet. What if, in the real world analogue of this, you outsourced to a collection of "authorities" which TV shows you should watch, which products you should buy, and which politicians should get your vote? [In the U.S. we may already have that with the Big Media corporations, but thank goodness for the Internet, er, wait, well, before we knew about governments abusing SSL certificates anyway.]

It's in this hierarchical model that governments can subvert the confidentiality of the communications. And if governments can do this at will by forcing Certificate Authorities within their jurisdiction to create fraudulent, duplicate certificates, what's going to stop the ISPs or snoops-for-hire that set up the intercepts from saving copies of pertinent data for themselves, outside of the original intent (regardless of its legal status in your home country)? Probably not much. Maybe an audit trail. Maybe. But likely even that is up for manipulation. After all, look at how poorly the Government Accountability Office ranks the IT systems of the various branches of the U.S. federal government-- many of them are receiving failing grades, yet they are still in operation. Can you trust them with your data?

My browser has over 100 Certificate Authority certificates in it by default. I know each cert probably represents a dozen or more people who can have a certificate issued from the CA, but even assuming it's only a single person per certificate, there certainly aren't 100 people out there I would trust with those aspects of my life. [If 100 doesn't seem that high, just count how many Facebook friends you have that you wouldn't really want to know {your credit card number, your plans for next Friday night, the way you voted in the last election, etc.}.]

Perhaps we've gone soft. Perhaps we find it a hassle to use PGP to encrypt messages sent through our favorite free webmail service. Perhaps we're trusting that somebody else is securing our information for us. Whatever it is, perhaps we should read Phil Zimmermann's original words, back when the fight for e-mail privacy was so vivid in our daily lives (before most Internet users could even spell "Internet"). Perhaps then we'll revive the fight for privacy in our web traffic as well and look to solutions like the forthcoming CertLock, or maybe a full Web-of-Trust SSL implementation built into each of our browsers, rather than leaving all of our security decisions up to so many semi-trustworthy and unknown Certificate Authorities. Back then, the "activist" in each one of us-- each security professional-- told people to use PGP to encrypt ALL email. Why? Because if you don't, the messages that are encrypted stand out, like you "have something to hide". It's nonsense whether you do or don't, but encrypting all the time doesn't reveal anything to traffic pattern analysis. Perhaps we should revert to that and be more vigilant in our CA selection.

The following are Phil Zimmermann's own words on why he created PGP (Pretty Good Privacy):

Why I Wrote PGP

Part of the Original 1991 PGP User's Guide (updated in 1999)
"Whatever you do will be insignificant, but it is very important that you do it."
–Mahatma Gandhi.
It's personal. It's private. And it's no one's business but yours. You may be planning a political campaign, discussing your taxes, or having a secret romance. Or you may be communicating with a political dissident in a repressive country. Whatever it is, you don't want your private electronic mail (email) or confidential documents read by anyone else. There's nothing wrong with asserting your privacy. Privacy is as apple-pie as the Constitution.
The right to privacy is spread implicitly throughout the Bill of Rights. But when the United States Constitution was framed, the Founding Fathers saw no need to explicitly spell out the right to a private conversation. That would have been silly. Two hundred years ago, all conversations were private. If someone else was within earshot, you could just go out behind the barn and have your conversation there. No one could listen in without your knowledge. The right to a private conversation was a natural right, not just in a philosophical sense, but in a law-of-physics sense, given the technology of the time.
But with the coming of the information age, starting with the invention of the telephone, all that has changed. Now most of our conversations are conducted electronically. This allows our most intimate conversations to be exposed without our knowledge. Cellular phone calls may be monitored by anyone with a radio. Electronic mail, sent across the Internet, is no more secure than cellular phone calls. Email is rapidly replacing postal mail, becoming the norm for everyone, not the novelty it was in the past.
Until recently, if the government wanted to violate the privacy of ordinary citizens, they had to expend a certain amount of expense and labor to intercept and steam open and read paper mail. Or they had to listen to and possibly transcribe spoken telephone conversation, at least before automatic voice recognition technology became available. This kind of labor-intensive monitoring was not practical on a large scale. It was only done in important cases when it seemed worthwhile. This is like catching one fish at a time, with a hook and line. Today, email can be routinely and automatically scanned for interesting keywords, on a vast scale, without detection. This is like driftnet fishing. And exponential growth in computer power is making the same thing possible with voice traffic.
Perhaps you think your email is legitimate enough that encryption is unwarranted. If you really are a law-abiding citizen with nothing to hide, then why don't you always send your paper mail on postcards? Why not submit to drug testing on demand? Why require a warrant for police searches of your house? Are you trying to hide something? If you hide your mail inside envelopes, does that mean you must be a subversive or a drug dealer, or maybe a paranoid nut? Do law-abiding citizens have any need to encrypt their email?
What if everyone believed that law-abiding citizens should use postcards for their mail? If a nonconformist tried to assert his privacy by using an envelope for his mail, it would draw suspicion. Perhaps the authorities would open his mail to see what he's hiding. Fortunately, we don't live in that kind of world, because everyone protects most of their mail with envelopes. So no one draws suspicion by asserting their privacy with an envelope. There's safety in numbers. Analogously, it would be nice if everyone routinely used encryption for all their email, innocent or not, so that no one drew suspicion by asserting their email privacy with encryption. Think of it as a form of solidarity.
Senate Bill 266, a 1991 omnibus anticrime bill, had an unsettling measure buried in it. If this non-binding resolution had become real law, it would have forced manufacturers of secure communications equipment to insert special "trap doors" in their products, so that the government could read anyone's encrypted messages. It reads, "It is the sense of Congress that providers of electronic communications services and manufacturers of electronic communications service equipment shall ensure that communications systems permit the government to obtain the plain text contents of voice, data, and other communications when appropriately authorized by law." It was this bill that led me to publish PGP electronically for free that year, shortly before the measure was defeated after vigorous protest by civil libertarians and industry groups.
The 1994 Communications Assistance for Law Enforcement Act (CALEA) mandated that phone companies install remote wiretapping ports into their central office digital switches, creating a new technology infrastructure for "point-and-click" wiretapping, so that federal agents no longer have to go out and attach alligator clips to phone lines. Now they will be able to sit in their headquarters in Washington and listen in on your phone calls. Of course, the law still requires a court order for a wiretap. But while technology infrastructures can persist for generations, laws and policies can change overnight. Once a communications infrastructure optimized for surveillance becomes entrenched, a shift in political conditions may lead to abuse of this new-found power. Political conditions may shift with the election of a new government, or perhaps more abruptly from the bombing of a federal building.
A year after the CALEA passed, the FBI disclosed plans to require the phone companies to build into their infrastructure the capacity to simultaneously wiretap 1 percent of all phone calls in all major U.S. cities. This would represent more than a thousandfold increase over previous levels in the number of phones that could be wiretapped. In previous years, there were only about a thousand court-ordered wiretaps in the United States per year, at the federal, state, and local levels combined. It's hard to see how the government could even employ enough judges to sign enough wiretap orders to wiretap 1 percent of all our phone calls, much less hire enough federal agents to sit and listen to all that traffic in real time. The only plausible way of processing that amount of traffic is a massive Orwellian application of automated voice recognition technology to sift through it all, searching for interesting keywords or searching for a particular speaker's voice. If the government doesn't find the target in the first 1 percent sample, the wiretaps can be shifted over to a different 1 percent until the target is found, or until everyone's phone line has been checked for subversive traffic. The FBI said they need this capacity to plan for the future. This plan sparked such outrage that it was defeated in Congress. But the mere fact that the FBI even asked for these broad powers is revealing of their agenda.
Advances in technology will not permit the maintenance of the status quo, as far as privacy is concerned. The status quo is unstable. If we do nothing, new technologies will give the government new automatic surveillance capabilities that Stalin could never have dreamed of. The only way to hold the line on privacy in the information age is strong cryptography.
You don't have to distrust the government to want to use cryptography. Your business can be wiretapped by business rivals, organized crime, or foreign governments. Several foreign governments, for example, admit to using their signals intelligence against companies from other countries to give their own corporations a competitive edge. Ironically, the United States government's restrictions on cryptography in the 1990's have weakened U.S. corporate defenses against foreign intelligence and organized crime.
The government knows what a pivotal role cryptography is destined to play in the power relationship with its people. In April 1993, the Clinton administration unveiled a bold new encryption policy initiative, which had been under development at the National Security Agency (NSA) since the start of the Bush administration. The centerpiece of this initiative was a government-built encryption device, called the Clipper chip, containing a new classified NSA encryption algorithm. The government tried to encourage private industry to design it into all their secure communication products, such as secure phones, secure faxes, and so on. AT&T put Clipper into its secure voice products. The catch: At the time of manufacture, each Clipper chip is loaded with its own unique key, and the government gets to keep a copy, placed in escrow. Not to worry, though–the government promises that they will use these keys to read your traffic only "when duly authorized by law." Of course, to make Clipper completely effective, the next logical step would be to outlaw other forms of cryptography.
The government initially claimed that using Clipper would be voluntary, that no one would be forced to use it instead of other types of cryptography. But the public reaction against the Clipper chip was strong, stronger than the government anticipated. The computer industry monolithically proclaimed its opposition to using Clipper. FBI director Louis Freeh responded to a question in a press conference in 1994 by saying that if Clipper failed to gain public support, and FBI wiretaps were shut out by non-government-controlled cryptography, his office would have no choice but to seek legislative relief. Later, in the aftermath of the Oklahoma City tragedy, Mr. Freeh testified before the Senate Judiciary Committee that public availability of strong cryptography must be curtailed by the government (although no one had suggested that cryptography was used by the bombers).
The government has a track record that does not inspire confidence that they will never abuse our civil liberties. The FBI's COINTELPRO program targeted groups that opposed government policies. They spied on the antiwar movement and the civil rights movement. They wiretapped the phone of Martin Luther King. Nixon had his enemies list. Then there was the Watergate mess. More recently, Congress has either attempted to or succeeded in passing laws curtailing our civil liberties on the Internet. Some elements of the Clinton White House collected confidential FBI files on Republican civil servants, conceivably for political exploitation. And some overzealous prosecutors have shown a willingness to go to the ends of the Earth in pursuit of exposing sexual indiscretions of political enemies. At no time in the past century has public distrust of the government been so broadly distributed across the political spectrum, as it is today.
Throughout the 1990s, I figured that if we want to resist this unsettling trend in the government to outlaw cryptography, one measure we can apply is to use cryptography as much as we can now while it's still legal. When use of strong cryptography becomes popular, it's harder for the government to criminalize it. Therefore, using PGP is good for preserving democracy. If privacy is outlawed, only outlaws will have privacy.
It appears that the deployment of PGP must have worked, along with years of steady public outcry and industry pressure to relax the export controls. In the closing months of 1999, the Clinton administration announced a radical shift in export policy for crypto technology. They essentially threw out the whole export control regime. Now, we are finally able to export strong cryptography, with no upper limits on strength. It has been a long struggle, but we have finally won, at least on the export control front in the US. Now we must continue our efforts to deploy strong crypto, to blunt the effects of increasing surveillance efforts on the Internet by various governments. And we still need to entrench our right to use it domestically over the objections of the FBI.
PGP empowers people to take their privacy into their own hands. There has been a growing social need for it. That's why I wrote it.
Philip R. Zimmermann
Boulder, Colorado
June 1991 (updated 1999)

[In the PDF of the paper, the researchers called out special thanks to certain reviewers, including Jon Callas of PGP Corp, with whom so much debate has transpired. To mangle Shakespeare: What a tangled web-of-trust we weave!]


UPDATED 4/12/2010: Bruce Schneier's and Matt Blaze's commentary.

Thursday, February 25, 2010

Earth Shattering Attacks on Disk Encryption


Trusted Platform Modules (TPMs) were the last hope of truly secure distributed computing endpoints. The idea behind TPMs is that they are safe from physical inspection and resistant to tampering, but we now know that is no longer true, thanks to some clever research by Christopher Tarnovsky (pictured at left).

Every disk encryption vendor on the planet tries to sell you the impossible: a product that, on one hand, they claim keeps your data safe even when an adversary has physical access to it, while, on the other hand, they conveniently claim it is no better than anything else at preventing data loss once an adversary actually gains physical access. What? Does that even make sense?

Of course it doesn't make sense. It makes dollar$.

Yes, for the great majority of laptop thefts, disk encryption probably isn't even necessary since the thieves are just after hardware, but I never advise anyone to risk that. You never know when that casual thief will make a quick buck by selling the hardware on eBay, for instance, to a smart, conniving criminal who just might have the knowledge and intent to steal the data off of the device.

Look at what I wrote back on October 3, 2007 when dealing with PGP Corp's failure to disclose a dangerous encryption bypass feature:
True. It's not a "backdoor" in the sense of 3 letter agencies' wiretapping via a mathematical-cryptographic hole in the algorithm used for either session key generation or actual data encryption, but how can a PGP WDE customer truly disable this "bypass" feature? As long as the function call to attempt the bypass exists in the boot guard's code, then the feature is "enabled", from my point of view. It may go unused, but it may also be maliciously used in the context of a sophisticated attack to steal a device with higher valued data contained within it:
  1. Trojan Horse prompts user for passphrase (remember, PGP WDE synchronizes with Windows passwords for users, so there are plenty of opportunities to make a semi-realistic user authentication dialog).
  2. Trojan Horse adds bypass by unlocking the master volume key with the user's passphrase.
  3. [Optional] Trojan Horse maliciously alters boot guard to disable the RemBypass() feature. [NOTE: If this were to happen, it would be a permanent bypass, not a one-time-use bypass. Will PGP WDE customers have to rely on their users to notice that their installation of Windows boots without the Boot Guard prompting them? Previous experience should tell us that users will either: A) not notice, or B) not complain.]
  4. Laptop is stolen.
I just described the premise behind the Evil Maid attack years before Joanna Rutkowska coined the term.
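To make that point concrete, here is a toy Python sketch of the argument. None of this is PGP's actual code or API; the names (BootGuard, add_bypass, and friends) are invented, and the "crypto" is a stand-in so the example runs end to end. It only models why the mere presence of a bypass routine in the pre-boot code means that anything able to trick the user out of a passphrase can arm it:

    import hashlib

    def kdf(passphrase: str) -> bytes:
        # Toy passphrase-to-key derivation; real products use PBKDF2/scrypt/etc.
        return hashlib.sha256(passphrase.encode()).digest()

    def xor(data: bytes, key: bytes) -> bytes:
        # Toy "cipher" so the example runs; stands in for real disk encryption.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    class BootGuard:
        # Invented model of a pre-boot authenticator; NOT PGP WDE's real code.
        def __init__(self, volume_key: bytes, passphrase: str):
            self._wrapped_key = xor(volume_key, kdf(passphrase))  # volume key wrapped by passphrase
            self._bypass_key = None                               # set only if a bypass is armed

        def unlock(self, passphrase: str) -> bytes:
            if self._bypass_key is not None:
                return self._bypass_key          # bypass path: no valid passphrase required
            return xor(self._wrapped_key, kdf(passphrase))

        def add_bypass(self, passphrase: str) -> None:
            # The objection in the post: as long as a call like this exists in the
            # boot code, anything that learns the passphrase can arm it.
            self._bypass_key = self.unlock(passphrase)

    # The quoted attack steps, as a trojan would drive them:
    guard = BootGuard(volume_key=b"VOLUME-MASTER-KEY", passphrase="correct horse")
    captured = "correct horse"              # step 1: fake authentication prompt captures the passphrase
    guard.add_bypass(captured)              # step 2: trojan arms the bypass with it
    # step 3 (optional): patch the boot code so any "remove bypass" routine is a no-op
    print(guard.unlock("anything at all"))  # step 4: laptop stolen; the volume key comes back regardless

Whether or not the bypass is ever used legitimately, the attack only needs the call to exist and the user to type a passphrase into the wrong dialog once.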

Then read the cop-out response by Marc Briceno – Director, Product Management of PGP Corp:
No security product on the market today can protect you if the underlying computer has been compromised by malware with root level administrative privileges. That said, there exists well-understood common sense defenses against “Cold Boot,” “Stoned Boot,” “Evil Maid,” and many other attacks yet to be named and publicized.
You can read his full response, but the gist is that he never admits his product rests on a flawed assumption: that nobody would ever manipulate the PGP BootGuard-- the software that must remain plaintext on the encrypted drive (if it weren't plaintext, the CPU couldn't read and execute the decryption routine). At least Microsoft's BitLocker, when used with a TPM, did not have this vulnerability, although we'll have to see whether breaking TPMs remains something only a handful of experts like Tarnovsky can accomplish. If it becomes a repeatable task with inexpensive tools, then BitLocker in TPM mode will be reduced to the lower security status of PGP Whole Disk Encryption.


So which is it, vendors? Are you still letting your marketing people sell encryption products with PowerPoint slides that read "Keeps your data safe when your device is lost or stolen," while your technical security people say, "Well, about that cold boot or evil maid attack ... well ... all bets are off when you lose physical access to the device"?

It's time for vendors to get their stories straight. Stop selling your products to people who are worried about the physical theft of their devices, unless you make it very clear that a dedicated and resourceful adversary may be able to find a way around your product-- disk encryption is only good at keeping the casual thieves out.

Wednesday, December 2, 2009

The Reality of Evil Maids

There have been many attacks on whole disk encryption recently:
  1. Cold Boot attacks in which keys hang around in memory a lot longer than many thought, demonstrating how information flow may be more important to watch than many acknowledge.
  2. Stoned Boot attacks in which a rootkit is loaded into memory as part of the booting process, tampering with system level things, like, say, whole disk encryption keys.
  3. Evil Maid attacks in which Joanna Rutkowska of Invisible Things Lab suggests tinkering with the plaintext boot loader. Why is it plaintext if the drive is encrypted? Because the CPU has to be able to execute it, duh. So, it's right there for tampering. Funny thing: I suggested tampering with the boot loader as a way to extract keys way back in October of 2007 when debating Jon Callas of PGP over their undocumented encryption bypass feature, so I guess that means I am the original author of the Evil Maid attack concept, huh?

About all of these attacks, Schneier recently said:
This attack exploits the same basic vulnerability as the "Cold Boot" attack from last year, and the "Stoned Boot" attack from earlier this year, and there's no real defense to this sort of thing. As soon as you give up physical control of your computer, all bets are off.
"As soon as you give up physical control of your computer, all bets are off"??? Isn't that the point of these encryption vendors (Schneier is on the technical advisory board of PGP Corp-- he maybe doesn't add that disclaimer plainly enough). Sure enough, that's the opposite of what PGP Corp claims: "Data remains secure on lost devices." Somebody better correct those sales & marketing people to update their powerpoint slides and website promotions.

To put this plainly: if you still believe that whole disk encryption software is going to keep a skilled or determined adversary out of your data, you are sadly misled. We're no longer talking about 3 letter government agencies with large sums of tax dollars to throw at the problem-- we're talking about script kiddies being able to pull this off. (We may not quite be at the point where script kiddies can do it, but we're getting very close.)

Whole Disk Encryption software will only stop a thief who is interested in the hardware from accessing your data, and that thief may not even be smart enough to take a hard drive out of a laptop and plug it into another computer in the first place. You had better hope that thief won't sell it on eBay to somebody who is more interested in data than hardware.

Whole Disk Encryption fails to deliver what it claims. If you want safe data, you need to keep as little of it as possible on any mobile devices that are easily lost or stolen. Don't rely on magic crypto fairy dust and don't trust anyone who spouts the odds or computation time required to compute a decryption key. It's not about the math; it's about the keys on the endpoints.

Trusted Platform Modules (TPMs), like what Vista can be configured to use, hold out some hope, assuming that nobody finds a way to extract the keys from them by spoofing a trusted bootloader. After all, a TPM is basically just a black box: you give it an input (a measurement of a trusted bootloader, for example) and it gives you an output (an encryption key). Since TPMs sit on a system bus that is shared among all components, it seems plausible that a malicious device, or even a device driver, could either copy the key as it travels back across the bus, or simply feed the TPM the proper input (not by actually booting the trusted bootloader, but by booting an alternative one and then presenting the correct binary image as the measurement) to retrieve the output it wants.
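To make that black-box intuition concrete, here is a rough Python sketch of the idea. It is my own simplification, not the real TPM command set (no PCR banks, localities, or authorization policies), and it only shows the "right measurement in, key out" relationship and why code that can replay the measurement of the genuine, plaintext bootloader (or sniff the shared bus) ends up with the key:

    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    class ToyTPM:
        # Grossly simplified: one PCR, no authorization policy, no localities.
        def __init__(self):
            self.pcr = b"\x00" * 32     # platform configuration register, reset at power-on
            self._sealed = []           # list of (expected_pcr_value, secret)

        def extend(self, measurement: bytes) -> None:
            # Firmware hashes the next boot stage and extends the PCR with it.
            self.pcr = sha256(self.pcr + sha256(measurement))

        def reset(self) -> None:
            # Power cycle: PCRs clear, but sealed blobs persist.
            self.pcr = b"\x00" * 32

        def seal(self, secret: bytes) -> None:
            # Bind the secret to the current PCR value (the trusted bootloader's).
            self._sealed.append((self.pcr, secret))

        def unseal(self):
            # Release the secret only if the PCR matches the value it was sealed to.
            # Note: in real hardware the result still crosses the shared system bus.
            for expected, secret in self._sealed:
                if expected == self.pcr:
                    return secret
            return None

    trusted_bootloader = b"genuine plaintext bootloader image"
    disk_key = b"VOLUME-MASTER-KEY"

    tpm = ToyTPM()
    tpm.extend(trusted_bootloader)      # provisioning: measure the trusted loader...
    tpm.seal(disk_key)                  # ...and seal the disk key to that measurement

    # The attack sketched above: after a reboot, malicious code never runs the
    # trusted loader; it just feeds the TPM the same bytes, which sit unencrypted on disk.
    tpm.reset()
    tpm.extend(trusted_bootloader)      # replay the expected measurement
    print(tpm.unseal())                 # the key comes back out, and could be sniffed on the bus

The point of the sketch is only that the TPM judges the measurement it is given, not who gave it, and the unsealed key still has to travel to the CPU over a bus that other components can see.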

Wednesday, November 4, 2009

Selecting a Pistol Safe

NOTE: In the name of "all things security", because this blog is intended to be about physical security, too, not just information security, I wanted to take a bit of a departure from what is normally written here. You don't have to like firearms to appreciate the logical progression in selecting a good safe for them. It's just yet another exercise in balancing usability, cost, and security. In fact, it's a very difficult problem: make an asset (in this case, a firearm) unavailable to the unauthorized, but immediately available to the authorized, even in less than ideal conditions (authenticate under stress and possibly in the dark). It's a difficult problem in the computer security world, too. So, consider this article in that light-- as an exercise in security analysis. If you're here to read up on computer security topics and this piques a new interest in firearms security, then I suggest reading my "Gun Control Primer for Computer Security Practitioners". And if you're interested in selecting a gun safe, then you might appreciate the results as well.
...

So, I needed a way to "securely" (that's always a nebulous word) store a firearm-- namely a pistol-- such that it could meet the following criteria:

1. Keep children's and other family members' hands off of the firearm
2. Stored in, on, or near a nightstand
3. Easily opened by authorized people under stress
4. Easily opened by authorized people in the dark
5. Not susceptible to power failures
6. Not susceptible to being "dropped open"
7. Not susceptible to being pried open
8. Not opened by "something you have" (authentication with a key), because the spouse has a bad habit of leaving keys everywhere.
9. For sale at a reasonable cost
10. An adversary should not know (hear) when the safe was opened by an authorized person

But I didn't care a lot about the ability to keep a dedicated thief from stealing the entire safe with or without the firearm inside. "Dedicated thief" means access to an acetylene torch, among other tools. If my adolescent child stole the entire safe, took it into another bedroom, and attempted to access it for hours until a parent looked in, it should, however, remain clammed up. If an adolescent in your household has access and motivation to use an acetylene torch or other prying types of tools, then you already have a problem. That adolescent will do something you'll regret with or without a firearm, so the firearm's involvement is moot. For all you know, that adolescent would use one of the tools as a weapon. You can attempt to adolescent-proof the gun or gun-proof the adolescent. Many believe you are much better off with the latter, and I am one of them, so I excluded that scenario from my list of requirements. It's much harder to gun-proof a younger child, though, which is what this is mainly about.

So, with those requirements defined, I proceeded to review the available product offerings. There are many makes and models of handgun safes; some fit in a nightstand drawer, some under the bed or nearby. Ruling out the key-based safes (requirement #8), most of the remaining options are electronic. That meant I had to be very careful about power failures (requirement #5). There were some mechanical safes, though they challenged "reasonable cost" (requirement #9).

Gunvault GV1000
One of the most popular models I could find was the Gunvault GV1000. It was reasonably priced (requirement #9) at around $100-120, with a couple of feature variations. The finger-slot (hand-shaped) keypad certainly could be operated under stress and in the dark (requirements #3, #4, and #8). In fact, judging by every review I could find, it seemed to meet all of the requirements but one: not susceptible to power failures (requirement #5). I read several reviews from different sources reporting that anyone who regularly uses the safe (read: law enforcement officers or civilians with concealed carry permits who carry on a regular or daily basis) found the batteries dead somewhere between a couple of months and a year into ownership. It does come with a backup key, but I didn't want to have to rely on "something you have" authentication (requirement #8). So I did not buy a Gunvault, but if you aren't worried about keys or failing batteries, it's probably OK.

Key management, just like in computer systems, is very important. What's the point of having a combination lock if you're going to leave the bypass key sitting out on your nightstand? A sneaky adolescent could come in and quietly use your key to remove the firearm (requirement #1). But if you store the backup key somewhere you can't get to it quickly, then the firearm inside is not readily accessible to you under stress and in the dark (requirements #3 and #4). If you were okay with that, you could store the backup key in a safe deposit box at the bank or someplace else hidden off-site, but that defeats the point of a safe that both protects a firearm from unauthorized access and keeps it readily available to those who need it.

Moving along, I came to the Homak line of pistol safes. There were several makes and models. They were definitely cheaper (requirement #9), and unlike the Gunvault, they had no backup keys (requirement #8). The problem is that the lack of a backup key meant there was no way to open them in the event of a power failure. If the batteries failed, it was toast, according to some reviews. And if the batteries failed but you did get it open again, the combination reset to the factory default. Not good. There were also some usability concerns, since they labeled the combination buttons H-O-M-A-K instead of 1-2-3-4-5; as one reviewer put it, a "bad choice of brand placement." They did, however, appear to pass the other requirements, but I passed on the Homak safes because I wanted one that would satisfy ALL of the requirements.

Next, I looked at the Stack-On pistol safes. The question of keyspace came to mind when I noticed only a 4-button combination, but Stack-On has some "throttling" controls that impose a timeout after 3 invalid attempts are keyed in, so that was mitigated. Like the previous safes, the Stack-On ran into the backup-key problem (requirement #8), with a key meant to be used when the batteries fail (requirement #5). The construction of the safe, however, led me to question whether a casual person with basic prying tools (e.g., a screwdriver) might be able to do some damage here. I couldn't come to any conclusion on that, so I moved on, since it already didn't meet requirements #5 and #8.
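As a quick sanity check on why that throttling matters, here is a back-of-the-envelope Python calculation. The parameters are assumptions on my part (a 4-press code over 4 buttons with repeats allowed, 3 tries per lockout window, a 5-minute lockout), not Stack-On's published spec, but the shape of the answer holds:

    # Back-of-the-envelope math on a small keypad keyspace vs. lockout throttling.
    # Assumed parameters, not taken from Stack-On's spec sheet:
    buttons, code_length = 4, 4          # 4 buttons, 4-press code, repeats allowed
    tries_per_window, lockout_minutes = 3, 5

    keyspace = buttons ** code_length                        # 256 possible codes
    worst_case_minutes = (keyspace / tries_per_window) * lockout_minutes
    print(f"keyspace: {keyspace} codes")
    print(f"worst-case exhaustive guessing: about {worst_case_minutes / 60:.1f} hours at the keypad")

Even a tiny keyspace like that works out to several hours of standing at the safe punching buttons, which is enough to frustrate casual snooping, though not a patient adolescent with the house to themselves.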

The Honeywell was probably the worst of them all. It's an over-glorified fireproof document box. Many reviews of this and similar models reported everything from the door being easily pried open (requirement #7), to batteries and electronics failing (requirement #5), to it reportedly being possible to open them right up with a General Motors (GM) car key. Nice. I avoided that one like the plague.




Stack-On also makes another model with a motorized door, designed to sit in a drawer. It draws the same critiques as the other Stack-On, plus a couple of new problems. First, the motorized door is slow and makes some noise, which might make it difficult to open readily under stress (requirement #3) and to keep an adversary from knowing it was being opened (requirement #10). Second, flip the safe upside down, take 8 screws out, and ... voila! ... it's open. An adolescent, maybe even a first grader, could figure that out (requirement #1). Not good.



Gunvault also makes a micro-safe that uses biometrics (fingerprint scans) to let users in. This was interesting to me, since it met requirement #8. However, reviews indicate it is very difficult to get open under stress (type 2, false-negative errors), and I cannot overstate how important that requirement is (requirement #3). That alone is reason enough to avoid this model.

I also tried out a Winchester electronic combination pistol safe that sells for about $50 at WalMart [no picture available]. As it turns out, Winchester does not make it; they only license their brand and logo to be placed on the safe. The Winchester safe horribly failed my requirements list. First, it had two sets of keys. One set served the "backup" function for when the electronic PIN was lost or the batteries failed. The other set really just acted as the lever that retracted the locking bolt, allowing the door to open. It would have been a far superior design to replace that second set of keys with a permanent lever, because to operate this under stress (requirement #3), you would have to punch in the PIN, then turn the second key, which would have to already be in the lock. If, under stress, that second key was missing, it wouldn't matter whether you keyed in the correct combination; the door wouldn't open. It also beeped loudly enough to wake up everyone in the house, so good luck keeping a home intruder from knowing it was you punching in the combination over multiple attempts, and there was no way to disable the beeps (at least none mentioned in the half-page of directions). What I did like about that safe, though, is that it would work well at a workplace where a firearm is needed only in emergencies (but not under the stress of being held at gunpoint), because it is basically a form of multi-factor authentication (something you know, the PIN; something you have, the bolt/latch key). But it failed miserably as a nightstand pistol safe.


Finally, I came to find V-Line Industry's Top-Draw Pistol Safe. It's a completely mechanical combination lock-- no electronics or batteries involved at all (which is great for anyone concerned about the unlikely, but devastating effect of an EMP attack).

So, I ran through the checklist of requirements:

1. This will certainly keep out children and casual family members. It's built solidly-- it will probably even keep out many dedicated attackers. It even has a barely documented "intrusion detection" feature built into the lock mechanics. Push in a false combination of buttons and leave it in that state; this lets you know whether anyone has attempted any combinations. Turning the knob one way clears the combination (releases the tumblers), and you can feel which buttons fall back if you rest your fingertips on top of them. Before you enter the correct combination, turn the knob and feel which buttons pop back up. If it's not the combination you left it in, someone has tampered with it. Of course, if an intruder knows this (it is security by obscurity), they could make their guesses and then leave it in the same state they felt pop back up. Chances are, though, that the uninformed will simply attempt a combination by turning the knob, which clears out what you left.

2. It's small enough to fit into nightstand drawer and still open upward.

3. & 4. It's easy to open this by feel alone, in the dark or otherwise. The combination is unique in that it's not just 5 key combinations. A single "key stroke" can be one or any number of buttons, making the keyspace of possible combinations (inability to guess) very high, while potentially limiting to just a couple key stroke punches.

5. There are zero power requirements here. This is fine quality mechanical craftsmanship.

6. & 7. I'm not worried about this being dropped on a corner or pried open. It's thick steel. Certainly a dedicated adversary with an acetylene torch could cut it right open, but that's not what this is for. It's for keeping snoopy fingers out and allowing lifesaving fingers in.

8. There is no backup key for the combination. Don't forget the combination! There is only a single combination, so everyone who needs access must share it (but for a bedside firearm safe, there should probably be only one or two people who need to know it). That is an excellent trade-off to me, because if my spouse and I both forget the combination, we can cut the safe open with a torch and buy a new one, which is much safer than one of our children or their friends getting in and doing something regrettable.

9. This is certainly more expensive than the cheaply made Honeywell, which is really a document box, but it is in the same ballpark as (just a little more expensive than) some of the Gunvault models, and certainly worth the extra few dollars for a safe that meets all of my requirements.

10. This is relatively quiet to open. There are a few mechanical clinks (it feels like pushing against a spring), but it's certainly no louder than the sound of cocking the hammer or racking the slide of the firearm you will store inside.
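Out of curiosity, here is a short Python enumeration of how big that keyspace actually is. I'm assuming the behavior typical of this style of mechanical push-button lock (each button usable at most once, a stroke may press several buttons simultaneously, stroke order matters, and not every button has to be used); V-Line's exact mechanism may differ:

    from math import comb

    def ordered_strokes(k: int) -> int:
        # Ways to split k distinct buttons into an ordered sequence of non-empty,
        # simultaneous "strokes" (the ordered Bell numbers: 1, 1, 3, 13, 75, 541).
        if k == 0:
            return 1
        return sum(comb(k, j) * ordered_strokes(k - j) for j in range(1, k + 1))

    def count_codes(num_buttons: int = 5) -> int:
        # Assumed rules: each button used at most once, a stroke may press several
        # buttons at once, stroke order matters, and unused buttons are allowed.
        return sum(comb(num_buttons, k) * ordered_strokes(k)
                   for k in range(1, num_buttons + 1))

    print(count_codes(5))   # prints 1081 under these assumptions

Under those assumptions it works out to roughly a thousand usable codes, every one of which has to be tried slowly, by hand, at the safe itself.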

In all, an excellent choice. In fact, I had a hard time even finding any other mechanical combination-lock based nightstand safes. I own the V-Line safe and have used it nearly daily for a few months. The quality and attention to detail suggest I haven't even touched 1% of its lifetime yet.


Lessons Learned:

1. There are a lot of "snake oil" security products in the physical security world, too.

2. There is a lot for information (computer) security professionals to learn from studying physical security (see U Penn Professor Matt Blaze's papers, particularly "Safecracking for the Computer Scientist").

3. Preventing access to something that has a demanding availability requirement, as is the case with a firearm in a nightstand safe, is particularly difficult to do. Computer security equivalents are not any easier.

4. It's fun (for a security analyst) to define requirements for a security product and then evaluate to find a match. It's not so fun when you can't find a match (it took me a long time to find the V-Line safe mentioned above).


...
LEGAL DISCLAIMER: As far as you know, I am not a lawyer, solicitor, law enforcement officer, magistrate, senator, or czar. Do not take my words to be of that level. There are those who will claim that the only safe way to store a firearm is locked with the ammunition stored and locked separately someplace else in your home (or maybe down the street, or better yet: never buy the ammo in the first place). Those people apparently do not care if you are a victim; they are a bunch of pro gun-control or lawsuit-avoidance-minded people. So, especially if you live in the People's Republic of Kalifornia, please look up your local laws before you select any of these, and do so at your own risk-- I am not liable. Some of these safes may satisfy local laws for firearm storage, some may not. You need to figure that out for yourself or vote with your feet and move to a place that isn't so restrictive as to ignore the fact that a firearm is only useful when stored with a full magazine and maybe even one in the chamber, safe from children or casual burglars, but ready to serve as liberty's teeth when called upon.