
Friday, March 18, 2011

RSA SecurID Breach - Initial Reactions


RSA, the security division of EMC, was breached by a sophisticated adversary who stole something of value pertaining to RSA SecurID two factor authentication implementations. That much we know for certain.


It's probably also safe to say that RSA SecurID will be knocked at least a notch down from its place of unreasonably high esteem.


And it wouldn't hurt to take this as a reminder that there is no such thing as a perfectly secure system. Complexity wins every time and the adversary has the advantage.


First, note that the original Securology article entitled "Soft tokens aren't tokens at all" remains as valid as the day it was published over three years ago. CNET is reporting that RSA has sold 40 million hardware tokens and 250 million software tokens. That means that 86% of all RSA SecurID "tokens" (the "soft token" variety) are already wide open to all of the problems that plague any endpoint device-- and more importantly, that 86% of the "two factor authentication" products sold and licensed by RSA are not really "two factor authentication" in the first place.


Second, we should note a principle of economics, so eloquently described by your mother as "don't put all your eggs in one basket": the principle of diversification. If your organization relies solely on RSA SecurID for security, you were on borrowed time to begin with. For those organizations, this event is just proof that the emperor hath no clothes.


Third, the algorithm behind RSA SecurID is not publicly disclosed. This should be a red flag to anyone worth their salt in security. It is a direct violation of Kerckhoffs's Principle and Shannon's Maxim, which hold, roughly, that only the encryption keys should be secret and that we should always assume the enemy knows (or can reverse engineer) the algorithm. There have been attempts in the past to reverse engineer the RSA SecurID algorithm, but those attempts are old and do not necessarily reflect how the current version operates.


Fourth, it's probably the seed records that were stolen. Since we know that the algorithm is essentially a black box, taking as input a "seed record" and the current time, then either disclosure of the "seed records" or disclosure of the algorithm could potentially be devastating to any system relying on RSA SecurID for authentication.

Hints that the "seed records" were stolen can be seen in this Network World article:
But there's already speculation that attackers gained some information about the "secret sauce" for RSA SecurID and its one-time password authentication mechanism, which could be tied to the serial numbers on tokens, says Phil Cox, principal consultant at Boston-based SystemExperts. RSA is emphasizing that customers make sure that anyone in their organizations using SecurID be careful in ensuring they don't give out serial numbers on secured tokens. RSA executives are busy today conducting mass briefings via dial-in for customers, says Cox. [emphasis added by Securology]
Suggesting to customers to keep serial numbers secret implies that seed records were indeed stolen.

When a customer deploys newly purchased tokens, the customer must import a file containing a digitally signed list of seed records associated with serial numbers of the device. From that point on, administrators assign a token by serial number, which is really just associating the seed record of the device with the user's future authentication attempts. Any time that user attempts to authenticate, the server takes the current time and the seed record and computes its own tokencode for comparison to the user input. In fact, one known troubleshooting problem happens when the server and token get out of time synchronization. NTP is usually the answer to that problem.
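RSA's actual algorithm is proprietary, so any code here can only be an analogy. Still, a TOTP-style sketch (in the spirit of RFC 4226/6238, not RSA's real design) illustrates the model described above: a tokencode is a pure function of the secret seed and the current time, and the server tolerates a little clock drift.

```python
# NOT RSA's proprietary algorithm: a TOTP-style analogue that illustrates the
# same model, where tokencode = f(secret_seed, current_time).
import hashlib
import hmac
import struct
import time

def tokencode(seed, t=None, step=60, digits=6):
    """Derive a time-based code from a per-device secret seed."""
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def server_verify(seed, submitted, step=60, drift_steps=1):
    """Accept codes from adjacent time steps to tolerate clock drift
    (the out-of-sync problem NTP usually solves)."""
    now = time.time()
    return any(tokencode(seed, now + i * step, step) == submitted
               for i in range(-drift_steps, drift_steps + 1))
```

The dependency structure is the whole point: anyone holding the seed and a clock can mint valid tokencodes indefinitely.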

The ease of obtaining the algorithm further strengthens the theory that seed records were stolen by the "advanced persistent threat": any customer with a copy of the server-side components essentially has the algorithm, through common techniques for reversing binaries. The server's CPU must be able to compute the tokencode via the algorithm, therefore monitoring instructions as they enter the CPU will divulge the algorithm. This is not a new threat, and certainly nothing worthy of a new moniker. The most common example of reversing binaries is bypassing software licensing features-- it doesn't take a world-class threat to do that. What is much, much more likely is that RSA SecurID seed records were indeed stolen.

The only item of value that could be even more damaging might be the algorithm RSA uses to establish seed records and associate them with serial numbers. Assuming there is some repeatable process behind it-- and it makes sense to believe there is, since that would make production manufacturing of those devices simpler-- then stealing that algorithm is like stealing all seed records: past, present, and future.
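To see why, consider a purely hypothetical derivation scheme (nothing here is RSA's actual process): if seeds come from a keyed function of the serial number, then the master key is equivalent to every seed ever issued.

```python
# Hypothetical illustration only -- not RSA's real process. IF seeds were
# derived from serial numbers with a repeatable keyed function like this,
# then whoever holds master_key holds every seed: past, present, and future.
import hashlib
import hmac

def derive_seed(master_key, serial):
    return hmac.new(master_key, serial.encode(), hashlib.sha256).digest()[:16]

# An attacker with master_key regenerates any token's secret on demand:
# derive_seed(master_key, "000123456789")
```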

Likewise, even if source code is the item that was stolen, it's unlikely that any of it will translate into real attacks, since most RSA SecurID installations do not directly expose the RSA servers to the Internet. They're usually called upon by end-user-facing systems like VPNs or websites, and the Internet tier generally packages up the credentials and passes them along in a different protocol, like RADIUS. The only way a vulnerability in the stolen source code would become very valuable is if an injection vulnerability were found in it, such as a malicious input passed in a username and password challenge that causes the back-end RSA SecurID systems to fail open, much like a SQL injection attack. It's possible, but it is much more probable that seed records were the stolen item of value.
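For illustration, the fail-open anti-pattern looks something like this hypothetical sketch (not RSA's code): a front end that treats a back-end error as success.

```python
# Hypothetical sketch of the "fail open" anti-pattern -- not RSA's code.
def backend_verify(username, tokencode):
    # Stand-in for a RADIUS round-trip to the back-end SecurID server,
    # which a crafted input might crash or confuse.
    raise RuntimeError("backend unreachable or choked on malicious input")

def authenticate(username, tokencode):
    try:
        return backend_verify(username, tokencode)
    except Exception:
        return True  # BUG: fails open. A safe design returns False here.
```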


How to Respond to the News
Lots of advice has been shared for how to handle this bad news. Most of it is good, but a couple items need a reality check.


RSA filed with the SEC, and the filing includes a copy of their customer support note on the issue. At the bottom of the form is a list of suggestions:
  • We recommend customers increase their focus on security for social media applications and the use of those applications and websites by anyone with access to their critical networks.
  • We recommend customers enforce strong password and pin policies.
  • We recommend customers follow the rule of least privilege when assigning roles and responsibilities to security administrators.
  • We recommend customers re-educate employees on the importance of avoiding suspicious emails, and remind them not to provide user names or other credentials to anyone ...
  • We recommend customers pay special attention to security around their active directories, making full use of their SIEM products and also implementing two-factor authentication to control access to active directories.
  • We recommend customers watch closely for changes in user privilege levels and access rights using security monitoring technologies such as SIEM, and consider adding more levels of manual approval for those changes.
  • We recommend customers harden, closely monitor, and limit remote and physical access to infrastructure that is hosting critical security software.
  • We recommend customers examine their help desk practices for information leakage that could help an attacker perform a social engineering attack.
  • We recommend customers update their security products and the operating systems hosting them with the latest patches.
[emphasis added by Securology]

Unless RSA is sitting on some new way to shim into the Microsoft Active Directory (AD) authentication stacks (and they have not published it), there is no way to accomplish what they have stated there in bold. AD consists mainly of LDAP and Kerberos with a sprinkling of a few other neat features (not going into those for brevity). LDAP/LDAPS (the secure SSL/TLS version) and Kerberos are both based on passwords as the secret used to authenticate. They cannot simply be upgraded into using two-factor authentication.
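To make the point concrete, here is what a plain LDAP bind against a Domain Controller looks like (hypothetical host and account names, using the third-party ldap3 library): the protocol's authentication interface accepts exactly one secret, and there is no slot for a tokencode.

```python
# Hypothetical names; requires the third-party ldap3 library (pip install ldap3).
from ldap3 import Server, Connection

conn = Connection(
    Server("dc01.example.com"),
    user="alice@example.com",
    password="hunter2",  # the single factor that LDAP simple bind understands
)
print(conn.bind())  # True on success -- no token challenge ever occurs
```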

Assuming RSA is suggesting installing the RSA SecurID agent for Windows on each Domain Controller in an AD forest, that still does not prevent access to making changes inside of AD Users & Computers: any client must be able to talk Kerberos and LDAP to at least one Domain Controller for AD's basic interoperability to function, and the same firewall rules that permit those services will also allow authenticated and authorized users to browse and modify objects within the directory. What they're suggesting just doesn't seem to be possible, and it must have been written by somebody who doesn't understand the Microsoft Active Directory product line very well.


Securosis has a how-to-respond list on their blog:
Remember that SecurID is the second factor in a two-factor system… you aren’t stripped naked (unless you’re going through airport security). Assuming it’s completely useless now, here is what you can do:
  1. Don’t panic. We know almost nothing at this point, and thus all we can do is speculate. Until we know the attacker, what was lost, how SecurID was compromised (if it is), and the vector of a potential attack we can’t make an informed risk assessment.
  2. Talk to your RSA representative and pressure them for this information.
  3. Assume SecureID is no longer effective. Review passwords tied to SecurID accounts and make sure they are strong (if possible).
  4. If you are a high-value target, force a password change for any accounts with privileges that could be overly harmful (e.g. admins).
  5. Consider disabling accounts that don’t use a password or PIN.
  6. Set password attempt lockouts (3 tries to lock an account, or similar).
[Emphasis added by Securology]
To their first point, I think we can know what was lost: seed records. Without that, there would be no point in filing with the SEC and publicly disclosing that fact. Anybody can know their algorithm for computing one-time passwords by reversing the server side (see above). The only other component in the process is the current time, which is public information. The only private information is the seed record.

On point #4, if your organization is a high-value target, flagging RSA SecurID users to change the PINs or passwords associated with their user accounts may not be a good idea: as the defender, you have to assume this sophisticated adversary already has your seed records and therefore could answer the challenge to reset passwords. A better solution, if your organization is small, is to physically meet with high-value users and reset their credentials in person. If your organization is too large for that, then your only real option is to monitor user behavior for abnormalities-- which is where most of your true value should come from anyway.

This does tie well with their second suggestion-- pressuring your RSA contact for more information. In all likelihood, if our speculation that seed records were stolen is correct, then the only solution is to demand new RSA SecurID tokens from RSA to replace the ones you currently have. And if RSA is not quick to respond to that, it's for one of two reasons:
  1. This is going to financially hurt them in a very significant way and it's not easy to just mass produce 40 million tokens overnight, OR,
  2. RSA's algorithm for generating seed records and assigning them to token serial numbers is compromised, and they're going to need some R&D time to come up with a fix without breaking current customers who order new tokens under the new seed record generation scheme in the future.

UPDATED TO ADD: Since all indications are that the seed records were compromised, and since Art Coviello's message is that no RSA customer should have reduced security as a result of the breach, that must mean RSA does not believe SecurID is worth the investment. After all, if RSA SecurID seed records were stolen, the theft effectively reduces any implementation to just a single factor: the PIN/password requested in addition to the tokencode. And who would buy all that infrastructure and hand out worthless digital keychains when they can get single-factor password authentication for super cheap with Microsoft's Active Directory?

Tuesday, February 3, 2009

Rubber Hose Cryptanalysis

Rubber hose cryptanalysis, xkcd-style. It's funny because it's true.

Unfortunately, so much of computer security is exactly this way. If the asset is of significant value, the bad guys won't fight fair (they'll fight bits with bats).

Tuesday, September 23, 2008

Venema on Spam

I'm grateful for physicist Wietse Venema's contributions (SATAN, The Coroner's Toolkit, TCP Wrappers, and Postfix) to the computer security world, but I certainly hope Venema's anti-spam solution never gets implemented:

The best theoretic solution is to change the email distribution model, but this may never happen. Right now, email is a "push" technology where the sender has most of the control, and where the receiver bears most of the cost.

The alternative is to use a "pull" model, where the sender keeps the email message on their own server until the receiver downloads it. For example, when my bank wants to send me email, they would send a short message with an URL to view their mail, and my email software would download the message for me. This assumes of course that my email software recognizes my bank's email digital signature and their Web site's SSL certificate, otherwise we would have a phishing problem. Legacy mail software would tell the user that they have email at their bank, and leave it up to the user to download their email.

The "pull" model would change the economics of email. It would move the bulk of the cost from the receivers where it is now, to the senders where it belongs. No-one would read email if its sender doesn't provide a service where recipients can download it from.

Except that his proposed "pull" model would change the incentives in such a way that email users would not opt in. Blackberries and the like use a "push" model today so that busy execs (or wannabe middle managers) can read email in tube trains without connectivity. Gmail wants the message pulled down and indexed, ready for searching (a different set of security issues). Not to mention that users would now have to decide whether or not to "pull" an email based on just the meta information (e.g. sender's address and subject line), not full content inspection. What happens when my friend's machine is hijacked and his outgoing mailbox fills up with spam or viruses destined for me? I would have no way to tell without the details. Often, I cannot tell whether something is worth reading without skimming the body of the message. It could be my bank telling me my statement is available or that they want to offer me yet another home equity loan (the former is interesting; the latter is junk). It's just not going to work.

I'm all in favor of changing the economics of the situation. I just don't think this does it.

Wednesday, November 21, 2007

Rootkitting Your Customers


I am a big fan of Dan Geer; he always has an interesting perspective on security issues, but that's not to say I agree with him.

Dan wrote a guest editorial that was published in Ryan Naraine's "Zero Day" blog, tackling the topic of trustworthy e-commerce when consumers' PCs are so likely to be infected with who-knows-what (there's even a Slashdot thread to go with it). Dan proposed:
"When the user connects [to your e-commerce site], ask whether they would like to use your extra special secure connection. If they say 'Yes,' then you presume that they always say Yes and thus they are so likely to be infected that you must not shake hands with them without some latex between you and them. In other words, you should immediately 0wn their machine for the duration of the transaction — by, say, stealing their keyboard away from their OS and attaching it to a special encrypting network stack all of which you make possible by sending a small, use-once rootkit down the wire at login time, just after they say 'Yes.'"
I see one major flaw with this: even if we agree that a benevolent rootkit issued by the merchant is a good idea, how do we guarantee that the rootkit trumps any other malware (i.e. other rootkits) already running on this presumed-infected consumer's PC? All it would take is a piece of malware that could get into the Trusted Path between the consumer's keyboard and the merchant's good-rootkit.

I understand that Dr. Geer is trying to tackle this infected/zombie problem from the merchant's perspective. And in the grand scheme of things, the merchant has very little control of the trust equation. There are some interesting security economics at play here.

What is needed here is Remote Attestation of the trustworthiness of the consumer's computer system. The problem is, we may never get to the point where remote attestation is possible, because of the socio-political aspects of trustworthy computing, not the technical aspects. It's the same reason why every year for the last decade has been the "year of the PKI", but in none of them have we seen widespread adoption of public key infrastructure to the point that it would be our saving grace or silver bullet like it has been heralded to become. Trustworthy computing, as simple as calculating trust relationships through public key cryptography (such as with the use of TPMs), requires an authority to oversee the whole process. The authority has to vouch for the principals within the authority's realm. The authority has to define what is "correct" and label everyone and everything within its domain as either "correct" or "incorrect", from a trustworthiness perspective. In this distributed e-commerce problem, there is no authority. And who would do it? The Government? The CAs (Verisign, et al)? And ... the more important question ... if one of these organizations did stand up as the authority, who would trust their assertions? Who would agree with their definitions of "correctness"?
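Mechanically, remote attestation is not the hard part. A conceptual sketch (invented names, with an HMAC key standing in for TPM-backed signatures) shows that the crypto is easy-- the allowlist of "correct" measurements is exactly the authority problem described above: somebody must maintain it, and everybody must trust them.

```python
# Conceptual sketch only: attestation reduces to "an authority I trust vouches,
# over something unforgeable, for the state of your machine." An HMAC key
# shared with the authority stands in for TPM-backed signatures here.
import hashlib
import hmac

KNOWN_GOOD_MEASUREMENTS = {"a1b2c3d4"}  # hypothetical: the authority's "correct"

def verify_attestation(authority_key, measurement, nonce, tag):
    expected = hmac.new(authority_key, nonce + measurement.encode(),
                        hashlib.sha256).digest()
    return (hmac.compare_digest(expected, tag)
            and measurement in KNOWN_GOOD_MEASUREMENTS)
```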

Dr. Geer's suggestion will work, and fail, just like the many NAC/NAP products from vendors who truly believe they can remotely attest a computer system's trustworthiness by sending a piece of software to run on a CPU controlled by an OS they inherently cannot trust-- yet they believe the software's output is trustworthy. This method of establishing trust is opt-in security; NAC vendors have tried it and failed (and many of them keep trying, in an arms race). And when organizations like, say, Citibank start using one-time-use rootkits, the economics for malware to tell the rootkit "these are not the droids you're looking for" become very favorable. At that point, we'll see just how bad opt-in security can be. The economics of attacking NAC implementations, by comparison, only favor college students who do not wish to run the school's flavor of AV. It's a chicken-and-egg problem, except in this case there is no debate about which must come first. The software we send to the remote host may be different, but the principle is the same: trust must be established before actions or output can be trusted.

But it would probably make for a great movie plot.

Sunday, November 18, 2007

Analyzing Trust in Hushmail

Recently, law enforcement acquired confidential email messages from the so-called secure email service, Hushmail. Law enforcement exploited weaknesses in trust relationships to steal the passwords for secret keys which were then used to decrypt the confidential messages.

There are some lessons from this.

#1. Law enforcement trumps. This is not necessarily a lesson in Trust, per se, but keep in mind that large institutions have extensive resources and can be very persuasive, whether it is persuasion from threat of force or financial loss. Possibly an extremely well funded service (read: expensive) in a country that refuses to comply with US laws and policies (e.g. extradition) could keep messages secret (hence the proverbial Swiss bank account). There are definitely economic incentives to understand in evaluating the overall security of Hushmail's (or a similar service's) solution.

#2. A service like Hushmail, which sits in the middle as a broker for all of your message routing and (at least temporary) storage, is part of the Trusted Path between sender and receiver. Hushmail attempts to limit the scope of what is trusted by employing techniques that prevent their access to the messages, such as encrypting the messages on the client side using a java agent or only storing the passphrases temporarily when encrypting messages on the server side.

A user trusts that Hushmail won't change their passphrase storage from hashed (unknown to Hushmail) to plaintext (known to Hushmail) when the user uses the server-side encryption option. A user also trusts that the java applet hasn't changed from the version where strong encryption of the messages happens on the client side without divulging either a copy of the message or the keys to Hushmail. The source code is available, but there is little to guarantee that the published java applet has not changed. The average, non-technical user will have no clue, since the entire process is invisible to them. Hushmail could easily publish a signed, malicious version of the java applet. There is no human-computer interface that can help the user make a valid trust decision.
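The check a user would need, but realistically doesn't have, is something like pinning the digest of an independently audited copy of the applet and refusing anything else (hypothetical digest shown):

```python
# Hypothetical sketch: pin the hash of a known-good, independently audited
# applet. Hushmail can publish a new (signed) applet at any time, which is
# exactly why no ordinary user performs this check.
import hashlib

PINNED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def applet_is_unmodified(applet_bytes):
    return hashlib.sha256(applet_bytes).hexdigest() == PINNED_SHA256
```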

#3. The Trusted Path also includes many other components: the user's browser (or browser rootkits), the OS and hardware (and all of the problematic components thereof), the network (including DNS and ARP), and last but not least, the social aspect (people who have access to the user). There are many opportunities to find the weakest link in the chain of trust that do not involve exploiting weaknesses of the service provider. Old-fashioned, face-to-face message exchanges may have a shorter trusted path than a distributed, asynchronous electronic communication system with confidentiality controls built in (i.e. Hushmail's email). And don't forget Schneier's realism of cryptographic execution:
"The problem is that while a digital signature authenticates the document up to the point of the signing computer, it doesn't authenticate the link between that computer and Alice. This is a subtle point. For years, I would explain the mathematics of digital signatures with sentences like: 'The signer computes a digital signature of message m by computing m^e mod n.' This is complete nonsense. I have digitally signed thousands of electronic documents, and I have never computed m^e mod n in my entire life. My computer makes that calculation. I am not signing anything; my computer is."
#4. Services like Hushmail collect large quantities of encrypted messages, so they are a treasure trove to adversaries. Another economic aspect of the overall trust analysis is that the majority of web-based email service users do not demand these features. So the subset of users who do require extra measures for confidentiality can be easily singled out, regardless of whether the messages would implicate the users in illegal activity (or otherwise meaningful activity to some other form of adversary). And, at a minimum, there is always Traffic Analysis, where relationships can be deduced if email addresses can be linked to individuals. An adversary may not need to know what is sent, so long as they know that something is sent with extra confidentiality measures.


To sum up, if you expect to conduct illegal or even highly-competitive activity through third-party "private" email services, you're optimistic at best or stupid at worst.

Wednesday, October 31, 2007

Retail, Protected Consumer Information, and Whole Disk Encryption

There has been a lot of discussion around retailers pushing back on the PCI (Payment Card Industry) Data Security Standards group. The claim is that merchants should not have to store credit card data at all. Instead, credit card transaction clearinghouses would be the only location where that data is retained; any current need (transaction lookup, disputes, etc.) would be handled by the payment card processors on a per-request basis.

I really like this idea.

In risk management, there are generally two methods of protecting assets: 1) spend more to prevent threats to the assets, or 2) spend more to reduce the number/value of the assets. We see a lot of the former (think: anti-virus, anti-spyware, anti-threat-du-jour). We rarely see examples of the latter, but it is a perfectly logical approach.

Dan Geer gave us a great analogy: as threats increase, perimeters contract. A suburban neighborhood is OK with a police car every so many square miles, but an embassy needs armed marines every 50 feet of its perimeter. We can take Dr. Geer's analogy and make a war in everyone's neighborhood-- the local retailers/e-tailers-- or we can consolidate those assets into specific locations where they can best be monitored and protected. It just makes sense.

It's also a simple game of economics. The consumer passes the risk to the credit card issuers, who pass the risk on to the merchants. If consumers in the US hadn't transferred risk to the credit card issuers (courtesy of US law limiting consumer liability for credit card fraud to a whopping $50), we would likely not see widespread use of credit cards in the US today. What consumer would stand for getting into greater debt if the card was lost? Likewise, we are now at a turning point with merchants, since card issuers are trying to transfer the risk onto them. Shuffling the risk (by shuffling the custody of confidential credit card data) back to the issuers makes perfect sense. Don't forget the credit card issuers have been in a perfect place all these years: charging merchants a fee per transaction and charging interest to consumers who carry a balance beyond 30 days. Since they can double-dip in the economics equation, it makes the most sense for them to take the responsibility.

Wednesday, October 10, 2007

On Open Source and Security

Recently, I noted that what matters for security is not whether source code is open, but whether well-qualified analysts are reviewing the code. As a result, I received the following comment: "If you are paranoid, pay some guys you trust to do a review. With [closed source] you can't do that." The following is my response.

...


Well, say I trust Bruce Schneier (I generally do, professionally speaking, but not necessarily personally-- and I'll pick on him since he's almost universally accepted as the patron saint of security). Let's say I trust Bruce's analysis of a particular company's products. If Bruce is reviewing the source code, and the code is closed to the public but made available to him in escrow, I would likely be OK with that. Trust is more complicated than just the availability of source code. There are not many people in the world who are qualified to perform security reviews of a security product's code, so I couldn't trust just anyone's analysis of it. To be honest, if it came down to a computationally-intensive implementation of cryptographic code, I wouldn't even trust my own analysis of it. My point is: Trust is a social-psychological phenomenon, not a technical one.

"Open source" means so many different things to different people. To some it means "free to use, modify, or distribute". To some, it means anyone can review the code. To others, it might just mean the vendor will provide source code escrow services at your request. It might be possible to have a closed source (in the common sense) product opened up to a specific customer (potentially any customer asks the right question the right way).

How many "joe users" that have a one seat install of a product actually review the code? Not many. How many of those one seat installs are actually qualified code reviewers? Fewer still.

Open Source != Security

It (open source) is an unrelated variable. It's like how automobile insurance providers in the US inaccurately categorize all male drivers under 25 years of age as high risk. Not all of them (caricatures and jokes aside) will get tickets, cause wrecks, or otherwise require the insurer to pay out. However, the actuarial data presented to the insurers suggests that is a reasonable category of customers for whom they should increase premiums. If it were legal and ethical (and affordable) to put all drivers under 25 through a *magic* test (I say magic because it may or may not exist) that could differentiate with a higher level of certainty whether the customer had the "x factor" that causes a higher tendency to cause wrecks ... well, that's where the insurance companies would go.

Open Source is like that broad, mis-categorization. There are many open source projects that are never reviewed for potential threats by qualified people. In fact, since "open source" is so "open", there are likely projects that have never even been reviewed by anyone outside of the sole contributor. "Open Source" strikes up a connotation of community and collaboration, but it does not guarantee community and collaboration. Likewise, there's no guarantee that the people reviewing the code aren't adding security problems deliberately.

Trust is a binary action. You either choose to trust someone or something, or you choose not to. You might opt to trust someone conditionally: for example, I might trust a carpenter to build my house but not to work on my car. Trustworthiness, however, is a totally different equation. People estimate trustworthiness (which is exactly what it reads as: calculating how worthy of trust something or someone is) using a combination of perceived reputation (track records) and trusted third parties' estimates of trust (e.g. my friend Joe knows a lot about cars, so I trust his mechanic, since Joe would know how to differentiate between a good and bad mechanic).

A product has opened its source code for review. So what? You should be asking the following questions:
  • Why? Why did you open your source?
  • Who has reviewed your source? What's (not) in it for them?
  • What was in the review? Was it just a stamp of approval or were there comments as well?