Wednesday, November 21, 2007

Rootkitting Your Customers


I am a big fan of Dan Geer; he always has an interesting perspective on security issues, but that's not to say I agree with him.

Dan wrote a guest editorial that was published in Ryan Naraine's "Zero Day" blog, tackling the topic of trustworthy e-commerce when consumers' PCs are so likely to be infected with who-knows-what (there's even a Slashdot thread to go with it). Dan proposed:
"When the user connects [to your e-commerce site], ask whether they would like to use your extra special secure connection. If they say 'Yes,' then you presume that they always say Yes and thus they are so likely to be infected that you must not shake hands with them without some latex between you and them. In other words, you should immediately 0wn their machine for the duration of the transaction — by, say, stealing their keyboard away from their OS and attaching it to a special encrypting network stack all of which you make possible by sending a small, use-once rootkit down the wire at login time, just after they say 'Yes.'"
I see one major flaw with this: even if we agree that a benevolent rootkit issued by the merchant is a good idea, how do we guarantee that this rootkit trumps any other malware (i.e., other rootkits) already running on the presumed-infected consumer's PC? All it would take is a piece of malware that could insert itself into the Trusted Path between the consumer's keyboard and the merchant's good-rootkit.
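A toy model of why that ordering matters (made-up names only, no real OS hooks): whichever piece of software hooks the input path first sits closest to the hardware and sees every keystroke before the layers above it.

    # Toy model only: "taps" stand in for input-path hooks; no real OS API here.
    class KeyboardPath:
        def __init__(self):
            self.taps = []              # index 0 is closest to the hardware

        def hook(self, tap):
            self.taps.append(tap)       # later hooks stack *above* earlier ones

        def keystroke(self, key):
            for tap in self.taps:
                key = tap(key)          # every layer sees (and may copy) the key
            return key

    path = KeyboardPath()
    stolen = []
    path.hook(lambda k: (stolen.append(k), k)[1])  # resident malware, hooked first
    path.hook(lambda k: k)                         # merchant's use-once "good rootkit"

    for k in "hunter2":                            # the consumer types a password
        path.keystroke(k)

    print("".join(stolen))   # prints "hunter2": malware saw it before the rootkit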

I understand that Dr. Geer is trying to tackle this infected/zombie problem from the merchant's perspective. And in the grand scheme of things, the merchant has very little control over the trust equation. There are some interesting security economics at play here.

What is needed here is Remote Attestation of the trustworthiness of the consumer's computer system. The problem is, we may never get to the point where remote attestation is possible, because of the socio-political aspects of trustworthy computing, not the technical aspects. It's the same reason why every year for the last decade has been heralded as the "year of PKI", yet in none of them have we seen public key infrastructure adopted widely enough to become the saving grace or silver bullet it was promised to be. Trustworthy computing, even something as simple as calculating trust relationships through public key cryptography (such as with TPMs), requires an authority to oversee the whole process. The authority has to vouch for the principals within its realm. The authority has to define what is "correct" and label everyone and everything within its domain as either "correct" or "incorrect" from a trustworthiness perspective. In this distributed e-commerce problem, there is no such authority. And who would do it? The government? The CAs (Verisign, et al.)? And -- the more important question -- if one of these organizations did stand up as the authority, who would trust its assertions? Who would agree with its definitions of "correctness"?

Dr. Geer's suggestion will work, and fail, the same way it has for the many NAC/NAP vendors who truly believe they can remotely attest a computer system's trustworthiness by sending a piece of software to run on a CPU controlled by an OS they inherently cannot trust -- and yet they believe the software's output is trustworthy. This method of establishing trust is opt-in security; NAC vendors have tried it and failed (and many of them keep trying, in an arms race). And when organizations like, say, Citibank start using one-time-use rootkits, the economics for malware to tell the rootkit "these are not the droids you're looking for" become very favorable. At that point, we'll see just how bad opt-in security can be. The economics for attacking NAC implementations, by comparison, only favor college students who do not wish to run the school's flavor of AV. It's a chicken-and-egg problem, except in this case we know, without debate, which must come first. The software we send to the remote host may be different, but the principle is the same: trust must come before trustworthy actions or output.
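A minimal sketch of why opt-in attestation fails, with entirely made-up function names: the host being measured controls the code doing the measuring, so a compromised host can simply report whatever the checker expects.

    import hashlib

    # Made-up "integrity measurement": hash some notion of system state.
    def measure_system() -> str:
        return hashlib.sha256(b"actual, infected system state").hexdigest()

    EXPECTED = hashlib.sha256(b"known-good system state").hexdigest()

    # On a compromised host, malware interposes on the checker itself:
    def measure_system_hooked() -> str:
        return EXPECTED            # "these are not the droids you're looking for"

    measure_system = measure_system_hooked   # the hook is installed...
    assert measure_system() == EXPECTED      # ...and attestation now always passes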

But it would probably make for a great movie plot.

Tuesday, November 20, 2007

Soft tokens aren't tokens at all

The three categories of authentication:
Something you know
Something you have
Something you are
Physical hardware tokens, like RSA's SecurID, fall into the second category of "something you have". Software tokens, such as RSA's software SecurID, pretend to fall into that same category ... but they are really just another example of "something you know".


Physical tokens are designed to be tamper resistant, which is an important property. By design, if a physical token is tampered with, the token's internally stored "seed record" (a symmetric key) is lost, in an attempt to prevent a physical attacker from duplicating the token. [Note the words "tamper resistant", not "tamper proof".]

Soft tokens, however, do not share this property. Soft tokens consist of two components: 1) the software application code that implements the one-time password function, and 2) the seed record the application uses to generate the one-time password function's output.

"Perfect Duplication" is a property of the Internet/Information Age that is shaking the world. The Recording/Movie Production Industries are having a hard time fighting perfect duplication as a means to circumvent licensed use of digital media. Perfect duplication can be a business enabler as well, as it is with news syndication services that distribute perfect copies of their stories throughout the far reaches of the world in a matter of seconds. In the case of soft tokens, though, perfect duplication is a deal breaker.

Soft tokens are designed to be flexible. It's difficult to provision a hardware token to an employee halfway around the world the same day it is requested, but with soft tokens, provisioning is a piece of cake. Likewise, it's easier to recover soft tokens when that same employee is terminated. Soft tokens run on virtually any platform. RSA supports everything from Windows desktops to browser toolbars to mobile devices -- all you need to do is import the seed record.

There's the rub ... Distributing the seed record requires a confidential channel to ensure that it is not perfectly duplicated in transit. Distributing seed records to many of the platforms soft token vendors support involves plaintext transmission, such as sending the seed record as an email attachment to a Blackberry client. An administrator may provision the seed record encrypted under an initial passphrase that is distributed out-of-band, but it is common practice for seed records and initial passphrases to be distributed side-by-side. Whereas a physical token can only be in one place at a time, a soft token can be perfectly duplicated by an eavesdropper, complete with its initial passphrase (especially when it isn't distributed out of band). If Alice receives her soft token and changes its passphrase, Eve can keep her perfect copy with the initial passphrase, or choose to change the passphrase -- either way, the back end of the one-time-password authentication system will receive a valid token code (the time value encrypted with the seed record).

Likewise, a soft token employed on a malware-ridden remote PC could have its stored contents uploaded to an adversary's server, capturing the seed record. If the malware also captures keystrokes (and software keystroke logging is all too common these days), then another opportunity for a perfect duplicate exists. Soft tokens suffer from severe distributed key management problems. Bob (the administrator in this case) cannot know whether the one-time password was generated by Alice's soft token application or by Eve's perfect duplicate.
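A minimal sketch of the problem, using a generic HMAC-based one-time password in the style of RFC 4226 rather than RSA's proprietary SecurID algorithm (the seed value is made up): two perfect copies of the same seed record produce identical codes, so the back end has no way to tell them apart.

    import hashlib
    import hmac
    import struct
    import time

    def totp(seed: bytes, t: float, step: int = 30, digits: int = 6) -> str:
        """Derive a time-based one-time password from a seed record."""
        counter = int(t // step)                   # 30-second time window
        mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                    # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    alice_seed = b"seed-record-provisioned-to-alice"   # made-up seed record
    eve_seed = alice_seed       # Eve's perfect duplicate, captured in transit

    now = time.time()
    assert totp(alice_seed, now) == totp(eve_seed, now)
    # The back end receives one valid code; it cannot know whose copy made it.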

In short, a soft token can prove a user "has" the token, but it cannot prove the user has the only copy. Therefore, soft tokens are not truly "something you have"; they are "something you know" (i.e., the passphrase to unlock a seed record).

For organizations considering soft tokens as a method of achieving multi-factor authentication, a password plus a soft token is simply two instances of "something you know". Thus the organization must ask itself: "Does the complexity of properly distributing seed records over a secure channel, plus the expense of managing and supporting the software token application code, really provide sufficient benefit over a simple -- yet strong -- password-only method?" My prediction is that the answer will be: "No, it doesn't".

...
UPDATED 12/11/2007 - Sean Kline, Director of Product Management at RSA, has posted a response [Thanks for the heads up]. Perhaps we will have some interesting dialog.

Still More TOR

F-Secure's blog is discussing how there are more bad TOR nodes out there. I discussed a while back how TOR's goal of anonymity is not possible. TOR ignores the rules of Trust relationships.

From F-Secure:
"Here's a node that only accepts HTTP traffic for Google and MySpace; it resides under Verizon:

AS | IP | AS Name — 19262 | 71.105.20.179 | VZGNI-TRANSIT - Verizon Internet Services Inc.

While curious and perhaps even suspicious, it isn't necessarily malicious. It could just be a Samaritan particularly concerned with anonymous searches and MySpace profiles for some reason. But there's no way to tell, so why use such a node if you don't have to?"
Or maybe it's trying to capture credentials, such as by stealing Google cookies.
"But how about this one?

Now here's a node that was monitoring SSL traffic and was engaging in Man-in-the-Middle (MITM) attacks. Definitely bad.

AS | IP | CC | AS Name — 3320 | 217.233.212.114 | DE | DTAG Deutsche Telekom AG

Here's how the testing was done:

A test machine with a Web server and a real SSL certificate was configured.
A script was used to run through the current exit nodes in the directory cache.
Connections were made to the test machine.
A comparison of the certificates was made.

And the exit node at 217.233.212.114 provided a fake SSL certificate!

Now note, this was only one of about 400 plus nodes tested. But it only takes one."
TOR users: caveat.
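For the curious, here is a minimal sketch of that kind of test, assuming a test server you control with a known-good certificate (the hostname is a placeholder); routing each probe through a particular exit node -- e.g., via Tor's local SOCKS proxy -- is left out.

    import hashlib
    import socket
    import ssl

    def leaf_cert_fingerprint(host: str, port: int = 443) -> str:
        """Return the SHA-256 fingerprint of whatever certificate the far end presents."""
        ctx = ssl.create_default_context()
        ctx.check_hostname = False         # verification is disabled on purpose:
        ctx.verify_mode = ssl.CERT_NONE    # we want to see the cert, fake or not
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    # baseline = leaf_cert_fingerprint("test-server.example.com")  # direct connection
    # Repeat the same call with traffic routed through each exit node; any
    # fingerprint that differs from the baseline means the exit node substituted
    # its own certificate -- a man-in-the-middle.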

UPDATED [11/21/2007]: Here are Heise Security's findings and there is now a Slashdot thread on the subject.

Monday, November 19, 2007

Possible Criminal Charges for Lost Laptops in the UK

Of course, the media are spinning this as "don't encrypt your laptop and you could go to jail" when the legislation is really aimed at "those who knowingly and recklessly flout data protection principles".

How many times does it need to be said? Encryption does not equal auto-magical security.

Encryption simply transitions the problem of data confidentiality into a key confidentiality problem. It trades one vast and complicated problem for one slightly less complicated problem. Key management is so crucial, yet it is rarely discussed in these forums. I would rather government officials' laptops not be encrypted than to have them encrypted with poor key management. It's better to know the problem exists than to pretend it doesn't. And it's worse to legislate everyone into pretending the key management problem doesn't exist.
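To make the point concrete, here is a minimal sketch (using the third-party cryptography package for Python; the data is made up): encryption merely relocates the secret from the data to the key, so a key stored beside the ciphertext buys nothing.

    # Minimal sketch using the third-party "cryptography" package for Python.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()                  # the new secret to protect
    ciphertext = Fernet(key).encrypt(b"citizens' records")

    # Poor key management: the key travels with the ciphertext, e.g. stored
    # on the same stolen laptop. The thief recovers everything:
    stolen_laptop = {"disk": ciphertext, "keyfile": key}
    print(Fernet(stolen_laptop["keyfile"]).decrypt(stolen_laptop["disk"]))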

Sunday, November 18, 2007

Analyzing Trust in Hushmail

Recently, law enforcement acquired confidential email messages from the so-called secure email service, Hushmail. Law enforcement exploited weaknesses in trust relationships to capture the passphrases protecting users' secret keys, which were then used to decrypt the confidential messages.

There are some lessons from this.

#1. Law enforcement trumps. This is not a lesson in Trust, per se, but keep in mind that large institutions have extensive resources and can be very persuasive, whether through threat of force or of financial loss. Possibly an extremely well-funded (read: expensive) service in a country that refuses to comply with US laws and policies (e.g., extradition) could keep messages secret (hence the proverbial Swiss bank account). There are definitely economic incentives to understand when evaluating the overall security of Hushmail's (or a similar service's) solution.

#2. A service like Hushmail, which sits in the middle as a broker for all of your message routing and (at least temporary) storage, is part of the Trusted Path between sender and receiver. Hushmail attempts to limit the scope of what is trusted by employing techniques that prevent its access to the messages, such as encrypting messages on the client side using a Java applet, or only storing passphrases temporarily when encrypting messages on the server side.

A user trusts that Hushmail won't change its passphrase storage from hashed (unknown to Hushmail) to plaintext (known to Hushmail) when the user chooses the server-side encryption option. A user also trusts that the Java applet hasn't changed from the version where strong encryption happens on the client side, without divulging either a copy of the message or the keys to Hushmail. The source code is available, but there is not much to guarantee that the published Java applet hasn't changed. The average, non-technical user will have no clue, since all the user sees is the visual interface, which looks the same either way. Hushmail could easily publish a signed, malicious version of the Java applet. There is no human-computer interface that can help the user make a valid trust decision.
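About the best a technical user could do is fetch the served applet and compare its digest against one published out of band -- something ordinary users will never do. A minimal sketch, with a hypothetical URL and digest:

    # Sketch only: the URL is hypothetical, and the expected digest would have
    # to come from a trustworthy out-of-band channel -- which is the hard part.
    import hashlib
    import urllib.request

    def applet_matches(url: str, expected_sha256: str) -> bool:
        """Download the served applet and compare it to a published digest."""
        with urllib.request.urlopen(url) as resp:
            served = resp.read()
        return hashlib.sha256(served).hexdigest() == expected_sha256

    # applet_matches("https://mail.example.com/encryptor.jar", published_digest)
    # Even a match only proves what *you* were served at that moment; the
    # server could still serve a different applet to a targeted user.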

#3. The Trusted Path also includes many other components: the user's browser (or browser rootkits), the OS and hardware (and all of the problematic components thereof), the network (including DNS and ARP), and last but not least, the social aspect (the people who have access to the user). There are many opportunities to find the weakest link in the chain of trust that do not involve exploiting weaknesses in the service provider. Old-fashioned, face-to-face message exchanges may have a shorter trusted path than a distributed, asynchronous electronic communication system with confidentiality controls built in (i.e., Hushmail's email). And don't forget Schneier's realism about cryptographic execution:
"The problem is that while a digital signature authenticates the document up to the point of the signing computer, it doesn't authenticate the link between that computer and Alice. This is a subtle point. For years, I would explain the mathematics of digital signatures with sentences like: 'The signer computes a digital signature of message m by computing m^e mod n.' This is complete nonsense. I have digitally signed thousands of electronic documents, and I have never computed m^e mod n in my entire life. My computer makes that calculation. I am not signing anything; my computer is."
#4. Services like Hushmail collect large quantities of encrypted messages, so they are a treasure trove for adversaries. Another economic aspect of the overall trust analysis is that the majority of web-based email users do not demand these features, so the subset of users who do require extra measures for confidentiality can be easily singled out -- regardless of whether the messages would implicate the users in illegal activity (or activity otherwise meaningful to some other form of adversary). And, at a minimum, there is always Traffic Analysis, where relationships can be deduced if email addresses can be linked to individuals. An adversary may not need to know what was sent, only that something was sent with extra confidentiality.
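A toy illustration of that last point, with made-up addresses: metadata alone (sender and recipient) exposes who talks to whom and how often, without a single message being decrypted.

    # Toy traffic analysis: message bodies stay encrypted and are never needed.
    from collections import Counter

    log = [                                   # hypothetical captured metadata
        ("alice@example.org", "bob@example.org"),
        ("alice@example.org", "bob@example.org"),
        ("alice@example.org", "carol@example.org"),
    ]

    for (sender, recipient), count in Counter(log).most_common():
        print(sender, "->", recipient, ":", count, "messages")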


To sum up, if you expect to conduct illegal or even highly-competitive activity through third-party "private" email services, you're optimistic at best or stupid at worst.

Wednesday, November 14, 2007

Pay Extra for their Mistakes: EV Certificates

Extended Validation (EV) SSL Certificates are one of the information security industry's worst cover-ups. And to make matters worse, it isn't the Certificate Authorities (CA) that are paying for the mistakes; it's us.

Basically, EV Certs work like this:
  1. Organization requests an EV Cert from a CA.
  2. CA goes through a more stringent legal process to authenticate the requesting organization.
  3. CA issues EV Cert to requesting organization.
  4. Web site visitor sees the address bar turn green (like Microsoft's fictitious Woodgrove Bank example).
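For reference, the browser-side check behind the green bar boils down to matching a policy OID in the certificate against a list the browser ships. A minimal sketch using the third-party cryptography package for Python; the OID set here is illustrative (2.23.140.1.1 is the CA/Browser Forum's generic EV policy OID, while real browsers keep per-CA tables):

    # Sketch only: real browsers pair each EV policy OID with a specific root CA.
    from cryptography import x509

    EV_POLICY_OIDS = {"2.23.140.1.1"}   # illustrative, not a browser's real list

    def is_ev(pem_bytes: bytes) -> bool:
        cert = x509.load_pem_x509_certificate(pem_bytes)
        try:
            policies = cert.extensions.get_extension_for_class(
                x509.CertificatePolicies).value
        except x509.ExtensionNotFound:
            return False                # no certificate-policies extension: not EV
        return any(p.policy_identifier.dotted_string in EV_POLICY_OIDS
                   for p in policies)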



The primary problem is that EV certs were not needed until it became painfully obvious, through phishing and other fraudulent scams, that CAs will issue SSL certificates to anyone with a few hundred dollars to buy one. If only CAs had followed the diligent process in the first place, there would be no market for EV certs. [The same could be said about DNS registrars making domain registration so easy, hence F-Secure's suggestion to create a new ".bank" top level domain (TLD).]

The secondary problem is that CAs are repackaging their failure to properly validate SSL certificate requests as a new and improved offering at a higher price. Many of the CAs are acting like drug pushers, offering their customers upgrades to EV SSL certs at no extra cost (only to have the renewals come in at the 20+% increased price). And there is the obvious complaint that the increased price for a green address bar gives an unfair advantage to big corporations over the independent small business owners who may only be able to afford traditional SSL certificates.

...
On to some meta-cognitive comments ... This rant is not exactly timely, in the sense that CAs have been mass-marketing EV certs for a while now, and many people have already articulated most of my complaints against them. But there is one key complaint I do not hear from industry analysts: the CAs should have been following the extended (i.e., diligent) process from the very beginning.