Thursday, February 21, 2008

Felten Destroys Whole Disk Encryption

Ed Felten and his colleagues published research findings today on a side-channel attack against whole disk encryption keys stored in DRAM.

We show that disk encryption, the standard approach to protecting sensitive data on laptops, can be defeated by relatively simple methods. We demonstrate our methods by using them to defeat three popular disk encryption products: BitLocker, which comes with Windows Vista; FileVault, which comes with MacOS X; and dm-crypt, which is used with Linux....

Our research shows that data in DRAM actually fades out gradually over a period of seconds to minutes, enabling an attacker to read the full contents of memory by cutting power and then rebooting into a malicious operating system....
Interestingly, if you cool the DRAM chips, for example by spraying inverted cans of “canned air” dusting spray on them, the chips will retain their contents for much longer. At these temperatures (around -50 °C) you can remove the chips from the computer and let them sit on the table for ten minutes or more, without appreciable loss of data. Cool the chips in liquid nitrogen (-196 °C) and they hold their state for hours at least, without any power. Just put the chips back into a machine and you can read out their contents.
This is deadly for disk encryption products because they rely on keeping master decryption keys in DRAM. This was thought to be safe because the operating system would keep any malicious programs from accessing the keys in memory, and there was no way to get rid of the operating system without cutting power to the machine, which “everybody knew” would cause the keys to be erased.
Our results show that an attacker can cut power to the computer, then power it back up and boot a malicious operating system (from, say, a thumb drive) that copies the contents of memory. Having done that, the attacker can search through the captured memory contents, find any crypto keys that might be there, and use them to start decrypting hard disk contents. We show very effective methods for finding and extracting keys from memory, even if the contents of memory have faded somewhat (i.e., even if some bits of memory were flipped during the power-off interval). If the attacker is worried that memory will fade too quickly, he can chill the DRAM chips before cutting power.
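The paper describes very effective key-finding methods that tolerate flipped bits. A much cruder heuristic, my own simplified sketch and not the authors' technique, is to scan a memory image for unusually high-entropy windows, since random key material stands out against mostly structured RAM contents:

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte over the given window."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def key_candidates(image: bytes, window: int = 16,
                   threshold: float = 3.8, step: int = 4) -> list:
    """Offsets whose window looks random enough to be key material.
    With a 16-byte window the maximum possible entropy is 4 bits/byte,
    so a threshold just below that flags only very random-looking data."""
    return [off for off in range(0, len(image) - window + 1, step)
            if shannon_entropy(image[off:off + window]) >= threshold]
```

The window size, threshold, and step are illustrative guesses; the real attack instead searches for the redundancy of an expanded AES key schedule, which is what lets it correct bit decay.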
This is a good example of academic security research. It shows that the trust whole disk encryption software places in the hardware, namely the assumption that DRAM contents vanish the instant power is cut, is misplaced.

There's even a video:

Tuesday, February 19, 2008

Websense CEO on AV Signatures

Websense CEO Gene Hodges on the futility of signature-based antivirus; just an excerpt:

On the modern attack vector: Antivirus software worked fine when attacks were generally focused on attacking infrastructure and making headlines. But current antivirus isn’t very good at protecting Web protocols, argued Hodges. “Modern attackware is much better crafted and stealthy than viruses so developing an antivirus signature out of sample doesn’t work,” said Hodges. The issue is that antivirus signature sampling starts with a customer being attacked. Then that customer calls the antivirus vendor, creates a sample, identifies the malware and then creates the signature. The conundrum for antivirus software comes when there’s malware that’s never detected. If you don’t know you’re being attacked there’s no starting point for a defense. “Infrastructure attacks are noisy because you wanted the victim to know they have been had. You didn’t have to be a brain surgeon to know you were hit by Slammer. Today’s malware attacks are stealthy and don’t want you to know it’s there,” said Hodges.

Is antivirus software necessary? Hodges said that antivirus software in general is still necessary, but the value is decreasing. Hodges recalled discussions at a recent conference and the general feeling from CIOs that viruses and worms were a solved problem. Things will get very interesting if there’s a recession and customers become more selective about how they allocate their security budgets. For instance, Hodges said CIOs could bring in Sophos, Kaspersky and Microsoft as antivirus vendors and “kick the stuffing out of the price structure for antivirus and firewalls.” The dollars that used to be spent on antivirus software could then be deployed against more data-centric attacks that require better access control, encryption and data leakage prevention. My take: Obviously, Hodges has a motive here since these budget dollars would presumably flow in Websense’s direction. That said, the argument that the value of antivirus software is declining makes a lot of sense and is gaining critical mass.

Web 2.0 as security risk. Hodges said Web 2.0–or enterprise 2.0–techniques could become a security risk in the future, but Websense “really hasn’t seen significant exploitation of business transactions of Web 2.0.” That said enterprises are likely to see these attacks in the future. For starters, enterprises generally allow employees to tap sites like YouTube, Facebook and MySpace. Those sites are big targets for attacks and connections to the enterprise can allow “bad people to sneak bad stuff into good places,” said Hodges. In other words, the honey pot isn’t lifting data from Facebook as much as it is following that Facebook user to his place of employment. Meanwhile, Web connections are already well established in the enterprise via automated XML transactions, service oriented architecture and current ERP systems. Hodges noted that Oracle Fusion and SAP Netweaver applications fall into the Web 2.0 category.


Even the security CEOs can see it (the futility of signature-based anti-malware, that is).

Thursday, February 14, 2008

Localhost DNS Entries & "Same Site Scripting"

I'm not a big fan of new names for variations of existing attacks, but Tavis Ormandy (of Google) has pointed out an interesting way to leverage non-fully qualified DNS entries for localhost (127.0.0.1) with XSS:
It's a common and sensible practice to install records of the form
"localhost. IN A 127.0.0.1" into nameserver configurations, bizarrely
however, administrators often mistakenly drop the trailing dot,
introducing an interesting variation of Cross-Site Scripting (XSS) I
call Same-Site Scripting. The missing dot indicates that the record is
not fully qualified, and thus queries of the form
"localhost.example.com" are resolved. While superficially this may
appear to be harmless, it does in fact allow an attacker to cheat the
RFC2109 (HTTP State Management Mechanism) same origin restrictions, and
therefore hijack state management data.

The result of this minor misconfiguration is that it is impossible to
access sites in affected domains securely from multi-user systems. The
attack is trivial, for example, from a shared UNIX system, an attacker
listens on an unprivileged port[0] and then uses a typical XSS attack
vector (e.g. in an html email) to lure a victim into
requesting http://localhost.example.com:1024/example.gif, logging the
request. The request will include the RFC2109 Cookie header, which could
then be used to steal credentials or interact with the affected service
as if they were the victim.
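The listener Tavis describes can be sketched in a few lines. This is my own hedged illustration of the idea, not his tool; the port selection and path are made up:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

captured = []  # Cookie headers logged by the unprivileged listener

class CookieLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        # A browser resolving localhost.example.com to 127.0.0.1 attaches
        # any cookie scoped to .example.com to this request; record it.
        captured.append(self.headers.get("Cookie"))
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):
        pass  # keep stderr quiet

# Port 0 asks the OS for any free (unprivileged) port.
server = HTTPServer(("127.0.0.1", 0), CookieLogger)
threading.Thread(target=server.serve_forever, daemon=True).start()
```

A victim lured into requesting http://localhost.example.com:&lt;port&gt;/example.gif on the same multi-user host would hand this listener its RFC 2109 Cookie header.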

Tavis recommends removing localhost entries from DNS servers that lack the trailing period (i.e. "localhost" vs. "localhost."). The trailing period ensures that somebody cannot set up camp on 127.0.0.1 and steal your web application's cookies or run any other malicious dynamic content in the same domain, exploiting DNS for same origin policy attacks.
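The underlying mechanics are RFC 2109's domain-match rule: a cookie set with Domain=.example.com is sent to any host under that domain, and localhost.example.com qualifies. A minimal sketch of the rule (simplified; real browser cookie logic has many more cases):

```python
def domain_match(host: str, cookie_domain: str) -> bool:
    """Simplified RFC 2109 domain-match: a cookie with
    Domain=.example.com matches example.com itself and
    every host beneath it, including localhost.example.com."""
    host, cookie_domain = host.lower(), cookie_domain.lower()
    if not cookie_domain.startswith("."):
        return host == cookie_domain
    return host == cookie_domain[1:] or host.endswith(cookie_domain)
```

This is why an attacker who controls what 127.0.0.1 serves under localhost.example.com receives the same cookies as www.example.com.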

Friday, February 1, 2008

WiKID soft tokens

I promised Nick Owens at WiKID Systems a response and it is long overdue. Nick commented on my "soft tokens aren't tokens at all" post:
Greetings. I too have posted a response on my blog. It just points out that our software tokens use public key encryption and not a symmetric, seed-based system. This pushes the security to the initial validation/registration system where admins can make some choices about trade-offs.

Second, I submit that any device with malware on it that successfully connects to the network is bad. So you're better off saving money on tokens and spending it on anti-malware solutions, perhaps at the gateway, defense-in-depth and all.

Third, I point out that our PC tokens provide https mutual authentication, so if you are confident in your anti-malware systems, and are concerned about MITM attacks at the network, which are increasingly likely for a number of reasons, you should consider https mutual auth in your two-factor thinking.

Here's the whole thing:
On the security of software tokens for two-factor authentication
and thanks for stimulating some conversation!

Here is their whitepaper on their soft token authentication system.

Unfortunately, WiKID is first and foremost vulnerable to the same sort of session-stealing malware that Trojan.Silentbanker uses. It doesn't matter how strong your authentication system is when there is a large pile of untrustworthy software between the user and the server-side application (e.g. the browser, the OS, third-party applications, and all the malware that goes with them). I'll repeat the theme: it's time to start authenticating the transactions, not the user sessions. I went into a little of what that might look like.

Nick is aware of that, which is why he made point number two above. But here's the real problem: the web is designed for dynamic code to be pulled down side-by-side with general data, acquired from multiple sources, and run in the same security/trust context. Since our browsers don't know which is information and which is instructions until runtime, AND since the instructions are dynamic (meaning they may not be there on the next site visit), how is it NOT possible for malware to live in the browser? I submit that it is wiser to NOT trust your clients' browsers, their input into your application, etc., than to trust that a one time password credential really was input by the proper human on the other end of the software pile. I suggest that organizations spend resources on detecting and recovering from security failures (out of band mechanisms come to mind: a good old fashioned phone call to confirm that $5,000 transaction to a foreign national, perhaps?), rather than assuming the money they invested in some new one time password mechanism exempts them from any such problems.
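To make the "authenticate the transaction, not the session" idea concrete, here is a hedged sketch of my own (the key handling and code format are invented for illustration, not any vendor's scheme): a short code cryptographically bound to the payee and amount, so malware riding a stolen session cannot redirect the transfer without invalidating the code.

```python
import hmac
import hashlib

def transaction_code(key: bytes, payee: str, amount_cents: int) -> str:
    """An 8-digit confirmation code bound to this specific payee and
    amount; replaying it for a different transfer fails verification."""
    message = f"{payee}|{amount_cents}".encode()
    mac = hmac.new(key, message, hashlib.sha256).digest()
    return f"{int.from_bytes(mac[:4], 'big') % 10**8:08d}"
```

The server recomputes the code over the transaction details it is about to execute; if malware altered the payee in transit, the codes no longer match.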

Microsoft published a document titled "10 Immutable Laws of Security" (never mind for now that they are neither laws, nor immutable, nor even concise), and point number one is entirely relevant: "Law #1: If a bad guy can persuade you to run his program on your computer, it's not your computer anymore". Where does javascript, a Turing-complete programming language, fall under that? If you completely disable script in your browser, most applications break. But if you allow it to run, behaviors you cannot control run on your behalf. Taking Nick's advice, we should be spending our time and resources solving the code and data separation problem on the web, not implementing one time passwords (and I agree with him on that).



Second, I have a hard time calling WiKID a token, not because it couldn't fit that definition, but because it is a public key cryptography system. I have never referred to a PGP key pair as a token, nor have I heard anyone else do so. Likewise, I don't ever hear anyone say "download this x509 token"; instead they say "x509 certificate". Smart cards might be the saving-grace example that lets me stretch my mind around the vocabulary: generally speaking, a smart card is a physical "token", and smart card implementations can carry a PKC key pair. So I'll have to extend my personal schema, so to speak, and allow WiKID into the "token" category (but just barely).

The x509 cert example is a great analogy, because under the hood that's basically how WiKID works. Just like an HTTPS session, it swaps public keys (which allows for the mutual authentication) and then creates a session key for challenge/response: the readout of the "soft token" that the user types into the password form, for example.
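Under my own stated assumptions (the names, code length, and truncation scheme here are mine, not WiKID's), the challenge/response readout after the key exchange might look something like:

```python
import hmac
import hashlib

def otp_response(session_key: bytes, challenge: bytes,
                 digits: int = 6) -> str:
    """Token-side readout: a short code derived from the per-session
    key and the server's challenge, typed into the password field."""
    mac = hmac.new(session_key, challenge, hashlib.sha256).digest()
    return str(int.from_bytes(mac[:4], "big") % 10**digits).zfill(digits)

def verify(session_key: bytes, challenge: bytes, response: str,
           digits: int = 6) -> bool:
    """Server side: recompute the expected readout and compare
    in constant time."""
    expected = otp_response(session_key, challenge, digits)
    return hmac.compare_digest(expected, response)
```

Because the session key came out of a fresh public key exchange, the readout is useless outside that session, which is the property that distinguishes this design from seed-based symmetric tokens.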


There is one concerning issue with WiKID: it uses a relatively unknown public key encryption algorithm called NTRU. NTRU aims to run in low-resource environments like mobile phones, which is undoubtedly why WiKID employs it. NTRU is also patented by NTRU Cryptosystems, Inc. (the patent may have business/political ramifications similar to PGP's original IDEA algorithm). However, when choosing an encryption algorithm, it is best to use one that has withstood significant peer review. Otherwise, what we have come to know and love as "security by obscurity", the trap Kerckhoffs' principle warns against, will eat us alive when the first decent attack reduces our security to rubble. Googling for "NTRU cryptanalysis" returns around 3,000 hits; "RSA cryptanalysis" returns around 186,000, two orders of magnitude more. This is not a nail in WiKID's coffin, but it could amount to betting the company on Betamax: NTRU is undoubtedly less studied than, say, RSA or Elliptic Curve cryptography. In most aspects of life, backing the underdog can be great fun. Doing it in crypto, however, may not be a good idea.

Before somebody reads the above paragraph and runs to an extreme in either direction, please note my point: the workhorse of WiKID, the NTRU encryption algorithm, has an unknown security value. One could argue that RSA likewise has only a mostly known security value, but you decide: "mostly known" or "unknown"? There may be no problems in NTRU, and it may be perfectly safe and secure to use. Conversely, using it may be the worst decision ever. That's what peer review helps us decide.


...
To sum up ... WiKID is cheap, open source, interesting, and ... still vulnerable to malware problems. And don't forget: you have to choose to use a less popular encryption algorithm.