Tuesday, December 30, 2008

Forging RSA-MD5 SSL Certs

Wow. This is a big deal:
The forged certificates will say they were issued by a CA called "Equifax Secure Global eBusiness", which is trusted by the major browsers. The forged certificates will be perfectly valid; but they will have been made by forgers, not by the Equifax CA.
To do this, the researchers exploited a cryptographic weakness in one of the digital signature methods, "MD5 with RSA", supported by the Equifax CA. The first step in this digital signature method is to compute the hash (strictly speaking, the cryptographic hash) of the certificate contents.
The hash is a short (128-bit) code that is supposed to be a kind of unique digest of the certificate contents. To be secure, the hash method has to have several properties, one of which is that it should be infeasible to find a collision, that is, to find two values A and B which have the same hash.
It was already known how to find collisions in MD5, but the researchers improved the existing collision-finding methods, so that they can now find two values R and F that have the same hash, where R is a "real" certificate that the CA will be willing to sign, and F is a forged certificate. This is deadly, because it means that a digital signature on R will also be a valid signature on F -- so the attacker can ask the CA to sign the real certificate R, then copy the resulting signature onto F -- putting a valid CA signature onto a certificate that the CA would never voluntarily sign.
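To make the mechanics concrete, here is a minimal Python sketch, with an HMAC standing in for the CA's RSA operation and made-up certificate contents (the real attack needs the researchers' actual colliding pair): the signature covers only the MD5 digest of the certificate, so any forged certificate with the same digest inherits the real certificate's signature.

    import hashlib, hmac

    CA_SECRET = b"stand-in for the CA's RSA private key"   # hypothetical

    def ca_sign(cert_bytes):
        digest = hashlib.md5(cert_bytes).digest()           # the CA signs only this digest
        return hmac.new(CA_SECRET, digest, hashlib.sha256).digest()

    def ca_verify(cert_bytes, signature):
        digest = hashlib.md5(cert_bytes).digest()
        expected = hmac.new(CA_SECRET, digest, hashlib.sha256).digest()
        return hmac.compare_digest(signature, expected)

    real_cert = b"CN=innocuous-site.example (a request the CA will happily sign)"
    forged_cert = b"CN=rogue CA (crafted so that md5(forged) == md5(real))"

    signature = ca_sign(real_cert)
    print(ca_verify(real_cert, signature))    # True
    print(ca_verify(forged_cert, signature))  # True only if the MD5 digests really collide,
                                              # which is exactly what the collision attack arranges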
Browsers rarely have their list of trusted CA certificates modified over their lifetimes. Most people don't know how to change that list, let alone why they should. In Firefox 3, a CA can be removed by going to Preferences > Advanced > Encryption > View Certificates > Authorities > select the certificate and click Delete. I assume the CA cert in question is the one with the following fingerprint, but cannot say for certain (since it has yet to be published):
8F:5D:77:06:27:C4:98:3C:5B:93:78:E7:D7:98:CC
The question is how to respond to this. There are many CAs that use RSA-MD5 instead of RSA-SHA1. Ripping them from the CA list is probably a good idea, even if it breaks a few web apps. If you are the admin of an e-commerce site using a cert issued by one of these RSA-MD5 CAs, you should probably: 1) ask for your money back and switch to a different CA, 2) ask for a new cert signed with RSA-SHA1, or 3) forgo the purchased certs in favor of new RSA-SHA1-signed certs -- probably in that order of effectiveness.
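If you want to check whether your own site is affected, one quick way (sketched with today's Python cryptography library and a placeholder hostname) is to pull the served certificate and look at the hash algorithm the issuing CA signed it with:

    import ssl
    from cryptography import x509

    pem = ssl.get_server_certificate(("www.example.com", 443))   # substitute your host
    cert = x509.load_pem_x509_certificate(pem.encode())
    print(cert.signature_hash_algorithm.name)                    # "md5" here is the red flag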
It is interesting to see a practical attack built on MD5 collisions, though. Many people thought one wasn't likely.
UPDATED: More info here, too.

Monday, December 8, 2008

The Stupidest PCI Requirement EVER!

The Payment Card Industry (PCI) regulatory compliance goals are good, but not perfect. Some individual requirements in the Data Security Standard (DSS) are flat-out ridiculous. In particular, a couple of requirements regarding key management take the cake.
3.5.2 Store cryptographic keys securely in the fewest possible locations and forms.
...
3.6 Fully document and implement all key-management processes and procedures for cryptographic keys used for encryption of cardholder data, including the following:
...
3.6.3 Secure cryptographic key storage.
Hmm. Before we even get too far, there is a redundancy between 3.5.2 and 3.6.3. Why even have 3.5.2 if 3.6 covers the same items in more detail? I digress ...

3.6.6 Split knowledge and establishment of dual control of cryptographic keys.
What the authors of the DSS were thinking was that PCI-compliant merchants would implement Cold War-esque missile silo techniques, in which two military officers each place a physical key into a control console and punch in their portion of the launch code sequence. This is technically possible to do with schemes like Adi Shamir's key-splitting techniques. However, it rarely makes sense to do so.
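For the record, the dual-control idea itself is trivial to implement; here is a minimal sketch of a 2-of-2 split (plain XOR sharing rather than Shamir's full threshold scheme), just to show that the objection that follows is about operations, not feasibility:

    import os

    def split_key(key):
        # 2-of-2 split: neither custodian's share reveals anything about the key on its own
        share_a = os.urandom(len(key))
        share_b = bytes(x ^ y for x, y in zip(key, share_a))
        return share_a, share_b

    def reconstruct(share_a, share_b):
        return bytes(x ^ y for x, y in zip(share_a, share_b))

    dek = os.urandom(16)              # hypothetical 128-bit data-encrypting key
    a, b = split_key(dek)
    assert reconstruct(a, b) == dek   # both custodians must be present to rebuild it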

Consider an automated e-commerce system. The notion of automation means it works on its own, without human interaction. If that e-commerce system needs to process or store credit card numbers, it will need to encrypt and decrypt them as transactions happen. In order to do those cryptographic functions, the software must have access to the encryption key. It makes no sense for the software to only have part of the key or to rely on a pair of humans to provide it a copy of the key. That defeats the point of automation.

If the pieces of the key have to be put together for each transaction, then a human would have to be involved with each transaction-- definitely not worth the expense! Not to mention an exploit of a vulnerability in the software could result in malicious software keeping a copy of the full key once it's unlocked anyway (because it's the software that does the crypto functions, not 2 people doing crypto in their heads or on pen and paper!).

If a pair of humans is only involved with the initial unlocking of the key, then the software gets a full copy of the key anyway. Any exploit of a vulnerability in the software could potentially read the key, because the key is in its running memory. So, on the one hand, there is no requirement for humans to be involved with each transaction, and thus the e-commerce system can operate more cheaply than, say, a phone-order system or a brick-and-mortar retailer. On the other hand, each restart of the application software requires a set of two humans to get the system back online. Imagine the ideal low-overhead e-commerce retailer planning vacation schedules for its minimal staff around this PCI requirement! PCI essentially dictates that more staff must be hired! Or that support staff who otherwise would NOT have access to a portion of the key (because they take level 1 calls or work in a different group) now must be trusted with a portion of it. More hands involved means more opportunity for collusion, which increases risk by increasing the likelihood of an incident -- NOT what the PCI folks are trying to accomplish!

The difference between a Cold War missile silo and an e-commerce software application is the number of "secure" transactions each must handle. Missile silos do not launch missiles at the rate of several hundred to several thousand an hour, but good e-commerce applications can process that many credit cards. When there are few (albeit more important) transactions, like entering launch codes, it makes sense to require the attention of a couple of different people.

So splitting the key such that an e-commerce software application cannot have the full key is stupid.

But then there is the coup de grâce in the "Testing Procedures" of 3.5.2:
3.5.2 Examine system configuration files to verify that keys are stored in encrypted format and that key-encrypting keys are stored separately from data-encrypting keys.
This is the ultimate in pointless PCI requirements. The real-world analogue is taking valuables and stashing them in a safe that is unlocked with a key (the "data-encrypting key" or DEK in PCI parlance). Then, stash the key to the first safe in a second safe, also unlocked by a key (the "key-encrypting key" or KEK in PCI parlance). Presumably, at this point, this second key will be like something out of a James Bond film, where the key is actually in two parts, each possessed by one of two coordinating parties who reside in two geographically distinct locations. In practice, however, the second key is typically just slipped under the floor mat, and the two safes sit right next to one another. It takes a little longer to get the valuables out of the safe, but it does little to actually prevent a thief from doing so.

In an e-commerce system, it's no different. All of the same pointlessness of splitting keys (described above) still applies, but now there is an additional point of failure and complexity: the KEK. Encryption does add overhead to a software application's performance, though the trade-off is normally warranted. However, if at each transaction the software needs to use the KEK to unlock the DEK and then perform an encrypt or decrypt operation on a credit card number, the overhead is now double what it was previously. As such, most software applications that use KEKs don't do that. Instead, they just use the KEK to unlock the DEK once and keep the DEK in memory for the duration of operation, until presumably the software or server needs to be taken offline for reconfiguration of some kind. With an encryption key in memory, there's still the plausible risk that an exploit of a vulnerability in the software application could result in disclosure of the key or the records that are supposedly protected by it. And even once the key has been in memory, recent research on data remanence reminds us that RAM holds onto sensitive data like keys for longer than we might expect.
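A minimal sketch of that typical layout (using Python's Fernet purely for illustration; nothing here is prescribed by the DSS) shows why the KEK buys so little: after startup, the DEK sits in process memory regardless.

    from cryptography.fernet import Fernet

    kek = Fernet.generate_key()                # in practice, read from "somewhere else"
    dek = Fernet.generate_key()
    wrapped_dek = Fernet(kek).encrypt(dek)     # this is what sits on the file system

    # --- application startup: unwrap once ---
    dek_in_memory = Fernet(Fernet(kek).decrypt(wrapped_dek))   # DEK now resident in RAM

    # --- every transaction afterwards uses the resident DEK, never the KEK ---
    token = dek_in_memory.encrypt(b"4111111111111111")         # test PAN, not a real card
    print(dek_in_memory.decrypt(token))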

And if a software application is allowed to initiate and unlock its keys without a human (or a pair of humans, in the case of key splitting), such as when the e-commerce application calls for a distributed architecture of multiple application servers in a "farm", then there really is no point in having a second key. If the application can read the second key to unlock the first key, then so could an attacker who gains the same rights as the application. The software might as well just read in the first key in unencrypted form, which would at least be a simpler design.

If the threat model is the server administrator stealing the keys, then what's the point? Administrators have FULL CONTROL. The administrator could just as easily steal the DEK and the KEK. And no, the answer is not encrypting the KEK with a KEKEK (Key-Encrypting-Key-Encrypting Key), nor with a KEKEKEK, etc., because no matter how many keys are involved (or how many safes with keys in them), the last one has to be in plaintext (readable) form! The answer is to make sure the admins are trustworthy: do periodic background checks, psych evaluations where they are legal, pay them well, and do your best to monitor their actions.

If the threat model is losing (or having stolen) a backup tape of a server's configuration and data, then, again, a KEK offers little help, since an attacker who has physical access to a storage device has FULL CONTROL and can read the KEK and then decrypt the DEK and the credit card records.

It is also commonly suggested by PCI assessors that KEKs be stored on different servers to deal with the threat of an administrator stealing the DEK. But that just rehashes the same problems all over again.

If the application on Server A can automatically read in the KEK on Server B, then (again) a vulnerability in the application software could potentially allow malware to steal the KEK from Server B or cache a copy of it once it is in use. If the admin of Server A also has admin access to Server B, then it's the same problem there, too; the admin can steal the KEK from Server B and unlock the DEK on Server A. If the admin does NOT have access to the KEK, then it's the two-humans-required-for-support scenario all over again, just like key splitting. If Server A cannot read the KEK when it is authorized to do so (say, the mechanism for reading the KEK from Server B is malfunctioning), then Server B's admin must be called (Murphy's Law indicates it will be off-hours, too) to figure out why it's not working. And in a small-to-medium-sized e-commerce group, like the ideal low-overhead operation, it will almost always be the same person or couple of people who have admin access to both. At large scale, an admin of Server A and an admin of Server B can simply choose to collude and share the ill-gotten gains from the theft of the KEK, the DEK, and the encrypted credit card records.

What about a small e-commerce application, where the web, application, and database tiers are contained in one physical server for cost reasons? In that case, the "scope" of PCI compliance might previously have been a single server, but using a secondary server to house a single KEK now brings that secondary server into the litany of other PCI requirements, which will likely erode the cost benefit of housing all tiers of the application on a single server in the first place.

On the off chance that a backup copy of Server A's configuration and data is lost or stolen, there will be a false sense of temporary hope if Server B's backup is not also lost or stolen. However, collusion and/or social engineering of an administrator of Server B still applies. The attacker also now has time to reverse engineer the software that is in his hands. Is there a remote code execution bug in the software that could allow an attacker to leak either the full encryption key or individual credit card records? Is there a flaw in the way the crypto keys were generated or used that knocks the brute-force keyspace off its lofty 2^128 perch? Did the developers forget to turn off verbose logging after the support incident last weekend? Are there full transaction details in the clear in a log file? A better approach would be to use some sort of full storage volume encryption INSTEAD of transaction encryption, so that none of those avenues would be open to the thief. But in practice, that is rarely done to servers (and somehow not mandated by the PCI DSS).

So storing the KEK on multiple servers just introduces more support complexity than it reduces risk from data theft.

And since these requirements don't guarantee anything of substance that would pass Kerckhoffs's principle (i.e. security by obscurity does not make these key storage techniques more "secure"), we can also say that having multiple keys, separate KEK storage, and key splitting all violate 3.5.2, because the keys are not stored "securely in the fewest possible locations and forms".

...

To Recap: Splitting keys is not feasible in most cases; it negates the benefit of having fewer people involved with e-commerce. Encrypting keys with other keys is an attempt to build a computer-security perpetual motion machine. If you really are paranoid, implement full volume encryption on your servers. If you're not, well, ditch the transaction crypto unless you just happen to have CPU cycles to spare. If you must be "PCI Compliant" (whatever that means outside of "not paying fines"), then implement your e-commerce app to have an "auditor mode" by default, where it requires two people to each type in part of a key for the application to start. Then let it have a normal "back door" mode where it just uses the regular key for everything. [Most PCI assessors are not skilled or qualified enough to spot the difference by inspecting the software's behavior. They really just want a screen shot to go into their pretty report. Of course, this requires a detailed understanding of the ethics involved, and your mileage may vary.]

And remember: "you're only paranoid if you're wrong."


...

UPDATED: It was pointed out that there is even one more crazy method that PCI assessors think can turn this polished turd into alchemist's gold: "encode" the KEK into the software's binary. By "encode" they mean "compile", as in have a variable (probably a text string) that contains the value of the KEK. Rather than have the software read the KEK value in from a file, have it just apply the KEK from the static or constant variable in the decryption operation that unlocks the DEK. This is just as stupid as the above. If the point of having a KEK and a DEK is to prevent someone who has access to the file system from unlocking credit card records, then the PCI folks completely missed "Intro to Pen Testing 101", which describes the ultra l33t h4x0r tool called "strings". Any text strings (ASCII, Unicode, what have you) can be extracted by that ages-old command line utility. So, if the threat model is somebody who stole the contents of a server's disks -- they win. If the threat model is a server administrator -- they win. Not to mention, the common practice of software developers is to store source code in a repository, presumably a shared repository. Any static variable that is "encoded" into a binary will live in source code (unless, I guess, the developer is sadistic enough to fire up a hex editor and find/replace the ASCII values in the binary after it's compiled), and source code lives in repositories, which means even more opportunities for collusion. This type of crypto fairy-dust magic is pure fiction -- it just doesn't work the way they think it does.
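For anyone who has not seen it, the whole "attack" fits in a few lines; here is a rough Python re-implementation of what strings does (point it at whatever binary holds the "encoded" KEK):

    import re, sys

    def printable_runs(path, min_len=8):
        # pull out runs of printable ASCII, which is where a compiled-in KEK ends up
        data = open(path, "rb").read()
        return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)

    for run in printable_runs(sys.argv[1]):    # e.g. the e-commerce application's binary
        print(run.decode("ascii"))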

Monday, October 27, 2008

Banks, Malware, and More Failing Tokens

The Kaspersky folks have an interesting report on malware that targets the banking and financial markets, and it supports and echoes many of the things posted here over the last several months. For one, the banking industry is receiving targeted malware, which makes it more difficult for "signature"-based anti-malware solutions to find it. For two, second-factor authentication tokens still don't solve the malware-in-the-browser problem.
"In order for a cyber criminal to be able to perform transactions when dynamic passwords are in place using phishing, s/he has to use a Man-in-the-Middle attack.... Setting up a MitM attack is inherently more difficult than setting up a standard phishing site; however, there are now MitM kits available, so cyber criminals can create attacks on popular banks with a minimum of effort."

Tuesday, September 23, 2008

Venema on Spam

I'm grateful for physicist Wietse Venema's contributions (SATAN, The Coroner's Toolkit, TCP Wrappers, and Postfix) to the computer security world, but I certainly hope Venema's anti-spam solution never gets implemented:

The best theoretic solution is to change the email distribution model, but this may never happen. Right now, email is a "push" technology where the sender has most of the control, and where the receiver bears most of the cost.

The alternative is to use a "pull" model, where the sender keeps the email message on their own server until the receiver downloads it. For example, when my bank wants to send me email, they would send a short message with an URL to view their mail, and my email software would download the message for me. This assumes of course that my email software recognizes my bank's email digital signature and their Web site's SSL certificate, otherwise we would have a phishing problem. Legacy mail software would tell the user that they have email at their bank, and leave it up to the user to download their email.

The "pull" model would change the economics of email. It would move the bulk of the cost from the receivers where it is now, to the senders where it belongs. No-one would read email if its sender doesn't provide a service where recipients can download it from.

Except that his proposed "pull" model would change the incentives in such a way that email users would not opt in. Blackberries and the like use a "push" model today so that busy execs (or wannabe middle managers) can read email on tube trains without connectivity. Gmail wants to have that message pulled down and indexed, ready for searching (a different set of security issues). Not to mention that users would now have to decide whether or not to "pull" an email based on just the metadata (e.g. sender's address and subject line), not full content inspection. What happens when my friend's machine is hijacked and his outgoing mailbox is full of spam or viruses destined for me? I would have no way to tell without the details. Often, I cannot tell whether something is worth reading without skimming the body of the message. It could be my bank telling me my statement is available or that they want to offer me yet another home equity loan (the former is interesting; the latter is junk). It's just not going to work.

I'm all in favor of changing the economics of the situation. I just don't think this does it.

Saturday, September 13, 2008

Computer Security is Harder than Nuclear Physics

It's official. We now have conclusive evidence. Computer Security is, in fact, more difficult than nuclear physics. I submit to you, exhibit A:
As the first particles were circulating in the machine near Geneva where the world wide web was born, a Greek group hacked into the facility, posting a warning about weaknesses in its infrastructure.
Calling themselves the Greek Security Team, the interlopers mocked the IT used on the project, describing the technicians responsible for security as "a bunch of schoolkids."

However, despite an ominous warning "don't mess with us," the hackers said they had no intention of disrupting the work of the atom smasher.
"We're pulling your pants down because we don't want to see you running around naked looking to hide yourselves when the panic comes," they wrote in Greek in a rambling note posted on the LHC's network.
The scientists behind the £4.4 billion "Big Bang" machine had already received threatening emails and been besieged by telephone calls from worried members of the public concerned by speculation that the machine could trigger a black hole to swallow the earth, or earthquakes and tsunamis, despite endless reassurances to the contrary from the likes of Prof Stephen Hawking.
The website - www.cmsmon.cern.ch - can no longer be accessed by the public as a result of the attack.
Scientists working at Cern, the organisation that runs the vast smasher, were worried about what the hackers could do because they were "one step away" from the computer control system of one of the huge detectors of the machine, a vast magnet that weighs 12500 tons, measuring around 21 metres in length and 15 metres wide/high.
If they had hacked into a second computer network, they could have turned off parts of the vast detector and, said the insider, "it is hard enough to make these things work if no one is messing with it."
Fortunately, only one file was damaged but one of the scientists firing off emails as the CMS team fought off the hackers said it was a "scary experience".
The hackers targeted the Compact Muon Solenoid Experiment, or CMS, one of the four "eyes" of the facility that will be analysing the fallout of the Big Bang.
The CMS team of around 2000 scientists is racing with another team that runs the Atlas detector, also at Cern, to find the Higgs particle, one that is responsible for mass.
"There seems to be no harm done. From what they can tell, it was someone making the point that CMS was hackable," said James Gillies, spokesman for Cern. "It was quickly detected."
In all seriousness, computer security is a difficult problem. Very difficult. So difficult that it is usually not even properly defined. In this HUGE scientific experiment, with billions spent to get to where they currently are, not to mention the world's brightest scientists (and no doubt a tip-top IT staff to support them), there was still at least one vulnerability that threatened total loss of control of their IT systems (including the ones controlling the controversial new device).

Saturday, August 23, 2008

Gmail Mobile Insecurity

Google just released a new set of security features for Gmail. However, you cannot turn on the "always use HTTPS" option if you are also using the older Java-based Gmail Mobile client for smart phones, like the Blackberry. They have written that app to always fetch new mail and post actions like delete and archive over HTTP instead of HTTPS. With Gmail's new require-HTTPS feature enabled, the older mobile client will error out, complaining that it cannot fetch mail. The new version (2.0.5 as of right now) is a little quirky with this setting, but it will work with require-HTTPS enabled.

Without the new version, smart phones that can fetch content over local WiFi have a ready-made attack vector. Blackberries, which route all traffic through an encrypted tunnel back to the company's BES (Blackberry Enterprise Server), would find themselves vulnerable to eavesdropping and MITM closer to the BES (i.e. the corporate LAN), or, of course, at any hop along the way from the corporate LAN through the ISPs (but ISPs would never snoop on your email ;).

With these embedded devices, how many people stop to think about which protocols these apps use under the hood? It's not like on a traditional browser, where the user can at least monitor link destinations via a status bar. I would also venture to say that not too many people worry about keeping app versions up to date-- there haven't been too many nagging update applications for the majority of smart phones, yet. Google's Mobile Updater requires the user to go in and manually check for new versions. So, it's even more imperative for these app developers to get it right the first time.

Gmail Notifier is also experiencing similar issues.

Saturday, August 16, 2008

The Case of MIT Subway Hackers

By now, you may have read about a group of MIT students who were set to present some insecurity details of the "CharlieCard" subway system used in Boston and elsewhere. [Just this summer I had wondered exactly what data resides on the magnetic stripes of a similar transit card from the Washington, DC subway system. It doesn't take much hard thinking to realize that a requirement for these machines is that they have to work regardless of whether there is connectivity back to the central office to validate a transit card; ergo, there must be monetary values stored on the card.] The transit authority didn't like it and got a federal judge to issue a gag order. A lot of good that did, because MIT still has the presentation (and other items) posted here. And now the Electronic Frontier Foundation is fighting the legal battle for the students. But the real question here (which stirs the disclosure debate once again) is: who is at fault?

Short version: everybody is at fault.

Long version ...

On the one hand, this group of students, led by the world-renowned Ron Rivest (that's right, the "R" in the "RSA" encryption algorithm), only informed the transit authority a week before the presentation that they were going to give the talk at DEFCON. That's right, just one week. And their advisor, Ron Rivest, was out of town for at least part of that time. The CFP (Call For Papers) closed on May 15. DEFCON's policy is to notify submitters of acceptance within two business days. So the MIT undergrads should have known no later than Monday, May 19, that they were going to be giving the talk. This wasn't impromptu. In order to get accepted, they would have had to bring the goods. So they clearly knew enough to start the communication process with the Transit Authority. Giving the MBTA less than a week to respond (this is a bureaucratic government agency we're talking about -- nothing gets done in one week) certainly put them on the defensive. That was a stupid mistake by the MIT crew.

On the other hand, fueled by stick-your-head-in-the-sand and I-hate-academic-research attitudes, the Transit Authority's gag order sets a dangerous precedent. Two Ivy League computer science professors with quite the reputation when it comes to security, Matt Blaze at U Penn and Columbia University's Steve Bellovin, have spoken out against this bad precedent, arguing that it stifles needed future research. They also signed a letter from the EFF in an attempt to overturn the ruling (PDF). Here is the list of professors and researchers who signed the letter:
  • Prof David Farber, Carnegie Mellon
  • Prof Steven M Bellovin, Columbia University
  • Prof David Wagner, UC Berkeley
  • Prof Dan Wallach, Rice University
  • Prof Tadayoshi Kohno, U Washington
  • Prof David Touretzky, Carnegie Mellon
  • Prof Patrick McDaniel, Penn State University
  • Prof Lorrie Faith Cranor, Carnegie Mellon
  • Prof Matt Blaze, U Penn
  • Prof Stefan Savage, UC San Diego
  • Bruce Schneier
We have to acknowledge that public disclosure of paradigms of attack and defense is key to our success. Note that "paradigms" does not include or insinuate specific attack details, nor does it applaud the bug parade that some (in)security researchers use to justify their careers' existence. Whether or not the MIT kids were just another set of bug paraders, a restraining order needs to be considered far more carefully than it was here. Transparency in design is what keeps the best security systems in place.

Another fault is that both the federal court and the MBTA thought they could control the dissemination of the information solely with a gag order. Have they any clue what sort of people attend Blackhat and DEFCON every August in Las Vegas? The injunction was late; the CDs with presentation materials were already printed and distributed to paying attendees. Good luck getting that content back (even if MIT weren't bold enough to keep it posted). And there's another HUGE side effect of gagging presenters at DEFCON: it spurs the roughly six or seven thousand attendees (and all of the others around the world who didn't attend for one reason or another) to take what little knowledge is public (i.e. subway payment cards can be hacked to get free rides) and wage an all-out war against the people trying to withhold the information (the MBTA). Even if the talk wasn't one of the best this year, it now holds a forbidden-fruit status, which is more than enough justification for a bunch of people (who are paid to understand how systems break) to spend extra time figuring out the forbidden details. That was plain stupid on the MBTA's part. Even if they thought the students were criminal or inciting criminal fraud, they should have taken another approach (e.g. finding a way to make them financially liable for lost revenue due to fraud committed with these techniques -- not that I condone that action, it just would have been more effective).

How could this have gone better?
Well, Ron Rivest, on behalf of his students, could have contacted the MBTA months in advance. They could have scheduled full briefings with the appropriate audiences, without the pressure to act immediately (which is what resulted in the MBTA calling the FBI and pursuing a fraud investigation against the students).

Wednesday, August 13, 2008

Linux SSO to AD

This is a break from the traditional types of posts. It's more of an instructional howto, but I hope that it is valuable nonetheless.
...

Ah, the holy grail of Identity Management: Single Sign On. And in today's enterprise, that likely means Microsoft's Active Directory at the back end. While directions for bringing Unix/Linux boxes into the AD forest have been out there, they have been messy at best, requiring scattered config changes to PAM, Kerberos, Samba, LDAP(S), etc.

Enter likewise-open, which is a great way to package all those up. It's a free software package that also comes with optional enterprise support, which anti-free-and-open-source companies tend to like.

Sure, there are tons of great examples out there that will tell you how to get likewise-open installed (which is a dream compared to the old days of manually configuring Kerberos, PAM, etc.). BUT they leave you high and dry, with any user in the whole domain (possibly the forest) able to log into the computer, because they focus on "authentication" and not "authorization". Since when is that better than fewer passwords? They also don't properly show how to manage privileged access once logged in. So the following are some subtle configuration details and nuggets that I have had to comb the web for, and that anyone who really does enterprise SSO will appreciate.

This will assume you have likewise-open installed, which if you don't, on Ubuntu it's as simple as: sudo apt-get install likewise-open.

For that matter, this entire thing assumes Ubuntu, but would work elsewhere (paths may vary).

Joining to the domain is very easy, too: sudo domainjoin-cli join fully.qualified.domain.name UserID

Now, you probably want to limit shell logins to just the admins. After all, it's Unix; it's not a toy. Use your favorite text editor (as root or with sudo) to edit /etc/security/pam_lwidentity.conf. Uncomment the require_membership_of line and add an AD group containing those admins, in the Domain\Group format.

Next, you'll want to make sure those admins can use sudo, otherwise you'll have a root password management problem (the whole point of SSO is to reduce the number of passwords to manage, right?). Edit the /etc/sudoers file by typing sudo visudo and add a line in this format: %DOMAIN\\Group ALL=(ALL) ALL (that allows everything -- follow normal sudo permission rules to restrict further).

Last, but not least, you'll probably want psychological acceptance from your administrators -- it is a security design principle, after all. To get it, let's get rid of that pesky Domain\UserID format and just use the UserID format. After all, who wants to type ssh 'domain\userid'@computername when they can just type ssh computername? This is the coup de grâce in favor of fewer passwords. Again, as root or with sudo, edit /etc/samba/lwiauthd.conf and add winbind use default domain = yes to the end of the file. If you're in a multi-domain forest, you're up a creek (not to mention you probably have a less-than-simple environment anyway), and at a minimum your users in other domains will have to use the domain\userid format. But users in the same domain can log in without the domain\ prefix.
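If you'd rather script the whole thing, here is a rough Python helper that applies the three edits above on Ubuntu; the domain, group name, and join account are placeholders, likewise-open's paths and option syntax may differ on your build, and editing sudoers directly is riskier than visudo:

    import subprocess

    DOMAIN = "fully.qualified.domain.name"          # placeholder
    ADMIN_GROUP = "DOMAIN\\LinuxAdmins"             # placeholder AD group

    def append_line(path, line):
        with open(path, "a") as f:
            f.write(line + "\n")

    # join the machine to AD (prompts for the join account's password)
    subprocess.run(["domainjoin-cli", "join", DOMAIN, "JoinAccount"], check=True)

    # 1) restrict shell logins to the admin group
    append_line("/etc/security/pam_lwidentity.conf",
                "require_membership_of = " + ADMIN_GROUP)

    # 2) let those admins use sudo (visudo is the safer way to do this)
    append_line("/etc/sudoers",
                "%" + ADMIN_GROUP.replace("\\", "\\\\") + " ALL=(ALL) ALL")

    # 3) drop the DOMAIN\ prefix for users in the default domain
    append_line("/etc/samba/lwiauthd.conf", "winbind use default domain = yes")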

One recipe for quick SSO to AD on Unix/Linux in a mere few minutes. Enjoy.

Thursday, June 26, 2008

Breaking Cisco VPN Policy

I am surprised how often I hear an organization operating under the belief that it can really, truly control what a remote client does in any situation. Here is today's lesson on how you can never know what computer is on the other end of a Cisco VPN tunnel or how it is behaving, thanks to more "opt-in security".


Step 1: Break the misconception that Cisco VPN concentrators authenticate which computer is on the other end of the tunnel.

Cisco VPNs authenticate people, not computers. When the connection is initiated from a client, sure the client passes a "group password" before the user specifies her credentials, but there's nothing that really restricts that group password to your organization's computers.

Consider this: any user who runs the Cisco VPN Client has to have at least read access to the VPN profile (the .pcf files stored in the Program Files --> Cisco Systems --> VPN --> Profiles folders on Windows systems). If the user has READ access, any malware unintentionally launched as that user could easily steal the contents of the file ... and the "encrypted" copy of the group password stored within it. Or, a Cisco VPN client can be downloaded from the web and the .pcf VPN profile can be imported into it. At that point, it's no longer certain that the connection is coming from one of your computers.


Step 2: Break the misconception that the client is even running the platform you expect.

So, since any user has READ access to the .pcf VPN profiles, they can open the text config file in an editor like Notepad, find the "enc_GroupPwd" name-value pair, copy everything after the equals sign ("="), and paste it into a Cisco VPN client password decoder script like this one. In less than a minute, a novice can have everything she needs to make VPN connections not only from an unexpected device but also from an unexpected platform. Not to mention the group password is not really all that secret.

Cisco VPN encrypted group passwords, like any encrypted data that an automated system (i.e. software) needs to unlock without interrogating a user for a key value that only a human knows, must be stored in a way that the software can read. Even though the name/value pair for group passwords in Cisco profiles is labeled an "encrypted" group password, the software needs to be able to decrypt it to use it when establishing VPN tunnels, which means the group password is also recoverable by any reverse engineer (hence this source code in C). So it is now trivial to decode a Cisco group password. This is not an attack on crypto; this is an attack on key management, coupled with a misunderstanding of how compiled programs work.
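A few lines of Python make the point (the profile path and [main] section name are what the Windows client typically uses; adjust as needed). This only pulls out the obfuscated value; turning it back into cleartext is left to the public decoders and the C source linked above:

    import configparser, glob

    profile_dir = r"C:\Program Files\Cisco Systems\VPN Client\Profiles"   # typical location
    for path in glob.glob(profile_dir + r"\*.pcf"):
        cfg = configparser.ConfigParser()
        cfg.read(path)
        # .pcf profiles are plain INI files; any user who can run the client can read them
        print(path, cfg["main"].get("enc_GroupPwd", "<not present>"))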

Now that a "decrypted" copy of the group password is known, the open source "vpnc" package can pretend to be the same as the commercial Cisco version. Here's how simple Ubuntu makes configuring vpnc to emulate a Cisco client:


A Cisco VPN concentrator (or any other server in a client-server relationship) cannot know for sure what the remote client's platform really is. Any changes Cisco makes to its client to differentiate a "Cisco" client by behavior from, say, a "vpnc" client can be circumvented by doing a little reverse engineering and coding the emulation of the new behavior into the "vpnc" package. An important thing to keep in mind is that compiled applications, although obscure, are not obfuscated beyond recognition. A binary of a program is not like a "ciphertext" in the crypto world. It has to be "plaintext" to the OS and CPU that execute it. If it were not readable, the instructions could not be loaded into the CPU for execution. So any controls based on compiled code are fruitless. Sure, there will be a window of time from when the changes are deployed in the official client to when they are emulated by a hacked client, but reverse engineering and debugging tools (such as BinDiff and OllyDbg) are getting more and more sophisticated, reducing that time barrier.


Step 3: Break the misconception that you can remotely control a client's behavior.

A "client" is just that ... it's a "client". It's remote. You cannot physically touch it. Any commands you send to it have to be accepted by it before they are executed. Case in point with the Cisco VPN client is the notion that "network security" people try to perpetuate: "split-tunneling is bad, so let's disable it." Network security people don't like split tunnels or split routes because they view it as a way of bridging a remote client's local network and the organization's internal network. However, it's futile to get all worked up about it. If you trust clients to remote in, then you cannot control how they choose to route packets (though you can pretend that what I show below doesn't really exist, I guess.)


In the same Ubuntu screen shot above, there's a quick and easy way to implement split-tunnels. It's the "Only use VPN connection for these addresses" check box. Check the box and specify the IP ranges. Voila! You've got a split tunnel. Don't want to route the Internet through your company's slow network or cumbersome filters? Check the box. Want to access a local file share while connected to an app your organization refuses to web-enable? Check the box. You get the idea. This is an excellent example of how many "network security" people believe they can control a remote client, yet as you can see, the only way to continue the misconception is to ignore distributed computing principles.

Doing what I described above is certainly not "new" information-- clearly because there are now GUIs to do it (so it's obviously very mature). However, the principles are still not well known and we have vendors like Cisco continuing the notion that you can remotely control a client by upping the ante. Many organizations' network security people are considering the deployment of NAC ("network access control") with VPN. Microsoft has had an implementation of it (they call it NAP for "network access protection") for years. The problem is, it's based on this false sense of "opt-in security" just like split tunnels. Let's look at an analogy ...
Physical Security Guard: "Do you have any weapons?"
Visitor: [shoves 9mm handgun further into pants] "No, of course not."
Guard: "Are you on drugs?"
Visitor: [double-checks the dime bag hasn't fallen out of his jacket pocket] "No, I never touch the stuff."
Guard: "Do you have any ill intent?"
Visitor: [pats the "Goodbye cruel world" letter in his front pocket] "Absolutely not!"
How is that any different from this?
Server: "Are you running a platform I expect?"
Client: "Of course" (it says from the unexpected platform)
Server: "Are you patched?"
Client: "Of course" (why does that even matter if I'm on a different platform from the patches?)Server: "Are you running AV?"
Client: "Of course" (your AV doesn't even run on my platform)
The answer: it's not any different. Both are fundamentally flawed ideas.

So, to refute implementation-specific objections, there are two key ways for a project such as vpnc to choose not to "opt in" if it so desires. They both involve lying to the server (VPN concentrator):
  1. Take the "inspection" program the server provides the client to prove the client is "safe" and execute the inspection program in a spoofed or virtual environment. When the inspection program looks for evidence of Microsoft Windows + latest patches + AV, spoof the evidence they exist.
  2. Reverse engineer the "everything is OK, let 'em on the network" message the inspection program sends the server, then don't even bother executing the inspection program, just cut to the chase and send the all-clear message by itself.
Sure, there may be some nuances that will make this slightly more difficult, such as adding some dynamic or more involved deterministic logic into the "inspection" program, but the more sophisticated the checks are, the more likely the whole process will break and generate false positives for legit users who are following the rules. The more false positives, the less likely customers will deploy the flawed technology.


...


So, to recap: you cannot control a client or truly know anything about it. It's just not possible, so security practitioners should set policies that account for the fact that you cannot control a remote client. For a great in-depth review of all of these principles (with tons more examples), I suggest picking up a copy of Greg Hoglund and Gary McGraw's book "Exploiting Software: How to Break Code".

Friday, May 23, 2008

PCI Silverbullet for POS?

Has Verifone created a PCI silver bullet for Point of Sale (POS) systems with their VeriShield Protect product? It's certainly interesting. It claims to encrypt credit card data BEFORE it enters the POS, passing a similarly formatted (16-digit) encrypted card number into the POS that presumably only your bank can decrypt and process.


I have to admit, I like the direction it's headed. Any organization's goal (unless it is a payment processor) should be to reduce its PCI scope as much as possible, not to try to bring PCI to the entire organization. This is a perfectly viable, and often overlooked, option for addressing risk: ditch the asset. If you cannot afford to properly protect an asset, and you can find a way to not have to care for the asset anymore, then ditch it.

The questions I have about this specific implementation -- questions that will certainly have to be answered before anyone can use this to get a PCI QSA off their back -- are:

1) What cryptographers have performed cryptanalysis on this "proprietary" design? Verifone's liberty to mingle the words "Triple DES" into their own marketing buzz format, "Hidden TDES", should at least concern you if you know anything about the history of information security and the track record of proprietary encryption schemes. Since the plaintext and the ciphertext are exactly 16 digits (base 10) long and it appears that only the middle 6 digits are encrypted (see image below), there may well be problems with randomness and susceptibility to other common crypto attacks. Sprinkle in the fact that credit card numbers must comply with the "Mod 10" rule (the Luhn algorithm), and I'm willing to bet a good number theorist could really reduce the possibilities for the middle 6 digits. If only the middle 6 digits are encrypted, and each has to be a digit between 0 and 9, then the probability of guessing the correct six-digit value is one in a million. But the question is (and it's up to a mathematician or number theorist to answer): how many of the other 999,999 combinations of middle 6 digits, when combined with the first 6 and last 4 digits, actually satisfy the Mod 10 rule? [Especially since the "check digit" in the Mod 10 rule is the final digit of the card number, which this method apparently doesn't encrypt.] I'm no mathematician, but the counting works out to exactly one in ten: only about 100,000 of the million middle-6 combinations yield a Luhn-valid number, which is already a sizeable cut-down on the brute-force space (see the sketch below). If there are any other mistakes in the "H-TDES" design or implementation, it might be even easier to fill in the middle-6 gap.
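As a sanity check on that claim, the counting takes only a few lines of Python (the fixed first-6/last-4 values below are made up; any choice gives the same answer, because the Luhn checksum pins down one digit's residue):

    def luhn_ok(digits):
        total = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:        # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    first6 = [4, 1, 2, 3, 4, 5]   # made-up issuer prefix
    last4  = [1, 2, 3, 4]         # made-up clear-text tail (includes the check digit)
    valid = sum(luhn_ok(first6 + [int(c) for c in "%06d" % middle] + last4)
                for middle in range(1000000))
    print(valid)                  # 100000 -- exactly one in ten of the middle-6 values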

It would be great to know that Verifone's design was open and peer-reviewed, instead of proprietary. I'd be very curious for someone like Bruce Schneier or Adi Shamir to spend some time reviewing it.


2) How are the keys generated, stored, and rotated? I certainly hope that all of these devices don't get hardcoded (EEPROMs flashed) with a static shared key (but I wouldn't be surprised if they are). It would be nice to see something like a TPM (secure co-processor) embedded in the device. That way, we'd know there is an element of tamper resistance. It would be very bad if a study like the one the Light Blue Touchpaper guys at Cambridge University just published were to reveal that all of the devices share the same key (or, just as bad, that all of the devices for a given retailer or bank share the same key).

It would be great if each device had its own keypair and generated a session key protected with the bank's public key. This could be possible if the hardware card-swipe device sent the cardholder data to the bank directly instead of relying on a back-office system to transmit it (arguably the back office could still do the transmission, provided the card swipe could establish a session key with the bank directly).
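Sketched with today's Python cryptography library (the key sizes and the use of Fernet for the symmetric part are my illustration, not anything Verifone has published), the per-device idea looks like this:

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    bank_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # held only by the bank
    bank_public = bank_private.public_key()                                        # provisioned to each device

    # inside the card-swipe device, per transaction:
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(b"4111111111111111")    # test PAN, never stored in the clear
    wrapped_key = bank_public.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))

    # the device forwards (wrapped_key, ciphertext); only the bank can unwrap the session key
    recovered = bank_private.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    assert Fernet(recovered).decrypt(ciphertext) == b"4111111111111111"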

3) Will the PCI Security Council endorse a solution like this? (Unfortunately, this is probably the most pressing question on most organizations' minds.) If this does not take the Point of Sale system out of PCI scope, then most retailers will not embrace the solution. If the PCI Security Council looks at this correctly with an open mind, then they will seek answers to my questions #1 and #2 before answering #3. In theory, if the retailer doesn't have knowledge or possession of the decryption keys, POS would not be in PCI scope any more than the entire Internet is in PCI scope for e-tailers who use SSL.

...

Many vendors (or more accurately "payment service providers") are using "tokenization" of credit card numbers to get the sticky numbers out of e-tailers' databases and applications, which is a similar concept for e-commerce applications. Tokenizing a credit card number simply means creating a surrogate identifier that means nothing to anyone but the bank (service provider) and the e-tailer. The token replaces the credit card number in the e-tailer's systems, and in the best case the e-tailer doesn't even touch the card for a millisecond. [Because even a millisecond is long enough to be rooted, intercepted, and defrauded; the PCI Security Council knows that.]
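A toy illustration of the idea (the vault, of course, lives at the payment service provider, and every name here is made up):

    import secrets

    vault = {}   # provider-side mapping; the e-tailer never sees this

    def tokenize(pan):
        token = secrets.token_hex(8)   # surrogate value, meaningless on its own
        vault[token] = pan
        return token

    def detokenize(token):
        return vault[token]            # only the provider can do this

    order = {"token": tokenize("4111111111111111"), "amount": "19.99"}
    # the e-tailer stores `order`; the PAN itself never lands in its database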

It's great to see people thinking about solutions that fit the mantra: "If you don't have to keep it, then don't keep it."

[Note: all images are likely copyrighted by Verifone and are captures from their public presentation in PowerPoint PPS format here.]

...
[Updated May 23, 2008: Someone pointed out that PCI only requires the middle 6 digits (to which I refer in "question 1" above) to be obscured or protected according to requirement 3.3: "Mask PAN when displayed (the first six and last four digits are the maximum number of digits to be displayed)." Hmmm... I'm not sure how that compares to the very next requirement (3.4): "Render PAN [Primary Account Number], at minimum, unreadable anywhere it is stored" Looks like all 16 digits need to be protected to me.]

Saturday, May 17, 2008

Why You Don't Need a Web Application Layer Firewall

Now that PCI 6.6's supporting documents are finally released, a lot of people are jumping on the "Well, we're getting a Web Application Firewall" bandwagon. I've discussed the pros and cons of Web Application Firewalls vs Code Reviews before, but let's dissect one more objection in favor of WAFs and against code reviews (specifically static analysis) ...

This is from Trey Ford's blog post "Instant AppSec Alibi?"
Let’s evaluate this in light of what happens after a vulnerability is identified- application owners can do one of a couple things…
  1. Take the website off-line
  2. Revert to older code (known to be secure)
  3. Leave the known vulnerable code online
The vast majority of websites often do the latter… I am personally excited that the organizations now at least have a viable new option with a Web Application Firewall in the toolbox! With virtual patching as a legitimate option, the decision to correct a vulnerability at the code level or mitigate the vuln with a WAF becomes a business decision.

There are two huge flaws in Mr Ford's justification of having WAFs as a layer of defense.

1) Web Application Firewalls only address HALF of the problems with web applications: the syntactic portion, otherwise known in Gary McGraw speak as "the bug parade". The other half of the problems are design (semantic) problems, which Gary refers to as "security flaws". If you read Gary's books, he eloquently points out that research shows actual software security problems fall about 50/50 in each category (bugs and flaws).

For example, a WAF will never detect, correct, or prevent horizontal (becoming another user) or vertical (becoming an administrator) privilege escalation. This is not an input validation issue; this is an authorization and session management issue. If a WAF vendor says their product can do this, beware. Even in the ideal best-case scenario, suppose a WAF can keep track of the source IP address from which "joe" logged in. If joe's session suddenly jumps to an IP address in some distinctly different geographic location and the WAF decides this is "malicious" and kills the session (or, more realistically, just stops passing transactions from the assumed-to-be-rogue IP to the web application), then there will be false positives, such as corporate users who jump onto VPN and continue their browser session, or individuals who switch from wireless to an "AirCard" or some other ISP. Location-based access policies are problematic. In 1995 it was safe to say "joe will only log on from this IP address", but today's Internet is far more dynamic than that. And if the WAF won't allow multiple simultaneous sessions from the same IP, well, forget selling your company's products or services to corporate users who are all behind the same proxy and NAT address.

Another example: suppose your org's e-commerce app is designed so horribly that a key variable affecting the total price of a shopping cart is controlled by the client/browser. If a malicious user can make a shopping cart total $0, or worse, -$100 (issuing a credit to the card instead of a debit), then no WAF on today's market (or some future one) is going to understand how to fix that. The WAF will say, "OK, that's a properly formatted ASCII-represented number and not some malicious script code; let it pass."
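To see why, consider a stripped-down, hypothetical checkout handler; the tampered "total" is syntactically spotless, so any input filter waves it through, and the only real fix is to recompute the price server-side:

    def charge(card, amount):
        print("charging", card, amount)       # stand-in for the real payment call

    def checkout_broken(params, card):
        total = float(params["total"])        # value comes straight from the browser
        charge(card, total)                   # "-100.00" is well-formed input ... and a refund

    def checkout_fixed(params, card, catalog):
        # price is recomputed from server-side data; the client only sends items and quantities
        total = sum(catalog[item] * qty for item, qty in params["cart"].items())
        charge(card, total)

    checkout_broken({"total": "-100.00"}, "4111111111111111")
    checkout_fixed({"cart": {"widget": 2}}, "4111111111111111", {"widget": 19.99})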

Since the PCI Security Standards Council is endorsing the notion of Web Application Firewalls, that raises the question: does the PCI Security Standards Council even understand what a WAF can and cannot do? Section 6.6 requires that WAFs or code reviews address the OWASP-inspired issues listed in section 6.5:
6.5.1 Unvalidated input
6.5.2 Broken access control (for example, malicious use of user IDs)
6.5.3 Broken authentication and session management (use of account credentials and session cookies)
6.5.4 Cross-site scripting (XSS) attacks
6.5.5 Buffer overflows
6.5.6 Injection flaws (for example, structured query language (SQL) injection)
6.5.7 Improper error handling
6.5.8 Insecure storage
6.5.9 Denial of service
6.5.10 Insecure configuration management
The following items fall into the "implementation bug" category which could be addressed by a piece of software trained to identify the problem (a WAF or a Static Code Analyzer):
6.5.1 Unvalidated input
6.5.4 Cross-site scripting (XSS) attacks
6.5.5 Buffer overflows
6.5.6 Injection flaws (for example, structured query language (SQL) injection)
6.5.7 Improper error handling
These items fall into the "design flaw" category and require human intelligence to discover, correct, or prevent:
6.5.2 Broken access control (for example, malicious use of user IDs)
6.5.3 Broken authentication and session management (use of account credentials and session cookies)
6.5.8 Insecure storage
6.5.9 Denial of service
6.5.10 Insecure configuration management
Solving "design" or "semantic" issues requires building security into the design phase of your lifecycle. It cannot be added on by a WAF and generally won't be found by a code review, at least not one that relies heavily on automated tools. A manual code review that takes into consideration the criticality of a subset of the application (say, portions dealing with a sensitive transaction) may catch this, but don't count on it.



2) If your organization has already deployed a production web application that is vulnerable to something a WAF could defend against, then you are not really doing code reviews. There's no blunter way to put it. If you have a problem in production that falls into the "bug" category described above, then don't bother spending money on WAFs. Instead, spend your money on either a better code review tool OR hiring and training better employees to use it (since they clearly are not using it properly).



Bottom line: any problem in software that a WAF can be taught to find could have been caught at development time with a code review tool, so why bother buying both? Show me a problem a WAF can find that slipped through your development process, and I'll show you a broken development process. Web Application Firewalls are a solution in search of a problem.

Saturday, May 10, 2008

Sending Bobby Tables to the Moon

NASA has a program where you can send your name to the moon. Just give them your name, they'll store it electronically, and send it on the next lunar rover to be left there forever.

Now, little Bobby Tables will be immortalized forever on the moon.

Saturday, May 3, 2008

Automating Exploitation Creation

Some academic security researchers at Carnegie Mellon have released a very compelling paper which introduces the idea that merely monitoring a vendor's patch releases can allow for automated exploit creation. (They call it "APEG", for automatic patch-based exploit generation.) They claim that automated analysis of the diffs between a pre-patch program and a patched program is possible -- and that in some cases an exploit can be created in mere minutes, while some clients take hours or days to check in and install their updates! Granted, there is some well-established commentary from Halvar Flake about the use of the term "exploit", since the APEG paper really only describes "vulnerability triggers" (Halvar's term).

Our friends at Carnegie Mellon have proved that the emperor hath no clothes. Creating exploits from analyzing patches is certainly not new. What is novel, in this case, is how the exploit creation process is automated:
"In our evaluation, for the cases when a public proof- of-concept exploit is available, the exploits we generate are often different than those publicly described. We also demonstrate that we can automatically generate polymorphic exploit variants. Finally, we are able to automatically generate exploits for vulnerabilities which, to the best of our knowledge, have no previously published exploit."

Tuesday, April 15, 2008

PCI 1.1 Section 6.6

If you're one of the many practitioners waiting to see how the PCI Security Council clarifies the ambiguous 6.6 requirement, then you may wish to use this interview with Bob Russo, General Manager of the PCI Security Standards Council, as either an inkling towards an interpretation OR just more obfuscation (depending upon your point of view).

Here is the PCI requirement in question:
6.6 Ensure that all web-facing applications are protected against known attacks by applying either of the following methods:
• Having all custom application code reviewed for common vulnerabilities by an organization that specializes in application security
• Installing an application layer firewall in front of web-facing applications.
Note: This method is considered a best practice until June 30, 2008, after which it becomes a requirement.
Russo's comments on the debate of Web Application Firewalls versus Code Reviews:
"Personally, I'd love to see everyone go through on OWASP-based source-code review, but certainly, that's not going to happen," Russo said, referring to the expensive and time-consuming process of manual code reviews. "So the application firewall is probably the best thing to do, but there needs to be some clarification around what it needs to do. That clarification is coming; that's been the biggest question."
Jeremiah Grossman sounded off on the interview as well.



Even given all of the discourse I have heard and read to date, there are many unanswered questions on this one particular point alone. No doubt the PCI Security Standards Council has realized that application layer problems are going to undermine everything else we have taught security practitioners for the last decade, exposing the folly of trying to control security solely at the network layer. And no doubt the PCI Security Standards Council understands it has a mighty hand to influence organizations to handle their custom software with the appropriate level of diligence and quality that cardholders deserve. However ...

Here are my top 7 questions concerning PCI 1.1, Requirement 6.6 that are still left unanswered:

1) Please define "web-facing applications". Does that mean HTTP/S applications? Does that mean anything directly exposed to Internet traffic? Or does it mean any application that Al Gore created? [PCI expert Trey Ford attempted to define "web-facing", but we need the official PCI Security Council interpretation.]

2) Please define "known attacks". Known by whom? Who is keeping the authoritative list? What happens when new attacks become "known" and are added to the list? Do we have to go back and perform more analysis to check for the new attacks?

3) Please define "an organization that specializes in application security". Is that a third party or can it be a team within an organization? Can it be a team of one in a smaller organization? What is meant by "specializes"? Does that mean "took a class", "has a certificate", or is that reserved for somebody who leads the industry in the subject matter? "Application Security" as a discipline (sorry, Gary McGraw--you're right, we should call it "software security") is new. Will we have a chicken and egg problem trying to establish people as specialists in application security?

4) Does a blackbox (runtime) scanning approach constitute a "review" of custom application code? Or will only a whitebox (source code at development time) cut the mustard? Can automated review tools be used, or must it be 100% manual (human) code review? To what extent can automation be involved? Are there specific products (vendors) that are preferred or required when selecting automated tools?

5) Does the "review" imply remediation? In the case of PCI Vulnerability Scanning Procedures, some lesser vulnerabilities are allowed to exist, but vulnerabilities that the PCI Security Standards Council rate at a higher criticality must absolutely be fixed. What criticality scale must we use? Is there a taxonomy of vulnerabilities that are categorized by "must fix" criticality versus a "should fix" criticality?

6) Please define "an application layer firewall". Is that a preventative tool, or can it be just a detective tool (i.e. must it be active, or can it be passive, like an IDS)? What "bad things" must it detect? How tightly must it be tuned? Will there be a process to pre-certify vendors, or must we invest now and hope that auditors will accept what we choose?

7) Why is it that we are (as of today) a mere 76 days out from when requirement 6.6 becomes mandatory and we do NOT yet have clarification? Large organizations move slowly. Complicated "web-facing applications" may take a long time to properly regression test with either option implemented (remediations found in code reviews OR web application layer firewall deployments). We have a little over two months to: 1) Understand the requirement, 2) Budget accordingly, and 3) Implement on time and under budget. With PCI DSS version->next right around the corner, why wasn't this requirement held off until it could be properly fleshed out in the next version?

Friday, March 21, 2008

University of Washington's Computer Security Course

Tadayoshi Kohno, a Computer Science Professor at the University of Washington, is teaching an undergraduate computer security course with a unique set of intentions: Kohno is trying to teach the security mindset-- the same mindset that Bruce Schneier has been talking about for years.

The results are interesting, not to mention available to the public. Students adopting the security mindset are writing analytical pieces on just about anything, from dorm rooms to high-tech systems. It's available here in blog format.

More Broken DRM

From Slashdot:

"In July 2007, Richard Doherty of the Envisioneering Group (BD+ Standards Board) declared: 'BD+, unlike AACS which suffered a partial hack last year, won't likely be breached for 10 years.' Only eight months have passed since that bold statement, and Slysoft has done it again. According to the press release, the latest version of their flagship product AnyDVD HD can automatically remove BD+ protection and allows you to back-up any Blu-ray title on the market."
How many more times must we endure the faulty logic of DRM (Digital Rights Management)? It's simple, if you understand key management. You cannot have a ciphertext (the Blu-ray movie) that you allow an end-user to convert to plaintext (i.e. when it's playing in a hardware or software player) without also allowing plaintext access to the key that unlocks the ciphertext (which all players must have; otherwise the video is just encrypted data -- not playable).

DRM defies the laws of nature. It's just like the recent cold-boot attacks on disk encryption. The decryption keys are there. They're in the software. If you can manipulate the hardware, you can get them. And sometimes (as is the case with the BD+ hack) you don't even have to manipulate the hardware. The keys have to be stored somewhere -- usually in memory, just like the whole disk encryption products. In fact, a possible application of the Princeton group's research could be to cold boot computers that are playing BD+ protected Blu-ray discs, since they came up with new methods of finding (identifying) encryption keys stored in decaying DRAM and correcting the bit-flip decay.

Even if the Blu-ray people mandated that only hardware Blu-ray devices could be created and sold (since software players have been the primary target for DRM destruction), the keys would have to exist in every one of their customers' homes -- right there in the players! They might be a little more difficult to reverse engineer and discover, since hardware tends not to be as flexible as software, but the keys would have to be there, stored in CMOS perhaps, or possibly just hard-coded into the decryption-playback circuits. And we have seen, time and time again, that the efforts of even a single person to reverse engineer the decryption key can be devastating to DRM schemes. All it takes is one person to discover it and a company like Slysoft to find a way to legally market it.
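If the key-management argument seems abstract, here is a deliberately silly sketch (XOR stands in for the real cipher, and the key is made up) of why every DRM scheme shares the same structural problem: the player cannot render the title without holding the key, so the key ships to every customer inside the player.

# Toy "player": XOR is a stand-in for the real cipher; the key is invented.
DEVICE_KEY = bytes.fromhex("deadbeefcafebabe" * 2)

def play(ciphertext):
    # To show the movie, the player must apply the key -- so the key is
    # present wherever the player runs: software, firmware, or silicon.
    return bytes(c ^ DEVICE_KEY[i % len(DEVICE_KEY)]
                 for i, c in enumerate(ciphertext))

# The "attacker" never breaks the cipher. A debugger, a memory dump, or plain
# reverse engineering of the player recovers the key directly.
recovered_key = DEVICE_KEY

The cipher is never the interesting part; the key's guaranteed presence is.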


...
In summary: DRM is not possible. If you present data to a person outside of your physical reach, then you cannot control how they use the data. Anyone who claims otherwise is peddling the information security equivalent of perpetual motion. Don't buy it.

Saturday, March 8, 2008

Anderson Proves PIN Entry Devices are Insecure

If there is a theme in good security research right now, it's that we cannot trust hardware.

Ross Anderson and company at the Computer Laboratory at Cambridge University have performed some interesting research demonstrating how a paperclip can be used to steal cardholder data from a bank card PIN Entry Device (PED). Machines believed to be secure because they were assessed at the weakest level of the esteemed Common Criteria are apparently rife with flaws. The Cambridge group believes that fraudsters have been using these techniques for some time.

Friday, March 7, 2008

Jon Callas Responds to Ed Felten

It's nice not to be in the top spot of Jon Callas' "CTO Corner" anymore ... although I held that spot for four and a half months. Jon Callas, the CTO of PGP Corporation, has moved on to respond to Ed Felten's memory-freezing, whole-disk-encryption-key-stealing crew at Princeton University.

Some highlights from Jon's response ...
"The basic issue is one that we have known for years."
Well, that's not very comforting, or at least it shouldn't be. If it was so well known, then why is PGP Corp just now looking to integrate with hardware and BIOS vendors to attempt to resolve this? That line captures Jon's general theme: this is no big deal ... we've known about it forever ... it's just a new spin on an old trick ...
"Those of us who consider these things have known that this was at least in theory possible for some time. This team did two impressive things: they made it actually work, and they did some math to recover partially-damaged RSA and AES keys. This latter feat they did by looking at scratch variables that the encryption systems use, and back-deducing what some of the damaged bits of the keys must have been. The process is a bit like a big Sudoku game; when you play Sudoku, you deduce what is missing based on what is present."

Again, "it's no big deal", except, wait, yep, there's that really complicated math part. I do like Jon's comparison to Sudoku; it's a good analogy.

"Despite how dramatic this attack is, there is an easy fix for it."
If there really were an easy fix, then the whole notion of "cold boot" would be a solved problem, but that's obviously not the case. Ripping power from a running system (which Jon later goes on to say has never been the primary threat that PGP WDE was designed to overcome) does not protect the keys. Even if BIOS vendors started shipping features that sanitize memory at boot, an attacker could still cut power, optionally cool down the DRAM, and place the memory into a prepared system to read the encryption keys. Yes, that requires a dedicated and trained adversary, but there are organizations with very valuable information. Jon should not be so quick to downplay the likelihood that his customers may have such an adversary, unless of course the really security-conscious organizations have been skipping his company's products altogether.
"When a computer is hibernated, the contents of its memory is written to disk, and then the computer is shut down. No residual power is supplied to the RAM, so it will fade in one to two minutes, just as if you had shut it off. It doesn't matter what software you are running; if you hibernate a machine with WDE, it will be safe in a couple of moments. (Note: the Cold Boot researchers say that hibernate mode is vulnerable, and they are wrong on this nit. A truly hibernated machine is turned off, but with a copy of RAM written to disk. These machines are safe, once memory has faded.)"
Anyone else want to hear Felten's and crew's response to the hibernate "nit"?
"If there is a hard power loss, such as pulling the battery from a laptop or yanking the power cord out from a server, there's next to nothing that software alone can do. There's next to nothing that hardware can do. We could design hardware and software to do something in this case, but you probably wouldn't pay for it. I wouldn't."
I can think of several options here, all of which would be affordable once typical economies of scale (mass production and consumer demand) are applied. I'm not sure what it says about what's on Jon's computer that he wouldn't be interested in something as simple as a small reserve of electrical power (a capacitor, say) that detects when main power has been lost and uses its reserve -- just ample enough -- to perform a basic overwrite or sanitizing operation on DRAM. Such a feature could not possibly cost more than a seat of PGP WDE.
"External authentication using smart cards, tokens, TPMs, does not solve the problem. There have been reports of some people claiming that it does. It doesn't. Remember, this is very simple; there is some RAM that has a key, and that RAM needs to be cleared. Authentication doesn't clear memory. TPMs do not clear memory. The people who claim that a USB key helps at all are displaying their ignorance."
I agree that USB keys don't clear memory. What was Dr. Eric Cole of SANS thinking when he said this in the Feb 29th issue of their Newsbites?
"(Cole): The cold boot attack has a cool factor to it, but remember that
full disk encryption will protect a system only if it has a strong
password (two factor recommended) and if the system is completely turned
off. Use of a USB token stops the attack. If you turn your system
completely off (and hold on to it for more than 5 seconds) the attack
is not successful. If you do not follow either of these rules, than
full disk encryption can potentially be broken even without this
attack.]"
But a future generation of TPMs, or more specifically secure co-processors, could potentially perform all cryptographic operations in hardware, not just integrity checking of boot procedures. Whereas today's TPMs can store keys only to later hand them off to a process that will unfortunately keep them in DRAM, the next generation of secure co-processors could be passed ciphertext blocks of data for decryption, passing the plaintext version back to a WDE-like service. There will be I/O performance concerns to overcome initially, but it is feasible that a commodity-priced chip will one day solve that problem.
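To illustrate the architecture I have in mind -- and this is purely a hypothetical sketch, not a description of any shipping part -- the host would only ever see ciphertext going in and plaintext coming out, while the key lives and dies inside the chip:

# Hypothetical secure co-processor interface: the WDE driver hands ciphertext
# sectors to the chip and receives plaintext, so key material never touches
# DRAM. XOR stands in for the per-sector cipher real hardware would use.
class SecureCoprocessor:
    def __init__(self, sealed_key):
        self._key = sealed_key          # in real hardware: write-only, unreadable

    def decrypt_sector(self, ciphertext, sector_no):
        k = self._key
        return bytes(b ^ k[(i + sector_no) % len(k)]
                     for i, b in enumerate(ciphertext))

def read_sector(coproc, raw_read, sector_no):
    # The operating system sees only ciphertext and plaintext -- never the key.
    return coproc.decrypt_sector(raw_read(sector_no), sector_no)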
"There is more reason to use WDE in conjunction with either Virtual Disk or NetShare. We have always said that the primary threat model for WDE is a machine that is shut down or hibernated. We have always pointed to the added benefits of the other forms of encryption. In his recent article on mobile data protection, Bruce Schneier touts PGP Virtual Disk. The PGP Encryption Platform gives you defense in depth. Defense in depth is good because the layers of protection give more security."
Translation: buy more of their products.



Of course, there's always the solution I have offered despite common objections: one method for securing information is to not place it on disk at all. Encryption is not always the answer.

Excellent Cold Boot Step-By-Step

News.com has an excellent step-by-step, complete with pictures, detailing what it takes to steal the encryption keys for Apple's FileVault using Princeton University's cold boot attack on whole disk encryption. Jacob Appelbaum, one of the independent security researchers involved with Ed Felten's Princeton crew, is your guide.

Thursday, February 21, 2008

Felten Destroys Whole Disk Encryption

Ed Felten and company publicized some research findings today on a form of side-channel attack against whole disk encryption keys stored in DRAM.

We show that disk encryption, the standard approach to protecting sensitive data on laptops, can be defeated by relatively simple methods. We demonstrate our methods by using them to defeat three popular disk encryption products: BitLocker, which comes with Windows Vista; FileVault, which comes with MacOS X; and dm-crypt, which is used with Linux....

Our research shows that data in DRAM actually fades out gradually over a period of seconds to minutes, enabling an attacker to read the full contents of memory by cutting power and then rebooting into a malicious operating system....
Interestingly, if you cool the DRAM chips, for example by spraying inverted cans of “canned air” dusting spray on them, the chips will retain their contents for much longer. At these temperatures (around -50 °C) you can remove the chips from the computer and let them sit on the table for ten minutes or more, without appreciable loss of data. Cool the chips in liquid nitrogen (-196 °C) and they hold their state for hours at least, without any power. Just put the chips back into a machine and you can read out their contents.
This is deadly for disk encryption products because they rely on keeping master decryption keys in DRAM. This was thought to be safe because the operating system would keep any malicious programs from accessing the keys in memory, and there was no way to get rid of the operating system without cutting power to the machine, which “everybody knew” would cause the keys to be erased.
Our results show that an attacker can cut power to the computer, then power it back up and boot a malicious operating system (from, say, a thumb drive) that copies the contents of memory. Having done that, the attacker can search through the captured memory contents, find any crypto keys that might be there, and use them to start decrypting hard disk contents. We show very effective methods for finding and extracting keys from memory, even if the contents of memory have faded somewhat (i.e., even if some bits of memory were flipped during the power-off interval). If the attacker is worried that memory will fade too quickly, he can chill the DRAM chips before cutting power.
This is a good example of academic security research. We need to recognize that the trust whole disk encryption software places in the hardware is misplaced.
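The "very effective methods for finding and extracting keys" are worth dwelling on, because they are what turn a memory dump into usable keys: the team locates AES keys by scanning memory for anything that looks like an expanded key schedule, tolerating some decayed bits. Here is a slow, simplified sketch of that idea for AES-128 (the published tools also model the direction of decay and handle other key types); the dump file name below is a placeholder:

def build_sbox():
    # Compute the AES S-box (multiplicative inverse in GF(2^8) followed by
    # the affine transform) so this sketch has no external dependencies.
    def gmul(a, b):
        r = 0
        for _ in range(8):
            if b & 1:
                r ^= a
            hi = a & 0x80
            a = (a << 1) & 0xFF
            if hi:
                a ^= 0x1B
            b >>= 1
        return r
    sbox = [0x63] * 256
    for x in range(1, 256):
        inv = next(y for y in range(1, 256) if gmul(x, y) == 1)
        s = 0x63
        for i in range(5):
            s ^= ((inv << i) | (inv >> (8 - i))) & 0xFF
        sbox[x] = s
    return sbox

SBOX = build_sbox()
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key(key16):
    # AES-128 key expansion: 16-byte key -> 176-byte key schedule.
    w = [list(key16[i:i + 4]) for i in range(0, 16, 4)]
    for i in range(4, 44):
        t = list(w[i - 1])
        if i % 4 == 0:
            t = [SBOX[b] for b in t[1:] + t[:1]]   # RotWord then SubWord
            t[0] ^= RCON[i // 4 - 1]
        w.append([a ^ b for a, b in zip(w[i - 4], t)])
    return bytes(b for word in w for b in word)

def find_aes128_keys(image, max_bit_errors=80):
    # Slide a 16-byte window across the memory image, expand it as if it were
    # a key, and count how many bits of the following 160 bytes disagree with
    # the expected schedule. Real keys stand out even with some bit decay.
    hits = []
    for off in range(len(image) - 176 + 1):
        schedule = expand_key(image[off:off + 16])
        errors = sum(bin(a ^ b).count("1")
                     for a, b in zip(schedule[16:], image[off + 16:off + 176]))
        if errors <= max_bit_errors:
            hits.append((off, errors, image[off:off + 16].hex()))
    return hits

# usage (placeholder path): find_aes128_keys(open("memory.dump", "rb").read())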

There's even a video:

Tuesday, February 19, 2008

Websense CEO on AV Signatures

Websense CEO, Gene Hodges, on the futility of signature based antivirus, just an excerpt:

On the modern attack vector: Antivirus software worked fine when attacks were generally focused on attacking infrastructure and making headlines. But current antivirus isn’t very good at protecting Web protocols, argued Hodges. “Modern attackware is much better crafted and stealthy than viruses so developing an antivirus signature out of sample doesn’t work,” said Hodges. The issue is that antivirus signature sampling starts with a customer being attacked. Then that customer calls the antivirus vendor, creates a sample, identifies the malware and then creates the sample. The conundrum for antivirus software comes when there’s malware that’s never detected. If you don’t know you’re being attacked there’s no starting point for a defense. “Infrastructure attacks are noisy because you wanted the victim to know they have been had. You didn’t have to be a brain surgeon to know you were hit by Slammer. Today’s malware attacks are stealthy and don’t want you to know it’s there,” said Hodges.

Is antivirus software necessary? Hodges said that antivirus software in general is still necessary, but the value is decreasing. Hodges recalled discussions at a recent conference and the general feeling from CIOs that viruses and worms were a solved problem. Things will get very interesting if there’s a recession and customers become more selective about how they allocate their security budgets. For instance, Hodges said CIOs could bring in Sophos, Kaspersky and Microsoft as antivirus vendors and “kick the stuffing out of the price structure for antivirus and firewalls.” The dollars that used to be spent on antivirus software could then be deployed for more data centric attacks that require better access control, encryption and data leakage. My take: Obviously, Hodges has a motive here since these budget dollars would presumably flow in Websense’s direction. That said the argument that the value of antivirus software is declining makes a lot of sense and is gaining critical mass.

Web 2.0 as security risk. Hodges said Web 2.0–or enterprise 2.0–techniques could become a security risk in the future, but Websense “really hasn’t seen significant exploitation of business transactions of Web 2.0.” That said enterprises are likely to see these attacks in the future. For starters, enterprises generally allow employees to tap sites like YouTube, Facebook and MySpace. Those sites are big targets for attacks and connections to the enterprise can allow “bad people to sneak bad stuff into good places,” said Hodges. In other words, the honey pot isn’t lifting data from Facebook as much as it is following that Facebook user to his place of employment. Meanwhile, Web connections are already well established in the enterprise via automated XML transactions, service oriented architecture and current ERP systems. Hodges noted that Oracle Fusion and SAP Netweaver applications fall into the Web 2.0 category.


Even the security CEOs can see it (the futility of signature based anti-malware, that is).

Thursday, February 14, 2008

Localhost DNS Entries & "Same Site Scripting"

I'm not a big fan of new names for variations of existing attacks, but Tavis Ormandy (of Google) has pointed out an interesting way to leverage non-fully qualified DNS entries for localhost (127.0.0.1) with XSS:
It's a common and sensible practice to install records of the form "localhost. IN A 127.0.0.1" into nameserver configurations, bizarrely however, administrators often mistakenly drop the trailing dot, introducing an interesting variation of Cross-Site Scripting (XSS) I call Same-Site Scripting. The missing dot indicates that the record is not fully qualified, and thus queries of the form "localhost.example.com" are resolved. While superficially this may appear to be harmless, it does in fact allow an attacker to cheat the RFC2109 (HTTP State Management Mechanism) same origin restrictions, and therefore hijack state management data.

The result of this minor misconfiguration is that it is impossible to access sites in affected domains securely from multi-user systems. The attack is trivial, for example, from a shared UNIX system, an attacker listens on an unprivileged port[0] and then uses a typical XSS attack vector (e.g. in an html email) to lure a victim into requesting http://localhost.example.com:1024/example.gif, logging the request. The request will include the RFC2109 Cookie header, which could then be used to steal credentials or interact with the affected service as if they were the victim.

Tavis recommends removing localhost entries from DNS servers that do not have the trailing period (i.e. "localhost" vs. "localhost."). The trailing period ensures that somebody cannot set up camp on 127.0.0.1 and steal your web application's cookies or run other malicious dynamic content in the same domain, exploiting DNS for same origin policy attacks.
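A quick way to check whether your own domain has this problem is to see whether the unqualified name resolves at all ("example.com" below is a placeholder for your domain):

import socket

def has_unqualified_localhost(domain):
    # If "localhost.<domain>" resolves to a loopback address, the nameserver
    # is probably missing the trailing dot on its localhost record.
    try:
        addrs = {ai[4][0] for ai in socket.getaddrinfo("localhost." + domain, 80)}
    except socket.gaierror:
        return False
    return "127.0.0.1" in addrs or "::1" in addrs

print(has_unqualified_localhost("example.com"))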

Friday, February 1, 2008

WiKID soft tokens

I promised Nick Owens at WiKID Systems a response and it is long overdue. Nick commented on my "soft tokens aren't tokens at all" post:
Greetings. I too have posted a response on my blog. It just points out that our software tokens use public key encryption and not a symmetric, seed-based system. This pushes the security to the initial validation/registration system where admins can make some choices about trade-offs.

Second, I submit that any device with malware on it that successfully connects to the network is bad. So you're better off saving money on tokens and spending it on anti-malware solutions, perhaps at the gateway, defense-in-depth and all.

Third, I point out that our PC tokens provide https mutual authentication, so if you are confident in your anti-malware systems, and are concerned about MITM attacks at the network, which are increasingly likely for a number of reasons, you should consider https mutual auth in your two-factor thinking.

Here's the whole thing:
On the security of software tokens for two-factor authentication
and thanks for stimulating some conversation!

Here is their whitepaper on their soft token authentication system.

Unfortunately, I would like to point out that WiKID is first and foremost vulnerable to the same sort of session stealing malware that Trojan.Silentbanker uses. It doesn't matter how strong your authentication system is when you have a large pile of untrustworthy software in between the user and the server-side application (e.g. browser, OS, third party applications, and all the malware that goes with it). I'll repeat the theme: it's time to start authenticating the transactions, not the user sessions. I went into a little of what that might look like.

Nick is aware of that, which is why he said point number two above. But here's the real problem: the web is designed for dynamic code to be pulled down side-by-side with general data, acquired from multiple sources and run in the same security/trust context. Since our browsers don't know which is information and which is instructions until runtime AND since the instructions are dynamic (meaning they may not be there for the next site visit), how is it NOT possible for malware to live in the browser? I submit that it is a wiser choice to NOT trust your clients' browsers, their input into your application, etc., than to trust that a one time password credential really did get input by the proper human on the other end of the software pile. I suggest that organizations should spend resources being able to detect and recover from security failures (out of band mechanisms come to mind-- a good old fashioned phone call to confirm that $5,000 transaction to a foreign national, perhaps?), rather than assuming the money they invested in some new one time password mechanism exempts them from any such problems.
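To sketch what "authenticating the transaction" could look like (this is an illustration, not any vendor's protocol): bind the confirmation code to the transaction details and deliver it out of band, so malware that owns the browser session cannot silently redirect the money without the user noticing the mismatch.

import hmac, hashlib

def transaction_challenge(server_secret, amount_cents, dest_account):
    # The code is derived from the transaction itself, so changing the amount
    # or destination invalidates it. Send it out of band (SMS or a phone call)
    # along with a human-readable summary of what is being approved.
    msg = ("%d:%s" % (amount_cents, dest_account)).encode()
    return hmac.new(server_secret, msg, hashlib.sha256).hexdigest()[:8]

def verify_confirmation(server_secret, amount_cents, dest_account, user_code):
    expected = transaction_challenge(server_secret, amount_cents, dest_account)
    return hmac.compare_digest(expected, user_code)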

Microsoft published a document titled "10 Immutable Laws of Security" (never mind for now that they are neither laws, nor immutable, nor even concise) and point number one is entirely relevant: "Law #1: If a bad guy can persuade you to run his program on your computer, it's not your computer anymore". How does JavaScript, a Turing-complete programming language, fall into that? If you completely disable script in your browser, most applications break. But if you allow it to run, behaviors you cannot control can run on your behalf. Taking Nick's advice, we should be spending all of our time and resources solving the code and data separation problem on the web, not implementing one time passwords (and I agree with him on that).



Second, I have a hard time calling WiKID a token -- not that it couldn't fit that definition -- it's just that it is a public key cryptography system. I have never referred to a PGP key pair as a token, nor have I heard anyone else do so. Likewise, I don't ever really hear anyone say "download this x509 token" ... instead they say "x509 certificate". Smart cards might be the saving-grace example that allows me to stretch my mind around the vocabulary; generally speaking, a smart card is a physical "token", and smart card implementations can have a PKC key pair. So I'll have to extend my personal schema, so to speak, and I guess I'll allow WiKID to fit into the "token" category (but just barely).

The x509 cert example is a great analogy, because under the hood that's basically how WiKID works. Just like an HTTPS session, it swaps public keys (which allows for the mutual authentication) and then a session key is created for challenge/response-- the readout of the "soft token" that the user places into the password form, for example.


There is one concerning issue with WiKID. It uses a relatively unknown public key encryption algorithm called NTRU. NTRU aims to run in low resource environments like mobile phones, which is undoubtedly why WiKID employs it. NTRU is also patented by NTRU Cryptosystems, Inc. (the patent may have some business/political ramifications similar to PGP's original IDEA algorithm). However, when choosing an encryption algorithm, it is best to use one that has withstood significant peer review. Otherwise we are relying on exactly what Kerckhoffs' Principle warns against -- the "security by obscurity" we have come to know and love -- and the first decent attack will reduce our security to rubble. Googling for "NTRU cryptanalysis" returns around 3,000 hits. Googling for "RSA cryptanalysis" returns around 186,000 -- nearly two orders of magnitude more. This is not a nail in WiKID's coffin, but it could be betting the company on Betamax. NTRU is undoubtedly less popular than, say, RSA or Elliptic Curve. In most aspects of life, supporting the underdog can result in a great time. Doing it in crypto, however, may not be a good idea.

Before somebody reads the above paragraph and goes to the extreme in either direction, please note my point: the workhorse of WiKID, the NTRU encryption algorithm, has an unknown security value. One could argue that RSA, likewise, has only a mostly known security value, but you decide: "mostly known" or "unknown"? There may not be any problems in NTRU, and it may be perfectly safe and secure to use. Conversely, it may be the worst decision ever to use it. That's what peer review helps us decide.


...
To sum up ... WiKID is cheap, open source, interesting, and ... still vulnerable to malware problems. And don't forget: you have to accept using a less popular encryption algorithm.