Friday, March 21, 2008

University of Washington's Computer Security Course

Tadayoshi Kohno, a computer science professor at the University of Washington, is teaching an undergraduate computer security course with a unique intention: Kohno is trying to teach the security mindset-- the same mindset that Bruce Schneier has been talking about for years.

The results are interesting, not to mention available to the public. Students acquiring the security mindset are writing security analyses of just about anything, from dorm rooms to high-tech systems. It's all available here in blog format.

More Broken DRM

From Slashdot:

"In July 2007, Richard Doherty of the Envisioneering Group (BD+ Standards Board) declared: 'BD+, unlike AACS which suffered a partial hack last year, won't likely be breached for 10 years.' Only eight months have passed since that bold statement, and Slysoft has done it again. According to the press release, the latest version of their flagship product AnyDVD HD can automatically remove BD+ protection and allows you to back-up any Blu-ray title on the market."
How many more times must we endure the faulty logic of DRM (Digital Rights Management)? It's simple, if you understand key management: you cannot give an end-user a ciphertext (the Blu-ray movie) that they can convert to plaintext (i.e. play in a hardware or software player) without also giving them access to the key that unlocks the ciphertext. Every player must have that key; otherwise the video is just encrypted data-- not playable.
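The key-management argument can be sketched in a few lines of Python. This is a toy model, not the real AACS/BD+ scheme: a repeating-key XOR stands in for the actual cipher, and all names are illustrative.

```python
# Toy model of the DRM key-management problem: any player that can
# turn ciphertext into plaintext must hold the decryption key.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class Player:
    def __init__(self, disc_key: bytes):
        # The key must live inside the player. In software it sits in
        # process memory; in hardware it sits in a chip. Either way it
        # is physically in the customer's (or attacker's) possession.
        self.disc_key = disc_key

    def play(self, ciphertext: bytes) -> bytes:
        return xor_cipher(ciphertext, self.disc_key)

movie = b"plaintext video frames"
key = b"\x13\x37"
disc = xor_cipher(movie, key)        # what ships on the disc
player = Player(key)                 # what ships to every customer
assert player.play(disc) == movie    # playback requires the key
```

The point of the sketch is structural: there is no way to write `play()` without `disc_key` being reachable from it, no matter how the cipher is strengthened.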

DRM defies the laws of nature. It's just like the recent cold-boot attacks on disk encryption: the decryption keys are there, in the software. If you can manipulate the hardware, you can get them. And sometimes (as with the BD+ hack) you don't even have to manipulate the hardware. The keys have to be stored somewhere-- usually in memory, just as with the whole disk encryption products. In fact, one possible application of the Princeton group's research would be to cold-boot computers that are playing BD+ protected Blu-ray discs, since the group came up with new methods for identifying encryption keys stored in decaying DRAM and correcting the bit-flip decay.
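A minimal sketch of the key-identification idea. The Princeton tools actually exploit the structure of AES key schedules; the simplified heuristic below just flags high-entropy windows in a memory image, and the dump and threshold are invented for illustration.

```python
import math
from collections import Counter

def shannon_entropy(window: bytes) -> float:
    """Bits of entropy per byte over the window's byte histogram."""
    counts = Counter(window)
    n = len(window)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def candidate_keys(dump: bytes, key_len: int = 16, threshold: float = 3.5):
    """Slide over the dump and flag windows dense enough to be key material."""
    hits = []
    for off in range(0, len(dump) - key_len + 1):
        w = dump[off:off + key_len]
        if shannon_entropy(w) >= threshold:
            hits.append((off, w))
    return hits

# Mostly-zero "memory" with a 16-byte key planted at offset 64.
dump = b"\x00" * 64 + bytes(range(16)) + b"\x00" * 64
print(candidate_keys(dump))   # flags windows overlapping the key region
```

Real DRAM decay flips bits, which is why the published attack pairs a search like this with error correction that back-deduces damaged key bits from redundant key-schedule material.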

Even if the Blu-ray people mandated that only hardware Blu-ray devices could be created and sold (since software players have been the primary target for DRM destruction), the keys would have to exist in every one of their customers' homes-- right there in the players! They might be a little more difficult to reverse engineer and discover, since hardware tends not to be as flexible as software, but the keys would have to be there, stored in CMOS perhaps, or hard-coded into the decryption-playback circuits. And we have seen, time and time again, that the effort of even a single person to reverse engineer the decryption key can be devastating to a DRM scheme. All it takes is one person to discover it and a company like Slysoft to find a way to legally market it.


...
In summary: DRM is not possible. If you present data to a person outside of your physical reach, then you cannot control how they use it. Anyone who claims otherwise is peddling the information security equivalent of perpetual motion. Don't buy it.

Saturday, March 8, 2008

Anderson Proves PIN Entry Devices are Insecure

If there is a theme in good security research right now, it's that we cannot trust hardware.

Ross Anderson and company at the Computer Laboratory at Cambridge University have performed some interesting research demonstrating how a paperclip can be used to steal cardholder data from a bank card PIN Entry Device (PED). Machines believed to be secure because they were assessed at the weakest level of the esteemed Common Criteria are apparently rife with flaws. The Cambridge group believes that fraudsters have been using these techniques for some time.

Friday, March 7, 2008

Jon Callas Responds to Ed Felten

It's nice to not be on the top spot of Jon Callas' "CTO Corner" anymore ... although I held that spot for four and a half months. Jon Callas, the CTO of PGP Corporation, has moved on to respond to Ed Felten's memory-freezing, whole-disk-encryption-key-stealing crew at Princeton University.

Some highlights from Jon's response ...
"The basic issue is one that we have known for years."
Well, that's not very comforting, or at least it shouldn't be. If it was so well known, then why is PGP Corp only now looking to integrate with hardware and BIOS vendors to attempt to resolve it? That line sets Jon's general theme: this is no big deal ... we've known about it forever ... it's just a new spin on an old trick ...
"Those of us who consider these things have known that this was at least in theory possible for some time. This team did two impressive things: they made it actually work, and they did some math to recover partially-damaged RSA and AES keys. This latter feat they did by looking at scratch variables that the encryption systems use, and back-deducing what some of the damaged bits of the keys must have been. The process is a bit like a big Sudoku game; when you play Sudoku, you deduce what is missing based on what is present."

Again, "it's no big deal", except, wait, yep, there's that really complicated math part. I do like Jon's comparison to Sudoku; it's a good analogy.

"Despite how dramatic this attack is, there is an easy fix for it."
If there really was an easy fix, then the whole notion of "cold boot" would be a solved problem, and that's obviously not the case. Ripping power from a running system (which Jon later says was never the primary threat PGP WDE was designed to overcome) does not protect the keys. Even if BIOS vendors started shipping features that sanitize memory at boot, an attacker could still cut power, optionally cool down the DRAM, and move the memory into a prepared system to read the encryption keys. Yes, that requires a dedicated and trained adversary, but there are organizations with very valuable information. Jon should not be so quick to downplay the likelihood that his customers face such an adversary-- unless, of course, the really security-conscious organizations have been skipping his company's products altogether.
"When a computer is hibernated, the contents of its memory is written to disk, and then the computer is shut down. No residual power is supplied to the RAM, so it will fade in one to two minutes, just as if you had shut it off. It doesn't matter what software you are running; if you hibernate a machine with WDE, it will be safe in a couple of moments. (Note: the Cold Boot researchers say that hibernate mode is vulnerable, and they are wrong on this nit. A truly hibernated machine is turned off, but with a copy of RAM written to disk. These machines are safe, once memory has faded.)"
Anyone else want to hear Felten's and crew's response to the hibernate "nit"?
"If there is a hard power loss, such as pulling the battery from a laptop or yanking the power cord out from a server, there's next to nothing that software alone can do. There's next to nothing that hardware can do. We could design hardware and software to do something in this case, but you probably wouldn't pay for it. I wouldn't."
I can think of several options here, none of which could be so expensive that, once typical economies of scale (mass production and consumer demand) apply, the price becomes unreasonable. I'm not sure what it says about what's on Jon's computer that he wouldn't be interested in something as simple as a small reserve of electrical power (a capacitor, say) that detects when main power has been lost and spends its reserve-- just ample enough-- to perform a basic overwrite or sanitizing operation on DRAM. Such a feature could not possibly cost more than a seat of PGP WDE.
"External authentication using smart cards, tokens, TPMs, does not solve the problem. There have been reports of some people claiming that it does. It doesn't. Remember, this is very simple; there is some RAM that has a key, and that RAM needs to be cleared. Authentication doesn't clear memory. TPMs do not clear memory. The people who claim that a USB key helps at all are displaying their ignorance."
I agree that USB keys don't clear memory. What was Dr. Eric Cole of SANS thinking when he said this in the Feb 29th issue of their Newsbites?
"(Cole): The cold boot attack has a cool factor to it, but remember that full disk encryption will protect a system only if it has a strong password (two factor recommended) and if the system is completely turned off. Use of a USB token stops the attack. If you turn your system completely off (and hold on to it for more than 5 seconds) the attack is not successful. If you do not follow either of these rules, then full disk encryption can potentially be broken even without this attack."
But a future generation of TPMs, or more specifically secure co-processors, could potentially perform all cryptographic operations in hardware, not just integrity checking of boot procedures. Whereas today's TPMs can store keys only to later hand them off to a process that will, unfortunately, keep them in DRAM, the next generation of secure co-processors could be passed ciphertext blocks for decryption, passing the plaintext back to a WDE-like service. There will be I/O performance concerns to overcome initially, but it is feasible that a commodity-priced chip will one day solve the problem.
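A sketch of that hypothetical co-processor interface. The class and its toy cipher are invented for illustration; a real chip would implement AES behind a sealed key store. The point is the shape of the API: ciphertext in, plaintext out, no path that exports the key to host DRAM.

```python
# Hypothetical secure co-processor: the key never leaves the chip, so
# a cold-boot image of host memory contains no key material.

class SecureCoprocessor:
    def __init__(self):
        self.__key = b"\x5a"            # sealed inside the chip (toy key)

    def decrypt_block(self, block: bytes) -> bytes:
        # Toy XOR cipher standing in for AES.
        return bytes(b ^ self.__key[0] for b in block)

    # Deliberately no export_key() method -- the host can only ask
    # for decryptions, never for the key itself.

chip = SecureCoprocessor()
ciphertext = bytes(b ^ 0x5a for b in b"disk sector")
assert chip.decrypt_block(ciphertext) == b"disk sector"
```

The remaining exposure is the plaintext itself, which the host still holds, but stealing a disk's worth of plaintext through DRAM remanence is a much harder job than stealing one 16-byte key.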
"There is more reason to use WDE in conjunction with either Virtual Disk or NetShare. We have always said that the primary threat model for WDE is a machine that is shut down or hibernated. We have always pointed to the added benefits of the other forms of encryption. In his recent article on mobile data protection, Bruce Schneier touts PGP Virtual Disk. The PGP Encryption Platform gives you defense in depth. Defense in depth is good because the layers of protection give more security."
Translation: buy more of their products.



Of course, there's always the solution I have offered despite common objections: one method for securing information is to not place it on disk at all. Encryption is not always the answer.

Excellent Cold Boot Step-By-Step

News.com has an excellent step-by-step complete with pictures detailing what it takes to steal the encryption keys for Apple's File Vault using the Princeton University's Cold Boot attack on whole disk encryption. Jacob Appelbaum, one of the independent security researchers involved with Ed Felten's Princeton crew, is your guide.

Thursday, February 21, 2008

Felten Destroys Whole Disk Encryption

Ed Felten and company publicized some research findings today on a form of side-channel attack against whole disk encryption keys stored in DRAM.

We show that disk encryption, the standard approach to protecting sensitive data on laptops, can be defeated by relatively simple methods. We demonstrate our methods by using them to defeat three popular disk encryption products: BitLocker, which comes with Windows Vista; FileVault, which comes with MacOS X; and dm-crypt, which is used with Linux....

Our research shows that data in DRAM actually fades out gradually over a period of seconds to minutes, enabling an attacker to read the full contents of memory by cutting power and then rebooting into a malicious operating system....
Interestingly, if you cool the DRAM chips, for example by spraying inverted cans of “canned air” dusting spray on them, the chips will retain their contents for much longer. At these temperatures (around -50 °C) you can remove the chips from the computer and let them sit on the table for ten minutes or more, without appreciable loss of data. Cool the chips in liquid nitrogen (-196 °C) and they hold their state for hours at least, without any power. Just put the chips back into a machine and you can read out their contents.
This is deadly for disk encryption products because they rely on keeping master decryption keys in DRAM. This was thought to be safe because the operating system would keep any malicious programs from accessing the keys in memory, and there was no way to get rid of the operating system without cutting power to the machine, which “everybody knew” would cause the keys to be erased.
Our results show that an attacker can cut power to the computer, then power it back up and boot a malicious operating system (from, say, a thumb drive) that copies the contents of memory. Having done that, the attacker can search through the captured memory contents, find any crypto keys that might be there, and use them to start decrypting hard disk contents. We show very effective methods for finding and extracting keys from memory, even if the contents of memory have faded somewhat (i.e., even if some bits of memory were flipped during the power-off interval). If the attacker is worried that memory will fade too quickly, he can chill the DRAM chips before cutting power.
This is a good example of academic security research. It shows that the trust whole disk encryption software places in the hardware is misplaced.

There's even a video:

Tuesday, February 19, 2008

Websense CEO on AV Signatures

Websense CEO, Gene Hodges, on the futility of signature based antivirus, just an excerpt:

On the modern attack vector: Antivirus software worked fine when attacks were generally focused on attacking infrastructure and making headlines. But current antivirus isn’t very good at protecting Web protocols, argued Hodges. “Modern attackware is much better crafted and stealthy than viruses so developing an antivirus signature out of sample doesn’t work,” said Hodges. The issue is that antivirus signature sampling starts with a customer being attacked. Then that customer calls the antivirus vendor, a sample is created, the malware is identified, and then a signature is created. The conundrum for antivirus software comes when there’s malware that’s never detected. If you don’t know you’re being attacked there’s no starting point for a defense. “Infrastructure attacks are noisy because you wanted the victim to know they have been had. You didn’t have to be a brain surgeon to know you were hit by Slammer. Today’s malware attacks are stealthy and don’t want you to know it’s there,” said Hodges.

Is antivirus software necessary? Hodges said that antivirus software in general is still necessary, but the value is decreasing. Hodges recalled discussions at a recent conference and the general feeling from CIOs that viruses and worms were a solved problem. Things will get very interesting if there’s a recession and customers become more selective about how they allocate their security budgets. For instance, Hodges said CIOs could bring in Sophos, Kaspersky and Microsoft as antivirus vendors and “kick the stuffing out of the price structure for antivirus and firewalls.” The dollars that used to be spent on antivirus software could then be deployed for more data centric attacks that require better access control, encryption and data leakage. My take: Obviously, Hodges has a motive here since these budget dollars would presumably flow in Websense’s direction. That said the argument that the value of antivirus software is declining makes a lot of sense and is gaining critical mass.

Web 2.0 as security risk. Hodges said Web 2.0–or enterprise 2.0–techniques could become a security risk in the future, but Websense “really hasn’t seen significant exploitation of business transactions of Web 2.0.” That said enterprises are likely to see these attacks in the future. For starters, enterprises generally allow employees to tap sites like YouTube, Facebook and MySpace. Those sites are big targets for attacks and connections to the enterprise can allow “bad people to sneak bad stuff into good places,” said Hodges. In other words, the honey pot isn’t lifting data from Facebook as much as it is following that Facebook user to his place of employment. Meanwhile, Web connections are already well established in the enterprise via automated XML transactions, service oriented architecture and current ERP systems. Hodges noted that Oracle Fusion and SAP Netweaver applications fall into the Web 2.0 category.


Even the security CEOs can see it (the futility of signature based anti-malware, that is).

Thursday, February 14, 2008

Localhost DNS Entries & "Same Site Scripting"

I'm not a big fan of new names for variations of existing attacks, but Tavis Ormandy (of Google) has pointed out an interesting way to leverage non-fully qualified DNS entries for localhost (127.0.0.1) with XSS:
It's a common and sensible practice to install records of the form
"localhost. IN A 127.0.0.1" into nameserver configurations, bizarrely
however, administrators often mistakenly drop the trailing dot,
introducing an interesting variation of Cross-Site Scripting (XSS) I
call Same-Site Scripting. The missing dot indicates that the record is
not fully qualified, and thus queries of the form
"localhost.example.com" are resolved. While superficially this may
appear to be harmless, it does in fact allow an attacker to cheat the
RFC2109 (HTTP State Management Mechanism) same origin restrictions, and
therefore hijack state management data.

The result of this minor misconfiguration is that it is impossible to
access sites in affected domains securely from multi-user systems. The
attack is trivial, for example, from a shared UNIX system, an attacker
listens on an unprivileged port[0] and then uses a typical XSS attack
vector (e.g. in an html email) to lure a victim into
requesting http://localhost.example.com:1024/example.gif, logging the
request. The request will include the RFC2109 Cookie header, which could
then be used to steal credentials or interact with the affected service
as if they were the victim.

Tavis recommends removing localhost entries that lack the trailing period (i.e. "localhost" vs. "localhost.") from DNS servers. The trailing period ensures that somebody cannot set up camp on 127.0.0.1 and steal your web application's cookies or run other malicious dynamic content in the same domain, exploiting DNS for same-origin policy attacks.
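The misconfiguration is easy to check for mechanically. Here is a minimal sketch that scans BIND-style zone file text for the unqualified form; the regex covers only simple A records and the sample zone is invented for illustration.

```python
import re

# Flag "localhost" A records that lack the trailing dot. Without the
# dot the zone's origin is appended, creating names like
# "localhost.example.com" -> 127.0.0.1 and handing same-origin
# privileges to anyone who can listen on the local machine.

RECORD = re.compile(r"^(?P<name>\S+)\s+(?:\d+\s+)?IN\s+A\s+127\.0\.0\.1")

def unqualified_localhost(zone_text: str):
    bad = []
    for line in zone_text.splitlines():
        m = RECORD.match(line)
        if m and m.group("name") == "localhost":   # no trailing dot
            bad.append(line)
    return bad

zone = """\
localhost.  IN A 127.0.0.1
localhost   IN A 127.0.0.1
"""
print(unqualified_localhost(zone))   # flags only the second record
```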

Friday, February 1, 2008

WiKID soft tokens

I promised Nick Owens at WiKID Systems a response and it is long overdue. Nick commented on my "soft tokens aren't tokens at all" post:
Greetings. I too have posted a response on my blog. It just points out that our software tokens use public key encryption and not a symmetric, seed-based system. This pushes the security to the initial validation/registration system where admins can make some choices about trade-offs.

Second, I submit that any device with malware on it that successfully connects to the network is bad. So you're better off saving money on tokens and spending it on anti-malware solutions, perhaps at the gateway, defense-in-depth and all.

Third, I point out that our PC tokens provide https mutual authentication, so if you are confident in your anti-malware systems, and are concerned about MITM attacks at the network, which are increasingly likely for a number of reasons, you should consider https mutual auth in your two-factor thinking.

Here's the whole thing:
On the security of software tokens for two-factor authentication
and thanks for stimulating some conversation!

Here is their whitepaper on their soft token authentication system.

Unfortunately, I would like to point out that WiKID is first and foremost vulnerable to the same sort of session stealing malware that Trojan.Silentbanker uses. It doesn't matter how strong your authentication system is when you have a large pile of untrustworthy software in between the user and the server-side application (e.g. browser, OS, third party applications, and all the malware that goes with it). I'll repeat the theme: it's time to start authenticating the transactions, not the user sessions. I went into a little of what that might look like.

Nick is aware of that, which is why he made point number two above. But here's the real problem: the web is designed for dynamic code to be pulled down side-by-side with general data, acquired from multiple sources and run in the same security/trust context. Since our browsers don't know which is data and which is instructions until runtime, AND since the instructions are dynamic (they may not be there on the next site visit), how is it NOT possible for malware to live in the browser? I submit that it is wiser NOT to trust your clients' browsers, their input into your application, etc., than to trust that a one-time password really was input by the proper human on the other end of the software pile. Organizations should spend resources on detecting and recovering from security failures (out-of-band mechanisms come to mind-- a good old-fashioned phone call to confirm that $5,000 transfer to a foreign national, perhaps?), rather than assume that the money they invested in some new one-time password mechanism exempts them from such problems.
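For illustration, one hedged sketch of what authenticating transactions (rather than sessions) might look like: a secret held on a separate device computes a short code over the transaction details the user actually sees, so a trojan that rewrites the destination account also invalidates the code. The scheme, accounts, and names below are invented, not any bank's actual protocol.

```python
import hmac
import hashlib

def confirmation_code(secret: bytes, dest_account: str, amount_cents: int) -> str:
    """Six-digit code bound to the transaction details via HMAC-SHA256."""
    msg = f"{dest_account}:{amount_cents}".encode()
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

# Secret provisioned to the customer's separate device (hypothetical).
secret = b"shared-with-the-customer-device"

# The user computes the code over the payment they intend to make;
# the bank recomputes it over the payment it actually received.
legit = confirmation_code(secret, "DE89-3704", 500_000)
tampered = confirmation_code(secret, "KY-EVIL-ACCT", 500_000)
print(legit, tampered)   # with overwhelming probability the codes differ
```

A browser-resident trojan can still show the user whatever it likes, which is why the details must be displayed and confirmed on the separate device, not in the browser; the code binds the approval to those details either way.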

Microsoft published a document titled "10 Immutable Laws of Security" (never mind for now that they are neither laws, nor immutable, nor even concise), and point number one is entirely relevant: "Law #1: If a bad guy can persuade you to run his program on your computer, it's not your computer anymore". Where does JavaScript, a Turing-complete programming language, fall under that? If you completely disable script in your browser, most applications break. But if you allow it to run, behaviors you cannot control can run on your behalf. Taking Nick's advice, we should be spending our time and resources solving the code and data separation problem on the web, not implementing one-time passwords (and I agree with him on that).



Second, I have a hard time calling WiKID a token-- not that it couldn't fit the definition-- it's just that it is a public key cryptography system. I have never referred to a PGP key pair as a token, nor have I heard anyone else do so. Likewise, I don't ever hear anyone say "download this X.509 token"; instead they say "X.509 certificate". Smart cards might be the saving-grace example that lets me stretch my mind around the vocabulary: generally speaking, a smart card is a physical "token", and smart card implementations can carry a PKC key pair. So I'll have to extend my personal schema, so to speak, and allow WiKID into the "token" category (but just barely).

The x509 cert example is a great analogy, because under the hood that's basically how WiKID works. Just like an HTTPS session, it swaps public keys (which allows for the mutual authentication) and then a session key is created for challenge/response-- the readout of the "soft token" that the user places into the password form, for example.


There is one concerning issue with WiKID: it uses a relatively unknown public key encryption algorithm called NTRU. NTRU aims to run in low-resource environments like mobile phones, which is undoubtedly why WiKID employs it. NTRU is also patented by NTRU Cryptosystems, Inc. (the patent may have business/political ramifications similar to PGP's original use of the IDEA algorithm). However, when choosing an encryption algorithm, it is best to use one that has withstood significant peer review. Otherwise, we court the "security by obscurity" failure that Kerckhoffs' principle warns about, and the first decent attack will reduce our security to rubble. Googling for "NTRU cryptanalysis" returns around 3,000 hits. Googling for "RSA cryptanalysis" returns around 186,000-- nearly two orders of magnitude more. This is not a nail in WiKID's coffin, but it could be betting the company on Betamax. NTRU is undoubtedly less studied than, say, RSA or Elliptic Curve cryptography. In most aspects of life, supporting the underdog can be great fun. Doing it in crypto may not be a good idea.

Before somebody reads the above paragraph and goes to an extreme in either direction, please note my point: the workhorse of WiKID, the NTRU encryption algorithm, has an unknown security value. One could argue that RSA, likewise, has only a mostly known security value, but you decide: "mostly known" or "unknown"? There may be no problems in NTRU, and it may be perfectly safe and secure to use. Conversely, it may be the worst decision ever. That's what peer review helps us decide.


...
To sum up ... WiKID is cheap, open source, interesting, and ... still vulnerable to malware problems. And don't forget: you have to choose to use a less popular encryption algorithm.

Wednesday, January 30, 2008

Two Words: Code Quality

Dr. Brian Chess, Chief Scientist at Fortify and static analysis guru, has a couple very interesting posts on the company blog: one on the U.S. court system paying attention to source code quality of breathalyzers, and one on the quality of source code in closed systems (Nintendo Wii and Apple's iPhone).

It appears that a custom-crafted Zelda saved-game file can exploit a buffer overflow in Zelda, allowing the execution of any code you want to throw at the console-- step one of software piracy on the Wii. It further illustrates that you should NEVER trust user input-- no matter how unlikely you think untrustworthy input is (ahem, saved games on a closed video game system).
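The defensive pattern is simple to show. Here is a minimal sketch, using an invented save-file format: validate every attacker-controlled field before using it, which is exactly the check the exploited code skipped.

```python
import struct

MAX_NAME = 32   # illustrative limit on the player-name field

def parse_save(blob: bytes) -> str:
    """Parse a (hypothetical) save file: 2-byte length, then a name."""
    if len(blob) < 2:
        raise ValueError("truncated header")
    (name_len,) = struct.unpack_from(">H", blob, 0)
    if name_len > MAX_NAME:                 # the kind of check that was missing
        raise ValueError("name field too long")
    if len(blob) < 2 + name_len:
        raise ValueError("truncated name")
    return blob[2:2 + name_len].decode("ascii", errors="strict")

assert parse_save(b"\x00\x04LINK") == "LINK"
try:
    parse_save(b"\xff\xff" + b"A" * 70000)  # oversized length field
except ValueError as e:
    print("rejected:", e)
```

In C, the equivalent missing bounds check lets the oversized field overrun a fixed buffer and overwrite a return address; in memory-safe languages the failure mode is tamer, but the rule is the same: length fields in untrusted input are attacker-controlled.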

About 1.5 million iPhones are unaccounted for, suggesting they've been unlocked and set free of AT&T contracts. This further illustrates that controlling a system at a distance is impossible.

Dr. Chess' comments about the Breathalyzers are choice as well:
"One of the teams used Fortify to analyze the code, and lo-and-behold, they found a buffer overflow vulnerability! This raises the possibility that if you mix just the right cocktail at just the right time, you could craft an exploit. (Dream on.) The real lesson here is that our legal system is waking up to the importance of code. If the code isn’t trustworthy, the outcome isn’t trustworthy either. (Electronic voting machine vendors, you might want to read that last line again.)"

Monday, January 28, 2008

Tuesday, January 15, 2008

Targeted Bank Malware

There have been a lot of interesting things going on with malware these days, but this is on the top of the list (for the next few hours anyway ;). Symantec has a blog write-up on a specific trojan that targets over 400 of the most popular banks in the U.S. and abroad. From the post:
This Trojan downloads a configuration file that contains the domain names of over 400 banks. Not only are the usual large American banks targeted but banks in many other countries are also targeted, including France, Spain, Ireland, the UK, Finland, Turkey—the list goes on.
Targeted malware has some interesting economic ramifications. With signature-based anti-malware, the defenses only work if a signature exists (duh!). But the problem is this: will your very large (think yellow) anti-malware vendor really publish a signature to catch malware targeted at only your organization, especially considering that each signature carries the possibility of falsely identifying some other binary executable and causing a problem for another of their customers? No doubt anti-malware vendors have seen targeted malware and then specifically not published a signature for all of their customers. They may have released a specific signature for an individual organization, but the support risks are significant. It's better for everyone NOT to publish a signature unless the malware is widespread. Every signature adds overhead, even if it's minimal: one more entry to maintain, distribute, and match against each scanned file. Even with efficient matching structures, that overhead multiplied across half a million signatures is expensive.

But that's not all that is interesting about "Trojan.Silentbanker" ...
Targeting over 400 banks ... and having the ability to circumvent two-factor authentication are just two of the features that push Trojan.Silentbanker into the limelight...

The ability of this Trojan to perform man-in-the-middle attacks on valid transactions is what is most worrying. The Trojan can intercept transactions that require two-factor authentication. It can then silently change the user-entered destination bank account details to the attacker's account details instead. Of course the Trojan ensures that the user does not notice this change by presenting the user with the details they expect to see, while all the time sending the bank the attacker's details instead. Since the user doesn’t notice anything wrong with the transaction, they will enter the second authentication password, in effect handing over their money to the attackers. The Trojan intercepts all of this traffic before it is encrypted, so even if the transaction takes place over SSL the attack is still valid. Unfortunately, we were unable to reproduce exactly such a transaction in the lab. However, through analysis of the Trojan's code it can be seen that this feature is available to the attackers.

The Trojan does not use this attack vector for all banks, however. It only uses this route when an easier route is not available. If a transaction can occur at the targeted bank using just a username and password then the Trojan will take that information, if a certificate is also required the Trojan can steal that too, if cookies are required the Trojan will steal those. In fact, even if the attacker is missing a piece of information to conduct a transaction, extra HTML can be added to the page to ask the user for that extra information.
If you understand how two-factor authentication really works (at least how banks are implementing it), then you already understand that it cannot stop fraud by an adversary in the middle (a less chauvinistic way to say "man in the middle"). Bruce Schneier has been preaching the failures of two-factor authentication for a long time. What's monumental about this piece of malware is that it is the first to do well what the pundits have long said was possible. Two-factor authentication, including PayPal's Security Key (courtesy of Verisign, a company not on my good list), is broken. SiteKey is broken (for the same reasons). What Schneier said in 2005 took until 2008 to materialize (at least publicly). This will not go down in history as an anomaly; it will go down as the first run at a sophisticated "engine" for MITM attacks on any web application that handles financial transactions.

Here's the real question to answer: How many fund transfer transactions must be hijacked to bank accounts in foreign nations that don't extradite their criminals to the U.S. before we finally realize just how bad malware in the "Web 2.0" world (sorry that's O'Reilly's name, I really mean "XMLHTTPRequest" world) can get?

It's about the transactions, folks. It's time to authenticate the transactions, not the users. (Schneier's been saying that, too.) Bank of America is closer-- yet farther away at the same time-- to getting this multi-factor authentication problem solved by implementing out-of-band multi-factor authentication in their "SafePass" service ... BUT, it's still completely vulnerable to this malware (I cannot state that this specimen does in fact implement a MITM on this BoA service since I have not reversed a sample, but if not in version 1.0, it will eventually). I wanted to write a rebuttal to the entry on Professor Mich Kabay's column on Network World, but the authors of this trojan did a better job!

Now the downfalls of SafePass, which uses SMS text messages to send one-time-use six-digit passcodes (think RSA SecurID tokens without the hardware) to a customer's mobile phone. It's nice to see authentication happen out-of-band (not in the same communication channel as the HTTPS session); however, once the session is authenticated, the trojan can still create fraudulent transactions. SafePass could be improved by: 1) using a confidential communication channel (SMS text messages travel through the service provider's network in plaintext), and 2) requiring the customer to input the authentication code to validate transaction details sent along with the out-of-band token. Obviously you don't want to skip #1, or else you'll have a privacy issue when the details of the transaction are broadcast through the provider's network and over the air.

Suppose Eve can modify Alice's payment to Bob the merchant (the transaction's destination) via a MITM trojan like this one. Eve modifies the payment to go to Charlie (an account in the Cayman Islands). Suppose Alice is presented with a "Please validate the transaction details we have sent to your mobile phone and enter the validation code below" message. Alice receives the message and notices the payment is to be sent to Charlie. Since she doesn't intend to pay Charlie, she calls her bank to report the fraud (as directed in the transaction validation method). Of course, that pipe dream would only work if SMS were a confidential channel [and if we could deliver a trustworthy confidential channel to a mobile phone, we would ALREADY have solved the trustworthy transaction problem-- they're the same problem] AND if we could find a way to make customers actually PAY ATTENTION to the validation message details.

And of course, incoming SMS messages in the U.S. are charged to the recipient (what a stupid idea-- what happens when people start spamming over SMS like they do over SMTP?).


...

[Side commentary] Oh. And then there's the problem of telling your family members not to click on any random e-cards (or anything else they were not expecting), because it just got a little scarier out there. Imagine your retired parent just clicked on some pretty animated something-or-other that launched some client-side script and installed Trojan.Silentbanker (or similar), and then the next time they went to check their retirement savings, their life's solvency was sent to a Cayman Islands account and liquidated faster than you can say "fraud". Does dear old Grandma or Grandpa have to go back to work? They just might, or become a liability to their children and grandchildren. If that won't scare the older generation away from the unsafe web, what will?

Trust is a Simple Equation

[Begin rant]

OK. If security vendors don't get this simple equation, then we might as well all give up and give in...


If you don't know if a computer has been rooted or botted (apologies to the English grammar people-- computer security words are being invented at an ever-increasing rate), then you cannot use that same computer to find out if it has been rooted or botted. Let me say this slightly differently: If you don't know if a computer is worthy of trust (trustworthy), then you cannot trust it to answer correctly if you ask it if it is trustworthy.

It doesn't work in real life. It's stupid to think that a person you just met on the street is trustworthy enough to hold your life savings just because that person says "you can trust me" (and the same goes for holding anything of lesser value). My father used to say "never trust a salesman who says the words 'trust me'" because his life experience suggested they're lying most often when they say that (which may not be statistically rigorous, but it's relevant as an anecdote).

So why in the world would we EVER trust a piece of software to run on a computer whose state is unknown-- whose TRUSTWORTHINESS is unknown-- to determine if it (or any other system for that matter) is trustworthy???

That's why many NAC implementations fail. It's opt-in security. They send some little Java (platform-independent) program to run and determine trustworthiness-- presence of patches, AV, etc. Of course, all it takes is a rootkit to say "nothing to see here, move along" to the NAC code. We've seen implementations of that.
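The opt-in failure fits in a few lines. A deliberately silly sketch (the classes are hypothetical, not any vendor's code): both hosts answer a NAC-style "are you clean?" query identically, because the compromised one controls its own answer.

```python
# Toy model of opt-in security: an honest host and a rooted host are
# indistinguishable to any check that runs on the host itself.
class HonestHost:
    actually_rooted = False
    def self_report_clean(self) -> bool:
        return not self.actually_rooted   # truthful answer

class RootedHost:
    actually_rooted = True
    def self_report_clean(self) -> bool:
        return True                       # the rootkit lies on the OS's behalf

for host in (HonestHost(), RootedHost()):
    print(type(host).__name__, "reports clean:", host.self_report_clean())
# Both report clean; only one of them is.
```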

So why in the world is Trend Micro-- a company who should KNOW BETTER-- creating code that does just that? RUBotted is the DUMBEST, STUPIDEST, MOST [insert belittling adjective here] idea I have ever seen. It works by running code inside the same CPU, controlled by the same OS, that is already believed to be botted-- why else would you run the tool unless you already suspected the trustworthiness to be low?!?

This had to have been either designed by a marketing person or a complete security amateur. It attempts to defy (or more realistically: ignore) the laws of trust! How long is it going to be before bot code has the staple feature: "hide command and control traffic from the likes of RUBotted"?


And then eWeek is suggesting that this will be a paid-for-use service or application?!? People, please don't pay money for snake oil, or in this case perpetual motion machines.


This just defies nature. If you wouldn't do it in the "real world", don't do it in the "cyber world".



Now, if you could use systems you already know to be trustworthy (i.e. computers that you control and know that an adversary does not control) to monitor systems about which you are not sure, then you may be able to make a valid assertion about the trustworthiness of some other system, but you MUST have an external/third-party trusted system FIRST.


And don't forget that "trust" and "trustworthiness" are not the same.

[End Rant]

Wednesday, January 9, 2008

MBR Rootkits

There is a new flurry of malware floating around in the wild: boot record rootkits (a.k.a. "bootkits"). Yes, for those of you old enough to remember, infecting a Master Boot Record (MBR) is an ancient practice, but it's back.

There are several key details in these events that should be of interest.

As Michael Howard (of Microsoft Security Development Lifecycle fame) points out and Robert Hensing confirms, Windows Vista with Bitlocker Drive Encryption installed (most specifically the use of the Trusted Platform Modules (TPMs), as I have discussed here many times) is immune to these types of attacks. It's not because the drive is encrypted; it's because the TPM validates the chain of trust, starting with the building blocks-- including the MBR.



But that's not the only interesting thing in this story. What's interesting is best found in Symantec's coverage on the latest MBR rootkit (Trojan.Mebroot) that has been recently found in the wild:
"Analysis of Trojan.Mebroot shows that its current code is at least partially copied from the original eEye BootRoot code. The kernel loader section, however, has been modified to load a custom designed stealth back door Trojan 467 KB in size, stored in the last sectors of the disk."
That's right. Our friends at eEye-- who have been busy hacking the products we use to run our businesses in order to show us the emperor hath no clothes-- created the first Proof of Concept "bootkit":
"As we have seen, malicious code that modifies a system's MBR is not a new idea – notable research in the area of MBR-based rootkits was undertaken by Derek Soeder of eEye Digital Security in 2005. Soeder created “BootRoot”, a PoC (Proof-of-Concept) rootkit that targets the MBR."

Monday, January 7, 2008

Windows Vista Phones Home

OK. Perhaps not "phone home" in the sense that these people think, but it does in fact do it, at least on a minor scale.

A Windows Vista "feature" called Network Connectivity Status Indicator (NCSI) goes and fetches a file hosted on a Microsoft web server (a farm of servers, no doubt, once widespread adoption is realized). It's a simple HTTP GET.

In fact, you can see for yourself. Fire up Wireshark (or similar) while watching the traffic of a Vista client. You'll notice a DNS query for www.msftncsi.com (Microsoft Network Connectivity Status Indicator) and you'll see the GET request for a file named ncsi.txt. Basically, if a client can fetch that file, Vista thinks the network interface has Internet access. Sounds simple, right? And of course, in true Microsoft style, disabling the feature will have negative effects on every application that calls the API that exposes whether or not that file could be fetched (think timeouts and bad coding habits).
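The probe logic itself is simple enough to sketch. The URL and the expected body ("Microsoft NCSI") below reflect how the probe is commonly described; treat the exact strings as assumptions rather than a documented contract.

```python
from typing import Optional

# Sketch of how an NCSI-style active probe turns a fetch result into a
# connectivity verdict.
NCSI_URL = "http://www.msftncsi.com/ncsi.txt"
NCSI_EXPECTED = "Microsoft NCSI"

def interpret_probe(body: Optional[str]) -> str:
    if body is None:
        return "no connectivity"          # the GET failed outright
    if body == NCSI_EXPECTED:
        return "internet"                 # exactly the content we expected
    return "portal or proxy in the way"   # got *something*, but not our file

print(interpret_probe("Microsoft NCSI"))  # → internet
print(interpret_probe(None))              # → no connectivity
```

Swapping the URL and the expected content for your own-- which is exactly what the registry tweak later in this post does-- leaves the verdict logic unchanged.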

What's interesting is the basic traffic analysis that can be performed just by watching clients fetch that file. In that HTTP GET request, the Vista client will pass the user agent string, not to mention it will initiate the TCP session from a randomly chosen high port. And of course, there's the source IP Address (which is likely behind a NAT in an enterprise). Without a degree of certainty, but with a degree of at least entertainment, one could see, say, how many Windows Vista installs existed behind a specific public IP Address. That might be interesting. Especially to somebody having a vested interest in, say, licensing compliance.

But that's not the biggest or most obvious problem. It's not necessarily the most preferred method of checking Internet "connectedness" for an enterprise. And since turning it off supposedly has negative effects (do your own Googling for that), there has to be a better way to configure this service.

And in fact there is ... Here is one option and some implementation choices: configure the service to connect to a different URL, perhaps one controlled by the enterprise in question, or perhaps just to another service. For complete irony, I'm choosing in this example to use the service of an organization that we all know (wink) to do no evil: Google.

Obviously there are going to be a few requirements about the URL you choose:
1) It has to be small and lightweight, for performance reasons. You don't want 5000 machines checking for the existence of, say, an ISO CD image file every time the network interface's state changes.
2) It has to be mostly static, otherwise if the URL goes away then your clients (and apps) will think "the Internet needs rebooting".
3) It has to be a URL that is accessible both from inside an Enterprise Network and from the local Starbucks. Remember, if you're an enterprise that authenticates every HTTP object request: this particular feature runs in the "LOCAL SYSTEM" security context inside of Windows, meaning that unless you grant "Domain Computers" or "Vista Computers" (the latter being a group you'd have to create) in your Active Directory forest Internet access, this check will fail. Yes ... it checks the NCSI even when no user has logged on yet.

Note that instead of the "ncsi.txt" file (a surprisingly small 34 bytes), I have chosen the ubiquitous "favicon.ico" URL, because it's static, it's prevalent, and because Google (a highly available service worldwide) makes use of it, too-- although at 1002 bytes it's roughly thirty times larger (but it's ripe for caching). So, astute readers will notice it meets the above three requirements.

Also note that the NCSI config requires a DNS server name and IP address. I'm choosing OpenDNS, since I've been so impressed with them recently (especially, in this case, their use of Anycast). It should work well for you as well.


Registry tweak.
A quick tweak to the registry is easy if it's just a handful of machines.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NlaSvc\Parameters\Internet]
"EnableActiveProbing"=dword:00000001
"ActiveWebProbeHost"="www.google.com"
"ActiveWebProbePath"="favicon.ico"
"ActiveWebProbeContent"="OpenDNS"
"ActiveDnsProbeHost"="resolver1.opendns.com"
"ActiveDnsProbeContent"="208.67.222.222"


Group Policy ADM template.
A custom GPO that you can configure inside the Group Policy Object Editor to point to a URL of your own choosing. Oh, and you'll want to go through the View > Filtering options and uncheck the last box so that you can actually see the ADM template's setting options.



CLASS MACHINE
CATEGORY "Custom NCSI"
  KEYNAME "SYSTEM\CurrentControlSet\Services\NlaSvc\Parameters\Internet"

  POLICY "EnableActiveProbing"
    VALUENAME "EnableActiveProbing"
    VALUEON NUMERIC 1
    VALUEOFF NUMERIC 0
  END POLICY

  POLICY "ActiveWebProbeHost"
    PART "ActiveWebProbeHost" EDITTEXT
      VALUENAME "ActiveWebProbeHost"
      DEFAULT "www.google.com"
    END PART
  END POLICY

  POLICY "ActiveWebProbePath"
    PART "ActiveWebProbePath" EDITTEXT
      VALUENAME "ActiveWebProbePath"
      DEFAULT "favicon.ico"
    END PART
  END POLICY

  POLICY "ActiveWebProbeContent"
    PART "ActiveWebProbeContent" EDITTEXT
      VALUENAME "ActiveWebProbeContent"
      DEFAULT "OpenDNS"
    END PART
  END POLICY

  POLICY "ActiveDnsProbeHost"
    PART "ActiveDnsProbeHost" EDITTEXT
      VALUENAME "ActiveDnsProbeHost"
      DEFAULT "resolver1.opendns.com"
    END PART
  END POLICY

  POLICY "ActiveDnsProbeContent"
    PART "ActiveDnsProbeContent" EDITTEXT
      VALUENAME "ActiveDnsProbeContent"
      DEFAULT "208.67.222.222"
    END PART
  END POLICY

END CATEGORY




...
And of course, all of this is free for your use but without support or warranty of any kind. I am posting it here because I could not find these answers when I went looking for them.

Saturday, December 29, 2007

AV Signature False Positives

Kaspersky's AV accidentally identified the Windows Explorer process as malware. The same thing happened to Symantec with their Asian Language Windows customers. And Heise is running an article on how AV vendors' ability to protect has decreased since last year.


The problem with these commercial, signature-based, anti-malware solutions is that they work 1) Backwards, and 2) Blind. They operate "backwards" in the sense that they are a default-allow (instead of default-deny) mechanism-- they only block (unless they screw up like this) the stuff they know all of their customers will think is bad. And they operate "blind" in that they don't do any QA on their code in your environment. If you think about it, it's scary: they apply multiple (potentially crippling as evidenced by these recent events) changes to production systems, in most organizations several times per day without proper change control processes. Besides anti-malware, what other enterprise applications operate in such a six-shooters-blazing, wild west cowboy sort of way?
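The "backwards" point is easy to show in miniature. A toy contrast between the AV model (a blocklist, i.e. default-allow) and a default-deny allowlist-- all the names are hypothetical:

```python
# Toy contrast: a blocklist passes anything it has never seen before,
# while an allowlist blocks it.
SIGNATURES = {"Trojan.Silentbanker", "Trojan.Mebroot"}   # known-bad list
APPROVED = {"explorer.exe", "winword.exe"}               # known-good list

def blocklist_allows(name: str) -> bool:
    return name not in SIGNATURES     # default-allow: unknown? let it run

def allowlist_allows(name: str) -> bool:
    return name in APPROVED           # default-deny: unknown? block it

new_malware = "Trojan.NeverSeenBefore"
print(blocklist_allows(new_malware))  # → True (sails right past the AV)
print(allowlist_allows(new_malware))  # → False
```

Every brand-new specimen lands in the blocklist's blind spot until a signature ships-- which is exactly the window the malware authors count on.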


Surely this is one more nail in the signature-based anti-malware coffin.

Tuesday, December 11, 2007

OpenDNS - I think I like you

I think I really like OpenDNS. It's intelligent. It's closer to the problem than existing solutions. And it's free.


OpenDNS works by using Anycast to direct you to the best DNS servers based on where you are. But before it quickly gives you your response, it can optionally filter out unwanted content. OpenDNS partners with communities and service providers to maintain a database of adult content and malicious websites. If you choose to opt in, each DNS query that matches a known bad site sends your browser to a customizable page that explains why the page is not allowed.

Now, privacy advocates are well aware that there is a potential data collection and use problem. However, DNS queries already are a privacy risk, since an ISP can create quite the portfolio on you based on which names get resolved to numbers. OpenDNS can collect information about you, including statistics associated with DNS usage on the networks you manage, but that choice is not turned on by default-- you have to opt into it as well. So, all things considered, privacy is well managed.

I really like this approach to filtering unwanted HTTP content because it completely prevents any connection between clients and offending servers. In fact, clients don't even get to know who (if you can allow me to personify servers for a moment with the term "who") the server is or where it lives. But what I like even more is that this service is simple. There are no complicated client software installs (that users or children can figure out how to disable), no distributed copies of offending-URL databases to replicate and synchronize, and no lexicons for users to tweak. It's lightweight. All it takes is updating a DHCP server's entries for DNS servers to point to 208.67.222.222 and 208.67.220.220 and checking a few boxes, in an intuitive web administration console, for which content should be filtered. For a home user, that's as easy as updating the DNS server fields in a home router-- and all current and future clients are ready to go. An enterprise could use this service for its DNS Forwarders as well. And many larger customers do. A non-tech-savvy parent could turn on content filtering without the "my kids program the VCR" syndrome resulting in the kids bypassing the filters. Setting an IP address for a DNS server doesn't stand out as a "net nanny" feature to kids who are left alone with the computer.

Use OpenDNS
Okay, there have to be caveats, right? Here they are ...

If you're planning on using some third-party DNS service-- especially one that is free-- it had better be performing well, and it had better be a service that you trust (because DNS has been used in the past to send people to malicious sites). Since their inception in July 2006, OpenDNS has serviced over 500 million DNS requests with a perfect uptime record. And from their open, collaborative stance on issues like phishing (see phishtank.com), you'll want to trust them.

Any DNS misses (except some common typos) will return you to an OpenDNS web page that tries to "help" you find what you missed. The results look like re-branded Google results. Users taking links off the OpenDNS results page is how OpenDNS makes their revenue--on a pay per click basis. That's how they keep the services free.

Dynamic IP addresses can mess up a home user's ability to keep content filtering policies in check (this won't affect enterprises). But there are a number of ways to keep the policies in sync, including their DNS-O-Matic service. What I'd like to see added: native consumer router support for dynamic IP address changes, so content filtering policies stay in place no matter what the ISP does. [The Linksys WRT54G wireless router, for example, supports similar functions with TZO and DynDNS today-- it would be nice if OpenDNS were another choice in the drop-down menu.] If my neighbor enrolled in the service, it might be possible for me to get my neighbor's OpenDNS filtering policies if we share the same ISP and dynamic IP pool, but again, that's what the dynamic IP updating services are for.

Enterprises that decide to use OpenDNS for their primary outgoing DNS resolvers must keep in mind that an offending internal user could simply specify a DNS server of their preference-- one that will let them bypass the content filters. However, a quick and simple firewall policy (not some complicated DMZ rule) to screen all DNS traffic (UDP/TCP 53) except traffic destined for the OpenDNS servers (208.67.222.222 and 208.67.220.220) will quell that concern.
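The screening policy amounts to a one-line allow-list check. A sketch of the logic only-- a real deployment would express this in the firewall's own rule language, and the non-OpenDNS address below is a hypothetical example from the documentation range:

```python
# Permit port-53 traffic only when the destination is one of the
# OpenDNS resolvers; any other DNS destination is a bypass attempt.
OPENDNS_RESOLVERS = {"208.67.222.222", "208.67.220.220"}

def dns_egress_allowed(dst_ip: str, dst_port: int) -> bool:
    if dst_port != 53:
        return True                       # not DNS; this rule does not apply
    return dst_ip in OPENDNS_RESOLVERS    # DNS only to the sanctioned servers

print(dns_egress_allowed("208.67.222.222", 53))  # → True
print(dns_egress_allowed("198.51.100.7", 53))    # → False (bypass attempt)
```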

So the caveats really are not bad at all.

Since the company is a west coast (SF) startup and since the future seems bright for them as long as they can keep their revenue stream flowing, I imagine they'll be gobbled up by some larger fish [Google?].


So this Christmas, give the gift of safety.




...
This might seem like a blatant advertisement, but (number one) I rarely like a service well enough to advocate or recommend it and (number two) I am not financially affiliated with OpenDNS in any way.

Monday, December 10, 2007

Gary McGraw on Application Layer Firewalls & PCI

This serves as a good follow-up to my dissection of Imperva's Application Layer Firewall vs Code Review whitepaper.

Gary McGraw, the CTO of software security firm Cigital, just published an article on Dark Reading called "Beyond the PCI Bandaid". Some tidbits from his article:

Web application firewalls do their job by watching port 80 traffic as it interacts at the application layer using deep packet inspection. Security vendors hyperbolically claim that application firewalls completely solve the software security problem by blocking application-level attacks caused by bad software, but that’s just silly. Sure, application firewalls can stop easy-to-spot attacks like SQL injection or cross-site scripting as they whiz by on port 80, but they do so using simplistic matching algorithms that look for known attack patterns and anomalous input. They do nothing to fix the bad software that causes the vulnerability in the first place.
Gary's got an excellent reputation fighting information security problems from the software development perspective. His Silver Bullet podcast series is one of a kind, interviewing everyone from Peter Neumann (one of the founding fathers of computer security) to Bruce Schneier (the most well known of the gurus) to Ed Felten (of Freedom to Tinker and Princeton University fame). He is also the author of several very well respected software security books.

Thursday, December 6, 2007

Salting your Hash with URLs

So, when I was reading this post on Light Blue Touchpaper (the Cambridge University Computer Security Lab's blog) a few weeks back, I, like many others (including this Slashdot thread), was reminded about the importance of salting your password hashes ... As it turns out, you really can ask Google for a hash value and it really will return significant results-- like a gigantic, easy-to-use Rainbow Table. Steven Murdoch managed to find "Anthony" with this simple search.

Of course, if my story stopped there it would not be that interesting. Security professionals have known for a long time about salting hashes to defeat precomputed tables of known hash values. But as I thought about salting, the parallels with some password management tools dawned on me.
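The Google-as-rainbow-table trick works because an unsalted hash of a given password is the same everywhere, so one public lookup serves every site. A quick sketch (MD5 appears only because it's the hash in the Light Blue Touchpaper anecdote; the salt labels are made up):

```python
import hashlib

# The same input always produces the same digest; a per-site salt breaks
# the one-lookup-table-fits-all property.
password = b"Anthony"

unsalted = hashlib.md5(password).hexdigest()
salted_a = hashlib.md5(b"siteA:" + password).hexdigest()
salted_b = hashlib.md5(b"siteB:" + password).hexdigest()

print(unsalted)   # searching the web for this string is the attack above
print(salted_a)   # useless to a table precomputed without the salt
print(salted_b)   # ...and different again for another site
```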

I had been wanting to reduce the number of passwords I have to keep track of for all of the various web applications and forums that require me to have them. I have used Schneier's Password Safe for years now and find it nice, but not portable to other platforms (e.g. mac/linux). Even moving between different PCs is difficult because it requires keeping my password safe database in synchrony. Of course, several browsers have the ability to store passwords with a "master password", but I have several objections to them. First, they are part of the browser's stack, so I have to trust that my browser and the web applications I use won't result in an opportunity for malware to exploit a client-side bug to steal my passwords. Second, they don't tend to work well when moving from machine to machine, so there's a synchrony problem. So, I am always on the lookout for a good alternative. Perhaps one day an authentication system like Info Cards will become a reality in the consumer-space ...

So, when I first stumbled upon Password Maker, an open source password management tool, I wanted to like it. How does it work? From their description:
You provide PASSWORDMAKER two pieces of information: a "master password" -- that one, single password you like -- and the URL of the website requiring a password. Through the magic of one-way hash algorithms, PASSWORDMAKER calculates a message digest, also known as a digital fingerprint, which can be used as your password for the website. Although one-way hash algorithms have a number of interesting characteristics, the one capitalized by PASSWORDMAKER is that the resulting fingerprint (password) does "not reveal anything about the input that was used to generate it." 1 In other words, if someone has one or more of your generated passwords, it is computationally infeasible for him to derive your master password or to calculate your other passwords. Computationally infeasible means even computers like this won't help!
But, as I said: "I wanted to like it." After all, there is nothing that has to be stored, ever-- you just have to remember the main password and the algorithm does the rest. There is nothing that has to be installed directly into the browser (unless you want to), not to mention it's very portable from platform to platform. And since there is no password database or safe to move around, there's no synchronization problem-- the site-specific passwords are re-created on the fly. It sounds like a panacea. In the algorithm, the URL essentially becomes a salt, with the master password as the hash input. The resulting hash [sha1(master password + URL)] is the new site-specific password. It sounded like a great solution, but I have a couple of potentially show-stopping concerns about it.
  1. There are varying opinions, but it is a prudent idea to keep salt values secret. If the salt value becomes known, a rainbow table could be constructed that employs the random password key-space concatenated with the salt. Granted, it might take somebody several days to create the rainbow tables, but it could be done-- especially if there were economic incentives to do so. Imagine that an adversary targets the intersection of MySpace and PayPal users, also assuming (of course) that this Password Maker is at least somewhat popular. Sprinkle in some phishing, XSS, or whatever is needed today to capture some MySpace passwords (which are of considerably lower value than, say, PayPal's) to compare against the rainbow tables, and ... Voila ... the adversary now has the master password to input into the Password Maker's scheme to get access to PayPal.
  2. I am not a mathematical/theoretical cryptographer, but I know better than to take the mathematics in cryptographic hash functions for granted. There has not been much research in hashing hash values, at least not much that has entered the mainstream. And as such, it may be possible to create mathematical shortcuts or trapdoors when hashing the hash values, at least potentially with certain types of input. That is not to be taken lightly. I would not build any critical security system on top of such a mechanism until there was extensive peer review literature proclaiming it a safe practice (also read the section entitled "Amateur Cryptographer" of this Computer World article).
In summary, the Password Maker is an intriguing idea, perhaps even novel, but I wouldn't use it for passwords that get you access to anything of high value. For mundane sites (ones where a compromised password is not a huge deal), it's probably a decent way to manage passwords, keeping separate passwords for each site.
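For reference, the general shape of the scheme under discussion can be sketched in a few lines. This is NOT PasswordMaker's actual algorithm (it supports many hash functions, character sets, and options); it only shows the salt-like role the URL plays.

```python
import base64
import hashlib

# hash(master password + URL) -> deterministic per-site password.
def site_password(master: str, url: str, length: int = 12) -> str:
    digest = hashlib.sha1((master + url).encode()).digest()
    return base64.b64encode(digest).decode()[:length]  # typeable, truncated

pw_a = site_password("one master password", "paypal.com")
pw_b = site_password("one master password", "myspace.com")
print(pw_a)  # recreated on the fly; nothing is ever stored
print(pw_b)  # different URL -> different password, same master
```

Which is also why concern #1 bites: anyone who knows the URL "salt" and the scheme can build a table over candidate master passwords.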

Tuesday, December 4, 2007

Client Software Update Mechanisms

It's 2007. Even the SANS Top 20 list has client-side applications as being a top priority. Simply put, organizations have figured out how to patch their Microsoft products, using one of the myriad of automated tools out there. Now it's all the apps that are in the browser stack in some way or another that are getting the attention ... and the patches.

Also, since it's 2007, it's well-agreed that operating a computer without administrative privileges significantly reduces risk-- although it doesn't eliminate it.

So why is it that, when all of these apps in the browser stack (Adobe Acrobat Reader, Flash, RealPlayer, QuickTime, etc.) implement automated patch/update mechanisms, the mechanisms are completely broken if you follow the principle of least privilege and operate your computer as a non-admin? Even Firefox's built-in update mechanism operates the exact same way.

So, here are your options ....

1) Give up on non-admin and operate your computer with privileges under the justification that the patches reduce risk more than decreased privileges.

2) Give up on patching these add-on applications under the justification that decreased privileges reduce more risk than patching the browser-stack.

3) Grant write permissions to the folders (or registry keys) that belong to the applications that need updates so that users can operate the automated update mechanisms without error dialogs, understanding that this could lead to malicious code replacing part or all of the binaries to which the non-admin users now have access.

4) Lobby the vendors to create a trusted update service that runs with privileges, preferably with enterprise controls, such that the service downloads and performs integrity checking upon the needed updates, notifying the user of the progress.

Neither option 1 nor option 2 is ideal. Both are compromises, and the success of each depends heavily upon an ever-changing threat landscape. Option 3 might work for a while, particularly while it is an obscure option, but it's very risky. And option 4 is long overdue. Read this, Firefox, Apple, Adobe, et al.: create better software update mechanisms. Apple even created a separate Windows application for this purpose, but it runs with the logged-in user's permissions, so it's useless.


...
And this is not even dealing with all of the patching problems large organizations learned while introducing automated patching systems for Microsoft products: components used in business-critical applications must be tested prior to deployment. These self-update functions in the apps described above have zero manageability for enterprises. Most of these products ship new versions with complete installers instead of releasing updates that patch broken components. The only real option for enterprises is to keep aware of the versions as the vendors release them, packaging the installers for enterprise-wide distribution through their favorite tool (e.g. SMS). It would be nice if these vendors could release a simple enterprise proxy, at least on a similar level to Microsoft's WSUS, where updates could be authorized by a centralized enterprise source after proper validation testing in the enterprise's environment.