(noun) securology.
Latin: se cura logia
Literally translated: the study of being without care or worry
Saturday, December 29, 2007
Kaspersky's AV accidentally identified the Windows Explorer process as malware. The same thing happened to Symantec with their Asian Language Windows customers. And Heise is running an article on how AV vendors' ability to protect has decreased since last year.
The problem with these commercial, signature-based anti-malware solutions is that they work 1) backwards and 2) blind. They operate "backwards" in the sense that they are a default-allow (instead of default-deny) mechanism-- they only block (unless they screw up like this) the stuff they know all of their customers will think is bad. And they operate "blind" in that they do no QA on their code in your environment. If you think about it, it's scary: they apply multiple changes (potentially crippling ones, as these recent events show) to production systems, in most organizations several times per day, without proper change control processes. Besides anti-malware, what other enterprise applications operate in such a six-shooters-blazing, Wild West cowboy sort of way?
Surely this is one more nail in the signature-based anti-malware coffin.
Tuesday, December 11, 2007
OpenDNS - I think I like you
I think I really like OpenDNS. It's intelligent. It's closer to the problem than existing solutions. And it's free.
OpenDNS works by using Anycast to route you to the nearest of its DNS servers based on where you are. But before it quickly returns your response, it can optionally filter out unwanted content. OpenDNS partners with communities and service providers to maintain a database of adult-content and malicious websites. If you choose to opt in, each DNS query that matches a known bad site sends your browser to a customizable page that explains why the page is not allowed.
Now, privacy advocates are well aware that there is a potential data collection and use problem. However, DNS queries already are a privacy risk, since an ISP can create quite the portfolio on you based on which names get resolved to numbers. OpenDNS can collect information about you, including statistics associated with DNS usage on the networks you manage, but that choice is not turned on by default-- you have to opt into it as well. So, all things considered, privacy is well managed.
I really like this approach to filtering unwanted HTTP content because it completely prevents any connection between clients and offending servers. In fact, clients don't even get to know who (if you can allow me to personify servers for a moment with the term "who") the server is or where it lives. But what I like even more is that this service is simple. There are no complicated client software installs (that users or children can figure out how to disable), no distributed copies of offending-URL databases to replicate and synchronize, and no lexicons for users to tweak. It's lightweight. All it takes is updating a DHCP server's entries for DNS servers to point to 208.67.222.222 and 208.67.220.220 and checking a few boxes for which content should be filtered in an intuitive web administration console. For a home user, that's as easy as updating the DNS server fields in a home router-- and all current and future clients are ready to go. An enterprise could point its DNS forwarders at the service just as easily. And many larger customers do. A non-tech-savvy parent could turn on content filtering without the "my kids program the VCR" syndrome resulting in the kids bypassing the filters. Setting an IP address for a DNS server doesn't stand out as a "net nanny" feature to kids who are left alone with the computer.
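If you want to sanity-check the setup, you can query the OpenDNS resolvers directly and compare the answers against another resolver. Below is a minimal sketch using the third-party dnspython library (the domain names are placeholders; the exact block-page address is OpenDNS's to define, so the behavior is only hedged here):

    import dns.resolver  # third-party "dnspython" package, assumed installed

    OPENDNS = ["208.67.222.222", "208.67.220.220"]

    resolver = dns.resolver.Resolver(configure=False)  # ignore the OS resolver settings
    resolver.nameservers = OPENDNS

    for name in ["www.example.com", "blocked-category-site.example"]:
        try:
            answer = resolver.resolve(name, "A")  # dnspython >= 2.0; older releases use .query()
            print(name, "->", [rr.address for rr in answer])
        except Exception as exc:
            print(name, "-> lookup failed:", exc)

A domain in a category you filter should come back pointing at OpenDNS's block page rather than the site's real address, which is easy to spot when compared against an unfiltered resolver.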
Okay, there have to be caveats, right? Here they are ...
If you're planning on using some third-party DNS service--especially one that is free--it had better perform well and it had better be a service that you trust (because DNS has been used in the past to send people to malicious sites). Since its inception in July 2006, OpenDNS has serviced over 500 million DNS requests with a 100% uptime track record. And from their open, collaborative stance on issues like phishing (see phishtank.com), you'll want to trust them.
Any DNS miss (except some common typos) returns an OpenDNS web page that tries to "help" you find what you missed. The results look like re-branded Google results. Users clicking links on the OpenDNS results page is how OpenDNS makes its revenue--on a pay-per-click basis. That's how they keep the services free.
Dynamic IP addresses can mess up a home user's ability to keep content filtering policies in check (but this won't affect enterprises). There are a number of ways to keep the policies in sync, though, including their DNS-O-Matic service. What I'd like to see added: native consumer-router support for dynamic IP address changes, so content filtering policies stay in place no matter what the ISP does. [The Linksys WRT54G wireless router, for example, supports similar functions with TZO and DynDNS today-- it would be nice if OpenDNS were another choice in the drop-down menu.] If my neighbor enrolled in the service, it might be possible for me to get my neighbor's OpenDNS filtering policies if we share the same ISP and dynamic IP pool, but again, that's what the dynamic IP updating services are for.
Enterprises that decide to use OpenDNS for their primary outgoing DNS resolvers must keep in mind that an offending internal user could simply specify a DNS server of their own preference-- one that will let them bypass the content filters. However, a quick and simple firewall policy (not some complicated DMZ rule) that blocks all outbound DNS traffic (UDP/TCP 53) except traffic destined for the OpenDNS servers (208.67.222.222 and 208.67.220.220) will quell that concern.
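For illustration only (your firewall's own rule syntax is the real place for this), the logic of that egress rule boils down to a single predicate:

    # Sketch of the egress policy described above, not a firewall implementation.
    OPENDNS_RESOLVERS = {"208.67.222.222", "208.67.220.220"}

    def allow_outbound(dst_ip: str, dst_port: int, protocol: str) -> bool:
        if protocol in ("udp", "tcp") and dst_port == 53:
            return dst_ip in OPENDNS_RESOLVERS  # DNS is allowed only to the OpenDNS resolvers
        return True  # non-DNS traffic falls through to the rest of your policy

    assert allow_outbound("208.67.222.222", 53, "udp")
    assert not allow_outbound("192.0.2.53", 53, "udp")  # any other resolver is blocked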
So the caveats really are not bad at all.
Since the company is a west coast (SF) startup and since the future seems bright for them as long as they can keep their revenue stream flowing, I imagine they'll be gobbled up by some larger fish [Google?].
So this Christmas, give the gift of safe.
...
This might seem like a blatant advertisement, but (number one) I rarely like a service well enough to advocate or recommend it and (number two) I am not financially affiliated with OpenDNS in any way.
Labels:
complexity vs security,
content filtering,
malware,
Trust
Monday, December 10, 2007
Gary McGraw on Application Layer Firewalls & PCI
This serves as a good follow-up to my dissection of Imperva's Application Layer Firewall vs Code Review whitepaper.
Gary McGraw, the CTO of software security firm Cigital, just published an article on Dark Reading called "Beyond the PCI Bandaid". Some tidbits from his article:
Web application firewalls do their job by watching port 80 traffic as it interacts at the application layer using deep packet inspection. Security vendors hyperbolically claim that application firewalls completely solve the software security problem by blocking application-level attacks caused by bad software, but that’s just silly. Sure, application firewalls can stop easy-to-spot attacks like SQL injection or cross-site scripting as they whiz by on port 80, but they do so using simplistic matching algorithms that look for known attack patterns and anomalous input. They do nothing to fix the bad software that causes the vulnerability in the first place.
Gary's got an excellent reputation fighting information security problems from the software development perspective. His Silver Bullet podcast series is one of a kind, interviewing everyone from Peter Neumann (one of the founding fathers of computer security) to Bruce Schneier (the most well known of the gurus) to Ed Felten (of Freedom to Tinker and Princeton University fame). He is also the author of several very well respected software security books.
Thursday, December 6, 2007
Salting your Hash with URLs
So, when I was reading this post on Light Blue Touchpaper (the Cambridge University Computer Security Lab's blog) a few weeks back, I, like many others (including this Slashdot thread), was reminded of the importance of salting your password hashes ... As it turns out, you really can ask Google for a hash value and it really will return significant results-- like a gigantic, easy-to-use rainbow table. Steven Murdoch managed to find "Anthony" with this simple search.
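The point is easy to demonstrate in a few lines. A minimal sketch (I am assuming MD5 for the Google-able digest, as in the Light Blue Touchpaper example; the salting below is illustrative, not a complete password-storage scheme):

    import hashlib
    import os

    password = "Anthony"

    # Unsalted: the digest is always the same, so it can be looked up in Google
    # or in a precomputed rainbow table.
    print(hashlib.md5(password.encode()).hexdigest())

    # Salted: a random per-user salt makes precomputed tables useless, because an
    # attacker would need a separate table for every possible salt value.
    salt = os.urandom(16)
    print(salt.hex(), hashlib.sha256(salt + password.encode()).hexdigest())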
Of course, if my story stopped there, it would not be that interesting. Security professionals have long known to salt hashes to defeat tables of known hash values. But as I thought about salting hashes, the parallels with some password management tools dawned on me.
I had been wanting to reduce the number of passwords I have to keep track of for all of the various web applications and forums that require them. I have used Schneier's Password Safe for years now and find it nice, but not portable to other platforms (e.g., Mac/Linux). Even moving between different PCs is difficult because it requires keeping my Password Safe database in sync. Of course, several browsers can store passwords behind a "master password", but I have several objections to them. First, they are part of the browser's stack, so I have to trust that my browser and the web applications I use won't give malware an opportunity to exploit a client-side bug and steal my passwords. Second, they don't tend to work well when moving from machine to machine, so there's a synchronization problem. So, I am always on the lookout for a good alternative. Perhaps one day an authentication system like Info Cards will become a reality in the consumer space ...
So, when I first stumbled upon Password Maker, an open source password management tool, I wanted to like it. How does it work? From their description:
You provide PASSWORDMAKER two pieces of information: a "master password" -- that one, single password you like -- and the URL of the website requiring a password. Through the magic of one-way hash algorithms, PASSWORDMAKER calculates a message digest, also known as a digital fingerprint, which can be used as your password for the website. Although one-way hash algorithms have a number of interesting characteristics, the one capitalized by PASSWORDMAKER is that the resulting fingerprint (password) does "not reveal anything about the input that was used to generate it." 1 In other words, if someone has one or more of your generated passwords, it is computationally infeasible for him to derive your master password or to calculate your other passwords. Computationally infeasible means even computers like this won't help!
But, as I said: "I wanted to like it." After all, there is nothing that has to be stored ever-- you just have to remember the main password and the algorithm does the rest. There is nothing that has to be installed directly into the browser (unless you want to), not to mention it's very portable from platform to platform. And since there is no password database or safe to move around, there's no synchronization problem-- the site-specific passwords are re-created on the fly. It sounds like a panacea. In the algorithm, the URL essentially becomes a salt, with the master password as the rest of the hash input. The resulting hash [sha1(master password + URL)] is the new site-specific password. It sounded like a great solution, but I have a couple of potentially show-stopping concerns about it.
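To make the scheme concrete before getting to the concerns, here is a stripped-down sketch of the idea. The real PasswordMaker supports many algorithms, character sets, and output lengths, so this shows the concept, not its exact output:

    import hashlib

    def site_password(master_password: str, url: str, length: int = 12) -> str:
        # The URL acts as the salt; the master password is the rest of the hash input.
        digest = hashlib.sha1((master_password + url).encode()).hexdigest()
        return digest[:length]

    print(site_password("my one master password", "paypal.com"))
    print(site_password("my one master password", "myspace.com"))  # different URL, different password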
- There are varying opinions, but it is prudent to keep salt values secret. If the salt value becomes known, a rainbow table could be constructed that employs the password key-space concatenated with the salt. Granted, it might take somebody several days to create the rainbow tables, but it could be done-- especially if there were economic incentives to do so. Imagine that an adversary targets the intersection of MySpace and PayPal users, assuming (of course) that Password Maker is at least somewhat popular. Sprinkle in some phishing, XSS, or whatever is needed today to capture some MySpace passwords (which are of considerably lower value than, say, PayPal's), compare them against the rainbow tables, and ... voila ... the adversary now has the master password to feed into Password Maker's scheme to get access to PayPal (see the sketch after this list).
- I am not a mathematical/theoretical cryptographer, but I know better than to take the mathematics in cryptographic hash functions for granted. There has not been much research into hashing hash values, at least not much that has entered the mainstream. As such, it may be possible to create mathematical shortcuts or trapdoors when hashing hash values, at least with certain types of input. That is not to be taken lightly. I would not build any critical security system on top of such a mechanism until there was extensive peer-reviewed literature proclaiming it a safe practice (also read the section entitled "Amateur Cryptographer" in this Computer World article).
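Here is a rough sketch of the first concern, reusing the simplified scheme from above (the short wordlist stands in for a real dictionary or rainbow table):

    import hashlib

    def site_password(master_password: str, url: str, length: int = 12) -> str:
        return hashlib.sha1((master_password + url).encode()).hexdigest()[:length]

    # What the adversary captured via phishing/XSS: Alice's low-value MySpace password.
    leaked = site_password("my one master password", "myspace.com")

    # Because the "salt" (the URL) is public, a dictionary search over candidate
    # master passwords recovers the master, and with it every other site password.
    for candidate in ["password1", "letmein", "my one master password", "hunter2"]:
        if site_password(candidate, "myspace.com") == leaked:
            print("recovered master password:", candidate)
            print("derived PayPal password:", site_password(candidate, "paypal.com"))
            break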
Tuesday, December 4, 2007
Client Software Update Mechanisms
It's 2007. Even the SANS Top 20 list has client-side applications as being a top priority. Simply put, organizations have figured out how to patch their Microsoft products, using one of the myriad of automated tools out there. Now it's all the apps that are in the browser stack in some way or another that are getting the attention ... and the patches.
Also, since it's 2007, it's well-agreed that operating a computer without administrative privileges significantly reduces risk-- although it doesn't eliminate it.
So why is it that when all of these apps in the browser stack (Adobe Acrobat Reader, Flash, RealPlayer, QuickTime, etc.) implement automated patch/update mechanisms, the mechanisms are completely broken if you follow the principle of least privilege and operate your computer as a non-admin? Even Firefox's built-in update mechanism operates the exact same way.
So, here are your options ....
1) Give up on non-admin and operate your computer with privileges under the justification that the patches reduce risk more than decreased privileges.
2) Give up on patching these add-on applications under the justification that decreased privileges reduce more risk than patching the browser-stack.
3) Grant write permissions to the folders (or registry keys) that belong to the applications that need updates so that users can operate the automated update mechanisms without error dialogs, understanding that this could lead to malicious code replacing part or all of the binaries to which the non-admin users now have access.
4) Lobby the vendors to create a trusted update service that runs with privileges, preferably with enterprise controls, such that the service downloads and performs integrity checking upon the needed updates, notifying the user of the progress.
Neither option 1 nor option 2 is ideal. Both are compromises, and the success of each depends heavily upon an ever-changing threat landscape. Option 3 might work for a while, particularly while it is an obscure, little-used option, but it's very risky. And option 4 is long overdue. Read this, Firefox, Apple, Adobe, et al: Create better software update mechanisms. Apple even created a separate Windows application for this purpose, but it runs with the logged-in user's permissions, so it's useless.
...
And this does not even touch all of the patching problems large organizations learned while introducing automated patching systems for Microsoft products: components used in business-critical applications must be tested prior to deployment. The self-update functions in the apps described above have zero manageability for enterprises. Most of these products ship new versions as complete installers instead of releasing updates that patch broken components. The only real option for enterprises is to keep track of versions as the vendors release them, packaging the installers for enterprise-wide distribution through their favorite tool (e.g., SMS). It would be nice if these vendors could release a simple enterprise proxy, at least on a similar level to Microsoft's WSUS, where updates could be authorized by a centralized enterprise source after proper validation testing in the enterprise's environment.
Wednesday, November 21, 2007
Rootkitting Your Customers
I am a big fan of Dan Geer; he always has an interesting perspective on security issues, but that's not to say I agree with him.
Dan wrote a guest editorial that was published in Ryan Naraine's "Zero Day" blog, tackling the topic of trustworthy e-commerce when consumers' PCs are so likely to be infected with who-knows-what (there's even a Slashdot thread to go with it). Dan proposed:
"When the user connects [to your e-commerce site], ask whether they would like to use your extra special secure connection. If they say 'Yes,' then you presume that they always say Yes and thus they are so likely to be infected that you must not shake hands with them without some latex between you and them. In other words, you should immediately 0wn their machine for the duration of the transaction — by, say, stealing their keyboard away from their OS and attaching it to a special encrypting network stack all of which you make possible by sending a small, use-once rootkit down the wire at login time, just after they say 'Yes.'"I see one major flaw with this: suppose we agree that a benevolent rootkit issued from the merchant is a good idea, how do we guarantee that the rootkit trumps any other malware (i.e. other rootkits) that are running on this presumed-infected consumer's PC? All it would take is a piece of malware that could get into the Trusted Path between the consumer's keyboard and the merchant's good-rootkit.
I understand that Dr. Geer is trying to tackle this infected/zombie problem from the merchant's perspective. And in the grand scheme of things, the merchant has very little control of the trust equation. There are some interesting security economics at play here.
What is needed here is Remote Attestation of the trustworthiness of the consumer's computer system. The problem is, we may never get to the point where remote attestation is possible, because of the socio-political aspects of trustworthy computing, not the technical aspects. It's the same reason why every year for the last decade has been the "year of the PKI", but in none of them have we seen widespread adoption of public key infrastructure to the point that it would be our saving grace or silver bullet like it has been heralded to become. Trustworthy computing, as simple as calculating trust relationships through public key cryptography (such as with the use of TPMs), requires an authority to oversee the whole process. The authority has to vouch for the principals within the authority's realm. The authority has to define what is "correct" and label everyone and everything within its domain as either "correct" or "incorrect", from a trustworthiness perspective. In this distributed e-commerce problem, there is no authority. And who would do it? The Government? The CAs (Verisign, et al)? And ... the more important question ... if one of these organizations did stand up as the authority, who would trust their assertions? Who would agree with their definitions of "correctness"?
Dr. Geer's suggestion will work--and fail--just like the many NAC/NAP vendors who truly believe they can remotely attest a computer system's trustworthiness by sending a piece of software to run on a CPU controlled by an OS they inherently cannot trust--yet they believe they have trustworthy output from that software. This method of establishing trust is opt-in security; NAC vendors have tried it and failed (and many of them keep trying, in an arms race). And when organizations like, say, Citibank start using one-time-use rootkits, the economics for malware to tell the rootkit "these are not the droids you're looking for" become very favorable. At that point, we'll see just how bad opt-in security can be. The economics for attacking NAC implementations, by comparison, only favor college students who do not wish to run the school's flavor of AV. It's a chicken-and-egg problem, except in this case we know, without debate, which has to come first. The software we send to the remote host may be different, but the principle is the same: trust must come first, before trustworthy actions or output.
But it would probably make for a great movie plot.
Labels:
malware,
opt-in security,
security economics,
Trust
Tuesday, November 20, 2007
Soft tokens aren't tokens at all
The three categories of authentication:
- Something you know
- Something you have
- Something you are
Physical hardware tokens, like RSA's SecurID, fall into the second category of "something you have". Software tokens, also like RSA's software SecurID, pretend to fall into that same category ... but they are really just another example of "something you know".
Physical tokens are designed to be tamper resistant, which is an important property. By design, if a physical token is tampered, the internal storage of the token's "seed record" (symmetric key) is lost, in an attempt to prevent a physical attacker from duplicating the token. [Note the words "tamper resistant" not "tamper proof".]
Soft tokens, however, do not share this property. Soft tokens are comprised of two components: 1) the software application code that implements the one time password function and 2) the seed record used in the application to generate the one time password function's output.
"Perfect Duplication" is a property of the Internet/Information Age that is shaking the world. The Recording/Movie Production Industries are having a hard time fighting perfect duplication as a means to circumvent licensed use of digital media. Perfect duplication can be a business enabler as well, as it is with news syndication services that distribute perfect copies of their stories throughout the far reaches of the world in a matter of seconds. In the case of soft tokens, though, perfect duplication is a deal breaker.
Soft tokens are designed to be flexible. It's difficult to provision a hardware token to an employee halfway around the world the same day of the request, but with soft tokens provisioning is a piece of cake. Likewise, it's easier to recover soft tokens when that same employee is terminated. Soft tokens run on virtually any platform. RSA supports everything from Windows Desktops to Browser Toolbars to Mobile Devices-- all you need to do is import the seed record.
There's the rub ... Distributing the seed record requires a confidential channel to ensure that it is not perfectly duplicated in transit. Distributing seed records to many of the platforms soft token vendors support involves plaintext transmission, such as sending the seed record as an email attachment to a Blackberry client. An administrator may provision the seed record encrypted under an initial passphrase that is distributed out-of-band, but it is common practice for seed records and initial passphrases to be distributed side-by-side. Whereas a physical token can only be in one place at a time, a soft token could be perfectly duplicated by an eavesdropper, complete with its initial passphrase (especially when it isn't distributed out of band). If Alice receives her soft token and changes its passphrase, Eve could keep her perfect copy with the initial passphrase or choose to change the passphrase-- either way, the back end of the one-time-password authentication system will receive a valid token code (time value encrypted with the seed record).
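To see why duplication is fatal, consider a generic time-based one-time-password sketch. This is an HOTP/TOTP-style illustration, not RSA's proprietary SecurID algorithm, and the seed value is made up; the point is that the code depends only on the shared seed and the current time window, so a perfect copy of the seed produces perfectly valid codes:

    import hashlib
    import hmac
    import time

    def token_code(seed: bytes, now: float, step: int = 60) -> str:
        # The one-time password is a function of only the seed and the time window.
        counter = int(now // step).to_bytes(8, "big")
        return hmac.new(seed, counter, hashlib.sha1).hexdigest()[:6]

    seed = bytes.fromhex("00112233445566778899aabbccddeeff")  # Alice's provisioned seed record
    eves_copy = seed                                           # a perfect duplicate captured in transit

    now = time.time()
    print(token_code(seed, now))       # Alice's code
    print(token_code(eves_copy, now))  # identical, and indistinguishable to the back end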
Likewise, a soft token employed on a malware-ridden remote PC could have its stored contents uploaded to an adversary's server, capturing the seed record. If the malware also captures keystrokes (as software keystroke logging is so common these days), then another opportunity for a perfect duplicate exists. Soft tokens are vulnerable to severe distributed key management problems. Bob (the administrator in this case) cannot know if the one time password was generated by Alice's soft token application or Eve's perfect duplicate.
In short, a soft token can prove a user "has" the token, but it cannot prove the user exclusively has the only copy. Therefore, soft tokens are not truly "something you have"; they are "something you know" (i.e. the passphrase to unlock a seed record).
For organizations seeking soft tokens as a method of achieving multi-factor authentication, a password plus a soft token is simply two instances of "something you know". Thus the organization must ask itself: "Does the complexity of properly distributing seed records in a secure channel as well as the expense of managing and supporting the software token application code really provide sufficient benefit over a simple--yet strong-- password only method?" My prediction is the answer to that question will be: "No, it doesn't".
...
UPDATED 12/11/2007 - Sean Kline, Director of Product Management at RSA, has posted a response [Thanks for the heads up]. Perhaps we will have some interesting dialog.
Still More TOR
F-Secure's blog is discussing how there are more bad TOR nodes out there. I discussed a while back how TOR's goal of anonymity is not possible. TOR ignores the rules of Trust relationships.
From F-Secure:
"Here's a node that only accepts HTTP traffic for Google and MySpace; it resides under Verizon:Or, maybe it's trying to capture credentials, like Google cookie stealing.
AS | IP | AS Name — 19262 | 71.105.20.179 | VZGNI-TRANSIT - Verizon Internet Services Inc.
While curious and perhaps even suspicious, it isn't necessarily malicious. It could just be a Samaritan particularly concerned with anonymous searches and MySpace profiles for some reason. But there's no way to tell, so why use such a node if you don't have to?"
"But how about this one?TOR users: caveat.
Now here's a node that was monitoring SSL traffic and was engaging in Man-in-the-Middle (MITM) attacks. Definitely bad.
AS | IP | CC | AS Name — 3320 | 217.233.212.114 | DE | DTAG Deutsche Telekom AG
Here's how the testing was done:
A test machine with a Web server and a real SSL certificate was configured.
A script was used to run through the current exit nodes in the directory cache.
Connections were made to the test machine.
A comparison of the certificates was made.
And the exit node at 217.233.212.114 provided a fake SSL certificate!
Now note, this was only one of about 400 plus nodes tested. But it only takes one."
TOR users: caveat.
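The comparison step itself is easy to reproduce. A rough sketch (it does not route traffic through Tor for you-- you would run it once over a trusted path and once with traffic forced through the exit node under test, e.g. via a SOCKS proxy; the host name and fingerprint are placeholders):

    import hashlib
    import ssl

    def cert_fingerprint(host: str, port: int = 443) -> str:
        # Fetch the server certificate as seen from this network path and hash it.
        pem = ssl.get_server_certificate((host, port))
        return hashlib.sha256(pem.encode()).hexdigest()

    known_good = "paste the fingerprint recorded over a trusted path here"
    observed = cert_fingerprint("test-server.example")
    print("possible MITM" if observed != known_good else "certificate unchanged")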
UPDATED [11/21/2007]: Here are Heise Security's findings and there is now a Slashdot thread on the subject.
Monday, November 19, 2007
Possible Criminal Charges for Lost Laptops in the UK
Of course, the media are spinning this as "don't encrypt your laptop and you could go to jail," when the legislation is really aimed at "those who knowingly and recklessly flout data protection principles".
How many times does it need to be said? Encryption does not equal auto-magical security.
Encryption simply transitions the problem of data confidentiality into a key confidentiality problem. It trades one vast and complicated problem for one slightly less complicated problem. Key management is so crucial, yet it is rarely discussed in these forums. I would rather government officials' laptops not be encrypted than to have them encrypted with poor key management. It's better to know the problem exists than to pretend it doesn't. And it's worse to legislate everyone into pretending the key management problem doesn't exist.
Sunday, November 18, 2007
Analyzing Trust in Hushmail
Recently, law enforcement acquired confidential email messages from the so-called secure email service, Hushmail. Law enforcement exploited weaknesses in trust relationships to steal the passwords for secret keys which were then used to decrypt the confidential messages.
There are some lessons from this.
#1. Law enforcement trumps. This is not necessarily a lesson in Trust, per se, but keep in mind that large institutions have extensive resources and can be very persuasive, whether it is persuasion from threat of force or financial loss. Possibly an extremely well funded service (read: expensive) in a country that refuses to comply with US laws and policies (e.g. extradition) could keep messages secret (hence the proverbial Swiss bank account). There are definitely economic incentives to understand in evaluating the overall security of Hushmail's (or a similar service's) solution.
#2. A service like Hushmail, which sits in the middle as a broker for all of your message routing and (at least temporary) storage, is part of the Trusted Path between sender and receiver. Hushmail attempts to limit the scope of what is trusted by employing techniques that prevent its access to the messages, such as encrypting the messages on the client side using a Java applet or only storing the passphrases temporarily when encrypting messages on the server side.
A user trusts that Hushmail won't change their passphrase storage from hashed (unknown to Hushmail) to plaintext (known to Hushmail) when the user uses the server side encryption option. A user also trusts that the java applet hasn't changed from the version where strong encryption of the messages happens on the client side without divulging either a copy of the message or the keys to Hushmail. The source code is available, but there is not much to guarantee that the published java applet has not changed. The average, non-technical user will have no clue, since the entire process is visual. Hushmail could easily publish a signed, malicious version of the java applet. There is no human-computer interface that can help the user make a valid trust decision.
#3. The Trusted Path also includes many other components: the user's browser (or browser rootkits), the OS and hardware (and all of the problematic components thereof), the network (including DNS and ARP), and last but not least, the social aspect (people who have access to the user). There are many opportunities to find the weakest link in the chain of trust that do not involve exploiting weaknesses of the service provider. Old-fashioned, face-to-face message exchanges may have a shorter trusted path than a distributed, asynchronous electronic communication system with confidentiality controls built in (i.e., Hushmail's email). And don't forget Schneier's realism of cryptographic execution:
"The problem is that while a digital signature authenticates the document up to the point of the signing computer, it doesn't authenticate the link between that computer and Alice. This is a subtle point. For years, I would explain the mathematics of digital signatures with sentences like: 'The signer computes a digital signature of message m by computing m^e mod n.' This is complete nonsense. I have digitally signed thousands of electronic documents, and I have never computed m^e mod n in my entire life. My computer makes that calculation. I am not signing anything; my computer is."#4. Services like Hushmail collect large quantities of encrypted messages, so they are a treasure trove to adversaries. Another economic aspect in the overall trust analysis is that the majority of web-based email service users do not demand these features. So the subset of users who do require extra measures for confidentiality can be easily singled out, regardless of whether the messages will implicate the users in illegal activity (or otherwise meaningful activity to some other form of adversary). And, at a minimum, there is always Traffic Analysis, where relationships can be deduced if email addresses can be linked to individuals. An adversary may not need to know what is sent, so long as something is sent with extra confidential messages.
To sum up, if you expect to conduct illegal or even highly-competitive activity through third-party "private" email services, you're optimistic at best or stupid at worst.
Wednesday, November 14, 2007
Pay Extra for their Mistakes: EV Certificates
Extended Validation (EV) SSL Certificates are one of the information security industry's worst cover-ups. And to make matters worse, it isn't the Certificate Authorities (CA) that are paying for the mistakes; it's us.
Basically, EV Certs work like this:
- Organization requests an EV Cert from a CA.
- CA goes through a more stringent legal process to authenticate the requesting organization.
- CA issues EV Cert to requesting organization.
- Web site visitor sees the address bar turn green (like Microsoft's fictitious Woodgrove Bank example).
The primary problem is that EV certs were not needed until it became painfully obvious, through phishing and other fraudulent scams, that CAs issue SSL certificates to anyone with a few hundred dollars to buy one. If only CAs had followed the diligent process in the first place, there would be no market for EV certs. [The same could be said about DNS registrars: if they did not allow domain registration to be so easy, there would be no need for F-Secure's suggestion to create a new ".bank" top level domain (TLD).]
The secondary problem is that CAs are repackaging their failure to properly validate SSL certificate requests into a new-and-improved offering at a higher price. Many of the CAs are acting like drug pushers, offering customers upgrades to EV SSL certs at no extra cost (only to have the cert renewals come in at the 20+% higher price). And there is the obvious complaint that the increased price for a green address bar gives an unfair advantage to big corporations over the independent small business owners who may only be able to afford traditional SSL certificates.
...
On to some meta-cognitive comments ... This rant is not necessarily timely-- CAs have been mass-marketing EV certs for some time, and many people have articulated most of my complaints against them-- but there is one key complaint I do not hear from industry analysts: the CAs should have been following the extended (i.e., diligent) process from the very beginning.
Wednesday, October 31, 2007
Retail, Protected Consumer Information, and Whole Disk Encryption
There has been a lot of discussion around retailers pushing back on the PCI (Payment Card Industry) Data Security Standards group. The claim is that merchants should not have to store credit card data at all. Instead credit card transaction clearinghouses would be the only location where that data would be retained; any current need (transaction lookup, disputes, etc.) would be handled by the payment card processors on a per-request basis.
I really like this idea.
In risk management, there are generally two approaches to protecting assets: 1) spend more to prevent threats to the assets, or 2) spend more to reduce the number/value of the assets. We see a lot of the former (think: anti-virus, anti-spyware, anti-threat-du-jour). We rarely see examples of the latter, but it is a perfectly logical approach.
Dan Geer gave us a great analogy: as threats increase, perimeters contract. A suburban neighborhood is OK with a police car every so many square miles, but an embassy needs armed marines every 50 feet of its perimeter. We can take Dr. Geer's analogy and wage a war in everyone's neighborhood-- the local retailers/e-tailers-- or we can reduce those assets to specific locations where they can best be monitored and protected. It just makes sense.
It's also a simple game of economics. The consumer passes the risk to the credit card issuers, who pass the risk on to the merchants. If consumers in the US hadn't been able to transfer risk to the credit card issuers (courtesy of US law limiting consumer liability for credit card fraud to a whopping $50), we would likely not see widespread use of credit cards in the US today. What consumer would stand for getting into greater debt if the card was lost? Likewise, we are now at a turning point with merchants, since card issuers are trying to transfer the risk onto them. Shuffling the risk (by shuffling the custody of confidential credit card data) back to the issuers makes perfect sense. Don't forget the credit card issuers have been in a perfect place all of these years: charging merchants a fee per transaction and charging interest to consumers who carry a balance beyond 30 days. Since they can double-dip in the economics equation, it makes the most sense for them to take the responsibility.
Thursday, October 25, 2007
Opt-in Security
So many computer security implementations today depend on what I call "opt-in" security. It could also be called "trusted clients". Most IT environments consist of a large number of distributed computers, which are getting more and more mobile all the time. Controlling those distributed systems is a very tough challenge because IT's arms can only reach so far and those corporate employees (let alone consumers) are soooo far away. So enters the idea of centralized control...
The notion is simple: administrators need a central place-- a choke hold, perhaps-- to grasp (in futility, as it happens). Practically every enterprise management tool these days tries to do the same thing: make a computer that I cannot physically touch or see do something I want it to do.
The problem with centralized control of distributed systems today is that it is an illusion at best. When new applications are loaded onto systems, administrators aren't actually touching the systems. Instead, they are sending out code and data to be assimilated into each system. Likewise, when "endpoint security" (marketing term du jour) systems validate that a client computer is compliant with a policy set, they are merely attempting to reconfigure local settings. But under the microscope, all that really happens is that a computer program processes input, checks some properties' states, and (optionally) sends output back to a "central" server. That is, in a nutshell, all that centralized control is today.
Breaking down the steps ...
Step 1:
Maybe the remote computer's state was trustworthy at one time, back when it was first created, but before it was ever handed to a user. However, its current state is not known. After all, if we knew it was still in a known good state, why would we bother to run our policy status checking program on it?
Step 2:
The policy program runs, takes its actions (if any), and generates output regarding a newly calculated trustworthiness "state" of the system. Yet, if we cannot fully trust the host OS, then we cannot fully trust anything it outputs. What if our policy program asks the OS for the status of a resource (e.g. a configuration file/setting)? If the OS is untrustworthy (or if a malicious program has already subverted it), then the output is also not trustworthy; the actual value could be replaced with the value the policy program is expecting to see, so it shuts up and goes away.
Step 3:
After the pseudo-policy-science is complete, the output is sent back to the central server, which (depending upon the application) could do anything from logging and reporting the status to granting access to attempting to further remedy the remote computer.
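To make the opt-in nature concrete, here is a minimal sketch (in Python; the config file path, hostname, and policy server endpoint are all hypothetical) of an agent performing Steps 2 and 3. Nothing prevents a subverted host from skipping the check and simply sending whatever answer the server expects:

import json
import urllib.request

POLICY_SERVER = "https://policy.example.com/report"  # hypothetical endpoint

def check_firewall_enabled():
    # Step 2: ask the (possibly untrustworthy) host OS for a setting.
    try:
        with open("/etc/policy/firewall.conf") as f:  # hypothetical config file
            return "enabled=1" in f.read()
    except OSError:
        return False

def report_status():
    # Step 3: send the result to the central server.
    status = {"host": "laptop-042", "firewall": check_firewall_enabled()}
    # A subverted host can skip the check entirely and send the expected answer:
    # status = {"host": "laptop-042", "firewall": True}
    req = urllib.request.Request(
        POLICY_SERVER,
        data=json.dumps(status).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # the server cannot tell an honest report from a forged one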
Now for some examples ...
Mark Russinovich of SysInternals (now Microsoft) fame created a tool called "gpdisable" to demonstrate how Active Directory Group Policy is really just an opt-in tool. As it turns out, his tool could even appease Step 2 above without requiring administrator rights. You cannot download gpdisable from Microsoft today (I think they removed it when SysInternals was purchased), but thanks to the WayBack machine you can download gpdisable here.
PGP Whole Disk Encryption is entirely an implementation of opt-in security. The installer code executes, claiming that the drive in its entirety is encrypted, claiming that the keys it uploads to the central server (like Step #3 above) are the real keys it uses, and of course claiming that the bootguard bypass is reset after the very next boot.
Anti-Virus works the same way. It's not too hard for malware to disable AV. Even if the policy polling process (alliteration intended) sees the AV process running, the process may be "brain dead"-- so the status it reports back is not necessarily trustworthy.
Network Access/Admission Control/Protection (NAC or NAP, depending on your vendor preference) is the worst case of opt-in security. NAC implementations have been beaten straight up, in the vein of "these aren't the droids you're looking for", and NAC implementations have been hijacked by students.
Ok, so where do we go from here? There are really only two options: 1) bolster trustworthiness in distributed computing, or 2) ditch distributed computing in favor of something simpler, like thin clients and centralized computing.
Option 2 is beyond what can be crammed into this post, but don't think that means vendors are ignoring the possibilities of thin clients. Dell clearly is. Now they just have to figure out how to add mobility into the equation.
Probably the single greatest contributor to the possibility of achieving option 1 is the ubiquitous adoption of TPMs. TPMs could enable Remote Attestation, but they won't be able to do it without also coupling trust at the foundational levels like memory allocation. Regardless, increasing the trustworthiness of distributed computers from "opt-in" to high assurance will require a Trusted Path from the centralized server all the way to the remote computers' hardware, which does not exist today.
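For contrast, here is a conceptual sketch (not a real TPM API; the keyed MAC is a stand-in for the asymmetric signature a real TPM would produce) of what hardware-rooted attestation buys you over the opt-in reports above: the server challenges with a fresh nonce, and the response is computed with a key the host OS never holds, over measurements (PCRs) the host OS cannot rewrite:

import hashlib
import hmac
import os

# Placeholder for the PCR digest a known-good machine would report.
EXPECTED_PCRS = hashlib.sha1(b"known-good boot measurements").hexdigest()

def tpm_quote(nonce, pcr_digest, attestation_key):
    # Stand-in for a TPM quote: a keyed MAC over the nonce and PCR values.
    # A real TPM signs with a key that never leaves the chip.
    return hmac.new(attestation_key, nonce + pcr_digest.encode(), hashlib.sha256).hexdigest()

def server_verifies(quote_from_client, nonce, attestation_key):
    expected = tpm_quote(nonce, EXPECTED_PCRS, attestation_key)
    return hmac.compare_digest(quote_from_client, expected)

# Challenge-response: the fresh nonce prevents replaying an old "healthy" quote.
key = os.urandom(32)
nonce = os.urandom(16)
print(server_verifies(tpm_quote(nonce, EXPECTED_PCRS, key), nonce, key))  # True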
Labels:
complexity vs security,
key management,
malware,
opt-in security,
Trust
Wednesday, October 24, 2007
DNS Re-Binding
One of the biggest problems with security threats in the Web 2.0 world is the erosion of trust boundaries. Dynamic web applications pull data together in "mashups" from multiple sources. Modern online business seems to require rich, dynamic code and data, yet as fast as we can field it, there are problems keeping boundaries cleanly separate.
Some researchers at Stanford have spent some time identifying some very critical problems involving the use of DNS in the Web 2.0 world. It's not insecurity research, though, because they provide some options for solutions, addressing the problem conceptually.
Basically, the attack--which can be mounted for less than $100 in capital expenses-- consists of changing the IP address to which a DNS record points after a "victim" connects to the malicious web server. This allows for circumvention of firewall policies, since the connections are initiated by victim clients. The paper also discusses how this method could be used to obfuscate or distribute the origins of spam or click fraud.
The solution, though, is simple. DNS records can be "pinned" to an IP Address instead of being allowed to live only transiently in memory for a few seconds. And DNS records pointing to internal or private (non-routable) IP address ranges should be handled with caution.
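A rough sketch of what pinning plus private-range caution might look like on the client side (a minimal illustration, not how any particular browser implements it):

import ipaddress
import socket

_pinned = {}  # hostname -> first address resolved this session

def resolve_pinned(hostname):
    if hostname in _pinned:
        return _pinned[hostname]  # ignore later, possibly rebound, answers
    ip = socket.gethostbyname(hostname)
    if ipaddress.ip_address(ip).is_private:
        raise ValueError(f"{hostname} resolved to a private address: {ip}")
    _pinned[hostname] = ip
    return ip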
It's also interesting to note that DNSSEC does nothing to prevent this problem, since this is not a question of the integrity (or authenticity) of the DNS records from the DNS server; the DNS server is providing the malicious records. It's not a man-in-the-middle attack (although that could be a way to implement this, potentially).
And on a side note ... This is yet another example of why firewalls are pretty much useless in a modern information security arsenal. We cannot hide behind OSI Layers 3-4 anymore when all the action is happening in layers 5-7 (or layer 8).
Labels:
complexity vs security,
malware,
research,
Trust
Monday, October 22, 2007
Open Source Trustworthy Computing
There is a pretty good article over at LWN.net about the state of Trustworthy Computing in Linux, detailing the current and planned support for TPMs in Linux. Prof Sean Smith at Dartmouth created similar applications for Linux back in 2003, when the TPM spec was not yet finalized.
Of the TPM capabilities discussed, "remote attestation" received the most attention in the article. But keep in mind that Linux, being a monolithic kernel, has more components that would need integrity checking than there are available PCRs in the TPM. The suggested applications (e.g. checking the integrity of an "email garden" terminal by the remote email server) are a stretch of the TPM's capabilities.
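For readers unfamiliar with how a handful of PCRs can cover many components: each measurement is folded ("extended") into a running hash, so one register can absorb an arbitrarily long list, but a verifier then needs the full measurement log to interpret the single value. A minimal sketch (component names are illustrative):

import hashlib

def extend(pcr_value: bytes, measurement: bytes) -> bytes:
    # PCR extend: new value = SHA-1(old value || SHA-1(measurement))
    return hashlib.sha1(pcr_value + hashlib.sha1(measurement).digest()).digest()

pcr = b"\x00" * 20  # TPM 1.2 PCRs start out as 20 zero bytes
log = []
for component in [b"bootloader", b"kernel", b"module: ext4"]:
    log.append(component)
    pcr = extend(pcr, component)
# A remote verifier must replay `log` and compare against the quoted PCR value.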
Also keep in mind that unless coupled with trust at the foundational levels of memory architecture, integrity-sensitive objects could be replaced in memory by any device that has access via DMA. IOMMUs or similar will be required to deliver the solution fully.
And on that note, Cornell's academic OS - Nexus has a much better chance of success, because of the limited number of components that live in kernel space. The fewer the items that need "remote attestation", the more likely the attestation will be meaningful at all. At this point, modern operating systems need to simplify more than they need to accessorize, at least if security is important.
Thursday, October 18, 2007
Cornell's Nexus Operating System
I hold out hope for this project: Cornell University's Nexus Operating System. There are only a few publications thus far, but the ideas are very intriguing: microkernel, trustworthy computing via TPMs (they're cheap and pervasive), "active attestation" and labeling, etc.
And it's funded by the National Science Foundation. I hope to see more from Dr. Fred Schneider and company.
Download Links and Hash Outputs
I never have quite figured out why people will put a download link with a SHA-1 or MD5 hash output side-by-side on the same web page. Somebody named "KJK::Hyperion" has released an unofficial patch to the Microsoft URI problem. Right there on the download page is a set of hash outputs.
From a quality perspective, sure, using a cryptographic hash might demonstrate that the large file you downloaded did or didn't finish downloading properly, but so could its file size.
Suppose, by either a man-in-the-middle or full-on rooting of the webserver (either will work: one is on the fly while the other is more permanent), that I can replace a generally benevolent binary file with something malicious. If I can do that, what is to stop me from generating a proper (take your pick) SHA-1 or MD5 hash and replacing the good hash on the web page with my bad one? The hash does not tell you anything. If the adversary can tamper and replace one, she could certainly tamper and replace the other.
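A tiny sketch of that point: whoever can swap the file can recompute the matching hash in the same breath, so the co-located hash only ever catches accidental corruption.

import hashlib

def publish(payload: bytes):
    # What the download page displays: the file plus its hash, side by side.
    return payload, hashlib.sha1(payload).hexdigest()

good_file, good_hash = publish(b"legitimate installer bytes")
evil_file, evil_hash = publish(b"malicious installer bytes")  # the attacker runs the same code

# The victim's integrity check passes either way:
assert hashlib.sha1(evil_file).hexdigest() == evil_hash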
If you are worried about quality only and not so much about chain-of-custody or tampering, you might as well just place the file size in bytes on the web page. If you are worried about tampering, use a digital signature of some sort (any PKC is better than none) so that the end-user can establish some sort of non-repudiation.
And keep in mind that:
A) You are trusting your computer to do the crypto (you're not doing it in your head),
and
B) Digital signatures can be used in trust decisions, but they do not automatically indicate trustworthiness (i.e. they do not necessarily indicate the author's intentions).
...
This is an excellent quote from Bruce Schneier on the subject of hashes/signatures:
"The problem is that while a digital signature authenticates the document up to the point of the signing computer, it doesn't authenticate the link between that computer and Alice. This is a subtle point. For years, I would explain the mathematics of digital signatures with sentences like: 'The signer computes a digital signature of message m by computing m^e mod n.' This is complete nonsense. I have digitally signed thousands of electronic documents, and I have never computed m^e mod n in my entire life. My computer makes that calculation. I am not signing anything; my computer is."
Zealots and Good Samaritans in the Case of Wikipedia
Dartmouth College researchers Denise Anthony and Sean W. Smith recently released a technical report of some very interesting research around trustworthy collaboration in an open (and even anonymous) environment: Wikipedia.
What is interesting in this research is the interdisciplinary approach between Sociology and Computer Science, analyzing the social aspects of human behavior and technical security controls within a system. Far too often are those overlapping fields ignored.
Also of interest, Wikipedia's primary security control is the ability to detect and correct security failures, not just prevent them (as so many security controls attempt to do). In an open, collaborative environment, correction is the best option. And since Wikipedia, in true wiki form, keeps every edit submitted by every user (anonymous or not), there is a wealth of information to mine regarding the patterns of (un)trustworthy input and human-scale validation systems.
Tuesday, October 16, 2007
Identity Management in Security Products
For years, one of my biggest frustrations with vendors claiming to have "enterprise" software applications (I'm talking general applications: finance, medical records, HR systems, etc.) was that they all built their apps thinking they would have their own user repository. The obvious pain here is that: A) user provisioning/termination (not to mention password resets or compliance) now required admins touching an extra user repository-- an extra step, and B) users had to remember yet-another-password or, worse, keep the same password set across all systems. This is likely old hat for anyone taking the time to read this...
But times have changed (mostly). While there are still "enterprise" grade apps that don't understand identity management strategies (especially, from my experience, in the ASP/Hosted environment-- "Identity Federation" has yet to catch on like wildfire, probably mostly as a result of all of the differing federation standards out there; and like a friend of mine says: 'the wonderful thing about standards is that there are so many from which to choose'), generally speaking the 'industry' has matured to the point where I actually see some applications come in now with the mantra: "why would we want our app to keep track of users-- aren't you already doing that someplace else?"
Ah, but there must be a reason for me to bring up identity management ...
Security products vendors (specifically, ones that don't focus on identity management as their core competency) range from really bad (Symantec) to 'we get it for users, but not admins' (PGP) to 'yeah, ok, but we will only support one method and it may not be the most flexible for you' (Sourcefire).
Let's take a pause and first make sure we're on the same page with our definitions...
"Identity Management" is comprised of the "Authentication" and "Authorization" of "principals" (I'm abstracting up a layer here to include "users", "computers", "systems", "applications", etc.), as well as the technology used to make managing those aspects easier, (of course) as the business processes require.
Wikipedia's definition of "authentication" is fine for our purposes:
"Authentication (from Greek αυθεντικός; real or genuine, from authentes; author) is the act of establishing or confirming something (or someone) as authentic, that is, that claims made by or about the thing are true. Authenticating an object may mean confirming its provenance, whereas authenticating a person often consists of verifying their identity. Authentication depends upon one or more authentication factors."Wikipedia's definition of "authorization" is less general, unfortunately, but you'll get the point:
"In security engineering and computer security, authorization is a part of the operating system that protects computer resources by only allowing those resources to be used by resource consumers that have been granted authority to use them. Resources include individual files or items data, computer programs, computer devices and functionality provided by computer applications."
Now let's move on ...
Symantec's Enterprise Anti-Virus suite (which now has the personal firewall, anti-spyware, and probably anti-[insert your marketing threat-du-jour here]), doesn't actually even have authentication for most of the administrative components in its version 10 suite; technically, it only has authorization. To get into the administrative console, you must know a password; this is not authentication in that an admin is not presenting an identity that must be validated by the password-- the admin just presents a password. This is just like using the myriad of Cisco products without RADIUS/TACACS+ authentication: just supply the "enable" password. Fortunately for Cisco, they get this fairly well now. Especially since this product line is for enterprises with Microsoft products, I am surprised that they didn't (at a minimum) just tie authentication and authorization back to Active Directory users and groups.
The threats/risks here are the same ones that come with all password management nightmares. An admin leaves; the one password needs to be reset. Somebody makes an inappropriate configuration change; there's no accountability to know who it was. Those are fundamental identity management problems for enterprises. It is such a shame that a security products vendor couldn't have spent the extra 20 hours of development time to implement some basics.
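The difference is easy to see in a sketch (illustrative accounts only; a real system would store only password hashes, of course): a shared password merely authorizes whoever knows it, while authenticating an identity gives you an audit trail to hang accountability on.

ADMIN_PASSWORD = "s3cret"  # the shared "enable"-style secret
USERS = {"alice": "correct-horse", "bob": "battery-staple"}  # illustrative accounts

def shared_password_login(password):
    # Authorization only: if this returns True, we still have no idea who it was.
    return password == ADMIN_PASSWORD

def identity_login(username, password):
    # Authentication: an identity is presented and validated, so actions can be attributed.
    if USERS.get(username) == password:
        print(f"audit: {username} authenticated")
        return True
    return False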
Moving up a notch in the identity management of security products' food chain, we come to PGP's Universal Server. [All thoughts that I am recently picking on PGP aside, please. This is not a vulnerability, but creates risky situations-- albeit not as bad as the ones I described above for Symantec.] One of the novel features the PGP Universal Server brought to the market was its ability to do key management for users ... Read that: it is good at managing users' keys. User identities can even be tied back into LDAP stores (or not). Public Key crypto (PKC) is manageable even on large scales.
However, what the PGP Universal Server is lacking is the identity management of administrators; administrator credentials are stored within the PGP Universal Server's application code. There is no tie back to a centralized user repository for administrative access. Read that: it is bad at managing admins. If you are an enterprise and you want your 24x7 helpdesk to be able to generate recovery tokens for your whole disk encrypted users, you have to provision yet-another-account for them. If you are an enterprise that wants to leverage an outsourcing arrangement for your helpdesk (which is in and of itself a risky business-- manage your contracts well) and provisioning and termination of your helpdesk staff is important-- look elsewhere, as you'll find nothing helpful here. You'll either have to give some 'supervisor' person the ability to generate helpdesk accounts within the PGP Universal server, or your enterprise's security practitioners will be busy with mundane account creations/terminations.
Why oh why, PGP Corp, did you not build in support for LDAP, Kerberos, RADIUS, TACACS+, SAML, SOA Federation, or one of the myriad of other authentication services for managing administrative access to the PGP Universal Server?
Another issue is the notion of authorization within the PGP Server's application. There are several access "roles", but they are all or none. It is not possible-- in today's version-- to authorize a helpdesk user to generate one-time recovery tokens (in the event that a user cannot remember their boot password) for only a subset of computers protected by and reporting to a PGP Universal Server. Say you want your helpdesk to manage all computers except for a special group (e.g. Finance, HR, Executives, etc.). In PGP's product, that sort of authorization control requires a whole new instance; you cannot achieve that level of authorization separation within the tool. It's all or nothing. [Which is considerably similar to how many of PGP's products work-- a binary action. A user is either trusted to have access to the Volume Master Key in PGP Whole Disk Encryption or the user is not trusted at all. There's little sense of what a user can do versus an administrator, let alone a global administrator versus helpdesk personnel.]
Sourcefire is an example of the next tier on the totem pole of identity management inside of security products. Users can log in either locally (application layer accounts, not OS) to the Defense Center console, or the server can be configured to authenticate and authorize users against LDAP. This is great when it comes to provisioning/terminating access or just managing password compliance across a range of distributed systems. However, just like the security products mentioned above, I would have liked to see the vendor do more. LDAP is a great protocol for standard user directories, but for environments where access to intrusion detection logs is sensitive business, a natural requirement is multi-factor authentication. Very few one-time-password generators (e.g. RSA SecurID or Secure Computing's Safeword) truly support LDAP as a method of authenticating users. It's harder still to get other forms of second factors (i.e. biometrics or smart cards) to work with LDAP. I would have preferred to see a more flexible authentication and authorization architecture, such as requiring both LDAP and RADIUS to handle the situation where an Active Directory password and an RSA SecurID token would be required for access.
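And what I'm asking for is not exotic. Sketched below (the two helper functions are hypothetical stand-ins for whatever directory and RADIUS client libraries a product actually ships with), requiring both factors is little more than a few lines of glue:

def ldap_bind(username: str, password: str) -> bool:
    # Hypothetical helper: attempt an LDAP simple bind as the user (e.g. against AD).
    raise NotImplementedError

def radius_check(username: str, token_code: str) -> bool:
    # Hypothetical helper: verify a one-time password (e.g. SecurID) via RADIUS.
    raise NotImplementedError

def login(username: str, password: str, token_code: str) -> bool:
    # Grant access only if the directory password AND the one-time token both verify.
    return ldap_bind(username, password) and radius_check(username, token_code)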
Moral of today's story? If you are a security products vendor, make sure you encompass all of the security requirements that are placed upon non-security products.
Browser Rootkits
Rootkits, in general, are of extreme interest to me-- not because of what can be done with them (I can assume anything can be done with them and leave it at that), but because of where they can be run. In essence, a rootkit is the "not gate" of trustworthy computing. If a rootkit can get in the 'chain of trust' then the chain is not trustworthy.
Joanna Rutkowska has demonstrated the importance for some very low level controls in modern operating systems (and virtual machines).
'pdp' at GNUCitizen has a recent write-up of browser rootkits that is good-- it talks generally about the problem, not specifically. I would like to see him propose potential solutions to his concerns, though, which he has not yet done:
"As you can see, browser rootkits are probably the future of malware. In the wrong hands, browser technologies are power tool that can be used to keep unaware puppets on a string. I am planning to follow up this post with a more detailed example of how browser rootkits are developed and show some interesting functionalities which can enable attackers to go so deep, no other has ever been."UPDATE: Joanna Rutkowska chimes in.
Wednesday, October 10, 2007
Analyzing Trust in the Microsoft URI Handler Issues
There's a buzz around the Microsoft URI Handlers. Basically, applications that rely on that Windows service can be handed data that isn't separated from code, which is rife with problems in and of itself.
First it affected Firefox, and there were some "no, you should fix it" comments thrown back and forth between Microsoft and Mozilla. Now it's affecting Microsoft applications.
The questions really are: Where should the data validation process occur? Should it happen just once by the OS's built-in URI Handlers, or should each application do its own validation?
The real answer is not that one side or the other should do it, but that they should both do it. Any application relying on the Microsoft URI Handlers is trusting that the data elements are free from anything malicious. Quite simply, Mozilla and even Microsoft product managers in charge of the apps are naive if they are not also performing validation. It's a matter of understanding trust and trustworthiness.
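As an illustration of application-side checking (a sketch only, not Mozilla's or Microsoft's actual fix; the scheme list and character filter are assumptions), an application can vet a URI before ever handing it to the OS handler:

from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https", "mailto"}  # only what the app actually needs

def safe_to_hand_off(uri: str) -> bool:
    parsed = urlparse(uri)
    if parsed.scheme.lower() not in ALLOWED_SCHEMES:
        return False
    # Crude, illustrative filter for characters known to confuse downstream handlers.
    if any(c in uri for c in '"%\\'):
        return False
    return True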
UPDATED: There is a slashdot thread on the subject, and the Microsoft Security Response Center (MSRC) has posted to their blog explaining why they have done what they have done. Either way, if you are a third party application developer, you need to understand that it's always in your best interest to sanitize data yourself-- if for no other reason than the component you trust might not be 100% trustworthy.
Trusted vs Trustworthy
From a recent Secunia advisory post:
"Solution:What a paradox. "Trust" is an action. If you browse to a website, you are trusting that website. If you clicked a link, then you trusted that link. An "untrusted" PDF file is one that was never opened. By contrast, an opened PDF file is a trusted file. Why are they instructing you to do what you already do?
Do not browse untrusted websites, follow untrusted links, or open untrusted .PDF files." [italics are mine]
What Secunia means to say is:
"Do not browse untrustworthy websites, follow untrustworthy links, or open untrustworthy .PDF files."
There is a huge difference between trustworthiness and trust. Trust is an action and, as a result, a set of relationships (e.g. Alice trusts Bob; Bob trusts Charlie; therefore, Alice transitively trusts Charlie). Trustworthiness is an estimate of something's (or someone's) worthiness of receiving somebody else's trust. Trust relationships can be mapped out-- it just takes time. Trustworthiness, on the other hand, is very difficult to quantify accurately. I also just discussed this in the context of open source and security.
If those of us in the security community don't get the difference between trust and trustworthiness, then how in the world will our systems protect users who may never understand?
On Open Source and Security
Recently, I noted that what matters for security is not whether source code is open, but whether well-qualified analysts are reviewing the code. As a result, I received the following comment: "If you are paranoid, pay some guys you trust to do a review. With [closed source] you can't do that." The following is my response.
...
Well, say I trust Bruce Schneier (I generally do professionally speaking, but not necessarily personally-- and I'll pick on him since he's almost universally accepted as the patron saint of security). Let's say I trust Bruce's analysis of a particular company's products. If Bruce is reviewing the source code and the code is closed to the public but made available to him as an escrow, I would likely be OK with that. Trust is more complicated than just availability of source code. There are not too many people in the world who are qualified to do security reviews of a security product's code. So, I couldn't trust just anyone's analysis of it. To be honest, if it came down to a computationally-intensive implementation of cryptographic code, I wouldn't even trust my own analysis of it. My point is: Trust is a social-psychological phenomenon, not a technical one.
"Open source" means so many different things to different people. To some it means "free to use, modify, or distribute". To some, it means anyone can review the code. To others, it might just mean the vendor will provide source code escrow services at your request. It might be possible to have a closed source (in the common sense) product opened up to a specific customer (potentially any customer asks the right question the right way).
How many "joe users" that have a one seat install of a product actually review the code? Not many. How many of those one seat installs are actually qualified code reviewers? Fewer still.
Open Source != Security
It (open source) is an unrelated variable. It's like how automobile insurance providers in the US inaccurately categorize all male drivers under 25 years of age as high risk. Not all of them (caricatures and jokes aside) will get tickets, cause wrecks, or otherwise require the insurance company to pay out. However, the actuarial data presented to the insurers suggests that is a reasonable category of customers for which they should increase premiums. If it were legal and ethical (and affordable) to put all under-25-year-old drivers through a *magic* test (I say magic because it may or may not exist) that could differentiate with a higher level of certainty whether the customer had the "x factor" that caused a higher tendency to cause wrecks ... well, that's where the insurance companies would go.
Open Source is like that broad, mis-categorization. There are many open source projects that are never reviewed for potential threats by qualified people. In fact, since "open source" is so "open", there are likely projects that have never even been reviewed by anyone outside of the sole contributor. "Open Source" strikes up a connotation of community and collaboration, but it does not guarantee community and collaboration. Likewise, there's no guarantee that the people reviewing the code aren't adding security problems deliberately.
Trust is a binary action. You either choose to trust someone or something, or you choose not to trust. You might opt to choose to trust someone conditionally, such as I might trust a carpenter to build my house but not to work on my car. Trustworthiness, however, is a totally different equation. People estimate trustworthiness (which is exactly as it reads: calculating how worthy of trust something or someone is) using a combination of perceived reputation (track records) or trusted third parties' estimated trust (e.g. my friend Joe knows a lot about cars, so I trust his mechanic, since Joe would know how to differentiate between a good and bad mechanic).
A product has opened its source code for review. So what? You should be asking the following questions:
- Why? Why did you open your source?
- Who has reviewed your source? What's (not) in it for them?
- What was in the review? Was it just a stamp of approval or were there comments as well?
Saturday, October 6, 2007
Sorry for the delay, Jon
I came across this on Jon Callas' CTO Corner just now (11 PM GMT, when I started this draft). I had a busy day Friday (obviously so did Jon), but on a totally different subject. By the time I got around to checking for comments to moderate (24 hours ago), I noticed there were several (contrary to some, I really am not sitting here trying to build a following ... read if you want, or don't--your choice, obviously). A bunch of them were either 'you don't know what you're talking about' (which is fine if they want to think that-- I posted their comments anyway) or they were 'what about X' comments which I already answered in later posts.
I saw John Dasher's (PGP Product Management) comment right away, published it, and elevated it to a main post/entry. There was a lot of negative hype around the company, apparently, and I did not want to have anything to do with a negative impact to the company [If you don't believe me on that point, all I ask is that you please read the other topics]. My main point was that this feature was not published well enough (Jon and I seem to agree on that).
I want to make this clear though: the time it took to get Jon's comment in the list (as it is now and mirrored here) was unrelated to the content in his comment, or my opinion of him in general. Read it for yourself; I'm not against differing opinions or even insults. Just note that I responded to him as well.
Jon wrote:
"As I started asking about what we did and did not do to document the feature, I heard several times, 'I thought we documented that?'I have pointed out several times that there is a difference between documented and publicly documented. Now it's publicly accessible, but when this whole ordeal started out, that same link required a customer ID and password. You can read how I did work with the vendor, PGP Corp, (how their support people were not well aware of the feature) and read how they were satisfied with the way things were documeted BEFORE they made the documentation available to non-customers.
"So our product manager went off to find where it is documented. We found that it was documented in five places, including the release notes for each product (client and server) and the 'What's New?' section of each. Okay, we could do better, but a feature listed in 'What's New?' could hardly be termed 'barely documented.'"
Jon also wrote:
"In the world I live in, the world of cryptographers-are-security-geeks, the word 'backdoor' is a fighting word. It is especially a fighting word when modified with 'intentional.' It means we sold out. It means we've lied to you.I agree that "backdoor" has a negative connotation (I'd be stupid to ignore that now), but I disagree that that the connotation should exist (and arguably so do others). And most importantly, I did not use the words "intentional" + "backdoor" to mean anything aggressive or fighting. In fact, I replaced "backdoor" with "bypass" on the first post (and nearly any other post where I don't give an explanation like this one). Clearly from the beginning, I did not imply an "alternative means of access" like the conspiracy theories claim. Shoot, there are even other beneficial "backdoors" in PGP products, like the ADK (additional decryption key, such as used with Blakely-Shamir key splitting techniques). Those are backdoors in the academic sense, just not in the paranoid, socio-political sense. The question at hand is: "for whom does the door open?" And PGP Corp's official stance is: "never for an adversary or warring faction". All I have tried to point out is that it could possibly (no matter how unlikely you think it might occur) open for a well-timed cat-burglar.
The word 'backdoor' is thus a slur. It is a nasty word. There a plenty of nasty words that insult someone's race, national origin, religion, and so on. I will give no examples of them. 'The B-Word,' as I will call it, is one of those slurs. It is not to be used lightly."
I was surprised when Jon wrote:
"I wrote a tart reply and posted it to his second note. As I am writing this, 15 hours have passed and although he has approved other people's replies, he hasn't approved mine.What can I say, Jon? I guess to be fair (and if nothing else following the excellent precedent you have set), I apologize for not moderating comments sooner. Perhaps I should have emailed you so I could have been more open to your time lines. I do hope that you believe me, but I accept that you may not. I ask that you consider how I have published and linked back to everything you have offered (and then some). And I ask that you consider how the ethics truly apply here (and for what it's worth, I did mull over whether to post or not to post on this subject for months after I last discussed the issue with your employees-- I even asked other people whose ethics I trust for their input prior).
Murphy's law being what it is, it's possible that by the time I dot the 'i’s', cross the 't’s', and get this article posted on my CTO Corner, he may have printed my reply -- but it isn't there now, I just checked. Nonetheless, it angers me immensely that someone would tell lies and insult my personal integrity and the integrity of my company.
I know why he won't post it -- I point out facts, show that he has intentionally misstated facts, knowingly defamed PGP Corporation, and show that he has not lived up to the very ethical standards for which he criticizes other people. He accuses us of technical and ethical violations that he knows are false.
Therefore, I am posting my reply to him below the break. These are the facts Securology doesn't have the courage to post."
I do understand exactly why you would want to jump the gun to believe I was editing you out. It's an "everybody has an opinion" world, one where people can choose what content to keep or throw away. So, I cannot fault you for your response. [If I was in your position, I'd probably do exactly the same thing you did for my employer in an official "PR" type letter: acknowledge, compliment, and trivialize in 200 words or less. It was well executed.]
Finally, Jon also wrote:
"The mark of a good company is how you deal with issues, not the issues themselves."I think Jon did an exceptional job in his response. He was down-to-earth, humble, yet showed great pride in his company and work. He acknowledged "opportunities" (as the American corporate culture calls them) for improvement, as well as strengths. I think we agree on 90% of the details, and things are improving. Sure, some of the details may be over-hyped; that's not my fault and I still hold to my position.
I would like to believe, Jon, that if we met up in the real world, we would be on the same page.
The door is open for more of your opinions.