There has been a lot of discussion around retailers pushing back on the PCI (Payment Card Industry) Data Security Standards group. The claim is that merchants should not have to store credit card data at all. Instead credit card transaction clearinghouses would be the only location where that data would be retained; any current need (transaction lookup, disputes, etc.) would be handled by the payment card processors on a per-request basis.
I really like this idea.
In risk management, there are generally two methods of protecting assets: 1) spend more to prevent threats to the assets, or 2) spend more to reduce the number/value of the assets. We see a lot of the former (think: anti-virus, anti-spyware, anti-threat-du-jour). We rarely see examples of the latter, but it is a perfectly logical approach.
Dan Geer gave us a great analogy: as threats increase, perimeters contract. A suburban neighborhood is OK with a police car every so many square miles, but an embassy needs armed marines every 50 feet of its perimeter. We can take Dr. Geer's analogy and bring the war into everyone's neighborhood-- the local retailers/e-tailers-- or we can consolidate those assets into specific locations where they can best be monitored and protected. It just makes sense.
It's also a simple game of economics. The consumer passes the risk to the credit card issuers, who pass the risk on to the merchants. If consumers in the US hadn't transferred risk to the credit card issuers (courtesy of US law limiting consumer liability for credit card fraud to a whopping $50), we would likely not see widespread use of credit cards in the US today. What consumer would stand for getting into greater debt if the card was lost? Likewise, we are now at a turning point with merchants, since card issuers are trying to transfer the risk onto them. Shuffling the risk (by shuffling the custody of confidential credit card data) back to the issuers makes perfect sense. Don't forget the credit card issuers have been in a perfect place all of these years: charging merchants a fee per transaction and charging interest to consumers who maintain a debt beyond 30 days. Since they can double-dip in the economics equation, it makes the most sense for them to take the responsibility.
Thursday, October 25, 2007
Opt-in Security
So many computer security implementations today depend on what I call "opt-in" security. It could be called "trusted clients". Most IT environments consist of large numbers of distributed computers that are becoming more and more mobile all the time. Controlling those distributed systems is a very tough challenge because IT's arms can only reach so far and those corporate employees (let alone consumers) are soooo far away. So enters the idea of centralized control...
The notion is simple: administrators need a central place-- a choke hold, perhaps-- to grasp (in futility, as it happens). Practically every enterprise management tool these days tries to do the same thing: make a computer that I cannot physically touch or see do something I want it to do.
The problem with centralized control of distributed systems today is that it is an illusion at best. When new applications are loaded onto systems, administrators aren't actually touching the systems. Instead, they are sending out code and data to be assimilated into each system. Likewise, when an "endpoint security" (marketing term du jour) system validates that a client computer is compliant with a policy set, it is really just attempting to reconfigure local settings. But under the microscope, all that really happens is that a computer program processes input, checks some properties' states, and (optionally) sends output back to a "central" server. That is, in a nutshell, all that centralized control is today.
Breaking down the steps ...
Step 1:
Maybe the remote computer's state was trustworthy at one time, back when it was first created, but before it was ever handed to a user. However, its current state is not known. After all, if we knew it was still in a known good state, why would we bother to run our policy status checking program on it?
Step 2:
The policy program runs, takes its actions (if any), and generates its output regarding the newly calculated trustworthiness "state" of the system. Yet, if we cannot fully trust the host OS, then we cannot fully trust anything it outputs. What if our policy program asks the OS for the status of a resource (e.g. a configuration file/setting)? If the OS is untrustworthy (or if a malicious program has already subverted it), then the output is also not trustworthy; the actual value could be replaced with the value the policy program is expecting to see, so that it shuts up and goes away.
Step 3:
After the pseudo-policy-science is complete, the output is sent back to the central server, which (depending upon the application) could do anything from logging and reporting the status, to granting access, to attempting further remediation of the remote computer.
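To make those three steps concrete, here is a minimal sketch of a client-side compliance agent (the settings, names, and server address are all hypothetical, and Python is used only for illustration). The comments mark where the "opt-in" lives: every fact the server learns is reported by code running on the very host whose trustworthiness is in question.

```python
import json
import socket

POLICY = {"av_enabled": True, "disk_encrypted": True}   # what the server wants to see

def read_setting(name):
    """Step 2 helper: ask the (possibly subverted) host OS about itself.
    A rootkit, or a tool like gpdisable, sits exactly here and can return
    whatever value the policy program expects to see."""
    fake_local_config = {"av_enabled": True, "disk_encrypted": False}  # stand-in for registry/files
    return fake_local_config.get(name)

def build_report():
    """Steps 1-2: compute a 'state' whose accuracy the server cannot verify."""
    state = {name: read_setting(name) for name in POLICY}
    return {"host": socket.gethostname(),
            "state": state,
            "compliant": all(state[k] == v for k, v in POLICY.items())}

def send_report(server_addr=("policy.example.com", 9999)):
    """Step 3: ship the self-reported state to the central server."""
    payload = json.dumps(build_report()).encode()
    with socket.create_connection(server_addr) as conn:
        conn.sendall(payload)   # the server trusts this blob; that trust is the opt-in
```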
Now for some examples ...
Mark Russinovich of SysInternals (now Microsoft) fame created a tool called "gpdisable" to demonstrate how Active Directory Group Policy is really just an opt-in tool. As it turns out, his tool could even appease Step 2 above without requiring administrator rights. You cannot download gpdisable from Microsoft today (I think they removed it when SysInternals was purchased), but thanks to the WayBack machine you can download gpdisable here.
PGP Whole Disk Encryption is entirely an implementation of opt-in security. The installer code executes, claiming that the drive in its entirety is encrypted, claiming that the keys it uploads to the central server (like Step #3 above) are the real keys it uses, and of course claiming that the bootguard bypass is reset after the very next boot.
Anti-Virus works the same way. It's not too hard for malware to disable AV. And even if the policy polling process (alliteration) sees the AV process running ("brain dead" or not), that report is not necessarily trustworthy.
Network Access/Admission Control/Protection (NAC or NAP, depending on your vendor preference) is the worst case of opt-in security. NAC implementations have been beaten straight up, in the vein of "these aren't the droids you're looking for," and NAC implementations have been hijacked by students.
Ok, so where do we go from here? There are really only two options: 1) bolster trustworthiness in distributed computing, or 2) ditch distributed computing in favor of something simpler, like thin clients and centralized computing.
Option 2 is beyond what can be crammed into this post, but don't think that means vendors are ignoring the possibilities of thin clients. Dell clearly is. Now they just have to figure out how to add mobility into the equation.
Probably the single greatest contributor to the possibility of achieving option 1 is the ubiquitous adoption of TPMs. TPMs could enable Remote Attestation, but they won't be able to do it without also coupling trust at the foundational levels like memory allocation. Regardless, increasing the trustworthiness of distributed computers from "opt-in" to high assurance will require a Trusted Path from the centralized server all the way to the remote computers' hardware, which does not exist today.
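As a rough sketch of why a TPM changes this picture: a Platform Configuration Register can only be extended (hashed forward), never written directly, so a subverted OS cannot simply set it to the value a verifier expects. The sketch below simplifies the TPM 1.2 extend operation (SHA-1, 20-byte registers) and omits the signed quote, key handling, and the Trusted Path problem mentioned above; the component names are made up.

```python
import hashlib

def pcr_extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = SHA1(old PCR || SHA1(component)).
    The register only moves forward; nothing can reset it to a chosen value."""
    return hashlib.sha1(pcr_value + hashlib.sha1(measurement).digest()).digest()

def measure_boot_chain(components):
    """Hash each stage (BIOS, bootloader, kernel, ...) into one PCR."""
    pcr = b"\x00" * 20            # PCRs start at zero at power-on
    for blob in components:
        pcr = pcr_extend(pcr, blob)
    return pcr

# Verifier side: compare a quoted PCR (signed by the TPM's key, not by the
# host OS -- that is the difference from the opt-in model above) against the
# value recomputed from known-good component hashes.
golden = measure_boot_chain([b"bios-1.02", b"grub-0.97", b"vmlinuz-2.6.23"])
quoted = measure_boot_chain([b"bios-1.02", b"grub-0.97", b"vmlinuz-2.6.23"])
print("attestation ok" if quoted == golden else "state differs from golden image")
```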
Labels:
complexity vs security,
key management,
malware,
opt-in security,
Trust
Wednesday, October 24, 2007
DNS Re-Binding
One of the biggest problems with security threats in the Web 2.0 world is the erosion of trust boundaries. Dynamic web applications pull data together in "mashups" from multiple sources. Modern online business seems to require rich, dynamic code and data, yet as fast as we can field it, there are problems keeping boundaries cleanly separate.
Some researchers at Stanford have spent some time identifying some very critical problems involving the use of DNS in the Web 2.0 world. It's not insecurity research, though, because they provide some options for solutions, addressing the problem conceptually.
Basically, the attack-- which can be mounted for less than $100 in capital expenses-- consists of changing the IP address to which a DNS record points after a "victim" connects to the malicious web server. This allows for circumvention of firewall policies, since the connections are initiated by victim clients. The paper also discusses how this method could be used to obfuscate or distribute the origins of spam or click fraud.
The solution, though, is simple. DNS records can be "pinned" to an IP address instead of being allowed to live in memory only transiently for a few seconds. And DNS records pointing to internal or private (non-routable) IP address ranges should be handled with caution.
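A rough sketch of what pinning plus a private-range check could look like; the class name and policy here are illustrative, not what the paper or any particular browser implements:

```python
import ipaddress
import socket

class PinningResolver:
    """Resolve a hostname once per session and keep using that answer,
    instead of honoring a short TTL that lets the attacker re-bind the name."""

    def __init__(self):
        self._pins = {}

    def resolve(self, hostname: str) -> str:
        if hostname in self._pins:
            return self._pins[hostname]        # ignore later, possibly malicious, answers
        addr = socket.gethostbyname(hostname)
        if ipaddress.ip_address(addr).is_private:
            # A public name suddenly answering with RFC 1918 space is the
            # classic re-binding trick for reaching hosts behind the firewall.
            raise ValueError(f"{hostname} resolved to private address {addr}")
        self._pins[hostname] = addr
        return addr

resolver = PinningResolver()
print(resolver.resolve("www.example.com"))   # pinned for the rest of the session
```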
It's also interesting to note that DNSSEC does nothing to prevent this problem, since this is not a question of the integrity (or authenticity) of the DNS records from the DNS server; the DNS server is providing the malicious records. It's not a man-in-the-middle attack (although that could be a way to implement this, potentially).
And on a side note ... This is yet another example of why firewalls are pretty much useless in a modern information security arsenal. We cannot hide behind OSI Layers 3-4 anymore when all the action is happening in layers 5-7 (or layer 8).
Labels:
complexity vs security,
malware,
research,
Trust
Monday, October 22, 2007
Open Source Trustworthy Computing
There is a pretty good article over at LWN.net about the state of Trustworthy Computing in Linux, detailing the current and planned support for TPMs in Linux. Prof Sean Smith at Dartmouth created similar applications for Linux back in 2003, when the TPM spec was not yet finalized.
Of the TPM capabilities discussed, "remote attestation" was highlighted significantly in the article. But keep in mind that Linux, being a monolithic kernel, has more components that would need integrity checking than there are available PCRs in the TPM. The suggested applications (e.g. checking the integrity of an "email garden" terminal by the remote email server) are a stretch of those capabilities.
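To illustrate how the scarcity of PCRs is usually worked around (roughly what Linux IMA does), here is a hedged sketch: many components fold into a single register by repeated extends, and the verifier replays an ordered event log against a whitelist to interpret the final value. The function names and whitelist are assumptions for illustration only.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """Fold one more component hash into a single PCR."""
    return hashlib.sha1(pcr + measurement).digest()

def attest(component_blobs):
    """Client side: measure every module, binary, config... into one PCR,
    keeping an ordered event log so the verifier can reproduce the value."""
    pcr, log = b"\x00" * 20, []
    for name, blob in component_blobs:
        digest = hashlib.sha1(blob).digest()
        log.append((name, digest.hex()))
        pcr = extend(pcr, digest)
    return pcr, log

def verify(quoted_pcr, log, known_good):
    """Server side: replay the log and check each entry against a whitelist.
    One unknown or modified component changes the final PCR entirely."""
    pcr = b"\x00" * 20
    for name, digest_hex in log:
        if known_good.get(name) != digest_hex:
            return False
        pcr = extend(pcr, bytes.fromhex(digest_hex))
    return pcr == quoted_pcr
```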
Also keep in mind that unless coupled with trust at the foundational levels of memory architecture, integrity-sensitive objects could be replaced in memory by any device that has access via DMA. IOMMUs or similar will be required to deliver the solution fully.
And on that note, Cornell's academic OS - Nexus has a much better chance of success, because of the limited number of components that live in kernel space. The fewer the items that need "remote attestation", the more likely the attestation will be meaningful at all. At this point, modern operating systems need to simplify more than they need to accessorize, at least if security is important.
Thursday, October 18, 2007
Cornell's Nexus Operating System
I hold out hope for this project: Cornell University's Nexus Operating System. There are only a few publications thus far, but the ideas are very intriguing: microkernel, trustworthy computing via TPMs (they're cheap and pervasive), "active attestation" and labeling, etc.
And it's funded by the National Science Foundation. I hope to see more from Dr. Fred Schneider and company.
Download Links and Hash Outputs
I never have quite figured out why people will put a download link with a SHA-1 or MD5 hash output side-by-side on the same web page. Somebody named "KJK::Hyperion" has released an unofficial patch to the Microsoft URI problem. Right there on the download page is a set of hash outputs.
From a quality perspective, sure, using a cryptographic hash might demonstrate that the large file you downloaded did or didn't finish downloading properly, but so could its file size.
Suppose, by either a man-in-the-middle or full-on rooting of the webserver (either will work: one is on the fly while the other is more permanent), that I can replace a generally benevolent binary file with something malicious. If I can do that, what is to stop me from generating a proper (take your pick) SHA-1 or MD5 hash and replacing the good hash on the web page with my bad one? The hash does not tell you anything. If the adversary can tamper and replace one, she could certainly tamper and replace the other.
If you are worried about quality only and not so much about chain-of-custody or tampering, you might as well just place the file size in bytes on the web page. If you are worried about tampering, use a digital signature of some sort (any PKC is better than none) so that the end-user can establish some sort of non-repudiation.
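A small sketch of the difference, with placeholder file names and hash values: a hash scraped from the same page only catches corruption, while tamper resistance has to come from a signature checked against a key obtained somewhere else.

```python
import hashlib

def sha1_of(path: str) -> str:
    """Detects a truncated or corrupted download -- nothing more.
    The file's byte count would tell you almost as much."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

downloaded = sha1_of("patch.exe")                               # placeholder file
published = "3f786850e387550fdab836ed7e6dc881de23001b"          # scraped from the same web page
if downloaded != published:
    print("download corrupted (or tampered with)")
else:
    # This proves only that the file matches the page -- an attacker who replaced
    # patch.exe on the server could have replaced this hex string too.
    print("matches the page; says nothing about who produced the file")

# Tampering is only addressed by verifying a signature against a key you did NOT
# get from the same channel, e.g. (outside this sketch):
#   gpg --verify patch.exe.sig patch.exe
```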
And keep in mind that:
A) You are trusting your computer to do the crypto (you're not doing it in your head),
and
B) Digital signatures can be used in trust decisions, but they do not automatically indicate trustworthiness (i.e. they do not necessarily indicate the author's intentions).
...
This is an excellent quote from Bruce Schneier on the subject of hashes/signatures:
"The problem is that while a digital signature authenticates the document up to the point of the signing computer, it doesn't authenticate the link between that computer and Alice. This is a subtle point. For years, I would explain the mathematics of digital signatures with sentences like: 'The signer computes a digital signature of message m by computing m^e mod n.' This is complete nonsense. I have digitally signed thousands of electronic documents, and I have never computed m^e mod n in my entire life. My computer makes that calculation. I am not signing anything; my computer is."
Zealots and Good Samaritans in the Case of Wikipedia
Dartmouth College researchers Denise Anthony and Sean W. Smith recently released a technical report of some very interesting research around trustworthy collaboration in an open (and even anonymous) environment: Wikipedia.
What is interesting in this research is the interdisciplinary approach between Sociology and Computer Science, analyzing the social aspects of human behavior and technical security controls within a system. Far too often are those overlapping fields ignored.
Also of interest, Wikipedia's primary security control is the ability to detect and correct security failures, not just prevent them (as so many security controls attempt to do). In an open, collaborative environment, correction is the best option. And since Wikipedia, in true wiki form, keeps every edit submitted by every user (anonymous or not), there is a wealth of information to mine regarding the patterns of (un)trustworthy input and human-scale validation systems.
Tuesday, October 16, 2007
Identity Management in Security Products
For years, one of my biggest frustrations with vendors claiming to have "enterprise" software applications (I'm talking general applications: finance, medical records, HR systems, etc.) was that they all built their apps thinking they would have their own user repository. The obvious pain here is that: A) user provisioning/termination (not to mention password resets or compliance) now required admins touching an extra user repository-- an extra step, and B) users had to remember yet-another-password or (worse) keep the same password set across all systems. This is likely old hat for anyone taking the time to read this...
But times have changed (mostly). While there are still "enterprise" grade apps that don't understand identity management strategies (especially, from my experience, in the ASP/hosted environment-- "Identity Federation" has yet to catch on like wildfire, probably mostly as a result of all of the differing federation standards out there; and like a friend of mine says: 'the wonderful thing about standards is that there are so many from which to choose'), generally speaking the 'industry' has matured to the point where I actually see some applications come in with the mantra: "why would we want our app to keep track of users-- aren't you already doing that someplace else?"
Ah, but there must be a reason for me to bring up identity management ...
Security products vendors (specifically, ones that don't focus on identity management as their core competency) range from really bad (Symantec) to 'we get it for users, but not admins' (PGP) to 'yeah, ok, but we will only support one method and it may not be the most flexible for you' (Sourcefire).
Let's take a pause and first make sure we're on the same page with our definitions...
"Identity Management" is comprised of the "Authentication" and "Authorization" of "principals" (I'm abstracting up a layer here to include "users", "computers", "systems", "applications", etc.), as well as the technology used to make managing those aspects easier, (of course) as the business processes require.
Wikipedia's definition of "authentication" is fine for our purposes:
"Authentication (from Greek αυθεντικός; real or genuine, from authentes; author) is the act of establishing or confirming something (or someone) as authentic, that is, that claims made by or about the thing are true. Authenticating an object may mean confirming its provenance, whereas authenticating a person often consists of verifying their identity. Authentication depends upon one or more authentication factors."Wikipedia's definition of "authorization" is less general, unfortunately, but you'll get the point:
"In security engineering and computer security, authorization is a part of the operating system that protects computer resources by only allowing those resources to be used by resource consumers that have been granted authority to use them. Resources include individual files or items data, computer programs, computer devices and functionality provided by computer applications."
Now let's move on ...
Symantec's Enterprise Anti-Virus suite (which now has the personal firewall, anti-spyware, and probably anti-[insert your marketing threat-du-jour here]), doesn't actually even have authentication for most of the administrative components in its version 10 suite; technically, it only has authorization. To get into the administrative console, you must know a password; this is not authentication in that an admin is not presenting an identity that must be validated by the password-- the admin just presents a password. This is just like using the myriad of Cisco products without RADIUS/TACACS+ authentication: just supply the "enable" password. Fortunately for Cisco, they get this fairly well now. Especially since this product line is for enterprises with Microsoft products, I am surprised that they didn't (at a minimum) just tie authentication and authorization back to Active Directory users and groups.
The threats/risks here are the same ones that come with all password management nightmares. An admin leaves; the one password needs to be reset. Somebody makes an inappropriate configuration change; there's no accountability to know who it was. Those are fundamental identity management problems for enterprises. It is such a shame that a security products vendor couldn't have spent the extra 20 hours of development time to implement some basics.
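The gap is easy to see in code. In the hypothetical sketch below (names and storage are illustrative), the first function is what an "enable"-style or console password gives you, authorization without identity; the second validates a claimed identity, which is what makes per-admin accountability and clean terminations possible.

```python
import hashlib
import hmac

# Authorization only: one shared secret, no identity, no accountability.
ENABLE_HASH = hashlib.sha256(b"s3cret-enable").hexdigest()

def open_console(password: str) -> bool:
    # Whoever knows the password gets in; the log can only ever say
    # "someone who knew the enable password did this."
    return hmac.compare_digest(hashlib.sha256(password.encode()).hexdigest(), ENABLE_HASH)

# Authentication: a claimed identity validated by a per-user credential.
USERS = {  # username -> (salt, sha256(salt + password)); illustrative storage only
    "alice": ("9f2b", hashlib.sha256(b"9f2b" + b"alice-pass").hexdigest()),
}

def login(username: str, password: str) -> bool:
    record = USERS.get(username)
    if record is None:
        return False
    salt, expected = record
    supplied = hashlib.sha256(salt.encode() + password.encode()).hexdigest()
    if hmac.compare_digest(supplied, expected):
        print(f"audit: {username} authenticated")  # actions are now attributable, and access is revocable per admin
        return True
    return False
```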
Moving up a notch in the identity management of security products' food chain, we come to PGP's Universal Server. [All thoughts that I am recently picking on PGP aside, please. This is not a vulnerability, but creates risky situations-- albeit not as bad as the ones I described above for Symantec.] One of the novel features the PGP Universal Server brought to the market was its ability to do key management for users ... Read that: it is good at managing users' keys. User identities can even be tied back into LDAP stores (or not). Public Key crypto (PKC) is manageable even on large scales.
However, what the PGP Universal Server is lacking is the identity management of administrators; administrator credentials are stored within the PGP Universal Server's application code. There is no tie back to a centralized user repository for administrative access. Read that: it is bad at managing admins. If you are an enterprise and you want your 24x7 helpdesk to be able to generate recovery tokens for your whole disk encrypted users, you have to provision yet-another-account for them. If you are an enterprise that wants to leverage an outsourcing arrangement for your helpdesk (which is in and of itself a risky business-- manage your contracts well) and provisioning and termination of your helpdesk staff is important-- look elsewhere, as you'll find nothing helpful here. You'll either have to give some 'supervisor' person the ability to generate helpdesk accounts within the PGP Universal server, or your enterprise's security practitioners will be busy with mundane account creations/terminations.
Why oh why, PGP Corp, did you not build in support for LDAP, Kerberos, RADIUS, TACACS+, SAML, SOA federation, or any of the other myriad authentication services for managing administrative access to the PGP Universal Server?
Another issue is the notion of authorization within the PGP Server's application. There are several access "roles", but they are all or none. It is not possible-- in today's version-- to authorize a helpdesk user to generate one-time recovery tokens (in the event that a user cannot remember their boot password) for only a subset of computers protected by and reporting to a PGP Universal Server. Say you want your helpdesk to manage all computers except for a special group (e.g. Finance, HR, Executives, etc.). In PGP's product, that sort of authorization control requires a whole new instance; you cannot achieve that level of authorization separation within the tool. It's all or nothing. [Which is considerably similar to how many of PGP's products work-- a binary action. A user is either trusted to have access to the Volume Master Key in PGP Whole Disk Encryption or the user is not trusted at all. There's little sense of what a user can do versus an administrator, let alone a global administrator versus helpdesk personnel.]
Sourcefire is an example of the next tier on the totem pole of identity management inside of security products. Users can log in either locally (application layer accounts, not OS) to the Defense Center console, or the server can be configured to authenticate and authorize users against LDAP. This is great when it comes to provisioning/terminating access or just managing password compliance across a range of distributed systems. However, just like the security products mentioned above, I would have liked to see the vendor do more. LDAP is a great protocol for standard user directories, but for environments where access to intrusion detection logs is sensitive business, a natural requirement is multi-factor authentication. Very few one-time-password generators (e.g. RSA SecurID or Secure Computing's Safeword) truly support LDAP as a method of authenticating users. It's harder still to get other forms of second factors (e.g. biometrics or smart cards) to work with LDAP. I would have preferred to see a more flexible authentication and authorization architecture, such as requiring both LDAP and RADIUS to handle the situation where an Active Directory password and RSA SecurID token would be required for access.
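A hedged sketch of the flexibility I am describing: chain an LDAP bind (the Active Directory password) with a RADIUS-backed one-time-password check and open the console only if both succeed. This assumes the ldap3 package for the directory bind; verify_otp_over_radius is a deliberately unimplemented stand-in for whatever RADIUS client a given environment provides.

```python
from ldap3 import Server, Connection   # assumes the ldap3 package is available

def ldap_password_ok(user_dn: str, password: str) -> bool:
    """Factor 1: bind to the directory as the user; a successful bind proves the password."""
    server = Server("ldaps://dc.example.com")
    conn = Connection(server, user=user_dn, password=password)
    ok = conn.bind()
    conn.unbind()
    return ok

def verify_otp_over_radius(username: str, otp: str) -> bool:
    """Factor 2 (stub): send an Access-Request carrying the one-time password to the
    RADIUS server fronting the token system, and accept only on Access-Accept."""
    raise NotImplementedError("site-specific RADIUS client goes here")

def console_login(user_dn: str, username: str, password: str, otp: str) -> bool:
    # Both factors, from two different systems, must agree before the
    # intrusion-detection console session is opened.
    return ldap_password_ok(user_dn, password) and verify_otp_over_radius(username, otp)
```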
Moral of today's story? If you are a security products vendor, make sure you encompass all of the security requirements that are placed upon non-security products.
Browser Rootkits
Rootkits, in general, are of extreme interest to me-- not because of what can be done with them (I can assume anything can be done with them and leave it at that), but because of where they can run. In essence, a rootkit is the "NOT gate" of trustworthy computing. If a rootkit can get into the 'chain of trust', then the chain is not trustworthy.
Joanna Rutkowska has demonstrated the importance for some very low level controls in modern operating systems (and virtual machines).
'pdp' at GNUCitizen has a recent write-up of browser rootkits that is good-- it talks generally about the problem, not specifically. I would like to see him propose potential solutions to his concerns, though, which he has not yet done:
"As you can see, browser rootkits are probably the future of malware. In the wrong hands, browser technologies are power tool that can be used to keep unaware puppets on a string. I am planning to follow up this post with a more detailed example of how browser rootkits are developed and show some interesting functionalities which can enable attackers to go so deep, no other has ever been."UPDATE: Joanna Rutkowska chimes in.
Wednesday, October 10, 2007
Analyzing Trust in the Microsoft URI Handler Issues
There's a buzz around the Microsoft URI Handlers. Basically, applications that rely on that Windows service can be handed data that isn't separated from code, which is rife with problems in and of itself.
First it affected Firefox, and there were some "no, you should fix it" comments thrown back and forth between Microsoft and Mozilla. Now it's affecting Microsoft applications.
The questions really are: Where should the data validation process occur? Should it happen just once by the OS's built-in URI Handlers, or should each application do its own validation?
The real answer is not that one side or the other should do it, but that they should both do it. Any application relying on the Microsoft URI handlers is trusting that the data elements are free from anything malicious. Quite simply, Mozilla and even Microsoft product managers in charge of the apps are naive if they are not also performing validation. It's a matter of understanding trust and trustworthiness.
UPDATED: There is a slashdot thread on the subject and the Microsoft Security Response Center (MSRC) have posted to their blog explaining why they have done what they have done. Either way, if you are a third party application developer, you need to understand that it's always in your best interest to sanitize data yourself-- if for no other reason than the component you trust might not be 100% trustworthy.
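What "sanitize data yourself" can look like on the application side, as a hypothetical sketch (the allow-list and suspicious markers are illustrative, not any vendor's actual checks): validate the URI against your own policy before handing it to the OS-registered handler, even though the handler is supposed to validate it too.

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https", "ftp"}   # what this application is willing to launch
SUSPICIOUS = ('"', "'", "%00", "\n", "\r")   # characters abused in the URI-handler exploits

def safe_to_hand_off(uri: str) -> bool:
    """Application-side check, performed even though the OS URI handler
    is *supposed* to validate the same thing (trust, but verify)."""
    parsed = urlparse(uri)
    if parsed.scheme.lower() not in ALLOWED_SCHEMES:
        return False
    return not any(marker in uri for marker in SUSPICIOUS)

for uri in ("https://example.com/page",
            'mailto:test%../../../../windows/system32/calc.exe".cmd'):
    print(uri, "->", "launch" if safe_to_hand_off(uri) else "reject")
```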
Trusted vs Trustworthy
From a recent Secunia advisory post:
"Solution: Do not browse untrusted websites, follow untrusted links, or open untrusted .PDF files." [italics are mine]
What a paradox. "Trust" is an action. If you browse to a website, you are trusting that website. If you clicked a link, then you trusted that link. An "untrusted" PDF file is one that was never opened. By contrast, an opened PDF file is a trusted file. Why are they instructing you to do what you already do?
What Secunia means to say is:
"Do not browse untrustworthy websites, follow untrustworthy links, or open untrustworthy .PDF files."
There is a huge difference between trustworthiness and trust. Trust is an action and, as a result, a set of relationships (e.g. Alice trusts Bob; Bob trusts Charlie; therefore, Alice transitively trusts Charlie). Trustworthiness is an estimate of something's (or someone's) worthiness of receiving somebody else's trust. Trust relationships can be mapped out-- it just takes time. Trustworthiness, on the other hand, is very difficult to quantify accurately. I also just discussed this in the context of open source and security.
If those of us in the security community don't get the difference between trust and trustworthiness, then how in the world will our systems protect users who may never understand?
On Open Source and Security
Recently, I noted that it's not important whether source code is open or not for security, it's important to have well-qualified analysts reviewing the code. As a result, I received the following comment: "If you are paranoid, pay some guys you trust to do a review. With [closed source] you can't do that." The following is my response.
...
Well, say I trust Bruce Schneier (I generally do, professionally speaking, but not necessarily personally-- and I'll pick on him since he's almost universally accepted as the patron saint of security). Let's say I trust Bruce's analysis of a particular company's products. If Bruce is reviewing the source code and the code is closed to the public but made available to him as an escrow, I would likely be OK with that. Trust is more complicated than just availability of source code. There are not too many people in the world who are qualified to do security reviews of a security product's code. So, I couldn't trust just anyone's analysis of it. To be honest, if it came down to a computationally-intensive implementation of cryptographic code, I wouldn't even trust my own analysis of it. My point is: Trust is a social-psychological phenomenon, not a technical one.
"Open source" means so many different things to different people. To some it means "free to use, modify, or distribute". To some, it means anyone can review the code. To others, it might just mean the vendor will provide source code escrow services at your request. It might be possible to have a closed source (in the common sense) product opened up to a specific customer (potentially any customer asks the right question the right way).
How many "joe users" that have a one seat install of a product actually review the code? Not many. How many of those one seat installs are actually qualified code reviewers? Fewer still.
Open Source != Security
It (open source) is an unrelated variable. It's like how automobile insurance providers in the US inaccurately categorize all male drivers under 25 years of age as high risk. Not all of them (caricatures and jokes aside) will get tickets, cause wrecks, or otherwise require the insurance agency to pay out. However, the actuarial data presented to the insurers suggests that is a reasonable category of customers for which they should increase premiums. If it was legal and ethical (and affordable) to put all under-25-year-old drivers through a *magic* test (I say magic because it may or may not exist) that could differentiate with a higher level of certainty whether the customer had the "x factor" that gives them a higher tendency to cause wrecks ... well, that's where the insurance companies would go.
Open Source is like that broad, mis-categorization. There are many open source projects that are never reviewed for potential threats by qualified people. In fact, since "open source" is so "open", there are likely projects that have never even been reviewed by anyone outside of the sole contributor. "Open Source" strikes up a connotation of community and collaboration, but it does not guarantee community and collaboration. Likewise, there's no guarantee that the people reviewing the code aren't adding security problems deliberately.
Trust is a binary action. You either choose to trust someone or something, or you choose not to trust. You might opt to choose to trust someone conditionally, such as I might trust a carpenter to build my house but not to work on my car. Trustworthiness, however, is a totally different equation. People estimate trustworthiness (which is exactly as it reads: calculating how worthy of trust something or someone is) using a combination of perceived reputation (track records) or trusted third parties' estimated trust (e.g. my friend Joe knows a lot about cars, so I trust his mechanic, since Joe would know how to differentiate between a good and bad mechanic).
A product has opened its source code for review. So what? You should be asking the following questions:
- Why? Why did you open your source?
- Who has reviewed your source? What's (not) in it for them?
- What was in the review? Was it just a stamp of approval or were there comments as well?
Saturday, October 6, 2007
Sorry for the delay, Jon
I just came across this on Jon Callas' CTO Corner just now (11 PM GMT, when I started this draft). I had a busy day Friday (obviously so did Jon), but on a totally different subject. By the time I got around to checking for comments to moderate (24 hours ago), I noticed there were several (contrary to some, I really am not sitting here trying to build a following ... read if you want, or don't--your choice, obviously). A bunch of them were either 'you don't know what you're talking about' (which is fine if they want to think that-- I posted their comments anyway) or they were 'what about X' comments which I already answered in later posts.
I saw John Dasher's (PGP Product Management) comment right away, published it, and elevated it to a main post/entry. There was a lot of negative hype around the company, apparently, and I did not want to have anything to do with a negative impact to the company [If you don't believe me on that point, all I ask is that you please read the other topics]. My main point was that this feature was not published well enough (Jon and I seem to agree on that).
I want to make this clear though: the time it took to get Jon's comment in the list (as it is now and mirrored here) was unrelated to the content in his comment, or my opinion of him in general. Read it for yourself; I'm not against differing opinions or even insults. Just note that I responded to him as well.
Jon wrote:
Jon also wrote:
I was surprised when Jon wrote:
I do understand exactly why you would want to jump the gun to believe I was editing you out. It's an "everybody has an opinion" world, one where people can choose what content to keep or throw away. So, I cannot fault you for your response. [If I was in your position, I'd probably do exactly the same thing you did for my employer in an official "PR" type letter: acknowledge, compliment, and trivialize in 200 words or less. It was well executed.]
Finally, Jon also wrote:
I would like to believe, Jon, that if we met up in the real world, we would be on the same page.
The door is open for more of your opinions.
I saw John Dasher's (PGP Product Management) comment right away, published it, and elevated it to a main post/entry. There was a lot of negative hype around the company, apparently, and I did not want to have anything to do with a negative impact to the company [If you don't believe me on that point, all I ask is that you please read the other topics]. My main point was that this feature was not published well enough (Jon and I seem to agree on that).
I want to make this clear though: the time it took to get Jon's comment in the list (as it is now and mirrored here) was unrelated to the content in his comment, or my opinion of him in general. Read it for yourself; I'm not against differing opinions or even insults. Just note that I responded to him as well.
Jon wrote:
"As I started asking about what we did and did not do to document the feature, I heard several times, 'I thought we documented that?'I have pointed out several times that there is a difference between documented and publicly documented. Now it's publicly accessible, but when this whole ordeal started out, that same link required a customer ID and password. You can read how I did work with the vendor, PGP Corp, (how their support people were not well aware of the feature) and read how they were satisfied with the way things were documeted BEFORE they made the documentation available to non-customers.
"So our product manager went off to find where it is documented. We found that it was documented in five places, including the release notes for each product (client and server) and the 'What's New?' section of each. Okay, we could do better, but a feature listed in 'What's New?' could hardly be termed 'barely documented.'"
Jon also wrote:
"In the world I live in, the world of cryptographers-are-security-geeks, the word 'backdoor' is a fighting word. It is especially a fighting word when modified with 'intentional.' It means we sold out. It means we've lied to you.I agree that "backdoor" has a negative connotation (I'd be stupid to ignore that now), but I disagree that that the connotation should exist (and arguably so do others). And most importantly, I did not use the words "intentional" + "backdoor" to mean anything aggressive or fighting. In fact, I replaced "backdoor" with "bypass" on the first post (and nearly any other post where I don't give an explanation like this one). Clearly from the beginning, I did not imply an "alternative means of access" like the conspiracy theories claim. Shoot, there are even other beneficial "backdoors" in PGP products, like the ADK (additional decryption key, such as used with Blakely-Shamir key splitting techniques). Those are backdoors in the academic sense, just not in the paranoid, socio-political sense. The question at hand is: "for whom does the door open?" And PGP Corp's official stance is: "never for an adversary or warring faction". All I have tried to point out is that it could possibly (no matter how unlikely you think it might occur) open for a well-timed cat-burglar.
The word 'backdoor' is thus a slur. It is a nasty word. There a plenty of nasty words that insult someone's race, national origin, religion, and so on. I will give no examples of them. 'The B-Word,' as I will call it, is one of those slurs. It is not to be used lightly."
I was surprised when Jon wrote:
"I wrote a tart reply and posted it to his second note. As I am writing this, 15 hours have passed and although he has approved other people's replies, he hasn't approved mine.What can I say, Jon? I guess to be fair (and if nothing else following the excellent precedent you have set), I apologize for not moderating comments sooner. Perhaps I should have emailed you so I could have been more open to your time lines. I do hope that you believe me, but I accept that you may not. I ask that you consider how I have published and linked back to everything you have offered (and then some). And I ask that you consider how the ethics truly apply here (and for what it's worth, I did mull over whether to post or not to post on this subject for months after I last discussed the issue with your employees-- I even asked other people whose ethics I trust for their input prior).
Murphy's law being what it is, it's possible that by the time I dot the 'i’s', cross the 't’s', and get this article posted on my CTO Corner, he may have printed my reply -- but it isn't there now, I just checked. Nonetheless, it angers me immensely that someone would tell lies and insult my personal integrity and the integrity of my company.
I know why he won't post it -- I point out facts, show that he has intentionally misstated facts, knowingly defamed PGP Corporation, and show that he has not lived up to the very ethical standards for which he criticizes other people. He accuses us of technical and ethical violations that he knows are false.
Therefore, I am posting my reply to him below the break. These are the facts Securology doesn't have the courage to post."
I do understand exactly why you would jump to the conclusion that I was editing you out. It's an "everybody has an opinion" world, one where people can choose what content to keep or throw away. So, I cannot fault you for your response. [If I were in your position, I'd probably do exactly what you did for my employer in an official "PR" type letter: acknowledge, compliment, and trivialize in 200 words or less. It was well executed.]
Finally, Jon also wrote:
"The mark of a good company is how you deal with issues, not the issues themselves."I think Jon did an exceptional job in his response. He was down-to-earth, humble, yet showed great pride in his company and work. He acknowledged "opportunities" (as the American corporate culture calls them) for improvement, as well as strengths. I think we agree on 90% of the details, and things are improving. Sure, some of the details may be over-hyped; that's not my fault and I still hold to my position.
I would like to believe, Jon, that if we met up in the real world, we would be on the same page.
The door is open for more of your opinions.
Friday, October 5, 2007
About "Backdoors" ...
Again, inspired by the PGP WDE Bypass Issue ...
I am not the only one in the world that uses the term "backdoor" in a generic, non-cryptographic, opposite of your-favorite-national-security-or-mafia-organization-is-going-to-get-you sort of way.
Here are several sources that are considered (at least somewhat) reputable:
Princeton's Wordnet:
"an undocumented way to get access to a computer system or the data it contains"Wikipedia (supposedly the opinion of the general public, mind you)
"A backdoor in a computer system (or cryptosystem or algorithm) is a method of bypassing normal authentication, securing remote access to a computer, obtaining covert access to plaintext, and so on, while attempting to remain undetected. The backdoor may take the form of an installed program (e.g., Back Orifice), or could be a modification to an existing program and/or hardware device."Search Security/Tech Target:
"A back door is a means of access to a computer program that bypasses security mechanisms."F-Secure (in the context of malware):
"Backdoors are remote administration utilities that open infected machines to external control via the Internet or a local network."Albany University's Information Security glossary:
"Normally installed by a virus or worm, a backdoor is a alternate method of accessing a system."The Net Guy (whoever that is-- Google had his definition):
"A means of access to a computer system that bypasses security mechanisms, installed sometimes by an authorized person, sometimes by an attacker. Often installed by programs called Trojans horse programs."And one of my favorite ones, in light of everything recently:
"Also called a trapdoor. An undocumented way of gaining access to a program, online service or an entire computer system. The backdoor is written by the programmer who creates the code for the program. It is often only known by the programmer. A backdoor is a potential security risk."
Search for yourself. While the use of the term may have prompted a media over-hype, I was not out of line in my word usage. [I removed the term from the parent post anyway, because I do not wish any harm to PGP as a company.] Rarely, if ever, is "backdoor" synonymous with paranoia surrounding an agency having access to your private keys, except ... perhaps ... maybe in a slashdot thread.
UPDATED: Adobe's PDF vulnerability is a current event that illustrates how other people in the "security community" use the word "backdoor" beyond just mathematical-cryptographic access control bypasses.
PGP's Publicized Documentation of WDE Bypass
John Dasher, Director of Product Management at PGP, commented with a link to PGP's documentation on the WDE Bypass Feature. I thought it pertinent to raise John's comment, especially with this at the end of that page:
"We appreciate and are thankful for open dialog between us, our customers, and even our critics. This discourse always helps us improve our process, our products, and the standards on which they’re based."I cannot agree more with that comment from John.
More on the PGP issue
There have been several comments regarding the PGP whole disk encryption bypass issue, and in the process a few people have suggested that my discussion of it is exactly what I have preached against with "insecurity researchers"; I want to clear a few things up.
Yes, I wrote:
"So, let's see if we can figure out the economic process ...But this PGP issue is not insecurity research:
1. Find some vulnerability in some widely used product.
2. Create a proof-of-concept and publish it to the world (preferably before sharing it with the vendor).
3. Use FUD (Fear, Uncertainty, and Doubt) to sell the services of a security consultancy startup.
4. Profit!"
- It is not a vulnerability; it's a problem in the design. It's not a coding bug; it's a "dangerous" approach to solving a problem. And if that doesn't convince you that this is not a vulnerability, even PGP thinks there's no problem.
- There's no proof-of-concept. There's a threat model, sure, but that's not a POC. There's no exploit code, there's only paradigms of attack/defense. I think that will even pass the Ranum test.
- If you think this is FUD, I apologize. I obviously do not. However, please note, there are no services being sold, no ads on the pages hosting this content, nor in the feeds, etc. There's zero royalty going to me. I'm not even using a real name to take credit. I just want to discuss paradigms of attack/defense and intricately examine and evaluate what some might call an academic-only exercise. Extended readership is nice, but not the intention here.
- See #3 -- no profit here.
I still have respect for the vendor and I encourage others to evaluate all solutions thoroughly regardless of this or anything else. The bottom line is that customers (current and potential) have the right to know their risks ahead of time-- they should not have to buy consultants/professional services time to become aware of a feature like this. It should just be well known. Jon Callas has a thorough track record of proven expertise and his opinion is very valid, just like my (somewhat opposing) view is valid.
I also want everyone to understand that "backdoor" is a controversial term-- I never intended it to mean a backdoor that law enforcement, etc., could use. It's not a cryptographic backdoor; it's a backdoor in the historical sense-- as in, there's a way to get unauthorized access.
Yes, the Trojan possibilities seem absurd because malware could do more. What isn't discussed, though, is that malware could do this in addition to everything else it does, leaving workstations in a wholesale bypass state so that any less sophisticated smash-and-grab thief gets access. It would be more than a nuisance to an enterprise IT shop, whether or not you think somebody would take the time to do it.
And, of course, there's the timing attack: grabbing the machine when it shuts down, but before it comes back up. That's the risk that is barely documented-- not discoverable by 99+% of current and potential customers. That's the point. If you read anything else into this, you're missing the idea.
Thursday, October 4, 2007
Response to Jon Callas - continued #2
This is the continuation of my response to Jon Callas regarding the PGP Whole Disk Encryption bypass.
I can appreciate the problems inherent in managing distributed computers while still protecting the confidentiality and integrity of the data contained within them. I understand there is often the need to do maintenance on machines en masse.
However, what I do not understand is the classification of machines that would have both physical access control needs (hence the PGP Whole Disk Encryption) and a need for immediate availability without a human nearby. Here are the rough categories/scenarios I can come up with:
- Mobile Computers (i.e. laptops)
- Less-Mobile Desktops/Workstations
- Servers
I will temporarily skip over #2 (workstations) for #3 servers.
I understand the reasoning behind protecting servers' drives from data leakage via whole disk encryption software. I also understand that large enterprises have farms of servers and rarely manage individual servers, preferring to manage multiple servers at once through a single admin interface. However, a smart enterprise is going to use layers of security, starting at physical security controls (the data center: locked doors, raised floors, controlled environmentals, etc.), moving through network layers (firewalls, IDS), up to the system and application layers. Besides going through the performance trade-offs, an enterprise has to decide if the complexity of WDE is worth the confidentiality/integrity it might provide.
In this situation, I personally prefer the way TPMs (Trusted Platform Modules) have been used in WDE-like implementations (NOTE: PGP Corp does not currently support the use of TPMs with its WDE product). TPMs, in theory (but perhaps not in all implementations), have the ability to create a Trusted Path for the boot process, roughly in these steps:
- TPM takes a cryptographic hash of the software used in the boot process (BIOS, bootstrap, kernel, etc.).
- TPM compares hash output of components to known good hash output of the same components [remember that the TPM is tamper resistant-- any attempts to modify the hash outputs will render the device unusable].
- If the hash outputs match, boot process continues.
- If the hash outputs do not match, the boot process halts.
- The boot strap (i.e. boot guard) can then rely upon the TPM to store the Master Volume Key, releasing it to the known-good (and therefore trustworthy) volume encryption/decryption driver either automatically or after additional authentication processes are completed successfully.
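To make those steps concrete, here is a rough, toy sketch of the measured-boot idea in Python. It is only an illustration of the concept; the hash-chaining scheme, the function names, and the sample data are my own inventions for this post, not any vendor's (or the TPM specification's) actual interface.

import hashlib

def measure(components):
    # Hash each boot component in order, chaining the results the way a TPM
    # extends a PCR: new_register = H(old_register || H(component)).
    register = b"\x00" * 32
    for _name, data in components:
        register = hashlib.sha256(register + hashlib.sha256(data).digest()).digest()
    return register

# "Known good" measurement recorded when the disk was provisioned.
provisioned = [("BIOS", b"bios-image-v1"),
               ("bootstrap", b"bootguard-v1"),
               ("kernel", b"kernel-v1")]
sealed_measurement = measure(provisioned)
master_volume_key = b"master-volume-key"   # in reality sealed inside the TPM, not in software

def boot(current_components):
    # Release the Master Volume Key only if the current boot chain hashes to
    # the same value that was sealed at provisioning time; otherwise halt.
    if measure(current_components) == sealed_measurement:
        return master_volume_key
    raise SystemExit("boot halted: boot chain has been modified")

print(boot(provisioned))   # matches the sealed value, so the key is released
# Replacing "bootguard-v1" with a tampered image would halt the boot instead.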
A major plus of TPMs over PGP's implemented bypass is that their use truly is a customer opt-in choice. Whereas the PGP Boot Guard (today) always checks for the existence of the bypass user access key and supplies it a generic/static x01 passphrase whether or not a customer knows about or wants this feature on their systems, the TPM is explicitly designed and documented to perform the automated-unlock function. There's no hidden trick; it's a deliberate act on behalf of the customer. Another huge plus is that there is no API or command line tool to interact with in order to establish the automated boot, so there are fewer opportunities to accidentally disclose a critical boot volume passphrase (i.e. no plaintext scripts).
Now, to return to the issue of workstations ...
I have seen enterprise IT organizations treat workstations as servers, installing production-critical servers on desktop-class hardware and expecting the SLAs that a server in a well-protected data center (or at least a locked closet) could provide. Those people are making mistakes-- not just security mistakes, but IT management mistakes. If something is that critical, it should be well-supported and funded, and provided the layers of control it should have. And if those are PGP's customers, I would still contend they should pursue the TPM route described above for servers.
For workstations that sit (semi-)permanently on employees' desks, what are the real objections to having users come into work some morning only to see the PGP Boot Guard authentication dialog box waiting for them? If the objection is that the users are not trusted to have a User Access Key of their own (think: kiosks), then again, the TPM route is probably best, so there is less of a key management issue. If the objection is that it simply isn't convenient ... well ... anyone who understands security understands how dangerous playing into that line of thinking can be. Most organizations that would use the PGPwde.exe --add-bypass command en masse will be using a shared administrative account (some call these "service accounts"), which means yet another default password that has to be managed, reset when staff leave, etc. For most organizations, that is a nightmare. To top that off, remote-management tools like Microsoft's SMS often place scripts and installers in temp space, which is rife with problems cleaning up after the fact. Sure, if a file is deleted by the OS (using the "tombstoning" technique, not over-writing) on a fully-encrypted volume, there is no access to the file when the OS is not running, but online recovery of deleted files within a file system can still occur, even if the subsystem is reading/writing blocks of encrypted data from the disk. The script is still semi-visible to the OS, which means a passphrase in a script could leak.
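As an aside on how easily those leftover scripts give the game away, here is a small, purely illustrative Python sketch that just greps temp space for anything that looks like an --add-bypass invocation with an inline passphrase. The directories, file extensions, and search pattern are assumptions made for the sake of the example, not a documented forensic procedure.

import os
import re

TEMP_DIRS = [os.environ.get("TEMP", r"C:\Windows\Temp"), r"C:\Windows\Temp"]
# Look for a hard-coded passphrase passed on the command line, as discussed above.
PATTERN = re.compile(r"--add-bypass\s+--passphrase\s+(\S+)", re.IGNORECASE)

for root_dir in TEMP_DIRS:
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        for name in filenames:
            if not name.lower().endswith((".bat", ".cmd", ".vbs", ".txt")):
                continue   # only bother with likely script/leftover text files
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as handle:
                    for match in PATTERN.finditer(handle.read()):
                        print(f"possible passphrase leak in {path}: {match.group(1)}")
            except OSError:
                continue   # unreadable file; skip it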
So to recap: use PGP WDE with pre-boot authentication under the expectation that users will always have a pre-boot authentication experience, even after maintenance reboots. Where that is not feasible, select a WDE solution that uses TPMs for integrity checking and secure storage of keys.
...
I would bet that some *magic* with TPMs could create a hybrid, best-of-both-worlds scenario. Since TPMs can securely store hashes for matching online hash output against a known good historical state, it may be feasible to create a scenario where the default action is to salt+hash the user's passphrase and compare it to hashes known only by the TPM, but with a follow-up one-time-bypass scenario where the TPM is told to check another source for the hash comparison (i.e. the environmentals of the system, the BIOS, the serial number, the boot loader, etc., all hashed together, similar to what some current TPM-WDE implementations always do today). That might make it possible to have pre-boot authentication as the default with a truly trustworthy one-time bypass in the case of remote/automated maintenance.
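For what it's worth, here is a toy Python sketch of that hybrid, purely to show the shape of the idea. The class, its methods, and the notion that a TPM could be "armed" for a single environment-based unlock are my assumptions, not a description of any shipping TPM or WDE product.

import hashlib
import os

def sha256(*parts):
    digest = hashlib.sha256()
    for part in parts:
        digest.update(part)
    return digest.digest()

class ToyHybridTPM:
    # Default: pre-boot passphrase authentication. An authorized admin can arm
    # a single bypass that instead accepts a hash of the machine's own
    # environment (BIOS, serial number, boot loader, etc.).
    def __init__(self, passphrase, environment):
        self.salt = os.urandom(16)
        self.passphrase_hash = sha256(self.salt, passphrase)
        self.environment_hash = sha256(environment)
        self.one_time_bypass_armed = False

    def arm_one_time_bypass(self):
        self.one_time_bypass_armed = True   # would require an authorized user in reality

    def unlock(self, passphrase=None, environment=None):
        if passphrase is not None and sha256(self.salt, passphrase) == self.passphrase_hash:
            return "volume key released (pre-boot authentication)"
        if (self.one_time_bypass_armed and environment is not None
                and sha256(environment) == self.environment_hash):
            self.one_time_bypass_armed = False   # the bypass burns itself after one use
            return "volume key released (one-time maintenance bypass)"
        return "unlock refused"

tpm = ToyHybridTPM(b"correct horse battery staple", b"BIOS-v1|SN-12345|bootloader-v1")
print(tpm.unlock(passphrase=b"correct horse battery staple"))       # normal boot
tpm.arm_one_time_bypass()
print(tpm.unlock(environment=b"BIOS-v1|SN-12345|bootloader-v1"))    # maintenance reboot
print(tpm.unlock(environment=b"BIOS-v1|SN-12345|bootloader-v1"))    # refused: already used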
And, because I could not leave this important point out, like I said previously:
"Please keep in mind that encrypted data is just data protected under time-lock; either the time required to crack the encryption key starting today on current hardware, or the time it takes to wait until current computing hardware can trivially crack the encryption key. And of course, keep in mind that most end-users of a disk encryption software will be setting a password that they can remember and type readily with little slow-down at boot. Even the strongest passwords could be shoulder-surfed and if the motivation is the data, encryption software likely won't be victorious in the end ... but it does make for a good security blanket (sensation of comfort). Instead try to remember that a wonderful and viable alternative is to not carry sensitive data on a mobile device at all! There are layers of security that exist to protect data that is centrally stored on servers in a data center. Those layers tend to not travel well with mobile devices."
PGP Bypass on Slashdot
Thanks, Slashdot. Some of your comments are on target; some ... well, I anticipated the knee-jerk response you gave. Many people (even technical people) make the mistake of thinking that using crypto automatically equals security. Yes, I realize it requires cryptographic (as in already-authorized) access to the drive, but that's the point. The bypass feature creates an opportunity for authorized users to accidentally allow access to unauthorized users.
The positive aspects of the bypass ...
- Yes, it does require an authorized person to enable it (not necessarily an administrator, but at least a user).
- Yes, it does make remote, automated management possible, although at a cost.
- OK. It's not a true cryptographic backdoor, but it is a dangerous access control bypass. Either way, it's unfriendly to discover after installation.
There are really a couple issues at hand here ...
- There is no central audit trail. Any user could set this feature up without the knowledge of the "remote" admins. In fact, a smart user could create a script (or use someone else's) to disable the boot passphrase after each boot, which leads into the next point ...
- There is no way to disable this feature. Jon Callas' (of PGP Corp) response that the bypass is "disabled" by default is more accurately stated as the bypass feature is "unused" by default. Anyone can use it at any time. In fact, the PGP Boot Guard (as best as one can tell without public documentation) checks for the existence of the bypass at each boot.
- There are no integrity checking controls on the Boot Guard. Admins must trust that the boot guard is not accidentally or intentionally modified to stop the bypass reset/removal function.
- The biggest threat is a timing attack. Capturing a system that hasn't reset the bypass grants full access to the data. If an adversary (or semi-trusted insider) can know when the automated reboots are scheduled, the device can be captured and misused.
- The feature wasn't PUBLICLY documented. There simply is no excuse for this feature not being disclosed to current and potential customers. That is the #1 motivation for my discussion of this problem.
Some comments bring up the issue of open source vs. closed source for security. Personally, I view that as an irrelevant side-detail; the question I am concerned with is who has access to review the source code. But yes, despite PGP having some sort of an open source code review process, this feature was still not publicly documented.
UPDATED: Since there are objections to my claim that "The feature wasn't documented" (besides my details about how the feature came to become documented as it is now), I have changed the wording to "The feature wasn't PUBLICLY documented". Because if the documentation isn't in the hands of someone who would find it useful ... then what's the point?
Are Security Models Bankrupt: Microsoft's Stride Chart
Over on the Microsoft SDL (Security Development Lifecycle) blog, there's a post about a security tool Microsoft uses in their threat modeling process called the "Stride Chart". Why is it that Microsoft's Stride Chart appears to be such a weird derivation of Donn Parker's Hexad? Here are the components Microsoft uses to estimate security threats:
- Authentication
- Integrity
- Non-Repudiation
- Confidentiality
- Availability
- Authorization
First of all, there is some controversy surrounding Parker's Hexad, roughly because it does appear to just be a more detailed model of its parent, the CIA Triad. But looking at the Stride Chart, I have to wonder why Microsoft has chosen six main "threat categories" (not three: CIA), but not the exact same six as the Hexad: Confidentiality, Integrity, and Availability, buttressed by Possession/Control, Authenticity, and Utility. Why Authorization and Non-Repudiation and not Utility and Possession as well? Isn't the distinction between Non-Repudiation and Authenticity a hair-splitting, English-has-too-many-synonyms type of oversight?
Are security models bankrupt? Has Microsoft (and others, because this Stride Chart is getting head nods) reduced to adding every last security buzzword to their lists? Can the basic security models we use be simplified any further? And, very importantly, where are the logic proofs that formally establish these principles as the foundation upon which all security solutions should be built?
Or, is the security industry so busy building money-making solutions that fundamentals are ignored?
Wednesday, October 3, 2007
Response to Jon Callas - PGP Encryption Bypass
As I can only assume the real Jon Callas placed this comment (and, Jon, I am grateful for your time and thoughts if it is you), here are my responses....
"You bring up an interesting issue with the automated reboot feature, but you don't have the details right. I can't fault you for that, as we haven't documented on the web site. Full product documentation should be coming in the next release."I am curious, Jon, if you could be forthcoming with details as to why this feature was not documented. I would hate to pull out the overused "security by obscurity" banner. Was it intentional (and if so, why?) or was it simply oversight?
"The major inaccuracy you have is that the passphrase bypass operates only once. After the system boots, the bypass is reset and has to be enabled again. Note that to enable it, you must have cryptographic access to the volume. You cannot enable it on a bare running disk."Of course you are correct on that detail. I was aware of the one time use parameter, but did unintentionally neglect its inclusion.
But we are both working under the assumption that we are using the PGP-issued boot guard binary to unlock and boot the drive. If, however (and please correct me if I am wrong, Jon), a third party were to reverse engineer the process by which the PGP boot guard works and build their own (say, to boot from media such as a CD or DVD), this bypass-- which is simply another key (protected by a passphrase of hexadecimal value x01) that decrypts the Volume Master Key-- could remain on disk. The users (and administrators) have to trust that the PGP binary will leave the function calls (the ones that remove the bypass from disk) intact.
Essentially, this is an example of a trusted client, which (without going too far off subject here) is not much different from why client-based NAC implementations fail [because if you cannot trust a machine trying to connect to your network, how can you trust the output of some software running on that machine as an attempt to interrogate its trustworthiness?]. There is no trusted path to validate that the binary image called both to add the bypass and to boot the device (removing the bypass) is unchanged from its distribution by its maker, PGP Corp. Administrators who use this feature are putting their trust (or perhaps their faith) in the hope that: 1) the binary as identified by file path has not been (and will not be) changed, and 2) there is no interest in the "insecurity research" community in creating a method to maliciously alter those binaries.
Basically (and I apologize, Jon, if this is a simplistic diagram), the image below is a disk that is protected by PGP Whole Disk. The User Access Keys, Boot Guard (software that unlocks the disk), and Volume Master Key may be out of order (they probably are-- after I quickly made the diagram I realized the Boot Guard is likely to be first), but the ideas are the same regardless. User Access Keys unlock the Volume Master Key, and users unlock their corresponding access key with their passphrase (or a physical token). If a bypass exists, it is added to the User Access Keys. The Boot Guard has a function call for using the bypass (by attempting to decrypt the bypass user01 access key with a passphrase of value x01) and a function call to remove the bypass from the user access keys on disk.
What if the PGP Boot Guard's function for removing the bypass key were removed (e.g. by a Trojan/malicious boot guard or boot media)? What controls would then ensure that the bypass key is removed? If the PGPwde.exe --add-bypass command checks the integrity of the Boot Guard to ensure the RemoveBypass() (or equivalent) function call is intact, it certainly isn't documented that it does so. And regardless of whether it checks at the instant it creates the bypass, there is still no guarantee from that point in time that the Boot Guard won't be manipulated or that alternative boot media won't leave the bypass intact.
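To make the moving parts easier to follow (since the diagram may not render everywhere), here is a toy Python sketch of the layout and boot-guard logic as I understand it. The dictionary layout, the add_bypass/remove_bypass/boot_guard names, and their behavior are my own guesses for illustration only-- not PGP Corp's code or on-disk format.

BYPASS_PASSPHRASE = "\x01"   # the static hexadecimal x01 passphrase discussed above

disk = {
    # each entry: passphrase -> a wrapped copy of the Volume Master Key
    "user_access_keys": {"alice's passphrase": "VMK-wrapped-for-alice"},
    "volume_master_key": "encrypted-volume-master-key",
}

def add_bypass(disk, authorized_passphrase):
    # What "PGPwde.exe --add-bypass" conceptually does: an already-authorized
    # user adds one more access key that opens with the static x01 passphrase.
    if authorized_passphrase in disk["user_access_keys"]:
        disk["user_access_keys"][BYPASS_PASSPHRASE] = "VMK-wrapped-for-bypass"

def remove_bypass(disk):
    # The step everything hinges on: if a modified boot guard or alternate
    # boot media never calls this, the bypass key silently stays on disk.
    disk["user_access_keys"].pop(BYPASS_PASSPHRASE, None)

def boot_guard(disk):
    # Try the bypass first; otherwise fall back to prompting the user.
    if BYPASS_PASSPHRASE in disk["user_access_keys"]:
        key = disk["user_access_keys"][BYPASS_PASSPHRASE]
        remove_bypass(disk)   # one-time use -- unless this call has been patched out
        return f"booted unattended with {key}"
    passphrase = input("PGP Boot Guard passphrase: ")
    return disk["user_access_keys"].get(passphrase, "access denied")

add_bypass(disk, "alice's passphrase")
print(boot_guard(disk))   # unattended boot; the bypass key is removed afterwards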
Jon went on to say:
"We are not the only manufacturer to have such a feature -- all the major people do, because our customers require it of us. A number of other disk encryption vendors call this a "wake on lan" feature, which we believe to be misleading. We call it a passphrase bypass because that is what it is. It is a dangerous, but needed feature. If you run a business where you remotely manage computers, you need to remotely reboot them.Exactly. This is the theme of which I would like to take hold, more so than the hype of a problem in a widely-adopted product. The question at hand may be: "Is whole disk encryption an example of bolt-on security that doesn't truly solve the problem of confidentiality and integrity of data at distributed locations?"
"The scenario you describe is more or less the intended one, and you identify the risk inherent in the feature. If someone enables the bypass and the volume is immediately stolen, then the volume is open. However, this window is usually very small. The people who use it understand the risk."
What also surprises me about the customers that would require PGP WDE to have such a feature is the way they would have to use it. Since this is command-line driven, it is obviously designed for use in scripting. I have a hard time fathoming an enterprise organization that would, on one hand, require the use of full disk encryption on computers and then, on the other hand, distribute a script with a hardcoded passphrase in it, presumably using a software distribution tool like Microsoft's Systems Management Server (SMS) or similar. The risk of this PGP WDE feature notwithstanding, we are talking about admins using shared/generic/static passphrases for all or many computers, stored in plaintext scripts, set to execute en masse. If the complexity doesn't accidentally disclose the default administrative passphrase, then fallible humans keeping human-readable scripts in N locations, used every time Microsoft releases a patch, certainly will. An average security-conscious IT shop running Windows products (because PGP WDE is a product for Windows) will have at least 12 opportunities per year for devices to get stolen while they are in this vulnerable "bypass" state. Does the use of this feature from PGP WDE (or from any full disk encryption vendor, as Jon claims competitors have similar functionality) increase the risk that laptops will be stolen on the eve of the second Tuesday of every month?
"You do not note, however, that the existence of this feature does not affect anyone who does not use it. It is not a back door, in the sense that cryptographers normally use the word.True. It's not a "backdoor" in the sense of 3 letter agencies' wiretapping via a mathematical-cryptographic hole in the algorithm used for either session key generation or actual data encryption, but how can a PGP WDE customer truly disable this "bypass" feature? As long as the function call to attempt the bypass exists in the boot guard's code, then the feature is "enabled", from my point of view. It may go unused, but it may also be maliciously used in the context of a sophisticated attack to steal a device with higher valued data contained within it:
"You cannot enable the feature without cryptographic access to the volume. If you do not have it enabled, you are not affected, either. I think this is an important thing to remember. Anyone who can enable the feature can mount the volume. It is a feature for manageability, and that's often as important as security, because without manageability, you can't use a security feature."
- Trojan Horse prompts user for passphrase (remember, PGP WDE synchronizes with Windows passwords for users, so there are plenty of opportunities to make a semi-realistic user authentication dialog).
- Trojan Horse adds bypass by unlocking the master volume key with the user's passphrase.
- [Optional] Trojan Horse maliciously alters boot guard to disable the RemBypass() feature. [NOTE: If this were to happen, it would be a permanent bypass, not a one-time-use bypass. Will PGP WDE customers have to rely on their users to notice that their installation of Windows boots without the Boot Guard prompting them? Previous experience should tell us that users will either: A) not notice, or B) not complain.]
- Laptop is stolen.
- Enterprise IT shop sends notification to users regarding upcoming maintenance (OS/Application patch) which will include a mandatory/automated reboot.
- Malicious insider powers off computer when BIOS screen appears, keeping disk in "bypass" state.
- Malicious insider images the drive or steals the device entirely.
Jon continued:
"You say, 'There has to be a better solution for this problem.' Please let me know when you have it. I can come up with alternate solutions, but they all boil down to an authorized user granting a credential to the boot driver allowing the boot driver to get to the volume key. We believe that our solution, which only allows a single reboot to be a good compromise. It doesn't endanger people who don't use the feature, but it allows people to remotely administer their own systems."Jon, I would be more content with this feature in your product, if:
- The feature was documented clearly, including a security warning covering the risks of its use/presence in such a way that administrators must see it.
- The feature could be permanently disabled-- not just ignored or left seemingly unused.
- The intended use of the feature did not require the creation of a passphrase with cryptographic access to the Volume Master Key.
- The intended use of the feature did not require the distribution of plain text scripts with an embedded passphrase to N clients each and every time that feature is needed.
Monday, October 1, 2007
PGP Whole Disk Encryption - Barely Acknowledged Intentional Bypass
Popular whole disk encryption vendor PGP Corporation has a remote support "feature" which allows unattended reboots, fully bypassing the decryption boot process. The feature, which until recently was not publicly documented in most support manuals (it was accessible to customers only), allows a user who knows a boot passphrase to add a static password (hexadecimal x01) that the boot software knows. If this flag is set, the boot process does not interrogate a user. It simply starts the operating system. The feature can be accessed via the command line (ignore line wrap):
"%programfiles%\PGP Corporation\PGP Desktop\PGPwde.exe" --add-bypass --passphrase [passphrase here]
How trivial would it be for a Trojan to pretend to be an authentication dialog box and apply the user-supplied password as the drive unlocking passphrase!
This illustrates that, while encryption of hard drives can reduce the exposure of a lost or stolen hard drive to a time-complexity problem, there can also be a false sense of security here, too. My personal anticipation is that the lost laptop parade in the media will eventually include a breach of an encrypted laptop. This feature of PGP notwithstanding, there is the age-old problem of shoulder surfing for the boot passphrase. PGP's feature now allows a lost machine with the bypass flag set to be easily compromised. Granted, without that feature, a Trojan could capture the boot passphrase anyway. The real solution is to not have critical data on mobile devices. Our banks do not issue us mobile bank vaults-- we keep our valuables in nice, safe, centralized bank vaults where layers of security protect them.
Imagine this scenario:
- Enterprise IT ships out a maintenance patch requiring a reboot.
- The scripted installation also uses the PGP WDE bypass to set up an unattended reboot.
- The script preps the system for shutdown, but the laptop is stolen.
PGP Corporation should also be in the doghouse for keeping this "feature" so quiet. For example, there is nothing in the current release that documents this feature. Even the pgpwde.exe help option delivers nothing to indicate its existence:
C:\"%programfiles%\PGP Corporation\PGP Desktop\PGPwde.exe" --help
PGP WDE command line tool.
Commands:
Generic:
-h --help this help message
--version show version information
Disk enumeration:
--enum enumerate all fixed and removable disks
--disk-info display information about a specified disk
User management:
--add-user add a user to specified disk
--remove-user remove a user from specified disk
--list-user list all users from specified disk or user file
--verify-user verify a user on disk with specified passphrase
Disk management:
--disk-status display encryption status of specified disk
--instrument instrument a disk with Bootguard
--uninstrument remove Bootguard instrumentation from specified disk
Disk operation:
--decrypt remove whole disk encryption from specified disk
--stop stop an active encryption or encryption removal process
--wipe-disk wipe the content of specified disk
--recover try to recover specified damaged disk
Options:
Boolean:
--fast enable fast mode encryption
--safe enable safe mode encryption
Integer:
--disk specify physical disk by number
--block-size optional specify encryption block size
Strings:
--public-keyring public keyring file
--private-keyring private keyring file
-i --input input file name
-o --output output file name
--passphrase passphrase for disk user or private PGP key
--ap --admin-passphrase admin symmetric passphrase
--rp --recovery-token recovery passphrase
--asc armored key file in .asc format
--admin-asc armored admin key file
-u --user user name
There has to be a better solution for this problem.
UPDATED: You can remove any bypass passwords using the "--remove-bypass" switch, and you can check to see whether a bypass is set up using the "--check-bypass" switch. For example:
C:\>"%programfiles%\PGP Corporation\PGP Desktop\PGPwde.exe" --remove-bypass
Failed to locate user: ☺user
UPDATE 2: Jon Callas, CTO/CSO of PGP Corp, allegedly (because there is no non-repudiation in the comment) left a comment regarding this post. My response is here.
UPDATE 3: Here is some feedback for the slashdot thread. Here is a continuation of my response to Jon Callas.
UPDATE 4: If you're coming here from Jon Callas' CTO Corner article, then take a quick trip here to see my response.