Thursday, October 4, 2007

PGP Bypass on Slashdot

Thanks, Slashdot. Some of your comments are on target; some ... well, I anticipated the knee-jerk response you gave. Many people (even technical people) make the mistake of thinking that using crypto automatically equals security. Yes, I realize the bypass requires cryptographic (as in already-authorized) access to the drive, but that's the point: the bypass feature creates an opportunity for authorized users to accidentally allow access to unauthorized users.

The positive aspects of the bypass ...
  1. Yes, it does require an authorized person to enable it (not necessarily an administrator, but at least a user).
  2. Yes, it does make remote, automated management possible, albeit at a cost.
  3. OK. It's not a true cryptographic backdoor, but it is a dangerous access control bypass. Either way, it's unfriendly to discover after installation.

There are really a couple of issues at hand here ...
  1. There is no central audit trail. Any user could set this feature up without the knowledge of the "remote" admins. In fact, a smart user could create a script (or use someone else's) to disable the boot passphrase after each boot (a rough sketch of such a script appears after this list), which leads into the next point ...
  2. There is no way to disable this feature. Jon Callas of PGP Corp responded that the bypass is "disabled" by default, but it would be more accurate to say the bypass feature is "unused" by default. Anyone can use it at any time. In fact, the PGP Boot Guard (as best one can tell without public documentation) checks for the existence of the bypass at each boot.
  3. There are no integrity checking controls on the Boot Guard. Admins must trust that the boot guard is not accidentally or intentionally modified to stop the bypass reset/removal function.
  4. The biggest threat is a timing attack. Capturing a system that hasn't reset the bypass grants full access to the data. If an adversary (or semi-trusted insider) knows when the automated reboots are scheduled, the device can be captured and misused.
  5. The feature wasn't PUBLICLY documented. There simply is no excuse for this feature not being disclosed to current and potential customers. That is the #1 motivation for my discussion of this problem.
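To make point 1 concrete, here is a minimal sketch of the kind of script a user could register as a startup task. To be clear, the --add-bypass and --disk flag names, the install path, and the hard-coded passphrase are my own assumptions for illustration; this is a sketch of the scenario, not a recipe from PGP's documentation.

```python
# Hypothetical illustration of the point-1 scenario only. The --add-bypass and
# --disk flag names and the install path are assumptions, not documented PGP options.
import subprocess

PGPWDE = r"C:\Program Files\PGP Corporation\PGP Desktop\pgpwde.exe"  # assumed path
PASSPHRASE = "passphrase-the-user-already-knows"                     # placeholder

def rearm_bypass(disk: int = 0) -> None:
    """Re-enable the one-time boot-passphrase bypass for the given disk."""
    subprocess.run(
        [PGPWDE, "--add-bypass",             # assumed flag name
         "--disk", str(disk),                # assumed flag name
         "--passphrase", PASSPHRASE],
        check=True,
    )

if __name__ == "__main__":
    # Registered as a logon/startup task, this would quietly keep the machine
    # booting without a passphrase prompt after every reboot.
    rearm_bypass()
```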
The bottom line is that if the bypass were properly documented for potential customers to review before placing trust in the product, OR if the bypass could be truly disabled (guaranteed by integrity checks), then this would not be an issue at all.
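As a rough illustration of what such an integrity check could look like from the admin side, one could baseline a hash of the boot area right after a trusted install and re-check it periodically. Where exactly the Boot Guard lives on disk and how many sectors to cover are assumptions on my part, so treat this purely as a sketch of the missing control.

```python
# Sketch of a compensating integrity check: hash the first sectors of the boot
# disk, where a pre-boot loader like the PGP Boot Guard would typically live.
# The device path, the sector count, and the assumption that this region fully
# covers the Boot Guard are mine, not PGP's.
import hashlib

DEVICE = r"\\.\PhysicalDrive0"   # Windows raw-disk path; requires admin rights
SECTOR_SIZE = 512
SECTORS = 2048                   # how much of the front of the disk to cover (assumed)

def boot_area_digest() -> str:
    """Return a SHA-256 digest of the first SECTORS sectors of the boot disk."""
    with open(DEVICE, "rb") as disk:
        data = disk.read(SECTORS * SECTOR_SIZE)
    return hashlib.sha256(data).hexdigest()

if __name__ == "__main__":
    # Capture this value after a known-good install, store it off-box, and
    # alert when a later run produces a different digest.
    print(boot_area_digest())
```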

Some comments bring up the issue of open source vs. closed source for security. Personally, I view that as an irrelevant side detail; the question I am concerned with is who has access to review the source code. But yes, despite PGP having some sort of an open source code review process, this feature was still not publicly documented.

UPDATED: Since there are objections to my claim that "The feature wasn't documented" (besides my details about how the feature came to be documented as it is now), I have changed the wording to "The feature wasn't PUBLICLY documented". Because if the documentation isn't in the hands of someone who would find it useful ... then what's the point?

7 comments:

Anonymous said...

> There is no central audit trail. Any user could set this feature up without the knowledge of the "remote" admins

the state of the flag can be checked!

> In fact, a smart user could create a script (or use someone else's) to disable the boot passphrase after each boot

He could also put his PW on a Post-it on the monitor/under the keyboard - so what?

> Anyone can use it at any time

Anyone who knows the PW. But he could also decrypt the whole disk - is that also a security issue?

> Admins must trust that the boot guard is not accidentally or intentionally modified to stop the bypass reset/removal function.

They can just check, either by using "--check-bypass" or by rebooting (if the PC comes back up, the bypass was activated).

> The biggest threat is a timing attack. Capturing a system that hasn't reset the bypass grants full access to the data. If an adversary (or semi-trusted insider) knows when the automated reboots are scheduled, the device can be captured and misused.

Why should anyone schedule a reboot with the bypass set (and leave the PC running with an "open" disk)? Either you need a reboot after an install, in which case you set the bypass - or you just shut the PC down and let the user enter the PW when he starts up the PC the next time.

> The feature wasn't documented

I kind of understand why they didn't want to put it "front and center" if people react like you.

Tim MalcomVetter said...

Anonymous said: "the state of the flag can be checked!"

Well, while it could be remotely polled for state, vigilance would require polling constantly for all machines, which is not practical. Machines that are offline or on disconnected/segmented networks can't be polled. So there is no realistic accountability there. And even if the machines are online, only an IT shop whose admins have nothing to do would encourage them to go around using the --check-bypass option regularly.
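For what it's worth, here is roughly what that polling would look like across a fleet, which should underline why it isn't a practical control. Only the --check-bypass option comes from this discussion; the run_remote helper and the host inventory are placeholders for whatever remote-execution tooling a shop may already have.

```python
# Sketch of fleet-wide polling of the bypass flag. run_remote() is a
# placeholder; PGP is not claimed to provide any remote-execution channel.
from typing import Dict, Iterable

HOSTS = ["laptop-001", "laptop-002"]   # placeholder inventory

def run_remote(host: str, command: str) -> str:
    """Placeholder: run `command` on `host` and return its output."""
    raise NotImplementedError("wire this up to ssh/psexec/your own tooling")

def audit_bypass(hosts: Iterable[str]) -> Dict[str, str]:
    """Collect `pgpwde --check-bypass` output for every reachable host."""
    results: Dict[str, str] = {}
    for host in hosts:
        try:
            results[host] = run_remote(host, "pgpwde --check-bypass")
        except Exception as exc:       # offline or unreachable machines
            results[host] = f"unreachable: {exc}"
    return results

if __name__ == "__main__":
    for host, state in audit_bypass(HOSTS).items():
        print(f"{host}: {state}")
```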

"he also could put his PW on a Postit (sic) on the monitor/under the Keyboard - so what?"

Humans will do devious things, no doubt. But, OTOH, if a technical solution existed to prevent people from putting passwords on Post-it notes, it would become a best seller overnight. Nice analogy, but it doesn't quite fit; PGP WDE is a technical solution. It doesn't have to be implemented in such a way as to allow the equivalent of Post-it notes. [Not to mention this is kind of the point/problem with distributed computing -- watch for future posts on alternatives that could prevent these threats from materializing.]

"I kind of understand why they didn't want to put [the bypass feature] 'front and center' if people react like you."

Well, I hope you do realize that regardless of how people react, keeping these details out of the hands of (potential) customers is security by obscurity. Besides how I feel on the subject, there are many people whose confidence in the product will decrease, unfortunately, simply because the feature was hidden/undocumented publicly. Jon Callas himself admitted that the feature was "dangerous". I hope that you can see past the hype and realize that any vendor with a feature that bypasses the access controls in the product it sells should clearly document that feature, its risks, and its benefits, and provide a permanent way to disable it for those customers who do not wish to risk its use.

Anonymous said...

You seem to misunderstand the threat model associated with full disk encryption products. You talk about how a devious user could add this bypass flag, but the user doesn't need to -- he/she already has access to all the data.

Full disk encryption protects against third parties, not first parties. If you (as a security professional) need to protect data from the user of the machine, FDE/WDE is wholly inappropriate.

WDE must be used in conjunction with the machine's user. Why? The user can just copy the data off. They could decrypt the drive. They could print things out and leave them in a dumpster. WDE protects against data loss due to hardware theft. That's the threat model. That's all. Again, if the user wants to undermine this security, he/she can, regardless of the brand of FDE used.

Everything else you write about is a red herring.

Concerned a user will add the bypass flag? Remove the PGPWDE command line from the drive.

Need to confirm bypass wasn't used? Look at the user name of the WDE user that unlocked the drive.

Worried about a "timing attack?" Don't use the feature.

Worried that a trojan might turn this on? Worry more that a trojan has your passphrase in the first place.

I read everything you wrote on this topic, and I still don't see how there is a realistic exploit against this from someone without the passphrase. The biggest risk is someone misusing it, or using it in an insecure way. This is not about "security through obscurity" because there's no actual security issue. Companies that need the feature can use it; individuals who don't are completely unaffected and are less likely to do something silly that lowers their own security profile.

You say, "Besides how I feel on the subject, there are many people whose confidence in the product will decrease, unfortunately, simply because the feature was hidden/undocumented publicly."

Actually, there's a much higher likelihood that people's confidence in the product will decrease because of your irresponsibly calling it a "backdoor". You seem to be conversant in security issues, so you must know (a) this is a highly emotionally charged word and about the worst accusation that can be made against a product like this; (b) in today's environment (political, social, blogosphere echo chamber) such an accusation would be picked up and repeated without research beyond the headline; and (c) PGP's name would be dragged through the mud. It's sad that regardless of how hard we've worked to earn the trust of our customers, we would lose some of that trust because ultimately you disagree with how it should be documented.

Finally, you conclude elsewhere that since no one has raised this, no one has actually reviewed our source code. Many users, experts, and academics have reviewed it, in addition to extensive internal and external pen testing. Have you considered that no one before you has discussed it because maybe, just maybe, they all realize it's not a big deal?

(Yes, I work for PGP but this isn't an official statement, just my opinion as an individual employee).

Anonymous said...

Let it go, man, you know you have brought up a totally invalid point. If an admin wants to make sure that users can't use this feature, they can just delete pgpwde.exe.

Anyone installing whole disk in a managed environment is likely to have bought consultancy for the install from a PGP expert who will know about this feature.

Tim MalcomVetter said...

Anonymous said: "Let it go, man, you know you have brought up a totally invalid point. If an admin wants to make sure that users can't use this feature, they can just delete pgpwde.exe."

That's ironic, because PGP's own product support actually advised me not to delete pgpwde.exe.

"Anyone installing whole disk in a managed environment is likely to have bought consultancy for the install from a pgp expert who will know about this feature."

Ours didn't know. Neither did our SE/Support. Although I'm sure they all know now. But read my response here, since this shouldn't require a consultant to become part of the threat/risk evaluation scenario -- it should be known prior to product selection.

Tim MalcomVetter said...

An Anonymous PGP Corp employee said:

"You seem to misunderstand the threat model associated with full disk encryption products. You talk about how a devious user could add this bypass flag, but the user doesn't need to -- he/she already has access to all the data.

"Full disk encryption protects against third parties, not first parties....

"WDE must be used in conjunction with the machine's user. Why? The user can just copy the data off. They could decrypt the drive. They could print things out and leave them in a dumpster. WDE protects against data loss due to hardware theft. That's the threat model. That's all. Again, if the user wants to undermine this security, he/she can, regardless of the brand of FDE used.

"Everything else you write about is a red herring."

Granted, we don't have (known) malware doing something like this today. Granted, malware that could do this could do worse, too, but the malware threat landscape has changed. I could foresee malware adding components to randomly disable everyone's FDE products (regardless of vendor, apparently) in addition to a slew of other things. Why? Why do malware authors do anything they do? Name recognition, money, espionage/information warfare, etc. The point is not to say "nobody would ever think to do that". I recall a Silver Bullet session with Gary McGraw where Gary blasted that line of thinking -- it's the same lazy excuse that brought many of the Internet's security problems to reality in the 1990s. The point is that it can be done (how unlikely it is isn't up to you to decide; it's up to your customers), and PGP did not do even a mediocre job of being proactive on that issue.

Tim MalcomVetter said...

Anonymous PGP Corp employee said:

"You say, 'Besides how I feel on the subject, there are many people whose confidence in the product will decrease, unfortunately, simply because the feature was hidden/undocumented publicly.'

Actually, there's a much higher likelihood that people's confidence in the product will decrease because of your irresponsibly calling it a 'backdoor'. You seem to be conversant in security issues, so you must know (a) this is a highly emotionally charged word and about the worst accusation that can be made against a product like this; (b) in today's environment (political, social, blogosphere echo chamber) such an accusation would be picked up and repeated without research beyond the headline; and (c) PGP's name would be dragged through the mud. It's sad that regardless of how hard we've worked to earn the trust of our customers, we would lose some of that trust because ultimately you disagree with how it should be documented."

You're absolutely right; there is an overwhelming tendency to jump to the [insert conspiracy theory here] conclusion when the word "backdoor" is used. For that reason, I have edited my initial post, removing "backdoor" and replacing it with "bypass". Academically speaking, there is still room to argue those are one and the same, but I 100% agree and understand that the media can -- and did -- overreact.

That said, please note that I find it unfortunate if this affects PGP's business in any way. I am glad, however, that PGP now has a much more proactive posture on the issue of the bypass -- when it is a risk and when it is not. Simply put, if the comments that are available today had been available months or years ago when the feature was built, this would be a non-issue. That's my main point: documentation and available documentation (call it "awareness") are two different things.