Wednesday, October 3, 2007

Response to Jon Callas - PGP Encryption Bypass

As I can only assume the real Jon Callas placed this comment (and, Jon, I am grateful for your time and thoughts if it is you), here are my responses....

"You bring up an interesting issue with the automated reboot feature, but you don't have the details right. I can't fault you for that, as we haven't documented on the web site. Full product documentation should be coming in the next release."
I am curious, Jon, if you could be forthcoming with details as to why this feature was not documented. I would hate to pull out the overused "security by obscurity" banner. Was it intentional (and if so, why?) or was it simply oversight?

"The major inaccuracy you have is that the passphrase bypass operates only once. After the system boots, the bypass is reset and has to be enabled again. Note that to enable it, you must have cryptographic access to the volume. You cannot enable it on a bare running disk."
Of course you are correct on that detail. I was aware of the one-time-use parameter, but unintentionally neglected to include it.

But we are both working under the assumption that we are using the PGP-issued boot guard binary to unlock and boot the drive. If, however (and please correct me if I am wrong, Jon), a third party were to reverse engineer the process by which the PGP boot guard works and build their own (say, one that boots from media such as a CD or DVD), this bypass, which is simply another key (protected by a passphrase of hexadecimal value x01) that decrypts the Volume Master Key, could remain on disk. The users (and administrators) have to trust that the PGP binary will leave the function calls (the ones that remove the bypass from disk) intact.

Essentially, this is an example of a trusted client, which (without going too far off subject here) is not that much different from why client-based NAC implementations fail [because if you cannot trust a machine trying to connect to your network, how can you trust the output of software running on that machine as an attempt to interrogate its trustworthiness?]. There is no trusted path to validate that the binary image called both to add the bypass and to boot the device (removing the bypass) is unchanged from its distribution by its maker, PGP Corp. Administrators who use this feature are putting their trust (or perhaps their faith) in the hope that: 1) the binary as identified by file path has not been (and will not be) changed, and 2) there is no interest in the "insecurity research" community in creating a method to maliciously alter those binaries.

Basically (and I apologize, Jon, if this is a simplistic diagram), the image below is a disk that is protected by PGP Whole Disk. The User Access Keys, Boot Guard (software that unlocks the disk), and Volume Master Key may be out of order (probably are-- after I quickly made the diagram I realized the Boot Guard likely comes first), but the ideas are the same regardless. User Access Keys unlock the Volume Master Key, and users unlock their corresponding access key with their passphrase (or a physical token). If a bypass exists, it is added to the User Access Keys. The Boot Guard has a function call for using the bypass (by attempting to decrypt the bypass user01 access key with a passphrase of value x01) and a function call to remove the bypass from the user access keys on disk.
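This layered key structure can be sketched as a toy model. To be clear, everything here is illustrative: the KDF, the XOR "wrapping", and the exact handling of the x01 bypass passphrase are stand-ins for whatever PGP actually implements, not its real format.

```python
import hashlib
import os

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    """Stretch a passphrase into a 32-byte wrapping key (toy KDF)."""
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 10_000)

def wrap(key: bytes, secret: bytes) -> bytes:
    """Toy key wrap: XOR the 32-byte secret with a keyed stream.
    XOR means wrap() and unwrap() are the same operation."""
    stream = hashlib.sha256(key).digest()
    return bytes(a ^ b for a, b in zip(secret, stream))

# The Volume Master Key (VMK) is what actually encrypts the disk sectors.
vmk = os.urandom(32)

# Each user access key is the VMK wrapped under that user's passphrase.
alice_salt = os.urandom(16)
user_access_keys = {
    "alice": (alice_salt, wrap(derive_key(b"correct horse", alice_salt), vmk)),
}

# The bypass is just one more access key whose "passphrase" is the
# constant byte 0x01 -- anyone who can read the disk knows it.
bypass_salt = os.urandom(16)
user_access_keys["bypass"] = (
    bypass_salt,
    wrap(derive_key(b"\x01", bypass_salt), vmk),
)

# Boot Guard can now recover the VMK with no user present:
salt, wrapped = user_access_keys["bypass"]
recovered = wrap(derive_key(b"\x01", salt), wrapped)
assert recovered == vmk
```

The point of the sketch is that the bypass adds no new cryptography: it is an ordinary access key whose passphrase is public, so its security rests entirely on the boot guard erasing it after one use.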


What if the PGP Boot Guard's function for removing the keys were removed (e.g. by a Trojan/malicious boot guard or boot media)? What controls would then ensure that the bypass key is removed? If the PGPwde.exe --add-bypass command checks the integrity of the Boot Guard to ensure the RemoveBypass() (or equivalent) function call is intact, it certainly isn't documented that it does so. And regardless of whether it checks at the instant it creates the bypass, there is still no guarantee from that point forward that the Boot Guard won't be manipulated or that alternative boot media won't leave the bypass intact.
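If --add-bypass did verify the Boot Guard before writing the bypass key, the check might look something like the sketch below. The known-good hash and the image bytes are hypothetical; nothing here reflects PGP's actual implementation. Note that such a check only helps at the instant it runs, which is exactly the time-of-check/time-of-use window described above.

```python
import hashlib

# Hypothetical pinned SHA-256 of the shipped Boot Guard image; a real
# tool would pin the hash of the vendor's signed binary.
KNOWN_GOOD_IMAGE = b"bootguard v1.0 with RemoveBypass()"
KNOWN_GOOD_HASH = hashlib.sha256(KNOWN_GOOD_IMAGE).hexdigest()

def add_bypass_allowed(boot_guard_image: bytes) -> bool:
    """Refuse to write a bypass key unless the on-disk Boot Guard still
    matches the image known to contain RemoveBypass()."""
    return hashlib.sha256(boot_guard_image).hexdigest() == KNOWN_GOOD_HASH

# The pristine image passes; a patched image (RemoveBypass() stubbed
# out after the fact) fails the check.
assert add_bypass_allowed(KNOWN_GOOD_IMAGE)
assert not add_bypass_allowed(KNOWN_GOOD_IMAGE + b" [patched]")
```

Even with this check in place, nothing stops an attacker from patching the Boot Guard (or substituting boot media) after the bypass key has already been written.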

Jon went on to say:
"We are not the only manufacturer to have such a feature -- all the major people do, because our customers require it of us. A number of other disk encryption vendors call this a "wake on lan" feature, which we believe to be misleading. We call it a passphrase bypass because that is what it is. It is a dangerous, but needed feature. If you run a business where you remotely manage computers, you need to remotely reboot them.

"The scenario you describe is more or less the intended one, and you identify the risk inherent in the feature. If someone enables the bypass and the volume is immediately stolen, then the volume is open. However, this window is usually very small. The people who use it understand the risk."
Exactly. This is the theme of which I would like to take hold, more so than the hype of a problem in a widely-adopted product. The question at hand may be: "Is whole disk encryption an example of bolt-on security that doesn't truly solve the problem of confidentiality and integrity of data at distributed locations?"

What also surprises me about the customers that would require PGP WDE to have such a feature is the way they would have to use it. Since this is command line driven, it is obviously designed for use in scripting. I have a hard time fathoming an enterprise organization that would, on one hand, require full disk encryption on its computers and then, on the other hand, distribute a script with a hardcoded passphrase in it, presumably using a software distribution tool like Microsoft's Systems Management Server (SMS) or similar. The risk of this feature of PGP WDE notwithstanding, we are talking about admins using shared/generic/static passphrases for all or many computers, stored in plaintext scripts, set to execute en masse. If the complexity doesn't accidentally disclose the shared administrative passphrase, then fallible humans keeping human-readable scripts in N locations, used every time Microsoft releases a patch, certainly will. An average security-conscious IT shop running Windows products (because PGP WDE is a product for Windows) will have at least 12 opportunities per year for devices to be stolen while they are in this vulnerable "bypass" state. Does the use of this PGP WDE feature (or its equivalent from any full disk encryption vendor, since Jon claims competitors have similar functionality) increase the risk that laptops will be stolen on the eve of the second Tuesday of every month?
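A minimal sketch of the kind of patch-night script in question (the flag names are illustrative only, not PGP's documented syntax); the point is simply that the shared passphrase has to live in the script in plaintext:

```python
# Hypothetical patch-night automation pushed to every client via SMS.
ADMIN_PASSPHRASE = "CorpWDE2007!"  # hardcoded shared secret -- the core risk

def build_bypass_command(passphrase: str) -> list:
    """Build the command each client would run before the forced reboot.
    'pgpwde' and the flags are assumed names, not the documented syntax."""
    return ["pgpwde", "--add-bypass", "--passphrase", passphrase]

cmd = build_bypass_command(ADMIN_PASSPHRASE)

# The same passphrase is embedded in every copy of the script that the
# distribution tool pushes, and appears verbatim in each client's
# process command line while the bypass is being set.
assert ADMIN_PASSPHRASE in " ".join(cmd)
```

Whatever the real syntax, any scripted use of the feature shares this shape: one static credential, readable at rest and in process listings, replicated to N machines every patch cycle.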

"You do not note, however, that the existence of this feature does not affect anyone who does not use it. It is not a back door, in the sense that cryptographers normally use the word.

"You cannot enable the feature without cryptographic access to the volume. If you do not have it enabled, you are not affected, either. I think this is an important thing to remember. Anyone who can enable the feature can mount the volume. It is a feature for manageability, and that's often as important as security, because without manageability, you can't use a security feature."
True. It's not a "backdoor" in the sense of 3 letter agencies' wiretapping via a mathematical-cryptographic hole in the algorithm used for either session key generation or actual data encryption, but how can a PGP WDE customer truly disable this "bypass" feature? As long as the function call to attempt the bypass exists in the boot guard's code, the feature is "enabled," from my point of view. It may go unused, but it may also be maliciously used in the context of a sophisticated attack to steal a device containing higher-value data:
  1. Trojan Horse prompts user for passphrase (remember, PGP WDE synchronizes with Windows passwords for users, so there are plenty of opportunities to make a semi-realistic user authentication dialog).
  2. Trojan Horse adds bypass by unlocking the master volume key with the user's passphrase.
  3. [Optional] Trojan Horse maliciously alters boot guard to disable the RemBypass() feature. [NOTE: If this were to happen, it would be a permanent bypass, not a one-time-use bypass. Will PGP WDE customers have to rely on their users to notice that their installation of Windows boots without the Boot Guard prompting them? Previous experience should tell us that users will either: A) not notice, or B) not complain.]
  4. Laptop is stolen.
OR just this:
  1. Enterprise IT shop sends notification to users regarding upcoming maintenance (OS/Application patch) which will include a mandatory/automated reboot.
  2. Malicious insider powers off computer when BIOS screen appears, keeping disk in "bypass" state.
  3. Malicious insider images the drive or steals the device entirely.

Jon continued:
"You say, 'There has to be a better solution for this problem.' Please let me know when you have it. I can come up with alternate solutions, but they all boil down to an authorized user granting a credential to the boot driver allowing the boot driver to get to the volume key. We believe that our solution, which only allows a single reboot to be a good compromise. It doesn't endanger people who don't use the feature, but it allows people to remotely administer their own systems."
Jon, I would be more content with this feature in your product, if:
  1. The feature was documented clearly, including a security warning covering the risks of its use/presence in such a way that administrators must see it.
  2. The feature could be permanently disabled-- not just ignored or left seemingly unused.
  3. The intended use of the feature did not require the creation of a passphrase with cryptographic access to the Volume Master Key.
  4. The intended use of the feature did not require the distribution of plain text scripts with an embedded passphrase to N clients each and every time that feature is needed.
As for "there must be a better solution", I will follow this post up with potential "better solutions". Some of them may be attainable with your product line, but I'll warn you, Jon, that some of them are not possible today and would require extensive implementation efforts-- potentially rethinking distributed computing entirely. I will likely present a possible solution that will be beyond what your company is positioned to implement.

16 comments:

Jon said...

Thanks for posting my comment, Securology.

Let me respond to your comments.

Unfortunately, my first paragraph got mangled. What I meant to say was that we didn't document it as well as we could have. I noticed the error right after I posted the reply, but the blogging software you use doesn't allow people to edit.

There's an old rule in human-computer interactions that boils down to "the user is always right." If a user can't figure out how to use a feature, it's the developer's fault, not the user's. In this case, if you couldn't find our documentation, then it wasn't documented well enough.

So let me talk first about where we've documented it. The product manager tracked down where it's documented. Here is the list:

We have a paragraph in the documentation that says:


Domain administrator restart bypass. Windows System and Administrator account(s) may now engage a mode to bypass WDE authentication on the next restart by utilizing the privileges of the administration account to act as the authenticated user. This feature enables administrators to perform remote software installations requiring a restart of the target computer. Use of this feature is logged to the PGP Universal server.


This paragraph is in the "What's New" section of Page 3 of the PGP Desktop guide and Page 21 of the PGP Universal Server Admin Guide.

This paragraph also appears in the release notes of both products.

We also have a tech document http://support.pgp.com/?faq=750 that is titled, "HOW-TO: Use the PGP Whole Disk Encryption Authenticated Bypass Feature". Yes, you have to have an account on the support web site.

We've noted ourselves that we don't yet have a "PGP WDE Command Line User's Guide," and ideally it should be there. Also it is not in the usage text that the pgpwde program prints. We'll work on these.

However, while I grant that we didn't document it as well as we could have, I really have to take issue with you that it's not documented. It is documented. It's not something that we are hiding -- heck, it's there because of customer demand. I know you're incredulous, but people want this feature.

But it is documented, and more than just barely. The core issue that you've brought up -- that we did not document it -- is in fact wrong.

Permit me a small rant before I continue. You are wrong on a number of facts, and I accept that we all make mistakes. (Heck, I garbaged the opening paragraph of my reply.) But you're using inflammatory language ("backdoor," "barely documented") and bringing up canards. I count as a canard your comments about how my reply to you has no non-repudiation. Hey, Securology, it's your blog we're using. Your question about whether this is intentional or oversight is essentially a "when did you stop beating your wife" question.

You admitted in your comments:

It's not a "backdoor" in the sense of 3 letter agencies' wiretapping via a mathematical-cryptographic hole in the algorithm used for either session key generation or actual data encryption....


In other words -- by your own admission, you calumniated my company because it makes for a good headline. You made false and defamatory statements about our integrity because you disagree with a feature.

Also, you never contacted PGP; you haven't called or written to ask for clarification or explanation. You're behaving like the "Security Researchers" that you decry, who make a fuss in public before they get their facts right. Let me throw this back at you -- is your inaccuracy merely a mistake, or is it a way to increase your blogging cred by libel? I don't think you're doing anything more than being sloppy. I love reading your blog and I cheer you on, but Gosh, Securology, live up to the standards that you profess! Don't be a "Security Researcher" as you describe it in your own blog!

You're doing exactly what you decry, namely:


1. Find some vulnerability in some widely used product.
2. Create a proof-of-concept and publish it to the world (preferably before sharing it with the vendor).
3. Use FUD (Fear, Uncertainty, and Doubt) to sell the services of a security consultancy startup.
4. Profit!


Except that this isn't actually a vulnerability, it's a feature you don't like, and the proof-of-concept presupposes that the adversary have kernel access to the system and can write the disk.

Enough ranting.

Let's go to the next issue.

You're right. There's no trusted path to the boot process. So? Trusted Computing is still a goal more than a reality and this is somehow PGP's fault? I would love to have a trusted path to the boot credentials and in a year or two, I think we'll get there. Yeah, the marketing sheets on TPMs say it's there now, but we do not yet have boot-level, bug-free TPM drivers for all the major TPMs. We're working on it, and as I said, I hope we'll be there. Call me wary, but I'm the one who gets flames when someone can't boot their laptop. TPMs are really close to being ready for prime-time, and when they are, we'll use them.

You said, "If ... a third party were to reverse engineer the process the PGP boot guard works to build their own (say to boot from media such as a CD or DVD)...." I should remind you that we publish our source code. Go download it. Reverse-engineering isn't hard when you have the source.

Permit me to describe what the process is. We take one constant byte and a bunch of random ones, and create a cryptographic credential. We write that on the disk. This is something that is unique (because of the salt) and used once. You must have access to the volume key (knowing the passphrase is an example of access) to produce the credential. After a reboot, Bootguard removes the credential from the disk.
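Jon's description can be modeled roughly as follows. This is again a toy (the real credential format, KDF, and wrapping are PGP's and are not shown here); the dictionary `pop()` stands in for Bootguard erasing the credential from disk after a single use.

```python
import hashlib
import os

def make_bypass_credential(volume_key: bytes):
    """One constant byte plus a random salt form the one-time
    'passphrase'; producing the wrapped credential requires the
    volume key itself, as Jon describes."""
    salt = os.urandom(16)  # random salt makes each credential unique
    otk = hashlib.pbkdf2_hmac("sha256", b"\x01", salt, 10_000)
    stream = hashlib.sha256(otk).digest()
    wrapped = bytes(a ^ b for a, b in zip(volume_key, stream))
    return salt, wrapped   # sits on disk until the next boot

def redeem_and_erase(store: dict) -> bytes:
    """Bootguard side: unwrap the volume key once, then delete the
    credential. pop() models removal from disk; a second redemption
    raises KeyError because nothing is left to redeem."""
    salt, wrapped = store.pop("bypass")
    otk = hashlib.pbkdf2_hmac("sha256", b"\x01", salt, 10_000)
    stream = hashlib.sha256(otk).digest()
    return bytes(a ^ b for a, b in zip(wrapped, stream))

vmk = os.urandom(32)
store = {"bypass": make_bypass_credential(vmk)}
assert redeem_and_erase(store) == vmk   # works exactly once
assert "bypass" not in store            # credential gone after use
```

The one-time property holds only if the erasure step actually runs, which is the crux of the blogger's objection: a boot guard with that step patched out turns the one-shot credential into a standing one.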

Your pictures and descriptions are accurate enough. Let me summarize your comments and proof-of-concept scenarios thus:

You can shoot yourself in the foot with this.

Brilliant deduction, Holmes! How do you do it?

However:

1. Only you can shoot yourself in the foot.
2. No one else can shoot you in the foot.
3. You can't shoot anyone else in the foot.

Why is this a problem? There is no question of disabling the feature if you can't enable it without credentials. Bluntly, if you know a passphrase to the disk, you can bypass (once) typing in the passphrase to the disk.

Your scenarios about how someone could do something are certainly possible, but come on! You have assumed an attacker who can run kernel-level code on the system, and then conclude that someone can do mischief.

You've also decided that the mischief they want to do is to reboot the system without Bootguard. If someone can steal your passphrase from you and run kernel-level code, there are far more interesting things that they can do than reboot your system. There really are evil hackers, and you're suggesting that if they use a trojan horse to get my passphrase, they will then reboot my system.

Let's conclude by going to your suggestions.

(1) Despite the fact that we did document it in numerous places, you didn't find it. That means that it wasn't documented well enough. We're working on that right now.

(2) You're not the first person to request a permanent disablement of the feature. As you have noted yourself, this is harder than you'd think. Nonetheless, this might arrive in PGP 9.7. We do recommend to people that they can use Windows file ACLs to disable this for non-administrative users today. It isn't cryptographic, but it works.

(3) A feature to allow a member of an "admin group" (as defined by Active Directory) is now in our "managed beta" of 9.7, and should be in public beta in early November. This also addresses the request you have in (4), as well. Additionally, central audits of all uses of the feature are in beta test, and the reboot bypass can't be enabled unless you have communications back to the PGP Universal Server.

To conclude then, of the four improvements you asked for, there are 3 1/2 in beta test today, and a workaround for that 1/2. If you had contacted us, we could have told you that. Of course, you wouldn't have been Slashdotted. Your headline would have to have said, "PGP Has Enterprise Management Feature Documented in Their 'What's New,' Admin Guide, and Release Notes" and headlines like that are hell on your hit count as they look like press releases.

If you have further questions, let me know. If you doubt whether this comes from the real Jon Callas, send an email to security@pgp.com. I'll reply personally.

Anonymous said...

After reading your response, a question came to me. If the computer is stolen, could a hacker reverse the part of the boot guard code that looks for the presence of a bypass key and flip the jump path, i.e., treat "no bypass key" as "run unencrypted one time"? Then the computer is up on the hacker's own network with a Samba Primary Domain Controller with a known domain admin password. The data on that computer is then wide open without attacking the encryption itself. I would imagine the Boot Guard code could be reversed with an SDK or a trial version of the software. We use PGP on our laptops and on a reception computer in our lobby. I wish an installation switch would allow this feature to be permanently disabled or not included.
Jim

securology said...

Jon, thanks again for commenting. I meant what I said about agreeing with your Director of Product Management - John Dasher.

For the record, every word in this blog has thus far been typed on a device protected with the PGP WDE product, complete with the feature I do not like and wish could be permanently disabled. Also for the record, like I said here, I worked with your company first, months ago. As you mentioned, I'm completely against full disclosure (and even "responsible disclosure" by some people's definitions), but this is not a vulnerability (like you mentioned)-- this is an intentional feature. And I am discussing approaches to solutions, not implementations of solutions.

I only wrote on this subject because you (and others) find nothing wrong with it, but I disagreed (and still do).

I really, genuinely appreciate your down-to-earth response and attitude in general. I appreciate your opinions (and the opinions of others as I have only deleted one comment during moderation thus far-- because it was blatant hatred, not just differing opinions).

As much as I hate to admit it (because it does give the full/responsible disclosers some merit), documenting and publishing my views on this issue did result in my original objective: to inform customers/users of the product that there are risks that are "not well documented" (we don't seem to disagree on those terms).

As for the use of the word "backdoor", I have taken the liberty of editing some posts, replacing it with "bypass" in a lot of places. I did that for your company's benefit, because I bear no ill will toward you; in fact, I hope that this feature is fixed to my satisfaction so that I can continue being your customer and advising others to be. Please note that I still maintain it is an acceptable use of the word, at least academically speaking. Politically speaking, on the other hand, can be a different story.

Thanks for posting a link to the customer-only-accessible document (which now appears to be open to anyone, no longer requiring a current/valid customer account). That's where I got my information originally-- there and with your support analysts.

You said:
"(1) Despite the fact that we did document it in numerous places, you didn't find it. That means that it wasn't documented well enough. We're working on that right now."

I could and I did find it, but long after the sales & marketing were long gone. Long after the eval and long after the dedicated support analysts had moved on to new clients. How did I find out? By talking with a colleague and hearing the impossible: he was able to remotely reboot PGP WDE encrypted drives. I had to find out how that worked (thinking originally there were passphrases stored insecurely on the unencrypted pre-boot sections of disk). So, I looked through my documentation, which didn't mention the feature. Then I looked through the public online docs also to no avail. So, I opened up a support case and the support analysts didn't have a clue about the feature or how it worked. After a bunch of deliberation and a pat answer that suggested passphrases were stored in plaintext in the pre-boot sections of disk, I finally got them to give me details. After that, the customer accessible KB article went up, but PGP support said there was no need to include the documentation in the admin guides, the "--help" switch, or anything proactive.

You said:
"(2) You're not the first person to request a permanent disablement of the feature."

I might actually be--I hope not--but I might actually be the first.

You said:
"As you have noted yourself, this is harder than you'd think. Nonetheless, this might arrive in PGP 9.7."

When I asked for this, I asked to participate in betas. What's hard about compiling pgpwde.exe once with the feature and once without? Then because, as we agree, there's no Trusted Path, how will the admin 'know' that the pgpwde.exe without the feature is truly the one accessible on disk?

You said:
"Additionally, central audits of all uses of the feature are in beta test, and the reboot bypass can't be enabled unless you have communications back to the PGP Universal Server."

I think those are decent compensating controls. In fact, the reason why we chose PGP WDE over competitors (ahem, Pointsec) was that the competitors did not care about the system once it was encrypted. It was "encrypt and forget" ... nevermind the question of who could generate recovery tokens, manipulate keys, etc. PGP's use of central auditing of those features (with option to syslog to a centralized security event/log manager) was what won me over to begin with. It was the "layers" that are so missing from this bypass feature.

I apologize if you think this is posted just to generate web traffic. I hope you do read beyond this topic. Maybe then you will see that I am just looking for the solutions that embrace the highest possible assurance.

Anonymous said...

Interesting thread. Jon says "...Find some vulnerability in some widely used product" and thereby seems to admit that this product is "vulnerable" in some way.

I was considering purchasing this product, but would not now on the basis of this blog.

Looks like you really got Jon's hackles up. Good stuff... we need scrutiny in abundance. How crazy to produce encryption that can be bypassed in this way! It's a no-brainer.

BD said...

I'm reminded of the old INFOSEC practitioner's joke: "If you can do your job, I'm not doing mine." Too many crypto-geeks and old time "infosec" professionals are focused entirely on the "absolute" Confidentiality and/or Integrity of the crypto solution. They make the "perfect" the enemy of the "good." Even the Securology blogger admits he uses the solution he is criticizing. So let's get a grip on practicalities, shall we?

As an Information Assurance professional (as opposed to an INFOSEC professional) I fully understand why most large customers insist on the bypass feature. If a laptop is not "available" to be patched by an enterprise patch solution in a timely manner, greater risks accrue on the day after "patch Tuesday" (when the "zero day" exploit has propagated) than on the day before (when the minuscule window for opportunistic data theft briefly exists.)

In Information Assurance, Confidentiality and Integrity must strike a balance with Availability, as per the CIA triad. When you expand the triad to encompass the Parkerian hexad (including utility as a feature of data) the choice to have a bypass gets even easier.

Is the disk encryption with bypass solution 100% secure? Obviously not. Congratulations on grasping the obvious. Would preventing automatic patching on patch Tuesday be a better alternative? NO, absolutely not. The reality is that relying on any one security control in the absence of any others is a greater threat to the Confidentiality and Integrity of the data than any bypass feature could ever be. The Securology blogger himself admits that what he craves is a "layered" security solution.

So, the question isn't "why is there a bypass feature?" Clearly it exists because some customers demand it for good reasons. The real questions are: first, "what risks accrue to the use of the bypass feature in a given system?" ; and, second, "what other controls exist to mitigate the risks associated with using the bypass feature." Both of these questions require a context driven answer on the part of the security manager for the individual customer. The point is that you cannot discuss security controls in isolation from other security controls without compromising the overall security of the system, including availability.

BD Jones, CISSP

securology said...

BD Jones,

Thanks for posting.

First, you should note that the bypass "feature" was not documented when I posted this. That was the whole point: PGP introduced a dangerous feature without adequately disclosing how to use it safely and the risks that (you choose) may or may not be present as a result.

I think you're making too big a deal out of marketing "Information Assurance" over "Information Security". They're the same thing; one tries to market a positive spin and write off security as impossible, while the other ignores the marketing and continues the arms race.

Anyone dogmatically fighting for "Information Assurance" over "Information Security" is probably trying to justify their own existence.

The biggest difference I see between IA and IS (if there is a difference) is that IA people try to deal with the cards they were handed, a glass half-full approach; IS people try to see what rules can be bent or broken, what can be thrown away, and how many assumptions we're stuck working in that hold us back. And yes, they tend to be glass half-empty people, because security really isn't possible most of the time.

Also, Parker's Hexad is a total waste of time. Remind me to write a small treatise on why his good intentions lead people astray. Even the CIA Triad is flaky: if you show me a failure in one of the letters (C, I, A), I'll show you a simultaneous failure in the others. Again, a topic for another post.

Again, thanks for your comments; they provide diversity, but I disagree.

BD said...

First let me say that I agree with you on the weak documentation, which is why I did not address the issue.
Second, let me say that the lack or insufficiency of the documentation was admitted by PGP, so it was really a moot point from the get go.

The real issue for me is the approach to security revealed by your criticisms, which I think are valid, but not sufficient as criteria for evaluating the worth of the product. Clearly, any decision to use or not use a product with a "feature" that can temporarily disable it is completely dependent on the overall security context, including the formal risk assessment and BIA. That said, I think your reply strengthens, rather than weakens, my point that the IA approach is a healthier one for overall risk management. If you doubt my word, allow me to refer you to the extensive documentation available at the NIST website, especially 800-37 and 800-53:
http://csrc.nist.gov/publications/PubsSPs.html

As for Parker's Hexad, it is not getting much traction; I just threw it in as icing on the cake. I believe it has value, but that is another argument. My real argument here involves the CIA triad, which I would love to hear you reduce to one monolithic principle. Please forward that comment to me when you post it so I can debunk it before you do any real damage.

That said, I have to also say, on a more personal note, that you have a real penchant for committing the logical fallacy of "attacking the person." I point out the fact (and it is a demonstrable fact given the NIST 800 series and the entire "defense in depth," FISMA approach espoused by NIST, the federal govt., and 99% of the security community at this point) that there is a qualitative difference between INFOSEC and IA, and you immediately accuse me of being dogmatic and trying to "justify my own existence."

Well, all I can say is, that was spoken like a true Infosec person, perhaps someone who, to use your own words, "is probably trying to justify their own existence." The fact that you can aggressively criticize the value of a product you use yourself, without any consideration for the environment in which it is customarily deployed, simply because you disagree with a feature, illustrates the difference between infosec and IA quite well.

securology said...

BD Jones,

I used and endorsed the product at the time of the writing. I no longer use the product. The places where I have endorsed it are working towards no longer using it. The reasons include this issue, as well as company relations/attitudes, and competitive products.

I said, anyone "dogmatically" fighting for IA over IS is justifying their own existence. You may or may not fit that bill (but it's starting to look that way ;).

Ah, the old "IA" trick of referring to stuffy non-scientific literature. I won't bite on that.

For the record, I am formally trained in IA; hence my disdain for the "art" and not the "science". I'll leave IA to you for something more scientific.

A good computer scientist would point out to you that PGP and its competitors are pretty much all invalid when things like Princeton's Memory attacks are considered.

Security really isn't possible. Assurance is just pretending it is.

BD said...

There you go attacking the person again.

Tell me, if IA is all just unscientific rot, why does the National Institute of Standards and Technology support it?
Also, if it is not scientific, why are top universities offering Master of Science degrees in IA as opposed to Master of Science degrees in InfoSec?

Also, why does the NSA (a pretty serious bunch of guys when it comes to "security") rate the top "IA" programs in the country, but not the top Infosec programs?
http://www.nsa.gov/ia/academia/caeCriteriaList.cfm?MenuID=10.1.1.2

You say you are formally trained in IA, so what is your degree? I assume that since you are a "scientist" you have at least a Masters in IA or Infosec?

BTW, I'm still waiting for that blistering scientific reduction of the CIA triad.

BD said...

BTW, I forgot to comment on the last line in your last post:

"Security really isn't possible. Assurance is just pretending it is."

To use a "scientific" analogy, that's rather like saying absolute zero (0 Kelvin) really isn't possible, so there is no such thing as "cold," and ice is just pretending that there are "cold" things.

I think this speaks to the "all or nothing" approach reflected in your original post. I agree, absolute security isn't possible. But security doesn't have to be absolute to have value. Security is relative...relative to the threat environment, the vulnerabilities of a system, and to the value of an asset. Thus disk encryption can have a bypass feature and still be "relatively" more secure than the disk is without it.

For those of us who are actually responsible for securing systems, your attitude seems extreme and impractical. Though it may be true in an "absolute," "scientific," or "theoretical" sense, it has very little to do with the day-to-day operations of information systems, and that is why we do IA-based FISMA compliance in the real world, and not "infosec."

securology said...

BD Jones,

Again, no attack (unless you take it that way) ... but I do seem to be pushing the right buttons. ;)

I am intimately familiar with at least one such Master of Science program in IA. My conclusion is that it is a misnomer, and should be a Master of Arts degree, not science. This particular program is an iconic example upon which many others are based, so I conclude (perhaps fallaciously, but I'll live with that) that they are all a dark art, not a science.

Tell me, are there any formal mathematical models that prove IA theoretical frameworks? If not, then it is not science. I am not aware of any, and I am very well read on published literature.

NIST can make IA standards, but that does not make them scientific documents. If there is no hypothesis, evidence, logical conclusion, AND peer review (repeatability) then it is not science. NIST can tell you things that are a good idea to follow, but they cannot guarantee results (as in, there is no repeatability). Just glorified generalizations. [I am familiar with people who did scholarly work for NIST; they'll confirm what I say.]

At best, IA can appear similar to social science, where correlation might be obtained if really large sample sizes are used and biases minimized. But correlation is not causation, and there are very few studies with significantly large sample sizes (partly because nobody wants to disclose data on their security failures).

With your absolute zero in Kelvin analogy, there is one important difference: we can measure when we are closer to absolute zero than we were previously. We can even formally (mathematically) represent it; however, in information security, we have no such luck. We have no formal mathematical models. Our "measurements" are really only guesses based on archaic ("archaic" because new attacks happen regularly) actuarial data (and rarely is it even that disciplined). We can check all of the regulatory compliance [du jour] check boxes, but we CANNOT predict when we are closer to absolute security. It just takes one mistake (one weak link as Schneier puts it), to lose it all.

I hate to rain on your Happy Halloween, but based on current research, there is no way to predict that a given system will behave at run-time without leaking (hemorrhaging) information. At design time, it may be possible, but even the simplest systems are too complex (because it must include all layers from hardware to trusted third party software to end product/application software). Even the most expensive check-box developed systems have and will continue to have security flaws that annihilate the protections of systems. All that a check-box methodology really is ... deep-down ... is a list of "bad" things we wish to avoid in future systems (or variations of existing systems). We don't know it's bad until some human looks at it and says "look what I can do with that!", and then some check-box methodology adds it to their list of things to look for.

IA tries to make adequate checklists. Computer science research tries to formally prove what can and cannot be known and at what points of time. Anyone can prove a security failure happened after it actually happened, but proving a security failure won't happen ... well, I don't know of anyone who can do that.

I'll leave you with a quote from Lord Kelvin (since you brought up the Kelvin scale and absolute zero): "If you cannot measure it, you do not yet know anything about it".

BD said...

Now that you are making logical arguments instead of assuming what you want to prove and dismissing those that disagree as "unscientific" people trying to "justify their existence," I accept that this is not an example of "attacking the person." Now to cases. (BTW, this is a very interesting debate now.)

OK, now we get down to the real issue. You don't really believe Infosec is scientific either, not if you set the bar as represented in your last post. I can accept that thesis a lot more readily than the claim that infosec is scientific and IA is not. It's rather like your point about security not existing in an absolute sense: even if it's true, it's trivial. Telling the freezing person in the time of Aristotle that cold really doesn't exist because it cannot be measured is a good example of a "reductio ad absurdum," as I'm certain Lord Kelvin would tell you if he forgot to wear his long knickers on a really cold day before he invented his Kelvin scale.

And don't say he had a Fahrenheit thermometer, since Fahrenheit is an arbitrary scale and does not anchor itself in "absoluteness" nor in formal mathematics. It simply marks the spot where the mercury sits in a tube stuck in water as it freezes as "32 degrees" (it could just as well have been ten degrees) and marks the tube where water boils as 212 degrees, because the space in between was divided into 180 parts, which made each degree an arbitrary 1/180th of the distance between those two events. (The same arbitrariness applies to Centigrade, despite the indexing of 0 at the freezing point of water.) Interestingly, they didn't really measure temperature but only the effect temperature had on mercury sealed in a glass tube. But Kelvin was the scientific one, wasn't he? Hmm...let's unpack that notion, shall we?

History of Science 101:
Now modern science is based on the work of Francis Bacon (not Lord Kelvin) in the Instauratio Magna (or Great Renewal), and especially in the section called Novum Organum, where he rejects "authority" as a source of knowledge and proposes a new logic (induction) to replace the old logic (deduction). Because he embraced induction as his tool for scientific reasoning, he based everything scientific on "observation" and said that we could only know that which we personally observed on more than one occasion; thus "repeatability" entered the scientific method. Mathematical models were not an issue at that time; in fact, mathematics as we know it did not yet exist (but science definitely did), so the early scientists had to make do without such things as "statistics." That is why you find that the "soft" sciences mostly predate the "hard" sciences, like Newtonian physics. But to say they are not science is to assassinate the father to crown the son. (Long live the arrogant usurper!) The point is not whether we can measure everything as in a "hard science." By that criterion, only a few sciences would qualify as science.

In fact, one might argue that mathematics isn't science so much as it is a language that expresses science (and thus an "art"). Once the need for that language was sufficiently realized, the early scientists/philosophers, like Newton and Descartes, invented things like calculus and analytic geometry to help them express (i.e., measure) what they were observing. Nevertheless, science was originally, and still remains, the "art" of observation. In fact, without observation regarding the proportions between numbers and their relation to physical things (the seeds of mathematical theory going back to the ancient Greeks), Newton and Leibniz might never have invented calculus.

So, to bring the history lesson back to the point you made about IA being at best a "soft" science because we have not yet learned to model it mathematically (though there are some fledgling efforts in the statistical IDS area), I would say that simply means that the science of IA is immature and needs some sharp minds making better observations and coming up with better hypotheses on how to model risk. I definitely agree that the current models are weak, even pathetic, attempts to mathematically predict outcomes. But that doesn't mean that IA is not a science. It means that, so far, there are no IA scientists with the chops to do the math. Remember, all sciences started as "arts" developed by philosophers, and early scientists are typically ridiculed for their efforts.

As for this quote:
"Anyone can prove a security failure happened after it actually happened, but proving a security failure won't happen ... well, I don't know of anyone who can do that."

Proving a negative is a logical problem not reserved for soft sciences or arts. For example, I invite you to prove that there is such a thing as "absolute zero" (i.e., the absence of all heat). It is, at best, a hypothesis; no one has ever observed it, including Kelvin (measuring it does not count, since that is the observation of an effect of absolute zero, not an observation of the thing itself). To observe it one would have to live it, feel it. Such an observation does not lend itself to repeatability, much less to peer review. Another example: we cannot prove that the sun will not go supernova before you read this. Nevertheless, many "scientists" would claim that they have good reasons (based on induction) to believe that it will not, based solely on observations of previous supernovas and our sun, with or without math. Are we any less scientific because we believe the sun will "rise" tomorrow based on inductive reasoning? I think not, and inductive reasoning is the basis of most knowledge claims in the science of Information Assurance.

BD said...

This is my second attempt to post this response. I take it that you would rather stop with the glib quote of Lord Kelvin and avoid my deconstruction of scientific hubris. That's fine. The use of the moderator function to avoid counterarguments does, however, reduce your blog to a self serving parade of your own (somehow always triumphant) opinions, since no one else is ever allowed the last word.

Note: since you will not allow me the last word or even unmoderated access to respond in an open forum, I will let this be my last response.

securology said...

BD,

Apparently, I don't have the time you do to respond. At some point, perhaps I'll take you on. You can claim victory for now, but you should know you never have the right to immediate comment posting on somebody else's blog. Get your own if you want instantaneous publication! ;)

Per Selstrom said...

Can I just say I think it's reassuring to read Callas's reply, and I think he deserves credit for going through the issue in such detail.

Per Selstrom, PGP User

Anonymous said...

"Domain administrator restart bypass. Windows System and Administrator account(s) may now engage a mode to bypass WDE authentication on the next restart by utilizing the privileges of the administration account to act as the authenticated user."

When enabling the bypass option, does the system check after reboot whether the correct key was used to disable the authentication, or does it check which partition/disk was used to enable the option?

If not, would it be possible to hack a system protected with PGP WDE by using a LiveCD with WinPE, PGP WDE, and an alternate passphrase configured on the LiveCD?
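The one-time bypass described in the quoted release note (and in the post above) can be sketched abstractly. The following is a hypothetical toy model only: it assumes a volume master key (VMK) wrapped under the user's passphrase, plus an optional bypass record wrapped under the well-known constant passphrase 0x01 mentioned in the post. All names, and the toy XOR "key wrap," are illustrative assumptions, not PGP's actual implementation.

```python
# Hypothetical sketch of a one-time "reboot bypass" for whole-disk
# encryption, loosely modeled on the behavior described above.
# The XOR-based wrap is a toy stand-in; a real product would use an
# authenticated KDF and cipher.

import hashlib
import os


def wrap_key(vmk: bytes, passphrase: bytes) -> bytes:
    # Toy key wrap: XOR the 32-byte VMK with a hash of the passphrase.
    # Because XOR is its own inverse, the same function also unwraps.
    mask = hashlib.sha256(passphrase).digest()
    return bytes(a ^ b for a, b in zip(vmk, mask))


BYPASS_PASSPHRASE = b"\x01"  # the well-known constant described in the post


class BootRecords:
    """Key records stored in the (unencrypted) boot area of the disk."""

    def __init__(self, vmk: bytes, user_passphrase: bytes):
        self.user_record = wrap_key(vmk, user_passphrase)
        self.bypass_record = None  # absent until bypass is enabled

    def enable_bypass(self, vmk: bytes) -> None:
        # The caller must already hold the VMK, i.e. must have
        # cryptographic access to the volume -- matching Callas's point
        # that bypass cannot be enabled on a bare running disk.
        self.bypass_record = wrap_key(vmk, BYPASS_PASSPHRASE)


def boot(records: BootRecords) -> bytes:
    # The trusted boot guard: if a bypass record exists, use it ONCE and
    # erase it. A reverse-engineered boot loader could simply skip the
    # erase step -- which is the trusted-client problem discussed above.
    if records.bypass_record is not None:
        vmk = wrap_key(records.bypass_record, BYPASS_PASSPHRASE)
        records.bypass_record = None  # one-time use: reset after boot
        return vmk
    passphrase = input("Passphrase: ").encode()
    return wrap_key(records.user_record, passphrase)
```

In this sketch, the only thing that makes the bypass "one time" is the boot guard's own code path that deletes the record, which is exactly why booting the disk with an alternate, attacker-supplied boot environment (as the LiveCD question above imagines) is the interesting threat to analyze.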