Wednesday, December 2, 2009

The Reality of Evil Maids

There have been many attacks on whole disk encryption recently:
  1. Cold Boot attacks, in which encryption keys linger in RAM far longer than many assumed (even for a while after power is removed), demonstrating that information flow may be more important to watch than many acknowledge. (A toy illustration of hunting for key material in a memory image follows this list.)
  2. Stoned Boot attacks, in which a rootkit is loaded into memory as part of the boot process, tampering with system-level things-- like, say, whole disk encryption keys.
  3. Evil Maid attacks, in which Joanna Rutkowska of Invisible Things Lab suggests tinkering with the plaintext boot loader. Why is it plaintext if the drive is encrypted? Because the CPU has to be able to execute it before any keys are available, duh. So, it's right there for tampering. Funny thing: I suggested tampering with the boot loader as a way to extract keys back in October of 2007 when debating Jon Callas of PGP over their undocumented encryption bypass feature, so I guess that means I am the original author of the Evil Maid attack concept, huh?
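
To make the cold boot point a little more concrete, here is a toy sketch (in Python) of the intuition behind hunting for key material in a captured memory image: slide a window across the dump and flag regions whose bytes look statistically random, since symmetric keys stand out against mostly structured or zeroed memory. Real cold boot tooling searches for the structure of AES key schedules instead, which is far more precise; this sketch, with its made-up filename and threshold, is only meant to show how little magic is involved.

    # Toy sketch: flag high-entropy 32-byte windows in a memory image as possible key material.
    # Real cold-boot tools look for AES key-schedule structure; this only illustrates the idea.
    import math
    from collections import Counter

    def shannon_entropy(window: bytes) -> float:
        counts = Counter(window)
        n = len(window)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def candidate_keys(image: bytes, size: int = 32, threshold: float = 4.5):
        # 4.5 bits/byte is close to the 5-bit maximum for a 32-byte window;
        # text, code, and zeroed pages score far lower.
        for offset in range(0, len(image) - size, 16):
            window = image[offset:offset + size]
            if shannon_entropy(window) >= threshold:
                yield offset, window

    with open("memory.dump", "rb") as f:    # placeholder name for a captured RAM image
        for offset, window in candidate_keys(f.read()):
            print(f"0x{offset:08x}  {window.hex()}")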

About all of these attacks, Schneier recently said:
This attack exploits the same basic vulnerability as the "Cold Boot" attack from last year, and the "Stoned Boot" attack from earlier this year, and there's no real defense to this sort of thing. As soon as you give up physical control of your computer, all bets are off.
"As soon as you give up physical control of your computer, all bets are off"??? Isn't that the point of these encryption vendors (Schneier is on the technical advisory board of PGP Corp-- he maybe doesn't add that disclaimer plainly enough). Sure enough, that's the opposite of what PGP Corp claims: "Data remains secure on lost devices." Somebody better correct those sales & marketing people to update their powerpoint slides and website promotions.

To put this plainly: if you still believe that whole disk encryption software is going to keep a skilled or determined adversary out of your data, you are sadly misled. We're no longer talking about 3 letter government agencies with large sums of tax dollars to throw at the problem-- we're talking about script kiddies being able to pull this off. (We may not quite be at the point where script kiddies can do it, but we're getting very close.)

Whole Disk Encryption software will only stop a thief who is interested in the hardware from accessing your data-- and that thief may not even be smart enough to know how to take a hard drive out of a laptop and plug it into another computer in the first place. You had better hope that thief doesn't sell it on eBay to somebody who is more interested in the data than the hardware.

Whole Disk Encryption fails to deliver what it claims. If you want safe data, you need to keep as little of it as possible on any mobile devices that are easily lost or stolen. Don't rely on magic crypto fairy dust and don't trust anyone who spouts the odds or computation time required to compute a decryption key. It's not about the math; it's about the keys on the endpoints.

Trusted Platform Modules (TPMs) (like what Vista's BitLocker can be configured to use) hold out some hope, assuming that somebody cannot find a way to extract the keys out of them by spoofing a trusted bootloader. After all, a TPM is basically just a black box: you give it an input (a binary image of a trusted bootloader, for example) and it gives you an output (an encryption key). But since TPMs are accessible over a system bus that is shared among all components, it seems plausible that a malicious device or even a malicious device driver could either make a copy of the key as it travels back across that bus, or simply feed the TPM the "correct" input-- not by booting the trusted bootloader, but by booting an alternative one and then replaying the expected measurements-- to retrieve the output it wants.
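
To make the "black box" behavior concrete, here is a toy Python sketch of the measured-boot idea-- not any vendor's actual TPM API, just the concept: the sealed disk key is released only when the hash chain of what was measured at boot matches the state the key was sealed against. Everything the paragraph above worries about lives outside that box: the bus the released key travels across, and whoever gets to supply the measurements.

    # Toy model of TPM seal/unseal against a single PCR. Illustrative only; not a real TPM interface.
    import hashlib, os

    class ToyTPM:
        def __init__(self):
            self.pcr = b"\x00" * 32      # simulated Platform Configuration Register
            self._sealed = {}            # PCR state -> sealed key

        def extend(self, measurement: bytes):
            # PCR extend: pcr = SHA-256(pcr || SHA-256(measurement))
            self.pcr = hashlib.sha256(self.pcr + hashlib.sha256(measurement).digest()).digest()

        def seal(self, key: bytes):
            self._sealed[self.pcr] = key       # bind the key to the current boot measurements

        def unseal(self):
            return self._sealed.get(self.pcr)  # None unless the boot chain measured the same

    tpm = ToyTPM()
    disk_key = os.urandom(32)
    tpm.extend(b"trusted bootloader image")    # provisioning: measure the good bootloader
    tpm.seal(disk_key)                         # ...and seal the disk key to that state

    def boot(bootloader_image: bytes):
        tpm.pcr = b"\x00" * 32                 # power-on reset of the simulated PCR
        tpm.extend(bootloader_image)
        return tpm.unseal()

    assert boot(b"trusted bootloader image") == disk_key  # honest boot: key released (then crosses the bus)
    assert boot(b"evil maid bootloader") is None          # tampered boot: different PCR, no key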

Wednesday, November 4, 2009

Selecting a Pistol Safe

NOTE: In the name of "all things security", because this blog is intended to be about physical security, too, not just information security, I wanted to take a bit of a departure from what is normally written here. You don't have to like firearms to appreciate the logical progression in selecting a good safe for them. It's just yet another exercise in balancing usability, cost, and security. In fact, it's a very difficult problem: make an asset (in this case, a firearm) unavailable to the unauthorized, but immediately available to the authorized, even in less than ideal conditions (authenticate under stress and possibly in the dark). It's a difficult problem in the computer security world, too. So, consider this article in that light-- as an exercise in security analysis. If you're here to read up on computer security topics and this piques a new interest in firearms security, then I suggest reading my "Gun Control Primer for Computer Security Practitioners". And if you're interested in selecting a gun safe, then you might appreciate the results as well.
...

So, I needed a way to "securely" (that's always a nebulous word) store a firearm-- namely a pistol-- such that it could meet the following criteria:

1. Keep children's and other family members' hands off of the firearm
2. Stored in, on, or near a nightstand
3. Easily opened by authorized people under stress
4. Easily opened by authorized people in the dark
5. Not susceptible to power failures
6. Not susceptible to being "dropped open"
7. Not susceptible to being pried open
8. Not opened by "something you have" (authentication with a key), because the spouse is notorious for leaving keys lying around everywhere.
9. For sale at a reasonable cost
10. An adversary should not know (hear) when the safe was opened by an authorized person

But I didn't care a lot about the ability to keep a dedicated thief from stealing the entire safe with or without the firearm inside. "Dedicated thief" means access to an acetylene torch, among other tools. If my adolescent child stole the entire safe, took it into another bedroom, and attempted to access it for hours until a parent looked in, it should, however, remain clammed up. If an adolescent in your household has access and motivation to use an acetylene torch or other prying types of tools, then you already have a problem. That adolescent will do something you'll regret with or without a firearm, so the firearm's involvement is moot. For all you know, that adolescent would use one of the tools as a weapon. You can attempt to adolescent-proof the gun or gun-proof the adolescent. Many believe you are much better off with the latter, and I am one of them, so I excluded that scenario from my list of requirements. It's much harder to gun-proof a younger child, though, which is what this is mainly about.

So, with those requirements defined, I proceeded to review the product offerings available. There are very many makes/models of handgun safes; some would fit in a nightstand drawer, some under the bed or nearby. After ruling out the key-based safes (requirement #8), I found that most of the remaining options were electronic safes. That meant I had to be very careful about power failures (requirement #5). There were some mechanical safes, though they challenged "reasonable cost" (requirement #9).

Gunvault GV1000
One of the most popular models I could find was the Gunvault GV1000. It was reasonably priced (requirement #9) at around $100-120, depending on a couple of feature variations. The finger-slot (hand-shaped) keypad certainly could be operated under stress and in the dark (requirements #3, #4, and #8). In fact, it seemed to meet every requirement mentioned in every review I could find on the product-- every requirement but one: not susceptible to power failures (requirement #5). I read several reviews from different sources indicating that anyone who regularly uses the safe (read: law enforcement officers or civilians with concealed carry permits who carry on a regular or daily basis) found the batteries dead somewhere between a couple of months and a year's worth of usage. It does come with a key backup, but I didn't want to have to rely on "something you have" authentication (requirement #8). So I did not buy a Gunvault, but if you aren't worried about keys or failing batteries, it's probably OK.

Key management, just like in computer systems, is very important. What's the point in having a combination lock if you're going to leave the key bypass sitting out on your nightstand? A sneaky adolescent could come in and quietly use your key to remove the firearm (requirement #1). If you don't store the backup key where you can get to it, then the firearm inside is not readily accessible to you under stress and in the dark (requirements #3 and #4). If you were okay with that, maybe you could store the backup key in a safe deposit box at the bank or someplace else hidden off-site, but that defeats the point of the safe: both protecting a firearm from unauthorized access and making it readily available to those who need it.

Moving along, I came to the Homak line of pistol safes. There were several makes and models. They were definitely cheaper (requirement #9), and unlike the Gunvault, they had no backup keys (requirement #8). The problem is that the lack of a backup key came at the expense of not being able to open the safe at all in the event of a power failure. If the batteries died, it was toast, according to some reviews. And if the batteries died but you managed to get it back open, the combination reset to the factory default. Not good. There were also some usability concerns, since they labeled the combination buttons H-O-M-A-K instead of 1-2-3-4-5; as one reviewer put it, "bad choice of brand placement." They did, however, appear to pass the other requirements, but I passed on the Homak safes because I wanted to find one that would satisfy ALL of those requirements.

Next, I looked at the Stack-On pistol safes. The question of keyspace came to mind when I noticed only a 4-button combination pad, but Stack-On has some "throttling" controls that lock the keypad out after 3 invalid attempts are keyed in, so that was mitigated. Like the Homak, the Stack-On ran afoul of the key and power requirements: it relies on a backup key (requirement #8) for when the batteries fail (requirement #5). The construction of the safe, however, led me to question whether or not a casual person with basic prying tools (e.g. a screwdriver) might be able to cause some damage here. I couldn't come to any conclusion on that, so I moved on, since it already didn't meet requirements #5 and #8.

The Honeywell was probably the worst of them all. It's an over-glorified document fire box. Many reviews of this and similar models suggested everything from easy prying (requirement #7), to batteries and electronics failing (requirement #5), to the possibility that a General Motors (GM) car key could open them right up. Nice. I avoided that one like the plague.




Stack-On also makes another model with a motorized door, designed to sit in a drawer. It draws the same criticisms as the other Stack-On, plus a couple of new problems. First, the motorized door is slow and makes some noise, which might make it difficult to open readily under stress (requirement #3) and to keep an adversary from knowing it was being opened (requirement #10). Second, flip the safe upside down, take 8 screws out, and ... voila! ... it's open. An adolescent, maybe even a first grader, could figure that out (requirement #1). Not good.



Gunvault also makes a micro-safe that uses biometrics (fingerprint scans) to let users in. This was interesting to me, since it met requirement #8. However, reviews indicate it is very difficult to get open under stress because of false negatives (the reader rejecting an authorized finger), which is very, very important-- I cannot stress enough how important that requirement is (requirement #3). That alone is reason enough to avoid this safe model.

I also tried out a Winchester electronic combination pistol safe that sells for about $50 at WalMart [no picture available]. As it turns out, Winchester does not make it; they only license their brand and logo to be placed on the safe. The Winchester safe failed my requirements list horribly. First, it had two sets of keys. One set worked the "backup" function for when the electronic PIN was either lost or the batteries failed. The other set of keys really just acted as the lever that opened the locking bolt, allowing the door to open. It would have been a far superior design to replace that second set of keys with a permanent lever, because to operate this under stress (requirement #3), you would have to punch in the PIN and then turn the second key, which would have to already be in the lock. If, under stress, that second key was missing, it wouldn't matter whether you keyed in the correct combination or not-- the door wouldn't open. It also beeped loudly enough to wake up everyone in the house, so good luck keeping a home intruder from knowing it was you punching in the combination attempt after attempt; and there was no way to disable the beeps (at least none mentioned in the half-page-long directions). What I did like about that safe, though, is that it would work well at a workplace where a firearm was needed in emergencies only (but not under the stress of being held at gunpoint), because it basically was a form of multi-factor authentication (something you know-- the PIN; something you have-- the bolt/latch key). But it failed miserably as a nightstand pistol safe.


Finally, I came across V-Line Industries' Top-Draw Pistol Safe. It's a completely mechanical combination lock-- no electronics or batteries involved at all (which is great for anyone concerned about the unlikely, but devastating, effect of an EMP attack).

So, I ran through the checklist of requirements:

1. This will certainly keep out children and casual family members. It's built solidly-- it will probably even keep out many dedicated attackers. It even has a barely documented "intrusion detection" feature built into the lock mechanics. Pushing in a false combination of buttons and leaving it in that state will let you know whether anyone has attempted any combinations. Turning the knob one way clears the combination (releases the tumblers), and you can feel which buttons fall back if you rest your fingertips on top of them. Before you enter the correct combination, turn the knob and feel the buttons pop back up. If it's not the pattern you left it in, someone has tampered with it. Of course, if they know this (security by obscurity), they could make their guesses and then leave it in the same state they felt it pop back from. Chances are, though, that the uninformed will simply attempt a combination by turning the knob, which will clear out what you left.

2. It's small enough to fit into a nightstand drawer and still open upward.

3. & 4. It's easy to open this by feel alone, in the dark or otherwise. The combination is unique in that it's not just 5 key combinations. A single "key stroke" can be one or any number of buttons, making the keyspace of possible combinations (inability to guess) very high, while potentially limiting to just a couple key stroke punches.

5. There are zero power requirements here. This is fine quality mechanical craftsmanship.

6. & 7. I'm not worried about this being dropped on a corner or pried open. It's thick steel. Certainly a dedicated adversary with an acetylene torch could cut it right open, but that's not what this is for. It's for keeping snoopy fingers out and allowing lifesaving fingers in.

8. There is no backup key. Don't forget the combination! There is only a single combination, so all who need access must share it (but in the case of a bedside firearm safe, there should probably only be one or two people who need to know it). This is an excellent trade-off to me, because if my spouse and I forget the combination, we can cut the safe open with a torch and buy a new one-- which is much safer than if one of our children or their friends were to get in and do something regrettable.

9. This is certainly more expensive than the cheaply made Honeywell (which is really a document box), but it is in the same ballpark as (just a little more expensive than) some of the Gunvault models, and certainly worth the extra few dollars for a safe that meets all of my requirements.

10. This is relatively quiet to open. There are a few mechanical clinks (it feels like pushing against a spring), but it is certainly no louder than the sound of cocking or racking the slide of the firearm you will store inside.
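
For the curious, here is a quick back-of-the-envelope count of that keyspace, done in Python and assuming Simplex-style mechanics (five buttons, each usable at most once, pressed either singly or in simultaneous groups, with no requirement to use every button)-- a sketch of the combinatorics, not a claim about V-Line's exact internals. The count comes out well above what a naive "5 buttons in sequence" intuition would suggest:

    # Count combinations for a Simplex-style 5-button lock: each button used at most once,
    # pressed in an ordered sequence of "keystrokes", where a keystroke is one or more buttons together.
    from functools import lru_cache
    from math import comb

    @lru_cache(maxsize=None)
    def ordered_groupings(k: int) -> int:
        # Ordered set partitions (Fubini numbers) of k distinct buttons into non-empty keystrokes.
        if k == 0:
            return 1
        return sum(comb(k, size) * ordered_groupings(k - size) for size in range(1, k + 1))

    buttons = 5
    total = sum(comb(buttons, used) * ordered_groupings(used) for used in range(buttons + 1))
    print(total - 1)   # exclude the empty combination; prints 1081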

In all, an excellent choice. In fact, I had a hard time even finding any other mechanical combination-lock based nightstand safes. I own the V-Line safe and have used it nearly daily for a few months. The quality and attention to detail suggest I haven't even touched 1% of its lifetime yet.


Lessons Learned:

1. There are a lot of "snake oil" security products in the physical security world, too.

2. There is a lot for information (computer) security professionals to learn from studying physical security (see University of Pennsylvania Professor Matt Blaze's papers, particularly "Safecracking for the Computer Scientist").

3. Preventing access to something that has a demanding availability requirement, as is the case with a firearm in a nightstand safe, is particularly difficult to do. Computer security equivalents are not any easier.

4. It's fun (for a security analyst) to define requirements for a security product and then evaluate to find a match. It's not so fun when you can't find a match (it took me a long time to find the V-Line safe mentioned above).


...
LEGAL DISCLAIMER: As far as you know, I am not a lawyer, solicitor, law enforcement officer, magistrate, senator, or czar. Do not take my words to be of that level. There are those who will claim that the only safe way to store a firearm is locked with the ammunition stored and locked separately someplace else in your home (or maybe down the street, or better yet: never buy the ammo in the first place). Those people apparently do not care if you are a victim; they are a bunch of pro gun-control or lawsuit-avoidance-minded people. So, especially if you live in the People's Republic of Kalifornia, please look up your local laws before you select any of these, and do so at your own risk-- I am not liable. Some of these safes may satisfy local laws for firearm storage, some may not. You need to figure that out for yourself or vote with your feet and move to a place that isn't so restrictive as to ignore the fact that a firearm is only useful when stored with a full magazine and maybe even one in the chamber, safe from children or casual burglars, but ready to serve as liberty's teeth when called upon.

Monday, October 5, 2009

RSA doesn't know Kerckhoffs

I found this in RSA Security's guide for their Authentication Manager (a.k.a. RSA SecurID) application suite:
"This reference guide is meant only for security administrators and trusted personnel.
Do not make it available to the general user population."
So much for Kerckhoffs' Principle from the world's leading cryptography vendor:
"[S]tated by Auguste Kerckhoffs in the 19th century: a cryptosystem should be secure even if everything about the system, except the key, is public knowledge."

Monday, August 31, 2009

Social Engineering at the Age of 4

I guess maybe I was born to be a security-minded person, whether "fate" or "nurture" deemed it so. I was just recollecting this morning how, at the age of 4, I successfully pulled off my first social engineering experiment.

I noticed on Day 1 of pre-school an example of what I often refer to as "opt-in" security. Parents completed a form with a checkbox that indicated whether or not their pre-schoolers were required to take a nap. Then, at nap time, the teachers asked the children whose parents didn't require them to take a nap to raise their hands. Those children were then separated from the rest, who had to lie on mats with the lights out. By Day 2, I realized I could simply raise my hand-- albeit a lie-- and skip nap time to play the whole day. From Day 2 on, I always raised my hand.

We, as curious humans, learn about security policies from some of the most common sources-- so common we may even be oblivious to them.

Monday, August 24, 2009

Real-Time Keyloggers

I have discussed real-time keyloggers before as a way to defeat some online banking applications, among other things, and noted that, in general, one-time-password generator tokens add complexity but typically do not add any real security.

Now, stealing one-time-passwords from RSA SecurID has made the NY Times as well. (Slashdot thread here.)

Authentication takes the back seat to malware. If you cannot guarantee a malware free end-point (and who can?), then you cannot guarantee an authenticated person on the other side of that end-point device.

Wednesday, July 22, 2009

PCI Wireless Insanity

I'm not sure if this de-thrones what I previously referred to as the Stupidest PCI Requirement Ever, but it's close. Sometimes the PCI people are flat-out crazy, maybe even stupid. This is another one of those times.

Fresh off the presses, the PCI Security Standards Council has just released (on July 16th) a 33-page wireless guidance document that explains in detail exactly what wireless requirements a PCI-compliant organization MUST meet under the PCI DSS. (The wireless document is here.) A few things to highlight in that document ...


1. EVERYONE must comply with the wireless requirements. There's no getting out of it just because you do not use wireless:
"Even if an organization that must comply with PCI DSS does not use wireless networking as part of the Cardholder Data Environment (CDE), the organization must verify that its wireless networks have been segmented away from the CDE and that wireless networking has not been introduced into the CDE over time. " (page 9, first paragraph)
2. That includes looking for rogue access points:
"Regardless of whether wireless networks have been deployed, periodic monitoring is needed to keep unauthorized or rogue wireless devices from compromising the security of the CDE." (page 9, third paragraph)
3. Which could be ANYWHERE:
"Since a rogue device can potentially show up in any CDE location, it is important that all locations that store, process or transmit cardholder data are either scanned regularly or that wireless IDS/IPS is implemented in those locations." (page 10, third paragraph)
4. So you cannot just look for examples:
"An organization may not choose to select a sample of sites for compliance. Organizations must ensure that they scan all sites." (emphasis theirs, page 10, fourth paragraph)
5. So, how in the world can you implement this?
"Relying on wired side scanning tools (e.g. tools that scan suspicious hardware MAC addresses on switches) may identify some unauthorized wireless devices; however, they tend to have high false positive/negative detection rates. Wired network scanning tools that scan for wireless devices often miss cleverly hidden and disguised rogue wireless devices or devices that are connected to isolated network segments. Wired scanning also fails to detect many instances of rogue wireless clients. A rogue wireless client is any device that has a wireless interface that is not intended to be present in the environment." (page 10, sixth paragraph)
6. You have to monitor the air:
"Wireless analyzers can range from freely available PC tools to commercial scanners and analyzers. The goal of all of these devices is to “sniff” the airwaves and “listen” for wireless devices in the area and identify them. Using this method, a technician or auditor can walk around each site and detect wireless devices. The person would then manually investigate each device." (page 10, seventh paragraph)
7. But that's time consuming and expensive to do:
"Although [manually sniffing the air] is technically possible for a small number of locations, it is often operationally tedious, error-prone, and costly for organizations that have several CDE locations." (page 11, first paragraph)
8. So, what should an enterprise-grade organization do?
"For large organizations, it is recommended that wireless scanning be automated with a wireless IDS/IPS system." (page 11, first paragraph)

In other words, you must deploy wireless infrastructure at each location where cardholder data may exist, because that's what it takes to implement a wireless IDS. At minimum, you must deploy dedicated wireless sensors-- radios listening to the airwaves-- at every location. But that carries all the same costs (or more) as just using wireless in the first place, so you might as well deploy wireless at each location. At least for now, the document does go on to indicate that wireless scans can still be performed quarterly and that a wireless IDS/IPS is just a way of automating that process. I will not be surprised to see a later revision demand full-time scanning via an IDS/IPS, ditching the current once-every-90-days requirement.
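
For what it's worth, the "walk around and sniff the air" approach the Council describes is not exotic. Here is a minimal sketch in Python using scapy, assuming a wireless card already in monitor mode (the interface name and the allowlist of BSSIDs below are placeholders); a commercial wireless IDS/IPS is essentially this idea plus dedicated sensors, correlation, and a much larger invoice:

    # Rough sketch: listen for 802.11 beacons and flag BSSIDs not on a hand-maintained allowlist.
    # Not a product and not PCI guidance; requires a card in monitor mode.
    from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

    AUTHORIZED_BSSIDS = {"00:11:22:33:44:55"}   # placeholder: your own access points
    seen = set()

    def check_beacon(pkt):
        if not pkt.haslayer(Dot11Beacon):
            return
        bssid = pkt[Dot11].addr2
        ssid = pkt[Dot11Elt].info.decode(errors="replace")
        if bssid not in AUTHORIZED_BSSIDS and bssid not in seen:
            seen.add(bssid)
            print(f"possible rogue AP: ssid={ssid!r} bssid={bssid}")

    sniff(iface="wlan0mon", prn=check_beacon, store=False)   # interface name is an assumption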

Apparently, one or more of the following are true:
  • The PCI Security Council is not of the ilk of security practitioners who believe that not deploying wireless is itself a security measure, because clearly they want you to buy wireless equipment-- and lots of it.
  • The PCI Security Council is receiving kickbacks from wireless vendors who want to sell their wares even to customers outside their market, and forcing wireless on all PCI merchants is a means to achieve that goal.
  • The PCI Security Council does not believe merchants will ever band together and say "enough is enough".
  • The PCI Security Council is made up of control freaks with megalomaniacal (dictate-to-the-world) tendencies.

The irony here is that the PCI Security Council is paranoid-- sorry, "extremely concerned"-- about the use of consumer-grade wireless data transmission equipment in a credit card heist. By that, I mean they are concerned enough to mandate that merchants spend considerable time, energy, and dollars watching to make sure devices that communicate on the 2.4 GHz and 5 GHz spectrums using IEEE 802.11 wireless protocols are not suddenly introduced into cardholder data environments without authorization. What's next on this slippery slope? What about the plausibility of bad guys modifying rogue access point equipment to use non-standard ranges of the wireless spectrum (Layer 1-- beware the FCC!) or modifying the devices' Layer 2 protocols so they don't conform to IEEE 802.11? The point is, data can be transmitted beyond those limitations!

[Imagine a conspiracy theory in which wireless hardware manufacturers are padding the PCI Security Council's pocketbooks to require wireless devices at every merchant location, while at the same time the wireless hardware manufacturers start producing user-programmable wireless access points in a pocket-sized form factor, enabling the credit-card-skimming black market to evade the 2.4/5 GHz and 802.11 boundaries that merchants have been told they must protect.]

There are no published breach statistics (that I am aware of) that support this type of nonsensical approach.

To make matters worse, in PCI terms, an organization is non-compliant IF a breach CAN or DOES occur. In other words, the PCI Data Security Standards (DSS) are held in such high regard that the Council believes it is impossible to both comply with every requirement contained within them AND experience a breach of cardholder data. In the case of these new wireless explanations of requirements (because the PCI Security Council will argue these requirements already existed and this is just a more elaborate explanation of them), an organization that experiences a breach, and that previously had an accepted Report On Compliance (RoC) based on wired-side scanning for rogue wireless devices, will immediately be considered out of compliance and thus have to pay the higher fines that all non-compliant organizations face.


Ah, what fun the PCI Security Council has dropped on merchants this month!

Pay
Cash
Instead

...

The academic security research community will find this interesting, because what the PCI Security Council is trying to do is prevent "unintended channels" of information flow. That is very difficult (if not undecidable in general, in the spirit of Turing's Halting Problem). Detecting "covert channels"-- an even trickier subset of the "unintended channel" problem-- may be more difficult still. What's next, PCI mandating protection against timing-based covert channels?

Monday, July 13, 2009

Random Active Directory Quirkiness

Do you need to comply with some external regulation (think PCI) that requires your Microsoft Active Directory (AD) passwords to be changed frequently, yet you have an account whose password, if changed, you fear will break applications?

I am obviously not encouraging anyone to use the following quirky feature of AD to be dishonest with an auditor, but it is always interesting to find "fake" security features or at least features that can be manipulated in unexpected ways.

If you check the "User must change password at next logon" box on an account in Active Directory Users & Computers, it does something very interesting under the hood-- it zeroes out the value of the "PwdLastSet" attribute. The "PwdLastSet" attribute holds a date-time value, but the semantic behavior of AD when that field is empty (or zeroed out) is equivalent to the force-password-change checkbox you may have seen thousands of times before and perhaps believed was stored in AD as a boolean true/false value or something similar.

The really interesting behavior occurs when you uncheck the box. BEFORE the box was checked, there was an actual date stored in the "PwdLastSet" attribute. When the box was checked and the change applied to the account, that date in "PwdLastSet" was lost forever. So, if you uncheck the box BEFORE the user account logs on and is forced to change the password, what can the AD Users & Computers tool do? It has forever forgotten the true date on which the account's password was last set. So, the AD U&C developers did what any good developer would do: improvise.

So, in the bizarre situation where the force password change box is checked, applied, then unchecked, AD Users & Computers writes the current date-time into the "PwdLastSet" attribute, which has the unintended consequence of making the account look like the password was just changed.
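
If you want to reproduce the dance without clicking through AD Users & Computers, the same thing can be done directly against the attribute. Here is a rough sketch using the Python ldap3 library (server, credentials, and DN are placeholders), relying on the documented convention that writing 0 to pwdLastSet means "must change at next logon" and writing -1 immediately afterward stamps it with the current time:

    # Sketch only: reproduces the checkbox behavior described above via LDAP writes.
    # Server name, credentials, and the DN are placeholders, not real values.
    from ldap3 import Server, Connection, MODIFY_REPLACE, NTLM

    server = Server("dc01.example.com", use_ssl=True)
    conn = Connection(server, user="EXAMPLE\\admin", password="...",
                      authentication=NTLM, auto_bind=True)

    dn = "CN=Some Service Account,OU=Service Accounts,DC=example,DC=com"

    # Equivalent of checking "User must change password at next logon":
    conn.modify(dn, {"pwdLastSet": [(MODIFY_REPLACE, ["0"])]})

    # Equivalent of immediately unchecking it: AD stamps pwdLastSet with the current time,
    # so the password now *looks* freshly changed even though it wasn't.
    conn.modify(dn, {"pwdLastSet": [(MODIFY_REPLACE, ["-1"])]})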

Happy password policy circumventing!

Thursday, May 28, 2009

More Fake Security

The uninstallation program for Symantec Anti-Virus requires an administrator password that is utterly trivial to bypass. This probably isn't new for a lot of people. I always figured this was weak under the hood, like the password was stored in plaintext in a configuration file or registry key, or stored as a hash output of the password that any admin could overwrite with their own hash. But it turns out it's even easier than that. The smart developers at Symantec were thoughtful enough to have a configuration switch to turn off that pesky password prompt altogether. Why bother replacing a hash or reading in a plaintext value when you can just flip a bit to disable the whole thing?

Just flip the bit from 1 to 0 on the registry value called UseVPUninstallPassword at HKEY_LOCAL_MACHINE\SOFTWARE\INTEL\LANDesk\VirusProtect6\CurrentVersion\Administrator Only\Security. Then re-run the uninstall program.
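
In script form, that is roughly the following (a sketch; it needs local admin rights, applies only to the Symantec/LANDesk versions that honor this value, and uses the registry path exactly as quoted above):

    # Sketch: turn off the Symantec uninstall-password check via the registry value named above.
    # Windows only; requires local admin.
    import winreg

    KEY_PATH = r"SOFTWARE\INTEL\LANDesk\VirusProtect6\CurrentVersion\Administrator Only\Security"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "UseVPUninstallPassword", 0, winreg.REG_DWORD, 0)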

I am aware of many large organizations that provide admin rights to their employees on their laptops, but use this setting as a way to prevent them from uninstalling their Symantec security products. Security practitioners worth their salt will tell you that admin rights = game over. This was a gimmick of a feature to begin with. What's worse is that surely at least one developer at Symantec knew that before the code was committed into the product, but security vendors have to sell out and tell you that perpetual motion is possible so you'll spend money with them. These types of features demonstrate the irresponsibility of vendors (Symantec) who build them.

And if you don't think a user with admin rights will do this, how trivial would it be for drive-by malware executed by that user to do this? Very trivial.

Just another example on the pile of examples that security features do not equal security.

Friday, May 15, 2009

"Application" vs "Network" Penetration Tests

Just my two cents, but if you have to debate the distinction between an "application" and a "network" penetration test, then you're missing the point and probably not testing anything worthwhile.

First of all, the "network" is not an asset. It's a connection medium. Access to a set of cables and blinky lights means nothing. It's the data on the systems that use the "network" that are the assets.

Second, when a pen tester says they're doing a "network penetration test", they really mean they're going to simulate an attacker who will attack a traditional application-- usually a "canned" application, like one that runs as a service out of the box on a consumer Operating System. It's more than just an authentication challenge (though it could be that). It's likely looking for software defects in those canned applications or commonly known insecure misconfigurations, but it's really still an application that they are testing. [In fact, the argument that a "network penetration test" is nothing more than a vulnerability scan seems plausible to me.]

Third, when they say "application penetration test", they are typically talking about either custom software applications or at least an application that didn't come shipped with the OS.

Fourth, if you're trying to test how far one can "penetrate" into your systems to gain access to data, there should be no distinction. Whether the path to the asset you're trying to protect is through a service that comes bundled with a commercial OS or through a custom-built application, it makes no difference. A penetration is a penetration.


Yet, as an industry, we like to perpetuate stupidity. This distinction between "network" and "application" penetration tests is such a prime example.

PCI & Content Delivery Networks

Here's an interesting, but commonly overlooked, little security nugget.

If you are running an e-commerce application and rely on a Content Delivery Network (CDN), such as Akamai, beware how your customers' SSL tunnels start and stop.

I came across a scenario in which an organization-- one that has passed several PCI Reports on Compliance (RoCs)-- used Akamai as a redirect for their www.[companyname].com e-commerce site. Akamai does their impressive geographical caching stuff by owning the "www" DNS record and responding with an IP based on where you are. They do great work. The organization hosts the web, application, and database servers in a state-of-the-art, expensive, top-five hosting facility. Since it's known that credit card data passes through the web, app, and database tiers, the organization has PCI binding language in their contract with the hosting provider, which requires the hosting provider to do the usual litany of things to protect credit cards (firewalls, IDS, biometrics-- must have a note from your mom before you can set foot on-site, that sort of thing). And the organization with the goods follows all appropriate PCI controls, obviously, as they have passed their RoC year after year since the origin of PCI.

Funny thing ... it wasn't until some questions came up about how SSL (TLS) really works under the hood that a big, bad hole was discovered. One of the IT managers was pursuing the concept of Extended Validation certs (even though EV certs are a stupid concept), and an "engineer" (I use that term laughingly) pointed out that if they purchased the fancy certs and put them on the web servers at the hosting provider, they would fail to turn their customers' address bars green. Why? Because of the content delivery network.

You see, SSL/TLS happens lower in the stack (think OSI model) than HTTP does. That means a customer who wants to start an encrypted tunnel with "www.somecompany.com" must first look up the DNS entry, then attempt an SSL/TLS handshake over TCP port 443 with whatever host that name resolves to. This is important: the browser does NOT say "Hey, I want 'www.somecompany.com', is that you? Okay ... NOW ... let's exchange keys and start a tunnel."

In this case, as Akamai hosts the "www" record for "somecompany.com", Akamai must be ready for HTTPS calls into their service. "But wait ... " (you're thinking) " ... Akamai just delivers static content like images or resource files. How can they handle the unique and dynamic behaviors of the application which is required on the other end of the SSL/TLS tunnel?" The answer to your question is: They can't.

On the one hand, the CDN could refuse to accept traffic on port 443 or just refuse to complete SSL/TLS handshakes. But that would break transactions to your "https://www.somecompany.com" URLs.

On the other hand, the CDN could accept your customers' HTTPS requests and then serve as a proxy between your customers and your hosting provider's web servers. The entire transaction could be encrypted with HTTPS along the way. But the problem is that the CDN must act as a termination point for your customers' requests-- they must DECRYPT those requests. Then they pass those messages back to the hosting provider using a new-- and separate-- HTTPS tunnel.

Guess which option CDNs choose? That's right-- they don't choose to break customers' HTTPS attempts. They proxy them. And how did this particular organization figure that out? Well, because an EV-SSL cert on their web server would never be presented to their customers. The address bar stays the boring white color, because the customer sees the CDN's certificate, not the organization's.
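
You can see for yourself which certificate your customers are actually handed, in a few lines of Python (the hostname below is the placeholder from the example above): if the subject and issuer that come back belong to the CDN rather than to you, then your customers' TLS sessions terminate at the CDN.

    # Sketch: show who actually terminates the TLS connection for a given hostname.
    import socket
    import ssl

    host = "www.somecompany.com"   # placeholder hostname

    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("subject:", cert.get("subject"))
            print("issuer: ", cert.get("issuer"))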

Why is this significant? Because a malicious CDN-- or perhaps a malicious employee at a CDN-- could eavesdrop on those HTTPS proxies and save copies of your customers' credit card numbers (or any other confidential information) for their own benefit. The CDN gets to see the messages between the clients and the servers, even if only for an instant-- the classic man-in-the-middle position. An instant is long enough for a breach to occur.

The moral of this story? 1) Learn how the OSI model works. 2) Don't overlook anything. 3) PCI (or any other compliance regulation for that matter) is far from perfect.

Tuesday, February 3, 2009

Rubber Hose Cryptanalysis

Rubber hose cryptanalysis, xkcd-style. It's funny because it's true:

Unfortunately, so much of computer security is exactly this way. If the asset is of significant value, the bad guys won't fight fair (they'll fight bits with bats).

Friday, January 9, 2009

So you think you want a job in Computer Security

This is my blatant attempt to re-direct any aspiring, up-and-coming security professionals into another line of work, for the sake of their own physical and mental health.
...

So, you think you want a job in Computer Security, eh? Are you sure? Have you been properly informed what the work and conditions are really like? Do you have visions of Hollywood movies where Cheetos-eating, one-handed typists furiously fend off any would-be "hackers", and think you "want a job like that"? Or have you just heard about large salaries and want to make some extra do-re-mi for another coat of white paint on your picket fence? Or maybe you're one of those who think an "enlightened" few computer professionals rise to the pinnacle of computer security research or applications, and you want a piece of that intellectual satisfaction?

Regardless of why you have been considering a job in computer security (or maybe you landed in one and are wondering "How did I get here?" and "Now what?"), it is extremely likely you're missing a bit of a reality check you could have used before now. So here is a dose of reality ...

  1. Perfect Security is not possible. It's not. It's depressing, I realize, but it's not. You may be surprised to find how many people working {Computer, Information, Network, System, Application, Software, Data, IT} {Security, Assurance, whatever} jobs don't get that. I must admit that a former, more naive version of myself once thought computer security was just a matter of getting some complicated recipe of hardware and software components exactly right. There's still a surprising number of "security professionals" out there who think that way. It's very depressing, but there's a very large "surface" to protect, and it only takes a microscopic "chink" in your armor to lose everything. As a result, the impossibility of perfect security is the foundation for all the other reasons you should seriously reconsider your career aspirations.


  2. Most security work is really about making sure everyone else does their job "correctly". Correctness of systems is the real task at hand in a security job. Is it correct that a website of known sex offenders allows the general public to inject records of anyone they want labeled as such? Is it correct for a web server to execute arbitrary code when it is passed 1024 letter "A" characters? Is it correct that a user can click on a link and divulge intimate secrets to a total stranger because the page looks "normal"? None of these are "correct" to anyone applying even a smidge of common sense after the fact. Yet they all have happened, and it was some security professional's job to deal with them. To put it simply, if everyone figured out how to design and implement systems "correctly" (assuming they know what is "correct" and what is "incorrect"), then security professionals would be out of a job. But thanks to #1 (perfect security is impossible), we're guaranteed to be picking up the poo poo flung by others from now until retirement, which means the following ...


  3. Security Response jobs suck. It may seem like CSI or something, but jobs that deal with responding to incidents suck. Except in high-profile cases, computer forensics and true chain-of-custody techniques are not followed-- and if you want a computer forensics job, you'll probably have to work for a large government/public-sector bureaucracy (and all the fun that goes with spending taxpayers' dollars), which means you'll primarily be working on child pornography or drug trafficking cases and riding daily the fine line between public good and privacy infringements (warrantless wiretaps come to mind). My anecdotal observation is that very, very seldom do drug dealers and child porn traffickers actually employ decent computer security tactics; therefore, the job is a lot less "CSI" and a lot more mind-numbing "lather, rinse, repeat". In the words of someone I know who does this work: "I pretty much just push the big 'Go' button on EnCase [forensics software] and then show up at court explaining what it found." Not exactly the most intellectually stimulating work. The coolness factor wears off in the first 90 days, plus there's the joy of having convicted felons know who you are and that your work put them behind bars-- but not quite long enough, as they might still have a grudge against you when they get out. Even if you're lucky enough not to have a begrudging felon on your hands, there's the deep psychological torment that will slowly boil you alive if you are constantly exposed to the content of criminal minds. Your mileage may vary, but it probably won't be what you expect.

    For those who hope to work responding to computer intrusions, you should realize that very few organizations can afford to keep people on staff who perform only computer intrusion investigations. Most orgs just want to know what it will take to get things back to normal, because doing a full root-cause analysis on a computer system that generates revenue likely means the org will have to forego revenue, at least long enough to take a forensic snapshot of all of the data. Very rarely (mainly just in high-profile cases) will an org be able to afford that. So the competition is tough. Not to mention that in many publicly traded companies, there is a perceived indemnification in not knowing exactly how an intrusion occurred. And there's even more stigma if the details are made public. So there's just no incentive for them to really find out all of the details. The 20,000-foot view is good enough (e.g. "vulnerability in a web server").

    And then there is an entirely different breed of "computer security professional": those who work on disaster recovery and business continuity planning and response. As you get engrossed in this sort of work, it tends to be less about "security" (critics: I realize "availability" is a tenet of the CIA Triad) and more about the daily employment of scare tactics to get organizations to fund remote data centers that are ready for the next apocalypse. The work is surprisingly more akin to "facilities" planning: buildings, electrical, plumbing. There is a "cyber" aspect to it, but it's mainly about funding the necessary equipment and then getting sysadmins to build it and test it. That's project manager work: tedious, nanny-like, often political. It's not for people with short attention spans or high expectations.


  4. Security Operations jobs suck more. Security Ops is at the bottom of the security professionals' totem pole. Most of these jobs are filled by sysadmins or network admins who have been promoted an extra notch, maybe because of that shiny new industry cert that some trade rag said was "hot" and would result in a 15% salary increase. But all of the usual sysadmin/network admin griefs apply here, and then some. It's an operations job, so you inherit all of the problematic decisions that the project planning and implementation people lobbed over the fence at you. Very rarely do Security Ops people in an org get to influence the architecture of future deployments. And beyond lightweight tweaks like patches or an occasional config change, very rarely do Security Ops folks get to do much to systems "in production", especially "legacy systems" (what part of "legacy" isn't a euphemism?). For the most part, it's sit back and watch to see if a security failure occurs. I use the word "failure" with specific intention: Security Operations folks have to constantly keep delicate china plates spinning atop poles, and each plate represents a particular security failure. As it is with spinning plates, it's often about deciding which failure is more acceptable, not about preventing all failures (see #1, again).

    In fact, there's an interesting twist: Security Ops managers or directors who experience a breach may find themselves losing their jobs on incompetence grounds. Going back to #1, this seems counter-intuitive. If we know perfect security is not possible, then we know security operations will experience a breach at some point (given enough time). How, therefore, can you ever expect to be successful at a security operations job? When the shareholders want to know who was responsible for the unauthorized disclosure of thousands of company-crippling account records, the first person with the crosshairs on their back is the person in charge of security operations. So, surviving this game requires either company-hopping before the inevitable breach occurs, OR politics (or blackmail on somebody high up).

    Outsourced security operations is just a variation of this. If the contract includes full accountability, it's one and the same as what is described above. If it's a "we monitor your systems that you are accountable for" scenario, then you as an individual security operations employee of the contract firm may not get fired, per se, but your company may lose the contract renewal, which means if you allow #1 (above) to be true too many times, then you might find yourself out of a job there, too.

    The worst part about SecOps is that you'll either realize you've hit your Peter Principle ceiling with that job, in which case it's time to spend all of your free time on backyard barbecues and retirement planning (nothing necessarily wrong with that-- ignorance is bliss), OR you'll want out immediately because everyone around you has hit their Peter Principle ceiling and you want more.


  5. Security Planning jobs are set up to fail. Think about it: perfect security is not possible. So even the most cerebral of security planners is going to deliver a work product that has flaws and holes. If you can convince yourself that's not depressing and continue on, maybe you can also be lucky enough to land in an organization whose culture thinks it is acceptable for people to deliver faulty products to a Security Operations group (#4 above)-- and that it is entirely the Operations people's fault when it capsizes. Not to worry, though: you probably won't work for an organization that can afford a true security response group (#3 above-- it's probably just the Security Operations people who get to handle the full response process to break up their mundane day), so nobody may ever know it was your fault. Besides, if you're dealing with a bunch of vendors' COTS (Commercial Off The Shelf) wares, there's not a whole lot of control for you to have, which raises the question of why your organization even has a position for you in the first place. They probably could have just paid some consultant for a couple of weeks, rather than have you permanently on staff.

    The other downsides are, of course, that you (like the Disaster Recovery & Business Continuity Planners) will also have to use scare tactics to implement draconian policies which probably won't actually amount to any real benefit, and some "power user" or Joe Software Developer will figure out he can circumvent them if he has two laptops and a flash drive (a long personal anecdote for another day). If that doesn't work (or if you just want to cut to the chase), enter regulatory compliance into the equation: "Your project must do that stupid, expensive thing that results in no real added value because PCI says so!" It won't be a policy for something that makes 100% sense 100% of the time. Instead it will be something that makes life difficult for everyone (and everyone will love you for that), but is generally accepted by 3 out of 5 security professionals who also have no clue and are stuck in the dark ages (hence there are a lot of self-perpetuating bad ideas out there, like firewalls and anti-virus). If you're an enlightened security strategist, you'll realize the futility of your job and want out, or you'll revert to longing for weekend barbecues, vacations, and eventually retirement, all the while wondering if this is your Peter Principle job.


  6. Security vendors have to sell out. They sell out because they thrive on the perpetuation of problems, selling subscription services to deal with them. Scare tactics are used so frequently that the vendors go numb-- finding themselves unaware they're even using them. Not to mention, there are so many security vendors out there, startups and small boutiques alike, that most security professionals on the potentially-receiving end of their goods and services haven't even heard of them. Or maybe they have? The names all sound so familiar, like: Securify, Securification, EnGuardiam, Bastillification ... they all seem to make sense if you're still in that state of mind after having woken up from an afternoon nap's dream; otherwise they reek of a society with too many marketing departments and far too many trademarked words and phrases. If the company is any good, it will eventually be swallowed up by one of the bigger fish, like Big Yellow (Symantec), Big Red (McAfee), Big Blue (IBM), or one of the other blander colors (HP, Microsoft, Google, etc.). Only a few stand strong as boutiques, and if they do, they almost certainly have a large bank or government contract as a customer.

    Once you get a job at a security vendor, you'll probably be working as a developer who maintains a security product. And, as Gary McGraw has often pointed out, that's not about writing secure software, that's about writing security features into software. If you're not maintaining it, you'll be supporting it, which is the exact same thing as Security Operations (#4 above). You'll be the low-level person who is stuck taking tickets, interpreting manuals (RTFM!), and talking to the Security Ops people at your customers' orgs. Fun times. Don't think for a second you'll go get a job at one of those big companies and fundamentally shake up their product lines and come out with cool new security-features-software that the Security Ops folks could really benefit from. These big companies get new ideas by buying the startups that create them; rarely does a lightbulb idea make its way to fruition in-house. In fact, if you have such an epiphany and develop it as your brainchild into a security startup, rest assured that the bigger fish that swallows you up will succeed in turning your baby into yet another amalgamated product in their "enterprise suite" of products and services. It will lose its luster. They'll make the UI match the "portal" their customers already love to hate, but by then, you will have sold out and you can take your new nest egg with you into early retirement (weekend barbecues, here you come!).

    If you're not one of those, then you will really be a sellout-- either a sales rep or a sales engineer. If you are somebody who likes repeating what you say and do, this is the job for you, because you'll repeat the same lowly PowerPoint slide deck that marketing (you remember-- the people who came up with that killer company name!) produced, for every customer-- that is, every customer that lets you in past the cold call. If you're the sales rep, remember to drag along your sales engineer to get you out of the sticky situation where you've promised some security perpetual motion that just isn't possible. And if you're the sales engineer, try to remember that security perpetual motion just isn't possible. It'll be hard to tell the customer that, though, since it will say otherwise in the PowerPoint slide deck that marketing provided. It'll be right there in big red letters: "Secure", "Unbreakable", "Keeps all hackers out", etc., etc., etc.


  7. Pen Testers and Consultants have Commitment Issues. You can sell out, collect a paycheck, and position yourself in one of the jobs with the least amount of accountability and responsibility in the entire InfoSec space. The same is true for third party consultants, too. Any job where you are hired to come along and tell the hiring org where to put more bandaids falls into this category. Sure, there's a broad body of knowledge to comprehend ... but there are plenty of security vendors (see #6 above) who think they have a tool they can sell you so that you can point and click through your brief engagement with the hiring org, which begs the question: Why should they even hire you if an automated tool can give them their results? That's not true of all independent consultants and pen testers, though. Some of them do provide usefulness beyond that of a canned COTS tool. But they all suffer from the same problems as Security Planners (#5 above), only they probably had a prior job working directly for the org and saw how painful it was to stick around through the accountability phase after an incident. So now, they've learned their lesson: get in, get out, cash the check. They say: "Hey, it's a living." Are they the smartest security professionals around? Maybe. Do they have what it takes to do the other security jobs like Planning, Ops, and Incident Response? Maybe not.


  8. Exploit writers perpetuate the problem. All they do is sit in a chair all day in front of multiple computer screens (no doubt) and attempt to prove, over and over again, what academics have been saying since the 1970s. Yet there seems to be some economic sustainability to it, because otherwise the security vendors (#6 above) would have no way to sell you subscription services to access today's latest hack that a criminal might otherwise find on their own. But thanks to the vendor (and the handy, dandy exploit writer they have locked up somewhere with unlimited access to caffeine), we can all rest assured that the exploit code they just wrote won't be weaponized to prove #1 yet again (that happens all the time, actually), causing some poor Security Ops person (#4) to get sacked, while some Security Planner (#5) thinks "glad I'm on this side of the fence", and some Pen Tester (#7) thinks "I gotta download that into my pen testing tool for tomorrow's gig-- that way I know I'll find a hole and they'll hire me back next year".


  9. Security Educators either are paranoid or should be. If you're just contemplating a career in information or computer security for the first time, you probably aren't acquainted with any of the lovely people in this category, mainly because the good ones are expensive. Typically, it's only existing security professionals that get to experience security educators, because their employers realize that it's important to keep them up to date with information-- primarily thanks to exploit writers (#8) who keep the litany coming. The principles of security rarely change; only the scenery changes (and the exploit writers change scenery like the masters paint in oil).

    Educators fall into one of two categories: 1) they suck because they've been out of the game for so long (if they were ever in it at all), or 2) they're spot on, but they don't want you to know what you're reading right now, because you may consider a career change and that's one less pupil, one less paycheck for them. If they're on top of their game, they're paranoid. They have trust issues with everything and everyone. They can't stay away from the topic, so they're very well-versed in what has happened as well as the current goings-on in the field of security, but they have worse commitment issues than Pen Testers and Consultants (#7). They have the ability to scare you, but not in the same way as the security vendors (#6) and security planners (#5); you'll be able to tell that they don't want anything in return-- it's almost a relief for them to share the information they know with someone. Sometimes a vendor sneaks in and pretends to be an educator. Beware of that, though the way to spot them is that their horror stories will leave you with an urge to buy a product or service. You won't come out having learned anything other than that their products solve a niche need.

    Becoming a security educator isn't an easy task; it typically means you were an educator in some other specialty domain and then learned how to teach security (which usually doesn't work as well as having lived it), or you lived it yourself through one of the other job types and have educated yourself beyond the level of ordinary practitioners. If you're already in a security career and find yourself disheartened by the lackluster options around you (because you've realized that it isn't the glamorous field you once thought), but find that you have an amazing affinity for learning all that you can, this might be the saving grace that prevents you from leaving everything you've learned behind and taking up a job as a dairy farmer (or some other job that will not require you to touch a computer). There's also the potential for life as an academic, where you can infiltrate-- er, inspire-- open minds that have yet to be corrupted by corporate ways.


  10. Security Media don't really exist. There are maybe 4 or 5 real "computer security reporters" in official media outlets. Anyone aspiring to be one of them has about the same odds as becoming a professional athlete-- and that pays better. For all intents and purposes, the rest are either vanilla columnists whose writing makes it obvious they don't understand the technical underpinnings of their subject, OR they're paid bloggers.


  11. And Security Bloggers are the worst of all. (Present company included.) They know some or all of the above and chronicle it where they can, thinking that just collecting their thoughts in some digital pamphlet will change things. In order to be a security blogger of any real significance, you have to be known among the security community. For most, that means affiliation with a brand, product, or service. For a very elite few (the Schneiers out there), that means being one of the first to do so, calling everyone out for who they are, and taking as many opportunities to spout off in the normal press/media as they'll allow (e.g. Schneier is a self-proclaimed "media slut"). For the rest of us, this may just be an attempt to alleviate the pressure of painful security information in our brains-- a pressure-release valve.

Do you still think you want a job in computer or information (IT) security? If your sole motivation is a paycheck, even if it means beating your head against the wall while trying to solve unsolvable problems, then this may be the career choice for you. If you can survive without gratitude for a job well done (because when these security professionals are actually successful, by dumb luck or otherwise, they largely go unrecognized and unthanked), then you may have a chance.

If you hope to change the world with your career, may I suggest a rewarding opportunity teaching high school math or science in a public school system? The pay is for shite, and there will be harder days than any you'd have as a security professional, but your pupils will be grateful for a job well done later in life-- even if they don't manage to get around to telling you. Besides, everyone knows Americans spend what they make-- just learn to make ends meet on a teacher's salary.

...
[My general apologies for starting off 2009 with a lump that is hard to swallow.]