
Friday, October 26, 2012

Sony's PS3 DRM Cracked

Anyone who pays any attention to DRM will extrapolate the general principle:
You can never prevent an end-user who has physical control of a device from breaking any DRM scheme you can invent.
Sony just learned its DRM lesson (again).  I'm sure people at Sony already know this principle, but some "suit" tells the engineers to "do something about the problem," so they implement a technical speed bump. That's all DRM is and ever will be.

Thursday, October 18, 2012

Skeleton Keys

Wouldn't it be really scary if physical locks in large planned cities like NYC were designed to use skeleton keys-- master keys shared with do-gooder firefighters and locksmiths alike-- without anyone ever thinking about what could happen if such keys got into the hands of the average Joe, whose do-gooder status is unknown?  Yep, it would.

Look but don't pay attention to key teeth details!
Wouldn't it be even scarier if those who cried "the sky is falling, the sky is falling" were also dumb enough to post high-resolution photos of the skeleton keys on their websites (pictured left), so that anyone with access to key blanks and tools could easily measure and create their own skeleton key copies?  Again, yes.

Friday, October 5, 2012

Coping with Compromised Certificate Authorities

With the news full of stories about malware being distributed via compromised Certificate Authorities, it makes sense that some IT security blogs would address "what to do" if this happens to your CA.  This blog post gets it wrong, though:
What would you do if you found out that the Certificate Authority that provides Digital Certificates to your company was compromised, and Microsoft was adding the Certificate Authority’s public key to Windows un-trusted Root Store? Well if you have not got a contingency plan to implement then I can presume you will be in a panic to purchase new certificates from another Certificate Authority... It can take Certificate Authority’s (CA’s) a few days to validate domain ownership and company registration details... While all this is happening your customers are getting a message from Internet Explorer that your SSL certificate is not to be trusted.
What can you do?
  • Do not rely on one Certificate Authority for all of your certificates. You should have a relationship with at least two well known Certificate Authority’s and the CA’s should have validated all of your domains. This will let you quickly order Digital Certificates from the second CA without having to go through the company validation process...
  • If you cannot tolerate any downtime for a service you can take the extra step in which you create backup certificates for each service using your backup Certificate Authority. This will enable you to implement the backup certificates without having to contact the second CA and joining the queue of company’s looking for new certificates.
Keep in mind that the worst-case scenario described above would require the Root CA certificate to be compromised.  Most root CA certificates are kept offline, meaning the computers that house them are not powered on except under special circumstances when new intermediate CA certificates are generated, or they sit on an "air gapped" network (disconnected from the internet) accessible only via sneakernet.  Exploiting an offline CA is a big deal, and if it occurs it won't be just your organization that is affected, but likely a large part of the entire internet.

So a much more plausible option:
  • The CA will just create a new intermediate CA cert and re-issue client certs to all of its paying customers.
In other words: nothing to see here, please move along.
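(If you do want a quick sanity check of your own exposure when a CA is distrusted, a minimal sketch using the Python `cryptography` package is below. The file names are hypothetical placeholders, and matching issuer/subject names is a quick heuristic, not full cryptographic chain validation.)

```python
# Minimal sketch: was my certificate issued by the now-distrusted intermediate?
# File names are hypothetical placeholders.
from cryptography import x509

with open("distrusted_intermediate.pem", "rb") as f:
    distrusted = x509.load_pem_x509_certificate(f.read())

with open("my_server_cert.pem", "rb") as f:
    leaf = x509.load_pem_x509_certificate(f.read())

if leaf.issuer == distrusted.subject:
    print("Affected: this certificate chains through the compromised CA; re-issue it.")
else:
    print("Not issued by the compromised intermediate.")
```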

Friday, August 24, 2012

Protecting Cars from Viruses

Reuters is running a story that should amuse any computer security professional: Experts hope to shield cars from computer viruses.

An excerpt:

Intel's McAfee unit, which is best known for software that fights PC viruses, is one of a handful of firms that are looking to protect the dozens of tiny computers and electronic communications systems that are built into every modern car.

It's scary business. Security experts say that automakers have so far failed to adequately protect these systems, leaving them vulnerable to hacks by attackers looking to steal cars, eavesdrop on conversations, or even harm passengers by causing vehicles to crash.
Our guess is that when cars get to the point that they drive themselves, those who understand how malware works-- and, more importantly, how undeniably complicated modern software and its hardware architecture can be-- will don a pair of Converse Chuck Taylors and, like modern Luddites, keep driving themselves, a la Will Smith in I, Robot.

When you look at the statistics, you are far more likely to get injured or die in a car accident than from nearly any other security risk you face in your daily life.  Even with the vast skies being what they are, and the regulations on the airline industry and its pilots, it's not possible to keep air travel 100% safe, though it's safer than driving (once you get past the TSA checkpoint).

Computerized, self-driving cars may improve (emphasis on "may") safety stats; however, not if their software landscape looks like anything else we operate with a CPU in it these days.  There are agencies with operating budgets larger than the GDP of several nations that are terrified about the possibility of malware injected into things like military aircraft or missile guidance systems.  Given that, how in the world is a ~$20K automobile (at most 1% of the price tag of the military's concerns) ever going to be 100% free of malware?  Simple: it won't be.
Toyota Motor Corp, the world's biggest automaker, said it was not aware of any hacking incidents on its cars.
"They're basically designed to change coding constantly. I won't say it's impossible to hack, but it's pretty close," said Toyota spokesman John Hanson. [emphasis ours]
Oh, we've never heard that before...

Officials with Hyundai Motor Co, Nissan Motor Co and Volkswagen AG said they could not immediately comment on the issue.

A spokesman for Honda Motor Co said that the Japanese automaker was studying the security of on-vehicle computer systems, but declined to discuss those efforts.
"Mum's the word" is a much smarter response to the press.
A spokesman for the U.S. Department of Homeland Security declined to comment when asked how seriously the agency considers the risk that hackers could launch attacks on vehicles or say whether DHS had learned of any such incidents.
They probably declined to comment because they are working on exploits for these as well.  Say it ain't so?  Look no further than Stuxnet and Flame, for which the US government takes full authorship credit.  It's the future of the "cyberwarfarestate".

We can't keep malware out of critical infrastructure SCADA systems.  There's no way we can keep it out of your mom's minivan.

Wednesday, August 8, 2012

MS-CHAPv2 Crack

It should come as no real surprise: MS-CHAPv2 is broken.  It's an ancient scheme.  If you were paying attention, you would have migrated your VPNs and Wireless networks away from it years ago anyway.

Here's a great breakdown of what this means for your wireless networks.

An even simpler summary is to note that these combinations are still fine:
  • IPSEC and OpenVPNs are fine.
  • WPA2 Enterprise wireless with PEAP is fine.
  • WPA2 Non-Enterprise (i.e. home) wireless is fine (from this).
And, of course, keep in mind it still takes 24 hours (right now, but that's sure to be sped up) to actually crack the DES encryption key with this exploit.  Since it's 24 hours and not 24 ms, an attacker won't just casually stumble onto you and exploit you; your network will have to be a target first, at least to some degree.
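For a rough sense of where that 24-hour figure comes from, here is a back-of-the-envelope sketch; the cracking rate is an assumed number for illustration, not a measured benchmark:

```python
# The exploit reduces the MS-CHAPv2 handshake to brute-forcing a single
# 56-bit DES key. The cracking rate below is an assumed figure.
KEYSPACE = 2 ** 56            # all possible single-DES keys
ASSUMED_RATE = 10 ** 12       # keys per second (hypothetical FPGA cluster)

worst_case = KEYSPACE / ASSUMED_RATE                        # seconds to try every key
print(f"Worst case:   {worst_case / 3600:.1f} hours")       # ~20 hours
print(f"Average case: {worst_case / 2 / 3600:.1f} hours")   # ~10 hours
```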

Tuesday, February 7, 2012

Verisign Hacked!


Verisign was breached, according to an SEC filing (Reuters), yet they have reported almost no details and act like it's no big deal!

An excerpt from Reuters (emphasis mine):

"Oh my God," said Stewart Baker, former assistant secretary of the Department of Homeland Security and before that the top lawyer at the National Security Agency. "That could allow people to imitate almost any company on the Net."
I knew instantly why Baker is a former assistant secretary at DHS: because he understands the gravity of a real security incident. Had he not understood, he would probably still be employed at DHS, along with all of the other laughingstocks and poster children for security theater.

Back on topic: Verisign is probably the single largest peddler of SSL certificates, and its Certificate Authorities (CAs) are probably trusted by more browsers and other applications than anyone else's. Talk about all your eggs in one basket! Not to mention their impact on the control of DNS.

In a past life as a customer of Verisign's certificates, I did not like dealing with them. They were arrogant, acted like they had no competitors, and charged exorbitant prices for their certs. That said, the fact that mum's the word on what could possibly be the single largest breach in internet history is cause for much concern. If the private keys for any of their CA certs, including their intermediate certs, were breached, then anybody could impersonate any site they wish on the web.

First it was RSA being tight lipped on their SecurID breach, and now it's Verisign on who knows what was breached.

In the authentication world, there are really only two methodologies: A) hierarchical, or B) web of trust. Public Key Infrastructure (i.e. Certificate Authorities) is hierarchical: essentially, we all trust a self-appointed few to discern for us who is authentic and who is not. In the web of trust model, that discernment is distributed among all the participants: you may choose to trust that a website is your bank, or you may not. The most common implementation of web of trust is PGP (the protocol, not the PGP company, which has its own rife history of issues). The con to web of trust is that your Grandma (or maybe even you) won't know whom to trust, so she'll have a hard time setting up her {computer, iPhone, whatever}. In the hierarchical model, you don't have to think, but sometimes not thinking is a bad thing.
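To make the contrast concrete, here is a toy sketch of the web-of-trust decision. The signature graph and names are made up, and real PGP adds key IDs, trust levels, and revocation on top of this:

```python
# Toy web-of-trust check: trust a key only if a short chain of signatures
# connects it back to a key we hold ourselves. Names and edges are made up.
SIGNATURES = {                 # "X signed Y's key"
    "me": {"alice", "bob"},
    "alice": {"bank"},
    "bob": {"carol"},
}

def trusted(start, target, max_hops=2):
    """Breadth-first walk of the signature graph, bounded by max_hops."""
    frontier, hops = {start}, 0
    while frontier and hops < max_hops:
        frontier = set().union(*(SIGNATURES.get(k, set()) for k in frontier))
        if target in frontier:
            return True
        hops += 1
    return False

print(trusted("me", "bank"))      # True: me -> alice -> bank
print(trusted("me", "mallory"))   # False: nobody I trust has signed it

# The hierarchical model skips all of this: if a root CA in the browser's
# bundle signed it, it is trusted -- no decision left to the end user.
```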

...

What can be learned from this?

1) Even the largest internet security giants can fall, and when they fall they hit the ground hard. A large, recognizable brand does not necessarily improve security. Though these incidents do not conclusively prove it, there is reason to believe that these companies present themselves as a treasure trove to their adversaries: they simply house assets of far greater value than is generally understood. Aligning your business with these high-value assets might attract unnecessary attention from thieves to your business.

2) It is probably time to revisit the web of trust model.

Tuesday, March 22, 2011

More RSA SecurID Reactions

RSA released a new Customer FAQ regarding the RSA SecurID breach. Let's break it down ...
Customer FAQ
Incident Overview

1. What happened?

Recently, our security systems identified an extremely sophisticated cyber attack in progress, targeting our RSA business unit. We took a variety of aggressive measures against the threat to protect our customers and our business including further hardening our IT infrastructure and working closely with appropriate authorities.
Glad to see they didn't use the words "Advanced Persistent Threat" there.
2. What information was lost?

Our investigation to date has revealed that the attack resulted in certain information being extracted from RSA’s systems. Some of that information is related to RSA SecurID authentication products.
Hmmm. Seed records, possibly?
3. Why can’t you provide more details about the information that was extracted related to RSA SecurID technology?

Our customers’ security is our number one priority. We continue to provide our customers with all the information they need to assess their risk and ensure they are protected. Providing additional specific information about the nature of the attack on RSA or about certain elements of RSA SecurID design could enable others to try to compromise our customers’ RSA SecurID implementations.
[Emphasis added by Securology]
Whoa! Pause right there. Obviously they have allowed somebody from a Public/Customer Relations background to write this. This is not coming from anybody who *knows security*.

As we mentioned previously, Kerckhoffs's Principle and Shannon's Maxim dictate that the DESIGN be open. These ideas are older than the Internet, and pretty much older than computing itself. So disclosing the RSA SecurID DESIGN should have no adverse effect on customers' implementations unless the DESIGN is flawed to begin with.

Realistically, this is PR-speak for obfuscating details about what was stolen. All things point to seed records. Source code to on-premise implementations at customer sites shouldn't be affected, because those components aren't facing the Internet, and generally who cares about them? Yes, it's possible to hack the backend through things like XSS (think "Cross Site Printing"), but the state-of-the-art would be to compromise it from the outside using weaknesses found at RSA headquarters: seed records.
4. Does this event weaken my RSA SecurID solution against attacks?

RSA SecurID technology continues to be an effective authentication solution. To the best of our knowledge, whoever attacked RSA has certain information related to the RSA SecurID solution, but not enough to complete a successful attack without obtaining additional information that is only held by our customers. We have provided best practices so customers can strengthen the protection of the RSA SecurID information they hold. RSA SecurID technology is as effective as it was before against other attacks.
[Emphasis added by Securology.]
If it weren't obvious that it's seed records yet, it should be screaming "SEED RECORDS" by this point. RSA SecurID is a two-factor authentication system, meaning you couple your time-synchronized RSA SecurID tokencode with a PIN/password. So, if the seed records are stolen, the only way an adversary can impersonate you is if he also knows:
  1. Which RSA SecurID token is assigned to you (i.e. the serial number stored in the RSA SecurID database at the customer's site)
  2. Your PIN/passcode that is the second factor (i.e. another piece of information stored at the customer's site).
More evidence that the RSA breach was seed records: the serial number and seed records give the adversary half the information needed, but the rest is stored on-site.
5. What constitutes a direct attack on an RSA SecurID customer?

To compromise any RSA SecurID deployment, an attacker needs to possess multiple pieces of information about the token, the customer, the individual users and their PINs. Some of this information is never held by RSA and is controlled only by the customer. In order to mount a successful direct attack, someone would need to have possession of all this information.


6. What constitutes a broader attack on an RSA SecurID customer?

To compromise any RSA SecurID deployment, the attacker needs to possess multiple pieces of information about the token, the customer, the individual users and their PINs. Some of this information is never held by RSA and is controlled only by the customer. In order to mount a successful direct attack, someone would need to have possession of all this information.

The broader attack we referenced most likely would be an indirect attack on a customer that uses a combination of technical and social engineering techniques to attempt to compromise all pieces of information about the token, the customer, the individual users and their PINs. Social engineering attacks typically target customers’ end users and help desks. Technical attacks typically target customers’ back end servers, networks and end user machines. Our prioritized remediation steps in the RSA SecurID Best Practices Guides are focused on strengthening your security against these potential broader attacks.
[Emphasis added by Securology]
This PR person is beginning to agree with us. Yes, the seed records are the hard part. If you are an RSA SecurID customer, assume the adversary has them, and now watch out for the pieces you control.
7. Have my SecurID token records been taken?
[Emphasis added by Securology.]
Yes, it's obvious they have.
For the security of our customers, we are not releasing any additional information about what was taken. It is more important to understand all the critical components of the RSA SecurID solution.

To compromise any RSA SecurID deployment, the attacker needs to possess multiple pieces of information about the token, the customer, the individual users and their PINs. Some of this information is never held by RSA and is controlled only by the customer. In order to mount a successful attack, someone would need to have possession of all this information.
This is beginning to look like a broken record.
8. Has RSA stopped manufacturing and/or distributing RSA SecurID tokens or other products?

As part of our standard operating procedures, while we further harden our environment some operations are interrupted. We expect to resume distribution soon and will share information on this when available.
Of course manufacturing/distribution has stopped. Of course anyone worried about security would have an SOP that says "stop shipping the crypto devices when the seed records are compromised." This is just more evidence that the seed records were compromised.
[...snipped for brevity...]
13. How can I monitor my deployment for unusual authentication activity?

To detect unusual authentication activity, the Authentication Manager logs should be monitored for abnormally high rates of failed authentications and/or “Next Tokencode Required” events. If these types of activities are detected, your organization should be prepared to identify the access point being used and shut them down.

The Authentication Manager Log Monitoring Guidelines has detailed descriptions of several additional events that your organization should consider monitoring.
[Emphasis added by Securology]
Warning about failed authentications and "Next Tokencode Required" events further indicates the seed records were stolen: it suggests adversaries guessing valid tokencodes with invalid PINs, or guessing tokencodes in order to determine a specific user's serial number (to match a stolen seed record to a particular user).
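A minimal sketch of that kind of monitoring is below. The CSV export format, field names, event names, and threshold are hypothetical; adapt them to whatever your Authentication Manager actually produces:

```python
# Count failed authentications and "Next Tokencode Required" events per user
# and flag anything unusually high. Log format and threshold are assumptions.
import csv
from collections import Counter

SUSPECT_EVENTS = {"AUTH_FAILED", "NEXT_TOKENCODE_REQUIRED"}
THRESHOLD = 10   # flag users exceeding this many suspect events (assumed value)

counts = Counter()
with open("auth_manager_export.csv", newline="") as f:   # hypothetical export
    for row in csv.DictReader(f, fieldnames=["timestamp", "user", "event"]):
        if row["event"] in SUSPECT_EVENTS:
            counts[row["user"]] += 1

for user, n in counts.most_common():
    if n >= THRESHOLD:
        print(f"Investigate {user}: {n} failed/next-tokencode events")
```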
14. How do I protect users and help desks against Social Engineering attacks such as targeted phishing?

Educate your users on a regular basis about how to avoid phishing attacks. Be sure to follow best practices and guidelines from sources such as the Anti-Phishing Working Group (APWG) at http://education.apwg.org/r/en/index.htm.

In addition, make sure your end users know the following:
  • They will never be asked for and should never provide their token serial numbers, tokencodes, PINs, passwords, etc.
Because giving that away is giving away the last pieces of information that are "controlled only by the customer", i.e. the mapping of user IDs to seed records via token serial numbers.
  • Do not enter tokencodes into links that you clicked in an email. Instead, type in the URL of the reputable site to which you want to authenticate
Because a phishing attack that captures a tokencode could be all that's needed to guess which serial number a user has: the moment in time can be recorded, and all of the stolen seed records can be run through a parallel, offline computation of their tokencodes at that instant.

Assume an adversary now possesses all of the seed records for every currently valid RSA SecurID token (which, based on the above and on previous posts, seems very plausible). Assume they also have sufficient computing hardware to mass-compute the tokencodes of every token represented by those seed records over a range of time (they are obviously well funded enough to earn the "Advanced Persistent Threat" name). That would be the output of the RSA SecurID algorithm, taking every future unit of time as input, coupled with each seed record, to generate the output "hashes" for every RSA SecurID token RSA has ever made. These mass-computed tokencodes for a given range of time are basically one big rainbow table-- a time/computation trade-off not unlike using rainbow tables to crack password hashes.

Now assume the adversary can phish users into entering a tokencode into a false login prompt. Since tokencodes are only 6 digits long and RSA has sold millions of tokens, the chance that one token's output collides with another's at a random point in time is significant. But phish the same user repeatedly (say, by asking for the "next tokencode") and the adversary can sharply narrow down which token belongs to that user, because different tokens must appear random and out of sync with one another (otherwise RSA SecurID would have much bigger problems). Do this selectively over a period of time against a high-value target, and chances are the adversary's presence will go undetected while they determine exactly which token (serial number, i.e. seed record) belongs to the victim. Or do it en masse and quickly (think: social media) and it will harvest many userID-to-serial-number (seed record) mappings, which would be valuable on the black market-- especially for e-commerce and banking applications.
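Here is a conceptual sketch of that precompute-and-match idea. The real RSA SecurID algorithm is proprietary, so a generic HMAC-based code generator stands in for it; the seeds, serial numbers, and observation times are all fabricated:

```python
# Illustrates narrowing stolen seed records down to one victim token using a
# few phished tokencodes. The tokencode() function is a stand-in PRF, NOT the
# proprietary RSA SecurID algorithm; all data below is made up.
import hashlib
import hmac
import struct

def tokencode(seed, minute):
    """6-digit code from (seed, time slot) -- generic stand-in only."""
    mac = hmac.new(seed, struct.pack(">Q", minute), hashlib.sha1).digest()
    return f"{int.from_bytes(mac[:4], 'big') % 1_000_000:06d}"

# Attacker's side: a pile of stolen seed records keyed by token serial number.
# (Real deployments have tens of millions of tokens, so collisions are worse.)
stolen_seeds = {f"SN{i:06d}": i.to_bytes(16, "big") for i in range(100_000)}

# Phished observations of one victim: (minute, tokencode) pairs.
victim_seed = stolen_seeds["SN001337"]
observations = [(m, tokencode(victim_seed, m))
                for m in (27_000_000, 27_000_017, 27_000_090)]

# Each additional observation prunes the candidate set further.
candidates = set(stolen_seeds)
for minute, code in observations:
    candidates = {sn for sn in candidates
                  if tokencode(stolen_seeds[sn], minute) == code}
    print(f"after observation at minute {minute}: {len(candidates)} candidates")
```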
It is also critical that your Help Desk Administrators verify the end user’s identity before performing any Help Desk operations on their behalf. Recommended actions include:

· Call the end user back on a phone owned by the organization and on a number that is already stored in the system.

· Send the user an email to a company email address. If possible, use encrypted mail.

· Work with the employee’s manager to verify the user’s identity

· Verify the identity in person

· Use multiple open-ended questions from employee records (e.g., “Name one person in your group” or, “What is your badge number?”). Avoid yes/no questions

Important: Be wary of using mobile phones for identity confirmation, even if they are owned by the company, as mobile phone numbers are often stored in locations that are vulnerable to tampering or social engineering.
[...snipped for brevity...]
The above is very decent advice, not unlike what we posted recently.


So, in summary: yeah, yeah, yeah, seed records were stolen. Little to no doubt about that now.

Friday, March 18, 2011

RSA SecurID Breach - Initial Reactions


RSA, the security division of EMC, was breached by a sophisticated adversary who stole something of value pertaining to RSA SecurID two factor authentication implementations. That much we know for certain.


It's probably also safe to say that RSA SecurID will be knocked at least a notch down from its place of unreasonably high esteem.


And it wouldn't hurt to take this as a reminder that there is no such thing as a perfectly secure system. Complexity wins every time and the adversary has the advantage.


First, note that the original Securology article entitled "Soft tokens aren't tokens at all" is still as valid as the day it was published over 3 years ago. CNET is reporting that RSA has sold 40 million hardware tokens and 250 million software tokens. That means 86% of all RSA SecurID "tokens" (the "soft token" variety) are already wide open to all of the problems that an endpoint device has-- and more importantly, that 86% of the "two factor authentication" products sold and licensed by RSA are not really two-factor authentication in the first place.


Second, we should note a principle from economics, so eloquently described by your mother as "don't put all your eggs in one basket": the principle of diversification. If your organization relies solely on RSA SecurID for security, you were on borrowed time to begin with. For those organizations, this event is just proof that "the emperor hath no clothes".


Third, the algorithm behind RSA SecurID is not publicly disclosed. This should be a red flag to anyone worth their salt in security. It is a direct violation of Kerckhoffs's Principle and Shannon's Maxim, which roughly state that only the encryption keys should be secret and that we should always assume an enemy knows (or can reverse engineer) the algorithm. There have been attempts in the past to reverse engineer the RSA SecurID algorithm, but those attempts are old and do not necessarily reflect how the current version operates.


Fourth, it's probably the seed records that were stolen. Since we know the algorithm is essentially a black box taking a "seed record" and the current time as input, either disclosure of the seed records or disclosure of the algorithm could be devastating to any system relying on RSA SecurID for authentication.

Hints that the "seed records" were stolen can be seen in this Network World article:
But there's already speculation that attackers gained some information about the "secret sauce" for RSA SecurID and its one-time password authentication mechanism, which could be tied to the serial numbers on tokens, says Phil Cox, principal consultant at Boston-based SystemExperts. RSA is emphasizing that customers make sure that anyone in their organizations using SecurID be careful in ensuring they don't give out serial numbers on secured tokens. RSA executives are busy today conducting mass briefings via dial-in for customers, says Cox. [emphasis added by Securology]
Suggesting that customers keep serial numbers secret implies that seed records were indeed stolen.

When a customer deploys newly purchased tokens, the customer must import a file containing a digitally signed list of seed records associated with the serial numbers of the devices. From that point on, administrators assign a token by serial number, which is really just associating the seed record of the device with the user's future authentication attempts. Any time that user attempts to authenticate, the server takes the current time and the seed record and computes its own tokencode for comparison to the user's input. In fact, one well-known troubleshooting problem happens when the server and token get out of time synchronization; NTP is usually the answer to that problem.
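A generic sketch of that server-side check is below (an HMAC-based stand-in, not the proprietary RSA SecurID algorithm; the seed value is a placeholder), including the drift window that makes time synchronization matter:

```python
# Server-side verification: recompute the code from the stored seed and the
# current time slot, allowing a little clock drift. Stand-in PRF only.
import hashlib
import hmac
import struct
import time

def tokencode(seed, minute):
    mac = hmac.new(seed, struct.pack(">Q", minute), hashlib.sha1).digest()
    return f"{int.from_bytes(mac[:4], 'big') % 1_000_000:06d}"

def verify(seed, submitted, drift_slots=1):
    """Accept the code for the current minute +/- drift_slots minutes.
    If the server clock drifts further than this (no NTP), logins fail."""
    now = int(time.time() // 60)
    return any(tokencode(seed, now + d) == submitted
               for d in range(-drift_slots, drift_slots + 1))

seed = bytes.fromhex("00112233445566778899aabbccddeeff")      # placeholder seed
print(verify(seed, tokencode(seed, int(time.time() // 60))))  # True
```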

This further strengthens the theory that seed records were stolen by the "advanced persistent threat", since any customer with a copy of the server-side components essentially has the algorithm, via common binary reversing techniques. The server's CPU must be able to compute the tokencode via the algorithm, so monitoring the instructions as they enter the CPU will divulge the algorithm. This is not a new threat, and certainly nothing worthy of a new moniker; the most common example of reversing binaries is bypassing software licensing features, and it doesn't take a world-class threat to do that. What is much, much more likely is that RSA SecurID seed records were indeed stolen.

The only item of value that could be even more damaging might be the algorithm RSA uses to establish seed records and associate them with serial numbers. Assuming there is some repeatable process for that-- and it makes sense to believe there is, since that would make production manufacturing of those devices simpler-- then stealing that algorithm is like stealing all seed records: past, present, and future.

Likewise, even if source code is the item that was stolen, it's unlikely that any of it will translate into real attacks, since most RSA SecurID installations do not directly expose the RSA servers to the Internet. They're usually called upon by end-user-facing systems like VPNs or websites, and the Internet tier generally packages up the credentials and passes them along in a different protocol, like RADIUS. The only way a vulnerability in the stolen source code would become very valuable is if an injection vulnerability were found in it, such as malicious input to a username and password challenge that caused the back-end RSA SecurID systems to fail open, much like a SQL injection attack. It's possible, but it's much more probable that seed records were the stolen item of value.


How to Respond to the News
Lots of advice has been shared for how to handle this bad news. Most of it is good, but a couple items need a reality check.


RSA filed with the SEC, and in their filing there is a copy of their customer support note on the issue. At the bottom of the form is a list of suggestions:
  • We recommend customers increase their focus on security for social media applications and the use of those applications and websites by anyone with access to their critical networks.
  • We recommend customers enforce strong password and pin policies.
  • We recommend customers follow the rule of least privilege when assigning roles and responsibilities to security administrators.
  • We recommend customers re-educate employees on the importance of avoiding suspicious emails, and remind them not to provide user names or other credentials to anyone ...
  • We recommend customers pay special attention to security around their active directories, making full use of their SIEM products and also implementing two-factor authentication to control access to active directories.
  • We recommend customers watch closely for changes in user privilege levels and access rights using security monitoring technologies such as SIEM, and consider adding more levels of manual approval for those changes.
  • We recommend customers harden, closely monitor, and limit remote and physical access to infrastructure that is hosting critical security software.
  • We recommend customers examine their help desk practices for information leakage that could help an attacker perform a social engineering attack.
  • We recommend customers update their security products and the operating systems hosting them with the latest patches.
[emphasis added by Securology]

Unless RSA is sitting on some new way to shim into the Microsoft Active Directory (AD) authentication stacks (and they have not published it), there is no way to accomplish what they have stated there in bold. AD consists mainly of LDAP and Kerberos, with a sprinkling of a few other neat features (not going into those for brevity). LDAP/LDAPS (the secure SSL/TLS version) and Kerberos are both based on passwords as the authentication secret. They cannot simply be upgraded into using two-factor authentication.

Assuming RSA is suggesting installing the RSA SecurID agent for Windows on each Domain Controller in an AD forest, that still does not prevent someone from making changes inside AD Users & Computers, because any client must be able to talk Kerberos and LDAP to at least one Domain Controller for AD's basic interoperability to function-- and the same firewall rules that allow those services will also allow authenticated and authorized users to browse and modify objects within the directory. What they're suggesting just doesn't seem possible and must have been written by somebody who doesn't understand the Microsoft Active Directory product line very well.


Securosis has a how-to-respond list on their blog:
Remember that SecurID is the second factor in a two-factor system… you aren’t stripped naked (unless you’re going through airport security). Assuming it’s completely useless now, here is what you can do:
  1. Don’t panic. We know almost nothing at this point, and thus all we can do is speculate. Until we know the attacker, what was lost, how SecurID was compromised (if it is), and the vector of a potential attack we can’t make an informed risk assessment.
  2. Talk to your RSA representative and pressure them for this information.
  3. Assume SecureID is no longer effective. Review passwords tied to SecurID accounts and make sure they are strong (if possible).
  4. If you are a high-value target, force a password change for any accounts with privileges that could be overly harmful (e.g. admins).
  5. Consider disabling accounts that don’t use a password or PIN.
  6. Set password attempt lockouts (3 tries to lock an account, or similar).
[Emphasis added by Securology]
To their first point, I think we can know what was lost: seed records. Had it been anything less, there would be no point in filing with the SEC and publicly disclosing the incident. Anybody can learn the algorithm for computing one-time passwords by reversing the server side (see above). The only other component in the process is the current time, which is public information. The only private information is the seed record.

On point #4, if your organization is a high-value target, flagging RSA SecurID users to change the PINs or passwords associated with their accounts may not be a good idea, because, as the defender, you have to assume this well-resourced adversary already has your seed records and therefore could respond to the password-reset challenge. A better solution, if your organization is small, is to meet with high-value users in person and reset their credentials. If you cannot do that because your organization is too large, then your only real option is to monitor user behavior for abnormalities-- which is where most of your true value should come from anyway.

This does tie in well with their second suggestion-- pressuring your RSA contact for more information. In all likelihood, if our speculation that seed records were stolen is correct, then the only solution is to demand new RSA SecurID tokens from RSA to replace the ones you currently have. And if RSA is not quick to respond to that, it's for one of two reasons:
  1. This is going to financially hurt them in a very significant way and it's not easy to just mass produce 40 million tokens overnight, OR,
  2. RSA's algorithm for generating seed records and assigning them to token serial numbers is compromised, and they're going to need some R&D time to come up with a fix without breaking current customers who order new tokens under the new seed record generation scheme in the future.

UPDATED TO ADD: Since all signs indicate the seed records were compromised, and since Art Coviello's message is that no RSA customer should have reduced security as a result of the breach, then that must mean RSA does not believe SecurID is worth the investment. After all, if the RSA SecurID seed records were stolen, it effectively reduces any implementation to just a single factor: the PINs/passwords that are requested in addition to the tokencode. And who would buy all that infrastructure and hand out worthless digital keychains when they can get single-factor password authentication for super cheap with Microsoft's Active Directory?

Friday, May 21, 2010

Verisign Turns Yellow

On the heels of turning PGP Corp Yellow, now Verisign is turning Yellow, too: Symantec is acquiring Verisign's authentication and certificate business.

These overpriced "security solutions" are going to go from bad to worse. I predict agile startups are going to crush them on price, since Symantec's goal is obviously to own the entire market with a one-size-fits-all approach, while some startups and smaller companies will probably better understand their customers' needs.

It's ironic how the PGP (distributed) model once fought strongly against the PKI (hierarchical, centralized) model. But now, thanks to deep pockets at Big Yellow, they'll be wearing the same uniform.

SSL and crypto are now commodities, so where are the commodity prices from PGP Corp, Verisign, and Symantec? Simple: you won't find them on their price lists.

I've ranted many times about both companies. PGP tries to sell you goods they admit won't solve the problems they're designed for (the "all bets are off when you lose physical control of the device" excuse). And Verisign tries to double-dip on premium "Extended Validation" SSL certs, ignoring their culpability in Certificate Authorities granting SSL certificates to frauds and phishers-- they want you to pay extra for their mistakes.

Do us all a favor and use open source or support their competitors who have true commodity prices.

Wednesday, December 2, 2009

The Reality of Evil Maids

There have been many attacks on whole disk encryption recently:
  1. Cold Boot attacks in which keys hang around in memory a lot longer than many thought, demonstrating how information flow may be more important to watch than many acknowledge.
  2. Stoned Boot attacks in which a rootkit is loaded into memory as part of the booting process, tampering with system level things, like, say, whole disk encryption keys.
  3. Evil Maid attacks in which Joanna Rutkowska of Invisible Things Lab suggests tinkering with the plaintext boot loader. Why is it plain text if the drive is encrypted? Because the CPU has to be able to execute it, duh. So, it's right there for tampering. Funny thing: I suggested tampering with the boot loader as a way to extract keys way back in October of 2007 when debating Jon Callas of PGP over their undocumented encryption bypass feature, so I guess that means I am the original author of the Evil Maid attack concept, huh?

About all of these attacks, Schneier recently said:
This attack exploits the same basic vulnerability as the "Cold Boot" attack from last year, and the "Stoned Boot" attack from earlier this year, and there's no real defense to this sort of thing. As soon as you give up physical control of your computer, all bets are off.
"As soon as you give up physical control of your computer, all bets are off"??? Isn't that the point of these encryption vendors (Schneier is on the technical advisory board of PGP Corp-- he maybe doesn't add that disclaimer plainly enough). Sure enough, that's the opposite of what PGP Corp claims: "Data remains secure on lost devices." Somebody better correct those sales & marketing people to update their powerpoint slides and website promotions.

To put this plainly: if you still believe that whole disk encryption software is going to keep a skilled or determined adversary out of your data, you are sadly misled. We're no longer talking about 3 letter government agencies with large sums of tax dollars to throw at the problem-- we're talking about script kiddies being able to pull this off. (We may not quite be at the point where script kiddies can do it, but we're getting very close.)

Whole disk encryption software will only stop a thief who is interested in the hardware rather than your data-- and that thief may not even be smart enough to know how to take a hard drive out of a laptop and plug it into another computer in the first place. You had better hope that thief doesn't sell it on eBay to somebody who is more interested in the data than the hardware.

Whole Disk Encryption fails to deliver what it claims. If you want safe data, you need to keep as little of it as possible on any mobile devices that are easily lost or stolen. Don't rely on magic crypto fairy dust and don't trust anyone who spouts the odds or computation time required to compute a decryption key. It's not about the math; it's about the keys on the endpoints.

Trusted Platform Modules (TPMs) (like what Vista can be configured to use) hold out some hope, assuming that somebody cannot find a way to extract the keys out of them by spoofing a trusted bootloader. After all, a TPM is basically just a black box: you give it an input (a measurement of a trusted bootloader binary, for example) and it gives you an output (an encryption key). Since TPMs are accessible over a system bus, which is shared among all components, it seems plausible that a malicious device or even a device driver could either make a copy of the key as it travels back across the system bus, OR simply feed the TPM the proper input (not by booting the trusted bootloader, but by booting an alternative bootloader and then feeding the TPM the correct binary image) to retrieve the output it wants.
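A conceptual sketch of that black-box behavior follows. This is a toy model; real TPMs use PCR registers and sealed storage, but the input-to-output shape is the point:

```python
# Toy model of the black box described above: a "TPM" that releases a sealed
# disk key only when fed the measurement (hash) recorded at seal time.
import hashlib
import os

class ToyTPM:
    def __init__(self):
        self._sealed = {}   # measurement -> key

    def seal(self, bootloader_image, key):
        self._sealed[hashlib.sha256(bootloader_image).digest()] = key

    def unseal(self, measurement):
        return self._sealed.get(measurement)

trusted_bootloader = b"...genuine bootloader binary..."   # placeholder bytes
disk_key = os.urandom(32)

tpm = ToyTPM()
tpm.seal(trusted_bootloader, disk_key)

# Legitimate boot: firmware measures the real bootloader and gets the key.
print(tpm.unseal(hashlib.sha256(trusted_bootloader).digest()) == disk_key)  # True

# The attack sketched above: anything that merely holds a copy of the genuine
# image can compute the same measurement and pull out the same key.
evil_measurement = hashlib.sha256(trusted_bootloader).digest()
print(tpm.unseal(evil_measurement) == disk_key)  # True -- the key leaks anyway
```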

Wednesday, July 22, 2009

PCI Wireless Insanity

I'm not sure if this dethrones what I previously referred to as the Stupidest PCI Requirement Ever, but it's close. Sometimes the PCI people are flat-out crazy, maybe even stupid. This is one of those times.

Fresh off the presses, the PCI Security Standards Council has just released (on July 16th) a 33-page wireless guidance document that explains in detail exactly what requirements a PCI-compliant organization MUST meet in the PCI DSS. (The wireless document is here.) A few things to highlight in that document ...


1. EVERYONE must comply with the wireless requirements. There's no getting out of it just because you do not use wireless:
"Even if an organization that must comply with PCI DSS does not use wireless networking as part of the Cardholder Data Environment (CDE), the organization must verify that its wireless networks have been segmented away from the CDE and that wireless networking has not been introduced into the CDE over time. " (page 9, first paragraph)
2. That includes looking for rogue access points:
"Regardless of whether wireless networks have been deployed, periodic monitoring is needed to keep unauthorized or rogue wireless devices from compromising the security of the CDE." (page 9, third paragraph)
3. Which could be ANYWHERE:
"Since a rogue device can potentially show up in any CDE location, it is important that all locations that store, process or transmit cardholder data are either scanned regularly or that wireless IDS/IPS is implemented in those locations." (page 10, third paragraph)
4. So you cannot just look for examples:
"An organization may not choose to select a sample of sites for compliance. Organizations must ensure that they scan all sites." (emphasis theirs, page 10, fourth paragraph)
5. So, how in the world can you implement this?
"Relying on wired side scanning tools (e.g. tools that scan suspicious hardware MAC addresses on switches) may identify some unauthorized wireless devices; however, they tend to have high false positive/negative detection rates. Wired network scanning tools that scan for wireless devices often miss cleverly hidden and disguised rogue wireless devices or devices that are connected to isolated network segments. Wired scanning also fails to detect many instances of rogue wireless clients. A rogue wireless client is any device that has a wireless interface that is not intended to be present in the environment." (page 10, sixth paragraph)
6. You have to monitor the air:
"Wireless analyzers can range from freely available PC tools to commercial scanners and analyzers. The goal of all of these devices is to “sniff” the airwaves and “listen” for wireless devices in the area and identify them. Using this method, a technician or auditor can walk around each site and detect wireless devices. The person would then manually investigate each device." (page 10, seventh paragraph)
7. But that's time consuming and expensive to do:
"Although [manually sniffing the air] is technically possible for a small number of locations, it is often operationally tedious, error-prone, and costly for organizations that have several CDE locations." (page 11, first paragraph)
8. So, what should an enterprise-grade organization do?
"For large organizations, it is recommended that wireless scanning be automated with a wireless IDS/IPS system." (page 11, first paragraph)

In other words, you must deploy a wireless infrastructure at each location where cardholder data may exist, because that's what it takes to implement a wireless IDS. At a minimum, you must deploy an access point to monitor the airwaves. But that carries all the same costs (or more) as just using wireless in the first place, so you might as well deploy wireless at each location. At least for now, the document does go on to indicate that wireless scans can still be performed quarterly and that a wireless IDS/IPS is just a method of automating that process. I will not be surprised to see a later revision demand full-time scanning via an IDS/IPS, ditching the current once-every-90-days requirement.
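For what it's worth, the "wired side scanning" the Council dismisses amounts to something like the sketch below: compare MAC addresses learned on your switches against OUI prefixes associated with consumer wireless gear. The OUI list and MAC table here are illustrative placeholders, and the approach misses exactly what the guidance says it misses (spoofed MACs, disguised devices, isolated segments):

```python
# Wired-side rogue AP heuristic: flag switch-learned MACs whose OUI prefix
# looks like a consumer wireless vendor. All values below are placeholders;
# a real implementation would pull the MAC table via SNMP/SSH and use a
# maintained OUI database.
SUSPECT_OUIS = {
    "00:18:39": "consumer AP vendor A (hypothetical)",
    "00:1f:33": "consumer AP vendor B (hypothetical)",
}

switch_mac_table = [          # placeholder data; normally polled from switches
    "00:18:39:aa:bb:cc",
    "3c:97:0e:11:22:33",
]

for mac in switch_mac_table:
    vendor = SUSPECT_OUIS.get(mac[:8].lower())
    if vendor:
        print(f"Possible rogue AP on the wire: {mac} ({vendor})")
```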

Apparently, one or more of the following are true:
The PCI Security Council are not the sort of security practitioners who believe that not deploying wireless can itself be a security measure, because clearly they want you to buy wireless equipment-- and lots of it.
  • The PCI Security Council are receiving kickbacks from wireless vendors who want to sell their wares even to customers outside of their market and forcing wireless on all PCI merchants is a means to achieve that goal.
  • The PCI Security Council does not believe merchants will ever band together to say "enough is enough".
The PCI Security Council are control freaks with megalomaniacal tendencies (they want to dictate to the world).

The irony here is that the PCI Security Council is paranoid-- er, extremely concerned-- about the use of consumer-grade wireless data transmission equipment in a credit card heist. By that, I mean they are concerned enough to mandate that merchants spend considerable time, energy, and dollars watching to make sure devices that communicate on the 2.4 GHz and 5 GHz bands using IEEE 802.11 wireless protocols are not suddenly introduced into cardholder data environments without authorization. What's next on this slippery slope? What about the plausibility of bad guys modifying rogue access point equipment to use non-standard ranges of the wireless spectrum (Layer 1-- beware the FCC!) or modifying the devices' Layer 2 protocols so they do not conform to IEEE 802.11? The point is, data can be transmitted beyond those limitations!

[Imagine a conspiracy theory in which wireless hardware manufacturers are padding the PCI Security Council's pocketbooks to require wireless devices at every merchant location, while at the same time producing user-programmable wireless access points in a pocket-sized form factor, enabling the credit-card-skimming black market to evade the 2.4/5 GHz and 802.11 boundaries merchants have been told they must protect.]

There are no published breach statistics (that I am aware of) that support this type of nonsensical approach.

To make matters worse, in PCI terms, an organization is non-compliant IF a breach CAN or DOES occur. In other words, the PCI Data Security Standards (DSS) are held in such high regard that the Council believes it is impossible both to comply with every requirement contained within them AND to experience a breach of cardholder data. In the case of these new wireless explanations of requirements (the PCI Security Council will argue these requirements already existed and this is just a more elaborate explanation of them), if an organization that previously had an accepted Report On Compliance (RoC) based on wired scanning for rogue wireless devices experiences a breach, it will immediately be considered out of compliance and thus have to pay the higher fines that all non-compliant organizations face.


Ah, what fun the PCI Security Council has dropped on merchants this month!

Pay
Cash
Instead

...

The academic security research community will find this interesting, because what the PCI Security Council is trying to do is prevent "unintended channels" of information flow. This is very difficult (if not computationally impossible, in the spirit of Turing's Halting Problem). Even more difficult may be detecting "covert channels", an even trickier subset of the "unintended channel" information flow problem. What's next, PCI mandating protection against timing-based covert channels?

Thursday, May 28, 2009

More Fake Security

The uninstallation program for Symantec Anti-Virus requires an administrator password that is utterly trivial to bypass. This probably isn't new for a lot of people. I always figured this was weak under the hood, like the password was stored in plaintext in a configuration file or registry key, or stored as a hash output of the password that any admin could overwrite with their own hash. But it turns out it's even easier than that. The smart developers at Symantec were thoughtful enough to have a configuration switch to turn off that pesky password prompt altogether. Why bother replacing a hash or reading in a plaintext value when you can just flip a bit to disable the whole thing?

Just flip the bit from 1 to 0 on the registry value called UseVPUninstallPassword at HKEY_LOCAL_MACHINE\SOFTWARE\INTEL\LANDesk\VirusProtect6\CurrentVersion\Administrator Only\Security. Then re-run the uninstall program.
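For illustration, here is a minimal read-only sketch (Windows only) of checking whether that value has been flipped; the same handful of lines, pointed the other way, is all the drive-by scenario below would need:

```python
# Read-only check of the Symantec uninstall-password switch described above.
# Flipping it is just as trivial, which is the whole point of the post.
import winreg

KEY_PATH = (r"SOFTWARE\INTEL\LANDesk\VirusProtect6\CurrentVersion"
            r"\Administrator Only\Security")

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    value, _ = winreg.QueryValueEx(key, "UseVPUninstallPassword")

if value == 0:
    print("Uninstall password prompt is disabled -- someone flipped the bit.")
else:
    print("Uninstall password prompt is still enabled.")
```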

I am aware of many large organizations that provide admin rights to their employees on their laptops, but use this setting as a way to prevent them from uninstalling their Symantec security products. Security practitioners worth their salt will tell you that admin rights = game over. This was a gimmick of a feature to begin with. What's worse is that surely at least one developer at Symantec knew that before the code was committed into the product, but security vendors have to sell out and tell you that perpetual motion is possible so you'll spend money with them. These types of features demonstrate the irresponsibility of vendors (Symantec) who build them.

And if you don't think a user with admin rights will do this, how trivial would it be for drive-by malware executed by that user to do this? Very trivial.

Just another example on the pile of examples that security features do not equal security.

Friday, May 15, 2009

"Application" vs "Network" Penetration Tests

Just my two cents, but if you have to debate the distinction between an "application" and a "network" penetration test, then you're missing the point and probably not testing anything worthwhile.

First of all, the "network" is not an asset. It's a connection medium. Access to a set of cables and blinky lights means nothing. It's the data on the systems that use the "network" that are the assets.

Second, when a pen tester says they're doing a "network penetration test", they really mean they're going to simulate an attacker who will attack a traditional application-- usually a "canned" application, like one that runs as a service out of the box on a consumer operating system. It's more than just an authentication challenge (though it could be that). It's likely looking for software defects in those canned applications or commonly known insecure configurations, but it's really still an application that they are testing. [In fact, the argument that a "network penetration test" is nothing more than a vulnerability scan seems plausible to me.]

Third, when they say "application penetration test", they are typically talking about either custom software applications or at least an application that didn't come shipped with the OS.

Fourth, if you're trying to test how far one can "penetrate" into your systems to gain access to data, there should be no distinction. Whether the path to the asset you're trying to protect is through a service that comes bundled with a commercial OS or through a custom product, it makes no difference. A penetration is a penetration.


Yet, as an industry, we like to perpetuate stupidity. This distinction between "network" and "application" penetration tests is such a prime example.

Friday, January 9, 2009

So you think you want a job in Computer Security

This is my blatant attempt to re-direct any aspiring, up-and-coming security professionals into another line of work, for the sake of their own physical and mental health.
...

So, you think you want a job in Computer Security, eh? Are you sure? Have you been properly informed about what the work and conditions are really like? Do you have visions of Hollywood movies where Cheetos-eating one-handed typists furiously fend off any would-be "hackers", and think you "want a job like that"? Or have you just heard about large salaries and want to make some extra do-re-mi for another coat of white paint on your picket fence? Or maybe you're one of those who think the "enlightened" few computer professionals rise to the pinnacle of computer security research or applications, and you want a piece of that intellectual satisfaction?

Regardless of why you have been considering a job in computer security (or maybe you landed in one and you're wondering "How did I get here?" and "Now what?"), it is extremely likely you're missing a bit of a reality check you could have used before now. So, a dose of reality ...

  1. Perfect Security is not possible. It's not. It's depressing, I realize, but it's not. You may be surprised to find how many people working {Computer, Information, Network, System, Application, Software, Data, IT} {Security, Assurance, whatever} jobs don't get that. I must admit that a former, more naive version of myself once thought computer security was just a matter of getting some complicated recipe of hardware and software components exactly right. There's still a surprising number of "security professionals" out there who think that way. It's very depressing, but there's a very large "surface" to protect and it only takes a microscopic "chink" in your armor to lose everything. As a result, the impossibility of perfect security is the foundation for all the other reasons you should seriously reconsider your career aspirations.


  2. Most security work is really about making sure everyone else does their job "correctly". Correctness of systems is the real task at hand in a security job. Is it correct that a website of known sex offenders allows the general public to inject records of anyone they want labeled as such? Is it correct for a web server to execute arbitrary code when it is passed 1024 letter "A" characters? Is it correct that a user can click on a link and divulge intimate secrets to a total stranger because the page looks "normal"? None of these are "correct" to anyone looking on afterwards with even a smidge of common sense. Yet they all have happened, and it was some security professional's job to deal with them. To put it simply, if everyone figured out how to design and implement systems "correctly" (assuming they know what is "correct" and what is "incorrect"), then security professionals would be out of a job; but thanks to #1 (perfect security is impossible), we're guaranteed to be picking up the poo poo flung by others from now until retirement, which means the following ...


  3. Security Response jobs suck. It may seem like CSI or something, but jobs that deal with responding to incidents suck. Except in high-profile cases, computer forensics and true chain-of-custody techniques are not followed-- and if you want a computer forensics job, you'll probably have to work for a large government/public sector bureaucracy (and all the fun that goes with spending taxpayers' dollars), which means you'll be primarily working on child pornography or drug trafficking cases and riding daily the fine line between public good and privacy infringement (warrantless wiretaps come to mind). My anecdotal observation is that very, very seldom do drug dealers and child porn traffickers actually employ decent computer security tactics; therefore, the job is a lot less "CSI" and a lot more mind-numbing "lather, rinse, repeat". In the words of someone I know who does this work: "I pretty much just push the big 'Go' button on EnCase [forensics software] and then show up at court explaining what it found." Not exactly the most intellectually stimulating work. The coolness factor wears off in the first 90 days, plus there's the joy of having convicted felons know who you are and that your work put them behind bars-- but not quite long enough, as they might still have a grudge against you when they get out. Even if you're lucky enough to not have a begrudging felon on your hands, there's the deep psychological torment that will slowly boil you alive if you are constantly exposed to the content of criminal minds. Your mileage may vary, but it probably won't be what you expect.

    For those who hope to work responding to computer intrusions, you should realize that very few organizations can afford to keep people on staff who perform only computer intrusion investigations. Most orgs just want to know what it will take to get things back to normal, because doing a full root cause analysis on a computer system that generates revenue likely means the org will have to forgo revenue, at least long enough to take a forensic snapshot of all of the data. Very rarely (mainly in high-profile cases) will an org be able to afford that. So the competition is tough. Not to mention that in many publicly traded companies, there is a kind of legal cover in not knowing exactly how an intrusion occurred-- and even more stigma if the details are made public. So there's just no incentive for them to really find out all of the details. The 20,000-foot view is good enough (e.g. "vulnerability in a web server").

    And then there is an entirely different breed of "computer security professional": those who work on disaster recovery and business continuity planning and response. As you get engrossed in this sort of work, it tends to be less about "security" (critics: I realize "availability" is a tenet of the CIA Triad) and more about the daily use of scare tactics to get organizations to fund remote data centers that are ready for the next apocalypse. The work is surprisingly akin to "facilities" planning: buildings, electric, plumbing. There is a "cyber" aspect to it, but it's mainly about funding the necessary equipment and then getting sysadmins to build it and test it out. That's project manager work: tedious, nanny-like, and often political. It's not for people with short attention spans or high expectations.


  4. Security Operations jobs suck more. Security Ops is at the bottom of the security professionals' totem pole. Most of these jobs are filled by sysadmins or network admins who have been promoted an extra notch, maybe because of that shiny new industry cert that some trade rag said was "hot" and would result in a 15% salary increase. But all of the usual sysadmin/network admin griefs apply here and then some. It's an operations job, so you inherit all of the problematic decisions that the project planning and implementation people lobbed over the fence at you. Very rarely do Security Ops people in an org get to influence the architecture of future deployments. And besides lightweight tweaks like patches or an occasional config change, very rarely do Security Ops folks get to do much to systems "in production", especially "legacy systems" (what part of "legacy" isn't a euphemism?). For the most part, it's sit back and watch to see if a security failure occurs. I use the word "failure" with specific intention: Security Operations folks have to constantly keep delicate china plates spinning atop poles, and each plate represents a particular security failure. As it is with spinning plates, it's often about deciding which failure is more acceptable, not about preventing all failures (see #1, again).

    In fact, there's an interesting twist: Security Ops managers or directors who experience a breach may find themselves losing their jobs on incompetence grounds. Going back to #1, this seems counter-intuitive. If we know perfect security is not possible, then we know security operations will experience a breach at some point (if we give them enough time). How, therefore, can you ever expect to be successful at a security operations job? When the shareholders want to know who was responsible for the unauthorized disclosure of thousands of company-crippling account records, the first person with the cross-hairs on their back is the person in charge of security operations. So surviving at this game requires either company hopping before the inevitable breach occurs, OR politics (or blackmail on somebody high up).

    Outsourced security operations is just a variation of this. If the contract includes full accountability, it's one and the same as what is described above. If it's a "we monitor your systems that you are accountable for" scenario, then you as an individual security operations employee of the contract firm may not get fired, per se, but your company may lose the contract renewal, which means if you allow #1 (above) to be true too many times, then you might find yourself out of a job there, too.

    The worst part about SecOps is that you'll either realize you've hit your Peter Principle ceiling with that job, in which case it's time to spend all of your free time on backyard barbecues and retirement planning (nothing necessarily wrong with that -- ignorance is bliss), OR you'll want out immediately because everyone around you has hit their Peter Principle ceiling and you want more.


  5. Security Planning jobs are set up to fail. Think about it: perfect security is not possible. So even the most cerebral of security planners is going to deliver a work product that has flaws and holes. If you can convince yourself that's not depressing and continue on, maybe you can also be lucky enough to get into an organization whose culture thinks it is acceptable for people to deliver faulty products to a Security Operations group (#4 above)-- and that it is entirely the Operations people's fault when it capsizes. Not to worry, though: you probably won't work for an organization that can afford a true security response group (#3 above -- it's probably just the Security Operations people who get to handle the full response process to break up their mundane day), so nobody may ever know it was your fault. Besides, if you're dealing with a bunch of vendors' COTS (Commercial Off The Shelf) wares, there's not a whole lot of control for you to have, which raises the question of why your organization even has a position for you in the first place. They probably could have just paid some consultant for a couple of weeks, rather than keep you permanently on staff.

    The other downsides are, of course, that you (like the Disaster Recovery & Business Continuity Planners) will also have to use scare tactics to implement draconian policies which probably won't amount to any real benefit, but some "power user" or Joe Software Developer will figure out he can circumvent them if he has two laptops and a flash drive (long story from personal experience). If that doesn't work (or if you just want to cut to the chase), enter regulatory compliance into the equation: "Your project must do that stupid, expensive thing that results in no real added value because PCI says so!" It won't be a policy for something that makes 100% sense 100% of the time. Instead it will be something that makes life difficult for everyone (and everyone will love you for that), but is generally accepted by 3 out of 5 security professionals who also have no clue and are stuck in the dark ages (hence there are a lot of self-perpetuating bad ideas out there, like firewalls and anti-virus). If you're an enlightened security strategist, you'll realize the futility of your job and want out, or you'll revert to longing for weekend barbecues, vacations, and eventually retirement, all the while wondering if this is your Peter Principle job.


  6. Security vendors have to sell out. They sell out because they thrive on the perpetuation of problems, selling subscription services to deal with them. Scare tactics are used so frequently that the vendors have gone numb-- they no longer even notice they're using them. Not to mention, there are so many security vendors out there, from startups to small boutiques, that most security professionals on the potentially-receiving side of their goods and services haven't even heard of them. Or maybe they have? The names all sound so familiar: Securify, Securification, EnGuardiam, Bastillification ... they all seem to make sense if you're still in that state of mind after having woken up from an afternoon nap's dream; otherwise they reek of a society with too many marketing departments and far too many trademarked words and phrases. If the company is any good, it will eventually be swallowed up by one of the bigger fish, like Big Yellow (Symantec), Big Red (McAfee), Big Blue (IBM), or one of the other blander colors (HP, Microsoft, Google, etc.). Only a few stand strong as boutiques, and if they do, they almost certainly have a large bank or government contract as a customer.

    Once you get a job at a security vendor, you'll probably be working as a developer who maintains a security product. And, as Gary McGraw has often pointed out, that's not about writing secure software, that's about writing security features into software. If you're not maintaining it, you'll be supporting it, which is exactly the same as Security Operations (#4 above). You'll be the low-level person who is stuck taking tickets, interpreting manuals (RTFM!), and talking to the Security Ops people at your customers' orgs. Fun times. Don't think for a second you'll go get a job at one of those big companies and fundamentally shake up their product lines and come out with cool new security-features-software that the Security Ops folks could really benefit from. These big companies get new ideas by buying the startups that create them; rarely does a lightbulb idea make its way to fruition internally. In fact, if you have such an epiphany and develop your brainchild into a security startup, rest assured that the bigger fish that swallows you up will succeed in turning your baby into yet-another-amalgamated product in their "enterprise suite" of products and services. It will lose its luster. They'll make the UI match the "portal" their customers already love to hate, but by then, you will have sold out and you can take your new nest egg with you into early retirement (weekend barbecues, here you come!).

    If you're not one of those, then you will really be a sellout-- either a sales rep or a sales engineer. If you are somebody who likes repeating what you say and do, this is the job for you, because you'll repeat the same lowly PowerPoint slide deck that marketing provided (you remember-- the people who came up with that killer company name!) for every customer-- that is, every customer that lets you in past the cold call. If you're the sales rep, remember to drag along your sales engineer to get you out of the sticky situation where you promise some security perpetual motion machine that just isn't possible. And if you're the sales engineer, try to remember that security perpetual motion just isn't possible. It'll be hard to tell the customer that, though, since the PowerPoint slide deck that marketing provided says otherwise. It'll be right there in big red letters: "Secure", "Unbreakable", "Keeps all hackers out", etc., etc., etc.


  7. Pen Testers and Consultants have Commitment Issues. You can sell out, collect a paycheck, and position yourself in one of the jobs with the least accountability and responsibility in the entire InfoSec space. The same is true for third party consultants: any job where you are hired to come along and tell the hiring org where to put more bandaids falls into this category. Sure, there's a broad body of knowledge to comprehend ... but there are plenty of security vendors (see #6 above) who think they have a tool they can sell you so that you can point and click through your brief engagement with the hiring org, which raises the question: why should they even hire you if an automated tool can give them the same results? That's not true of all independent consultants and pen testers, though. Some of them do provide value beyond that of a canned COTS tool. But they all suffer from the same problems as Security Planners (#5 above), only they probably had a prior job working directly for an org and saw how painful it was to stick around through the accountability phase after an incident. So now they've learned their lesson: get in, get out, cash the check. They say: "Hey, it's a living." Are they the smartest security professionals around? Maybe. Do they have what it takes to do the other security jobs like Planning, Ops, and Incident Response? Maybe not.


  8. Exploit writers perpetuate the problem. All they do is sit in a chair all day in front of multiple computer screens (no doubt), attempting to prove over and over again what academics have been saying since the 1970s. Yet there seems to be some economic sustainability, because otherwise the security vendors (#6 above) would have no way to sell you subscription services to access today's latest hack that a criminal might otherwise find on their own. But thanks to the vendor (and the handy, dandy exploit writer they have locked up somewhere with unlimited access to caffeine), we can all rest easy knowing that the exploit code they just wrote won't be weaponized to prove #1 again (that happens all the time, actually), causing some poor Security Ops person (#4) to get sacked, while some Security Planner (#5) thinks "glad I'm on this side of the fence", and some Pen Tester (#7) thinks "I gotta download that into my pen testing tool for tomorrow's gig-- that way I know I'll find a hole and they'll hire me back next year".


  9. Security Educators either are paranoid or should be. If you're just contemplating a career in information or computer security for the first time, you probably aren't acquainted with any of the lovely people in this category, mainly because the good ones are expensive. Typically, it's only existing security professionals that get to experience security educators, because their employers realize that it's important to keep them up to date with information-- primarily thanks to exploit writers (#8) who keep the litany coming. The principles of security rarely change; only the scenery changes (and the exploit writers change scenery like the masters paint in oil).

    Educators fall into one of two categories: 1) they suck because they've been out of the game for so long (if they were ever in it at all), or 2) they're spot on, but they don't want you to know what you're reading now, because you may consider a career change and that's one less pupil, one less paycheck for them. If they're on top of their game, they're paranoid. They have trust issues with everything and everyone. They can't stay away from the topic, so they're very well-versed in what has happened as well as the current goings-on in the field of security, but they have worse commitment issues than Pen Testers and Consultants (#7). They have the ability to scare you, but not in the same way as the security vendors (#6) and security planners (#5); you'll be able to tell that they don't want anything in return-- it's almost a relief for them to share the information they know with someone. Sometimes a vendor sneaks in and pretends to be an educator. Beware of that; the way to spot them is that their horror stories always end in an urge to buy a product or service. You won't come out having learned anything other than that their products solve a niche need.

    Becoming a security educator isn't an easy task; it typically means you were an educator in some other specialty domain and then learned how to teach security (which usually doesn't work as well as having lived it), or you lived it yourself through one of the other job types and have educated yourself beyond the level of ordinary practitioners. If you're already in a security career and find yourself disheartened by the lack of options around you (because you've realized that it isn't the glamorous field you once thought), but find that you have an amazing affinity for learning all that you can, this might be the saving grace that keeps you from leaving everything you've learned behind and taking up a job as a dairy farmer (or some other job that will not require you to touch a computer). There's also the potential for life as an academic, where you can infiltrate-- er, inspire-- open minds that have yet to be corrupted by corporate ways.


  10. Security Media don't really exist. There are like 4 or 5 real "computer security reporters" at official media outlets. Anyone aspiring to join them has about the same odds as becoming a professional athlete-- and that pays better. For all intents and purposes, the rest are either vanilla columnists whose writing makes it obvious they don't understand the technical underpinnings of their subject, OR they're paid bloggers.


  11. And Security Bloggers are the worst above all. (Present company included.) They know some or all of the above and chronicle it where they can, thinking that just collecting their thoughts in some digital pamphlet will change things. In order to be a security blogger of any real significance, you have to be known among the security community. For most, that means affiliation with a brand, product, or service. For a very elite few (the Schneiers out there), that means being one of the first to do so, calling everyone out for who they are, and taking as many opportunities to spout off in normal press/media as they'll allow (e.g. Schneier's a self-proclaimed "media slut"). For the rest of us, this may just be an attempt to alleviate the pressure of painful security information in our brains-- a pressure-release valve.

Do you still think you want a job in computer or information (IT) security? If your sole motivation is a paycheck, even if it means beating your head against the wall while trying to solve unsolvable problems, then this may be a career choice for you. If you can survive without gratitude for a job well done (because when these security professionals are actually successful, by dumb luck or otherwise, they largely go unrecognized and unthanked), then you may have a chance.

If you hope to change the world with your career, may I suggest a rewarding opportunity teaching high school math or science in a public school system? The pay is for shite, and there will be harder days than any you'd have as a security professional, but your pupils will be grateful for a job well done later in life-- even if they never get around to telling you. Besides, everyone knows Americans spend what they make-- just learn to make ends meet on a teacher's salary.

...
[My general apologies for starting off 2009 with a lump that is hard to swallow.]

Friday, May 23, 2008

PCI Silverbullet for POS?

Has Verifone created a PCI silverbullet for Point Of Sale (POS) systems with their VeriShield Protect product? It's certainly interesting. It claims to encrypt credit card data BEFORE it enters POS, passing a similarly formatted (16 digit) encrypted card number into POS that presumably only your bank can decrypt and process.


I have to admit, I like the direction it's headed in. Any organization's goal (unless you are a payment processor) should be to reduce your PCI scope as much as possible, not to bring PCI to your entire organization. This is a perfectly viable, and often overlooked, option for addressing risk: ditch the asset. If you cannot afford to properly protect an asset, and you can find a way to not have to care for the asset anymore, then ditch it.

The questions I have about this specific implementation that are certainly going to have to be answered before anyone can use this to get a PCI QSA off of their back are:

1) What cryptographers have performed cryptanalysis on this "proprietary" design? Verifone's liberty to mingle the words "Triple DES" into their own marketing buzz format, "Hidden TDES", should at least concern you, if you know anything about the history of information security and the track record of proprietary encryption schemes. Since the plaintext and the ciphertext are exactly 16 digits (base 10) long and it appears that only the middle 6 digits are encrypted (see image below), this suggests there may be problems with randomness and susceptibility to other common crypto attacks. Sprinkle in the fact that credit card numbers must comply with the "Mod 10" rule (Luhn algorithm), and I'm willing to bet a good number theorist could really reduce the possibilities for the middle 6 digits. If only the middle 6 digits are encrypted, and they have to be numbers between 0 and 9, then the probability of guessing the correct six digit number is one in a million. But the question is (and it's up to a mathematician or number theorist to answer), how many of the other 999,999 combinations of middle 6 digits, when combined with the first 6 and last 4 digits, actually satisfy the Mod 10 rule? [Especially since the "check digit" in the mod 10 rule is the final digit of the card number, which this method apparently doesn't encrypt.] I'm no mathematician, but since the check digit and the outer digits are fixed, only one in ten of the million middle-digit combinations can produce a valid checksum-- a brute-force space of 100,000. If there are any other mistakes in the "H-TDES" design or implementation, it might be even easier to fill in the middle 6 gap.
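
To put a number on that checksum argument, here's a quick brute-force sketch in plain Python. The fixed first 6 and last 4 digits below are made-up placeholders, and this has nothing to do with Verifone's actual H-TDES internals-- it just counts how many of the million possible middle-6 values pass the Luhn check when the outer digits are held constant.

```python
# Rough sketch (not Verifone's algorithm): count how many middle-6-digit
# combinations pass the Luhn check when the first 6 and last 4 digits are fixed.
from itertools import product

def luhn_ok(pan):
    """Return True if the 16-digit PAN string passes the Luhn (mod 10) check."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

first6, last4 = "411111", "1111"   # hypothetical, fixed (unencrypted) digits

valid = sum(
    luhn_ok(first6 + "".join(middle) + last4)
    for middle in product("0123456789", repeat=6)
)
print(valid)   # prints 100000
```

It prints 100000, i.e. the Luhn rule alone cuts the guessing space by exactly a factor of ten; any structural weakness in the cipher itself would shrink it further.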

It would be great to know that Verifone's design was open and peer-reviewed, instead of proprietary. I'd be very curious to see someone like Bruce Schneier or Adi Shamir spend some time reviewing it.


2) How are the keys generated, stored, and rotated? I certainly hope that all of these devices don't get hardcoded (EEPROMs flashed) with a static shared key (but I wouldn't be surprised if they are). It would be nice to see something like a TPM (secure co-processor) embedded in the device. That way, we'd know there is an element of tamper resistance. It would be very bad if a study like the one the Light Blue Touchpaper guys at Cambridge University just published were to reveal that all of the devices share the same key (or, just as bad, that all of the devices for a given retailer or bank share the same key).

It would be great if each device had its own public keypair and used it to negotiate a session key with the bank's public key. This could be possible if the hardware card-swipe device sent the cardholder data to the bank directly instead of relying on a back office system to transmit it (arguably the back office could still do the transmission, provided the card swipe could establish a session key with the bank directly).
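
For what it's worth, here's a minimal sketch of what that per-device handshake could look like, using ordinary ECDH key agreement plus a KDF via the Python cryptography package. This is purely illustrative of the idea (a device-held keypair deriving a session key against the bank's public key); it is not a description of Verifone's design, and all the names are made up.

```python
# Illustrative only: per-device ECDH key agreement with the bank, then an
# HKDF-derived session key. Not Verifone's design; all names are hypothetical.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# In practice the device keypair would live in tamper-resistant hardware
# (e.g. a secure co-processor), not in host memory like this.
device_private = ec.generate_private_key(ec.SECP256R1())
bank_private = ec.generate_private_key(ec.SECP256R1())   # stands in for the bank's side
bank_public = bank_private.public_key()

# Device side: derive a per-session key from the ECDH shared secret.
shared_secret = device_private.exchange(ec.ECDH(), bank_public)
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"card-swipe-session",   # made-up context label
).derive(shared_secret)

# session_key would then encrypt the track data end-to-end to the bank;
# the retailer's back office only ever relays ciphertext.
```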

3) Will the PCI Security Council endorse a solution like this? (Unfortunately, this is probably the most pressing question on most organizations' minds.) If this does not take the Point of Sale system out of PCI scope, then most retailers will not embrace the solution. If the PCI Security Council looks at this with an open mind, they will seek answers to my questions #1 and #2 before answering #3. In theory, if the retailer has neither knowledge nor possession of the decryption keys, the POS would be no more in PCI scope than the entire Internet is for e-tailers who use SSL.

...

Many vendors (or more accurately "payment service providers") are using "tokenization" of credit card numbers to get the sticky numbers out of e-tailers' databases and applications-- a similar concept applied to e-commerce. Put simply, tokenizing a credit card number means creating a surrogate identifier that means nothing to anyone but the bank (service provider) and the e-tailer. The token replaces the credit card number in the e-tailer's systems, and in best-case scenarios the e-tailer doesn't even touch the card for a millisecond. [Because even a millisecond is long enough to be rooted, intercepted, and defrauded; the PCI Security Council knows that.]
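
As a toy illustration of the tokenization idea (hypothetical code, not any particular provider's API), the vault below would live entirely at the payment service provider; the e-tailer's systems only ever see and store the meaningless token.

```python
# Toy illustration of tokenization (hypothetical; real payment service
# providers use hardened vaults, HSMs, and format-preserving schemes).
import secrets

class TokenVault:
    """Lives at the payment service provider, never at the e-tailer."""
    def __init__(self):
        self._vault = {}                    # token -> real PAN

    def tokenize(self, pan):
        token = secrets.token_urlsafe(16)   # meaningless surrogate identifier
        self._vault[token] = pan
        return token

    def detokenize(self, token):
        return self._vault[token]           # only the provider can do this

vault = TokenVault()
token = vault.tokenize("4111111111111111")
# The e-tailer stores only `token`; the PAN never lands in its database.
```

Real providers layer HSMs, format-preserving tokens, and strict access controls on top of this, but the scope-reduction logic is the same: the PAN never touches the e-tailer's systems.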

It's great to see people thinking about solutions that fit the mantra: "If you don't have to keep it, then don't keep it."

[Note: all images are likely copyrighted by Verifone and are captures from their public presentation in PowerPoint PPS format here.]

...
[Updated May 23, 2008: Someone pointed out that PCI only requires the middle 6 digits (to which I refer in "question 1" above) to be obscured or protected, according to requirement 3.3: "Mask PAN when displayed (the first six and last four digits are the maximum number of digits to be displayed)." Hmmm... I'm not sure how that squares with the very next requirement (3.4): "Render PAN [Primary Account Number], at minimum, unreadable anywhere it is stored." Looks to me like all 16 digits need to be protected-- 3.3 only governs what may be displayed, while 3.4 governs storage.]
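
To make the display-versus-storage distinction concrete, here's a hypothetical masking helper (illustrative only; the function name and formatting are mine, not anything prescribed by the PCI DSS text):

```python
# Requirement 3.3 concerns display: show at most the first 6 and last 4 digits.
def mask_pan(pan: str) -> str:
    """Mask a PAN for display, leaving the first 6 and last 4 digits visible."""
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]

print(mask_pan("4111111111111111"))   # 411111******1111

# Requirement 3.4 concerns storage: the full PAN must be rendered unreadable
# (strong encryption, truncation, hashing, etc.) anywhere it is persisted--
# masking what appears on a screen or receipt does not satisfy that.
```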