
Monday, September 8, 2014

PCI and Retailer Breaches

Just a quick thought in the absence of meaningful thoughts on here ...

When Target was breached at the end of 2013 (and every day since then), pundits have cast judgment on Target from the safety of their climate-controlled armchairs.  "Target was negligent." Or maybe "Their PCI QSA wasn't thorough."  Sentiments along those lines ...

Now, here comes The Home Depot's breach.  Same malware.  Same techniques.  Quite possibly orders of magnitude higher in scope than Target (time will tell).  The same ol' drums will beat from smartphones and tablets in living rooms everywhere.

The reality is ... it's very difficult.  Difficult to get security "correct."  And more difficult to keep it in that "correct" state over time.  All it takes is a single chink in the armor, as the ol' saying goes ...

For the self-proclaimed pundits throwing rocks from glass houses, consider this:
Every single Level 1 merchant that has been breached had a Report on Compliance (RoC) signed by both a third-party assessor and representatives of the credit card brands.

Now for the realistically jaded perspective: PCI is really just about transferring risk to merchants and away from the card brands.  That's it.  Does it work?  Sure it does, since consumers are still using credit cards at merchants, and the economy hums along.

That is all ...

Thursday, March 1, 2012

Brute Forcing Credit Card Numbers

PCI Regulations allow merchants to store the first 6 digits plus the last 4 digits of a customer's credit card number. Ever wonder just how secure that is?

Well, without knowing anything else, if a credit card is stored as 1234-56xx-xxxx-1234, the possible missing middle digits range from 00-0000 to 99-9999: exactly one million (1,000,000) possible combinations. That seems very tough to guess (without being detected).

However, credit card numbers all implement Luhn's Algorithm, a mathematical formula that uses the last digit in the number as a check digit. Not all of the 1,000,000 middle combinations will pass Luhn's check. It turns out (since modulus 10 math is involved) that only 100,000 of the missing middle combinations pass, not a million. Luhn's check reduces the complexity by an order of magnitude.

So, what if an attacker can get just one more digit somehow? Then it's only 10,000 possibilities. What if they can get two more? With n unknown digits, the count follows this formula: 10^(n-1). Here's the table:


6 digits   10^5 = 100,000
5 digits   10^4 = 10,000
4 digits   10^3 = 1,000
3 digits   10^2 = 100
2 digits   10^1 = 10
1 digit*   10^0 = 1

* That makes sense: if you're missing a single digit, Luhn's check will recover it for you. That was the algorithm's original purpose.

Now, as far as practical applications for abusing the knowledge of Luhn's Algorithm on a PCI acceptably-formatted credit card number are concerned ... 100,000 attempted transactions to brute force a card number by a single merchant will certainly be detected and the merchant's ability to process any transaction will be in jeopardy. So, an attacker with access to a merchant account is probably not a valid threat to model.

For an attacker to attempt purchases at varying merchants with this brute-force scheme and everything but the middle 6 digits, the attacker will also need the billing address and potentially the CVV code. That makes the problem significantly harder. But for every middle digit the attacker can discover, the problem gets easier. If the victim is well chosen, and the attacker can do something like shoulder surf at a point-of-sale machine to visually see and remember a digit or two or three, the problem gets noticeably easier. An attacker who can do that can probably also guess the billing address and name. There's still that pesky CVV code, though (another 3 digits, which compounds things).

Realistically, though, for an attacker to get that much information on a victim, the victim would probably have to be oblivious or have an extremely large line of credit to make it worthwhile.

For the rest of us, we're fairly safe with the PCI rules of first 6 plus last 4 digits being public knowledge.

Check this out for yourself. Here's the source code for a very simple C# console application that takes whatever first 6 plus last 4 digits you provide it and churns out all of the possible middle combinations. Here is the inspiration for the C# Luhn's implementation.

using System;
using System.Linq;

namespace Luhn
{
    public class Luhn
    {
        private static int _middle = -1; // last candidate tried; FindNext pre-increments
        private static int _counter;     // how many candidates passed the Luhn check
        private static int _places;      // how many digits are masked with 'X'

        public static void Main(string[] args)
        {
            if (!args.Any())
            {
                PrintUsage();
                return;
            }

            // Normalize the input: strip separators and upper-case the placeholders.
            var cc = args[0].Replace("-", "").Replace(" ", "").Replace("x", "X");

            if (cc.Length != 16)
            {
                Console.WriteLine("Input is not the correct length.");
                PrintUsage();
                return;
            }

            _places = cc.Length - cc.Replace("X", "").Length;
            if (_places == 0)
            {
                Console.WriteLine("No X placeholders found; nothing to brute force.");
                return;
            }

            var limit = (int)Math.Pow(10, _places);

            Console.WriteLine("Places: {0}", _places);
            Console.WriteLine("Limit: {0}", limit);

            // Enumerate candidates 0 .. limit-1, keeping those that pass Luhn's check.
            while (_middle < limit - 1)
            {
                var s = FindNext(cc);
                if (!PassesLuhnCheck(s)) continue;
                Console.WriteLine("Valid: {0}", s);
                _counter++;
            }

            Console.WriteLine("\r\nFound {0} potential matches for {1}", _counter, args[0]);
        }

        private static void PrintUsage()
        {
            Console.Write("Usage: luhn.exe [credit card number]\r\n"
                + " in format like 1234-56xx-xxxx-1234\r\n"
                + " or like 1234-5678-xxxx-1234, etc.\r\n\r\n");
        }

        // Substitute the next candidate (zero-padded to the mask width) for the X's.
        private static string FindNext(string number)
        {
            _middle++;
            var middle = _middle.ToString().PadLeft(_places, '0');
            return number.Replace(GetPlaceHolder(), middle);
        }

        private static string GetPlaceHolder()
        {
            return new string('X', _places);
        }

        // Standard Luhn check. Walking right to left, every second digit
        // (starting with the one left of the check digit) is doubled;
        // deltas[j] is digitsum(2*j) - j, so adding it is the same as
        // replacing j with the digit sum of its double.
        private static bool PassesLuhnCheck(string number)
        {
            var deltas = new[] { 0, 1, 2, 3, 4, -4, -3, -2, -1, 0 };
            var checksum = 0;
            var chars = number.ToCharArray();
            for (var i = chars.Length - 1; i > -1; i--)
            {
                var j = chars[i] - 48; // ASCII digit to integer
                checksum += j;
                if (((i - chars.Length) % 2) == 0) // every second digit from the right
                    checksum += deltas[j];
            }

            return (checksum % 10) == 0;
        }
    }
}
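
To try it, compile and run it against a masked number:

luhn.exe 1234-56xx-xxxx-1234

It will list every Luhn-valid candidate and finish with "Found 100000 potential matches for 1234-56xx-xxxx-1234"-- the 10^5 figure from the table above.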

Thursday, July 1, 2010

Schneier vs PCI

Bruce Schneier just echoed what I wrote back in December 2008: the encryption key management aspects of PCI 1.2 and earlier are flat-out, numbskull-grade absurd.

Here's an excerpt of what I said:
What the authors of the DSS were thinking was that PCI compliant merchants would implement cold war-esque missile silo techniques in which two military officers would each place a physical key into a control console and punch in their portion of the launch code sequence. This is technically possible to do with schemes like Adi Shamir's key splitting techniques. However, it rarely makes sense to do so.

Consider an automated e-commerce system. The notion of automation means it works on its own, without human interaction. If that e-commerce system needs to process or store credit card numbers, it will need to encrypt and decrypt them as transactions happen. In order to do those cryptographic functions, the software must have access to the encryption key. It makes no sense for the software to only have part of the key or to rely on a pair of humans to provide it a copy of the key. That defeats the point of automation.

If the pieces of the key have to be put together for each transaction, then a human would have to be involved with each transaction-- definitely not worth the expense! Not to mention an exploit of a vulnerability in the software could result in malicious software keeping a copy of the full key once it's unlocked anyway (because it's the software that does the crypto functions, not 2 people doing crypto in their heads or on pen and paper!).

If a pair of humans are only involved with the initial unlocking of the key, then the software gets a full copy of the key anyway. Any exploit of a vulnerability in the software could potentially read the key, because the key is in its running memory. So, on the one hand, there is no requirement for humans to be involved with each transaction, thus the e-commerce system can operate more cheaply than, say, a phone-order system or a brick-and-mortar retailer. However, each restart of the application software requires a set of 2 humans to be involved with getting the system back up and online. Imagine the ideal low-overhead e-commerce retailer planning vacation schedules for its minimal staff around this PCI requirement! PCI essentially dictates that more staff must be hired! Or, that support staff that otherwise would NOT have access to a portion of the key (because they take level 1 calls or work in a different group) now must be trusted with a portion of it. More hands involved means more opportunity for collusion, which increases the risk by increasing the likelihood of an incident, which is NOT what the PCI folks are trying to accomplish!

The difference between a cold war missile silo and an e-commerce software application is the number of "secure" transactions each must have. Missile silos do not launch missiles at the rate of several hundred to several thousand an hour, but good e-commerce applications can take that many credit cards. When there are few (albeit more important) transactions like entering launch codes, it makes sense to require the attention of a couple different people.

So splitting the key such that an e-commerce software application cannot have the full key is stupid.
Here's an excerpt of what Bruce said:
Let's take a concrete example: credit card databases associated with websites. Those databases are not encrypted because it doesn't make any sense. The whole point of storing credit card numbers on a website is so it's accessible -- so each time I buy something, I don't have to type it in again. The website needs to dynamically query the database and retrieve the numbers, millions of times a day. If the database were encrypted, the website would need the key. But if the key were on the same network as the data, what would be the point of encrypting it? Access to the website equals access to the database in either case. Security is achieved by good access control on the website and database, not by encrypting the data.
It's nice to be validated from time to time, especially from the best.

Wednesday, July 22, 2009

PCI Wireless Insanity

I'm not sure if this dethrones what I previously referred to as the Stupidest PCI Requirement Ever, but it's close. Sometimes the PCI people are flat-out crazy, maybe even stupid. This is one of those times.

Fresh off the presses, the PCI Security Standards Council just released (on July 16th) a 33-page wireless guidance document that explains in detail exactly what requirements a PCI compliant organization MUST meet in the PCI DSS. (The wireless document is here.) A few things to highlight in that document ...


1. EVERYONE must comply with the wireless requirements. There's no getting out of it just because you do not use wireless:
"Even if an organization that must comply with PCI DSS does not use wireless networking as part of the Cardholder Data Environment (CDE), the organization must verify that its wireless networks have been segmented away from the CDE and that wireless networking has not been introduced into the CDE over time. " (page 9, first paragraph)
2. That includes looking for rogue access points:
"Regardless of whether wireless networks have been deployed, periodic monitoring is needed to keep unauthorized or rogue wireless devices from compromising the security of the CDE." (page 9, third paragraph)
3. Which could be ANYWHERE:
"Since a rogue device can potentially show up in any CDE location, it is important that all locations that store, process or transmit cardholder data are either scanned regularly or that wireless IDS/IPS is implemented in those locations." (page 10, third paragraph)
4. So you cannot just look for examples:
"An organization may not choose to select a sample of sites for compliance. Organizations must ensure that they scan all sites." (emphasis theirs, page 10, fourth paragraph)
5. So, how in the world can you implement this?
"Relying on wired side scanning tools (e.g. tools that scan suspicious hardware MAC addresses on switches) may identify some unauthorized wireless devices; however, they tend to have high false positive/negative detection rates. Wired network scanning tools that scan for wireless devices often miss cleverly hidden and disguised rogue wireless devices or devices that are connected to isolated network segments. Wired scanning also fails to detect many instances of rogue wireless clients. A rogue wireless client is any device that has a wireless interface that is not intended to be present in the environment." (page 10, sixth paragraph)
6. You have to monitor the air:
"Wireless analyzers can range from freely available PC tools to commercial scanners and analyzers. The goal of all of these devices is to “sniff” the airwaves and “listen” for wireless devices in the area and identify them. Using this method, a technician or auditor can walk around each site and detect wireless devices. The person would then manually investigate each device." (page 10, seventh paragraph)
7. But that's time consuming and expensive to do:
"Although [manually sniffing the air] is technically possible for a small number of locations, it is often operationally tedious, error-prone, and costly for organizations that have several CDE locations." (page 11, first paragraph)
8. So, what should an enterprise-grade organization do?
"For large organizations, it is recommended that wireless scanning be automated with a wireless IDS/IPS system." (page 11, first paragraph)

In other words, you must deploy a wireless infrastructure at each location where cardholder data may exist, because that's what it takes to implement a wireless IDS. At minimum, you must deploy an access point operating as a sensor to monitor the airwaves. But that carries all the same costs (or more) as just using wireless in the first place. So you might as well deploy wireless at each location. At least for now, the document does go on to indicate that wireless scans can still be performed quarterly and that a wireless IDS/IPS is just a method of automating that process. I will not be surprised to see a later revision demand full-time scanning via an IDS/IPS, ditching the current once-every-90-days requirement.

Apparently, one or more of the following are true:
  • The PCI Security Council are not of the school of security practitioners who believe that not deploying wireless is itself a security measure, because clearly they want you to buy wireless equipment-- and lots of it.
  • The PCI Security Council are receiving kickbacks from wireless vendors who want to sell their wares even to customers outside of their market and forcing wireless on all PCI merchants is a means to achieve that goal.
  • The PCI Security Council does not believe merchants will ever band together to say "enough is enough".
  • The PCI Security Council are control freaks with megalomaniacal, dictate-to-the-world tendencies.

The irony here is that the PCI Security Council is paranoid (read: extremely concerned) about the use of consumer-grade wireless data transmission equipment in a credit card heist. By that, I mean they are concerned enough to mandate that merchants spend considerable time, energy, and dollars watching to make sure devices that communicate on the 2.4 GHz and 5 GHz spectrums using IEEE 802.11 wireless protocols are not suddenly introduced into cardholder data environments without authorization. What's next on this slippery slope? What about the plausibility of bad guys modifying rogue access point equipment to use non-standard ranges of the wireless spectrum (Layer 1 -- beware the FCC!) or modifying the devices' Layer 2 protocols so they do not conform to IEEE 802.11? The point is, data can be transmitted beyond those limitations!

[Imagine a conspiracy theory in which wireless hardware manufacturers are padding the PCI Security Council's pocketbooks to require wireless devices at every merchant location, while at the same time, the wireless hardware manufacturers start producing user-programmable wireless access points in a pocket-sized form factor to enable the credit card skimming black market to evade the 2.4/5 GHz and 802.11 boundaries in which a merchant has been dictated they must protect.]

There are no published breach statistics (that I am aware of) that support this type of nonsensical approach.

To make matters worse, in PCI terms, an organization is non-compliant IF a breach CAN or DOES occur. In other words, the PCI Data Security Standards (DSS) are held in such high regard that they believe it is impossible to both comply with every requirement contained within them AND experience a breach of cardholder data. In the case of these new wireless explanations of requirements (the PCI Security Council will argue the requirements already existed and this is just a more elaborate explanation of them), an organization that experienced a breach after having an accepted Report on Compliance (RoC) based on wired scanning for rogue wireless devices would immediately be considered out of compliance, and would thus face the higher fines that all non-compliant organizations face.


Ah, what fun the PCI Security Council has dropped on merchants this month!

Pay
Cash
Instead

...

The academic security research community will find this interesting, because what the PCI Security Council is trying to do is prevent "unintended channels" of information flow. This is very difficult (if not computationally impossible-- think Turing's Halting Problem). Even more difficult may be detecting "covert channels", an even trickier subset of the "unintended channel" class of information flow problems. What's next, PCI mandating protection against timing-based covert channels?

Monday, July 13, 2009

Random Active Directory Quirkiness

Do you need to comply with some external regulation (think PCI) that requires your Microsoft Active Directory (AD) passwords to be changed frequently, yet you have an account that you suspect will break applications if its password is changed?

I am obviously not encouraging anyone to use the following quirky feature of AD to be dishonest with an auditor, but it is always interesting to find "fake" security features or at least features that can be manipulated in unexpected ways.

If you check the "User must change password at next logon" box on an account in Active Directory Users & Computers, it does something very interesting under the hood-- it deletes the value of the "PwdLastSet" attribute. The "PwdLastSet" attribute is a date-time representation, but the semantic behavior of AD when that field is empty (or zeroed out) is equivalent to the force-password-change checkbox you may have seen thousands of times before and perhaps believed was stored in AD as a boolean true/false value or something similar.

The really interesting behavior occurs when you uncheck the box. BEFORE the box was checked, an actual date was stored in the "PwdLastSet" attribute. When the box was checked and the changes applied to the account, that date in "PwdLastSet" was lost forever. So, if you uncheck the box BEFORE the user account logs on and is forced to change, what can the AD Users & Computers tool do? It has forever forgotten the true date when the account's password was last set. So, the AD U&C developers did what any good developer would do: improvise.

So, in the bizarre situation where the force password change box is checked, applied, then unchecked, AD Users & Computers writes the current date-time into the "PwdLastSet" attribute, which has the unintended consequence of making the account look like the password was just changed.
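
If you want to reproduce the quirk outside of the GUI, here's a minimal sketch using System.DirectoryServices (the DN is hypothetical; try this only against a disposable test account). Writing 0 to "PwdLastSet" is the checkbox; writing -1 afterward is the uncheck that stamps the current date-time:

using System;
using System.DirectoryServices; // assumes Windows and domain membership

class PwdLastSetQuirk
{
    static void Main()
    {
        // Hypothetical distinguished name; point this at a test account.
        using (var user = new DirectoryEntry("LDAP://CN=Test User,OU=Staff,DC=example,DC=com"))
        {
            // Checking "User must change password at next logon" zeroes PwdLastSet.
            user.Properties["pwdLastSet"].Value = 0;
            user.CommitChanges();

            // Unchecking it before the user logs on: the original timestamp is
            // gone forever, so writing -1 tells AD to stamp PwdLastSet with "now".
            user.Properties["pwdLastSet"].Value = -1;
            user.CommitChanges();

            Console.WriteLine("PwdLastSet now reads as the current date-time.");
        }
    }
}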

Happy password policy circumventing!

Friday, May 15, 2009

PCI & Content Delivery Networks

Here's an interesting, but commonly overlooked, little security nugget.

If you are running an e-commerce application and rely on a Content Delivery Network (CDN), such as Akamai, beware how your customers' SSL tunnels start and stop.

I came across a scenario in which an organization-- one that has passed several PCI Reports on Compliance (RoCs)-- used Akamai as a redirect for their www.[companyname].com e-commerce site. Akamai does their impressive geographical caching stuff by owning the "www" DNS record and responding with an IP based on where you are. They do great work. The organization hosts the web, application, and database servers in an expensive, state-of-the-art, top-five hosting facility. Since it's known that credit card data passes through the web, app, and database tiers, the organization has PCI binding language in their contract with the hosting provider, which requires the hosting provider to do the usual litany to protect credit cards (firewalls, IDS, biometrics-- must have a note from your mom before you can set foot on-site, that sort of thing). And the organization with the goods follows all appropriate PCI controls, obviously, as they have passed their RoC year after year since the origin of PCI.

Funny thing ... it wasn't until some questions came up about how SSL (TLS) really works under the hood that a big, bad hole was discovered. One of the IT managers was pursuing the concept of Extended Validation certs (even though EV certs are a stupid concept), and an "engineer" (I use that term loosely) pointed out that if they purchased the fancy certs and put them on the web servers at the hosting provider, they would fail to turn their customers' address bars green. Why? Because of the content delivery network.

You see, SSL/TLS happens lower in the OSI model than HTTP does. That means a customer who wants to start an encrypted tunnel with "www.somecompany.com" must first look up the DNS entry, then attempt SSL/TLS over TCP port 443 with whatever IP address comes back. This is important: the browser does NOT say "Hey, I want 'www.somecompany.com', is that you? Okay ... NOW ... let's exchange keys and start a tunnel."

In this case, as Akamai hosts the "www" record for "somecompany.com", Akamai must be ready for HTTPS calls into their service. "But wait ... " (you're thinking) " ... Akamai just delivers static content like images or resource files. How can they handle the unique and dynamic behaviors of the application which is required on the other end of the SSL/TLS tunnel?" The answer to your question is: They can't.

On the one hand, the CDN could refuse to accept traffic on port 443 or just refuse to handshake SSL/TLS requests. But that would break transactions to your "https://www.somecompany.com" URLs.

On the other hand, the CDN could accept your customers' HTTPS requests, then serve as a proxy between your customers and your hosting provider's web servers. The entire transaction could be encrypted using HTTPS. But the problem is the CDN must act as a termination point for your customers' requests-- they must DECRYPT those requests. Then they pass those messages back to the hosting provider using a new-- and separate-- HTTPS tunnel.

Guess which option CDNs choose? That's right-- they don't choose to break customers' HTTPS attempts. They proxy them. And how did this particular organization figure that out? Well, because an EV-SSL cert on their web server is never presented to their customers. The address bar stays the boring white color, because the customer sees the CDN's certificate, not the organization's.
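
Don't take my word for it-- check who terminates the tunnel yourself. Here's a minimal C# sketch (the hostname is a placeholder) that connects to port 443 and prints whatever certificate the far end presents; for a CDN-fronted site, the subject will belong to the CDN, not the merchant:

using System;
using System.Net.Security;
using System.Net.Sockets;

class WhoTerminatesTls
{
    static void Main()
    {
        const string host = "www.somecompany.com"; // placeholder CDN-fronted site

        using (var tcp = new TcpClient(host, 443))
        using (var tls = new SslStream(tcp.GetStream(), false,
            (sender, cert, chain, errors) =>
            {
                // Print whatever certificate the far end actually presented.
                Console.WriteLine("Subject: {0}", cert.Subject);
                Console.WriteLine("Issuer:  {0}", cert.Issuer);
                return true; // accept anything; we only want to look at it
            }))
        {
            tls.AuthenticateAsClient(host);
        }
    }
}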

Why is this relevant? Because a malicious CDN-- or perhaps a malicious employee at a CDN-- could eavesdrop on their HTTPS proxies and save copies of your customers' credit card numbers (or any other confidential information) for their own benefit. The CDN gets to see the messages between the clients and the servers, even if only for an instant-- the classic man-in-the-middle attack. An instant is long enough for a breach to occur.

The moral of this story? 1) Learn how the OSI model works. 2) Don't overlook anything. 3) PCI (or any other compliance regulation for that matter) is far from perfect.

Monday, December 8, 2008

The Stupidest PCI Requirement EVER!

The Payment Card Industry (PCI) regulatory compliance goals are good, but not perfect. Some individual requirements in the Data Security Standard (DSS) are flat out ridiculous. In particular, a couple regarding key management take the cake.
3.5.2 Store cryptographic keys securely in the fewest possible locations and forms.
...
3.6 Fully document and implement all key-management processes and procedures for cryptographic keys used for encryption of cardholder data, including the following:
...
3.6.3 Secure cryptographic key storage.
Hmm. Before we even get too far, there is a redundancy between 3.5.2 and 3.6.3. Why even have 3.5.2 if 3.6 covers the items in more detail? I digress ...

3.6.6 Split knowledge and establishment of dual control of cryptographic keys.
What the authors of the DSS were thinking was that PCI compliant merchants would implement cold war-esque missile silo techniques in which two military officers would each place a physical key into a control console and punch in their portion of the launch code sequence. This is technically possible to do with schemes like Adi Shamir's key splitting techniques. However, it rarely makes sense to do so.
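
As an aside, the simplest form of split knowledge is a two-party XOR split (a degenerate cousin of Shamir's scheme). Here's a minimal illustrative sketch-- the class is mine, not from any standard-- and note that the moment the shares are combined, the software holds the full key in memory anyway:

using System;
using System.Security.Cryptography;

class XorKeySplit
{
    // Split a key into two shares: a random pad and (key XOR pad).
    // Either share alone is indistinguishable from random noise.
    static void Split(byte[] key, out byte[] shareA, out byte[] shareB)
    {
        shareA = new byte[key.Length];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(shareA);

        shareB = new byte[key.Length];
        for (var i = 0; i < key.Length; i++)
            shareB[i] = (byte)(key[i] ^ shareA[i]);
    }

    // Recombining puts the full key right back into this process's memory--
    // which is exactly the problem described below.
    static byte[] Combine(byte[] shareA, byte[] shareB)
    {
        var key = new byte[shareA.Length];
        for (var i = 0; i < key.Length; i++)
            key[i] = (byte)(shareA[i] ^ shareB[i]);
        return key;
    }

    static void Main()
    {
        var key = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(key);

        byte[] shareA, shareB;
        Split(key, out shareA, out shareB);
        var rebuilt = Combine(shareA, shareB);
        Console.WriteLine("Round trip OK: {0}",
            Convert.ToBase64String(rebuilt) == Convert.ToBase64String(key));
    }
}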

Consider an automated e-commerce system. The notion of automation means it works on its own, without human interaction. If that e-commerce system needs to process or store credit card numbers, it will need to encrypt and decrypt them as transactions happen. In order to do those cryptographic functions, the software must have access to the encryption key. It makes no sense for the software to only have part of the key or to rely on a pair of humans to provide it a copy of the key. That defeats the point of automation.

If the pieces of the key have to be put together for each transaction, then a human would have to be involved with each transaction-- definitely not worth the expense! Not to mention an exploit of a vulnerability in the software could result in malicious software keeping a copy of the full key once it's unlocked anyway (because it's the software that does the crypto functions, not 2 people doing crypto in their heads or on pen and paper!).

If a pair of humans are only involved with the initial unlocking of the key, then the software gets a full copy of the key anyway. Any exploit of a vulnerability in the software could potentially read the key, because the key is in its running memory. So, on the one hand, there is no requirement for humans to be involved with each transaction, thus the e-commerce system can operate more cheaply than, say, a phone-order system or a brick-and-mortar retailer. However, each restart of the application software requires a set of 2 humans to be involved with getting the system back up and online. Imagine the ideal low-overhead e-commerce retailer planning vacation schedules for its minimal staff around this PCI requirement! PCI essentially dictates that more staff must be hired! Or, that support staff that otherwise would NOT have access to a portion of the key (because they take level 1 calls or work in a different group) now must be trusted with a portion of it. More hands involved means more opportunity for collusion, which increases the risk by increasing the likelihood of an incident, which is NOT what the PCI folks are trying to accomplish!

The difference between a cold war missile silo and an e-commerce software application is the number of "secure" transactions each must have. Missile silos do not launch missiles at the rate of several hundred to several thousand an hour, but good e-commerce applications can take that many credit cards. When there are few (albeit more important) transactions like entering launch codes, it makes sense to require the attention of a couple different people.

So splitting the key such that an e-commerce software application cannot have the full key is stupid.

But then there is the coup de grâce in the "Testing Procedures" of 3.5.2:
3.5.2 Examine system configuration files to verify that keys are stored in encrypted format and that key-encrypting keys are stored separately from data-encrypting keys.
This is the ultimate in pointless PCI requirements. The real world analogue is taking valuables and stashing them in a safe that is unlocked with a key (the "data-encrypting key" or DEK in PCI parlance). Then, stash the key to the first safe into a second safe also unlocked by a key (the "key-encrypting key" or KEK in PCI parlance). Presumably, at this point, this second key will be like something out of a James Bond film where the key is actually in two parts, each possessed by one of two coordinating parties who reside in two geographically distinct locations. In practice, however, the second key is typically just slipped under the floor mat and the two safes are sitting right next to one another. It takes a little longer to get the valuables out of the safe, but does little to actually prevent a thief from doing so.

In an e-commerce system, it's no different. All of the same pointlessness of splitting keys (as described above) still applies, but now there is an additional point of failure and complexity: the KEK. Encryption does add overhead to a software application's performance, though the trade-off is normally warranted. However, if for each transaction the software must use the KEK to unlock the DEK and then perform an encrypt or decrypt operation on a credit card number, the overhead is now double what it was previously. As such, most software applications that use KEKs don't do that. Instead, they use the KEK to unlock the DEK once and keep the DEK in memory for the duration of operation, until presumably the software or server needs to be taken offline for reconfiguration of some kind. With an encryption key in memory, there's still the plausible risk that an exploit of a vulnerability in the software application could disclose the key or the records it supposedly protects. And even after a key leaves memory, recent research on RAM data remanence reminds us that memory retains sensitive data like keys for longer than we might expect.
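
To make that pattern concrete, here's a minimal sketch of the typical KEK/DEK startup dance described above. The file names, key size, and zero IV are illustrative assumptions, not anyone's real implementation:

using System;
using System.IO;
using System.Security.Cryptography;

class KekDekStartup
{
    private static byte[] _dek; // cached in RAM for the life of the process

    static void Main()
    {
        // The "two safes sitting right next to one another":
        byte[] kek = File.ReadAllBytes("kek.bin");        // plaintext KEK (16/24/32 bytes)
        byte[] wrappedDek = File.ReadAllBytes("dek.enc"); // DEK encrypted under the KEK

        using (var aes = Aes.Create())
        {
            aes.Key = kek;
            aes.IV = new byte[16]; // fixed zero IV purely to keep the sketch short

            using (var decryptor = aes.CreateDecryptor())
                _dek = decryptor.TransformFinalBlock(wrappedDek, 0, wrappedDek.Length);
        }

        // From here on, every card encrypt/decrypt uses _dek directly;
        // anything that can read this process's memory can read the key.
        Console.WriteLine("DEK unwrapped: {0} bytes now sitting in RAM.", _dek.Length);
    }
}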

And if a software application is allowed to initiate and unlock its keys without a human (or pair of humans, in the case of key splitting), such as when the e-commerce application calls for a distributed architecture of multiple application servers in a "farm", then there really is no point in having a second key. If the application can read the second key to unlock the first key, then so could an attacker who gets the same rights as the application. The software might as well just read in the first key in unencrypted form, which would at least be a simpler design.

If the threat model is the server administrator stealing the keys, then what's the point? Administrators have FULL CONTROL. The administrator could just as easily steal the DEK and the KEK. And no, the answer is not encrypting the KEK with a KEKEK (Key-Encrypting-Key-Encrypting Key), nor with a KEKEKEK, etc., because no matter how many keys are involved (or how many safes with keys in them), the last one has to be in plaintext (readable) form! The answer is: make sure the admins are trustworthy (which means do periodic background checks and psych evaluations where legal, pay them well, and do your best to monitor their actions).

If the threat model is losing (or having stolen) a backup tape of a server's configuration and data, then, again, a KEK offers little help, since an attacker who has physical access to a storage device has FULL CONTROL and can read the KEK and then decrypt the DEK and the credit card records.

It is also commonly suggested by PCI assessors that KEKs be stored on different servers to deal with the threat of an administrator stealing the DEK. But that just rehashes the same problems all over again.

If the application on Server A can automatically read in the KEK on Server B, then (again) a vulnerability in the application software could potentially allow malware to steal the KEK from Server B or cache a copy of it once in use. If the admin of Server A also has admin access to Server B, then it's the same problem there, too; the admin can steal the KEK from Server B and unlock the DEK on Server A. If the admin does NOT have access to the KEK, then it's the two-humans-required-for-support-scenarios all over again, like key splitting. If the KEK cannot be read by Server A when it is authorized to do so (such as the mechanism for reading the KEK from Server B is malfunctioning), then Server B's admin must be called (Murphy's Law indicates it will be off-hours, too) to figure out why it's not working. And in a small to medium sized e-commerce group, like in an ideal low-overhead operation, it will almost always be the same person or couple of people that have admin access to both. In the large scale, an admin of Server A and an admin of Server B can just choose to collude together and share the ill gains from the theft of the KEK, the DEK, and the encrypted credit card records.

What about a small e-commerce application, where the web, application, and database tiers are contained in one physical server for cost reasons? In that case, perhaps the "scope" of PCI compliance would have previously been a single server, but using a secondary server to house a single KEK now introduces that secondary server into the litany of other PCI requirements, which will likely erode the cost benefit of housing all tiers of the application on a single server in the first place.

In the off chance that a backup copy of Server A's configuration and data is lost or stolen, there will be a false sense of temporary hope if Server B's backup is not also lost or stolen. However, collusion and/or social engineering of an administrator of Server B still applies. Also, this will allow an attacker time to reverse engineer the software that is now in the attacker's hands. Is there a remote code execution bug in the software that could allow an attacker to leak either the full encryption key or individual credit card records? Is there a flaw in the way the crypto keys were generated or used that reduces the brute-force keyspace from the safety of its lofty 2^128-bit perch? Did the developers forget to turn off verbose logging after the support incident last weekend? Are there full transaction details in the clear in a log file? A better approach would be to use some sort of full storage volume encryption INSTEAD of transaction encryption, such that none of those possibilities would matter. But in practice, that is rarely done to servers (and somehow not mandated by the PCI DSS).

So storing the KEK on multiple servers just introduces more support complexity than it reduces risk from data theft.

And if we now know that these requirements don't guarantee anything of substance that would pass Kerckhoffs's Principle (i.e. security by obscurity does not make these key storage techniques more "secure"), then we can also say that having multiple keys, separate KEK storage, and key-splitting all violate 3.5.2, because the keys are not stored "securely in the fewest possible locations and forms".

...

To recap: Splitting keys is not feasible in most cases; it negates the benefit of having fewer people involved with e-commerce. Encrypting keys with other keys is an attempt to build a computer security perpetual motion machine. If you really are paranoid, implement full volume encryption on your servers. If you're not, well, ditch the transaction crypto unless you just happen to have CPU cycles to spare. If you must be "PCI Compliant" (whatever that means outside of "not paying fines"), then implement your e-commerce app to have an "auditor mode" by default, where it requires two people to each type in part of a key for the application to initiate. Then let it have a normal "back door" mode where it just uses the regular key for everything. [Most PCI Assessors are not skilled or qualified enough to validate the difference by inspecting the software program's behavior. They really just want a screen shot to go into their pretty report. Of course, this requires a detailed understanding of the ethics involved, and your mileage may vary.]

And remember: "you're only paranoid if you're wrong."


...

UPDATED: It was pointed out that there is even one more crazy method that PCI Assessors think can turn this polished turd into alchemist's gold: "encode" the KEK into the software's binary. By "encode" they mean "compile", as in have a variable (probably a text string) that contains the value of the KEK. Rather than have the software read in that KEK value from a file, have it just apply the KEK from the static or constant variable in the decryption operation that unlocks the DEK. This is just as stupid as the above. If the point of having a KEK and a DEK is to prevent someone who has access to the file system from unlocking credit card records, then the PCI folks completely missed "intro to pen testing 101", which describes the ultra l33t h4x0r tool called "strings". Any text strings (ASCII, Unicode, what have you) can be extracted by that ages-old command line utility. So, if the threat model is somebody who stole the contents of a server's disks-- they win. If the threat model is a server administrator-- they win. Not to mention, the common practice of software developers is to store source code in a repository, presumably a shared repository. Any static variable that is "encoded" into a binary will live in source code (unless, I guess, the developer is sadistic enough to fire up a hex editor and find/replace the ASCII values in the binary after it's compiled), and source code lives in repositories, which means even more opportunities for collusion. This type of crypto fairy-dust magic is pure fiction-- it just doesn't work like they think it does.
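
For flavor, the guts of that ages-old utility fit in a couple dozen lines. Here's a rough C# approximation (illustrative only, not a substitute for the real tool) that dumps every printable ASCII run in a file-- exactly the kind of pass that shakes a compiled-in KEK out of a binary:

using System;
using System.IO;
using System.Text;

class MiniStrings
{
    const int MinLength = 8; // ignore runs shorter than this

    static void Main(string[] args)
    {
        var run = new StringBuilder();
        foreach (var b in File.ReadAllBytes(args[0]))
        {
            if (b >= 0x20 && b <= 0x7E) // printable ASCII?
            {
                run.Append((char)b);
                continue;
            }
            if (run.Length >= MinLength) Console.WriteLine(run);
            run.Clear();
        }
        if (run.Length >= MinLength) Console.WriteLine(run); // flush the tail
    }
}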

Friday, May 23, 2008

PCI Silverbullet for POS?

Has Verifone created a PCI silver bullet for Point of Sale (POS) systems with their VeriShield Protect product? It's certainly interesting. It claims to encrypt credit card data BEFORE it enters the POS, passing a similarly formatted (16 digit) encrypted card number into the POS that presumably only your bank can decrypt and process.


I have to admit, I like the direction it's headed. Any organization's goal (unless you are a payment processor) should be to reduce your PCI scope as much as possible, not to bring PCI to your entire organization. This is a perfectly viable, often overlooked option for addressing risk: ditch the asset. If you cannot afford to properly protect an asset, and you can find a way to not have to care for the asset anymore, then ditch it.

The questions I have about this specific implementation that are certainly going to have to be answered before anyone can use this to get a PCI QSA off of their back are:

1) What cryptographers have performed cryptanalysis on this "proprietary" design? Verifone's liberty to mingle the words "Triple DES" into their own marketing buzz format, "Hidden TDES", should at least concern you, if you know anything about the history of information security and the track records of proprietary encryption schemes. Since the plaintext and the ciphertext are exactly 16 digits (base 10) long and it appears that only the middle 6 digits are encrypted (see image below), this suggests there might be problems with randomness and other common crypto attacks. Sprinkle in the fact that credit card numbers must comply with the "Mod 10" rule (Luhn algorithm), and I'm willing to bet a good number theorist could really reduce the possibilities of the middle 6 digits. If only the middle 6 digits are encrypted, and they have to be numbers between 0 and 9, then the probability of guessing the correct six digit number is one in a million. But the question is (and it's up to a mathematician or number theorist to answer), how many of the other 999,999 combinations of middle 6 digits, when combined with the first 6 and last 4 digits, actually satisfy the Mod 10 rule? [Especially since the "check digit" in the mod 10 credit card number rule is digit 14, which this method apparently doesn't encrypt.] I'm no mathematician, but I'm willing to bet significantly fewer than 999,999 satisfy the mod 10 rule. It's probably a sizeable cut-down on the brute-force space. If there are any other mistakes in the "H-TDES" design or implementation, it might be even easier to fill in the middle 6 gap.

It would be great to know that Verifone's design was open and peer-reviewed, instead of proprietary. I'd be very curious for someone like Bruce Schneier or Adi Shamir to spend some time reviewing it.


2) How are the keys generated, stored, and rotated? I certainly hope that all of these devices don't get hardcoded (EEPROMs flashed) with a static shared key (but I wouldn't be surprised if they are). It would be nice to see something like a TPM (secure co-processor) embedded in the device. That way, we'd know there is an element of tamper resistance. It would be very bad if a study like the one the Light Blue Touchpaper guys at Cambridge University just published were to reveal that all of the devices share the same key (or, just as bad, that all of the devices for a given retailer or bank share the same key).

It would be great if each device had its own public keypair and generated a session key with the bank's public key. This could be possible if the hardware card-swipe device sent the cardholder data to the bank directly instead of relying on a back office system to transmit it (arguably the back-end could do the transmission, provided the card swipe had access to generate a session key with the bank directly).

3) Will the PCI Security Council endorse a solution like this? (Unfortunately, this is probably the most pressing question on most organizations' minds.) If this does not take the Point of Sale system out of PCI scope, then most retailers will not embrace the solution. If the PCI Security Council looks at this correctly with an open mind, then they will seek answers to my questions #1 and #2 before answering #3. In theory, if the retailer doesn't have knowledge or possession of the decryption keys, POS would not be in PCI scope any more than the entire Internet is in PCI scope for e-tailers who use SSL.

...

Many vendors (or more accurately, "payment service providers") are using "tokenization" of credit card numbers to get the sticky numbers out of e-tailers' databases and applications, which is a similar concept for e-commerce applications. Tokenizing a credit card number simply means creating a surrogate identifier that means nothing to anyone but the bank (service provider) and the e-tailer. The token replaces the credit card number in the e-tailer's systems, and in best-case scenarios the e-tailer doesn't even touch the card for a millisecond. [Because even a millisecond is long enough to be rooted, intercepted, and defrauded; the PCI Security Council knows that.]
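
For illustration, here's a minimal sketch of the tokenization idea; the class and method names are mine, not any vendor's actual API. The provider hands the e-tailer a random surrogate and keeps the only mapping back to the real PAN:

using System;
using System.Collections.Generic;
using System.Security.Cryptography;

// Models what the payment service provider holds. The e-tailer stores only
// the token and never needs the real PAN again.
class TokenVault
{
    private readonly Dictionary<string, string> _vault =
        new Dictionary<string, string>();

    public string Tokenize(string pan)
    {
        // A random, meaningless surrogate; it reveals nothing about the PAN.
        var bytes = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(bytes);
        var token = Convert.ToBase64String(bytes);

        _vault[token] = pan; // only the provider ever holds this mapping
        return token;
    }

    public string Detokenize(string token)
    {
        return _vault[token]; // only callable inside the provider's walls
    }
}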

It's great to see people thinking about solutions that fit the mantra: "If you don't have to keep it, then don't keep it."

[Note: all images are likely copyrighted by Verifone and are captures from their public presentation in PowerPoint PPS format here.]

...
[Updated May 23, 2008: Someone pointed out that PCI only requires the middle 6 digits (to which I refer in "question 1" above) to be obscured or protected, according to requirement 3.3: "Mask PAN when displayed (the first six and last four digits are the maximum number of digits to be displayed)." Hmmm... I'm not sure how that compares to the very next requirement (3.4): "Render PAN [Primary Account Number], at minimum, unreadable anywhere it is stored." Looks like all 16 digits need to be protected to me.]

Saturday, May 17, 2008

Why You Don't Need a Web Application Layer Firewall

Now that PCI 6.6's supporting documents are finally released, a lot of people are jumping on the "Well, we're getting a Web Application Firewall" bandwagon. I've discussed the Pros and Cons of Web Application Firewalls vs Code Reviews before, but let's dissect one more objection in favor of WAFs and against code reviews (specifically static analysis) ...

This is from Trey Ford's blog post "Instant AppSec Alibi?"
Let’s evaluate this in light of what happens after a vulnerability is identified- application owners can do one of a couple things…
  1. Take the website off-line
  2. Revert to older code (known to be secure)
  3. Leave the known vulnerable code online
The vast majority of websites often do the latter… I am personally excited that the organizations now at least have a viable new option with a Web Application Firewall in the toolbox! With virtual patching as a legitimate option, the decision to correct a vulnerability at the code level or mitigate the vuln with a WAF becomes a business decision.

There are two huge flaws in Mr. Ford's justification of WAFs as a layer of defense.

1) Web Application Firewalls only address HALF of the problems with web applications: the syntactic portion, otherwise known in Gary McGraw speak as "the bug parade". The other half of the problems are design (semantic) problems, which Gary refers to as "security flaws". If you read Gary's books, he eloquently points out that research shows actual software security problems fall about 50/50 into each category (bugs and flaws).

For example, a WAF will never detect, correct, or prevent horizontal (becoming another user) or vertical (becoming an administrator) privilege escalation. This is not an input validation issue; this is an authorization and session management issue. If a WAF vendor says their product can do this, beware. Given the ideal best-case scenario, let's suppose a WAF can keep track of the source IP address of where "joe" logged in. If joe's session suddenly jumps to an IP address from some distinctly different geographic location and the WAF decides this is "malicious" and kills the session (or, more realistically, the WAF just doesn't pass the transactions from that assumed-to-be-rogue IP to the web application), then there will be false positives, such as corporate users who jump onto VPN and continue their browser's session, or individuals who switch from wireless to an "AirCard" or some other ISP. Location-based access policies are problematic. In 1995 it was safe to say "joe will only log on from this IP address", but today's Internet is far more dynamic than that. And if the WAF won't allow multiple simultaneous sessions from the same IP, well, forget selling your company's products or services to corporate users who are all behind the same proxy and NAT address.

Another example: suppose your org's e-commerce app is designed so horribly that a key variable affecting the total price of a shopping cart is controlled by the client/browser. If a malicious user could make a shopping cart total $0, or worse -$100 (issuing a credit to the card instead of a debit), then no WAF on today's or some future market is going to understand how to fix that. The WAF will say "OK, that's a properly formatted ASCII represented number and not some malicious script code, let it pass".

Since the PCI Security Standards Council is supporting the notion of Web Application Firewalls, that raises the question: does the PCI Security Standards Council even understand what a WAF can and cannot do? Section 6.6 requires that WAFs or code reviews address the OWASP-inspired issues listed in section 6.5:
6.5.1 Unvalidated input
6.5.2 Broken access control (for example, malicious use of user IDs)
6.5.3 Broken authentication and session management (use of account credentials and session cookies)
6.5.4 Cross-site scripting (XSS) attacks
6.5.5 Buffer overflows
6.5.6 Injection flaws (for example, structured query language (SQL) injection)
6.5.7 Improper error handling
6.5.8 Insecure storage
6.5.9 Denial of service
6.5.10 Insecure configuration management
The following items fall into the "implementation bug" category which could be addressed by a piece of software trained to identify the problem (a WAF or a Static Code Analyzer):
6.5.1 Unvalidated input
6.5.4 Cross-site scripting (XSS) attacks
6.5.5 Buffer overflows
6.5.6 Injection flaws (for example, structured query language (SQL) injection)
6.5.7 Improper error handling
These items fall into the "design flaw" category and require human intelligence to discover, correct, or prevent:
6.5.2 Broken access control (for example, malicious use of user IDs)
6.5.3 Broken authentication and session management (use of account credentials and session cookies)
6.5.8 Insecure storage
6.5.9 Denial of service
6.5.10 Insecure configuration management
Solving "design" or "semantic" issues requires building security into the design phase of your lifecycle. It cannot be added on by a WAF and generally won't be found by a code review, at least not one that relies heavily on automated tools. A manual code review that takes into consideration the criticality of a subset of the application (say, portions dealing with a sensitive transaction) may catch this, but don't count on it.



2) If your organization has already deployed a production web application that is vulnerable to something a WAF could defend against, then you are not really doing code reviews. There's no blunter way to put it. If you have a problem in production that falls into the "bug" category I described above, then don't bother spending money on WAFs. Instead, spend your money on either a better code review tool OR hiring & training better employees to use it (since they clearly are not using it properly).



Bottom line: any problem in software that a WAF can be taught to find could have been caught at development time with a code review tool, so why buy both? You show me a problem a WAF can find that slipped through your development process, and I'll show you a broken development process. Web Application Firewalls are a solution in search of a problem.

Tuesday, April 15, 2008

PCI 1.1 Section 6.6

If you're one of the many practitioners waiting to see how the PCI Security Council clarifies the ambiguous 6.6 requirement, then you may wish to use this interview with Bob Russo, General Manager of the PCI Security Standards Council, as either an inkling towards an interpretation OR just more obfuscation (depending upon your point of view).

Here is the PCI requirement in question:
6.6 Ensure that all web-facing applications are protected against known attacks by applying either of the following methods:
• Having all custom application code reviewed for common vulnerabilities by an organization that specializes in application security
• Installing an application layer firewall in front of web-facing applications.
Note: This method is considered a best practice until June 30, 2008, after which it becomes a requirement.
Russo's comments on the debate of Web Application Firewalls versus Code Reviews:
"Personally, I'd love to see everyone go through on OWASP-based source-code review, but certainly, that's not going to happen," Russo said, referring to the expensive and time-consuming process of manual code reviews. "So the application firewall is probably the best thing to do, but there needs to be some clarification around what it needs to do. That clarification is coming; that's been the biggest question."
Jeremiah Grossman sounded off on the interview as well.



Even given all of the discourse I have heard and read to date, there are many unanswered questions on this one particular point alone. No doubt the PCI Security Standards Council has realized that application layer problems will undermine the last decade's worth of security practice built around controlling security at the network layers. And no doubt the PCI Security Standards Council understands it has a mighty hand to influence organizations to handle their custom software with the appropriate level of diligence and quality that cardholders deserve. However ...

Here are my top 7 questions concerning PCI 1.1, Requirement 6.6 that are still left unanswered:

1) Please define "web-facing applications". Does that mean HTTP/S applications? Does that mean anything directly exposed to Internet traffic? Or does it mean any application that Al Gore created? [PCI expert Trey Ford attempted to define "web-facing", but we need the official PCI Security Council interpretation.]

2) Please define "known attacks". Known by whom? Who is keeping the authoritative list? What happens when new attacks become "known" and are added to the list? Do we have to go back and perform more analysis to check for the new attacks?

3) Please define "an organization that specializes in application security". Is that a third party or can it be a team within an organization? Can it be a team of one in a smaller organization? What is meant by "specializes"? Does that mean "took a class", "has a certificate", or is that reserved for somebody who leads the industry in the subject matter? "Application Security" as a discipline (sorry, Gary McGraw--you're right, we should call it "software security") is new. Will we have a chicken and egg problem trying to establish people as specialists in application security?

4) Does a blackbox (runtime) scanning approach constitute a "review" of custom application code? Or will only a whitebox (source code at development time) cut the mustard? Can automated review tools be used, or must it be 100% manual (human) code review? To what extent can automation be involved? Are there specific products (vendors) that are preferred or required when selecting automated tools?

5) Does the "review" imply remediation? In the case of PCI Vulnerability Scanning Procedures, some lesser vulnerabilities are allowed to exist, but vulnerabilities that the PCI Security Standards Council rate at a higher criticality must absolutely be fixed. What criticality scale must we use? Is there a taxonomy of vulnerabilities that are categorized by "must fix" criticality versus a "should fix" criticality?

6) Please define "an application layer firewall". Is that a preventative tool or can it be just a detective tool (i.e. must it be active or can it be passive, like an IDS)? What "bad things" must it detect? How tight must it be tuned? Will there be a process to pre-certify vendors, or must we invest in it now and hope that auditors will accept what we choose?

7) Why is it that we are (as of today) a mere 76 days out from when requirement 6.6 becomes mandatory, and we do NOT yet have clarification? Large organizations move slowly. Complicated "web-facing applications" may take a long time to properly regression test with either option implemented (remediations found in code reviews OR web application layer firewall deployments). We have little over two months to: 1) understand the requirement, 2) budget accordingly, and 3) implement on time and under budget. With PCI DSS version->next right around the corner, why wasn't this requirement held off until it could be properly fleshed out in the next version?