Friday, September 28, 2007

Code Review vs. Application Layer Firewall

In a recent sales attempt, I received a whitepaper (warning: PDF) from Web Application Layer Firewall vendor Imperva entitled: "The New PCI Requirement: Application Firewall vs. Code Review".

For those not familiar with the Payment Card Industry (PCI) compliance regulations, visit the PCI DSS (Data Security Standard) website.

From my perspective, PCI is an excellent set of security compliance regulations, perhaps the best that exist in the U.S. Items are clear, concise, and specific (sometimes too specific), which is in stark contrast to other regulations, such as SOX, HIPAA, GLBA, etc.

Here's the specific PCI requirement in question:
6.6 Ensure that all web-facing applications are protected against known attacks by applying either of the following methods:
• Having all custom application code reviewed for common vulnerabilities by an organization that specializes in application security
• Installing an application layer firewall in front of web-facing applications.
Note: This method is considered a best practice until June 30, 2008, after which it becomes a requirement.
If you're bored already, you can sum this up as a typical 'business case for better engineering' problem.

We (the IT and Security field) know how to achieve higher security assurance in critical applications: thorough source code analysis coupled with simple design. However, we know that for many (if not most) organizations to adopt this mindset, it will require an overhaul for their existing software lifecycles. To put it bluntly: few people are truly analyzing their design and their code.

So, riding in to our rescue is a typical information security vendor, ready to offer a hybridized, second-rate solution to the problem. [To be fair, I think it is a well-engineered solution, but for the wrong problem-- the problem of managing an application without truly understanding its code.]

Now to jump in and dissect their whitepaper ...
"What is an “Application Layer Firewall”?
An application layer firewall, also known as a “Web Application Firewall” or “WAF” is a network device that is placed in front of the Web applications in an organization’s data center to protect against attacks. A WAF is able to view and understand the full spectrum of application traffic so that it can protect the applications and the sensitive data from illegitimate access and usage." [Italics are mine]
This is stopping point #1. How is it possible that a vendor-- who has NEVER seen your organization's application-- can create a drop-in box that can "understand the full spectrum" of your application's traffic? Again, to be blunt: they cannot. They can make guesses about commonalities among applications they have seen in the past (e.g. customers whose applications were extensively analyzed for the purpose of product development). They may even be able to code in some inference engines that make educated guesses at runtime, but they can never truly "understand the full spectrum".
"Because of SecureSphere’s unique Dynamic Profiling capabilities, it automatically builds a complete baseline profile of your applications and network traffic in a matter of days. Using the application profile, SecureSphere can distinguish between legitimate user behavior and illegitimate behavior as well as protect from attacks... No manual intervention or tuning is necessary, keeping your on-going administrative costs far lower than other WAF products." [Italics are mine]
So, yes, it is an inference engine; note the italicized portions of the quote above. What is striking is the claim that this behavioral analysis tool does not require human intelligence for tuning, which means either: 1) the engine is tuned to avoid false positives at the expense of neglecting true positives, or 2) the Marketing department is (ahem) over-estimating their product's ability. Any behavioral analysis tool requires extensive tuning.
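To make that tuning trade-off concrete, here's a minimal sketch of the kind of learning-mode profiler such a product might use (all names, the character-class scheme, and the two-phase design are my own illustration, not Imperva's actual engine): it whitelists the "shapes" of parameter values it saw during learning, then flags anything novel.

```python
import re

class NaiveProfiler:
    """Toy positive-security model: learn parameter shapes, then enforce."""

    def __init__(self):
        self.profile = {}   # (path, param) -> set of observed value shapes
        self.learning = True

    @staticmethod
    def classify(value):
        # Collapse a value into a coarse shape, e.g. "admin7" -> {alpha, digit}
        classes = set()
        if re.search(r"[A-Za-z]", value):
            classes.add("alpha")
        if re.search(r"[0-9]", value):
            classes.add("digit")
        if re.search(r"[^A-Za-z0-9]", value):
            classes.add("symbol")
        return frozenset(classes)

    def observe(self, path, params):
        alerts = []
        for name, value in params.items():
            shape = self.classify(value)
            key = (path, name)
            if self.learning:
                self.profile.setdefault(key, set()).add(shape)
            elif shape not in self.profile.get(key, set()):
                alerts.append((path, name, value))
        return alerts

waf = NaiveProfiler()
# Learning phase only ever happens to see numeric IDs...
waf.observe("/item", {"id": "1234"})
waf.learning = False
# ...so a perfectly legitimate alphanumeric ID now trips an alert.
alerts = waf.observe("/item", {"id": "SKU-99"})
print(alerts)  # [('/item', 'id', 'SKU-99')]
```

Whether that legitimate "SKU-99" request gets blocked (a false positive) or the profile gets loosened until it would also admit hostile input (a false negative) is precisely the tuning decision the whitepaper claims requires "no manual intervention".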
"While code review is a good idea, and is consistent with coding best practices, hiring consultants entails extra cost, loss of flexibility, resource allocation issues, scheduling headaches, and ultimately a lower grade of security than you would achieve through the deployment of a Web Application Firewall." [Italics are mine]
What amazes me in the world of marketing is that it's acceptable practice to claim a universal truth and debunk it in the same sentence. The whitepaper concedes that code review is a "best practice", yet in the same breath claims that code review delivers a lower grade of security than an application layer firewall, which simply doesn't make sense. Drop-in products work blind-- they do not know your organization's applications and can only make inferences about them. Human intelligence (this assumes training and expertise, hence the word "intelligence") will certainly trump; understanding the application is inherent to the process of good code review and implementation.

If the assurance of externally facing applications is of utmost importance, then design/implementation time controls (e.g. static analysis) should be explored, not runtime controls (i.e. application layer firewalls).
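As one concrete illustration of the difference (a hypothetical snippet, not from any vendor's material): a reviewer looking at the code sees the string concatenation below and fixes the query itself at implementation time, while a WAF can only pattern-match whatever traffic happens to reach it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def lookup_vulnerable(name):
    # The flaw is visible at review time: user input concatenated into SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = '" + name + "'").fetchall()

def lookup_reviewed(name):
    # The code-review fix: a parameterized query. Injection is impossible
    # here no matter what a runtime filter in front of it lets through.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(lookup_vulnerable(payload))  # leaks every row: [('admin',)]
print(lookup_reviewed(payload))    # matches nothing: []
```

The runtime control has to recognize the payload; the design-time control removes the vulnerability class entirely.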

...
UPDATED 12/10/2007: Here's a follow-up ...

6 comments:

Anonymous said...

I agree that PCI DSS (v1.1, which you reference) is better than SOX and HIPAA. (I haven't read GLBA details so can't comment on it.) Where I disagree is that it is "perhaps the best that exist in the U.S." IMHO, that would be the NIST Computer Security Resource Center's special publications 800-series related to security (see http://csrc.nist.gov/publications/PubsSPs.html). And I don't really find the PCI DSS all that specific either, at least not specific enough to be testable, i.e., to know before a PCI audit whether you are going to pass or not.

Imperva sounds like they are using anomaly detection, possibly mixed with some signature-based detection. I haven't looked at Imperva's product, but I suspect that their (very questionable) premise is that it has been able to gather a complete and sufficient representation of all the valid (and, for that matter, only the valid) HTTP requests. That is, complete in the sense that during its "learning" period it collects all the possible legitimate variations (right!) and sufficient in the sense that it has enough similar HTTP requests that it can draw some valid statistical conclusions. But if they are claiming to do this with a 100% success rate with no tuning and no false positives or false negatives, they are either 1) completely clueless, 2) completely deluding themselves, or 3) flat out making false advertising claims. Even if they had a design that could accomplish all that (which I think, based on the Turing Halting Problem, one could argue against even the theoretical possibility of), to do so would require that they have a 100% defect-free software implementation to start with. (Perhaps their "matter of days" is a matter of a few thousand days. That is, after all, technically accurate. ;-)

Regarding their point about code inspections...that might be true if one assumed developers with no security training doing the typical rush job of "code inspections" (e.g., 1 KLOC/hr or faster), possibly by simply running some freebie static source code analyzer and mindlessly accepting what it spits out. But that is not what the PCI DSS mandates in 6.6. If done by a qualified organization specializing in security code inspections, with inspections performed at a reasonable rate (less than 200 LOC/hr) by at least 3 inspectors who actually take time to prepare in advance for the inspection, then no way is their WAF going to provide a higher grade of security. (Unless of course, all the inspectors' comments are ignored, which routinely happens. There's certainly a lot of leeway with their statement, so it can be interpreted in many ways.)
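The rate arithmetic above is easy to make explicit (the monthly code volume here is purely illustrative):

```python
# Back-of-the-envelope inspection effort at the rates mentioned above:
# a disciplined review under 200 LOC/hr, with 3 inspectors each reading the code.
monthly_loc = 10_000        # hypothetical volume of new/changed code per month
careful_rate = 200          # LOC per inspector-hour (the cited upper bound)
inspectors = 3

hours_each = monthly_loc / careful_rate        # 50.0 hours per inspector
total_hours = hours_each * inspectors          # 150.0 inspector-hours/month
print(hours_each, total_hours)  # 50.0 150.0
```

That is a real cost, but it's a bounded, predictable one, which is the fair comparison to make against a WAF's license and (claimed-away) tuning costs.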

Tim MalcomVetter said...

Kevin,

Thanks for dropping by ...

While the NIST documents are probably very thorough, I was considering PCI as a compliance regulation-- if you want to take payment cards, you have to comply and the costs for non-compliance are significant and consistent. To me, it's the enforcement and the simpler language (compared to the other regulations) that make it good.

You bring up some interesting points. Thanks.

Anonymous said...

The idea of having to send all in-house code to a third party for security review is what convinced us that the PCI standards are insane. For us this would mean thousands or tens of thousands of lines of code a month. But the cost wasn't the deal breaker; the added time and complexity of doing this was. Code review is one thing, but code review by an outside organization with no clue about our business is pretty worthless.

Ultimately, we ended up outsourcing everything to do with credit card processing. Frankly, I think the entire purpose of PCI is to make things so expensive and unwieldy that corporations are forced to outsource all of their financial dealings to a fairly small set of PCI compliant vendors who are profiting at our expense.

Tim MalcomVetter said...

I talk in more detail about section 6.6 of the PCI DSS here and then follow up with the PCI Supporting Docs here.

Anonymous said...

Do you know of a product, company, or SaaS solution where we can outsource our CC processing and data gathering for our products? Do any of these services exist for larger Fortune 50 corporations?

Tim MalcomVetter said...

Hi Anonymous,

While I have no affiliation with them, Cybersource is one of several payment processors with service offerings ready to take over the payment processing and storage of payment details altogether.

I am not a PCI QSA, but I would assert that this would significantly decrease-- if not eliminate-- an organization's e-commerce PCI scope.