Thursday, May 28, 2009

More Fake Security

The uninstallation program for Symantec Anti-Virus requires an administrator password that is utterly trivial to bypass. This probably isn't new for a lot of people. I always figured this was weak under the hood, like the password was stored in plaintext in a configuration file or registry key, or stored as a hash output of the password that any admin could overwrite with their own hash. But it turns out it's even easier than that. The smart developers at Symantec were thoughtful enough to have a configuration switch to turn off that pesky password prompt altogether. Why bother replacing a hash or reading in a plaintext value when you can just flip a bit to disable the whole thing?

Just flip the bit from 1 to 0 on the registry value called UseVPUninstallPassword at HKEY_LOCAL_MACHINE\SOFTWARE\INTEL\LANDesk\VirusProtect6\CurrentVersion\Administrator Only\Security. Then re-run the uninstall program.
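For the curious, here's a minimal Python sketch that just builds the standard `reg add` command line for that value. The key path and value name come straight from above; actually running the command takes an elevated prompt on a box with the product installed, so this sketch only constructs and prints the command:

```python
# Build the Windows "reg add" command that flips UseVPUninstallPassword
# from 1 to 0, disabling the uninstall password prompt entirely.
# The key path and value name are the ones documented above; "reg add"
# with /v /t /d /f is the standard Windows CLI for writing a value.

KEY = (r"HKLM\SOFTWARE\INTEL\LANDesk\VirusProtect6"
       r"\CurrentVersion\Administrator Only\Security")

def build_reg_command(enabled: bool) -> str:
    """Return the reg.exe command that sets UseVPUninstallPassword."""
    value = 1 if enabled else 0
    return (f'reg add "{KEY}" /v UseVPUninstallPassword '
            f'/t REG_DWORD /d {value} /f')

if __name__ == "__main__":
    print(build_reg_command(False))
```

Run that output in an elevated prompt and the "protection" is gone; run it with `True` and the prompt politely comes back.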

I am aware of many large organizations that provide admin rights to their employees on their laptops, but use this setting as a way to prevent them from uninstalling their Symantec security products. Security practitioners worth their salt will tell you that admin rights = game over. This was a gimmick of a feature to begin with. What's worse is that surely at least one developer at Symantec knew that before the code was committed into the product, but security vendors have to sell out and tell you that perpetual motion is possible so you'll spend money with them. These types of features demonstrate the irresponsibility of vendors (Symantec) who build them.

And if you don't think a user with admin rights will do this, how trivial would it be for drive-by malware running as that user to do the same? Very trivial.

Just another example on the pile of examples that security features do not equal security.

Friday, May 15, 2009

"Application" vs "Network" Penetration Tests

Just my two cents, but if you have to debate the distinction between an "application" and a "network" penetration test, then you're missing the point and probably not testing anything worthwhile.

First of all, the "network" is not an asset. It's a connection medium. Access to a set of cables and blinky lights means nothing. It's the data on the systems attached to the "network" that is the asset.

Second, when a pen tester says they're doing a "network penetration test", they really mean they're going to simulate an attacker who will attack a traditional application-- a "canned" application (usually), like one that runs as a service out of the box on a consumer operating system. It's more than just an authentication challenge (though it could be that). It's likely looking for software defects in those canned applications, or for commonly known misconfigurations, but it's really still an application that they are testing. [In fact, the argument that a "network penetration test" is nothing more than a vulnerability scan seems plausible to me.]
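To illustrate, here's the basic building block of any such "network" test-- a TCP connect probe, the same thing a vulnerability scanner starts with-- sketched in Python and pointed at a throwaway listener on localhost so it's self-contained:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """TCP connect probe: the first step of any 'network' scan is just
    finding which canned services are listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Stand up a throwaway listener so the demo needs no real target.
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))        # OS picks a free port
    listener.listen(1)
    port = listener.getsockname()[1]
    print(port, port_is_open("127.0.0.1", port))
    listener.close()
```

Everything after this probe-- banner grabbing, version checks, known-vuln lookups-- is still interrogating an application somebody shipped; the "network" just carried the packets.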

Third, when they say "application penetration test", they are typically talking about either custom software applications or at least an application that didn't come shipped with the OS.

Fourth, if you're trying to test how far one can "penetrate" into your systems to gain access to data, there should be no distinction. If the path to the asset you're trying to protect is through a service that comes bundled with a commercial OS, or through a custom product, it makes no difference. A penetration is a penetration.


Yet, as an industry, we like to perpetuate stupidity. This distinction between "network" and "application" penetration tests is a prime example.

PCI & Content Delivery Networks

Here's an interesting, but commonly overlooked, little security nugget.

If you are running an e-commerce application and rely on a Content Delivery Network (CDN), such as Akamai, beware how your customers' SSL tunnels start and stop.

I came across a scenario in which an organization-- one that has passed several PCI Reports on Compliance (RoCs)-- used Akamai as a redirect for their www.[companyname].com e-commerce site. Akamai does their impressive geographical caching stuff by owning the "www" DNS record and responding with an IP based on where you are. They do great work. The organization hosts the web, application, and database servers in a state-of-the-art, expensive, top-five hosting facility. Since it's known that credit card data passes through the web, app, and database tiers, the organization has PCI binding language in their contract with the hosting provider, which requires the hosting provider to do the usual litany to protect credit cards (firewalls, IDS, biometrics-- must have a note from your mom before you can set foot on-site, that sort of thing). And the organization with the goods follows all appropriate PCI controls, obviously, as they have passed their RoC year after year since the origin of PCI.

Funny thing ... it wasn't until some questions came up about how SSL (TLS) really works under the hood that a big, bad hole was discovered. One of the IT managers was pursuing the concept of Extended Validation certs (even though EV certs are a stupid concept), and an "engineer" (a term I use laughingly) pointed out that if they purchased the fancy certs and put them on the web servers at the hosting provider, they would still fail to turn their customers' address bars green. Why? Because of the content delivery network.

You see, SSL/TLS sits below HTTP in the protocol stack. That means a customer who wants to start an encrypted tunnel with "www.somecompany.com" must first look up the DNS entry, then complete an SSL/TLS handshake with whatever host answers on TCP port 443. This is important: the browser does NOT say "Hey, I want 'www.somecompany.com', is that you? Okay ... NOW ... let's exchange keys and start a tunnel." The handshake-- and the certificate the server presents during it-- happens before a single HTTP request is sent.
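Here's a toy Python sketch of that ordering-- no real networking or crypto, and all the names are made up-- just to show that the server's certificate is on the table before any HTTP is spoken:

```python
# Toy model of the client side of an HTTPS connection. No real network
# or cryptography -- the point is purely the ordering: the server's
# certificate is seen before any HTTP request is sent.

def https_get(hostname: str, server_cert_cn: str, events: list) -> str:
    events.append("dns_lookup")                   # resolve the "www" record
    events.append("tcp_connect_443")              # TCP to whatever IP came back
    events.append(f"tls_cert:{server_cert_cn}")   # server presents ITS cert
    events.append(f"http_get:{hostname}")         # only now does HTTP happen
    return server_cert_cn                         # the cert the browser judges

events = []
# The CDN owns the "www" record, so the cert the customer sees is the
# CDN's -- an EV cert sitting on the origin web server never appears.
seen = https_get("www.somecompany.com", "cdn-edge.example.net", events)
print(events)
```

Whoever answers on port 443 presents a certificate and terminates the tunnel; the hostname in the URL bar has no say in the matter.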

In this case, as Akamai hosts the "www" record for "somecompany.com", Akamai must be ready for HTTPS calls into their service. "But wait ... " (you're thinking) " ... Akamai just delivers static content like images or resource files. How can they handle the unique and dynamic behaviors of the application required on the other end of the SSL/TLS tunnel?" The answer to your question is: They can't.

On the one hand, the CDN could refuse to accept traffic on port 443, or simply refuse to complete SSL/TLS handshakes. But that would break every transaction to your "https://www.somecompany.com" URLs.

On the other hand, the CDN could accept your customers' HTTPS requests, then serve as a proxy between your customers and your hosting provider's web servers. The entire transaction could be encrypted using HTTPS. But the problem is the CDN must act as a termination point for your customers' requests-- they must DECRYPT those requests. Then they pass those messages back to the hosting provider using a new-- and separate-- HTTPS tunnel.
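To make the two-tunnel arrangement concrete, here's a toy Python sketch. The XOR "cipher" is a stand-in for real TLS and every name is made up; the mechanics of decrypt-observe-re-encrypt are the point, specifically where plaintext exists:

```python
# Toy model of CDN TLS termination. XOR with a per-tunnel key stands in
# for real TLS encryption; the "decrypt, observe, re-encrypt" mechanics
# are the same. All names and keys here are illustrative.

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

CUSTOMER_TUNNEL_KEY = b"customer-edge-key"   # tunnel 1: browser <-> CDN edge
ORIGIN_TUNNEL_KEY = b"edge-origin-key"       # tunnel 2: CDN edge <-> origin

def cdn_edge_proxy(wire_bytes: bytes, observed: list) -> bytes:
    """Terminate tunnel 1, re-encrypt into tunnel 2."""
    plaintext = xor(wire_bytes, CUSTOMER_TUNNEL_KEY)  # MUST decrypt here
    observed.append(plaintext)                        # the CDN can see this
    return xor(plaintext, ORIGIN_TUNNEL_KEY)

request = b"POST /checkout card=4111111111111111"
seen_at_cdn = []
wire1 = xor(request, CUSTOMER_TUNNEL_KEY)    # browser encrypts into tunnel 1
wire2 = cdn_edge_proxy(wire1, seen_at_cdn)   # CDN terminates, re-encrypts
at_origin = xor(wire2, ORIGIN_TUNNEL_KEY)    # origin decrypts tunnel 2

assert at_origin == request      # origin gets the request intact ...
assert seen_at_cdn[0] == request # ... but so did the CDN, in the clear
```

Both wires are "encrypted" end to end of each hop, yet the card number sits in cleartext inside the proxy. That gap in the middle is the whole story.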

Guess which option CDNs choose? That's right-- they don't choose to break customers' HTTPS attempts. They proxy them. And how did this particular organization figure that out? Well, because the EV-SSL cert on their web server is never presented to their customer. The address bar stays the boring white color, because the customer sees the CDN's certificate, not the organization's.

Why is this relevant? Because a malicious CDN-- or perhaps a malicious employee at a CDN-- could eavesdrop on their HTTPS proxies and save copies of your customers' credit card numbers (or any other confidential information) for their own benefit. The CDN gets to see the messages between the clients and the servers, even if only for an instant-- the classic man-in-the-middle. An instant is long enough for a breach to occur.

The moral of this story? 1) Learn how the OSI model works. 2) Don't overlook anything. 3) PCI (or any other compliance regulation for that matter) is far from perfect.