...
Well, say I trust Bruce Schneier (I generally do, professionally speaking, but not necessarily personally -- and I'll pick on him since he's almost universally accepted as the patron saint of security). Let's say I trust Bruce's analysis of a particular company's products. If Bruce reviewed the source code, and the code were closed to the public but made available to him under escrow, I would likely be OK with that. Trust is more complicated than mere availability of source code. There are not many people in the world qualified to perform security reviews of a security product's code, so I couldn't trust just anyone's analysis of it. To be honest, if it came down to a computationally intensive implementation of cryptographic code, I wouldn't even trust my own analysis of it. My point is: trust is a social-psychological phenomenon, not a technical one.
"Open source" means so many different things to different people. To some it means "free to use, modify, or distribute". To some, it means anyone can review the code. To others, it might just mean the vendor will provide source code escrow services at your request. It might be possible to have a closed source (in the common sense) product opened up to a specific customer (potentially any customer asks the right question the right way).
How many "joe users" that have a one seat install of a product actually review the code? Not many. How many of those one seat installs are actually qualified code reviewers? Fewer still.
Open Source != Security
It (open source) is an unrelated variable. It's like how automobile insurance providers in the US categorize all male drivers under 25 years of age as high risk, whether or not that's accurate for any individual driver. Not all of them (caricatures and jokes aside) will get tickets, cause wrecks, or otherwise require the insurer to pay out. However, the actuarial data presented to insurers suggests this is a reasonable category of customers for which to raise premiums. If it were legal and ethical (and affordable) to put every driver under 25 through a *magic* test (I say magic because such a test may or may not exist) that could differentiate, with a higher level of certainty, whether a customer had the "x factor" behind a higher tendency to cause wrecks... well, that's where the insurance companies would go.
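To put rough numbers on that analogy, here's a minimal Python sketch (all figures entirely made up, just to show the shape of the reasoning) of why one premium for a coarse category can look reasonable to an actuary even though most members of the category never cost the insurer a dime:

```python
# Toy illustration with entirely hypothetical numbers: why a coarse
# category can be actuarially "reasonable" even though it misclassifies
# most individuals in it.

def expected_payout(risky_fraction: float,
                    risky_cost: float = 10_000.0,
                    safe_cost: float = 500.0) -> float:
    """Average claim cost for a pool in which `risky_fraction` of
    drivers actually carry the higher risk."""
    return risky_fraction * risky_cost + (1 - risky_fraction) * safe_cost

# Coarse proxy: every driver under 25 is priced as one pool, even though
# (say) only 20% of them actually have the "x factor".
coarse_pool = expected_payout(0.20)   # one premium for everyone

# The hypothetical "magic test" splits the same pool by the real signal.
flagged = expected_payout(1.00)       # the 20% who test positive
cleared = expected_payout(0.00)       # the 80% who test negative

print(coarse_pool, flagged, cleared)  # 2400.0 10000.0 500.0
```

The coarse pool's premium (2400 in this toy) is driven almost entirely by a minority of its members; the other 80% are overpaying for a label, which is exactly the miscategorization the next paragraph is about.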
Open Source is like that broad miscategorization. There are many open source projects that are never reviewed for potential threats by qualified people. In fact, since "open source" is so "open", there are likely projects that have never been reviewed by anyone outside of the sole contributor. "Open Source" carries a connotation of community and collaboration, but it does not guarantee community and collaboration. Likewise, there's no guarantee that the people reviewing the code aren't deliberately adding security problems.
Trust is a binary decision. You either choose to trust someone or something, or you choose not to. You might trust someone conditionally, as I might trust a carpenter to build my house but not to work on my car. Trustworthiness, however, is a totally different equation. People estimate trustworthiness (which is exactly what it reads as: calculating how worthy of trust something or someone is) using a combination of perceived reputation (track records) and trusted third parties' estimates of trust (e.g. my friend Joe knows a lot about cars, so I trust his mechanic, since Joe would know how to differentiate between a good and bad mechanic).
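If you like, that distinction can be sketched in code. This is a toy Python model, not any real trust algorithm -- every name, weight, and threshold here is hypothetical. Trustworthiness comes out as a continuous estimate blended from a direct track record and third-party vouching discounted by how much you trust the voucher, while the trust decision itself stays a yes/no threshold:

```python
# A minimal sketch: trustworthiness is an estimate, trust is a binary
# decision. All names, weights, and scores are hypothetical.

def estimate_trustworthiness(track_record: float,
                             endorsements: list[tuple[float, float]]) -> float:
    """Blend a direct track record (0.0-1.0) with third-party endorsements.

    Each endorsement is (how much I trust the endorser, their rating of
    the subject), mirroring "I trust Joe, and Joe vouches for his mechanic."
    """
    if not endorsements:
        return track_record
    # Discount each endorsement by my trust in the endorser.
    vouched = sum(trust_in_endorser * rating
                  for trust_in_endorser, rating in endorsements)
    weight = sum(trust_in_endorser for trust_in_endorser, _ in endorsements)
    third_party = vouched / weight
    return 0.5 * track_record + 0.5 * third_party  # arbitrary 50/50 blend

def decide_to_trust(trustworthiness: float, threshold: float = 0.75) -> bool:
    """The decision itself is binary: trust, or don't."""
    return trustworthiness >= threshold

# Joe (whom I trust 0.9) rates his mechanic 0.8; my own direct
# experience with the mechanic is thin (0.5).
score = estimate_trustworthiness(0.5, [(0.9, 0.8)])
print(score, decide_to_trust(score))  # 0.65 False
```

Note that the estimate can be 0.65 and the answer still be "no" -- which is the whole point: the estimation is analog, the act of trusting is not.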
A product has opened its source code for review. So what? You should be asking the following questions:
- Why? Why did you open your source?
- Who has reviewed your source? What's (not) in it for them?
- What was in the review? Was it just a stamp of approval or were there comments as well?