Case in point: just this week, David Maynor announced he would publish the details of his shrouded hack against Mac OS X wireless drivers, the one originally announced last year but withheld under an NDA with Apple. From the Computerworld article:
"Maynor will soon publish a second paper on [some unimportant website] explaining how to write software that will run on a compromised system" (boldface is mine)At what point does the "educational" go into the weaponization of an attack? And just exactly how does a "security researcher" fit into the overall economic structure of attack and defense?
David Maynor (I'm picking on him because of current events) works for Errata Security, a security consultancy. From the company's website (no, I won't link to it; find it yourself) ...
"Errata Security is comprised of industry veterans that have been involved in almost every facet of cybersecurity. This team has made many of the headlines you have read including predicting new threats and attacker trends to development of cutting edge technology. If you need the best, Errata Security can do product testing to technical consulting and everything in between." (boldface is mine)So, let's see if we can figure out the economic process ...
1. Find some vulnerability in some widely used product.
2. Create a proof-of-concept and publish it to the world (preferably before sharing it with the vendor).
3. Use FUD (Fear, Uncertainty, and Doubt) to sell the services of a security consultancy startup.
The ethics employed here can best be explained by the little diagram I whipped up below. Imagine we (society, the world) could quantify all known and unknown vulnerabilities that existed yesterday, exist today, and will exist tomorrow-- that's the blue outer circle. Then imagine we could quantify all of the vulnerabilities known by adversaries (again: past, present, and future)-- that's the red (sort of "cranberry") circle. Finally, imagine we could quantify all of the vulnerabilities these obviously altruistic "security researchers" find for us-- that's the yellow circle. At some point, the subsets of vulns known by the bad guys and the good guys must overlap-- that overlap is where we have thwarted the bad guys, and they must introduce new tactics to continue being adversarial.
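For the set-minded, here is the same picture in minimal notation (the labels V, A, and R are mine; they appear nowhere in the diagram): let V be every vulnerability past, present, and future (blue), A ⊆ V the ones known by adversaries (red), and R ⊆ V the ones found by "security researchers" (yellow). The only part of the yellow circle that actually helps the defense is the overlap:

\[ A \subseteq V, \qquad R \subseteq V, \qquad \text{thwarted} \;=\; A \cap R \]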
The notion that folks like Mr Maynor follow is that if they make the yellow circle big enough, it will eclipse the blue circle, and then there will never be a vulnerability (nor a security incident, for that matter) ever again. Yet those with solid foundations in security (as well as common sense) will realize that security is a social problem, not a technical one-- people will always find new ways in, and this may not be a finite calculation.
The second notion often heralded is that if we just make the yellow circle eclipse the red ("cranberry") circle, then we'll all be secure. Well, in theory, if the "good guys" knew all of the avenues of attack the "bad guys" know, then yes, perhaps, we would be pseudo-secure for a day or so. But there are two problems with this philosophy: 1) it is a daunting task, and 2) how do we know that the attacks we are finding truly overlap with the attacks the bad guys know? What if the picture looked more like this (below)? What if our rockstar "security researchers" are really just finding new bugs that the bad guys would never have found on their own? And, most importantly ... what if the bugs these "security researchers" find, and the bad guys don't, end up being used by bad guys to attack the good guys' systems because the patch takes effort, energy, time (and sometimes politics) to deploy?
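In the same (again, my) notation, the two notions and the feared reality boil down to:

\[ \text{Notion 1: } R = V \qquad \text{Notion 2: } A \subseteq R \qquad \text{Feared reality: } A \cap R \approx \emptyset \]

And in that feared case, publication during the patch window can only grow the red circle: afterwards the adversaries know A ∪ R, never less.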
Wouldn't it be nice if these rockstar "security researchers" were out building systems that didn't have the bugs that other rockstar "security researchers" would eventually find? Wouldn't it be nice if they participated in the "build security in" mentality and lifecycle?
It would have been nice if Mr Maynor had worked for Apple (either as an employee or a contractor) and had helped with their design, implementation, and quality assurance before this buggy code was released to market. I certainly would have tenfold more respect for him in that scenario.
I will admit that there are a few researchers approaching rockstar status who don't appear to have such flippant disregard for the public at large. Some of them frame what they do as "keeping vendors honest". I still question their motives from time to time (as anyone should). I'd list the names of the ones whose goals I would support, but I cannot control those people, and they fly so close to the sun.
Characteristics of reputable security researchers to watch for:
1. Publication of generalized threat models, not specific exploits.
2. Suggestions and proofs-of-concept for solutions to problems, not just examples of problems.
3. No apparent willingness to chase the almighty dollar in exchange for five minutes under the press spotlight.