Update 2: Further proof that people are abusing this on a wide scale, and likely wouldn't be had the exploit code not been released.
Update: I've clarified a few points and added a few others.
Recently Tavis Ormandy (a Google employee) discovered a security issue in Windows and, days after notifying Microsoft, published a working exploit to the Full-Disclosure mailing list after failing to get the vendor to commit to a fix within 60 days. I've been chatting about this issue with several people in the field and found that nobody has been discussing what many of us feel, so I decided to bite the bullet and put myself out there for criticism (and as a proxy for others). I suspect some people won't like my answers/opinions. My intention with this post isn't to insult or flame Tavis; it is to debate the act of releasing PoC exploit code when one is employed to protect people.
Note: Tavis found this on his own time; I am not saying Google had anything to do with it. However, where he works is relevant to some of the points below.
- Tavis is paid by his company to identify security issues and to work responsibly to ensure they are addressed.
- Tavis believed he was doing the right thing. His intention was to help people, not harm them.
- Tavis states this was done on his personal time.
- Tavis provided mitigation advice for the issue.
- The exploit published by Tavis is now being used in public attacks.
- Releasing the code put some of the customers who interact with his employer at risk, even more so because no fix existed.
- Some of the customers who interact with his employer will likely become infected with bots or other malware as a direct result of the exploit being published. These in turn will be used to help brute-force/attack his employer's platform as well as other companies.
- Publishing the code gives more attackers access to it, rather than keeping it within private circles.
- Enterprise vendors have release cycles and need to perform regression testing prior to pushing out fixes. In this case he discovered a vulnerability in almost a dozen versions of Windows. This requires coordination with each OS team to perform regression testing, as well as with any products that interact with the affected component.
- Microsoft is one of the few companies that responds to those who report issues in their products and has a generally good attitude toward issues such as this (nowadays).
- I don't have all the information on the communication between Tavis and Microsoft; I am going by what has been publicly discussed.
- After speaking with 4 different companies about this, every one said they would have fired him if he had been in a similar role at their company. I am not saying that I would fire him or that he should be fired, just that many people feel strongly about this.
- Microsoft is a competitor of Google (Ormandy's employer)
- The vuln has probably been known by others for some time.
- Had he not published some sort of information publicly, the issue may have taken longer to fix.
- If he had simply published an advisory without working exploit code, he would have gotten his message out (the media would have picked up on this because it's Microsoft) and put pressure on the vendor to fix it sooner. This is a fairly effective technique that has worked before.
- Even if Tavis had only published an advisory, eventually someone would have figured out the issue and used it to attack people.
My Opinions (based on what I know)
- While this issue was likely known already, had Tavis waited 90 days (let's say it would take 90 days for MS to fix this) for the fix to be released, maybe 1k-10k people would have been owned by the small circle that already knew how to exploit it. Now that the exploit code is public, 100k+ people are likely going to get owned within the 30-60 days it will take MS to ship a 'quicker fix' with its hand forced. The end result: more people owned because of the disclosure of PoC code.
- The shift has been that when you have a nice 0day (as an attacker) in a major OS, you typically focus it on specific targets and don't just blast everyone with it. As people have been debating for years, 0day vulnerabilities are worth a lot of money, and people will pay a lot for them depending on their motivations. See the WebDAV overflow (someone was attacking the Army, which is how MS discovered it) for an example of targeted attacks with 0days saved for high-profile targets (whatever that means).
- Had Tavis been a 'hacker', we wouldn't be having this conversation. When people are paid to protect others and are caught doing the opposite, it is seen as a conflict, much like a cop committing criminal acts on the side: the cop should know better, since it goes against the ethics of the position. I see it the same way here. *If you are paid to protect people, then it is an ethical conflict for you to publish working PoC code*.
- While 60 days may seem unreasonable, given my previous points I can understand why a fix can take so long. Anyone who has worked in software development at a company that ships product (as opposed to updating a website) understands that things can take time. Note: I am not defending 60 days; I am merely stating that for major products like these, a lot of things depend on the components being patched, and regression testing those other products takes time. You can't ship something without doing this if you are a software development shop of any worth. If MS were to ship a fix and something were to break, people would bash them for their lack of QA.
- If the vendor had refused to acknowledge the issue or blown off the researcher, this would be a different story. I have myself been forced to publish information when vendors were being uncooperative, in the name of ensuring the issue was addressed. I would not recommend this approach for everyone, and anyone in this position is open to attack, often unfairly.