
Dave Aitel on Static Analysis Tools

Dave Aitel has posted to the Daily Dave mailing list with his thoughts on the static analysis industry. From his email:

"So OWASP was dominated by lots of talk from and about static code
analysis tools. I wandered around with a friend of mine at the various
booths (CodeSecure [1], Fortify[2], IBM AppScan[3], Ounce Labs) and
tried them all while listening to their sales pitches. My friend works
for a financial institution that was looking to integrate static
analysis into their code development process. Like many people, she
thought the marketing sounded good. Keep in mind, a lot of the
sponsors for OWASP were static analysis tool vendors, and the
"Industry Panel" was heavily in favor of static analysis tools (until
you talked to them off-stage).

Here's my thoughts:

1. The technology's capabilities does not match the marketing pitch -
ideally for my friend, the tools would find all the exploitable
vulnerabilities in your code and then you would fix them, re-run it,
and get a clean bill of health.

All the tools provide you an interface that purports to fit into this
workflow. None of them, however, work like that. One of the major
problems with the technology is that you have to be a super genius
code auditor to decide if the vulnerabilities are real or not.

Also annoyingly the false positive rate is enormous even when run
against the tiny test programs they are using to demo the tools with.
So you end up with a ten page list of "bugs" that you may or may not
be able to understand enough to fix. All the tools provide nice code
browsers and a graph of data flow to help you with this process, but
in practice it's not enough."


"Those are not good signs for the technology field as a whole. One
possibility is that more research dollars will flood into the space
and the technology will get better and live up to its marketing.
Another possibility is that no matter how much you spend, pure static
analysis can't do the things you want it to do (the IBM and to some
extent Fortify bet)." - Dave Aitel
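Dave's complaint about false positives comes down to a modeling gap: a taint-style checker flags any flow from user input to a sensitive sink, even when the value was validated along the way by code the tool doesn't understand. Here is a minimal, hypothetical sketch of that gap (all names invented; not taken from any of the tools mentioned):

```python
# Minimal sketch of taint-style data-flow checking (hypothetical,
# not any vendor's tool). A "source" marks data as tainted; a
# "sink" reports a finding if tainted data reaches it.

class Tainted(str):
    """String subclass marking attacker-controlled data."""

def source(value):
    # e.g. an HTTP parameter: everything from the user is tainted
    return Tainted(value)

def sanitize(value):
    # A custom validator: only digits allowed, so the value is safe.
    # A naive checker that can't model this function will still
    # report the flow below as a vulnerability -- a false positive.
    if not value.isdigit():
        raise ValueError("not a number")
    return str(value)  # plain str: taint cleared

def sink(query):
    # The "SQL sink": report a finding if tainted data reaches it.
    return isinstance(query, Tainted)

user_id = source("42")

# Flow 1: raw input reaches the sink -> a real finding.
print(sink(Tainted("SELECT * WHERE id=" + user_id)))   # True

# Flow 2: validated input is safe, but a tool that can't see
# inside sanitize() would flag it anyway.
print(sink("SELECT * WHERE id=" + sanitize(user_id)))  # False
```

This is why, as Dave notes, someone still has to be a strong code auditor: deciding whether a reported flow is Flow 1 or Flow 2 requires understanding what the intervening code actually does.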

Read more at: http://seclists.org/dailydave/2008/q4/0005.html



I think that it's worth mentioning that the IBM AppScan approach is very different from the other vendors' in that it addresses your concern about having to be an expert auditor. We are bringing to market a patent-pending analysis method called string analysis. Look up string vs. taint analysis for more technical detail, but the gist of it is that you can be a regular developer and still get effective results from a scan in AppScan. We don't need you to know all of the filters/validation methods before you can gain value from a test.
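The string-vs-taint distinction the commenter points to can be sketched roughly as follows (an illustrative toy, not how AppScan is actually implemented): taint analysis asks a binary question about where data came from and needs a list of known sanitizers, while string analysis reasons about the values a string can actually take, approximated here by a finite set.

```python
import re

# Hypothetical contrast between taint analysis and string analysis
# (illustrative only; all names are invented).

# Taint analysis: binary question -- did the value come from the
# user, and was it passed through a *registered* sanitizer?
def taint_verdict(came_from_user, was_declared_sanitized):
    if came_from_user and not was_declared_sanitized:
        return "vulnerable"
    return "safe"

# String analysis: model the set of values the string can take and
# ask whether any of them can break out of a SQL string literal.
def string_verdict(possible_values):
    # If no reachable value can contain a quote, the flow is safe
    # even though the data came from the user -- no sanitizer list
    # needs to be maintained by the developer.
    if any("'" in v for v in possible_values):
        return "vulnerable"
    return "safe"

# A validator the taint engine knows nothing about:
def digits_only(s):
    return re.sub(r"\D", "", s)

raw_inputs = ["42", "1' OR '1'='1"]

# Taint analysis flags the flow, because nobody registered
# digits_only() as a sanitizer:
print(taint_verdict(came_from_user=True,
                    was_declared_sanitized=False))            # vulnerable

# String analysis computes the possible post-validation values
# and sees that none of them can contain a quote:
print(string_verdict([digits_only(s) for s in raw_inputs]))   # safe
```

Real string analyses model value sets with automata or regular languages rather than finite lists, but the payoff claimed above is the same: fewer judgment calls pushed onto the developer.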

We take a three-phase approach in the background, starting with the static code analysis (string or taint). Then AppScan targets the compiled, running application and tests it as if it were a black box. We take those results and correlate them so that you can trace an exploit all the way back to a single line of code. It also provides guidance on what changes to make to the code so that a non-security-savvy developer can remediate the issues.

Granted, we are still getting to the point where we support lots of languages, but that will be coming quickly.
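The static/dynamic correlation the commenter describes can be pictured as a join between two finding lists: the dynamic scanner confirms an issue at runtime, and the static trace maps it back to a source line. A minimal sketch, with an entirely invented data model (this is not IBM's actual format):

```python
# Hypothetical correlation of black-box (dynamic) findings with
# static data-flow traces. All field names are invented.

static_findings = [
    {"id": "S1", "sink": "executeQuery", "param": "id",
     "trace": ["Login.java:88", "Dao.java:42"]},
]
dynamic_findings = [
    {"id": "D1", "url": "/login", "param": "id", "issue": "SQLi"},
]

def correlate(static, dynamic):
    # Join on the tainted parameter name: a runtime-confirmed issue
    # is mapped back to the last line of the static trace, i.e. the
    # line of code that feeds the sink.
    out = []
    for d in dynamic:
        for s in static:
            if d["param"] == s["param"]:
                out.append({"dynamic": d["id"], "line": s["trace"][-1]})
    return out

print(correlate(static_findings, dynamic_findings))
# [{'dynamic': 'D1', 'line': 'Dao.java:42'}]
```

The appeal of this kind of join is that a dynamically confirmed finding is, almost by definition, not a false positive, which directly targets the complaint in the original post.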

Wow, what a marketing pitch! The fact that it is patent pending must mean it's good!

So you are saying AppScan is the most accurate scanner in the industry?

Was AppScan part of the NIST evaluation?

The best way to go about it would be to mix and match. Of course, that's hard with all the marketing material, which one would be happy not to get suffocated by. That gives me an idea: brochures with oxygen sprayers might work well!

Yeah, their marketing pitch was compelling. I had an opportunity to try out the Fortify SCA tool and blogged about it here:


First, of the tools listed, AppScan is different, but only because it is a black-box scanner... the rest, we don't care about.
NIST didn't evaluate it in the SATE experiment, because doing so would have made no sense at all in this particular exposition! (but we would like to do it later, for the next SATEs)

Another point concerning the email from Dave:
"Anyways, market stuff aside, NIST did a survey[5] (and presented at
OWASP) of all the solutions they could get to play, and discovered
that they basically don't work (not their words). They said not to use
their survey to make decisions like that, but let me run down the
conclusions as I saw them based entirely on the 1 hr OWASP presentation"

This is just totally wrong. I, as one of the organizers of this experiment, cannot agree with anything here:
- first, we didn't play with the tools, since we didn't run them (huge difference)
- they DO work; they find stuff, and a lot of it! Granted, they don't work like a human would, and you need expertise to evaluate the results
- when we say DO NOT rely on our evaluations to choose a tool, it's because our evaluation is INCOMPLETE; we basically didn't have time to do everything and had to make decisions about what to evaluate. It turned out that some of our choices about which weaknesses to evaluate were just wrong. Users shouldn't make a decision based on a limited and biased view of the tools...

My 2 cents: I'm really getting annoyed by people not understanding the data but trying to make it say things that don't make any sense at all...