Flawfinder

This is the main web site for flawfinder, a program that examines source code and reports possible security weaknesses (``flaws'') sorted by risk level. It's very useful for quickly finding and removing at least some potential security problems before a program is widely released to the public. See ``How does Flawfinder work?'', below, for more information on how it works.

Flawfinder is specifically designed to be easy to install and use. After installing it, at a command line just type:

    flawfinder directory_with_source_code

Flawfinder works on Unix-like systems today (it's been tested on GNU/Linux), and it should be easy to port to Windows systems. It requires Python 1.5 or greater to run (Python 1.3 or earlier won't work).

Please take a look at other static analysis tools for security, too. One reason I wrote flawfinder was to encourage using static analysis tools to find security vulnerabilities.

Sample Output

If you're curious what the results look like, here are some sample outputs:

  1. The actual text output (when allowing all potential vulnerabilities to be displayed)
  2. The actual HTML output, with context information. This output uses the "--context" option; the text of the risky line is included in the output, which some people find useful. Note that you can use your own web browser to display the results! (And yes, normally flawfinder scans code orders of magnitude faster than this - it's an intentionally stressful test case.) A command for generating similar HTML output is sketched after this list.
All of these results came from analyzing this test C program.
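
If you want to produce similar HTML output for your own code, a command along these lines should work (the directory name is a placeholder):

    flawfinder --html --context directory_with_source_code > results.html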

The test code intentionally includes a large number of security problems, both to test flawfinder and show what it can find; hopefully the code you're analyzing won't have quite so many high-risk vulnerabilities!

License

Flawfinder is released under the GNU General Public License (GPL) version 2 or later, and thus is open source software (as defined by the Open Source Definition) and Free Software (as defined by the Free Software Foundation's GNU project). Feel free to see Open Source Software / Free Software (OSS/FS) References or Why OSS/FS? Look at the Numbers! for more information about OSS/FS.

Testimonials

Others have found it useful. Here are a few testimonials:

  1. I just installed the 0.21 version of Flawfinder. I tried a few different code checking tools and it's by far the friendliest to use. - Darryl Luff
  2. I just sent tons of C/C++ source through flawfinder 1.0. Thanks for the tool, it found several places that I have now fixed. - Joerg Beyer
  3. Thank you for flawfinder! It has helped me in many ways over the last year, for which I am truly [grateful]! - Elfyn McBratney
  4. The other day I was about to clean some old code. After receiving 17K lines of mixed C/C++ I realized that running some kind of source code analyser would be a good idea. I downloaded a whole bunch'of'em (tm), but the only tool that just plain worked on the first run was Flawfinder. Easy to use, no hazzles with strange parameters or configfiles! Instead of learning new software I could concentrate on what I wanted, namely to get down'n'dirty with the code. Thanks! - Jon Björkebäck, developer, Sweden
  5. flawfinder is a good tool for finding potential security issues, and I've been happily using it for a few months now. - Steve Kemp, Debian Security Audit Project, 2004-05-22
  6. I would like to thank you for this awesome piece of software. We are using it in our project Scribus (scribus.net) for a few days. It's very helpful for us. cheers! - Petr Vanek, developer, 2005-12-10
  7. thanx for this great tool. It's working da*n good. I'm using it against wireshark [previously named ethereal] and it is really useful to track potential misuse of C functions. - Sebastien Tandel, developer, 2007-01-10
  8. "Hurra FlawFinder ! FlawFinder is the greatest software of the World. We are fans ! With FlawFinder we never have buffer overflow... With FlawFinder we always find FlawFlaw to make 300 000 getinfos in 300 seconds ! FlawFinder is the Kikipédia of the day." - Christophe JUILLET, 2008-10-08

Documentation

Flawfinder comes with a simple manual describing how to use it. If you're not sure you want to take the plunge to install the program, you can just look at the documentation first. The documentation is available in several formats.

Downloading

The latest version of flawfinder (including the program, installation scripts, and documentation) is available in several formats.

The current version of flawfinder is 1.27. If you want to see how it's changed, view its ChangeLog. You can even go look at the flawfinder source code directly.

You can also visit the SourceForge flawfinder project page - downloads are available there, and you definitely need to go there if you want to get on the mailing list, submit a bug report or feature request, or see/get the latest drafts.

Installation

Many Unix-like systems already have a flawfinder package available.

Debian users can quickly download and install flawfinder using apt-get, as usual (my thanks to Adam Lazur for doing the Debian packaging):

  apt-get install flawfinder

For RPM files, download and install them as you would normally install an RPM file. If you want to install the RPM file through a command line, this would be:

  rpm -Uvh flawfinder-*.noarch.rpm

It's also available in many other distributions. Flawfinder is available via FreeBSD's Ports system (see this FreeBSD ports query for flawfinder and flawfinder info for security-related ports). OpenBSD includes flawfinder in its "ports". NetBSD users can simply use NetBSD's pkgsrc to install flawfinder (my thanks to Thomas Klausner for doing the flawfinder NetBSD packaging). The Fink project, which packages OSS/FS for Darwin and Mac OS X, has a Fink flawfinder package, so users of those systems may find that an easy way to get flawfinder.

But if there's no package available to you, or it's old, you can install flawfinder directly using the tarball format. On Unix-like systems, if you choose the tarball format, you can uncompress and install it in the usual manner. First, uncompress it and become root to install:

  gunzip flawfinder-*.tar.gz
  tar xvf flawfinder-*.tar
  cd flawfinder-*
  su
Then install. You can install to the default installation directory, /usr/local, which will put the program in /usr/local/bin and the manual inside /usr/local/man, by invoking:
  make install
You can override these defaults using standard GNU conventions by setting, on the make command line, INSTALL_DIR (normally /usr/local), INSTALL_DIR_BIN (usually INSTALL_DIR/bin), and/or INSTALL_DIR_MAN (usually INSTALL_DIR/man). For example, to install the binary in /usr/bin and the manual pages under /usr/share/man (as a Red Hat Linux system is typically configured), do this:
  make INSTALL_DIR=/usr INSTALL_DIR_MAN=/usr/share/man install

Cygwin systems (for Microsoft Windows) need to set "PYTHONEXT=.py" in the make command, like this:

  make PYTHONEXT=.py install

See the installation instructions for more information.

Joining the flawfinder community

Flawfinder is now hosted on SourceForge. You can discuss how to use or improve the tool on its mailing list, and you can see the latest drafts on the Subversion version control system.

Speed

Flawfinder is written in Python, to simplify the task of writing and extending it. Python code is not as fast as C code, but for this task I believe it's just fine. Flawfinder version 0.12 on a 400 MHz Pentium II system analyzed 51,055 lines in 39.7 seconds, an average of 1,285 analyzed lines/second. Flawfinder 1.20 and later will report their speed (in analyzed lines/second) if you're curious.

How does Flawfinder Work?

Flawfinder works by using a built-in database of C/C++ functions with well-known problems, such as buffer overflow risks (e.g., strcpy(), strcat(), gets(), sprintf(), and the scanf() family), format string problems ([v][f]printf(), [v]snprintf(), and syslog()), race conditions (such as access(), chown(), chgrp(), chmod(), tmpfile(), tmpnam(), tempnam(), and mktemp()), potential shell metacharacter dangers (most of the exec() family, system(), popen()), and poor random number acquisition (such as random()). The good thing is that you don't have to create this database - it comes with the tool.
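
To make this concrete, here is a minimal, hypothetical C fragment (the function and variable names are made up); every call below is in flawfinder's built-in database, so each should be reported as a hit (the exact risk levels depend on the flawfinder version):

    /* demo.c: a tiny, hypothetical example of code flawfinder flags. */
    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>

    void demo(const char *input)        /* assume 'input' is untrusted */
    {
        char buf[64];
        strcpy(buf, input);             /* buffer overflow risk: no bounds check   */
        sprintf(buf, input);            /* format string risk: non-constant format */
        system(input);                  /* shell metacharacter danger              */
        printf("%ld\n", random());      /* poor randomness for security purposes   */
    }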

Flawfinder then takes the source code text and matches it against those names, while ignoring text inside comments and strings (except for flawfinder directives). Flawfinder also knows about gettext (a common library for internationalized programs), and will treat a constant string wrapped in a gettext call as though it were still a constant string; this reduces the number of false hits in internationalized programs.
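
A sketch of the gettext case (the _() macro shown is a common convention, and the function name is made up): the format string below is a constant wrapped in gettext(), so flawfinder still treats it as a constant format string rather than reporting a format string problem.

    #include <libintl.h>
    #include <stdio.h>
    #define _(s) gettext(s)

    void greet(const char *name)
    {
        /* constant format wrapped in gettext(): no format-string hit */
        printf(_("Hello, %s!\n"), name);
    }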

Flawfinder produces a list of ``hits'' (potential security flaws), sorted by risk; by default the riskiest hits are shown first. The risk level depends not only on the function, but also on the values of the function's parameters. For example, constant strings are generally less risky than fully variable strings. In some cases, flawfinder may be able to determine that a construct isn't risky at all, reducing false positives.
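
For instance, in this hypothetical fragment (names and buffer sizes are made up; exact levels vary by flawfinder version), the same function is reported at different risk levels depending on its arguments:

    #include <string.h>

    void save(const char *user_input)   /* hypothetical untrusted data */
    {
        char buf[80];
        strcpy(buf, "fixed message");   /* constant source: rated at lower risk */
        strcpy(buf, user_input);        /* fully variable source: high risk     */
    }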

Flawfinder gives better information - and better prioritization - than simply running "grep" on the source code. After all, it knows to ignore comments and the insides of strings, and it will also examine parameters to estimate risk levels. Nevertheless, flawfinder is fundamentally a naive program; it doesn't even know about the data types of function parameters, and it certainly doesn't do control flow or data flow analysis (see the references below to other tools, like SPLINT, which do deeper analysis). I know how to do that, but doing so is far more work; sometimes all you need is a simple tool. Also, because it's simple, it doesn't get as confused by macro definitions and other oddities that more sophisticated tools have trouble with.

Not every hit is actually a security vulnerability, and not every security vulnerability is necessarily found. As noted above, flawfinder doesn't really understand the semantics of the code at all - it primarily does simple text pattern matching (ignoring comments and strings). Nevertheless, flawfinder can be a very useful aid in finding and removing security vulnerabilities.

Reviewing patches

Sometimes you don't want to review an entire program - you only want to review the set of changes that were made to a program. If the changes are well-localized (e.g., to a particular section of a file), this is trivial to do by hand, but it's harder otherwise. Flawfinder 1.27 added automated support so that you can review only the changes in a program.

First, create a "unified diff" file comparing the older version to the current version (say, using GNU diff with the -u option or Subversion's diff). Then run flawfinder on the newer version, and give it the --patch (-P) option pointing to that unified diff.
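
For example (the directory and file names here are placeholders), using GNU diff and then flawfinder 1.27 or later:

    diff -u -r old-src/ new-src/ > changes.patch
    flawfinder --patch changes.patch new-src/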

This works because flawfinder does its normal analysis, but only reports hits that relate to lines changed in the unified diff (the patch file). Flawfinder reads the unified diff file, which tells it what files were changed and which lines in those files changed. More specifically, it uses "Index:" or "+++" lines to determine the files that changed, it uses the line numbers in "@@" regions to get the chunk line number ranges, and it then uses the initial +, -, or space on each line to determine which lines really changed.
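
For instance, in a hypothetical unified diff like the following, flawfinder uses the "Index:" and "+++" lines to find the changed file, the "@@" line for the chunk's line numbers, and the leading "+", "-", or space to see which lines changed:

    Index: src/copy.c
    --- src/copy.c   (revision 100)
    +++ src/copy.c   (revision 101)
    @@ -40,2 +40,3 @@
     void copy_name(char *dst, const char *src) {
    -    strcpy(dst, src);
    +    strncpy(dst, src, 63);
    +    dst[63] = '\0';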

One challenge is statements that span lines; a statement might start on one line, yet have a change that adds a vulnerability in a later line, and depending on how the vulnerability is reported it might get chopped off. Currently flawfinder handles this by showing vulnerabilities that are reported one line before or after any changed line - which seems to be a reasonable compromise.

Note that the problem with this approach is that it won't notice if you remove code that enforces security requirements. Flawfinder doesn't have that kind of knowledge anyway, so that's not a big deal in this case.

A Fool with a Tool is still a Fool

Any static analysis tool, such as Flawfinder, is merely a tool. No tool can substitute for human thought! In short, "a fool with a tool is still a fool". It's a mistake to think that analysis tools (like flawfinder) are a substitute for security training and knowledge. Developers - please read documents like my Secure Programming book so you'll understand the vulnerabilities that the tool is trying to find! Organizations - please make sure your developers understand how to develop secure software (including learning about the common mistakes past developers have made), before having them develop software or use static analysis tools.

An example of horrific tool misuse is disabling vulnerability reports without (1) fixing the vulnerability, or (2) ensuring that it is not a vulnerability. It's publicly known that RealNetworks did this with flawfinder; I suspect others have misused tools this way. I don't mean to beat on RealNetworks particularly, but it's important to apply lessons learned from others, and unlike many projects, the details of their vulnerable source code are publicly available. As noted in iDEFENSE Security Advisory 03.01.05 on RealNetworks RealPlayer (CVE-2005-0455), a security vulnerability was in this pair of lines:

   char tmp[256]; /* Flawfinder: ignore */
   strcpy(tmp, pScreenSize); /* Flawfinder: ignore */

This means that flawfinder did find this vulnerability, but instead of fixing it, someone added the "ignore" directive to the code so that flawfinder would stop reporting the vulnerability. But an "ignore" directive simply stops flawfinder from reporting the vulnerability - it doesn't fix the vulnerability! The intended use of this directive is to add it once a reviewer has determined that the report is definitely a false positive, but in this case the tool was reporting a real vulnerability. The same thing happened again in iDefense Security Advisory 06.23.05, aka CVE-2005-1766, where the vulnerable line was:

   sprintf(pTmp,  /* Flawfinder: ignore */
And a third vulnerability with the same issue was reported still later in iDefense Security Advisory 06.26.07, RealNetworks RealPlayer/HelixPlayer SMIL wallclock Stack Overflow Vulnerability, aka CVE-2007-3410, where the vulnerable line was:
   strncpy(buf, pos, len); /* Flawfinder: ignore */

This is not to say that RealNetworks is a fool or set of fools. Indeed, I believe many organizations, not just RealNetworks, have misused tools this way. My thanks to RealNetworks for publicly admitting their mistake - it allows others to learn from it! My specific point is that you can't just add comments with "ignore" directives and expect that the software is suddenly more secure. Do not add "ignore" directives until you are certain that the report is a false positive.

This kind of problem can easily happen in organizations that say "run scanning tools until there are no more warnings" but don't later review the changes that were made to eliminate the warnings. If warnings are eliminated because code is changed to eliminate vulnerabilities, that's great! General-purpose scanning tools like flawfinder will produce false positive reports, though; it's easy to create a tool without false positives, but only by failing to report many possible vulnerabilities (some of which really will be vulnerabilities). The obvious answer, if you want a broader tool, is to let developers examine the code and, if they can truly justify that a report is a false positive, document why it is a false positive (say, in a comment near the report) and then add a "Flawfinder: ignore" directive. But you need to really justify that the report is a false positive; just adding an "ignore" directive doesn't fix anything! Sometimes it's easier to fix a problem that may or may not be a vulnerability than to prove that it's a false positive - the OpenBSD developers have done this successfully for years, since if complicated code isn't an exploitable vulnerability yet, a tiny change can often turn such fragile code into a vulnerability.
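
Here is a hypothetical example of doing it right (the variable name and sizes are made up): the justification lives in a comment right next to the directive, so a later reviewer can check it.

    /* Safe: 'version_tag' is one of three compile-time constants,
       each under 16 bytes, so it always fits in this 32-byte buffer. */
    char buf[32];
    strcpy(buf, version_tag);  /* Flawfinder: ignore */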

If you're in an organization using a scanning tool like this, make sure you review every change caused by a vulnerability report. Every change should be either (1) truly fixed or (2) correctly and completely justified as a false positive. I think organizations should require any such justification to be in comments next to the "ignore" directive. If the justification isn't complete, don't mark it with an "ignore" directive. And before developers even start writing code, get them trained on how to write secure code and what the common mistakes are; this material is not typically covered in university classes or even on the job.

The "ignore" directives are a very useful mechanism - once you have done the analysis, having to re-do the analysis for no reason could use up so much time that it would prevent you from resolving real vulnerabilities. Indeed, many people wouldn't use source scanning tools at all if they couldn't insert "ignore" directives when they are done. The result would be code with vulnerabilities that would be found by such tools. But any mechanism can be misused, and clearly this one has been.

Flawfinder does include a weapon against useless "ignore" directives - the --neverignore (-n) option. This option is the "ignore the ignores" option - any "ignore" directives are ignored. But in the end, you still need to fix vulnerabilities or ensure that reported vulnerabilities aren't really vulnerabilities at all.
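
For example, to re-scan a source tree while disregarding all "ignore" directives:

    flawfinder --neverignore directory_with_source_code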

Another problem: when a tool tells you there's a problem, never fix a bug you don't understand. For example, the Debian folks ran a tool that found a purported problem in OpenSSL; it wasn't really a problem, and their "fix" actually created a security problem.

More generally, I am not of the opinion that analysis tools are always "better" than any other method for creating secure software. I don't really believe in a silver bullet, but if I had to pick one, "developer education" would be my silver bullet, not analysis tools. Again, a "fool with a tool is still a fool". I believe that when you need secure software, you need to use a set of methods, including education, languages/tools where vulnerabilities are less likely, good designs (e.g., ones with limited privilege), human review, fuzz testing, and so on; a source scanning tool is just a part of it. Gary McGraw similarly notes that simply passing a scanning tool does not mean perfect security, e.g., tools can't normally find "didn't ask for authorization when it should have".

That said, I think tools that search source or binaries for vulnerabilities usually need to be part of the answer if you're trying to create secure software in today's world. Customers/users are generally unwilling to reduce the amount of functionality they want to something we can easily prove correct, and formally proving programs correct has not scaled well yet (though I commend the work to overcome this). No programming language can prevent all vulnerabilities from being written in the first place, even though selecting the right programming language can be helpful. Human review is great, but it's costly in many circumstances and it often misses things that tools can pick up. Execution testing (like fuzz testing) only checks a minuscule part of the input space. So we often end up needing source or binary scanning tools as part of the process, even though current tools have a HUGE list of problems... because NOT using them is often worse. Other methods may find the vulnerability, but they typically don't scale well.

Hit Density (Hits/KSLOC)

One of the metrics that flawfinder reports is hit density, that is, hits per thousand lines of source code. In some unpublished work, I and someone else found that hit density is a helpful relative indicator of the likelihood of security vulnerabilities in various products. We examined some open source software, such as Sendmail and Postfix, and determined the hit density of each; the ones with higher hit density tended to be the ones with the worse security record later on. And that was true even when none or few of the reported hits were clearly security vulnerabilities.
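
The metric itself is just a ratio; for illustration (the numbers below are made up):

    hit density = hits / (SLOC / 1000)

    e.g., 46 hits in 11,500 lines of code: 46 / 11.5 = 4.0 hits/KSLOC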

When you think about it, that makes sense. If a program has a high hit density, it suggests that its developers often use very dangerous constructs that are hard to use correctly and often lead to vulnerabilities. Even if the hits themselves aren't vulnerabilities, developers who repeatedly use dangerous constructs will sooner or later make the final mistake and allow a vulnerability. It's like a high-wire act -- even talented people will eventually fall if they walk on it long enough.

This appeared to break down on very small programs (less than 10K lines); a program much smaller than its competition might have a larger hit density yet still be secure. I speculate that because density is a fraction, when a program is much smaller than its rivals, density is dramatically forced up (because size is in the denominator). Yet programs that are dramatically smaller are much easier to evaluate directly, so direct review is more likely to counter vulnerabilities in this case.

Flawfinder and RATS

Unbeknownst to me, while I was developing flawfinder, Secure Software Solutions simultaneously developed RATS, which is also a GPL'ed source code scanner using a similar approach. We agreed to release our programs simultaneously (on May 21, 2001), and we agreed to mention each other's programs in our announcements (you can even see the original flawfinder announcement). Now that we've both released our code, we hope to coordinate in the future so that there will be a single ``best of breed'' source code scanner that is open source / free software. Exactly how this will happen is not yet clear, so be prepared for future announcements.

Until we've figured out how to merge these dissimilar projects, I recommend that distributions and software development websites include both programs. Each has advantages that the other doesn't. For example, at the time of this writing Flawfinder is easier to use - just give flawfinder a directory name, and flawfinder will enter the directory recursively, figure out what needs analyzing, and analyze it. Other advantages of flawfinder are that it can handle internationalized programs (it knows about special calls like gettext(), unlike RATS), flawfinder can report column numbers (as well as line numbers) of hits, and flawfinder can produce HTML-formatted results. The automated recursion and HTML-formatted results make flawfinder especially nice for source code hosting systems. The flawfinder database includes a number of entries not in RATS, so flawfinder will find things RATS won't. In contrast, RATS can handle other programming languages and runs faster. Both projects are essentially automated advisors, and having two advisors look at your program is likely to be better than using only one (it's somewhat analogous to having two people review your code for security).

Reviews/Papers

Many have reviewed flawfinder or mentioned flawfinder in articles, as well as related tools. Examples include:

  1. "Code Injection in C and C++ : A Survey of Vulnerabilities and Countermeasures" by Yves Younan, Wouter Joosen, and Frank Piessens (Report CW386, July 2004, Department of Computer Science, K.U.Leuven) is a comprehensive survey of many different ways to counter vulnerabilities. Its abstract says, " ... This report documents possible vulnerabilities in C and C++ applications that could lead to situations that allow for code injection and describes the techniques generally used by attackers to exploit them. A fairly large number of defense techniques have been described in literature. An important goal of this report is to give a comprehensive survey of all available preventive and defensive countermeasures that either attempt to eliminate specific vulnerabilities entirely or attempt to combat their exploitation. Finally, the report presents a synthesis of this survey that allows the reader to weigh the advantages and disadvantages of using a specific countermeasure as opposed to using another more easily." They list a wide variety of countermeasures and describe their pros and cons. For example, They state that flawfinder, as well as RATS and ITS4, all have the advantages of "very low" comparitive cost and "very low" memory cost, and that all can find vulnerabilities in the categories V1 (Stack-based buffer overflow), V2 (Heap-based buffer overflow), and V4 (Format string vulnerabilities). All have the applicability limitations A1 (Source code required) and A10 (Only protects libc string manipulation functions). All have the protection limitation P17 (False negatives are possible). That's a fair description of the strengths and weaknesses of flawfinder and similar tools.
  2. Source Code Scanners for Better Code in Linux Journal discusses Flawfinder, RATS, and ITS4. The review noted that the version of flawfinder they used had a weakness - it didn't automatically report static character buffers. That weakness has since been corrected; flawfinder as of version 1.20 can also report static character buffers.
  3. Clean Up Your Code with Flawfinder was one of the first announcements by others about Flawfinder.
  4. Flawfinder 1.22, le chasseur de failles (in French; the title means "Flawfinder 1.22, the flaw hunter").
  5. the UC Davis Reducing Software Security Risk through an Integrated Approach project (see the flawfinder entry)
  6. "Apparently insecure, analysis of Windows 2000, Linux and OpenBSD sourcecode" (in German), iX 04/04, p. 14. This is noted in the OpenBSD press area for March, 2004, which states that:
    A small article describing the results of examining Windows 2000, Linux and OpenBSD source code using Flawfinder. "OpenBSD is ahead, Flawfinder finds a surprisingly small number of potentially dangerous constructs. The source code audit by the OpenBSD team seems to pay out. Additionally, OpenBSD uses the secure strlcpy/strlcat by Todd C. Miller instead of strcpy etc."
  7. "A Comparison of Publicly Available Tools for Static Intrusion Prevention". You might also want to see "A Comparison of Publicly Available Tools for Dynamic Buffer Overflow Prevention")
  8. "A Comparison of Static Analysis and Fault Injection Techniques for Developing Robust System Services" by Pete Broadwell and Emil Ong, Technical Report, Computer Science Division, University of California, Berkeley, May 2002, used static source code analysis (like flawfinder) and software fault injection against some commonly-used applications. They used some static tools (like ITS4, Warnbuf, and Stumoch) and some dynamic tools (like Fuzz Lite and FIG). As with many other papers, they found that static tools found many false positives, but that "When the tool did find an error however, they were extremely useful." This paper also has references to many other papers.
  9. Methods for the prevention, detection and removal of software security vulnerabilities by Jay-Evan J. Tevis and John A. Hamilton (Auburn University, Auburn, Alabama). This was published in the Proceedings of the 42nd annual Southeast regional conference, Huntsville, Alabama, 2004 (Pages: 197 - 202). ISBN 1-58113-870-9/04/04. The ACM digital library has a copy.
  10. Software Security for Open-Source Systems by Crispin Cowan (IEEE Security and Privacy, 2003) briefly reviews various auditing (static and dynamic) and vulnerability mitigation tools.
  11. "Characterizing the 'Security Vulnerability Likelihood' of Software Functions" by DaCosta, Dahn, Mancoridis, and Prevelakis gives evidence that most vulnerabilities are clustered near inputs, a plausible hypothesis. Note that flawfinder includes the ability to highlight input functions, because I expected that myself.
  12. The presentation Lexical analysis in source code scanning by Jose Nazario (Uninet Infosec 2002, 20 April 2002) discusses his prototype tool, Czech, which uses techniques similar to flawfinder. In it, he says "source code analysis using lexical analysis techniques is worthwhile for development. However, it can only assist the developer, not replace a manual audit" (true enough!)
  13. The paper Static Analysis for Security by Gary McGraw (Cigital) and Brian Chess of Fortify Software gives an overview of static analysis tools (like flawfinder). This is the fifth article in the IEEE Security & Privacy magazine series called "Building Security In."
  14. Will code check tools yield worm-proof software? by Robert Lemos (CNET News.com), dated May 26, 2004, gives an overview of static analysis tools for a somewhat lay audience.

Practical Code Auditing by Lurene Grenier (December 13, 2002) briefly discusses simple approaches that can be used for manual auditing (she works on the OpenBSD project). It does note that you can "grep" for certain kinds of problems; flawfinder is essentially a smart grep that already knows what to look for, so it could easily fit into the process at those points. The paper also specifically notes some of the things that are hard to grep for (which are the kinds of things that flawfinder would miss).

Secure Programming with Static Analysis (Addison-Wesley Software Security Series) by Brian Chess and Jacob West discusses static analysis tools in great detail.

Of course, there are many programs that analyze programs, particularly those that work like "lint". There is a set of papers about the Stanford checker which you may find interesting.

Other static analysis tools for security

As noted above, RATS is the project most similar to flawfinder; it uses the same basic technique, and is released under the GPL. If you're looking for another FLOSS tool to help you find security problems in your C programs, for now I particularly suggest that you look at SPLINT. NIST's Software Assurance Metrics and Tool Evaluation (SAMATE) project posts a general list of static analysis tools.

OSS tools

Other OSS/FS tools/projects that statically analyze programs for security issues (besides flawfinder) include:

  1. OWASP LAPSE+, a static security analyzer for Java web applications that is a successor to the LAPSE project (GPL).
  2. FindSecurityBugs (LGPL) is a plug-in for FindBugs for finding security-related defects.
  3. SPLINT (GPL license). This works somewhat like lint, searching for probable errors; to really use it, developers need to add additional annotations to help the tool identify problems. This is a very mature program, widely used, and one you can start using right away on "real programs".
  4. Cqual (GPL license). "Cqual is a type-based analysis tool that provides a lightweight, practical mechanism for specifying and checking properties of C programs. Cqual extends the type system of C with extra user-defined type qualifiers. The programmer adds type qualifier annotations to their program in a few key places, and Cqual performs qualifier inference to check whether the annotations are correct. The analysis results are presented with a user interface that lets the programmer browse the inferred qualifiers and their flow paths."
  5. MOPS (old BSD license) "MOPS is designed to check for violations of rules that can be expressed as temporal safety properties. A temporal safety property dictates the order of a sequence of operations. For example, in Unix systems, we might verify that the C program obeys the following rule: a setuid-root process should not execute an untrusted program without first dropping its root privilege." It uses a model checking approach.
  6. Clang Static Analyzer (BSD-like license) can find bugs in C and Objective-C programs. Here are a few comments about Clang Static Analyzer from a user.
  7. RIPS does static code analysis on PHP code. It's currently in PHP, but RIPS is being rewritten.
  8. CIL is a framework for analyzing C programs.
  9. BLAST (Berkeley Lazy Abstraction Software Verification Tool). "BLAST is a software model checker for C programs. The goal of BLAST is to be able to check that software satisfies behavioral properties of the interfaces it uses. BLAST uses counterexample-driven automatic abstraction refinement to construct an abstract model which is model checked for safety properties. The abstraction is constructed on-the-fly, and only to the required precision." Note: The first version of BLAST was developed at UC Berkeley, but follow-on work is going on at EPFL.
  10. BOON (BSD-like license). BOON stands for "Buffer Overrun detectiON". "BOON is a tool for automatically finding buffer overrun vulnerabilities in C source code. Buffer overruns are one of the most common types of security holes, and we hope that BOON will enable software developers and code auditors to improve the quality of security-critical programs."
  11. ggcc is an extension of the gcc compiler suite that will do static checking of various kinds. As of May 2008 it was in early development.
  12. Stanse (GPLv2) is a static analysis framework to find bugs in C code. It's written in Java, plus some Perl.
  13. The Spike PHP Security Audit Tool is for analyzing PHP programs.
  14. Pixy scans PHP programs for XSS and SQLI vulnerabilities; it is written in Java.
  15. Orizon is a general-purpose code analysis system (though their primary interest is security scanning). Milk is a Java source code security scanner built on top of Orizon. They are connected to OWASP.
  16. PScan (GPL license) is a source code scanner like flawfinder and RATS, but has only a limited capability. It's really only intended to find format string problems. In contrast, both flawfinder and RATS can find format string problems and many other problems as well.
  17. The Open Source Quality Project at Berkeley is investigating tools and techniques for assuring software quality (not just security) of OSS/FS programs.
  18. Project pedantic's Czech by Jose Nazario might become interesting, but as of April 2004 it looks like that project has halted, with only a buggy not-ready prototype so far (which is too bad!).
  19. smatch is a general-purpose tool for statically analyzing programs, and could be used to build vulnerability scanners. Indeed, there are lots of tools for statically analyzing programs in a general way; this is only one example.
  20. Sparse is a specialized static analysis tool that does additional type-checking, including checks related to security. It was originally designed to check the Linux kernel source code. Sparse finally has its own web page. More information on sparse is available from the CE Linux forum, the Quick sparse HOWTO by Randy Dunlap, and the sparse mailing list. You can download older snapshots of sparse's code from codemonkey.
  21. Oink (including Cqual++) (BSD-like license), a collaboration of C++ static analysis tools.
  22. Yasca (BSD license) is a "simple static analysis tool designed to analyze source code for a variety of errors. It is both a framework and an implementation, and leverages other open source code scanners where applicable."
  23. Frama-C (LGPL) is a framework for the development of collaborating static analyzers for the C language. Many analyzers are provided in the distribution, including a value analysis plug-in that provides variation domains for the variables of the program, and Jessie, a plug-in for computing Hoare style weakest preconditions. It provides a formal behavioral specification language for C programs named ACSL.
  24. RTL-check "RTL-check is an extensible and powerful abstract interpretation framework for static analysis of programs from a safety and security perspective. It performs analysis on RTL, which is the low-level intermediate representation generated by GCC. See the documentation section for more information." The code is on SourceForge; a good first start to learning about it is to read Patrice Lacroix master's thesis.
  25. PMD looks for potential problems in Java code. Not specific to security. (BSD-style license) There are other Java program analyzers too.
  26. Findbugs also looks for potential problems in Java code. Not specific to security (LGPL license).
  27. cppcheck does a breadth-first search for bugs (not just one for the host platform). There's little documentation, unfortunately, but you can invoke it like this (use the force option "-f" else it will give up on some files, and use -a ("all warnings") to get all details):
      cppcheck -a -f ./ 2> cpperr.txt &
    
  28. PerlCritic analyzes perl programs. It's really a style checker, not so much a vulnerability scanner.
  29. Agnitio is a tool to manage checklists when doing manual reviews. It's a different kind of tool, but I thought it'd be worth noting. Warning: it needs .NET and doesn't run on Mono as of 2011-09-15 (though they are working on that).
  30. Treehydra is a GCC plugin that provides a low level JavaScript binding to GCC's GIMPLE AST representation. Treehydra is intended for precise static analyses. Most of Treehydra is generated by Dehydra. A Dehydra script walks the GCC tree node structure using the GTY attributes present in GCC. Treehydra is included in Dehydra source, and is built when a plugin-enabled CXX is detected.
  31. Coccinelle, also known as spatch, is a source-to-source translator available under GPLv2. Valerie Henson (now Valerie Aurora) has written an article about Coccinelle, and here's another article about it.
  32. bddbddb / bddshell: bddbddb (aka b5b) is a general-purpose tool for analyzing big programs. It lets you read in a program and then enter queries in a Prolog-like language, and its internals use the BDD data structure to make all of this work for large programs. bddshell lets you use it interactively. These are more "tools for building analysis tools" than analysis tools themselves.
  33. LLVM is really a compiler infrastructure project, but among other things it can be used to create analysis tools. But it's not a security analysis tool by itself.
  34. Elsa (BSD license) is a C/C++ parser based on Elkhound. GCC also has a parser.

There is a similar program, ITS4 (from Cigital), but it isn't open source software or Free Software (OSS/FS) as defined above, and as far as I know it isn't maintained.

Of course, you could go the other way: instead of looking for specific common weaknesses, you could prove that the program actually meets (or does not meet) certain requirements. If you're interested in open source software tools related to proving programs correct, see High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS)... with Lots on Formal Methods / Software Verification and the Open Proofs website.

Quasi-open tools

  1. The CERT ROSE checkers check C and C++ code against a subset of the rules in the CERT Secure Coding Standards for C and C++. The ROSE checkers are themselves open source, and build on the open source ROSE, but ROSE itself is fundamentally dependent on a proprietary component (Edison Design Group's C/C++ compiler), so the whole stack is in fact proprietary.
  2. ROSE/Compass (BSD license) is a source-to-source translator that can be used to build analysis programs. It includes Compass, which reports violations of a number of rules that relate to security.

Proprietary tools

There are various suppliers that sell proprietary programs that do this kind of static analysis. These include:

  1. Fortify Software. Their Fortify Source Code Analysis tool is briefly described in the PCWorld article Software Searches for Security Flaws. Fortify Software is now owned by HP (as of 2010).
  2. Coverity's SWAT tool searches for defects in general, including some security issues. It's based on previous work on the Stanford checker, which was implemented by xgcc and the Metal language (the Stanford site has lots of interesting papers, but no code as far as I can tell -- please let me know if things are otherwise).
  3. GrammaTech develops and sells "static-analysis and program-transformation tools for C/C++ and Ada". These include CodeSurfer/CodeSonar (R) for static analysis, and CodeSurfer/x86 for analyzing and rewriting binary executables.
  4. Veracode has tools to analyze software for security vulnerabilities (including binary analysis).
  5. Sofcheck Inspector performs static analysis on Java and Ada programs to find defects.
  6. Red Lizard Software is an Australian firm that sells Goanna, a tool that analyzes C/C++ code for software quality bugs (including some security vulnerabilities).
  7. Kestrel Institute works to "make formal methods work in practice"; they have various proprietary tools.
  8. Ounce Labs's product Prexis. Ounce Labs was recently bought by IBM.
  9. Klocwork sells various products that do static analysis.
  10. @stake, now owned by Symantec Corporation, sells a tool called the SmartRisk (TM) Analyzer; unlike many tools, this one analyzes binary code.
  11. Parasoft sells some static analysis tools.
  12. Microsoft bought the company Intrinsa, and their product (known as PREfix) is used now to do static analysis of many of their own products.
  13. PVS-Studio is "a static analyzer that detects errors in source code of C/C++/C++0x applications." (It's not specifically focused on security issues).
  14. Parfait is a Sun research project, which has found some vulnerabilities. An interview discusses Parfait further. At the time of this writing, this is unreleased.
  15. KDM Analytics has developed some prototypes using a standards-based approach. Code is first transformed into KDM (an OMG standard), and rules are defined using SBVR (another OMG standard). Then you can search for matches/violations of rules. One neat thing is that this can analyze (in principle) either binary or source code in arbitrary languages. I know some people are modifying gcc to generate KDM. SBVR (the rule-defining language) is a restricted-English logic language, so the rules are unusually readable. To my knowledge, these are not available on the market yet.

There are of course many companies that sell the service of performing security reviews of source code for a fee; they generally use a combination of tools and expertise. These include Secure Software, developer of RATS, and Aspect Security, backers of the Open Web Application Security Project (OWASP).

Arian Evans has announced that he's working on a list of such tools, and intends to post that list at OWASP; by the time you read this, it may already be available. NIST's Software Assurance Metrics and Tool Evaluation (SAMATE) project posts a list of static analysis tools, along with a list of related papers and projects. Common Weakness Enumeration (CWE) is developing a standard set of definitions of common weaknesses and their interrelationships.

Other places list security tools, but not really static analysis tools; these include the Talisker Security Wizardry Portal and insecure.org's survey of the top 75 tools.

Java2s has a list of Java-related tools for source analysis which may be of interest. They make the common mistake of saying "commercial" when they mean "proprietary" (OSS is commercial software too).

There are a vast number of static analysis tools that check for style or for possible errors, which might happen to catch security problems. They're usually not focused on security issues, though, and there are too many to list anyway, so I don't try to list them all here.

This list can't possibly be exhaustive, sorry. My goal here isn't to provide all possible alternatives, merely to provide useful information pointing to at least some other tools and services. My goal is mainly so you can have an idea of what's going on in the field.

Be careful defining language subsets

Many people have developed "language subsets" in an effort to reduce the risk of errors. In concept, these can be really helpful, especially for languages like C which are easy to abuse. Such language subsets should be automated by static analysis tools; then, it's easy to check if you've met the rules. But these only have value if the subset is well-designed. In particular, the subset should be designed to minimize cases where perfectly acceptable constructs are forbidden (essentially false positives), and should maximize detection of actual failures (best done through analysis of real-world failures).

One of the better-known subsets for C is "MISRA C". Les Hatton has published a detailed and devastating critique of MISRA C (of both MISRA C 1998 and the later MISRA C 2004). Fundamentally, MISRA C's development was not based on real data about failures, but instead on random rule creation; some of the rules are absolutely full of false positives, and many have no value. See Les Hatton's papers, including those showing why MISRA C is badly flawed. His paper Language subsetting in an industrial context: a comparison of MISRA C 1998 and MISRA C 2004 is "A comparison of real to false positive ratios between the 1998 and 2004 versions of the MISRA C guidelines on a common population of 7 commercial software packages", and it has devastating conclusions: "On these results, MISRA C 2004 seems a step backwards and attempts at compliance with either document are essentially pointless until something is done about improving the wording of the standard and its match with existing experimental data. In its current form, the complexity and noisiness of the rules suggest that only the tool vendors are likely to benefit."

An additional problem with MISRA C is that it is not open access (aka Internet-published). That is, you can't just use Google to find it and then immediately view its contents (without registering or paying). That makes it hard to apply. Purported standards that aren't open access are becoming increasingly pointless; IETF, OASIS, W3C, Ecma, and many other bodies already publish their standards openly.

I'm a fan of Les Hatton's work, and I particularly like his paper on his EC-- ruleset. The EC-- ruleset is Internet-published, and is much smaller, so it's actually easier to apply than MISRA C. More importantly, though, the EC-- ruleset appears to be much better matched to the real world for finding failures, so I strongly prefer EC-- over MISRA C. Here were his rules for creating the EC-- ruleset; once you look at this list, I think you'll see why:

  • Every rule is associated with faults which appeared in the quoted surveys and failed in the sense described above.
  • Each rule covers as many of the fault modes as possible to reduce the total number of rules
  • Each rule is easy to understand and as unambiguous as possible
  • Each rule is as non-contentious as possible to ease acceptance
  • Taken together, the rules cover the vast majority of the faults described in the earlier surveys and should therefore have a high signal to noise ratio with detection rates of around 8 per 1 KXLOC expected.

Additional rules specific to security would be a good idea, too, if they're well-crafted. The CERT C Secure Coding Standard is an effort to craft rules for developing secure C programs. I haven't had time to evaluate it in-depth, though, so I don't know what its quality is. Another document you might examine is Microsoft's Security Development Lifecycle (SDL) Banned Function Calls.

Beyond Static Tools

Static analysis tools are unlikely to catch all problems in practice; they're best complemented with other approaches. Certainly, having humans look at code is wonderful. You also want to send test data to the program to try to find problems. Many tools are based on the idea of sending random or partly random data for testing; some "randomize" but try to concentrate on the patterns most likely to reveal security problems. These include:
  1. SPIKE Proxy is an OSS/FS HTTP proxy for finding security flaws in web sites. It is part of the Spike Application Testing Suite and supports automated SQL injection detection, web site crawling, login form brute forcing, overflow detection, and directory traversal detection.
  2. Brute Force Binary Tester (BFBTester) checks for single and multiple argument command line overflows and environment variable overflows, and version 2.0 can also watch for tempfile creation activity.
  3. Michal Zalewski's mangleme (demo and source code) sends stressing random data for testing web browsers.
  4. iExploder is another tool for testing web browsers by sending random data.
  5. zzuf is a fuzzer (open source, MIT-style license). See the FOSDEM 2007 slides and Joe Barr's article about zzuf. A sample invocation is sketched below.
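
As a sketch of the general fuzzing idea, zzuf can randomly corrupt a program's input. Here "./parse_input sample.dat" is a hypothetical program and input file; -s selects a range of random seeds and -r the proportion of bits to flip:

    zzuf -s 0:99 -r 0.004 ./parse_input sample.dat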

There are lots of scanning tools for checking for already known specific vulnerabilities, and sometimes they help. Nessus is a widely-used vulnerability assessment tool. Nikto scans web servers for common problems.

There are many, many other tools and techniques available; I can't list all of them. You can find a few leads in the Top 75 Security Tools survey at insecure.org and in ISP Planet's article Web Vulnerability Assessment Tools.


You might want to look at my Secure Programming HOWTO web page, or some of my other writings such as Open Standards and Security, Open Source Software and Software Assurance (Security), and High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS).

You can also view my home page.


Reference: http://www.dwheeler.com/flawfinder/
