# Contribution Guidelines

## Reporting Issues

If you encounter a problem when using infer or if you have any questions, please open a
[GitHub issue](https://github.com/facebook/infer/issues/).

## Hacking on the Code

We welcome contributions via [pull requests on GitHub](https://github.com/facebook/infer/pulls).

### Development Dependencies

You'll want to install a few more dependencies to comfortably hack on the infer codebase. Simply
run:

```sh
make devsetup
```

### Building Infer for Development

- Build the code faster: `make -j BUILD_MODE=default`. By default `make` builds infer with flambda
  enabled, which makes compilation very slow (but makes the resulting infer significantly faster).

- Faster edit/build cycle when working on OCaml code inside infer/src/: build inside infer/src/
  (skips building the models after infer has been built), and build bytecode instead of native:
  `make -j -C infer/src byte`. You need to have run `make -j` (with or without `BUILD_MODE=default`)
  at some point before.

- In general, `make` commands from the root of the repository make sure that dependencies are in a
  consistent and up-to-date state (e.g., they rebuild infer and the models before running steps that
  use infer), while running `make` commands from within subdirectories generally assumes that
  dependencies are already up-to-date.

  For instance, running `make direct_java_infer_test` will rebuild infer and the models if necessary
  before running the test, but running `make -C infer/tests/codetoanalyze/java/infer test` will just
  execute the test.

- To disable flambda by default, `export BUILD_MODE=default` in your shell.

### Debugging OCaml Code

- Printf-debug using `Logging.debug_dev`. It comes with a warning so that you don't accidentally
  push code with calls to `debug_dev` to the repo.
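
For instance, assuming `debug_dev` takes the same `Format`-style arguments as the other `Logging`
functions (the procedure and variable names below are made up):

```OCaml
open! IStd
module L = Logging

(* temporary printf-debugging; the deprecation warning on [debug_dev] reminds
   you to remove this call before pushing *)
let check_procedure proc_name =
  L.debug_dev "checking procedure %s@\n" proc_name ;
  String.is_prefix proc_name ~prefix:"test_"
```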

- Browse the documentation of OCaml modules in your browser with `make doc`.

- When using `ocamldebug`, and in particular when setting break points with
  `break @ <module> <line>`, don't forget that an infer module `M` is in reality called
  `InferModules__M`, or `InferBase__M`, or ... See the html documentation of the OCaml modules from
  `make doc` if you're unsure of a module name.

```console
$ ledit ocamldebug infer/bin/infer.bc
(ocd) break @ InferModules__InferAnalyze 100
Breakpoint 1 at 9409684: file backend/InferAnalyze.ml, line 99, characters 18-78
```

- To test the infer OCaml code you can use the OCaml toplevel. To build the OCaml toplevel with the
  infer modules pre-loaded, run `make toplevel` and follow the instructions.

  To pass infer options to the toplevel, use `INFER_ARGS`, for instance:
  `INFER_ARGS=--debug^-o^infer-out-foo`.

  Many operations require the results directory and database to be initialized with
  `ResultsDir.assert_results_dir ""`.
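
As a minimal sketch, a toplevel session could start like this:

```OCaml
(* in the toplevel built by `make toplevel`, with the infer modules pre-loaded *)
open! IStd;;

(* initialize the results directory and database before calling most APIs *)
ResultsDir.assert_results_dir "";;
```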

## Hacking on the Code in facebook-clang-plugins

Infer uses `ASTExporter` from the
[facebook-clang-plugins](https://github.com/facebook/facebook-clang-plugins) repository. To change
that part of the code:

1. Create a [pull request](https://github.com/facebook/facebook-clang-plugins/pulls) in
   `facebook-clang-plugins` with the changes.
2. Create a pull request in this repository updating the version of the git submodule.

## Contributor License Agreement

We require contributors to sign our Contributor License Agreement. In order for us to review and
merge your code, please sign up at https://code.facebook.com/cla. If you have any questions, please
drop us a line at cla@fb.com.

You are also expected to follow the [Code of Conduct](CODE_OF_CONDUCT.md), so please read that if
you are a new contributor.

Thanks!

## Coding Style

### All Languages

- Indent with spaces, not tabs.

- Line width limit is 100 characters (except for Python, for which the limit is 80).

- In general, follow the style of surrounding code.

### OCaml

- The module `IStd` (infer/src/istd/IStd.ml) is automatically opened in every file. Beware that
  this can cause weird errors such as:

```
$ pwd
/somewhere/infer/infer/src
$ cat base/toto.ml
let b = List.mem true [true; false]
$ make
[...]
File "base/toto.ml", line 1, characters 17-21:
Error: This variant expression is expected to have type 'a list
The constructor true does not belong to type list
```
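
The error arises because, with `IStd` (and therefore `Core`) open, `List.mem` is `Core`'s version,
which takes the list first and a named `equal` argument. A sketch of the corresponding fix,
assuming that signature:

```OCaml
open! IStd

(* Core's List.mem: the element comes after the list, and equality is explicit *)
let b = List.mem [true; false] true ~equal:Bool.equal
```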

- All modules open `IStd` using `open! IStd`. This is to make that fact more explicit (there's also
  the compilation flag mentioned above), and also it helps merlin find the right types. In
  particular this also opens `Core.Std`.

- Do not add anything to `IStd` unless you have a compelling reason to do so, for instance if you
  find some utility function is missing and is not provided by
  [`Core`](https://ocaml.janestreet.com/ocaml-core/latest/doc/core/).

- Polymorphic equality is disabled; use type-specific equality instead, even for primitive types
  (e.g., `Int.equal`). However, if your module uses a lot of polymorphic variants with no arguments
  you may safely `open PolyVariantEqual`.

If you try to use polymorphic equality `=` in your code you will get a compilation error, such as:

```
Error: This expression has type int but an expression was expected of type
         [ `no_polymorphic_compare ]
```
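
For instance, a sketch using type-specific equality (the values here are made up):

```OCaml
open! IStd

let has_port_80 ports = List.exists ports ~f:(Int.equal 80)

let is_main proc_name = String.equal proc_name "main"
```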

- Alias and use `module L = Logging` for all your logging needs. Refer to its API in Logging.mli
  for documentation.

- Check that your code compiles without warnings with `make -j test_build` (this also runs as part
  of `make test`).

- Apart from `IStd` and `PolyVariantEqual`, refrain from globally `open`ing modules. Use a local
  open instead when it improves readability: `let open MyModule in ...`.
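
For example, a local open limits the opened module to a single expression (the computation below is
illustrative only):

```OCaml
open! IStd

(* [Int63] is only open inside this one expression *)
let area ~width ~height =
  let open Int63 in
  of_int width * of_int height
```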

- Avoid the use of module aliases, except for the following commonly-aliased modules. Use module
  aliases consistently (e.g., do not alias `L` to a module other than `Logging`).

```OCaml
module CLOpt = CommandLineOption
module F = Format
module L = Logging
module MF = MarkupFormatter
```

- Use `[@@deriving compare]` to write comparison functions whenever possible. Watch out for
  [this issue](https://github.com/ocaml-ppx/ppx_deriving/issues/116) when writing
  `type nonrec t = t [@@deriving compare]`.

- Use `let equal_foo = [%compare.equal : foo]` to write equality functions whenever possible.
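
For example (the type is hypothetical):

```OCaml
open! IStd

type call_site = {file: string; line: int} [@@deriving compare]

(* equality derived from the comparison function *)
let equal_call_site = [%compare.equal : call_site]
```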

- Use named arguments whenever the purpose of the argument is not immediately obvious. In
  particular, use named arguments for boolean and integer parameters unless the name of the
  function mentions them explicitly. Also use named arguments to disambiguate between several
  arguments of the same type.

- Use named arguments for functions taken as argument; it is common to name a function argument
  `f`. For instance: `List.map : 'a list -> f:('a -> 'b) -> 'b list`.

- In modules defining a type `t`, functions that take an argument of that type should generally
  have that argument come first, except for optional arguments: `val f : ?optional:bool -> t -> ...`.
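
Putting these conventions together, a hypothetical module signature might look like this:

```OCaml
(* hypothetical [Graph] interface: the [t] argument comes first (after optional
   arguments), and named arguments disambiguate the two node parameters and the
   boolean flag *)
module type Graph = sig
  type t

  type node

  val add_edge : ?weight:int -> t -> src:node -> dst:node -> t

  val is_reachable : t -> src:node -> dst:node -> directed:bool -> bool
end
```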

- Use the `_hum` suffix to flag functions that output human-readable strings.
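
For instance (hypothetical module type):

```OCaml
(* the [_hum] suffix marks the human-readable variant *)
module type Duration = sig
  type t

  val to_string : t -> string (* machine-oriented, e.g. "1500" (milliseconds) *)

  val to_string_hum : t -> string (* human-readable, e.g. "1.5s" *)
end
```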

- Format code with [ocamlformat](https://github.com/ocaml-ppx/ocamlformat).

### C/C++/Objective-C

Follow `clang-format` (see ".clang-format" at the root of the repository).

## Testing your Changes

- Make sure infer builds: `make -j test_build`. Refer to the
  [installation document](https://github.com/facebook/infer/blob/master/INSTALL.md) for details.

- Run the tests: `make -j 4 test` (adjust 4 to the number of cores available on your machine). The
  tests (almost) all consist of the same three ingredients:
  1. Some source code to run infer on.
  2. An "issues.exp" file where each line represents one item of output of the test. For most
     tests, one line is one issue reported by infer.
  3. A `Makefile` that orchestrates the test, for instance running infer on the source code and
     comparing the results with issues.exp using `diff`.

- If your changes modified some of the expected outputs and if the changes make sense, you can
  update the expected test results by running `make test-replace`.

- If relevant, add a test for your change.

- To add a test that infer finds (or does not find) a particular issue, add your test in
  "infer/tests/codetoanalyze/{language}/{analyzer}/". Look at the `Makefile` in that directory and
  make sure it runs your test. "{analyzer}" is often an infer analyzer (as in
  `infer -a {analyzer}`), with some special cases:
  - "errors" is "infer"
  - "frontend" is a mode where the expected output is the result of the translation of the program
    by infer's clang frontend into infer's intermediate representation.

  Name the procedures in your test following these conventions:
  - Test procedures where the analyzer should report an error should end with the suffix `Bad`.
  - Test procedures where the analyzer should not report an error should end with the suffix `Ok`.
  - Test procedures documenting current limitations of the analyzer should have the prefix `FP_`
    (for "false positive") or `FN_` (for "false negative") and a comment explaining why the
    analyzer gets the wrong answer.

- To add a test that a certain build system integration or a command-line option works in a certain
  way, add a test in "infer/tests/build_systems/".

- If you created a new Makefile for your test, add it to the root "Makefile", either to the
  `DIRECT_TESTS` variable (first case) or to the `BUILD_SYSTEMS_TESTS` variable (second case). Gate
  the test appropriately if it depends on Java or Clang or Xcode (see how other tests do it).

- It can be useful to look at the debug HTML output of infer to see the detail of the symbolic
  execution. For instance:

```sh
$ infer --debug -- clang -c examples/hello.c
$ firefox infer-out/captured/hello.c.*.html
```

## Updating opam and opam.locked

tl;dr: run `make opam.locked`.

opam.locked records fixed versions of the opam dependencies known to work with infer and to respect
the constraints in opam. This prevents unpredictable breakages of infer or its dependencies,
especially for infer releases, for which it is more difficult to change package constraints after
the fact.

To add an opam package or update its version constraints, edit 'opam' then run `make opam.locked`.