Summary: The tests are slower when running in debug mode, and debug mode creates a lot of HTML output
Reviewed By: sblackshear
Differential Revision: D5916511
fbshipit-source-id: 07c90b7
Summary:
This diff does two things:
# Infer no longer adds the constraint that the return value of a skip function is never null. This was leading to false negatives and is not necessary, as those return values are treated angelically
# Infer now supports `Nonnull` on the return value of skip functions.
Reviewed By: jberdine, sblackshear
Differential Revision: D5840324
fbshipit-source-id: bbd8d82
Summary:
`reraise` was error-prone when one forgot to save the backtrace between where the exception is caught and where it is reraised.
If any exception was raised (even caught) in between, the printed backtrace would be that of the last exception thrown, which was very confusing.
This diff kills `reraise` and introduces `reraise_after exn ~f` and `reraise_if exn ~f` to be used right after catching the exception.
Also turned some of them into the common pattern `try_finally ~f ~finally`.
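For reference, a minimal sketch of what such helpers can look like on top of `Printexc` (the names come from the summary above, but the bodies are illustrative, not Infer's actual `IStd` code; they assume OCaml >= 4.05 for `Printexc.raise_with_backtrace` and that they are called right after catching the exception):
```lang=ocaml
(* Capture the backtrace *before* running [f], so that any exception raised
   (and caught) inside [f] cannot smudge it. *)
let reraise_after exn ~f =
  let backtrace = Printexc.get_raw_backtrace () in
  f () ;
  Printexc.raise_with_backtrace exn backtrace

let reraise_if exn ~f =
  let backtrace = Printexc.get_raw_backtrace () in
  if f () then Printexc.raise_with_backtrace exn backtrace

(* run [f], always run [finally], and preserve the original backtrace on failure *)
let try_finally ~f ~finally =
  match f () with
  | result ->
      finally () ;
      result
  | exception exn ->
      reraise_after exn ~f:finally
```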
Reviewed By: jvillard
Differential Revision: D5911244
fbshipit-source-id: 9883d1e
Summary: Example of how annotating fields with `nullable` combines with the biabduction analysis in Objective-C
Reviewed By: dulmarod
Differential Revision: D5906016
fbshipit-source-id: b95c6e0
Summary:
The interval bound of the abstract domain is extended.
`[min|max](int, symbol)` => `int [+|-] [min|max](int, symbol)`
As a result, `vector::empty` can be modelled to return a more precise value: `1 - min(1, size)`.
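A toy sketch of the extended bound shape and of the `vector::empty` model value (names and representation are simplified for illustration; the real Inferbo `Bound` type differs):
```lang=ocaml
(* Sketch only: a bound of the form  c [+|-] [min|max](d, s)  where s is a symbol. *)
type min_max = Min | Max

type bound =
  | Int of int
  | MinMax of int (* c *) * bool (* true = plus, false = minus *) * min_max * int (* d *) * string (* symbol *)

(* value of the bound once the symbol is instantiated to a concrete integer *)
let eval_bound b ~symbol_value =
  match b with
  | Int c ->
      c
  | MinMax (c, plus, mm, d, _symbol) ->
      let m = match mm with Min -> min d symbol_value | Max -> max d symbol_value in
      if plus then c + m else c - m

(* model of vector::empty: 1 - min(1, size), i.e. 1 when size = 0 and 0 otherwise *)
let empty_bound = MinMax (1, false, Min, 1, "size")

let () =
  assert (eval_bound empty_bound ~symbol_value:0 = 1) ;
  assert (eval_bound empty_bound ~symbol_value:5 = 0)
```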
Reviewed By: mbouaziz
Differential Revision: D5899404
fbshipit-source-id: c8c3f49
Summary:
We take it into account to not report bugs inside the available block. This requires a plugin change.
update-submodule: facebook-clang-plugins
Reviewed By: jvillard
Differential Revision: D5891511
fbshipit-source-id: 21a02ad
Summary:
With this change and the previous facebook-clang-plugins change, infer no
longer exhausts the biniou buffer when reading the serialized C++ AST.
update-submodule: facebook-clang-plugins
Reviewed By: mbouaziz
Differential Revision: D5891081
fbshipit-source-id: cf48eac
Summary:
With this change and the previous facebook-clang-plugins change, infer no
longer exhausts the biniou buffer when reading the serialized C++ AST.
update-submodule: facebook-clang-plugins
Reviewed By: mbouaziz
Differential Revision: D5891081
fbshipit-source-id: 489611a
Summary:
These get way too big in C++, and we only use their very first word, to
tell classes apart from structs and unions... so sad, very bad.
update-submodule: facebook-clang-plugins
Reviewed By: mbouaziz
Differential Revision: D5890594
fbshipit-source-id: 49e6284
Summary:
The only language types we have are Java/Clang/Python. The unit of analysis is a source file, and you can't write a source file that mixes two or more of these languages (to the best of my knowledge).
This diff simplifies using the assumption that all procedures in a file are written in the same language.
Reviewed By: jeremydubreil
Differential Revision: D5886942
fbshipit-source-id: 88c3759
Summary:
The only language types we have are Java/Clang/Python. The unit of analysis is a source file, and you can't write a source file that mixes two or more of these languages (to the best of my knowledge).
This diff simplifies using the assumption that all procedures in a file are written in the same language.
Reviewed By: jeremydubreil
Differential Revision: D5886942
fbshipit-source-id: 8555a16
Summary: Only Eradicate uses this; no need to create it for every checker.
Reviewed By: jeremydubreil
Differential Revision: D5886775
fbshipit-source-id: 7242437
Summary:
The Java cluster checker currently defines a "cluster" as all of the procedures in the same class.
But the cluster checker actually knows about all the procedures defined in the same source file.
In some checkers (such as thread-safety), we want to aggregate results across classes in the same file, not just methods in the same class.
This refactoring leaves the behavior the same for now, but will make it easier to do this in the near future.
Reviewed By: jeremydubreil
Differential Revision: D5885896
fbshipit-source-id: 0815fca
Summary:
Calling functions that raise exceptions (even if they get caught) may smudge
the backtraces we get from OCaml. We need to record the original backtrace
*before* calling such functions on the path between catching an exception and
reraising it.
Also change the heptuple returned by `Exceptions.recognize_exception` into a
record type, and make that function not raise when classifying exceptions.
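Schematically, the pattern looks like this (a hedged sketch with hypothetical `classify`/`log` helpers, not the actual error-reporting code):
```lang=ocaml
(* Capture the backtrace of [exn] *before* calling anything that may itself
   raise and catch exceptions internally. *)
let with_exn_logging ~classify ~log f =
  try f ()
  with exn ->
    let backtrace = Printexc.get_raw_backtrace () in
    (* [classify exn] may raise/catch internally; the saved backtrace is unaffected *)
    log (classify exn) ;
    Printexc.raise_with_backtrace exn backtrace
```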
Reviewed By: jberdine
Differential Revision: D5882934
fbshipit-source-id: 8e99fe8
Summary: Infer can then detect the resource leak when resources are stored in a container
Reviewed By: sblackshear
Differential Revision: D5887702
fbshipit-source-id: 6106cfb
Summary: The point of the tracing mode is to compute all the possible paths leading to an error state. However, within a method, many of those paths are not feasible in practice. This leads to many false alarms for the resource leak analysis.
Reviewed By: sblackshear
Differential Revision: D5888695
fbshipit-source-id: 2dbc57b
Summary: Model the `remove(...)` and the `clear()` methods of `HashMap`.
Reviewed By: sblackshear
Differential Revision: D5887674
fbshipit-source-id: c3f40ee
Summary: Handling the utility functions for asserting that we're on a background thread.
Reviewed By: jberdine
Differential Revision: D5863435
fbshipit-source-id: 3ad95b5
Summary:
Previously, we just tracked a boolean representing whether we were possibly on the main thread (true) or definitely not on the main thread (false).
In order to start supporting `Thread.start`, `Runnable.run`, etc., we'll need something more expressive.
This diff introduces a lattice:
```
Any
/ \
Main Background
\ /
Unknown
```
as the new threads domain. The initial value is `Unknown`, and we introduce `Main` in situations where we would have introduced `true` before.
This (mostly) preserves behavior: the main difference is that before code like
```
if (*) {
assertMainThread()
} else {
x.f = ...
}
```
would have recorded that the access to `x.f` was on the main thread, whereas now we'll say that it's on an unknown thread.
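A minimal sketch of such a four-point domain (illustrative OCaml only, not the actual thread-safety module):
```lang=ocaml
(* Sketch only: Unknown is bottom, Any is top, Main and Background are incomparable. *)
type astate = Unknown | Main | Background | Any

let leq ~lhs ~rhs =
  match (lhs, rhs) with
  | Unknown, _ | _, Any -> true
  | Main, Main | Background, Background -> true
  | _ -> false

(* least upper bound, e.g. join Main Background = Any and join Unknown t = t *)
let join lhs rhs =
  if leq ~lhs ~rhs then rhs else if leq ~lhs:rhs ~rhs:lhs then lhs else Any
```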
Reviewed By: peterogithub
Differential Revision: D5860256
fbshipit-source-id: efee330
Summary: Useful to know if your build is going to take 40s or 4 minutes.
Reviewed By: jeremydubreil
Differential Revision: D5872878
fbshipit-source-id: 1d370f3
Summary:
It's useful to be able to disable de-duplication on the command line with `--no-filtering`.
Gate de-duplication with `Config.filtering` and move the de-duplication tests to a new directory under the build systems tests.
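A minimal sketch of the gating (`deduplicate` is a hypothetical name for the de-duplication pass; `Config.filtering` is the flag mentioned above, passed in here as a plain boolean to keep the sketch self-contained):
```lang=ocaml
(* Sketch only: skip de-duplication entirely when filtering is disabled,
   e.g. when running with --no-filtering. *)
let maybe_deduplicate ~filtering ~deduplicate issues =
  if filtering then deduplicate issues else issues
```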
Reviewed By: jeremydubreil
Differential Revision: D5865329
fbshipit-source-id: 5094f5b
Summary:
Since D5381239, infer is careful not to delete directories that do not "look
like" results directories on startup, in case the user passed, e.g., `-o /`.
In our repo, lots of results directories are created by building and testing
infer, and when the version of infer changes and the expected contents of
results directories change, infer might start refusing to delete the results
directories created with another version of infer.
Add an option to force infer to delete the results directory no matter how
dodgy it looks, and use it in our repo by adding the option in every
.inferconfig.
Reviewed By: mbouaziz
Differential Revision: D5870984
fbshipit-source-id: 09412de
Summary: It will then be easier to understand whether some changes in the test results are legit or not.
Reviewed By: sblackshear
Differential Revision: D5863961
fbshipit-source-id: 7eb3f33
Summary: Parmap delivers better scheduling: it works as a pipeline, as opposed to what existed before, which scheduled processes in batches.
Reviewed By: jvillard
Differential Revision: D5678661
fbshipit-source-id: a632c71
Summary:
Suggesting to add `_Nullable` on the fields checked for, or assigned to, `nullptr` will allow the biabduction analysis to report null dereferences that are related to the lifetime of objects.
Depends on D5832147
Reviewed By: sblackshear
Differential Revision: D5836538
fbshipit-source-id: c1b8e48
Summary: The prune nodes were translated as `prune (expr = false)` and `prune (expr != false)`. This case is a bit tricky to deconstruct in HIL. This diff translates the prune instructions as just `prune !expr` for the true branch and `prune expr` for the false branch.
Reviewed By: dulmarod
Differential Revision: D5832147
fbshipit-source-id: 2c3502d
Summary:
We need to make sure that destructors of virtual base classes are called only once. Similarly to what clang does, we have two destructors for a class: a destructor wrapper and an inner destructor.
The destructor wrapper is called from the outside, i.e., when variables go out of scope or when destructors of fields are called.
The body of a destructor wrapper consists of calls to the inner destructors of all virtual base classes in the inheritance hierarchy.
An inner destructor contains the destructor body, plus calls to the destructor wrappers of fields and to the inner destructors of non-virtual base classes.
Reviewed By: dulmarod
Differential Revision: D5834555
fbshipit-source-id: 51db238
Summary:
The "placement new" operator `new (e) T` constructs a `T` in the pre-allocated memory address `e`.
We weren't translating the `e` part, which was leading to false positives in the dead store analysis.
Reviewed By: dulmarod
Differential Revision: D5814191
fbshipit-source-id: 05c6fa9
Summary:
With flambda (`-O3`), compilation is ~5x slower, but the backend is ~25% faster!
To mitigate the atrocious compilation times, introduce a new `opt` build mode in the jbuilder files.
- build in "opt" mode by default from the toplevel (so that install scripts and external users get the fastest infer by default), in "default" mode by default from infer/src (since the latter is only called directly by infer devs, for faster builds)
- `make byte` is as fast as before in any mode
- `make test` will build "opt" by default, which is very slow. Solution for testing (or building the models) locally: `make BUILD_MODE=default test`.
- You can even change the default locally with `export BUILD_MODE=default`.
The benchmarks are to be taken with a sizable pinch of salt because I ran them only once and other stuff could be running in the background. That said, the perf win is consistent across all projects: a 15-20% win in wallclock time, around 25% win in total CPU time, ~9% win in sys time, ~25% fewer minor allocations, and ~5-10% fewer overall allocations. This is only for the backend; the capture is by and large unaffected (either the same or a tad faster, within noise range).
Here are the results running on OpenSSL 1.0.2d on osx (12 cores, 32G RAM)
=== base
infer binary: 26193088 bytes
compile time: 40s
capture:
```lang=text
real 1m7.513s
user 3m11.437s
sys 0m55.236s
```
analysis:
```lang=text
real 5m41.580s
user 61m37.855s
sys 1m12.870s
```
Memory profile:
```lang=json
{
...
"minor_gb": 0.1534719169139862,
"promoted_gb": 0.0038930922746658325,
"major_gb": 0.4546157643198967,
"allocated_gb": 0.6041945889592171,
"minor_collections": 78,
"major_collections": 23,
"compactions": 7,
"top_heap_gb": 0.07388687133789062,
"stack_kb": 0.3984375,
"minor_heap_kb": 8192.0,
...
}
```
=== flambda with stock options (no `-Oclassic`, just the same flags as base)
Exactly the same as base.
=== flambda `-O3`
infer binary: 56870376 bytes (2.17x bigger)
compile time: 191s (4.78x slower)
capture is the same as base:
```lang=text
real 1m9.203s
user 3m12.242s
sys 0m58.905s
```
analysis is ~20% wallclock time faster, ~25% CPU time faster:
```lang=text
real 4m32.656s
user 46m43.987s
sys 1m2.424s
```
memory usage is a bit lower too:
```lang=json
{
...
"minor_gb": 0.11583046615123749, // 75% of previous
"promoted_gb": 0.00363825261592865, // 93% of previous
"major_gb": 0.45415670424699783, // about same
"allocated_gb": 0.5663489177823067, // 94% of previous
"minor_collections": 73,
"major_collections": 22,
"compactions": 7,
"top_heap_gb": 0.07165145874023438,
"stack_kb": 0.3359375,
"minor_heap_kb": 8192.0,
...
}
```
=== flambda `-O2`
Not nearly as exciting as `-O3`, but the compilation cost is still quite high:
infer: 37826856 bytes
compilation of infer: 100s
Capture and analysis timings are mostly the same as base.
Reviewed By: jberdine
Differential Revision: D4867979
fbshipit-source-id: 99230b7
Summary:
This can be a long-running step and it's useful to know how long it took. We
already dump some statistics on stderr after merging is done; this just adds
one more line.
Reviewed By: mbouaziz
Differential Revision: D5833580
fbshipit-source-id: 70e19ab
Summary:
Simple instance of the problem: analyzing the following program times out.
```
#include <tuple>
void foo() {
std::tuple<std::tuple<int>> x;
}
```
Replacing `std::tuple<std::tuple<int>>` by `std::tuple<int>` makes the analysis
terminate.
In the AST, both `tuple<tuple<int>>` and `tuple<int>` have the same template
specialization type: "Pack" (which means we're supposed to go look into the
arguments of the template to get their values). This is not enough
information, and that's the plugin's fault.
On the backend side, this means that the two types have the same `Typ.Name.t`,
namely `std::tuple<_>`, so they collide in the tenv. The definition of
`tuple<tuple<int>>` is the one making it into the tenv. One of the fields of the
corresponding CxxRecord is of type `tuple<int>`, which we see as the same
`tuple<_>`; this causes the loop.
update-submodule: facebook-clang-plugins
Reviewed By: mbouaziz
Differential Revision: D5775840
fbshipit-source-id: 0528604