Summary: This is needed to address GC stalls due to a heap that is too small.
Reviewed By: jvillard
Differential Revision: D26045530
fbshipit-source-id: 590d1e72c
Summary: The existing code overwrites the `BUCK_EXTRA_JAVA_ARGS` environment variable. It's better to extend it with our settings when it is already set.
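A minimal OCaml sketch of the intended behavior (the function name and the `our_java_args` argument are hypothetical, not the actual implementation):
```
(* append our settings to any existing BUCK_EXTRA_JAVA_ARGS instead of
   replacing it; returns the value to set in the environment *)
let buck_extra_java_args ~our_java_args =
  match Sys.getenv_opt "BUCK_EXTRA_JAVA_ARGS" with
  | Some existing -> existing ^ " " ^ our_java_args
  | None -> our_java_args
```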
Reviewed By: artempyanykh
Differential Revision: D26045398
fbshipit-source-id: 25588488c
Summary: Allowing Pulse NPE reports on Nullsafe classes to be suppressed. This is now possible via the optional argument `--pulse-nullsafe-report-npe` (default: true).
Reviewed By: da319
Differential Revision: D25997321
fbshipit-source-id: 98465df79
Summary: Copying Java biabduction tests into pulse tests folder. The goal is to check how well Pulse will perform on Java.
Reviewed By: jvillard
Differential Revision: D25901299
fbshipit-source-id: a117b44f5
Summary:
When C and C++ code handle a value of a common struct type, the struct
type is treated as a `CStruct` in the C code, but as a `CppClass` in
the C++ code. However, `Fieldname.t` contains both the field string
and **the struct type**. As a result, even when the same field is
accessed from C and C++ code, the resulting fieldnames are different.
```
void callee_in_c(struct s* x) {
  x->a = 3;
}

void caller_in_cpp() {
  struct s x;
  x.a = 5;
  callee_in_c(&x);
  // HERE
}
```
For example, in the above code, `caller_in_cpp` sets the field `a` to
5, then calls `callee_in_c`, which sets the field `a` to 3. However,
at `HERE`, the value of `x` in Pulse is `{a -> 5, a -> 3}`, because the two
fieldnames are treated as different.
To avoid this issue, this diff loosens the fieldname comparison in
Pulse.
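A minimal OCaml sketch of the loosened comparison over a hypothetical, simplified fieldname record (the real `Fieldname.t` differs):
```
(* simplified fieldname: kind and name of the enclosing type plus the
   field string *)
type type_kind = CStruct | CppClass

type fieldname = {kind: type_kind; enclosing_type: string; field: string}

(* loose comparison: ignore whether the enclosing type was seen as a C
   struct or a C++ class, so the field `a` of `struct s` compares equal
   whichever frontend produced it *)
let loose_equal f1 f2 =
  String.equal f1.enclosing_type f2.enclosing_type
  && String.equal f1.field f2.field
```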
Reviewed By: jvillard
Differential Revision: D26000812
fbshipit-source-id: 77142ebda
Summary: Renaming biabduction tests in infer/tests/codetoanalyze/java/biabduction/*.java to follow our naming convention: `fooOk` for tests where no report is expected, `fooBad` when we expect a report, and `FP_` or `FN_` prefixes when reality doesn't match the expectation.
Reviewed By: jvillard
Differential Revision: D25900575
fbshipit-source-id: ad1370085
Summary:
D20769039 (cec8cbeff2) added a preanalysis step that creates edges from throw nodes to all reachable catch nodes. It was intended to fix some deadstore FPs, but it caused more damage than it fixed. In particular, throws were connected irrespective of
- the type of the exception
- whether the try was surrounded by a catch
This in turn caused weird CFGs with dangling, impossible-to-understand nodes :(
This diff reverts this change for now.
Instead, the fix should probably be done in the frontend where we have more information about try/catch blocks.
Reviewed By: da319
Differential Revision: D25997475
fbshipit-source-id: bbeabfbef
Summary:
When there was an assignment of a C struct, `x = y;`, it was translated to a load and a store:
```
n$0 = *y
*x = n$0
```
However, this is incorrect in SIL, because a struct is not a value that can be assigned to a register. This diff fixes the translation to assign each field value instead:
```
n$0 = *y.field1
*x.field1 = n$0
n$0 = *y.field2
*x.field2 = n$0
...
```
It copies the field values of C structs in:
* assignment statements
* return statements
* declarations
It also supports nested structs.
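A minimal OCaml sketch of the per-field expansion over a toy field description (the type and function names here are illustrative only, not the frontend's API):
```
(* a field is either a scalar or a nested struct with its own fields *)
type field = Scalar of string | Struct of string * field list

(* emit one load/store pair per scalar field, recursing into nested structs;
   [src] and [dst] are the access paths built so far *)
let rec copy_fields ~src ~dst fields =
  List.iter
    (function
      | Scalar f -> Printf.printf "n$0 = *%s.%s\n*%s.%s = n$0\n" src f dst f
      | Struct (f, nested) ->
          copy_fields ~src:(src ^ "." ^ f) ~dst:(dst ^ "." ^ f) nested)
    fields
```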
Reviewed By: jvillard
Differential Revision: D25952894
fbshipit-source-id: 355f8db9c
Summary:
- We hoist the calculation of `loop_head_to_loop_nodes` to simplify `get_loop_control_map` and to allow it to be used by the inefficient keyset iterator checker without computing exit maps unnecessarily.
- nit on comments
- `open Control` in `loop_control.ml`
- hoist bound map calculation in `cost.ml`
Reviewed By: ngorogiannis
Differential Revision: D25952592
fbshipit-source-id: ef6103497
Summary:
The clang frontend is confused about exceptional control flow (CF). For the following program:
```
void throw_positive(int b) {
  if (b > 0) {
    throw std::length_error("error");
  }
}

void foo(std::vector<std::string> traceTokens) {
  if (traceTokens.size() < 13) {
    throw std::invalid_argument("Exception!"); // 1
  }
  for (int i = 13; i < traceTokens.size(); ++i) {
    try {
      throw_positive(traceTokens[i].size());
    } catch (std::range_error& msg) {
      throw (1); // 2
    }
    try {
      throw_positive(traceTokens[7].size());
    } catch (std::range_error& msg) {
      throw (9); // 3
    }
  }
}
```
Here, infer thinks that there are edges from 1->2 and 1->3. This should not be the case.
This in turn makes the control analysis think that there is a back edge from 3->2, violating the assertion that the exit node (3 in this case) must be a prune node.
Replace the assertion with an internal error for now, until the clang frontend is fixed.
Reviewed By: skcho
Differential Revision: D25947376
fbshipit-source-id: 5c6529647
Summary:
Lambdas are essentially private (but are not marked as such in Infer),
so we should only report on their non-private callers.
Meanwhile, add a test to document that access propagation to those
callers is currently broken.
Reviewed By: da319
Differential Revision: D25944811
fbshipit-source-id: ef8ca6d9c
Summary:
This diff fixes a bug in the translation of an empty for-loop. When neither the initialization
nor the increment statement introduced a new node, the frontend generated an incorrect result
where the for-loop was unreachable from the entry node.
Fixes https://github.com/facebook/infer/issues/1374
Reviewed By: jvillard
Differential Revision: D25912142
fbshipit-source-id: 15b65cb84
Summary:
Previously, only names containing '$' were considered synthetic. We need
to extend the logic and look for "_UL_" in the name as well.
I also deduped 4 different implementations of "is_synthetic/generated/autogen".
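A minimal OCaml sketch of the deduplicated predicate (the helper and both function names are hypothetical):
```
(* naive substring search, for self-containedness *)
let contains_substring ~sub s =
  let n = String.length sub and m = String.length s in
  let rec loop i = i + n <= m && (String.sub s i n = sub || loop (i + 1)) in
  n = 0 || loop 0

(* a name is considered compiler-generated if it contains '$' or "_UL_" *)
let is_synthetic name =
  String.contains name '$' || contains_substring ~sub:"_UL_" name
```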
Reviewed By: ngorogiannis
Differential Revision: D25899232
fbshipit-source-id: 9463eca6b
Summary:
When used by the first-order solver, solve_extract may generate fresh
variables to express the solution substitutions, but when used by
quantifier elimination via solve_for_vars, fresh variables should not
be generated. This diff makes the difference between these use cases
simpler and clearer by replacing the obscure filter predicate argument
that is passed eventually to compose1 with a boolean indicating
whether fresh variables may be generated.
Reviewed By: jvillard
Differential Revision: D25756562
fbshipit-source-id: 53a35b71c
Summary:
Context.solve_for_vars returns a solution substitution consisting of
oriented equalities that are implied by the given context. It is
logically valid to express these equations using terms that are
normalized with respect to the solution substitution itself. This diff
normalizes uninterpreted terms with the solution substitution when
extending it. This preserves the logical strength of the solutions,
and strengthens other purely syntactic operations on the substitution,
such as Context.Subst.is_valid_eq.
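A minimal OCaml sketch of the idea over a toy substitution in which terms are just variable names (`norm` and `extend` are illustrative, not the real Context.Subst API):
```
module Subst = Map.Make (String)

(* normalize a (variable) term with the current substitution *)
let norm subst v =
  match Subst.find_opt v subst with Some v' -> v' | None -> v

(* when extending with x -> t, normalize t first so that the range of the
   substitution stays normalized with respect to the substitution itself *)
let extend subst x t = Subst.add x (norm subst t) subst
```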
Reviewed By: jvillard
Differential Revision: D25756582
fbshipit-source-id: cd997c46b
Summary:
- Fix clamping of percentage change of memory quantities
- Improve sorting of coverage changes relative to unchanged error
results
- Filter out results for invalid LLVM code
- Fix printing multiple status values
- Fix passing args to sledge, different subcommands need different
args
- Fix cleaning commands
- Add an unknown error status to the report if sledge exits without
producing a status in the report
- Internalize globals for tests, as otherwise C++ tests are unreadable
due to the runtime system models
Reviewed By: jvillard
Differential Revision: D25756567
fbshipit-source-id: 3b4c003f4
Summary:
The ~20% hit in compilation time is dominated by the time saved
running the tests. This also makes the memory allocation numbers more
robust to inconsequential changes. Note that `make check` is still
fast.
Note that normally dune passes `-g` to ocamlopt itself, but not if the
build profile includes an `ocamlopt_flags` entry. So add it explicitly
to ensure that compilation includes debug symbols.
(Also remove `-error-style short` that was missed in 9c4f263.)
Reviewed By: jvillard
Differential Revision: D25756570
fbshipit-source-id: 94d18bac2
Summary:
When a stem has an occurrence of an existential, say x, and x is
substituted out by different witnesses, say a and b, in two disjuncts,
then the connections to other occurrences of a and b are lost.
This diff fixes this problem by propagating the solved variables that
survive normalization down to subformulas, and not removing those
variables from subformulas.
Reviewed By: jvillard
Differential Revision: D25756583
fbshipit-source-id: dabda743f
Summary:
Trade a bit more code for lowering complexity from linear to
logarithmic.
Reviewed By: jvillard
Differential Revision: D25756569
fbshipit-source-id: 83ad575fe
Summary:
Composing two substitutions does not need to require that their
domains are disjoint.
Leave disjointness check for composing a single mapping just to check
expected usage.
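A minimal OCaml sketch of composition over a toy substitution where terms are plain variable names; overlapping domains are allowed, and the earlier binding is simply pushed through the later substitution (names are illustrative):
```
module Subst = Map.Make (String)

let apply subst v =
  match Subst.find_opt v subst with Some t -> t | None -> v

(* applying (compose s1 s2) is the same as applying s1 and then s2; where
   the domains overlap, the s1 binding (pushed through s2) wins *)
let compose s1 s2 =
  let s1' = Subst.map (apply s2) s1 in
  Subst.union (fun _key from_s1 _from_s2 -> Some from_s1) s1' s2
```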
Reviewed By: jvillard
Differential Revision: D25756557
fbshipit-source-id: 04e92b864
Summary:
The overall operation of the quantifier elimination algorithm in
Sh.simplify is to first descend through the disjunctive structure of a
symbolic heap formula, process the leaves, and then proceed back up
the formula. At each point on the way back up, the first-order solver
is used to compute a solution substitution representing witnesses of
existential variables. This substitution is then used to normalize the
entire subformula at that point. Note that this results in
substituting and normalizing each subformula a number of times
proportional to its depth in the disjunctive structure.
This diff removes this redundancy and substitutes through each
subformula once. This is done by changing from an overall bottom-up to
top-down algorithm, and involves composing solution substitutions
rather than substituting multiple times. This change also heavily
relies on the strengthened Context.apply_and_elim operation, whereas the
bottom-up many-substitutions approach relies on the internal details
of repeated substitutions.
Reviewed By: jvillard
Differential Revision: D25756549
fbshipit-source-id: ca7cd814a
Summary:
The current implementation of quantifier elimination in Sh.simplify is
tightly coupled with the details of what the Context operations
support. In particular, successfully eliminating variables with
Context.elim effectively relies on being given a context that has been
transformed by Context.apply_subst. These operations are sound
independently, but achieving the desired result is delicate.
To simplify this situation, this diff refactors the tightly coupled
usage into a Context.apply_and_elim operation that hides the details
of the interaction inside the Context module. This enables an accurate
specification of apply_and_elim to be given much more simply than can
be done for the separate operations. This also simplifies the
implementation of Sh.simplify.
Reviewed By: jvillard
Differential Revision: D25756577
fbshipit-source-id: b344b3da6
Summary:
The implementation of quantifier elimination in Sh.simplify currently
relies on a subtle implementation detail of Context.apply_subst to
remove some identity mappings. Now that Context.elim has been
strengthened and generalized, it can be used to give a much clearer
implementation that is also more robust to representation changes in
Context.
Reviewed By: jvillard
Differential Revision: D25756554
fbshipit-source-id: 6be0de2f3
Summary:
The current implementation of Context.elim crudely removes oriented
equations from terms involving the given variables. This is easy to
use in a way that violates the representation invariants of Context
and destroys completeness. This diff resolves this by making
Context.elim remove the desired variables by rearranging the existing
equality classes, in particular promoting a new representative term
when the existing representative is to be removed. Also, since this
basic approach is incomplete for interpreted terms, they are detected
and not removed. As a result, the interface changes to return the set
of variables that have been removed.
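A minimal OCaml sketch of the representative-promotion idea over toy equality classes, represented as a map from a representative to the other members of its class (plain strings; not the real Context representation):
```
module M = Map.Make (String)

(* eliminate variable [x]: if [x] is a representative, promote another
   member of its class; otherwise drop [x] from whichever class contains it *)
let elim_var classes x =
  match M.find_opt x classes with
  | Some members -> (
      let classes = M.remove x classes in
      match members with
      | [] -> classes
      | repr :: rest -> M.add repr rest classes )
  | None -> M.map (List.filter (fun m -> not (String.equal m x))) classes
```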
Reviewed By: jvillard
Differential Revision: D25756573
fbshipit-source-id: 0eead9281
Summary:
Context.fold_uses_of should enumerate the transitive subterms of a
term rather than only the immediate subterms.
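A minimal OCaml sketch of the intended enumeration over a toy term type (the real term and Context types differ):
```
type term = Var of string | App of string * term list

(* fold [f] over [t] and all of its transitive subterms, not only the
   immediate ones *)
let rec fold_subterms t ~init ~f =
  let init = f init t in
  match t with
  | Var _ -> init
  | App (_, args) ->
      List.fold_left (fun acc a -> fold_subterms a ~init:acc ~f) init args
```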
Reviewed By: jvillard
Differential Revision: D25756553
fbshipit-source-id: a3911d9f5
Summary:
Previously Sh had a representation invariant which ensured that
existentials of a subformula did not shadow universals of its
superformulas. The current implementation of Sh.freshen_nested_xs
mistakenly still assumes this invariant, but it no longer holds. This
diff fixes freshen_nested_xs to freshen nested existentials with
respect to ancestor universals in addition to ancestor existentials.
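A minimal OCaml sketch of the freshening condition: a nested existential is renamed whenever it clashes with any name in scope, whether that name is bound universally or existentially by an ancestor (the names and representation are illustrative):
```
(* rename [x] until it avoids every name in [wrt] *)
let freshen ~wrt x =
  let rec loop i =
    let candidate = Printf.sprintf "%s_%d" x i in
    if List.mem candidate wrt then loop (i + 1) else candidate
  in
  if List.mem x wrt then loop 0 else x
```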
Reviewed By: jvillard
Differential Revision: D25756551
fbshipit-source-id: 54c48b067
Summary:
Sh.simplify propagates first-order constraints over the formula's
propositional structure, and can reveal inconsistency. It is therefore
sometimes beneficial to check consistency after simplifying rather
than before.
Reviewed By: jvillard
Differential Revision: D25756585
fbshipit-source-id: 2a205c965
Summary:
Simplifying a symbolic heap can reveal inconsistencies, either of
individual disjuncts or (therefore) of toplevel formulas. This is due
to the first-order contexts being propagated from a formula to its
descendants, and to the intersection of the contexts of disjunctions
being propagated to their parent. The later stages of eliminating
redundant quantifiers, etc. do not reveal further
inconsistencies. Therefore this diff moves the inconsistency detection
earlier, to just after the contexts are strengthened. This is clearer,
and also a minor optimization.
Reviewed By: jvillard
Differential Revision: D25756564
fbshipit-source-id: d3acf875b