Summary:
In order to test cost analysis results, we currently rely on an arbitrary cost threshold (200) and report issues whose cost exceeds it. For instance, a cost of 201 is considered expensive and reported as an `EXPENSIVE_EXECUTION_TIME` issue in cost tests.
This means that if we change the cost analysis in a slight way that results in a constant cost increase staying under 200, we wouldn't be able to detect it. I find this unsatisfactory and somewhat hacky.
This diff adds the ability to write the result of `costs-report.json` into a separate `cost-issues.exp` and then compare the actual costs (rather than relying only on this arbitrary threshold reporting mechanism).
Reviewed By: skcho
Differential Revision: D21816312
fbshipit-source-id: 93b531928
Summary:
This diff renames `ZERO_XXX` issues to the more descriptive
`XXX_UNREACHABLE_AT_EXIT` and replaces bottom with
unreachable in cost kinds and issues.
Reviewed By: skcho
Differential Revision: D20140301
fbshipit-source-id: eb6076b30
Summary:
The issue type `ZERO_EXECUTION_TIME` actually corresponds to the bottom state but has been mistakenly used to mean
- unreachable nodes (the program never reaching the exit state)
- having zero cost (e.g. for allocations).
Note that, for execution costs, the latter doesn't make sense since we always incur a unit cost for the start node; hence, a function with an empty body has unit cost. For allocations or IO, however, we only incur costs for specific primitives, so a function with no allocations/IO could have a zero cost. Still, there is no point in reporting functions with zero cost as a specific issue type. Instead, what we want to track is the former, i.e. functions whose cost becomes 0 because the program never reaches the exit state.
This diff aims to split these cases into two, reporting only on the unreachable case, and adds traces to the bottom/unreachable cost by creating a special category in polynomials.
Next diff will rename `ZERO_XXX` to `XXX_UNREACHABLE_AT_EXIT`.
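For illustration, a minimal C sketch of the three cases (hypothetical functions, not necessarily the ones in the test suite):
```
// Empty body: execution cost is 1 (the unit cost of the start node),
// so this should not be reported.
void empty_body() {}

// No allocations: the allocation cost can legitimately be 0; again,
// not worth reporting as a specific issue.
int no_alloc(int x) { return x + 1; }

// The exit node is never reached, so the cost at exit is
// bottom/unreachable -- this is the case we want to report.
void never_returns() {
  while (1) {
  }
}
```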
Reviewed By: skcho
Differential Revision: D20005774
fbshipit-source-id: 46b9abd5a
Summary: We had no tests that resulted in `ZERO_EXECUTION_TIME`. Let's fix that.
Reviewed By: skcho
Differential Revision: D20097504
fbshipit-source-id: 56c23fea0
Summary:
This diff revises the widening functions for bounds that have a linear form and a min/max form.
For example, for lower bounds,
* 3 ▽ (1+min(2, x)) = (1+min(2, x))
* 3+x ▽ (3+min(2, x)) = (3+min(2, x))
Reviewed By: jvillard
Differential Revision: D17420786
fbshipit-source-id: ff9eebed3
Summary:
This diff ignores character symbols in the cost results, in order to
avoid FPs from parser code.
Reviewed By: ezgicicek
Differential Revision: D17132053
fbshipit-source-id: d9cf8bd26
Summary: Functions with an empty body have unit cost, not zero cost. The unit cost comes from the start node.
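A tiny hypothetical example, assuming the start node always contributes a unit cost:
```
// No statements at all, yet the execution cost is 1 (from the start
// node), not 0.
void do_nothing() {}
```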
Reviewed By: skcho
Differential Revision: D16855642
fbshipit-source-id: 6b5181faf
Summary: We want to keep big O notation as simple as possible in cost analysis reports (especially at diff time). Therefore, let's not show constants/min/max in big O notation, even though the resulting asymptotic bound might be inaccurate. Developers can click on the trace and see the actual cost.
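As a hypothetical illustration (assuming a loop bounded by both `n` and a constant yields a `min` term in the exact cost):
```
// The exact cost has a shape like c1 + c2 * min(n, 10), but the big O
// summary is shown simply as O(n); the precise polynomial remains
// available in the trace.
void bounded_loop(int n) {
  for (int i = 0; i < n && i < 10; i++) {
  }
}
```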
Reviewed By: skcho
Differential Revision: D16731351
fbshipit-source-id: 2e16f7eca
Summary: In order to test changes to the big O notation, let's record it in the test results.
Reviewed By: skcho
Differential Revision: D16763972
fbshipit-source-id: c1376909b
Summary:
- Add allocation costs to `costs-report.json` and enable diffing over allocation costs.
- Also, let's be more consistent and modular in naming our cost issues:
- Introduce a generic issue type `X_TIME_COMPLEXITY_INCREASE` where `X` can be one of the cost kinds. If the function is on the cold start path, the issue can have the `COLD_START` suffix. Similarly for infinite/zero/expensive calls.
- Change `PERFORMANCE_VARIATION` -> `EXECUTION_TIME_COMPLEXITY_INCREASE`
- Add a new issue type, `ALLOCATION_COMPLEXITY_INCREASE_COLD_START`, which will be enabled by default
- Refactor cost issues to be more modular and succinct. This also makes it very easy to add a new cost kind: simply add the kind to the `enabled_cost_kinds` list in `CostKind.ml`
Reviewed By: mbouaziz
Differential Revision: D15822681
fbshipit-source-id: cf89ece59
Summary:
Instead of emitting an ad-hoc builtin on variable declaration, emit a new
metadata instruction. This allows us to remove the code matching on that
ad-hoc builtin that had to be inserted in several checkers.
Inferbo & pulse used that information meaningfully and had to undergo
some minor changes to cope with the new metadata instruction.
Reviewed By: ezgicicek
Differential Revision: D14833100
fbshipit-source-id: 9b3009d22
Summary:
This diff adds a constant to the set of widening thresholds if the
constant is compared to an abstract value in condition expressions.
Each abstract value has its own set of thresholds.
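A minimal example of the kind of code this helps with (a hypothetical sketch; the exact inferred bounds depend on the analysis):
```
void count_up() {
  int i = 0;
  // The constant 100 from the condition is added to i's widening
  // thresholds, so widening can stop at [0, 100] instead of jumping
  // straight to [0, +oo].
  while (i < 100) {
    i++;
  }
}
```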
Reviewed By: mbouaziz
Differential Revision: D14147150
fbshipit-source-id: ca0db34d4
Summary: Record where each symbol in a polynomial comes from: a loop, a function call, or a modeled call.
Reviewed By: mbouaziz
Differential Revision: D14047420
fbshipit-source-id: 56d0bd926
Summary: It keeps aliases for simple plus/minus arithmetic in order to prune the value of the `++i` expression.
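A hypothetical sketch of the pattern this targets (the exact pruning behavior is an assumption about the analysis):
```
void iterate(int n) {
  int i = 0;
  // The frontend introduces a temporary for ++i; keeping the plus/minus
  // alias lets the prune on the loop condition also refine i itself.
  while (++i < n) {
  }
}
```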
Reviewed By: mbouaziz
Differential Revision: D14080230
fbshipit-source-id: d3af32a32
Summary:
- There is no need to use abstract interpretation to compute a dot product: let's just fold over all nodes, but still do it in order (using the WTO) so that we report at the right place
- The previous version computed a dot product over the nodes for each node, which was quadratic; the new version is linear (see the sketch below)
- Report only once, the first time the threshold is reached (if in a loop, report at the loop head)
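A language-agnostic sketch of the linear scheme, written here in C with hypothetical names (the real implementation lives in the analyzer's OCaml code):
```
#include <stddef.h>

// Hypothetical per-node cost record; nodes[] is assumed to already be
// ordered according to the WTO so that reporting happens at the right place.
struct node_cost {
  int node_id;
  long cost;
};

// Single linear pass: accumulate the total and report only once, at the
// first node where the running total reaches the threshold.
int find_first_expensive_node(const struct node_cost *nodes, size_t n,
                              long threshold) {
  long total = 0;
  for (size_t i = 0; i < n; i++) {
    total += nodes[i].cost;
    if (total >= threshold) {
      return nodes[i].node_id; // report here, exactly once
    }
  }
  return -1; // threshold never reached
}
```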
Reviewed By: ddino
Differential Revision: D14028171
fbshipit-source-id: b4a840c6e
Summary:
For abstract values representing one concrete value, create only one symbol instead of two.
Still create two symbols (lb, ub) for abstract values representing multiple concrete values (like array cells).
As a consequence, comparisons of symbolic values are more precise (we can even prove equality). I expect to remove a bunch of FPs.
Another consequence is the disappearance of `.lb` and `.ub` in many reports.
Reviewed By: skcho
Differential Revision: D13072084
fbshipit-source-id: 9bc0b9881
Summary:
It materializes symbolic values of function parameters on demand. The on-demand materialization is triggered when looking up a value in an abstract memory and when joining/widening abstract memories.
Depends on D13294630
Main idea:
* Symbolic values are generated on demand from a symbol path and its type
* In order to avoid generating infinitely many symbolic values, symbol paths are canonicalized by structure types and field names (which means they are abstracted to the same value). For example, in a linked list, a symbolic value `x->next->next` is canonicalized to `x->next` when the structures (`*x` and `*x->next`) have the same structure type and the same field name (`next`); see the sketch after this list.
Changes from the previous code:
* `Symbol.t` does not include `id` and `pname` for distinguishing symbols. Now, all symbols are compared by `path:SymbolPath.partial` and `bound_end`.
* `SymbolTable`, which was used to generate symbolic values with fresh `id`s, is no longer used.
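To make the linked-list canonicalization above concrete, a small C sketch (hypothetical types and names):
```
struct node {
  struct node* next;
  int data;
};

int read_next_next(struct node* x) {
  // Both *x and *x->next have type struct node and are reached through
  // the same field name `next`, so the symbolic value for x->next->next
  // is canonicalized to the one for x->next.
  return x->next->next->data;
}
```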
Reviewed By: mbouaziz
Differential Revision: D13294635
fbshipit-source-id: fa422f084
Summary:
When initialising a variable via semi-exotic means, the frontend loses
the information that the variable was initialised. For instance, it
translates:
```
struct Foo { int i; };
...
Foo s = {42};
```
as:
```
s.i := 42
```
This can be confusing for backends that need to know that `s` actually
got initialised, e.g. pulse.
The solution implemented here is to insert a dummy call to
`__variable_initialization`:
```
__variable_initialization(&s);
s.i := 42;
```
Then checkers can recognise that this builtin function does what its
name says.
Reviewed By: mbouaziz
Differential Revision: D12887122
fbshipit-source-id: 6e7214438
Summary: This diff changes the pretty-printing (pp) of binary operation conditions in order to avoid a `make test` failure. The same `uint64_t` type is translated to `unsigned long long` on 64-bit Mac but to `unsigned long` on 64-bit Linux, which caused a `make test` failure.
Reviewed By: mbouaziz
Differential Revision: D10459466
fbshipit-source-id: 449ab548e
Summary:
I realized that the control variable analysis was broken when we had multiple back-edges for the same loop. This is often the case when we have a switch statement combined with a `continue` in a loop (see `test_switch` in `switch_continue.c`) or when we have disjunctive guards in do-while loops.
This diff fixes that by
- defining a loop by its loop head (the target of its back-edges) rather than by its back-edges, and converting the back-edge list into a map from each loop head to the sources of that loop's back-edges
- collecting multiple guard nodes that may come from multiple exit nodes per loop head
In addition, it removes the incorrect assumption that an exit node belongs to a single loop head.
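A shape similar to `test_switch` (hypothetical code; the actual test lives in `switch_continue.c`):
```
int loop_with_switch(int n) {
  int i = 0;
  while (i < n) {
    switch (i % 3) {
      case 0:
        i++;
        continue; // one back-edge to the loop head
      case 1:
        i += 2;
        continue; // another back-edge to the same loop head
      default:
        break;
    }
    i++; // falls through to the regular back-edge
  }
  return i;
}
```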
Reviewed By: mbouaziz
Differential Revision: D8398061
fbshipit-source-id: abaf288