[A separate blog post looked at 100 recent data race fixes](https://code.facebook.com/posts/1537144479682247/finding-inter-procedural-bugs-at-scale-with-infer-static-analyzer/)
in Infer's deployment across various bug categories, and observed that 53 of the
data races were inter-file (and thus involved multiple classes).
[See above](racerd#interprocedural) for an example of RacerD's interprocedural
capabilities.
One reaction to the challenge of developing effective static race detectors has
been to ask the programmer to do more work to help the analyzer. Examples of
Separation logic is a novel kind of mathematical logic which facilitates
reasoning about mutations to computer memory. It enables scalability by breaking
reasoning into chunks corresponding to local operations on memory, and then
composing the reasoning chunks together.
Separation logic is based on a logical connective <Math code="\\( * \\)" />
called the _separating conjunction_ and pronounced "and separately". Separation
logic formulae are interpreted over program-allocated heaps. The logical formula
<Math code="\\( A*B \\)" /> holds of a piece of program heap (a heaplet) when it
can be divided into two sub-heaplets described by <Math code="\\(A\\)" /> and
<Math code="\\(B\\)" />. For example, the formula
<Math code="\\( x \mapsto y * y \mapsto x \\)" />
can be read "<Math code="\\(x\\)" /> points to <Math code="\\(y\\)" /> and
separately <Math code="\\(y\\)" /> points to <Math code="\\(x\\)" />". This
formula describes precisely two allocated memory cells. The first cell is
allocated at the address denoted by the pointer <Math code="\\(x\\)" /> and the
content of this cell is the value of <Math code="\\(y\\)" />. The second cell is
allocated at the address denoted by the pointer <Math code="\\(y\\)" /> and the
content of this second cell is the value of <Math code="\\(x\\)" />. Crucially,
we know that there are precisely two cells because <Math code="\\( * \\)" />
stipulates that they are separated and therefore the cells are allocated in two
different parts of memory. In other words, <Math code="\\( * \\)" /> says that
<Math code="\\(x\\)" /> and <Math code="\\(y\\)" /> do not hold the same value
(i.e., these pointers are not aliased). The heaplet partitioning defined by the
formula above can be visualized like so:
![](/img/SepSplit.jpg)
The important thing about separating conjunction is the way that it fits
together with mutation to computer memory; reasoning about program commands
tends to work by updating <Math code="\\(*\\)" />-conjuncts in-place, mimicking
the operational in-place update of RAM.
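As a small illustration (a standard "small axiom" from the separation logic literature, not a formula taken from this document), a store through a pointer updates only the cell it touches, and the rest of the heap rides along unchanged:

```latex
% small axiom: storing v through x affects only the cell at x
\lbrace x \mapsto - \rbrace \; [x] := v \; \lbrace x \mapsto v \rbrace

% the same command framed into a larger heap: the cell at y is untouched
\lbrace x \mapsto - \, * \, y \mapsto w \rbrace \; [x] := v \;
\lbrace x \mapsto v \, * \, y \mapsto w \rbrace
```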
Separation logic uses Hoare triples of the form
<Math code="\\( \lbrace pre \rbrace prog \lbrace post \rbrace \\)" /> where
<Math code="\\(pre\\)" /> is the precondition, <Math code="\\(prog\\)" /> a
program part, and <Math code="\\(post\\)" /> the postcondition. Triples are
abstract specifications of the behavior of the program. For example, we could
take
<Math code="\\( \lbrace r \mapsto open \rbrace closeResource(r) \lbrace r \mapsto closed \rbrace \\)" />
as a specification for a method which closes a resource given to it as a
parameter.
Now, suppose we have two resources <Math code="\\( r\_1 \\)" /> and
<Math code="\\( r\_2 \\)" />, described by
<Math code="\\(r\_1 \mapsto open * r\_2 \mapsto open\\)" />, and we close the
first of them. We think operationally in terms of updating the memory in place,
leaving <Math code="\\( r\_2 \mapsto open \\)" /> alone, as described by this triple:
<Math code="\\( \lbrace r\_1 \mapsto open \rbrace closeResource(r\_1) \lbrace r\_1 \mapsto closed \rbrace \\)" /> (spec)

<Math code="\\( \lbrace r\_1 \mapsto open * r\_2 \mapsto open \rbrace closeResource(r\_1) \lbrace r\_1 \mapsto closed * r\_2 \mapsto open \rbrace \\)" /> (use)
What we have here is that the specification (spec) describes how
<Math code="\\(closeResource()\\)" /> works by mentioning only one piece of
state (what is sometimes called a small specification), and in (use) we use that
specification to update a larger precondition in place.
This is an instance of a general pattern. There is a rule that lets you go from
smaller to bigger specifications
<Math code="\\( \frac{\lbrace pre \rbrace prog \lbrace post \rbrace}{\lbrace pre * frame \rbrace prog \lbrace post * frame \rbrace} \\)" />
Our passage from (spec) to (use) is obtained by taking
- <Math code="\\(pre\\)" /> to be <Math code="\\(r\_1 \mapsto open\\)" />
- <Math code="\\(post\\)" /> to be <Math code="\\(r\_1 \mapsto closed \\)" />,
  and
- <Math code="\\(frame\\)" /> to be <Math code="\\(r\_2 \mapsto open \\)" />
This rule is called the _frame rule_ of separation logic. It is named after the
frame problem, a classic problem in artificial intelligence. Generally, the
<Math code="\\(frame\\)" /> describes state that remains unchanged; the
terminology comes from the analogy of a background scene in an animation as
unchanging while the objects and characters within the scene change.
The frame rule is the key to the principle of local reasoning in separation
logic: reasoning and specifications should concentrate on the resources that a
program accesses (the footprint), without mentioning what doesn't change.
## Bi-abduction
Bi-abduction is a form of logical inference for separation logic which automates
the key ideas about local reasoning.
Usually, logic works with validity or entailment statements like
<Math code="\\( A \vdash B \\)" />
which says that <Math code="\\(A\\)" /> implies <Math code="\\(B\\)" />. Infer
uses an extension of this inference question in an internal theorem prover while
it runs over program statements. Infer's question
<Math code="\\( A * ?antiframe \vdash B * ?frame \\)" />
is called _bi-abduction_. The problem here is for the theorem prover to
<i>discover</i> a pair of frame and antiframe formulae that make the entailment
statement valid.
Global analyses of large programs are normally computationally intractable.
However, bi-abduction makes it possible to break the analysis of a large program
into small, independent analyses of its procedures. This gives Infer the ability
to scale independently of the size of the analyzed code. Moreover, by breaking
the analysis into small independent parts, when the full program is analyzed
again because of a code change, the analysis results for the unchanged parts of
the code can be reused and only the changed code needs to be re-analyzed. This
process is called incremental analysis and it is very powerful when integrating
a static analysis tool like Infer into a development environment.
In order to decompose a global analysis into small independent analyses, let's
first consider how a function call is analyzed in separation logic. Assume we
have the following spec for a function <Math code="\\( f() \\)" />:
<Math code="\\( \lbrace Pre\_f \rbrace f() \lbrace Post\_f \rbrace \\)" />
and by analyzing the caller function, we compute that before the call of
<Math code="\\( f \\)" /> the formula <Math code="\\( CallingState \\)" />
holds. Then, to utilize the specification of <Math code="\\( f \\)" />, the
following implication must hold:
<Math code="\\( CallingState \vdash Pre\_f * Frame \\)" /> (Function Call)
Given that, bi-abduction is used at procedure call sites for two reasons: to
discover the missing state that is needed for the above implication to hold,
allowing the analysis to proceed (the antiframe), as well as the state that the
procedure leaves unchanged (the frame).
To see how this works, suppose we have some bare code

<Math code="\\( closeResource(r1); \\)" />
<Math code="\\( closeResource(r2); \\)" />
but no overall specification; we are going to describe how to discover a
pre/post spec for it. Considering the first statement and the (spec) above, a
human might say: if only we had <Math code="\\(r1 \mapsto open\\)" /> in the
precondition, then we could proceed. Technically, we ask a bi-abduction question
<Math code="\\( emp * ?antiframe \vdash r1 \mapsto open * ?frame \\)" />
and we can fill this in easily by picking
<Math code="\\(antiframe = r1 \mapsto open\\)" /> and
<Math code="\\(frame = emp\\)" />, where emp means the empty state. emp records
that at the start we presume nothing. So we obtain the trivially true
implication:
<Math code="\\( emp * r1 \mapsto open \vdash r1 \mapsto open * emp \\)" />

which, by applying logical rules, can be re-written equivalently to:

<Math code="\\( r1 \mapsto open \vdash r1 \mapsto open \\)" />
Notice that this satisfies the (Function Call) requirement to correctly make the
call. So let's add that information in the pre and, while we are at it, record
in the post of the first statement the information that comes from (spec).
<Math code="\\( \lbrace r1 \mapsto open \rbrace \\)" />
<Math code="\\( closeResource(r1); \\)" />
<Math code="\\( \lbrace r1 \mapsto closed \rbrace \\)" />
<Math code="\\( closeResource(r2); \\)" />
Now, let's move to the second statement. Its precondition
<Math code="\\(r1 \mapsto closed\\)" /> in the partial symbolic execution trace
just given does not have the information needed by
<Math code="\\(closeResource(r2)\\)" />, so we can fill that in and continue by
putting <Math code="\\(r2 \mapsto open\\)" /> in the pre. While we are at it,
we can thread this assertion back to the beginning.
<Math code="\\( \lbrace r1 \mapsto open * r2 \mapsto open \rbrace \\)" />
<Math code="\\( closeResource(r1); \\)" />
<Math code="\\( \lbrace r1 \mapsto closed * r2 \mapsto open \rbrace \\)" />
<Math code="\\( closeResource(r2); \\)" />
This information on what to thread backwards can be obtained as the antiframe
part of the bi-abduction question
<Math code="\\( r1 \mapsto closed * ?antiframe \vdash r2 \mapsto open * ?frame \\)" />
Note that the antiframe is precisely the information missing from the
precondition in order for <Math code="\\(closeResource(r2)\\)" /> to proceed. On
the other hand, the frame <Math code="\\(r1 \mapsto closed\\)" /> is the portion
of state not changed by <Math code="\\(closeResource(r2)\\)" />; we can thread
that through to the overall postcondition (as justified by the frame rule),
giving us
<Math code="\\( \lbrace r1 \mapsto open * r2 \mapsto open \rbrace \\)" />
<Math code="\\( closeResource(r1);\ closeResource(r2); \\)" />
<Math code="\\( \lbrace r1 \mapsto closed * r2 \mapsto closed \rbrace \\)" />
Thus, we have obtained a pre and post for this code by symbolically executing
it, using bi-abduction to discover preconditions (abduction of antiframes) as
well as untouched portions of memory (frames) as we go along.
In general, bi-abduction provides a way to infer pre/post specs from bare code,
as long as we know specs for the primitives at the base level of the code.
Humans do not need to write preconditions and postconditions for all the
procedures, which is the key to a high level of automation. This is the basis
for how Infer works, why it can scale, and how it can analyze code changes
incrementally.
Context: The logical terminology we have been using here comes from AI and
philosophy of science. Abductive inference was introduced by the philosopher
Charles Peirce, and described as the mechanism underpinning hypothesis formation
(or, guessing what might be true about the world), the most creative part of the
scientific process. Abduction and the frame problem have both attracted
significant attention in AI. Infer uses an automated form of abduction to
generate preconditions describing the memory that a program touches (the
antiframe part above), and frame inference to discover what isn't touched. Infer
then uses deductive reasoning to calculate a formula describing the effect of a
program, starting from the preconditions. In a sense, Infer approaches automated
reasoning about programs by mimicking what a human might do when trying to
understand a program: it abduces what the program needs, and deduces conclusions
of that. It is when the reasoning goes wrong that Infer reports a potential bug.
This description is by necessity simplified compared to what Infer actually
does. More technical information can be found in the following papers. The
descriptions in the papers are precise, but still simplified; there are many
engineering decisions not recorded there. Finally, beyond the papers, you can
read the source code if you wish!
## Technical papers
The following papers contain some of the technical background on Infer and information on how it is used inside Facebook.
- <a href="http://link.springer.com/chapter/10.1007%2F3-540-44802-0_1">Local Reasoning about Programs that Alter Data Structures.</a> An early separation logic paper which advanced ideas about local reasoning and the frame rule.
- <a href="http://link.springer.com/chapter/10.1007/11804192_6">Smallfoot: Modular Automatic Assertion Checking with Separation Logic.</a> The first separation logic verification tool; introduced frame inference.
- <a href="http://link.springer.com/chapter/10.1007%2F11691372_19">A Local Shape Analysis Based on Separation Logic.</a> Separation logic meets abstract interpretation; calculating loop invariants via a fixed-point computation.
- <a href="http://dl.acm.org/citation.cfm?id=2049700">Compositional Shape Analysis by Means of Bi-Abduction.</a> The bi-abduction paper.
- <a href="https://research.facebook.com/publications/moving-fast-with-software-verification/">Moving Fast with Software Verification.</a> A paper about the way we use Infer at Facebook.
Here is an overview of the types of bugs currently reported by Infer checkers.
## Captured Strong Self
This will happen in one of two cases generally:
1. One uses `weakSelf` but forgot to declare it weak first.
2. One is using `strongSelf`, declared in a block, in another (inside) block.
This changes the delicate balance of the `weakSelf`/`strongSelf` use in the
first block. The retain cycle is avoided there because `strongSelf` is a
local variable to the block. If `strongSelf` is used in the inside block,
then it's not a local variable anymore, but a captured variable.
## Checkers immutable cast
Action: you can change the return type to be immutable, or make a copy of the
collection so that it can be modified.
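As a hedged sketch of the pattern (this checker targets Guava immutable collections; here `Collections.unmodifiableList` stands in so the example is self-contained, and all names are illustrative):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class Names {
    // The declared return type is a plain List, but the value is immutable:
    // any caller that tries to modify the result will crash at runtime.
    static List<String> getNames() {
        return Collections.unmodifiableList(Arrays.asList("ada", "grace"));
    }

    public static void main(String[] args) {
        List<String> names = getNames();
        try {
            names.add("alan"); // throws UnsupportedOperationException
        } catch (UnsupportedOperationException e) {
            System.out.println("caught: " + e.getClass().getSimpleName());
        }
    }
}
```

Returning a genuinely mutable copy (`new ArrayList<>(...)`) or declaring the immutable type in the signature both make the contract honest.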
## Deadlock
This error is currently reported in Java. A deadlock occurs when two distinct
threads try to acquire two locks in reverse order.
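The classic shape can be sketched in Java (illustrative names, not taken from the Infer documentation): two methods acquire the same two monitors in opposite orders, so two threads running them concurrently can each end up holding one lock while waiting forever for the other.

```java
public class Transfers {
    private final Object lockA = new Object();
    private final Object lockB = new Object();
    private int a = 100, b = 100;

    // Thread 1: takes lockA, then lockB.
    public void moveAtoB(int amount) {
        synchronized (lockA) {
            synchronized (lockB) {
                a -= amount;
                b += amount;
            }
        }
    }

    // Thread 2: takes lockB, then lockA -- the reverse order (the bug).
    public void moveBtoA(int amount) {
        synchronized (lockB) {
            synchronized (lockA) {
                b -= amount;
                a += amount;
            }
        }
    }

    public int total() {
        synchronized (lockA) {
            synchronized (lockB) {
                return a + b;
            }
        }
    }
}
```

The fix is to pick one global acquisition order (say, always `lockA` before `lockB`) and use it in both methods.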
## Dead store
This error is reported in C++. It fires when the value assigned to a variable
is never used (e.g., `int i = 1; i = 2; return i;`).
## Empty vector access
This error type is reported only in C++, in versions >= C++11.
## Field should be nullable
This error type is reported in Java. It fires when a field is not marked
`@Nullable` even though it can hold `null` at some point in the code.
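A minimal Java sketch of the pattern (illustrative names): the field can legitimately become `null`, so it should be annotated `@Nullable` and checked before each dereference.

```java
public class Session {
    private String token; // can be null: should be marked @Nullable

    void logIn(String t) {
        token = t;
    }

    void logOut() {
        token = null; // the field is nullified here...
    }

    int tokenLength() {
        // ...so this unguarded dereference can throw NullPointerException
        return token.length();
    }
}
```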
## Fragment retains view
This error type is Android-specific. It fires when a `Fragment` type fails to
nullify one or more of its `View` fields in `onDestroyView`. When a `Fragment`
is placed on the back stack, failing to do so means it will retain a useless
reference to that `View` that will not be cleaned up until the `Fragment` is
resumed or destroyed.

Action: Nullify the `View` in question in `onDestroyView`.
## Interface not thread-safe
This error indicates that you have invoked an interface method not annotated
with `@ThreadSafe`. To fix, add the annotation either to the interface or to the
interface method. For background on why these annotations are needed, see the
detailed explanation [here](racerd#interface-not-thread-safe).
## Ivar not null checked
This error type is only reported in Objective-C. This is similar to Null
dereference, but here Infer found only that a null dereference can happen if an
instance variable or a parameter is `nil`.
Possible solutions are adding a check for `nil`, or making sure that the method
is not called with `nil`.
## Lock Consistency Violation
This is a C++ and Objective-C error reported whenever:

- a method writes to some member `x` while holding a lock, and
- another method reads `x` without holding a lock.

Writes and reads to a member include writes and reads through a
container (an array, a vector, etc).

Infer considers a method as private if it's not exported in the header-file
interface.
## Mixed self weakSelf
This happens when an Objective-C block captures both `self` and `weakSelf`, a
weak pointer to `self`. Possibly the developer meant to capture only `weakSelf`
to avoid a retain cycle, but made a typo and used `self` as well in the block,
instead of `strongSelf`. In this case, this could cause a retain cycle.
## Multiple weakSelf
An Objective-C block uses `weakSelf` more than once. This could lead to
unexpected behaviour. Even if `weakSelf` is not nil in the first use, it could
be nil in the following uses since the object that `weakSelf` points to could be
freed anytime. One should assign it to a strong pointer first, and then use it
in the block.
## Memory leak
This error is reported in C and Objective-C. It fires when allocated memory is
never freed; a common Objective-C case is when objects from Core Foundation or
Core Graphics don't get released.
## Null Dereference
Infer reports null dereference bugs in C, Objective-C and Java. The issue is
about a pointer that can be `null` being dereferenced, which crashes the program
in C and Objective-C and throws a `NullPointerException` in Java. In the future
we might include analysis directives (hey, analyzer, p is not null!) like in
Hack that tell the analyzer the information that you know, but that is for
later.
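A minimal Java sketch of the kind of bug reported (illustrative names):

```java
public class Greeter {
    // May return null: callers must check before dereferencing.
    static String findGreeting(String lang) {
        if ("en".equals(lang)) {
            return "hello";
        }
        return null;
    }

    static int greetingLength(String lang) {
        String g = findGreeting(lang);
        // BUG: when lang is not "en", g is null and this throws a
        // NullPointerException. Guarding with `if (g != null)` fixes it.
        return g.length();
    }
}
```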
## Parameter not null checked
This error type is reported only in Objective-C. It is similar to Null
dereference, but here Infer found only that a null dereference can happen if the
method is called with `nil` as an argument. Possible solutions are adding a
check for `nil`, or making sure that the method is not called with `nil`. When
an argument will never be `nil`, you can add the annotation `nonnull` to the
argument's type, to tell Infer (and the type system) that the argument won't be
`nil`. This will silence the warning.
## Premature nil termination argument
This error type is reported in C and Objective-C. In many variadic methods,
`nil` is used to signify the end of the list of input objects, so a `nil`
argument in the middle of the list prematurely terminates it. An example of such
variadic methods is `arrayWithObjects:`. For the call
`[NSArray arrayWithObjects:@"aaa", str, @"bbb", nil]`, if `str` is `nil` then an
array `@[@"aaa"]` of size 1 will be created, and not an array
`@[@"aaa", str, @"bbb"]` of size 3 as expected.
## Resource leak
Infer reports resource leaks in C, Objective-C and Java. In general, resources
are entities such as files, sockets, and connections that need to be closed
after being used. Note that although you might hear people say that
try-with-resources "solves" the resource problem, it does not. It is very
useful, but you cannot use it blindly when you see a resource-allocation site.
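A hedged Java sketch (illustrative names): the first version leaks the stream if `read()` throws; the second closes it on all paths.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class FirstByte {
    // Leaky version: if read() throws, close() is never reached and the
    // file descriptor is leaked.
    static int leaky(File f) throws IOException {
        FileInputStream in = new FileInputStream(f);
        int b = in.read();
        in.close();
        return b;
    }

    // Fixed version: try-with-resources closes the stream even when
    // read() throws.
    static int safe(File f) throws IOException {
        try (FileInputStream in = new FileInputStream(f)) {
            return in.read();
        }
    }
}
```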
## Retain cycle
A retain cycle is a situation when object A retains object B, and object B
retains object A at the same time, so neither object can ever be deallocated. A
typical example is a parent/child hierarchy in which the parent holds a strong
pointer to the child and the child also holds a strong pointer back to the
parent; one of the two references should be declared `weak` to break the cycle.
## Static initialization order fiasco
This error is reported in C++. It fires when the initialization of a static
variable accesses a static variable from another translation unit, since there
are no guarantees whether the latter has been already initialized or not at that
point. For a more technical definition and techniques to avoid/remediate it, see
the literature on the static initialization order fiasco.
In many methods that take a block as an argument, the block position is
annotated with `NS_NOESCAPE` to mark that the block passed to this method won't
be leaving the current scope. In those cases, there is no need to use `weakSelf`
to avoid the block capturing `self`. This issue type flags this case.