coverage: Never emit improperly-ordered coverage regions
If we emit a coverage region that is improperly ordered (end < start), `llvm-cov` will fail with `coveragemap_error::malformed`, which is inconvenient for users and also very hard to debug.
Ideally we would fix the root causes of these situations, but they tend to occur in very obscure edge-case scenarios (often involving nested macros), and we don't always have a good MCVE to work from. So it makes sense to also have a catch-all check that will prevent improperly-ordered regions from ever being emitted.
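A minimal, self-contained sketch of the catch-all idea (the `Region` type here is an illustrative stand-in, not the compiler's actual data structure): any region whose end precedes its start is dropped instead of being emitted.

```rust
/// Illustrative stand-in for a coverage region, with (line, column) endpoints.
#[derive(Debug)]
struct Region {
    start: (u32, u32),
    end: (u32, u32),
}

/// A region is well-ordered when its start does not come after its end.
fn is_well_ordered(region: &Region) -> bool {
    region.start <= region.end
}

fn main() {
    let regions = vec![
        Region { start: (1, 1), end: (2, 10) },
        Region { start: (5, 3), end: (4, 1) }, // improperly ordered: end < start
    ];
    // Catch-all check: drop malformed regions instead of emitting them
    // and letting `llvm-cov` fail with `coveragemap_error::malformed`.
    let emitted: Vec<&Region> = regions.iter().filter(|r| is_well_ordered(r)).collect();
    assert_eq!(emitted.len(), 1);
}
```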
---
This is mainly aimed at resolving #119453. We don't have a specific way to reproduce it, which is why I haven't been able to add a test case in this PR. But based on the information provided in that issue, this change seems likely to avoid the error in `llvm-cov`.
``@rustbot`` label +A-code-coverage
This also switches from `split_off(0)` to `std::mem::take` when emptying the
accumulated list of blocks, because `split_off(0)` handles capacity in a way
that is unintuitive when used in a loop.
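The difference is easy to show in isolation (a standalone illustration, not the compiler code): `split_off(0)` keeps the original vector's capacity alive across every flush, while `std::mem::take` leaves behind a fresh, unallocated vector.

```rust
fn main() {
    let mut blocks: Vec<u32> = Vec::with_capacity(64);
    blocks.extend([1, 2, 3]);

    // `split_off(0)` moves out the contents, but is documented to leave
    // `blocks` with its previous capacity, so a large empty buffer is
    // carried along (and re-established) on every flush in a loop.
    let flushed = blocks.split_off(0);
    assert_eq!(flushed, [1, 2, 3]);
    assert!(blocks.capacity() >= 64);

    // `std::mem::take` replaces `blocks` with a brand-new `Vec::new()`
    // (capacity 0, no allocation); capacity grows again only as needed.
    blocks.extend([4, 5, 6]);
    let flushed = std::mem::take(&mut blocks);
    assert_eq!(flushed, [4, 5, 6]);
    assert_eq!(blocks.capacity(), 0);
}
```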
The old loop had two separate places where it would flush the accumulated list
of straight-line blocks into a new BCB. One occurred at the start of the loop
body when the current block couldn't be chained into, and the other occurred at
the end of the loop body when the current block couldn't be chained from.
The latter check can be hoisted to the start of the loop body by making it
examine the previous block (which has added itself to the list) instead of the
current block. With that done, we can combine the two separate flushes into one
flush with two possible trigger conditions.
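A rough, self-contained sketch of the resulting loop shape (the block type and chaining predicates are hypothetical stand-ins, not the real BCB construction code):

```rust
// Hypothetical chaining predicates, standing in for the real CFG checks.
fn can_chain_from(block: u32) -> bool {
    block % 3 != 0
}
fn can_chain_into(block: u32) -> bool {
    block % 4 != 0
}

fn main() {
    let mut bcbs: Vec<Vec<u32>> = Vec::new();
    let mut blocks: Vec<u32> = Vec::new();

    for block in 0..10 {
        // Single flush point with two trigger conditions: the previous
        // block (already in the list) can't be chained from, or the
        // current block can't be chained into.
        if let Some(&prev) = blocks.last() {
            if !can_chain_from(prev) || !can_chain_into(block) {
                bcbs.push(std::mem::take(&mut blocks));
            }
        }
        blocks.push(block);
    }
    // Flush whatever straight-line blocks remain after the traversal.
    if !blocks.is_empty() {
        bcbs.push(std::mem::take(&mut blocks));
    }
    println!("{bcbs:?}");
}
```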
Filtering out unreachable successors is only needed by the main graph traversal
loop, so we can move the filtering step into that loop instead, eliminating the
need to pass the MIR body into `bcb_filtered_successors`.
This is less elegant in some ways, since we no longer visit a BCB's spans as a
batch, but it will make it much easier to add support for other kinds of
coverage mapping regions (e.g. branch regions or gap regions).
This draws a clear distinction between the fields/methods that are needed by
initial span extraction and preprocessing, and those that are needed by the
main "refinement" loop.
These two tasks historically needed to be interleaved, but after various recent
changes (including #116046 and #116917) they can now be fully separated.
The old code used a heuristic to detect async functions and adjust their
coverage spans to produce better output. But there's no need to resort to a
heuristic when we can just check whether the current function is actually an
`async fn`.
If we want to know whether two byte positions are in the same file, we don't
need to clone and compare `Lrc<SourceFile>`; we can just get their indices and
compare those instead.
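A minimal analogue of that idea (not rustc's actual `SourceMap` API): with files kept sorted by start position, a byte position maps to a file index via binary search, and two positions share a file exactly when their indices match.

```rust
/// Toy source map: start offset of each file, sorted ascending,
/// with the first file starting at offset 0.
struct SourceMap {
    file_starts: Vec<u32>,
}

impl SourceMap {
    /// Index of the file containing `pos`: the last file whose start <= pos.
    fn file_idx(&self, pos: u32) -> usize {
        self.file_starts.partition_point(|&start| start <= pos) - 1
    }

    /// Same-file check by index comparison -- no file handles cloned.
    fn same_file(&self, a: u32, b: u32) -> bool {
        self.file_idx(a) == self.file_idx(b)
    }
}

fn main() {
    let sm = SourceMap { file_starts: vec![0, 100, 250] };
    assert!(sm.same_file(10, 99)); // both in the first file
    assert!(!sm.same_file(99, 100)); // first file vs. second file
}
```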
Changes in this patch:
- Extract local variable `def_id`
- Check `is_fn_like` without retrieving HIR
- Inline some locals that are used once and aren't needed for clarity
Renamings:
- `find` -> `opt_hir_node`
- `get` -> `hir_node`
- `find_by_def_id` -> `opt_hir_node_by_def_id`
- `get_by_def_id` -> `hir_node_by_def_id`
Fix rebase fallout that used removed methods
Use `tcx.hir_node_by_def_id()` whenever possible in compiler
Fix clippy errors
Fix compiler
Apply suggestions from code review
Co-authored-by: Vadim Petrochenkov <vadim.petrochenkov@gmail.com>
Add FIXME about removing the type returned by `tcx.hir()`
Simplify with `tcx.hir_node_by_def_id`
Detect redundant imports that can be eliminated.
For #117772: to make review and modification easier, the checking code and the
code that removes redundant imports are split into two PRs.
coverage: Simplify the heuristic for ignoring `async fn` return spans
The code for extracting coverage spans from MIR has a special heuristic for dealing with `async fn`, so that the function's closing brace does not have a confusing double count.
The code implementing that heuristic is currently mixed in with the code for flushing remaining spans after the main refinement loop, making the refinement code harder to understand.
We can solve that by hoisting the heuristic to an earlier stage, after the spans have been extracted and sorted but before they have been processed by the refinement loop.
The coverage tests verify that the heuristic is still effective, so coverage mappings/reports for `async fn` have not changed.
---
This PR also has the side-effect of fixing the `None some_prev` panic that started appearing after #118525.
The old code assumed that `prev` would always be present after the refinement loop. That was only true if the list of collected spans was non-empty, but prior to #118525 that didn't seem to come up in practice. After that change, the list of collected spans could be empty in some specific circumstances, leading to panics.
The new code uses an `if let` to inspect `prev`, which correctly does nothing if there is no span present.
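A tiny sketch of the shape of that fix (the names here are hypothetical):

```rust
// Old shape (panics when no spans were collected):
//     let prev = prev.expect("some_prev");
// New shape: do nothing when there is no span present.
fn flush_remaining(prev: Option<u32>) {
    if let Some(prev) = prev {
        println!("flushing remaining span starting at {prev}");
    }
}

fn main() {
    flush_remaining(Some(3));
    flush_remaining(None); // previously this case panicked
}
```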
There are cases where coverage instrumentation wants to show a span for some
syntax element, but there is no MIR node that naturally carries that span, so
the instrumentor can't see it.
MIR building can now use this new kind of coverage statement to deliberately
include those spans in MIR, attached to a dummy statement that has no other
effect.
coverage: Merge refined spans in a separate final pass
Pulling this merge step out of `push_refined_span` and into a separate pass lets us push directly to `refined_spans` instead of calling a helper method.
Because the compiler can now see partial borrows of `refined_spans`, we can remove some extra code that was jumping through hoops to satisfy the borrow checker.
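In isolation, a separate final merge pass over an already-sorted list can be as simple as this sketch (the `Span` type and merge rule are illustrative, not the compiler's actual span logic):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
struct Span {
    lo: u32,
    hi: u32,
}

/// One pass over spans sorted by `lo`: merge each span into its
/// predecessor whenever the two touch or overlap.
fn merge_refined_spans(refined_spans: &mut Vec<Span>) {
    refined_spans.dedup_by(|next, prev| {
        if next.lo <= prev.hi {
            prev.hi = prev.hi.max(next.hi);
            true // `next` is absorbed into `prev` and removed
        } else {
            false
        }
    });
}

fn main() {
    let mut spans = vec![
        Span { lo: 0, hi: 5 },
        Span { lo: 5, hi: 9 },
        Span { lo: 12, hi: 20 },
    ];
    merge_refined_spans(&mut spans);
    assert_eq!(spans, [Span { lo: 0, hi: 9 }, Span { lo: 12, hi: 20 }]);
}
```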
---
``@rustbot`` label +A-code-coverage