/*!
Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the THIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.

### SEME Regions

When pushing a new [Scope], we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.
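
For example (illustrative), in this function the body of the `loop` is a single
lexical scope with one entry point and two exits, one through the `break` and
one through the `return`:

```
# fn check() -> bool { true }
fn f() -> i32 {
    loop {
        if check() {
            break;      // first exit from the loop's scope
        }
        if !check() {
            return 0;   // second exit from the loop's scope
        }
    }
    1
}
```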

For now, we record the `region::Scope` to each SEME region for later reference
(see caveat in next paragraph). This is because destruction scopes are tied to
them. This may change in the future so that MIR lowering determines its own
destruction scopes.

### Not so SEME Regions

In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that a single lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact our
mapping is from one scope to a vector of SEME regions. Since the SEME regions
are disjoint, the mapping is still one-to-one for the set of SEME regions that
we're currently in.

Also in matches, the scopes assigned to arms are not always even SEME regions!
Each arm has a single region with one entry for each pattern. We manually
manipulate the scheduled drops in this scope to avoid dropping things multiple
times.
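
For example (illustrative), with an or-pattern the guard may be reached from
several different candidate patterns, so the guard's single lexical scope
corresponds to several disjoint regions of the CFG:

```
# fn expensive_check(x: &i32) -> bool { *x > 0 }
fn g(pair: (Option<i32>, Option<i32>)) -> i32 {
    match pair {
        // The guard below can be entered from either pattern alternative,
        // i.e. from two different points in the lowered CFG.
        (Some(x), _) | (_, Some(x)) if expensive_check(&x) => x,
        _ => 0,
    }
}
```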

### Drops

The primary purpose for scopes is to insert drops: while building
the contents, we also accumulate places that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those places onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:

```
# let cond = true;
loop {
    let x = ..;
    if cond { break; }
    let y = ..;
}
```

When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.
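
Roughly, the exit edge created for that `break` then looks like this (an
illustrative sketch of the shape of the lowering, not the exact MIR the
builder emits):

```text
bb_break: {
    drop(x) -> bb_after_loop;   // only `x` has been scheduled at this point
}
```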

### Early exit

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `break_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits go to some
other enclosing scope). `break_scope` will record the set of drops currently
scheduled in a [DropTree]. Later, before `in_breakable_scope` exits, the drops
will be added to the CFG.

Panics are handled in a similar fashion, except that the drops are added to the
MIR once the rest of the function has finished being lowered. If a terminator
can panic, call `diverge_from(block)` with the block containing the terminator
`block`.
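
For example (illustrative), a `return` that crosses several nested scopes must
drop everything scheduled in each of the scopes it leaves:

```
# fn check() -> bool { true }
fn h() -> i32 {
    let a = String::from("outer");
    {
        let b = String::from("inner");
        if check() {
            // This early exit leaves both the inner block scope and the
            // function body scope, so drops for `b` and then `a` are
            // inserted on the outgoing edge recorded by `break_scope`.
            return 0;
        }
    }
    1
}
```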

### Breakable scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops and breakable blocks. It tracks where a `break`,
`continue` or `return` should go to.
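
For example (illustrative), a labelled `break` targets an outer entry of this
stack, while an unlabelled one targets the innermost breakable scope:

```
fn count() -> u32 {
    let mut n = 0;
    'outer: loop {
        loop {
            n += 1;
            if n > 10 {
                break 'outer; // targets the outer loop's breakable scope
            }
            if n % 2 == 0 {
                break;        // targets the innermost breakable scope
            }
        }
    }
    n
}
```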
*/

use std::mem;

use crate::build::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
use rustc_data_structures::fx::FxHashMap;
use rustc_hir::HirId;
use rustc_index::{IndexSlice, IndexVec};
use rustc_middle::middle::region;
use rustc_middle::mir::*;
use rustc_middle::thir::{ExprId, LintLevel};
use rustc_session::lint::Level;
use rustc_span::source_map::Spanned;
use rustc_span::{Span, DUMMY_SP};

#[derive(Debug)]
pub struct Scopes<'tcx> {
    scopes: Vec<Scope>,

    /// The current set of breakable scopes. See module comment for more details.
    breakable_scopes: Vec<BreakableScope<'tcx>>,

    /// The scope of the innermost if-then currently being lowered.
    if_then_scope: Option<IfThenScope>,

    /// Drops that need to be done on unwind paths. See the comment on
    /// [DropTree] for more details.
    unwind_drops: DropTree,

    /// Drops that need to be done on paths to the `CoroutineDrop` terminator.
    coroutine_drops: DropTree,
}

#[derive(Debug)]
struct Scope {
    /// The source scope this scope was created in.
    source_scope: SourceScope,

    /// The region span of this scope within source code.
    region_scope: region::Scope,

    /// Set of places to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData>,

    moved_locals: Vec<Local>,

    /// The drop index that will drop everything in and below this scope on an
    /// unwind path.
    cached_unwind_block: Option<DropIdx>,

    /// The drop index that will drop everything in and below this scope on a
    /// coroutine drop path.
    cached_coroutine_drop_block: Option<DropIdx>,
}

#[derive(Clone, Copy, Debug)]
struct DropData {
    /// The `Span` where drop obligation was incurred (typically where place was
    /// declared).
    source_info: SourceInfo,

    /// The local to drop.
    local: Local,

    /// Whether this is a value Drop or a StorageDead.
    kind: DropKind,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub(crate) enum DropKind {
    Value,
    Storage,
}

#[derive(Debug)]
struct BreakableScope<'tcx> {
    /// Region scope of the loop
    region_scope: region::Scope,
    /// The destination of the loop/block expression itself (i.e., where to put
    /// the result of a `break` or `return` expression)
    break_destination: Place<'tcx>,
    /// Drops that happen on the `break`/`return` path.
    break_drops: DropTree,
    /// Drops that happen on the `continue` path.
    continue_drops: Option<DropTree>,
}

#[derive(Debug)]
struct IfThenScope {
    /// The if-then scope or arm scope
    region_scope: region::Scope,
    /// Drops that happen on the `else` path.
    else_drops: DropTree,
}

/// The target of an expression that breaks out of a scope
#[derive(Clone, Copy, Debug)]
pub(crate) enum BreakableTarget {
    Continue(region::Scope),
    Break(region::Scope),
    Return,
}

rustc_index::newtype_index! {
    #[orderable]
    struct DropIdx {}
}

const ROOT_NODE: DropIdx = DropIdx::from_u32(0);

/// A tree of drops that we have deferred lowering. It's used for:
///
/// * Drops on unwind paths
/// * Drops on coroutine drop paths (when a suspended coroutine is dropped)
/// * Drops on return and loop exit paths
/// * Drops on the else path in an `if let` chain
///
/// Once no more nodes could be added to the tree, we lower it to MIR in one go
/// in `build_mir`.
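///
/// Conceptually (an illustrative sketch, not literal field contents), a path
/// such as "drop `y`, then drop `x`, then jump to the exit block" is stored as
/// a chain of nodes, each pointing at its parent, with the root node standing
/// for the exit block itself:
///
/// ```text
/// ROOT_NODE  <-  drop(x)  <-  drop(y)
/// ```
///
/// Sharing a common suffix of drops between several exits then just means
/// pointing two child nodes at the same parent.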
#[derive(Debug)]
struct DropTree {
    /// Drops in the tree.
    drops: IndexVec<DropIdx, (DropData, DropIdx)>,
    /// Map for finding the inverse of the `next_drop` relation:
    ///
    /// `previous_drops[(drops[i].1, drops[i].0.local, drops[i].0.kind)] == i`
    previous_drops: FxHashMap<(DropIdx, Local, DropKind), DropIdx>,
    /// Edges into the `DropTree` that need to be added once it's lowered.
    entry_points: Vec<(DropIdx, BasicBlock)>,
}

impl Scope {
    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    ///  * clang doesn't emit llvm.lifetime.end for C++ unwinding
    ///  * LLVM's memory dependency analysis can't handle it atm
    ///  * polluting the cleanup MIR with StorageDead creates
    ///    landing pads even though there are no actual destructors
    ///  * freeing up stack space has no effect during unwinding
    /// Note that for coroutines we do emit StorageDeads, for the
    /// use of optimizations in the MIR coroutine transform.
    fn needs_cleanup(&self) -> bool {
        self.drops.iter().any(|drop| match drop.kind {
            DropKind::Value => true,
            DropKind::Storage => false,
        })
    }

    fn invalidate_cache(&mut self) {
        self.cached_unwind_block = None;
        self.cached_coroutine_drop_block = None;
    }
}

/// A trait that determines how [DropTree] creates its blocks and
/// links to any entry nodes.
trait DropTreeBuilder<'tcx> {
    /// Create a new block for the tree. This should call either
    /// `cfg.start_new_block()` or `cfg.start_new_cleanup_block()`.
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock;

    /// Links a block outside the drop tree, `from`, to the block `to` inside
    /// the drop tree.
    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock);
}

impl DropTree {
    fn new() -> Self {
        // The root node of the tree doesn't represent a drop, but instead
        // represents the block in the tree that should be jumped to once all
        // of the required drops have been performed.
        let fake_source_info = SourceInfo::outermost(DUMMY_SP);
        let fake_data =
            DropData { source_info: fake_source_info, local: Local::MAX, kind: DropKind::Storage };
        let drop_idx = DropIdx::MAX;
        let drops = IndexVec::from_elem_n((fake_data, drop_idx), 1);
        Self { drops, entry_points: Vec::new(), previous_drops: FxHashMap::default() }
    }

    fn add_drop(&mut self, drop: DropData, next: DropIdx) -> DropIdx {
        let drops = &mut self.drops;
        *self
            .previous_drops
            .entry((next, drop.local, drop.kind))
            .or_insert_with(|| drops.push((drop, next)))
    }

    fn add_entry(&mut self, from: BasicBlock, to: DropIdx) {
        debug_assert!(to < self.drops.next_index());
        self.entry_points.push((to, from));
    }

    /// Builds the MIR for a given drop tree.
    ///
    /// `blocks` should have the same length as `self.drops`, and may have its
    /// first value set to some already existing block.
    fn build_mir<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        debug!("DropTree::build_mir(drops = {:#?})", self);
        assert_eq!(blocks.len(), self.drops.len());

        self.assign_blocks::<T>(cfg, blocks);
        self.link_blocks(cfg, blocks)
    }

    /// Assign blocks for all of the drops in the drop tree that need them.
    fn assign_blocks<'tcx, T: DropTreeBuilder<'tcx>>(
        &mut self,
        cfg: &mut CFG<'tcx>,
        blocks: &mut IndexVec<DropIdx, Option<BasicBlock>>,
    ) {
        // StorageDead statements can share blocks with each other and also with
        // a Drop terminator. We iterate through the drops to find which drops
        // need their own block.
        #[derive(Clone, Copy)]
        enum Block {
            // This drop is unreachable
            None,
            // This drop is only reachable through the `StorageDead` with the
            // specified index.
            Shares(DropIdx),
            // This drop has more than one way of being reached, or it is
            // branched to from outside the tree, or its predecessor is a
            // `Value` drop.
            Own,
        }

        let mut needs_block = IndexVec::from_elem(Block::None, &self.drops);
        if blocks[ROOT_NODE].is_some() {
            // In some cases (such as drops for `continue`) the root node
            // already has a block. In this case, make sure that we don't
            // override it.
            needs_block[ROOT_NODE] = Block::Own;
        }

        // Sort so that we only need to check the last value.
        let entry_points = &mut self.entry_points;
        entry_points.sort();

        for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
            if entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
                let block = *blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                needs_block[drop_idx] = Block::Own;
                while entry_points.last().is_some_and(|entry_point| entry_point.0 == drop_idx) {
                    let entry_block = entry_points.pop().unwrap().1;
                    T::link_entry_point(cfg, entry_block, block);
                }
            }
            match needs_block[drop_idx] {
                Block::None => continue,
                Block::Own => {
                    blocks[drop_idx].get_or_insert_with(|| T::make_block(cfg));
                }
                Block::Shares(pred) => {
                    blocks[drop_idx] = blocks[pred];
                }
            }
            if let DropKind::Value = drop_data.0.kind {
                needs_block[drop_data.1] = Block::Own;
            } else if drop_idx != ROOT_NODE {
                match &mut needs_block[drop_data.1] {
                    pred @ Block::None => *pred = Block::Shares(drop_idx),
                    pred @ Block::Shares(_) => *pred = Block::Own,
                    Block::Own => (),
                }
            }
        }

        debug!("assign_blocks: blocks = {:#?}", blocks);
        assert!(entry_points.is_empty());
    }

    fn link_blocks<'tcx>(
        &self,
        cfg: &mut CFG<'tcx>,
        blocks: &IndexSlice<DropIdx, Option<BasicBlock>>,
    ) {
        for (drop_idx, drop_data) in self.drops.iter_enumerated().rev() {
            let Some(block) = blocks[drop_idx] else { continue };
            match drop_data.0.kind {
                DropKind::Value => {
                    let terminator = TerminatorKind::Drop {
                        target: blocks[drop_data.1].unwrap(),
                        // The caller will handle this if needed.
                        unwind: UnwindAction::Terminate(UnwindTerminateReason::InCleanup),
                        place: drop_data.0.local.into(),
                        replace: false,
                    };
                    cfg.terminate(block, drop_data.0.source_info, terminator);
                }
                // Root nodes don't correspond to a drop.
                DropKind::Storage if drop_idx == ROOT_NODE => {}
                DropKind::Storage => {
                    let stmt = Statement {
                        source_info: drop_data.0.source_info,
                        kind: StatementKind::StorageDead(drop_data.0.local),
                    };
                    cfg.push(block, stmt);
                    let target = blocks[drop_data.1].unwrap();
                    if target != block {
                        // Diagnostics don't use this `Span` but debuginfo
                        // might. Since we don't want breakpoints to be placed
                        // here, especially when this is on an unwind path, we
                        // use `DUMMY_SP`.
                        let source_info = SourceInfo { span: DUMMY_SP, ..drop_data.0.source_info };
                        let terminator = TerminatorKind::Goto { target };
                        cfg.terminate(block, source_info, terminator);
                    }
                }
            }
        }
    }
}

impl<'tcx> Scopes<'tcx> {
    pub(crate) fn new() -> Self {
        Self {
            scopes: Vec::new(),
            breakable_scopes: Vec::new(),
            if_then_scope: None,
            unwind_drops: DropTree::new(),
            coroutine_drops: DropTree::new(),
        }
    }

    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
        debug!("push_scope({:?})", region_scope);
        self.scopes.push(Scope {
            source_scope: vis_scope,
            region_scope: region_scope.0,
            drops: vec![],
            moved_locals: vec![],
            cached_unwind_block: None,
            cached_coroutine_drop_block: None,
        });
    }

    fn pop_scope(&mut self, region_scope: (region::Scope, SourceInfo)) -> Scope {
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);
        scope
    }

    fn scope_index(&self, region_scope: region::Scope, span: Span) -> usize {
        self.scopes
            .iter()
            .rposition(|scope| scope.region_scope == region_scope)
            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope))
    }

    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    fn topmost(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }
}

impl<'a, 'tcx> Builder<'a, 'tcx> {
    // Adding and removing scopes
    // ==========================

    /// Start a breakable scope, which tracks where `continue`, `break` and
    /// `return` should branch to.
    pub(crate) fn in_breakable_scope<F>(
        &mut self,
        loop_block: Option<BasicBlock>,
        break_destination: Place<'tcx>,
        span: Span,
        f: F,
    ) -> BlockAnd<()>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> Option<BlockAnd<()>>,
    {
        let region_scope = self.scopes.topmost();
        let scope = BreakableScope {
            region_scope,
            break_destination,
            break_drops: DropTree::new(),
            continue_drops: loop_block.map(|_| DropTree::new()),
        };
        self.scopes.breakable_scopes.push(scope);
        let normal_exit_block = f(self);
        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        let break_block =
            self.build_exit_tree(breakable_scope.break_drops, region_scope, span, None);
        if let Some(drops) = breakable_scope.continue_drops {
            self.build_exit_tree(drops, region_scope, span, loop_block);
        }
        match (normal_exit_block, break_block) {
            (Some(block), None) | (None, Some(block)) => block,
            (None, None) => self.cfg.start_new_block().unit(),
            (Some(normal_block), Some(exit_block)) => {
                let target = self.cfg.start_new_block();
                let source_info = self.source_info(span);
                self.cfg.terminate(
                    unpack!(normal_block),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                self.cfg.terminate(
                    unpack!(exit_block),
                    source_info,
                    TerminatorKind::Goto { target },
                );
                target.unit()
            }
        }
    }

    /// Start an if-then scope which tracks drops for `if` expressions and `if`
    /// guards.
    ///
    /// For an if-let chain:
    ///
    /// if let Some(x) = a && let Some(y) = b && let Some(z) = c { ... }
    ///
    /// There are three possible ways the condition can be false and we may have
    /// to drop `x`, `x` and `y`, or neither depending on which binding fails.
    /// To handle this correctly we use a `DropTree` in a similar way to a
    /// `loop` expression and 'break' out on all of the 'else' paths.
    ///
    /// Notes:
    /// - We don't need to keep a stack of scopes in the `Builder` because the
    ///   'else' paths will only leave the innermost scope.
    /// - This is also used for match guards.
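    ///
    /// To make the three 'else' paths concrete (an illustrative sketch):
    ///
    /// ```ignore (illustrative)
    /// if let Some(x) = a        // if this fails: drop nothing
    ///     && let Some(y) = b    // if this fails: drop `x`
    ///     && let Some(z) = c    // if this fails: drop `x` and `y`
    /// { ... }
    /// ```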
    pub(crate) fn in_if_then_scope<F>(
        &mut self,
        region_scope: region::Scope,
        span: Span,
        f: F,
    ) -> (BasicBlock, BasicBlock)
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<()>,
    {
        let scope = IfThenScope { region_scope, else_drops: DropTree::new() };
        let previous_scope = mem::replace(&mut self.scopes.if_then_scope, Some(scope));

        let then_block = unpack!(f(self));

        let if_then_scope = mem::replace(&mut self.scopes.if_then_scope, previous_scope).unwrap();
        assert!(if_then_scope.region_scope == region_scope);

        let else_block = self
            .build_exit_tree(if_then_scope.else_drops, region_scope, span, None)
            .map_or_else(|| self.cfg.start_new_block(), |else_block_and| unpack!(else_block_and));

        (then_block, else_block)
    }

    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
    #[instrument(skip(self, f), level = "debug")]
    pub(crate) fn in_scope<F, R>(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        lint_level: LintLevel,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        let source_scope = self.source_scope;
        if let LintLevel::Explicit(current_hir_id) = lint_level {
            let parent_id =
                self.source_scopes[source_scope].local_data.as_ref().assert_crate_local().lint_root;
            self.maybe_new_source_scope(region_scope.1.span, None, current_hir_id, parent_id);
        }
        self.push_scope(region_scope);
        let mut block;
        let rv = unpack!(block = f(self));
        unpack!(block = self.pop_scope(region_scope, block));
        self.source_scope = source_scope;
        debug!(?block);
        block.and(rv)
    }

    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
    pub(crate) fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        self.scopes.push_scope(region_scope, self.source_scope);
    }

    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
    pub(crate) fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        mut block: BasicBlock,
    ) -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);

        block = self.leave_top_scope(block);

        self.scopes.pop_scope(region_scope);

        block.unit()
    }

    /// Sets up the drops for breaking from `block` to `target`.
    pub(crate) fn break_scope(
        &mut self,
        mut block: BasicBlock,
        value: Option<ExprId>,
        target: BreakableTarget,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let span = source_info.span;

        let get_scope_index = |scope: region::Scope| {
            // find the loop-scope by its `region::Scope`.
            self.scopes
                .breakable_scopes
                .iter()
                .rposition(|breakable_scope| breakable_scope.region_scope == scope)
                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
        };
        let (break_index, destination) = match target {
            BreakableTarget::Return => {
                let scope = &self.scopes.breakable_scopes[0];
                if scope.break_destination != Place::return_place() {
                    span_bug!(span, "`return` in item with no return scope");
                }
                (0, Some(scope.break_destination))
            }
            BreakableTarget::Break(scope) => {
                let break_index = get_scope_index(scope);
                let scope = &self.scopes.breakable_scopes[break_index];
                (break_index, Some(scope.break_destination))
            }
            BreakableTarget::Continue(scope) => {
                let break_index = get_scope_index(scope);
                (break_index, None)
            }
        };

        match (destination, value) {
            (Some(destination), Some(value)) => {
                debug!("stmt_expr Break val block_context.push(SubExpr)");
                self.block_context.push(BlockFrame::SubExpr);
                unpack!(block = self.expr_into_dest(destination, block, value));
                self.block_context.pop();
            }
            (Some(destination), None) => {
                self.cfg.push_assign_unit(block, source_info, destination, self.tcx)
            }
            (None, Some(_)) => {
                panic!("`return`, `become` and `break` with value and must have a destination")
            }
            (None, None) => {
                if self.tcx.sess.instrument_coverage() {
                    // Normally we wouldn't build any MIR in this case, but that makes it
                    // harder for coverage instrumentation to extract a relevant span for
                    // `continue` expressions. So here we inject a dummy statement with the
                    // desired span.
                    self.cfg.push_coverage_span_marker(block, source_info);
                }
            }
        }

        let region_scope = self.scopes.breakable_scopes[break_index].region_scope;
        let scope_index = self.scopes.scope_index(region_scope, span);
        let drops = if destination.is_some() {
            &mut self.scopes.breakable_scopes[break_index].break_drops
        } else {
            let Some(drops) = self.scopes.breakable_scopes[break_index].continue_drops.as_mut()
            else {
                self.tcx.dcx().span_delayed_bug(
                    source_info.span,
                    "unlabelled `continue` within labelled block",
                );
                self.cfg.terminate(block, source_info, TerminatorKind::Unreachable);

                return self.cfg.start_new_block().unit();
            };
            drops
        };

        let drop_idx = self.scopes.scopes[scope_index + 1..]
            .iter()
            .flat_map(|scope| &scope.drops)
            .fold(ROOT_NODE, |drop_idx, &drop| drops.add_drop(drop, drop_idx));

        drops.add_entry(block, drop_idx);

        // `build_drop_trees` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);

        self.cfg.start_new_block().unit()
    }

    pub(crate) fn break_for_else(
        &mut self,
        block: BasicBlock,
        target: region::Scope,
        source_info: SourceInfo,
    ) {
        let scope_index = self.scopes.scope_index(target, source_info.span);
        let if_then_scope = self
            .scopes
            .if_then_scope
            .as_mut()
            .unwrap_or_else(|| span_bug!(source_info.span, "no if-then scope found"));

        assert_eq!(if_then_scope.region_scope, target, "breaking to incorrect scope");

        let mut drop_idx = ROOT_NODE;
        let drops = &mut if_then_scope.else_drops;
        for scope in &self.scopes.scopes[scope_index + 1..] {
            for drop in &scope.drops {
                drop_idx = drops.add_drop(*drop, drop_idx);
            }
        }
        drops.add_entry(block, drop_idx);

        // `build_drop_trees` doesn't have access to our source_info, so we
        // create a dummy terminator now. `TerminatorKind::UnwindResume` is used
        // because MIR type checking will panic if it hasn't been overwritten.
        // (See `<ExitScopes as DropTreeBuilder>::link_entry_point`.)
        self.cfg.terminate(block, source_info, TerminatorKind::UnwindResume);
    }

    fn leave_top_scope(&mut self, block: BasicBlock) -> BasicBlock {
        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let needs_cleanup = self.scopes.scopes.last().is_some_and(|scope| scope.needs_cleanup());
        let is_coroutine = self.coroutine.is_some();
        let unwind_to = if needs_cleanup { self.diverge_cleanup() } else { DropIdx::MAX };

        let scope = self.scopes.scopes.last().expect("leave_top_scope called with no scopes");
        unpack!(build_scope_drops(
            &mut self.cfg,
            &mut self.scopes.unwind_drops,
            scope,
            block,
            unwind_to,
            is_coroutine && needs_cleanup,
            self.arg_count,
        ))
    }

    /// Possibly creates a new source scope if `current_root` and `parent_root`
    /// are different, or if -Zmaximal-hir-to-mir-coverage is enabled.
    pub(crate) fn maybe_new_source_scope(
        &mut self,
        span: Span,
        safety: Option<Safety>,
        current_id: HirId,
        parent_id: HirId,
    ) {
        let (current_root, parent_root) =
            if self.tcx.sess.opts.unstable_opts.maximal_hir_to_mir_coverage {
                // Some consumers of rustc need to map MIR locations back to HIR nodes. Currently
                // the only part of rustc that tracks MIR -> HIR is the
                // `SourceScopeLocalData::lint_root` field that tracks lint levels for MIR
                // locations. Normally the number of source scopes is limited to the set of nodes
                // with lint annotations. The -Zmaximal-hir-to-mir-coverage flag changes this
                // behavior to maximize the number of source scopes, increasing the granularity of
                // the MIR->HIR mapping.
                (current_id, parent_id)
            } else {
                // Use `maybe_lint_level_root_bounded` to avoid adding Hir dependencies on our
                // parents. We estimate the true lint roots here to avoid creating a lot of source
                // scopes.
                (
                    self.maybe_lint_level_root_bounded(current_id),
                    if parent_id == self.hir_id {
                        parent_id // this is very common
                    } else {
                        self.maybe_lint_level_root_bounded(parent_id)
                    },
                )
            };

        if current_root != parent_root {
            let lint_level = LintLevel::Explicit(current_root);
            self.source_scope = self.new_source_scope(span, lint_level, safety);
        }
    }

    /// Walks upwards from `orig_id` to find a node which might change lint levels with attributes.
    /// It stops at `self.hir_id` and just returns it if reached.
    fn maybe_lint_level_root_bounded(&mut self, orig_id: HirId) -> HirId {
        // This assertion lets us just store `ItemLocalId` in the cache, rather
        // than the full `HirId`.
        assert_eq!(orig_id.owner, self.hir_id.owner);

        let mut id = orig_id;
        let hir = self.tcx.hir();
        loop {
            if id == self.hir_id {
                // This is a moderately common case, mostly hit for previously unseen nodes.
                break;
            }

            if hir.attrs(id).iter().any(|attr| Level::from_attr(attr).is_some()) {
                // This is a rare case. It's for a node path that doesn't reach the root due to an
                // intervening lint level attribute. This result doesn't get cached.
                return id;
            }

            let next = self.tcx.parent_hir_id(id);
            if next == id {
                bug!("lint traversal reached the root of the crate");
            }
            id = next;

            // This lookup is just an optimization; it can be removed without affecting
            // functionality. It might seem strange to see this at the end of this loop, but the
            // `orig_id` passed in to this function is almost always previously unseen, for which a
            // lookup will be a miss. So we only do lookups for nodes up the parent chain, where
            // cache lookups have a very high hit rate.
            if self.lint_level_roots_cache.contains(id.local_id) {
                break;
            }
        }

        // `orig_id` traced to `self_id`; record this fact. If `orig_id` is a leaf node it will
        // rarely (never?) subsequently be searched for, but it's hard to know if that is the case.
        // The performance wins from the cache all come from caching non-leaf nodes.
        self.lint_level_roots_cache.insert(orig_id.local_id);
        self.hir_id
    }

    /// Creates a new source scope, nested in the current one.
    pub(crate) fn new_source_scope(
        &mut self,
        span: Span,
        lint_level: LintLevel,
        safety: Option<Safety>,
    ) -> SourceScope {
        let parent = self.source_scope;
        debug!(
            "new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
            span,
            lint_level,
            safety,
            parent,
            self.source_scopes.get(parent)
        );
        let scope_local_data = SourceScopeLocalData {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().lint_root
            },
            safety: safety.unwrap_or_else(|| {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().safety
            }),
        };
        self.source_scopes.push(SourceScopeData {
            span,
            parent_scope: Some(parent),
            inlined: None,
            inlined_parent_scope: None,
            local_data: ClearCrossCrate::Set(scope_local_data),
        })
    }

    /// Given a span and the current source scope, make a SourceInfo.
    pub(crate) fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo { span, scope: self.source_scope }
    }

    // Finding scopes
    // ==============

    /// Returns the scope that we should use as the lifetime of an
    /// operand. Basically, an operand must live until it is consumed.
    /// This is similar to, but not quite the same as, the temporary
    /// scope (which can be larger or smaller).
    ///
    /// Consider:
    /// ```ignore (illustrative)
    /// let x = foo(bar(X, Y));
    /// ```
    /// We wish to pop the storage for X and Y after `bar()` is
    /// called, not after the whole `let` is completed.
    ///
    /// As another example, if the second argument diverges:
    /// ```ignore (illustrative)
    /// foo(Box::new(2), panic!())
    /// ```
    /// We would allocate the box but then free it on the unwinding
    /// path; we would also emit a free on the 'success' path from
    /// panic, but that will turn out to be removed as dead-code.
    pub(crate) fn local_scope(&self) -> region::Scope {
        self.scopes.topmost()
    }

    // Scheduling drops
    // ================

    pub(crate) fn schedule_drop_storage_and_value(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
    ) {
        self.schedule_drop(span, region_scope, local, DropKind::Storage);
        self.schedule_drop(span, region_scope, local, DropKind::Value);
    }

    /// Indicates that `place` should be dropped on exit from `region_scope`.
    ///
    /// When called with `DropKind::Storage`, `place` shouldn't be the return
    /// place, or a function parameter.
    pub(crate) fn schedule_drop(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
        drop_kind: DropKind,
    ) {
        let needs_drop = match drop_kind {
            DropKind::Value => {
                if !self.local_decls[local].ty.needs_drop(self.tcx, self.param_env) {
                    return;
                }
                true
            }
            DropKind::Storage => {
                if local.index() <= self.arg_count {
                    span_bug!(
                        span,
                        "`schedule_drop` called with local {:?} and arg_count {}",
                        local,
                        self.arg_count,
                    )
                }
                false
            }
        };

        // When building drops, we try to cache chains of drops to reduce the
        // number of `DropTree::add_drop` calls. This, however, means that
        // whenever we add a drop into a scope which already had some entries
        // in the drop tree built (and thus, cached) for it, we must invalidate
        // all caches which might branch into the scope which had a drop just
        // added to it. This is necessary, because otherwise some other code
        // might use the cache to branch into already built chain of drops,
        // essentially ignoring the newly added drop.
        //
        // For example, consider there are two scopes with a drop in each. These
        // are built and thus the caches are filled:
        //
        // +--------------------------------------------------------+
        // | +---------------------------------+                    |
        // | | +--------+     +-------------+  |  +---------------+ |
        // | | | return | <-+ | drop(outer) | <-+ | drop(middle)  | |
        // | | +--------+     +-------------+  |  +---------------+ |
        // | +------------|outer_scope cache|--+                    |
        // +------------------------------|middle_scope cache|------+
        //
        // Now, a new, inner-most scope is added along with a new drop into
        // both inner-most and outer-most scopes:
        //
        // +------------------------------------------------------------+
        // | +----------------------------------+                       |
        // | | +--------+      +-------------+  |   +---------------+   | +-------------+
        // | | | return | <+   | drop(new)   | <-+  | drop(middle)  | <--+| drop(inner) |
        // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
        // | |             +-+ +-------------+  |                       |
        // | +---|invalid outer_scope cache|----+                       |
        // +----=----------------|invalid middle_scope cache|-----------+
        //
        // If, when adding `drop(new)` we do not invalidate the cached blocks for both
        // outer_scope and middle_scope, then, when building drops for the inner (right-most)
        // scope, the old, cached blocks, without `drop(new)` will get used, producing the
        // wrong results.
        //
        // Note that this code iterates scopes from the inner-most to the outer-most,
        // invalidating caches of each scope visited. This way the bare minimum of the
        // caches gets invalidated. i.e., if a new drop is added into the middle scope, the
        // cache of outer scope stays intact.
        //
        // Since we only cache drops for the unwind path and the coroutine drop
        // path, we only need to invalidate the cache for drops that happen on
        // the unwind or coroutine drop paths. This means that for
        // non-coroutines we don't need to invalidate caches for `DropKind::Storage`.
        let invalidate_caches = needs_drop || self.coroutine.is_some();
        for scope in self.scopes.scopes.iter_mut().rev() {
            if invalidate_caches {
                scope.invalidate_cache();
            }

            if scope.region_scope == region_scope {
                let region_scope_span = region_scope.span(self.tcx, self.region_scope_tree);
                // Attribute scope exit drops to scope's closing brace.
                let scope_end = self.tcx.sess.source_map().end_point(region_scope_span);

                scope.drops.push(DropData {
                    source_info: SourceInfo { span: scope_end, scope: scope.source_scope },
                    local,
                    kind: drop_kind,
                });

                return;
            }
        }

        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
    }
    /// Indicates that the "local operand" stored in `local` is
    /// *moved* at some point during execution (see `local_scope` for
    /// more information about what a "local operand" is -- in short,
    /// it's an intermediate operand created as part of preparing some
    /// MIR instruction). We use this information to suppress
    /// redundant drops on the non-unwind paths. This results in less
    /// MIR, but also avoids spurious borrow check errors
    /// (c.f. #64391).
    ///
    /// Example: when compiling the call to `foo` here:
    ///
    /// ```ignore (illustrative)
    /// foo(bar(), ...)
    /// ```
    ///
    /// we would evaluate `bar()` to an operand `_X`. We would also
    /// schedule `_X` to be dropped when the expression scope for
    /// `foo(bar())` is exited. This is relevant, for example, if
    /// evaluating a later argument unwinds (it ensures that `_X` gets
    /// dropped). However, if no unwind occurs, then `_X` will be
    /// unconditionally consumed by the `call`:
    ///
    /// ```ignore (illustrative)
    /// bb {
    ///     ...
    ///     _R = CALL(foo, _X, ...)
    /// }
    /// ```
    ///
    /// However, `_X` is still registered to be dropped, and so if we
    /// do nothing else, we would generate a `DROP(_X)` that occurs
    /// after the call. This will later be optimized out by the
    /// drop-elaboration code, but in the meantime it can lead to
    /// spurious borrow-check errors -- the problem, ironically, is
    /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
    /// that it creates. See #64391 for an example.
    pub(crate) fn record_operands_moved(&mut self, operands: &[Spanned<Operand<'tcx>>]) {
        let local_scope = self.local_scope();
        let scope = self.scopes.scopes.last_mut().unwrap();

        assert_eq!(scope.region_scope, local_scope, "local scope is not the topmost scope!",);

        // look for moves of a local variable, like `MOVE(_X)`
        let locals_moved = operands.iter().flat_map(|operand| match operand.node {
            Operand::Copy(_) | Operand::Constant(_) => None,
            Operand::Move(place) => place.as_local(),
        });

        for local in locals_moved {
            // check if we have a Drop for this operand and -- if so
            // -- add it to the list of moved operands. Note that this
            // local might not have been an operand created for this
            // call, it could come from other places too.
            if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
                scope.moved_locals.push(local);
            }
        }
    }
    // Other
    // =====

    /// Returns the [DropIdx] for the innermost drop if the function unwound at
    /// this point. The `DropIdx` will be created if it doesn't already exist.
    fn diverge_cleanup(&mut self) -> DropIdx {
        // It is okay to use a dummy span here, because getting the scope index
        // for the topmost scope must always succeed.
        self.diverge_cleanup_target(self.scopes.topmost(), DUMMY_SP)
    }
    /// This is similar to [diverge_cleanup](Self::diverge_cleanup) except its target is set to
    /// some ancestor scope instead of the current scope.
    /// It is possible to unwind to some ancestor scope if some drop panics as
    /// the program breaks out of an if-then scope.
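    ///
    /// An illustrative sketch (the helper name is made up for the example):
    ///
    /// ```ignore (illustrative)
    /// let Some(x) = make_opt() else { return };
    /// // The temporary returned by `make_opt()` is dropped while control
    /// // breaks out to the `else` block; if that drop panics, unwinding
    /// // continues from an ancestor scope's cleanup rather than from the
    /// // scope being exited.
    /// ```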
    fn diverge_cleanup_target(&mut self, target_scope: region::Scope, span: Span) -> DropIdx {
        let target = self.scopes.scope_index(target_scope, span);
        let (uncached_scope, mut cached_drop) = self.scopes.scopes[..=target]
            .iter()
            .enumerate()
            .rev()
            .find_map(|(scope_idx, scope)| {
                scope.cached_unwind_block.map(|cached_block| (scope_idx + 1, cached_block))
            })
            .unwrap_or((0, ROOT_NODE));

        if uncached_scope > target {
            return cached_drop;
        }

        let is_coroutine = self.coroutine.is_some();
        for scope in &mut self.scopes.scopes[uncached_scope..=target] {
            for drop in &scope.drops {
                if is_coroutine || drop.kind == DropKind::Value {
                    cached_drop = self.scopes.unwind_drops.add_drop(*drop, cached_drop);
                }
            }
            scope.cached_unwind_block = Some(cached_drop);
        }

        cached_drop
    }
    /// Prepares to create a path that performs all required cleanup for a
    /// terminator that can unwind at the given basic block.
    ///
    /// This path terminates in Resume. The path isn't created until after all
    /// of the non-unwind paths in this item have been lowered.
    pub(crate) fn diverge_from(&mut self, start: BasicBlock) {
        debug_assert!(
            matches!(
                self.cfg.block_data(start).terminator().kind,
                TerminatorKind::Assert { .. }
                    | TerminatorKind::Call { .. }
                    | TerminatorKind::Drop { .. }
                    | TerminatorKind::FalseUnwind { .. }
                    | TerminatorKind::InlineAsm { .. }
            ),
            "diverge_from called on block with terminator that cannot unwind."
        );

        let next_drop = self.diverge_cleanup();
        self.scopes.unwind_drops.add_entry(start, next_drop);
    }
    /// Sets up a path that performs all required cleanup for dropping a
    /// coroutine, starting from the given block that ends in
    /// [TerminatorKind::Yield].
    ///
    /// This path terminates in CoroutineDrop.
    pub(crate) fn coroutine_drop_cleanup(&mut self, yield_block: BasicBlock) {
        debug_assert!(
            matches!(
                self.cfg.block_data(yield_block).terminator().kind,
                TerminatorKind::Yield { .. }
            ),
            "coroutine_drop_cleanup called on block with non-yield terminator."
        );
        let (uncached_scope, mut cached_drop) = self
            .scopes
            .scopes
            .iter()
            .enumerate()
            .rev()
            .find_map(|(scope_idx, scope)| {
                scope.cached_coroutine_drop_block.map(|cached_block| (scope_idx + 1, cached_block))
            })
            .unwrap_or((0, ROOT_NODE));

        for scope in &mut self.scopes.scopes[uncached_scope..] {
            for drop in &scope.drops {
                cached_drop = self.scopes.coroutine_drops.add_drop(*drop, cached_drop);
            }
            scope.cached_coroutine_drop_block = Some(cached_drop);
        }

        self.scopes.coroutine_drops.add_entry(yield_block, cached_drop);
    }
    /// Utility function for *non*-scope code to build its own drops.
    /// Force a drop at this point in the MIR by creating a new block.
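    ///
    /// A rough sketch of the CFG this builds (illustrative, not exact MIR):
    ///
    /// ```ignore (illustrative)
    /// block:         Drop(place, replace) -> [return: assign, unwind: assign_unwind]
    /// assign:        place = value; ...
    /// assign_unwind: place = value; ... // cleanup block
    /// ```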
    pub(crate) fn build_drop_and_replace(
        &mut self,
        block: BasicBlock,
        span: Span,
        place: Place<'tcx>,
        value: Rvalue<'tcx>,
    ) -> BlockAnd<()> {
        let source_info = self.source_info(span);

        // create the new block for the assignment
        let assign = self.cfg.start_new_block();
        self.cfg.push_assign(assign, source_info, place, value.clone());

        // create the new block for the assignment in the case of unwinding
        let assign_unwind = self.cfg.start_new_cleanup_block();
        self.cfg.push_assign(assign_unwind, source_info, place, value.clone());

        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::Drop {
                place,
                target: assign,
                unwind: UnwindAction::Cleanup(assign_unwind),
                replace: true,
            },
        );
        self.diverge_from(block);

        assign.unit()
    }
    /// Creates an `Assert` terminator and returns the success block.
    /// If the boolean condition operand is not the expected value,
    /// a runtime panic will be caused with the given message.
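    ///
    /// A rough sketch of the generated terminator (illustrative, not exact MIR):
    ///
    /// ```ignore (illustrative)
    /// block: assert(cond == expected, msg) -> [success: success_block, unwind: ...]
    /// ```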
    pub(crate) fn assert(
        &mut self,
        block: BasicBlock,
        cond: Operand<'tcx>,
        expected: bool,
        msg: AssertMessage<'tcx>,
        span: Span,
    ) -> BasicBlock {
        let source_info = self.source_info(span);
        let success_block = self.cfg.start_new_block();

        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::Assert {
                cond,
                expected,
                msg: Box::new(msg),
                target: success_block,
                unwind: UnwindAction::Continue,
            },
        );
        self.diverge_from(block);

        success_block
    }
    /// Unschedules any drops in the top scope.
    ///
    /// This is only needed for `match` arm scopes, because they have one
    /// entrance per pattern, but only one exit.
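    ///
    /// For example (illustrative; the arm body is a stand-in):
    ///
    /// ```ignore (illustrative)
    /// match opt {
    ///     // Two entrances into this arm's scope (one per `|` alternative),
    ///     // but a single exit at the end of the arm body.
    ///     Some(0) | None => body(),
    ///     _ => {}
    /// }
    /// ```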
    pub(crate) fn clear_top_scope(&mut self, region_scope: region::Scope) {
        let top_scope = self.scopes.scopes.last_mut().unwrap();

        assert_eq!(top_scope.region_scope, region_scope);

        top_scope.drops.clear();
        top_scope.invalidate_cache();
    }
}
/// Builds drops for `pop_scope` and `leave_top_scope`.
fn build_scope_drops<'tcx>(
    cfg: &mut CFG<'tcx>,
    unwind_drops: &mut DropTree,
    scope: &Scope,
    mut block: BasicBlock,
    mut unwind_to: DropIdx,
    storage_dead_on_unwind: bool,
    arg_count: usize,
) -> BlockAnd<()> {
    debug!("build_scope_drops({:?} -> {:?})", block, scope);

    // Build up the drops in evaluation order. The end result will
    // look like:
    //
    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
    //        |                    |                  |
    //        :                    |                  |
    //                             V                  V
    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
    //
    // The horizontal arrows represent the execution path when the drops return
    // successfully. The downwards arrows represent the execution path when the
    // drops panic (panicking while unwinding will abort, so there's no need for
    // another set of arrows).
    //
    // For coroutines, we unwind from a drop on a local to its StorageDead
    // statement. For other functions we don't worry about StorageDead. The
    // drops for the unwind path should have already been generated by
    // `diverge_cleanup_gen`.
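    // As an illustrative sketch: for a scope that scheduled drops for `_1` and
    // then `_2`, iterating `scope.drops` in reverse emits `Drop(_2)` followed by
    // `Drop(_1)` on the success path, while each drop's block is registered as an
    // entry into the unwind drop tree (the actual unwind edges are wired up later
    // when that tree is built).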
    for drop_data in scope.drops.iter().rev() {
        let source_info = drop_data.source_info;
        let local = drop_data.local;

        match drop_data.kind {
            DropKind::Value => {
                // `unwind_to` should drop the value that we're about to
                // schedule. If dropping this value panics, then we continue
                // with the *next* value on the unwind path.
                debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
                debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
                unwind_to = unwind_drops.drops[unwind_to].1;

                // If the operand has been moved, and we are not on an unwind
                // path, then don't generate the drop. (We only take this into
                // account for non-unwind paths so as not to disturb the
                // caching mechanism.)
                if scope.moved_locals.iter().any(|&o| o == local) {
                    continue;
                }

                unwind_drops.add_entry(block, unwind_to);

                let next = cfg.start_new_block();
                cfg.terminate(
                    block,
                    source_info,
                    TerminatorKind::Drop {
                        place: local.into(),
                        target: next,
                        unwind: UnwindAction::Continue,
                        replace: false,
                    },
                );
                block = next;
            }
            DropKind::Storage => {
                if storage_dead_on_unwind {
                    debug_assert_eq!(unwind_drops.drops[unwind_to].0.local, drop_data.local);
                    debug_assert_eq!(unwind_drops.drops[unwind_to].0.kind, drop_data.kind);
                    unwind_to = unwind_drops.drops[unwind_to].1;
                }
                // Only temps and vars need their storage dead.
                assert!(local.index() > arg_count);
                cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) });
            }
        }
    }
    block.unit()
}
impl<'a, 'tcx: 'a> Builder<'a, 'tcx> {
    /// Build a drop tree for a breakable scope.
    ///
    /// If `continue_block` is `Some`, then the tree is for `continue` inside a
    /// loop. Otherwise this is for `break` or `return`.
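    ///
    /// For example (illustrative): for `loop { let x = make(); if cond { break; } }`,
    /// the `break` exits of the loop get a drop tree that drops `x` before leaving
    /// the loop; a `continue` in the same loop would use a separate tree whose root
    /// block is the one passed in `continue_block`.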
    fn build_exit_tree(
        &mut self,
        mut drops: DropTree,
        else_scope: region::Scope,
        span: Span,
        continue_block: Option<BasicBlock>,
    ) -> Option<BlockAnd<()>> {
        let mut blocks = IndexVec::from_elem(None, &drops.drops);
        blocks[ROOT_NODE] = continue_block;

        drops.build_mir::<ExitScopes>(&mut self.cfg, &mut blocks);
        let is_coroutine = self.coroutine.is_some();

        // Link the exit drop tree to unwind drop tree.
        if drops.drops.iter().any(|(drop, _)| drop.kind == DropKind::Value) {
            let unwind_target = self.diverge_cleanup_target(else_scope, span);
            let mut unwind_indices = IndexVec::from_elem_n(unwind_target, 1);
            for (drop_idx, drop_data) in drops.drops.iter_enumerated().skip(1) {
                match drop_data.0.kind {
                    DropKind::Storage => {
                        if is_coroutine {
                            let unwind_drop = self
                                .scopes
                                .unwind_drops
                                .add_drop(drop_data.0, unwind_indices[drop_data.1]);
                            unwind_indices.push(unwind_drop);
                        } else {
                            unwind_indices.push(unwind_indices[drop_data.1]);
                        }
                    }
                    DropKind::Value => {
                        let unwind_drop = self
                            .scopes
                            .unwind_drops
                            .add_drop(drop_data.0, unwind_indices[drop_data.1]);
                        self.scopes
                            .unwind_drops
                            .add_entry(blocks[drop_idx].unwrap(), unwind_indices[drop_data.1]);
                        unwind_indices.push(unwind_drop);
                    }
                }
            }
        }
        blocks[ROOT_NODE].map(BasicBlock::unit)
    }
    /// Build the unwind and coroutine drop trees.
    pub(crate) fn build_drop_trees(&mut self) {
        if self.coroutine.is_some() {
            self.build_coroutine_drop_trees();
        } else {
            Self::build_unwind_tree(
                &mut self.cfg,
                &mut self.scopes.unwind_drops,
                self.fn_span,
                &mut None,
            );
        }
    }
    fn build_coroutine_drop_trees(&mut self) {
        // Build the drop tree for dropping the coroutine while it's suspended.
        let drops = &mut self.scopes.coroutine_drops;
        let cfg = &mut self.cfg;
        let fn_span = self.fn_span;
        let mut blocks = IndexVec::from_elem(None, &drops.drops);
        drops.build_mir::<CoroutineDrop>(cfg, &mut blocks);
        if let Some(root_block) = blocks[ROOT_NODE] {
            cfg.terminate(
                root_block,
                SourceInfo::outermost(fn_span),
                TerminatorKind::CoroutineDrop,
            );
        }

        // Build the drop tree for unwinding in the normal control flow paths.
        let resume_block = &mut None;
        let unwind_drops = &mut self.scopes.unwind_drops;
        Self::build_unwind_tree(cfg, unwind_drops, fn_span, resume_block);

        // Build the drop tree for unwinding when dropping a suspended
        // coroutine.
        //
        // This is a different tree to the standard unwind paths here to
        // prevent drop elaboration from creating drop flags that would have
        // to be captured by the coroutine. I'm not sure how important this
        // optimization is, but it is here.
        for (drop_idx, drop_data) in drops.drops.iter_enumerated() {
            if let DropKind::Value = drop_data.0.kind {
                debug_assert!(drop_data.1 < drops.drops.next_index());
                drops.entry_points.push((drop_data.1, blocks[drop_idx].unwrap()));
            }
        }
        Self::build_unwind_tree(cfg, drops, fn_span, resume_block);
    }
    fn build_unwind_tree(
        cfg: &mut CFG<'tcx>,
        drops: &mut DropTree,
        fn_span: Span,
        resume_block: &mut Option<BasicBlock>,
    ) {
        let mut blocks = IndexVec::from_elem(None, &drops.drops);
        blocks[ROOT_NODE] = *resume_block;
        drops.build_mir::<Unwind>(cfg, &mut blocks);
        if let (None, Some(resume)) = (*resume_block, blocks[ROOT_NODE]) {
            cfg.terminate(resume, SourceInfo::outermost(fn_span), TerminatorKind::UnwindResume);

            *resume_block = blocks[ROOT_NODE];
        }
    }
}
// DropTreeBuilder implementations.

struct ExitScopes;

impl<'tcx> DropTreeBuilder<'tcx> for ExitScopes {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_block()
    }
    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        // There should be an existing terminator with real source info and a
        // dummy TerminatorKind. Replace it with a proper goto.
        // (The dummy is added by `break_scope` and `break_for_else`.)
        let term = cfg.block_data_mut(from).terminator_mut();
        if let TerminatorKind::UnwindResume = term.kind {
            term.kind = TerminatorKind::Goto { target: to };
        } else {
            span_bug!(term.source_info.span, "unexpected dummy terminator kind: {:?}", term.kind);
        }
    }
}
struct CoroutineDrop;

impl<'tcx> DropTreeBuilder<'tcx> for CoroutineDrop {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_block()
    }
    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        let term = cfg.block_data_mut(from).terminator_mut();
        if let TerminatorKind::Yield { ref mut drop, .. } = term.kind {
            *drop = Some(to);
        } else {
            span_bug!(
                term.source_info.span,
                "cannot enter coroutine drop tree from {:?}",
                term.kind
            )
        }
    }
}
struct Unwind;

impl<'tcx> DropTreeBuilder<'tcx> for Unwind {
    fn make_block(cfg: &mut CFG<'tcx>) -> BasicBlock {
        cfg.start_new_cleanup_block()
    }
    fn link_entry_point(cfg: &mut CFG<'tcx>, from: BasicBlock, to: BasicBlock) {
        let term = &mut cfg.block_data_mut(from).terminator_mut();
        match &mut term.kind {
            TerminatorKind::Drop { unwind, .. } => {
                if let UnwindAction::Cleanup(unwind) = *unwind {
                    let source_info = term.source_info;
                    cfg.terminate(unwind, source_info, TerminatorKind::Goto { target: to });
                } else {
                    *unwind = UnwindAction::Cleanup(to);
                }
            }
            TerminatorKind::FalseUnwind { unwind, .. }
            | TerminatorKind::Call { unwind, .. }
            | TerminatorKind::Assert { unwind, .. }
            | TerminatorKind::InlineAsm { unwind, .. } => {
                *unwind = UnwindAction::Cleanup(to);
            }
            TerminatorKind::Goto { .. }
            | TerminatorKind::SwitchInt { .. }
            | TerminatorKind::UnwindResume
            | TerminatorKind::UnwindTerminate(_)
            | TerminatorKind::Return
            | TerminatorKind::Unreachable
            | TerminatorKind::Yield { .. }
            | TerminatorKind::CoroutineDrop
            | TerminatorKind::FalseEdge { .. } => {
                span_bug!(term.source_info.span, "cannot unwind from {:?}", term.kind)
            }
        }
    }
}