/*!
Managing the scope stack. The scopes are tied to lexical scopes, so as
we descend the THIR, we push a scope on the stack, build its
contents, and then pop it off. Every scope is named by a
`region::Scope`.

### SEME Regions

When pushing a new scope, we record the current point in the graph (a
basic block); this marks the entry to the scope. We then generate more
stuff in the control-flow graph. Whenever the scope is exited, either
via a `break` or `return` or just by fallthrough, that marks an exit
from the scope. Each lexical scope thus corresponds to a single-entry,
multiple-exit (SEME) region in the control-flow graph.
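
For example, in the following fragment (a hypothetical illustration, not
compiler code; `cond` and `use_it` are assumed), the scope of `x` has a
single entry but two exits:

```
{
    let x = String::new();
    if cond() {
        return;   // exit 1: `x` is dropped on this edge
    }
    use_it(&x);
}                 // exit 2: `x` is dropped at the closing brace
```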

For now, we keep a mapping from each `region::Scope` to its
corresponding SEME region for later reference (see caveat in next
paragraph); later stages of the builder look scopes up by their
`region::Scope`. Eventually, when we shift to non-lexical lifetimes,
there should be no need to remember this mapping.

### Not so SEME Regions

In the course of building matches, it sometimes happens that certain code
(namely guards) gets executed multiple times. This means that the lexical
scope may in fact correspond to multiple, disjoint SEME regions. So in fact
our mapping is from one scope to a vector of SEME regions.

Also in matches, the scopes assigned to arms are not even SEME regions! Each
arm has a single region with one entry for each pattern. We manually
manipulate the scheduled drops in this scope to avoid dropping things multiple
times, although drop elaboration would clean this up for value drops.

### Drops

The primary purpose for scopes is to insert drops: while building
the contents, we also accumulate places that need to be dropped upon
exit from each scope. This is done by calling `schedule_drop`. Once a
drop is scheduled, whenever we branch out we will insert drops of all
those places onto the outgoing edge. Note that we don't know the full
set of scheduled drops up front, and so whenever we exit from the
scope we only drop the values scheduled thus far. For example, consider
the scope S corresponding to this loop:

```
# let cond = true;
loop {
    let x = ..;
    if cond { break; }
    let y = ..;
}
```

When processing the `let x`, we will add one drop to the scope for
`x`. The break will then insert a drop for `x`. When we process `let
y`, we will add another drop (in fact, to a subscope, but let's ignore
that for now); any later drops would also drop `y`.
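
Roughly, the `break` edge out of the loop above ends up carrying only the
drop scheduled so far (a sketch of the generated control flow, not exact
MIR syntax):

```
bb_break: {
    drop(x);              // only `x` is scheduled when the break is built
    goto -> bb_exit;
}
```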

### Early exit

There are numerous "normal" ways to early exit a scope: `break`,
`continue`, `return` (panics are handled separately). Whenever an
early exit occurs, the method `exit_scope` is called. It is given the
current point in execution where the early exit occurs, as well as the
scope you want to branch to (note that all early exits go to some
other enclosing scope). `exit_scope` will record this exit point and
also add all drops.
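
For example (a hypothetical snippet; `make_guard` and `done` are assumed,
with `make_guard` returning a value that has a destructor), every
early-exit edge receives the drops scheduled so far:

```
loop {
    let guard = make_guard();
    if done() {
        break;    // `exit_scope` inserts `drop(guard)` on this edge
    }
}
```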

Panics are handled in a similar fashion, except that a panic always
returns out to the `DIVERGE_BLOCK`. To trigger a panic, simply call
`panic(p)` with the current point `p`. Or else you can call
`diverge_cleanup`, which will produce a block that you can branch to
which does the appropriate cleanup and then diverges. `panic(p)`
simply calls `diverge_cleanup()` and adds an edge from `p` to the
result.
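
In other words, `panic(p)` is roughly equivalent to the following sketch
(expressed with this module's own helpers; the exact code differs):

```
let cleanup = self.diverge_cleanup();   // block that runs drops, then diverges
self.cfg.goto(p, source_info, cleanup); // edge from `p` into the cleanup path
```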

### Loop scopes

In addition to the normal scope stack, we track a loop scope stack
that contains only loops. It tracks where a `break` or `continue`
should branch to.
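
For example, with labeled loops (a hypothetical snippet), `break 'outer`
consults the loop scope stack to find the scope of the outer loop:

```
'outer: loop {
    loop {
        if cond() {
            break 'outer; // targets the outer loop's breakable scope
        }
    }
}
```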

*/

use crate::build::{BlockAnd, BlockAndExtension, BlockFrame, Builder, CFG};
use crate::thir::{Expr, ExprRef, LintLevel};
use rustc_data_structures::fx::FxHashMap;
use rustc_hir as hir;
use rustc_hir::GeneratorKind;
use rustc_middle::middle::region;
use rustc_middle::mir::*;
use rustc_span::{Span, DUMMY_SP};
use std::collections::hash_map::Entry;
use std::mem;

#[derive(Debug)]
struct Scope {
    /// The source scope this scope was created in.
    source_scope: SourceScope,

    /// The region scope corresponding to this scope.
    region_scope: region::Scope,

    /// The span of that `region_scope` within the source code.
    region_scope_span: Span,

    /// Set of places to drop when exiting this scope. This starts
    /// out empty but grows as variables are declared during the
    /// building process. This is a stack, so we always drop from the
    /// end of the vector (top of the stack) first.
    drops: Vec<DropData>,

    /// Locals whose value has been moved out and therefore need no value
    /// drop on the non-unwind paths (see `record_operands_moved`).
    moved_locals: Vec<Local>,

    /// The cache for drop chain on "normal" exit into a particular BasicBlock.
    cached_exits: FxHashMap<(BasicBlock, region::Scope), BasicBlock>,

    /// The cache for drop chain on "generator drop" exit.
    cached_generator_drop: Option<BasicBlock>,

    /// The cache for drop chain on "unwind" exit.
    cached_unwind: CachedBlock,
}

#[derive(Debug, Default)]
crate struct Scopes<'tcx> {
    scopes: Vec<Scope>,

    /// The current set of breakable scopes. See module comment for more details.
    breakable_scopes: Vec<BreakableScope<'tcx>>,
}

#[derive(Debug)]
struct DropData {
    /// The span where the drop obligation was incurred (typically where the
    /// place was declared).
    span: Span,

    /// The local to drop.
    local: Local,

    /// Whether this is a value Drop or a StorageDead.
    kind: DropKind,

    /// The cached blocks for unwinds.
    cached_block: CachedBlock,
}

#[derive(Debug, Default, Clone, Copy)]
struct CachedBlock {
    /// The cached block for the cleanups-on-diverge path. This block
    /// contains code to run the current drop and all the preceding
    /// drops (i.e., those having lower index in the `Scope`'s drop
    /// array).
    unwind: Option<BasicBlock>,

    /// The cached block for unwinds during the cleanups-on-generator-drop path.
    ///
    /// This is split from the standard unwind path here to prevent drop
    /// elaboration from creating drop flags that would have to be captured
    /// by the generator. I'm not sure how important this optimization is,
    /// but it is here.
    generator_drop: Option<BasicBlock>,
}

#[derive(Debug, PartialEq, Eq)]
pub(crate) enum DropKind {
    Value,
    Storage,
}

#[derive(Clone, Debug)]
struct BreakableScope<'tcx> {
    /// Region scope of the loop or block.
    region_scope: region::Scope,

    /// Where the body of the loop begins. `None` if this is a breakable
    /// block rather than a loop, which `continue` cannot target.
    continue_block: Option<BasicBlock>,

    /// Block to branch into when the loop or block terminates (either by being
    /// `break`-en out from, or by having its condition become false).
    break_block: BasicBlock,

    /// The destination of the loop/block expression itself (i.e., where to put
    /// the result of a `break` expression).
    break_destination: Place<'tcx>,
}

/// The target of an expression that breaks out of a scope.
#[derive(Clone, Copy, Debug)]
crate enum BreakableTarget {
    Continue(region::Scope),
    Break(region::Scope),
    Return,
}

impl CachedBlock {
    /// Clears both cached blocks.
    fn invalidate(&mut self) {
        *self = CachedBlock::default();
    }

    /// Returns the cached block for the generator-drop or unwind path.
    fn get(&self, generator_drop: bool) -> Option<BasicBlock> {
        if generator_drop { self.generator_drop } else { self.unwind }
    }

    /// Returns a mutable reference to the cached block for the chosen path.
    fn ref_mut(&mut self, generator_drop: bool) -> &mut Option<BasicBlock> {
        if generator_drop { &mut self.generator_drop } else { &mut self.unwind }
    }
}

impl Scope {
    /// Invalidates all the cached blocks in the scope.
    ///
    /// Should always be run for all inner scopes when a drop is pushed into some scope enclosing a
    /// larger extent of code.
    ///
    /// `storage_only` controls whether to invalidate only drop paths that run `StorageDead`.
    /// `this_scope_only` controls whether to invalidate only drop paths that refer to the current
    /// top-of-scope (as opposed to dependent scopes).
    fn invalidate_cache(
        &mut self,
        storage_only: bool,
        generator_kind: Option<GeneratorKind>,
        this_scope_only: bool,
    ) {
        // FIXME: maybe do shared caching of `cached_exits` etc. to handle functions
        // with lots of `try!`?

        // cached exits drop storage and refer to the top-of-scope
        self.cached_exits.clear();

        // the current generator drop and unwind refer to top-of-scope
        self.cached_generator_drop = None;

        let ignore_unwinds = storage_only && generator_kind.is_none();
        if !ignore_unwinds {
            self.cached_unwind.invalidate();
        }

        if !ignore_unwinds && !this_scope_only {
            for drop_data in &mut self.drops {
                drop_data.cached_block.invalidate();
            }
        }
    }

    /// Given a span and this scope's source scope, make a SourceInfo.
    fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo { span, scope: self.source_scope }
    }

    /// Whether there's anything to do for the cleanup path, that is,
    /// when unwinding through this scope. This includes destructors,
    /// but not StorageDead statements, which don't get emitted at all
    /// for unwinding, for several reasons:
    /// * clang doesn't emit llvm.lifetime.end for C++ unwinding
    /// * LLVM's memory dependency analysis can't handle it atm
    /// * polluting the cleanup MIR with StorageDead creates
    ///   landing pads even though there's no actual destructors
    /// * freeing up stack space has no effect during unwinding
    ///
    /// Note that for generators we do emit StorageDeads, for the
    /// use of optimizations in the MIR generator transform.
    fn needs_cleanup(&self) -> bool {
        self.drops.iter().any(|drop| match drop.kind {
            DropKind::Value => true,
            DropKind::Storage => false,
        })
    }
}

impl<'tcx> Scopes<'tcx> {
    fn len(&self) -> usize {
        self.scopes.len()
    }

    fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo), vis_scope: SourceScope) {
        debug!("push_scope({:?})", region_scope);
        self.scopes.push(Scope {
            source_scope: vis_scope,
            region_scope: region_scope.0,
            region_scope_span: region_scope.1.span,
            drops: vec![],
            moved_locals: vec![],
            cached_generator_drop: None,
            cached_exits: Default::default(),
            cached_unwind: CachedBlock::default(),
        });
    }

    fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
    ) -> (Scope, Option<BasicBlock>) {
        let scope = self.scopes.pop().unwrap();
        assert_eq!(scope.region_scope, region_scope.0);
        let unwind_to =
            self.scopes.last().and_then(|next_scope| next_scope.cached_unwind.get(false));
        (scope, unwind_to)
    }

    fn may_panic(&self, scope_count: usize) -> bool {
        let len = self.len();
        self.scopes[(len - scope_count)..].iter().any(|s| s.needs_cleanup())
    }

    /// Finds the breakable scope for a given label. This is used for
    /// resolving `return`, `break` and `continue`.
    fn find_breakable_scope(
        &self,
        span: Span,
        target: BreakableTarget,
    ) -> (BasicBlock, region::Scope, Option<Place<'tcx>>) {
        let get_scope = |scope: region::Scope| {
            // find the loop-scope by its `region::Scope`.
            self.breakable_scopes
                .iter()
                .rfind(|breakable_scope| breakable_scope.region_scope == scope)
                .unwrap_or_else(|| span_bug!(span, "no enclosing breakable scope found"))
        };
        match target {
            BreakableTarget::Return => {
                let scope = &self.breakable_scopes[0];
                if scope.break_destination != Place::return_place() {
                    span_bug!(span, "`return` in item with no return scope");
                }
                (scope.break_block, scope.region_scope, Some(scope.break_destination))
            }
            BreakableTarget::Break(scope) => {
                let scope = get_scope(scope);
                (scope.break_block, scope.region_scope, Some(scope.break_destination))
            }
            BreakableTarget::Continue(scope) => {
                let scope = get_scope(scope);
                let continue_block = scope
                    .continue_block
                    .unwrap_or_else(|| span_bug!(span, "missing `continue` block"));
                (continue_block, scope.region_scope, None)
            }
        }
    }

    fn num_scopes_above(&self, region_scope: region::Scope, span: Span) -> usize {
        let scope_count = self
            .scopes
            .iter()
            .rev()
            .position(|scope| scope.region_scope == region_scope)
            .unwrap_or_else(|| span_bug!(span, "region_scope {:?} does not enclose", region_scope));
        let len = self.len();
        assert!(scope_count < len, "should not use `exit_scope` to pop ALL scopes");
        scope_count
    }

    fn iter_mut(&mut self) -> impl DoubleEndedIterator<Item = &mut Scope> + '_ {
        self.scopes.iter_mut().rev()
    }

    fn top_scopes(&mut self, count: usize) -> impl DoubleEndedIterator<Item = &mut Scope> + '_ {
        let len = self.len();
        self.scopes[len - count..].iter_mut()
    }

    /// Returns the topmost active scope, which is known to be alive until
    /// the next scope expression.
    pub(super) fn topmost(&self) -> region::Scope {
        self.scopes.last().expect("topmost_scope: no scopes present").region_scope
    }

    fn source_info(&self, index: usize, span: Span) -> SourceInfo {
        self.scopes[self.len() - index].source_info(span)
    }
}

impl<'a, 'tcx> Builder<'a, 'tcx> {
    // Adding and removing scopes
    // ==========================

    // Start a breakable scope, which tracks where `continue`, `break` and
    // `return` should branch to.
    crate fn in_breakable_scope<F, R>(
        &mut self,
        loop_block: Option<BasicBlock>,
        break_block: BasicBlock,
        break_destination: Place<'tcx>,
        f: F,
    ) -> R
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> R,
    {
        let region_scope = self.scopes.topmost();
        let scope = BreakableScope {
            region_scope,
            continue_block: loop_block,
            break_block,
            break_destination,
        };
        self.scopes.breakable_scopes.push(scope);
        let res = f(self);
        let breakable_scope = self.scopes.breakable_scopes.pop().unwrap();
        assert!(breakable_scope.region_scope == region_scope);
        res
    }

    crate fn in_opt_scope<F, R>(
        &mut self,
        opt_scope: Option<(region::Scope, SourceInfo)>,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        debug!("in_opt_scope(opt_scope={:?})", opt_scope);
        if let Some(region_scope) = opt_scope {
            self.push_scope(region_scope);
        }
        let mut block;
        let rv = unpack!(block = f(self));
        if let Some(region_scope) = opt_scope {
            unpack!(block = self.pop_scope(region_scope, block));
        }
        debug!("in_scope: exiting opt_scope={:?} block={:?}", opt_scope, block);
        block.and(rv)
    }

    /// Convenience wrapper that pushes a scope and then executes `f`
    /// to build its contents, popping the scope afterwards.
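    ///
    /// A hypothetical sketch of typical usage (the closure builds the
    /// scope's contents and returns the resulting block):
    ///
    /// ```ignore (illustrative)
    /// let block_and_value = this.in_scope(region_scope, lint_level, |this| {
    ///     // ... emit statements into the CFG ...
    ///     block.and(rv)
    /// });
    /// ```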
    crate fn in_scope<F, R>(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        lint_level: LintLevel,
        f: F,
    ) -> BlockAnd<R>
    where
        F: FnOnce(&mut Builder<'a, 'tcx>) -> BlockAnd<R>,
    {
        debug!("in_scope(region_scope={:?})", region_scope);
        let source_scope = self.source_scope;
        let tcx = self.hir.tcx();
        if let LintLevel::Explicit(current_hir_id) = lint_level {
            // Use `maybe_lint_level_root_bounded` with `root_lint_level` as a bound
            // to avoid adding Hir dependencies on our parents.
            // We estimate the true lint roots here to avoid creating a lot of source scopes.

            let parent_root = tcx.maybe_lint_level_root_bounded(
                self.source_scopes[source_scope].local_data.as_ref().assert_crate_local().lint_root,
                self.hir.root_lint_level,
            );
            let current_root =
                tcx.maybe_lint_level_root_bounded(current_hir_id, self.hir.root_lint_level);

            if parent_root != current_root {
                self.source_scope = self.new_source_scope(
                    region_scope.1.span,
                    LintLevel::Explicit(current_root),
                    None,
                );
            }
        }
        self.push_scope(region_scope);
        let mut block;
        let rv = unpack!(block = f(self));
        unpack!(block = self.pop_scope(region_scope, block));
        self.source_scope = source_scope;
        debug!("in_scope: exiting region_scope={:?} block={:?}", region_scope, block);
        block.and(rv)
    }

    /// Push a scope onto the stack. You can then build code in this
    /// scope and call `pop_scope` afterwards. Note that these two
    /// calls must be paired; using `in_scope` as a convenience
    /// wrapper may be preferable.
    crate fn push_scope(&mut self, region_scope: (region::Scope, SourceInfo)) {
        self.scopes.push_scope(region_scope, self.source_scope);
    }

    /// Pops a scope, which should have region scope `region_scope`,
    /// adding any drops onto the end of `block` that are needed.
    /// This must match 1-to-1 with `push_scope`.
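    ///
    /// A paired use looks roughly like this sketch:
    ///
    /// ```ignore (illustrative)
    /// this.push_scope(region_scope);
    /// // ... build the contents of the scope into `block` ...
    /// unpack!(block = this.pop_scope(region_scope, block));
    /// ```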
    crate fn pop_scope(
        &mut self,
        region_scope: (region::Scope, SourceInfo),
        mut block: BasicBlock,
    ) -> BlockAnd<()> {
        debug!("pop_scope({:?}, {:?})", region_scope, block);
        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        if self.scopes.may_panic(1) {
            self.diverge_cleanup();
        }
        let (scope, unwind_to) = self.scopes.pop_scope(region_scope);
        let unwind_to = unwind_to.unwrap_or_else(|| self.resume_block());

        unpack!(
            block = build_scope_drops(
                &mut self.cfg,
                self.generator_kind,
                &scope,
                block,
                unwind_to,
                self.arg_count,
                false, // not generator
                false, // not unwind path
            )
        );

        block.unit()
    }

    crate fn break_scope(
        &mut self,
        mut block: BasicBlock,
        value: Option<ExprRef<'tcx>>,
        scope: BreakableTarget,
        source_info: SourceInfo,
    ) -> BlockAnd<()> {
        let (mut target_block, region_scope, destination) =
            self.scopes.find_breakable_scope(source_info.span, scope);
        if let BreakableTarget::Return = scope {
            // We call this now, rather than when we start lowering the
            // function so that the return block doesn't precede the entire
            // rest of the CFG. Some passes and LLVM prefer blocks to be in
            // approximately CFG order.
            target_block = self.return_block();
        }
        if let Some(destination) = destination {
            if let Some(value) = value {
                debug!("stmt_expr Break val block_context.push(SubExpr)");
                self.block_context.push(BlockFrame::SubExpr);
                unpack!(block = self.into(destination, block, value));
                self.block_context.pop();
            } else {
                self.cfg.push_assign_unit(block, source_info, destination, self.hir.tcx())
            }
        } else {
            assert!(value.is_none(), "`return` and `break` should have a destination");
        }
        self.exit_scope(source_info.span, region_scope, block, target_block);
        self.cfg.start_new_block().unit()
    }

    /// Branch out of `block` to `target`, exiting all scopes up to
    /// and including `region_scope`. This will insert whatever drops are
    /// needed. See module comment for details.
    crate fn exit_scope(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        mut block: BasicBlock,
        target: BasicBlock,
    ) {
        debug!(
            "exit_scope(region_scope={:?}, block={:?}, target={:?})",
            region_scope, block, target
        );
        let scope_count = self.scopes.num_scopes_above(region_scope, span);

        // If we are emitting a `drop` statement, we need to have the cached
        // diverge cleanup pads ready in case that drop panics.
        let may_panic = self.scopes.may_panic(scope_count);
        if may_panic {
            self.diverge_cleanup();
        }

        let mut scopes = self.scopes.top_scopes(scope_count + 1).rev();
        let mut scope = scopes.next().unwrap();
        for next_scope in scopes {
            if scope.drops.is_empty() {
                scope = next_scope;
                continue;
            }
            let source_info = scope.source_info(span);
            block = match scope.cached_exits.entry((target, region_scope)) {
                Entry::Occupied(e) => {
                    self.cfg.goto(block, source_info, *e.get());
                    return;
                }
                Entry::Vacant(v) => {
                    let b = self.cfg.start_new_block();
                    self.cfg.goto(block, source_info, b);
                    v.insert(b);
                    b
                }
            };

            let unwind_to = next_scope.cached_unwind.get(false).unwrap_or_else(|| {
                debug_assert!(!may_panic, "cached block not present?");
                START_BLOCK
            });

            unpack!(
                block = build_scope_drops(
                    &mut self.cfg,
                    self.generator_kind,
                    scope,
                    block,
                    unwind_to,
                    self.arg_count,
                    false, // not generator
                    false, // not unwind path
                )
            );

            scope = next_scope;
        }

        self.cfg.goto(block, self.scopes.source_info(scope_count, span), target);
    }

    /// Creates a path that performs all required cleanup for dropping a generator.
    ///
    /// This path terminates in GeneratorDrop. Returns the start of the path.
    /// None indicates there's no cleanup to do at this point.
    crate fn generator_drop_cleanup(&mut self) -> Option<BasicBlock> {
        // Fill in the cache for unwinds
        self.diverge_cleanup_gen(true);

        let src_info = self.scopes.source_info(self.scopes.len(), self.fn_span);
        let resume_block = self.resume_block();
        let mut scopes = self.scopes.iter_mut().peekable();
        let mut block = self.cfg.start_new_block();
        let result = block;

        while let Some(scope) = scopes.next() {
            block = if let Some(b) = scope.cached_generator_drop {
                self.cfg.goto(block, src_info, b);
                return Some(result);
            } else {
                let b = self.cfg.start_new_block();
                scope.cached_generator_drop = Some(b);
                self.cfg.goto(block, src_info, b);
                b
            };

            let unwind_to = scopes
                .peek()
                .as_ref()
                .map(|scope| {
                    scope
                        .cached_unwind
                        .get(true)
                        .unwrap_or_else(|| span_bug!(src_info.span, "cached block not present?"))
                })
                .unwrap_or(resume_block);

            unpack!(
                block = build_scope_drops(
                    &mut self.cfg,
                    self.generator_kind,
                    scope,
                    block,
                    unwind_to,
                    self.arg_count,
                    true, // is generator
                    true, // is cached path
                )
            );
        }

        self.cfg.terminate(block, src_info, TerminatorKind::GeneratorDrop);

        Some(result)
    }

    /// Creates a new source scope, nested in the current one.
    crate fn new_source_scope(
        &mut self,
        span: Span,
        lint_level: LintLevel,
        safety: Option<Safety>,
    ) -> SourceScope {
        let parent = self.source_scope;
        debug!(
            "new_source_scope({:?}, {:?}, {:?}) - parent({:?})={:?}",
            span,
            lint_level,
            safety,
            parent,
            self.source_scopes.get(parent)
        );
        let scope_local_data = SourceScopeLocalData {
            lint_root: if let LintLevel::Explicit(lint_root) = lint_level {
                lint_root
            } else {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().lint_root
            },
            safety: safety.unwrap_or_else(|| {
                self.source_scopes[parent].local_data.as_ref().assert_crate_local().safety
            }),
        };
        self.source_scopes.push(SourceScopeData {
            span,
            parent_scope: Some(parent),
            local_data: ClearCrossCrate::Set(scope_local_data),
        })
    }

    /// Given a span and the current source scope, make a SourceInfo.
    crate fn source_info(&self, span: Span) -> SourceInfo {
        SourceInfo { span, scope: self.source_scope }
    }

    // Finding scopes
    // ==============

    /// Returns the scope that we should use as the lifetime of an
    /// operand. Basically, an operand must live until it is consumed.
    /// This is similar to, but not quite the same as, the temporary
    /// scope (which can be larger or smaller).
    ///
    /// Consider:
    ///
    ///     let x = foo(bar(X, Y));
    ///
    /// We wish to pop the storage for X and Y after `bar()` is
    /// called, not after the whole `let` is completed.
    ///
    /// As another example, if the second argument diverges:
    ///
    ///     foo(Box::new(2), panic!())
    ///
    /// We would allocate the box but then free it on the unwinding
    /// path; we would also emit a free on the 'success' path from
    /// panic, but that will turn out to be removed as dead-code.
    ///
    /// When building statics/constants, returns `None` since
    /// intermediate values do not have to be dropped in that case.
    crate fn local_scope(&self) -> Option<region::Scope> {
        match self.hir.body_owner_kind {
            hir::BodyOwnerKind::Const | hir::BodyOwnerKind::Static(_) =>
            // No need to free storage in this context.
            {
                None
            }
            hir::BodyOwnerKind::Closure | hir::BodyOwnerKind::Fn => Some(self.scopes.topmost()),
        }
    }

    // Schedule an abort block - this is used for some ABIs that cannot unwind.
    crate fn schedule_abort(&mut self) -> BasicBlock {
        let source_info = self.scopes.source_info(self.scopes.len(), self.fn_span);
        let abortblk = self.cfg.start_new_cleanup_block();
        self.cfg.terminate(abortblk, source_info, TerminatorKind::Abort);
        self.cached_resume_block = Some(abortblk);
        abortblk
    }

    // Scheduling drops
    // ================

    crate fn schedule_drop_storage_and_value(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
    ) {
        self.schedule_drop(span, region_scope, local, DropKind::Storage);
        self.schedule_drop(span, region_scope, local, DropKind::Value);
    }

    /// Indicates that `local` should be dropped on exit from
    /// `region_scope`.
    ///
    /// When called with `DropKind::Storage`, `local` should be a local
    /// with an index higher than the current `self.arg_count`.
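    ///
    /// For instance, when both drops are wanted for a local, storage and
    /// value drops are scheduled together, roughly like this sketch (this
    /// is what `schedule_drop_storage_and_value` does):
    ///
    /// ```ignore (illustrative)
    /// this.schedule_drop(span, region_scope, local, DropKind::Storage);
    /// this.schedule_drop(span, region_scope, local, DropKind::Value);
    /// ```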
    crate fn schedule_drop(
        &mut self,
        span: Span,
        region_scope: region::Scope,
        local: Local,
        drop_kind: DropKind,
    ) {
        let needs_drop = match drop_kind {
            DropKind::Value => {
                if !self.hir.needs_drop(self.local_decls[local].ty) {
                    return;
                }
                true
            }
            DropKind::Storage => {
                if local.index() <= self.arg_count {
                    span_bug!(
                        span,
                        "`schedule_drop` called with local {:?} and arg_count {}",
                        local,
                        self.arg_count,
                    )
                }
                false
            }
        };

        for scope in self.scopes.iter_mut() {
            let this_scope = scope.region_scope == region_scope;
            // When building drops, we try to cache chains of drops in such a way so these drops
            // could be reused by the drops which would branch into the cached (already built)
            // blocks. This, however, means that whenever we add a drop into a scope which already
            // had some blocks built (and thus, cached) for it, we must invalidate all caches which
            // might branch into the scope which had a drop just added to it. This is necessary,
            // because otherwise some other code might use the cache to branch into an already
            // built chain of drops, essentially ignoring the newly added drop.
            //
            // For example, consider two scopes with a drop in each. These are built and
            // thus the caches are filled:
            //
            // +--------------------------------------------------------+
            // | +---------------------------------+                    |
            // | | +--------+     +-------------+  |  +---------------+ |
            // | | | return | <-+ | drop(outer) | <-+ |  drop(middle) | |
            // | | +--------+     +-------------+  |  +---------------+ |
            // | +------------|outer_scope cache|--+                    |
            // +------------------------------|middle_scope cache|------+
            //
            // Now, a new, inner-most scope is added along with a new drop into both inner-most and
            // outer-most scopes:
            //
            // +------------------------------------------------------------+
            // | +----------------------------------+                       |
            // | | +--------+      +-------------+  |   +---------------+   | +-------------+
            // | | | return | <+   | drop(new)   | <-+  |  drop(middle) | <--+| drop(inner) |
            // | | +--------+  |   | drop(outer) |  |   +---------------+   | +-------------+
            // | |             +-+ +-------------+  |                       |
            // | +---|invalid outer_scope cache|----+                       |
            // +----=----------------|invalid middle_scope cache|-----------+
            //
            // If, when adding `drop(new)` we do not invalidate the cached blocks for both
            // outer_scope and middle_scope, then, when building drops for the inner (right-most)
            // scope, the old, cached blocks, without `drop(new)`, will get used, producing the
            // wrong results.
            //
            // The cache and its invalidation for the unwind branch is somewhat special. The cache
            // is per-drop, rather than per-scope, which has several different implications. Adding
            // a new drop into a scope will not invalidate cached blocks of the prior drops in the
            // scope. That is true, because none of the already existing drops will have an edge
            // into a block with the newly added drop.
            //
            // Note that this code iterates scopes from the inner-most to the outer-most,
            // invalidating caches of each scope visited. This way the bare minimum of the
            // caches gets invalidated. I.e., if a new drop is added into the middle scope, the
            // cache of the outer scope stays intact.
            scope.invalidate_cache(!needs_drop, self.generator_kind, this_scope);
            if this_scope {
                let region_scope_span =
                    region_scope.span(self.hir.tcx(), &self.hir.region_scope_tree);
                // Attribute scope exit drops to scope's closing brace.
                let scope_end = self.hir.tcx().sess.source_map().end_point(region_scope_span);

                scope.drops.push(DropData {
                    span: scope_end,
                    local,
                    kind: drop_kind,
                    cached_block: CachedBlock::default(),
                });
                return;
            }
        }
        span_bug!(span, "region scope {:?} not in scope to drop {:?}", region_scope, local);
    }

    /// Indicates that the "local operand" stored in `local` is
    /// *moved* at some point during execution (see `local_scope` for
    /// more information about what a "local operand" is -- in short,
    /// it's an intermediate operand created as part of preparing some
    /// MIR instruction). We use this information to suppress
    /// redundant drops on the non-unwind paths. This results in less
    /// MIR, but also avoids spurious borrow check errors
    /// (c.f. #64391).
    ///
    /// Example: when compiling the call to `foo` here:
    ///
    /// ```rust
    /// foo(bar(), ...)
    /// ```
    ///
    /// we would evaluate `bar()` to an operand `_X`. We would also
    /// schedule `_X` to be dropped when the expression scope for
    /// `foo(bar())` is exited. This is relevant, for example, if the
    /// later arguments should unwind (it would ensure that `_X` gets
    /// dropped). However, if no unwind occurs, then `_X` will be
    /// unconditionally consumed by the `call`:
    ///
    /// ```
    /// bb {
    ///     ...
    ///     _R = CALL(foo, _X, ...)
    /// }
    /// ```
    ///
    /// However, `_X` is still registered to be dropped, and so if we
    /// do nothing else, we would generate a `DROP(_X)` that occurs
    /// after the call. This will later be optimized out by the
    /// drop-elaboration code, but in the meantime it can lead to
    /// spurious borrow-check errors -- the problem, ironically, is
    /// not the `DROP(_X)` itself, but the (spurious) unwind pathways
    /// that it creates. See #64391 for an example.
    crate fn record_operands_moved(&mut self, operands: &[Operand<'tcx>]) {
        let scope = match self.local_scope() {
            None => {
                // if there is no local scope, operands won't be dropped anyway
                return;
            }

            Some(local_scope) => self
                .scopes
                .iter_mut()
                .find(|scope| scope.region_scope == local_scope)
                .unwrap_or_else(|| bug!("scope {:?} not found in scope list!", local_scope)),
        };

        // look for moves of a local variable, like `MOVE(_X)`
        let locals_moved = operands.iter().flat_map(|operand| match operand {
            Operand::Copy(_) | Operand::Constant(_) => None,
            Operand::Move(place) => place.as_local(),
        });

        for local in locals_moved {
            // check if we have a Drop for this operand and -- if so
            // -- add it to the list of moved operands. Note that this
            // local might not have been an operand created for this
            // call, it could come from other places too.
            if scope.drops.iter().any(|drop| drop.local == local && drop.kind == DropKind::Value) {
                scope.moved_locals.push(local);
            }
        }
    }

    // Other
    // =====

    /// Branch based on a boolean condition.
    ///
    /// This is a special case because the temporary for the condition needs to
    /// be dropped on both the true and the false arm.
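    ///
    /// A hypothetical call site looks roughly like this sketch:
    ///
    /// ```ignore (illustrative)
    /// let (true_block, false_block) = this.test_bool(block, cond_expr, source_info);
    /// // ... build the `then` arm starting at `true_block`,
    /// // and the `else` arm starting at `false_block` ...
    /// ```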
    crate fn test_bool(
        &mut self,
        mut block: BasicBlock,
        condition: Expr<'tcx>,
        source_info: SourceInfo,
    ) -> (BasicBlock, BasicBlock) {
        let cond = unpack!(block = self.as_local_operand(block, condition));
        let true_block = self.cfg.start_new_block();
        let false_block = self.cfg.start_new_block();
        let term = TerminatorKind::if_(self.hir.tcx(), cond.clone(), true_block, false_block);
        self.cfg.terminate(block, source_info, term);

        match cond {
            // Don't try to drop a constant
            Operand::Constant(_) => (),
            // In constants and statics, we don't generate StorageLive for this
            // temporary, so don't try to generate StorageDead for it either.
            _ if self.local_scope().is_none() => (),
            Operand::Copy(place) | Operand::Move(place) => {
                if let Some(cond_temp) = place.as_local() {
                    // Manually drop the condition on both branches.
                    let top_scope = self.scopes.scopes.last_mut().unwrap();
                    let top_drop_data = top_scope.drops.pop().unwrap();

                    match top_drop_data.kind {
                        DropKind::Value { .. } => {
                            bug!("Drop scheduled on top of condition variable")
                        }
                        DropKind::Storage => {
                            let source_info = top_scope.source_info(top_drop_data.span);
                            let local = top_drop_data.local;
                            assert_eq!(local, cond_temp, "Drop scheduled on top of condition");
                            self.cfg.push(
                                true_block,
                                Statement { source_info, kind: StatementKind::StorageDead(local) },
                            );
                            self.cfg.push(
                                false_block,
                                Statement { source_info, kind: StatementKind::StorageDead(local) },
                            );
                        }
                    }

                    top_scope.invalidate_cache(true, self.generator_kind, true);
                } else {
                    bug!("Expected as_local_operand to produce a temporary");
                }
            }
        }

        (true_block, false_block)
    }

    /// Creates a path that performs all required cleanup for unwinding.
    ///
    /// This path terminates in Resume. Returns the start of the path.
    /// See module comment for more details.
    crate fn diverge_cleanup(&mut self) -> BasicBlock {
        self.diverge_cleanup_gen(false)
    }

    fn resume_block(&mut self) -> BasicBlock {
        if let Some(target) = self.cached_resume_block {
            target
        } else {
            let resumeblk = self.cfg.start_new_cleanup_block();
            self.cfg.terminate(
                resumeblk,
                SourceInfo::outermost(self.fn_span),
                TerminatorKind::Resume,
            );
            self.cached_resume_block = Some(resumeblk);
            resumeblk
        }
    }

    fn diverge_cleanup_gen(&mut self, generator_drop: bool) -> BasicBlock {
        // Build up the drops in **reverse** order. The end result will
        // look like:
        //
        // scopes[n] -> scopes[n-1] -> ... -> scopes[0]
        //
        // However, we build this in **reverse order**. That is, we
        // process scopes[0], then scopes[1], etc, pointing each one at
        // the result generated from the one before. Along the way, we
        // store caches. If everything is cached, we'll just walk right
        // to left reading the cached results but never create anything.

        // Find the last cached block
        debug!("diverge_cleanup_gen(self.scopes = {:?})", self.scopes);
        let cached_cleanup = self.scopes.iter_mut().enumerate().find_map(|(idx, ref scope)| {
            let cached_block = scope.cached_unwind.get(generator_drop)?;
            Some((cached_block, idx))
        });
        let (mut target, first_uncached) =
            cached_cleanup.unwrap_or_else(|| (self.resume_block(), self.scopes.len()));

        for scope in self.scopes.top_scopes(first_uncached) {
            target = build_diverge_scope(
                &mut self.cfg,
                scope.region_scope_span,
                scope,
                target,
                generator_drop,
                self.generator_kind,
            );
        }

        target
    }

    /// Utility function for *non*-scope code to build their own drops.
    crate fn build_drop_and_replace(
        &mut self,
        block: BasicBlock,
        span: Span,
        place: Place<'tcx>,
        value: Operand<'tcx>,
    ) -> BlockAnd<()> {
        let source_info = self.source_info(span);
        let next_target = self.cfg.start_new_block();
        let diverge_target = self.diverge_cleanup();
        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::DropAndReplace {
                place,
                value,
                target: next_target,
                unwind: Some(diverge_target),
            },
        );
        next_target.unit()
    }

    /// Creates an Assert terminator and returns the success block.
    /// If the boolean condition operand is not the expected value,
    /// a runtime panic will be caused with the given message.
    crate fn assert(
        &mut self,
        block: BasicBlock,
        cond: Operand<'tcx>,
        expected: bool,
        msg: AssertMessage<'tcx>,
        span: Span,
    ) -> BasicBlock {
        let source_info = self.source_info(span);

        let success_block = self.cfg.start_new_block();
        let cleanup = self.diverge_cleanup();

        self.cfg.terminate(
            block,
            source_info,
            TerminatorKind::Assert {
                cond,
                expected,
                msg,
                target: success_block,
                cleanup: Some(cleanup),
            },
        );

        success_block
    }
|
2019-04-03 19:21:51 +01:00
|
|
|
|
|
    // `match` arm scopes
    // ==================

    /// Unschedules any drops in the top scope.
    ///
    /// This is only needed for `match` arm scopes, because they have one
    /// entrance per pattern, but only one exit.
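    ///
    /// For example (illustrative): lowering the first arm of
    ///
    /// ```ignore (illustrative)
    /// match s {
    ///     Some(_) | None if guard(&s) => { /* ... */ }
    ///     _ => { /* ... */ }
    /// }
    /// ```
    ///
    /// enters the arm scope once per `|` pattern, so the drops scheduled
    /// for one entrance must be cleared before building the next.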
    pub(crate) fn clear_top_scope(&mut self, region_scope: region::Scope) {
        let top_scope = self.scopes.scopes.last_mut().unwrap();

        assert_eq!(top_scope.region_scope, region_scope);

        top_scope.drops.clear();
        top_scope.invalidate_cache(false, self.generator_kind, true);
    }
}

/// Builds drops for `pop_scope` and `exit_scope`.
fn build_scope_drops<'tcx>(
    cfg: &mut CFG<'tcx>,
    generator_kind: Option<GeneratorKind>,
    scope: &Scope,
    mut block: BasicBlock,
    last_unwind_to: BasicBlock,
    arg_count: usize,
    generator_drop: bool,
    is_cached_path: bool,
) -> BlockAnd<()> {
    debug!("build_scope_drops({:?} -> {:?})", block, scope);

    // Build up the drops in evaluation order. The end result will
    // look like:
    //
    // [SDs, drops[n]] --..> [SDs, drop[1]] -> [SDs, drop[0]] -> [[SDs]]
    //        |                    |                 |
    //        :                    |                 |
    //                             V                 V
    // [drop[n]] -...-> [drop[1]] ------> [drop[0]] ------> [last_unwind_to]
    //
    // The horizontal arrows represent the execution path when the drops return
    // successfully. The downwards arrows represent the execution path when the
    // drops panic (panicking while unwinding will abort, so there's no need for
    // another set of arrows).
    //
    // For generators, we unwind from a drop on a local to its StorageDead
    // statement. For other functions we don't worry about StorageDead. The
    // drops for the unwind path should have already been generated by
    // `diverge_cleanup_gen`.

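    // As a concrete sketch (illustrative): for a scope that scheduled a
    // `Value` drop for `x` and then a `Storage` drop for `tmp`, the loop
    // below (which walks the drops in reverse) pushes `StorageDead(tmp)`
    // into `block` and then terminates it with a `Drop(x)` branching to a
    // fresh block.
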
    for drop_idx in (0..scope.drops.len()).rev() {
        let drop_data = &scope.drops[drop_idx];
        let source_info = scope.source_info(drop_data.span);
        let local = drop_data.local;

        match drop_data.kind {
            DropKind::Value => {
                // If the operand has been moved, and we are not on an unwind
                // path, then don't generate the drop. (We only take this into
                // account for non-unwind paths so as not to disturb the
                // caching mechanism.)
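                //
                // For example (illustrative): after
                // `let x = String::new(); f(x);`, the local for `x` is
                // recorded in `moved_locals`, so no drop is emitted for it
                // on the normal exit path.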
                if !is_cached_path && scope.moved_locals.iter().any(|&o| o == local) {
                    continue;
                }

                let unwind_to = get_unwind_to(scope, generator_kind, drop_idx, generator_drop)
                    .unwrap_or(last_unwind_to);

                let next = cfg.start_new_block();
                cfg.terminate(
                    block,
                    source_info,
                    TerminatorKind::Drop {
                        place: local.into(),
                        target: next,
                        unwind: Some(unwind_to),
                    },
                );
                block = next;
            }
            DropKind::Storage => {
                // Only temps and vars need their storage dead.
                assert!(local.index() > arg_count);
                cfg.push(block, Statement { source_info, kind: StatementKind::StorageDead(local) });
            }
        }
    }
    block.unit()
}

/// Finds the cached unwind target for the drop scheduled just below
/// `unwind_from` in `scope`: the nearest earlier `Storage` drop for
/// generators, or the nearest earlier `Value` drop otherwise.
fn get_unwind_to(
    scope: &Scope,
    generator_kind: Option<GeneratorKind>,
    unwind_from: usize,
    generator_drop: bool,
) -> Option<BasicBlock> {
    for drop_idx in (0..unwind_from).rev() {
        let drop_data = &scope.drops[drop_idx];
        match (generator_kind, &drop_data.kind) {
            (Some(_), DropKind::Storage) => {
                return Some(drop_data.cached_block.get(generator_drop).unwrap_or_else(|| {
                    span_bug!(drop_data.span, "cached block not present for {:?}", drop_data)
                }));
            }
            (None, DropKind::Value) => {
                return Some(drop_data.cached_block.get(generator_drop).unwrap_or_else(|| {
                    span_bug!(drop_data.span, "cached block not present for {:?}", drop_data)
                }));
            }
            _ => (),
        }
    }
    None
}

fn build_diverge_scope<'tcx>(
    cfg: &mut CFG<'tcx>,
    span: Span,
    scope: &mut Scope,
    mut target: BasicBlock,
    generator_drop: bool,
    generator_kind: Option<GeneratorKind>,
) -> BasicBlock {
    // Build up the drops in **reverse** order. The end result will
    // look like:
    //
    //    [drops[n]] -...-> [drops[0]] -> [target]
    //
    // The code in this function reads from right to left. At each
    // point, we check for cached blocks representing the
    // remainder. If everything is cached, we'll just walk right to
    // left reading the cached results but never create anything.
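    //
    // For example (illustrative): in a non-generator with drops
    // [a: Value, b: Storage, c: Value] and no caches yet, we first build a
    // cleanup block `Drop(a) -> target`, skip `b` (plain `Storage` drops
    // are not emitted on this path), then build `Drop(c)` pointing at the
    // `a` block, yielding the chain `Drop(c) -> Drop(a) -> target`.
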
    let source_scope = scope.source_scope;
    let source_info = |span| SourceInfo { span, scope: source_scope };

    // We keep track of StorageDead statements to prepend to our current block
    // and store them here, in reverse order.
    let mut storage_deads = vec![];

    let mut target_built_by_us = false;

    // Build up the drops. Here we iterate the vector in
    // *forward* order, so that we generate drops[0] first (right to
    // left in the diagram above).
    debug!("build_diverge_scope({:?})", scope.drops);
    for (j, drop_data) in scope.drops.iter_mut().enumerate() {
        debug!("build_diverge_scope drop_data[{}]: {:?}", j, drop_data);
        // Only full value drops are emitted in the diverging path,
        // not StorageDead, except in the case of generators.
        //
        // Note: This may not actually be what we desire (are we
        // "freeing" stack storage as we unwind, or merely observing a
        // frozen stack)? In particular, the intent may have been to
        // match the behavior of clang, but on inspection eddyb says
        // this is not what clang does.
        match drop_data.kind {
            DropKind::Storage if generator_kind.is_some() => {
                storage_deads.push(Statement {
                    source_info: source_info(drop_data.span),
                    kind: StatementKind::StorageDead(drop_data.local),
                });
                if !target_built_by_us {
                    // We cannot add statements to an existing block, so we create a new
                    // block for our StorageDead statements.
                    let block = cfg.start_new_cleanup_block();
                    let source_info = SourceInfo { span: DUMMY_SP, scope: source_scope };
                    cfg.goto(block, source_info, target);
                    target = block;
                    target_built_by_us = true;
                }
                *drop_data.cached_block.ref_mut(generator_drop) = Some(target);
            }
            DropKind::Storage => {}
            DropKind::Value => {
                let cached_block = drop_data.cached_block.ref_mut(generator_drop);
                target = if let Some(cached_block) = *cached_block {
                    storage_deads.clear();
                    target_built_by_us = false;
                    cached_block
                } else {
                    push_storage_deads(cfg, target, &mut storage_deads);
                    let block = cfg.start_new_cleanup_block();
                    cfg.terminate(
                        block,
                        source_info(drop_data.span),
                        TerminatorKind::Drop {
                            place: drop_data.local.into(),
                            target,
                            unwind: None,
                        },
                    );
                    *cached_block = Some(block);
                    target_built_by_us = true;
                    block
                };
            }
        };
    }
    push_storage_deads(cfg, target, &mut storage_deads);
    *scope.cached_unwind.ref_mut(generator_drop) = Some(target);

    assert!(storage_deads.is_empty());
    debug!("build_diverge_scope({:?}, {:?}) = {:?}", scope, span, target);

    target
}

fn push_storage_deads<'tcx>(
    cfg: &mut CFG<'tcx>,
    target: BasicBlock,
    storage_deads: &mut Vec<Statement<'tcx>>,
) {
    if storage_deads.is_empty() {
        return;
    }
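    // Prepend the accumulated StorageDead statements to `target`'s
    // statements. For example (illustrative): if `storage_deads` holds
    // `[StorageDead(b), StorageDead(a)]` (collected in reverse), the block
    // ends up as `[StorageDead(a), StorageDead(b), ...original stmts]`.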
    let statements = &mut cfg.block_data_mut(target).statements;
    storage_deads.reverse();
    debug!(
        "push_storage_deads({:?}), storage_deads={:?}, statements={:?}",
        target, storage_deads, statements
    );
    // `append` moves the block's existing statements after the storage
    // deads; the swap then moves the combined list back into the block.
    storage_deads.append(statements);
    mem::swap(statements, storage_deads);
    assert!(storage_deads.is_empty());
}