Rewrite collect_tokens implementations to use a flattened buffer
Instead of trying to collect tokens at each depth, we 'flatten' the stream as we go along, pushing open/close delimiters to our buffer just like regular tokens. Once capturing is complete, we reconstruct a nested `TokenTree::Delimited` structure, producing a normal `TokenStream`.

The reconstructed `TokenStream` is not created immediately - instead, it is produced on-demand by a closure (wrapped in a new `LazyTokenStream` type). This closure stores a clone of the original `TokenCursor`, plus a record of the number of calls to `next()`/`next_desugared()`. This is sufficient to reconstruct the tokenstream seen by the callback without storing any additional state. If the tokenstream is never used (e.g. when a captured `macro_rules!` argument is never passed to a proc macro), we never actually create a `TokenStream`.

This implementation has a number of advantages over the previous one:

* It is significantly simpler, with no edge cases around capturing the start/end of a delimited group.
* It can be easily extended to allow replacing tokens at an arbitrary 'depth' by just using `Vec::splice` at the proper position (see the sketch below). This is important for PR #76130, which requires us to track information about attributes along with tokens.
* The lazy approach to `TokenStream` construction allows us to easily parse an AST struct, and then decide after the fact whether we need a `TokenStream`. This will be useful when we start collecting tokens for `Attribute` - we can discard the `LazyTokenStream` if the parsed attribute doesn't need tokens (e.g. is a builtin attribute).

The performance impact seems to be negligible (see https://github.com/rust-lang/rust/pull/77250#issuecomment-703960604). There is a small slowdown on a few benchmarks, but it only rises above 1% for incremental builds, where it represents a larger fraction of the much smaller instruction count. There is a ~1% speedup on a few other incremental benchmarks - my guess is that the speedups and slowdowns will usually cancel out in practice.
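To make the mechanism concrete, here is a minimal, self-contained sketch of the flatten-then-lazily-reconstruct idea. All names below (`FlatToken`, `TokenTree`, `Cursor`, `LazyTokenStream`, `reconstruct`) are illustrative stand-ins, not rustc's actual types:

```rust
// Illustrative sketch only - these names are stand-ins for rustc's types.

#[derive(Clone, Debug)]
enum FlatToken {
    Token(char), // stand-in for a real token
    OpenDelim,   // pushed to the buffer just like a regular token
    CloseDelim,
}

#[derive(Debug)]
enum TokenTree {
    Token(char),
    Delimited(Vec<TokenTree>), // nesting, reconstructed after the fact
}

// A cloneable cursor over the token source. Capturing only needs a
// clone of this plus a count of `next()` calls - no other state.
#[derive(Clone)]
struct Cursor {
    tokens: Vec<FlatToken>,
    pos: usize,
}

impl Cursor {
    fn next(&mut self) -> Option<FlatToken> {
        let tok = self.tokens.get(self.pos).cloned();
        self.pos += 1;
        tok
    }
}

// The lazy handle: nothing nested is built unless `create` is called.
struct LazyTokenStream {
    start: Cursor,    // clone of the cursor taken when capturing began
    num_calls: usize, // how many tokens the parsing callback consumed
}

impl LazyTokenStream {
    fn create(&self) -> Vec<TokenTree> {
        // Replay exactly the flat tokens the callback saw...
        let mut cursor = self.start.clone();
        let flat: Vec<FlatToken> =
            (0..self.num_calls).filter_map(|_| cursor.next()).collect();
        // ...then rebuild the nested structure in a single pass.
        reconstruct(&flat)
    }
}

// Turn the flat buffer back into nested trees using a stack of groups.
fn reconstruct(flat: &[FlatToken]) -> Vec<TokenTree> {
    let mut stack: Vec<Vec<TokenTree>> = vec![Vec::new()];
    for tok in flat {
        match tok {
            FlatToken::Token(c) => stack.last_mut().unwrap().push(TokenTree::Token(*c)),
            FlatToken::OpenDelim => stack.push(Vec::new()),
            FlatToken::CloseDelim => {
                let group = stack.pop().unwrap();
                stack.last_mut().unwrap().push(TokenTree::Delimited(group));
            }
        }
    }
    stack.pop().unwrap()
}

fn main() {
    // Flattened form of `a ( b c ) d`.
    let source = vec![
        FlatToken::Token('a'),
        FlatToken::OpenDelim,
        FlatToken::Token('b'),
        FlatToken::Token('c'),
        FlatToken::CloseDelim,
        FlatToken::Token('d'),
    ];
    let lazy = LazyTokenStream { start: Cursor { tokens: source, pos: 0 }, num_calls: 6 };
    // If the stream is never needed, `create` is never called and the
    // nested `TokenTree`s are never built.
    println!("{:?}", lazy.create());
}
```

If the captured tokens are never requested, `create` never runs, so the cost of building the nested structure is never paid.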
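The `Vec::splice` point is easy to demonstrate: because the buffer is flat, a replacement at any nesting depth is just an index-range splice, with no tree surgery. A tiny sketch, using plain `char`s as stand-in tokens:

```rust
fn main() {
    // Flattened form of `a ( b c ) d`, with delimiters as ordinary entries.
    let mut flat = vec!['a', '(', 'b', 'c', ')', 'd'];
    // Replace the tokens inside the delimiters (indices 2..4) in place,
    // regardless of how deeply nested they conceptually are.
    flat.splice(2..4, ['x', 'y', 'z']);
    assert_eq!(flat, ['a', '(', 'x', 'y', 'z', ')', 'd']);
}
```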
@@ -116,15 +116,16 @@ impl<'a> Parser<'a> {
             Some(item.into_inner())
         });
 
+        let needs_tokens = super::attr::maybe_needs_tokens(&attrs);
+
         let mut unclosed_delims = vec![];
-        let has_attrs = !attrs.is_empty();
         let parse_item = |this: &mut Self| {
             let item = this.parse_item_common_(attrs, mac_allowed, attrs_allowed, req_name);
             unclosed_delims.append(&mut this.unclosed_delims);
             item
         };
 
-        let (mut item, tokens) = if has_attrs {
+        let (mut item, tokens) = if needs_tokens {
             let (item, tokens) = self.collect_tokens(parse_item)?;
             (item, Some(tokens))
         } else {
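This hunk replaces the unconditional `has_attrs` check with `maybe_needs_tokens`, so `collect_tokens` (and hence any `LazyTokenStream`) is skipped entirely when no attribute could consume the captured tokens. A hypothetical sketch of the shape of such a check - the names and the string-based test here are illustrative, not the real logic in `super::attr`:

```rust
// Hypothetical stand-in: the real `maybe_needs_tokens` inspects rustc
// attributes, not plain strings.
fn maybe_needs_tokens(attr_names: &[&str]) -> bool {
    // Example builtin attributes that never need the item's tokens.
    const BUILTIN: &[&str] = &["inline", "allow", "doc"];
    // Conservative: any other attribute might be a proc macro that
    // wants to see the captured tokens.
    attr_names.iter().any(|name| !BUILTIN.contains(name))
}

fn main() {
    assert!(!maybe_needs_tokens(&["inline"]));       // builtin only: skip collection
    assert!(maybe_needs_tokens(&["my_attr_macro"])); // unknown attr: collect tokens
}
```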