[-Ztrait-solver=next, mir-typeck] instantiate hidden types in the root universe
Fixes an ICE in the test `member-constraints-in-root-universe`.
Main motivation is to make #112691 pass under the new solver.
r? `@compiler-errors`
The only regression is one ambiguity in the new trait solver, having to
do with two param-env candidates that may apply. I think this is fine,
since the error message already kinda sucks.
Add `implement_via_object` to `rustc_deny_explicit_impl` to control object candidate assembly
Some built-in traits are special, since they are used to prove facts about the program that are important for later phases of compilation such as codegen and CTFE. For example, the `Unsize` trait is used to assert to the compiler that we are able to unsize a type into another type. It doesn't have any methods because it doesn't actually *instruct* the compiler how to do this unsizing, but this is later used (alongside an exhaustive match of combinations of unsizeable types) during codegen to generate unsize coercion code.
Due to this, these built-in traits are incompatible with the type erasure provided by object types. For example, the existence of `dyn Unsize<T>` does not mean that the compiler is able to unsize `Box<dyn Unsize<T>>` into `Box<T>`, since `Unsize` is a *witness* to the fact that a type can be unsized, and it doesn't actually encode that unsizing operation in its vtable as mentioned above.
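For context, the kind of unsize coercion that `Unsize` (together with `CoerceUnsized`) witnesses looks like this at the user level; the coercion code is generated by the compiler rather than called through any vtable method:

```rust
fn main() {
    // [i32; 3] -> [i32]: array-to-slice unsizing inside a Box.
    let slice: Box<[i32]> = Box::new([1, 2, 3]);
    assert_eq!(slice.len(), 3);

    // String -> dyn Display: sized type to trait object.
    let obj: Box<dyn std::fmt::Display> = Box::new(String::from("hi"));
    assert_eq!(obj.to_string(), "hi");
}
```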
The old trait solver gets around this fact by having complex control flow that never considers object bounds for certain built-in traits:
2f896da247/compiler/rustc_trait_selection/src/traits/select/candidate_assembly.rs (L61-L132)
However, candidate assembly in the new solver is much more lovely, and I'd hate to hard-code this list of opt-out cases into the new solver. Instead of maintaining this complex and hard-coded control flow, we can make this a property of the trait via a built-in attribute. We already have such an attribute applied to every single trait that we care about: `rustc_deny_explicit_impl`. This PR adds `implement_via_object` as a meta-item to that attribute, which lets us opt a trait out of object-bound candidate assembly as well.
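As a rough sketch of what this looks like on a trait definition (the exact meta-item spelling here is my assumption; see the PR diff for the authoritative form):

```rust
// Sketch only: `rustc_deny_explicit_impl` is a perma-unstable internal
// attribute, and the `implement_via_object = false` spelling is assumed.
#[rustc_deny_explicit_impl(implement_via_object = false)]
trait Unsize<T: ?Sized> {
    // No methods: `Unsize` is a compiler-witnessed fact, not something a
    // vtable could ever provide, so object candidates must not apply.
}
```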
r? `@lcnr`
Opportunistically resolve regions in new solver
Use `opportunistic_resolve_var` during canonicalization to collapse some regions.
We have to start using `CanonicalVarValues::is_identity_modulo_regions`. We also have to modify that function to consider responses like `['static, ^0, '^1, ^2]` to be an "identity" response: because we opportunistically resolve regions, there's no longer a 1:1 mapping between canonical var values and bound var indices in the response...
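To make the "identity modulo regions" idea concrete, here is a rough self-contained model (the enum and function below are illustrative assumptions, not the rustc types):

```rust
// A rough model of the check described above: region entries may be anything,
// but every non-region entry must still be the "next" bound variable in order.
#[derive(Clone, Copy)]
enum VarValue {
    /// A region: `None` is an arbitrary region like `'static`,
    /// `Some(i)` is the bound region `'^i`.
    Region(Option<usize>),
    /// A bound type/const variable `^i`.
    Bound(usize),
}

fn is_identity_modulo_regions(values: &[VarValue]) -> bool {
    let mut expected = 0;
    for &value in values {
        match value {
            // Arbitrary regions are fine; a region that happens to be the
            // expected bound var consumes its index.
            VarValue::Region(r) => {
                if r == Some(expected) {
                    expected += 1;
                }
            }
            // Non-region values must be exactly the next bound variable.
            VarValue::Bound(i) => {
                if i != expected {
                    return false;
                }
                expected += 1;
            }
        }
    }
    true
}

fn main() {
    // `['static, ^0, '^1, ^2]` from the description counts as identity.
    let response = [
        VarValue::Region(None),
        VarValue::Bound(0),
        VarValue::Region(Some(1)),
        VarValue::Bound(2),
    ];
    assert!(is_identity_modulo_regions(&response));
}
```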
There's one nasty side-effect -- one test (`tests/ui/dyn-star/param-env-infer.rs`) starts to ICE because the certainty goes from `Yes` to `Maybe(Overflow)`... Not exactly sure why, though? Putting this up for discussion/investigation.
r? `@lcnr`
Instantiate closure synthetic substs in root universe
In the UI test example, we end up generalizing an associated type (something like `<Map<Option<i32>, [closure upvars=?0]> as IntoIterator>::Item` generalizes into `<Map<Option<i32>, [closure upvars=?1]> as IntoIterator>::Item`) then assigning it to itself, emitting an alias-relate goal. This trivially holds via one of the normalizes-to candidates, instead of relating substs, so when closure analysis eventually sets `?0` to the actual upvars, `?1` never gets constrained. This ends up being reported as an ambiguity error during writeback.
Instead, we can take advantage of the fact that we *know* the closure substs live in the root universe. This will prevent them being generalized, since they always can be named, and the alias-relate above never gets emitted at all.
We can probably do this to a handful of other `next_ty_var` calls in typeck for variables that are clearly associated with the body of the program, but I wanted to limit this for now. Eventually, if we end up representing universes more faithfully like a tree or whatever, we can remove this and turn it back to just a call to `next_ty_var`.
Note: This is incredibly order-dependent -- we need to be assigning a type variable that was created *before* the closure substs, and we also need to actually have an unnormalized type at the time of the assignment. This currently seems easiest to trigger during call argument analysis just due to the fact that we instantiate the call's substs, normalize, THEN check args.
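For illustration, code of roughly this flavor exercises the interaction described above, with a capturing closure and an `IntoIterator::Item` projection flowing through call-argument checking (a hypothetical example, not the actual UI test):

```rust
fn consume(v: Vec<i32>) -> usize {
    v.len()
}

fn main() {
    let upvar = 1i32;
    // While the call arguments are checked, the `Item` projection of the
    // `Map<_, [closure]>` iterator gets related before closure analysis has
    // resolved the closure's upvars.
    let n = consume(Some(2i32).into_iter().map(|y| y + upvar).collect());
    assert_eq!(n, 1);
}
```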
r? `@lcnr`
Collect VTable stats & add `-Zprint-vtable-sizes`
This is a bit hacky/buggy, but I'm not entirely sure how to fix it, so I want to ask reviewers for help...
To try this, use any of these:
- `cargo clean && RUSTFLAGS="-Zprint-vtable-sizes" cargo +toolchain b`
- `cargo clean && cargo rustc +toolchain -Zprint-vtable-sizes`
- `rustc +toolchain -Zprint-vtable-sizes ./file.rs`
- Either explicitly annotate `let x: () = expr;` where `x` has unit type, or remove the unit binding to leave only `expr;` instead.
- Fix disjoint-capture-in-same-closure test
Deduplicate identical region constraints in new solver
The new solver doesn't track whether we've already proven a goal the way the fulfillment context's obligation forest does, so we may instantiate a canonical response (and specifically, its nested region obligations) quite a few times.
This may lead to exponentially gathering up identical region constraints for things like auto traits, so let's deduplicate region constraints in `compute_external_query_constraints`.
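A minimal sketch of the deduplication itself, with region constraints modeled as pairs of interned indices (an illustrative assumption, not the actual `compute_external_query_constraints` code):

```rust
use std::collections::HashSet;

// Order-preserving deduplication of gathered region constraints, modeled
// here as (sub-region, super-region) pairs of interned indices.
fn dedup_region_constraints(constraints: &mut Vec<(u32, u32)>) {
    let mut seen = HashSet::new();
    constraints.retain(|c| seen.insert(*c));
}

fn main() {
    // The same `'a: 'b`-style constraint gathered many times collapses to one.
    let mut constraints = vec![(0, 1), (0, 1), (2, 1), (0, 1)];
    dedup_region_constraints(&mut constraints);
    assert_eq!(constraints, vec![(0, 1), (2, 1)]);
}
```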
r? `@lcnr`
Preserve substs in opaques recorded in typeck results
This means that we now prepopulate MIR with opaques with the right substs.
The first commit is a hack that I think we discussed, having to do with `DefiningAnchor::Bubble` basically being equivalent to `DefiningAnchor::Error` in the new solver, so we have to use `DefiningAnchor::Bind` instead, lol.
r? `@lcnr`
Deal with unnormalized projections when structurally resolving types with new solver
1. Normalize types in `structurally_resolved_type` when the new solver is enabled
2. Normalize built-in autoderef targets in `Autoderef` when the new solver is enabled
3. Normalize-erasing-regions in `resolve_type` in writeback
This is motivated by the UI test provided, which currently fails with:
```
error[E0609]: no field `x` on type `<usize as SliceIndex<[Foo]>>::Output`
--> <source>:9:11
|
9 | xs[0].x = 1;
| ^
```
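For reference, a minimal program of the shape the quoted diagnostic points at (reconstructed from the error output; the actual UI test may differ):

```rust
struct Foo {
    x: i32,
}

fn set_first(xs: &mut [Foo]) {
    // With the new solver, `xs[0]` has the unnormalized type
    // `<usize as SliceIndex<[Foo]>>::Output` until it is structurally
    // resolved to `Foo`, so this field access used to fail.
    xs[0].x = 1;
}

fn main() {
    let mut xs = vec![Foo { x: 0 }];
    set_first(&mut xs);
    assert_eq!(xs[0].x, 1);
}
```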
I'm pretty happy with the approach in (1) and (2) and think we'll inevitably need something like this in the long term, but (3) seems like a hack to me. It's a *lot* of work to add tons of new calls to every user of these typeck results, though (MIR build, late lints, etc.). Happy to discuss further.
r? `@lcnr`
Do not allow inference in `predicate_must_hold` (alternative approach)
See the FCP description for more info, but the tl;dr is that we should not return `EvaluatedToOkModuloRegions` if an obligation may hold only with some choice of inference vars being constrained.
This attempts to solve that with the approach laid out by lcnr here: https://github.com/rust-lang/rust/pull/109558#discussion_r1147318134, rather than by eagerly replacing infer vars with placeholders, which is a bit too restrictive.
r? `@ghost`
Note user-facing types of coercion failure
When coercing, for example, `Box<A>` into `Box<dyn B>`, make sure that any failure notes mention *those* specific types, rather than mentioning inner types, like "the cast from `A` to `dyn B`".
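For example (an illustration using `Debug` as a stand-in for the trait `B` above, not a test from this PR):

```rust
use std::fmt::Debug;

#[derive(Debug)]
struct A;

fn main() {
    // `Box<A>` coerces to `Box<dyn Debug>` because `A: Debug`. If the derive
    // above is removed, the failure note should mention the user-facing types
    // `Box<A>` and `Box<dyn Debug>` rather than only `A` and `dyn Debug`.
    let _obj: Box<dyn Debug> = Box::new(A);
}
```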
I expect end-users are often confused when we skip layers of types and only mention the "innermost" part of a coercion, especially when other notes point at HIR, e.g. #111406.
Uplift `clippy::{drop,forget}_{ref,copy}` lints
This PR aims at uplifting the `clippy::drop_ref`, `clippy::drop_copy`, `clippy::forget_ref` and `clippy::forget_copy` lints.
Those lints are/were declared in the correctness category of clippy because they lint on code that is useless and most probably not what the developer wanted.
## `drop_ref` and `forget_ref`
The `drop_ref` and `forget_ref` lints check for calls to `std::mem::drop` or `std::mem::forget` with a reference instead of an owned value.
### Example
```rust
let mut lock_guard = mutex.lock();
std::mem::drop(&lock_guard); // Should have been drop(lock_guard); the mutex
                             // is still locked.
operation_that_requires_mutex_to_be_unlocked();
```
### Explanation
Calling `drop` or `forget` on a reference will only drop the reference itself, which is a no-op. It will not call the `drop` or `forget` method on the underlying referenced value, which is likely what was intended.
## `drop_copy` and `forget_copy`
The `drop_copy` and `forget_copy` lints check for calls to `std::mem::forget` or `std::mem::drop` with a value that implements the `Copy` trait.
### Example
```rust
let x: i32 = 42; // i32 implements Copy
std::mem::forget(x); // A copy of x is passed to the function, leaving the
                     // original unaffected.
```
### Explanation
Calling `std::mem::forget` [does nothing for types that implement Copy](https://doc.rust-lang.org/std/mem/fn.drop.html) since the value will be copied and moved into the function on invocation.
-----
Followed the instructions for uplifting a clippy lint described here: https://github.com/rust-lang/rust/pull/99696#pullrequestreview-1134072751
cc `@m-ou-se` (as T-libs-api leader because the uplifting was discussed in a recent meeting)
Don't compute trait super bounds unless they're positive
Fixes #111207
The comment is modified to explain the rationale for why we even have this recursive call to supertraits in the first place, which doesn't apply to negative bounds since they don't elaborate at all.
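For background, positive supertrait bounds elaborate as in the example below, which is what the recursive supertraits call exists to handle; negative bounds (under the unstable `negative_bounds` feature) have no analogous elaboration, so they can be skipped:

```rust
trait Super {
    fn base(&self) -> u32;
}

// A positive supertrait bound elaborates: proving `T: Sub` also proves
// `T: Super`, which is why super predicates are computed recursively.
trait Sub: Super {}

fn use_it<T: Sub>(x: &T) -> u32 {
    // We can call the supertrait method without naming `Super` in the bounds.
    x.base()
}

struct S;

impl Super for S {
    fn base(&self) -> u32 {
        7
    }
}

impl Sub for S {}

fn main() {
    assert_eq!(use_it(&S), 7);
}
```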