Do not shallow resolve to root var while fudging #153869
ShoyuVanilla wants to merge 1 commit into rust-lang:main
Conversation
@bors try @rust-timer queue
@bors try cancel
Try build cancelled. Cancelled workflows:
@bors try
Finished benchmarking commit (1edb863): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request means it may be perf-sensitive – we'll automatically label it not fit for rolling up. You can override this, but we strongly advise not to, due to possible changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please do so in sufficient writing along with @bors rollup=never

- Instruction count: Our most reliable metric. Used to determine the overall result above. However, even this metric can be noisy.
- Max RSS (memory usage): Results (secondary 1.4%). A less reliable metric. May be of interest, but not used to determine the overall result above.
- Cycles: Results (secondary 3.9%). A less reliable metric. May be of interest, but not used to determine the overall result above.
- Binary size: This benchmark run did not return any relevant results for this metric.

Bootstrap: 481.843s -> 481.051s (-0.16%)
I dislike these changes to `shallow_resolve`, as I worry that it's easy for this to have unintended side effects/weirdness, e.g. it also affects canonicalization in the fudging scope.
My understanding of the two regressions is as follows:
tests/ui/coercion/fudge-inference/input-ty-higher-ranked-fn-trait.rs: We have the expectation `Server<?n>` and the return type of the function `Server<?m>` with `n < m`. We have an `?m: Fn` obligation, so if we don't relate `?n` with `?m`, but have fudging return `?n`, we lose that knowledge.
tests/ui/coercion/fudge-inference/input-ty-closure-param-fn-trait-bounds.rs: not actually minimal, this is enough:

```rust
struct Inv<T, U>(*mut (T, U));
fn pass_through<F>(_: F) -> Inv<F, F> { todo!() }
fn map(_: Inv<impl FnOnce(), impl Fn()>) {}
pub fn traverse() {
    map(pass_through(|| ()))
}
```

We have the same issue on stable already, if we partially constrain `F`:
```rust
struct Inv<T, U>(*mut (T, U));
fn pass_through<F>(_: F) -> Inv<F, F> { todo!() }
fn map(_: Inv<(impl FnOnce(),), (impl Fn(),)>) {}
pub fn traverse() {
    map(pass_through((|| (),)))
}
```

I don't get why we'd actually limit the closure to only `impl FnOnce` instead of properly inferring the closure kind as we do normally. That feels like a separate issue here; I'd be happy for you to look into what happens if we never eagerly infer the closure kind in `deduce_closure_signature`.
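As background for "properly inferring the closure kind as we do normally": absent any trait-bound pressure, rustc picks the closure's kind from how the closure uses its captures. A standalone sketch (the function name is mine, not from the PR):

```rust
// Without any explicit FnOnce bound, the closure's kind is inferred
// from its upvar usage: mutating `s` requires FnMut, so the closure
// can be called more than once.
fn run_twice() -> String {
    let mut s = String::new();
    let mut push = || s.push('a'); // inferred as FnMut from the upvar use
    push();
    push();
    s
}

fn main() {
    assert_eq!(run_twice(), "aa");
    println!("{}", run_twice());
}
```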
Could you instead add a fn `resolve_vars_for_fudging` which has this special behavior, by just ignoring the output of `shallow_resolve` if it's an infer var?
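A toy model of that suggestion, entirely outside rustc (all names and types here are mine, a minimal union-find stand-in for the real inference tables): `shallow_resolve` follows a variable to its root, while the fudging-aware variant keeps the original variable whenever resolution still yields an unresolved infer var.

```rust
// Inference "types": either an unresolved variable or a known type.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Ty {
    Var(usize), // inference variable, identified by vid
    Int,        // a stand-in concrete type
}

struct InferCtxt {
    parent: Vec<usize>,     // parent[i] == i means `i` is a root
    known: Vec<Option<Ty>>, // a root may have been resolved to a type
}

impl InferCtxt {
    fn root(&self, mut v: usize) -> usize {
        while self.parent[v] != v {
            v = self.parent[v];
        }
        v
    }

    // Follows var -> root, returning the root var or its known type.
    fn shallow_resolve(&self, ty: Ty) -> Ty {
        match ty {
            Ty::Var(v) => {
                let r = self.root(v);
                self.known[r].unwrap_or(Ty::Var(r))
            }
            other => other,
        }
    }

    // Suggested behavior: if shallow resolution still yields an infer
    // var, return the *original* variable instead of its root.
    fn resolve_vars_for_fudging(&self, ty: Ty) -> Ty {
        match (ty, self.shallow_resolve(ty)) {
            (Ty::Var(_), Ty::Var(_)) => ty,
            (_, resolved) => resolved,
        }
    }
}

fn main() {
    // ?0 and ?1 are unified (?1's root is ?0); ?2 is known to be Int.
    let icx = InferCtxt {
        parent: vec![0, 0, 2],
        known: vec![None, None, Some(Ty::Int)],
    };
    assert_eq!(icx.shallow_resolve(Ty::Var(1)), Ty::Var(0));
    assert_eq!(icx.resolve_vars_for_fudging(Ty::Var(1)), Ty::Var(1));
    assert_eq!(icx.resolve_vars_for_fudging(Ty::Var(2)), Ty::Int);
    println!("ok");
}
```

This is only a sketch of the behavior being proposed, not rustc's actual data structures.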
I really dislike this problem :x I feel like there should be a better way to handle this sort of thing, but I can't think of anything right away.
Yeah, this doesn't feel like the right way to do things to me, either. I think I should look into the problem more deeply and make a proper fix in the long term, but since the problematic issues are a stable-to-beta regression, I ended up with a somewhat makeshift fix 😅
Yeah, this feels better
I'm already a bit familiar with this closure kind deduction. It's gonna be a bit verbose, so I'll continue in a new comment.
In the beta:

```rust
struct Inv<T, U>(*mut (T, U));
fn pass_through<F>(_: F) -> Inv<F, F> { todo!() }
fn map(_: Inv<impl FnOnce(), impl Fn()>) {}
pub fn traverse() {
    map(pass_through(|| ()))
}
```

When we fudge the input expectations in …, since the ty vars aren't fully resolved yet, their root variable becomes something that feels quite random: the one with the minimal vid index.

> rust/compiler/rustc_infer/src/infer/type_variable.rs, lines 347 to 349 in e0a8361

With my resolve-to-root-var PR, the … This makes us deduce the closure kind as …

> rust/compiler/rustc_hir_typeck/src/closure.rs, line 347 in e0a8361

But in stable, we do not shallow resolve to root var, so the expectation becomes … For the second code, which has an outer concrete wrapper type (a unary tuple) for the expected input type, the input ty var is resolved into it (…).

I've found this deduce-closure-kind-from-predicates behavior somewhat weird ever since my first PR to rust-analyzer, which deduced the kind of a closure purely depending on its upvars; I thought that made more sense, but anyway, the source of truth for rust-analyzer is rustc 😄
So, I can say that the problematic regression (or breaking change) that I meant to fix with that PR would happen if we did so, at least:

```rust
struct Foo<F: std::ops::FnOnce()>(F);

fn main() {
    let mut x = String::new();
    let y = Foo(|| {
        x = String::from("foo");
    });
    (y.0)();
}
```

The above compiles fine with the current rustc, but if we deduced the kind of the closure from its upvars, it would result in …
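One plausible reading of why the example above depends on the `FnOnce` deduction (the helper names below are mine): calling a closure as `FnOnce` moves it, which works through the non-`mut` binding `let y`, whereas calling it as `FnMut` would require mutable access to `y.0`. A self-contained sketch of that difference:

```rust
// Calling as FnOnce consumes the closure; it works even through an
// immutable binding, since a move only needs ownership.
fn call_once<F: FnOnce()>(f: F) {
    f(); // `f` is moved here
}

// Calling as FnMut needs a mutable binding, but the closure may be
// called repeatedly.
fn call_twice<F: FnMut()>(mut f: F) {
    f();
    f();
}

fn main() {
    let mut x = String::new();
    call_once(|| x = String::from("foo"));
    assert_eq!(x, "foo");

    let mut n = 0;
    call_twice(|| n += 1);
    assert_eq!(n, 2);
    println!("{n}");
}
```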
```diff
-    .map(|&ty| self.resolve_vars_if_possible(ty))
-    .collect(),
-))
+Ok(Some(formal_input_tys.to_vec()))
```
We don't need `resolve_vars_if_possible` here, as we already do it in `fudge_inference_if_ok`.
```diff
     return Err(TypeError::Mismatch);
 }
-Ok(self.resolve_vars_if_possible(adt_ty))
+Ok(adt_ty)
```
Fixes #153816 and fixes #153849
In #151380, I thought that whether we shallow resolve to the root var or not wouldn't affect actual type inference, but that isn't true for fudging, in which we discard all newly created relationships between unresolved inference variables 😅
r? lcnr