safer access to Julia's type inference #8


Merged 2 commits on Mar 24, 2025
18 changes: 16 additions & 2 deletions src/internals.jl
@@ -2,6 +2,20 @@ module Internals

import StableTasks: @spawn, @spawnat, @fetch, @fetchfrom, StableTask, AtomicRef

if (
(@isdefined Core) &&
(Core isa Module) &&
isdefined(Core, :Compiler) &&
(Core.Compiler isa Module) &&
isdefined(Core.Compiler, :return_type) &&
applicable(Core.Compiler.return_type, identity, Tuple{})
)
infer_return_type(@nospecialize f::Any) = Core.Compiler.return_type(f, Tuple{})
Member:
One last question: is there much reason for the @nospecialize here? Won't that just potentially block inference?

Contributor Author:
@nospecialize doesn't block inference, that's @nospecializeinfer. My reasoning is that the computation only happens at macro instantiation time, so there's not much reason to specialize.

Member:
Ah, I see. I was always a bit shaky on which things @nospecialize blocked and which it didn't, but the docstring actually has some good info on this, specifically about the lack of inference blocking.
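The distinction discussed above can be seen in a small standalone sketch (hypothetical snippet, not code from this PR; `Core.Compiler.return_type` is the internal API the PR guards access to):

```julia
# Sketch: @nospecialize only tells the compiler not to generate a
# specialized method body per concrete argument type; it does NOT stop
# inference from reasoning about calls made with the argument.
infer(@nospecialize f::Any) = Core.Compiler.return_type(f, Tuple{})

# Inference can still recover a concrete return type for the thunk:
infer(() -> 1 + 2)  # Int, on Julia versions where return_type is available
```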

else
# safe conservative fallback to `Any`, which is subtyped by each type
infer_return_type(@nospecialize f::Any) = Any
end
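The fallback to `Any` is safe because `Any` sits at the top of Julia's type lattice: every value can be stored in a container typed `Any`, so the spawned task's result is still captured correctly; only the inferred element type widens. A minimal illustration (hypothetical snippet, not from the PR):

```julia
# Every type T satisfies T <: Any, so a Ref{Any} (or, analogously, an
# AtomicRef{Any}) can hold whatever result a thunk produces.
r = Ref{Any}()
r[] = 42
r[] isa Int   # true: the value keeps its concrete type
Int <: Any    # true for every type, which is what makes the fallback sound
```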

Base.getindex(r::AtomicRef) = @atomic r.x
Base.setindex!(r::AtomicRef{T}, x) where {T} = @atomic r.x = convert(T, x)

@@ -96,7 +110,7 @@ macro spawn(args...)
quote
let $(letargs...)
f = $thunk
-           T = Core.Compiler.return_type(f, Tuple{})
+           T = infer_return_type(f)
ref = AtomicRef{T}()
f_wrap = () -> (ref[] = f(); nothing)
task = Task(f_wrap)
@@ -136,7 +150,7 @@ macro spawnat(thrdid, ex)
end
let $(letargs...)
thunk = $thunk
-           RT = Core.Compiler.return_type(thunk, Tuple{})
+           RT = infer_return_type(thunk)
ret = AtomicRef{RT}()
thunk_wrap = () -> (ret[] = thunk(); nothing)
local task = Task(thunk_wrap)
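With `infer_return_type` in place, user-facing behavior is unchanged whenever inference is available: the task's result slot is typed by whatever inference reports, and only degrades to `Any` on Julia builds where `Core.Compiler.return_type` is missing. A rough usage sketch (assuming `@spawn` is importable from StableTasks as shown in the diff's import line):

```julia
using StableTasks: @spawn  # per the import at the top of src/internals.jl

t = @spawn 1 + 1
fetch(t)  # 2; the backing AtomicRef was constructed with the inferred type
```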