Conversation

@qedawkins
Contributor

Add a transform that converts scf.forall operations into multi-level pcf.generic nests. This is required when a single scf.forall mapping type needs to map to multiple scopes. The immediate case this arises from is converting thread-mapped scf.forall ops to combined subgroup + lane scopes. In this case, we can't simply convert to pcf.loop because of the way the automatic redistribution in pcf.loop's lowering works: we need to redistribute the iterations of the scf.forall to *all* workers across both scopes. If we make either of the two scopes (subgroup or lane) a pcf.loop, we fail to create the correct IR structure: making it loop over subgroups fails to predicate the lanes, and making it loop over lanes produces more thread divergence than normal (normally only a single subgroup may exhibit divergence).

The transform creates an outer loop over workers and an inner scf.forall over the per-worker iteration range. For multi-dimensional cases, affine.linearize_index and affine.delinearize_index are used to flatten and unflatten indices as needed.
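
A rough before/after sketch of the rewrite (assumptions: the pcf.generic syntax, the scope names, and the cyclic per-worker distribution below are illustrative, not the dialect's exact format):

```mlir
// Before: a 2-D thread-mapped forall (mapping attribute shown for illustration).
scf.forall (%i, %j) in (%m, %n) {
  "some.use"(%i, %j) : (index, index) -> ()
} {mapping = [#gpu.thread<y>, #gpu.thread<x>]}

// After (sketch, hypothetical pcf.generic syntax): each pcf.generic provides
// the id and worker count for its scope; the ids are linearized into one flat
// worker id, and the inner scf.forall covers that worker's share of the
// original iteration space.
pcf.generic ... {       // subgroup scope: yields %sg_id, %num_sg
  pcf.generic ... {     // lane scope: yields %lane_id, %num_lanes
    %nworkers = arith.muli %num_sg, %num_lanes : index
    %worker = affine.linearize_index disjoint [%sg_id, %lane_id]
        by (%num_sg, %num_lanes) : index
    scf.forall (%iv) in (%per_worker) {
      // Flat iteration index for this (worker, iteration) pair; a bounds
      // check for non-divisible trip counts is elided here.
      %flat = affine.apply affine_map<(d0, d1)[s0] -> (d0 + d1 * s0)>
          (%worker, %iv)[%nworkers]
      %ij:2 = affine.delinearize_index %flat into (%m, %n) : index, index
      "some.use"(%ij#0, %ij#1) : (index, index) -> ()
    }
  }
}
```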

Additionally, this adds a new method, getNativeNumProcessorIds, to ScopeAttrInterface, which is needed to query the number of processor ids to generate.
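
For illustration (hypothetical names), a scope whose native id space is 2-D would report two processor ids, so the transform generates both ids and linearizes them before use:

```mlir
// Hypothetical: the scope reports 2 native processor ids (%id_y, %id_x),
// which are flattened into the single worker id used for redistribution.
%worker = affine.linearize_index disjoint [%id_y, %id_x]
    by (%dim_y, %dim_x) : index
```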
