
Conversation

@SS-JIA SS-JIA commented Jan 30, 2025

Summary:
Note: This diff is a combination of D68919676 and D68919678. I combined the two because ghexport was having trouble exporting the second diff, and because both diffs are needed for export_llama to work, so it makes more sense to land them as a single diff.

Context: splitting sdpa_with_kv_cache

Recent changes split the sdpa_with_kv_cache operator into two separate operators, update_cache and custom_sdpa, to decouple the cache update step from the actual SDPA computation.

As a result of this interface change, SDPA is no longer delegated to Vulkan. To rectify this, the Vulkan backend must also split sdpa_with_kv_cache into two operators.

Note that in the first part of this diff the new operators are not yet partitioned, because of complications caused by assertion ops in the graph. The second part of this diff adds a pass to remove those assertion ops, which allows the new operators to be partitioned.
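
For illustration, here is a minimal pure-PyTorch sketch of what the decoupled two-step flow computes for single-token decode. The names and signatures are illustrative only; they are not the actual update_cache/custom_sdpa custom ops.

```python
import torch
import torch.nn.functional as F

def update_cache_ref(new_k, new_v, k_cache, v_cache, start_pos):
    # Step 1: write the key/value projections for the current token(s) into
    # the persistent caches at position start_pos along the sequence dim.
    seq_len = new_k.size(1)
    k_cache[:, start_pos:start_pos + seq_len] = new_k
    v_cache[:, start_pos:start_pos + seq_len] = new_v

def custom_sdpa_ref(q, k_cache, v_cache, start_pos):
    # Step 2: attend over everything cached so far. For single-token decode
    # this is already causal, so no extra mask is needed here.
    seq_len = q.size(1)
    k = k_cache[:, :start_pos + seq_len]
    v = v_cache[:, :start_pos + seq_len]
    # Tensors here are laid out as [batch, seq, heads, head_dim]; SDPA expects
    # [batch, heads, seq, head_dim].
    out = F.scaled_dot_product_attention(
        q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
    )
    return out.transpose(1, 2)

def sdpa_with_kv_cache_ref(q, new_k, new_v, k_cache, v_cache, start_pos):
    # The previous fused op corresponds to running the two steps in sequence.
    update_cache_ref(new_k, new_v, k_cache, v_cache, start_pos)
    return custom_sdpa_ref(q, k_cache, v_cache, start_pos)
```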

Context: assertion ops in the Llama source

Recently, some assertion ops were added to the Llama source code.

Unfortunately, this causes issues for the Vulkan delegate: runtime assertions are not yet supported in Vulkan, so the unsupported assertion ops cause graph breaks.

To prevent these graph breaks when delegating to Vulkan, this diff applies a pass to remove assertion ops during the Llama export.
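
As a rough sketch of what such a pass can look like on the exported FX graph (assuming the assertion ops appear as aten._assert_scalar and aten.sym_constrain_range_for_size; the actual pass in this diff may target a different set of ops):

```python
import torch
from torch.fx import GraphModule

# Assumed set of assertion/constraint ops to strip; treat this as an example,
# not the definitive list used by the real pass.
ASSERTION_OPS = {
    torch.ops.aten._assert_scalar.default,
    torch.ops.aten.sym_constrain_range_for_size.default,
}

def remove_asserts(gm: GraphModule) -> GraphModule:
    for node in list(gm.graph.nodes):
        if node.op == "call_function" and node.target in ASSERTION_OPS:
            # Assertion ops return no value that other nodes consume, so they
            # can be erased without rewiring any users.
            gm.graph.erase_node(node)
    gm.graph.eliminate_dead_code()
    gm.recompile()
    return gm
```

In the export flow, a pass like this would run on the exported program's graph module before partitioning, so the Vulkan partitioner sees a graph without unsupported assertion ops.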

Differential Revision: D68922404


pytorch-bot bot commented Jan 30, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/8075

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 1 Pending

As of commit c2ac929 with merge base 2a1a583:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the "CLA Signed" label Jan 30, 2025
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D68922404

SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request Jan 30, 2025
… operator + Add `RemoveAsserts` pass and apply it during LlaMa export (pytorch#8075)

SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request Jan 30, 2025
… operator + Add `RemoveAsserts` pass and apply it during LlaMa export (pytorch#8075)

SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request Jan 31, 2025
… operator + Add `RemoveAsserts` pass and apply it during LlaMa export (pytorch#8075)

SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request Jan 31, 2025
… operator + Add `RemoveAsserts` pass and apply it during LlaMa export (pytorch#8075)

SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request Jan 31, 2025
… operator + Add `RemoveAsserts` pass and apply it during LlaMa export (pytorch#8075)

@SS-JIA SS-JIA added the "release notes: vulkan" label Jan 31, 2025
@facebook-github-bot facebook-github-bot merged commit ca7cdc8 into pytorch:main Jan 31, 2025
43 of 45 checks passed
