Merged
Changes from 1 commit
9 changes: 4 additions & 5 deletions docs/source/backends-qualcomm.md
@@ -74,10 +74,9 @@ This example is verified with SM8550 and SM8450.
 - A compiler to compile AOT parts, e.g., the GCC compiler comes with Ubuntu LTS.
 - [Android NDK](https://developer.android.com/ndk). This example is verified with NDK 26c.
 - [Qualcomm AI Engine Direct SDK](https://developer.qualcomm.com/software/qualcomm-ai-engine-direct-sdk)
-  - Click the "Get Software" button to download a version of QNN SDK.
-  - However, at the moment of updating this tutorial, the above website doesn't provide QNN SDK newer than 2.22.6.
-  - The below is public links to download various QNN versions. Hope they can be publicly discoverable soon.
-  - [QNN 2.37.0](https://softwarecenter.qualcomm.com/api/download/software/sdks/Qualcomm_AI_Runtime_Community/All/2.37.0.250724/v2.37.0.250724.zip)
+  - Click the "Get Software" button to download the latest version of the QNN SDK.
+  - Although newer versions are available, we have verified and recommend using QNN 2.37.0 for stability.
+  - You can download it directly from the following link: [QNN 2.37.0](https://softwarecenter.qualcomm.com/api/download/software/sdks/Qualcomm_AI_Runtime_Community/All/2.37.0.250724/v2.37.0.250724.zip)
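Once the archive from the link above is unzipped, a quick sanity check like the one below can confirm the SDK is where later build steps expect it. This is an illustrative sketch only: the `$HOME/qairt/<version>` location and the `QNN_SDK_ROOT` variable name are assumptions, not something this PR prescribes.

```shell
# Illustrative only: the install path and the QNN_SDK_ROOT convention are assumptions.
QNN_VERSION="2.37.0.250724"
QNN_SDK_ROOT="${QNN_SDK_ROOT:-$HOME/qairt/$QNN_VERSION}"

if [ -d "$QNN_SDK_ROOT" ]; then
  echo "Found QNN SDK at $QNN_SDK_ROOT"
else
  # Point the reader back at the download link rather than fetching anything here.
  echo "QNN SDK not found; unzip the v$QNN_VERSION archive from the link above into $QNN_SDK_ROOT" >&2
fi
```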
Comment on lines +77 to +79
Contributor

@cccclai we need to update these instructions. We shouldn't need to download the SDK with the QNN pip wheel. Make this either optional or remove it entirely; I would prefer the latter because it avoids confusion.

Contributor

The pip wheel release hasn't finished yet. The nightly build is based on viable/strict, which has been stuck for a week and doesn't include our latest commit with the fix. After it's verified, I can add a section and move this part to the build-from-source section.


The directory with the installed Qualcomm AI Engine Direct SDK looks like:
```
…
```

@@ -365,7 +364,7 @@ The model, inputs, and output location are passed to `qnn_executorch_runner` by

## Supported model list

-Please refer to `$EXECUTORCH_ROOT/examples/qualcomm/scripts/` and `EXECUTORCH_ROOT/examples/qualcomm/oss_scripts/` to the list of supported models.
+Please refer to `$EXECUTORCH_ROOT/examples/qualcomm/scripts/` and `$EXECUTORCH_ROOT/examples/qualcomm/oss_scripts/` for the list of supported models.

## How to Support a Custom Model in HTP Backend

7 changes: 4 additions & 3 deletions examples/qualcomm/README.md
@@ -111,12 +111,13 @@ This section outlines the essential APIs and utilities provided to streamline th
Creates a clean directory for storing model outputs or intermediate results. If the directory already exists, it will be deleted and recreated to ensure a consistent environment for each run.

 ## Additional Dependency
+This example requires the following Python packages:
+- pandas and scikit-learn: used in the mobilebert multi-class text classification example.
+- graphviz (optional): used for visualizing QNN graphs during debugging.
 
-The mobilebert multi-class text classification example requires `pandas` and `sklearn`.
-Please install them by something like
 
 ```bash
-pip install scikit-learn pandas
+pip install scikit-learn pandas graphviz
Contributor
@abhinaykukkadapu Oct 8, 2025


I had to install lm_eval too.

Contributor

lm_eval is only for llama, I think.

Contributor

Thanks, I see it is part of examples/models/llama/install_requirements.sh; maybe we should mention this in the instructions for running llama models?

Contributor

Yeah, I think it's reasonable to have it as part of the instructions for running llama models.

Contributor (Author)

I’ve just added the instruction at this line

```
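Following up on the thread above about lm_eval: a hedged sketch of pulling in the extra llama-example dependencies via the helper script mentioned in the discussion. The `EXECUTORCH_ROOT` environment variable is an assumption here; the script path comes from the thread itself.

```shell
# Sketch: install llama-example extras (including lm_eval) via the helper script
# from the thread. EXECUTORCH_ROOT is an assumed variable pointing at a checkout.
LLAMA_REQS="${EXECUTORCH_ROOT:-.}/examples/models/llama/install_requirements.sh"

if [ -f "$LLAMA_REQS" ]; then
  bash "$LLAMA_REQS"
else
  echo "Not found: $LLAMA_REQS (set EXECUTORCH_ROOT to your executorch checkout)" >&2
fi
```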

## Limitation