diff --git a/backends/vulkan/docs/android_demo.md b/backends/vulkan/docs/android_demo.md
index 9b45fcde9a8..ce23eb989fd 100644
--- a/backends/vulkan/docs/android_demo.md
+++ b/backends/vulkan/docs/android_demo.md
@@ -81,7 +81,8 @@ First, build and install ExecuTorch libraries, then build the LLaMA runner
 binary using the Android NDK toolchain.
 
 ```shell
-(rm -rf cmake-android-out && \
+./install_requirements.sh --clean
+(mkdir cmake-android-out && \
   cmake . -DCMAKE_INSTALL_PREFIX=cmake-android-out \
     -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
     -DANDROID_ABI=$ANDROID_ABI \
diff --git a/backends/xnnpack/README.md b/backends/xnnpack/README.md
index 0c3d7e14428..2184257b791 100644
--- a/backends/xnnpack/README.md
+++ b/backends/xnnpack/README.md
@@ -98,7 +98,7 @@ After exporting the XNNPACK Delegated model, we can now try running it with exam
 cd executorch
 
 # Get a clean cmake-out directory
-rm -rf cmake-out
+./install_requirements.sh --clean
 mkdir cmake-out
 
 # Configure cmake
diff --git a/docs/source/build-run-xtensa.md b/docs/source/build-run-xtensa.md
index e46f52b6824..bc90ee52922 100644
--- a/docs/source/build-run-xtensa.md
+++ b/docs/source/build-run-xtensa.md
@@ -162,7 +162,8 @@ In order to run the CMake build, you need the path to the following:
 
 ```bash
 cd executorch
-rm -rf cmake-out
+./install_requirements.sh --clean
+mkdir cmake-out
 # prebuild and install executorch library
 cmake -DCMAKE_TOOLCHAIN_FILE=/backends/cadence/cadence.cmake \
     -DCMAKE_INSTALL_PREFIX=cmake-out \
diff --git a/docs/source/llm/getting-started.md b/docs/source/llm/getting-started.md
index 9419a92fd6e..9f88d7de361 100644
--- a/docs/source/llm/getting-started.md
+++ b/docs/source/llm/getting-started.md
@@ -396,7 +396,8 @@ At this point, the working directory should contain the following files:
 If all of these are present, you can now build and run:
 
 ```bash
-(rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..)
+./install_requirements.sh --clean
+(mkdir cmake-out && cd cmake-out && cmake ..)
 cmake --build cmake-out -j10
 ./cmake-out/nanogpt_runner
 ```
diff --git a/docs/source/tutorial-xnnpack-delegate-lowering.md b/docs/source/tutorial-xnnpack-delegate-lowering.md
index 4f0ba3bd1ab..f4579e2cce4 100644
--- a/docs/source/tutorial-xnnpack-delegate-lowering.md
+++ b/docs/source/tutorial-xnnpack-delegate-lowering.md
@@ -147,7 +147,7 @@ After exporting the XNNPACK Delegated model, we can now try running it with exam
 cd executorch
 
 # Get a clean cmake-out directory
-rm -rf cmake-out
+./install_requirements.sh --clean
 mkdir cmake-out
 
 # Configure cmake
diff --git a/examples/demo-apps/android/ExecuTorchDemo/README.md b/examples/demo-apps/android/ExecuTorchDemo/README.md
index 33107cbe5ee..1feb9ca92dc 100644
--- a/examples/demo-apps/android/ExecuTorchDemo/README.md
+++ b/examples/demo-apps/android/ExecuTorchDemo/README.md
@@ -115,7 +115,7 @@ export ANDROID_ABI=arm64-v8a
 export QNN_SDK_ROOT=
 
 ./install_requirements.sh --clean
-mkdir cmake-android-out && cd cmake-android-out
+mkdir cmake-android-out
 cmake . -DCMAKE_INSTALL_PREFIX=cmake-android-out \
     -DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK}/build/cmake/android.toolchain.cmake" \
     -DANDROID_ABI="${ANDROID_ABI}" \
diff --git a/examples/models/llama/README.md b/examples/models/llama/README.md
index e621ce5d49d..cf9553c1c6e 100644
--- a/examples/models/llama/README.md
+++ b/examples/models/llama/README.md
@@ -440,9 +440,8 @@ This example tries to reuse the Python code, with minimal modifications to make
 ```
 git clean -xfd
 pip uninstall executorch
+./install_requirements.sh --clean
 ./install_requirements.sh --pybind xnnpack
-
-rm -rf cmake-out
 ```
 - If you encounter `pthread` related issues during link time, add `pthread` in `target_link_libraries` in `CMakeLists.txt`
 - On Mac, if there is linking error in Step 4 with error message like
diff --git a/examples/models/phi-3-mini-lora/README.md b/examples/models/phi-3-mini-lora/README.md
index 987052dbf24..8e4b2428071 100644
--- a/examples/models/phi-3-mini-lora/README.md
+++ b/examples/models/phi-3-mini-lora/README.md
@@ -19,7 +19,8 @@ python export_model.py
 2. Run the inference model using an example runtime. For more detailed steps on this, check out [Build & Run](https://pytorch.org/executorch/stable/getting-started-setup.html#build-run).
 ```
 # Clean and configure the CMake build system. Compiled programs will appear in the executorch/cmake-out directory we create here.
-(rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..)
+./install_requirements.sh --clean
+(mkdir cmake-out && cd cmake-out && cmake ..)
 
 # Build the executor_runner target
 cmake --build cmake-out --target executor_runner -j9
diff --git a/examples/portable/README.md b/examples/portable/README.md
index a488ef2929a..e469df1510d 100644
--- a/examples/portable/README.md
+++ b/examples/portable/README.md
@@ -45,8 +45,8 @@ Use `-h` (or `--help`) to see all the supported models.
 
 ```bash
 # Build the tool from the top-level `executorch` directory.
-(rm -rf cmake-out \
-   && mkdir cmake-out \
+./install_requirements.sh --clean
+(mkdir cmake-out \
    && cd cmake-out \
    && cmake -DEXECUTORCH_PAL_DEFAULT=posix ..) \
   && cmake --build cmake-out -j32 --target executor_runner
diff --git a/examples/xnnpack/README.md b/examples/xnnpack/README.md
index dcd5b9c5d70..a519d935b52 100644
--- a/examples/xnnpack/README.md
+++ b/examples/xnnpack/README.md
@@ -31,7 +31,7 @@ Once we have the model binary (pte) file, then let's run it with ExecuTorch runt
 cd executorch
 
 # Get a clean cmake-out directory
-rm -rf cmake-out
+./install_requirements.sh --clean
 mkdir cmake-out
 
 # Configure cmake
@@ -86,7 +86,7 @@ After exporting the XNNPACK Delegated model, we can now try running it with exam
 cd executorch
 
 # Get a clean cmake-out directory
-rm -rf cmake-out
+./install_requirements.sh --clean
 mkdir cmake-out
 
 # Configure cmake
diff --git a/extension/training/README.md b/extension/training/README.md
index 17e5f91f075..44195471a71 100644
--- a/extension/training/README.md
+++ b/extension/training/README.md
@@ -230,7 +230,7 @@ After exporting the model for training, we can now try learning using CMake. We
 cd executorch
 
 # Get a clean cmake-out directory
-rm -rf cmake-out
+./install_requirements.sh --clean
 mkdir cmake-out
 
 # Configure cmake
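
Taken together, these edits converge on a single pattern: let `./install_requirements.sh --clean` wipe the stale build directory instead of an ad-hoc `rm -rf`, then recreate and configure it as before. Below is a minimal sketch of that flow, assuming `--clean` only deletes `cmake-out` (or `cmake-android-out`) and does not recreate it, which is why each updated snippet keeps its `mkdir` step.

```shell
# Sketch of the updated host-build flow, run from the repository root.
# Assumption: --clean removes stale cmake-out but leaves recreation and
# configuration to the caller.
cd executorch

# Clean previous build artifacts instead of `rm -rf cmake-out`.
./install_requirements.sh --clean

# Recreate the build directory and configure, as in the updated snippets.
(mkdir cmake-out && cd cmake-out && cmake ..)

# Build an example target, e.g. the portable runner.
cmake --build cmake-out --target executor_runner -j9
```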