
Commit 83193e1

quic-rishinr authored and quic-hemagnih committed
Upgrade python version from 3.10 to 3.12 (#782)
Updating the QEff Python version to 3.12 while still keeping support for 3.10 and 3.11.

Signed-off-by: Rishin Raj <rishinr@qti.qualcomm.com>
Co-authored-by: Hem Agnihotri <hemagnih@qti.qualcomm.com>
Signed-off-by: Dipankar Sarkar <dipankar@qti.qualcomm.com>
1 parent 8711e95 commit 83193e1

File tree

7 files changed: +26 −20 lines


Dockerfile

Lines changed: 6 additions & 6 deletions
@@ -7,8 +7,8 @@ FROM docker-registry.qualcomm.com/library/ubuntu:20.04
 RUN apt-get update && apt-get install -y \
 git \
 tmux \
-python3.10 \
-python3.10-venv \
+python3.12 \
+python3.12-venv \
 python3-pip

 # pip recognizes this variable
@@ -24,7 +24,7 @@ RUN mkdir -p /app/qefficient-library
 COPY . /app/qefficient-library

 # Create Virtual Env for the docker image
-RUN python3.10 -m venv /app/llm_env
+RUN python3.12 -m venv /app/llm_env
 RUN . /app/llm_env/bin/activate
 WORKDIR /app/qefficient-library

@@ -33,7 +33,7 @@ WORKDIR /app/qefficient-library
 RUN pip install torch==2.0.0+cpu --extra-index-url https://download.pytorch.org/whl/cpu --no-deps
 RUN pip install datasets==2.17.0 fsspec==2023.10.0 multidict==6.0.5 sentencepiece --no-deps

-RUN python3.10 -m pip install .
+RUN python3.12 -m pip install .
 WORKDIR /app/qefficient-library

 # Set the environment variable for the model card name and token ID
@@ -45,7 +45,7 @@ ENV TOKEN_ID = ""
 # Print a success message
 CMD ["echo", "qefficient-transformers repository cloned and setup installed inside Docker image."]
 CMD ["echo", "Starting the Model Download and Export to Onnx Stage for QEff."]
-CMD python3.10 -m QEfficient.cloud.export --model-name "$MODEL_NAME"
+CMD python3.12 -m QEfficient.cloud.export --model-name "$MODEL_NAME"

 # Example usage:
 # docker build -t qefficient-library .
@@ -55,4 +55,4 @@ CMD python3.10 -m QEfficient.cloud.export --model-name "$MODEL_NAME"
 # 2. For smaller models, 32GiB RAM is sufficient, but larger LLMs we require good CPU/RAM (Context 7B model would require atleast 64GiB).
 # 3. The exact minimum system configuration are tough to decide, since its all function of model parameters.

-# docker run -e MODEL_NAME=gpt2 -e TOKEN_ID=<your-token-id> qefficient-library
+# docker run -e MODEL_NAME=gpt2 -e TOKEN_ID=<your-token-id> qefficient-library

README.md

Lines changed: 4 additions & 4 deletions
@@ -95,9 +95,9 @@ For other models, there is comprehensive documentation to inspire upon the chang
 ## Quick Installation
 ```bash

-# Create Python virtual env and activate it. (Recommended Python 3.10)
-sudo apt install python3.10-venv
-python3.10 -m venv qeff_env
+# Create Python virtual env and activate it. (Recommended Python 3.12)
+sudo apt install python3.12-venv
+python3.12 -m venv qeff_env
 source qeff_env/bin/activate
 pip install -U pip

@@ -136,4 +136,4 @@ Thanks to:
 If you run into any problems with the code, please file Github issues directly to this repo.

 ## Contributing
-This project welcomes contributions and suggestions. Please check the License. Integration with a CLA Bot is underway.
+This project welcomes contributions and suggestions. Please check the License. Integration with a CLA Bot is underway.

docs/source/finetune.md

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ For QEfficient Library : https://github.com/quic/efficient-transformers

 For torch_qaic, assuming QEfficient is already installed,
 ```bash
-pip install /opt/qti-aic/integrations/torch_qaic/py310/torch_qaic-0.1.0-cp310-cp310-linux_x86_64.whl
+pip install /opt/qti-aic/integrations/torch_qaic/py312/torch_qaic-0.1.0-cp312-cp312-linux_x86_64.whl
 ```
 If qeff-env inside docker is used then torch_qaic and accelerate packages are already installed.
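The wheel filename above encodes the CPython ABI tag (cp312-cp312), which must match the running interpreter; that is why both the py3xx directory and the wheel tags change in this upgrade. As an illustration, the path can be derived from a (major, minor) version pair. The `torch_qaic_wheel_path` helper below is hypothetical, not part of QEfficient or the QAIC SDK:

```python
import sys

def torch_qaic_wheel_path(version=None, base="/opt/qti-aic/integrations/torch_qaic"):
    """Derive the torch_qaic wheel path for a (major, minor) Python version.

    The pyNN directory and the cpNNN-cpNNN wheel tags both track the
    interpreter version, which is why this path changed when moving
    from Python 3.10 to 3.12.
    """
    major, minor = version if version is not None else sys.version_info[:2]
    tag = f"cp{major}{minor}"  # e.g. "cp312" for Python 3.12
    return f"{base}/py{major}{minor}/torch_qaic-0.1.0-{tag}-{tag}-linux_x86_64.whl"
```

For example, `torch_qaic_wheel_path((3, 12))` reproduces the py312 path used above, while `(3, 10)` yields the old py310 path.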

docs/source/installation.md

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ Efficient Transformers have been validated to work with the same compatible SDK.
 ```bash
 # Create Python virtual env and activate it. (Required Python 3.10)

-python3.10 -m venv qeff_env
+python3.12 -m venv qeff_env
 source qeff_env/bin/activate
 pip install -U pip

examples/performance/on_device_sampling.py

Lines changed: 3 additions & 3 deletions
@@ -114,7 +114,7 @@ def main(args, **kwargs):
 """
 Example usage:
 1. For continuous batching:
-python3.10 examples/on_device_sampling.py \
+python examples/on_device_sampling.py \
 --model-name 'meta-llama/Llama-3.1-8B' \
 --prompt-len 128 \
 --ctx-len 256 \
@@ -134,7 +134,7 @@ def main(args, **kwargs):
 --random-number 26

 2. For non-continuous batching:
-python3.10 examples/on_device_sampling.py \
+python examples/on_device_sampling.py \
 --model-name 'meta-llama/Llama-3.1-8B' \
 --prompt-len 128 \
 --ctx-len 256 \
@@ -154,7 +154,7 @@ def main(args, **kwargs):
 --random-number 26

 3. With guided decoding:
-python3.10 examples/on_device_sampling.py \
+python examples/on_device_sampling.py \
 --model-name 'meta-llama/Llama-3.1-8B' \
 --prompt-len 128 \
 --ctx-len 256 \

pyproject.toml

Lines changed: 6 additions & 2 deletions
@@ -14,10 +14,10 @@ classifiers = [
 "Intended Audience :: Developers",
 "Intended Audience :: Education",
 "Operating System :: Linux",
-"Programming Language :: Python :: 3.10",
+"Programming Language :: Python :: 3.12",
 "Topic :: Scientific/Engineering :: Artificial Intelligence for Inference Accelerator",
 ]
-requires-python = ">=3.8,<3.11"
+requires-python = ">=3.8,<3.13"
 dependencies = [
 "transformers==4.57.0",
 "diffusers== 0.35.1",
@@ -48,8 +48,12 @@ dependencies = [
 "torch@https://download.pytorch.org/whl/cpu/torch-2.4.1%2Bcpu-cp38-cp38-linux_x86_64.whl ; python_version=='3.8' and platform_machine=='x86_64'",
 "torch@https://download.pytorch.org/whl/cpu/torch-2.7.0%2Bcpu-cp39-cp39-manylinux_2_28_x86_64.whl ; python_version=='3.9' and platform_machine=='x86_64'",
 "torch@https://download.pytorch.org/whl/cpu/torch-2.7.0%2Bcpu-cp310-cp310-manylinux_2_28_x86_64.whl ; python_version=='3.10' and platform_machine=='x86_64'",
+"torch@https://download.pytorch.org/whl/cpu/torch-2.7.0%2Bcpu-cp311-cp311-manylinux_2_28_x86_64.whl ; python_version=='3.11' and platform_machine=='x86_64'",
+"torch@https://download.pytorch.org/whl/cpu/torch-2.7.0%2Bcpu-cp312-cp312-manylinux_2_28_x86_64.whl ; python_version=='3.12' and platform_machine=='x86_64'",
 "torchvision@https://download.pytorch.org/whl/cpu/torchvision-0.22.0%2Bcpu-cp39-cp39-manylinux_2_28_x86_64.whl ; python_version=='3.9' and platform_machine=='x86_64'",
 "torchvision@https://download.pytorch.org/whl/cpu/torchvision-0.22.0%2Bcpu-cp310-cp310-manylinux_2_28_x86_64.whl ; python_version=='3.10' and platform_machine=='x86_64'",
+"torchvision@https://download.pytorch.org/whl/cpu/torchvision-0.22.0%2Bcpu-cp311-cp311-manylinux_2_28_x86_64.whl ; python_version=='3.11' and platform_machine=='x86_64'",
+"torchvision@https://download.pytorch.org/whl/cpu/torchvision-0.22.0%2Bcpu-cp312-cp312-manylinux_2_28_x86_64.whl ; python_version=='3.12' and platform_machine=='x86_64'",
 ]

 [project.optional-dependencies]
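The widened requires-python range above combines an inclusive ">=" lower bound with an exclusive "<" upper bound, so 3.10 and 3.11 stay supported alongside 3.12. A minimal sketch of that check on a (major, minor) version pair; the `satisfies_requires_python` helper is illustrative only, not part of the project:

```python
import sys

def satisfies_requires_python(version=None, lower=(3, 8), upper=(3, 13)):
    """Check a (major, minor) version against requires-python = ">=3.8,<3.13".

    Tuple comparison gives the inclusive ">=" lower bound and the
    exclusive "<" upper bound, so 3.12 passes but 3.13 does not.
    """
    if version is None:
        version = sys.version_info[:2]
    return lower <= version < upper
</antml>```

For example, `satisfies_requires_python((3, 12))` holds while `satisfies_requires_python((3, 13))` does not, which matches why the upper bound moved from `<3.11` to `<3.13` in this commit.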

scripts/Jenkinsfile

Lines changed: 5 additions & 3 deletions
@@ -17,8 +17,8 @@ pipeline {
 sudo docker exec ${BUILD_TAG} bash -c "
 cd /efficient-transformers &&
 apt update &&
-apt install -y python3.10-venv &&
-python3.10 -m venv preflight_qeff &&
+DEBIAN_FRONTEND=noninteractive apt install -y tzdata python3.12-venv python3.12-dev build-essential &&
+python3.12 -m venv preflight_qeff &&
 . preflight_qeff/bin/activate &&
 pip install --upgrade pip setuptools &&
 pip install .[test] &&
@@ -202,7 +202,9 @@ pipeline {
 sudo docker exec ${BUILD_TAG} bash -c "
 cd /efficient-transformers &&
 . preflight_qeff/bin/activate &&
-pip install /opt/qti-aic/integrations/torch_qaic/py310/torch_qaic-0.1.0-cp310-cp310-linux_x86_64.whl &&
+# TODO: Update torch_qaic path to py312 when migrating to Python 3.12
+pip install /opt/qti-aic/integrations/torch_qaic/py312/torch_qaic-0.1.0-cp312-cp312-linux_x86_64.whl &&
+# pip install /opt/qti-aic/integrations/torch_qaic/py310/torch_qaic-0.1.0-cp310-cp310-linux_x86_64.whl &&
 pip install torch==2.9.0 torchvision==0.24.0 torchaudio==2.9.0 --index-url https://download.pytorch.org/whl/cpu &&
 mkdir -p $PWD/cli_qaic_finetuning &&
 export TOKENIZERS_PARALLELISM=false &&
