examples/cpu/inference/python/llm-modeling (1 file changed: +19, -1 lines)

@@ -63,7 +63,25 @@ We provide optimized LLAMA, GPT-J and OPT modeling files on the basis of [huggin
## Running example script

- Please refer to the [instructions](../../../llm/README.md#2-environment-setup) for environment setup.
+ Please install the required packages via the following commands.
+
+ ```bash
+ python -m pip install torch intel-extension-for-pytorch intel-openmp
+ conda install gperftools -y
+ # The example modeling files are showcased based on transformers v4.38.1
+ python -m pip install transformers==4.38.1 accelerate
+
+ # Set the environment variables for performance on Xeon
+ export LD_PRELOAD=$(bash ../../../llm/tools/get_libstdcpp_lib.sh):${CONDA_PREFIX}/lib/libiomp.so:${CONDA_PREFIX}/lib/libtcmalloc.so:${LD_PRELOAD}
+ export KMP_BLOCKTIME=1
+ export KMP_TPAUSE=0
+ export KMP_FORKJOIN_BARRIER_PATTERN=dist,dist
+ export KMP_PLAIN_BARRIER_PATTERN=dist,dist
+ export KMP_REDUCTION_BARRIER_PATTERN=dist,dist
+
+ # Download the example prompt file
+ wget https://intel-extension-for-pytorch.s3.amazonaws.com/miscellaneous/llm/prompt.json
+ ```
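The `LD_PRELOAD` export added above chains several shared objects with `:` separators, and a malformed entry silently disables the preload. One quick sanity check is to split the assembled value and count the `.so` entries; the sketch below uses placeholder paths, not the real `CONDA_PREFIX` locations produced by the exports:

```shell
# Placeholder preload list for illustration only; substitute the value of
# LD_PRELOAD produced by the exports above.
preload="/opt/conda/lib/libiomp.so:/opt/conda/lib/libtcmalloc.so"

# Split on ':' and count the entries that end in '.so'.
count=$(printf '%s' "$preload" | tr ':' '\n' | grep -c '\.so$')
echo "$count"   # prints 2
```

If the count is lower than expected, check that `CONDA_PREFIX` is set (i.e. a conda environment is active) and that `get_libstdcpp_lib.sh` printed a valid path.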
The detailed usage of `run.py` can be obtained by running
6987