
Commit 3ea5a88

shell commands corrected for copy
1 parent c473a03 commit 3ea5a88

File tree

1 file changed (+7 −7 lines)


readme.md

Lines changed: 7 additions & 7 deletions
@@ -22,13 +22,13 @@ For more: [Pinecone LCEL Article](https://www.pinecone.io/learn/series/langchain

 1. Clone the repo using git:
 ```shell
-$ git clone https://github.com/rauni-iitr/langchain_chromaDB_opensourceLLM_streamlit.git
+git clone https://github.com/rauni-iitr/langchain_chromaDB_opensourceLLM_streamlit.git
 ```

 2. Create a virtual environment with 'venv' or 'conda', and activate it.
 ```shell
-$ python3 -m venv .venv
-$ source .venv/bin/activate
+python3 -m venv .venv
+source .venv/bin/activate
 ```

 3. Now this RAG application is built using a few dependencies:
@@ -42,7 +42,7 @@ For more: [Pinecone LCEL Article](https://www.pinecone.io/learn/series/langchain

 You can install all of these with pip:
 ```shell
-$ pip install pypdf chromadb transformers sentence-transformers streamlit
+pip install pypdf chromadb transformers sentence-transformers streamlit
 ```
 4. Installing llama-cpp-python:
    * This project uses [LlamaCpp-Python](https://github.com/abetlen/llama-cpp-python) for GGUF model loading and inference (llama-cpp-python >= 0.1.83); if you are using GGML models, you need llama-cpp-python <= 0.1.76.
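As an aside on how these dependencies fit together, here is a minimal ingest-and-query sketch; it is not the repo's actual code, and the PDF path, collection name, and embedding model are illustrative assumptions:

```python
# Hypothetical sketch of the RAG ingest path these dependencies enable;
# file name, collection name, and model choice are placeholders.
from pypdf import PdfReader
import chromadb
from sentence_transformers import SentenceTransformer

# Extract text from a PDF, one string per page.
reader = PdfReader("example.pdf")  # hypothetical input file
pages = [page.extract_text() or "" for page in reader.pages]

# Embed each page with a sentence-transformers model.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(pages).tolist()

# Store the embeddings in an in-memory Chroma collection and query it.
client = chromadb.Client()
collection = client.create_collection("docs")
collection.add(
    ids=[str(i) for i in range(len(pages))],
    documents=pages,
    embeddings=embeddings,
)
results = collection.query(
    query_embeddings=model.encode(["What is this document about?"]).tolist(),
    n_results=min(2, len(pages)),
)
print(results["documents"])
```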
@@ -51,12 +51,12 @@ For more: [Pinecone LCEL Article](https://www.pinecone.io/learn/series/langchain

 For Nvidia GPU inference, use 'cuBLAS'; run the commands below in your terminal:
 ```shell
-$ CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.83 --no-cache-dir
+CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.83 --no-cache-dir
 ```

 For Apple Metal (M1/M2) based inference, use 'METAL'; run:
 ```shell
-$ CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.83 --no-cache-dir
+CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.83 --no-cache-dir
 ```
 For more info on setting the right flags for whatever device your app runs on, see [here](https://codesandbox.io/p/github/imotai/llama-cpp-python/main).
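To sanity-check the build, a minimal loading sketch; the model path is a placeholder for whatever GGUF file you downloaded, and the layer count is a tunable assumption:

```python
# Quick check that llama-cpp-python loads a GGUF model with GPU/Metal offload.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-model.gguf",  # hypothetical path to your GGUF file
    n_gpu_layers=32,  # layers to offload to GPU/Metal; set 0 for CPU-only
    n_ctx=2048,       # context window size
)
out = llm("Q: What is retrieval-augmented generation? A:", max_tokens=64)
print(out["choices"][0]["text"])
```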
@@ -73,7 +73,7 @@ For more: [Pinecone LCEL Article](https://www.pinecone.io/learn/series/langchain

 To run the model:

 ```shell
-$ streamlit run st_app.py
+streamlit run st_app.py
 ```
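For orientation, a hypothetical minimal skeleton of the kind of app `streamlit run` serves (by default at http://localhost:8501); this is not the repo's actual st_app.py:

```python
# Hypothetical skeleton only, not the repo's st_app.py. It shows the shape
# of the front end that `streamlit run` serves.
import streamlit as st

def answer_question(question: str) -> str:
    # Stand-in for the real retrieval (Chroma) + generation (LlamaCpp) chain.
    return f"(placeholder answer for: {question})"

st.title("Chat with your documents")
query = st.text_input("Ask a question about the indexed PDFs")
if query:
    st.write(answer_question(query))
```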
