
Commit 5739e47

Merge branch 'feature/llm-complete-updates' of https://github.com/zenml-io/zenml-projects into feature/llm-complete-updates

2 parents: e0b54bb + 78e94f2

File tree: 5 files changed, +55 −20 lines
Lines changed: 49 additions & 0 deletions (new file)

```yaml
name: Staging Trigger LLM-COMPLETE
on:
  pull_request:
    types: [opened, synchronize]
    branches: [staging, main]
concurrency:
  # New commit on branch cancels running workflows of the same branch
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  run-staging-workflow:
    runs-on: ubuntu-dind-runners
    env:
      ZENML_HOST: ${{ secrets.ZENML_HOST }}
      ZENML_API_KEY: ${{ secrets.ZENML_API_KEY }}
      ZENML_STAGING_STACK: 51a49786-b82a-4646-bde7-a460efb0a9c5
      ZENML_GITHUB_SHA: ${{ github.event.pull_request.head.sha }}
      ZENML_GITHUB_URL_PR: ${{ github.event.pull_request._links.html.href }}
      ZENML_DEBUG: true
      ZENML_ANALYTICS_OPT_IN: false
      ZENML_LOGGING_VERBOSITY: INFO

    steps:
      - name: Check out repository code
        uses: actions/checkout@v3

      - uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install requirements
        run: |
          pip3 install -r requirements.txt
          zenml integration install gcp -y

      - name: Connect to ZenML server
        run: |
          zenml connect --url $ZENML_HOST --api-key $ZENML_API_KEY

      - name: Set stack (Staging)
        if: ${{ github.base_ref == 'staging' }}
        run: |
          zenml stack set ${{ env.ZENML_STAGING_STACK }}

      - name: Run pipeline (Staging)
        if: ${{ github.base_ref == 'staging' }}
        run: |
          python run.py --rag --evaluation --no-cache
```
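The workflow above drives ZenML entirely through its CLI. As a rough illustration, the snippet below sketches what the "Set stack (Staging)" step amounts to with ZenML's Python client instead; it assumes the same `ZENML_STAGING_STACK` variable the workflow exports and a client that is already authenticated against the server.

```python
# A minimal sketch (not part of this commit) of the "Set stack (Staging)"
# step, using ZenML's Python client rather than the CLI. Assumes the
# ZENML_STAGING_STACK environment variable is set as in the workflow's env
# block and that the client is already connected to the ZenML server.
import os

from zenml.client import Client

# Activate the staging stack by UUID -- the programmatic counterpart of
# `zenml stack set $ZENML_STAGING_STACK`.
Client().activate_stack(os.environ["ZENML_STAGING_STACK"])
```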

llm-complete-guide/README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -43,7 +43,7 @@ environment and install the dependencies using the following command:
 pip install -r requirements.txt
 ```
 
-Depending on your setup you may run into some issues when running the pip install command with the
+Depending on your setup you may run into some issues when running the `pip install` command with the
 `flash_attn` package. In that case running `FLASH_ATTENTION_SKIP_CUDA_BUILD=TRUE pip install flash-attn --no-build-isolation` could help you.
 
 In order to use the default LLM for this query, you'll need an account and an
````

llm-complete-guide/configs/rag.yaml

Lines changed: 4 additions & 0 deletions

```diff
@@ -1,6 +1,7 @@
 # environment configuration
 settings:
   docker:
+    parent_image: "zenmldocker/prepare-release:base-0.68.0"
     requirements:
       - unstructured
       - sentence-transformers>=3
@@ -10,3 +11,6 @@ settings:
       - numpy
       - psycopg2-binary
       - tiktoken
+    environment:
+      ZENML_ENABLE_RICH_TRACEBACK: FALSE
+      ZENML_LOGGING_VERBOSITY: INFO
```

llm-complete-guide/configs/rag_eval.yaml

Lines changed: 1 addition & 0 deletions

```diff
@@ -3,6 +3,7 @@ enable_cache: False
 # environment configuration
 settings:
   docker:
+    parent_image: "zenmldocker/prepare-release:base-0.68.0"
     requirements:
       - unstructured
       - sentence-transformers>=3
```
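Both config diffs pin the pipeline's Docker build to the same `zenmldocker/prepare-release:base-0.68.0` parent image, and `rag.yaml` additionally injects two logging-related environment variables into the container. For orientation, here is a hedged sketch of the equivalent settings expressed with ZenML's `DockerSettings` in Python; `rag_pipeline` is a hypothetical placeholder, not code from this repository.

```python
# A sketch (not from this commit) of the YAML settings above expressed via
# ZenML's DockerSettings; the pipeline below is an empty placeholder.
from zenml import pipeline
from zenml.config import DockerSettings

docker_settings = DockerSettings(
    parent_image="zenmldocker/prepare-release:base-0.68.0",
    requirements=[
        "unstructured",
        "sentence-transformers>=3",
        # ...remaining requirements from the config...
    ],
    environment={
        "ZENML_ENABLE_RICH_TRACEBACK": "FALSE",
        "ZENML_LOGGING_VERBOSITY": "INFO",
    },
)


@pipeline(settings={"docker": docker_settings})
def rag_pipeline() -> None:
    """Hypothetical placeholder pipeline; steps omitted."""
```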

llm-complete-guide/run.py

Lines changed: 0 additions & 19 deletions

```diff
@@ -111,13 +111,6 @@
     default=False,
     help="Run the synthetic data pipeline.",
 )
-@click.option(
-    "--local",
-    "local",
-    is_flag=True,
-    default=False,
-    help="Uses a local LLM via Ollama.",
-)
 @click.option(
     "--embeddings",
     "embeddings",
@@ -132,13 +125,6 @@
     default=False,
     help="Uses Argilla annotations.",
 )
-@click.option(
-    "--dummyembeddings",
-    "dummyembeddings",
-    is_flag=True,
-    default=False,
-    help="Fine-tunes embeddings.",
-)
 @click.option(
     "--reranked",
     "reranked",
@@ -160,9 +146,7 @@ def main(
     model: str = OPENAI_MODEL,
     no_cache: bool = False,
     synthetic: bool = False,
-    local: bool = False,
     embeddings: bool = False,
-    dummyembeddings: bool = False,
     argilla: bool = False,
     reranked: bool = False,
     chunks: bool = False,
@@ -177,7 +161,6 @@
         no_cache (bool): If `True`, cache will be disabled.
         synthetic (bool): If `True`, the synthetic data pipeline will be run.
         local (bool): If `True`, the local LLM via Ollama will be used.
-        dummyembeddings (bool): If `True`, dummyembeddings will be used
         embeddings (bool): If `True`, the embeddings will be fine-tuned.
         argilla (bool): If `True`, the Argilla annotations will be used.
         chunks (bool): If `True`, the chunks pipeline will be run.
@@ -225,8 +208,6 @@ def main(
             os.path.dirname(os.path.realpath(__file__)), "configs", "embeddings.yaml"
         )
         finetune_embeddings.with_options(config_path=config_path, **embeddings_finetune_args)()
-    if dummyembeddings:
-        chunking_experiment.with_options(**pipeline_args)()
     if chunks:
         generate_chunk_questions.with_options(**pipeline_args)()
 
```
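The deletions above remove the unused `--local` and `--dummyembeddings` flags along with their parameters and call sites (note that the `local (bool)` docstring line survives in the file). For reference, the sketch below is a simplified, hypothetical reconstruction of the click pattern `run.py` follows, reduced to the three flags the new workflow passes (`--rag --evaluation --no-cache`); the two pipeline functions are stand-in stubs, not the repository's real pipelines.

```python
# A simplified, hypothetical sketch of run.py's CLI surface, reduced to the
# flags invoked by the staging workflow. The pipeline functions are stubs.
import click


def run_rag_pipeline(enable_cache: bool) -> None:
    """Stand-in for the repo's basic RAG pipeline entry point."""
    print(f"Running RAG pipeline (enable_cache={enable_cache})")


def run_evaluation_pipeline(enable_cache: bool) -> None:
    """Stand-in for the repo's RAG evaluation pipeline entry point."""
    print(f"Running evaluation pipeline (enable_cache={enable_cache})")


@click.command()
@click.option("--rag", "rag", is_flag=True, default=False, help="Run the RAG pipeline.")
@click.option("--evaluation", "evaluation", is_flag=True, default=False, help="Run the evaluation pipeline.")
@click.option("--no-cache", "no_cache", is_flag=True, default=False, help="Disable caching.")
def main(rag: bool, evaluation: bool, no_cache: bool) -> None:
    if rag:
        run_rag_pipeline(enable_cache=not no_cache)
    if evaluation:
        run_evaluation_pipeline(enable_cache=not no_cache)


if __name__ == "__main__":
    main()
```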
