41 changes: 41 additions & 0 deletions .github/workflows/code-formatting.yml
@@ -0,0 +1,41 @@
name: Code Formatting

on:
  workflow_call:
  push:
    branches:
      - main

jobs:
  formatting-check:
    name: Code Formatting Check
    runs-on: ubuntu-latest
    if: github.event.pull_request.draft == false
    env:
      ZENML_DEBUG: 1
      ZENML_ANALYTICS_OPT_IN: false
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      - name: Install latest ruff
        run: pip install --upgrade ruff

      - name: Run formatting script
        run: bash scripts/format.sh

      - name: Check for changes
        id: git-check
        run: |
          git diff --exit-code || echo "changes=true" >> $GITHUB_OUTPUT

      - name: Fail if changes were made
        if: steps.git-check.outputs.changes == 'true'
        run: |
          echo "::error::Formatting check failed. Please run 'scripts/format.sh' locally and commit the changes."
          exit 1
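
The check simply re-runs the repository's `scripts/format.sh` and fails when `git diff` reports that the script modified any files. Contributors can reproduce the same gate locally before pushing. A minimal sketch of such a local helper (not part of this PR; the script name and its location are assumptions) might look like:

```python
# check_formatting.py - hypothetical local helper mirroring the CI formatting gate,
# intended to be run from the project root.
import subprocess
import sys


def main() -> int:
    # Re-run the same formatting script the workflow uses.
    subprocess.run(["bash", "scripts/format.sh"], check=True)

    # `git diff --exit-code` returns non-zero when the script changed any files.
    diff = subprocess.run(["git", "diff", "--exit-code"])
    if diff.returncode != 0:
        print(
            "Formatting check failed: run 'bash scripts/format.sh' "
            "and commit the changes."
        )
        return 1

    print("Formatting check passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```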
36 changes: 0 additions & 36 deletions .github/workflows/gpt4_summarizer.yml

This file was deleted.

60 changes: 0 additions & 60 deletions .github/workflows/production_run_complete_llm.yml

This file was deleted.

10 changes: 9 additions & 1 deletion .github/workflows/pull_request.yml
@@ -1,4 +1,4 @@
-name: Spell Checking
+name: Pull Request Checks

on:
  pull_request:

@@ -25,3 +25,11 @@ jobs:
  markdown-link-check:
    uses: ./.github/workflows/markdown-link-check.yml
    if: github.event.pull_request.draft == false

  code-formatting-check:
    uses: ./.github/workflows/code-formatting.yml
    if: github.event.pull_request.draft == false

  readme-projects-check:
    uses: ./.github/workflows/readme-projects-check.yml
    if: github.event.pull_request.draft == false
21 changes: 21 additions & 0 deletions .github/workflows/readme-projects-check.yml
@@ -0,0 +1,21 @@
name: README Projects Check

on:
  workflow_call:

jobs:
  readme-projects-check:
    name: Check Projects in README
    runs-on: ubuntu-latest
    if: github.event.pull_request.draft == false
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      - name: Run README projects check
        run: python3 scripts/check_readme_projects.py
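
The contents of `scripts/check_readme_projects.py` are not shown in this diff. As a rough sketch of what such a check might do (an assumption, not the actual script), it could verify that every top-level project directory is mentioned in the README:

```python
# Hypothetical sketch of a README projects check; the real
# scripts/check_readme_projects.py is not included in this diff.
import sys
from pathlib import Path

# Assumes the script lives in scripts/, so the repo root is one level up.
REPO_ROOT = Path(__file__).resolve().parent.parent

# Directories that are not ZenML example projects (assumed list).
IGNORED = {".github", "scripts"}


def main() -> int:
    readme = (REPO_ROOT / "README.md").read_text(encoding="utf-8")
    missing = [
        d.name
        for d in REPO_ROOT.iterdir()
        if d.is_dir()
        and d.name not in IGNORED
        and not d.name.startswith(".")
        and d.name not in readme
    ]
    if missing:
        print(f"Projects missing from README.md: {', '.join(missing)}")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```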
68 changes: 0 additions & 68 deletions .github/workflows/staging_run_complete_llm.yml

This file was deleted.

3 changes: 3 additions & 0 deletions .gitignore
@@ -167,3 +167,6 @@ finetuned-snowflake-arctic-embed-m-v1.5/

# ollama ignores
nohup.out

# Claude
.claude/
17 changes: 15 additions & 2 deletions CONTRIBUTING.md
@@ -99,10 +99,23 @@ the ["fork-and-pull" Git workflow](https://github.com/susam/gitpr)
4. Checkout the **main** branch <- `git checkout main`.
5. Create a branch locally off the **main** branch with a succinct but descriptive name.
6. Commit changes to the branch.
-7. Push changes to your fork.
-8. Open a PR in our repository to the `main` branch and
+7. Format your code by running `bash scripts/format.sh` before committing.
+8. Push changes to your fork.
+9. Open a PR in our repository to the `main` branch and
follow the PR template so that we can efficiently review the changes.

#### Code Formatting

All code must pass our formatting checks before it can be merged. We use [ruff](https://github.com/astral-sh/ruff) for code formatting and linting.

To format your code locally:
```bash
# Run from the project root
bash scripts/format.sh
```

Our CI pipeline automatically checks if your code is properly formatted. If the check fails, you'll need to run the formatting script locally and commit the changes before your PR can be merged.
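
Most of the Python changes later in this PR are exactly this kind of ruff reflow. As an illustration only (a hypothetical step, not code from the repository), ruff rewrites a parenthesized return annotation like so:

```python
from typing import Annotated

# Before formatting, a return annotation wrapped in extra parentheses:
# def my_step() -> (
#     Annotated[float, "failure_rate"]
# ):
#     ...

# After running scripts/format.sh, ruff reflows it to:
def my_step() -> Annotated[float, "failure_rate"]:
    """Hypothetical step used only to illustrate the formatting style."""
    return 0.0
```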

### 🚨 Reporting a Vulnerability

If you think you have found a vulnerability, and even if you are not sure about it,
9 changes: 9 additions & 0 deletions README.md
@@ -88,6 +88,15 @@ installation details.
We welcome contributions from anyone to showcase your project built using ZenML.
See our [contributing guide](./CONTRIBUTING.md) to start.

## Code Quality

All code contributions must pass our automated code quality checks:
- **Code Formatting**: We use [ruff](https://github.com/astral-sh/ruff) for code formatting and linting
- **Spelling**: We check for typos and spelling errors
- **Markdown Links**: We verify that all links in documentation work properly

Our CI pipeline will automatically check your PR for these issues. Remember to run `bash scripts/format.sh` locally before submitting your PR to ensure it passes the formatting checks.

# 🆘 Getting Help

By far the easiest and fastest way to get help is to:
Original file line number Diff line number Diff line change
@@ -31,14 +31,10 @@


@step(enable_cache=False)
-def deployment_deploy() -> (
-    Annotated[
-        Optional[DatabricksDeploymentService],
-        ArtifactConfig(
-            name="databricks_deployment", is_deployment_artifact=True
-        ),
-    ]
-):
+def deployment_deploy() -> Annotated[
+    Optional[DatabricksDeploymentService],
+    ArtifactConfig(name="databricks_deployment", is_deployment_artifact=True),
+]:
    """Predictions step.

    This is an example of a predictions step that takes the data in and returns
6 changes: 2 additions & 4 deletions gamesense/steps/log_metadata.py
@@ -15,7 +15,6 @@
# limitations under the License.
#

-from typing import Any, Dict

from zenml import get_step_context, log_metadata, step

@@ -33,9 +32,8 @@ def log_metadata_from_step_artifact(
"""

context = get_step_context()
metadata_dict: Dict[str, Any] = context.pipeline_run.steps[
step_name
].outputs[artifact_name]
# Access the artifact metadata but don't store the unused variable
_ = context.pipeline_run.steps[step_name].outputs[artifact_name]

log_metadata(
artifact_name=artifact_name,
Original file line number Diff line number Diff line change
@@ -47,7 +47,9 @@ def deploy_to_huggingface(
    save_model_to_deploy.entrypoint()

    logger.info("Model saved locally. Pushing to HuggingFace...")
-    assert secret, "No secret found with name 'huggingface_creds'. Please create one with your `token`."
+    assert secret, (
+        "No secret found with name 'huggingface_creds'. Please create one with your `token`."
+    )

    token = secret.secret_values["token"]
    api = HfApi(token=token)
10 changes: 4 additions & 6 deletions huggingface-sagemaker/steps/promotion/promote_get_metrics.py
@@ -27,12 +27,10 @@


@step
-def promote_get_metrics() -> (
-    Tuple[
-        Annotated[Dict[str, Any], "latest_metrics"],
-        Annotated[Dict[str, Any], "current_metrics"],
-    ]
-):
+def promote_get_metrics() -> Tuple[
+    Annotated[Dict[str, Any], "latest_metrics"],
+    Annotated[Dict[str, Any], "current_metrics"],
+]:
    """Get metrics for comparison for promoting a model.

    This is an example of a metric retrieval step. It is used to retrieve
12 changes: 6 additions & 6 deletions llm-complete-guide/steps/eval_retrieval.py
@@ -275,9 +275,9 @@ def perform_small_retrieval_evaluation(use_reranking: bool) -> float:


@step
-def retrieval_evaluation_small() -> (
-    Annotated[float, "small_failure_rate_retrieval"]
-):
+def retrieval_evaluation_small() -> Annotated[
+    float, "small_failure_rate_retrieval"
+]:
    """Executes the retrieval evaluation step without reranking.

    Returns:
@@ -287,9 +287,9 @@ def retrieval_evaluation_small() -> (


@step
-def retrieval_evaluation_small_with_reranking() -> (
-    Annotated[float, "small_failure_rate_retrieval_reranking"]
-):
+def retrieval_evaluation_small_with_reranking() -> Annotated[
+    float, "small_failure_rate_retrieval_reranking"
+]:
    """Executes the retrieval evaluation step with reranking.

    Returns: