docs: Add k8s setup instructions. #28
base: release-0.293-clp-connector
@@ -0,0 +1,24 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
clp/
@@ -0,0 +1,24 @@
apiVersion: v2
name: presto-velox
description: A Helm chart for Kubernetes

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
@@ -0,0 +1,139 @@
# Set up a local K8s cluster for Presto + CLP

Review comment: 🛠️ Refactor suggestion: Keep a single H1 and demote the others:

-# Launch clp-package
+## Launch clp-package
...
-# Create k8s Cluster
+## Create k8s Cluster
...
-# Working with helm chart
+## Working with helm chart
...
-# Delete k8s Cluster
+## Delete k8s Cluster

Also applies to: 26-26, 53-53, 59-59, 94-94

## Install docker

Follow the guide here: [docker]
## Install kubectl

`kubectl` is the command-line tool for interacting with Kubernetes clusters. You will use it to
manage and inspect your k3d cluster.

Follow the guide here: [kubectl]

## Install k3d

k3d is a lightweight wrapper to run k3s (Rancher Labs' minimal Kubernetes distribution) in docker.

Follow the guide here: [k3d]

## Install Helm

Helm is the package manager for Kubernetes.

Follow the guide here: [helm]
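Optionally, before moving on, you can sanity-check that each tool is installed and on your PATH (the exact version output will differ):

```bash
# Optional sanity check: each command should print a version without errors.
docker --version
kubectl version --client
k3d version
helm version
```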
# Launch clp-package
1. Find the clp-package for testing on our official website [clp-json-v0.4.0]. We also put the dataset for the demo there: `mongod-256MB-presto-clp.log.tar.gz`.

2. Untar it (e.g., as sketched below).
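For example, assuming an x86_64 machine; the tarball name below is an assumption, so check the [clp-json-v0.4.0] release page for the exact asset name:

```bash
# Download and unpack the release tarball (asset name is an assumption;
# verify it on the v0.4.0 release page).
wget https://github.com/y-scope/clp/releases/download/v0.4.0/clp-json-x86_64-v0.4.0.tar.gz
tar -xzf clp-json-x86_64-v0.4.0.tar.gz
cd clp-json-x86_64-v0.4.0
```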
3. Replace the content of `etc/clp-config.yml` with the following (also replace the IP address `${REPLACE_IP}` with the actual IP address of the host on which you are running the clp-package):
```yaml
package:
  storage_engine: "clp-s"
database:
  type: "mariadb"
  host: "${REPLACE_IP}"
  port: 6001
  name: "clp-db"
query_scheduler:
  host: "${REPLACE_IP}"
  port: 6002
  jobs_poll_delay: 0.1
  num_archives_to_search_per_sub_job: 16
  logging_level: "INFO"
queue:
  host: "${REPLACE_IP}"
  port: 6003
redis:
  host: "${REPLACE_IP}"
  port: 6004
  query_backend_database: 0
  compression_backend_database: 1
reducer:
  host: "${REPLACE_IP}"
  base_port: 6100
  logging_level: "INFO"
  upsert_interval: 100
results_cache:
  host: "${REPLACE_IP}"
  port: 6005
  db_name: "clp-query-results"
  stream_collection_name: "stream-files"
webui:
  host: "localhost"
  port: 6000
  logging_level: "INFO"
log_viewer_webui:
  host: "localhost"
  port: 6006
```
4. Launch:
```bash
# You probably want to run this in a Python 3.11 environment
sbin/start-clp.sh
```

5. Compress:
```bash
# You can also use your own dataset
sbin/compress.sh --timestamp-key 't.dollar_sign_date' datasets/mongod-256MB-processed.log
```

6. Use a JetBrains IDE to connect to the database as a data source. The database is `clp-db`, the user is `clp-user`, and the password is in `etc/credential.yml`. Then modify the `archive_storage_directory` field in the `clp_datasets` table to `/var/data/archives/default`, and submit the change. (If you prefer the command line, see the sketch below.)
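The same change can be made without an IDE. This is a minimal sketch, assuming the `mysql` (or `mariadb`) client is installed on the host and that `${CLP_DB_PASSWORD}` holds the clp-user password from `etc/credential.yml`:

```bash
# Hypothetical CLI alternative to the IDE step above: point every dataset's
# archive_storage_directory at the path that will be mounted into the k3d cluster.
mysql --host="${REPLACE_IP}" --port=6001 --user=clp-user --password="${CLP_DB_PASSWORD}" clp-db \
    -e "UPDATE clp_datasets SET archive_storage_directory = '/var/data/archives/default';"
```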
# Create k8s Cluster
Create a local k8s cluster with the CLP archives directory mounted into the cluster's nodes:
```bash
# Replace ~/clp-json-x86_64-v0.4.0/var/data/archives with the correct path
k3d cluster create yscope --servers 1 --agents 1 -v $(readlink -f ~/clp-json-x86_64-v0.4.0/var/data/archives):/var/data/archives
```
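k3d should merge the new cluster into your kubeconfig by default; you can optionally confirm the nodes are ready before installing the chart:

```bash
# Optional check: one server node and one agent node should report Ready.
kubectl get nodes
```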
# Working with helm chart
## Install
In `yscope-k8s/templates/presto/presto-coordinator-config.yaml`, replace the `${REPLACE_IP}` in `clp.metadata-db-url=jdbc:mysql://${REPLACE_IP}:6001` with the IP address of the host you are running the clp-package on (basically, match the IP address that you configured in the `etc/clp-config.yml` of the clp-package).

```bash
cd yscope-k8s

helm template .

helm install demo .
```
## Use cli:
After all containers are in the "Running" state (check with `kubectl get pods`):
```bash
kubectl port-forward service/presto-coordinator 8080:8080
```

Then you can further forward port 8080 to your local laptop to access Presto's WebUI at, e.g., http://localhost:8080.
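If the cluster runs on a remote machine, one way to do this is an SSH tunnel from your laptop (a sketch; `<user>` and `<remote-host>` are placeholders for your own values):

```bash
# Tunnel the coordinator's forwarded port from the remote host to your laptop.
ssh -N -L 8080:localhost:8080 <user>@<remote-host>
```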
To use presto-cli:
```bash
./presto-cli-0.293-executable.jar --catalog clp --schema default --server localhost:8080
```

Example query:
```
SELECT * FROM default LIMIT 1;
```
Comment on lines +88 to +90

Review comment: 🧹 Nitpick (assertive): Add a language hint to the SQL block. The example query lacks a language tag, tripping MD040 and losing syntax highlighting:

-```
+```sql
 SELECT * FROM default LIMIT 1;
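As an aside, presto-cli can also run one-off statements non-interactively with `--execute`, which is handy for a quick smoke test:

```bash
# Quick non-interactive check against the coordinator.
./presto-cli-0.293-executable.jar --server localhost:8080 --catalog clp --schema default \
    --execute "SELECT * FROM default LIMIT 1;"
```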
## Uninstall
```bash
helm uninstall demo
```

# Delete k8s Cluster
```bash
k3d cluster delete yscope
```

[clp-json-v0.4.0]: https://github.com/y-scope/clp/releases/tag/v0.4.0
[docker]: https://docs.docker.com/engine/install
[k3d]: https://k3d.io/stable/#installation
[kubectl]: https://kubernetes.io/docs/tasks/tools/#kubectl
[helm]: https://helm.sh/docs/intro/install/
@@ -0,0 +1,8 @@
apiVersion: "v1"
kind: "Secret"
metadata:
  name: "aws-credentials"
  namespace: "default"
type: "Opaque"
data:
  credentials: "W2RlZmF1bHRdCmF3c19hY2Nlc3Nfa2V5X2lkID0gbWluaW9hZG1pbgphd3Nfc2VjcmV0X2FjY2Vzc19rZXkgPSBtaW5pb2FkbWluCg=="

coderabbitai[bot] marked this conversation as resolved (outdated): hard-coding the base64-encoded credentials in the template risks secret leaks. Replace the literal with a templated placeholder that reads the value from values.yaml (marking the value there with `# pragma: allowlist secret`), and document generating and injecting the secret locally instead of committing it. Suggested change:

 data:
-  credentials: "W2RlZmF1bHRdCmF3c19hY2Nlc3Nfa2V5X2lkID0gbWluaW9hZG1pbgphd3Nfc2VjcmV0X2FjY2Vzc19rZXkgPSBtaW5pb2FkbWluCg=="
+  # Base64 of a generated ~/.aws/credentials file
+  credentials: {{ .Values.objectStore.minio.awsCredentials | b64enc | quote }}

🧰 Checkov (3.2.334): [LOW] CKV_K8S_21: The default namespace should not be used (lines 1-8).
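For reference, the committed value is simply a base64-encoded AWS-style credentials file using MinIO's default minioadmin pair. A sketch of generating such a value locally (GNU coreutils `base64` assumed):

```bash
# Produce the base64 string used for the Secret's "credentials" key.
cat <<'EOF' > /tmp/minio-aws-credentials
[default]
aws_access_key_id = minioadmin
aws_secret_access_key = minioadmin
EOF
base64 -w0 /tmp/minio-aws-credentials
```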
@@ -0,0 +1,158 @@
apiVersion: "batch/v1"
kind: "Job"
metadata:
  name: "bucket-creation"
spec:
  template:
    spec:
      containers:
        # Container to deploy the log viewer. To inspect logs, use the following command:
        # `kubectl logs job.batch/bucket-creation`
        - name: "bucket-creation"
          image: "amazon/aws-cli:latest"
          command:
            - "/bin/bash"
          args:
            - "/scripts/bucket-creation.sh"
          env:
            - name: "AWS_ENDPOINT_URL"
              value: "http://{{ .Values.objectStore.minio.serviceName }}.default.svc.cluster.local:{{ .Values.objectStore.minio.apiPort }}"
            - name: "BUCKET_NAME"
              value: "{{ .Values.objectStore.bucketCreation.bucketName }}"
            - name: "PUBLIC"
              value: "{{ .Values.objectStore.bucketCreation.public }}"
          volumeMounts:
            - name: "aws-credentials-volume"
              mountPath: "/root/.aws"
            - name: "scripts-volume"
              mountPath: "/scripts"
          imagePullPolicy: "IfNotPresent"
      restartPolicy: "Never"
      volumes:
        - name: "aws-credentials-volume"
          secret:
            secretName: "aws-credentials"
        - name: "scripts-volume"
          configMap:
            name: "bucket-creation"
---
apiVersion: v1
kind: "ConfigMap"
metadata:
  name: "bucket-creation"
data:
  bucket-creation.sh: |-
    #!/usr/bin/env bash

    # Create a bucket and optionally configure it with public read access
    # on an S3-compatible object store such as MinIO
    #
    # Requirements:
    #
    # * AWS CLI authentication configured using any supported method---for example:
    #   * A credentials file in $HOME/.aws/credentials
    #   * AWS_CONFIG_FILE pointing to a custom credentials file
    #   * Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
    # * Environment variables:
    #   * AWS_ENDPOINT_URL: The S3-compatible object store endpoint URL
    #   * BUCKET_NAME: The name of the bucket where the log viewer should be deployed
    #     * NOTE: This script will make the bucket publicly readable.
    #   * PUBLIC (Optional): If set to "true", configures bucket with public read policy
    set -e
    set -o pipefail
    set -u

    # Emits a log event to stderr with an auto-generated ISO timestamp as well as the given level
    # and message.
    #
    # @param $1: Level string
    # @param $2: Message to be logged
    log() {
        local -r LEVEL=$1
        local -r MESSAGE=$2
        echo "$(date --utc --date="now" +"%Y-%m-%dT%H:%M:%SZ") [${LEVEL}] ${MESSAGE}" >&2
    }

    # Waits for the S3 endpoint to be available, or exits if it's unavailable.
    wait_for_s3_availability() {
        # Check availability by listing available buckets
        log "INFO" "Waiting until ${AWS_ENDPOINT_URL} endpoint becomes available."
        local -r MAX_RETRIES=10
        local -r RETRY_DELAY_IN_SECS=6
        for ((retries = 0; retries < MAX_RETRIES; retries++)); do
            if aws s3 ls --endpoint-url "$AWS_ENDPOINT_URL" >/dev/null; then
                return
            fi
            log "WARN" "S3 API endpoint unavailable. Retrying in ${RETRY_DELAY_IN_SECS} seconds."
            sleep "$RETRY_DELAY_IN_SECS"
        done

        if [[ $retries -eq $MAX_RETRIES ]]; then
            log "ERROR" "Maximum retries reached. S3 API endpoint ${AWS_ENDPOINT_URL} didn't respond."
            exit 1
        fi
    }

    # Creates a bucket
    create_bucket() {
        # Create log-viewer bucket if it doesn't already exist
        log "INFO" "Creating ${BUCKET_S3_URI} bucket."
        if ! aws s3api head-bucket --endpoint-url "$AWS_ENDPOINT_URL" --bucket "$BUCKET_NAME" \
            2>/dev/null; then
            aws s3api create-bucket --endpoint-url "$AWS_ENDPOINT_URL" --bucket "$BUCKET_NAME"
        fi
    }

    # Configures a bucket with public read access
    configure_bucket() {
        # Define and apply the bucket policy for public read access
        log "INFO" "Applying public read access policy to ${BUCKET_S3_URI}"
        local -r POLICY=$(
            cat <<EOP
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::${BUCKET_NAME}/*"
            }
        ]
    }
    EOP
        )
        if ! aws s3api put-bucket-policy \
            --endpoint-url "$AWS_ENDPOINT_URL" \
            --bucket "$BUCKET_NAME" \
            --policy "$POLICY"; then
            log "ERROR" "Failed to set bucket policy for ${BUCKET_S3_URI}"
            exit 1
        fi
    }

    # Validate required environment variables
    readonly REQUIRED_ENV_VARS=(
        # Example: "http://minio:9000"
        "AWS_ENDPOINT_URL"

        # Example: "logs"
        "BUCKET_NAME"
    )
    for var in "${REQUIRED_ENV_VARS[@]}"; do
        if ! [[ -v "$var" ]]; then
            log "ERROR" "$var environment variable must be set."
            exit 1
        fi
    done

    readonly BUCKET_S3_URI="s3://${BUCKET_NAME}"

    wait_for_s3_availability
    create_bucket
    if [[ "${PUBLIC:-false}" = "true" ]]; then
        configure_bucket
    fi

    log "INFO" "Bucket ${BUCKET_NAME} created and configured successfully."
@@ -0,0 +1,39 @@
apiVersion: "v1"
kind: "Pod"
metadata:
  labels:
    app: {{ .Values.objectStore.minio.serviceName }}
  name: {{ .Values.objectStore.minio.serviceName }}
spec:
  containers:
| kind: "Pod" | |
| metadata: | |
| labels: | |
| app: {{ .Values.objectStore.minio.serviceName }} | |
| name: {{ .Values.objectStore.minio.serviceName }} | |
| spec: | |
| containers: | |
| apiVersion: v1 | |
| kind: "Pod" | |
| metadata: | |
| labels: | |
| app: {{ .Values.objectStore.minio.serviceName }} | |
| name: {{ .Values.objectStore.minio.serviceName }} | |
| spec: | |
| containers: |
🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 4-4: too many spaces inside braces
(braces)
[error] 4-4: too many spaces inside braces
(braces)
[error] 5-5: too many spaces inside braces
(braces)
[error] 5-5: too many spaces inside braces
(braces)
🤖 Prompt for AI Agents
In yscope-k8s/templates/object-store/minio.yaml at lines 1 to 7, the Kubernetes
Pod manifest is missing the required apiVersion field. Add the appropriate
apiVersion line above the kind field to specify the API version for the Pod
resource, ensuring the manifest can be applied successfully.
Review comment (outdated): Entry-point likely fails: /bin/bash is absent in the slim MinIO image. minio/minio images are based on distroless and ship only /bin/sh, so the Pod will crash with exec: "/bin/bash": stat /bin/bash: no such file or directory. Suggested change:

   - name: {{ .Values.objectStore.minio.serviceName }}
     image: "minio/minio:RELEASE.2025-05-24T17-08-30Z"
     command:
-      - "/bin/bash"
+      - "/bin/sh"
       - "-c"
     args:
       - "minio server /data --console-address :{{ .Values.objectStore.minio.consolePort }} --address :{{ .Values.objectStore.minio.apiPort }}"
     volumeMounts:
Review comment (outdated): 🛠️ Refactor suggestion: Default root credentials not overridden. The container will boot with the well-known minioadmin:minioadmin pair, which is insecure even for local setups. Set MINIO_ROOT_USER and MINIO_ROOT_PASSWORD from a secret, e.g.:

          envFrom:
            - secretRef:
                name: minio-root-creds

Generate the secret in a separate template.
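For local testing, the same secret could also be created imperatively before installing the chart (a sketch; the name minio-root-creds and the credential values are placeholders):

```bash
# Create the MinIO root-credentials secret referenced by the suggested envFrom.
# Replace the example values with your own before use.
kubectl create secret generic minio-root-creds \
    --from-literal=MINIO_ROOT_USER=admin \
    --from-literal=MINIO_ROOT_PASSWORD='change-me-please'
```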
Review comment (outdated): 🛠️ Refactor suggestion: Prefer a Deployment over a naked Pod for automatic restarts and upgrades. Using a standalone Pod means the object store will not be re-created if the node dies or the Pod is deleted. A Deployment (or a StatefulSet, if persistent storage is added later) gives you rolling upgrades and self-healing for free.
Review comment (outdated): 🧹 Nitpick (assertive): Health probes missing. Adding a livenessProbe and readinessProbe against the health endpoint (/minio/health/ready) improves observability and prevents routing traffic to an unready server.
Review comment: This line is unnecessary here.