
Commit fb39139

chore(pre-commit): fix lints on all files to make future changes less burdensome

1 parent dd1141a commit fb39139

File tree

25 files changed: +41 −69 lines changed
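The changes below are almost entirely mechanical: trailing whitespace, final newlines, comment spacing, and stray blank lines. For orientation, a minimal `.pre-commit-config.yaml` producing this kind of cleanup might look like the sketch below; the hook selection and pins are assumptions, since the config file itself is not part of this commit.

```yaml
# Hypothetical .pre-commit-config.yaml; the hooks match the kinds of fixes
# visible in this commit, but the repo's real config is not shown here.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0  # assumed pin
    hooks:
      - id: trailing-whitespace   # e.g. the demos-v2.yaml description line
      - id: end-of-file-fixer     # e.g. the .gitignore hunk
  - repo: https://github.com/adrienverge/yamllint
    rev: v1.35.1  # assumed pin
    hooks:
      - id: yamllint              # uses the .yamllint.yaml adjusted below
```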

.gitattributes

Lines changed: 1 addition & 1 deletion

@@ -2,4 +2,4 @@ docs/** -linguist-documentation

 *.adoc linguist-detectable
 *.yaml linguist-detectable
-*.yml linguist-detectable
+*.yml linguist-detectable

(Here and in a few hunks below, the removed and added lines render identically; the change is whitespace-only, such as trailing whitespace or a missing final newline.)

.github/workflows/dev_spark-k8s-with-scikit-learn.yaml

Lines changed: 1 addition & 1 deletion

@@ -34,7 +34,7 @@ jobs:
 # TODO: the image 3.5.0-stackable24.3.0 does not have an arm64 build.
 # Re-activate the arm runner when the image is updated to one that does.
 # Also adjust publish_manifest step to include arm architecture
-#- {name: "ubicloud-standard-8-arm", arch: "arm64"}
+# - {name: "ubicloud-standard-8-arm", arch: "arm64"}
 steps:
   - name: Checkout Repository
     uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
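The only change here is comment spacing: yamllint's comments rule wants a space after `#`, so `#-` becomes `# -`. For context, the commented entry appears to belong to a runner matrix; a sketch of how it would typically sit follows, where the surrounding keys and the amd64 entry are assumptions, not taken from the hunk:

```yaml
# Sketch only; key names and the amd64 entry are assumed.
strategy:
  matrix:
    runner:
      - {name: "ubicloud-standard-8", arch: "amd64"}
      # - {name: "ubicloud-standard-8-arm", arch: "arm64"}
```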

.gitignore

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-.env
+.env

.yamllint.yaml

Lines changed: 1 addition & 0 deletions

@@ -8,3 +8,4 @@ rules:
   comments:
     min-spaces-from-content: 1 # Needed due to https://github.com/adrienverge/yamllint/issues/443
   braces: disable # because the yaml files are templates which can have {{ ... }}
+  indentation: disable # There are many conflicting styles and it isn't so important in this repo. It can be enabled later if we want consistency.
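To illustrate the existing `braces: disable` comment: the repo's YAML files are templates, so they can contain `{{ ... }}` placeholders that yamllint's braces rule would flag as badly spaced flow mappings. A made-up example of such a line (file and placeholder name are hypothetical):

```yaml
# Hypothetical templated manifest; the {{ ... }} placeholder is substituted
# before the YAML is applied, so the braces rule must stay disabled here.
data:
  s3Endpoint: {{ minioEndpointUrl }}
```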

demos/data-lakehouse-iceberg-trino-spark/create-trino-tables.yaml

Lines changed: 0 additions & 45 deletions

@@ -338,29 +338,6 @@ data:
         )
         """)
- (23 blank lines removed)
         run_query(connection, """
             create table if not exists lakehouse.house_sales.house_sales with (
                 partitioning = ARRAY['year(date_of_transfer)']

@@ -504,23 +481,6 @@ data:
         where tpep_pickup_datetime >= date '2015-01-01' and tpep_pickup_datetime <= now() -- We have to remove some invalid records
         """)
- (17 blank lines removed)
         run_query(connection, """
             create or replace materialized view lakehouse.taxi.yellow_tripdata_daily_agg as
             select

@@ -566,11 +526,6 @@ data:
             REFRESH MATERIALIZED VIEW lakehouse.taxi.yellow_tripdata_monthly_agg
         """)
- (5 blank lines removed)
         # At this point Spark should have created the needed underlying tables
         run_query(connection, """
             create or replace view lakehouse.smart_city.shared_bikes_station_status_latest as

demos/demos-v1.yaml

Lines changed: 1 addition & 0 deletions

@@ -1,3 +1,4 @@
+---
 demos:
   please-update:
     description: This version of stackablectl is outdated, please visit https://docs.stackable.tech/stackablectl/stable/installation.html on how to get the latest version

demos/demos-v2.yaml

Lines changed: 1 addition & 1 deletion

@@ -73,7 +73,7 @@ demos:
       memory: 42034Mi
       pvc: 75Gi # 30Gi for Kafka
   nifi-kafka-druid-water-level-data:
-    description: Demo ingesting water level data into Kafka using NiFi, streaming it into Druid and creating a Superset dashboard
+    description: Demo ingesting water level data into Kafka using NiFi, streaming it into Druid and creating a Superset dashboard
     documentation: https://docs.stackable.tech/stackablectl/stable/demos/nifi-kafka-druid-water-level-data.html
     stackableStack: nifi-kafka-druid-superset-s3
     labels:

demos/end-to-end-security/README.md

Lines changed: 1 addition & 0 deletions

@@ -5,6 +5,7 @@
 3. Optional: Add Database connection
 4. Add admin user in Keycloak to all relevant groups (so that he has access to the tables, so he can create datasets, charts and dashboards).
 5. `pgdump` the Postgres and update the dump in Git. For that shell into `postgresql-superset-0` and execute
+
    ```sh
    export PGPASSWORD="$POSTGRES_POSTGRES_PASSWORD"

demos/nifi-kafka-druid-earthquake-data/download_earthquake_data.sh

Lines changed: 3 additions & 0 deletions

@@ -1,3 +1,6 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
 # This script is not used for the demo
 # Its purpose is to document how to retrieve the used earthquake data
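The new header is what shell linters expect: the shebang tells the linter which dialect to check, and `set -euo pipefail` makes the script abort on errors, unset variables, and pipeline failures. If the repository wires shell linting into pre-commit, the hook entry might look like the sketch below; the repo choice and pin are assumptions, not shown in this commit.

```yaml
# Hypothetical pre-commit hook entry for shell linting; not part of this commit.
- repo: https://github.com/shellcheck-py/shellcheck-py
  rev: v0.10.0.1  # assumed pin
  hooks:
    - id: shellcheck
```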
(file name missing from this capture)

Lines changed: 3 additions & 1 deletion

@@ -1,3 +1,5 @@
+# ent-to-end-security
+
 The images are exported from
-https://docs.google.com/presentation/d/19h3sBve_dOSgpZ6eTZqmYXxGoiQqXNs1/edit?usp=sharing&ouid=105504333647320477456&rtpof=true&sd=true.
+<https://docs.google.com/presentation/d/19h3sBve_dOSgpZ6eTZqmYXxGoiQqXNs1/edit?usp=sharing&ouid=105504333647320477456&rtpof=true&sd=true>
 Ask Sebastian for access if needed.
