Commit 271bab6

Add setup-env script and improve reset-data script
- Add setup-env.sh script to automate environment variable configuration
- Add confirmation prompt to reset-data.sh requiring 'RESET' input
- Add error handling with abort on critical operation failures
- Add dynamic OpenSearch mapping retrieval before index deletion
- Fix variable quoting throughout scripts (shellcheck compliant)
- Format scripts with shfmt for consistent style
- Update README with quick setup instructions and safety features
- Update README to reference new reset script capabilities

Signed-off-by: Asitha de Silva <asithade@gmail.com>
1 parent: aa8a9d3

File tree

3 files changed: +282 −92 lines

README.md

Lines changed: 55 additions & 13 deletions

````diff
@@ -15,15 +15,36 @@ These instructions and playbooks assume the script's execution environment has a
 
 ## Setup
 
-### 1. Set Environment Variables
+### Quick Setup (Recommended)
 
-#### NATS Configuration
+Use the automated setup script to configure all environment variables:
+
+```bash
+# Source the script to set environment variables in your current shell
+source ./scripts/setup-env.sh
+```
+
+The script will automatically:
+
+- Set `NATS_URL` to the default Kubernetes service URL
+- Retrieve and set `OPENFGA_STORE_ID` from the OpenFGA API
+- Retrieve and set `JWT_RSA_SECRET` from the heimdall-signer-cert secret
+
+After running this script, you can proceed directly to [Running Mock Data Generation](#running-mock-data-generation).
+
+### Manual Setup (Alternative)
+
+If you prefer to set environment variables manually or need to customize values:
+
+#### 1. Set Environment Variables
+
+##### NATS Configuration
 
 ```bash
 export NATS_URL="lfx-platform-nats.lfx.svc.cluster.local:4222"
 ```
 
-#### OpenFGA Configuration
+##### OpenFGA Configuration
 
 First, confirm the OpenFGA Store ID:
 
````
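
The new `scripts/setup-env.sh` is the third changed file in this commit, but its diff is not shown in this view. As a rough sketch of what a script matching the README's description could look like (the OpenFGA service hostname and store-selection logic below are assumptions, not the committed contents):

```bash
#!/usr/bin/env bash
# Sketch only, not the committed scripts/setup-env.sh. Meant to be sourced
# so the exports persist in the calling shell.

# Default NATS URL for the in-cluster service (as documented in the README)
export NATS_URL="lfx-platform-nats.lfx.svc.cluster.local:4222"

# Retrieve the store ID from OpenFGA's list-stores API; the hostname here is
# a guess modeled on the other lfx-platform service names.
export OPENFGA_STORE_ID="$(curl -s "http://lfx-platform-openfga.lfx.svc.cluster.local:8080/stores" | jq -r '.stores[0].id')"

# Retrieve the Heimdall JWT signing key (same command the manual setup uses)
export JWT_RSA_SECRET="$(kubectl get secret/heimdall-signer-cert -n lfx -o json | jq -r '.data["signer.pem"]' | base64 --decode)"

echo "NATS_URL, OPENFGA_STORE_ID, and JWT_RSA_SECRET are set."
```
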
````diff
@@ -37,15 +58,12 @@ Then export the Store ID:
 export OPENFGA_STORE_ID="your-store-id-here"
 ```
 
-#### Authentication Tokens
+##### Authentication Tokens
 
-A Heimdall JWT secret is needed to use the `!jwt` macro in playbooks. If you
-export it as an environmental variable, you can pass it to the mock data tool
-as a command line argument. No `export` step is needed as this is used only
-to populate arguments to the mock data tool shell invocation.
+A Heimdall JWT secret is needed to use the `!jwt` macro in playbooks. Export it as an environment variable so you can pass it to the mock data tool as a command line argument:
 
 ```bash
-JWT_RSA_SECRET="$(kubectl get secret/heimdall-signer-cert -n lfx -o json | jq -r '.data["signer.pem"]' | base64 --decode)"
+export JWT_RSA_SECRET="$(kubectl get secret/heimdall-signer-cert -n lfx -o json | jq -r '.data["signer.pem"]' | base64 --decode)"
 ```
 
 ## Usage
````
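
One pitfall with the manual command: a missing secret or key name degrades quietly (`jq -r` emits `null`, and the decode step then produces garbage), so a quick sanity check after the export can save a confusing `!jwt` failure later. A minimal check, assuming the decoded value is a PEM-encoded key:

```bash
# Expect a PEM header such as "-----BEGIN ..." on the first line; empty or
# garbled output means the secret or the signer.pem key did not resolve.
printf '%s\n' "$JWT_RSA_SECRET" | head -1
```
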
````diff
@@ -65,14 +83,34 @@ uv run lfx-v2-mockdata \
 ```
 
 **Important Notes:**
+
 - **Order matters!** Playbook directories run in the order specified on the command line.
 - Within each directory, playbooks execute in alphabetical order.
 - Dependencies between playbooks should be considered when organizing execution order. Multiple passes are made to allow `!ref` calls to be resolved, but the right order will improve performance and help avoid max-retry errors.
 - The `!jwt` macro will attempt to detect the JWKS key ID from the endpoint at `http://lfx-platform-heimdall.lfx.svc.cluster.local:4457/.well-known/jwks`. If this URL is not accessible from the execution environment, you must pass an explicit JWT key ID using the `--jwt-key-id` argument.
 
 ### Wiping Existing Data
 
-If you need to start fresh, wipe the NATS KV buckets:
+If you need to start fresh, use the reset script for a complete data wipe:
+
+```bash
+./scripts/reset-data.sh
+```
+
+This script will:
+
+- Clear all NATS KV buckets (projects, committees, meetings, etc.)
+- Clear and recreate OpenSearch indices (using the current mapping)
+- Restart the query service to clear its cache
+- Delete the project service pod to clear its cache
+
+**Safety Features:**
+- Requires typing `RESET` to confirm before proceeding
+- Validates all critical operations and exits on failure
+- Preserves authentication data in the `authelia-users` and `authelia-email-otp` buckets
+- Automatically retrieves and uses the current OpenSearch mapping before recreation
+
+**Manual Alternative:** If you prefer to wipe only the NATS KV buckets manually:
 
 ```bash
 for bucket in projects project-settings committees committee-settings committee-members; do
````
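
Because the confirmation prompt added in this commit is a plain `read` from stdin, the reset script can also be driven non-interactively, for example from a wrapper or make target. A sketch, valid as long as the prompt remains a stdin `read`:

```bash
# Non-interactive reset (sketch): supply the literal confirmation on stdin
printf 'RESET\n' | ./scripts/reset-data.sh
```
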
````diff
@@ -81,19 +119,23 @@ for bucket in projects project-settings committees committee-settings committee-members; do
 done
 ```
 
-*Consider updating this documentation to also provide steps for recreating the OpenSearch index. Stale OpenFGA tuples may also be deleted, but unlike OpenSearch data, it won't impact the refreshed data to keep them.*
+_Note: The reset script is the recommended approach, as it handles OpenSearch indices and service caches comprehensively._
 
 ### Running After Data Wipe
 
-When running after wiping data, you need to recreate the ROOT project first, with an extra playbook at the front. This `recreate_root_project` playbook bypasses the API and directly creates a new ROOT project in the NATS KV bucket.
+After using the reset script, the ROOT project is automatically recreated by the project service pod restart. You can run the mock data tool normally:
 
 ```bash
 uv run lfx-v2-mockdata \
   --jwt-rsa-secret "$JWT_RSA_SECRET" \
   -t playbooks/projects/{root_project_access,base_projects,extra_projects} playbooks/committees/base_committees
-  -t playbooks/projects/recreate_root_project playbooks/projects/{root_project_access,base_projects,extra_projects} playbooks/committees/base_committees
 ```
 
+**Note:** If you wiped data manually (without the reset script), you'll need to delete the project service pod to trigger ROOT project recreation, as it handles permissions correctly:
+
+```bash
+kubectl delete pod -n lfx $(kubectl get pods -n lfx --no-headers | grep project-service | awk '{print $1}')
+```
 
 ## Playbook Structure
 
````
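
Note that the one-liner above expands to `kubectl delete pod -n lfx` with no pod name when nothing matches, which yields a confusing usage error. A slightly more defensive variant (illustrative only, using the same name-based match):

```bash
# Guard against an empty match before deleting
POD="$(kubectl get pods -n lfx --no-headers 2>/dev/null | awk '/project-service/ {print $1; exit}')"
if [ -n "$POD" ]; then
    kubectl delete pod -n lfx "$POD"
else
    echo "No project-service pod found in namespace lfx"
fi
```
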
scripts/reset-data.sh

Lines changed: 147 additions & 79 deletions

````diff
@@ -14,59 +14,99 @@ OPENSEARCH_POD="opensearch-cluster-master-0"
 
 # Find the NATS box pod
 find_nats_box() {
-  NATS_BOX_POD=$(kubectl get pods -n $NAMESPACE --no-headers -o custom-columns=":metadata.name" 2>/dev/null | grep nats-box | head -1)
-  if [ -z "$NATS_BOX_POD" ]; then
-    echo "❌ Could not find nats-box pod"
-    return 1
-  fi
-  echo "Found NATS box pod: $NATS_BOX_POD"
-  return 0
+    NATS_BOX_POD=$(kubectl get pods -n $NAMESPACE --no-headers -o custom-columns=":metadata.name" 2>/dev/null | grep nats-box | head -1)
+    if [ -z "$NATS_BOX_POD" ]; then
+        echo "❌ Could not find nats-box pod"
+        return 1
+    fi
+    echo "Found NATS box pod: $NATS_BOX_POD"
+    return 0
 }
 
 # Clear and recreate NATS KV buckets
 clear_nats_buckets() {
-  echo ""
-  echo "🗑️ Clearing NATS KV buckets..."
-
-  # All data buckets (excluding authelia-users and authelia-email-otp which are auth-related)
-  for bucket in projects project-settings committees committee-settings committee-members \
-    meetings meeting-settings meeting-registrants meeting-rsvps meeting-attachments-metadata \
-    past-meetings past-meeting-participants past-meeting-recordings past-meeting-transcripts \
-    past-meeting-summaries past-meeting-attachments-metadata fga-sync-cache; do
-    echo " Clearing bucket: $bucket"
-    kubectl exec -n $NAMESPACE $NATS_BOX_POD -- nats kv rm -f $bucket 2>/dev/null || true
-    kubectl exec -n $NAMESPACE $NATS_BOX_POD -- nats kv add $bucket >/dev/null 2>&1
-    if [ $? -eq 0 ]; then
-      echo " ✓ Recreated bucket: $bucket"
-    else
-      echo " ✗ Failed to recreate bucket: $bucket"
-    fi
-  done
-
-  echo "✅ NATS KV buckets cleared"
+    echo ""
+    echo "🗑️ Clearing NATS KV buckets..."
+
+    local has_errors=0
+
+    # All data buckets (excluding authelia-users and authelia-email-otp which are auth-related)
+    for bucket in projects project-settings committees committee-settings committee-members \
+        meetings meeting-settings meeting-registrants meeting-rsvps meeting-attachments-metadata \
+        past-meetings past-meeting-participants past-meeting-recordings past-meeting-transcripts \
+        past-meeting-summaries past-meeting-attachments-metadata fga-sync-cache; do
+        echo " Clearing bucket: $bucket"
+        kubectl exec -n $NAMESPACE "$NATS_BOX_POD" -- nats kv rm -f $bucket 2>/dev/null || true
+        if kubectl exec -n $NAMESPACE "$NATS_BOX_POD" -- nats kv add $bucket >/dev/null 2>&1; then
+            echo " ✓ Recreated bucket: $bucket"
+        else
+            echo " ✗ Failed to recreate bucket: $bucket"
+            has_errors=1
+        fi
+    done
+
+    if [ $has_errors -eq 0 ]; then
+        echo "✅ NATS KV buckets cleared"
+        return 0
+    else
+        echo "⚠️ NATS KV buckets cleared with errors"
+        return 1
+    fi
 }
 
 # Clear OpenSearch indices and recreate resources mapping
 clear_opensearch() {
-  echo ""
-  echo "🗑️ Clearing OpenSearch indices..."
+    echo ""
+    echo "🗑️ Clearing OpenSearch indices..."
+
+    # Retrieve current resources index mapping before deletion
+    echo " Retrieving current resources index mapping..."
+    RESOURCES_INDEX=$(kubectl exec -n $NAMESPACE $OPENSEARCH_POD -- curl -s "http://localhost:9200/resources" 2>/dev/null)
 
-  # Delete all indices
-  kubectl exec -n $NAMESPACE $OPENSEARCH_POD -- curl -s -X DELETE "http://localhost:9200/_all" >/dev/null 2>&1
+    if [ -z "$RESOURCES_INDEX" ] || echo "$RESOURCES_INDEX" | grep -q "index_not_found_exception"; then
+        echo " ⚠️ Resources index not found, will use default mapping"
+        RESOURCES_MAPPING=""
+    else
+        # Extract mappings and settings using jq
+        RESOURCES_MAPPING=$(echo "$RESOURCES_INDEX" | jq -c '{
+            settings: .resources.settings.index | {number_of_replicas: (.number_of_replicas // "0")},
+            mappings: .resources.mappings
+        }' 2>/dev/null)
 
-  if [ $? -eq 0 ]; then
-    echo " ✓ Deleted all indices"
-  else
-    echo " ✗ Failed to delete indices"
-    return 1
-  fi
+        if [ -z "$RESOURCES_MAPPING" ] || [ "$RESOURCES_MAPPING" = "null" ]; then
+            echo " ⚠️ Failed to extract mapping, will use default"
+            RESOURCES_MAPPING=""
+        else
+            echo " ✓ Retrieved current mapping"
+        fi
+    fi
 
-  # Recreate resources index with mapping
-  echo " Creating resources index with mapping..."
+    # Delete all indices
+    if kubectl exec -n $NAMESPACE $OPENSEARCH_POD -- curl -s -X DELETE "http://localhost:9200/_all" >/dev/null 2>&1; then
+        echo " ✓ Deleted all indices"
+    else
+        echo " ✗ Failed to delete indices"
+        return 1
+    fi
 
-  kubectl exec -n $NAMESPACE $OPENSEARCH_POD -- curl -s -X PUT "http://localhost:9200/resources" \
-    -H "Content-Type: application/json" \
-    -d '{
+    # Recreate resources index (using retrieved mapping or fallback to default)
+    echo " Creating resources index..."
+
+    if [ -n "$RESOURCES_MAPPING" ]; then
+        # Use retrieved mapping from before deletion
+        if kubectl exec -n $NAMESPACE $OPENSEARCH_POD -- curl -s -X PUT "http://localhost:9200/resources" \
+            -H "Content-Type: application/json" \
+            -d "$RESOURCES_MAPPING" >/dev/null 2>&1; then
+            echo " ✓ Created resources index with retrieved mapping"
+        else
+            echo " ✗ Failed to create resources index with retrieved mapping"
+            return 1
+        fi
+    else
+        # Fallback to default mapping
+        if kubectl exec -n $NAMESPACE $OPENSEARCH_POD -- curl -s -X PUT "http://localhost:9200/resources" \
+            -H "Content-Type: application/json" \
+            -d '{
   "settings": {
     "number_of_replicas": 0
   },
````
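
The jq filter above deliberately keeps only `number_of_replicas` from the index settings, since read-only metadata such as `uuid` or `creation_date` would be rejected when replayed into the later PUT. The filter can be exercised locally against a trimmed `GET /resources` response (the sample JSON is illustrative, not real cluster output):

```bash
# Run the same jq filter the script uses over a minimal sample response
echo '{"resources":{"settings":{"index":{"number_of_replicas":"0","uuid":"abc123"}},"mappings":{"properties":{"v1_data":{"type":"flat_object"}}}}}' |
    jq -c '{
        settings: .resources.settings.index | {number_of_replicas: (.number_of_replicas // "0")},
        mappings: .resources.mappings
    }'
# -> {"settings":{"number_of_replicas":"0"},"mappings":{"properties":{"v1_data":{"type":"flat_object"}}}}
```
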
````diff
@@ -110,67 +150,95 @@ clear_opensearch() {
       "v1_data": { "type": "flat_object" }
     }
   }
-  }' >/dev/null 2>&1
-
-  if [ $? -eq 0 ]; then
-    echo " ✓ Created resources index with mapping"
-  else
-    echo " ✗ Failed to create resources index"
-    return 1
-  fi
+        }' >/dev/null 2>&1; then
+            echo " ✓ Created resources index with default mapping"
+        else
+            echo " ✗ Failed to create resources index with default mapping"
+            return 1
+        fi
+    fi
 
-  echo "✅ OpenSearch indices cleared and recreated"
+    echo "✅ OpenSearch indices cleared and recreated"
 }
 
 # Restart query service to clear cache
 restart_query_service() {
-  echo ""
-  echo "🔄 Restarting query service..."
+    echo ""
+    echo "🔄 Restarting query service..."
 
-  kubectl rollout restart deployment lfx-v2-query-service -n $NAMESPACE >/dev/null 2>&1
-  kubectl rollout status deployment lfx-v2-query-service -n $NAMESPACE --timeout=120s >/dev/null 2>&1
-
-  if [ $? -eq 0 ]; then
-    echo "✅ Query service restarted"
-  else
-    echo "⚠️ Query service restart timed out"
-  fi
+    kubectl rollout restart deployment lfx-v2-query-service -n $NAMESPACE >/dev/null 2>&1
+    if kubectl rollout status deployment lfx-v2-query-service -n $NAMESPACE --timeout=120s >/dev/null 2>&1; then
+        echo "✅ Query service restarted"
+    else
+        echo "⚠️ Query service restart timed out"
+    fi
 }
 
 # Delete project service pod to clear cache
 delete_project_service_pod() {
-  echo ""
-  echo "🗑️ Deleting project service pod..."
+    echo ""
+    echo "🗑️ Deleting project service pod..."
 
-  PROJECT_POD=$(kubectl get pods -A --no-headers 2>/dev/null | grep project-service | grep -v Terminating | awk '{print $2}' | head -1)
+    PROJECT_POD=$(kubectl get pods -A --no-headers 2>/dev/null | grep project-service | grep -v Terminating | awk '{print $2}' | head -1)
 
-  if [ -z "$PROJECT_POD" ]; then
-    echo "⚠️ Could not find project service pod"
-    return 1
-  fi
+    if [ -z "$PROJECT_POD" ]; then
+        echo "⚠️ Could not find project service pod"
+        return 1
+    fi
 
-  echo " Found pod: $PROJECT_POD"
-  kubectl delete pod $PROJECT_POD -n $NAMESPACE >/dev/null 2>&1
+    echo " Found pod: $PROJECT_POD"
+    if kubectl delete pod "$PROJECT_POD" -n $NAMESPACE >/dev/null 2>&1; then
+        echo "✅ Project service pod deleted"
+    else
+        echo "⚠️ Failed to delete project service pod"
+    fi
+}
 
-  if [ $? -eq 0 ]; then
-    echo "✅ Project service pod deleted"
-  else
-    echo "⚠️ Failed to delete project service pod"
-  fi
+# Confirm with user before performing destructive operations
+confirm_reset() {
+    echo ""
+    echo "⚠️ WARNING: This script will PERMANENTLY reset data."
+    echo " The following will be cleared or restarted:"
+    echo " - NATS KV buckets for projects, committees, meetings, and related data"
+    echo " - All OpenSearch indices (they will be recreated empty)"
+    echo " - Query service cache (service restart)"
+    echo " - Project service pod (pod deletion)"
+    echo ""
+    echo "This operation cannot be undone."
+    echo ""
+    read -rp "Type 'RESET' to proceed, or anything else to cancel: " CONFIRM_RESET_INPUT
+
+    if [ "$CONFIRM_RESET_INPUT" != "RESET" ]; then
+        echo ""
+        echo "Aborted. No data has been changed."
+        exit 1
+    fi
+
+    echo ""
+    echo "Proceeding with data reset..."
 }
 
 # Main
 echo "========================================="
 echo " LFX Data Reset Script"
 echo "========================================="
 
-find_nats_box
-if [ $? -ne 0 ]; then
-  exit 1
+confirm_reset
+
+if ! find_nats_box; then
+    exit 1
+fi
+
+if ! clear_nats_buckets; then
+    echo "❌ Failed to clear NATS KV buckets. Aborting."
+    exit 1
+fi
+
+if ! clear_opensearch; then
+    echo "❌ Failed to clear OpenSearch indices. Aborting."
+    exit 1
 fi
 
-clear_nats_buckets
-clear_opensearch
 restart_query_service
 delete_project_service_pod
 
````
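
After a run, the outcome can be spot-checked with the same tools the script itself uses. A quick verification sketch, assuming `$NATS_BOX_POD` holds the pod name that `find_nats_box` discovers:

```bash
# Buckets should exist again and be empty; OpenSearch should show a fresh
# resources index.
kubectl exec -n lfx "$NATS_BOX_POD" -- nats kv ls
kubectl exec -n lfx opensearch-cluster-master-0 -- curl -s "http://localhost:9200/_cat/indices?v"
```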