@@ -404,32 +404,7 @@ curl -X POST https://your-cluster.grpc.cloud.dgraph.io/admin \

</CodeGroup>

-### 2. Plan Downtime Window
-
-<Tabs>
-  <Tab title="Minimal Downtime Strategy">
-    **Recommended for**: Production systems with high availability requirements
-
-    - **Read-only period**: 15-30 minutes
-    - **Full downtime**: 30-60 minutes
-    - **Strategy**: Blue-green deployment with data sync
-    - **Tools**: Load balancer for traffic switching
-  </Tab>
-
-  <Tab title="Maintenance Window Strategy">
-    **Recommended for**: Systems with scheduled maintenance windows
-
-    - **Read-only period**: 1-2 hours
-    - **Full downtime**: 2-4 hours
-    - **Strategy**: Traditional migration during off-peak hours
-    - **Tools**: Application-level maintenance mode
-  </Tab>
-
-  <Tab title="Extended Migration Strategy">
-    **Recommended for**: Large datasets or complex environments
-
-    - **Read-only period**: 4-8 hours
-    - **Full downtime**: 8-12 hours
-    - **Strategy**: Bulk load with extended testing period
-    - **Tools**: Comprehensive monitoring and rollback plan
-  </Tab>
-</Tabs>
-
-### 3. Infrastructure Sizing
+### 2. Infrastructure Sizing

<CardGroup cols={2}>
  <Card title="CPU Requirements" icon="microchip">
@@ -1491,80 +1466,123 @@ curl http://10.0.1.10:8080/health
### 2. Import Schema

<Tabs>
  <Tab title="Kubernetes">
    ```bash
    kubectl port-forward -n dgraph svc/dgraph-dgraph-alpha 8080:8080 &
    curl -X POST localhost:8080/admin/schema \
      -H "Content-Type: application/json" \
      -d @schema_backup.json
    ```
  </Tab>

  <Tab title="Docker Compose">
    ```bash
    curl -X POST localhost:8080/admin/schema \
      -H "Content-Type: application/json" \
      -d @schema_backup.json
    ```
  </Tab>

  <Tab title="Linux VPS">
    ```bash
    curl -X POST 10.0.1.10:8080/admin/schema \
      -H "Content-Type: application/json" \
      -d @schema_backup.json
    ```
  </Tab>
</Tabs>
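The Kubernetes tab above starts `kubectl port-forward` in the background, so the first `curl` can race the tunnel coming up. A small readiness check avoids posting the schema into a closed socket; this is a sketch (the host, port, and retry count are illustrative defaults, not values from the docs above):

```bash
#!/usr/bin/env bash
# Sketch: wait until a TCP port accepts connections before posting the schema.
wait_for_port() {
  local host="$1" port="$2" tries="${3:-10}"
  for _ in $(seq "$tries"); do
    # /dev/tcp/<host>/<port> is a bash pseudo-device; the redirect
    # succeeds only once the port accepts connections
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      echo "ready"
      return 0
    fi
    sleep 1
  done
  echo "timed out"
  return 1
}
```

For example, `wait_for_port localhost 8080 && curl -X POST localhost:8080/admin/schema ...` after launching the port-forward.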

### 3. Import Data

<Tabs>
  <Tab title="Live Loader (Kubernetes)">
    ```bash
    kubectl run dgraph-live-loader \
      --image=dgraph/dgraph:v23.1.0 \
      --restart=Never \
      --namespace=dgraph \
      --command -- dgraph live \
      --files /data/export.rdf.gz \
      --alpha dgraph-dgraph-alpha:9080 \
      --zero dgraph-dgraph-zero:5080
    ```
  </Tab>

  <Tab title="Live Loader (Docker Compose)">
    ```bash
    # Copy data files to container
    docker cp exported_data.rdf.gz dgraph-alpha-1:/dgraph/

    # Run live loader
    docker exec dgraph-alpha-1 dgraph live \
      --files /dgraph/exported_data.rdf.gz \
      --alpha localhost:9080 \
      --zero dgraph-zero-1:5080
    ```
  </Tab>

  <Tab title="Live Loader (Linux VPS)">
    ```bash
    # Copy data to server
    scp exported_data.rdf.gz dgraph@10.0.1.10:/tmp/

    # Run live loader
    ssh dgraph@10.0.1.10 sudo -u dgraph dgraph live \
      --files /tmp/exported_data.rdf.gz \
      --alpha localhost:9080 \
      --zero 10.0.1.10:5080
    ```
  </Tab>
</Tabs>
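Before kicking off any of the loaders above, it helps to record how many N-Quads the export contains so the loader's reported totals can be sanity-checked afterwards. A sketch (the two-triple sample file is generated inline purely for illustration; in practice, point `zcat` at the real export):

```bash
#!/usr/bin/env bash
# Sketch: count N-Quads in a gzipped RDF export before loading.
# A tiny sample export is generated here purely for illustration.
printf '_:a <name> "Alice" .\n_:b <name> "Bob" .\n' | gzip > export.rdf.gz

# Each N-Quad line ends in " ."; count those lines, skipping blanks/comments
TRIPLES=$(zcat export.rdf.gz | grep -c ' \.$')
echo "export contains $TRIPLES triples"
# → export contains 2 triples
```

Compare this figure against the summary the live loader prints when it finishes; a shortfall usually means dropped or malformed quads.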

### 4. Restore ACL Configuration

<Tabs>
  <Tab title="All Deployments">
    ```bash
    # Replace with your actual endpoint
    DGRAPH_ENDPOINT="localhost:8080" # Adjust for your deployment

    curl -X POST $DGRAPH_ENDPOINT/admin \
      -H "Content-Type: application/json" \
      -d '{"query": "mutation { addUser(input: {name: \"admin\", password: \"password\"}) { user { name } } }"}'
    ```
  </Tab>
</Tabs>

---

## Post-Migration Verification

<Card title="Data Integrity Checklist" icon="check-circle">
  - Count total nodes and compare with original
  - Verify specific data samples
  - Test query performance
  - Validate application connections
</Card>
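The first checklist item is easy to script: capture the node count from source and target clusters, then fail loudly on any mismatch. A minimal sketch, assuming the counts have already been parsed out of the count-query responses (the values here are placeholders):

```bash
#!/usr/bin/env bash
# Sketch: compare pre- and post-migration node counts.
compare_counts() {
  local source_count="$1" target_count="$2"
  if [ "$source_count" -eq "$target_count" ]; then
    echo "MATCH: $source_count nodes on both clusters"
  else
    echo "MISMATCH: source=$source_count target=$target_count"
    return 1
  fi
}

# Placeholder values; in practice, parse them from the count queries
compare_counts 120000 120000
# → MATCH: 120000 nodes on both clusters
```

The non-zero exit status on mismatch makes the check easy to wire into a migration script or CI gate.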

### 1. Data Integrity Check

<CodeGroup>
```bash Count Nodes
curl -X POST localhost:8080/query \
  -H "Content-Type: application/json" \
  -d '{"query": "{ nodeCount(func: has(_predicate_)) { count(uid) } }"}'
```

```bash Verify Sample Data
curl -X POST localhost:8080/query \
@@ -1889,98 +1907,51 @@ kubectl create cronjob dgraph-backup \
may encounter during and after migration.
</Info>

-#### Hypermode Operations Runbooks
-
-<CardGroup cols={1}>
-  <Card
-    title="Hypermode Ops Runbooks Repository"
-    icon="book"
-    href="https://github.com/hypermodeinc/ops-runbooks"
-  >
-    Repository of operational runbooks: a collection of step-by-step guides
-    for fixing common problems. **Contains runbooks for**:
-
-    * Database operations
-    * Infrastructure monitoring and alerting
-    * Performance optimization procedures
-    * Security incident response
-    * Backup and recovery operations
-    * Scaling and capacity planning
-  </Card>
-</CardGroup>
-
-#### Community Runbook Resources
+#### Dgraph Operations Runbooks

<CardGroup cols={2}>
  <Card
-    title="Awesome Runbook Collection"
-    icon="star"
-    href="https://github.com/runbear-io/awesome-runbook"
+    title="High-Availability Management"
+    icon="settings"
+    href="https://github.com/hypermodeinc/ops-runbooks/blob/main/Enabling%20or%20Disabling%20High-Availability%20for%20a%20Dgraph%20Cluster.md"
  >
-    A curated list of awesome runbook documents, guidebooks, software, and
-    resources
+    Enable or disable high availability (HA) for Dgraph clusters, scale
+    replicas, and manage cluster topology.
  </Card>

  <Card
-    title="SRE Runbook Template"
-    icon="file-alt"
-    href="https://github.com/SkeltonThatcher/run-book-template"
+    title="Zero Node Recovery"
+    icon="refresh"
+    href="https://github.com/hypermodeinc/ops-runbooks/blob/main/Freshly%20Rebuilding%20Zero%20to%20avoid%20idx%20issue.md"
  >
-    Run Book / Operations Manual template for modern software systems
+    Freshly rebuild Zero nodes to avoid idx issues and restore cluster
+    stability.
  </Card>

  <Card
-    title="Kubernetes Runbooks"
-    icon="dharmachakra"
-    href="https://github.com/openshift/runbooks"
+    title="HA Cluster Rebuild"
+    icon="wrench"
+    href="https://github.com/hypermodeinc/ops-runbooks/blob/main/Rebuild%20a%20Dgraph%20HA%20Cluster%20using%20an%20existing%20p%20directory%20.md"
  >
-    Runbooks for alerts on OCP, grouped by the operator responsible for
-    shipping the respective alert
+    Rebuild high-availability clusters using existing p directories while
+    preserving data.
+  </Card>
+
+  <Card
+    title="RAFT Group Management"
+    icon="users"
+    href="https://github.com/hypermodeinc/ops-runbooks/blob/main/Remove%20and%20re-add%20a%20bad%20Alpha%20to%20a%20RAFT%20Group.md"
+  >
+    Remove and re-add problematic Alpha nodes to RAFT groups for cluster
+    health.
  </Card>

  <Card
-    title="CloudOps Automation"
-    icon="cloud"
-    href="https://github.com/unskript/Awesome-CloudOps-Automation"
+    title="Cluster Unsharding"
+    icon="merge"
+    href="https://github.com/hypermodeinc/ops-runbooks/blob/main/Unsharding%20a%20Sharded%20Cluster.md"
  >
-    Cloud-ops automation runbooks that are ready to use. Build your own
-    automations using the hundreds of drag-and-drop actions included in the
-    repository.
+    Convert sharded clusters back to a non-sharded configuration safely.
  </Card>
</CardGroup>
19531954
-### Security and Compliance
-
-<Tabs>
-  <Tab title="Security Best Practices">
-    - [CIS Kubernetes Benchmark](https://www.cisecurity.org/benchmark/kubernetes)
-    - [NIST Cybersecurity Framework](https://www.nist.gov/cyberframework)
-    - [OWASP Kubernetes Security Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Kubernetes_Security_Cheat_Sheet.html)
-    - [Falco Runtime Security](https://falco.org/docs/)
-  </Tab>
-
-  <Tab title="Compliance Frameworks">
-    - [SOC 2 Compliance Guide](https://www.aicpa.org/interestareas/frc/assuranceadvisoryservices/aicpasoc2report.html)
-    - [GDPR Compliance](https://gdpr.eu/)
-    - [HIPAA Security Rule](https://www.hhs.gov/hipaa/for-professionals/security/index.html)
-    - [PCI DSS Requirements](https://www.pcisecuritystandards.org/pci_security/)
-  </Tab>
-
-  <Tab title="Security Tools">
-    - [Trivy Container Scanning](https://aquasecurity.github.io/trivy/)
-    - [Open Policy Agent (OPA)](https://www.openpolicyagent.org/docs/latest/)
-    - [Gatekeeper Policy Controller](https://open-policy-agent.github.io/gatekeeper/website/docs/)
-    - [Network Policy Recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes)
-  </Tab>
-</Tabs>
-
### Performance and Optimization

<AccordionGroup>