- Example Orchestration Plan
- Pre Cut-Over Activities
- Orchestration Cutover Execution
- Phase 0
- Phase 1
- Phase 2
- Show the list of phase 2 services, topics, databases
- Execute Phase 2 Orchestration
- Account for special cases like assets and inventory
- Check service migration status (check app logs)
- Re-run phase 2 service migration for failed services
- Cut over phase 2 databases - only for databases associated with phase 2 services; some special cases might apply (assets and inventory)
- Check phase 2 db migration status (check database logs)
- Re-run db migration for failed cutover database
- Cut over DNS for phase 2 services - I expect this will be manual tomorrow
- Phase 3
- Phase 4
- Phase 5
- Rollback Activities
- Start a root shell:
  sudo -i
- Change to the orchestration directory:
  cd /root/orchestration
- Check out the master branch:
  git checkout master
- Get the latest updates:
  git pull
  ./update.sh
- Reset logging and tracking for orchestration:
  rm -rf logs/ && rm -rf .meta && rm -rf /tmp/mudra && mkdir /tmp/mudra
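If the previous run's logs might still be needed, they can be archived before the reset wipes them. A minimal, optional sketch; the archive filename is a made-up convention, not part of the runbook:

```shell
# Optional: archive the previous run's logs and tracking state before the
# reset removes them. The "orchestration-backup-*" name is an assumption.
ts=$(date +%Y%m%d-%H%M%S)
for d in logs .meta; do
  if [ -d "$d" ]; then
    tar czf "orchestration-backup-${ts}-${d#.}.tar.gz" "$d"
  fi
done
# Then perform the reset as above.
rm -rf logs/ .meta /tmp/mudra
mkdir -p /tmp/mudra
```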
- Set the MUDRA_ENVIRONMENT environment variable to the environment to be migrated:
  export MUDRA_ENVIRONMENT=<migration environment>
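Every orchestrate.sh invocation below interpolates ${MUDRA_ENVIRONMENT}, so an unset variable would silently produce an empty `--environment` value. An optional guard can fail fast instead; the helper name is made up:

```shell
# Hypothetical guard: refuse to proceed if MUDRA_ENVIRONMENT is unset or
# empty, so orchestrate.sh never runs with an empty --environment value.
require_mudra_env() {
  if [ -z "${MUDRA_ENVIRONMENT:-}" ]; then
    echo "MUDRA_ENVIRONMENT is not set; export it before running orchestrate.sh" >&2
    return 1
  fi
  echo "Using migration environment: ${MUDRA_ENVIRONMENT}"
}
```

Call `require_mudra_env` at the top of any wrapper script before invoking orchestrate.sh.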
Run preflight checks for each node type:
./orchestrate.sh --environment ${MUDRA_ENVIRONMENT} --datafiles orchestration_datafiles --nodetype Kafka --preflight --maxworkers 10
./orchestrate.sh --environment ${MUDRA_ENVIRONMENT} --datafiles orchestration_datafiles --nodetype Database --preflight --skipnodes data-key-service-db --maxworkers 10
./orchestrate.sh --environment ${MUDRA_ENVIRONMENT} --datafiles orchestration_datafiles --nodetype App --preflight --maxworkers 10
List nodes that failed preflight:
grep "failed preflight" logs/mudra.log
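The per-node-type tail loops in this section all follow one pattern; a hedged generic version, assuming the `logs/failed_preflight/<Type>_<name>` marker layout and the `-preflight.log` / `-check.log` naming shown here:

```shell
# Tail the relevant log for every node that failed preflight, whatever its
# type. Assumes Kafka nodes log to *-check.log and all others to
# *-preflight.log, matching the per-type loops in this runbook.
tail_failed_preflight() {
  for marker in logs/failed_preflight/*_*; do
    [ -e "$marker" ] || continue   # glob did not match anything
    base=$(basename "$marker")
    type=${base%%_*}               # e.g. App, Kafka, Database
    name=${base#*_}
    case "$type" in
      Kafka) suffix=check ;;
      *)     suffix=preflight ;;
    esac
    echo "=== ${type}/${name} ==="
    tail "logs/node_logs/${type}/${name}-${suffix}.log"
  done
}
```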
ls -la logs/failed_preflight
Tail all App node logs that failed preflight:
for app in logs/failed_preflight/App_*; do tail logs/node_logs/App/${app#logs/failed_preflight/App_}-preflight.log; done
Tail all Kafka node logs that failed preflight:
for kafka in logs/failed_preflight/Kafka_*; do tail logs/node_logs/Kafka/${kafka#logs/failed_preflight/Kafka_}-check.log; done
Tail all Database node logs that failed preflight:
for database in logs/failed_preflight/Database_*; do tail logs/node_logs/Database/${database#logs/failed_preflight/Database_}-preflight.log; done

Phase 0
Scale down target services:
./orchestrate.sh --environment ${MUDRA_ENVIRONMENT} --datafiles orchestration_datafiles --nodetype App --action scaletargetdown --maxworkers 10 --force

Phase 1
Show the list of phase 1 services, topics, databases:
./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="1" type=App --maxworkers 10
./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="1" type=Database --maxworkers 10
./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="1" type=Kafka --maxworkers 10
Execute Phase 1 Orchestration:
./orchestrate.sh --environment ${MUDRA_ENVIRONMENT} --datafiles orchestration_datafiles --phase 1 --maxworkers 10
Check phase 1 database migration status:
./interface.sh --datafiles orchestration_datafiles --environment ${MUDRA_ENVIRONMENT} database status --phase 1 && column -t -s"," /tmp/mudra/rds/db_status.csv

Phase 2
Show the list of phase 2 services, topics, databases:
./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="2" type=App --maxworkers 10
./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="2" type=Database --maxworkers 10
./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="2" type=Kafka --maxworkers 10
Execute Phase 2 Orchestration:
./orchestrate.sh --environment ${MUDRA_ENVIRONMENT} --datafiles orchestration_datafiles --phase 2 --maxworkers 10
Cut over phase 2 databases - only for databases associated with phase 2 services; some special cases might apply (assets and inventory):
This happens automatically at the end of Phase 2
To check the task states of the SQL migration:
./interface.sh --datafiles orchestration_datafiles --environment ${MUDRA_ENVIRONMENT} database status --phase 2 && column -t -s"," /tmp/mudra/rds/db_status.csv
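Rather than re-running the status check by hand, the CSV it produces can be polled until nothing is pending. A hedged sketch: it assumes the last column of /tmp/mudra/rds/db_status.csv is the task status and that finished tasks report "complete"; adjust both to match the real file.

```shell
# Hypothetical polling wrapper around the status check above.
STATUS_CSV=/tmp/mudra/rds/db_status.csv

# Count data rows whose last column is not "complete" (header row skipped).
pending_db_tasks() {
  awk -F',' 'NR > 1 && $NF != "complete"' "$STATUS_CSV" 2>/dev/null | wc -l
}

# Loop until every task reports complete, reporting progress every 30s.
watch_db_migration() {
  while [ "$(pending_db_tasks)" -gt 0 ]; do
    echo "$(pending_db_tasks) database task(s) still pending..."
    sleep 30
  done
  echo "No pending database tasks."
}
```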
Phase 3
Show the list of phase 3 services, topics, databases:
./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="3" type=App --maxworkers 10
./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="3" type=Database --maxworkers 10
./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="3" type=Kafka --maxworkers 10
Execute Phase 3 Orchestration:
./orchestrate.sh --environment ${MUDRA_ENVIRONMENT} --datafiles orchestration_datafiles --phase 3 --maxworkers 10
Cut over phase 3 databases:
This happens automatically at the end of Phase 3
Phase 4
Show the list of phase 4 services, topics, databases:
./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="4" type=App --maxworkers 10
./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="4" type=Database --maxworkers 10
./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="4" type=Kafka --maxworkers 10
Execute Phase 4 Orchestration:
./orchestrate.sh --environment ${MUDRA_ENVIRONMENT} --datafiles orchestration_datafiles --phase 4 --maxworkers 10
Cut over phase 4 databases:
This happens automatically at the end of Phase 4
Phase 5
Show the list of phase 5 services, topics, databases:
./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="5" type=App --maxworkers 10
./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="5" type=Database --maxworkers 10
./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="5" type=Kafka --maxworkers 10
Execute Phase 5 Orchestration:
./orchestrate.sh --environment ${MUDRA_ENVIRONMENT} --datafiles orchestration_datafiles --phase 5 --maxworkers 10
Cut over phase 5 databases:
This happens automatically at the end of Phase 5
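Each phase repeats the same pattern: three `--gettree` listings followed by the phase run. A hedged convenience wrapper (the function name is made up; the flags are exactly the ones used in this runbook):

```shell
# Convenience wrapper over the repeated per-phase pattern: list the App,
# Database and Kafka trees for a phase, then execute that phase.
run_phase() {
  phase=$1
  for type in App Database Kafka; do
    ./orchestrate.sh --datafiles orchestration_datafiles --gettree phase="${phase}" type="${type}" --maxworkers 10
  done
  ./orchestrate.sh --environment "${MUDRA_ENVIRONMENT}" --datafiles orchestration_datafiles --phase "${phase}" --maxworkers 10
}
```

Usage: `run_phase 5` from /root/orchestration, with MUDRA_ENVIRONMENT exported.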
Rollback Activities
Clean up database nodes:
./orchestrate.sh --environment ${MUDRA_ENVIRONMENT} --datafiles orchestration_datafiles --nodetype Database --action cleanup --maxworkers 10 --force
Scale down target services:
./orchestrate.sh --environment ${MUDRA_ENVIRONMENT} --datafiles orchestration_datafiles --nodetype App --action scaletargetdown --maxworkers 10 --force
Roll back source services:
./orchestrate.sh --environment ${MUDRA_ENVIRONMENT} --datafiles orchestration_datafiles --nodetype App --action rollbacksource --maxworkers 10 --force
Unswap databases:
./orchestrate.sh --environment ${MUDRA_ENVIRONMENT} --datafiles orchestration_datafiles --nodetype Database --action unswap --maxworkers 10 --force
Clean up database nodes:
./orchestrate.sh --environment ${MUDRA_ENVIRONMENT} --datafiles orchestration_datafiles --nodetype Database --action cleanup --maxworkers 10 --force
Scale target services back up:
./orchestrate.sh --environment ${MUDRA_ENVIRONMENT} --datafiles orchestration_datafiles --nodetype App --action scaletarget --maxworkers 10 --force