I've added these settings to the benchmark script, but we need to generalize the setup a bit more so that it works in the submitArmadaSpark.sh and runJupyter.sh scripts as well. These are the sorts of config params that need to be added:
+ --conf spark.storage.decommission.fallbackStorage.path=$ARMADA_S3_USER_DIR/shuffle/
+ARMADA_S3_BUCKET_NAME=${ARMADA_S3_BUCKET_NAME:-kafka-s3}
+ARMADA_S3_BUCKET_ENDPOINT=${ARMADA_S3_BUCKET_ENDPOINT:-http://192.168.59.6}
+ARMADA_S3_USER_DIR=${ARMADA_S3_USER_DIR:-s3a://$ARMADA_S3_BUCKET_NAME/$USER}
+S3_CONF=()
+if [[ $ARMADA_S3_ACCESS_KEY != "" ]]; then
+  # Credentials supplied directly: hand them to the Hadoop S3A client.
+  S3_CONF=(
+    --conf spark.hadoop.fs.s3a.access.key=$ARMADA_S3_ACCESS_KEY
+    --conf spark.hadoop.fs.s3a.secret.key=$ARMADA_S3_SECRET_KEY
+  )
+elif [[ $ARMADA_SPARK_SECRET_KEY != "" ]]; then
+  # Otherwise pull the credentials into the driver and executor pods
+  # from the named Kubernetes secret.
+  S3_CONF=(
+    --conf spark.kubernetes.driver.secretKeyRef.AWS_SECRET_ACCESS_KEY=$ARMADA_SPARK_SECRET_KEY:secret_key
+    --conf spark.kubernetes.executor.secretKeyRef.AWS_SECRET_ACCESS_KEY=$ARMADA_SPARK_SECRET_KEY:secret_key
+    --conf spark.kubernetes.driver.secretKeyRef.AWS_ACCESS_KEY_ID=$ARMADA_SPARK_SECRET_KEY:access_key
+    --conf spark.kubernetes.executor.secretKeyRef.AWS_ACCESS_KEY_ID=$ARMADA_SPARK_SECRET_KEY:access_key
+  )
+fi
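For the Kubernetes branch to pick anything up, the secret named by $ARMADA_SPARK_SECRET_KEY has to exist in the cluster with access_key and secret_key entries, and the scripts then just expand the array into their spark-submit call. A minimal sketch of both steps, assuming the endpoint variable above is meant to feed spark.hadoop.fs.s3a.endpoint, and with placeholder flags rather than the scripts' real argument lists:

# One-time setup: create the secret that the secretKeyRef confs reference.
kubectl create secret generic "$ARMADA_SPARK_SECRET_KEY" \
  --from-literal=access_key="$AWS_ACCESS_KEY_ID" \
  --from-literal=secret_key="$AWS_SECRET_ACCESS_KEY"

# In submitArmadaSpark.sh / runJupyter.sh: splice the array into spark-submit.
spark-submit \
  --conf spark.hadoop.fs.s3a.endpoint="$ARMADA_S3_BUCKET_ENDPOINT" \
  --conf spark.storage.decommission.fallbackStorage.path="$ARMADA_S3_USER_DIR/shuffle/" \
  "${S3_CONF[@]}" \
  "$@"

An empty S3_CONF expands to nothing, so the unauthenticated case needs no special handling.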
## Fallback storage migration
spark.decommission.enabled=true
spark.storage.decommission.enabled=true
spark.storage.decommission.shuffleBlocks.enabled=true
spark.storage.decommission.shuffleBlocks.maxDiskSize=0
spark.storage.decommission.fallbackStorage.cleanUp=true
spark.storage.decommission.fallbackStorage.proactive.enabled=true
spark.storage.decommission.fallbackStorage.proactive.reliable=true
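To generalize these into the two scripts, the same array pattern used for S3_CONF above should work; a sketch, where FALLBACK_CONF is a hypothetical name and the properties are just the ones listed above:

# Fallback storage migration confs, gathered for reuse by both scripts.
FALLBACK_CONF=(
  --conf spark.decommission.enabled=true
  --conf spark.storage.decommission.enabled=true
  --conf spark.storage.decommission.shuffleBlocks.enabled=true
  --conf spark.storage.decommission.shuffleBlocks.maxDiskSize=0
  --conf spark.storage.decommission.fallbackStorage.cleanUp=true
  --conf spark.storage.decommission.fallbackStorage.proactive.enabled=true
  --conf spark.storage.decommission.fallbackStorage.proactive.reliable=true
  --conf spark.storage.decommission.fallbackStorage.path="$ARMADA_S3_USER_DIR/shuffle/"
)
# Then, in each script: spark-submit "${FALLBACK_CONF[@]}" "${S3_CONF[@]}" ...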