```
# start akka on this interface, reachable from your cluster
akka {
  remote.netty.tcp {
    hostname = "CLUSTER-IP"

    # This controls the maximum message size, including job results, that can be sent
    maximum-frame-size = 100 MiB
  }
}
```

Note:

- YARN transfers the files provided via the `--files` submit option into the cluster / container. Spark standalone does not support this in cluster mode, so you have to transfer them manually.
- Instead of running an H2 DB instance you can also run a real DB that is reachable from inside your cluster. You can't use the default (host-only) H2 configuration in a cluster setup (see the sketch after this list).
- Akka binds by [default](../job-server/src/main/resources/application.conf) to the local host interface and is not reachable from the cluster. You need to set the akka hostname to the cluster-internal address, as in the configuration above.
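
One way to address the second point while keeping H2 is to run H2 in server (TCP) mode on a host that all cluster nodes can reach and point the job server's SQL DAO at it. The snippet below is only a sketch: the `spark.jobserver.sqldao.jdbc` keys mirror the defaults in the linked `application.conf`, and `DB-HOST`, the database path, and the credentials are placeholders you have to adapt. Any other JDBC-backed database can be configured the same way if you also swap the configured driver classes.

```
# Point the job server's metadata store at a DB reachable from the cluster.
spark.jobserver.sqldao.jdbc {
  # H2 running in server mode on DB-HOST (placeholder), default TCP port 9092
  url = "jdbc:h2:tcp://DB-HOST:9092/~/spark-jobserver"
  user = ""
  password = ""
}
```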
### Reading files uploaded via frontend
Files uploaded via the data API (`/data`) are stored on your job server frontend host.
In your Spark jobs, call the [DataFileCache](../job-server-api/src/main/scala/spark/jobserver/api/SparkJobBase.scala) API implemented by the job environment to access them:
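
The helper below is a minimal sketch, not the project's own example: it assumes `DataFileCache` and `JobEnvironment` live in `spark.jobserver.api` and that `getDataFile` returns a local `java.io.File`, as declared in the linked `SparkJobBase.scala`. Check that file for the exact signatures in your job server version.

```scala
import java.io.File

import spark.jobserver.api.{DataFileCache, JobEnvironment}

object UploadedFiles {
  // Resolve a file that was uploaded through the /data API from inside a job.
  // `runtime` is the JobEnvironment passed to runJob; the job server's
  // implementation of it also provides the DataFileCache API, which fetches
  // the file from the frontend host and caches it locally for the job.
  def resolve(runtime: JobEnvironment, dataFileName: String): Option[File] =
    runtime match {
      case cache: DataFileCache => Some(cache.getDataFile(dataFileName))
      case _                    => None
    }
}
```

Inside `runJob(sc, runtime, data)` you would then call `UploadedFiles.resolve(runtime, "<uploaded-file-name>")` and read the returned local file, for example with `sc.textFile(file.getAbsolutePath)`.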