See [Troubleshooting Tips](doc/troubleshooting.md) as well as [Yarn tips](doc/yarn.md).

(Please add yourself to this list!)

- [Ooyala](http://www.ooyala.com)
- [Netflix](http://www.netflix.com)
- [Avenida.com](http://www.avenida.com)
- GumGum
- Fuse Elements
- Frontline Solvers
- Aruba Networks
- [Zed Worldwide](http://www.zed.com)
- [KNIME](https://www.knime.org/)
- [Azavea](http://azavea.com)
- [Maana](http://maana.io/)

## Features

- *"Spark as a Service"*: Simple REST interface (including HTTPS) for all aspects of job and context management
- Support for Spark SQL, Hive, Streaming Contexts/jobs and custom job contexts! See [Contexts](doc/contexts.md).
- LDAP Auth support via Apache Shiro integration
- Supports sub-second low-latency jobs via long-running job contexts
- Start and stop job contexts for RDD sharing and low-latency jobs; change resources on restart
- Kill running jobs via stop context and delete job
- Separate jar uploading step for faster job startup
- Asynchronous and synchronous job API. The synchronous API is great for low-latency jobs!
- Preliminary support for Java (see `JavaSparkJob`)
- Works with Standalone Spark as well as Mesos and yarn-client
- Job and jar info is persisted via a pluggable DAO interface
- Named RDDs to cache and retrieve RDDs by name, improving RDD sharing and reuse among jobs (see the sketch below)
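
To make the last feature concrete, here is a minimal sketch of two jobs in the same context sharing an RDD by name. It assumes the Scala job API (`SparkJob`, described later in this README) and the `NamedRddSupport` mix-in with its `namedRdds` helper; treat the exact method signatures as approximate and check the project's docs and tests for canonical usage.

```
import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{NamedRddSupport, SparkJob, SparkJobValid, SparkJobValidation}

// Sketch only: assumes the NamedRddSupport API; see the spark-jobserver docs for exact signatures.

// First job: builds an RDD and registers it under a shared name.
object CacheUsersJob extends SparkJob with NamedRddSupport {
  override def validate(sc: SparkContext, config: Config): SparkJobValidation = SparkJobValid

  override def runJob(sc: SparkContext, config: Config): Any = {
    val users = sc.parallelize(Seq("alice", "bob", "carol"))
    namedRdds.update("users", users) // cache and register the RDD under the name "users"
    users.count()
  }
}

// Second job, submitted to the same context: looks up the RDD by name instead of rebuilding it.
object CountUsersJob extends SparkJob with NamedRddSupport {
  override def validate(sc: SparkContext, config: Config): SparkJobValidation = SparkJobValid

  override def runJob(sc: SparkContext, config: Config): Any =
    namedRdds.get[String]("users").map(_.count()).getOrElse(0L)
}
```
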

| Version | Spark Version |
| ------- | ------------- |
| 0.4.1   | 1.1.0         |
| 0.5.0   | 1.2.0         |
| 0.5.1   | 1.3.0         |
| 0.5.2   | 1.3.1         |
| master  | 1.4.1         |

For release notes, look in the `notes/` directory. They should also be up on [ls.implicit.ly](http://ls.implicit.ly/spark-jobserver/spark-jobserver).

## Quick Start

The easiest way to get started is to try the [Docker container](doc/docker.md) which prepackages a Spark distribution with the job server and lets you start and deploy it.

## Development mode

The example walk-through below shows you how to use the job server with an included example job, by running the job server in local development mode in SBT. This is not an example of usage in production.

You need to have [SBT](http://www.scala-sbt.org/release/docs/Getting-Started/Setup.html) installed.

Note that `reStart` (SBT Revolver) forks the job server in a separate process. If you type `reStart` again at the SBT shell prompt, it will compile your changes and restart the jobserver. It enables very fast turnaround cycles.

**NOTE2**: You cannot do `sbt reStart` from the OS shell. SBT will start the job server and immediately kill it.

For example jobs, see the `job-server-tests/` project/folder.

When you use `reStart`, the log file goes to `job-server/job-server-local.log`. There is also an environment variable

- `runJob` contains the implementation of the job. The SparkContext is managed by the Job Server and will be provided to the job through this method. This relieves the developer from the boilerplate configuration management that comes with the creation of a Spark job and allows the Job Server to manage and re-use contexts.
- `validate` allows for an initial validation of the context and any provided configuration. If the context and configuration are OK to run the job, returning `spark.jobserver.SparkJobValid` will let the job execute; otherwise, returning `spark.jobserver.SparkJobInvalid(reason)` prevents the job from running and provides a means to convey the reason for the failure. In this case, the call immediately returns an `HTTP/1.1 400 Bad Request` status code.

`validate` helps you prevent running jobs that would eventually fail due to missing or wrong configuration, saving both time and resources.
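
As an illustration, here is a minimal word-count-style job sketch built around these two methods. The validation types are the ones named above; the imports and exact signatures are an assumption based on the Scala job API, so refer to `job-server-tests/` for the canonical examples.

```
import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobInvalid, SparkJobValid, SparkJobValidation}

// Sketch only: method signatures are assumed; see job-server-tests/ for the real examples.
object WordCountJob extends SparkJob {
  // Reject the job up front (HTTP/1.1 400 Bad Request) if the expected config key is missing.
  override def validate(sc: SparkContext, config: Config): SparkJobValidation =
    if (config.hasPath("input.string")) SparkJobValid
    else SparkJobInvalid("No input.string config param")

  // The SparkContext is created and managed by the Job Server; the job just uses it.
  override def runJob(sc: SparkContext, config: Config): Any = {
    val words = config.getString("input.string").split(" ").toSeq
    sc.parallelize(words).countByValue()
  }
}
```

Submitting such a job without an `input.string` setting would trip `validate` and produce the `400 Bad Request` response described above.
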
Let's try running our sample job with an invalid configuration:

Here is an example of a simple curl command that utilizes SSL:
```
curl -k https://localhost:8090/contexts
```
The `-k` flag tells curl to "Allow connections to SSL sites without certs". Export your server certificate and import it into the client's truststore to fully utilize SSL security.

### Authentication

Authentication uses the [Apache Shiro](http://shiro.apache.org/index.html) framework. Authentication is activated by setting this flag (Section 'shiro'):
```
authentication = on
# absolute path to shiro config file, including file name
config.path = "/some/path/shiro.ini"
```
Shiro-specific configuration options should be placed into a file named 'shiro.ini' in the directory as specified by the config option 'config.path'. Here is an example that configures LDAP with user group verification:
```
# use this for basic ldap authorization, without group checking
```


### Manual steps

1. Copy `config/local.sh.template` to `<environment>.sh` and edit as appropriate. NOTE: be sure to set SPARK_VERSION if you need to compile against a different version, e.g. 1.4.1 for job server 0.5.2.
2. Copy `config/shiro.ini.template` to `shiro.ini` and edit as appropriate. NOTE: only required when `authentication = on`.
3. Copy `config/local.conf.template` to `<environment>.conf` and edit as appropriate.
4. `bin/server_deploy.sh <environment>` -- this packages the job server along with config files and pushes it to the remotes you have configured in `<environment>.sh`.
5. On the remote server, start it in the deployed directory with `server_start.sh` and stop it with `server_stop.sh`.

The `server_start.sh` script uses `spark-submit` under the hood and may be passed any of the standard extra arguments from `spark-submit`.

Note: to test out the deploy to a local staging dir, or package the job server for Mesos, use `bin/server_package.sh <environment>`.

### Chef

There is also a [Chef cookbook](https://github.com/spark-jobserver/chef-spark-jobserver) which can be used to deploy Spark Jobserver.

## Architecture

The job server is intended to be run as one or more independent processes, separate from the Spark cluster (though it very well may be collocated with, say, the Master).

At first glance, it seems many of these functions (e.g. job management) could be integrated into the Spark standalone master. While this is true, we believe there are many significant reasons to keep it separate:

### Contexts

    GET /contexts               - lists all current contexts
    POST /contexts/<name>       - creates a new context
    DELETE /contexts/<name>     - stops a context and all jobs running in it

### Jobs
For details on the Typesafe config format used for input (JSON also works), see the [Typesafe Config docs](https://github.com/typesafehub/config).

### Data

It is sometimes necessary to programmatically upload files to the server. Use these paths to manage such files:

    GET /data                 - Lists previously uploaded files that were not yet deleted
    POST /data/<prefix>       - Uploads a new file; the full path of the file on the server is returned.
                                The prefix is the prefix of the actual filename used on the server
                                (a timestamp is added to ensure uniqueness).
    DELETE /data/<filename>   - Deletes the specified file (only if under control of the JobServer)

These files are uploaded to the server and stored in a local temporary directory where the JobServer runs. The POST command returns the full pathname and filename of the uploaded file, so that later jobs can work with it just the same as with any other server-local file. A job could therefore add this file to HDFS or distribute it to worker nodes via the `SparkContext.addFile` command. For files that are larger than a few hundred MB, it is recommended to manually upload these files to the server or to add them directly to your HDFS.
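
For instance, a job could take the server-local path returned by `POST /data/<prefix>` and ship the file to the executors. A rough sketch follows; the `data.file` config key and the job skeleton are hypothetical, while `SparkContext.addFile` and `SparkFiles.get` are standard Spark APIs.

```
import com.typesafe.config.Config
import org.apache.spark.{SparkContext, SparkFiles}
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

// Sketch only: "data.file" is a hypothetical config key carrying the path returned by POST /data/<prefix>.
object UseUploadedFileJob extends SparkJob {
  override def validate(sc: SparkContext, config: Config): SparkJobValidation = SparkJobValid

  override def runJob(sc: SparkContext, config: Config): Any = {
    val serverLocalPath = config.getString("data.file") // path returned by the Data API
    sc.addFile(serverLocalPath)                         // distribute the file to the worker nodes

    val fileName = new java.io.File(serverLocalPath).getName
    // On the executors, SparkFiles.get resolves the local copy of the distributed file.
    sc.parallelize(1 to 4).map(_ => SparkFiles.get(fileName)).distinct().collect()
  }
}
```
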
### Context configuration
A number of context-specific settings can be controlled when creating a context (POST /contexts) or running an
### Publishing packages

In the root project, do `release cross`.

To announce the release on [ls.implicit.ly](http://ls.implicit.ly/), use [Herald](https://github.com/n8han/herald#install) after adding release notes in the `notes/` dir. Also regenerate the catalog with the `lsWriteVersion` SBT task and `lsync`, in project job-server.

## Contact

For user/dev questions, we are using a Google group for discussions:

## License

Apache 2.0, see LICENSE.md

## TODO