
Commit 7a2cf1b

fix(doc): make curl commands compatible with windows

Parent: 2b9e091

File tree: README.md · doc/EMR.md · doc/python.md

3 files changed: +15 −15 lines

README.md

Lines changed: 7 additions & 7 deletions
@@ -177,7 +177,7 @@ Let's upload the jar:
 The above jar is uploaded as app `test`. Next, let's start an ad-hoc word count job, meaning that the job
 server will create its own SparkContext, and return a job ID for subsequent querying:
 
-curl -d "input.string = a b c a b see" 'localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample'
+curl -d "input.string = a b c a b see" "localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample"
 {
 "duration": "Job not done yet",
 "classPath": "spark.jobserver.WordCountExample",
@@ -190,7 +190,7 @@ server will create its own SparkContext, and return a job ID for subsequent querying:
 NOTE: If you want to feed in a text file config and POST using curl, you want the `--data-binary` option, otherwise
 curl will munge your line separator chars. Like:
 
-curl --data-binary @my-job-config.json 'localhost:8090/jobs?appNam=...'
+curl --data-binary @my-job-config.json "localhost:8090/jobs?appNam=..."
 
 NOTE2: If you want to send in UTF-8 chars, make sure you pass in a proper header to CURL for the encoding, otherwise it may assume an encoding which is not what you expect.
 
@@ -220,7 +220,7 @@ You can also append `&timeout=XX` to extend the request timeout for `sync=true`
 #### Persistent Context Mode - Faster & Required for Related Jobs
 Another way of running this job is in a pre-created context. Start a new context:
 
-curl -d "" 'localhost:8090/contexts/test-context?num-cpu-cores=4&memory-per-node=512m'
+curl -d "" "localhost:8090/contexts/test-context?num-cpu-cores=4&memory-per-node=512m"
 OK⏎
 
 You can verify that the context has been created:
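
The verification command itself falls just outside this hunk; based on the context-listing call that appears later in this commit (doc/EMR.md), it is the plain GET on the contexts endpoint:

    # should list test-context among the running contexts
    curl localhost:8090/contexts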
@@ -230,7 +230,7 @@ You can verify that the context has been created:
 
 Now let's run the job in the context and get the results back right away:
 
-curl -d "input.string = a b c a b see" 'localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample&context=test-context&sync=true'
+curl -d "input.string = a b c a b see" "localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample&context=test-context&sync=true"
 {
 "result": {
 "a": 2,
@@ -309,7 +309,7 @@ It is much more type safe, separates context configuration, job ID, named objects
 
 Let's try running our sample job with an invalid configuration:
 
-curl -i -d "bad.input=abc" 'localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample'
+curl -i -d "bad.input=abc" "localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample"
 
 HTTP/1.1 400 Bad Request
 Server: spray-can/1.2.0
@@ -343,11 +343,11 @@ You have a couple options to package and upload dependency jars.
 - Use the `dependent-jar-uris` context configuration param. Then the jar gets loaded for every job.
 - The `dependent-jar-uris` can also be used in job configuration param when submitting a job. On an ad-hoc context this has the same effect as `dependent-jar-uris` context configuration param. On a persistent context the jars will be loaded for the current job and then for every job that will be executed on the persistent context.
 ````
-curl -d "" 'localhost:8090/contexts/test-context?num-cpu-cores=4&memory-per-node=512m'
+curl -d "" "localhost:8090/contexts/test-context?num-cpu-cores=4&memory-per-node=512m"
 OK⏎
 ````
 ````
-curl 'localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample&context=test-context&sync=true' -d '{
+curl "localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample&context=test-context&sync=true" -d '{
 dependent-jar-uris = ["file:///myjars/deps01.jar", "file:///myjars/deps02.jar"],
 input.string = "a b c a b see"
 }'
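
Combining the `--data-binary` note from the earlier README hunk with the job-configuration form just above, the same submission can read its config from a file; the file name here is illustrative, and its contents would be the `dependent-jar-uris` and `input.string` lines shown above:

    curl --data-binary @my-job-config.json \
         "localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample&context=test-context&sync=true"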

doc/EMR.md

Lines changed: 2 additions & 2 deletions
@@ -191,7 +191,7 @@ InstanceCount=10,BidPrice=2.99,Name=sparkSlave,InstanceGroupType=CORE,InstanceTy
 3. Create test context
 ```
 # create test context
-curl -d "" 'localhost:8090/contexts/test?num-cpu-cores=1&memory-per-node=512m&spark.executor.instances=1'
+curl -d "" "localhost:8090/contexts/test?num-cpu-cores=1&memory-per-node=512m&spark.executor.instances=1"
 # check current contexts. should return test
 curl localhost:8090/contexts
 ```
@@ -200,7 +200,7 @@ InstanceCount=10,BidPrice=2.99,Name=sparkSlave,InstanceGroupType=CORE,InstanceTy
 ```
 # run WordCount example (should be done in 1-2 sec)
 curl -d "input.string = a b c a b see" \
-'localhost:8090/jobs?appName=testapp&classPath=spark.jobserver.WordCountExample&context=test&sync=true'
+"localhost:8090/jobs?appName=testapp&classPath=spark.jobserver.WordCountExample&context=test&sync=true"
 ```
 
 5. Check jobs
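
The command for step 5 falls outside the hunk; judging from the job endpoints used elsewhere in this diff, it is presumably the jobs listing and, for a single run, the per-job URL:

    # list recent jobs / check a specific job by its ID
    curl localhost:8090/jobs
    curl "localhost:8090/jobs/<job-id>"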

doc/python.md

Lines changed: 6 additions & 6 deletions
@@ -150,7 +150,7 @@ Then, running `python setup.py bdist_egg` will create a file `dist/my_job_packag
 
 If Spark Job Server is running with Python support, A Python context can be started with, for example:
 
-curl -X POST 'localhost:8090/contexts/py-context?context-factory=spark.jobserver.python.PythonSparkContextFactory'
+curl -X POST "localhost:8090/contexts/py-context?context-factory=spark.jobserver.python.PythonSparkContextFactory"
 
 Whereas Java and Scala jobs are packaged as Jar files, Python jobs need to be packaged as `Egg` files. A set of example jobs
 can be build using the `job-server-python/` sbt task `job-server-python/buildPyExamples`. this builds an examples Egg
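
The upload step for the example egg is not part of this hunk; in job server versions that expose the binaries API it looks roughly like the following (the endpoint, content type, and egg path are assumptions about your setup):

    # hypothetical upload of the examples egg as app "my_py_job"
    curl --data-binary @/path/to/examples.egg \
         -H "Content-Type: application/python-archive" \
         "localhost:8090/binaries/my_py_job"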
@@ -162,16 +162,16 @@ in `job-server-python/target/python` so we could push this to the server as a jo
 Then, running a Python job is similar to running other job types:
 
 curl -d 'input.strings = ["a", "b", "a", "b" ]' \
-'localhost:8090/jobs?appName=my_py_job&classPath=my_job_package.WordCountSparkJob&context=py-context'
+"localhost:8090/jobs?appName=my_py_job&classPath=my_job_package.WordCountSparkJob&context=py-context"
 
-curl 'localhost:8090/jobs/<job-id>'
+curl "localhost:8090/jobs/<job-id>"
 
 ## SQLContext and HiveContext support
 
 Python support is also available for `SQLContext` and `HiveContext`. Simply launch a context using
 `spark.jobserver.python.PythonSQLContextFactory` or `spark.jobserver.python.PythonHiveContextFactory`. For example:
 
-curl -X POST 'localhost:8090/contexts/pysql-context?context-factory=spark.jobserver.python.PythonSQLContextFactory'
+curl -X POST "localhost:8090/contexts/pysql-context?context-factory=spark.jobserver.python.PythonSQLContextFactory"
 
 When implementing the Python job, you can simply assume that the `context` argument to `validate` and `run_job`
 is of the appropriate type. Due to dynamic typing in Python, this is not enforced in the method definitions. For example:
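
The `HiveContext` variant mentioned in the prose is not shown in this hunk; it follows the same pattern as the `SQLContext` call above (the context name `pyhive-context` is only an illustrative choice):

    curl -X POST "localhost:8090/contexts/pyhive-context?context-factory=spark.jobserver.python.PythonHiveContextFactory"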
@@ -224,9 +224,9 @@ The input to the job can be provided as a conf file, e.g. with the contents:
 Then we can submit the `SQLContext` based job:
 
 curl -d @sqlinput.conf \
-'localhost:8090/jobs?appName=example_jobs&classPath=example_jobs.sql_average.SQLAverageJob&context=pysql-context'
+"localhost:8090/jobs?appName=example_jobs&classPath=example_jobs.sql_average.SQLAverageJob&context=pysql-context"
 
-curl 'localhost:8090/jobs/<job-id>'
+curl "localhost:8090/jobs/<job-id>"
 
 When complete, we get output such as:
 