
Job in state DEFINE instead of RUNNING for Scala wordcount example #320


Description

@badgujarsupriya

Hi,
I am trying to run your example in the spark-shell of a Dataproc cluster:
https://github.com/GoogleCloudPlatform/cloud-bigtable-examples/blob/master/scala/spark-wordcount/src/main/scala/com/example/bigtable/spark/wordcount/WordCount.scala

It gives me an error at this line:
val job = new Job(conf)

Error:
java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING

Solution I tried:
I changed the line above to:
val job = new org.apache.hadoop.mapreduce.Job(conf)

but it still gives me the same error.
When I searched Google for this issue, some people suggested running it with spark-submit instead of spark-shell, but I want to run it in spark-shell only.
Please suggest how I can achieve this using spark-shell.
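A possible workaround, not taken from the original example (the helper name and table name below are illustrative): this IllegalStateException is commonly triggered in spark-shell because the REPL echoes every top-level val by calling toString on it, and Hadoop's Job.toString requires the job to be in the RUNNING state, so the same code that works under spark-submit fails at the shell prompt. Keeping the Job inside a function, so that only its Configuration escapes, avoids the echo; Job.getInstance(conf) is also the non-deprecated replacement for `new Job(conf)`. A minimal sketch, assuming the usual HBase TableOutputFormat setup that HBase/Bigtable Spark examples use:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.mapreduce.Job

// Hypothetical helper: build the output Configuration inside a function so the
// Job object never becomes a top-level val in the REPL and is never printed.
def makeOutputConf(conf: Configuration, table: String): Configuration = {
  conf.set(TableOutputFormat.OUTPUT_TABLE, table)
  val job = Job.getInstance(conf) // non-deprecated replacement for `new Job(conf)`
  job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])
  job.getConfiguration            // only the Configuration leaves the function
}
```

Usage in spark-shell would then look like `val outputConf = makeOutputConf(conf, "wordcount-output")` followed by `counts.saveAsNewAPIHadoopDataset(outputConf)`, where `counts` is assumed to be the word-count pair RDD of (ImmutableBytesWritable, Put). Since the Job never becomes a shell-level value, the DEFINE-state check is not hit at definition time.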

Metadata

Assignees

No one assigned

    Labels

    api: bigtable - Issues related to the GoogleCloudPlatform/cloud-bigtable-examples API.
    type: question - Request for information or clarification. Not an issue.
