Commit 7869337

razvan and adwk67 authored
Apply suggestions from code review
Co-authored-by: Andrew Kenworthy <[email protected]>
1 parent a46be24 commit 7869337

2 files changed (+5, −4 lines)


docs/modules/spark-k8s/pages/usage-guide/job-dependencies.adoc

Lines changed: 4 additions & 3 deletions
@@ -97,9 +97,10 @@ include::example$example-pvc.yaml[]
 
 The last and most flexible way to provision dependencies is to use the built-in `spark-submit` support for Maven package coordinates.
 The downside of this method is that job dependencies are downloaded every time the job is submitted and this has several implications you must be aware of.
-For example, the job submission time will be longer than with the other methods
+For example, the job submission time will be longer than with the other methods.
 Network connectivity problems may lead to job submission failures.
-And finally, not all type of dependencies can be provisioned this way. Most notably, JDBC drivers cannot be provisioned this way since the JVM will only look for them at startup time.
+And finally, not all type of dependencies can be provisioned this way.
+Most notably, JDBC drivers cannot be provisioned this way since the JVM will only look for them at startup time.
 
 The snippet below showcases how to add Apache Iceberg support to a Spark (version 3.4.x) application.
 
@@ -127,7 +128,7 @@ spec:
 
 As mentioned above, not all dependencies can be provisioned this way.
 JDBC drivers are notorious for not being supported by this method but other types of dependencies may also not work.
-If a jar file can be provisioned using it's Maven coordinates or not, depends a lot on the way it is loaded by the JVM.
+If a jar file can be provisioned using its Maven coordinates or not, depends a lot on the way it is loaded by the JVM.
 
 === Python packages

docs/modules/spark-k8s/partials/supported-versions.adoc

Lines changed: 1 addition & 1 deletion
@@ -4,5 +4,5 @@
 // Please sort the versions in descending order (newest first)
 
 - 4.0.0 (Hadoop 3.4.1, Scala 2.13, Python 3.11, Java 17) (Experimental)
-- 3.5.5 (Hadoop 3.3.4, Scala 2.12, Python 3.11, Java 17) (Deprecated)
 - 3.5.6 (Hadoop 3.3.4, Scala 2.12, Python 3.11, Java 17) (LTS)
+- 3.5.5 (Hadoop 3.3.4, Scala 2.12, Python 3.11, Java 17) (Deprecated)
