
Commit 45d7662 (parent: c353d68)

Updating documentation for 3.x release

Updated versions and links in docs:

* Set the baseline to Java 17
* Update links

Resolves #814

17 files changed (+31, −96 lines)

README.adoc

Lines changed: 2 additions & 2 deletions

@@ -14,7 +14,7 @@ process persists beyond the life of the task for future reporting.
 
 == Requirements:
 
-* Java 8 or Above
+* Java 17 or Above
 
 == Build Main Project:
 
@@ -56,4 +56,4 @@ This project adheres to the Contributor Covenant link:CODE_OF_CONDUCT.adoc[code
 == Building the Project
 
 This project requires that you invoke the Javadoc engine from the Maven command line. You can do so by appending `javadoc:aggregate` to the rest of your Maven command.
-For example, to build the entire project, you could use `./mvnw -Pfull javadoc:aggregate`.
+For example, to build the entire project, you could use `mvn clean install -DskipTests -P docs`.
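The `docs` profile referenced in the new command is defined in the project's own `pom.xml`, which is not part of this commit. As a rough illustration only (the real profile may differ), a Maven profile that binds aggregated Javadoc generation could look like this:

```xml
<!-- Hypothetical sketch: the actual `docs` profile lives in the project's
     pom.xml and may be wired differently. This shows one common way to bind
     the maven-javadoc-plugin `aggregate` goal to the package phase. -->
<profile>
    <id>docs</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-javadoc-plugin</artifactId>
                <executions>
                    <execution>
                        <id>aggregate-javadoc</id>
                        <phase>package</phase>
                        <goals>
                            <goal>aggregate</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</profile>
```

With a profile along these lines, `mvn clean install -DskipTests -P docs` would produce the aggregated Javadoc alongside the normal build.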

docs/src/main/asciidoc/README.adoc

Lines changed: 2 additions & 2 deletions

@@ -7,7 +7,7 @@ process persists beyond the life of the task for future reporting.
 
 == Requirements:
 
-* Java 8 or Above
+* Java 17 or Above
 
 == Build Main Project:
 
@@ -49,4 +49,4 @@ This project adheres to the Contributor Covenant link:CODE_OF_CONDUCT.adoc[code
 == Building the Project
 
 This project requires that you invoke the Javadoc engine from the Maven command line. You can do so by appending `javadoc:aggregate` to the rest of your Maven command.
-For example, to build the entire project, you could use `./mvnw -Pfull javadoc:aggregate`.
+For example, to build the entire project, you could use `mvn clean install -DskipTests -P docs`.

docs/src/main/asciidoc/appendix-building-the-documentation.adoc

Lines changed: 1 addition & 1 deletion

@@ -3,4 +3,4 @@
 == Building This Documentation
 
 This project uses Maven to generate this documentation. To generate it for yourself,
-run the following command: `$ ./mvnw clean package -P full`.
+run the following command: `$ mvn clean install -DskipTests -P docs`.

docs/src/main/asciidoc/appendix-cloud-foundry.adoc

Lines changed: 0 additions & 12 deletions
This file was deleted.

docs/src/main/asciidoc/appendix.adoc

Lines changed: 0 additions & 1 deletion

@@ -4,4 +4,3 @@
 
 include::appendix-task-repository-schema.adoc[]
 include::appendix-building-the-documentation.adoc[]
-include::appendix-cloud-foundry.adoc[]

docs/src/main/asciidoc/batch-starter.adoc

Lines changed: 2 additions & 2 deletions

@@ -296,7 +296,7 @@ Ingesting a partition of data from a Kafka topic is useful and exactly what the
 `KafkaItemReader` can do. To configure a `KafkaItemReader`, two pieces
 of configuration are required. First, configuring Kafka with Spring Boot's Kafka
 autoconfiguration is required (see the
-https://docs.spring.io/spring-boot/docs/2.4.x/reference/htmlsingle/#boot-features-kafka[Spring Boot Kafka documentation]).
+https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#messaging.kafka.additional-properties[Spring Boot Kafka documentation]).
 Once you have configured the Kafka properties from Spring Boot, you can configure the `KafkaItemReader`
 itself by setting the following properties:
 
@@ -512,7 +512,7 @@ See the https://docs.spring.io/spring-batch/docs/4.3.x/api/org/springframework/b
 
 To write step output to a Kafka topic, you need `KafkaItemWriter`. This starter
 provides autoconfiguration for a `KafkaItemWriter` by using facilities from two places.
-The first is Spring Boot's Kafka autoconfiguration. (See the https://docs.spring.io/spring-boot/docs/2.4.x/reference/htmlsingle/#boot-features-kafka[Spring Boot Kafka documentation].)
+The first is Spring Boot's Kafka autoconfiguration. (See the https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#messaging.kafka.additional-properties[Spring Boot Kafka documentation].)
 Second, this starter lets you configure two properties on the writer.
 
 .`KafkaItemWriter` Properties
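As the diffed text explains, the reader first picks up the broker connection from Spring Boot's standard `spring.kafka.*` autoconfiguration properties, and then the starter adds reader-specific properties on top. A minimal sketch of such a configuration follows; the `spring.kafka.*` keys are real Spring Boot property names, while the reader-specific keys below are illustrative assumptions only (the authoritative names are in the starter's `KafkaItemReader` property table, not shown in this diff):

```properties
# Spring Boot Kafka autoconfiguration (standard Boot property names)
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=batch-reader-group

# Reader wiring -- hypothetical property names for illustration only;
# consult the KafkaItemReader properties table in this starter's docs
spring.batch.job.kafkaitemreader.name=kafkaReader
spring.batch.job.kafkaitemreader.topic=inbound-data
spring.batch.job.kafkaitemreader.partitions=0,1
```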

docs/src/main/asciidoc/batch.adoc

Lines changed: 0 additions & 52 deletions

@@ -151,58 +151,6 @@ dependency for the Spring Cloud Kubernetes Deployer:
 the following regex pattern: `[a-z0-9]([-a-z0-9]*[a-z0-9])`.
 Otherwise, an exception is thrown.
 
-=== Notes on Developing a Batch-partitioned Application for the Cloud Foundry Platform
-
-* When deploying partitioned apps on the Cloud Foundry platform, you must use the
-following dependencies for the Spring Cloud Foundry Deployer:
-+
-[source,xml]
-----
-<dependency>
-<groupId>org.springframework.cloud</groupId>
-<artifactId>spring-cloud-deployer-cloudfoundry</artifactId>
-</dependency>
-<dependency>
-<groupId>io.projectreactor</groupId>
-<artifactId>reactor-core</artifactId>
-<version>3.1.5.RELEASE</version>
-</dependency>
-<dependency>
-<groupId>io.projectreactor.ipc</groupId>
-<artifactId>reactor-netty</artifactId>
-<version>0.7.5.RELEASE</version>
-</dependency>
-----
-* When configuring the partition handler, Cloud Foundry Deployment
-environment variables need to be established so that the partition handler
-can start the partitions. The following list shows the required environment
-variables:
-- `spring_cloud_deployer_cloudfoundry_url`
-- `spring_cloud_deployer_cloudfoundry_org`
-- `spring_cloud_deployer_cloudfoundry_space`
-- `spring_cloud_deployer_cloudfoundry_domain`
-- `spring_cloud_deployer_cloudfoundry_username`
-- `spring_cloud_deployer_cloudfoundry_password`
-- `spring_cloud_deployer_cloudfoundry_services`
-- `spring_cloud_deployer_cloudfoundry_taskTimeout`
-
-An example set of deployment environment variables for a partitioned task that
-uses a `mysql` database service might resemble the following:
-
-[source,bash]
-----
-spring_cloud_deployer_cloudfoundry_url=https://api.local.pcfdev.io
-spring_cloud_deployer_cloudfoundry_org=pcfdev-org
-spring_cloud_deployer_cloudfoundry_space=pcfdev-space
-spring_cloud_deployer_cloudfoundry_domain=local.pcfdev.io
-spring_cloud_deployer_cloudfoundry_username=admin
-spring_cloud_deployer_cloudfoundry_password=admin
-spring_cloud_deployer_cloudfoundry_services=mysql
-spring_cloud_deployer_cloudfoundry_taskTimeout=300
-----
-
-NOTE: When using PCF-Dev, the following environment variable is also required:
-`spring_cloud_deployer_cloudfoundry_skipSslValidation=true`
 
 [[batch-informational-messages]]
 == Batch Informational Messages

spring-cloud-task-samples/batch-events/README.adoc

Lines changed: 6 additions & 6 deletions

@@ -10,11 +10,11 @@ This is a task application that emits batch job events to the following channels
 * item-write-events
 * skip-events
 
-Note: More information on this topic is available https://docs.spring.io/spring-cloud-task/current-SNAPSHOT/reference/htmlsingle/#stream-integration-batch-events[here].
+Note: More information on this topic is available https://docs.spring.io/spring-cloud-task/docs/current/reference/html/#stream-integration-batch-events[here].
 
 == Requirements:
 
-* Java 8 or Above
+* Java 17 or Above
 
 == Build:
 
@@ -27,15 +27,15 @@ $ ./mvnw clean install
 
 [source,shell,indent=2]
 ----
-$ java -jar target/batch-events-1.2.1.RELEASE.jar
+$ java -jar target/batch-events-3.0.0.jar
 ----
 
-For example you can listen for specific job execution events on a specified channel with a Spring Cloud Stream Sink
-like the log sink using the following:
+For example you can listen for specific job execution events on a specified channel with a Spring Cloud Stream Sink
+like the https://github.com/spring-cloud/stream-applications/tree/main/applications/sink/log-sink[log sink] using the following:
 
 [source,shell,indent=2]
 ----
-$ java -jar <PATH_TO_LOG_SINK_JAR>/log-sink-rabbit-1.0.2.RELEASE.jar --server.port=9090
+$ java -jar <PATH_TO_LOG_SINK_JAR>/log-sink-rabbit-3.1.1.jar --server.port=9090
 --spring.cloud.stream.bindings.input.destination=job-execution-events
 ----
 
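The sink's input binding in the sample above can point at any of the event channels the task emits. As a sketch, redirecting the same log sink to the `skip-events` channel listed in this README (property names taken from the sample's own command line; only the destination value changes) could be expressed as application properties instead of command-line flags:

```properties
# Bind the log sink's input to the skip-events channel instead of
# job-execution-events (same binding mechanism as the example above)
spring.cloud.stream.bindings.input.destination=skip-events
server.port=9090
```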

spring-cloud-task-samples/batch-job/README.adoc

Lines changed: 2 additions & 2 deletions

@@ -4,7 +4,7 @@ This is a Spring Cloud Task application that executes two simple Spring Batch Jo
 
 == Requirements:
 
-* Java 8 or Above
+* Java 17 or Above
 
 == Classes:
 
@@ -22,5 +22,5 @@ $ mvn clean package
 
 [source,shell,indent=2]
 ----
-$ java -jar target/batch-job-1.2.1.RELEASE.jar
+$ java -jar target/batch-job-3.0.0.jar
 ----

spring-cloud-task-samples/jpa-sample/README.adoc

Lines changed: 2 additions & 2 deletions

@@ -5,7 +5,7 @@ a data store.
 
 == Requirements:
 
-* Java 8 or Above
+* Java 17 or Above
 
 == Classes:
 
@@ -24,5 +24,5 @@ $ mvn clean package
 
 [source,shell,indent=2]
 ----
-$ java -jar target/jpa-sample-1.2.1.RELEASE.jar
+$ java -jar target/jpa-sample-3.0.0.jar
 ----
