
Commit 0dd4831

IGNITE-26733 Updating doc versioning to 3.1.0 (#6793)
1 parent 34faf04 commit 0dd4831

29 files changed: +148 -67 lines changed

docs/_docs/administrators-guide/handling-exceptions.adoc

Lines changed: 1 addition & 1 deletion
@@ -106,7 +106,7 @@ When the exception happens, Apache Ignite provides a UUID of the specific except
 |`IGN-TX-1`|Default error for transaction state storage.|General error, check the logs and take action depending on the cause. Make sure there are no disk errors.
 |`IGN-TX-2`|Storage is stopped.|Storage is stopped due to the node stop or replica stop. No action is required unless this behavior is unexpected. Otherwise check the log for details.
 |`IGN-TX-3`|Unexpected transaction state on state change.|This can happen when trying to commit an already aborted transaction or roll back a committed one. No action is required.
-|`IGN-TX-4`|Failed to acquire a lock on a key due to a conflict.|The lock is taken by another transaction. Retry the operation or change the deadlock prevention policy.
+|`IGN-TX-4`|Failed to acquire a lock on a key due to a conflict.|The lock is held by another transaction. Retry the operation or change the deadlock prevention policy.
 |`IGN-TX-5`|Failed to acquire a lock on a key within the timeout.|The lock is held by another transaction. Make sure that the other transaction is not hanging, kill it if necessary; or retry the operation.
 |`IGN-TX-6`|Failed to commit a transaction.|Take actions depending on the cause. Make sure that all partitions in the cluster have a majority of nodes online in their groups.
 |`IGN-TX-7`|Failed to roll back a transaction.|Take actions depending on the cause. Make sure that all partitions in the cluster have a majority of nodes online in their groups.

docs/_docs/administrators-guide/storage/data-partitions.adoc

Lines changed: 1 addition & 1 deletion
@@ -120,4 +120,4 @@ NOTE: Reset is likely to result in <<Partition Rebalance>>, which may take a lon
 
 == Partition Rebalance
 
-When the link:administrators-guide/storage/distribution-zones#cluster-scaling[cluster size changes], Apache Ignite waits for the timeout specified in the `AUTO SCALE UP` or `AUTO SCALE DOWN` distribution zone properties, and then redistributes partitions according to partition distribution algorithm and transfers data to make it up-to-date with the replication group. This process is called *data rebalance*.
+When the link:administrators-guide/storage/distribution-zones#cluster-scaling[cluster size changes], Apache Ignite waits for the timeout specified in the `AUTO SCALE UP` or `AUTO SCALE DOWN` distribution zone properties, and then redistributes partitions according to the partition distribution algorithm and transfers data to make it up-to-date with the replication group. This process is called *data rebalance*.

docs/_docs/administrators-guide/storage/engines/rocksdb.adoc

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@
 
 WARNING: RocksDB support is experimental.
 
-RocksDB is a persistent storage engine based on LSM tree. It is best used in environments with a large number of write requests.
+RocksDB is a persistent storage engine based on an LSM tree. It is best used in environments with a large number of write requests.
 
 == Profile Configuration
 
docs/_docs/developers-guide/clients/dotnet.adoc

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ C# client is available via NuGet. To add it, use the `add package` command:
 
 [source, bash, subs="attributes,specialchars"]
 ----
-dotnet add package Apache.Ignite --version 3.0.0
+dotnet add package Apache.Ignite --version 3.1.0
 ----
 
 == Connecting to Cluster

docs/_docs/developers-guide/clients/java.adoc

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ Java client can be added to your project by using maven:
 <dependency>
 <groupId>org.apache.ignite</groupId>
 <artifactId>ignite-client</artifactId>
-<version>3.0.0</version>
+<version>3.1.0</version>
 </dependency>
 ----
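
For context, a minimal sketch of connecting with the client declared by this dependency (editorial note, not part of this commit; the address and the table-listing call are illustrative assumptions):

[source, java]
----
import java.util.List;

import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.table.Table;

public class ClientConnectExample {
    public static void main(String[] args) {
        // Connect to a node running locally on the default client port.
        try (IgniteClient client = IgniteClient.builder()
                .addresses("127.0.0.1:10800")
                .build()) {
            // List the tables visible to the client to verify the connection.
            List<Table> tables = client.tables().tables();
            tables.forEach(table -> System.out.println(table.name()));
        }
    }
}
----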

docs/_docs/developers-guide/code-deployment/code-deployment.adoc

Lines changed: 2 additions & 2 deletions
@@ -73,7 +73,7 @@ public class MyJob implements ComputeJob<String, String> {
 }
 ----
 
-You can manage deployment units using either link:ignite-cli-tool[CLI] commands or the link:https://ignite.apache.org/releases/3.0.0/openapi.yaml[REST API]. Both methods provide the same functionality for deploying, listing, and undeploying code.
+You can manage deployment units using either link:ignite-cli-tool[CLI] commands or the link:https://ignite.apache.org/releases/3.1.0/openapi.yaml[REST API]. Both methods provide the same functionality for deploying, listing, and undeploying code.
 
 == Deploying Units with Folder Structures
 
@@ -161,7 +161,7 @@ curl -X POST 'http://localhost:10300/management/v1/deployment/units/unit/1.0.0?i
 
 - You can target nodes using either the `deployMode` or `initialNodes` parameter. These options serve the same purpose as the similar CLI parameters, ensuring the unit propagates as needed.
 
-- For additional details see the corresponding link:https://ignite.apache.org/releases/3.0.0/openapi.yaml[API documentation].
+- For additional details see the corresponding link:https://ignite.apache.org/releases/3.1.0/openapi.yaml[API documentation].
 
 === Deploy Manually
 
docs/_docs/developers-guide/data-streamer.adoc

Lines changed: 4 additions & 4 deletions
@@ -23,7 +23,7 @@ Data streaming provides at-least-once delivery guarantee.
 
 == Using Data Streamer API
 
-The link:https://ignite.apache.org/releases/3.0.0/javadoc/org/apache/ignite/table/DataStreamerTarget.html[Data Streamer API] lets you load large amounts of data into your cluster quickly and reliably using a publisher–subscriber model, where you create a publisher that streams your data entries to a table view, and the system distributes these entries across the cluster. You can configure how the data is processed via a `DataStreamerOptions` object that allows to set batch sizes, auto-flush intervals, retry limits.
+The link:https://ignite.apache.org/releases/3.1.0/javadoc/org/apache/ignite/table/DataStreamerTarget.html[Data Streamer API] lets you load large amounts of data into your cluster quickly and reliably using a publisher–subscriber model, where you create a publisher that streams your data entries to a table view, and the system distributes these entries across the cluster. You can configure how the data is processed via a `DataStreamerOptions` object that allows to set batch sizes, auto-flush intervals, retry limits.
 
 === Configuring Data Streamer
 
@@ -187,12 +187,12 @@ public record Account(int Id, string Name);
 The Apache Ignite 3 streaming API supports advanced streaming scenarios by allowing you to create a custom receiver that defines server-side processing logic. Use a receiver when you need to process or transform data on the server, update multiple tables from a single data stream, or work with incoming data that does not match a table schema.
 
 With a receiver, you can stream data in any format, as it is schema-agnostic.
-The receiver also has access to the full Ignite 3 API through the link:https://ignite.apache.org/releases/3.0.0/javadoc/org/apache/ignite/table/DataStreamerReceiverContext.html[`DataStreamerReceiverContext`].
+The receiver also has access to the full Ignite 3 API through the link:https://ignite.apache.org/releases/3.1.0/javadoc/org/apache/ignite/table/DataStreamerReceiverContext.html[`DataStreamerReceiverContext`].
 
 The data streamer controls data flow by requesting items only when partition buffers have space. `DataStreamerOptions.perPartitionParallelOperations` controls how many buffers can be allocated per partition. When buffers are full, the streamer stops requesting more data until some items are processed.
 Additionally, if a `resultSubscriber` is specified, it also applies backpressure on the streamer. If the subscriber is slow at consuming results, the streamer reduces its request rate from the publisher accordingly.
 
-To use a receiver, you need to implement the link:https://ignite.apache.org/releases/3.0.0/javadoc/org/apache/ignite/table/DataStreamerReceiver.html[`DataStreamerReceiver`] interface. The receivers `receive` method processes each batch of items streamed to the server, so you can apply custom logic and return results for each item as needed:
+To use a receiver, you need to implement the link:https://ignite.apache.org/releases/3.1.0/javadoc/org/apache/ignite/table/DataStreamerReceiver.html[`DataStreamerReceiver`] interface. The receiver's `receive` method processes each batch of items streamed to the server, so you can apply custom logic and return results for each item as needed:
 
 [tabs]
 --
@@ -605,7 +605,7 @@ await Task.Yield(); // Simulate async data source.
 
 ==== Custom Marshallers in .NET
 
-In .NET, you can define custom marshallers by implementing the link:https://ignite.apache.org/releases/3.0.0/dotnetdoc/api/Apache.Ignite.Marshalling.IMarshaller-1.html[`IMarshaller`] interface.
+In .NET, you can define custom marshallers by implementing the link:https://ignite.apache.org/releases/3.1.0/dotnetdoc/api/Apache.Ignite.Marshalling.IMarshaller-1.html[`IMarshaller`] interface.
 
 For example, the code below demonstrates how to use `JsonMarshaller` to serialize data, arguments, and results.
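
For context, a minimal sketch of streaming entries through the API described above (editorial note, not part of this commit). The table name is hypothetical, and the option builder methods for batch size, auto-flush interval, and retry limit are assumptions based on the prose; check the linked `DataStreamerOptions` javadoc for the exact names.

[source, java]
----
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.SubmissionPublisher;

import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.table.DataStreamerItem;
import org.apache.ignite.table.DataStreamerOptions;
import org.apache.ignite.table.RecordView;
import org.apache.ignite.table.Tuple;

public class StreamerExample {
    public static void main(String[] args) {
        try (IgniteClient client = IgniteClient.builder().addresses("127.0.0.1:10800").build()) {
            // Stream into a hypothetical "accounts" table through its record view.
            RecordView<Tuple> view = client.tables().table("accounts").recordView();

            // Batch size, auto-flush interval, and retry limit, as described above.
            // Exact builder method names are assumptions; see the DataStreamerOptions javadoc.
            DataStreamerOptions options = DataStreamerOptions.builder()
                    .pageSize(1_000)
                    .autoFlushInterval(1_000)
                    .retryLimit(16)
                    .build();

            // The streamer pulls items from the publisher as partition buffers free up.
            var publisher = new SubmissionPublisher<DataStreamerItem<Tuple>>();
            CompletableFuture<Void> streaming = view.streamData(publisher, options);

            for (int i = 0; i < 10_000; i++) {
                publisher.submit(DataStreamerItem.of(
                        Tuple.create().set("id", i).set("name", "account-" + i)));
            }

            // Closing the publisher signals the end of the stream; wait for buffered entries to flush.
            publisher.close();
            streaming.join();
        }
    }
}
----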

docs/_docs/developers-guide/events/overview.adoc

Lines changed: 2 additions & 2 deletions
@@ -37,7 +37,7 @@ cluster config update ignite.eventlog.sinks.exampleSink = {type="log", channel="
 Now, the authorization events will be written to the log. Here is how the event may look like:
 
 ----
-2024-06-04 16:19:29:840 +0300 [INFO][%defaultNode%sql-execution-pool-1][EventLog] {"type":"USER_AUTHORIZATION_SUCCESS","timestamp":1717507169840,"productVersion":"3.0.0","user":{"username":"ignite","authenticationProvider":"basic"},"fields":{"privileges":[{"action":"CREATE_TABLE","on":{"objectType":"TABLE","objectName":"TEST2","schema":"PUBLIC"}}],"roles":["system"]}}
+2024-06-04 16:19:29:840 +0300 [INFO][%defaultNode%sql-execution-pool-1][EventLog] {"type":"USER_AUTHORIZATION_SUCCESS","timestamp":1717507169840,"productVersion":"3.1.0","user":{"username":"ignite","authenticationProvider":"basic"},"fields":{"privileges":[{"action":"CREATE_TABLE","on":{"objectType":"TABLE","objectName":"TEST2","schema":"PUBLIC"}}],"roles":["system"]}}
 ----
 
 Below is the cluster configuration config in JSON.
@@ -145,7 +145,7 @@ All events in Apache Ignite 3 follow the same basic structure described below. S
 "type": "AUTHENTICATION",
 "user": { "username": "John", "authenticationProvider": "basic" },
 "timestamp": 1715169617,
-"productVersion": "3.0.0",
+"productVersion": "3.1.0",
 "fields": {}
 }
 ----

docs/_docs/developers-guide/sql/jdbc-driver.adoc

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ The JDBC connector needs to be included from Maven:
 <dependency>
 <groupId>org.apache.ignite</groupId>
 <artifactId>ignite-jdbc</artifactId>
-<version>3.0.0</version>
+<version>3.1.0</version>
 </dependency>
 ----
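
For context, a minimal sketch of querying through the driver above (editorial note, not part of this commit; the connection URL format and port are assumptions, adjust them for your cluster):

[source, java]
----
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcExample {
    public static void main(String[] args) throws Exception {
        // Thin-driver style URL; the scheme, host, and port are assumptions.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
----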

docs/_docs/developers-guide/sql/odbc/querying-modifying-data.adoc

Lines changed: 2 additions & 2 deletions
@@ -57,13 +57,13 @@ SQLCHAR query1[] = "CREATE TABLE Person ( "
 "id LONG PRIMARY KEY, "
 "firstName VARCHAR, "
 "lastName VARCHAR, "
-"salary FLOAT) "";
+"salary FLOAT)";
 
 SQLExecDirect(stmt, query1, SQL_NTS);
 
 SQLCHAR query2[] = "CREATE TABLE Organization ( "
 "id LONG PRIMARY KEY, "
-"name VARCHAR) "";
+"name VARCHAR)";
 
 SQLExecDirect(stmt, query2, SQL_NTS);
 