Merged
28 commits
c77cbda
Small typo fix on start page (#2100)
JPryce-Aklundh Feb 6, 2025
cf371e5
Remove the default value of --input-type=csv|parquet (#2086)
renetapopova Feb 6, 2025
7c1fa0d
Update the restore command (#2090)
NataliaIvakina Feb 10, 2025
3ce4a75
Add links to the Components of the graph privilege commands from wher…
renetapopova Feb 10, 2025
df9e47d
Emphasize in docs that in clustering environments, when using increme…
renetapopova Feb 10, 2025
ed60775
Update the description of `update-all-matching-relationships` (#2105)
NataliaIvakina Feb 11, 2025
b350dae
Fix the Configure the thread pool for a Bolt connector example to sho…
renetapopova Feb 11, 2025
769b1f1
bump version to 2025.02 (#2099)
renetapopova Feb 11, 2025
21044b9
Add election metrics (#2089)
RagnarW Feb 11, 2025
62db050
Add setting for disabling raft async acquisition (#2087)
RagnarW Feb 12, 2025
7c0bea4
Add metrics for inbound queue in raft (#2088)
RagnarW Feb 12, 2025
be3da56
Update cypher-shell.adoc
loveleif Feb 12, 2025
2a55fdf
Update cypher-shell.adoc
loveleif Feb 12, 2025
5218ae2
Add store copy metrics (#2046)
RagnarW Feb 13, 2025
e5bde5d
Fix the config name `dbms.cluster.raft.async_channel_acquisition_enab…
NataliaIvakina Feb 14, 2025
b68474c
Deprecate raft tx_retries metric (#2117)
NataliaIvakina Feb 17, 2025
c46ee54
Workflows 1.2.0 (#2118)
recrwplay Feb 17, 2025
b498e91
Add info about the server-logs.xml file (#2125)
NataliaIvakina Feb 18, 2025
f8a4a5b
Improve the 5th level headings (#2124)
NataliaIvakina Feb 18, 2025
d14220d
Add a line about Notification API (#2121)
NataliaIvakina Feb 18, 2025
1032966
Clarify the supported filesystems for Linux (#2122)
NataliaIvakina Feb 18, 2025
48db57a
Removed beta label from set vector property functions. (#2137) (#2141)
NataliaIvakina Feb 20, 2025
1023dd4
Updates the hospital example to include steps on how to create and lo…
renetapopova Feb 20, 2025
36e7d85
Document --schema option for incremental import (#2139)
renetapopova Feb 20, 2025
1e0996e
Improve the descriptions of --skip-bad-relationships, --skip-duplicat…
renetapopova Feb 21, 2025
9e6e076
Fix the link to the Vector functions page (#2145)
NataliaIvakina Feb 24, 2025
409908b
Update the version in all K8s backup-restore examples to be the lates…
renetapopova Feb 25, 2025
e144343
Merge branch 'dev' into main-2025.02
renetapopova Feb 27, 2025
12 changes: 6 additions & 6 deletions antora.yml
Original file line number Diff line number Diff line change
@@ -1,14 +1,14 @@
name: operations-manual
title: Operations Manual
version: '2025.01'
version: '2025.02'
current: true
start_page: ROOT:index.adoc
nav:
- modules/ROOT/content-nav.adoc
asciidoc:
attributes:
neo4j-version: '2025.01'
neo4j-version-minor: '2025.01'
neo4j-version-exact: '2025.01.0'
neo4j-buildnumber: '2025.01'
neo4j-debian-package-version: '1:5.22.0@'
neo4j-version: '2025.02'
neo4j-version-minor: '2025.02'
neo4j-version-exact: '2025.02.0'
neo4j-buildnumber: '2025.02'
neo4j-debian-package-version: '1:2025.02.0@'
18 changes: 9 additions & 9 deletions modules/ROOT/pages/backup-restore/restore-backup.adoc
@@ -36,13 +36,11 @@ This recovery operation is resource-intensive and can be decoupled from the rest

[source,role=noheader]
----
neo4j-admin database restore [-h] [--expand-commands]
[--verbose] [--overwrite-destination[=true|false]]
[--additional-config=<file>]
--from-path=<path>[,<path>...]
neo4j-admin database restore [-h] [--expand-commands] [--verbose] [--overwrite-destination
[=true|false]] [--source-database[=source-database-name]]
[--additional-config=<file>] --from-path=<path> [,<path>...]
[--restore-until=<recovery-criteria>] [--temp-path=<path>]
[--to-path-data=<path>] [--to-path-txn=<path>]
[<database>]
[--to-path-data=<path>] [--to-path-txn=<path>] [<database>]
----
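
Read purely as an illustration of the synopsis above (the paths, database name, and backup folder are hypothetical, not taken from this PR), a restore that uses the new `--source-database` filter against a folder holding artifacts for several databases might look like:

.Restore one database from a multi-database backup folder (illustrative sketch)
[source, shell, role=noplay]
----
# Hypothetical layout: /backups/all-databases holds artifacts for several databases.
# --source-database filters the artifacts down to those belonging to "neo4j".
bin/neo4j-admin database restore \
  --from-path=/backups/all-databases \
  --source-database=neo4j \
  --overwrite-destination=true \
  neo4j
----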

=== Parameters
@@ -77,9 +75,7 @@ neo4j-admin database restore [-h] [--expand-commands]
|

|--from-path=<path>[,<path>...]
|A single path or a comma-separated list of paths pointing to a backup artifact file.
An artifact file can be 1) a full backup, in which case it is restored directly or, 2) a differential backup, in which case the command tries first to find in the folder a backup chain ending at that specific differential backup and then restores that chain.
It is possible to restore backups from AWS S3 buckets, Google Cloud storage buckets, and Azure buckets using the appropriate URI as the path.
|The path can point to an individual backup artifact, a folder that contains artifacts, or a comma-separated list of backup artifact files. An artifact file can be 1) a full backup, in which case it is restored directly or, 2) a differential backup, in which case the command tries first to find in the folder a backup chain ending at that specific differential backup and then restores that chain. It is possible to restore backups from AWS S3 buckets, Google Cloud storage buckets, and Azure buckets using the appropriate URI as the path.
|

|-h, --help
@@ -104,6 +100,10 @@ The restore recovers transaction logs up to, but not including, the transaction
The restore recovers transactions that were committed before the provided timestamp.
|

| --source-database[=source-database-name]
|label:new[Introduced in 2025.02] A source database name. If the `--from-path` points to a folder containing backups for multiple databases, you must specify the database name to filter the artifacts.
|

| --to-path-data=<path>
|Base directory for databases.
Usage of this option is only allowed if the `--from-path` parameter points to exactly one directory.
3 changes: 3 additions & 0 deletions modules/ROOT/pages/changes-deprecations-removals.adoc
@@ -494,6 +494,9 @@ Replaced by: xref:procedures.adoc#procedure_dbms_cluster_secondaryreplicationdis
| Name
| Comment

|xref:monitoring/metrics/reference.adoc#raft-metrics[`<prefix>.cluster.raft.tx_retries`] label:deprecated[Deprecated in 2025.02]
|The metric will be removed in a future release.

2+| xref:monitoring/metrics/reference.adoc#db-data-metrics[Database data metrics] label:deprecated[Deprecated in 5.15]
|`<prefix>.ids_in_use.relationship_type`|
|`<prefix>.ids_in_use.property`|
15 changes: 15 additions & 0 deletions modules/ROOT/pages/configuration/configuration-settings.adoc
@@ -450,6 +450,21 @@ a|A comma-separated list where each element is a string.
m|++++++
|===

[role=label--enterprise-edition label--new-2025.02]
[[config_dbms.cluster.raft.async_channel_acquisition_enabled]]
=== `dbms.cluster.raft.async_channel_acquisition_enabled`

.dbms.cluster.raft.async_channel_acquisition_enabled
[frame="topbot", stripes=odd, grid="cols", cols="<1s,<4"]
|===
|Description
a|Enable async acquisition of raft sender channels. If set to `false`, the leader will wait for a connection to a follower before shipping it entries. This may cause latencies in replication if one or more members are slow at establishing connections.
|Valid values
a|A boolean.
|Default value
m|+++true+++
|===
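
As a configuration sketch (a fragment only; the surrounding neo4j.conf content and file location are deployment-specific assumptions), an operator who prefers the pre-2025.02 behavior described above could disable the setting like so:

.neo4j.conf fragment (illustrative)
[source, properties]
----
# Fall back to synchronous channel acquisition: the leader waits for a
# connection to a follower before shipping Raft log entries to it.
dbms.cluster.raft.async_channel_acquisition_enabled=false
----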


[role=label--enterprise-edition]
[[config_dbms.cluster.raft.binding_timeout]]
10 changes: 9 additions & 1 deletion modules/ROOT/pages/monitoring/metrics/reference.adoc
@@ -441,7 +441,7 @@ label:deprecated[Deprecated in 5.15]
|<prefix>.cluster.raft.applied_index|The applied index of the Raft log. Represents the application of the committed Raft log entries to the database and internal state. The applied index should always be less than or equal to the commit index. The difference between this and the commit index can be used to monitor how up-to-date the follower database is. (gauge)
|<prefix>.cluster.raft.prune_index |The head index of the Raft log. Represents the oldest Raft index that exists in the log. A prune event will increase this value. This can be used to track how much history of Raft logs the member has. (gauge)
|<prefix>.cluster.raft.term|The Raft Term of this server. It increases monotonically if you do not unbind the cluster state. (gauge)
|<prefix>.cluster.raft.tx_retries|Transaction retries. (counter)
|<prefix>.cluster.raft.tx_retries|label:deprecated[Deprecated in 2025.02] Transaction retries. (counter)
|<prefix>.cluster.raft.is_leader|Is this server the leader? Track this for each rafted primary database in the cluster. It reports `0` if it is not the leader and `1` if it is the leader. The sum of all of these should always be `1`. However, there are transient periods in which the sum can be more than `1` because more than one member thinks it is the leader. Action may be needed if the metric shows `0` for more than 30 seconds. (gauge)
|<prefix>.cluster.raft.in_flight_cache.total_bytes|In-flight cache total bytes. (gauge)
|<prefix>.cluster.raft.in_flight_cache.max_bytes|In-flight cache max bytes. (gauge)
@@ -466,6 +466,11 @@ label:deprecated[Deprecated in 5.15]
|<prefix>.cluster.raft.snapshot_attempt|label:new[Introduced in 2025.01] Total number of attempts to download Raft snapshots triggered. (counter)
|<prefix>.cluster.raft.snapshot_success|label:new[Introduced in 2025.01] Total number of successfully downloaded Raft snapshots. (counter)
|<prefix>.cluster.raft.snapshot_fail|label:new[Introduced in 2025.01] Total number of failed Raft snapshot download attempts. (counter)
|<prefix>.cluster.raft.inbound_queue_offered|label:new[Introduced in 2025.02] Total number of inbound messages offered to the queue. (counter)
|<prefix>.cluster.raft.inbound_queue_accepted|label:new[Introduced in 2025.02] Total number of inbound messages accepted by the queue. (counter)
|<prefix>.cluster.raft.inbound_queue_rejected|label:new[Introduced in 2025.02] Total number of inbound messages rejected by the queue. (counter)
|<prefix>.cluster.raft.pre_elections_triggered|label:new[Introduced in 2025.02] Total number of pre-elections triggered by this member. (counter)
|<prefix>.cluster.raft.elections_triggered|label:new[Introduced in 2025.02] Total number of elections triggered by this member. (counter)
|===


@@ -479,6 +484,9 @@ label:deprecated[Deprecated in 5.15]
|<prefix>.cluster.store_copy.pull_updates|The total number of pull requests made by this instance. (counter)
|<prefix>.cluster.store_copy.pull_update_highest_tx_id_requested|The highest transaction id requested in a pull update by this instance. (counter)
|<prefix>.cluster.store_copy.pull_update_highest_tx_id_received|The highest transaction id that has been pulled in the last pull updates by this instance. (counter)
|<prefix>.cluster.store_copy.store_file_download_attempt|label:new[Introduced in 2025.02] Total number of store file download attempts. (counter)
|<prefix>.cluster.store_copy.store_file_download_fail |label:new[Introduced in 2025.02] Total number of failed store file downloads. (counter)
|<prefix>.cluster.store_copy.store_file_download_success |label:new[Introduced in 2025.02] Total number of successful store file downloads. (counter)
|===


106 changes: 59 additions & 47 deletions modules/ROOT/pages/tools/neo4j-admin/neo4j-admin-import.adoc
@@ -68,7 +68,7 @@ These are some things you need to keep in mind when creating your input files:
Indexes and constraints are not created during the import.
Instead, you have to add these afterward (see link:{neo4j-docs-base-uri}/cypher-manual/current/indexes-for-full-text-search[Cypher Manual -> Indexes]).
You can use the `--schema` option to create indexes and contraints during the import process.
You can use the `--schema` option to create and populate indexes and constraints during the import process.
The option is available in the Enterprise Edition and works only for the block format.
See <<indexes-constraints-import, Provide indexes and constraints during import>> for more information.
====
@@ -130,7 +130,7 @@ For more information, please contact Neo4j Professional Services.

`neo4j-admin import` also supports the Parquet file format.
You can use the parameter `--input-type=csv|parquet` to explicitly specify whether to use CSV or Parquet for the importer.
If not defined, the default value is CSV.
If not defined, it defaults to CSV.
The xref:tools/neo4j-admin/neo4j-admin-import.adoc#import-tool-examples[examples] for CSV can also be used with Parquet.

[[full-import-options-table]]
@@ -227,7 +227,7 @@ Possible values are:

|--input-type=csv\|parquet
|File type to import from. Can be csv or parquet. Defaults to csv.
|csv
|

|--legacy-style-quoting[=true\|false]
|Whether or not a backslash-escaped quote e.g. \" is interpreted as an inner quote.
@@ -322,15 +322,15 @@ This is done by using the `--verbose` option.
|--skip-bad-relationships[=true\|false]
|Whether or not to skip importing relationships that refer to missing node IDs, i.e. either start or end node ID/group referring to a node that was not specified by the node input data.

Skipped relationships will be logged, containing at most the number of entities specified by `--bad-tolerance`, unless otherwise specified by the `--skip-bad-entries-logging` option.
Skipped relationships will be logged if they are within the limit of entities specified by `--bad-tolerance` and the `--skip-bad-entries-logging` option is disabled.
|false

|--skip-duplicate-nodes[=true\|false]
|Whether or not to skip importing nodes that have the same ID/group.

In the event of multiple nodes within the same group having the same ID, the first encountered will be imported, whereas consecutive such nodes will be skipped.

Skipped nodes will be logged, containing at most the number of entities specified by `--bad-tolerance`, unless otherwise specified by the `--skip-bad-entries-logging` option.
Skipped nodes will be logged if they are within the limit of entities specified by `--bad-tolerance` and the `--skip-bad-entries-logging` option is disabled.
|false

|--strict[=true\|false]
@@ -444,41 +444,6 @@ bin/neo4j-admin database import full --nodes import/movies_header.csv,import/mov
--relationships import/roles_header.csv,import/roles.csv
----

[role=label--enterprise-edition]
[[indexes-constraints-import]]
==== Provide indexes and constraints during import

You can use the `--schema` option to create indexes/constraints during the initial import process.
It currently only works for the block format and full import.

You should have a Cypher script containing only `CREATE INDEX|CONSTRAINT` commands to be parsed and executed.
This file uses ';' as the separator.

For example:

[source, cypher, role=nocopy]
----
CREATE INDEX PersonNameIndex FOR (i:Person) ON (i.name);
CREATE CONSTRAINT PersonAgeConstraint FOR (c:Person) REQUIRE c.age IS :: INTEGER
----

List of supported indexes and constraints that can be created by the import tool:

* RANGE
* LOOKUP
* POINT
* TEXT
* FULL-TEXT
* VECTOR

For example:

[source, shell, role=noplay]
----
bin/neo4j-admin database import full neo4j --nodes=import/movies.csv --nodes=import/actors.csv --relationships=import/roles.csv --schema=import/schema.cypher
----


[[import-tool-multiple-input-files-regex-example]]
==== Import data from CSV files using regular expression

@@ -758,7 +723,7 @@ Possible values are:

|--input-type=csv\|parquet
|File type to import from. Can be csv or parquet. Defaults to csv.
|csv
|

|--legacy-style-quoting[=true\|false]
|Whether or not a backslash-escaped quote e.g. \" is interpreted as an inner quote.
@@ -838,7 +803,7 @@ If you need to debug the import, it might be useful to collect the stack trace.
This is done by using the `--verbose` option.
|import.report

|--schema=<path>footnote:[The `--schema` option is available in this version but not yet supported. It will be functional in a future release.]
|--schema=<path> label:new[Available from 2025.02]
|Path to the file containing the Cypher commands for creating indexes and constraints during data import.
|

@@ -849,15 +814,15 @@ This is done by using the `--verbose` option.
|--skip-bad-relationships[=true\|false]
|Whether or not to skip importing relationships that refer to missing node IDs, i.e. either start or end node ID/group referring to a node that was not specified by the node input data.

Skipped relationships will be logged, containing at most the number of entities specified by `--bad-tolerance`, unless otherwise specified by the `--skip-bad-entries-logging` option.
Skipped relationships will be logged if they are within the limit of entities specified by `--bad-tolerance` and the `--skip-bad-entries-logging` option is disabled.
|false

|--skip-duplicate-nodes[=true\|false]
|Whether or not to skip importing nodes that have the same ID/group.

In the event of multiple nodes within the same group having the same ID, the first encountered will be imported, whereas consecutive such nodes will be skipped.

Skipped nodes will be logged, containing at most the number of entities specified by `--bad-tolerance`, unless otherwise specified by the `--skip-bad-entries-logging` option.
Skipped nodes will be logged if they are within the limit of entities specified by `--bad-tolerance` and the `--skip-bad-entries-logging` option is disabled.
|false


Expand All @@ -876,16 +841,15 @@ If enabled all those relationships will be found but at the cost of lower perfor
|false label:changed[Changed in 5.8]

|--threads=<num>
| (advanced) Max number of worker threads used by the importer. Defaults to the number of available processors reported by the JVM. There is a certain amount of minimum threads needed so for that reason there is no lower bound for this value. For optimal
performance, this value should not be greater than the number of available processors.
| (advanced) Max number of worker threads used by the importer. Defaults to the number of available processors reported by the JVM. There is a certain amount of minimum threads needed so for that reason there is no lower bound for this value. For optimal performance, this value should not be greater than the number of available processors.
|20

|--trim-strings[=true\|false]footnote:ingnoredByParquet2[]
|Whether or not strings should be trimmed for whitespaces.
|false

|--update-all-matching-relationships
|label:new[Introduced in 2025.01] If one relationship data entry matches multiple existing relationships, this decides whether to update all matching, or to instead log as error.
|label:new[Introduced in 2025.01] Whether or not to update all existing relationships that match a relationship data entry. If disabled, the relationship data entry will be logged if it is within the limit of entities specified by `--bad-tolerance` and the `--skip-bad-entries-logging` option is disabled.
|false

|--verbose
@@ -1570,6 +1534,54 @@ bin/neo4j-admin database import --nodes import/movies-header.csv,import/movies.c
----
====

[role=label--enterprise-edition]
[[indexes-constraints-import]]
== Provide indexes and constraints during import

You can use the `--schema` option to create and populate indexes/constraints during the import process.
It works for the block format and both full and incremental import.
For incremental import, this functionality is available from 2025.02.

You should have a Cypher script containing only `CREATE INDEX|CONSTRAINT` commands to be parsed and executed.
This file uses ';' as the separator.

For example:

[source, cypher, role=nocopy]
----
CREATE INDEX PersonNameIndex FOR (i:Person) ON (i.name);
CREATE CONSTRAINT PersonAgeConstraint FOR (c:Person) REQUIRE c.age IS :: INTEGER
----

List of supported indexes and constraints that can be created by the import tool:

* RANGE
* LOOKUP
* POINT
* TEXT
* FULL-TEXT
* VECTOR

For example:

.Create indexes and constraints during full import
[source, shell, role=noplay]
----
bin/neo4j-admin database import full neo4j --nodes=import/movies.csv --nodes=import/actors.csv --relationships=import/roles.csv --schema=import/schema.cypher
----

.Create indexes and constraints during incremental import
[source, shell, role=noplay]
----
bin/neo4j-admin database import incremental --stage=all --nodes=import/movies.csv --nodes=import/actors.csv --relationships=import/roles.csv --schema=import/schema.cypher
----

[NOTE]
====
You must stop your database if you want to perform the incremental import within one command.
If you cannot afford a full downtime of your database, split the operation into several stages.
For details, see <<incremental-import-stages>>.
====
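
The staged alternative mentioned in the note above can be sketched as follows. This is an illustration only: it assumes the documented incremental-import stages (`prepare`, `build`, `merge`) are run in sequence, reuses the hypothetical input files from the example, and the exact options accepted by each stage should be verified against the stage documentation before use:

.Incremental import split into stages (illustrative sketch)
[source, shell, role=noplay]
----
# Stage 1: prepare the incremental import (check the stage docs for the
# required database state at each step).
bin/neo4j-admin database import incremental --stage=prepare --nodes=import/movies.csv --nodes=import/actors.csv --relationships=import/roles.csv --schema=import/schema.cypher neo4j

# Stage 2: build the imported data offline.
bin/neo4j-admin database import incremental --stage=build neo4j

# Stage 3: merge the prepared data into the target database.
bin/neo4j-admin database import incremental --stage=merge neo4j
----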

[role=label--enterprise-edition]
[[import-tool-resume]]
1 change: 0 additions & 1 deletion package.json
@@ -22,7 +22,6 @@
"@antora/cli": "^3.1.7",
"@antora/site-generator-default": "^3.1.7",
"@neo4j-antora/antora-add-notes": "^0.3.1",
"@neo4j-antora/antora-modify-sitemaps": "^0.6.1",
"@neo4j-antora/antora-page-roles": "^0.3.2",
"@neo4j-antora/antora-table-footnotes": "^0.3.2",
"@neo4j-antora/antora-unlisted-pages": "^0.1.0",
4 changes: 0 additions & 4 deletions preview.yml
@@ -24,10 +24,6 @@ urls:

antora:
extensions:
- require: "@neo4j-antora/antora-modify-sitemaps"
sitemap_version: '2025.01'
sitemap_loc_version: 'current'
move_sitemaps_to_components: true
- require: "@neo4j-antora/antora-unlisted-pages"

asciidoc:
Expand Down
4 changes: 0 additions & 4 deletions publish.yml
@@ -25,10 +25,6 @@

antora:
extensions:
- require: "@neo4j-antora/antora-modify-sitemaps"
sitemap_version: '2025.01'
sitemap_loc_version: 'current'
move_sitemaps_to_components: true
- require: "@neo4j-antora/antora-unlisted-pages"

asciidoc: