
Commit ef64371 (1 parent: f4a4397)

Nhse o34 orkv.i87 linkcheck (OpenRiak#108)

Add link checking GHA, and fix the broken links it found.

File tree: 7 files changed (+30 −7 lines)

.github/workflows/linkcheck.yml (17 additions, 0 deletions)

@@ -0,0 +1,17 @@
+name: linkcheck
+
+on:
+  pull_request:
+    branches:
+      - openriak-3.4
+
+jobs:
+  check-links:
+    name: runner / linkspector
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - name: Run linkspector
+        uses: umbrelladocs/action-linkspector@v1.4.0
+        with:
+          config_file: ./.linkspector.yml

.linkspector.yml (6 additions, 0 deletions)

@@ -0,0 +1,6 @@
+dirs:
+  - docs
+files:
+  - ./README.md
+excludedDirs:
+  - docs/previous
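To make the configuration above concrete: the job checks the markdown under `dirs` and `files` (skipping `excludedDirs`) and fails on dead links. Below is a minimal illustrative sketch of the relative-link part of such a check; it is not linkspector's actual implementation, and the function name and regex are assumptions for illustration only.

```python
import os
import re

# Matches markdown links like [text](target) or [text](target#anchor),
# capturing the target path without the anchor fragment.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#]+)(#[^)]*)?\)")

def find_broken_relative_links(md_path):
    """Return relative link targets in md_path that do not exist on disk.
    External (http/https/mailto) links are skipped; a real checker would
    verify those with an HTTP request instead."""
    broken = []
    base = os.path.dirname(md_path)
    with open(md_path, encoding="utf-8") as f:
        for target, _anchor in LINK_RE.findall(f.read()):
            if target.startswith(("http://", "https://", "mailto:")):
                continue  # external link: needs an HTTP check, not a stat
            if not os.path.exists(os.path.join(base, target)):
                broken.append(target)
    return broken
```

This is exactly the class of breakage the commit fixes: links such as `./NextGenReplGuide.md` pointing at files that no longer exist, or absolute paths like `/ObjectAPI.md` that do not resolve relative to the document.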

README.md (1 addition, 1 deletion)

@@ -24,7 +24,7 @@ For later OTP versions, an alternative `openriak-<release>` branch will be requi
 
 ## Quick Start
 
-You should have [Erlang/OTP 26](http://erlang.org/download.html) to compile and run this version Riak KV. The [Riak application](https://github.com/OpenRiak/riak) is the parent application for Riak KV, providing a set of scripts to build, package, deploy and run a Riak KV store.
+You should have [Erlang/OTP 26](https://erlang.org/downloads.html) to compile and run this version Riak KV. The [Riak application](https://github.com/OpenRiak/riak) is the parent application for Riak KV, providing a set of scripts to build, package, deploy and run a Riak KV store.
 
 ## Quick Docs
 

docs/OperationsAndTroubleshootingGuide.md (2 additions, 2 deletions)

@@ -435,7 +435,7 @@ riak eval "application:set_env(riak_kv, log_readrepair, true)"
 
 ### Monitoring inter-cluster reconciliation
 
-For information on monitoring inter-cluster reconciliation and repair [refer to the NextGen Repl guide](./NextGenReplGuide.md#monitoring-and-run-time-changes).
+For information on monitoring inter-cluster reconciliation and repair [refer to the NextGen Repl guide](./ReplicationGuide.md#monitoring-and-runtime-changes).
 
 ### Monitoring node worker pools
 

@@ -690,7 +690,7 @@ Before considering backups, it is worth noting that as a distributed database th
 
 Production users of Riak commonly have relatively lightweight backup and recovery strategies when compared to traditional database management systems; eventual consistency allows the global recovery of state without the need to focus on recovering state first back to a point in time. In general, greater effort is placed into building the resilience of the system, and also the management of change within the application i.e. ensuring the application adopts lazy migration strategies for schema changes that don't require large point-in-time migration events.
 
-If an individual node fails, do not restore an individual node from backup. It is generally much more efficient and reliable to use [the `repair` process](#reactive-replace) to recover data on a node. It is not normal practice to keep backups simply for the purpose of restoring individual nodes, even where those nodes may rely on ephemeral disks.
+If an individual node fails, do not restore an individual node from backup. It is generally much more efficient and reliable to use [the `repair` process](#reactive-replacement) to recover data on a node. It is not normal practice to keep backups simply for the purpose of restoring individual nodes, even where those nodes may rely on ephemeral disks.
 
 Note that in cloud environments, if an inefficient backup method is chosen (e.g. snapshots of block-service file-system volumes), then backup costs may consume a dominant proportion of overall Riak infrastructure costs.
 

docs/OtherAPI.md (1 addition, 1 deletion)

@@ -6,7 +6,7 @@ layout : default
 
 # Riak KV - Other APIs
 
-The majority of work within Riak KV can be done using the [Object API](/ObjectAPI.md), and the [Query API](/QueryAPI.md). There are though additional APIs, with specific purposes:
+The majority of work within Riak KV can be done using the [Object API](./ObjectAPI.md), and the [Query API](./QueryAPI.md). There are though additional APIs, with specific purposes:
 
 - [The AAE Fold API](#aae-fold-api)
 - [The Fetch API used to access replication queues](#the-fetch-api)

docs/QueryAPI.md (1 addition, 1 deletion)

@@ -63,7 +63,7 @@ The Query API is intended to provide flexible and performant functionality in th
 {: .highlight }
 > The aim of Riak development is to provide a database that performs efficient, scalable and predictable CRUD operations, and is just-queryable-enough to avoid the need of third party database integration in most use cases.
 
-Riak does support via [an external replication API](./NextGenReplGuide.md), the ability to manage replication and reconciliation to third party query engines (e.g. OpenSearch), should more complex query support be required. The automation of such integration is outside of the current functional scope of Riak.
+Riak does support via [an external replication API](./ReplicationGuide.md#replication-api), the ability to manage replication and reconciliation to third party query engines (e.g. OpenSearch), should more complex query support be required. The automation of such integration is outside of the current functional scope of Riak.
 
 ### Querying - Functional Summary
 

docs/ReplicationGuide.md (2 additions, 2 deletions)

@@ -100,7 +100,7 @@ The number of sink workers can be configured on the node:
 - The number of workers will constrain the pace at which events can be pulled from a source cluster, and also the PUSH workload that a sink cluster can generate for itself.
 - There is an overhead of a sink making requests on the source, so each sink worker will backoff if a request results in no replication events being discovered.
 - The sink worker pool does not auto-expand.
-- Sufficient sink workers need to be configured to keep-up with real-time replication, though [this number can be adjusted at runtime](#changing-the-number-of-sink-workers).
+- Sufficient sink workers need to be configured to keep-up with real-time replication, though [this number can be adjusted at runtime](#making-runtime-changes-to-the-sink).
 - There is some protection from over-provisioning but not from under-provisioning.
 
 In handling replication events, sink workers must apply the replicated change into the local cluster, and this uses a specific `PUSH` command. The sink workers are constrained in that:

@@ -306,7 +306,7 @@ Reconciliation requires the scheduling of checks. Each check will perform a ful
 - `branch_compare`;
 - `clock_compare`.
 
-The root to be compared is the root of [the merkle tree](./RiakTheoryGuide.md#handling-requests) representing the state of the whole tree in 1,024 4-byte hashes. The roots are merged across all partitions, to provide a representation of cluster state in a single 4KB integer.
+The root to be compared is the root of [the merkle tree](./RiakTheoryGuide.md#anti-entropy) representing the state of the whole tree in 1,024 4-byte hashes. The roots are merged across all partitions, to provide a representation of cluster state in a single 4KB integer.
 
 If these roots match between the clusters, the clusters are considered to be reconciled - `in_sync = true` is the result of the exchange, and `{root_compare, 0}` is the final state of the exchange. If not, the `root_compare` is repeated, and on the repeated check only deltas in the same 4-byte hash as the previous compare need to be considered a potential mismatch. The `root_compare` will be repeated until the intersection of deltas is empty (all 1,024 hashes, have a some stage in the loop, matched between roots), or there exists a stable set of branches in the root, which differ on every comparison. An empty set of deltas will be considered an `in_sync = true` result, otherwise the next phase is required.