.pem | openssl md5
+```
diff --git a/site/versioned_docs/version-4.1/clustering/creating-a-cluster-user.md b/site/versioned_docs/version-4.1/clustering/creating-a-cluster-user.md
new file mode 100644
index 00000000..3edecd29
--- /dev/null
+++ b/site/versioned_docs/version-4.1/clustering/creating-a-cluster-user.md
@@ -0,0 +1,59 @@
+---
+title: Creating a Cluster User
+---
+
+# Creating a Cluster User
+
+Inter-node authentication takes place via HarperDB users. There is a special role type called `cluster_user` that exists by default and limits the user to only clustering functionality.
+
+A `cluster_user` must be created and added to the `harperdb-config.yaml` file for clustering to be enabled.
+
+All nodes that are intended to be clustered together need to share the same `cluster_user` credentials (i.e. username and password).
+
+There are multiple ways a `cluster_user` can be created:
+
+1. Through the operations API by calling `add_user`
+
+```json
+{
+ "operation": "add_user",
+ "role": "cluster_user",
+ "username": "cluster_account",
+ "password": "letsCluster123!",
+ "active": true
+}
+```
+
+When using the API to create a cluster user the `harperdb-config.yaml` file must be updated with the username of the new cluster user.
+
+This can be done through the API by calling `set_configuration` or by editing the `harperdb-config.yaml` file.
+
+```json
+{
+ "operation": "set_configuration",
+ "clustering_user": "cluster_account"
+}
+```
+
+In the `harperdb-config.yaml` file, under the top-level `clustering` element, there will be a `user` element. Set this to the name of the cluster user.
+
+```yaml
+clustering:
+ user: cluster_account
+```
+
+_Note: When making any changes to the `harperdb-config.yaml` file, HarperDB must be restarted for the changes to take effect._
+
+1. Upon installation using **command line variables**. This will automatically set the user in the `harperdb-config.yaml` file.
+
+_Note: Using command line or environment variables for setting the cluster user only works on install._
+
+```
+harperdb install --CLUSTERING_USER cluster_account --CLUSTERING_PASSWORD letsCluster123!
+```
+
+1. Upon installation using **environment variables**. This will automatically set the user in the `harperdb-config.yaml` file.
+
+```
+CLUSTERING_USER=cluster_account CLUSTERING_PASSWORD=letsCluster123!
+```
diff --git a/site/versioned_docs/version-4.1/clustering/enabling-clustering.md b/site/versioned_docs/version-4.1/clustering/enabling-clustering.md
new file mode 100644
index 00000000..6b563b19
--- /dev/null
+++ b/site/versioned_docs/version-4.1/clustering/enabling-clustering.md
@@ -0,0 +1,49 @@
+---
+title: Enabling Clustering
+---
+
+# Enabling Clustering
+
+Clustering does not run by default; it needs to be enabled.
+
+To enable clustering the `clustering.enabled` configuration element in the `harperdb-config.yaml` file must be set to `true`.
+
+There are multiple ways to update this element:
+
+1. Directly editing the `harperdb-config.yaml` file and setting `enabled` to `true`
+
+```yaml
+clustering:
+ enabled: true
+```
+
+_Note: When making any changes to the `harperdb-config.yaml` file, HarperDB must be restarted for the changes to take effect._
+
+1. Calling `set_configuration` through the operations API
+
+```json
+{
+ "operation": "set_configuration",
+ "clustering_enabled": true
+}
+```
+
+_Note: When making any changes to HarperDB configuration, HarperDB must be restarted for the changes to take effect._
+
+1. Using **command line variables**.
+
+```
+harperdb --CLUSTERING_ENABLED true
+```
+
+1. Using **environment variables**.
+
+```
+CLUSTERING_ENABLED=true
+```
+
+An efficient way to **install HarperDB**, **create the cluster user**, **set the node name** and **enable clustering** in one operation is to combine the steps using command line and/or environment variables. Here is an example using command line variables.
+
+```
+harperdb install --CLUSTERING_ENABLED true --CLUSTERING_NODENAME Node1 --CLUSTERING_USER cluster_account --CLUSTERING_PASSWORD letsCluster123!
+```
diff --git a/site/versioned_docs/version-4.1/clustering/establishing-routes.md b/site/versioned_docs/version-4.1/clustering/establishing-routes.md
new file mode 100644
index 00000000..e4ca2a6d
--- /dev/null
+++ b/site/versioned_docs/version-4.1/clustering/establishing-routes.md
@@ -0,0 +1,73 @@
+---
+title: Establishing Routes
+---
+
+# Establishing Routes
+
+A route is a connection between two nodes. It is how the clustering network is established.
+
+Routes do not need to cross-connect all nodes in the cluster. You can select a leader node (or a few leaders) and have all other nodes connect to them, you can chain nodes together, and so on. As long as there is one route connecting a node to the cluster, all other nodes should be able to reach that node.
+
+Using routes, the clustering servers will create a mesh network between nodes. This mesh network ensures that if a node drops out, all other nodes can still communicate with each other. That being said, we recommend designing your routing with failover in mind: rather than storing all your routes on one node, disperse them throughout the network.
+
+A simple route example is a two-node topology: if Node1 adds a route to connect it to Node2, Node2 does not need to add a route back to Node1. That one route configuration is all that’s needed to establish a bidirectional connection between the nodes.
+
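+As a sketch, that single-route configuration on Node1 might look like this (the host value is illustrative):
+
+```yaml
+# harperdb-config.yaml on Node1 only; Node2 needs no route back to Node1
+clustering:
+  hubServer:
+    cluster:
+      network:
+        routes:
+          - host: node2.example.com
+            port: 9932
+```
+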
+A route consists of a `port` and a `host`.
+
+`port` - the clustering port of the remote instance you are creating the connection with. This is going to be the `clustering.hubServer.cluster.network.port` in the HarperDB configuration on the node you are connecting with.
+
+`host` - the host of the remote instance you are creating the connection with. This can be an IP address or a URL.
+
+Routes are set in the `harperdb-config.yaml` file using the `clustering.hubServer.cluster.network.routes` element, which expects an object array, where each object has two properties, `port` and `host`.
+
+```yaml
+clustering:
+ hubServer:
+ cluster:
+ network:
+ routes:
+ - host: 3.62.184.22
+ port: 9932
+ - host: 3.735.184.8
+ port: 9932
+```
+
+
+
+This diagram shows one way of using routes to connect a network of nodes. Node2 and Node3 do not reference any routes in their config. Node1 contains routes for Node2 and Node3, which is enough to establish a network between all three nodes.
+
+There are multiple ways to set routes:
+
+1. Directly editing the `harperdb-config.yaml` file (refer to code snippet above).
+1. Calling `cluster_set_routes` through the API.
+
+```json
+{
+ "operation": "cluster_set_routes",
+ "server": "hub",
+ "routes":[ {"host": "3.735.184.8", "port": 9932} ]
+}
+```
+
+_Note: When making any changes to HarperDB configuration, HarperDB must be restarted for the changes to take effect._
+
+1. From the command line.
+
+```bash
+--CLUSTERING_HUBSERVER_CLUSTER_NETWORK_ROUTES "[{\"host\": \"3.735.184.8\", \"port\": 9932}]"
+```
+
+1. Using environment variables.
+
+```bash
+CLUSTERING_HUBSERVER_CLUSTER_NETWORK_ROUTES=[{"host": "3.735.184.8", "port": 9932}]
+```
+
+The API also has `cluster_get_routes` for getting all routes in the config and `cluster_delete_routes` for deleting routes.
+
+```json
+{
+ "operation": "cluster_delete_routes",
+ "routes":[ {"host": "3.735.184.8", "port": 9932} ]
+}
+```
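+
+For reference, a `cluster_get_routes` request typically needs only the operation name; a minimal sketch:
+
+```json
+{
+  "operation": "cluster_get_routes"
+}
+```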
diff --git a/site/versioned_docs/version-4.1/clustering/index.md b/site/versioned_docs/version-4.1/clustering/index.md
new file mode 100644
index 00000000..7bde63a2
--- /dev/null
+++ b/site/versioned_docs/version-4.1/clustering/index.md
@@ -0,0 +1,40 @@
+---
+title: Clustering
+---
+
+# Clustering
+
+HarperDB clustering is the process of connecting multiple HarperDB databases together to create a database mesh network that enables users to define data replication patterns.
+
+HarperDB’s clustering engine replicates data between instances of HarperDB using a highly performant, bi-directional pub/sub model on a per-table basis. Data replicates asynchronously with eventual consistency across the cluster following the defined pub/sub configuration. Individual transactions are sent in the order in which they were transacted; once received by the destination instance, they are processed in an ACID-compliant manner. Conflict resolution follows a last-writer-wins model based on the transaction time recorded on the transaction and the timestamp on the record on the destination node.
+
+---
+### Common Use Case
+
+A common use case is an edge application collecting and analyzing sensor data that creates an alert if a sensor value exceeds a given threshold:
+
+* The edge application should not be making outbound http requests for security purposes.
+
+* There may not be a reliable network connection.
+
+* Not all sensor data will be sent to the cloud, whether because of the unreliable network connection or simply because storing all of it there isn’t worthwhile.
+
+* The edge node should be inaccessible from outside the firewall.
+
+* The edge node will send alerts to the cloud with a snippet of sensor data containing the offending sensor readings.
+
+
+HarperDB simplifies the architecture of such an application with its bi-directional, table-level replication:
+
+* The edge instance subscribes to a “thresholds” table on the cloud instance, so the application only makes localhost calls to get the thresholds.
+
+* The application continually pushes sensor data into a “sensor_data” table via the localhost API, comparing it to the threshold values as it does so.
+
+* When a threshold violation occurs, the application adds a record to the “alerts” table.
+
+* The application appends to that record an array of “sensor_data” entries covering the 60 seconds (or minutes, or days) leading up to the threshold violation.
+
+* The edge instance publishes the “alerts” table up to the cloud instance.
+
+
+By letting HarperDB focus on the fault-tolerant logistics of transporting your data, you get to write less code. By moving data only when and where it’s needed, you lower storage and bandwidth costs. And by restricting your app to only making local calls to HarperDB, you reduce the overall exposure of your application to outside forces.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/clustering/managing-subscriptions.md b/site/versioned_docs/version-4.1/clustering/managing-subscriptions.md
new file mode 100644
index 00000000..a1f8c56e
--- /dev/null
+++ b/site/versioned_docs/version-4.1/clustering/managing-subscriptions.md
@@ -0,0 +1,168 @@
+---
+title: Managing subscriptions
+---
+
+# Managing subscriptions
+
+Subscriptions can be added, updated, or removed through the API.
+
+_Note: The schema and tables in the subscription must exist on at least one of the nodes, local or remote. Any schema and tables that do not exist on one of the nodes will be automatically created on that node._
+
+To add a single node and create one or more subscriptions use `add_node`.
+
+```json
+{
+ "operation": "add_node",
+ "node_name": "Node2",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "publish": false,
+ "subscribe": true
+ },
+ {
+ "schema": "dev",
+ "table": "chicken",
+ "publish": true,
+ "subscribe": true
+ }
+ ]
+}
+```
+
+This is an example of adding Node2 to your local node. Subscriptions are created for two tables, dog and chicken.
+
+To update one or more subscriptions with a single node use `update_node`.
+
+```json
+{
+ "operation": "update_node",
+ "node_name": "Node2",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "publish": true,
+ "subscribe": true
+ }
+ ]
+}
+```
+
+This call will update the subscription with the dog table. Any other subscriptions with Node2 will not change.
+
+To add or update subscriptions with one or more nodes in one API call use `configure_cluster`.
+
+```json
+{
+ "operation": "configure_cluster",
+ "connections": [
+ {
+ "node_name": "Node2",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "chicken",
+ "publish": false,
+ "subscribe": true
+ },
+ {
+ "schema": "prod",
+ "table": "dog",
+ "publish": true,
+ "subscribe": true
+ }
+ ]
+ },
+ {
+ "node_name": "Node3",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "chicken",
+ "publish": true,
+ "subscribe": false
+ }
+ ]
+ }
+ ]
+}
+```
+
+_Note: `configure_cluster` will override **any and all** existing subscriptions defined on the local node. This means that before going through the connections in the request and adding the subscriptions, it will first go through **all existing subscriptions the local node has** and remove them. To get all existing subscriptions use `cluster_status`._
+
+#### Start time
+
+There is an optional property called `start_time` that can be passed in the subscription. This property accepts an ISO formatted UTC date.
+
+`start_time` can be used to set the time from which you would like to source transactions from a table when creating or updating a subscription.
+
+```json
+{
+ "operation": "add_node",
+ "node_name": "Node2",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "publish": false,
+ "subscribe": true,
+ "start_time": "2022-09-02T20:06:35.993Z"
+ }
+ ]
+}
+```
+
+This example will get all transactions on Node2’s dog table starting from `2022-09-02T20:06:35.993Z` and replicate them locally on the dog table.
+
+If no start time is passed it defaults to the current time.
+
+_Note: start time utilizes clustering to source past transactions. For this reason it can only source transactions that occurred while clustering was enabled._
+
+#### Remove node
+
+To remove a node and all its subscriptions use `remove_node`.
+
+```json
+{
+ "operation":"remove_node",
+ "node_name":"Node2"
+}
+```
+
+#### Cluster status
+
+To get the status of all connected nodes and see their subscriptions use `cluster_status`.
+
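+The request itself needs no parameters beyond the operation name:
+
+```json
+{
+  "operation": "cluster_status"
+}
+```
+
+A response will look something like this:
+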
+```json
+{
+ "node_name": "Node1",
+ "is_enabled": true,
+ "connections": [
+ {
+ "node_name": "Node2",
+ "status": "open",
+ "ports": {
+ "clustering": 9932,
+ "operations_api": 9925
+ },
+ "latency_ms": 65,
+ "uptime": "11m 19s",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "publish": true,
+ "subscribe": true
+ }
+ ],
+ "system_info": {
+ "hdb_version": "4.0.0",
+ "node_version": "16.17.1",
+ "platform": "linux"
+ }
+ }
+ ]
+}
+```
diff --git a/site/versioned_docs/version-4.1/clustering/naming-a-node.md b/site/versioned_docs/version-4.1/clustering/naming-a-node.md
new file mode 100644
index 00000000..d1ebdfb1
--- /dev/null
+++ b/site/versioned_docs/version-4.1/clustering/naming-a-node.md
@@ -0,0 +1,45 @@
+---
+title: Naming a Node
+---
+
+# Naming a Node
+
+Node name is the name given to a node. It is how nodes are identified within the cluster and must be unique to the cluster.
+
+The name cannot contain any of the following characters: dot (`.`), comma (`,`), asterisk (`*`), greater than (`>`), or whitespace.
+
+The name is set in the `harperdb-config.yaml` file using the `clustering.nodeName` configuration element.
+
+_Note: If you want to change the node name, make sure there are no subscriptions in place before doing so. After the name has been changed, a full restart is required._
+
+There are multiple ways to update this element:
+
+1. Directly editing the `harperdb-config.yaml` file.
+
+```yaml
+clustering:
+ nodeName: Node1
+```
+
+_Note: When making any changes to the `harperdb-config.yaml` file, HarperDB must be restarted for the changes to take effect._
+
+1. Calling `set_configuration` through the operations API
+
+```json
+{
+ "operation": "set_configuration",
+ "clustering_nodeName":"Node1"
+}
+```
+
+1. Using command line variables.
+
+```
+harperdb --CLUSTERING_NODENAME Node1
+```
+
+1. Using environment variables.
+
+```
+CLUSTERING_NODENAME=Node1
+```
diff --git a/site/versioned_docs/version-4.1/clustering/requirements-and-definitions.md b/site/versioned_docs/version-4.1/clustering/requirements-and-definitions.md
new file mode 100644
index 00000000..1e2dd6af
--- /dev/null
+++ b/site/versioned_docs/version-4.1/clustering/requirements-and-definitions.md
@@ -0,0 +1,11 @@
+---
+title: Requirements and Definitions
+---
+
+# Requirements and Definitions
+
+To create a cluster you must have two or more nodes\* (aka instances) of HarperDB running.
+
+\*_A node is a single instance/installation of HarperDB. A node of HarperDB can operate independently with clustering on or off._
+
+On the following pages we'll walk you through the steps required, in order, to set up a HarperDB cluster.
diff --git a/site/versioned_docs/version-4.1/clustering/subscription-overview.md b/site/versioned_docs/version-4.1/clustering/subscription-overview.md
new file mode 100644
index 00000000..76292f4a
--- /dev/null
+++ b/site/versioned_docs/version-4.1/clustering/subscription-overview.md
@@ -0,0 +1,45 @@
+---
+title: Subscriptions
+---
+
+# Subscriptions
+
+A subscription defines how data should move between two nodes. Subscriptions are exclusively table level and operate independently. Each connects a table on one node to a table on another node; the subscription will apply to the matching schema name and table name on both nodes.
+
+_Note: ‘local’ and ‘remote’ will often be referred to. In the context of these docs, ‘local’ is the node that receives the API request to create or update a subscription, and ‘remote’ is the other node referred to in that request, i.e. the node on the other end of the subscription._
+
+A subscription consists of:
+
+`schema` - the name of the schema that the table you are creating the subscription for belongs to.
+
+`table` - the name of the table the subscription will apply to.
+
+`publish` - a boolean which determines if transactions on the local table should be replicated on the remote table.
+
+`subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table.
+
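+Putting these fields together, a single subscription entry, as passed to operations like `add_node`, looks like this:
+
+```json
+{
+  "schema": "dev",
+  "table": "dog",
+  "publish": false,
+  "subscribe": true
+}
+```
+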
+#### Publish subscription
+
+
+
+This diagram is an example of a `publish` subscription from the perspective of Node1.
+
+The record with id 2 has been inserted in the dog table on Node1; after that insert completes, the record is sent to Node2 and inserted in the dog table there.
+
+#### Subscribe subscription
+
+
+
+This diagram is an example of a `subscribe` subscription from the perspective of Node1.
+
+The record with id 3 has been inserted in the dog table on Node2; after that insert completes, the record is sent to Node1 and inserted there.
+
+#### Subscribe and Publish
+
+
+
+This diagram shows both subscribe and publish, with publish set to false. Because subscribe is true, the insert on Node2 is replicated on Node1, but because publish is set to false, the insert on Node1 is _**not**_ replicated on Node2.
+
+
+
+This shows both subscribe and publish set to true. The insert on Node1 is replicated on Node2 and the update on Node2 is replicated on Node1.
diff --git a/site/versioned_docs/version-4.1/clustering/things-worth-knowing.md b/site/versioned_docs/version-4.1/clustering/things-worth-knowing.md
new file mode 100644
index 00000000..cb01b8b8
--- /dev/null
+++ b/site/versioned_docs/version-4.1/clustering/things-worth-knowing.md
@@ -0,0 +1,43 @@
+---
+title: Things worth Knowing
+---
+
+# Things worth Knowing
+
+Additional information that will help you define your clustering topology.
+
+***
+
+### Transactions
+
+Transactions that are replicated across the cluster are:
+
+* Insert
+* Update
+* Upsert
+* Delete
+* Bulk loads
+ * CSV data load
+ * CSV file load
+ * CSV URL load
+ * Import from S3
+
+When adding or updating a node any schemas and tables in the subscription that don’t exist on the remote node will be automatically created.
+
+**Destructive schema operations do not replicate across a cluster**. Those operations include `drop_schema`, `drop_table`, and `drop_attribute`. If the desired outcome is to drop schema information from multiple nodes, the operation(s) will need to be run on each node independently, as in the sketch below.
+
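+For example, to drop a table from every node, an operation like this would be run against each node in turn (schema and table names are illustrative):
+
+```json
+{
+  "operation": "drop_table",
+  "schema": "dev",
+  "table": "dog"
+}
+```
+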
+Users and roles are not replicated across the cluster.
+
+***
+
+### Queueing
+
+HarperDB has built-in resiliency for when network connectivity is lost within a subscription. When connections are reestablished, a catchup routine is executed to ensure data that was missed, specific to the subscription, is sent/received as defined.
+
+***
+
+### Topologies
+
+HarperDB clustering creates a mesh network between nodes, giving end users the ability to create an infinite number of topologies. Subscription topologies can be as simple or as complex as needed.
+
+
diff --git a/site/versioned_docs/version-4.1/configuration.md b/site/versioned_docs/version-4.1/configuration.md
new file mode 100644
index 00000000..2079c9fe
--- /dev/null
+++ b/site/versioned_docs/version-4.1/configuration.md
@@ -0,0 +1,785 @@
+---
+title: Configuration File
+---
+
+# Configuration File
+
+HarperDB is configured through a [YAML](https://yaml.org/) file called `harperdb-config.yaml` located in the operations API root directory (by default this is a directory named `hdb` located in the home directory of the current user).
+
+All available configuration options will be populated by default in the config file on install, regardless of whether they are used.
+
+---
+
+## Using the Configuration File and Naming Conventions
+
+The configuration elements in `harperdb-config.yaml` use camel case: `operationsApi`.
+
+To change a configuration value, edit the `harperdb-config.yaml` file and save any changes. HarperDB must be restarted for changes to take effect.
+
+Alternately, configuration can be changed via environment and/or command line variables or via the API. To access lower level elements, use underscores to append parent/child elements (when used this way elements are case insensitive):
+
+ - Environment variables: `OPERATIONSAPI_NETWORK_PORT=9925`
+ - Command line variables: `--OPERATIONSAPI_NETWORK_PORT 9925`
+ - Calling `set_configuration` through the API: `operationsApi_network_port: 9925` (full request shown below)
+
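+As a sketch, the same change made through a `set_configuration` call would look like this:
+
+```json
+{
+  "operation": "set_configuration",
+  "operationsApi_network_port": 9925
+}
+```
+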
+---
+
+## Configuration Options
+
+### `clustering`
+
+The `clustering` section configures the clustering engine, which is used to replicate data between instances of HarperDB.
+
+Clustering offers a lot of different configuration options; however, in the majority of cases the only ones you will need to pay attention to are listed below (a combined example follows the list):
+
+- `clustering.enabled` Enable the clustering processes.
+- `clustering.hubServer.cluster.network.port` The port other nodes will connect to. This port must be accessible from other cluster nodes.
+- `clustering.hubServer.cluster.network.routes` The connections to other instances.
+- `clustering.nodeName` The name of your node, must be unique within the cluster.
+- `clustering.user` The name of the user credentials used for Inter-node authentication.
+
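+A combined sketch of just these options (node name, user, host, and port values are illustrative):
+
+```yaml
+clustering:
+  enabled: true
+  nodeName: Node1
+  user: cluster_account
+  hubServer:
+    cluster:
+      network:
+        port: 9932
+        routes:
+          - host: 3.62.184.22
+            port: 9932
+```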
+
+`enabled` - _Type_: boolean; _Default_: false
+
+Enable clustering.
+
+_Note: If you enable clustering but do not create and add a cluster user you will get a validation error. See the `user` description below on how to add a cluster user._
+
+```yaml
+clustering:
+ enabled: true
+```
+
+`clustering.hubServer.cluster`
+
+Clustering’s `hubServer` facilitates the HarperDB mesh network and discovery service.
+
+```yaml
+clustering:
+ hubServer:
+ cluster:
+ name: harperdb
+ network:
+ port: 9932
+ routes:
+ - host: 3.62.184.22
+ port: 9932
+ - host: 3.735.184.8
+ port: 9932
+```
+
+`name` - _Type_: string; _Default_: harperdb
+
+The name of your cluster. This name needs to be consistent for all other nodes intended to be meshed in the same network.
+
+
+
+`port` - _Type_: integer; _Default_: 9932
+
+The port the hub server uses to accept cluster connections.
+
+`routes` - _Type_: array; _Default_: null
+
+An object array that represents the host and port this server will cluster to. Each object must have two properties, `port` and `host`. Multiple entries can be added to create network resiliency in the event one server is unavailable. Routes can be added, updated and removed either by directly editing the `harperdb-config.yaml` file or by using the `cluster_set_routes` or `cluster_delete_routes` API endpoints.
+
+
+
+
+`host` - _Type_: string
+
+The host of the remote instance you are creating the connection with.
+
+`port` - _Type_: integer
+
+The port of the remote instance you are creating the connection with. This is likely going to be the `clustering.hubServer.cluster.network.port` on the remote instance.
+
+
+
+`clustering.hubServer.leafNodes`
+
+```yaml
+clustering:
+ hubServer:
+ leafNodes:
+ network:
+ port: 9931
+```
+
+`port` - _Type_: integer; _Default_: 9931
+
+The port the hub server uses to accept leaf server connections.
+
+`clustering.hubServer.network`
+
+```yaml
+clustering:
+ hubServer:
+ network:
+ port: 9930
+```
+
+`port` - _Type_: integer; _Default_: 9930
+
+Use this port to connect a client to the hub server, for example using the NATS SDK to interact with the server.
+
+`clustering.leafServer`
+
+Manages streams. Streams are ‘message stores’ that hold table transactions.
+
+```yaml
+clustering:
+ leafServer:
+ network:
+ port: 9940
+ routes:
+ - host: 3.62.184.22
+ port: 9931
+ - host: node3.example.com
+ port: 9931
+ streams:
+ maxAge: 3600
+ maxBytes: 10000000
+ maxMsgs: 500
+ path: /user/hdb/clustering/leaf
+```
+
+`port` - _Type_: integer; _Default_: 9940
+
+Use this port to connect a client to the leaf server, for example using the NATS SDK to interact with the server.
+
+`routes` - _Type_: array; _Default_: null
+
+An object array that represents the host and port the leaf node will directly connect with. Each object must have two properties, `port` and `host`. Unlike the hub server, the leaf server will establish connections to all listed hosts. Routes can be added, updated and removed either by directly editing the `harperdb-config.yaml` file or by using the `cluster_set_routes` or `cluster_delete_routes` API endpoints.
+
+
+
+`host` - _Type_: string
+
+The host of the remote instance you are creating the connection with.
+
+`port` - _Type_: integer
+
+The port of the remote instance you are creating the connection with. This is likely going to be the `clustering.hubServer.cluster.network.port` on the remote instance.
+
+
+
+
+`clustering.leafServer.streams`
+
+`maxAge` - _Type_: integer; _Default_: null
+
+The maximum age of any messages in the stream, expressed in seconds.
+
+`maxBytes` - _Type_: integer; _Default_: null
+
+The maximum size of the stream in bytes. Oldest messages are removed if the stream exceeds this size.
+
+`maxMsgs` - _Type_: integer; _Default_: null
+
+How many messages may be in a stream. Oldest messages are removed if the stream exceeds this number.
+
+`path` - _Type_: string; _Default_: <ROOTPATH>/clustering/leaf
+
+The directory where all the streams are kept.
+
+---
+`logLevel` - _Type_: string; _Default_: error
+
+Control the verbosity of clustering logs.
+
+```yaml
+clustering:
+ logLevel: error
+```
+
+The log level hierarchy, in order, is `trace`, `debug`, `info`, `warn`, and `error`. When the level is set to `trace`, logs will be created for all possible levels, whereas if the level is set to `warn`, the only entries logged will be `warn` and `error`. The default value is `error`.
+
+
+`nodeName` - _Type_: string; _Default_: null
+
+The name of this node in your HarperDB cluster topology. This must be a value unique from the rest of the cluster node names.
+
+_Note: If you want to change the node name, make sure there are no subscriptions in place before doing so. After the name has been changed, a full restart is required._
+
+```yaml
+clustering:
+ nodeName: great_node
+```
+
+`tls`
+
+Transport Layer Security default values are automatically generated on install.
+
+```yaml
+clustering:
+ tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+ insecure: true
+ verify: true
+```
+
+`certificate` - _Type_: string; _Default_: <ROOTPATH>/keys/certificate.pem
+
+Path to the certificate file.
+
+`certificateAuthority` - _Type_: string; _Default_: <ROOTPATH>/keys/ca.pem
+
+Path to the certificate authority file.
+
+`privateKey` - _Type_: string; _Default_: <ROOTPATH>/keys/privateKey.pem
+
+Path to the private key file.
+
+`insecure` - _Type_: boolean; _Default_: true
+
+When true, will skip certificate verification. For use only with self-signed certs.
+
+`republishMessages` - _Type_: boolean; _Default_: true
+
+When true, all transactions that are received from other nodes are republished to this node's stream. When subscriptions are not fully connected between all nodes, this ensures that messages are routed to all nodes through intermediate nodes. This also ensures that all writes, whether local or remote, are written to the NATS transaction log. However, there is additional overhead with republishing, and setting this to false can provide better data replication performance. When false, you need to ensure all subscriptions are fully connected between every node and every other node, and be aware that the NATS transaction log will only consist of local writes.
+
+`verify` - _Type_: boolean; _Default_: true
+
+When true, the hub server will verify client certificates using the CA certificate.
+
+---
+
+`user` - _Type_: string; _Default_: null
+
+The username given to the `cluster_user`. All instances in a cluster must use the same clustering user credentials (matching username and password).
+
+Inter-node authentication takes place via a special HarperDB user role type called `cluster_user`.
+
+The user can be created either through the API using an `add_user` request with the role set to `cluster_user`, or on install using environment variables (`CLUSTERING_USER=cluster_person CLUSTERING_PASSWORD=pass123!`) or CLI variables (`harperdb --CLUSTERING_USER cluster_person --CLUSTERING_PASSWORD pass123!`).
+
+```yaml
+clustering:
+ user: cluster_person
+```
+
+---
+
+
+### `customFunctions`
+
+The `customFunctions` section configures HarperDB Custom Functions.
+
+`enabled` - _Type_: boolean; _Default_: true
+
+Whether or not to enable the Custom Functions server.
+
+```yaml
+customFunctions:
+ enabled: true
+```
+
+`customFunctions.network`
+
+```yaml
+customFunctions:
+ network:
+ cors: true
+ corsAccessList:
+ - null
+ headersTimeout: 60000
+ https: false
+ keepAliveTimeout: 5000
+ port: 9926
+ timeout: 120000
+```
+
+
+
+`cors` - _Type_: boolean; _Default_: true
+
+Enable Cross Origin Resource Sharing, which allows requests across a domain.
+
+`corsAccessList` - _Type_: array; _Default_: null
+
+An array of domains allowed to make CORS requests.
+
+`headersTimeout` - _Type_: integer; _Default_: 60,000 milliseconds (1 minute)
+
+Limits the amount of time the parser will wait to receive the complete HTTP headers.
+
+`https` - _Type_: boolean; _Default_: false
+
+Enables HTTPS on the Custom Functions API. This requires a valid certificate and key. If `false`, Custom Functions will run using standard HTTP.
+
+`keepAliveTimeout` - _Type_: integer; _Default_: 5,000 milliseconds (5 seconds)
+
+Sets the number of milliseconds of inactivity the server needs to wait for additional incoming data after it has finished processing the last response.
+
+`port` - _Type_: integer; _Default_: 9926
+
+The port used to access the Custom Functions server.
+
+`timeout` - _Type_: integer; _Default_: 120,000 milliseconds (2 minutes)
+
+The length of time in milliseconds after which a request will timeout.
+
+
+`nodeEnv` - _Type_: string; _Default_: production
+
+Allows you to specify the node environment in which the application will run.
+
+```yaml
+customFunctions:
+ nodeEnv: production
+```
+
+- `production` native node logging is kept to a minimum; more caching to optimize performance. This is the default value.
+- `development` more native node logging; less caching.
+
+`root` - _Type_: string; _Default_: <ROOTPATH>/custom_functions
+
+The path to the folder containing Custom Function files.
+
+```yaml
+customFunctions:
+ root: ~/hdb/custom_functions
+```
+
+`tls`
+
+Transport Layer Security.
+
+```yaml
+customFunctions:
+ tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+```
+
+`certificate` - _Type_: string; _Default_: <ROOTPATH>/keys/certificate.pem
+
+Path to the certificate file.
+
+`certificateAuthority` - _Type_: string; _Default_: <ROOTPATH>/keys/ca.pem
+
+Path to the certificate authority file.
+
+`privateKey` - _Type_: string; _Default_: <ROOTPATH>/keys/privateKey.pem
+
+Path to the private key file.
+
+
+---
+
+
+### `ipc`
+
+The `ipc` section configures the HarperDB Inter-Process Communication interface.
+
+```yaml
+ipc:
+ network:
+ port: 9383
+```
+
+`port` - _Type_: integer; _Default_: 9383
+
+The port the IPC server runs on. The default is `9383`.
+
+
+---
+
+
+### `localStudio`
+
+The `localStudio` section configures the local HarperDB Studio, a simplified GUI for HarperDB hosted on the server. A more comprehensive GUI is hosted by HarperDB at https://studio.harperdb.io. Note, all database traffic from either `localStudio` or HarperDB Studio is made directly from your browser to the instance.
+
+`enabled` - _Type_: boolean; _Default_: false
+
+Whether or not to enable the local studio.
+
+```yaml
+localStudio:
+ enabled: false
+```
+
+---
+
+
+### `logging`
+
+The `logging` section configures HarperDB logging across all HarperDB functionality. HarperDB leverages pm2 for logging. Each process group gets its own log file, which is located in `logging.root`.
+
+`auditLog` - _Type_: boolean; _Default_: false
+
+Enables table transaction logging.
+
+```yaml
+logging:
+ auditLog: false
+```
+
+To access the audit logs, use the API operation `read_audit_log`. It will provide a history of the data, including original records and changes made, in a specified table.
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog"
+}
+```
+`file` - _Type_: boolean; _Default_: true
+
+Defines whether or not to log to a file.
+
+```yaml
+logging:
+ file: true
+```
+
+`level` - _Type_: string; _Default_: error
+
+Control the verbosity of logs.
+
+```yaml
+logging:
+ level: error
+```
+The log level hierarchy, in order, is `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`. When the level is set to `trace`, logs will be created for all possible levels, whereas if the level is set to `fatal`, the only entries logged will be `fatal` and `notify`. The default value is `error`.
+
+`root` - _Type_: string; _Default_: <ROOTPATH>/log
+
+The path where the log files will be written.
+
+```yaml
+logging:
+ root: ~/hdb/log
+```
+
+`rotation`
+
+Rotation provides the ability for a user to systematically rotate and archive the `hdb.log` file. To enable rotation, `interval` and/or `maxSize` must be set.
+
+**_Note:_** `interval` and `maxSize` are approximates only. It is possible that the log file will exceed these values slightly before it is rotated.
+
+```yaml
+logging:
+ rotation:
+ enabled: true
+ compress: false
+ interval: 1D
+ maxSize: 100K
+ path: /user/hdb/log
+```
+
+
+`enabled` - _Type_: boolean; _Default_: false
+
+Enables logging rotation.
+
+`compress` - _Type_: boolean; _Default_: false
+
+Enables compression via gzip when logs are rotated.
+
+`interval` - _Type_: string; _Default_: null
+
+The time that should elapse between rotations. Acceptable units are D(ays), H(ours) or M(inutes).
+
+`maxSize` - _Type_: string; _Default_: null
+
+The maximum size the log file can reach before it is rotated. Must use units M(egabyte), G(igabyte), or K(ilobyte).
+
+`path` - _Type_: string; _Default_: <ROOTPATH>/log
+
+Where to store the rotated log file. File naming convention is `HDB-YYYY-MM-DDT-HH-MM-SSSZ.log`.
+
+
+
+
+`stdStreams` - _Type_: boolean; _Default_: false
+
+Log HarperDB logs to the standard output and error streams. The `operationsApi.foreground` flag must be enabled in order to receive the stream.
+
+```yaml
+logging:
+ stdStreams: false
+```
+
+---
+
+
+### `operationsApi`
+
+The `operationsApi` section configures the HarperDB Operations API.
+
+`authentication`
+
+```yaml
+operationsApi:
+ authentication:
+ operationTokenTimeout: 1d
+ refreshTokenTimeout: 30d
+```
+
+
+
+`operationTokenTimeout` - _Type_: string; _Default_: 1d
+
+Defines the length of time an operation token will be valid until it expires. Example values: https://github.com/vercel/ms.
+
+`refreshTokenTimeout` - _Type_: string; _Default_: 30d
+
+Defines the length of time a refresh token will be valid until it expires. Example values: https://github.com/vercel/ms.
+
+
+`foreground` - _Type_: boolean; _Default_: false
+
+Determines whether or not HarperDB runs in the foreground.
+
+```yaml
+operationsApi:
+ foreground: false
+```
+
+`network`
+
+```yaml
+operationsApi:
+ network:
+ cors: true
+ corsAccessList:
+ - null
+ headersTimeout: 60000
+ https: false
+ keepAliveTimeout: 5000
+ port: 9925
+ timeout: 120000
+```
+
+
+`cors` - _Type_: boolean; _Default_: true
+
+Enable Cross Origin Resource Sharing, which allows requests across a domain.
+
+`corsAccessList` - _Type_: array; _Default_: null
+
+An array of domains allowed to make CORS requests.
+
+`headersTimeout` - _Type_: integer; _Default_: 60,000 milliseconds (1 minute)
+
+Limits the amount of time the parser will wait to receive the complete HTTP headers.
+
+`https` - _Type_: boolean; _Default_: false
+
+Enable HTTPS on the HarperDB operations endpoint. This requires a valid certificate and key. If `false`, HarperDB will run using standard HTTP.
+
+`keepAliveTimeout` - _Type_: integer; _Default_: 5,000 milliseconds (5 seconds)
+
+Sets the number of milliseconds of inactivity the server needs to wait for additional incoming data after it has finished processing the last response.
+
+`port` - _Type_: integer; _Default_: 9925
+
+The port the HarperDB operations API interface will listen on.
+
+`timeout` - _Type_: integer; _Default_: 120,000 milliseconds (2 minutes)
+
+The length of time in milliseconds after which a request will timeout.
+
+
+
+`nodeEnv` - _Type_: string; _Default_: production
+
+Allows you to specify the node environment in which the application will run.
+
+```yaml
+operationsApi:
+ nodeEnv: production
+```
+
+- `production` native node logging is kept to a minimum; more caching to optimize performance. This is the default value.
+- `development` more native node logging; less caching.
+
+`tls`
+
+This configures the Transport Layer Security for HTTPS support.
+
+```yaml
+operationsApi:
+ tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+```
+
+`certificate` - _Type_: string; _Default_: <ROOTPATH>/keys/certificate.pem
+
+Path to the certificate file.
+
+`certificateAuthority` - _Type_: string; _Default_: <ROOTPATH>/keys/ca.pem
+
+Path to the certificate authority file.
+
+`privateKey` - _Type_: string; _Default_: <ROOTPATH>/keys/privateKey.pem
+
+Path to the private key file.
+
+---
+
+### `http`
+
+`threads` - _Type_: number; _Default_: One less than the number of logical cores/processors
+
+The `threads` option specifies the number of threads that will be used to service the HTTP requests for the operations API and custom functions. Generally, this should be close to the number of CPU logical cores/processors to ensure the CPU is fully utilized (a little less because HarperDB does have other threads at work), assuming HarperDB is the main service on a server.
+
+```yaml
+http:
+ threads: 11
+```
+
+`sessionAffinity` - _Type_: string; _Default_: null
+
+HarperDB is a multi-threaded server designed to scale to utilize many CPU cores with high concurrency. Session affinity can help improve the efficiency and fairness of thread utilization by routing multiple requests from the same client to the same thread. This provides a fairer method of request handling by keeping a single user contained to a single thread, can improve caching locality (multiple requests from a single user are more likely to access the same data), and can provide the ability to share information in-memory in user sessions. Enabling session affinity will cause subsequent requests from the same client to be routed to the same thread.
+
+To enable `sessionAffinity`, you need to specify how clients will be identified from the incoming requests. If you are using HarperDB to directly serve HTTP requests from users at different remote addresses, you can use a setting of `ip`. However, if you are using HarperDB behind a proxy server or application server, all the remote IP addresses will be the same and HarperDB will effectively only run on a single thread. Alternately, you can specify a header to use for identification. If you are using basic authentication, you could use the "Authorization" header to route requests to threads by the user's credentials. If you have another header that uniquely identifies users/clients, you can use that as the value of sessionAffinity. But be careful to ensure that the value provides sufficient uniqueness and that requests are effectively distributed to all the threads, fully utilizing all your CPU cores.
+```yaml
+http:
+ sessionAffinity: ip
+```
+
+---
+
+### `rootPath`
+
+`rootPath` - _Type_: string; _Default_: home directory of the current user
+
+The HarperDB database and applications/API/interface are decoupled from each other. The `rootPath` directory specifies where the HarperDB application persists data, config, logs, and Custom Functions.
+
+```yaml
+rootPath: /Users/jonsnow/hdb
+```
+
+---
+
+### `storage`
+
+`writeAsync` - _Type_: boolean; _Default_: false
+
+The `writeAsync` option turns off disk flushing/syncing, allowing for faster write operation throughput. However, this does not provide storage integrity guarantees, and if a server crashes, it is possible that there may be data loss requiring restore from another backup/another node.
+
+```yaml
+storage:
+ writeAsync: false
+```
+
+`caching` - _Type_: boolean; _Default_: true
+
+The `caching` option enables in-memory caching of records, providing faster access to frequently accessed objects. This can incur some extra overhead for situations where reads are extremely random and don't benefit from caching.
+
+```yaml
+storage:
+ caching: true
+```
+
+
+`compression` - _Type_: boolean; _Default_: false
+
+The `compression` option enables compression of records in the database. This can be helpful for very large databases in reducing storage requirements and potentially allowing more data to be cached. This uses the very fast LZ4 compression algorithm, but this still incurs extra costs for compressing and decompressing.
+
+```yaml
+storage:
+ compression: false
+```
+
+
+`noReadAhead` - _Type_: boolean; _Default_: true
+
+The `noReadAhead` option advises the operating system to not read ahead when reading from the database. This provides better memory utilization, except in situations where large records are used or frequent range queries are used.
+
+```yaml
+storage:
+ noReadAhead: true
+```
+
+
+`prefetchWrites` - _Type_: boolean; _Default_: true
+
+The `prefetchWrites` option loads data prior to write transactions. This should be enabled for databases that are larger than memory (although it can be faster to disable this for smaller databases).
+
+```yaml
+storage:
+ prefetchWrites: true
+```
+
+
+`path` - _Type_: string; _Default_: `/schema`
+
+The `path` configuration sets where all database files should reside.
+
+```yaml
+storage:
+ path: /users/harperdb/storage
+```
+
+**_Note:_** This configuration applies to all database files, which includes system tables that are used internally by HarperDB. For this reason, if you wish to use a non-default `path` value you must move any existing schemas into your `path` location. Existing schemas are likely to include the system schema, which can be found at `/schema/system`.
+
+---
+
+### `schemas`
+
+The `schemas` section is an optional configuration that can be used to define where database files should reside down to the table level.
+
+This configuration should be set before the schema and table have been created.
+
+The configuration will not create the directories in the path; that must be done by the user.
+
+
+To define where a schema and all its tables should reside use the name of your schema and the `path` parameter.
+
+```yaml
+schemas:
+ nameOfSchema:
+ path: /path/to/schema
+```
+
+To define where specific tables within a schema should reside use the name of your schema, the `tables` parameter, the name of your table and the `path` parameter.
+
+```yaml
+schemas:
+ nameOfSchema:
+ tables:
+ nameOfTable:
+ path: /path/to/table
+```
+
+This same pattern can be used to define where the audit log database files should reside. To do this use the `auditPath` parameter.
+
+```yaml
+schemas:
+ nameOfSchema:
+ auditPath: /path/to/schema
+```
+
+
+**Setting the schemas section through the command line, environment variables or API**
+
+When using command line variables, environment variables, or the API to configure the schemas section, a slightly different convention from the regular one should be used. To add one or more configurations, use a JSON object array.
+
+Using command line variables:
+```bash
+--SCHEMAS [{\"nameOfSchema\":{\"tables\":{\"nameOfTable\":{\"path\":\"\/path\/to\/table\"}}}}]
+```
+
+Using environment variables:
+```bash
+SCHEMAS=[{"nameOfSchema":{"tables":{"nameOfTable":{"path":"/path/to/table"}}}}]
+```
+
+Using the API:
+```json
+{
+ "operation": "set_configuration",
+ "schemas": [{
+ "nameOfSchema": {
+ "tables": {
+ "nameOfTable": {
+ "path": "/path/to/table"
+ }
+ }
+ }
+ }]
+}
+```
diff --git a/site/versioned_docs/version-4.1/custom-functions/create-project.md b/site/versioned_docs/version-4.1/custom-functions/create-project.md
new file mode 100644
index 00000000..9e856975
--- /dev/null
+++ b/site/versioned_docs/version-4.1/custom-functions/create-project.md
@@ -0,0 +1,40 @@
+---
+title: Create a Project
+---
+
+# Create a Project
+
+To create a project using our web-based GUI, HarperDB Studio, check out how to manage Custom Functions [here](../harperdb-studio/manage-functions).
+
+Otherwise, to create a project, you have the following options:
+
+1. **Use the add\_custom\_function\_project operation**
+
+ This operation creates a new project folder, and populates it with templates for the routes, helpers, and static subfolders.
+
+```json
+{
+ "operation": "add_custom_function_project",
+ "project": "dogs"
+}
+```
+
+1. **Clone our public GitHub project template**
+
+ _This requires a local installation. Remove the .git directory for a clean slate of git history._
+
+```bash
+> git clone https://github.com/HarperDB/harperdb-custom-functions-template.git ~/hdb/custom_functions/dogs
+```
+
+1. **Create a project folder in your Custom Functions root directory** and **initialize**
+
+ _This requires a local installation._
+
+```bash
+> mkdir ~/hdb/custom_functions/dogs
+```
+
+```bash
+> npm init
+```
diff --git a/site/versioned_docs/version-4.1/custom-functions/custom-functions-operations.md b/site/versioned_docs/version-4.1/custom-functions/custom-functions-operations.md
new file mode 100644
index 00000000..490a730f
--- /dev/null
+++ b/site/versioned_docs/version-4.1/custom-functions/custom-functions-operations.md
@@ -0,0 +1,47 @@
+---
+title: Custom Functions Operations
+---
+
+# Custom Functions Operations
+
+One way to manage Custom Functions is through [HarperDB Studio](../harperdb-studio/). It performs all the necessary operations automatically. To get started, navigate to your instance in HarperDB Studio and click the subnav link for “functions”. If you have not yet enabled Custom Functions, it will walk you through the process. Once configuration is complete, you can manage and deploy Custom Functions in minutes.
+
+HarperDB Studio manages your Custom Functions using nine HarperDB operations. You may view these operations within our [API Docs](https://api.harperdb.io/). A brief overview of each of the operations is below, followed by a minimal example call:
+
+
+
+* **custom_functions_status**
+
+ Returns the state of the Custom Functions server. This includes whether it is enabled, upon which port it is listening, and where its root project directory is located on the host machine.
+
+* **get_custom_functions**
+
+ Returns an array of projects within the Custom Functions root project directory. Each project has details including each of the files in the **routes** and **helpers** directories, and the total file count in the **static** folder.
+
+* **get_custom_function**
+
+ Returns the content of the specified file as text. HarperDB Studio uses this call to render the file content in its built-in code editor.
+
+* **set_custom_function**
+
+ Updates the content of the specified file. HarperDB Studio uses this call to save any changes made through its built-in code editor.
+
+* **drop_custom_function**
+
+ Deletes the specified file.
+
+* **add_custom_function_project**
+
+    Creates a new project folder in the Custom Functions root project directory. It also inserts into the new directory the contents of our Custom Functions Project template, which is available publicly here: https://github.com/HarperDB/harperdb-custom-functions-template.
+
+* **drop_custom_function_project**
+
+ Deletes the specified project folder and all of its contents.
+
+* **package_custom_function_project**
+
+    Creates a .tar file of the specified project folder, then reads it into a base64-encoded string and returns that string to the user.
+
+* **deploy_custom_function_project**
+
+    Takes the output of package_custom_function_project, decodes the base64-encoded string, reconstitutes the .tar file of your project folder, and extracts it to the Custom Functions root project directory.
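+
+As a minimal sketch, the simplest of these calls take only the operation name, for example:
+
+```json
+{
+  "operation": "custom_functions_status"
+}
+```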
diff --git a/site/versioned_docs/version-4.1/custom-functions/debugging-custom-function.md b/site/versioned_docs/version-4.1/custom-functions/debugging-custom-function.md
new file mode 100644
index 00000000..91d34bd6
--- /dev/null
+++ b/site/versioned_docs/version-4.1/custom-functions/debugging-custom-function.md
@@ -0,0 +1,102 @@
+---
+title: Debugging a Custom Function
+---
+
+# Debugging a Custom Function
+
+HarperDB Custom Functions projects are managed by HarperDB’s process manager. As such, it may seem more difficult to debug Custom Functions than your standard project. The goal of this document is to provide best practices and recommendations for debugging your Custom Function.
+
+
+
+For local debugging and development, it is recommended that you use standard console log statements for logging. For production use, you may want to use HarperDB's logging facilities, so you aren't logging to the console. The [HarperDB Custom Functions template](https://github.com/HarperDB/harperdb-custom-functions-template) includes the HarperDB logger module in the primary function parameters with the name `logger`. This logger can be used to output messages directly to the HarperDB log using standardized logging level functions, described below. The log level can be set in the [HarperDB Configuration File](../configuration).
+
+HarperDB Logger Functions
+* `trace(message)`: Write a 'trace' level log, if the configured level allows for it.
+* `debug(message)`: Write a 'debug' level log, if the configured level allows for it.
+* `info(message)`: Write an 'info' level log, if the configured level allows for it.
+* `warn(message)`: Write a 'warn' level log, if the configured level allows for it.
+* `error(message)`: Write an 'error' level log, if the configured level allows for it.
+* `fatal(message)`: Write a 'fatal' level log, if the configured level allows for it.
+* `notify(message)`: Write a 'notify' level log.
+
+
+For debugging purposes, it is recommended to use `notify` as these messages will appear in the log regardless of log level configured.
+
+## Viewing the Log
+
+The HarperDB Log can be found on the [Studio Status page](../harperdb-studio/instance-metrics) or in the local Custom Functions log file, `/log/custom_functions.log`. Additionally, you can use the [`read_log` operation](https://api.harperdb.io/#7f718dd1-afa5-49ce-bc0c-564e17b1c9cf) to query the HarperDB log.
+
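+In its simplest form, a `read_log` request needs only the operation name; optional filters (such as limits and log levels) are described in the linked API docs:
+
+```json
+{
+  "operation": "read_log"
+}
+```
+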
+### Example 1: Execute Query and Log Results
+
+This example performs a SQL query in HarperDB and logs the result. This example utilizes the `logger.notify` function to log the stringified version of the result. If an error occurs, it will output the error using `logger.error` and return the error.
+
+
+
+```javascript
+server.route({
+  url: '/',
+  method: 'GET',
+  handler: async (request) => {
+    request.body = {
+      operation: 'sql',
+      sql: 'SELECT * FROM dev.dog ORDER BY dog_name'
+    };
+
+    try {
+      let result = await hdbCore.requestWithoutAuthentication(request);
+      logger.notify(`Query Result: ${JSON.stringify(result)}`);
+      return result;
+    } catch (e) {
+      logger.error(`Query Error: ${e}`);
+      return e;
+    }
+  }
+});
+```
+
+### Example 2: Execute Multiple Queries and Log Activity
+
+This example performs two SQL queries in HarperDB with logging throughout to describe what is happening. This example utilizes the `logger.notify` function to log the stringified version of the operation and the result of each query. If an error occurs, it will output the error using `logger.error` and return the error.
+
+
+```javascript
+server.route({
+  url: '/example',
+  method: 'GET',
+  handler: async (request) => {
+    logger.notify('/example called!');
+    const results = [];
+
+    request.body = {
+      operation: 'sql',
+      sql: 'SELECT * FROM dev.dog WHERE id = 1'
+    };
+    logger.notify(`Query 1 Operation: ${JSON.stringify(request.body)}`);
+    try {
+      let result = await hdbCore.requestWithoutAuthentication(request);
+      logger.notify(`Query 1: ${JSON.stringify(result)}`);
+      results.push(result);
+    } catch (e) {
+      logger.error(`Query 1: ${e}`);
+      return e;
+    }
+
+    request.body = {
+      operation: 'sql',
+      sql: 'SELECT * FROM dev.dog WHERE id = 2'
+    };
+    logger.notify(`Query 2 Operation: ${JSON.stringify(request.body)}`);
+    try {
+      let result = await hdbCore.requestWithoutAuthentication(request);
+      logger.notify(`Query 2: ${JSON.stringify(result)}`);
+      results.push(result);
+    } catch (e) {
+      logger.error(`Query 2: ${e}`);
+      return e;
+    }
+
+    logger.notify('/example complete!');
+    return results;
+  }
+});
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/custom-functions/define-helpers.md b/site/versioned_docs/version-4.1/custom-functions/define-helpers.md
new file mode 100644
index 00000000..eccd9b6a
--- /dev/null
+++ b/site/versioned_docs/version-4.1/custom-functions/define-helpers.md
@@ -0,0 +1,36 @@
+---
+title: Define Helpers
+---
+
+# Define Helpers
+
+Helpers are functions for use within your routes. You may want to use the same helper in multiple route files, so this allows you to write it once, and include it wherever you need it.
+
+
+
+* To use your helpers, they must be exported from your helper file. Please use any standard export mechanism available for your module system. Our example below uses CommonJS and exports with `module.exports`.
+
+* You must import the helper module into the file that needs access to the exported functions. With CommonJS, you'd use a `require` statement. See [this example](./define-routes#custom-prevalidation-hooks) in Define Routes.
+
+
+Below is code from the customValidation helper that is referenced in [Define Routes](./define-routes). It takes the request and the logger method from the route declaration, and makes a call to an external API to validate the headers using fetch. The API in this example is just returning a list of ToDos, but it could easily be replaced with a call to a real authentication service.
+
+
+```javascript
+const customValidation = async (request, logger) => {
+  let response = await fetch('https://jsonplaceholder.typicode.com/todos/1', { headers: { authorization: request.headers.authorization } });
+  let result = await response.json();
+
+  /*
+   * throw an authentication error based on the response body or statusCode
+   */
+  if (result.error) {
+    const errorString = result.error || 'Sorry, there was an error authenticating your request';
+    logger.error(errorString);
+    throw new Error(errorString);
+  }
+  return request;
+};
+
+module.exports = customValidation;
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/custom-functions/define-routes.md b/site/versioned_docs/version-4.1/custom-functions/define-routes.md
new file mode 100644
index 00000000..84cef1da
--- /dev/null
+++ b/site/versioned_docs/version-4.1/custom-functions/define-routes.md
@@ -0,0 +1,131 @@
+---
+title: Define Routes
+---
+
+# Define Routes
+
+HarperDB’s Custom Functions is built on top of [Fastify](https://www.fastify.io/), so our route definitions follow their specifications. Below is a very simple example of a route declaration.
+
+
+
+Route URLs are resolved in the following manner:
+
+* [**Instance URL**]:[**Custom Functions Port**]/[**Project Name**]/[**Route URL**]
+
+* The route below, within the **dogs** project, with a route of **breeds** would be available at **http://localhost:9926/dogs/breeds**.
+
+
+In effect, this route is just a pass-through to HarperDB: because it uses **hdbCore.preValidation** and **hdbCore.request** (defined in the “helper methods” section, below), the same result could have been achieved by hitting the core HarperDB API.
+
+
+
+```javascript
+module.exports = async (server, { hdbCore, logger }) => {
+ server.route({
+ url: '/',
+ method: 'POST',
+ preValidation: hdbCore.preValidation,
+ handler: hdbCore.request,
+ })
+}
+```
+
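+As a quick illustration (a sketch only, assuming a project folder named **dogs**, the default Custom Functions port of `9926`, an existing `dev.dog` table, and placeholder credentials), calling this pass-through route could look like:
+
+```
+curl --request POST 'http://localhost:9926/dogs/' \
+--header 'Authorization: Basic YourBase64EncodedInstanceUser:Pass' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+"operation": "sql",
+"sql": "SELECT * FROM dev.dog"
+}'
+```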
+
+## Custom Handlers
+
+For endpoints where you want to execute multiple operations against HarperDB, or perform additional processing (like an ML classification, or an aggregation, or a call to a 3rd party API), you can define your own logic in the handler. The function below will execute a query against the dogs table, and filter the results to only return those dogs over 4 years in age.
+
+
+
+**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which, as the name implies, bypasses all user authentication. See the security concerns and mitigations in the “helper methods” section, below.**
+
+
+
+```javascript
+module.exports = async (server, { hdbCore, logger }) => {
+ server.route({
+ url: '/:id',
+ method: 'GET',
+ handler: async (request) => {
+ request.body = {
+ operation: 'sql',
+ sql: `SELECT * FROM dev.dog WHERE id = ${request.params.id}`
+ };
+
+ const result = await hdbCore.requestWithoutAuthentication(request);
+ return result.filter((dog) => dog.age > 4);
+ }
+ });
+}
+```
+
+## Custom preValidation Hooks
+The simple example above was just a pass-through to HarperDB; the exact same result could have been achieved by hitting the core HarperDB API. But for many applications, you may want to authenticate the user using custom logic you write, or by conferring with a 3rd party service. Custom preValidation hooks let you do just that.
+
+
+
+Below is an example of a route that uses a custom validation hook:
+
+```javascript
+const customValidation = require('../helpers/customValidation');
+
+module.exports = async (server, { hdbCore, logger }) => {
+ server.route({
+ url: '/:id',
+ method: 'GET',
+ preValidation: (request) => customValidation(request, logger),
+ handler: (request) => {
+ request.body = {
+ operation: 'sql',
+ sql: `SELECT * FROM dev.dog WHERE id = ${request.params.id}`
+ };
+
+ return hdbCore.requestWithoutAuthentication(request);
+ }
+ });
+}
+```
+
+
+Notice we imported customValidation from the **helpers** directory. To include a helper, and to see the actual code within customValidation, see [Define Helpers](./define-helpers).
+
+## Helper Methods
+When declaring routes, you are given access to 2 helper methods: hdbCore and logger.
+
+
+
+**hdbCore**
+
+hdbCore contains three functions that allow you to authenticate an inbound request, and execute operations against HarperDB directly, bypassing the standard Operations API.
+
+
+
+* **preValidation**
+
+ This takes the authorization header from the inbound request and executes the same authentication as the standard HarperDB Operations API. It will determine if the user exists, and if they are allowed to perform this operation. **If you use the request method, you have to use preValidation to get the authenticated user**.
+
+* **request**
+
+ This will execute a request with HarperDB using the operations API. The `request.body` should contain a standard HarperDB operation and must also include the `hdb_user` property that the `preValidation` hook adds to `request.body`. A short sketch combining `preValidation` and `request` follows this list.
+
+* **requestWithoutAuthentication**
+
+ Executes a request against HarperDB without any security checks around whether the inbound user is allowed to make this request. For security purposes, you should always take the following precautions when using this method:
+
+ * Properly handle user-submitted values, including URL params. User-submitted values should only be used for `search_value` and for defining values in records. Special care should be taken to properly escape any values if user-submitted values are used for SQL.
+
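+For example, here is a minimal sketch (assuming the same `dev.dog` table used in the examples above) that authenticates the caller with `preValidation` and then runs a custom operation through `request`:
+
+```javascript
+module.exports = async (server, { hdbCore, logger }) => {
+  server.route({
+    url: '/:id',
+    method: 'GET',
+    preValidation: hdbCore.preValidation,
+    handler: (request) => {
+      // request.params.id is user-submitted, so validate/escape it before using it in SQL
+      request.body = {
+        operation: 'sql',
+        sql: `SELECT * FROM dev.dog WHERE id = ${request.params.id}`,
+        hdb_user: request.body.hdb_user // added to request.body by preValidation
+      };
+
+      return hdbCore.request(request);
+    }
+  });
+}
+```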
+
+**logger**
+
+This helper allows you to write directly to the Custom Functions log file, custom_functions.log. It’s useful for debugging during development, although you may also use the console logger. There are 5 functions contained within logger, each of which pertains to a different **logging.level** configuration in your harperdb-config.yaml file.
+
+
+* `logger.trace('Starting the handler for /dogs')`
+
+* `logger.debug('This should only fire once')`
+
+* `logger.warn('This should never ever fire')`
+
+* `logger.error('This did not go well')`
+
+* `logger.fatal('This did not go very well at all')`
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/custom-functions/example-projects.md b/site/versioned_docs/version-4.1/custom-functions/example-projects.md
new file mode 100644
index 00000000..88ded5fd
--- /dev/null
+++ b/site/versioned_docs/version-4.1/custom-functions/example-projects.md
@@ -0,0 +1,37 @@
+---
+title: Example Projects
+---
+
+# Example Projects
+
+**Library of example projects and tutorials using Custom Functions:**
+
+* [Authorization in HarperDB using Okta Customer Identity Cloud](https://www.harperdb.io/post/authorization-in-harperdb-using-okta-customer-identity-cloud), by Yitaek Hwang
+
+* [How to Speed Up your Applications by Caching at the Edge with HarperDB](https://dev.to/doabledanny/how-to-speed-up-your-applications-by-caching-at-the-edge-with-harperdb-3o2l), by Danny Adams
+
+* [OAuth Authentication in HarperDB using Auth0 & Node.js](https://www.harperdb.io/post/oauth-authentication-in-harperdb-using-auth0-and-node-js), by Lucas Santos
+
+* [How To Create a CRUD API with Next.js & HarperDB Custom Functions](https://www.harperdb.io/post/create-a-crud-api-w-next-js-harperdb), by Colby Fayock
+
+* [Build a Dynamic REST API with Custom Functions](https://harperdb.io/blog/build-a-dynamic-rest-api-with-custom-functions/), by Terra Roush
+
+* [How to use HarperDB Custom Functions to Build your Entire Backend](https://dev.to/andrewbaisden/how-to-use-harperdb-custom-functions-to-build-your-entire-backend-a2m), by Andrew Baisden
+
+* [Using TensorFlowJS & HarperDB Custom Functions for Machine Learning](https://harperdb.io/blog/using-tensorflowjs-harperdb-for-machine-learning/), by Kevin Ashcraft
+
+* [Build & Deploy a Fitness App with Python & HarperDB](https://www.youtube.com/watch?v=KMkmA4i2FQc), by Patrick Löber
+
+* [Create a Discord Slash Bot using HarperDB Custom Functions](https://geekysrm.hashnode.dev/discord-slash-bot-with-harperdb-custom-functions), by Soumya Ranjan Mohanty
+
+* [How I used HarperDB Custom Functions to Build a Web App for my Newsletter](https://blog.hrithwik.me/how-i-used-harperdb-custom-functions-to-build-a-web-app-for-my-newsletter), by Hrithwik Bharadwaj
+
+* [How I used HarperDB Custom Functions and Recharts to create Dashboard](https://blog.greenroots.info/how-to-create-dashboard-with-harperdb-custom-functions-and-recharts), by Tapas Adhikary
+
+* [How To Use HarperDB Custom Functions With Your React App](https://dev.to/tyaga001/how-to-use-harperdb-custom-functions-with-your-react-app-2c43), by Ankur Tyagi
+
+* [Build a Web App Using HarperDB’s Custom Functions](https://www.youtube.com/watch?v=rz6prItVJZU), livestream by Jaxon Repp
+
+* [How to Web Scrape Using Python, Snscrape & Custom Functions](https://hackernoon.com/how-to-web-scrape-using-python-snscrape-and-harperdb), by Davis David
+
+* [What’s the Big Deal w/ Custom Functions](https://rss.com/podcasts/harperdb-select-star/278933/), Select* Podcast
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/custom-functions/host-static.md b/site/versioned_docs/version-4.1/custom-functions/host-static.md
new file mode 100644
index 00000000..0dcd2788
--- /dev/null
+++ b/site/versioned_docs/version-4.1/custom-functions/host-static.md
@@ -0,0 +1,21 @@
+---
+title: Host A Static Web UI
+---
+
+# Host A Static Web UI
+
+The [@fastify/static](https://github.com/fastify/fastify-static) module can be utilized to serve static files.
+
+Install the module in your project by running `npm i @fastify/static` from inside your project directory.
+
+Register `@fastify/static` with the server and set `root` to the absolute path of the directory that contains the static files to serve.
+
+For further information on how to send specific files see the [@fastify/static](https://github.com/fastify/fastify-static) docs.
+
+```javascript
+const path = require('path');
+
+module.exports = async (server, { hdbCore, logger }) => {
+ server.register(require('@fastify/static'), {
+ root: path.join(__dirname, 'public'),
+ })
+};
+```
diff --git a/site/versioned_docs/version-4.1/custom-functions/index.md b/site/versioned_docs/version-4.1/custom-functions/index.md
new file mode 100644
index 00000000..4b97f156
--- /dev/null
+++ b/site/versioned_docs/version-4.1/custom-functions/index.md
@@ -0,0 +1,28 @@
+---
+title: Custom Functions
+---
+
+# Custom Functions
+
+Custom functions are a key part of building a complete HarperDB application. It is highly recommended that you use Custom Functions as the primary mechanism for your application to access your HarperDB database. Using Custom Functions gives you complete control over the accessible endpoints, how users are authenticated and authorized, what data is accessed from the database, and how it is aggregated and returned to users.
+
+* Add your own API endpoints to a standalone API server inside HarperDB
+
+* Use HarperDB Core methods to interact with your data at lightning speed
+
+* Custom Functions are powered by Fastify, so they’re extremely flexible
+
+* Manage in HarperDB Studio, or use your own IDE and Version Management System
+
+* Distribute your Custom Functions to all your HarperDB instances with a single click
+
+---
+* [Requirements and Definitions](./requirements-definitions)
+
+* [Create A Project](./create-project)
+
+* [Define Routes](./define-routes)
+
+* [Define Helpers](./define-helpers)
+
+* [Host a Static UI](./host-static)
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/custom-functions/requirements-definitions.md b/site/versioned_docs/version-4.1/custom-functions/requirements-definitions.md
new file mode 100644
index 00000000..a38a0ec6
--- /dev/null
+++ b/site/versioned_docs/version-4.1/custom-functions/requirements-definitions.md
@@ -0,0 +1,77 @@
+---
+title: Requirements And Definitions
+---
+
+# Requirements And Definitions
+Before you get started with Custom Functions, here’s a primer on the basic configuration and the structure of a Custom Functions Project.
+
+## Configuration
+Custom Functions are configured in the harperdb-config.yaml file located in the operations API root directory (by default this is a directory named `hdb` located in the home directory of the current user). Below is a view of the Custom Functions' section of the config YAML file, plus descriptions of important Custom Functions settings.
+
+```yaml
+customFunctions:
+ enabled: true
+ network:
+ cors: true
+ corsAccessList:
+ - null
+ headersTimeout: 60000
+ https: false
+ keepAliveTimeout: 5000
+ port: 9926
+ timeout: 120000
+ nodeEnv: production
+ root: ~/hdb/custom_functions
+ tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+```
+
+* **`enabled`**
+ A boolean value that tells HarperDB to start the Custom Functions server. Set it to **true** to enable custom functions and **false** to disable. `enabled` is `true` by default.
+
+* **`network.port`**
+ This is the port HarperDB will use to start a standalone Fastify Server dedicated to serving your Custom Functions’ routes.
+
+* **`root`**
+ This is the root directory where your Custom Functions projects and their files will live. By default, it’s `custom_functions` inside your HarperDB root path (for example, `~/hdb/custom_functions`), but you can locate it anywhere: in a developer folder next to your other development projects, for example.
+
+_Please visit our [configuration docs](../configuration) for a more comprehensive look at these settings._
+
+## Project Structure
+**project folder**
+
+The name of the folder that holds your project files serves as the root prefix for all the routes you create. All routes created in the **dogs** project folder will have a URL like this: **https://my-server-url.com:9926/dogs/my/route**. As such, it’s important that any project folders you create avoid characters that aren’t URL-friendly, including URL delimiters.
+
+
+**/routes folder**
+
+Files in the **routes** folder define the requests that your Custom Functions server will handle. They are [standard Fastify route declarations](https://www.fastify.io/docs/latest/Reference/Routes/), so if you’re familiar with them, you should be up and running in no time. The default components for a route are the `url`, `method`, `preValidation`, and `handler`.
+
+```javascript
+module.exports = async (server, { hdbCore, logger }) => {
+ server.route({
+ url: '/',
+ method: 'POST',
+ preValidation: hdbCore.preValidation,
+ handler: hdbCore.request,
+ });
+}
+```
+
+**/helpers folder**
+
+These files are JavaScript modules that you can use in your handlers, or for custom `preValidation` hooks. Examples include calls to third party Authentication services, filters for results of calls to HarperDB, and custom error responses. As modules, you can use standard import and export functionality.
+
+```javascript
+"use strict";
+
+const dbFilter = (databaseResultsArray) => databaseResultsArray.filter((result) => result.showToApi === true);
+
+module.exports = dbFilter;
+```
+
+**/static folder**
+
+If you’d like to serve your visitors a static website, you can place the html and supporting files into a directory called **static**. The directory must have an **index.html** file, and can have as many supporting resources as are necessary in whatever subfolder structure you prefer within that **static** directory.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/custom-functions/restarting-server.md b/site/versioned_docs/version-4.1/custom-functions/restarting-server.md
new file mode 100644
index 00000000..b8352059
--- /dev/null
+++ b/site/versioned_docs/version-4.1/custom-functions/restarting-server.md
@@ -0,0 +1,18 @@
+---
+title: Restarting the Server
+---
+
+# Restarting the Server
+
+One way to manage Custom Functions is through [HarperDB Studio](../harperdb-studio/). It performs all the necessary operations automatically. To get started, navigate to your instance in HarperDB Studio and click the subnav link for “functions”. If you have not yet enabled Custom Functions, it will walk you through the process. Once configuration is complete, you can manage and deploy Custom Functions in minutes.
+
+For any changes made to your routes, helpers, or projects, you’ll need to restart the Custom Functions server to see them take effect. HarperDB Studio does this automatically whenever you create or delete a project, or add, edit, or delete a route or helper. If you need to restart the Custom Functions server yourself, you can use the following operation to do so:
+
+
+
+```json
+{
+ "operation": "restart_service",
+ "service": "custom_functions"
+}
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/custom-functions/templates.md b/site/versioned_docs/version-4.1/custom-functions/templates.md
new file mode 100644
index 00000000..0fb6401e
--- /dev/null
+++ b/site/versioned_docs/version-4.1/custom-functions/templates.md
@@ -0,0 +1,7 @@
+---
+title: Templates
+---
+
+# Templates
+
+Check out our always-expanding library of templates in our open-source [HarperDB-Add-Ons GitHub repo](https:/github.com/HarperDB-Add-Ons).
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/custom-functions/using-npm-git.md b/site/versioned_docs/version-4.1/custom-functions/using-npm-git.md
new file mode 100644
index 00000000..4120fd17
--- /dev/null
+++ b/site/versioned_docs/version-4.1/custom-functions/using-npm-git.md
@@ -0,0 +1,13 @@
+---
+title: Using NPM and Git
+---
+
+# Using NPM and Git
+
+Custom function projects can be structured and managed like normal Node.js projects. You can install external dependencies, include them in your route and helper files, and manage your revisions without changing your development tooling or pipeline.
+
+
+
+* To initialize your project to use npm packages, use the terminal to execute `npm init` from the root of your project folder.
+
+* To implement version control using git, use the terminal to execute `git init` from the root of your project folder.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/getting-started/getting-started.md b/site/versioned_docs/version-4.1/getting-started/getting-started.md
new file mode 100644
index 00000000..0d1db68f
--- /dev/null
+++ b/site/versioned_docs/version-4.1/getting-started/getting-started.md
@@ -0,0 +1,54 @@
+---
+title: Getting Started
+---
+
+# Getting Started
+
+Getting started with HarperDB is easy and fast.
+
+The quickest way to get up and running with HarperDB is with HarperDB Cloud, our database-as-a-service offering, which this guide will utilize.
+
+### Set Up a HarperDB Instance
+
+Before you can start using HarperDB you need to set up an instance. Note, if you would prefer to install HarperDB locally, [check out the installation guides including Linux, Mac, and many other options](../install-harperdb/).
+
+1. [Sign up for the HarperDB Studio](https://studio.harperdb.io/sign-up)
+1. [Create a new HarperDB Cloud instance](../harperdb-studio/instances#create-a-new-instance)
+
+> HarperDB Cloud instance provisioning typically takes 5-15 minutes. You will receive an email notification when your instance is ready.
+
+### Using the HarperDB Studio
+
+Now that you have a HarperDB instance, you can do pretty much everything you’d like through the Studio. This section links to appropriate articles to get you started interacting with your data.
+
+1. [Create a schema](../harperdb-studio/manage-schemas-browse-data#create-a-schema)
+1. [Create a table](../harperdb-studio/manage-schemas-browse-data#create-a-table)
+1. [Add a record](../harperdb-studio/manage-schemas-browse-data#add-a-record)
+1. [Load CSV data](../harperdb-studio/manage-schemas-browse-data#load-csv-data) (Here’s a sample CSV of the HarperDB team’s dogs)
+1. [Query data via SQL](../harperdb-studio/query-instance-data)
+
+### Using the HarperDB API
+
+Complete HarperDB API documentation is available at api.harperdb.io. The HarperDB Studio features an example code builder that generates API calls in the programming language of your choice. For example purposes, a basic cURL command is shown below to create a schema called dev.
+
+```
+curl --location --request POST 'https://instance-subdomain.harperdbcloud.com' \
+--header 'Authorization: Basic YourBase64EncodedInstanceUser:Pass' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+"operation": "create_schema",
+"schema": "dev"
+}'
+```
+
+Breaking it down, there are only a few requirements for interacting with HarperDB:
+
+* Using the HTTP POST method.
+* Providing the URL of the HarperDB instance.
+* Providing the Authorization header (more on using Basic authentication).
+* Providing the Content-Type header.
+* Providing a JSON body with the desired operation and any additional operation properties (shown in the --data-raw parameter). This is the only parameter that needs to be changed to execute alternative operations on HarperDB.
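+
+To run a different operation, only the request body changes. For example, a hypothetical follow-up call (assuming the same placeholder instance URL and credentials) could create a `dog` table in the `dev` schema with `id` as its hash attribute:
+
+```
+curl --location --request POST 'https://instance-subdomain.harperdbcloud.com' \
+--header 'Authorization: Basic YourBase64EncodedInstanceUser:Pass' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+"operation": "create_table",
+"schema": "dev",
+"table": "dog",
+"hash_attribute": "id"
+}'
+```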
+
+### Video Tutorials
+
+[HarperDB video tutorials are available within the HarperDB Studio](../harperdb-studio/resources#video-tutorials). HarperDB and the HarperDB Studio are constantly changing; as such, there may be small discrepancies in UI/UX.
diff --git a/site/versioned_docs/version-4.1/harperdb-cli.md b/site/versioned_docs/version-4.1/harperdb-cli.md
new file mode 100644
index 00000000..b7c1f9e0
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-cli.md
@@ -0,0 +1,114 @@
+---
+title: HarperDB CLI
+---
+
+# HarperDB CLI
+
+The HarperDB command line interface (CLI) is used to administer [self-installed HarperDB instances](./install-harperdb/).
+
+## Installing HarperDB
+
+To install HarperDB with CLI prompts, run the following command:
+
+```bash
+harperdb install
+```
+
+Alternatively, HarperDB installations can be automated with environment variables or command line arguments; [see a full list of configuration parameters here](./configuration#using-the-configuration-file-and-naming-conventions). Note, when used in conjunction, command line arguments will override environment variables.
+
+#### Environment Variables
+
+```bash
+#minimum required parameters for no additional CLI prompts
+export TC_AGREEMENT=yes
+export HDB_ADMIN_USERNAME=HDB_ADMIN
+export HDB_ADMIN_PASSWORD=password
+export ROOTPATH=/tmp/hdb/
+export OPERATIONSAPI_NETWORK_PORT=9925
+harperdb install
+```
+
+#### Command Line Arguments
+
+```bash
+#minimum required parameters for no additional CLI prompts
+harperdb install --TC_AGREEMENT yes --HDB_ADMIN_USERNAME HDB_ADMIN --HDB_ADMIN_PASSWORD password --ROOTPATH /tmp/hdb/ --OPERATIONSAPI_NETWORK_PORT 9925
+```
+
+***
+
+## Starting HarperDB
+
+To start HarperDB after it is installed, run the following command:
+
+```bash
+harperdb start
+```
+
+***
+
+## Stopping HarperDB
+
+To stop HarperDB once it is running, run the following command:
+
+```bash
+harperdb stop
+```
+
+***
+
+## Restarting HarperDB
+
+To restart HarperDB once it is running, run the following command:
+
+```bash
+harperdb restart
+```
+
+***
+
+## Managing HarperDB Service(s)
+
+The following commands are used to start, restart, or stop one or more HarperDB services without restarting the full application:
+
+```bash
+harperdb start --service harperdb,"custom functions",ipc
+harperdb stop --service harperdb
+harperdb restart --service "custom functions"
+```
+
+The following services are managed via the above commands:
+
+* HarperDB
+* Custom Functions
+* IPC
+* Clustering
+
+***
+
+## Getting the HarperDB Version
+
+To check the version of HarperDB that is installed, run the following command:
+
+```bash
+harperdb version
+```
+
+## Get all available CLI commands
+
+To display all available HarperDB CLI commands along with a brief description, run:
+
+```bash
+harperdb help
+```
+
+## Get the status of HarperDB and clustering
+
+To display the status of the HarperDB process, the clustering hub and leaf processes, and the clustering network and replication statuses, run:
+
+```bash
+harperdb status
+```
+
+## Backups
+HarperDB uses a transactional commit process that ensures that data on disk is always transactionally consistent. This means that HarperDB maintains database integrity in the event of a crash. It also means that you can use any standard volume snapshot tool to make a backup of a HarperDB database. Database files are stored in the hdb/schemas directory (organized into schema directories). As long as the snapshot is an atomic snapshot of these database files, the data can be copied/moved back into the schemas directory to restore a previous backup (with HarperDB shut down), and database integrity will be preserved. Note that simply copying an in-use database file (using `cp`, for example) is _not_ a snapshot; this would progressively read data from the database at different points in time, which yields an unreliable copy that likely will not be usable. Standard copying is only reliable for a database file that is not in use.
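+
+If a volume snapshot tool is not available, one minimal sketch (assuming the default `~/hdb` root path) is to copy the schemas directory while HarperDB is stopped, since standard copying is reliable only when the files are not in use:
+
+```bash
+# Offline copy sketch: safe only because HarperDB is stopped and the files are not in use
+harperdb stop
+mkdir -p ~/backups
+cp -a ~/hdb/schemas ~/backups/hdb-schemas-$(date +%F)
+harperdb start
+```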
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-cloud/alarms.md b/site/versioned_docs/version-4.1/harperdb-cloud/alarms.md
new file mode 100644
index 00000000..26f28a24
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-cloud/alarms.md
@@ -0,0 +1,27 @@
+---
+title: HarperDB Cloud Alarms
+---
+
+# HarperDB Cloud Alarms
+
+HarperDB Cloud instance alarms are triggered when certain conditions are met. Once alarms are triggered, organization owners will immediately receive an email alert and the alert will be available on the [Instance Configuration](../harperdb-studio/instance-configuration) page. The table below describes each alert and its evaluation metrics.
+
+
+
+### Heading Definitions
+
+* **Alarm**: Title of the alarm.
+
+* **Threshold**: Definition of the alarm threshold.
+
+* **Intervals**: The number of occurrences before an alarm is triggered and the period that the metric is evaluated over.
+
+* **Proposed Remedy**: Recommended solution to avoid the alert in the future.
+
+
+| Alarm | Threshold | Intervals | Proposed Remedy |
+|---------|------------|-----------|----------------------------------------------------------------------------------------------------------------|
+| Storage | > 90% Disk | 1 x 5min | [Increased storage volume](../harperdb-studio/instance-configuration#update-instance-storage) |
+| CPU | > 90% Avg | 2 x 5min | [Increase instance size for additional CPUs](../harperdb-studio/instance-configuration#update-instance-ram) |
+| Memory | > 90% RAM | 2 x 5min | [Increase instance size](../harperdb-studio/instance-configuration#update-instance-ram) |
+
diff --git a/site/versioned_docs/version-4.1/harperdb-cloud/index.md b/site/versioned_docs/version-4.1/harperdb-cloud/index.md
new file mode 100644
index 00000000..d820c858
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-cloud/index.md
@@ -0,0 +1,7 @@
+---
+title: HarperDB Cloud
+---
+
+# HarperDB Cloud
+
+HarperDB Cloud is the easiest way to test drive HarperDB; it’s HarperDB-as-a-Service. Cloud handles deployment and management of your instances in just a few clicks. HarperDB Cloud is currently powered by AWS, with additional cloud providers on our roadmap for the future.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-cloud/instance-size-hardware-specs.md b/site/versioned_docs/version-4.1/harperdb-cloud/instance-size-hardware-specs.md
new file mode 100644
index 00000000..74dca186
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-cloud/instance-size-hardware-specs.md
@@ -0,0 +1,26 @@
+---
+title: HarperDB Cloud Instance Size Hardware Specs
+---
+
+# HarperDB Cloud Instance Size Hardware Specs
+
+While HarperDB Cloud bills by RAM, each instance has other specifications associated with the RAM selection. The following table describes each instance size in detail*.
+
+| AWS EC2 Instance Size | RAM (GiB) | # vCPUs | Network (Gbps) | Processor |
+|------------------------|------------|----------|-----------------|----------------------------------------|
+| t3.nano | 0.5 | 2 | Up to 5 | 2.5 GHz Intel Xeon Platinum 8000 |
+| t3.micro | 1 | 2 | Up to 5 | 2.5 GHz Intel Xeon Platinum 8000 |
+| t3.small | 2 | 2 | Up to 5 | 2.5 GHz Intel Xeon Platinum 8000 |
+| t3.medium | 4 | 2 | Up to 5 | 2.5 GHz Intel Xeon Platinum 8000 |
+| m5.large | 8 | 2 | Up to 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.xlarge | 16 | 4 | Up to 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.2xlarge | 32 | 8 | Up to 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.4xlarge | 64 | 16 | Up to 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.8xlarge | 128 | 32 | 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.12xlarge | 192 | 48 | 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.16xlarge | 256 | 64 | 20 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.24xlarge | 384 | 96 | 25 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+
+
+
+*Specifications are subject to change. For the most up to date information, please refer to AWS documentation: https://aws.amazon.com/ec2/instance-types/.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-cloud/iops-impact.md b/site/versioned_docs/version-4.1/harperdb-cloud/iops-impact.md
new file mode 100644
index 00000000..10baf28c
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-cloud/iops-impact.md
@@ -0,0 +1,49 @@
+---
+title: IOPS Impact on Performance
+---
+
+# IOPS Impact on Performance
+
+HarperDB, like any database, can place a tremendous load on its storage resources. Storage, not CPU or memory, will more often be the bottleneck of a server, virtual machine, or container running HarperDB. Understanding how storage works, and how much storage performance your workload requires, is key to ensuring that HarperDB performs as expected.
+
+## IOPS Overview
+The primary measure of storage performance is the number of input/output operations per second (IOPS) that a storage device can perform. Different storage devices can have dramatically different performance profiles. A hard drive (HDD) might only perform a hundred or so IOPS, while a solid state drive (SSD) might be able to perform tens or hundreds of thousands of IOPS.
+
+
+
+Cloud providers like AWS, which powers HarperDB Cloud, don’t typically attach individual disks to a virtual machine or container. Instead, they combine large numbers of storage drives to create very high performance storage servers. Chunks (volumes) of that storage are then carved out and presented to many different virtual machines and containers. Due to the shared nature of this type of storage, the cloud provider places configurable limits on the number of IOPS that a volume can perform. The same way that cloud providers charge more for larger capacity volumes, they also charge more for volumes with more IOPS.
+
+## HarperDB Cloud Storage
+
+HarperDB Cloud utilizes AWS Elastic Block Storage (EBS) General Purpose SSD (gp3) volumes. This is the most common storage type used in AWS, as it provides reasonable performance for most workloads, at a reasonable price.
+
+
+
+AWS EBS gp3 volumes have a baseline performance level of 3,000 IOPS; as a result, all HarperDB Cloud storage options offer 3,000 IOPS. We plan to offer scalable IOPS as an option in the future.
+
+
+
+You can read more about AWS EBS volume IOPS here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html.
+
+## Estimating IOPS for HarperDB Instance
+
+The number of IOPS required for a particular workload is influenced by many factors. Testing your particular application is the best way to determine the number of IOPS required. A reliable method is to estimate about two IOPS for every index, including the primary key itself. So if a table has two indices besides the primary key, estimate that an insert or update will require about six IOPS. Note that this can often be closer to one IOPS per index under load due to internal batching of writes, and sometimes even better when doing sequential inserts. Again, it is best to test to verify this with application-specific data and write patterns.
+
+
+
+For assistance in estimating IOPS requirements feel free to contact HarperDB Support or join our Community Slack Channel.
+
+## Example Use Case IOPS Requirements
+
+* **Sensor Data Collection**
+
+ In the case of IoT sensors, where data collection is sustained, high IOPS are required. While there are not typically large queries going on in this case, there is a high volume of data being ingested. This implies that IOPS will be sustained at a high level. For example, if you are collecting 100 records per second you would expect to need roughly 3,000 IOPS just to handle the data inserts.
+* **Data Analytics/BI Server**
+
+ Providing a server for analytics purposes typically requires a larger machine. Typically these cases involve large scale SQL joins and aggregations, which puts a large strain on reads. HarperDB utilizes an in-memory cache, which provides a significant performance boost on machines with large amounts of memory. However, if disparate datasets are constantly being queried and/or new data is frequently being loaded, you will find that the system still needs to have high IOPS to meet performance demand.
+* **Web Services**
+
+ Typical web service implementations with discrete reads and writes often do not need high IOPS to perform as expected. This is often the case in more transactional systems without the requirement for high performance load. A good rule to follow is that any HarperDB operation that requires a data scan will be IOPS intensive, but if these are not frequent then the EBS boost will suffice. Queries utilizing equals operations in either SQL or NoSQL do not require a scan due to HarperDB’s native indexing.
+* **High Performance Database**
+
+ Ultimately, if performance is your top priority, HarperDB should be run on bare metal hardware. Cloud providers offer these options at a higher cost, but they come with obvious performance improvements.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-cloud/verizon-5g-wavelength-instances.md b/site/versioned_docs/version-4.1/harperdb-cloud/verizon-5g-wavelength-instances.md
new file mode 100644
index 00000000..1aaa838d
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-cloud/verizon-5g-wavelength-instances.md
@@ -0,0 +1,46 @@
+---
+title: Verizon 5G Wavelength Instances
+---
+
+# Verizon 5G Wavelength Instances
+
+These instances are only accessible from the Verizon network. When accessing your HarperDB instance please ensure you are connected to the Verizon network, examples include Verizon 5G Internet, Verizon Hotspots, or Verizon mobile devices.
+
+
+
+HarperDB on Verizon 5G Wavelength brings HarperDB closer to the end user exclusively on the Verizon network resulting in as little as single-digit millisecond response time from HarperDB to the client.
+
+
+
+Instances are built via AWS Wavelength. You can read more about [AWS Wavelength here](https://aws.amazon.com/wavelength/).
+
+## HarperDB 5G Wavelength Instance Specs
+
+While HarperDB 5G Wavelength bills by RAM, each instance has other specifications associated with the RAM selection. The following table describes each instance size in detail*.
+
+| AWS EC2 Instance Size | RAM (GiB) | # vCPUs | Network (Gbps) | Processor |
+|------------------------|------------|----------|-----------------|---------------------------------------------|
+| t3.medium | 4 | 2 | Up to 5 | Up to 3.1 GHz Intel Xeon Platinum Processor |
+| t3.xlarge | 16 | 4 | Up to 5 | Up to 3.1 GHz Intel Xeon Platinum Processor |
+| r5.2xlarge | 64 | 8 | Up to 10 | Up to 3.1 GHz Intel Xeon Platinum Processor |
+
+
+
+
+
+*Specifications are subject to change. For the most up to date information, please refer to [AWS documentation](https://aws.amazon.com/ec2/instance-types/).
+
+## HarperDB 5G Wavelength Storage
+
+HarperDB 5G Wavelength utilizes AWS Elastic Block Storage (EBS) General Purpose SSD (gp2) volumes. This is the most common storage type used in AWS, as it provides reasonable performance for most workloads, at a reasonable price.
+
+
+
+AWS EBS gp2 volumes have a baseline performance level, which determines the number of IOPS a volume can perform indefinitely. The larger the volume, the higher its baseline performance. Additionally, smaller gp2 volumes are able to burst to a higher number of IOPS for periods of time.
+
+
+
+Smaller gp2 volumes are perfect for trying out the functionality of HarperDB, and might also work well for applications that don’t perform many database transactions. For applications that perform a moderate or high number of transactions, we recommend that you use a larger HarperDB volume. Learn more about the impact of IOPS on performance here.
+
+
+
+You can read more about [AWS EBS gp2 volume IOPS here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html#ebsvolumetypes_gp2).
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/create-account.md b/site/versioned_docs/version-4.1/harperdb-studio/create-account.md
new file mode 100644
index 00000000..635de7f4
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/create-account.md
@@ -0,0 +1,26 @@
+---
+title: Create a Studio Account
+---
+
+# Create a Studio Account
+Start at the [HarperDB Studio sign up page](https://studio.harperdb.io/sign-up).
+
+1) Provide the following information:
+ * First Name
+ * Last Name
+ * Email Address
+ * Subdomain
+
+ *Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: https://c1-demo.harperdbcloud.com.*
+ * Coupon Code (optional)
+2) Review the Privacy Policy and Terms of Service.
+3) Click the sign up for free button.
+4) You will be taken to a new screen to add an account password. Enter your password.
+ *Passwords must be a minimum of 8 characters with at least 1 lower case character, 1 upper case character, 1 number, and 1 special character.*
+5) Click the add account password button.
+
+You will receive a Studio welcome email confirming your registration.
+
+
+
+Note: Your email address will be used as your username and cannot be changed.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/enable-mixed-content.md b/site/versioned_docs/version-4.1/harperdb-studio/enable-mixed-content.md
new file mode 100644
index 00000000..1948d6be
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/enable-mixed-content.md
@@ -0,0 +1,11 @@
+---
+title: Enable Mixed Content
+---
+
+# Enable Mixed Content
+
+Enabling mixed content is required in cases where you would like to connect the HarperDB Studio to HarperDB Instances via HTTP. This should not be used for production systems, but may be convenient for development and testing purposes. Doing so will allow your browser to reach HTTP traffic, which is considered insecure, through an HTTPS site like the Studio.
+
+
+
+A comprehensive guide is provided by Adobe [here](https://experienceleague.adobe.com/docs/target/using/experiences/vec/troubleshoot-composer/mixed-content.html).
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/index.md b/site/versioned_docs/version-4.1/harperdb-studio/index.md
new file mode 100644
index 00000000..93ba1af7
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/index.md
@@ -0,0 +1,15 @@
+---
+title: HarperDB Studio
+---
+
+# HarperDB Studio
+HarperDB Studio is the web-based GUI for HarperDB. Studio enables you to administer, navigate, and monitor all of your HarperDB instances in a simple, user-friendly interface without any knowledge of the underlying HarperDB API. It’s free to sign up, so get started today!
+
+[Sign up for free!](https://studio.harperdb.io/sign-up)
+
+---
+## How does Studio Work?
+While HarperDB Studio is web-based and hosted by us, all database interactions are performed on the HarperDB instance the Studio is connected to. The HarperDB Studio loads in your browser, at which point you log in to your HarperDB instances. Credentials are stored in your browser cache and are not transmitted back to HarperDB. All database interactions are made via the HarperDB Operations API directly from your browser to your instance.
+
+## What type of instances can I manage?
+HarperDB Studio enables users to manage both HarperDB Cloud instances and privately hosted instances all from a single UI. All HarperDB instances feature identical behavior whether they are hosted by us or by you.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/instance-configuration.md b/site/versioned_docs/version-4.1/harperdb-studio/instance-configuration.md
new file mode 100644
index 00000000..55c01be1
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/instance-configuration.md
@@ -0,0 +1,119 @@
+---
+title: Instance Configuration
+---
+
+# Instance Configuration
+
+HarperDB instance configuration can be viewed and managed directly through the HarperDB Studio. HarperDB Cloud instances can be resized in two different ways via this page: either by modifying machine RAM or by increasing drive storage. User-installed instances can have their licenses modified by changing the licensed RAM.
+
+
+
+All instance configuration is handled through the **config** page of the HarperDB Studio, accessed with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click config in the instance control bar.
+
+*Note, the **config** page will only be available to super users and certain items are restricted to Studio organization owners.*
+
+## Instance Overview
+
+The **instance overview** panel displays the following instance specifications:
+
+* Instance URL
+
+* Instance Node Name (for clustering)
+
+* Instance API Auth Header (this user)
+
+ *The Basic authentication header used for the logged in HarperDB database user*
+
+* Created Date (HarperDB Cloud only)
+
+* Region (HarperDB Cloud only)
+
+ *The geographic region where the instance is hosted.*
+
+* Total Price
+
+* RAM
+
+* Storage (HarperDB Cloud only)
+
+* Disk IOPS (HarperDB Cloud only)
+
+## Update Instance RAM
+
+HarperDB Cloud instance size and user-installed instance licenses can be modified with the following instructions. This option is only available to Studio organization owners.
+
+
+
+Note: For HarperDB Cloud instances, upgrading RAM may add additional CPUs to your instance as well. Click here to see how many CPUs are provisioned for each instance size.
+
+1) In the **update ram** panel at the bottom left:
+
+ * Select the new instance size.
+
+ * If you do not have a credit card associated with your account, an **Add Credit Card To Account** button will appear. Click that to be taken to the billing screen where you can enter your credit card information before returning to the **config** tab to proceed with the upgrade.
+
+ * If you do have a credit card associated, you will be presented with the updated billing information.
+
+ * Click **Upgrade**.
+
+2) The instance will shut down and begin reprovisioning/relicensing itself. The instance will not be available during this time. You will be returned to the instance dashboard and the instance status will show UPDATING INSTANCE.
+
+3) Once your instance upgrade is complete, it will appear on the instance dashboard as status OK with your newly selected instance size.
+
+*Note, if HarperDB Cloud instance reprovisioning takes longer than 20 minutes, please submit a support ticket here: https://harperdbhelp.zendesk.com/hc/en-us/requests/new.*
+
+## Update Instance Storage
+
+The HarperDB Cloud instance storage size can be increased with the following instructions. This option is only available to Studio organization owners.
+
+Note: Instance storage can only be upgraded once every 6 hours.
+
+1) In the **update storage** panel at the bottom left:
+
+ * Select the new instance storage size.
+
+ * If you do not have a credit card associated with your account, an **Add Credit Card To Account** button will appear. Click that to be taken to the billing screen where you can enter your credit card information before returning to the **config** tab to proceed with the upgrade.
+
+ * If you do have a credit card associated, you will be presented with the updated billing information.
+
+ * Click **Upgrade**.
+
+2) The instance will shut down and begin reprovisioning itself. The instance will not be available during this time. You will be returned to the instance dashboard and the instance status will show UPDATING INSTANCE.
+
+3) Once your instance upgrade is complete, it will appear on the instance dashboard as status OK with your newly selected instance size.
+
+*Note, if this process takes longer than 20 minutes, please submit a support ticket here: https://harperdbhelp.zendesk.com/hc/en-us/requests/new.*
+
+## Remove Instance
+
+The HarperDB instance can be deleted/removed from the Studio with the following instructions. Once this operation is started it cannot be undone. This option is only available to Studio organization owners.
+
+1) In the **remove instance** panel at the bottom left:
+ * Enter the instance name in the text box.
+
+ * The Studio will present you with a warning.
+
+ * Click **Remove**.
+
+2) The instance will begin deleting immediately.
+
+## Restart Instance
+
+The HarperDB Cloud instance can be restarted with the following instructions.
+
+1) In the **restart instance** panel at the bottom right:
+ * Enter the instance name in the text box.
+
+ * The Studio will present you with a warning.
+
+ * Click **Restart**.
+
+2) The instance will begin restarting immediately.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/instance-example-code.md b/site/versioned_docs/version-4.1/harperdb-studio/instance-example-code.md
new file mode 100644
index 00000000..b4b74e5f
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/instance-example-code.md
@@ -0,0 +1,62 @@
+---
+title: Instance Example Code
+---
+
+# Instance Example Code
+
+Example code prepopulated with the instance URL and authorization token for the logged in database user can be found on the **example code** page of the HarperDB Studio. Code samples are generated based on the HarperDB API Documentation Postman collection. Code samples can be accessed with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click **example code** in the instance control bar.
+
+5) Select the appropriate **category** from the left navigation.
+
+6) Select the appropriate **operation** from the left navigation.
+
+7) Select your desired language/variant from the **Choose Programming Language** dropdown.
+
+8) Copy code from the sample code panel using the copy icon.
+
+## Supported Languages
+
+Sample code uses two identifiers: **language** and **variant**.
+
+* **language** is the programming language that the sample code is generated in.
+
+* **variant** is the methodology or library used by the language to send HarperDB requests.
+
+The available language/variant combinations are as follows:
+
+| Language | Variant |
+|--------------|---------------|
+| C# | RestSharp |
+| cURL | cURL |
+| Go | Native |
+| HTTP | HTTP |
+| Java | OkHttp |
+| Java | Unirest |
+| JavaScript | Fetch |
+| JavaScript | jQuery |
+| JavaScript | XHR |
+| NodeJs | Axios |
+| NodeJs | Native |
+| NodeJs | Request |
+| NodeJs | Unirest |
+| Objective-C | NSURLSession |
+| OCaml | Cohttp |
+| PHP | cURL |
+| PHP | HTTP_Request2 |
+| PowerShell | RestMethod |
+| Python | http.client |
+| Python | Requests |
+| Ruby | Net::HTTP |
+| Shell | Httpie |
+| Shell | wget |
+| Swift | URLSession |
+
+
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/instance-metrics.md b/site/versioned_docs/version-4.1/harperdb-studio/instance-metrics.md
new file mode 100644
index 00000000..b2bda847
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/instance-metrics.md
@@ -0,0 +1,19 @@
+---
+title: Instance Metrics
+---
+
+# Instance Metrics
+
+The HarperDB Studio displays instance status and metrics on the instance status page, which can be accessed with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click **status** in the instance control bar.
+
+Once on the instance status page, you can view host system information, [HarperDB logs](../logging), and HarperDB Cloud alarms (if it is a cloud instance).
+
+*Note, the **status** page will only be available to super users.*
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/instances.md b/site/versioned_docs/version-4.1/harperdb-studio/instances.md
new file mode 100644
index 00000000..33e61ab6
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/instances.md
@@ -0,0 +1,144 @@
+---
+title: Instances
+---
+
+# Instances
+
+The HarperDB Studio allows you to administer all of your HarperDB instances in one place. HarperDB currently offers the following instance types:
+
+* **HarperDB Cloud Instance**
+Managed installations of HarperDB, what we call HarperDB Cloud.
+* **5G Wavelength Instance**
+Managed installations of HarperDB running on the Verizon network through AWS Wavelength, what we call 5G Wavelength Instances. *Note, these instances are only accessible via the Verizon network.*
+* **User-Installed Instance**
+Any HarperDB installation that is managed by you. These include instances hosted within your cloud provider accounts (for example, from the AWS or Digital Ocean Marketplaces), privately hosted instances, or instances installed locally.
+
+All interactions between the Studio and your instances take place directly from your browser. HarperDB stores metadata about your instances, which enables the Studio to display these instances when you log in. Beyond that, all traffic is routed from your browser to the HarperDB instances using the standard [HarperDB API](https://api.harperdb.io/).
+
+## Organization Instance List
+A summary view of all instances within an organization is available by clicking on the appropriate organization from the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page. Each instance gets its own card. HarperDB Cloud and user-installed instances are listed together.
+
+## Create a New Instance
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+2) Click the appropriate organization for the instance to be created under.
+3) Click the **Create New HarperDB Cloud Instance + Register User-Installed Instance** card.
+4) Select your desired Instance Type.
+5) For a HarperDB Cloud Instance or a HarperDB 5G Wavelength Instance, click **Create HarperDB Cloud Instance**.
+
+ 1) Fill out Instance Info.
+ 1) Enter Instance Name
+
+ *This will be used to build your instance URL. For example, with subdomain “demo” and instance name “c1” the instance URL would be: https://c1-demo.harperdbcloud.com. The Instance URL will be previewed below.*
+
+ 2) Enter Instance Username
+
+ *This is the username of the initial HarperDB instance super user.*
+
+ 3) Enter Instance Password
+
+ *This is the password of the initial HarperDB instance super user.*
+
+ 2) Click **Instance Details** to move to the next page.
+ 3) Select Instance Specs
+
+ 1) Select Instance RAM
+
+ *HarperDB Cloud Instances are billed based on Instance RAM, this will select the size of your provisioned instance. More on instance specs.*
+
+ 2) Select Storage Size
+
+ *Each instance has a mounted storage volume where your HarperDB data will reside. Storage is provisioned based on space and IOPS. More on IOPS Impact on Performance.*
+
+ 3) Select Instance Region
+
+ *The geographic area where your instance will be provisioned.*
+
+ 4) Click **Confirm Instance Details** to move to the next page.
+ 5) Review your Instance Details, if there is an error, use the back button to correct it.
+ 6) Review the [Privacy Policy](https://harperdb.io/legal/privacy-policy/) and [Terms of Service](https://harperdb.io/legal/harperdb-cloud-terms-of-service/); if you agree, click the **I agree** radio button to confirm.
+ 7) Click **Add Instance**.
+ 8) Your HarperDB Cloud instance will be provisioned in the background. Provisioning typically takes 5-15 minutes. You will receive an email notification when your instance is ready.
+
+## Register User-Installed Instance
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+2) Click the appropriate organization for the instance to be created under.
+3) Click the **Create New HarperDB Cloud Instance + Register User-Installed Instance** card.
+4) Select **Register User-Installed Instance**.
+ 1) Fill out Instance Info.
+
+ 1) Enter Instance Name
+
+ *This is used for descriptive purposes only.*
+ 2) Enter Instance Username
+
+ *The username of a HarperDB super user that is already configured in your HarperDB installation.*
+ 3) Enter Instance Password
+
+ *The password of a HarperDB super user that is already configured in your HarperDB installation.*
+ 4) Enter Host
+
+ *The host to access the HarperDB instance. For example, `harperdb.myhost.com` or `localhost`.*
+ 5) Enter Port
+
+ *The port to access the HarperDB instance. HarperDB defaults to `9925`.*
+ 6) Select SSL
+
+ *If your instance is running over SSL, select the SSL checkbox. If not, you will need to enable mixed content in your browser to allow the HTTPS Studio to access the HTTP instance. If there are issues connecting to the instance, the Studio will display a red error message.*
+
+ 2) Click **Instance Details** to move to the next page.
+ 3) Select Instance Specs
+ 1) Select Instance RAM
+
+ *HarperDB instances are billed based on Instance RAM. Selecting additional RAM will enable the ability for faster and more complex queries.*
+ 4) Click **Confirm Instance Details** to move to the next page.
+ 5) Review your Instance Details, if there is an error, use the back button to correct it.
+ 6) Review the [Privacy Policy](https://harperdb.io/legal/privacy-policy/) and [Terms of Service](https://harperdb.io/legal/harperdb-cloud-terms-of-service/); if you agree, click the **I agree** radio button to confirm.
+ 7) Click **Add Instance**.
+ 8) The HarperDB Studio will register your instance and restart it for the registration to take effect. Your instance will be immediately available after this is complete.
+
+## Delete an Instance
+
+Instance deletion has two different behaviors depending on the instance type.
+
+* **HarperDB Cloud Instance**
+This instance will be permanently deleted, including all data. This process is irreversible and cannot be undone.
+* **User-Installed Instance**
+The instance will be removed from the HarperDB Studio only. This does not uninstall HarperDB from your system and your data will remain intact.
+
+An instance can be deleted as follows:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+2) Click the appropriate organization that the instance belongs to.
+3) Identify the proper instance card and click the trash can icon.
+4) Enter the instance name into the text box.
+
+ *This is done for confirmation purposes to ensure you do not accidentally delete an instance.*
+5) Click the **Do It** button.
+
+## Upgrade an Instance
+
+HarperDB instances can be resized on the [Instance Configuration](./instance-configuration) page.
+
+## Instance Log In/Log Out
+
+The Studio enables users to log in and out of different database users from the instance control panel. To log out of an instance:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+2) Click the appropriate organization that the instance belongs to.
+3) Identify the proper instance card and click the lock icon.
+4) You will immediately be logged out of the instance.
+
+To log in to an instance:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+2) Click the appropriate organization that the instance belongs to.
+3) Identify the proper instance card, it will have an unlocked icon and a status reading PLEASE LOG IN, and click the center of the card.
+4) Enter the database username.
+
+ *The username of a HarperDB user that is already configured in your HarperDB instance.*
+5) Enter the database password.
+
+ *The password of a HarperDB user that is already configured in your HarperDB instance.*
+6) Click **Log In**.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/login-password-reset.md b/site/versioned_docs/version-4.1/harperdb-studio/login-password-reset.md
new file mode 100644
index 00000000..dddda5c1
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/login-password-reset.md
@@ -0,0 +1,42 @@
+---
+title: Login and Password Reset
+---
+
+# Login and Password Reset
+
+## Log In to Your HarperDB Studio Account
+
+To log into your existing HarperDB Studio account:
+
+1) Navigate to the [HarperDB Studio](https://studio.harperdb.io/).
+2) Enter your email address.
+3) Enter your password.
+4) Click **sign in**.
+
+## Reset a Forgotten Password
+
+To reset a forgotten password:
+
+1) Navigate to the HarperDB Studio password reset page.
+2) Enter your email address.
+3) Click **send password reset email**.
+4) If the account exists, you will receive an email with a temporary password.
+5) Navigate back to the HarperDB Studio login page.
+6) Enter your email address.
+7) Enter your temporary password.
+8) Click **sign in**.
+9) You will be taken to a new screen to reset your account password. Enter your new password.
+*Passwords must be a minimum of 8 characters with at least 1 lower case character, 1 upper case character, 1 number, and 1 special character.*
+10) Click the **add account password** button.
+
+## Change Your Password
+
+If you are already logged into the Studio, you can change your password through the user interface.
+
+1) Navigate to the HarperDB Studio profile page.
+2) In the **password** section, enter:
+
+ * Current password.
+ * New password.
+ * New password again *(for verification)*.
+3) Click the **Update Password** button.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/manage-charts.md b/site/versioned_docs/version-4.1/harperdb-studio/manage-charts.md
new file mode 100644
index 00000000..f96505f5
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/manage-charts.md
@@ -0,0 +1,79 @@
+---
+title: Charts
+---
+
+# Charts
+
+The HarperDB Studio includes a charting feature within an instance. Charts are generated in real time based on your existing data and automatically refreshed every 15 seconds. Instance charts can be accessed with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https:/studio.harperdb.io/organizations) page.
+2) Click the appropriate organization that the instance belongs to.
+3) Select your desired instance.
+4) Click **charts** in the instance control bar.
+
+## Creating a New Chart
+
+Charts are generated based on SQL queries, therefore to build a new chart you first need to build a query. Instructions as follows (starting on the charts page described above):
+
+1) Click **query** in the instance control bar.
+2) Enter the SQL query you would like to generate a chart from.
+
+ *For example, using the dog demo data from the API Docs, we can get the average dog age per owner with the following query: `SELECT AVG(age) as avg_age, owner_name FROM dev.dog GROUP BY owner_name`.*
+
+3) Click **Execute**.
+
+4) Click **create chart** at the top right of the results table.
+
+5) Configure your chart.
+
+ 1) Choose chart type.
+
+ *HarperDB Studio offers many standard charting options like line, bar, etc.*
+
+ 2) Choose a data column.
+
+ *This column will be used to plot the data point. Typically, this is the values being calculated in the `SELECT` statement. Depending on the chart type, you can select multiple data columns to display on a single chart.*
+ 3) Depending on the chart type, you will need to select a grouping.
+
+ *This could be labeled as x-axis, label, etc. This will be used to group the data, typically this is what you used in your **GROUP BY** clause.*
+
+ 4) Enter a chart name.
+
+ *Used for identification purposes and will be displayed at the top of the chart.*
+
+ 5) Choose visible to all org users toggle.
+
+ *Leaving this option off will limit chart visibility to just your HarperDB Studio user. Toggling it on will enable all users with this Organization to view this chart.*
+
+ 6) Click **Add Chart**.
+
+ 7) The chart will now be visible on the **charts** page.
+
+The example query above, configured as a bar chart, results in the following chart:
+
+
+
+
+## Downloading Charts
+HarperDB Studio charts can be downloaded in SVG, PNG, and CSV format. Instructions as follows (starting on the charts page described above):
+
+1) Identify the chart you would like to export.
+2) Click the three bars icon.
+
+3) Select the appropriate download option.
+
+4) The Studio will generate the export and begin downloading immediately.
+
+## Delete a Chart
+
+Delete a chart as follows (starting on the charts page described above):
+
+1) Identify the chart you would like to delete.
+
+2) Click the X icon.
+
+3) Click the **confirm delete chart** button.
+
+4) The chart will be deleted.
+
+Deleting a chart that is visible to all Organization users will delete it for all users.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/manage-clustering.md b/site/versioned_docs/version-4.1/harperdb-studio/manage-clustering.md
new file mode 100644
index 00000000..7155249d
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/manage-clustering.md
@@ -0,0 +1,94 @@
+---
+title: Manage Clustering
+---
+
+# Manage Clustering
+
+HarperDB instance clustering and replication can be configured directly through the HarperDB Studio. It is recommended to read through the clustering documentation first to gain a strong understanding of HarperDB clustering behavior.
+
+
+
+All clustering configuration is handled through the **cluster** page of the HarperDB Studio, accessed with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https:/studio.harperdb.io/organizations) page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click **cluster** in the instance control bar.
+
+Note, the **cluster** page will only be available to super users.
+
+---
+## Initial Configuration
+
+HarperDB instances do not have clustering configured by default. The HarperDB Studio will walk you through the initial configuration. Upon entering the **cluster** screen for the first time you will need to complete the following configuration. Configurations are set in the **enable clustering** panel on the left while actions are described in the middle of the screen.
+
+1) Create a cluster user (read more about this here: Clustering Users and Roles).
+ * Enter username.
+
+ * Enter password.
+
+ * Click **Create Cluster User**.
+
+2) Click **Set Cluster Node Name**.
+3) Click **Enable Instance Clustering**.
+
+At this point the Studio will restart your HarperDB Instance, required for the configuration changes to take effect.
+
+---
+
+## Manage Clustering
+Once initial clustering configuration is completed, you are presented with a clustering management screen with the following properties:
+
+* **connected instances**
+
+ Displays all instances within the Studio Organization that this instance manages a connection with.
+
+* **unconnected instances**
+
+ Displays all instances within the Studio Organization that this instance does not manage a connection with.
+
+* **unregistered instances**
+
+ Displays all instances outside of the Studio Organization that this instance manages a connection with.
+
+* **manage clustering**
+
+ Once instances are connected, this will display clustering management options for all connected instances and all schemas and tables.
+
+---
+
+## Connect an Instance
+
+HarperDB Instances can be clustered together with the following instructions.
+
+1) Ensure clustering has been configured on both instances and a cluster user with identical credentials exists on both.
+
+2) Identify the instance you would like to connect from the **unconnected instances** panel.
+
+3) Click the plus icon next to the appropriate instance.
+
+4) If configurations are correct, all schemas will sync across the cluster, then appear in the **manage clustering** panel. If there is a configuration issue, a red exclamation icon will appear, click it to learn more about what could be causing the issue.
+
+---
+
+## Disconnect an Instance
+
+HarperDB Instances can be disconnected with the following instructions.
+
+1) Identify the instance you would like to disconnect from the **connected instances** panel.
+
+2) Click the minus icon next to the appropriate instance.
+
+---
+
+## Manage Replication
+
+Subscriptions must be configured in order to move data between connected instances. Read more about subscriptions here: Creating A Subscription. The **manage clustering** panel displays a table with each row representing a channel per instance. Cells are bolded to indicate a change in the column. Publish and subscribe replication can be configured per table with the following instructions:
+
+1) Identify the instance, schema, and table for replication to be configured.
+
+2) For publish, click the toggle switch in the **publish** column.
+
+3) For subscribe, click the toggle switch in the **subscribe** column.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/manage-functions.md b/site/versioned_docs/version-4.1/harperdb-studio/manage-functions.md
new file mode 100644
index 00000000..3a74d7e5
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/manage-functions.md
@@ -0,0 +1,163 @@
+---
+title: Manage Functions
+---
+
+# Manage Functions
+
+HarperDB Custom Functions are enabled by default and can be configured further through the HarperDB Studio. It is recommended to read through the Custom Functions documentation first to gain a strong understanding of HarperDB Custom Functions behavior.
+
+
+
+All Custom Functions configuration is handled through the **functions** page of the HarperDB Studio, accessed with the following instructions:
+
+1) Navigate to the HarperDB Studio Organizations page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click **functions** in the instance control bar.
+
+*Note, the **functions** page will only be available to super users.*
+
+## Manage Projects
+
+On the **functions** page of the HarperDB Studio you are presented with a functions management screen with the following properties:
+
+* **projects**
+
+ Displays a list of Custom Functions projects residing on this instance.
+* **/project_name/routes**
+
+ Only displayed if there is an existing project. Displays the routes files contained within the selected project.
+* **/project_name/helpers**
+
+ Only displayed if there is an existing project. Displays the helper files contained within the selected project.
+* **/project_name/static**
+
+ Only displayed if there is an existing project. Displays the static file count and a link to the static files contained within the selected project. Note, static files cannot currently be deployed through the Studio and must be deployed via the [HarperDB API](https:/api.harperdb.io/) or manually to the server (not applicable with HarperDB Cloud).
+* **Root File Directory**
+
+ Displays the root file directory where the Custom Functions projects reside on this instance.
+* **Custom Functions Server URL**
+
+ Displays the base URL in which all Custom Functions are accessed for this instance.
+
+
+## Create a Project
+
+HarperDB Custom Functions Projects can be initialized with the following instructions.
+
+1) Click the plus icon next to the **projects** heading. *(If this is your first project, skip this step.)*
+
+2) Enter the project name in the text box located under the **projects** heading.
+
+3) Click the check mark icon next to the text box.
+
+4) The Studio will take a few moments to provision a new project based on the [Custom Functions template](https:/github.com/HarperDB/harperdb-custom-functions-template).
+
+5) The Custom Functions project is now created and ready to modify.
+
+## Modify a Project
+
+Custom Functions routes and helper functions can be modified directly through the Studio. From the **functions** page:
+
+1) Select the appropriate **project**.
+
+2) Select the appropriate **route** or **helper**.
+
+3) Modify the code with your desired changes.
+
+4) Click the save icon at the bottom right of the screen.
+
+ *Note, saving modifications will restart the Custom Functions server on your HarperDB instance and may result in up to 60 seconds of downtime for all Custom Functions.*
+
+## Create Additional Routes/Helpers
+
+To add an additional **route** to your Custom Functions project, from the **functions** page:
+
+1) Select the appropriate Custom Functions **project**.
+
+2) Click the plus icon to the right of the **routes** header.
+
+3) Enter the name of the new route in the textbox that appears.
+
+4) Click the check icon to create the new route.
+
+ *Note, adding a route will restart the Custom Functions server on your HarperDB instance and may result in up to 60 seconds of downtime for all Custom Functions.*
+
+To add an additional **helper** to your Custom Functions project, from the **functions** page:
+
+1) Select the appropriate Custom Functions **project**.
+
+2) Click the plus icon to the right of the **helpers** header.
+
+3) Enter the name of the new helper in the textbox that appears.
+
+4) Click the check icon to create the new helper.
+
+ *Note, adding a helper will restart the Custom Functions server on your HarperDB instance and may result in up to 60 seconds of downtime for all Custom Functions.*
+
+## Delete a Project/Route/Helper
+
+To delete a Custom Functions project from the **functions** page:
+
+1) Click the minus icon to the right of the **projects** header.
+
+2) Click the red minus icon to the right of the Custom Functions project you would like to delete.
+
+3) Confirm deletion by clicking the red check icon.
+
+ *Note, deleting a project will restart the Custom Functions server on your HarperDB instance and may result in up to 60 seconds of downtime for all Custom Functions.*
+
+To delete a Custom Functions _project route_ from the **functions** page:
+
+1) Select the appropriate Custom Functions **project**.
+
+2) Click the minus icon to the right of the **routes** header.
+
+3) Click the red minus icon to the right of the Custom Functions route you would like to delete.
+
+4) Confirm deletion by clicking the red check icon.
+
+ *Note, deleting a route will restart the Custom Functions server on your HarperDB instance and may result in up to 60 seconds of downtime for all Custom Functions.*
+
+To delete a Custom Functions _project helper_ from the **functions** page:
+
+1) Select the appropriate Custom Functions **project**.
+
+2) Click the minus icon to the right of the **helpers** header.
+
+3) Click the red minus icon to the right of the Custom Functions helper you would like to delete.
+
+4) Confirm deletion by clicking the red check icon.
+
+   *Note, deleting a helper will restart the Custom Functions server on your HarperDB instance and may result in up to 60 seconds of downtime for all Custom Functions.*
+
+## Deploy Custom Functions Project to Other Instances
+
+The HarperDB Studio provides the ability to deploy Custom Functions projects to additional HarperDB instances within the same Studio Organization. To deploy Custom Functions projects to additional instances, starting from the **functions** page:
+
+1) Select the **project** you would like to deploy.
+
+2) Click the **deploy** button at the top right.
+
+3) A list of instances (excluding the current instance) within the organization will be displayed in tabular format with the following information:
+
+ * **Instance Name**: The name used to describe the instance.
+
+ * **Instance URL**: The URL used to access the instance.
+
+ * **CF Capable**: Describes if the instance version supports Custom Functions (yes/no).
+
+ * **CF Enabled**: Describes if Custom Functions are configured and enabled on the instance (yes/no).
+
+ * **Has Project**: Describes if the selected Custom Functions project has been previously deployed to the instance (yes/no).
+
+ * **Deploy**: Button used to deploy the project to the instance.
+
+   * **Remove**: Button used to remove the project from the instance. *Note, this will only be visible if the project has been previously deployed to the instance.*
+
+4) In the appropriate instance row, click the **deploy** button.
+
+ *Note, deploying a project will restart the Custom Functions server on the HarperDB instance receiving the deployment and may result in up to 60 seconds of downtime for all Custom Functions.*
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/manage-instance-roles.md b/site/versioned_docs/version-4.1/harperdb-studio/manage-instance-roles.md
new file mode 100644
index 00000000..e301e7d8
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/manage-instance-roles.md
@@ -0,0 +1,76 @@
+---
+title: Manage Instance Roles
+---
+
+# Manage Instance Roles
+
+HarperDB users can be managed directly through the HarperDB Studio. It is recommended to read through the users & roles documentation to gain a strong understanding of how they operate.
+
+
+
+Instance role configuration is handled through the roles page of the HarperDB Studio, accessed with the following instructions:
+
+1) Navigate to the HarperDB Studio Organizations page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click **roles** in the instance control bar.
+
+*Note, the **roles** page will only be available to super users.*
+
+
+
+The *roles management* screen consists of the following panels:
+
+* **super users**
+
+ Displays all super user roles for this instance.
+* **cluster users**
+
+ Displays all cluster user roles for this instance.
+* **standard roles**
+
+ Displays all standard roles for this instance.
+* **role permission editing**
+
+ Once a role is selected for editing, permissions will be displayed here in JSON format.
+
+*Note, when new tables are added that are not configured, the Studio will generate configuration values with permissions defaulting to `false`.*
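+
+For reference, below is a minimal sketch of what the permission JSON for a standard role might look like in the role permission editing panel. The `dev` schema, `dog` table, and permission values are purely illustrative; consult the users & roles documentation for the full permission structure:
+
+```json
+{
+  "super_user": false,
+  "dev": {
+    "tables": {
+      "dog": {
+        "read": true,
+        "insert": false,
+        "update": false,
+        "delete": false,
+        "attribute_permissions": []
+      }
+    }
+  }
+}
+```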
+
+## Role Management
+
+#### Create a Role
+
+1) Click the plus icon at the top right of the appropriate role section.
+
+2) Enter the role name.
+
+3) Click the green check mark.
+
+4) Configure the role permissions in the role permission editing panel.
+
+ *Note, to have the Studio generate attribute permissions JSON, toggle **show all attributes** at the top right of the role permission editing panel.*
+
+5) Click **Update Role Permissions**.
+
+#### Modify a Role
+
+1) Click the appropriate role from the appropriate role section.
+
+2) Modify the role permissions in the role permission editing panel.
+
+ *Note, to have the Studio generate attribute permissions JSON, toggle **show all attributes** at the top right of the role permission editing panel.*
+
+3) Click **Update Role Permissions**.
+
+#### Delete a Role
+
+Deleting a role is permanent and irreversible. A role cannot be removed if users are associated with it.
+
+1) Click the minus icon at the top right of the appropriate role section.
+
+2) Identify the appropriate role to delete and click the red minus sign in the same row.
+
+3) Click the red check mark to confirm deletion.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/manage-instance-users.md b/site/versioned_docs/version-4.1/harperdb-studio/manage-instance-users.md
new file mode 100644
index 00000000..4871cf88
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/manage-instance-users.md
@@ -0,0 +1,63 @@
+---
+title: Manage Instance Users
+---
+
+# Manage Instance Users
+
+HarperDB instance users can be managed directly through the HarperDB Studio. It is recommended to read through the users & roles documentation first to gain a strong understanding of how HarperDB users and roles operate.
+
+
+
+Instance user configuration is handled through the **users** page of the HarperDB Studio, accessed with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https:/studio.harperdb.io/organizations) page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click **users** in the instance control bar.
+
+*Note, the **users** page will only be available to super users.*
+
+## Add a User
+
+HarperDB instance users can be added with the following instructions.
+
+1) In the **add user** panel on the left enter:
+
+ * New user username.
+
+ * New user password.
+
+ * Select a role.
+
+ *Learn more about role management here: [Manage Instance Roles](./manage-instance-roles).*
+
+2) Click **Add User**.
+
+## Edit a User
+
+HarperDB instance users can be modified with the following instructions.
+
+1) In the **existing users** panel, click the row of the user you would like to edit.
+
+2) To change a user’s password:
+
+ 1) In the **Change user password** section, enter the new password.
+
+ 2) Click **Update Password**.
+
+3) To change a user’s role:
+
+ 1) In the **Change user role** section, select the new role.
+
+ 2) Click **Update Role**.
+
+4) To delete a user:
+
+ 1) In the **Delete User** section, type the username into the textbox.
+
+ *This is done for confirmation purposes.*
+
+ 2) Click **Delete User**.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/manage-schemas-browse-data.md b/site/versioned_docs/version-4.1/harperdb-studio/manage-schemas-browse-data.md
new file mode 100644
index 00000000..41493b96
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/manage-schemas-browse-data.md
@@ -0,0 +1,132 @@
+---
+title: Manage Schemas / Browse Data
+---
+
+# Manage Schemas / Browse Data
+
+Manage instance schemas/tables and browse data in tabular format with the following instructions:
+
+1) Navigate to the HarperDB Studio Organizations page.
+2) Click the appropriate organization that the instance belongs to.
+3) Select your desired instance.
+4) Click **browse** in the instance control bar.
+
+Once on the instance browse page you can view data, manage schemas and tables, add new data, and more.
+
+## Manage Schemas and Tables
+
+#### Create a Schema
+
+1) Click the plus icon at the top right of the schemas section.
+2) Enter the schema name.
+3) Click the green check mark.
+
+
+#### Delete a Schema
+
+Deleting a schema is permanent and irreversible. Deleting a schema removes all tables and data within it.
+
+1) Click the minus icon at the top right of the schemas section.
+2) Identify the appropriate schema to delete and click the red minus sign in the same row.
+3) Click the red check mark to confirm deletion.
+
+
+#### Create a Table
+
+1) Select the desired schema from the schemas section.
+2) Click the plus icon at the top right of the tables section.
+3) Enter the table name.
+4) Enter the primary key.
+
+ *The primary key is also often referred to as the hash attribute in the studio, and it defines the unique identifier for each row in your table.*
+5) Click the green check mark.
+
+
+#### Delete a Table
+Deleting a table is permanent and irreversible. Deleting a table removes all data within it.
+
+1) Select the desired schema from the schemas section.
+2) Click the minus icon at the top right of the tables section.
+3) Identify the appropriate table to delete and click the red minus sign in the same row.
+4) Click the red check mark to confirm deletion.
+
+## Manage Table Data
+
+The following section assumes you have selected the appropriate table from the schema/table browser.
+
+
+
+#### Filter Table Data
+
+1) Click the magnifying glass icon at the top right of the table browser to expand the search filters.
+2) Enter your desired filter criteria.
+3) The results will be filtered appropriately.
+
+
+#### Load CSV Data
+
+1) Click the data icon at the top right of the table browser. You will be directed to the CSV upload page where you can choose to import a CSV by URL or upload a CSV file.
+2) To import a CSV by URL:
+ 1) Enter the URL in the **CSV file URL** textbox.
+ 2) Click **Import From URL**.
+ 3) The CSV will load, and you will be redirected back to browse table data.
+3) To upload a CSV file:
+ 1) Click **Click or Drag to select a .csv file** (or drag your CSV file from your file browser).
+ 2) Navigate to your desired CSV file and select it.
+ 3) Click **Insert X Records**, where X is the number of records in your CSV.
+ 4) The CSV will load, and you will be redirected back to browse table data.
+
+
+#### Add a Record
+
+1) Click the plus icon at the top right of the table browser.
+2) The Studio will pre-populate existing table attributes in JSON format.
+
+ *The primary key is not included, but you can add it in and set it to your desired value. Auto-maintained fields are not included and cannot be manually set. You may enter a JSON array to insert multiple records in a single transaction.*
+3) Enter values to be added to the record.
+
+ *You may add new attributes to the JSON; they will be reflexively added to the table.*
+4) Click the **Add New** button.
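+
+As noted in the steps above, a JSON array can be entered to insert multiple records in a single transaction. Below is a minimal sketch using the dog demo data referenced elsewhere in these docs; the attribute names and values are illustrative:
+
+```json
+[
+  { "id": 1, "dog_name": "Penny", "owner_name": "Kyle", "age": 7 },
+  { "id": 2, "dog_name": "Harper", "owner_name": "Stephen", "age": 5 }
+]
+```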
+
+
+#### Edit a Record
+
+1) Click the record/row you would like to edit.
+2) Modify the desired values.
+
+ *You may add new attributes to the JSON; they will be reflexively added to the table.*
+
+3) Click the **save icon**.
+
+
+#### Delete a Record
+
+Deleting a record is permanent and irreversible. If transaction logging is turned on, the delete transaction will be recorded as well as the data that was deleted.
+
+1) Click the record/row you would like to delete.
+2) Click the **delete icon**.
+3) Confirm deletion by clicking the **check icon**.
+
+## Browse Table Data
+
+The following section assumes you have selected the appropriate table from the schema/table browser.
+
+#### Browse Table Data
+
+The first page of table data is automatically loaded on table selection. Paging controls are at the bottom of the table. Here you can:
+
+* Page left and right using the arrows.
+* Type in the desired page.
+* Change the page size (the amount of records displayed in the table).
+
+
+#### Refresh Table Data
+
+Click the refresh icon at the top right of the table browser.
+
+
+
+#### Automatically Refresh Table Data
+
+Toggle the auto switch at the top right of the table browser. The table data will now automatically refresh every 15 seconds. Filters and pages will remain set for refreshed data.
+
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/organizations.md b/site/versioned_docs/version-4.1/harperdb-studio/organizations.md
new file mode 100644
index 00000000..f9d5cb50
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/organizations.md
@@ -0,0 +1,105 @@
+---
+title: Organizations
+---
+
+# Organizations
+HarperDB Studio organizations provide the ability to group HarperDB Cloud Instances. Organization behavior is as follows:
+
+* Billing occurs at the organization level to a single credit card.
+* Organizations retain their own unique HarperDB Cloud subdomain.
+* Cloud instances reside within an organization.
+* Studio users can be invited to organizations to share instances.
+
+
+An organization is automatically created for you when you sign up for HarperDB Studio. If you only have one organization, the Studio will automatically bring you to your organization’s page.
+
+---
+
+## List Organizations
+A summary view of all organizations your user belongs to can be viewed on the [HarperDB Studio Organizations](https:/studio.harperdb.io/?redirect=/organizations) page. You can navigate to this page at any time by clicking the **all organizations** link at the top of the HarperDB Studio.
+
+## Create a New Organization
+A new organization can be created as follows:
+
+1) Navigate to the [HarperDB Studio Organizations](https:/studio.harperdb.io/?redirect=/organizations) page.
+2) Click the **Create a New Organization** card.
+3) Fill out new organization details
+ * Enter Organization Name
+ *This is used for descriptive purposes only.*
+ * Enter Organization Subdomain
+ *Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: https:/c1-demo.harperdbcloud.com.*
+4) Click Create Organization.
+
+## Delete an Organization
+An organization cannot be deleted until all instances have been removed. An organization can be deleted as follows:
+
+1) Navigate to the HarperDB Studio Organizations page.
+2) Identify the proper organization card and click the trash can icon.
+3) Enter the organization name into the text box.
+
+ *This is done for confirmation purposes to ensure you do not accidentally delete an organization.*
+4) Click the **Do It** button.
+
+## Manage Users
+HarperDB Studio organization owners can manage users including inviting new users, removing users, and toggling ownership.
+
+
+
+#### Inviting a User
+A new user can be invited to an organization as follows:
+
+1) Navigate to the [HarperDB Studio Organizations](https:/studio.harperdb.io/?redirect=/organizations) page.
+2) Click the appropriate organization card.
+3) Click **users** at the top of the screen.
+4) In the **add user** box, enter the new user’s email address.
+5) Click **Add User**.
+
+Users may or may not already be HarperDB Studio users when adding them to an organization. If the HarperDB Studio account already exists, the user will receive an email notification alerting them to the organization invitation. If the user does not have a HarperDB Studio account, they will receive an email welcoming them to HarperDB Studio.
+
+---
+
+#### Toggle a User’s Organization Owner Status
+Organization owners have full access to the organization, including the ability to manage organization users; create, modify, and delete instances; and delete the organization. Users must have accepted their invitation prior to being promoted to an owner. A user’s organization owner status can be toggled as follows:
+
+1) Navigate to the HarperDB Studio Organizations page.
+2) Click the appropriate organization card.
+3) Click **users** at the top of the screen.
+4) Click the appropriate user from the **existing users** section.
+5) Toggle the **Is Owner** switch to the desired status.
+
+---
+
+#### Remove a User from an Organization
+Users may be removed from an organization at any time. Removing a user from an organization will not delete their HarperDB Studio account, it will only remove their access to the specified organization. A user can be removed from an organization as follows:
+
+1) Navigate to the [HarperDB Studio Organizations](https:/studio.harperdb.io/?redirect=/organizations) page.
+2) Click the appropriate organization card.
+3) Click **users** at the top of the screen.
+4) Click the appropriate user from the **existing users** section.
+5) Type **DELETE** in the text box in the **Delete User** row.
+
+ *This is done for confirmation purposes to ensure you do not accidentally delete a user.*
+6) Click **Delete User**.
+
+## Manage Billing
+
+Billing is configured per organization and will be billed to the stored credit card at appropriate intervals (monthly or annually depending on the registered instance). Billing settings can be configured as follows:
+
+1) Navigate to the [HarperDB Studio Organizations](https:/studio.harperdb.io/?redirect=/organizations) page.
+2) Click the appropriate organization card.
+3) Click **billing** at the top of the screen.
+
+Here organization owners can view invoices, manage coupons, and manage the associated credit card.
+
+
+
+*HarperDB billing and payments are managed via Stripe.*
+
+
+
+### Add a Coupon
+
+Coupons are applicable towards any paid tier or user-installed instance and you can change your subscription at any time. Coupons can be added to your Organization as follows:
+
+1) In the coupons panel of the **billing** page, enter your coupon code.
+2) Click **Add Coupon**.
+3) The coupon will then be available and displayed in the coupons panel.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/query-instance-data.md b/site/versioned_docs/version-4.1/harperdb-studio/query-instance-data.md
new file mode 100644
index 00000000..5c3ae28f
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/query-instance-data.md
@@ -0,0 +1,53 @@
+---
+title: Query Instance Data
+---
+
+# Query Instance Data
+
+SQL queries can be executed directly through the HarperDB Studio with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https:/studio.harperdb.io/organizations) page.
+2) Click the appropriate organization that the instance belongs to.
+3) Select your desired instance.
+4) Click **query** in the instance control bar.
+5) Enter your SQL query in the SQL query window.
+6) Click **Execute**.
+
+*Please note, the Studio will execute the query exactly as entered. For example, if you attempt to `SELECT *` from a table with millions of rows, you will most likely crash your browser.*
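+
+For example, when exploring a large table it is safer to constrain the result set rather than selecting everything. Below is a minimal sketch using the dog demo data referenced in the Charts documentation; the table and column names are illustrative, and the `LIMIT` clause keeps the result set to a browser-friendly size:
+
+```sql
+SELECT id, dog_name, owner_name, age
+FROM dev.dog
+LIMIT 100
+```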
+
+## Browse Query Results Set
+
+#### Browse Results Set Data
+
+The first page of results set data is automatically loaded on query execution. Paging controls are at the bottom of the table. Here you can:
+
+* Page left and right using the arrows.
+* Type in the desired page.
+* Change the page size (the amount of records displayed in the table).
+
+#### Refresh Results Set
+
+Click the refresh icon at the top right of the results set table.
+
+#### Automatically Refresh Results Set
+
+Toggle the auto switch at the top right of the results set table. The results set will now automatically refresh every 15 seconds. Filters and pages will remain set for refreshed data.
+
+## Query History
+
+Query history is stored in your local browser cache. Executed queries are listed with the most recent at the top in the **query history** section.
+
+
+#### Rerun Previous Query
+
+* Identify the query from the **query history** list.
+* Click the appropriate query. It will be loaded into the **sql query** input box.
+* Click **Execute**.
+
+#### Clear Query History
+
+Click the trash can icon at the top right of the **query history** section.
+
+## Create Charts
+
+The HarperDB Studio includes a charting feature where you can build charts based on your specified queries. Visit the Charts documentation for more information.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/harperdb-studio/resources.md b/site/versioned_docs/version-4.1/harperdb-studio/resources.md
new file mode 100644
index 00000000..3eaf0a4a
--- /dev/null
+++ b/site/versioned_docs/version-4.1/harperdb-studio/resources.md
@@ -0,0 +1,43 @@
+---
+title: Resources (Marketplace, Drivers, Tutorials, & Example Code)
+---
+
+# Resources (Marketplace, Drivers, Tutorials, & Example Code)
+
+HarperDB Studio resources are available regardless of whether or not you are logged in.
+
+## HarperDB Marketplace
+
+The [HarperDB Marketplace](https:/studio.harperdb.io/resources/marketplace/active) is a collection of SDKs and connectors that enable developers to expand upon HarperDB for quick and easy solution development. Extensions are built and supported by the HarperDB Community. Each extension is hosted on the appropriate package manager or host.
+
+
+
+To download a Marketplace extension:
+
+1) Navigate to the [HarperDB Marketplace](https:/studio.harperdb.io/resources/marketplace/active) page.
+2) Identify the extension you would like to use.
+3) Click the link to the package.
+4) Follow the extension’s instructions to proceed.
+
+You can submit your rating for each extension by clicking on the stars.
+
+## HarperDB Drivers
+
+HarperDB offers standard drivers to connect real-time HarperDB data with BI, analytics, reporting and data visualization technologies. Drivers are built and maintained by [CData Software](https:/www.cdata.com/drivers/harperdb/).
+
+
+
+To download a driver:
+
+1) Navigate to the [HarperDB Drivers](https:/studio.harperdb.io/resources/marketplace/active) page.
+2) Identify the driver you would like to use.
+3) Click the download link.
+4) For additional instructions, visit the support link on the driver card.
+
+## Video Tutorials
+
+HarperDB offers video tutorials available in the Studio on the [HarperDB Tutorials](https://studio.harperdb.io/resources/tutorials/UExsZ1RNVEtzeXBTNUdJbjRZaTNOeEM0aW5YX3RBNU85SS4yODlGNEE0NkRGMEEzMEQy) page as well as our [YouTube channel](https://www.youtube.com/playlist?list=PLlgTMTKsypS5GIn4Yi3NxC4inX_tA5O9I). The HarperDB Studio is changing all the time; as a result, the videos may not include all of the current Studio features.
+
+## Example Code
+
+The [code examples](https:/studio.harperdb.io/resources/examples/QuickStart%20Examples/Create%20dev%20Schema) page offers example code for many different programming languages. These samples will include a placeholder for your authorization token. Full code examples with the authorization token prepopulated are available within individual instance pages.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/index.md b/site/versioned_docs/version-4.1/index.md
new file mode 100644
index 00000000..fe8e0795
--- /dev/null
+++ b/site/versioned_docs/version-4.1/index.md
@@ -0,0 +1,17 @@
+---
+title: Documentation
+---
+
+# Documentation
+
+HarperDB's documentation covers installation, getting started, APIs, security, and much more. Browse the topics at left, or choose one of the commonly used documentation sections below.
+
+***
+
+* [Install HarperDB Locally](./install-harperdb/)
+* [Getting Started](./getting-started/)
+* [HarperDB Operations API](https:/api.harperdb.io)
+* [HarperDB Studio](./harperdb-studio/)
+* [HarperDB Cloud](./harperdb-cloud/)
+* [Developer Project Examples](https:/github.com/search?q=harperdb)
+* [Support](./support)
diff --git a/site/versioned_docs/version-4.1/install-harperdb/index.md b/site/versioned_docs/version-4.1/install-harperdb/index.md
new file mode 100644
index 00000000..72c05115
--- /dev/null
+++ b/site/versioned_docs/version-4.1/install-harperdb/index.md
@@ -0,0 +1,61 @@
+---
+title: Install HarperDB
+---
+
+# Install HarperDB
+
+This documentation contains information for installing HarperDB locally. Note that if you’d like to get up and running quickly, you can try a [managed instance with HarperDB Cloud](https:/studio.harperdb.io/sign-up). HarperDB is a cross-platform database; we recommend Linux for production use, but HarperDB can run on Windows and Mac as well, for development purposes. Installation is usually very simple and just takes a few steps, but there are a few different options documented here.
+
+HarperDB runs on Node.js, so if you do not have it installed, you need to do that first (if you already have it installed, you can skip to installing HarperDB itself). Node.js can be downloaded and installed from [their site](https://nodejs.org/). For Linux and Mac, we recommend installing and managing Node versions with [NVM, which has instructions for installation](https://github.com/nvm-sh/nvm), but generally NVM can be installed with:
+```bash
+curl -o- https:/raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
+```
+Then log out and log back in, and install Node.js using nvm. We recommend using the LTS version, but all currently maintained Node versions are supported (currently version 14 and newer; make sure to always use the latest minor/patch release for the major version):
+
+```bash
+nvm install 18
+```
+
+### Install and Start HarperDB
+Then you can install HarperDB with NPM and start it:
+
+```bash
+npm install -g harperdb
+harperdb
+```
+
+HarperDB will automatically start after installation.
+
+If you are setting up a production server on Linux, [we have much more extensive documentation on how to configure volumes for database storage, set up a systemd script, and configure your operating system for use as a database server in our Linux installation guide](./linux).
+
+
+
+# With Docker
+
+If you would like to run HarperDB in Docker, install [Docker Desktop](https:/docs.docker.com/desktop/) on your Mac or Windows computer. Otherwise, install the [Docker Engine](https:/docs.docker.com/engine/install/) on your Linux server.
+
+Once Docker Desktop or Docker Engine is installed, visit our [Docker Hub page](https:/hub.docker.com/r/harperdb/harperdb) for information and examples on how to run a HarperDB container.
+
+# Offline Install
+
+If you need to install HarperDB on a device that doesn't have an Internet connection, you can choose your version, download the npm package, and install it directly (you’ll still need Node.js and NPM):
+
+Download Install Package
+
+
+Once you’ve downloaded the .tgz file, run the following command from the directory where you’ve placed it:
+
+```bash
+npm install -g harperdb-X.X.X.tgz
+harperdb install
+```
+
+For more information visit the [HarperDB Command Line Interface](../harperdb-cli) guide.
+
+
+# Installation on Less Common Platforms
+
+HarperDB comes with binaries for standard AMD64/x64 or ARM64 CPU architectures on Linux, Windows (x64 only), and Mac (including Apple Silicon). However, if you are installing on a less common platform (Alpine, for example), you will need to ensure that you have build tools installed for the installation process to compile the binaries (this is handled automatically), including:
+* [Go](https:/go.dev/dl/): version 1.19.1
+* GCC
+* Make
+* Python v3.7, v3.8, v3.9, or v3.10
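+
+As an illustration, on Alpine Linux these build tools might be installed with something like the following. This is a sketch only; package names and available versions vary by distribution, so verify them against the version requirements listed above:
+
+```bash
+# Install the toolchain needed to compile HarperDB's native binaries (verify versions)
+apk add --no-cache go gcc make python3
+```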
diff --git a/site/versioned_docs/version-4.1/install-harperdb/linux.md b/site/versioned_docs/version-4.1/install-harperdb/linux.md
new file mode 100644
index 00000000..8435985c
--- /dev/null
+++ b/site/versioned_docs/version-4.1/install-harperdb/linux.md
@@ -0,0 +1,208 @@
+---
+title: Linux Installation and Configuration
+---
+
+# Linux Installation and Configuration
+
+If you wish to install locally or already have a configured server, see the basic [Installation Guide](./)
+
+The following is a recommended way to configure Linux and install HarperDB. These instructions should work reasonably well for any public cloud or on-premises Linux instance.
+
+---
+
+These instructions assume that the following has already been completed:
+
+1. Linux is installed
+1. Basic networking is configured
+1. A non-root user account dedicated to HarperDB with sudo privileges exists
+1. An additional volume for storing HarperDB files is attached to the Linux instance
+1. Traffic to ports 9925 (HarperDB Operations API), 9926 (HarperDB Custom Functions), and 9932 (HarperDB Clustering) is permitted
+
+For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default “ubuntu” user account.
+
+---
+
+### (Optional) LVM Configuration
+Logical Volume Manager (LVM) can be used to stripe multiple disks together to form a single logical volume. If striping disks together is not a requirement, skip these steps.
+
+Find disk that already has a partition
+
+```bash
+used_disk=$(lsblk -P -I 259 | grep "nvme.n1.*part" | grep -o "nvme.n1")
+```
+
+Create array of free disks
+
+```bash
+declare -a free_disks
+mapfile -t free_disks < <(lsblk -P -I 259 | grep "nvme.n1.*disk" | grep -o "nvme.n1" | grep -v "$used_disk")
+```
+
+Get quantity of free disks
+
+```bash
+free_disks_qty=${#free_disks[@]}
+```
+
+Construct pvcreate command
+
+```bash
+cmd_string=""
+for i in "${free_disks[@]}"
+do
+cmd_string="$cmd_string /dev/$i"
+done
+```
+
+Initialize disks for use by LVM
+
+```bash
+pvcreate_cmd="pvcreate $cmd_string"
+sudo $pvcreate_cmd
+```
+
+Create volume group
+
+```bash
+vgcreate_cmd="vgcreate hdb_vg $cmd_string"
+sudo $vgcreate_cmd
+```
+
+Create logical volume
+
+```bash
+sudo lvcreate -n hdb_lv -i $free_disks_qty -l 100%FREE hdb_vg
+```
+
+### Configure Data Volume
+
+Run `lsblk` and note the device name of the additional volume
+
+```bash
+lsblk
+```
+
+Create an ext4 filesystem on the volume (the commands below assume the device name is nvme1n1; if you used LVM to create a logical volume, replace /dev/nvme1n1 with /dev/hdb_vg/hdb_lv)
+
+```bash
+sudo mkfs.ext4 -L hdb_data /dev/nvme1n1
+```
+
+Mount the file system and set the correct permissions for the directory
+
+```bash
+mkdir /home/ubuntu/hdb
+sudo mount -t ext4 /dev/nvme1n1 /home/ubuntu/hdb
+sudo chown -R ubuntu:ubuntu /home/ubuntu/hdb
+sudo chmod 775 /home/ubuntu/hdb
+```
+
+Create a fstab entry to mount the filesystem on boot
+
+```bash
+echo "LABEL=hdb_data /home/ubuntu/hdb ext4 defaults,noatime 0 1" | sudo tee -a /etc/fstab
+```
+
+### Configure Linux and Install Prerequisites
+If a swap file or partition does not already exist, create and enable a 2GB swap file
+
+```bash
+sudo dd if=/dev/zero of=/swapfile bs=128M count=16
+sudo chmod 600 /swapfile
+sudo mkswap /swapfile
+sudo swapon /swapfile
+echo "/swapfile swap swap defaults 0 0" | sudo tee -a /etc/fstab
+```
+
+Increase the open file limits for the ubuntu user
+
+```bash
+echo "ubuntu soft nofile 500000" | sudo tee -a /etc/security/limits.conf
+echo "ubuntu hard nofile 1000000" | sudo tee -a /etc/security/limits.conf
+```
+
+Install Node Version Manager (nvm)
+
+```bash
+curl -o- https:/raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
+```
+
+Load nvm (or logout and then login)
+
+```bash
+. ~/.nvm/nvm.sh
+```
+
+Install Node.js using nvm ([read more about specific Node version requirements](https:/www.npmjs.com/package/harperdb#prerequisites))
+
+```bash
+nvm install
+```
+
+### Install and Start HarperDB
+Here is an example of installing HarperDB with minimal configuration.
+
+```bash
+npm install -g harperdb
+harperdb start \
+ --TC_AGREEMENT "yes" \
+ --ROOTPATH "/home/ubuntu/hdb" \
+ --OPERATIONSAPI_NETWORK_PORT "9925" \
+ --HDB_ADMIN_USERNAME "HDB_ADMIN" \
+ --HDB_ADMIN_PASSWORD "password"
+```
+
+Here is an example of installing HarperDB with commonly used additional configuration.
+
+```bash
+npm install -g harperdb
+harperdb start \
+ --TC_AGREEMENT "yes" \
+ --ROOTPATH "/home/ubuntu/hdb" \
+ --OPERATIONSAPI_NETWORK_PORT "9925" \
+ --HDB_ADMIN_USERNAME "HDB_ADMIN" \
+ --HDB_ADMIN_PASSWORD "password" \
+ --OPERATIONSAPI_NETWORK_HTTPS "true" \
+ --CUSTOMFUNCTIONS_NETWORK_HTTPS "true" \
+ --CLUSTERING_ENABLED "true" \
+ --CLUSTERING_USER "cluster_user" \
+ --CLUSTERING_PASSWORD "password" \
+ --CLUSTERING_NODENAME "hdb1"
+```
+
+HarperDB will automatically start after installation. If you wish HarperDB to start when the OS boots, you have two options
+
+You can set up a crontab:
+
+```bash
+(crontab -l 2>/dev/null; echo "@reboot PATH=\"/home/ubuntu/.nvm/versions/node/v18.15.0/bin:$PATH\" && harperdb start") | crontab -
+```
+
+Or you can create a systemd script at `/etc/systemd/system/harperdb.service`
+
+Pasting the following contents into the file:
+
+```
+[Unit]
+Description=HarperDB
+
+[Service]
+Type=simple
+Restart=always
+User=ubuntu
+Group=ubuntu
+WorkingDirectory=/home/ubuntu
+ExecStart=/bin/bash -c 'PATH="/home/ubuntu/.nvm/versions/node/v18.15.0/bin:$PATH"; harperdb'
+
+[Install]
+WantedBy=multi-user.target
+```
+
+And then running the following:
+
+```
+sudo systemctl daemon-reload
+sudo systemctl enable harperdb
+```
+
+For more information visit the [HarperDB Command Line Interface guide](../harperdb-cli) and the [HarperDB Configuration File guide](../configuration).
diff --git a/site/versioned_docs/version-4.1/jobs.md b/site/versioned_docs/version-4.1/jobs.md
new file mode 100644
index 00000000..e91330c7
--- /dev/null
+++ b/site/versioned_docs/version-4.1/jobs.md
@@ -0,0 +1,112 @@
+---
+title: Asynchronous Jobs
+---
+
+# Asynchronous Jobs
+
+HarperDB Jobs are asynchronous tasks performed by the Operations API.
+
+## Job Summary
+
+Jobs use an asynchronous methodology to account for the potential of a long-running operation. For example, exporting millions of records to S3 could take some time, so that job is started and an id is provided that can be used to check on its status.
+
+The job status can be **COMPLETE** or **IN_PROGRESS**.
+
+## Example Job Operations
+
+Example job operations include:
+
+[csv data load](https:/api.harperdb.io/#0186bc25-b9ae-44e7-bd9e-8edc0f289aa2)
+
+[csv file load](https:/api.harperdb.io/#c4b71011-8a1d-4cb2-8678-31c0363fea5e)
+
+[csv url load](https:/api.harperdb.io/#d1e9f433-e250-49db-b44d-9ce2dcd92d32)
+
+[import from s3](https:/api.harperdb.io/#820b3947-acbe-41f9-858b-2413cabc3a18)
+
+[delete_records_before](https:/api.harperdb.io/#8de87e47-73a8-4298-b858-ca75dc5765c2)
+
+[export_local](https:/api.harperdb.io/#49a02517-ada9-4198-b48d-8707db905be0)
+
+[export_to_s3](https:/api.harperdb.io/#f6393e9f-e272-4180-a42c-ff029d93ddd4)
+
+Example Response from a Job Operation
+
+```
+{
+ "message": "Starting job with id 062a1892-6a0a-4282-9791-0f4c93b12e16"
+}
+```
+
+Whenever one of these operations is initiated, an asynchronous job is created and the response contains the id of that job, which can be used to check on its status.
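+
+For example, an illustrative `csv_url_load` request that would start such a job might look like the following; the schema, table, and URL are placeholders:
+
+```json
+{
+  "operation": "csv_url_load",
+  "schema": "dev",
+  "table": "dog",
+  "csv_url": "https://example.com/dogs.csv"
+}
+```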
+
+## Managing Jobs
+
+To check on a job's status, use the [get_job](https:/api.harperdb.io/#d501bef7-dbb7-4714-b535-e466f6583dce) operation.
+
+Get Job Request
+
+```
+{
+ "operation": "get_job",
+ "id": "4a982782-929a-4507-8794-26dae1132def"
+}
+```
+
+Get Job Response
+
+```
+[
+ {
+ "__createdtime__": 1611615798782,
+ "__updatedtime__": 1611615801207,
+ "created_datetime": 1611615798774,
+ "end_datetime": 1611615801206,
+ "id": "4a982782-929a-4507-8794-26dae1132def",
+ "job_body": null,
+ "message": "successfully loaded 350 of 350 records",
+ "start_datetime": 1611615798805,
+ "status": "COMPLETE",
+ "type": "csv_url_load",
+ "user": "HDB_ADMIN",
+ "start_datetime_converted": "2021-01-25T23:03:18.805Z",
+ "end_datetime_converted": "2021-01-25T23:03:21.206Z"
+ }
+]
+```
+
+## Finding Jobs
+
+To find jobs (if the id is not known) use the [search_jobs_by_start_date](https://api.harperdb.io/#4474ca16-e4c2-4740-81b5-14ed98c5eeab) operation.
+
+Search Jobs Request
+
+```
+{
+ "operation": "search_jobs_by_start_date",
+ "from_date": "2021-01-25T22:05:27.464+0000",
+ "to_date": "2021-01-25T23:05:27.464+0000"
+}
+```
+
+Search Jobs Response
+
+```
+[
+ {
+ "id": "942dd5cb-2368-48a5-8a10-8770ff7eb1f1",
+ "user": "HDB_ADMIN",
+ "type": "csv_url_load",
+ "status": "COMPLETE",
+ "start_datetime": 1611613284781,
+ "end_datetime": 1611613287204,
+ "job_body": null,
+ "message": "successfully loaded 350 of 350 records",
+ "created_datetime": 1611613284764,
+ "__createdtime__": 1611613284767,
+ "__updatedtime__": 1611613287207,
+ "start_datetime_converted": "2021-01-25T22:21:24.781Z",
+ "end_datetime_converted": "2021-01-25T22:21:27.204Z"
+ }
+]
+```
diff --git a/site/versioned_docs/version-4.1/logging.md b/site/versioned_docs/version-4.1/logging.md
new file mode 100644
index 00000000..06d2eadc
--- /dev/null
+++ b/site/versioned_docs/version-4.1/logging.md
@@ -0,0 +1,67 @@
+---
+title: Logging
+---
+
+# Logging
+
+HarperDB maintains a log of events that take place throughout operation. Log messages can be used for diagnostics purposes as well as monitoring.
+
+All logs (except for the install log) are stored in the main log file in the hdb directory at `log/hdb.log`. The install log is located in the HarperDB application directory, most likely in your npm directory (`npm/harperdb/logs`).
+
+Each log message has several key components for consistent reporting of events. A log message has a format of:
+```
+<timestamp> [<level>] [<thread/id>] ...[<tag>]: <message>
+```
+For example, a typical log entry looks like:
+```
+2023-03-09T14:25:05.269Z [notify] [main/0]: HarperDB successfully started.
+```
+The components of a log entry are:
+* timestamp - This is the date/time stamp when the event occurred
+* level - This is an associated log level that gives a rough guide to the importance and urgency of the message. The available log levels in order of least urgent (and more verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`.
+* thread/id - This reports the name of the thread and the thread id that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process. Key threads are:
+ * main - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads
+ * http - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions.
+ * Clustering* - These are threads and processes that handle replication.
+ * job - These are job threads that have been started to handle operations that are executed in a separate job thread.
+* tags - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags.
+* message - This is the main message that was reported.
+
+We try to keep logging to a minimum by default; to do this, the default log level is `error`. If you require more information from the logs, setting a more verbose log level (such as `debug` or `trace`) will provide that.
+
+The log level can be changed by modifying `logging.level` in the config file `harperdb-config.yaml`.
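+
+For example, to get more verbose output the relevant portion of `harperdb-config.yaml` might look like the following (a minimal sketch showing only the element discussed here):
+
+```yaml
+logging:
+  level: debug   # more verbose than the default of error
+```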
+
+## Clustering Logging
+
+HarperDB clustering utilizes two [NATS](https://nats.io/) servers, named Hub and Leaf. The Hub server is responsible for establishing the mesh network that connects instances of HarperDB, and the Leaf server is responsible for managing the message stores (streams) that replicate and store messages between instances. Due to the verbosity of these servers, there is a separate log level configuration for them. To adjust their log verbosity, set `clustering.logLevel` in the config file `harperdb-config.yaml`. Valid log levels, from least to most verbose, are `error`, `warn`, `info`, `debug`, and `trace`.
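+
+For example, to adjust the verbosity of the clustering servers, the relevant portion of `harperdb-config.yaml` might look like the following (a minimal sketch showing only the element discussed here):
+
+```yaml
+clustering:
+  logLevel: info   # one of error, warn, info, debug, trace
+```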
+
+## Log File vs Standard Streams
+
+HarperDB logs can optionally be streamed to standard streams. Logging to standard streams (stdout/stderr) is primarily used for container logging drivers. For more traditional installations, we recommend logging to a file. Logging to both standard streams and to a file can be enabled simultaneously.
+To log to standard streams effectively, make sure to run `harperdb` directly rather than starting it as a separate process (that is, don't use `harperdb start`), and set `logging.stdStreams` to true. Note, logging to standard streams only will disable clustering catchup.
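+
+A minimal sketch of enabling standard stream logging in `harperdb-config.yaml` (only the element discussed here is shown):
+
+```yaml
+logging:
+  stdStreams: true   # also stream log messages to stdout/stderr
+```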
+
+## Logging Rotation
+
+Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space. For more information see “logging” in our [config docs](./configuration).
+
+## Read Logs via the API
+
+To access specific logs you may query the HarperDB API. Logs can be queried using the `read_log` operation. `read_log` returns outputs from the log based on the provided search criteria.
+
+```json
+{
+ "operation": "read_log",
+ "start": 0,
+ "limit": 1000,
+ "level": "error",
+ "from": "2021-01-25T22:05:27.464+0000",
+ "until": "2021-01-25T23:05:27.464+0000",
+ "order": "desc"
+}
+```
+
+
+
diff --git a/site/versioned_docs/version-4.1/reference/content-types.md b/site/versioned_docs/version-4.1/reference/content-types.md
new file mode 100644
index 00000000..c8a1bad8
--- /dev/null
+++ b/site/versioned_docs/version-4.1/reference/content-types.md
@@ -0,0 +1,25 @@
+---
+title: HarperDB Supported Content Types
+---
+
+# HarperDB Supported Content Types
+
+HarperDB supports several different content types (or MIME types) for both HTTP request bodies (describing operations) as well as for serializing content into HTTP response bodies. HarperDB follows HTTP standards for specifying both request body content types and acceptable response body content types. Any of these content types can be used with any of the standard HarperDB operations.
+
+For request body content, the content type should be specified with the `Content-Type` header. For example with JSON, use `Content-Type: application/json` and for CBOR, include `Content-Type: application/cbor`. To request that the response body be encoded with a specific content type, use the `Accept` header. If you want the response to be in JSON, use `Accept: application/json`. If you want the response to be in CBOR, use `Accept: application/cbor`.
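+
+As an illustration, a request to the operations API might set both headers like the following. This is a sketch only; the endpoint, credentials, and operation are placeholders, and your instance may use a different port or require HTTPS:
+
+```bash
+# Send a JSON-encoded operation and ask for the response to be serialized as CBOR
+curl -X POST http://localhost:9925 \
+  -u HDB_ADMIN:password \
+  -H "Content-Type: application/json" \
+  -H "Accept: application/cbor" \
+  -d '{"operation": "describe_all"}' \
+  --output response.cbor
+```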
+
+The following content types are supported:
+
+## JSON - application/json
+JSON is the most widely used content type, and is relatively readable and easy to work with. However, JSON does not support all the data types that are supported by HarperDB, and can't be used to natively encode data types like binary data or explicit Maps/Sets. Also, JSON is not as efficient as binary formats. When using JSON, compression is recommended (this also follows standard HTTP protocol with the `Accept-Encoding` header) to improve network transfer performance (although there is server performance overhead). JSON is a good choice for web development when standard JSON types are sufficient, when combined with compression, and when debuggability/observability is important.
+
+## CBOR - application/cbor
+CBOR is a highly efficient binary format, and is a recommended format for most production use cases with HarperDB. CBOR supports the full range of HarperDB data types, including binary data, typed dates, and explicit Maps/Sets. CBOR is very performant and space efficient even without compression. Compression will still yield better network transfer size/performance, but compressed CBOR is generally not any smaller than compressed JSON. CBOR also natively supports streaming for optimal performance (using indefinite length arrays). The CBOR format has excellent standardization and HarperDB's CBOR provides an excellent balance of performance and size efficiency.
+
+## MessagePack - application/x-msgpack
+MessagePack is another efficient binary format like CBOR, with support for all HarperDB data types. MessagePack generally has wider adoption than CBOR and can be useful in systems that don't have CBOR support (or good support). However, MessagePack does not have native support for streaming arrays of data (for query results), so query results are returned as a (concatenated) sequence of MessagePack objects/maps. MessagePack decoders used with HarperDB's MessagePack must be prepared to decode a direct sequence of MessagePack values to properly read responses.
+
+## Comma-separated Values (CSV) - text/csv
+Comma-separated values (CSV) is an easy-to-use, easy-to-understand format that can be readily imported into spreadsheets or used for data processing. CSV lacks hierarchical structure and support for most data types, and shouldn't be used for frequent/production use, but it is available when you need it.
+
+
diff --git a/site/versioned_docs/version-4.1/reference/data-types.md b/site/versioned_docs/version-4.1/reference/data-types.md
new file mode 100644
index 00000000..78a8a684
--- /dev/null
+++ b/site/versioned_docs/version-4.1/reference/data-types.md
@@ -0,0 +1,37 @@
+---
+title: HarperDB Supported Data Types
+---
+
+# HarperDB Supported Data Types
+
+HarperDB supports a rich set of data types for use in records in databases. Various data types can be used from both direct JavaScript interfaces in Custom Functions and the HTTP operations APIs. Using JSON for communication naturally limits the data types to those available in JSON (HarperDB supports all JSON data types), but JavaScript code and alternate data formats facilitate the use of additional data types. As of v4.1, HarperDB supports MessagePack and CBOR, which allow for all of HarperDB's supported data types. These include:
+
+## Boolean
+true or false.
+
+## String
+Strings, or text, are a sequence of any unicode characters and are internally encoded with UTF-8.
+
+## Number
+Numbers can be stored as signed integers up to 64-bit or as 64-bit double-precision floating point, and numbers are automatically stored using the most optimal type. JSON is parsed by JavaScript, so the maximum safe (precise) integer is 9007199254740991 (larger numbers can be stored, but aren't guaranteed integer precision). Custom Functions may use BigInt numbers to store/access larger 64-bit integers, but integers beyond 64-bit can't be stored with integer precision (they will be stored as standard double-precision numbers).
+
+## Object/Map
+Objects, or maps, that hold a set of named properties can be stored in HarperDB. When provided as JSON objects or JavaScript objects, all property keys are stored as strings. The order of properties is also preserved in HarperDB's storage. Duplicate property keys are not allowed (they are dropped when parsing any incoming data).
+
+## Array
+Arrays hold an ordered sequence of values and can be stored in HarperDB. There is no support for sparse arrays, although you can use objects to store data with numbers (converted to strings) as properties.
+
+## Null
+A null value can be stored in HarperDB property values as well.
+
+## Date
+Dates can be stored as a specific data type. This is not supported in JSON, but is supported by MessagePack and CBOR. Custom Functions can also store and use Dates using JavaScript Date instances.
+
+## Binary Data
+Binary data can be stored in property values as well. JSON doesn’t have any support for encoding binary data, but MessagePack and CBOR support binary data in data structures, and this will be preserved in HarperDB. Custom Functions can also store binary data by using NodeJS’s Buffer or Uint8Array instances to hold the binary data.
+
+## Explicit Map/Set
+Explicit instances of JavaScript Maps and Sets can be stored and preserved in HarperDB as well. This can’t be represented with JSON, but can be with CBOR.
+
+
+
diff --git a/site/versioned_docs/version-4.1/reference/dynamic-schema.md b/site/versioned_docs/version-4.1/reference/dynamic-schema.md
new file mode 100644
index 00000000..c700e42d
--- /dev/null
+++ b/site/versioned_docs/version-4.1/reference/dynamic-schema.md
@@ -0,0 +1,148 @@
+---
+title: Dynamic Schema
+---
+
+# Dynamic Schema
+
+HarperDB is built to make data ingestion simple. A primary driver of that is the Dynamic Schema. The purpose of this document is to provide a detailed explanation of the dynamic schema specifically related to schema definition and data ingestion.
+
+The dynamic schema provides the structure of schema and table namespaces while simultaneously providing the flexibility of a data-defined schema. Individual attributes are reflexively created as data is ingested, meaning the table will adapt to the structure of the data ingested. HarperDB tracks the metadata around schemas, tables, and attributes, allowing for the `describe_table`, `describe_schema`, and `describe_all` operations.
+
+### Schemas
+
+HarperDB schemas are analogous to a namespace that groups tables together. A schema is required to create a table.
+
+### Tables
+
+HarperDB tables group records together with a common data pattern. To create a table users must provide a table name and a primary key.
+
+* **Table Name**: Used to identify the table.
+* **Primary Key**: This is a required attribute that serves as the unique identifier for a record and is also known as the `hash_attribute` in HarperDB.
+
+**Primary Key**
+
+The primary key (also referred to as the `hash_attribute`) is used to uniquely identify records. Uniqueness is enforced on the primary key; inserts with the same primary key will be rejected. If a primary key is not provided on insert, a GUID will be automatically generated and returned to the user. The [HarperDB Storage Algorithm](./storage-algorithm) utilizes this value for indexing.
+
+**Standard Attributes**
+
+Additional attributes are reflexively added via insert and update operations (in both SQL and NoSQL) when new attributes are included in the data structure provided to HarperDB. As a result, schemas are additive, meaning new attributes are created in the underlying storage algorithm as additional data structures are provided. HarperDB offers `create_attribute` and `drop_attribute` operations for users who prefer to manually define their data model independent of data ingestion. When new attributes are added to tables with existing data, the value of that new attribute will be assumed `null` for all existing records.
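+
+For example, an attribute could be defined ahead of data ingestion with `create_attribute`; a minimal sketch, using the schema and table from the example below:
+
+```json
+{
+ "operation": "create_attribute",
+ "schema": "dev",
+ "table": "dog",
+ "attribute": "color"
+}
+```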
+
+**Audit Attributes**
+
+HarperDB automatically creates two audit attributes used on each record.
+
+* `__createdtime__`: The time the record was created in [Unix Epoch with milliseconds](https://www.epochconverter.com/) format.
+* `__updatedtime__`: The time the record was updated in [Unix Epoch with milliseconds](https://www.epochconverter.com/) format.
+
+### Dynamic Schema Example
+
+To better understand the behavior let’s take a look at an example. This example utilizes [HarperDB API operations](https://api.harperdb.io/).
+
+**Create a Schema**
+
+```json
+{
+ "operation": "create_schema",
+ "schema": "dev"
+}
+```
+
+**Create a Table**
+
+Notice the schema name, table name, and hash attribute name are the only required parameters.
+
+```json
+{
+ "operation": "create_table",
+ "schema": "dev",
+ "table": "dog",
+ "hash_attribute": "id"
+}
+```
+
+At this point the table does not have structure beyond what we provided, so the table looks like this:
+
+**dev.dog**
+
+
+
+**Insert Record**
+
+To define attributes we do not need to do anything beyond sending them in with an insert operation.
+
+```json
+{
+ "operation": "insert",
+ "schema": "dev",
+ "table": "dog",
+ "records": [
+ {"id": 1, "dog_name": "Penny", "owner_name": "Kyle"}
+ ]
+}
+```
+
+With a single record inserted and new attributes defined, our table now looks like this:
+
+**dev.dog**
+
+
+
+Indexes have been automatically created for `dog_name` and `owner_name` attributes.
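+
+If you want to confirm which attributes now exist, the table metadata (including all dynamically created attributes) can be inspected at any time with the `describe_table` operation:
+
+```json
+{
+ "operation": "describe_table",
+ "schema": "dev",
+ "table": "dog"
+}
+```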
+
+**Insert Additional Record**
+
+If we continue inserting records with the same data schema no schema updates are required. One record will omit the hash attribute from the insert to demonstrate GUID generation.
+
+```json
+{
+ "operation": "insert",
+ "schema": "dev",
+ "table": "dog",
+ "records": [
+ {"id": 2, "dog_name": "Monk", "owner_name": "Aron"},
+ {"dog_name": "Harper","owner_name": "Stephen"}
+ ]
+}
+```
+
+In this case, there is no change to the schema. Our table now looks like this:
+
+**dev.dog**
+
+
+
+**Update Existing Record**
+
+In this case, we will update a record with a new attribute not previously defined on the table.
+
+```json
+{
+ "operation": "update",
+ "schema": "dev",
+ "table": "dog",
+ "records": [
+ {"id": 2, "weight_lbs": 35}
+ ]
+}
+```
+
+Now we have a new attribute called `weight_lbs`. Our table now looks like this:
+
+**dev.dog**
+
+
+
+**Query Table with SQL**
+
+Now if we query for all records where `weight_lbs` is `null` we expect to get back two records.
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT * FROM dev.dog WHERE weight_lbs IS NULL"
+}
+```
+
+This results in the expected two records being returned.
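+
+The two matching records would look something like this (the generated GUID and the timestamp values shown here are illustrative; your values will differ):
+
+```json
+[
+ {
+  "id": 1,
+  "dog_name": "Penny",
+  "owner_name": "Kyle",
+  "weight_lbs": null,
+  "__createdtime__": 1611612132923,
+  "__updatedtime__": 1611612132923
+ },
+ {
+  "id": "e35ffa3b-0a34-4cdd-95ea-2f38e2d0a1b5",
+  "dog_name": "Harper",
+  "owner_name": "Stephen",
+  "weight_lbs": null,
+  "__createdtime__": 1611612132924,
+  "__updatedtime__": 1611612132924
+ }
+]
+```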
+
+
diff --git a/site/versioned_docs/version-4.1/reference/headers.md b/site/versioned_docs/version-4.1/reference/headers.md
new file mode 100644
index 00000000..330425bf
--- /dev/null
+++ b/site/versioned_docs/version-4.1/reference/headers.md
@@ -0,0 +1,13 @@
+---
+title: HarperDB Headers
+---
+
+# HarperDB Headers
+
+All HarperDB API responses include headers that are important for interoperability and debugging purposes. The following headers are returned with all HarperDB API responses:
+
+| Key | Example Value | Description |
+|-------------------|------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| server-timing | db;dur=7.165 | This reports the duration of the operation, in milliseconds. This follows the standard for Server-Timing and can be consumed by network monitoring tools. |
+| hdb-response-time | 7.165 | This is the legacy header for reporting response time. It is deprecated and will be removed in 4.2. |
+| content-type | application/json | This reports the MIME type of the returned content, which is negotiated based on the requested content type in the Accept header. |
diff --git a/site/versioned_docs/version-4.1/reference/index.md b/site/versioned_docs/version-4.1/reference/index.md
new file mode 100644
index 00000000..70a3e37e
--- /dev/null
+++ b/site/versioned_docs/version-4.1/reference/index.md
@@ -0,0 +1,14 @@
+---
+title: Reference
+---
+
+# Reference
+
+This section contains technical details and reference materials for HarperDB.
+
+* [Storage Algorithm](./storage-algorithm)
+* [Dynamic Schema](./dynamic-schema)
+* [Headers](./headers)
+* [Limitations](./limits)
+* [Content Types](./content-types)
+* [Data Types](./data-types)
diff --git a/site/versioned_docs/version-4.1/reference/limits.md b/site/versioned_docs/version-4.1/reference/limits.md
new file mode 100644
index 00000000..f6509b7b
--- /dev/null
+++ b/site/versioned_docs/version-4.1/reference/limits.md
@@ -0,0 +1,33 @@
+---
+title: HarperDB Limits
+---
+
+# HarperDB Limits
+
+This document outlines limitations of HarperDB.
+
+## Schema Naming Restrictions
+
+**Case Sensitivity**
+
+HarperDB schema metadata (schema names, table names, and attribute/column names) is case sensitive, meaning schemas, tables, and attributes can differ only by the case of their characters.
+
+**Restrictions on Schema Metadata Names**
+
+HarperDB schema metadata (schema names, table names, and attribute names) cannot contain the following UTF-8 characters:
+
+```
+/`¡¢£¤¥¦§¨©ª«¬®¯°±²³´µ¶·¸¹º»¼½¾¿ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿ
+```
+
+Additionally, they cannot contain the first 31 non-printing characters. Spaces are allowed, but not recommended as best practice. The regular expression used to verify a name is valid is:
+
+```
+^[\x20-\x2E|\x30-\x5F|\x61-\x7E]*$
+```
+
+## Table Limitations
+
+**Attribute Maximum**
+
+HarperDB limits the number of attributes to 10,000 per table.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/reference/storage-algorithm.md b/site/versioned_docs/version-4.1/reference/storage-algorithm.md
new file mode 100644
index 00000000..efd26d14
--- /dev/null
+++ b/site/versioned_docs/version-4.1/reference/storage-algorithm.md
@@ -0,0 +1,22 @@
+---
+title: Storage Algorithm
+---
+
+# Storage Algorithm
+The HarperDB storage algorithm is fundamental to the HarperDB core functionality, enabling the [Dynamic Schema](./dynamic-schema) and all other user-facing functionality. HarperDB is built on top of Lightning Memory-Mapped Database (LMDB), a key-value store offering industry-leading performance and functionality, which allows our storage algorithm to store data in tables as rows/objects. This document provides additional details on how data is stored within HarperDB.
+
+## Query Language Agnostic
+The HarperDB storage algorithm was designed to abstract the data storage from any individual query language. HarperDB currently supports both SQL and NoSQL on top of this storage algorithm, with the ability to add additional query languages in the future. This means data can be inserted via NoSQL and read via SQL while hitting the same underlying data storage.
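+
+For instance, a record written with the NoSQL `insert` operation can immediately be read back with SQL against the same table (a sketch using the `dev.dog` table from the [Dynamic Schema](./dynamic-schema) document):
+
+```json
+{
+ "operation": "insert",
+ "schema": "dev",
+ "table": "dog",
+ "records": [
+  {"id": 3, "dog_name": "Tucker"}
+ ]
+}
+```
+
+The same record can then be read with the `sql` operation:
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT id, dog_name FROM dev.dog WHERE id = 3"
+}
+```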
+
+## ACID Compliant
+Utilizing Multi-Version Concurrency Control (MVCC) through LMDB, HarperDB offers ACID compliance independently on each node. Readers and writers operate independently of each other, meaning readers don’t block writers and writers don’t block readers. Each HarperDB table has a single writer process, avoiding deadlocks and assuring that writes are executed in the order in which they were received. HarperDB tables can have multiple reader processes operating at the same time for consistent, high scale reads.
+
+## Universally Indexed
+All top level attributes are automatically indexed immediately upon ingestion. The [HarperDB Dynamic Schema](./dynamic-schema) reflexively creates both the attribute and its index as new schema metadata comes in. Indexes are agnostic of datatype, honoring the following order: booleans, numbers ordered naturally, strings ordered lexically. Within the LMDB implementation, table records are grouped together into a single LMDB environment file, where each attribute index is a sub-database (dbi) inside said environment file. An example of the indexing scheme can be seen below.
+
+## Additional LMDB Benefits
+HarperDB inherits both functional and performance benefits by implementing LMDB as the underlying key-value store. Data is memory-mapped, which enables quick data access without data duplication. All writers are fully serialized, making writes deadlock-free. LMDB is built to maximize operating system features and functionality, fully exploiting buffer cache and built to run in CPU cache. To learn more about LMDB, visit their documentation.
+
+## HarperDB Indexing Example (Single Table)
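+
+As an illustrative sketch (using the `dev.dog` table from the [Dynamic Schema](./dynamic-schema) example; internal naming is simplified), a single table's environment and its attribute indexes can be pictured as:
+
+```
+dev.dog  (one LMDB environment file per table)
+├── id               (primary key / hash attribute index)
+├── dog_name         (secondary attribute index, stored as a dbi)
+├── owner_name       (secondary attribute index, stored as a dbi)
+├── weight_lbs       (secondary attribute index, stored as a dbi)
+├── __createdtime__  (audit attribute index, stored as a dbi)
+└── __updatedtime__  (audit attribute index, stored as a dbi)
+```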
+
+
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/End-of-Life.md b/site/versioned_docs/version-4.1/release-notes/End-of-Life.md
new file mode 100644
index 00000000..ca15f713
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/End-of-Life.md
@@ -0,0 +1,14 @@
+---
+title: HarperDB Software Lifecycle Schedules
+---
+
+# HarperDB Software Lifecycle Schedules
+
+The lifecycle schedules below form a part of HarperDB’s Support Policies. They include Major Releases and Minor Releases that have reached their end of life date in the past 3 years.
+
+| **Release** | **Release Date** | **End of Life Date** |
+|-------------|------------------|----------------------|
+| 3.2 | 6/22 | 6/25 |
+| 3.3 | 9/22 | 9/25 |
+| 4.0 | 1/23 | 1/26 |
+| 4.1 | 4/23 | 4/26 |
diff --git a/site/versioned_docs/version-4.1/release-notes/index.md b/site/versioned_docs/version-4.1/release-notes/index.md
new file mode 100644
index 00000000..8c0a3fb9
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/index.md
@@ -0,0 +1,80 @@
+---
+title: Release Notes
+---
+
+# Release Notes
+
+### Current Release
+
+[Meet Tucker](./v4-tucker/) Our 4th Release Pup
+
+[4.1.2 Tucker](./v4-tucker/4.1.2)
+
+[4.1.1 Tucker](./v4-tucker/4.1.1)
+
+[4.1.0 Tucker](./v4-tucker/4.1.0)
+
+[4.0.6 Tucker](./v4-tucker/4.0.6)
+
+[4.0.5 Tucker](./v4-tucker/4.0.5)
+
+[4.0.4 Tucker](./v4-tucker/4.0.4)
+
+[4.0.3 Tucker](./v4-tucker/4.0.3)
+
+[4.0.2 Tucker](./v4-tucker/4.0.2)
+
+[4.0.1 Tucker](./v4-tucker/4.0.1)
+
+[4.0.0 Tucker](./v4-tucker/4.0.0)
+
+
+### Past Releases
+
+[Meet Monkey](./v3-monkey/) Our 3rd Release Pup
+
+[3.2.1 Monkey](./v3-monkey/3.2.1)
+
+[3.2.0 Monkey](./v3-monkey/3.2.0)
+
+[3.1.5 Monkey](./v3-monkey/3.1.5)
+
+[3.1.4 Monkey](./v3-monkey/3.1.4)
+
+[3.1.3 Monkey](./v3-monkey/3.1.3)
+
+[3.1.2 Monkey](./v3-monkey/3.1.2)
+
+[3.1.1 Monkey](./v3-monkey/3.1.1)
+
+[3.1.0 Monkey](./v3-monkey/3.1.0)
+
+[3.0.0 Monkey](./v3-monkey/3.0.0)
+
+***
+
+[Meet Penny](./v2-penny/) Our 2nd Release Pup
+
+[2.3.1 Penny](./v2-penny/2.3.1)
+
+[2.3.0 Penny](./v2-penny/2.3.0)
+
+[2.2.3 Penny](./v2-penny/2.2.3)
+
+[2.2.2 Penny](./v2-penny/2.2.2)
+
+[2.2.0 Penny](./v2-penny/2.2.0)
+
+[2.1.1 Penny](./v2-penny/2.1.1)
+
+***
+
+[Meet Alby](./v1-alby/) Our 1st Release Pup
+
+[1.3.1 Alby](./v1-alby/1.3.1)
+
+[1.3.0 Alby](./v1-alby/1.3.0)
+
+[1.2.0 Alby](./v1-alby/1.2.0)
+
+[1.1.0 Alby](./v1-alby/1.1.0)
diff --git a/site/versioned_docs/version-4.1/release-notes/v1-alby/1.1.0.md b/site/versioned_docs/version-4.1/release-notes/v1-alby/1.1.0.md
new file mode 100644
index 00000000..b42514a2
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v1-alby/1.1.0.md
@@ -0,0 +1,77 @@
+---
+title: 1.1.0
+sidebar_position: 89899
+---
+
+### HarperDB 1.1.0, Alby Release
+4/18/2018
+
+**Features**
+
+* Users & Roles:
+
+ * Limit/Assign access to all HarperDB operations
+
+ * Limit/Assign access to schemas, tables & attributes
+
+ * Limit/Assign access to specific SQL operations (`INSERT`, `UPDATE`, `DELETE`, `SELECT`)
+
+* Enhanced SQL parser
+
+ * Added extensive ANSI SQL Support.
+
+ * Added Array function, which allows for converting relational data into Object/Hierarchical data
+
+ * `Distinct_Array` Function: allows for removing duplicates in the Array function.
+
+ * Enhanced SQL Validation: Improved validation around structure of SQL, validating the schema, etc..
+
+ * 10x performance improvement on SQL statements.
+
+* Export Function: can now call a NoSQL/SQL search and have it export to CSV or JSON.
+
+* Added upgrade function to CLI
+
+* Added ability to perform bulk update from CSV
+
+* Created landing page for HarperDB.
+
+* Added CORS support to HarperDB
+
+**Fixes**
+
+* Fixed memory leak in CSV bulk loads
+
+* Corrected error when attempting to perform a `SQL DELETE`
+
+* Added further validation to NoSQL `UPDATE` to validate schema & table exist
+
+* Fixed install issue where, if part of the install path did not exist, the install would silently fail.
+
+* Fixed issues with replicated data when one of the replicas is down
+
+* Removed logging of initial user’s credentials during install
+
+* Can now use reserved words as aliases in SQL
+
+* Removed user(s) password in results when calling `list_users`
+
+* Corrected forwarding of operations to other nodes in a cluster
+
+* Corrected lag in schema meta-data passing to other nodes in a cluster
+
+* Drop table & schema now move the table & schema or table to the trash folder under the Database folder for later permanent deletion.
+
+* Bulk inserts no longer halt the entire operation if some records already exist; instead, the return includes the hashes of records that were skipped.
+
+* Added ability to accept EULA from command line
+
+* Corrected `search_by_value` not searching on the correct attribute
+
+* Added ability to increase the timeout of a request by adding `SERVER_TIMEOUT_MS` to config/settings.js
+
+* Add error handling resulting from SQL calculations.
+
+* Standardized error responses as JSON.
+
+* Corrected internal process generation to not allow more processes than machine has cores.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v1-alby/1.2.0.md b/site/versioned_docs/version-4.1/release-notes/v1-alby/1.2.0.md
new file mode 100644
index 00000000..095bf239
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v1-alby/1.2.0.md
@@ -0,0 +1,42 @@
+---
+title: 1.2.0
+sidebar_position: 89799
+---
+
+### HarperDB 1.2.0, Alby Release
+7/10/2018
+
+**Features**
+
+* Time to Live: Conserve the resources of your edge device by setting data on devices to live for a specific period of time.
+* Geo: HarperDB has implemented turf.js into its SQL parser to enable geo-based analytics.
+* Jobs: CSV Data loads, Exports & Time to Live now all run as background jobs.
+* Exports: Perform queries that export into JSON or CSV and save to disk or S3.
+
+
+**Fixes**
+
+* Fixed issue where CSV data loads incorrectly report number of records loaded.
+* Added validation to stop `BETWEEN` operations in SQL.
+* Updated logging to not include internal variables in the logs.
+* Cleaned up `add_role` response to not include internal variables.
+* Removed old and unused dependencies.
+* Build out further unit tests and integration tests.
+* Fixed https to handle certificates properly.
+* Improved stability of clustering & replication.
+* Corrected issue where Objects and Arrays were not casting properly in `SQL SELECT` response.
+* Fixed issue where Blob text was not being returned from `SQL SELECT`s.
+* Fixed error being returned when querying on table with no data, now correctly returns empty array.
+* Improved performance in SQL when searching on exact values.
+* Fixed error when ./harperdb stop is called.
+* Fixed logging issue causing instability in installer.
+* Fixed `read_log` operation to accept date time.
+* Added permissions checking to `export_to_s3`.
+* Added ability to run SQL on `SELECT` without a `FROM`.
+* Fixed issue where updating a user’s password was not encrypting properly.
+* Fixed `user_guide.html` to point to readme on git repo.
+* Created option to have HarperDB run as a foreground process.
+* Updated `user_info` to return the correct role for a user.
+* Fixed issue where HarperDB would not stop if the database root was deleted.
+* Corrected error message on insert if an invalid schema is provided.
+* Added permissions checks for user & role operations.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v1-alby/1.3.0.md b/site/versioned_docs/version-4.1/release-notes/v1-alby/1.3.0.md
new file mode 100644
index 00000000..ad196159
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v1-alby/1.3.0.md
@@ -0,0 +1,27 @@
+---
+title: 1.3.0
+sidebar_position: 89699
+---
+
+### HarperDB 1.3.0, Alby Release
+11/2/2018
+
+**Features**
+
+* Upgrade: Upgrade to newest version via command line.
+* SQL Support: Added `IS NULL` for SQL parser.
+* Added attribute validation to search operations.
+
+
+**Fixes**
+
+* Fixed `SELECT` calculations, i.e. `SELECT` 2+2.
+* Fixed select OR not returning expected results.
+* No longer allowing reserved words for schema and table names.
+* Corrected process interruptions from improper SQL statements.
+* Improved message handling between spawned processes that replace killed processes.
+* Enhanced error handling for updates to tables that do not exist.
+* Fixed error handling for NoSQL responses when `get_attributes` is provided with invalid attributes.
+* Fixed issue with new columns not being updated properly in update statements.
+* Now validating roles, tables and attributes when creating or updating roles.
+* Fixed an issue where in some cases `undefined` was being returned after dropping a role
diff --git a/site/versioned_docs/version-4.1/release-notes/v1-alby/1.3.1.md b/site/versioned_docs/version-4.1/release-notes/v1-alby/1.3.1.md
new file mode 100644
index 00000000..77e3ffe4
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v1-alby/1.3.1.md
@@ -0,0 +1,29 @@
+---
+title: 1.3.1
+sidebar_position: 89698
+---
+
+### HarperDB 1.3.1, Alby Release
+2/26/2019
+
+**Features**
+
+* Clustering connection direction appointment
+* Foundations for threading/multi processing
+* UUID autogen for hash attributes that were not provided
+* Added cluster status operation
+
+
+**Bug Fixes and Enhancements**
+
+* More logging
+* Clustering communication enhancements
+* Clustering queue ordering by timestamps
+* Cluster reconnection enhancements
+* Number of system core(s) detection
+* Node LTS (10.15) compatibility
+* Update/Alter users enhancements
+* General performance enhancements
+* Warning is logged if different versions of harperdb are connected via clustering
+* Fixed need to restart after user creation/alteration
+* Fixed SQL error that occurred on selecting from an empty table
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v1-alby/index.md b/site/versioned_docs/version-4.1/release-notes/v1-alby/index.md
new file mode 100644
index 00000000..265fe04d
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v1-alby/index.md
@@ -0,0 +1,13 @@
+---
+title: HarperDB Alby (Version 1)
+---
+
+# HarperDB Alby (Version 1)
+
+Did you know our release names are dedicated to employee pups? For our first release, Alby was our pup.
+
+Here is a bit about Alby:
+
+
+
+_Hi, I am Alby. My mom is Kaylan Stock, Director of Marketing at HarperDB. I am a 9-year-old Great Dane mix who loves sun bathing, going for swims, and wreaking havoc on the local squirrels. My favorite snack is whatever you are eating, and I love a good butt scratch!_
diff --git a/site/versioned_docs/version-4.1/release-notes/v2-penny/2.1.1.md b/site/versioned_docs/version-4.1/release-notes/v2-penny/2.1.1.md
new file mode 100644
index 00000000..e1314a5f
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v2-penny/2.1.1.md
@@ -0,0 +1,27 @@
+---
+title: 2.1.1
+sidebar_position: 79898
+---
+
+### HarperDB 2.1.1, Penny Release
+05/22/2020
+
+**Highlights**
+
+* CORE-1007 Added the ability to perform `SQL INSERT` & `UPDATE` with function calls & expressions on values.
+* CORE-1023 Fixed minor bug in final SQL step incorrectly trying to translate ordinals to alias in `ORDER BY` statement.
+* CORE-1020 Fixed bug allowing 'null' and 'undefined' string values to be passed in as valid hash values.
+* CORE-1006 Added SQL functionality that enables `JOIN` statements across different schemas.
+* CORE-1005 Implemented JSONata library to handle our JSON document search functionality in SQL, creating the `SEARCH_JSON` function.
+* CORE-1009 Updated schema validation to allow all printable ASCII characters to be used in schema/table/attribute names, except forward slashes and backticks. The same rules now apply to hash attribute values.
+* CORE-1003 Fixed handling of ORDER BY statements with function aliases.
+* CORE-1004 Fixed bug related to `SELECT *` on `JOIN` queries with table columns with the same name.
+* CORE-996 Fixed an issue where the `transact_to_cluster` flag is lost for CSV URL loads, fixed an issue where new attributes created in CSV bulk load do not sync to the cluster.
+* CORE-994 Added new operation `system_information`. This operation returns info & metrics for the OS, time, memory, cpu, disk, network.
+* CORE-993 Added new custom date functions for AlaSQL & UTC updates.
+* CORE-991 Changed jobs to spawn a new process which will run the intended job without impacting a main HarperDB process.
+* CORE-992 HTTPS enabled by default.
+* CORE-990 Updated `describe_table` to add the record count for the table for LMDB data storage.
+* CORE-989 Killed the socket cluster processes prior to HarperDB processes to eliminate a false uptime.
+* CORE-975 Updated time values set by SQL Date Functions to be in epoch format.
+* CORE-974 Added date functions to `SQL SELECT` column alias functionality.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v2-penny/2.2.0.md b/site/versioned_docs/version-4.1/release-notes/v2-penny/2.2.0.md
new file mode 100644
index 00000000..267168cd
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v2-penny/2.2.0.md
@@ -0,0 +1,43 @@
+---
+title: 2.2.0
+sidebar_position: 79799
+---
+
+### HarperDB 2.2.0, Penny Release
+08/24/2020
+
+**Features/Updates**
+
+* CORE-997 Updated the data format for CSV data loads being sync'd across a cluster to take up less resources
+* CORE-1018 Adds SQL functionality for `BETWEEN` statements
+* CORE-1032 Updates permissions to allow regular users (i.e. non-super users) to call the `get_job` operation
+* CORE-1036 On create/drop table we auto create/drop the related transactions environments for the schema.table
+* CORE-1042 Built raw functions to write to a table's transaction log for insert/update/delete operations
+* CORE-1057 Implemented write transaction into lmdb create/update/delete functions
+* CORE-1048 Adds `SEARCH` wildcard handling for role permissions standards
+* CORE-1059 Added config setting to disable transaction logging for an instance
+* CORE-1076 Adds permissions filter to describe operations
+* CORE-1043 Change clustering catchup to use the new transaction log
+* CORE-1052 Removed word "master" from source
+* CORE-1061 Added new operation called `delete_transactions_before`; this will tail a transaction log for a specific schema/table
+* CORE-1040 On HarperDB startup make sure all tables have a transaction environment
+* CORE-1055 Added 2 new settings to change the server headersTimeout & keepAliveTimeout from the config file
+* CORE-1044 Created new operation `read_transaction_log` which will allow a user to get transactions for a table by `timestamp`, `username`, or `hash_value`
+* CORE-1043 Change clustering catchup to use the new transaction log
+* CORE-1089 Added new attribute to `system_information` for table/transaction log data size in bytes & transaction log record count
+* CORE-1101 Fix to store empty strings rather than considering them null & fix to be able to search on empty strings in SQL/NoSQL.
+* CORE-1054 Updates permissions object to remove delete attribute permission and update table attribute permission key to `attribute_permissions`
+* CORE-1092 Do not allow the `__createdtime__` to be updated
+* CORE-1085 Updates create schema/table & drop schema/table/attribute operations permissions to require super user role and adds integration tests to validate
+* CORE-1071 Updates response messages and status codes from `describe_schema` and `describe_table` operations to provide standard language/status code when a schema item is not found
+* CORE-1049 Updates response message for SQL update op with no matching rows
+* CORE-1096 Added tracking of the origin in the transaction log. This origin object stores the node name, timestamp of the transaction from the originating node & the user.
+
+**Bug Fixes**
+
+* CORE-1028 Fixes bug for simple `SQL SELECT` queries not returning aliases and incorrectly returning hash values when not requested in query
+* CORE-1037 Fixed an issue where numbers with leading zero i.e. 00123 are converted to numbers rather than being honored as strings.
+* CORE-1063 Updates permission error response shape to consolidate issues into individual objects per schema/table combo
+* CORE-1098 Fixed an issue where transaction environments were remaining in the global cache after being dropped.
+* CORE-1086 Fixed issue where responses from insert/update were incorrect with skipped records.
+* CORE-1079 Fixes SQL bugs around invalid schema/table and special characters in `WHERE` clause
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v2-penny/2.2.2.md b/site/versioned_docs/version-4.1/release-notes/v2-penny/2.2.2.md
new file mode 100644
index 00000000..827c63db
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v2-penny/2.2.2.md
@@ -0,0 +1,16 @@
+---
+title: 2.2.2
+sidebar_position: 79797
+---
+
+### HarperDB 2.2.2, Penny Release
+10/27/2020
+
+* CORE-1154 Allowed transaction logging to be disabled even if clustering is enabled.
+* CORE-1153 Fixed issue where `delete_files_before` was writing to transaction log.
+* CORE-1152 Fixed issue where no more than 4 HarperDB forks would be created.
+* CORE-1112 Adds handling for system timestamp attributes in permissions.
+* CORE-1131 Adds better handling for checking perms on operations with action value in JSON.
+* CORE-1113 Fixes validation bug checking for super user/cluster user permissions and other permissions.
+* CORE-1135 Adds validation for valid keys in role API operations.
+* CORE-1073 Adds new `import_from_s3` operation to API.
diff --git a/site/versioned_docs/version-4.1/release-notes/v2-penny/2.2.3.md b/site/versioned_docs/version-4.1/release-notes/v2-penny/2.2.3.md
new file mode 100644
index 00000000..eca953e2
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v2-penny/2.2.3.md
@@ -0,0 +1,9 @@
+---
+title: 2.2.3
+sidebar_position: 79796
+---
+
+### HarperDB 2.2.3, Penny Release
+11/16/2020
+
+* CORE-1158 Performance improvements to core delete function and configuration of `delete_files_before` to run in batches with a pause in between.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v2-penny/2.3.0.md b/site/versioned_docs/version-4.1/release-notes/v2-penny/2.3.0.md
new file mode 100644
index 00000000..2b248490
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v2-penny/2.3.0.md
@@ -0,0 +1,22 @@
+---
+title: 2.3.0
+sidebar_position: 79699
+---
+
+### HarperDB 2.3.0, Penny Release
+12/03/2020
+
+**Features/Updates**
+
+* CORE-1191, CORE-1190, CORE-1125, CORE-1157, CORE-1126, CORE-1140, CORE-1134, CORE-1123, CORE-1124, CORE-1122 Added JWT Authentication option (See documentation for more information)
+* CORE-1128, CORE-1143, CORE-1140, CORE-1129 Added `upsert` operation
+* CORE-1187 Added `get_configuration` operation which allows admins to view their configuration settings.
+* CORE-1175 Added new internal LMDB function to copy an environment for use in future features.
+* CORE-1166 Updated packages to address security vulnerabilities.
+
+**Bug Fixes**
+
+* CORE-1195 Modified `drop_attribute` to drop after data cleanse completes.
+* CORE-1149 Fix SQL bug regarding self joins and updates alasql to 0.6.5 release.
+* CORE-1168 Fix inconsistent invalid schema/table errors.
+* CORE-1162 Fix bug which caused `delete_files_before` to cause tables to grow in size due to an open cursor issue.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v2-penny/2.3.1.md b/site/versioned_docs/version-4.1/release-notes/v2-penny/2.3.1.md
new file mode 100644
index 00000000..51291a01
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v2-penny/2.3.1.md
@@ -0,0 +1,12 @@
+---
+title: 2.3.1
+sidebar_position: 79698
+---
+
+### HarperDB 2.3.1, Penny Release
+1/29/2021
+
+**Bug Fixes**
+
+* CORE-1218 A bug in HarperDB 2.3.0 was identified related to manually calling the `create_attribute` operation. This bug caused secondary indexes to be overwritten by the most recently inserted or updated value for the index, thereby causing a search operation filtered with that index to only return the most recently inserted/updated row. Note, this issue does not affect attributes that are reflexively/automatically created. It only affects attributes created using `create_attribute`. To resolve this issue in 2.3.0 or earlier, drop and recreate your table using reflexive attribute creation. In 2.3.1, drop and recreate your table and use either reflexive attribute creation or `create_attribute`.
+* CORE-1219 Increased maximum table attributes from 1000 to 10000
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v2-penny/index.md b/site/versioned_docs/version-4.1/release-notes/v2-penny/index.md
new file mode 100644
index 00000000..5ab6c2a5
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v2-penny/index.md
@@ -0,0 +1,13 @@
+---
+title: HarperDB Penny (Version 2)
+---
+
+# HarperDB Penny (Version 2)
+
+Did you know our release names are dedicated to employee pups? For our second release, Penny was the star.
+
+Here is a bit about Penny:
+
+
+
+_Hi I am Penny! My dad is Kyle Bernhardy, the CTO of HarperDB. I am a nine-year-old Whippet who lives for running hard and fast while exploring the beautiful terrain of Colorado. My favorite activity is chasing birds along with afternoon snoozes in a sunny spot in my backyard._
diff --git a/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.0.0.md b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.0.0.md
new file mode 100644
index 00000000..2907ee6c
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.0.0.md
@@ -0,0 +1,31 @@
+---
+title: 3.0.0
+sidebar_position: 69999
+---
+
+### HarperDB 3.0, Monkey Release
+5/18/2021
+
+**Features/Updates**
+
+* CORE-1217, CORE-1226, CORE-1232 Create new `search_by_conditions` operation.
+* CORE-1304 Upgrade to Node 12.22.1.
+* CORE-1235 Adds new upgrade/install functionality.
+* CORE-1206, CORE-1248, CORE-1252 Implement `lmdb-store` library for optimized performance.
+* CORE-1062 Added alias operation for `delete_files_before`, named `delete_records_before`.
+* CORE-1243 Change `HTTPS_ON` settings value to false by default.
+* CORE-1189 Implement fastify web server, resulting in improved performance.
+* CORE-1221 Update user API to use role name instead of role id.
+* CORE-1225 Updated dependencies to eliminate npm security warnings.
+* CORE-1241 Adds 3.0 update directive and refactors/fixes update functionality.
+
+**Bug Fixes**
+
+* CORE-1299 Remove all references to the `PROJECT_DIR` setting. This setting is problematic when using node version managers and upgrading the version of node and then installing a new instance of HarperDB.
+* CORE-1288 Fix bug with drop table/schema that was causing 'env required' error log.
+* CORE-1285 Update warning log when trying to create an attribute that already exists.
+* CORE-1254 Added logic to manage data collisions in clustering.
+* CORE-1212 Add pre-check to `drop_user` that returns error if user doesn't exist.
+* CORE-1114 Update response code and message from `add_user` when user already exists.
+* CORE-1111 Update response from `create_attribute` to match the create schema/table response.
+* CORE-1205 Fixed bug that prevented schema/table from being dropped if name was a number or had a wildcard value in it. Updated validation for insert, upsert and update.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.0.md b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.0.md
new file mode 100644
index 00000000..148690f6
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.0.md
@@ -0,0 +1,23 @@
+---
+title: 3.1.0
+sidebar_position: 69899
+---
+
+### HarperDB 3.1.0, Monkey Release
+8/24/2021
+
+**Features/Updates**
+
+* CORE-1320, CORE-1321, CORE-1323, CORE-1324 Version 1.0 of HarperDB Custom Functions
+* CORE-1275, CORE-1276, CORE-1278, CORE-1279, CORE-1280, CORE-1282, CORE-1283, CORE-1305, CORE-1314 IPC server for communication between HarperDB processes, including HarperDB, HarperDB Clustering, and HarperDB Functions
+* CORE-1352, CORE-1355, CORE-1356, CORE-1358 Implement pm2 for HarperDB process management
+* CORE-1292, CORE-1308, CORE-1312, CORE-1334, CORE-1338 Updated installation process to start HarperDB immediately on install and to accept all config settings via environment variable or command line arguments
+* CORE-1310 Updated licensing functionality
+* CORE-1301 Updated validation for performance improvement
+* CORE-1359 Add `hdb-response-time` header which returns the HarperDB response time in milliseconds
+* CORE-1330, CORE-1309 New config settings: `LOG_TO_FILE`, `LOG_TO_STDSTREAMS`, `IPC_SERVER_PORT`, `RUN_IN_FOREGROUND`, `CUSTOM_FUNCTIONS`, `CUSTOM_FUNCTIONS_PORT`, `CUSTOM_FUNCTIONS_DIRECTORY`, `MAX_CUSTOM_FUNCTION_PROCESSES`
+
+**Bug Fixes**
+
+* CORE-1315 Corrected issue in HarperDB restart scenario
+* CORE-1370 Update some of the validation error handlers so that they don't log full stack
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.1.md b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.1.md
new file mode 100644
index 00000000..0adbeb21
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.1.md
@@ -0,0 +1,18 @@
+---
+title: 3.1.1
+sidebar_position: 69898
+---
+
+### HarperDB 3.1.1, Monkey Release
+9/23/2021
+
+**Features/Updates**
+
+* CORE-1393 Added utility function to add settings from env/cmd vars to the settings file on every run/restart
+* CORE-1395 Create a setting which allows the local Studio to be served from an instance of HarperDB
+* CORE-1397 Update the stock 404 response to not return the request URL
+* General updates to optimize Docker container
+
+**Bug Fixes**
+
+* CORE-1399 Added fixes for complex SQL alias issues
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.2.md b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.2.md
new file mode 100644
index 00000000..f1c192b6
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.2.md
@@ -0,0 +1,15 @@
+---
+title: 3.1.2
+sidebar_position: 69897
+---
+
+### HarperDB 3.1.2, Monkey Release
+10/21/2021
+
+**Features/Updates**
+
+* Updated the installation ASCII art to reflect the new HarperDB logo
+
+**Bug Fixes**
+
+* CORE-1408 Corrects issue where `drop_attribute` was not properly setting the LMDB version number causing tables to behave unexpectedly
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.3.md b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.3.md
new file mode 100644
index 00000000..2d484f8d
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.3.md
@@ -0,0 +1,11 @@
+---
+title: 3.1.3
+sidebar_position: 69896
+---
+
+### HarperDB 3.1.3, Monkey Release
+1/14/2022
+
+**Bug Fixes**
+
+* CORE-1446 Fix for scans on indexes larger than 1 million entries causing queries to never return
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.4.md b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.4.md
new file mode 100644
index 00000000..ae0074fd
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.4.md
@@ -0,0 +1,11 @@
+---
+title: 3.1.4
+sidebar_position: 69895
+---
+
+### HarperDB 3.1.4, Monkey Release
+2/24/2022
+
+**Features/Updates**
+
+* CORE-1460 Added new setting `STORAGE_WRITE_ASYNC`. If this setting is true, LMDB will have faster write performance at the expense of not being crash safe. The default for this setting is false, which results in HarperDB being crash safe.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.5.md b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.5.md
new file mode 100644
index 00000000..eff4b5b0
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.1.5.md
@@ -0,0 +1,11 @@
+---
+title: 3.1.5
+sidebar_position: 69894
+---
+
+### HarperDB 3.1.5, Monkey Release
+3/4/2022
+
+**Features/Updates**
+
+* CORE-1498 Fixed incorrect autocasting of strings that start with "0.", which tried to convert them to numbers but instead returned NaN.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.2.0.md b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.2.0.md
new file mode 100644
index 00000000..003575d8
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.2.0.md
@@ -0,0 +1,13 @@
+---
+title: 3.2.0
+sidebar_position: 69799
+---
+
+### HarperDB 3.2.0, Monkey Release
+3/25/2022
+
+**Features/Updates**
+
+* CORE-1391 Bug fix related to orphaned HarperDB background processes.
+* CORE-1509 Updated node version check, updated Node.js version, updated project dependencies.
+* CORE-1518 Remove final call from logger.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.2.1.md b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.2.1.md
new file mode 100644
index 00000000..dc511a70
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.2.1.md
@@ -0,0 +1,11 @@
+---
+title: 3.2.1
+sidebar_position: 69798
+---
+
+### HarperDB 3.2.1, Monkey Release
+6/1/2022
+
+**Features/Updates**
+
+* CORE-1573 Added logic to track the pid of the foreground process if running in foreground. Then on stop, use that pid to kill the process. Logic was also added to kill the pm2 daemon when stop is called.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.3.0.md b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.3.0.md
new file mode 100644
index 00000000..3e3ca784
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v3-monkey/3.3.0.md
@@ -0,0 +1,12 @@
+---
+title: 3.3.0
+sidebar_position: 69699
+---
+
+### HarperDB 3.3.0 - Monkey
+
+* CORE-1595 Added new role type `structure_user`, which enables non-superusers to create/drop schemas/tables/attributes.
+* CORE-1501 Improved performance for `drop_table`.
+* CORE-1599 Added two new operations for custom functions `install_node_modules` & `audit_node_modules`.
+* CORE-1598 Added `skip_node_modules` flag to `package_custom_function_project` operation. This flag allows for not bundling project dependencies and deploying a smaller project to other nodes. Use this flag in tandem with `install_node_modules`.
+* CORE-1707 Binaries are now included for Linux on AMD64, Linux on ARM64, and macOS. GCC, Make, Python are no longer required when installing on these platforms.
diff --git a/site/versioned_docs/version-4.1/release-notes/v3-monkey/index.md b/site/versioned_docs/version-4.1/release-notes/v3-monkey/index.md
new file mode 100644
index 00000000..84d3ac9e
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v3-monkey/index.md
@@ -0,0 +1,11 @@
+---
+title: HarperDB Monkey (Version 3)
+---
+
+# HarperDB Monkey (Version 3)
+
+Did you know our release names are dedicated to employee pups? For our third release, we have Monkey.
+
+
+
+_Hi, I am Monkey, a.k.a. Monk, a.k.a. Monchichi. My dad is Aron Johnson, the Director of DevOps at HarperDB. I am an eight-year-old Australian Cattle dog mutt whose favorite pastime is hunting and collecting tennis balls from the park next to her home. I love burrowing in the Colorado snow, rolling in the cool grass on warm days, and cheese!_
diff --git a/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.0.md b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.0.md
new file mode 100644
index 00000000..49770307
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.0.md
@@ -0,0 +1,124 @@
+---
+title: 4.0.0
+sidebar_position: 59999
+---
+
+### HarperDB 4.0.0, Tucker Release
+11/2/2022
+
+**Networking & Data Replication (Clustering)**
+
+The HarperDB clustering internals have been rewritten and the underlying technology for Clustering has been completely replaced with [NATS](https://nats.io/), an enterprise grade connective technology responsible for addressing, discovery and exchanging of messages that drive the common patterns in distributed systems.
+* CORE-1464, CORE-1470: Remove SocketCluster dependencies and all code related to them.
+* CORE-1465, CORE-1485, CORE-1537, CORE-1538, CORE-1558, CORE-1583, CORE-1665, CORE-1710, CORE-1801, CORE-1865: Add `nats-server` as a dependency; on install of HarperDB, download `nats-server` if possible, otherwise fall back to building it from source code.
+* CORE-1593, CORE-1761: Add `nats.js` as project dependency.
+* CORE-1466: Build NATS configs on `harperdb run` based on HarperDB YAML configuration.
+* CORE-1467, CORE-1508: Launch and manage NATS servers with PM2.
+* CORE-1468, CORE-1507: Create a process which reads the work queue stream and processes transactions.
+* CORE-1481, CORE-1529, CORE-1698, CORE-1502, CORE-1696: On upgrade to 4.0, update pre-existing clustering configurations, create table transaction streams, create work queue stream, update `hdb_nodes` table, create clustering folder structure, and rebuild self-signed certs.
+* CORE-1494, CORE-1521, CORE-1755: Build out internals to interface with NATS.
+* CORE-1504: Update existing hooks to save transactions to work with NATS.
+* CORE-1514, CORE-1515, CORE-1516, CORE-1527, CORE-1532: Update `add_node`, `update_node`, and `remove_node` operations to no longer need host and port in the payload. These operations now dynamically manage sourcing of table-level transaction streams between nodes and work queues.
+* CORE-1522: Create `NATSReplyService` process which handles receiving NATS-based requests from remote instances and sending back appropriate responses.
+* CORE-1471, CORE-1568, CORE-1563, CORE-1534, CORE-1569: Update `cluster_status` operation.
+* CORE-1611: Update pre-existing transaction log operations to be audit log operations.
+* CORE-1541, CORE-1612, CORE-1613: Create translation log operations which interface with streams.
+* CORE-1668: Update NATS serialization / deserialization to use MessagePack.
+* CORE-1673: Add `system_info` param to `hdb_nodes` table and update on `add_node` and `cluster_status`.
+* CORE-1477, CORE-1493, CORE-1557, CORE-1596, CORE-1577: Both a full HarperDB restart & just clustering restart call the NATS server with a reload directive to maintain full uptime while servers refresh.
+* CORE-1474: HarperDB install adds clustering folder structure.
+* CORE-1530: Post `drop_table` HarperDB purges the related transaction stream.
+* CORE-1567: Set NATS config to always use TLS.
+* CORE-1543: Removed the `transact_to_cluster` attribute from the bulk load operations. Now bulk loads always replicate.
+* CORE-1533, CORE-1556, CORE-1561, CORE-1562, CORE-1564: New operation `configure_cluster`, this operation enables bulk publishing and subscription of multiple tables to multiple instances of HarperDB.
+* CORE-1535: Create work queue stream on install of HarperDB. This stream receives transactions from remote instances of HarperDB which are then ingested in order.
+* CORE-1551: Create transaction streams on the remote node if they do not exist when performing `add_node` or `update_node`.
+* CORE-1594, CORE-1605, CORE-1749, CORE-1767, CORE-1770: Optimize the work queue stream and its consumer to be more performant and validate exact once delivery.
+* CORE-1621, CORE-1692, CORE-1570, CORE-1693: NATS stream names are MD5 hashed to avoid characters that HarperDB allows, but NATS may not.
+* CORE-1762: Add a new optional attribute to `add_node` and `update_node` named `opt_start_time`. This attribute sets a starting time to start synchronizing transactions.
+* CORE-1785: Optimizations and bug fixes in regards to sourcing data from remote instances on HarperDB.
+* CORE-1588: Created new operation `set_cluster_routes` to enable setting routes for instances of HarperDB to mesh together.
+* CORE-1589: Created new operation `get_cluster_routes` to allow for retrieval of routes used to connect the instance of HarperDB to the mesh.
+* CORE-1590: Created new operation `delete_cluster_routes` to allow for removal of routes used to connect the instance of HarperDB to the mesh.
+* CORE-1667: Fix old environment variable `CLUSTERING_PORT` not mapping to new hub server port.
+* CORE-1609: Allow `remove_node` to be called when the other node cannot be reached.
+* CORE-1815: Add transaction lock to `add_node` and `update_node` to avoid a concurrent NATS source update bug.
+* CORE-1848: Update stream configs if the node name has been changed in the YAML configuration.
+* CORE-1873: Update `add_node` and `update_node` so that they auto-create the schema/table on both the local and remote node respectively.
+
+
+**Data Storage**
+
+We have made improvements to how we store, index, and retrieve data.
+* CORE-1619: Enabled new concurrent flushing technology for improved write performance.
+* CORE-1701: Optimize search performance for `search_by_conditions` when executing multiple AND conditions.
+* CORE-1652: Encode the values of secondary indices more efficiently for faster access.
+* CORE-1670: Store updated timestamp in `lmdb.js`' version property.
+* CORE-1651: Enabled multiple value indexing of array values which allows for the ability to search on specific elements in an array more efficiently.
+* CORE-1649, CORE-1659: Large text values (larger than 255 bytes) are no longer stored in separate blob index. Now they are segmented and delimited in the same index to increase search performance.
+* Complex objects and object arrays are no longer stored in a separate index to preserve storage and increase write throughput.
+* CORE-1650, CORE-1724, CORE-1738: Improved internals around interpreting attribute values.
+* CORE-1657: Deferred property decoding allows large objects to be stored, but individual attributes can be accessed (like with get_attributes) without incurring the cost of decoding the entire object.
+* CORE-1658: Enable in-memory caching of records for even faster access to frequently accessed data.
+* CORE-1693: Wrap updates in async transactions to ensure ACID-compliant updates.
+* CORE-1653: Upgrade to 4.0 rebuilds tables to reflect changes made to index improvements.
+* CORE-1753: Removed old `node-lmdb` dependency.
+* CORE-1787: Freeze objects returned from queries.
+* CORE-1821: Read the `WRITE_ASYNC` setting which enables LMDB nosync.
+
+**Logging**
+
+HarperDB has increased logging specificity by breaking out logs based on components logging. There are specific log files each for HarperDB Core, Custom Functions, Hub Server, Leaf Server, and more.
+* CORE-1497: Remove `pino` and `winston` dependencies.
+* CORE-1426: All logging is output via `stdout` and `stderr`, our default logging is then picked up by PM2 which handles writing out to file.
+* CORE-1431: Improved `read_log` operation validation.
+* CORE-1433, CORE-1463: Added log rotation.
+* CORE-1553, CORE-1555, CORE-1552, CORE-1554, CORE-1704: Performance gain by only serializing objects and arrays if the log is for the level defined in configuration.
+* CORE-1436: Upgrade to 4.0 updates internals for logging changes.
+* CORE-1428, CORE-1440, CORE-1442, CORE-1434, CORE-1435, CORE-1439, CORE-1482, CORE-1751, CORE-1752: Bug fixes, performance improvements and improved unit tests.
+* CORE-1691: Convert non-PM2 managed log file writes to use Node.js `fs.appendFileSync` function.
+
+**Configuration**
+
+HarperDB has updated its configuration from a properties file to YAML.
+* CORE-1448, CORE-1449, CORE-1519, CORE-1587: Upgrade automatically converts the pre-existing settings file to YAML.
+* CORE-1445, CORE-1534, CORE-1444, CORE-1858: Build out new logic to create, update, and interpret the YAML configuration file.
+* Installer has updated prompts to reflect YAML settings.
+* CORE-1447: Create an alias for the `configure_cluster` operation as `set_configuration`.
+* CORE-1461, CORE-1462, CORE-1483: Unit test improvements.
+* CORE-1492: Improvements to `get_configuration` and `set_configuration` operations.
+* CORE-1503: Modify HarperDB configuration for more granular certificate definition.
+* CORE-1591: Update `routes` IP param to `host` and to `leaf` config in `harperdb.conf`
+* CORE-1519: Fix issue when switching between old and new versions of HarperDB we are getting the config parameter is undefined error on npm install.
+
+**Broad NodeJS and Platform Support**
+* CORE-1624: HarperDB can now run on multiple versions of NodeJS, from v14 to v19. We primarily test on v18, so that is the preferred version.
+
+**Windows 10 and 11**
+* CORE-1088: HarperDB now runs natively on Windows 10 and 11 without the need to run in a container or be installed in WSL. Windows is only intended for evaluation and development purposes, not for production workloads.
+
+**Extra Changes and Bug Fixes**
+* CORE-1520: Refactor installer to remove all waterfall code and update to use Promises.
+* CORE-1573: Stop the PM2 daemon and any logging processes when stopping hdb.
+* CORE-1586: When HarperDB is running in foreground stop any additional logging processes from being spawned.
+* CORE-1626: Update docker file to accommodate new `harperdb.conf` file.
+* CORE-1592, CORE-1526, CORE-1660, CORE-1646, CORE-1640, CORE-1689, CORE-1711, CORE-1601, CORE-1726, CORE-1728, CORE-1736, CORE-1735, CORE-1745, CORE-1729, CORE-1748, CORE-1644, CORE-1750, CORE-1757, CORE-1727, CORE-1740, CORE-1730, CORE-1777, CORE-1778, CORE-1782, CORE-1775, CORE-1771, CORE-1774, CORE-1759, CORE-1772, CORE-1861, CORE-1862, CORE-1863, CORE-1870, CORE-1869: Changes for CI/CD pipeline and integration tests.
+* CORE-1661: Fixed issue where old boot properties file caused an error when attempting to install 4.0.0.
+* CORE-1697, CORE-1814, CORE-1855: Upgrade fastify dependency to new major version 4.
+* CORE-1629: Jobs are now running as processes managed by the PM2 daemon.
+* CORE-1733: Update LICENSE to reflect our EULA on our site.
+* CORE-1606: Enable Custom Functions by default.
+* CORE-1714: Include pre-built binaries for most common platforms (darwin-arm64, darwin-x64, linux-arm64, linux-x64, win32-x64).
+* CORE-1628: Fix issue where setting license through environment variable not working.
+* CORE-1602, CORE-1760, CORE-1838, CORE-1839, CORE-1847, CORE-1773: HarperDB Docker container improvements.
+* CORE-1706: Add support for encoding HTTP responses with MessagePack.
+* CORE-1709: Improve the way lmdb.js dependencies are installed.
+* CORE-1758: Remove/update unnecessary HTTP headers.
+* CORE-1756: On `npm install` and `harperdb install` change the node version check from an error to a warning if the installed Node.js version does not match our preferred version.
+* CORE-1791: Optimizations to authenticated user caching.
+* CORE-1794: Update README to discuss Windows support & Node.js versions
+* CORE-1837: Fix issue where Custom Function directory was not being created on install.
+* CORE-1742: Add more validation to audit log - check schema/table exists and log is enabled.
+* CORE-1768: Fix issue where when running in foreground HarperDB process is not stopping on `harperdb stop`.
+* CORE-1864: Fix to semver checks on upgrade.
+* CORE-1850: Fix issue where a `cluster_user` type role could not be altered.
diff --git a/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.1.md b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.1.md
new file mode 100644
index 00000000..9e148e63
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.1.md
@@ -0,0 +1,12 @@
+---
+title: 4.0.1
+sidebar_position: 59998
+---
+
+### HarperDB 4.0.1, Tucker Release
+01/20/2023
+
+**Bug Fixes**
+
+* CORE-1992 Local studio was not loading because the path got mangled in the build.
+* CORE-2001 Fixed `deploy_custom_function_project`, which was broken by the Node.js update.
diff --git a/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.2.md b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.2.md
new file mode 100644
index 00000000..b65d1427
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.2.md
@@ -0,0 +1,12 @@
+---
+title: 4.0.2
+sidebar_position: 59997
+---
+
+### HarperDB 4.0.2, Tucker Release
+01/24/2023
+
+**Bug Fixes**
+
+* CORE-2003 Fix bug where, on machines with a single core, the thread config would default to zero.
+* Update to lmdb 2.7.3 and msgpackr 1.7.0.
diff --git a/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.3.md b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.3.md
new file mode 100644
index 00000000..67aaae56
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.3.md
@@ -0,0 +1,11 @@
+---
+title: 4.0.3
+sidebar_position: 59996
+---
+
+### HarperDB 4.0.3, Tucker Release
+01/26/2023
+
+**Bug Fixes**
+
+* CORE-2007 Add the 4.0.0 "update nodes" launch script to the build script to fix the clustering upgrade.
diff --git a/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.4.md b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.4.md
new file mode 100644
index 00000000..2a30c9d1
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.4.md
@@ -0,0 +1,11 @@
+---
+title: 4.0.4
+sidebar_position: 59995
+---
+
+### HarperDB 4.0.4, Tucker Release
+01/27/2023
+
+**Bug Fixes**
+
+* CORE-2009 Fixed bug where add node was not being called when upgrading clustering.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.5.md b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.5.md
new file mode 100644
index 00000000..dc66721f
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.5.md
@@ -0,0 +1,14 @@
+---
+title: 4.0.5
+sidebar_position: 59994
+---
+
+### HarperDB 4.0.5, Tucker Release
+02/15/2023
+
+**Bug Fixes**
+
+* CORE-2029 Improved the upgrade process for handling existing user TLS certificates and correctly configuring TLS settings. Added a prompt to upgrade to determine if new certificates should be created or existing certificates should be kept/used.
+* Fix the way NATS connections are honored in a local environment.
+* Do not define the certificate authority path to NATS if it is not defined in the HarperDB config.
+
diff --git a/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.6.md b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.6.md
new file mode 100644
index 00000000..bf97d148
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.6.md
@@ -0,0 +1,11 @@
+---
+title: 4.0.6
+sidebar_position: 59993
+---
+
+### HarperDB 4.0.6, Tucker Release
+03/09/2023
+
+**Bug Fixes**
+
+* Fixed a data serialization error that occurs when a large number of different record structures are persisted in a single table.
diff --git a/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.7.md b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.7.md
new file mode 100644
index 00000000..7d48666a
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.0.7.md
@@ -0,0 +1,11 @@
+---
+title: 4.0.7
+sidebar_position: 59992
+---
+
+### HarperDB 4.0.7, Tucker Release
+03/10/2023
+
+**Bug Fixes**
+
+* Update lmdb.js dependency
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.1.0.md b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.1.0.md
new file mode 100644
index 00000000..539ed67d
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.1.0.md
@@ -0,0 +1,61 @@
+---
+title: 4.1.0
+sidebar_position: 59899
+---
+
+### HarperDB 4.1.0, Tucker Release
+
+HarperDB 4.1 introduces the ability to use worker threads for concurrently handling HTTP requests. Previously this was handled by processes. This shift provides important benefits in terms of better control of traffic delegation with support for optimized load tracking and session affinity, better debuggability, and reduced memory footprint.
+
+This means debugging will be much easier for custom functions. If you install/run HarperDB locally, most modern IDEs like WebStorm and VSCode support worker thread debugging, so you can start HarperDB in your IDE, set breakpoints in your custom functions, and debug them.
+
+The associated routing functionality now includes session affinity support. This can be used to consistently route users to the same thread, which can improve caching locality, performance, and fairness. This can be enabled with the [`http.sessionAffinity` option in your configuration](../../security/configuration#session-affinity).
+
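+As a minimal configuration sketch (the `ip` value shown here is an assumption about the accepted affinity key; see the linked configuration section for the supported values):
+
+```yaml
+http:
+  sessionAffinity: ip
+```
+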
+HarperDB 4.1's NoSQL query handling has been revamped to consistently use iterators, which provide an extremely memory-efficient mechanism for streaming query results directly to the network _as_ they are computed. This results in a faster Time to First Byte (TTFB), since only the first record/value in a query needs to be computed before data can start to be sent, and less memory usage during querying, since the entire query result does not need to be stored in memory. These iterators are also available in query results for custom functions and provide a means for custom function code to iteratively access data from the database without loading entire result sets. This should be a completely transparent upgrade; all HTTP APIs function the same, with the one exception that custom functions can no longer access query results by `[index]` (they should use array methods or `for...of` loops to handle query results, as sketched below).
+
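+As a hedged illustration for custom function code (the `hdbCore.requestWithoutAuthentication` call and the record shape are hypothetical placeholders):
+
+```javascript
+// `results` is an iterable query result in 4.1, not an array
+const results = await hdbCore.requestWithoutAuthentication(request);
+
+// Indexing no longer works, because results is an iterator:
+// const first = results[0];
+
+// Iterate records as they stream from the database instead:
+for (const record of results) {
+    console.log(record.id);
+}
+
+// Or, when a plain array is genuinely needed, materialize it explicitly.
+// Query results are generally single-pass, so do this instead of, not in
+// addition to, the loop above:
+// const asArray = Array.from(results);
+```
+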
+4.1 includes configuration options for specifying the location of database storage files. This allows you to specifically locate database directories and files on different volumes for better flexibility and utilization of disks and storage volumes. See the [storage configuration](../../configuration#storage) and [schemas configuration](../../configuration#schemas) for information on how to configure these locations.
+
+Logging has been revamped and condensed into one `hdb.log` file. See [logging](../../logging) for more information.
+
+A new operation called `cluster_network` was added; it pings the cluster and returns a list of enmeshed nodes.
+
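+A minimal request body for the new operation (no other parameters are required as far as these notes describe):
+
+```json
+{
+    "operation": "cluster_network"
+}
+```
+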
+Custom Functions will no longer automatically load static file routes, instead the `@fastify/static` plugin will need to be registered with the Custom Function server. See [Host A Static Web UI](../../custom-functions/host-static).
+
+Updates to S3 import and export mean that these operations now require the bucket `region` in the request. Also, if referencing a nested object, it should be done in the `key` parameter. See examples [here](https://api.harperdb.io/#aa74bbdf-668c-4536-80f1-b91bb13e5024).
+
+Due to the AWS SDK v2 reaching end of life support we have updated to v3. This has caused some breaking changes in our operations `import_from_s3` and `export_to_s3`:
+* A new attribute `region` will need to be supplied
+* The `bucket` attribute can no longer have trailing slashes; slashes now belong in the `key` (see the example below).
+
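+A hedged sketch of an `import_from_s3` request under the new requirements (the schema, table, bucket, key, region, and credentials below are placeholders):
+
+```json
+{
+    "operation": "import_from_s3",
+    "action": "insert",
+    "schema": "dev",
+    "table": "dog",
+    "s3": {
+        "aws_access_key_id": "YOUR_KEY_ID",
+        "aws_secret_access_key": "YOUR_SECRET_KEY",
+        "bucket": "hdb-import",
+        "key": "data/dog.csv",
+        "region": "us-east-1"
+    }
+}
+```
+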
+Starting HarperDB without any command (just `harperdb`) now runs HarperDB like a standard process, in the foreground. This means you can use standard Unix tooling to interact with the process, and it is conducive to running HarperDB with systemd or any other process management tool. If you wish to have HarperDB launch itself in a separate background process (and immediately terminate the shell process), you can do so by running `harperdb start`.
+
+Internal Tickets completed:
+* CORE-609 - Ensure that attribute names are always added to global schema as Strings
+* CORE-1549 - Remove fastify-static code from Custom Functions server which auto serves content from "static" folder
+* CORE-1655 - Iterator based queries
+* CORE-1764 - Fix issue where describe_all operation returns an empty object for non super-users if schema(s) do not yet have table(s)
+* CORE-1854 - Switch to using worker threads instead of processes for handling concurrency
+* CORE-1877 - Extend the csv_url_load operation to allow for additional headers to be passed to the remote server when the csv is being downloaded
+* CORE-1893 - Add last updated timestamp to describe operations
+* CORE-1896 - Fix issue where Select * from system.hdb_info returns wrong HDB version number after Instance Upgrade
+* CORE-1904 - Fix issue when executing GEOJSON query in SQL
+* CORE-1905 - Add HarperDB YAML configuration setting which defines the storage location of NATS streams
+* CORE-1906 - Add HarperDB YAML configuration setting defining the storage location of tables.
+* CORE-1655 - Streaming binary format serialization
+* CORE-1943 - Add configuration option to set mount point for audit tables
+* CORE-1921 - Update NATS transaction lifecycle to handle message deduplication in work queue streams.
+* CORE-1963 - Update logging for better readability, reduced duplication, and request context information.
+* CORE-1968 - In server\nats\natsIngestService.js remove the js_msg.working(); line to improve performance.
+* CORE-1976 - Fix error when calling describe_table operation with no schema or table defined in payload.
+* CORE-1983 - Fix issue where create_attribute operation does not validate request for required attributes
+* CORE-2015 - Remove PM2 logs that get logged in console when starting HDB
+* CORE-2048 - systemd script for 4.1
+* CORE-2052 - Include thread information in system_information for visibility of threads
+* CORE-2061 - Add a better error msg when clustering is enabled without a cluster user set
+* CORE-2068 - Create new log rotate logic since pm2 log-rotate no longer used
+* CORE-2072 - Update to Node 18.15.0
+* CORE-2090 - Upgrade Testing from v4.0.x and v3.x to v4.1.
+* CORE-2091 - Run the performance tests
+* CORE-2092 - Allow for automatic patch version updates of certain packages
+* CORE-2109 - Add verify option to clustering TLS configuration
+* CORE-2111 - Update AWS SDK to v3
diff --git a/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.1.1.md b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.1.1.md
new file mode 100644
index 00000000..0dce0bd7
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.1.1.md
@@ -0,0 +1,16 @@
+---
+title: 4.1.1
+sidebar_position: 59898
+---
+
+### HarperDB 4.1.1, Tucker Release
+06/16/2023
+
+* HarperDB uses improved logic for determining default heap limits and thread counts. When running in a restricted container and on NodeJS 18.15+, HarperDB will use the constrained memory limit to determine heap limits for each thread. In more memory constrained servers with many CPU cores, a reduced default thread count will be used to ensure that excessive memory is not used by many workers. You may still define your own thread count (with `http`/`threads`) in the [configuration](../../configuration).
+* An option has been added for [disabling the republishing of NATS messages](../../configuration), which can provide improved replication performance in a fully connected network.
+* Improvements to our OpenShift container.
+* Dependency security updates.
+
+**Bug Fixes**
+
+* Fixed a bug in reporting database metrics in the `system_information` operation.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.1.2.md b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.1.2.md
new file mode 100644
index 00000000..2a62db64
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v4-tucker/4.1.2.md
@@ -0,0 +1,13 @@
+---
+title: 4.1.2
+sidebar_position: 59897
+---
+
+### HarperDB 4.1.2, Tucker Release
+06/16/2023
+
+* HarperDB has updated binary dependencies to support older glibc versions, back to 2.17.
+* A new CLI command was added to get the current status of whether HarperDB is running and the cluster status. This is available with `harperdb status`.
+* Improvements to our OpenShift container.
+* Dependency security updates.
+
diff --git a/site/versioned_docs/version-4.1/release-notes/v4-tucker/index.md b/site/versioned_docs/version-4.1/release-notes/v4-tucker/index.md
new file mode 100644
index 00000000..0d8d3fd0
--- /dev/null
+++ b/site/versioned_docs/version-4.1/release-notes/v4-tucker/index.md
@@ -0,0 +1,11 @@
+---
+title: HarperDB Tucker (Version 4)
+---
+
+# HarperDB Tucker (Version 4)
+
+Did you know our release names are dedicated to employee pups? For our fourth release, we have Tucker.
+
+
+
+_G’day, I’m Tucker. My dad is David Cockerill, a software engineer here at HarperDB. I am a 3-year-old Labrador Husky mix. I love to protect my dad from all the squirrels and rabbits we have in our yard. I have very ticklish feet and love belly rubs!_
diff --git a/site/versioned_docs/version-4.1/security/basic-auth.md b/site/versioned_docs/version-4.1/security/basic-auth.md
new file mode 100644
index 00000000..f251f27a
--- /dev/null
+++ b/site/versioned_docs/version-4.1/security/basic-auth.md
@@ -0,0 +1,70 @@
+---
+title: Authentication
+---
+
+# Authentication
+
+HarperDB uses Basic Auth and JSON Web Tokens (JWTs) to secure our HTTP requests. In the context of an HTTP transaction, **basic access authentication** is a method for an HTTP user agent to provide a user name and password when making a request.
+
+
+
+***You do not need to log in separately. Basic Auth is added to each HTTP request like `create_schema`, `create_table`, `insert`, etc. via headers.***
+
+
+
+A header is added to each HTTP request. The header key is **“Authorization”**, and the header value is **“Basic <<your Base64-encoded username:password token>>”**.
+
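+For example, a hedged cURL sketch (the host, port, and credentials are placeholders; `dXNlcm5hbWU6cGFzc3dvcmQ=` is simply `username:password` Base64 encoded):
+
+```bash
+curl --location --request POST 'http://localhost:9925' \
+--header 'Content-Type: application/json' \
+--header 'Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=' \
+--data-raw '{
+    "operation": "describe_all"
+}'
+```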
+
+
+
+
+## Authentication in HarperDB Studio
+
+In the below code sample, you can see where we add the authorization header to the request. This needs to be added for each and every HTTP request for HarperDB.
+
+_Note: This function uses btoa. Learn about [btoa here](https://developer.mozilla.org/en-US/docs/Web/API/btoa)._
+
+```javascript
+const http = require('http');
+
+// Helper used below: returns true if the response body can be parsed as JSON
+function isJson(data) {
+    try {
+        JSON.parse(data);
+        return true;
+    } catch (error) {
+        return false;
+    }
+}
+
+function callHarperDB(call_object, operation, callback) {
+    const options = {
+        "method": "POST",
+        "hostname": call_object.endpoint_url,
+        "port": call_object.endpoint_port,
+        "path": "/",
+        "headers": {
+            "content-type": "application/json",
+            // Basic Auth: Base64-encode "username:password" and prefix it with "Basic "
+            "authorization": "Basic " + btoa(call_object.username + ':' + call_object.password),
+            "cache-control": "no-cache"
+        }
+    };
+
+    const http_req = http.request(options, function (hdb_res) {
+        let chunks = [];
+
+        hdb_res.on("data", function (chunk) {
+            chunks.push(chunk);
+        });
+
+        hdb_res.on("end", function () {
+            const body = Buffer.concat(chunks);
+            if (isJson(body)) {
+                return callback(null, JSON.parse(body));
+            } else {
+                return callback(body, null);
+            }
+        });
+    });
+
+    http_req.on("error", function (chunk) {
+        return callback("Failed to connect", null);
+    });
+
+    // The operation (e.g. { "operation": "create_schema", ... }) is sent as the JSON request body
+    http_req.write(JSON.stringify(operation));
+    http_req.end();
+}
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/security/certificate-management.md b/site/versioned_docs/version-4.1/security/certificate-management.md
new file mode 100644
index 00000000..2a840f78
--- /dev/null
+++ b/site/versioned_docs/version-4.1/security/certificate-management.md
@@ -0,0 +1,59 @@
+---
+title: Certificate Management
+---
+
+# Certificate Management
+
+This document covers managing certificates for the Operations API and the Custom Functions API. For information on certificate management for clustering, see [clustering certificate management](../clustering/certificate-management).
+
+## Development
+
+An out-of-the-box install of HarperDB does not have HTTPS enabled for the Operations API or the Custom Functions API (see [configuration](../configuration) for relevant configuration file settings). This is great for local development. If you are developing using a remote server and your requests are traversing the Internet, we recommend that you enable HTTPS.
+
+To enable HTTPS, set the `operationsApi.network.https` and `customFunctions.network.https` to `true` and restart HarperDB.
+
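+A hedged sketch of the relevant `harperdb-config.yaml` entries (other keys under these sections are omitted):
+
+```yaml
+operationsApi:
+  network:
+    https: true
+customFunctions:
+  network:
+    https: true
+```
+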
+By default HarperDB will generate certificates and place them at `/keys/`. These certificates will not have a valid Common Name (CN) for your HarperDB node, so you will be able to use HTTPS, but your HTTPS client must be configured to accept the invalid certificate.
+
+## Production
+
+For production deployments, in addition to using HTTPS, we recommend using your own certificate authority (CA) or a public CA such as Let's Encrypt, to generate certificates with CNs that match the Fully Qualified Domain Name (FQDN) of your HarperDB node.
+
+We have a few recommended options for enabling HTTPS in a production setting.
+
+### Option: Enable HarperDB HTTPS and Replace Certificates
+
+To enable HTTPS, set the `operationsApi.network.https` and `customFunctions.network.https` to `true` and restart HarperDB.
+
+To replace the certificates, either replace the contents of the existing certificate files at `/keys/`, or update the HarperDB configuration with the path of your new certificate files, and then restart HarperDB.
+```yaml
+operationsApi:
+ tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+```
+```yaml
+customFunctions:
+ tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+```
+
+### Option: Nginx Reverse Proxy
+
+Instead of enabling HTTPS for HarperDB, Nginx can be used as a reverse proxy for HarperDB.
+
+Install Nginx, configure Nginx to use certificates issued from your own CA or a public CA, then configure Nginx to listen for HTTPS requests and forward to HarperDB as HTTP requests.
+
+[Certbot](https://certbot.eff.org/) is a great tool for automatically requesting and renewing Let’s Encrypt certificates used by Nginx.
+
+### Option: External Reverse Proxy
+
+Instead of enabling HTTPS for HarperDB, a number of different external services can be used as a reverse proxy for HarperDB. These services typically have integrated certificate management. Configure the service to listen for HTTPS requests and forward (over a private network) to HarperDB as HTTP requests.
+
+Examples of these types of services include an AWS Application Load Balancer or a GCP external HTTP(S) load balancer.
+
+### Additional Considerations
+
+It is possible to use different certificates for the Operations API and the Custom Functions API. In scenarios where only your Custom Functions endpoints need to be exposed to the Internet and the Operations API is reserved for HarperDB administration, you may want to use a private CA to issue certificates for the Operations API and a public CA for the Custom Functions API certificates.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/security/configuration.md b/site/versioned_docs/version-4.1/security/configuration.md
new file mode 100644
index 00000000..53fad411
--- /dev/null
+++ b/site/versioned_docs/version-4.1/security/configuration.md
@@ -0,0 +1,61 @@
+---
+title: Configuration
+---
+
+# Configuration
+
+HarperDB was set up to require very minimal configuration to work out of the box. There are, however, some best practices we encourage for anyone building an app with HarperDB.
+
+
+
+## CORS
+
+HarperDB allows for managing [cross-origin HTTP requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS). By default, HarperDB enables CORS for all domains. If you need to disable CORS completely or set up an access list of domains, you can do the following:
+
+1) Open the `harperdb-config.yaml` file; this can be found in <ROOTPATH>, the location you specified during install.
+
+2) In `harperdb-config.yaml` there should be two entries under `operationsApi.network`: `cors` and `corsAccessList`.
+   * `cors`
+
+     1) To turn off, change to: `cors: false`
+
+     2) To turn on, change to: `cors: true`
+
+   * `corsAccessList`
+
+     1) The `corsAccessList` will only be recognized by the system when `cors` is `true`.
+
+     2) To create an access list, set `corsAccessList` to a comma-separated list of domains.
+
+        e.g. `corsAccessList` is `http://harperdb.io,http://products.harperdb.io`
+
+     3) To clear out the access list and allow all domains, set `corsAccessList` to `[null]` (see the example below).
+
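+Putting this together, a hedged `harperdb-config.yaml` sketch (whether `corsAccessList` is written as a YAML array, as shown, or as a comma-separated string is an assumption):
+
+```yaml
+operationsApi:
+  network:
+    cors: true
+    corsAccessList:
+      - http://harperdb.io
+      - http://products.harperdb.io
+```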
+
+## SSL
+
+HarperDB provides the option to use an HTTP interface or an HTTPS (with HTTP/2) interface. The default port for the server is 9925.
+
+
+
+This default port can be changed by updating the `operationsApi.network.port` value in `harperdb-config.yaml`.
+
+
+
+By default, HTTPS is turned off and HTTP is turned on. It is recommended that you never directly expose HarperDB's HTTP interface through a publicly available port. HTTP is intended for local or private network use.
+
+
+
+You can toggle between HTTPS and HTTP in the settings file by setting `operationsApi.network.https` to `true`/`false`. When `https` is set to `false`, the server will use HTTP (version 1.1). Enabling HTTPS enables both HTTP/1.1 and HTTP/2 over TLS.
+
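+A hedged sketch of these settings in `harperdb-config.yaml`:
+
+```yaml
+operationsApi:
+  network:
+    port: 9925
+    https: true
+```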
+
+
+HarperDB automatically generates a certificate (certificate.pem), a certificate authority (ca.pem) and a private key file (privateKey.pem) which live at `/keys/`.
+
+
+
+You can replace these with your own certificates and key.
+
+
+
+**Changes to these settings require a restart. Use the `restart` operation from the HarperDB Operations API or run `harperdb restart` from the command line.**
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/security/index.md b/site/versioned_docs/version-4.1/security/index.md
new file mode 100644
index 00000000..51f8ce0b
--- /dev/null
+++ b/site/versioned_docs/version-4.1/security/index.md
@@ -0,0 +1,13 @@
+---
+title: Security
+---
+
+# Security
+
+HarperDB uses role-based, attribute-level security to ensure that users can only gain access to the data they’re supposed to be able to access. Our granular permissions allow for unparalleled flexibility and control, and can actually lower the total cost of ownership compared to other database solutions, since you no longer have to replicate subsets of your data to isolate use cases.
+
+* [JWT Authentication](./jwt-auth)
+* [Basic Authentication](./basic-auth)
+* [Configuration](./configuration)
+* [Users and Roles](./users-and-roles)
+
diff --git a/site/versioned_docs/version-4.1/security/jwt-auth.md b/site/versioned_docs/version-4.1/security/jwt-auth.md
new file mode 100644
index 00000000..8447c5ff
--- /dev/null
+++ b/site/versioned_docs/version-4.1/security/jwt-auth.md
@@ -0,0 +1,97 @@
+---
+title: JWT Authentication
+---
+
+# JWT Authentication
+HarperDB uses token-based authentication with JSON Web Tokens (JWTs).
+
+This consists of two primary operations, `create_authentication_tokens` and `refresh_operation_token`. These generate two types of tokens, as follows:
+
+* The `operation_token` which is used to authenticate all HarperDB operations in the Bearer Token Authorization Header. The default expiry is one day.
+
+* The `refresh_token` which is used to generate a new `operation_token` upon expiry. This token is used in the Bearer Token Authorization Header for the `refresh_operation_token` operation only. The default expiry is thirty days.
+
+The `create_authentication_tokens` operation can be used at any time to refresh both tokens in the event that both have expired or been lost.
+
+## Create Authentication Tokens
+
+Users must initially create tokens using their HarperDB credentials. The following POST body is sent to HarperDB. No headers are required for this POST operation.
+
+```json
+{
+ "operation": "create_authentication_tokens",
+ "username": "username",
+ "password": "password"
+}
+```
+
+A full cURL example can be seen here:
+
+```bash
+curl --location --request POST 'http://localhost:9925' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+ "operation": "create_authentication_tokens",
+ "username": "username",
+ "password": "password"
+}'
+```
+
+An example expected return object is:
+
+```json
+{
+ "operation_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InVzZXJuYW1lIiwiaWF0IjoxNjA0OTc4MjAwLCJleHAiOjE2MDUwNjQ2MDAsInN1YiI6Im9wZXJhdGlvbiJ9.MpQA-9CMjA-mn-7mHyUXSuSC_-kqMqJXp_NDiKLFtbtMRbodCuY3DzH401rvy_4vb0yCELf0B5EapLVY1545sv80nxSl6FoZFxQaDWYXycoia6zHpiveR8hKlmA6_XTWHJbY2FM1HAFrdtt3yUTiF-ylkdNbPG7u7fRjTmHfsZ78gd2MNWIDkHoqWuFxIyqk8XydQpsjULf2Uacirt9FmHfkMZ-Jr_rRpcIEW0FZyLInbm6uxLfseFt87wA0TbZ0ofImjAuaW_3mYs-3H48CxP152UJ0jByPb0kHsk1QKP7YHWx1-Wce9NgNADfG5rfgMHANL85zvkv8sJmIGZIoSpMuU3CIqD2rgYnMY-L5dQN1fgfROrPMuAtlYCRK7r-IpjvMDQtRmCiNG45nGsM4DTzsa5GyDrkGssd5OBhl9gr9z9Bb5HQVYhSKIOiy72dK5dQNBklD4eGLMmo-u322zBITmE0lKaBcwYGJw2mmkYcrjDOmsDseU6Bf_zVUd9WF3FqwNkhg4D7nrfNSC_flalkxPHckU5EC_79cqoUIX2ogufBW5XgYbU4WfLloKcIpb51YTZlZfwBHlHPSyaq_guaXFaeCUXKq39_i1n0HRF_mRaxNru0cNDFT9Fm3eD7V8axFijSVAMDyQs_JR7SY483YDKUfN4l-vw-EVynImr4",
+ "refresh_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InVzZXJuYW1lIiwiaWF0IjoxNjA0OTc4MjAwLCJleHAiOjE2MDc1NzAyMDAsInN1YiI6InJlZnJlc2gifQ.acaCsk-CJWIMLGDZdGnsthyZsJfQ8ihXLyE8mTji8PgGkpbwhs7e1O0uitMgP_pGjHq2tey1BHSwoeCL49b18WyMIB10hK-q2BXGKQkykltjTrQbg7VsdFi0h57mGfO0IqAwYd55_hzHZNnyJMh4b0iPQFDwU7iTD7x9doHhZAvzElpkWbc_NKVw5_Mw3znjntSzbuPN105zlp4Niurin-_5BnukwvoJWLEJ-ZlF6hE4wKhaMB1pWTJjMvJQJE8khTTvlUN8tGxmzoaDYoe1aCGNxmDEQnx8Y5gKzVd89sylhqi54d2nQrJ2-ElfEDsMoXpR01Ps6fNDFtLTuPTp7ixj8LvgL2nCjAg996Ga3PtdvXJAZPDYCqqvaBkZZcsiqOgqLV0vGo3VVlfrcgJXQImMYRr_Inu0FCe47A93IAWuQTs-KplM1KdGJsHSnNBV6oe6QEkROJT5qZME-8xhvBYvOXqp9Znwg39bmiBCMxk26Ce66_vw06MNgoa3D5AlXPWemfdVKPZDnj_aLVjZSs0gAfFElcVn7l9yjWJOaT2Muk26U8bJl-2BEq_DSclqKHODuYM5kkPKIdE4NFrsqsDYuGxcA25rlNETFyl0q-UXj1aoz_joy5Hdnr4mFELmjnoo4jYQuakufP9xeGPsj1skaodKl0mmoGcCD6v1F60"
+}
+```
+
+## Using JWT Authentication Tokens
+
+The `operation_token` value is used to authenticate all operations in place of our standard Basic auth. In order to pass the token, you will need to create a Bearer Token Authorization header, as in the following request:
+
+```bash
+curl --location --request POST 'http://localhost:9925' \
+--header 'Content-Type: application/json' \
+--header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InVzZXJuYW1lIiwiaWF0IjoxNjA0OTc4MjAwLCJleHAiOjE2MDUwNjQ2MDAsInN1YiI6Im9wZXJhdGlvbiJ9.MpQA-9CMjA-mn-7mHyUXSuSC_-kqMqJXp_NDiKLFtbtMRbodCuY3DzH401rvy_4vb0yCELf0B5EapLVY1545sv80nxSl6FoZFxQaDWYXycoia6zHpiveR8hKlmA6_XTWHJbY2FM1HAFrdtt3yUTiF-ylkdNbPG7u7fRjTmHfsZ78gd2MNWIDkHoqWuFxIyqk8XydQpsjULf2Uacirt9FmHfkMZ-Jr_rRpcIEW0FZyLInbm6uxLfseFt87wA0TbZ0ofImjAuaW_3mYs-3H48CxP152UJ0jByPb0kHsk1QKP7YHWx1-Wce9NgNADfG5rfgMHANL85zvkv8sJmIGZIoSpMuU3CIqD2rgYnMY-L5dQN1fgfROrPMuAtlYCRK7r-IpjvMDQtRmCiNG45nGsM4DTzsa5GyDrkGssd5OBhl9gr9z9Bb5HQVYhSKIOiy72dK5dQNBklD4eGLMmo-u322zBITmE0lKaBcwYGJw2mmkYcrjDOmsDseU6Bf_zVUd9WF3FqwNkhg4D7nrfNSC_flalkxPHckU5EC_79cqoUIX2ogufBW5XgYbU4WfLloKcIpb51YTZlZfwBHlHPSyaq_guaXFaeCUXKq39_i1n0HRF_mRaxNru0cNDFT9Fm3eD7V8axFijSVAMDyQs_JR7SY483YDKUfN4l-vw-EVynImr4' \
+--data-raw '{
+ "operation":"search_by_hash",
+ "schema":"dev",
+ "table":"dog",
+ "hash_values":[1],
+ "get_attributes": ["*"]
+}'
+```
+
+## Token Expiration
+
+`operation_token` expires at a set interval. Once it expires it will no longer be accepted by HarperDB. This duration defaults to one day, and is configurable in [harperdb-config.yaml](../configuration). To generate a new `operation_token`, the `refresh_operation_token` operation is used, passing the `refresh_token` in the Bearer Token Authorization Header. A full cURL example can be seen here:
+
+```bash
+curl --location --request POST 'http://localhost:9925' \
+--header 'Content-Type: application/json' \
+--header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InVzZXJuYW1lIiwiaWF0IjoxNjA0OTc4MjAwLCJleHAiOjE2MDc1NzAyMDAsInN1YiI6InJlZnJlc2gifQ.acaCsk-CJWIMLGDZdGnsthyZsJfQ8ihXLyE8mTji8PgGkpbwhs7e1O0uitMgP_pGjHq2tey1BHSwoeCL49b18WyMIB10hK-q2BXGKQkykltjTrQbg7VsdFi0h57mGfO0IqAwYd55_hzHZNnyJMh4b0iPQFDwU7iTD7x9doHhZAvzElpkWbc_NKVw5_Mw3znjntSzbuPN105zlp4Niurin-_5BnukwvoJWLEJ-ZlF6hE4wKhaMB1pWTJjMvJQJE8khTTvlUN8tGxmzoaDYoe1aCGNxmDEQnx8Y5gKzVd89sylhqi54d2nQrJ2-ElfEDsMoXpR01Ps6fNDFtLTuPTp7ixj8LvgL2nCjAg996Ga3PtdvXJAZPDYCqqvaBkZZcsiqOgqLV0vGo3VVlfrcgJXQImMYRr_Inu0FCe47A93IAWuQTs-KplM1KdGJsHSnNBV6oe6QEkROJT5qZME-8xhvBYvOXqp9Znwg39bmiBCMxk26Ce66_vw06MNgoa3D5AlXPWemfdVKPZDnj_aLVjZSs0gAfFElcVn7l9yjWJOaT2Muk26U8bJl-2BEq_DSclqKHODuYM5kkPKIdE4NFrsqsDYuGxcA25rlNETFyl0q-UXj1aoz_joy5Hdnr4mFELmjnoo4jYQuakufP9xeGPsj1skaodKl0mmoGcCD6v1F60' \
+--data-raw '{
+ "operation":"refresh_operation_token"
+}'
+```
+
+This will return a new `operation_token`. An example expected return object is:
+
+```bash
+{
+ "operation_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6eyJfX2NyZWF0ZWR0aW1lX18iOjE2MDQ5NzgxODkxNTEsIl9fdXBkYXRlZHRpbWVfXyI6MTYwNDk3ODE4OTE1MSwiYWN0aXZlIjp0cnVlLCJyb2xlIjp7Il9fY3JlYXRlZHRpbWVfXyI6MTYwNDk0NDE1MTM0NywiX191cGRhdGVkdGltZV9fIjoxNjA0OTQ0MTUxMzQ3LCJpZCI6IjdiNDNlNzM1LTkzYzctNDQzYi05NGY3LWQwMzY3Njg5NDc4YSIsInBlcm1pc3Npb24iOnsic3VwZXJfdXNlciI6dHJ1ZSwic3lzdGVtIjp7InRhYmxlcyI6eyJoZGJfdGFibGUiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl9hdHRyaWJ1dGUiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl9zY2hlbWEiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl91c2VyIjp7InJlYWQiOnRydWUsImluc2VydCI6ZmFsc2UsInVwZGF0ZSI6ZmFsc2UsImRlbGV0ZSI6ZmFsc2UsImF0dHJpYnV0ZV9wZXJtaXNzaW9ucyI6W119LCJoZGJfcm9sZSI6eyJyZWFkIjp0cnVlLCJpbnNlcnQiOmZhbHNlLCJ1cGRhdGUiOmZhbHNlLCJkZWxldGUiOmZhbHNlLCJhdHRyaWJ1dGVfcGVybWlzc2lvbnMiOltdfSwiaGRiX2pvYiI6eyJyZWFkIjp0cnVlLCJpbnNlcnQiOmZhbHNlLCJ1cGRhdGUiOmZhbHNlLCJkZWxldGUiOmZhbHNlLCJhdHRyaWJ1dGVfcGVybWlzc2lvbnMiOltdfSwiaGRiX2xpY2Vuc2UiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl9pbmZvIjp7InJlYWQiOnRydWUsImluc2VydCI6ZmFsc2UsInVwZGF0ZSI6ZmFsc2UsImRlbGV0ZSI6ZmFsc2UsImF0dHJpYnV0ZV9wZXJtaXNzaW9ucyI6W119LCJoZGJfbm9kZXMiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl90ZW1wIjp7InJlYWQiOnRydWUsImluc2VydCI6ZmFsc2UsInVwZGF0ZSI6ZmFsc2UsImRlbGV0ZSI6ZmFsc2UsImF0dHJpYnV0ZV9wZXJtaXNzaW9ucyI6W119fX19LCJyb2xlIjoic3VwZXJfdXNlciJ9LCJ1c2VybmFtZSI6InVzZXJuYW1lIn0sImlhdCI6MTYwNDk3ODcxMywiZXhwIjoxNjA1MDY1MTEzLCJzdWIiOiJvcGVyYXRpb24ifQ.qB4FS7fzryCO5epQlFCQe4mQcUEhzXjfsXRFPgauXrGZwSeSr2o2a1tE1xjiI3qjK0r3f2bdi2xpFlDR1thdY-m0mOpHTICNOae4KdKzp7cyzRaOFurQnVYmkWjuV_Ww4PJgr6P3XDgXs5_B2d7ZVBR-BaAimYhVRIIShfpWk-4iN1XDk96TwloCkYx01BuN87o-VOvAnOG-K_EISA9RuEBpSkfUEuvHx8IU4VgfywdbhNMh6WXM0VP7ZzSpshgsS07MGjysGtZHNTVExEvFh14lyfjfqKjDoIJbo2msQwD2FvrTTb0iaQry1-Wwz9QJjVAUtid7tJuP8aBeNqvKyMIXRVnl5viFUr-Gs-Zl_WtyVvKlYWw0_rUn3ucmurK8tTy6iHyJ6XdUf4pYQebpEkIvi2rd__e_Z60V84MPvIYs6F_8CAy78aaYmUg5pihUEehIvGRj1RUZgdfaXElw90-m-M5hMOTI04LrzzVnBu7DcMYg4UC1W-WDrrj4zUq7y8_LczDA-yBC2-bkvWwLVtHLgV5yIEuIx2zAN74RQ4eCy1ffWDrVxYJBau4yiIyCc68dsatwHHH6bMK0uI9ib6Y9lsxCYjh-7MFcbP-4UBhgoDDXN9xoUToDLRqR9FTHqAHrGHp7BCdF5d6TQTVL5fmmg61MrLucOo-LZBXs1NY"
+}
+```
+
+The `refresh_token` also expires at a set interval, but a longer one. Once it expires, it will no longer be accepted by HarperDB. This duration defaults to thirty days and is configurable in [harperdb-config.yaml](../configuration). To generate a new `operation_token` and a new `refresh_token`, the `create_authentication_tokens` operation is called.
+
+## Configuration
+
+Token timeouts are configurable in [harperdb-config.yaml](../configuration) with the following parameters (see the sketch after the list):
+
+* `operationsApi.authentication.operationTokenTimeout`: Defines the length of time until the operation_token expires (default 1d).
+
+* `operationsApi.authentication.refreshTokenTimeout`: Defines the length of time until the refresh_token expires (default 30d).
+
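+As a hedged sketch, the defaults would appear in `harperdb-config.yaml` roughly as follows:
+
+```yaml
+operationsApi:
+  authentication:
+    operationTokenTimeout: 1d
+    refreshTokenTimeout: 30d
+```
+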
+A full list of valid values for both parameters can be found [here](https://github.com/vercel/ms).
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/security/users-and-roles.md b/site/versioned_docs/version-4.1/security/users-and-roles.md
new file mode 100644
index 00000000..1801fae2
--- /dev/null
+++ b/site/versioned_docs/version-4.1/security/users-and-roles.md
@@ -0,0 +1,288 @@
+---
+title: Users & Roles
+---
+
+# Users & Roles
+
+HarperDB utilizes a Role-Based Access Control (RBAC) framework to manage access to HarperDB instances. A user is assigned a role that determines the user’s permissions to access database resources and run core operations.
+
+## Roles in HarperDB
+
+Role permissions in HarperDB are broken into two categories – permissions around database manipulation and permissions around database definition.
+
+
+
+**Database Manipulation**: A role defines CRUD (create, read, update, delete) permissions against database resources (i.e. data) in a HarperDB instance.
+
+1) At the table level, access permissions must be explicitly defined when adding or altering a role – *i.e. HarperDB will assume CRUD access to be FALSE if not explicitly provided in the permissions JSON passed to the `add_role` and/or `alter_role` API operations.*
+
+2) At the attribute-level, permissions for attributes in all tables included in the permissions set will be assigned based on either the specific attribute-level permissions defined in the table’s permission set or, if there are no attribute-level permissions defined, permissions will be based on the table’s CRUD set.
+
+
+**Database Definition**: Permissions related to managing schemas, tables, roles, users, and other system settings and operations are restricted to the built-in `super_user` role.
+
+
+
+**Built-In Roles**
+
+There are three built-in roles within HarperDB. See full breakdown of operations restricted to only super_user roles [here](#role-based-operation-restrictions).
+
+* `super_user` - This role provides full access to all operations and methods within a HarperDB instance; this can be considered the admin role.
+
+ * This role provides full access to all Database Definition operations and the ability to run Database Manipulation operations across the entire database schema with no restrictions.
+
+* `cluster_user` - This role is an internal system role type that is managed internally to allow clustered instances to communicate with one another.
+
+ * This role is an internally managed role to facilitate communication between clustered instances.
+
+* `structure_user` - This role provides specific access for creation and deletion of data.
+
+ * When defining this role type you can either assign a value of `true`, which allows the role to create and drop schemas & tables, or assign a string array of schema names, which restricts the role to creating and dropping tables only in the designated schemas (see the sketch below).
+
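+A hedged `add_role` sketch for a `structure_user` role (the permission key name and the schema names shown are assumptions based on the description above):
+
+```json
+{
+    "operation": "add_role",
+    "role": "data_modeler",
+    "permission": {
+        "super_user": false,
+        "structure_user": ["dev", "staging"]
+    }
+}
+```
+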
+**User-Defined Roles**
+
+In addition to built-in roles, admins (i.e. users assigned to the super_user role) can create customized roles for other users to interact with and manipulate the data within explicitly defined tables and attributes.
+
+* Unless the user-defined role is given `super_user` permissions, permissions must be defined explicitly within the request body JSON.
+
+* Describe operations will return metadata for all schemas, tables, and attributes that a user-defined role has CRUD permissions for.
+
+
+**Role Permissions**
+
+When creating a new, user-defined role in a HarperDB instance, you must provide a role name and the permissions to assign to that role. *Reminder, only super users can create and manage roles.*
+
+* `role` name used to easily identify the role assigned to individual users.
+
+ *Roles can be altered/dropped based on the role name used in and returned from a successful `add_role`, `alter_role`, or `list_roles` operation.*
+
+* `permissions` used to explicitly define CRUD access to existing table data.
+
+
+Example JSON for `add_role` request
+
+```json
+{
+ "operation":"add_role",
+ "role":"software_developer",
+ "permission":{
+ "super_user":false,
+ "schema_name":{
+ "tables": {
+ "table_name1": {
+ "read":true,
+ "insert":true,
+ "update":true,
+ "delete":false,
+ "attribute_permissions":[
+ {
+ "attribute_name":"attribute1",
+ "read":true,
+ "insert":true,
+ "update":true
+ }
+ ]
+ },
+ "table_name2": {
+ "read":true,
+ "insert":true,
+ "update":true,
+ "delete":false,
+ "attribute_permissions":[]
+ }
+ }
+ }
+ }
+}
+```
+
+**Setting Role Permissions**
+
+There are two parts to a permissions set:
+
+* `super_user` – boolean value indicating if role should be provided super_user access.
+
+ *If `super_user` is set to true, there should be no additional schema-specific permissions values included, since the role will have access to the entire database schema. If permissions are included in the body of the operation, they will be stored within HarperDB but ignored, as super_users have full access to the database.*
+
+* `permissions`: Schema tables that a role should have specific CRUD access to should be included in the final, schema-specific `permissions` JSON.
+
+ *For user-defined roles (i.e. non-super_user roles), blank permissions will result in the user being restricted from accessing any of the database schema.*
+
+
+**Table Permissions JSON**
+
+Each table that a role should be given some level of CRUD permissions to must be included in the `tables` array for its schema in the roles permissions JSON passed to the API (*see example above*).
+
+```json
+{
+  "table_name": {                            // the name of the table to define CRUD permissions for
+    "read": boolean,                         // access to read from this table
+    "insert": boolean,                       // access to insert data into this table
+    "update": boolean,                       // access to update data in this table
+    "delete": boolean,                       // access to delete row data in this table
+    "attribute_permissions": [               // permissions for specific table attributes
+      {
+        "attribute_name": "attribute_name",  // attribute to assign permissions to
+        "read": boolean,                     // access to read this attribute from the table
+        "insert": boolean,                   // access to insert this attribute into the table
+        "update": boolean                    // access to update this attribute in the table
+      }
+    ]
+  }
+}
+```
+
+
+**Important Notes About Table Permissions**
+
+1) If a schema and/or any of its tables are not included in the permissions JSON, the role will not have any CRUD access to the schema and/or tables.
+
+2) If a table-level CRUD permission is set to false, any attribute-level with that same CRUD permission set to true will return an error.
+
+
+**Important Notes About Attribute Permissions**
+
+1) If there are attribute-specific CRUD permissions that need to be enforced on a table, those need to be explicitly described in the `attribute_permissions` array.
+
+2) If a non-hash attribute is given some level of CRUD access, that same access will be assigned to the table’s `hash_attribute`, even if it is not explicitly defined in the permissions JSON.
+
+ *See table_name1’s permission set for an example of this – even though the table’s hash attribute is not specifically defined in the attribute_permissions array, because the role has CRUD access to ‘attribute1’, the role will have the same access to the table’s hash attribute.*
+
+3) If attribute-level permissions are set – *i.e. attribute_permissions.length > 0* – any table attribute not explicitly included will be assumed to have no CRUD access (with the exception of the `hash_attribute` described in #2).
+
+ *See table_name1’s permission set for an example of this – in this scenario, the role will have the ability to create, insert and update ‘attribute1’ and the table’s hash attribute but no other attributes on that table.*
+
+4) If an `attribute_permissions` array is empty, the role’s access to a table’s attributes will be based on the table-level CRUD permissions.
+
+ *See table_name2’s permission set for an example of this.*
+
+5) The `__createdtime__` and `__updatedtime__` attributes that HarperDB manages internally can have read permissions set; if they are, any other attribute-level permissions set on them will be ignored.
+
+6) Please note that DELETE permissions are not included as a part of an individual attribute-level permission set. That is because it is not possible to delete individual attributes from a row, rows must be deleted in full.
+
+ * If a role needs the ability to delete rows from a table, that permission should be set on the table-level.
+
+ * The practical approach to deleting an individual attribute of a row would be to set that attribute to `null` via an update statement, as sketched below.
+
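+For example, a hedged sketch of removing a single attribute value by nulling it via an `update` (the schema, table, and attribute names are placeholders):
+
+```json
+{
+    "operation": "update",
+    "schema": "dev",
+    "table": "dog",
+    "records": [
+        {
+            "id": 1,
+            "nickname": null
+        }
+    ]
+}
+```
+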
+## Role-Based Operation Restrictions
+
+The table below includes all API operations available in HarperDB and indicates whether or not the operation is restricted to super_user roles.
+
+*Keep in mind that non-super_user roles will also be restricted within the operations they do have access to by the schema-level CRUD permissions set for the roles.*
+
+| Schemas and Tables | Restricted to Super_Users |
+|--------------------|:---------------------------:|
+| describe_all | |
+| describe_schema | |
+| describe_table | |
+| create_schema | X |
+| drop_schema | X |
+| create_table | X |
+| drop_table | X |
+| create_attribute | |
+| drop_attribute | X |
+
+
+| NoSQL Operations | Restricted to Super_Users |
+|----------------------|:---------------------------:|
+| insert | |
+| update | |
+| upsert | |
+| delete | |
+| search_by_hash | |
+| search_by_value | |
+| search_by_conditions | |
+
+| SQL Operations | Restricted to Super_Users |
+|-----------------|:---------------------------:|
+| select | |
+| insert | |
+| update | |
+| delete | |
+
+| Bulk Operations | Restricted to Super_Users |
+|------------------|:---------------------------:|
+| csv_data_load | |
+| csv_file_load | |
+| csv_url_load | |
+| import_from_s3 | |
+
+| Users and Roles | Restricted to Super_Users |
+|-----------------|:---------------------------:|
+| list_roles | X |
+| add_role | X |
+| alter_role | X |
+| drop_role | X |
+| list_users | X |
+| user_info | |
+| add_user | X |
+| alter_user | X |
+| drop_user | X |
+
+| Clustering | Restricted to Super_Users |
+|-----------------------|:---------------------------:|
+| cluster_set_routes | X |
+| cluster_get_routes | X |
+| cluster_delete_routes | X |
+| add_node | X |
+| update_node | X |
+| cluster_status | X |
+| remove_node | X |
+| configure_cluster | X |
+
+
+| Custom Functions | Restricted to Super_Users |
+|----------------------------------|:---------------------------:|
+| custom_functions_status | X |
+| get_custom_functions | X |
+| get_custom_function | X |
+| set_custom_function | X |
+| drop_custom_function | X |
+| add_custom_function_project | X |
+| drop_custom_function_project | X |
+| package_custom_function_project | X |
+| deploy_custom_function_project | X |
+
+| Registration | Restricted to Super_Users |
+|-------------------|:---------------------------:|
+| registration_info | |
+| get_fingerprint | X |
+| set_license | X |
+
+| Jobs | Restricted to Super_Users |
+|----------------------------|:---------------------------:|
+| get_job | |
+| search_jobs_by_start_date | X |
+
+| Logs | Restricted to Super_Users |
+|--------------------------------|:---------------------------:|
+| read_log | X |
+| read_transaction_log | X |
+| delete_transaction_logs_before | X |
+| read_audit_log | X |
+| delete_audit_logs_before | X |
+
+| Utilities | Restricted to Super_Users |
+|-----------------------|:-------------------------:|
+| delete_records_before | X |
+| export_local | X |
+| export_to_s3 | X |
+| system_information | X |
+| restart | X |
+| restart_service | X |
+| get_configuration | X |
+| configure_cluster | X |
+
+| Token Authentication | Restricted to Super_Users |
+|------------------------------|:---------------------------:|
+| create_authentication_tokens | |
+| refresh_operation_token | |
+
+## Error: Must execute as User
+
+**You may have gotten an error like,** `Error: Must execute as <>`.
+
+This means that you installed HarperDB as `<>`. Because HarperDB stores files natively on the operating system, we only allow the HarperDB executable to be run by a single user. This prevents permissions issues on files.
+
+
+
+For example, if you installed as user_a but later wanted to run as user_b, user_b may not have access to the hdb files HarperDB needs. This also keeps HarperDB more secure, as it allows you to lock files down to a specific user and prevents other users from accessing your files.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/sql-guide/date-functions.md b/site/versioned_docs/version-4.1/sql-guide/date-functions.md
new file mode 100644
index 00000000..f19d2126
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/date-functions.md
@@ -0,0 +1,222 @@
+---
+title: SQL Date Functions
+---
+
+# SQL Date Functions
+
+HarperDB utilizes [Coordinated Universal Time (UTC)](https://en.wikipedia.org/wiki/Coordinated_Universal_Time) in all internal SQL operations. This means that date values passed into any of the functions below will be assumed to be in UTC or in a format that can be translated to UTC.
+
+When parsing date values passed to SQL date functions in HDB, we first check for [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) formats, then for the [RFC 2822](https://tools.ietf.org/html/rfc2822#section-3.3) date-time format, and then fall back to `new Date(date_string)` if a known format is not found.
+
+### CURRENT_DATE()
+
+Returns the current date in UTC in `YYYY-MM-DD` String format.
+
+```
+"SELECT CURRENT_DATE() AS current_date_result" returns
+ {
+ "current_date_result": "2020-04-22"
+ }
+```
+
+### CURRENT_TIME()
+
+Returns the current time in UTC in `HH:mm:ss.SSS` String format.
+
+```
+"SELECT CURRENT_TIME() AS current_time_result" returns
+ {
+ "current_time_result": "15:18:14.639"
+ }
+```
+
+### CURRENT_TIMESTAMP
+
+Referencing this variable will evaluate as the current Unix Timestamp in milliseconds.
+
+```
+"SELECT CURRENT_TIMESTAMP AS current_timestamp_result" returns
+ {
+ "current_timestamp_result": 1587568845765
+ }
+```
+### DATE([date_string])
+
+Formats and returns the date_string argument in UTC in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format.
+
+If a date_string is not provided, the function will return the current UTC date/time value in the return format defined above.
+
+```
+"SELECT DATE(1587568845765) AS date_result" returns
+ {
+ "date_result": "2020-04-22T15:20:45.765+0000"
+ }
+```
+
+```
+"SELECT DATE(CURRENT_TIMESTAMP) AS date_result2" returns
+ {
+ "date_result2": "2020-04-22T15:20:45.765+0000"
+ }
+```
+
+### DATE_ADD(date, value, interval)
+
+Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: Either string value (key or shorthand) can be passed as the interval argument.
+
+
+| Key | Shorthand |
+|--------------|-----------|
+| years | y |
+| quarters | Q |
+| months | M |
+| weeks | w |
+| days | d |
+| hours | h |
+| minutes | m |
+| seconds | s |
+| milliseconds | ms |
+
+
+```
+"SELECT DATE_ADD(1587568845765, 1, 'days') AS date_add_result" AND
+"SELECT DATE_ADD(1587568845765, 1, 'd') AS date_add_result" both return
+ {
+ "date_add_result": 1587655245765
+ }
+```
+
+```
+"SELECT DATE_ADD(CURRENT_TIMESTAMP, 2, 'years')
+AS date_add_result2" returns
+ {
+ "date_add_result2": 1650643129017
+ }
+```
+
+### DATE_DIFF(date_1, date_2[, interval])
+
+Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds.
+
+Accepted interval values:
+* years
+* months
+* weeks
+* days
+* hours
+* minutes
+* seconds
+
+```
+"SELECT DATE_DIFF(CURRENT_TIMESTAMP, 1650643129017, 'hours')
+AS date_diff_result" returns
+ {
+ "date_diff_result": -17519.753333333334
+ }
+```
+
+### DATE_FORMAT(date, format)
+
+Formats and returns a date value in the String format provided. Find more details on accepted format values in the [moment.js docs](https://momentjs.com/docs/#/displaying/format/).
+
+```
+"SELECT DATE_FORMAT(1524412627973, 'YYYY-MM-DD HH:mm:ss')
+AS date_format_result" returns
+ {
+ "date_format_result": "2018-04-22 15:57:07"
+ }
+```
+
+### DATE_SUB(date, value, interval)
+
+Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: either string value (key or shorthand) can be passed as the interval argument.
+
+| Key | Shorthand |
+|--------------|-----------|
+| years | y |
+| quarters | Q |
+| months | M |
+| weeks | w |
+| days | d |
+| hours | h |
+| minutes | m |
+| seconds | s |
+| milliseconds | ms |
+
+
+```
+"SELECT DATE_SUB(1587568845765, 2, 'years') AS date_sub_result" returns
+ {
+ "date_sub_result": 1524410445765
+ }
+```
+
+### EXTRACT(date, date_part)
+
+Extracts and returns the date_part requested as a String value. Accepted date_part values below show the value returned for date = “2020-03-26T15:13:02.041+0000”.
+
+| date_part | Example return value* |
+|--------------|------------------------|
+| year | “2020” |
+| month | “3” |
+| day | “26” |
+| hour | “15” |
+| minute | “13” |
+| second | “2” |
+| millisecond | “41” |
+
+```
+"SELECT EXTRACT(1587568845765, 'year') AS extract_result" returns
+ {
+ "extract_result": "2020"
+ }
+```
+
+### GETDATE()
+
+Returns the current Unix Timestamp in milliseconds.
+
+```
+"SELECT GETDATE() AS getdate_result" returns
+ {
+ "getdate_result": 1587568845765
+ }
+```
+
+### GET_SERVER_TIME()
+Returns the current date/time value based on the server’s timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format.
+
+```
+"SELECT GET_SERVER_TIME() AS get_server_time_result" returns
+ {
+ "get_server_time_result": "2020-04-22T15:20:45.765+0000"
+ }
+```
+
+### OFFSET_UTC(date, offset)
+Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours.
+
+```
+"SELECT OFFSET_UTC(1587568845765, 240) AS offset_utc_result" returns
+ {
+ "offset_utc_result": "2020-04-22T19:20:45.765+0400"
+ }
+```
+
+```
+"SELECT OFFSET_UTC(1587568845765, 10) AS offset_utc_result2" returns
+ {
+ "offset_utc_result2": "2020-04-23T01:20:45.765+1000"
+ }
+```
+
+### NOW()
+Returns the current Unix Timestamp in milliseconds.
+
+```
+"SELECT NOW() AS now_result" returns
+ {
+ "now_result": 1587568845765
+ }
+```
+
diff --git a/site/versioned_docs/version-4.1/sql-guide/delete.md b/site/versioned_docs/version-4.1/sql-guide/delete.md
new file mode 100644
index 00000000..6e227192
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/delete.md
@@ -0,0 +1,14 @@
+---
+title: Delete
+---
+
+# Delete
+
+HarperDB supports deleting records from a table with condition support.
+
+
+
+```
+DELETE FROM dev.dog
+ WHERE age < 4
+```
diff --git a/site/versioned_docs/version-4.1/sql-guide/features-matrix.md b/site/versioned_docs/version-4.1/sql-guide/features-matrix.md
new file mode 100644
index 00000000..f0ee3072
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/features-matrix.md
@@ -0,0 +1,83 @@
+---
+title: SQL Features Matrix
+---
+
+# SQL Features Matrix
+
+HarperDB provides access to most SQL functions, and we’re always expanding that list. Check below to see if we cover what you need. If not, feel free to [add a Feature Request](https://feedback.harperdb.io/).
+
+
+| INSERT | |
+|------------------------------------|-----|
+| Values - multiple values supported | ✔ |
+| Sub-SELECT | ✗ |
+
+| UPDATE | |
+|-----------------|-----|
+| SET | ✔ |
+| Sub-SELECT | ✗ |
+| Conditions | ✔ |
+| Date Functions* | ✔ |
+| Math Functions | ✔ |
+
+| DELETE | |
+|------------|-----|
+| FROM | ✔ |
+| Sub-SELECT | ✗ |
+| Conditions | ✔ |
+
+| SELECT | |
+|-----------------------|-----|
+| Column SELECT | ✔ |
+| Aliases | ✔ |
+| Aggregator Functions | ✔ |
+| Date Functions* | ✔ |
+| Math Functions | ✔ |
+| Constant Values | ✔ |
+| Distinct | ✔ |
+| Sub-SELECT | ✗ |
+
+| FROM | |
+|-------------------|-----|
+| Multi-table JOIN | ✔ |
+| INNER JOIN | ✔ |
+| LEFT OUTER JOIN | ✔ |
+| LEFT INNER JOIN | ✔ |
+| RIGHT OUTER JOIN | ✔ |
+| RIGHT INNER JOIN | ✔ |
+| FULL JOIN | ✔ |
+| UNION | ✗ |
+| Sub-SELECT | ✗ |
+| TOP | ✔ |
+
+| WHERE | |
+|----------------------------|-----|
+| Multi-Conditions | ✔ |
+| Wildcards | ✔ |
+| IN | ✔ |
+| LIKE | ✔ |
+| Bit-wise Operators AND, OR | ✔ |
+| Bit-wise Operators NOT | ✔ |
+| NULL | ✔ |
+| BETWEEN | ✔ |
+| EXISTS,ANY,ALL | ✔ |
+| Compare columns | ✔ |
+| Compare constants | ✔ |
+| Date Functions* | ✔ |
+| Math Functions | ✔ |
+| Sub-SELECT | ✗ |
+
+| GROUP BY | |
+|-----------------------|-----|
+| Multi-Column GROUP BY | ✔ |
+
+| HAVING | |
+|--------------------------------|-----|
+| Aggregate function conditions | ✔ |
+
+| ORDER BY | |
+|-----------------------|-----|
+| Multi-Column ORDER BY | ✔ |
+| Aliases | ✔ |
+| Date Functions* | ✔ |
+| Math Functions | ✔ |
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/sql-guide/functions.md b/site/versioned_docs/version-4.1/sql-guide/functions.md
new file mode 100644
index 00000000..ccd6f247
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/functions.md
@@ -0,0 +1,153 @@
+---
+title: HarperDB SQL Functions
+---
+
+# HarperDB SQL Functions
+
+This SQL keywords reference contains the SQL functions available in HarperDB.
+
+## Functions
+### Aggregate
+
+| Keyword | Syntax | Description |
+|-----------------|-------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|
+| AVG | AVG(_expression_) | Returns the average of a given numeric expression. |
+| COUNT | SELECT COUNT(_column_name_) FROM _schema.table_ WHERE _condition_ | Returns the number of records that match the given criteria. Nulls are not counted. |
+| GROUP_CONCAT | GROUP_CONCAT(_expression_) | Returns a string with concatenated, comma-separated, non-null values from a group. Will return null when there are no non-null values. |
+| MAX | SELECT MAX(_column_name_) FROM _schema.table_ WHERE _condition_ | Returns largest value in a specified column. |
+| MIN | SELECT MIN(_column_name_) FROM _schema.table_ WHERE _condition_ | Returns smallest value in a specified column. |
+| SUM | SUM(_column_name_) | Returns the sum of the numeric values provided. |
+| ARRAY* | ARRAY(_expression_) | Returns a list of data as a field. |
+| DISTINCT_ARRAY* | DISTINCT_ARRAY(_expression_) | When placed around a standard ARRAY() function, returns a distinct (deduplicated) results set. |
+
+*For more information on ARRAY() and DISTINCT_ARRAY() see [this blog](https://www.harperdb.io/post/sql-queries-to-complex-objects).
+
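+As a hedged illustration (not part of the official examples), several of these aggregates might be combined as follows, assuming the `dev.dog` table with `id`, `dog_name`, `age`, and `breed_id` attributes used elsewhere in this guide:
+
+```
+SELECT breed_id,
+       COUNT(id) AS dog_count,
+       AVG(age) AS average_age,
+       GROUP_CONCAT(dog_name) AS dog_names
+    FROM dev.dog
+    GROUP BY breed_id
+```
+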
+### Conversion
+
+| Keyword | Syntax | Description |
+|---------|--------------------------------------------------|------------------------------------------------------------------------|
+| CAST | CAST(_expression AS datatype(length)_) | Converts a value to a specified datatype. |
+| CONVERT | CONVERT(_data_type(length), expression, style_) | Converts a value from one datatype to a different, specified datatype. |
+
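+For instance, a minimal sketch of CAST against the same assumed `dev.dog` table (the exact datatype names accepted may vary):
+
+```
+SELECT dog_name, CAST(age AS CHAR(3)) AS age_as_text
+    FROM dev.dog
+```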
+
+### Date & Time
+
+| Keyword | Syntax | Description |
+|-------------------|-----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CURRENT_DATE | CURRENT_DATE() | Returns the current date in UTC in “YYYY-MM-DD” String format. |
+| CURRENT_TIME | CURRENT_TIME() | Returns the current time in UTC in “HH:mm:ss.SSS” string format. |
+| CURRENT_TIMESTAMP | CURRENT_TIMESTAMP | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. |
+| DATE | DATE([_date_string_]) | Formats and returns the date_string argument in UTC in ‘YYYY-MM-DDTHH:mm:ss.SSSZZ’ string format. If a date_string is not provided, the function will return the current UTC date/time value in the return format defined above. |
+| DATE_ADD | DATE_ADD(_date, value, interval_) | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Either string value (key or shorthand) can be passed as the interval argument. |
+| DATE_DIFF | DATEDIFF(_date_1, date_2[, interval]_) | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds. |
+| DATE_FORMAT | DATE_FORMAT(_date, format_) | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. |
+| DATE_SUB | DATE_SUB(_date, value, interval_) | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Either string value (key or shorthand) can be passed as the interval argument. |
+| DAY | DAY(_date_) | Returns the day of the month for the given date. |
+| DAYOFWEEK | DAYOFWEEK(_date_) | Returns the numeric value of the weekday of the date given (“YYYY-MM-DD”). NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. |
+| EXTRACT | EXTRACT(_date, date_part_) | Extracts and returns the date_part requested as a String value. Accepted date_part values show the value returned for date = “2020-03-26T15:13:02.041+000”. |
+| GETDATE | GETDATE() | Returns the current Unix Timestamp in milliseconds. |
+| GET_SERVER_TIME | GET_SERVER_TIME() | Returns the current date/time value based on the server’s timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. |
+| OFFSET_UTC | OFFSET_UTC(_date, offset_) | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. |
+| NOW | NOW() | Returns the current Unix Timestamp in milliseconds. |
+| HOUR | HOUR(_datetime_) | Returns the hour part of a given date in range of 0 to 838. |
+| MINUTE | MINUTE(_datetime_) | Returns the minute part of a time/datetime in range of 0 to 59. |
+| MONTH | MONTH(_date_) | Returns the month part for a specified date in range of 1 to 12. |
+| SECOND | SECOND(_datetime_) | Returns the seconds part of a time/datetime in range of 0 to 59. |
+| YEAR | YEAR(_date_) | Returns the year part for a specified date. |
+
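+As a rough sketch against the assumed `dev.dog` table, a few of these functions could be combined as follows (the `'days'` interval string is an assumption based on the DATE_ADD description above):
+
+```
+SELECT dog_name,
+       YEAR(__createdtime__) AS created_year,
+       DATE_ADD(__createdtime__, 30, 'days') AS recheck_due
+    FROM dev.dog
+```
+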
+### Logical
+
+| Keyword | Syntax | Description |
+|---------|--------------------------------------------------|--------------------------------------------------------------------------------------------|
+| IF | IF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. |
+| IIF | IIF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. |
+| IFNULL | IFNULL(_expression, alt_value_) | Returns a specified value if the expression is null. |
+| NULLIF | NULLIF(_expression_1, expression_2_) | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. |
+
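+A brief, hedged sketch of the logical functions, again assuming the `dev.dog` table with an `owner_name` attribute:
+
+```
+SELECT dog_name,
+       IFNULL(owner_name, 'unknown') AS owner,
+       IF(age >= 10, 'senior', 'adult') AS life_stage
+    FROM dev.dog
+```
+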
+### Mathematical
+
+| Keyword | Syntax | Description |
+|---------|---------------------------------|-----------------------------------------------------------------------------------------------------|
+| ABS | ABS(_expression_) | Returns the absolute value of a given numeric expression. |
+| CEIL | CEIL(_number_) | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. |
+| EXP | EXP(_number_) | Returns e to the power of a specified number. |
+| FLOOR | FLOOR(_number_) | Returns the largest integer value that is smaller than, or equal to, a given number. |
+| RANDOM | RANDOM(_seed_) | Returns a pseudo random number. |
+| ROUND | ROUND(_number,decimal_places_) | Rounds a given number to a specified number of decimal places. |
+| SQRT | SQRT(_expression_) | Returns the square root of an expression. |
+
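+For illustration only, a sketch combining a few of the mathematical functions with basic arithmetic (assumed to be available) against the `dev.dog` table, which is assumed to have a `weight_lbs` attribute:
+
+```
+SELECT dog_name,
+       ROUND(weight_lbs * 0.4536, 1) AS weight_kg,
+       FLOOR(age / 2) AS half_age
+    FROM dev.dog
+```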
+
+### String
+
+| Keyword | Syntax | Description |
+|-------------|------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CONCAT | CONCAT(_string_1, string_2, ...., string_n_) | Concatenates, or joins, two or more strings together, resulting in a single string. |
+| CONCAT_WS | CONCAT_WS(_separator, string_1, string_2, ...., string_n_) | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. |
+| INSTR | INSTR(_string_1, string_2_) | Returns the first position, as an integer, of string_2 within string_1. |
+| LEN | LEN(_string_) | Returns the length of a string. |
+| LOWER | LOWER(_string_) | Converts a string to lower-case. |
+| REGEXP | SELECT _column_name_ FROM _schema.table_ WHERE _column_name_ REGEXP _pattern_ | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. |
+| REGEXP_LIKE | SELECT _column_name_ FROM _schema.table_ WHERE REGEXP_LIKE(_column_name, pattern_) | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. |
+| REPLACE     | REPLACE(_string, old_string, new_string_)                                            | Replaces all instances of old_string within string, with new_string. |
+| SUBSTRING | SUBSTRING(_string, string_position, length_of_substring_) | Extracts a specified amount of characters from a string. |
+| TRIM | TRIM([_character(s) FROM_] _string_) | Removes leading and trailing spaces, or specified character(s), from a string. |
+| UPPER | UPPER(_string_) | Converts a string to upper-case. |
+
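+A hedged sketch of the string functions, assuming the same `dev.dog` table:
+
+```
+SELECT UPPER(dog_name) AS name_upper,
+       CONCAT_WS(' - ', dog_name, owner_name) AS label
+    FROM dev.dog
+    WHERE LOWER(owner_name) LIKE 'k%'
+```
+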
+## Operators
+### Logical Operators
+
+| Keyword | Syntax | Description |
+|----------|--------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------|
+| BETWEEN | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ BETWEEN _value_1_ AND _value_2_ | (inclusive) Returns values(numbers, text, or dates) within a given range. |
+| IN | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ IN(_value(s)_) | Used to specify multiple values in a WHERE clause. |
+| LIKE | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_n_ LIKE _pattern_ | Searches for a specified pattern within a WHERE clause. |
+
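+For instance, these operators might be combined in a single WHERE clause like so (a sketch, assuming the `dev.dog` table):
+
+```
+SELECT dog_name, age
+    FROM dev.dog
+    WHERE age BETWEEN 2 AND 8
+    AND owner_name IN ('Kyle', 'Zach')
+    AND dog_name LIKE 'P%'
+```
+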
+## Queries
+### General
+
+| Keyword | Syntax | Description |
+|-----------|--------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|
+| DISTINCT | SELECT DISTINCT _column_name(s)_ FROM _schema.table_ | Returns only unique values, eliminating duplicate records. |
+| FROM | FROM _schema.table_ | Used to list the schema(s), table(s), and any joins required for a SQL statement. |
+| GROUP BY | SELECT _column_name(s)_ FROM _schema.table_ WHERE _condition_ GROUP BY _column_name(s)_ ORDER BY _column_name(s)_ | Groups rows that have the same values into summary rows. |
+| HAVING | SELECT _column_name(s)_ FROM _schema.table_ WHERE _condition_ GROUP BY _column_name(s)_ HAVING _condition_ ORDER BY _column_name(s)_ | Filters data based on a group or aggregate function. |
+| SELECT | SELECT _column_name(s)_ FROM _schema.table_ | Selects data from table. |
+| WHERE | SELECT _column_name(s)_ FROM _schema.table_ WHERE _condition_ | Extracts records based on a defined condition. |
+
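+Putting several of these clauses together, a hedged sketch against the assumed `dev.dog` table might look like:
+
+```
+SELECT breed_id, COUNT(id) AS dog_count
+    FROM dev.dog
+    WHERE age > 1
+    GROUP BY breed_id
+    HAVING COUNT(id) > 2
+    ORDER BY dog_count DESC
+```
+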
+### Joins
+
+| Keyword | Syntax | Description |
+|---------------------|----------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CROSS JOIN | SELECT _column_name(s)_ FROM _schema.table_1_ CROSS JOIN _schema.table_2_ | Returns a paired combination of each row from _table_1_ with row from _table_2_. _Note: CROSS JOIN can return very large result sets and is generally considered bad practice._ |
+| FULL OUTER | SELECT _column_name(s)_ FROM _schema.table_1_ FULL OUTER JOIN _schema.table_2_ ON _table_1.column_name_ _= table_2.column_name_ WHERE _condition_ | Returns all records when there is a match in either _table_1_ (left table) or _table_2_ (right table). |
+| [INNER] JOIN | SELECT _column_name(s)_ FROM _schema.table_1_ INNER JOIN _schema.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return only matching records from _table_1_ (left table) and _table_2_ (right table). The INNER keyword is optional and does not affect the result. |
+| LEFT [OUTER] JOIN | SELECT _column_name(s)_ FROM _schema.table_1_ LEFT OUTER JOIN _schema.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return all records from _table_1_ (left table) and matching data from _table_2_ (right table). The OUTER keyword is optional and does not affect the result. |
+| RIGHT [OUTER] JOIN | SELECT _column_name(s)_ FROM _schema.table_1_ RIGHT OUTER JOIN _schema.table_2_ ON _table_1.column_name = table_2.column_name_ | Return all records from _table_2_ (right table) and matching data from _table_1_ (left table). The OUTER keyword is optional and does not affect the result. |
+
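+As a small sketch (see the Joins page of this guide for a fuller example), a LEFT OUTER JOIN over the assumed `dev.dog` and `dev.breed` tables:
+
+```
+SELECT d.dog_name, b.name AS breed_name
+    FROM dev.dog AS d
+    LEFT OUTER JOIN dev.breed AS b ON d.breed_id = b.id
+```
+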
+### Predicates
+
+| Keyword | Syntax | Description |
+|--------------|------------------------------------------------------------------------------|----------------------------|
+| IS NOT NULL | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ IS NOT NULL | Tests for non-null values. |
+| IS NULL | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ IS NULL | Tests for null values. |
+
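+For example, a minimal sketch returning only dogs with no recorded owner in the assumed `dev.dog` table:
+
+```
+SELECT id, dog_name
+    FROM dev.dog
+    WHERE owner_name IS NULL
+```
+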
+### Statements
+
+| Keyword | Syntax | Description |
+|---------|---------------------------------------------------------------------------------------------|-------------------------------------|
+| DELETE | DELETE FROM _schema.table_ WHERE condition | Deletes existing data from a table. |
+| INSERT | INSERT INTO _schema.table(column_name(s))_ VALUES(_value(s)_) | Inserts new records into a table. |
+| UPDATE | UPDATE _schema.table_ SET _column_1 = value_1, column_2 = value_2, ....,_ WHERE _condition_ | Alters existing records in a table. |
diff --git a/site/versioned_docs/version-4.1/sql-guide/index.md b/site/versioned_docs/version-4.1/sql-guide/index.md
new file mode 100644
index 00000000..a6a44875
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/index.md
@@ -0,0 +1,11 @@
+---
+title: HarperDB SQL Guide
+---
+
+# HarperDB SQL Guide
+
+The purpose of this guide is to describe the SQL functionality that HarperDB supports. The SQL parser is still actively being developed, and this document will be updated as more features and functionality become available. **A high-level view of supported features can be found [here](./features-matrix).**
+
+
+
+HarperDB adheres to the concept of schemas & tables. This allows developers to isolate table structures from each other all within one database.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/sql-guide/insert.md b/site/versioned_docs/version-4.1/sql-guide/insert.md
new file mode 100644
index 00000000..a929fe7a
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/insert.md
@@ -0,0 +1,14 @@
+---
+title: Insert
+---
+
+# Insert
+
+HarperDB supports inserting 1 to n records into a table. The primary key must be unique (not used by any other record). If no primary key is provided, it will be assigned an auto-generated UUID. HarperDB does not support selecting from one table to insert into another at this time.
+
+
+
+```
+INSERT INTO dev.dog (id, dog_name, age, breed_id)
+ VALUES(1, 'Penny', 5, 347), (2, 'Kato', 4, 347)
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/sql-guide/joins.md b/site/versioned_docs/version-4.1/sql-guide/joins.md
new file mode 100644
index 00000000..8e820a82
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/joins.md
@@ -0,0 +1,25 @@
+---
+title: Joins
+---
+
+# Joins
+
+HarperDB allows developers to join any number of tables and currently supports the following join types:
+
+* INNER JOIN
+* LEFT INNER JOIN
+* LEFT OUTER JOIN
+
+
+Here’s a basic example using two tables from our Get Started data, joining a dogs table with a breeds table:
+
+
+
+```
+SELECT d.id, d.dog_name, d.owner_name, b.name, b.section
+ FROM dev.dog AS d
+ INNER JOIN dev.breed AS b ON d.breed_id = b.id
+ WHERE d.owner_name IN ('Kyle', 'Zach', 'Stephen')
+ AND b.section = 'Mutt'
+ ORDER BY d.dog_name
+```
diff --git a/site/versioned_docs/version-4.1/sql-guide/json-search.md b/site/versioned_docs/version-4.1/sql-guide/json-search.md
new file mode 100644
index 00000000..3c48c308
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/json-search.md
@@ -0,0 +1,181 @@
+---
+title: SQL JSON Search
+---
+
+# SQL JSON Search
+
+HarperDB automatically indexes all top level attributes in a row / object written to a table. However, any attribute which holds JSON does not have its nested attributes indexed. In order to make searching and/or transforming these JSON documents easy, HarperDB offers a special SQL function called SEARCH_JSON. The SEARCH_JSON function works in SELECT & WHERE clauses, allowing queries to perform powerful filtering on any element of your JSON by implementing the [JSONata library](http://docs.jsonata.org/overview.html) into our SQL engine.
+
+## Syntax
+
+SEARCH_JSON(*expression, attribute*)
+
+
+
+Executes the supplied string _expression_ against data of the defined top level _attribute_ for each row. The expression both filters and defines output from the JSON document.
+### Example 1
+#### Search a string array
+
+Here are two records in the database:
+
+```json
+[
+ {
+ "id": 1,
+ "name": ["Harper", "Penny"]
+ },
+ {
+ "id": 2,
+ "name": ["Penny"]
+ }
+]
+```
+Here is a simple query that gets any record with "Harper" found in the name.
+
+```
+SELECT *
+FROM dev.dog
+WHERE search_json('"Harper" in *', name)
+```
+
+### Example 2
+The purpose of this query is to give us every movie where at least two of our favorite actors from Marvel films have acted together. The results will return the movie title, the overview, release date and an object array of the actor’s name and their character name in the movie.
+
+
+
+Both function calls evaluate the credits.cast attribute, which is an object array of every cast member in a movie.
+
+```
+SELECT m.title,
+ m.overview,
+ m.release_date,
+ SEARCH_JSON($[name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L. Jackson", "Gwyneth Paltrow", "Don Cheadle"]].{"actor": name, "character": character}, c.`cast`) AS characters
+FROM movies.credits c
+ INNER JOIN movies.movie m
+ ON c.movie_id = m.id
+WHERE SEARCH_JSON($count($[name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L. Jackson", "Gwyneth Paltrow", "Don Cheadle"]]), c.`cast`) >= 2
+```
+A sample of this data from the movie The Avengers looks like this:
+
+```json
+[
+ {
+ "cast_id": 46,
+ "character": "Tony Stark / Iron Man",
+ "credit_id": "52fe4495c3a368484e02b251",
+ "gender": "male",
+ "id": 3223,
+ "name": "Robert Downey Jr.",
+ "order": 0
+ },
+ {
+ "cast_id": 2,
+ "character": "Steve Rogers / Captain America",
+ "credit_id": "52fe4495c3a368484e02b19b",
+ "gender": "male",
+ "id": 16828,
+ "name": "Chris Evans",
+ "order": 1
+ },
+ {
+ "cast_id": 307,
+ "character": "Bruce Banner / The Hulk",
+ "credit_id": "5e85e8083344c60015411cfa",
+ "gender": "male",
+ "id": 103,
+ "name": "Mark Ruffalo",
+ "order": 2
+ }
+]
+```
+Let’s break down the SEARCH_JSON function call in the SELECT:
+
+```
+SEARCH_JSON(
+ $[name in [
+ "Robert Downey Jr.",
+ "Chris Evans",
+ "Scarlett Johansson",
+ "Mark Ruffalo",
+ "Chris Hemsworth",
+ "Jeremy Renner",
+ "Clark Gregg",
+ "Samuel L. Jackson",
+ "Gwyneth Paltrow",
+ "Don Cheadle"
+ ]].{
+ "actor": name,
+ "character": character
+ },
+ c.`cast`
+)
+```
+The first argument passed to SEARCH_JSON is the expression to execute against the second argument, which is the cast attribute on the credits table. This expression executes for every row. The expression starts with “$[…]”, which tells it to iterate all elements of the cast array.
+
+
+
+Then the expression tells the function to only return entries where the name attribute matches any of the actors defined in the array:
+
+```
+name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L. Jackson", "Gwyneth Paltrow", "Don Cheadle"]
+```
+
+
+So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{"actor": name, "character": character}`. This tells the function to create a specific object for each matching entry.
+
+
+
+##### Sample Result
+
+```json
+[
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner / The Hulk"
+ }
+]
+```
+
+Just having the SEARCH_JSON function in our SELECT is powerful, but given our criteria it would still return every other movie that doesn’t have our matching actors. In order to filter out the movies we do not want, we also use SEARCH_JSON in the WHERE clause.
+
+
+
+This function call in the WHERE clause is similar, but we don’t need to perform the same transformation as occurred in the SELECT:
+
+```
+SEARCH_JSON(
+ $count(
+ $[name in [
+ "Robert Downey Jr.",
+ "Chris Evans",
+ "Scarlett Johansson",
+ "Mark Ruffalo",
+ "Chris Hemsworth",
+ "Jeremy Renner",
+ "Clark Gregg",
+ "Samuel L. Jackson",
+ "Gwyneth Paltrow",
+ "Don Cheadle"
+ ]]
+ ),
+ c.`cast`
+) >= 2
+```
+
+As seen above, we execute the same name filter against the cast array; the primary difference is that we wrap the filtered results in $count(…). This returns a count of the matching results, which we then use against our SQL comparator of >= 2.
+
+
+
+To see further SEARCH_JSON examples in action view our Postman Collection that provides a sample schema & data with query examples: https://api.harperdb.io/
+
+
+
+To learn more about how to build expressions check out the JSONata documentation: http://docs.jsonata.org/overview
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/sql-guide/reserved-word.md b/site/versioned_docs/version-4.1/sql-guide/reserved-word.md
new file mode 100644
index 00000000..bcefa00a
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/reserved-word.md
@@ -0,0 +1,203 @@
+---
+title: HarperDB SQL Reserved Words
+---
+
+# HarperDB SQL Reserved Words
+
+This is a list of reserved words in the SQL Parser. Use of these words or symbols may result in unexpected behavior or inaccessible tables/attributes. If any of these words must be used, any SQL call referencing a schema, table, or attribute must have backticks (`…`) or brackets ([…]) around the variable.
+
+For example, for a table called ASSERT in the dev schema, a SQL SELECT on that table would look like:
+
+```
+SELECT * from dev.`ASSERT`
+```
+
+Alternatively:
+
+```
+SELECT * from dev.[ASSERT]
+```
+
+### RESERVED WORD LIST
+
+* ABSOLUTE
+* ACTION
+* ADD
+* AGGR
+* ALL
+* ALTER
+* AND
+* ANTI
+* ANY
+* APPLY
+* ARRAY
+* AS
+* ASSERT
+* ASC
+* ATTACH
+* AUTOINCREMENT
+* AUTO_INCREMENT
+* AVG
+* BEGIN
+* BETWEEN
+* BREAK
+* BY
+* CALL
+* CASE
+* CAST
+* CHECK
+* CLASS
+* CLOSE
+* COLLATE
+* COLUMN
+* COLUMNS
+* COMMIT
+* CONSTRAINT
+* CONTENT
+* CONTINUE
+* CONVERT
+* CORRESPONDING
+* COUNT
+* CREATE
+* CROSS
+* CUBE
+* CURRENT_TIMESTAMP
+* CURSOR
+* DATABASE
+* DECLARE
+* DEFAULT
+* DELETE
+* DELETED
+* DESC
+* DETACH
+* DISTINCT
+* DOUBLEPRECISION
+* DROP
+* ECHO
+* EDGE
+* END
+* ENUM
+* ELSE
+* EXCEPT
+* EXISTS
+* EXPLAIN
+* FALSE
+* FETCH
+* FIRST
+* FOREIGN
+* FROM
+* GO
+* GRAPH
+* GROUP
+* GROUPING
+* HAVING
+* HDB_HASH
+* HELP
+* IF
+* IDENTITY
+* IS
+* IN
+* INDEX
+* INNER
+* INSERT
+* INSERTED
+* INTERSECT
+* INTO
+* JOIN
+* KEY
+* LAST
+* LET
+* LEFT
+* LIKE
+* LIMIT
+* LOOP
+* MATCHED
+* MATRIX
+* MAX
+* MERGE
+* MIN
+* MINUS
+* MODIFY
+* NATURAL
+* NEXT
+* NEW
+* NOCASE
+* NO
+* NOT
+* NULL
+* OFF
+* ON
+* ONLY
+* OFFSET
+* OPEN
+* OPTION
+* OR
+* ORDER
+* OUTER
+* OVER
+* PATH
+* PARTITION
+* PERCENT
+* PLAN
+* PRIMARY
+* PRINT
+* PRIOR
+* QUERY
+* READ
+* RECORDSET
+* REDUCE
+* REFERENCES
+* RELATIVE
+* REPLACE
+* REMOVE
+* RENAME
+* REQUIRE
+* RESTORE
+* RETURN
+* RETURNS
+* RIGHT
+* ROLLBACK
+* ROLLUP
+* ROW
+* SCHEMA
+* SCHEMAS
+* SEARCH
+* SELECT
+* SEMI
+* SET
+* SETS
+* SHOW
+* SOME
+* SOURCE
+* STRATEGY
+* STORE
+* SYSTEM
+* SUM
+* TABLE
+* TABLES
+* TARGET
+* TEMP
+* TEMPORARY
+* TEXTSTRING
+* THEN
+* TIMEOUT
+* TO
+* TOP
+* TRAN
+* TRANSACTION
+* TRIGGER
+* TRUE
+* TRUNCATE
+* UNION
+* UNIQUE
+* UPDATE
+* USE
+* USING
+* VALUE
+* VERTEX
+* VIEW
+* WHEN
+* WHERE
+* WHILE
+* WITH
+* WORK
diff --git a/site/versioned_docs/version-4.1/sql-guide/select.md b/site/versioned_docs/version-4.1/sql-guide/select.md
new file mode 100644
index 00000000..e2896029
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/select.md
@@ -0,0 +1,27 @@
+---
+title: Select
+---
+
+# Select
+
+HarperDB has robust SELECT support, from simple queries all the way to complex joins with multi-conditions, aggregates, grouping & ordering.
+
+
+
+All results are returned as JSON object arrays.
+
+
+
+Query for all records and attributes in the dev.dog table:
+```
+SELECT * FROM dev.dog
+```
+Query specific columns from all rows in the dev.dog table:
+```
+SELECT id, dog_name, age FROM dev.dog
+```
+Query for all records and attributes in the dev.dog table, ordered by age in ascending order:
+```
+SELECT * FROM dev.dog ORDER BY age
+```
+_Note: The ORDER BY keyword sorts in ascending order by default. To sort in descending order, use the DESC keyword._
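+
+A slightly fuller, illustrative sketch combining DISTINCT, a condition, and descending order (assuming the same dev.dog table with an owner_name attribute):
+```
+SELECT DISTINCT owner_name FROM dev.dog WHERE age >= 5 ORDER BY owner_name DESC
+```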
diff --git a/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geoarea.md b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geoarea.md
new file mode 100644
index 00000000..d95a0237
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geoarea.md
@@ -0,0 +1,41 @@
+---
+title: geoArea
+---
+
+# geoArea
+
+The geoArea() function returns the area of one or more features in square meters.
+
+## Syntax
+geoArea(_geoJSON_)
+
+## Parameters
+| Parameter | Description |
+|-----------|---------------------------------|
+| geoJSON | Required. One or more features. |
+
+### Example 1
+Calculate the area, in square meters, of a manually passed GeoJSON polygon.
+
+```
+SELECT geoArea('{
+ "type":"Feature",
+ "geometry":{
+ "type":"Polygon",
+ "coordinates":[[
+ [0,0],
+ [0.123456,0],
+ [0.123456,0.123456],
+ [0,0.123456]
+ ]]
+ }
+}')
+```
+
+### Example 2
+Find all records that have an area less than 1 square mile (or 2589988 square meters).
+
+```
+SELECT * FROM dev.locations
+WHERE geoArea(geo_data) < 2589988
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geocontains.md b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geocontains.md
new file mode 100644
index 00000000..8a562e13
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geocontains.md
@@ -0,0 +1,65 @@
+---
+title: geoContains
+---
+
+# geoContains
+Determines if geo2 is completely contained by geo1. Returns a Boolean.
+
+## Syntax
+geoContains(_geo1, geo2_)
+
+## Parameters
+| Parameter | Description |
+|------------|-----------------------------------------------------------------------------------|
+| geo1 | Required. Polygon or MultiPolygon GeoJSON feature. |
+| geo2 | Required. Polygon or MultiPolygon GeoJSON feature tested to be contained by geo1. |
+
+### Example 1
+Return all locations within the state of Colorado (passed as a GeoJSON string).
+
+```
+SELECT *
+FROM dev.locations
+WHERE geoContains('{
+ "type": "Feature",
+ "properties": {
+ "name":"Colorado"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [[
+ [-109.072265625,37.00255267],
+ [-102.01904296874999,37.00255267],
+ [-102.01904296874999,41.01306579],
+ [-109.072265625,41.01306579],
+ [-109.072265625,37.00255267]
+ ]]
+ }
+}', geo_data)
+```
+
+### Example 2
+Return all locations which contain HarperDB Headquarters.
+
+```
+SELECT *
+FROM dev.locations
+WHERE geoContains(geo_data, '{
+ "type": "Feature",
+ "properties": {
+ "name": "HarperDB Headquarters"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [[
+ [-104.98060941696167,39.760704817357905],
+ [-104.98053967952728,39.76065120861263],
+ [-104.98055577278137,39.760642961109674],
+ [-104.98037070035934,39.76049450588716],
+ [-104.9802714586258,39.76056254790385],
+ [-104.9805235862732,39.76076461167841],
+ [-104.98060941696167,39.760704817357905]
+ ]]
+ }
+}')
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geoconvert.md b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geoconvert.md
new file mode 100644
index 00000000..44dba079
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geoconvert.md
@@ -0,0 +1,30 @@
+---
+title: geoConvert
+---
+
+# geoConvert
+
+Converts a series of coordinates into a GeoJSON of the specified type.
+
+## Syntax
+geoConvert(_coordinates, geo_type_[, _properties_])
+
+## Parameters
+| Parameter | Description |
+|--------------|------------------------------------------------------------------------------------------------------------------------------------|
+| coordinates | Required. One or more coordinates |
+| geo_type | Required. GeoJSON geometry type. Options are ‘point’, ‘lineString’, ‘multiLineString’, ‘multiPoint’, ‘multiPolygon’, and ‘polygon’ |
+| properties | Optional. Escaped JSON array with properties to be added to the GeoJSON output. |
+
+### Example
+Convert a given coordinate into a GeoJSON point with specified properties.
+
+```
+SELECT geoConvert(
+ '[-104.979127,39.761563]',
+ 'point',
+ '{
+ "name": "HarperDB Headquarters"
+ }'
+)
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geocrosses.md b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geocrosses.md
new file mode 100644
index 00000000..dea03037
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geocrosses.md
@@ -0,0 +1,44 @@
+---
+title: geoCrosses
+---
+
+# geoCrosses
+Determines if the geometries cross over each other. Returns a Boolean.
+
+## Syntax
+geoCrosses(_geo1, geo2_)
+
+## Parameters
+| Parameter | Description |
+|------------|-----------------------------------------|
+| geo1 | Required. GeoJSON geometry or feature. |
+| geo2 | Required. GeoJSON geometry or feature. |
+
+### Example
+Find all locations that cross over a highway.
+
+```
+SELECT *
+FROM dev.locations
+WHERE geoCrosses(
+ geo_data,
+ '{
+ "type": "Feature",
+ "properties": {
+ "name": "Highway I-25"
+ },
+ "geometry": {
+ "type": "LineString",
+ "coordinates": [
+ [-104.9139404296875,41.00477542222947],
+ [-105.0238037109375,39.715638134796336],
+ [-104.853515625,39.53370327008705],
+ [-104.853515625,38.81403111409755],
+ [-104.61181640625,38.39764411353178],
+ [-104.8974609375,37.68382032669382],
+ [-104.501953125,37.00255267215955]
+ ]
+ }
+ }'
+)
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geodifference.md b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geodifference.md
new file mode 100644
index 00000000..652dbc1a
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geodifference.md
@@ -0,0 +1,56 @@
+---
+title: geoDifference
+---
+
+# geoDifference
+Returns a new polygon with the difference of the second polygon clipped from the first polygon.
+
+## Syntax
+geoDifference(_polygon1, polygon2_)
+
+## Parameters
+| Parameter | Description |
+|------------|----------------------------------------------------------------------------|
+| polygon1 | Required. Polygon or MultiPolygon GeoJSON feature. |
+| polygon2 | Required. Polygon or MultiPolygon GeoJSON feature to remove from polygon1. |
+
+### Example
+Return a GeoJSON Polygon that removes City Park (_polygon2_) from Colorado (_polygon1_).
+
+```
+SELECT geoDifference('{
+ "type": "Feature",
+ "properties": {
+ "name":"Colorado"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [[
+ [-109.072265625,37.00255267215955],
+ [-102.01904296874999,37.00255267215955],
+ [-102.01904296874999,41.0130657870063],
+ [-109.072265625,41.0130657870063],
+ [-109.072265625,37.00255267215955]
+ ]]
+ }
+ }',
+ '{
+ "type": "Feature",
+ "properties": {
+ "name":"City Park"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [[
+ [-104.95973110198975,39.7543828214657],
+ [-104.95955944061278,39.744781185675386],
+ [-104.95904445648193,39.74422022399989],
+ [-104.95835781097412,39.74402223643582],
+ [-104.94097709655762,39.74392324244047],
+ [-104.9408483505249,39.75434982844515],
+ [-104.95973110198975,39.7543828214657]
+ ]]
+ }
+ }'
+)
+```
diff --git a/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geodistance.md b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geodistance.md
new file mode 100644
index 00000000..ab7c9a53
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geodistance.md
@@ -0,0 +1,33 @@
+---
+title: geoDistance
+---
+
+# geoDistance
+Calculates the distance between two points in units (default is kilometers).
+
+## Syntax
+geoDistance(_point1, point2_[_, units_])
+
+## Parameters
+| Parameter | Description |
+|------------|-----------------------------------------------------------------------------------------------------------------------|
+| point1 | Required. GeoJSON Point specifying the origin. |
+| point2 | Required. GeoJSON Point specifying the destination. |
+| units | Optional. Specified as a string. Options are ‘degrees’, ‘radians’, ‘miles’, or ‘kilometers’. Default is ‘kilometers’. |
+
+### Example 1
+Calculate the distance, in miles, between HarperDB’s headquarters and the Washington Monument.
+
+```
+SELECT geoDistance('[-104.979127,39.761563]', '[-77.035248,38.889475]', 'miles')
+```
+
+### Example 2
+Find all locations that are within 40 kilometers of a given point, return that distance in miles, and sort by distance in an ascending order.
+
+```
+SELECT *, geoDistance('[-104.979127,39.761563]', geo_data, 'miles') as distance
+FROM dev.locations
+WHERE geoDistance('[-104.979127,39.761563]', geo_data, 'kilometers') < 40
+ORDER BY distance ASC
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geoequal.md b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geoequal.md
new file mode 100644
index 00000000..6c665e06
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geoequal.md
@@ -0,0 +1,41 @@
+---
+title: geoEqual
+---
+
+# geoEqual
+Determines if two GeoJSON features are the same type and have identical X,Y coordinate values. For more information see https://developers.arcgis.com/documentation/spatial-references/. Returns a Boolean.
+
+## Syntax
+geoEqual(_geo1_, _geo2_)
+
+## Parameters
+| Parameter | Description |
+|------------|----------------------------------------|
+| geo1 | Required. GeoJSON geometry or feature. |
+| geo2 | Required. GeoJSON geometry or feature. |
+
+### Example
+Find HarperDB Headquarters among all locations in the database.
+
+```
+SELECT *
+FROM dev.locations
+WHERE geoEqual(geo_data, '{
+ "type": "Feature",
+ "properties": {
+ "name": "HarperDB Headquarters"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [[
+ [-104.98060941696167,39.760704817357905],
+ [-104.98053967952728,39.76065120861263],
+ [-104.98055577278137,39.760642961109674],
+ [-104.98037070035934,39.76049450588716],
+ [-104.9802714586258,39.76056254790385],
+ [-104.9805235862732,39.76076461167841],
+ [-104.98060941696167,39.760704817357905]
+ ]]
+ }
+}')
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geolength.md b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geolength.md
new file mode 100644
index 00000000..6b00cadd
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geolength.md
@@ -0,0 +1,42 @@
+---
+title: geoLength
+---
+
+# geoLength
+Takes a GeoJSON and measures its length in the specified units (default is kilometers).
+
+## Syntax
+geoLength(_geoJSON_[_, units_])
+
+## Parameters
+| Parameter | Description |
+|------------|-----------------------------------------------------------------------------------------------------------------------|
+| geoJSON | Required. GeoJSON to measure. |
+| units | Optional. Specified as a string. Options are ‘degrees’, ‘radians’, ‘miles’, or ‘kilometers’. Default is ‘kilometers’. |
+
+### Example 1
+Calculate the length, in kilometers, of a manually passed GeoJSON linestring.
+
+```
+SELECT geoLength('{
+ "type": "Feature",
+ "geometry": {
+ "type": "LineString",
+ "coordinates": [
+ [-104.97963309288025,39.76163265441438],
+ [-104.9823260307312,39.76365323407955],
+ [-104.99193906784058,39.75616442110704]
+ ]
+ }
+}')
+```
+
+### Example 2
+Find all data plus the calculated length in miles of the GeoJSON, restrict the response to only lengths less than 5 miles, and return the data in order of lengths smallest to largest.
+
+```
+SELECT *, geoLength(geo_data, 'miles') as length
+FROM dev.locations
+WHERE geoLength(geo_data, 'miles') < 5
+ORDER BY length ASC
+```
diff --git a/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geonear.md b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geonear.md
new file mode 100644
index 00000000..32028ed4
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/geonear.md
@@ -0,0 +1,36 @@
+---
+title: geoNear
+---
+
+# geoNear
+Determines if point1 and point2 are within a specified distance from each other, default units are kilometers. Returns a Boolean.
+
+## Syntax
+geoNear(_point1, point2, distance_[_, units_])
+
+## Parameters
+| Parameter | Description |
+|------------|-----------------------------------------------------------------------------------------------------------------------|
+| point1 | Required. GeoJSON Point specifying the origin. |
+| point2 | Required. GeoJSON Point specifying the destination. |
+| distance | Required. The maximum distance in units as an integer or decimal. |
+| units | Optional. Specified as a string. Options are ‘degrees’, ‘radians’, ‘miles’, or ‘kilometers’. Default is ‘kilometers’. |
+
+### Example 1
+Return all locations within 50 miles of a given point.
+
+```
+SELECT *
+FROM dev.locations
+WHERE geoNear('[-104.979127,39.761563]', geo_data, 50, 'miles')
+```
+
+### Example 2
+Return all locations within 2 degrees of the earth of a given point. (Each degree lat/long is about 69 miles [111 kilometers]). Return all data and the distance in miles, sorted by ascending distance.
+
+```
+SELECT *, geoDistance('[-104.979127,39.761563]', geo_data, 'miles') as distance
+FROM dev.locations
+WHERE geoNear('[-104.979127,39.761563]', geo_data, 2, 'degrees')
+ORDER BY distance ASC
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/index.md b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/index.md
new file mode 100644
index 00000000..e692e812
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/sql-geospatial-functions/index.md
@@ -0,0 +1,20 @@
+---
+title: SQL Geospatial Functions
+---
+
+# SQL Geospatial Functions
+
+HarperDB geospatial features require data to be stored in a single column using the [GeoJSON standard](http://geojson.org/), a standard commonly used in geospatial technologies. Geospatial functions are available to be used in SQL statements.
+
+
+
+If you are new to GeoJSON you should check out the full specification here: http://geojson.org/. There are a few important things to point out before getting started.
+
+
+
+1) All GeoJSON coordinates are stored in `[longitude, latitude]` format.
+2) Coordinates or GeoJSON geometries must be passed as strings when written directly in a SQL statement.
+3) Note: if you are using Postman for your testing, due to limitations in the Postman client you will need to escape quotes in your strings, and your SQL will need to be passed on a single line.
+
+In the examples contained in the left-hand navigation, schema and table names may change, but all GeoJSON data will be stored in a column named geo_data.
diff --git a/site/versioned_docs/version-4.1/sql-guide/update.md b/site/versioned_docs/version-4.1/sql-guide/update.md
new file mode 100644
index 00000000..a3c838e8
--- /dev/null
+++ b/site/versioned_docs/version-4.1/sql-guide/update.md
@@ -0,0 +1,15 @@
+---
+title: Update
+---
+
+# Update
+
+HarperDB supports updating existing table row(s) via UPDATE statements. Multiple conditions can be applied to filter the row(s) to update. At this time selecting from one table to update another is not supported.
+
+
+
+```
+UPDATE dev.dog
+ SET owner_name = 'Kyle'
+ WHERE id IN (1, 2)
+```
diff --git a/site/versioned_docs/version-4.1/support.md b/site/versioned_docs/version-4.1/support.md
new file mode 100644
index 00000000..b89a8e5f
--- /dev/null
+++ b/site/versioned_docs/version-4.1/support.md
@@ -0,0 +1,84 @@
+---
+title: Support
+---
+
+# Support
+
+HarperDB support is available with all paid instances. Support tickets are managed via our [Zendesk portal](https://harperdbhelp.zendesk.com/hc/en-us/requests/new). Once a ticket is submitted, the HarperDB team will triage your request and get back to you as soon as possible. Additionally, you can join our [Slack community](https://harperdbcommunity.slack.com/join/shared_invite/zt-e8w6u1pu-2UFAXl_f4ZHo7F7DVkHIDA#/) where HarperDB team members and others in the community are frequently active to help answer questions.
+
+* [Submit a Support Ticket](https://harperdbhelp.zendesk.com/hc/en-us/requests/new)
+* [Join Our Slack Community](https://harperdbcommunity.slack.com/join/shared_invite/zt-e8w6u1pu-2UFAXl_f4ZHo7F7DVkHIDA#/)
+
+***
+
+### Common Issues
+
+**1 Gigabyte Limit to Request Bodies**
+
+HarperDB supports request bodies up to 1 GB in size. This limit does not impact the CSV file import function that reads from the local file system or from an external URL. If you need to bulk import large record sets, we recommend using the CSV import functions, especially if you run up against the 1 GB body size limit. Documentation for these functions can be found here.
+
+**Do not install as sudo**
+
+HarperDB should be installed using a specific user for HarperDB. This allows you to restrict the permissions that user has and who has access to the HarperDB file system. The reason behind this is that HarperDB files are written directly to the file system, and by using a specific HarperDB user this gives you granular control over who has access to these files.
+
+**Error: Must execute as User**
+
+You may have gotten an error like, `Error: Must execute as <>.` This means that you installed HarperDB as `<>`. Because HarperDB stores files directly to the file system, we only allow the HarperDB executable to be run by a single user. This prevents permissions issues on files. For example, if you installed as user\_a but later wanted to run as user\_b, user\_b may not have access to the database files HarperDB needs. This also keeps HarperDB more secure, as it allows you to lock files down to a specific user and prevents other users from accessing your files.
+
+***
+
+### Frequently Asked Questions (FAQs)
+
+**What operating system should I use to run HarperDB?**
+
+All major operating systems: Linux, Windows, and macOS. However, running HarperDB on Windows and macOS is intended only for development and evaluation purposes. Linux is strongly recommended for production use.
+
+**How are HarperDB’s SQL and NoSQL capabilities different from other solutions?**
+
+Many solutions offer NoSQL capability and separate processing for SQL such as in-memory transformation or multi-model support. HarperDB’s unique mechanism for storing each data attribute individually allows for performing NoSQL and SQL operations in real-time on the stored data set.
+
+**How does HarperDB ensure high availability and consistency?**
+
+HarperDB's clustering and replication capabilities allow high availability and fault-tolerance; if a server goes down, traffic can be quickly routed to other HarperDB servers that can service requests. HarperDB's replication uses a consistent resolution strategy (last-write-wins by logical timestamp), to ensure eventual consistency. HarperDB offers auditing capabilities that can be enabled to preserve a record of all changes so that mistakes or even malicious data changes are recorded and can be reverted.
+
+**Is HarperDB ACID-compliant?**
+
+HarperDB operations are atomic, consistent, and isolated per instance. This means that any query will provide an isolated, consistent snapshot view of the database (based on when the query started). Update and insert operations are also performed atomically; any reads and writes are performed within an atomic, isolated transaction with serializable isolation level, and will roll back if they cannot be fully completed successfully. Data is immediately flushed to disk after a write to ensure eventual durability. ACID compliance is not guaranteed across instances in a cluster; rather, eventual consistency will propagate changes with last-write-wins (by last logical timestamp) resolution.
+
+**How Does HarperDB Secure My Data?**
+
+HarperDB has role- and user-based security, allowing you to simply and easily ensure that the right people have access to your data. We also implement a number of authentication mechanisms to ensure the transactions submitted are trusted and secure.
+
+**Is HarperDB row or column oriented?**
+
+HarperDB can be considered column oriented; however, the exploded data model creates an interface that is free from either of these orientations. A user can search and update with columnar benefits while being as ACID-compliant as row-oriented databases.
+
+**What do you mean when you say HarperDB is single model?**
+
+HarperDB takes every attribute of a database table object and creates a key:value for both the key and its corresponding value. For example, the attribute eye color will be represented by a key “eye-color” and the corresponding value “green” will be represented by a key with the value “green”. We use LMDB’s lightning-fast key:value store to underpin all these interrelated keys and values, meaning that every “column” is automatically indexed, and you get huge performance in a tiny package.
+
+**Are Primary Keys Case-Sensitive?**
+
+When using HarperDB, primary keys are case-sensitive. This can cause confusion for developers. For example, if you have a user table, it might make sense to use `user.email` as the primary key. This can cause problems as Harper@harperdb.io and harper@harperdb.io would be seen as two different records. We recommend enforcing case on keys within your app to avoid this issue.
+
+**How Do I Move My HarperDB Data Directory?**
+
+HarperDB’s data directory can be moved from one location to another by simply updating the `rootPath` in the config file (where the data lives, which you specified during installation) to a new location.
+
+Next, edit HarperDB’s hdb\_boot\_properties.file to point HarperDB to the new location by updating the settings\_path variable. Substitute the NEW\_HDB\_ROOT variable in the snippets below with the new path to your new data directory, making sure you escape any slashes.
+
+On macOS
+
+```bash
+sed -i '' -E 's/^(settings_path[[:blank:]]*=[[:blank:]]*).*/\1NEW_HDB_ROOT\/harperdb-config.yaml/' ~/.harperdb/hdb_boot_properties.file
+```
+
+On Linux
+
+```bash
+sed -i -E 's/^(settings_path[[:blank:]]*=[[:blank:]]*).*/\1NEW_HDB_ROOT\/harperdb-config.yaml/' ~/hdb_boot_properties.file
+```
+
+Finally, edit the config file in the root folder you just moved:
+
+* Edit the `rootPath` parameter to reflect the new location of your data directory.
diff --git a/site/versioned_docs/version-4.1/transaction-logging.md b/site/versioned_docs/version-4.1/transaction-logging.md
new file mode 100644
index 00000000..06c6cb38
--- /dev/null
+++ b/site/versioned_docs/version-4.1/transaction-logging.md
@@ -0,0 +1,89 @@
+---
+title: Transaction Logging
+---
+
+# Transaction Logging
+
+HarperDB offers two options for logging transactions executed against a table. The options are similar but utilize different storage layers.
+
+## Transaction log
+
+The first option is `read_transaction_log`. The transaction log is built upon clustering streams. Clustering streams are per-table message stores that enable data to be propagated across a cluster. HarperDB leverages streams for use with the transaction log. When clustering is enabled all transactions that occur against a table are pushed to its stream, and thus make up the transaction log.
+
+If you would like to use the transaction log, but have not set up clustering yet, please see ["How to Cluster"](./clustering/).
+
+
+## Transaction Log Operations
+
+### read_transaction_log
+
+The `read_transaction_log` operation returns a prescribed set of records, based on given parameters. The example below will give a maximum of 2 records within the timestamps provided.
+
+```json
+{
+ "operation": "read_transaction_log",
+ "schema": "dev",
+ "table": "dog",
+ "from": 1598290235769,
+ "to": 1660249020865,
+ "limit": 2
+}
+```
+
+_See example response below._
+
+### read_transaction_log Response
+
+
+```json
+[
+ {
+ "operation": "insert",
+ "user": "admin",
+ "timestamp": 1660165619736,
+ "records": [
+ {
+ "id": 1,
+ "dog_name": "Penny",
+ "owner_name": "Kyle",
+ "breed_id": 154,
+ "age": 7,
+ "weight_lbs": 38,
+ "__updatedtime__": 1660165619688,
+ "__createdtime__": 1660165619688
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user": "admin",
+ "timestamp": 1660165620040,
+ "records": [
+ {
+ "id": 1,
+ "dog_name": "Penny B",
+ "__updatedtime__": 1660165620036
+ }
+ ]
+ }
+]
+```
+
+_See example request above._
+
+### delete_transaction_logs_before
+
+The `delete_transaction_logs_before` operation will delete transaction log data according to the given parameters. The example below will delete records older than the timestamp provided.
+
+```json
+{
+ "operation": "delete_transaction_logs_before",
+ "schema": "dev",
+ "table": "dog",
+ "timestamp": 1598290282817
+}
+```
+
+_Note: Streams are used for catchup if a node goes down. If you delete messages from a stream there is a chance catchup won't work._
+
+Read on for `read_audit_log`, the second option for logging transactions executed against a table.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.1/upgrade-hdb-instance.md b/site/versioned_docs/version-4.1/upgrade-hdb-instance.md
new file mode 100644
index 00000000..af7ba7b1
--- /dev/null
+++ b/site/versioned_docs/version-4.1/upgrade-hdb-instance.md
@@ -0,0 +1,90 @@
+---
+title: Upgrade a HarperDB Instance
+---
+
+# Upgrade a HarperDB Instance
+
+This document describes best practices for upgrading self-hosted HarperDB instances. HarperDB can be upgraded using a combination of npm and built-in HarperDB upgrade scripts. Whenever upgrading your HarperDB installation it is recommended you make a backup of your data first. Note: This document applies to self-hosted HarperDB instances only. All HarperDB Cloud instances will be upgraded by the HarperDB Cloud team.
+
+## Upgrading
+
+Upgrading HarperDB is a two-step process. First the latest version of HarperDB must be downloaded from npm, then the HarperDB upgrade scripts will be utilized to ensure the newest features are available on the system.
+
+1. Install the latest version of HarperDB using `npm install -g harperdb`.
+
+ Note `-g` should only be used if you installed HarperDB globally (which is recommended).
+1. Run `harperdb` to initiate the upgrade process.
+
+ HarperDB will then prompt you for all appropriate inputs and then run the upgrade directives.
+
+## Node Version Manager (nvm)
+
+[Node Version Manager (nvm)](http://nvm.sh/) is an easy way to install, remove, and switch between different versions of Node.js as required by various applications. More information, including directions on installing nvm, can be found here: https://nvm.sh/.
+
+HarperDB supports Node.js versions 14.0.0 and higher; however, **please check our** [**NPM page**](https://www.npmjs.com/package/harperdb) **for our recommended Node.js version.** To install a different version of Node.js with nvm, run the command (substituting the desired version number):
+
+```bash
+nvm install <version number>
+```
+
+To switch to a specific version of Node, run:
+
+```bash
+nvm use <version number>
+```
+
+To see the current running version of Node run:
+
+```bash
+node --version
+```
+
+With a handful of different versions of Node.js installed, run nvm with the `ls` argument to list out all installed versions:
+
+```bash
+nvm ls
+```
+
+When upgrading HarperDB, we recommend also upgrading your Node version. Here we assume you're running on an older version of Node; the execution may look like this:
+
+Switch to the older version of Node that HarperDB is running on (if it is not the current version):
+
+```bash
+nvm use 14.19.0
+```
+
+Make sure HarperDB is not running:
+
+```bash
+harperdb stop
+```
+
+Uninstall HarperDB. Note, this step is not required, but will clean up old artifacts of HarperDB. We recommend removing all other HarperDB installations to ensure the most recent version is always running.
+
+```bash
+npm uninstall -g harperdb
+```
+
+Switch to the newer version of Node:
+
+```bash
+nvm use <version number>
+```
+
+Install HarperDB globally:
+
+```bash
+npm install -g harperdb
+```
+
+Run the upgrade script:
+
+```bash
+harperdb
+```
+
+Start HarperDB:
+
+```bash
+harperdb start
+```
diff --git a/site/versioned_docs/version-4.2/administration/_category_.json b/site/versioned_docs/version-4.2/administration/_category_.json
new file mode 100644
index 00000000..828e0998
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/_category_.json
@@ -0,0 +1,12 @@
+{
+ "label": "Administration",
+ "position": 2,
+ "link": {
+ "type": "generated-index",
+ "title": "Administration Documentation",
+ "description": "Guides for managing and administering HarperDB instances",
+ "keywords": [
+ "administration"
+ ]
+ }
+}
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/administration/administration.md b/site/versioned_docs/version-4.2/administration/administration.md
new file mode 100644
index 00000000..7478f12f
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/administration.md
@@ -0,0 +1,23 @@
+---
+title: Best Practices and Recommendations
+---
+
+# Best Practices and Recommendations
+
+HarperDB is designed for minimal administrative effort, and with managed services these tasks are handled for you. There are, however, important things to consider when managing your own HarperDB servers.
+
+### Data Protection and (Backup and) Recovery
+
+As a distributed database, HarperDB can benefit from different data protection and recovery strategies than a traditional single-server database. Several aspects of data protection and recovery should be considered:
+
+* Availability: As a distributed database HarperDB is intrinsically built for high-availability and a cluster will continue to run even with complete server(s) failure. This is the first and primary defense for protecting against any downtime or data loss. HarperDB provides fast horizontal scaling functionality with node cloning, which facilitates ease of establishing high availability clusters.
+* [Audit log](./logging/audit-logging): HarperDB defaults to tracking data changes so malicious data changes can be found, attributed, and reverted. This provides security-level defense against data loss, allowing for fine-grained isolation and reversion of individual data without the large-scale reversion/loss of data associated with point-in-time recovery approaches.
+* Snapshots: When used as a source-of-truth database for crucial data, we recommend using snapshot tools to regularly snapshot databases as a final backup/defense against data loss (this should only be used as a last resort in recovery). HarperDB has a [`get_backup`](../developers/operations-api/databases-and-tables#get-backup) operation, which provides direct support for making and retrieving database snapshots. An HTTP request can be used to get a snapshot. Alternatively, volume snapshot tools can be used to snapshot data at the OS/VM level. HarperDB can also provide scripts for replaying transaction logs from snapshots to facilitate point-in-time recovery when necessary (often customization may be preferred in certain recovery situations to minimize data loss).
+
+### Horizontal Scaling with Node Cloning
+
+HarperDB provides rapid horizontal scaling capabilities through [node cloning functionality described here](./cloning).
+
+### Replication Transaction Logging
+
+HarperDB utilizes NATS for replication, which maintains a transaction log. See the [transaction log documentation for information on how to query this log](./logging/transaction-logging).
diff --git a/site/versioned_docs/version-4.2/administration/cloning.md b/site/versioned_docs/version-4.2/administration/cloning.md
new file mode 100644
index 00000000..32f73933
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/cloning.md
@@ -0,0 +1,153 @@
+---
+title: Clone Node
+---
+
+# Clone Node
+
+Clone node is a configurable node script that can be pointed to another instance of HarperDB and create a full clone.
+
+To start clone node, install `harperdb` as you normally would, but with the clone node environment or command line (CLI) variables set (see below).
+
+To run clone node, either of the following sets of variables must be set:
+
+#### Environment variables
+
+* `HDB_LEADER_URL` - The URL of the leader node's operation API (usually port 9925).
+* `HDB_LEADER_USERNAME` - The leader node admin username.
+* `HDB_LEADER_PASSWORD` - The leader node admin password.
+* `HDB_LEADER_CLUSTERING_HOST` - _(optional)_ The leader clustering host. This value will be added to the clustering routes on the clone node. If this value is not set, replication will not be set up between the leader and clone.
+
+For example:
+```
+HDB_LEADER_URL=https://node-1.my-domain.com:9925 HDB_LEADER_CLUSTERING_HOST=node-1.my-domain.com HDB_LEADER_USERNAME=... HDB_LEADER_PASSWORD=... harperdb
+```
+
+#### Command line variables
+
+* `--HDB_LEADER_URL` - The URL of the leader node's operation API (usually port 9925).
+* `--HDB_LEADER_USERNAME` - The leader node admin username.
+* `--HDB_LEADER_PASSWORD` - The leader node admin password.
+* `--HDB_LEADER_CLUSTERING_HOST` - _(optional)_ The leader clustering host. This value will be added to the clustering routes on the clone node. If this value is not set, replication will not be set up between the leader and clone.
+
+For example:
+```
+harperdb --HDB_LEADER_URL https://node-1.my-domain.com:9925 --HDB_LEADER_CLUSTERING_HOST node-1.my-domain.com --HDB_LEADER_USERNAME ... --HDB_LEADER_PASSWORD ...
+```
+
+If an instance already exists in the location you are cloning to, clone node will not run; it will instead proceed with starting HarperDB.
+The exception is when you are cloning overtop of an existing instance (see below).
+
+Clone node does not require any additional configuration apart from the variables referenced above.
+However, it can be configured through `clone-node-config.yaml`, which should be located in the `ROOTPATH` directory of your clone.
+If no configuration is supplied, default values will be used.
+
+By default:
+* The HarperDB Terms and Conditions will be accepted
+* The root path will be `~/hdb`
+* The Operations API port will be set to 9925
+* The admin and clustering username and password will be the same as the leader node
+* A unique node name will be generated
+* All tables will be cloned and have replication added; the subscriptions will be `publish: true` and `subscribe: true`
+* The users and roles system tables will be cloned and have replication added both ways
+* All components will be cloned
+* All routes will be cloned
+
+**Leader node** - the instance of HarperDB you are cloning.\
+**Clone node** - the new node which will be a clone of the leader node.
+
+The following configuration is used exclusively by clone node.
+
+```yaml
+databaseConfig:
+ excludeDatabases:
+ - database: dev
+ excludeTables:
+ - database: prod
+ table: dog
+```
+
+Set any databases or tables that you wish to exclude from cloning.
+
+```yaml
+componentConfig:
+ skipNodeModules: true
+ exclude:
+ - name: my-cool-component
+```
+
+`skipNodeModules` excludes the node\_modules directory when clone node packages components in `hdb/components`.
+
+`exclude` can be used to set any components that you do not want cloned.
+
+```yaml
+clusteringConfig:
+ publishToLeaderNode: true
+ subscribeToLeaderNode: true
+```
+
+`publishToLeaderNode` and `subscribeToLeaderNode` define the clustering subscription that will be set up with the leader node.
+
+```yaml
+httpsRejectUnauthorized: false
+```
+
+Clone node makes HTTP requests to the leader node; `httpsRejectUnauthorized` sets whether HTTPS requests should have their certificates verified.
+
+Any HarperDB configuration can also be used in the `clone-node-config.yaml` file and will be applied to the cloned node, for example:
+
+```yaml
+rootPath: null
+operationsApi:
+ network:
+ port: 9925
+clustering:
+ nodeName: null
+ logLevel: info
+logging:
+ level: error
+```
+
+_Note: any required configuration needed to install/run HarperDB will use default values or be auto-generated unless it is provided in the config file._
+
+### Fully connected clone
+
+A fully connected topology is one in which all nodes replicate (publish and subscribe) with all other nodes. A fully connected clone maintains this topology with the addition of the new node. When a clone is created, replication is added between the clone and the leader, as well as with any nodes the leader is replicating with. For example, if the leader is replicating with node-a and node-b, the clone will replicate with the leader, node-a, and node-b.
+
+To run clone node with the fully connected option, pass the environment variable `HDB_FULLY_CONNECTED=true` or the CLI variable `--HDB_FULLY_CONNECTED true`.
+
+### Cloning overtop of an existing HarperDB instance
+
+_Note: this will completely overwrite any system tables (user, roles, nodes, etc.) and any other databases that are named the same as ones that exist on the leader node. It will also do the same for any components._
+
+To create a clone over an existing install of HarperDB, use the environment variable `HDB_CLONE_OVERTOP=true` or the CLI variable `--HDB_CLONE_OVERTOP true`.
+
+## Cloning steps
+
+When run, clone node executes the following steps:
+
+1. Clone any user-defined tables and the hdb\_role and hdb\_user system tables.
+1. Install HarperDB overtop of the cloned tables.
+1. Clone the configuration, which includes:
+   * Copying the clustering routes and clustering user.
+   * Copying component references.
+   * Using any provided clone config to populate the new clone node's harperdb-config.yaml.
+1. Clone any components in the `hdb/components` directory.
+1. Start the cloned HarperDB Instance.
+1. Cluster all cloned tables.
+
+## Custom database and table pathing
+
+Currently, clone node will not clone a table if it has custom pathing configured. In this situation, the entire database that contains the table will not be cloned.
+
+If a database has custom pathing (but no individual table pathing), it will be cloned; however, if no custom pathing is provided in the clone config, the database will be stored in the default database directory.
+
+To provide custom pathing for a database in the clone config, use the following configuration:
+
+```yaml
+databases:
+  <name-of-database>:
+    path: /Users/harper/hdb
+```
+
+`<name-of-database>` is the name of the database which will be located at the custom path.\
+`path` is the path where the database will reside.
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/create-account.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/create-account.md
new file mode 100644
index 00000000..635de7f4
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/create-account.md
@@ -0,0 +1,26 @@
+---
+title: Create a Studio Account
+---
+
+# Create a Studio Account
+Start at the [HarperDB Studio sign up page](https://studio.harperdb.io/sign-up).
+
+1) Provide the following information:
+ * First Name
+ * Last Name
+ * Email Address
+ * Subdomain
+
+ *Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: https:/c1-demo.harperdbcloud.com.*
+ * Coupon Code (optional)
+2) Review the Privacy Policy and Terms of Service.
+3) Click the sign up for free button.
+4) You will be taken to a new screen to add an account password. Enter your password.
+ *Passwords must be a minimum of 8 characters with at least 1 lower case character, 1 upper case character, 1 number, and 1 special character.*
+5) Click the add account password button.
+
+You will receive a Studio welcome email confirming your registration.
+
+
+
+Note: Your email address will be used as your username and cannot be changed.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/enable-mixed-content.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/enable-mixed-content.md
new file mode 100644
index 00000000..1948d6be
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/enable-mixed-content.md
@@ -0,0 +1,11 @@
+---
+title: Enable Mixed Content
+---
+
+# Enable Mixed Content
+
+Enabling mixed content is required when you would like to connect the HarperDB Studio to HarperDB instances via HTTP. This should not be used for production systems, but may be convenient for development and testing purposes. Doing so allows your browser to load HTTP traffic, which is considered insecure, from within an HTTPS site like the Studio.
+
+
+
+A comprehensive guide is provided by Adobe [here](https://experienceleague.adobe.com/docs/target/using/experiences/vec/troubleshoot-composer/mixed-content.html).
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/index.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/index.md
new file mode 100644
index 00000000..93ba1af7
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/index.md
@@ -0,0 +1,15 @@
+---
+title: HarperDB Studio
+---
+
+# HarperDB Studio
+HarperDB Studio is the web-based GUI for HarperDB. Studio enables you to administer, navigate, and monitor all of your HarperDB instances in a simple, user-friendly interface without any knowledge of the underlying HarperDB API. It’s free to sign up, so get started today!
+
+[Sign up for free!](https://studio.harperdb.io/sign-up)
+
+---
+## How does Studio Work?
+While HarperDB Studio is web-based and hosted by us, all database interactions are performed on the HarperDB instance the Studio is connected to. The HarperDB Studio loads in your browser, at which point you log in to your HarperDB instances. Credentials are stored in your browser cache and are not transmitted back to HarperDB. All database interactions are made via the HarperDB Operations API directly from your browser to your instance.
+
+## What type of instances can I manage?
+HarperDB Studio enables users to manage both HarperDB Cloud instances and privately hosted instances all from a single UI. All HarperDB instances feature identical behavior whether they are hosted by us or by you.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/instance-configuration.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/instance-configuration.md
new file mode 100644
index 00000000..55c01be1
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/instance-configuration.md
@@ -0,0 +1,119 @@
+---
+title: Instance Configuration
+---
+
+# Instance Configuration
+
+HarperDB instance configuration can be viewed and managed directly through the HarperDB Studio. HarperDB Cloud instances can be resized in two different ways via this page, either by modifying machine RAM or by increasing drive storage. User-installed instances can have their licenses modified by adjusting licensed RAM.
+
+
+
+All instance configuration is handled through the **config** page of the HarperDB Studio, accessed with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click config in the instance control bar.
+
+*Note, the **config** page will only be available to super users and certain items are restricted to Studio organization owners.*
+
+## Instance Overview
+
+The **instance overview** panel displays the following instance specifications:
+
+* Instance URL
+
+* Instance Node Name (for clustering)
+
+* Instance API Auth Header (this user)
+
+ *The Basic authentication header used for the logged in HarperDB database user*
+
+* Created Date (HarperDB Cloud only)
+
+* Region (HarperDB Cloud only)
+
+ *The geographic region where the instance is hosted.*
+
+* Total Price
+
+* RAM
+
+* Storage (HarperDB Cloud only)
+
+* Disk IOPS (HarperDB Cloud only)
+
+## Update Instance RAM
+
+HarperDB Cloud instance size and user-installed instance licenses can be modified with the following instructions. This option is only available to Studio organization owners.
+
+
+
+Note: For HarperDB Cloud instances, upgrading RAM may add additional CPUs to your instance as well. See the [instance size hardware specs](../../deployments/harperdb-cloud/instance-size-hardware-specs) to see how many CPUs are provisioned for each instance size.
+
+1) In the **update ram** panel at the bottom left:
+
+ * Select the new instance size.
+
+ * If you do not have a credit card associated with your account, an **Add Credit Card To Account** button will appear. Click that to be taken to the billing screen where you can enter your credit card information before returning to the **config** tab to proceed with the upgrade.
+
+ * If you do have a credit card associated, you will be presented with the updated billing information.
+
+ * Click **Upgrade**.
+
+2) The instance will shut down and begin reprovisioning/relicensing itself. The instance will not be available during this time. You will be returned to the instance dashboard and the instance status will show UPDATING INSTANCE.
+
+3) Once your instance upgrade is complete, it will appear on the instance dashboard as status OK with your newly selected instance size.
+
+*Note, if HarperDB Cloud instance reprovisioning takes longer than 20 minutes, please submit a support ticket here: https://harperdbhelp.zendesk.com/hc/en-us/requests/new.*
+
+## Update Instance Storage
+
+The HarperDB Cloud instance storage size can be increased with the following instructions. This option is only available to Studio organization owners.
+
+Note: Instance storage can only be upgraded once every 6 hours.
+
+1) In the **update storage** panel at the bottom left:
+
+ * Select the new instance storage size.
+
+ * If you do not have a credit card associated with your account, an **Add Credit Card To Account** button will appear. Click that to be taken to the billing screen where you can enter your credit card information before returning to the **config** tab to proceed with the upgrade.
+
+ * If you do have a credit card associated, you will be presented with the updated billing information.
+
+ * Click **Upgrade**.
+
+2) The instance will shut down and begin reprovisioning itself. The instance will not be available during this time. You will be returned to the instance dashboard and the instance status will show UPDATING INSTANCE.
+
+3) Once your instance upgrade is complete, it will appear on the instance dashboard as status OK with your newly selected instance size.
+
+*Note, if this process takes longer than 20 minutes, please submit a support ticket here: https://harperdbhelp.zendesk.com/hc/en-us/requests/new.*
+
+## Remove Instance
+
+The HarperDB instance can be deleted/removed from the Studio with the following instructions. Once this operation is started it cannot be undone. This option is only available to Studio organization owners.
+
+1) In the **remove instance** panel at the bottom left:
+ * Enter the instance name in the text box.
+
+ * The Studio will present you with a warning.
+
+ * Click **Remove**.
+
+2) The instance will begin deleting immediately.
+
+## Restart Instance
+
+The HarperDB Cloud instance can be restarted with the following instructions.
+
+1) In the **restart instance** panel at the bottom right:
+ * Enter the instance name in the text box.
+
+ * The Studio will present you with a warning.
+
+ * Click **Restart**.
+
+2) The instance will begin restarting immediately.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/instance-example-code.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/instance-example-code.md
new file mode 100644
index 00000000..b4b74e5f
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/instance-example-code.md
@@ -0,0 +1,62 @@
+---
+title: Instance Example Code
+---
+
+# Instance Example Code
+
+Example code prepopulated with the instance URL and authorization token for the logged-in database user can be found on the **example code** page of the HarperDB Studio. Code samples are generated based on the HarperDB API Documentation Postman collection and can be accessed with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click **example code** in the instance control bar.
+
+5) Select the appropriate **category** from the left navigation.
+
+6) Select the appropriate **operation** from the left navigation.
+
+7) Select your desired language/variant from the **Choose Programming Language** dropdown.
+
+8) Copy code from the sample code panel using the copy icon.
+
+## Supported Languages
+
+Sample code uses two identifiers: **language** and **variant**.
+
+* **language** is the programming language that the sample code is generated in.
+
+* **variant** is the methodology or library used by the language to send HarperDB requests.
+
+The available language/variant combinations are as follows:
+
+| Language | Variant |
+|--------------|---------------|
+| C# | RestSharp |
+| cURL | cURL |
+| Go | Native |
+| HTTP | HTTP |
+| Java | OkHttp |
+| Java | Unirest |
+| JavaScript | Fetch |
+| JavaScript | jQuery |
+| JavaScript | XHR |
+| NodeJs | Axios |
+| NodeJs | Native |
+| NodeJs | Request |
+| NodeJs | Unirest |
+| Objective-C | NSURLSession |
+| OCaml | Cohttp |
+| PHP | cURL |
+| PHP | HTTP_Request2 |
+| PowerShell | RestMethod |
+| Python | http.client |
+| Python | Requests |
+| Ruby | Net::HTTP |
+| Shell | Httpie |
+| Shell | wget |
+| Swift | URLSession |
+
+
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/instance-metrics.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/instance-metrics.md
new file mode 100644
index 00000000..f084df63
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/instance-metrics.md
@@ -0,0 +1,16 @@
+---
+title: Instance Metrics
+---
+
+# Instance Metrics
+
+The HarperDB Studio displays instance status and metrics on the instance status page, which can be accessed with the following instructions:
+
+1. Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+1. Click the appropriate organization that the instance belongs to.
+1. Select your desired instance.
+1. Click **status** in the instance control bar.
+
+Once on the instance status page you can view host system information, [HarperDB logs](../logging/standard-logging), and [HarperDB Cloud alarms](../../deployments/harperdb-cloud/alarms) (if it is a cloud instance).
+
+_Note, the **status** page will only be available to super users._
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/instances.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/instances.md
new file mode 100644
index 00000000..7c209629
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/instances.md
@@ -0,0 +1,131 @@
+---
+title: Instances
+---
+
+# Instances
+
+The HarperDB Studio allows you to administer all of your HarperDB instances in one place. HarperDB currently offers the following instance types:
+
+* **HarperDB Cloud Instance** Managed installations of HarperDB, what we call [HarperDB Cloud](../../deployments/harperdb-cloud/).
+* **5G Wavelength Instance** Managed installations of HarperDB running on the Verizon network through AWS Wavelength, what we call [5G Wavelength Instances](../../deployments/harperdb-cloud/verizon-5g-wavelength-instances). _Note, these instances are only accessible via the Verizon network._
+* **User-Installed Instance** Any HarperDB installation that is managed by you. These include instances hosted within your cloud provider accounts (for example, from the AWS or Digital Ocean Marketplaces), privately hosted instances, or instances installed locally.
+
+All interactions between the Studio and your instances take place directly from your browser. HarperDB stores metadata about your instances, which enables the Studio to display these instances when you log in. Beyond that, all traffic is routed from your browser to the HarperDB instances using the standard [HarperDB API](../../developers/operations-api/).
+
+## Organization Instance List
+
+A summary view of all instances within an organization can be viewed by clicking on the appropriate organization from the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page. Each instance gets its own card. HarperDB Cloud and user-installed instances are listed together.
+
+## Create a New Instance
+
+1. Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+1. Click the appropriate organization for the instance to be created under.
+1. Click the **Create New HarperDB Cloud Instance + Register User-Installed Instance** card.
+1. Select your desired Instance Type.
+1. For a HarperDB Cloud Instance or a HarperDB 5G Wavelength Instance, click **Create HarperDB Cloud Instance**.
+ 1. Fill out Instance Info.
+ 1. Enter Instance Name
+
+ _This will be used to build your instance URL. For example, with subdomain “demo” and instance name “c1” the instance URL would be: https:/c1-demo.harperdbcloud.com. The Instance URL will be previewed below._
+ 1. Enter Instance Username
+
+ _This is the username of the initial HarperDB instance super user._
+ 1. Enter Instance Password
+
+ _This is the password of the initial HarperDB instance super user._
+ 1. Click **Instance Details** to move to the next page.
+ 1. Select Instance Specs
+ 1. Select Instance RAM
+
+ _HarperDB Cloud Instances are billed based on Instance RAM, this will select the size of your provisioned instance._ [_More on instance specs_](../../deployments/harperdb-cloud/instance-size-hardware-specs)_._
+ 1. Select Storage Size
+
+ _Each instance has a mounted storage volume where your HarperDB data will reside. Storage is provisioned based on space and IOPS._ [_More on IOPS Impact on Performance_](../../deployments/harperdb-cloud/iops-impact)_._
+ 1. Select Instance Region
+
+ _The geographic area where your instance will be provisioned._
+ 1. Click **Confirm Instance Details** to move to the next page.
+ 1. Review your Instance Details, if there is an error, use the back button to correct it.
+ 1. Review the [Privacy Policy](https:/harperdb.io/legal/privacy-policy/) and [Terms of Service](https:/harperdb.io/legal/harperdb-cloud-terms-of-service/), if you agree, click the **I agree** radio button to confirm.
+ 1. Click **Add Instance**.
+ 1. Your HarperDB Cloud instance will be provisioned in the background. Provisioning typically takes 5-15 minutes. You will receive an email notification when your instance is ready.
+
+
+## Register User-Installed Instance
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+2) Click the appropriate organization for the instance to be created under.
+3) Click the **Create New HarperDB Cloud Instance + Register User-Installed Instance** card.
+4) Select **Register User-Installed Instance**.
+ 1. Fill out Instance Info.
+ 1. Enter Instance Name
+
+ _This is used for descriptive purposes only._
+ 1. Enter Instance Username
+
+ _The username of a HarperDB super user that is already configured in your HarperDB installation._
+ 1. Enter Instance Password
+
+ _The password of a HarperDB super user that is already configured in your HarperDB installation._
+ 1. Enter Host
+
+ _The host to access the HarperDB instance. For example, `harperdb.myhost.com` or `localhost`._
+ 1. Enter Port
+
+ _The port to access the HarperDB instance. HarperDB defaults `9925` for HTTP and `31283` for HTTPS._
+ 1. Select SSL
+
+ _If your instance is running over SSL, select the SSL checkbox. If not, you will need to enable mixed content in your browser to allow the HTTPS Studio to access the HTTP instance. If there are issues connecting to the instance, the Studio will display a red error message._
+ 1. Click **Instance Details** to move to the next page.
+ 1. Select Instance Specs
+ 1. Select Instance RAM
+
+ _HarperDB instances are billed based on Instance RAM. Selecting additional RAM will enable the ability for faster and more complex queries._
+ 1. Click **Confirm Instance Details** to move to the next page.
+ 1. Review your Instance Details, if there is an error, use the back button to correct it.
+ 1. Review the [Privacy Policy](https:/harperdb.io/legal/privacy-policy/) and [Terms of Service](https:/harperdb.io/legal/harperdb-cloud-terms-of-service/), if you agree, click the **I agree** radio button to confirm.
+ 1. Click **Add Instance**.
+ 1. The HarperDB Studio will register your instance and restart it for the registration to take effect. Your instance will be immediately available after this is complete.
+
+## Delete an Instance
+
+Instance deletion has two different behaviors depending on the instance type.
+
+* **HarperDB Cloud Instance** This instance will be permanently deleted, including all data. This process is irreversible and cannot be undone.
+* **User-Installed Instance** The instance will be removed from the HarperDB Studio only. This does not uninstall HarperDB from your system and your data will remain intact.
+
+An instance can be deleted as follows:
+
+1. Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+1. Click the appropriate organization that the instance belongs to.
+1. Identify the proper instance card and click the trash can icon.
+1. Enter the instance name into the text box.
+
+ _This is done for confirmation purposes to ensure you do not accidentally delete an instance._
+1. Click the **Do It** button.
+
+## Upgrade an Instance
+
+HarperDB instances can be resized on the [Instance Configuration](./instance-configuration) page.
+
+## Instance Log In/Log Out
+
+The Studio enables users to log in and out of different database users from the instance control panel. To log out of an instance:
+
+1. Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+1. Click the appropriate organization that the instance belongs to.
+1. Identify the proper instance card and click the lock icon.
+1. You will immediately be logged out of the instance.
+
+To log in to an instance:
+
+1. Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+1. Click the appropriate organization that the instance belongs to.
+1. Identify the proper instance card (it will have an unlocked icon and a status reading PLEASE LOG IN) and click the center of the card.
+1. Enter the database username.
+
+ _The username of a HarperDB user that is already configured in your HarperDB instance._
+1. Enter the database password.
+
+ _The password of a HarperDB user that is already configured in your HarperDB instance._
+1. Click **Log In**.
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/login-password-reset.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/login-password-reset.md
new file mode 100644
index 00000000..dddda5c1
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/login-password-reset.md
@@ -0,0 +1,42 @@
+---
+title: Login and Password Reset
+---
+
+# Login and Password Reset
+
+## Log In to Your HarperDB Studio Account
+
+To log into your existing HarperDB Studio account:
+
+1) Navigate to the [HarperDB Studio](https://studio.harperdb.io/).
+2) Enter your email address.
+3) Enter your password.
+4) Click **sign in**.
+
+## Reset a Forgotten Password
+
+To reset a forgotten password:
+
+1) Navigate to the HarperDB Studio password reset page.
+2) Enter your email address.
+3) Click **send password reset email**.
+4) If the account exists, you will receive an email with a temporary password.
+5) Navigate back to the HarperDB Studio login page.
+6) Enter your email address.
+7) Enter your temporary password.
+8) Click **sign in**.
+9) You will be taken to a new screen to reset your account password. Enter your new password.
+*Passwords must be a minimum of 8 characters with at least 1 lower case character, 1 upper case character, 1 number, and 1 special character.*
+10) Click the **add account password** button.
+
+## Change Your Password
+
+If you are already logged into the Studio, you can change your password through the user interface.
+
+1) Navigate to the HarperDB Studio profile page.
+2) In the **password** section, enter:
+
+ * Current password.
+ * New password.
+ * New password again *(for verification)*.
+3) Click the **Update Password** button.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-charts.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-charts.md
new file mode 100644
index 00000000..38c8bc0d
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-charts.md
@@ -0,0 +1,65 @@
+---
+title: Manage Charts
+---
+
+# Manage Charts
+
+The HarperDB Studio includes a charting feature within an instance. Charts are generated in real time based on your existing data and automatically refreshed every 15 seconds. Instance charts can be accessed with the following instructions:
+
+1. Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+1. Click the appropriate organization that the instance belongs to.
+1. Select your desired instance.
+1. Click **charts** in the instance control bar.
+
+## Creating a New Chart
+
+Charts are generated from SQL queries, so to build a new chart you first need to build a query. Instructions are as follows (starting on the charts page described above):
+
+1. Click **query** in the instance control bar.
+1. Enter the SQL query you would like to generate a chart from.
+
+ _For example, using the dog demo data from the API Docs, we can get the average dog age per owner with the following query: `SELECT AVG(age) as avg_age, owner_name FROM dev.dog GROUP BY owner_name`._
+1. Click **Execute**.
+1. Click **create chart** at the top right of the results table.
+1. Configure your chart.
+ 1. Choose chart type.
+
+ _HarperDB Studio offers many standard charting options like line, bar, etc._
+ 1. Choose a data column.
+
+ _This column will be used to plot the data point. Typically, this is the values being calculated in the `SELECT` statement. Depending on the chart type, you can select multiple data columns to display on a single chart._
+ 1. Depending on the chart type, you will need to select a grouping.
+
+ _This could be labeled as x-axis, label, etc. This will be used to group the data, typically this is what you used in your **GROUP BY** clause._
+ 1. Enter a chart name.
+
+ _Used for identification purposes and will be displayed at the top of the chart._
+ 1. Choose visible to all org users toggle.
+
+ _Leaving this option off will limit chart visibility to just your HarperDB Studio user. Toggling it on will enable all users with this Organization to view this chart._
+ 1. Click **Add Chart**.
+ 1. The chart will now be visible on the **charts** page.
+
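+Under the hood, queries executed in the Studio are sent to your instance via the operations API. A minimal sketch of the equivalent `sql` operation request body for the example `dev.dog` query above looks like:
+
+```json
+{
+  "operation": "sql",
+  "sql": "SELECT AVG(age) AS avg_age, owner_name FROM dev.dog GROUP BY owner_name"
+}
+```
+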
+The example query above, configured as a bar chart, results in the following chart:
+
+
+
+## Downloading Charts
+
+HarperDB Studio charts can be downloaded in SVG, PNG, and CSV format. Instructions are as follows (starting on the charts page described above):
+
+1. Identify the chart you would like to export.
+1. Click the three bars icon.
+1. Select the appropriate download option.
+1. The Studio will generate the export and begin downloading immediately.
+
+## Delete a Chart
+
+Delete a chart as follows (starting on the charts page described above):
+
+1. Identify the chart you would like to delete.
+1. Click the X icon.
+1. Click the **confirm delete chart** button.
+1. The chart will be deleted.
+
+Deleting a chart that is visible to all Organization users will delete it for all users.
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-clustering.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-clustering.md
new file mode 100644
index 00000000..7155249d
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-clustering.md
@@ -0,0 +1,94 @@
+---
+title: Manage Clustering
+---
+
+# Manage Clustering
+
+HarperDB instance clustering and replication can be configured directly through the HarperDB Studio. It is recommended to read through the clustering documentation first to gain a strong understanding of HarperDB clustering behavior.
+
+
+
+All clustering configuration is handled through the **cluster** page of the HarperDB Studio, accessed with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click **cluster** in the instance control bar.
+
+Note, the **cluster** page will only be available to super users.
+
+---
+## Initial Configuration
+
+HarperDB instances do not have clustering configured by default. The HarperDB Studio will walk you through the initial configuration. Upon entering the **cluster** screen for the first time you will need to complete the following configuration. Configurations are set in the **enable clustering** panel on the left while actions are described in the middle of the screen.
+
+1) Create a cluster user (read more about this here: Clustering Users and Roles).
+ * Enter username.
+
+ * Enter password.
+
+ * Click **Create Cluster User**.
+
+2) Click **Set Cluster Node Name**.
+3) Click **Enable Instance Clustering**.
+
+At this point the Studio will restart your HarperDB Instance, required for the configuration changes to take effect.
+
+---
+
+## Manage Clustering
+Once initial clustering configuration is completed, you are presented with a clustering management screen with the following properties:
+
+* **connected instances**
+
+ Displays all instances within the Studio Organization that this instance manages a connection with.
+
+* **unconnected instances**
+
+ Displays all instances within the Studio Organization that this instance does not manage a connection with.
+
+* **unregistered instances**
+
+ Displays all instances outside of the Studio Organization that this instance manages a connection with.
+
+* **manage clustering**
+
+ Once instances are connected, this will display clustering management options for all connected instances and all schemas and tables.
+
+---
+
+## Connect an Instance
+
+HarperDB Instances can be clustered together with the following instructions.
+
+1) Ensure clustering has been configured on both instances and a cluster user with identical credentials exists on both.
+
+2) Identify the instance you would like to connect from the **unconnected instances** panel.
+
+3) Click the plus icon next to the appropriate instance.
+
+4) If configurations are correct, all schemas will sync across the cluster, then appear in the **manage clustering** panel. If there is a configuration issue, a red exclamation icon will appear; click it to learn more about what could be causing the issue.
+
+---
+
+## Disconnect an Instance
+
+HarperDB Instances can be disconnected with the following instructions.
+
+1) Identify the instance you would like to disconnect from the **connected instances** panel.
+
+2) Click the minus icon next to the appropriate instance.
+
+---
+
+## Manage Replication
+
+Subscriptions must be configured in order to move data between connected instances. Read more about subscriptions here: Creating A Subscription. The **manage clustering** panel displays a table with each row representing a channel per instance. Cells are bolded to indicate a change in the column. Publish and subscribe replication can be configured per table with the following instructions (an example of the equivalent API request is shown after these steps):
+
+1) Identify the instance, schema, and table for replication to be configured.
+
+2) For publish, click the toggle switch in the **publish** column.
+
+3) For subscribe, click the toggle switch in the **subscribe** column.
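+
+These toggles correspond to clustering subscriptions in the operations API. As a rough sketch, assuming a connected node named `node-b` and a `dev.dog` table, an equivalent `add_node` request body looks something like this:
+
+```json
+{
+  "operation": "add_node",
+  "node_name": "node-b",
+  "subscriptions": [
+    {
+      "schema": "dev",
+      "table": "dog",
+      "publish": true,
+      "subscribe": true
+    }
+  ]
+}
+```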
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-functions.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-functions.md
new file mode 100644
index 00000000..3a74d7e5
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-functions.md
@@ -0,0 +1,163 @@
+---
+title: Manage Functions
+---
+
+# Manage Functions
+
+HarperDB Custom Functions are enabled by default and can be configured further through the HarperDB Studio. It is recommended to read through the Custom Functions documentation first to gain a strong understanding of HarperDB Custom Functions behavior.
+
+
+
+All Custom Functions configuration is handled through the **functions** page of the HarperDB Studio, accessed with the following instructions:
+
+1) Navigate to the HarperDB Studio Organizations page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click **functions** in the instance control bar.
+
+*Note, the **functions** page will only be available to super users.*
+
+## Manage Projects
+
+On the **functions** page of the HarperDB Studio you are presented with a functions management screen with the following properties:
+
+* **projects**
+
+ Displays a list of Custom Functions projects residing on this instance.
+* **/project_name/routes**
+
+ Only displayed if there is an existing project. Displays the routes files contained within the selected project.
+* **/project_name/helpers**
+
+ Only displayed if there is an existing project. Displays the helper files contained within the selected project.
+* **/project_name/static**
+
+ Only displayed if there is an existing project. Displays the static file count and a link to the static files contained within the selected project. Note, static files cannot currently be deployed through the Studio and must be deployed via the [HarperDB API](https:/api.harperdb.io/) or manually to the server (not applicable with HarperDB Cloud).
+* **Root File Directory**
+
+ Displays the root file directory where the Custom Functions projects reside on this instance.
+* **Custom Functions Server URL**
+
+ Displays the base URL in which all Custom Functions are accessed for this instance.
+
+
+## Create a Project
+
+HarperDB Custom Functions Projects can be initialized with the following instructions.
+
+1) If this is your first project, skip this step. Click the plus icon next to the **projects** heading.
+
+2) Enter the project name in the text box located under the **projects** heading.
+
+3) Click the check mark icon next to the new project name.
+
+4) The Studio will take a few moments to provision a new project based on the [Custom Functions template](https://github.com/HarperDB/harperdb-custom-functions-template).
+
+5) The Custom Functions project is now created and ready to modify.
+
+## Modify a Project
+
+Custom Functions routes and helper functions can be modified directly through the Studio. From the **functions** page:
+
+1) Select the appropriate **project**.
+
+2) Select the appropriate **route** or **helper**.
+
+3) Modify the code with your desired changes.
+
+4) Click the save icon at the bottom right of the screen.
+
+ *Note, saving modifications will restart the Custom Functions server on your HarperDB instance and may result in up to 60 seconds of downtime for all Custom Functions.*
+
+## Create Additional Routes/Helpers
+
+To add an additional **route** to your Custom Functions project, from the **functions** page:
+
+1) Select the appropriate Custom Functions **project**.
+
+2) Click the plus icon to the right of the **routes** header.
+
+3) Enter the name of the new route in the textbox that appears.
+
+4) Click the check icon to create the new route.
+
+ *Note, adding a route will restart the Custom Functions server on your HarperDB instance and may result in up to 60 seconds of downtime for all Custom Functions.*
+
+To add an additional **helper** to your Custom Functions project, from the **functions** page:
+
+1) Select the appropriate Custom Functions **project**.
+
+2) Click the plus icon to the right of the **helpers** header.
+
+3) Enter the name of the new helper in the textbox that appears.
+
+4) Click the check icon to create the new helper.
+
+ *Note, adding a helper will restart the Custom Functions server on your HarperDB instance and may result in up to 60 seconds of downtime for all Custom Functions.*
+
+## Delete a Project/Route/Helper
+
+To delete a Custom Functions project from the **functions** page:
+
+1) Click the minus icon to the right of the **projects** header.
+
+2) Click the red minus icon to the right of the Custom Functions project you would like to delete.
+
+3) Confirm deletion by clicking the red check icon.
+
+ *Note, deleting a project will restart the Custom Functions server on your HarperDB instance and may result in up to 60 seconds of downtime for all Custom Functions.*
+
+To delete a Custom Functions _project route_ from the **functions** page:
+
+1) Select the appropriate Custom Functions **project**.
+
+2) Click the minus icon to the right of the **routes** header.
+
+3) Click the red minus icon to the right of the Custom Functions route you would like to delete.
+
+4) Confirm deletion by clicking the red check icon.
+
+ *Note, deleting a route will restart the Custom Functions server on your HarperDB instance and may result in up to 60 seconds of downtime for all Custom Functions.*
+
+To delete a Custom Functions _project helper_ from the **functions** page:
+
+1) Select the appropriate Custom Functions **project**.
+
+2) Click the minus icon to the right of the **helpers** header.
+
+3) Click the red minus icon to the right of the Custom Functions helper you would like to delete.
+
+4) Confirm deletion by clicking the red check icon.
+
+ *Note, deleting a header will restart the Custom Functions server on your HarperDB instance and may result in up to 60 seconds of downtime for all Custom Functions.*
+
+## Deploy Custom Functions Project to Other Instances
+
+The HarperDB Studio provides the ability to deploy Custom Functions projects to additional HarperDB instances within the same Studio Organization. To deploy Custom Functions projects to additional instances, starting from the **functions** page:
+
+1) Select the **project** you would like to deploy.
+
+2) Click the **deploy** button at the top right.
+
+3) A list of instances (excluding the current instance) within the organization will be displayed in tabular format with the following information:
+
+ * **Instance Name**: The name used to describe the instance.
+
+ * **Instance URL**: The URL used to access the instance.
+
+ * **CF Capable**: Describes if the instance version supports Custom Functions (yes/no).
+
+ * **CF Enabled**: Describes if Custom Functions are configured and enabled on the instance (yes/no).
+
+ * **Has Project**: Describes if the selected Custom Functions project has been previously deployed to the instance (yes/no).
+
+ * **Deploy**: Button used to deploy the project to the instance.
+
+ * **Remote**: Button used to remove the project from the instance. *Note, this will only be visible if the project has been previously deployed to the instance.*
+
+4) In the appropriate instance row, click the **deploy** button.
+
+ *Note, deploying a project will restart the Custom Functions server on the HarperDB instance receiving the deployment and may result in up to 60 seconds of downtime for all Custom Functions.*
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-instance-roles.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-instance-roles.md
new file mode 100644
index 00000000..e301e7d8
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-instance-roles.md
@@ -0,0 +1,76 @@
+---
+title: Manage Instance Roles
+---
+
+# Manage Instance Roles
+
+HarperDB users can be managed directly through the HarperDB Studio. It is recommended to read through the users & roles documentation to gain a strong understanding of how they operate.
+
+
+
+Instance role configuration is handled through the roles page of the HarperDB Studio, accessed with the following instructions:
+
+1) Navigate to the HarperDB Studio Organizations page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click **roles** in the instance control bar.
+
+*Note, the **roles** page will only be available to super users.*
+
+
+
+The *roles management* screen consists of the following panels:
+
+* **super users**
+
+ Displays all super user roles for this instance.
+* **cluster users**
+
+ Displays all cluster user roles for this instance.
+* **standard roles**
+
+ Displays all standard roles for this instance.
+* **role permission editing**
+
+ Once a role is selected for editing, permissions will be displayed here in JSON format.
+
+*Note, when new tables are added that are not configured, the Studio will generate configuration values with permissions defaulting to `false`.*
+
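+Role permissions are expressed in the same JSON format used by the operations API's `add_role` and `alter_role` operations. As a minimal sketch, assuming a hypothetical `developer` role and a `dev` schema with a `dog` table, a permission set might look like:
+
+```json
+{
+  "operation": "add_role",
+  "role": "developer",
+  "permission": {
+    "super_user": false,
+    "dev": {
+      "tables": {
+        "dog": {
+          "read": true,
+          "insert": true,
+          "update": true,
+          "delete": false,
+          "attribute_permissions": []
+        }
+      }
+    }
+  }
+}
+```
+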
+## Role Management
+
+#### Create a Role
+
+1) Click the plus icon at the top right of the appropriate role section.
+
+2) Enter the role name.
+
+3) Click the green check mark.
+
+4) Configure the role permissions in the role permission editing panel.
+
+ *Note, to have the Studio generate attribute permissions JSON, toggle **show all attributes** at the top right of the role permission editing panel.*
+
+5) Click **Update Role Permissions**.
+
+#### Modify a Role
+
+1) Click the appropriate role from the appropriate role section.
+
+2) Modify the role permissions in the role permission editing panel.
+
+ *Note, to have the Studio generate attribute permissions JSON, toggle **show all attributes** at the top right of the role permission editing panel.*
+
+3) Click **Update Role Permissions**.
+
+#### Delete a Role
+
+Deleting a role is permanent and irreversible. A role cannot be removed if users are associated with it.
+
+1) Click the minus icon at the top right of the appropriate role section.
+
+2) Identify the appropriate role to delete and click the red minus sign in the same row.
+
+3) Click the red check mark to confirm deletion.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-instance-users.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-instance-users.md
new file mode 100644
index 00000000..4871cf88
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-instance-users.md
@@ -0,0 +1,63 @@
+---
+title: Manage Instance Users
+---
+
+# Manage Instance Users
+
+HarperDB instance users can be managed directly through the HarperDB Studio. It is recommended to read through the users and roles documentation first to gain a strong understanding of how HarperDB users and roles operate.
+
+
+
+Instance user configuration is handled through the **users** page of the HarperDB Studio, accessed with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click **users** in the instance control bar.
+
+*Note, the **users** page will only be available to super users.*
+
+## Add a User
+
+HarperDB instance users can be added with the following instructions.
+
+1) In the **add user** panel on the left enter:
+
+ * New user username.
+
+ * New user password.
+
+ * Select a role.
+
+ *Learn more about role management here: [Manage Instance Roles](./manage-instance-roles).*
+
+2) Click **Add User**.
+
+## Edit a User
+
+HarperDB instance users can be modified with the following instructions.
+
+1) In the **existing users** panel, click the row of the user you would like to edit.
+
+2) To change a user’s password:
+
+ 1) In the **Change user password** section, enter the new password.
+
+ 2) Click **Update Password**.
+
+3) To change a user’s role:
+
+ 1) In the **Change user role** section, select the new role.
+
+ 2) Click **Update Role**.
+
+4) To delete a user:
+
+ 1) In the **Delete User** section, type the username into the textbox.
+
+ *This is done for confirmation purposes.*
+
+ 2) Click **Delete User**.
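+
+For reference, these Studio actions map to user management operations in the operations API. A minimal sketch of an `alter_user` request, assuming an existing user named `john` being given a new password and moved to a role named `developer`, looks like:
+
+```json
+{
+  "operation": "alter_user",
+  "username": "john",
+  "password": "n3wPassword!",
+  "role": "developer",
+  "active": true
+}
+```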
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-schemas-browse-data.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-schemas-browse-data.md
new file mode 100644
index 00000000..41493b96
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/manage-schemas-browse-data.md
@@ -0,0 +1,132 @@
+---
+title: Manage Schemas / Browse Data
+---
+
+# Manage Schemas / Browse Data
+
+Manage instance schemas/tables and browse data in tabular format with the following instructions:
+
+1) Navigate to the HarperDB Studio Organizations page.
+2) Click the appropriate organization that the instance belongs to.
+3) Select your desired instance.
+4) Click **browse** in the instance control bar.
+
+Once on the instance browse page you can view data, manage schemas and tables, add new data, and more.
+
+## Manage Schemas and Tables
+
+#### Create a Schema
+
+1) Click the plus icon at the top right of the schemas section.
+2) Enter the schema name.
+3) Click the green check mark.
+
+
+#### Delete a Schema
+
+Deleting a schema is permanent and irreversible. Deleting a schema removes all tables and data within it.
+
+1) Click the minus icon at the top right of the schemas section.
+2) Identify the appropriate schema to delete and click the red minus sign in the same row.
+3) Click the red check mark to confirm deletion.
+
+
+#### Create a Table
+
+1) Select the desired schema from the schemas section.
+2) Click the plus icon at the top right of the tables section.
+3) Enter the table name.
+4) Enter the primary key.
+
+ *The primary key is also often referred to as the hash attribute in the studio, and it defines the unique identifier for each row in your table.*
+5) Click the green check mark.
+
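+Creating a table through the Studio corresponds to the operations API's `create_table` operation. A minimal sketch, assuming a `dev` schema, a `dog` table, and a primary key of `id`, looks like:
+
+```json
+{
+  "operation": "create_table",
+  "schema": "dev",
+  "table": "dog",
+  "hash_attribute": "id"
+}
+```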
+
+#### Delete a Table
+Deleting a table is permanent and irreversible. Deleting a table removes all data within it.
+
+1) Select the desired schema from the schemas section.
+2) Click the minus icon at the top right of the tables section.
+3) Identify the appropriate table to delete and click the red minus sign in the same row.
+4) Click the red check mark to confirm deletion.
+
+## Manage Table Data
+
+The following section assumes you have selected the appropriate table from the schema/table browser.
+
+
+
+#### Filter Table Data
+
+1) Click the magnifying glass icon at the top right of the table browser.
+2) This expands the search filters; enter your desired filter criteria.
+3) The results will be filtered appropriately.
+
+
+#### Load CSV Data
+
+1) Click the data icon at the top right of the table browser. You will be directed to the CSV upload page where you can choose to import a CSV by URL or upload a CSV file.
+2) To import a CSV by URL:
+ 1) Enter the URL in the **CSV file URL** textbox.
+ 2) Click **Import From URL**.
+ 3) The CSV will load, and you will be redirected back to browse table data.
+3) To upload a CSV file:
+ 1) Click **Click or Drag to select a .csv file** (or drag your CSV file from your file browser).
+ 2) Navigate to your desired CSV file and select it.
+ 3) Click **Insert X Records**, where X is the number of records in your CSV.
+ 4) The CSV will load, and you will be redirected back to browse table data.
+
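+Importing a CSV by URL corresponds to the operations API's `csv_url_load` operation. A minimal sketch, assuming a `dev.dog` table and a hypothetical CSV hosted at `https://example.com/dogs.csv`, looks like:
+
+```json
+{
+  "operation": "csv_url_load",
+  "schema": "dev",
+  "table": "dog",
+  "csv_url": "https://example.com/dogs.csv"
+}
+```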
+
+#### Add a Record
+
+1) Click the plus icon at the top right of the table browser.
+2) The Studio will pre-populate existing table attributes in JSON format.
+
+ *The primary key is not included, but you can add it in and set it to your desired value. Auto-maintained fields are not included and cannot be manually set. You may enter a JSON array to insert multiple records in a single transaction.*
+3) Enter values to be added to the record.
+
+ *You may add new attributes to the JSON; they will be reflexively added to the table.*
+4) Click the **Add New** button.
+
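+Adding records this way corresponds to the operations API's `insert` operation. A minimal sketch that inserts two records in a single transaction, assuming a `dev.dog` table with hypothetical attributes, looks like:
+
+```json
+{
+  "operation": "insert",
+  "schema": "dev",
+  "table": "dog",
+  "records": [
+    { "id": 1, "dog_name": "Penny", "age": 7 },
+    { "id": 2, "dog_name": "Harper", "age": 5 }
+  ]
+}
+```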
+
+#### Edit a Record
+
+1) Click the record/row you would like to edit.
+2) Modify the desired values.
+
+ *You may add new attributes to the JSON; they will be reflexively added to the table.*
+
+3) Click the **save icon**.
+
+
+#### Delete a Record
+
+Deleting a record is permanent and irreversible. If transaction logging is turned on, the delete transaction will be recorded as well as the data that was deleted.
+
+1) Click the record/row you would like to delete.
+2) Click the **delete icon**.
+3) Confirm deletion by clicking the **check icon**.
+
+## Browse Table Data
+
+The following section assumes you have selected the appropriate table from the schema/table browser.
+
+#### Browse Table Data
+
+The first page of table data is automatically loaded on table selection. Paging controls are at the bottom of the table. Here you can:
+
+* Page left and right using the arrows.
+* Type in the desired page.
+* Change the page size (the number of records displayed in the table).
+
+
+#### Refresh Table Data
+
+Click the refresh icon at the top right of the table browser.
+
+
+
+#### Automatically Refresh Table Data
+
+Toggle the auto switch at the top right of the table browser. The table data will now automatically refresh every 15 seconds. Filters and pages will remain set for refreshed data.
+
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/organizations.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/organizations.md
new file mode 100644
index 00000000..f9d5cb50
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/organizations.md
@@ -0,0 +1,105 @@
+---
+title: Organizations
+---
+
+# Organizations
+HarperDB Studio organizations provide the ability to group HarperDB Cloud Instances. Organization behavior is as follows:
+
+* Billing occurs at the organization level to a single credit card.
+* Organizations retain their own unique HarperDB Cloud subdomain.
+* Cloud instances reside within an organization.
+* Studio users can be invited to organizations to share instances.
+
+
+An organization is automatically created for you when you sign up for HarperDB Studio. If you only have one organization, the Studio will automatically bring you to your organization’s page.
+
+---
+
+## List Organizations
+A summary view of all organizations your user belongs to can be viewed on the [HarperDB Studio Organizations](https://studio.harperdb.io/?redirect=/organizations) page. You can navigate to this page at any time by clicking the **all organizations** link at the top of the HarperDB Studio.
+
+## Create a New Organization
+A new organization can be created as follows:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/?redirect=/organizations) page.
+2) Click the **Create a New Organization** card.
+3) Fill out new organization details
+ * Enter Organization Name
+ *This is used for descriptive purposes only.*
+ * Enter Organization Subdomain
+ *Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: https:/c1-demo.harperdbcloud.com.*
+4) Click Create Organization.
+
+## Delete an Organization
+An organization cannot be deleted until all instances have been removed. An organization can be deleted as follows:
+
+1) Navigate to the HarperDB Studio Organizations page.
+2) Identify the proper organization card and click the trash can icon.
+3) Enter the organization name into the text box.
+
+ *This is done for confirmation purposes to ensure you do not accidentally delete an organization.*
+4) Click the **Do It** button.
+
+## Manage Users
+HarperDB Studio organization owners can manage users including inviting new users, removing users, and toggling ownership.
+
+
+
+#### Inviting a User
+A new user can be invited to an organization as follows:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/?redirect=/organizations) page.
+2) Click the appropriate organization card.
+3) Click **users** at the top of the screen.
+4) In the **add user** box, enter the new user’s email address.
+5) Click **Add User**.
+
+Users may or may not already be HarperDB Studio users when adding them to an organization. If the HarperDB Studio account already exists, the user will receive an email notification alerting them to the organization invitation. If the user does not have a HarperDB Studio account, they will receive an email welcoming them to HarperDB Studio.
+
+---
+
+#### Toggle a User’s Organization Owner Status
+Organization owners have full access to the organization, including the ability to manage organization users, create, modify, and delete instances, and delete the organization. Users must have accepted their invitation before they can be promoted to owner. A user’s organization owner status can be toggled as follows:
+
+1) Navigate to the HarperDB Studio Organizations page.
+2) Click the appropriate organization card.
+3) Click **users** at the top of the screen.
+4) Click the appropriate user from the **existing users** section.
+5) Toggle the **Is Owner** switch to the desired status.
+
+---
+
+#### Remove a User from an Organization
+Users may be removed from an organization at any time. Removing a user from an organization will not delete their HarperDB Studio account, it will only remove their access to the specified organization. A user can be removed from an organization as follows:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/?redirect=/organizations) page.
+2) Click the appropriate organization card.
+3) Click **users** at the top of the screen.
+4) Click the appropriate user from the **existing users** section.
+5) Type **DELETE** in the text box in the **Delete User** row.
+
+ *This is done for confirmation purposes to ensure you do not accidentally delete a user.*
+6) Click **Delete User**.
+
+## Manage Billing
+
+Billing is configured per organization, and charges are billed to the stored credit card at the appropriate interval (monthly or annually, depending on the registered instance). Billing settings can be configured as follows:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/?redirect=/organizations) page.
+2) Click the appropriate organization card.
+3) Click **billing** at the top of the screen.
+
+Here organization owners can view invoices, manage coupons, and manage the associated credit card.
+
+
+
+*HarperDB billing and payments are managed via Stripe.*
+
+
+
+### Add a Coupon
+
+Coupons are applicable towards any paid tier or user-installed instance and you can change your subscription at any time. Coupons can be added to your Organization as follows:
+
+1) In the coupons panel of the **billing** page, enter your coupon code.
+2) Click **Add Coupon**.
+3) The coupon will then be available and displayed in the coupons panel.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/administration/harperdb-studio/query-instance-data.md b/site/versioned_docs/version-4.2/administration/harperdb-studio/query-instance-data.md
new file mode 100644
index 00000000..5c3ae28f
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/harperdb-studio/query-instance-data.md
@@ -0,0 +1,53 @@
+---
+title: Query Instance Data
+---
+
+# Query Instance Data
+
+SQL queries can be executed directly through the HarperDB Studio with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+2) Click the appropriate organization that the instance belongs to.
+3) Select your desired instance.
+4) Click **query** in the instance control bar.
+5) Enter your SQL query in the SQL query window.
+6) Click **Execute**.
+
+*Please note, the Studio will execute the query exactly as entered. For example, if you attempt to `SELECT *` from a table with millions of rows, you will most likely crash your browser.*
+
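+For example, a safer exploratory query limits the result set. This is only a sketch; the schema, table, and column names below (`dev.dog`, `id`, `dog_name`, `age`) are placeholders, so substitute your own.
+
+```sql
+SELECT id, dog_name, age
+FROM dev.dog
+LIMIT 100
+```
+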
+## Browse Query Results Set
+
+#### Browse Results Set Data
+
+The first page of results set data is automatically loaded on query execution. Paging controls are at the bottom of the table. Here you can:
+
+* Page left and right using the arrows.
+* Type in the desired page.
+* Change the page size (the number of records displayed in the table).
+
+#### Refresh Results Set
+
+Click the refresh icon at the top right of the results set table.
+
+#### Automatically Refresh Results Set
+
+Toggle the auto switch at the top right of the results set table. The results set will now automatically refresh every 15 seconds. Filters and pages will remain set for refreshed data.
+
+## Query History
+
+Query history is stored in your local browser cache. Executed queries are listed with the most recent at the top in the **query history** section.
+
+
+#### Rerun Previous Query
+
+* Identify the query from the **query history** list.
+* Click the appropriate query. It will be loaded into the **sql query** input box.
+* Click **Execute**.
+
+#### Clear Query History
+
+Click the trash can icon at the top right of the **query history** section.
+
+## Create Charts
+
+The HarperDB Studio includes a charting feature where you can build charts based on your specified queries. Visit the Charts documentation for more information.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/administration/jobs.md b/site/versioned_docs/version-4.2/administration/jobs.md
new file mode 100644
index 00000000..e7eccad2
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/jobs.md
@@ -0,0 +1,112 @@
+---
+title: Jobs
+---
+
+# Jobs
+
+HarperDB Jobs are asynchronous tasks performed by the Operations API.
+
+## Job Summary
+
+Jobs use an asynchronous methodology to account for potentially long-running operations. For example, exporting millions of records to S3 could take some time, so that job is started in the background and a job ID is provided to check on its status.
+
+The job status can be **COMPLETE** or **IN\_PROGRESS**.
+
+## Example Job Operations
+
+Example job operations include:
+
+[csv data load](https://api.harperdb.io/#0186bc25-b9ae-44e7-bd9e-8edc0f289aa2)
+
+[csv file load](https://api.harperdb.io/#c4b71011-8a1d-4cb2-8678-31c0363fea5e)
+
+[csv url load](https://api.harperdb.io/#d1e9f433-e250-49db-b44d-9ce2dcd92d32)
+
+[import from s3](https://api.harperdb.io/#820b3947-acbe-41f9-858b-2413cabc3a18)
+
+[delete\_records\_before](https://api.harperdb.io/#8de87e47-73a8-4298-b858-ca75dc5765c2)
+
+[export\_local](https://api.harperdb.io/#49a02517-ada9-4198-b48d-8707db905be0)
+
+[export\_to\_s3](https://api.harperdb.io/#f6393e9f-e272-4180-a42c-ff029d93ddd4)
+
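+For illustration, a request that starts one of these jobs might look like the following `csv_url_load` operation. This is only a sketch; the schema, table, and URL values are placeholders, and the exact parameters for each operation are described in the linked API documentation.
+
+```json
+{
+  "operation": "csv_url_load",
+  "schema": "dev",
+  "table": "dog",
+  "csv_url": "https://example.com/dogs.csv"
+}
+```
+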
+Example Response from a Job Operation
+
+```
+{
+ "message": "Starting job with id 062a1892-6a0a-4282-9791-0f4c93b12e16"
+}
+```
+
+Whenever one of these operations is initiated, an asynchronous job is created and the response contains the ID of that job, which can be used to check on its status.
+
+## Managing Jobs
+
+To check on a job's status, use the [get\_job](https://api.harperdb.io/#d501bef7-dbb7-4714-b535-e466f6583dce) operation.
+
+Get Job Request
+
+```
+{
+ "operation": "get_job",
+ "id": "4a982782-929a-4507-8794-26dae1132def"
+}
+```
+
+Get Job Response
+
+```
+[
+ {
+ "__createdtime__": 1611615798782,
+ "__updatedtime__": 1611615801207,
+ "created_datetime": 1611615798774,
+ "end_datetime": 1611615801206,
+ "id": "4a982782-929a-4507-8794-26dae1132def",
+ "job_body": null,
+ "message": "successfully loaded 350 of 350 records",
+ "start_datetime": 1611615798805,
+ "status": "COMPLETE",
+ "type": "csv_url_load",
+ "user": "HDB_ADMIN",
+ "start_datetime_converted": "2021-01-25T23:03:18.805Z",
+ "end_datetime_converted": "2021-01-25T23:03:21.206Z"
+ }
+]
+```
+
+## Finding Jobs
+
+To find jobs (if the ID is not known) use the [search\_jobs\_by\_start\_date](https://api.harperdb.io/#4474ca16-e4c2-4740-81b5-14ed98c5eeab) operation.
+
+Search Jobs Request
+
+```
+{
+ "operation": "search_jobs_by_start_date",
+ "from_date": "2021-01-25T22:05:27.464+0000",
+ "to_date": "2021-01-25T23:05:27.464+0000"
+}
+```
+
+Search Jobs Response
+
+```
+[
+ {
+ "id": "942dd5cb-2368-48a5-8a10-8770ff7eb1f1",
+ "user": "HDB_ADMIN",
+ "type": "csv_url_load",
+ "status": "COMPLETE",
+ "start_datetime": 1611613284781,
+ "end_datetime": 1611613287204,
+ "job_body": null,
+ "message": "successfully loaded 350 of 350 records",
+ "created_datetime": 1611613284764,
+ "__createdtime__": 1611613284767,
+ "__updatedtime__": 1611613287207,
+ "start_datetime_converted": "2021-01-25T22:21:24.781Z",
+ "end_datetime_converted": "2021-01-25T22:21:27.204Z"
+ }
+]
+```
diff --git a/site/versioned_docs/version-4.2/administration/logging/audit-logging.md b/site/versioned_docs/version-4.2/administration/logging/audit-logging.md
new file mode 100644
index 00000000..5871586b
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/logging/audit-logging.md
@@ -0,0 +1,135 @@
+---
+title: Audit Logging
+---
+
+# Audit Logging
+
+### Audit log
+
+The audit log uses a standard HarperDB table to track transactions. For each table a user creates, a corresponding table will be created to track transactions against that table.
+
+The audit log is enabled by default. To disable it, set `logging.auditLog` to `false` in the config file, `harperdb-config.yaml`, then restart HarperDB for the change to take effect. Note that the audit log must be enabled for real-time messaging.
+
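+For example, the relevant section of `harperdb-config.yaml` would look like this sketch:
+
+```yaml
+logging:
+  auditLog: false
+```
+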
+### Audit Log Operations
+
+#### read\_audit\_log
+
+The `read_audit_log` operation is flexible, enabling users to query with many parameters. All operations search on a single table. Filter options include timestamps, usernames, and table hash values. Additional examples can be found in the [HarperDB API documentation](../../developers/operations-api/logs).
+
+**Search by Timestamp**
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog",
+ "search_type": "timestamp",
+ "search_values": [
+ 1660585740558
+ ]
+}
+```
+
+There are three possible outcomes when searching by timestamp, depending on the values provided:
+
+* `"search_values": []` - All records returned for the specified table
+* `"search_values": [1660585740558]` - All records after the provided timestamp
+* `"search_values": [1660585740558, 1760585759710]` - All records between the provided "from" and "to" timestamps
+
+***
+
+**Search by Username**
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog",
+ "search_type": "username",
+ "search_values": [
+ "admin"
+ ]
+}
+```
+
+The above example will return all records whose `username` is "admin."
+
+***
+
+**Search by Primary Key**
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog",
+ "search_type": "hash_value",
+ "search_values": [
+ 318
+ ]
+}
+```
+
+The above example will return all records whose primary key (`hash_value`) is 318.
+
+***
+
+#### read\_audit\_log Response
+
+The example that follows shows the records of operations performed on a table. Note that the `read_audit_log` response also includes `original_records`, the records as they existed before the operation.
+
+```json
+{
+ "operation": "update",
+ "user_name": "HDB_ADMIN",
+ "timestamp": 1607035559122.277,
+ "hash_values": [
+ 1,
+ 2
+ ],
+ "records": [
+ {
+ "id": 1,
+ "breed": "Muttzilla",
+ "age": 6,
+ "__updatedtime__": 1607035559122
+ },
+ {
+ "id": 2,
+ "age": 7,
+ "__updatedtime__": 1607035559121
+ }
+ ],
+ "original_records": [
+ {
+ "__createdtime__": 1607035556801,
+ "__updatedtime__": 1607035556801,
+ "age": 5,
+ "breed": "Mutt",
+ "id": 2,
+ "name": "Penny"
+ },
+ {
+ "__createdtime__": 1607035556801,
+ "__updatedtime__": 1607035556801,
+ "age": 5,
+ "breed": "Mutt",
+ "id": 1,
+ "name": "Harper"
+ }
+ ]
+}
+```
+
+#### delete\_audit\_logs\_before
+
+Just like with transaction logs, you can clean up your audit logs with the `delete_audit_logs_before` operation. It will delete audit log data according to the given parameters. The example below will delete records older than the timestamp provided.
+
+```json
+{
+ "operation": "delete_audit_logs_before",
+ "schema": "dev",
+ "table": "cat",
+ "timestamp": 1598290282817
+}
+```
diff --git a/site/versioned_docs/version-4.2/administration/logging/index.md b/site/versioned_docs/version-4.2/administration/logging/index.md
new file mode 100644
index 00000000..2ed92774
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/logging/index.md
@@ -0,0 +1,11 @@
+---
+title: Logging
+---
+
+# Logging
+
+HarperDB provides many different logging options for various features and functionality.
+
+* [Standard Logging](./standard-logging): HarperDB maintains a log of events that take place throughout operation.
+* [Audit Logging](./audit-logging): HarperDB uses a standard HarperDB table to track transactions. For each table a user creates, a corresponding table will be created to track transactions against that table.
+* [Transaction Logging](./transaction-logging): HarperDB stores a verbose history of all transactions logged for specified database tables, including original data records.
diff --git a/site/versioned_docs/version-4.2/administration/logging/standard-logging.md b/site/versioned_docs/version-4.2/administration/logging/standard-logging.md
new file mode 100644
index 00000000..d586da1c
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/logging/standard-logging.md
@@ -0,0 +1,65 @@
+---
+title: Standard Logging
+---
+
+# Standard Logging
+
+HarperDB maintains a log of events that take place throughout operation. Log messages can be used for diagnostics purposes as well as monitoring.
+
+All logs (except for the install log) are stored in the main log file in the hdb directory, `~/hdb/log/hdb.log`. The install log is located in the HarperDB application directory, most likely in your npm directory: `npm/harperdb/logs`.
+
+Each log message has several key components for consistent reporting of events. A log message has a format of:
+
+```
+<timestamp> [<level>] [<thread name>/<thread id>] ...[<tags>]: <message>
+```
+
+For example, a typical log entry looks like:
+
+```
+2023-03-09T14:25:05.269Z [notify] [main/0]: HarperDB successfully started.
+```
+
+The components of a log entry are:
+
+* timestamp - This is the date/time stamp when the event occurred
+* level - This is an associated log level that gives a rough guide to the importance and urgency of the message. The available log levels, in order from least urgent (and most verbose), are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`.
+* thread/ID - This reports the name of the thread and the thread ID that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process. Key threads are:
+ * main - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads
+ * http - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions.
+ * Clustering\* - These are threads and processes that handle replication.
+ * job - These are job threads that have been started to handle operations that are executed in a separate job thread.
+* tags - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags.
+* message - This is the main message that was reported.
+
+We try to keep logging to a minimum by default; to do this, the default log level is `error`. If you require more information from the logs, setting the log level to a more verbose level (such as `info` or `debug`) will provide that.
+
+The log level can be changed by modifying `logging.level` in the config file `harperdb-config.yaml`.
+
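+For example, the relevant section of `harperdb-config.yaml` might look like:
+
+```yaml
+logging:
+  level: debug
+```
+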
+## Clustering Logging
+
+HarperDB clustering utilizes two [NATS](https://nats.io/) servers, named Hub and Leaf. The Hub server is responsible for establishing the mesh network that connects instances of HarperDB, and the Leaf server is responsible for managing the message stores (streams) that replicate and store messages between instances. Due to the verbosity of these servers there is a separate log level configuration for them. To adjust their log verbosity, set `clustering.logLevel` in the config file `harperdb-config.yaml`. Valid log levels, from least verbose, are `error`, `warn`, `info`, `debug` and `trace`.
+
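+For example:
+
+```yaml
+clustering:
+  logLevel: info
+```
+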
+## Log File vs Standard Streams
+
+HarperDB logs can optionally be streamed to standard streams. Logging to standard streams (stdout/stderr) is primarily used for container logging drivers. For more traditional installations, we recommend logging to a file. Logging to both standard streams and to a file can be enabled simultaneously. To log to standard streams effectively, make sure to run `harperdb` directly (rather than starting it as a separate process with `harperdb start`), and `logging.stdStreams` must be set to `true`. Note, logging to standard streams only (without file logging) will disable clustering catchup.
+
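+A minimal sketch of the relevant configuration, assuming you only want standard stream logging (file logging may also remain enabled alongside it):
+
+```yaml
+logging:
+  stdStreams: true
+  file: false
+```
+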
+## Logging Rotation
+
+Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space. For more information see “logging” in our [config docs](../../deployments/configuration).
+
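+As a sketch, rotation is configured under `logging.rotation` in `harperdb-config.yaml`:
+
+```yaml
+logging:
+  rotation:
+    enabled: true
+    compress: false
+    interval: 1D
+    maxSize: 100K
+```
+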
+## Read Logs via the API
+
+To access specific logs you may query the HarperDB API. Logs can be queried using the `read_log` operation. `read_log` returns outputs from the log based on the provided search criteria.
+
+```json
+{
+ "operation": "read_log",
+ "start": 0,
+ "limit": 1000,
+ "level": "error",
+ "from": "2021-01-25T22:05:27.464+0000",
+ "until": "2021-01-25T23:05:27.464+0000",
+ "order": "desc"
+}
+```
diff --git a/site/versioned_docs/version-4.2/administration/logging/transaction-logging.md b/site/versioned_docs/version-4.2/administration/logging/transaction-logging.md
new file mode 100644
index 00000000..a65c4714
--- /dev/null
+++ b/site/versioned_docs/version-4.2/administration/logging/transaction-logging.md
@@ -0,0 +1,87 @@
+---
+title: Transaction Logging
+---
+
+# Transaction Logging
+
+HarperDB offers two options for logging transactions executed against a table. The options are similar but utilize different storage layers.
+
+## Transaction log
+
+The first option is `read_transaction_log`. The transaction log is built upon clustering streams. Clustering streams are per-table message stores that enable data to be propagated across a cluster. HarperDB leverages streams for use with the transaction log. When clustering is enabled all transactions that occur against a table are pushed to its stream, and thus make up the transaction log.
+
+If you would like to use the transaction log, but have not set up clustering yet, please see ["How to Cluster"](../../developers/clustering/).
+
+## Transaction Log Operations
+
+### read\_transaction\_log
+
+The `read_transaction_log` operation returns a prescribed set of records, based on given parameters. The example below will give a maximum of 2 records within the timestamps provided.
+
+```json
+{
+ "operation": "read_transaction_log",
+ "schema": "dev",
+ "table": "dog",
+ "from": 1598290235769,
+ "to": 1660249020865,
+ "limit": 2
+}
+```
+
+_See example response below._
+
+### read\_transaction\_log Response
+
+```json
+[
+ {
+ "operation": "insert",
+ "user": "admin",
+ "timestamp": 1660165619736,
+ "records": [
+ {
+ "id": 1,
+ "dog_name": "Penny",
+ "owner_name": "Kyle",
+ "breed_id": 154,
+ "age": 7,
+ "weight_lbs": 38,
+ "__updatedtime__": 1660165619688,
+ "__createdtime__": 1660165619688
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user": "admin",
+ "timestamp": 1660165620040,
+ "records": [
+ {
+ "id": 1,
+ "dog_name": "Penny B",
+ "__updatedtime__": 1660165620036
+ }
+ ]
+ }
+]
+```
+
+_See example request above._
+
+### delete\_transaction\_logs\_before
+
+The `delete_transaction_logs_before` operation will delete transaction log data according to the given parameters. The example below will delete records older than the timestamp provided.
+
+```json
+{
+ "operation": "delete_transaction_logs_before",
+ "schema": "dev",
+ "table": "dog",
+ "timestamp": 1598290282817
+}
+```
+
+_Note: Streams are used for catchup if a node goes down. If you delete messages from a stream there is a chance catchup won't work._
+
+Read on for `read_audit_log`, the second option for logging transactions executed against a table.
diff --git a/site/versioned_docs/version-4.2/deployments/_category_.json b/site/versioned_docs/version-4.2/deployments/_category_.json
new file mode 100644
index 00000000..8fdd6e17
--- /dev/null
+++ b/site/versioned_docs/version-4.2/deployments/_category_.json
@@ -0,0 +1,12 @@
+{
+ "label": "Deployments",
+ "position": 3,
+ "link": {
+ "type": "generated-index",
+ "title": "Deployments Documentation",
+ "description": "Installation and deployment guides for HarperDB",
+ "keywords": [
+ "deployments"
+ ]
+ }
+}
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/deployments/configuration.md b/site/versioned_docs/version-4.2/deployments/configuration.md
new file mode 100644
index 00000000..c10ba2a8
--- /dev/null
+++ b/site/versioned_docs/version-4.2/deployments/configuration.md
@@ -0,0 +1,746 @@
+---
+title: Configuration File
+---
+
+# Configuration File
+
+HarperDB is configured through a [YAML](https://yaml.org/) file called `harperdb-config.yaml` located in the operations API root directory (by default this is a directory named `hdb` located in the home directory of the current user).
+
+All available configuration will be populated by default in the config file on install, regardless of whether it is used.
+
+***
+
+## Using the Configuration File and Naming Conventions
+
+The configuration elements in `harperdb-config.yaml` use camel case, for example: `operationsApi`.
+
+To change a configuration value edit the `harperdb-config.yaml` file and save any changes. HarperDB must be restarted for changes to take effect.
+
+Alternately, configuration can be changed via environment and/or command line variables or via the API. To access lower level elements, use underscores to append parent/child elements (when used this way elements are case insensitive):
+
+- Environment variables: `OPERATIONSAPI_NETWORK_PORT=9925`
+- Command line variables: `--OPERATIONSAPI_NETWORK_PORT 9925`
+- Calling `set_configuration` through the API: `operationsApi_network_port: 9925`
+
+_Note: Component configuration cannot be added or updated via CLI or ENV variables._
+
+## Importing installation configuration
+
+To use a custom configuration file to set values on install, use the CLI/ENV variable `HDB_CONFIG` and set it to the path of your custom configuration file.
+
+To install HarperDB on top of an existing configuration file, set `HDB_CONFIG` to the path of the `harperdb-config.yaml` file in the root path of your install.
+
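+For example, a sketch using an environment variable (the path shown is a placeholder):
+
+```bash
+HDB_CONFIG=/path/to/custom-harperdb-config.yaml harperdb install
+```
+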
+***
+
+## Configuration Options
+
+### `http`
+
+`sessionAffinity` - _Type_: string; _Default_: null
+
+HarperDB is a multi-threaded server designed to scale to utilize many CPU cores with high concurrency. Session affinity can help improve the efficiency and fairness of thread utilization by routing multiple requests from the same client to the same thread. This provides a fairer method of request handling by keeping a single user contained to a single thread, can improve caching locality (multiple requests from a single user are more likely to access the same data), and can provide the ability to share information in-memory in user sessions. Enabling session affinity will cause subsequent requests from the same client to be routed to the same thread.
+
+To enable `sessionAffinity`, you need to specify how clients will be identified from the incoming requests. If you are using HarperDB to directly serve HTTP requests from users at different remote addresses, you can use a setting of `ip`. However, if you are using HarperDB behind a proxy server or application server, all the remote IP addresses will be the same and HarperDB will effectively only run on a single thread. Alternately, you can specify a header to use for identification. If you are using basic authentication, you could use the "Authorization" header to route requests to threads by the user's credentials. If you have another header that uniquely identifies users/clients, you can use that as the value of `sessionAffinity`. But be careful to ensure that the value provides sufficient uniqueness so that requests are effectively distributed to all the threads, fully utilizing all your CPU cores.
+
+```yaml
+http:
+ sessionAffinity: ip
+```
+
+`compressionThreshold` - _Type_: number; _Default_: 1200 (bytes)
+
+For HTTP clients that support (Brotli) compression encoding, responses that are larger than this threshold will be compressed (also note that for clients that accept compression, any streaming responses from queries are compressed as well, since the size is not known beforehand).
+
+```yaml
+http:
+ compressionThreshold: 1200
+```
+
+`cors` - _Type_: boolean; _Default_: true
+
+Enable Cross Origin Resource Sharing, which allows requests across a domain.
+
+`corsAccessList` - _Type_: array; _Default_: null
+
+An array of allowable domains with CORS
+
+`headersTimeout` - _Type_: integer; _Default_: 60,000 milliseconds (1 minute)
+
+Limits the amount of time the parser will wait to receive the complete HTTP headers.
+
+`keepAliveTimeout` - _Type_: integer; _Default_: 30,000 milliseconds (30 seconds)
+
+Sets the number of milliseconds of inactivity the server needs to wait for additional incoming data after it has finished processing the last response.
+
+`port` - _Type_: integer; _Default_: 9926
+
+The port used to access the component server.
+
+`securePort` - _Type_: integer; _Default_: null
+
+The port the HarperDB component server uses for HTTPS connections. This requires a valid certificate and key.
+
+`timeout` - _Type_: integer; _Default_: 120,000 milliseconds (2 minutes)
+
+The length of time in milliseconds after which a request will timeout.
+
+```yaml
+http:
+ cors: true
+ corsAccessList:
+ - null
+ headersTimeout: 60000
+ https: false
+ keepAliveTimeout: 30000
+ port: 9926
+ securePort: null
+ timeout: 120000
+```
+
+***
+
+### `threads`
+
+`threads` - _Type_: number; _Default_: one less than the number of logical cores/processors
+
+The `threads` option specifies the number of threads that will be used to service the HTTP requests for the operations API and custom functions. Generally, this should be close to the number of CPU logical cores/processors to ensure the CPU is fully utilized (a little less because HarperDB does have other threads at work), assuming HarperDB is the main service on a server.
+
+```yaml
+threads: 11
+```
+
+***
+
+### `clustering`
+
+The `clustering` section configures the clustering engine, which is used to replicate data between instances of HarperDB.
+
+Clustering offers a lot of different configuration options; however, in the majority of cases the only options you will need to pay attention to are:
+
+* `clustering.enabled` Enable the clustering processes.
+* `clustering.hubServer.cluster.network.port` The port other nodes will connect to. This port must be accessible from other cluster nodes.
+* `clustering.hubServer.cluster.network.routes` The connections to other instances.
+* `clustering.nodeName` The name of your node, which must be unique within the cluster.
+* `clustering.user` The name of the user credentials used for inter-node authentication.
+
+`enabled` - _Type_: boolean; _Default_: false
+
+Enable clustering.
+
+_Note: If you enable clustering but do not create and add a cluster user you will get a validation error. See the `user` description below on how to add a cluster user._
+
+```yaml
+clustering:
+ enabled: true
+```
+
+`clustering.hubServer.cluster`
+
+Clustering’s `hubServer` facilitates the HarperDB mesh network and discovery service.
+
+```yaml
+clustering:
+ hubServer:
+ cluster:
+ name: harperdb
+ network:
+ port: 9932
+ routes:
+ - host: 3.62.184.22
+ port: 9932
+ - host: 3.735.184.8
+ port: 9932
+```
+
+`name` - _Type_: string, _Default_: harperdb
+
+The name of your cluster. This name needs to be consistent for all other nodes intended to be meshed in the same network.
+
+`port` - _Type_: integer, _Default_: 9932
+
+The port the hub server uses to accept cluster connections
+
+`routes` - _Type_: array, _Default_: null
+
+An object array that represents the hosts and ports this server will cluster to. Each object must have two properties, `port` and `host`. Multiple entries can be added to create network resiliency in the event one server is unavailable. Routes can be added, updated and removed either by directly editing the `harperdb-config.yaml` file or by using the `cluster_set_routes` or `cluster_delete_routes` API endpoints.
+
+`host` - _Type_: string
+
+The host of the remote instance you are creating the connection with.
+
+`port` - _Type_: integer
+
+The port of the remote instance you are creating the connection with. This is likely going to be the `clustering.hubServer.cluster.network.port` on the remote instance.
+
+`clustering.hubServer.leafNodes`
+
+```yaml
+clustering:
+ hubServer:
+ leafNodes:
+ network:
+ port: 9931
+```
+
+`port` - _Type_: integer; _Default_: 9931
+
+The port the hub server uses to accept leaf server connections.
+
+`clustering.hubServer.network`
+
+```yaml
+clustering:
+ hubServer:
+ network:
+ port: 9930
+```
+
+`port` - _Type_: integer; _Default_: 9930
+
+Use this port to connect a client to the hub server, for example using the NATs SDK to interact with the server.
+
+`clustering.leafServer`
+
+Manages streams; streams are ‘message stores’ that store table transactions.
+
+```yaml
+clustering:
+ leafServer:
+ network:
+ port: 9940
+ routes:
+ - host: 3.62.184.22
+ port: 9931
+ - host: node3.example.com
+ port: 9931
+ streams:
+ maxAge: 3600
+ maxBytes: 10000000
+ maxMsgs: 500
+ path: /user/hdb/clustering/leaf
+```
+
+`port` - _Type_: integer; _Default_: 9940
+
+Use this port to connect a client to the leaf server, for example using the NATs SDK to interact with the server.
+
+`routes` - _Type_: array; _Default_: null
+
+An object array that represents the hosts and ports the leaf node will directly connect with. Each object must have two properties, `port` and `host`. Unlike the hub server, the leaf server will establish connections to all listed hosts. Routes can be added, updated and removed either by directly editing the `harperdb-config.yaml` file or by using the `cluster_set_routes` or `cluster_delete_routes` API endpoints.
+
+`host` - _Type_: string
+
+The host of the remote instance you are creating the connection with.
+
+`port` - _Type_: integer
+
+The port of the remote instance you are creating the connection with. This is likely going to be the `clustering.hubServer.cluster.network.port` on the remote instance.
+
+`clustering.leafServer.streams`
+
+`maxAge` - _Type_: integer; _Default_: null
+
+The maximum age of any messages in the stream, expressed in seconds.
+
+`maxBytes` - _Type_: integer; _Default_: null
+
+The maximum size of the stream in bytes. Oldest messages are removed if the stream exceeds this size.
+
+`maxMsgs` - _Type_: integer; _Default_: null
+
+How many messages may be in a stream. Oldest messages are removed if the stream exceeds this number.
+
+`path` - _Type_: string; _Default_: <ROOTPATH>/clustering/leaf
+
+The directory where all the streams are kept.
+
+***
+
+`logLevel` - _Type_: string; _Default_: error
+
+Control the verbosity of clustering logs.
+
+```yaml
+clustering:
+ logLevel: error
+```
+
+There is a log level hierarchy, in order: `trace`, `debug`, `info`, `warn`, and `error`. When the level is set to `trace`, logs will be created for all possible levels, whereas if the level is set to `warn`, the only entries logged will be `warn` and `error`. The default value is `error`.
+
+`nodeName` - _Type_: string; _Default_: null
+
+The name of this node in your HarperDB cluster topology. This must be a value unique from the rest of the cluster node names.
+
+_Note: If you want to change the node name make sure there are no subscriptions in place before doing so. After the name has been changed a full restart is required._
+
+```yaml
+clustering:
+ nodeName: great_node
+```
+
+`tls`
+
+Transport Layer Security default values are automatically generated on install.
+
+```yaml
+clustering:
+ tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+ insecure: true
+ verify: true
+```
+
+`certificate` - _Type_: string; _Default_: <ROOTPATH>/keys/certificate.pem
+
+Path to the certificate file.
+
+`certificateAuthority` - _Type_: string; _Default_: <ROOTPATH>/keys/ca.pem
+
+Path to the certificate authority file.
+
+`privateKey` - _Type_: string; _Default_: <ROOTPATH>/keys/privateKey.pem
+
+Path to the private key file.
+
+`insecure` - _Type_: boolean; _Default_: true
+
+When true, will skip certificate verification. For use only with self-signed certs.
+
+`republishMessages` - _Type_: boolean; _Default_: false
+
+When true, all transactions that are received from other nodes are republished to this node's stream. When subscriptions are not fully connected between all nodes, this ensures that messages are routed to all nodes through intermediate nodes. This also ensures that all writes, whether local or remote, are written to the NATS transaction log. However, there is additional overhead with republishing, and setting this is to false can provide better data replication performance. When false, you need to ensure all subscriptions are fully connected between every node to every other node, and be aware that the NATS transaction log will only consist of local writes.
+
+`verify` - _Type_: boolean; _Default_: true
+
+When true, hub server will verify client certificate using the CA certificate.
+
+***
+
+`user` - _Type_: string; _Default_: null
+
+The username given to the `cluster_user`. All instances in a cluster must use the same clustering user credentials (matching username and password).
+
+Inter-node authentication takes place via a special HarperDB user role type called `cluster_user`.
+
+The user can be created either through the API using an `add_user` request with the role set to `cluster_user`, or on install using environment variables `CLUSTERING_USER=cluster_person CLUSTERING_PASSWORD=pass123!` or CLI variables `harperdb --CLUSTERING_USER cluster_person --CLUSTERING_PASSWORD pass123!`.
+
+```yaml
+clustering:
+ user: cluster_person
+```
+
+***
+
+### `localStudio`
+
+The `localStudio` section configures the local HarperDB Studio, a simplified GUI for HarperDB hosted on the server. A more comprehensive GUI is hosted by HarperDB at https://studio.harperdb.io. Note, all database traffic from either `localStudio` or HarperDB Studio is made directly from your browser to the instance.
+
+`enabled` - _Type_: boolean; _Default_: false
+
+Determines whether the local studio is enabled.
+
+```yaml
+localStudio:
+ enabled: false
+```
+
+***
+
+### `logging`
+
+The `logging` section configures HarperDB logging across all HarperDB functionality. This includes standard text logging of application and database events as well as structured data logs of record changes. Application and database events are logged in text format to the `~/hdb/log/hdb.log` file (or the location specified by `logging.root`).
+
+In addition, structured logging of data changes is also available:
+
+`auditLog` - _Type_: boolean; _Default_: false
+
+Enables the audit log of table transactions.
+
+```yaml
+logging:
+ auditLog: false
+```
+
+To access the audit logs, use the API operation `read_audit_log`. It will provide a history of the data, including original records and changes made, in a specified table.
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog"
+}
+```
+
+`file` - _Type_: boolean; _Default_: true
+
+Defines whether or not to log to a file.
+
+```yaml
+logging:
+ file: true
+```
+
+`auditRetention` - _Type_: string|number; _Default_: 3d
+
+This specifies how long audit logs should be retained.
+
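+For example:
+
+```yaml
+logging:
+  auditRetention: 3d
+```
+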
+`level` - _Type_: string; _Default_: error
+
+Control the verbosity of text event logs.
+
+```yaml
+logging:
+ level: error
+```
+
+There is a log level hierarchy, in order: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`. When the level is set to `trace`, logs will be created for all possible levels, whereas if the level is set to `fatal`, the only entries logged will be `fatal` and `notify`. The default value is `error`.
+
+`root` - _Type_: string; _Default_: <ROOTPATH>/log
+
+The path where the log files will be written.
+
+```yaml
+logging:
+ root: ~/hdb/log
+```
+
+`rotation`
+
+Rotation provides the ability for a user to systematically rotate and archive the `hdb.log` file. To enable rotation, `interval` and/or `maxSize` must be set.
+
+_**Note:**_ `interval` and `maxSize` are approximates only. It is possible that the log file will exceed these values slightly before it is rotated.
+
+```yaml
+logging:
+ rotation:
+ enabled: true
+ compress: false
+ interval: 1D
+ maxSize: 100K
+ path: /user/hdb/log
+```
+
+`enabled` - _Type_: boolean; _Default_: false
+
+Enables logging rotation.
+
+`compress` - _Type_: boolean; _Default_: false
+
+Enables compression via gzip when logs are rotated.
+
+`interval` - _Type_: string; _Default_: null
+
+The time that should elapse between rotations. Acceptable units are D(ays), H(ours) or M(inutes).
+
+`maxSize` - _Type_: string; _Default_: null
+
+The maximum size the log file can reach before it is rotated. Must use units M(egabyte), G(igabyte), or K(ilobyte).
+
+`path` - _Type_: string; _Default_: <ROOTPATH>/log
+
+Where to store the rotated log file. File naming convention is `HDB-YYYY-MM-DDT-HH-MM-SSSZ.log`.
+
+`stdStreams` - _Type_: boolean; _Default_: false
+
+Log HarperDB logs to the standard output and error streams.
+
+```yaml
+logging:
+ stdStreams: false
+```
+
+***
+
+### `authentication`
+
+The authentication section defines the configuration for the default authentication mechanism in HarperDB.
+
+```yaml
+authentication:
+ authorizeLocal: true
+ cacheTTL: 30000
+ enableSessions: true
+ operationTokenTimeout: 1d
+ refreshTokenTimeout: 30d
+```
+
+`authorizeLocal` - _Type_: boolean; _Default_: true
+
+This will automatically authorize any requests from the loopback IP address as the superuser. This should be disabled for any HarperDB servers that may be accessed by untrusted users from the same instance. For example, this should be disabled if you are using a local proxy, or for general server hardening.
+
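+For server hardening, a sketch of the relevant configuration:
+
+```yaml
+authentication:
+  authorizeLocal: false
+```
+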
+`cacheTTL` - _Type_: number; _Default_: 30000
+
+This defines the length of time (in milliseconds) that an authentication (a particular Authorization header or token) can be cached.
+
+`enableSessions` - _Type_: boolean; _Default_: true
+
+This will enable cookie-based sessions to maintain an authenticated session. This is generally the preferred mechanism for maintaining authentication in web browsers as it allows cookies to hold an authentication token securely without giving JavaScript code access to token/credentials that may open up XSS vulnerabilities.
+
+`operationTokenTimeout` - _Type_: string; _Default_: 1d
+
+Defines the length of time an operation token will be valid until it expires. Example values: https://github.com/vercel/ms.
+
+`refreshTokenTimeout` - _Type_: string; _Default_: 30d
+
+Defines the length of time a refresh token will be valid until it expires. Example values: https://github.com/vercel/ms.
+
+### `operationsApi`
+
+The `operationsApi` section configures the HarperDB Operations API.\
+All the `operationsApi` configuration is optional. Any configuration that is not provided under this section will default to the `http` configuration section.
+
+`network`
+
+```yaml
+operationsApi:
+ network:
+ cors: true
+ corsAccessList:
+ - null
+ headersTimeout: 60000
+ keepAliveTimeout: 5000
+ port: 9925
+ securePort: null
+ timeout: 120000
+```
+
+`cors` - _Type_: boolean; _Default_: true
+
+Enable Cross Origin Resource Sharing, which allows requests across a domain.
+
+`corsAccessList` - _Type_: array; _Default_: null
+
+An array of allowable domains with CORS
+
+`headersTimeout` - _Type_: integer; _Default_: 60,000 milliseconds (1 minute)
+
+Limits the amount of time the parser will wait to receive the complete HTTP headers.
+
+`keepAliveTimeout` - _Type_: integer; _Default_: 5,000 milliseconds (5 seconds)
+
+Sets the number of milliseconds of inactivity the server needs to wait for additional incoming data after it has finished processing the last response.
+
+`port` - _Type_: integer; _Default_: 9925
+
+The port the HarperDB operations API interface will listen on.
+
+`securePort` - _Type_: integer; _Default_: null
+
+The port the HarperDB operations API uses for HTTPS connections. This requires a valid certificate and key.
+
+`timeout` - _Type_: integer; _Default_: 120,000 milliseconds (2 minutes)
+
+The length of time in milliseconds after which a request will timeout.
+
+`tls`
+
+This configures the Transport Layer Security for HTTPS support.
+
+```yaml
+operationsApi:
+ tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+```
+
+`certificate` - _Type_: string; _Default_: <ROOTPATH>/keys/certificate.pem
+
+Path to the certificate file.
+
+`certificateAuthority` - _Type_: string; _Default_: <ROOTPATH>/keys/ca.pem
+
+Path to the certificate authority file.
+
+`privateKey` - _Type_: string; _Default_: <ROOTPATH>/keys/privateKey.pem
+
+Path to the private key file.
+
+***
+
+#### `componentsRoot`
+
+`componentsRoot` - _Type_: string; _Default_: <ROOTPATH>/components
+
+The path to the folder containing the local component files.
+
+```yaml
+componentsRoot: ~/hdb/components
+```
+
+***
+
+#### `rootPath`
+
+`rootPath` - _Type_: string; _Default_: home directory of the current user
+
+The HarperDB database and applications/API/interface are decoupled from each other. The `rootPath` directory specifies where the HarperDB application persists data, config, logs, and Custom Functions.
+
+```yaml
+rootPath: /Users/jonsnow/hdb
+```
+
+***
+
+#### `storage`
+
+`writeAsync` - _Type_: boolean; _Default_: false
+
+The `writeAsync` option turns off disk flushing/syncing, allowing for faster write operation throughput. However, this does not provide storage integrity guarantees, and if a server crashes, it is possible that there may be data loss requiring restore from another backup/another node.
+
+```yaml
+storage:
+ writeAsync: false
+```
+
+`caching` - _Type_: boolean; _Default_: true
+
+The `caching` option enables in-memory caching of records, providing faster access to frequently accessed objects. This can incur some extra overhead for situations where reads are extremely random and don't benefit from caching.
+
+```yaml
+storage:
+ caching: true
+```
+
+`compression` - _Type_: boolean; _Default_: false
+
+The `compression` option enables compression of records in the database. This can be helpful for very large databases in reducing storage requirements and potentially allowing more data to be cached. This uses the very fast LZ4 compression algorithm, but this still incurs extra costs for compressing and decompressing.
+
+```yaml
+storage:
+ compression: false
+```
+
+`noReadAhead` - _Type_: boolean; _Default_: true
+
+The `noReadAhead` option advises the operating system to not read ahead when reading from the database. This provides better memory utilization, except in situations where large records are used or frequent range queries are used.
+
+```yaml
+storage:
+ noReadAhead: true
+```
+
+`prefetchWrites` - _Type_: boolean; _Default_: true
+
+The `prefetchWrites` option loads data prior to write transactions. This should be enabled for databases that are larger than memory (although it can be faster to disable this for smaller databases).
+
+```yaml
+storage:
+ prefetchWrites: true
+```
+
+`path` - _Type_: string; _Default_: `<ROOTPATH>/schema`
+
+The `path` configuration sets where all database files should reside.
+
+```yaml
+storage:
+ path: /users/harperdb/storage
+```
+
+_**Note:**_ This configuration applies to all database files, which includes system tables that are used internally by HarperDB. For this reason, if you wish to use a non-default `path` value, you must move any existing schemas into your `path` location. Existing schemas will likely include the system schema, which can be found at `<ROOTPATH>/schema/system`.
+
+***
+
+#### `tls`
+
+Transport Layer Security
+
+```yaml
+tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+```
+
+`certificate` - _Type_: string; _Default_: <ROOTPATH>/keys/certificate.pem
+
+Path to the certificate file.
+
+`certificateAuthority` - _Type_: string; _Default_: <ROOTPATH>/keys/ca.pem
+
+Path to the certificate authority file.
+
+`privateKey` - _Type_: string; _Default_: <ROOTPATH>/keys/privateKey.pem
+
+Path to the private key file.
+
+***
+
+#### `databases`
+
+The `databases` section is an optional configuration that can be used to define where database files should reside, down to the table level.
+
+This configuration should be set before the database and table have been created.
+
+The configuration will not create the directories in the path; that must be done by the user.
+
+To define where a database and all its tables should reside use the name of your database and the `path` parameter.
+
+```yaml
+databases:
+ nameOfDatabase:
+ path: /path/to/database
+```
+
+To define where specific tables within a database should reside use the name of your database, the `tables` parameter, the name of your table and the `path` parameter.
+
+```yaml
+databases:
+ nameOfDatabase:
+ tables:
+ nameOfTable:
+ path: /path/to/table
+```
+
+This same pattern can be used to define where the audit log database files should reside. To do this use the `auditPath` parameter.
+
+```yaml
+databases:
+ nameOfDatabase:
+ auditPath: /path/to/database
+```
+
+
+
+**Setting the database section through the command line, environment variables or API**
+
+When using command line variables, environment variables, or the API to configure the databases section, a slightly different convention from the regular one should be used. To add one or more configurations, use a JSON object array.
+
+Using command line variables:
+
+```bash
+--DATABASES [{\"nameOfSchema\":{\"tables\":{\"nameOfTable\":{\"path\":\"\/path\/to\/table\"}}}}]
+```
+
+Using environment variables:
+
+```bash
+DATABASES=[{"nameOfSchema":{"tables":{"nameOfTable":{"path":"/path/to/table"}}}}]
+```
+
+Using the API:
+
+```json
+{
+ "operation": "set_configuration",
+ "databases": [{
+ "nameOfDatabase": {
+ "tables": {
+ "nameOfTable": {
+ "path": "/path/to/table"
+ }
+ }
+ }
+ }]
+}
+```
diff --git a/site/versioned_docs/version-4.2/deployments/harperdb-cli.md b/site/versioned_docs/version-4.2/deployments/harperdb-cli.md
new file mode 100644
index 00000000..333d9979
--- /dev/null
+++ b/site/versioned_docs/version-4.2/deployments/harperdb-cli.md
@@ -0,0 +1,95 @@
+---
+title: HarperDB CLI
+---
+
+# HarperDB CLI
+
+The HarperDB command line interface (CLI) is used to administer [self-installed HarperDB instances](./install-harperdb/).
+
+## Installing HarperDB
+
+To install HarperDB with CLI prompts, run the following command:
+
+```bash
+harperdb install
+```
+
+Alternatively, HarperDB installations can be automated with environment variables or command line arguments; [see a full list of configuration parameters here](./configuration#using-the-configuration-file-and-naming-conventions). Note, when used in conjunction, command line arguments will override environment variables.
+
+#### Environment Variables
+
+```bash
+#minimum required parameters for no additional CLI prompts
+export TC_AGREEMENT=yes
+export HDB_ADMIN_USERNAME=HDB_ADMIN
+export HDB_ADMIN_PASSWORD=password
+export ROOTPATH=/tmp/hdb/
+export OPERATIONSAPI_NETWORK_PORT=9925
+harperdb install
+```
+
+#### Command Line Arguments
+
+```bash
+#minimum required parameters for no additional CLI prompts
+harperdb install --TC_AGREEMENT yes --HDB_ADMIN_USERNAME HDB_ADMIN --HDB_ADMIN_PASSWORD password --ROOTPATH /tmp/hdb/ --OPERATIONSAPI_NETWORK_PORT 9925
+```
+
+***
+
+## Starting HarperDB
+
+To start HarperDB after it is installed, run the following command:
+
+```bash
+harperdb start
+```
+
+***
+
+## Stopping HarperDB
+
+To stop HarperDB once it is running, run the following command:
+
+```bash
+harperdb stop
+```
+
+***
+
+## Restarting HarperDB
+
+To restart HarperDB once it is running, run the following command:
+
+```bash
+harperdb restart
+```
+***
+
+## Getting the HarperDB Version
+
+To check the version of HarperDB that is installed run the following command:
+
+```bash
+harperdb version
+```
+
+## Get all available CLI commands
+
+To display all available HarperDB CLI commands along with a brief description run:
+
+```bash
+harperdb help
+```
+
+## Get the status of HarperDB and clustering
+
+To display the status of the HarperDB process, the clustering hub and leaf processes, the clustering network and replication statuses, run:
+
+```bash
+harperdb status
+```
+
+## Backups
+
+HarperDB uses a transactional commit process that ensures that data on disk is always transactionally consistent with storage. This means that HarperDB maintains database integrity in the event of a crash. It also means that you can use any standard volume snapshot tool to make a backup of a HarperDB database. Database files are stored in the hdb/database directory. As long as the snapshot is an atomic snapshot of these database files, the data can be copied/moved back into the database directory to restore a previous backup (with HarperDB shut down), and database integrity will be preserved. Note that simply copying an in-use database file (using `cp`, for example) is _not_ a snapshot; this would progressively read data from the database at different points in time, which yields an unreliable copy that likely will not be usable. Standard copying is only reliable for a database file that is not in use.
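+
+For example, a minimal sketch of a cold backup using standard copying (the paths assume the default `~/hdb` root path):
+
+```bash
+# stop HarperDB so the database files are not in use
+harperdb stop
+
+# copy the database directory to a dated backup location
+cp -a ~/hdb/database ~/hdb-backup-$(date +%F)
+
+# start HarperDB again
+harperdb start
+```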
diff --git a/site/versioned_docs/version-4.2/deployments/harperdb-cloud/alarms.md b/site/versioned_docs/version-4.2/deployments/harperdb-cloud/alarms.md
new file mode 100644
index 00000000..03526fa8
--- /dev/null
+++ b/site/versioned_docs/version-4.2/deployments/harperdb-cloud/alarms.md
@@ -0,0 +1,20 @@
+---
+title: Alarms
+---
+
+# Alarms
+
+HarperDB Cloud instance alarms are triggered when certain conditions are met. Once alarms are triggered, organization owners will immediately receive an email alert, and the alert will be available on the [Instance Configuration](../../administration/harperdb-studio/instance-configuration) page. The table below describes each alert and its evaluation metrics.
+
+### Heading Definitions
+
+* **Alarm**: Title of the alarm.
+* **Threshold**: Definition of the alarm threshold.
+* **Intervals**: The number of occurrences before an alarm is triggered and the period that the metric is evaluated over.
+* **Proposed Remedy**: Recommended solution to avoid the alert in the future.
+
+| Alarm | Threshold | Intervals | Proposed Remedy |
+| ------- | ---------- | --------- | -------------------------------------------------------------------------------------------------------------------------------- |
+| Storage | > 90% Disk | 1 x 5min | [Increased storage volume](../../administration/harperdb-studio/instance-configuration#update-instance-storage) |
+| CPU | > 90% Avg | 2 x 5min | [Increase instance size for additional CPUs](../../administration/harperdb-studio/instance-configuration#update-instance-ram) |
+| Memory | > 90% RAM | 2 x 5min | [Increase instance size](../../administration/harperdb-studio/instance-configuration#update-instance-ram) |
diff --git a/site/versioned_docs/version-4.2/deployments/harperdb-cloud/index.md b/site/versioned_docs/version-4.2/deployments/harperdb-cloud/index.md
new file mode 100644
index 00000000..ae2ec1a7
--- /dev/null
+++ b/site/versioned_docs/version-4.2/deployments/harperdb-cloud/index.md
@@ -0,0 +1,9 @@
+---
+title: HarperDB Cloud
+---
+
+# HarperDB Cloud
+
+[HarperDB Cloud](https://studio.harperdb.io/) is the easiest way to test drive HarperDB; it’s HarperDB-as-a-Service. Cloud handles deployment and management of your instances in just a few clicks. HarperDB Cloud is currently powered by AWS with additional cloud providers on our roadmap for the future.
+
+You can create a new [HarperDB Cloud instance in the HarperDB Studio](../../administration/harperdb-studio/instances#create-a-new-instance).
diff --git a/site/versioned_docs/version-4.2/deployments/harperdb-cloud/instance-size-hardware-specs.md b/site/versioned_docs/version-4.2/deployments/harperdb-cloud/instance-size-hardware-specs.md
new file mode 100644
index 00000000..0e970b13
--- /dev/null
+++ b/site/versioned_docs/version-4.2/deployments/harperdb-cloud/instance-size-hardware-specs.md
@@ -0,0 +1,23 @@
+---
+title: Instance Size Hardware Specs
+---
+
+# Instance Size Hardware Specs
+
+While HarperDB Cloud bills by RAM, each instance has other specifications associated with the RAM selection. The following table describes each instance size in detail\*.
+
+| AWS EC2 Instance Size | RAM (GiB) | # vCPUs | Network (Gbps) | Processor |
+| --------------------- | --------- | ------- | -------------- | -------------------------------------- |
+| t3.micro | 1 | 2 | Up to 5 | 2.5 GHz Intel Xeon Platinum 8000 |
+| t3.small | 2 | 2 | Up to 5 | 2.5 GHz Intel Xeon Platinum 8000 |
+| t3.medium | 4 | 2 | Up to 5 | 2.5 GHz Intel Xeon Platinum 8000 |
+| m5.large | 8 | 2 | Up to 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.xlarge | 16 | 4 | Up to 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.2xlarge | 32 | 8 | Up to 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.4xlarge | 64 | 16 | Up to 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.8xlarge | 128 | 32 | 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.12xlarge | 192 | 48 | 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.16xlarge | 256 | 64 | 20 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.24xlarge | 384 | 96 | 25 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+
+\*Specifications are subject to change. For the most up to date information, please refer to AWS documentation: [https://aws.amazon.com/ec2/instance-types/](https://aws.amazon.com/ec2/instance-types/).
diff --git a/site/versioned_docs/version-4.2/deployments/harperdb-cloud/iops-impact.md b/site/versioned_docs/version-4.2/deployments/harperdb-cloud/iops-impact.md
new file mode 100644
index 00000000..1c8496d5
--- /dev/null
+++ b/site/versioned_docs/version-4.2/deployments/harperdb-cloud/iops-impact.md
@@ -0,0 +1,42 @@
+---
+title: IOPS Impact on Performance
+---
+
+# IOPS Impact on Performance
+
+HarperDB, like any database, can place a tremendous load on its storage resources. Storage, not CPU or memory, will more often be the bottleneck of a server, virtual machine, or container running HarperDB. Understanding how storage works, and how much storage performance your workload requires, is key to ensuring that HarperDB performs as expected.
+
+## IOPS Overview
+
+The primary measure of storage performance is the number of input/output operations per second (IOPS) that a storage device can perform. Different storage devices can have dramatically different performance profiles. A hard drive (HDD) might only perform a hundred or so IOPS, while a solid state drive (SSD) might be able to perform tens or hundreds of thousands of IOPS.
+
+Cloud providers like AWS, which powers HarperDB Cloud, don’t typically attach individual disks to a virtual machine or container. Instead, they combine large numbers of storage drives to create very high performance storage servers. Chunks (volumes) of that storage are then carved out and presented to many different virtual machines and containers. Due to the shared nature of this type of storage, the cloud provider places configurable limits on the number of IOPS that a volume can perform. The same way that cloud providers charge more for larger capacity volumes, they also charge more for volumes with more IOPS.
+
+## HarperDB Cloud Storage
+
+HarperDB Cloud utilizes AWS Elastic Block Storage (EBS) General Purpose SSD (gp3) volumes. This is the most common storage type used in AWS, as it provides reasonable performance for most workloads, at a reasonable price.
+
+AWS EBS gp3 volumes have a baseline performance level of 3,000 IOPS, as a result, all HarperDB Cloud storage options will offer 3,000 IOPS. We plan to offer scalable IOPS as an option in the future.
+
+You can read more about AWS EBS volume IOPS here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html.
+
+## Estimating IOPS for a HarperDB Instance
+
+The number of IOPS required for a particular workload is influenced by many factors. Testing your particular application is the best way to determine the number of IOPS required. A reliable method is to estimate about two IOPS for every index, including the primary key itself. So if a table has two indices besides the primary key, estimate that an insert or update will require about six IOPS. Note that this can often be closer to one IOPS per index under load due to internal batching of writes, and sometimes even better when doing sequential inserts. Again, it is best to verify this by testing with application-specific data and write patterns.
+
+For assistance in estimating IOPS requirements feel free to contact HarperDB Support or join our Community Slack Channel.
+
+## Example Use Case IOPS Requirements
+
+* **Sensor Data Collection**
+
+ In the case of IoT sensors where data collection will be sustained, high IOPS are required. While there are not typically large queries going on in this case, there is a high volume of data being ingested. This implies that IOPS will be sustained at a high level. For example, if you are collecting 100 records per second you would expect to need roughly 3,000 IOPS just to handle the data inserts.
+* **Data Analytics/BI Server**
+
+ Providing a server for analytics purposes typically requires a larger machine. Typically these cases involve large scale SQL joins and aggregations, which puts a large strain on reads. HarperDB utilizes an in-memory cache, which provides a significant performance boost on machines with large amounts of memory. However, if disparate datasets are constantly being queried and/or new data is frequently being loaded, you will find that the system still needs to have high IOPS to meet performance demand.
+* **Web Services**
+
+ Typical web service implementations with discrete reads and writes often do not need high IOPS to perform as expected. This is often the case in more transactional systems without the requirement for high performance load. A good rule to follow is that any HarperDB operation that requires a data scan will be IOPS intensive, but if these are not frequent then the EBS boost will suffice. Queries utilizing equals operations in either SQL or NoSQL do not require a scan due to HarperDB’s native indexing.
+* **High Performance Database**
+
+ Ultimately, if performance is your top priority, HarperDB should be run on bare metal hardware. Cloud providers offer these options at a higher cost, but they come with obvious performance improvements.
diff --git a/site/versioned_docs/version-4.2/deployments/harperdb-cloud/verizon-5g-wavelength-instances.md b/site/versioned_docs/version-4.2/deployments/harperdb-cloud/verizon-5g-wavelength-instances.md
new file mode 100644
index 00000000..c5a565e9
--- /dev/null
+++ b/site/versioned_docs/version-4.2/deployments/harperdb-cloud/verizon-5g-wavelength-instances.md
@@ -0,0 +1,31 @@
+---
+title: Verizon 5G Wavelength
+---
+
+# Verizon 5G Wavelength
+
+These instances are only accessible from the Verizon network. When accessing your HarperDB instance, please ensure you are connected to the Verizon network; examples include Verizon 5G Internet, Verizon hotspots, and Verizon mobile devices.
+
+HarperDB on Verizon 5G Wavelength brings HarperDB closer to the end user, exclusively on the Verizon network, resulting in response times as low as single-digit milliseconds from HarperDB to the client.
+
+Instances are built via AWS Wavelength. You can read more about [AWS Wavelength here](https://aws.amazon.com/wavelength/).
+
+## HarperDB 5G Wavelength Instance Specs
+
+While HarperDB 5G Wavelength bills by RAM, each instance has other specifications associated with the RAM selection. The following table describes each instance size in detail\*.
+
+| AWS EC2 Instance Size | RAM (GiB) | # vCPUs | Network (Gbps) | Processor |
+| --------------------- | --------- | ------- | -------------- | ------------------------------------------- |
+| t3.medium | 4 | 2 | Up to 5 | Up to 3.1 GHz Intel Xeon Platinum Processor |
+| t3.xlarge | 16 | 4 | Up to 5 | Up to 3.1 GHz Intel Xeon Platinum Processor |
+| r5.2xlarge | 64 | 8 | Up to 10 | Up to 3.1 GHz Intel Xeon Platinum Processor |
+
+\*Specifications are subject to change. For the most up to date information, please refer to [AWS documentation](https://aws.amazon.com/ec2/instance-types/).
+
+## HarperDB 5G Wavelength Storage
+
+HarperDB 5G Wavelength utilizes AWS Elastic Block Storage (EBS) General Purpose SSD (gp2) volumes. This is the most common storage type used in AWS, as it provides reasonable performance for most workloads, at a reasonable price.
+
+AWS EBS gp2 volumes have a baseline performance level, which determines the number of IOPS it can perform indefinitely. The larger the volume, the higher its baseline performance. Additionally, smaller gp2 volumes are able to burst to a higher number of IOPS for periods of time.
+
+Smaller gp2 volumes are perfect for trying out the functionality of HarperDB, and might also work well for applications that don’t perform many database transactions. For applications that perform a moderate or high number of transactions, we recommend that you use a larger HarperDB volume. Learn more about the [impact of IOPS on performance here](./iops-impact).
+
+You can read more about [AWS EBS gp2 volume IOPS here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html#ebsvolumetypes_gp2).
diff --git a/site/versioned_docs/version-4.2/deployments/install-harperdb/index.md b/site/versioned_docs/version-4.2/deployments/install-harperdb/index.md
new file mode 100644
index 00000000..32a23e0b
--- /dev/null
+++ b/site/versioned_docs/version-4.2/deployments/install-harperdb/index.md
@@ -0,0 +1,63 @@
+---
+title: Install HarperDB
+---
+
+# Install HarperDB
+
+## Install HarperDB
+
+This documentation contains information for installing HarperDB locally. Note that if you’d like to get up and running quickly, you can try a [managed instance with HarperDB Cloud](https://studio.harperdb.io/sign-up). HarperDB is a cross-platform database; we recommend Linux for production use, but HarperDB can also run on Windows and Mac for development purposes. Installation is usually very simple and just takes a few steps, but there are a few different options documented here.
+
+HarperDB runs on Node.js, so if you do not have it installed, you need to do that first (if you already have it installed, you can skip to installing HarperDB itself). Node.js can be downloaded and installed from [their site](https://nodejs.org/). For Linux and Mac, we recommend installing and managing Node versions with [NVM, which has instructions for installation](https://github.com/nvm-sh/nvm). Generally NVM can be installed with the following command:
+
+```bash
+curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.5/install.sh | bash
+```
+
+Then log out and log back in, and install Node.js using nvm. We recommend using the LTS version, but all currently maintained Node versions are supported (currently version 14 and newer; make sure to always use the latest minor/patch release for your major version):
+
+```bash
+nvm install --lts
+```
+
+#### Install and Start HarperDB
+
+Then you can install HarperDB with NPM and start it:
+
+```bash
+npm install -g harperdb
+harperdb
+```
+
+HarperDB will automatically start after installation.
+
+If you are setting up a production server on Linux, [we have much more extensive documentation on how to configure volumes for database storage, set up a systemd script, and configure your operating system for use as a database server in our Linux installation guide](./linux).
+
+## With Docker
+
+If you would like to run HarperDB in Docker, install [Docker Desktop](https://docs.docker.com/desktop/) on your Mac or Windows computer. Otherwise, install the [Docker Engine](https://docs.docker.com/engine/install/) on your Linux server.
+
+Once Docker Desktop or Docker Engine is installed, visit our [Docker Hub page](https://hub.docker.com/r/harperdb/harperdb) for information and examples on how to run a HarperDB container.
+
+## Offline Install
+
+If you need to install HarperDB on a device that doesn't have an Internet connection, you can choose your version and download the npm package and install it directly (you’ll still need Node.js and NPM):
+
+[Download Install Package](https://products-harperdb-io.s3.us-east-2.amazonaws.com/index.html)
+
+Once you’ve downloaded the .tgz file, run the following command from the directory where you’ve placed it:
+
+```bash
+npm install -g harperdb-X.X.X.tgz
+harperdb install
+```
+
+For more information visit the [HarperDB Command Line Interface](../harperdb-cli) guide.
+
+## Installation on Less Common Platforms
+
+HarperDB comes with binaries for standard AMD64/x64 or ARM64 CPU architectures on Linux, Windows (x64 only), and Mac (including Apple Silicon). However, if you are installing on a less common platform (Alpine, for example), you will need to ensure that you have build tools installed for the installation process to compile the binaries (this is handled automatically), including:
+
+* [Go](https://go.dev/dl/): version 1.19.1
+* GCC
+* Make
+* Python v3.7, v3.8, v3.9, or v3.10
diff --git a/site/versioned_docs/version-4.2/deployments/install-harperdb/linux.md b/site/versioned_docs/version-4.2/deployments/install-harperdb/linux.md
new file mode 100644
index 00000000..bf02830e
--- /dev/null
+++ b/site/versioned_docs/version-4.2/deployments/install-harperdb/linux.md
@@ -0,0 +1,212 @@
+---
+title: On Linux
+---
+
+# On Linux
+
+If you wish to install locally or already have a configured server, see the basic [Installation Guide](./).
+
+The following is a recommended way to configure Linux and install HarperDB. These instructions should work reasonably well for any public cloud or on-premises Linux instance.
+
+***
+
+These instructions assume that the following has already been completed:
+
+1. Linux is installed
+1. Basic networking is configured
+1. A non-root user account dedicated to HarperDB with sudo privileges exists
+1. An additional volume for storing HarperDB files is attached to the Linux instance
+1. Traffic to ports 9925 (HarperDB Operations API), 9926 (HarperDB Application Interface), and 9932 (HarperDB Clustering) is permitted
+
+While you will need to access HarperDB through port 9925 for administration through the operations API, and port 9932 for clustering, for a higher level of security you may want to consider restricting both of these ports to a VPN or VPC and only exposing the application interface (9926 by default) to the public Internet.
+
+For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default “ubuntu” user account.
+
+***
+
+### (Optional) LVM Configuration
+
+Logical Volume Manager (LVM) can be used to stripe multiple disks together to form a single logical volume. If striping disks together is not a requirement, skip these steps.
+
+Find disk that already has a partition
+
+```bash
+used_disk=$(lsblk -P -I 259 | grep "nvme.n1.*part" | grep -o "nvme.n1")
+```
+
+Create array of free disks
+
+```bash
+declare -a free_disks
+mapfile -t free_disks < <(lsblk -P -I 259 | grep "nvme.n1.*disk" | grep -o "nvme.n1" | grep -v "$used_disk")
+```
+
+Get quantity of free disks
+
+```bash
+free_disks_qty=${#free_disks[@]}
+```
+
+Construct pvcreate command
+
+```bash
+cmd_string=""
+for i in "${free_disks[@]}"
+do
+cmd_string="$cmd_string /dev/$i"
+done
+```
+
+Initialize disks for use by LVM
+
+```bash
+pvcreate_cmd="pvcreate $cmd_string"
+sudo $pvcreate_cmd
+```
+
+Create volume group
+
+```bash
+vgcreate_cmd="vgcreate hdb_vg $cmd_string"
+sudo $vgcreate_cmd
+```
+
+Create logical volume
+
+```bash
+sudo lvcreate -n hdb_lv -i $free_disks_qty -l 100%FREE hdb_vg
+```
+
+### Configure Data Volume
+
+Run `lsblk` and note the device name of the additional volume
+
+```bash
+lsblk
+```
+
+Create an ext4 filesystem on the volume (the commands below assume the device name is nvme1n1; if you used LVM to create a logical volume, replace /dev/nvme1n1 with /dev/hdb_vg/hdb_lv)
+
+```bash
+sudo mkfs.ext4 -L hdb_data /dev/nvme1n1
+```
+
+Mount the file system and set the correct permissions for the directory
+
+```bash
+mkdir /home/ubuntu/hdb
+sudo mount -t ext4 /dev/nvme1n1 /home/ubuntu/hdb
+sudo chown -R ubuntu:ubuntu /home/ubuntu/hdb
+sudo chmod 775 /home/ubuntu/hdb
+```
+
+Create a fstab entry to mount the filesystem on boot
+
+```bash
+echo "LABEL=hdb_data /home/ubuntu/hdb ext4 defaults,noatime 0 1" | sudo tee -a /etc/fstab
+```
+
+### Configure Linux and Install Prerequisites
+
+If a swap file or partition does not already exist, create and enable a 2GB swap file
+
+```bash
+sudo dd if=/dev/zero of=/swapfile bs=128M count=16
+sudo chmod 600 /swapfile
+sudo mkswap /swapfile
+sudo swapon /swapfile
+echo "/swapfile swap swap defaults 0 0" | sudo tee -a /etc/fstab
+```
+
+Increase the open file limits for the ubuntu user
+
+```bash
+echo "ubuntu soft nofile 500000" | sudo tee -a /etc/security/limits.conf
+echo "ubuntu hard nofile 1000000" | sudo tee -a /etc/security/limits.conf
+```
+
+Install Node Version Manager (nvm)
+
+```bash
+curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
+```
+
+Load nvm (or logout and then login)
+
+```bash
+. ~/.nvm/nvm.sh
+```
+
+Install Node.js using nvm ([read more about specific Node version requirements](https://www.npmjs.com/package/harperdb#prerequisites))
+
+```bash
+nvm install
+```
+
+### Install and Start HarperDB
+
+Here is an example of installing HarperDB with minimal configuration.
+
+```bash
+npm install -g harperdb
+harperdb start \
+ --TC_AGREEMENT "yes" \
+ --ROOTPATH "/home/ubuntu/hdb" \
+ --OPERATIONSAPI_NETWORK_PORT "9925" \
+ --HDB_ADMIN_USERNAME "HDB_ADMIN" \
+ --HDB_ADMIN_PASSWORD "password"
+```
+
+Here is an example of installing HarperDB with commonly used additional configuration.
+
+```bash
+npm install -g harperdb
+harperdb start \
+ --TC_AGREEMENT "yes" \
+ --ROOTPATH "/home/ubuntu/hdb" \
+ --OPERATIONSAPI_NETWORK_PORT "9925" \
+ --HDB_ADMIN_USERNAME "HDB_ADMIN" \
+ --HDB_ADMIN_PASSWORD "password" \
+ --HTTP_SECUREPORT "9926" \
+ --CLUSTERING_ENABLED "true" \
+ --CLUSTERING_USER "cluster_user" \
+ --CLUSTERING_PASSWORD "password" \
+ --CLUSTERING_NODENAME "hdb1"
+```
+
+HarperDB will automatically start after installation. If you wish HarperDB to start when the OS boots, you have two options:
+
+You can set up a crontab:
+
+```bash
+(crontab -l 2>/dev/null; echo "@reboot PATH=\"/home/ubuntu/.nvm/versions/node/v18.15.0/bin:$PATH\" && harperdb start") | crontab -
+```
+
+Or you can create a systemd script at `/etc/systemd/system/harperdb.service`
+
+Paste the following contents into the file:
+
+```
+[Unit]
+Description=HarperDB
+
+[Service]
+Type=simple
+Restart=always
+User=ubuntu
+Group=ubuntu
+WorkingDirectory=/home/ubuntu
+ExecStart=/bin/bash -c 'PATH="/home/ubuntu/.nvm/versions/node/v18.15.0/bin:$PATH"; harperdb'
+
+[Install]
+WantedBy=multi-user.target
+```
+
+And then run the following:
+
+```
+systemctl daemon-reload
+systemctl enable harperdb
+```
+
+For more information visit the [HarperDB Command Line Interface guide](../../deployments/harperdb-cli) and the [HarperDB Configuration File guide](../../deployments/configuration).
diff --git a/site/versioned_docs/version-4.2/deployments/upgrade-hdb-instance.md b/site/versioned_docs/version-4.2/deployments/upgrade-hdb-instance.md
new file mode 100644
index 00000000..0b7c6e3f
--- /dev/null
+++ b/site/versioned_docs/version-4.2/deployments/upgrade-hdb-instance.md
@@ -0,0 +1,90 @@
+---
+title: Upgrade a HarperDB Instance
+---
+
+# Upgrade a HarperDB Instance
+
+This document describes best practices for upgrading self-hosted HarperDB instances. HarperDB can be upgraded using a combination of npm and built-in HarperDB upgrade scripts. Whenever upgrading your HarperDB installation it is recommended you make a backup of your data first. Note: This document applies to self-hosted HarperDB instances only. All [HarperDB Cloud instances](./harperdb-cloud/) will be upgraded by the HarperDB Cloud team.
+
+## Upgrading
+
+Upgrading HarperDB is a two-step process. First the latest version of HarperDB must be downloaded from npm, then the HarperDB upgrade scripts will be utilized to ensure the newest features are available on the system.
+
+1. Install the latest version of HarperDB using `npm install -g harperdb`.
+
+ Note `-g` should only be used if you installed HarperDB globally (which is recommended).
+1. Run `harperdb` to initiate the upgrade process.
+
+ HarperDB will then prompt you for all appropriate inputs and then run the upgrade directives.
+
+## Node Version Manager (nvm)
+
+[Node Version Manager (nvm)](http://nvm.sh/) is an easy way to install, remove, and switch between different versions of Node.js as required by various applications. More information, including directions on installing nvm, can be found here: https://nvm.sh/.
+
+HarperDB supports Node.js versions 14.0.0 and higher; however, **please check our** [**NPM page**](https://www.npmjs.com/package/harperdb) **for our recommended Node.js version.** To install a different version of Node.js with nvm, run the command:
+
+```bash
+nvm install
+```
+
+To switch to a version of Node run:
+
+```bash
+nvm use
+```
+
+To see the current running version of Node run:
+
+```bash
+node --version
+```
+
+With a handful of different versions of Node.js installed, run nvm with the `ls` argument to list out all installed versions:
+
+```bash
+nvm ls
+```
+
+When upgrading HarperDB, we recommend also upgrading your Node version. Here we assume you're running on an older version of Node; the execution may look like this:
+
+Switch to the older version of Node that HarperDB is running on (if it is not the current version):
+
+```bash
+nvm use 14.19.0
+```
+
+Make sure HarperDB is not running:
+
+```bash
+harperdb stop
+```
+
+Uninstall HarperDB. Note: this step is not required, but it will clean up old HarperDB artifacts. We recommend removing all other HarperDB installations to ensure the most recent version is always running.
+
+```bash
+npm uninstall -g harperdb
+```
+
+Switch to the newer version of Node:
+
+```bash
+nvm use
+```
+
+Install HarperDB globally
+
+```bash
+npm install -g harperdb
+```
+
+Run the upgrade script
+
+```bash
+harperdb
+```
+
+Start HarperDB
+
+```bash
+harperdb start
+```
diff --git a/site/versioned_docs/version-4.2/developers/_category_.json b/site/versioned_docs/version-4.2/developers/_category_.json
new file mode 100644
index 00000000..9fe399bf
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/_category_.json
@@ -0,0 +1,12 @@
+{
+ "label": "Developers",
+ "position": 1,
+ "link": {
+ "type": "generated-index",
+ "title": "Developers Documentation",
+ "description": "Comprehensive guides and references for building applications with HarperDB",
+ "keywords": [
+ "developers"
+ ]
+ }
+}
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/developers/applications/caching.md b/site/versioned_docs/version-4.2/developers/applications/caching.md
new file mode 100644
index 00000000..7f228b5e
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/applications/caching.md
@@ -0,0 +1,273 @@
+---
+title: Caching
+---
+
+# Caching
+
+HarperDB has integrated support for caching data. With built-in caching capabilities and distributed high-performance low-latency responsiveness, HarperDB makes an ideal data caching server. HarperDB can store cached data as queryable structured data, so data can easily be consumed in one format (for example JSON or CSV) and provided to end users in different formats with different selected properties (for example MessagePack, with a subset of selected properties), or even with customized querying capabilities. HarperDB also manages and provides timestamps/tags for proper caching control, facilitating further downstream caching. With these combined capabilities, HarperDB is an extremely fast, interoperable, flexible, and customizable caching server.
+
+## Configuring Caching
+
+To set up caching, first you will need to define a table that you will use as your cache (to store the cached data). You can review the [introduction to building applications](./) for more information on setting up the application (and the [defining schemas documentation](./defining-schemas)), but once you have defined an application folder with a schema, you can add a table for caching to your `schema.graphql`:
+
+```graphql
+type MyCache @table(expiration: 3600) @export {
+ id: ID @primaryKey
+}
+```
+
+You may also note that we can define a time-to-live (TTL) expiration on the table, indicating when table records/entries should expire. This is generally necessary for "passive" caches where there is no active notification of when entries expire. However, this is not needed if you provide a means of notifying when data is invalidated and changed.
+
+While you can provide a single expiration time, there are actually several expiration timings that are potentially relevant, and they can be independently configured. These settings are available as directive properties on the table configuration (like `expiration` above):
+
+* stale expiration: The point when a request for a record should trigger a request to origin (but might possibly return the current stale record, depending on policy).
+* must-revalidate expiration: The point when a request for a record must make a request to origin first and return the latest value from origin.
+* eviction expiration: The point when a record is actually removed from the caching table.
+
+You can provide a single expiration and it defines the behavior for all three, or you can provide three separate settings through table directives:
+
+* `expiration` - The amount of time until a record goes stale.
+* `eviction` - The amount of time after expiration before a record can be evicted (defaults to zero).
+* `scanInterval` - The interval for scanning for expired records (defaults to one quarter of the total of expiration and eviction).
+
+## Define External Data Source
+
+Next, you need to define the source for your cache. External data sources could be HTTP APIs, other databases, microservices, or any other source of data. This can be defined as a resource class in your application's `resources.js` module. You can extend the `Resource` class (which is available as a global variable in the HarperDB environment) as your base class. The first method to implement is a `get()` method to define how to retrieve the source data. For example, if we were caching an external HTTP API, we might define it as such:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+	async get() {
+		return (await fetch(`http://some-api.com/${this.getId()}`)).json();
+	}
+}
+```
+
+Next, we define this external data resource as the "source" for the caching table we defined above:
+
+```javascript
+const { MyCache } = tables;
+MyCache.sourcedFrom(ThirdPartyAPI);
+```
+
+Now we have a fully configured and connected cache. If you access data from `MyCache` (for example, through the REST API, like `/MyCache/some-id`), HarperDB will check to see if the requested entry is in the table and return it if it is available (and hasn't expired). If there is no entry, or it has expired (it is older than one hour in this case), it will go to the source, calling the `get()` method, which will then retrieve the requested entry. Once the entry is retrieved, it will be saved/cached in the caching table (for one hour based on our expiration time).
+
+```mermaid
+flowchart TD
+ Client1(Client 1)-->Cache(Caching Table)
+ Client2(Client 2)-->Cache
+ Cache-->Resource(Data Source Connector)
+ Resource-->API(Remote Data Source API)
+```
+
+HarperDB handles waiting for an existing cache resolution to finish and uses its result. This prevents a "cache stampede" when entries expire, ensuring that multiple requests to a cache entry will all wait on a single request to the data source.
+
+Cache tables with an expiration are periodically pruned for expired entries. Because this is done periodically, there is usually some amount of time between when a record has expired and when the record is actually evicted (the cached data is removed). But when a record is checked for availability, the expiration time is used to determine if the record is fresh (and the cache entry can be used).
+
+### Eviction with Indexing
+
+Eviction is the removal of a locally cached copy of data, but it does not imply the deletion of the actual data from the canonical or origin data source. Because evicted records still exist (just not in the local cache), if a caching table uses expiration (and eviction), and has indexing on certain attributes, the data is not removed from the indexes. The indexes that reference the evicted record are preserved, along with the attribute data necessary to maintain these indexes. Therefore eviction means the removal of non-indexed data (in this case evictions are stored as "partial" records). Eviction only removes the data that can be safely removed from a cache without affecting the integrity or behavior of the indexes. If a search query is performed that matches this evicted record, the record will be requested on-demand to fulfill the search query.
+
+### Specifying a Timestamp
+
+In the example above, we simply retrieved data to fulfill a cache request. We may want to supply the timestamp of the record we are fulfilling as well. This can be set on the context for the request:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+	async get() {
+		let response = await fetch(`http://some-api.com/${this.getId()}`);
+		this.getContext().lastModified = response.headers.get('Last-Modified');
+		return response.json();
+	}
+}
+```
+
+#### Specifying an Expiration
+
+In addition, we can also specify when a cached record "expires". When a cached record expires, this means that a request for that record will trigger a request to the data source again. This does not necessarily mean that the cached record has been evicted (removed), although expired records will be periodically evicted. If the cached record still exists, the data source can revalidate it and return it. For example:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+	async get() {
+		const context = this.getContext();
+		let headers = new Headers();
+		if (context.replacingVersion) // this is the existing cached record
+			headers.set('If-Modified-Since', new Date(context.replacingVersion).toUTCString());
+		let response = await fetch(`http://some-api.com/${this.getId()}`, { headers });
+		let cacheInfo = response.headers.get('Cache-Control');
+		let maxAge = cacheInfo?.match(/max-age=(\d+)/)?.[1];
+		if (maxAge) // we can set a specific expiration time by setting context.expiresAt
+			context.expiresAt = Date.now() + maxAge * 1000; // convert from seconds to milliseconds and add to current time
+		// we can just revalidate and return the record if the origin has confirmed that it has the same version:
+		if (response.status === 304) return context.replacingRecord;
+		...
+```
+
+## Active Caching and Invalidation
+
+The cache we have created above is a "passive" cache; it only pulls data from the data source as needed, and has no knowledge of whether or when data from the data source has actually changed, so it must rely on timer-based expiration to periodically retrieve possibly updated data. This means that it is possible that the cache may have stale data for a while (if the underlying data has changed, but the cached data hasn't expired), and the cache may have to refresh more than necessary if the data source data hasn't changed. Consequently it can be significantly more effective to implement an "active" cache, in which the data source is monitored and notifies the cache when any data changes. This ensures that when data changes, the cache can immediately load the updated data, and unchanged data can remain cached much longer (or indefinitely).
+
+### Invalidate
+
+One way to provide more active caching is to specifically invalidate individual records. Invalidation is useful when you know the source data has changed, and the cache needs to re-retrieve data from the source the next time that record is accessed. This can be done by executing the `invalidate()` method on a resource. For example, you could extend a table (in your resources.js) and provide a custom POST handler that does invalidation:
+
+```javascript
+const { MyTable } = tables;
+export class MyTableEndpoint extends MyTable {
+	async post(data) {
+		if (data.invalidate) // use this flag as a marker
+			this.invalidate();
+	}
+}
+```
+
+(Note that if you are now exporting this endpoint through resources.js, you don't necessarily need to directly export the table separately in your schema.graphql).
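+
+For illustration, a client could then trigger the invalidation with a simple POST. This is only a sketch; it assumes the endpoint is exported at `/MyTableEndpoint/` and that the application interface is listening on the default port 9926:
+
+```javascript
+// Hypothetical client-side call that sets the invalidation flag used above.
+await fetch('http://localhost:9926/MyTableEndpoint/some-id', {
+	method: 'POST',
+	headers: { 'Content-Type': 'application/json' },
+	body: JSON.stringify({ invalidate: true }),
+});
+```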
+
+### Subscriptions
+
+We can provide more control of an active cache with subscriptions. If there is a way to receive notifications from the external data source of data changes, we can implement this data source as an "active" data source for our cache by implementing a `subscribe` method. A `subscribe` method should return an asynchronous iterable that iterates and returns events indicating the updates. One straightforward way of creating an asynchronous iterable is by defining the `subscribe` method as an asynchronous generator. If we had an endpoint that we could poll for changes, we could implement this like:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+	async *subscribe() {
+		do {
+			// get the next data change event from the source
+			let update = await (await fetch(`http://some-api.com/latest-update`)).json();
+			const event = { // define the change event (which will update the cache)
+				type: 'put', // this would indicate that the event includes the new data value
+				id: update.id, // the primary key of the record that updated
+				value: update.value, // the new value of the record that updated
+				timestamp: update.timestamp, // the timestamp of when the data change occurred
+			};
+			yield event; // this returns this event, notifying the cache of the change
+		} while (true);
+	}
+	async get() {
+		...
+```
+
+Notification events should always include an `id` to indicate the primary key of the updated record. The event should have a `value` for `put` and `message` event types. The `timestamp` is optional and can be used to indicate the exact timestamp of the change. The following event `type`s are supported:
+
+* `put` - This indicates that the record has been updated and provides the new value of the record
+* `invalidate` - Alternately, you can notify with an event type of `invalidate` to indicate that the data has changed, but without the overhead of actually sending the data (the `value` property is not needed), so the data only needs to be sent if and when the data is requested through the cache. An `invalidate` will evict the entry and update the timestamp to indicate that there is new data that should be requested (if needed).
+* `delete` - This indicates that the record has been deleted.
+* `message` - This indicates a message is being passed through the record. The record value has not changed, but this is used for [publish/subscribe messaging](../real-time).
+* `transaction` - This indicates that there are multiple writes that should be treated as a single atomic transaction. These writes should be included as an array of data notification events in the `writes` property.
+
+And the following properties can be defined on event objects:
+
+* `type`: The event type as described above.
+* `id`: The primary key of the record that updated
+* `value`: The new value of the record that updated (for put and message)
+* `writes`: An array of event properties that are part of a transaction (used in conjunction with the transaction event type; see the sketch following this list).
+* `table`: The name of the table with the record that was updated. This can be used with events within a transaction to specify events across multiple tables.
+* `timestamp`: The timestamp of when the data change occurred
+
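+As an illustration of the `transaction` type and the `writes`/`table` properties, the sketch below yields a single atomic, multi-table change from a `subscribe` generator (the table names and values here are placeholders, not data from a real source):
+
+```javascript
+class ThirdPartyAPI extends Resource {
+	async *subscribe() {
+		// ...obtain a batch of related changes from the external source...
+		yield {
+			type: 'transaction',
+			timestamp: Date.now(),
+			writes: [
+				{ type: 'put', table: 'Post', id: 'post-1', value: { id: 'post-1', title: 'Hello' } },
+				{ type: 'delete', table: 'Comment', id: 'comment-9' },
+			],
+		};
+	}
+}
+```
+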
+With an active external data source with a `subscribe` method, the data source will proactively notify the cache, ensuring a fresh and efficient active cache. Note that with an active data source, we still use the `sourcedFrom` method to register the source for a caching table, and the table will automatically detect and call the subscribe method on the data source.
+
+By default, HarperDB will only run the subscribe method on one thread. HarperDB is multi-threaded and normally runs many concurrent worker threads, but typically running a subscription on multiple threads can introduce overlap in notifications and race conditions, so running a subscription on a single thread is preferable. However, if you want to enable subscribe on multiple threads, you can define a `static subscribeOnThisThread` method to specify if the subscription should run on the current thread:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+	static subscribeOnThisThread(threadIndex) {
+		return threadIndex < 2; // run on two threads (the first two threads)
+	}
+	async *subscribe() {
+		...
+```
+
+An alternative to using asynchronous generators is to use a subscription stream and send events to it. A default subscription stream (that doesn't generate its own events) is available from the Resource's default subscribe method:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+ subscribe() {
+ const subscription = super.subscribe();
+ setupListeningToRemoteService().on('update', (event) => {
+ subscription.send(event);
+ });
+ return subscription;
+ }
+}
+```
+
+## Downstream Caching
+
+It is highly recommended that you utilize the [REST interface](../rest) for accessing caching tables, as it facilitates downstream caching for clients. Timestamps are recorded with all cached entries. Timestamps are then used for incoming [REST requests to specify the `ETag` in the response](../rest#cachingconditional-requests). Clients can cache data themselves and send requests using the `If-None-Match` header to conditionally get a 304 and preserve their cached data based on the timestamp/`ETag` of the entries that are cached in HarperDB. Caching tables also have [subscription capabilities](./caching#subscribing-to-caching-tables), which means that downstream caches can be fully "layered" on top of HarperDB, as either passive or active caches.
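+
+For example, a downstream client can keep its own copy of a record and revalidate it with a conditional request. This is only a sketch; the host, port, and table name are placeholders:
+
+```javascript
+// Sketch of a downstream client reusing the ETag from the REST interface.
+let cachedBody, cachedEtag;
+
+async function getWithClientCache(id) {
+	const headers = cachedEtag ? { 'If-None-Match': cachedEtag } : {};
+	const response = await fetch(`http://localhost:9926/MyCache/${id}`, { headers });
+	if (response.status === 304) return cachedBody; // unchanged, reuse the local copy
+	cachedEtag = response.headers.get('ETag');
+	cachedBody = await response.json();
+	return cachedBody;
+}
+```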
+
+## Write-Through Caching
+
+The cache we have defined so far only has data flowing from the data source to the cache. However, you may wish to support write methods, so that writes to the cache table can flow through to the underlying canonical data source, as well as populate the cache. This can be accomplished by implementing the standard write methods, like `put` and `delete`. If you were using an API with standard RESTful methods, you can pass writes through to the data source like this:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+	async put(data) {
+		await fetch(`http://some-api.com/${this.getId()}`, {
+			method: 'PUT',
+			body: JSON.stringify(data)
+		});
+	}
+	async delete() {
+		await fetch(`http://some-api.com/${this.getId()}`, {
+			method: 'DELETE',
+		});
+	}
+	...
+```
+
+When doing an insert or update to the MyCache table, the data will be sent to the underlying data source through the `put` method and the new record value will be stored in the cache as well.
+
+### Loading from Source in Methods
+
+When you are using a caching table, it is important to remember that any resource method besides `get()` will not automatically load data from the source. If you have defined a `put()`, `post()`, or `delete()` method and you need the source data, you can ensure it is loaded by calling the `ensureLoaded()` method. For example, if you want to modify the existing record from the source, adding a property to it:
+
+```javascript
+class MyCache extends tables.MyCache {
+	async post(data) {
+		// if the data is not cached locally, retrieves from source:
+		await this.ensureLoaded();
+		// now we can be sure that the data is loaded, and can access properties
+		this.quantity = this.quantity - data.purchases;
+	}
+}
+```
+
+### Subscribing to Caching Tables
+
+You can subscribe to a caching table just like any other table. The one difference is that normal tables do not usually have `invalidate` events, but an active caching table may have `invalidate` events. Again, this event type gives listeners an opportunity to choose whether or not to actually retrieve the value that changed.
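+
+A minimal sketch of consuming such a subscription is shown below. It assumes the table class exposes a `subscribe()` method that resolves to an asynchronous iterable of events (as described above), and it simply logs invalidations rather than re-fetching the changed value:
+
+```javascript
+const { MyCache } = tables;
+
+async function watchCache() {
+	// the empty options object is a placeholder for whatever subscription options you need
+	for await (const event of await MyCache.subscribe({})) {
+		if (event.type === 'invalidate') {
+			// the source data changed; decide here whether to re-retrieve it
+			console.log(`Record ${event.id} was invalidated`);
+		}
+	}
+}
+```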
+
+### Caching with Replication
+
+Caching tables can be configured to replicate in HarperDB clusters. When replicating caching tables, there are a couple of options. If each node will be separately connected to the data source and you do not need the subscription data notification events to replicate, you can set the `replicationSource` to `false`. In this case, only data requests (that come through standard requests like the REST interface or operations API) will be replicated. However, if data notifications will only be delivered to a single node (at a time) and you need the subscription data notification events to replicate, you can set the `replicationSource` to `true` and the incoming events from the subscription will be replicated to all other nodes:
+
+```javascript
+MyCache.sourcedFrom(ThirdPartyAPI, { replicationSource: true });
+```
+
+### Passive-Active Updates
+
+With our passive update examples, we have provided a data source handler with a `get()` method that returns the specific requested record as the response. However, we can also actively update other records in our response handler (if our data source provides data that should be propagated to other related records). This can be done transactionally, to ensure that all updates occur atomically. The context that is provided to the data source holds the transaction information, so we can simply pass the context to any update/write methods that we call. For example, let's say we are loading a blog post, which also includes comment records:
+
+```javascript
+const { Post, Comment } = tables;
+class BlogSource extends Resource {
+	async get() {
+		let post = await (await fetch(`http://my-blog-server/${this.getId()}`)).json();
+		for (let comment of post.comments) {
+			await Comment.put(comment, this); // save this comment as part of our current context and transaction
+		}
+		return post;
+	}
+}
+Post.sourcedFrom(BlogSource);
+```
+
+Here both the update to the post and the update to the comments will be atomically/transactionally committed together with the same timestamp.
+
+## Cache-Control header
+
+When interacting with cached data, you can also use the `Cache-Control` request header to specify certain caching behaviors. When performing a PUT (or POST) method, you can use the `max-age` directive to indicate how long the resource should be cached (until stale):
+
+```http
+PUT /my-resource/id
+Cache-Control: max-age=86400
+```
+
+You can use the `only-if-cached` directive on GET requests to only return a resource if it is cached (otherwise it will return a 504). Note that if the entry is not cached, this will still trigger a request for the source data from the data source. If you do not want source data retrieved, you can add the `no-store` directive. You can also use the `no-cache` directive if you do not want to use the cached resource. If you wanted to check if there is a cached resource without triggering a request to the data source:
+
+```http
+GET /my-resource/id
+Cache-Control: only-if-cached, no-store
+```
+
+You may also use the `stale-if-error` directive to indicate that it is acceptable to return a stale cached resource when the data source returns an error (network connection error, 500, 502, 503, or 504). The `must-revalidate` directive can indicate that a stale cached resource cannot be returned, even when the data source has an error (by default a stale cached resource is returned when there is a network connection error).
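+
+As a final sketch (the URL and id are placeholders), a client can opt into or out of stale responses on origin errors with the same header:
+
+```javascript
+// Accept a stale cached copy if the data source errors:
+await fetch('http://localhost:9926/my-resource/id', {
+	headers: { 'Cache-Control': 'stale-if-error' },
+});
+
+// Never accept a stale copy, even if the data source errors:
+await fetch('http://localhost:9926/my-resource/id', {
+	headers: { 'Cache-Control': 'must-revalidate' },
+});
+```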
diff --git a/site/versioned_docs/version-4.2/developers/applications/debugging.md b/site/versioned_docs/version-4.2/developers/applications/debugging.md
new file mode 100644
index 00000000..ca03115f
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/applications/debugging.md
@@ -0,0 +1,39 @@
+---
+title: Debugging Applications
+---
+
+# Debugging Applications
+
+HarperDB components and applications run inside the HarperDB process, which is a standard Node.js process that can be debugged with standard JavaScript development tools like Chrome's devtools, VSCode, and WebStorm. Debugging can be performed by launching the HarperDB entry script with your IDE, or you can start HarperDB in dev mode and connect your debugger to the running process (defaults to the standard port 9229):
+
+```
+harperdb dev
+# or to run and debug a specific app
+harperdb dev /path/to/app
+```
+
+Once you have connected a debugger, you may set breakpoints in your application and fully debug it. Note that when using the `dev` command from the CLI, this will run HarperDB in single-threaded mode. This would not be appropriate for production use, but makes it easier to debug applications.
+
+For local debugging and development, it is recommended that you use standard console log statements for logging. For production use, you may want to use HarperDB's logging facilities, so you aren't logging to the console. The logging functions are available on the global `logger` variable that is provided by HarperDB. This logger can be used to output messages directly to the HarperDB log using standardized logging level functions, described below. The log level can be set in the [HarperDB Configuration File](../../deployments/configuration).
+
+HarperDB Logger Functions
+
+* `trace(message)`: Write a 'trace' level log, if the configured level allows for it.
+* `debug(message)`: Write a 'debug' level log, if the configured level allows for it.
+* `info(message)`: Write an 'info' level log, if the configured level allows for it.
+* `warn(message)`: Write a 'warn' level log, if the configured level allows for it.
+* `error(message)`: Write an 'error' level log, if the configured level allows for it.
+* `fatal(message)`: Write a 'fatal' level log, if the configured level allows for it.
+* `notify(message)`: Write a 'notify' level log.
+
+For example, you can log a warning:
+
+```javascript
+logger.warn('You have been warned');
+```
+
+If you want to ensure a message is logged, you can use `notify` as these messages will appear in the log regardless of log level configured.
+
+## Viewing the Log
+
+The HarperDB Log can be found in your local `~/hdb/log/hdb.log` file (or in the log folder if you have specified an alternate hdb root), or in the [Studio Status page](../../administration/harperdb-studio/instance-metrics). Additionally, you can use the [`read_log` operation](../operations-api/logs) to query the HarperDB log.
diff --git a/site/versioned_docs/version-4.2/developers/applications/define-routes.md b/site/versioned_docs/version-4.2/developers/applications/define-routes.md
new file mode 100644
index 00000000..222d14cf
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/applications/define-routes.md
@@ -0,0 +1,118 @@
+---
+title: Define Fastify Routes
+---
+
+# Define Fastify Routes
+
+HarperDB’s applications provide an extension for loading [Fastify](https://www.fastify.io/) routes as a way to handle endpoints. While we generally recommend building your endpoints/APIs with HarperDB's [REST interface](../rest) for better performance and standards compliance, Fastify routes can provide an extensive API for highly customized path handling. Below is a very simple example of a route declaration.
+
+The fastify route handler can be configured in your application's config.yaml (this is the default config if you used the [application template](https://github.com/HarperDB/application-template)):
+
+```yaml
+fastifyRoutes: # This loads files that define fastify routes using fastify's auto-loader
+  files: routes/*.js # specify the location of route definition modules
+  path: . # relative to the app-name, like http://server/app-name/route-name
+```
+
+By default, route URLs are configured to be:
+
+* \[**Instance URL**]:\[**Custom Functions Port**]/\[**Project Name**]/\[**Route URL**]
+
+However, you can specify the path to be `/` if you wish to have your routes handling the root path of incoming URLs.
+
+* The route below, using the default config, within the **dogs** project, with a route of **breeds** would be available at **http://localhost:9926/dogs/breeds**.
+
+In effect, this route is just a pass-through to HarperDB. The same result could have been achieved by hitting the core HarperDB API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the “helper methods” section, below.
+
+```javascript
+export default async (server, { hdbCore, logger }) => {
+ server.route({
+ url: '/',
+ method: 'POST',
+ preValidation: hdbCore.preValidation,
+ handler: hdbCore.request,
+ })
+}
+```
+
+## Custom Handlers
+
+For endpoints where you want to execute multiple operations against HarperDB, or perform additional processing (like an ML classification, or an aggregation, or a call to a 3rd party API), you can define your own logic in the handler. The function below will execute a query against the dogs table, and filter the results to only return those dogs over 4 years in age.
+
+**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which, as the name implies, bypasses all user authentication. See the security concerns and mitigations in the “helper methods” section, below.**
+
+```javascript
+export default async (server, { hdbCore, logger }) => {
+	server.route({
+		url: '/:id',
+		method: 'GET',
+		handler: async (request) => {
+			request.body = {
+				operation: 'sql',
+				sql: `SELECT * FROM dev.dog WHERE id = ${request.params.id}`
+			};
+
+			const result = await hdbCore.requestWithoutAuthentication(request);
+			return result.filter((dog) => dog.age > 4);
+		}
+	});
+}
+```
+
+## Custom preValidation Hooks
+
+The simple example above was just a pass-through to HarperDB; the exact same result could have been achieved by hitting the core HarperDB API. But for many applications, you may want to authenticate the user using custom logic you write, or by conferring with a 3rd party service. Custom preValidation hooks let you do just that.
+
+Below is an example of a route that uses a custom validation hook:
+
+```javascript
+import customValidation from '../helpers/customValidation';
+
+export default async (server, { hdbCore, logger }) => {
+	server.route({
+		url: '/:id',
+		method: 'GET',
+		preValidation: (request) => customValidation(request, logger),
+		handler: (request) => {
+			request.body = {
+				operation: 'sql',
+				sql: `SELECT * FROM dev.dog WHERE id = ${request.params.id}`
+			};
+
+			return hdbCore.requestWithoutAuthentication(request);
+		}
+	});
+}
+```
+
+Notice we imported customValidation from the **helpers** directory. To include a helper, and to see the actual code within customValidation, see [Helper Methods](#helper-methods).
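+
+The helper itself is whatever your application needs. Below is a purely hypothetical sketch of a `helpers/customValidation.js` module; the header name and bearer-token check are placeholders for your own auth scheme, not part of HarperDB:
+
+```javascript
+// helpers/customValidation.js (hypothetical example)
+export default async (request, logger) => {
+	const token = request.headers.authorization;
+	if (!token || !token.startsWith('Bearer ')) {
+		logger.warn('Rejected request with a missing or malformed token');
+		const error = new Error('Unauthorized');
+		error.statusCode = 401; // Fastify uses statusCode on thrown errors
+		throw error;
+	}
+	// ...verify the token with your own logic or a 3rd party service...
+};
+```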
+
+## Helper Methods
+
+When declaring routes, you are given access to 2 helper methods: hdbCore and logger.
+
+**hdbCore**
+
+hdbCore contains three functions that allow you to authenticate an inbound request, and execute operations against HarperDB directly, bypassing the standard Operations API.
+
+* **preValidation**
+
+ This is an array of functions used for fastify authentication. The second function takes the authorization header from the inbound request and executes the same authentication as the standard HarperDB Operations API (for example, `hdbCore.preValidation[1](req, resp, callback)`). It will determine if the user exists, and if they are allowed to perform this operation. **If you use the request method, you have to use preValidation to get the authenticated user**.
+* **request**
+
+ This will execute a request with HarperDB using the operations API. The `request.body` should contain a standard HarperDB operation and must also include the `hdb_user` property that was in `request.body` provided in the callback.
+* **requestWithoutAuthentication**
+
+ Executes a request against HarperDB without any security checks around whether the inbound user is allowed to make this request. For security purposes, you should always take the following precautions when using this method:
+
+ * Properly handle user-submitted values, including url params. User-submitted values should only be used for `search_value` and for defining values in records. Special care should be taken to properly escape any values if user-submitted values are used for SQL (see the sketch below for one way to avoid SQL interpolation).
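+
+For example, a hypothetical handler can avoid interpolating user input into SQL entirely by using a NoSQL search operation, where the URL parameter is only ever used as the `search_value` (a sketch following the Operations API, with example database and table names):
+
+```javascript
+export default async (server, { hdbCore, logger }) => {
+	server.route({
+		url: '/:id',
+		method: 'GET',
+		handler: (request) => {
+			request.body = {
+				operation: 'search_by_value',
+				database: 'dev',
+				table: 'dog',
+				search_attribute: 'id',
+				search_value: request.params.id, // user input used only as the search value
+				get_attributes: ['*'],
+			};
+			return hdbCore.requestWithoutAuthentication(request);
+		}
+	});
+}
+```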
+
+**logger**
+
+This helper allows you to write directly to the log file, hdb.log. It’s useful for debugging during development, although you may also use the console logger. There are 5 functions contained within logger, each of which pertains to a different **logging.level** configuration in your harperdb-config.yaml file.
+
+* logger.trace(‘Starting the handler for /dogs’)
+* logger.debug(‘This should only fire once’)
+* logger.warn(‘This should never ever fire’)
+* logger.error(‘This did not go well’)
+* logger.fatal(‘This did not go very well at all’)
diff --git a/site/versioned_docs/version-4.2/developers/applications/defining-schemas.md b/site/versioned_docs/version-4.2/developers/applications/defining-schemas.md
new file mode 100644
index 00000000..d1204f15
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/applications/defining-schemas.md
@@ -0,0 +1,103 @@
+---
+title: Defining Schemas
+---
+
+# Defining Schemas
+
+Schemas define tables and their attributes. Schemas can be declaratively defined in HarperDB using GraphQL schema definitions. Schema definitions can be used to ensure that tables required by applications exist and have the appropriate attributes. Schemas can define the primary key, data types for attributes, whether they are required, and which attributes should be indexed. The [introduction to applications](./) provides a helpful introduction to how to use schemas as part of database application development.
+
+Schemas can be used to define the expected structure of data, but they are also highly flexible; they support heterogeneous data structures and by default allow data to include additional properties. The standard types for GraphQL schemas are specified in the [GraphQL schema documentation](https://graphql.org/learn/schema/).
+
+An example schema that defines a couple of tables might look like:
+
+```graphql
+# schema.graphql:
+type Dog @table {
+ id: ID @primaryKey
+ name: String
+ breed: String
+ age: Int
+}
+
+type Breed @table {
+ id: ID @primaryKey
+}
+```
+
+In this example, you can see that we specified the expected data structure for records in the Dog and Breed table. For example, this will enforce that Dog records are required to have a `name` property with a string (or null, unless the type were specified to be non-nullable). This does not preclude records from having additional properties (see `@sealed` for preventing additional properties). For example, some Dog records could also optionally include a `favoriteTrick` property.
+
+On this page, we describe the specific directives that HarperDB uses for defining tables and attributes in a schema.
+
+### Type Directives
+
+#### `@table`
+
+The schema for tables are defined using GraphQL type definitions with a `@table` directive:
+
+```graphql
+type TableName @table
+```
+
+By default the table name is inherited from the type name (in this case the table name would be "TableName"). The `@table` directive supports several optional arguments (all of these are optional and can be freely combined):
+
+* `@table(table: "table_name")` - This allows you to explicitly specify the table name.
+* `@table(database: "database_name")` - This allows you to specify which database the table belongs to. This defaults to the "data" database.
+* `@table(expiration: 3600)` - Sets an expiration time on entries in the table before they are automatically cleared (primarily useful for caching tables). This is specified in seconds.
+* `@table(audit: true)` - This enables the audit log for the table so that a history of record changes is recorded. This defaults to the [configuration file's setting for `auditLog`](../../deployments/configuration#logging).
+
+#### `@export`
+
+This indicates that the specified table should be exported as a resource that is accessible as an externally available endpoint, through REST, MQTT, or any of the external resource APIs.
+
+This directive also accepts a `name` parameter to specify the name that should be used for the exported resource (how it will appear in the URL path). For example:
+
+```
+type MyTable @table @export(name: "my-table")
+```
+
+This table would be available at the URL path `/my-table/`. Without the `name` parameter, the exported name defaults to the name of the table type ("MyTable" in this example).
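+
+For instance, assuming the application's HTTP interface is listening on the default port 9926 and a record with id `1` exists, a client could read it with a simple fetch (a sketch, not specific to this table):
+
+```javascript
+// Read one record from the exported table over the REST interface.
+const response = await fetch('http://localhost:9926/my-table/1');
+const record = await response.json();
+```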
+
+#### `@sealed`
+
+The `@sealed` directive specifies that no additional properties should be allowed on records besides those specified in the type itself.
+
+### Field Directives
+
+The field directives can be used to provide information about each attribute in a table type definition.
+
+#### `@primaryKey`
+
+The `@primaryKey` directive specifies that an attribute is the primary key for a table. Primary keys must be unique, and when records are created, the key will be auto-generated as a UUID if no primary key is provided.
+
+#### `@indexed`
+
+The `@indexed` directive specifies that an attribute should be indexed. This is necessary if you want to execute queries using this attribute (whether that is through RESTful query parameters, SQL, or NoSQL operations).
+
+#### `@createdTime`
+
+The `@createdTime` directive indicates that this property should be assigned a timestamp of the creation time of the record (in epoch milliseconds).
+
+#### `@updatedTime`
+
+The `@updatedTime` directive indicates that this property should be assigned a timestamp of each updated time of the record (in epoch milliseconds).
+
+### Defined vs Dynamic Schemas
+
+If you do not define a schema for a table and create a table through the operations API (without specifying attributes) or studio, such a table will not have a defined schema and will follow the behavior of a ["dynamic-schema" table](../../technical-details/reference/dynamic-schema). It is generally best-practice to define schemas for your tables to ensure predictable, consistent structures with data integrity.
+
+### Field Types
+
+HarperDB supports the following field types in addition to user defined (object) types:
+
+* String: String/text.
+* Int: A 32-bit signed integer (from -2147483648 to 2147483647).
+* Long: A 54-bit signed integer (from -9007199254740992 to 9007199254740992).
+* Float: Any number that can be represented as a [64-bit double precision floating point number](https://en.wikipedia.org/wiki/Double-precision_floating-point_format). Note that all numbers are stored in the most compact representation available.
+* Boolean: true or false.
+* ID: A string (but indicates it is not intended to be legible).
+* Any: Any primitive, object, or array is allowed.
+* Date: A Date object.
+
+#### Renaming Tables
+
+It is important to note that HarperDB does not currently support renaming tables. If you change the name of a table in your schema definition, this will result in the creation of a new, empty table.
diff --git a/site/versioned_docs/version-4.2/developers/applications/example-projects.md b/site/versioned_docs/version-4.2/developers/applications/example-projects.md
new file mode 100644
index 00000000..2eb92ba4
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/applications/example-projects.md
@@ -0,0 +1,37 @@
+---
+title: Example Projects
+---
+
+# Example Projects
+
+**Library of example HarperDB applications and components:**
+
+* [Authorization in HarperDB using Okta Customer Identity Cloud](https://www.harperdb.io/post/authorization-in-harperdb-using-okta-customer-identity-cloud), by Yitaek Hwang
+
+* [How to Speed Up your Applications by Caching at the Edge with HarperDB](https://dev.to/doabledanny/how-to-speed-up-your-applications-by-caching-at-the-edge-with-harperdb-3o2l), by Danny Adams
+
+* [OAuth Authentication in HarperDB using Auth0 & Node.js](https://www.harperdb.io/post/oauth-authentication-in-harperdb-using-auth0-and-node-js), by Lucas Santos
+
+* [How To Create a CRUD API with Next.js & HarperDB Custom Functions](https://www.harperdb.io/post/create-a-crud-api-w-next-js-harperdb), by Colby Fayock
+
+* [Build a Dynamic REST API with Custom Functions](https://harperdb.io/blog/build-a-dynamic-rest-api-with-custom-functions/), by Terra Roush
+
+* [How to use HarperDB Custom Functions to Build your Entire Backend](https://dev.to/andrewbaisden/how-to-use-harperdb-custom-functions-to-build-your-entire-backend-a2m), by Andrew Baisden
+
+* [Using TensorFlowJS & HarperDB Custom Functions for Machine Learning](https://harperdb.io/blog/using-tensorflowjs-harperdb-for-machine-learning/), by Kevin Ashcraft
+
+* [Build & Deploy a Fitness App with Python & HarperDB](https://www.youtube.com/watch?v=KMkmA4i2FQc), by Patrick Löber
+
+* [Create a Discord Slash Bot using HarperDB Custom Functions](https://geekysrm.hashnode.dev/discord-slash-bot-with-harperdb-custom-functions), by Soumya Ranjan Mohanty
+
+* [How I used HarperDB Custom Functions to Build a Web App for my Newsletter](https://blog.hrithwik.me/how-i-used-harperdb-custom-functions-to-build-a-web-app-for-my-newsletter), by Hrithwik Bharadwaj
+
+* [How I used HarperDB Custom Functions and Recharts to create Dashboard](https://blog.greenroots.info/how-to-create-dashboard-with-harperdb-custom-functions-and-recharts), by Tapas Adhikary
+
+* [How To Use HarperDB Custom Functions With Your React App](https://dev.to/tyaga001/how-to-use-harperdb-custom-functions-with-your-react-app-2c43), by Ankur Tyagi
+
+* [Build a Web App Using HarperDB’s Custom Functions](https://www.youtube.com/watch?v=rz6prItVJZU), livestream by Jaxon Repp
+
+* [How to Web Scrape Using Python, Snscrape & Custom Functions](https://hackernoon.com/how-to-web-scrape-using-python-snscrape-and-harperdb), by Davis David
+
+* [What’s the Big Deal w/ Custom Functions](https://rss.com/podcasts/harperdb-select-star/278933/), Select* Podcast
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/developers/applications/index.md b/site/versioned_docs/version-4.2/developers/applications/index.md
new file mode 100644
index 00000000..bad0c09f
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/applications/index.md
@@ -0,0 +1,375 @@
+---
+title: Applications
+---
+
+# Applications
+
+## Overview of HarperDB Applications
+
+HarperDB is more than a database; it's a distributed clustering platform allowing you to package your schema, endpoints and application logic and deploy them to an entire fleet of HarperDB instances optimized for on-the-edge scalable data delivery.
+
+In this guide, we are going to explore the evermore extensible architecture that HarperDB 4.2 and greater provides by building a HarperDB component, a fundamental building-block of the HarperDB ecosystem.
+
+When working through this guide, we recommend you use the [HarperDB Application Template](https://github.com/HarperDB/application-template) repo as a reference.
+
+## Understanding the Component Application Architecture
+
+HarperDB provides several types of components. Any package that is added to HarperDB is called a "component", and components are generally categorized as either "applications", which deliver a set of endpoints for users, or "extensions", which are building blocks for features like authentication, additional protocols, and connectors that can be used by other components. Components can be added to the `hdb/components` directory and will be loaded by HarperDB when it starts. Components that are remotely deployed to HarperDB (through the studio or the operation API) are installed into the hdb/node\_modules directory. Using `harperdb run .` or `harperdb dev .` allows us to specifically load a certain application in addition to any that have been manually added to `hdb/components` or installed in `node_modules`.
+
+```mermaid
+flowchart LR
+ Client(Client)-->Endpoints
+ Client(Client)-->HTTP
+ Client(Client)-->Extensions
+ subgraph HarperDB
+ direction TB
+ Applications(Applications)-- "Schemas" --> Tables[(Tables)]
+ Applications-->Endpoints[/Custom Endpoints/]
+ Applications-->Extensions
+ Endpoints-->Tables
+ HTTP[/REST/HTTP/]-->Tables
+ Extensions[/Extensions/]-->Tables
+ end
+```
+
+## Getting up and Running
+
+### Pre-Requisites
+
+We assume you are running HarperDB version 4.2 or greater, which supports the HarperDB application architecture (in previous versions, this functionality was provided by 'custom functions').
+
+### Scaffolding our Application Directory
+
+Let's create and initialize a new directory for our application. It is recommended that you start by using the [HarperDB application template](https://github.com/HarperDB/application-template). Assuming you have `git` installed, you can create your project directory by cloning:
+
+```shell
+> git clone https://github.com/HarperDB/application-template my-app
+> cd my-app
+```
+
+
+
+You can also start with an empty application directory if you'd prefer.
+
+To create your own application from scratch, you may want to initialize it as an npm package with the `type` field set to `module` in the `package.json` so that you can use the ECMAScript module syntax used in this tutorial:
+
+```shell
+> mkdir my-app
+> cd my-app
+> npm init -y esnext
+```
+
+
+
+
+
+If you want to version control your application code, you can adjust the remote URL to your repository.
+
+Here's an example for a github repo:
+
+```shell
+> git remote set-url origin git@github.com:<your-account>/<your-repo>.git
+```
+Locally developing your application and then committing it to source control is a great way to manage your code and configuration, and then you can [directly deploy from your repository](#deploying-your-application).
+
+
+
+## Creating our first Table
+
+The core of a HarperDB application is the database, so let's create a database table!
+
+A quick and expressive way to define a table is through a [GraphQL Schema](https://graphql.org/learn/schema). Using your editor of choice, edit the file named `schema.graphql` in the root of the application directory, `my-app`, that we created above. To create a table, we will need to add a type named `Dog` with the `@table` directive (and you can remove the example table in the template):
+
+```graphql
+type Dog @table {
+ # properties will go here soon
+}
+```
+
+And then we'll add a primary key named `id` of type `ID`:
+
+_(Note: A GraphQL schema is a fast method to define tables in HarperDB, but you are by no means required to use GraphQL to query your application, nor should you necessarily do so)_
+
+```graphql
+type Dog @table {
+ id: ID @primaryKey
+}
+```
+
+Now we tell HarperDB to run this as an application:
+
+```shell
+> harperdb dev . # tell HarperDB cli to run current directory as an application in dev mode
+```
+
+HarperDB will now create the `Dog` table and its `id` attribute we just defined. Not only is this an easy way to create a table, but this schema is included in our application, which will ensure that this table exists wherever we deploy this application (to any HarperDB instance).
+
+## Adding Attributes to our Table
+
+Next, let's expand our `Dog` table by adding additional typed attributes for dog `name`, `breed` and `age`.
+
+```graphql
+type Dog @table {
+ id: ID @primaryKey
+ name: String
+ breed: String
+ age: Int
+}
+```
+
+This will ensure that new records must have these properties with these types.
+
+Because we ran `harperdb dev .` earlier (dev mode), HarperDB is now monitoring the contents of our application directory for changes and reloading when they occur. This means that once we save our schema file with these new attributes, HarperDB will automatically reload our application, read `my-app/schema.graphql`, and update the `Dog` table and attributes we just defined. The dev mode will also ensure that any logging or errors are immediately displayed in the console (rather than only in the log file).
+
+As a NoSQL database, HarperDB supports heterogeneous records (also referred to as documents), so you can freely specify additional properties on any record. If you do want to restrict the records to only defined properties, you can always do that by adding the `sealed` directive:
+
+```graphql
+type Dog @table @sealed {
+ id: ID @primaryKey
+ name: String
+ breed: String
+ age: Int
+ tricks: [String]
+}
+```
+
+If you are using HarperDB Studio, we can now [add JSON-formatted records](../../administration/harperdb-studio/manage-schemas-browse-data#add-a-record) to this new table in the studio or upload data as [CSV from a local file or URL](../../administration/harperdb-studio/manage-schemas-browse-data#load-csv-data). A third, more advanced, way to add data to your database is to use the [operations API](../operations-api/), which provides full administrative control over your new HarperDB instance and tables.
+
+## Adding an Endpoint
+
+Now that we have a running application with a database (with data if you imported any data), let's make this data accessible from a RESTful URL by adding an endpoint. To do this, we simply add the `@export` directive to our `Dog` table:
+
+```graphql
+type Dog @table @export {
+ id: ID @primaryKey
+ name: String
+ breed: String
+ age: Int
+ tricks: [String]
+}
+```
+
+By default the application HTTP server port is `9926` (this can be [configured here](../../deployments/configuration#http)), so the local URL would be [http://localhost:9926/Dog/](http://localhost:9926/Dog/) with a full REST API. We can PUT or POST data into this table using this new path, and then GET or DELETE from it as well (you can even view data directly from the browser). If you have not added any records yet, we could use a PUT or POST to add a record. PUT is appropriate if you know the id, and POST can be used to assign an id:
+
+```http
+POST /Dog/
+Content-Type: application/json
+
+{
+ "name": "Harper",
+ "breed": "Labrador",
+ "age": 3,
+ "tricks": ["sits"]
+}
+```
+
+With this, a record will be created and the auto-assigned id will be available through the `Location` header. If you added a record, you can visit the path `/Dog/` to view that record. Alternatively, the curl command `curl http://localhost:9926/Dog/` will achieve the same thing.
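+
+The same interaction can be scripted. Here is a minimal JavaScript sketch, assuming a local instance on the default port and the `Dog` table exported as above (the field values are just sample data):
+
+```javascript
+// create a new dog record; the server assigns an id and returns it in the Location header
+const response = await fetch('http://localhost:9926/Dog/', {
+  method: 'POST',
+  headers: { 'Content-Type': 'application/json' },
+  body: JSON.stringify({ name: 'Harper', breed: 'Labrador', age: 3, tricks: ['sits'] }),
+});
+const location = response.headers.get('Location'); // e.g. /Dog/<generated-id> (exact form may vary)
+const created = await (await fetch(`http://localhost:9926${location}`)).json();
+console.log(created);
+```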
+
+## Authenticating Endpoints
+
+These endpoints automatically support `Basic`, `Cookie`, and `JWT` authentication methods. See the documentation on [security](../security/) for more information on different levels of access.
+
+By default, HarperDB also automatically authorizes all requests from loopback IP addresses (from the same computer) as the superuser, to make it simple to interact for local development. If you want to test authentication/authorization, or enforce stricter security, you may want to disable the [`authentication.authorizeLocal` setting](../../deployments/configuration#authentication).
+
+### Content Negotiation
+
+These endpoints support various content types, including `JSON`, `CBOR`, `MessagePack` and `CSV`. Simply include an `Accept` header in your requests with the preferred content type. We recommend `CBOR` as a compact, efficient encoding with rich data types, but `JSON` is familiar and great for web application development, and `CSV` can be useful for exporting data to spreadsheets or other processing.
+
+HarperDB works with other important standard HTTP headers as well, and these endpoints are even capable of caching interactions:
+
+```
+Authorization: Basic
+Accept: application/cbor
+If-None-Match: "etag-id" # browsers can automatically provide this
+```
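+
+As a quick illustration, the `Accept` header is all that changes when you want a different encoding. A small JavaScript sketch, assuming the standard `text/csv` MIME type and the local instance used above:
+
+```javascript
+// ask the same endpoint to return CSV instead of JSON
+const csv = await (await fetch('http://localhost:9926/Dog/', {
+  headers: { Accept: 'text/csv' },
+})).text();
+console.log(csv);
+```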
+
+## Querying
+
+Querying your application database is straightforward and easy, as tables exported with the `@export` directive are automatically exposed via [REST endpoints](../rest). Simple queries can be crafted through [URL query parameters](https://en.wikipedia.org/wiki/Query_string).
+
+In order to maintain reasonable query speed on a database as it grows in size, it is critical to select and establish the proper indexes. So, before we add the `@export` declaration to our `Dog` table and begin querying it, let's take a moment to target some table properties for indexing. We'll use `name` and `breed` as indexed table properties on our `Dog` table. All we need to do to accomplish this is tag these properties with the `@indexed` directive:
+
+```graphql
+type Dog @table {
+ id: ID @primaryKey
+ name: String @indexed
+ breed: String @indexed
+ owner: String
+ age: Int
+ tricks: [String]
+}
+```
+
+And finally, we'll add the `@export` directive to expose the table as a RESTful endpoint
+
+```graphql
+type Dog @table @export {
+ id: ID @primaryKey
+ name: String @indexed
+ breed: String @indexed
+ owner: String
+ age: Int
+ tricks: [String]
+}
+```
+
+Now we can start querying. Again, we simply access the endpoint with query parameters (basic GET requests), like:
+
+```
+http://localhost:9926/Dog/?name=Harper
+http://localhost:9926/Dog/?breed=Labrador
+http://localhost:9926/Dog/?breed=Husky&name=Balto&select=id,name,breed
+```
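+
+The same queries can be issued from JavaScript; a small sketch, assuming the local instance used above:
+
+```javascript
+// query by an indexed property and select a subset of fields
+const params = new URLSearchParams({ breed: 'Labrador', select: 'id,name,age' });
+const labradors = await (await fetch(`http://localhost:9926/Dog/?${params}`)).json();
+console.log(labradors);
+```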
+
+Congratulations, you now have created a secure database application backend with a table, a well-defined structure, access controls, and a functional REST endpoint with query capabilities! See the [REST documentation for more information on HTTP access](../rest) and see the [Schema reference](./defining-schemas) for more options for defining schemas.
+
+## Deploying your Application
+
+This guide assumes that you're building a HarperDB application locally. If you have a cloud instance available, you can deploy it by doing the following:
+
+* Commit and push your application component directory code (i.e., the `my-app` directory) to a Github repo. In this tutorial we started with a clone of the application-template. To commit and push to your own repository, change the origin to your repo: `git remote set-url origin git@github.com:your-account/your-repo.git`
+* Go to the applications section of your target cloud instance in the [HarperDB Studio](https://studio.harperdb.io)
+* In the left-hand menu of the applications IDE, click 'deploy' and specify a package location reference that follows the [npm package specification](https://docs.npmjs.com/cli/v8/using-npm/package-spec) (i.e., a string like `HarperDB/Application-Template` or a URL like `https://github.com/HarperDB/application-template`, for example, that npm knows how to install).
+
+You can also deploy your application from your repository by directly using the [`deploy_component` operation](../operations-api/components#deploy-component).
+
+Once you have deployed your application to a HarperDB cloud instance, you can start scaling your application by adding additional instances in other regions.
+
+With a global traffic manager/load balancer configured, you can distribute incoming requests to the appropriate server. You can deploy and re-deploy your application to all the nodes in your mesh.
+
+Now, with an application that you can deploy, update, and re-deploy, you have an application that is horizontally and globally scalable!
+
+## Custom Functionality with JavaScript
+
+So far we have built an application entirely through schema configuration. However, if your application requires more custom functionality, you will probably want to employ your own JavaScript modules to implement more specific features and interactions. This gives you tremendous flexibility and control over how data is accessed and modified in HarperDB. Let's look at how we can use JavaScript to extend and define "resources" for custom functionality by adding a property to dog records, when they are returned, that includes their age in human years. In HarperDB, data is accessed through our [Resource API](../../technical-details/reference/resource), a standard interface for accessing data sources and tables and making them available to endpoints. Database tables are `Resource` classes, so extending the functionality of a table is as simple as extending its class.
+
+To define custom (JavaScript) resources as endpoints, we need to create a `resources.js` module (this goes in the root of your application folder). Endpoints can then be defined with Resource classes that are `export`ed. This can be done in addition to, or in lieu of, the `@export`ed types in the schema.graphql. If you are extending and exporting a table you defined in the schema, make sure you remove the `@export` directive from the schema so that the original table is not exported to the same endpoint/path as the class. Resource classes have methods that correspond to standard HTTP/REST methods, like `get`, `post`, `patch`, and `put`, to implement specific handling for any of these methods (for tables they all have default implementations). To do this, we get the `Dog` class from the defined tables, extend it, and export it:
+
+```javascript
+// resources.js:
+const { Dog } = tables; // get the Dog table from the HarperDB-provided set of tables (in the default database)
+
+export class DogWithHumanAge extends Dog {
+  get(query) {
+    this.humanAge = 15 + this.age * 5; // silly calculation of human age equivalent
+    return super.get(query);
+  }
+}
+```
+
+Here we exported the `DogWithHumanAge` class (exported with the same name), which directly maps to the endpoint path. Therefore, we now have a `/DogWithHumanAge/` endpoint based on this class, just like the direct table interface that was exported as `/Dog/`, but the new endpoint will return objects with the computed `humanAge` property. Resource classes provide getters/setters for every defined attribute, so accessing instance properties like `age` will get the value from the underlying record, and changed or newly assigned properties can be saved or included in the resource as it is returned and serialized. The `return super.get(query)` call at the end allows for any query parameters to be applied to the resource, such as selecting individual properties (with a [`select` query parameter](../rest#select-properties)).
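+
+For instance, fetching a single record through the extended resource might look like this (a sketch; the id value is illustrative):
+
+```javascript
+// retrieve one dog by primary key through the extended endpoint
+const dog = await (await fetch('http://localhost:9926/DogWithHumanAge/some-dog-id')).json();
+console.log(dog.humanAge); // computed in our custom get() above
+```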
+
+Often we may want to incorporate data from other tables or data sources into our data models. Next, let's say that we want a `Breed` table that holds detailed information about each breed, and we want to add that information to the returned dog object. We might define the Breed table as (back in schema.graphql):
+
+```graphql
+type Breed @table {
+ name: String @primaryKey
+ description: String @indexed
+ lifespan: Int
+ averageWeight: Float
+}
+```
+
+And next we will use this table in our `get()` method. We will call the new table's (static) `get()` method to retrieve a breed by id. To do this correctly, we access the table using our current context by passing in `this` as the second argument. This is important because it ensures that we are accessing the data atomically, in a consistent snapshot across tables. It provides automatic tracking of most-recently-updated timestamps across resources for caching purposes, allows contextual metadata (like the user who requested the data) to be shared, and ensures transactional atomicity for any writes (not needed in this get operation, but important for other operations). The resource methods are automatically wrapped with a transaction (which commits/finishes when the method completes), and this allows us to fully utilize multiple resources in our current transaction. With our own snapshot of the database for the Dog and Breed tables we can then access data like this:
+
+```javascript
+// resources.js:
+const { Dog, Breed } = tables; // get the Breed table too
+export class DogWithBreed extends Dog {
+  async get(query) {
+    let breedDescription = await Breed.get(this.breed, this);
+    this.breedDescription = breedDescription;
+    return super.get(query);
+  }
+}
+```
+
+The call to `Breed.get` will return an instance of the `Breed` resource class, which holds the record specified by the provided id/primary key. Like the `Dog` instance, we can access or change properties on the Breed instance.
+
+Here we have focused on customizing how we retrieve data, but we may also want to define custom actions for writing data. While the HTTP PUT method has a specific semantic definition (replace the current record), the HTTP POST method has much more open-ended semantics and is a good choice for custom actions. POST requests are handled by our Resource's `post()` method. Let's say that we want to define a POST handler that adds a new trick to the `tricks` array on a specific instance. We might do it like this, specifying an action so that different actions can be differentiated:
+
+```javascript
+export class CustomDog extends Dog {
+  async post(data) {
+    if (data.action === 'add-trick')
+      this.tricks.push(data.trick);
+  }
+}
+```
+
+And a POST request to /CustomDog/ would call this `post` method. The Resource class then automatically tracks changes you make to your resource instances and saves those changes when the transaction is committed (again, these methods are automatically wrapped in a transaction and committed once the request handler is finished). So when you push data onto the `tricks` array, this will be recorded and persisted when this method finishes and before sending a response to the client.
+
+The `post` method automatically marks the current instance as being updated. However, you can also explicitly specify that you are changing a resource by calling the `update()` method. If you want to modify a resource instance that you retrieved through a `get()` call (like the `Breed.get()` call above), you can call its `update()` method to ensure changes are saved (and will be committed in the current transaction).
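+
+A minimal sketch of that pattern, building on the `CustomDog` example above (the `action` name and the changed property are hypothetical, and the exact `update()` signature may vary; see the Resource API reference):
+
+```javascript
+export class CustomDog extends Dog {
+  async post(data) {
+    if (data.action === 'update-breed-description') {
+      // retrieve the related Breed record within the current transaction/context
+      const breed = await Breed.get(this.breed, this);
+      breed.description = data.description;
+      breed.update(); // explicitly mark the retrieved resource as changed so it is saved when the transaction commits
+    }
+  }
+}
+```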
+
+We can also define custom authorization capabilities. For example, we might want to specify that only the owner of a dog can make updates to a dog. We could add logic to our `post` or `put` method to do this, but we may want to separate the logic so these methods can be called separately without authorization checks. The [Resource API](../../technical-details/reference/resource) defines `allowRead`, `allowUpdate`, `allowCreate`, and `allowDelete` methods to easily configure individual capabilities. For example, we might do this:
+
+```javascript
+export class CustomDog extends Dog {
+  allowUpdate(user) {
+    return this.owner === user.username;
+  }
+}
+```
+
+Any methods that are not defined will fall back to HarperDB's default authorization procedure based on users' roles. If you are using/extending a table, this is based on HarperDB's [role based access](../security/users-and-roles). If you are extending the base `Resource` class, the default access requires super user permission.
+
+You can also use the `default` export to define the root path resource handler. For example:
+
+```javascript
+// resources.js:
+export default class CustomDog extends Dog {
+ ...
+```
+
+This will allow requests to a URL like `/` to be directly resolved to this resource.
+
+## Define Custom Data Sources
+
+We can also directly implement the Resource class and use it to create new data sources from scratch that can be used as endpoints. Custom resources can also be used as caching sources. Let's say that we defined a `Breed` table that was a cache of information about breeds from another source. We could implement a caching table like:
+
+```javascript
+const { Breed } = tables; // our Breed table
+class BreedSource extends Resource { // define a data source
+  async get() {
+    return (await fetch(`http://best-dog-site.com/${this.getId()}`)).json();
+  }
+}
+// define that our breed table is a cache of data from the data source above, with a specified expiration
+Breed.sourcedFrom(BreedSource, { expiration: 3600 });
+```
+
+The [caching documentation](./caching) provides much more information on how to use HarperDB's powerful caching capabilities and set up data sources.
+
+HarperDB provides a powerful JavaScript API with significant capabilities that go well beyond a "getting started" guide. See our documentation for more information on using the [`globals`](../../technical-details/reference/globals) and the [Resource interface](../../technical-details/reference/resource).
+
+## Configuring Applications/Components
+
+Every application or component can define its own configuration in a `config.yaml`. If you are using the application template, you will have a [default configuration in this config file](https://github.com/HarperDB/application-template/blob/main/config.yaml) (this is also the default configuration used if no config file is provided). Within the config file, you can configure how different files and resources are loaded and handled. The default configuration file itself is documented with directions. Each entry can specify any `files` that the loader will handle, and can also optionally specify what, if any, URL `path`s it will handle. A path of `/` means that the root URLs are handled by the loader, and a path of `.` indicates that URLs that start with this application's name are handled.
+
+This config file also allows you to define a location for static files (which are delivered as-is in response to incoming HTTP requests).
+
+Each configuration entry can have the following properties, in addition to properties that may be specific to the individual component:
+
+* `files`: This specifies the set of files that should be handled by the component. This is a glob pattern, so a set of files can be specified like "directory/**".
+* `path`: This is the URL path that is handled by this component.
+* `root`: This specifies the root directory for mapping file paths to the URLs. For example, if you want all the files in `web/**` to be available in the root URL path via the static handler, you could specify a root of `web`, to indicate that the web directory maps to the root URL path.
+* `package`: This is used to specify that this component is a third party package, and can be loaded from the specified package reference (which can be an NPM package, Github reference, URL, etc.).
+
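+For example, a hypothetical entry that serves the contents of a `web` directory as static files from the root URL path might look like this (the component name and paths are illustrative, not part of the default template):
+
+```yaml
+static:
+  files: 'web/**'   # which files this component handles
+  root: 'web'       # strip the web/ prefix when mapping files to URLs
+  path: '/'         # serve them from the root URL path
+```
+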
+## Define Fastify Routes
+
+Exporting resources will generate full RESTful endpoints. But you may prefer to define endpoints through a framework. HarperDB includes a resource plugin for defining routes with the Fastify web framework. Fastify is a full-featured framework with many plugins that provides sophisticated route definition capabilities.
+
+By default, applications are configured to load any modules in the `routes` directory (matching `routes/*.js`) with Fastify's autoloader, which will allow these modules to export a function to define fastify routes. See the [defining routes documentation](./define-routes) for more information on how to create Fastify routes.
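+
+For reference, a minimal sketch of such a module (a generic Fastify plugin; the file name and route are illustrative):
+
+```javascript
+// routes/status.js - picked up by the Fastify autoloader described above
+export default async function (fastify, opts) {
+  fastify.get('/status', async () => ({ ok: true, time: Date.now() }));
+}
+```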
+
+However, Fastify is not as fast as HarperDB's RESTful endpoints (about 10%-20% slower/more overhead), nor does it automate the generation of a full uniform interface with correct RESTful header interactions (for caching control), so generally HarperDB's REST interface is recommended for optimum performance and ease of use.
+
+## Restarting Your Instance
+
+Generally, HarperDB will auto-detect when files change and auto-restart the appropriate threads. However, if there are changes that aren't detected, you may manually restart, with the `restart_service` operation:
+
+```json
+{
+ "operation": "restart_service",
+ "service": "http_workers"
+}
+```
diff --git a/site/versioned_docs/version-4.2/developers/clustering/certificate-management.md b/site/versioned_docs/version-4.2/developers/clustering/certificate-management.md
new file mode 100644
index 00000000..58243cb7
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/clustering/certificate-management.md
@@ -0,0 +1,70 @@
+---
+title: Certificate Management
+---
+
+# Certificate Management
+
+## Development
+
+Out of the box HarperDB generates certificates that are used when HarperDB nodes are clustered together to securely share data between nodes. These certificates are meant for testing and development purposes. Because these certificates do not have Common Names (CNs) that will match the Fully Qualified Domain Name (FQDN) of the HarperDB node, the following settings (see the full [configuration file](../../deployments/configuration) docs for more details) are defaulted & recommended for ease of development:
+
+```
+clustering:
+  tls:
+    certificate: ~/hdb/keys/certificate.pem
+    certificateAuthority: ~/hdb/keys/ca.pem
+    privateKey: ~/hdb/keys/privateKey.pem
+    insecure: true
+    verify: true
+```
+
+The certificates that HarperDB generates are stored in the `keys` directory under your HarperDB root path (for example, `~/hdb/keys/`).
+
+`insecure` is set to `true` to accept the certificate CN mismatch due to development certificates.
+
+`verify` is set to `true` to enable mutual TLS between the nodes.
+
+## Production
+
+In a production environment, we recommend using your own certificate authority (CA), or a public CA such as LetsEncrypt to generate certs for your HarperDB cluster. This will let you generate certificates with CNs that match the FQDN of your nodes.
+
+Once you generate new certificates, to make HarperDB start using them you can either replace the generated files with your own, or update the configuration to point to your new certificates, and then restart HarperDB.
+
+Since these new certificates can be issued with correct CNs, you should set `insecure` to `false` so that nodes will do full validation of the certificates of the other nodes.
+
+### Certificate Requirements
+
+* Certificates must have an `Extended Key Usage` that defines both `TLS Web Server Authentication` and `TLS Web Client Authentication` as these certificates will be used to accept connections from other HarperDB nodes and to make requests to other HarperDB nodes. Example:
+
+```
+X509v3 Key Usage: critical
+ Digital Signature, Key Encipherment
+X509v3 Extended Key Usage:
+ TLS Web Server Authentication, TLS Web Client Authentication
+```
+
+* If you are using an intermediate CA to issue the certificates, the entire certificate chain (to the root CA) must be included in the `certificateAuthority` file.
+* If your certificates expire you will need a way to issue new certificates to the nodes and then restart HarperDB. If you are using a public CA such as LetsEncrypt, a tool like `certbot` can be used to renew certificates.
+
+### Certificate Troubleshooting
+
+If you are having TLS issues with clustering, use the following steps to verify that your certificates are valid.
+
+1. Make sure certificates can be parsed and that you can view the contents:
+
+```
+openssl x509 -in <certificate>.pem -noout -text
+```
+
+1. Make sure the certificate validates with the CA:
+
+```
+openssl verify -CAfile <certificate_authority>.pem <certificate>.pem
+```
+
+1. Make sure the certificate and private key are a valid pair by verifying that the output of the following commands match:
+
+```
+openssl rsa -modulus -noout -in <private_key>.pem | openssl md5
+openssl x509 -modulus -noout -in <certificate>.pem | openssl md5
+```
diff --git a/site/versioned_docs/version-4.2/developers/clustering/creating-a-cluster-user.md b/site/versioned_docs/version-4.2/developers/clustering/creating-a-cluster-user.md
new file mode 100644
index 00000000..3edecd29
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/clustering/creating-a-cluster-user.md
@@ -0,0 +1,59 @@
+---
+title: Creating a Cluster User
+---
+
+# Creating a Cluster User
+
+Inter-node authentication takes place via HarperDB users. There is a special role type called `cluster_user` that exists by default and limits the user to only clustering functionality.
+
+A `cluster_user` must be created and added to the `harperdb-config.yaml` file for clustering to be enabled.
+
+All nodes that are intended to be clustered together need to share the same `cluster_user` credentials (i.e. username and password).
+
+There are multiple ways a `cluster_user` can be created, they are:
+
+1. Through the operations API by calling `add_user`
+
+```json
+{
+ "operation": "add_user",
+ "role": "cluster_user",
+ "username": "cluster_account",
+ "password": "letsCluster123!",
+ "active": true
+}
+```
+
+When using the API to create a cluster user the `harperdb-config.yaml` file must be updated with the username of the new cluster user.
+
+This can be done through the API by calling `set_configuration` or by editing the `harperdb-config.yaml` file.
+
+```json
+{
+ "operation": "set_configuration",
+ "clustering_user": "cluster_account"
+}
+```
+
+In the `harperdb-config.yaml` file under the top-level `clustering` element there will be a user element. Set this to the name of the cluster user.
+
+```yaml
+clustering:
+ user: cluster_account
+```
+
+_Note: When making any changes to the `harperdb-config.yaml` file, HarperDB must be restarted for the changes to take effect._
+
+1. Upon installation using **command line variables**. This will automatically set the user in the `harperdb-config.yaml` file.
+
+_Note: Using command line or environment variables for setting the cluster user only works on install._
+
+```
+harperdb install --CLUSTERING_USER cluster_account --CLUSTERING_PASSWORD letsCluster123!
+```
+
+1. Upon installation using **environment variables**. This will automatically set the user in the `harperdb-config.yaml` file.
+
+```
+CLUSTERING_USER=cluster_account CLUSTERING_PASSWORD=letsCluster123
+```
diff --git a/site/versioned_docs/version-4.2/developers/clustering/enabling-clustering.md b/site/versioned_docs/version-4.2/developers/clustering/enabling-clustering.md
new file mode 100644
index 00000000..6b563b19
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/clustering/enabling-clustering.md
@@ -0,0 +1,49 @@
+---
+title: Enabling Clustering
+---
+
+# Enabling Clustering
+
+Clustering does not run by default; it needs to be enabled.
+
+To enable clustering the `clustering.enabled` configuration element in the `harperdb-config.yaml` file must be set to `true`.
+
+There are multiple ways to update this element, they are:
+
+1. Directly editing the `harperdb-config.yaml` file and setting enabled to `true`
+
+```yaml
+clustering:
+ enabled: true
+```
+
+_Note: When making any changes to the `harperdb-config.yaml` file HarperDB must be restarted for the changes to take effect._
+
+1. Calling `set_configuration` through the operations API
+
+```json
+{
+ "operation": "set_configuration",
+ "clustering_enabled": true
+}
+```
+
+_Note: When making any changes to HarperDB configuration HarperDB must be restarted for the changes to take effect._
+
+1. Using **command line variables**.
+
+```
+harperdb --CLUSTERING_ENABLED true
+```
+
+1. Using **environment variables**.
+
+```
+CLUSTERING_ENABLED=true
+```
+
+An efficient way to **install HarperDB**, **create the cluster user**, **set the node name** and **enable clustering** in one operation is to combine the steps using command line and/or environment variables. Here is an example using command line variables.
+
+```
+harperdb install --CLUSTERING_ENABLED true --CLUSTERING_NODENAME Node1 --CLUSTERING_USER cluster_account --CLUSTERING_PASSWORD letsCluster123!
+```
diff --git a/site/versioned_docs/version-4.2/developers/clustering/establishing-routes.md b/site/versioned_docs/version-4.2/developers/clustering/establishing-routes.md
new file mode 100644
index 00000000..915c2844
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/clustering/establishing-routes.md
@@ -0,0 +1,73 @@
+---
+title: Establishing Routes
+---
+
+# Establishing Routes
+
+A route is a connection between two nodes. It is how the clustering network is established.
+
+Routes do not need to cross-connect all nodes in the cluster. You can select a leader node (or a few leaders) and have all other nodes connect to them, you can chain nodes together, and so on. As long as there is one route connecting a node to the cluster, all other nodes should be able to reach that node.
+
+Using routes, the clustering servers will create a mesh network between nodes. This mesh network ensures that if a node drops out, all other nodes can still communicate with each other. That being said, we recommend designing your routing with failover in mind; this means not storing all your routes on one node, but dispersing them throughout the network.
+
+A simple example is a two-node topology: if Node1 adds a route to connect it to Node2, Node2 does not need to add a route back to Node1. That one route configuration is all that’s needed to establish a bidirectional connection between the nodes.
+
+A route consists of a `port` and a `host`.
+
+`port` - the clustering port of the remote instance you are creating the connection with. This is going to be the `clustering.hubServer.cluster.network.port` in the HarperDB configuration on the node you are connecting with.
+
+`host` - the host of the remote instance you are creating the connection with. This can be an IP address or a URL.
+
+Routes are set in the `harperdb-config.yaml` file using the `clustering.hubServer.cluster.network.routes` element, which expects an object array, where each object has two properties, `port` and `host`.
+
+```yaml
+clustering:
+  hubServer:
+    cluster:
+      network:
+        routes:
+          - host: 3.62.184.22
+            port: 9932
+          - host: 3.735.184.8
+            port: 9932
+```
+
+
+
+This diagram shows one way of using routes to connect a network of nodes. Node2 and Node3 do not reference any routes in their config. Node1 contains routes for Node2 and Node3, which is enough to establish a network between all three nodes.
+
+There are multiple ways to set routes, they are:
+
+1. Directly editing the `harperdb-config.yaml` file (refer to code snippet above).
+1. Calling `cluster_set_routes` through the API.
+
+```json
+{
+ "operation": "cluster_set_routes",
+ "server": "hub",
+ "routes":[ {"host": "3.735.184.8", "port": 9932} ]
+}
+```
+
+_Note: When making any changes to HarperDB configuration HarperDB must be restarted for the changes to take effect._
+
+1. From the command line.
+
+```bash
+--CLUSTERING_HUBSERVER_CLUSTER_NETWORK_ROUTES "[{\"host\": \"3.735.184.8\", \"port\": 9932}]"
+```
+
+1. Using environment variables.
+
+```bash
+CLUSTERING_HUBSERVER_CLUSTER_NETWORK_ROUTES=[{"host": "3.735.184.8", "port": 9932}]
+```
+
+The API also has `cluster_get_routes` for getting all routes in the config and `cluster_delete_routes` for deleting routes.
+
+```json
+{
+ "operation": "cluster_delete_routes",
+ "routes":[ {"host": "3.735.184.8", "port": 9932} ]
+}
+```
diff --git a/site/versioned_docs/version-4.2/developers/clustering/index.md b/site/versioned_docs/version-4.2/developers/clustering/index.md
new file mode 100644
index 00000000..f5949afd
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/clustering/index.md
@@ -0,0 +1,31 @@
+---
+title: Clustering
+---
+
+# Clustering
+
+HarperDB clustering is the process of connecting multiple HarperDB databases together to create a database mesh network that enables users to define data replication patterns.
+
+HarperDB’s clustering engine replicates data between instances of HarperDB using a highly performant, bi-directional pub/sub model on a per-table basis. Data replicates asynchronously with eventual consistency across the cluster, following the defined pub/sub configuration. Individual transactions are sent in the order in which they were transacted; once received by the destination instance, they are processed in an ACID-compliant manner. Conflict resolution follows a last-writer-wins model based on the transaction timestamp recorded on the incoming transaction and the timestamp on the existing record on the node.
+
+***
+
+### Common Use Case
+
+A common use case is an edge application collecting and analyzing sensor data that creates an alert if a sensor value exceeds a given threshold:
+
+* The edge application should not be making outbound http requests for security purposes.
+* There may not be a reliable network connection.
+* Not all sensor data will be sent to the cloud--either because of the unreliable network connection, or maybe it’s just a pain to store it.
+* The edge node should be inaccessible from outside the firewall.
+* The edge node will send alerts to the cloud with a snippet of sensor data containing the offending sensor readings.
+
+HarperDB simplifies the architecture of such an application with its bi-directional, table-level replication:
+
+* The edge instance subscribes to a “thresholds” table on the cloud instance, so the application only makes localhost calls to get the thresholds.
+* The application continually pushes sensor data into a “sensor\_data” table via the localhost API, comparing it to the threshold values as it does so.
+* When a threshold violation occurs, the application adds a record to the “alerts” table.
+* The application appends to that record array “sensor\_data” entries for the 60 seconds (or minutes, or days) leading up to the threshold violation.
+* The edge instance publishes the “alerts” table up to the cloud instance.
+
+By letting HarperDB focus on the fault-tolerant logistics of transporting your data, you get to write less code. By moving data only when and where it’s needed, you lower storage and bandwidth costs. And by restricting your app to only making local calls to HarperDB, you reduce the overall exposure of your application to outside forces.
diff --git a/site/versioned_docs/version-4.2/developers/clustering/managing-subscriptions.md b/site/versioned_docs/version-4.2/developers/clustering/managing-subscriptions.md
new file mode 100644
index 00000000..a1f8c56e
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/clustering/managing-subscriptions.md
@@ -0,0 +1,168 @@
+---
+title: Managing subscriptions
+---
+
+# Managing subscriptions
+
+Subscriptions can be added, updated, or removed through the API.
+
+_Note: The schema and tables in the subscription must exist on either the local or the remote node. Any schema and tables that do not exist on one particular node (for example, the local node) will be automatically created on that node._
+
+To add a single node and create one or more subscriptions use `add_node`.
+
+```json
+{
+ "operation": "add_node",
+ "node_name": "Node2",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "publish": false,
+ "subscribe": true
+ },
+ {
+ "schema": "dev",
+ "table": "chicken",
+ "publish": true,
+ "subscribe": true
+ }
+ ]
+}
+```
+
+This is an example of adding Node2 to your local node. Subscriptions are created for two tables, dog and chicken.
+
+To update one or more subscriptions with a single node use `update_node`.
+
+```json
+{
+ "operation": "update_node",
+ "node_name": "Node2",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "publish": true,
+ "subscribe": true
+ }
+ ]
+}
+```
+
+This call will update the subscription with the dog table. Any other subscriptions with Node2 will not change.
+
+To add or update subscriptions with one or more nodes in one API call use `configure_cluster`.
+
+```json
+{
+ "operation": "configure_cluster",
+ "connections": [
+ {
+ "node_name": "Node2",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "chicken",
+ "publish": false,
+ "subscribe": true
+ },
+ {
+ "schema": "prod",
+ "table": "dog",
+ "publish": true,
+ "subscribe": true
+ }
+ ]
+ },
+ {
+ "node_name": "Node3",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "chicken",
+ "publish": true,
+ "subscribe": false
+ }
+ ]
+ }
+ ]
+}
+```
+
+_Note: `configure_cluster` will override **any and all** existing subscriptions defined on the local node. This means that before going through the connections in the request and adding the subscriptions, it will first go through **all existing subscriptions the local node has** and remove them. To get all existing subscriptions use `cluster_status`._
+
+#### Start time
+
+There is an optional property called `start_time` that can be passed in the subscription. This property accepts an ISO formatted UTC date.
+
+`start_time` can be used to set from what time you would like to source transactions from a table when creating or updating a subscription.
+
+```json
+{
+ "operation": "add_node",
+ "node_name": "Node2",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "publish": false,
+ "subscribe": true,
+ "start_time": "2022-09-02T20:06:35.993Z"
+ }
+ ]
+}
+```
+
+This example will get all transactions on Node2’s dog table starting from `2022-09-02T20:06:35.993Z` and replicate them locally on the dog table.
+
+If no start time is passed it defaults to the current time.
+
+_Note: start time utilizes clustering to source past transactions. For this reason, it can only source transactions that occurred while clustering was enabled._
+
+#### Remove node
+
+To remove a node and all its subscriptions use `remove_node`.
+
+```json
+{
+ "operation":"remove_node",
+ "node_name":"Node2"
+}
+```
+
+#### Cluster status
+
+To get the status of all connected nodes and see their subscriptions use `cluster_status`.
+
+```json
+{
+ "node_name": "Node1",
+ "is_enabled": true,
+ "connections": [
+ {
+ "node_name": "Node2",
+ "status": "open",
+ "ports": {
+ "clustering": 9932,
+ "operations_api": 9925
+ },
+ "latency_ms": 65,
+ "uptime": "11m 19s",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "publish": true,
+ "subscribe": true
+ }
+ ],
+ "system_info": {
+ "hdb_version": "4.0.0",
+ "node_version": "16.17.1",
+ "platform": "linux"
+ }
+ }
+ ]
+}
+```
diff --git a/site/versioned_docs/version-4.2/developers/clustering/naming-a-node.md b/site/versioned_docs/version-4.2/developers/clustering/naming-a-node.md
new file mode 100644
index 00000000..d1ebdfb1
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/clustering/naming-a-node.md
@@ -0,0 +1,45 @@
+---
+title: Naming a Node
+---
+
+# Naming a Node
+
+Node name is the name given to a node. It is how nodes are identified within the cluster and must be unique to the cluster.
+
+The name cannot contain any of the following characters: dot (`.`), comma (`,`), asterisk (`*`), greater than (`>`), or whitespace.
+
+The name is set in the `harperdb-config.yaml` file using the `clustering.nodeName` configuration element.
+
+_Note: If you want to change the node name make sure there are no subscriptions in place before doing so. After the name has been changed a full restart is required._
+
+There are multiple ways to update this element, they are:
+
+1. Directly editing the `harperdb-config.yaml` file.
+
+```yaml
+clustering:
+ nodeName: Node1
+```
+
+_Note: When making any changes to the `harperdb-config.yaml` file HarperDB must be restarted for the changes to take effect._
+
+1. Calling `set_configuration` through the operations API
+
+```json
+{
+ "operation": "set_configuration",
+ "clustering_nodeName":"Node1"
+}
+```
+
+1. Using command line variables.
+
+```
+harperdb --CLUSTERING_NODENAME Node1
+```
+
+1. Using environment variables.
+
+```
+CLUSTERING_NODENAME=Node1
+```
diff --git a/site/versioned_docs/version-4.2/developers/clustering/requirements-and-definitions.md b/site/versioned_docs/version-4.2/developers/clustering/requirements-and-definitions.md
new file mode 100644
index 00000000..1e2dd6af
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/clustering/requirements-and-definitions.md
@@ -0,0 +1,11 @@
+---
+title: Requirements and Definitions
+---
+
+# Requirements and Definitions
+
+To create a cluster you must have two or more nodes\* (aka instances) of HarperDB running.
+
+\*_A node is a single instance/installation of HarperDB. A node of HarperDB can operate independently with clustering on or off._
+
+On the following pages we'll walk you through the steps required, in order, to set up a HarperDB cluster.
diff --git a/site/versioned_docs/version-4.2/developers/clustering/subscription-overview.md b/site/versioned_docs/version-4.2/developers/clustering/subscription-overview.md
new file mode 100644
index 00000000..63246c4f
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/clustering/subscription-overview.md
@@ -0,0 +1,45 @@
+---
+title: Subscription Overview
+---
+
+# Subscription Overview
+
+A subscription defines how data should move between two nodes. Subscriptions are exclusively table level and operate independently. A subscription connects a table on one node to a table on another node; it will apply to a matching schema name and table name on both nodes.
+
+_Note: ‘local’ and ‘remote’ will often be referred to. In the context of these docs ‘local’ is the node that is receiving the API request to create/update a subscription and remote is the other node that is referred to in the request, the node on the other end of the subscription._
+
+A subscription consists of:
+
+`schema` - the name of the schema that the table you are creating the subscription for belongs to.
+
+`table` - the name of the table the subscription will apply to.
+
+`publish` - a boolean which determines if transactions on the local table should be replicated on the remote table.
+
+`subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table.
+
+#### Publish subscription
+
+
+
+This diagram is an example of a `publish` subscription from the perspective of Node1.
+
+The record with id 2 has been inserted in the dog table on Node1; after that insert has completed, it is sent to Node2 and inserted in the dog table there.
+
+#### Subscribe subscription
+
+
+
+This diagram is an example of a `subscribe` subscription from the perspective of Node1.
+
+The record with id 3 has been inserted in the dog table on Node2; after that insert has completed, it is sent to Node1 and inserted there.
+
+#### Subscribe and Publish
+
+
+
+This diagram shows both subscribe and publish but publish is set to false. You can see that because subscribe is true the insert on Node2 is being replicated on Node1 but because publish is set to false the insert on Node1 is _**not**_ being replicated on Node2.
+
+
+
+This shows both subscribe and publish set to true. The insert on Node1 is replicated on Node2 and the update on Node2 is replicated on Node1.
diff --git a/site/versioned_docs/version-4.2/developers/clustering/things-worth-knowing.md b/site/versioned_docs/version-4.2/developers/clustering/things-worth-knowing.md
new file mode 100644
index 00000000..a140e4d3
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/clustering/things-worth-knowing.md
@@ -0,0 +1,43 @@
+---
+title: Things Worth Knowing
+---
+
+# Things Worth Knowing
+
+Additional information that will help you define your clustering topology.
+
+***
+
+### Transactions
+
+Transactions that are replicated across the cluster are:
+
+* Insert
+* Update
+* Upsert
+* Delete
+* Bulk loads
+ * CSV data load
+ * CSV file load
+ * CSV URL load
+ * Import from S3
+
+When adding or updating a node any schemas and tables in the subscription that don’t exist on the remote node will be automatically created.
+
+**Destructive schema operations do not replicate across a cluster**. Those operations include `drop_schema`, `drop_table`, and `drop_attribute`. If the desired outcome is to drop schema information from any nodes then the operation(s) will need to be run on each node independently.
+
+Users and roles are not replicated across the cluster.
+
+***
+
+### Queueing
+
+HarperDB has built-in resiliency for when network connectivity is lost within a subscription. When connections are reestablished, a catchup routine is executed to ensure data that was missed, specific to the subscription, is sent/received as defined.
+
+***
+
+### Topologies
+
+HarperDB clustering creates a mesh network between nodes, giving end users the ability to create an infinite number of topologies. Subscription topologies can be as simple or as complex as needed.
+
+
diff --git a/site/versioned_docs/version-4.2/developers/components/drivers.md b/site/versioned_docs/version-4.2/developers/components/drivers.md
new file mode 100644
index 00000000..0f1c063e
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/components/drivers.md
@@ -0,0 +1,12 @@
+---
+title: Drivers
+description: >-
+  Industry standard tools for connecting real-time HarperDB data with BI, analytics,
+  reporting and data visualization technologies.
+---
+
+# Drivers
+
+
+
+
diff --git a/site/versioned_docs/version-4.2/developers/components/google-data-studio.md b/site/versioned_docs/version-4.2/developers/components/google-data-studio.md
new file mode 100644
index 00000000..e33fb2bd
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/components/google-data-studio.md
@@ -0,0 +1,37 @@
+---
+title: Google Data Studio
+---
+
+# Google Data Studio
+
+[Google Data Studio](https://datastudio.google.com/) is a free collaborative visualization tool which enables users to build configurable charts and tables quickly. The HarperDB Google Data Studio connector seamlessly integrates your HarperDB data with Google Data Studio so you can build custom, real-time data visualizations.
+
+The HarperDB Google Data Studio Connector is subject to our [Terms of Use](https://harperdb.io/legal/harperdb-cloud-terms-of-service/) and [Privacy Policy](https://harperdb.io/legal/privacy-policy/).
+
+## Requirements
+
+The HarperDB database must be accessible through the Internet in order for Google Data Studio servers to access it. The database may be hosted by you or via [HarperDB Cloud](../../deployments/harperdb-cloud/).
+
+## Get Started
+
+Get started by selecting the HarperDB connector from the [Google Data Studio Partner Connector Gallery](https://datastudio.google.com/u/0/datasources/create).
+
+1. Log in to https://datastudio.google.com/.
+1. Add a new Data Source using the HarperDB connector. The current release version can be added as a data source by following this link: [HarperDB Google Data Studio Connector](https://datastudio.google.com/datasources/create?connectorId=AKfycbxBKgF8FI5R42WVxO-QCOq7dmUys0HJrUJMkBQRoGnCasY60_VJeO3BhHJPvdd20-S76g).
+1. Authorize the connector to access other servers on your behalf (this allows the connector to contact your database).
+1. Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word “Basic” at the start of it.
+1. Check the box for “Secure Connections Only” if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer.
+1. Check the box for “Allow Bad Certs” if your HarperDB instance does not have a valid SSL certificate. [HarperDB Cloud](../../deployments/harperdb-cloud/) always has valid certificates, and so will never require this to be checked. Instances you set up yourself may require this, if you are using self-signed certs. If you are using [HarperDB Cloud](../../deployments/harperdb-cloud/) or another instance you know should always have valid SSL certificates, do not check this box.
+1. Choose your Query Type. This determines what information the configuration will ask for after pressing the Next button.
+   * Table will ask you for a Schema and a Table to return all fields of using `SELECT *`.
+   * SQL will ask you for the SQL query you’re using to retrieve fields from the database. You may `JOIN` multiple tables together, and use HarperDB-specific SQL functions, along with the usual power that SQL grants.
+1. When all information is entered correctly, press the Connect button in the top right of the new Data Source view to generate the Schema. You may also want to name the data source at this point. If the connector encounters any errors, a dialog box will tell you what went wrong so you can correct the issue.
+1. If there are no errors, you now have a data source you can use in your reports! You may change the types of the generated fields in the Schema view if you need to (for instance, changing a Number field to a specific currency), as well as creating new fields from the report view that do calculations on other fields.
+
+## Considerations
+
+* Both Postman and the [HarperDB Studio](../../administration/harperdb-studio/) app have ways to convert a user:password pair to a Basic Auth token. Use either to create the token for the connector’s user, or generate one yourself as shown in the sketch after this list.
+  * You may sign out of your current user by going to the instances tab in HarperDB Studio, then clicking on the lock icon at the top-right of a given instance’s box. Click the lock again to sign in as any user. The Basic Auth token will be visible in the Authorization header portion of any code created in the Sample Code tab.
+* It’s highly recommended that you create a read-only user role in HarperDB Studio, and create a user with that role for your data sources to use. This prevents that authorization token from being used to alter your database, should someone else ever get ahold of it.
+* The RecordCount field is intended for use as a metric, for counting how many instances of a given set of values appear in a report’s data set.
+* _Do not attempt to create fields with spaces in their names_ for any data sources! Google Data Studio will crash when attempting to retrieve a field with such a name, producing a System Error instead of a useful chart on your reports. Using CamelCase or snake\_case gets around this.
diff --git a/site/versioned_docs/version-4.2/developers/components/index.md b/site/versioned_docs/version-4.2/developers/components/index.md
new file mode 100644
index 00000000..4901c49f
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/components/index.md
@@ -0,0 +1,38 @@
+---
+title: Components
+---
+
+# Components
+
+HarperDB is a highly extensible database application platform with support for a rich variety of composable, modular components that can be used and combined to build applications and add functionality to existing applications. HarperDB tools, components, and add-ons can be found in a few places:
+
+* [SDK libraries](./sdks) are available for connecting to HarperDB from different languages.
+* [Drivers](./drivers) are available for connecting to HarperDB from different products and tools.
+* [HarperDB-Add-Ons repositories](https://github.com/orgs/HarperDB-Add-Ons/repositories) lists various templates and add-ons for HarperDB.
+* [HarperDB repositories](https://github.com/orgs/HarperDB-Add-Ons/repositories) include additional tools for HarperDB.
+* You can also [search github.com for an ever-growing list of projects that use, or work with, HarperDB](https://github.com/search?q=harperdb&type=repositories)
+* [Google Data Studio](./google-data-studio) is a visualization tool for building charts and tables from HarperDB data.
+
+## Components
+
+There are four general categories of components for HarperDB. The most common is applications. An application is simply a component that delivers complete functionality through an external interface that it defines, and it is usually composed of other components. See [our guide to building applications](../applications/) to get started.
+
+A data source component can implement the Resource API to customize access to a table or provide access to an external data source. External data source components are used to retrieve and access data from other sources.
+
+The next two are considered extension components. Server protocol extension components provide and define ways for clients to access data and can be used to extend or create new protocols.
+
+Server resource components implement support for different types of files that can be used as resources in applications. HarperDB includes support for using JavaScript modules and GraphQL Schemas as resources, but resource components may add support for different file types like HTML templates (like JSX), CSV data, and more.
+
+## Server components
+
+Server components can easily be added and configured by adding an entry to your `harperdb-config.yaml`:
+
+```yaml
+my-server-component:
+ package: 'HarperDB-Add-Ons/package-name' # this can be any valid github or npm reference
+ port: 4321
+```
+
+## Writing Extension Components
+
+You can write your own extensions to build new functionality on HarperDB. See the [writing extension components documentation](./writing-extensions) for more information.
diff --git a/site/versioned_docs/version-4.2/developers/components/installing.md b/site/versioned_docs/version-4.2/developers/components/installing.md
new file mode 100644
index 00000000..aac137ea
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/components/installing.md
@@ -0,0 +1,79 @@
+---
+title: Installing
+---
+
+# Installing
+
+Components can be easily added by adding a new top level element to your `harperdb-config.yaml` file.
+
+The configuration comprises two values:
+
+* component name - can be anything, as long as it follows valid YAML syntax.
+* package - a reference to your component.
+
+```yaml
+myComponentName:
+ package: HarperDB-Add-Ons/package
+```
+
+Under the hood, HarperDB calls `npm install` on all components. This means that the package value can be any valid npm reference, such as a GitHub repo, an npm package, a tarball, a local directory, or a URL.
+
+```yaml
+myGithubComponent:
+ package: HarperDB-Add-Ons/package#v2.2.0 # install from GitHub
+myNPMComponent:
+ package: harperdb # install from NPM
+myTarBall:
+ package: /Users/harper/cool-component.tar # install from tarball
+myLocal:
+ package: /Users/harper/local # install from local path
+myWebsite:
+ package: https://harperdb-component # install from URL
+```
+
+When HarperDB is run or restarted it checks to see if there are any new or updated components. If there are, it will dynamically create a package.json file in the `rootPath` directory and call `npm install`.
+
+npm will install all the components into `node_modules` in the `rootPath` directory.
+
+The package.json file that is created will look something like this.
+
+```json
+{
+ "dependencies": {
+ "myGithubComponent": "github:HarperDB-Add-Ons/package#v2.2.0",
+ "myNPMComponent": "npm:harperdb",
+ "myTarBall": "file:/Users/harper/cool-component.tar",
+ "myLocal": "file:/Users/harper/local",
+ "myWebsite": "https:/harperdb-component"
+ }
+}
+```
+
+The package prefix is added automatically; however, you can set it manually in your package reference.
+
+```yaml
+myCoolComponent:
+ package: file:/Users/harper/cool-component.tar
+```
+
+## Installing components using the operations API
+
+To add a component using the operations API, use the `deploy_component` operation.
+
+```json
+{
+ "operation": "deploy_component",
+ "project": "my-cool-component",
+ "package": "HarperDB-Add-Ons/package/mycc"
+}
+```
+
+Another option is to pass `deploy_component` a base64-encoded string representation of your component as a `.tar` file. HarperDB can generate this via the `package_component` operation. When deploying with a payload, your component will be deployed to your `/components` directory. Any components in this directory will be automatically picked up by HarperDB.
+
+```json
+{
+ "operation": "deploy_component",
+ "project": "my-cool-component",
+ "payload": "NzY1IAAwMDAwMjQgADAwMDAwMDAwMDAwIDE0NDIwMDQ3...."
+}
+```
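+
+As a rough illustration of the payload-based flow (the instance URLs, port, credentials, the `project` parameter, and the `payload` response field are assumptions for illustration; see the operations API docs for exact details), the sketch below packages a project on one instance and deploys it to another. It assumes Node 18+ for the built-in `fetch`:
+
+```javascript
+// Package a project on a source instance, then deploy it to a target instance.
+// URLs and credentials below are placeholders; adjust for your environment.
+const auth = 'Basic ' + Buffer.from('HDB_ADMIN:password').toString('base64');
+
+async function operation(url, body) {
+  const response = await fetch(url, {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json', Authorization: auth },
+    body: JSON.stringify(body),
+  });
+  return response.json();
+}
+
+(async () => {
+  // 1. Ask the source instance to package the project as a base64-encoded .tar.
+  const packaged = await operation('https://source-instance:9925', {
+    operation: 'package_component',
+    project: 'my-cool-component',
+  });
+
+  // 2. Deploy that payload to the target instance.
+  const result = await operation('https://target-instance:9925', {
+    operation: 'deploy_component',
+    project: 'my-cool-component',
+    payload: packaged.payload, // assumes the packaging response exposes the tarball as `payload`
+  });
+  console.log(result);
+})();
+```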
diff --git a/site/versioned_docs/version-4.2/developers/components/operations.md b/site/versioned_docs/version-4.2/developers/components/operations.md
new file mode 100644
index 00000000..fc5d2bf9
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/components/operations.md
@@ -0,0 +1,37 @@
+---
+title: Operations
+---
+
+# Operations
+
+One way to manage applications and components is through [HarperDB Studio](../../administration/harperdb-studio/). It performs all the necessary operations automatically. To get started, navigate to your instance in HarperDB Studio and click the subnav link for “applications”. Once configuration is complete, you can manage and deploy applications in minutes.
+
+HarperDB Studio manages your applications using nine HarperDB operations. You may view these operations within our [API Docs](../operations-api/). A brief overview of each of the operations is below:
+
+* **components\_status**
+
+ Returns the state of the applications server. This includes whether it is enabled, upon which port it is listening, and where its root project directory is located on the host machine.
+* **get\_components**
+
+ Returns an array of projects within the applications root project directory.
+* **get\_component\_file**
+
+ Returns the content of the specified file as text. HarperDB Studio uses this call to render the file content in its built-in code editor.
+* **set\_component\_file**
+
+ Updates the content of the specified file. HarperDB Studio uses this call to save any changes made through its built-in code editor.
+* **drop\_component\_file**
+
+ Deletes the specified file.
+* **add\_component\_project**
+
+ Creates a new project folder in the applications root project directory. It also inserts into the new directory the contents of our application project template, which is publicly available here: https://github.com/HarperDB/harperdb-custom-functions-template.
+* **drop\_component\_project**
+
+ Deletes the specified project folder and all of its contents.
+* **package\_component\_project**
+
+ Creates a .tar file of the specified project folder, then reads it into a base64-encoded string and returns that string to the user.
+* **deploy\_component\_project**
+
+ Takes the output of package\_component\_project, decodes the base64-encoded string, reconstitutes the .tar file of your project folder, and extracts it to the applications root project directory.
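+
+Each of these is invoked like any other operations API call: an authenticated POST with a JSON body naming the operation. As a minimal sketch (the instance URL, port, and credentials are placeholders, and Node 18+ is assumed for `fetch`; see the API Docs above for the exact parameters each operation accepts):
+
+```javascript
+// List the projects in the applications root project directory via get_components.
+(async () => {
+  const response = await fetch('https://my-instance:9925', {
+    method: 'POST',
+    headers: {
+      'Content-Type': 'application/json',
+      Authorization: 'Basic ' + Buffer.from('HDB_ADMIN:password').toString('base64'),
+    },
+    body: JSON.stringify({ operation: 'get_components' }),
+  });
+  console.log(await response.json());
+})();
+```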
diff --git a/site/versioned_docs/version-4.2/developers/components/sdks.md b/site/versioned_docs/version-4.2/developers/components/sdks.md
new file mode 100644
index 00000000..9064851e
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/components/sdks.md
@@ -0,0 +1,21 @@
+---
+title: SDKs
+description: >-
+ Software Development Kits available for connecting to HarperDB from different
+ languages.
+---
+
+# SDKs
+
+| SDK/Tool | Description | Installation |
+| ------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------- |
+| [HarperDB.NET.Client](https://www.nuget.org/packages/HarperDB.NET.Client) | A .NET Core client to execute operations against HarperDB | `dotnet add package HarperDB.NET.Client --version 1.1.0` |
+| [Websocket Client](https://www.npmjs.com/package/harperdb-websocket-client) | A JavaScript client for real-time access to HarperDB transactions | `npm i -s harperdb-websocket-client` |
+| [Gatsby HarperDB Source](https://www.npmjs.com/package/gatsby-source-harperdb) | Use HarperDB as the data source for a Gatsby project at build time | `npm i -s gatsby-source-harperdb` |
+| [HarperDB.EntityFrameworkCore](https://www.nuget.org/packages/HarperDB.EntityFrameworkCore) | The HarperDB EntityFrameworkCore Provider Package for .NET 6.0 | `dotnet add package HarperDB.EntityFrameworkCore --version 1.0.0` |
+| [Python SDK](https://pypi.org/project/harperdb/) | Python3 implementations of HarperDB API functions with wrappers for an object-oriented interface | `pip3 install harperdb` |
+| [HarperDB Flutter SDK](https://github.com/HarperDB/harperdb-sdk-flutter) | A HarperDB SDK for Flutter | `flutter pub add harperdb` |
+| [React Hook](https://www.npmjs.com/package/use-harperdb) | A ReactJS Hook for HarperDB | `npm i -s use-harperdb` |
+| [Node Red Node](https://flows.nodered.org/node/node-red-contrib-harperdb) | Easy drag-and-drop connections to HarperDB using the Node-RED platform | `npm i -s node-red-contrib-harperdb` |
+| [NodeJS SDK](https://www.npmjs.com/package/harperive) | A HarperDB SDK for NodeJS | `npm i -s harperive` |
+| [HarperDB Cargo Crate](https://crates.io/crates/harperdb) | A HarperDB SDK for Rust | `Cargo.toml > harperdb = '1.0.0'` |
diff --git a/site/versioned_docs/version-4.2/developers/components/writing-extensions.md b/site/versioned_docs/version-4.2/developers/components/writing-extensions.md
new file mode 100644
index 00000000..3a0b0ea1
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/components/writing-extensions.md
@@ -0,0 +1,153 @@
+---
+title: Writing Extensions
+---
+
+# Writing Extensions
+
+HarperDB is a highly extensible database application platform with support for a rich variety of composable modular components and extensions that can be used and combined to build applications and add functionality to existing applications. Here we describe the different types of components/extensions that can be developed for HarperDB and how to create them.
+
+There are three general categories of components for HarperDB:
+
+* **protocol extensions** that provide and define ways for clients to access data
+* **resource extensions** that handle and interpret different types of files
+* **consumer data sources** that provide a way to access and retrieve data from other sources.
+
+Server protocol extensions can be used to implement new protocols like MQTT, AMQP, Kafka, or maybe a retro-style Gopher interface. They can also be used to augment existing protocols like HTTP with "middleware" that adds authentication, analytics, or additional content negotiation, or to layer protocols on top of WebSockets.
+
+Server resource extensions implement support for different types of files that can be used as resources in applications. HarperDB includes support for using JavaScript modules and GraphQL Schemas as resources, but resource extensions could be added to support different file types like HTML templates (like JSX), CSV data, and more.
+
+Consumer data source components are used to retrieve and access data from other sources, and can be very useful if you want to use HarperDB to cache or use data from other databases like MySQL, Postgres, or Oracle, or subscribe to data from messaging brokers (again possibly Kafka, NATS, etc.).
+
+These are not mutually exclusive; you may build components that fulfill any or all of these roles.
+
+## Server Extensions
+
+Server Extensions are implemented as JavaScript packages/modules and interact with HarperDB through a number of possible hooks. A component can be defined as an extension by specifying the `extensionModule` in the `config.yaml`:
+
+```yaml
+extensionModule: './entry-module-name.js'
+```
+
+### Module Initialization
+
+Once a user has configured an extension, HarperDB will attempt to load the extension package specified by the `package` property. Once loaded, there are several functions that can be exported that will be called by HarperDB:
+
+`export function start(options: { port: number, server: {}})` If defined, this will be called on the initialization of the extension. The provided `server` property object includes a set of additional entry points for utilizing or layering on top of other protocols (and when implementing a new protocol, you can add your own entry points). The most common entry is to provide an HTTP middleware layer. This looks like:
+
+```javascript
+export function start(options: { port: number, server: {}}) {
+  options.server.http(async (request, nextLayer) => {
+    // we can directly return a response here, or do some processing on the request and delegate to the next layer
+    let response = await nextLayer(request);
+    return response;
+  });
+}
+```
+
+Here, the `request` object will have the following structure (this is based on Node's request, but augmented to conform to a subset of the [WHATWG Request API](https://developer.mozilla.org/en-US/docs/Web/API/Request)):
+
+```typescript
+interface Request {
+  method: string
+  headers: Headers // use request.headers.get(headerName) to get header values
+  body: Stream
+  data: any // deserialized data from the request body
+}
+```
+
+The returned `response` object should have the following structure (again, following a structural subset of the [WHATWG Response API](https://developer.mozilla.org/en-US/docs/Web/API/Response)):
+
+```typescript
+interface Response {
+  status?: number
+  headers?: {} // an object with header name/values
+  data?: any // object/value that will be serialized into the body
+  body?: Stream
+}
+```
+
+If you were implementing an authentication extension, you could get authentication information from the request and use it to add the `user` property to the request:
+
+```javascript
+export function start(options: { port: number, server: {}, resources: Map}) {
+  options.server.http((request, nextLayer) => {
+    let authorization = request.headers.get('authorization');
+    if (authorization) {
+      // get some token for the user and determine the user
+      // (the username and password would be parsed from the authorization header)
+      // if we want to use harperdb's user database:
+      let user = options.server.getUser(username, password);
+      request.user = user; // the authenticated user object goes on the request
+    }
+    // continue on to the next layer
+    return nextLayer(request);
+  });
+  // if you needed to add a login resource, you could add it as well:
+  options.resources.set('/login', LoginResource);
+}
+```
+
+If you were implementing a new protocol, you could directly interact with the sockets and listen for new incoming TCP connections:
+
+```javascript
+export function start(options: { port: number, server: {}}) {
+  options.server.socket((socket) => {
+    // called for each new incoming TCP connection
+  });
+}
+```
+
+### Resource Handling
+
+Typically, servers not only communicate with clients, but serve up meaningful data based on the resources within the server. While resource extensions typically handle defining resources, once resources are defined, they can be consumed by server extensions. The `resources` argument provides access to the set of all the resources that have been defined. A server can call `resources.getMatch(path)` to get the resource associated with the URL path.
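+
+As a minimal sketch of that pattern (the `request.url` property and the resource's `get()` method are assumptions for illustration; a real extension would also handle content negotiation and errors), an HTTP middleware could resolve the path to a resource and return its data:
+
+```javascript
+export function start(options) {
+  const { server, resources } = options;
+  server.http(async (request, nextLayer) => {
+    // Look up the resource that was registered for this URL path.
+    const resource = resources.getMatch(request.url);
+    if (resource) {
+      // Assumes the matched resource follows the Resource API's get() convention.
+      return { status: 200, data: await resource.get() };
+    }
+    // No matching resource; let the next layer handle the request.
+    return nextLayer(request);
+  });
+}
+```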
+
+## Resource Extensions
+
+Resource extensions allow us to handle different files and make them accessible to servers as resources, following the common [Resource API](../../technical-details/reference/resource). To implement a resource extension, you export a function called `handleFile`. Users can then configure which files should be handled by your extension. For example, if we had implemented an EJS handler, it could be configured as:
+
+```yaml
+ module: 'ejs-extension'
+ path: '/templates/*.ejs'
+```
+
+And in our extension module, we could implement `handleFile`:
+
+```javascript
+export function handleFile(contents, relative_path, file_path, resources) {
+  // will be called for each .ejs file.
+  // We can then add the generated resource:
+  resources.set(relative_path, GeneratedResource);
+}
+```
+
+We can also implement a handler for directories. This can be useful for implementing a handler for broader frameworks that load their own files, like Next.js or Remix, or a static file handler. HarperDB includes such an extension for fastify's auto-loader that loads a directory of route definitions. This hook looks like:
+
+```javascript
+export function handleDirectory(relative_path, path, resources) {
+  // called for each configured directory; register any resources it defines
+}
+```
+
+Note that these hooks are not mutually exclusive. You can write an extension that implements any or all of these hooks, potentially implementing a custom protocol and file handling.
+
+## Data Source Components
+
+Data source components implement the Resource interface to provide access to various data sources, which may be other APIs, databases, or local storage. Components that implement this interface can then be used as a source for caching tables, can be accessed as part of endpoint implementations, or even used as endpoints themselves. See the [Resource documentation](../../technical-details/reference/resource) for more information on implementing new resources.
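+
+As a sketch of the idea (the `Resource` import from the `harperdb` module, the `getId()` call, and the upstream URL are illustrative assumptions; see the Resource documentation for the actual interface), an external data source might look like this:
+
+```javascript
+// A minimal external data source that fetches a record from a hypothetical REST API.
+// Exporting the class makes it available as a source for caching tables or as an endpoint.
+import { Resource } from 'harperdb';
+
+export class ThirdPartyProducts extends Resource {
+  async get() {
+    // getId() is assumed to return the id of the record being requested.
+    const response = await fetch(`https://api.example.com/products/${this.getId()}`);
+    if (!response.ok) return undefined;
+    return response.json();
+  }
+}
+```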
+
+## Content Type Extensions
+
+HarperDB uses content negotiation to determine how to deserialize incoming data from HTTP requests (and any other protocols that support content negotiation) and how to serialize data into responses. This negotiation is performed by comparing the `Content-Type` header with the registered content type handlers to determine how to deserialize content into structured data that is processed and stored, and by comparing the `Accept` header with the registered content type handlers to determine how to serialize structured data. HarperDB comes with a rich set of content type handlers, including JSON, CBOR, MessagePack, CSV, Event-Stream, and more. However, you can also add your own content type handlers by adding new entries (or even replacing existing entries) in the `contentTypes` map exported from the `server` global (or the `harperdb` export). This map is keyed by the MIME type, and the value is an object with the following properties (all optional):
+
+* `serialize(data): Buffer|Uint8Array|string`: If defined, this will be called with the data structure and should return the data serialized as binary data (a Node.js Buffer or Uint8Array) or a string, for the response.
+* `serializeStream(data): ReadableStream`: If defined, this will be called with the data structure and should return the data serialized as a ReadableStream. This is generally necessary for handling asynchronous iterables.
+* `deserialize(Buffer|string): any`: If defined (and `deserializeStream` is not defined), this will be called with the raw data received from the incoming request and should return the deserialized data structure. This will be called with a string for text MIME types ("text/...") and a Buffer for all others.
+* `deserializeStream(ReadableStream): any`: If defined, this will be called with the raw data stream received from the incoming request and should return the deserialized data structure (potentially as an asynchronous iterable).
+* `q: number`: An indication of the serialization quality, between 0 and 1; if omitted, it defaults to 1. It is called "content negotiation" rather than "content demanding" because both the client and the server may support multiple content types, and the server needs to choose the best for both. This is determined by finding the content type (of all supported) with the highest product of client q and server q (1 is a perfect representation of the data, 0 is the worst, 0.5 is medium quality).
+
+For example, if you wanted to define an XML serializer (that can respond with XML to requests with `Accept: text/xml`) you could write:
+
+```javascript
+contentTypes.set('text/xml', {
+  serialize(data) {
+    return '<data>' + /* ... some serialization of data ... */ '</data>';
+  },
+  q: 0.8,
+});
+```
+
+## Trusted/Untrusted
+
+Extensions will also be categorized as trusted or untrusted. For some HarperDB installations, administrators may choose to constrain users to only using trusted extensions for security reasons (such as multi-tenancy requirements or added defense in depth). Most installations do not impose such constraints, but this may exist in some situations.
+
+An extension can be automatically considered trusted if it conforms to the requirements of [Secure EcmaScript](https://www.npmjs.com/package/ses/v/0.7.0) (basically strict mode code that doesn't modify any global objects), and either does not use any other modules, or only uses modules from other trusted extensions/components. An extension can be marked as trusted by review by the HarperDB team as well, but developers should not expect that HarperDB can review all extensions. Untrusted extensions can access any other packages/modules, and may have many additional capabilities.
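+
+As a rough illustration of what trusted-eligible code looks like, the extension below is strict-mode ES module code, does not modify any global objects, and imports nothing outside HarperDB's own interfaces; whether a given extension is actually treated as trusted is determined as described above:
+
+```javascript
+// An intentionally minimal pass-through middleware: no globals touched, no outside modules used.
+export function start(options) {
+  options.server.http((request, nextLayer) => {
+    // Hand the request straight to the next layer; a real extension would do its work here.
+    return nextLayer(request);
+  });
+}
+```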
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/advanced-json-sql-examples.md b/site/versioned_docs/version-4.2/developers/operations-api/advanced-json-sql-examples.md
new file mode 100644
index 00000000..1584a0c4
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/advanced-json-sql-examples.md
@@ -0,0 +1,1780 @@
+---
+title: Advanced JSON SQL Examples
+---
+
+# Advanced JSON SQL Examples
+
+## Create movies database
+Create a new database called "movies" using the 'create_database' operation.
+
+_Note: Creating a database is optional; if one is not created, HarperDB will default to using a database named `data`._
+
+### Body
+```json
+{
+ "operation": "create_database",
+ "database": "movies"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "database 'movies' successfully created"
+}
+```
+
+---
+
+## Create movie Table
+Creates a new table called "movie" inside the database "movies" using the ‘create_table’ operation.
+
+### Body
+
+```json
+{
+ "operation": "create_table",
+ "database": "movies",
+ "table": "movie",
+ "primary_key": "id"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "table 'movies.movie' successfully created."
+}
+```
+
+
+---
+
+## Create credits Table
+Creates a new table called "credits" inside the database "movies" using the ‘create_table’ operation.
+
+### Body
+
+```json
+{
+ "operation": "create_table",
+ "database": "movies",
+ "table": "credits",
+ "primary_key": "movie_id"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "table 'movies.credits' successfully created."
+}
+```
+
+
+---
+
+## Bulk Insert movie Via CSV
+Inserts data from a hosted CSV file into the "movie" table using the 'csv_url_load' operation.
+
+### Body
+
+```json
+{
+ "operation": "csv_url_load",
+ "database": "movies",
+ "table": "movie",
+ "csv_url": "https:/search-json-sample-data.s3.us-east-2.amazonaws.com/movie.csv"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 1889eee4-23c1-4945-9bb7-c805fc20726c"
+}
+```
+
+
+---
+
+## Bulk Insert credits Via CSV
+Inserts data from a hosted CSV file into the "credits" table using the 'csv_url_load' operation.
+
+### Body
+
+```json
+{
+ "operation": "csv_url_load",
+ "database": "movies",
+ "table": "credits",
+ "csv_url": "https:/search-json-sample-data.s3.us-east-2.amazonaws.com/credits.csv"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 3a14cd74-67f3-41e9-8ccd-45ffd0addc2c",
+ "job_id": "3a14cd74-67f3-41e9-8ccd-45ffd0addc2c"
+}
+```
+
+
+---
+
+## View raw data
+In the following example we will be running expressions on the keywords & production_companies attributes, so for context we are displaying what the raw data looks like.
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT title, rank, keywords, production_companies FROM movies.movie ORDER BY rank LIMIT 10"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "title": "Ad Astra",
+ "rank": 1,
+ "keywords": [
+ {
+ "id": 305,
+ "name": "moon"
+ },
+ {
+ "id": 697,
+ "name": "loss of loved one"
+ },
+ {
+ "id": 839,
+ "name": "planet mars"
+ },
+ {
+ "id": 14626,
+ "name": "astronaut"
+ },
+ {
+ "id": 157265,
+ "name": "moon colony"
+ },
+ {
+ "id": 162429,
+ "name": "solar system"
+ },
+ {
+ "id": 240119,
+ "name": "father son relationship"
+ },
+ {
+ "id": 244256,
+ "name": "near future"
+ },
+ {
+ "id": 257878,
+ "name": "planet neptune"
+ },
+ {
+ "id": 260089,
+ "name": "space walk"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 490,
+ "name": "New Regency Productions",
+ "origin_country": ""
+ },
+ {
+ "id": 79963,
+ "name": "Keep Your Head",
+ "origin_country": ""
+ },
+ {
+ "id": 73492,
+ "name": "MadRiver Pictures",
+ "origin_country": ""
+ },
+ {
+ "id": 81,
+ "name": "Plan B Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 30666,
+ "name": "RT Features",
+ "origin_country": "BR"
+ },
+ {
+ "id": 30148,
+ "name": "Bona Film Group",
+ "origin_country": "CN"
+ },
+ {
+ "id": 22213,
+ "name": "TSG Entertainment",
+ "origin_country": "US"
+ }
+ ]
+ },
+ {
+ "title": "Extraction",
+ "rank": 2,
+ "keywords": [
+ {
+ "id": 3070,
+ "name": "mercenary"
+ },
+ {
+ "id": 4110,
+ "name": "mumbai (bombay), india"
+ },
+ {
+ "id": 9717,
+ "name": "based on comic"
+ },
+ {
+ "id": 9730,
+ "name": "crime boss"
+ },
+ {
+ "id": 11107,
+ "name": "rescue mission"
+ },
+ {
+ "id": 18712,
+ "name": "based on graphic novel"
+ },
+ {
+ "id": 265216,
+ "name": "dhaka (dacca), bangladesh"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 106544,
+ "name": "AGBO",
+ "origin_country": "US"
+ },
+ {
+ "id": 109172,
+ "name": "Thematic Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 92029,
+ "name": "TGIM Films",
+ "origin_country": "US"
+ }
+ ]
+ },
+ {
+ "title": "To the Beat! Back 2 School",
+ "rank": 3,
+ "keywords": [
+ {
+ "id": 10873,
+ "name": "school"
+ }
+ ],
+ "production_companies": []
+ },
+ {
+ "title": "Bloodshot",
+ "rank": 4,
+ "keywords": [
+ {
+ "id": 2651,
+ "name": "nanotechnology"
+ },
+ {
+ "id": 9715,
+ "name": "superhero"
+ },
+ {
+ "id": 9717,
+ "name": "based on comic"
+ },
+ {
+ "id": 164218,
+ "name": "psychotronic"
+ },
+ {
+ "id": 255024,
+ "name": "shared universe"
+ },
+ {
+ "id": 258575,
+ "name": "valiant comics"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 34,
+ "name": "Sony Pictures",
+ "origin_country": "US"
+ },
+ {
+ "id": 10246,
+ "name": "Cross Creek Pictures",
+ "origin_country": "US"
+ },
+ {
+ "id": 6573,
+ "name": "Mimran Schur Pictures",
+ "origin_country": "US"
+ },
+ {
+ "id": 333,
+ "name": "Original Film",
+ "origin_country": "US"
+ },
+ {
+ "id": 103673,
+ "name": "The Hideaway Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 124335,
+ "name": "Valiant Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 5,
+ "name": "Columbia Pictures",
+ "origin_country": "US"
+ },
+ {
+ "id": 1225,
+ "name": "One Race",
+ "origin_country": "US"
+ },
+ {
+ "id": 30148,
+ "name": "Bona Film Group",
+ "origin_country": "CN"
+ }
+ ]
+ },
+ {
+ "title": "The Call of the Wild",
+ "rank": 5,
+ "keywords": [
+ {
+ "id": 818,
+ "name": "based on novel or book"
+ },
+ {
+ "id": 4542,
+ "name": "gold rush"
+ },
+ {
+ "id": 15162,
+ "name": "dog"
+ },
+ {
+ "id": 155821,
+ "name": "sled dogs"
+ },
+ {
+ "id": 189390,
+ "name": "yukon"
+ },
+ {
+ "id": 207928,
+ "name": "19th century"
+ },
+ {
+ "id": 259987,
+ "name": "cgi animation"
+ },
+ {
+ "id": 263806,
+ "name": "1890s"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 787,
+ "name": "3 Arts Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 127928,
+ "name": "20th Century Studios",
+ "origin_country": "US"
+ },
+ {
+ "id": 22213,
+ "name": "TSG Entertainment",
+ "origin_country": "US"
+ }
+ ]
+ },
+ {
+ "title": "Sonic the Hedgehog",
+ "rank": 6,
+ "keywords": [
+ {
+ "id": 282,
+ "name": "video game"
+ },
+ {
+ "id": 6054,
+ "name": "friendship"
+ },
+ {
+ "id": 10842,
+ "name": "good vs evil"
+ },
+ {
+ "id": 41645,
+ "name": "based on video game"
+ },
+ {
+ "id": 167043,
+ "name": "road movie"
+ },
+ {
+ "id": 172142,
+ "name": "farting"
+ },
+ {
+ "id": 188933,
+ "name": "bar fight"
+ },
+ {
+ "id": 226967,
+ "name": "amistad"
+ },
+ {
+ "id": 245230,
+ "name": "live action remake"
+ },
+ {
+ "id": 258111,
+ "name": "fantasy"
+ },
+ {
+ "id": 260223,
+ "name": "videojuego"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 333,
+ "name": "Original Film",
+ "origin_country": "US"
+ },
+ {
+ "id": 10644,
+ "name": "Blur Studios",
+ "origin_country": "US"
+ },
+ {
+ "id": 77884,
+ "name": "Marza Animation Planet",
+ "origin_country": "JP"
+ },
+ {
+ "id": 4,
+ "name": "Paramount",
+ "origin_country": "US"
+ },
+ {
+ "id": 113750,
+ "name": "SEGA",
+ "origin_country": "JP"
+ },
+ {
+ "id": 100711,
+ "name": "DJ2 Entertainment",
+ "origin_country": ""
+ },
+ {
+ "id": 24955,
+ "name": "Paramount Animation",
+ "origin_country": "US"
+ }
+ ]
+ },
+ {
+ "title": "Birds of Prey (and the Fantabulous Emancipation of One Harley Quinn)",
+ "rank": 7,
+ "keywords": [
+ {
+ "id": 849,
+ "name": "dc comics"
+ },
+ {
+ "id": 9717,
+ "name": "based on comic"
+ },
+ {
+ "id": 187056,
+ "name": "woman director"
+ },
+ {
+ "id": 229266,
+ "name": "dc extended universe"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 9993,
+ "name": "DC Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 82968,
+ "name": "LuckyChap Entertainment",
+ "origin_country": "GB"
+ },
+ {
+ "id": 103462,
+ "name": "Kroll & Co Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 174,
+ "name": "Warner Bros. Pictures",
+ "origin_country": "US"
+ },
+ {
+ "id": 429,
+ "name": "DC Comics",
+ "origin_country": "US"
+ },
+ {
+ "id": 128064,
+ "name": "DC Films",
+ "origin_country": "US"
+ },
+ {
+ "id": 101831,
+ "name": "Clubhouse Pictures",
+ "origin_country": "US"
+ }
+ ]
+ },
+ {
+ "title": "Justice League Dark: Apokolips War",
+ "rank": 8,
+ "keywords": [
+ {
+ "id": 849,
+ "name": "dc comics"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 2785,
+ "name": "Warner Bros. Animation",
+ "origin_country": "US"
+ },
+ {
+ "id": 9993,
+ "name": "DC Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 429,
+ "name": "DC Comics",
+ "origin_country": "US"
+ }
+ ]
+ },
+ {
+ "title": "Parasite",
+ "rank": 9,
+ "keywords": [
+ {
+ "id": 1353,
+ "name": "underground"
+ },
+ {
+ "id": 5318,
+ "name": "seoul"
+ },
+ {
+ "id": 5732,
+ "name": "birthday party"
+ },
+ {
+ "id": 5752,
+ "name": "private lessons"
+ },
+ {
+ "id": 9866,
+ "name": "basement"
+ },
+ {
+ "id": 10453,
+ "name": "con artist"
+ },
+ {
+ "id": 11935,
+ "name": "working class"
+ },
+ {
+ "id": 12565,
+ "name": "psychological thriller"
+ },
+ {
+ "id": 13126,
+ "name": "limousine driver"
+ },
+ {
+ "id": 14514,
+ "name": "class differences"
+ },
+ {
+ "id": 14864,
+ "name": "rich poor"
+ },
+ {
+ "id": 17997,
+ "name": "housekeeper"
+ },
+ {
+ "id": 18015,
+ "name": "tutor"
+ },
+ {
+ "id": 18035,
+ "name": "family"
+ },
+ {
+ "id": 33421,
+ "name": "crime family"
+ },
+ {
+ "id": 173272,
+ "name": "flood"
+ },
+ {
+ "id": 188861,
+ "name": "smell"
+ },
+ {
+ "id": 198673,
+ "name": "unemployed"
+ },
+ {
+ "id": 237462,
+ "name": "wealthy family"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 7036,
+ "name": "CJ Entertainment",
+ "origin_country": "KR"
+ },
+ {
+ "id": 4399,
+ "name": "Barunson E&A",
+ "origin_country": "KR"
+ }
+ ]
+ },
+ {
+ "title": "Star Wars: The Rise of Skywalker",
+ "rank": 10,
+ "keywords": [
+ {
+ "id": 161176,
+ "name": "space opera"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 1,
+ "name": "Lucasfilm",
+ "origin_country": "US"
+ },
+ {
+ "id": 11461,
+ "name": "Bad Robot",
+ "origin_country": "US"
+ },
+ {
+ "id": 2,
+ "name": "Walt Disney Pictures",
+ "origin_country": "US"
+ },
+ {
+ "id": 120404,
+ "name": "British Film Commission",
+ "origin_country": ""
+ }
+ ]
+ }
+]
+```
+
+
+---
+
+## Simple search_json call
+This query uses search_json to convert the keywords object array to a simple string array. The expression '[name]' tells the function to extract all values for the name attribute and wrap them in an array.
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT title, rank, search_json('[name]', keywords) as keywords FROM movies.movie ORDER BY rank LIMIT 10"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "title": "Ad Astra",
+ "rank": 1,
+ "keywords": [
+ "moon",
+ "loss of loved one",
+ "planet mars",
+ "astronaut",
+ "moon colony",
+ "solar system",
+ "father son relationship",
+ "near future",
+ "planet neptune",
+ "space walk"
+ ]
+ },
+ {
+ "title": "Extraction",
+ "rank": 2,
+ "keywords": [
+ "mercenary",
+ "mumbai (bombay), india",
+ "based on comic",
+ "crime boss",
+ "rescue mission",
+ "based on graphic novel",
+ "dhaka (dacca), bangladesh"
+ ]
+ },
+ {
+ "title": "To the Beat! Back 2 School",
+ "rank": 3,
+ "keywords": [
+ "school"
+ ]
+ },
+ {
+ "title": "Bloodshot",
+ "rank": 4,
+ "keywords": [
+ "nanotechnology",
+ "superhero",
+ "based on comic",
+ "psychotronic",
+ "shared universe",
+ "valiant comics"
+ ]
+ },
+ {
+ "title": "The Call of the Wild",
+ "rank": 5,
+ "keywords": [
+ "based on novel or book",
+ "gold rush",
+ "dog",
+ "sled dogs",
+ "yukon",
+ "19th century",
+ "cgi animation",
+ "1890s"
+ ]
+ },
+ {
+ "title": "Sonic the Hedgehog",
+ "rank": 6,
+ "keywords": [
+ "video game",
+ "friendship",
+ "good vs evil",
+ "based on video game",
+ "road movie",
+ "farting",
+ "bar fight",
+ "amistad",
+ "live action remake",
+ "fantasy",
+ "videojuego"
+ ]
+ },
+ {
+ "title": "Birds of Prey (and the Fantabulous Emancipation of One Harley Quinn)",
+ "rank": 7,
+ "keywords": [
+ "dc comics",
+ "based on comic",
+ "woman director",
+ "dc extended universe"
+ ]
+ },
+ {
+ "title": "Justice League Dark: Apokolips War",
+ "rank": 8,
+ "keywords": [
+ "dc comics"
+ ]
+ },
+ {
+ "title": "Parasite",
+ "rank": 9,
+ "keywords": [
+ "underground",
+ "seoul",
+ "birthday party",
+ "private lessons",
+ "basement",
+ "con artist",
+ "working class",
+ "psychological thriller",
+ "limousine driver",
+ "class differences",
+ "rich poor",
+ "housekeeper",
+ "tutor",
+ "family",
+ "crime family",
+ "flood",
+ "smell",
+ "unemployed",
+ "wealthy family"
+ ]
+ },
+ {
+ "title": "Star Wars: The Rise of Skywalker",
+ "rank": 10,
+ "keywords": [
+ "space opera"
+ ]
+ }
+]
+```
+
+
+---
+
+## Use search_json in a where clause
+This example shows how we can use SEARCH_JSON to filter out records in a WHERE clause. The production_companies attribute holds an object array of companies that produced each movie; we want to see only movies that were produced by Marvel Studios. Our expression is a filter, '$[name="Marvel Studios"]'; it tells the function to iterate the production_companies array and only return entries where the name is "Marvel Studios".
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT title, release_date FROM movies.movie where search_json('$[name=\"Marvel Studios\"]', production_companies) IS NOT NULL ORDER BY release_date"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "title": "Iron Man",
+ "release_date": "2008-04-30"
+ },
+ {
+ "title": "The Incredible Hulk",
+ "release_date": "2008-06-12"
+ },
+ {
+ "title": "Iron Man 2",
+ "release_date": "2010-04-28"
+ },
+ {
+ "title": "Thor",
+ "release_date": "2011-04-21"
+ },
+ {
+ "title": "Captain America: The First Avenger",
+ "release_date": "2011-07-22"
+ },
+ {
+ "title": "Marvel One-Shot: The Consultant",
+ "release_date": "2011-09-12"
+ },
+ {
+ "title": "Marvel One-Shot: A Funny Thing Happened on the Way to Thor's Hammer",
+ "release_date": "2011-10-25"
+ },
+ {
+ "title": "The Avengers",
+ "release_date": "2012-04-25"
+ },
+ {
+ "title": "Marvel One-Shot: Item 47",
+ "release_date": "2012-09-13"
+ },
+ {
+ "title": "Iron Man 3",
+ "release_date": "2013-04-18"
+ },
+ {
+ "title": "Marvel One-Shot: Agent Carter",
+ "release_date": "2013-09-08"
+ },
+ {
+ "title": "Thor: The Dark World",
+ "release_date": "2013-10-29"
+ },
+ {
+ "title": "Marvel One-Shot: All Hail the King",
+ "release_date": "2014-02-04"
+ },
+ {
+ "title": "Marvel Studios: Assembling a Universe",
+ "release_date": "2014-03-18"
+ },
+ {
+ "title": "Captain America: The Winter Soldier",
+ "release_date": "2014-03-20"
+ },
+ {
+ "title": "Guardians of the Galaxy",
+ "release_date": "2014-07-30"
+ },
+ {
+ "title": "Avengers: Age of Ultron",
+ "release_date": "2015-04-22"
+ },
+ {
+ "title": "Ant-Man",
+ "release_date": "2015-07-14"
+ },
+ {
+ "title": "Captain America: Civil War",
+ "release_date": "2016-04-27"
+ },
+ {
+ "title": "Team Thor",
+ "release_date": "2016-08-28"
+ },
+ {
+ "title": "Doctor Strange",
+ "release_date": "2016-10-25"
+ },
+ {
+ "title": "Guardians of the Galaxy Vol. 2",
+ "release_date": "2017-04-19"
+ },
+ {
+ "title": "Spider-Man: Homecoming",
+ "release_date": "2017-07-05"
+ },
+ {
+ "title": "Thor: Ragnarok",
+ "release_date": "2017-10-25"
+ },
+ {
+ "title": "Black Panther",
+ "release_date": "2018-02-13"
+ },
+ {
+ "title": "Avengers: Infinity War",
+ "release_date": "2018-04-25"
+ },
+ {
+ "title": "Ant-Man and the Wasp",
+ "release_date": "2018-07-04"
+ },
+ {
+ "title": "Captain Marvel",
+ "release_date": "2019-03-06"
+ },
+ {
+ "title": "Avengers: Endgame",
+ "release_date": "2019-04-24"
+ },
+ {
+ "title": "Spider-Man: Far from Home",
+ "release_date": "2019-06-28"
+ },
+ {
+ "title": "Black Widow",
+ "release_date": "2020-10-28"
+ },
+ {
+ "title": "Untitled Spider-Man 3",
+ "release_date": "2021-11-04"
+ },
+ {
+ "title": "Thor: Love and Thunder",
+ "release_date": "2022-02-10"
+ },
+ {
+ "title": "Doctor Strange in the Multiverse of Madness",
+ "release_date": "2022-03-23"
+ },
+ {
+ "title": "Untitled Marvel Project (3)",
+ "release_date": "2022-07-29"
+ },
+ {
+ "title": "Guardians of the Galaxy Vol. 3",
+ "release_date": "2023-02-16"
+ }
+]
+```
+
+
+---
+
+## Use search_json to show the movies with the largest casts
+This example shows how we can use SEARCH_JSON to perform a simple calculation on JSON and order by the results. The cast attribute holds an object array of details about the cast of a movie. We use the expression '$count(id)', which counts each id and returns the value; we alias it in SQL as cast_size, which in turn is used to sort the rows.
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT movie_title, search_json('$count(id)', `cast`) as cast_size FROM movies.credits ORDER BY cast_size DESC LIMIT 10"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "movie_title": "Around the World in Eighty Days",
+ "cast_size": 312
+ },
+ {
+ "movie_title": "And the Oscar Goes To...",
+ "cast_size": 259
+ },
+ {
+ "movie_title": "Rock of Ages",
+ "cast_size": 223
+ },
+ {
+ "movie_title": "Mr. Smith Goes to Washington",
+ "cast_size": 213
+ },
+ {
+ "movie_title": "Les Misérables",
+ "cast_size": 208
+ },
+ {
+ "movie_title": "Jason Bourne",
+ "cast_size": 201
+ },
+ {
+ "movie_title": "The Muppets",
+ "cast_size": 191
+ },
+ {
+ "movie_title": "You Don't Mess with the Zohan",
+ "cast_size": 183
+ },
+ {
+ "movie_title": "The Irishman",
+ "cast_size": 173
+ },
+ {
+ "movie_title": "Spider-Man: Far from Home",
+ "cast_size": 173
+ }
+]
+```
+
+
+---
+
+## search_json as a condition, in a select with a table join
+This example shows how we can use SEARCH_JSON to find movies where at least 2 of our favorite actors from Marvel films have acted together, then list the movie, its overview, its release date, and the actors' names and characters. The WHERE clause counts the entries in the credits.cast attribute that match those actors. The SELECT applies the same filter to the cast attribute and transforms each matching object to return just the actor's name and character.
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT m.title, m.overview, m.release_date, search_json('$[name in [\"Robert Downey Jr.\", \"Chris Evans\", \"Scarlett Johansson\", \"Mark Ruffalo\", \"Chris Hemsworth\", \"Jeremy Renner\", \"Clark Gregg\", \"Samuel L. Jackson\", \"Gwyneth Paltrow\", \"Don Cheadle\"]].{\"actor\": name, \"character\": character}', c.`cast`) as characters FROM movies.credits c INNER JOIN movies.movie m ON c.movie_id = m.id WHERE search_json('$count($[name in [\"Robert Downey Jr.\", \"Chris Evans\", \"Scarlett Johansson\", \"Mark Ruffalo\", \"Chris Hemsworth\", \"Jeremy Renner\", \"Clark Gregg\", \"Samuel L. Jackson\", \"Gwyneth Paltrow\", \"Don Cheadle\"]])', c.`cast`) >= 2"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "title": "Out of Sight",
+ "overview": "Meet Jack Foley, a smooth criminal who bends the law and is determined to make one last heist. Karen Sisco is a federal marshal who chooses all the right moves … and all the wrong guys. Now they're willing to risk it all to find out if there's more between them than just the law.",
+ "release_date": "1998-06-26",
+ "characters": [
+ {
+ "actor": "Don Cheadle",
+ "character": "Maurice Miller"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Hejira Henry (uncredited)"
+ }
+ ]
+ },
+ {
+ "title": "Iron Man",
+ "overview": "After being held captive in an Afghan cave, billionaire engineer Tony Stark creates a unique weaponized suit of armor to fight evil.",
+ "release_date": "2008-04-30",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Virginia \"Pepper\" Potts"
+ },
+ {
+ "actor": "Clark Gregg",
+ "character": "Phil Coulson"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury (uncredited)"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ }
+ ]
+ },
+ {
+ "title": "Captain America: The First Avenger",
+ "overview": "During World War II, Steve Rogers is a sickly man from Brooklyn who's transformed into super-soldier Captain America to aid in the war effort. Rogers must stop the Red Skull – Adolf Hitler's ruthless head of weaponry, and the leader of an organization that intends to use a mysterious device of untold powers for world domination.",
+ "release_date": "2011-07-22",
+ "characters": [
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ }
+ ]
+ },
+ {
+ "title": "In Good Company",
+ "overview": "Dan Foreman is a seasoned advertisement sales executive at a high-ranking publication when a corporate takeover results in him being placed under naive supervisor Carter Duryea, who is half his age. Matters are made worse when Dan's new supervisor becomes romantically involved with his daughter an 18 year-old college student Alex.",
+ "release_date": "2004-12-29",
+ "characters": [
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Alex Foreman"
+ },
+ {
+ "actor": "Clark Gregg",
+ "character": "Mark Steckle"
+ }
+ ]
+ },
+ {
+ "title": "Zodiac",
+ "overview": "The true story of the investigation of the \"Zodiac Killer\", a serial killer who terrified the San Francisco Bay Area, taunting police with his ciphers and letters. The case becomes an obsession for three men as their lives and careers are built and destroyed by the endless trail of clues.",
+ "release_date": "2007-03-02",
+ "characters": [
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Dave Toschi"
+ },
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Paul Avery"
+ }
+ ]
+ },
+ {
+ "title": "Hard Eight",
+ "overview": "A stranger mentors a young Reno gambler who weds a hooker and befriends a vulgar casino regular.",
+ "release_date": "1996-02-28",
+ "characters": [
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Clementine"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Jimmy"
+ }
+ ]
+ },
+ {
+ "title": "The Spirit",
+ "overview": "Down these mean streets a man must come. A hero born, murdered, and born again. A Rookie cop named Denny Colt returns from the beyond as The Spirit, a hero whose mission is to fight against the bad forces from the shadows of Central City. The Octopus, who kills anyone unfortunate enough to see his face, has other plans; he is going to wipe out the entire city.",
+ "release_date": "2008-12-25",
+ "characters": [
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Silken Floss"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Octopuss"
+ }
+ ]
+ },
+ {
+ "title": "S.W.A.T.",
+ "overview": "Hondo Harrelson recruits Jim Street to join an elite unit of the Los Angeles Police Department. Together they seek out more members, including tough Deke Kay and single mom Chris Sanchez. The team's first big assignment is to escort crime boss Alex Montel to prison. It seems routine, but when Montel offers a huge reward to anyone who can break him free, criminals of various stripes step up for the prize.",
+ "release_date": "2003-08-08",
+ "characters": [
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Sgt. Dan 'Hondo' Harrelson"
+ },
+ {
+ "actor": "Jeremy Renner",
+ "character": "Brian Gamble"
+ }
+ ]
+ },
+ {
+ "title": "Iron Man 2",
+ "overview": "With the world now aware of his dual life as the armored superhero Iron Man, billionaire inventor Tony Stark faces pressure from the government, the press and the public to share his technology with the military. Unwilling to let go of his invention, Stark, with Pepper Potts and James 'Rhodey' Rhodes at his side, must forge new alliances – and confront powerful enemies.",
+ "release_date": "2010-04-28",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Virginia \"Pepper\" Potts"
+ },
+ {
+ "actor": "Don Cheadle",
+ "character": "James \"Rhodey\" Rhodes / War Machine"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natalie Rushman / Natasha Romanoff / Black Widow"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ },
+ {
+ "actor": "Clark Gregg",
+ "character": "Phil Coulson"
+ }
+ ]
+ },
+ {
+ "title": "Thor",
+ "overview": "Against his father Odin's will, The Mighty Thor - a powerful but arrogant warrior god - recklessly reignites an ancient war. Thor is cast down to Earth and forced to live among humans as punishment. Once here, Thor learns what it takes to be a true hero when the most dangerous villain of his world sends the darkest forces of Asgard to invade Earth.",
+ "release_date": "2011-04-21",
+ "characters": [
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Clark Gregg",
+ "character": "Phil Coulson"
+ },
+ {
+ "actor": "Jeremy Renner",
+ "character": "Clint Barton / Hawkeye (uncredited)"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury (uncredited)"
+ }
+ ]
+ },
+ {
+ "title": "View from the Top",
+ "overview": "A small-town woman tries to achieve her goal of becoming a flight attendant.",
+ "release_date": "2003-03-21",
+ "characters": [
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Donna"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Ted Stewart"
+ }
+ ]
+ },
+ {
+ "title": "The Nanny Diaries",
+ "overview": "A college graduate goes to work as a nanny for a rich New York family. Ensconced in their home, she has to juggle their dysfunction, a new romance, and the spoiled brat in her charge.",
+ "release_date": "2007-08-24",
+ "characters": [
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Annie Braddock"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Hayden \"Harvard Hottie\""
+ }
+ ]
+ },
+ {
+ "title": "The Perfect Score",
+ "overview": "Six high school seniors decide to break into the Princeton Testing Center so they can steal the answers to their upcoming SAT tests and all get perfect scores.",
+ "release_date": "2004-01-30",
+ "characters": [
+ {
+ "actor": "Chris Evans",
+ "character": "Kyle"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Francesca Curtis"
+ }
+ ]
+ },
+ {
+ "title": "The Avengers",
+ "overview": "When an unexpected enemy emerges and threatens global safety and security, Nick Fury, director of the international peacekeeping agency known as S.H.I.E.L.D., finds himself in need of a team to pull the world back from the brink of disaster. Spanning the globe, a daring recruitment effort begins!",
+ "release_date": "2012-04-25",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner / The Hulk"
+ },
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow"
+ },
+ {
+ "actor": "Jeremy Renner",
+ "character": "Clint Barton / Hawkeye"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ },
+ {
+ "actor": "Clark Gregg",
+ "character": "Phil Coulson"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Virginia \"Pepper\" Potts"
+ }
+ ]
+ },
+ {
+ "title": "Iron Man 3",
+ "overview": "When Tony Stark's world is torn apart by a formidable terrorist called the Mandarin, he starts an odyssey of rebuilding and retribution.",
+ "release_date": "2013-04-18",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Virginia \"Pepper\" Potts"
+ },
+ {
+ "actor": "Don Cheadle",
+ "character": "James \"Rhodey\" Rhodes / Iron Patriot"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner (uncredited)"
+ }
+ ]
+ },
+ {
+ "title": "Marvel One-Shot: The Consultant",
+ "overview": "Agent Coulson informs Agent Sitwell that the World Security Council wishes Emil Blonsky to be released from prison to join the Avengers Initiative. As Nick Fury doesn't want to release Blonsky, the two agents decide to send a patsy to sabotage the meeting...",
+ "release_date": "2011-09-12",
+ "characters": [
+ {
+ "actor": "Clark Gregg",
+ "character": "Phil Coulson"
+ },
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark (archive footage)"
+ }
+ ]
+ },
+ {
+ "title": "Thor: The Dark World",
+ "overview": "Thor fights to restore order across the cosmos… but an ancient race led by the vengeful Malekith returns to plunge the universe back into darkness. Faced with an enemy that even Odin and Asgard cannot withstand, Thor must embark on his most perilous and personal journey yet, one that will reunite him with Jane Foster and force him to sacrifice everything to save us all.",
+ "release_date": "2013-10-29",
+ "characters": [
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Loki as Captain America (uncredited)"
+ }
+ ]
+ },
+ {
+ "title": "Avengers: Age of Ultron",
+ "overview": "When Tony Stark tries to jumpstart a dormant peacekeeping program, things go awry and Earth’s Mightiest Heroes are put to the ultimate test as the fate of the planet hangs in the balance. As the villainous Ultron emerges, it is up to The Avengers to stop him from enacting his terrible plans, and soon uneasy alliances and unexpected action pave the way for an epic and unique global adventure.",
+ "release_date": "2015-04-22",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner / The Hulk"
+ },
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow"
+ },
+ {
+ "actor": "Jeremy Renner",
+ "character": "Clint Barton / Hawkeye"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ },
+ {
+ "actor": "Don Cheadle",
+ "character": "James \"Rhodey\" Rhodes / War Machine"
+ }
+ ]
+ },
+ {
+ "title": "Captain America: The Winter Soldier",
+ "overview": "After the cataclysmic events in New York with The Avengers, Steve Rogers, aka Captain America is living quietly in Washington, D.C. and trying to adjust to the modern world. But when a S.H.I.E.L.D. colleague comes under attack, Steve becomes embroiled in a web of intrigue that threatens to put the world at risk. Joining forces with the Black Widow, Captain America struggles to expose the ever-widening conspiracy while fighting off professional assassins sent to silence him at every turn. When the full scope of the villainous plot is revealed, Captain America and the Black Widow enlist the help of a new ally, the Falcon. However, they soon find themselves up against an unexpected and formidable enemy—the Winter Soldier.",
+ "release_date": "2014-03-20",
+ "characters": [
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow"
+ }
+ ]
+ },
+ {
+ "title": "Thanks for Sharing",
+ "overview": "A romantic comedy that brings together three disparate characters who are learning to face a challenging and often confusing world as they struggle together against a common demon—sex addiction.",
+ "release_date": "2013-09-19",
+ "characters": [
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Adam"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Phoebe"
+ }
+ ]
+ },
+ {
+ "title": "Chef",
+ "overview": "When Chef Carl Casper suddenly quits his job at a prominent Los Angeles restaurant after refusing to compromise his creative integrity for its controlling owner, he is left to figure out what's next. Finding himself in Miami, he teams up with his ex-wife, his friend and his son to launch a food truck. Taking to the road, Chef Carl goes back to his roots to reignite his passion for the kitchen -- and zest for life and love.",
+ "release_date": "2014-05-08",
+ "characters": [
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Molly"
+ },
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Marvin"
+ }
+ ]
+ },
+ {
+ "title": "Marvel Studios: Assembling a Universe",
+ "overview": "A look at the story behind Marvel Studios and the Marvel Cinematic Universe, featuring interviews and behind-the-scenes footage from all of the Marvel films, the Marvel One-Shots and \"Marvel's Agents of S.H.I.E.L.D.\"",
+ "release_date": "2014-03-18",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Himself / Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Himself / Thor"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Himself / Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Himself / Bruce Banner / Hulk"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Herself"
+ },
+ {
+ "actor": "Clark Gregg",
+ "character": "Himself"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Himself"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Herself"
+ },
+ {
+ "actor": "Jeremy Renner",
+ "character": "Himself"
+ }
+ ]
+ },
+ {
+ "title": "Captain America: Civil War",
+ "overview": "Following the events of Age of Ultron, the collective governments of the world pass an act designed to regulate all superhuman activity. This polarizes opinion amongst the Avengers, causing two factions to side with Iron Man or Captain America, which causes an epic battle between former allies.",
+ "release_date": "2016-04-27",
+ "characters": [
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow"
+ },
+ {
+ "actor": "Don Cheadle",
+ "character": "James \"Rhodey\" Rhodes / War Machine"
+ },
+ {
+ "actor": "Jeremy Renner",
+ "character": "Clint Barton / Hawkeye"
+ }
+ ]
+ },
+ {
+ "title": "Thor: Ragnarok",
+ "overview": "Thor is imprisoned on the other side of the universe and finds himself in a race against time to get back to Asgard to stop Ragnarok, the destruction of his home-world and the end of Asgardian civilization, at the hands of an all-powerful new threat, the ruthless Hela.",
+ "release_date": "2017-10-25",
+ "characters": [
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner / Hulk"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow (archive footage / uncredited)"
+ }
+ ]
+ },
+ {
+ "title": "Avengers: Endgame",
+ "overview": "After the devastating events of Avengers: Infinity War, the universe is in ruins due to the efforts of the Mad Titan, Thanos. With the help of remaining allies, the Avengers must assemble once more in order to undo Thanos' actions and restore order to the universe once and for all, no matter what consequences may be in store.",
+ "release_date": "2019-04-24",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner / Hulk"
+ },
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow"
+ },
+ {
+ "actor": "Jeremy Renner",
+ "character": "Clint Barton / Hawkeye"
+ },
+ {
+ "actor": "Don Cheadle",
+ "character": "James Rhodes / War Machine"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Pepper Potts"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ }
+ ]
+ },
+ {
+ "title": "Avengers: Infinity War",
+ "overview": "As the Avengers and their allies have continued to protect the world from threats too large for any one hero to handle, a new danger has emerged from the cosmic shadows: Thanos. A despot of intergalactic infamy, his goal is to collect all six Infinity Stones, artifacts of unimaginable power, and use them to inflict his twisted will on all of reality. Everything the Avengers have fought for has led up to this moment - the fate of Earth and existence itself has never been more uncertain.",
+ "release_date": "2018-04-25",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow"
+ },
+ {
+ "actor": "Don Cheadle",
+ "character": "James \"Rhodey\" Rhodes / War Machine"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Virginia \"Pepper\" Potts"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury (uncredited)"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner / The Hulk"
+ }
+ ]
+ },
+ {
+ "title": "Captain Marvel",
+ "overview": "The story follows Carol Danvers as she becomes one of the universe’s most powerful heroes when Earth is caught in the middle of a galactic war between two alien races. Set in the 1990s, Captain Marvel is an all-new adventure from a previously unseen period in the history of the Marvel Cinematic Universe.",
+ "release_date": "2019-03-06",
+ "characters": [
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ },
+ {
+ "actor": "Clark Gregg",
+ "character": "Agent Phil Coulson"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America (uncredited)"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow (uncredited)"
+ },
+ {
+ "actor": "Don Cheadle",
+ "character": "James 'Rhodey' Rhodes / War Machine (uncredited)"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner / The Hulk (uncredited)"
+ }
+ ]
+ },
+ {
+ "title": "Spider-Man: Homecoming",
+ "overview": "Following the events of Captain America: Civil War, Peter Parker, with the help of his mentor Tony Stark, tries to balance his life as an ordinary high school student in Queens, New York City, with fighting crime as his superhero alter ego Spider-Man as a new threat, the Vulture, emerges.",
+ "release_date": "2017-07-05",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Virginia \"Pepper\" Potts"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ }
+ ]
+ },
+ {
+ "title": "Team Thor",
+ "overview": "Discover what Thor was up to during the events of Captain America: Civil War.",
+ "release_date": "2016-08-28",
+ "characters": [
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner"
+ }
+ ]
+ },
+ {
+ "title": "Black Widow",
+ "overview": "Natasha Romanoff, also known as Black Widow, confronts the darker parts of her ledger when a dangerous conspiracy with ties to her past arises. Pursued by a force that will stop at nothing to bring her down, Natasha must deal with her history as a spy and the broken relationships left in her wake long before she became an Avenger.",
+ "release_date": "2020-10-28",
+ "characters": [
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow"
+ },
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ }
+ ]
+ }
+]
+```
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/bulk-operations.md b/site/versioned_docs/version-4.2/developers/operations-api/bulk-operations.md
new file mode 100644
index 00000000..048ec5d4
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/bulk-operations.md
@@ -0,0 +1,136 @@
+---
+title: Bulk Operations
+---
+
+# Bulk Operations
+
+## CSV Data Load
+Ingests CSV data, provided directly in the operation, as an `insert`, `update` or `upsert` into the specified database table.
+
+* operation _(required)_ - must always be `csv_data_load`
+* action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert`
+* database _(optional)_ - name of the database where you are loading your data. The default is `data`
+* table _(required)_ - name of the table where you are loading your data
+* data _(required)_ - csv data to import into HarperDB
+
+### Body
+```json
+{
+ "operation": "csv_data_load",
+ "database": "dev",
+ "action": "insert",
+ "table": "breed",
+ "data": "id,name,section,country,image\n1,ENGLISH POINTER,British and Irish Pointers and Setters,GREAT BRITAIN,http:/www.fci.be/Nomenclature/Illustrations/001g07.jpg\n2,ENGLISH SETTER,British and Irish Pointers and Setters,GREAT BRITAIN,http:/www.fci.be/Nomenclature/Illustrations/002g07.jpg\n3,KERRY BLUE TERRIER,Large and medium sized Terriers,IRELAND,\n"
+}
+```
+
+### Response: 200
+```json
+ {
+ "message": "Starting job with id 2fe25039-566e-4670-8bb3-2db3d4e07e69",
+ "job_id": "2fe25039-566e-4670-8bb3-2db3d4e07e69"
+ }
+```
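+
+Because the `data` parameter must be a single JSON string with embedded newlines, it is often easiest to build the request body from a CSV file on disk. The following is a minimal sketch of one way to do that with `curl` and `jq`; the hostname, credentials, and file name are placeholders, not part of the API.
+
+```bash
+# JSON-escape the contents of a local CSV file into a single string (requires jq)
+DATA=$(jq -Rs '.' < breeds.csv)
+
+# Submit the csv_data_load operation; the response contains a job_id that can be polled with get_job
+curl -s http://localhost:9925 \
+  -u admin:password \
+  -H 'Content-Type: application/json' \
+  -d "{\"operation\":\"csv_data_load\",\"database\":\"dev\",\"table\":\"breed\",\"data\":$DATA}"
+```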
+
+---
+
+## CSV File Load
+Ingests CSV data, provided via a path on the local filesystem, as an `insert`, `update` or `upsert` into the specified database table.
+
+_Note: The CSV file must reside on the same machine on which HarperDB is running. For example, the path to a CSV on your computer will produce an error if your HarperDB instance is a cloud instance._
+
+* operation _(required)_ - must always be `csv_file_load`
+* action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert`
+* database _(optional)_ - name of the database where you are loading your data. The default is `data`
+* table _(required)_ - name of the table where you are loading your data
+* file_path _(required)_ - path to the csv file on the host running harperdb
+
+### Body
+```json
+{
+ "operation": "csv_file_load",
+ "action": "insert",
+ "database": "dev",
+ "table": "breed",
+ "file_path": "/home/user/imports/breeds.csv"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 3994d8e2-ec6a-43c4-8563-11c1df81870e",
+ "job_id": "3994d8e2-ec6a-43c4-8563-11c1df81870e"
+}
+```
+
+---
+
+## CSV URL Load
+Ingests CSV data, provided via URL, as an `insert`, `update` or `upsert` into the specified database table.
+
+* operation _(required)_ - must always be `csv_url_load`
+* action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert`
+* database _(optional)_ - name of the database where you are loading your data. The default is `data`
+* table _(required)_ - name of the table where you are loading your data
+* csv_url _(required)_ - URL to the csv
+
+### Body
+```json
+{
+ "operation": "csv_url_load",
+ "action": "insert",
+ "database": "dev",
+ "table": "breed",
+ "csv_url": "https:/s3.amazonaws.com/complimentarydata/breeds.csv"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 332aa0a2-6833-46cd-88a6-ae375920436a",
+ "job_id": "332aa0a2-6833-46cd-88a6-ae375920436a"
+}
+```
+
+---
+
+## Import from S3
+This operation allows users to import CSV or JSON files from an AWS S3 bucket as an `insert`, `update` or `upsert`.
+
+* operation _(required)_ - must always be `import_from_s3`
+* action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert`
+* database _(optional)_ - name of the database where you are loading your data. The default is `data`
+* table _(required)_ - name of the table where you are loading your data
+* s3 _(required)_ - object containing required AWS S3 bucket info for operation:
+ * aws_access_key_id - AWS access key for authenticating into your S3 bucket
+ * aws_secret_access_key - AWS secret for authenticating into your S3 bucket
+ * bucket - AWS S3 bucket to import from
+ * key - the name of the file to import - _the file must include a valid file extension ('.csv' or '.json')_
+ * region - the region of the bucket
+
+### Body
+```json
+{
+ "operation": "import_from_s3",
+ "action": "insert",
+ "database": "dev",
+ "table": "dog",
+ "s3": {
+ "aws_access_key_id": "YOUR_KEY",
+ "aws_secret_access_key": "YOUR_SECRET_KEY",
+ "bucket": "BUCKET_NAME",
+ "key": "OBJECT_NAME",
+ "region": "BUCKET_REGION"
+ }
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 062a1892-6a0a-4282-9791-0f4c93b12e16",
+ "job_id": "062a1892-6a0a-4282-9791-0f4c93b12e16"
+}
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/clustering.md b/site/versioned_docs/version-4.2/developers/operations-api/clustering.md
new file mode 100644
index 00000000..bb7c0632
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/clustering.md
@@ -0,0 +1,390 @@
+---
+title: Clustering
+---
+
+# Clustering
+
+## Cluster Set Routes
+Adds a route/routes to either the hub or leaf server cluster configuration.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `cluster_set_routes`
+* server _(required)_ - must always be `hub` or `leaf`, in most cases you should use `hub` here
+* routes _(required)_ - must be an array of objects, each containing a host and port:
+ * host - the host of the remote instance you are clustering to
+ * port - the clustering port of the remote instance you are clustering to, in most cases this is the value in `clustering.hubServer.cluster.network.port` on the remote instance `harperdb-config.yaml`
+
+### Body
+```json
+{
+ "operation": "cluster_set_routes",
+ "server": "hub",
+ "routes": [
+ {
+ "host": "3.22.181.22",
+ "port": 12345
+ },
+ {
+ "host": "3.137.184.8",
+ "port": 12345
+ },
+ {
+ "host": "18.223.239.195",
+ "port": 12345
+ },
+ {
+ "host": "18.116.24.71",
+ "port": 12345
+ }
+ ]
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "cluster routes successfully set",
+ "set": [
+ {
+ "host": "3.22.181.22",
+ "port": 12345
+ },
+ {
+ "host": "3.137.184.8",
+ "port": 12345
+ },
+ {
+ "host": "18.223.239.195",
+ "port": 12345
+ },
+ {
+ "host": "18.116.24.71",
+ "port": 12345
+ }
+ ],
+ "skipped": []
+}
+```
+
+---
+
+## Cluster Get Routes
+Gets all the hub and leaf server routes from the config file.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `cluster_get_routes`
+
+### Body
+```json
+{
+ "operation": "cluster_get_routes"
+}
+```
+
+### Response: 200
+```json
+{
+ "hub": [
+ {
+ "host": "3.22.181.22",
+ "port": 12345
+ },
+ {
+ "host": "3.137.184.8",
+ "port": 12345
+ },
+ {
+ "host": "18.223.239.195",
+ "port": 12345
+ },
+ {
+ "host": "18.116.24.71",
+ "port": 12345
+ }
+ ],
+ "leaf": []
+}
+```
+
+---
+
+## Cluster Delete Routes
+Removes route(s) from hub and/or leaf server routes array in config file. Returns a deletion success message and arrays of deleted and skipped records.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `cluster_delete_routes`
+* routes _(required)_ - must be an array of route object(s)
+
+### Body
+
+```json
+{
+ "operation": "cluster_delete_routes",
+ "routes": [
+ {
+ "host": "18.116.24.71",
+ "port": 12345
+ }
+ ]
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "cluster routes successfully deleted",
+ "deleted": [
+ {
+ "host": "18.116.24.71",
+ "port": 12345
+ }
+ ],
+ "skipped": []
+}
+```
+
+
+---
+
+## Add Node
+Registers an additional HarperDB instance with associated subscriptions. Learn more about HarperDB clustering here: https://harperdb.io/docs/clustering/.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `add_node`
+* node_name _(required)_ - the node name of the remote node
+* subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`:
+ * schema - the schema to replicate from
+ * table - the table to replicate from
+ * subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table
+ * publish - a boolean which determines if transactions on the local table should be replicated on the remote table
+ * start_time _(optional)_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format
+
+### Body
+```json
+{
+ "operation": "add_node",
+ "node_name": "ec2-3-22-181-22",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "subscribe": false,
+ "publish": true,
+ "start_time": "2022-09-02T20:06:35.993Z"
+ }
+ ]
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Successfully added 'ec2-3-22-181-22' to manifest"
+}
+```
+
+---
+
+## Update Node
+Modifies an existing HarperDB instance registration and associated subscriptions. Learn more about HarperDB clustering here: https://harperdb.io/docs/clustering/.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `update_node`
+* node_name _(required)_ - the node name of the remote node you are updating
+* subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`:
+ * schema - the schema to replicate from
+ * table - the table to replicate from
+ * subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table
+ * publish - a boolean which determines if transactions on the local table should be replicated on the remote table
+
+### Body
+```json
+{
+ "operation": "update_node",
+ "node_name": "ec2-18-223-239-195",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "subscribe": true,
+ "publish": false
+ }
+ ]
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Successfully updated 'ec2-3-22-181-22'"
+}
+```
+
+---
+
+## Cluster Status
+Returns an array of status objects from a cluster. A status object will contain the clustering node name, whether or not clustering is enabled, and a list of possible connections. Learn more about HarperDB clustering here: https://harperdb.io/docs/clustering/.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `cluster_status`
+
+### Body
+```json
+{
+ "operation": "cluster_status"
+}
+```
+
+### Response: 200
+```json
+{
+ "node_name": "ec2-18-221-143-69",
+ "is_enabled": true,
+ "connections": [
+ {
+ "node_name": "ec2-3-22-181-22",
+ "status": "open",
+ "ports": {
+ "clustering": 12345,
+ "operations_api": 9925
+ },
+ "latency_ms": 13,
+ "uptime": "30d 1h 18m 8s",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "publish": true,
+ "subscribe": true
+ }
+ ]
+ }
+ ]
+}
+```
+
+
+---
+
+## Cluster Network
+Returns an object array of enmeshed nodes. Each node object will contain the name of the node, the amount of time (in milliseconds) it took for it to respond, the names of the nodes it is enmeshed with and the routes set in its config file. Learn more about HarperDB clustering here: [https://harperdb.io/docs/clustering/](https://harperdb.io/docs/clustering/).
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_- must always be `cluster_network`
+* timeout (_optional_) - the amount of time in milliseconds to wait for a response from the network. Must be a number
+* connected_nodes (_optional_) - omit `connected_nodes` from the response. Must be a boolean. Defaults to `false`
+* routes (_optional_) - omit `routes` from the response. Must be a boolean. Defaults to `false`
+
+### Body
+
+```json
+{
+ "operation": "cluster_network"
+}
+```
+
+### Response: 200
+```json
+{
+ "nodes": [
+ {
+ "name": "local_node",
+ "response_time": 4,
+ "connected_nodes": ["ec2-3-142-255-78"],
+ "routes": [
+ {
+ "host": "3.142.255.78",
+ "port": 9932
+ }
+ ]
+ },
+ {
+ "name": "ec2-3-142-255-78",
+ "response_time": 57,
+ "connected_nodes": ["ec2-3-12-153-124", "ec2-3-139-236-138", "local_node"],
+ "routes": []
+ }
+ ]
+}
+```
+
+---
+
+## Remove Node
+Removes a HarperDB instance and associated subscriptions from the cluster. Learn more about HarperDB clustering here: https://harperdb.io/docs/clustering/.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `remove_node`
+* node_name _(required)_ - the name of the node you are de-registering
+
+### Body
+```json
+{
+ "operation": "remove_node",
+ "node_name": "ec2-3-22-181-22"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Successfully removed 'ec2-3-22-181-22' from manifest"
+}
+```
+
+---
+
+## Configure Cluster
+Bulk create/remove subscriptions for any number of remote nodes. Resets and replaces any existing clustering setup.
+Learn more about HarperDB clustering here: https://harperdb.io/docs/clustering/.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `configure_cluster`
+* connections _(required)_ - must be an object array with each object containing `node_name` and `subscriptions` for that node
+
+### Body
+```json
+{
+ "operation": "configure_cluster",
+ "connections": [
+ {
+ "node_name": "ec2-3-137-184-8",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "subscribe": true,
+ "publish": false
+ }
+ ]
+ },
+ {
+ "node_name": "ec2-18-223-239-195",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "subscribe": true,
+ "publish": true
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Cluster successfully configured."
+}
+```
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/components.md b/site/versioned_docs/version-4.2/developers/operations-api/components.md
new file mode 100644
index 00000000..17ba5f0a
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/components.md
@@ -0,0 +1,291 @@
+---
+title: Components
+---
+
+# Components
+
+## Add Component
+
+Creates a new component project in the component root directory using a predefined template.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `add_component`
+* project _(required)_ - the name of the project you wish to create
+
+### Body
+```json
+{
+ "operation": "add_component",
+ "project": "my-component"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Successfully added project: my-component"
+}
+```
+---
+## Deploy Component
+
+Will deploy a component using either a base64-encoded string representation of a `.tar` file (the output from `package_component`) or a package value, which can be any valid NPM reference, such as a GitHub repo, an NPM package, a tarball, a local directory or a website.\
+
+If deploying with the `payload` option, HarperDB will decode the base64-encoded string, reconstitute the .tar file of your project folder, and extract it to the component root project directory.\
+
+If deploying with the `package` option, the package value will be written to `harperdb-config.yaml`, and `npm install` is then used to install the component into the `node_modules` directory located in the hdb root. The value is a package reference, which should generally be a [URL reference, as described here](https://docs.npmjs.com/cli/v10/configuring-npm/package-json#urls-as-dependencies) (it is also possible to use NPM-registered packages and file paths). URL package references can directly reference tarballs that can be installed as a package. However, the most common and recommended usage is to install from a Git repository, which can be combined with a tag to deploy a specific version directly from versioned source control. When using tags, we highly recommend the `semver` directive to ensure consistent and reliable installation by NPM. In addition to tags, you can also reference branches or commit numbers. Here is an example URL package reference to a (public) Git repository that doesn't require authentication:
+```
+https://github.com/HarperDB/application-template#semver:v1.0.0
+```
+or this can be shortened to:
+```
+HarperDB/application-template#semver:v1.0.0
+```
+
+You can also install from a private repository if you have SSH keys installed on the server:
+```
+git+ssh://git@github.com:my-org/my-app.git#semver:v1.0.0
+```
+Or you can use a GitHub token:
+```
+https://@github.com/my-org/my-app#semver:v1.0.0
+```
+Or you can use a GitLab Project Access Token:
+```
+https://my-project:@gitlab.com/my-group/my-project#semver:v1.0.0
+```
+Note that your component will be installed by NPM. If your component has dependencies, NPM will attempt to download and install these as well. NPM normally uses the public registry.npmjs.org registry. If you are installing without network access to this registry, you may wish to define [custom registry locations](https://docs.npmjs.com/cli/v8/configuring-npm/npmrc) for any dependencies that need to be installed. NPM will install the deployed component and any dependencies in node_modules in the hdb root directory (typically `~/hdb/node_modules`).
+
+_Note: After deploying a component a restart may be required_
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `deploy_component`
+* project _(required)_ - the name of the project you wish to deploy
+* package _(optional)_ - this can be any valid GitHub or NPM reference
+* payload _(optional)_ - a base64-encoded string representation of the .tar file. Must be a string
+
+### Body
+
+```json
+{
+ "operation": "deploy_component",
+ "project": "my-component",
+ "payload": "A very large base64-encoded string representation of the .tar file"
+}
+```
+
+```json
+{
+ "operation": "deploy_component",
+ "project": "my-component",
+ "package": "HarperDB/application-template"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "Successfully deployed: my-component"
+}
+```
+---
+## Package Component
+
+Creates a temporary `.tar` file of the specified project folder, then reads it into a base64-encoded string and returns an object containing the project name and the payload (the base64-encoded string).
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `package_component`
+* project _(required)_ - the name of the project you wish to package
+* skip_node_modules _(optional)_ - if true, the project's `node_modules` directory will be excluded from the `.tar` file. Must be a boolean
+
+### Body
+
+```json
+{
+ "operation": "package_component",
+ "project": "my-component",
+ "skip_node_modules": true
+}
+```
+
+### Response: 200
+
+```json
+{
+ "project": "my-component",
+ "payload": "LgAAAAAAAAAAAAAAAAAAA...AAAAAAAAAAAAAAAAAAAAAAAAAAAAA=="
+}
+```
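+
+A common workflow is to package a project on one instance and deploy the resulting payload to another. The sketch below chains the two operations with `curl` and `jq`; the instance hostnames and credentials are placeholders. For very large payloads, consider writing the deploy body to a file and passing it with `-d @body.json` instead of on the command line.
+
+```bash
+# 1. Package the project on the source instance (node_modules excluded)
+PAYLOAD=$(curl -s http://source-instance:9925 -u admin:password \
+  -H 'Content-Type: application/json' \
+  -d '{"operation":"package_component","project":"my-component","skip_node_modules":true}' \
+  | jq -r '.payload')
+
+# 2. Deploy that payload to the target instance
+curl -s http://target-instance:9925 -u admin:password \
+  -H 'Content-Type: application/json' \
+  -d "{\"operation\":\"deploy_component\",\"project\":\"my-component\",\"payload\":\"$PAYLOAD\"}"
+```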
+---
+## Drop Component
+
+Deletes a file from inside the component project or deletes the complete project.
+
+**If just `project` is provided, it will delete all of that project's local files and folders.**
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `drop_component`
+* project _(required)_ - the name of the project you wish to delete or to delete from if using the `file` parameter
+* file _(optional)_ - the path relative to your project folder of the file you wish to delete
+
+### Body
+
+```json
+{
+ "operation": "drop_component",
+ "project": "my-component",
+ "file": "utils/myUtils.js"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "Successfully dropped: my-component/utils/myUtils.js"
+}
+```
+---
+## Get Components
+
+Gets all local component files and folders and any component config from `harperdb-config.yaml`
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `get_components`
+
+### Body
+
+```json
+{
+ "operation": "get_components"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "name": "components",
+ "entries": [
+ {
+ "package": "HarperDB/application-template",
+ "name": "deploy-test-gh"
+ },
+ {
+ "package": "@fastify/compress",
+ "name": "fast-compress"
+ },
+ {
+ "name": "my-component",
+ "entries": [
+ {
+ "name": "LICENSE",
+ "mtime": "2023-08-22T16:00:40.286Z",
+ "size": 1070
+ },
+ {
+ "name": "index.md",
+ "mtime": "2023-08-22T16:00:40.287Z",
+ "size": 1207
+ },
+ {
+ "name": "config.yaml",
+ "mtime": "2023-08-22T16:00:40.287Z",
+ "size": 1069
+ },
+ {
+ "name": "package.json",
+ "mtime": "2023-08-22T16:00:40.288Z",
+ "size": 145
+ },
+ {
+ "name": "resources.js",
+ "mtime": "2023-08-22T16:00:40.289Z",
+ "size": 583
+ },
+ {
+ "name": "schema.graphql",
+ "mtime": "2023-08-22T16:00:40.289Z",
+ "size": 466
+ },
+ {
+ "name": "utils",
+ "entries": [
+ {
+ "name": "commonUtils.js",
+ "mtime": "2023-08-22T16:00:40.289Z",
+ "size": 583
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+---
+## Get Component File
+
+Gets the contents of a file inside a component project.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `get_component_file`
+* project _(required)_ - the name of the project where the file is located
+* file _(required)_ - the path relative to your project folder of the file you wish to view
+* encoding _(optional)_ - the encoding that will be passed to the read file call. Defaults to `utf8`
+
+### Body
+
+```json
+{
+ "operation": "get_component_file",
+ "project": "my-component",
+ "file": "resources.js"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "/**export class MyCustomResource extends tables.TableName {\n\t/ we can define our own custom POST handler\n\tpost(content) {\n\t\t/ do something with the incoming content;\n\t\treturn super.post(content);\n\t}\n\t/ or custom GET handler\n\tget() {\n\t\t/ we can modify this resource before returning\n\t\treturn super.get();\n\t}\n}\n */\n/ we can also define a custom resource without a specific table\nexport class Greeting extends Resource {\n\t/ a \"Hello, world!\" handler\n\tget() {\n\t\treturn { greeting: 'Hello, world!' };\n\t}\n}"
+}
+```
+---
+## Set Component File
+
+Creates or updates a file inside a component project.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `set_component_file`
+* project _(required)_ - the name of the project the file is located in
+* file _(required)_ - the path relative to your project folder of the file you wish to set
+* payload _(required)_ - what will be written to the file
+* encoding _(optional)_ - the encoding that will be passed to the write file call. Defaults to `utf8`
+
+### Body
+
+```json
+{
+ "operation": "set_component_file",
+ "project": "my-component",
+ "file": "test.js",
+ "payload": "console.log('hello world')"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "Successfully set component: test.js"
+}
+```
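+
+Because `payload` is written to the file verbatim, a convenient pattern is to JSON-escape a local file with `jq` and send it with `curl`. This is a minimal sketch; the hostname, credentials, and file names are placeholders.
+
+```bash
+# JSON-escape the local file contents (requires jq)
+PAYLOAD=$(jq -Rs '.' < test.js)
+
+# Write it to my-component/test.js on the instance
+curl -s http://localhost:9925 -u admin:password \
+  -H 'Content-Type: application/json' \
+  -d "{\"operation\":\"set_component_file\",\"project\":\"my-component\",\"file\":\"test.js\",\"payload\":$PAYLOAD}"
+```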
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/custom-functions.md b/site/versioned_docs/version-4.2/developers/operations-api/custom-functions.md
new file mode 100644
index 00000000..bf9537fc
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/custom-functions.md
@@ -0,0 +1,276 @@
+---
+title: Custom Functions
+---
+
+# Custom Functions
+
+## Custom Functions Status
+
+Returns the state of the Custom Functions server. This includes whether it is enabled, upon which port it is listening, and where its root project directory is located on the host machine.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `custom_functions_status`
+
+### Body
+```json
+{
+ "operation": "custom_functions_status"
+}
+```
+
+### Response: 200
+```json
+{
+ "is_enabled": true,
+ "port": 9926,
+ "directory": "/Users/myuser/hdb/custom_functions"
+}
+```
+
+---
+
+## Get Custom Functions
+
+Returns an array of projects within the Custom Functions root project directory. Each project has details including each of the files in the routes and helpers directories, and the total file count in the static folder.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `get_custom_functions`
+
+### Body
+
+```json
+{
+ "operation": "get_custom_functions"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "dogs": {
+ "routes": ["examples"],
+ "helpers":["example"],
+ "static":3
+ }
+}
+```
+
+---
+
+## Get Custom Function
+
+Returns the content of the specified file as text. HarperDB Studio uses this call to render the file content in its built-in code editor.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `get_custom_function`
+* project _(required)_ - the name of the project containing the file for which you wish to get content
+* type _(required)_ - the name of the sub-folder containing the file for which you wish to get content - must be either routes or helpers
+* file _(required)_ - The name of the file for which you wish to get content - should not include the file extension (which is always .js)
+
+### Body
+
+```json
+{
+ "operation": "get_custom_function",
+ "project": "dogs",
+ "type": "helpers",
+ "file": "example"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "'use strict';\n\nconst https = require('https');\n\nconst authRequest = (options) => {\n return new Promise((resolve, reject) => {\n const req = https.request(options, (res) => {\n res.setEncoding('utf8');\n let responseBody = '';\n\n res.on('data', (chunk) => {\n responseBody += chunk;\n });\n\n res.on('end', () => {\n resolve(JSON.parse(responseBody));\n });\n });\n\n req.on('error', (err) => {\n reject(err);\n });\n\n req.end();\n });\n};\n\nconst customValidation = async (request,logger) => {\n const options = {\n hostname: 'jsonplaceholder.typicode.com',\n port: 443,\n path: '/todos/1',\n method: 'GET',\n headers: { authorization: request.headers.authorization },\n };\n\n const result = await authRequest(options);\n\n /*\n * throw an authentication error based on the response body or statusCode\n */\n if (result.error) {\n const errorString = result.error || 'Sorry, there was an error authenticating your request';\n logger.error(errorString);\n throw new Error(errorString);\n }\n return request;\n};\n\nmodule.exports = customValidation;\n"
+}
+```
+
+---
+
+## Set Custom Function
+
+Updates the content of the specified file. HarperDB Studio uses this call to save any changes made through its built-in code editor.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `set_custom_function`
+* project _(required)_ - the name of the project containing the file for which you wish to set content
+* type _(required)_ - the name of the sub-folder containing the file for which you wish to set content - must be either routes or helpers
+* file _(required)_ - the name of the file for which you wish to set content - should not include the file extension (which is always .js)
+* function_content _(required)_ - the content you wish to save into the specified file
+
+### Body
+
+```json
+{
+ "operation": "set_custom_function",
+ "project": "dogs",
+ "type": "helpers",
+ "file": "example",
+ "function_content": "'use strict';\n\nconst https = require('https');\n\nconst authRequest = (options) => {\n return new Promise((resolve, reject) => {\n const req = https.request(options, (res) => {\n res.setEncoding('utf8');\n let responseBody = '';\n\n res.on('data', (chunk) => {\n responseBody += chunk;\n });\n\n res.on('end', () => {\n resolve(JSON.parse(responseBody));\n });\n });\n\n req.on('error', (err) => {\n reject(err);\n });\n\n req.end();\n });\n};\n\nconst customValidation = async (request,logger) => {\n const options = {\n hostname: 'jsonplaceholder.typicode.com',\n port: 443,\n path: '/todos/1',\n method: 'GET',\n headers: { authorization: request.headers.authorization },\n };\n\n const result = await authRequest(options);\n\n /*\n * throw an authentication error based on the response body or statusCode\n */\n if (result.error) {\n const errorString = result.error || 'Sorry, there was an error authenticating your request';\n logger.error(errorString);\n throw new Error(errorString);\n }\n return request;\n};\n\nmodule.exports = customValidation;\n"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "Successfully updated custom function: example.js"
+}
+```
+
+---
+
+## Drop Custom Function
+
+Deletes the specified file.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `drop_custom_function`
+* project _(required)_ - the name of the project containing the file you wish to delete
+* type _(required)_ - the name of the sub-folder containing the file you wish to delete. Must be either routes or helpers
+* file _(required)_ - the name of the file you wish to delete. Should not include the file extension (which is always .js)
+
+### Body
+
+```json
+{
+ "operation": "drop_custom_function",
+ "project": "dogs",
+ "type": "helpers",
+ "file": "example"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message":"Successfully deleted custom function: example.js"
+}
+```
+
+---
+
+## Add Custom Function Project
+
+Creates a new project folder in the Custom Functions root project directory. It also inserts into the new directory the contents of our Custom Functions Project template, which is available publicly here: https://github.com/HarperDB/harperdb-custom-functions-template.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `add_custom_function_project`
+* project _(required)_ - the name of the project you wish to create
+
+### Body
+
+```json
+{
+ "operation": "add_custom_function_project",
+ "project": "dogs"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message":"Successfully created custom function project: dogs"
+}
+```
+
+---
+
+## Drop Custom Function Project
+
+Deletes the specified project folder and all of its contents.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `drop_custom_function_project`
+* project _(required)_ - the name of the project you wish to delete
+
+### Body
+
+```json
+{
+ "operation": "drop_custom_function_project",
+ "project": "dogs"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "Successfully deleted project: dogs"
+}
+```
+
+---
+
+## Package Custom Function Project
+
+Creates a `.tar` file of the specified project folder, then reads it into a base64-encoded string and returns an object containing the project name, the payload (the base64-encoded string), and the path to the temporary `.tar` file.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `package_custom_function_project`
+* project _(required)_ - the name of the project you wish to package up for deployment
+* skip_node_modules _(optional)_ - if true, the project's `node_modules` directory will be excluded from the `.tar` file. Must be a boolean.
+
+### Body
+
+```json
+{
+ "operation": "package_custom_function_project",
+ "project": "dogs",
+ "skip_node_modules": true
+}
+```
+
+### Response: 200
+
+```json
+{
+ "project": "dogs",
+ "payload": "LgAAAAAAAAAAAAAAAAAAA...AAAAAAAAAAAAAAAAAAAAAAAAAAAAA==",
+ "file": "/tmp/d27f1154-5d82-43f0-a5fb-a3018f366081.tar"
+}
+```
+
+---
+
+## Deploy Custom Function Project
+
+Takes the output of `package_custom_function_project`, decodes the base64-encoded string, reconstitutes the `.tar` file of your project folder, and extracts it to the Custom Functions root project directory.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `deploy_custom_function_project`
+* project _(required)_ - the name of the project you wish to deploy. Must be a string
+* payload _(required)_ - a base64-encoded string representation of the .tar file. Must be a string
+
+
+### Body
+
+```json
+{
+ "operation": "deploy_custom_function_project",
+ "project": "dogs",
+ "payload": "A very large base64-encoded string represenation of the .tar file"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "Successfully deployed project: dogs"
+}
+```
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/databases-and-tables.md b/site/versioned_docs/version-4.2/developers/operations-api/databases-and-tables.md
new file mode 100644
index 00000000..18f23171
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/databases-and-tables.md
@@ -0,0 +1,362 @@
+---
+title: Databases and Tables
+---
+
+# Databases and Tables
+
+## Describe All
+Returns the definitions of all databases and tables within the instance. Record counts above 5000 records are estimated, as determining the exact count can be expensive. When the record count is estimated, this is indicated by the inclusion of an `estimated_record_range` confidence interval. If you need the exact count, you can include `"exact_count": true` in the operation (see the example below), but be aware that this requires a full table scan, which may be expensive.
+
+* operation _(required)_ - must always be `describe_all`
+
+### Body
+```json
+{
+ "operation": "describe_all"
+}
+```
+
+### Response: 200
+```json
+{
+ "dev": {
+ "dog": {
+ "schema": "dev",
+ "name": "dog",
+ "hash_attribute": "id",
+ "audit": true,
+ "schema_defined": false,
+ "attributes": [
+ {
+ "attribute": "id",
+ "indexed": true,
+ "is_primary_key": true
+ },
+ {
+ "attribute": "__createdtime__",
+ "indexed": true
+ },
+ {
+ "attribute": "__updatedtime__",
+ "indexed": true
+ },
+ {
+ "attribute": "type",
+ "indexed": true
+ }
+ ],
+ "clustering_stream_name": "dd9e90c2689151ab812e0f2d98816bff",
+ "record_count": 4000,
+ "estimated_record_range": [3976, 4033],
+ "last_updated_record": 1697658683698.4504
+ }
+ }
+}
+```
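+
+If you do need exact counts, you can pass `exact_count` in the same operation, as mentioned above. A minimal `curl` sketch (the hostname and credentials are placeholders):
+
+```bash
+curl -s http://localhost:9925 -u admin:password \
+  -H 'Content-Type: application/json' \
+  -d '{"operation": "describe_all", "exact_count": true}'
+```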
+
+---
+
+## Describe Database
+Returns the definitions of all tables within the specified database.
+
+* operation _(required)_ - must always be `describe_database`
+* database _(optional)_ - database where the table you wish to describe lives. The default is `data`
+
+### Body
+```json
+{
+ "operation": "describe_database",
+ "database": "dev"
+}
+```
+
+### Response: 200
+```json
+{
+ "dog": {
+ "schema": "dev",
+ "name": "dog",
+ "hash_attribute": "id",
+ "audit": true,
+ "schema_defined": false,
+ "attributes": [
+ {
+ "attribute": "id",
+ "indexed": true,
+ "is_primary_key": true
+ },
+ {
+ "attribute": "__createdtime__",
+ "indexed": true
+ },
+ {
+ "attribute": "__updatedtime__",
+ "indexed": true
+ },
+ {
+ "attribute": "type",
+ "indexed": true
+ }
+ ],
+ "clustering_stream_name": "dd9e90c2689151ab812e0f2d98816bff",
+ "record_count": 4000,
+ "estimated_record_range": [3976, 4033],
+ "last_updated_record": 1697658683698.4504
+ }
+}
+```
+
+---
+
+## Describe Table
+Returns the definition of the specified table.
+
+* operation _(required)_ - must always be `describe_table`
+* table _(required)_ - table you wish to describe
+* database _(optional)_ - database where the table you wish to describe lives. The default is `data`
+
+### Body
+```json
+{
+ "operation": "describe_table",
+ "table": "dog"
+}
+```
+
+### Response: 200
+```json
+{
+ "schema": "dev",
+ "name": "dog",
+ "hash_attribute": "id",
+ "audit": true,
+ "schema_defined": false,
+ "attributes": [
+ {
+ "attribute": "id",
+ "indexed": true,
+ "is_primary_key": true
+ },
+ {
+ "attribute": "__createdtime__",
+ "indexed": true
+ },
+ {
+ "attribute": "__updatedtime__",
+ "indexed": true
+ },
+ {
+ "attribute": "type",
+ "indexed": true
+ }
+ ],
+ "clustering_stream_name": "dd9e90c2689151ab812e0f2d98816bff",
+ "record_count": 4000,
+ "estimated_record_range": [3976, 4033],
+ "last_updated_record": 1697658683698.4504
+}
+```
+
+---
+
+## Create Database
+Create a new database.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `create_database`
+* database _(optional)_ - name of the database you are creating. The default is `data`
+
+### Body
+```json
+{
+ "operation": "create_database",
+ "database": "dev"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "database 'dev' successfully created"
+}
+```
+
+---
+
+## Drop Database
+Drop an existing database. NOTE: Dropping a database will delete all tables and all of their records in that database.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - this should always be `drop_database`
+* database _(required)_ - name of the database you are dropping
+
+### Body
+```json
+{
+ "operation": "drop_database",
+ "database": "dev"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "successfully deleted 'dev'"
+}
+```
+
+---
+
+## Create Table
+Create a new table within a database.
+
+_Operation is restricted to super_user roles only_
+
+
+* operation _(required)_ - must always be `create_table`
+* database _(optional)_ - name of the database where you want your table to live. If the database does not exist, it will be created. If the `database` property is not provided it will default to `data`.
+* table _(required)_ - name of the table you are creating
+* primary_key _(required)_ - primary key for the table
+* attributes _(optional)_ - an array of attributes that specifies the schema for the table (that is, the set of attributes the table accepts). When attributes are supplied, the table will not be considered a "dynamic schema" table, and attributes will not be auto-added when records with new properties are inserted. Each attribute is specified as:
+ * name _(required)_ - the name of the attribute
+ * indexed _(optional)_ - indicates if the attribute should be indexed
+ * type _(optional)_ - specifies the data type of the attribute (can be String, Int, Float, Date, ID, Any)
+* expiration _(optional)_ - specifies the time-to-live or expiration of records in the table before they are evicted (records are not evicted on any timer if not specified). This is specified in seconds.
+
+### Body
+```json
+{
+ "operation": "create_table",
+ "database": "dev",
+ "table": "dog",
+ "primary_key": "id"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "table 'dev.dog' successfully created."
+}
+```
+
+---
+
+## Drop Table
+Drop an existing database table. NOTE: Dropping a table will delete all associated records in that table.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - this should always be `drop_table`
+* database _(optional)_ - database where the table you are dropping lives. The default is `data`
+* table _(required)_ - name of the table you are dropping
+
+### Body
+
+```json
+{
+ "operation": "drop_table",
+ "database": "dev",
+ "table": "dog"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "successfully deleted table 'dev.dog'"
+}
+```
+
+---
+
+## Create Attribute
+Create a new attribute within the specified table. **The `create_attribute` operation can be used by admins wishing to pre-define schema values, whether for setting role-based permissions or for any other reason.**
+
+_Note: HarperDB will automatically create new attributes on insert and update if they do not already exist within the schema._
+
+* operation _(required)_ - must always be `create_attribute`
+* database _(optional)_ - name of the database containing the table to which you want to add your attribute. The default is `data`
+* table _(required)_ - name of the table to which you want to add your attribute
+* attribute _(required)_ - name for the attribute
+
+### Body
+```json
+{
+ "operation": "create_attribute",
+ "database": "dev",
+ "table": "dog",
+ "attribute": "is_adorable"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "inserted 1 of 1 records",
+ "skipped_hashes": [],
+ "inserted_hashes": [
+ "383c0bef-5781-4e1c-b5c8-987459ad0831"
+ ]
+}
+```
+
+---
+
+## Drop Attribute
+Drop an existing attribute from the specified table. NOTE: Dropping an attribute will delete all associated attribute values in that table.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - this should always be `drop_attribute`
+* database _(optional)_ - database where the table you are dropping lives. The default is `data`
+* table _(required)_ - table where the attribute you are dropping lives
+* attribute _(required)_ - attribute that you intend to drop
+
+### Body
+
+```json
+{
+ "operation": "drop_attribute",
+ "database": "dev",
+ "table": "dog",
+ "attribute": "is_adorable"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "successfully deleted attribute 'is_adorable'"
+}
+```
+
+---
+
+## Get Backup
+This will return a snapshot of the requested database. This provides a means for backing up the database through the operations API. The response will be the raw database file (in binary format), which can later be restored as a database file by copying it into the appropriate hdb/databases directory (with HarperDB not running). The returned file is a snapshot of the database at the moment in time that the get_backup operation begins. This also supports backing up individual tables in a database. However, this is a more expensive operation than backing up the database as a whole, and it will lose any transactional atomicity between writes across tables, so generally it is recommended that you back up the entire database.
+
+It is important to note that trying to copy a database file that is in use (HarperDB actively running and writing to the file) using standard file copying tools is not safe (the copied file will likely be corrupt), which is why using this snapshot operation is recommended for backups (volume snapshots are also a good way to backup HarperDB databases).
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - this should always be `get_backup`
+* database _(required)_ - this is the database that will be snapshotted and returned
+* table _(optional)_ - this will specify a specific table to backup
+* tables _(optional)_ - this will specify a specific set of tables to backup
+
+### Body
+
+```json
+{
+ "operation": "get_backup",
+ "database": "dev"
+}
+```
+
+### Response: 200
+```
+The database in raw binary data format
+```
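+
+Because the response is the raw database file rather than JSON, you will usually want to stream it straight to disk. A minimal `curl` sketch, assuming a local instance with placeholder credentials (the output filename is arbitrary):
+
+```bash
+# Snapshot the 'dev' database and save the raw binary response to a local file
+curl -s http://localhost:9925 -u admin:password \
+  -H 'Content-Type: application/json' \
+  -d '{"operation":"get_backup","database":"dev"}' \
+  --output dev-backup.mdb
+```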
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/index.md b/site/versioned_docs/version-4.2/developers/operations-api/index.md
new file mode 100644
index 00000000..cf2db22d
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/index.md
@@ -0,0 +1,51 @@
+---
+title: Operations API
+---
+
+# Operations API
+
+The operations API provides a full set of capabilities for configuring, deploying, administering, and controlling HarperDB. To send operations to the operations API, you send a POST request to the operations API endpoint, which [defaults to port 9925](../../../deployments/configuration), on the root path, where the body is the operation object. These requests need to be authenticated, which can be done with [basic auth](../../../developers/security/basic-auth) or [JWT authentication](../../../developers/security/jwt-auth). For example, a request to create a table would be performed as:
+
+```http
+POST http://my-harperdb-server:9925/
+Authorization: Basic YourBase64EncodedInstanceUser:Pass
+Content-Type: application/json
+
+{
+ "operation": "create_table",
+ "table": "my-table"
+}
+```
+
+The operations API reference is available below and categorized by topic:
+
+* [Quick Start Examples](./quickstart-examples)
+* [Databases and Tables](./databases-and-tables)
+* [NoSQL Operations](./nosql-operations)
+* [Bulk Operations](./bulk-operations)
+* [Users and Roles](./users-and-roles)
+* [Clustering](./clustering)
+* [Components](./components)
+* [Registration](./registration)
+* [Jobs](./jobs)
+* [Logs](./logs)
+* [Utilities](./utilities)
+* [Token Authentication](./token-authentication)
+* [SQL Operations](./sql-operations)
+* [Advanced JSON SQL Examples](./advanced-json-sql-examples)
+
+* [Past Release API Documentation](https://olddocs.harperdb.io)
+
+## More Examples
+
+Here is an example of using `curl` to make an operations API request:
+
+```bash
+curl --location --request POST 'https:/instance-subdomain.harperdbcloud.com' \
+--header 'Authorization: Basic YourBase64EncodedInstanceUser:Pass' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+"operation": "create_schema",
+"schema": "dev"
+}'
+```
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/jobs.md b/site/versioned_docs/version-4.2/developers/operations-api/jobs.md
new file mode 100644
index 00000000..8b05357f
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/jobs.md
@@ -0,0 +1,82 @@
+---
+title: Jobs
+---
+
+# Jobs
+
+## Get Job
+Returns job status, metrics, and messages for the specified job ID.
+
+* operation _(required)_ - must always be `get_job`
+* id _(required)_ - the id of the job you wish to view
+
+### Body
+
+```json
+{
+ "operation": "get_job",
+ "id": "4a982782-929a-4507-8794-26dae1132def"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "__createdtime__": 1611615798782,
+ "__updatedtime__": 1611615801207,
+ "created_datetime": 1611615798774,
+ "end_datetime": 1611615801206,
+ "id": "4a982782-929a-4507-8794-26dae1132def",
+ "job_body": null,
+ "message": "successfully loaded 350 of 350 records",
+ "start_datetime": 1611615798805,
+ "status": "COMPLETE",
+ "type": "csv_url_load",
+ "user": "HDB_ADMIN",
+ "start_datetime_converted": "2021-01-25T23:03:18.805Z",
+ "end_datetime_converted": "2021-01-25T23:03:21.206Z"
+ }
+]
+```
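+
+Bulk operations such as the CSV loads return a `job_id` immediately and finish in the background, so a common pattern is to poll `get_job` until the job reaches a terminal status. A minimal bash sketch, assuming `curl` and `jq` are available and using placeholder connection details (real scripts should also handle error statuses and time out eventually):
+
+```bash
+JOB_ID="4a982782-929a-4507-8794-26dae1132def"   # job_id returned by a bulk operation
+
+# Poll get_job every couple of seconds until the job reports COMPLETE
+while true; do
+  STATUS=$(curl -s http://localhost:9925 -u admin:password \
+    -H 'Content-Type: application/json' \
+    -d "{\"operation\":\"get_job\",\"id\":\"$JOB_ID\"}" | jq -r '.[0].status')
+  echo "job status: $STATUS"
+  [ "$STATUS" = "COMPLETE" ] && break
+  sleep 2
+done
+```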
+
+---
+
+## Search Jobs By Start Date
+Returns a list of job statuses, metrics, and messages for all jobs executed within the specified time window.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `search_jobs_by_start_date`
+* from_date _(required)_ - the date you wish to start the search
+* to_date _(required)_ - the date you wish to end the search
+
+### Body
+```json
+{
+ "operation": "search_jobs_by_start_date",
+ "from_date": "2021-01-25T22:05:27.464+0000",
+ "to_date": "2021-01-25T23:05:27.464+0000"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "id": "942dd5cb-2368-48a5-8a10-8770ff7eb1f1",
+ "user": "HDB_ADMIN",
+ "type": "csv_url_load",
+ "status": "COMPLETE",
+ "start_datetime": 1611613284781,
+ "end_datetime": 1611613287204,
+ "job_body": null,
+ "message": "successfully loaded 350 of 350 records",
+ "created_datetime": 1611613284764,
+ "__createdtime__": 1611613284767,
+ "__updatedtime__": 1611613287207,
+ "start_datetime_converted": "2021-01-25T22:21:24.781Z",
+ "end_datetime_converted": "2021-01-25T22:21:27.204Z"
+ }
+]
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/logs.md b/site/versioned_docs/version-4.2/developers/operations-api/logs.md
new file mode 100644
index 00000000..3da8a570
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/logs.md
@@ -0,0 +1,753 @@
+---
+title: Logs
+---
+
+# Logs
+
+## Read HarperDB Log
+Returns log outputs from the primary HarperDB log based on the provided search criteria. Read more about HarperDB logging here: https://docs.harperdb.io/docs/logging#read-logs-via-the-api.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `read_log`
+* start _(optional)_ - result to start with. Must be a number
+* limit _(optional)_ - number of results returned. Default behavior is 100. Must be a number
+* level _(optional)_ - error level to filter on. Default behavior is all levels. Must be `error`, `info`, or `null`
+* from _(optional)_ - date to begin showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`
+* until _(optional)_ - date to end showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`
+* order _(optional)_ - the order in which to display logs by timestamp, either `desc` or `asc`
+
+### Body
+
+```json
+{
+ "operation": "read_log",
+ "start": 0,
+ "limit": 1000,
+ "level": "error",
+ "from": "2021-01-25T22:05:27.464+0000",
+ "until": "2021-01-25T23:05:27.464+0000",
+ "order": "desc"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "level": "notify",
+ "message": "Connected to cluster server.",
+ "timestamp": "2021-01-25T23:03:20.710Z",
+ "thread": "main/0",
+ "tags": []
+ },
+ {
+ "level": "warn",
+ "message": "Login failed",
+ "timestamp": "2021-01-25T22:24:45.113Z",
+ "thread": "http/9",
+ "tags": []
+ },
+ {
+ "level": "error",
+ "message": "unknown attribute 'name and breed'",
+ "timestamp": "2021-01-25T22:23:24.167Z",
+ "thread": "http/9",
+ "tags": []
+ }
+]
+```
+
+
+---
+
+## Read Transaction Log
+Returns all transactions logged for the specified database table. You may filter your results with the optional from, to, and limit fields. Read more about HarperDB transaction logs here: https://docs.harperdb.io/docs/transaction-logging#read_transaction_log.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `read_transaction_log`
+* schema _(required)_ - schema under which the transaction log resides
+* table _(required)_ - table under which the transaction log resides
+* from _(optional)_ - time format must be millisecond-based epoch in UTC
+* to _(optional)_ - time format must be millisecond-based epoch in UTC
+* limit _(optional)_ - max number of logs you want to receive. Must be a number
+
+### Body
+
+```json
+{
+ "operation": "read_transaction_log",
+ "schema": "dev",
+ "table": "dog",
+ "from": 1560249020865,
+ "to": 1660585656639,
+ "limit": 10
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "operation": "insert",
+ "user": "admin",
+ "timestamp": 1660165619736,
+ "records": [
+ {
+ "id": 1,
+ "dog_name": "Penny",
+ "owner_name": "Kyle",
+ "breed_id": 154,
+ "age": 7,
+ "weight_lbs": 38,
+ "__updatedtime__": 1660165619688,
+ "__createdtime__": 1660165619688
+ }
+ ]
+ },
+ {
+ "operation": "insert",
+ "user": "admin",
+ "timestamp": 1660165619813,
+ "records": [
+ {
+ "id": 2,
+ "dog_name": "Harper",
+ "owner_name": "Stephen",
+ "breed_id": 346,
+ "age": 7,
+ "weight_lbs": 55,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 3,
+ "dog_name": "Alby",
+ "owner_name": "Kaylan",
+ "breed_id": 348,
+ "age": 7,
+ "weight_lbs": 84,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 4,
+ "dog_name": "Billy",
+ "owner_name": "Zach",
+ "breed_id": 347,
+ "age": 6,
+ "weight_lbs": 60,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 5,
+ "dog_name": "Rose Merry",
+ "owner_name": "Zach",
+ "breed_id": 348,
+ "age": 8,
+ "weight_lbs": 15,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 6,
+ "dog_name": "Kato",
+ "owner_name": "Kyle",
+ "breed_id": 351,
+ "age": 6,
+ "weight_lbs": 32,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 7,
+ "dog_name": "Simon",
+ "owner_name": "Fred",
+ "breed_id": 349,
+ "age": 3,
+ "weight_lbs": 35,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 8,
+ "dog_name": "Gemma",
+ "owner_name": "Stephen",
+ "breed_id": 350,
+ "age": 5,
+ "weight_lbs": 55,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 9,
+ "dog_name": "Yeti",
+ "owner_name": "Jaxon",
+ "breed_id": 200,
+ "age": 5,
+ "weight_lbs": 55,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 10,
+ "dog_name": "Monkey",
+ "owner_name": "Aron",
+ "breed_id": 271,
+ "age": 7,
+ "weight_lbs": 35,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 11,
+ "dog_name": "Bode",
+ "owner_name": "Margo",
+ "breed_id": 104,
+ "age": 8,
+ "weight_lbs": 75,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 12,
+ "dog_name": "Tucker",
+ "owner_name": "David",
+ "breed_id": 346,
+ "age": 2,
+ "weight_lbs": 60,
+ "adorable": true,
+ "__updatedtime__": 1660165619798,
+ "__createdtime__": 1660165619798
+ },
+ {
+ "id": 13,
+ "dog_name": "Jagger",
+ "owner_name": "Margo",
+ "breed_id": 271,
+ "age": 7,
+ "weight_lbs": 35,
+ "adorable": true,
+ "__updatedtime__": 1660165619798,
+ "__createdtime__": 1660165619798
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user": "admin",
+ "timestamp": 1660165620040,
+ "records": [
+ {
+ "id": 1,
+ "dog_name": "Penny B",
+ "__updatedtime__": 1660165620036
+ }
+ ]
+ }
+]
+```
+
+---
+
+## Delete Transaction Logs Before
+Deletes transaction log data for the specified database table that is older than the specified timestamp.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `delete_transaction_logs_before`
+* schema _(required)_ - schema under which the transaction log resides. Must be a string
+* table _(required)_ - table under which the transaction log resides. Must be a string
+* timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC
+
+### Body
+```json
+{
+ "operation": "delete_transaction_logs_before",
+ "schema": "dev",
+ "table": "dog",
+ "timestamp": 1598290282817
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 26a6d3a6-6d77-40f9-bee7-8d6ef479a126"
+}
+```
+
+---
+
+## Read Audit Log
+AuditLog must be enabled in the HarperDB configuration file to make this request. Returns a verbose history of all transactions logged for the specified database table, including original data records. You may filter your results with the optional `search_type` and `search_values` fields. Read more about HarperDB transaction logs here: https://docs.harperdb.io/docs/transaction-logging#read_audit_log.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `read_audit_log`
+* schema _(required)_ - schema under which the transaction log resides
+* table _(required)_ - table under which the transaction log resides
+* search_type _(optional)_ - possibilities are `hash_value`, `timestamp` and `username`
+* search_values _(optional)_ - an array of strings or numbers relating to the `search_type`
+
+### Body
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "operation": "insert",
+ "user_name": "admin",
+ "timestamp": 1660585635882.288,
+ "hash_values": [
+ 318
+ ],
+ "records": [
+ {
+ "id": 318,
+ "dog_name": "Polliwog",
+ "__updatedtime__": 1660585635876,
+ "__createdtime__": 1660585635876
+ }
+ ]
+ },
+ {
+ "operation": "insert",
+ "user_name": "admin",
+ "timestamp": 1660585716133.01,
+ "hash_values": [
+ 444
+ ],
+ "records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585716128,
+ "__createdtime__": 1660585716128
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user_name": "admin",
+ "timestamp": 1660585740558.415,
+ "hash_values": [
+ 444
+ ],
+ "records": [
+ {
+ "id": 444,
+ "fur_type": "coarse",
+ "__updatedtime__": 1660585740556
+ }
+ ],
+ "original_records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585716128,
+ "__createdtime__": 1660585716128
+ }
+ ]
+ },
+ {
+ "operation": "delete",
+ "user_name": "admin",
+ "timestamp": 1660585759710.56,
+ "hash_values": [
+ 444
+ ],
+ "original_records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585740556,
+ "__createdtime__": 1660585716128,
+ "fur_type": "coarse"
+ }
+ ]
+ }
+]
+```
+
+
+---
+
+## Read Audit Log by timestamp
+AuditLog must be enabled in the HarperDB configuration file to make this request. Returns the transactions logged for the specified database table within the specified time window. Read more about HarperDB transaction logs here: https://docs.harperdb.io/docs/transaction-logging#read_audit_log.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `read_audit_log`
+* schema _(required)_ - schema under which the transaction log resides
+* table _(required)_ - table under which the transaction log resides
+* search_type _(optional)_ - timestamp
+* search_values _(optional)_ - an array containing a maximum of two values [`from_timestamp`, `to_timestamp`] defining the range of transactions you would like to view.
+ * Timestamp format is millisecond-based epoch in UTC
+ * If no items are supplied then all transactions are returned
+ * If only one entry is supplied then all transactions after the supplied timestamp will be returned
+
+### Body
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog",
+ "search_type": "timestamp",
+ "search_values": [
+ 1660585740558,
+ 1660585759710.56
+ ]
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "operation": "insert",
+ "user_name": "admin",
+ "timestamp": 1660585635882.288,
+ "hash_values": [
+ 318
+ ],
+ "records": [
+ {
+ "id": 318,
+ "dog_name": "Polliwog",
+ "__updatedtime__": 1660585635876,
+ "__createdtime__": 1660585635876
+ }
+ ]
+ },
+ {
+ "operation": "insert",
+ "user_name": "admin",
+ "timestamp": 1660585716133.01,
+ "hash_values": [
+ 444
+ ],
+ "records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585716128,
+ "__createdtime__": 1660585716128
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user_name": "admin",
+ "timestamp": 1660585740558.415,
+ "hash_values": [
+ 444
+ ],
+ "records": [
+ {
+ "id": 444,
+ "fur_type": "coarse",
+ "__updatedtime__": 1660585740556
+ }
+ ],
+ "original_records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585716128,
+ "__createdtime__": 1660585716128
+ }
+ ]
+ },
+ {
+ "operation": "delete",
+ "user_name": "admin",
+ "timestamp": 1660585759710.56,
+ "hash_values": [
+ 444
+ ],
+ "original_records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585740556,
+ "__createdtime__": 1660585716128,
+ "fur_type": "coarse"
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user_name": "admin",
+ "timestamp": 1660586298457.224,
+ "hash_values": [
+ 318
+ ],
+ "records": [
+ {
+ "id": 318,
+ "fur_type": "super fluffy",
+ "__updatedtime__": 1660586298455
+ }
+ ],
+ "original_records": [
+ {
+ "id": 318,
+ "dog_name": "Polliwog",
+ "__updatedtime__": 1660585635876,
+ "__createdtime__": 1660585635876
+ }
+ ]
+ }
+]
+```
+
+
+---
+
+## Read Audit Log by username
+AuditLog must be enabled in the HarperDB configuration file to make this request. Returns the transactions logged for the specified database table which were committed by the specified user. Read more about HarperDB transaction logs here: https://docs.harperdb.io/docs/transaction-logging#read_audit_log.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `read_audit_log`
+* schema _(required)_ - schema under which the transaction log resides
+* table _(required)_ - table under which the transaction log resides
+* search_type _(optional)_ - username
+* search_values _(optional)_ - an array of HarperDB usernames for which you would like to view transactions
+
+### Body
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog",
+ "search_type": "username",
+ "search_values": [
+ "admin"
+ ]
+}
+```
+
+### Response: 200
+```json
+{
+ "admin": [
+ {
+ "operation": "insert",
+ "user_name": "admin",
+ "timestamp": 1660585635882.288,
+ "hash_values": [
+ 318
+ ],
+ "records": [
+ {
+ "id": 318,
+ "dog_name": "Polliwog",
+ "__updatedtime__": 1660585635876,
+ "__createdtime__": 1660585635876
+ }
+ ]
+ },
+ {
+ "operation": "insert",
+ "user_name": "admin",
+ "timestamp": 1660585716133.01,
+ "hash_values": [
+ 444
+ ],
+ "records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585716128,
+ "__createdtime__": 1660585716128
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user_name": "admin",
+ "timestamp": 1660585740558.415,
+ "hash_values": [
+ 444
+ ],
+ "records": [
+ {
+ "id": 444,
+ "fur_type": "coarse",
+ "__updatedtime__": 1660585740556
+ }
+ ],
+ "original_records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585716128,
+ "__createdtime__": 1660585716128
+ }
+ ]
+ },
+ {
+ "operation": "delete",
+ "user_name": "admin",
+ "timestamp": 1660585759710.56,
+ "hash_values": [
+ 444
+ ],
+ "original_records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585740556,
+ "__createdtime__": 1660585716128,
+ "fur_type": "coarse"
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user_name": "admin",
+ "timestamp": 1660586298457.224,
+ "hash_values": [
+ 318
+ ],
+ "records": [
+ {
+ "id": 318,
+ "fur_type": "super fluffy",
+ "__updatedtime__": 1660586298455
+ }
+ ],
+ "original_records": [
+ {
+ "id": 318,
+ "dog_name": "Polliwog",
+ "__updatedtime__": 1660585635876,
+ "__createdtime__": 1660585635876
+ }
+ ]
+ }
+ ]
+}
+```
+
+
+---
+
+## Read Audit Log by hash_value
+AuditLog must be enabled in the HarperDB configuration file to make this request. Returns the transactions logged for the specified database table which were committed to the specified hash value(s). Read more about HarperDB transaction logs here: https://docs.harperdb.io/docs/transaction-logging#read_audit_log.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `read_audit_log`
+* schema _(required)_ - schema under which the transaction log resides
+* table _(required)_ - table under which the transaction log resides
+* search_type _(optional)_ - hash_value
+* search_values _(optional)_ - an array of hash values (primary keys) for which you wish to see transaction logs
+
+### Body
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog",
+ "search_type": "hash_value",
+ "search_values": [
+ 318
+ ]
+}
+```
+
+### Response: 200
+```json
+{
+ "318": [
+ {
+ "operation": "insert",
+ "user_name": "admin",
+ "timestamp": 1660585635882.288,
+ "records": [
+ {
+ "id": 318,
+ "dog_name": "Polliwog",
+ "__updatedtime__": 1660585635876,
+ "__createdtime__": 1660585635876
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user_name": "admin",
+ "timestamp": 1660586298457.224,
+ "records": [
+ {
+ "id": 318,
+ "fur_type": "super fluffy",
+ "__updatedtime__": 1660586298455
+ }
+ ],
+ "original_records": [
+ {
+ "id": 318,
+ "dog_name": "Polliwog",
+ "__updatedtime__": 1660585635876,
+ "__createdtime__": 1660585635876
+ }
+ ]
+ }
+ ]
+}
+```
+
+---
+
+## Delete Audit Logs Before
+AuditLog must be enabled in the HarperDB configuration file to make this request. Deletes audit log data for the specified database table that is older than the specified timestamp.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `delete_audit_logs_before`
+* schema _(required)_ - schema under which the transaction log resides. Must be a string
+* table _(required)_ - table under which the transaction log resides. Must be a string
+* timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC
+
+### Body
+```json
+{
+ "operation": "delete_audit_logs_before",
+ "schema": "dev",
+ "table": "dog",
+ "timestamp": 1660585759710.56
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 7479e5f8-a86e-4fc9-add7-749493bc100f"
+}
+```
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/nosql-operations.md b/site/versioned_docs/version-4.2/developers/operations-api/nosql-operations.md
new file mode 100644
index 00000000..d27e0c95
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/nosql-operations.md
@@ -0,0 +1,390 @@
+---
+title: NoSQL Operations
+---
+
+# NoSQL Operations
+
+## Insert
+
+Adds one or more rows of data to a database table. The primary key of each inserted JSON record may be supplied on insert. If a primary key is not provided, a GUID will be generated for each record.
+
+* operation _(required)_ - must always be `insert`
+* database _(optional)_ - database where the table you are inserting records into lives. The default is `data`
+* table _(required)_ - table where you want to insert records
+* records _(required)_ - array of one or more records for insert
+
+### Body
+
+```json
+{
+ "operation": "insert",
+ "database": "dev",
+ "table": "dog",
+ "records": [
+ {
+ "id": 8,
+ "dog_name": "Harper",
+ "breed_id": 346,
+ "age": 7
+ },
+ {
+ "id": 9,
+ "dog_name": "Penny",
+ "breed_id": 154,
+ "age": 7
+ }
+ ]
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "inserted 2 of 2 records",
+ "inserted_hashes": [
+ 8,
+ 9
+ ],
+ "skipped_hashes": []
+}
+```
+
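+As noted above, if a record omits the primary key, HarperDB generates a GUID for it. A body like the following (a sketch — the generated value returned in `inserted_hashes` will differ on every run) is therefore also valid:
+
+```json
+{
+    "operation": "insert",
+    "database": "dev",
+    "table": "dog",
+    "records": [
+        {
+            "dog_name": "Scout",
+            "age": 4
+        }
+    ]
+}
+```
+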
+---
+
+## Update
+
+Changes the values of specified attributes in one or more rows in a database table as identified by the primary key. NOTE: Primary key of the updated JSON record(s) MUST be supplied on update.
+
+* operation _(required)_ - must always be `update`
+* database _(optional)_ - database of the table you are updating records in. The default is `data`
+* table _(required)_ - table where you want to update records
+* records _(required)_ - array of one or more records for update
+
+### Body
+
+```json
+{
+ "operation": "update",
+ "database": "dev",
+ "table": "dog",
+ "records": [
+ {
+ "id": 1,
+ "weight_lbs": 55
+ },
+ {
+ "id": 2,
+ "owner": "Kyle B",
+ "weight_lbs": 35
+ }
+ ]
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "updated 2 of 2 records",
+ "update_hashes": [
+ 1,
+ 2
+ ],
+ "skipped_hashes": []
+}
+```
+
+---
+
+## Upsert
+
+Changes the values of specified attributes for rows with matching primary keys that exist in the table. Adds rows to the database table for primary keys that do not exist or are not provided.
+
+* operation _(required)_ - must always be `upsert`
+* database _(optional)_ - database of the table you are upserting records into. The default is `data`
+* table _(required)_ - table where you want to upsert records
+* records _(required)_ - array of one or more records to upsert
+
+### Body
+
+```json
+{
+ "operation": "upsert",
+ "database": "dev",
+ "table": "dog",
+ "records": [
+ {
+ "id": 8,
+ "weight_lbs": 155
+ },
+ {
+ "name": "Bill",
+ "breed": "Pit Bull",
+ "id": 10,
+ "Age": 11,
+ "weight_lbs": 155
+ },
+ {
+ "name": "Harper",
+ "breed": "Mutt",
+ "age": 5,
+ "weight_lbs": 155
+ }
+ ]
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "upserted 3 of 3 records",
+ "upserted_hashes": [
+ 8,
+ 10,
+ "ea06fc8e-717b-4c6c-b69d-b29014054ab7"
+ ]
+}
+```
+
+---
+
+## Delete
+
+Removes one or more rows of data from a specified table.
+
+* operation _(required)_ - must always be `delete`
+* database _(optional)_ - database where the table you are deleting records from lives. The default is `data`
+* table _(required)_ - table where you want to delete records
+* ids _(required)_ - array of one or more primary key values, which identify the records to delete
+
+### Body
+
+```json
+{
+ "operation": "delete",
+ "database": "dev",
+ "table": "dog",
+ "ids": [
+ 1,
+ 2
+ ]
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "2 of 2 records successfully deleted",
+ "deleted_hashes": [
+ 1,
+ 2
+ ],
+ "skipped_hashes": []
+}
+```
+
+---
+
+## Search By ID
+
+Returns data from a table for one or more primary keys.
+
+* operation _(required)_ - must always be `search_by_id`
+* database _(optional)_ - database where the table you are searching lives. The default is `data`
+* table _(required)_ - table you wish to search
+* ids _(required)_ - array of primary keys to retrieve
+* get_attributes _(required)_ - define which attributes you want returned. _Use `['*']` to return all attributes_
+
+### Body
+
+```json
+{
+ "operation": "search_by_id",
+ "database": "dev",
+ "table": "dog",
+ "ids": [
+ 1,
+ 2
+ ],
+ "get_attributes": [
+ "dog_name",
+ "breed_id"
+ ]
+}
+```
+
+### Response: 200
+
+```json
+[
+ {
+ "dog_name": "Penny",
+ "breed_id": 154
+ },
+ {
+ "dog_name": "Harper",
+ "breed_id": 346
+ }
+]
+```
+
+---
+
+## Search By Value
+
+Returns data from a table for a matching value.
+
+* operation _(required)_ - must always be `search_by_value`
+* database _(optional)_ - database where the table you are searching lives. The default is `data`
+* table _(required)_ - table you wish to search
+* search_attribute _(required)_ - attribute you wish to search; can be any attribute
+* search_value _(required)_ - value you wish to search for - wildcards are allowed
+* get_attributes _(required)_ - define which attributes you want returned. Use `['*']` to return all attributes
+
+### Body
+
+```json
+{
+ "operation": "search_by_value",
+ "database": "dev",
+ "table": "dog",
+ "search_attribute": "owner_name",
+ "search_value": "Ky*",
+ "get_attributes": [
+ "dog_name"
+ ]
+}
+```
+
+### Response: 200
+
+```json
+[
+ {
+ "dog_name": "Penny"
+ },
+ {
+ "dog_name": "Kato"
+ }
+]
+```
+
+---
+
+## Search By Conditions
+
+Returns data from a table for one or more matching conditions.
+
+* operation _(required)_ - must always be `search_by_conditions`
+* database _(optional)_ - database where the table you are searching lives. The default is `data`
+* table _(required)_ - table you wish to search
+* operator _(optional)_ - the operator used between each condition - `and`, `or`. The default is `and`
+* offset _(optional)_ - the number of records that the query results will skip. The default is `0`
+* limit _(optional)_ - the number of records that the query results will include. The default is `null`, resulting in no limit
+* get_attributes _(required)_ - define which attributes you want returned. Use `['*']` to return all attributes
+* conditions _(required)_ - the array of condition objects, specified below, to filter by. Must include one or more objects in the array
+ * search_attribute _(required)_ - the attribute you wish to search, can be any attribute
+ * search_type _(required)_ - the type of search to perform - `equals`, `contains`, `starts_with`, `ends_with`, `greater_than`, `greater_than_equal`, `less_than`, `less_than_equal`, `between`
+ * search_value _(required)_ - case-sensitive value you wish to search. If the `search_type` is `between` then use an array of two values to search between
+
+### Body
+
+```json
+{
+ "operation": "search_by_conditions",
+ "database": "dev",
+ "table": "dog",
+ "operator": "and",
+ "offset": 0,
+ "limit": 10,
+ "get_attributes": [
+ "*"
+ ],
+ "conditions": [
+ {
+ "search_attribute": "age",
+ "search_type": "between",
+ "search_value": [
+ 5,
+ 8
+ ]
+ },
+ {
+ "search_attribute": "weight_lbs",
+ "search_type": "greater_than",
+ "search_value": 40
+ },
+ {
+ "search_attribute": "adorable",
+ "search_type": "equals",
+ "search_value": true
+ }
+ ]
+}
+```
+
+### Response: 200
+
+```json
+[
+ {
+ "__createdtime__": 1620227719791,
+ "__updatedtime__": 1620227719791,
+ "adorable": true,
+ "age": 7,
+ "breed_id": 346,
+ "dog_name": "Harper",
+ "id": 2,
+ "owner_name": "Stephen",
+ "weight_lbs": 55
+ },
+ {
+ "__createdtime__": 1620227719792,
+ "__updatedtime__": 1620227719792,
+ "adorable": true,
+ "age": 7,
+ "breed_id": 348,
+ "dog_name": "Alby",
+ "id": 3,
+ "owner_name": "Kaylan",
+ "weight_lbs": 84
+ },
+ {
+ "__createdtime__": 1620227719792,
+ "__updatedtime__": 1620227719792,
+ "adorable": true,
+ "age": 6,
+ "breed_id": 347,
+ "dog_name": "Billy",
+ "id": 4,
+ "owner_name": "Zach",
+ "weight_lbs": 60
+ },
+ {
+ "__createdtime__": 1620227719792,
+ "__updatedtime__": 1620227719792,
+ "adorable": true,
+ "age": 5,
+ "breed_id": 250,
+ "dog_name": "Gemma",
+ "id": 8,
+ "owner_name": "Stephen",
+ "weight_lbs": 55
+ },
+ {
+ "__createdtime__": 1620227719792,
+ "__updatedtime__": 1620227719792,
+ "adorable": true,
+ "age": 8,
+ "breed_id": 104,
+ "dog_name": "Bode",
+ "id": 11,
+ "owner_name": "Margo",
+ "weight_lbs": 75
+ }
+]
+```
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/quickstart-examples.md b/site/versioned_docs/version-4.2/developers/operations-api/quickstart-examples.md
new file mode 100644
index 00000000..e74b7979
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/quickstart-examples.md
@@ -0,0 +1,385 @@
+---
+title: Quick Start Examples
+---
+
+# Quick Start Examples
+
+## Create dog Table
+
+We first need to create a table. Since our company is named after our CEO's dog, let's create a table to store all our employees' dogs. We'll call this table `dog`.
+
+Tables in HarperDB are schema-less, so we don't need to add any attributes other than a `primary_key` (in pre-4.2 versions this was referred to as the `hash_attribute`) to create this table. A hash attribute defines the unique identifier for each row in your table; in a traditional RDBMS this would be called a primary key.
+
+HarperDB does offer a `database` parameter that can be used to hold logical groupings of tables. The parameter is optional; if not provided, the operation will default to using a database named `data`.
+
+If you receive an error response, make sure your Basic Authentication user and password match those you entered during the installation process.
+
+### Body
+
+```json
+{
+ "operation": "create_table",
+ "table": "dog",
+ "primary_key": "id"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "table 'data.dog' successfully created."
+}
+```
+
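+Every example in this guide is a JSON operation POSTed to the HarperDB Operations API. As a minimal sketch of how the request above could be sent from Node.js (assuming a local instance on the default operations port 9925 and the username/password you chose at install time):
+
+```typescript
+// POST the create_table operation to a local HarperDB instance (adjust host, port, and credentials)
+const response = await fetch("http://localhost:9925", {
+  method: "POST",
+  headers: {
+    "Content-Type": "application/json",
+    // Basic auth header: base64 of "username:password"
+    "Authorization": "Basic " + Buffer.from("HDB_ADMIN:password").toString("base64"),
+  },
+  body: JSON.stringify({ operation: "create_table", table: "dog", primary_key: "id" }),
+});
+console.log(await response.json());
+```
+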
+---
+
+## Create breed Table
+Now that we have a table to store our dog data, we also want to create a table to track known breeds. Just as with the dog table, the only attribute we need to specify is the `primary_key`.
+
+### Body
+
+```json
+{
+ "operation": "create_table",
+ "table": "breed",
+ "primary_key": "id"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "table 'data.breed' successfully created."
+}
+```
+
+---
+
+## Insert 1 Dog
+
+We're ready to add some dog data. Penny is our CTO's pup, so she gets ID 1 or we're all fired. We are specifying attributes in this call, but this doesn't prevent us from specifying additional attributes in subsequent calls.
+
+### Body
+
+```json
+{
+ "operation": "insert",
+ "table": "dog",
+ "records": [
+ {
+ "id": 1,
+ "dog_name": "Penny",
+ "owner_name": "Kyle",
+ "breed_id": 154,
+ "age": 7,
+ "weight_lbs": 38
+ }
+ ]
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "inserted 1 of 1 records",
+ "inserted_hashes": [
+ 1
+ ],
+ "skipped_hashes": []
+}
+```
+
+---
+
+## Insert Multiple Dogs
+
+Let's add some more Harper doggies! We can add as many dog objects as we want into the records collection. If you're adding a lot of objects, we would recommend using the .csv upload option (see the next section where we populate the breed table).
+
+### Body
+
+```json
+{
+ "operation": "insert",
+ "table": "dog",
+ "records": [
+ {
+ "id": 2,
+ "dog_name": "Harper",
+ "owner_name": "Stephen",
+ "breed_id": 346,
+ "age": 7,
+ "weight_lbs": 55,
+ "adorable": true
+ },
+ {
+ "id": 3,
+ "dog_name": "Alby",
+ "owner_name": "Kaylan",
+ "breed_id": 348,
+ "age": 7,
+ "weight_lbs": 84,
+ "adorable": true
+ },
+ {
+ "id": 4,
+ "dog_name": "Billy",
+ "owner_name": "Zach",
+ "breed_id": 347,
+ "age": 6,
+ "weight_lbs": 60,
+ "adorable": true
+ },
+ {
+ "id": 5,
+ "dog_name": "Rose Merry",
+ "owner_name": "Zach",
+ "breed_id": 348,
+ "age": 8,
+ "weight_lbs": 15,
+ "adorable": true
+ },
+ {
+ "id": 6,
+ "dog_name": "Kato",
+ "owner_name": "Kyle",
+ "breed_id": 351,
+ "age": 6,
+ "weight_lbs": 32,
+ "adorable": true
+ },
+ {
+ "id": 7,
+ "dog_name": "Simon",
+ "owner_name": "Fred",
+ "breed_id": 349,
+ "age": 3,
+ "weight_lbs": 35,
+ "adorable": true
+ },
+ {
+ "id": 8,
+ "dog_name": "Gemma",
+ "owner_name": "Stephen",
+ "breed_id": 350,
+ "age": 5,
+ "weight_lbs": 55,
+ "adorable": true
+ },
+ {
+ "id": 9,
+ "dog_name": "Yeti",
+ "owner_name": "Jaxon",
+ "breed_id": 200,
+ "age": 5,
+ "weight_lbs": 55,
+ "adorable": true
+ },
+ {
+ "id": 10,
+ "dog_name": "Monkey",
+ "owner_name": "Aron",
+ "breed_id": 271,
+ "age": 7,
+ "weight_lbs": 35,
+ "adorable": true
+ },
+ {
+ "id": 11,
+ "dog_name": "Bode",
+ "owner_name": "Margo",
+ "breed_id": 104,
+ "age": 8,
+ "weight_lbs": 75,
+ "adorable": true
+ },
+ {
+ "id": 12,
+ "dog_name": "Tucker",
+ "owner_name": "David",
+ "breed_id": 346,
+ "age": 2,
+ "weight_lbs": 60,
+ "adorable": true
+ },
+ {
+ "id": 13,
+ "dog_name": "Jagger",
+ "owner_name": "Margo",
+ "breed_id": 271,
+ "age": 7,
+ "weight_lbs": 35,
+ "adorable": true
+ }
+ ]
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "inserted 12 of 12 records",
+ "inserted_hashes": [
+ 2,
+ 3,
+ 4,
+ 5,
+ 6,
+ 7,
+ 8,
+ 9,
+ 10,
+ 11,
+ 12,
+ 13
+ ],
+ "skipped_hashes": []
+}
+```
+
+---
+
+## Bulk Insert Breeds Via CSV
+
+We need to populate the 'breed' table with some data so we can reference it later. For larger data sets, we recommend using our CSV upload option.
+
+Each column header will be treated as an attribute, and each row in the file will become a row in the table. Simply specify the file URL and the table to upload to, and HarperDB will take care of the rest. You can pull the breeds.csv file from here: https://s3.amazonaws.com/complimentarydata/breeds.csv
+
+### Body
+
+```json
+{
+ "operation": "csv_url_load",
+ "table": "breed",
+ "csv_url": "https://s3.amazonaws.com/complimentarydata/breeds.csv"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "Starting job with id e77d63b9-70d5-499c-960f-6736718a4369",
+ "job_id": "e77d63b9-70d5-499c-960f-6736718a4369"
+}
+```
+
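+CSV loads run in the background as jobs. You can check on progress by passing the returned `job_id` to the `get_job` operation (a sketch — substitute the id from your own response):
+
+```json
+{
+    "operation": "get_job",
+    "id": "e77d63b9-70d5-499c-960f-6736718a4369"
+}
+```
+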
+---
+
+## Update 1 Dog Using NoSQL
+
+HarperDB supports NoSQL and SQL commands. We're going to update the dog table to show Penny's last initial using our NoSQL API.
+
+### Body
+
+```json
+{
+ "operation": "update",
+ "table": "dog",
+ "records": [
+ {
+ "id": 1,
+ "dog_name": "Penny B"
+ }
+ ]
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "updated 1 of 1 records",
+ "update_hashes": [
+ 1
+ ],
+ "skipped_hashes": []
+}
+```
+
+---
+
+## Select a Dog by ID Using SQL
+
+Now we're going to use a simple SQL SELECT call to pull Penny's updated data. Note we now see Penny's last initial in the dog name.
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT * FROM data.dog where id = 1"
+}
+```
+
+### Response: 200
+
+```json
+[
+ {
+ "owner_name": "Kyle",
+ "adorable": null,
+ "breed_id": 154,
+ "__updatedtime__": 1610749428575,
+ "dog_name": "Penny B",
+ "weight_lbs": 38,
+ "id": 1,
+ "age": 7,
+ "__createdtime__": 1610749386566
+ }
+]
+```
+
+---
+
+## Select Dogs and Join Breed
+
+Here's a more complex SQL command joining the breed table with the dog table. We will also pull only the pups belonging to Kyle, Zach, and Stephen.
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT d.id, d.dog_name, d.owner_name, b.name, b.section FROM data.dog AS d INNER JOIN data.breed AS b ON d.breed_id = b.id WHERE d.owner_name IN ('Kyle', 'Zach', 'Stephen') AND b.section = 'Mutt' ORDER BY d.dog_name"
+}
+```
+
+### Response: 200
+
+```json
+[
+ {
+ "id": 4,
+ "dog_name": "Billy",
+ "owner_name": "Zach",
+ "name": "LABRADOR / GREAT DANE MIX",
+ "section": "Mutt"
+ },
+ {
+ "id": 8,
+ "dog_name": "Gemma",
+ "owner_name": "Stephen",
+ "name": "SHORT HAIRED SETTER MIX",
+ "section": "Mutt"
+ },
+ {
+ "id": 2,
+ "dog_name": "Harper",
+ "owner_name": "Stephen",
+ "name": "HUSKY MIX",
+ "section": "Mutt"
+ },
+ {
+ "id": 5,
+ "dog_name": "Rose Merry",
+ "owner_name": "Zach",
+ "name": "TERRIER MIX",
+ "section": "Mutt"
+ }
+]
+```
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/registration.md b/site/versioned_docs/version-4.2/developers/operations-api/registration.md
new file mode 100644
index 00000000..53d953af
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/registration.md
@@ -0,0 +1,67 @@
+---
+title: Registration
+---
+
+# Registration
+
+
+## Registration Info
+Returns the registration data of the HarperDB instance.
+
+* operation _(required)_ - must always be `registration_info`
+
+### Body
+```json
+{
+ "operation": "registration_info"
+}
+```
+
+### Response: 200
+```json
+{
+ "registered": true,
+ "version": "4.2.0",
+ "ram_allocation": 2048,
+ "license_expiration_date": "2022-01-15"
+}
+```
+
+---
+
+## Get Fingerprint
+Returns the HarperDB fingerprint, uniquely generated based on the machine, for licensing purposes.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `get_fingerprint`
+
+### Body
+
+```json
+{
+ "operation": "get_fingerprint"
+}
+```
+
+---
+
+## Set License
+Sets the HarperDB license as generated by HarperDB License Management software.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `set_license`
+* key _(required)_ - your license key
+* company _(required)_ - the company that was used in the license
+
+### Body
+
+```json
+{
+ "operation": "set_license",
+ "key": "",
+ "company": ""
+}
+```
+
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/sql-operations.md b/site/versioned_docs/version-4.2/developers/operations-api/sql-operations.md
new file mode 100644
index 00000000..39259083
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/sql-operations.md
@@ -0,0 +1,118 @@
+---
+title: SQL Operations
+---
+
+# SQL Operations
+
+## Select
+Executes the provided SQL statement. The SELECT statement is used to query data from the database.
+
+* operation _(required)_ - must always be `sql`
+* sql _(required)_ - use standard SQL
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT * FROM dev.dog WHERE id = 1"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "id": 1,
+ "age": 7,
+ "dog_name": "Penny",
+ "weight_lbs": 38,
+ "breed_id": 154,
+ "owner_name": "Kyle",
+ "adorable": true,
+ "__createdtime__": 1611614106043,
+ "__updatedtime__": 1611614119507
+ }
+]
+```
+
+---
+
+## Insert
+Executes the provided SQL statement. The INSERT statement is used to add one or more rows to a database table.
+
+* operation _(required)_ - must always be `sql`
+* sql _(required)_ - use standard SQL
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "INSERT INTO dev.dog (id, dog_name) VALUE (22, 'Simon')"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "inserted 1 of 1 records",
+ "inserted_hashes": [
+ 22
+ ],
+ "skipped_hashes": []
+}
+```
+---
+
+## Update
+Executes the provided SQL statement. The UPDATE statement is used to change the values of specified attributes in one or more rows in a database table.
+
+* operation _(required)_ - must always be `sql`
+* sql _(required)_ - use standard SQL
+
+### Body
+```json
+{
+ "operation": "sql",
+ "sql": "UPDATE dev.dog SET dog_name = 'penelope' WHERE id = 1"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "updated 1 of 1 records",
+ "update_hashes": [
+ 1
+ ],
+ "skipped_hashes": []
+}
+```
+
+---
+
+## Delete
+Executes the provided SQL statement. The DELETE statement is used to remove one or more rows of data from a database table.
+
+* operation _(required)_ - must always be `sql`
+* sql _(required)_ - use standard SQL
+
+### Body
+```json
+{
+ "operation": "sql",
+ "sql": "DELETE FROM dev.dog WHERE id = 1"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "1 of 1 record successfully deleted",
+ "deleted_hashes": [
+ 1
+ ],
+ "skipped_hashes": []
+}
+```
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/token-authentication.md b/site/versioned_docs/version-4.2/developers/operations-api/token-authentication.md
new file mode 100644
index 00000000..161c69b5
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/token-authentication.md
@@ -0,0 +1,54 @@
+---
+title: Token Authentication
+---
+
+# Token Authentication
+
+## Create Authentication Tokens
+Creates the tokens needed for authentication: an operation token and a refresh token.
+
+_Note - this operation does not require authorization to be set_
+
+* operation _(required)_ - must always be `create_authentication_tokens`
+* username _(required)_ - username of user to generate tokens for
+* password _(required)_ - password of user to generate tokens for
+
+### Body
+```json
+{
+ "operation": "create_authentication_tokens",
+ "username": "",
+ "password": ""
+}
+```
+
+### Response: 200
+```json
+{
+ "operation_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6IkhEQl9BRE1JTiIsImlhdCI6MTYwNTA2Mzk0OSwiZXhwIjoxNjA1MTUwMzQ5LCJzdWIiOiJvcGVyYXRpb24ifQ.TlV93BqavQVQntXTt_WeY5IjAuCshfd6RzhihLWFWhu1qEKLHdwg9o5Z4ASaNmfuyKBqbFw65IbOYKd348EXeC_T6d0GO3yUhICYWXkqhQnxVW_T-ECKc7m5Bty9HTgfeaJ2e2yW55nbZYWG_gLtNgObUjCziX20-gGGR25sNTRm78mLQPYQkBJph6WXwAuyQrX704h0NfvNqyAZSwjxgtjuuEftTJ7FutLrQSLGIBIYq9nsHrFkheiDSn-C8_WKJ_zATa4YIofjqn9g5wA6o_7kSNaU2-gWnCm_jbcAcfvOmXh6rd89z8pwPqnC0f131qHIBps9UHaC1oozzmu_C6bsg7905OoAdFFY42Vojs98SMbfRApRvwaS4SprBsam3izODNI64ZUBREu3l4SZDalUf2kN8XPVWkI1LKq_mZsdtqr1r11Z9xslI1wVdxjunYeanjBhs7_j2HTX7ieVGn1a23cWceUk8F1HDGe_KEuPQs03R73V8acq_freh-kPhIa4eLqmcHeBw3WcyNGW8GuP8kyQRkGuO5sQSzZqbr_YSbZdSShZWTWDE6RYYC9ZV9KJtHVxhs0hexUpcoqO8OtJocyltRjtDjhSm9oUxszYRaALu-h8YadZT9dEKzsyQIt30d7LS9ETmmGWx4nKSTME2bV21PnDv_rEc5R6gnE",
+ "refresh_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6IkhEQl9BRE1JTiIsImlhdCI6MTYwNTA2Mzk0OSwiZXhwIjoxNjA3NjU1OTQ5LCJzdWIiOiJyZWZyZXNoIn0.znhJhkdSROBPP_GLRzAxYdjgQ3BuqpAbQB7zMSSOQJ3s83HnmZ10Bnpw_3L2aF-tOFgz_t6HUAvn26fNOLsspJD2aOvHPcVS4yLKS5nagpA6ar_pqng9f6Ebfs8ohguLCfHnHRJ8poLxuWRvWW9_9pIlDiwsj4yo3Mbxi3mW8Bbtnk2MwiNHFxTksD12Ne8EWz8q2jic5MjArqBBgR373oYoWU1oxpTM6gIsZCBRowXcc9XFy2vyRoggEUU4ISRFQ4ZY9ayJ-_jleSDCUamJSNQsdb1OUTvc6CxeYlLjCoV0ijRUB6p2XWNVezFhDu8yGqOeyGFJzArhxbVc_pl4UYd5aUVxhrO9DdhG29cY_mHV0FqfXphR9QllK--LJFTP4aFqkCxnVr7HSa17hL0ZVK1HaKrx21PAdCkVNZpD6J3RtRbTkfnIB_C3Be9jhOV3vpTf7ZGn_Bs3CPJi_sL313Z1yKSDAS5rXTPceEOcTPHjzkMP9Wz19KfFq_0kuiZdDmeYNqJeFPAgGJ-S0tO51krzyGqLyCCA32_W104GR8OoQi2gEED6HIx2G0-1rnLnefN6eHQiY5r-Q3Oj9e2y3EvqqgWOmEDw88-SjPTwQVnMbBHYN2RfluU7EmvDh6Saoe79Lhlu8ZeSJ1x6ZgA8-Cirraz1_526Tn8v5FGDfrc"
+}
+```
+
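+Once issued, the `operation_token` is typically sent as a Bearer token on subsequent requests in place of Basic Auth. A minimal sketch (assuming a local instance on the default operations port 9925):
+
+```typescript
+// Use the operation_token returned by create_authentication_tokens
+const operationToken = "..."; // value of operation_token from the response above
+
+const response = await fetch("http://localhost:9925", {
+  method: "POST",
+  headers: {
+    "Content-Type": "application/json",
+    "Authorization": `Bearer ${operationToken}`,
+  },
+  body: JSON.stringify({ operation: "user_info" }),
+});
+console.log(await response.json());
+```
+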
+---
+
+## Refresh Operation Token
+This operation creates a new operation token.
+
+* operation _(required)_ - must always be `refresh_operation_token`
+* refresh_token _(required)_ - the refresh token that was provided when tokens were created
+
+### Body
+```json
+{
+ "operation": "refresh_operation_token",
+ "refresh_token": "EXISTING_REFRESH_TOKEN"
+}
+```
+
+### Response: 200
+```json
+{
+ "operation_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6eyJfX2NyZWF0ZWR0aW1lX18iOjE2MDQ1MTc4Nzk1MjMsIl9fdXBkYXRlZHRpbWVfXyI6MTYwNDUxNzg3OTUyMywiYWN0aXZlIjp0cnVlLCJhdXRoX3Rva2VuIjpudWxsLCJyb2xlIjp7Il9fY3JlYXRlZHRpbWVfXyI6MTYwNDUxNzg3OTUyMSwiX191cGRhdGVkdGltZV9fIjoxNjA0NTE3ODc5NTIxLCJpZCI6IjZhYmRjNGJhLWU5MjQtNDlhNi1iOGY0LWM1NWUxYmQ0OTYzZCIsInBlcm1pc3Npb24iOnsic3VwZXJfdXNlciI6dHJ1ZSwic3lzdGVtIjp7InRhYmxlcyI6eyJoZGJfdGFibGUiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl9hdHRyaWJ1dGUiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl9zY2hlbWEiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl91c2VyIjp7InJlYWQiOnRydWUsImluc2VydCI6ZmFsc2UsInVwZGF0ZSI6ZmFsc2UsImRlbGV0ZSI6ZmFsc2UsImF0dHJpYnV0ZV9wZXJtaXNzaW9ucyI6W119LCJoZGJfcm9sZSI6eyJyZWFkIjp0cnVlLCJpbnNlcnQiOmZhbHNlLCJ1cGRhdGUiOmZhbHNlLCJkZWxldGUiOmZhbHNlLCJhdHRyaWJ1dGVfcGVybWlzc2lvbnMiOltdfSwiaGRiX2pvYiI6eyJyZWFkIjp0cnVlLCJpbnNlcnQiOmZhbHNlLCJ1cGRhdGUiOmZhbHNlLCJkZWxldGUiOmZhbHNlLCJhdHRyaWJ1dGVfcGVybWlzc2lvbnMiOltdfSwiaGRiX2xpY2Vuc2UiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl9pbmZvIjp7InJlYWQiOnRydWUsImluc2VydCI6ZmFsc2UsInVwZGF0ZSI6ZmFsc2UsImRlbGV0ZSI6ZmFsc2UsImF0dHJpYnV0ZV9wZXJtaXNzaW9ucyI6W119LCJoZGJfbm9kZXMiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl90ZW1wIjp7InJlYWQiOnRydWUsImluc2VydCI6ZmFsc2UsInVwZGF0ZSI6ZmFsc2UsImRlbGV0ZSI6ZmFsc2UsImF0dHJpYnV0ZV9wZXJtaXNzaW9ucyI6W119fX19LCJyb2xlIjoic3VwZXJfdXNlciJ9LCJ1c2VybmFtZSI6IkhEQl9BRE1JTiJ9LCJpYXQiOjE2MDUwNjQ0MjMsImV4cCI6MTYwNTE1MDgyMywic3ViIjoib3BlcmF0aW9uIn0.VVZdhlh7_xFEaGPwhAh6VJ1d7eisiF3ok3ZwLTQAMWZB6umb2S7pPSTbXAmqAGHRlFAK3BYfnwT3YWt0gZbHvk24_0x3s_dej3PYJ8khIxzMjqpkR6qSjQIC2dhKqpwRPNtoqW_xnep9L-qf5iPtqkwsqWhF1c5VSN8nFouLWMZSuJ6Mag04soNhFvY0AF6QiTyzajMTb6uurRMWOnxk8hwMrY_5xtupabqtZheXP_0DV8l10B7GFi_oWf_lDLmwRmNbeUfW8ZyCIJMj36bjN3PsfVIxog87SWKKCwbWZWfJWw0KEph-HvU0ay35deyGWPIaDQmujuh2vtz-B0GoIAC58PJdXNyQRzES_nSb6Oqc_wGZsLM6EsNn_lrIp3mK_3a5jirZ8s6Z2SfcYKaLF2hCevdm05gRjFJ6ijxZrUSOR2S415wLxmqCCWCp_-sEUz8erUrf07_aj-Bv99GUub4b_znOsQF3uABKd4KKff2cNSMhAa-6sro5GDRRJg376dcLi2_9HOZbnSo90zrpVq8RNV900aydyzDdlXkZja8jdHBk4mxSSewYBvM7up6I0G4X-ZlzFOp30T7kjdLa6480Qp34iYRMMtq0Htpb5k2jPt8dNFnzW-Q2eRy1wNBbH3cCH0rd7_BIGuTCrl4hGU8QjlBiF7Gj0_-uJYhKnhg"
+}
+```
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/users-and-roles.md b/site/versioned_docs/version-4.2/developers/operations-api/users-and-roles.md
new file mode 100644
index 00000000..59b33a51
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/users-and-roles.md
@@ -0,0 +1,484 @@
+---
+title: Users and Roles
+---
+
+# Users and Roles
+
+## List Roles
+Returns a list of all roles. Learn more about HarperDB roles here: https://harperdb.io/docs/security/users-roles/.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `list_roles`
+
+### Body
+```json
+{
+ "operation": "list_roles"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "__createdtime__": 1611615061106,
+ "__updatedtime__": 1611615061106,
+ "id": "05c2ffcd-f780-40b1-9432-cfe8ba5ad890",
+ "permission": {
+ "super_user": false,
+ "dev": {
+ "tables": {
+ "dog": {
+ "read": true,
+ "insert": true,
+ "update": true,
+ "delete": false,
+ "attribute_permissions": [
+ {
+ "attribute_name": "name",
+ "read": true,
+ "insert": true,
+ "update": true
+ }
+ ]
+ }
+ }
+ }
+ },
+ "role": "developer"
+ },
+ {
+ "__createdtime__": 1610749235614,
+ "__updatedtime__": 1610749235614,
+ "id": "136f03fa-a0e9-46c3-bd5d-7f3e7dd5b564",
+ "permission": {
+ "cluster_user": true
+ },
+ "role": "cluster_user"
+ },
+ {
+ "__createdtime__": 1610749235609,
+ "__updatedtime__": 1610749235609,
+ "id": "745b3138-a7cf-455a-8256-ac03722eef12",
+ "permission": {
+ "super_user": true
+ },
+ "role": "super_user"
+ }
+]
+```
+
+---
+
+## Add Role
+Creates a new role with the specified permissions. Learn more about HarperDB roles here: [https://harperdb.io/docs/security/users-roles/](https://harperdb.io/docs/security/users-roles/).
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `add_role`
+* role _(required)_ - name of role you are defining
+* permission _(required)_ - object defining permissions for users associated with this role:
+ * super_user _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false.
+ * structure_user (optional) - boolean OR array of schema names (as strings). If boolean, user can create new schemas and tables. If array of strings, users can only manage tables within the specified schemas. This overrides any individual table permissions for specified schemas, or for all schemas if the value is true.
+
+### Body
+```json
+{
+ "operation": "add_role",
+ "role": "developer",
+ "permission": {
+ "super_user": false,
+ "structure_user": false,
+ "dev": {
+ "tables": {
+ "dog": {
+ "read": true,
+ "insert": true,
+ "update": true,
+ "delete": false,
+ "attribute_permissions": [
+ {
+ "attribute_name": "name",
+ "read": true,
+ "insert": true,
+ "update": true
+ }
+ ]
+ }
+ }
+ }
+ }
+}
+```
+
+### Response: 200
+```json
+{
+ "role": "developer",
+ "permission": {
+ "super_user": false,
+ "structure_user": false,
+ "dev": {
+ "tables": {
+ "dog": {
+ "read": true,
+ "insert": true,
+ "update": true,
+ "delete": false,
+ "attribute_permissions": [
+ {
+ "attribute_name": "name",
+ "read": true,
+ "insert": true,
+ "update": true
+ }
+ ]
+ }
+ }
+ }
+ },
+ "id": "0a9368b0-bd81-482f-9f5a-8722e3582f96",
+ "__updatedtime__": 1598549532897,
+ "__createdtime__": 1598549532897
+}
+```
+
+---
+
+## Alter Role
+Modifies an existing role with the specified permissions. Learn more about HarperDB roles here: [https://harperdb.io/docs/security/users-roles/](https://harperdb.io/docs/security/users-roles/).
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `alter_role`
+* id _(required)_ - the id value for the role you are altering
+* role _(optional)_ - name value to update on the role you are altering
+* permission _(required)_ - object defining permissions for users associated with this role:
+ * super_user _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false.
+ * structure_user (optional) - boolean OR array of schema names (as strings). If boolean, user can create new schemas and tables. If array of strings, users can only manage tables within the specified schemas. This overrides any individual table permissions for specified schemas, or for all schemas if the value is true.
+
+### Body
+
+```json
+{
+ "operation": "alter_role",
+ "id": "f92162e2-cd17-450c-aae0-372a76859038",
+ "role": "another_developer",
+ "permission": {
+ "super_user": false,
+ "structure_user": false,
+ "dev": {
+ "tables": {
+ "dog": {
+ "read": true,
+ "insert": true,
+ "update": true,
+ "delete": false,
+ "attribute_permissions": [
+ {
+ "attribute_name": "name",
+ "read": false,
+ "insert": true,
+ "update": true
+ }
+ ]
+ }
+ }
+ }
+ }
+}
+```
+
+### Response: 200
+```json
+{
+ "id": "a7cb91e9-32e4-4dbf-a327-fab4fa9191ea",
+ "role": "developer",
+ "permission": {
+ "super_user": false,
+ "structure_user": false,
+ "dev": {
+ "tables": {
+ "dog": {
+ "read": true,
+ "insert": true,
+ "update": true,
+ "delete": false,
+ "attribute_permissions": [
+ {
+ "attribute_name": "name",
+ "read": false,
+ "insert": true,
+ "update": true
+ }
+ ]
+ }
+ }
+ }
+ },
+ "__updatedtime__": 1598549996106
+}
+```
+
+---
+
+## Drop Role
+Deletes an existing role from the database. NOTE: a role with associated users cannot be dropped. Learn more about HarperDB roles here: https://harperdb.io/docs/security/users-roles/.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - this must always be `drop_role`
+* id _(required)_ - this is the id of the role you are dropping
+
+### Body
+```json
+{
+ "operation": "drop_role",
+ "id": "2ebc3415-0aa0-4eea-9b8e-40860b436119"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "developer successfully deleted"
+}
+```
+
+---
+
+## List Users
+Returns a list of all users. Learn more about HarperDB users here: https://harperdb.io/docs/security/users-roles/.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `list_users`
+
+### Body
+```json
+{
+ "operation": "list_users"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "__createdtime__": 1635520961165,
+ "__updatedtime__": 1635520961165,
+ "active": true,
+ "role": {
+ "__createdtime__": 1635520961161,
+ "__updatedtime__": 1635520961161,
+ "id": "7c78ef13-c1f3-4063-8ea3-725127a78279",
+ "permission": {
+ "super_user": true,
+ "system": {
+ "tables": {
+ "hdb_table": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_attribute": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_schema": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_user": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_role": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_job": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_license": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_info": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_nodes": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_temp": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ }
+ }
+ }
+ },
+ "role": "super_user"
+ },
+ "username": "HDB_ADMIN"
+ }
+]
+```
+
+---
+
+## User Info
+Returns user data for the associated user credentials.
+
+* operation _(required)_ - must always be `user_info`
+
+### Body
+```json
+{
+ "operation": "user_info"
+}
+```
+
+### Response: 200
+```json
+{
+ "__createdtime__": 1610749235611,
+ "__updatedtime__": 1610749235611,
+ "active": true,
+ "role": {
+ "__createdtime__": 1610749235609,
+ "__updatedtime__": 1610749235609,
+ "id": "745b3138-a7cf-455a-8256-ac03722eef12",
+ "permission": {
+ "super_user": true
+ },
+ "role": "super_user"
+ },
+ "username": "HDB_ADMIN"
+}
+```
+
+---
+
+## Add User
+Creates a new user with the specified role and credentials. Learn more about HarperDB users here: https://harperdb.io/docs/security/users-roles/.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `add_user`
+* role _(required)_ - 'role' name value of the role you wish to assign to the user. See `add_role` for more detail
+* username _(required)_ - username assigned to the user. It cannot be altered after adding the user; it serves as the hash (primary key)
+* password _(required)_ - clear text for password. HarperDB will encrypt the password upon receipt
+* active _(required)_ - boolean value for status of user's access to your HarperDB instance. If set to false, user will not be able to access your instance of HarperDB.
+
+### Body
+```json
+{
+ "operation": "add_user",
+ "role": "role_name",
+ "username": "hdb_user",
+ "password": "password",
+ "active": true
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "hdb_user successfully added"
+}
+```
+
+---
+
+## Alter User
+Modifies an existing user's role and/or credentials. Learn more about HarperDB users here: https://harperdb.io/docs/security/users-roles/.
+
+_Operation is restricted to super\_user roles only_
+
+ * operation _(required)_ - must always be `alter_user`
+ * username _(required)_ - username assigned to the user. It cannot be altered after adding the user; it serves as the hash (primary key).
+ * password _(optional)_ - clear text for password. HarperDB will encrypt the password upon receipt
+ * role _(optional)_ - `role` name value of the role you wish to assign to the user. See `add_role` for more detail
+ * active _(optional)_ - status of user's access to your HarperDB instance. See `add_user` for more detail
+
+### Body
+```json
+{
+ "operation": "alter_user",
+ "role": "role_name",
+ "username": "hdb_user",
+ "password": "password",
+ "active": true
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "updated 1 of 1 records",
+ "new_attributes": [],
+ "txn_time": 1611615114397.988,
+ "update_hashes": [
+ "hdb_user"
+ ],
+ "skipped_hashes": []
+}
+```
+
+---
+
+## Drop User
+Deletes an existing user by username. Learn more about HarperDB users here: https://harperdb.io/docs/security/users-roles/.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `drop_user`
+* username _(required)_ - username assigned to the user
+
+### Body
+```json
+{
+ "operation": "drop_user",
+ "username": "sgoldberg"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "sgoldberg successfully deleted"
+}
+```
diff --git a/site/versioned_docs/version-4.2/developers/operations-api/utilities.md b/site/versioned_docs/version-4.2/developers/operations-api/utilities.md
new file mode 100644
index 00000000..8e8a80d5
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/operations-api/utilities.md
@@ -0,0 +1,358 @@
+---
+title: Utilities
+---
+
+# Utilities
+
+## Restart
+Restarts the HarperDB instance.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `restart`
+
+### Body
+```json
+{
+ "operation": "restart"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Restarting HarperDB. This may take up to 60 seconds."
+}
+```
+---
+
+## Restart Service
+Restarts servers for the specified HarperDB service.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `restart_service`
+* service _(required)_ - must be one of: `http_workers`, `clustering_config` or `clustering`
+
+### Body
+```json
+{
+ "operation": "restart_service",
+ "service": "http_workers"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Restarting http_workers"
+}
+```
+
+---
+## System Information
+Returns detailed metrics on the host system.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `system_information`
+* attributes _(optional)_ - string array of top-level attributes desired in the response; if no value is supplied, all attributes will be returned. Available attributes are: ['system', 'time', 'cpu', 'memory', 'disk', 'network', 'harperdb_processes', 'table_size', 'replication']
+
+### Body
+```json
+{
+ "operation": "system_information"
+}
+```
+
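+For example, to limit the response to just CPU and memory metrics, a body using the `attributes` filter might look like:
+
+```json
+{
+    "operation": "system_information",
+    "attributes": ["cpu", "memory"]
+}
+```
+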
+---
+
+## Delete Records Before
+
+Deletes data before the specified timestamp on the specified database table, exclusively on the node where it is executed. Any clustered nodes with replicated data will retain that data.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `delete_records_before`
+* date _(required)_ - records older than this date will be deleted. Supported format looks like: `YYYY-MM-DDThh:mm:ss.sZ`
+* schema _(required)_ - name of the schema where you are deleting your data
+* table _(required)_ - name of the table where you are deleting your data
+
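+The `date` value is an ISO 8601 timestamp. As a sketch of building it relative to the current time (assuming a JavaScript/TypeScript client), for example to delete anything older than 30 days:
+
+```typescript
+// Build an ISO 8601 timestamp for "30 days ago"
+const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;
+const date = new Date(Date.now() - THIRTY_DAYS_MS).toISOString(); // e.g. "2021-01-25T23:05:27.464Z"
+```
+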
+### Body
+```json
+{
+ "operation": "delete_records_before",
+ "date": "2021-01-25T23:05:27.464",
+ "schema": "dev",
+ "table": "breed"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id d3aed926-e9fe-4ec1-aea7-0fb4451bd373",
+ "job_id": "d3aed926-e9fe-4ec1-aea7-0fb4451bd373"
+}
+```
+
+---
+
+## Export Local
+Exports data based on a given search operation to a local file in JSON or CSV format.
+
+* operation _(required)_ - must always be `export_local`
+* format _(required)_ - the format you wish to export the data, options are `json` & `csv`
+* path _(required)_ - path local to the server to export the data
+* search_operation _(required)_ - search_operation of `search_by_hash`, `search_by_value` or `sql`
+
+### Body
+```json
+{
+ "operation": "export_local",
+ "format": "json",
+ "path": "/data/",
+ "search_operation": {
+ "operation": "sql",
+ "sql": "SELECT * FROM dev.breed"
+ }
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 6fc18eaa-3504-4374-815c-44840a12e7e5"
+}
+```
+
+---
+
+## Export To S3
+Exports data based on a given search operation from table to AWS S3 in JSON or CSV format.
+
+* operation _(required)_ - must always be `export_to_s3`
+* format _(required)_ - the format you wish to export the data, options are `json` & `csv`
+* s3 _(required)_ - details your access keys, bucket, bucket region and key for saving the data to S3
+* search_operation _(required)_ - search_operation of `search_by_hash`, `search_by_value` or `sql`
+
+### Body
+```json
+{
+ "operation": "export_to_s3",
+ "format": "json",
+ "s3": {
+ "aws_access_key_id": "YOUR_KEY",
+ "aws_secret_access_key": "YOUR_SECRET_KEY",
+ "bucket": "BUCKET_NAME",
+ "key": "OBJECT_NAME",
+ "region": "BUCKET_REGION"
+ },
+ "search_operation": {
+ "operation": "sql",
+ "sql": "SELECT * FROM dev.dog"
+ }
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 9fa85968-4cb1-4008-976e-506c4b13fc4a",
+ "job_id": "9fa85968-4cb1-4008-976e-506c4b13fc4a"
+}
+```
+
+---
+
+## Install Node Modules
+Executes npm install against specified custom function projects.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `install_node_modules`
+* projects _(required)_ - must be an array of custom function projects.
+* dry_run _(optional)_ - refers to the npm `--dry-run` flag: [https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run](https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run). Defaults to false.
+
+### Body
+```json
+{
+ "operation": "install_node_modules",
+ "projects": [
+ "dogs",
+ "cats"
+ ],
+ "dry_run": true
+}
+```
+
+---
+
+## Set Configuration
+
+Modifies the HarperDB configuration file parameters. Must be followed by a `restart` or `restart_service` operation for the changes to take effect.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `set_configuration`
+* logging_level _(example/optional)_ - one or more configuration keywords to be updated in the HarperDB configuration file
+* clustering_enabled _(example/optional)_ - one or more configuration keywords to be updated in the HarperDB configuration file
+
+### Body
+```json
+{
+ "operation": "set_configuration",
+ "logging_level": "trace",
+ "clustering_enabled": true
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Configuration successfully set. You must restart HarperDB for new config settings to take effect."
+}
+```
+
+---
+
+## Get Configuration
+Returns the HarperDB configuration parameters.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `get_configuration`
+
+### Body
+```json
+{
+ "operation": "get_configuration"
+}
+```
+
+### Response: 200
+```json
+{
+ "http": {
+ "compressionThreshold": 1200,
+ "cors": false,
+ "corsAccessList": [
+ null
+ ],
+ "keepAliveTimeout": 30000,
+ "port": 9926,
+ "securePort": null,
+ "timeout": 120000
+ },
+ "threads": 11,
+ "authentication": {
+ "cacheTTL": 30000,
+ "enableSessions": true,
+ "operationTokenTimeout": "1d",
+ "refreshTokenTimeout": "30d"
+ },
+ "analytics": {
+ "aggregatePeriod": 60
+ },
+ "clustering": {
+ "enabled": true,
+ "hubServer": {
+ "cluster": {
+ "name": "harperdb",
+ "network": {
+ "port": 12345,
+ "routes": null
+ }
+ },
+ "leafNodes": {
+ "network": {
+ "port": 9931
+ }
+ },
+ "network": {
+ "port": 9930
+ }
+ },
+ "leafServer": {
+ "network": {
+ "port": 9940,
+ "routes": null
+ },
+ "streams": {
+ "maxAge": null,
+ "maxBytes": null,
+ "maxMsgs": null,
+ "path": "/Users/hdb/clustering/leaf"
+ }
+ },
+ "logLevel": "info",
+ "nodeName": "node1",
+ "republishMessages": false,
+ "databaseLevel": false,
+ "tls": {
+ "certificate": "/Users/hdb/keys/certificate.pem",
+ "certificateAuthority": "/Users/hdb/keys/ca.pem",
+ "privateKey": "/Users/hdb/keys/privateKey.pem",
+ "insecure": true,
+ "verify": true
+ },
+ "user": "cluster_user"
+ },
+ "componentsRoot": "/Users/hdb/components",
+ "localStudio": {
+ "enabled": false
+ },
+ "logging": {
+ "auditAuthEvents": {
+ "logFailed": false,
+ "logSuccessful": false
+ },
+ "auditLog": true,
+ "auditRetention": "3d",
+ "file": true,
+ "level": "error",
+ "root": "/Users/hdb/log",
+ "rotation": {
+ "enabled": false,
+ "compress": false,
+ "interval": null,
+ "maxSize": null,
+ "path": "/Users/hdb/log"
+ },
+ "stdStreams": false
+ },
+ "mqtt": {
+ "network": {
+ "port": 1883,
+ "securePort": 8883
+ },
+ "webSocket": true,
+ "requireAuthentication": true
+ },
+ "operationsApi": {
+ "network": {
+ "cors": true,
+ "corsAccessList": [
+ "*"
+ ],
+ "domainSocket": "/Users/hdb/operations-server",
+ "port": 9925,
+ "securePort": null
+ }
+ },
+ "rootPath": "/Users/hdb",
+ "storage": {
+ "writeAsync": false,
+ "caching": true,
+ "compression": false,
+ "noReadAhead": true,
+ "path": "/Users/hdb/database",
+ "prefetchWrites": true
+ },
+ "tls": {
+ "certificate": "/Users/hdb/keys/certificate.pem",
+ "certificateAuthority": "/Users/hdb/keys/ca.pem",
+ "privateKey": "/Users/hdb/keys/privateKey.pem"
+ }
+}
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/developers/real-time.md b/site/versioned_docs/version-4.2/developers/real-time.md
new file mode 100644
index 00000000..bcb84756
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/real-time.md
@@ -0,0 +1,158 @@
+---
+title: Real-Time
+---
+
+# Real-Time
+
+## Real-Time
+
+HarperDB provides real-time access to data and messaging. This allows clients to monitor and subscribe to data changes in real time, as well as handle data-oriented messaging. HarperDB supports multiple standardized protocols to facilitate diverse standards-based client interaction.
+
+HarperDB real-time communication is based around database tables. Declared tables are the basis for monitoring data, and defining "topics" for publishing and subscribing to messages. Declaring a table that establishes a topic can be as simple as adding a table with no attributes to your [schema.graphql in a HarperDB application folder](./applications/):
+```
+type MyTopic @table @export
+```
+You can then subscribe to records or sub-topics in this topic/namespace, as well as save data and publish messages, with the protocols discussed below.
+
+### Content Negotiation
+
+HarperDB is a database, not a generic broker, and therefore highly adept at handling _structured_ data. Data can be published and subscribed in all supported structured/object formats, including JSON, CBOR, and MessagePack, and the data will be stored and handled as structured data. This means that different clients can individually choose which format they prefer, both for inbound and outbound messages. One client could publish in JSON, and another client could choose to receive messages in CBOR.
+
+## Protocols
+
+### MQTT
+
+HarperDB supports MQTT as an interface to this real-time data delivery. It is important to note that MQTT in HarperDB is not just a generic pub/sub hub, but is deeply integrated with the database providing subscriptions directly to database records, and publishing to these records. In this document we will explain how MQTT pub/sub concepts are aligned and integrated with database functionality.
+
+#### Configuration
+
+HarperDB supports MQTT with its `mqtt` server module, over standard TCP sockets or over WebSockets. This is enabled by default, but can be configured in your `harperdb-config.yaml` configuration, allowing you to change which ports it listens on, whether secure TLS connections are used, and whether MQTT is accepted over WebSockets:
+
+```yaml
+mqtt:
+ network:
+ port: 1883
+ securePort: 8883 # for TLS
+ webSocket: true # will also enable WS support through the default HTTP interface/port
+ requireAuthentication: true
+```
+
+Note that if you are using WebSockets for MQTT, the sub-protocol should be set to "mqtt" (this is required by the MQTT specification, and should be included by any conformant client): `Sec-WebSocket-Protocol: mqtt`.
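+
+For illustration, a browser client might set the required sub-protocol like this minimal sketch; the host and port are placeholders for your HarperDB HTTP interface, and a full MQTT client library would normally manage the rest of the protocol:
+
+```javascript
+// The second argument sets the Sec-WebSocket-Protocol header to "mqtt".
+let ws = new WebSocket('ws://localhost:9926', 'mqtt');
+```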
+
+#### Capabilities
+
+HarperDB's MQTT capabilities include support for MQTT versions 3.1 and 5 with standard publish and subscribe capabilities, multi-level topics, QoS 0 and 1 levels, and durable (non-clean) sessions. HarperDB accepts QoS 2 interaction, but doesn't guarantee exactly-once delivery (any guarantee of exactly-once delivery over unstable networks is a fictional aspiration anyway). HarperDB doesn't currently support last will, nor single-level wildcards (only multi-level wildcards).
+
+### Topics
+
+In MQTT, messages are published to, and subscribed from, topics. In HarperDB topics are aligned with resource endpoint paths in exactly the same way as the REST endpoints. If you define a table or resource in your schema, with a path/endpoint of "my-resource", that means that this can be addressed as a topic just like a URL path. So a topic of "my-resource/some-id" would correspond to the record in the my-resource table (or custom resource) with a record id of "some-id".
+
+This means that you can subscribe to "my-resource/some-id" and making this subscription means you will receive notification messages for any updates to this record. If this record is modified or deleted, a message will be sent to listeners of this subscription.
+
+The current value of this record is also treated as the "retained" message for this topic. When you subscribe to "my-resource/some-id", you will immediately receive the record for this id, through a "publish" command from the server, as the initial "retained" message that is first delivered. This provides a simple and effective way to get the current state of a record and future updates to that record without having to worry about timing issues of aligning a retrieval and subscription separately.
+
+Similarly, publishing a message to a "topic" also interacts with the database. Publishing a message with "retain" flag enabled is interpreted as an update or put to that record. The published message will replace the current record with the contents of the published message.
+
+If a message is published without a `retain` flag, the message will not alter the record at all, but will still be published to any subscribers to that record.
+
+HarperDB supports QoS 0 and 1 for publishing and subscribing.
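+
+As a rough sketch of how these pieces fit together with a standard client, the following uses the mqtt.js package against the `MyTopic` table declared earlier; the host, credentials, and record ids are placeholders:
+
+```javascript
+const mqtt = require('mqtt');
+
+// Host and credentials are placeholders for your HarperDB node and a real user.
+const client = mqtt.connect('mqtt://localhost:1883', { username: 'user', password: 'password' });
+
+client.on('connect', () => {
+  // Subscribing to a record delivers the current record as the initial "retained" message,
+  // followed by a message for each subsequent change to that record.
+  client.subscribe('MyTopic/some-id', { qos: 1 });
+
+  // Publishing with retain: true acts as an update/put of the record with id "another-id".
+  client.publish('MyTopic/another-id', JSON.stringify({ name: 'some data' }), { qos: 1, retain: true });
+});
+
+client.on('message', (topic, payload) => {
+  console.log(topic, JSON.parse(payload.toString()));
+});
+```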
+
+HarperDB supports multi-level topics, both for subscribing and publishing. HarperDB also supports multi-level wildcards, so you can subscribe to `my-resource/#` to receive notifications for `my-resource/some-id` as well as `my-resource/nested/id`, or you can subscribe to `my-resource/nested/#` and receive the latter, but not the former, topic messages. HarperDB currently only supports trailing multi-level wildcards (no single-level wildcards).
+
+### Ordering
+
+HarperDB is designed to be a distributed database, and an intrinsic characteristic of distributed servers is that messages may take different amounts of time to traverse the network and may arrive in a different order depending on server location and network topology. HarperDB is designed for distributed data with minimal latency, so messages are delivered to subscribers immediately when they arrive. HarperDB does not delay messages to coordinate confirmation or consensus among other nodes, which would significantly increase latency; messages are delivered as quickly as possible.
+
+As an example, let's consider message #1 is published to node A, which then sends the message to node B and node C, but the message takes a while to get there. Slightly later, while the first message is still in transit, message #2 is published to node B, which then replicates it to A and C, and because of network conditions, message #2 arrives at node C before message #1. Because HarperDB prioritizes low latency, when node C receives message #2, it immediately publishes it to all its local subscribers (it has no knowledge that message #1 is in transit).
+
+When message #1 is received by node C, what it does with the message depends on whether the message is a "retained" message (published with the retain flag set to true, or put/update/upsert/inserted into the database) or a non-retained message. In the case of a non-retained message, the message will be delivered to all local subscribers (even though it had been published earlier), thereby prioritizing the delivery of every message. On the other hand, a retained message will not deliver the earlier out-of-order message to clients; HarperDB will keep the message with the latest timestamp as the "winning" record state (and it will be the retained message for any subsequent subscriptions). Retained messages maintain (eventual) consistency across the entire cluster of servers: all nodes will converge on the same message as the latest and retained message (#2 in this case).
+
+Non-retained messages are generally a good choice for applications like chat, where every message needs to be delivered even if messages might arrive out of order (the order may not be consistent across all servers). Retained messages can be thought of as "superseding" messages, and are a good fit for applications like instrument measurements (for example, temperature readings), where the priority is to provide the _latest_ reading, older readings do not need to be published after a newer one, and consistency of the most recent record across the network is important.
+
+### WebSockets
+
+WebSockets are supported through the REST interface and go through the `connect(incomingMessages)` method on resources. By default, making a WebSockets connection to a URL will subscribe to the referenced resource. For example, making a WebSocket connection to `new WebSocket('wss://server/my-resource/341')` will access the resource defined for 'my-resource' and the resource id of 341 and connect to it. On the web platform this could be:
+
+```javascript
+let ws = new WebSocket('wss://server/my-resource/341');
+ws.onmessage = (event) => {
+  // received a notification from the server
+ let data = JSON.parse(event.data);
+};
+```
+
+By default, the resources will make a subscription to that resource, monitoring any changes to the records or messages published to it, and will return events on the WebSockets connection. You can also override `connect(incomingMessages)` with your own handler. The `connect` method simply needs to return an iterable (asynchronous iterable) that represents the stream of messages to be sent to the client. One easy way to create an iterable stream is to define the `connect` method as a generator and `yield` messages as they become available. For example, a simple WebSockets echo server for a resource could be written:
+
+```javascript
+export class Echo extends Resource {
+  async *connect(incomingMessages) {
+    for await (let message of incomingMessages) { // wait for each incoming message from the client
+      // and send the message back to the client
+      yield message;
+    }
+  }
+}
+```
+
+You can also call the default `connect` and it will provide a convenient streaming iterable with events for the outgoing messages, with a `send` method that you can call to send messages on the iterable, and a `close` event for determining when the connection is closed. The incoming messages iterable is also an event emitter, and you can listen for `data` events to get the incoming messages using event style:
+
+```javascript
+export class Example extends Resource {
+  connect(incomingMessages) {
+    let outgoingMessages = super.connect();
+    let timer = setInterval(() => {
+      outgoingMessages.send({greeting: 'hi again!'});
+    }, 1000); // send a message once a second
+    incomingMessages.on('data', (message) => {
+      // another way of echoing the data back to the client
+      outgoingMessages.send(message);
+    });
+    outgoingMessages.on('close', () => {
+      // make sure we end the timer once the connection is closed
+      clearInterval(timer);
+    });
+    return outgoingMessages;
+  }
+}
+```
+
+### Server Sent Events
+
+Server Sent Events (SSE) are also supported through the REST server interface, and provide a simple and efficient mechanism for web-based applications to receive real-time updates. For consistency of push delivery, SSE connections go through the `connect()` method on resources, much like WebSockets. The primary difference is that `connect` is called without any `incomingMessages` argument, since SSE is a one-directional transport mechanism. This can be used much like WebSockets: specifying a resource URL path will connect to that resource, and by default provides a stream of messages for changes and messages for that resource. For example, you can connect to receive notifications in a browser for a resource like:
+
+```javascript
+let eventSource = new EventSource('https://server/my-resource/341', { withCredentials: true });
+eventSource.onmessage = (event) => {
+  // received a notification from the server
+ let data = JSON.parse(event.data);
+};
+```
+
+### MQTT Feature Support Matrix
+
+| Feature | Support |
+| ------- | ------- |
+| Connections, protocol negotiation, and acknowledgement with v3.1.1 | :heavy_check_mark: |
+| Connections, protocol negotiation, and acknowledgement with v5 | :heavy_check_mark: |
+| Secure MQTTS | :heavy_check_mark: |
+| MQTTS over WebSockets | :heavy_check_mark: |
+| MQTT authentication via user/pass | :heavy_check_mark: |
+| MQTT authentication via mTLS | :heavy_check_mark: |
+| Publish | :heavy_check_mark: |
+| Subscribe | :heavy_check_mark: |
+| Multi-level wildcard | :heavy_check_mark: |
+| Single-level wildcard | :heavy_check_mark: |
+| QoS 0 | :heavy_check_mark: |
+| QoS 1 | :heavy_check_mark: |
+| QoS 2 | Not fully supported; the QoS 2 conversation is accepted, but exactly-once delivery is not guaranteed |
+| Clean session | :heavy_check_mark: |
+| Durable session | :heavy_check_mark: |
+| Distributed durable session | |
+| Will | :heavy_check_mark: |
+| MQTT V5 User properties | |
+| MQTT V5 Will properties | |
+| MQTT V5 Connection properties | |
+| MQTT V5 Connection acknowledgement properties | |
+| MQTT V5 Publish properties | |
+| MQTT V5 Subscribe properties | |
+| MQTT V5 Ack properties | |
+| MQTT V5 AUTH command | |
+| MQTT V5 Shared Subscriptions | |
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/developers/rest.md b/site/versioned_docs/version-4.2/developers/rest.md
new file mode 100644
index 00000000..6b44783f
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/rest.md
@@ -0,0 +1,201 @@
+---
+title: REST
+---
+
+# REST
+
+HarperDB provides a powerful, efficient, and standard-compliant HTTP REST interface for interacting with tables and other resources. The REST interface is the recommended interface for data access, querying, and manipulation (for HTTP interactions), providing the best performance and HTTP interoperability with different clients.
+
+Resources, including tables, can be configured as RESTful endpoints. The name of the table or the [exported](./applications/defining-schemas#export) name of the resource defines the beginning of the endpoint path. From there, a record id or query can be appended. Following uniform interface principles, HTTP methods define different actions with resources. For each method, this describes the default action.
+
+The default path structure provides access to resources at several different levels:
+
+* `/my-resource` - The root path of a resource usually has a description of the resource (like a describe operation for a table).
+* `/my-resource/` - The trailing slash in a path indicates it is a collection of the records. The root collection for a table represents all the records in a table, and usually you will append query parameters to query and search for more specific records.
+* `/my-resource/record-id` - This resource locator represents a specific record, referenced by its id. This is typically how you can retrieve, update, and delete individual records.
+* `/my-resource/record-id/` - Again, a trailing slash indicates a collection; here it is the collection of the records that begin with the specified id prefix.
+* `/my-resource/record-id/with/multiple/parts` - A record id can consist of multiple path segments.
+
+## GET
+
+These can be used to retrieve individual records or perform searches. This is handled by the Resource method `get()` (and can be overridden).
+
+### `GET /my-resource/<record-id>`
+
+This can be used to retrieve a record by its primary key. The response will include the record as the body.
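+
+For example, to retrieve the record with primary key "123" from `MyTable`:
+
+```http
+GET /MyTable/123
+```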
+
+#### Caching/Conditional Requests
+
+A `GET` response for a record will include an `ETag` response header containing an encoded version (a timestamp of the last modification) of the record (or of any accessed record when used in a custom get method). On subsequent requests, a client that has a cached copy may include an `If-None-Match` request header with this tag. If the record has not been updated since that version, the response will have a 304 status and no body. This facilitates significant performance gains since the response data doesn't need to be serialized and transferred over the network.
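+
+For illustration, a client holding a cached copy might revalidate it like this (the tag value is a placeholder for whatever the previous response returned in its `ETag` header):
+
+```http
+GET /MyTable/123
+If-None-Match: "previous-etag-value"
+```
+
+If the record has not changed, the response is a 304 with no body; otherwise the full record is returned along with a fresh `ETag`.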
+
+### `GET /my-resource/?property=value`
+
+This can be used to search for records by the specified property name and value. See the querying section for more information.
+
+### `GET /my-resource/<record-id>.property`
+
+This can be used to retrieve the specified property of the specified record.
+
+## PUT
+
+This can be used to create or update a record with the provided object/data (similar to an "upsert") with a specified key. This is handled by the Resource method `put(record)`.
+
+### `PUT /my-resource/<record-id>`
+
+This will create or update the record with the URL path that maps to the record's primary key. The record will be replaced with the contents of the data in the request body. The new record will exactly match the data that was sent (this will remove any properties that were present in the previous record and not included in the body). Future GETs will return the exact data that was provided by PUT (what you PUT is what you GET). For example:
+
+```http
+PUT /MyTable/123
+Content-Type: application/json
+
+{ "name": "some data" }
+```
+
+This will create or replace the record with a primary key of "123" with the object defined by the JSON in the body. This is handled by the Resource method `put()`.
+
+## DELETE
+
+This can be used to delete a record or records.
+
+### `DELETE /my-resource/<record-id>`
+
+This will delete a record with the given primary key. This is handled by the Resource's `delete` method. For example:
+
+```http
+DELETE /MyTable/123
+```
+
+This will delete the record with the primary key of "123".
+
+### `DELETE /my-resource/?property=value`
+
+This will delete all the records that match the provided query.
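+
+For example, assuming `category` is an indexed attribute on `MyTable`, the following would delete every matching record:
+
+```http
+DELETE /MyTable/?category=software
+```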
+
+## POST
+
+Generally the POST method can be used for custom actions since POST has the broadest semantics. For tables that are exposed as endpoints, this also can be used to create new records.
+
+### `POST /my-resource/`
+
+This is handled by the Resource method `post(data)`, which is a good method to extend to make various other types of modifications. Also, with a table you can create a new record without specifying a primary key, for example:
+
+```http
+POST /MyTable/
+Content-Type: application/json
+
+{ "name": "some data" }
+```
+
+This will create a new record, auto-assigning a primary key, which will be returned in the `Location` header.
+
+## Querying through URL query parameters
+
+URL query parameters provide a powerful language for specifying database queries in HarperDB. This can be used to search by a single property name and value, to find all records whose value matches for the given property/attribute. It is important to note that this property must be indexed in order to search on it. For example:
+
+```http
+GET /my-resource/?property=value
+```
+
+We can specify multiple properties that must match:
+
+```http
+GET /my-resource/?property=value&property2=another-value
+```
+
+Note that only one of the properties needs to be indexed for this query to execute.
+
+We can also specify different comparators such as less than and greater than queries using [FIQL](https://datatracker.ietf.org/doc/html/draft-nottingham-atompub-fiql-00) syntax. If we want to specify records with an `age` value greater than 20:
+
+```http
+GET /my-resource/?age=gt=20
+```
+
+Or less than or equal to 20:
+
+```http
+GET /my-resource/?age=le=20
+```
+
+The comparison operators include `lt` (less than), `le` (less than or equal), `gt` (greater than), `ge` (greater than or equal), and `ne` (not equal). These comparison operators can also be combined with other query parameters with `&`. For example, if we wanted products with a category of software and price between 100 and 200, we could write:
+
+```http
+GET /product/?category=software&price=gt=100&price=lt=200
+```
+
+HarperDB has several special query functions that use "call" syntax. These can be included in the query string as their own query entries (separated from other query conditions with an `&`). These include:
+
+### `select(properties)`
+
+This allows you to specify which properties should be included in the responses. This takes several forms:
+
+* `?select(property)`: This will return the values of the specified property directly in the response (will not be put in an object).
+* `?select(property1,property2)`: This returns the records as objects, but limited to the specified properties.
+* `?select([property1,property2,...])`: This returns the records as arrays of the property values in the specified properties.
+* `?select(property1,)`: This can be used to specify that objects should be returned with the single specified property.
+
+To get a list of product names with a category of software:
+
+```http
+GET /product/?category=software&select(name)
+```
+
+### `limit(start,end)` or `limit(end)`
+
+Specifies a limit on the number of records returned, optionally providing a starting offset.
+
+For example, to find the first twenty records with a `rating` greater than 3, `inStock` equal to true, only returning the `rating` and `name` properties, you could use:
+
+```http
+GET /product?rating=gt=3&inStock=true&select(rating,name)&limit(20)
+```
+
+### Content Types and Negotiation
+
+HTTP defines a couple of headers for indicating the (preferred) content type of the request and response. The `Content-Type` request header can be used to specify the content type of the request body (for PUT, PATCH, and POST). The `Accept` request header indicates the preferred content type of the response. For general records with object structures, HarperDB supports the following content types:
+
+* `application/json` - Common format, easy to read, with great tooling support.
+* `application/cbor` - Recommended binary format for optimal encoding efficiency and performance.
+* `application/x-msgpack` - This is also an efficient format, but CBOR is preferable, as it has better streaming capabilities and faster time-to-first-byte.
+* `text/csv` - CSV lacks explicit typing and is not well suited for heterogeneous data structures, but is good for moving data to and from a spreadsheet.
+
+CBOR is generally the most efficient and powerful encoding format, with the best performance, most compact encoding, and most expansive ability to encode different data types like Dates, Maps, and Sets. MessagePack is very similar and tends to have broader adoption. However, JSON can be easier to work with and may have better tooling. Also, if you are using compression for data transfer (gzip or brotli), JSON will often result in more compact compressed data due to character frequencies that better align with Huffman coding, making JSON a good choice for web applications that do not require specific data types beyond the standard JSON types.
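+
+For example, a client that prefers a CBOR response for a query might send:
+
+```http
+GET /product/?category=software
+Accept: application/cbor
+```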
+
+Requesting a specific content type can also be done in a URL by suffixing the path with extension for the content type. If you want to retrieve a record in CSV format, you could request:
+
+```http
+GET /product/some-id.csv
+```
+
+Or you could request a query response in MessagePack:
+
+```http
+GET /product/.msgpack?category=software
+```
+
+However, generally it is not recommended that you use extensions in paths and it is best practice to use the `Accept` header to specify acceptable content types.
+
+### Specific Content Objects
+
+You can specify other content types, and the data will be stored as a record or object that holds the type and contents of the data. For example, if you do:
+
+```
+PUT /my-resource/33
+Content-Type: text/calendar
+
+BEGIN:VCALENDAR
+VERSION:2.0
+...
+```
+
+This would store a record equivalent to JSON:
+
+```
+{ "contentType": "text/calendar", data: "BEGIN:VCALENDAR\nVERSION:2.0\n...
+```
+
+Retrieving a record with `contentType` and `data` properties will likewise return a response with the specified `Content-Type` and body. If the `Content-Type` is not of the `text` family, the data will be treated as binary data (a Node.js `Buffer`).
+
+You can also use `application/octet-stream` to indicate that the request body should be preserved in binary form. This is also useful for uploading to a specific property:
+
+```
+PUT /my-resource/33/image
+Content-Type: image/gif
+
+...image data...
+```
diff --git a/site/versioned_docs/version-4.2/developers/security/basic-auth.md b/site/versioned_docs/version-4.2/developers/security/basic-auth.md
new file mode 100644
index 00000000..00ab8b6d
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/security/basic-auth.md
@@ -0,0 +1,62 @@
+---
+title: Basic Authentication
+---
+
+# Basic Authentication
+
+HarperDB uses Basic Auth and JSON Web Tokens (JWTs) to secure our HTTP requests. In the context of an HTTP transaction, **basic access authentication** is a method for an HTTP user agent to provide a username and password when making a request.
+
+_**You do not need to log in separately. Basic Auth is added to each HTTP request like create\_schema, create\_table, insert, etc. via headers.**_
+
+A header is added to each HTTP request. The header key is **“Authorization”** and the header value is **“Basic <<your username and password buffer token>>”** (the Base64 encoding of `username:password`).
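+
+For example, with cURL the header value can be generated from the username and password like this (the credentials and endpoint are placeholders):
+
+```bash
+curl --location --request POST 'http://localhost:9925' \
+--header 'Content-Type: application/json' \
+--header "Authorization: Basic $(echo -n 'username:password' | base64)" \
+--data-raw '{
+    "operation": "describe_all"
+}'
+```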
+
+## Authentication in HarperDB Studio
+
+In the below code sample, you can see where we add the authorization header to the request. This needs to be added for each and every HTTP request for HarperDB.
+
+_Note: This function uses btoa. Learn about_ [_btoa here_](https://developer.mozilla.org/en-US/docs/Web/API/btoa)_._
+
+```javascript
+const http = require('http'); // Node.js HTTP module used to make the request
+
+// Note: `isJson` is assumed to be a helper that checks whether a buffer contains parseable JSON.
+function callHarperDB(call_object, operation, callback){
+
+ const options = {
+ "method": "POST",
+ "hostname": call_object.endpoint_url,
+ "port": call_object.endpoint_port,
+ "path": "/",
+ "headers": {
+ "content-type": "application/json",
+ "authorization": "Basic " + btoa(call_object.username + ':' + call_object.password),
+ "cache-control": "no-cache"
+
+ }
+ };
+
+ const http_req = http.request(options, function (hdb_res) {
+ let chunks = [];
+
+ hdb_res.on("data", function (chunk) {
+ chunks.push(chunk);
+ });
+
+ hdb_res.on("end", function () {
+ const body = Buffer.concat(chunks);
+ if (isJson(body)) {
+ return callback(null, JSON.parse(body));
+ } else {
+ return callback(body, null);
+
+ }
+
+ });
+ });
+
+ http_req.on("error", function (chunk) {
+ return callback("Failed to connect", null);
+ });
+
+ http_req.write(JSON.stringify(operation));
+ http_req.end();
+
+}
+```
diff --git a/site/versioned_docs/version-4.2/developers/security/certificate-management.md b/site/versioned_docs/version-4.2/developers/security/certificate-management.md
new file mode 100644
index 00000000..eb69df74
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/security/certificate-management.md
@@ -0,0 +1,62 @@
+---
+title: Certificate Management
+---
+
+# Certificate Management
+
+This document covers managing certificates for HarperDB's external-facing APIs. For information on certificate management for clustering see [clustering certificate management](../clustering/certificate-management).
+
+## Development
+
+An out-of-the-box install of HarperDB does not have HTTPS enabled (see [configuration](../../deployments/configuration) for the relevant configuration file settings). This is great for local development. If you are developing using a remote server and your requests are traversing the Internet, we recommend that you enable HTTPS.
+
+To enable HTTPS, set `http.securePort` in `harperdb-config.yaml` to the port you wish to use for HTTPS connections and restart HarperDB.
+
+By default HarperDB will generate certificates and place them in `keys/` under your HarperDB root path. These certificates will not have a valid Common Name (CN) for your HarperDB node, so you will be able to use HTTPS, but your HTTPS client must be configured to accept the invalid certificate.
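+
+For example, during development you might call the HTTP interface over HTTPS while telling your client to accept the self-signed certificate (the port here is a placeholder for whatever you set `http.securePort` to):
+
+```bash
+# -k / --insecure tells curl to accept the auto-generated, self-signed certificate.
+curl -k https://localhost:9927/MyTable/123
+```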
+
+## Production
+
+For production deployments, in addition to using HTTPS, we recommend using your own certificate authority (CA) or a public CA such as Let's Encrypt, to generate certificates with CNs that match the Fully Qualified Domain Name (FQDN) of your HarperDB node.
+
+We have a few recommended options for enabling HTTPS in a production setting.
+
+### Option: Enable HarperDB HTTPS and Replace Certificates
+
+To enable HTTPS, set `http.securePort` in `harperdb-config.yaml` to the port you wish to use for HTTPS connections and restart HarperDB.
+
+To replace the certificates, either replace the contents of the existing certificate files in `keys/` under your HarperDB root path, or update the HarperDB configuration with the path of your new certificate files, and then restart HarperDB.
+
+```yaml
+tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+```
+
+The `operationsApi.tls` configuration is optional. If it is not set, HarperDB will default to the values in the `tls` section.
+
+```yaml
+operationsApi:
+ tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+```
+
+### Option: Nginx Reverse Proxy
+
+Instead of enabling HTTPS for HarperDB, Nginx can be used as a reverse proxy for HarperDB.
+
+Install Nginx, configure Nginx to use certificates issued from your own CA or a public CA, then configure Nginx to listen for HTTPS requests and forward to HarperDB as HTTP requests.
+
+[Certbot](https://certbot.eff.org/) is a great tool for automatically requesting and renewing Let’s Encrypt certificates used by Nginx.
+
+### Option: External Reverse Proxy
+
+Instead of enabling HTTPS for HarperDB, a number of different external services can be used as a reverse proxy for HarperDB. These services typically have integrated certificate management. Configure the service to listen for HTTPS requests and forward (over a private network) to HarperDB as HTTP requests.
+
+Examples of these types of services include an AWS Application Load Balancer or a GCP external HTTP(S) load balancer.
+
+### Additional Considerations
+
+It is possible to use different certificates for the Operations API and the Custom Functions API. In scenarios where only your Custom Functions endpoints need to be exposed to the Internet and the Operations API is reserved for HarperDB administration, you may want to use a private CA to issue certificates for the Operations API and a public CA for the Custom Functions API certificates.
diff --git a/site/versioned_docs/version-4.2/developers/security/configuration.md b/site/versioned_docs/version-4.2/developers/security/configuration.md
new file mode 100644
index 00000000..67d959fd
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/security/configuration.md
@@ -0,0 +1,39 @@
+---
+title: Configuration
+---
+
+# Configuration
+
+HarperDB was set up to require very minimal configuration to work out of the box. There are, however, some best practices we encourage for anyone building an app with HarperDB.
+
+## CORS
+
+HarperDB allows for managing [cross-origin HTTP requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS). By default, HarperDB enables CORS for all domains. If you need to disable CORS completely or set up an access list of domains, you can do the following (a configuration sketch follows the list):
+
+1. Open the `harperdb-config.yaml` file, which can be found in your HarperDB root path, the location you specified during install.
+1. In harperdb-config.yaml there should be 2 entries under `operationsApi.network`: cors and corsAccessList.
+ * `cors`
+ 1. To turn off, change to: `cors: false`
+ 1. To turn on, change to: `cors: true`
+ * `corsAccessList`
+ 1. The `corsAccessList` will only be recognized by the system when `cors` is `true`
+ 1. To create an access list you set `corsAccessList` to a comma-separated list of domains.
+
+      i.e. `corsAccessList` is `http://harperdb.io,http://products.harperdb.io`
+ 1. To clear out the access list and allow all domains: `corsAccessList` is `[null]`
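+
+A sketch of how these settings might look in `harperdb-config.yaml`, using the domains from the example above:
+
+```yaml
+operationsApi:
+  network:
+    cors: true
+    corsAccessList:
+      - http://harperdb.io
+      - http://products.harperdb.io
+```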
+
+## SSL
+
+HarperDB provides the option to use an HTTP interface, or HTTPS with HTTP/2. The default port for the server is 9925.
+
+The default port can be changed by updating the `operationsApi.network.port` value in the `harperdb-config.yaml` file in your HarperDB root path.
+
+By default, HTTPS is turned off and HTTP is turned on. It is recommended that you never directly expose HarperDB's HTTP interface through a publicly available port. HTTP is intended for local or private network use.
+
+You can toggle between HTTPS and HTTP in the configuration file by setting `operationsApi.network.https` to `true`/`false`. When `https` is set to `false`, the server will use HTTP (version 1.1). Enabling HTTPS will enable both HTTP/1.1 and HTTP/2.
+
+HarperDB automatically generates a certificate (certificate.pem), a certificate authority (ca.pem), and a private key file (privateKey.pem), which live in `keys/` under your HarperDB root path.
+
+You can replace these with your own certificates and key.
+
+**Changes to these settings require a restart. Use the `restart` operation from the HarperDB Operations API.**
diff --git a/site/versioned_docs/version-4.2/developers/security/index.md b/site/versioned_docs/version-4.2/developers/security/index.md
new file mode 100644
index 00000000..cc5dcfc2
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/security/index.md
@@ -0,0 +1,12 @@
+---
+title: Security
+---
+
+# Security
+
+HarperDB uses role-based, attribute-level security to ensure that users can only gain access to the data they’re supposed to be able to access. Our granular permissions allow for unparalleled flexibility and control, and can actually lower the total cost of ownership compared to other database solutions, since you no longer have to replicate subsets of your data to isolate use cases.
+
+* [JWT Authentication](./jwt-auth)
+* [Basic Authentication](./basic-auth)
+* [Configuration](./configuration)
+* [Users and Roles](./users-and-roles)
diff --git a/site/versioned_docs/version-4.2/developers/security/jwt-auth.md b/site/versioned_docs/version-4.2/developers/security/jwt-auth.md
new file mode 100644
index 00000000..f48fe0ee
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/security/jwt-auth.md
@@ -0,0 +1,96 @@
+---
+title: JWT Authentication
+---
+
+# JWT Authentication
+
+HarperDB uses token-based authentication with JSON Web Tokens (JWTs).
+
+This consists of two primary operations `create_authentication_tokens` and `refresh_operation_token`. These generate two types of tokens, as follows:
+
+* The `operation_token` which is used to authenticate all HarperDB operations in the Bearer Token Authorization Header. The default expiry is one day.
+* The `refresh_token` which is used to generate a new `operation_token` upon expiry. This token is used in the Bearer Token Authorization Header for the `refresh_operation_token` operation only. The default expiry is thirty days.
+
+The `create_authentication_tokens` operation can be used at any time to refresh both tokens in the event that both have expired or been lost.
+
+## Create Authentication Tokens
+
+Users must initially create tokens using their HarperDB credentials. The following POST body is sent to HarperDB. No headers are required for this POST operation.
+
+```json
+{
+ "operation": "create_authentication_tokens",
+ "username": "username",
+ "password": "password"
+}
+```
+
+A full cURL example can be seen here:
+
+```bash
+curl --location --request POST 'http://localhost:9925' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+ "operation": "create_authentication_tokens",
+ "username": "username",
+ "password": "password"
+}'
+```
+
+An example expected return object is:
+
+```json
+{
+ "operation_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InVzZXJuYW1lIiwiaWF0IjoxNjA0OTc4MjAwLCJleHAiOjE2MDUwNjQ2MDAsInN1YiI6Im9wZXJhdGlvbiJ9.MpQA-9CMjA-mn-7mHyUXSuSC_-kqMqJXp_NDiKLFtbtMRbodCuY3DzH401rvy_4vb0yCELf0B5EapLVY1545sv80nxSl6FoZFxQaDWYXycoia6zHpiveR8hKlmA6_XTWHJbY2FM1HAFrdtt3yUTiF-ylkdNbPG7u7fRjTmHfsZ78gd2MNWIDkHoqWuFxIyqk8XydQpsjULf2Uacirt9FmHfkMZ-Jr_rRpcIEW0FZyLInbm6uxLfseFt87wA0TbZ0ofImjAuaW_3mYs-3H48CxP152UJ0jByPb0kHsk1QKP7YHWx1-Wce9NgNADfG5rfgMHANL85zvkv8sJmIGZIoSpMuU3CIqD2rgYnMY-L5dQN1fgfROrPMuAtlYCRK7r-IpjvMDQtRmCiNG45nGsM4DTzsa5GyDrkGssd5OBhl9gr9z9Bb5HQVYhSKIOiy72dK5dQNBklD4eGLMmo-u322zBITmE0lKaBcwYGJw2mmkYcrjDOmsDseU6Bf_zVUd9WF3FqwNkhg4D7nrfNSC_flalkxPHckU5EC_79cqoUIX2ogufBW5XgYbU4WfLloKcIpb51YTZlZfwBHlHPSyaq_guaXFaeCUXKq39_i1n0HRF_mRaxNru0cNDFT9Fm3eD7V8axFijSVAMDyQs_JR7SY483YDKUfN4l-vw-EVynImr4",
+ "refresh_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InVzZXJuYW1lIiwiaWF0IjoxNjA0OTc4MjAwLCJleHAiOjE2MDc1NzAyMDAsInN1YiI6InJlZnJlc2gifQ.acaCsk-CJWIMLGDZdGnsthyZsJfQ8ihXLyE8mTji8PgGkpbwhs7e1O0uitMgP_pGjHq2tey1BHSwoeCL49b18WyMIB10hK-q2BXGKQkykltjTrQbg7VsdFi0h57mGfO0IqAwYd55_hzHZNnyJMh4b0iPQFDwU7iTD7x9doHhZAvzElpkWbc_NKVw5_Mw3znjntSzbuPN105zlp4Niurin-_5BnukwvoJWLEJ-ZlF6hE4wKhaMB1pWTJjMvJQJE8khTTvlUN8tGxmzoaDYoe1aCGNxmDEQnx8Y5gKzVd89sylhqi54d2nQrJ2-ElfEDsMoXpR01Ps6fNDFtLTuPTp7ixj8LvgL2nCjAg996Ga3PtdvXJAZPDYCqqvaBkZZcsiqOgqLV0vGo3VVlfrcgJXQImMYRr_Inu0FCe47A93IAWuQTs-KplM1KdGJsHSnNBV6oe6QEkROJT5qZME-8xhvBYvOXqp9Znwg39bmiBCMxk26Ce66_vw06MNgoa3D5AlXPWemfdVKPZDnj_aLVjZSs0gAfFElcVn7l9yjWJOaT2Muk26U8bJl-2BEq_DSclqKHODuYM5kkPKIdE4NFrsqsDYuGxcA25rlNETFyl0q-UXj1aoz_joy5Hdnr4mFELmjnoo4jYQuakufP9xeGPsj1skaodKl0mmoGcCD6v1F60"
+}
+```
+
+## Using JWT Authentication Tokens
+
+The `operation_token` value is used to authenticate all operations in place of our standard Basic auth. In order to pass the token you will need to create a Bearer Token Authorization header like the following request:
+
+```bash
+curl --location --request POST 'http://localhost:9925' \
+--header 'Content-Type: application/json' \
+--header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InVzZXJuYW1lIiwiaWF0IjoxNjA0OTc4MjAwLCJleHAiOjE2MDUwNjQ2MDAsInN1YiI6Im9wZXJhdGlvbiJ9.MpQA-9CMjA-mn-7mHyUXSuSC_-kqMqJXp_NDiKLFtbtMRbodCuY3DzH401rvy_4vb0yCELf0B5EapLVY1545sv80nxSl6FoZFxQaDWYXycoia6zHpiveR8hKlmA6_XTWHJbY2FM1HAFrdtt3yUTiF-ylkdNbPG7u7fRjTmHfsZ78gd2MNWIDkHoqWuFxIyqk8XydQpsjULf2Uacirt9FmHfkMZ-Jr_rRpcIEW0FZyLInbm6uxLfseFt87wA0TbZ0ofImjAuaW_3mYs-3H48CxP152UJ0jByPb0kHsk1QKP7YHWx1-Wce9NgNADfG5rfgMHANL85zvkv8sJmIGZIoSpMuU3CIqD2rgYnMY-L5dQN1fgfROrPMuAtlYCRK7r-IpjvMDQtRmCiNG45nGsM4DTzsa5GyDrkGssd5OBhl9gr9z9Bb5HQVYhSKIOiy72dK5dQNBklD4eGLMmo-u322zBITmE0lKaBcwYGJw2mmkYcrjDOmsDseU6Bf_zVUd9WF3FqwNkhg4D7nrfNSC_flalkxPHckU5EC_79cqoUIX2ogufBW5XgYbU4WfLloKcIpb51YTZlZfwBHlHPSyaq_guaXFaeCUXKq39_i1n0HRF_mRaxNru0cNDFT9Fm3eD7V8axFijSVAMDyQs_JR7SY483YDKUfN4l-vw-EVynImr4' \
+--data-raw '{
+ "operation":"search_by_hash",
+ "schema":"dev",
+ "table":"dog",
+ "hash_values":[1],
+ "get_attributes": ["*"]
+}'
+```
+
+## Token Expiration
+
+`operation_token` expires at a set interval. Once it expires it will no longer be accepted by HarperDB. This duration defaults to one day, and is configurable in [harperdb-config.yaml](../../deployments/configuration). To generate a new `operation_token`, the `refresh_operation_token` operation is used, passing the `refresh_token` in the Bearer Token Authorization Header. A full cURL example can be seen here:
+
+```bash
+curl --location --request POST 'http://localhost:9925' \
+--header 'Content-Type: application/json' \
+--header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InVzZXJuYW1lIiwiaWF0IjoxNjA0OTc4MjAwLCJleHAiOjE2MDc1NzAyMDAsInN1YiI6InJlZnJlc2gifQ.acaCsk-CJWIMLGDZdGnsthyZsJfQ8ihXLyE8mTji8PgGkpbwhs7e1O0uitMgP_pGjHq2tey1BHSwoeCL49b18WyMIB10hK-q2BXGKQkykltjTrQbg7VsdFi0h57mGfO0IqAwYd55_hzHZNnyJMh4b0iPQFDwU7iTD7x9doHhZAvzElpkWbc_NKVw5_Mw3znjntSzbuPN105zlp4Niurin-_5BnukwvoJWLEJ-ZlF6hE4wKhaMB1pWTJjMvJQJE8khTTvlUN8tGxmzoaDYoe1aCGNxmDEQnx8Y5gKzVd89sylhqi54d2nQrJ2-ElfEDsMoXpR01Ps6fNDFtLTuPTp7ixj8LvgL2nCjAg996Ga3PtdvXJAZPDYCqqvaBkZZcsiqOgqLV0vGo3VVlfrcgJXQImMYRr_Inu0FCe47A93IAWuQTs-KplM1KdGJsHSnNBV6oe6QEkROJT5qZME-8xhvBYvOXqp9Znwg39bmiBCMxk26Ce66_vw06MNgoa3D5AlXPWemfdVKPZDnj_aLVjZSs0gAfFElcVn7l9yjWJOaT2Muk26U8bJl-2BEq_DSclqKHODuYM5kkPKIdE4NFrsqsDYuGxcA25rlNETFyl0q-UXj1aoz_joy5Hdnr4mFELmjnoo4jYQuakufP9xeGPsj1skaodKl0mmoGcCD6v1F60' \
+--data-raw '{
+ "operation":"refresh_operation_token"
+}'
+```
+
+This will return a new `operation_token`. An example expected return object is:
+
+```json
+{
+ "operation_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6eyJfX2NyZWF0ZWR0aW1lX18iOjE2MDQ5NzgxODkxNTEsIl9fdXBkYXRlZHRpbWVfXyI6MTYwNDk3ODE4OTE1MSwiYWN0aXZlIjp0cnVlLCJyb2xlIjp7Il9fY3JlYXRlZHRpbWVfXyI6MTYwNDk0NDE1MTM0NywiX191cGRhdGVkdGltZV9fIjoxNjA0OTQ0MTUxMzQ3LCJpZCI6IjdiNDNlNzM1LTkzYzctNDQzYi05NGY3LWQwMzY3Njg5NDc4YSIsInBlcm1pc3Npb24iOnsic3VwZXJfdXNlciI6dHJ1ZSwic3lzdGVtIjp7InRhYmxlcyI6eyJoZGJfdGFibGUiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl9hdHRyaWJ1dGUiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl9zY2hlbWEiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl91c2VyIjp7InJlYWQiOnRydWUsImluc2VydCI6ZmFsc2UsInVwZGF0ZSI6ZmFsc2UsImRlbGV0ZSI6ZmFsc2UsImF0dHJpYnV0ZV9wZXJtaXNzaW9ucyI6W119LCJoZGJfcm9sZSI6eyJyZWFkIjp0cnVlLCJpbnNlcnQiOmZhbHNlLCJ1cGRhdGUiOmZhbHNlLCJkZWxldGUiOmZhbHNlLCJhdHRyaWJ1dGVfcGVybWlzc2lvbnMiOltdfSwiaGRiX2pvYiI6eyJyZWFkIjp0cnVlLCJpbnNlcnQiOmZhbHNlLCJ1cGRhdGUiOmZhbHNlLCJkZWxldGUiOmZhbHNlLCJhdHRyaWJ1dGVfcGVybWlzc2lvbnMiOltdfSwiaGRiX2xpY2Vuc2UiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl9pbmZvIjp7InJlYWQiOnRydWUsImluc2VydCI6ZmFsc2UsInVwZGF0ZSI6ZmFsc2UsImRlbGV0ZSI6ZmFsc2UsImF0dHJpYnV0ZV9wZXJtaXNzaW9ucyI6W119LCJoZGJfbm9kZXMiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl90ZW1wIjp7InJlYWQiOnRydWUsImluc2VydCI6ZmFsc2UsInVwZGF0ZSI6ZmFsc2UsImRlbGV0ZSI6ZmFsc2UsImF0dHJpYnV0ZV9wZXJtaXNzaW9ucyI6W119fX19LCJyb2xlIjoic3VwZXJfdXNlciJ9LCJ1c2VybmFtZSI6InVzZXJuYW1lIn0sImlhdCI6MTYwNDk3ODcxMywiZXhwIjoxNjA1MDY1MTEzLCJzdWIiOiJvcGVyYXRpb24ifQ.qB4FS7fzryCO5epQlFCQe4mQcUEhzXjfsXRFPgauXrGZwSeSr2o2a1tE1xjiI3qjK0r3f2bdi2xpFlDR1thdY-m0mOpHTICNOae4KdKzp7cyzRaOFurQnVYmkWjuV_Ww4PJgr6P3XDgXs5_B2d7ZVBR-BaAimYhVRIIShfpWk-4iN1XDk96TwloCkYx01BuN87o-VOvAnOG-K_EISA9RuEBpSkfUEuvHx8IU4VgfywdbhNMh6WXM0VP7ZzSpshgsS07MGjysGtZHNTVExEvFh14lyfjfqKjDoIJbo2msQwD2FvrTTb0iaQry1-Wwz9QJjVAUtid7tJuP8aBeNqvKyMIXRVnl5viFUr-Gs-Zl_WtyVvKlYWw0_rUn3ucmurK8tTy6iHyJ6XdUf4pYQebpEkIvi2rd__e_Z60V84MPvIYs6F_8CAy78aaYmUg5pihUEehIvGRj1RUZgdfaXElw90-m-M5hMOTI04LrzzVnBu7DcMYg4UC1W-WDrrj4zUq7y8_LczDA-yBC2-bkvWwLVtHLgV5yIEuIx2zAN74RQ4eCy1ffWDrVxYJBau4yiIyCc68dsatwHHH6bMK0uI9ib6Y9lsxCYjh-7MFcbP-4UBhgoDDXN9xoUToDLRqR9FTHqAHrGHp7BCdF5d6TQTVL5fmmg61MrLucOo-LZBXs1NY"
+}
+```
+
+The `refresh_token` also expires at a set interval, but a longer one. Once it expires it will no longer be accepted by HarperDB. This duration defaults to thirty days, and is configurable in [harperdb-config.yaml](../../deployments/configuration). To generate a new `operation_token` and a new `refresh_token`, the `create_authentication_tokens` operation is called.
+
+## Configuration
+
+Token timeouts are configurable in [harperdb-config.yaml](../../deployments/configuration) with the following parameters:
+
+* `operationsApi.authentication.operationTokenTimeout`: Defines the length of time until the operation\_token expires (default 1d).
+* `operationsApi.authentication.refreshTokenTimeout`: Defines the length of time until the refresh\_token expires (default 30d).
+
+A full list of valid values for both parameters can be found [here](https://github.com/vercel/ms).
diff --git a/site/versioned_docs/version-4.2/developers/security/users-and-roles.md b/site/versioned_docs/version-4.2/developers/security/users-and-roles.md
new file mode 100644
index 00000000..6c9fbfb5
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/security/users-and-roles.md
@@ -0,0 +1,267 @@
+---
+title: Users & Roles
+---
+
+# Users & Roles
+
+HarperDB utilizes a Role-Based Access Control (RBAC) framework to manage access to HarperDB instances. A user is assigned a role that determines the user’s permissions to access database resources and run core operations.
+
+## Roles in HarperDB
+
+Role permissions in HarperDB are broken into two categories – permissions around database manipulation and permissions around database definition.
+
+**Database Manipulation**: A role defines CRUD (create, read, update, delete) permissions against database resources (i.e. data) in a HarperDB instance.
+
+1. At the table level, access permissions must be explicitly defined when adding or altering a role – _i.e. HarperDB will assume CRUD access to be FALSE if not explicitly provided in the permissions JSON passed to the `add_role` and/or `alter_role` API operations._
+1. At the attribute-level, permissions for attributes in all tables included in the permissions set will be assigned based on either the specific attribute-level permissions defined in the table’s permission set or, if there are no attribute-level permissions defined, permissions will be based on the table’s CRUD set.
+
+**Database Definition**: Permissions related to managing schemas, tables, roles, users, and other system settings and operations are restricted to the built-in `super_user` role.
+
+**Built-In Roles**
+
+There are three built-in roles within HarperDB. See the full breakdown of operations restricted to super\_user roles only [here](./users-and-roles#role-based-operation-restrictions).
+
+* `super_user` - This role provides full access to all operations and methods within a HarperDB instance, this can be considered the admin role.
+ * This role provides full access to all Database Definition operations and the ability to run Database Manipulation operations across the entire database schema with no restrictions.
+* `cluster_user` - This role is an internal system role type that is managed internally to allow clustered instances to communicate with one another.
+ * This role is an internally managed role to facilitate communication between clustered instances.
+* `structure_user` - This role provides specific access for creation and deletion of data.
+  * When defining this role type, you can either assign a value of `true`, which will allow the role to create and drop schemas & tables, or assign a string array of schema names, which allows the role to only create and drop tables in the designated schemas.
+
+**User-Defined Roles**
+
+In addition to built-in roles, admins (i.e. users assigned to the super\_user role) can create customized roles for other users to interact with and manipulate the data within explicitly defined tables and attributes.
+
+* Unless the user-defined role is given `super_user` permissions, permissions must be defined explicitly within the request body JSON.
+* Describe operations will return metadata for all schemas, tables, and attributes that a user-defined role has CRUD permissions for.
+
+**Role Permissions**
+
+When creating a new, user-defined role in a HarperDB instance, you must provide a role name and the permissions to assign to that role. _Reminder, only super users can create and manage roles._
+
+* `role` name used to easily identify the role assigned to individual users.
+
+  _Roles can be altered/dropped based on the role name used in and returned from a successful `add_role`, `alter_role`, or `list_roles` operation._
+* `permissions` used to explicitly define CRUD access to existing table data.
+
+Example JSON for `add_role` request
+
+```json
+{
+ "operation":"add_role",
+ "role":"software_developer",
+ "permission":{
+ "super_user":false,
+ "schema_name":{
+ "tables": {
+ "table_name1": {
+ "read":true,
+ "insert":true,
+ "update":true,
+ "delete":false,
+ "attribute_permissions":[
+ {
+ "attribute_name":"attribute1",
+ "read":true,
+ "insert":true,
+ "update":true
+ }
+ ]
+ },
+ "table_name2": {
+ "read":true,
+ "insert":true,
+ "update":true,
+ "delete":false,
+ "attribute_permissions":[]
+ }
+ }
+ }
+ }
+}
+```
+
+**Setting Role Permissions**
+
+There are two parts to a permissions set:
+
+* `super_user` – boolean value indicating if role should be provided super\_user access.
+
+ _If `super_user` is set to true, there should be no additional schema-specific permissions values included since the role will have access to the entire database schema. If permissions are included in the body of the operation, they will be stored within HarperDB, but ignored, as super\_users have full access to the database._
+* `permissions`: Schema tables that a role should have specific CRUD access to should be included in the final, schema-specific `permissions` JSON.
+
+  _For user-defined roles (i.e. non-super\_user roles), blank permissions will result in the user being restricted from accessing any of the database schema._
+
+**Table Permissions JSON**
+
+Each table that a role should be given some level of CRUD permissions to must be included in the `tables` array for its schema in the roles permissions JSON passed to the API (_see example above_).
+
+```json
+{
+  "table_name": { // the name of the table to define CRUD perms for
+    "read": boolean, // access to read from this table
+    "insert": boolean, // access to insert data to table
+    "update": boolean, // access to update data in table
+    "delete": boolean, // access to delete row data in table
+    "attribute_permissions": [ // permissions for specific table attributes
+      {
+        "attribute_name": "attribute_name", // attribute to assign permissions to
+        "read": boolean, // access to read this attribute from table
+        "insert": boolean, // access to insert this attribute into the table
+        "update": boolean // access to update this attribute in the table
+      }
+    ]
+  }
+}
+```
+
+**Important Notes About Table Permissions**
+
+1. If a schema and/or any of its tables are not included in the permissions JSON, the role will not have any CRUD access to the schema and/or tables.
+1. If a table-level CRUD permission is set to false, any attribute-level permission of that same CRUD type set to true will return an error.
+
+**Important Notes About Attribute Permissions**
+
+1. If there are attribute-specific CRUD permissions that need to be enforced on a table, those need to be explicitly described in the `attribute_permissions` array.
+1. If a non-hash attribute is given some level of CRUD access, that same access will be assigned to the table’s `hash_attribute` (also referred to as the `primary_key`), even if it is not explicitly defined in the permissions JSON.
+
+ _See table\_name1’s permission set for an example of this – even though the table’s hash attribute is not specifically defined in the attribute\_permissions array, because the role has CRUD access to ‘attribute1’, the role will have the same access to the table’s hash attribute._
+1. If attribute-level permissions are set – _i.e. attribute\_permissions.length > 0_ – any table attribute not explicitly included will be assumed to have no CRUD access (with the exception of the `hash_attribute` described in #2).
+
+ _See table\_name1’s permission set for an example of this – in this scenario, the role will have the ability to create, insert and update ‘attribute1’ and the table’s hash attribute but no other attributes on that table._
+1. If an `attribute_permissions` array is empty, the role’s access to a table’s attributes will be based on the table-level CRUD permissions.
+
+ _See table\_name2’s permission set for an example of this._
+1. The `__createdtime__` and `__updatedtime__` attributes that HarperDB manages internally can have read perms set but, if set, all other attribute-level permissions will be ignored.
+1. Please note that DELETE permissions are not included as a part of an individual attribute-level permission set. That is because it is not possible to delete individual attributes from a row, rows must be deleted in full.
+ * If a role needs the ability to delete rows from a table, that permission should be set on the table-level.
+ * The practical approach to deleting an individual attribute of a row would be to set that attribute to null via an update statement.
+
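+For illustration of that last point, an update operation that nulls out a single attribute might look like the following sketch (the schema, table, and attribute names here are hypothetical):
+
+```json
+{
+    "operation": "update",
+    "schema": "dev",
+    "table": "dog",
+    "records": [
+        { "id": 1, "owner_name": null }
+    ]
+}
+```
+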
+## Role-Based Operation Restrictions
+
+The table below includes all API operations available in HarperDB and indicates whether or not the operation is restricted to super\_user roles.
+
+_Keep in mind that non-super\_user roles will also be restricted within the operations they do have access to by the schema-level CRUD permissions set for the roles._
+
+| Schemas and Tables | Restricted to Super\_Users |
+| ------------------ | :------------------------: |
+| describe\_all | |
+| describe\_schema | |
+| describe\_table | |
+| create\_schema | X |
+| drop\_schema | X |
+| create\_table | X |
+| drop\_table | X |
+| create\_attribute | |
+| drop\_attribute | X |
+
+| NoSQL Operations | Restricted to Super\_Users |
+| ---------------------- | :------------------------: |
+| insert | |
+| update | |
+| upsert | |
+| delete | |
+| search\_by\_hash | |
+| search\_by\_value | |
+| search\_by\_conditions | |
+
+| SQL Operations | Restricted to Super\_Users |
+| -------------- | :------------------------: |
+| select | |
+| insert | |
+| update | |
+| delete | |
+
+| Bulk Operations | Restricted to Super\_Users |
+| ---------------- | :------------------------: |
+| csv\_data\_load | |
+| csv\_file\_load | |
+| csv\_url\_load | |
+| import\_from\_s3 | |
+
+| Users and Roles | Restricted to Super\_Users |
+| --------------- | :------------------------: |
+| list\_roles | X |
+| add\_role | X |
+| alter\_role | X |
+| drop\_role | X |
+| list\_users | X |
+| user\_info | |
+| add\_user | X |
+| alter\_user | X |
+| drop\_user | X |
+
+| Clustering | Restricted to Super\_Users |
+| ----------------------- | :------------------------: |
+| cluster\_set\_routes | X |
+| cluster\_get\_routes | X |
+| cluster\_delete\_routes | X |
+| add\_node | X |
+| update\_node | X |
+| cluster\_status | X |
+| remove\_node | X |
+| configure\_cluster | X |
+
+| Components | Restricted to Super\_Users |
+| -------------------- | :------------------------: |
+| get\_components | X |
+| get\_component\_file | X |
+| set\_component\_file | X |
+| drop\_component | X |
+| add\_component | X |
+| package\_component | X |
+| deploy\_component | X |
+
+| Custom Functions | Restricted to Super\_Users |
+| ---------------------------------- | :------------------------: |
+| custom\_functions\_status | X |
+| get\_custom\_functions | X |
+| get\_custom\_function | X |
+| set\_custom\_function | X |
+| drop\_custom\_function | X |
+| add\_custom\_function\_project | X |
+| drop\_custom\_function\_project | X |
+| package\_custom\_function\_project | X |
+| deploy\_custom\_function\_project | X |
+
+| Registration | Restricted to Super\_Users |
+| ------------------ | :------------------------: |
+| registration\_info | |
+| get\_fingerprint | X |
+| set\_license | X |
+
+| Jobs | Restricted to Super\_Users |
+| ----------------------------- | :------------------------: |
+| get\_job | |
+| search\_jobs\_by\_start\_date | X |
+
+| Logs | Restricted to Super\_Users |
+| --------------------------------- | :------------------------: |
+| read\_log | X |
+| read\_transaction\_log | X |
+| delete\_transaction\_logs\_before | X |
+| read\_audit\_log | X |
+| delete\_audit\_logs\_before | X |
+
+| Utilities | Restricted to Super\_Users |
+| ----------------------- | :------------------------: |
+| delete\_records\_before | X |
+| export\_local | X |
+| export\_to\_s3 | X |
+| system\_information | X |
+| restart | X |
+| restart\_service | X |
+| get\_configuration | X |
+| configure\_cluster | X |
+
+| Token Authentication | Restricted to Super\_Users |
+| ------------------------------ | :------------------------: |
+| create\_authentication\_tokens | |
+| refresh\_operation\_token | |
+
+## Error: Must execute as User
+
+**You may have gotten an error like,** `Error: Must execute as <>`.
+
+This means that you installed HarperDB as the user named in the error. Because HarperDB stores files natively on the operating system, we only allow the HarperDB executable to be run by a single user. This prevents permissions issues on files.
+
+For example, if you installed as user\_a but later wanted to run as user\_b, user\_b may not have access to the hdb files HarperDB needs. This also keeps HarperDB more secure, as it allows you to lock files down to a specific user and prevents other users from accessing your files.
diff --git a/site/versioned_docs/version-4.2/developers/sql-guide/date-functions.md b/site/versioned_docs/version-4.2/developers/sql-guide/date-functions.md
new file mode 100644
index 00000000..f19d2126
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/sql-guide/date-functions.md
@@ -0,0 +1,222 @@
+---
+title: SQL Date Functions
+---
+
+# SQL Date Functions
+
+HarperDB utilizes [Coordinated Universal Time (UTC)](https://en.wikipedia.org/wiki/Coordinated_Universal_Time) in all internal SQL operations. This means that date values passed into any of the functions below will be assumed to be in UTC or in a format that can be translated to UTC.
+
+When parsing date values passed to SQL date functions in HDB, we first check for [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) formats, then for the [RFC 2822](https://tools.ietf.org/html/rfc2822#section-3.3) date-time format, and then fall back to `new Date(date_string)` if a known format is not found.
+
+### CURRENT_DATE()
+
+Returns the current date in UTC in `YYYY-MM-DD` String format.
+
+```
+"SELECT CURRENT_DATE() AS current_date_result" returns
+ {
+ "current_date_result": "2020-04-22"
+ }
+```
+
+### CURRENT_TIME()
+
+Returns the current time in UTC in `HH:mm:ss.SSS` String format.
+
+```
+"SELECT CURRENT_TIME() AS current_time_result" returns
+ {
+ "current_time_result": "15:18:14.639"
+ }
+```
+
+### CURRENT_TIMESTAMP
+
+Referencing this variable evaluates to the current Unix Timestamp in milliseconds.
+
+```
+"SELECT CURRENT_TIMESTAMP AS current_timestamp_result" returns
+ {
+ "current_timestamp_result": 1587568845765
+ }
+```
+### DATE([date_string])
+
+Formats and returns the date_string argument in UTC in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format.
+
+If a date_string is not provided, the function will return the current UTC date/time value in the return format defined above.
+
+```
+"SELECT DATE(1587568845765) AS date_result" returns
+ {
+ "date_result": "2020-04-22T15:20:45.765+0000"
+ }
+```
+
+```
+"SELECT DATE(CURRENT_TIMESTAMP) AS date_result2" returns
+ {
+ "date_result2": "2020-04-22T15:20:45.765+0000"
+ }
+```
+
+### DATE_ADD(date, value, interval)
+
+Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values are listed below; either string value (key or shorthand) can be passed as the interval argument.
+
+
+| Key | Shorthand |
+|--------------|-----------|
+| years | y |
+| quarters | Q |
+| months | M |
+| weeks | w |
+| days | d |
+| hours | h |
+| minutes | m |
+| seconds | s |
+| milliseconds | ms |
+
+
+```
+"SELECT DATE_ADD(1587568845765, 1, 'days') AS date_add_result" AND
+"SELECT DATE_ADD(1587568845765, 1, 'd') AS date_add_result" both return
+ {
+ "date_add_result": 1587655245765
+ }
+```
+
+```
+"SELECT DATE_ADD(CURRENT_TIMESTAMP, 2, 'years')
+AS date_add_result2" returns
+ {
+ "date_add_result2": 1650643129017
+ }
+```
+
+### DATE_DIFF(date_1, date_2[, interval])
+
+Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds.
+
+Accepted interval values:
+* years
+* months
+* weeks
+* days
+* hours
+* minutes
+* seconds
+
+```
+"SELECT DATE_DIFF(CURRENT_TIMESTAMP, 1650643129017, 'hours')
+AS date_diff_result" returns
+ {
+ "date_diff_result": -17519.753333333334
+ }
+```
+
+### DATE_FORMAT(date, format)
+
+Formats and returns a date value in the String format provided. Find more details on accepted format values in the [moment.js docs](https://momentjs.com/docs/#/displaying/format/).
+
+```
+"SELECT DATE_FORMAT(1524412627973, 'YYYY-MM-DD HH:mm:ss')
+AS date_format_result" returns
+ {
+ "date_format_result": "2018-04-22 15:57:07"
+ }
+```
+
+### DATE_SUB(date, value, interval)
+
+Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values are listed below; either string value (key or shorthand) can be passed as the interval argument.
+
+| Key | Shorthand |
+|--------------|-----------|
+| years | y |
+| quarters | Q |
+| months | M |
+| weeks | w |
+| days | d |
+| hours | h |
+| minutes | m |
+| seconds | s |
+| milliseconds | ms |
+
+
+```
+"SELECT DATE_SUB(1587568845765, 2, 'years') AS date_sub_result" returns
+ {
+ "date_sub_result": 1524410445765
+ }
+```
+
+### EXTRACT(date, date_part)
+
+Extracts and returns the requested date_part as a String value. The accepted date_part values below show the value returned for date = “2020-03-26T15:13:02.041+0000”.
+
+| date_part | Example return value* |
+|--------------|------------------------|
+| year | “2020” |
+| month | “3” |
+| day | “26” |
+| hour         | “15”                   |
+| minute | “13” |
+| second | “2” |
+| millisecond | “41” |
+
+```
+"SELECT EXTRACT(1587568845765, 'year') AS extract_result" returns
+ {
+ "extract_result": "2020"
+ }
+```
+
+### GETDATE()
+
+Returns the current Unix Timestamp in milliseconds.
+
+```
+"SELECT GETDATE() AS getdate_result" returns
+ {
+ "getdate_result": 1587568845765
+ }
+```
+
+### GET_SERVER_TIME()
+Returns the current date/time value based on the server’s timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format.
+
+```
+"SELECT GET_SERVER_TIME() AS get_server_time_result" returns
+ {
+ "get_server_time_result": "2020-04-22T15:20:45.765+0000"
+ }
+```
+
+### OFFSET_UTC(date, offset)
+Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours.
+
+```
+"SELECT OFFSET_UTC(1587568845765, 240) AS offset_utc_result" returns
+ {
+ "offset_utc_result": "2020-04-22T19:20:45.765+0400"
+ }
+```
+
+```
+"SELECT OFFSET_UTC(1587568845765, 10) AS offset_utc_result2" returns
+ {
+ "offset_utc_result2": "2020-04-23T01:20:45.765+1000"
+ }
+```
+
+### NOW()
+Returns the current Unix Timestamp in milliseconds.
+
+```
+"SELECT NOW() AS now_result" returns
+ {
+ "now_result": 1587568845765
+ }
+```
+
diff --git a/site/versioned_docs/version-4.2/developers/sql-guide/features-matrix.md b/site/versioned_docs/version-4.2/developers/sql-guide/features-matrix.md
new file mode 100644
index 00000000..f0ee3072
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/sql-guide/features-matrix.md
@@ -0,0 +1,83 @@
+---
+title: SQL Features Matrix
+---
+
+# SQL Features Matrix
+
+HarperDB provides access to most SQL functions, and we’re always expanding that list. Check below to see if we cover what you need. If not, feel free to [add a Feature Request](https://feedback.harperdb.io/).
+
+
+| INSERT | |
+|------------------------------------|-----|
+| Values - multiple values supported | ✔ |
+| Sub-SELECT | ✗ |
+
+| UPDATE | |
+|-----------------|-----|
+| SET | ✔ |
+| Sub-SELECT | ✗ |
+| Conditions | ✔ |
+| Date Functions* | ✔ |
+| Math Functions | ✔ |
+
+| DELETE | |
+|------------|-----|
+| FROM | ✔ |
+| Sub-SELECT | ✗ |
+| Conditions | ✔ |
+
+| SELECT | |
+|-----------------------|-----|
+| Column SELECT | ✔ |
+| Aliases | ✔ |
+| Aggregator Functions | ✔ |
+| Date Functions* | ✔ |
+| Math Functions | ✔ |
+| Constant Values | ✔ |
+| Distinct | ✔ |
+| Sub-SELECT | ✗ |
+
+| FROM | |
+|-------------------|-----|
+| Multi-table JOIN | ✔ |
+| INNER JOIN | ✔ |
+| LEFT OUTER JOIN | ✔ |
+| LEFT INNER JOIN | ✔ |
+| RIGHT OUTER JOIN | ✔ |
+| RIGHT INNER JOIN | ✔ |
+| FULL JOIN | ✔ |
+| UNION | ✗ |
+| Sub-SELECT | ✗ |
+| TOP | ✔ |
+
+| WHERE | |
+|----------------------------|-----|
+| Multi-Conditions | ✔ |
+| Wildcards | ✔ |
+| IN | ✔ |
+| LIKE | ✔ |
+| Bit-wise Operators AND, OR | ✔ |
+| Bit-wise Operators NOT | ✔ |
+| NULL | ✔ |
+| BETWEEN | ✔ |
+| EXISTS,ANY,ALL | ✔ |
+| Compare columns | ✔ |
+| Compare constants | ✔ |
+| Date Functions* | ✔ |
+| Math Functions | ✔ |
+| Sub-SELECT | ✗ |
+
+| GROUP BY | |
+|-----------------------|-----|
+| Multi-Column GROUP BY | ✔ |
+
+| HAVING | |
+|--------------------------------|-----|
+| Aggregate function conditions | ✔ |
+
+| ORDER BY | |
+|-----------------------|-----|
+| Multi-Column ORDER BY | ✔ |
+| Aliases | ✔ |
+| Date Functions* | ✔ |
+| Math Functions | ✔ |
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/developers/sql-guide/functions.md b/site/versioned_docs/version-4.2/developers/sql-guide/functions.md
new file mode 100644
index 00000000..ccd6f247
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/sql-guide/functions.md
@@ -0,0 +1,153 @@
+---
+title: HarperDB SQL Functions
+---
+
+# HarperDB SQL Functions
+
+This SQL keywords reference contains the SQL functions available in HarperDB.
+
+## Functions
+### Aggregate
+
+| Keyword | Syntax | Description |
+|-----------------|-------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|
+| AVG | AVG(_expression_) | Returns the average of a given numeric expression. |
+| COUNT           | SELECT COUNT(_column_name_) FROM _schema.table_ WHERE _condition_  | Returns the number of records that match the given criteria. Nulls are not counted. |
+| GROUP_CONCAT    | GROUP_CONCAT(_expression_) | Returns a string with the concatenated, comma-separated, non-null values from a group. Returns null when there are no non-null values. |
+| MAX | SELECT MAX(_column_name_) FROM _schema.table_ WHERE _condition_ | Returns largest value in a specified column. |
+| MIN | SELECT MIN(_column_name_) FROM _schema.table_ WHERE _condition_ | Returns smallest value in a specified column. |
+| SUM | SUM(_column_name_) | Returns the sum of the numeric values provided. |
+| ARRAY* | ARRAY(_expression_) | Returns a list of data as a field. |
+| DISTINCT_ARRAY* | DISTINCT_ARRAY(_expression_) | When placed around a standard ARRAY() function, returns a distinct (deduplicated) results set. |
+
+*For more information on ARRAY() and DISTINCT_ARRAY() see [this blog](https://www.harperdb.io/post/sql-queries-to-complex-objects).
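+
+As a brief, hedged illustration (assuming a `dev.dog` table like the one used in the SQL Guide examples), several aggregates can be combined in one grouped query:
+
+```
+SELECT breed_id,
+       COUNT(id) AS dog_count,
+       AVG(age) AS avg_age,
+       GROUP_CONCAT(dog_name) AS names
+FROM dev.dog
+GROUP BY breed_id
+```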
+
+### Conversion
+
+| Keyword | Syntax | Description |
+|---------|--------------------------------------------------|------------------------------------------------------------------------|
+| CAST | CAST(_expression AS datatype(length)_) | Converts a value to a specified datatype. |
+| CONVERT | CONVERT(_data_type(length), expression, style_) | Converts a value from one datatype to a different, specified datatype. |
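+
+As a hedged sketch (datatype support can vary by parser version; `dev.dog` is the hypothetical table used in the SQL Guide), CAST might be used like this:
+
+```
+SELECT dog_name, CAST(age AS CHAR(3)) AS age_as_text
+FROM dev.dog
+```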
+
+
+### Date & Time
+
+| Keyword           | Syntax                                   | Description |
+|-------------------|------------------------------------------|-------------|
+| CURRENT_DATE      | CURRENT_DATE()                           | Returns the current date in UTC in “YYYY-MM-DD” String format. |
+| CURRENT_TIME      | CURRENT_TIME()                           | Returns the current time in UTC in “HH:mm:ss.SSS” String format. |
+| CURRENT_TIMESTAMP | CURRENT_TIMESTAMP                        | Referencing this variable evaluates to the current Unix Timestamp in milliseconds. For more information, see [SQL Date Functions](./date-functions). |
+| DATE              | DATE([_date_string_])                    | Formats and returns the date_string argument in UTC in “YYYY-MM-DDTHH:mm:ss.SSSZZ” String format. If a date_string is not provided, the function returns the current UTC date/time value in the return format defined above. For more information, see [SQL Date Functions](./date-functions). |
+| DATE_ADD          | DATE_ADD(_date, value, interval_)        | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Either string value (key or shorthand) can be passed as the interval argument. For more information, see [SQL Date Functions](./date-functions). |
+| DATE_DIFF         | DATE_DIFF(_date_1, date_2[, interval]_)  | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function returns the difference value in milliseconds. For more information, see [SQL Date Functions](./date-functions). |
+| DATE_FORMAT       | DATE_FORMAT(_date, format_)              | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, see [SQL Date Functions](./date-functions). |
+| DATE_SUB          | DATE_SUB(_date, value, interval_)        | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Either string value (key or shorthand) can be passed as the interval argument. For more information, see [SQL Date Functions](./date-functions). |
+| DAY               | DAY(_date_)                              | Returns the day of the month for the given date. |
+| DAYOFWEEK         | DAYOFWEEK(_date_)                        | Returns the numeric value of the weekday of the date given (“YYYY-MM-DD”). NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. |
+| EXTRACT           | EXTRACT(_date, date_part_)               | Extracts and returns the requested date_part as a String value. For more information, see [SQL Date Functions](./date-functions). |
+| GETDATE           | GETDATE()                                | Returns the current Unix Timestamp in milliseconds. |
+| GET_SERVER_TIME   | GET_SERVER_TIME()                        | Returns the current date/time value based on the server’s timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. |
+| OFFSET_UTC        | OFFSET_UTC(_date, offset_)               | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. |
+| NOW               | NOW()                                    | Returns the current Unix Timestamp in milliseconds. |
+| HOUR              | HOUR(_datetime_)                         | Returns the hour part of a given date in range of 0 to 838. |
+| MINUTE            | MINUTE(_datetime_)                       | Returns the minute part of a time/datetime in range of 0 to 59. |
+| MONTH             | MONTH(_date_)                            | Returns the month part for a specified date in range of 1 to 12. |
+| SECOND            | SECOND(_datetime_)                       | Returns the seconds part of a time/datetime in range of 0 to 59. |
+| YEAR              | YEAR(_date_)                             | Returns the year part for a specified date. |
+
+### Logical
+
+| Keyword | Syntax | Description |
+|---------|--------------------------------------------------|--------------------------------------------------------------------------------------------|
+| IF | IF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. |
+| IIF | IIF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. |
+| IFNULL | IFNULL(_expression, alt_value_) | Returns a specified value if the expression is null. |
+| NULLIF | NULLIF(_expression_1, expression_2_) | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. |
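+
+For example (a minimal sketch against the hypothetical `dev.dog` table from the SQL Guide), IFNULL can supply a default and IF can derive a label:
+
+```
+SELECT dog_name,
+       IFNULL(owner_name, 'unknown') AS owner,
+       IF(age >= 7, 'senior', 'adult') AS life_stage
+FROM dev.dog
+```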
+
+### Mathematical
+
+| Keyword | Syntax | Description |
+|---------|---------------------------------|-----------------------------------------------------------------------------------------------------|
+| ABS | ABS(_expression_) | Returns the absolute value of a given numeric expression. |
+| CEIL | CEIL(_number_) | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. |
+| EXP | EXP(_number_) | Returns e to the power of a specified number. |
+| FLOOR | FLOOR(_number_) | Returns the largest integer value that is smaller than, or equal to, a given number. |
+| RANDOM | RANDOM(_seed_) | Returns a pseudo random number. |
+| ROUND | ROUND(_number,decimal_places_) | Rounds a given number to a specified number of decimal places. |
+| SQRT | SQRT(_expression_) | Returns the square root of an expression. |
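+
+For example (again a sketch using the hypothetical `dev.dog` table):
+
+```
+SELECT dog_name,
+       ABS(age - 5) AS years_from_five,
+       ROUND(age / 2.0, 1) AS half_age,
+       CEIL(age / 2.0) AS half_age_up
+FROM dev.dog
+```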
+
+
+### String
+
+| Keyword | Syntax | Description |
+|-------------|------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CONCAT | CONCAT(_string_1, string_2, ...., string_n_) | Concatenates, or joins, two or more strings together, resulting in a single string. |
+| CONCAT_WS | CONCAT_WS(_separator, string_1, string_2, ...., string_n_) | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. |
+| INSTR | INSTR(_string_1, string_2_) | Returns the first position, as an integer, of string_2 within string_1. |
+| LEN | LEN(_string_) | Returns the length of a string. |
+| LOWER | LOWER(_string_) | Converts a string to lower-case. |
+| REGEXP | SELECT _column_name_ FROM _schema.table_ WHERE _column_name_ REGEXP _pattern_ | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. |
+| REGEXP_LIKE | SELECT _column_name_ FROM _schema.table_ WHERE REGEXP_LIKE(_column_name, pattern_) | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. |
+| REPLACE     | REPLACE(_string, old_string, new_string_) | Replaces all instances of old_string within string with new_string. |
+| SUBSTRING | SUBSTRING(_string, string_position, length_of_substring_) | Extracts a specified amount of characters from a string. |
+| TRIM | TRIM([_character(s) FROM_] _string_) | Removes leading and trailing spaces, or specified character(s), from a string. |
+| UPPER | UPPER(_string_) | Converts a string to upper-case. |
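+
+For example (a sketch using the hypothetical `dev.dog` table):
+
+```
+SELECT UPPER(dog_name) AS name_upper,
+       CONCAT_WS(' - ', dog_name, owner_name) AS label,
+       SUBSTRING(dog_name, 1, 3) AS short_name
+FROM dev.dog
+```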
+
+## Operators
+### Logical Operators
+
+| Keyword | Syntax | Description |
+|----------|--------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------|
+| BETWEEN | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ BETWEEN _value_1_ AND _value_2_ | (inclusive) Returns values(numbers, text, or dates) within a given range. |
+| IN | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ IN(_value(s)_) | Used to specify multiple values in a WHERE clause. |
+| LIKE | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_n_ LIKE _pattern_ | Searches for a specified pattern within a WHERE clause. |
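+
+For example (a sketch against the hypothetical `dev.dog` table), these operators can be combined in a single WHERE clause:
+
+```
+SELECT dog_name, age
+FROM dev.dog
+WHERE age BETWEEN 2 AND 5
+  AND owner_name IN ('Kyle', 'Zach')
+  AND dog_name LIKE 'P%'
+```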
+
+## Queries
+### General
+
+| Keyword | Syntax | Description |
+|-----------|--------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|
+| DISTINCT | SELECT DISTINCT _column_name(s)_ FROM _schema.table_ | Returns only unique values, eliminating duplicate records. |
+| FROM | FROM _schema.table_ | Used to list the schema(s), table(s), and any joins required for a SQL statement. |
+| GROUP BY | SELECT _column_name(s)_ FROM _schema.table_ WHERE _condition_ GROUP BY _column_name(s)_ ORDER BY _column_name(s)_ | Groups rows that have the same values into summary rows. |
+| HAVING | SELECT _column_name(s)_ FROM _schema.table_ WHERE _condition_ GROUP BY _column_name(s)_ HAVING _condition_ ORDER BY _column_name(s)_ | Filters data based on a group or aggregate function. |
+| SELECT | SELECT _column_name(s)_ FROM _schema.table_ | Selects data from table. |
+| WHERE | SELECT _column_name(s)_ FROM _schema.table_ WHERE _condition_ | Extracts records based on a defined condition. |
+
+### Joins
+
+| Keyword | Syntax | Description |
+|---------------------|----------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CROSS JOIN          | SELECT _column_name(s)_ FROM _schema.table_1_ CROSS JOIN _schema.table_2_ | Returns a paired combination of each row from _table_1_ with each row from _table_2_. _Note: CROSS JOIN can return very large result sets and is generally considered bad practice._ |
+| FULL OUTER | SELECT _column_name(s)_ FROM _schema.table_1_ FULL OUTER JOIN _schema.table_2_ ON _table_1.column_name_ _= table_2.column_name_ WHERE _condition_ | Returns all records when there is a match in either _table_1_ (left table) or _table_2_ (right table). |
+| [INNER] JOIN | SELECT _column_name(s)_ FROM _schema.table_1_ INNER JOIN _schema.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return only matching records from _table_1_ (left table) and _table_2_ (right table). The INNER keyword is optional and does not affect the result. |
+| LEFT [OUTER] JOIN | SELECT _column_name(s)_ FROM _schema.table_1_ LEFT OUTER JOIN _schema.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return all records from _table_1_ (left table) and matching data from _table_2_ (right table). The OUTER keyword is optional and does not affect the result. |
+| RIGHT [OUTER] JOIN | SELECT _column_name(s)_ FROM _schema.table_1_ RIGHT OUTER JOIN _schema.table_2_ ON _table_1.column_name = table_2.column_name_ | Return all records from _table_2_ (right table) and matching data from _table_1_ (left table). The OUTER keyword is optional and does not affect the result. |
+
+### Predicates
+
+| Keyword | Syntax | Description |
+|--------------|------------------------------------------------------------------------------|----------------------------|
+| IS NOT NULL | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ IS NOT NULL | Tests for non-null values. |
+| IS NULL | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ IS NULL | Tests for null values. |
+
+### Statements
+
+| Keyword | Syntax | Description |
+|---------|---------------------------------------------------------------------------------------------|-------------------------------------|
+| DELETE | DELETE FROM _schema.table_ WHERE condition | Deletes existing data from a table. |
+| INSERT | INSERT INTO _schema.table(column_name(s))_ VALUES(_value(s)_) | Inserts new records into a table. |
+| UPDATE | UPDATE _schema.table_ SET _column_1 = value_1, column_2 = value_2, ....,_ WHERE _condition_ | Alters existing records in a table. |
diff --git a/site/versioned_docs/version-4.2/developers/sql-guide/index.md b/site/versioned_docs/version-4.2/developers/sql-guide/index.md
new file mode 100644
index 00000000..ae274bd3
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/sql-guide/index.md
@@ -0,0 +1,88 @@
+---
+title: SQL Guide
+---
+
+# SQL Guide
+
+:::warning
+HarperDB encourages developers to utilize other querying tools over SQL for performance purposes. HarperDB SQL is intended for data investigation purposes and use cases where performance is not a priority. SQL optimizations are on our roadmap for the future.
+:::
+
+## HarperDB SQL Guide
+
+The purpose of this guide is to describe the available functionality of HarperDB as it relates to supported SQL functionality. The SQL parser is still actively being developed; many SQL features may not be optimized or may not utilize indexes. This document will be updated as more features and functionality become available. Generally, the REST interface provides a more stable, secure, and performant interface for data interaction, but the SQL functionality can be useful for administrative ad-hoc querying and for utilizing existing SQL statements. **A high-level view of supported features can be found** [**here**](./features-matrix)**.**
+
+HarperDB adheres to the concept of database & tables. This allows developers to isolate table structures from each other all within one database.
+
+## Select
+
+HarperDB has robust SELECT support, from simple queries all the way to complex joins with multi-conditions, aggregates, grouping & ordering.
+
+All results are returned as JSON object arrays.
+
+Query for all records and attributes in the dev.dog table:
+
+```
+SELECT * FROM dev.dog
+```
+
+Query specific columns from all rows in the dev.dog table:
+
+```
+SELECT id, dog_name, age FROM dev.dog
+```
+
+Query for all records and attributes in the dev.dog table ORDERED BY age in ASC order:
+
+```
+SELECT * FROM dev.dog ORDER BY age
+```
+
+_The ORDER BY keyword sorts in ascending order by default. To sort in descending order, use the DESC keyword._
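+
+For example, the same query sorted oldest to youngest:
+
+```
+SELECT * FROM dev.dog ORDER BY age DESC
+```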
+
+## Insert
+
+HarperDB supports inserting 1 to n records into a table. The primary key must be unique (not used by any other record). If no primary key is provided, it will be assigned an auto-generated UUID. HarperDB does not support selecting from one table to insert into another at this time.
+
+```
+INSERT INTO dev.dog (id, dog_name, age, breed_id)
+ VALUES(1, 'Penny', 5, 347), (2, 'Kato', 4, 347)
+```
+
+## Update
+
+HarperDB supports updating existing table row(s) via UPDATE statements. Multiple conditions can be applied to filter the row(s) to update. At this time selecting from one table to update another is not supported.
+
+```
+UPDATE dev.dog
+ SET owner_name = 'Kyle'
+ WHERE id IN (1, 2)
+```
+
+## Delete
+
+HarperDB supports deleting records from a table with condition support.
+
+```
+DELETE FROM dev.dog
+ WHERE age < 4
+```
+
+## Joins
+
+HarperDB allows developers to join any number of tables and currently supports the following join types:
+
+* INNER JOIN
+* LEFT INNER JOIN
+* LEFT OUTER JOIN
+
+Here’s a basic example joining two tables from our Get Started example, joining a dogs table with a breeds table:
+
+```
+SELECT d.id, d.dog_name, d.owner_name, b.name, b.section
+ FROM dev.dog AS d
+ INNER JOIN dev.breed AS b ON d.breed_id = b.id
+ WHERE d.owner_name IN ('Kyle', 'Zach', 'Stephen')
+ AND b.section = 'Mutt'
+ ORDER BY d.dog_name
+```
diff --git a/site/versioned_docs/version-4.2/developers/sql-guide/json-search.md b/site/versioned_docs/version-4.2/developers/sql-guide/json-search.md
new file mode 100644
index 00000000..7d160413
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/sql-guide/json-search.md
@@ -0,0 +1,173 @@
+---
+title: SQL JSON Search
+---
+
+# SQL JSON Search
+
+HarperDB automatically indexes all top level attributes in a row / object written to a table. However, any attributes which hold JSON data do not have their nested attributes indexed. In order to make searching and/or transforming these JSON documents easy, HarperDB offers a special SQL function called SEARCH\_JSON. The SEARCH\_JSON function works in SELECT & WHERE clauses, allowing queries to perform powerful filtering on any element of your JSON by implementing the [JSONata library](http://docs.jsonata.org/overview.html) into our SQL engine.
+
+## Syntax
+
+SEARCH\_JSON(_expression, attribute_)
+
+Executes the supplied string _expression_ against data of the defined top level _attribute_ for each row. The expression both filters and defines output from the JSON document.
+
+### Example 1
+
+#### Search a string array
+
+Here are two records in the database:
+
+```json
+[
+ {
+ "id": 1,
+ "name": ["Harper", "Penny"]
+ },
+ {
+ "id": 2,
+ "name": ["Penny"]
+ }
+]
+```
+
+Here is a simple query that gets any record with "Harper" found in the name.
+
+```
+SELECT *
+FROM dev.dog
+WHERE search_json('"Harper" in *', name)
+```
+
+### Example 2
+
+The purpose of this query is to give us every movie where at least two of our favorite actors from Marvel films have acted together. The results will return the movie title, the overview, release date and an object array of the actor’s name and their character name in the movie.
+
+Both function calls evaluate the credits.cast attribute; this attribute is an object array of every cast member in a movie.
+
+```
+SELECT m.title,
+ m.overview,
+ m.release_date,
+ SEARCH_JSON($[name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L. Jackson", "Gwyneth Paltrow", "Don Cheadle"]].{"actor": name, "character": character}, c.`cast`) AS characters
+FROM movies.credits c
+ INNER JOIN movies.movie m
+ ON c.movie_id = m.id
+WHERE SEARCH_JSON($count($[name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L. Jackson", "Gwyneth Paltrow", "Don Cheadle"]]), c.`cast`) >= 2
+```
+
+A sample of this data from the movie The Avengers looks like:
+
+```json
+[
+ {
+ "cast_id": 46,
+ "character": "Tony Stark / Iron Man",
+ "credit_id": "52fe4495c3a368484e02b251",
+ "gender": "male",
+ "id": 3223,
+ "name": "Robert Downey Jr.",
+ "order": 0
+ },
+ {
+ "cast_id": 2,
+ "character": "Steve Rogers / Captain America",
+ "credit_id": "52fe4495c3a368484e02b19b",
+ "gender": "male",
+ "id": 16828,
+ "name": "Chris Evans",
+ "order": 1
+ },
+ {
+ "cast_id": 307,
+ "character": "Bruce Banner / The Hulk",
+ "credit_id": "5e85e8083344c60015411cfa",
+ "gender": "male",
+ "id": 103,
+ "name": "Mark Ruffalo",
+ "order": 2
+ }
+]
+```
+
+Let’s break down the SEARCH\_JSON function call in the SELECT:
+
+```
+SEARCH_JSON(
+ $[name in [
+ "Robert Downey Jr.",
+ "Chris Evans",
+ "Scarlett Johansson",
+ "Mark Ruffalo",
+ "Chris Hemsworth",
+ "Jeremy Renner",
+ "Clark Gregg",
+ "Samuel L. Jackson",
+ "Gwyneth Paltrow",
+ "Don Cheadle"
+ ]].{
+ "actor": name,
+ "character": character
+ },
+ c.`cast`
+)
+```
+
+The first argument passed to SEARCH\_JSON is the expression to execute against the second argument, which is the cast attribute on the credits table. This expression executes for every row. The expression starts with “$\[…]”, which tells it to iterate all elements of the cast array.
+
+Then the expression tells the function to only return entries where the name attribute matches any of the actors defined in the array:
+
+```
+name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L. Jackson", "Gwyneth Paltrow", "Don Cheadle"]
+```
+
+So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{“actor”: name, “character”: character}`. This tells the function to create a specific object for each matching entry.
+
+**Sample Result**
+
+```json
+[
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner / The Hulk"
+ }
+]
+```
+
+Just having the SEARCH\_JSON function in our SELECT is powerful, but given our criteria it would still return every other movie that doesn’t have our matching actors. In order to filter out the movies we do not want, we also use SEARCH\_JSON in the WHERE clause.
+
+This function call in the WHERE clause is similar, but we don’t need to perform the same transformation as occurred in the SELECT:
+
+```
+SEARCH_JSON(
+ $count(
+ $[name in [
+ "Robert Downey Jr.",
+ "Chris Evans",
+ "Scarlett Johansson",
+ "Mark Ruffalo",
+ "Chris Hemsworth",
+ "Jeremy Renner",
+ "Clark Gregg",
+ "Samuel L. Jackson",
+ "Gwyneth Paltrow",
+ "Don Cheadle"
+ ]]
+ ),
+ c.`cast`
+) >= 2
+```
+
+As seen above, we execute the same name filter against the cast array; the primary difference is that we wrap the filtered results in $count(…). This returns a count of the matching results, which we then compare against our SQL condition of >= 2.
+
+To see further SEARCH\_JSON examples in action, view our Postman Collection, which provides a [sample schema & data with query examples](../operations-api/advanced-json-sql-examples).
+
+To learn more about how to build expressions, check out the JSONata documentation: [http://docs.jsonata.org/overview](http://docs.jsonata.org/overview)
diff --git a/site/versioned_docs/version-4.2/developers/sql-guide/reserved-word.md b/site/versioned_docs/version-4.2/developers/sql-guide/reserved-word.md
new file mode 100644
index 00000000..bcefa00a
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/sql-guide/reserved-word.md
@@ -0,0 +1,203 @@
+---
+title: HarperDB SQL Reserved Words
+---
+
+# HarperDB SQL Reserved Words
+
+This is a list of reserved words in the SQL Parser. Use of these words or symbols may result in unexpected behavior or inaccessible tables/attributes. If any of these words must be used, any SQL call referencing a schema, table, or attribute must have backticks (`…`) or brackets ([…]) around the variable.
+
+For example, for a table called ASSERT in the dev schema, a SQL select on that table would look like:
+
+```
+SELECT * from dev.`ASSERT`
+```
+
+Alternatively:
+
+```
+SELECT * from dev.[ASSERT]
+```
+
+### RESERVED WORD LIST
+
+* ABSOLUTE
+* ACTION
+* ADD
+* AGGR
+* ALL
+* ALTER
+* AND
+* ANTI
+* ANY
+* APPLY
+* ARRAY
+* AS
+* ASSERT
+* ASC
+* ATTACH
+* AUTOINCREMENT
+* AUTO_INCREMENT
+* AVG
+* BEGIN
+* BETWEEN
+* BREAK
+* BY
+* CALL
+* CASE
+* CAST
+* CHECK
+* CLASS
+* CLOSE
+* COLLATE
+* COLUMN
+* COLUMNS
+* COMMIT
+* CONSTRAINT
+* CONTENT
+* CONTINUE
+* CONVERT
+* CORRESPONDING
+* COUNT
+* CREATE
+* CROSS
+* CUBE
+* CURRENT_TIMESTAMP
+* CURSOR
+* DATABASE
+* DECLARE
+* DEFAULT
+* DELETE
+* DELETED
+* DESC
+* DETACH
+* DISTINCT
+* DOUBLEPRECISION
+* DROP
+* ECHO
+* EDGE
+* END
+* ENUM
+* ELSE
+* EXCEPT
+* EXISTS
+* EXPLAIN
+* FALSE
+* FETCH
+* FIRST
+* FOREIGN
+* FROM
+* GO
+* GRAPH
+* GROUP
+* GROUPING
+* HAVING
+* HDB_HASH
+* HELP
+* IF
+* IDENTITY
+* IS
+* IN
+* INDEX
+* INNER
+* INSERT
+* INSERTED
+* INTERSECT
+* INTO
+* JOIN
+* KEY
+* LAST
+* LET
+* LEFT
+* LIKE
+* LIMIT
+* LOOP
+* MATCHED
+* MATRIX
+* MAX
+* MERGE
+* MIN
+* MINUS
+* MODIFY
+* NATURAL
+* NEXT
+* NEW
+* NOCASE
+* NO
+* NOT
+* NULL
+* OFF
+* ON
+* ONLY
+* OFFSET
+* OPEN
+* OPTION
+* OR
+* ORDER
+* OUTER
+* OVER
+* PATH
+* PARTITION
+* PERCENT
+* PLAN
+* PRIMARY
+* PRINT
+* PRIOR
+* QUERY
+* READ
+* RECORDSET
+* REDUCE
+* REFERENCES
+* RELATIVE
+* REPLACE
+* REMOVE
+* RENAME
+* REQUIRE
+* RESTORE
+* RETURN
+* RETURNS
+* RIGHT
+* ROLLBACK
+* ROLLUP
+* ROW
+* SCHEMA
+* SCHEMAS
+* SEARCH
+* SELECT
+* SEMI
+* SET
+* SETS
+* SHOW
+* SOME
+* SOURCE
+* STRATEGY
+* STORE
+* SYSTEM
+* SUM
+* TABLE
+* TABLES
+* TARGET
+* TEMP
+* TEMPORARY
+* TEXTSTRING
+* THEN
+* TIMEOUT
+* TO
+* TOP
+* TRAN
+* TRANSACTION
+* TRIGGER
+* TRUE
+* TRUNCATE
+* UNION
+* UNIQUE
+* UPDATE
+* USE
+* USING
+* VALUE
+* VERTEX
+* VIEW
+* WHEN
+* WHERE
+* WHILE
+* WITH
+* WORK
diff --git a/site/versioned_docs/version-4.2/developers/sql-guide/sql-geospatial-functions.md b/site/versioned_docs/version-4.2/developers/sql-guide/sql-geospatial-functions.md
new file mode 100644
index 00000000..e557b5be
--- /dev/null
+++ b/site/versioned_docs/version-4.2/developers/sql-guide/sql-geospatial-functions.md
@@ -0,0 +1,380 @@
+---
+title: SQL Geospatial Functions
+---
+
+# SQL Geospatial Functions
+
+HarperDB geospatial features require data to be stored in a single column using the [GeoJSON standard](http://geojson.org/), a standard commonly used in geospatial technologies. Geospatial functions are available to be used in SQL statements.
+
+
+
+If you are new to GeoJSON you should check out the full specification here: http://geojson.org/. There are a few important things to point out before getting started.
+
+
+
+1) All GeoJSON coordinates are stored in `[longitude, latitude]` format.
+2) Coordinates or GeoJSON geometries must be passed as a string when written directly in a SQL statement.
+3) Note: if you are using Postman for your testing, due to limitations in the Postman client you will need to escape quotes in your strings and pass your SQL on a single line.
+
+
+In the examples that follow, schema and table names may change, but all GeoJSON data will be stored in a column named geo_data.
+
+# geoArea
+
+The geoArea() function returns the area of one or more features in square meters.
+
+## Syntax
+geoArea(_geoJSON_)
+
+## Parameters
+| Parameter | Description |
+|-----------|---------------------------------|
+| geoJSON | Required. One or more features. |
+
+### Example 1
+Calculate the area, in square meters, of a manually passed GeoJSON polygon.
+
+```
+SELECT geoArea('{
+ "type":"Feature",
+ "geometry":{
+ "type":"Polygon",
+ "coordinates":[[
+ [0,0],
+ [0.123456,0],
+ [0.123456,0.123456],
+ [0,0.123456]
+ ]]
+ }
+}')
+```
+
+### Example 2
+Find all records that have an area less than 1 square mile (or 2589988 square meters).
+
+```
+SELECT * FROM dev.locations
+WHERE geoArea(geo_data) < 2589988
+```
+
+# geoLength
+Takes a GeoJSON and measures its length in the specified units (default is kilometers).
+
+## Syntax
+geoLength(_geoJSON_[_, units_])
+
+## Parameters
+| Parameter | Description |
+|------------|-----------------------------------------------------------------------------------------------------------------------|
+| geoJSON | Required. GeoJSON to measure. |
+| units | Optional. Specified as a string. Options are ‘degrees’, ‘radians’, ‘miles’, or ‘kilometers’. Default is ‘kilometers’. |
+
+### Example 1
+Calculate the length, in kilometers, of a manually passed GeoJSON linestring.
+
+```
+SELECT geoLength('{
+ "type": "Feature",
+ "geometry": {
+ "type": "LineString",
+ "coordinates": [
+ [-104.97963309288025,39.76163265441438],
+ [-104.9823260307312,39.76365323407955],
+ [-104.99193906784058,39.75616442110704]
+ ]
+ }
+}')
+```
+
+### Example 2
+Find all data plus the calculated length in miles of the GeoJSON, restrict the response to only lengths less than 5 miles, and return the data in order of lengths smallest to largest.
+
+```
+SELECT *, geoLength(geo_data, 'miles') as length
+FROM dev.locations
+WHERE geoLength(geo_data, 'miles') < 5
+ORDER BY length ASC
+```
+
+# geoDifference
+Returns a new polygon with the difference of the second polygon clipped from the first polygon.
+
+## Syntax
+geoDifference(_polygon1, polygon2_)
+
+## Parameters
+| Parameter | Description |
+|------------|----------------------------------------------------------------------------|
+| polygon1 | Required. Polygon or MultiPolygon GeoJSON feature. |
+| polygon2 | Required. Polygon or MultiPolygon GeoJSON feature to remove from polygon1. |
+
+### Example
+Return a GeoJSON Polygon that removes City Park (_polygon2_) from Colorado (_polygon1_).
+
+```
+SELECT geoDifference('{
+ "type": "Feature",
+ "properties": {
+ "name":"Colorado"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [[
+ [-109.072265625,37.00255267215955],
+ [-102.01904296874999,37.00255267215955],
+ [-102.01904296874999,41.0130657870063],
+ [-109.072265625,41.0130657870063],
+ [-109.072265625,37.00255267215955]
+ ]]
+ }
+ }',
+ '{
+ "type": "Feature",
+ "properties": {
+ "name":"City Park"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [[
+ [-104.95973110198975,39.7543828214657],
+ [-104.95955944061278,39.744781185675386],
+ [-104.95904445648193,39.74422022399989],
+ [-104.95835781097412,39.74402223643582],
+ [-104.94097709655762,39.74392324244047],
+ [-104.9408483505249,39.75434982844515],
+ [-104.95973110198975,39.7543828214657]
+ ]]
+ }
+ }'
+)
+```
+
+# geoDistance
+Calculates the distance between two points in units (default is kilometers).
+
+## Syntax
+geoDistance(_point1, point2_[_, units_])
+
+## Parameters
+| Parameter | Description |
+|------------|-----------------------------------------------------------------------------------------------------------------------|
+| point1 | Required. GeoJSON Point specifying the origin. |
+| point2 | Required. GeoJSON Point specifying the destination. |
+| units | Optional. Specified as a string. Options are ‘degrees’, ‘radians’, ‘miles’, or ‘kilometers’. Default is ‘kilometers’. |
+
+### Example 1
+Calculate the distance, in miles, between HarperDB’s headquarters and the Washington Monument.
+
+```
+SELECT geoDistance('[-104.979127,39.761563]', '[-77.035248,38.889475]', 'miles')
+```
+
+### Example 2
+Find all locations that are within 40 kilometers of a given point, return that distance in miles, and sort by distance in an ascending order.
+
+```
+SELECT *, geoDistance('[-104.979127,39.761563]', geo_data, 'miles') as distance
+FROM dev.locations
+WHERE geoDistance('[-104.979127,39.761563]', geo_data, 'kilometers') < 40
+ORDER BY distance ASC
+```
+
+# geoNear
+Determines if point1 and point2 are within a specified distance of each other; default units are kilometers. Returns a Boolean.
+
+## Syntax
+geoNear(_point1, point2, distance_[_, units_])
+
+## Parameters
+| Parameter | Description |
+|------------|-----------------------------------------------------------------------------------------------------------------------|
+| point1 | Required. GeoJSON Point specifying the origin. |
+| point2 | Required. GeoJSON Point specifying the destination. |
+| distance | Required. The maximum distance in units as an integer or decimal. |
+| units | Optional. Specified as a string. Options are ‘degrees’, ‘radians’, ‘miles’, or ‘kilometers’. Default is ‘kilometers’. |
+
+### Example 1
+Return all locations within 50 miles of a given point.
+
+```
+SELECT *
+FROM dev.locations
+WHERE geoNear('[-104.979127,39.761563]', geo_data, 50, 'miles')
+```
+
+### Example 2
+Return all locations within 2 degrees of the earth of a given point. (Each degree lat/long is about 69 miles [111 kilometers]). Return all data and the distance in miles, sorted by ascending distance.
+
+```
+SELECT *, geoDistance('[-104.979127,39.761563]', geo_data, 'miles') as distance
+FROM dev.locations
+WHERE geoNear('[-104.979127,39.761563]', geo_data, 2, 'degrees')
+ORDER BY distance ASC
+```
+
+# geoContains
+Determines if geo2 is completely contained by geo1. Returns a Boolean.
+
+## Syntax
+geoContains(_geo1, geo2_)
+
+## Parameters
+| Parameter | Description |
+|------------|-----------------------------------------------------------------------------------|
+| geo1 | Required. Polygon or MultiPolygon GeoJSON feature. |
+| geo2 | Required. Polygon or MultiPolygon GeoJSON feature tested to be contained by geo1. |
+
+### Example 1
+Return all locations within the state of Colorado (passed as a GeoJSON string).
+
+```
+SELECT *
+FROM dev.locations
+WHERE geoContains('{
+ "type": "Feature",
+ "properties": {
+ "name":"Colorado"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [[
+ [-109.072265625,37.00255267],
+ [-102.01904296874999,37.00255267],
+ [-102.01904296874999,41.01306579],
+ [-109.072265625,41.01306579],
+ [-109.072265625,37.00255267]
+ ]]
+ }
+}', geo_data)
+```
+
+### Example 2
+Return all locations which contain HarperDB Headquarters.
+
+```
+SELECT *
+FROM dev.locations
+WHERE geoContains(geo_data, '{
+ "type": "Feature",
+ "properties": {
+ "name": "HarperDB Headquarters"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [[
+ [-104.98060941696167,39.760704817357905],
+ [-104.98053967952728,39.76065120861263],
+ [-104.98055577278137,39.760642961109674],
+ [-104.98037070035934,39.76049450588716],
+ [-104.9802714586258,39.76056254790385],
+ [-104.9805235862732,39.76076461167841],
+ [-104.98060941696167,39.760704817357905]
+ ]]
+ }
+}')
+```
+
+# geoEqual
+Determines if two GeoJSON features are the same type and have identical X,Y coordinate values. For more information see https://developers.arcgis.com/documentation/spatial-references/. Returns a Boolean.
+
+## Syntax
+geoEqual(_geo1_, _geo2_)
+
+## Parameters
+| Parameter | Description |
+|------------|----------------------------------------|
+| geo1 | Required. GeoJSON geometry or feature. |
+| geo2 | Required. GeoJSON geometry or feature. |
+
+### Example
+Find HarperDB Headquarters within all locations within the database.
+
+```
+SELECT *
+FROM dev.locations
+WHERE geoEqual(geo_data, '{
+ "type": "Feature",
+ "properties": {
+ "name": "HarperDB Headquarters"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [[
+ [-104.98060941696167,39.760704817357905],
+ [-104.98053967952728,39.76065120861263],
+ [-104.98055577278137,39.760642961109674],
+ [-104.98037070035934,39.76049450588716],
+ [-104.9802714586258,39.76056254790385],
+ [-104.9805235862732,39.76076461167841],
+ [-104.98060941696167,39.760704817357905]
+ ]]
+ }
+}')
+```
+
+# geoCrosses
+Determines if the geometries cross over each other. Returns a Boolean.
+
+## Syntax
+geoCrosses(_geo1, geo2_)
+
+## Parameters
+| Parameter | Description |
+|------------|-----------------------------------------|
+| geo1 | Required. GeoJSON geometry or feature. |
+| geo2 | Required. GeoJSON geometry or feature. |
+
+### Example
+Find all locations that cross over a highway.
+
+```
+SELECT *
+FROM dev.locations
+WHERE geoCrosses(
+ geo_data,
+ '{
+ "type": "Feature",
+ "properties": {
+ "name": "Highway I-25"
+ },
+ "geometry": {
+ "type": "LineString",
+ "coordinates": [
+ [-104.9139404296875,41.00477542222947],
+ [-105.0238037109375,39.715638134796336],
+ [-104.853515625,39.53370327008705],
+ [-104.853515625,38.81403111409755],
+ [-104.61181640625,38.39764411353178],
+ [-104.8974609375,37.68382032669382],
+ [-104.501953125,37.00255267215955]
+ ]
+ }
+ }'
+)
+```
+
+# geoConvert
+
+Converts a series of coordinates into a GeoJSON of the specified type.
+
+## Syntax
+geoConvert(_coordinates, geo_type_[, _properties_])
+
+## Parameters
+| Parameter | Description |
+|--------------|------------------------------------------------------------------------------------------------------------------------------------|
+| coordinates | Required. One or more coordinates |
+| geo_type | Required. GeoJSON geometry type. Options are ‘point’, ‘lineString’, ‘multiLineString’, ‘multiPoint’, ‘multiPolygon’, and ‘polygon’ |
+| properties | Optional. Escaped JSON array with properties to be added to the GeoJSON output. |
+
+### Example
+Convert a given coordinate into a GeoJSON point with specified properties.
+
+```
+SELECT geoConvert(
+ '[-104.979127,39.761563]',
+ 'point',
+ '{
+ "name": "HarperDB Headquarters"
+ }'
+)
+```
diff --git a/site/versioned_docs/version-4.2/getting-started.md b/site/versioned_docs/version-4.2/getting-started.md
new file mode 100644
index 00000000..3f2a5e53
--- /dev/null
+++ b/site/versioned_docs/version-4.2/getting-started.md
@@ -0,0 +1,84 @@
+---
+title: Getting Started
+---
+
+# Getting Started
+
+HarperDB is designed for quick and simple setup and deployment, with smart defaults that lead to fast, scalable, and globally distributed database applications.
+
+You can easily create a HarperDB database in the cloud through our studio or install it locally. The quickest way to get HarperDB up and running is with [HarperDB Cloud](./deployments/harperdb-cloud/), our database-as-a-service offering. However, HarperDB is a [database application platform](./developers/applications/), and to leverage HarperDB’s full application development capabilities (defining schemas, endpoints, messaging, and gateways), you may wish to install and run HarperDB locally so that you can use your standard local IDE tools, debugging, and version control.
+
+### Installing a HarperDB Instance
+
+You can simply install HarperDB with npm (or yarn, or other package managers):
+
+```shell
+npm install -g harperdb
+```
+
+Here we installed HarperDB globally (which we recommend) to make it easy to run a single HarperDB instance with multiple projects, but you can install it locally (not globally) as well, as shown below.
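+
+For a project-local install (a minimal sketch; the package name is the same, and `npx` is just one way to invoke a locally installed binary):
+
+```shell
+npm install harperdb
+npx harperdb
+```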
+
+You can run HarperDB by running:
+
+```shell
+harperdb
+```
+
+You can now use HarperDB as a standalone database. You can also create a cloud instance (see below), which is another easy way to get started.
+
+#### Developing Database Applications with HarperDB
+
+HarperDB is more than just a database: with HarperDB you build "database applications", which package your schema, endpoints, and application logic together. You can then deploy your application to an entire cluster of HarperDB instances, ready to scale to on-the-edge delivery of data and application endpoints directly to your users. To get started with HarperDB, take a look at our application development guide, with quick and easy examples:
+
+[Database application development guide](./developers/applications/)
+
+### Setting up a Cloud Instance
+
+To set up a HarperDB cloud instance, simply sign up and create a new instance:
+
+1. [Sign up for the HarperDB Studio](https://studio.harperdb.io/sign-up)
+1. [Create a new HarperDB Cloud instance](./administration/harperdb-studio/instances#create-a-new-instance)
+
+Note that a local instance and cloud instance are not mutually exclusive. You can register your local instance in the HarperDB Studio, and a common development flow is to develop locally and then deploy your application to your cloud instance.
+
+HarperDB Cloud instance provisioning typically takes 5-15 minutes. You will receive an email notification when your instance is ready.
+
+#### Using the HarperDB Studio
+
+Now that you have a HarperDB instance, if you want to use HarperDB as a standalone database, you can fully administer and interact with your database through the Studio. This section links to appropriate articles to get you started interacting with your data.
+
+1. [Create a schema](./administration/harperdb-studio/manage-schemas-browse-data#create-a-schema)
+1. [Create a table](./administration/harperdb-studio/manage-schemas-browse-data#create-a-table)
+1. [Add a record](./administration/harperdb-studio/manage-schemas-browse-data#add-a-record)
+1. [Load CSV data](./administration/harperdb-studio/manage-schemas-browse-data#load-csv-data) (Here’s a sample CSV of the HarperDB team’s dogs)
+1. [Query data via SQL](./administration/harperdb-studio/query-instance-data)
+
+## Administering HarperDB
+
+If you are deploying and administering HarperDB, you may want to look at our [configuration documentation](./deployments/configuration) and our administrative operations API below.
+
+### HarperDB APIs
+
+The preferred way to interact with HarperDB for typical querying, accessing, and updating data (CRUD) operations is through the REST interface, described in the [REST documentation](./developers/rest).
+
+The Operations API provides extensive administrative capabilities for HarperDB, and the [Operations API documentation has usage and examples](./developers/operations-api/). Generally it is recommended that you use the RESTful interface as your primary interface for performant data access, querying, and manipulation (DML) for building production applications (under heavy load), and the operations API (and SQL) for data definition (DDL) and administrative purposes.
+
+The HarperDB Operations API is a single endpoint, which means the only thing that needs to change across different calls is the body. For example purposes, a basic cURL command is shown below to create a schema called dev. To change this behavior, swap out the operation in the `data-raw` body parameter.
+
+```
+curl --location --request POST 'https://instance-subdomain.harperdbcloud.com' \
+--header 'Authorization: Basic YourBase64EncodedInstanceUser:Pass' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+ "operation": "create_schema",
+ "schema": "dev"
+}'
+```
+
+## Support and Learning More
+
+If you find yourself in need of additional support you can submit a [HarperDB support ticket](https://harperdbhelp.zendesk.com/hc/en-us/requests/new). You can also learn more about available HarperDB projects by searching [Github](https://github.com/search?q=harperdb).
+
+### Video Tutorials
+
+[HarperDB video tutorials are available on our YouTube channel](https://www.youtube.com/@harperdbio). HarperDB and the HarperDB Studio are constantly changing; as such, there may be small discrepancies in UI/UX.
diff --git a/site/versioned_docs/version-4.2/index.md b/site/versioned_docs/version-4.2/index.md
new file mode 100644
index 00000000..fd7be9a8
--- /dev/null
+++ b/site/versioned_docs/version-4.2/index.md
@@ -0,0 +1,106 @@
+---
+title: HarperDB Docs
+---
+
+# HarperDB Docs
+
+HarperDB is a globally-distributed edge application platform. It reduces complexity, increases performance, and lowers costs by combining user-defined applications, a high-performance database, and an enterprise-grade streaming broker into a single package. The platform offers unlimited horizontal scale at the click of a button, and syncs data across the cluster in milliseconds. HarperDB simplifies the process of delivering applications and the data that drives them to the edge, which dramatically improves both the user experience and total cost of ownership for large-scale applications. Deploying HarperDB on global infrastructure enables a CDN-like solution for enterprise data and applications.
+
+HarperDB's documentation covers installation, getting started, administrative operation APIs, security, and much more. Browse the topics at left, or choose one of the commonly used documentation sections below.
+
+:::info
+Wondering what's new with HarperDB 4.2? Take a look at our latest [Release Notes](./technical-details/release-notes/v4-tucker/4.2.0).
+:::
+
+## Getting Started
+
+
+
+
+
+ Get up and running with HarperDB
+
+
+
+
+
+ Run HarperDB on your own hardware
+
+
+
+
+
+ Spin up an instance in minutes to get going fast
+
+
+
+
+## Building with HarperDB
+
+
+
+
+
+ Build a fully featured HarperDB Component with custom functionality
+
+
+
+
+
+ The recommended HTTP interface for data access, querying, and manipulation
+
+
+
+
+
+ Configure, deploy, administer, and control your HarperDB instance
+
+
+
+
+
+
+
+
+ The process of connecting multiple HarperDB databases together to create a database mesh network that enables users to define data replication patterns.
+
+
+
+
+
+ The web-based GUI for HarperDB. Studio enables you to administer, navigate, and monitor all of your HarperDB instances in a simple, user friendly interface.
+
+
+
diff --git a/site/versioned_docs/version-4.2/technical-details/_category_.json b/site/versioned_docs/version-4.2/technical-details/_category_.json
new file mode 100644
index 00000000..69ce80a6
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/_category_.json
@@ -0,0 +1,12 @@
+{
+ "label": "Technical Details",
+ "position": 4,
+ "link": {
+ "type": "generated-index",
+ "title": "Technical Details Documentation",
+ "description": "Reference documentation and technical specifications",
+ "keywords": [
+ "technical-details"
+ ]
+ }
+}
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/reference/analytics.md b/site/versioned_docs/version-4.2/technical-details/reference/analytics.md
new file mode 100644
index 00000000..7b475176
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/reference/analytics.md
@@ -0,0 +1,117 @@
+---
+title: Analytics
+---
+
+# Analytics
+
+HarperDB provides extensive telemetry and analytics data to help monitor the status of the server and its workloads, understand traffic and usage patterns, identify issues and scaling needs, and identify the queries and actions that are consuming the most resources.
+
+HarperDB collects statistics for all operations, URL endpoints, and messaging topics, aggregating information by thread, operation, resource, and method, in real-time. These statistics are logged in the `hdb_raw_analytics` and `hdb_analytics` tables in the `system` database.
+
+There are two "levels" of analytics in the HarperDB analytics table: the first is the immediate level of raw direct logging of real-time statistics. These analytics entries are recorded once a second (when there is activity) by each thread, and include all recorded activity in the last second, along with system resource information. The records have a primary key that is the timestamp in milliseconds since epoch. This can be queried (with `superuser` permission) using the search\_by\_conditions operation (this will search for 10 seconds worth of analytics) on the `hdb_raw_analytics` table:
+
+```
+POST http://localhost:9925
+Content-Type: application/json
+
+{
+ "operation": "search_by_conditions",
+ "schema": "system",
+ "table": "hdb_raw_analytics",
+ "conditions": [{
+ "search_attribute": "id",
+ "search_type": "between",
+    "search_value": [1688594000000, 1688594010000]
+ }]
+}
+```
+
+And a typical response looks like:
+
+```
+{
+ "time": 1688594390708,
+ "period": 1000.8336279988289,
+ "metrics": [
+ {
+ "metric": "bytes-sent",
+ "path": "search_by_conditions",
+ "type": "operation",
+ "median": 202,
+ "mean": 202,
+ "p95": 202,
+ "p90": 202,
+ "count": 1
+ },
+ ...
+ {
+ "metric": "memory",
+ "threadId": 2,
+ "rss": 1492664320,
+ "heapTotal": 124596224,
+ "heapUsed": 119563120,
+ "external": 3469790,
+ "arrayBuffers": 798721
+ },
+ {
+ "metric": "utilization",
+ "idle": 138227.52767700003,
+ "active": 70.5066209952347,
+ "utilization": 0.0005098165086230495
+ }
+ ],
+ "threadId": 2,
+ "totalBytesProcessed": 12182820,
+ "id": 1688594390708.6853
+}
+```
+
+The second level of analytics recording is aggregate data. The aggregate records are recorded once a minute and summarize the results from all the per-second entries from all the threads. The ids for these records are also timestamps in milliseconds since epoch, and they can be queried from the `hdb_analytics` table. You can query these with an operation like:
+
+```
+POST http://localhost:9925
+Content-Type: application/json
+
+{
+ "operation": "search_by_conditions",
+ "schema": "system",
+ "table": "hdb_analytics",
+ "conditions": [{
+ "search_attribute": "id",
+ "search_type": "between",
+ "search_value": [1688194100000, 1688594990000]
+ }]
+}
+```
+
+And a summary record looks like:
+
+```
+{
+ "period": 60000,
+ "metric": "bytes-sent",
+ "method": "connack",
+ "type": "mqtt",
+ "median": 4,
+ "mean": 4,
+ "p95": 4,
+ "p90": 4,
+ "count": 1,
+ "id": 1688589569646,
+ "time": 1688589569646
+}
+```
+
+The following are general resource usage statistics that are tracked:
+
+* memory - This includes RSS, heap, buffer and external data usage.
+* utilization - How much of the time the worker was processing requests.
+* mqtt-connections - The number of MQTT connections.
+
+The following types of information are tracked for each HTTP request:
+
+* success - How many requests returned a successful response (20x response code).
+* TTFB - Time to first byte in the response to the client.
+* transfer - Time to finish the transfer of the data to the client.
+* bytes-sent - How many bytes of data were sent to the client.
+
+Requests are categorized by operation name for the operations API, by resource name for the REST API, and by command for the MQTT interface.
diff --git a/site/versioned_docs/version-4.2/technical-details/reference/architecture.md b/site/versioned_docs/version-4.2/technical-details/reference/architecture.md
new file mode 100644
index 00000000..f2881d3c
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/reference/architecture.md
@@ -0,0 +1,42 @@
+---
+title: Architecture
+---
+
+# Architecture
+
+HarperDB's architecture consists of resources, which includes tables and user defined data sources and extensions, and server interfaces, which includes the RESTful HTTP interface, operations API, and MQTT. Servers are supported by routing and auth services.
+
+```
+ ┌──────────┐ ┌──────────┐
+ │ Clients │ │ Clients │
+ └────┬─────┘ └────┬─────┘
+ │ │
+ ▼ ▼
+ ┌────────────────────────────────────────┐
+ │ │
+ │ Socket routing/management │
+ ├───────────────────────┬────────────────┤
+ │ │ │
+ │ Server Interfaces ─►│ Authentication │
+ │ RESTful HTTP, MQTT │ Authorization │
+ │ ◄─┤ │
+ │ ▲ └────────────────┤
+ │ │ │ │
+ ├───┼──────────┼─────────────────────────┤
+ │ │ │ ▲ │
+ │ ▼ Resources ▲ │ ┌───────────┐ │
+ │ │ └─┤ │ │
+ ├─────────────────┴────┐ │ App │ │
+ │ ├─►│ resources │ │
+ │ Database tables │ └───────────┘ │
+ │ │ ▲ │
+ ├──────────────────────┘ │ │
+ │ ▲ ▼ │ │
+ │ ┌────────────────┐ │ │
+ │ │ External │ │ │
+ │ │ data sources ├────┘ │
+ │ │ │ │
+ │ └────────────────┘ │
+ │ │
+ └────────────────────────────────────────┘
+```
diff --git a/site/versioned_docs/version-4.2/technical-details/reference/content-types.md b/site/versioned_docs/version-4.2/technical-details/reference/content-types.md
new file mode 100644
index 00000000..6aee4850
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/reference/content-types.md
@@ -0,0 +1,27 @@
+---
+title: Content Types
+---
+
+# Content Types
+
+HarperDB supports several different content types (or MIME types) for both HTTP request bodies (describing operations) as well as for serializing content into HTTP response bodies. HarperDB follows HTTP standards for specifying both request body content types and acceptable response body content types. Any of these content types can be used with any of the standard HarperDB operations.
+
+For request body content, the content type should be specified with the `Content-Type` header. For example with JSON, use `Content-Type: application/json` and for CBOR, include `Content-Type: application/cbor`. To request that the response body be encoded with a specific content type, use the `Accept` header. If you want the response to be in JSON, use `Accept: application/json`. If you want the response to be in CBOR, use `Accept: application/cbor`.
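+
+For example, a client might ask the operations API for a CBOR-encoded response like this (a minimal sketch using `fetch` in Node.js 18+; the host, port, credentials, and the `dev.dog` table are illustrative):
+
+```javascript
+// request a NoSQL search and ask for the response body to be serialized as CBOR
+const response = await fetch('http://localhost:9925', {
+  method: 'POST',
+  headers: {
+    'Content-Type': 'application/json', // the request body below is JSON
+    Accept: 'application/cbor', // ask for the response in CBOR
+    Authorization: 'Basic ' + Buffer.from('admin:password').toString('base64'),
+  },
+  body: JSON.stringify({
+    operation: 'search_by_value',
+    database: 'dev',
+    table: 'dog',
+    search_attribute: 'dog_name',
+    search_value: '*',
+    get_attributes: ['*'],
+  }),
+});
+const cborBody = Buffer.from(await response.arrayBuffer()); // decode with a CBOR library of your choice
+```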
+
+The following content types are supported:
+
+## JSON - application/json
+
+JSON is the most widely used content type, and is relatively readable and easy to work with. However, JSON does not support all the data types that are supported by HarperDB, and can't be used to natively encode data types like binary data or explicit Maps/Sets. Also, JSON is not as efficient as binary formats. When using JSON, compression is recommended (this also follows standard HTTP protocol with the `Accept-Encoding` header) to improve network transfer performance (although there is server performance overhead). JSON is a good choice for web development and when standard JSON types are sufficient and when combined with compression and debuggability/observability is important.
+
+## CBOR - application/cbor
+
+CBOR is a highly efficient binary format, and is a recommended format for most production use cases with HarperDB. CBOR supports the full range of HarperDB data types, including binary data, typed dates, and explicit Maps/Sets. CBOR is very performant and space efficient even without compression. Compression will still yield better network transfer size/performance, but compressed CBOR is generally not any smaller than compressed JSON. CBOR also natively supports streaming for optimal performance (using indefinite length arrays). The CBOR format has excellent standardization and HarperDB's CBOR provides an excellent balance of performance and size efficiency.
+
+## MessagePack - application/x-msgpack
+
+MessagePack is another efficient binary format like CBOR, with support for all HarperDB data types. MessagePack generally has wider adoption than CBOR and can be useful in systems that don't have CBOR support (or good support). However, MessagePack does not have native support for streaming of arrays of data (for query results), and so query results are returned as a (concatenated) sequence of MessagePack objects/maps. MessagePack decoders used with HarperDB's MessagePack must be prepared to decode a direct sequence of MessagePack values to properly read responses.
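+
+For example, a minimal sketch of decoding such a response with the `msgpackr` package (the package choice is an assumption; any decoder that can read a concatenated sequence of values will work):
+
+```javascript
+import { unpackMultiple } from 'msgpackr';
+
+// `body` is the raw response body (a Buffer or Uint8Array) from a request made with
+// the header `Accept: application/x-msgpack`
+function decodeResults(body) {
+  // unpackMultiple decodes a concatenated sequence of MessagePack values,
+  // which is how query results are returned in this format
+  return unpackMultiple(body);
+}
+```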
+
+## Comma-separated Values (CSV) - text/csv
+
+Comma-separated values is an easy to use and understand format that can be readily imported into spreadsheets or used for data processing. CSV lacks hierarchical structure for most data types, and shouldn't be used for frequent/production use, but when you need it, it is available.
diff --git a/site/versioned_docs/version-4.2/technical-details/reference/data-types.md b/site/versioned_docs/version-4.2/technical-details/reference/data-types.md
new file mode 100644
index 00000000..fca44b40
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/reference/data-types.md
@@ -0,0 +1,45 @@
+---
+title: Data Types
+---
+
+# Data Types
+
+HarperDB supports a rich set of data types for use in records in databases. Various data types can be used from both direct JavaScript interfaces in Custom Functions and the HTTP operations APIs. Using JSON for communication naturally limits the data types to those available in JSON (HarperDB supports all of JSON's data types), but JavaScript code and alternate data formats facilitate the use of additional data types. As of v4.1, HarperDB supports MessagePack and CBOR, which allow for all of HarperDB's supported data types. This includes:
+
+(Note that these labels are descriptive; they do not necessarily correspond to the GraphQL schema type names, but the schema type names are noted where possible.)
+
+## Boolean
+
+true or false. The GraphQL schema type name is `Boolean`.
+
+## String
+
+Strings, or text, are a sequence of any unicode characters and are internally encoded with UTF-8. The GraphQL schema type name is `String`.
+
+## Number
+
+Numbers can be stored as signed integers up to 64-bit or as floating point with 64-bit (double) precision, and numbers are automatically stored using the most optimal type. JSON is parsed by JavaScript, so the maximum safe (precise) integer is 9007199254740991 (larger numbers can be stored, but integer precision is not guaranteed). Custom Functions may use BigInt numbers to store/access larger 64-bit integers, but integers beyond 64-bit can't be stored with integer precision (they will be stored as standard double-precision numbers). The GraphQL schema type name is `Float` (`Int` can also be used to describe numbers that should fit into signed 32-bit integers).
+
+## Object/Map
+
+Objects, or maps, that hold a set of named properties can be stored in HarperDB. When provided as JSON objects or JavaScript objects, all property keys are stored as strings. The order of properties is also preserved in HarperDB's storage. Duplicate property keys are not allowed (they are dropped in parsing any incoming data).
+
+## Array
+
+Arrays hold an ordered sequence of values and can be stored in HarperDB. There is no support for sparse arrays, although you can use objects to store data with numbers (converted to strings) as properties.
+
+## Null
+
+A null value can be stored in HarperDB property values as well.
+
+## Date
+
+Dates can be stored as a specific data type. This is not supported in JSON, but is supported by MessagePack and CBOR. Custom Functions can also store and use Dates using JavaScript Date instances. The GraphQL schema type name is `Date`.
+
+## Binary Data
+
+Binary data can be stored in property values as well. JSON doesn’t have any support for encoding binary data, but MessagePack and CBOR support binary data in data structures, and this will be preserved in HarperDB. Custom Functions can also store binary data by using NodeJS’s Buffer or Uint8Array instances to hold the binary data. The GraphQL schema type name is `Bytes`.
+
+## Explicit Map/Set
+
+Explicit instances of JavaScript Maps and Sets can be stored and preserved in HarperDB as well. This can’t be represented with JSON, but can be with CBOR.
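+
+As an illustration, here is a minimal sketch of writing several of these types from JavaScript code running in HarperDB (the `Product` table and its attributes are hypothetical, and a binary format like CBOR or MessagePack is needed to round-trip the Date, binary, and Map values over HTTP):
+
+```javascript
+const { Product } = tables; // `tables` is a global in the HarperDB JavaScript environment
+
+await Product.put({
+  id: 'widget-1', // String
+  inStock: true, // Boolean
+  price: 9.99, // Number (stored as a double)
+  unitsSold: 9007199254740993n, // BigInt for a large 64-bit integer
+  releasedAt: new Date('2023-07-01'), // Date
+  thumbnail: Buffer.from([0x89, 0x50, 0x4e, 0x47]), // binary data
+  labels: new Map([['color', 'red']]), // explicit Map
+});
+```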
diff --git a/site/versioned_docs/version-4.2/technical-details/reference/dynamic-schema.md b/site/versioned_docs/version-4.2/technical-details/reference/dynamic-schema.md
new file mode 100644
index 00000000..c10aaf8e
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/reference/dynamic-schema.md
@@ -0,0 +1,148 @@
+---
+title: Dynamic Schema
+---
+
+# Dynamic Schema
+
+When tables are created without any schema, whether through the operations API (without specifying attributes) or the studio, they follow "dynamic schema" behavior. It is generally best practice to define schemas for your tables to ensure predictable, consistent structures with data integrity and precise control over indexing, independent of the data itself. However, it can often be simpler and quicker to create a table and let the ingested data generate the schema dynamically, with every attribute auto-indexed for broad querying.
+
+With dynamic schemas, individual attributes are reflexively created as data is ingested, meaning the table adapts to the structure of the ingested data. HarperDB tracks the metadata around schemas, tables, and attributes, allowing for `describe_table`, `describe_schema`, and `describe_all` operations.
+
+### Databases
+
+HarperDB databases hold a collection of tables that are stored together in a single file and are transactionally connected. This means that operations across tables within a database can be performed in a single atomic transaction. By default, tables are added to the default database, called "data", but other databases can be created and specified for tables.
+
+### Tables
+
+HarperDB tables group records together with a common data pattern. To create a table, users must provide a table name and a primary key.
+
+* **Table Name**: Used to identify the table.
+* **Primary Key**: This is a required attribute that serves as the unique identifier for a record and is also known as the `hash_attribute` in the HarperDB operations API.
+
+## Primary Key
+
+The primary key (also referred to as the `hash_attribute`) is used to uniquely identify records. Uniqueness is enforced on the primary key; inserts with the same primary key will be rejected. If a primary key is not provided on insert, a GUID will be automatically generated and returned to the user. The [HarperDB Storage Algorithm](./storage-algorithm) utilizes this value for indexing.
+
+**Standard Attributes**
+
+With tables that are using dynamic schemas, additional attributes are reflexively added via insert and update operations (in both SQL and NoSQL) when new attributes are included in the data structure provided to HarperDB. As a result, schemas are additive, meaning new attributes are created in the underlying storage algorithm as additional data structures are provided. HarperDB offers `create_attribute` and `drop_attribute` operations for users who prefer to manually define their data model independent of data ingestion. When new attributes are added to tables with existing data, the value of that new attribute is assumed to be `null` for all existing records.
+
+**Audit Attributes**
+
+HarperDB automatically creates two audit attributes used on each record if the table is created without a schema.
+
+* `__createdtime__`: The time the record was created in [Unix Epoch with milliseconds](https://www.epochconverter.com/) format.
+* `__updatedtime__`: The time the record was updated in [Unix Epoch with milliseconds](https://www.epochconverter.com/) format.
+
+### Dynamic Schema Example
+
+To better understand the behavior let’s take a look at an example. This example utilizes [HarperDB API operations](../../developers/operations-api/databases-and-tables).
+
+**Create a Database**
+
+```json
+{
+ "operation": "create_database",
+ "schema": "dev"
+}
+```
+
+**Create a Table**
+
+Notice the database name, table name, and primary key name are the only required parameters.
+
+```json
+{
+ "operation": "create_table",
+ "database": "dev",
+ "table": "dog",
+ "primary_key": "id"
+}
+```
+
+At this point the table does not have structure beyond what we provided, so the table looks like this:
+
+**dev.dog**
+
+
+
+**Insert Record**
+
+To define attributes we do not need to do anything beyond sending them in with an insert operation.
+
+```json
+{
+ "operation": "insert",
+ "database": "dev",
+ "table": "dog",
+ "records": [
+ {"id": 1, "dog_name": "Penny", "owner_name": "Kyle"}
+ ]
+}
+```
+
+With a single record inserted and new attributes defined, our table now looks like this:
+
+**dev.dog**
+
+
+
+Indexes have been automatically created for `dog_name` and `owner_name` attributes.
+
+**Insert Additional Record**
+
+If we continue inserting records with the same structure, no schema updates are required. One record will omit the primary key from the insert to demonstrate GUID generation.
+
+```json
+{
+ "operation": "insert",
+ "database": "dev",
+ "table": "dog",
+ "records": [
+ {"id": 2, "dog_name": "Monk", "owner_name": "Aron"},
+ {"dog_name": "Harper","owner_name": "Stephen"}
+ ]
+}
+```
+
+In this case, there is no change to the schema. Our table now looks like this:
+
+**dev.dog**
+
+
+
+**Update Existing Record**
+
+In this case, we will update a record with a new attribute not previously defined on the table.
+
+```json
+{
+ "operation": "update",
+ "database": "dev",
+ "table": "dog",
+ "records": [
+ {"id": 2, "weight_lbs": 35}
+ ]
+}
+```
+
+Now we have a new attribute called `weight_lbs`. Our table now looks like this:
+
+**dev.dog**
+
+
+
+**Query Table with SQL**
+
+Now if we query for all records where `weight_lbs` is `null`, we expect to get back two records.
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT * FROM dev.dog WHERE weight_lbs IS NULL"
+}
+```
+
+This results in the expected two records being returned.
+
+
diff --git a/site/versioned_docs/version-4.2/technical-details/reference/globals.md b/site/versioned_docs/version-4.2/technical-details/reference/globals.md
new file mode 100644
index 00000000..68623f59
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/reference/globals.md
@@ -0,0 +1,80 @@
+---
+title: Globals
+---
+
+# Globals
+
+The primary way that JavaScript code can interact with HarperDB is through global variables, which include several objects and classes that provide access to the tables, server hooks, and resources that HarperDB provides for building applications. As global variables, these can be directly accessed in any module.
+
+These global variables are also available through the `harperdb` module/package, which can provide better typing in TypeScript. To use this from your own project directory, make sure you link the package to your current `harperdb` installation:
+
+```bash
+npm link harperdb
+```
+
+The `harperdb` package is automatically linked for all installed components. Once linked, if you are using ECMAScript module syntax, you can import functions from `harperdb` like:
+
+```javascript
+import { tables, Resource } from 'harperdb';
+```
+
+Or if you are using CommonJS format for your modules:
+
+```javascript
+const { tables, Resource } = require('harperdb');
+```
+
+The global variables include:
+
+### `tables`
+
+This is an object that holds all the tables for the default database (called `data`) as properties. Each of these property values is a table class that subclasses the Resource interface and provides access to the table through the Resource interface. For example, you can get a record from a table (in the default database) called 'my-table' with:
+
+```javascript
+import { tables } from 'harperdb';
+const { MyTable } = tables;
+async function getRecord() {
+ let record = await MyTable.get(recordId);
+}
+```
+
+It is recommended that you [define a schema](../../getting-started/) for all the tables that are required to exist in your application. This will ensure that the tables exist on the `tables` object. Also note that the property names follow a CamelCase convention for use in JavaScript and in the GraphQL Schemas, but these are translated to snake\_case for the actual table names, and converted back to CamelCase when added to the `tables` object.
+
+### `databases`
+
+This is an object that holds all the databases in HarperDB, and can be used to explicitly access a table by database name. Each database will be a property on this object, and each of these property values will be an object with the set of all tables in that database. The default database, `databases.data`, should equal the `tables` export. For example, if you want to access the "dog" table in the "dev" database, you could do so like this:
+
+```javascript
+import { databases } from 'harperdb';
+const { Dog } = databases.dev;
+```
+
+### `Resource`
+
+This is the base class for all resources, including tables and external data sources. This is provided so that you can extend it to implement custom data source providers. See the [Resource API documentation](./resource) for more details about implementing a Resource class.
+
+### `auth(username, password?): Promise`
+
+This returns the user object with permissions/authorization information based on the provided username. If a password is provided, the password will be verified before returning the user object (if the password is incorrect, an error will be thrown).
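+
+A minimal sketch (the credentials are placeholders):
+
+```javascript
+// verify credentials and inspect the resulting user object
+const user = await auth('some_user', 'some_password'); // throws if the password is incorrect
+logger.info('authenticated: ' + user.username); // the user object includes role/authorization information
+```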
+
+### `logger`
+
+This provides methods `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify` for logging. See the [logging documentation](../../administration/logging/standard-logging) for more information.
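+
+For example (the messages are arbitrary):
+
+```javascript
+logger.info('component started'); // written at the default log level
+logger.warn('upstream data source responded slowly');
+logger.error('failed to refresh record');
+```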
+
+### `server`
+
+This provides a number of functions and objects to interact with the server including:
+
+#### `server.config`
+
+This provides access to the HarperDB configuration object. This comes from the [harperdb-config.yaml](../../deployments/configuration) (parsed into object form).
+
+#### `server.recordAnalytics(value, metric, path?, method?, type?)`
+
+This records the provided value as a metric in HarperDB's analytics. HarperDB efficiently records and tracks these metrics and makes them available through the [analytics API](./analytics). The values are aggregated, and statistical information is computed when many operations are performed. The optional parameters can be used to group statistics; make sure you are not grouping at too fine a level for useful aggregation. The parameters are listed here, and a usage sketch follows the list:
+
+* `value` - This is a numeric value for the metric that is being recorded. This can be a value measuring time or bytes, for example.
+* `metric` - This is the name of the metric.
+* `path` - This is an optional path (like a URL path). For a URL path like `/my-resource/some-id`, you would typically record a path of "my-resource", omitting the id, so that all requests to "my-resource" are grouped together rather than aggregated by each individual id.
+* `method` - Optional method to group by.
+* `type` - Optional type to group by.
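+
+A minimal sketch of recording a custom metric from a table's method (the `Report` table, metric name, and grouping values are illustrative):
+
+```javascript
+export class Report extends tables.Report {
+  async get(query) {
+    const start = performance.now();
+    const result = await super.get(query);
+    // record how long this handler took, grouped under a custom metric name
+    server.recordAnalytics(performance.now() - start, 'report-generation', 'Report', 'get');
+    return result;
+  }
+}
+```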
diff --git a/site/versioned_docs/version-4.2/technical-details/reference/headers.md b/site/versioned_docs/version-4.2/technical-details/reference/headers.md
new file mode 100644
index 00000000..c58bb7ec
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/reference/headers.md
@@ -0,0 +1,12 @@
+---
+title: HarperDB Headers
+---
+
+# HarperDB Headers
+
+All HarperDB API responses include headers that are important for interoperability and debugging purposes. The following headers are returned with all HarperDB API responses:
+
+| Key | Example Value | Description |
+|-------------------|------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| server-timing | db;dur=7.165 | This reports the duration of the operation, in milliseconds. This follows the standard for Server-Timing and can be consumed by network monitoring tools. |
+| content-type | application/json | This reports the MIME type of the returned content, which is negotiated based on the requested content type in the Accept header. |
diff --git a/site/versioned_docs/version-4.2/technical-details/reference/index.md b/site/versioned_docs/version-4.2/technical-details/reference/index.md
new file mode 100644
index 00000000..e9a6ebf9
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/reference/index.md
@@ -0,0 +1,16 @@
+---
+title: Reference
+---
+
+# Reference
+
+This section contains technical details and reference materials for HarperDB.
+
+* [Resource API](./resource)
+* [Transactions](./transactions)
+* [Storage Algorithm](./storage-algorithm)
+* [Dynamic Schema](./dynamic-schema)
+* [Headers](./headers)
+* [Limitations](./limits)
+* [Content Types](./content-types)
+* [Data Types](./data-types)
diff --git a/site/versioned_docs/version-4.2/technical-details/reference/limits.md b/site/versioned_docs/version-4.2/technical-details/reference/limits.md
new file mode 100644
index 00000000..ccad9d64
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/reference/limits.md
@@ -0,0 +1,33 @@
+---
+title: HarperDB Limits
+---
+
+# HarperDB Limits
+
+This document outlines limitations of HarperDB.
+
+## Database Naming Restrictions
+
+**Case Sensitivity**
+
+HarperDB database metadata (database names, table names, and attribute/column names) is case sensitive, meaning databases, tables, and attributes can differ only by the case of their characters.
+
+**Restrictions on Database Metadata Names**
+
+HarperDB database metadata (database names, table names, and attribute names) cannot contain the following UTF-8 characters:
+
+```
+/`¡¢£¤¥¦§¨©ª«¬®¯°±²³´µ¶·¸¹º»¼½¾¿ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿ
+```
+
+Additionally, they cannot contain the first 31 non-printing characters. Spaces are allowed but not recommended. The regular expression used to verify that a name is valid is:
+
+```
+^[\x20-\x2E|\x30-\x5F|\x61-\x7E]*$
+```
+
+## Table Limitations
+
+**Attribute Maximum**
+
+HarperDB limits the number of total indexed attributes across tables (including the primary key of each table) to 10,000 per database.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/reference/resource.md b/site/versioned_docs/version-4.2/technical-details/reference/resource.md
new file mode 100644
index 00000000..708e8457
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/reference/resource.md
@@ -0,0 +1,531 @@
+---
+title: Resource Class
+---
+
+# Resource Class
+
+## Resource Class
+
+The Resource class is designed to model different data resources within HarperDB. The Resource class can be extended to create new data sources. Resources can be exported to define endpoints. Tables themselves extend the Resource class, and can be extended by users.
+
+Conceptually, a Resource class provides an interface for accessing, querying, modifying, and monitoring a set of entities or records. Instances of a Resource class can represent a single record or entity, or a collection of records, at a given point in time, and you can interact with them through various methods or queries. Resource instances can represent an atomic transactional view of a resource and facilitate transactional interaction. Therefore, distinct resource instances are created for every record or query that is accessed, and the instance methods are used to interact with the data.
+
+The RESTful HTTP server and other server interfaces will instantiate/load resources to fulfill incoming requests, so resources can be defined as endpoints for external interaction. When resources are used by the server interfaces, they are executed in a transaction and access checks are performed before the method is executed.
+
+Paths (URLs, MQTT topics) are mapped to different resource instances. A path that specifies an ID, like `/MyResource/3492`, will be mapped to a Resource instance whose ID is `3492`, and interactions will use the instance methods like `get()`, `put()`, and `post()`. The root path (`/MyResource/`) maps to a Resource instance with an ID of `null`.
+
+You can create classes that extend Resource to define your own data sources, typically to interface with external data sources (the Resource base class is available as a global variable in the HarperDB JS environment). In doing this, you will generally be extending and providing implementations for the instance methods below. For example:
+
+```javascript
+export class MyExternalData extends Resource {
+ get() {
+ // fetch data from an external source, using our primary key
+ return this.fetch(this.id);
+ }
+ put(data) {
+ // send the data into the external source
+ }
+ delete() {
+ // delete an entity in the external data source
+ }
+ subscribe(options) {
+ // if the external data source is capable of real-time notification of changes, can subscribe
+ }
+}
+// we can export this class from resources.js as our own endpoint, or use this as the source for
+// a HarperDB table to store and cache the data coming from this data source:
+tables.MyCache.sourcedFrom(MyExternalData);
+```
+
+You can also extend table classes in the same way, overriding the instance methods for custom functionality. The `tables` object is a global variable in the HarperDB JavaScript environment, along with `Resource`:
+
+```javascript
+export class MyTable extends tables.MyTable {
+ get() {
+ // we can add properties or change properties before returning data:
+ this.newProperty = 'newValue';
+ this.existingProperty = 44;
+ return super.get(); // returns the record, modified with the changes above
+ }
+ put(data) {
+ // can change data any way we want
+ super.put(data);
+ }
+ delete() {
+ super.delete();
+ }
+ post(data) {
+ // providing a post handler (for HTTP POST requests) is a common way to create additional
+ // actions that aren't well described with just PUT or DELETE
+ }
+}
+```
+
+Make sure that if you are extending and `export`ing your table with this class, you remove the `@export` directive in your schema so that you aren't exporting the same table/class twice.
+
+## Global Variables
+
+### `tables`
+
+This is an object with all the tables in the default database (the default database is "data"). Each table that has been declared or created will be available as a (standard) property on this object, and the value will be the table class that can be used to interact with that table. The table classes implement the Resource API.
+
+### `databases`
+
+This is an object with all the databases that have been defined in HarperDB (in the running instance). Each database that has been declared or created will be available as a (standard) property on this object. The property values are an object with the tables in that database, where each property is a table, like the `tables` object. In fact, `databases.data === tables` should always be true.
+
+### `Resource`
+
+This is the Resource base class. This can be directly extended for custom resources, and is the base class for all tables.
+
+### `server`
+
+This object provides extension points for extension components that wish to implement new server functionality (new protocols, authentication, etc.). See the [extensions documentation for more information](../../developers/components/writing-extensions).
+
+### `transaction`
+
+This provides a function for starting transactions. See the transactions section below for more information.
+
+### `contentTypes`
+
+This provides an interface for defining new content type handlers. See the [content type extensions documentation](../../developers/components/writing-extensions) for more information.
+
+### TypeScript Support
+
+While these objects/methods are all available as global variables, it is easier to get TypeScript support (code assistance, type checking) for these interfaces by explicitly `import`ing them. This can be done by setting up a package link to the main HarperDB package in your app:
+
+```
+# you may need to go to your harperdb directory and set it up as a link first
+npm link harperdb
+```
+
+And then you can import any of the main HarperDB APIs you will use, and your IDE should understand the full typings associated with them:
+
+```
+import { databases, tables, Resource } from 'harperdb';
+```
+
+## Resource Class (Instance) Methods
+
+### Properties/attributes declared in schema
+
+Properties that have been defined in your table's schema can be accessed and modified as direct properties on the Resource instances.
+
+### `get(queryOrProperty?)`: Resource|AsyncIterable
+
+This is called to return the record or data for this resource, and is called by HTTP GET requests. This may be optionally called with a `query` object to specify a query should be performed, or a string to indicate that the specified property value should be returned. When defining Resource classes, you can define or override this method to define exactly what should be returned when retrieving a record. The default `get` method (`super.get()`) returns the current record as a plain object.
+
+The query object can be used to access any query parameters that were included in the URL. For example, with a request to `/my-resource/some-id?param1=value`, we can access URL/request information:
+
+```javascript
+get(query) {
+ // note that query will only exist (as an object) if there is a query string
+ let param1 = query?.get?.('param1'); // returns 'value'
+ let id = this.getId(); // returns 'some-id'
+ ...
+}
+```
+If `get` is called for a single record (for a request like `/Table/some-id`), the default action is to return `this` instance of the resource. If `get` is called on a collection (`/Table/?name=value`), the default action is to `search` and return an AsyncIterable of results.
+
+### `search(query: Query)`: AsyncIterable
+
+By default this is called by `get(query)` from a collection resource.
+
+### `getId(): string|number|Array`
+
+Returns the primary key value for this resource.
+
+### `put(data: object)`
+
+This will assign the provided record or data to this resource, and is called for HTTP PUT requests. You can define or override this method to define how records should be updated. The default `put` method on tables (`super.put(data)`) writes the record to the table (updating or inserting depending on if the record previously existed) as part of the current transaction for the resource instance.
+
+### `patch(data: object)`
+
+This will update the existing record with the provided data's properties, and is called for HTTP PATCH requests. You can define or override this method to define how records should be updated. The default `patch` method on tables (`super.patch(data)`) updates the record. The properties will be applied to the existing record, overwriting the existing record's properties and preserving any properties in the record that are not specified in the `data` object. This is performed as part of the current transaction for the resource instance.
+
+### `update(data: object, fullUpdate: boolean?)`
+
+This is called by the default `put` and `patch` handlers to update a record. `put` calls it with `fullUpdate` set to `true` to indicate a full record replacement (`patch` calls it with the second argument as `false`). Any additional property changes that are made before the transaction commits will also be persisted.
+
+### `delete(queryOrProperty?)`
+
+This will delete this record or resource, and is called for HTTP DELETE requests. You can define or override this method to define how records should be deleted. The default `delete` method on tables (`super.delete()`) deletes the record from the table as part of the current transaction.
+
+### `publish(message)`
+
+This will publish a message to this resource, and is called for MQTT publish commands. You can define or override this method to define how messages should be published. The default `publish` method on tables (`super.publish(message)`) records the published message as part of the current transaction; this will not change the data in the record but will notify any subscribers to the record/topic.
+
+### `post(data)`
+
+This is called for HTTP POST requests. You can define this method to provide your own implementation of how POST requests should be handled. Generally this provides a generic mechanism for various types of data updates.
+
+### `invalidate()`
+
+This method is available on tables. This will invalidate the current record in the table. This can be used with a caching table and is used to indicate that the source data has changed, and the record needs to be reloaded when next accessed.
+
+### `subscribe(subscriptionRequest): Promise`
+
+This will subscribe to the current resource, and is called for MQTT subscribe commands. You can define or override this method to define how subscriptions should be handled. The default `subscribe` method on tables (`super.subscribe(subscriptionRequest)`) will set up a listener that will be called for any changes or published messages to this resource.
+
+The returned promise resolves to a Subscription object, which is an `AsyncIterable` that you can iterate through with a `for await` loop. It also has a `queue` property that holds an array of any messages ready to be delivered immediately (if you have specified a start time or previous count, or there is a message for the current or "retained" record, these may be returned immediately).
+
+The `subscriptionRequest` object supports the following properties (all optional); a usage sketch follows the list:
+
+* `id` - The primary key of the record (or topic) that you want to subscribe to. If omitted, this will be a subscription to the whole table.
+* `isCollection` - If this is enabled and the `id` was included, this will create a subscription to all the record updates/messages that are prefixed with the id. For example, a subscription request of `{id:'sub', isCollection: true}` would return events for any update with an id/topic of the form sub/\* (like `sub/1`).
+* `startTime` - This will begin the subscription at a past point in time, returning all updates/messages since the start time (a catch-up of historical messages). This can be used to resume a subscription, getting all messages since the last subscription.
+* `previousCount` - This specifies the number of previous updates/messages to deliver. For example, `previousCount: 10` would return the last ten messages. Note that `previousCount` can not be used in conjunction with `startTime`.
+* `omitCurrent` - Indicates that the current (or retained) record should _not_ be immediately sent as the first update in the subscription (if no `startTime` or `previousCount` was used). By default, the current record is sent as the first update.
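+
+For example, a minimal sketch of consuming a subscription from application code (the `Dog` table is illustrative):
+
+```javascript
+const { Dog } = tables;
+
+// subscribe to all updates and published messages for the record with id 1
+const subscription = await Dog.subscribe({ id: 1 });
+for await (const event of subscription) {
+  // each event describes an update or published message for this record
+  logger.info('received an event for dog 1');
+}
+```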
+
+### `connect(incomingMessages?: AsyncIterable): AsyncIterable`
+
+This is called when a connection is received through WebSockets or Server Sent Events (SSE) to this resource path. This is called with `incomingMessages` as an iterable stream of incoming messages when the connection is from WebSockets, and is called with no arguments when the connection is from a SSE connection. This can return an asynchronous iterable representing the stream of messages to be sent to the client.
+
+### `set(property, value)`
+
+This will assign the provided value to the designated property in the resource's record. During a write operation, this will indicate that the record has changed and the changes will be saved on commit. During a read operation, this will modify the copy of the record that is serialized to the output format (JSON, MessagePack, etc.).
+
+### `allowCreate(user)`
+
+This is called to determine if the user has permission to create the current resource. This is called as part of external incoming requests (HTTP). The default behavior for a generic resource is that this requires super-user permission and the default behavior for a table is to check the user's role's insert permission to the table.
+
+### `allowRead(user)`
+
+This is called to determine if the user has permission to read from the current resource. This is called as part of external incoming requests (HTTP GET). The default behavior for a generic resource is that this requires super-user permission and the default behavior for a table is to check the user's role's read permission to the table.
+
+### `allowUpdate(user)`
+
+This is called to determine if the user has permission to update the current resource. This is called as part of external incoming requests (HTTP PUT). The default behavior for a generic resource is that this requires super-user permission and the default behavior for a table is to check the user's role's update permission to the table.
+
+### `allowDelete(user)`
+
+This is called to determine if the user has permission to delete the current resource. This is called as part of external incoming requests (HTTP DELETE). The default behavior for a generic resource is that this requires super-user permission and the default behavior for a table is to check the user's role's delete permission to the table.
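+
+For example, a minimal sketch of overriding one of these checks (the `Report` table and the permissive rule are illustrative; normally the default role checks described above are sufficient):
+
+```javascript
+export class Report extends tables.Report {
+  allowRead(user) {
+    // allow any authenticated user to read, otherwise fall back to the default role check
+    return Boolean(user) || super.allowRead(user);
+  }
+}
+```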
+
+### `getUpdatedTime(): number`
+
+This returns the last updated time of the resource (timestamp of last commit). This is returned as milliseconds from epoch.
+
+### `wasLoadedFromSource(): boolean`
+
+Indicates if the record was loaded from the source. When using caching tables, this indicates that there was a cache miss and the data had to be loaded from the source (or an in-flight request from the source had to be awaited).
+
+### `getContext(): Context`
+
+Returns the context for this resource. The context contains information about the current transaction, the user that initiated this action, and other metadata that should be retained through the life of an action.
+
+#### `Context`
+
+The `Context` object has the following (potential) properties:
+
+* `user` - This is the user object, which includes information about the username, role, and authorizations.
+* `transaction` - The current transaction.
+
+If the current method was triggered by an HTTP request, the following property is also available:
+
+* `lastModified` - This value is used to indicate the last modified or updated timestamp of any resource(s) that are accessed and will inform the response's `ETag` (or `Last-Modified`) header. This can be updated by application code if it knows that modification should cause this timestamp to be updated.
+
+When a resource gets a request through HTTP, the request object is the context, which has the following properties (a short sketch of accessing them follows at the end of this section):
+
+* `url` - The local path/URL of the request (this will not include the protocol or host name, but will start at the path and includes the query string).
+* `method` - The method of the HTTP request.
+* `headers` - This is an object with the headers that were included in the HTTP request. You can access headers by calling `context.headers.get(headerName)`.
+* `responseHeaders` - This is an object with the headers that will be included in the HTTP response. You can set headers by calling `context.responseHeaders.set(headerName, value)`.
+* `pathname` - This provides the path part of the URL (no querystring).
+* `host` - This provides the host name of the request (from the `Host` header).
+* `ip` - This provides the ip address of the client that made the request.
+
+When a resource is accessed as a data source:
+
+* `requestContext` - For resources that are acting as a data source for another resource, this provides access to the context of the resource that is making a request for data from the data source resource.
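+
+A minimal sketch of reading request context information and setting a response header from a table's resource method (the `Report` table and header names are hypothetical):
+
+```javascript
+export class Report extends tables.Report {
+  async get(query) {
+    const context = this.getContext();
+    // read an incoming request header and attach a custom response header
+    const requestedWith = context?.headers?.get('x-requested-with');
+    context?.responseHeaders?.set('x-served-by', 'report-resource');
+    logger.debug(`request for ${context?.pathname} from ${context?.ip} (${requestedWith || 'unknown'})`);
+    return super.get(query);
+  }
+}
+```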
+
+### `operation(operationObject: Object, authorize?: boolean): Promise`
+
+This method is available on tables and will execute a HarperDB operation, using the current table as the target of the operation (the `table` and `database` do not need to be specified). See the [operations API](https://api.harperdb.io/) for available operations that can be performed. You can set the second argument to `true` if you want the current user to be checked for authorization for the operation (if `true`, an error will be thrown if they are not authorized).
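+
+For example, a minimal sketch that runs the existing `describe_table` operation against the current table (the `Dog` table is illustrative):
+
+```javascript
+export class Dog extends tables.Dog {
+  async get(query) {
+    // run an operations-API call scoped to this table; no table/database needs to be specified
+    const description = await this.operation({ operation: 'describe_table' });
+    logger.debug('table description: ' + JSON.stringify(description));
+    return super.get(query);
+  }
+}
+```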
+
+### `allowStaleWhileRevalidate(entry: { version: number, localTime: number, expiresAt: number, value: object }, id): boolean`
+
+For caching tables, this can be defined to allow stale entries to be returned while revalidation is taking place, rather than waiting for revalidation. The `version` is the timestamp/version from the source, the `localTime` is when the resource was last refreshed, the `expiresAt` is when the resource expired and became stale, and the `value` is the last value (the stale value) of the record/resource. All times are in milliseconds since epoch. Returning `true` will allow the current stale value to be returned while revalidation takes place concurrently. Returning `false` will cause the response to wait for the data source or origin to revalidate or provide the latest value first, and then return the latest value.
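+
+For example, a minimal sketch that tolerates briefly stale cache entries (the `CachedQuote` table and the one-minute threshold are arbitrary):
+
+```javascript
+export class CachedQuote extends tables.CachedQuote {
+  allowStaleWhileRevalidate(entry, id) {
+    // serve the stale value if it expired less than a minute ago; otherwise wait for the origin
+    return Date.now() - entry.expiresAt < 60000;
+  }
+}
+```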
+
+## Resource Static Methods and Properties
+
+The Resource class also has static methods that mirror the instance methods with an initial argument that is the id of the record to act on. The static methods are generally the preferred and most convenient method for interacting with tables outside of methods that are directly extending a table.
+
+The get, put, delete, subscribe, and connect methods all have static equivalents. There is also a `static search()` method for specifically handling searching a table with query parameters. By default, the Resource static methods default to calling the instance methods. Again, generally static methods are the preferred way to interact with resources and call them from application code. These methods are available on all user Resource classes and tables.
+
+### `get(id: Id, context?: Resource|Context)`
+
+This will retrieve a resource instance by id. For example, if you want to retrieve comments by id while retrieving a blog post, you could do:
+
+```javascript
+const { Comment } = tables;
+...
+// in class:
+ async get() {
+ for (let commentId of this.commentIds) {
+ let comment = await Comment.get(commentId, this);
+ // now you can do something with the comment record
+ }
+ }
+```
+
+Type definition for `Id`:
+```
+Id = string|number|array
+```
+
+### `put(record: object, context?: Resource|Context): Promise`
+### `put(id: Id, record: object, context?: Resource|Context): Promise`
+
+This will save the provided record or data to this resource. This will fully replace the existing record. Make sure to `await` this function to ensure it finishes execution within the surrounding transaction.
+
+### `patch(recordUpdate: object, context?: Resource|Context): Promise`
+### `patch(id: Id, recordUpdate: object, context?: Resource|Context): Promise`
+
+This will save the provided updates to the record. The `recordUpdate` object's properties will be applied to the existing record, overwriting the existing record's properties and preserving any properties in the record that are not specified in the `recordUpdate` object. Make sure to `await` this function to ensure it finishes execution within the surrounding transaction.
+
+### `delete(id: Id, context?: Resource|Context): Promise`
+
+Deletes this resource's record or data. Make sure to `await` this function to ensure it finishes execution within the surrounding transaction.
+
+### `publish(message: object, context?: Resource|Context): Promise`
+### `publish(topic: Id, message: object, context?: Resource|Context): Promise`
+
+Publishes the given message to the record entry specified by the id in the context. Make sure to `await` this function to ensure it finishes execution within the surrounding transaction.
+
+### `subscribe(subscriptionRequest, context?: Resource|Context): Promise`
+
+Subscribes to a record/resource.
+
+### `search(query: Query, context?: Resource|Context): AsyncIterable`
+
+This will perform a query on this table or collection. The query parameter can be used to specify the desired query.
+
+### `primaryKey`
+
+This property indicates the name of the primary key attribute for a table. You can get the primary key for a record using this property name. For example:
+
+```
+let record34 = await Table.get(34);
+record34[Table.primaryKey]; // returns 34
+```
+
+There are additional methods that are only available on table classes (which are a type of resource).
+
+### `Table.sourcedFrom(Resource, options)`
+
+This defines the source for a table. This allows a table to function as a cache for an external resource. When a table is configured to have a source, any request for a record that is not found in the table will be delegated to the source resource to retrieve and the result will be cached/stored in the table. All writes to the table will also first be delegated to the source (if the source defines write functions like `put`, `delete`, etc.). The options parameter can include an `expiration` property that will configure the table with a time-to-live expiration window for automatic deletion or invalidation of older entries.
+
+If the source resource implements subscription support, real-time invalidation can be performed to ensure the cache is guaranteed to be fresh (and this can eliminate or reduce the need for time-based expiration of data).
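+
+For example, a minimal sketch of a caching setup (the `ThirdPartyPrices` class, the `Prices` table, the URL, and the assumption that `expiration` is in seconds are all illustrative; consult the caching documentation for exact option semantics):
+
+```javascript
+class ThirdPartyPrices extends Resource {
+  async get() {
+    // hypothetical upstream call; replace with a real data source
+    const response = await fetch(`https://example.com/prices/${this.getId()}`);
+    return response.json();
+  }
+}
+
+// cache upstream records in the Prices table, expiring entries after roughly an hour
+tables.Prices.sourcedFrom(ThirdPartyPrices, { expiration: 3600 });
+```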
+
+### `parsePath(path, context, query)`
+
+This is called by static methods when they are responding to a URL (from HTTP request, for example), and translates the path to an id. By default, this will convert a multi-segment path to multipart id (an array), which facilitates hierarchical id-based data access, and also parses `.property` suffixes for accessing properties and specifying preferred content type in the URL. However, in some situations you may wish to preserve the path directly as a string. You can override `parsePath` for simpler path to id preservation:
+
+```javascript
+ static parsePath(path) {
+ return path; // return the path as the id
+ }
+```
+
+### `isCollection(resource: Resource): boolean`
+
+This returns a boolean indicating if the provided resource instance represents a collection (can return a query result) or a single record/entity.
+
+### Context and Transactions
+
+Whenever you implement an action that is calling other resources, it is recommended that you provide the "context" for the action. This allows a secondary resource to be accessed through the same transaction, preserving atomicity and isolation.
+
+This also allows timestamps that are accessed during resolution to be used to determine the overall last updated timestamp, which informs the header timestamps (which facilitates accurate client-side caching). The context also maintains user, session, and request metadata information that is communicated so that contextual request information (like headers) can be accessed and any writes are properly attributed to the correct user.
+
+When using an exported resource class, the REST interface will automatically create a context for you with a transaction and request metadata, and you can pass this to other actions by simply including `this` as the context argument (second argument) to the static methods.
+
+For example, suppose we have a method to post a comment on a blog post, which should add the comment to a separate comment table and also update an array of comment IDs on the blog record. We might do this:
+
+```javascript
+const { Comment } = tables;
+
+export class BlogPost extends tables.BlogPost {
+ async post(comment) {
+ // add a comment record to the comment table, using this resource as the source for the context
+ await Comment.put(comment, this);
+ this.comments.push(comment.id); // add the id for the record to our array of comment ids
+ // Both of these actions will be committed atomically as part of the same transaction
+ }
+}
+```
+
+Please see the [transaction documentation](./transactions) for more information on how transactions work in HarperDB.
+
+### Query
+
+The `get`/`search` methods accept a Query object that can be used to specify a query for data. The query is an object that has the following properties, which are all optional:
+
+* `conditions`: This is an array of objects that specify the conditions to use the match records (if conditions are omitted or it is an empty array, this is a search for everything in the table). Each condition object has the following properties:
+ * `attribute`: Name of the property/attribute to match on.
+ * `value`: The value to match.
+ * `comparator`: This can specify how the value is compared. This defaults to "equals", but can also be "greater\_than", "greater\_than\_equal", "less\_than", "less\_than\_equal", "starts\_with", "contains", "ends\_with", "between", and "not_equal".
+* `operator`: Specifies if the conditions should be applied as an `"and"` (records must match all conditions), or as an "or" (records must match at least one condition). This defaults to `"and"`.
+* `limit`: This specifies the limit of the number of records that should be returned from the query.
+* `offset`: This specifies the number of records that should be skipped prior to returning records in the query. This is often used with `limit` to implement "paging" of records.
+* `select`: This specifies the specific properties that should be included in each record that is returned. This can be a string, to specify that the value of the named property should be returned for each element in the results, or an array, to specify a set of properties that should be included in the returned objects. The array can also set `select.asArray = true`, in which case the query results will return arrays of the specified properties' values instead of objects; this can be used to return more compact results.
+
+The query results are returned as an `AsyncIterable`. In order to access the elements of the query results, you must use a `for await` loop (it does _not_ return an array, and you cannot access the results by index).
+
+For example, we could do a query like:
+
+```javascript
+let { Product } = tables;
+let results = Product.search({
+ conditions: [
+ { attribute: 'rating', value: 4.5, comparator: 'greater_than' },
+ { attribute: 'price', value: 100, comparator: 'less_than' },
+ ],
+ offset: 20,
+ limit: 10,
+ select: ['id', 'name', 'price', 'rating'],
+})
+for await (let record of results) {
+ // iterate through each record in the query results
+}
+```
+
+`AsyncIterable`s can be returned from resource methods and will be properly serialized in responses. When a query is performed, a read transaction is opened/reserved until the query results are iterated, either through your own `for await` loop or through serialization. Failing to iterate the results will leave a long-lived read transaction open, which can degrade performance (including write performance), and the transaction may eventually be aborted.
+
+### Interacting with the Resource Data Model
+
+When extending or interacting with table resources, a resource instance that is retrieved and instantiated will be loaded with the record data from its table. You can interact with this record through the resource instance. For any properties that have been defined in the table's schema, you can directly access or modify them through standard property syntax. For example, let's say we defined a product schema:
+
+```graphql
+type Product @table {
+ id: ID @primaryKey
+ name: String
+ rating: Int
+ price: Float
+}
+```
+
+If we have extended this table class with our own `get()` method, we can interact with any of these specified attributes/properties:
+
+```javascript
+export class CustomProduct extends Product {
+ get(query) {
+ let name = this.name; // this is the name of the current product
+ let rating = this.rating; // this is the rating of the current product
+ this.rating = 3; // we can also modify the rating for the current instance
+ // (with a get this won't be saved by default, but will be used when serialized)
+ return super.get(query);
+ }
+}
+```
+
+Likewise, we can interact with resource instances in the same way when retrieving them through the static methods:
+
+```javascript
+let product1 = await Product.get(1);
+let name = product1.name; // this is the name of the product with a primary key of 1
+let rating = product1.rating; // this is the rating of the product with a primary key of 1
+product1.rating = 3; // modify the rating for this instance (this will not be saved without a call to update())
+
+```
+
+If there are additional properties on (some) products that aren't defined in the schema, we can still access them through the resource instance. Since they aren't declared, there won't be getter/setter definitions for direct property access, but we can read properties with the `get(propertyName)` method and modify properties with the `set(propertyName, value)` method:
+
+```javascript
+let product1 = await Product.get(1);
+let additionalInformation = product1.get('additionalInformation'); // get the additionalInformation property value even though it isn't defined in the schema
+product1.set('newProperty', 'some value'); // we can assign any properties we want with set
+```
+
+And likewise, we can do this in an instance method, although you will probably want to use `super.get()`/`super.set()` so you don't have to write extra logic to avoid recursion:
+
+```javascript
+export class CustomProduct extends Product {
+ get(query) {
+ let additionalInformation = super.get('additionalInformation'); // get the additionalInformation property value even though it isn't defined in the schema
+ super.set('newProperty', 'some value'); // we can assign any properties we want with set
+ }
+}
+```
+
+Note that you may also need to use `get`/`set` for properties that conflict with existing method names. For example, if your schema defines an attribute called `getId` (not recommended), you would need to access that property through `get('getId')` and `set('getId', value)`.
+
+If you want to save the changes you make, you can call the `update()` method:
+
+```javascript
+let product1 = await Product.get(1);
+product1.rating = 3;
+product1.set('newProperty', 'some value');
+product1.update(); // save both of these property changes
+```
+
+Updates are automatically saved inside modifying methods like put and post:
+
+```javascript
+export class CustomProduct extends Product {
+ post(data) {
+ this.name = data.name;
+ this.set('description', data.description);
+ // both of these changes will be saved automatically as this transaction commits
+ }
+}
+```
+
+We can also interact with properties in nested objects and arrays, following the same patterns. For example we could define more complex types on our product:
+
+```graphql
+type Product @table {
+ id: ID @primaryKey
+ name: String
+ rating: Int
+ price: Float
+ brand: Brand
+ variations: [Variation]
+}
+type Brand {
+ name: String
+}
+type Variation {
+ name: String
+ price: Float
+}
+```
+
+We can interact with these nested properties:
+
+```javascript
+export class CustomProduct extends Product {
+ post(data) {
+ let brandName = this.brand.name;
+ let firstVariationPrice = this.variations[0].price;
+ let additionalInfoOnBrand = this.brand.get('additionalInfo'); // not defined in schema, but we can still try to access the property
+ // make some changes
+ this.variations.splice(0, 1); // remove first variation
+ this.variations.push({ name: 'new variation', price: 9.99 }); // add a new variation
+ this.brand.name = 'new brand name';
+ // all these changes will be saved
+ }
+}
+```
+
+If you need to delete a property, you can do so with the `delete` method:
+
+```javascript
+let product1 = await Product.get(1);
+product1.delete('additionalInformation');
+product1.update();
+```
+
+You can also get a "plain" object representation of a resource instance by calling `toJSON`, which will return a simple object with all the properties (whether or not they are defined in the schema) as normal, direct properties:
+
+```javascript
+let product1 = await Product.get(1);
+let plainObject = product1.toJSON();
+for (let key in plainObject) {
+  // can iterate through the properties of this record
+}
+```
+
+### Throwing Errors
+
+You may throw errors (and leave them uncaught) from the response methods, and these should be caught and handled by the protocol handler. For REST requests/responses, this will result in an error response. By default, the status code will be 500. You can assign a `statusCode` property to errors to indicate the HTTP status code that should be returned. For example:
+
+```javascript
+if (notAuthorized()) {
+ let error = new Error('You are not authorized to access this');
+ error.statusCode = 403;
+ throw error;
+}
+```
diff --git a/site/versioned_docs/version-4.2/technical-details/reference/storage-algorithm.md b/site/versioned_docs/version-4.2/technical-details/reference/storage-algorithm.md
new file mode 100644
index 00000000..024109a5
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/reference/storage-algorithm.md
@@ -0,0 +1,27 @@
+---
+title: Storage Algorithm
+---
+
+# Storage Algorithm
+
+The HarperDB storage algorithm is fundamental to the HarperDB core functionality, enabling the [Dynamic Schema](./dynamic-schema) and all other user-facing functionality. HarperDB is built on top of Lightning Memory-Mapped Database (LMDB), a key-value store offering industry-leading performance and functionality, which allows our storage algorithm to store data in tables as rows/objects. This document provides additional details on how data is stored within HarperDB.
+
+## Query Language Agnostic
+
+The HarperDB storage algorithm was designed to abstract the data storage from any individual query language. HarperDB currently supports both SQL and NoSQL on top of this storage algorithm, with the ability to add additional query languages in the future. This means data can be inserted via NoSQL and read via SQL while hitting the same underlying data storage.
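+
+For instance, the same record can be written with a NoSQL operation and then read back with SQL. A minimal sketch using the operations API (the `dev.dog` table and its attributes are hypothetical):
+
+```json
+{
+  "operation": "insert",
+  "schema": "dev",
+  "table": "dog",
+  "records": [{ "id": 1, "dog_name": "Harper" }]
+}
+```
+
+```json
+{
+  "operation": "sql",
+  "sql": "SELECT id, dog_name FROM dev.dog WHERE id = 1"
+}
+```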
+
+## ACID Compliant
+
+Utilizing Multi-Version Concurrency Control (MVCC) through LMDB, HarperDB offers ACID compliance independently on each node. Readers and writers operate independently of each other, meaning readers don’t block writers and writers don’t block readers. Each HarperDB table has a single writer process, avoiding deadlocks and assuring that writes are executed in the order in which they were received. HarperDB tables can have multiple reader processes operating at the same time for consistent, high scale reads.
+
+## Universally Indexed
+
+All top-level attributes are automatically indexed immediately upon ingestion. The [HarperDB Dynamic Schema](./dynamic-schema) reflexively creates both the attribute and its index as new schema metadata comes in. Indexes are agnostic of datatype, honoring the following order: booleans, numbers ordered naturally, strings ordered lexically. Within the LMDB implementation, table records are grouped together into a single LMDB environment file, where each attribute index is a sub-database (dbi) inside that environment file. An example of the indexing scheme can be seen below.
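+
+In practice, this means any top-level attribute can be searched immediately, without defining an index first. A minimal sketch using the operations API (the `dev.dog` table and its attributes are hypothetical):
+
+```json
+{
+  "operation": "search_by_value",
+  "schema": "dev",
+  "table": "dog",
+  "search_attribute": "dog_name",
+  "search_value": "Harper",
+  "get_attributes": ["*"]
+}
+```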
+
+## Additional LMDB Benefits
+
+HarperDB inherits both functional and performance benefits by implementing LMDB as the underlying key-value store. Data is memory-mapped, which enables quick data access without data duplication. All writers are fully serialized, making writes deadlock-free. LMDB is built to maximize operating system features and functionality, fully exploiting buffer cache and built to run in CPU cache. To learn more about LMDB, visit their documentation.
+
+## HarperDB Indexing Example (Single Table)
+
+
diff --git a/site/versioned_docs/version-4.2/technical-details/reference/transactions.md b/site/versioned_docs/version-4.2/technical-details/reference/transactions.md
new file mode 100644
index 00000000..8dbb70ca
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/reference/transactions.md
@@ -0,0 +1,40 @@
+---
+title: Transactions
+---
+
+# Transactions
+
+Transactions are an important part of robust handling of data in data-driven applications. HarperDB provides ACID-compliant support for transactions, allowing for guaranteed atomic, consistent, and isolated data handling within transactions, with durability guarantees on commit. Understanding how transactions are tracked and behave is important for properly leveraging transactional support in HarperDB. For most operations this is very intuitive: each HTTP request is executed in a transaction, so when multiple actions are executed in a single request, they are normally automatically included in the same transaction.
+
+Transactions span a database. Once a read snapshot is started, it is an atomic snapshot of all the tables in a database. And writes that span multiple tables in the database will all be committed atomically together (no writes in one table will be visible before writes in another table in the same database). If a transaction is used to access or write data in multiple databases, there will actually be a separate database transaction used for each database, and there is no guarantee of atomicity between separate transactions in separate databases. This can be an important consideration when deciding if and how tables should be organized into different databases.
+
+Because HarperDB is designed to be a low-latency distributed database, locks are avoided in data handling; consequently, transactions do not lock data within the transaction. When a transaction starts, it will provide a read snapshot of the database for any retrievals or queries, which means all reads will be performed on a single version of the database isolated from any other writes that are concurrently taking place. Within a transaction, all writes are aggregated and atomically written on commit. These writes are all isolated (from other transactions) until committed, and all become visible atomically. However, because transactions are non-locking, it is possible that writes from other transactions may occur between when reads are performed and when the writes are committed (at which point the last write will win for any records that have been written concurrently). Support for locks in transactions is planned for a future release.
+
+Transactions can also be explicitly started using the `transaction` global function that is provided in the HarperDB environment:
+
+## `transaction(context?, callback: (transaction) => any): Promise`
+
+This executes the callback in a transaction, providing a context that can be used for any resource methods that are called. This returns a promise for when the transaction has been committed. The callback itself may be asynchronous (return a promise), allowing for asynchronous activity within the transaction. This is useful for starting a transaction when your code is not already running within a transaction (in an HTTP request handler, a transaction will typically already be started). For example, if we wanted to run an action on a timer that periodically loads data, we could ensure that the data is loaded in single transactions like this (note that HDB is multi-threaded and if we do a timer-based job, we very likely want it to only run in one thread):
+
+```javascript
+import { isMainThread } from 'worker_threads';
+import { tables } from 'harperdb';
+const { MyTable } = tables;
+if (isMainThread) // only run the timer on the main thread
+  setInterval(async () => {
+    let someData = await (await fetch(... some URL ...)).json();
+    transaction((txn) => {
+      for (let item of someData) { // iterate the fetched records
+        MyTable.put(item, txn);
+      }
+    });
+  }, 3600000); // every hour
+```
+
+You can provide your own context object for the transaction to attach to. If you call `transaction` with a context that already has a transaction started, it will simply use the current transaction, execute the callback and immediately return (this can be useful for ensuring that a transaction has started).
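+
+For example, a minimal sketch of reusing a context across nested `transaction` calls, using the `MyTable` table from the example above:
+
+```javascript
+let context = {};
+await transaction(context, async () => {
+  await MyTable.put({ id: 1, name: 'first' }, context);
+  // this nested call sees the transaction already attached to context,
+  // so it runs its callback in the same transaction and returns
+  await transaction(context, () => MyTable.put({ id: 2, name: 'second' }, context));
+});
+```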
+
+Once the transaction callback is completed (for non-nested transaction calls), the transaction will commit, and if the callback throws an error, the transaction will abort. However, the callback is called with the `transaction` object, which also provides the following methods and property:
+
+* `commit(): Promise` - Commits the current transaction. The transaction will be committed once the returned promise resolves.
+* `abort(): void` - Aborts the current transaction and resets it.
+* `resetReadSnapshot(): void` - Resets the read snapshot for the transaction, resetting to the latest data in the database.
+* `timestamp: number` - This is the timestamp associated with the current transaction.
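+
+For instance, a minimal sketch of using these methods inside the callback, again using the `MyTable` table from the example above:
+
+```javascript
+await transaction(async (txn) => {
+  let record = await MyTable.get(1, txn);
+  if (!record) {
+    txn.abort(); // abort and reset the transaction; nothing is written
+    return;
+  }
+  MyTable.put({ id: 1, count: (record.count || 0) + 1 }, txn);
+  await txn.commit(); // explicitly commit (otherwise the transaction commits when the callback completes)
+});
+```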
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/End-of-Life.md b/site/versioned_docs/version-4.2/technical-details/release-notes/End-of-Life.md
new file mode 100644
index 00000000..ca15f713
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/End-of-Life.md
@@ -0,0 +1,14 @@
+---
+title: HarperDB Software Lifecycle Schedules
+---
+
+# HarperDB Software Lifecycle Schedules
+
+The lifecycle schedules below form a part of HarperDB’s Support Policies. They include Major and Minor Releases that have reached their end-of-life date within the past 3 years.
+
+| **Release** | **Release Date** | **End of Life Date** |
+|-------------|------------------|----------------------|
+| 3.2 | 6/22 | 6/25 |
+| 3.3 | 9/22 | 9/25 |
+| 4.0 | 1/23 | 1/26 |
+| 4.1 | 4/23 | 4/26 |
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/index.md b/site/versioned_docs/version-4.2/technical-details/release-notes/index.md
new file mode 100644
index 00000000..f44555ef
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/index.md
@@ -0,0 +1,99 @@
+---
+title: Release Notes
+---
+
+# Release Notes
+
+### Current Release
+
+[Meet Tucker](../../technical-details/release-notes/v4-tucker) Our 4th Release Pup
+
+[4.2.8 Tucker](./v4-tucker/4.2.8)
+
+[4.2.7 Tucker](./v4-tucker/4.2.7)
+
+[4.2.6 Tucker](./v4-tucker/4.2.6)
+
+[4.2.5 Tucker](./v4-tucker/4.2.5)
+
+[4.2.4 Tucker](./v4-tucker/4.2.4)
+
+[4.2.3 Tucker](./v4-tucker/4.2.3)
+
+[4.2.2 Tucker](./v4-tucker/4.2.2)
+
+[4.2.1 Tucker](./v4-tucker/4.2.1)
+
+[4.2.0 Tucker](./v4-tucker/4.2.0)
+
+[4.1.2 Tucker](./v4-tucker/4.1.2)
+
+[4.1.1 Tucker](./v4-tucker/4.1.1)
+
+[4.1.0 Tucker](./v4-tucker/4.1.0)
+
+[4.0.7 Tucker](./v4-tucker/4.0.7)
+
+[4.0.6 Tucker](./v4-tucker/4.0.6)
+
+[4.0.5 Tucker](./v4-tucker/4.0.5)
+
+[4.0.4 Tucker](./v4-tucker/4.0.4)
+
+[4.0.3 Tucker](./v4-tucker/4.0.3)
+
+[4.0.2 Tucker](./v4-tucker/4.0.2)
+
+[4.0.1 Tucker](./v4-tucker/4.0.1)
+
+[4.0.0 Tucker](./v4-tucker/4.0.0)
+
+### Past Releases
+
+[Meet Monkey](../../technical-details/release-notes/v3-monkey) Our 3rd Release Pup
+
+[3.2.1 Monkey](./v3-monkey/3.2.1)
+
+[3.2.0 Monkey](./v3-monkey/3.2.0)
+
+[3.1.5 Monkey](./v3-monkey/3.1.5)
+
+[3.1.4 Monkey](./v3-monkey/3.1.4)
+
+[3.1.3 Monkey](./v3-monkey/3.1.3)
+
+[3.1.2 Monkey](./v3-monkey/3.1.2)
+
+[3.1.1 Monkey](./v3-monkey/3.1.1)
+
+[3.1.0 Monkey](./v3-monkey/3.1.0)
+
+[3.0.0 Monkey](./v3-monkey/3.0.0)
+
+***
+
+[Meet Penny](../../technical-details/release-notes/v2-penny) Our 2nd Release Pup
+
+[2.3.1 Penny](./v2-penny/2.3.1)
+
+[2.3.0 Penny](./v2-penny/2.3.0)
+
+[2.2.3 Penny](./v2-penny/2.2.3)
+
+[2.2.2 Penny](./v2-penny/2.2.2)
+
+[2.2.0 Penny](./v2-penny/2.2.0)
+
+[2.1.1 Penny](./v2-penny/2.1.1)
+
+***
+
+[Meet Alby](../../technical-details/release-notes/v1-alby) Our 1st Release Pup
+
+[1.3.1 Alby](./v1-alby/1.3.1)
+
+[1.3.0 Alby](./v1-alby/1.3.0)
+
+[1.2.0 Alby](./v1-alby/1.2.0)
+
+[1.1.0 Alby](./v1-alby/1.1.0)
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/1.1.0.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/1.1.0.md
new file mode 100644
index 00000000..b42514a2
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/1.1.0.md
@@ -0,0 +1,77 @@
+---
+title: 1.1.0
+sidebar_position: 89899
+---
+
+### HarperDB 1.1.0, Alby Release
+4/18/2018
+
+**Features**
+
+* Users & Roles:
+
+ * Limit/Assign access to all HarperDB operations
+
+ * Limit/Assign access to schemas, tables & attributes
+
+ * Limit/Assign access to specific SQL operations (`INSERT`, `UPDATE`, `DELETE`, `SELECT`)
+
+* Enhanced SQL parser
+
+ * Added extensive ANSI SQL Support.
+
+ * Added Array function, which allows for converting relational data into Object/Hierarchical data
+
+ * `Distinct_Array` Function: allows for removing duplicates in the Array function.
+
+  * Enhanced SQL Validation: Improved validation around the structure of SQL, validating the schema, etc.
+
+ * 10x performance improvement on SQL statements.
+
+* Export Function: can now call a NoSQL/SQL search and have it export to CSV or JSON.
+
+* Added upgrade function to CLI
+
+* Added ability to perform bulk update from CSV
+
+* Created landing page for HarperDB.
+
+* Added CORS support to HarperDB
+
+**Fixes**
+
+* Fixed memory leak in CSV bulk loads
+
+* Corrected error when attempting to perform a `SQL DELETE`
+
+* Added further validation to NoSQL `UPDATE` to validate schema & table exist
+
+* Fixed install issue where, if part of the install path did not exist, the install would silently fail.
+
+* Fixed issues with replicated data when one of the replicas is down
+
+* Removed logging of initial user’s credentials during install
+
+* Can now use reserved words as aliases in SQL
+
+* Removed user(s) password in results when calling `list_users`
+
+* Corrected forwarding of operations to other nodes in a cluster
+
+* Corrected lag in schema meta-data passing to other nodes in a cluster
+
+* Drop table & schema now move the schema or table to the trash folder under the database folder for later permanent deletion.
+
+* Bulk inserts no longer halt the entire operation if some records already exist; instead, the response includes the hashes of records that were skipped.
+
+* Added ability to accept EULA from command line
+
+* Corrected `search_by_value` not searching on the correct attribute
+
+* Added ability to increase the timeout of a request by adding `SERVER_TIMEOUT_MS` to config/settings.js
+
+* Added error handling for errors resulting from SQL calculations.
+
+* Standardized error responses as JSON.
+
+* Corrected internal process generation to not allow more processes than machine has cores.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/1.2.0.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/1.2.0.md
new file mode 100644
index 00000000..095bf239
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/1.2.0.md
@@ -0,0 +1,42 @@
+---
+title: 1.2.0
+sidebar_position: 89799
+---
+
+### HarperDB 1.2.0, Alby Release
+7/10/2018
+
+**Features**
+
+* Time to Live: Conserve the resources of your edge device by setting data on devices to live for a specific period of time.
+* Geo: HarperDB has implemented turf.js into its SQL parser to enable geo based analytics.
+* Jobs: CSV Data loads, Exports & Time to Live now all run as background jobs.
+* Exports: Perform queries that export into JSON or CSV and save to disk or S3.
+
+
+**Fixes**
+
+* Fixed issue where CSV data loads incorrectly reported the number of records loaded.
+* Added validation to stop `BETWEEN` operations in SQL.
+* Updated logging to not include internal variables in the logs.
+* Cleaned up `add_role` response to not include internal variables.
+* Removed old and unused dependencies.
+* Build out further unit tests and integration tests.
+* Fixed https to handle certificates properly.
+* Improved stability of clustering & replication.
+* Corrected issue where Objects and Arrays were not casting properly in `SQL SELECT` response.
+* Fixed issue where Blob text was not being returned from `SQL SELECT`s.
+* Fixed error being returned when querying on table with no data, now correctly returns empty array.
+* Improved performance in SQL when searching on exact values.
+* Fixed error when `./harperdb stop` is called.
+* Fixed logging issue causing instability in installer.
+* Fixed `read_log` operation to accept date time.
+* Added permissions checking to `export_to_s3`.
+* Added ability to run SQL on `SELECT` without a `FROM`.
+* Fixed issue where updating a user’s password was not encrypting properly.
+* Fixed `user_guide.html` to point to readme on git repo.
+* Created option to have HarperDB run as a foreground process.
+* Updated `user_info` to return the correct role for a user.
+* Fixed issue where HarperDB would not stop if the database root was deleted.
+* Corrected error message on insert if an invalid schema is provided.
+* Added permissions checks for user & role operations.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/1.3.0.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/1.3.0.md
new file mode 100644
index 00000000..ad196159
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/1.3.0.md
@@ -0,0 +1,27 @@
+---
+title: 1.3.0
+sidebar_position: 89699
+---
+
+### HarperDB 1.3.0, Alby Release
+11/2/2018
+
+**Features**
+
+* Upgrade: Upgrade to newest version via command line.
+* SQL Support: Added `IS NULL` for SQL parser.
+* Added attribute validation to search operations.
+
+
+**Fixes**
+
+* Fixed `SELECT` calculations, i.e. `SELECT 2+2`.
+* Fixed `SELECT` with `OR` not returning expected results.
+* No longer allowing reserved words for schema and table names.
+* Corrected process interruptions from improper SQL statements.
+* Improved message handling between spawned processes that replace killed processes.
+* Enhanced error handling for updates to tables that do not exist.
+* Fixed error handling for NoSQL responses when `get_attributes` is provided with invalid attributes.
+* Fixed issue with new columns not being updated properly in update statements.
+* Now validating roles, tables and attributes when creating or updating roles.
+* Fixed an issue where in some cases `undefined` was being returned after dropping a role
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/1.3.1.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/1.3.1.md
new file mode 100644
index 00000000..77e3ffe4
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/1.3.1.md
@@ -0,0 +1,29 @@
+---
+title: 1.3.1
+sidebar_position: 89698
+---
+
+### HarperDB 1.3.1, Alby Release
+2/26/2019
+
+**Features**
+
+* Clustering connection direction appointment
+* Foundations for threading/multi processing
+* UUID autogen for hash attributes that were not provided
+* Added cluster status operation
+
+
+**Bug Fixes and Enhancements**
+
+* More logging
+* Clustering communication enhancements
+* Clustering queue ordering by timestamps
+* Cluster reconnection enhancements
+* Number of system core(s) detection
+* Node LTS (10.15) compatibility
+* Update/Alter users enhancements
+* General performance enhancements
+* Warning is logged if different versions of HarperDB are connected via clustering
+* Fixed need to restart after user creation/alteration
+* Fixed SQL error that occurred on selecting from an empty table
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/_category_.json b/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/_category_.json
new file mode 100644
index 00000000..e33195ec
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "HarperDB Alby (Version 1)",
+ "position": -1
+}
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/index.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/index.md
new file mode 100644
index 00000000..ae17d022
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v1-alby/index.md
@@ -0,0 +1,13 @@
+---
+title: HarperDB Alby (Version 1)
+---
+
+# HarperDB Alby (Version 1)
+
+Did you know our release names are dedicated to employee pups? For our first release, Alby was our pup.
+
+Here is a bit about Alby:
+
+
+
+_Hi, I am Alby. My mom is Kaylan Stock, Director of Marketing at HarperDB. I am a 9-year-old Great Dane mix who loves sun bathing, going for swims, and wreaking havoc on the local squirrels. My favorite snack is whatever you are eating, and I love a good butt scratch!_
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.1.1.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.1.1.md
new file mode 100644
index 00000000..e1314a5f
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.1.1.md
@@ -0,0 +1,27 @@
+---
+title: 2.1.1
+sidebar_position: 79898
+---
+
+### HarperDB 2.1.1, Penny Release
+05/22/2020
+
+**Highlights**
+
+* CORE-1007 Added the ability to perform `SQL INSERT` & `UPDATE` with function calls & expressions on values.
+* CORE-1023 Fixed minor bug in final SQL step incorrectly trying to translate ordinals to alias in `ORDER BY` statement.
+* CORE-1020 Fixed bug allowing 'null' and 'undefined' string values to be passed in as valid hash values.
+* CORE-1006 Added SQL functionality that enables `JOIN` statements across different schemas.
+* CORE-1005 Implemented JSONata library to handle our JSON document search functionality in SQL, creating the `SEARCH_JSON` function.
+* CORE-1009 Updated schema validation to allow all printable ASCII characters to be used in schema/table/attribute names, except forward slashes and backticks. The same rules now apply to hash attribute values.
+* CORE-1003 Fixed handling of `ORDER BY` statements with function aliases.
+* CORE-1004 Fixed bug related to `SELECT *` on `JOIN` queries with table columns with the same name.
+* CORE-996 Fixed an issue where the `transact_to_cluster` flag is lost for CSV URL loads, fixed an issue where new attributes created in CSV bulk load do not sync to the cluster.
+* CORE-994 Added new operation `system_information`. This operation returns info & metrics for the OS, time, memory, cpu, disk, network.
+* CORE-993 Added new custom date functions for AlaSQL & UTC updates.
+* CORE-991 Changed jobs to spawn a new process which will run the intended job without impacting a main HarperDB process.
+* CORE-992 HTTPS enabled by default.
+* CORE-990 Updated `describe_table` to add the record count for the table for LMDB data storage.
+* CORE-989 Killed the socket cluster processes prior to HarperDB processes to eliminate a false uptime.
+* CORE-975 Updated time values set by SQL Date Functions to be in epoch format.
+* CORE-974 Added date functions to `SQL SELECT` column alias functionality.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.2.0.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.2.0.md
new file mode 100644
index 00000000..267168cd
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.2.0.md
@@ -0,0 +1,43 @@
+---
+title: 2.2.0
+sidebar_position: 79799
+---
+
+### HarperDB 2.2.0, Penny Release
+08/24/2020
+
+**Features/Updates**
+
+* CORE-997 Updated the data format for CSV data loads being sync'd across a cluster to take up less resources
+* CORE-1018 Adds SQL functionality for `BETWEEN` statements
+* CORE-1032 Updates permissions to allow regular users (i.e. non-super users) to call the `get_job` operation
+* CORE-1036 On create/drop table we auto create/drop the related transaction environments for the schema.table
+* CORE-1042 Built raw functions to write to a table's transaction log for insert/update/delete operations
+* CORE-1057 Implemented write transactions in LMDB create/update/delete functions
+* CORE-1048 Adds `SEARCH` wildcard handling for role permissions standards
+* CORE-1059 Added config setting to disable transaction logging for an instance
+* CORE-1076 Adds permissions filter to describe operations
+* CORE-1043 Change clustering catchup to use the new transaction log
+* CORE-1052 Removed word "master" from source
+* CORE-1061 Added new operation called `delete_transactions_before`; this will tail a transaction log for a specific schema / table
+* CORE-1040 On HarperDB startup, make sure all tables have a transaction environment
+* CORE-1055 Added 2 new settings to change the server `headersTimeout` & `keepAliveTimeout` from the config file
+* CORE-1044 Created new operation `read_transaction_log` which will allow a user to get transactions for a table by `timestamp`, `username`, or `hash_value`
+* CORE-1089 Added new attribute to `system_information` for table/transaction log data size in bytes & transaction log record count
+* CORE-1101 Fix to store empty strings rather than considering them null & fix to be able to search on empty strings in SQL/NoSQL.
+* CORE-1054 Updates permissions object to remove delete attribute permission and update table attribute permission key to `attribute_permissions`
+* CORE-1092 Do not allow the `__createdtime__` to be updated
+* CORE-1085 Updates create schema/table & drop schema/table/attribute operations permissions to require super user role and adds integration tests to validate
+* CORE-1071 Updates response messages and status codes from `describe_schema` and `describe_table` operations to provide standard language/status code when a schema item is not found
+* CORE-1049 Updates response message for SQL update op with no matching rows
+* CORE-1096 Added tracking of the origin in the transaction log. This origin object stores the node name, timestamp of the transaction from the originating node & the user.
+
+**Bug Fixes**
+
+* CORE-1028 Fixes bug for simple `SQL SELECT` queries not returning aliases and incorrectly returning hash values when not requested in query
+* CORE-1037 Fixed an issue where values with a leading zero, i.e. 00123, were converted to numbers rather than being honored as strings.
+* CORE-1063 Updates permission error response shape to consolidate issues into individual objects per schema/table combo
+* CORE-1098 Fixed an issue where transaction environments were remaining in the global cache after being dropped.
+* CORE-1086 Fixed issue where responses from insert/update were incorrect with skipped records.
+* CORE-1079 Fixes SQL bugs around invalid schema/table and special characters in `WHERE` clause
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.2.2.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.2.2.md
new file mode 100644
index 00000000..827c63db
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.2.2.md
@@ -0,0 +1,16 @@
+---
+title: 2.2.2
+sidebar_position: 79797
+---
+
+### HarperDB 2.2.2, Penny Release
+10/27/2020
+
+* CORE-1154 Allowed transaction logging to be disabled even if clustering is enabled.
+* CORE-1153 Fixed issue where `delete_files_before` was writing to transaction log.
+* CORE-1152 Fixed issue where no more than 4 HarperDB forks would be created.
+* CORE-1112 Adds handling for system timestamp attributes in permissions.
+* CORE-1131 Adds better handling for checking perms on operations with action value in JSON.
+* CORE-1113 Fixes validation bug checking for super user/cluster user permissions and other permissions.
+* CORE-1135 Adds validation for valid keys in role API operations.
+* CORE-1073 Adds new `import_from_s3` operation to API.
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.2.3.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.2.3.md
new file mode 100644
index 00000000..eca953e2
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.2.3.md
@@ -0,0 +1,9 @@
+---
+title: 2.2.3
+sidebar_position: 79796
+---
+
+### HarperDB 2.2.3, Penny Release
+11/16/2020
+
+* CORE-1158 Performance improvements to core delete function and configuration of `delete_files_before` to run in batches with a pause in between.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.3.0.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.3.0.md
new file mode 100644
index 00000000..2b248490
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.3.0.md
@@ -0,0 +1,22 @@
+---
+title: 2.3.0
+sidebar_position: 79699
+---
+
+### HarperDB 2.3.0, Penny Release
+12/03/2020
+
+**Features/Updates**
+
+* CORE-1191, CORE-1190, CORE-1125, CORE-1157, CORE-1126, CORE-1140, CORE-1134, CORE-1123, CORE-1124, CORE-1122 Added JWT Authentication option (See documentation for more information)
+* CORE-1128, CORE-1143, CORE-1140, CORE-1129 Added `upsert` operation
+* CORE-1187 Added `get_configuration` operation which allows admins to view their configuration settings.
+* CORE-1175 Added new internal LMDB function to copy an environment for use in future features.
+* CORE-1166 Updated packages to address security vulnerabilities.
+
+**Bug Fixes**
+
+* CORE-1195 Modified `drop_attribute` to drop after data cleanse completes.
+* CORE-1149 Fix SQL bug regarding self joins and updates alasql to 0.6.5 release.
+* CORE-1168 Fix inconsistent invalid schema/table errors.
+* CORE-1162 Fix bug which caused `delete_files_before` to cause tables to grow in size due to an open cursor issue.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.3.1.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.3.1.md
new file mode 100644
index 00000000..51291a01
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/2.3.1.md
@@ -0,0 +1,12 @@
+---
+title: 2.3.1
+sidebar_position: 79698
+---
+
+### HarperDB 2.3.1, Penny Release
+1/29/2021
+
+**Bug Fixes**
+
+* CORE-1218 A bug in HarperDB 2.3.0 was identified related to manually calling the `create_attribute` operation. This bug caused secondary indexes to be overwritten by the most recently inserted or updated value for the index, thereby causing a search operation filtered with that index to only return the most recently inserted/updated row. Note, this issue does not affect attributes that are reflexively/automatically created. It only affects attributes created using `create_attribute`. To resolve this issue in 2.3.0 or earlier, drop and recreate your table using reflexive attribute creation. In 2.3.1, drop and recreate your table and use either reflexive attribute creation or `create_attribute`.
+* CORE-1219 Increased maximum table attributes from 1000 to 10000
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/_category_.json b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/_category_.json
new file mode 100644
index 00000000..285eecf7
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "HarperDB Penny (Version 2)",
+ "position": -2
+}
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/index.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/index.md
new file mode 100644
index 00000000..23bd15ec
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v2-penny/index.md
@@ -0,0 +1,13 @@
+---
+title: HarperDB Penny (Version 2)
+---
+
+# HarperDB Penny (Version 2)
+
+Did you know our release names are dedicated to employee pups? For our second release, Penny was the star.
+
+Here is a bit about Penny:
+
+
+
+_Hi I am Penny! My dad is Kyle Bernhardy, the CTO of HarperDB. I am a nine-year-old Whippet who lives for running hard and fast while exploring the beautiful terrain of Colorado. My favorite activity is chasing birds along with afternoon snoozes in a sunny spot in my backyard._
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.0.0.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.0.0.md
new file mode 100644
index 00000000..2907ee6c
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.0.0.md
@@ -0,0 +1,31 @@
+---
+title: 3.0.0
+sidebar_position: 69999
+---
+
+### HarperDB 3.0, Monkey Release
+5/18/2021
+
+**Features/Updates**
+
+* CORE-1217, CORE-1226, CORE-1232 Create new `search_by_conditions` operation.
+* CORE-1304 Upgrade to Node 12.22.1.
+* CORE-1235 Adds new upgrade/install functionality.
+* CORE-1206, CORE-1248, CORE-1252 Implement `lmdb-store` library for optimized performance.
+* CORE-1062 Added alias operation for `delete_files_before`, named `delete_records_before`.
+* CORE-1243 Change `HTTPS_ON` settings value to false by default.
+* CORE-1189 Implement fastify web server, resulting in improved performance.
+* CORE-1221 Update user API to use role name instead of role id.
+* CORE-1225 Updated dependencies to eliminate npm security warnings.
+* CORE-1241 Adds 3.0 update directive and refactors/fixes update functionality.
+
+**Bug Fixes**
+
+* CORE-1299 Remove all references to the `PROJECT_DIR` setting. This setting is problematic when using Node version managers, upgrading the version of Node, and then installing a new instance of HarperDB.
+* CORE-1288 Fix bug with drop table/schema that was causing 'env required' error log.
+* CORE-1285 Update warning log when trying to create an attribute that already exists.
+* CORE-1254 Added logic to manage data collisions in clustering.
+* CORE-1212 Add pre-check to `drop_user` that returns error if user doesn't exist.
+* CORE-1114 Update response code and message from `add_user` when user already exists.
+* CORE-1111 Update response from `create_attribute` to match the create schema/table response.
+* CORE-1205 Fixed bug that prevented schema/table from being dropped if name was a number or had a wildcard value in it. Updated validation for insert, upsert and update.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.0.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.0.md
new file mode 100644
index 00000000..148690f6
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.0.md
@@ -0,0 +1,23 @@
+---
+title: 3.1.0
+sidebar_position: 69899
+---
+
+### HarperDB 3.1.0, Monkey Release
+8/24/2021
+
+**Features/Updates**
+
+* CORE-1320, CORE-1321, CORE-1323, CORE-1324 Version 1.0 of HarperDB Custom Functions
+* CORE-1275, CORE-1276, CORE-1278, CORE-1279, CORE-1280, CORE-1282, CORE-1283, CORE-1305, CORE-1314 IPC server for communication between HarperDB processes, including HarperDB, HarperDB Clustering, and HarperDB Functions
+* CORE-1352, CORE-1355, CORE-1356, CORE-1358 Implement pm2 for HarperDB process management
+* CORE-1292, CORE-1308, CORE-1312, CORE-1334, CORE-1338 Updated installation process to start HarperDB immediately on install and to accept all config settings via environment variable or command line arguments
+* CORE-1310 Updated licensing functionality
+* CORE-1301 Updated validation for performance improvement
+* CORE-1359 Add `hdb-response-time` header which returns the HarperDB response time in milliseconds
+* CORE-1330, CORE-1309 New config settings: `LOG_TO_FILE`, `LOG_TO_STDSTREAMS`, `IPC_SERVER_PORT`, `RUN_IN_FOREGROUND`, `CUSTOM_FUNCTIONS`, `CUSTOM_FUNCTIONS_PORT`, `CUSTOM_FUNCTIONS_DIRECTORY`, `MAX_CUSTOM_FUNCTION_PROCESSES`
+
+**Bug Fixes**
+
+* CORE-1315 Corrected issue in HarperDB restart scenario
+* CORE-1370 Update some of the validation error handlers so that they don't log full stack
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.1.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.1.md
new file mode 100644
index 00000000..0adbeb21
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.1.md
@@ -0,0 +1,18 @@
+---
+title: 3.1.1
+sidebar_position: 69898
+---
+
+### HarperDB 3.1.1, Monkey Release
+9/23/2021
+
+**Features/Updates**
+
+* CORE-1393 Added utility function to add settings from env/cmd vars to the settings file on every run/restart
+* CORE-1395 Create a setting which allows the local Studio to be served from an instance of HarperDB
+* CORE-1397 Update the stock 404 response to not return the request URL
+* General updates to optimize Docker container
+
+**Bug Fixes**
+
+* CORE-1399 Added fixes for complex SQL alias issues
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.2.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.2.md
new file mode 100644
index 00000000..f1c192b6
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.2.md
@@ -0,0 +1,15 @@
+---
+title: 3.1.2
+sidebar_position: 69897
+---
+
+### HarperDB 3.1.2, Monkey Release
+10/21/2021
+
+**Features/Updates**
+
+* Updated the installation ASCII art to reflect the new HarperDB logo
+
+**Bug Fixes**
+
+* CORE-1408 Corrects issue where `drop_attribute` was not properly setting the LMDB version number causing tables to behave unexpectedly
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.3.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.3.md
new file mode 100644
index 00000000..2d484f8d
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.3.md
@@ -0,0 +1,11 @@
+---
+title: 3.1.3
+sidebar_position: 69896
+---
+
+### HarperDB 3.1.3, Monkey Release
+1/14/2022
+
+**Bug Fixes**
+
+* CORE-1446 Fix for scans on indexes larger than 1 million entries causing queries to never return
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.4.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.4.md
new file mode 100644
index 00000000..ae0074fd
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.4.md
@@ -0,0 +1,11 @@
+---
+title: 3.1.4
+sidebar_position: 69895
+---
+
+### HarperDB 3.1.4, Monkey Release
+2/24/2022
+
+**Features/Updates**
+
+* CORE-1460 Added new setting `STORAGE_WRITE_ASYNC`. If this setting is true, LMDB will have faster write performance at the expense of not being crash safe. The default for this setting is false, which results in HarperDB being crash safe.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.5.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.5.md
new file mode 100644
index 00000000..eff4b5b0
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.1.5.md
@@ -0,0 +1,11 @@
+---
+title: 3.1.5
+sidebar_position: 69894
+---
+
+### HarperDB 3.1.5, Monkey Release
+3/4/2022
+
+**Features/Updates**
+
+* CORE-1498 Fixed incorrect autocasting of strings that start with "0.", which attempted to convert them to numbers but instead returned NaN.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.2.0.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.2.0.md
new file mode 100644
index 00000000..003575d8
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.2.0.md
@@ -0,0 +1,13 @@
+---
+title: 3.2.0
+sidebar_position: 69799
+---
+
+### HarperDB 3.2.0, Monkey Release
+3/25/2022
+
+**Features/Updates**
+
+* CORE-1391 Bug fix related to orphaned HarperDB background processes.
+* CORE-1509 Updated node version check, updated Node.js version, updated project dependencies.
+* CORE-1518 Remove final call from logger.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.2.1.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.2.1.md
new file mode 100644
index 00000000..dc511a70
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.2.1.md
@@ -0,0 +1,11 @@
+---
+title: 3.2.1
+sidebar_position: 69798
+---
+
+### HarperDB 3.2.1, Monkey Release
+6/1/2022
+
+**Features/Updates**
+
+* CORE-1573 Added logic to track the pid of the foreground process if running in foreground. Then on stop, use that pid to kill the process. Logic was also added to kill the pm2 daemon when stop is called.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.3.0.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.3.0.md
new file mode 100644
index 00000000..3e3ca784
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/3.3.0.md
@@ -0,0 +1,12 @@
+---
+title: 3.3.0
+sidebar_position: 69699
+---
+
+### HarperDB 3.3.0 - Monkey
+
+* CORE-1595 Added new role type `structure_user`; this enables non-superusers to create/drop schemas/tables/attributes.
+* CORE-1501 Improved performance for `drop_table`.
+* CORE-1599 Added two new operations for custom functions `install_node_modules` & `audit_node_modules`.
+* CORE-1598 Added `skip_node_modules` flag to `package_custom_function_project` operation. This flag allows for not bundling project dependencies and deploying a smaller project to other nodes. Use this flag in tandem with `install_node_modules`.
+* CORE-1707 Binaries are now included for Linux on AMD64, Linux on ARM64, and macOS. GCC, Make, Python are no longer required when installing on these platforms.
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/_category_.json b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/_category_.json
new file mode 100644
index 00000000..0103ac36
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "HarperDB Monkey (Version 3)",
+ "position": -3
+}
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/index.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/index.md
new file mode 100644
index 00000000..a05a2b6c
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v3-monkey/index.md
@@ -0,0 +1,11 @@
+---
+title: HarperDB Monkey (Version 3)
+---
+
+# HarperDB Monkey (Version 3)
+
+Did you know our release names are dedicated to employee pups? For our third release, we have Monkey.
+
+
+
+_Hi, I am Monkey, a.k.a. Monk, a.k.a. Monchichi. My dad is Aron Johnson, the Director of DevOps at HarperDB. I am an eight-year-old Australian Cattle dog mutt whose favorite pastime is hunting and collecting tennis balls from the park next to her home. I love burrowing in the Colorado snow, rolling in the cool grass on warm days, and cheese!_
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.0.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.0.md
new file mode 100644
index 00000000..49770307
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.0.md
@@ -0,0 +1,124 @@
+---
+title: 4.0.0
+sidebar_position: 59999
+---
+
+### HarperDB 4.0.0, Tucker Release
+11/2/2022
+
+**Networking & Data Replication (Clustering)**
+
+The HarperDB clustering internals have been rewritten and the underlying technology for Clustering has been completely replaced with [NATS](https://nats.io/), an enterprise-grade connective technology responsible for addressing, discovery, and exchange of messages that drive the common patterns in distributed systems.
+* CORE-1464, CORE-1470: Remove SocketCluster dependencies and all code related to them.
+* CORE-1465, CORE-1485, CORE-1537, CORE-1538, CORE-1558, CORE-1583, CORE-1665, CORE-1710, CORE-1801, CORE-1865: Add `nats-server` as a dependency; on install of HarperDB, download `nats-server` if possible, else fall back to building from source code.
+* CORE-1593, CORE-1761: Add `nats.js` as project dependency.
+* CORE-1466: Build NATS configs on `harperdb run` based on HarperDB YAML configuration.
+* CORE-1467, CORE-1508: Launch and manage NATS servers with PM2.
+* CORE-1468, CORE-1507: Create a process which reads the work queue stream and processes transactions.
+* CORE-1481, CORE-1529, CORE-1698, CORE-1502, CORE-1696: On upgrade to 4.0, update pre-existing clustering configurations, create table transaction streams, create work queue stream, update `hdb_nodes` table, create clustering folder structure, and rebuild self-signed certs.
+* CORE-1494, CORE-1521, CORE-1755: Build out internals to interface with NATS.
+* CORE-1504: Update existing hooks to save transactions to work with NATS.
+* CORE-1514, CORE-1515, CORE-1516, CORE-1527, CORE-1532: Update `add_node`, `update_node`, and `remove_node` operations to no longer need host and port in payload. These operations now dynamically manage sourcing of table-level transaction streams between nodes and work queues.
+* CORE-1522: Create `NATSReplyService` process which handles receiving NATS-based requests from remote instances and sending back appropriate responses.
+* CORE-1471, CORE-1568, CORE-1563, CORE-1534, CORE-1569: Update `cluster_status` operation.
+* CORE-1611: Update pre-existing transaction log operations to be audit log operations.
+* CORE-1541, CORE-1612, CORE-1613: Create transaction log operations which interface with streams.
+* CORE-1668: Update NATS serialization / deserialization to use MessagePack.
+* CORE-1673: Add `system_info` param to `hdb_nodes` table and update on `add_node` and `cluster_status`.
+* CORE-1477, CORE-1493, CORE-1557, CORE-1596, CORE-1577: Both a full HarperDB restart & just clustering restart call the NATS server with a reload directive to maintain full uptime while servers refresh.
+* CORE-1474: HarperDB install adds clustering folder structure.
+* CORE-1530: Post `drop_table` HarperDB purges the related transaction stream.
+* CORE-1567: Set NATS config to always use TLS.
+* CORE-1543: Removed the `transact_to_cluster` attribute from the bulk load operations. Now bulk loads always replicate.
+* CORE-1533, CORE-1556, CORE-1561, CORE-1562, CORE-1564: New operation `configure_cluster`; this operation enables bulk publishing and subscription of multiple tables to multiple instances of HarperDB.
+* CORE-1535: Create work queue stream on install of HarperDB. This stream receives transactions from remote instances of HarperDB which are then ingested in order.
+* CORE-1551: Create transaction streams on the remote node if they do not exist when performing `add_node` or `update_node`.
+* CORE-1594, CORE-1605, CORE-1749, CORE-1767, CORE-1770: Optimize the work queue stream and its consumer to be more performant and to validate exactly-once delivery.
+* CORE-1621, CORE-1692, CORE-1570, CORE-1693: NATS stream names are MD5 hashed to avoid characters that HarperDB allows, but NATS may not.
+* CORE-1762: Add a new optional attribute to `add_node` and `update_node` named `opt_start_time`. This attribute sets a starting time from which to synchronize transactions.
+* CORE-1785: Optimizations and bug fixes in regards to sourcing data from remote instances on HarperDB.
+* CORE-1588: Created new operation `set_cluster_routes` to enable setting routes for instances of HarperDB to mesh together.
+* CORE-1589: Created new operation `get_cluster_routes` to allow for retrieval of routes used to connect the instance of HarperDB to the mesh.
+* CORE-1590: Created new operation `delete_cluster_routes` to allow for removal of routes used to connect the instance of HarperDB to the mesh.
+* CORE-1667: Fix old environment variable `CLUSTERING_PORT` not mapping to new hub server port.
+* CORE-1609: Allow `remove_node` to be called when the other node cannot be reached.
+* CORE-1815: Add transaction lock to `add_node` and `update_node` to avoid concurrent nats source update bug.
+* CORE-1848: Update stream configs if the node name has been changed in the YAML configuration.
+* CORE-1873: Update `add_node` and `update_node` so that they auto-create the schema/table on both the local and remote nodes
+
+
+**Data Storage**
+
+We have made improvements to how we store, index, and retrieve data.
+* CORE-1619: Enabled new concurrent flushing technology for improved write performance.
+* CORE-1701: Optimize search performance for `search_by_conditions` when executing multiple AND conditions.
+* CORE-1652: Encode the values of secondary indices more efficiently for faster access.
+* CORE-1670: Store updated timestamp in `lmdb.js`' version property.
+* CORE-1651: Enabled multiple value indexing of array values which allows for the ability to search on specific elements in an array more efficiently.
+* CORE-1649, CORE-1659: Large text values (larger than 255 bytes) are no longer stored in separate blob index. Now they are segmented and delimited in the same index to increase search performance.
+* Complex objects and object arrays are no longer stored in a separate index to preserve storage and increase write throughput.
+* CORE-1650, CORE-1724, CORE-1738: Improved internals around interpreting attribute values.
+* CORE-1657: Deferred property decoding allows large objects to be stored, but individual attributes can be accessed (like with get_attributes) without incurring the cost of decoding the entire object.
+* CORE-1658: Enable in-memory caching of records for even faster access to frequently accessed data.
+* CORE-1693: Wrap updates in async transactions to ensure ACID-compliant updates.
+* CORE-1653: Upgrade to 4.0 rebuilds tables to reflect changes made to index improvements.
+* CORE-1753: Removed old `node-lmdb` dependency.
+* CORE-1787: Freeze objects returned from queries.
+* CORE-1821: Read the `WRITE_ASYNC` setting which enables LMDB nosync.
+
+**Logging**
+
+HarperDB has increased logging specificity by breaking out logs based on components logging. There are specific log files each for HarperDB Core, Custom Functions, Hub Server, Leaf Server, and more.
+* CORE-1497: Remove `pino` and `winston` dependencies.
+* CORE-1426: All logging is output via `stdout` and `stderr`, our default logging is then picked up by PM2 which handles writing out to file.
+* CORE-1431: Improved `read_log` operation validation.
+* CORE-1433, CORE-1463: Added log rotation.
+* CORE-1553, CORE-1555, CORE-1552, CORE-1554, CORE-1704: Performance gain by only serializing objects and arrays if the log is for the level defined in configuration.
+* CORE-1436: Upgrade to 4.0 updates internals for logging changes.
+* CORE-1428, CORE-1440, CORE-1442, CORE-1434, CORE-1435, CORE-1439, CORE-1482, CORE-1751, CORE-1752: Bug fixes, performance improvements and improved unit tests.
+* CORE-1691: Convert non-PM2 managed log file writes to use Node.js `fs.appendFileSync` function.
+
+**Configuration**
+
+HarperDB has updated its configuration from a properties file to YAML.
+* CORE-1448, CORE-1449, CORE-1519, CORE-1587: Upgrade automatically converts the pre-existing settings file to YAML.
+* CORE-1445, CORE-1534, CORE-1444, CORE-1858: Build out new logic to create, update, and interpret the YAML configuration file.
+* Installer has updated prompts to reflect YAML settings.
+* CORE-1447: Create an alias for the `configure_cluster` operation as `set_configuration`.
+* CORE-1461, CORE-1462, CORE-1483: Unit test improvements.
+* CORE-1492: Improvements to `get_configuration` and `set_configuration` operations.
+* CORE-1503: Modify HarperDB configuration for more granular certificate definition.
+* CORE-1591: Update `routes` IP param to `host` and to `leaf` config in `harperdb.conf`
+* CORE-1519: Fix issue where switching between old and new versions of HarperDB produced a "config parameter is undefined" error on npm install.
+
+**Broad NodeJS and Platform Support**
+* CORE-1624: HarperDB can now run on multiple versions of NodeJS, from v14 to v19. We primarily test on v18, so that is the preferred version.
+
+**Windows 10 and 11**
+* CORE-1088: HarperDB now runs natively on Windows 10 and 11 without the need to run in a container or be installed in WSL. Windows is only intended for evaluation and development purposes, not for production workloads.
+
+**Extra Changes and Bug Fixes**
+* CORE-1520: Refactor installer to remove all waterfall code and update to use Promises.
+* CORE-1573: Stop the PM2 daemon and any logging processes when stopping hdb.
+* CORE-1586: When HarperDB is running in foreground stop any additional logging processes from being spawned.
+* CORE-1626: Update docker file to accommodate new `harperdb.conf` file.
+* CORE-1592, CORE-1526, CORE-1660, CORE-1646, CORE-1640, CORE-1689, CORE-1711, CORE-1601, CORE-1726, CORE-1728, CORE-1736, CORE-1735, CORE-1745, CORE-1729, CORE-1748, CORE-1644, CORE-1750, CORE-1757, CORE-1727, CORE-1740, CORE-1730, CORE-1777, CORE-1778, CORE-1782, CORE-1775, CORE-1771, CORE-1774, CORE-1759, CORE-1772, CORE-1861, CORE-1862, CORE-1863, CORE-1870, CORE-1869: Changes for CI/CD pipeline and integration tests.
+* CORE-1661: Fixed issue where old boot properties file caused an error when attempting to install 4.0.0.
+* CORE-1697, CORE-1814, CORE-1855: Upgrade fastify dependency to new major version 4.
+* CORE-1629: Jobs are now running as processes managed by the PM2 daemon.
+* CORE-1733: Update LICENSE to reflect our EULA on our site.
+* CORE-1606: Enable Custom Functions by default.
+* CORE-1714: Include pre-built binaries for most common platforms (darwin-arm64, darwin-x64, linux-arm64, linux-x64, win32-x64).
+* CORE-1628: Fix issue where setting license through environment variable was not working.
+* CORE-1602, CORE-1760, CORE-1838, CORE-1839, CORE-1847, CORE-1773: HarperDB Docker container improvements.
+* CORE-1706: Add support for encoding HTTP responses with MessagePack.
+* CORE-1709: Improve the way lmdb.js dependencies are installed.
+* CORE-1758: Remove/update unnecessary HTTP headers.
+* CORE-1756: On `npm install` and `harperdb install` change the node version check from an error to a warning if the installed Node.js version does not match our preferred version.
+* CORE-1791: Optimizations to authenticated user caching.
+* CORE-1794: Update README to discuss Windows support & Node.js versions
+* CORE-1837: Fix issue where Custom Function directory was not being created on install.
+* CORE-1742: Add more validation to audit log - check schema/table exists and log is enabled.
+* CORE-1768: Fix issue where when running in foreground HarperDB process is not stopping on `harperdb stop`.
+* CORE-1864: Fix to semver checks on upgrade.
+* CORE-1850: Fix issue where a `cluster_user` type role could not be altered.
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.1.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.1.md
new file mode 100644
index 00000000..9e148e63
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.1.md
@@ -0,0 +1,12 @@
+---
+title: 4.0.1
+sidebar_position: 59998
+---
+
+### HarperDB 4.0.1, Tucker Release
+01/20/2023
+
+**Bug Fixes**
+
+* CORE-1992 Local Studio was not loading because the path got mangled in the build.
+* CORE-2001 Fixed `deploy_custom_function_project`, which was broken by the Node.js update.
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.2.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.2.md
new file mode 100644
index 00000000..b65d1427
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.2.md
@@ -0,0 +1,12 @@
+---
+title: 4.0.2
+sidebar_position: 59997
+---
+
+### HarperDB 4.0.2, Tucker Release
+01/24/2023
+
+**Bug Fixes**
+
+* CORE-2003 Fix bug where if machine had one core thread config would default to zero.
+* Update to lmdb 2.7.3 and msgpackr 1.7.0
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.3.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.3.md
new file mode 100644
index 00000000..67aaae56
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.3.md
@@ -0,0 +1,11 @@
+---
+title: 4.0.3
+sidebar_position: 59996
+---
+
+### HarperDB 4.0.3, Tucker Release
+01/26/2023
+
+**Bug Fixes**
+
+* CORE-2007 Add update nodes 4.0.0 launch script to build script to fix clustering upgrade.
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.4.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.4.md
new file mode 100644
index 00000000..2a30c9d1
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.4.md
@@ -0,0 +1,11 @@
+---
+title: 4.0.4
+sidebar_position: 59995
+---
+
+### HarperDB 4.0.4, Tucker Release
+01/27/2023
+
+**Bug Fixes**
+
+* CORE-2009 Fixed bug where add node was not being called when upgrading clustering.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.5.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.5.md
new file mode 100644
index 00000000..dc66721f
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.5.md
@@ -0,0 +1,14 @@
+---
+title: 4.0.5
+sidebar_position: 59994
+---
+
+### HarperDB 4.0.5, Tucker Release
+02/15/2023
+
+**Bug Fixes**
+
+* CORE-2029 Improved the upgrade process for handling existing user TLS certificates and correctly configuring TLS settings. Added a prompt to upgrade to determine if new certificates should be created or existing certificates should be kept/used.
+* Fix the way NATS connections are honored in a local environment.
+* Do not define the certificate authority path to NATS if it is not defined in the HarperDB config.
+
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.6.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.6.md
new file mode 100644
index 00000000..bf97d148
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.6.md
@@ -0,0 +1,11 @@
+---
+title: 4.0.6
+sidebar_position: 59993
+---
+
+### HarperDB 4.0.6, Tucker Release
+03/09/2023
+
+**Bug Fixes**
+
+* Fixed a data serialization error that occurs when a large number of different record structures are persisted in a single table.
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.7.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.7.md
new file mode 100644
index 00000000..7d48666a
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.0.7.md
@@ -0,0 +1,11 @@
+---
+title: 4.0.7
+sidebar_position: 59992
+---
+
+### HarperDB 4.0.7, Tucker Release
+03/10/2023
+
+**Bug Fixes**
+
+* Update lmdb.js dependency
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.1.0.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.1.0.md
new file mode 100644
index 00000000..80b4e5d2
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.1.0.md
@@ -0,0 +1,63 @@
+---
+title: 4.1.0
+sidebar_position: 59899
+---
+
+# 4.1.0
+
+HarperDB 4.1 introduces the ability to use worker threads for concurrently handling HTTP requests. Previously this was handled by processes. This shift provides important benefits in terms of better control of traffic delegation with support for optimized load tracking and session affinity, better debuggability, and reduced memory footprint.
+
+This means debugging will be much easier for custom functions. If you install/run HarperDB locally, most modern IDEs like WebStorm and VSCode support worker thread debugging, so you can start HarperDB in your IDE, and set breakpoints in your custom functions and debug them.
+
+The associated routing functionality now includes session affinity support. This can be used to consistently route users to the same thread, which can improve caching locality, performance, and fairness. This can be enabled with the `http.sessionAffinity` option in your configuration.
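+
+As a minimal configuration sketch (the `ip` value, which routes requests by client IP address, is an assumption here; consult the configuration reference for the supported values):
+
+```yaml
+http:
+  sessionAffinity: ip
+```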
+
+HarperDB 4.1's NoSQL query handling has been revamped to consistently use iterators, which provide an extremely memory-efficient mechanism for directly streaming query results to the network _as_ the query results are computed. This results in faster Time to First Byte (TTFB) (only the first record/value in a query needs to be computed before data can start to be sent) and less memory usage during querying (the entire query result does not need to be stored in memory). These iterators are also available in query results for custom functions and provide a means for custom function code to iteratively access data from the database without loading entire results. This should be a completely transparent upgrade: all HTTP APIs function the same, with the one exception that custom functions need to be aware that they can't access query results by `[index]` (they should use array methods or `for...of` loops to handle query results).
+
+4.1 includes configuration options for specifying the location of database storage files. This allows you to place database directories and files on different volumes for greater flexibility and better utilization of disks and storage volumes. See the [storage configuration](../../../../deployments/configuration#storage) and [schemas configuration](../../../../deployments/configuration#schemas) for information on how to configure these locations.
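+
+For example, a minimal sketch relocating the main storage path (the path shown is purely illustrative):
+
+```yaml
+storage:
+  path: /mnt/fast-nvme/hdb/storage
+```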
+
+Logging has been revamped and condensed into one `hdb.log` file. See the logging documentation for more information.
+
+A new operation called `cluster_network` was added; it pings the cluster and returns a list of enmeshed nodes.
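+
+A minimal request sketch for the new operation (additional optional attributes, such as a connection timeout, may also be supported; see the operations API reference):
+
+```json
+{
+  "operation": "cluster_network"
+}
+```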
+
+Custom Functions will no longer automatically load static file routes; instead, the `@fastify/static` plugin will need to be registered with the Custom Function server. See [Host A Static Web UI](https://docs.harperdb.io/docs/v/4.1/custom-functions/host-static).
+
+Updates to S3 import and export mean that these operations now require the bucket `region` in the request. Also, if referencing a nested object, it should be referenced in the `key` parameter. See examples [here](https://api.harperdb.io/#aa74bbdf-668c-4536-80f1-b91bb13e5024).
+
+Due to the AWS SDK v2 reaching end-of-life support we have updated to v3. This has caused some breaking changes in our operations `import_from_s3` and `export_to_s3` (a request sketch follows the list below):
+
+* A new attribute `region` will need to be supplied
+* The `bucket` attribute can no longer have trailing slashes. Slashes will now need to be in the `key`.
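+
+A minimal `import_from_s3` sketch reflecting these changes (the bucket, key, region, credentials, and schema/table names are placeholders):
+
+```json
+{
+  "operation": "import_from_s3",
+  "action": "insert",
+  "schema": "dev",
+  "table": "breeds",
+  "s3": {
+    "aws_access_key_id": "YOUR_ACCESS_KEY",
+    "aws_secret_access_key": "YOUR_SECRET_KEY",
+    "bucket": "hdb-import-bucket",
+    "key": "imports/breeds.csv",
+    "region": "us-east-1"
+  }
+}
+```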
+
+Starting HarperDB without any command (just `harperdb`) now runs HarperDB as a standard process, in the foreground. This means you can use standard Unix tooling to interact with the process, and it is well suited to running HarperDB with systemd or any other process management tool. If you wish to have HarperDB launch itself in a separate background process (and immediately terminate the shell process), you can do so by running `harperdb start`.
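+
+For example:
+
+```bash
+# Run HarperDB in the foreground (works well with systemd or other process managers)
+harperdb
+
+# Launch HarperDB as a detached background process instead
+harperdb start
+```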
+
+Internal Tickets completed:
+
+* CORE-609 - Ensure that attribute names are always added to global schema as Strings
+* CORE-1549 - Remove fastify-static code from Custom Functions server which auto serves content from "static" folder
+* CORE-1655 - Iterator based queries
+* CORE-1764 - Fix issue where describe\_all operation returns an empty object for non super-users if schema(s) do not yet have table(s)
+* CORE-1854 - Switch to using worker threads instead of processes for handling concurrency
+* CORE-1877 - Extend the csv\_url\_load operation to allow for additional headers to be passed to the remote server when the csv is being downloaded
+* CORE-1893 - Add last updated timestamp to describe operations
+* CORE-1896 - Fix issue where Select \* from system.hdb\_info returns wrong HDB version number after Instance Upgrade
+* CORE-1904 - Fix issue when executing GEOJSON query in SQL
+* CORE-1905 - Add HarperDB YAML configuration setting which defines the storage location of NATS streams
+* CORE-1906 - Add HarperDB YAML configuration setting defining the storage location of tables.
+* CORE-1655 - Streaming binary format serialization
+* CORE-1943 - Add configuration option to set mount point for audit tables
+* CORE-1921 - Update NATS transaction lifecycle to handle message deduplication in work queue streams.
+* CORE-1963 - Update logging for better readability, reduced duplication, and request context information.
+* CORE-1968 - In server\nats\natsIngestService.js remove the js\_msg.working(); line to improve performance.
+* CORE-1976 - Fix error when calling describe\_table operation with no schema or table defined in payload.
+* CORE-1983 - Fix issue where create\_attribute operation does not validate request for required attributes
+* CORE-2015 - Remove PM2 logs that get logged in console when starting HDB
+* CORE-2048 - systemd script for 4.1
+* CORE-2052 - Include thread information in system\_information for visibility of threads
+* CORE-2061 - Add a better error msg when clustering is enabled without a cluster user set
+* CORE-2068 - Create new log rotate logic since pm2 log-rotate no longer used
+* CORE-2072 - Update to Node 18.15.0
+* CORE-2090 - Upgrade Testing from v4.0.x and v3.x to v4.1.
+* CORE-2091 - Run the performance tests
+* CORE-2092 - Allow for automatic patch version updates of certain packages
+* CORE-2109 - Add verify option to clustering TLS configuration
+* CORE-2111 - Update AWS SDK to v3
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.1.1.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.1.1.md
new file mode 100644
index 00000000..537ef71c
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.1.1.md
@@ -0,0 +1,15 @@
+---
+title: 4.1.1
+sidebar_position: 59898
+---
+
+# 4.1.1
+
+06/16/2023
+
+* HarperDB uses improved logic for determining default heap limits and thread counts. When running in a restricted container and on NodeJS 18.15+, HarperDB will use the constrained memory limit to determine heap limits for each thread. In more memory constrained servers with many CPU cores, a reduced default thread count will be used to ensure that excessive memory is not used by many workers. You may still define your own thread count (with `http`/`threads`) in the [configuration](../../../deployments/configuration).
+* An option has been added for [disabling the republishing of NATS messages](../../../deployments/configuration), which can provide improved replication performance in a fully connected network.
+* Improvements to our OpenShift container.
+* Dependency security updates.
+
+**Bug Fixes**
+
+* Fixed a bug in reporting database metrics in the `system_information` operation.
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.1.2.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.1.2.md
new file mode 100644
index 00000000..2a62db64
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.1.2.md
@@ -0,0 +1,13 @@
+---
+title: 4.1.2
+sidebar_position: 59897
+---
+
+### HarperDB 4.1.2, Tucker Release
+06/16/2023
+
+* HarperDB has updated binary dependencies to support older glibc versions back to 2.17.
+* A new CLI command was added to get the current status of whether HarperDB is running and the cluster status. This is available with `harperdb status`.
+* Improvements to our OpenShift container.
+* Dependency security updates.
+
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.0.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.0.md
new file mode 100644
index 00000000..a57a9781
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.0.md
@@ -0,0 +1,99 @@
+---
+title: 4.2.0
+sidebar_position: 59799
+---
+
+# 4.2.0
+
+#### HarperDB 4.2.0
+
+HarperDB 4.2 introduces a new interface for accessing our core database engine with faster access, well-typed idiomatic JavaScript interfaces, ergonomic object mapping, and real-time data subscriptions. 4.2 has also adopted a new component architecture for building extensions to deliver customized external data sources, authentication, file handlers, content types, and more. These architectural upgrades lead to several key new HarperDB capabilities, including a new REST interface, advanced caching, and real-time messaging and publish/subscribe functionality through MQTT, WebSockets, and Server-Sent Events.
+
+4.2 also introduces configurable database schemas, using GraphQL Schema syntax. The new component structure is also configuration-driven, providing easy, low-code paths to building applications. [Check out our new getting started guide](../../../getting-started) to see how easy it is to get started with HarperDB apps.
+
+### Resource API
+
+The [Resource API](../../reference/resource) is the new interface for accessing data in HarperDB. It utilizes a uniform interface for accessing data in HarperDB database/tables and is designed to easily be implemented or extended for defining customized application logic for table access or defining custom external data sources. This API has support for connecting resources together for caching and delivering data change and message notifications in real-time. The [Resource API documentation details this interface](../../reference/resource).
+
+### Component Architecture
+
+HarperDB's custom functions have evolved towards a [full component architecture](../../../developers/components); our internal functionality is defined as components, and this can be used in a modular way in conjunction with user components. These can all easily be configured and loaded through configuration files, and there is now a [well-defined interface for creating your own components](../../../developers/components/writing-extensions). Components can easily be deployed/installed into HarperDB using [NPM and Github references as well](../../../developers/components/installing).
+
+### Configurable Database Schemas
+
+HarperDB applications or components support [schema definitions using GraphQL schema syntax](../../../../developers/applications/defining-schemas). This makes it easy to define your table and attribute structure and gives you control over which attributes should be indexed and what types they should be. With schemas in configuration, these schemas can be bundled with an application and deployed together with application code.
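+
+As a brief illustration, a minimal schema sketch (the table and attribute names are invented; see the defining-schemas guide for the full set of directives):
+
+```graphql
+type Dog @table {
+  id: ID @primaryKey
+  name: String
+  breed: String @indexed
+}
+```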
+
+### REST Interface
+
+HarperDB 4.2 introduces a new REST interface for accessing data through best-practice HTTP APIs using intuitive paths and standards-based methods and headers that directly map to our Resource API. This new interface provides fast and easy access to data via queries through GET requests, modifications of data through PUTs, customized actions through POSTs and more. With standards-based header support built-in, this works seamlessly with external caches (including browser caches) for accelerated performance and reduced network transfers.
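+
+As a rough sketch of the style of interaction (the table name, record id, and default application port are illustrative; the path layout follows the REST documentation):
+
+```bash
+# Fetch a record by primary key
+curl https://localhost:9926/Dog/1
+
+# Query by an indexed attribute
+curl "https://localhost:9926/Dog/?breed=Labrador"
+
+# Update a record with PUT
+curl -X PUT https://localhost:9926/Dog/1 \
+  -H "Content-Type: application/json" \
+  -d '{"name": "Harper", "breed": "Labrador"}'
+```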
+
+### Real-Time
+
+HarperDB 4.2 now provides standard interfaces for subscribing to data changes and receiving notifications of changes and messages in real-time. Using these new real-time messaging capabilities with structured data provides a powerful integrated platform for both database style data updates and querying along with message delivery. [Real-time messaging](../../../../developers/real-time) of data is available through several protocols:
+
+#### MQTT
+
+4.2 now includes MQTT support. MQTT is a publish and subscribe messaging protocol designed to be efficient enough for even small Internet of Things devices. This allows clients to connect to HarperDB, publish messages through it, and subscribe to messages and data for real-time delivery. 4.2 implements support for QoS 0 and 1, along with durable sessions.
+
+#### WebSockets
+
+HarperDB now also supports WebSockets. This can be used as a transport for MQTT or as a connection for custom connection handling.
+
+#### Server-Sent Events
+
+HarperDB also includes support for Server-Sent Events. This is a very easy-to-use browser API that allows web sites/applications to connect to HarperDB and subscribe to data changes with minimal effort over standard HTTP.
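+
+For instance, a rough command-line sketch of subscribing to a record's changes (the path is assumed to mirror the REST interface; browsers would typically use the `EventSource` API instead):
+
+```bash
+# Keep the connection open and stream change events for a record
+curl -N -H "Accept: text/event-stream" https://localhost:9926/Dog/1
+```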
+
+### Database Structure
+
+HarperDB databases contain a collection of tables, and these tables are now contained in a single transactionally-consistent database file. This means reads and writes can be performed transactionally and atomically across tables (as long as they are in the same database). Multi-table transactions are replicated as single atomic transactions as well. Audit logs are also maintained in the same database with atomic consistency.
+
+Databases are now entirely encapsulated in a single file, which means they can be moved or copied to another location without requiring any separate metadata updates in the system tables.
+
+### Clone Node
+
+HarperDB includes new functionality for adding new HarperDB nodes to a cluster. New instances can be configured to clone from a leader node, taking and copying a database snapshot from the leader node and self-configuring from it as well, which accelerates the deployment of new nodes for fast horizontal scaling to meet demand. [See the documentation on Clone Node for more information.](../../../../administration/cloning)
+
+### Operations API terminology updates
+
+Any operation that used the `schema` property was updated to make this property optional and to alternately support `database` as the property for specifying the database (formerly 'schema'). If both `schema` and `database` are absent, the operation defaults to the `data` database. The term 'primary key' is now used in place of 'hash', and the NoSQL operation `search_by_hash` was updated to `search_by_id`.
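+
+For example, a minimal sketch using the updated terminology (the database, table, and id values are illustrative):
+
+```json
+{
+  "operation": "search_by_id",
+  "database": "data",
+  "table": "dog",
+  "ids": [1],
+  "get_attributes": ["*"]
+}
+```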
+
+Support was added for defining a table with `primary_key` instead of `hash_attribute`.
+
+## Configuration
+
+There have been significant changes to `harperdb-config.yaml`; however, none of these changes should affect configurations created on pre-4.2 versions. If you upgrade to 4.2, any existing configuration should be backwards compatible and will not need to be updated.
+
+`harperdb-config.yaml` has had some configuration values added, removed, and renamed, and some defaults changed. Please refer to [harperdb-config.yaml](../../../deployments/configuration) for the most current configuration parameters; a consolidated sketch of several of these settings follows the list below.
+
+* The `http` element has been expanded.
+ * `compressionThreshold` was added.
+ * All `customFunction` configuration now lives here, except for the `tls` section.
+* `threads` has moved out of the `http` element and now is its own top level element.
+* `authentication` section was moved out of the `operationsApi` section and is now its own top level element/section.
+* `analytics.aggregatePeriod` was added.
+* Default logging level was changed to `warn`.
+* Default clustering log level was changed to `info`.
+* `clustering.republishMessages` now defaults to `false`.
+* `operationsApi.foreground` was removed. To start HarperDB in the foreground, from the CLI run `harperdb`.
+* Made `operationsApi` configuration optional. Any config not defined here will default to the `http` section.
+* Added a `securePort` parameter to `operationsApi` and `http` used for setting the https port.
+* Added a new top level `tls` section.
+* Removed `customFunctions.enabled`, `customFunctions.network.https`, `operationsApi.network.https` and `operationsApi.nodeEnv`.
+* Added an element called `componentRoot` which replaces `customFunctions.root`.
+* Updated custom pathing to use `databases` instead of `schemas`.
+* Added `logging.auditAuthEvents.logFailed` and `logging.auditAuthEvents.logSuccessful` for enabling logging of auth events.
+* A new `mqtt` section was added.
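+
+A partial sketch showing how several of these new and relocated settings fit together (all values shown are illustrative, not recommended defaults):
+
+```yaml
+threads: 4
+componentRoot: /home/user/hdb/components
+logging:
+  level: warn
+clustering:
+  republishMessages: false
+http:
+  port: 9926
+  securePort: 9927
+  compressionThreshold: 1024
+```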
+
+### Socket Management
+
+HarperDB now uses socket sharing to distribute incoming connections to different threads (`SO_REUSEPORT`). This is considered to be the most performant mechanism available for multi-threaded socket handling. This does mean that we have deprecated session-affinity based socket delegation.
+
+HarperDB now also supports more flexible port configurations: application endpoints and WebSockets run on 9926 by default, but these can be separated, or application endpoints can be configured to run on the same port as the operations API for a single port configuration.
+
+### Sessions
+
+HarperDB now supports cookie-based sessions for authentication for web clients. This can be used with the standard authentication mechanisms to log in, and then cookies can be used to preserve the authenticated session. This is generally a more secure way of maintaining authentication in browsers, without having to rely on storing credentials.
+
+### Dev Mode
+
+HarperDB can now directly run a HarperDB application from any location using `harperdb run /path/to/app` or `harperdb dev /path/to/app`. The latter starts in dev mode, with logging directly to the console, debugging enabled, and auto-restarting with any changes in your application files. Dev mode is recommended for local application and component development.
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.1.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.1.md
new file mode 100644
index 00000000..38617ca9
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.1.md
@@ -0,0 +1,13 @@
+---
+title: 4.2.1
+sidebar_position: 59798
+---
+
+### HarperDB 4.2.1, Tucker Release
+11/3/2023
+
+* Downgrade NATS 2.10.3 back to 2.10.1 due to regression in connection handling.
+* Handle package names with underscores.
+* Improved validation of queries and comparators
+* Avoid double replication on transactions with multiple commits
+* Added file metadata on get_component_file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.2.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.2.md
new file mode 100644
index 00000000..15768374
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.2.md
@@ -0,0 +1,15 @@
+---
+title: 4.2.2
+sidebar_position: 59797
+---
+
+### HarperDB 4.2.2, Tucker Release
+11/8/2023
+
+* Increase timeouts for NATS connections.
+* Fix for database snapshots for backups (and for clone node).
+* Fix application of permissions for default tables exposed through REST.
+* Log replication failures with record information.
+* Fix application of authorization/permissions for MQTT commands.
+* Fix copying of local components in clone node.
+* Fix calculation of overlapping start time in clone node.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.3.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.3.md
new file mode 100644
index 00000000..dab25c3d
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.3.md
@@ -0,0 +1,13 @@
+---
+title: 4.2.3
+sidebar_position: 59796
+---
+
+### HarperDB 4.2.3, Tucker Release
+11/15/2023
+
+* When setting `securePort`, disable the insecure port setting on the same port
+* Fix `harperdb status` when pid file is missing
+* Fix/include missing icons/fonts from local studio
+* Fix crash that can occur when concurrently accessing records > 16KB
+* Apply a lower heap limit to better ensure that memory leaks are quickly caught/mitigated
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.4.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.4.md
new file mode 100644
index 00000000..87ee241d
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.4.md
@@ -0,0 +1,10 @@
+---
+title: 4.2.4
+sidebar_position: 59795
+---
+
+### HarperDB 4.2.4, Tucker Release
+11/16/2023
+
+* Prevent coercion of strings to numbers in SQL queries (in WHERE clause)
+* Address fastify deprecation warning about accessing config
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.5.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.5.md
new file mode 100644
index 00000000..1172c4b3
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.5.md
@@ -0,0 +1,12 @@
+---
+title: 4.2.5
+sidebar_position: 59794
+---
+
+### HarperDB 4.2.5, Tucker Release
+11/22/2023
+
+* Disable compression on server-sent events to ensure messages are immediately sent (not queued for later delivery)
+* Update geoNear function to tolerate null values
+* lmdb-js fix to ensure prefetched keys are pinned in memory until retrieved
+* Add header to indicate start of a new authenticated session (for studio to identify authenticated sessions)
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.6.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.6.md
new file mode 100644
index 00000000..d0a1f177
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.6.md
@@ -0,0 +1,10 @@
+---
+title: 4.2.6
+sidebar_position: 59793
+---
+
+### HarperDB 4.2.6, Tucker Release
+11/29/2023
+
+* Update various geo SQL functions to tolerate invalid values
+* Properly report component installation/load errors in `get_components` (for studio to load components after an installation failure)
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.7.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.7.md
new file mode 100644
index 00000000..78bfcaa7
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.7.md
@@ -0,0 +1,11 @@
+---
+title: 4.2.7
+sidebar_position: 59792
+---
+
+### HarperDB 4.2.7
+12/6/2023
+
+* Add support for cloning over the top of an existing HarperDB instance
+* Add health checks for NATS consumer with ability to restart consumer loops for better resiliency
+* Revert Fastify autoload module due to a regression that had caused ECMAScript modules used for Fastify routes to fail to load on Windows
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.8.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.8.md
new file mode 100644
index 00000000..fbe94b69
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/4.2.8.md
@@ -0,0 +1,14 @@
+---
+title: 4.2.8
+sidebar_position: 59791
+---
+
+### HarperDB 4.2.8
+12/19/2023
+
+* Added support for CLI command line arguments for clone node
+* Added support for cloning a node without enabling clustering
+* Clear NATS client cache on closed event
+* Fix check for attribute permissions so that an empty attribute permissions array is treated as a table level permission definition
+* Improve speed of cross-node health checks
+* Fix for using `database` in describe operations
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/_category_.json b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/_category_.json
new file mode 100644
index 00000000..9a7bca50
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "HarperDB Tucker (Version 4)",
+ "position": -4
+}
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/index.md b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/index.md
new file mode 100644
index 00000000..f2f48c98
--- /dev/null
+++ b/site/versioned_docs/version-4.2/technical-details/release-notes/v4-tucker/index.md
@@ -0,0 +1,11 @@
+---
+title: HarperDB Tucker (Version 4)
+---
+
+# HarperDB Tucker (Version 4)
+
+Did you know our release names are dedicated to employee pups? For our fourth release, we have Tucker.
+
+
+
+_G’day, I’m Tucker. My dad is David Cockerill, a software engineer here at HarperDB. I am a 3-year-old Labrador Husky mix. I love to protect my dad from all the squirrels and rabbits we have in our yard. I have very ticklish feet and love belly rubs!_
diff --git a/site/versioned_docs/version-4.3/administration/_category_.json b/site/versioned_docs/version-4.3/administration/_category_.json
new file mode 100644
index 00000000..828e0998
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/_category_.json
@@ -0,0 +1,12 @@
+{
+ "label": "Administration",
+ "position": 2,
+ "link": {
+ "type": "generated-index",
+ "title": "Administration Documentation",
+ "description": "Guides for managing and administering HarperDB instances",
+ "keywords": [
+ "administration"
+ ]
+ }
+}
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/administration/administration.md b/site/versioned_docs/version-4.3/administration/administration.md
new file mode 100644
index 00000000..8fbe7b80
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/administration.md
@@ -0,0 +1,31 @@
+---
+title: Best Practices and Recommendations
+---
+
+# Best Practices and Recommendations
+
+HarperDB is designed for minimal administrative effort, and with managed services administration is handled for you. However, there are important things to consider when managing your own HarperDB servers.
+
+### Data Protection, Backup, and Recovery
+
+As a distributed database, HarperDB can benefit from different data protection and recovery strategies than a traditional single-server database. Multiple aspects of data protection and recovery should be considered:
+
+* Availability: As a distributed database, HarperDB is intrinsically built for high availability, and a cluster will continue to run even if one or more servers fail completely. This is the first and primary defense against any downtime or data loss. HarperDB provides fast horizontal scaling with node cloning, which makes it easy to establish high-availability clusters.
+* [Audit log](./logging/audit-logging): HarperDB defaults to tracking data changes so malicious data changes can be found, attributed, and reverted. This provides security-level defense against data loss, allowing for fine-grained isolation and reversion of individual data without the large-scale reversion/loss of data associated with point-in-time recovery approaches.
+* Snapshots: When used as a source-of-truth database for crucial data, we recommend using snapshot tools to regularly snapshot databases as a final backup/defense against data loss (this should only be used as a last resort in recovery). HarperDB has a [`get_backup`](../developers/operations-api/databases-and-tables#get-backup) operation, which provides direct support for making and retrieving database snapshots; a single HTTP request can be used to get a snapshot (a minimal request is sketched below). Alternatively, volume snapshot tools can be used to snapshot data at the OS/VM level. HarperDB can also provide scripts for replaying transaction logs from snapshots to facilitate point-in-time recovery when necessary (customization may be preferred in certain recovery situations to minimize data loss).
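+
+A minimal `get_backup` request sketch (the database name is illustrative; see the operations API reference for options such as selecting specific tables):
+
+```json
+{
+  "operation": "get_backup",
+  "database": "data"
+}
+```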
+
+### Horizontal Scaling with Node Cloning
+
+HarperDB provides rapid horizontal scaling capabilities through [node cloning functionality described here](./cloning).
+
+### Monitoring
+
+HarperDB provides robust capabilities for analytics and observability to facilitate effective and informative monitoring:
+* Analytics provides statistics on usage, request counts, load, memory usage with historical tracking. The analytics data can be [accessed through querying](../technical-details/reference/analytics).
+* A large variety of real-time statistics about load, system information, database metrics, thread usage can be retrieved through the [`system_information` API](../developers/operations-api/utilities).
+* Information about the current cluster configuration and status can be found in the [cluster APIs](../developers/operations-api/clustering).
+* Analytics and system information can easily be exported to Prometheus with our [Prometheus exporter component](https://github.com/HarperDB-Add-Ons/prometheus_exporter), making it easy to visualize and monitor HarperDB with Grafana.
+
+### Replication Transaction Logging
+
+HarperDB utilizes NATS for replication, which maintains a transaction log. See the [transaction log documentation for information on how to query this log](./logging/transaction-logging).
diff --git a/site/versioned_docs/version-4.3/administration/cloning.md b/site/versioned_docs/version-4.3/administration/cloning.md
new file mode 100644
index 00000000..ed4e1d79
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/cloning.md
@@ -0,0 +1,171 @@
+---
+title: Clone Node
+---
+
+# Clone Node
+
+Clone node is a configurable node script that, when pointed at another instance of HarperDB, will create a clone of that
+instance's config and databases and set up replication. If it is run in a location where there is no existing HarperDB install,
+it will also install HarperDB as part of cloning. If it is run in a location where there is another HarperDB instance, it will
+only clone the config, databases and replication that do not already exist.
+
+Clone node is triggered when HarperDB is installed or started with certain environment or command line (CLI) variables set (see below).
+
+**Leader node** - the instance of HarperDB you are cloning.\
+**Clone node** - the new node which will be a clone of the leader node.
+
+To start a clone, run `harperdb` in the CLI with either of the following sets of variables:
+
+#### Environment variables
+
+* `HDB_LEADER_URL` - The URL of the leader node's operation API (usually port 9925).
+* `HDB_LEADER_USERNAME` - The leader node admin username.
+* `HDB_LEADER_PASSWORD` - The leader node admin password.
+* `HDB_LEADER_CLUSTERING_HOST` - _(optional)_ The leader clustering host. This value will be added to the clustering routes on the clone node. If this value is not set, replication will not be set up between the leader and clone.
+
+For example:
+```
+HDB_LEADER_URL=https://node-1.my-domain.com:9925 HDB_LEADER_CLUSTERING_HOST=node-1.my-domain.com HDB_LEADER_USERNAME=... HDB_LEADER_PASSWORD=... harperdb
+```
+
+#### Command line variables
+
+* `--HDB_LEADER_URL` - The URL of the leader node's operation API (usually port 9925).
+* `--HDB_LEADER_USERNAME` - The leader node admin username.
+* `--HDB_LEADER_PASSWORD` - The leader node admin password.
+* `--HDB_LEADER_CLUSTERING_HOST` - _(optional)_ The leader clustering host. This value will be added to the clustering routes on the clone node. If this value is not set, replication will not be set up between the leader and clone.
+
+For example:
+```
+harperdb --HDB_LEADER_URL https://node-1.my-domain.com:9925 --HDB_LEADER_CLUSTERING_HOST node-1.my-domain.com --HDB_LEADER_USERNAME ... --HDB_LEADER_PASSWORD ...
+```
+
+Each time clone is run it will set the value `cloned: true` in `harperdb-config.yaml`. This value will prevent clone from
+running again. If you want to run clone again, set this value to `false`. If HarperDB is started with the clone variables
+still present and `cloned` is `true`, HarperDB will simply start as normal.
+
+Clone node does not require any additional configuration apart from the variables referenced above.
+However, if you wish to set any configuration during the clone, this can be done by passing the config as environment/CLI
+variables or by cloning over the top of an existing `harperdb-config.yaml` file.
+
+More can be found in the HarperDB config documentation [here](../deployments/configuration).
+
+_Note: because the node name must be unique, clone will auto-generate one unless one is provided._
+
+### Excluding databases, components and replication
+
+To set any specific (optional) clone config, including the exclusion of any databases, components or replication, a file
+called `clone-node-config.yaml` can be used.
+
+The file must be located in the `ROOTPATH` directory of your clone (the `hdb` directory where your clone will be installed;
+if the directory does not exist, create it and add the file to it).
+
+The config available in `clone-node-config.yaml` is:
+
+```yaml
+databaseConfig:
+ excludeDatabases:
+ - database: null
+ excludeTables:
+ - database: null
+ table: null
+componentConfig:
+ exclude:
+ - name: null
+clusteringConfig:
+ publishToLeaderNode: true
+ subscribeToLeaderNode: true
+ excludeDatabases:
+ - database: null
+ excludeTables:
+ - database: null
+ table: null
+```
+
+_Note: only include the configuration that you are using. If no clone config file is provided nothing will be excluded,
+unless it already exists on the clone._
+
+`databaseConfig` - Set any databases or tables that you wish to exclude from cloning.
+
+`componentConfig` - Set any components that you do not want cloned. Clone node will not clone the component code;
+it will only clone the component reference that exists in the leader's harperdb-config file.
+
+`clusteringConfig` - Set the replication setup to establish with the other nodes (default is `true` & `true`) and
+set any databases or tables that you wish to exclude from clustering.
+
+### Cloning configuration
+
+Clone node will not clone any configuration that is classed as unique to the leader node. This includes `clustering.nodeName`,
+`rootPath` and any other path-related values, for example `storage.path`, `logging.root`, `componentsRoot`, and
+any authentication certificate/key paths.
+
+**Clustering Routes**
+
+By default, the clone will send a set routes request to the leader node. The default `host` used in this request will be the
+host name of the clone's operating system.
+
+To manually set a host use the variable `HDB_CLONE_CLUSTERING_HOST`.
+
+To disable the setting of the route set `HDB_SET_CLUSTERING_HOST` to `false`.
+
+### Cloning system database
+
+HarperDB uses a database called `system` to store operational information. Clone node will only clone the user and role
+tables from this database. It will also set up replication on these tables, which means that any existing and future users and roles
+that are added will be replicated throughout the cluster.
+
+Cloning the user and role tables means that once clone node is complete, the clone will share the same login credentials with
+the leader.
+
+### Fully connected clone
+
+A fully connected topology is when all nodes are replicating (publishing and subscribing) with all other nodes.
+A fully connected clone maintains this topology with the addition of the new node. When a clone is created,
+replication is added between the clone and the leader, as well as with any nodes the leader is replicating with. For example,
+if the leader is replicating with node-a and node-b, the clone will replicate with the leader, node-a and node-b.
+
+To run clone node with the fully connected option simply pass the environment variable `HDB_FULLY_CONNECTED=true` or CLI variable `--HDB_FULLY_CONNECTED true`.
+
+### Cloning over the top of an existing HarperDB instance
+
+Clone node will not overwrite any existing config, database or replication. It will write/clone any config, database or replication
+that does not exist on the node it is running on.
+
+An example of how this can be useful is if you want to set HarperDB config before the clone is created. To do this you
+would create a harperdb-config.yaml file in your local `hdb` root directory with the config you wish to set. Then
+when clone is run it will append the missing config to the file and install HarperDB with the desired config.
+
+Another useful example could be retroactively adding another database to an existing instance. Running clone on
+an existing instance could create a full clone of another database and set up replication between the database on the
+leader and the clone.
+
+### Cloning steps
+
+Clone node will execute the following steps when run:
+1. Look for an existing HarperDB install. It does this by using the default (or user provided) `ROOTPATH`.
+1. If an existing instance is found it will check for a `harperdb-config.yaml` file and search for the `cloned` value. If the value exists and is `true` clone will skip the clone logic and start HarperDB.
+1. Clone harperdb-config.yaml values that don't already exist (excluding values unique to the leader node).
+1. Fully clone any databases that don't already exist.
+1. If classed as a "fresh clone", install HarperDB. An instance is classed as a fresh clone if there is no system database.
+1. If clustering is enabled on the leader and the `HDB_LEADER_CLUSTERING_HOST` variable is provided, set up replication on all cloned database(s).
+1. Clone is complete, start HarperDB.
+
+### Cloning with Docker
+
+To run clone inside a container, add the environment variables to your run command.
+
+For example:
+
+```
+docker run -d \
+ -v :/home/harperdb/hdb \
+ -e HDB_LEADER_PASSWORD=password \
+ -e HDB_LEADER_USERNAME=admin \
+ -e HDB_LEADER_URL=https://1.123.45.6:9925 \
+ -e HDB_LEADER_CLUSTERING_HOST=1.123.45.6 \
+ -p 9925:9925 \
+ -p 9926:9926 \
+ harperdb/harperdb
+```
+
+Clone will only run once, when you first start the container. If the container restarts, the environment variables will be ignored.
diff --git a/site/versioned_docs/version-4.3/administration/compact.md b/site/versioned_docs/version-4.3/administration/compact.md
new file mode 100644
index 00000000..ca2aaf57
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/compact.md
@@ -0,0 +1,65 @@
+---
+title: Compact a database
+---
+
+# Compact a database
+
+Database files can grow quickly as you use them, sometimes impeding performance.
+HarperDB has multiple compact features that can be used to reduce database file size and potentially improve performance.
+The compact process does not compress your data; instead, it makes your database file smaller by eliminating free space and fragmentation.
+
+There are two options that HarperDB offers for compacting a database.
+
+_Note: Some of the storage configuration (such as compression) cannot be updated on existing databases;
+this is where the following options are useful. They will create a new, compacted copy of the database with any updated configuration applied._
+
+More information on the storage configuration options can be [found here](../deployments/configuration#storage).
+
+### Copy compaction
+
+To prevent any record loss, it is recommended that HarperDB not be running when performing this operation.
+
+This will copy a HarperDB database with compaction. If you wish to use this new database in place of the original,
+you will need to move/rename it to the path of the original database.
+
+This command should be run in the [CLI](../deployments/harperdb-cli):
+
+```bash
+harperdb copy-db <database> <destination-path>
+```
+For example, to copy the default database:
+```bash
+harperdb copy-db data /home/user/hdb/database/copy.mdb
+```
+
+### Compact on start
+
+Compact on start is a more automated option that will compact __all__ databases when HarperDB is started. HarperDB will
+not start until compaction is complete. Under the hood, it loops through all non-system databases,
+creates a backup of each one and calls copy-db. After the copy/compaction is complete it will move the new database
+to where the original one is located and remove any backups.
+
+Compact on start is initiated by config in `harperdb-config.yaml`.
+
+_Note: Compact on start will switch `compactOnStart` to `false` after it has run_
+
+`compactOnStart` - _Type_: boolean; _Default_: false
+
+`compactOnStartKeepBackup` - _Type_: boolean; _Default_: false
+
+```yaml
+storage:
+ compactOnStart: true
+ compactOnStartKeepBackup: false
+```
+
+Using CLI variables:
+
+```bash
+--STORAGE_COMPACTONSTART true --STORAGE_COMPACTONSTARTKEEPBACKUP true
+```
+
+Or using environment variables:
+
+```bash
+STORAGE_COMPACTONSTART=true
+STORAGE_COMPACTONSTARTKEEPBACKUP=true
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/administration/harperdb-studio/create-account.md b/site/versioned_docs/version-4.3/administration/harperdb-studio/create-account.md
new file mode 100644
index 00000000..635de7f4
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/harperdb-studio/create-account.md
@@ -0,0 +1,26 @@
+---
+title: Create a Studio Account
+---
+
+# Create a Studio Account
+Start at the [HarperDB Studio sign up page](https://studio.harperdb.io/sign-up).
+
+1) Provide the following information:
+ * First Name
+ * Last Name
+ * Email Address
+ * Subdomain
+
+ *Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: https://c1-demo.harperdbcloud.com.*
+ * Coupon Code (optional)
+2) Review the Privacy Policy and Terms of Service.
+3) Click the sign up for free button.
+4) You will be taken to a new screen to add an account password. Enter your password.
+ *Passwords must be a minimum of 8 characters with at least 1 lower case character, 1 upper case character, 1 number, and 1 special character.*
+5) Click the add account password button.
+
+You will receive a Studio welcome email confirming your registration.
+
+
+
+Note: Your email address will be used as your username and cannot be changed.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/administration/harperdb-studio/enable-mixed-content.md b/site/versioned_docs/version-4.3/administration/harperdb-studio/enable-mixed-content.md
new file mode 100644
index 00000000..1948d6be
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/harperdb-studio/enable-mixed-content.md
@@ -0,0 +1,11 @@
+---
+title: Enable Mixed Content
+---
+
+# Enable Mixed Content
+
+Enabling mixed content is required in cases where you would like to connect the HarperDB Studio to HarperDB Instances via HTTP. This should not be used for production systems, but may be convenient for development and testing purposes. Doing so will allow your browser to reach HTTP traffic, which is considered insecure, through an HTTPS site like the Studio.
+
+
+
+A comprehensive guide is provided by Adobe [here](https://experienceleague.adobe.com/docs/target/using/experiences/vec/troubleshoot-composer/mixed-content.html).
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/administration/harperdb-studio/index.md b/site/versioned_docs/version-4.3/administration/harperdb-studio/index.md
new file mode 100644
index 00000000..8765927c
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/harperdb-studio/index.md
@@ -0,0 +1,17 @@
+---
+title: HarperDB Studio
+---
+
+# HarperDB Studio
+HarperDB Studio is the web-based GUI for HarperDB. Studio enables you to administer, navigate, and monitor all of your HarperDB instances in a simple, user-friendly interface without any knowledge of the underlying HarperDB API. It’s free to sign up, get started today!
+
+[Sign up for free!](https://studio.harperdb.io/sign-up)
+
+HarperDB now includes a simplified local Studio that is packaged with all HarperDB installations and served directly from the instance. It can be enabled in the [configuration file](../../deployments/configuration#localstudio). This section is dedicated to the hosted Studio accessed at [studio.harperdb.io](https://studio.harperdb.io).
+
+---
+## How does Studio Work?
+While HarperDB Studio is web based and hosted by us, all database interactions are performed on the HarperDB instance the Studio is connected to. The HarperDB Studio loads in your browser, at which point you log in to your HarperDB instances. Credentials are stored in your browser cache and are not transmitted back to HarperDB. All database interactions are made via the HarperDB Operations API directly from your browser to your instance.
+
+## What type of instances can I manage?
+HarperDB Studio enables users to manage both HarperDB Cloud instances and privately hosted instances all from a single UI. All HarperDB instances feature identical behavior whether they are hosted by us or by you.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/administration/harperdb-studio/instance-configuration.md b/site/versioned_docs/version-4.3/administration/harperdb-studio/instance-configuration.md
new file mode 100644
index 00000000..ec800055
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/harperdb-studio/instance-configuration.md
@@ -0,0 +1,125 @@
+---
+title: Instance Configuration
+---
+
+# Instance Configuration
+
+HarperDB instance configuration can be viewed and managed directly through the HarperDB Studio. HarperDB Cloud instances can be resized in two different ways via this page, either by modifying machine RAM or by increasing drive storage. Enterprise instances can have their licenses adjusted by modifying licensed RAM.
+
+
+
+All instance configuration is handled through the **config** page of the HarperDB Studio, accessed with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click config in the instance control bar.
+
+*Note, the **config** page will only be available to super users and certain items are restricted to Studio organization owners.*
+
+## Instance Overview
+
+The **instance overview** panel displays the following instance specifications:
+
+* Instance URL
+
+* Applications URL
+
+* Instance Node Name (for clustering)
+
+* Instance API Auth Header (this user)
+
+ *The Basic authentication header used for the logged in HarperDB database user*
+
+* Created Date (HarperDB Cloud only)
+
+* Region (HarperDB Cloud only)
+
+ *The geographic region where the instance is hosted.*
+
+* Total Price
+
+* RAM
+
+* Storage (HarperDB Cloud only)
+
+* Disk IOPS (HarperDB Cloud only)
+
+## Update Instance RAM
+
+HarperDB Cloud instance size and Enterprise instance licenses can be modified with the following instructions. This option is only available to Studio organization owners.
+
+
+
+Note: For HarperDB Cloud instances, upgrading RAM may add additional CPUs to your instance as well. Click here to see how many CPUs are provisioned for each instance size.
+
+1) In the **update ram** panel at the bottom left:
+
+ * Select the new instance size.
+
+ * If you do not have a credit card associated with your account, an **Add Credit Card To Account** button will appear. Click that to be taken to the billing screen where you can enter your credit card information before returning to the **config** tab to proceed with the upgrade.
+
+ * If you do have a credit card associated, you will be presented with the updated billing information.
+
+ * Click **Upgrade**.
+
+2) The instance will shut down and begin reprovisioning/relicensing itself. The instance will not be available during this time. You will be returned to the instance dashboard and the instance status will show UPDATING INSTANCE.
+
+3) Once your instance upgrade is complete, it will appear on the instance dashboard as status OK with your newly selected instance size.
+
+*Note, if HarperDB Cloud instance reprovisioning takes longer than 20 minutes, please submit a support ticket here: https://harperdbhelp.zendesk.com/hc/en-us/requests/new.*
+
+## Update Instance Storage
+
+The HarperDB Cloud instance storage size can be increased with the following instructions. This option is only available to Studio organization owners.
+
+Note: Instance storage can only be upgraded once every 6 hours.
+
+1) In the **update storage** panel at the bottom left:
+
+ * Select the new instance storage size.
+
+ * If you do not have a credit card associated with your account, an **Add Credit Card To Account** button will appear. Click that to be taken to the billing screen where you can enter your credit card information before returning to the **config** tab to proceed with the upgrade.
+
+ * If you do have a credit card associated, you will be presented with the updated billing information.
+
+ * Click **Upgrade**.
+
+2) The instance will shut down and begin reprovisioning itself. The instance will not be available during this time. You will be returned to the instance dashboard and the instance status will show UPDATING INSTANCE.
+
+3) Once your instance upgrade is complete, it will appear on the instance dashboard as status OK with your newly selected instance size.
+
+*Note, if this process takes longer than 20 minutes, please submit a support ticket here: https://harperdbhelp.zendesk.com/hc/en-us/requests/new.*
+
+## Remove Instance
+
+The HarperDB instance can be deleted/removed from the Studio with the following instructions. Once this operation is started it cannot be undone. This option is only available to Studio organization owners.
+
+1) In the **remove instance** panel at the bottom left:
+ * Enter the instance name in the text box.
+
+ * The Studio will present you with a warning.
+
+ * Click **Remove**.
+
+2) The instance will begin deleting immediately.
+
+## Restart Instance
+
+The HarperDB Cloud instance can be restarted with the following instructions.
+
+1) In the **restart instance** panel at the bottom right:
+ * Enter the instance name in the text box.
+
+ * The Studio will present you with a warning.
+
+ * Click **Restart**.
+
+2) The instance will begin restarting immediately.
+
+## Instance Config (Read Only)
+
+A JSON preview of the instance config is available for reference at the bottom of the page. This is a read only visual and is not editable via the Studio. To make changes to the instance config, review the [configuration file documentation](../../deployments/configuration#using-the-configuration-file-and-naming-conventions).
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/administration/harperdb-studio/instance-metrics.md b/site/versioned_docs/version-4.3/administration/harperdb-studio/instance-metrics.md
new file mode 100644
index 00000000..f084df63
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/harperdb-studio/instance-metrics.md
@@ -0,0 +1,16 @@
+---
+title: Instance Metrics
+---
+
+# Instance Metrics
+
+The HarperDB Studio displays instance status and metrics on the instance status page, which can be accessed with the following instructions:
+
+1. Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+1. Click the appropriate organization that the instance belongs to.
+1. Select your desired instance.
+1. Click **status** in the instance control bar.
+
+Once on the instance status page, you can view host system information, [HarperDB logs](../logging/standard-logging), and [HarperDB Cloud alarms](../../deployments/harperdb-cloud/alarms) (if it is a cloud instance).
+
+_Note, the **status** page will only be available to super users._
diff --git a/site/versioned_docs/version-4.3/administration/harperdb-studio/instances.md b/site/versioned_docs/version-4.3/administration/harperdb-studio/instances.md
new file mode 100644
index 00000000..548deb5a
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/harperdb-studio/instances.md
@@ -0,0 +1,131 @@
+---
+title: Instances
+---
+
+# Instances
+
+The HarperDB Studio allows you to administer all of your HarperDB instances in one place. HarperDB currently offers the following instance types:
+
+* **HarperDB Cloud Instance** Managed installations of HarperDB, what we call [HarperDB Cloud](../../deployments/harperdb-cloud/).
+* **5G Wavelength Instance** Managed installations of HarperDB running on the Verizon network through AWS Wavelength, what we call [5G Wavelength Instances](../../deployments/harperdb-cloud/verizon-5g-wavelength-instances). _Note, these instances are only accessible via the Verizon network._
+* **Enterprise Instance** Any HarperDB installation that is managed by you. These include instances hosted within your cloud provider accounts (for example, from the AWS or Digital Ocean Marketplaces), privately hosted instances, or instances installed locally.
+
+All interactions between the Studio and your instances take place directly from your browser. HarperDB stores metadata about your instances, which enables the Studio to display these instances when you log in. Beyond that, all traffic is routed from your browser to the HarperDB instances using the standard [HarperDB API](../../developers/operations-api/).
+
+## Organization Instance List
+
+A summary view of all instances within an organization can be viewed by clicking on the appropriate organization from the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page. Each instance gets its own card. HarperDB Cloud and Enterprise instances are listed together.
+
+## Create a New Instance
+
+1. Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+1. Click the appropriate organization for the instance to be created under.
+1. Click the **Create New HarperDB Cloud Instance + Register Enterprise Instance** card.
+1. Select your desired Instance Type.
+1. For a HarperDB Cloud Instance or a HarperDB 5G Wavelength Instance, click **Create HarperDB Cloud Instance**.
+ 1. Fill out Instance Info.
+ 1. Enter Instance Name
+
+ _This will be used to build your instance URL. For example, with subdomain “demo” and instance name “c1” the instance URL would be: https:/c1-demo.harperdbcloud.com. The Instance URL will be previewed below._
+ 1. Enter Instance Username
+
+ _This is the username of the initial HarperDB instance super user._
+ 1. Enter Instance Password
+
+ _This is the password of the initial HarperDB instance super user._
+ 1. Click **Instance Details** to move to the next page.
+ 1. Select Instance Specs
+ 1. Select Instance RAM
+
+ _HarperDB Cloud Instances are billed based on Instance RAM, this will select the size of your provisioned instance._ [_More on instance specs_](../../deployments/harperdb-cloud/instance-size-hardware-specs)_._
+ 1. Select Storage Size
+
+ _Each instance has a mounted storage volume where your HarperDB data will reside. Storage is provisioned based on space and IOPS._ [_More on IOPS Impact on Performance_](../../deployments/harperdb-cloud/iops-impact)_._
+ 1. Select Instance Region
+
+ _The geographic area where your instance will be provisioned._
+ 1. Click **Confirm Instance Details** to move to the next page.
+ 1. Review your Instance Details, if there is an error, use the back button to correct it.
+ 1. Review the [Privacy Policy](https:/harperdb.io/legal/privacy-policy/) and [Terms of Service](https:/harperdb.io/legal/harperdb-cloud-terms-of-service/), if you agree, click the **I agree** radio button to confirm.
+ 1. Click **Add Instance**.
+ 1. Your HarperDB Cloud instance will be provisioned in the background. Provisioning typically takes 5-15 minutes. You will receive an email notification when your instance is ready.
+
+
+## Register Enterprise Instance
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+2) Click the appropriate organization for the instance to be created under.
+3) Click the **Create New HarperDB Cloud Instance + Register Enterprise Instance** card.
+4) Select **Register Enterprise Instance**.
+ 1. Fill out Instance Info.
+ 1. Enter Instance Name
+
+ _This is used for descriptive purposes only._
+ 1. Enter Instance Username
+
+ _The username of a HarperDB super user that is already configured in your HarperDB installation._
+ 1. Enter Instance Password
+
+ _The password of a HarperDB super user that is already configured in your HarperDB installation._
+ 1. Enter Host
+
+ _The host to access the HarperDB instance. For example, `harperdb.myhost.com` or `localhost`._
+ 1. Enter Port
+
+ _The port to access the HarperDB instance. HarperDB defaults `9925` for HTTP and `31283` for HTTPS._
+ 1. Select SSL
+
+ _If your instance is running over SSL, select the SSL checkbox. If not, you will need to enable mixed content in your browser to allow the HTTPS Studio to access the HTTP instance. If there are issues connecting to the instance, the Studio will display a red error message._
+ 1. Click **Instance Details** to move to the next page.
+ 1. Select Instance Specs
+ 1. Select Instance RAM
+
+ _HarperDB instances are billed based on Instance RAM. Selecting additional RAM will enable the ability for faster and more complex queries._
+ 1. Click **Confirm Instance Details** to move to the next page.
+ 1. Review your Instance Details, if there is an error, use the back button to correct it.
+ 1. Review the [Privacy Policy](https:/harperdb.io/legal/privacy-policy/) and [Terms of Service](https:/harperdb.io/legal/harperdb-cloud-terms-of-service/), if you agree, click the **I agree** radio button to confirm.
+ 1. Click **Add Instance**.
+ 1. The HarperDB Studio will register your instance and restart it for the registration to take effect. Your instance will be immediately available after this is complete.
+
+## Delete an Instance
+
+Instance deletion has two different behaviors depending on the instance type.
+
+* **HarperDB Cloud Instance** This instance will be permanently deleted, including all data. This process is irreversible and cannot be undone.
+* **Enterprise Instance** The instance will be removed from the HarperDB Studio only. This does not uninstall HarperDB from your system and your data will remain intact.
+
+An instance can be deleted as follows:
+
+1. Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+1. Click the appropriate organization that the instance belongs to.
+1. Identify the proper instance card and click the trash can icon.
+1. Enter the instance name into the text box.
+
+ _This is done for confirmation purposes to ensure you do not accidentally delete an instance._
+1. Click the **Do It** button.
+
+## Upgrade an Instance
+
+HarperDB instances can be resized on the [Instance Configuration](./instance-configuration) page.
+
+## Instance Log In/Log Out
+
+The Studio enables users to log in and out of different database users from the instance control panel. To log out of an instance:
+
+1. Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+1. Click the appropriate organization that the instance belongs to.
+1. Identify the proper instance card and click the lock icon.
+1. You will immediately be logged out of the instance.
+
+To log in to an instance:
+
+1. Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+1. Click the appropriate organization that the instance belongs to.
+1. Identify the proper instance card (it will have an unlocked icon and a status reading PLEASE LOG IN) and click the center of the card.
+1. Enter the database username.
+
+ _The username of a HarperDB user that is already configured in your HarperDB instance._
+1. Enter the database password.
+
+ _The password of a HarperDB user that is already configured in your HarperDB instance._
+1. Click **Log In**.
diff --git a/site/versioned_docs/version-4.3/administration/harperdb-studio/login-password-reset.md b/site/versioned_docs/version-4.3/administration/harperdb-studio/login-password-reset.md
new file mode 100644
index 00000000..dddda5c1
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/harperdb-studio/login-password-reset.md
@@ -0,0 +1,42 @@
+---
+title: Login and Password Reset
+---
+
+# Login and Password Reset
+
+## Log In to Your HarperDB Studio Account
+
+To log into your existing HarperDB Studio account:
+
+1) Navigate to the [HarperDB Studio](https://studio.harperdb.io/).
+2) Enter your email address.
+3) Enter your password.
+4) Click **sign in**.
+
+## Reset a Forgotten Password
+
+To reset a forgotten password:
+
+1) Navigate to the HarperDB Studio password reset page.
+2) Enter your email address.
+3) Click **send password reset email**.
+4) If the account exists, you will receive an email with a temporary password.
+5) Navigate back to the HarperDB Studio login page.
+6) Enter your email address.
+7) Enter your temporary password.
+8) Click **sign in**.
+9) You will be taken to a new screen to reset your account password. Enter your new password.
+*Passwords must be a minimum of 8 characters with at least 1 lower case character, 1 upper case character, 1 number, and 1 special character.*
+10) Click the **add account password** button.
+
+## Change Your Password
+
+If you are already logged into the Studio, you can change your password through the user interface.
+
+1) Navigate to the HarperDB Studio profile page.
+2) In the **password** section, enter:
+
+ * Current password.
+ * New password.
+ * New password again *(for verification)*.
+3) Click the **Update Password** button.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-applications.md b/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-applications.md
new file mode 100644
index 00000000..57126c96
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-applications.md
@@ -0,0 +1,61 @@
+---
+title: Manage Applications
+---
+
+# Manage Applications
+
+[HarperDB Applications](../../developers/applications/) are enabled by default and can be configured further through the HarperDB Studio. It is recommended to read through the [Applications](../../developers/applications/) documentation first to gain a strong understanding of HarperDB Applications behavior.
+
+All Applications configuration and development is handled through the **applications** page of the HarperDB Studio, accessed with the following instructions:
+
+1) Navigate to the HarperDB Studio Organizations page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click **applications** in the instance control bar.
+
+*Note, the **applications** page will only be available to super users.*
+
+## Manage Applications
+
+The Applications editor is not required for development and deployment, though it is a useful tool to maintain and manage your HarperDB Applications. The editor provides the ability to create new applications or import/deploy remote application packages.
+
+The left bar is the applications file navigator, allowing you to select files to edit and add/remove files and folders. By default, this view is empty because there are no existing applications. To get started, either create a new application or import/deploy a remote application.
+
+The right side of the screen is the file editor. Here you can edit individual files of your application directly in the HarperDB Studio.
+
+## Things to Keep in Mind
+To learn more about developing HarperDB Applications, make sure to read through the [Applications](../../developers/applications/) documentation.
+
+When working with Applications in the HarperDB Studio, by default the editor will restart the HarperDB Applications server every time a file is saved. Note, this behavior can be turned off by toggling the `auto` toggle at the top right of the applications page. If you are constantly editing your application, it may result in errors causing the application not to run. These errors will not be visible on the application page; however, they will be available in the HarperDB logs, which can be found on the [status page](./instance-metrics).
+
+The Applications editor stores unsaved changes in cache. This means that occasionally your editor will show a discrepancy from the code that is stored and running on your HarperDB instance. You can tell that the code in your editor differs from the stored version when the "save" and "revert" buttons are active. To revert the cached version in your editor to the version of the file stored on your HarperDB instance, click the "revert" button.
+
+## Accessing Your Application Endpoints
+Accessing your application endpoints varies with the type of endpoint you're creating. All endpoints, regardless of type, are accessed via the [HarperDB HTTP port found in the HarperDB configuration file](../../deployments/configuration#http). The default port is `9926`, but you can verify what your instance is set to by navigating to the [instance config page](./instance-configuration) and examining the read-only JSON version of your instance's config file, looking specifically for either the `http: port: 9926` or `http: securePort: 9926` setting. If `port` is set, you will access your endpoints via `http`; if `securePort` is set, you will access your endpoints via `https`.
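+
+For reference, a minimal sketch of the relevant section of `harperdb-config.yaml` (the `9926` value is illustrative; your instance may be configured with a different port):
+
+```yaml
+http:
+  securePort: 9926   # if securePort is set, endpoints are served over https
+```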
+
+Below is a breakdown of how to access each type of endpoint. In these examples, we will use a locally hosted instance with `securePort` set to `9926`: `https://localhost:9926`.
+
+- **Standard REST Endpoints**\
+Standard REST endpoints are defined via the `@export` directive on tables in your schema definition. You can read more about these in the [Adding an Endpoint section of the Applications documentation](../../developers/applications/#adding-an-endpoint). Here, if we are looking to access a record with ID `1` from table `Dog` on our instance, [per the REST documentation](../../developers/rest), we could send a `GET` request to `https://localhost:9926/Dog/1` (or, since this is a GET, simply enter the URL in our browser).
+- **Augmented REST Endpoints**\
+HarperDB Applications enable you to write [Custom Functionality with JavaScript](../../developers/applications/#custom-functionality-with-javascript) for your resources. Accessing these endpoints is identical to accessing the standard REST endpoints above, though you may have defined custom behavior in each function. Taking the example from the [Applications documentation](../../developers/applications/#custom-functionality-with-javascript), if we are looking to access the `DogWithHumanAge` example, we could send the GET to `https://localhost:9926/DogWithHumanAge/1`.
+- **Fastify Routes**\
+If you need more functionality than the REST endpoints can provide, you can define your own custom endpoints using [Fastify Routes](../../developers/applications/#define-fastify-routes). The paths to these routes are defined via the application `config.yaml` file. You can read more about how you can customize the configuration options in the [Define Fastify Routes documentation](../../developers/applications/define-routes). By default, routes are accessed via the following pattern: `[Instance URL]:[HTTP Port]/[Project Name]/[Route URL]`. Using the example from the [HarperDB Application Template](https://github.com/HarperDB/application-template/blob/main/routes/index.js), where we've named our project `application-template`, we would access the `getAll` route at `https://localhost:9926/application-template/getAll`.
+
+
+## Creating a New Application
+
+1) From the application page, click the "+ app" button at the top right.
+2) Click "+ Create A New Application Using The Default Template".
+3) Enter a name for your project. Note, project names must contain only alphanumeric characters, dashes, and underscores.
+4) Click OK.
+5) Your project will be available in the applications file navigator on the left. Click a file to select it for editing.
+
+## Editing an Application
+
+1) From the applications page, click the file you would like to edit from the file navigator on the left.
+2) Edit the file with any changes you'd like.
+3) Click "save" at the top right. Note, as mentioned above, when you save a file, the HarperDB Applications server will be restarted immediately.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-charts.md b/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-charts.md
new file mode 100644
index 00000000..cb73ae99
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-charts.md
@@ -0,0 +1,65 @@
+---
+title: Manage Charts
+---
+
+# Manage Charts
+
+The HarperDB Studio includes a charting feature within an instance. Charts are generated in real time based on your existing data and automatically refreshed every 15 seconds. Instance charts can be accessed with the following instructions:
+
+1. Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+1. Click the appropriate organization that the instance belongs to.
+1. Select your desired instance.
+1. Click **charts** in the instance control bar.
+
+## Creating a New Chart
+
+Charts are generated based on SQL queries, therefore to build a new chart you first need to build a query. Instructions as follows (starting on the charts page described above):
+
+1. Click **query** in the instance control bar.
+1. Enter the SQL query you would like to generate a chart from.
+
+ _For example, using the dog demo data from the API Docs, we can get the average dog age per owner with the following query: `SELECT AVG(age) as avg_age, owner_name FROM dev.dog GROUP BY owner_name`._
+1. Click **Execute**.
+1. Click **create chart** at the top right of the results table.
+1. Configure your chart.
+ 1. Choose chart type.
+
+ _HarperDB Studio offers many standard charting options like line, bar, etc._
+ 1. Choose a data column.
+
+ _This column will be used to plot the data point. Typically, this is the values being calculated in the `SELECT` statement. Depending on the chart type, you can select multiple data columns to display on a single chart._
+ 1. Depending on the chart type, you will need to select a grouping.
+
+ _This could be labeled as x-axis, label, etc. This will be used to group the data, typically this is what you used in your **GROUP BY** clause._
+ 1. Enter a chart name.
+
+ _Used for identification purposes and will be displayed at the top of the chart._
+ 1. Choose visible to all org users toggle.
+
+ _Leaving this option off will limit chart visibility to just your HarperDB Studio user. Toggling it on will enable all users with this Organization to view this chart._
+ 1. Click **Add Chart**.
+ 1. The chart will now be visible on the **charts** page.
+
+The example query above, configured as a bar chart, results in the following chart:
+
+
+
+## Downloading Charts
+
+HarperDB Studio charts can be downloaded in SVG, PNG, and CSV format. Instructions as follows (starting on the charts page described above):
+
+1. Identify the chart you would like to export.
+1. Click the three bars icon.
+1. Select the appropriate download option.
+1. The Studio will generate the export and begin downloading immediately.
+
+## Delete a Chart
+
+Delete a chart as follows (starting on the charts page described above):
+
+1. Identify the chart you would like to delete.
+1. Click the X icon.
+1. Click the **confirm delete chart** button.
+1. The chart will be deleted.
+
+Deleting a chart that is visible to all Organization users will delete it for all users.
diff --git a/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-databases-browse-data.md b/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-databases-browse-data.md
new file mode 100644
index 00000000..da302f70
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-databases-browse-data.md
@@ -0,0 +1,132 @@
+---
+title: Manage Databases / Browse Data
+---
+
+# Manage Databases / Browse Data
+
+Manage instance databases/tables and browse data in tabular format with the following instructions:
+
+1) Navigate to the HarperDB Studio Organizations page.
+2) Click the appropriate organization that the instance belongs to.
+3) Select your desired instance.
+4) Click **browse** in the instance control bar.
+
+Once on the instance browse page you can view data, manage databases and tables, add new data, and more.
+
+## Manage Databases and Tables
+
+#### Create a Database
+
+1) Click the plus icon at the top right of the databases section.
+2) Enter the database name.
+3) Click the green check mark.
+
+
+#### Delete a Database
+
+Deleting a database is permanent and irreversible. Deleting a database removes all tables and data within it.
+
+1) Click the minus icon at the top right of the databases section.
+2) Identify the appropriate database to delete and click the red minus sign in the same row.
+3) Click the red check mark to confirm deletion.
+
+
+#### Create a Table
+
+1) Select the desired database from the databases section.
+2) Click the plus icon at the top right of the tables section.
+3) Enter the table name.
+4) Enter the primary key.
+
+ *The primary key is also often referred to as the hash attribute in the studio, and it defines the unique identifier for each row in your table.*
+5) Click the green check mark.
+
+
+#### Delete a Table
+Deleting a table is permanent and irreversible. Deleting a table removes all data within it.
+
+1) Select the desired database from the databases section.
+2) Click the minus icon at the top right of the tables section.
+3) Identify the appropriate table to delete and click the red minus sign in the same row.
+4) Click the red check mark to confirm deletion.
+
+## Manage Table Data
+
+The following section assumes you have selected the appropriate table from the database/table browser.
+
+
+
+#### Filter Table Data
+
+1) Click the magnifying glass icon at the top right of the table browser.
+2) This expands the search filters.
+3) Enter your desired search criteria.
+4) The results will be filtered appropriately.
+
+
+#### Load CSV Data
+
+1) Click the data icon at the top right of the table browser. You will be directed to the CSV upload page where you can choose to import a CSV by URL or upload a CSV file.
+2) To import a CSV by URL:
+ 1) Enter the URL in the **CSV file URL** textbox.
+ 2) Click **Import From URL**.
+ 3) The CSV will load, and you will be redirected back to browse table data.
+3) To upload a CSV file:
+ 1) Click **Click or Drag to select a .csv file** (or drag your CSV file from your file browser).
+ 2) Navigate to your desired CSV file and select it.
+ 3) Click **Insert X Records**, where X is the number of records in your CSV.
+ 4) The CSV will load, and you will be redirected back to browse table data.
+
+
+#### Add a Record
+
+1) Click the plus icon at the top right of the table browser.
+2) The Studio will pre-populate existing table attributes in JSON format.
+
+ *The primary key is not included, but you can add it in and set it to your desired value. Auto-maintained fields are not included and cannot be manually set. You may enter a JSON array to insert multiple records in a single transaction.*
+3) Enter values to be added to the record.
+
+ *You may add new attributes to the JSON; they will be reflexively added to the table.*
+4) Click the **Add New** button.
+
+
+#### Edit a Record
+
+1) Click the record/row you would like to edit.
+2) Modify the desired values.
+
+ *You may add new attributes to the JSON; they will be reflexively added to the table.*
+
+3) Click the **save icon**.
+
+
+#### Delete a Record
+
+Deleting a record is permanent and irreversible. If transaction logging is turned on, the delete transaction will be recorded as well as the data that was deleted.
+
+1) Click the record/row you would like to delete.
+2) Click the **delete icon**.
+3) Confirm deletion by clicking the **check icon**.
+
+## Browse Table Data
+
+The following section assumes you have selected the appropriate table from the database/table browser.
+
+#### Browse Table Data
+
+The first page of table data is automatically loaded on table selection. Paging controls are at the bottom of the table. Here you can:
+
+* Page left and right using the arrows.
+* Type in the desired page.
+* Change the page size (the number of records displayed in the table).
+
+
+#### Refresh Table Data
+
+Click the refresh icon at the top right of the table browser.
+
+
+
+#### Automatically Refresh Table Data
+
+Toggle the auto switch at the top right of the table browser. The table data will now automatically refresh every 15 seconds. Filters and pages will remain set for refreshed data.
+
diff --git a/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-instance-roles.md b/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-instance-roles.md
new file mode 100644
index 00000000..f0aa72bb
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-instance-roles.md
@@ -0,0 +1,76 @@
+---
+title: Manage Instance Roles
+---
+
+# Manage Instance Roles
+
+HarperDB users and roles can be managed directly through the HarperDB Studio. It is recommended to read through the [users & roles documentation](../../developers/security/users-and-roles) to gain a strong understanding of how they operate.
+
+Instance role configuration is handled through the **roles** page of the HarperDB Studio, accessed with the following instructions:
+
+1) Navigate to the HarperDB Studio Organizations page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click **roles** in the instance control bar.
+
+*Note, the **roles** page will only be available to super users.*
+
+
+
+The *roles management* screen consists of the following panels:
+
+* **super users**
+
+ Displays all super user roles for this instance.
+* **cluster users**
+
+ Displays all cluster user roles for this instance.
+* **standard roles**
+
+ Displays all standard roles for this instance.
+* **role permission editing**
+
+ Once a role is selected for editing, permissions will be displayed here in JSON format.
+
+*Note, when new tables are added that are not configured, the Studio will generate configuration values with permissions defaulting to `false`.*
+
+## Role Management
+
+#### Create a Role
+
+1) Click the plus icon at the top right of the appropriate role section.
+
+2) Enter the role name.
+
+3) Click the green check mark.
+
+4) Optionally toggle the **manage databases/tables** switch to specify the `structure_user` config.
+
+5) Configure the role permissions in the role permission editing panel.
+
+ *Note, to have the Studio generate attribute permissions JSON, toggle **show all attributes** at the top right of the role permission editing panel.*
+
+6) Click **Update Role Permissions**.
+
+#### Modify a Role
+
+1) Click the appropriate role from the appropriate role section.
+
+2) Modify the role permissions in the role permission editing panel.
+
+ *Note, to have the Studio generate attribute permissions JSON, toggle **show all attributes** at the top right of the role permission editing panel.*
+
+3) Click **Update Role Permissions**.
+
+#### Delete a Role
+
+Deleting a role is permanent and irreversible. A role cannot be removed if users are associated with it.
+
+1) Click the minus icon at the top right of the roles section.
+
+2) Identify the appropriate role to delete and click the red minus sign in the same row.
+
+3) Click the red check mark to confirm deletion.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-instance-users.md b/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-instance-users.md
new file mode 100644
index 00000000..02a0a32d
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-instance-users.md
@@ -0,0 +1,61 @@
+---
+title: Manage Instance Users
+---
+
+# Manage Instance Users
+
+HarperDB users and roles can be managed directly through the HarperDB Studio. It is recommended to read through the [users & roles documentation](../../developers/security/users-and-roles) to gain a strong understanding of how they operate.
+
+Instance user configuration is handled through the **users** page of the HarperDB Studio, accessed with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click **users** in the instance control bar.
+
+*Note, the **users** page will only be available to super users.*
+
+## Add a User
+
+HarperDB instance users can be added with the following instructions.
+
+1) In the **add user** panel on the left enter:
+
+ * New user username.
+
+ * New user password.
+
+ * Select a role.
+
+ *Learn more about role management here: [Manage Instance Roles](./manage-instance-roles).*
+
+2) Click **Add User**.
+
+## Edit a User
+
+HarperDB instance users can be modified with the following instructions.
+
+1) In the **existing users** panel, click the row of the user you would like to edit.
+
+2) To change a user’s password:
+
+ 1) In the **Change user password** section, enter the new password.
+
+ 2) Click **Update Password**.
+
+3) To change a user’s role:
+
+ 1) In the **Change user role** section, select the new role.
+
+ 2) Click **Update Role**.
+
+4) To delete a user:
+
+ 1) In the **Delete User** section, type the username into the textbox.
+
+ *This is done for confirmation purposes.*
+
+ 2) Click **Delete User**.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-replication.md b/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-replication.md
new file mode 100644
index 00000000..3ee158bd
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/harperdb-studio/manage-replication.md
@@ -0,0 +1,89 @@
+---
+title: Manage Replication
+---
+
+# Manage Replication
+
+HarperDB instance clustering and replication can be configured directly through the HarperDB Studio. It is recommended to read through the [clustering documentation](../../developers/clustering/) first to gain a strong understanding of HarperDB clustering behavior.
+
+
+
+All clustering configuration is handled through the **replication** page of the HarperDB Studio, accessed with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+
+2) Click the appropriate organization that the instance belongs to.
+
+3) Select your desired instance.
+
+4) Click **replication** in the instance control bar.
+
+*Note, the **replication** page will only be available to super users.*
+
+---
+## Initial Configuration
+
+HarperDB instances do not have clustering configured by default. The HarperDB Studio will walk you through the initial configuration. Upon entering the **replication** screen for the first time you will need to complete the following configuration. Configurations are set in the **enable clustering** panel on the left while actions are described in the middle of the screen. It is worth reviewing the [Creating a Cluster User](../../developers/clustering/creating-a-cluster-user) document before proceeding.
+
+1) Enter Cluster User username. (Defaults to `cluster_user`).
+2) Enter Cluster Password.
+3) Review and/or Set Cluster Node Name.
+4) Click **Enable Clustering**.
+
+At this point the Studio will restart your HarperDB Instance, required for the configuration changes to take effect.
+
+---
+
+## Manage Clustering
+Once initial clustering configuration is completed you are presented with a clustering management screen with the following properties:
+
+* **connected instances**
+
+ Displays all instances within the Studio Organization that this instance manages a connection with.
+
+* **unconnected instances**
+
+ Displays all instances within the Studio Organization that this instance does not manage a connection with.
+
+* **unregistered instances**
+
+ Displays all instances outside the Studio Organization that this instance manages a connection with.
+
+* **manage clustering**
+
+ Once instances are connected, this will display clustering management options for all connected instances and all databases and tables.
+---
+
+## Connect an Instance
+
+HarperDB Instances can be clustered together with the following instructions.
+
+1) Ensure clustering has been configured on both instances and a cluster user with identical credentials exists on both.
+
+2) Identify the instance you would like to connect from the **unconnected instances** panel.
+
+3) Click the plus icon next to the appropriate instance.
+
+4) If configurations are correct, all databases will sync across the cluster, then appear in the **manage clustering** panel. If there is a configuration issue, a red exclamation icon will appear, click it to learn more about what could be causing the issue.
+
+---
+
+## Disconnect an Instance
+
+HarperDB Instances can be disconnected with the following instructions.
+
+1) Identify the instance you would like to disconnect from the **connected instances** panel.
+
+2) Click the minus icon next to the appropriate instance.
+
+---
+
+## Manage Replication
+
+Subscriptions must be configured in order to move data between connected instances. Read more about subscriptions in the Creating A Subscription section of the [clustering documentation](../../developers/clustering/). The **manage clustering** panel displays a table with each row representing a channel per instance. Cells are bolded to indicate a change in the column. Publish and subscribe replication can be configured per table with the following instructions:
+
+1) Identify the instance, database, and table for replication to be configured.
+
+2) For publish, click the toggle switch in the **publish** column.
+
+3) For subscribe, click the toggle switch in the **subscribe** column.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/administration/harperdb-studio/organizations.md b/site/versioned_docs/version-4.3/administration/harperdb-studio/organizations.md
new file mode 100644
index 00000000..888469d7
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/harperdb-studio/organizations.md
@@ -0,0 +1,105 @@
+---
+title: Organizations
+---
+
+# Organizations
+HarperDB Studio organizations provide the ability to group HarperDB Cloud Instances. Organization behavior is as follows:
+
+* Billing occurs at the organization level to a single credit card.
+* Organizations retain their own unique HarperDB Cloud subdomain.
+* Cloud instances reside within an organization.
+* Studio users can be invited to organizations to share instances.
+
+
+An organization is automatically created for you when you sign up for HarperDB Studio. If you only have one organization, the Studio will automatically bring you to your organization’s page.
+
+---
+
+## List Organizations
+A summary view of all organizations your user belongs to can be viewed on the [HarperDB Studio Organizations](https://studio.harperdb.io/?redirect=/organizations) page. You can navigate to this page at any time by clicking the **all organizations** link at the top of the HarperDB Studio.
+
+## Create a New Organization
+A new organization can be created as follows:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/?redirect=/organizations) page.
+2) Click the **Create a New Organization** card.
+3) Fill out new organization details
+ * Enter Organization Name
+ *This is used for descriptive purposes only.*
+ * Enter Organization Subdomain
+ *Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: https:/c1-demo.harperdbcloud.com.*
+4) Click Create Organization.
+
+## Delete an Organization
+An organization cannot be deleted until all instances have been removed. An organization can be deleted as follows:
+
+1) Navigate to the HarperDB Studio Organizations page.
+2) Identify the proper organization card and click the trash can icon.
+3) Enter the organization name into the text box.
+
+ *This is done for confirmation purposes to ensure you do not accidentally delete an organization.*
+4) Click the **Do It** button.
+
+## Manage Users
+HarperDB Studio organization owners can manage users including inviting new users, removing users, and toggling ownership.
+
+
+
+#### Inviting a User
+A new user can be invited to an organization as follows:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/?redirect=/organizations) page.
+2) Click the appropriate organization card.
+3) Click **users** at the top of the screen.
+4) In the **add user** box, enter the new user’s email address.
+5) Click **Add User**.
+
+Users may or may not already be HarperDB Studio users when adding them to an organization. If the HarperDB Studio account already exists, the user will receive an email notification alerting them to the organization invitation. If the user does not have a HarperDB Studio account, they will receive an email welcoming them to HarperDB Studio.
+
+---
+
+#### Toggle a User’s Organization Owner Status
+Organization owners have full access to the organization including the ability to manage organization users, create, modify, and delete instances, and delete the organization. Users must have accepted their invitation prior to being promoted to an owner. A user’s organization owner status can be toggled as follows:
+
+1) Navigate to the HarperDB Studio Organizations page.
+2) Click the appropriate organization card.
+3) Click **users** at the top of the screen.
+4) Click the appropriate user from the **existing users** section.
+5) Toggle the **Is Owner** switch to the desired status.
+---
+
+#### Remove a User from an Organization
+Users may be removed from an organization at any time. Removing a user from an organization will not delete their HarperDB Studio account, it will only remove their access to the specified organization. A user can be removed from an organization as follows:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/?redirect=/organizations) page.
+2) Click the appropriate organization card.
+3) Click **users** at the top of the screen.
+4) Click the appropriate user from the **existing users** section.
+5) Type **DELETE** in the text box in the **Delete User** row.
+
+ *This is done for confirmation purposes to ensure you do not accidentally delete a user.*
+6) Click **Delete User**.
+
+## Manage Billing
+
+Billing is configured per organization and will be billed to the stored credit card at appropriate intervals (monthly or annually depending on the registered instance). Billing settings can be configured as follows:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/?redirect=/organizations) page.
+2) Click the appropriate organization card.
+3) Click **billing** at the top of the screen.
+
+Here organization owners can view invoices, manage coupons, and manage the associated credit card.
+
+
+
+*HarperDB billing and payments are managed via Stripe.*
+
+
+
+### Add a Coupon
+
+Coupons are applicable towards any paid tier or enterprise instance and you can change your subscription at any time. Coupons can be added to your Organization as follows:
+
+1) In the coupons panel of the **billing** page, enter your coupon code.
+2) Click **Add Coupon**.
+3) The coupon will then be available and displayed in the coupons panel.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/administration/harperdb-studio/query-instance-data.md b/site/versioned_docs/version-4.3/administration/harperdb-studio/query-instance-data.md
new file mode 100644
index 00000000..5c3ae28f
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/harperdb-studio/query-instance-data.md
@@ -0,0 +1,53 @@
+---
+title: Query Instance Data
+---
+
+# Query Instance Data
+
+SQL queries can be executed directly through the HarperDB Studio with the following instructions:
+
+1) Navigate to the [HarperDB Studio Organizations](https://studio.harperdb.io/organizations) page.
+2) Click the appropriate organization that the instance belongs to.
+3) Select your desired instance.
+4) Click **query** in the instance control bar.
+5) Enter your SQL query in the SQL query window.
+6) Click **Execute**.
+
+*Please note, the Studio will execute the query exactly as entered. For example, if you attempt to `SELECT *` from a table with millions of rows, you will most likely crash your browser.*
+
+## Browse Query Results Set
+
+#### Browse Results Set Data
+
+The first page of results set data is automatically loaded on query execution. Paging controls are at the bottom of the table. Here you can:
+
+* Page left and right using the arrows.
+* Type in the desired page.
+* Change the page size (the number of records displayed in the table).
+
+#### Refresh Results Set
+
+Click the refresh icon at the top right of the results set table.
+
+#### Automatically Refresh Results Set
+
+Toggle the auto switch at the top right of the results set table. The results set will now automatically refresh every 15 seconds. Filters and pages will remain set for refreshed data.
+
+## Query History
+
+Query history is stored in your local browser cache. Executed queries are listed with the most recent at the top in the **query history** section.
+
+
+#### Rerun Previous Query
+
+* Identify the query from the **query history** list.
+* Click the appropriate query. It will be loaded into the **sql query** input box.
+* Click **Execute**.
+
+#### Clear Query History
+
+Click the trash can icon at the top right of the **query history** section.
+
+## Create Charts
+
+The HarperDB Studio includes a charting feature where you can build charts based on your specified queries. Visit the [Manage Charts](./manage-charts) documentation for more information.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/administration/jobs.md b/site/versioned_docs/version-4.3/administration/jobs.md
new file mode 100644
index 00000000..44b755fe
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/jobs.md
@@ -0,0 +1,112 @@
+---
+title: Jobs
+---
+
+# Jobs
+
+HarperDB Jobs are asynchronous tasks performed by the Operations API.
+
+## Job Summary
+
+Jobs use an asynchronous methodology to account for the potential of a long-running operation. For example, exporting millions of records to S3 could take some time, so the job is started asynchronously and an id is provided to check on its status.
+
+The job status can be **COMPLETE** or **IN\_PROGRESS**.
+
+## Example Job Operations
+
+Example job operations include:
+
+[csv data load](../developers/operations-api/bulk-operations#csv-data-load)
+
+[csv file load](../developers/operations-api/bulk-operations#csv-file-load)
+
+[csv url load](../developers/operations-api/bulk-operations#csv-url-load)
+
+[import from s3](../developers/operations-api/bulk-operations#import-from-s3)
+
+[delete_records_before](../developers/operations-api/utilities#delete-records-before)
+
+[export_local](../developers/operations-api/utilities#export-local)
+
+[export_to_s3](../developers/operations-api/utilities#export-to-s3)
+
+Example Response from a Job Operation
+
+```
+{
+ "message": "Starting job with id 062a1892-6a0a-4282-9791-0f4c93b12e16"
+}
+```
+
+Whenever one of these operations is initiated, an asynchronous job is created and the response contains the ID of that job, which can be used to check on its status.
+
+## Managing Jobs
+
+To check on a job's status, use the [get_job](../developers/operations-api/jobs#get-job) operation.
+
+Get Job Request
+
+```
+{
+ "operation": "get_job",
+ "id": "4a982782-929a-4507-8794-26dae1132def"
+}
+```
+
+Get Job Response
+
+```
+[
+ {
+ "__createdtime__": 1611615798782,
+ "__updatedtime__": 1611615801207,
+ "created_datetime": 1611615798774,
+ "end_datetime": 1611615801206,
+ "id": "4a982782-929a-4507-8794-26dae1132def",
+ "job_body": null,
+ "message": "successfully loaded 350 of 350 records",
+ "start_datetime": 1611615798805,
+ "status": "COMPLETE",
+ "type": "csv_url_load",
+ "user": "HDB_ADMIN",
+ "start_datetime_converted": "2021-01-25T23:03:18.805Z",
+ "end_datetime_converted": "2021-01-25T23:03:21.206Z"
+ }
+]
+```
+
+## Finding Jobs
+
+To find jobs (if the ID is not known) use the [search_jobs_by_start_date](../developers/operations-api/jobs#search-jobs-by-start-date) operation.
+
+Search Jobs Request
+
+```
+{
+ "operation": "search_jobs_by_start_date",
+ "from_date": "2021-01-25T22:05:27.464+0000",
+ "to_date": "2021-01-25T23:05:27.464+0000"
+}
+```
+
+Search Jobs Response
+
+```
+[
+ {
+ "id": "942dd5cb-2368-48a5-8a10-8770ff7eb1f1",
+ "user": "HDB_ADMIN",
+ "type": "csv_url_load",
+ "status": "COMPLETE",
+ "start_datetime": 1611613284781,
+ "end_datetime": 1611613287204,
+ "job_body": null,
+ "message": "successfully loaded 350 of 350 records",
+ "created_datetime": 1611613284764,
+ "__createdtime__": 1611613284767,
+ "__updatedtime__": 1611613287207,
+ "start_datetime_converted": "2021-01-25T22:21:24.781Z",
+ "end_datetime_converted": "2021-01-25T22:21:27.204Z"
+ }
+]
+```
diff --git a/site/versioned_docs/version-4.3/administration/logging/audit-logging.md b/site/versioned_docs/version-4.3/administration/logging/audit-logging.md
new file mode 100644
index 00000000..11d552ec
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/logging/audit-logging.md
@@ -0,0 +1,135 @@
+---
+title: Audit Logging
+---
+
+# Audit Logging
+
+### Audit log
+
+The audit log uses a standard HarperDB table to track transactions. For each table a user creates, a corresponding table will be created to track transactions against that table.
+
+The audit log is enabled by default. To disable it, set `logging.auditLog` to false in the config file, `harperdb-config.yaml`, then restart HarperDB for the change to take effect. Note, the audit log is required to be enabled for real-time messaging.
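+
+As a sketch, the corresponding section of `harperdb-config.yaml` with the audit log disabled would look like this:
+
+```yaml
+logging:
+  auditLog: false   # set back to true to re-enable the audit log
+```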
+
+### Audit Log Operations
+
+#### read\_audit\_log
+
+The `read_audit_log` operation is flexible, enabling users to query with many parameters. All operations search on a single table. Filter options include timestamps, usernames, and table hash values. Additional examples can be found in the [HarperDB API documentation](../../developers/operations-api/logs).
+
+**Search by Timestamp**
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog",
+ "search_type": "timestamp",
+ "search_values": [
+ 1660585740558
+ ]
+}
+```
+
+There are three outcomes using timestamp.
+
+* `"search_values": []` - All records returned for specified table
+* `"search_values": [1660585740558]` - All records after the provided timestamp
+* `"search_values": [1660585740558, 1760585759710]` - Records between the provided "from" and "to" timestamps
+
+***
+
+**Search by Username**
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog",
+ "search_type": "username",
+ "search_values": [
+ "admin"
+ ]
+}
+```
+
+The above example will return all records whose `username` is "admin."
+
+***
+
+**Search by Primary Key**
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog",
+ "search_type": "hash_value",
+ "search_values": [
+ 318
+ ]
+}
+```
+
+The above example will return all records whose primary key (`hash_value`) is 318.
+
+***
+
+#### read\_audit\_log Response
+
+The example that follows provides records of operations performed on a table. One thing of note is that the `read_audit_log` operation gives you the `original_records`.
+
+```json
+{
+ "operation": "update",
+ "user_name": "HDB_ADMIN",
+ "timestamp": 1607035559122.277,
+ "hash_values": [
+ 1,
+ 2
+ ],
+ "records": [
+ {
+ "id": 1,
+ "breed": "Muttzilla",
+ "age": 6,
+ "__updatedtime__": 1607035559122
+ },
+ {
+ "id": 2,
+ "age": 7,
+ "__updatedtime__": 1607035559121
+ }
+ ],
+ "original_records": [
+ {
+ "__createdtime__": 1607035556801,
+ "__updatedtime__": 1607035556801,
+ "age": 5,
+ "breed": "Mutt",
+ "id": 2,
+ "name": "Penny"
+ },
+ {
+ "__createdtime__": 1607035556801,
+ "__updatedtime__": 1607035556801,
+ "age": 5,
+ "breed": "Mutt",
+ "id": 1,
+ "name": "Harper"
+ }
+ ]
+}
+```
+
+#### delete\_audit\_logs\_before
+
+Just like with transaction logs, you can clean up your audit logs with the `delete_audit_logs_before` operation. It will delete audit log data according to the given parameters. The example below will delete records older than the timestamp provided.
+
+```json
+{
+ "operation": "delete_audit_logs_before",
+ "schema": "dev",
+ "table": "cat",
+ "timestamp": 1598290282817
+}
+```
diff --git a/site/versioned_docs/version-4.3/administration/logging/index.md b/site/versioned_docs/version-4.3/administration/logging/index.md
new file mode 100644
index 00000000..2ed92774
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/logging/index.md
@@ -0,0 +1,11 @@
+---
+title: Logging
+---
+
+# Logging
+
+HarperDB provides many different logging options for various features and functionality.
+
+* [Standard Logging](./standard-logging): HarperDB maintains a log of events that take place throughout operation.
+* [Audit Logging](./audit-logging): HarperDB uses a standard HarperDB table to track transactions. For each table a user creates, a corresponding table will be created to track transactions against that table.
+* [Transaction Logging](./transaction-logging): HarperDB stores a verbose history of all transactions logged for specified database tables, including original data records.
diff --git a/site/versioned_docs/version-4.3/administration/logging/standard-logging.md b/site/versioned_docs/version-4.3/administration/logging/standard-logging.md
new file mode 100644
index 00000000..d586da1c
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/logging/standard-logging.md
@@ -0,0 +1,65 @@
+---
+title: Standard Logging
+---
+
+# Standard Logging
+
+HarperDB maintains a log of events that take place throughout operation. Log messages can be used for diagnostics purposes as well as monitoring.
+
+All logs (except for the install log) are stored in the main log file in the hdb directory `/log/hdb.log`. The install log is located in the HarperDB application directory most likely located in your npm directory `npm/harperdb/logs`.
+
+Each log message has several key components for consistent reporting of events. A log message has a format of:
+
+```
+<timestamp> [<level>] [<thread/id>] ...[<tags>]: <message>
+```
+
+For example, a typical log entry looks like:
+
+```
+2023-03-09T14:25:05.269Z [notify] [main/0]: HarperDB successfully started.
+```
+
+The components of a log entry are:
+
+* timestamp - This is the date/time stamp when the event occurred
+* level - This is an associated log level that gives a rough guide to the importance and urgency of the message. The available log levels in order of least urgent (and more verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`.
+* thread/ID - This reports the name of the thread and the thread ID that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process. Key threads are:
+ * main - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads
+ * http - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions.
+ * Clustering\* - These are threads and processes that handle replication.
+ * job - These are job threads that have been started to handle operations that are executed in a separate job thread.
+* tags - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags.
+* message - This is the main message that was reported.
+
+We try to keep logging to a minimum by default; to do this, the default log level is `error`. If you require more information from the logs, changing the log level to a more verbose setting (such as `info`, `debug`, or `trace`) will provide that.
+
+The log level can be changed by modifying `logging.level` in the config file `harperdb-config.yaml`.
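+
+For example, to get more verbose output you might set the following (a sketch; choose whichever level fits your needs):
+
+```yaml
+logging:
+  level: debug
+```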
+
+## Clustering Logging
+
+HarperDB clustering utilizes two [NATS](https://nats.io/) servers, named Hub and Leaf. The Hub server is responsible for establishing the mesh network that connects instances of HarperDB and the Leaf server is responsible for managing the message stores (streams) that replicate and store messages between instances. Due to the verbosity of these servers there is a separate log level configuration for them. To adjust their log verbosity, set `clustering.logLevel` in the config file `harperdb-config.yaml`. Valid log levels from least verbose are `error`, `warn`, `info`, `debug`, and `trace`.
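+
+As a sketch, the corresponding clustering setting in `harperdb-config.yaml` looks like:
+
+```yaml
+clustering:
+  logLevel: info
+```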
+
+## Log File vs Standard Streams
+
+HarperDB logs can optionally be streamed to standard streams. Logging to standard streams (stdout/stderr) is primarily used for container logging drivers. For more traditional installations, we recommend logging to a file. Logging to both standard streams and to a file can be enabled simultaneously. To log to standard streams effectively, run `harperdb` directly rather than starting it as a separate process (that is, don't use `harperdb start`), and set `logging.stdStreams` to true. Note, logging to standard streams only will disable clustering catchup.
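+
+A minimal sketch of enabling standard-stream logging in `harperdb-config.yaml`:
+
+```yaml
+logging:
+  stdStreams: true
+```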
+
+## Logging Rotation
+
+Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space. For more information see “logging” in our [config docs](../../deployments/configuration).
+
+## Read Logs via the API
+
+To access specific logs you may query the HarperDB API. Logs can be queried using the `read_log` operation. `read_log` returns outputs from the log based on the provided search criteria.
+
+```json
+{
+ "operation": "read_log",
+ "start": 0,
+ "limit": 1000,
+ "level": "error",
+ "from": "2021-01-25T22:05:27.464+0000",
+ "until": "2021-01-25T23:05:27.464+0000",
+ "order": "desc"
+}
+```
diff --git a/site/versioned_docs/version-4.3/administration/logging/transaction-logging.md b/site/versioned_docs/version-4.3/administration/logging/transaction-logging.md
new file mode 100644
index 00000000..a65c4714
--- /dev/null
+++ b/site/versioned_docs/version-4.3/administration/logging/transaction-logging.md
@@ -0,0 +1,87 @@
+---
+title: Transaction Logging
+---
+
+# Transaction Logging
+
+HarperDB offers two options for logging transactions executed against a table. The options are similar but utilize different storage layers.
+
+## Transaction log
+
+The first option is `read_transaction_log`. The transaction log is built upon clustering streams. Clustering streams are per-table message stores that enable data to be propagated across a cluster. HarperDB leverages streams for use with the transaction log. When clustering is enabled all transactions that occur against a table are pushed to its stream, and thus make up the transaction log.
+
+If you would like to use the transaction log, but have not set up clustering yet, please see ["How to Cluster"](../../developers/clustering/).
+
+## Transaction Log Operations
+
+### read\_transaction\_log
+
+The `read_transaction_log` operation returns a prescribed set of records, based on given parameters. The example below will give a maximum of 2 records within the timestamps provided.
+
+```json
+{
+ "operation": "read_transaction_log",
+ "schema": "dev",
+ "table": "dog",
+ "from": 1598290235769,
+ "to": 1660249020865,
+ "limit": 2
+}
+```
+
+_See example response below._
+
+### read\_transaction\_log Response
+
+```json
+[
+ {
+ "operation": "insert",
+ "user": "admin",
+ "timestamp": 1660165619736,
+ "records": [
+ {
+ "id": 1,
+ "dog_name": "Penny",
+ "owner_name": "Kyle",
+ "breed_id": 154,
+ "age": 7,
+ "weight_lbs": 38,
+ "__updatedtime__": 1660165619688,
+ "__createdtime__": 1660165619688
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user": "admin",
+ "timestamp": 1660165620040,
+ "records": [
+ {
+ "id": 1,
+ "dog_name": "Penny B",
+ "__updatedtime__": 1660165620036
+ }
+ ]
+ }
+]
+```
+
+_See example request above._
+
+### delete\_transaction\_logs\_before
+
+The `delete_transaction_logs_before` operation will delete transaction log data according to the given parameters. The example below will delete records older than the timestamp provided.
+
+```json
+{
+ "operation": "delete_transaction_logs_before",
+ "schema": "dev",
+ "table": "dog",
+ "timestamp": 1598290282817
+}
+```
+
+_Note: Streams are used for catchup if a node goes down. If you delete messages from a stream there is a chance catchup won't work._
+
+Read on for `read_audit_log`, the second option for logging transactions executed against a table.
diff --git a/site/versioned_docs/version-4.3/deployments/_category_.json b/site/versioned_docs/version-4.3/deployments/_category_.json
new file mode 100644
index 00000000..8fdd6e17
--- /dev/null
+++ b/site/versioned_docs/version-4.3/deployments/_category_.json
@@ -0,0 +1,12 @@
+{
+ "label": "Deployments",
+ "position": 3,
+ "link": {
+ "type": "generated-index",
+ "title": "Deployments Documentation",
+ "description": "Installation and deployment guides for HarperDB",
+ "keywords": [
+ "deployments"
+ ]
+ }
+}
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/deployments/configuration.md b/site/versioned_docs/version-4.3/deployments/configuration.md
new file mode 100644
index 00000000..23c92368
--- /dev/null
+++ b/site/versioned_docs/version-4.3/deployments/configuration.md
@@ -0,0 +1,970 @@
+---
+title: Configuration File
+---
+
+# Configuration File
+
+HarperDB is configured through a [YAML](https://yaml.org/) file called `harperdb-config.yaml` located in the HarperDB root directory (by default this is a directory named `hdb` located in the home directory of the current user).
+
+Some configuration will be populated by default in the config file on install, regardless of whether it is used.
+
+***
+
+## Using the Configuration File and Naming Conventions
+
+The configuration elements in `harperdb-config.yaml` use camelcase: `operationsApi`.
+
+To change a configuration value edit the `harperdb-config.yaml` file and save any changes. HarperDB must be restarted for changes to take effect.
+
+Alternatively, configuration can be changed via environment variables, command line variables, or the API. To access lower-level elements, use underscores to join parent/child elements (when used this way, elements are case insensitive):
+
+* Environment variables: `OPERATIONSAPI_NETWORK_PORT=9925`
+* Command line variables: `--OPERATIONSAPI_NETWORK_PORT 9925`
+* Calling `set_configuration` through the API: `operationsApi_network_port: 9925`
+
+_Note: Component configuration cannot be added or updated via CLI or ENV variables._
+
+## Importing installation configuration
+
+To use a custom configuration file to set values on install, use the CLI/ENV variable `HDB_CONFIG` and set it to the path of your custom configuration file.
+
+To install HarperDB on top of an existing configuration file, set `HDB_CONFIG` to the path of the existing `harperdb-config.yaml` in your installation root.
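+
+For example, following the CLI/ENV variable conventions described above (the file path shown is illustrative):
+
+```bash
+# Environment variable form
+HDB_CONFIG=/path/to/custom-harperdb-config.yaml harperdb install
+
+# Command line variable form
+harperdb install --HDB_CONFIG /path/to/custom-harperdb-config.yaml
+```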
+
+***
+
+## Configuration Options
+
+### `http`
+
+`sessionAffinity` - _Type_: string; _Default_: null
+
+HarperDB is a multi-threaded server designed to scale to utilize many CPU cores with high concurrency. Session affinity can help improve the efficiency and fairness of thread utilization by routing multiple requests from the same client to the same thread. Keeping a single user contained to a single thread provides fairer request handling, can improve caching locality (multiple requests from a single user are more likely to access the same data), and can provide the ability to share in-memory information in user sessions. Enabling session affinity will cause subsequent requests from the same client to be routed to the same thread.
+
+To enable `sessionAffinity`, you need to specify how clients will be identified from the incoming requests. If you are using HarperDB to directly serve HTTP requests from users at different remote addresses, you can use a setting of `ip`. However, if you are using HarperDB behind a proxy server or application server, all the remote IP addresses will be the same and HarperDB will effectively only run on a single thread. Alternatively, you can specify a header to use for identification. If you are using basic authentication, you could use the "Authorization" header to route requests to threads by the user's credentials. If you have another header that uniquely identifies users/clients, you can use that as the value of `sessionAffinity`. Be careful to ensure that the value provides sufficient uniqueness so that requests are distributed effectively across all threads and all your CPU cores are fully utilized.
+
+```yaml
+http:
+ sessionAffinity: ip
+```
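+
+Alternatively, as described above, a header name can be used as the value. A sketch that routes requests by the `Authorization` header:
+
+```yaml
+http:
+  sessionAffinity: Authorization
+```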
+
+`compressionThreshold` - _Type_: number; _Default_: 1200 (bytes)
+
+For HTTP clients that support (Brotli) compression encoding, responses that are larger than this threshold will be compressed (also note that for clients that accept compression, any streaming responses from queries are compressed as well, since the size is not known beforehand).
+
+```yaml
+http:
+ compressionThreshold: 1200
+```
+
+`cors` - _Type_: boolean; _Default_: true
+
+Enable Cross Origin Resource Sharing, which allows requests across a domain.
+
+`corsAccessList` - _Type_: array; _Default_: null
+
+An array of allowable domains with CORS
+
+`headersTimeout` - _Type_: integer; _Default_: 60,000 milliseconds (1 minute)
+
+Limits the amount of time the parser will wait to receive the complete HTTP headers.
+
+`maxHeaderSize` - _Type_: integer; _Default_: 16394
+
+The maximum allowed size of HTTP headers in bytes.
+
+`keepAliveTimeout` - _Type_: integer; _Default_: 30,000 milliseconds (30 seconds)
+
+Sets the number of milliseconds of inactivity the server needs to wait for additional incoming data after it has finished processing the last response.
+
+`port` - _Type_: integer; _Default_: 9926
+
+The port used to access the component server.
+
+`securePort` - _Type_: integer; _Default_: null
+
+The port the HarperDB component server uses for HTTPS connections. This requires a valid certificate and key.
+
+`timeout` - _Type_: integer; _Default_: 120,000 milliseconds (2 minutes)
+
+The length of time in milliseconds after which a request will timeout.
+
+```yaml
+http:
+ cors: true
+ corsAccessList:
+ - null
+ headersTimeout: 60000
+ maxHeaderSize: 8192
+ https: false
+ keepAliveTimeout: 30000
+ port: 9926
+ securePort: null
+ timeout: 120000
+```
+
+`mtls` - _Type_: boolean | object; _Default_: false
+
+This can be configured to enable mTLS based authentication for incoming connections. If enabled with default options (by setting to `true`), the client certificate will be checked against the certificate authority specified with `tls.certificateAuthority`. And if the certificate can be properly verified, the connection will authenticate users where the user's id/username is specified by the `CN` (common name) from the client certificate's `subject`, by default.
+
+You can also define specific mTLS options by specifying an object for `mtls` with the following optional properties:
+
+`user` - _Type_: string; _Default_: Common Name
+
+This configures a specific username to authenticate as for mTLS connections. If a `user` is defined, any authorized mTLS connection (that authorizes against the certificate authority) will be authenticated as this user.
+This can also be set to `null`, which indicates that no authentication is performed based on the mTLS authorization. When combined with `required: true`, this can be used to enforce that users must have authorized mTLS _and_ provide credential-based authentication.
+
+`required` - _Type_: boolean; _Default_: false
+
+This can be enabled to require client certificates (mTLS) for all incoming connections. If enabled, any connection that doesn't provide an authorized certificate will be rejected/closed. By default, this is disabled, and authentication can take place with mTLS _or_ standard credential authentication.
+
+```yaml
+http:
+ mtls: true
+```
+or
+```yaml
+http:
+ mtls:
+ required: true
+ user: user-name
+```
+
+
+***
+
+### `threads`
+
+The `threads` section provides control over how many threads are used, how much heap memory they may use, and how the threads can be debugged:
+
+`count` - _Type_: number; _Default_: One less than the number of logical cores/processors
+
+The `threads.count` option specifies the number of threads that will be used to service the HTTP requests for the operations API and custom functions. Generally, this should be close to the number of CPU logical cores/processors to ensure the CPU is fully utilized (a little less because HarperDB does have other threads at work), assuming HarperDB is the main service on a server.
+
+```yaml
+threads:
+ count: 11
+```
+
+`debug` - _Type_: boolean | object; _Default_: false
+
+This enables debugging. If simply set to true, this will enable debugging on the main thread on port 9229 with the 127.0.0.1 host interface. This can also be an object for more debugging control.
+
+* `debug.port` - The port to use for debugging the main thread
+* `debug.startingPort` - This will set up a separate port for debugging each thread. This is necessary for debugging individual threads with devtools.
+* `debug.host` - Specify the host interface to listen on
+* `debug.waitForDebugger` - Wait for debugger before starting
+
+```yaml
+threads:
+ debug:
+ port: 9249
+```
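+
+A sketch showing the remaining debug options together (the port, host, and flag values are illustrative):
+
+```yaml
+threads:
+  debug:
+    startingPort: 9230
+    host: 127.0.0.1
+    waitForDebugger: true
+```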
+
+`maxHeapMemory` - _Type_: number; _Default_: a heuristic based on available memory and thread count
+
+This specifies the heap memory limit for each thread, in megabytes.
+
+```yaml
+threads:
+ maxHeapMemory: 300
+```
+
+
+***
+
+### `clustering`
+
+The `clustering` section configures the clustering engine, which is used to replicate data between instances of HarperDB.
+
+Clustering offers many configuration options; however, in the majority of cases the only options you will need to pay attention to are the following (a combined example is shown after the list):
+
+* `clustering.enabled` Enable the clustering processes.
+* `clustering.hubServer.cluster.network.port` The port other nodes will connect to. This port must be accessible from other cluster nodes.
+* `clustering.hubServer.cluster.network.routes` The connections to other instances.
+* `clustering.nodeName` The name of your node, must be unique within the cluster.
+* `clustering.user` The name of the user credentials used for inter-node authentication.
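+
+A minimal sketch combining these options (the node name, user, and route host are illustrative):
+
+```yaml
+clustering:
+  enabled: true
+  nodeName: node_1
+  user: cluster_account
+  hubServer:
+    cluster:
+      network:
+        port: 9932
+        routes:
+          - host: 3.62.184.22
+            port: 9932
+```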
+
+`enabled` - _Type_: boolean; _Default_: false
+
+Enable clustering.
+
+_Note: If you enabled clustering but do not create and add a cluster user you will get a validation error. See `user` description below on how to add a cluster user._
+
+```yaml
+clustering:
+ enabled: true
+```
+
+`clustering.hubServer.cluster`
+
+Clustering’s `hubServer` facilitates the HarperDB mesh network and discovery service.
+
+```yaml
+clustering:
+ hubServer:
+ cluster:
+ name: harperdb
+ network:
+ port: 9932
+ routes:
+ - host: 3.62.184.22
+ port: 9932
+ - host: 3.735.184.8
+ port: 9932
+```
+
+`name` - _Type_: string, _Default_: harperdb
+
+The name of your cluster. This name needs to be consistent for all other nodes intended to be meshed in the same network.
+
+`port` - _Type_: integer, _Default_: 9932
+
+The port the hub server uses to accept cluster connections
+
+`routes` - _Type_: array, _Default_: null
+
+An object array that represents the host and port this server will cluster to. Each object must have two properties: `port` and `host`. Multiple entries can be added to create network resiliency in the event one server is unavailable. Routes can be added, updated, and removed either by directly editing the `harperdb-config.yaml` file or by using the `cluster_set_routes` or `cluster_delete_routes` API endpoints.
+
+`host` - _Type_: string
+
+The host of the remote instance you are creating the connection with.
+
+`port` - _Type_: integer
+
+The port of the remote instance you are creating the connection with. This is likely going to be the `clustering.hubServer.cluster.network.port` on the remote instance.
+
+`clustering.hubServer.leafNodes`
+
+```yaml
+clustering:
+ hubServer:
+ leafNodes:
+ network:
+ port: 9931
+```
+
+`port` - _Type_: integer; _Default_: 9931
+
+The port the hub server uses to accept leaf server connections.
+
+`clustering.hubServer.network`
+
+```yaml
+clustering:
+ hubServer:
+ network:
+ port: 9930
+```
+
+`port` - _Type_: integer; _Default_: 9930
+
+Use this port to connect a client to the hub server, for example using the NATS SDK to interact with the server.
+
+`clustering.leafServer`
+
+Manages streams, which are ‘message stores’ that store table transactions.
+
+```yaml
+clustering:
+ leafServer:
+ network:
+ port: 9940
+ routes:
+ - host: 3.62.184.22
+ port: 9931
+ - host: node3.example.com
+ port: 9931
+ streams:
+ maxAge: 3600
+ maxBytes: 10000000
+ maxMsgs: 500
+ path: /user/hdb/clustering/leaf
+```
+
+`port` - _Type_: integer; _Default_: 9940
+
+Use this port to connect a client to the leaf server, for example using the NATS SDK to interact with the server.
+
+`routes` - _Type_: array; _Default_: null
+
+An object array that represents the host and port the leaf node will directly connect with. Each object must have two properties: `port` and `host`. Unlike the hub server, the leaf server will establish connections to all listed hosts. Routes can be added, updated, and removed either by directly editing the `harperdb-config.yaml` file or by using the `cluster_set_routes` or `cluster_delete_routes` API endpoints.
+
+`host` - _Type_: string
+
+The host of the remote instance you are creating the connection with.
+
+`port` - _Type_: integer
+
+The port of the remote instance you are creating the connection with. This is likely going to be the `clustering.hubServer.cluster.network.port` on the remote instance.
+
+`clustering.leafServer.streams`
+
+`maxAge` - _Type_: integer; _Default_: null
+
+The maximum age of any messages in the stream, expressed in seconds.
+
+`maxBytes` - _Type_: integer; _Default_: null
+
+The maximum size of the stream in bytes. Oldest messages are removed if the stream exceeds this size.
+
+`maxMsgs` - _Type_: integer; _Default_: null
+
+How many messages may be in a stream. Oldest messages are removed if the stream exceeds this number.
+
+`path` - _Type_: string; _Default_: \/clustering/leaf
+
+The directory where all the streams are kept.
+
+```yaml
+clustering:
+ leafServer:
+ streams:
+ maxConsumeMsgs: 100
+ maxIngestThreads: 2
+```
+`maxConsumeMsgs` - _Type_: integer; _Default_: 100
+
+The maximum number of messages a consumer can process in one go.
+
+`maxIngestThreads` - _Type_: integer; _Default_: 2
+
+The number of HarperDB threads that are delegated to ingesting messages.
+
+***
+
+`logLevel` - _Type_: string; _Default_: error
+
+Control the verbosity of clustering logs.
+
+```yaml
+clustering:
+ logLevel: error
+```
+
+The log level hierarchy, in order, is `trace`, `debug`, `info`, `warn`, and `error`. When the level is set to `trace`, logs will be created for all possible levels, whereas if the level is set to `warn`, the only entries logged will be `warn` and `error`. The default value is `error`.
+
+`nodeName` - _Type_: string; _Default_: null
+
+The name of this node in your HarperDB cluster topology. This must be a value unique from the rest of the cluster node names.
+
+_Note: If you want to change the node name make sure there are no subscriptions in place before doing so. After the name has been changed a full restart is required._
+
+```yaml
+clustering:
+ nodeName: great_node
+```
+
+`tls`
+
+Transport Layer Security default values are automatically generated on install.
+
+```yaml
+clustering:
+ tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+ insecure: true
+ verify: true
+```
+
+`certificate` - _Type_: string; _Default_: \/keys/certificate.pem
+
+Path to the certificate file.
+
+`certificateAuthority` - _Type_: string; _Default_: \/keys/ca.pem
+
+Path to the certificate authority file.
+
+`privateKey` - _Type_: string; _Default_: \/keys/privateKey.pem
+
+Path to the private key file.
+
+`insecure` - _Type_: boolean; _Default_: true
+
+When true, certificate verification is skipped. For use only with self-signed certificates.
+
+`republishMessages` - _Type_: boolean; _Default_: false
+
+When true, all transactions that are received from other nodes are republished to this node's stream. When subscriptions are not fully connected between all nodes, this ensures that messages are routed to all nodes through intermediate nodes. This also ensures that all writes, whether local or remote, are written to the NATS transaction log. However, there is additional overhead with republishing, and setting this to false can provide better data replication performance. When false, you need to ensure all subscriptions are fully connected between every node and every other node, and be aware that the NATS transaction log will only consist of local writes.
+
+`verify` - _Type_: boolean; _Default_: true
+
+When true, the hub server will verify client certificates using the CA certificate.
+
+***
+
+`user` - _Type_: string; _Default_: null
+
+The username given to the `cluster_user`. All instances in a cluster must use the same clustering user credentials (matching username and password).
+
+Inter-node authentication takes place via a special HarperDB user role type called `cluster_user`.
+
+The user can be created either through the API using an `add_user` request with the role set to `cluster_user`, on install using environment variables (`CLUSTERING_USER=cluster_person` `CLUSTERING_PASSWORD=pass123!`), or with CLI variables (`harperdb --CLUSTERING_USER cluster_person --CLUSTERING_PASSWORD pass123!`).
+
+```yaml
+clustering:
+ user: cluster_person
+```
+
+***
+
+### `localStudio`
+
+The `localStudio` section configures the local HarperDB Studio, a GUI for HarperDB hosted on the server. A hosted version of the HarperDB Studio with licensing and provisioning options is available at https://studio.harperdb.io. Note that all database traffic from either `localStudio` or HarperDB Studio is made directly from your browser to the instance.
+
+`enabled` - _Type_: boolean; _Default_: false
+
+Enables or disables the local studio.
+
+```yaml
+localStudio:
+ enabled: false
+```
+
+***
+
+### `logging`
+
+The `logging` section configures HarperDB logging across all HarperDB functionality. This includes standard text logging of application and database events as well as structured data logs of record changes. Application and database events are logged in text format to the `~/hdb/log/hdb.log` file (or the location specified by `logging.root`).
+
+In addition, structured logging of data changes is also available:
+
+`auditLog` - _Type_: boolean; _Default_: false
+
+Enables table transaction logging.
+
+```yaml
+logging:
+ auditLog: false
+```
+
+To access the audit logs, use the API operation `read_audit_log`. It will provide a history of the data, including original records and changes made, in a specified table.
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog"
+}
+```
+
+`file` - _Type_: boolean; _Default_: true
+
+Defines whether to log to a file.
+
+```yaml
+logging:
+ file: true
+```
+
+`auditRetention` - _Type_: string|number; _Default_: 3d
+
+This specifies how long audit logs should be retained.
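+
+For example, to keep audit logs for three days (the default):
+
+```yaml
+logging:
+  auditRetention: 3d
+```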
+
+`level` - _Type_: string; _Default_: error
+
+Control the verbosity of text event logs.
+
+```yaml
+logging:
+ level: error
+```
+
+The log level hierarchy, in order, is `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`. When the level is set to `trace`, logs will be created for all possible levels, whereas if the level is set to `fatal`, the only entries logged will be `fatal` and `notify`. The default value is `error`.
+
+`root` - _Type_: string; _Default_: \/log
+
+The path where the log files will be written.
+
+```yaml
+logging:
+ root: ~/hdb/log
+```
+
+`rotation`
+
+Rotation provides the ability to systematically rotate and archive the `hdb.log` file. To enable rotation, `interval` and/or `maxSize` must be set.
+
+_**Note:**_ `interval` and `maxSize` are approximate values only. It is possible that the log file will slightly exceed these values before it is rotated.
+
+```yaml
+logging:
+ rotation:
+ enabled: true
+ compress: false
+ interval: 1D
+ maxSize: 100K
+ path: /user/hdb/log
+```
+
+`enabled` - _Type_: boolean; _Default_: false
+
+Enables logging rotation.
+
+`compress` - _Type_: boolean; _Default_: false
+
+Enables compression via gzip when logs are rotated.
+
+`interval` - _Type_: string; _Default_: null
+
+The time that should elapse between rotations. Acceptable units are D(ays), H(ours) or M(inutes).
+
+`maxSize` - _Type_: string; _Default_: null
+
+The maximum size the log file can reach before it is rotated. Must use units M(egabyte), G(igabyte), or K(ilobyte).
+
+`path` - _Type_: string; _Default_: \/log
+
+Where to store the rotated log file. File naming convention is `HDB-YYYY-MM-DDT-HH-MM-SSSZ.log`.
+
+`stdStreams` - _Type_: boolean; _Default_: false
+
+Log HarperDB logs to the standard output and error streams.
+
+```yaml
+logging:
+ stdStreams: false
+```
+
+***
+
+### `authentication`
+
+The authentication section defines the configuration for the default authentication mechanism in HarperDB.
+
+```yaml
+authentication:
+ authorizeLocal: true
+ cacheTTL: 30000
+ enableSessions: true
+ operationTokenTimeout: 1d
+ refreshTokenTimeout: 30d
+```
+
+`authorizeLocal` - _Type_: boolean; _Default_: true
+
+This will automatically authorize any requests from the loopback IP address as the superuser. This should be disabled for any HarperDB servers that may be accessed by untrusted users from the same instance. For example, this should be disabled if you are using a local proxy, or for general server hardening.
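+
+For example, when HarperDB sits behind a local proxy, this can be disabled as part of server hardening:
+
+```yaml
+authentication:
+  authorizeLocal: false
+```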
+
+`cacheTTL` - _Type_: number; _Default_: 30000
+
+This defines the length of time (in milliseconds) that an authentication (a particular Authorization header or token) can be cached.
+
+`enableSessions` - _Type_: boolean; _Default_: true
+
+This will enable cookie-based sessions to maintain an authenticated session. This is generally the preferred mechanism for maintaining authentication in web browsers as it allows cookies to hold an authentication token securely without giving JavaScript code access to token/credentials that may open up XSS vulnerabilities.
+
+`operationTokenTimeout` - _Type_: string; _Default_: 1d
+
+Defines the length of time an operation token will be valid until it expires. Example values: https://github.com/vercel/ms.
+
+`refreshTokenTimeout` - _Type_: string; _Default_: 1d
+
+Defines the length of time a refresh token will be valid until it expires. Example values: https://github.com/vercel/ms.
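+
+Following the underscore naming convention described earlier, these timeouts could also be adjusted through `set_configuration`; a sketch (the derived parameter names are an assumption based on that convention, and HarperDB must still be restarted for the change to take effect):
+
+```json
+{
+  "operation": "set_configuration",
+  "authentication_operationTokenTimeout": "1d",
+  "authentication_refreshTokenTimeout": "30d"
+}
+```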
+
+### `operationsApi`
+
+The `operationsApi` section configures the HarperDB Operations API.\
+All the `operationsApi` configuration is optional. Any configuration that is not provided under this section will default to the `http` configuration section.
+
+`network`
+
+```yaml
+operationsApi:
+ network:
+ cors: true
+ corsAccessList:
+ - null
+ domainSocket: /user/hdb/operations-server
+ headersTimeout: 60000
+ keepAliveTimeout: 5000
+ port: 9925
+ securePort: null
+ timeout: 120000
+```
+
+`cors` - _Type_: boolean; _Default_: true
+
+Enable Cross Origin Resource Sharing, which allows requests across a domain.
+
+`corsAccessList` - _Type_: array; _Default_: null
+
+An array of allowable domains with CORS
+
+`domainSocket` - _Type_: string; _Default_: \/hdb/operations-server
+
+The path to the Unix domain socket used to provide the Operations API through the CLI
+
+`headersTimeout` - _Type_: integer; _Default_: 60,000 milliseconds (1 minute)
+
+Limits the amount of time the parser will wait to receive the complete HTTP headers.
+
+`keepAliveTimeout` - _Type_: integer; _Default_: 5,000 milliseconds (5 seconds)
+
+Sets the number of milliseconds of inactivity the server needs to wait for additional incoming data after it has finished processing the last response.
+
+`port` - _Type_: integer; _Default_: 9925
+
+The port the HarperDB operations API interface will listen on.
+
+`securePort` - _Type_: integer; _Default_: null
+
+The port the HarperDB operations API uses for HTTPS connections. This requires a valid certificate and key.
+
+`timeout` - _Type_: integer; _Default_: 120,000 milliseconds (2 minutes)
+
+The length of time in milliseconds after which a request will timeout.
+
+`tls`
+
+This configures the Transport Layer Security for HTTPS support.
+
+```yaml
+operationsApi:
+ tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+```
+
+`certificate` - _Type_: string; _Default_: \/keys/certificate.pem
+
+Path to the certificate file.
+
+`certificateAuthority` - _Type_: string; _Default_: \/keys/ca.pem
+
+Path to the certificate authority file.
+
+`privateKey` - _Type_: string; _Default_: \/keys/privateKey.pem
+
+Path to the private key file.
+
+***
+
+### `componentsRoot`
+
+`componentsRoot` - _Type_: string; _Default_: \/components
+
+The path to the folder containing the local component files.
+
+```yaml
+componentsRoot: ~/hdb/components
+```
+
+***
+
+### `rootPath`
+
+`rootPath` - _Type_: string; _Default_: home directory of the current user
+
+The HarperDB database and applications/API/interface are decoupled from each other. The `rootPath` directory specifies where the HarperDB application persists data, config, logs, and Custom Functions.
+
+```yaml
+rootPath: /Users/jonsnow/hdb
+```
+
+***
+
+### `storage`
+
+`writeAsync` - _Type_: boolean; _Default_: false
+
+The `writeAsync` option turns off disk flushing/syncing, allowing for faster write operation throughput. However, this does not provide storage integrity guarantees, and if a server crashes, it is possible that there may be data loss requiring restore from another backup/another node.
+
+```yaml
+storage:
+ writeAsync: false
+```
+
+`caching` - _Type_: boolean; _Default_: true
+
+The `caching` option enables in-memory caching of records, providing faster access to frequently accessed objects. This can incur some extra overhead for situations where reads are extremely random and don't benefit from caching.
+
+```yaml
+storage:
+ caching: true
+```
+
+`compression` - _Type_: boolean; _Default_: true
+
+The `compression` option enables compression of records in the database. This can be helpful for very large records in reducing storage requirements and potentially allowing more data to be cached. This uses the very fast LZ4 compression algorithm, but this still incurs extra costs for compressing and decompressing.
+
+```yaml
+storage:
+ compression: false
+```
+
+`compression.dictionary` - _Type_: string; _Default_: null
+
+Path to a compression dictionary file.
+
+`compression.threshold` - _Type_: number; _Default_: `4036`, or `storage.pageSize - 60` if `storage.pageSize` is provided
+
+Only entries that are larger than this value (in bytes) will be compressed.
+
+```yaml
+storage:
+ compression:
+ dictionary: /users/harperdb/dict.txt
+ threshold: 1000
+```
+
+`compactOnStart` - _Type_: boolean; _Default_: false
+
+When `true` all non-system databases will be compacted when starting HarperDB, read more [here](../administration/compact).
+
+`compactOnStartKeepBackup` - _Type_: boolean; _Default_: false
+
+Keep the backups made by compactOnStart.
+
+```yaml
+storage:
+ compactOnStart: true
+ compactOnStartKeepBackup: false
+```
+
+`maxTransactionQueueTime` - _Type_: time; _Default_: 45s
+
+The `maxTransactionQueueTime` specifies how long the write queue can get before write requests are rejected (with a 503).
+
+```yaml
+storage:
+ maxTransactionQueueTime: 2m
+```
+
+`noReadAhead` - _Type_: boolean; _Default_: false
+
+The `noReadAhead` option advises the operating system to not read ahead when reading from the database. This provides better memory utilization for databases with small records (less than one page), but can degrade performance in situations where large records are used or frequent range queries are used.
+
+```yaml
+storage:
+ noReadAhead: true
+```
+
+`prefetchWrites` - _Type_: boolean; _Default_: true
+
+The `prefetchWrites` option loads data prior to write transactions. This should be enabled for databases that are larger than memory (although it can be faster to disable this for smaller databases).
+
+```yaml
+storage:
+ prefetchWrites: true
+```
+
+`path` - _Type_: string; _Default_: `/schema`
+
+The `path` configuration sets where all database files should reside.
+
+```yaml
+storage:
+ path: /users/harperdb/storage
+```
+_**Note:**_ This configuration applies to all database files, which includes system tables that are used internally by HarperDB. For this reason, if you wish to use a non-default `path` value, you must move any existing schemas into your `path` location. Existing schemas will likely include the system schema, which can be found at `/schema/system`.
+
+
+`pageSize` - _Type_: number; _Default_: Defaults to the default page size of the OS
+
+Defines the page size of the database.
+
+```yaml
+storage:
+ pageSize: 4096
+```
+
+***
+
+### `tls`
+
+This section defines the certificates, keys, and settings for Transport Layer Security (TLS) for HTTPS and TLS socket support. This is used for both the HTTP and MQTT protocols. The `tls` section can be a single object with the settings below, or it can be an array of objects, where each object is a separate TLS configuration. By using an array, the TLS configuration can define multiple certificates for different domains/hosts (negotiated through SNI).
+
+```yaml
+tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+```
+
+`certificate` - _Type_: string; _Default_: \/keys/certificate.pem
+
+Path to the certificate file.
+
+`certificateAuthority` - _Type_: string; _Default_: \/keys/ca.pem
+
+Path to the certificate authority file.
+
+`privateKey` - _Type_: string; _Default_: \/keys/privateKey.pem
+
+Path to the private key file.
+
+`ciphers` - _Type_: string;
+
+Allows specific ciphers to be set.
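+
+For example, assuming the standard OpenSSL-style colon-separated cipher list, a configuration might look like this (the ciphers shown are illustrative):
+
+```yaml
+tls:
+  ciphers: TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
+```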
+
+If you want to define multiple certificates that are applied based on the domain/host requested via SNI, you can define an array of TLS configurations. Each configuration can have the same properties as the root TLS configuration, but can (optionally) also have an additional `host` property to specify the domain/host that the certificate should be used for:
+
+```yaml
+tls:
+ - certificate: ~/hdb/keys/certificate1.pem
+ certificateAuthority: ~/hdb/keys/ca1.pem
+ privateKey: ~/hdb/keys/privateKey1.pem
+ host: example.com # the host is optional, and if not provided, this certificate's common name will be used as the host name.
+ - certificate: ~/hdb/keys/certificate2.pem
+ certificateAuthority: ~/hdb/keys/ca2.pem
+ privateKey: ~/hdb/keys/privateKey2.pem
+
+```
+
+Note that a `tls` section can also be defined in the `operationsApi` section, which will override the root `tls` section for the operations API.
+
+***
+
+### `mqtt`
+
+The MQTT protocol can be configured in this section.
+
+```yaml
+mqtt:
+ network:
+ port: 1883
+ securePort: 8883
+ mtls: false
+ webSocket: true
+ requireAuthentication: true
+```
+
+`port` - _Type_: number; _Default_: 1883
+
+This is the port to use for listening for insecure MQTT connections.
+
+`securePort` - _Type_: number; _Default_: 8883
+
+This is the port to use for listening for secure MQTT connections. This will use the `tls` configuration for certificates.
+
+`webSocket` - _Type_: boolean; _Default_: true
+
+This enables access to MQTT through WebSockets. This will handle WebSocket connections on the http port (defaults to 9926), that have specified a (sub) protocol of `mqtt`.
+
+`requireAuthentication` - _Type_: boolean; _Default_: true
+
+This indicates if authentication should be required for establishing an MQTT connection (whether through MQTT connection credentials or mTLS). Disabling this allows unauthenticated connections, which are then subject to authorization for publishing and subscribing (and by default tables/resources do not authorize such access, but that can be enabled at the resource level).
+
+`mtls` - _Type_: boolean | object; _Default_: false
+
+This can be configured to enable mTLS based authentication for incoming connections. If enabled with default options (by setting to `true`), the client certificate will be checked against the certificate authority specified in the `tls` section. And if the certificate can be properly verified, the connection will authenticate users where the user's id/username is specified by the `CN` (common name) from the client certificate's `subject`, by default.
+
+You can also define specific mTLS options by specifying an object for `mtls` with the following optional properties:
+
+`user` - _Type_: string; _Default_: Common Name
+
+This configures a specific username to authenticate as for mTLS connections. If a `user` is defined, any authorized mTLS connection (that authorizes against the certificate authority) will be authenticated as this user.
+This can also be set to `null`, which indicates that no authentication is performed based on the mTLS authorization. When combined with `required: true`, this can be used to enforce that users must have authorized mTLS _and_ provide credential-based authentication.
+
+`required` - _Type_: boolean; _Default_: false
+
+This can be enabled to require client certificates (mTLS) for all incoming MQTT connections. If enabled, any connection that doesn't provide an authorized certificate will be rejected/closed. By default, this is disabled, and authentication can take place with mTLS _or_ standard credential authentication.
+
+`certificateAuthority` - _Type_: string; _Default_: Path from `tls.certificateAuthority`
+
+This can define a specific path to use for the certificate authority. By default, certificate authorization checks against the CA specified at `tls.certificateAuthority`, but if you need a specific/distinct CA for MQTT, you can set this.
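+
+For example, to point MQTT at its own certificate authority (the path is illustrative):
+
+```yaml
+mqtt:
+  network:
+    mtls:
+      certificateAuthority: ~/hdb/keys/mqtt-ca.pem
+```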
+
+For example, you could specify that mTLS is required and will authenticate as "user-name":
+```yaml
+mqtt:
+ network:
+ mtls:
+ user: user-name
+ required: true
+```
+
+***
+
+### `databases`
+
+The `databases` section is an optional configuration that can be used to define where database files should reside down to the table level.
+This configuration should be set before the database and table have been created.
+The configuration will not create the directories in the path; that must be done by the user.
+
+To define where a database and all its tables should reside use the name of your database and the `path` parameter.
+
+```yaml
+databases:
+ nameOfDatabase:
+ path: /path/to/database
+```
+
+To define where specific tables within a database should reside use the name of your database, the `tables` parameter, the name of your table and the `path` parameter.
+
+```yaml
+databases:
+ nameOfDatabase:
+ tables:
+ nameOfTable:
+ path: /path/to/table
+```
+
+This same pattern can be used to define where the audit log database files should reside. To do this use the `auditPath` parameter.
+
+```yaml
+databases:
+ nameOfDatabase:
+ auditPath: /path/to/database
+```
+
+**Setting the database section through the command line, environment variables or API**
+
+When using command line variables, environment variables, or the API to configure the databases section, a slightly different convention from the regular one should be used. To add one or more configurations, use a JSON object array.
+
+Using command line variables:
+
+```bash
+--DATABASES [{\"nameOfSchema\":{\"tables\":{\"nameOfTable\":{\"path\":\"\/path\/to\/table\"}}}}]
+```
+
+Using environment variables:
+
+```bash
+DATABASES=[{"nameOfSchema":{"tables":{"nameOfTable":{"path":"/path/to/table"}}}}]
+```
+
+Using the API:
+
+```json
+{
+ "operation": "set_configuration",
+ "databases": [{
+ "nameOfDatabase": {
+ "tables": {
+ "nameOfTable": {
+ "path": "/path/to/table"
+ }
+ }
+ }
+ }]
+}
+```
+
+***
+
+### Components
+
+`` - _Type_: string
+
+The name of the component. This will be used to name the folder where the component is installed and must be unique.
+
+`package` - _Type_: string
+
+A reference to your [component](../developers/components/installing) package. This could be a remote git repo, a local folder/file, or an NPM package.
+HarperDB will add this package to a package.json file and call `npm install` on it, so any reference that works with that paradigm will work here.
+
+Read more about npm install [here](https://docs.npmjs.com/cli/v8/commands/npm-install).
+
+`port` - _Type_: number; _Default_: the value of `http.port`
+
+The port that your component should listen on. If no port is provided it will default to `http.port`.
+
+```yaml
+:
+ package: 'HarperDB-Add-Ons/package-name'
+ port: 4321
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/deployments/harperdb-cli.md b/site/versioned_docs/version-4.3/deployments/harperdb-cli.md
new file mode 100644
index 00000000..804bc749
--- /dev/null
+++ b/site/versioned_docs/version-4.3/deployments/harperdb-cli.md
@@ -0,0 +1,164 @@
+---
+title: HarperDB CLI
+---
+
+# HarperDB CLI
+
+The HarperDB command line interface (CLI) is used to administer [self-installed HarperDB instances](./install-harperdb/).
+
+## Installing HarperDB
+
+To install HarperDB with CLI prompts, run the following command:
+
+```bash
+harperdb install
+```
+
+Alternatively, HarperDB installations can be automated with environment variables or command line arguments; [see a full list of configuration parameters here](./configuration#using-the-configuration-file-and-naming-conventions). Note that when used in conjunction, command line arguments will override environment variables.
+
+#### Environment Variables
+
+```bash
+#minimum required parameters for no additional CLI prompts
+export TC_AGREEMENT=yes
+export HDB_ADMIN_USERNAME=HDB_ADMIN
+export HDB_ADMIN_PASSWORD=password
+export ROOTPATH=/tmp/hdb/
+export OPERATIONSAPI_NETWORK_PORT=9925
+harperdb install
+```
+
+#### Command Line Arguments
+
+```bash
+#minimum required parameters for no additional CLI prompts
+harperdb install --TC_AGREEMENT yes --HDB_ADMIN_USERNAME HDB_ADMIN --HDB_ADMIN_PASSWORD password --ROOTPATH /tmp/hdb/ --OPERATIONSAPI_NETWORK_PORT 9925
+```
+
+***
+
+## Starting HarperDB
+
+To start HarperDB after it is installed, run the following command:
+
+```bash
+harperdb start
+```
+
+***
+
+## Stopping HarperDB
+
+To stop HarperDB once it is running, run the following command:
+
+```bash
+harperdb stop
+```
+
+***
+
+## Restarting HarperDB
+
+To restart HarperDB once it is running, run the following command:
+
+```bash
+harperdb restart
+```
+***
+
+## Getting the HarperDB Version
+
+To check the version of HarperDB that is installed run the following command:
+
+```bash
+harperdb version
+```
+***
+
+## Renew self-signed certificates
+
+To renew the HarperDB generated self-signed certificates, run:
+
+```bash
+harperdb renew-certs
+```
+
+***
+
+## Copy a database with compaction
+
+To copy a HarperDB database with compaction (to eliminate free-space and fragmentation), use
+
+```bash
+harperdb copy-db
+```
+For example, to copy the default database:
+```bash
+harperdb copy-db data /home/user/hdb/database/copy.mdb
+```
+
+
+***
+
+## Get all available CLI commands
+
+To display all available HarperDB CLI commands along with a brief description run:
+
+```bash
+harperdb help
+```
+***
+
+## Get the status of HarperDB and clustering
+
+To display the status of the HarperDB process, the clustering hub and leaf processes, the clustering network and replication statuses, run:
+
+```bash
+harperdb status
+```
+
+***
+
+## Backups
+
+HarperDB uses a transactional commit process that ensures that data on disk is always transactionally consistent with storage. This means that HarperDB maintains database integrity in the event of a crash. It also means that you can use any standard volume snapshot tool to make a backup of a HarperDB database. Database files are stored in the hdb/database directory. As long as the snapshot is an atomic snapshot of these database files, the data can be copied/moved back into the database directory to restore a previous backup (with HarperDB shut down), and database integrity will be preserved. Note that simply copying an in-use database file (using `cp`, for example) is _not_ a snapshot; this would progressively read data from the database at different points in time, which yields an unreliable copy that likely will not be usable. Standard copying is only reliable for a database file that is not in use.
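+
+A minimal sketch of a file-level backup using standard copying, assuming the default root path of `~/hdb` and that HarperDB can be stopped for the duration of the copy (the backup destination is illustrative):
+
+```bash
+# Stop HarperDB so the database files are no longer in use
+harperdb stop
+
+# Copy the database files to a backup location
+cp -a ~/hdb/database ~/hdb-backups/database-$(date +%F)
+
+# Start HarperDB again
+harperdb start
+```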
+
+***
+
+# Operations API through the CLI
+
+Some of the API operations are available through the CLI; this includes most operations that do not require nested parameters.
+To call an operation use the following convention: `harperdb <operation> <parameter>=<value>`.
+By default, the result will be formatted as YAML. If you would like the result in JSON, pass `json=true`.
+
+Some examples are:
+
+```bash
+$ harperdb describe_table database=dev table=dog
+
+schema: dev
+name: dog
+hash_attribute: id
+audit: true
+schema_defined: false
+attributes:
+ - attribute: id
+ is_primary_key: true
+ - attribute: name
+ indexed: true
+clustering_stream_name: 3307bb542e0081253klnfd3f1cf551b
+record_count: 10
+last_updated_record: 1724483231970.9949
+```
+
+`harperdb set_configuration logging_level=error`
+
+`harperdb deploy_component project=my-cool-app package=https://github.com/HarperDB/application-template`
+
+`harperdb get_components`
+
+`harperdb search_by_id database=dev table=dog ids='["1"]' get_attributes='["*"]' json=true`
+
+`harperdb search_by_value table=dog search_attribute=name search_value=harper get_attributes='["id", "name"]'`
+
+`harperdb sql sql='select * from dev.dog where id="1"'`
diff --git a/site/versioned_docs/version-4.3/deployments/harperdb-cloud/alarms.md b/site/versioned_docs/version-4.3/deployments/harperdb-cloud/alarms.md
new file mode 100644
index 00000000..03526fa8
--- /dev/null
+++ b/site/versioned_docs/version-4.3/deployments/harperdb-cloud/alarms.md
@@ -0,0 +1,20 @@
+---
+title: Alarms
+---
+
+# Alarms
+
+HarperDB Cloud instance alarms are triggered when certain conditions are met. Once alarms are triggered, organization owners will immediately receive an email alert and the alert will be available on the [Instance Configuration](../../administration/harperdb-studio/instance-configuration) page. The table below describes each alarm and its evaluation metrics.
+
+### Heading Definitions
+
+* **Alarm**: Title of the alarm.
+* **Threshold**: Definition of the alarm threshold.
+* **Intervals**: The number of occurrences before an alarm is triggered and the period that the metric is evaluated over.
+* **Proposed Remedy**: Recommended solution to avoid the alert in the future.
+
+| Alarm | Threshold | Intervals | Proposed Remedy |
+| ------- | ---------- | --------- | -------------------------------------------------------------------------------------------------------------------------------- |
+| Storage | > 90% Disk | 1 x 5min | [Increased storage volume](../../administration/harperdb-studio/instance-configuration#update-instance-storage) |
+| CPU | > 90% Avg | 2 x 5min | [Increase instance size for additional CPUs](../../administration/harperdb-studio/instance-configuration#update-instance-ram) |
+| Memory | > 90% RAM | 2 x 5min | [Increase instance size](../../administration/harperdb-studio/instance-configuration#update-instance-ram) |
diff --git a/site/versioned_docs/version-4.3/deployments/harperdb-cloud/index.md b/site/versioned_docs/version-4.3/deployments/harperdb-cloud/index.md
new file mode 100644
index 00000000..ae2ec1a7
--- /dev/null
+++ b/site/versioned_docs/version-4.3/deployments/harperdb-cloud/index.md
@@ -0,0 +1,9 @@
+---
+title: HarperDB Cloud
+---
+
+# HarperDB Cloud
+
+[HarperDB Cloud](https://studio.harperdb.io/) is the easiest way to test drive HarperDB; it’s HarperDB-as-a-Service. Cloud handles deployment and management of your instances in just a few clicks. HarperDB Cloud is currently powered by AWS, with additional cloud providers on our roadmap for the future.
+
+You can create a new [HarperDB Cloud instance in the HarperDB Studio](../../administration/harperdb-studio/instances#create-a-new-instance).
diff --git a/site/versioned_docs/version-4.3/deployments/harperdb-cloud/instance-size-hardware-specs.md b/site/versioned_docs/version-4.3/deployments/harperdb-cloud/instance-size-hardware-specs.md
new file mode 100644
index 00000000..0e970b13
--- /dev/null
+++ b/site/versioned_docs/version-4.3/deployments/harperdb-cloud/instance-size-hardware-specs.md
@@ -0,0 +1,23 @@
+---
+title: Instance Size Hardware Specs
+---
+
+# Instance Size Hardware Specs
+
+While HarperDB Cloud bills by RAM, each instance has other specifications associated with the RAM selection. The following table describes each instance size in detail\*.
+
+| AWS EC2 Instance Size | RAM (GiB) | # vCPUs | Network (Gbps) | Processor |
+| --------------------- | --------- | ------- | -------------- | -------------------------------------- |
+| t3.micro | 1 | 2 | Up to 5 | 2.5 GHz Intel Xeon Platinum 8000 |
+| t3.small | 2 | 2 | Up to 5 | 2.5 GHz Intel Xeon Platinum 8000 |
+| t3.medium | 4 | 2 | Up to 5 | 2.5 GHz Intel Xeon Platinum 8000 |
+| m5.large | 8 | 2 | Up to 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.xlarge | 16 | 4 | Up to 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.2xlarge | 32 | 8 | Up to 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.4xlarge | 64 | 16 | Up to 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.8xlarge | 128 | 32 | 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.12xlarge | 192 | 48 | 10 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.16xlarge | 256 | 64 | 20 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+| m5.24xlarge | 384 | 96 | 25 | Up to 3.1 GHz Intel Xeon Platinum 8000 |
+
+\*Specifications are subject to change. For the most up to date information, please refer to AWS documentation: [https://aws.amazon.com/ec2/instance-types/](https://aws.amazon.com/ec2/instance-types/).
diff --git a/site/versioned_docs/version-4.3/deployments/harperdb-cloud/iops-impact.md b/site/versioned_docs/version-4.3/deployments/harperdb-cloud/iops-impact.md
new file mode 100644
index 00000000..1c8496d5
--- /dev/null
+++ b/site/versioned_docs/version-4.3/deployments/harperdb-cloud/iops-impact.md
@@ -0,0 +1,42 @@
+---
+title: IOPS Impact on Performance
+---
+
+# IOPS Impact on Performance
+
+HarperDB, like any database, can place a tremendous load on its storage resources. Storage, not CPU or memory, will more often be the bottleneck of a server, virtual machine, or container running HarperDB. Understanding how storage works, and how much storage performance your workload requires, is key to ensuring that HarperDB performs as expected.
+
+## IOPS Overview
+
+The primary measure of storage performance is the number of input/output operations per second (IOPS) that a storage device can perform. Different storage devices can have dramatically different performance profiles. A hard drive (HDD) might only perform a hundred or so IOPS, while a solid state drive (SSD) might be able to perform tens or hundreds of thousands of IOPS.
+
+Cloud providers like AWS, which powers HarperDB Cloud, don’t typically attach individual disks to a virtual machine or container. Instead, they combine large numbers of storage drives to create very high performance storage servers. Chunks (volumes) of that storage are then carved out and presented to many different virtual machines and containers. Due to the shared nature of this type of storage, the cloud provider places configurable limits on the number of IOPS that a volume can perform. The same way that cloud providers charge more for larger capacity volumes, they also charge more for volumes with more IOPS.
+
+## HarperDB Cloud Storage
+
+HarperDB Cloud utilizes AWS Elastic Block Storage (EBS) General Purpose SSD (gp3) volumes. This is the most common storage type used in AWS, as it provides reasonable performance for most workloads, at a reasonable price.
+
+AWS EBS gp3 volumes have a baseline performance level of 3,000 IOPS, as a result, all HarperDB Cloud storage options will offer 3,000 IOPS. We plan to offer scalable IOPS as an option in the future.
+
+You can read more about AWS EBS volume IOPS here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html.
+
+## Estimating IOPS for a HarperDB Instance
+
+The number of IOPS required for a particular workload is influenced by many factors. Testing your particular application is the best way to determine the number of IOPS required. A reliable method is to estimate about two IOPS for every index, including the primary key itself. So if a table has two indices besides the primary key, estimate that an insert or update will require about six IOPS. Note that this can often be closer to one IOPS per index under load due to internal batching of writes, and sometimes even better when doing sequential inserts. Again, it is best to verify this with application-specific data and write patterns.
+
+For assistance in estimating IOPS requirements feel free to contact HarperDB Support or join our Community Slack Channel.
+
+## Example Use Case IOPS Requirements
+
+* **Sensor Data Collection**
+
+ In the case of IoT sensors where data collection will be sustained, high IOPS are required. While there are not typically large queries going on in this case, there is a high volume of data being ingested. This implies that IOPS will be sustained at a high level. For example, if you are collecting 100 records per second you would expect to need roughly 3,000 IOPS just to handle the data inserts.
+* **Data Analytics/BI Server**
+
+ Providing a server for analytics purposes typically requires a larger machine. Typically these cases involve large scale SQL joins and aggregations, which puts a large strain on reads. HarperDB utilizes an in-memory cache, which provides a significant performance boost on machines with large amounts of memory. However, if disparate datasets are constantly being queried and/or new data is frequently being loaded, you will find that the system still needs to have high IOPS to meet performance demand.
+* **Web Services**
+
+ Typical web service implementations with discrete reads and writes often do not need high IOPS to perform as expected. This is often the case in more transactional systems without the requirement for high performance load. A good rule to follow is that any HarperDB operation that requires a data scan will be IOPS intensive, but if these are not frequent then the EBS boost will suffice. Queries utilizing equals operations in either SQL or NoSQL do not require a scan due to HarperDB’s native indexing.
+* **High Performance Database**
+
+ Ultimately, if performance is your top priority, HarperDB should be run on bare metal hardware. Cloud providers offer these options at a higher cost, but they come with obvious performance improvements.
diff --git a/site/versioned_docs/version-4.3/deployments/harperdb-cloud/verizon-5g-wavelength-instances.md b/site/versioned_docs/version-4.3/deployments/harperdb-cloud/verizon-5g-wavelength-instances.md
new file mode 100644
index 00000000..c5a565e9
--- /dev/null
+++ b/site/versioned_docs/version-4.3/deployments/harperdb-cloud/verizon-5g-wavelength-instances.md
@@ -0,0 +1,31 @@
+---
+title: Verizon 5G Wavelength
+---
+
+# Verizon 5G Wavelength
+
+These instances are only accessible from the Verizon network. When accessing your HarperDB instance please ensure you are connected to the Verizon network, examples include Verizon 5G Internet, Verizon Hotspots, or Verizon mobile devices.
+
+HarperDB on Verizon 5G Wavelength brings HarperDB closer to the end user exclusively on the Verizon network resulting in as little as single-digit millisecond response time from HarperDB to the client.
+
+Instances are built via AWS Wavelength. You can read more about [AWS Wavelength here](https://aws.amazon.com/wavelength/).
+
+## HarperDB 5G Wavelength Instance Specs
+
+While HarperDB 5G Wavelength bills by RAM, each instance has other specifications associated with the RAM selection. The following table describes each instance size in detail\*.
+
+| AWS EC2 Instance Size | RAM (GiB) | # vCPUs | Network (Gbps) | Processor |
+| --------------------- | --------- | ------- | -------------- | ------------------------------------------- |
+| t3.medium | 4 | 2 | Up to 5 | Up to 3.1 GHz Intel Xeon Platinum Processor |
+| t3.xlarge | 16 | 4 | Up to 5 | Up to 3.1 GHz Intel Xeon Platinum Processor |
+| r5.2xlarge | 64 | 8 | Up to 10 | Up to 3.1 GHz Intel Xeon Platinum Processor |
+
+\*Specifications are subject to change. For the most up to date information, please refer to [AWS documentation](https://aws.amazon.com/ec2/instance-types/).
+
+## HarperDB 5G Wavelength Storage
+
+HarperDB 5G Wavelength utilizes AWS Elastic Block Storage (EBS) General Purpose SSD (gp2) volumes. This is the most common storage type used in AWS, as it provides reasonable performance for most workloads, at a reasonable price.
+
+AWS EBS gp2 volumes have a baseline performance level, which determines the number of IOPS it can perform indefinitely. The larger the volume, the higher its baseline performance. Additionally, smaller gp2 volumes are able to burst to a higher number of IOPS for periods of time.
+
+Smaller gp2 volumes are perfect for trying out the functionality of HarperDB, and might also work well for applications that don’t perform many database transactions. For applications that perform a moderate or high number of transactions, we recommend that you use a larger HarperDB volume. Learn more about the [impact of IOPS on performance here](./iops-impact).
+
+You can read more about [AWS EBS gp2 volume IOPS here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html#ebsvolumetypes_gp2).
diff --git a/site/versioned_docs/version-4.3/deployments/install-harperdb/index.md b/site/versioned_docs/version-4.3/deployments/install-harperdb/index.md
new file mode 100644
index 00000000..8e105aac
--- /dev/null
+++ b/site/versioned_docs/version-4.3/deployments/install-harperdb/index.md
@@ -0,0 +1,61 @@
+---
+title: Install HarperDB
+---
+
+# Install HarperDB
+
+## Install HarperDB
+
+This documentation contains information for installing HarperDB locally. Note that if you’d like to get up and running quickly, you can try a [managed instance with HarperDB Cloud](https://studio.harperdb.io/sign-up). HarperDB is a cross-platform database; we recommend Linux for production use, but HarperDB can run on Windows and Mac as well, for development purposes. Installation is usually very simple and just takes a few steps, but there are a few different options documented here.
+
+HarperDB runs on Node.js, so if you do not have it installed, you need to do that first (if you already have it installed, you can skip to installing HarperDB itself). Node.js can be downloaded and installed from [their site](https://nodejs.org/). For Linux and Mac, we recommend installing and managing Node versions with [NVM, which has instructions for installation](https://github.com/nvm-sh/nvm). Generally NVM can be installed with the following command:
+
+```bash
+curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.5/install.sh | bash
+```
+
+Then log out and log back in, and install Node.js using nvm. We recommend using LTS, but all currently maintained Node versions are supported (currently version 14 and newer; make sure to always use the latest minor/patch release for the major version):
+
+```bash
+nvm install --lts
+```
+
+#### Install and Start HarperDB
+
+Then you can install HarperDB with NPM and start it:
+
+```bash
+npm install -g harperdb
+harperdb
+```
+
+HarperDB will automatically start after installation. HarperDB's installation can be configured with numerous options via CLI arguments, for more information visit the [HarperDB Command Line Interface](../harperdb-cli) guide.
+
+If you are setting up a production server on Linux, [we have much more extensive documentation on how to configure volumes for database storage, set up a systemd script, and configure your operating system to use as a database server in our linux installation guide](./linux).
+
+## With Docker
+
+If you would like to run HarperDB in Docker, install [Docker Desktop](https://docs.docker.com/desktop/) on your Mac or Windows computer. Otherwise, install the [Docker Engine](https://docs.docker.com/engine/install/) on your Linux server.
+
+Once Docker Desktop or Docker Engine is installed, visit our [Docker Hub page](https://hub.docker.com/r/harperdb/harperdb) for information and examples on how to run a HarperDB container.
+
+## Offline Install
+
+If you need to install HarperDB on a device that doesn't have an Internet connection, you can choose your version and download the npm package and install it directly (you’ll still need Node.js and NPM):
+
+[Download Install Package](https://products-harperdb-io.s3.us-east-2.amazonaws.com/index.html)
+
+Once you’ve downloaded the .tgz file, run the following command from the directory where you’ve placed it:
+
+```bash
+npm install -g harperdb-X.X.X.tgz
+harperdb install
+```
+
+## Installation on Less Common Platforms
+
+HarperDB comes with binaries for standard AMD64/x64 or ARM64 CPU architectures on Linux, Windows (x64 only), and Mac (including Apple Silicon). However, if you are installing on a less common platform (Alpine, for example), you will need to ensure that you have build tools installed for the installation process to compile the binaries (this is handled automatically), including:
+
+* [Go](https://go.dev/dl/): version 1.19.1
+* GCC
+* Make
+* Python v3.7, v3.8, v3.9, or v3.10
diff --git a/site/versioned_docs/version-4.3/deployments/install-harperdb/linux.md b/site/versioned_docs/version-4.3/deployments/install-harperdb/linux.md
new file mode 100644
index 00000000..6cea34ad
--- /dev/null
+++ b/site/versioned_docs/version-4.3/deployments/install-harperdb/linux.md
@@ -0,0 +1,223 @@
+---
+title: On Linux
+---
+
+# On Linux
+
+If you wish to install locally or already have a configured server, see the basic [Installation Guide](./)
+
+The following is a recommended way to configure Linux and install HarperDB. These instructions should work reasonably well for any public cloud or on-premises Linux instance.
+
+***
+
+These instructions assume that the following has already been completed:
+
+1. Linux is installed
+1. Basic networking is configured
+1. A non-root user account dedicated to HarperDB with sudo privileges exists
+1. An additional volume for storing HarperDB files is attached to the Linux instance
+1. Traffic to ports 9925 (HarperDB Operations API), 9926 (HarperDB Application Interface), and 9932 (HarperDB Clustering) is permitted
+
+While you will need to access HarperDB through port 9925 for administration through the operations API, and port 9932 for clustering, for a higher level of security you may want to consider keeping both of these ports restricted to a VPN or VPC, and only exposing the application interface (9926 by default) to the public Internet.
+
+For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default “ubuntu” user account.
+
+***
+
+### (Optional) LVM Configuration
+
+Logical Volume Manager (LVM) can be used to stripe multiple disks together to form a single logical volume. If striping disks together is not a requirement, skip these steps.
+
+Find disk that already has a partition
+
+```bash
+used_disk=$(lsblk -P -I 259 | grep "nvme.n1.*part" | grep -o "nvme.n1")
+```
+
+Create array of free disks
+
+```bash
+declare -a free_disks
+mapfile -t free_disks < <(lsblk -P -I 259 | grep "nvme.n1.*disk" | grep -o "nvme.n1" | grep -v "$used_disk")
+```
+
+Get quantity of free disks
+
+```bash
+free_disks_qty=${#free_disks[@]}
+```
+
+Construct pvcreate command
+
+```bash
+cmd_string=""
+for i in "${free_disks[@]}"
+do
+cmd_string="$cmd_string /dev/$i"
+done
+```
+
+Initialize disks for use by LVM
+
+```bash
+pvcreate_cmd="pvcreate $cmd_string"
+sudo $pvcreate_cmd
+```
+
+Create volume group
+
+```bash
+vgcreate_cmd="vgcreate hdb_vg $cmd_string"
+sudo $vgcreate_cmd
+```
+
+Create logical volume
+
+```bash
+sudo lvcreate -n hdb_lv -i $free_disks_qty -l 100%FREE hdb_vg
+```
+
+### Configure Data Volume
+
+Run `lsblk` and note the device name of the additional volume
+
+```bash
+lsblk
+```
+
+Create an ext4 filesystem on the volume (the below commands assume the device name is nvme1n1; if you used LVM to create a logical volume, replace /dev/nvme1n1 with /dev/hdb_vg/hdb_lv)
+
+```bash
+sudo mkfs.ext4 -L hdb_data /dev/nvme1n1
+```
+
+Mount the file system and set the correct permissions for the directory
+
+```bash
+mkdir /home/ubuntu/hdb
+sudo mount -t ext4 /dev/nvme1n1 /home/ubuntu/hdb
+sudo chown -R ubuntu:ubuntu /home/ubuntu/hdb
+sudo chmod 775 /home/ubuntu/hdb
+```
+
+Create a fstab entry to mount the filesystem on boot
+
+```bash
+echo "LABEL=hdb_data /home/ubuntu/hdb ext4 defaults,noatime 0 1" | sudo tee -a /etc/fstab
+```
+
+### Configure Linux and Install Prerequisites
+
+If a swap file or partition does not already exist, create and enable a 2GB swap file
+
+```bash
+sudo dd if=/dev/zero of=/swapfile bs=128M count=16
+sudo chmod 600 /swapfile
+sudo mkswap /swapfile
+sudo swapon /swapfile
+echo "/swapfile swap swap defaults 0 0" | sudo tee -a /etc/fstab
+```
+
+Increase the open file limits for the ubuntu user
+
+```bash
+echo "ubuntu soft nofile 500000" | sudo tee -a /etc/security/limits.conf
+echo "ubuntu hard nofile 1000000" | sudo tee -a /etc/security/limits.conf
+```
+
+Install Node Version Manager (nvm)
+
+```bash
+curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
+```
+
+Load nvm (or logout and then login)
+
+```bash
+. ~/.nvm/nvm.sh
+```
+
+Install Node.js using nvm ([read more about specific Node version requirements](https://www.npmjs.com/package/harperdb#prerequisites))
+
+```bash
+nvm install
+```
+
+### Install and Start HarperDB
+
+Here is an example of installing HarperDB with minimal configuration.
+
+```bash
+npm install -g harperdb
+harperdb start \
+ --TC_AGREEMENT "yes" \
+ --ROOTPATH "/home/ubuntu/hdb" \
+ --OPERATIONSAPI_NETWORK_PORT "9925" \
+ --HDB_ADMIN_USERNAME "HDB_ADMIN" \
+ --HDB_ADMIN_PASSWORD "password"
+```
+
+Here is an example of installing HarperDB with commonly used additional configuration.
+
+```bash
+npm install -g harperdb
+harperdb start \
+ --TC_AGREEMENT "yes" \
+ --ROOTPATH "/home/ubuntu/hdb" \
+ --OPERATIONSAPI_NETWORK_PORT "9925" \
+ --HDB_ADMIN_USERNAME "HDB_ADMIN" \
+ --HDB_ADMIN_PASSWORD "password" \
+ --HTTP_SECUREPORT "9926" \
+ --CLUSTERING_ENABLED "true" \
+ --CLUSTERING_USER "cluster_user" \
+ --CLUSTERING_PASSWORD "password" \
+ --CLUSTERING_NODENAME "hdb1"
+```
+
+You can also use a custom configuration file to set values on install. Use the CLI/ENV variable `HDB_CONFIG` and set it to the path of your [custom configuration file](../../deployments/configuration):
+
+```bash
+npm install -g harperdb
+harperdb start \
+ --TC_AGREEMENT "yes" \
+ --HDB_ADMIN_USERNAME "HDB_ADMIN" \
+ --HDB_ADMIN_PASSWORD "password" \
+ --HDB_CONFIG "/path/to/your/custom/harperdb-config.yaml"
+```
+
+#### Start HarperDB on Boot
+
+HarperDB will automatically start after installation. If you wish HarperDB to start when the OS boots, you have two options:
+
+You can set up a crontab:
+
+```bash
+(crontab -l 2>/dev/null; echo "@reboot PATH=\"/home/ubuntu/.nvm/versions/node/v18.15.0/bin:$PATH\" && harperdb start") | crontab -
+```
+
+Or you can create a systemd script at `/etc/systemd/system/harperdb.service`
+
+Pasting the following contents into the file:
+
+```
+[Unit]
+Description=HarperDB
+
+[Service]
+Type=simple
+Restart=always
+User=ubuntu
+Group=ubuntu
+WorkingDirectory=/home/ubuntu
+ExecStart=/bin/bash -c 'PATH="/home/ubuntu/.nvm/versions/node/v18.15.0/bin:$PATH"; harperdb'
+
+[Install]
+WantedBy=multi-user.target
+```
+
+And then running the following:
+
+```
+sudo systemctl daemon-reload
+sudo systemctl enable harperdb
+```
+
+For more information visit the [HarperDB Command Line Interface guide](../../deployments/harperdb-cli) and the [HarperDB Configuration File guide](../../deployments/configuration).
diff --git a/site/versioned_docs/version-4.3/deployments/upgrade-hdb-instance.md b/site/versioned_docs/version-4.3/deployments/upgrade-hdb-instance.md
new file mode 100644
index 00000000..0b7c6e3f
--- /dev/null
+++ b/site/versioned_docs/version-4.3/deployments/upgrade-hdb-instance.md
@@ -0,0 +1,90 @@
+---
+title: Upgrade a HarperDB Instance
+---
+
+# Upgrade a HarperDB Instance
+
+This document describes best practices for upgrading self-hosted HarperDB instances. HarperDB can be upgraded using a combination of npm and built-in HarperDB upgrade scripts. Whenever upgrading your HarperDB installation it is recommended you make a backup of your data first. Note: This document applies to self-hosted HarperDB instances only. All [HarperDB Cloud instances](./harperdb-cloud/) will be upgraded by the HarperDB Cloud team.
+
+## Upgrading
+
+Upgrading HarperDB is a two-step process. First the latest version of HarperDB must be downloaded from npm, then the HarperDB upgrade scripts will be utilized to ensure the newest features are available on the system.
+
+1. Install the latest version of HarperDB using `npm install -g harperdb`.
+
+ Note `-g` should only be used if you installed HarperDB globally (which is recommended).
+1. Run `harperdb` to initiate the upgrade process.
+
+ HarperDB will then prompt you for all appropriate inputs and then run the upgrade directives.
+
+## Node Version Manager (nvm)
+
+[Node Version Manager (nvm)](http://nvm.sh/) is an easy way to install, remove, and switch between different versions of Node.js as required by various applications. More information, including directions on installing nvm, can be found at https://nvm.sh/.
+
+HarperDB supports Node.js versions 14.0.0 and higher; however, **please check our** [**NPM page**](https://www.npmjs.com/package/harperdb) **for our recommended Node.js version.** To install a different version of Node.js with nvm, run the command:
+
+```bash
+nvm install
+```
+
+To switch to a version of Node run:
+
+```bash
+nvm use
+```
+
+To see the current running version of Node run:
+
+```bash
+node --version
+```
+
+With a handful of different versions of Node.js installed, run nvm with the `ls` argument to list out all installed versions:
+
+```bash
+nvm ls
+```
+
+When upgrading HarperDB, we recommend also upgrading your Node version. Here we assume you're running on an older version of Node; the execution may look like this:
+
+Switch to the older version of Node that HarperDB is running on (if it is not the current version):
+
+```bash
+nvm use 14.19.0
+```
+
+Make sure HarperDB is not running:
+
+```bash
+harperdb stop
+```
+
+Uninstall HarperDB. Note, this step is not required, but will clean up old artifacts of HarperDB. We recommend removing all other HarperDB installations to ensure the most recent version is always running.
+
+```bash
+npm uninstall -g harperdb
+```
+
+Switch to the newer version of Node:
+
+```bash
+nvm use
+```
+
+Install HarperDB globally
+
+```bash
+npm install -g harperdb
+```
+
+Run the upgrade script
+
+```bash
+harperdb
+```
+
+Start HarperDB
+
+```bash
+harperdb start
+```
diff --git a/site/versioned_docs/version-4.3/developers/_category_.json b/site/versioned_docs/version-4.3/developers/_category_.json
new file mode 100644
index 00000000..9fe399bf
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/_category_.json
@@ -0,0 +1,12 @@
+{
+ "label": "Developers",
+ "position": 1,
+ "link": {
+ "type": "generated-index",
+ "title": "Developers Documentation",
+ "description": "Comprehensive guides and references for building applications with HarperDB",
+ "keywords": [
+ "developers"
+ ]
+ }
+}
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/developers/applications/caching.md b/site/versioned_docs/version-4.3/developers/applications/caching.md
new file mode 100644
index 00000000..6ebad89a
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/applications/caching.md
@@ -0,0 +1,288 @@
+---
+title: Caching
+---
+
+# Caching
+
+HarperDB has integrated support for caching data from external sources. With built-in caching capabilities and distributed, high-performance, low-latency responsiveness, HarperDB makes an ideal data caching server. HarperDB can store cached data in standard tables, as queryable structured data, so data can easily be consumed in one format (for example JSON or CSV) and provided to end users in different formats with different selected properties (for example MessagePack, with a subset of selected properties), or even with customized querying capabilities. HarperDB also manages and provides timestamps/tags for proper caching control, facilitating further downstream caching. With these combined capabilities, HarperDB is an extremely fast, interoperable, flexible, and customizable caching server.
+
+## Configuring Caching
+
+To set up caching, first you will need to define a table that you will use as your cache (to store the cached data). You can review the [introduction to building applications](./) for more information on setting up the application (and the [defining schemas documentation](./defining-schemas)), but once you have defined an application folder with a schema, you can add a table for caching to your `schema.graphql`:
+
+```graphql
+type MyCache @table(expiration: 3600) @export {
+ id: ID @primaryKey
+}
+```
+
+You may also note that we can define a time-to-live (TTL) expiration on the table, indicating when table records/entries should expire and be evicted from this table. This is generally necessary for "passive" caches where there is no active notification of when entries expire. However, this is not needed if you provide a means of notifying when data is invalidated and changed. The units for expiration, and other duration-based properties, are in seconds.
+
+While you can provide a single expiration time, there are actually several expiration timings that are potentially relevant, and they can be independently configured. These settings are available as directive properties on the table configuration (like `expiration` above):
+* stale expiration - The point when a request for a record should trigger a request to origin (but might possibly return the current stale record depending on policy).
+* must-revalidate expiration - The point when a request for a record must make a request to origin first and return the latest value from origin.
+* eviction expiration - The point when a record is actually removed from the caching table.
+
+You can provide a single expiration and it defines the behavior for all three. You can also provide three settings for expiration, through table directives:
+* expiration - The amount of time until a record goes stale.
+* eviction - The amount of time after expiration before a record can be evicted (defaults to zero).
+* scanInterval - The interval for scanning for expired records (defaults to one quarter of the total of expiration and eviction).
+
+## Define External Data Source
+
+Next, you need to define the source for your cache. External data sources could be HTTP APIs, other databases, microservices, or any other source of data. This can be defined as a resource class in your application's `resources.js` module. You can extend the `Resource` class (which is available as a global variable in the HarperDB environment) as your base class. The first method to implement is a `get()` method to define how to retrieve the source data. For example, if we were caching an external HTTP API, we might define it as such:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+ async get() {
+    return (await fetch(`http://some-api.com/${this.getId()}`)).json();
+ }
+}
+```
+
+Next, we define this external data resource as the "source" for the caching table we defined above:
+
+```javascript
+const { MyCache } = tables;
+MyCache.sourcedFrom(ThirdPartyAPI);
+```
+
+Now we have a fully configured and connected caching table. If you access data from `MyCache` (for example, through the REST API, like `/MyCache/some-id`), HarperDB will check to see if the requested entry is in the table and return it if it is available (and hasn't expired). If there is no entry, or it has expired (it is older than one hour in this case), it will go to the source, calling the `get()` method, which will then retrieve the requested entry. Once the entry is retrieved, it will be saved/cached in the caching table (for one hour based on our expiration time).
+
+```mermaid
+flowchart TD
+ Client1(Client 1)-->Cache(Caching Table)
+ Client2(Client 2)-->Cache
+ Cache-->Resource(Data Source Connector)
+ Resource-->API(Remote Data Source API)
+```
+
+
+HarperDB handles waiting for an existing cache resolution to finish and uses its result. This prevents a "cache stampede" when entries expire, ensuring that multiple requests to a cache entry will all wait on a single request to the data source.
+
+Cache tables with an expiration are periodically pruned for expired entries. Because this is done periodically, there is usually some amount of time between when a record has expired and when the record is actually evicted (the cached data is removed). But when a record is checked for availability, the expiration time is used to determine if the record is fresh (and the cache entry can be used).
+
+### Eviction with Indexing
+
+Eviction is the removal of a locally cached copy of data, but it does not imply the deletion of the actual data from the canonical or origin data source. Because evicted records still exist (just not in the local cache), if a caching table uses expiration (and eviction), and has indexing on certain attributes, the data is not removed from the indexes. The indexes that reference the evicted record are preserved, along with the attribute data necessary to maintain these indexes. Therefore eviction means the removal of non-indexed data (in this case evictions are stored as "partial" records). Eviction only removes the data that can be safely removed from a cache without affecting the integrity or behavior of the indexes. If a search query is performed that matches this evicted record, the record will be requested on-demand to fulfill the search query.
+
+### Specifying a Timestamp
+
+In the example above, we simply retrieved data to fulfill a cache request. We may want to supply the timestamp of the record we are fulfilling as well. This can be set on the context for the request:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+ async get() {
+    let response = await fetch(`http://some-api.com/${this.getId()}`);
+ this.getContext().lastModified = response.headers.get('Last-Modified');
+ return response.json();
+ }
+}
+```
+
+#### Specifying an Expiration
+
+In addition, we can also specify when a cached record "expires". When a cached record expires, this means that a request for that record will trigger a request to the data source again. This does not necessarily mean that the cached record has been evicted (removed), although expired records will be periodically evicted. If the cached record still exists, the data source can revalidate it and return it. For example:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+ async get() {
+ const context = this.getContext();
+ let headers = new Headers();
+    if (context.replacingVersion) // this is the existing cached record
+      headers.set('If-Modified-Since', new Date(context.replacingVersion).toUTCString());
+    let response = await fetch(`http://some-api.com/${this.getId()}`, { headers });
+    let cacheInfo = response.headers.get('Cache-Control');
+    let maxAge = cacheInfo?.match(/max-age=(\d+)/)?.[1];
+    if (maxAge) // we can set a specific expiration time by setting context.expiresAt
+      context.expiresAt = Date.now() + maxAge * 1000; // convert from seconds to milliseconds and add to current time
+    // we can just revalidate and return the record if the origin has confirmed that it has the same version:
+ if (response.status === 304) return context.replacingRecord;
+ ...
+```
+
+## Active Caching and Invalidation
+
+The cache we have created above is a "passive" cache; it only pulls data from the data source as needed, and has no knowledge of if and when data from the data source has actually changed, so it must rely on timer-based expiration to periodically retrieve possibly updated data. This means that it is possible that the cache may have stale data for a while (if the underlying data has changed, but the cached data hasn't expired), and the cache may have to refresh more than necessary if the data source data hasn't changed. Consequently it can be significantly more effective to implement an "active" cache, in which the data source is monitored and notifies the cache when any data changes. This ensures that when data changes, the cache can immediately load the updated data, and unchanged data can remain cached much longer (or indefinitely).
+
+### Invalidate
+
+One way to provide more active caching is to specifically invalidate individual records. Invalidation is useful when you know the source data has changed, and the cache needs to re-retrieve data from the source the next time that record is accessed. This can be done by executing the `invalidate()` method on a resource. For example, you could extend a table (in your resources.js) and provide a custom POST handler that does invalidation:
+
+```javascript
+const { MyTable } = tables;
+export class MyTableEndpoint extends MyTable {
+ async post(data) {
+    if (data.invalidate) // use this flag as a marker
+ this.invalidate();
+ }
+}
+```
+
+(Note that if you are now exporting this endpoint through resources.js, you don't necessarily need to directly export the table separately in your schema.graphql).
+
+### Subscriptions
+
+We can provide more control of an active cache with subscriptions. If there is a way to receive notifications from the external data source of data changes, we can implement this data source as an "active" data source for our cache by implementing a `subscribe` method. A `subscribe` method should return an asynchronous iterable that iterates and returns events indicating the updates. One straightforward way of creating an asynchronous iterable is by defining the `subscribe` method as an asynchronous generator. If we had an endpoint that we could poll for changes every second, we could implement this like:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+  async *subscribe() {
+    while (true) { // poll for changes every second
+      // get the next data change event from the source
+      let update = await (await fetch(`http://some-api.com/latest-update`)).json();
+      const event = { // define the change event (which will update the cache)
+        type: 'put', // this indicates that the event includes the new data value
+        id: update.id, // the primary key of the record that updated
+        value: update.value, // the new value of the record that updated
+        timestamp: update.timestamp // the timestamp of when the data change occurred
+      };
+      yield event; // this returns this event, notifying the cache of the change
+      await new Promise((resolve) => setTimeout(resolve, 1000)); // wait before polling again
+    }
+  }
+ async get() {
+...
+```
+
+Notification events should always include an `id` property to indicate the primary key of the updated record. The event should have a `value` property for `put` and `message` event types. The `timestamp` is optional and can be used to indicate the exact timestamp of the change. The following event `type`s are supported:
+
+* `put` - This indicates that the record has been updated and provides the new value of the record.
+* `invalidate` - Alternately, you can notify with an event type of `invalidate` to indicate that the data has changed, but without the overhead of actually sending the data (the `value` property is not needed), so the data only needs to be sent if and when the data is requested through the cache. An `invalidate` will evict the entry and update the timestamp to indicate that there is new data that should be requested (if needed).
+* `delete` - This indicates that the record has been deleted.
+* `message` - This indicates a message is being passed through the record. The record value has not changed, but this is used for [publish/subscribe messaging](../real-time).
+* `transaction` - This indicates that there are multiple writes that should be treated as a single atomic transaction. These writes should be included as an array of data notification events in the `writes` property.
+
+And the following properties can be defined on event objects:
+
+* `type`: The event type as described above.
+* `id`: The primary key of the record that updated.
+* `value`: The new value of the record that updated (for put and message).
+* `writes`: An array of event properties that are part of a transaction (used in conjunction with the transaction event type; see the example below).
+* `table`: The name of the table with the record that was updated. This can be used with events within a transaction to specify events across multiple tables.
+* `timestamp`: The timestamp of when the data change occurred.
+
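+For example, a data source could describe an atomic change to two related records with a single `transaction` event. This is a hedged sketch; the table names, ids, and values here are purely illustrative:
+
+```javascript
+const transactionEvent = {
+  type: 'transaction', // treat the contained writes as one atomic transaction
+  timestamp: Date.now(),
+  writes: [
+    { type: 'put', table: 'MyCache', id: 'record-1', value: { name: 'updated name' } },
+    { type: 'delete', table: 'RelatedTable', id: 'record-2' } // a second, hypothetical table
+  ]
+};
+```
+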
+With an active external data source with a `subscribe` method, the data source will proactively notify the cache, ensuring a fresh and efficient active cache. Note that with an active data source, we still use the `sourcedFrom` method to register the source for a caching table, and the table will automatically detect and call the subscribe method on the data source.
+
+By default, HarperDB will only run the subscribe method on one thread. HarperDB is multi-threaded and normally runs many concurrent worker threads, but typically running a subscription on multiple threads can introduce overlapping notifications and race conditions, so running a subscription on a single thread is preferable. However, if you want to enable subscribe on multiple threads, you can define a `static subscribeOnThisThread` method to specify if the subscription should run on the current thread:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+ static subscribeOnThisThread(threadIndex) {
+    return threadIndex < 2; // run on two threads (the first two threads)
+ }
+ async *subscribe() {
+ ....
+```
+
+An alternative to using asynchronous generators is to use a subscription stream and send events to it. A default subscription stream (that doesn't generate its own events) is available from the Resource's default subscribe method:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+ subscribe() {
+ const subscription = super.subscribe();
+ setupListeningToRemoteService().on('update', (event) => {
+ subscription.send(event);
+ });
+ return subscription;
+ }
+}
+```
+
+## Downstream Caching
+
+It is highly recommended that you utilize the [REST interface](../rest) for accessing caching tables, as it facilitates downstream caching for clients. Timestamps are recorded with all cached entries. Timestamps are then used for incoming [REST requests to specify the `ETag` in the response](../rest#cachingconditional-requests). Clients can cache data themselves and send requests using the `If-None-Match` header to conditionally get a 304 and preserve their cached data based on the timestamp/`ETag` of the entries that are cached in HarperDB. Caching tables also have [subscription capabilities](./caching#subscribing-to-caching-tables), which means that downstream caches can be fully "layered" on top of HarperDB, as either passive or active caches.
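+
+As a rough sketch of what a downstream client could do (the instance URL, resource path, and the way the local copy is stored are illustrative assumptions), a client can revalidate its own cached copy with a conditional request:
+
+```javascript
+// Revalidate a locally cached copy of /MyCache/some-id against HarperDB's ETag.
+async function revalidate(cached) { // cached = { etag, body } from a previous response
+  const response = await fetch('http://localhost:9926/MyCache/some-id', {
+    headers: cached?.etag ? { 'If-None-Match': cached.etag } : {}
+  });
+  if (response.status === 304) return cached; // unchanged, keep using the local copy
+  return { etag: response.headers.get('ETag'), body: await response.json() };
+}
+```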
+
+## Write-Through Caching
+
+The cache we have defined so far only has data flowing from the data source to the cache. However, you may wish to support write methods, so that writes to the cache table can flow through to the underlying canonical data source, as well as populate the cache. This can be accomplished by implementing the standard write methods, like `put` and `delete`. If you were using an API with standard RESTful methods, you could pass writes through to the data source like this:
+
+```javascript
+class ThirdPartyAPI extends Resource {
+ async put(data) {
+    await fetch(`http://some-api.com/${this.getId()}`, {
+ method: 'PUT',
+ body: JSON.stringify(data)
+ });
+ }
+ async delete() {
+    await fetch(`http://some-api.com/${this.getId()}`, {
+ method: 'DELETE',
+ });
+ }
+ ...
+```
+
+When doing an insert or update to the MyCache table, the data will be sent to the underlying data source through the `put` method and the new record value will be stored in the cache as well.
+
+### Loading from Source in Methods
+
+When you are using a caching table, it is important to remember that any resource method besides `get()` will not automatically load data from the source. If you have defined a `put()`, `post()`, or `delete()` method and you need the source data, you can ensure it is loaded by calling the `ensureLoaded()` method. For example, if you want to modify the existing record from the source, adding a property to it:
+
+```javascript
+class MyCache extends tables.MyCache {
+ async post(data) {
+    // if the data is not cached locally, retrieves from source:
+    await this.ensureLoaded();
+    // now we can be sure that the data is loaded, and can access properties
+ this.quantity = this.quantity - data.purchases;
+ }
+}
+```
+
+### Subscribing to Caching Tables
+
+You can subscribe to a caching table just like any other table. The one difference is that normal tables do not usually have `invalidate` events, but an active caching table may have `invalidate` events. Again, this event type gives listeners an opportunity to choose whether or not to actually retrieve the value that changed.
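+
+For example, a listener could subscribe to the caching table and decide whether to act on `invalidate` events. This is a minimal sketch, assuming the table's `subscribe()` method resolves to an asynchronous iterable of change events like those described above:
+
+```javascript
+const { MyCache } = tables;
+
+// Listen for change events on the caching table and only react when necessary.
+async function watchCache() {
+  const subscription = await MyCache.subscribe({});
+  for await (const event of subscription) {
+    if (event.type === 'invalidate') {
+      // the source data changed, but the new value was not sent; fetch it only if needed
+      logger.info(`Record ${event.id} was invalidated`);
+    } else if (event.type === 'put') {
+      logger.info(`Record ${event.id} was updated`);
+    }
+  }
+}
+watchCache();
+```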
+
+### Passive-Active Updates
+
+With our passive update examples, we have provided a data source handler with a `get()` method that returns the specific requested record as the response. However, we can also actively update other records in our response handler (if our data source provides data that should be propagated to other related records). This can be done transactionally, to ensure that all updates occur atomically. The context that is provided to the data source holds the transaction information, so we can simply pass the context to any update/write methods that we call. For example, let's say we are loading a blog post, which also includes comment records:
+
+```javascript
+const { Post, Comment } = tables;
+class BlogSource extends Resource {
+  async get() {
+    let post = await (await fetch(`http://my-blog-server/${this.getId()}`)).json();
+    for (let comment of post.comments) {
+      await Comment.put(comment, this); // save this comment as part of our current context and transaction
+ }
+ return post;
+ }
+}
+Post.sourcedFrom(BlogSource);
+```
+
+Here both the update to the post and the update to the comments will be atomically/transactionally committed together with the same timestamp.
+
+## Cache-Control header
+
+When interacting with cached data, you can also use the `Cache-Control` request header to specify certain caching behaviors. When performing a PUT (or POST) method, you can use the `max-age` directive to indicate how long the resource should be cached (until stale):
+
+```http
+PUT /my-resource/id
+Cache-Control: max-age=86400
+```
+
+You can use the `only-if-cached` directive on GET requests to only return a resource if it is cached (otherwise a 504 will be returned). Note that if the entry is not cached, this will still trigger a request for the source data from the data source. If you do not want source data retrieved, you can add the `no-store` directive. You can also use the `no-cache` directive if you do not want to use the cached resource. For example, to check if there is a cached resource without triggering a request to the data source:
+
+```http
+GET /my-resource/id
+Cache-Control: only-if-cached, no-store
+```
+
+You may also use the `stale-if-error` directive to indicate that it is acceptable to return a stale cached resource when the data source returns an error (network connection error, 500, 502, 503, or 504). The `must-revalidate` directive indicates that a stale cached resource cannot be returned, even when the data source has an error (by default a stale cached resource is returned when there is a network connection error).
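+
+For instance, a client that would rather receive a stale cached copy than an error from the origin could send the directive like this (a sketch using `fetch`; the instance URL and resource path are illustrative):
+
+```javascript
+// Ask for the resource, allowing a stale cached entry to be served if the data source errors.
+const response = await fetch('http://localhost:9926/my-resource/id', {
+  headers: { 'Cache-Control': 'stale-if-error' }
+});
+```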
+
+
+## Caching Flow
+It may be helpful to understand the flow of a cache request. When a request is made to a caching table:
+* HarperDB will first create a resource instance to handle the process, and ensure that the data is loaded for the resource instance. To do this, it will first check if the record is in the table/cache.
+ * If the record is not in the cache, HarperDB will first check if there is a current request to get the record from the source. If there is, HarperDB will wait for the request to complete and return the record from the cache.
+ * If not, HarperDB will call the `get()` method on the source to retrieve the record. The record will then be stored in the cache.
+ * If the record is in the cache, HarperDB will check if the record is stale. If the record is not stale, HarperDB will immediately return the record from the cache. If the record is stale, HarperDB will call the `get()` method on the source to retrieve the record.
+  * The record will then be stored in the cache. This write of the record to the cache will be done in a separate asynchronous/background write-behind transaction, so it does not block the current request, which will return the data immediately once it has it.
+* The `get()` method will be called on the resource instance to return the record to the client (or perform any querying on the record). If this is overridden, the custom method will be called at this time.
+
+### Caching Flow with Write-Through
+When writes are performed on a caching table (in a `put()` or `post()` method, for example), the flow is slightly different:
+* HarperDB will have first created a resource instance to handle the process, and this resource instance will be the current `this` for a call to `put()` or `post()`.
+* If a `put()` or `update()` is called, for example, this action will be recorded in the current transaction.
+* Once the transaction is committed (which is done automatically as the request handler completes), the transaction write will be sent to the source to update the data.
+ * The local writes will wait for the source to confirm the writes have completed (note that this effectively allows you to perform a two-phase transactional write to the source, and the source can confirm the writes have completed before the transaction is committed locally).
+  * The transaction writes will then be written to the local caching table.
+* The transaction handler will wait for the local commit to be written, then the transaction will be resolved and a response will be sent to the client.
diff --git a/site/versioned_docs/version-4.3/developers/applications/debugging.md b/site/versioned_docs/version-4.3/developers/applications/debugging.md
new file mode 100644
index 00000000..ca03115f
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/applications/debugging.md
@@ -0,0 +1,39 @@
+---
+title: Debugging Applications
+---
+
+# Debugging Applications
+
+HarperDB components and applications run inside the HarperDB process, which is a standard Node.js process that can be debugged with standard JavaScript development tools like Chrome's devtools, VSCode, and WebStorm. Debugging can be performed by launching the HarperDB entry script with your IDE, or you can start HarperDB in dev mode and connect your debugger to the running process (defaults to standard 9229 port):
+
+```
+harperdb dev
+# or to run and debug a specific app
+harperdb dev /path/to/app
+```
+
+Once you have connected a debugger, you may set breakpoints in your application and fully debug it. Note that when using the `dev` command from the CLI, this will run HarperDB in single-threaded mode. This would not be appropriate for production use, but makes it easier to debug applications.
+
+For local debugging and development, it is recommended that you use standard console log statements for logging. For production use, you may want to use HarperDB's logging facilities, so you aren't logging to the console. The logging functions are available on the global `logger` variable that is provided by HarperDB. This logger can be used to output messages directly to the HarperDB log using standardized logging level functions, described below. The log level can be set in the [HarperDB Configuration File](../../deployments/configuration).
+
+HarperDB Logger Functions
+
+* `trace(message)`: Write a 'trace' level log, if the configured level allows for it.
+* `debug(message)`: Write a 'debug' level log, if the configured level allows for it.
+* `info(message)`: Write an 'info' level log, if the configured level allows for it.
+* `warn(message)`: Write a 'warn' level log, if the configured level allows for it.
+* `error(message)`: Write an 'error' level log, if the configured level allows for it.
+* `fatal(message)`: Write a 'fatal' level log, if the configured level allows for it.
+* `notify(message)`: Write a 'notify' level log.
+
+For example, you can log a warning:
+
+```javascript
+logger.warn('You have been warned');
+```
+
+If you want to ensure a message is logged, you can use `notify` as these messages will appear in the log regardless of log level configured.
+
+## Viewing the Log
+
+The HarperDB Log can be found in your local `~/hdb/log/hdb.log` file (or in the log folder if you have specified an alternate hdb root), or in the [Studio Status page](../../administration/harperdb-studio/instance-metrics). Additionally, you can use the [`read_log` operation](../operations-api/logs) to query the HarperDB log.
diff --git a/site/versioned_docs/version-4.3/developers/applications/define-routes.md b/site/versioned_docs/version-4.3/developers/applications/define-routes.md
new file mode 100644
index 00000000..6a9ed34b
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/applications/define-routes.md
@@ -0,0 +1,118 @@
+---
+title: Define Fastify Routes
+---
+
+# Define Fastify Routes
+
+HarperDB’s applications provide an extension for loading [Fastify](https://www.fastify.io/) routes as a way to handle endpoints. While we generally recommend building your endpoints/APIs with HarperDB's [REST interface](../rest) for better performance and standards compliance, Fastify routes can provide an extensive API for highly customized path handling. Below is a very simple example of a route declaration.
+
+The fastify route handler can be configured in your application's config.yaml (this is the default config if you used the [application template](https://github.com/HarperDB/application-template)):
+
+```yaml
+fastifyRoutes: # This loads files that define fastify routes using fastify's auto-loader
+ files: routes/*.js # specify the location of route definition modules
+  path: . # relative to the app-name, like http://server/app-name/route-name
+```
+
+By default, route URLs are configured to be:
+
+* \[**Instance URL**]:\[**HTTP Port**]/\[**Project Name**]/\[**Route URL**]
+
+However, you can specify the path to be `/` if you wish to have your routes handling the root path of incoming URLs.
+
+* The route below, using the default config, within the **dogs** project, with a route of **breeds** would be available at **http://localhost:9926/dogs/breeds**.
+
+In effect, this route is just a pass-through to HarperDB. The same result could have been achieved by hitting the core HarperDB API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the “helper methods” section, below.
+
+```javascript
+export default async (server, { hdbCore, logger }) => {
+ server.route({
+ url: '/',
+ method: 'POST',
+ preValidation: hdbCore.preValidation,
+ handler: hdbCore.request,
+ })
+}
+```
+
+## Custom Handlers
+
+For endpoints where you want to execute multiple operations against HarperDB, or perform additional processing (like an ML classification, or an aggregation, or a call to a 3rd party API), you can define your own logic in the handler. The function below will execute a query against the dogs table, and filter the results to only return those dogs over 4 years in age.
+
+**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which, as the name implies, bypasses all user authentication. See the security concerns and mitigations in the “helper methods” section, below.**
+
+```javascript
+export default async (server, { hdbCore, logger }) => {
+ server.route({
+ url: '/:id',
+ method: 'GET',
+    handler: async (request) => {
+      request.body = {
+ operation: 'sql',
+ sql: `SELECT * FROM dev.dog WHERE id = ${request.params.id}`
+ };
+
+ const result = await hdbCore.requestWithoutAuthentication(request);
+ return result.filter((dog) => dog.age > 4);
+ }
+ });
+}
+```
+
+## Custom preValidation Hooks
+
+The simple example above was just a pass-through to HarperDB; the exact same result could have been achieved by hitting the core HarperDB API. But for many applications, you may want to authenticate the user using custom logic you write, or by conferring with a 3rd party service. Custom preValidation hooks let you do just that.
+
+Below is an example of a route that uses a custom validation hook:
+
+```javascript
+import customValidation from '../helpers/customValidation';
+
+export default async (server, { hdbCore, logger }) => {
+ server.route({
+ url: '/:id',
+ method: 'GET',
+ preValidation: (request) => customValidation(request, logger),
+ handler: (request) => {
+      request.body = {
+ operation: 'sql',
+ sql: `SELECT * FROM dev.dog WHERE id = ${request.params.id}`
+ };
+
+ return hdbCore.requestWithoutAuthentication(request);
+ }
+ });
+}
+```
+
+Notice we imported customValidation from the **helpers** directory. To include a helper, and to see the actual code within customValidation, see [Define Helpers](#helper-methods).
+
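+Since the helper's source is not included in this guide, here is a rough sketch of what such a `customValidation` helper could look like (the token check and the validation service URL are purely illustrative assumptions, not HarperDB APIs):
+
+```javascript
+// helpers/customValidation.js
+export default async function customValidation(request, logger) {
+  const token = request.headers?.authorization;
+  if (!token) {
+    logger.warn('Request rejected: missing authorization header');
+    const error = new Error('Unauthorized');
+    error.statusCode = 401; // fastify will use this as the response status
+    throw error;
+  }
+  // Confer with a (hypothetical) third-party service to validate the token.
+  const result = await fetch('https://auth.example.com/validate', {
+    method: 'POST',
+    headers: { authorization: token }
+  });
+  if (!result.ok) {
+    logger.warn('Request rejected: token failed validation');
+    const error = new Error('Unauthorized');
+    error.statusCode = 401;
+    throw error;
+  }
+}
+```
+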
+## Helper Methods
+
+When declaring routes, you are given access to 2 helper methods: hdbCore and logger.
+
+**hdbCore**
+
+hdbCore contains three functions that allow you to authenticate an inbound request, and execute operations against HarperDB directly, bypassing the standard Operations API.
+
+* **preValidation**
+
+  This is an array of functions used for fastify authentication. The second function takes the authorization header from the inbound request and executes the same authentication as the standard HarperDB Operations API (for example, `hdbCore.preValidation[1](req, resp, callback)`). It will determine if the user exists, and if they are allowed to perform this operation. **If you use the request method, you have to use preValidation to get the authenticated user**.
+* **request**
+
+ This will execute a request with HarperDB using the operations API. The `request.body` should contain a standard HarperDB operation and must also include the `hdb_user` property that was in `request.body` provided in the callback.
+* **requestWithoutAuthentication**
+
+ Executes a request against HarperDB without any security checks around whether the inbound user is allowed to make this request. For security purposes, you should always take the following precautions when using this method:
+
+   * Properly handle user-submitted values, including url params. User-submitted values should only be used for `search_value` and for defining values in records. Special care should be taken to properly escape any values if user-submitted values are used for SQL (see the sketch after this list).
+
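+Since the earlier examples interpolate `request.params.id` directly into a SQL string, here is one way to guard such a value before using it (an illustrative sketch, not a HarperDB API; the allowed character set is an assumption):
+
+```javascript
+// helpers/assertSafeId.js
+export default function assertSafeId(id) {
+  if (!/^[\w-]+$/.test(id)) { // allow only letters, digits, underscores, and dashes
+    throw new Error(`Invalid id: ${id}`);
+  }
+  return id;
+}
+
+// usage inside a route handler:
+// request.body = { operation: 'sql', sql: `SELECT * FROM dev.dog WHERE id = '${assertSafeId(request.params.id)}'` };
+```
+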
+**logger**
+
+This helper allows you to write directly to the log file, hdb.log. It’s useful for debugging during development, although you may also use the console logger. There are 5 functions contained within logger, each of which pertains to a different **logging.level** configuration in your harperdb-config.yaml file.
+
+* logger.trace('Starting the handler for /dogs')
+* logger.debug('This should only fire once')
+* logger.warn('This should never ever fire')
+* logger.error('This did not go well')
+* logger.fatal('This did not go very well at all')
diff --git a/site/versioned_docs/version-4.3/developers/applications/defining-schemas.md b/site/versioned_docs/version-4.3/developers/applications/defining-schemas.md
new file mode 100644
index 00000000..29a0c12d
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/applications/defining-schemas.md
@@ -0,0 +1,161 @@
+---
+title: Defining Schemas
+---
+
+# Defining Schemas
+
+Schemas define tables and their attributes. Schemas can be declaratively defined in HarperDB using GraphQL schema definitions. Schema definitions can be used to ensure that tables exist (that are required for applications) and have the appropriate attributes. Schemas can define the primary key, data types for attributes, whether they are required, and which attributes should be indexed. The [introduction to applications](./) provides a helpful introduction to how to use schemas as part of database application development.
+
+Schemas can be used to define the expected structure of data, but they are also highly flexible, support heterogeneous data structures, and by default allow data to include additional properties. The standard types for GraphQL schemas are specified in the [GraphQL schema documentation](https://graphql.org/learn/schema/).
+
+An example schema that defines a couple of tables might look like:
+
+```graphql
+# schema.graphql:
+type Dog @table {
+ id: ID @primaryKey
+ name: String
+ breed: String
+ age: Int
+}
+
+type Breed @table {
+ id: ID @primaryKey
+}
+```
+
+In this example, you can see that we specified the expected data structure for records in the Dog and Breed tables. For example, this will enforce that Dog records are required to have a `name` property with a string (or null, unless the type were specified to be non-nullable). This does not preclude records from having additional properties (see `@sealed` for preventing additional properties). For example, some Dog records could also optionally include a `favoriteTrick` property.
+
+On this page, we describe the specific directives that HarperDB uses for defining tables and attributes in a schema.
+
+### Type Directives
+
+#### `@table`
+
+The schema for tables are defined using GraphQL type definitions with a `@table` directive:
+
+```graphql
+type TableName @table
+```
+
+By default the table name is inherited from the type name (in this case the table name would be "TableName"). The `@table` directive supports several optional arguments (all of these are optional and can be freely combined):
+
+* `@table(table: "table_name")` - This allows you to explicitly specify the table name.
+* `@table(database: "database_name")` - This allows you to specify which database the table belongs to. This defaults to the "data" database.
+* `@table(expiration: 3600)` - Sets an expiration time on entries in the table before they are automatically cleared (primarily useful for caching tables). This is specified in seconds.
+* `@table(audit: true)` - This enables the audit log for the table so that a history of record changes are recorded. This defaults to [configuration file's setting for `auditLog`](../../deployments/configuration#logging).
+
+#### `@export`
+
+This indicates that the specified table should be exported as a resource that is accessible as an externally available endpoints, through REST, MQTT, or any of the external resource APIs.
+
+This directive also accepts a `name` parameter to specify the name that should be used for the exported resource (how it will appear in the URL path). For example:
+
+```graphql
+type MyTable @table @export(name: "my-table")
+```
+
+This table would be available at the URL path `/my-table/`. Without the `name` parameter, the exported name defaults to the name of the table type ("MyTable" in this example).
+
+### Relationships: `@relationship`
+Defining relationships is the foundation of using "join" queries in HarperDB. A relationship defines how one table relates to another table using a foreign key. Using the `@relationship` directive will define a property as a computed property, which resolves to a record/instance of the target type, based on the referenced attribute, which can be in this table or the target table. The `@relationship` directive must be used in combination with an attribute with a type that references another table.
+
+#### `@relationship(from: attribute)`
+This defines a relationship where the foreign key is defined in this table, and relates to the primary key of the target table. If the foreign key is single-valued, this establishes a many-to-one relationship with the target table. The foreign key may also be a multi-valued array, in which case this will be a many-to-many relationship.
+For example, we can define a foreign key that references another table and then define the relationship. Here we create a `brandId` attribute that will be our foreign key (it will hold an id that references the primary key of the Brand table), and we define a relationship to the `Brand` table through the `brand` attribute:
+```graphql
+type Product @table @export {
+ id: ID @primaryKey
+ brandId: ID @indexed
+ brand: Brand @relationship(from: brandId)
+}
+type Brand @table @export {
+ id: ID @primaryKey
+}
+```
+Once this is defined we can use the `brand` attribute as a [property in our product instances](../../technical-details/reference/resource) and allow for querying by `brand` and selecting brand attributes as returned properties in [query results](../rest).
+
+Again, the foreign key may be a multi-valued array (array of keys referencing the target table records). For example, if we had a list of features that references a Feature table:
+
+```graphql
+type Product @table @export {
+ id: ID @primaryKey
+ featureIds: [ID] @indexed # array of ids
+ features: [Feature] @relationship(from: featureIds) # array of referenced feature records
+}
+type Feature @table {
+ id: ID @primaryKey
+ ...
+}
+```
+
+#### `@relationship(to: attribute)`
+This defines a relationship where the foreign key is defined in the target table and relates to the primary key of this table. If the foreign key is single-valued, this establishes a one-to-many relationship with the target table. Note that the target table type must be an array element type (like `[Table]`). The foreign key may also be a multi-valued array, in which case this will be a many-to-many relationship.
+For example, we can define a reciprocal relationship from the example above, adding a relationship from brand back to product. Here we continue to use the `brandId` attribute from the `Product` schema, and we define a relationship to the `Product` table through the `products` attribute:
+```graphql
+type Brand @table @export {
+ id: ID @primaryKey
+ name: String
+ products: [Product] @relationship(to: brandId)
+}
+```
+Once this is defined we can use the `products` attribute as a property in our brand instances and allow for querying by `products` and selecting product attributes as returned properties in query results.
+
+Note that schemas can also reference themselves with relationships, allowing records to define relationships such as parent-child relationships between records in the same table.
+
+#### `@sealed`
+
+The `@sealed` directive specifies that no additional properties should be allowed on records besides those specified in the type itself.
+
+### Field Directives
+
+The field directives can be used to provide information about each attribute in a table type definition.
+
+#### `@primaryKey`
+
+The `@primaryKey` directive specifies that an attribute is the primary key for a table. These must be unique and when records are created, this will be auto-generated with a UUID if no primary key is provided.
+
+#### `@indexed`
+
+The `@indexed` directive specifies that an attribute should be indexed. This is necessary if you want to execute queries using this attribute (whether that is through RESTful query parameters, SQL, or NoSQL operations).
+
+#### `@createdTime`
+
+The `@createdTime` directive indicates that this property should be assigned a timestamp of the creation time of the record (in epoch milliseconds).
+
+#### `@updatedTime`
+
+The `@updatedTime` directive indicates that this property should be assigned a timestamp of each updated time of the record (in epoch milliseconds).
+
+### Defined vs Dynamic Schemas
+
+If you do not define a schema for a table and create a table through the operations API (without specifying attributes) or studio, such a table will not have a defined schema and will follow the behavior of a ["dynamic-schema" table](../../technical-details/reference/dynamic-schema). It is generally best-practice to define schemas for your tables to ensure predictable, consistent structures with data integrity.
+
+### Field Types
+
+HarperDB supports the following field types in addition to user defined (object) types:
+
+* `String`: String/text.
+* `Int`: A 32-bit signed integer (from -2147483648 to 2147483647).
+* `Long`: A 54-bit signed integer (from -9007199254740992 to 9007199254740992).
+* `Float`: Any number (any number that can be represented as a [64-bit double precision floating point number](https://en.wikipedia.org/wiki/Double-precision_floating-point_format). Note that all numbers are stored in the most compact representation available).
+* `BigInt`: Any integer (negative or positive) with less than 300 digits. (Note that `BigInt` is a distinct and separate type from standard numbers in JavaScript, so custom code should handle this type appropriately.)
+* `Boolean`: true or false.
+* `ID`: A string (but indicates it is not intended to be human readable).
+* `Any`: Any primitive, object, or array is allowed.
+* `Date`: A Date object.
+* `Bytes`: Binary data (as a Buffer or Uint8Array).
+
+#### Renaming Tables
+
+It is important to note that HarperDB does not currently support renaming tables. If you change the name of a table in your schema definition, this will result in the creation of a new, empty table.
+
+### OpenAPI Specification
+
+_The [OpenAPI Specification](https://spec.openapis.org/oas/v3.1.0) defines a standard, programming language-agnostic interface description for HTTP APIs,
+which allows both humans and computers to discover and understand the capabilities of a service without requiring
+access to source code, additional documentation, or inspection of network traffic._
+
+If a set of endpoints are configured through a HarperDB GraphQL schema, those endpoints can be described by using a default REST endpoint called `GET /openapi`.
+
+_Note: The `/openapi` endpoint should only be used as a starting guide, it may not cover all the elements of an endpoint._
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/developers/applications/example-projects.md b/site/versioned_docs/version-4.3/developers/applications/example-projects.md
new file mode 100644
index 00000000..2eb92ba4
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/applications/example-projects.md
@@ -0,0 +1,37 @@
+---
+title: Example Projects
+---
+
+# Example Projects
+
+**Library of example HarperDB applications and components:**
+
+* [Authorization in HarperDB using Okta Customer Identity Cloud](https://www.harperdb.io/post/authorization-in-harperdb-using-okta-customer-identity-cloud), by Yitaek Hwang
+
+* [How to Speed Up your Applications by Caching at the Edge with HarperDB](https://dev.to/doabledanny/how-to-speed-up-your-applications-by-caching-at-the-edge-with-harperdb-3o2l), by Danny Adams
+
+* [OAuth Authentication in HarperDB using Auth0 & Node.js](https://www.harperdb.io/post/oauth-authentication-in-harperdb-using-auth0-and-node-js), by Lucas Santos
+
+* [How To Create a CRUD API with Next.js & HarperDB Custom Functions](https://www.harperdb.io/post/create-a-crud-api-w-next-js-harperdb), by Colby Fayock
+
+* [Build a Dynamic REST API with Custom Functions](https://harperdb.io/blog/build-a-dynamic-rest-api-with-custom-functions/), by Terra Roush
+
+* [How to use HarperDB Custom Functions to Build your Entire Backend](https://dev.to/andrewbaisden/how-to-use-harperdb-custom-functions-to-build-your-entire-backend-a2m), by Andrew Baisden
+
+* [Using TensorFlowJS & HarperDB Custom Functions for Machine Learning](https://harperdb.io/blog/using-tensorflowjs-harperdb-for-machine-learning/), by Kevin Ashcraft
+
+* [Build & Deploy a Fitness App with Python & HarperDB](https://www.youtube.com/watch?v=KMkmA4i2FQc), by Patrick Löber
+
+* [Create a Discord Slash Bot using HarperDB Custom Functions](https://geekysrm.hashnode.dev/discord-slash-bot-with-harperdb-custom-functions), by Soumya Ranjan Mohanty
+
+* [How I used HarperDB Custom Functions to Build a Web App for my Newsletter](https://blog.hrithwik.me/how-i-used-harperdb-custom-functions-to-build-a-web-app-for-my-newsletter), by Hrithwik Bharadwaj
+
+* [How I used HarperDB Custom Functions and Recharts to create Dashboard](https://blog.greenroots.info/how-to-create-dashboard-with-harperdb-custom-functions-and-recharts), by Tapas Adhikary
+
+* [How To Use HarperDB Custom Functions With Your React App](https://dev.to/tyaga001/how-to-use-harperdb-custom-functions-with-your-react-app-2c43), by Ankur Tyagi
+
+* [Build a Web App Using HarperDB’s Custom Functions](https://www.youtube.com/watch?v=rz6prItVJZU), livestream by Jaxon Repp
+
+* [How to Web Scrape Using Python, Snscrape & Custom Functions](https://hackernoon.com/how-to-web-scrape-using-python-snscrape-and-harperdb), by Davis David
+
+* [What’s the Big Deal w/ Custom Functions](https://rss.com/podcasts/harperdb-select-star/278933/), Select* Podcast
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/developers/applications/index.md b/site/versioned_docs/version-4.3/developers/applications/index.md
new file mode 100644
index 00000000..7fee6bd8
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/applications/index.md
@@ -0,0 +1,376 @@
+---
+title: Applications
+---
+
+# Applications
+
+## Overview of HarperDB Applications
+
+HarperDB is more than a database, it's a distributed clustering platform allowing you to package your schema, endpoints and application logic and deploy them to an entire fleet of HarperDB instances optimized for on-the-edge scalable data delivery.
+
+In this guide, we are going to explore the ever-more extensible architecture that HarperDB provides by building a HarperDB component, a fundamental building block of the HarperDB ecosystem.
+
+When working through this guide, we recommend you use the [HarperDB Application Template](https://github.com/HarperDB/application-template) repo as a reference.
+
+## Understanding the Component Application Architecture
+
+HarperDB provides several types of components. Any package that is added to HarperDB is called a "component". Components are generally categorized as either "applications", which deliver a set of endpoints to users, or "extensions", which are building blocks for features like authentication, additional protocols, and connectors that can be used by other components. Components can be added to the `hdb/components` directory and will be loaded by HarperDB when it starts. Components that are remotely deployed to HarperDB (through the studio or the operations API) are installed into the `hdb/node_modules` directory. Using `harperdb run .` or `harperdb dev .` lets us load a specific application in addition to any that have been manually added to `hdb/components` or installed (in `node_modules`).
+
+```mermaid
+flowchart LR
+ Client(Client)-->Endpoints
+ Client(Client)-->HTTP
+ Client(Client)-->Extensions
+ subgraph HarperDB
+ direction TB
+ Applications(Applications)-- "Schemas" --> Tables[(Tables)]
+ Applications-->Endpoints[/Custom Endpoints/]
+ Applications-->Extensions
+ Endpoints-->Tables
+ HTTP[/REST/HTTP/]-->Tables
+ Extensions[/Extensions/]-->Tables
+ end
+```
+
+## Getting up and Running
+
+### Pre-Requisites
+
+We assume you are running HarperDB version 4.2 or greater, which supports the HarperDB application architecture (in previous versions, this functionality was called 'custom functions').
+
+### Scaffolding our Application Directory
+
+Let's create and initialize a new directory for our application. It is recommended that you start by using the [HarperDB application template](https://github.com/HarperDB/application-template). Assuming you have `git` installed, you can create your project directory by cloning:
+
+```shell
+> git clone https://github.com/HarperDB/application-template my-app
+> cd my-app
+```
+
+
+
+You can also start with an empty application directory if you'd prefer.
+
+To create your own application from scratch, you may want to initialize it as an npm package with the `type` field set to `module` in the `package.json` so that you can use the ECMAScript module syntax used in this tutorial:
+
+```shell
+> mkdir my-app
+> cd my-app
+> npm init -y esnext
+```
+
+
+
+
+
+If you want to version control your application code, you can adjust the remote URL to your repository.
+
+Here's an example for a github repo:
+
+```shell
+> git remote set-url origin git@github.com:<your-account>/<your-repo>.git
+```
+
+Locally developing your application and then committing it to source control is a great way to manage your code and configuration; you can then [deploy directly from your repository](./#deploying-your-application).
+
+
+
+## Creating our first Table
+
+The core of a HarperDB application is the database, so let's create a database table!
+
+A quick and expressive way to define a table is through a [GraphQL Schema](https://graphql.org/learn/schema). Using your editor of choice, edit the file named `schema.graphql` in the root of the application directory, `my-app`, that we created above. To create a table, we will add a type named `Dog` with the `@table` directive (and you can remove the example table in the template):
+
+```graphql
+type Dog @table {
+ # properties will go here soon
+}
+```
+
+And then we'll add a primary key named `id` of type `ID`:
+
+_(Note: A GraphQL schema is a fast method to define tables in HarperDB, but you are by no means required to use GraphQL to query your application, nor should you necessarily do so)_
+
+```graphql
+type Dog @table {
+ id: ID @primaryKey
+}
+```
+
+Now we tell HarperDB to run this as an application:
+
+```shell
+> harperdb dev . # tell HarperDB cli to run current directory as an application in dev mode
+```
+
+HarperDB will now create the `Dog` table and the `id` attribute we just defined. Not only is this an easy way to create a table, but the schema is included in our application, which ensures this table exists wherever we deploy the application (to any HarperDB instance).
+
+## Adding Attributes to our Table
+
+Next, let's expand our `Dog` table by adding additional typed attributes for dog `name`, `breed` and `age`.
+
+```graphql
+type Dog @table {
+ id: ID @primaryKey
+ name: String
+ breed: String
+ age: Int
+}
+```
+
+This will ensure that new records must have these properties with these types.
+
+Because we ran `harperdb dev .` earlier (dev mode), HarperDB is now monitoring the contents of our application directory for changes and reloading when they occur. This means that once we save our schema file with these new attributes, HarperDB will automatically reload our application, read `my-app/schema.graphql`, and update the `Dog` table and attributes we just defined. Dev mode also ensures that any logging or errors are immediately displayed in the console (rather than only in the log file).
+
+As a NoSQL database, HarperDB supports heterogeneous records (also referred to as documents), so you can freely specify additional properties on any record. If you do want to restrict the records to only defined properties, you can always do that by adding the `sealed` directive:
+
+```graphql
+type Dog @table @sealed {
+ id: ID @primaryKey
+ name: String
+ breed: String
+ age: Int
+ tricks: [String]
+}
+```
+
+If you are using HarperDB Studio, we can now [add JSON-formatted records](../../administration/harperdb-studio/manage-databases-browse-data#add-a-record) to this new table in the studio or upload data as [CSV from a local file or URL](../../administration/harperdb-studio/manage-databases-browse-data#load-csv-data). A third, more advanced, way to add data to your database is to use the [operations API](../operations-api/), which provides full administrative control over your new HarperDB instance and tables.
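+
+For reference, here is a minimal sketch of adding a record through the operations API with an `insert` operation (this assumes the table lives in the default `data` database and that the operations API is running on its default port):
+
+```json
+{
+  "operation": "insert",
+  "database": "data",
+  "table": "Dog",
+  "records": [
+    { "name": "Harper", "breed": "Labrador", "age": 3 }
+  ]
+}
+```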
+
+## Adding an Endpoint
+
+Now that we have a running application with a database (with data if you imported any data), let's make this data accessible from a RESTful URL by adding an endpoint. To do this, we simply add the `@export` directive to our `Dog` table:
+
+```graphql
+type Dog @table @export {
+ id: ID @primaryKey
+ name: String
+ breed: String
+ age: Int
+ tricks: [String]
+}
+```
+
+By default the application HTTP server port is `9926` (this can be [configured here](../../deployments/configuration#http)), so the local URL would be [http://localhost:9926/Dog/](http://localhost:9926/Dog/) with a full REST API. We can PUT or POST data into this table using this new path, and then GET or DELETE from it as well (you can even view data directly from the browser). If you have not added any records yet, we could use a PUT or POST to add a record. PUT is appropriate if you know the id, and POST can be used to assign an id:
+
+```http
+POST /Dog/
+Content-Type: application/json
+
+{
+ "name": "Harper",
+ "breed": "Labrador",
+ "age": 3,
+ "tricks": ["sits"]
+}
+```
+
+With this, a record will be created and the auto-assigned id will be available through the `Location` header. If you added a record, you can visit the path `/Dog/` to view that record. Alternately, the curl command `curl http://localhost:9926/Dog/` will achieve the same thing.
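+
+For instance, the same POST request can be made from the command line with curl; a minimal sketch (this assumes you are on the same machine as HarperDB, so the loopback authorization described in the next section applies; otherwise supply credentials):
+
+```shell
+curl -X POST http://localhost:9926/Dog/ \
+  -H "Content-Type: application/json" \
+  -d '{"name": "Harper", "breed": "Labrador", "age": 3, "tricks": ["sits"]}'
+```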
+
+## Authenticating Endpoints
+
+These endpoints automatically support `Basic`, `Cookie`, and `JWT` authentication methods. See the documentation on [security](../security/) for more information on different levels of access.
+
+By default, HarperDB also automatically authorizes all requests from loopback IP addresses (from the same computer) as the superuser, to make local development simple. If you want to test authentication/authorization, or enforce stricter security, you may want to disable the [`authentication.authorizeLocal` setting](../../deployments/configuration#authentication).
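+
+For example, a minimal sketch of disabling that behavior in `harperdb-config.yaml` (HarperDB must be restarted after changing configuration):
+
+```yaml
+authentication:
+  authorizeLocal: false
+```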
+
+### Content Negotiation
+
+These endpoints support various content types, including `JSON`, `CBOR`, `MessagePack` and `CSV`. Simply include an `Accept` header in your requests with the preferred content type. We recommend `CBOR` as a compact, efficient encoding with rich data types, but `JSON` is familiar and great for web application development, and `CSV` can be useful for exporting data to spreadsheets or other processing.
+
+HarperDB works with other important standard HTTP headers as well, and these endpoints are even capable of caching interaction:
+
+```
+Authorization: Basic
+Accept: application/cbor
+If-None-Match: "etag-id" # browsers can automatically provide this
+```
+
+## Querying
+
+Querying your application database is straightforward and easy, as tables exported with the `@export` directive are automatically exposed via [REST endpoints](../rest). Simple queries can be crafted through [URL query parameters](https://en.wikipedia.org/wiki/Query_string).
+
+In order to maintain reasonable query speed on a database as it grows in size, it is critical to select and establish the proper indexes. So, before we add the `@export` declaration to our `Dog` table and begin querying it, let's take a moment to target some table properties for indexing. We'll use `name` and `breed` as indexed table properties on our `Dog` table. All we need to do to accomplish this is tag these properties with the `@indexed` directive:
+
+```graphql
+type Dog @table {
+ id: ID @primaryKey
+ name: String @indexed
+ breed: String @indexed
+ owner: String
+ age: Int
+ tricks: [String]
+}
+```
+
+And finally, we'll add the `@export` directive to expose the table as a RESTful endpoint
+
+```graphql
+type Dog @table @export {
+ id: ID @primaryKey
+ name: String @indexed
+ breed: String @indexed
+ owner: String
+ age: Int
+ tricks: [String]
+}
+```
+
+Now we can start querying. Again, we simply access the endpoint with query parameters (basic GET requests), like:
+
+```
+http://localhost:9926/Dog/?name=Harper
+http://localhost:9926/Dog/?breed=Labrador
+http://localhost:9926/Dog/?breed=Husky&name=Balto&select=id,name,breed
+```
+
+Congratulations, you have now created a secure database application backend with a table, a well-defined structure, access controls, and a functional REST endpoint with query capabilities! See the [REST documentation](../rest) for more information on HTTP access and the [schema reference](./defining-schemas) for more options for defining schemas.
+
+## Deploying your Application
+
+This guide assumes that you're building a HarperDB application locally. If you have a cloud instance available, you can deploy your application to it by doing the following:
+
+* Commit and push your application component directory (i.e., the `my-app` directory) to a GitHub repo. In this tutorial we started with a clone of the application template; to commit and push to your own repository, change the origin to your repo: `git remote set-url origin git@github.com:your-account/your-repo.git`
+* Go to the applications section of your target cloud instance in the [HarperDB Studio](../../administration/harperdb-studio/manage-applications).
+* In the left-hand menu of the applications IDE, click 'deploy' and specify a package location reference that follows the [npm package specification](https://docs.npmjs.com/cli/v8/using-npm/package-spec) (i.e., a string like `HarperDB/Application-Template` or a URL like `https://github.com/HarperDB/application-template`, for example, that npm knows how to install).
+
+You can also deploy your application from your repository by directly using the [`deploy_component` operation](../operations-api/components#deploy-component).
+
+Once you have deployed your application to a HarperDB cloud instance, you can start scaling your application by adding additional instances in other regions.
+
+With the help of a global traffic manager/load balancer configured, you can distribute incoming requests to the appropriate server. You can deploy and re-deploy your application to all the nodes in your mesh.
+
+Now, with an application that you can deploy, update, and re-deploy, you have an application that is horizontally and globally scalable!
+
+## Custom Functionality with JavaScript
+
+So far we have built an application entirely through schema configuration. However, if your application requires more custom functionality, you will probably want to employ your own JavaScript modules to implement more specific features and interactions. This gives you tremendous flexibility and control over how data is accessed and modified in HarperDB. Let's take a look at how we can use JavaScript to extend and define "resources" for custom functionality: we'll add a property to dog records, when they are returned, that includes their age in human years. In HarperDB, data is accessed through our [Resource API](../../technical-details/reference/resource), a standard interface to access data sources and tables and make them available to endpoints. Database tables are `Resource` classes, so extending the function of a table is as simple as extending its class.
+
+To define custom (JavaScript) resources as endpoints, we need to create a `resources.js` module (this goes in the root of your application folder). Endpoints can then be defined with Resource classes that are `export`ed. This can be done in addition to, or in lieu of, the `@export`ed types in the schema.graphql. If you are exporting and extending a table you defined in the schema, make sure you remove the `@export` from the schema so that you don't export the original table or resource to the same endpoint/path you are exporting with a class. Resource classes have methods that correspond to standard HTTP/REST methods, like `get`, `post`, `patch`, and `put`, to implement specific handling for any of these methods (for tables they all have default implementations). To do this, we get the `Dog` class from the defined tables, extend it, and export it:
+
+```javascript
+// resources.js:
+const { Dog } = tables; // get the Dog table from the HarperDB-provided set of tables (in the default database)
+
+export class DogWithHumanAge extends Dog {
+	get(query) {
+		this.humanAge = 15 + this.age * 5; // silly calculation of human age equivalent
+		return super.get(query);
+	}
+}
+```
+
+Here we exported the `DogWithHumanAge` class (exported with the same name), which directly maps to the endpoint path. Therefore, we now have a `/DogWithHumanAge/` endpoint based on this class, just like the direct table interface that was exported as `/Dog/`, but the new endpoint will return objects with the computed `humanAge` property. Resource classes provide getters/setters for every defined attribute, so accessing instance properties like `age` will get the value from the underlying record. The instance holds information about the primary key of the record so updates and actions can be applied to the correct record, and changed or newly assigned properties can be saved or included in the resource as it is returned and serialized. The `return super.get(query)` call at the end allows any query parameters to be applied to the resource, such as selecting individual properties (with a [`select` query parameter](../rest#select-properties)).
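+
+For example (the id here is purely illustrative), fetching a record from the new endpoint:
+
+```
+http://localhost:9926/DogWithHumanAge/3f8a
+```
+
+The returned object will include the computed `humanAge` property alongside the stored attributes.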
+
+Often we may want to incorporate data from other tables or data sources into our data models. Next, let's say that we want a `Breed` table that holds detailed information about each breed, and we want to add that information to the returned dog object. We might define the `Breed` table like this (back in `schema.graphql`):
+
+```graphql
+type Breed @table {
+ name: String @primaryKey
+ description: String @indexed
+ lifespan: Int
+ averageWeight: Float
+}
+```
+
+Next we will use this table in our `get()` method. We will call the new table's (static) `get()` method to retrieve a breed by id. To do this correctly, we access the table using our current context by passing in `this` as the second argument. This is important because it ensures that we are accessing the data atomically, in a consistent snapshot across tables. It also provides automatic tracking of most-recently-updated timestamps across resources for caching purposes, allows contextual metadata (like the user who requested the data) to be shared, and ensures transactional atomicity for any writes (not needed in this get operation, but important for other operations). The resource methods are automatically wrapped with a transaction (which commits/finishes when the method completes), and this allows us to fully utilize multiple resources in our current transaction. With our own snapshot of the database for the Dog and Breed tables, we can then access data like this:
+
+```javascript
+// resources.js:
+const { Dog, Breed } = tables; // get the Breed table too
+export class DogWithBreed extends Dog {
+	async get(query) {
+		let breedDescription = await Breed.get(this.breed, this);
+		this.breedDescription = breedDescription;
+		return super.get(query);
+	}
+}
+```
+
+The call to `Breed.get` will return an instance of the `Breed` resource class, which holds the record specified by the provided id/primary key. Like the `Dog` instance, we can access or change properties on the `Breed` instance.
+
+Here we have focused on customizing how we retrieve data, but we may also want to define custom actions for writing data. While the HTTP PUT method has a specific semantic definition (replace the current record), a common way to expose custom actions is through the HTTP POST method; POST has much more open-ended semantics and is a good choice for custom actions. POST requests are handled by our Resource's `post()` method. Let's say that we want to define a POST handler that adds a new trick to the `tricks` array of a specific instance. We might do it like this, specifying an `action` property so we can differentiate actions:
+
+```javascript
+export class CustomDog extends Dog {
+	async post(data) {
+		if (data.action === 'add-trick')
+			this.tricks.push(data.trick);
+	}
+}
+```
+
+And a POST request to `/CustomDog/` would call this `post` method. The Resource class automatically tracks changes you make to your resource instances and saves those changes when the transaction is committed (again, these methods are automatically wrapped in a transaction and committed once the request handler is finished). So when you push data on to the `tricks` array, the change will be recorded and persisted when this method finishes and before a response is sent to the client.
+
+The `post` method automatically marks the current instance as being updated. However, you can also explicitly specify that you are changing a resource by calling the `update()` method. If you want to modify a resource instance that you retrieved through a `get()` call (like the `Breed.get()` call above), you can call its `update()` method to ensure changes are saved (and will be committed in the current transaction).
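+
+As a sketch of what that might look like (the `popularity` attribute on `Breed` is purely hypothetical and only for illustration, and this assumes `Dog` and `Breed` are both pulled from `tables` as above):
+
+```javascript
+export class CustomDog extends Dog {
+	async post(data) {
+		if (data.action === 'add-trick') {
+			this.tricks.push(data.trick); // this instance is automatically marked as updated
+			const breed = await Breed.get(this.breed, this); // load the related record in the same transaction
+			breed.popularity = (breed.popularity || 0) + 1; // hypothetical attribute, for illustration only
+			breed.update(); // explicitly ensure the change to the Breed record is saved when the transaction commits
+		}
+	}
+}
+```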
+
+We can also define custom authorization capabilities. For example, we might want to specify that only the owner of a dog can make updates to that dog. We could add logic to our `post` or `put` methods to do this, but we may want to separate the logic so these methods can be called separately without authorization checks. The [Resource API](../../technical-details/reference/resource) defines `allowRead`, `allowUpdate`, `allowCreate`, and `allowDelete` methods to easily configure individual capabilities. For example, we might do this:
+
+```javascript
+export class CustomDog extends Dog {
+	allowUpdate(user) {
+		return this.owner === user.username;
+	}
+}
+```
+
+Any methods that are not defined will fall back to HarperDB's default authorization procedure based on users' roles. If you are using/extending a table, this is based on HarperDB's [role based access](../security/users-and-roles). If you are extending the base `Resource` class, the default access requires super user permission.
+
+You can also use the `default` export to define the root path resource handler. For example:
+
+```javascript
+// resources.js
+export default class CustomDog extends Dog {
+	...
+```
+
+This will allow requests to a URL like `/` to be directly resolved to this resource.
+
+## Define Custom Data Sources
+
+We can also directly implement the Resource class and use it to create new data sources from scratch that can be used as endpoints. Custom resources can also be used as caching sources. Let's say that we defined a `Breed` table that was a cache of information about breeds from another source. We could implement a caching table like:
+
+```javascript
+const { Breed } = tables; // our Breed table
+class BreedSource extends Resource { // define a data source
+	async get() {
+		return (await fetch(`http://best-dog-site.com/${this.getId()}`)).json();
+	}
+}
+// define that our breed table is a cache of data from the data source above, with a specified expiration
+Breed.sourcedFrom(BreedSource, { expiration: 3600 });
+```
+
+The [caching documentation](./caching) provides much more information on how to use HarperDB's powerful caching capabilities and set up data sources.
+
+HarperDB provides a powerful JavaScript API with significant capabilities that go well beyond a "getting started" guide. See our documentation for more information on using the [`globals`](../../technical-details/reference/globals) and the [Resource interface](../../technical-details/reference/resource).
+
+## Configuring Applications/Components
+
+Every application or component can define its own configuration in a `config.yaml`. If you are using the application template, you will have a [default configuration in this config file](https://github.com/HarperDB/application-template/blob/main/config.yaml) (this is also the default configuration if no config file is provided). Within the config file, you can configure how different files and resources are loaded and handled. The default configuration file itself is documented with directions. Each entry can specify any `files` that the loader will handle, and can also optionally specify what, if any, URL `path`s it will handle. A path of `/` means that the root URLs are handled by the loader, and a path of `.` indicates that URLs that start with this application's name are handled.
+
+This config file also allows you to define a location for static files (which are delivered as-is for incoming HTTP requests).
+
+Each configuration entry can have the following properties, in addition to properties that may be specific to the individual component (a hypothetical example follows the list):
+
+* `files`: This specifies the set of files that should be handled by the component. This is a glob pattern, so a set of files can be specified like `directory/**`.
+* `path`: This is the URL path that is handled by this component.
+* `root`: This specifies the root directory for mapping file paths to the URLs. For example, if you want all the files in `web/**` to be available in the root URL path via the static handler, you could specify a root of `web`, to indicate that the web directory maps to the root URL path.
+* `package`: This is used to specify that this component is a third party package, and can be loaded from the specified package reference (which can be an NPM package, Github reference, URL, etc.).
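+
+As a purely illustrative sketch of the format (the entry name `static` and the `web` directory are assumptions here; see the template's `config.yaml` for the actual entry names it uses), an entry that serves a directory of static files at the root URL path might look like:
+
+```yaml
+static:           # entry name shown for illustration only
+  files: web/**   # glob of files this entry handles
+  root: web       # strip the web/ prefix when mapping files to URLs
+  path: /         # serve these files from the root URL path
+```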
+
+## Define Fastify Routes
+
+Exporting resources will generate full RESTful endpoints, but you may prefer to define endpoints through a framework. HarperDB includes a resource plugin for defining routes with the Fastify web framework. Fastify is a full-featured framework with many plugins that provides sophisticated route definition capabilities.
+
+By default, applications are configured to load any modules in the `routes` directory (matching `routes/*.js`) with Fastify's autoloader, which allows these modules to export a function that defines Fastify routes. See the [defining routes documentation](./define-routes) for more information on how to create Fastify routes.
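+
+For example, a minimal sketch of such a route module (the route path and response are illustrative):
+
+```javascript
+// routes/example.js - loaded by Fastify's autoloader
+export default async function (fastify, opts) {
+	fastify.get('/greeting', async (request, reply) => {
+		return { message: 'Hello from a Fastify route' };
+	});
+}
+```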
+
+However, Fastify is not as fast as HarperDB's RESTful endpoints (it adds roughly 10%-20% more overhead), nor does it automate the generation of a full uniform interface with correct RESTful header interactions (for caching control), so HarperDB's REST interface is generally recommended for optimum performance and ease of use.
+
+## Restarting Your Instance
+
+Generally, HarperDB will auto-detect when files change and auto-restart the appropriate threads. However, if there are changes that aren't detected, you may manually restart with the `restart_service` operation:
+
+```json
+{
+ "operation": "restart_service",
+ "service": "http_workers"
+}
+```
diff --git a/site/versioned_docs/version-4.3/developers/clustering/certificate-management.md b/site/versioned_docs/version-4.3/developers/clustering/certificate-management.md
new file mode 100644
index 00000000..58243cb7
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/clustering/certificate-management.md
@@ -0,0 +1,70 @@
+---
+title: Certificate Management
+---
+
+# Certificate Management
+
+## Development
+
+Out of the box, HarperDB generates certificates that are used when HarperDB nodes are clustered together to securely share data between nodes. These certificates are meant for testing and development purposes. Because these certificates do not have Common Names (CNs) that will match the Fully Qualified Domain Name (FQDN) of the HarperDB node, the following settings (see the full [configuration file](../../deployments/configuration) docs for more details) are defaulted and recommended for ease of development:
+
+```yaml
+clustering:
+ tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+ insecure: true
+ verify: true
+```
+
+The certificates that HarperDB generates are stored in the `keys` directory under your HarperDB root path (for example, `~/hdb/keys/`, as shown above).
+
+`insecure` is set to `true` to accept the certificate CN mismatch due to development certificates.
+
+`verify` is set to `true` to enable mutual TLS between the nodes.
+
+## Production
+
+In a production environment, we recommend using your own certificate authority (CA), or a public CA such as Let's Encrypt, to generate certificates for your HarperDB cluster. This will let you generate certificates with CNs that match the FQDNs of your nodes.
+
+Once you generate new certificates, to make HarperDB start using them you can either replace the generated files with your own, or update the configuration to point to your new certificates, and then restart HarperDB.
+
+Since these new certificates can be issued with correct CNs, you should set `insecure` to `false` so that nodes will do full validation of the certificates of the other nodes.
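+
+For example, pointing HarperDB at your own certificates might look like the following sketch (the file paths are illustrative; the keys match the development configuration shown above):
+
+```yaml
+clustering:
+  tls:
+    certificate: /etc/ssl/harperdb/node1.example.com.pem
+    certificateAuthority: /etc/ssl/harperdb/ca-chain.pem
+    privateKey: /etc/ssl/harperdb/node1.example.com.key.pem
+    insecure: false
+    verify: true
+```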
+
+### Certificate Requirements
+
+* Certificates must have an `Extended Key Usage` that defines both `TLS Web Server Authentication` and `TLS Web Client Authentication` as these certificates will be used to accept connections from other HarperDB nodes and to make requests to other HarperDB nodes. Example:
+
+```
+X509v3 Key Usage: critical
+ Digital Signature, Key Encipherment
+X509v3 Extended Key Usage:
+ TLS Web Server Authentication, TLS Web Client Authentication
+```
+
+* If you are using an intermediate CA to issue the certificates, the entire certificate chain (to the root CA) must be included in the `certificateAuthority` file.
+* If your certificates expire you will need a way to issue new certificates to the nodes and then restart HarperDB. If you are using a public CA such as Let's Encrypt, a tool like `certbot` can be used to renew certificates (see the sketch below).
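+
+For example, if `certbot` manages your certificates, a renewal can be as simple as running the following and then restarting HarperDB (exact paths and renewal hooks will depend on your setup):
+
+```
+certbot renew
+```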
+
+### Certificate Troubleshooting
+
+If you are having TLS issues with clustering, use the following steps to verify that your certificates are valid.
+
+1. Make sure certificates can be parsed and that you can view the contents:
+
+```
+openssl x509 -in <certificate>.pem -noout -text
+```
+
+1. Make sure the certificate validates with the CA:
+
+```
+openssl verify -CAfile <certificateAuthority>.pem <certificate>.pem
+```
+
+1. Make sure the certificate and private key are a valid pair by verifying that the output of the following commands match:
+
+```
+openssl rsa -modulus -noout -in <privateKey>.pem | openssl md5
+openssl x509 -modulus -noout -in <certificate>.pem | openssl md5
+```
diff --git a/site/versioned_docs/version-4.3/developers/clustering/creating-a-cluster-user.md b/site/versioned_docs/version-4.3/developers/clustering/creating-a-cluster-user.md
new file mode 100644
index 00000000..3edecd29
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/clustering/creating-a-cluster-user.md
@@ -0,0 +1,59 @@
+---
+title: Creating a Cluster User
+---
+
+# Creating a Cluster User
+
+Inter-node authentication takes place via HarperDB users. There is a special role type called `cluster_user` that exists by default and limits the user to only clustering functionality.
+
+A `cluster_user` must be created and added to the `harperdb-config.yaml` file for clustering to be enabled.
+
+All nodes that are intended to be clustered together need to share the same `cluster_user` credentials (i.e. username and password).
+
+There are multiple ways a `cluster_user` can be created:
+
+1. Through the operations API by calling `add_user`
+
+```json
+{
+ "operation": "add_user",
+ "role": "cluster_user",
+ "username": "cluster_account",
+ "password": "letsCluster123!",
+ "active": true
+}
+```
+
+When using the API to create a cluster user the `harperdb-config.yaml` file must be updated with the username of the new cluster user.
+
+This can be done through the API by calling `set_configuration` or by editing the `harperdb-config.yaml` file.
+
+```json
+{
+ "operation": "set_configuration",
+ "clustering_user": "cluster_account"
+}
+```
+
+In the `harperdb-config.yaml` file under the top-level `clustering` element there will be a user element. Set this to the name of the cluster user.
+
+```yaml
+clustering:
+ user: cluster_account
+```
+
+_Note: When making any changes to the `harperdb-config.yaml` file, HarperDB must be restarted for the changes to take effect._
+
+1. Upon installation using **command line variables**. This will automatically set the user in the `harperdb-config.yaml` file.
+
+_Note: Using command line or environment variables for setting the cluster user only works on install._
+
+```
+harperdb install --CLUSTERING_USER cluster_account --CLUSTERING_PASSWORD letsCluster123!
+```
+
+1. Upon installation using **environment variables**. This will automatically set the user in the `harperdb-config.yaml` file.
+
+```
+CLUSTERING_USER=cluster_account CLUSTERING_PASSWORD=letsCluster123
+```
diff --git a/site/versioned_docs/version-4.3/developers/clustering/enabling-clustering.md b/site/versioned_docs/version-4.3/developers/clustering/enabling-clustering.md
new file mode 100644
index 00000000..6b563b19
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/clustering/enabling-clustering.md
@@ -0,0 +1,49 @@
+---
+title: Enabling Clustering
+---
+
+# Enabling Clustering
+
+Clustering does not run by default; it needs to be enabled.
+
+To enable clustering the `clustering.enabled` configuration element in the `harperdb-config.yaml` file must be set to `true`.
+
+There are multiple ways to update this element:
+
+1. Directly editing the `harperdb-config.yaml` file and setting enabled to `true`
+
+```yaml
+clustering:
+ enabled: true
+```
+
+_Note: When making any changes to the `harperdb-config.yaml` file HarperDB must be restarted for the changes to take effect._
+
+1. Calling `set_configuration` through the operations API
+
+```json
+{
+ "operation": "set_configuration",
+ "clustering_enabled": true
+}
+```
+
+_Note: When making any changes to HarperDB configuration HarperDB must be restarted for the changes to take effect._
+
+1. Using **command line variables**.
+
+```
+harperdb --CLUSTERING_ENABLED true
+```
+
+1. Using **environment variables**.
+
+```
+CLUSTERING_ENABLED=true
+```
+
+An efficient way to **install HarperDB**, **create the cluster user**, **set the node name** and **enable clustering** in one operation is to combine the steps using command line and/or environment variables. Here is an example using command line variables.
+
+```
+harperdb install --CLUSTERING_ENABLED true --CLUSTERING_NODENAME Node1 --CLUSTERING_USER cluster_account --CLUSTERING_PASSWORD letsCluster123!
+```
diff --git a/site/versioned_docs/version-4.3/developers/clustering/establishing-routes.md b/site/versioned_docs/version-4.3/developers/clustering/establishing-routes.md
new file mode 100644
index 00000000..b733aaed
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/clustering/establishing-routes.md
@@ -0,0 +1,73 @@
+---
+title: Establishing Routes
+---
+
+# Establishing Routes
+
+A route is a connection between two nodes. It is how the clustering network is established.
+
+Routes do not need to cross-connect all nodes in the cluster. You can select a leader node (or a few leaders) and have all other nodes connect to them, you can chain nodes together, and so on. As long as there is one route connecting a node to the cluster, all other nodes should be able to reach that node.
+
+Using routes, the clustering servers will create a mesh network between nodes. This mesh network ensures that if a node drops out, all other nodes can still communicate with each other. That said, we recommend designing your routing with failover in mind; this means not storing all your routes on one node but dispersing them throughout the network.
+
+A simple route example is a two-node topology: if Node1 adds a route to connect it to Node2, Node2 does not need to add a route to Node1. That one route configuration is all that’s needed to establish a bidirectional connection between the nodes.
+
+A route consists of a `port` and a `host`.
+
+`port` - the clustering port of the remote instance you are creating the connection with. This is going to be the `clustering.hubServer.cluster.network.port` in the HarperDB configuration on the node you are connecting with.
+
+`host` - the host of the remote instance you are creating the connection with. This can be an IP address or a URL.
+
+Routes are set in the `harperdb-config.yaml` file using the `clustering.hubServer.cluster.network.routes` element, which expects an object array, where each object has two properties, `port` and `host`.
+
+```yaml
+clustering:
+ hubServer:
+ cluster:
+ network:
+ routes:
+ - host: 3.62.184.22
+ port: 9932
+ - host: 3.735.184.8
+ port: 9932
+```
+
+
+
+This diagram shows one way of using routes to connect a network of nodes. Node2 and Node3 do not reference any routes in their config. Node1 contains routes for Node2 and Node3, which is enough to establish a network between all three nodes.
+
+There are multiple ways to set routes:
+
+1. Directly editing the `harperdb-config.yaml` file (refer to code snippet above).
+1. Calling `cluster_set_routes` through the API.
+
+```json
+{
+ "operation": "cluster_set_routes",
+ "server": "hub",
+ "routes":[ {"host": "3.735.184.8", "port": 9932} ]
+}
+```
+
+_Note: When making any changes to HarperDB configuration HarperDB must be restarted for the changes to take effect._
+
+1. From the command line.
+
+```bash
+--CLUSTERING_HUBSERVER_CLUSTER_NETWORK_ROUTES "[{\"host\": \"3.735.184.8\", \"port\": 9932}]"
+```
+
+1. Using environment variables.
+
+```bash
+CLUSTERING_HUBSERVER_CLUSTER_NETWORK_ROUTES=[{"host": "3.735.184.8", "port": 9932}]
+```
+
+The API also has `cluster_get_routes` for getting all routes in the config and `cluster_delete_routes` for deleting routes.
+
+```json
+{
+ "operation": "cluster_delete_routes",
+ "routes":[ {"host": "3.735.184.8", "port": 9932} ]
+}
+```
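+
+Retrieving all configured routes requires only the operation name:
+
+```json
+{
+  "operation": "cluster_get_routes"
+}
+```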
diff --git a/site/versioned_docs/version-4.3/developers/clustering/index.md b/site/versioned_docs/version-4.3/developers/clustering/index.md
new file mode 100644
index 00000000..f5949afd
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/clustering/index.md
@@ -0,0 +1,31 @@
+---
+title: Clustering
+---
+
+# Clustering
+
+HarperDB clustering is the process of connecting multiple HarperDB databases together to create a database mesh network that enables users to define data replication patterns.
+
+HarperDB’s clustering engine replicates data between instances of HarperDB using a highly performant, bi-directional pub/sub model on a per-table basis. Data replicates asynchronously with eventual consistency across the cluster following the defined pub/sub configuration. Individual transactions are sent in the order in which they were transacted; once received by the destination instance, they are processed in an ACID-compliant manner. Conflict resolution follows a last-writer-wins model based on the transaction time recorded on the transaction and the timestamp on the record on the node.
+
+***
+
+### Common Use Case
+
+A common use case is an edge application collecting and analyzing sensor data that creates an alert if a sensor value exceeds a given threshold:
+
+* The edge application should not be making outbound http requests for security purposes.
+* There may not be a reliable network connection.
+* Not all sensor data will be sent to the cloud, whether because of the unreliable network connection or because storing all of it in the cloud isn’t worthwhile.
+* The edge node should be inaccessible from outside the firewall.
+* The edge node will send alerts to the cloud with a snippet of sensor data containing the offending sensor readings.
+
+HarperDB simplifies the architecture of such an application with its bi-directional, table-level replication:
+
+* The edge instance subscribes to a “thresholds” table on the cloud instance, so the application only makes localhost calls to get the thresholds.
+* The application continually pushes sensor data into a “sensor\_data” table via the localhost API, comparing it to the threshold values as it does so.
+* When a threshold violation occurs, the application adds a record to the “alerts” table.
+* The application appends to an array on that record the “sensor\_data” entries for the 60 seconds (or minutes, or days) leading up to the threshold violation.
+* The edge instance publishes the “alerts” table up to the cloud instance.
+
+By letting HarperDB focus on the fault-tolerant logistics of transporting your data, you get to write less code. By moving data only when and where it’s needed, you lower storage and bandwidth costs. And by restricting your app to only making local calls to HarperDB, you reduce the overall exposure of your application to outside forces.
diff --git a/site/versioned_docs/version-4.3/developers/clustering/managing-subscriptions.md b/site/versioned_docs/version-4.3/developers/clustering/managing-subscriptions.md
new file mode 100644
index 00000000..a8a4b407
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/clustering/managing-subscriptions.md
@@ -0,0 +1,168 @@
+---
+title: Managing subscriptions
+---
+
+# Managing subscriptions
+
+Subscriptions can be added, updated, or removed through the API.
+
+_Note: The databases and tables in the subscription must exist on either the local or the remote node. Any databases or tables that do not exist on a particular node (for example, the local node) will be automatically created on that node._
+
+To add a single node and create one or more subscriptions use `set_node_replication`.
+
+```json
+{
+ "operation": "set_node_replication",
+ "node_name": "Node2",
+ "subscriptions": [
+ {
+ "database": "data",
+ "table": "dog",
+ "publish": false,
+ "subscribe": true
+ },
+ {
+ "database": "data",
+ "table": "chicken",
+ "publish": true,
+ "subscribe": true
+ }
+ ]
+}
+```
+
+This is an example of adding Node2 to your local node. Subscriptions are created for two tables, dog and chicken.
+
+To update one or more subscriptions with a single node you can also use `set_node_replication`; however, this behaves as a PATCH/upsert, where only the subscription(s) being changed are inserted/updated while the others are left untouched.
+
+```json
+{
+ "operation": "set_node_replication",
+ "node_name": "Node2",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "publish": true,
+ "subscribe": true
+ }
+ ]
+}
+```
+
+This call will update the subscription with the dog table. Any other subscriptions with Node2 will not change.
+
+To add or update subscriptions with one or more nodes in one API call use `configure_cluster`.
+
+```json
+{
+ "operation": "configure_cluster",
+ "connections": [
+ {
+ "node_name": "Node2",
+ "subscriptions": [
+ {
+ "database": "dev",
+ "table": "chicken",
+ "publish": false,
+ "subscribe": true
+ },
+ {
+ "database": "prod",
+ "table": "dog",
+ "publish": true,
+ "subscribe": true
+ }
+ ]
+ },
+ {
+ "node_name": "Node3",
+ "subscriptions": [
+ {
+ "database": "dev",
+ "table": "chicken",
+ "publish": true,
+ "subscribe": false
+ }
+ ]
+ }
+ ]
+}
+```
+
+_Note: `configure_cluster` will override **any and all** existing subscriptions defined on the local node. This means that before going through the connections in the request and adding the subscriptions, it will first go through **all existing subscriptions the local node has** and remove them. To get all existing subscriptions use `cluster_status`._
+
+#### Start time
+
+There is an optional property called `start_time` that can be passed in the subscription. This property accepts an ISO formatted UTC date.
+
+`start_time` can be used to set the point in time from which you would like to source transactions from a table when creating or updating a subscription.
+
+```json
+{
+ "operation": "set_node_replication",
+ "node_name": "Node2",
+ "subscriptions": [
+ {
+ "database": "dev",
+ "table": "dog",
+ "publish": false,
+ "subscribe": true,
+ "start_time": "2022-09-02T20:06:35.993Z"
+ }
+ ]
+}
+```
+
+This example will get all transactions on Node2’s dog table starting from `2022-09-02T20:06:35.993Z` and replicate them locally on the dog table.
+
+If no start time is passed it defaults to the current time.
+
+_Note: start time utilizes clustering to source past transactions. For this reason it can only source transactions that occurred while clustering was enabled._
+
+#### Remove node
+
+To remove a node and all its subscriptions use `remove_node`.
+
+```json
+{
+ "operation":"remove_node",
+ "node_name":"Node2"
+}
+```
+
+#### Cluster status
+
+To get the status of all connected nodes and see their subscriptions use `cluster_status`.
+
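+The request body needs only the operation name; the response will resemble the example that follows.
+
+```json
+{
+  "operation": "cluster_status"
+}
+```
+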
+```json
+{
+ "node_name": "Node1",
+ "is_enabled": true,
+ "connections": [
+ {
+ "node_name": "Node2",
+ "status": "open",
+ "ports": {
+ "clustering": 9932,
+ "operations_api": 9925
+ },
+ "latency_ms": 65,
+ "uptime": "11m 19s",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "publish": true,
+ "subscribe": true
+ }
+ ],
+ "system_info": {
+ "hdb_version": "4.0.0",
+ "node_version": "16.17.1",
+ "platform": "linux"
+ }
+ }
+ ]
+}
+```
diff --git a/site/versioned_docs/version-4.3/developers/clustering/naming-a-node.md b/site/versioned_docs/version-4.3/developers/clustering/naming-a-node.md
new file mode 100644
index 00000000..d1ebdfb1
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/clustering/naming-a-node.md
@@ -0,0 +1,45 @@
+---
+title: Naming a Node
+---
+
+# Naming a Node
+
+Node name is the name given to a node. It is how nodes are identified within the cluster and must be unique to the cluster.
+
+The name cannot contain any of the following characters: dot (`.`), comma (`,`), asterisk (`*`), greater-than (`>`), or whitespace.
+
+The name is set in the `harperdb-config.yaml` file using the `clustering.nodeName` configuration element.
+
+_Note: If you want to change the node name make sure there are no subscriptions in place before doing so. After the name has been changed a full restart is required._
+
+There are multiple ways to update this element:
+
+1. Directly editing the `harperdb-config.yaml` file.
+
+```yaml
+clustering:
+ nodeName: Node1
+```
+
+_Note: When making any changes to the `harperdb-config.yaml` file HarperDB must be restarted for the changes to take effect._
+
+1. Calling `set_configuration` through the operations API
+
+```json
+{
+ "operation": "set_configuration",
+ "clustering_nodeName":"Node1"
+}
+```
+
+1. Using command line variables.
+
+```
+harperdb --CLUSTERING_NODENAME Node1
+```
+
+1. Using environment variables.
+
+```
+CLUSTERING_NODENAME=Node1
+```
diff --git a/site/versioned_docs/version-4.3/developers/clustering/requirements-and-definitions.md b/site/versioned_docs/version-4.3/developers/clustering/requirements-and-definitions.md
new file mode 100644
index 00000000..1e2dd6af
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/clustering/requirements-and-definitions.md
@@ -0,0 +1,11 @@
+---
+title: Requirements and Definitions
+---
+
+# Requirements and Definitions
+
+To create a cluster you must have two or more nodes\* (aka instances) of HarperDB running.
+
+\*_A node is a single instance/installation of HarperDB. A node of HarperDB can operate independently with clustering on or off._
+
+On the following pages we'll walk you through the steps required, in order, to set up a HarperDB cluster.
diff --git a/site/versioned_docs/version-4.3/developers/clustering/subscription-overview.md b/site/versioned_docs/version-4.3/developers/clustering/subscription-overview.md
new file mode 100644
index 00000000..6ceac7fe
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/clustering/subscription-overview.md
@@ -0,0 +1,45 @@
+---
+title: Subscription Overview
+---
+
+# Subscription Overview
+
+A subscription defines how data should move between two nodes. Subscriptions are exclusively table level and operate independently. They connect a table on one node to a table on another node; the subscription will apply to the matching database name and table name on both nodes.
+
+_Note: ‘local’ and ‘remote’ will often be referred to. In the context of these docs ‘local’ is the node that is receiving the API request to create/update a subscription and remote is the other node that is referred to in the request, the node on the other end of the subscription._
+
+A subscription consists of:
+
+`database` - the name of the database that the table you are creating the subscription for belongs to. *Note, this was previously referred to as schema and may occasionally still be referenced that way.*
+
+`table` - the name of the table the subscription will apply to.
+
+`publish` - a boolean which determines if transactions on the local table should be replicated on the remote table.
+
+`subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table.
+
+#### Publish subscription
+
+
+
+This diagram is an example of a `publish` subscription from the perspective of Node1.
+
+The record with id 2 has been inserted in the dog table on Node1; after that insert completes, it is sent to Node2 and inserted in the dog table there.
+
+#### Subscribe subscription
+
+
+
+This diagram is an example of a `subscribe` subscription from the perspective of Node1.
+
+The record with id 3 has been inserted in the dog table on Node2; after that insert completes, it is sent to Node1 and inserted there.
+
+#### Subscribe and Publish
+
+
+
+This diagram shows both subscribe and publish, but publish is set to false. You can see that because subscribe is true, the insert on Node2 is replicated on Node1, but because publish is set to false, the insert on Node1 is _**not**_ replicated on Node2.
+
+
+
+This shows both subscribe and publish set to true. The insert on Node1 is replicated on Node2 and the update on Node2 is replicated on Node1.
diff --git a/site/versioned_docs/version-4.3/developers/clustering/things-worth-knowing.md b/site/versioned_docs/version-4.3/developers/clustering/things-worth-knowing.md
new file mode 100644
index 00000000..c57a47ca
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/clustering/things-worth-knowing.md
@@ -0,0 +1,43 @@
+---
+title: Things Worth Knowing
+---
+
+# Things Worth Knowing
+
+Additional information that will help you define your clustering topology.
+
+***
+
+### Transactions
+
+Transactions that are replicated across the cluster are:
+
+* Insert
+* Update
+* Upsert
+* Delete
+* Bulk loads
+ * CSV data load
+ * CSV file load
+ * CSV URL load
+ * Import from S3
+
+When adding or updating a node, any databases and tables in the subscription that don’t exist on the remote node will be automatically created.
+
+**Destructive database operations do not replicate across a cluster**. Those operations include `drop_database`, `drop_table`, and `drop_attribute`. If the desired outcome is to drop database information from any nodes then the operation(s) will need to be run on each node independently.
+
+Users and roles are not replicated across the cluster.
+
+***
+
+### Queueing
+
+HarperDB has built-in resiliency for when network connectivity is lost within a subscription. When connections are reestablished, a catchup routine is executed to ensure data that was missed, specific to the subscription, is sent/received as defined.
+
+***
+
+### Topologies
+
+HarperDB clustering creates a mesh network between nodes, giving end users the ability to create an infinite number of topologies. Subscription topologies can be as simple or as complex as needed.
+
+
diff --git a/site/versioned_docs/version-4.3/developers/components/drivers.md b/site/versioned_docs/version-4.3/developers/components/drivers.md
new file mode 100644
index 00000000..0f1c063e
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/components/drivers.md
@@ -0,0 +1,12 @@
+---
+title: Drivers
+description: >-
+ Industry-standard tools to connect real-time HarperDB data with BI, analytics,
+ reporting and data visualization technologies.
+---
+
+# Drivers
+
+
+
+
diff --git a/site/versioned_docs/version-4.3/developers/components/google-data-studio.md b/site/versioned_docs/version-4.3/developers/components/google-data-studio.md
new file mode 100644
index 00000000..e33fb2bd
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/components/google-data-studio.md
@@ -0,0 +1,37 @@
+---
+title: Google Data Studio
+---
+
+# Google Data Studio
+
+[Google Data Studio](https://datastudio.google.com/) is a free collaborative visualization tool which enables users to build configurable charts and tables quickly. The HarperDB Google Data Studio connector seamlessly integrates your HarperDB data with Google Data Studio so you can build custom, real-time data visualizations.
+
+The HarperDB Google Data Studio Connector is subject to our [Terms of Use](https://harperdb.io/legal/harperdb-cloud-terms-of-service/) and [Privacy Policy](https://harperdb.io/legal/privacy-policy/).
+
+## Requirements
+
+The HarperDB database must be accessible through the Internet in order for Google Data Studio servers to access it. The database may be hosted by you or via [HarperDB Cloud](../../deployments/harperdb-cloud/).
+
+## Get Started
+
+Get started by selecting the HarperDB connector from the [Google Data Studio Partner Connector Gallery](https://datastudio.google.com/u/0/datasources/create).
+
+1. Log in to https://datastudio.google.com/.
+1. Add a new Data Source using the HarperDB connector. The current release version can be added as a data source by following this link: [HarperDB Google Data Studio Connector](https://datastudio.google.com/datasources/create?connectorId=AKfycbxBKgF8FI5R42WVxO-QCOq7dmUys0HJrUJMkBQRoGnCasY60_VJeO3BhHJPvdd20-S76g).
+1. Authorize the connector to access other servers on your behalf (this allows the connector to contact your database).
+1. Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word “Basic” at the start of it.
+1. Check the box for “Secure Connections Only” if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer.
+1. Check the box for “Allow Bad Certs” if your HarperDB instance does not have a valid SSL certificate. [HarperDB Cloud](../../deployments/harperdb-cloud/) always has valid certificates, and so will never require this to be checked. Instances you set up yourself may require this, if you are using self-signed certs. If you are using [HarperDB Cloud](../../deployments/harperdb-cloud/) or another instance you know should always have valid SSL certificates, do not check this box.
+1. Choose your Query Type. This determines what information the configuration will ask for after pressing the Next button.
+ * Table will ask you for a Schema and a Table to return all fields of using `SELECT *`.
+ * SQL will ask you for the SQL query you’re using to retrieve fields from the database. You may `JOIN` multiple tables together and use HarperDB-specific SQL functions, along with the usual power that SQL grants.
+1. When all information is entered correctly, press the Connect button in the top right of the new Data Source view to generate the Schema. You may also want to name the data source at this point. If the connector encounters any errors, a dialog box will tell you what went wrong so you can correct the issue.
+1. If there are no errors, you now have a data source you can use in your reports! You may change the types of the generated fields in the Schema view if you need to (for instance, changing a Number field to a specific currency), as well as creating new fields from the report view that do calculations on other fields.
+
+## Considerations
+
+* Both Postman and the [HarperDB Studio](../../administration/harperdb-studio/) app have ways to convert a user:password pair to a Basic Auth token. Use either to create the token for the connector’s user.
+ * You may sign out of your current user by going to the instances tab in HarperDB Studio, then clicking on the lock icon at the top-right of a given instance’s box. Click the lock again to sign in as any user. The Basic Auth token will be visible in the Authorization header portion of any code created in the Sample Code tab.
+* It’s highly recommended that you create a read-only user role in HarperDB Studio, and create a user with that role for your data sources to use. This prevents that authorization token from being used to alter your database, should someone else ever get ahold of it.
+* The RecordCount field is intended for use as a metric, for counting how many instances of a given set of values appear in a report’s data set.
+* _Do not attempt to create fields with spaces in their names_ for any data sources! Google Data Studio will crash when attempting to retrieve a field with such a name, producing a System Error instead of a useful chart on your reports. Using CamelCase or snake\_case gets around this.
diff --git a/site/versioned_docs/version-4.3/developers/components/index.md b/site/versioned_docs/version-4.3/developers/components/index.md
new file mode 100644
index 00000000..4901c49f
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/components/index.md
@@ -0,0 +1,38 @@
+---
+title: Components
+---
+
+# Components
+
+HarperDB is a highly extensible database application platform with support for a rich variety of composable, modular components that can be used and combined to build applications and add functionality to existing applications. HarperDB tools, components, and add-ons can be found in a few places:
+
+* [SDK libraries](./sdks) are available for connecting to HarperDB from different languages.
+* [Drivers](./drivers) are available for connecting to HarperDB from different products and tools.
+* [HarperDB-Add-Ons repositories](https://github.com/orgs/HarperDB-Add-Ons/repositories) lists various templates and add-ons for HarperDB.
+* [HarperDB repositories](https://github.com/orgs/HarperDB-Add-Ons/repositories) include additional tools for HarperDB.
+* You can also [search github.com for an ever-growing list of projects that use, or work with, HarperDB](https://github.com/search?q=harperdb\&type=repositories)
+* [Google Data Studio](./google-data-studio) is a visualization tool for building charts and tables from HarperDB data.
+
+## Components
+
+There are four general categories of components for HarperDB. The most common is applications. An application is simply a component that delivers complete functionality through an external interface it defines, and is usually composed of other components. See [our guide to building applications](../applications/) to get started.
+
+A data source component can implement the Resource API to customize access to a table or provide access to an external data source. External data source components are used to retrieve and access data from other sources.
+
+The next two are considered extension components. Server protocol extension components provide and define ways for clients to access data and can be used to extend or create new protocols.
+
+Server resource components implement support for different types of files that can be used as resources in applications. HarperDB includes support for using JavaScript modules and GraphQL Schemas as resources, but resource components may add support for different file types like HTML templates (like JSX), CSV data, and more.
+
+## Server components
+
+Server components can easily be added and configured by adding an entry to your `harperdb-config.yaml`:
+
+```yaml
+my-server-component:
+ package: 'HarperDB-Add-Ons/package-name' # this can be any valid github or npm reference
+ port: 4321
+```
+
+## Writing Extension Components
+
+You can write your own extensions to build new functionality on HarperDB. See the [writing extension components documentation](./writing-extensions) for more information.
diff --git a/site/versioned_docs/version-4.3/developers/components/installing.md b/site/versioned_docs/version-4.3/developers/components/installing.md
new file mode 100644
index 00000000..aac137ea
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/components/installing.md
@@ -0,0 +1,79 @@
+---
+title: Installing
+---
+
+# Installing
+
+Components can be easily added by adding a new top level element to your `harperdb-config.yaml` file.
+
+The configuration comprises two values:
+
+* component name - can be anything, as long as it follows valid YAML syntax.
+* package - a reference to your component.
+
+```yaml
+myComponentName:
+ package: HarperDB-Add-Ons/package
+```
+
+Under the hood, HarperDB calls `npm install` on all components. This means that the package value can be any valid npm reference, such as a GitHub repo, an npm package, a tarball, a local directory, or a URL.
+
+```yaml
+myGithubComponent:
+ package: HarperDB-Add-Ons/package#v2.2.0 # install from GitHub
+myNPMComponent:
+ package: harperdb # install from NPM
+myTarBall:
+ package: /Users/harper/cool-component.tar # install from tarball
+myLocal:
+ package: /Users/harper/local # install from local path
+myWebsite:
+ package: https://harperdb-component # install from URL
+```
+
+When HarperDB is run or restarted it checks to see if there are any new or updated components. If there are, it will dynamically create a package.json file in the `rootPath` directory and call `npm install`.
+
+NPM will install all the components in `node_modules`.
+
+The `package.json` file that is created will look something like this:
+
+```json
+{
+ "dependencies": {
+ "myGithubComponent": "github:HarperDB-Add-Ons/package#v2.2.0",
+ "myNPMComponent": "npm:harperdb",
+ "myTarBall": "file:/Users/harper/cool-component.tar",
+ "myLocal": "file:/Users/harper/local",
+ "myWebsite": "https:/harperdb-component"
+ }
+}
+```
+
+The package prefix is added automatically; however, you can set it manually in your package reference.
+
+```yaml
+myCoolComponent:
+ package: file:/Users/harper/cool-component.tar
+```
+
+## Installing components using the operations API
+
+To add a component using the operations API use the `deploy_component` operation.
+
+```json
+{
+ "operation": "deploy_component",
+ "project": "my-cool-component",
+ "package": "HarperDB-Add-Ons/package/mycc"
+}
+```
+
+Another option is to pass `deploy_component` a base64-encoded string representation of your component as a `.tar` file. HarperDB can generate this via the `package_component` operation. When deploying with a payload, your component will be deployed to your `/components` directory. Any components in this directory will be automatically picked up by HarperDB.
+
+```json
+{
+ "operation": "deploy_component",
+ "project": "my-cool-component",
+ "payload": "NzY1IAAwMDAwMjQgADAwMDAwMDAwMDAwIDE0NDIwMDQ3...."
+}
+```
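+
+For reference, the base64 payload above can be generated by first calling the `package_component` operation against an existing project. A minimal sketch of that request is shown below (the project name is illustrative; refer to the operations API docs for the full parameter list):
+
+```json
+{
+ "operation": "package_component",
+ "project": "my-cool-component"
+}
+```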
diff --git a/site/versioned_docs/version-4.3/developers/components/operations.md b/site/versioned_docs/version-4.3/developers/components/operations.md
new file mode 100644
index 00000000..fc5d2bf9
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/components/operations.md
@@ -0,0 +1,37 @@
+---
+title: Operations
+---
+
+# Operations
+
+One way to manage applications and components is through [HarperDB Studio](../../administration/harperdb-studio/). It performs all the necessary operations automatically. To get started, navigate to your instance in HarperDB Studio and click the subnav link for “applications”. Once configuration is complete, you can manage and deploy applications in minutes.
+
+HarperDB Studio manages your applications using nine HarperDB operations. You may view these operations within our [API Docs](../operations-api/). A brief overview of each of the operations is below:
+
+* **components\_status**
+
+ Returns the state of the applications server. This includes whether it is enabled, upon which port it is listening, and where its root project directory is located on the host machine.
+* **get\_components**
+
+ Returns an array of projects within the applications root project directory.
+* **get\_component\_file**
+
+ Returns the content of the specified file as text. HarperDB Studio uses this call to render the file content in its built-in code editor.
+* **set\_component\_file**
+
+ Updates the content of the specified file. HarperDB Studio uses this call to save any changes made through its built-in code editor.
+* **drop\_component\_file**
+
+ Deletes the specified file.
+* **add\_component\_project**
+
+  Creates a new project folder in the applications root project directory. It also inserts into the new directory the contents of our applications project template, which is publicly available here: https://github.com/HarperDB/harperdb-custom-functions-template.
+* **drop\_component\_project**
+
+ Deletes the specified project folder and all of its contents.
+* **package\_component\_project**
+
+ Creates a .tar file of the specified project folder, then reads it into a base64-encoded string and returns that string to the user.
+* **deploy\_component\_project**
+
+  Takes the output of package\_component\_project, decodes the base64-encoded string, reconstitutes the .tar file of your project folder, and extracts it to the applications root project directory.
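+
+Each of these is invoked as a standard operations API request. For example, a minimal `get_components` call (no additional parameters are shown here) might look like this:
+
+```json
+{
+ "operation": "get_components"
+}
+```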
diff --git a/site/versioned_docs/version-4.3/developers/components/sdks.md b/site/versioned_docs/version-4.3/developers/components/sdks.md
new file mode 100644
index 00000000..9064851e
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/components/sdks.md
@@ -0,0 +1,21 @@
+---
+title: SDKs
+description: >-
+ Software Development Kits available for connecting to HarperDB from different
+ languages.
+---
+
+# SDKs
+
+| SDK/Tool | Description | Installation |
+| ------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------- |
+| [HarperDB.NET.Client](https://www.nuget.org/packages/HarperDB.NET.Client) | A Dot Net Core client to execute operations against HarperDB | `dotnet add package HarperDB.NET.Client --version 1.1.0` |
+| [Websocket Client](https://www.npmjs.com/package/harperdb-websocket-client) | A Javascript client for real-time access to HarperDB transactions | `npm i -s harperdb-websocket-client` |
+| [Gatsby HarperDB Source](https://www.npmjs.com/package/gatsby-source-harperdb) | Use HarperDB as the data source for a Gatsby project at the build time | `npm i -s gatsby-source-harperdb` |
+| [HarperDB.EntityFrameworkCore](https://www.nuget.org/packages/HarperDB.EntityFrameworkCore) | The HarperDB EntityFrameworkCore Provider Package for .NET 6.0 | `dotnet add package HarperDB.EntityFrameworkCore --version 1.0.0` |
+| [Python SDK](https://pypi.org/project/harperdb/) | Python3 implementations of HarperDB API functions with wrappers for an object-oriented interface | `pip3 install harperdb` |
+| [HarperDB Flutter SDK](https://github.com/HarperDB/harperdb-sdk-flutter) | A HarperDB SDK for Flutter | `flutter pub add harperdb` |
+| [React Hook](https://www.npmjs.com/package/use-harperdb) | A ReactJS Hook for HarperDB | `npm i -s use-harperdb` |
+| [Node Red Node](https://flows.nodered.org/node/node-red-contrib-harperdb) | Easy drag and drop connections to HarperDB using the Node-Red platform | `npm i -s node-red-contrib-harperdb` |
+| [NodeJS SDK](https://www.npmjs.com/package/harperive) | A HarperDB SDK for NodeJS | `npm i -s harperive` |
+| [HarperDB Cargo Crate](https://crates.io/crates/harperdb) | A HarperDB SDK for Rust | `Cargo.toml > harperdb = '1.0.0'` |
diff --git a/site/versioned_docs/version-4.3/developers/components/writing-extensions.md b/site/versioned_docs/version-4.3/developers/components/writing-extensions.md
new file mode 100644
index 00000000..51ba8de7
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/components/writing-extensions.md
@@ -0,0 +1,171 @@
+---
+title: Writing Extensions
+---
+
+# Writing Extensions
+
+HarperDB is a highly extensible database application platform with support for a rich variety of composable modular components and extensions that can be used and combined to build applications and add functionality to existing applications. Here we describe the different types of components/extensions that can be developed for HarperDB and how to create them.
+
+There are three general categories of components for HarperDB:
+
+* **protocol extensions** that provide and define ways for clients to access data
+* **resource extensions** that handle and interpret different types of files
+* **consumer data sources** that provide a way to access and retrieve data from other sources.
+
+Server protocol extensions can be used to implement new protocols like MQTT, AMQP, Kafka, or maybe a retro-style Gopher interface. They can also be used to augment existing protocols like HTTP with "middleware" that adds authentication, analytics, or additional content negotiation, or to layer protocols on top of WebSockets.
+
+Server resource extensions implement support for different types of files that can be used as resources in applications. HarperDB includes support for using JavaScript modules and GraphQL Schemas as resources, but resource extensions could be added to support different file types like HTML templates (like JSX), CSV data, and more.
+
+Consumer data source components are used to retrieve and access data from other sources, and can be very useful if you want to use HarperDB to cache or use data from other databases like MySQL, Postgres, or Oracle, or subscribe to data from messaging brokers (again possibly Kafka, NATS, etc.).
+
+These are not mutually exclusive; you may build components that fulfill any or all of these roles.
+
+## Server Extensions
+
+Server Extensions are implemented as JavaScript packages/modules and interact with HarperDB through a number of possible hooks. A component can be defined as an extension by specifying the `extensionModule` in the `config.yaml`:
+
+```yaml
+extensionModule: './entry-module-name.js'
+```
+
+### Module Initialization
+
+Once a user has configured an extension, HarperDB will attempt to load the extension package specified by the `package` property. Once loaded, there are several functions that the extension module can export that will be called by HarperDB:
+
+`export function start(options: { port: number, server: {}})` If defined, this will be called on the initialization of the extension. The provided `server` property object includes a set of additional entry points for utilizing or layering on top of other protocols (and when implementing a new protocol, you can add your own entry points). The most common entry is to provide an HTTP middleware layer. This looks like:
+
+```javascript
+export function start(options: { port: number, server: {}}) {
+  options.server.http(async (request, nextLayer) => {
+    // we can directly return a response here, or do some processing on the request and delegate to the next layer
+    let response = await nextLayer(request);
+    return response;
+  });
+}
+```
+
+Here, the `request` object will have the following structure (this is based on Node's request, but augmented to conform to a subset of the [WHATWG Request API](https://developer.mozilla.org/en-US/docs/Web/API/Request)):
+
+```typescript
+interface Request {
+  method: string
+  headers: Headers // use request.headers.get(headerName) to get header values
+  body: Stream
+  data: any // deserialized data from the request body
+}
+```
+
+The returned `response` object should have the following structure (again, following a structural subset of the [WHATWG Response API](https://developer.mozilla.org/en-US/docs/Web/API/Response)):
+
+```typescript
+interface Response {
+  status?: number
+  headers?: {} // an object with header name/values
+  data?: any // object/value that will be serialized into the body
+  body?: Stream
+}
+```
+
+The `server.http` function also accepts an options argument that supports a `runFirst` flag to indicate that the middleware should go at the top of the stack and be executed prior to other HTTP components.
+If you were implementing an authentication extension, you could get authentication information from the request and use it to add the `user` property to the request:
+
+```javascript
+export function start(options: { port: number, server: {}, resources: Map}) {
+  options.server.http((request, nextLayer) => {
+    let authorization = request.headers.get('authorization');
+    if (authorization) {
+      // get some token for the user and determine the user
+      // if we want to use harperdb's user database
+      let user = server.getUser(username, password);
+      request.user = user; // authenticated user object goes on the request
+    }
+    // continue on to the next layer
+    return nextLayer(request);
+  }, { runFirst: true });
+  // if you needed to add a login resource, you could add it as well:
+  options.resources.set('/login', LoginResource);
+}
+```
+
+#### Direct Socket Server
+If you were implementing a new protocol, you can directly interact with the sockets and listen for new incoming TCP connections:
+
+```javascript
+export function start(options: { port: number, server: {}}) {
+  options.server.socket((socket) => {
+    // called for each incoming socket
+  });
+}
+```
+#### WebSockets
+If you were implementing a protocol using WebSockets, you can define a listener for incoming WebSocket connections and indicate the WebSockets (sub)protocol to specifically handle (which will select your listener if the `Sec-WebSocket-Protocol` header matches your protocol):
+
+```javascript
+export function start(options) {
+  server.ws((socket) => {
+    // called for each incoming WebSocket
+  }, Object.assign({ subProtocol: 'my-cool-protocol' }, options));
+}
+```
+
+### Resource Handling
+
+Typically, servers not only communicate with clients, but also serve up meaningful data based on the resources within the server. While resource extensions generally handle defining resources, once resources are defined, they can be consumed by server extensions. The `resources` argument provides access to the set of all the resources that have been defined. A server can call `resources.getMatch(path)` to get the resource associated with a given URL path.
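+
+As a minimal sketch (the `/login` path and the dispatch comment are illustrative assumptions, reusing the `start()` hook shown earlier), a server extension could look up a registered resource like this:
+
+```javascript
+export function start(options) {
+  options.server.http(async (request, nextLayer) => {
+    // look up the resource that was registered for a given URL path, if any
+    let loginResource = options.resources.getMatch('/login');
+    // ...a real extension would dispatch the request to the matched resource here
+    return nextLayer(request);
+  });
+}
+```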
+
+## Resource Extensions
+
+Resource extensions allow us to handle different files and make them accessible to servers as resources, following the common [Resource API](../../technical-details/reference/resource). To implement a resource extension, you export a function called `handleFile`. Users can then configure which files should be handled by your extension. For example, if we had implemented an EJS handler, it could be configured as:
+
+```yaml
+ module: 'ejs-extension'
+ path: '/templates/*.ejs'
+```
+
+And in our extension module, we could implement `handleFile`:
+
+```javascript
+export function handleFile(contents, relative_path, file_path, resources) {
+  // will be called for each .ejs file.
+  // We can then add the generated resource:
+  resources.set(relative_path, GeneratedResource);
+}
+```
+
+We can also implement a handler for directories. This can be useful for implementing a handler for broader frameworks that load their own files, like Next.js or Remix, or a static file handler. HarperDB includes such an extension for fastify's auto-loader that loads a directory of route definitions. This hook looks like:
+
+```javascript
+export function handleDirectory(relative_path, path, resources) {
+}
+```
+
+Note that these hooks are not mutually exclusive. You can write an extension that implements any or all of these hooks, potentially implementing a custom protocol and file handling.
+
+## Data Source Components
+
+Data source components implement the `Resource` interface to provide access to various data sources, which may be other APIs, databases, or local storage. Components that implement this interface can then be used as a source for caching tables, accessed as part of endpoint implementations, or even used as endpoints themselves. See the [Resource documentation](../../technical-details/reference/resource) for more information on implementing new resources.
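+
+As a rough sketch of the idea (the `harperdb` import style, the `ThirdPartyAPI` class name, and the external URL are illustrative assumptions; see the Resource documentation for the actual interface), a data source component might look like:
+
+```javascript
+import { Resource } from 'harperdb';
+
+// A resource that fetches records from an external HTTP API on demand,
+// so it can be used as the source for a caching table or as an endpoint.
+export class ThirdPartyAPI extends Resource {
+  async get() {
+    const response = await fetch(`https://some-external-api.example.com/records/${this.getId()}`);
+    return response.json();
+  }
+}
+```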
+
+## Content Type Extensions
+
+HarperDB uses content negotiation to determine how to deserialize incoming data from HTTP requests (and any other protocols that support content negotiation) and how to serialize data into responses. This negotiation is performed by comparing the `Content-Type` header with registered content type handlers to determine how to deserialize content into structured data that is processed and stored, and comparing the `Accept` header with registered content type handlers to determine how to serialize structured data. HarperDB comes with a rich set of content type handlers including JSON, CBOR, MessagePack, CSV, Event-Stream, and more. However, you can also add your own content type handlers by adding new entries (or even replacing existing entries) in the `contentTypes` map exported from the `server` global (or `harperdb` export). This map is keyed by the MIME type, and the value is an object with properties (all optional):
+* `serialize(data): Buffer|Uint8Array|string`: If defined, this will be called with the data structure and should return the data serialized as binary data (NodeJS Buffer or Uint8Array) or a string, for the response.
+* `serializeStream(data): ReadableStream`: If defined, this will be called with the data structure and should return the data serialized as a ReadableStream. This is generally necessary for handling asynchronous iterables.
+* `deserialize(Buffer|string): any`: If defined (and deserializeStream is not defined), this will be called with the raw data received from the incoming request and should return the deserialized data structure. This will be called with a string for text MIME types ("text/..."), and a Buffer for all others.
+* `deserializeStream(ReadableStream): any`: If defined, this will be called with the raw data stream (if there is one) received from the incoming request and should return the deserialized data structure (potentially as an asynchronous iterable).
+* `q: number`: This is an indication of this serialization's quality, between 0 and 1; if omitted, it defaults to 1. It is called "content negotiation" instead of "content demanding" because both client and server may have multiple supported content types, and the server needs to choose the best for both. This is determined by finding the content type (of all those supported) with the highest product of client q and server q (1 is a perfect representation of the data, 0 is worst, 0.5 is medium quality).
+
+For example, if you wanted to define an XML serializer (that can respond with XML to requests with `Accept: text/xml`) you could write:
+
+```javascript
+contentTypes.set('text/xml', {
+  serialize(data) {
+    return '...'; // build and return an XML string from `data` here
+  },
+  q: 0.8,
+});
+```
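+
+A handler can also accept data in a content type by supplying `deserialize`. The following is a minimal sketch (the `text/plain-lines` type name and the line-splitting logic are illustrative assumptions):
+
+```javascript
+contentTypes.set('text/plain-lines', {
+  serialize(data) {
+    return data.join('\n'); // one value per line in the response body
+  },
+  deserialize(text) {
+    return text.split('\n'); // text/... MIME types are passed in as strings
+  },
+  q: 0.5,
+});
+```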
+
+## Trusted/Untrusted (Future Plans)
+
+In the future, extensions may be categorized as trusted or untrusted. For some HarperDB installations, administrators may choose to constrain users to only using trusted extensions for security reasons (such as multi-tenancy requirements or added defense in depth). Most installations do not impose such constraints, but they may exist in some situations.
+
+An extension can be automatically considered trusted if it conforms to the requirements of [Secure EcmaScript](https://www.npmjs.com/package/ses/v/0.7.0) (basically strict mode code that doesn't modify any global objects), and either does not use any other modules, or only uses modules from other trusted extensions/components. An extension can be marked as trusted by review by the HarperDB team as well, but developers should not expect that HarperDB can review all extensions. Untrusted extensions can access any other packages/modules, and may have many additional capabilities.
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/advanced-json-sql-examples.md b/site/versioned_docs/version-4.3/developers/operations-api/advanced-json-sql-examples.md
new file mode 100644
index 00000000..1584a0c4
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/advanced-json-sql-examples.md
@@ -0,0 +1,1780 @@
+---
+title: Advanced JSON SQL Examples
+---
+
+# Advanced JSON SQL Examples
+
+## Create movies database
+Create a new database called "movies" using the 'create_database' operation.
+
+_Note: Creating a database is optional; if one is not created, HarperDB will default to using a database named `data`._
+
+### Body
+```json
+{
+ "operation": "create_database",
+ "database": "movies"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "database 'movies' successfully created"
+}
+```
+
+---
+
+## Create movie Table
+Creates a new table called "movie" inside the database "movies" using the 'create_table' operation.
+
+### Body
+
+```json
+{
+ "operation": "create_table",
+ "database": "movies",
+ "table": "movie",
+ "primary_key": "id"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "table 'movies.movie' successfully created."
+}
+```
+
+
+---
+
+## Create credits Table
+Creates a new table called "credits" inside the database "movies" using the 'create_table' operation.
+
+### Body
+
+```json
+{
+ "operation": "create_table",
+ "database": "movies",
+ "table": "credits",
+ "primary_key": "movie_id"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "table 'movies.credits' successfully created."
+}
+```
+
+
+---
+
+## Bulk Insert movie Via CSV
+Inserts data from a hosted CSV file into the "movie" table using the 'csv_url_load' operation.
+
+### Body
+
+```json
+{
+ "operation": "csv_url_load",
+ "database": "movies",
+ "table": "movie",
+ "csv_url": "https:/search-json-sample-data.s3.us-east-2.amazonaws.com/movie.csv"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 1889eee4-23c1-4945-9bb7-c805fc20726c"
+}
+```
+
+
+---
+
+## Bulk Insert credits Via CSV
+Inserts data from a hosted CSV file into the "credits" table using the 'csv_url_load' operation.
+
+### Body
+
+```json
+{
+ "operation": "csv_url_load",
+ "database": "movies",
+ "table": "credits",
+ "csv_url": "https:/search-json-sample-data.s3.us-east-2.amazonaws.com/credits.csv"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 3a14cd74-67f3-41e9-8ccd-45ffd0addc2c",
+ "job_id": "3a14cd74-67f3-41e9-8ccd-45ffd0addc2c"
+}
+```
+
+
+---
+
+## View raw data
+In the following examples we will be running expressions on the `keywords` and `production_companies` attributes, so for context we are displaying what the raw data looks like.
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT title, rank, keywords, production_companies FROM movies.movie ORDER BY rank LIMIT 10"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "title": "Ad Astra",
+ "rank": 1,
+ "keywords": [
+ {
+ "id": 305,
+ "name": "moon"
+ },
+ {
+ "id": 697,
+ "name": "loss of loved one"
+ },
+ {
+ "id": 839,
+ "name": "planet mars"
+ },
+ {
+ "id": 14626,
+ "name": "astronaut"
+ },
+ {
+ "id": 157265,
+ "name": "moon colony"
+ },
+ {
+ "id": 162429,
+ "name": "solar system"
+ },
+ {
+ "id": 240119,
+ "name": "father son relationship"
+ },
+ {
+ "id": 244256,
+ "name": "near future"
+ },
+ {
+ "id": 257878,
+ "name": "planet neptune"
+ },
+ {
+ "id": 260089,
+ "name": "space walk"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 490,
+ "name": "New Regency Productions",
+ "origin_country": ""
+ },
+ {
+ "id": 79963,
+ "name": "Keep Your Head",
+ "origin_country": ""
+ },
+ {
+ "id": 73492,
+ "name": "MadRiver Pictures",
+ "origin_country": ""
+ },
+ {
+ "id": 81,
+ "name": "Plan B Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 30666,
+ "name": "RT Features",
+ "origin_country": "BR"
+ },
+ {
+ "id": 30148,
+ "name": "Bona Film Group",
+ "origin_country": "CN"
+ },
+ {
+ "id": 22213,
+ "name": "TSG Entertainment",
+ "origin_country": "US"
+ }
+ ]
+ },
+ {
+ "title": "Extraction",
+ "rank": 2,
+ "keywords": [
+ {
+ "id": 3070,
+ "name": "mercenary"
+ },
+ {
+ "id": 4110,
+ "name": "mumbai (bombay), india"
+ },
+ {
+ "id": 9717,
+ "name": "based on comic"
+ },
+ {
+ "id": 9730,
+ "name": "crime boss"
+ },
+ {
+ "id": 11107,
+ "name": "rescue mission"
+ },
+ {
+ "id": 18712,
+ "name": "based on graphic novel"
+ },
+ {
+ "id": 265216,
+ "name": "dhaka (dacca), bangladesh"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 106544,
+ "name": "AGBO",
+ "origin_country": "US"
+ },
+ {
+ "id": 109172,
+ "name": "Thematic Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 92029,
+ "name": "TGIM Films",
+ "origin_country": "US"
+ }
+ ]
+ },
+ {
+ "title": "To the Beat! Back 2 School",
+ "rank": 3,
+ "keywords": [
+ {
+ "id": 10873,
+ "name": "school"
+ }
+ ],
+ "production_companies": []
+ },
+ {
+ "title": "Bloodshot",
+ "rank": 4,
+ "keywords": [
+ {
+ "id": 2651,
+ "name": "nanotechnology"
+ },
+ {
+ "id": 9715,
+ "name": "superhero"
+ },
+ {
+ "id": 9717,
+ "name": "based on comic"
+ },
+ {
+ "id": 164218,
+ "name": "psychotronic"
+ },
+ {
+ "id": 255024,
+ "name": "shared universe"
+ },
+ {
+ "id": 258575,
+ "name": "valiant comics"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 34,
+ "name": "Sony Pictures",
+ "origin_country": "US"
+ },
+ {
+ "id": 10246,
+ "name": "Cross Creek Pictures",
+ "origin_country": "US"
+ },
+ {
+ "id": 6573,
+ "name": "Mimran Schur Pictures",
+ "origin_country": "US"
+ },
+ {
+ "id": 333,
+ "name": "Original Film",
+ "origin_country": "US"
+ },
+ {
+ "id": 103673,
+ "name": "The Hideaway Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 124335,
+ "name": "Valiant Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 5,
+ "name": "Columbia Pictures",
+ "origin_country": "US"
+ },
+ {
+ "id": 1225,
+ "name": "One Race",
+ "origin_country": "US"
+ },
+ {
+ "id": 30148,
+ "name": "Bona Film Group",
+ "origin_country": "CN"
+ }
+ ]
+ },
+ {
+ "title": "The Call of the Wild",
+ "rank": 5,
+ "keywords": [
+ {
+ "id": 818,
+ "name": "based on novel or book"
+ },
+ {
+ "id": 4542,
+ "name": "gold rush"
+ },
+ {
+ "id": 15162,
+ "name": "dog"
+ },
+ {
+ "id": 155821,
+ "name": "sled dogs"
+ },
+ {
+ "id": 189390,
+ "name": "yukon"
+ },
+ {
+ "id": 207928,
+ "name": "19th century"
+ },
+ {
+ "id": 259987,
+ "name": "cgi animation"
+ },
+ {
+ "id": 263806,
+ "name": "1890s"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 787,
+ "name": "3 Arts Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 127928,
+ "name": "20th Century Studios",
+ "origin_country": "US"
+ },
+ {
+ "id": 22213,
+ "name": "TSG Entertainment",
+ "origin_country": "US"
+ }
+ ]
+ },
+ {
+ "title": "Sonic the Hedgehog",
+ "rank": 6,
+ "keywords": [
+ {
+ "id": 282,
+ "name": "video game"
+ },
+ {
+ "id": 6054,
+ "name": "friendship"
+ },
+ {
+ "id": 10842,
+ "name": "good vs evil"
+ },
+ {
+ "id": 41645,
+ "name": "based on video game"
+ },
+ {
+ "id": 167043,
+ "name": "road movie"
+ },
+ {
+ "id": 172142,
+ "name": "farting"
+ },
+ {
+ "id": 188933,
+ "name": "bar fight"
+ },
+ {
+ "id": 226967,
+ "name": "amistad"
+ },
+ {
+ "id": 245230,
+ "name": "live action remake"
+ },
+ {
+ "id": 258111,
+ "name": "fantasy"
+ },
+ {
+ "id": 260223,
+ "name": "videojuego"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 333,
+ "name": "Original Film",
+ "origin_country": "US"
+ },
+ {
+ "id": 10644,
+ "name": "Blur Studios",
+ "origin_country": "US"
+ },
+ {
+ "id": 77884,
+ "name": "Marza Animation Planet",
+ "origin_country": "JP"
+ },
+ {
+ "id": 4,
+ "name": "Paramount",
+ "origin_country": "US"
+ },
+ {
+ "id": 113750,
+ "name": "SEGA",
+ "origin_country": "JP"
+ },
+ {
+ "id": 100711,
+ "name": "DJ2 Entertainment",
+ "origin_country": ""
+ },
+ {
+ "id": 24955,
+ "name": "Paramount Animation",
+ "origin_country": "US"
+ }
+ ]
+ },
+ {
+ "title": "Birds of Prey (and the Fantabulous Emancipation of One Harley Quinn)",
+ "rank": 7,
+ "keywords": [
+ {
+ "id": 849,
+ "name": "dc comics"
+ },
+ {
+ "id": 9717,
+ "name": "based on comic"
+ },
+ {
+ "id": 187056,
+ "name": "woman director"
+ },
+ {
+ "id": 229266,
+ "name": "dc extended universe"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 9993,
+ "name": "DC Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 82968,
+ "name": "LuckyChap Entertainment",
+ "origin_country": "GB"
+ },
+ {
+ "id": 103462,
+ "name": "Kroll & Co Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 174,
+ "name": "Warner Bros. Pictures",
+ "origin_country": "US"
+ },
+ {
+ "id": 429,
+ "name": "DC Comics",
+ "origin_country": "US"
+ },
+ {
+ "id": 128064,
+ "name": "DC Films",
+ "origin_country": "US"
+ },
+ {
+ "id": 101831,
+ "name": "Clubhouse Pictures",
+ "origin_country": "US"
+ }
+ ]
+ },
+ {
+ "title": "Justice League Dark: Apokolips War",
+ "rank": 8,
+ "keywords": [
+ {
+ "id": 849,
+ "name": "dc comics"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 2785,
+ "name": "Warner Bros. Animation",
+ "origin_country": "US"
+ },
+ {
+ "id": 9993,
+ "name": "DC Entertainment",
+ "origin_country": "US"
+ },
+ {
+ "id": 429,
+ "name": "DC Comics",
+ "origin_country": "US"
+ }
+ ]
+ },
+ {
+ "title": "Parasite",
+ "rank": 9,
+ "keywords": [
+ {
+ "id": 1353,
+ "name": "underground"
+ },
+ {
+ "id": 5318,
+ "name": "seoul"
+ },
+ {
+ "id": 5732,
+ "name": "birthday party"
+ },
+ {
+ "id": 5752,
+ "name": "private lessons"
+ },
+ {
+ "id": 9866,
+ "name": "basement"
+ },
+ {
+ "id": 10453,
+ "name": "con artist"
+ },
+ {
+ "id": 11935,
+ "name": "working class"
+ },
+ {
+ "id": 12565,
+ "name": "psychological thriller"
+ },
+ {
+ "id": 13126,
+ "name": "limousine driver"
+ },
+ {
+ "id": 14514,
+ "name": "class differences"
+ },
+ {
+ "id": 14864,
+ "name": "rich poor"
+ },
+ {
+ "id": 17997,
+ "name": "housekeeper"
+ },
+ {
+ "id": 18015,
+ "name": "tutor"
+ },
+ {
+ "id": 18035,
+ "name": "family"
+ },
+ {
+ "id": 33421,
+ "name": "crime family"
+ },
+ {
+ "id": 173272,
+ "name": "flood"
+ },
+ {
+ "id": 188861,
+ "name": "smell"
+ },
+ {
+ "id": 198673,
+ "name": "unemployed"
+ },
+ {
+ "id": 237462,
+ "name": "wealthy family"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 7036,
+ "name": "CJ Entertainment",
+ "origin_country": "KR"
+ },
+ {
+ "id": 4399,
+ "name": "Barunson E&A",
+ "origin_country": "KR"
+ }
+ ]
+ },
+ {
+ "title": "Star Wars: The Rise of Skywalker",
+ "rank": 10,
+ "keywords": [
+ {
+ "id": 161176,
+ "name": "space opera"
+ }
+ ],
+ "production_companies": [
+ {
+ "id": 1,
+ "name": "Lucasfilm",
+ "origin_country": "US"
+ },
+ {
+ "id": 11461,
+ "name": "Bad Robot",
+ "origin_country": "US"
+ },
+ {
+ "id": 2,
+ "name": "Walt Disney Pictures",
+ "origin_country": "US"
+ },
+ {
+ "id": 120404,
+ "name": "British Film Commission",
+ "origin_country": ""
+ }
+ ]
+ }
+]
+```
+
+
+---
+
+## Simple search_json call
+This query uses search_json to convert the keywords object array to a simple string array. The expression '[name]' tells the function to extract all values for the name attribute and wrap them in an array.
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT title, rank, search_json('[name]', keywords) as keywords FROM movies.movie ORDER BY rank LIMIT 10"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "title": "Ad Astra",
+ "rank": 1,
+ "keywords": [
+ "moon",
+ "loss of loved one",
+ "planet mars",
+ "astronaut",
+ "moon colony",
+ "solar system",
+ "father son relationship",
+ "near future",
+ "planet neptune",
+ "space walk"
+ ]
+ },
+ {
+ "title": "Extraction",
+ "rank": 2,
+ "keywords": [
+ "mercenary",
+ "mumbai (bombay), india",
+ "based on comic",
+ "crime boss",
+ "rescue mission",
+ "based on graphic novel",
+ "dhaka (dacca), bangladesh"
+ ]
+ },
+ {
+ "title": "To the Beat! Back 2 School",
+ "rank": 3,
+ "keywords": [
+ "school"
+ ]
+ },
+ {
+ "title": "Bloodshot",
+ "rank": 4,
+ "keywords": [
+ "nanotechnology",
+ "superhero",
+ "based on comic",
+ "psychotronic",
+ "shared universe",
+ "valiant comics"
+ ]
+ },
+ {
+ "title": "The Call of the Wild",
+ "rank": 5,
+ "keywords": [
+ "based on novel or book",
+ "gold rush",
+ "dog",
+ "sled dogs",
+ "yukon",
+ "19th century",
+ "cgi animation",
+ "1890s"
+ ]
+ },
+ {
+ "title": "Sonic the Hedgehog",
+ "rank": 6,
+ "keywords": [
+ "video game",
+ "friendship",
+ "good vs evil",
+ "based on video game",
+ "road movie",
+ "farting",
+ "bar fight",
+ "amistad",
+ "live action remake",
+ "fantasy",
+ "videojuego"
+ ]
+ },
+ {
+ "title": "Birds of Prey (and the Fantabulous Emancipation of One Harley Quinn)",
+ "rank": 7,
+ "keywords": [
+ "dc comics",
+ "based on comic",
+ "woman director",
+ "dc extended universe"
+ ]
+ },
+ {
+ "title": "Justice League Dark: Apokolips War",
+ "rank": 8,
+ "keywords": [
+ "dc comics"
+ ]
+ },
+ {
+ "title": "Parasite",
+ "rank": 9,
+ "keywords": [
+ "underground",
+ "seoul",
+ "birthday party",
+ "private lessons",
+ "basement",
+ "con artist",
+ "working class",
+ "psychological thriller",
+ "limousine driver",
+ "class differences",
+ "rich poor",
+ "housekeeper",
+ "tutor",
+ "family",
+ "crime family",
+ "flood",
+ "smell",
+ "unemployed",
+ "wealthy family"
+ ]
+ },
+ {
+ "title": "Star Wars: The Rise of Skywalker",
+ "rank": 10,
+ "keywords": [
+ "space opera"
+ ]
+ }
+]
+```
+
+
+---
+
+## Use search_json in a where clause
+This example shows how we can use SEARCH_JSON to filter out records in a WHERE clause. The production_companies attribute holds an object array of companies that produced each movie; we want to see only movies that were produced by Marvel Studios. Our expression is a filter, '$[name="Marvel Studios"]', which tells the function to iterate the production_companies array and only return entries where the name is "Marvel Studios".
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT title, release_date FROM movies.movie where search_json('$[name=\"Marvel Studios\"]', production_companies) IS NOT NULL ORDER BY release_date"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "title": "Iron Man",
+ "release_date": "2008-04-30"
+ },
+ {
+ "title": "The Incredible Hulk",
+ "release_date": "2008-06-12"
+ },
+ {
+ "title": "Iron Man 2",
+ "release_date": "2010-04-28"
+ },
+ {
+ "title": "Thor",
+ "release_date": "2011-04-21"
+ },
+ {
+ "title": "Captain America: The First Avenger",
+ "release_date": "2011-07-22"
+ },
+ {
+ "title": "Marvel One-Shot: The Consultant",
+ "release_date": "2011-09-12"
+ },
+ {
+ "title": "Marvel One-Shot: A Funny Thing Happened on the Way to Thor's Hammer",
+ "release_date": "2011-10-25"
+ },
+ {
+ "title": "The Avengers",
+ "release_date": "2012-04-25"
+ },
+ {
+ "title": "Marvel One-Shot: Item 47",
+ "release_date": "2012-09-13"
+ },
+ {
+ "title": "Iron Man 3",
+ "release_date": "2013-04-18"
+ },
+ {
+ "title": "Marvel One-Shot: Agent Carter",
+ "release_date": "2013-09-08"
+ },
+ {
+ "title": "Thor: The Dark World",
+ "release_date": "2013-10-29"
+ },
+ {
+ "title": "Marvel One-Shot: All Hail the King",
+ "release_date": "2014-02-04"
+ },
+ {
+ "title": "Marvel Studios: Assembling a Universe",
+ "release_date": "2014-03-18"
+ },
+ {
+ "title": "Captain America: The Winter Soldier",
+ "release_date": "2014-03-20"
+ },
+ {
+ "title": "Guardians of the Galaxy",
+ "release_date": "2014-07-30"
+ },
+ {
+ "title": "Avengers: Age of Ultron",
+ "release_date": "2015-04-22"
+ },
+ {
+ "title": "Ant-Man",
+ "release_date": "2015-07-14"
+ },
+ {
+ "title": "Captain America: Civil War",
+ "release_date": "2016-04-27"
+ },
+ {
+ "title": "Team Thor",
+ "release_date": "2016-08-28"
+ },
+ {
+ "title": "Doctor Strange",
+ "release_date": "2016-10-25"
+ },
+ {
+ "title": "Guardians of the Galaxy Vol. 2",
+ "release_date": "2017-04-19"
+ },
+ {
+ "title": "Spider-Man: Homecoming",
+ "release_date": "2017-07-05"
+ },
+ {
+ "title": "Thor: Ragnarok",
+ "release_date": "2017-10-25"
+ },
+ {
+ "title": "Black Panther",
+ "release_date": "2018-02-13"
+ },
+ {
+ "title": "Avengers: Infinity War",
+ "release_date": "2018-04-25"
+ },
+ {
+ "title": "Ant-Man and the Wasp",
+ "release_date": "2018-07-04"
+ },
+ {
+ "title": "Captain Marvel",
+ "release_date": "2019-03-06"
+ },
+ {
+ "title": "Avengers: Endgame",
+ "release_date": "2019-04-24"
+ },
+ {
+ "title": "Spider-Man: Far from Home",
+ "release_date": "2019-06-28"
+ },
+ {
+ "title": "Black Widow",
+ "release_date": "2020-10-28"
+ },
+ {
+ "title": "Untitled Spider-Man 3",
+ "release_date": "2021-11-04"
+ },
+ {
+ "title": "Thor: Love and Thunder",
+ "release_date": "2022-02-10"
+ },
+ {
+ "title": "Doctor Strange in the Multiverse of Madness",
+ "release_date": "2022-03-23"
+ },
+ {
+ "title": "Untitled Marvel Project (3)",
+ "release_date": "2022-07-29"
+ },
+ {
+ "title": "Guardians of the Galaxy Vol. 3",
+ "release_date": "2023-02-16"
+ }
+]
+```
+
+
+---
+
+## Use search_json to show the movies with the largest casts
+This example shows how we can use SEARCH_JSON to perform a simple calculation on JSON and order by the results. The cast attribute holds an object array of details about the cast of a movie. We use the expression '$count(id)', which counts each id and returns the value; we alias this in SQL as cast_size, which in turn is used to sort the rows.
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT movie_title, search_json('$count(id)', `cast`) as cast_size FROM movies.credits ORDER BY cast_size DESC LIMIT 10"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "movie_title": "Around the World in Eighty Days",
+ "cast_size": 312
+ },
+ {
+ "movie_title": "And the Oscar Goes To...",
+ "cast_size": 259
+ },
+ {
+ "movie_title": "Rock of Ages",
+ "cast_size": 223
+ },
+ {
+ "movie_title": "Mr. Smith Goes to Washington",
+ "cast_size": 213
+ },
+ {
+ "movie_title": "Les Misérables",
+ "cast_size": 208
+ },
+ {
+ "movie_title": "Jason Bourne",
+ "cast_size": 201
+ },
+ {
+ "movie_title": "The Muppets",
+ "cast_size": 191
+ },
+ {
+ "movie_title": "You Don't Mess with the Zohan",
+ "cast_size": 183
+ },
+ {
+ "movie_title": "The Irishman",
+ "cast_size": 173
+ },
+ {
+ "movie_title": "Spider-Man: Far from Home",
+ "cast_size": 173
+ }
+]
+```
+
+
+---
+
+## search_json as a condition, in a select with a table join
+This example shows how we can use SEARCH_JSON to find movies in which at least 2 of our favorite actors from Marvel films have acted together, then list the movie, its overview, release date, and the actors' names and their characters. The WHERE clause performs a count of the credits.cast entries that match those actors. The SELECT performs the same filter on the cast attribute and transforms each object to return just the actor's name and their character.
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT m.title, m.overview, m.release_date, search_json('$[name in [\"Robert Downey Jr.\", \"Chris Evans\", \"Scarlett Johansson\", \"Mark Ruffalo\", \"Chris Hemsworth\", \"Jeremy Renner\", \"Clark Gregg\", \"Samuel L. Jackson\", \"Gwyneth Paltrow\", \"Don Cheadle\"]].{\"actor\": name, \"character\": character}', c.`cast`) as characters FROM movies.credits c INNER JOIN movies.movie m ON c.movie_id = m.id WHERE search_json('$count($[name in [\"Robert Downey Jr.\", \"Chris Evans\", \"Scarlett Johansson\", \"Mark Ruffalo\", \"Chris Hemsworth\", \"Jeremy Renner\", \"Clark Gregg\", \"Samuel L. Jackson\", \"Gwyneth Paltrow\", \"Don Cheadle\"]])', c.`cast`) >= 2"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "title": "Out of Sight",
+ "overview": "Meet Jack Foley, a smooth criminal who bends the law and is determined to make one last heist. Karen Sisco is a federal marshal who chooses all the right moves … and all the wrong guys. Now they're willing to risk it all to find out if there's more between them than just the law.",
+ "release_date": "1998-06-26",
+ "characters": [
+ {
+ "actor": "Don Cheadle",
+ "character": "Maurice Miller"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Hejira Henry (uncredited)"
+ }
+ ]
+ },
+ {
+ "title": "Iron Man",
+ "overview": "After being held captive in an Afghan cave, billionaire engineer Tony Stark creates a unique weaponized suit of armor to fight evil.",
+ "release_date": "2008-04-30",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Virginia \"Pepper\" Potts"
+ },
+ {
+ "actor": "Clark Gregg",
+ "character": "Phil Coulson"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury (uncredited)"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ }
+ ]
+ },
+ {
+ "title": "Captain America: The First Avenger",
+ "overview": "During World War II, Steve Rogers is a sickly man from Brooklyn who's transformed into super-soldier Captain America to aid in the war effort. Rogers must stop the Red Skull – Adolf Hitler's ruthless head of weaponry, and the leader of an organization that intends to use a mysterious device of untold powers for world domination.",
+ "release_date": "2011-07-22",
+ "characters": [
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ }
+ ]
+ },
+ {
+ "title": "In Good Company",
+ "overview": "Dan Foreman is a seasoned advertisement sales executive at a high-ranking publication when a corporate takeover results in him being placed under naive supervisor Carter Duryea, who is half his age. Matters are made worse when Dan's new supervisor becomes romantically involved with his daughter an 18 year-old college student Alex.",
+ "release_date": "2004-12-29",
+ "characters": [
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Alex Foreman"
+ },
+ {
+ "actor": "Clark Gregg",
+ "character": "Mark Steckle"
+ }
+ ]
+ },
+ {
+ "title": "Zodiac",
+ "overview": "The true story of the investigation of the \"Zodiac Killer\", a serial killer who terrified the San Francisco Bay Area, taunting police with his ciphers and letters. The case becomes an obsession for three men as their lives and careers are built and destroyed by the endless trail of clues.",
+ "release_date": "2007-03-02",
+ "characters": [
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Dave Toschi"
+ },
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Paul Avery"
+ }
+ ]
+ },
+ {
+ "title": "Hard Eight",
+ "overview": "A stranger mentors a young Reno gambler who weds a hooker and befriends a vulgar casino regular.",
+ "release_date": "1996-02-28",
+ "characters": [
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Clementine"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Jimmy"
+ }
+ ]
+ },
+ {
+ "title": "The Spirit",
+ "overview": "Down these mean streets a man must come. A hero born, murdered, and born again. A Rookie cop named Denny Colt returns from the beyond as The Spirit, a hero whose mission is to fight against the bad forces from the shadows of Central City. The Octopus, who kills anyone unfortunate enough to see his face, has other plans; he is going to wipe out the entire city.",
+ "release_date": "2008-12-25",
+ "characters": [
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Silken Floss"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Octopuss"
+ }
+ ]
+ },
+ {
+ "title": "S.W.A.T.",
+ "overview": "Hondo Harrelson recruits Jim Street to join an elite unit of the Los Angeles Police Department. Together they seek out more members, including tough Deke Kay and single mom Chris Sanchez. The team's first big assignment is to escort crime boss Alex Montel to prison. It seems routine, but when Montel offers a huge reward to anyone who can break him free, criminals of various stripes step up for the prize.",
+ "release_date": "2003-08-08",
+ "characters": [
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Sgt. Dan 'Hondo' Harrelson"
+ },
+ {
+ "actor": "Jeremy Renner",
+ "character": "Brian Gamble"
+ }
+ ]
+ },
+ {
+ "title": "Iron Man 2",
+ "overview": "With the world now aware of his dual life as the armored superhero Iron Man, billionaire inventor Tony Stark faces pressure from the government, the press and the public to share his technology with the military. Unwilling to let go of his invention, Stark, with Pepper Potts and James 'Rhodey' Rhodes at his side, must forge new alliances – and confront powerful enemies.",
+ "release_date": "2010-04-28",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Virginia \"Pepper\" Potts"
+ },
+ {
+ "actor": "Don Cheadle",
+ "character": "James \"Rhodey\" Rhodes / War Machine"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natalie Rushman / Natasha Romanoff / Black Widow"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ },
+ {
+ "actor": "Clark Gregg",
+ "character": "Phil Coulson"
+ }
+ ]
+ },
+ {
+ "title": "Thor",
+ "overview": "Against his father Odin's will, The Mighty Thor - a powerful but arrogant warrior god - recklessly reignites an ancient war. Thor is cast down to Earth and forced to live among humans as punishment. Once here, Thor learns what it takes to be a true hero when the most dangerous villain of his world sends the darkest forces of Asgard to invade Earth.",
+ "release_date": "2011-04-21",
+ "characters": [
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Clark Gregg",
+ "character": "Phil Coulson"
+ },
+ {
+ "actor": "Jeremy Renner",
+ "character": "Clint Barton / Hawkeye (uncredited)"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury (uncredited)"
+ }
+ ]
+ },
+ {
+ "title": "View from the Top",
+ "overview": "A small-town woman tries to achieve her goal of becoming a flight attendant.",
+ "release_date": "2003-03-21",
+ "characters": [
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Donna"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Ted Stewart"
+ }
+ ]
+ },
+ {
+ "title": "The Nanny Diaries",
+ "overview": "A college graduate goes to work as a nanny for a rich New York family. Ensconced in their home, she has to juggle their dysfunction, a new romance, and the spoiled brat in her charge.",
+ "release_date": "2007-08-24",
+ "characters": [
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Annie Braddock"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Hayden \"Harvard Hottie\""
+ }
+ ]
+ },
+ {
+ "title": "The Perfect Score",
+ "overview": "Six high school seniors decide to break into the Princeton Testing Center so they can steal the answers to their upcoming SAT tests and all get perfect scores.",
+ "release_date": "2004-01-30",
+ "characters": [
+ {
+ "actor": "Chris Evans",
+ "character": "Kyle"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Francesca Curtis"
+ }
+ ]
+ },
+ {
+ "title": "The Avengers",
+ "overview": "When an unexpected enemy emerges and threatens global safety and security, Nick Fury, director of the international peacekeeping agency known as S.H.I.E.L.D., finds himself in need of a team to pull the world back from the brink of disaster. Spanning the globe, a daring recruitment effort begins!",
+ "release_date": "2012-04-25",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner / The Hulk"
+ },
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow"
+ },
+ {
+ "actor": "Jeremy Renner",
+ "character": "Clint Barton / Hawkeye"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ },
+ {
+ "actor": "Clark Gregg",
+ "character": "Phil Coulson"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Virginia \"Pepper\" Potts"
+ }
+ ]
+ },
+ {
+ "title": "Iron Man 3",
+ "overview": "When Tony Stark's world is torn apart by a formidable terrorist called the Mandarin, he starts an odyssey of rebuilding and retribution.",
+ "release_date": "2013-04-18",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Virginia \"Pepper\" Potts"
+ },
+ {
+ "actor": "Don Cheadle",
+ "character": "James \"Rhodey\" Rhodes / Iron Patriot"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner (uncredited)"
+ }
+ ]
+ },
+ {
+ "title": "Marvel One-Shot: The Consultant",
+ "overview": "Agent Coulson informs Agent Sitwell that the World Security Council wishes Emil Blonsky to be released from prison to join the Avengers Initiative. As Nick Fury doesn't want to release Blonsky, the two agents decide to send a patsy to sabotage the meeting...",
+ "release_date": "2011-09-12",
+ "characters": [
+ {
+ "actor": "Clark Gregg",
+ "character": "Phil Coulson"
+ },
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark (archive footage)"
+ }
+ ]
+ },
+ {
+ "title": "Thor: The Dark World",
+ "overview": "Thor fights to restore order across the cosmos… but an ancient race led by the vengeful Malekith returns to plunge the universe back into darkness. Faced with an enemy that even Odin and Asgard cannot withstand, Thor must embark on his most perilous and personal journey yet, one that will reunite him with Jane Foster and force him to sacrifice everything to save us all.",
+ "release_date": "2013-10-29",
+ "characters": [
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Loki as Captain America (uncredited)"
+ }
+ ]
+ },
+ {
+ "title": "Avengers: Age of Ultron",
+ "overview": "When Tony Stark tries to jumpstart a dormant peacekeeping program, things go awry and Earth’s Mightiest Heroes are put to the ultimate test as the fate of the planet hangs in the balance. As the villainous Ultron emerges, it is up to The Avengers to stop him from enacting his terrible plans, and soon uneasy alliances and unexpected action pave the way for an epic and unique global adventure.",
+ "release_date": "2015-04-22",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner / The Hulk"
+ },
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow"
+ },
+ {
+ "actor": "Jeremy Renner",
+ "character": "Clint Barton / Hawkeye"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ },
+ {
+ "actor": "Don Cheadle",
+ "character": "James \"Rhodey\" Rhodes / War Machine"
+ }
+ ]
+ },
+ {
+ "title": "Captain America: The Winter Soldier",
+ "overview": "After the cataclysmic events in New York with The Avengers, Steve Rogers, aka Captain America is living quietly in Washington, D.C. and trying to adjust to the modern world. But when a S.H.I.E.L.D. colleague comes under attack, Steve becomes embroiled in a web of intrigue that threatens to put the world at risk. Joining forces with the Black Widow, Captain America struggles to expose the ever-widening conspiracy while fighting off professional assassins sent to silence him at every turn. When the full scope of the villainous plot is revealed, Captain America and the Black Widow enlist the help of a new ally, the Falcon. However, they soon find themselves up against an unexpected and formidable enemy—the Winter Soldier.",
+ "release_date": "2014-03-20",
+ "characters": [
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow"
+ }
+ ]
+ },
+ {
+ "title": "Thanks for Sharing",
+ "overview": "A romantic comedy that brings together three disparate characters who are learning to face a challenging and often confusing world as they struggle together against a common demon—sex addiction.",
+ "release_date": "2013-09-19",
+ "characters": [
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Adam"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Phoebe"
+ }
+ ]
+ },
+ {
+ "title": "Chef",
+ "overview": "When Chef Carl Casper suddenly quits his job at a prominent Los Angeles restaurant after refusing to compromise his creative integrity for its controlling owner, he is left to figure out what's next. Finding himself in Miami, he teams up with his ex-wife, his friend and his son to launch a food truck. Taking to the road, Chef Carl goes back to his roots to reignite his passion for the kitchen -- and zest for life and love.",
+ "release_date": "2014-05-08",
+ "characters": [
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Molly"
+ },
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Marvin"
+ }
+ ]
+ },
+ {
+ "title": "Marvel Studios: Assembling a Universe",
+ "overview": "A look at the story behind Marvel Studios and the Marvel Cinematic Universe, featuring interviews and behind-the-scenes footage from all of the Marvel films, the Marvel One-Shots and \"Marvel's Agents of S.H.I.E.L.D.\"",
+ "release_date": "2014-03-18",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Himself / Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Himself / Thor"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Himself / Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Himself / Bruce Banner / Hulk"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Herself"
+ },
+ {
+ "actor": "Clark Gregg",
+ "character": "Himself"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Himself"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Herself"
+ },
+ {
+ "actor": "Jeremy Renner",
+ "character": "Himself"
+ }
+ ]
+ },
+ {
+ "title": "Captain America: Civil War",
+ "overview": "Following the events of Age of Ultron, the collective governments of the world pass an act designed to regulate all superhuman activity. This polarizes opinion amongst the Avengers, causing two factions to side with Iron Man or Captain America, which causes an epic battle between former allies.",
+ "release_date": "2016-04-27",
+ "characters": [
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow"
+ },
+ {
+ "actor": "Don Cheadle",
+ "character": "James \"Rhodey\" Rhodes / War Machine"
+ },
+ {
+ "actor": "Jeremy Renner",
+ "character": "Clint Barton / Hawkeye"
+ }
+ ]
+ },
+ {
+ "title": "Thor: Ragnarok",
+ "overview": "Thor is imprisoned on the other side of the universe and finds himself in a race against time to get back to Asgard to stop Ragnarok, the destruction of his home-world and the end of Asgardian civilization, at the hands of an all-powerful new threat, the ruthless Hela.",
+ "release_date": "2017-10-25",
+ "characters": [
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner / Hulk"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow (archive footage / uncredited)"
+ }
+ ]
+ },
+ {
+ "title": "Avengers: Endgame",
+ "overview": "After the devastating events of Avengers: Infinity War, the universe is in ruins due to the efforts of the Mad Titan, Thanos. With the help of remaining allies, the Avengers must assemble once more in order to undo Thanos' actions and restore order to the universe once and for all, no matter what consequences may be in store.",
+ "release_date": "2019-04-24",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner / Hulk"
+ },
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow"
+ },
+ {
+ "actor": "Jeremy Renner",
+ "character": "Clint Barton / Hawkeye"
+ },
+ {
+ "actor": "Don Cheadle",
+ "character": "James Rhodes / War Machine"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Pepper Potts"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ }
+ ]
+ },
+ {
+ "title": "Avengers: Infinity War",
+ "overview": "As the Avengers and their allies have continued to protect the world from threats too large for any one hero to handle, a new danger has emerged from the cosmic shadows: Thanos. A despot of intergalactic infamy, his goal is to collect all six Infinity Stones, artifacts of unimaginable power, and use them to inflict his twisted will on all of reality. Everything the Avengers have fought for has led up to this moment - the fate of Earth and existence itself has never been more uncertain.",
+ "release_date": "2018-04-25",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow"
+ },
+ {
+ "actor": "Don Cheadle",
+ "character": "James \"Rhodey\" Rhodes / War Machine"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Virginia \"Pepper\" Potts"
+ },
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury (uncredited)"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner / The Hulk"
+ }
+ ]
+ },
+ {
+ "title": "Captain Marvel",
+ "overview": "The story follows Carol Danvers as she becomes one of the universe’s most powerful heroes when Earth is caught in the middle of a galactic war between two alien races. Set in the 1990s, Captain Marvel is an all-new adventure from a previously unseen period in the history of the Marvel Cinematic Universe.",
+ "release_date": "2019-03-06",
+ "characters": [
+ {
+ "actor": "Samuel L. Jackson",
+ "character": "Nick Fury"
+ },
+ {
+ "actor": "Clark Gregg",
+ "character": "Agent Phil Coulson"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America (uncredited)"
+ },
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow (uncredited)"
+ },
+ {
+ "actor": "Don Cheadle",
+ "character": "James 'Rhodey' Rhodes / War Machine (uncredited)"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner / The Hulk (uncredited)"
+ }
+ ]
+ },
+ {
+ "title": "Spider-Man: Homecoming",
+ "overview": "Following the events of Captain America: Civil War, Peter Parker, with the help of his mentor Tony Stark, tries to balance his life as an ordinary high school student in Queens, New York City, with fighting crime as his superhero alter ego Spider-Man as a new threat, the Vulture, emerges.",
+ "release_date": "2017-07-05",
+ "characters": [
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Gwyneth Paltrow",
+ "character": "Virginia \"Pepper\" Potts"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ }
+ ]
+ },
+ {
+ "title": "Team Thor",
+ "overview": "Discover what Thor was up to during the events of Captain America: Civil War.",
+ "release_date": "2016-08-28",
+ "characters": [
+ {
+ "actor": "Chris Hemsworth",
+ "character": "Thor Odinson"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner"
+ }
+ ]
+ },
+ {
+ "title": "Black Widow",
+ "overview": "Natasha Romanoff, also known as Black Widow, confronts the darker parts of her ledger when a dangerous conspiracy with ties to her past arises. Pursued by a force that will stop at nothing to bring her down, Natasha must deal with her history as a spy and the broken relationships left in her wake long before she became an Avenger.",
+ "release_date": "2020-10-28",
+ "characters": [
+ {
+ "actor": "Scarlett Johansson",
+ "character": "Natasha Romanoff / Black Widow"
+ },
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ }
+ ]
+ }
+]
+```
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/bulk-operations.md b/site/versioned_docs/version-4.3/developers/operations-api/bulk-operations.md
new file mode 100644
index 00000000..048ec5d4
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/bulk-operations.md
@@ -0,0 +1,136 @@
+---
+title: Bulk Operations
+---
+
+# Bulk Operations
+
+## CSV Data Load
+Ingests CSV data, provided directly in the operation, as an `insert`, `update` or `upsert` into the specified database table.
+
+* operation _(required)_ - must always be `csv_data_load`
+* action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert`
+* database _(optional)_ - name of the database where you are loading your data. The default is `data`
+* table _(required)_ - name of the table where you are loading your data
+* data _(required)_ - csv data to import into HarperDB
+
+### Body
+```json
+{
+ "operation": "csv_data_load",
+ "database": "dev",
+ "action": "insert",
+ "table": "breed",
+ "data": "id,name,section,country,image\n1,ENGLISH POINTER,British and Irish Pointers and Setters,GREAT BRITAIN,http://www.fci.be/Nomenclature/Illustrations/001g07.jpg\n2,ENGLISH SETTER,British and Irish Pointers and Setters,GREAT BRITAIN,http://www.fci.be/Nomenclature/Illustrations/002g07.jpg\n3,KERRY BLUE TERRIER,Large and medium sized Terriers,IRELAND,\n"
+}
+```
+
+### Response: 200
+```json
+{
+  "message": "Starting job with id 2fe25039-566e-4670-8bb3-2db3d4e07e69",
+  "job_id": "2fe25039-566e-4670-8bb3-2db3d4e07e69"
+}
+```
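+
+Because CSV loads run as asynchronous jobs, the returned `job_id` can be polled with the `get_job` operation (documented under Jobs). For example, assuming a local instance on the default operations port, placeholder credentials, and `curl` and `jq` available:
+
+```bash
+# Submit a trimmed version of the CSV load above and capture the returned job id
+# (localhost, port and credentials are placeholders - adjust for your instance)
+JOB_ID=$(curl -s http://localhost:9925 \
+  -H 'Content-Type: application/json' \
+  -u 'HDB_ADMIN:password' \
+  -d '{
+    "operation": "csv_data_load",
+    "database": "dev",
+    "action": "insert",
+    "table": "breed",
+    "data": "id,name\n1,ENGLISH POINTER\n"
+  }' | jq -r '.job_id')
+
+# Poll the job until its status reports COMPLETE
+curl -s http://localhost:9925 \
+  -H 'Content-Type: application/json' \
+  -u 'HDB_ADMIN:password' \
+  -d "{\"operation\": \"get_job\", \"id\": \"$JOB_ID\"}" | jq '.[0].status'
+```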
+
+---
+
+## CSV File Load
+Ingests CSV data, provided via a path on the local filesystem, as an `insert`, `update` or `upsert` into the specified database table.
+
+_Note: The CSV file must reside on the same machine on which HarperDB is running. For example, the path to a CSV on your computer will produce an error if your HarperDB instance is a cloud instance._
+
+* operation _(required)_ - must always be `csv_file_load`
+* action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert`
+* database _(optional)_ - name of the database where you are loading your data. The default is `data`
+* table _(required)_ - name of the table where you are loading your data
+* file_path _(required)_ - path to the CSV file on the host running HarperDB
+
+### Body
+```json
+{
+ "operation": "csv_file_load",
+ "action": "insert",
+ "database": "dev",
+ "table": "breed",
+ "file_path": "/home/user/imports/breeds.csv"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 3994d8e2-ec6a-43c4-8563-11c1df81870e",
+ "job_id": "3994d8e2-ec6a-43c4-8563-11c1df81870e"
+}
+```
+
+---
+
+## CSV URL Load
+Ingests CSV data, provided via URL, as an `insert`, `update` or `upsert` into the specified database table.
+
+* operation _(required)_ - must always be `csv_url_load`
+* action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert`
+* database _(optional)_ - name of the database where you are loading your data. The default is `data`
+* table _(required)_ - name of the table where you are loading your data
+* csv_url _(required)_ - URL to the csv
+
+### Body
+```json
+{
+ "operation": "csv_url_load",
+ "action": "insert",
+ "database": "dev",
+ "table": "breed",
+  "csv_url": "https://s3.amazonaws.com/complimentarydata/breeds.csv"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 332aa0a2-6833-46cd-88a6-ae375920436a",
+ "job_id": "332aa0a2-6833-46cd-88a6-ae375920436a"
+}
+```
+
+---
+
+## Import from S3
+This operation allows users to import CSV or JSON files from an AWS S3 bucket as an `insert`, `update` or `upsert`.
+
+* operation _(required)_ - must always be `import_from_s3`
+* action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert`
+* database _(optional)_ - name of the database where you are loading your data. The default is `data`
+* table _(required)_ - name of the table where you are loading your data
+* s3 _(required)_ - object containing required AWS S3 bucket info for operation:
+ * aws_access_key_id - AWS access key for authenticating into your S3 bucket
+ * aws_secret_access_key - AWS secret for authenticating into your S3 bucket
+ * bucket - AWS S3 bucket to import from
+ * key - the name of the file to import - _the file must include a valid file extension ('.csv' or '.json')_
+ * region - the region of the bucket
+
+### Body
+```json
+{
+ "operation": "import_from_s3",
+ "action": "insert",
+ "database": "dev",
+ "table": "dog",
+ "s3": {
+ "aws_access_key_id": "YOUR_KEY",
+ "aws_secret_access_key": "YOUR_SECRET_KEY",
+ "bucket": "BUCKET_NAME",
+ "key": "OBJECT_NAME",
+ "region": "BUCKET_REGION"
+ }
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 062a1892-6a0a-4282-9791-0f4c93b12e16",
+ "job_id": "062a1892-6a0a-4282-9791-0f4c93b12e16"
+}
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/clustering.md b/site/versioned_docs/version-4.3/developers/operations-api/clustering.md
new file mode 100644
index 00000000..300664c4
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/clustering.md
@@ -0,0 +1,457 @@
+---
+title: Clustering
+---
+
+# Clustering
+
+## Cluster Set Routes
+Adds a route/routes to either the hub or leaf server cluster configuration. This operation behaves as a PATCH/upsert, meaning it will add new routes to the configuration while leaving existing routes untouched.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `cluster_set_routes`
+* server _(required)_ - must always be `hub` or `leaf`, in most cases you should use `hub` here
+* routes _(required)_ - must be an array of objects, each containing a host and port:
+ * host - the host of the remote instance you are clustering to
+  * port - the clustering port of the remote instance you are clustering to, in most cases this is the value in `clustering.hubServer.cluster.network.port` on the remote instance's `harperdb-config.yaml`
+
+### Body
+```json
+{
+ "operation": "cluster_set_routes",
+ "server": "hub",
+ "routes": [
+ {
+ "host": "3.22.181.22",
+ "port": 12345
+ },
+ {
+ "host": "3.137.184.8",
+ "port": 12345
+ },
+ {
+ "host": "18.223.239.195",
+ "port": 12345
+ },
+ {
+ "host": "18.116.24.71",
+ "port": 12345
+ }
+ ]
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "cluster routes successfully set",
+ "set": [
+ {
+ "host": "3.22.181.22",
+ "port": 12345
+ },
+ {
+ "host": "3.137.184.8",
+ "port": 12345
+ },
+ {
+ "host": "18.223.239.195",
+ "port": 12345
+ },
+ {
+ "host": "18.116.24.71",
+ "port": 12345
+ }
+ ],
+ "skipped": []
+}
+```
+
+---
+
+## Cluster Get Routes
+Gets all the hub and leaf server routes from the config file.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `cluster_get_routes`
+
+### Body
+```json
+{
+ "operation": "cluster_get_routes"
+}
+```
+
+### Response: 200
+```json
+{
+ "hub": [
+ {
+ "host": "3.22.181.22",
+ "port": 12345
+ },
+ {
+ "host": "3.137.184.8",
+ "port": 12345
+ },
+ {
+ "host": "18.223.239.195",
+ "port": 12345
+ },
+ {
+ "host": "18.116.24.71",
+ "port": 12345
+ }
+ ],
+ "leaf": []
+}
+```
+
+---
+
+## Cluster Delete Routes
+Removes route(s) from the hub and/or leaf server routes arrays in the config file. Returns a deletion success message and arrays of deleted and skipped records.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `cluster_delete_routes`
+* routes _(required)_ - must be an array of route objects
+
+### Body
+
+```json
+{
+ "operation": "cluster_delete_routes",
+ "routes": [
+ {
+ "host": "18.116.24.71",
+ "port": 12345
+ }
+ ]
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "cluster routes successfully deleted",
+ "deleted": [
+ {
+ "host": "18.116.24.71",
+ "port": 12345
+ }
+ ],
+ "skipped": []
+}
+```
+
+
+---
+
+## Add Node
+Registers an additional HarperDB instance with associated subscriptions. Learn more about [HarperDB clustering here](../clustering/).
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `add_node`
+* node_name _(required)_ - the node name of the remote node
+* subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`:
+ * schema - the schema to replicate from
+ * table - the table to replicate from
+ * subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table
+ * publish - a boolean which determines if transactions on the local table should be replicated on the remote table
+  * start_time _(optional)_ - how far back to go to get transactions from the node being added. Must be in UTC, YYYY-MM-DDTHH:mm:ss.sssZ format
+
+### Body
+```json
+{
+ "operation": "add_node",
+ "node_name": "ec2-3-22-181-22",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "subscribe": false,
+ "publish": true,
+ "start_time": "2022-09-02T20:06:35.993Z"
+ }
+ ]
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Successfully added 'ec2-3-22-181-22' to manifest"
+}
+```
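+
+The subscription above only publishes local transactions to the remote node. For bidirectional replication of the same table, both `subscribe` and `publish` can be set to `true`; for example, using `curl` against a local instance with placeholder credentials:
+
+```bash
+# Replicate dev.dog in both directions with the remote node
+# (localhost, port and credentials are placeholders - adjust for your instance)
+curl -s http://localhost:9925 \
+  -H 'Content-Type: application/json' \
+  -u 'HDB_ADMIN:password' \
+  -d '{
+    "operation": "add_node",
+    "node_name": "ec2-3-22-181-22",
+    "subscriptions": [
+      { "schema": "dev", "table": "dog", "subscribe": true, "publish": true }
+    ]
+  }'
+```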
+
+---
+
+## Update Node
+Modifies an existing HarperDB instance registration and associated subscriptions. This operation behaves as a PATCH/upsert, meaning it will insert or update the specified replication configurations while leaving other table replication configuration untouched. Learn more about [HarperDB clustering here](../clustering/).
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `update_node`
+* node_name _(required)_ - the node name of the remote node you are updating
+* subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`:
+ * schema - the schema to replicate from
+ * table - the table to replicate from
+ * subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table
+ * publish - a boolean which determines if transactions on the local table should be replicated on the remote table
+  * start_time _(optional)_ - how far back to go to get transactions from the node being updated. Must be in UTC, YYYY-MM-DDTHH:mm:ss.sssZ format
+
+### Body
+```json
+{
+ "operation": "update_node",
+ "node_name": "ec2-18-223-239-195",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "subscribe": true,
+ "publish": false,
+ "start_time": "2022-09-02T20:06:35.993Z"
+ }
+ ]
+}
+```
+
+### Response: 200
+```json
+{
+  "message": "Successfully updated 'ec2-18-223-239-195'"
+}
+```
+
+---
+
+## Set Node Replication
+A more aptly named alias for `add_node` and `update_node`. This operation behaves as a PATCH/upsert, meaning it will insert or update the specified replication configurations while leaving other table replication configuration untouched. The `database` (aka `schema`) parameter is optional and defaults to `data`.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `set_node_replication`
+* node_name _(required)_ - the node name of the remote node you are updating
+* subscriptions _(required)_ - The relationship created between nodes. Must be an object array that includes `table`, `subscribe` and `publish`:
+  * database _(optional)_ - the database to replicate from
+  * table _(required)_ - the table to replicate from
+  * subscribe _(required)_ - a boolean which determines if transactions on the remote table should be replicated on the local table
+  * publish _(required)_ - a boolean which determines if transactions on the local table should be replicated on the remote table
+
+### Body
+```json
+{
+ "operation": "set_node_replication",
+ "node_name": "node1",
+ "subscriptions": [
+ {
+ "table": "dog",
+ "subscribe": true,
+ "publish": true
+ }
+ ]
+}
+```
+
+### Response: 200
+```json
+{
+  "message": "Successfully updated 'node1'"
+}
+```
+
+---
+
+## Cluster Status
+Returns an array of status objects from a cluster. A status object will contain the clustering node name, whether or not clustering is enabled, and a list of possible connections. Learn more about [HarperDB clustering here](../clustering/).
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `cluster_status`
+
+### Body
+```json
+{
+ "operation": "cluster_status"
+}
+```
+
+### Response: 200
+```json
+{
+ "node_name": "ec2-18-221-143-69",
+ "is_enabled": true,
+ "connections": [
+ {
+ "node_name": "ec2-3-22-181-22",
+ "status": "open",
+ "ports": {
+ "clustering": 12345,
+ "operations_api": 9925
+ },
+ "latency_ms": 13,
+ "uptime": "30d 1h 18m 8s",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "publish": true,
+ "subscribe": true
+ }
+ ]
+ }
+ ]
+}
+```
+
+
+---
+
+## Cluster Network
+Returns an object array of enmeshed nodes. Each node object will contain the name of the node, the amount of time (in milliseconds) it took for it to respond, the names of the nodes it is enmeshed with and the routes set in its config file. Learn more about [HarperDB clustering here](../clustering/).
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `cluster_network`
+* timeout _(optional)_ - the amount of time in milliseconds to wait for a response from the network. Must be a number
+* connected_nodes _(optional)_ - set to `true` to omit `connected_nodes` from the response. Must be a boolean. Defaults to `false`
+* routes _(optional)_ - set to `true` to omit `routes` from the response. Must be a boolean. Defaults to `false`
+
+### Body
+
+```json
+{
+ "operation": "cluster_network"
+}
+```
+
+### Response: 200
+```json
+{
+ "nodes": [
+ {
+ "name": "local_node",
+ "response_time": 4,
+ "connected_nodes": ["ec2-3-142-255-78"],
+ "routes": [
+ {
+ "host": "3.142.255.78",
+ "port": 9932
+ }
+ ]
+ },
+ {
+ "name": "ec2-3-142-255-78",
+ "response_time": 57,
+ "connected_nodes": ["ec2-3-12-153-124", "ec2-3-139-236-138", "local_node"],
+ "routes": []
+ }
+ ]
+}
+```
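+
+The optional parameters can be combined to tune the call; for example, waiting up to 10 seconds for responses while omitting each node's `routes` array, using `curl` against a local instance with placeholder credentials:
+
+```bash
+# Allow a longer timeout and omit routes from the response
+# (localhost, port and credentials are placeholders - adjust for your instance)
+curl -s http://localhost:9925 \
+  -H 'Content-Type: application/json' \
+  -u 'HDB_ADMIN:password' \
+  -d '{
+    "operation": "cluster_network",
+    "timeout": 10000,
+    "routes": true
+  }'
+```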
+
+---
+
+## Remove Node
+Removes a HarperDB instance and associated subscriptions from the cluster. Learn more about [HarperDB clustering here](../clustering/).
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `remove_node`
+* node_name _(required)_ - the name of the node you are removing (de-registering)
+
+### Body
+```json
+{
+ "operation": "remove_node",
+ "node_name": "ec2-3-22-181-22"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Successfully removed 'ec2-3-22-181-22' from manifest"
+}
+```
+
+---
+
+## Configure Cluster
+Bulk create/remove subscriptions for any number of remote nodes. Resets and replaces any existing clustering setup.
+Learn more about [HarperDB clustering here](../clustering/).
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `configure_cluster`
+* connections _(required)_ - must be an object array with each object containing `node_name` and `subscriptions` for that node
+
+### Body
+```json
+{
+ "operation": "configure_cluster",
+ "connections": [
+ {
+ "node_name": "ec2-3-137-184-8",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "subscribe": true,
+ "publish": false
+ }
+ ]
+ },
+ {
+ "node_name": "ec2-18-223-239-195",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "subscribe": true,
+ "publish": true
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Cluster successfully configured."
+}
+```
+
+---
+
+## Purge Stream
+
+Purges messages from a stream.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `purge_stream`
+* database _(required)_ - the name of the database where the stream's table resides
+* table _(required)_ - the name of the table the stream belongs to
+* options _(optional)_ - control how many messages get purged. Options are:
+ * `keep` - purge will keep this many most recent messages
+ * `seq` - purge all messages up to, but not including, this sequence
+
+### Body
+```json
+{
+ "operation": "purge_stream",
+ "database": "dev",
+ "table": "dog",
+ "options": {
+ "keep": 100
+ }
+}
+```
+
+---
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/components.md b/site/versioned_docs/version-4.3/developers/operations-api/components.md
new file mode 100644
index 00000000..17ba5f0a
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/components.md
@@ -0,0 +1,291 @@
+---
+title: Components
+---
+
+# Components
+
+## Add Component
+
+Creates a new component project in the component root directory using a predefined template.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `add_component`
+* project _(required)_ - the name of the project you wish to create
+
+### Body
+```json
+{
+ "operation": "add_component",
+ "project": "my-component"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Successfully added project: my-component"
+}
+```
+---
+## Deploy Component
+
+Deploys a component using either a base64-encoded string representation of a `.tar` file (the output from `package_component`) or a package value, which can be any valid NPM reference, such as a GitHub repo, an NPM package, a tarball, a local directory or a website.
+
+If deploying with the `payload` option, HarperDB will decode the base64-encoded string, reconstitute the `.tar` file of your project folder, and extract it to the component root project directory.
+
+If deploying with the `package` option, the package value will be written to `harperdb-config.yaml`. Then `npm install` is used to install the component in the `node_modules` directory located in the hdb root. The value is a package reference, which should generally be a [URL reference, as described here](https://docs.npmjs.com/cli/v10/configuring-npm/package-json#urls-as-dependencies) (it is also possible to include NPM registered packages and file paths). URL package references can directly reference tarballs that can be installed as a package. However, the most common and recommended usage is to install from a Git repository, which can be combined with a tag to deploy a specific version directly from versioned source control. When using tags, we highly recommend that you use the `semver` directive to ensure consistent and reliable installation by NPM. In addition to tags, you can also reference branches or commit numbers. Here is an example URL package reference to a (public) Git repository that doesn't require authentication:
+```
+https://github.com/HarperDB/application-template#semver:v1.0.0
+```
+or this can be shortened to:
+```
+HarperDB/application-template#semver:v1.0.0
+```
+
+You can also install from a private repository if you have SSH keys installed on the server:
+```
+git+ssh://git@github.com:my-org/my-app.git#semver:v1.0.0
+```
+Or you can use a GitHub token:
+```
+https://@github.com/my-org/my-app#semver:v1.0.0
+```
+Or you can use a GitLab Project Access Token:
+```
+https://my-project:@gitlab.com/my-group/my-project#semver:v1.0.0
+```
+Note that your component will be installed by NPM. If your component has dependencies, NPM will attempt to download and install these as well. NPM normally uses the public registry.npmjs.org registry. If you are installing without network access to this registry, you may wish to define [custom registry locations](https://docs.npmjs.com/cli/v8/configuring-npm/npmrc) for any dependencies that need to be installed. NPM will install the deployed component and any dependencies in node_modules in the hdb root directory (typically `~/hdb/node_modules`).
+
+_Note: After deploying a component, a restart may be required._
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `deploy_component`
+* project _(required)_ - the name of the project you wish to deploy
+* package _(optional)_ - this can be any valid GitHub or NPM reference
+* payload _(optional)_ - a base64-encoded string representation of the .tar file. Must be a string
+
+### Body
+
+```json
+{
+ "operation": "deploy_component",
+ "project": "my-component",
+ "payload": "A very large base64-encoded string representation of the .tar file"
+}
+```
+
+```json
+{
+ "operation": "deploy_component",
+ "project": "my-component",
+ "package": "HarperDB/application-template"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "Successfully deployed: my-component"
+}
+```
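+
+As noted above, a restart may be required before a newly deployed component is served. For example, deploying a tagged version from a public Git repository and then issuing a `restart` (covered under Utilities), using `curl` against a local instance with placeholder credentials:
+
+```bash
+# Deploy a specific tagged version from a public Git repository
+# (localhost, port and credentials are placeholders - adjust for your instance)
+curl -s http://localhost:9925 \
+  -H 'Content-Type: application/json' \
+  -u 'HDB_ADMIN:password' \
+  -d '{
+    "operation": "deploy_component",
+    "project": "my-component",
+    "package": "HarperDB/application-template#semver:v1.0.0"
+  }'
+
+# Restart so the newly deployed component is loaded
+curl -s http://localhost:9925 \
+  -H 'Content-Type: application/json' \
+  -u 'HDB_ADMIN:password' \
+  -d '{ "operation": "restart" }'
+```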
+---
+## Package Component
+
+Creates a temporary `.tar` file of the specified project folder, reads it into a base64-encoded string, and returns an object containing the project name and that string as the `payload`.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `package_component`
+* project _(required)_ - the name of the project you wish to package
+* skip_node_modules _(optional)_ - if true, the project's node_modules directory will be excluded from the `.tar` file. Must be a boolean
+
+### Body
+
+```json
+{
+ "operation": "package_component",
+ "project": "my-component",
+ "skip_node_modules": true
+}
+```
+
+### Response: 200
+
+```json
+{
+ "project": "my-component",
+ "payload": "LgAAAAAAAAAAAAAAAAAAA...AAAAAAAAAAAAAAAAAAAAAAAAAAAAA=="
+}
+```
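+
+The returned `payload` is exactly what `deploy_component` accepts, so a project can be copied between instances by chaining the two operations. For example, with `curl` and `jq`, placeholder hostnames (`source-instance`, `target-instance`) and placeholder credentials:
+
+```bash
+# Package my-component on the source instance and capture the base64 payload
+PAYLOAD=$(curl -s http://source-instance:9925 \
+  -H 'Content-Type: application/json' \
+  -u 'HDB_ADMIN:password' \
+  -d '{ "operation": "package_component", "project": "my-component", "skip_node_modules": true }' \
+  | jq -r '.payload')
+
+# Deploy the same payload to the target instance (the payload can be very large)
+curl -s http://target-instance:9925 \
+  -H 'Content-Type: application/json' \
+  -u 'HDB_ADMIN:password' \
+  -d "{ \"operation\": \"deploy_component\", \"project\": \"my-component\", \"payload\": \"$PAYLOAD\" }"
+```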
+---
+## Drop Component
+
+Deletes a file from inside the component project or deletes the complete project.
+
+**If only `project` is provided, it will delete all of that project's local files and folders.**
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `drop_component`
+* project _(required)_ - the name of the project you wish to delete or to delete from if using the `file` parameter
+* file _(optional)_ - the path relative to your project folder of the file you wish to delete
+
+### Body
+
+```json
+{
+ "operation": "drop_component",
+ "project": "my-component",
+ "file": "utils/myUtils.js"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "Successfully dropped: my-component/utils/myUtils.js"
+}
+```
+---
+## Get Components
+
+Gets all local component files and folders, plus any component config from `harperdb-config.yaml`.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `get_components`
+
+### Body
+
+```json
+{
+ "operation": "get_components"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "name": "components",
+ "entries": [
+ {
+ "package": "HarperDB/application-template",
+ "name": "deploy-test-gh"
+ },
+ {
+ "package": "@fastify/compress",
+ "name": "fast-compress"
+ },
+ {
+ "name": "my-component",
+ "entries": [
+ {
+ "name": "LICENSE",
+ "mtime": "2023-08-22T16:00:40.286Z",
+ "size": 1070
+ },
+ {
+ "name": "index.md",
+ "mtime": "2023-08-22T16:00:40.287Z",
+ "size": 1207
+ },
+ {
+ "name": "config.yaml",
+ "mtime": "2023-08-22T16:00:40.287Z",
+ "size": 1069
+ },
+ {
+ "name": "package.json",
+ "mtime": "2023-08-22T16:00:40.288Z",
+ "size": 145
+ },
+ {
+ "name": "resources.js",
+ "mtime": "2023-08-22T16:00:40.289Z",
+ "size": 583
+ },
+ {
+ "name": "schema.graphql",
+ "mtime": "2023-08-22T16:00:40.289Z",
+ "size": 466
+ },
+ {
+ "name": "utils",
+ "entries": [
+ {
+ "name": "commonUtils.js",
+ "mtime": "2023-08-22T16:00:40.289Z",
+ "size": 583
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+---
+## Get Component File
+
+Gets the contents of a file inside a component project.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `get_component_file`
+* project _(required)_ - the name of the project where the file is located
+* file _(required)_ - the path relative to your project folder of the file you wish to view
+* encoding _(optional)_ - the encoding that will be passed to the read file call. Defaults to `utf8`
+
+### Body
+
+```json
+{
+ "operation": "get_component_file",
+ "project": "my-component",
+ "file": "resources.js"
+}
+```
+
+### Response: 200
+
+```json
+{
+  "message": "/**export class MyCustomResource extends tables.TableName {\n\t// we can define our own custom POST handler\n\tpost(content) {\n\t\t// do something with the incoming content;\n\t\treturn super.post(content);\n\t}\n\t// or custom GET handler\n\tget() {\n\t\t// we can modify this resource before returning\n\t\treturn super.get();\n\t}\n}\n */\n// we can also define a custom resource without a specific table\nexport class Greeting extends Resource {\n\t// a \"Hello, world!\" handler\n\tget() {\n\t\treturn { greeting: 'Hello, world!' };\n\t}\n}"
+}
+```
+---
+## Set Component File
+
+Creates or updates a file inside a component project.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `set_component_file`
+* project _(required)_ - the name of the project the file is located in
+* file _(required)_ - the path relative to your project folder of the file you wish to set
+* payload _(required)_ - what will be written to the file
+* encoding _(optional)_ - the encoding that will be passed to the write file call. Defaults to `utf8`
+
+### Body
+
+```json
+{
+ "operation": "set_component_file",
+ "project": "my-component",
+ "file": "test.js",
+ "payload": "console.log('hello world')"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "Successfully set component: test.js"
+}
+```
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/custom-functions.md b/site/versioned_docs/version-4.3/developers/operations-api/custom-functions.md
new file mode 100644
index 00000000..bf9537fc
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/custom-functions.md
@@ -0,0 +1,276 @@
+---
+title: Custom Functions
+---
+
+# Custom Functions
+
+## Custom Functions Status
+
+Returns the state of the Custom Functions server. This includes whether it is enabled, upon which port it is listening, and where its root project directory is located on the host machine.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `custom_functions_status`
+
+### Body
+```json
+{
+ "operation": "custom_functions_status"
+}
+```
+
+### Response: 200
+```json
+{
+ "is_enabled": true,
+ "port": 9926,
+ "directory": "/Users/myuser/hdb/custom_functions"
+}
+```
+
+---
+
+## Get Custom Functions
+
+Returns an array of projects within the Custom Functions root project directory. Each project has details including each of the files in the routes and helpers directories, and the total file count in the static folder.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `get_custom_functions`
+
+### Body
+
+```json
+{
+ "operation": "get_custom_functions"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "dogs": {
+ "routes": ["examples"],
+ "helpers":["example"],
+ "static":3
+ }
+}
+```
+
+---
+
+## Get Custom Function
+
+Returns the content of the specified file as text. HarperDB Studio uses this call to render the file content in its built-in code editor.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `get_custom_function`
+* project _(required)_ - the name of the project containing the file for which you wish to get content
+* type _(required)_ - the name of the sub-folder containing the file for which you wish to get content - must be either routes or helpers
+* file _(required)_ - The name of the file for which you wish to get content - should not include the file extension (which is always .js)
+
+### Body
+
+```json
+{
+ "operation": "get_custom_function",
+ "project": "dogs",
+ "type": "helpers",
+ "file": "example"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "'use strict';\n\nconst https = require('https');\n\nconst authRequest = (options) => {\n return new Promise((resolve, reject) => {\n const req = https.request(options, (res) => {\n res.setEncoding('utf8');\n let responseBody = '';\n\n res.on('data', (chunk) => {\n responseBody += chunk;\n });\n\n res.on('end', () => {\n resolve(JSON.parse(responseBody));\n });\n });\n\n req.on('error', (err) => {\n reject(err);\n });\n\n req.end();\n });\n};\n\nconst customValidation = async (request,logger) => {\n const options = {\n hostname: 'jsonplaceholder.typicode.com',\n port: 443,\n path: '/todos/1',\n method: 'GET',\n headers: { authorization: request.headers.authorization },\n };\n\n const result = await authRequest(options);\n\n /*\n * throw an authentication error based on the response body or statusCode\n */\n if (result.error) {\n const errorString = result.error || 'Sorry, there was an error authenticating your request';\n logger.error(errorString);\n throw new Error(errorString);\n }\n return request;\n};\n\nmodule.exports = customValidation;\n"
+}
+```
+
+---
+
+## Set Custom Function
+
+Updates the content of the specified file. HarperDB Studio uses this call to save any changes made through its built-in code editor.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `set_custom_function`
+* project _(required)_ - the name of the project containing the file for which you wish to set content
+* type _(required)_ - the name of the sub-folder containing the file for which you wish to set content - must be either routes or helpers
+* file _(required)_ - the name of the file for which you wish to set content - should not include the file extension (which is always .js)
+* function_content _(required)_ - the content you wish to save into the specified file
+
+### Body
+
+```json
+{
+ "operation": "set_custom_function",
+ "project": "dogs",
+ "type": "helpers",
+ "file": "example",
+ "function_content": "'use strict';\n\nconst https = require('https');\n\nconst authRequest = (options) => {\n return new Promise((resolve, reject) => {\n const req = https.request(options, (res) => {\n res.setEncoding('utf8');\n let responseBody = '';\n\n res.on('data', (chunk) => {\n responseBody += chunk;\n });\n\n res.on('end', () => {\n resolve(JSON.parse(responseBody));\n });\n });\n\n req.on('error', (err) => {\n reject(err);\n });\n\n req.end();\n });\n};\n\nconst customValidation = async (request,logger) => {\n const options = {\n hostname: 'jsonplaceholder.typicode.com',\n port: 443,\n path: '/todos/1',\n method: 'GET',\n headers: { authorization: request.headers.authorization },\n };\n\n const result = await authRequest(options);\n\n /*\n * throw an authentication error based on the response body or statusCode\n */\n if (result.error) {\n const errorString = result.error || 'Sorry, there was an error authenticating your request';\n logger.error(errorString);\n throw new Error(errorString);\n }\n return request;\n};\n\nmodule.exports = customValidation;\n"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "Successfully updated custom function: example.js"
+}
+```
+
+---
+
+## Drop Custom Function
+
+Deletes the specified file.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `drop_custom_function`
+* project _(required)_ - the name of the project containing the file you wish to delete
+* type _(required)_ - the name of the sub-folder containing the file you wish to delete. Must be either routes or helpers
+* file _(required)_ - the name of the file you wish to delete. Should not include the file extension (which is always .js)
+
+### Body
+
+```json
+{
+ "operation": "drop_custom_function",
+ "project": "dogs",
+ "type": "helpers",
+ "file": "example"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message":"Successfully deleted custom function: example.js"
+}
+```
+
+---
+
+## Add Custom Function Project
+
+Creates a new project folder in the Custom Functions root project directory. It also inserts into the new directory the contents of our Custom Functions Project template, which is available publicly here: https://github.com/HarperDB/harperdb-custom-functions-template.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `add_custom_function_project`
+* project _(required)_ - the name of the project you wish to create
+
+### Body
+
+```json
+{
+ "operation": "add_custom_function_project",
+ "project": "dogs"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message":"Successfully created custom function project: dogs"
+}
+```
+
+---
+
+## Drop Custom Function Project
+
+Deletes the specified project folder and all of its contents.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `drop_custom_function_project`
+* project _(required)_ - the name of the project you wish to delete
+
+### Body
+
+```json
+{
+ "operation": "drop_custom_function_project",
+ "project": "dogs"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "Successfully deleted project: dogs"
+}
+```
+
+---
+
+## Package Custom Function Project
+
+Creates a `.tar` file of the specified project folder, reads it into a base64-encoded string, and returns an object containing the project name, the base64-encoded `payload`, and the path to the temporary `file`.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `package_custom_function_project`
+* project _(required)_ - the name of the project you wish to package up for deployment
+* skip_node_modules _(optional)_ - if true, the project's node_modules directory will be excluded from the `.tar` file. Must be a boolean.
+
+### Body
+
+```json
+{
+ "operation": "package_custom_function_project",
+ "project": "dogs",
+ "skip_node_modules": true
+}
+```
+
+### Response: 200
+
+```json
+{
+ "project": "dogs",
+ "payload": "LgAAAAAAAAAAAAAAAAAAA...AAAAAAAAAAAAAAAAAAAAAAAAAAAAA==",
+ "file": "/tmp/d27f1154-5d82-43f0-a5fb-a3018f366081.tar"
+}
+```
+
+---
+
+## Deploy Custom Function Project
+
+Takes the output of `package_custom_function_project`, decodes the base64-encoded string, reconstitutes the `.tar` file of your project folder, and extracts it to the Custom Functions root project directory.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `deploy_custom_function_project`
+* project _(required)_ - the name of the project you wish to deploy. Must be a string
+* payload _(required)_ - a base64-encoded string representation of the .tar file. Must be a string
+
+
+### Body
+
+```json
+{
+ "operation": "deploy_custom_function_project",
+ "project": "dogs",
+  "payload": "A very large base64-encoded string representation of the .tar file"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "Successfully deployed project: dogs"
+}
+```
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/databases-and-tables.md b/site/versioned_docs/version-4.3/developers/operations-api/databases-and-tables.md
new file mode 100644
index 00000000..68f30089
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/databases-and-tables.md
@@ -0,0 +1,362 @@
+---
+title: Databases and Tables
+---
+
+# Databases and Tables
+
+## Describe All
+Returns the definitions of all databases and tables within the instance. Record counts above 5,000 records are estimated, as determining the exact count can be expensive. When the record count is estimated, this is indicated by the inclusion of a confidence interval of `estimated_record_range`. If you need the exact count, you can include `"exact_count": true` in the operation, but be aware that this requires a full table scan (which may be expensive).
+
+* operation _(required)_ - must always be `describe_all`
+
+### Body
+```json
+{
+ "operation": "describe_all"
+}
+```
+
+### Response: 200
+```json
+{
+ "dev": {
+ "dog": {
+ "schema": "dev",
+ "name": "dog",
+ "hash_attribute": "id",
+ "audit": true,
+ "schema_defined": false,
+ "attributes": [
+ {
+ "attribute": "id",
+ "indexed": true,
+ "is_primary_key": true
+ },
+ {
+ "attribute": "__createdtime__",
+ "indexed": true
+ },
+ {
+ "attribute": "__updatedtime__",
+ "indexed": true
+ },
+ {
+ "attribute": "type",
+ "indexed": true
+ }
+ ],
+ "clustering_stream_name": "dd9e90c2689151ab812e0f2d98816bff",
+ "record_count": 4000,
+ "estimated_record_range": [3976, 4033],
+ "last_updated_record": 1697658683698.4504
+ }
+ }
+}
+```
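+
+If an estimate is not sufficient, the same request can ask for exact counts at the cost of a full table scan; for example, using `curl` against a local instance with placeholder credentials:
+
+```bash
+# Request exact record counts (forces a full table scan, so use sparingly)
+# (localhost, port and credentials are placeholders - adjust for your instance)
+curl -s http://localhost:9925 \
+  -H 'Content-Type: application/json' \
+  -u 'HDB_ADMIN:password' \
+  -d '{ "operation": "describe_all", "exact_count": true }'
+```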
+
+---
+
+## Describe Database
+Returns the definitions of all tables within the specified database.
+
+* operation _(required)_ - must always be `describe_database`
+* database _(optional)_ - database where the table you wish to describe lives. The default is `data`
+
+### Body
+```json
+{
+ "operation": "describe_database",
+ "database": "dev"
+}
+```
+
+### Response: 200
+```json
+{
+ "dog": {
+ "schema": "dev",
+ "name": "dog",
+ "hash_attribute": "id",
+ "audit": true,
+ "schema_defined": false,
+ "attributes": [
+ {
+ "attribute": "id",
+ "indexed": true,
+ "is_primary_key": true
+ },
+ {
+ "attribute": "__createdtime__",
+ "indexed": true
+ },
+ {
+ "attribute": "__updatedtime__",
+ "indexed": true
+ },
+ {
+ "attribute": "type",
+ "indexed": true
+ }
+ ],
+ "clustering_stream_name": "dd9e90c2689151ab812e0f2d98816bff",
+ "record_count": 4000,
+ "estimated_record_range": [3976, 4033],
+ "last_updated_record": 1697658683698.4504
+ }
+}
+```
+
+---
+
+## Describe Table
+Returns the definition of the specified table.
+
+* operation _(required)_ - must always be `describe_table`
+* table _(required)_ - table you wish to describe
+* database _(optional)_ - database where the table you wish to describe lives. The default is `data`
+
+### Body
+```json
+{
+ "operation": "describe_table",
+ "table": "dog"
+}
+```
+
+### Response: 200
+```json
+{
+ "schema": "dev",
+ "name": "dog",
+ "hash_attribute": "id",
+ "audit": true,
+ "schema_defined": false,
+ "attributes": [
+ {
+ "attribute": "id",
+ "indexed": true,
+ "is_primary_key": true
+ },
+ {
+ "attribute": "__createdtime__",
+ "indexed": true
+ },
+ {
+ "attribute": "__updatedtime__",
+ "indexed": true
+ },
+ {
+ "attribute": "type",
+ "indexed": true
+ }
+ ],
+ "clustering_stream_name": "dd9e90c2689151ab812e0f2d98816bff",
+ "record_count": 4000,
+ "estimated_record_range": [3976, 4033],
+ "last_updated_record": 1697658683698.4504
+}
+```
+
+---
+
+## Create Database
+Create a new database.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `create_database`
+* database _(optional)_ - name of the database you are creating. The default is `data`
+
+### Body
+```json
+{
+ "operation": "create_database",
+ "database": "dev"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "database 'dev' successfully created"
+}
+```
+
+---
+
+## Drop Database
+Drop an existing database. NOTE: Dropping a database will delete all tables and all of their records in that database.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - this should always be `drop_database`
+* database _(required)_ - name of the database you are dropping
+
+### Body
+```json
+{
+ "operation": "drop_database",
+ "database": "dev"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "successfully deleted 'dev'"
+}
+```
+
+---
+
+## Create Table
+Create a new table within a database.
+
+_Operation is restricted to super_user roles only_
+
+
+* operation _(required)_ - must always be `create_table`
+* database _(optional)_ - name of the database where you want your table to live. If the database does not exist, it will be created. If the `database` property is not provided it will default to `data`.
+* table _(required)_ - name of the table you are creating
+* primary_key _(required)_ - primary key for the table
+* attributes _(optional)_ - an array of attributes that specifies the schema for the table. When attributes are supplied, the table will not be considered a "dynamic schema" table, and attributes will not be auto-added when records with new properties are inserted. Each attribute is specified as:
+ * name _(required)_ - the name of the attribute
+ * indexed _(optional)_ - indicates if the attribute should be indexed
+ * type _(optional)_ - specifies the data type of the attribute (can be String, Int, Float, Date, ID, Any)
+* expiration _(optional)_ - specifies the time-to-live or expiration of records in the table before they are evicted (records are not evicted on any timer if not specified). This is specified in seconds.
+
+### Body
+```json
+{
+ "operation": "create_table",
+ "database": "dev",
+ "table": "dog",
+ "primary_key": "id"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "table 'dev.dog' successfully created."
+}
+```
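+
+A table can also be created with a defined schema and a record expiration, using the optional `attributes` and `expiration` parameters described above; for example, using `curl` against a local instance with placeholder credentials:
+
+```bash
+# Create a schema-defined table whose records expire after one day (86400 seconds)
+# (localhost, port and credentials are placeholders - adjust for your instance)
+curl -s http://localhost:9925 \
+  -H 'Content-Type: application/json' \
+  -u 'HDB_ADMIN:password' \
+  -d '{
+    "operation": "create_table",
+    "database": "dev",
+    "table": "dog",
+    "primary_key": "id",
+    "attributes": [
+      { "name": "id", "type": "ID", "indexed": true },
+      { "name": "type", "type": "String", "indexed": true },
+      { "name": "age", "type": "Int" }
+    ],
+    "expiration": 86400
+  }'
+```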
+
+---
+
+## Drop Table
+Drop an existing database table. NOTE: Dropping a table will delete all associated records in that table.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - this should always be `drop_table`
+* database _(optional)_ - database where the table you are dropping lives. The default is `data`
+* table _(required)_ - name of the table you are dropping
+
+### Body
+
+```json
+{
+ "operation": "drop_table",
+ "database": "dev",
+ "table": "dog"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "successfully deleted table 'dev.dog'"
+}
+```
+
+---
+
+## Create Attribute
+Create a new attribute within the specified table. **The create_attribute operation can be used by admins wishing to pre-define attributes, for example when setting role-based permissions.**
+
+_Note: HarperDB will automatically create new attributes on insert and update if they do not already exist within the database._
+
+* operation _(required)_ - must always be `create_attribute`
+* database _(optional)_ - name of the database containing the table to which you are adding the attribute. The default is `data`
+* table _(required)_ - name of the table where you want your attribute to live
+* attribute _(required)_ - name for the attribute
+
+### Body
+```json
+{
+ "operation": "create_attribute",
+ "database": "dev",
+ "table": "dog",
+ "attribute": "is_adorable"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "inserted 1 of 1 records",
+ "skipped_hashes": [],
+ "inserted_hashes": [
+ "383c0bef-5781-4e1c-b5c8-987459ad0831"
+ ]
+}
+```
+
+---
+
+## Drop Attribute
+Drop an existing attribute from the specified table. NOTE: Dropping an attribute will delete all associated attribute values in that table.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - this should always be `drop_attribute`
+* database _(optional)_ - database where the table you are dropping lives. The default is `data`
+* table _(required)_ - table where the attribute you are dropping lives
+* attribute _(required)_ - attribute that you intend to drop
+
+### Body
+
+```json
+{
+ "operation": "drop_attribute",
+ "database": "dev",
+ "table": "dog",
+ "attribute": "is_adorable"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "successfully deleted attribute 'is_adorable'"
+}
+```
+
+---
+
+## Get Backup
+This will return a snapshot of the requested database. This provides a means for backing up the database through the operations API. The response will be the raw database file (in binary format), which can later be restored as a database file by copying into the appropriate hdb/databases directory (with HarperDB not running). The returned file is a snapshot of the database at the moment in time that the get_backup operation begins. This also supports backing up individual tables in a database. However, this is a more expensive operation than backing up a database in whole, and will lose any transactional atomicity between writes across tables, so generally it is recommended that you backup the entire database.
+
+It is important to note that trying to copy a database file that is in use (HarperDB actively running and writing to the file) using standard file copying tools is not safe (the copied file will likely be corrupt), which is why using this snapshot operation is recommended for backups (volume snapshots are also a good way to backup HarperDB databases).
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - this should always be `get_backup`
+* database _(required)_ - this is the database that will be snapshotted and returned
+* table _(optional)_ - this will specify a specific table to backup
+* tables _(optional)_ - this will specify a specific set of tables to backup
+
+### Body
+
+```json
+{
+ "operation": "get_backup",
+ "database": "dev"
+}
+```
+
+### Response: 200
+```
+The database in raw binary data format
+```
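+
+Because the response is the raw database file rather than JSON, it should be written directly to disk; for example, using `curl` against a local instance with placeholder credentials:
+
+```bash
+# Save a snapshot of the dev database to a local file
+# (localhost, port, credentials and the output filename are placeholders)
+curl -s http://localhost:9925 \
+  -H 'Content-Type: application/json' \
+  -u 'HDB_ADMIN:password' \
+  -d '{ "operation": "get_backup", "database": "dev" }' \
+  --output dev-backup.mdb
+```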
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/index.md b/site/versioned_docs/version-4.3/developers/operations-api/index.md
new file mode 100644
index 00000000..cf2db22d
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/index.md
@@ -0,0 +1,51 @@
+---
+title: Operations API
+---
+
+# Operations API
+
+The operations API provides a full set of capabilities for configuring, deploying, administering, and controlling HarperDB. To send operations to the operations API, you send a POST request to the operations API endpoint, which [defaults to port 9925](../../../deployments/configuration), on the root path, where the body is the operation object. These requests need to be authenticated, which can be done with [basic auth](../../../developers/security/basic-auth) or [JWT authentication](../../../developers/security/jwt-auth). For example, a request to create a table would be performed as:
+
+```http
+POST http://my-harperdb-server:9925/
+Authorization: Basic YourBase64EncodedInstanceUser:Pass
+Content-Type: application/json
+
+{
+ "operation": "create_table",
+ "table": "my-table"
+}
+```
+
+The operations API reference is available below and categorized by topic:
+
+* [Quick Start Examples](./quickstart-examples)
+* [Databases and Tables](./databases-and-tables)
+* [NoSQL Operations](./nosql-operations)
+* [Bulk Operations](./bulk-operations)
+* [Users and Roles](./users-and-roles)
+* [Clustering](./clustering)
+* [Components](./components)
+* [Registration](./registration)
+* [Jobs](./jobs)
+* [Logs](./logs)
+* [Utilities](./utilities)
+* [Token Authentication](./token-authentication)
+* [SQL Operations](./sql-operations)
+* [Advanced JSON SQL Examples](./advanced-json-sql-examples)
+
+* [Past Release API Documentation](https://olddocs.harperdb.io)
+
+## More Examples
+
+Here is an example of using `curl` to make an operations API request:
+
+```bash
+curl --location --request POST 'https://instance-subdomain.harperdbcloud.com' \
+--header 'Authorization: Basic YourBase64EncodedInstanceUser:Pass' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+"operation": "create_schema",
+"schema": "dev"
+}'
+```
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/jobs.md b/site/versioned_docs/version-4.3/developers/operations-api/jobs.md
new file mode 100644
index 00000000..8b05357f
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/jobs.md
@@ -0,0 +1,82 @@
+---
+title: Jobs
+---
+
+# Jobs
+
+## Get Job
+Returns job status, metrics, and messages for the specified job ID.
+
+* operation _(required)_ - must always be `get_job`
+* id _(required)_ - the id of the job you wish to view
+
+### Body
+
+```json
+{
+ "operation": "get_job",
+ "id": "4a982782-929a-4507-8794-26dae1132def"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "__createdtime__": 1611615798782,
+ "__updatedtime__": 1611615801207,
+ "created_datetime": 1611615798774,
+ "end_datetime": 1611615801206,
+ "id": "4a982782-929a-4507-8794-26dae1132def",
+ "job_body": null,
+ "message": "successfully loaded 350 of 350 records",
+ "start_datetime": 1611615798805,
+ "status": "COMPLETE",
+ "type": "csv_url_load",
+ "user": "HDB_ADMIN",
+ "start_datetime_converted": "2021-01-25T23:03:18.805Z",
+ "end_datetime_converted": "2021-01-25T23:03:21.206Z"
+ }
+]
+```
+
+---
+
+## Search Jobs By Start Date
+Returns a list of job statuses, metrics, and messages for all jobs executed within the specified time window.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `search_jobs_by_start_date`
+* from_date _(required)_ - the date you wish to start the search
+* to_date _(required)_ - the date you wish to end the search
+
+### Body
+```json
+{
+ "operation": "search_jobs_by_start_date",
+ "from_date": "2021-01-25T22:05:27.464+0000",
+ "to_date": "2021-01-25T23:05:27.464+0000"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "id": "942dd5cb-2368-48a5-8a10-8770ff7eb1f1",
+ "user": "HDB_ADMIN",
+ "type": "csv_url_load",
+ "status": "COMPLETE",
+ "start_datetime": 1611613284781,
+ "end_datetime": 1611613287204,
+ "job_body": null,
+ "message": "successfully loaded 350 of 350 records",
+ "created_datetime": 1611613284764,
+ "__createdtime__": 1611613284767,
+ "__updatedtime__": 1611613287207,
+ "start_datetime_converted": "2021-01-25T22:21:24.781Z",
+ "end_datetime_converted": "2021-01-25T22:21:27.204Z"
+ }
+]
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/logs.md b/site/versioned_docs/version-4.3/developers/operations-api/logs.md
new file mode 100644
index 00000000..a6c39c46
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/logs.md
@@ -0,0 +1,755 @@
+---
+title: Logs
+---
+
+# Logs
+
+## Read HarperDB Log
+Returns log outputs from the primary HarperDB log based on the provided search criteria. Read more about HarperDB logging here: https://docs.harperdb.io/docs/logging#read-logs-via-the-api.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `read_log`
+* start _(optional)_ - result to start with. Default is 0, the first log in `hdb.log`. Must be a number
+* limit _(optional)_ - number of results returned. Default behavior is 1000. Must be a number
+* level _(optional)_ - error level to filter on. Default behavior is all levels. Must be `notify`, `error`, `warn`, `info`, `debug` or `trace`
+* from _(optional)_ - date to begin showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is first log in `hdb.log`
+* until _(optional)_ - date to end showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is last log in `hdb.log`
+* order _(optional)_ - order in which to display logs, `desc` or `asc`, by timestamp. By default, `hdb.log` order is maintained
+
+### Body
+
+```json
+{
+ "operation": "read_log",
+ "start": 0,
+ "limit": 1000,
+ "level": "error",
+ "from": "2021-01-25T22:05:27.464+0000",
+ "until": "2021-01-25T23:05:27.464+0000",
+ "order": "desc"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "level": "notify",
+ "message": "Connected to cluster server.",
+ "timestamp": "2021-01-25T23:03:20.710Z",
+ "thread": "main/0",
+ "tags": []
+ },
+ {
+ "level": "warn",
+ "message": "Login failed",
+ "timestamp": "2021-01-25T22:24:45.113Z",
+ "thread": "http/9",
+ "tags": []
+ },
+ {
+ "level": "error",
+ "message": "unknown attribute 'name and breed'",
+ "timestamp": "2021-01-25T22:23:24.167Z",
+ "thread": "http/9",
+ "tags": []
+ }
+]
+```
+
+
+---
+
+## Read Transaction Log
+Returns all transactions logged for the specified database table. You may filter your results with the optional from, to, and limit fields. Read more about HarperDB transaction logs here: https://docs.harperdb.io/docs/transaction-logging#read_transaction_log.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `read_transaction_log`
+* schema _(required)_ - schema under which the transaction log resides
+* table _(required)_ - table under which the transaction log resides
+* from _(optional)_ - time format must be millisecond-based epoch in UTC
+* to _(optional)_ - time format must be millisecond-based epoch in UTC
+* limit _(optional)_ - max number of logs you want to receive. Must be a number
+
+### Body
+
+```json
+{
+ "operation": "read_transaction_log",
+ "schema": "dev",
+ "table": "dog",
+ "from": 1560249020865,
+ "to": 1660585656639,
+ "limit": 10
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "operation": "insert",
+ "user": "admin",
+ "timestamp": 1660165619736,
+ "records": [
+ {
+ "id": 1,
+ "dog_name": "Penny",
+ "owner_name": "Kyle",
+ "breed_id": 154,
+ "age": 7,
+ "weight_lbs": 38,
+ "__updatedtime__": 1660165619688,
+ "__createdtime__": 1660165619688
+ }
+ ]
+ },
+ {
+ "operation": "insert",
+ "user": "admin",
+ "timestamp": 1660165619813,
+ "records": [
+ {
+ "id": 2,
+ "dog_name": "Harper",
+ "owner_name": "Stephen",
+ "breed_id": 346,
+ "age": 7,
+ "weight_lbs": 55,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 3,
+ "dog_name": "Alby",
+ "owner_name": "Kaylan",
+ "breed_id": 348,
+ "age": 7,
+ "weight_lbs": 84,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 4,
+ "dog_name": "Billy",
+ "owner_name": "Zach",
+ "breed_id": 347,
+ "age": 6,
+ "weight_lbs": 60,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 5,
+ "dog_name": "Rose Merry",
+ "owner_name": "Zach",
+ "breed_id": 348,
+ "age": 8,
+ "weight_lbs": 15,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 6,
+ "dog_name": "Kato",
+ "owner_name": "Kyle",
+ "breed_id": 351,
+ "age": 6,
+ "weight_lbs": 32,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 7,
+ "dog_name": "Simon",
+ "owner_name": "Fred",
+ "breed_id": 349,
+ "age": 3,
+ "weight_lbs": 35,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 8,
+ "dog_name": "Gemma",
+ "owner_name": "Stephen",
+ "breed_id": 350,
+ "age": 5,
+ "weight_lbs": 55,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 9,
+ "dog_name": "Yeti",
+ "owner_name": "Jaxon",
+ "breed_id": 200,
+ "age": 5,
+ "weight_lbs": 55,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 10,
+ "dog_name": "Monkey",
+ "owner_name": "Aron",
+ "breed_id": 271,
+ "age": 7,
+ "weight_lbs": 35,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 11,
+ "dog_name": "Bode",
+ "owner_name": "Margo",
+ "breed_id": 104,
+ "age": 8,
+ "weight_lbs": 75,
+ "adorable": true,
+ "__updatedtime__": 1660165619797,
+ "__createdtime__": 1660165619797
+ },
+ {
+ "id": 12,
+ "dog_name": "Tucker",
+ "owner_name": "David",
+ "breed_id": 346,
+ "age": 2,
+ "weight_lbs": 60,
+ "adorable": true,
+ "__updatedtime__": 1660165619798,
+ "__createdtime__": 1660165619798
+ },
+ {
+ "id": 13,
+ "dog_name": "Jagger",
+ "owner_name": "Margo",
+ "breed_id": 271,
+ "age": 7,
+ "weight_lbs": 35,
+ "adorable": true,
+ "__updatedtime__": 1660165619798,
+ "__createdtime__": 1660165619798
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user": "admin",
+ "timestamp": 1660165620040,
+ "records": [
+ {
+ "id": 1,
+ "dog_name": "Penny B",
+ "__updatedtime__": 1660165620036
+ }
+ ]
+ }
+]
+```
+
+---
+
+## Delete Transaction Logs Before
+Deletes transaction log data for the specified database table that is older than the specified timestamp.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `delete_transaction_logs_before`
+* schema _(required)_ - schema under which the transaction log resides. Must be a string
+* table _(required)_ - table under which the transaction log resides. Must be a string
+* timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC
+
+### Body
+```json
+{
+ "operation": "delete_transaction_logs_before",
+ "schema": "dev",
+ "table": "dog",
+ "timestamp": 1598290282817
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 26a6d3a6-6d77-40f9-bee7-8d6ef479a126"
+}
+```
+
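+This operation runs as an asynchronous job, so the response only confirms that the job has started. As a minimal sketch, progress can be checked with the `get_job` operation, passing the id returned above:
+
+```json
+{
+  "operation": "get_job",
+  "id": "26a6d3a6-6d77-40f9-bee7-8d6ef479a126"
+}
+```
+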
+---
+
+## Read Audit Log
+AuditLog must be enabled in the HarperDB configuration file to make this request. Returns a verbose history of all transactions logged for the specified database table, including original data records. You may filter your results with the optional search_type and search_values fields. Read more about HarperDB transaction logs here: https://docs.harperdb.io/docs/transaction-logging#read_audit_log.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `read_audit_log`
+* schema _(required)_ - schema under which the transaction log resides
+* table _(required)_ - table under which the transaction log resides
+* search_type _(optional)_ - possibilities are `hash_value`, `timestamp` and `username`
+* search_values _(optional)_ - an array of strings or numbers relating to the search_type
+
+### Body
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "operation": "insert",
+ "user_name": "admin",
+ "timestamp": 1660585635882.288,
+ "hash_values": [
+ 318
+ ],
+ "records": [
+ {
+ "id": 318,
+ "dog_name": "Polliwog",
+ "__updatedtime__": 1660585635876,
+ "__createdtime__": 1660585635876
+ }
+ ]
+ },
+ {
+ "operation": "insert",
+ "user_name": "admin",
+ "timestamp": 1660585716133.01,
+ "hash_values": [
+ 444
+ ],
+ "records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585716128,
+ "__createdtime__": 1660585716128
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user_name": "admin",
+ "timestamp": 1660585740558.415,
+ "hash_values": [
+ 444
+ ],
+ "records": [
+ {
+ "id": 444,
+ "fur_type": "coarse",
+ "__updatedtime__": 1660585740556
+ }
+ ],
+ "original_records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585716128,
+ "__createdtime__": 1660585716128
+ }
+ ]
+ },
+ {
+ "operation": "delete",
+ "user_name": "admin",
+ "timestamp": 1660585759710.56,
+ "hash_values": [
+ 444
+ ],
+ "original_records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585740556,
+ "__createdtime__": 1660585716128,
+ "fur_type": "coarse"
+ }
+ ]
+ }
+]
+```
+
+
+---
+
+## Read Audit Log by timestamp
+AuditLog must be enabled in the HarperDB configuration file to make this request. Returns the transactions logged for the specified database table between the specified time window. Read more about HarperDB transaction logs here: https://docs.harperdb.io/docs/transaction-logging#read_audit_log.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `read_audit_log`
+* schema _(required)_ - schema under which the transaction log resides
+* table _(required)_ - table under which the transaction log resides
+* search_type _(optional)_ - timestamp
+* search_values _(optional)_ - an array containing a maximum of two values [`from_timestamp`, `to_timestamp`] defining the range of transactions you would like to view.
+ * Timestamp format is millisecond-based epoch in UTC
+ * If no items are supplied then all transactions are returned
+ * If only one entry is supplied then all transactions after the supplied timestamp will be returned
+
+### Body
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog",
+ "search_type": "timestamp",
+ "search_values": [
+ 1660585740558,
+ 1660585759710.56
+ ]
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "operation": "insert",
+ "user_name": "admin",
+ "timestamp": 1660585635882.288,
+ "hash_values": [
+ 318
+ ],
+ "records": [
+ {
+ "id": 318,
+ "dog_name": "Polliwog",
+ "__updatedtime__": 1660585635876,
+ "__createdtime__": 1660585635876
+ }
+ ]
+ },
+ {
+ "operation": "insert",
+ "user_name": "admin",
+ "timestamp": 1660585716133.01,
+ "hash_values": [
+ 444
+ ],
+ "records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585716128,
+ "__createdtime__": 1660585716128
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user_name": "admin",
+ "timestamp": 1660585740558.415,
+ "hash_values": [
+ 444
+ ],
+ "records": [
+ {
+ "id": 444,
+ "fur_type": "coarse",
+ "__updatedtime__": 1660585740556
+ }
+ ],
+ "original_records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585716128,
+ "__createdtime__": 1660585716128
+ }
+ ]
+ },
+ {
+ "operation": "delete",
+ "user_name": "admin",
+ "timestamp": 1660585759710.56,
+ "hash_values": [
+ 444
+ ],
+ "original_records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585740556,
+ "__createdtime__": 1660585716128,
+ "fur_type": "coarse"
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user_name": "admin",
+ "timestamp": 1660586298457.224,
+ "hash_values": [
+ 318
+ ],
+ "records": [
+ {
+ "id": 318,
+ "fur_type": "super fluffy",
+ "__updatedtime__": 1660586298455
+ }
+ ],
+ "original_records": [
+ {
+ "id": 318,
+ "dog_name": "Polliwog",
+ "__updatedtime__": 1660585635876,
+ "__createdtime__": 1660585635876
+ }
+ ]
+ }
+]
+```
+
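+If only a single timestamp is supplied, all transactions from that point forward are returned. A minimal sketch against the same table:
+
+```json
+{
+  "operation": "read_audit_log",
+  "schema": "dev",
+  "table": "dog",
+  "search_type": "timestamp",
+  "search_values": [
+    1660585740558
+  ]
+}
+```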
+
+---
+
+## Read Audit Log by username
+AuditLog must be enabled in the HarperDB configuration file to make this request. Returns the transactions logged for the specified database table which were committed by the specified user. Read more about HarperDB transaction logs here: https://docs.harperdb.io/docs/transaction-logging#read_audit_log.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `read_audit_log`
+* schema _(required)_ - schema under which the transaction log resides
+* table _(required)_ - table under which the transaction log resides
+* search_type _(optional)_ - username
+* search_values _(optional)_ - the HarperDB user for whom you would like to view transactions
+
+### Body
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog",
+ "search_type": "username",
+ "search_values": [
+ "admin"
+ ]
+}
+```
+
+### Response: 200
+```json
+{
+ "admin": [
+ {
+ "operation": "insert",
+ "user_name": "admin",
+ "timestamp": 1660585635882.288,
+ "hash_values": [
+ 318
+ ],
+ "records": [
+ {
+ "id": 318,
+ "dog_name": "Polliwog",
+ "__updatedtime__": 1660585635876,
+ "__createdtime__": 1660585635876
+ }
+ ]
+ },
+ {
+ "operation": "insert",
+ "user_name": "admin",
+ "timestamp": 1660585716133.01,
+ "hash_values": [
+ 444
+ ],
+ "records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585716128,
+ "__createdtime__": 1660585716128
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user_name": "admin",
+ "timestamp": 1660585740558.415,
+ "hash_values": [
+ 444
+ ],
+ "records": [
+ {
+ "id": 444,
+ "fur_type": "coarse",
+ "__updatedtime__": 1660585740556
+ }
+ ],
+ "original_records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585716128,
+ "__createdtime__": 1660585716128
+ }
+ ]
+ },
+ {
+ "operation": "delete",
+ "user_name": "admin",
+ "timestamp": 1660585759710.56,
+ "hash_values": [
+ 444
+ ],
+ "original_records": [
+ {
+ "id": 444,
+ "dog_name": "Davis",
+ "__updatedtime__": 1660585740556,
+ "__createdtime__": 1660585716128,
+ "fur_type": "coarse"
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user_name": "admin",
+ "timestamp": 1660586298457.224,
+ "hash_values": [
+ 318
+ ],
+ "records": [
+ {
+ "id": 318,
+ "fur_type": "super fluffy",
+ "__updatedtime__": 1660586298455
+ }
+ ],
+ "original_records": [
+ {
+ "id": 318,
+ "dog_name": "Polliwog",
+ "__updatedtime__": 1660585635876,
+ "__createdtime__": 1660585635876
+ }
+ ]
+ }
+ ]
+}
+```
+
+
+---
+
+## Read Audit Log by hash_value
+AuditLog must be enabled in the HarperDB configuration file to make this request. Returns the transactions logged for the specified database table which were committed to the specified hash value(s). Read more about HarperDB transaction logs here: https://docs.harperdb.io/docs/transaction-logging#read_audit_log.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `read_audit_log`
+* schema _(required)_ - schema under which the transaction log resides
+* table _(required)_ - table under which the transaction log resides
+* search_type _(optional)_ - hash_value
+* search_values _(optional)_ - an array of hash values (primary keys) for which you wish to see transaction logs
+
+### Body
+
+```json
+{
+ "operation": "read_audit_log",
+ "schema": "dev",
+ "table": "dog",
+ "search_type": "hash_value",
+ "search_values": [
+ 318
+ ]
+}
+```
+
+### Response: 200
+```json
+{
+ "318": [
+ {
+ "operation": "insert",
+ "user_name": "admin",
+ "timestamp": 1660585635882.288,
+ "records": [
+ {
+ "id": 318,
+ "dog_name": "Polliwog",
+ "__updatedtime__": 1660585635876,
+ "__createdtime__": 1660585635876
+ }
+ ]
+ },
+ {
+ "operation": "update",
+ "user_name": "admin",
+ "timestamp": 1660586298457.224,
+ "records": [
+ {
+ "id": 318,
+ "fur_type": "super fluffy",
+ "__updatedtime__": 1660586298455
+ }
+ ],
+ "original_records": [
+ {
+ "id": 318,
+ "dog_name": "Polliwog",
+ "__updatedtime__": 1660585635876,
+ "__createdtime__": 1660585635876
+ }
+ ]
+ }
+ ]
+}
+```
+
+---
+
+## Delete Audit Logs Before
+AuditLog must be enabled in the HarperDB configuration file to make this request. Deletes audit log data for the specified database table that is older than the specified timestamp.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `delete_audit_logs_before`
+* schema _(required)_ - schema under which the transaction log resides. Must be a string
+* table _(required)_ - table under which the transaction log resides. Must be a string
+* timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC
+
+### Body
+```json
+{
+ "operation": "delete_audit_logs_before",
+ "schema": "dev",
+ "table": "dog",
+ "timestamp": 1660585759710.56
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 7479e5f8-a86e-4fc9-add7-749493bc100f"
+}
+```
+
+
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/nosql-operations.md b/site/versioned_docs/version-4.3/developers/operations-api/nosql-operations.md
new file mode 100644
index 00000000..47db9d1e
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/nosql-operations.md
@@ -0,0 +1,413 @@
+---
+title: NoSQL Operations
+---
+
+# NoSQL Operations
+
+## Insert
+
+Adds one or more rows of data to a database table. Primary keys of the inserted JSON record may be supplied on insert. If a primary key is not provided, then a GUID will be generated for each record.
+
+* operation _(required)_ - must always be `insert`
+* database _(optional)_ - database where the table you are inserting records into lives. The default is `data`
+* table _(required)_ - table where you want to insert records
+* records _(required)_ - array of one or more records for insert
+
+### Body
+
+```json
+{
+ "operation": "insert",
+ "database": "dev",
+ "table": "dog",
+ "records": [
+ {
+ "id": 8,
+ "dog_name": "Harper",
+ "breed_id": 346,
+ "age": 7
+ },
+ {
+ "id": 9,
+ "dog_name": "Penny",
+ "breed_id": 154,
+ "age": 7
+ }
+ ]
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "inserted 2 of 2 records",
+ "inserted_hashes": [
+ 8,
+ 9
+ ],
+ "skipped_hashes": []
+}
+```
+
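+As noted above, the primary key may be omitted, in which case HarperDB generates a GUID for each record. A minimal sketch (the record values here are illustrative; the returned `inserted_hashes` would then contain the generated GUID):
+
+```json
+{
+  "operation": "insert",
+  "database": "dev",
+  "table": "dog",
+  "records": [
+    {
+      "dog_name": "Scout",
+      "age": 4
+    }
+  ]
+}
+```
+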
+---
+
+## Update
+
+Changes the values of specified attributes in one or more rows in a database table as identified by the primary key. NOTE: Primary key of the updated JSON record(s) MUST be supplied on update.
+
+* operation _(required)_ - must always be `update`
+* database _(optional)_ - database of the table you are updating records in. The default is `data`
+* table _(required)_ - table where you want to update records
+* records _(required)_ - array of one or more records for update
+
+### Body
+
+```json
+{
+ "operation": "update",
+ "database": "dev",
+ "table": "dog",
+ "records": [
+ {
+ "id": 1,
+ "weight_lbs": 55
+ },
+ {
+ "id": 2,
+ "owner": "Kyle B",
+ "weight_lbs": 35
+ }
+ ]
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "updated 2 of 2 records",
+ "update_hashes": [
+ 1,
+ 2
+ ],
+ "skipped_hashes": []
+}
+```
+
+---
+
+## Upsert
+
+Changes the values of specified attributes for rows with matching primary keys that exist in the table. Adds rows to the database table for primary keys that do not exist or are not provided.
+
+* operation _(required)_ - must always be `upsert`
+* database _(optional)_ - database of the table you are updating records in. The default is `data`
+* table _(required)_ - table where you want to update records
+* records _(required)_ - array of one or more records for update
+
+### Body
+
+```json
+{
+ "operation": "upsert",
+ "database": "dev",
+ "table": "dog",
+ "records": [
+ {
+ "id": 8,
+ "weight_lbs": 155
+ },
+ {
+ "name": "Bill",
+ "breed": "Pit Bull",
+ "id": 10,
+ "Age": 11,
+ "weight_lbs": 155
+ },
+ {
+ "name": "Harper",
+ "breed": "Mutt",
+ "age": 5,
+ "weight_lbs": 155
+ }
+ ]
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "upserted 3 of 3 records",
+ "upserted_hashes": [
+ 8,
+ 10,
+ "ea06fc8e-717b-4c6c-b69d-b29014054ab7"
+ ]
+}
+```
+
+---
+
+## Delete
+
+Removes one or more rows of data from a specified table.
+
+* operation _(required)_ - must always be `delete`
+* database _(optional)_ - database where the table you are deleting records lives. The default is `data`
+* table _(required)_ - table where you want to delete records
+* ids _(required)_ - array of one or more primary key values, which identify the records to delete
+
+### Body
+
+```json
+{
+ "operation": "delete",
+ "database": "dev",
+ "table": "dog",
+ "ids": [
+ 1,
+ 2
+ ]
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "2 of 2 records successfully deleted",
+ "deleted_hashes": [
+ 1,
+ 2
+ ],
+ "skipped_hashes": []
+}
+```
+
+---
+
+## Search By ID
+
+Returns data from a table for one or more primary keys.
+
+* operation _(required)_ - must always be `search_by_id`
+* database _(optional)_ - database where the table you are searching lives. The default is `data`
+* table _(required)_ - table you wish to search
+* ids _(required)_ - array of primary keys to retrieve
+* get_attributes _(required)_ - define which attributes you want returned. _Use `['*']` to return all attributes_
+
+### Body
+
+```json
+{
+ "operation": "search_by_id",
+ "database": "dev",
+ "table": "dog",
+ "ids": [
+ 1,
+ 2
+ ],
+ "get_attributes": [
+ "dog_name",
+ "breed_id"
+ ]
+}
+```
+
+### Response: 200
+
+```json
+[
+ {
+ "dog_name": "Penny",
+ "breed_id": 154
+ },
+ {
+ "dog_name": "Harper",
+ "breed_id": 346
+ }
+]
+```
+
+---
+
+## Search By Value
+
+Returns data from a table for a matching value.
+
+* operation _(required)_ - must always be `search_by_value`
+* database _(optional)_ - database where the table you are searching lives. The default is `data`
+* table _(required)_ - table you wish to search
+* search_attribute _(required)_ - attribute you wish to search; can be any attribute
+* search_value _(required)_ - value you wish to search - wildcards are allowed
+* get_attributes _(required)_ - define which attributes you want returned. Use `['*']` to return all attributes
+
+### Body
+
+```json
+{
+ "operation": "search_by_value",
+ "database": "dev",
+ "table": "dog",
+ "search_attribute": "owner_name",
+ "search_value": "Ky*",
+ "get_attributes": [
+ "id",
+ "dog_name"
+ ]
+}
+```
+
+### Response: 200
+
+```json
+[
+ {
+ "dog_name": "Penny"
+ },
+ {
+ "dog_name": "Kato"
+ }
+]
+```
+
+---
+
+## Search By Conditions
+
+Returns data from a table for one or more matching conditions. This supports grouping of conditions to indicate order of operations as well.
+
+* operation _(required)_ - must always be `search_by_conditions`
+* database _(optional)_ - database where the table you are searching lives. The default is `data`
+* table _(required)_ - table you wish to search
+* operator _(optional)_ - the operator used between each condition - `and`, `or`. The default is `and`
+* offset _(optional)_ - the number of records that the query results will skip. The default is `0`
+* limit _(optional)_ - the number of records that the query results will include. The default is `null`, resulting in no limit
+* sort _(optional)_ - an object that indicates the sort order. It has the following properties:
+ * attribute _(required)_ - the attribute to sort by
+ * descending _(optional)_ - if true, will sort in descending order (defaults to ascending order)
+ * next _(optional)_ - defines the next sort object used to break ties when multiple records share the same value for the preceding attribute (follows the same structure as `sort`, and can recursively define additional attributes)
+* get_attributes _(required)_ - define which attributes you want returned. Use `['*']` to return all attributes
+* conditions _(required)_ - the array of condition objects, specified below, to filter by. Must include one or more objects in the array; each is either a condition or a grouped set of conditions. A condition has the following properties:
+ * search_attribute _(required)_ - the attribute you wish to search, can be any attribute
+ * search_type _(required)_ - the type of search to perform - `equals`, `contains`, `starts_with`, `ends_with`, `greater_than`, `greater_than_equal`, `less_than`, `less_than_equal`, `between`
+ * search_value _(required)_ - case-sensitive value you wish to search. If the `search_type` is `between` then use an array of two values to search between
+
+ A grouped set of conditions has the following properties:
+ * operator _(optional)_ - the operator used between each condition - `and`, `or`. The default is `and`
+ * conditions _(required)_ - the array of condition objects as described above
+
+### Body
+
+```json
+{
+ "operation": "search_by_conditions",
+ "database": "dev",
+ "table": "dog",
+ "operator": "and",
+ "offset": 0,
+ "limit": 10,
+ "sort": {
+ "attribute": "id",
+ "next": {
+ "dog_name": "age",
+ "descending": true
+ }
+ },
+ "get_attributes": [
+ "*"
+ ],
+ "conditions": [
+ {
+ "search_attribute": "age",
+ "search_type": "between",
+ "search_value": [
+ 5,
+ 8
+ ]
+ },
+ {
+ "search_attribute": "weight_lbs",
+ "search_type": "greater_than",
+ "search_value": 40
+ },
+ {
+ "operator": "or",
+ "conditions": [
+ {
+ "search_attribute": "adorable",
+ "search_type": "equals",
+ "search_value": true
+ },
+ {
+ "search_attribute": "lovable",
+ "search_type": "equals",
+ "search_value": true
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Response: 200
+
+```json
+[
+ {
+ "__createdtime__": 1620227719791,
+ "__updatedtime__": 1620227719791,
+ "adorable": true,
+ "age": 7,
+ "breed_id": 346,
+ "dog_name": "Harper",
+ "id": 2,
+ "owner_name": "Stephen",
+ "weight_lbs": 55
+ },
+ {
+ "__createdtime__": 1620227719792,
+ "__updatedtime__": 1620227719792,
+ "adorable": true,
+ "age": 7,
+ "breed_id": 348,
+ "dog_name": "Alby",
+ "id": 3,
+ "owner_name": "Kaylan",
+ "weight_lbs": 84
+ },
+ {
+ "__createdtime__": 1620227719792,
+ "__updatedtime__": 1620227719792,
+ "adorable": true,
+ "age": 6,
+ "breed_id": 347,
+ "dog_name": "Billy",
+ "id": 4,
+ "owner_name": "Zach",
+ "weight_lbs": 60
+ },
+ {
+ "__createdtime__": 1620227719792,
+ "__updatedtime__": 1620227719792,
+ "adorable": true,
+ "age": 5,
+ "breed_id": 250,
+ "dog_name": "Gemma",
+ "id": 8,
+ "owner_name": "Stephen",
+ "weight_lbs": 55
+ },
+ {
+ "__createdtime__": 1620227719792,
+ "__updatedtime__": 1620227719792,
+ "adorable": true,
+ "age": 8,
+ "breed_id": 104,
+ "dog_name": "Bode",
+ "id": 11,
+ "owner_name": "Margo",
+ "weight_lbs": 75
+ }
+]
+```
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/quickstart-examples.md b/site/versioned_docs/version-4.3/developers/operations-api/quickstart-examples.md
new file mode 100644
index 00000000..e1ef734a
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/quickstart-examples.md
@@ -0,0 +1,387 @@
+---
+title: Quick Start Examples
+---
+
+# Quick Start Examples
+
+HarperDB recommends utilizing [HarperDB Applications](../../developers/applications/) for defining databases, tables, and other functionality. However, this guide is a great way to get started with the HarperDB Operations API.
+
+## Create dog Table
+
+We first need to create a table. Since our company is named after our CEO's dog, let's create a table to store all our employees' dogs. We'll call this table `dog`.
+
+Tables in HarperDB are schema-less, so we don't need to add any attributes other than a `primary_key` (in pre-4.2 versions this was referred to as the `hash_attribute`) to create this table.
+
+HarperDB does offer a `database` parameter that can be used to hold logical groupings of tables. The parameter is optional; if not provided, the operation will default to using a database named `data`.
+
+If you receive an error response, make sure your Basic Authentication user and password match those you entered during the installation process.
+
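+All of the examples below are sent as HTTP `POST` requests to the Operations API. As a minimal sketch, assuming the default Operations API port of 9925 and Basic Authentication (replace the username and password with the credentials you chose during installation), the first request could be issued with curl like this:
+
+```
+curl -X POST http://localhost:9925 \
+  -u "HDB_ADMIN:password" \
+  -H "Content-Type: application/json" \
+  -d '{"operation": "create_table", "table": "dog", "primary_key": "id"}'
+```
+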
+### Body
+
+```json
+{
+ "operation": "create_table",
+ "table": "dog",
+ "primary_key": "id"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "table 'data.dog' successfully created."
+}
+```
+
+---
+
+## Create breed Table
+Now that we have a table to store our dog data, we also want to create a table to track known breeds. Just as with the dog table, the only attribute we need to specify is the `primary_key`.
+
+### Body
+
+```json
+{
+ "operation": "create_table",
+ "table": "breed",
+ "primary_key": "id"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "table 'data.breed' successfully created."
+}
+```
+
+---
+
+## Insert 1 Dog
+
+We're ready to add some dog data. Penny is our CTO's pup, so she gets ID 1 or we're all fired. We are specifying attributes in this call, but this doesn't prevent us from specifying additional attributes in subsequent calls.
+
+### Body
+
+```json
+{
+ "operation": "insert",
+ "table": "dog",
+ "records": [
+ {
+ "id": 1,
+ "dog_name": "Penny",
+ "owner_name": "Kyle",
+ "breed_id": 154,
+ "age": 7,
+ "weight_lbs": 38
+ }
+ ]
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "inserted 1 of 1 records",
+ "inserted_hashes": [
+ 1
+ ],
+ "skipped_hashes": []
+}
+```
+
+---
+
+## Insert Multiple Dogs
+
+Let's add some more Harper doggies! We can add as many dog objects as we want into the records collection. If you're adding a lot of objects, we would recommend using the .csv upload option (see the next section where we populate the breed table).
+
+### Body
+
+```json
+{
+ "operation": "insert",
+ "table": "dog",
+ "records": [
+ {
+ "id": 2,
+ "dog_name": "Harper",
+ "owner_name": "Stephen",
+ "breed_id": 346,
+ "age": 7,
+ "weight_lbs": 55,
+ "adorable": true
+ },
+ {
+ "id": 3,
+ "dog_name": "Alby",
+ "owner_name": "Kaylan",
+ "breed_id": 348,
+ "age": 7,
+ "weight_lbs": 84,
+ "adorable": true
+ },
+ {
+ "id": 4,
+ "dog_name": "Billy",
+ "owner_name": "Zach",
+ "breed_id": 347,
+ "age": 6,
+ "weight_lbs": 60,
+ "adorable": true
+ },
+ {
+ "id": 5,
+ "dog_name": "Rose Merry",
+ "owner_name": "Zach",
+ "breed_id": 348,
+ "age": 8,
+ "weight_lbs": 15,
+ "adorable": true
+ },
+ {
+ "id": 6,
+ "dog_name": "Kato",
+ "owner_name": "Kyle",
+ "breed_id": 351,
+ "age": 6,
+ "weight_lbs": 32,
+ "adorable": true
+ },
+ {
+ "id": 7,
+ "dog_name": "Simon",
+ "owner_name": "Fred",
+ "breed_id": 349,
+ "age": 3,
+ "weight_lbs": 35,
+ "adorable": true
+ },
+ {
+ "id": 8,
+ "dog_name": "Gemma",
+ "owner_name": "Stephen",
+ "breed_id": 350,
+ "age": 5,
+ "weight_lbs": 55,
+ "adorable": true
+ },
+ {
+ "id": 9,
+ "dog_name": "Yeti",
+ "owner_name": "Jaxon",
+ "breed_id": 200,
+ "age": 5,
+ "weight_lbs": 55,
+ "adorable": true
+ },
+ {
+ "id": 10,
+ "dog_name": "Monkey",
+ "owner_name": "Aron",
+ "breed_id": 271,
+ "age": 7,
+ "weight_lbs": 35,
+ "adorable": true
+ },
+ {
+ "id": 11,
+ "dog_name": "Bode",
+ "owner_name": "Margo",
+ "breed_id": 104,
+ "age": 8,
+ "weight_lbs": 75,
+ "adorable": true
+ },
+ {
+ "id": 12,
+ "dog_name": "Tucker",
+ "owner_name": "David",
+ "breed_id": 346,
+ "age": 2,
+ "weight_lbs": 60,
+ "adorable": true
+ },
+ {
+ "id": 13,
+ "dog_name": "Jagger",
+ "owner_name": "Margo",
+ "breed_id": 271,
+ "age": 7,
+ "weight_lbs": 35,
+ "adorable": true
+ }
+ ]
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "inserted 12 of 12 records",
+ "inserted_hashes": [
+ 2,
+ 3,
+ 4,
+ 5,
+ 6,
+ 7,
+ 8,
+ 9,
+ 10,
+ 11,
+ 12,
+ 13
+ ],
+ "skipped_hashes": []
+}
+```
+
+---
+
+## Bulk Insert Breeds Via CSV
+
+We need to populate the `breed` table with some data so we can reference it later. For larger data sets, we recommend using our CSV upload option.
+
+Each column header will be considered an attribute, and each row in the file will be a row in the table. Simply specify the file path and the table to upload to, and HarperDB will take care of the rest. You can pull the breeds.csv file from here: https://s3.amazonaws.com/complimentarydata/breeds.csv
+
+### Body
+
+```json
+{
+ "operation": "csv_url_load",
+ "table": "breed",
+ "csv_url": "https:/s3.amazonaws.com/complimentarydata/breeds.csv"
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "Starting job with id e77d63b9-70d5-499c-960f-6736718a4369",
+ "job_id": "e77d63b9-70d5-499c-960f-6736718a4369"
+}
+```
+
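+Bulk loads run as background jobs, so the response only tells us the load has started. To confirm the breeds finished loading, we can check the job status with the `get_job` operation (a minimal sketch using the id returned above):
+
+```json
+{
+  "operation": "get_job",
+  "id": "e77d63b9-70d5-499c-960f-6736718a4369"
+}
+```
+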
+---
+
+## Update 1 Dog Using NoSQL
+
+HarperDB supports NoSQL and SQL commands. We're going to update the dog table to show Penny's last initial using our NoSQL API.
+
+### Body
+
+```json
+{
+ "operation": "update",
+ "table": "dog",
+ "records": [
+ {
+ "id": 1,
+ "dog_name": "Penny B"
+ }
+ ]
+}
+```
+
+### Response: 200
+
+```json
+{
+ "message": "updated 1 of 1 records",
+ "update_hashes": [
+ 1
+ ],
+ "skipped_hashes": []
+}
+```
+
+---
+
+## Select a Dog by ID Using SQL
+
+Now we're going to use a simple SQL SELECT call to pull Penny's updated data. Note we now see Penny's last initial in the dog name.
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT * FROM data.dog where id = 1"
+}
+```
+
+### Response: 200
+
+```json
+[
+ {
+ "owner_name": "Kyle",
+ "adorable": null,
+ "breed_id": 154,
+ "__updatedtime__": 1610749428575,
+ "dog_name": "Penny B",
+ "weight_lbs": 38,
+ "id": 1,
+ "age": 7,
+ "__createdtime__": 1610749386566
+ }
+]
+```
+
+---
+
+## Select Dogs and Join Breed
+
+Here's a more complex SQL command joining the breed table with the dog table. We will also pull only the pups belonging to Kyle, Zach, and Stephen.
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT d.id, d.dog_name, d.owner_name, b.name, b.section FROM data.dog AS d INNER JOIN data.breed AS b ON d.breed_id = b.id WHERE d.owner_name IN ('Kyle', 'Zach', 'Stephen') AND b.section = 'Mutt' ORDER BY d.dog_name"
+}
+```
+
+### Response: 200
+
+```json
+[
+ {
+ "id": 4,
+ "dog_name": "Billy",
+ "owner_name": "Zach",
+ "name": "LABRADOR / GREAT DANE MIX",
+ "section": "Mutt"
+ },
+ {
+ "id": 8,
+ "dog_name": "Gemma",
+ "owner_name": "Stephen",
+ "name": "SHORT HAIRED SETTER MIX",
+ "section": "Mutt"
+ },
+ {
+ "id": 2,
+ "dog_name": "Harper",
+ "owner_name": "Stephen",
+ "name": "HUSKY MIX",
+ "section": "Mutt"
+ },
+ {
+ "id": 5,
+ "dog_name": "Rose Merry",
+ "owner_name": "Zach",
+ "name": "TERRIER MIX",
+ "section": "Mutt"
+ }
+]
+```
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/registration.md b/site/versioned_docs/version-4.3/developers/operations-api/registration.md
new file mode 100644
index 00000000..53d953af
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/registration.md
@@ -0,0 +1,67 @@
+---
+title: Registration
+---
+
+# Registration
+
+
+## Registration Info
+Returns the registration data of the HarperDB instance.
+
+* operation _(required)_ - must always be `registration_info`
+
+### Body
+```json
+{
+ "operation": "registration_info"
+}
+```
+
+### Response: 200
+```json
+{
+ "registered": true,
+ "version": "4.2.0",
+ "ram_allocation": 2048,
+ "license_expiration_date": "2022-01-15"
+}
+```
+
+---
+
+## Get Fingerprint
+Returns the HarperDB fingerprint, uniquely generated based on the machine, for licensing purposes.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `get_fingerprint`
+
+### Body
+
+```json
+{
+ "operation": "get_fingerprint"
+}
+```
+
+---
+
+## Set License
+Sets the HarperDB license as generated by HarperDB License Management software.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `set_license`
+* key _(required)_ - your license key
+* company _(required)_ - the company that was used in the license
+
+### Body
+
+```json
+{
+ "operation": "set_license",
+ "key": "",
+ "company": ""
+}
+```
+
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/sql-operations.md b/site/versioned_docs/version-4.3/developers/operations-api/sql-operations.md
new file mode 100644
index 00000000..5958805e
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/sql-operations.md
@@ -0,0 +1,122 @@
+---
+title: SQL Operations
+---
+
+:::warning
+HarperDB encourages developers to utilize other querying tools over SQL for performance purposes. HarperDB SQL is intended for data investigation purposes and use cases where performance is not a priority. SQL optimizations are on our roadmap for the future.
+:::
+
+# SQL Operations
+
+## Select
+Executes the provided SQL statement. The SELECT statement is used to query data from the database.
+
+* operation _(required)_ - must always be `sql`
+* sql _(required)_ - use standard SQL
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "SELECT * FROM dev.dog WHERE id = 1"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "id": 1,
+ "age": 7,
+ "dog_name": "Penny",
+ "weight_lbs": 38,
+ "breed_id": 154,
+ "owner_name": "Kyle",
+ "adorable": true,
+ "__createdtime__": 1611614106043,
+ "__updatedtime__": 1611614119507
+ }
+]
+```
+
+---
+
+## Insert
+Executes the provided SQL statement. The INSERT statement is used to add one or more rows to a database table.
+
+* operation _(required)_ - must always be `sql`
+* sql _(required)_ - use standard SQL
+
+### Body
+
+```json
+{
+ "operation": "sql",
+ "sql": "INSERT INTO dev.dog (id, dog_name) VALUE (22, 'Simon')"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "inserted 1 of 1 records",
+ "inserted_hashes": [
+ 22
+ ],
+ "skipped_hashes": []
+}
+```
+---
+
+## Update
+Executes the provided SQL statement. The UPDATE statement is used to change the values of specified attributes in one or more rows in a database table.
+
+* operation _(required)_ - must always be `sql`
+* sql _(required)_ - use standard SQL
+
+### Body
+```json
+{
+ "operation": "sql",
+ "sql": "UPDATE dev.dog SET dog_name = 'penelope' WHERE id = 1"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "updated 1 of 1 records",
+ "update_hashes": [
+ 1
+ ],
+ "skipped_hashes": []
+}
+```
+
+---
+
+## Delete
+Executes the provided SQL statement. The DELETE statement is used to remove one or more rows of data from a database table.
+
+* operation _(required)_ - must always be `sql`
+* sql _(required)_ - use standard SQL
+
+### Body
+```json
+{
+ "operation": "sql",
+ "sql": "DELETE FROM dev.dog WHERE id = 1"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "1 of 1 record successfully deleted",
+ "deleted_hashes": [
+ 1
+ ],
+ "skipped_hashes": []
+}
+```
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/token-authentication.md b/site/versioned_docs/version-4.3/developers/operations-api/token-authentication.md
new file mode 100644
index 00000000..161c69b5
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/token-authentication.md
@@ -0,0 +1,54 @@
+---
+title: Token Authentication
+---
+
+# Token Authentication
+
+## Create Authentication Tokens
+Creates the tokens needed for authentication: an operation token and a refresh token.
+
+_Note - this operation does not require authorization to be set_
+
+* operation _(required)_ - must always be `create_authentication_tokens`
+* username _(required)_ - username of user to generate tokens for
+* password _(required)_ - password of user to generate tokens for
+
+### Body
+```json
+{
+ "operation": "create_authentication_tokens",
+ "username": "",
+ "password": ""
+}
+```
+
+### Response: 200
+```json
+{
+ "operation_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6IkhEQl9BRE1JTiIsImlhdCI6MTYwNTA2Mzk0OSwiZXhwIjoxNjA1MTUwMzQ5LCJzdWIiOiJvcGVyYXRpb24ifQ.TlV93BqavQVQntXTt_WeY5IjAuCshfd6RzhihLWFWhu1qEKLHdwg9o5Z4ASaNmfuyKBqbFw65IbOYKd348EXeC_T6d0GO3yUhICYWXkqhQnxVW_T-ECKc7m5Bty9HTgfeaJ2e2yW55nbZYWG_gLtNgObUjCziX20-gGGR25sNTRm78mLQPYQkBJph6WXwAuyQrX704h0NfvNqyAZSwjxgtjuuEftTJ7FutLrQSLGIBIYq9nsHrFkheiDSn-C8_WKJ_zATa4YIofjqn9g5wA6o_7kSNaU2-gWnCm_jbcAcfvOmXh6rd89z8pwPqnC0f131qHIBps9UHaC1oozzmu_C6bsg7905OoAdFFY42Vojs98SMbfRApRvwaS4SprBsam3izODNI64ZUBREu3l4SZDalUf2kN8XPVWkI1LKq_mZsdtqr1r11Z9xslI1wVdxjunYeanjBhs7_j2HTX7ieVGn1a23cWceUk8F1HDGe_KEuPQs03R73V8acq_freh-kPhIa4eLqmcHeBw3WcyNGW8GuP8kyQRkGuO5sQSzZqbr_YSbZdSShZWTWDE6RYYC9ZV9KJtHVxhs0hexUpcoqO8OtJocyltRjtDjhSm9oUxszYRaALu-h8YadZT9dEKzsyQIt30d7LS9ETmmGWx4nKSTME2bV21PnDv_rEc5R6gnE",
+ "refresh_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6IkhEQl9BRE1JTiIsImlhdCI6MTYwNTA2Mzk0OSwiZXhwIjoxNjA3NjU1OTQ5LCJzdWIiOiJyZWZyZXNoIn0.znhJhkdSROBPP_GLRzAxYdjgQ3BuqpAbQB7zMSSOQJ3s83HnmZ10Bnpw_3L2aF-tOFgz_t6HUAvn26fNOLsspJD2aOvHPcVS4yLKS5nagpA6ar_pqng9f6Ebfs8ohguLCfHnHRJ8poLxuWRvWW9_9pIlDiwsj4yo3Mbxi3mW8Bbtnk2MwiNHFxTksD12Ne8EWz8q2jic5MjArqBBgR373oYoWU1oxpTM6gIsZCBRowXcc9XFy2vyRoggEUU4ISRFQ4ZY9ayJ-_jleSDCUamJSNQsdb1OUTvc6CxeYlLjCoV0ijRUB6p2XWNVezFhDu8yGqOeyGFJzArhxbVc_pl4UYd5aUVxhrO9DdhG29cY_mHV0FqfXphR9QllK--LJFTP4aFqkCxnVr7HSa17hL0ZVK1HaKrx21PAdCkVNZpD6J3RtRbTkfnIB_C3Be9jhOV3vpTf7ZGn_Bs3CPJi_sL313Z1yKSDAS5rXTPceEOcTPHjzkMP9Wz19KfFq_0kuiZdDmeYNqJeFPAgGJ-S0tO51krzyGqLyCCA32_W104GR8OoQi2gEED6HIx2G0-1rnLnefN6eHQiY5r-Q3Oj9e2y3EvqqgWOmEDw88-SjPTwQVnMbBHYN2RfluU7EmvDh6Saoe79Lhlu8ZeSJ1x6ZgA8-Cirraz1_526Tn8v5FGDfrc"
+}
+```
+
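+Once issued, the `operation_token` is used to authenticate subsequent requests in place of Basic Authentication by passing it as a Bearer token in the `Authorization` header (sketch, with the token abbreviated):
+
+```
+Authorization: Bearer OPERATION_TOKEN
+```
+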
+---
+
+## Refresh Operation Token
+This operation creates a new operation token.
+
+* operation _(required)_ - must always be `refresh_operation_token`
+* refresh_token _(required)_ - the refresh token that was provided when tokens were created
+
+### Body
+```json
+{
+ "operation": "refresh_operation_token",
+ "refresh_token": "EXISTING_REFRESH_TOKEN"
+}
+```
+
+### Response: 200
+```json
+{
+ "operation_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6eyJfX2NyZWF0ZWR0aW1lX18iOjE2MDQ1MTc4Nzk1MjMsIl9fdXBkYXRlZHRpbWVfXyI6MTYwNDUxNzg3OTUyMywiYWN0aXZlIjp0cnVlLCJhdXRoX3Rva2VuIjpudWxsLCJyb2xlIjp7Il9fY3JlYXRlZHRpbWVfXyI6MTYwNDUxNzg3OTUyMSwiX191cGRhdGVkdGltZV9fIjoxNjA0NTE3ODc5NTIxLCJpZCI6IjZhYmRjNGJhLWU5MjQtNDlhNi1iOGY0LWM1NWUxYmQ0OTYzZCIsInBlcm1pc3Npb24iOnsic3VwZXJfdXNlciI6dHJ1ZSwic3lzdGVtIjp7InRhYmxlcyI6eyJoZGJfdGFibGUiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl9hdHRyaWJ1dGUiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl9zY2hlbWEiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl91c2VyIjp7InJlYWQiOnRydWUsImluc2VydCI6ZmFsc2UsInVwZGF0ZSI6ZmFsc2UsImRlbGV0ZSI6ZmFsc2UsImF0dHJpYnV0ZV9wZXJtaXNzaW9ucyI6W119LCJoZGJfcm9sZSI6eyJyZWFkIjp0cnVlLCJpbnNlcnQiOmZhbHNlLCJ1cGRhdGUiOmZhbHNlLCJkZWxldGUiOmZhbHNlLCJhdHRyaWJ1dGVfcGVybWlzc2lvbnMiOltdfSwiaGRiX2pvYiI6eyJyZWFkIjp0cnVlLCJpbnNlcnQiOmZhbHNlLCJ1cGRhdGUiOmZhbHNlLCJkZWxldGUiOmZhbHNlLCJhdHRyaWJ1dGVfcGVybWlzc2lvbnMiOltdfSwiaGRiX2xpY2Vuc2UiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl9pbmZvIjp7InJlYWQiOnRydWUsImluc2VydCI6ZmFsc2UsInVwZGF0ZSI6ZmFsc2UsImRlbGV0ZSI6ZmFsc2UsImF0dHJpYnV0ZV9wZXJtaXNzaW9ucyI6W119LCJoZGJfbm9kZXMiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl90ZW1wIjp7InJlYWQiOnRydWUsImluc2VydCI6ZmFsc2UsInVwZGF0ZSI6ZmFsc2UsImRlbGV0ZSI6ZmFsc2UsImF0dHJpYnV0ZV9wZXJtaXNzaW9ucyI6W119fX19LCJyb2xlIjoic3VwZXJfdXNlciJ9LCJ1c2VybmFtZSI6IkhEQl9BRE1JTiJ9LCJpYXQiOjE2MDUwNjQ0MjMsImV4cCI6MTYwNTE1MDgyMywic3ViIjoib3BlcmF0aW9uIn0.VVZdhlh7_xFEaGPwhAh6VJ1d7eisiF3ok3ZwLTQAMWZB6umb2S7pPSTbXAmqAGHRlFAK3BYfnwT3YWt0gZbHvk24_0x3s_dej3PYJ8khIxzMjqpkR6qSjQIC2dhKqpwRPNtoqW_xnep9L-qf5iPtqkwsqWhF1c5VSN8nFouLWMZSuJ6Mag04soNhFvY0AF6QiTyzajMTb6uurRMWOnxk8hwMrY_5xtupabqtZheXP_0DV8l10B7GFi_oWf_lDLmwRmNbeUfW8ZyCIJMj36bjN3PsfVIxog87SWKKCwbWZWfJWw0KEph-HvU0ay35deyGWPIaDQmujuh2vtz-B0GoIAC58PJdXNyQRzES_nSb6Oqc_wGZsLM6EsNn_lrIp3mK_3a5jirZ8s6Z2SfcYKaLF2hCevdm05gRjFJ6ijxZrUSOR2S415wLxmqCCWCp_-sEUz8erUrf07_aj-Bv99GUub4b_znOsQF3uABKd4KKff2cNSMhAa-6sro5GDRRJg376dcLi2_9HOZbnSo90zrpVq8RNV900aydyzDdlXkZja8jdHBk4mxSSewYBvM7up6I0G4X-ZlzFOp30T7kjdLa6480Qp34iYRMMtq0Htpb5k2jPt8dNFnzW-Q2eRy1wNBbH3cCH0rd7_BIGuTCrl4hGU8QjlBiF7Gj0_-uJYhKnhg"
+}
+```
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/users-and-roles.md b/site/versioned_docs/version-4.3/developers/operations-api/users-and-roles.md
new file mode 100644
index 00000000..f5f35d56
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/users-and-roles.md
@@ -0,0 +1,484 @@
+---
+title: Users and Roles
+---
+
+# Users and Roles
+
+## List Roles
+Returns a list of all roles. [Learn more about HarperDB roles here.](../security/users-and-roles)
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `list_roles`
+
+### Body
+```json
+{
+ "operation": "list_roles"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "__createdtime__": 1611615061106,
+ "__updatedtime__": 1611615061106,
+ "id": "05c2ffcd-f780-40b1-9432-cfe8ba5ad890",
+ "permission": {
+ "super_user": false,
+ "dev": {
+ "tables": {
+ "dog": {
+ "read": true,
+ "insert": true,
+ "update": true,
+ "delete": false,
+ "attribute_permissions": [
+ {
+ "attribute_name": "name",
+ "read": true,
+ "insert": true,
+ "update": true
+ }
+ ]
+ }
+ }
+ }
+ },
+ "role": "developer"
+ },
+ {
+ "__createdtime__": 1610749235614,
+ "__updatedtime__": 1610749235614,
+ "id": "136f03fa-a0e9-46c3-bd5d-7f3e7dd5b564",
+ "permission": {
+ "cluster_user": true
+ },
+ "role": "cluster_user"
+ },
+ {
+ "__createdtime__": 1610749235609,
+ "__updatedtime__": 1610749235609,
+ "id": "745b3138-a7cf-455a-8256-ac03722eef12",
+ "permission": {
+ "super_user": true
+ },
+ "role": "super_user"
+ }
+]
+```
+
+---
+
+## Add Role
+Creates a new role with the specified permissions. [Learn more about HarperDB roles here.](../security/users-and-roles)
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `add_role`
+* role _(required)_ - name of role you are defining
+* permission _(required)_ - object defining permissions for users associated with this role:
+ * super_user _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false.
+ * structure_user _(optional)_ - boolean OR array of database names (as strings). If set to `true`, users associated with this role can create new databases and tables. If an array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for the specified databases, or for all databases if the value is true.
+
+### Body
+```json
+{
+ "operation": "add_role",
+ "role": "developer",
+ "permission": {
+ "super_user": false,
+ "structure_user": false,
+ "dev": {
+ "tables": {
+ "dog": {
+ "read": true,
+ "insert": true,
+ "update": true,
+ "delete": false,
+ "attribute_permissions": [
+ {
+ "attribute_name": "name",
+ "read": true,
+ "insert": true,
+ "update": true
+ }
+ ]
+ }
+ }
+ }
+ }
+}
+```
+
+### Response: 200
+```json
+{
+ "role": "developer",
+ "permission": {
+ "super_user": false,
+ "structure_user": false,
+ "dev": {
+ "tables": {
+ "dog": {
+ "read": true,
+ "insert": true,
+ "update": true,
+ "delete": false,
+ "attribute_permissions": [
+ {
+ "attribute_name": "name",
+ "read": true,
+ "insert": true,
+ "update": true
+ }
+ ]
+ }
+ }
+ }
+ },
+ "id": "0a9368b0-bd81-482f-9f5a-8722e3582f96",
+ "__updatedtime__": 1598549532897,
+ "__createdtime__": 1598549532897
+}
+```
+
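+As described above, `structure_user` may also be an array of database names instead of a boolean. A minimal sketch restricting structure changes to the `dev` database only (the role name here is illustrative):
+
+```json
+{
+  "operation": "add_role",
+  "role": "dev_structure_manager",
+  "permission": {
+    "super_user": false,
+    "structure_user": ["dev"]
+  }
+}
+```
+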
+---
+
+## Alter Role
+Modifies an existing role with the specified permissions. [Learn more about HarperDB roles here.](../security/users-and-roles)
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `alter_role`
+* id _(required)_ - the id value for the role you are altering
+* role _(optional)_ - name value to update on the role you are altering
+* permission _(required)_ - object defining permissions for users associated with this role:
+ * super_user _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false.
+ * structure_user _(optional)_ - boolean OR array of database names (as strings). If set to `true`, users associated with this role can create new databases and tables. If an array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for the specified databases, or for all databases if the value is true.
+
+### Body
+
+```json
+{
+ "operation": "alter_role",
+ "id": "f92162e2-cd17-450c-aae0-372a76859038",
+ "role": "another_developer",
+ "permission": {
+ "super_user": false,
+ "structure_user": false,
+ "dev": {
+ "tables": {
+ "dog": {
+ "read": true,
+ "insert": true,
+ "update": true,
+ "delete": false,
+ "attribute_permissions": [
+ {
+ "attribute_name": "name",
+ "read": false,
+ "insert": true,
+ "update": true
+ }
+ ]
+ }
+ }
+ }
+ }
+}
+```
+
+### Response: 200
+```json
+{
+ "id": "a7cb91e9-32e4-4dbf-a327-fab4fa9191ea",
+ "role": "developer",
+ "permission": {
+ "super_user": false,
+ "structure_user": false,
+ "dev": {
+ "tables": {
+ "dog": {
+ "read": true,
+ "insert": true,
+ "update": true,
+ "delete": false,
+ "attribute_permissions": [
+ {
+ "attribute_name": "name",
+ "read": false,
+ "insert": true,
+ "update": true
+ }
+ ]
+ }
+ }
+ }
+ },
+ "__updatedtime__": 1598549996106
+}
+```
+
+---
+
+## Drop Role
+Deletes an existing role from the database. NOTE: A role with associated users cannot be dropped. [Learn more about HarperDB roles here.](../security/users-and-roles)
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - this must always be `drop_role`
+* id _(required)_ - this is the id of the role you are dropping
+
+### Body
+```json
+{
+ "operation": "drop_role",
+ "id": "developer"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "developer successfully deleted"
+}
+```
+
+---
+
+## List Users
+Returns a list of all users. [Learn more about HarperDB roles here.](../security/users-and-roles)
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `list_users`
+
+### Body
+```json
+{
+ "operation": "list_users"
+}
+```
+
+### Response: 200
+```json
+[
+ {
+ "__createdtime__": 1635520961165,
+ "__updatedtime__": 1635520961165,
+ "active": true,
+ "role": {
+ "__createdtime__": 1635520961161,
+ "__updatedtime__": 1635520961161,
+ "id": "7c78ef13-c1f3-4063-8ea3-725127a78279",
+ "permission": {
+ "super_user": true,
+ "system": {
+ "tables": {
+ "hdb_table": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_attribute": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_schema": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_user": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_role": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_job": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_license": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_info": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_nodes": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ },
+ "hdb_temp": {
+ "read": true,
+ "insert": false,
+ "update": false,
+ "delete": false,
+ "attribute_permissions": []
+ }
+ }
+ }
+ },
+ "role": "super_user"
+ },
+ "username": "HDB_ADMIN"
+ }
+]
+```
+
+---
+
+## User Info
+Returns user data for the associated user credentials.
+
+* operation _(required)_ - must always be `user_info`
+
+### Body
+```json
+{
+ "operation": "user_info"
+}
+```
+
+### Response: 200
+```json
+{
+ "__createdtime__": 1610749235611,
+ "__updatedtime__": 1610749235611,
+ "active": true,
+ "role": {
+ "__createdtime__": 1610749235609,
+ "__updatedtime__": 1610749235609,
+ "id": "745b3138-a7cf-455a-8256-ac03722eef12",
+ "permission": {
+ "super_user": true
+ },
+ "role": "super_user"
+ },
+ "username": "HDB_ADMIN"
+}
+```
+
+---
+
+## Add User
+Creates a new user with the specified role and credentials. [Learn more about HarperDB roles here.](../security/users-and-roles)
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `add_user`
+* role _(required)_ - 'role' name value of the role you wish to assign to the user. See `add_role` for more detail
+* username _(required)_ - username assigned to the user. It cannot be altered after adding the user. It serves as the hash.
+* password _(required)_ - clear text for password. HarperDB will encrypt the password upon receipt
+* active _(required)_ - boolean value for status of user's access to your HarperDB instance. If set to false, user will not be able to access your instance of HarperDB.
+
+### Body
+```json
+{
+ "operation": "add_user",
+ "role": "role_name",
+ "username": "hdb_user",
+ "password": "password",
+ "active": true
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "hdb_user successfully added"
+}
+```
+
+---
+
+## Alter User
+Modifies an existing user's role and/or credentials. [Learn more about HarperDB roles here.](../security/users-and-roles)
+
+_Operation is restricted to super\_user roles only_
+
+ * operation _(required)_ - must always be `alter_user`
+ * username _(required)_ - username assigned to the user. It cannot be altered after adding the user. It serves as the hash.
+ * password _(optional)_ - clear text for password. HarperDB will encrypt the password upon receipt
+ * role _(optional)_ - `role` name value of the role you wish to assign to the user. See `add_role` for more detail
+ * active _(optional)_ - status of user's access to your HarperDB instance. See `add_user` for more detail
+
+### Body
+```json
+{
+ "operation": "alter_user",
+ "role": "role_name",
+ "username": "hdb_user",
+ "password": "password",
+ "active": true
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "updated 1 of 1 records",
+ "new_attributes": [],
+ "txn_time": 1611615114397.988,
+ "update_hashes": [
+ "hdb_user"
+ ],
+ "skipped_hashes": []
+}
+```
+
+---
+
+## Drop User
+Deletes an existing user by username. [Learn more about HarperDB roles here.](../security/users-and-roles)
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `drop_user`
+* username _(required)_ - username assigned to the user
+
+### Body
+```json
+{
+ "operation": "drop_user",
+ "username": "sgoldberg"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "sgoldberg successfully deleted"
+}
+```
diff --git a/site/versioned_docs/version-4.3/developers/operations-api/utilities.md b/site/versioned_docs/version-4.3/developers/operations-api/utilities.md
new file mode 100644
index 00000000..7ba696ae
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/operations-api/utilities.md
@@ -0,0 +1,359 @@
+---
+title: Utilities
+---
+
+# Utilities
+
+## Restart
+Restarts the HarperDB instance.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `restart`
+
+### Body
+```json
+{
+ "operation": "restart"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Restarting HarperDB. This may take up to 60 seconds."
+}
+```
+---
+
+## Restart Service
+Restarts servers for the specified HarperDB service.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `restart_service`
+* service _(required)_ - must be one of: `http_workers`, `clustering_config` or `clustering`
+
+### Body
+```json
+{
+ "operation": "restart_service",
+ "service": "http_workers"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Restarting http_workers"
+}
+```
+
+---
+## System Information
+Returns detailed metrics on the host system.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `system_information`
+* attributes _(optional)_ - string array of top-level attributes desired in the response; if no value is supplied, all attributes will be returned. Available attributes are: ['system', 'time', 'cpu', 'memory', 'disk', 'network', 'harperdb_processes', 'table_size', 'replication']
+
+### Body
+```json
+{
+ "operation": "system_information"
+}
+```
+
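+To limit the payload, the optional `attributes` array can be supplied to request only specific top-level sections, for example (a minimal sketch):
+
+```json
+{
+  "operation": "system_information",
+  "attributes": ["cpu", "memory", "disk"]
+}
+```
+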
+---
+
+## Delete Records Before
+
+Deletes data before the specified timestamp on the specified database table, exclusively on the node where it is executed. Any clustered nodes with replicated data will retain that data.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `delete_records_before`
+* date _(required)_ - records older than this date will be deleted. Supported format looks like: `YYYY-MM-DDThh:mm:ss.sZ`
+* schema _(required)_ - name of the schema where you are deleting your data
+* table _(required)_ - name of the table where you are deleting your data
+
+### Body
+```json
+{
+ "operation": "delete_records_before",
+ "date": "2021-01-25T23:05:27.464",
+ "schema": "dev",
+ "table": "breed"
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id d3aed926-e9fe-4ec1-aea7-0fb4451bd373",
+ "job_id": "d3aed926-e9fe-4ec1-aea7-0fb4451bd373"
+}
+```
+
+---
+
+## Export Local
+Exports data based on a given search operation to a local file in JSON or CSV format.
+
+* operation _(required)_ - must always be `export_local`
+* format _(required)_ - the format you wish to export the data, options are `json` & `csv`
+* path _(required)_ - path local to the server to export the data
+* search_operation _(required)_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql`
+* filename _(optional)_ - the name of the file to which your export will be written (do not include the extension in the filename). If one is not provided, it will be autogenerated based on the epoch.
+
+### Body
+```json
+{
+ "operation": "export_local",
+ "format": "json",
+ "path": "/data/",
+ "search_operation": {
+ "operation": "sql",
+ "sql": "SELECT * FROM dev.breed"
+ }
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 6fc18eaa-3504-4374-815c-44840a12e7e5"
+}
+```
+
+---
+
+## Export To S3
+Exports data based on a given search operation from table to AWS S3 in JSON or CSV format.
+
+* operation _(required)_ - must always be `export_to_s3`
+* format _(required)_ - the format you wish to export the data, options are `json` & `csv`
+* s3 _(required)_ - details your access keys, bucket, bucket region and key for saving the data to S3
+* search_operation _(required)_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql`
+
+### Body
+```json
+{
+ "operation": "export_to_s3",
+ "format": "json",
+ "s3": {
+ "aws_access_key_id": "YOUR_KEY",
+ "aws_secret_access_key": "YOUR_SECRET_KEY",
+ "bucket": "BUCKET_NAME",
+ "key": "OBJECT_NAME",
+ "region": "BUCKET_REGION"
+ },
+ "search_operation": {
+ "operation": "sql",
+ "sql": "SELECT * FROM dev.dog"
+ }
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Starting job with id 9fa85968-4cb1-4008-976e-506c4b13fc4a",
+ "job_id": "9fa85968-4cb1-4008-976e-506c4b13fc4a"
+}
+```
+
+---
+
+## Install Node Modules
+Executes npm install against specified custom function projects.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `install_node_modules`
+* projects _(required)_ - must be an array of custom functions projects.
+* dry_run _(optional)_ - refers to the npm --dry-run flag: [https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run](https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run). Defaults to false.
+
+### Body
+```json
+{
+ "operation": "install_node_modules",
+ "projects": [
+ "dogs",
+ "cats"
+ ],
+ "dry_run": true
+}
+```
+
+---
+
+## Set Configuration
+
+Modifies the HarperDB configuration file parameters. Must be followed by a restart or restart_service operation for the changes to take effect.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `set_configuration`
+* logging_level _(example/optional)_ - one or more configuration keywords to be updated in the HarperDB configuration file
+* clustering_enabled _(example/optional)_ - one or more configuration keywords to be updated in the HarperDB configuration file
+
+### Body
+```json
+{
+ "operation": "set_configuration",
+ "logging_level": "trace",
+ "clustering_enabled": true
+}
+```
+
+### Response: 200
+```json
+{
+ "message": "Configuration successfully set. You must restart HarperDB for new config settings to take effect."
+}
+```
+
+---
+
+## Get Configuration
+Returns the HarperDB configuration parameters.
+
+_Operation is restricted to super_user roles only_
+
+* operation _(required)_ - must always be `get_configuration`
+
+### Body
+```json
+{
+ "operation": "get_configuration"
+}
+```
+
+### Response: 200
+```json
+{
+ "http": {
+ "compressionThreshold": 1200,
+ "cors": false,
+ "corsAccessList": [
+ null
+ ],
+ "keepAliveTimeout": 30000,
+ "port": 9926,
+ "securePort": null,
+ "timeout": 120000
+ },
+ "threads": 11,
+ "authentication": {
+ "cacheTTL": 30000,
+ "enableSessions": true,
+ "operationTokenTimeout": "1d",
+ "refreshTokenTimeout": "30d"
+ },
+ "analytics": {
+ "aggregatePeriod": 60
+ },
+ "clustering": {
+ "enabled": true,
+ "hubServer": {
+ "cluster": {
+ "name": "harperdb",
+ "network": {
+ "port": 12345,
+ "routes": null
+ }
+ },
+ "leafNodes": {
+ "network": {
+ "port": 9931
+ }
+ },
+ "network": {
+ "port": 9930
+ }
+ },
+ "leafServer": {
+ "network": {
+ "port": 9940,
+ "routes": null
+ },
+ "streams": {
+ "maxAge": null,
+ "maxBytes": null,
+ "maxMsgs": null,
+ "path": "/Users/hdb/clustering/leaf"
+ }
+ },
+ "logLevel": "info",
+ "nodeName": "node1",
+ "republishMessages": false,
+ "databaseLevel": false,
+ "tls": {
+ "certificate": "/Users/hdb/keys/certificate.pem",
+ "certificateAuthority": "/Users/hdb/keys/ca.pem",
+ "privateKey": "/Users/hdb/keys/privateKey.pem",
+ "insecure": true,
+ "verify": true
+ },
+ "user": "cluster_user"
+ },
+ "componentsRoot": "/Users/hdb/components",
+ "localStudio": {
+ "enabled": false
+ },
+ "logging": {
+ "auditAuthEvents": {
+ "logFailed": false,
+ "logSuccessful": false
+ },
+ "auditLog": true,
+ "auditRetention": "3d",
+ "file": true,
+ "level": "error",
+ "root": "/Users/hdb/log",
+ "rotation": {
+ "enabled": false,
+ "compress": false,
+ "interval": null,
+ "maxSize": null,
+ "path": "/Users/hdb/log"
+ },
+ "stdStreams": false
+ },
+ "mqtt": {
+ "network": {
+ "port": 1883,
+ "securePort": 8883
+ },
+ "webSocket": true,
+ "requireAuthentication": true
+ },
+ "operationsApi": {
+ "network": {
+ "cors": true,
+ "corsAccessList": [
+ "*"
+ ],
+ "domainSocket": "/Users/hdb/operations-server",
+ "port": 9925,
+ "securePort": null
+ }
+ },
+ "rootPath": "/Users/hdb",
+ "storage": {
+ "writeAsync": false,
+ "caching": true,
+ "compression": false,
+ "noReadAhead": true,
+ "path": "/Users/hdb/database",
+ "prefetchWrites": true
+ },
+ "tls": {
+ "certificate": "/Users/hdb/keys/certificate.pem",
+ "certificateAuthority": "/Users/hdb/keys/ca.pem",
+ "privateKey": "/Users/hdb/keys/privateKey.pem"
+ }
+}
+```
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/developers/real-time.md b/site/versioned_docs/version-4.3/developers/real-time.md
new file mode 100644
index 00000000..dd2d88c9
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/real-time.md
@@ -0,0 +1,160 @@
+---
+title: Real-Time
+---
+
+# Real-Time
+
+## Real-Time
+
+HarperDB provides real-time access to data and messaging. This allows clients to monitor and subscribe to data for changes in real-time as well as handling data-oriented messaging. HarperDB supports multiple standardized protocols to facilitate diverse standards-based client interaction.
+
+HarperDB real-time communication is based around database tables. Declared tables are the basis for monitoring data, and defining "topics" for publishing and subscribing to messages. Declaring a table that establishes a topic can be as simple as adding a table with no attributes to your [schema.graphql in a HarperDB application folder](./applications/):
+```
+type MyTopic @table @export
+```
+You can then subscribe to records or sub-topics in this topic/namespace, as well as save data and publish messages, with the protocols discussed below.
+
+### Content Negotiation
+
+HarperDB is a database, not a generic broker, and therefore highly adept at handling _structured_ data. Data can be published and subscribed in all supported structured/object formats, including JSON, CBOR, and MessagePack, and the data will be stored and handled as structured data. This means that different clients can individually choose which format they prefer, both for inbound and outbound messages. One client could publish in JSON, and another client could choose to receive messages in CBOR.
+
+## Protocols
+
+### MQTT
+
+HarperDB supports MQTT as an interface to this real-time data delivery. It is important to note that MQTT in HarperDB is not just a generic pub/sub hub, but is deeply integrated with the database providing subscriptions directly to database records, and publishing to these records. In this document we will explain how MQTT pub/sub concepts are aligned and integrated with database functionality.
+
+#### Configuration
+
+HarperDB supports MQTT with its `mqtt` server module and HarperDB supports MQTT over standard TCP sockets or over WebSockets. This is enabled by default, but can be configured in your `harperdb-config.yaml` configuration, allowing you to change which ports it listens on, if secure TLS connections are used, and MQTT is accepted over WebSockets:
+
+```yaml
+mqtt:
+ network:
+ port: 1883
+ securePort: 8883 # for TLS
+ webSocket: true # will also enable WS support through the default HTTP interface/port
+ mTLS: false
+ requireAuthentication: true
+```
+
+Note that if you are using WebSockets for MQTT, the sub-protocol should be set to "mqtt" (this is required by the MQTT specification, and should be included by any conformant client): `Sec-WebSocket-Protocol: mqtt`. mTLS is also supported by enabling it in the configuration and using the certificate authority from the TLS section of the configuration. See the [configuration documentation for more information](../deployments/configuration).
+
+#### Capabilities
+
+HarperDB's MQTT capabilities include support for MQTT versions 3.1 and 5 with standard publish and subscribe capabilities, multi-level topics, QoS levels 0 and 1, and durable (non-clean) sessions. QoS 2 interaction is supported, but exactly-once delivery is not guaranteed (although any guarantee of exactly-once delivery over unstable networks is a fictional aspiration). Last will and single-level wildcards are not currently supported (only multi-level wildcards).
+
+### Topics
+
+In MQTT, messages are published to, and subscribed from, topics. In HarperDB topics are aligned with resource endpoint paths in exactly the same way as the REST endpoints. If you define a table or resource in your schema, with a path/endpoint of "my-resource", that means that this can be addressed as a topic just like a URL path. So a topic of "my-resource/some-id" would correspond to the record in the my-resource table (or custom resource) with a record id of "some-id".
+
+This means that you can subscribe to "my-resource/some-id", and this subscription will receive notification messages for any updates to that record. If the record is modified or deleted, a message will be sent to listeners of this subscription.
+
+The current value of this record is also treated as the "retained" message for this topic. When you subscribe to "my-resource/some-id", you will immediately receive the record for this id, through a "publish" command from the server, as the initial "retained" message that is first delivered. This provides a simple and effective way to get the current state of a record and future updates to that record without having to worry about timing issues of aligning a retrieval and subscription separately.
+
+Similarly, publishing a message to a "topic" also interacts with the database. Publishing a message with "retain" flag enabled is interpreted as an update or put to that record. The published message will replace the current record with the contents of the published message.
+
+If a message is published without a `retain` flag, the message will not alter the record at all, but will still be published to any subscribers to that record.
+
+HarperDB supports QoS 0 and 1 for publishing and subscribing.
+
+HarperDB supports multi-level topics, both for subscribing and publishing. HarperDB also supports multi-level wildcards, so you can subscribe to `my-resource/#` to receive notifications for `my-resource/some-id` as well as `my-resource/nested/id`, or you can subscribe to `my-resource/nested/#` and receive the latter, but not the former, topic messages. HarperDB currently only supports trailing multi-level wildcards (no single-level `+` wildcards).
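+
+As an illustration of these topic semantics, here is a minimal sketch using the third-party `mqtt` package for Node.js (a common MQTT client library, not part of HarperDB); the host, port, credentials, and topic id are assumptions for a local instance with the `MyTopic` table declared above:
+
+```javascript
+const mqtt = require('mqtt');
+
+// connect to the default (non-TLS) MQTT port with placeholder credentials
+const client = mqtt.connect('mqtt://localhost:1883', { username: 'user', password: 'password' });
+
+client.on('connect', () => {
+  // subscribing delivers the current record as the initial "retained" message,
+  // followed by a message for every subsequent change to that record
+  client.subscribe('MyTopic/some-id', { qos: 1 });
+
+  // publishing with retain updates the record itself; without retain it only notifies subscribers
+  client.publish('MyTopic/some-id', JSON.stringify({ greeting: 'hello' }), { qos: 1, retain: true });
+});
+
+client.on('message', (topic, payload) => {
+  console.log(topic, JSON.parse(payload.toString()));
+});
+```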
+
+### Ordering
+
+HarperDB is designed to be a distributed database, and an intrinsic characteristic of distributed servers is that messages may take different amounts of time to traverse the network and may arrive in a different order depending on server location and network topology. HarperDB is designed for distributed data with minimal latency, so messages are delivered to subscribers immediately when they arrive. HarperDB does not delay messages to coordinate confirmation or consensus among other nodes, which would significantly increase latency; messages are delivered as quickly as possible.
+
+As an example, let's consider message #1 is published to node A, which then sends the message to node B and node C, but the message takes a while to get there. Slightly later, while the first message is still in transit, message #2 is published to node B, which then replicates it to A and C, and because of network conditions, message #2 arrives at node C before message #1. Because HarperDB prioritizes low latency, when node C receives message #2, it immediately publishes it to all its local subscribers (it has no knowledge that message #1 is in transit).
+
+When message #1 is received by node C, what node C does with it depends on whether the message is a "retained" message (published with the retain flag set to true, or put/updated/upserted/inserted into the database) or a non-retained message. A non-retained message will be delivered to all local subscribers (even though it had been published earlier), thereby prioritizing the delivery of every message. A retained message, on the other hand, will not deliver the earlier out-of-order message to clients; HarperDB will keep the message with the latest timestamp as the "winning" record state (and it will be the retained message for any subsequent subscriptions). Retained messages maintain (eventual) consistency across the entire cluster of servers: all nodes will converge on the same message as the latest, retained message (#2 in this case).
+
+Non-retained messages are generally a good choice for applications like chat, where every message needs to be delivered even if messages might arrive out of order (the order may not be consistent across all servers). Retained messages can be thought of as "superseding" messages, and are a good fit for applications like instrument measurements (for example, temperature readings), where the priority is to provide the _latest_ temperature, older readings are not important to publish after a new reading arrives, and consistency of the most recent record across the network is important.
+
+### WebSockets
+
+WebSockets are supported through the REST interface and go through the `connect(incomingMessages)` method on resources. By default, making a WebSockets connection to a URL will subscribe to the referenced resource. For example, making a WebSocket connection to `new WebSocket('wss://server/my-resource/341')` will access the resource defined for 'my-resource' with the resource id of 341 and connect to it. On the web platform this could be:
+
+```javascript
+let ws = new WebSocket('wss://server/my-resource/341');
+ws.onmessage = (event) => {
+  // received a notification from the server
+  let data = JSON.parse(event.data);
+};
+```
+
+By default, the resources will make a subscription to that resource, monitoring any changes to the records or messages published to it, and will return events on the WebSockets connection. You can also override `connect(incomingMessages)` with your own handler. The `connect` method simply needs to return an iterable (asynchronous iterable) that represents the stream of messages to be sent to the client. One easy way to create an iterable stream is to define the `connect` method as a generator and `yield` messages as they become available. For example, a simple WebSockets echo server for a resource could be written:
+
+```javascript
+export class Echo extends Resource {
+  async *connect(incomingMessages) {
+    for await (let message of incomingMessages) { // wait for each incoming message from the client
+      // and send the message back to the client
+      yield message;
+    }
+  }
+}
+```
+
+You can also call the default `connect` and it will provide a convenient streaming iterable with events for the outgoing messages, with a `send` method that you can call to send messages on the iterable, and a `close` event for determining when the connection is closed. The incoming messages iterable is also an event emitter, and you can listen for `data` events to get the incoming messages using event style:
+
+```javascript
+export class Example extends Resource {
+  connect(incomingMessages) {
+    let outgoingMessages = super.connect();
+    let timer = setInterval(() => {
+      outgoingMessages.send({ greeting: 'hi again!' });
+    }, 1000); // send a message once a second
+    incomingMessages.on('data', (message) => {
+      // another way of echo-ing the data back to the client
+      outgoingMessages.send(message);
+    });
+    outgoingMessages.on('close', () => {
+      // make sure we end the timer once the connection is closed
+      clearInterval(timer);
+    });
+    return outgoingMessages;
+  }
+}
+```
+
+### Server Sent Events
+
+Server Sent Events (SSE) are also supported through the REST server interface, and provide a simple and efficient mechanism for web-based applications to receive real-time updates. For consistency of push delivery, SSE connections go through the `connect()` method on resources, much like WebSockets. The primary difference is that `connect` is called without any `incomingMessages` argument, since SSE is a one-directional transport mechanism. This can be used much like WebSockets, specifying a resource URL path will connect to that resource, and by default provides a stream of messages for changes and messages for that resource. For example, you can connect to receive notification in a browser for a resource like:
+
+```javascript
+let eventSource = new EventSource('https://server/my-resource/341', { withCredentials: true });
+eventSource.onmessage = (event) => {
+  // received a notification from the server
+  let data = JSON.parse(event.data);
+};
+```
+
+### MQTT Feature Support Matrix
+
+| Feature | Support |
+|--------------------------------------------------------------------|----------------------------------------------------------------|
+| Connections, protocol negotiation, and acknowledgement with v3.1.1 | :heavy_check_mark: |
+| Connections, protocol negotiation, and acknowledgement with v5 | :heavy_check_mark: |
+| Secure MQTTS | :heavy_check_mark: |
+| MQTTS over WebSockets | :heavy_check_mark: |
+| MQTT authentication via user/pass | :heavy_check_mark: |
+| MQTT authentication via mTLS | :heavy_check_mark: |
+| Publish | :heavy_check_mark: |
+| Subscribe | :heavy_check_mark: |
+| Multi-level wildcard | :heavy_check_mark: |
+| Single-level wildcard | :heavy_check_mark: |
+| QoS 0 | :heavy_check_mark: |
+| QoS 1 | :heavy_check_mark: |
+| QoS 2 | Not fully supported; the QoS 2 conversation is performed, but exactly-once delivery is not guaranteed |
+| Keep-Alive monitoring | |
+| Clean session | :heavy_check_mark: |
+| Durable session | :heavy_check_mark: |
+| Distributed durable session | |
+| Will | :heavy_check_mark: |
+| MQTT V5 User properties | |
+| MQTT V5 Will properties | |
+| MQTT V5 Connection properties | |
+| MQTT V5 Connection acknowledgement properties | |
+| MQTT V5 Publish properties | |
+| MQTT V5 Subscribe properties | |
+| MQTT V5 Ack properties | |
+| MQTT V5 AUTH command | |
+| MQTT V5 Shared Subscriptions | |
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/developers/rest.md b/site/versioned_docs/version-4.3/developers/rest.md
new file mode 100644
index 00000000..90348887
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/rest.md
@@ -0,0 +1,357 @@
+---
+title: REST
+---
+
+# REST
+
+HarperDB provides a powerful, efficient, and standard-compliant HTTP REST interface for interacting with tables and other resources. The REST interface is the recommended interface for data access, querying, and manipulation (for HTTP interactions), providing the best performance and HTTP interoperability with different clients.
+
+Resources, including tables, can be configured as RESTful endpoints. Make sure you review the [application introduction](./applications/) and [defining schemas](./applications/defining-schemas) to properly define your schemas and select which tables are exported and available through the REST interface, as tables are not exported by default. The name of the [exported](./applications/defining-schemas#export) resource defines the basis of the endpoint path available at the application HTTP server port [configured here](../deployments/configuration#http) (the default being `9926`). From there, a record id or query can be appended. Following uniform interface principles, HTTP methods define different actions with resources. The default action for each method is described below.
+
+The default path structure provides access to resources at several levels:
+
+* `/my-resource` - The root path of a resource usually has a description of the resource (like a describe operation for a table).
+* `/my-resource/` - The trailing slash in a path indicates it is a collection of the records. The root collection for a table represents all the records in a table, and usually you will append query parameters to query and search for more specific records.
+* `/my-resource/record-id` - This resource locator represents a specific record, referenced by its id. This is typically how you can retrieve, update, and delete individual records.
+* `/my-resource/record-id/` - Again, a trailing slash indicates a collection; here it is the collection of the records that begin with the specified id prefix.
+* `/my-resource/record-id/with/multiple/parts` - A record id can consist of multiple path segments.
+
+## GET
+
+These can be used to retrieve individual records or perform searches. This is handled by the Resource method `get()` (and can be overridden).
+
+### `GET /my-resource/record-id`
+
+This can be used to retrieve a record by its primary key. The response will include the record as the body.
+
+#### Caching/Conditional Requests
+
+A `GET` response for a record will include an `ETag` response header containing an encoded version (a timestamp of the last modification) of the record (or of any accessed record when used in a custom get method). On subsequent requests, a client that has a cached copy may include an `If-None-Match` request header with this tag. If the record has not been updated since then, the response will have a 304 status and no body. This facilitates significant performance gains since the response data doesn't need to be serialized and transferred over the network.
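+
+As a sketch (the path and tag value are placeholders), a conditional request looks like:
+
+```http
+GET /MyTable/123
+If-None-Match: "1702319232150"
+```
+
+If the record is unchanged, the server responds with a 304 status and no body; otherwise it returns the full record with a fresh `ETag`.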
+
+### `GET /my-resource/?property=value`
+
+This can be used to search for records by the specified property name and value. See the querying section for more information.
+
+### `GET /my-resource/record-id.property`
+
+This can be used to retrieve the specified property of the specified record.
+
+## PUT
+
+This can be used to create or update a record with the provided object/data (similar to an "upsert") with a specified key. This is handled by the Resource method `put(record)`.
+
+### `PUT /my-resource/record-id`
+
+This will create or update the record with the URL path that maps to the record's primary key. The record will be replaced with the contents of the data in the request body. The new record will exactly match the data that was sent (this will remove any properties that were present in the previous record and not included in the body). Future GETs will return the exact data that was provided by PUT (what you PUT is what you GET). For example:
+
+```http
+PUT /MyTable/123
+Content-Type: application/json
+
+{ "name": "some data" }
+```
+
+This will create or replace the record with a primary key of "123" with the object defined by the JSON in the body. This is handled by the Resource method `put()`.
+
+## DELETE
+
+This can be used to delete a record or records.
+
+### `DELETE /my-resource/record-id`
+
+This will delete a record with the given primary key. This is handled by the Resource's `delete` method. For example:
+
+```http
+DELETE /MyTable/123
+```
+
+This will delete the record with the primary key of "123".
+
+### `DELETE /my-resource/?property=value`
+
+This will delete all the records that match the provided query.
+
+## POST
+
+Generally the POST method can be used for custom actions since POST has the broadest semantics. For tables that are exposed as endpoints, this can also be used to create new records.
+
+### `POST /my-resource/`
+
+This is handled by the Resource method `post(data)`, which is a good method to extend to make various other types of modifications. Also, with a table you can create a new record without specifying a primary key, for example:
+
+```http
+POST /MyTable/
+Content-Type: application/json
+
+{ "name": "some data" }
+```
+
+This will create a new record, auto-assigning a primary key, which will be returned in the `Location` header.
+
+## Querying through URL query parameters
+
+URL query parameters provide a powerful language for specifying database queries in HarperDB. This can be used to search by a single attribute name and value, to find all records which provide value for the given property/attribute. It is important to note that this attribute must be configured to be indexed to search on it. For example:
+
+```http
+GET /my-resource/?property=value
+```
+
+We can specify multiple properties that must match:
+
+```http
+GET /my-resource/?property=value&property2=another-value
+```
+
+Note that only one of the attributes needs to be indexed for this query to execute.
+
+We can also specify different comparators, such as less than and greater than, using [FIQL](https://datatracker.ietf.org/doc/html/draft-nottingham-atompub-fiql-00) syntax. If we want to specify records with an `age` value greater than 20:
+
+```http
+GET /my-resource/?age=gt=20
+```
+
+Or less than or equal to 20:
+
+```http
+GET /my-resource/?age=le=20
+```
+
+The comparison operators include standard FIQL operators, `lt` (less than), `le` (less than or equal), `gt` (greater than), `ge` (greater than or equal), and `ne` (not equal). These comparison operators can also be combined with other query parameters with `&`. For example, if we wanted products with a category of software and price between 100 and 200, we could write:
+
+```http
+GET /Product/?category=software&price=gt=100&price=lt=200
+```
+
+Comparison operators can also be used on Date fields, however, we have to ensure that the date format is properly escaped. For example, if we are looking for a listing date greater than `2017-03-08T09:00:00.000Z` we must escape the colons as `%3A`:
+
+```
+GET /Product/?listDate=gt=2017-03-08T09%3A30%3A00.000Z
+```
+
+You can also search for attributes that start with a specific string, by using the == comparator and appending a `*` to the attribute value:
+
+```http
+GET /Product/?name==Keyboard*
+```
+
+Note that some HTTP clients may be overly aggressive in encoding query parameters, and you may need to disable extra encoding of query parameters, to ensure operators are passed through without manipulation.
+
+Here is a full list of the supported FIQL-style operators/comparators:
+* `==`: equal
+* `=lt=`: less than
+* `=le=`: less than or equal
+* `=gt=`: greater than
+* `=ge=`: greater than or equal
+* `=ne=`, `!=`: not equal
+* `=ct=`: contains the value (for strings)
+* `=sw=`, `==*`: starts with the value (for strings)
+* `=ew=`: ends with the value (for strings)
+* `=`, `===`: strict equality (no type conversion)
+* `!==`: strict inequality (no type conversion)
+
+### Unions
+Conditions can also be applied with `OR` logic, returning the union of records that match either condition. This can be specified by using the `|` operator instead of `&`. For example, to return any product with a rating of `5` _or_ a `featured` attribute that is `true`, we could write:
+```http
+GET /Product/?rating=5|featured=true
+```
+
+### Grouping of Operators
+Multiple conditions with different operators can be combined with grouping of conditions to indicate the order of operation. Grouping conditions can be done with parentheses, following the standard grouping conventions used in query and mathematical expressions. For example, a query to find products with a rating of 5 OR a price between 100 and 200 could be written:
+```http
+GET /Product/?rating=5|(price=gt=100&price=lt=200)
+```
+Grouping conditions can also be done with square brackets, which function the same as parentheses for grouping conditions. The advantage of using square brackets is that you can include user-provided values that might have parentheses in them, and use standard URI component encoding functionality, which will safely escape/encode square brackets, but not parentheses. For example, if we were constructing a query for products with a rating of 5 and matching one of a set of user-provided tags, a query could be built like:
+```http
+GET /Product/?rating=5&[tag=fast|tag=scalable|tag=efficient]
+```
+And the tags could be safely generated from user inputs in a tag array like:
+```javascript
+let url = `/Product/?rating=5&[${tags.map(tag => 'tag=' + encodeURIComponent(tag)).join('|')}]`;
+```
+More complex queries can be created by further nesting groups:
+```http
+GET /Product/?price=lt=100|[rating=5&[tag=fast|tag=scalable|tag=efficient]&inStock=true]
+```
+
+## Query Calls
+
+HarperDB has several special query functions that use "call" syntax. These can be included in the query string as its own query entry (separated from other query conditions with an `&`). These include:
+
+### `select(properties)`
+
+This function allows you to specify which properties should be included in the responses. This takes several forms:
+
+* `?select(property)`: This will return the values of the specified property directly in the response (will not be put in an object).
+* `?select(property1,property2)`: This returns the records as objects, but limited to the specified properties.
+* `?select([property1,property2,...])`: This returns the records as arrays of the property values in the specified properties.
+* `?select(property1,)`: This can be used to specify that objects should be returned with the single specified property.
+* `?select(property{subProperty1,subProperty2{subSubProperty,..}},...)`: This can be used to specify which sub-properties should be included in nested objects and joined/references records.
+
+To get a list of product names with a category of software:
+
+```http
+GET /Product/?category=software&select(name)
+```
+
+### `limit(start,end)` or `limit(end)`
+
+This function specifies a limit on the number of records returned, optionally providing a starting offset.
+
+For example, to find the first twenty records with a `rating` greater than 3, `inStock` equal to true, only returning the `rating` and `name` properties, you could use:
+
+```http
+GET /Product/?rating=gt=3&inStock=true&select(rating,name)&limit(20)
+```
+
+### `sort(property)`, `sort(+property,-property,...)`
+
+This function allows you to indicate the sort order for the returned results. The argument for `sort()` is one or more properties that should be used to sort. If the property is prefixed with '+' or has no prefix, the sort will be performed in ascending order by the indicated attribute/property. If the property is prefixed with '-', it will be sorted in descending order. If multiple properties are specified, the sort will be performed on the first property, and for records with the same value for that property, the next property will be used to break the tie and sort results. This tie breaking will continue through any provided properties.
+
+For example, to sort by product name (in ascending order):
+```http
+GET /Product?rating=gt=3&sort(+name)
+```
+To sort by rating in ascending order, then by price in descending order for products with the same rating:
+```http
+GET /Product?sort(+rating,-price)
+```
+
+# Relationships
+HarperDB supports relationships in its data models, allowing tables to define a relationship with data from other tables (or even themselves) through foreign keys. These relationships can be one-to-many, many-to-one, or many-to-many (and even ordered relationships). These relationships are defined in the schema, and can then easily be queried through chained attributes that act as "join" queries, allowing related attributes to be referenced in conditions and selected for returned results.
+
+## Chained Attributes and Joins
+To support relationships and hierarchical data structures, in addition to querying on top-level attributes, you can also query on chained attributes. Most importantly, this provides HarperDB's "join" functionality, allowing related tables to be queried and joined in the results. Chained properties are specified by using dot syntax. In order to effectively leverage join functionality, you need to define a relationship in your schema:
+```graphql
+type Product @table @export {
+ id: ID @primaryKey
+ name: String
+ brandId: ID @indexed
+ brand: Brand @relationship(from: "brandId")
+}
+type Brand @table @export {
+ id: ID @primaryKey
+ name: String
+ products: [Product] @relationship(to: "brandId")
+}
+```
+And then you could query a product by brand name:
+```http
+GET /Product/?brand.name=Microsoft
+```
+This will query for products for which the `brandId` references a `Brand` record with a `name` of `"Microsoft"`.
+
+The `brand` attribute in `Product` is a "computed" attribute from the foreign key (`brandId`), for the many-to-one relationship to the `Brand`. In the schema above, we also defined the reverse one-to-many relationship from a `Brand` to a `Product`, and we could likewise query that:
+```http
+GET /Brand/?products.name=Keyboard
+```
+This would return any `Brand` with at least one product with a name `"Keyboard"`. Note, that both of these queries are effectively acting as an "INNER JOIN".
+
+### Chained/Nested Select
+Computed relationship attributes are not included by default in query results. However, we can include them by specifying them in a select:
+```http
+GET /Product/?brand.name=Microsoft&select(name,brand)
+```
+We can also do a "nested" select and specify which sub-attributes to include. For example, if we only wanted to include the name property from the brand, we could do so:
+```http
+GET /Product/?brand.name=Microsoft&select(name,brand{name})
+```
+Or to specify multiple sub-attributes, we can comma delimit them. Note that selects can "join" to another table without any constraint/filter on the related/joined table:
+```http
+GET /Product/?name=Keyboard&select(name,brand{name,id})
+```
+When selecting properties from a related table without any constraints on the related table, this effectively acts like a "LEFT JOIN" and will omit the `brand` property if the brandId is `null` or references a non-existent brand.
+
+
+### Many-to-many Relationships (Array of Foreign Keys)
+Many-to-many relationships are also supported, and can easily be created using an array of foreign key values, without requiring the traditional use of a junction table. This can be done by simply creating a relationship on an array-typed property that references a local array of foreign keys. For example, we could create a relationship to the resellers of a product (each product can have multiple resellers, and each reseller can carry multiple products):
+
+```graphql
+type Product @table @export {
+ id: ID @primaryKey
+ name: String
+ resellerIds: [ID] @indexed
+ resellers: [Reseller] @relationship(from: "resellerIds")
+}
+type Reseller @table {
+ id: ID @primaryKey
+ name: String
+ ...
+}
+```
+The product record can then hold an array of the reseller ids. When the `resellers` property is accessed (either through code or through a select or condition), the array of ids is resolved to an array of reseller records. We can also query through the resellers relationship just like the other relationships. For example, to query the products that are available through the "Cool Shop":
+```http
+GET /Product/?resellers.name=Cool Shop&select(id,name,resellers{name,id})
+```
+One of the benefits of using an array of foreign key values is that it can be manipulated using standard array methods (in JavaScript), and the array can dictate an order for the keys and therefore for the resulting records. For example, you may wish to define a specific order for the resellers and how they are listed (which comes first, which comes last):
+```http
+PUT /Product/123
+Content-Type: application/json
+
+{ "id": "123", "resellerIds": ["first-reseller-id", "second-reseller-id", "last-reseller-id"],
+...}
+```
+
+### Type Conversion
+Query parameters are simply text, so there are several features for converting parameter values to properly typed values for performing correct searches. For the FIQL comparators, which include `==`, `!=`, `=gt=`, `=ge=`, `=lt=`, and `=le=`, the parser will perform type conversion according to the following rules:
+* `name==null`: Will convert the value to `null` for searching.
+* `name==123`: Will convert the value to a number _if_ the attribute is untyped (there is no type specified in a GraphQL schema, or the type is specified to be `Any`).
+* `name==true`: Will convert the value to a boolean _if_ the attribute is untyped (there is no type specified in a GraphQL schema, or the type is specified to be `Any`).
+* `name==number:123`: Will explicitly convert the value after "number:" to a number.
+* `name==boolean:true`: Will explicitly convert the value after "boolean:" to a boolean.
+* `name==string:some%20text`: Will explicitly keep the value after "string:" as a string (and perform URL component decoding)
+* `name==date:2024-01-05T20%3A07%3A27.955Z`: Will explicitly convert the value after "date:" to a Date object.
+
+If the attribute specifies a type (like `Float`) in the schema definition, the value will always be converted to the specified type before searching.
+
+For "strict" operators, which include `=`, `===`, and `!==`, no automatic type conversion is applied: the value is decoded as a string with URL component decoding, and type conversion is applied only if the attribute specifies a type, in which case the attribute's declared type determines the conversion.
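+
+For example, the explicit conversions above can be combined with comparators in a single query (using the hypothetical `Product` table from the earlier examples):
+
+```http
+GET /Product/?rating=ge=number:4&listDate=gt=date:2024-01-05T20%3A07%3A27.955Z
+```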
+
+### Content Types and Negotiation
+
+HTTP defines a couple of headers for indicating the (preferred) content type of the request and response. The `Content-Type` request header can be used to specify the content type of the request body (for PUT, PATCH, and POST). The `Accept` request header indicates the preferred content type of the response. For general records with object structures, HarperDB supports the following content types:
+
+* `application/json` - Common format, easy to read, with great tooling support.
+* `application/cbor` - Recommended binary format for optimal encoding efficiency and performance.
+* `application/x-msgpack` - Also an efficient format, but CBOR is preferable, as it has better streaming capabilities and faster time-to-first-byte.
+* `text/csv` - CSV lacks explicit typing and is not well suited for heterogeneous data structures, but is good for moving data to and from a spreadsheet.
+
+CBOR is generally the most efficient and powerful encoding format, with the best performance, most compact encoding, and most expansive ability to encode different data types like Dates, Maps, and Sets. MessagePack is very similar and tends to have broader adoption. However, JSON can be easier to work with and may have better tooling. Also, if you are using compression for data transfer (gzip or brotli), JSON will often result in more compact compressed data due to character frequencies that better align with Huffman coding, making JSON a good choice for web applications that do not require specific data types beyond the standard JSON types.
+
+Requesting a specific content type can also be done in a URL by suffixing the path with an extension for the content type. If you want to retrieve a record in CSV format, you could request:
+
+```http
+GET /product/some-id.csv
+```
+
+Or you could request a query response in MessagePack:
+
+```http
+GET /product/.msgpack?category=software
+```
+
+However, generally it is not recommended that you use extensions in paths and it is best practice to use the `Accept` header to specify acceptable content types.
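+
+For example, the query shown above could request a CBOR response through content negotiation instead:
+
+```http
+GET /product/?category=software
+Accept: application/cbor
+```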
+
+### Specific Content Objects
+
+You can specify other content types, and the data will be stored as a record or object that holds the type and contents of the data. For example, if you do:
+
+```
+PUT /my-resource/33
+Content-Type: text/calendar
+
+BEGIN:VCALENDAR
+VERSION:2.0
+...
+```
+
+This would store a record equivalent to JSON:
+
+```
+{ "contentType": "text/calendar", "data": "BEGIN:VCALENDAR\nVERSION:2.0\n..." }
+```
+
+Retrieving a record with `contentType` and `data` properties will likewise return a response with the specified `Content-Type` and body. If the `Content-Type` is not of the `text` family, the data will be treated as binary data (a Node.js `Buffer`).
+
+You can also use `application/octet-stream` to indicate that the request body should be preserved in binary form. This is also useful for uploading to a specific property:
+
+```
+PUT /my-resource/33/image
+Content-Type: image/gif
+
+...image data...
+```
diff --git a/site/versioned_docs/version-4.3/developers/security/basic-auth.md b/site/versioned_docs/version-4.3/developers/security/basic-auth.md
new file mode 100644
index 00000000..56367bb2
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/security/basic-auth.md
@@ -0,0 +1,62 @@
+---
+title: Basic Authentication
+---
+
+# Basic Authentication
+
+HarperDB uses Basic Auth and JSON Web Tokens (JWTs) to secure our HTTP requests. In the context of an HTTP transaction, **basic access authentication** is a method for an HTTP user agent to provide a username and password when making a request.
+
+_**You do not need to log in separately. Basic Auth is added to each HTTP request like create\_database, create\_table, insert, etc. via headers.**_
+
+A header is added to each HTTP request. The header key is **“Authorization”** and the header value is **“Basic <<your username and password buffer token>>”**.
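+
+For example, a sketch with curl (placeholder endpoint, credentials, and operation; `dXNlcm5hbWU6cGFzc3dvcmQ=` is the base64 encoding of `username:password`):
+
+```bash
+curl --location --request POST 'http://localhost:9925' \
+--header 'Content-Type: application/json' \
+--header 'Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=' \
+--data-raw '{
+    "operation": "search_by_hash",
+    "schema": "dev",
+    "table": "dog",
+    "hash_values": [1],
+    "get_attributes": ["*"]
+}'
+```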
+
+## Authentication in HarperDB Studio
+
+In the below code sample, you can see where we add the authorization header to the request. This needs to be added for each and every HTTP request for HarperDB.
+
+_Note: This function uses btoa. Learn about_ [_btoa here_](https://developer.mozilla.org/en-US/docs/Web/API/btoa)_._
+
+```javascript
+// Node.js 'http' module; use 'https' instead if your instance has HTTPS enabled
+const http = require('http');
+
+// minimal helper assumed by this example: checks whether a buffer contains valid JSON
+function isJson(body) {
+  try {
+    JSON.parse(body);
+    return true;
+  } catch (e) {
+    return false;
+  }
+}
+
+function callHarperDB(call_object, operation, callback) {
+  const options = {
+    "method": "POST",
+    "hostname": call_object.endpoint_url,
+    "port": call_object.endpoint_port,
+    "path": "/",
+    "headers": {
+      "content-type": "application/json",
+      // btoa is available globally in browsers and in modern Node.js
+      "authorization": "Basic " + btoa(call_object.username + ':' + call_object.password),
+      "cache-control": "no-cache"
+    }
+  };
+
+  const http_req = http.request(options, function (hdb_res) {
+    let chunks = [];
+
+    hdb_res.on("data", function (chunk) {
+      chunks.push(chunk);
+    });
+
+    hdb_res.on("end", function () {
+      const body = Buffer.concat(chunks);
+      if (isJson(body)) {
+        return callback(null, JSON.parse(body));
+      } else {
+        return callback(body, null);
+      }
+    });
+  });
+
+  http_req.on("error", function (chunk) {
+    return callback("Failed to connect", null);
+  });
+
+  http_req.write(JSON.stringify(operation));
+  http_req.end();
+}
+```
diff --git a/site/versioned_docs/version-4.3/developers/security/certificate-management.md b/site/versioned_docs/version-4.3/developers/security/certificate-management.md
new file mode 100644
index 00000000..eb69df74
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/security/certificate-management.md
@@ -0,0 +1,62 @@
+---
+title: Certificate Management
+---
+
+# Certificate Management
+
+This document covers managing certificates for HarperDB's external-facing APIs. For information on certificate management for clustering, see [clustering certificate management](../clustering/certificate-management).
+
+## Development
+
+An out-of-the-box install of HarperDB does not have HTTPS enabled (see [configuration](../../deployments/configuration) for the relevant configuration file settings). This is great for local development. If you are developing using a remote server and your requests are traversing the Internet, we recommend that you enable HTTPS.
+
+To enable HTTPS, set `http.securePort` in `harperdb-config.yaml` to the port you wish to use for HTTPS connections and restart HarperDB.
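+
+A minimal sketch of that setting (the port value is just an example):
+
+```yaml
+http:
+  securePort: 9927   # example HTTPS port; choose any available port
+```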
+
+By default HarperDB will generate certificates and place them at `/keys/`. These certificates will not have a valid Common Name (CN) for your HarperDB node, so you will be able to use HTTPS, but your HTTPS client must be configured to accept the invalid certificate.
+
+## Production
+
+For production deployments, in addition to using HTTPS, we recommend using your own certificate authority (CA) or a public CA such as Let's Encrypt, to generate certificates with CNs that match the Fully Qualified Domain Name (FQDN) of your HarperDB node.
+
+We have a few recommended options for enabling HTTPS in a production setting.
+
+### Option: Enable HarperDB HTTPS and Replace Certificates
+
+To enable HTTPS, set `http.securePort` in `harperdb-config.yaml` to the port you wish to use for HTTPS connections and restart HarperDB.
+
+To replace the certificates, either replace the contents of the existing certificate files at `/keys/`, or update the HarperDB configuration with the path of your new certificate files, and then restart HarperDB.
+
+```yaml
+tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+```
+
+`operationsApi.tls` configuration is optional. If it is not set, HarperDB will default to the values in the `tls` section.
+
+```yaml
+operationsApi:
+ tls:
+ certificate: ~/hdb/keys/certificate.pem
+ certificateAuthority: ~/hdb/keys/ca.pem
+ privateKey: ~/hdb/keys/privateKey.pem
+```
+
+### Option: Nginx Reverse Proxy
+
+Instead of enabling HTTPS for HarperDB, Nginx can be used as a reverse proxy for HarperDB.
+
+Install Nginx, configure Nginx to use certificates issued from your own CA or a public CA, then configure Nginx to listen for HTTPS requests and forward to HarperDB as HTTP requests.
+
+[Certbot](https://certbot.eff.org/) is a great tool for automatically requesting and renewing Let’s Encrypt certificates used by Nginx.
+
+### Option: External Reverse Proxy
+
+Instead of enabling HTTPS for HarperDB, a number of different external services can be used as a reverse proxy for HarperDB. These services typically have integrated certificate management. Configure the service to listen for HTTPS requests and forward (over a private network) to HarperDB as HTTP requests.
+
+Examples of these types of services include an AWS Application Load Balancer or a GCP external HTTP(S) load balancer.
+
+### Additional Considerations
+
+It is possible to use different certificates for the Operations API and the Custom Functions API. In scenarios where only your Custom Functions endpoints need to be exposed to the Internet and the Operations API is reserved for HarperDB administration, you may want to use a private CA to issue certificates for the Operations API and a public CA for the Custom Functions API certificates.
diff --git a/site/versioned_docs/version-4.3/developers/security/configuration.md b/site/versioned_docs/version-4.3/developers/security/configuration.md
new file mode 100644
index 00000000..67d959fd
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/security/configuration.md
@@ -0,0 +1,39 @@
+---
+title: Configuration
+---
+
+# Configuration
+
+HarperDB was set up to require very minimal configuration to work out of the box. There are, however, some best practices we encourage for anyone building an app with HarperDB.
+
+## CORS
+
+HarperDB allows for managing [cross-origin HTTP requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS). By default, HarperDB enables CORS for all domains. If you need to disable CORS completely or set up an access list of domains, you can do the following:
+
+1. Open the `harperdb-config.yaml` file, which can be found in your HarperDB root path (the location you specified during install).
+1. In `harperdb-config.yaml` there should be two entries under `operationsApi.network`: `cors` and `corsAccessList`.
+ * `cors`
+ 1. To turn off, change to: `cors: false`
+ 1. To turn on, change to: `cors: true`
+ * `corsAccessList`
+ 1. The `corsAccessList` will only be recognized by the system when `cors` is `true`
+ 1. To create an access list you set `corsAccessList` to a comma-separated list of domains.
+
+ i.e. `corsAccessList` is `http://harperdb.io,http://products.harperdb.io`
+ 1. To clear out the access list and allow all domains: `corsAccessList` is `[null]`
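+
+As a sketch, the settings described above could look like the following in `harperdb-config.yaml` (the domains are placeholders; the list form mirrors the array shown by `get_configuration`):
+
+```yaml
+operationsApi:
+  network:
+    cors: true
+    corsAccessList:
+      - http://harperdb.io
+      - http://products.harperdb.io
+```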
+
+## SSL
+
+HarperDB provides the option to use an HTTP interface, or an HTTPS interface with HTTP/2 support. The default port for the operations server is 9925.
+
+This default port can be changed by updating the `operationsApi.network.port` value in `harperdb-config.yaml`.
+
+By default, HTTPS is turned off and HTTP is turned on. It is recommended that you never directly expose HarperDB's HTTP interface through a publicly available port. HTTP is intended for local or private network use.
+
+You can toggle between HTTPS and HTTP in the settings file by setting `operationsApi.network.https` to `true` or `false`. When `https` is set to `false`, the server will use HTTP (version 1.1). Enabling HTTPS will enable both HTTPS/1.1 and HTTP/2.
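+
+A sketch of those settings together (the port shown is the default noted above):
+
+```yaml
+operationsApi:
+  network:
+    port: 9925
+    https: true
+```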
+
+HarperDB automatically generates a certificate (certificate.pem), a certificate authority (ca.pem) and a private key file (privateKey.pem) which live at `/keys/`.
+
+You can replace these with your own certificates and key.
+
+**Changes to these settings require a restart. Use the `restart` operation from the HarperDB Operations API (or run `harperdb restart` from the command line).**
diff --git a/site/versioned_docs/version-4.3/developers/security/index.md b/site/versioned_docs/version-4.3/developers/security/index.md
new file mode 100644
index 00000000..6f3ab721
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/security/index.md
@@ -0,0 +1,13 @@
+---
+title: Security
+---
+
+# Security
+
+HarperDB uses role-based, attribute-level security to ensure that users can only gain access to the data they’re supposed to be able to access. Our granular permissions allow for unparalleled flexibility and control, and can actually lower the total cost of ownership compared to other database solutions, since you no longer have to replicate subsets of your data to isolate use cases.
+
+* [JWT Authentication](./jwt-auth)
+* [Basic Authentication](./basic-auth)
+* [mTLS Authentication](./mtls-auth)
+* [Configuration](./configuration)
+* [Users and Roles](./users-and-roles)
diff --git a/site/versioned_docs/version-4.3/developers/security/jwt-auth.md b/site/versioned_docs/version-4.3/developers/security/jwt-auth.md
new file mode 100644
index 00000000..f48fe0ee
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/security/jwt-auth.md
@@ -0,0 +1,96 @@
+---
+title: JWT Authentication
+---
+
+# JWT Authentication
+
+HarperDB uses token-based authentication with JSON Web Tokens (JWTs).
+
+This consists of two primary operations `create_authentication_tokens` and `refresh_operation_token`. These generate two types of tokens, as follows:
+
+* The `operation_token` which is used to authenticate all HarperDB operations in the Bearer Token Authorization Header. The default expiry is one day.
+* The `refresh_token` which is used to generate a new `operation_token` upon expiry. This token is used in the Bearer Token Authorization Header for the `refresh_operation_token` operation only. The default expiry is thirty days.
+
+The `create_authentication_tokens` operation can be used at any time to refresh both tokens in the event that both have expired or been lost.
+
+## Create Authentication Tokens
+
+Users must initially create tokens using their HarperDB credentials. The following POST body is sent to HarperDB. No headers are required for this POST operation.
+
+```json
+{
+ "operation": "create_authentication_tokens",
+ "username": "username",
+ "password": "password"
+}
+```
+
+A full cURL example can be seen here:
+
+```bash
+curl --location --request POST 'http://localhost:9925' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+ "operation": "create_authentication_tokens",
+ "username": "username",
+ "password": "password"
+}'
+```
+
+An example expected return object is:
+
+```json
+{
+ "operation_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InVzZXJuYW1lIiwiaWF0IjoxNjA0OTc4MjAwLCJleHAiOjE2MDUwNjQ2MDAsInN1YiI6Im9wZXJhdGlvbiJ9.MpQA-9CMjA-mn-7mHyUXSuSC_-kqMqJXp_NDiKLFtbtMRbodCuY3DzH401rvy_4vb0yCELf0B5EapLVY1545sv80nxSl6FoZFxQaDWYXycoia6zHpiveR8hKlmA6_XTWHJbY2FM1HAFrdtt3yUTiF-ylkdNbPG7u7fRjTmHfsZ78gd2MNWIDkHoqWuFxIyqk8XydQpsjULf2Uacirt9FmHfkMZ-Jr_rRpcIEW0FZyLInbm6uxLfseFt87wA0TbZ0ofImjAuaW_3mYs-3H48CxP152UJ0jByPb0kHsk1QKP7YHWx1-Wce9NgNADfG5rfgMHANL85zvkv8sJmIGZIoSpMuU3CIqD2rgYnMY-L5dQN1fgfROrPMuAtlYCRK7r-IpjvMDQtRmCiNG45nGsM4DTzsa5GyDrkGssd5OBhl9gr9z9Bb5HQVYhSKIOiy72dK5dQNBklD4eGLMmo-u322zBITmE0lKaBcwYGJw2mmkYcrjDOmsDseU6Bf_zVUd9WF3FqwNkhg4D7nrfNSC_flalkxPHckU5EC_79cqoUIX2ogufBW5XgYbU4WfLloKcIpb51YTZlZfwBHlHPSyaq_guaXFaeCUXKq39_i1n0HRF_mRaxNru0cNDFT9Fm3eD7V8axFijSVAMDyQs_JR7SY483YDKUfN4l-vw-EVynImr4",
+ "refresh_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InVzZXJuYW1lIiwiaWF0IjoxNjA0OTc4MjAwLCJleHAiOjE2MDc1NzAyMDAsInN1YiI6InJlZnJlc2gifQ.acaCsk-CJWIMLGDZdGnsthyZsJfQ8ihXLyE8mTji8PgGkpbwhs7e1O0uitMgP_pGjHq2tey1BHSwoeCL49b18WyMIB10hK-q2BXGKQkykltjTrQbg7VsdFi0h57mGfO0IqAwYd55_hzHZNnyJMh4b0iPQFDwU7iTD7x9doHhZAvzElpkWbc_NKVw5_Mw3znjntSzbuPN105zlp4Niurin-_5BnukwvoJWLEJ-ZlF6hE4wKhaMB1pWTJjMvJQJE8khTTvlUN8tGxmzoaDYoe1aCGNxmDEQnx8Y5gKzVd89sylhqi54d2nQrJ2-ElfEDsMoXpR01Ps6fNDFtLTuPTp7ixj8LvgL2nCjAg996Ga3PtdvXJAZPDYCqqvaBkZZcsiqOgqLV0vGo3VVlfrcgJXQImMYRr_Inu0FCe47A93IAWuQTs-KplM1KdGJsHSnNBV6oe6QEkROJT5qZME-8xhvBYvOXqp9Znwg39bmiBCMxk26Ce66_vw06MNgoa3D5AlXPWemfdVKPZDnj_aLVjZSs0gAfFElcVn7l9yjWJOaT2Muk26U8bJl-2BEq_DSclqKHODuYM5kkPKIdE4NFrsqsDYuGxcA25rlNETFyl0q-UXj1aoz_joy5Hdnr4mFELmjnoo4jYQuakufP9xeGPsj1skaodKl0mmoGcCD6v1F60"
+}
+```
+
+## Using JWT Authentication Tokens
+
+The `operation_token` value is used to authenticate all operations in place of our standard Basic auth. In order to pass the token you will need to create a Bearer Token Authorization header like the following request:
+
+```bash
+curl --location --request POST 'http://localhost:9925' \
+--header 'Content-Type: application/json' \
+--header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InVzZXJuYW1lIiwiaWF0IjoxNjA0OTc4MjAwLCJleHAiOjE2MDUwNjQ2MDAsInN1YiI6Im9wZXJhdGlvbiJ9.MpQA-9CMjA-mn-7mHyUXSuSC_-kqMqJXp_NDiKLFtbtMRbodCuY3DzH401rvy_4vb0yCELf0B5EapLVY1545sv80nxSl6FoZFxQaDWYXycoia6zHpiveR8hKlmA6_XTWHJbY2FM1HAFrdtt3yUTiF-ylkdNbPG7u7fRjTmHfsZ78gd2MNWIDkHoqWuFxIyqk8XydQpsjULf2Uacirt9FmHfkMZ-Jr_rRpcIEW0FZyLInbm6uxLfseFt87wA0TbZ0ofImjAuaW_3mYs-3H48CxP152UJ0jByPb0kHsk1QKP7YHWx1-Wce9NgNADfG5rfgMHANL85zvkv8sJmIGZIoSpMuU3CIqD2rgYnMY-L5dQN1fgfROrPMuAtlYCRK7r-IpjvMDQtRmCiNG45nGsM4DTzsa5GyDrkGssd5OBhl9gr9z9Bb5HQVYhSKIOiy72dK5dQNBklD4eGLMmo-u322zBITmE0lKaBcwYGJw2mmkYcrjDOmsDseU6Bf_zVUd9WF3FqwNkhg4D7nrfNSC_flalkxPHckU5EC_79cqoUIX2ogufBW5XgYbU4WfLloKcIpb51YTZlZfwBHlHPSyaq_guaXFaeCUXKq39_i1n0HRF_mRaxNru0cNDFT9Fm3eD7V8axFijSVAMDyQs_JR7SY483YDKUfN4l-vw-EVynImr4' \
+--data-raw '{
+ "operation":"search_by_hash",
+ "schema":"dev",
+ "table":"dog",
+ "hash_values":[1],
+ "get_attributes": ["*"]
+}'
+```
+
+## Token Expiration
+
+`operation_token` expires at a set interval. Once it expires it will no longer be accepted by HarperDB. This duration defaults to one day, and is configurable in [harperdb-config.yaml](../../deployments/configuration). To generate a new `operation_token`, the `refresh_operation_token` operation is used, passing the `refresh_token` in the Bearer Token Authorization Header. A full cURL example can be seen here:
+
+```bash
+curl --location --request POST 'http://localhost:9925' \
+--header 'Content-Type: application/json' \
+--header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InVzZXJuYW1lIiwiaWF0IjoxNjA0OTc4MjAwLCJleHAiOjE2MDc1NzAyMDAsInN1YiI6InJlZnJlc2gifQ.acaCsk-CJWIMLGDZdGnsthyZsJfQ8ihXLyE8mTji8PgGkpbwhs7e1O0uitMgP_pGjHq2tey1BHSwoeCL49b18WyMIB10hK-q2BXGKQkykltjTrQbg7VsdFi0h57mGfO0IqAwYd55_hzHZNnyJMh4b0iPQFDwU7iTD7x9doHhZAvzElpkWbc_NKVw5_Mw3znjntSzbuPN105zlp4Niurin-_5BnukwvoJWLEJ-ZlF6hE4wKhaMB1pWTJjMvJQJE8khTTvlUN8tGxmzoaDYoe1aCGNxmDEQnx8Y5gKzVd89sylhqi54d2nQrJ2-ElfEDsMoXpR01Ps6fNDFtLTuPTp7ixj8LvgL2nCjAg996Ga3PtdvXJAZPDYCqqvaBkZZcsiqOgqLV0vGo3VVlfrcgJXQImMYRr_Inu0FCe47A93IAWuQTs-KplM1KdGJsHSnNBV6oe6QEkROJT5qZME-8xhvBYvOXqp9Znwg39bmiBCMxk26Ce66_vw06MNgoa3D5AlXPWemfdVKPZDnj_aLVjZSs0gAfFElcVn7l9yjWJOaT2Muk26U8bJl-2BEq_DSclqKHODuYM5kkPKIdE4NFrsqsDYuGxcA25rlNETFyl0q-UXj1aoz_joy5Hdnr4mFELmjnoo4jYQuakufP9xeGPsj1skaodKl0mmoGcCD6v1F60' \
+--data-raw '{
+ "operation":"refresh_operation_token"
+}'
+```
+
+This will return a new `operation_token`. An example expected return object is:
+
+```bash
+{
+ "operation_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6eyJfX2NyZWF0ZWR0aW1lX18iOjE2MDQ5NzgxODkxNTEsIl9fdXBkYXRlZHRpbWVfXyI6MTYwNDk3ODE4OTE1MSwiYWN0aXZlIjp0cnVlLCJyb2xlIjp7Il9fY3JlYXRlZHRpbWVfXyI6MTYwNDk0NDE1MTM0NywiX191cGRhdGVkdGltZV9fIjoxNjA0OTQ0MTUxMzQ3LCJpZCI6IjdiNDNlNzM1LTkzYzctNDQzYi05NGY3LWQwMzY3Njg5NDc4YSIsInBlcm1pc3Npb24iOnsic3VwZXJfdXNlciI6dHJ1ZSwic3lzdGVtIjp7InRhYmxlcyI6eyJoZGJfdGFibGUiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl9hdHRyaWJ1dGUiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl9zY2hlbWEiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl91c2VyIjp7InJlYWQiOnRydWUsImluc2VydCI6ZmFsc2UsInVwZGF0ZSI6ZmFsc2UsImRlbGV0ZSI6ZmFsc2UsImF0dHJpYnV0ZV9wZXJtaXNzaW9ucyI6W119LCJoZGJfcm9sZSI6eyJyZWFkIjp0cnVlLCJpbnNlcnQiOmZhbHNlLCJ1cGRhdGUiOmZhbHNlLCJkZWxldGUiOmZhbHNlLCJhdHRyaWJ1dGVfcGVybWlzc2lvbnMiOltdfSwiaGRiX2pvYiI6eyJyZWFkIjp0cnVlLCJpbnNlcnQiOmZhbHNlLCJ1cGRhdGUiOmZhbHNlLCJkZWxldGUiOmZhbHNlLCJhdHRyaWJ1dGVfcGVybWlzc2lvbnMiOltdfSwiaGRiX2xpY2Vuc2UiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl9pbmZvIjp7InJlYWQiOnRydWUsImluc2VydCI6ZmFsc2UsInVwZGF0ZSI6ZmFsc2UsImRlbGV0ZSI6ZmFsc2UsImF0dHJpYnV0ZV9wZXJtaXNzaW9ucyI6W119LCJoZGJfbm9kZXMiOnsicmVhZCI6dHJ1ZSwiaW5zZXJ0IjpmYWxzZSwidXBkYXRlIjpmYWxzZSwiZGVsZXRlIjpmYWxzZSwiYXR0cmlidXRlX3Blcm1pc3Npb25zIjpbXX0sImhkYl90ZW1wIjp7InJlYWQiOnRydWUsImluc2VydCI6ZmFsc2UsInVwZGF0ZSI6ZmFsc2UsImRlbGV0ZSI6ZmFsc2UsImF0dHJpYnV0ZV9wZXJtaXNzaW9ucyI6W119fX19LCJyb2xlIjoic3VwZXJfdXNlciJ9LCJ1c2VybmFtZSI6InVzZXJuYW1lIn0sImlhdCI6MTYwNDk3ODcxMywiZXhwIjoxNjA1MDY1MTEzLCJzdWIiOiJvcGVyYXRpb24ifQ.qB4FS7fzryCO5epQlFCQe4mQcUEhzXjfsXRFPgauXrGZwSeSr2o2a1tE1xjiI3qjK0r3f2bdi2xpFlDR1thdY-m0mOpHTICNOae4KdKzp7cyzRaOFurQnVYmkWjuV_Ww4PJgr6P3XDgXs5_B2d7ZVBR-BaAimYhVRIIShfpWk-4iN1XDk96TwloCkYx01BuN87o-VOvAnOG-K_EISA9RuEBpSkfUEuvHx8IU4VgfywdbhNMh6WXM0VP7ZzSpshgsS07MGjysGtZHNTVExEvFh14lyfjfqKjDoIJbo2msQwD2FvrTTb0iaQry1-Wwz9QJjVAUtid7tJuP8aBeNqvKyMIXRVnl5viFUr-Gs-Zl_WtyVvKlYWw0_rUn3ucmurK8tTy6iHyJ6XdUf4pYQebpEkIvi2rd__e_Z60V84MPvIYs6F_8CAy78aaYmUg5pihUEehIvGRj1RUZgdfaXElw90-m-M5hMOTI04LrzzVnBu7DcMYg4UC1W-WDrrj4zUq7y8_LczDA-yBC2-bkvWwLVtHLgV5yIEuIx2zAN74RQ4eCy1ffWDrVxYJBau4yiIyCc68dsatwHHH6bMK0uI9ib6Y9lsxCYjh-7MFcbP-4UBhgoDDXN9xoUToDLRqR9FTHqAHrGHp7BCdF5d6TQTVL5fmmg61MrLucOo-LZBXs1NY"
+}
+```
+
+The `refresh_token` also expires at a set interval, but a longer one. Once it expires it will no longer be accepted by HarperDB. This duration defaults to thirty days, and is configurable in [harperdb-config.yaml](../../deployments/configuration). To generate a new `operation_token` and a new `refresh_token`, the `create_authentication_tokens` operation is called.
+
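+For reference, requesting a brand new token pair looks like the call below. This is a sketch only: the URL is a placeholder for your instance's operations API endpoint, and the credentials should be replaced with your own.
+
+```
+curl --location --request POST 'https://localhost:9925' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+    "operation": "create_authentication_tokens",
+    "username": "username",
+    "password": "password"
+}'
+```
+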
+## Configuration
+
+Token timeouts are configurable in [harperdb-config.yaml](../../deployments/configuration) with the following parameters:
+
+* `operationsApi.authentication.operationTokenTimeout`: Defines the length of time until the operation\_token expires (default 1d).
+* `operationsApi.authentication.refreshTokenTimeout`: Defines the length of time until the refresh\_token expires (default 30d).
+
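+For example, in `harperdb-config.yaml` these settings live under the `operationsApi.authentication` section; a minimal sketch using the default values would look like:
+
+```yaml
+operationsApi:
+  authentication:
+    operationTokenTimeout: 1d    # lifetime of the operation_token
+    refreshTokenTimeout: 30d     # lifetime of the refresh_token
+```
+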
+A full list of valid values for both parameters can be found [here](https://github.com/vercel/ms).
diff --git a/site/versioned_docs/version-4.3/developers/security/mtls-auth.md b/site/versioned_docs/version-4.3/developers/security/mtls-auth.md
new file mode 100644
index 00000000..8c063693
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/security/mtls-auth.md
@@ -0,0 +1,7 @@
+---
+title: mTLS Authentication
+---
+
+# mTLS Authentication
+
+HarperDB supports mTLS authentication for incoming connections. When enabled in the [HTTP config settings](../../deployments/configuration#http) the client certificate will be checked against the certificate authority specified with `tls.certificateAuthority`. If the certificate can be properly verified, the connection will authenticate users where the user's id/username is specified by the `CN` (common name) from the client certificate's `subject`, by default. The [HTTP config settings](../../deployments/configuration#http) allow you to determine if mTLS is required for all connections or optional.
\ No newline at end of file
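+
+As a rough sketch, the certificate authority referenced above is configured under the `tls` section of `harperdb-config.yaml`; the file path below is illustrative, and the HTTP-level setting that makes mTLS required or optional is described in the linked HTTP config settings:
+
+```yaml
+tls:
+  certificateAuthority: /path/to/client-ca.pem   # CA used to verify client certificates (example path)
+```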
diff --git a/site/versioned_docs/version-4.3/developers/security/users-and-roles.md b/site/versioned_docs/version-4.3/developers/security/users-and-roles.md
new file mode 100644
index 00000000..d490edf0
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/security/users-and-roles.md
@@ -0,0 +1,267 @@
+---
+title: Users & Roles
+---
+
+# Users & Roles
+
+HarperDB utilizes a Role-Based Access Control (RBAC) framework to manage access to HarperDB instances. A user is assigned a role that determines the user’s permissions to access database resources and run core operations.
+
+## Roles in HarperDB
+
+Role permissions in HarperDB are broken into two categories – permissions around database manipulation and permissions around database definition.
+
+**Database Manipulation**: A role defines CRUD (create, read, update, delete) permissions against database resources (i.e. data) in a HarperDB instance.
+
+1. At the table level, access permissions must be explicitly defined when adding or altering a role – _i.e. HarperDB will assume CRUD access to be FALSE if not explicitly provided in the permissions JSON passed to the `add_role` and/or `alter_role` API operations._
+1. At the attribute level, permissions for attributes in all tables included in the permissions set are assigned based on the specific attribute-level permissions defined in the table’s permission set or, if no attribute-level permissions are defined, on the table’s CRUD set.
+
+**Database Definition**: Permissions related to managing databases, tables, roles, users, and other system settings and operations are restricted to the built-in `super_user` role.
+
+**Built-In Roles**
+
+There are three built-in roles within HarperDB. See the full breakdown of operations restricted to super\_user roles [here](./users-and-roles#role-based-operation-restrictions).
+
+* `super_user` - This role provides full access to all operations and methods within a HarperDB instance, this can be considered the admin role.
+ * This role provides full access to all Database Definition operations and the ability to run Database Manipulation operations across the entire database schema with no restrictions.
+* `cluster_user` - This role is an internal system role type that is managed internally to allow clustered instances to communicate with one another.
+ * This internally managed role facilitates communication between clustered instances.
+* `structure_user` - This role provides specific access for the creation and deletion of databases and tables.
+ * When defining this role type you can either assign a value of `true`, which allows the role to create and drop databases & tables, or assign a string array of database names, which limits the role to creating and dropping tables only in the designated databases (see the sketch below).
+
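+As a sketch, a `structure_user` role limited to a single database could be created with `add_role`. The role name and the `dev` database are illustrative, and placing `structure_user` inside the `permission` object is an assumption that mirrors the permission format shown below:
+
+```json
+{
+  "operation": "add_role",
+  "role": "structure_architect",
+  "permission": {
+    "super_user": false,
+    "structure_user": ["dev"]
+  }
+}
+```
+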
+**User-Defined Roles**
+
+In addition to built-in roles, admins (i.e. users assigned to the super\_user role) can create customized roles for other users to interact with and manipulate the data within explicitly defined tables and attributes.
+
+* Unless the user-defined role is given `super_user` permissions, permissions must be defined explicitly within the request body JSON.
+* Describe operations will return metadata for all databases, tables, and attributes that a user-defined role has CRUD permissions for.
+
+**Role Permissions**
+
+When creating a new, user-defined role in a HarperDB instance, you must provide a role name and the permissions to assign to that role. _Reminder, only super users can create and manage roles._
+
+* `role` name used to easily identify the role assigned to individual users.
+
+ _Roles can be altered/dropped based on the role name used in and returned from a successful `add_role`, `alter_role`, or `list_roles` operation._
+* `permissions` used to explicitly define CRUD access to existing table data.
+
+Example JSON for `add_role` request
+
+```json
+{
+ "operation":"add_role",
+ "role":"software_developer",
+ "permission":{
+ "super_user":false,
+ "database_name":{
+ "tables": {
+ "table_name1": {
+ "read":true,
+ "insert":true,
+ "update":true,
+ "delete":false,
+ "attribute_permissions":[
+ {
+ "attribute_name":"attribute1",
+ "read":true,
+ "insert":true,
+ "update":true
+ }
+ ]
+ },
+ "table_name2": {
+ "read":true,
+ "insert":true,
+ "update":true,
+ "delete":false,
+ "attribute_permissions":[]
+ }
+ }
+ }
+ }
+}
+```
+
+**Setting Role Permissions**
+
+There are two parts to a permissions set:
+
+* `super_user` – boolean value indicating if role should be provided super\_user access.
+
+ _If `super_user` is set to true, there should be no additional database-specific permissions values included since the role will have access to the entire database schema. If permissions are included in the body of the operation, they will be stored within HarperDB, but ignored, as super\_users have full access to the database._
+* `permissions`: Database tables that a role should have specific CRUD access to should be included in the final, database-specific `permissions` JSON.
+
+ _For user-defined roles (i.e. non-super\_user roles), blank permissions will result in the user being restricted from accessing any of the database schema._
+
+**Table Permissions JSON**
+
+Each table that a role should be given some level of CRUD permissions to must be included in the `tables` array for its database in the roles permissions JSON passed to the API (_see example above_).
+
+```json
+{
+  "table_name": {                            // the name of the table to define CRUD perms for
+    "read": boolean,                         // access to read from this table
+    "insert": boolean,                       // access to insert data into this table
+    "update": boolean,                       // access to update data in this table
+    "delete": boolean,                       // access to delete row data in this table
+    "attribute_permissions": [               // permissions for specific table attributes
+      {
+        "attribute_name": "attribute_name",  // attribute to assign permissions to
+        "read": boolean,                     // access to read this attribute from the table
+        "insert": boolean,                   // access to insert this attribute into the table
+        "update": boolean                    // access to update this attribute in the table
+      }
+    ]
+  }
+}
+```
+
+**Important Notes About Table Permissions**
+
+1. If a database and/or any of its tables are not included in the permissions JSON, the role will not have any CRUD access to the database and/or tables.
+1. If a table-level CRUD permission is set to false, any attribute-level with that same CRUD permission set to true will return an error.
+
+**Important Notes About Attribute Permissions**
+
+1. If there are attribute-specific CRUD permissions that need to be enforced on a table, those need to be explicitly described in the `attribute_permissions` array.
+1. If a non-hash attribute is given some level of CRUD access, that same access will be assigned to the table’s `hash_attribute` (also referred to as the `primary_key`), even if it is not explicitly defined in the permissions JSON.
+
+ _See table\_name1’s permission set for an example of this – even though the table’s hash attribute is not specifically defined in the attribute\_permissions array, because the role has CRUD access to ‘attribute1’, the role will have the same access to the table’s hash attribute._
+1. If attribute-level permissions are set – _i.e. attribute\_permissions.length > 0_ – any table attribute not explicitly included will be assumed to have no CRUD access (with the exception of the `hash_attribute` described in #2).
+
+ _See table\_name1’s permission set for an example of this – in this scenario, the role will have the ability to create, insert and update ‘attribute1’ and the table’s hash attribute but no other attributes on that table._
+1. If an `attribute_permissions` array is empty, the role’s access to a table’s attributes will be based on the table-level CRUD permissions.
+
+ _See table\_name2’s permission set for an example of this._
+1. The `__createdtime__` and `__updatedtime__` attributes that HarperDB manages internally can have read permissions set; if they are, any other attribute-level permissions set on them will be ignored.
+1. Please note that DELETE permissions are not included as a part of an individual attribute-level permission set. That is because it is not possible to delete individual attributes from a row, rows must be deleted in full.
+ * If a role needs the ability to delete rows from a table, that permission should be set on the table-level.
+ * The practical approach to deleting an individual attribute of a row would be to set that attribute to null via an update statement.
+
+## Role-Based Operation Restrictions
+
+The table below includes all API operations available in HarperDB and indicates whether or not the operation is restricted to super\_user roles.
+
+_Keep in mind that non-super\_user roles will also be restricted within the operations they do have access to by the database-level CRUD permissions set for the roles._
+
+| Databases and Tables | Restricted to Super\_Users |
+|----------------------| :------------------------: |
+| describe\_all | |
+| describe\_database | |
+| describe\_table | |
+| create\_database | X |
+| drop\_database | X |
+| create\_table | X |
+| drop\_table | X |
+| create\_attribute | |
+| drop\_attribute | X |
+
+| NoSQL Operations | Restricted to Super\_Users |
+| ---------------------- | :------------------------: |
+| insert | |
+| update | |
+| upsert | |
+| delete | |
+| search\_by\_hash | |
+| search\_by\_value | |
+| search\_by\_conditions | |
+
+| SQL Operations | Restricted to Super\_Users |
+| -------------- | :------------------------: |
+| select | |
+| insert | |
+| update | |
+| delete | |
+
+| Bulk Operations | Restricted to Super\_Users |
+| ---------------- | :------------------------: |
+| csv\_data\_load | |
+| csv\_file\_load | |
+| csv\_url\_load | |
+| import\_from\_s3 | |
+
+| Users and Roles | Restricted to Super\_Users |
+| --------------- | :------------------------: |
+| list\_roles | X |
+| add\_role | X |
+| alter\_role | X |
+| drop\_role | X |
+| list\_users | X |
+| user\_info | |
+| add\_user | X |
+| alter\_user | X |
+| drop\_user | X |
+
+| Clustering | Restricted to Super\_Users |
+| ----------------------- | :------------------------: |
+| cluster\_set\_routes | X |
+| cluster\_get\_routes | X |
+| cluster\_delete\_routes | X |
+| add\_node | X |
+| update\_node | X |
+| cluster\_status | X |
+| remove\_node | X |
+| configure\_cluster | X |
+
+| Components | Restricted to Super\_Users |
+| -------------------- | :------------------------: |
+| get\_components | X |
+| get\_component\_file | X |
+| set\_component\_file | X |
+| drop\_component | X |
+| add\_component | X |
+| package\_component | X |
+| deploy\_component | X |
+
+| Custom Functions | Restricted to Super\_Users |
+| ---------------------------------- | :------------------------: |
+| custom\_functions\_status | X |
+| get\_custom\_functions | X |
+| get\_custom\_function | X |
+| set\_custom\_function | X |
+| drop\_custom\_function | X |
+| add\_custom\_function\_project | X |
+| drop\_custom\_function\_project | X |
+| package\_custom\_function\_project | X |
+| deploy\_custom\_function\_project | X |
+
+| Registration | Restricted to Super\_Users |
+| ------------------ | :------------------------: |
+| registration\_info | |
+| get\_fingerprint | X |
+| set\_license | X |
+
+| Jobs | Restricted to Super\_Users |
+| ----------------------------- | :------------------------: |
+| get\_job | |
+| search\_jobs\_by\_start\_date | X |
+
+| Logs | Restricted to Super\_Users |
+| --------------------------------- | :------------------------: |
+| read\_log | X |
+| read\_transaction\_log | X |
+| delete\_transaction\_logs\_before | X |
+| read\_audit\_log | X |
+| delete\_audit\_logs\_before | X |
+
+| Utilities | Restricted to Super\_Users |
+| ----------------------- | :------------------------: |
+| delete\_records\_before | X |
+| export\_local | X |
+| export\_to\_s3 | X |
+| system\_information | X |
+| restart | X |
+| restart\_service | X |
+| get\_configuration | X |
+| configure\_cluster | X |
+
+| Token Authentication | Restricted to Super\_Users |
+| ------------------------------ | :------------------------: |
+| create\_authentication\_tokens | |
+| refresh\_operation\_token | |
+
+## Error: Must execute as User
+
+**You may have gotten an error like,** `Error: Must execute as <>`.
+
+This means that you installed HarperDB as `<>`. Because HarperDB stores files natively on the operating system, we only allow the HarperDB executable to be run by a single user. This prevents permissions issues on files.
+
+For example, if you installed as user\_a but later want to run as user\_b, user\_b may not have access to the hdb files HarperDB needs. This also keeps HarperDB more secure, as it allows you to lock files down to a specific user and prevents other users from accessing your files.
diff --git a/site/versioned_docs/version-4.3/developers/sql-guide/date-functions.md b/site/versioned_docs/version-4.3/developers/sql-guide/date-functions.md
new file mode 100644
index 00000000..2ae9addf
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/sql-guide/date-functions.md
@@ -0,0 +1,226 @@
+---
+title: SQL Date Functions
+---
+
+:::warning
+HarperDB encourages developers to utilize other querying tools over SQL for performance purposes. HarperDB SQL is intended for data investigation purposes and use cases where performance is not a priority. SQL optimizations are on our roadmap for the future.
+:::
+
+# SQL Date Functions
+
+HarperDB utilizes [Coordinated Universal Time (UTC)](https://en.wikipedia.org/wiki/Coordinated_Universal_Time) in all internal SQL operations. This means that date values passed into any of the functions below will be assumed to be in UTC or in a format that can be translated to UTC.
+
+When parsing date values passed to SQL date functions in HDB, we first check for [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) formats, then for the [RFC 2822](https://tools.ietf.org/html/rfc2822#section-3.3) date-time format, and then fall back to `new Date(date_string)` if a known format is not found.
+
+### CURRENT_DATE()
+
+Returns the current date in UTC in `YYYY-MM-DD` String format.
+
+```
+"SELECT CURRENT_DATE() AS current_date_result" returns
+ {
+ "current_date_result": "2020-04-22"
+ }
+```
+
+### CURRENT_TIME()
+
+Returns the current time in UTC in `HH:mm:ss.SSS` String format.
+
+```
+"SELECT CURRENT_TIME() AS current_time_result" returns
+ {
+ "current_time_result": "15:18:14.639"
+ }
+```
+
+### CURRENT_TIMESTAMP
+
+Referencing this variable will evaluate as the current Unix Timestamp in milliseconds.
+
+```
+"SELECT CURRENT_TIMESTAMP AS current_timestamp_result" returns
+ {
+ "current_timestamp_result": 1587568845765
+ }
+```
+### DATE([date_string])
+
+Formats and returns the date_string argument in UTC in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format.
+
+If a date_string is not provided, the function will return the current UTC date/time value in the return format defined above.
+
+```
+"SELECT DATE(1587568845765) AS date_result" returns
+ {
+ "date_result": "2020-04-22T15:20:45.765+0000"
+ }
+```
+
+```
+"SELECT DATE(CURRENT_TIMESTAMP) AS date_result2" returns
+ {
+ "date_result2": "2020-04-22T15:20:45.765+0000"
+ }
+```
+
+### DATE_ADD(date, value, interval)
+
+Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values are listed below; either string value (key or shorthand) can be passed as the interval argument.
+
+
+| Key | Shorthand |
+|--------------|-----------|
+| years | y |
+| quarters | Q |
+| months | M |
+| weeks | w |
+| days | d |
+| hours | h |
+| minutes | m |
+| seconds | s |
+| milliseconds | ms |
+
+
+```
+"SELECT DATE_ADD(1587568845765, 1, 'days') AS date_add_result" AND
+"SELECT DATE_ADD(1587568845765, 1, 'd') AS date_add_result" both return
+ {
+ "date_add_result": 1587655245765
+ }
+```
+
+```
+"SELECT DATE_ADD(CURRENT_TIMESTAMP, 2, 'years')
+AS date_add_result2" returns
+ {
+ "date_add_result2": 1650643129017
+ }
+```
+
+### DATE_DIFF(date_1, date_2[, interval])
+
+Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds.
+
+Accepted interval values:
+* years
+* months
+* weeks
+* days
+* hours
+* minutes
+* seconds
+
+```
+"SELECT DATE_DIFF(CURRENT_TIMESTAMP, 1650643129017, 'hours')
+AS date_diff_result" returns
+ {
+ "date_diff_result": -17519.753333333334
+ }
+```
+
+### DATE_FORMAT(date, format)
+
+Formats and returns a date value in the String format provided. Find more details on accepted format values in the [moment.js docs](https://momentjs.com/docs/#/displaying/format/).
+
+```
+"SELECT DATE_FORMAT(1524412627973, 'YYYY-MM-DD HH:mm:ss')
+AS date_format_result" returns
+ {
+ "date_format_result": "2018-04-22 15:57:07"
+ }
+```
+
+### DATE_SUB(date, value, interval)
+
+Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values are listed below; either string value (key or shorthand) can be passed as the interval argument.
+
+| Key | Shorthand |
+|--------------|-----------|
+| years | y |
+| quarters | Q |
+| months | M |
+| weeks | w |
+| days | d |
+| hours | h |
+| minutes | m |
+| seconds | s |
+| milliseconds | ms |
+
+
+```
+"SELECT DATE_SUB(1587568845765, 2, 'years') AS date_sub_result" returns
+ {
+ "date_sub_result": 1524410445765
+ }
+```
+
+### EXTRACT(date, date_part)
+
+Extracts and returns the date_part requested as a String value. The accepted date_part values below show the value returned for date = “2020-03-26T15:13:02.041+0000”.
+
+| date_part | Example return value |
+|--------------|----------------------|
+| year | “2020” |
+| month | “3” |
+| day | “26” |
+| hour | “15” |
+| minute | “13” |
+| second | “2” |
+| millisecond | “41” |
+
+```
+"SELECT EXTRACT(1587568845765, 'year') AS extract_result" returns
+ {
+ "extract_result": "2020"
+ }
+```
+
+### GETDATE()
+
+Returns the current Unix Timestamp in milliseconds.
+
+```
+"SELECT GETDATE() AS getdate_result" returns
+ {
+ "getdate_result": 1587568845765
+ }
+```
+
+### GET_SERVER_TIME()
+Returns the current date/time value based on the server’s timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format.
+
+```
+"SELECT GET_SERVER_TIME() AS get_server_time_result" returns
+ {
+ "get_server_time_result": "2020-04-22T15:20:45.765+0000"
+ }
+```
+
+### OFFSET_UTC(date, offset)
+Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours.
+
+```
+"SELECT OFFSET_UTC(1587568845765, 240) AS offset_utc_result" returns
+ {
+ "offset_utc_result": "2020-04-22T19:20:45.765+0400"
+ }
+```
+
+```
+"SELECT OFFSET_UTC(1587568845765, 10) AS offset_utc_result2" returns
+ {
+ "offset_utc_result2": "2020-04-23T01:20:45.765+1000"
+ }
+```
+
+### NOW()
+Returns the current Unix Timestamp in milliseconds.
+
+```
+"SELECT NOW() AS now_result" returns
+ {
+ "now_result": 1587568845765
+ }
+```
+
diff --git a/site/versioned_docs/version-4.3/developers/sql-guide/features-matrix.md b/site/versioned_docs/version-4.3/developers/sql-guide/features-matrix.md
new file mode 100644
index 00000000..7856dbfd
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/sql-guide/features-matrix.md
@@ -0,0 +1,87 @@
+---
+title: SQL Features Matrix
+---
+
+:::warning
+HarperDB encourages developers to utilize other querying tools over SQL for performance purposes. HarperDB SQL is intended for data investigation purposes and use cases where performance is not a priority. SQL optimizations are on our roadmap for the future.
+:::
+
+# SQL Features Matrix
+
+HarperDB provides access to most SQL functions, and we’re always expanding that list. Check below to see if we cover what you need. If not, feel free to [add a Feature Request](https://feedback.harperdb.io/).
+
+
+| INSERT | |
+|------------------------------------|-----|
+| Values - multiple values supported | ✔ |
+| Sub-SELECT | ✗ |
+
+| UPDATE | |
+|-----------------|-----|
+| SET | ✔ |
+| Sub-SELECT | ✗ |
+| Conditions | ✔ |
+| Date Functions* | ✔ |
+| Math Functions | ✔ |
+
+| DELETE | |
+|------------|-----|
+| FROM | ✔ |
+| Sub-SELECT | ✗ |
+| Conditions | ✔ |
+
+| SELECT | |
+|-----------------------|-----|
+| Column SELECT | ✔ |
+| Aliases | ✔ |
+| Aggregator Functions | ✔ |
+| Date Functions* | ✔ |
+| Math Functions | ✔ |
+| Constant Values | ✔ |
+| Distinct | ✔ |
+| Sub-SELECT | ✗ |
+
+| FROM | |
+|-------------------|-----|
+| Multi-table JOIN | ✔ |
+| INNER JOIN | ✔ |
+| LEFT OUTER JOIN | ✔ |
+| LEFT INNER JOIN | ✔ |
+| RIGHT OUTER JOIN | ✔ |
+| RIGHT INNER JOIN | ✔ |
+| FULL JOIN | ✔ |
+| UNION | ✗ |
+| Sub-SELECT | ✗ |
+| TOP | ✔ |
+
+| WHERE | |
+|----------------------------|-----|
+| Multi-Conditions | ✔ |
+| Wildcards | ✔ |
+| IN | ✔ |
+| LIKE | ✔ |
+| Bit-wise Operators AND, OR | ✔ |
+| Bit-wise Operators NOT | ✔ |
+| NULL | ✔ |
+| BETWEEN | ✔ |
+| EXISTS,ANY,ALL | ✔ |
+| Compare columns | ✔ |
+| Compare constants | ✔ |
+| Date Functions* | ✔ |
+| Math Functions | ✔ |
+| Sub-SELECT | ✗ |
+
+| GROUP BY | |
+|-----------------------|-----|
+| Multi-Column GROUP BY | ✔ |
+
+| HAVING | |
+|--------------------------------|-----|
+| Aggregate function conditions | ✔ |
+
+| ORDER BY | |
+|-----------------------|-----|
+| Multi-Column ORDER BY | ✔ |
+| Aliases | ✔ |
+| Date Functions* | ✔ |
+| Math Functions | ✔ |
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/developers/sql-guide/functions.md b/site/versioned_docs/version-4.3/developers/sql-guide/functions.md
new file mode 100644
index 00000000..d7957037
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/sql-guide/functions.md
@@ -0,0 +1,157 @@
+---
+title: HarperDB SQL Functions
+---
+
+:::warning
+HarperDB encourages developers to utilize other querying tools over SQL for performance purposes. HarperDB SQL is intended for data investigation purposes and use cases where performance is not a priority. SQL optimizations are on our roadmap for the future.
+:::
+
+# HarperDB SQL Functions
+
+This SQL keywords reference contains the SQL functions available in HarperDB.
+
+## Functions
+### Aggregate
+
+| Keyword | Syntax | Description |
+|-----------------|---------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|
+| AVG | AVG(_expression_) | Returns the average of a given numeric expression. |
+| COUNT | SELECT COUNT(_column_name_) FROM _database.table_ WHERE _condition_ | Returns the number of records that match the given criteria. Nulls are not counted. |
+| GROUP_CONCAT | GROUP_CONCAT(_expression_) | Returns a string of concatenated, comma-separated, non-null values from a group. Will return null when there are no non-null values. |
+| MAX | SELECT MAX(_column_name_) FROM _database.table_ WHERE _condition_ | Returns largest value in a specified column. |
+| MIN | SELECT MIN(_column_name_) FROM _database.table_ WHERE _condition_ | Returns smallest value in a specified column. |
+| SUM | SUM(_column_name_) | Returns the sum of the numeric values provided. |
+| ARRAY* | ARRAY(_expression_) | Returns a list of data as a field. |
+| DISTINCT_ARRAY* | DISTINCT_ARRAY(_expression_) | When placed around a standard ARRAY() function, returns a distinct (deduplicated) results set. |
+
+*For more information on ARRAY() and DISTINCT_ARRAY() see [this blog](https://www.harperdb.io/post/sql-queries-to-complex-objects).
+
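+As a quick illustration, aggregates can be combined with GROUP BY. This sketch uses the `dev.dog` sample table and columns from the SQL guide:
+
+```
+SELECT breed_id, COUNT(id) AS dog_count, AVG(age) AS avg_age
+FROM dev.dog
+GROUP BY breed_id
+```
+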
+### Conversion
+
+| Keyword | Syntax | Description |
+|---------|--------------------------------------------------|------------------------------------------------------------------------|
+| CAST | CAST(_expression AS datatype(length)_) | Converts a value to a specified datatype. |
+| CONVERT | CONVERT(_data_type(length), expression, style_) | Converts a value from one datatype to a different, specified datatype. |
+
+
+### Date & Time
+
+| Keyword | Syntax | Description |
+|-------------------|-----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CURRENT_DATE | CURRENT_DATE() | Returns the current date in UTC in “YYYY-MM-DD” String format. |
+| CURRENT_TIME | CURRENT_TIME() | Returns the current time in UTC in “HH:mm:ss.SSS” String format. |
+| CURRENT_TIMESTAMP | CURRENT_TIMESTAMP | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. For more information, go here. |
+| DATE | DATE([_date_string_]) | Formats and returns the date_string argument in UTC in ‘YYYY-MM-DDTHH:mm:ss.SSSZZ’ String format. If a date_string is not provided, the function will return the current UTC date/time value in the return format defined above. For more information, go here. |
+| DATE_ADD | DATE_ADD(_date, value, interval_) | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. |
+| DATE_DIFF | DATE_DIFF(_date_1, date_2[, interval]_) | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds. For more information, go here. |
+| DATE_FORMAT | DATE_FORMAT(_date, format_) | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, go here. |
+| DATE_SUB | DATE_SUB(_date, value, interval_) | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. |
+| DAY | DAY(_date_) | Returns the day of the month for the given date. |
+| DAYOFWEEK | DAYOFWEEK(_date_) | Returns the numeric value of the weekday of the date given (“YYYY-MM-DD”). NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. |
+| EXTRACT | EXTRACT(_date, date_part_) | Extracts and returns the date_part requested as a String value. For more information, go here. |
+| GETDATE | GETDATE() | Returns the current Unix Timestamp in milliseconds. |
+| GET_SERVER_TIME | GET_SERVER_TIME() | Returns the current date/time value based on the server’s timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. |
+| OFFSET_UTC | OFFSET_UTC(_date, offset_) | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. |
+| NOW | NOW() | Returns the current Unix Timestamp in milliseconds. |
+| HOUR | HOUR(_datetime_) | Returns the hour part of a given date in range of 0 to 838. |
+| MINUTE | MINUTE(_datetime_) | Returns the minute part of a time/datetime in range of 0 to 59. |
+| MONTH | MONTH(_date_) | Returns the month part for a specified date in range of 1 to 12. |
+| SECOND | SECOND(_datetime_) | Returns the seconds part of a time/datetime in range of 0 to 59. |
+| YEAR | YEAR(_date_) | Returns the year part for a specified date. |
+
+### Logical
+
+| Keyword | Syntax | Description |
+|---------|--------------------------------------------------|--------------------------------------------------------------------------------------------|
+| IF | IF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. |
+| IIF | IIF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. |
+| IFNULL | IFNULL(_expression, alt_value_) | Returns a specified value if the expression is null. |
+| NULLIF | NULLIF(_expression_1, expression_2_) | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. |
+
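+For instance, `IFNULL` can substitute a default value when a column is null. This sketch again uses the `dev.dog` sample table, and the `'unknown'` fallback is illustrative:
+
+```
+SELECT dog_name, IFNULL(owner_name, 'unknown') AS owner
+FROM dev.dog
+```
+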
+### Mathematical
+
+| Keyword | Syntax | Description |
+|---------|---------------------------------|-----------------------------------------------------------------------------------------------------|
+| ABS | ABS(_expression_) | Returns the absolute value of a given numeric expression. |
+| CEIL | CEIL(_number_) | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. |
+| EXP | EXP(_number_) | Returns e to the power of a specified number. |
+| FLOOR | FLOOR(_number_) | Returns the largest integer value that is smaller than, or equal to, a given number. |
+| RANDOM | RANDOM(_seed_) | Returns a pseudo random number. |
+| ROUND | ROUND(_number,decimal_places_) | Rounds a given number to a specified number of decimal places. |
+| SQRT | SQRT(_expression_) | Returns the square root of an expression. |
+
+
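+A trivial sketch of the call syntax against the `dev.dog` sample table (the aliases are illustrative):
+
+```
+SELECT dog_name, ABS(age) AS abs_age, ROUND(age, 0) AS rounded_age, SQRT(age) AS age_sqrt
+FROM dev.dog
+```
+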
+### String
+
+| Keyword | Syntax | Description |
+|-------------|------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CONCAT | CONCAT(_string_1, string_2, ...., string_n_) | Concatenates, or joins, two or more strings together, resulting in a single string. |
+| CONCAT_WS | CONCAT_WS(_separator, string_1, string_2, ...., string_n_) | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. |
+| INSTR | INSTR(_string_1, string_2_) | Returns the first position, as an integer, of string_2 within string_1. |
+| LEN | LEN(_string_) | Returns the length of a string. |
+| LOWER | LOWER(_string_) | Converts a string to lower-case. |
+| REGEXP | SELECT _column_name_ FROM _database.table_ WHERE _column_name_ REGEXP _pattern_ | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. |
+| REGEXP_LIKE | SELECT _column_name_ FROM _database.table_ WHERE REGEXP_LIKE(_column_name, pattern_) | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. |
+| REPLACE | REPLACE(_string, old_string, new_string_) | Replaces all instances of old_string within string with new_string. |
+| SUBSTRING | SUBSTRING(_string, string_position, length_of_substring_) | Extracts a specified amount of characters from a string. |
+| TRIM | TRIM([_character(s) FROM_] _string_) | Removes leading and trailing spaces, or specified character(s), from a string. |
+| UPPER | UPPER(_string_) | Converts a string to upper-case. |
+
+## Operators
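+For example, several string helpers can be combined in one projection. This sketch uses the `dev.dog` sample table with illustrative aliases:
+
+```
+SELECT CONCAT_WS(' ', dog_name, owner_name) AS label,
+       UPPER(dog_name) AS upper_name,
+       LEN(dog_name) AS name_length
+FROM dev.dog
+```
+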
+### Logical Operators
+
+| Keyword | Syntax | Description |
+|----------|--------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------|
+| BETWEEN | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ BETWEEN _value_1_ AND _value_2_ | (inclusive) Returns values(numbers, text, or dates) within a given range. |
+| IN | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IN(_value(s)_) | Used to specify multiple values in a WHERE clause. |
+| LIKE | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_n_ LIKE _pattern_ | Searches for a specified pattern within a WHERE clause. |
+
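+A brief sketch combining these operators in a single WHERE clause (the values are illustrative and the columns come from the `dev.dog` sample table):
+
+```
+SELECT id, dog_name, age
+FROM dev.dog
+WHERE age BETWEEN 2 AND 7
+  AND owner_name IN ('Kyle', 'Zach')
+  AND dog_name LIKE 'P%'
+```
+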
+## Queries
+### General
+
+| Keyword | Syntax | Description |
+|-----------|--------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|
+| DISTINCT | SELECT DISTINCT _column_name(s)_ FROM _database.table_ | Returns only unique values, eliminating duplicate records. |
+| FROM | FROM _database.table_ | Used to list the database(s), table(s), and any joins required for a SQL statement. |
+| GROUP BY | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ GROUP BY _column_name(s)_ ORDER BY _column_name(s)_ | Groups rows that have the same values into summary rows. |
+| HAVING | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ GROUP BY _column_name(s)_ HAVING _condition_ ORDER BY _column_name(s)_ | Filters data based on a group or aggregate function. |
+| SELECT | SELECT _column_name(s)_ FROM _database.table_ | Selects data from table. |
+| WHERE | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ | Extracts records based on a defined condition. |
+
+### Joins
+
+| Keyword | Syntax | Description |
+|---------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CROSS JOIN | SELECT _column_name(s)_ FROM _database.table_1_ CROSS JOIN _database.table_2_ | Returns a paired combination of each row from _table_1_ with each row from _table_2_. _Note: CROSS JOIN can return very large result sets and is generally considered bad practice._ |
+| FULL OUTER | SELECT _column_name(s)_ FROM _database.table_1_ FULL OUTER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ WHERE _condition_ | Returns all records when there is a match in either _table_1_ (left table) or _table_2_ (right table). |
+| [INNER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ INNER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return only matching records from _table_1_ (left table) and _table_2_ (right table). The INNER keyword is optional and does not affect the result. |
+| LEFT [OUTER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ LEFT OUTER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return all records from _table_1_ (left table) and matching data from _table_2_ (right table). The OUTER keyword is optional and does not affect the result. |
+| RIGHT [OUTER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ RIGHT OUTER JOIN _database.table_2_ ON _table_1.column_name = table_2.column_name_ | Return all records from _table_2_ (right table) and matching data from _table_1_ (left table). The OUTER keyword is optional and does not affect the result. |
+
+### Predicates
+
+| Keyword | Syntax | Description |
+|--------------|------------------------------------------------------------------------------|----------------------------|
+| IS NOT NULL | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IS NOT NULL | Tests for non-null values. |
+| IS NULL | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IS NULL | Tests for null values. |
+
+### Statements
+
+| Keyword | Syntax | Description |
+|---------|---------------------------------------------------------------------------------------------|-------------------------------------|
+| DELETE | DELETE FROM _database.table_ WHERE condition | Deletes existing data from a table. |
+| INSERT | INSERT INTO _database.table(column_name(s))_ VALUES(_value(s)_) | Inserts new records into a table. |
+| UPDATE | UPDATE _database.table_ SET _column_1 = value_1, column_2 = value_2, ....,_ WHERE _condition_ | Alters existing records in a table. |
diff --git a/site/versioned_docs/version-4.3/developers/sql-guide/index.md b/site/versioned_docs/version-4.3/developers/sql-guide/index.md
new file mode 100644
index 00000000..ae274bd3
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/sql-guide/index.md
@@ -0,0 +1,88 @@
+---
+title: SQL Guide
+---
+
+# SQL Guide
+
+:::warning
+HarperDB encourages developers to utilize other querying tools over SQL for performance purposes. HarperDB SQL is intended for data investigation purposes and use cases where performance is not a priority. SQL optimizations are on our roadmap for the future.
+:::
+
+## HarperDB SQL Guide
+
+The purpose of this guide is to describe the available functionality of HarperDB as it relates to supported SQL functionality. The SQL parser is still actively being developed; many SQL features may not yet be optimized or utilize indexes. This document will be updated as more features and functionality become available. Generally, the REST interface provides a more stable, secure, and performant interface for data interaction, but the SQL functionality can be useful for administrative ad-hoc querying and for utilizing existing SQL statements. **A high-level view of supported features can be found** [**here**](./features-matrix)**.**
+
+HarperDB adheres to the concept of databases & tables. This allows developers to isolate table structures from each other, all within one HarperDB instance.
+
+## Select
+
+HarperDB has robust SELECT support, from simple queries all the way to complex joins with multi-conditions, aggregates, grouping & ordering.
+
+All results are returned as JSON object arrays.
+
+Query for all records and attributes in the dev.dog table:
+
+```
+SELECT * FROM dev.dog
+```
+
+Query specific columns from all rows in the dev.dog table:
+
+```
+SELECT id, dog_name, age FROM dev.dog
+```
+
+Query for all records and attributes in the dev.dog table ORDERED BY age in ASC order:
+
+```
+SELECT * FROM dev.dog ORDER BY age
+```
+
+_The ORDER BY keyword sorts in ascending order by default. To sort in descending order, use the DESC keyword._
+
+## Insert
+
+HarperDB supports inserting 1 to n records into a table. The primary key must be unique (not used by any other record). If no primary key is provided, it will be assigned an auto-generated UUID. HarperDB does not support selecting from one table to insert into another at this time.
+
+```
+INSERT INTO dev.dog (id, dog_name, age, breed_id)
+ VALUES(1, 'Penny', 5, 347), (2, 'Kato', 4, 347)
+```
+
+## Update
+
+HarperDB supports updating existing table row(s) via UPDATE statements. Multiple conditions can be applied to filter the row(s) to update. At this time selecting from one table to update another is not supported.
+
+```
+UPDATE dev.dog
+ SET owner_name = 'Kyle'
+ WHERE id IN (1, 2)
+```
+
+## Delete
+
+HarperDB supports deleting records from a table with condition support.
+
+```
+DELETE FROM dev.dog
+ WHERE age < 4
+```
+
+## Joins
+
+HarperDB allows developers to join any number of tables and currently supports the following join types:
+
+* INNER JOIN
+* LEFT INNER JOIN
+* LEFT OUTER JOIN
+
+Here’s a basic example joining two tables from our Get Started example, a dogs table and a breeds table:
+
+```
+SELECT d.id, d.dog_name, d.owner_name, b.name, b.section
+ FROM dev.dog AS d
+ INNER JOIN dev.breed AS b ON d.breed_id = b.id
+ WHERE d.owner_name IN ('Kyle', 'Zach', 'Stephen')
+ AND b.section = 'Mutt'
+ ORDER BY d.dog_name
+```
diff --git a/site/versioned_docs/version-4.3/developers/sql-guide/json-search.md b/site/versioned_docs/version-4.3/developers/sql-guide/json-search.md
new file mode 100644
index 00000000..86010c5c
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/sql-guide/json-search.md
@@ -0,0 +1,177 @@
+---
+title: SQL JSON Search
+---
+
+:::warning
+HarperDB encourages developers to utilize other querying tools over SQL for performance purposes. HarperDB SQL is intended for data investigation purposes and use cases where performance is not a priority. SQL optimizations are on our roadmap for the future.
+:::
+
+# SQL JSON Search
+
+HarperDB automatically indexes all top level attributes in a row / object written to a table. However, any attributes which hold JSON data do not have their nested attributes indexed. In order to make searching and/or transforming these JSON documents easy, HarperDB offers a special SQL function called SEARCH\_JSON. The SEARCH\_JSON function works in SELECT & WHERE clauses, allowing queries to perform powerful filtering on any element of your JSON by implementing the [JSONata library](http://docs.jsonata.org/overview.html) into our SQL engine.
+
+## Syntax
+
+SEARCH\_JSON(_expression, attribute_)
+
+Executes the supplied string _expression_ against data of the defined top level _attribute_ for each row. The expression both filters and defines output from the JSON document.
+
+### Example 1
+
+#### Search a string array
+
+Here are two records in the database:
+
+```json
+[
+ {
+ "id": 1,
+ "name": ["Harper", "Penny"]
+ },
+ {
+ "id": 2,
+ "name": ["Penny"]
+ }
+]
+```
+
+Here is a simple query that gets any record with "Harper" found in the name.
+
+```
+SELECT *
+FROM dev.dog
+WHERE search_json('"Harper" in *', name)
+```
+
+### Example 2
+
+The purpose of this query is to give us every movie where at least two of our favorite actors from Marvel films have acted together. The results will return the movie title, the overview, release date and an object array of the actor’s name and their character name in the movie.
+
+Both function calls evaluate the credits.cast attribute; this attribute is an object array of every cast member in a movie.
+
+```
+SELECT m.title,
+ m.overview,
+ m.release_date,
+ SEARCH_JSON($[name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L. Jackson", "Gwyneth Paltrow", "Don Cheadle"]].{"actor": name, "character": character}, c.`cast`) AS characters
+FROM movies.credits c
+ INNER JOIN movies.movie m
+ ON c.movie_id = m.id
+WHERE SEARCH_JSON($count($[name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L. Jackson", "Gwyneth Paltrow", "Don Cheadle"]]), c.`cast`) >= 2
+```
+
+A sample of this data from the movie The Avengers looks like this:
+
+```json
+[
+ {
+ "cast_id": 46,
+ "character": "Tony Stark / Iron Man",
+ "credit_id": "52fe4495c3a368484e02b251",
+ "gender": "male",
+ "id": 3223,
+ "name": "Robert Downey Jr.",
+ "order": 0
+ },
+ {
+ "cast_id": 2,
+ "character": "Steve Rogers / Captain America",
+ "credit_id": "52fe4495c3a368484e02b19b",
+ "gender": "male",
+ "id": 16828,
+ "name": "Chris Evans",
+ "order": 1
+ },
+ {
+ "cast_id": 307,
+ "character": "Bruce Banner / The Hulk",
+ "credit_id": "5e85e8083344c60015411cfa",
+ "gender": "male",
+ "id": 103,
+ "name": "Mark Ruffalo",
+ "order": 2
+ }
+]
+```
+
+Let’s break down the SEARCH\_JSON function call in the SELECT:
+
+```
+SEARCH_JSON(
+ $[name in [
+ "Robert Downey Jr.",
+ "Chris Evans",
+ "Scarlett Johansson",
+ "Mark Ruffalo",
+ "Chris Hemsworth",
+ "Jeremy Renner",
+ "Clark Gregg",
+ "Samuel L. Jackson",
+ "Gwyneth Paltrow",
+ "Don Cheadle"
+ ]].{
+ "actor": name,
+ "character": character
+ },
+ c.`cast`
+)
+```
+
+The first argument passed to SEARCH\_JSON is the expression to execute against the second argument, which is the cast attribute on the credits table. This expression will execute for every row. Looking into the expression, it starts with `$[…]`, which tells the expression to iterate over all elements of the cast array.
+
+Then the expression tells the function to only return entries where the name attribute matches any of the actors defined in the array:
+
+```
+name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L. Jackson", "Gwyneth Paltrow", "Don Cheadle"]
+```
+
+So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{"actor": name, "character": character}`. This tells the function to create a specific object for each matching entry.
+
+**Sample Result**
+
+```json
+[
+ {
+ "actor": "Robert Downey Jr.",
+ "character": "Tony Stark / Iron Man"
+ },
+ {
+ "actor": "Chris Evans",
+ "character": "Steve Rogers / Captain America"
+ },
+ {
+ "actor": "Mark Ruffalo",
+ "character": "Bruce Banner / The Hulk"
+ }
+]
+```
+
+Just having the SEARCH\_JSON function in our SELECT is powerful, but given our criteria it would still return every other movie that doesn’t have our matching actors. In order to filter out the movies we do not want, we also use SEARCH\_JSON in the WHERE clause.
+
+This function call in the WHERE clause is similar, but we don’t need to perform the same transformation as occurred in the SELECT:
+
+```
+SEARCH_JSON(
+ $count(
+ $[name in [
+ "Robert Downey Jr.",
+ "Chris Evans",
+ "Scarlett Johansson",
+ "Mark Ruffalo",
+ "Chris Hemsworth",
+ "Jeremy Renner",
+ "Clark Gregg",
+ "Samuel L. Jackson",
+ "Gwyneth Paltrow",
+ "Don Cheadle"
+ ]]
+ ),
+ c.`cast`
+) >= 2
+```
+
+As seen above, we execute the same name filter against the cast array; the primary difference is that we wrap the filtered results in $count(…). This returns a count of the matching entries, which we then compare against our SQL condition of >= 2.
+
+To see further SEARCH\_JSON examples in action, view our Postman Collection, which provides a [sample database & data with query examples](../operations-api/advanced-json-sql-examples).
+
+To learn more about how to build expressions check out the JSONata documentation: [http://docs.jsonata.org/overview](http://docs.jsonata.org/overview)
diff --git a/site/versioned_docs/version-4.3/developers/sql-guide/reserved-word.md b/site/versioned_docs/version-4.3/developers/sql-guide/reserved-word.md
new file mode 100644
index 00000000..3794a7ae
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/sql-guide/reserved-word.md
@@ -0,0 +1,207 @@
+---
+title: HarperDB SQL Reserved Words
+---
+
+:::warning
+HarperDB encourages developers to utilize other querying tools over SQL for performance purposes. HarperDB SQL is intended for data investigation purposes and use cases where performance is not a priority. SQL optimizations are on our roadmap for the future.
+:::
+
+# HarperDB SQL Reserved Words
+
+This is a list of reserved words in the SQL Parser. Use of these words or symbols may result in unexpected behavior or inaccessible tables/attributes. If any of these words must be used, any SQL call referencing a database, table, or attribute must have backticks (`…`) or brackets ([…]) around the variable.
+
+For Example, for a table called `ASSERT` in the `data` database, a SQL select on that table would look like:
+
+```
+SELECT * from data.`ASSERT`
+```
+
+Alternatively:
+
+```
+SELECT * from data.[ASSERT]
+```
+
+### RESERVED WORD LIST
+
+* ABSOLUTE
+* ACTION
+* ADD
+* AGGR
+* ALL
+* ALTER
+* AND
+* ANTI
+* ANY
+* APPLY
+* ARRAY
+* AS
+* ASSERT
+* ASC
+* ATTACH
+* AUTOINCREMENT
+* AUTO_INCREMENT
+* AVG
+* BEGIN
+* BETWEEN
+* BREAK
+* BY
+* CALL
+* CASE
+* CAST
+* CHECK
+* CLASS
+* CLOSE
+* COLLATE
+* COLUMN
+* COLUMNS
+* COMMIT
+* CONSTRAINT
+* CONTENT
+* CONTINUE
+* CONVERT
+* CORRESPONDING
+* COUNT
+* CREATE
+* CROSS
+* CUBE
+* CURRENT_TIMESTAMP
+* CURSOR
+* DATABASE
+* DECLARE
+* DEFAULT
+* DELETE
+* DELETED
+* DESC
+* DETACH
+* DISTINCT
+* DOUBLEPRECISION
+* DROP
+* ECHO
+* EDGE
+* END
+* ENUM
+* ELSE
+* EXCEPT
+* EXISTS
+* EXPLAIN
+* FALSE
+* FETCH
+* FIRST
+* FOREIGN
+* FROM
+* GO
+* GRAPH
+* GROUP
+* GROUPING
+* HAVING
+* HDB_HASH
+* HELP
+* IF
+* IDENTITY
+* IS
+* IN
+* INDEX
+* INNER
+* INSERT
+* INSERTED
+* INTERSECT
+* INTO
+* JOIN
+* KEY
+* LAST
+* LET
+* LEFT
+* LIKE
+* LIMIT
+* LOOP
+* MATCHED
+* MATRIX
+* MAX
+* MERGE
+* MIN
+* MINUS
+* MODIFY
+* NATURAL
+* NEXT
+* NEW
+* NOCASE
+* NO
+* NOT
+* NULL
+* OFF
+* ON
+* ONLY
+* OFFSET
+* OPEN
+* OPTION
+* OR
+* ORDER
+* OUTER
+* OVER
+* PATH
+* PARTITION
+* PERCENT
+* PLAN
+* PRIMARY
+* PRINT
+* PRIOR
+* QUERY
+* READ
+* RECORDSET
+* REDUCE
+* REFERENCES
+* RELATIVE
+* REPLACE
+* REMOVE
+* RENAME
+* REQUIRE
+* RESTORE
+* RETURN
+* RETURNS
+* RIGHT
+* ROLLBACK
+* ROLLUP
+* ROW
+* SCHEMA
+* SCHEMAS
+* SEARCH
+* SELECT
+* SEMI
+* SET
+* SETS
+* SHOW
+* SOME
+* SOURCE
+* STRATEGY
+* STORE
+* SYSTEM
+* SUM
+* TABLE
+* TABLES
+* TARGET
+* TEMP
+* TEMPORARY
+* TEXTSTRING
+* THEN
+* TIMEOUT
+* TO
+* TOP
+* TRAN
+* TRANSACTION
+* TRIGGER
+* TRUE
+* TRUNCATE
+* UNION
+* UNIQUE
+* UPDATE
+* USE
+* USING
+* VALUE
+* VERTEX
+* VIEW
+* WHEN
+* WHERE
+* WHILE
+* WITH
+* WORK
diff --git a/site/versioned_docs/version-4.3/developers/sql-guide/sql-geospatial-functions.md b/site/versioned_docs/version-4.3/developers/sql-guide/sql-geospatial-functions.md
new file mode 100644
index 00000000..df398174
--- /dev/null
+++ b/site/versioned_docs/version-4.3/developers/sql-guide/sql-geospatial-functions.md
@@ -0,0 +1,384 @@
+---
+title: SQL Geospatial Functions
+---
+
+:::warning
+HarperDB encourages developers to utilize other querying tools over SQL for performance purposes. HarperDB SQL is intended for data investigation purposes and use cases where performance is not a priority. SQL optimizations are on our roadmap for the future.
+:::
+
+# SQL Geospatial Functions
+
+HarperDB geospatial features require data to be stored in a single column using the [GeoJSON standard](http://geojson.org/), a standard commonly used in geospatial technologies. Geospatial functions are available to be used in SQL statements.
+
+
+
+If you are new to GeoJSON you should check out the full specification here: http://geojson.org/. There are a few important things to point out before getting started.
+
+
+
+1) All GeoJSON coordinates are stored in `[longitude, latitude]` format.
+2) Coordinates or GeoJSON geometries must be passed as strings when written directly in a SQL statement.
+3) If you are using Postman for your testing, note that due to limitations in the Postman client you will need to escape quotes in your strings, and your SQL will need to be passed on a single line.
+
+
+In the examples that follow, database and table names may change, but all GeoJSON data will be stored in a column named geo_data.
+
+# geoArea
+
+The geoArea() function returns the area of one or more features in square meters.
+
+## Syntax
+geoArea(_geoJSON_)
+
+## Parameters
+| Parameter | Description |
+|-----------|---------------------------------|
+| geoJSON | Required. One or more features. |
+
+### Example 1
+Calculate the area, in square meters, of a manually passed GeoJSON polygon.
+
+```
+SELECT geoArea('{
+ "type":"Feature",
+ "geometry":{
+ "type":"Polygon",
+ "coordinates":[[
+ [0,0],
+ [0.123456,0],
+ [0.123456,0.123456],
+ [0,0.123456]
+ ]]
+ }
+}')
+```
+
+### Example 2
+Find all records that have an area less than 1 square mile (or 2589988 square meters).
+
+```
+SELECT * FROM dev.locations
+WHERE geoArea(geo_data) < 2589988
+```
+
+# geoLength
+Takes a GeoJSON and measures its length in the specified units (default is kilometers).
+
+## Syntax
+geoLength(_geoJSON_[_, units_])
+
+## Parameters
+| Parameter | Description |
+|------------|-----------------------------------------------------------------------------------------------------------------------|
+| geoJSON | Required. GeoJSON to measure. |
+| units | Optional. Specified as a string. Options are ‘degrees’, ‘radians’, ‘miles’, or ‘kilometers’. Default is ‘kilometers’. |
+
+### Example 1
+Calculate the length, in kilometers, of a manually passed GeoJSON linestring.
+
+```
+SELECT geoLength('{
+ "type": "Feature",
+ "geometry": {
+ "type": "LineString",
+ "coordinates": [
+ [-104.97963309288025,39.76163265441438],
+ [-104.9823260307312,39.76365323407955],
+ [-104.99193906784058,39.75616442110704]
+ ]
+ }
+}')
+```
+
+### Example 2
+Find all data plus the calculated length in miles of the GeoJSON, restrict the response to only lengths less than 5 miles, and return the data in order of lengths smallest to largest.
+
+```
+SELECT *, geoLength(geo_data, 'miles') as length
+FROM dev.locations
+WHERE geoLength(geo_data, 'miles') < 5
+ORDER BY length ASC
+```
+# geoDifference
+Returns a new polygon with the difference of the second polygon clipped from the first polygon.
+
+## Syntax
+geoDifference(_polygon1, polygon2_)
+
+## Parameters
+| Parameter | Description |
+|------------|----------------------------------------------------------------------------|
+| polygon1 | Required. Polygon or MultiPolygon GeoJSON feature. |
+| polygon2 | Required. Polygon or MultiPolygon GeoJSON feature to remove from polygon1. |
+
+### Example
+Return a GeoJSON Polygon that removes City Park (_polygon2_) from Colorado (_polygon1_).
+
+```
+SELECT geoDifference('{
+ "type": "Feature",
+ "properties": {
+ "name":"Colorado"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [[
+ [-109.072265625,37.00255267215955],
+ [-102.01904296874999,37.00255267215955],
+ [-102.01904296874999,41.0130657870063],
+ [-109.072265625,41.0130657870063],
+ [-109.072265625,37.00255267215955]
+ ]]
+ }
+ }',
+ '{
+ "type": "Feature",
+ "properties": {
+ "name":"City Park"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [[
+ [-104.95973110198975,39.7543828214657],
+ [-104.95955944061278,39.744781185675386],
+ [-104.95904445648193,39.74422022399989],
+ [-104.95835781097412,39.74402223643582],
+ [-104.94097709655762,39.74392324244047],
+ [-104.9408483505249,39.75434982844515],
+ [-104.95973110198975,39.7543828214657]
+ ]]
+ }
+ }'
+)
+```
+
+# geoDistance
+Calculates the distance between two points in units (default is kilometers).
+
+## Syntax
+geoDistance(_point1, point2_[_, units_])
+
+## Parameters
+| Parameter | Description |
+|------------|-----------------------------------------------------------------------------------------------------------------------|
+| point1 | Required. GeoJSON Point specifying the origin. |
+| point2 | Required. GeoJSON Point specifying the destination. |
+| units | Optional. Specified as a string. Options are ‘degrees’, ‘radians’, ‘miles’, or ‘kilometers’. Default is ‘kilometers’. |
+
+### Example 1
+Calculate the distance, in miles, between HarperDB’s headquarters and the Washington Monument.
+
+```
+SELECT geoDistance('[-104.979127,39.761563]', '[-77.035248,38.889475]', 'miles')
+```
+
+### Example 2
+Find all locations that are within 40 kilometers of a given point, return that distance in miles, and sort by distance in an ascending order.
+
+```
+SELECT *, geoDistance('[-104.979127,39.761563]', geo_data, 'miles') as distance
+FROM dev.locations
+WHERE geoDistance('[-104.979127,39.761563]', geo_data, 'kilometers') < 40
+ORDER BY distance ASC
+```
+
+# geoNear
+Determines if point1 and point2 are within a specified distance from each other; default units are kilometers. Returns a Boolean.
+
+## Syntax
+geoNear(_point1, point2, distance_[_, units_])
+
+## Parameters
+| Parameter | Description |
+|------------|-----------------------------------------------------------------------------------------------------------------------|
+| point1 | Required. GeoJSON Point specifying the origin. |
+| point2 | Required. GeoJSON Point specifying the destination. |
+| distance | Required. The maximum distance in units as an integer or decimal. |
+| units | Optional. Specified as a string. Options are ‘degrees’, ‘radians’, ‘miles’, or ‘kilometers’. Default is ‘kilometers’. |
+
+### Example 1
+Return all locations within 50 miles of a given point.
+
+```
+SELECT *
+FROM dev.locations
+WHERE geoNear('[-104.979127,39.761563]', geo_data, 50, 'miles')
+```
+
+### Example 2
+Return all locations within 2 degrees of latitude/longitude of a given point (each degree is roughly 69 miles [111 kilometers]). Return all data and the distance in miles, sorted by ascending distance.
+
+```
+SELECT *, geoDistance('[-104.979127,39.761563]', geo_data, 'miles') as distance
+FROM dev.locations
+WHERE geoNear('[-104.979127,39.761563]', geo_data, 2, 'degrees')
+ORDER BY distance ASC
+```
+
+# geoContains
+Determines if geo2 is completely contained by geo1. Returns a Boolean.
+
+## Syntax
+geoContains(_geo1, geo2_)
+
+## Parameters
+| Parameter | Description |
+|------------|-----------------------------------------------------------------------------------|
+| geo1 | Required. Polygon or MultiPolygon GeoJSON feature. |
+| geo2 | Required. Polygon or MultiPolygon GeoJSON feature tested to be contained by geo1. |
+
+### Example 1
+Return all locations within the state of Colorado (passed as a GeoJSON string).
+
+```
+SELECT *
+FROM dev.locations
+WHERE geoContains('{
+ "type": "Feature",
+ "properties": {
+ "name":"Colorado"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [[
+ [-109.072265625,37.00255267],
+ [-102.01904296874999,37.00255267],
+ [-102.01904296874999,41.01306579],
+ [-109.072265625,41.01306579],
+ [-109.072265625,37.00255267]
+ ]]
+ }
+}', geo_data)
+```
+
+### Example 2
+Return all locations which contain HarperDB Headquarters.
+
+```
+SELECT *
+FROM dev.locations
+WHERE geoContains(geo_data, '{
+ "type": "Feature",
+ "properties": {
+ "name": "HarperDB Headquarters"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [[
+ [-104.98060941696167,39.760704817357905],
+ [-104.98053967952728,39.76065120861263],
+ [-104.98055577278137,39.760642961109674],
+ [-104.98037070035934,39.76049450588716],
+ [-104.9802714586258,39.76056254790385],
+ [-104.9805235862732,39.76076461167841],
+ [-104.98060941696167,39.760704817357905]
+ ]]
+ }
+}')
+```
+
+# geoEqual
+Determines if two GeoJSON features are the same type and have identical X,Y coordinate values. For more information see https://developers.arcgis.com/documentation/spatial-references/. Returns a Boolean.
+
+## Syntax
+geoEqual(_geo1_, _geo2_)
+
+## Parameters
+| Parameter | Description |
+|------------|----------------------------------------|
+| geo1 | Required. GeoJSON geometry or feature. |
+| geo2 | Required. GeoJSON geometry or feature. |
+
+### Example
+Find the HarperDB Headquarters record among all locations in the database.
+
+```
+SELECT *
+FROM dev.locations
+WHERE geoEqual(geo_data, '{
+ "type": "Feature",
+ "properties": {
+ "name": "HarperDB Headquarters"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [[
+ [-104.98060941696167,39.760704817357905],
+ [-104.98053967952728,39.76065120861263],
+ [-104.98055577278137,39.760642961109674],
+ [-104.98037070035934,39.76049450588716],
+ [-104.9802714586258,39.76056254790385],
+ [-104.9805235862732,39.76076461167841],
+ [-104.98060941696167,39.760704817357905]
+ ]]
+ }
+}')
+```
+
+# geoCrosses
+Determines if the geometries cross over each other. Returns a Boolean.
+
+## Syntax
+geoCrosses(_geo1, geo2_)
+
+## Parameters
+| Parameter | Description |
+|------------|-----------------------------------------|
+| geo1 | Required. GeoJSON geometry or feature. |
+| geo2 | Required. GeoJSON geometry or feature. |
+
+### Example
+Find all locations that cross over a highway.
+
+```
+SELECT *
+FROM dev.locations
+WHERE geoCrosses(
+ geo_data,
+ '{
+ "type": "Feature",
+ "properties": {
+ "name": "Highway I-25"
+ },
+ "geometry": {
+ "type": "LineString",
+ "coordinates": [
+ [-104.9139404296875,41.00477542222947],
+ [-105.0238037109375,39.715638134796336],
+ [-104.853515625,39.53370327008705],
+ [-104.853515625,38.81403111409755],
+ [-104.61181640625,38.39764411353178],
+ [-104.8974609375,37.68382032669382],
+ [-104.501953125,37.00255267215955]
+ ]
+ }
+ }'
+)
+```
+
+# geoConvert
+
+Converts a series of coordinates into a GeoJSON of the specified type.
+
+## Syntax
+geoConvert(_coordinates, geo_type_[, _properties_])
+
+## Parameters
+| Parameter | Description |
+|--------------|------------------------------------------------------------------------------------------------------------------------------------|
+| coordinates  | Required. One or more coordinates.                                                                                                   |
+| geo_type     | Required. GeoJSON geometry type. Options are ‘point’, ‘lineString’, ‘multiLineString’, ‘multiPoint’, ‘multiPolygon’, and ‘polygon’.  |
+| properties | Optional. Escaped JSON array with properties to be added to the GeoJSON output. |
+
+### Example
+Convert a given coordinate into a GeoJSON point with specified properties.
+
+```
+SELECT geoConvert(
+ '[-104.979127,39.761563]',
+ 'point',
+ '{
+ "name": "HarperDB Headquarters"
+ }'
+)
+```
diff --git a/site/versioned_docs/version-4.3/getting-started.md b/site/versioned_docs/version-4.3/getting-started.md
new file mode 100644
index 00000000..fa4edb5d
--- /dev/null
+++ b/site/versioned_docs/version-4.3/getting-started.md
@@ -0,0 +1,84 @@
+---
+title: Getting Started
+---
+
+# Getting Started
+
+HarperDB is designed for quick and simple setup and deployment, with smart defaults that lead to fast, scalable, and globally distributed database applications.
+
+You can easily create a HarperDB database in the cloud through our studio or install it locally. The quickest way to get HarperDB up and running is with [HarperDB Cloud](./deployments/harperdb-cloud/), our database-as-a-service offering. However, HarperDB is a [database application platform](./developers/applications/), and to leverage HarperDB’s full application development capabilities of defining schemas, endpoints, messaging, and gateway capabilities, you may wish to install and run HarperDB locally so that you can use your standard local IDE tools, debugging, and version control.
+
+### Installing a HarperDB Instance
+
+You can simply install HarperDB with npm (or yarn, or other package managers):
+
+```shell
+npm install -g harperdb
+```
+
+Here we installed HarperDB globally (and we recommend this) to make it easy to run a single HarperDB instance with multiple projects, but you can install it locally (not globally) as well.
+
+You can run HarperDB by running:
+
+```shell
+harperdb
+```
+
+You can now use HarperDB as a standalone database. You can also create a cloud instance (see below), which is another easy way to get started.
+
+#### Developing Database Applications with HarperDB
+
+HarperDB is more than just a database; with HarperDB you build "database applications" that package your schema, endpoints, and application logic together. You can then deploy your application to an entire cluster of HarperDB instances, ready to scale to on-the-edge delivery of data and application endpoints directly to your users. To get started with HarperDB, take a look at our application development guide, with quick and easy examples:
+
+[Database application development guide](./developers/applications/)
+
+### Setting up a Cloud Instance
+
+To set up a HarperDB cloud instance, simply sign up and create a new instance:
+
+1. [Sign up for the HarperDB Studio](https://studio.harperdb.io/sign-up)
+1. [Create a new HarperDB Cloud instance](./administration/harperdb-studio/instances#create-a-new-instance)
+
+Note that a local instance and cloud instance are not mutually exclusive. You can register your local instance in the HarperDB Studio, and a common development flow is to develop locally and then deploy your application to your cloud instance.
+
+HarperDB Cloud instance provisioning typically takes 5-15 minutes. You will receive an email notification when your instance is ready.
+
+#### Using the HarperDB Studio
+
+Now that you have a HarperDB instance, if you want to use HarperDB as a standalone database, you can fully administer and interact with your database through the Studio. This section links to appropriate articles to get you started interacting with your data.
+
+1. [Create a database](./administration/harperdb-studio/manage-databases-browse-data#create-a-database)
+1. [Create a table](./administration/harperdb-studio/manage-databases-browse-data#create-a-table)
+1. [Add a record](./administration/harperdb-studio/manage-databases-browse-data#add-a-record)
+1. [Load CSV data](./administration/harperdb-studio/manage-databases-browse-data#load-csv-data) (Here’s a sample CSV of the HarperDB team’s dogs)
+1. [Query data via SQL](./administration/harperdb-studio/query-instance-data)
+
+## Administering HarperDB
+
+If you are deploying and administering HarperDB, you may want to look at our [configuration documentation](./deployments/configuration) and our administrative operations API below.
+
+### HarperDB APIs
+
+The preferred way to interact with HarperDB for typical querying, accessing, and updating data (CRUD) operations is through the REST interface, described in the [REST documentation](./developers/rest).
+
+The Operations API provides extensive administrative capabilities for HarperDB, and the [Operations API documentation has usage and examples](./developers/operations-api/). Generally it is recommended that you use the RESTful interface as your primary interface for performant data access, querying, and manipulation (DML) for building production applications (under heavy load), and the operations API (and SQL) for data definition (DDL) and administrative purposes.
+
+The HarperDB Operations API is a single endpoint, which means the only thing that needs to change across different calls is the body. For example purposes, a basic cURL command is shown below to create a database called dev. To change this behavior, swap out the operation in the `data-raw` body parameter.
+
+```
+curl --location --request POST 'https://instance-subdomain.harperdbcloud.com' \
+--header 'Authorization: Basic YourBase64EncodedInstanceUser:Pass' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+ "operation": "create_schema",
+ "database": "dev"
+}'
+```
+
+## Support and Learning More
+
+If you find yourself in need of additional support you can submit a [HarperDB support ticket](https://harperdbhelp.zendesk.com/hc/en-us/requests/new). You can also learn more about available HarperDB projects by searching [GitHub](https://github.com/search?q=harperdb).
+
+### Video Tutorials
+
+[HarperDB video tutorials are available on our YouTube channel](https://www.youtube.com/@harperdbio). HarperDB and the HarperDB Studio are constantly changing; as such, there may be small discrepancies in UI/UX.
diff --git a/site/versioned_docs/version-4.3/index.md b/site/versioned_docs/version-4.3/index.md
new file mode 100644
index 00000000..780b75aa
--- /dev/null
+++ b/site/versioned_docs/version-4.3/index.md
@@ -0,0 +1,106 @@
+---
+title: HarperDB Docs
+---
+
+# HarperDB Docs
+
+HarperDB is a globally-distributed edge application platform. It reduces complexity, increases performance, and lowers costs by combining user-defined applications, a high-performance database, and an enterprise-grade streaming broker into a single package. The platform offers unlimited horizontal scale at the click of a button, and syncs data across the cluster in milliseconds. HarperDB simplifies the process of delivering applications and the data that drives them to the edge, which dramatically improves both the user experience and total cost of ownership for large-scale applications. Deploying HarperDB on global infrastructure enables a CDN-like solution for enterprise data and applications.
+
+HarperDB's documentation covers installation, getting started, administrative operation APIs, security, and much more. Browse the topics at left, or choose one of the commonly used documentation sections below.
+
+:::info
+Wondering what's new with HarperDB 4.3? Take a look at our latest [Release Notes](./technical-details/release-notes/v4-tucker/4.3.0).
+:::
+
+## Getting Started
+
+- Get up and running with HarperDB
+- Run HarperDB on your own hardware
+- Spin up an instance in minutes to get going fast
+
+## Building with HarperDB
+
+- Build a fully featured HarperDB Component with custom functionality
+- The recommended HTTP interface for data access, querying, and manipulation
+- Configure, deploy, administer, and control your HarperDB instance
+- Clustering: the process of connecting multiple HarperDB databases together to create a database mesh network that enables users to define data replication patterns
+- HarperDB Studio: the web-based GUI for HarperDB, which enables you to administer, navigate, and monitor all of your HarperDB instances in a simple, user-friendly interface
+
diff --git a/site/versioned_docs/version-4.3/technical-details/_category_.json b/site/versioned_docs/version-4.3/technical-details/_category_.json
new file mode 100644
index 00000000..69ce80a6
--- /dev/null
+++ b/site/versioned_docs/version-4.3/technical-details/_category_.json
@@ -0,0 +1,12 @@
+{
+ "label": "Technical Details",
+ "position": 4,
+ "link": {
+ "type": "generated-index",
+ "title": "Technical Details Documentation",
+ "description": "Reference documentation and technical specifications",
+ "keywords": [
+ "technical-details"
+ ]
+ }
+}
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/technical-details/reference/analytics.md b/site/versioned_docs/version-4.3/technical-details/reference/analytics.md
new file mode 100644
index 00000000..7b475176
--- /dev/null
+++ b/site/versioned_docs/version-4.3/technical-details/reference/analytics.md
@@ -0,0 +1,117 @@
+---
+title: Analytics
+---
+
+# Analytics
+
+HarperDB provides extensive telemetry and analytics data to help monitor the status of the server and its workloads, understand traffic and usage patterns, identify issues and scaling needs, and pinpoint the queries and actions that consume the most resources.
+
+HarperDB collects statistics for all operations, URL endpoints, and messaging topics, aggregating information by thread, operation, resource, and method, in real-time. These statistics are logged in the `hdb_raw_analytics` and `hdb_analytics` tables in the `system` database.
+
+There are two "levels" of analytics in the HarperDB analytics table: the first is the immediate level of raw direct logging of real-time statistics. These analytics entries are recorded once a second (when there is activity) by each thread, and include all recorded activity in the last second, along with system resource information. The records have a primary key that is the timestamp in milliseconds since epoch. This can be queried (with `superuser` permission) using the search\_by\_conditions operation (this will search for 10 seconds worth of analytics) on the `hdb_raw_analytics` table:
+
+```
+POST http://localhost:9925
+Content-Type: application/json
+
+{
+ "operation": "search_by_conditions",
+ "schema": "system",
+ "table": "hdb_raw_analytics",
+ "conditions": [{
+ "search_attribute": "id",
+ "search_type": "between",
+ "search_value": [168859400000, 1688594010000]
+ }]
+}
+```
+
+And a typical response looks like:
+
+```
+{
+ "time": 1688594390708,
+ "period": 1000.8336279988289,
+ "metrics": [
+ {
+ "metric": "bytes-sent",
+ "path": "search_by_conditions",
+ "type": "operation",
+ "median": 202,
+ "mean": 202,
+ "p95": 202,
+ "p90": 202,
+ "count": 1
+ },
+ ...
+ {
+ "metric": "memory",
+ "threadId": 2,
+ "rss": 1492664320,
+ "heapTotal": 124596224,
+ "heapUsed": 119563120,
+ "external": 3469790,
+ "arrayBuffers": 798721
+ },
+ {
+ "metric": "utilization",
+ "idle": 138227.52767700003,
+ "active": 70.5066209952347,
+ "utilization": 0.0005098165086230495
+ }
+ ],
+ "threadId": 2,
+ "totalBytesProcessed": 12182820,
+ "id": 1688594390708.6853
+}
+```
+
+The second level of analytics recording is aggregate data. The aggregate records are recorded once a minute, aggregating the results from all the per-second entries from all the threads into a summary of statistics. The ids for these records are also timestamps in milliseconds since epoch, and they can be queried from the `hdb_analytics` table with an operation like:
+
+```
+POST http://localhost:9925
+Content-Type: application/json
+
+{
+ "operation": "search_by_conditions",
+ "schema": "system",
+ "table": "hdb_analytics",
+ "conditions": [{
+ "search_attribute": "id",
+ "search_type": "between",
+ "search_value": [1688194100000, 1688594990000]
+ }]
+}
+```
+
+And a summary record looks like:
+
+```
+{
+ "period": 60000,
+ "metric": "bytes-sent",
+ "method": "connack",
+ "type": "mqtt",
+ "median": 4,
+ "mean": 4,
+ "p95": 4,
+ "p90": 4,
+ "count": 1,
+ "id": 1688589569646,
+ "time": 1688589569646
+}
+```
+
+The following are general resource usage statistics that are tracked:
+
+* memory - This includes RSS, heap, buffer and external data usage.
+* utilization - How much of the time the worker was processing requests.
+* mqtt-connections - The number of MQTT connections.
+
+The following types of information are tracked for each HTTP request:
+
+* success - How many requests returned a successful response (20x response code).
+* TTFB - Time to first byte in the response to the client.
+* transfer - Time to finish the transfer of the data to the client.
+* bytes-sent - How many bytes of data were sent to the client.
+
+Requests are categorized by operation name, for the operations API, by the resource (name) with the REST API, and by command for the MQTT interface.
diff --git a/site/versioned_docs/version-4.3/technical-details/reference/architecture.md b/site/versioned_docs/version-4.3/technical-details/reference/architecture.md
new file mode 100644
index 00000000..f2881d3c
--- /dev/null
+++ b/site/versioned_docs/version-4.3/technical-details/reference/architecture.md
@@ -0,0 +1,42 @@
+---
+title: Architecture
+---
+
+# Architecture
+
+HarperDB's architecture consists of resources, which include tables, user-defined data sources, and extensions, and server interfaces, which include the RESTful HTTP interface, the operations API, and MQTT. Servers are supported by routing and auth services.
+
+```
+ ┌──────────┐ ┌──────────┐
+ │ Clients │ │ Clients │
+ └────┬─────┘ └────┬─────┘
+ │ │
+ ▼ ▼
+ ┌────────────────────────────────────────┐
+ │ │
+ │ Socket routing/management │
+ ├───────────────────────┬────────────────┤
+ │ │ │
+ │ Server Interfaces ─►│ Authentication │
+ │ RESTful HTTP, MQTT │ Authorization │
+ │ ◄─┤ │
+ │ ▲ └────────────────┤
+ │ │ │ │
+ ├───┼──────────┼─────────────────────────┤
+ │ │ │ ▲ │
+ │ ▼ Resources ▲ │ ┌───────────┐ │
+ │ │ └─┤ │ │
+ ├─────────────────┴────┐ │ App │ │
+ │ ├─►│ resources │ │
+ │ Database tables │ └───────────┘ │
+ │ │ ▲ │
+ ├──────────────────────┘ │ │
+ │ ▲ ▼ │ │
+ │ ┌────────────────┐ │ │
+ │ │ External │ │ │
+ │ │ data sources ├────┘ │
+ │ │ │ │
+ │ └────────────────┘ │
+ │ │
+ └────────────────────────────────────────┘
+```
diff --git a/site/versioned_docs/version-4.3/technical-details/reference/content-types.md b/site/versioned_docs/version-4.3/technical-details/reference/content-types.md
new file mode 100644
index 00000000..d2bc096a
--- /dev/null
+++ b/site/versioned_docs/version-4.3/technical-details/reference/content-types.md
@@ -0,0 +1,29 @@
+---
+title: Content Types
+---
+
+# Content Types
+
+HarperDB supports several different content types (or MIME types) for both HTTP request bodies (describing operations) as well as for serializing content into HTTP response bodies. HarperDB follows HTTP standards for specifying both request body content types and acceptable response body content types. Any of these content types can be used with any of the standard HarperDB operations.
+
+For request body content, the content type should be specified with the `Content-Type` header. For example with JSON, use `Content-Type: application/json` and for CBOR, include `Content-Type: application/cbor`. To request that the response body be encoded with a specific content type, use the `Accept` header. If you want the response to be in JSON, use `Accept: application/json`. If you want the response to be in CBOR, use `Accept: application/cbor`.
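+
+For example, a client might request CBOR-encoded results from the REST interface like this (a minimal sketch, assuming a table exposed at `/Dog` on the default HTTP port 9926 and placeholder Basic credentials; adjust both for your instance):
+
+```javascript
+// request a record and ask for the response body to be encoded as CBOR
+const response = await fetch('http://localhost:9926/Dog/1', {
+	headers: {
+		Accept: 'application/cbor',
+		Authorization: 'Basic ' + Buffer.from('user:password').toString('base64'),
+	},
+});
+const body = new Uint8Array(await response.arrayBuffer()); // decode with a CBOR library such as cbor-x
+```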
+
+The following content types are supported:
+
+## JSON - application/json
+
+JSON is the most widely used content type, and is relatively readable and easy to work with. However, JSON does not support all the data types that are supported by HarperDB, and can't be used to natively encode data types like binary data or explicit Maps/Sets. Also, JSON is not as efficient as binary formats. When using JSON, compression is recommended (this also follows standard HTTP protocol with the `Accept-Encoding` header) to improve network transfer performance (although there is server performance overhead). JSON is a good choice for web development when standard JSON types are sufficient, when combined with compression, and when debuggability/observability is important.
+
+## CBOR - application/cbor
+
+CBOR is a highly efficient binary format, and is a recommended format for most production use cases with HarperDB. CBOR supports the full range of HarperDB data types, including binary data, typed dates, and explicit Maps/Sets. CBOR is very performant and space efficient even without compression. Compression will still yield better network transfer size/performance, but compressed CBOR is generally not any smaller than compressed JSON. CBOR also natively supports streaming for optimal performance (using indefinite length arrays). The CBOR format has excellent standardization and HarperDB's CBOR provides an excellent balance of performance and size efficiency.
+
+## MessagePack - application/x-msgpack
+
+MessagePack is another efficient binary format like CBOR, with support for all HarperDB data types. MessagePack generally has wider adoption than CBOR and can be useful in systems that don't have CBOR support (or good support). However, MessagePack does not have native support for streaming of arrays of data (for query results), and so query results are returned as a (concatenated) sequence of MessagePack objects/maps. MessagePack decoders used with HarperDB's MessagePack must be prepared to decode a direct sequence of MessagePack values to properly read responses.
+
+## Comma-separated Values (CSV) - text/csv
+
+Comma-separated values is an easy to use and understand format that can be readily imported into spreadsheets or used for data processing. CSV lacks hierarchical structure for most data types, and shouldn't be used for frequent/production use, but when you need it, it is available.
+
+In addition, with the REST interface, you can use file-style extensions to indicate an encoding like http://host/path.csv to indicate CSV encoding. See the [REST documentation](../../developers/rest) for more information on how to do this.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/technical-details/reference/data-types.md b/site/versioned_docs/version-4.3/technical-details/reference/data-types.md
new file mode 100644
index 00000000..c8acebe4
--- /dev/null
+++ b/site/versioned_docs/version-4.3/technical-details/reference/data-types.md
@@ -0,0 +1,52 @@
+---
+title: Data Types
+---
+
+# Data Types
+
+HarperDB supports a rich set of data types for use in records in databases. Various data types can be used from both direct JavaScript interfaces in Custom Functions and the HTTP operations APIs. Using JSON for communication naturally limits the data types to those available in JSON (HarperDB supports all JSON data types), but JavaScript code and alternate data formats facilitate the use of additional data types. HarperDB supports MessagePack and CBOR, which allow for all of HarperDB's supported data types. [Schema definitions can specify the expected types for fields, with GraphQL Schema Types](../../developers/applications/defining-schemas), which are used for validation of incoming typed data (JSON, MessagePack) and for auto-conversion of untyped data (CSV, [query parameters](../../developers/rest)). Available data types include:
+
+(Note that these labels are descriptive; they do not necessarily correspond to the GraphQL schema type names, but the schema type names are noted where possible.)
+
+## Boolean
+
+true or false. The GraphQL schema type name is `Boolean`.
+
+## String
+
+Strings, or text, are a sequence of any unicode characters and are internally encoded with UTF-8. The GraphQL schema type name is `String`.
+
+## Number
+
+Numbers can be stored as signed integers up to 1000 bits of precision (about 300 digits) or as floating point with 64-bit floating point precision, and numbers are automatically stored using the most optimal type. With JSON, numbers are automatically parsed and stored in the most appropriate format. Custom components and applications may use BigInt numbers to store/access integers that are larger than 53-bit. The following GraphQL schema type names are supported:
+
+* `Float` - Any number that can be represented with a [64-bit double precision floating point number](https://en.wikipedia.org/wiki/Double-precision_floating-point_format) ("double")
+* `Int` - Any integer from -2147483648 to 2147483647
+* `Long` - Any integer from -9007199254740992 to 9007199254740992
+* `BigInt` - Any integer (negative or positive) with fewer than 300 digits
+
+Note that `BigInt` is a distinct and separate type from standard numbers in JavaScript, so custom code should handle this type appropriately.
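+
+As a rough illustration of how a `BigInt` value can round-trip through a table from custom JavaScript code (a sketch only; the `Counter` table and its fields are hypothetical, and the static `put`/`get` calls follow the Resource API described in the reference section):
+
+```javascript
+import { tables } from 'harperdb';
+
+const { Counter } = tables; // hypothetical table with an 'id' primary key and a 'value' field
+await Counter.put({ id: 'page-views', value: 9007199254740993n }); // beyond 53-bit float precision
+const record = await Counter.get('page-views');
+console.log(typeof record.value); // 'bigint' for integers larger than 53-bit
+```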
+
+## Object/Map
+
+Objects, or maps, that hold a set of named properties can be stored in HarperDB. When provided as JSON objects or JavaScript objects, all property keys are stored as strings. The order of properties is also preserved in HarperDB’s storage. Duplicate property keys are not allowed (they are dropped in parsing any incoming data).
+
+## Array
+
+Arrays hold an ordered sequence of values and can be stored in HarperDB. There is no support for sparse arrays, although you can use objects to store data with numbers (converted to strings) as properties.
+
+## Null
+
+A null value can be stored in HarperDB property values as well.
+
+## Date
+
+Dates can be stored as a specific data type. This is not supported in JSON, but is supported by MessagePack and CBOR. Custom Functions can also store and use Dates using JavaScript Date instances. The GraphQL schema type name is `Date`.
+
+## Binary Data
+
+Binary data can be stored in property values as well. JSON doesn’t have any support for encoding binary data, but MessagePack and CBOR support binary data in data structures, and this will be preserved in HarperDB. Custom Functions can also store binary data by using NodeJS’s Buffer or Uint8Array instances to hold the binary data. The GraphQL schema type name is `Bytes`.
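+
+As a sketch combining the Date and Binary Data types above, a custom component could store a timestamp and a binary payload directly on a record (the `Upload` table and its fields are hypothetical):
+
+```javascript
+import { tables } from 'harperdb';
+
+const { Upload } = tables; // hypothetical table
+await Upload.put({
+	id: 'logo',
+	uploadedAt: new Date(), // stored as a true Date value rather than a string
+	content: Buffer.from([0x89, 0x50, 0x4e, 0x47]), // binary data via Buffer/Uint8Array
+});
+```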
+
+## Explicit Map/Set
+
+Explicit instances of JavaScript Maps and Sets can be stored and preserved in HarperDB as well. This can’t be represented with JSON, but can be with CBOR.
diff --git a/site/versioned_docs/version-4.3/technical-details/reference/dynamic-schema.md b/site/versioned_docs/version-4.3/technical-details/reference/dynamic-schema.md
new file mode 100644
index 00000000..57624117
--- /dev/null
+++ b/site/versioned_docs/version-4.3/technical-details/reference/dynamic-schema.md
@@ -0,0 +1,148 @@
+---
+title: Dynamic Schema
+---
+
+# Dynamic Schema
+
+When tables are created without any schema, through the operations API (without specifying attributes) or the Studio, the tables follow "dynamic-schema" behavior. Generally it is best practice to define schemas for your tables to ensure predictable, consistent structures with data integrity and precise control over indexing, without dependency on the data itself. However, it can often be simpler and quicker to simply create a table and let the data auto-generate the schema dynamically, with everything being auto-indexed for broad querying.
+
+With dynamic schemas, individual attributes are reflexively created as data is ingested, meaning the table will adapt to the structure of the data it receives. HarperDB tracks the metadata around schemas, tables, and attributes, allowing for describe table, describe schema, and describe all operations.
+
+### Databases
+
+HarperDB databases hold a collection of tables together in a single file, so the tables are transactionally connected. This means that operations across tables within a database can be performed in a single atomic transaction. By default tables are added to the default database called "data", but other databases can be created and specified for tables.
+
+### Tables
+
+HarperDB tables group records together with a common data pattern. To create a table users must provide a table name and a primary key.
+
+* **Table Name**: Used to identify the table.
+* **Primary Key**: This is a required attribute that serves as the unique identifier for a record and is also known as the `hash_attribute` in the HarperDB operations API.
+
+## Primary Key
+
+The primary key (also referred to as the `hash_attribute`) is used to uniquely identify records. Uniqueness is enforced on the primary key; inserts with the same primary key will be rejected. If a primary key is not provided on insert, a GUID will be automatically generated and returned to the user. The [HarperDB Storage Algorithm](./storage-algorithm) utilizes this value for indexing.
+
+**Standard Attributes**
+
+With tables that are using dynamic schemas, additional attributes are reflexively added via insert and update operations (in both SQL and NoSQL) when new attributes are included in the data structure provided to HarperDB. As a result, schemas are additive, meaning new attributes are created in the underlying storage algorithm as additional data structures are provided. HarperDB offers `create_attribute` and `drop_attribute` operations for users who prefer to manually define their data model independent of data ingestion. When new attributes are added to tables with existing data the value of that new attribute will be assumed `null` for all existing records.
+
+**Audit Attributes**
+
+HarperDB automatically creates two audit attributes used on each record if the table is created without a schema.
+
+* `__createdtime__`: The time the record was created in [Unix Epoch with milliseconds](https://www.epochconverter.com/) format.
+* `__updatedtime__`: The time the record was updated in [Unix Epoch with milliseconds](https://www.epochconverter.com/) format.
+
+### Dynamic Schema Example
+
+To better understand the behavior let’s take a look at an example. This example utilizes [HarperDB API operations](../../developers/operations-api/databases-and-tables).
+
+**Create a Database**
+
+```bash
+{
+ "operation": "create_database",
+ "schema": "dev"
+}
+```
+
+**Create a Table**
+
+Notice the database name, table name, and primary key name are the only required parameters.
+
+```bash
+{
+ "operation": "create_table",
+ "database": "dev",
+ "table": "dog",
+ "primary_key": "id"
+}
+```
+
+At this point the table does not have structure beyond what we provided, so the table looks like this:
+
+**dev.dog**
+
+
+
+**Insert Record**
+
+To define attributes we do not need to do anything beyond sending them in with an insert operation.
+
+```bash
+{
+ "operation": "insert",
+ "database": "dev",
+ "table": "dog",
+ "records": [
+ {"id": 1, "dog_name": "Penny", "owner_name": "Kyle"}
+ ]
+}
+```
+
+With a single record inserted and new attributes defined, our table now looks like this:
+
+**dev.dog**
+
+
+
+Indexes have been automatically created for `dog_name` and `owner_name` attributes.
+
+**Insert Additional Record**
+
+If we continue inserting records with the same data schema no schema updates are required. One record will omit the hash attribute from the insert to demonstrate GUID generation.
+
+```bash
+{
+ "operation": "insert",
+ "database": "dev",
+ "table": "dog",
+ "records": [
+ {"id": 2, "dog_name": "Monk", "owner_name": "Aron"},
+ {"dog_name": "Harper","owner_name": "Stephen"}
+ ]
+}
+```
+
+In this case, there is no change to the schema. Our table now looks like this:
+
+**dev.dog**
+
+
+
+**Update Existing Record**
+
+In this case, we will update a record with a new attribute not previously defined on the table.
+
+```bash
+{
+ "operation": "update",
+ "database": "dev",
+ "table": "dog",
+ "records": [
+ {"id": 2, "weight_lbs": 35}
+ ]
+}
+```
+
+Now we have a new attribute called `weight_lbs`. Our table now looks like this:
+
+**dev.dog**
+
+
+
+**Query Table with SQL**
+
+Now if we query for all records where `weight_lbs` is `null` we expect to get back two records.
+
+```bash
+{
+ "operation": "sql",
+ "sql": "SELECT * FROM dev.dog WHERE weight_lbs IS NULL"
+}
+```
+
+This results in the expected two records being returned.
+
+
diff --git a/site/versioned_docs/version-4.3/technical-details/reference/globals.md b/site/versioned_docs/version-4.3/technical-details/reference/globals.md
new file mode 100644
index 00000000..c615d1c5
--- /dev/null
+++ b/site/versioned_docs/version-4.3/technical-details/reference/globals.md
@@ -0,0 +1,236 @@
+---
+title: Globals
+---
+
+# Globals
+
+The primary way that JavaScript code can interact with HarperDB is through global variables, which include several objects and classes that provide access to the tables, server hooks, and resources that HarperDB provides for building applications. As global variables, these can be directly accessed in any module.
+
+These global variables are also available through the `harperdb` module/package, which can provide better typing in TypeScript. To use this within your own project directory, make sure you link the package to your current `harperdb` installation:
+
+```bash
+npm link harperdb
+```
+
+The `harperdb` package is automatically linked for all installed components. Once linked, if you are using ECMAScript module syntax you can import functions from `harperdb` like:
+
+```javascript
+import { tables, Resource } from 'harperdb';
+```
+
+Or if you are using CommonJS format for your modules:
+
+```javascript
+const { tables, Resource } = require('harperdb');
+```
+
+The global variables include:
+
+## `tables`
+
+This is an object that holds all the tables for the default database (called `data`) as properties. Each of these property values is a table class that subclasses the Resource interface and provides access to the table through the Resource interface. For example, you can get a record from a table (in the default database) called 'my-table' with:
+
+```javascript
+import { tables } from 'harperdb';
+const { MyTable } = tables;
+async function getRecord() {
+ let record = await MyTable.get(recordId);
+}
+```
+
+It is recommended that you [define a database](../../getting-started/) for all the tables that are required to exist in your application. This will ensure that the tables exist on the `tables` object. Also note that the property names follow a CamelCase convention for use in JavaScript and in the GraphQL Schemas, but these are translated to snake\_case for the actual table names, and converted back to CamelCase when added to the `tables` object.
+
+## `databases`
+
+This is an object that holds all the databases in HarperDB, and can be used to explicitly access a table by database name. Each database will be a property on this object, each of these property values will be an object with the set of all tables in that database. The default database, `databases.data` should equal the `tables` export. For example, if you want to access the "dog" table in the "dev" database, you could do so:
+
+```javascript
+import { databases } from 'harperdb';
+const { Dog } = databases.dev;
+```
+
+## `Resource`
+
+This is the base class for all resources, including tables and external data sources. This is provided so that you can extend it to implement custom data source providers. See the [Resource API documentation](./resource) for more details about implementing a Resource class.
+
+## `auth(username, password?): Promise`
+
+This returns the user object with permissions/authorization information based on the provided username. If a password is provided, the password will be verified before returning the user object (if the password is incorrect, an error will be thrown).
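+
+For example (a sketch only; the username and password are placeholders, and the shape of the returned user object is only partially shown):
+
+```javascript
+// inside an async function in a custom resource or component
+const user = await auth('some-user', 'their-password'); // rejects if the password is wrong
+if (user) {
+	// the returned object carries the user's role and permission information
+}
+```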
+
+## `logger`
+
+This provides methods `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify` for logging. See the [logging documentation](../../administration/logging/standard-logging) for more information.
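+
+For example, a component can log at different levels (a minimal sketch):
+
+```javascript
+import { logger } from 'harperdb';
+
+logger.debug('cache miss, falling back to the origin data source');
+logger.warn('upstream source responded slowly');
+logger.error('failed to refresh record');
+```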
+
+## `server`
+
+The `server` global object provides a number of functions and objects to interact with Harper's HTTP service.
+
+### `server.http(listener: RequestListener, options: HttpOptions): HttpServer[]`
+
+Alias: `server.request`
+
+Add a handler method to the HTTP server request listener middleware chain.
+
+Returns an array of server instances based on the specified `options.port` and `options.securePort`.
+
+Example:
+
+```js
+server.http((request, next) => {
+ return request.url === '/graphql'
+ ? handleGraphQLRequest(request)
+ : next(request);
+}, {
+ runFirst: true, // run this handler first
+});
+```
+
+#### `RequestListener`
+
+Type: `(request: Request, next: RequestListener) => Promise`
+
+The HTTP request listener to be added to the middleware chain. To continue chain execution pass the `request` to the `next` function such as `return next(request);`.
+
+#### `Request`
+
+An implementation of the WHATWG [Request](https://developer.mozilla.org/en-US/docs/Web/API/Request) class.
+
+#### `Response`
+
+An implementation of the WHATWG [Response](https://developer.mozilla.org/en-US/docs/Web/API/Response) class.
+
+#### `HttpOptions`
+
+Type: `Object`
+
+Properties:
+
+
+
+- `runFirst` - _optional_ - `boolean` - Add listener to the front of the middleware chain. Defaults to `false`
+- `port` - _optional_ - `number` - Specify which HTTP server middleware chain to add the listener to. Defaults to the Harper system default HTTP port configured by `harperdb-config.yaml`, generally `9926`
+- `securePort` - _optional_ - `number` - Specify which HTTPS server middleware chain to add the listener to. Defaults to the Harper system default HTTP secure port configured by `harperdb-config.yaml`, generally `9927`
+
+#### `HttpServer`
+
+Node.js [`http.Server`](https://nodejs.org/api/http.html#class-httpserver) or [`https.Server`](https://nodejs.org/api/https.html#class-httpsserver) instance.
+
+### `server.socket(listener: ConnectionListener, options: SocketOptions): SocketServer`
+
+Creates a socket server on the specified `options.port` or `options.securePort`.
+
+Only one socket server will be created. A `securePort` takes precedence.
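+
+As a minimal sketch (the port here is an arbitrary choice for illustration):
+
+```js
+server.socket((socket) => {
+	// plain net.Socket connection listener, as documented for net.createServer
+	socket.write('hello from Harper\n');
+	socket.end();
+}, { port: 9924 });
+```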
+
+#### `ConnectionListener`
+
+Node.js socket server connection listener as documented in [`net.createServer`](https://nodejs.org/api/net.html#netcreateserveroptions-connectionlistener) or [`tls.createServer`](https://nodejs.org/api/tls.html#tlscreateserveroptions-secureconnectionlistener)
+
+#### `SocketOptions`
+
+- `port` - _optional_ - `number` - Specify the port for the [`net.Server`](https://nodejs.org/api/net.html#class-netserver) instance.
+- `securePort` - _optional_ - `number` - Specify the port for the [`tls.Server`](https://nodejs.org/api/tls.html#class-tlsserver) instance.
+
+#### `SocketServer`
+
+Node.js [`net.Server`](https://nodejs.org/api/net.html#class-netserver) or [`tls.Server`](https://nodejs.org/api/tls.html#class-tlsserver) instance.
+
+### `server.ws(listener: WsListener, options: WsOptions): HttpServer[]`
+
+Add a listener to the WebSocket connection listener middleware chain. The WebSocket server is associated with the HTTP server specified by the `options.port` or `options.securePort`. Use the [`server.upgrade()`](#serverupgradelistener-upgradelistener-options-upgradeoptions-void) method to add a listener to the upgrade middleware chain.
+
+Example:
+
+```js
+server.ws((ws, request, chainCompletion) => {
+ chainCompletion.then(() => {
+ ws.on('error', console.error);
+
+ ws.on('message', function message(data) {
+ console.log('received: %s', data);
+ });
+
+ ws.send('something');
+ });
+});
+```
+
+#### `WsListener`
+
+Type: `(ws: WebSocket, request: Request, chainCompletion: ChainCompletion, next: WsListener): Promise`
+
+The WebSocket connection listener.
+
+- The `ws` argument is the [WebSocket](https://github.com/websockets/ws/blob/master/doc/ws.md#class-websocket) instance as defined by the `ws` module.
+- The `request` argument is Harper's transformation of the `IncomingMessage` argument of the standard ['connection'](https://github.com/websockets/ws/blob/master/doc/ws.md#event-connection) listener event for a WebSocket server.
+- The `chainCompletion` argument is a `Promise` of the associated HTTP server's request chain. Awaiting this promise enables the user to ensure the HTTP request has finished being processed before operating on the WebSocket.
+- The `next` argument is similar to that of other `next` arguments in Harper's server middlewares. To continue execution of the WebSocket connection listener middleware chain, pass all of the other arguments to this one such as: `next(ws, request, chainCompletion)`
+
+#### `WsOptions`
+
+Type: `Object`
+
+Properties:
+
+
+
+- `maxPayload` - _optional_ - `number` - Set the max payload size for the WebSocket server. Defaults to 100 MB.
+- `runFirst` - _optional_ - `boolean` - Add listener to the front of the middleware chain. Defaults to `false`
+- `port` - _optional_ - `number` - Specify which WebSocket server middleware chain to add the listener to. Defaults to the Harper system default HTTP port configured by `harperdb-config.yaml`, generally `9926`
+- `securePort` - _optional_ - `number` - Specify which WebSocket secure server middleware chain to add the listener to. Defaults to the Harper system default HTTP secure port configured by `harperdb-config.yaml`, generally `9927`
+
+### `server.upgrade(listener: UpgradeListener, options: UpgradeOptions): void`
+
+Add a listener to the HTTP Server [upgrade](https://nodejs.org/api/http.html#event-upgrade_1) event. If a WebSocket connection listener is added using [`server.ws()`](#serverwslistener-wslistener-options-wsoptions-httpserver), a default upgrade handler will be added as well. The default upgrade handler will add a `__harperdb_request_upgraded` boolean to the `request` argument to signal the connection has already been upgraded. It will also check for this boolean _before_ upgrading and if it is `true`, it will pass the arguments along to the `next` listener.
+
+This method should be used to delegate HTTP upgrade events to an external WebSocket server instance.
+
+Example:
+
+> This example is from the HarperDB Next.js component. See the complete source code [here](https://github.com/HarperDB/nextjs/blob/main/extension.js)
+
+```js
+server.upgrade(
+ (request, socket, head, next) => {
+ if (request.url === '/_next/webpack-hmr') {
+ return upgradeHandler(request, socket, head).then(() => {
+ request.__harperdb_request_upgraded = true;
+
+ next(request, socket, head);
+ });
+ }
+
+ return next(request, socket, head);
+ },
+ { runFirst: true }
+);
+```
+
+#### `UpgradeListener`
+
+Type: `(request, socket, head, next) => void`
+
+The arguments are passed to the middleware chain from the HTTP server [`'upgrade'`](https://nodejs.org/api/http.html#event-upgrade_1) event.
+
+#### `UpgradeOptions`
+
+Type: `Object`
+
+Properties:
+
+- `runFirst` - _optional_ - `boolean` - Add listener to the front of the middleware chain. Defaults to `false`
+- `port` - _optional_ - `number` - Specify which HTTP server middleware chain to add the listener to. Defaults to the Harper system default HTTP port configured by `harperdb-config.yaml`, generally `9926`
+- `securePort` - _optional_ - `number` - Specify which HTTP secure server middleware chain to add the listener to. Defaults to the Harper system default HTTP secure port configured by `harperdb-config.yaml`, generally `9927`
+
+### `server.config`
+
+This provides access to the HarperDB configuration object. This comes from the [harperdb-config.yaml](../../deployments/configuration) (parsed into object form).
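+
+For example, you might read a value from it like this (a sketch; the `http.securePort` key path is an assumption for illustration, so check the keys in your own `harperdb-config.yaml`):
+
+```js
+// server.config mirrors the structure of harperdb-config.yaml
+const securePort = server.config?.http?.securePort; // assumed key path
+logger.info(`HTTPS is configured on port ${securePort}`);
+```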
+
+### `server.recordAnalytics(value, metric, path?, method?, type?)`
+
+This records the provided value as a metric in HarperDB's analytics. HarperDB efficiently records and tracks these metrics and makes them available through the [analytics API](./analytics). The values are aggregated and statistical information is computed when many operations are performed. The optional parameters can be used to group statistics; make sure you are not grouping at too fine a level for useful aggregation. The parameters are:
+
+* `value` - This is a numeric value for the metric that is being recorded. This can be a value measuring time or bytes, for example.
+* `metric` - This is the name of the metric.
+* `path` - This is an optional path (like a URL path). For a URL like /my-resource/, you would typically include a path of "my-resource", not including the id, so that you can group all the requests to "my-resource" instead of aggregating by each individual id.
+* `method` - Optional method to group by.
+* `type` - Optional type to group by.
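+
+As a sketch, a custom endpoint might time an outbound call and record it (the metric name, path, and URL below are arbitrary examples, not HarperDB conventions):
+
+```js
+// inside an async handler in a custom resource
+const start = performance.now();
+const response = await fetch('https://example.com/inventory'); // hypothetical upstream call
+server.recordAnalytics(performance.now() - start, 'upstream-duration', 'inventory', 'GET');
+```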
diff --git a/site/versioned_docs/version-4.3/technical-details/reference/headers.md b/site/versioned_docs/version-4.3/technical-details/reference/headers.md
new file mode 100644
index 00000000..c58bb7ec
--- /dev/null
+++ b/site/versioned_docs/version-4.3/technical-details/reference/headers.md
@@ -0,0 +1,12 @@
+---
+title: HarperDB Headers
+---
+
+# HarperDB Headers
+
+All HarperDB API responses include headers that are important for interoperability and debugging purposes. The following headers are returned with all HarperDB API responses:
+
+| Key | Example Value | Description |
+|-------------------|------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| server-timing | db;dur=7.165 | This reports the duration of the operation, in milliseconds. This follows the standard for Server-Timing and can be consumed by network monitoring tools. |
+| content-type | application/json | This reports the MIME type of the returned content, which is negotiated based on the requested content type in the Accept header. |
diff --git a/site/versioned_docs/version-4.3/technical-details/reference/index.md b/site/versioned_docs/version-4.3/technical-details/reference/index.md
new file mode 100644
index 00000000..e9a6ebf9
--- /dev/null
+++ b/site/versioned_docs/version-4.3/technical-details/reference/index.md
@@ -0,0 +1,16 @@
+---
+title: Reference
+---
+
+# Reference
+
+This section contains technical details and reference materials for HarperDB.
+
+* [Resource API](./resource)
+* [Transactions](./transactions)
+* [Storage Algorithm](./storage-algorithm)
+* [Dynamic Schema](./dynamic-schema)
+* [Headers](./headers)
+* [Limitations](./limits)
+* [Content Types](./content-types)
+* [Data Types](./data-types)
diff --git a/site/versioned_docs/version-4.3/technical-details/reference/limits.md b/site/versioned_docs/version-4.3/technical-details/reference/limits.md
new file mode 100644
index 00000000..ccad9d64
--- /dev/null
+++ b/site/versioned_docs/version-4.3/technical-details/reference/limits.md
@@ -0,0 +1,33 @@
+---
+title: HarperDB Limits
+---
+
+# HarperDB Limits
+
+This document outlines limitations of HarperDB.
+
+## Database Naming Restrictions
+
+**Case Sensitivity**
+
+HarperDB database metadata (database names, table names, and attribute/column names) is case sensitive, meaning databases, tables, and attributes can differ only by the case of their characters.
+
+**Restrictions on Database Metadata Names**
+
+HarperDB database metadata (database names, table names, and attribute names) cannot contain the following UTF-8 characters:
+
+```
+/`¡¢£¤¥¦§¨©ª«¬®¯°±²³´µ¶·¸¹º»¼½¾¿ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿ
+```
+
+Additionally, they cannot contain the first 31 non-printing characters. Spaces are allowed, but not recommended as best practice. The regular expression used to verify a name is valid is:
+
+```
+^[\x20-\x2E|\x30-\x5F|\x61-\x7E]*$
+```
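+
+For example, you could check a candidate name against this pattern in JavaScript (a sketch only; HarperDB performs this validation itself when names are created):
+
+```javascript
+const validName = /^[\x20-\x2E|\x30-\x5F|\x61-\x7E]*$/;
+
+validName.test('dog_breeds'); // true
+validName.test('naïve_table'); // false, 'ï' is outside the allowed ranges
+```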
+
+## Table Limitations
+
+**Attribute Maximum**
+
+HarperDB limits the number of total indexed attributes across tables (including the primary key of each table) to 10,000 per database.
\ No newline at end of file
diff --git a/site/versioned_docs/version-4.3/technical-details/reference/resource.md b/site/versioned_docs/version-4.3/technical-details/reference/resource.md
new file mode 100644
index 00000000..d1bc89f1
--- /dev/null
+++ b/site/versioned_docs/version-4.3/technical-details/reference/resource.md
@@ -0,0 +1,660 @@
+---
+title: Resource Class
+---
+
+# Resource Class
+
+## Resource Class
+
+The Resource class is designed to provide a unified API for modeling different data resources within HarperDB. Database/table data can be accessed through the Resource API. The Resource class can be extended to create new data sources. Resources can be exported to define endpoints. Tables themselves extend the Resource class, and can be extended by users.
+
+Conceptually, a Resource class provides an interface for accessing, querying, modifying, and monitoring a set of entities or records. Instances of a Resource class can represent a single record or entity, or a collection of records, at a given point in time, that you can interact with through various methods or queries. Resource instances can represent an atomic transactional view of a resource and facilitate transactional interaction. A Resource instance holds the primary key/identifier, context information, and any pending updates to the record, so any instance methods can act on the record and have full access to this information during execution. Therefore, there are distinct resource instances created for every record or query that is accessed, and the instance methods are used for interaction with the data.
+
+Resource classes also have static methods, which are generally the preferred way to externally interact with tables and resources. The static methods handle parsing paths and query strings, starting a transaction as necessary, performing access authorization checks (if required), creating a resource instance, and calling the instance methods. The general rules for how to interact with resources are:
+* If you want to *act upon* a table or resource, querying or writing to it, then use the static methods for the initial access or write. For example, you could use `MyTable.get(34)` to access the record with a primary key of `34`.
+ * You can subsequently use the instance methods on the returned resource instance to perform additional actions on the record.
+* If you want to *define custom behavior* for a table or resource (to control how a resource responds to queries/writes), then extend the class and override/define instance methods.
+
+The Resource API is heavily influenced by the REST/HTTP API, and the methods and properties of the Resource class are designed to map to and be used in a similar way to how you would interact with a RESTful API.
+
+The REST-based API is a little different than traditional Create-Read-Update-Delete (CRUD) APIs, which were designed with single-server interactions in mind; semantics that attempt to guarantee no-existing-record or overwrite-only behavior require locks that don't scale well in a distributed database. Centralizing writes around `put` calls provides much more scalable, simple, and consistent behavior in a distributed, eventually consistent database. You can generally think of CRUD operations mapping to REST operations like this:
+* Read - `get`
+* Create with a known primary key - `put`
+* Create with a generated primary key - `post`/`create`
+* Update (Full) - `put`
+* Update (Partial) - `patch`
+* Delete - `delete`
+
+The RESTful HTTP server and other server interfaces will directly call resource methods of the same name to fulfill incoming requests so resources can be defined as endpoints for external interaction. When resources are used by the server interfaces, the static method will be executed (which starts a transaction and does access checks), which will then create the resource instance and call the corresponding instance method. Paths (URL, MQTT topics) are mapped to different resource instances. Using a path that specifies an ID like `/MyResource/3492` will be mapped to a Resource instance where the instance's ID will be `3492`, and interactions will use the instance methods like `get()`, `put()`, and `post()`. Using the root path (`/MyResource/`) will map to a Resource instance with an ID of `null`, and this represents the collection of all the records in the resource or table.
+
+You can create classes that extend `Resource` to define your own data sources, typically to interface with external data sources (the `Resource` base class is available as a global variable in the HarperDB JS environment). In doing this, you will generally be extending and providing implementations for the instance methods below. For example:
+
+```javascript
+export class MyExternalData extends Resource {
+	async get() {
+		// fetch data from an external source, using our id
+		let response = await this.fetch(this.id);
+		// do something with the response
+	}
+	put(data) {
+		// send the data into the external source
+	}
+	delete() {
+		// delete an entity in the external data source
+	}
+	subscribe(options) {
+		// if the external data source is capable of real-time notification of changes, can subscribe
+	}
+}
+// we can export this class from resources.js as our own endpoint, or use this as the source for
+// a HarperDB table to store and cache the data coming from this data source:
+tables.MyCache.sourcedFrom(MyExternalData);
+```
+
+You can also extend table classes in the same way, overriding the instance methods for custom functionality. The `tables` object is a global variable in the HarperDB JavaScript environment, along with `Resource`:
+
+```javascript
+export class MyTable extends tables.MyTable {
+	get() {
+		// we can add properties or change properties before returning data:
+		this.newProperty = 'newValue';
+		this.existingProperty = 44;
+		return super.get(); // returns the record, modified with the changes above
+	}
+	put(data) {
+		// can change data any way we want
+		super.put(data);
+	}
+	delete() {
+		super.delete();
+	}
+	post(data) {
+		// providing a post handler (for HTTP POST requests) is a common way to create additional
+		// actions that aren't well described with just PUT or DELETE
+	}
+}
+```
+
+Make sure that if you are extending and `export`ing your table with this class, you remove the `@export` directive in your schema, so that you aren't exporting the same table/class name twice.
+
+## Global Variables
+
+### `tables`
+
+This is an object with all the tables in the default database (the default database is "data"). Each table that has been declared or created will be available as a (standard) property on this object, and the value will be the table class that can be used to interact with that table. The table classes implement the Resource API.
+
+### `databases`
+
+This is an object with all the databases that have been defined in HarperDB (in the running instance). Each database that has been declared or created will be available as a (standard) property on this object. The property values are an object with the tables in that database, where each property is a table, like the `tables` object. In fact, `databases.data === tables` should always be true.
+
+### `Resource`
+
+This is the Resource base class. This can be directly extended for custom resources, and is the base class for all tables.
+
+### `server`
+
+This object provides extension points for extension components that wish to implement new server functionality (new protocols, authentication, etc.). See the [extensions documentation for more information](../../developers/components/writing-extensions).
+
+### `transaction`
+
+This provides a function for starting transactions. See the transactions section below for more information.
+
+### `contentTypes`
+
+This provides an interface for defining new content type handlers. See the [content type extensions documentation](../../developers/components/writing-extensions) for more information.
+
+### TypeScript Support
+
+While these objects/methods are all available as global variables, it is easier to get TypeScript support (code assistance, type checking) for these interfaces by explicitly `import`ing them. This can be done by setting up a package link to the main HarperDB package in your app:
+
+```
+# you may need to go to your harperdb directory and set it up as a link first
+npm link harperdb
+```
+
+And then you can import any of the main HarperDB APIs you will use, and your IDE should understand the full typings associated with them:
+
+```javascript
+import { databases, tables, Resource } from 'harperdb';
+```
+
+## Resource Class (Instance) Methods
+
+### Properties/attributes declared in schema
+
+Properties that have been defined in your table's schema can be accessed and modified as direct properties on the Resource instances.
+
+### `get(queryOrProperty?)`: Resource|AsyncIterable
+
+This is called to return the record or data for this resource, and is called by HTTP GET requests. This may be optionally called with a `query` object to specify a query should be performed, or a string to indicate that the specified property value should be returned. When defining Resource classes, you can define or override this method to define exactly what should be returned when retrieving a record. The default `get` method (`super.get()`) returns the current record as a plain object.
+
+The query object can be used to access any query parameters that were included in the URL. For example, with a request to `/my-resource/some-id?param1=value`, we can access URL/request information:
+
+```javascript
+get(query) {
+  // note that query will only exist (as an object) if there is a query string
+  let param1 = query?.get?.('param1'); // returns 'value'
+  let id = this.getId(); // returns 'some-id'
+  ...
+}
+```
+
+If `get` is called for a single record (for a request like `/Table/some-id`), the default action is to return `this` instance of the resource. If `get` is called on a collection (`/Table/?name=value`), the default action is to `search` and return an AsyncIterable of results.
+
+It is important to note that `this` is the resource instance for a specific record, specified by the primary key. Therefore, calling `super.get(query)` performs a `get` on this specific record/resource, not on the whole table. If you wish to access a _different_ record, you should use the static `get` method on the table class, like `Table.get(otherId, context)`.
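+
+For example, a sketch of a `get` override that also loads a second record through the static method, passing `this` as the context so the extra read shares this resource's transaction (the `relatedId` and `relatedName` properties here are hypothetical):
+
+```javascript
+export class MyTable extends tables.MyTable {
+  async get(query) {
+    // load another record from this table by its primary key; passing `this` as the
+    // context makes the read part of the same transaction and snapshot
+    let related = await MyTable.get(this.relatedId, this);
+    this.relatedName = related?.name;
+    return super.get(query);
+  }
+}
+```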
+
+### `search(query: Query)`: AsyncIterable
+
+This performs a query on this resource, searching for records that are descendants of this resource. By default, this is called by `get(query)` from a collection resource. When this is called on the root resource (like `/Table/`), it searches through all records in the table. However, if you call `search` from an instance with a specific ID, like `1` from a path like `Table/1`, it will only return records that are descendants of that record, like `[1, 1]` (path `Table/1/1`) and `[1, 2]` (path `Table/1/2`). If you want to do a standard search of the table, make sure you call the static method, like `Table.search(...)`. You can define or override this method to define how records should be queried. The default `search` method on tables (`super.search(query)`) will perform a query and return an AsyncIterable of results. The query object can be used to specify the desired query.
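+
+For example, a minimal sketch of a standard table-wide query using the static method, assuming a `conditions`-based query object (the attribute names and values are hypothetical):
+
+```javascript
+const { MyTable } = tables;
+// search the whole table (not just the descendants of one record) with the static method
+let results = MyTable.search({
+  conditions: [{ attribute: 'breed', value: 'Labrador' }],
+  limit: 10,
+});
+for await (let record of results) {
+  console.log(record.name);
+}
+```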
+
+
+### `getId(): string|number|Array`
+
+Returns the primary key value for this resource.
+
+### `put(data: object, query?: Query)`
+
+This will assign the provided record or data to this resource, and is called for HTTP PUT requests. You can define or override this method to define how records should be updated. The default `put` method on tables (`super.put(data)`) writes the record to the table (updating or inserting depending on if the record previously existed) as part of the current transaction for the resource instance.
+
+It is important to note that `this` is the resource instance for a specific record, specified by the primary key. Therefore, calling `super.put(data)` updates this specific record/resource, not other records in the table. If you wish to update a _different_ record, you should use the static `put` method on the table class, like `Table.put(data, context)`.
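+
+For example, a sketch of a `put` override that also writes an entry to a separate, hypothetical `AuditLog` table, passing `this` as the context so the audit write joins the same transaction (the `id` and `changedAt` attributes are also hypothetical):
+
+```javascript
+export class MyTable extends tables.MyTable {
+  async put(data) {
+    // AuditLog is a hypothetical table; passing `this` as the context keeps this
+    // write in the same transaction as the update of the current record below
+    await tables.AuditLog.put({ id: `${this.getId()}-${Date.now()}`, changedAt: Date.now() }, this);
+    return super.put(data);
+  }
+}
+```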
+
+The `query` argument is used to represent any additional query parameters that were included in the URL. For example, with a request to `/my-resource/some-id?param1=value`, we can access URL/request information:
+
+```javascript
+put(data, query) {
+  let param1 = query?.get?.('param1'); // returns 'value'
+  ...
+}
+```
+
+### `patch(data: object, query?: Query)`
+
+This will update the existing record with the provided data's properties, and is called for HTTP PATCH requests. You can define or override this method to define how records should be updated. The default `patch` method on tables (`super.patch(data)`) updates the record: the provided properties are applied to the existing record, overwriting the existing record's properties, while any properties in the record that are not specified in the `data` object are preserved. This is performed as part of the current transaction for the resource instance. The `query` argument is used to represent any additional query parameters that were included.
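+
+For example, a minimal sketch of a `patch` override that stamps every partial update with a hypothetical `lastUpdated` attribute:
+
+```javascript
+export class MyTable extends tables.MyTable {
+  patch(data) {
+    // add a timestamp to every partial update ("lastUpdated" is a hypothetical attribute)
+    data.lastUpdated = Date.now();
+    return super.patch(data);
+  }
+}
+```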
+
+### `update(data: object, fullUpdate: boolean?)`
+
+This is called by the default `put` and `patch` handlers to update a record. `put` calls it with `fullUpdate` set to `true` to indicate a full record replacement, while `patch` calls it with the second argument as `false`. Any additional property changes that are made before the transaction commits will also be persisted.
+
+### `delete(queryOrProperty?)`
+
+This will delete this record or resource, and is called for HTTP DELETE requests. You can define or override this method to define how records should be deleted. The default `delete` method on tables (`super.delete()`) deletes the record from the table as part of the current transaction.
+
+### `publish(message)`
+
+This will publish a message to this resource, and is called for MQTT publish commands. You can define or override this method to define how messages should be published. The default `publish` method on tables (`super.publish(message)`) records the published message as part of the current transaction; this will not change the data in the record but will notify any subscribers to the record/topic.
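+
+For example, a sketch of a `publish` override that rejects malformed messages before they reach subscribers (the `MyTopic` table and the `type` property are hypothetical):
+
+```javascript
+export class MyTopic extends tables.MyTopic {
+  publish(message) {
+    // validate incoming messages before notifying subscribers
+    if (!message?.type) throw new Error('message must have a type');
+    return super.publish(message);
+  }
+}
+```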
+
+### `post(data: object, query?: Query)`
+
+This is called for HTTP POST requests. You can define this method to provide your own implementation of how POST requests should be handled. Generally `POST` provides a generic mechanism for various types of data updates, and is a good place to define custom functionality for updating records. The default behavior is to create a new record/resource. The `query` argument is used to represent any additional query parameters that were included.
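+
+For example, a sketch of a custom POST action that archives the current record by applying a partial update instead of creating a new record (the `archived` and `archivedReason` attributes and the `reason` property are hypothetical):
+
+```javascript
+export class MyTable extends tables.MyTable {
+  post(data) {
+    // a hypothetical "archive" action: apply a partial update to this record
+    // rather than creating a new one
+    return super.patch({ archived: true, archivedReason: data?.reason });
+  }
+}
+```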
+
+### `invalidate()`
+
+This method is available on tables. This will invalidate the current record in the table. This can be used with a caching table and is used to indicate that the source data has changed, and the record needs to be reloaded when next accessed.
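+
+For example, a sketch that exposes invalidation as a custom POST action on a hypothetical caching table named `MyCache`:
+
+```javascript
+export class MyCache extends tables.MyCache {
+  post() {
+    // a hypothetical "refresh" action: mark this cached record stale so it is
+    // reloaded from its source the next time it is accessed
+    this.invalidate();
+  }
+}
+```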
+
+### `subscribe(subscriptionRequest: SubscriptionRequest): Promise`
+
+This will subscribe to the current resource, and is called for MQTT subscribe commands. You can define or override this method to define how subscriptions should be handled. The default `subscribe` method on tables (`super.subscribe(subscriptionRequest)`) will set up a listener that will be called for any changes or published messages to this resource.
+
+The returned promise resolves to a Subscription object, which is an `AsyncIterable` that you can iterate through with `for await`. It also has a `queue` property holding an array of any messages that are ready to be delivered immediately (if you specified a start time or previous count, or there is a message for the current or "retained" record, these may be returned right away). A usage sketch follows the options list below.
+
+The `SubscriptionRequest` object supports the following properties (all optional):
+
+* `includeDescendants` - If this is enabled, this will create a subscription to all the record updates/messages that are prefixed with the id. For example, a subscription request of `{id:'sub', includeDescendants: true}` would return events for any update with an id/topic of the form sub/\* (like `sub/1`).
+* `startTime` - This will begin the subscription at a past point in time, returning all updates/messages since the start time (a catch-up of historical messages). This can be used to resume a subscription, getting all messages since the last subscription.
+* `previousCount` - This specifies the number of previous updates/messages to deliver. For example, `previousCount: 10` would return the last ten messages. Note that `previousCount` cannot be used in conjunction with `startTime`.
+* `omitCurrent` - Indicates that the current (or retained) record should _not_ be immediately sent as the first update in the subscription (if no `startTime` or `previousCount` was used). By default, the current record is sent as the first update.
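+
+As referenced above, a minimal sketch of consuming a subscription with `for await`, assuming the static form of `subscribe` on the table class (the table name, id, and four-hour start time are hypothetical):
+
+```javascript
+const subscription = await tables.MyTable.subscribe({
+  id: 'some-id',
+  includeDescendants: true,
+  // catch up on roughly the last four hours of updates/messages
+  startTime: Date.now() - 4 * 60 * 60 * 1000,
+});
+for await (const event of subscription) {
+  console.log('update received:', event);
+}
+```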
+
+### `connect(incomingMessages?: AsyncIterable