diff --git a/docs/administration/harper-studio/create-account.md b/docs/administration/harper-studio/create-account.md index 21752357..c0b0cc96 100644 --- a/docs/administration/harper-studio/create-account.md +++ b/docs/administration/harper-studio/create-account.md @@ -12,7 +12,7 @@ Start at the [Harper Studio sign up page](https://studio.harperdb.io/sign-up). - Email Address - Subdomain - _Part of the URL that will be used to identify your Harper Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ + _Part of the URL that will be used to identify your Harper Cloud Instances. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ - Coupon Code (optional) diff --git a/docs/administration/harper-studio/instances.md b/docs/administration/harper-studio/instances.md index e53403a9..b367ed96 100644 --- a/docs/administration/harper-studio/instances.md +++ b/docs/administration/harper-studio/instances.md @@ -26,7 +26,7 @@ A summary view of all instances within an organization can be viewed by clicking 1. Fill out Instance Info. 1. Enter Instance Name - _This will be used to build your instance URL. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com). The Instance URL will be previewed below._ + _This will be used to build your instance URL. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com). The Instance URL will be previewed below._ 1. 
Enter Instance Username diff --git a/docs/administration/harper-studio/organizations.md b/docs/administration/harper-studio/organizations.md index c26b4481..f93eeff0 100644 --- a/docs/administration/harper-studio/organizations.md +++ b/docs/administration/harper-studio/organizations.md @@ -29,7 +29,7 @@ A new organization can be created as follows: - Enter Organization Name _This is used for descriptive purposes only._ - Enter Organization Subdomain - _Part of the URL that will be used to identify your Harper Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ + _Part of the URL that will be used to identify your Harper Cloud Instances. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ 1. Click Create Organization. ## Delete an Organization diff --git a/docs/administration/harper-studio/query-instance-data.md b/docs/administration/harper-studio/query-instance-data.md index 29a385b9..552a5112 100644 --- a/docs/administration/harper-studio/query-instance-data.md +++ b/docs/administration/harper-studio/query-instance-data.md @@ -13,7 +13,7 @@ SQL queries can be executed directly through the Harper Studio with the followin 1. Enter your SQL query in the SQL query window. 1. Click **Execute**. -_Please note, the Studio will execute the query exactly as entered. For example, if you attempt to `SELECT _` from a table with millions of rows, you will most likely crash your browser.\* +_Please note, the Studio will execute the query exactly as entered. 
For example, if you attempt to `SELECT *` from a table with millions of rows, you will most likely crash your browser._ ## Browse Query Results Set diff --git a/docs/administration/logging/standard-logging.md b/docs/administration/logging/standard-logging.md index a5116ed7..044c2260 100644 --- a/docs/administration/logging/standard-logging.md +++ b/docs/administration/logging/standard-logging.md @@ -22,15 +22,15 @@ For example, a typical log entry looks like: The components of a log entry are: -- timestamp - This is the date/time stamp when the event occurred -- level - This is an associated log level that gives a rough guide to the importance and urgency of the message. The available log levels in order of least urgent (and more verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`. -- thread/ID - This reports the name of the thread and the thread ID that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process. Key threads are: - - main - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads - - http - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions. - - Clustering\* - These are threads and processes that handle replication. - - job - These are job threads that have been started to handle operations that are executed in a separate job thread. -- tags - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags. -- message - This is the main message that was reported. +- `timestamp` - This is the date/time stamp when the event occurred +- `level` - This is an associated log level that gives a rough guide to the importance and urgency of the message.
The available log levels in order of least urgent (and most verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`. +- `thread/ID` - This reports the name of the thread and the thread ID that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread ID for them since they are a separate process. Key threads are: + - `main` - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads + - `http` - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions. + - `Clustering` - These are threads and processes that handle replication. + - `job` - These are job threads that have been started to handle operations that are executed in a separate job thread. +- `tags` - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags. +- `message` - This is the main message that was reported. We try to keep logging to a minimum by default; to do this, the default log level is `error`. If you require more information from the logs, lowering the log level (for example, to `debug` or `trace`) will provide that. @@ -46,7 +46,7 @@ Harper logs can optionally be streamed to standard streams. Logging to standard ## Logging Rotation -Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space. For more information see “logging” in our [config docs](../../deployments/configuration). +Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space. For more information see "logging" in our [config docs](../../deployments/configuration).
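Since the default level is `error`, getting more detail is a one-line change. As a sketch, assuming the `logging` section described under "logging" in the config docs (check your version's docs for the exact keys):

```yaml
# harperdb-config.yaml (excerpt), a sketch; see "logging" in the config docs
logging:
  level: debug # default is error; trace is the most verbose level
```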
## Read Logs via the API diff --git a/docs/custom-functions/restarting-server.md b/docs/custom-functions/restarting-server.md index fbabb514..8717d1d0 100644 --- a/docs/custom-functions/restarting-server.md +++ b/docs/custom-functions/restarting-server.md @@ -4,7 +4,7 @@ title: Restarting the Server # Restarting the Server -One way to manage Custom Functions is through [Harper Studio](../harper-studio/). It performs all the necessary operations automatically. To get started, navigate to your instance in Harper Studio and click the subnav link for “functions”. If you have not yet enabled Custom Functions, it will walk you through the process. Once configuration is complete, you can manage and deploy Custom Functions in minutes. +One way to manage Custom Functions is through [Harper Studio](../harper-studio/). It performs all the necessary operations automatically. To get started, navigate to your instance in Harper Studio and click the subnav link for "functions". If you have not yet enabled Custom Functions, it will walk you through the process. Once configuration is complete, you can manage and deploy Custom Functions in minutes. For any changes made to your routes, helpers, or projects, you’ll need to restart the Custom Functions server to see them take effect. Harper Studio does this automatically whenever you create or delete a project, or add, edit, or delete a route or helper.
If you need to start the Custom Functions server yourself, you can use the following operation to do so: diff --git a/docs/deployments/install-harper/linux.md b/docs/deployments/install-harper/linux.md index 27a9dc79..cae27c9d 100644 --- a/docs/deployments/install-harper/linux.md +++ b/docs/deployments/install-harper/linux.md @@ -20,7 +20,7 @@ These instructions assume that the following has already been completed: While you will need to access Harper through port 9925 for the administration through the operations API, and port 9932 for clustering, for a higher level of security, you may want to consider keeping both of these ports restricted to a VPN or VPC, and only have the application interface (9926 by default) exposed to the public Internet. -For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default “ubuntu” user account. +For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default "ubuntu" user account.
+- `eviction` - The amount of time after expiration before a record can be evicted (defaults to zero). +- `scanInterval` - The interval for scanning for expired records (defaults to one quarter of the total of expiration and eviction). ## Define External Data Source diff --git a/docs/developers/applications/define-routes.md b/docs/developers/applications/define-routes.md index b41615c7..323d1e3e 100644 --- a/docs/developers/applications/define-routes.md +++ b/docs/developers/applications/define-routes.md @@ -22,7 +22,7 @@ However, you can specify the path to be `/` if you wish to have your routes hand - The route below, using the default config, within the **dogs** project, with a route of **breeds** would be available at **http://localhost:9926/dogs/breeds**. -In effect, this route is just a pass-through to Harper. The same result could have been achieved by hitting the core Harper API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the “helper methods” section, below. +In effect, this route is just a pass-through to Harper. The same result could have been achieved by hitting the core Harper API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the "helper methods" section, below. ```javascript export default async (server, { hdbCore, logger }) => { @@ -39,7 +39,7 @@ export default async (server, { hdbCore, logger }) => { For endpoints where you want to execute multiple operations against Harper, or perform additional processing (like an ML classification, or an aggregation, or a call to a 3rd party API), you can define your own logic in the handler. The function below will execute a query against the dogs table, and filter the results to only return those dogs over 4 years in age. -**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which- as the name implies- bypasses all user authentication.
See the security concerns and mitigations in the “helper methods” section, below.** +**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which, as the name implies, bypasses all user authentication. See the security concerns and mitigations in the "helper methods" section, below.** ```javascript export default async (server, { hdbCore, logger }) => { diff --git a/docs/developers/clustering/index.md b/docs/developers/clustering/index.md index 95c3433c..fddd3851 100644 --- a/docs/developers/clustering/index.md +++ b/docs/developers/clustering/index.md @@ -22,10 +22,10 @@ A common use case is an edge application collecting and analyzing sensor data th Harper simplifies the architecture of such an application with its bi-directional, table-level replication: -- The edge instance subscribes to a “thresholds” table on the cloud instance, so the application only makes localhost calls to get the thresholds. -- The application continually pushes sensor data into a “sensor_data” table via the localhost API, comparing it to the threshold values as it does so. -- When a threshold violation occurs, the application adds a record to the “alerts” table. -- The application appends to that record array “sensor_data” entries for the 60 seconds (or minutes, or days) leading up to the threshold violation.
+- The edge instance publishes the "alerts" table up to the cloud instance. By letting Harper focus on the fault-tolerant logistics of transporting your data, you get to write less code. By moving data only when and where it’s needed, you lower storage and bandwidth costs. And by restricting your app to only making local calls to Harper, you reduce the overall exposure of your application to outside forces. diff --git a/docs/developers/miscellaneous/google-data-studio.md b/docs/developers/miscellaneous/google-data-studio.md index 730ee1bd..3d85f672 100644 --- a/docs/developers/miscellaneous/google-data-studio.md +++ b/docs/developers/miscellaneous/google-data-studio.md @@ -19,9 +19,9 @@ Get started by selecting the Harper connector from the [Google Data Studio Partn 1. Log in to https://datastudio.google.com/. 1. Add a new Data Source using the Harper connector. The current release version can be added as a data source by following this link: [Harper Google Data Studio Connector](https://datastudio.google.com/datasources/create?connectorId=AKfycbxBKgF8FI5R42WVxO-QCOq7dmUys0HJrUJMkBQRoGnCasY60_VJeO3BhHJPvdd20-S76g). 1. Authorize the connector to access other servers on your behalf (this allows the connector to contact your database). -1. Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word “Basic” at the start of it. -1. Check the box for “Secure Connections Only” if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer. -1. Check the box for “Allow Bad Certs” if your Harper instance does not have a valid SSL certificate. [Harper Cloud](../../deployments/harper-cloud/) always has valid certificates, and so will never require this to be checked. Instances you set up yourself may require this, if you are using self-signed certs. 
If you are using [Harper Cloud](../../deployments/harper-cloud/) or another instance you know should always have valid SSL certificates, do not check this box. +1. Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word "Basic" at the start of it. +1. Check the box for "Secure Connections Only" if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer. +1. Check the box for "Allow Bad Certs" if your Harper instance does not have a valid SSL certificate. [Harper Cloud](../../deployments/harper-cloud/) always has valid certificates, and so will never require this to be checked. Instances you set up yourself may require this, if you are using self-signed certs. If you are using [Harper Cloud](../../deployments/harper-cloud/) or another instance you know should always have valid SSL certificates, do not check this box. 1. Choose your Query Type. This determines what information the configuration will ask for after pressing the Next button. - Table will ask you for a Schema and a Table to return all fields of, using `SELECT *`. - SQL will ask you for the SQL query you’re using to retrieve fields from the database. You may `JOIN` multiple tables together, and use Harper-specific SQL functions, along with the usual power that SQL grants. diff --git a/docs/developers/operations-api/analytics.md b/docs/developers/operations-api/analytics.md index 2057d1b9..558530cc 100644 --- a/docs/developers/operations-api/analytics.md +++ b/docs/developers/operations-api/analytics.md @@ -8,12 +8,12 @@ title: Analytics Operations Retrieves analytics data from the server.
-- operation _(required)_ - must always be `get_analytics` -- metric _(required)_ - any value returned by `list_metrics` -- start*time *(optional)\_ - Unix timestamp in seconds -- end*time *(optional)\_ - Unix timestamp in seconds -- get*attributes *(optional)\_ - array of attribute names to retrieve -- conditions _(optional)_ - array of conditions to filter results (see [search_by_conditions docs](developers/operations-api/nosql-operations) for details) +- `operation` _(required)_ - must always be `get_analytics` +- `metric` _(required)_ - any value returned by `list_metrics` +- `start_time` _(optional)_ - Unix timestamp in seconds +- `end_time` _(optional)_ - Unix timestamp in seconds +- `get_attributes` _(optional)_ - array of attribute names to retrieve +- `conditions` _(optional)_ - array of conditions to filter results (see [search_by_conditions docs](developers/operations-api/nosql-operations) for details) ### Body @@ -57,8 +57,8 @@ Retrieves analytics data from the server. Returns a list of available metrics that can be queried. -- operation _(required)_ - must always be `list_metrics` -- metric*types *(optional)\_ - array of metric types to filter results; one or both of `custom` and `builtin`; default is `builtin` +- `operation` _(required)_ - must always be `list_metrics` +- `metric_types` _(optional)_ - array of metric types to filter results; one or both of `custom` and `builtin`; default is `builtin` ### Body @@ -79,8 +79,8 @@ Returns a list of available metrics that can be queried. Provides detailed information about a specific metric, including its structure and available parameters. 
-- operation _(required)_ - must always be `describe_metric` -- metric _(required)_ - name of the metric to describe +- `operation` _(required)_ - must always be `describe_metric` +- `metric` _(required)_ - name of the metric to describe ### Body diff --git a/docs/developers/operations-api/bulk-operations.md b/docs/developers/operations-api/bulk-operations.md index 2e7d7f45..b6714552 100644 --- a/docs/developers/operations-api/bulk-operations.md +++ b/docs/developers/operations-api/bulk-operations.md @@ -8,11 +8,11 @@ title: Bulk Operations Exports data based on a given search operation to a local file in JSON or CSV format. -- operation _(required)_ - must always be `export_local` -- format _(required)_ - the format you wish to export the data, options are `json` & `csv` -- path _(required)_ - path local to the server to export the data -- search*operation *(required)\_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` -- filename _(optional)_ - the name of the file where your export will be written to (do not include extension in filename). If one is not provided it will be autogenerated based on the epoch. +- `operation` _(required)_ - must always be `export_local` +- `format` _(required)_ - the format you wish to export the data, options are `json` & `csv` +- `path` _(required)_ - path local to the server to export the data +- `search_operation` _(required)_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` +- `filename` _(optional)_ - the name of the file where your export will be written to (do not include extension in filename). If one is not provided it will be autogenerated based on the epoch. ### Body @@ -42,11 +42,11 @@ Exports data based on a given search operation to a local file in JSON or CSV fo Ingests CSV data, provided directly in the operation as an `insert`, `update` or `upsert` into the specified database table. 
-- operation _(required)_ - must always be `csv_data_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- data _(required)_ - csv data to import into Harper +- `operation` _(required)_ - must always be `csv_data_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `data` _(required)_ - csv data to import into Harper ### Body @@ -77,11 +77,11 @@ Ingests CSV data, provided via a path on the local filesystem, as an `insert`, ` _Note: The CSV file must reside on the same machine on which Harper is running. For example, the path to a CSV on your computer will produce an error if your Harper instance is a cloud instance._ -- operation _(required)_ - must always be `csv_file_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- file*path *(required)\_ - path to the csv file on the host running Harper +- `operation` _(required)_ - must always be `csv_file_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. 
The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `file_path` _(required)_ - path to the csv file on the host running Harper ### Body @@ -110,11 +110,11 @@ _Note: The CSV file must reside on the same machine on which Harper is running. Ingests CSV data, provided via URL, as an `insert`, `update` or `upsert` into the specified database table. -- operation _(required)_ - must always be `csv_url_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- csv*url *(required)\_ - URL to the csv +- `operation` _(required)_ - must always be `csv_url_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `csv_url` _(required)_ - URL to the csv ### Body @@ -143,10 +143,10 @@ Ingests CSV data, provided via URL, as an `insert`, `update` or `upsert` into th Exports data based on a given search operation from table to AWS S3 in JSON or CSV format. 
-- operation _(required)_ - must always be `export_to_s3` -- format _(required)_ - the format you wish to export the data, options are `json` & `csv` -- s3 _(required)_ - details your access keys, bucket, bucket region and key for saving the data to S3 -- search*operation *(required)\_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` +- `operation` _(required)_ - must always be `export_to_s3` +- `format` _(required)_ - the format you wish to export the data, options are `json` & `csv` +- `s3` _(required)_ - details your access keys, bucket, bucket region and key for saving the data to S3 +- `search_operation` _(required)_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` ### Body @@ -183,16 +183,16 @@ Exports data based on a given search operation from table to AWS S3 in JSON or C This operation allows users to import CSV or JSON files from an AWS S3 bucket as an `insert`, `update` or `upsert`. -- operation _(required)_ - must always be `import_from_s3` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- s3 _(required)_ - object containing required AWS S3 bucket info for operation: - - aws_access_key_id - AWS access key for authenticating into your S3 bucket - - aws_secret_access_key - AWS secret for authenticating into your S3 bucket - - bucket - AWS S3 bucket to import from - - key - the name of the file to import - _the file must include a valid file extension ('.csv' or '.json')_ - - region - the region of the bucket +- `operation` _(required)_ - must always be `import_from_s3` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. 
The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `s3` _(required)_ - object containing required AWS S3 bucket info for operation: + - `aws_access_key_id` - AWS access key for authenticating into your S3 bucket + - `aws_secret_access_key` - AWS secret for authenticating into your S3 bucket + - `bucket` - AWS S3 bucket to import from + - `key` - the name of the file to import - _the file must include a valid file extension ('.csv' or '.json')_ + - `region` - the region of the bucket ### Body @@ -229,10 +229,10 @@ Delete data before the specified timestamp on the specified database table exclu _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_records_before` -- date _(required)_ - records older than this date will be deleted. Supported format looks like: `YYYY-MM-DDThh:mm:ss.sZ` -- schema _(required)_ - name of the schema where you are deleting your data -- table _(required)_ - name of the table where you are deleting your data +- `operation` _(required)_ - must always be `delete_records_before` +- `date` _(required)_ - records older than this date will be deleted. 
Supported format looks like: `YYYY-MM-DDThh:mm:ss.sZ` +- `schema` _(required)_ - name of the schema where you are deleting your data +- `table` _(required)_ - name of the table where you are deleting your data ### Body diff --git a/docs/developers/operations-api/certificate-management.md b/docs/developers/operations-api/certificate-management.md index b569dffc..f8eea402 100644 --- a/docs/developers/operations-api/certificate-management.md +++ b/docs/developers/operations-api/certificate-management.md @@ -12,12 +12,12 @@ If a `private_key` is not passed the operation will search for one that matches _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_certificate` -- name _(required)_ - a unique name for the certificate -- certificate _(required)_ - a PEM formatted certificate string -- is*authority *(required)\_ - a boolean indicating if the certificate is a certificate authority -- hosts _(optional)_ - an array of hostnames that the certificate is valid for -- private*key *(optional)\_ - a PEM formatted private key string +- `operation` _(required)_ - must always be `add_certificate` +- `name` _(required)_ - a unique name for the certificate +- `certificate` _(required)_ - a PEM formatted certificate string +- `is_authority` _(required)_ - a boolean indicating if the certificate is a certificate authority +- `hosts` _(optional)_ - an array of hostnames that the certificate is valid for +- `private_key` _(optional)_ - a PEM formatted private key string ### Body @@ -47,8 +47,8 @@ Removes a certificate from the `hdb_certificate` system table and deletes the co _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `remove_certificate` -- name _(required)_ - the name of the certificate +- `operation` _(required)_ - must always be `remove_certificate` +- `name` _(required)_ - the name of the certificate ### Body @@ -75,7 +75,7 @@ Lists all certificates in the `hdb_certificate` system 
table. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_certificates` +- `operation` _(required)_ - must always be `list_certificates` ### Body diff --git a/docs/developers/operations-api/clustering-nats.md b/docs/developers/operations-api/clustering-nats.md index a45c593e..8076da98 100644 --- a/docs/developers/operations-api/clustering-nats.md +++ b/docs/developers/operations-api/clustering-nats.md @@ -10,11 +10,11 @@ Adds a route/routes to either the hub or leaf server cluster configuration. This _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_set_routes` -- server _(required)_ - must always be `hub` or `leaf`, in most cases you should use `hub` here -- routes _(required)_ - must always be an objects array with a host and port: - - host - the host of the remote instance you are clustering to - - port - the clustering port of the remote instance you are clustering to, in most cases this is the value in `clustering.hubServer.cluster.network.port` on the remote instance `harperdb-config.yaml` +- `operation` _(required)_ - must always be `cluster_set_routes` +- `server` _(required)_ - must always be `hub` or `leaf`, in most cases you should use `hub` here +- `routes` _(required)_ - must always be an objects array with a host and port: + - `host` - the host of the remote instance you are clustering to + - `port` - the clustering port of the remote instance you are clustering to, in most cases this is the value in `clustering.hubServer.cluster.network.port` on the remote instance `harperdb-config.yaml` ### Body @@ -78,7 +78,7 @@ Gets all the hub and leaf server routes from the config file. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_get_routes` +- `operation` _(required)_ - must always be `cluster_get_routes` ### Body @@ -122,8 +122,8 @@ Removes route(s) from hub and/or leaf server routes array in config file. Return _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_delete_routes` -- routes _required_ - Must be an array of route object(s) +- `operation` _(required)_ - must always be `cluster_delete_routes` +- `routes` _(required)_ - Must be an array of route object(s) ### Body @@ -162,14 +162,14 @@ Registers an additional Harper instance with associated subscriptions. Learn mor _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_node` -- node*name *(required)\_ - the node name of the remote node -- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`: - - schema - the schema to replicate from - - table - the table to replicate from - - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish - a boolean which determines if transactions on the local table should be replicated on the remote table - - start*time *(optional)\_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format +- `operation` _(required)_ - must always be `add_node` +- `node_name` _(required)_ - the node name of the remote node +- `subscriptions` _(required)_ - The relationship created between nodes. 
Must be an object array and include `schema`, `table`, `subscribe` and `publish`: + - `schema` - the schema to replicate from + - `table` - the table to replicate from + - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table + - `start_time` _(optional)_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format ### Body @@ -205,14 +205,14 @@ Modifies an existing Harper instance registration and associated subscriptions. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `update_node` -- node*name *(required)\_ - the node name of the remote node you are updating -- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`: - - schema - the schema to replicate from - - table - the table to replicate from - - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish - a boolean which determines if transactions on the local table should be replicated on the remote table - - start*time *(optional)\_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format +- `operation` _(required)_ - must always be `update_node` +- `node_name` _(required)_ - the node name of the remote node you are updating +- `subscriptions` _(required)_ - The relationship created between nodes. 
Must be an object array and include `schema`, `table`, `subscribe` and `publish`: + - `schema` - the schema to replicate from + - `table` - the table to replicate from + - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table + - `start_time` _(optional)_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format ### Body @@ -248,13 +248,13 @@ A more adeptly named alias for add and update node. This operation behaves as a _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_node_replication` -- node*name *(required)\_ - the node name of the remote node you are updating -- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and `table`, `subscribe` and `publish`: - - database _(optional)_ - the database to replicate from - - table _(required)_ - the table to replicate from - - subscribe _(required)_ - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish _(required)_ - a boolean which determines if transactions on the local table should be replicated on the remote table +- `operation` _(required)_ - must always be `set_node_replication` +- `node_name` _(required)_ - the node name of the remote node you are updating +- `subscriptions` _(required)_ - The relationship created between nodes. 
Must be an object array and include `table`, `subscribe` and `publish`:
+ - `database` _(optional)_ - the database to replicate from
+ - `table` _(required)_ - the table to replicate from
+ - `subscribe` _(required)_ - a boolean which determines if transactions on the remote table should be replicated on the local table
+ - `publish` _(required)_ - a boolean which determines if transactions on the local table should be replicated on the remote table
-

### Body

@@ -289,7 +289,7 @@ Returns an array of status objects from a cluster. A status object will contain

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `cluster_status`
+- `operation` _(required)_ - must always be `cluster_status`

### Body

@@ -336,10 +336,10 @@ Returns an object array of enmeshed nodes. Each node object will contain the nam

_Operation is restricted to super_user roles only_

-- operation _(required)_- must always be `cluster_network`
-- timeout (_optional_) - the amount of time in milliseconds to wait for a response from the network. Must be a number
-- connected*nodes (\_optional*) - omit `connected_nodes` from the response. Must be a boolean. Defaults to `false`
-- routes (_optional_) - omit `routes` from the response. Must be a boolean. Defaults to `false`
+- `operation` _(required)_ - must always be `cluster_network`
+- `timeout` _(optional)_ - the amount of time in milliseconds to wait for a response from the network. Must be a number
+- `connected_nodes` _(optional)_ - omit `connected_nodes` from the response. Must be a boolean. Defaults to `false`
+- `routes` _(optional)_ - omit `routes` from the response. Must be a boolean. Defaults to `false`

### Body

@@ -383,8 +383,8 @@ Removes a Harper instance and associated subscriptions from the cluster.
Learn m _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `remove_node` -- name _(required)_ - The name of the node you are de-registering +- `operation` _(required)_ - must always be `remove_node` +- `name` _(required)_ - The name of the node you are de-registering ### Body @@ -412,8 +412,8 @@ Learn more about [Harper clustering here](../clustering/). _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `configure_cluster` -- connections _(required)_ - must be an object array with each object containing `node_name` and `subscriptions` for that node +- `operation` _(required)_ - must always be `configure_cluster` +- `connections` _(required)_ - must be an object array with each object containing `node_name` and `subscriptions` for that node ### Body @@ -463,10 +463,10 @@ Will purge messages from a stream _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `purge_stream` -- database _(required)_ - the name of the database where the streams table resides -- table _(required)_ - the name of the table that belongs to the stream -- options _(optional)_ - control how many messages get purged. Options are: +- `operation` _(required)_ - must always be `purge_stream` +- `database` _(required)_ - the name of the database where the streams table resides +- `table` _(required)_ - the name of the table that belongs to the stream +- `options` _(optional)_ - control how many messages get purged. Options are: - `keep` - purge will keep this many most recent messages - `seq` - purge all messages up to, but not including, this sequence diff --git a/docs/developers/operations-api/clustering.md b/docs/developers/operations-api/clustering.md index 4a19cbf2..47b635ca 100644 --- a/docs/developers/operations-api/clustering.md +++ b/docs/developers/operations-api/clustering.md @@ -14,18 +14,18 @@ Adds a new Harper instance to the cluster. 
If `subscriptions` are provided, it w _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_node` -- hostname or url _(required)_ - one of these fields is required. You must provide either the `hostname` or the `url` of the node you want to add -- verify*tls *(optional)\_ - a boolean which determines if the TLS certificate should be verified. This will allow the Harper default self-signed certificates to be accepted. Defaults to `true` -- authorization _(optional)_ - an object or a string which contains the authorization information for the node being added. If it is an object, it should contain `username` and `password` fields. If it is a string, it should use HTTP `Authorization` style credentials -- retain*authorization *(optional)\_ - a boolean which determines if the authorization credentials should be retained/stored and used everytime a connection is made to this node. If `true`, the authorization will be stored on the node record. Generally this should not be used, as mTLS/certificate based authorization is much more secure and safe, and avoids the need for storing credentials. Defaults to `false`. -- revoked*certificates *(optional)\_ - an array of revoked certificates serial numbers. If a certificate is revoked, it will not be accepted for any connections. -- shard _(optional)_ - a number which can be used to indicate which shard this node belongs to. This is only needed if you are using sharding. -- subscriptions _(optional)_ - The relationship created between nodes. If not provided a fully replicated cluster will be setup. 
Must be an object array and include `database`, `table`, `subscribe` and `publish`:
- - database - the database to replicate
- - table - the table to replicate
- - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table
- - publish - a boolean which determines if transactions on the local table should be replicated on the remote table
+- `operation` _(required)_ - must always be `add_node`
+- `hostname` or `url` _(required)_ - one of these fields is required. You must provide either the `hostname` or the `url` of the node you want to add
+- `verify_tls` _(optional)_ - a boolean which determines if the TLS certificate should be verified. This will allow the Harper default self-signed certificates to be accepted. Defaults to `true`
+- `authorization` _(optional)_ - an object or a string which contains the authorization information for the node being added. If it is an object, it should contain `username` and `password` fields. If it is a string, it should use HTTP `Authorization` style credentials
+- `retain_authorization` _(optional)_ - a boolean which determines if the authorization credentials should be retained/stored and used every time a connection is made to this node. If `true`, the authorization will be stored on the node record. Generally this should not be used, as mTLS/certificate based authorization is much more secure and safe, and avoids the need for storing credentials. Defaults to `false`.
+- `revoked_certificates` _(optional)_ - an array of revoked certificates serial numbers. If a certificate is revoked, it will not be accepted for any connections.
+- `shard` _(optional)_ - a number which can be used to indicate which shard this node belongs to. This is only needed if you are using sharding.
+- `subscriptions` _(optional)_ - The relationship created between nodes. If not provided a fully replicated cluster will be set up.
Must be an object array and include `database`, `table`, `subscribe` and `publish`: + - `database` - the database to replicate + - `table` - the table to replicate + - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table ### Body @@ -59,15 +59,15 @@ _Operation is restricted to super_user roles only_ _Note: will attempt to add the node if it does not exist_ -- operation _(required)_ - must always be `update_node` -- hostname _(required)_ - the `hostname` of the remote node you are updating -- revoked*certificates *(optional)\_ - an array of revoked certificates serial numbers. If a certificate is revoked, it will not be accepted for any connections. -- shard _(optional)_ - a number which can be used to indicate which shard this node belongs to. This is only needed if you are using sharding. -- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `database`, `table`, `subscribe` and `publish`: - - database - the database to replicate from - - table - the table to replicate from - - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish - a boolean which determines if transactions on the local table should be replicated on the remote table +- `operation` _(required)_ - must always be `update_node` +- `hostname` _(required)_ - the `hostname` of the remote node you are updating +- `revoked_certificates` _(optional)_ - an array of revoked certificates serial numbers. If a certificate is revoked, it will not be accepted for any connections. +- `shard` _(optional)_ - a number which can be used to indicate which shard this node belongs to. This is only needed if you are using sharding. +- `subscriptions` _(required)_ - The relationship created between nodes. 
Must be an object array and include `database`, `table`, `subscribe` and `publish`: + - `database` - the database to replicate from + - `table` - the table to replicate from + - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table ### Body @@ -102,8 +102,8 @@ Removes a Harper node from the cluster and stops replication, [Learn more about _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `remove_node` -- name _(required)_ - The name of the node you are removing +- `operation` _(required)_ - must always be `remove_node` +- `name` _(required)_ - The name of the node you are removing ### Body @@ -132,7 +132,7 @@ Returns an array of status objects from a cluster. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_status` +- `operation` _(required)_ - must always be `cluster_status` ### Body @@ -167,7 +167,8 @@ _Operation is restricted to super_user roles only_ "lastReceivedRemoteTime": "Wed, 12 Feb 2025 16:49:29 GMT", "lastReceivedLocalTime": "Wed, 12 Feb 2025 16:50:59 GMT", "lastSendTime": "Wed, 12 Feb 2025 16:50:59 GMT" - }, + } + ] } ], "node_name": "server-1.domain.com", @@ -190,8 +191,8 @@ Bulk create/remove subscriptions for any number of remote nodes. Resets and repl _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `configure_cluster` -- connections _(required)_ - must be an object array with each object following the `add_node` schema. +- `operation` _(required)_ - must always be `configure_cluster` +- `connections` _(required)_ - must be an object array with each object following the `add_node` schema. ### Body @@ -251,8 +252,8 @@ Adds a route/routes to the `replication.routes` configuration. 
This operation be _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_set_routes` -- routes _(required)_ - the routes field is an array that specifies the routes for clustering. Each element in the array can be either a string or an object with `hostname` and `port` properties. +- `operation` _(required)_ - must always be `cluster_set_routes` +- `routes` _(required)_ - the routes field is an array that specifies the routes for clustering. Each element in the array can be either a string or an object with `hostname` and `port` properties. ### Body @@ -293,7 +294,7 @@ Gets the replication routes from the Harper config file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_get_routes` +- `operation` _(required)_ - must always be `cluster_get_routes` ### Body @@ -323,8 +324,8 @@ Removes route(s) from the Harper config file. Returns a deletion success message _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_delete_routes` -- routes _required_ - Must be an array of route object(s) +- `operation` _(required)_ - must always be `cluster_delete_routes` +- `routes` _(required)_ - Must be an array of route object(s) ### Body diff --git a/docs/developers/operations-api/components.md b/docs/developers/operations-api/components.md index 6801a941..51ac54b3 100644 --- a/docs/developers/operations-api/components.md +++ b/docs/developers/operations-api/components.md @@ -10,9 +10,9 @@ Creates a new component project in the component root directory using a predefin _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_component` -- project _(required)_ - the name of the project you wish to create -- replicated _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. 
+- `operation` _(required)_ - must always be `add_component` +- `project` _(required)_ - the name of the project you wish to create +- `replicated` _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. ### Body @@ -75,13 +75,13 @@ _Note: After deploying a component a restart may be required_ _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `deploy_component` -- project _(required)_ - the name of the project you wish to deploy -- package _(optional)_ - this can be any valid GitHub or NPM reference -- payload _(optional)_ - a base64-encoded string representation of the .tar file. Must be a string -- restart _(optional)_ - must be either a boolean or the string `rolling`. If set to `rolling`, a rolling restart will be triggered after the component is deployed, meaning that each node in the cluster will be sequentially restarted (waiting for the last restart to start the next). If set to `true`, the restart will not be rolling, all nodes will be restarted in parallel. If `replicated` is `true`, the restart operations will be replicated across the cluster. -- replicated _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. -- install*command *(optional)\_ - A command to use when installing the component. Must be a string. This can be used to install dependencies with pnpm or yarn, for example, like: `"install_command": "npm install -g pnpm && pnpm install"` +- `operation` _(required)_ - must always be `deploy_component` +- `project` _(required)_ - the name of the project you wish to deploy +- `package` _(optional)_ - this can be any valid GitHub or NPM reference +- `payload` _(optional)_ - a base64-encoded string representation of the .tar file. Must be a string +- `restart` _(optional)_ - must be either a boolean or the string `rolling`. 
If set to `rolling`, a rolling restart will be triggered after the component is deployed, meaning that each node in the cluster will be sequentially restarted (waiting for the last restart to start the next). If set to `true`, the restart will not be rolling, all nodes will be restarted in parallel. If `replicated` is `true`, the restart operations will be replicated across the cluster. +- `replicated` _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. +- `install_command` _(optional)_ - A command to use when installing the component. Must be a string. This can be used to install dependencies with pnpm or yarn, for example, like: `"install_command": "npm install -g pnpm && pnpm install"` ### Body @@ -118,9 +118,9 @@ Creates a temporary `.tar` file of the specified project folder, then reads it i _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `package_component` -- project _(required)_ - the name of the project you wish to package -- skip*node_modules *(optional)\_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean +- `operation` _(required)_ - must always be `package_component` +- `project` _(required)_ - the name of the project you wish to package +- `skip_node_modules` _(optional)_ - if true, creates option for tar module that will exclude the project's node_modules directory. 
Must be a boolean ### Body @@ -151,11 +151,11 @@ Deletes a file from inside the component project or deletes the complete project _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_component` -- project _(required)_ - the name of the project you wish to delete or to delete from if using the `file` parameter -- file _(optional)_ - the path relative to your project folder of the file you wish to delete -- replicated _(optional)_ - if true, Harper will replicate the component deletion to all nodes in the cluster. Must be a boolean. -- restart _(optional)_ - if true, Harper will restart after dropping the component. Must be a boolean. +- `operation` _(required)_ - must always be `drop_component` +- `project` _(required)_ - the name of the project you wish to delete or to delete from if using the `file` parameter +- `file` _(optional)_ - the path relative to your project folder of the file you wish to delete +- `replicated` _(optional)_ - if true, Harper will replicate the component deletion to all nodes in the cluster. Must be a boolean. +- `restart` _(optional)_ - if true, Harper will restart after dropping the component. Must be a boolean. ### Body @@ -183,7 +183,7 @@ Gets all local component files and folders and any component config from `harper _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_components` +- `operation` _(required)_ - must always be `get_components` ### Body @@ -264,10 +264,10 @@ Gets the contents of a file inside a component project. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_component_file` -- project _(required)_ - the name of the project where the file is located -- file _(required)_ - the path relative to your project folder of the file you wish to view -- encoding _(optional)_ - the encoding that will be passed to the read file call. 
Defaults to `utf8` +- `operation` _(required)_ - must always be `get_component_file` +- `project` _(required)_ - the name of the project where the file is located +- `file` _(required)_ - the path relative to your project folder of the file you wish to view +- `encoding` _(optional)_ - the encoding that will be passed to the read file call. Defaults to `utf8` ### Body @@ -295,12 +295,12 @@ Creates or updates a file inside a component project. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_component_file` -- project _(required)_ - the name of the project the file is located in -- file _(required)_ - the path relative to your project folder of the file you wish to set -- payload _(required)_ - what will be written to the file -- encoding _(optional)_ - the encoding that will be passed to the write file call. Defaults to `utf8` -- replicated _(optional)_ - if true, Harper will replicate the component update to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `set_component_file` +- `project` _(required)_ - the name of the project the file is located in +- `file` _(required)_ - the path relative to your project folder of the file you wish to set +- `payload` _(required)_ - what will be written to the file +- `encoding` _(optional)_ - the encoding that will be passed to the write file call. Defaults to `utf8` +- `replicated` _(optional)_ - if true, Harper will replicate the component update to all nodes in the cluster. Must be a boolean. ### Body @@ -329,13 +329,13 @@ Adds an SSH key for deploying components from private repositories. This will al _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_ssh_key` -- name _(required)_ - the name of the key -- key _(required)_ - the private key contents. Must be an ed25519 key. 
Line breaks must be delimited with `\n` and have a trailing `\n` -- host _(required)_ - the host for the ssh config (see below). Used as part of the `package` url when deploying a component using this key. -- hostname _(required)_ - the hostname for the ssh config (see below). Used to map `host` to an actual domain (e.g. `github.com`) -- known*hosts *(optional)\_ - the public SSH keys of the host your component will be retrieved from. If `hostname` is `github.com` this will be retrieved automatically. Line breaks must be delimited with `\n` -- replicated _(optional)_ - if true, HarperDB will replicate the key to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `add_ssh_key` +- `name` _(required)_ - the name of the key +- `key` _(required)_ - the private key contents. Must be an ed25519 key. Line breaks must be delimited with `\n` and have a trailing `\n` +- `host` _(required)_ - the host for the ssh config (see below). Used as part of the `package` url when deploying a component using this key. +- `hostname` _(required)_ - the hostname for the ssh config (see below). Used to map `host` to an actual domain (e.g. `github.com`) +- `known_hosts` _(optional)_ - the public SSH keys of the host your component will be retrieved from. If `hostname` is `github.com` this will be retrieved automatically. Line breaks must be delimited with `\n` +- `replicated` _(optional)_ - if true, HarperDB will replicate the key to all nodes in the cluster. Must be a boolean. ### Body @@ -384,13 +384,13 @@ Updates the private key contents of an existing SSH key. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `update_ssh_key` -- name _(required)_ - the name of the key to be updated -- key _(required)_ - the private key contents. Must be an ed25519 key. Line breaks must be delimited with `\n` and have a trailing `\n` -- host _(required)_ - the host for the ssh config (see below). 
Used as part of the `package` url when deploying a component using this key. -- hostname _(required)_ - the hostname for the ssh config (see below). Used to map `host` to an actual domain (e.g. `github.com`) -- known*hosts *(optional)\_ - the public SSH keys of the host your component will be retrieved from. If `hostname` is `github.com` this will be retrieved automatically. Line breaks must be delimited with `\n` -- replicated _(optional)_ - if true, HarperDB will replicate the key to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `update_ssh_key` +- `name` _(required)_ - the name of the key to be updated +- `key` _(required)_ - the private key contents. Must be an ed25519 key. Line breaks must be delimited with `\n` and have a trailing `\n` +- `host` _(required)_ - the host for the ssh config (see below). Used as part of the `package` url when deploying a component using this key. +- `hostname` _(required)_ - the hostname for the ssh config (see below). Used to map `host` to an actual domain (e.g. `github.com`) +- `known_hosts` _(optional)_ - the public SSH keys of the host your component will be retrieved from. If `hostname` is `github.com` this will be retrieved automatically. Line breaks must be delimited with `\n` +- `replicated` _(optional)_ - if true, HarperDB will replicate the key to all nodes in the cluster. Must be a boolean. ### Body @@ -418,9 +418,9 @@ Deletes a SSH key. This will also remove it from the generated SSH config. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_ssh_key` -- name _(required)_ - the name of the key to be deleted -- replicated _(optional)_ - if true, Harper will replicate the key deletion to all nodes in the cluster. Must be a boolean. 
+- `operation` _(required)_ - must always be `delete_ssh_key`
+- `name` _(required)_ - the name of the key to be deleted
+- `replicated` _(optional)_ - if true, Harper will replicate the key deletion to all nodes in the cluster. Must be a boolean.

### Body

@@ -446,7 +446,7 @@ List off the names of added SSH keys

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `list_ssh_keys`
+- `operation` _(required)_ - must always be `list_ssh_keys`

### Body

@@ -462,11 +462,12 @@ _Operation is restricted to super_user roles only_

[
	{
		"name": "harperdb-private-component"
-	},
-	...
+	}
]
```

+_Note: Additional SSH keys would appear as more objects in this array_
+
---

## Set SSH Known Hosts

@@ -475,9 +476,9 @@ Sets the SSH known_hosts file. This will overwrite the file.

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `set_ssh_known_hosts`
-- known*hosts *(required)\_ - The contents to set the known_hosts to. Line breaks must be delimite d with
-- replicated _(optional)_ - if true, Harper will replicate the known hosts to all nodes in the cluster. Must be a boolean.
+- `operation` _(required)_ - must always be `set_ssh_known_hosts`
+- `known_hosts` _(required)_ - The contents to set the known_hosts to. Line breaks must be delimited with `\n`
+- `replicated` _(optional)_ - if true, Harper will replicate the known hosts to all nodes in the cluster. Must be a boolean.

### Body

@@ -502,7 +503,7 @@ Gets the contents of the known_hosts file

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `get_ssh_known_hosts`

### Body

@@ -529,9 +530,9 @@ Executes npm install against specified custom function projects.

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `install_node_modules`
-- projects _(required)_ - must ba an array of custom functions projects.
-dry*run*(optional)\_ - refers to the npm --dry-run flag: [https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run](https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run). Defaults to false.
+- `operation` _(required)_ - must always be `install_node_modules`
+- `projects` _(required)_ - must be an array of custom functions projects.
+- `dry_run` _(optional)_ - refers to the npm --dry-run flag: [https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run](https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run). Defaults to false.

### Body

diff --git a/docs/developers/operations-api/configuration.md b/docs/developers/operations-api/configuration.md
index 99599843..8f2365da 100644
--- a/docs/developers/operations-api/configuration.md
+++ b/docs/developers/operations-api/configuration.md
@@ -10,9 +10,9 @@ Modifies the Harper configuration file parameters. Must follow with a restart or

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `set_configuration`
-- logging*level *(example/optional)\_ - one or more configuration keywords to be updated in the Harper configuration file
-- clustering*enabled *(example/optional)\_ - one or more configuration keywords to be updated in the Harper configuration file
+- `operation` _(required)_ - must always be `set_configuration`
+- `logging_level` _(example/optional)_ - one or more configuration keywords to be updated in the Harper configuration file
+- `clustering_enabled` _(example/optional)_ - one or more configuration keywords to be updated in the Harper configuration file

### Body

@@ -40,7 +40,7 @@ Returns the Harper configuration parameters.
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_configuration` +- `operation` _(required)_ - must always be `get_configuration` ### Body diff --git a/docs/developers/operations-api/custom-functions.md b/docs/developers/operations-api/custom-functions.md index 0b2261e0..23709148 100644 --- a/docs/developers/operations-api/custom-functions.md +++ b/docs/developers/operations-api/custom-functions.md @@ -12,7 +12,7 @@ Returns the state of the Custom functions server. This includes whether it is en _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `custom_function_status` +- `operation` _(required)_ - must always be `custom_function_status` ### Body @@ -40,7 +40,7 @@ Returns an array of projects within the Custom Functions root project directory. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_custom_functions` +- `operation` _(required)_ - must always be `get_custom_functions` ### Body @@ -70,10 +70,10 @@ Returns the content of the specified file as text. 
Harper Studio uses this call

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `get_custom_function`
-- project _(required)_ - the name of the project containing the file for which you wish to get content
-- type _(required)_ - the name of the sub-folder containing the file for which you wish to get content - must be either routes or helpers
-- file _(required)_ - The name of the file for which you wish to get content - should not include the file extension (which is always .js)
+- `operation` _(required)_ - must always be `get_custom_function`
+- `project` _(required)_ - the name of the project containing the file for which you wish to get content
+- `type` _(required)_ - the name of the sub-folder containing the file for which you wish to get content - must be either routes or helpers
+- `file` _(required)_ - the name of the file for which you wish to get content - should not include the file extension (which is always .js)

### Body

@@ -102,11 +102,11 @@ Updates the content of the specified file. 
Harper Studio uses this call to save _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_custom_function` -- project _(required)_ - the name of the project containing the file for which you wish to set content -- type _(required)_ - the name of the sub-folder containing the file for which you wish to set content - must be either routes or helpers -- file _(required)_ - the name of the file for which you wish to set content - should not include the file extension (which is always .js) -- function*content *(required)\_ - the content you wish to save into the specified file +- `operation` _(required)_ - must always be `set_custom_function` +- `project` _(required)_ - the name of the project containing the file for which you wish to set content +- `type` _(required)_ - the name of the sub-folder containing the file for which you wish to set content - must be either routes or helpers +- `file` _(required)_ - the name of the file for which you wish to set content - should not include the file extension (which is always .js) +- `function_content` _(required)_ - the content you wish to save into the specified file ### Body @@ -136,10 +136,10 @@ Deletes the specified file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_custom_function` -- project _(required)_ - the name of the project containing the file you wish to delete -- type _(required)_ - the name of the sub-folder containing the file you wish to delete. Must be either routes or helpers -- file _(required)_ - the name of the file you wish to delete. Should not include the file extension (which is always .js) +- `operation` _(required)_ - must always be `drop_custom_function` +- `project` _(required)_ - the name of the project containing the file you wish to delete +- `type` _(required)_ - the name of the sub-folder containing the file you wish to delete. 
Must be either routes or helpers +- `file` _(required)_ - the name of the file you wish to delete. Should not include the file extension (which is always .js) ### Body @@ -168,8 +168,8 @@ Creates a new project folder in the Custom Functions root project directory. It _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_custom_function_project` -- project _(required)_ - the name of the project you wish to create +- `operation` _(required)_ - must always be `add_custom_function_project` +- `project` _(required)_ - the name of the project you wish to create ### Body @@ -196,8 +196,8 @@ Deletes the specified project folder and all of its contents. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_custom_function_project` -- project _(required)_ - the name of the project you wish to delete +- `operation` _(required)_ - must always be `drop_custom_function_project` +- `project` _(required)_ - the name of the project you wish to delete ### Body @@ -224,9 +224,9 @@ Creates a .tar file of the specified project folder, then reads it into a base64 _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `package_custom_function_project` -- project _(required)_ - the name of the project you wish to package up for deployment -- skip*node_modules *(optional)\_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean. +- `operation` _(required)_ - must always be `package_custom_function_project` +- `project` _(required)_ - the name of the project you wish to package up for deployment +- `skip_node_modules` _(optional)_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean. 
### Body @@ -256,9 +256,9 @@ Takes the output of package_custom_function_project, decrypts the base64-encoded _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `deploy_custom_function_project` -- project _(required)_ - the name of the project you wish to deploy. Must be a string -- payload _(required)_ - a base64-encoded string representation of the .tar file. Must be a string +- `operation` _(required)_ - must always be `deploy_custom_function_project` +- `project` _(required)_ - the name of the project you wish to deploy. Must be a string +- `payload` _(required)_ - a base64-encoded string representation of the .tar file. Must be a string ### Body diff --git a/docs/developers/operations-api/jobs.md b/docs/developers/operations-api/jobs.md index 173125a1..cf71fa00 100644 --- a/docs/developers/operations-api/jobs.md +++ b/docs/developers/operations-api/jobs.md @@ -8,8 +8,8 @@ title: Jobs Returns job status, metrics, and messages for the specified job ID. 
-- operation _(required)_ - must always be `get_job` -- id _(required)_ - the id of the job you wish to view +- `operation` _(required)_ - must always be `get_job` +- `id` _(required)_ - the id of the job you wish to view ### Body @@ -50,9 +50,9 @@ Returns a list of job statuses, metrics, and messages for all jobs executed with _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `search_jobs_by_start_date` -- from*date *(required)\_ - the date you wish to start the search -- to*date *(required)\_ - the date you wish to end the search +- `operation` _(required)_ - must always be `search_jobs_by_start_date` +- `from_date` _(required)_ - the date you wish to start the search +- `to_date` _(required)_ - the date you wish to end the search ### Body diff --git a/docs/developers/operations-api/logs.md b/docs/developers/operations-api/logs.md index 8fb1bb8e..4bf6b518 100644 --- a/docs/developers/operations-api/logs.md +++ b/docs/developers/operations-api/logs.md @@ -10,13 +10,13 @@ Returns log outputs from the primary Harper log based on the provided search cri _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_Log` -- start _(optional)_ - result to start with. Default is 0, the first log in `hdb.log`. Must be a number -- limit _(optional)_ - number of results returned. Default behavior is 1000. Must be a number -- level _(optional)_ - error level to filter on. Default behavior is all levels. Must be `notify`, `error`, `warn`, `info`, `debug` or `trace` -- from _(optional)_ - date to begin showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is first log in `hdb.log` -- until _(optional)_ - date to end showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is last log in `hdb.log` -- order _(optional)_ - order to display logs desc or asc by timestamp. 
By default, will maintain `hdb.log` order
+- `operation` _(required)_ - must always be `read_log`
+- `start` _(optional)_ - result to start with. Default is 0, the first log in `hdb.log`. Must be a number
+- `limit` _(optional)_ - number of results returned. Default behavior is 1000. Must be a number
+- `level` _(optional)_ - error level to filter on. Default behavior is all levels. Must be `notify`, `error`, `warn`, `info`, `debug` or `trace`
+- `from` _(optional)_ - date to begin showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is first log in `hdb.log`
+- `until` _(optional)_ - date to end showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is last log in `hdb.log`
+- `order` _(optional)_ - order to display logs desc or asc by timestamp. By default, will maintain `hdb.log` order

### Body

@@ -68,12 +68,12 @@ Returns all transactions logged for the specified database table. You may filter

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `read_transaction_log`
-- schema _(required)_ - schema under which the transaction log resides
-- table _(required)_ - table under which the transaction log resides
-- from _(optional)_ - time format must be millisecond-based epoch in UTC
-- to _(optional)_ - time format must be millisecond-based epoch in UTC
-- limit _(optional)_ - max number of logs you want to receive. Must be a number
+- `operation` _(required)_ - must always be `read_transaction_log`
+- `schema` _(required)_ - schema under which the transaction log resides
+- `table` _(required)_ - table under which the transaction log resides
+- `from` _(optional)_ - time format must be millisecond-based epoch in UTC
+- `to` _(optional)_ - time format must be millisecond-based epoch in UTC
+- `limit` _(optional)_ - max number of logs you want to receive. 
Must be a number ### Body @@ -271,10 +271,10 @@ Deletes transaction log data for the specified database table that is older than _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_transaction_log_before` -- schema _(required)_ - schema under which the transaction log resides. Must be a string -- table _(required)_ - table under which the transaction log resides. Must be a string -- timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC +- `operation` _(required)_ - must always be `delete_transaction_log_before` +- `schema` _(required)_ - schema under which the transaction log resides. Must be a string +- `table` _(required)_ - table under which the transaction log resides. Must be a string +- `timestamp` _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC ### Body @@ -303,11 +303,11 @@ AuditLog must be enabled in the Harper configuration file to make this request. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - possibilities are `hash_value`, `timestamp` and `username` -- search*values *(optional)\_ - an array of string or numbers relating to search_type +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - possibilities are `hash_value`, `timestamp` and `username` +- `search_values` _(optional)_ - an array of string or numbers relating to search_type ### Body @@ -398,11 +398,11 @@ AuditLog must be enabled in the Harper configuration file to make this request. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - timestamp -- search*values *(optional)\_ - an array containing a maximum of two values \[`from_timestamp`, `to_timestamp`] defining the range of transactions you would like to view. +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - timestamp +- `search_values` _(optional)_ - an array containing a maximum of two values \[`from_timestamp`, `to_timestamp`] defining the range of transactions you would like to view. - Timestamp format is millisecond-based epoch in UTC - If no items are supplied then all transactions are returned - If only one entry is supplied then all transactions after the supplied timestamp will be returned @@ -519,11 +519,11 @@ AuditLog must be enabled in the Harper configuration file to make this request. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - username -- search*values *(optional)\_ - the Harper user for whom you would like to view transactions +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - username +- `search_values` _(optional)_ - the Harper user for whom you would like to view transactions ### Body @@ -639,11 +639,11 @@ AuditLog must be enabled in the Harper configuration file to make this request. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - hash_value -- search*values *(optional)\_ - an array of hash_attributes for which you wish to see transaction logs +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - hash_value +- `search_values` _(optional)_ - an array of hash_attributes for which you wish to see transaction logs ### Body @@ -707,10 +707,10 @@ AuditLog must be enabled in the Harper configuration file to make this request. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_audit_logs_before` -- schema _(required)_ - schema under which the transaction log resides. Must be a string -- table _(required)_ - table under which the transaction log resides. 
Must be a string -- timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC +- `operation` _(required)_ - must always be `delete_audit_logs_before` +- `schema` _(required)_ - schema under which the transaction log resides. Must be a string +- `table` _(required)_ - table under which the transaction log resides. Must be a string +- `timestamp` _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC ### Body diff --git a/docs/developers/operations-api/registration.md b/docs/developers/operations-api/registration.md index 0b941400..a925e88b 100644 --- a/docs/developers/operations-api/registration.md +++ b/docs/developers/operations-api/registration.md @@ -8,7 +8,7 @@ title: Registration Returns the registration data of the Harper instance. -- operation _(required)_ - must always be `registration_info` +- `operation` _(required)_ - must always be `registration_info` ### Body @@ -196,7 +196,7 @@ Returns the Harper fingerprint, uniquely generated based on the machine, for lic _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_fingerprint` +- `operation` _(required)_ - must always be `get_fingerprint` ### Body @@ -215,9 +215,9 @@ Sets the Harper license as generated by Harper License Management software. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_license` -- key _(required)_ - your license key -- company _(required)_ - the company that was used in the license +- `operation` _(required)_ - must always be `set_license` +- `key` _(required)_ - your license key +- `company` _(required)_ - the company that was used in the license ### Body diff --git a/docs/developers/operations-api/sql-operations.md b/docs/developers/operations-api/sql-operations.md index 71dfa436..4b7076bb 100644 --- a/docs/developers/operations-api/sql-operations.md +++ b/docs/developers/operations-api/sql-operations.md @@ -12,8 +12,8 @@ Harper encourages developers to utilize other querying tools over SQL for perfor Executes the provided SQL statement. The SELECT statement is used to query data from the database. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -48,8 +48,8 @@ Executes the provided SQL statement. The SELECT statement is used to query data Executes the provided SQL statement. The INSERT statement is used to add one or more rows to a database table. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -76,8 +76,8 @@ Executes the provided SQL statement. The INSERT statement is used to add one or Executes the provided SQL statement. The UPDATE statement is used to change the values of specified attributes in one or more rows in a database table. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -104,8 +104,8 @@ Executes the provided SQL statement. 
The UPDATE statement is used to change the Executes the provided SQL statement. The DELETE statement is used to remove one or more rows of data from a database table. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body diff --git a/docs/developers/operations-api/system-operations.md b/docs/developers/operations-api/system-operations.md index da47e104..d39e93cb 100644 --- a/docs/developers/operations-api/system-operations.md +++ b/docs/developers/operations-api/system-operations.md @@ -10,7 +10,7 @@ Restarts the Harper instance. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `restart` +- `operation` _(required)_ - must always be `restart` ### Body @@ -36,9 +36,9 @@ Restarts servers for the specified Harper service. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `restart_service` -- service _(required)_ - must be one of: `http_workers`, `clustering_config` or `clustering` -- replicated _(optional)_ - must be a boolean. If set to `true`, Harper will replicate the restart service operation across all nodes in the cluster. The restart will occur as a rolling restart, ensuring that each node is fully restarted before the next node begins restarting. +- `operation` _(required)_ - must always be `restart_service` +- `service` _(required)_ - must be one of: `http_workers`, `clustering_config` or `clustering` +- `replicated` _(optional)_ - must be a boolean. If set to `true`, Harper will replicate the restart service operation across all nodes in the cluster. The restart will occur as a rolling restart, ensuring that each node is fully restarted before the next node begins restarting. ### Body @@ -65,8 +65,8 @@ Returns detailed metrics on the host system. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `system_information` -- attributes _(optional)_ - string array of top level attributes desired in the response, if no value is supplied all attributes will be returned. Available attributes are: ['system', 'time', 'cpu', 'memory', 'disk', 'network', 'harperdb_processes', 'table_size', 'metrics', 'threads', 'replication'] +- `operation` _(required)_ - must always be `system_information` +- `attributes` _(optional)_ - string array of top level attributes desired in the response, if no value is supplied all attributes will be returned. Available attributes are: ['system', 'time', 'cpu', 'memory', 'disk', 'network', 'harperdb_processes', 'table_size', 'metrics', 'threads', 'replication'] ### Body @@ -84,9 +84,9 @@ Sets a status value that can be used for application-specific status tracking. S _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_status` -- id _(required)_ - the key identifier for the status -- status _(required)_ - the status value to set (string between 1-512 characters) +- `operation` _(required)_ - must always be `set_status` +- `id` _(required)_ - the key identifier for the status +- `status` _(required)_ - the status value to set (string between 1-512 characters) ### Body @@ -124,8 +124,8 @@ Retrieves a status value previously set with the set_status operation. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_status` -- id _(optional)_ - the key identifier for the status to retrieve (defaults to all statuses if not provided) +- `operation` _(required)_ - must always be `get_status` +- `id` _(optional)_ - the key identifier for the status to retrieve (defaults to all statuses if not provided) ### Body @@ -174,8 +174,8 @@ Removes a status entry by its ID. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `clear_status` -- id _(required)_ - the key identifier for the status to remove +- `operation` _(required)_ - must always be `clear_status` +- `id` _(required)_ - the key identifier for the status to remove ### Body diff --git a/docs/developers/operations-api/token-authentication.md b/docs/developers/operations-api/token-authentication.md index b9ff5b31..178db842 100644 --- a/docs/developers/operations-api/token-authentication.md +++ b/docs/developers/operations-api/token-authentication.md @@ -10,9 +10,9 @@ Creates the tokens needed for authentication: operation & refresh token. _Note - this operation does not require authorization to be set_ -- operation _(required)_ - must always be `create_authentication_tokens` -- username _(required)_ - username of user to generate tokens for -- password _(required)_ - password of user to generate tokens for +- `operation` _(required)_ - must always be `create_authentication_tokens` +- `username` _(required)_ - username of user to generate tokens for +- `password` _(required)_ - password of user to generate tokens for ### Body @@ -39,8 +39,8 @@ _Note - this operation does not require authorization to be set_ This operation creates a new operation token. -- operation _(required)_ - must always be `refresh_operation_token` -- refresh*token *(required)\_ - the refresh token that was provided when tokens were created +- `operation` _(required)_ - must always be `refresh_operation_token` +- `refresh_token` _(required)_ - the refresh token that was provided when tokens were created ### Body diff --git a/docs/developers/operations-api/users-and-roles.md b/docs/developers/operations-api/users-and-roles.md index ecaa1117..91f222b9 100644 --- a/docs/developers/operations-api/users-and-roles.md +++ b/docs/developers/operations-api/users-and-roles.md @@ -10,7 +10,7 @@ Returns a list of all roles. 
[Learn more about Harper roles here.](../security/u _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_roles` +- `operation` _(required)_ - must always be `list_roles` ### Body @@ -80,11 +80,11 @@ Creates a new role with the specified permissions. [Learn more about Harper role _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_role` -- role _(required)_ - name of role you are defining -- permission _(required)_ - object defining permissions for users associated with this role: - - super*user *(optional)\_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. - - structure_user (optional) - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. +- `operation` _(required)_ - must always be `add_role` +- `role` _(required)_ - name of role you are defining +- `permission` _(required)_ - object defining permissions for users associated with this role: + - `super_user` _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. + - `structure_user` _(optional)_ - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. ### Body @@ -158,12 +158,12 @@ Modifies an existing role with the specified permissions. 
updates permissions fr _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `alter_role` -- id _(required)_ - the id value for the role you are altering -- role _(optional)_ - name value to update on the role you are altering -- permission _(required)_ - object defining permissions for users associated with this role: - - super*user *(optional)\_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. - - structure_user (optional) - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. +- `operation` _(required)_ - must always be `alter_role` +- `id` _(required)_ - the id value for the role you are altering +- `role` _(optional)_ - name value to update on the role you are altering +- `permission` _(required)_ - object defining permissions for users associated with this role: + - `super_user` _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. + - `structure_user` _(optional)_ - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. ### Body @@ -237,8 +237,8 @@ Deletes an existing role from the database. 
NOTE: Role with associated users can _Operation is restricted to super_user roles only_ -- operation _(required)_ - this must always be `drop_role` -- id _(required)_ - this is the id of the role you are dropping +- `operation` _(required)_ - this must always be `drop_role` +- `id` _(required)_ - this is the id of the role you are dropping ### Body @@ -265,7 +265,7 @@ Returns a list of all users. [Learn more about Harper roles here.](../security/u _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_users` +- `operation` _(required)_ - must always be `list_users` ### Body @@ -377,7 +377,7 @@ _Operation is restricted to super_user roles only_ Returns user data for the associated user credentials. -- operation _(required)_ - must always be `user_info` +- `operation` _(required)_ - must always be `user_info` ### Body @@ -415,11 +415,11 @@ Creates a new user with the specified role and credentials. [Learn more about Ha _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_user` -- role _(required)_ - 'role' name value of the role you wish to assign to the user. See `add_role` for more detail -- username _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash -- password _(required)_ - clear text for password. Harper will encrypt the password upon receipt -- active _(required)_ - boolean value for status of user's access to your Harper instance. If set to false, user will not be able to access your instance of Harper. +- `operation` _(required)_ - must always be `add_user` +- `role` _(required)_ - 'role' name value of the role you wish to assign to the user. See `add_role` for more detail +- `username` _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash +- `password` _(required)_ - clear text for password. 
Harper will encrypt the password upon receipt +- `active` _(required)_ - boolean value for status of user's access to your Harper instance. If set to false, user will not be able to access your instance of Harper. ### Body @@ -449,11 +449,11 @@ Modifies an existing user's role and/or credentials. [Learn more about Harper ro _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `alter_user` -- username _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash. -- password _(optional)_ - clear text for password. Harper will encrypt the password upon receipt -- role _(optional)_ - `role` name value of the role you wish to assign to the user. See `add_role` for more detail -- active _(optional)_ - status of user's access to your Harper instance. See `add_role` for more detail +- `operation` _(required)_ - must always be `alter_user` +- `username` _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash. +- `password` _(optional)_ - clear text for password. Harper will encrypt the password upon receipt +- `role` _(optional)_ - `role` name value of the role you wish to assign to the user. See `add_role` for more detail +- `active` _(optional)_ - status of user's access to your Harper instance. See `add_role` for more detail ### Body @@ -487,8 +487,8 @@ Deletes an existing user by username. 
[Learn more about Harper roles here.](../s

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `drop_user`
-- username _(required)_ - username assigned to the user
+- `operation` _(required)_ - must always be `drop_user`
+- `username` _(required)_ - username assigned to the user

### Body

diff --git a/docs/developers/replication/sharding.md b/docs/developers/replication/sharding.md
index 74242292..d12f76b4 100644
--- a/docs/developers/replication/sharding.md
+++ b/docs/developers/replication/sharding.md
@@ -61,10 +61,12 @@ Likewise, you can specify replicateTo and confirm parameters in the operation ob

or you can specify nodes:

-```json
-...,
+```jsonc
+{
+  // ...
 "replicateTo": ["node-1", "node-2"]
-...
+  // ...
+}
 ```

## Programmatic Replication Control

diff --git a/docs/developers/security/basic-auth.md b/docs/developers/security/basic-auth.md
index 96d5d28a..22361432 100644
--- a/docs/developers/security/basic-auth.md
+++ b/docs/developers/security/basic-auth.md
@@ -6,9 +6,9 @@ title: Basic Authentication

Harper uses Basic Auth and JSON Web Tokens (JWTs) to secure our HTTP requests. In the context of an HTTP transaction, **basic access authentication** is a method for an HTTP user agent to provide a username and password when making a request.

-** \_**You do not need to log in separately. Basic Auth is added to each HTTP request like create*database, create_table, insert etc… via headers.\*\** \*\*
+**You do not need to log in separately. Basic Auth is added to each HTTP request like create_database, create_table, insert etc… via headers.**

-A header is added to each HTTP request. The header key is **“Authorization”** the header value is **“Basic <<your username and password buffer token>>”**
+A header is added to each HTTP request. The header key is `Authorization` and the header value is `Basic <your username and password buffer token>`.
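The Basic token mentioned above is just the base64 encoding of `username:password`. As a minimal sketch (Node.js; the credentials below are hypothetical placeholders, not Harper defaults), the header can be built like this:

```javascript
// Build a Basic Auth header value. The username and password here are
// hypothetical placeholders for your instance's credentials.
const username = 'HDB_ADMIN';
const password = 'password';

// Basic Auth is "Basic " followed by base64("username:password").
const token = Buffer.from(`${username}:${password}`).toString('base64');
const headers = {
  'Content-Type': 'application/json',
  Authorization: `Basic ${token}`,
};
```

Any HTTP client can then send these headers with every request; no separate login call is needed.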
## Authentication in Harper Studio diff --git a/docs/developers/security/users-and-roles.md b/docs/developers/security/users-and-roles.md index 7b373f59..ea89d139 100644 --- a/docs/developers/security/users-and-roles.md +++ b/docs/developers/security/users-and-roles.md @@ -98,7 +98,7 @@ There are two parts to a permissions set: Each table that a role should be given some level of CRUD permissions to must be included in the `tables` array for its database in the roles permissions JSON passed to the API (_see example above_). -```json +```jsonc { "table_name": { // the name of the table to define CRUD perms for "read": boolean, // access to read from this table @@ -113,6 +113,7 @@ Each table that a role should be given some level of CRUD permissions to must be "update": boolean // access to update this attribute in the table } ] + } } ``` diff --git a/docs/developers/sql-guide/date-functions.md b/docs/developers/sql-guide/date-functions.md index d44917c3..c9747dcd 100644 --- a/docs/developers/sql-guide/date-functions.md +++ b/docs/developers/sql-guide/date-functions.md @@ -156,17 +156,17 @@ Subtracts the defined amount of time from the date provided in UTC and returns t ### EXTRACT(date, date_part) -Extracts and returns the date_part requested as a String value. Accepted date_part values below show value returned for date = “2020-03-26T15:13:02.041+000” +Extracts and returns the date_part requested as a String value. 
Accepted date_part values below show value returned for date = "2020-03-26T15:13:02.041+000" | date_part | Example return value\* | | ----------- | ---------------------- | -| year | “2020” | -| month | “3” | -| day | “26” | -| hour | “15” | -| minute | “13” | -| second | “2” | -| millisecond | “41” | +| year | "2020" | +| month | "3" | +| day | "26" | +| hour | "15" | +| minute | "13" | +| second | "2" | +| millisecond | "41" | ``` "SELECT EXTRACT(1587568845765, 'year') AS extract_result" returns diff --git a/docs/developers/sql-guide/functions.md b/docs/developers/sql-guide/functions.md index 0847a657..5f2e22ba 100644 --- a/docs/developers/sql-guide/functions.md +++ b/docs/developers/sql-guide/functions.md @@ -16,99 +16,85 @@ This SQL keywords reference contains the SQL functions available in Harper. | Keyword | Syntax | Description | | ---------------- | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | -| AVG | AVG(_expression_) | Returns the average of a given numeric expression. | -| COUNT | SELECT COUNT(_column_name_) FROM _database.table_ WHERE _condition_ | Returns the number records that match the given criteria. Nulls are not counted. | -| GROUP_CONCAT | GROUP*CONCAT(\_expression*) | Returns a string with concatenated values that are comma separated and that are non-null from a group. Will return null when there are non-null values. | -| MAX | SELECT MAX(_column_name_) FROM _database.table_ WHERE _condition_ | Returns largest value in a specified column. | -| MIN | SELECT MIN(_column_name_) FROM _database.table_ WHERE _condition_ | Returns smallest value in a specified column. | -| SUM | SUM(_column_name_) | Returns the sum of the numeric values provided. | -| ARRAY\* | ARRAY(_expression_) | Returns a list of data as a field. 
|
-| DISTINCT_ARRAY\* | DISTINCT*ARRAY(\_expression*) | When placed around a standard ARRAY() function, returns a distinct (deduplicated) results set. |
+| `AVG` | `AVG(expression)` | Returns the average of a given numeric expression. |
+| `COUNT` | `SELECT COUNT(column_name) FROM database.table WHERE condition` | Returns the number of records that match the given criteria. Nulls are not counted. |
+| `GROUP_CONCAT` | `GROUP_CONCAT(expression)` | Returns a string with concatenated values that are comma separated and that are non-null from a group. Will return null when there are no non-null values. |
+| `MAX` | `SELECT MAX(column_name) FROM database.table WHERE condition` | Returns largest value in a specified column. |
+| `MIN` | `SELECT MIN(column_name) FROM database.table WHERE condition` | Returns smallest value in a specified column. |
+| `SUM` | `SUM(column_name)` | Returns the sum of the numeric values provided. |
+| `ARRAY`* | `ARRAY(expression)` | Returns a list of data as a field. |
+| `DISTINCT_ARRAY`* | `DISTINCT_ARRAY(expression)` | When placed around a standard `ARRAY()` function, returns a distinct (deduplicated) results set. |

-\*For more information on ARRAY() and DISTINCT_ARRAY() see [this blog](https://www.harperdb.io/post/sql-queries-to-complex-objects).
+*For more information on `ARRAY()` and `DISTINCT_ARRAY()` see [this blog](https://www.harperdb.io/post/sql-queries-to-complex-objects).

### Conversion

| Keyword | Syntax | Description |
| ------- | ----------------------------------------------- | ---------------------------------------------------------------------- |
-| CAST | CAST(_expression AS datatype(length)_) | Converts a value to a specified datatype. |
-| CONVERT | CONVERT(_data_type(length), expression, style_) | Converts a value from one datatype to a different, specified datatype. |
+| `CAST` | `CAST(expression AS datatype(length))` | Converts a value to a specified datatype.
| +| `CONVERT` | `CONVERT(data_type(length), expression, style)` | Converts a value from one datatype to a different, specified datatype. | ### Date & Time -| Keyword | Syntax | Description | -| ----------------- | ----------------- | --------------------------------------------------------------------------------------------------------------------- | -| CURRENT_DATE | CURRENT_DATE() | Returns the current date in UTC in “YYYY-MM-DD” String format. | -| CURRENT_TIME | CURRENT_TIME() | Returns the current time in UTC in “HH:mm:ss.SSS” string format. | -| CURRENT_TIMESTAMP | CURRENT_TIMESTAMP | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. For more information, go here. | - -| -| DATE | DATE([_date_string_]) | Formats and returns the date*string argument in UTC in ‘YYYY-MM-DDTHH:mm:ss.SSSZZ’ string format. If a date_string is not provided, the function will return the current UTC date/time value in the return format defined above. For more information, go here. | -| -| DATE_ADD | DATE_ADD(\_date, value, interval*) | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | -| -| DATE*DIFF | DATEDIFF(\_date_1, date_2[, interval]*) | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds. For more information, go here. | -| -| DATE*FORMAT | DATE_FORMAT(\_date, format*) | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, go here. | -| -| DATE*SUB | DATE_SUB(\_date, format*) | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. 
Accepted date*sub interval values- Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | -| -| DAY | DAY(\_date*) | Return the day of the month for the given date. | -| -| DAYOFWEEK | DAYOFWEEK(_date_) | Returns the numeric value of the weekday of the date given(“YYYY-MM-DD”).NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. | -| EXTRACT | EXTRACT(_date, date_part_) | Extracts and returns the date*part requested as a String value. Accepted date_part values below show value returned for date = “2020-03-26T15:13:02.041+000” For more information, go here. | -| -| GETDATE | GETDATE() | Returns the current Unix Timestamp in milliseconds. | -| GET_SERVER_TIME | GET_SERVER_TIME() | Returns the current date/time value based on the server’s timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. | -| OFFSET_UTC | OFFSET_UTC(\_date, offset*) | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. | -| NOW | NOW() | Returns the current Unix Timestamp in milliseconds. | -| -| HOUR | HOUR(_datetime_) | Returns the hour part of a given date in range of 0 to 838. | -| -| MINUTE | MINUTE(_datetime_) | Returns the minute part of a time/datetime in range of 0 to 59. | -| -| MONTH | MONTH(_date_) | Returns month part for a specified date in range of 1 to 12. | -| -| SECOND | SECOND(_datetime_) | Returns the seconds part of a time/datetime in range of 0 to 59. | -| YEAR | YEAR(_date_) | Returns the year part for a specified date. 
| -| +| Keyword | Syntax | Description | +| ----------------- | -------------------------- | --------------------------------------------------------------------------------------------------------------------- | +| `CURRENT_DATE` | `CURRENT_DATE()` | Returns the current date in UTC in "YYYY-MM-DD" String format. | +| `CURRENT_TIME` | `CURRENT_TIME()` | Returns the current time in UTC in "HH:mm:ss.SSS" string format. | +| `CURRENT_TIMESTAMP` | `CURRENT_TIMESTAMP` | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. For more information, go here. | +| `DATE` | `DATE([date_string])` | Formats and returns the date string argument in UTC in 'YYYY-MM-DDTHH:mm:ss.SSSZZ' string format. If a date string is not provided, the function will return the current UTC date/time value in the return format defined above. For more information, go here. | +| `DATE_ADD` | `DATE_ADD(date, value, interval)` | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | +| `DATE_DIFF` | `DATE_DIFF(date_1, date_2[, interval])` | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds. For more information, go here. | +| `DATE_FORMAT` | `DATE_FORMAT(date, format)` | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, go here. | +| `DATE_SUB` | `DATE_SUB(date, format)` | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted date_sub interval values- Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. 
|
+| `DAY` | `DAY(date)` | Returns the day of the month for the given date. |
+| `DAYOFWEEK` | `DAYOFWEEK(date)` | Returns the numeric value of the weekday of the date given ("YYYY-MM-DD"). NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. |
+| `EXTRACT` | `EXTRACT(date, date_part)` | Extracts and returns the date_part requested as a String value. Accepted date_part values below show value returned for date = "2020-03-26T15:13:02.041+000" For more information, go here. |
+| `GETDATE` | `GETDATE()` | Returns the current Unix Timestamp in milliseconds. |
+| `GET_SERVER_TIME` | `GET_SERVER_TIME()` | Returns the current date/time value based on the server's timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. |
+| `OFFSET_UTC` | `OFFSET_UTC(date, offset)` | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. |
+| `NOW` | `NOW()` | Returns the current Unix Timestamp in milliseconds. |
+| `HOUR` | `HOUR(datetime)` | Returns the hour part of a given date in range of 0 to 838. |
+| `MINUTE` | `MINUTE(datetime)` | Returns the minute part of a time/datetime in range of 0 to 59. |
+| `MONTH` | `MONTH(date)` | Returns month part for a specified date in range of 1 to 12. |
+| `SECOND` | `SECOND(datetime)` | Returns the seconds part of a time/datetime in range of 0 to 59. |
+| `YEAR` | `YEAR(date)` | Returns the year part for a specified date. |

### Logical

| Keyword | Syntax | Description |
| ------- | ----------------------------------------------- | ------------------------------------------------------------------------------------------ |
-| IF | IF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false.
| -| IIF | IIF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. | -| IFNULL | IFNULL(_expression, alt_value_) | Returns a specified value if the expression is null. | -| NULLIF | NULLIF(_expression_1, expression_2_) | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. | +| `IF` | `IF(condition, value_if_true, value_if_false)` | Returns a value if the condition is true, or another value if the condition is false. | +| `IIF` | `IIF(condition, value_if_true, value_if_false)` | Returns a value if the condition is true, or another value if the condition is false. | +| `IFNULL` | `IFNULL(expression, alt_value)` | Returns a specified value if the expression is null. | +| `NULLIF` | `NULLIF(expression_1, expression_2)` | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. | ### Mathematical | Keyword | Syntax | Description | | ------- | ------------------------------ | --------------------------------------------------------------------------------------------------- | -| ABS | ABS(_expression_) | Returns the absolute value of a given numeric expression. | -| CEIL | CEIL(_number_) | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. | -| EXP | EXP(_number_) | Returns e to the power of a specified number. | -| FLOOR | FLOOR(_number_) | Returns the largest integer value that is smaller than, or equal to, a given number. | -| RANDOM | RANDOM(_seed_) | Returns a pseudo random number. | -| ROUND | ROUND(_number,decimal_places_) | Rounds a given number to a specified number of decimal places. | -| SQRT | SQRT(_expression_) | Returns the square root of an expression. | +| `ABS` | `ABS(expression)` | Returns the absolute value of a given numeric expression. 
| +| `CEIL` | `CEIL(number)` | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. | +| `EXP` | `EXP(number)` | Returns e to the power of a specified number. | +| `FLOOR` | `FLOOR(number)` | Returns the largest integer value that is smaller than, or equal to, a given number. | +| `RANDOM` | `RANDOM(seed)` | Returns a pseudo random number. | +| `ROUND` | `ROUND(number, decimal_places)` | Rounds a given number to a specified number of decimal places. | +| `SQRT` | `SQRT(expression)` | Returns the square root of an expression. | ### String | Keyword | Syntax | Description | | ----------- | ------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| CONCAT | CONCAT(_string_1, string_2, ...., string_n_) | Concatenates, or joins, two or more strings together, resulting in a single string. | -| CONCAT_WS | CONCAT*WS(\_separator, string_1, string_2, ...., string_n*) | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. | -| INSTR | INSTR(_string_1, string_2_) | Returns the first position, as an integer, of string_2 within string_1. | -| LEN | LEN(_string_) | Returns the length of a string. | -| LOWER | LOWER(_string_) | Converts a string to lower-case. | -| REGEXP | SELECT _column_name_ FROM _database.table_ WHERE _column_name_ REGEXP _pattern_ | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | -| REGEXP_LIKE | SELECT _column_name_ FROM _database.table_ WHERE REGEXP*LIKE(\_column_name, pattern*) | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. 
If no matches are found, it returns null. |
-| REPLACE | REPLACE(_string, old_string, new_string_) | Replaces all instances of old_string within new_string, with string. |
-| SUBSTRING | SUBSTRING(_string, string_position, length_of_substring_) | Extracts a specified amount of characters from a string. |
-| TRIM | TRIM([_character(s) FROM_] _string_) | Removes leading and trailing spaces, or specified character(s), from a string. |
-| UPPER | UPPER(_string_) | Converts a string to upper-case. |
+| `CONCAT` | `CONCAT(string_1, string_2, ...., string_n)` | Concatenates, or joins, two or more strings together, resulting in a single string. |
+| `CONCAT_WS` | `CONCAT_WS(separator, string_1, string_2, ...., string_n)` | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. |
+| `INSTR` | `INSTR(string_1, string_2)` | Returns the first position, as an integer, of string_2 within string_1. |
+| `LEN` | `LEN(string)` | Returns the length of a string. |
+| `LOWER` | `LOWER(string)` | Converts a string to lower-case. |
+| `REGEXP` | `SELECT column_name FROM database.table WHERE column_name REGEXP pattern` | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. |
+| `REGEXP_LIKE` | `SELECT column_name FROM database.table WHERE REGEXP_LIKE(column_name, pattern)` | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. |
+| `REPLACE` | `REPLACE(string, old_string, new_string)` | Replaces all instances of old_string within string with new_string. |
+| `SUBSTRING` | `SUBSTRING(string, string_position, length_of_substring)` | Extracts a specified number of characters from a string. |
+| `TRIM` | `TRIM([character(s) FROM] string)` | Removes leading and trailing spaces, or specified character(s), from a string.
|
+| `UPPER` | `UPPER(string)` | Converts a string to upper-case. |

## Operators

@@ -116,9 +102,9 @@ This SQL keywords reference contains the SQL functions available in Harper.

| Keyword | Syntax | Description |
| ------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------- |
-| BETWEEN | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ BETWEEN _value_1_ AND _value_2_ | (inclusive) Returns values(numbers, text, or dates) within a given range. |
-| IN | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IN(_value(s)_) | Used to specify multiple values in a WHERE clause. |
-| LIKE | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_n_ LIKE _pattern_ | Searches for a specified pattern within a WHERE clause. |
+| `BETWEEN` | `SELECT column_name(s) FROM database.table WHERE column_name BETWEEN value_1 AND value_2` | (inclusive) Returns values (numbers, text, or dates) within a given range. |
+| `IN` | `SELECT column_name(s) FROM database.table WHERE column_name IN(value(s))` | Used to specify multiple values in a WHERE clause. |
+| `LIKE` | `SELECT column_name(s) FROM database.table WHERE column_name LIKE pattern` | Searches for a specified pattern within a WHERE clause. |

## Queries

@@ -126,34 +112,34 @@ This SQL keywords reference contains the SQL functions available in Harper.

| Keyword | Syntax | Description |
| -------- | -------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- |
-| DISTINCT | SELECT DISTINCT _column_name(s)_ FROM _database.table_ | Returns only unique values, eliminating duplicate records. |
-| FROM | FROM _database.table_ | Used to list the database(s), table(s), and any joins required for a SQL statement.
| -| GROUP BY | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ GROUP BY _column_name(s)_ ORDER BY _column_name(s)_ | Groups rows that have the same values into summary rows. | -| HAVING | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ GROUP BY _column_name(s)_ HAVING _condition_ ORDER BY _column_name(s)_ | Filters data based on a group or aggregate function. | -| SELECT | SELECT _column_name(s)_ FROM _database.table_ | Selects data from table. | -| WHERE | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ | Extracts records based on a defined condition. | +| `DISTINCT` | `SELECT DISTINCT column_name(s) FROM database.table` | Returns only unique values, eliminating duplicate records. | +| `FROM` | `FROM database.table` | Used to list the database(s), table(s), and any joins required for a SQL statement. | +| `GROUP BY` | `SELECT column_name(s) FROM database.table WHERE condition GROUP BY column_name(s) ORDER BY column_name(s)` | Groups rows that have the same values into summary rows. | +| `HAVING` | `SELECT column_name(s) FROM database.table WHERE condition GROUP BY column_name(s) HAVING condition ORDER BY column_name(s)` | Filters data based on a group or aggregate function. | +| `SELECT` | `SELECT column_name(s) FROM database.table` | Selects data from table. | +| `WHERE` | `SELECT column_name(s) FROM database.table WHERE condition` | Extracts records based on a defined condition. 
| ### Joins | Keyword | Syntax | Description | | ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| CROSS JOIN | SELECT _column_name(s)_ FROM _database.table_1_ CROSS JOIN _database.table_2_ | Returns a paired combination of each row from _table_1_ with row from _table_2_. _Note: CROSS JOIN can return very large result sets and is generally considered bad practice._ | -| FULL OUTER | SELECT _column_name(s)_ FROM _database.table_1_ FULL OUTER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ WHERE _condition_ | Returns all records when there is a match in either _table_1_ (left table) or _table_2_ (right table). | -| [INNER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ INNER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return only matching records from _table_1_ (left table) and _table_2_ (right table). The INNER keyword is optional and does not affect the result. | -| LEFT [OUTER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ LEFT OUTER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return all records from _table_1_ (left table) and matching data from _table_2_ (right table). The OUTER keyword is optional and does not affect the result. | -| RIGHT [OUTER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ RIGHT OUTER JOIN _database.table_2_ ON _table_1.column_name = table_2.column_name_ | Return all records from _table_2_ (right table) and matching data from _table_1_ (left table). The OUTER keyword is optional and does not affect the result. 
|
+| `CROSS JOIN` | `SELECT column_name(s) FROM database.table_1 CROSS JOIN database.table_2` | Returns a paired combination of each row from `table_1` with each row from `table_2`. Note: CROSS JOIN can return very large result sets and is generally considered bad practice. |
+| `FULL OUTER` | `SELECT column_name(s) FROM database.table_1 FULL OUTER JOIN database.table_2 ON table_1.column_name = table_2.column_name WHERE condition` | Returns all records when there is a match in either `table_1` (left table) or `table_2` (right table). |
+| `[INNER] JOIN` | `SELECT column_name(s) FROM database.table_1 INNER JOIN database.table_2 ON table_1.column_name = table_2.column_name` | Returns only matching records from `table_1` (left table) and `table_2` (right table). The INNER keyword is optional and does not affect the result. |
+| `LEFT [OUTER] JOIN` | `SELECT column_name(s) FROM database.table_1 LEFT OUTER JOIN database.table_2 ON table_1.column_name = table_2.column_name` | Returns all records from `table_1` (left table) and matching data from `table_2` (right table). The OUTER keyword is optional and does not affect the result. |
+| `RIGHT [OUTER] JOIN` | `SELECT column_name(s) FROM database.table_1 RIGHT OUTER JOIN database.table_2 ON table_1.column_name = table_2.column_name` | Returns all records from `table_2` (right table) and matching data from `table_1` (left table). The OUTER keyword is optional and does not affect the result. |

### Predicates

| Keyword | Syntax | Description |
| ----------- | ----------------------------------------------------------------------------- | -------------------------- |
-| IS NOT NULL | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IS NOT NULL | Tests for non-null values.
| +| `IS NULL` | `SELECT column_name(s) FROM database.table WHERE column_name IS NULL` | Tests for null values. | ### Statements | Keyword | Syntax | Description | | ------- | --------------------------------------------------------------------------------------------- | ----------------------------------- | -| DELETE | DELETE FROM _database.table_ WHERE condition | Deletes existing data from a table. | -| INSERT | INSERT INTO _database.table(column_name(s))_ VALUES(_value(s)_) | Inserts new records into a table. | -| UPDATE | UPDATE _database.table_ SET _column_1 = value_1, column_2 = value_2, ....,_ WHERE _condition_ | Alters existing records in a table. | +| `DELETE` | `DELETE FROM database.table WHERE condition` | Deletes existing data from a table. | +| `INSERT` | `INSERT INTO database.table(column_name(s)) VALUES(value(s))` | Inserts new records into a table. | +| `UPDATE` | `UPDATE database.table SET column_1 = value_1, column_2 = value_2, .... WHERE condition` | Alters existing records in a table. | diff --git a/docs/developers/sql-guide/json-search.md b/docs/developers/sql-guide/json-search.md index 507473f3..c4bcd1c8 100644 --- a/docs/developers/sql-guide/json-search.md +++ b/docs/developers/sql-guide/json-search.md @@ -12,7 +12,7 @@ Harper automatically indexes all top level attributes in a row / object written ## Syntax -SEARCH*JSON(\_expression, attribute*) +`SEARCH_JSON(expression, attribute)` Executes the supplied string _expression_ against data of the defined top level _attribute_ for each row. The expression both filters and defines output from the JSON document. @@ -117,7 +117,7 @@ SEARCH_JSON( ) ``` -The first argument passed to SEARCH_JSON is the expression to execute against the second argument which is the cast attribute on the credits table. This expression will execute for every row. Looking into the expression it starts with “$\[…]” this tells the expression to iterate all elements of the cast array. 
+The first argument passed to SEARCH_JSON is the expression to execute against the second argument, which is the cast attribute on the credits table. This expression will execute for every row. Looking at the expression, it starts with "$[…]", which tells the expression to iterate all elements of the cast array.

Then the expression tells the function to only return entries where the name attribute matches any of the actors defined in the array:

@@ -125,7 +125,7 @@ Then the expression tells the function to only return entries where the name att

name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L. Jackson", "Gwyneth Paltrow", "Don Cheadle"]
```

-So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{“actor”: name, “character”: character}`. This tells the function to create a specific object for each matching entry.
+So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{"actor": name, "character": character}`. This tells the function to create a specific object for each matching entry.

**Sample Result**

diff --git a/docs/getting-started/first-harper-app.md b/docs/getting-started/first-harper-app.md
index 6acc7b93..73b66a0a 100644
--- a/docs/getting-started/first-harper-app.md
+++ b/docs/getting-started/first-harper-app.md
@@ -88,21 +88,20 @@ type Dog @table @export {
}
```

-By default the application HTTP server port is `9926` (this can be [configured here](../deployments/configuration#http)), so the local URL would be `http:/localhost:9926/Dog/` with a full REST API. We can PUT or POST data into this table using this new path, and then GET or DELETE from it as well (you can even view data directly from the browser).
If you have not added any records yet, we could use a PUT or POST to add a record. PUT is appropriate if you know the id, and POST can be used to assign an id:
+By default, the application HTTP server port is `9926` (this can be [configured here](../deployments/configuration#http)), so the local URL would be `http://localhost:9926/Dog/` with a full REST API. We can PUT or POST data into this table using this new path, and then GET or DELETE from it as well (you can even view data directly from the browser). If you have not added any records yet, we could use a PUT or POST to add a record. PUT is appropriate if you know the id, and POST can be used to assign an id:

-```json
-POST /Dog/
-Content-Type: application/json
-
-{
-  "name": "Harper",
-  "breed": "Labrador",
-  "age": 3,
-  "tricks": ["sits"]
-}
+```bash
+curl -X POST http://localhost:9926/Dog/ \
+  -H "Content-Type: application/json" \
+  -d '{
+    "name": "Harper",
+    "breed": "Labrador",
+    "age": 3,
+    "tricks": ["sits"]
+  }'
 ```

-With this a record will be created and the auto-assigned id will be available through the `Location` header. If you added a record, you can visit the path `/Dog/` to view that record. Alternately, the curl command curl `http:/localhost:9926/Dog/` will achieve the same thing.
+With this, a record will be created and the auto-assigned id will be available through the `Location` header. If you added a record, you can visit the path `/Dog/` to view that record. Alternatively, the curl command `curl http://localhost:9926/Dog/` will achieve the same thing.
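The create-then-read flow above can also be sketched in JavaScript (Node 18+ for the built-in `fetch`). The instance URL is the default from the text, and the `idFromLocation` helper is our own illustrative name, not part of Harper's API:

```javascript
// Extract the auto-assigned record id from a Location header value
// such as "/Dog/<id>". The helper name is ours, for illustration only.
function idFromLocation(location) {
  // The id is the last non-empty path segment.
  return location.split('/').filter(Boolean).pop();
}

// Sketch: POST a record, then read it back by id (assumes a local
// instance listening on the default application port 9926).
async function createAndFetchDog() {
  const res = await fetch('http://localhost:9926/Dog/', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Harper', breed: 'Labrador' }),
  });
  const id = idFromLocation(res.headers.get('location'));
  const record = await fetch(`http://localhost:9926/Dog/${id}`);
  return record.json();
}
```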
## Authenticating Endpoints diff --git a/docs/technical-details/reference/analytics.md index 39c92109..2f2ddbd8 100644 --- a/docs/technical-details/reference/analytics.md +++ b/docs/technical-details/reference/analytics.md @@ -104,14 +104,14 @@ And a summary record looks like: The following are general resource usage statistics that are tracked: -- memory - This includes RSS, heap, buffer and external data usage. -- utilization - How much of the time the worker was processing requests. -- mqtt-connections - The number of MQTT connections. +- `memory` - This includes RSS, heap, buffer and external data usage. +- `utilization` - How much of the time the worker was processing requests. +- `mqtt-connections` - The number of MQTT connections. The following types of information are tracked for each HTTP request: -- success - How many requests returned a successful response (20x response code). TTFB - Time to first byte in the response to the client. -- transfer - Time to finish the transfer of the data to the client. -- bytes-sent - How many bytes of data were sent to the client. +- `success` - How many requests returned a successful response (20x response code). +- `TTFB` - Time to first byte in the response to the client. +- `transfer` - Time to finish the transfer of the data to the client. +- `bytes-sent` - How many bytes of data were sent to the client. Requests are categorized by operation name, for the operations API, by the resource (name) with the REST API, and by command for the MQTT interface. 
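The per-category grouping described at the end of the analytics section can be illustrated with a hand-rolled sketch. The input record shape (`category`, `success`, `bytesSent`) is an assumption for illustration, not Harper's actual analytics record format.

```javascript
// Sketch: roll raw per-request metrics up by category
// (operation name, REST resource, or MQTT command).
const requests = [
	{ category: 'search_by_id', success: true, bytesSent: 512 },
	{ category: 'search_by_id', success: true, bytesSent: 640 },
	{ category: 'insert', success: false, bytesSent: 128 },
];

function summarize(reqs) {
	const summary = {};
	for (const { category, success, bytesSent } of reqs) {
		// Create the bucket for this category on first sight.
		const bucket = (summary[category] ??= { count: 0, success: 0, bytesSent: 0 });
		bucket.count++;
		if (success) bucket.success++;
		bucket.bytesSent += bytesSent;
	}
	return summary;
}

console.log(summarize(requests));
// { search_by_id: { count: 2, success: 2, bytesSent: 1152 },
//   insert: { count: 1, success: 0, bytesSent: 128 } }
```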
diff --git a/docs/technical-details/reference/components/built-in-extensions.md b/docs/technical-details/reference/components/built-in-extensions.md index b583bc8a..72648416 100644 --- a/docs/technical-details/reference/components/built-in-extensions.md +++ b/docs/technical-details/reference/components/built-in-extensions.md @@ -222,10 +222,10 @@ Because this plugin is implemented using the new [Plugin API](./plugins.md), it In addition to the general Plugin configuration options (`files`, `urlPath`, and `timeout`), this plugin supports the following configuration options: -- **extensions** - `string[]` - _optional_ - An array of file extensions to try and serve when an exact path is not found. For example, `['html']` and the path `/site/page-1` will match `/site/page-1.html`. -- **fallthrough** - `boolean` - _optional_ - If `true`, the plugin will fall through to the next handler if the requested file is not found. Make sure to disable this option if you want to customize the 404 Not Found response with the `notFound` option. Defaults to `true`. -- **index** - `boolean` - _optional_ - If `true`, the plugin will serve an `index.html` file if it exists in the directory specified by the `files` pattern. Defaults to `false`. -- **notFound** - `string | { file: string; statusCode: number }` - _optional_ - Specify a custom file to be returned for 404 Not Found responses. If you want to specify a different statusCode when a given path cannot be found, use the object form and specify the `file` and `statusCode` properties (this is particularly useful for SPAs). +- `extensions` - `string[]` - _optional_ - An array of file extensions to try and serve when an exact path is not found. For example, `['html']` and the path `/site/page-1` will match `/site/page-1.html`. +- `fallthrough` - `boolean` - _optional_ - If `true`, the plugin will fall through to the next handler if the requested file is not found. 
Make sure to disable this option if you want to customize the 404 Not Found response with the `notFound` option. Defaults to `true`. +- `index` - `boolean` - _optional_ - If `true`, the plugin will serve an `index.html` file if it exists in the directory specified by the `files` pattern. Defaults to `false`. +- `notFound` - `string | { file: string; statusCode: number }` - _optional_ - Specify a custom file to be returned for 404 Not Found responses. If you want to specify a different statusCode when a given path cannot be found, use the object form and specify the `file` and `statusCode` properties (this is particularly useful for SPAs). ### Examples diff --git a/docs/technical-details/reference/components/extensions.md b/docs/technical-details/reference/components/extensions.md index e5575f8e..78012b7b 100644 --- a/docs/technical-details/reference/components/extensions.md +++ b/docs/technical-details/reference/components/extensions.md @@ -32,11 +32,11 @@ Any [Resource Extension](#resource-extension) can be configured with the `files` > Harper relies on the [fast-glob](https://github.com/mrmlnc/fast-glob) library for glob pattern matching. -- **files** - `string | string[] | Object` - _required_ - A [glob pattern](https://github.com/mrmlnc/fast-glob?tab=readme-ov-file#pattern-syntax) string, array of glob pattern strings, or a more expressive glob options object determining the set of files and directories to be resolved for the extension. If specified as an object, the `source` property is required. By default, Harper **matches files and directories**; this is configurable using the `only` option. - - **source** - `string | string[]` - _required_ - The glob pattern string or array of strings. - - **only** - `'all' | 'files' | 'directories'` - _optional_ - The glob pattern will match only the specified entry type. Defaults to `'all'`. - - **ignore** - `string[]` - _optional_ - An array of glob patterns to exclude from matches. 
This is an alternative way to use negative patterns. Defaults to `[]`. -- **urlPath** - `string` - _optional_ - A base URL path to prepend to the resolved `files` entries. +- `files` - `string | string[] | Object` - _required_ - A [glob pattern](https://github.com/mrmlnc/fast-glob?tab=readme-ov-file#pattern-syntax) string, array of glob pattern strings, or a more expressive glob options object determining the set of files and directories to be resolved for the extension. If specified as an object, the `source` property is required. By default, Harper **matches files and directories**; this is configurable using the `only` option. + - `source` - `string | string[]` - _required_ - The glob pattern string or array of strings. + - `only` - `'all' | 'files' | 'directories'` - _optional_ - The glob pattern will match only the specified entry type. Defaults to `'all'`. + - `ignore` - `string[]` - _optional_ - An array of glob patterns to exclude from matches. This is an alternative way to use negative patterns. Defaults to `[]`. +- `urlPath` - `string` - _optional_ - A base URL path to prepend to the resolved `files` entries. - If the value starts with `./`, such as `'./static/'`, the component name will be included in the base url path - If the value is `.`, then the component name will be the base url path - Note: `..` is an invalid pattern and will result in an error @@ -121,11 +121,11 @@ These methods are for processing individual files. They can be async. 
Parameters: -- **contents** - `Buffer` - The contents of the file -- **urlPath** - `string` - The recommended URL path of the file -- **absolutePath** - `string` - The absolute path of the file +- `contents` - `Buffer` - The contents of the file +- `urlPath` - `string` - The recommended URL path of the file +- `absolutePath` - `string` - The absolute path of the file -- **resources** - `Object` - A collection of the currently loaded resources +- `resources` - `Object` - A collection of the currently loaded resources Returns: `void | Promise` @@ -145,10 +145,10 @@ If the function returns or resolves a truthy value, then the component loading s Parameters: -- **urlPath** - `string` - The recommended URL path of the directory -- **absolutePath** - `string` - The absolute path of the directory +- `urlPath` - `string` - The recommended URL path of the directory +- `absolutePath` - `string` - The absolute path of the directory -- **resources** - `Object` - A collection of the currently loaded resources +- `resources` - `Object` - A collection of the currently loaded resources Returns: `boolean | void | Promise` @@ -182,6 +182,6 @@ A Protocol Extension is made up of two distinct methods, [`start()`](#startoptio Parameters: -- **options** - `Object` - An object representation of the extension's configuration options. +- `options` - `Object` - An object representation of the extension's configuration options. 
Returns: `Object` - An object that implements any of the [Resource Extension APIs](#resource-extension-api) diff --git a/docs/technical-details/reference/components/plugins.md b/docs/technical-details/reference/components/plugins.md index 902258c1..dc08c523 100644 --- a/docs/technical-details/reference/components/plugins.md +++ b/docs/technical-details/reference/components/plugins.md @@ -26,9 +26,9 @@ As plugins are meant to be used by applications in order to implement some featu As a brief overview, the general configuration options available for plugins are: -- **files** - `string` | `string[]` | [`FilesOptionsObject`](#interface-filesoptionsobject) - _optional_ - A glob pattern string or array of strings that specifies the files and directories to be handled by the plugin's default `EntryHandler` instance. -- **urlPath** - `string` - _optional_ - A base URL path to prepend to the resolved `files` entries handled by the plugin's default `EntryHandler` instance. -- **timeout** - `number` - _optional_ - The timeout in milliseconds for the plugin's operations. If not specified, the system default is **30 seconds**. Plugins may override the system default themselves, but this configuration option is the highest priority and takes precedence. +- `files` - `string` | `string[]` | [`FilesOptionsObject`](#interface-filesoptionsobject) - _optional_ - A glob pattern string or array of strings that specifies the files and directories to be handled by the plugin's default `EntryHandler` instance. +- `urlPath` - `string` - _optional_ - A base URL path to prepend to the resolved `files` entries handled by the plugin's default `EntryHandler` instance. +- `timeout` - `number` - _optional_ - The timeout in milliseconds for the plugin's operations. If not specified, the system default is **30 seconds**. Plugins may override the system default themselves, but this configuration option is the highest priority and takes precedence. 
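The `timeout` precedence described in the plugin options above (explicit configuration value first, then a plugin's own default, then the 30-second system default) can be sketched as a tiny resolver. The helper name and shape are illustrative, not a real Harper API.

```javascript
// Sketch of the documented precedence: configured value > plugin default > 30 s system default.
const SYSTEM_DEFAULT_TIMEOUT_MS = 30_000;

function resolveTimeout(configTimeout, pluginDefault) {
	if (typeof configTimeout === 'number') return configTimeout; // highest priority
	if (typeof pluginDefault === 'number') return pluginDefault; // plugin's own override
	return SYSTEM_DEFAULT_TIMEOUT_MS; // system fallback
}

console.log(resolveTimeout(5000, 10000)); // 5000  (config wins)
console.log(resolveTimeout(undefined, 10000)); // 10000 (plugin default)
console.log(resolveTimeout(undefined, undefined)); // 30000 (system default)
```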
### File Entries @@ -165,7 +165,7 @@ This example is heavily simplified, but it demonstrates how the different key pa Parameters: -- **scope** - [`Scope`](#class-scope) - An instance of the `Scope` class that provides access to the relative application's configuration, resources, and other APIs. +- `scope` - [`Scope`](#class-scope) - An instance of the `Scope` class that provides access to the relative application's configuration, resources, and other APIs. Returns: `void | Promise` @@ -181,7 +181,7 @@ Emitted after the scope is closed via the `close()` method. ### Event: `'error'` -- **error** - `unknown` - The error that occurred. +- `error` - `unknown` - The error that occurred. ### Event: `'ready'` @@ -197,8 +197,8 @@ Closes all associated entry handlers, the associated `scope.options` instance, e Parameters: -- **files** - [`FilesOption`](#interface-filesoption) | [`FileAndURLPathConfig`](#interface-fileandurlpathconfig) | [`onEntryEventHandler`](#function-onentryeventhandlerentryevent-fileentryevent--directoryentryevent-void) - _optional_ -- **handler** - [`onEntryEventHandler`](#function-onentryeventhandlerentryevent-fileentryevent--directoryentryevent-void) - _optional_ +- `files` - [`FilesOption`](#interface-filesoption) | [`FileAndURLPathConfig`](#interface-fileandurlpathconfig) | [`onEntryEventHandler`](#function-onentryeventhandlerentryevent-fileentryevent--directoryentryevent-void) - _optional_ +- `handler` - [`onEntryEventHandler`](#function-onentryeventhandlerentryevent-fileentryevent--directoryentryevent-void) - _optional_ Returns: [`EntryHandler`](#class-entryhandler) - An instance of the `EntryHandler` class that can be used to handle entries within the scope. @@ -313,13 +313,13 @@ Returns: `string` - The directory of the application. This is the root directory ## Interface: `FilesOptionsObject` -- **source** - `string` | `string[]` - _required_ - The glob pattern string or array of strings. 
-- **ignore** - `string` | `string[]` - _optional_ - An array of glob patterns to exclude from matches. This is an alternative way to use negative patterns. Defaults to `[]`. +- `source` - `string` | `string[]` - _required_ - The glob pattern string or array of strings. +- `ignore` - `string` | `string[]` - _optional_ - An array of glob patterns to exclude from matches. This is an alternative way to use negative patterns. Defaults to `[]`. ## Interface: `FileAndURLPathConfig` -- **files** - [`FilesOption`](#interface-filesoption) - _required_ - A glob pattern string, array of glob pattern strings, or a more expressive glob options object determining the set of files and directories to be resolved for the plugin. -- **urlPath** - `string` - _optional_ - A base URL path to prepend to the resolved `files` entries. +- `files` - [`FilesOption`](#interface-filesoption) - _required_ - A glob pattern string, array of glob pattern strings, or a more expressive glob options object determining the set of files and directories to be resolved for the plugin. +- `urlPath` - `string` - _optional_ - A base URL path to prepend to the resolved `files` entries. ## Class: `OptionsWatcher` @@ -327,9 +327,9 @@ Returns: `string` - The directory of the application. This is the root directory ### Event: `'change'` -- **key** - `string[]` - The key of the changed option split into parts (e.g. `foo.bar` becomes `['foo', 'bar']`). -- **value** - [`ConfigValue`](#interface-configvalue) - The new value of the option. -- **config** - [`ConfigValue`](#interface-configvalue) - The entire configuration object of the plugin. +- `key` - `string[]` - The key of the changed option split into parts (e.g. `foo.bar` becomes `['foo', 'bar']`). +- `value` - [`ConfigValue`](#interface-configvalue) - The new value of the option. +- `config` - [`ConfigValue`](#interface-configvalue) - The entire configuration object of the plugin. 
The `'change'` event is emitted whenever a configuration option is changed in the configuration file relative to the application and respective plugin. @@ -360,11 +360,11 @@ Emitted when the `OptionsWatcher` is closed via the `close()` method. The watche ### Event: `'error'` -- **error** - `unknown` - The error that occurred. +- `error` - `unknown` - The error that occurred. ### Event: `'ready'` -- **config** - [`ConfigValue`](#interface-configvalue) | `undefined` - The configuration object of the plugin, if present. +- `config` - [`ConfigValue`](#interface-configvalue) | `undefined` - The configuration object of the plugin, if present. This event can be emitted multiple times. It is first emitted upon the initial load, but will also be emitted after restoring a configuration file or configuration object after a `'remove'` event. @@ -382,7 +382,7 @@ Closes the options watcher, removing all listeners and preventing any further ev Parameters: -- **key** - `string[]` - The key of the option to get, split into parts (e.g. `foo.bar` is represented as `['foo', 'bar']`). +- `key` - `string[]` - The key of the option to get, split into parts (e.g. `foo.bar` is represented as `['foo', 'bar']`). Returns: [`ConfigValue`](#interface-configvalue) | `undefined` @@ -420,7 +420,7 @@ Created by calling [`scope.handleEntry()`](#scopehandleentry) method. ### Event: `'all'` -- **entry** - [`FileEntry`](#interface-fileentry) | [`DirectoryEntry`](#interface-directoryentry) - The entry that was added, changed, or removed. +- `entry` - [`FileEntry`](#interface-fileentry) | [`DirectoryEntry`](#interface-directoryentry) - The entry that was added, changed, or removed. The `'all'` event is emitted for all entry events, including file and directory events. This is the event that the handler method in `scope.handleEntry` is registered for. The event handler receives an `entry` object that contains the entry metadata, such as the file contents, URL path, and absolute path. 
@@ -452,19 +452,19 @@ async function handleApplication(scope) { ### Event: `'add'` -- **entry** - [`AddFileEvent`](#interface-addfileevent) - The file entry that was added. +- `entry` - [`AddFileEvent`](#interface-addfileevent) - The file entry that was added. The `'add'` event is emitted when a file is created (or the watcher sees it for the first time). The event handler receives an `AddFileEvent` object that contains the file contents, URL path, absolute path, and other metadata. ### Event: `'addDir'` -- **entry** - [`AddDirEvent`](#interface-adddirevent) - The directory entry that was added. +- `entry` - [`AddDirEvent`](#interface-adddirevent) - The directory entry that was added. The `'addDir'` event is emitted when a directory is created (or the watcher sees it for the first time). The event handler receives an `AddDirEvent` object that contains the URL path and absolute path of the directory. ### Event: `'change'` -- **entry** - [`ChangeFileEvent`](#interface-changefileevent) - The file entry that was changed. +- `entry` - [`ChangeFileEvent`](#interface-changefileevent) - The file entry that was changed. The `'change'` event is emitted when a file is modified. The event handler receives a `ChangeFileEvent` object that contains the updated file contents, URL path, absolute path, and other metadata. @@ -474,7 +474,7 @@ Emitted when the entry handler is closed via the [`entryHandler.close()`](#entry ### Event: `'error'` -- **error** - `unknown` - The error that occurred. +- `error` - `unknown` - The error that occurred. ### Event: `'ready'` @@ -482,13 +482,13 @@ Emitted when the entry handler is ready to be used. This is not automatically aw ### Event: `'unlink'` -- **entry** - [`UnlinkFileEvent`](#interface-unlinkfileevent) - The file entry that was deleted. +- `entry` - [`UnlinkFileEvent`](#interface-unlinkfileevent) - The file entry that was deleted. The `'unlink'` event is emitted when a file is deleted. 
The event handler receives an `UnlinkFileEvent` object that contains the URL path and absolute path of the deleted file. ### Event: `'unlinkDir'` -- **entry** - [`UnlinkDirEvent`](#interface-unlinkdirevent) - The directory entry that was deleted. +- `entry` - [`UnlinkDirEvent`](#interface-unlinkdirevent) - The directory entry that was deleted. The `'unlinkDir'` event is emitted when a directory is deleted. The event handler receives an `UnlinkDirEvent` object that contains the URL path and absolute path of the deleted directory. @@ -514,7 +514,7 @@ Closes the entry handler, removing all listeners and preventing any further even Parameters: -- **config** - [`FilesOption`](#interface-filesoption) | [`FileAndURLPathConfig`](#interface-fileandurlpathconfig) - The configuration object for the entry handler. +- `config` - [`FilesOption`](#interface-filesoption) | [`FileAndURLPathConfig`](#interface-fileandurlpathconfig) - The configuration object for the entry handler. This method will update an existing entry handler to watch new entries. It will close the underlying watcher and create a new one, but will maintain any existing listeners on the EntryHandler instance itself. @@ -522,9 +522,9 @@ This method returns a promise associated with the ready event of the updated han ### Interface: `BaseEntry` -- **stats** - [`fs.Stats`](https://nodejs.org/docs/latest/api/fs.html#class-fsstats) | `undefined` - The file system stats for the entry. -- **urlPath** - `string` - The recommended URL path of the entry. -- **absolutePath** - `string` - The absolute path of the entry. +- `stats` - [`fs.Stats`](https://nodejs.org/docs/latest/api/fs.html#class-fsstats) | `undefined` - The file system stats for the entry. +- `urlPath` - `string` - The recommended URL path of the entry. +- `absolutePath` - `string` - The absolute path of the entry. The foundational entry handle event object. The `stats` may or may not be present depending on the event, entry type, and platform. 
@@ -536,7 +536,7 @@ The `absolutePath` is the file system path for the entry. Extends [`BaseEntry`](#interface-baseentry) -- **contents** - `Buffer` - The contents of the file. +- `contents` - `Buffer` - The contents of the file. A specific extension of the `BaseEntry` interface representing a file entry. We automatically read the contents of the file so the user doesn't have to bother with FS operations. @@ -546,8 +546,8 @@ There is no `DirectoryEntry` since there is no other important metadata aside fr Extends [`BaseEntry`](#interface-baseentry) -- **eventType** - `string` - The type of entry event. -- **entryType** - `string` - The type of entry, either a file or a directory. +- `eventType` - `string` - The type of entry event. +- `entryType` - `string` - The type of entry, either a file or a directory. A general interface representing the entry handle event objects. @@ -555,8 +555,8 @@ A general interface representing the entry handle event objects. Extends [`EntryEvent`](#interface-entryevent), [FileEntry](#interface-fileentry) -- **eventType** - `'add'` -- **entryType** - `'file'` +- `eventType` - `'add'` +- `entryType` - `'file'` Event object emitted when a file is created (or the watcher sees it for the first time). @@ -564,8 +564,8 @@ Event object emitted when a file is created (or the watcher sees it for the firs Extends [`EntryEvent`](#interface-entryevent), [FileEntry](#interface-fileentry) -- **eventType** - `'change'` -- **entryType** - `'file'` +- `eventType` - `'change'` +- `entryType` - `'file'` Event object emitted when a file is modified. @@ -573,8 +573,8 @@ Event object emitted when a file is modified. Extends [`EntryEvent`](#interface-entryevent), [FileEntry](#interface-fileentry) -- **eventType** - `'unlink'` -- **entryType** - `'file'` +- `eventType` - `'unlink'` +- `entryType` - `'file'` Event object emitted when a file is deleted. @@ -588,8 +588,8 @@ A union type representing the file entry events. 
These events are emitted when a Extends [`EntryEvent`](#interface-entryevent) -- **eventType** - `'addDir'` -- **entryType** - `'directory'` +- `eventType` - `'addDir'` +- `entryType` - `'directory'` Event object emitted when a directory is created (or the watcher sees it for the first time). @@ -597,8 +597,8 @@ Event object emitted when a directory is created (or the watcher sees it for the Extends [`EntryEvent`](#interface-entryevent) -- **eventType** - `'unlinkDir'` -- **entryType** - `'directory'` +- `eventType` - `'unlinkDir'` +- `entryType` - `'directory'` Event object emitted when a directory is deleted. @@ -612,7 +612,7 @@ A union type representing the directory entry events. There are no change events Parameters: -- **entryEvent** - [`FileEntryEvent`](#interface-fileentryevent) | [`DirectoryEntryEvent`](#interface-directoryentryevent) +- `entryEvent` - [`FileEntryEvent`](#interface-fileentryevent) | [`DirectoryEntryEvent`](#interface-directoryentryevent) Returns: `void` diff --git a/docs/technical-details/reference/globals.md b/docs/technical-details/reference/globals.md index 10fe4c57..70d81839 100644 --- a/docs/technical-details/reference/globals.md +++ b/docs/technical-details/reference/globals.md @@ -308,9 +308,9 @@ Execute an operation from the [Operations API](https://docs.harperdb.io/develope Parameters: -- **operation** - `Object` - Object matching desired operation's request body -- **context** - `Object` - `{ username: string}` - _optional_ - The specified user -- **authorize** - `boolean` - _optional_ - Indicate the operation should authorize the user or not. Defaults to `false` +- `operation` - `Object` - Object matching desired operation's request body +- `context` - `Object` - `{ username: string}` - _optional_ - The specified user +- `authorize` - `boolean` - _optional_ - Indicate the operation should authorize the user or not. 
Defaults to `false` Returns a `Promise` with the operation's response as per the [Operations API documentation](https://docs.harperdb.io/developers/operations-api). diff --git a/scripts/prebuild.js b/scripts/prebuild.js index c5a8d207..663e9289 100644 --- a/scripts/prebuild.js +++ b/scripts/prebuild.js @@ -3,7 +3,7 @@ /** * Pre-build script for Docusaurus * Handles dynamic configuration based on environment variables - * + * * Usage: * node prebuild.js - Setup for build/start * node prebuild.js clean - Clean generated files @@ -23,19 +23,19 @@ const pagesDir = path.join(__dirname, '../src/pages'); // Helper function to remove the index page function removeIndexPage() { - if (fs.existsSync(indexPagePath)) { - fs.unlinkSync(indexPagePath); - console.log('Removed index.tsx'); - return true; - } - return false; + if (fs.existsSync(indexPagePath)) { + fs.unlinkSync(indexPagePath); + console.log('Removed index.tsx'); + return true; + } + return false; } if (isCleanMode) { - // Clean mode - remove all generated files - console.log('Cleaning generated files...'); - removeIndexPage(); - process.exit(0); + // Clean mode - remove all generated files + console.log('Cleaning generated files...'); + removeIndexPage(); + process.exit(0); } console.log('Running pre-build setup...'); @@ -43,20 +43,20 @@ console.log(`Route base path: ${routeBasePath}`); // Setup index redirect page based on route configuration if (routeBasePath === '/') { - // If docs are at root, remove the index redirect page - console.log('Docs are at root (/), removing index redirect page, if it exists...'); - removeIndexPage(); + // If docs are at root, remove the index redirect page + console.log('Docs are at root (/), removing index redirect page, if it exists...'); + removeIndexPage(); } else { - // If docs are not at root, ensure the index redirect page exists - console.log(`Docs are at ${routeBasePath}, creating index redirect page...`); - - // Create pages directory if it doesn't exist - if 
(!fs.existsSync(pagesDir)) { - fs.mkdirSync(pagesDir, { recursive: true }); - } - - // Create the redirect page - const redirectContent = `import React from 'react'; + // If docs are not at root, ensure the index redirect page exists + console.log(`Docs are at ${routeBasePath}, creating index redirect page...`); + + // Create pages directory if it doesn't exist + if (!fs.existsSync(pagesDir)) { + fs.mkdirSync(pagesDir, { recursive: true }); + } + + // Create the redirect page + const redirectContent = `import React from 'react'; import { Redirect } from '@docusaurus/router'; export default function Home(): JSX.Element { @@ -64,7 +64,7 @@ export default function Home(): JSX.Element { return ; } `; - - fs.writeFileSync(indexPagePath, redirectContent); - console.log(`Created index redirect to ${routeBasePath}`); -} \ No newline at end of file + + fs.writeFileSync(indexPagePath, redirectContent); + console.log(`Created index redirect to ${routeBasePath}`); +} diff --git a/versioned_docs/version-4.1/add-ons-and-sdks/google-data-studio.md b/versioned_docs/version-4.1/add-ons-and-sdks/google-data-studio.md index aeb7c013..48ebaca1 100644 --- a/versioned_docs/version-4.1/add-ons-and-sdks/google-data-studio.md +++ b/versioned_docs/version-4.1/add-ons-and-sdks/google-data-studio.md @@ -19,9 +19,9 @@ Get started by selecting the HarperDB connector from the [Google Data Studio Par 1. Log in to [https://datastudio.google.com/](https://datastudio.google.com/). 1. Add a new Data Source using the HarperDB connector. The current release version can be added as a data source by following this link: [HarperDB Google Data Studio Connector](https://datastudio.google.com/datasources/create?connectorId=AKfycbxBKgF8FI5R42WVxO-QCOq7dmUys0HJrUJMkBQRoGnCasY60_VJeO3BhHJPvdd20-S76g). 1. Authorize the connector to access other servers on your behalf (this allows the connector to contact your database). -1. 
Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word “Basic” at the start of it. -1. Check the box for “Secure Connections Only” if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer. -1. Check the box for “Allow Bad Certs” if your HarperDB instance does not have a valid SSL certificate. HarperDB Cloud always has valid certificates, and so will never require this to be checked. Instances you set up yourself may require this, if you are using self-signed certs. If you are using HarperDB Cloud or another instance you know should always have valid SSL certificates, do not check this box. +1. Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word "Basic" at the start of it. +1. Check the box for "Secure Connections Only" if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer. +1. Check the box for "Allow Bad Certs" if your HarperDB instance does not have a valid SSL certificate. HarperDB Cloud always has valid certificates, and so will never require this to be checked. Instances you set up yourself may require this, if you are using self-signed certs. If you are using HarperDB Cloud or another instance you know should always have valid SSL certificates, do not check this box. 1. Choose your Query Type. This determines what information the configuration will ask for after pressing the Next button. - Table will ask you for a Schema and a Table to return all fields of using `SELECT *`. - SQL will ask you for the SQL query you’re using to retrieve fields from the database. 
You may `JOIN` multiple tables together, and use HarperDB specific SQL functions, along with the usual power SQL grants. diff --git a/versioned_docs/version-4.1/clustering/index.md b/versioned_docs/version-4.1/clustering/index.md index 9eb7ef9c..6cbb2641 100644 --- a/versioned_docs/version-4.1/clustering/index.md +++ b/versioned_docs/version-4.1/clustering/index.md @@ -26,14 +26,14 @@ A common use case is an edge application collecting and analyzing sensor data th HarperDB simplifies the architecture of such an application with its bi-directional, table-level replication: -- The edge instance subscribes to a “thresholds” table on the cloud instance, so the application only makes localhost calls to get the thresholds. +- The edge instance subscribes to a "thresholds" table on the cloud instance, so the application only makes localhost calls to get the thresholds. -- The application continually pushes sensor data into a “sensor_data” table via the localhost API, comparing it to the threshold values as it does so. +- The application continually pushes sensor data into a "sensor_data" table via the localhost API, comparing it to the threshold values as it does so. -- When a threshold violation occurs, the application adds a record to the “alerts” table. +- When a threshold violation occurs, the application adds a record to the "alerts" table. -- The application appends to that record array “sensor_data” entries for the 60 seconds (or minutes, or days) leading up to the threshold violation. +- The application appends to that record array "sensor_data" entries for the 60 seconds (or minutes, or days) leading up to the threshold violation. -- The edge instance publishes the “alerts” table up to the cloud instance. +- The edge instance publishes the "alerts" table up to the cloud instance. By letting HarperDB focus on the fault-tolerant logistics of transporting your data, you get to write less code. 
By moving data only when and where it’s needed, you lower storage and bandwidth costs. And by restricting your app to only making local calls to HarperDB, you reduce the overall exposure of your application to outside forces. diff --git a/versioned_docs/version-4.1/custom-functions/custom-functions-operations.md b/versioned_docs/version-4.1/custom-functions/custom-functions-operations.md index 3e665f38..11cecde5 100644 --- a/versioned_docs/version-4.1/custom-functions/custom-functions-operations.md +++ b/versioned_docs/version-4.1/custom-functions/custom-functions-operations.md @@ -4,7 +4,7 @@ title: Custom Functions Operations # Custom Functions Operations -One way to manage Custom Functions is through [HarperDB Studio](../harperdb-studio/). It performs all the necessary operations automatically. To get started, navigate to your instance in HarperDB Studio and click the subnav link for “functions”. If you have not yet enabled Custom Functions, it will walk you through the process. Once configuration is complete, you can manage and deploy Custom Functions in minutes. +One way to manage Custom Functions is through [HarperDB Studio](../harperdb-studio/). It performs all the necessary operations automatically. To get started, navigate to your instance in HarperDB Studio and click the subnav link for "functions". If you have not yet enabled Custom Functions, it will walk you through the process. Once configuration is complete, you can manage and deploy Custom Functions in minutes. HarperDB Studio manages your Custom Functions using nine HarperDB operations. You may view these operations within our [API Docs](https://api.harperdb.io/). 
A brief overview of each of the operations is below: diff --git a/versioned_docs/version-4.1/custom-functions/define-routes.md b/versioned_docs/version-4.1/custom-functions/define-routes.md index fba5d606..5336f0f3 100644 --- a/versioned_docs/version-4.1/custom-functions/define-routes.md +++ b/versioned_docs/version-4.1/custom-functions/define-routes.md @@ -12,7 +12,7 @@ Route URLs are resolved in the following manner: - The route below, within the **dogs** project, with a route of **breeds** would be available at **http://localhost:9926/dogs/breeds**. -In effect, this route is just a pass-through to HarperDB. The same result could have been achieved by hitting the core HarperDB API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the “helper methods” section, below. +In effect, this route is just a pass-through to HarperDB. The same result could have been achieved by hitting the core HarperDB API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the "helper methods" section, below. ```javascript module.exports = async (server, { hdbCore, logger }) => { @@ -29,7 +29,7 @@ module.exports = async (server, { hdbCore, logger }) => { For endpoints where you want to execute multiple operations against HarperDB, or perform additional processing (like an ML classification, or an aggregation, or a call to a 3rd party API), you can define your own logic in the handler. The function below will execute a query against the dogs table, and filter the results to only return those dogs over 4 years in age. -**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which- as the name implies- bypasses all user authentication. See the security concerns and mitigations in the “helper methods” section, below.** +**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which, as the name implies, bypasses all user authentication.
See the security concerns and mitigations in the "helper methods" section, below.** ```javascript module.exports = async (server, { hdbCore, logger }) => { diff --git a/versioned_docs/version-4.1/custom-functions/restarting-server.md b/versioned_docs/version-4.1/custom-functions/restarting-server.md index 62fa0f63..16fd9771 100644 --- a/versioned_docs/version-4.1/custom-functions/restarting-server.md +++ b/versioned_docs/version-4.1/custom-functions/restarting-server.md @@ -4,7 +4,7 @@ title: Restarting the Server # Restarting the Server -One way to manage Custom Functions is through [HarperDB Studio](../harperdb-studio/). It performs all the necessary operations automatically. To get started, navigate to your instance in HarperDB Studio and click the subnav link for “functions”. If you have not yet enabled Custom Functions, it will walk you through the process. Once configuration is complete, you can manage and deploy Custom Functions in minutes. +One way to manage Custom Functions is through [HarperDB Studio](../harperdb-studio/). It performs all the necessary operations automatically. To get started, navigate to your instance in HarperDB Studio and click the subnav link for "functions". If you have not yet enabled Custom Functions, it will walk you through the process. Once configuration is complete, you can manage and deploy Custom Functions in minutes. For any changes made to your routes, helpers, or projects, you’ll need to restart the Custom Functions server to see them take effect. HarperDB Studio does this automatically whenever you create or delete a project, or add, edit, or delete a route or helper.
If you need to restart the Custom Functions server yourself, you can use the following operation to do so: diff --git a/versioned_docs/version-4.1/harperdb-studio/create-account.md b/versioned_docs/version-4.1/harperdb-studio/create-account.md index c78c4ef3..3d146bb6 100644 --- a/versioned_docs/version-4.1/harperdb-studio/create-account.md +++ b/versioned_docs/version-4.1/harperdb-studio/create-account.md @@ -12,7 +12,7 @@ Start at the [HarperDB Studio sign up page](https://studio.harperdb.io/sign-up). - Email Address - Subdomain - _Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ + _Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ - Coupon Code (optional) diff --git a/versioned_docs/version-4.1/harperdb-studio/instances.md b/versioned_docs/version-4.1/harperdb-studio/instances.md index 5ebb4ccd..ad629b8a 100644 --- a/versioned_docs/version-4.1/harperdb-studio/instances.md +++ b/versioned_docs/version-4.1/harperdb-studio/instances.md @@ -29,7 +29,7 @@ A summary view of all instances within an organization can be viewed by clicking 1. Fill out Instance Info. 1. Enter Instance Name - _This will be used to build your instance URL. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com). The Instance URL will be previewed below._ + _This will be used to build your instance URL. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com). The Instance URL will be previewed below._ 2.
Enter Instance Username diff --git a/versioned_docs/version-4.1/harperdb-studio/organizations.md b/versioned_docs/version-4.1/harperdb-studio/organizations.md index ec19c50e..83f99150 100644 --- a/versioned_docs/version-4.1/harperdb-studio/organizations.md +++ b/versioned_docs/version-4.1/harperdb-studio/organizations.md @@ -29,7 +29,7 @@ A new organization can be created as follows: - Enter Organization Name _This is used for descriptive purposes only._ - Enter Organization Subdomain - _Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ + _Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ 4. Click Create Organization. ## Delete an Organization diff --git a/versioned_docs/version-4.1/harperdb-studio/query-instance-data.md b/versioned_docs/version-4.1/harperdb-studio/query-instance-data.md index 22801dbc..44cd1d08 100644 --- a/versioned_docs/version-4.1/harperdb-studio/query-instance-data.md +++ b/versioned_docs/version-4.1/harperdb-studio/query-instance-data.md @@ -13,7 +13,7 @@ SQL queries can be executed directly through the HarperDB Studio with the follow 5. Enter your SQL query in the SQL query window. 6. Click **Execute**. -_Please note, the Studio will execute the query exactly as entered. For example, if you attempt to `SELECT _` from a table with millions of rows, you will most likely crash your browser.\* +_Please note, the Studio will execute the query exactly as entered. 
For example, if you attempt to `SELECT *` from a table with millions of rows, you will most likely crash your browser._ ## Browse Query Results Set diff --git a/versioned_docs/version-4.1/install-harperdb/linux.md b/versioned_docs/version-4.1/install-harperdb/linux.md index 25fcd429..1b65b515 100644 --- a/versioned_docs/version-4.1/install-harperdb/linux.md +++ b/versioned_docs/version-4.1/install-harperdb/linux.md @@ -18,7 +18,7 @@ These instructions assume that the following has already been completed: 1. An additional volume for storing HarperDB files is attached to the Linux instance 1. Traffic to ports 9925 (HarperDB Operations API), 9926 (HarperDB Custom Functions), and 9932 (HarperDB Clustering) is permitted -For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default “ubuntu” user account. +For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default "ubuntu" user account. --- diff --git a/versioned_docs/version-4.1/logging.md b/versioned_docs/version-4.1/logging.md index bceee337..234d5903 100644 --- a/versioned_docs/version-4.1/logging.md +++ b/versioned_docs/version-4.1/logging.md @@ -22,15 +22,15 @@ For example, a typical log entry looks like: The components of a log entry are: -- timestamp - This is the date/time stamp when the event occurred -- level - This is an associated log level that gives a rough guide to the importance and urgency of the message. The available log levels in order of least urgent (and more verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`. -- thread/id - This reports the name of the thread and the thread id, that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process.
Key threads are: - - main - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads - - http - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions. - - Clustering\* - These are threads and processes that handle replication. - - job - These are job threads that have been started to handle operations that are executed in a separate job thread. -- tags - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags. -- message - This is the main message that was reported. +- `timestamp` - This is the date/time stamp when the event occurred. +- `level` - This is an associated log level that gives a rough guide to the importance and urgency of the message. The available log levels in order of least urgent (and more verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`. +- `thread/id` - This reports the name of the thread and the thread id that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process. Key threads are: + - `main` - This is the thread that is responsible for managing all other threads and routing incoming requests to the other threads. + - `http` - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions. + - `Clustering` - These are threads and processes that handle replication. + - `job` - These are job threads that have been started to handle operations that are executed in a separate job thread. +- `tags` - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags. +- `message` - This is the main message that was reported.
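To make the fields above concrete, here is a sketch of a parser for an entry of this shape. The exact layout assumed here (`timestamp [level] [thread/id] message`) is illustrative only and may not match HarperDB's emitted format byte-for-byte:

```javascript
// Illustrative parser for a log entry shaped like the components listed above:
//   <timestamp> [<level>] [<thread>/<id>] <message>
// The layout is an assumption for demonstration, not HarperDB's exact format.
function parseLogEntry(line) {
  const match = line.match(/^(\S+) \[(\w+)\] \[([^/\]]+)\/(\d+)\] (.*)$/);
  if (!match) return null; // not a recognized entry
  const [, timestamp, level, thread, id, message] = match;
  return { timestamp, level, thread, id: Number(id), message };
}

// fields: timestamp, level, thread, id (number), message
console.log(parseLogEntry('2023-04-01T12:00:00.000Z [error] [http/2] connection refused'));
```

Note that a NATS entry, which carries a process name rather than a thread id, would not match this pattern and would need its own branch.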
We try to keep logging to a minimum by default; to do this, the default log level is `error`. If you require more information from the logs, lowering the log level will provide that. @@ -50,7 +50,7 @@ To log to standard streams effectively, make sure to directly run `harperdb` and ## Logging Rotation -Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space. For more information see “logging” in our [config docs](./configuration). +Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space. For more information see "logging" in our [config docs](./configuration). ## Read Logs via the API diff --git a/versioned_docs/version-4.1/security/basic-auth.md b/versioned_docs/version-4.1/security/basic-auth.md index 1ef1689b..d128471a 100644 --- a/versioned_docs/version-4.1/security/basic-auth.md +++ b/versioned_docs/version-4.1/security/basic-auth.md @@ -8,7 +8,7 @@ HarperDB uses Basic Auth and JSON Web Tokens (JWTs) to secure our HTTP requests. **You do not need to log in separately. Basic Auth is added to each HTTP request, like create_schema, create_table, insert, etc., via headers.** -A header is added to each HTTP request. The header key is **“Authorization”** the header value is **“Basic <<your username and password buffer token>>”** +A header is added to each HTTP request.
The header key is **"Authorization"** and the header value is **"Basic <<your username and password buffer token>>"** ## Authentication in HarperDB Studio diff --git a/versioned_docs/version-4.1/security/users-and-roles.md b/versioned_docs/version-4.1/security/users-and-roles.md index c060b6fd..586d5e11 100644 --- a/versioned_docs/version-4.1/security/users-and-roles.md +++ b/versioned_docs/version-4.1/security/users-and-roles.md @@ -102,7 +102,7 @@ There are two parts to a permissions set: Each table that a role should be given some level of CRUD permissions to must be included in the `tables` array for its schema in the roles permissions JSON passed to the API (_see example above_). -```json +```jsonc { "table_name": { // the name of the table to define CRUD perms for "read": boolean, // access to read from this table diff --git a/versioned_docs/version-4.1/sql-guide/date-functions.md b/versioned_docs/version-4.1/sql-guide/date-functions.md index 9ecebdb1..535ac7b6 100644 --- a/versioned_docs/version-4.1/sql-guide/date-functions.md +++ b/versioned_docs/version-4.1/sql-guide/date-functions.md @@ -152,17 +152,17 @@ Subtracts the defined amount of time from the date provided in UTC and returns t ### EXTRACT(date, date_part) -Extracts and returns the date_part requested as a String value.
Accepted date_part values below show value returned for date = "2020-03-26T15:13:02.041+000" | date_part | Example return value\* | | ----------- | ---------------------- | -| year | “2020” | -| month | “3” | -| day | “26” | -| hour | “15” | -| minute | “13” | -| second | “2” | -| millisecond | “41” | +| year | "2020" | +| month | "3" | +| day | "26" | +| hour | "15" | +| minute | "13" | +| second | "2" | +| millisecond | "41" | ``` "SELECT EXTRACT(1587568845765, 'year') AS extract_result" returns diff --git a/versioned_docs/version-4.1/sql-guide/functions.md b/versioned_docs/version-4.1/sql-guide/functions.md index 0360b1e8..863afd18 100644 --- a/versioned_docs/version-4.1/sql-guide/functions.md +++ b/versioned_docs/version-4.1/sql-guide/functions.md @@ -10,146 +10,132 @@ This SQL keywords reference contains the SQL functions available in HarperDB. ### Aggregate -| Keyword | Syntax | Description | -| ---------------- | ----------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | -| AVG | AVG(_expression_) | Returns the average of a given numeric expression. | -| COUNT | SELECT COUNT(_column_name_) FROM _schema.table_ WHERE _condition_ | Returns the number records that match the given criteria. Nulls are not counted. | -| GROUP_CONCAT | GROUP*CONCAT(\_expression*) | Returns a string with concatenated values that are comma separated and that are non-null from a group. Will return null when there are non-null values. | -| MAX | SELECT MAX(_column_name_) FROM _schema.table_ WHERE _condition_ | Returns largest value in a specified column. | -| MIN | SELECT MIN(_column_name_) FROM _schema.table_ WHERE _condition_ | Returns smallest value in a specified column. | -| SUM | SUM(_column_name_) | Returns the sum of the numeric values provided. | -| ARRAY\* | ARRAY(_expression_) | Returns a list of data as a field. 
| DISTINCT_ARRAY\* | DISTINCT*ARRAY(\_expression*) | When placed around a standard ARRAY() function, returns a distinct (deduplicated) results set. | - -\*For more information on ARRAY() and DISTINCT_ARRAY() see [this blog](https://www.harperdb.io/post/sql-queries-to-complex-objects). +| Keyword | Syntax | Description | +| ------------------ | ----------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `AVG` | `AVG(expression)` | Returns the average of a given numeric expression. | +| `COUNT` | `SELECT COUNT(column_name) FROM schema.table WHERE condition` | Returns the number of records that match the given criteria. Nulls are not counted. | +| `GROUP_CONCAT` | `GROUP_CONCAT(expression)` | Returns a string with concatenated values that are comma separated and that are non-null from a group. Will return null when there are no non-null values. | +| `MAX` | `SELECT MAX(column_name) FROM schema.table WHERE condition` | Returns largest value in a specified column. | +| `MIN` | `SELECT MIN(column_name) FROM schema.table WHERE condition` | Returns smallest value in a specified column. | +| `SUM` | `SUM(column_name)` | Returns the sum of the numeric values provided. | +| `ARRAY`* | `ARRAY(expression)` | Returns a list of data as a field. | +| `DISTINCT_ARRAY`* | `DISTINCT_ARRAY(expression)` | When placed around a standard `ARRAY()` function, returns a distinct (deduplicated) results set. | + +*For more information on `ARRAY()` and `DISTINCT_ARRAY()` see [this blog](https://www.harperdb.io/post/sql-queries-to-complex-objects). ### Conversion -| Keyword | Syntax | Description | -| ------- | ----------------------------------------------- | ---------------------------------------------------------------------- | -| CAST | CAST(_expression AS datatype(length)_) | Converts a value to a specified datatype.
| -| CONVERT | CONVERT(_data_type(length), expression, style_) | Converts a value from one datatype to a different, specified datatype. | +| Keyword | Syntax | Description | +| --------- | ----------------------------------------------------- | ---------------------------------------------------------------------- | +| `CAST` | `CAST(expression AS datatype(length))` | Converts a value to a specified datatype. | +| `CONVERT` | `CONVERT(data_type(length), expression, style)` | Converts a value from one datatype to a different, specified datatype. | ### Date & Time -| Keyword | Syntax | Description | -| ----------------- | ----------------- | --------------------------------------------------------------------------------------------------------------------- | -| CURRENT_DATE | CURRENT_DATE() | Returns the current date in UTC in “YYYY-MM-DD” String format. | -| CURRENT_TIME | CURRENT_TIME() | Returns the current time in UTC in “HH:mm:ss.SSS” string format. | -| CURRENT_TIMESTAMP | CURRENT_TIMESTAMP | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. For more information, go here. | - -| -| DATE | DATE([_date_string_]) | Formats and returns the date*string argument in UTC in ‘YYYY-MM-DDTHH:mm:ss.SSSZZ’ string format. If a date_string is not provided, the function will return the current UTC date/time value in the return format defined above. For more information, go here. | -| -| DATE_ADD | DATE_ADD(\_date, value, interval*) | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | -| -| DATE*DIFF | DATEDIFF(\_date_1, date_2[, interval]*) | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds. 
For more information, go here. | -| -| DATE*FORMAT | DATE_FORMAT(\_date, format*) | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, go here. | -| -| DATE*SUB | DATE_SUB(\_date, format*) | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted date*sub interval values- Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | -| -| DAY | DAY(\_date*) | Return the day of the month for the given date. | -| -| DAYOFWEEK | DAYOFWEEK(_date_) | Returns the numeric value of the weekday of the date given(“YYYY-MM-DD”).NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. | -| EXTRACT | EXTRACT(_date, date_part_) | Extracts and returns the date*part requested as a String value. Accepted date_part values below show value returned for date = “2020-03-26T15:13:02.041+000” For more information, go here. | -| -| GETDATE | GETDATE() | Returns the current Unix Timestamp in milliseconds. | -| GET_SERVER_TIME | GET_SERVER_TIME() | Returns the current date/time value based on the server’s timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. | -| OFFSET_UTC | OFFSET_UTC(\_date, offset*) | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. | -| NOW | NOW() | Returns the current Unix Timestamp in milliseconds. | -| -| HOUR | HOUR(_datetime_) | Returns the hour part of a given date in range of 0 to 838. | -| -| MINUTE | MINUTE(_datetime_) | Returns the minute part of a time/datetime in range of 0 to 59. | -| -| MONTH | MONTH(_date_) | Returns month part for a specified date in range of 1 to 12. 
| -| -| SECOND | SECOND(_datetime_) | Returns the seconds part of a time/datetime in range of 0 to 59. | -| YEAR | YEAR(_date_) | Returns the year part for a specified date. | -| +| Keyword | Syntax | Description | +| ----------------- | -------------------------- | --------------------------------------------------------------------------------------------------------------------- | +| `CURRENT_DATE` | `CURRENT_DATE()` | Returns the current date in UTC in "YYYY-MM-DD" String format. | +| `CURRENT_TIME` | `CURRENT_TIME()` | Returns the current time in UTC in "HH:mm:ss.SSS" string format. | +| `CURRENT_TIMESTAMP` | `CURRENT_TIMESTAMP` | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. For more information, go here. | +| `DATE` | `DATE([date_string])` | Formats and returns the date string argument in UTC in 'YYYY-MM-DDTHH:mm:ss.SSSZZ' string format. If a date string is not provided, the function will return the current UTC date/time value in the return format defined above. For more information, go here. | +| `DATE_ADD` | `DATE_ADD(date, value, interval)` | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | +| `DATE_DIFF` | `DATE_DIFF(date_1, date_2[, interval])` | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds. For more information, go here. | +| `DATE_FORMAT` | `DATE_FORMAT(date, format)` | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, go here. 
| `DATE_SUB` | `DATE_SUB(date, format)` | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted date_sub interval values: either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | +| `DAY` | `DAY(date)` | Returns the day of the month for the given date. | +| `DAYOFWEEK` | `DAYOFWEEK(date)` | Returns the numeric value of the weekday of the date given ("YYYY-MM-DD"). NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. | +| `EXTRACT` | `EXTRACT(date, date_part)` | Extracts and returns the date part requested as a String value. Accepted date_part values below show value returned for date = "2020-03-26T15:13:02.041+000". For more information, go here. | +| `GETDATE` | `GETDATE()` | Returns the current Unix Timestamp in milliseconds. | +| `GET_SERVER_TIME` | `GET_SERVER_TIME()` | Returns the current date/time value based on the server's timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. | +| `OFFSET_UTC` | `OFFSET_UTC(date, offset)` | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. | +| `NOW` | `NOW()` | Returns the current Unix Timestamp in milliseconds. | +| `HOUR` | `HOUR(datetime)` | Returns the hour part of a given date in range of 0 to 838. | +| `MINUTE` | `MINUTE(datetime)` | Returns the minute part of a time/datetime in range of 0 to 59. | +| `MONTH` | `MONTH(date)` | Returns month part for a specified date in range of 1 to 12. | +| `SECOND` | `SECOND(datetime)` | Returns the seconds part of a time/datetime in range of 0 to 59. | +| `YEAR` | `YEAR(date)` | Returns the year part for a specified date.
| ### Logical -| Keyword | Syntax | Description | -| ------- | ----------------------------------------------- | ------------------------------------------------------------------------------------------ | -| IF | IF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. | -| IIF | IIF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. | -| IFNULL | IFNULL(_expression, alt_value_) | Returns a specified value if the expression is null. | -| NULLIF | NULLIF(_expression_1, expression_2_) | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. | +| Keyword | Syntax | Description | +| -------- | --------------------------------------------------------- | ------------------------------------------------------------------------------------------ | +| `IF` | `IF(condition, value_if_true, value_if_false)` | Returns a value if the condition is true, or another value if the condition is false. | +| `IIF` | `IIF(condition, value_if_true, value_if_false)` | Returns a value if the condition is true, or another value if the condition is false. | +| `IFNULL` | `IFNULL(expression, alt_value)` | Returns a specified value if the expression is null. | +| `NULLIF` | `NULLIF(expression_1, expression_2)` | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. | ### Mathematical -| Keyword | Syntax | Description | -| ------- | ------------------------------ | --------------------------------------------------------------------------------------------------- | -| ABS | ABS(_expression_) | Returns the absolute value of a given numeric expression. | -| CEIL | CEIL(_number_) | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. | -| EXP | EXP(_number_) | Returns e to the power of a specified number. 
| -| FLOOR | FLOOR(_number_) | Returns the largest integer value that is smaller than, or equal to, a given number. | -| RANDOM | RANDOM(_seed_) | Returns a pseudo random number. | -| ROUND | ROUND(_number,decimal_places_) | Rounds a given number to a specified number of decimal places. | -| SQRT | SQRT(_expression_) | Returns the square root of an expression. | +| Keyword | Syntax | Description | +| -------- | ---------------------------------------- | --------------------------------------------------------------------------------------------------- | +| `ABS` | `ABS(expression)` | Returns the absolute value of a given numeric expression. | +| `CEIL` | `CEIL(number)` | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. | +| `EXP` | `EXP(number)` | Returns e to the power of a specified number. | +| `FLOOR` | `FLOOR(number)` | Returns the largest integer value that is smaller than, or equal to, a given number. | +| `RANDOM` | `RANDOM(seed)` | Returns a pseudo random number. | +| `ROUND` | `ROUND(number, decimal_places)` | Rounds a given number to a specified number of decimal places. | +| `SQRT` | `SQRT(expression)` | Returns the square root of an expression. | ### String -| Keyword | Syntax | Description | -| ----------- | ----------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| CONCAT | CONCAT(_string_1, string_2, ...., string_n_) | Concatenates, or joins, two or more strings together, resulting in a single string. | -| CONCAT_WS | CONCAT*WS(\_separator, string_1, string_2, ...., string_n*) | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. | -| INSTR | INSTR(_string_1, string_2_) | Returns the first position, as an integer, of string_2 within string_1. 
| -| LEN | LEN(_string_) | Returns the length of a string. | -| LOWER | LOWER(_string_) | Converts a string to lower-case. | -| REGEXP | SELECT _column_name_ FROM _schema.table_ WHERE _column_name_ REGEXP _pattern_ | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | -| REGEXP_LIKE | SELECT _column_name_ FROM _schema.table_ WHERE REGEXP*LIKE(\_column_name, pattern*) | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | -| REPLACE | REPLACE(_string, old_string, new_string_) | Replaces all instances of old_string within new_string, with string. | -| SUBSTRING | SUBSTRING(_string, string_position, length_of_substring_) | Extracts a specified amount of characters from a string. | -| TRIM | TRIM([_character(s) FROM_] _string_) | Removes leading and trailing spaces, or specified character(s), from a string. | -| UPPER | UPPER(_string_) | Converts a string to upper-case. | +| Keyword | Syntax | Description | +| ------------- | --------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `CONCAT` | `CONCAT(string_1, string_2, ...., string_n)` | Concatenates, or joins, two or more strings together, resulting in a single string. | +| `CONCAT_WS` | `CONCAT_WS(separator, string_1, string_2, ...., string_n)` | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. | +| `INSTR` | `INSTR(string_1, string_2)` | Returns the first position, as an integer, of string_2 within string_1. | +| `LEN` | `LEN(string)` | Returns the length of a string. 
| +| `LOWER` | `LOWER(string)` | Converts a string to lower-case. | +| `REGEXP` | `SELECT column_name FROM schema.table WHERE column_name REGEXP pattern` | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | +| `REGEXP_LIKE` | `SELECT column_name FROM schema.table WHERE REGEXP_LIKE(column_name, pattern)` | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | +| `REPLACE` | `REPLACE(string, old_string, new_string)` | Replaces all instances of old_string within string, with new_string. | +| `SUBSTRING` | `SUBSTRING(string, string_position, length_of_substring)` | Extracts a specified number of characters from a string. | +| `TRIM` | `TRIM([character(s) FROM] string)` | Removes leading and trailing spaces, or specified character(s), from a string. | +| `UPPER` | `UPPER(string)` | Converts a string to upper-case. | ## Operators ### Logical Operators -| Keyword | Syntax | Description | -| ------- | ----------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------- | -| BETWEEN | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ BETWEEN _value_1_ AND _value_2_ | (inclusive) Returns values(numbers, text, or dates) within a given range. | -| IN | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ IN(_value(s)_) | Used to specify multiple values in a WHERE clause. | -| LIKE | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_n_ LIKE _pattern_ | Searches for a specified pattern within a WHERE clause. 
| +| Keyword | Syntax | Description | +| --------- | --------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------- | +| `BETWEEN` | `SELECT column_name(s) FROM schema.table WHERE column_name BETWEEN value_1 AND value_2` | (inclusive) Returns values(numbers, text, or dates) within a given range. | +| `IN` | `SELECT column_name(s) FROM schema.table WHERE column_name IN(value(s))` | Used to specify multiple values in a WHERE clause. | +| `LIKE` | `SELECT column_name(s) FROM schema.table WHERE column_n LIKE pattern` | Searches for a specified pattern within a WHERE clause. | ## Queries ### General -| Keyword | Syntax | Description | -| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------- | -| DISTINCT | SELECT DISTINCT _column_name(s)_ FROM _schema.table_ | Returns only unique values, eliminating duplicate records. | -| FROM | FROM _schema.table_ | Used to list the schema(s), table(s), and any joins required for a SQL statement. | -| GROUP BY | SELECT _column_name(s)_ FROM _schema.table_ WHERE _condition_ GROUP BY _column_name(s)_ ORDER BY _column_name(s)_ | Groups rows that have the same values into summary rows. | -| HAVING | SELECT _column_name(s)_ FROM _schema.table_ WHERE _condition_ GROUP BY _column_name(s)_ HAVING _condition_ ORDER BY _column_name(s)_ | Filters data based on a group or aggregate function. | -| SELECT | SELECT _column_name(s)_ FROM _schema.table_ | Selects data from table. | -| WHERE | SELECT _column_name(s)_ FROM _schema.table_ WHERE _condition_ | Extracts records based on a defined condition. 
| +| Keyword | Syntax | Description | +| ---------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------- | +| `DISTINCT` | `SELECT DISTINCT column_name(s) FROM schema.table` | Returns only unique values, eliminating duplicate records. | +| `FROM` | `FROM schema.table` | Used to list the schema(s), table(s), and any joins required for a SQL statement. | +| `GROUP BY` | `SELECT column_name(s) FROM schema.table WHERE condition GROUP BY column_name(s) ORDER BY column_name(s)` | Groups rows that have the same values into summary rows. | +| `HAVING` | `SELECT column_name(s) FROM schema.table WHERE condition GROUP BY column_name(s) HAVING condition ORDER BY column_name(s)` | Filters data based on a group or aggregate function. | +| `SELECT` | `SELECT column_name(s) FROM schema.table` | Selects data from table. | +| `WHERE` | `SELECT column_name(s) FROM schema.table WHERE condition` | Extracts records based on a defined condition. | ### Joins | Keyword | Syntax | Description | | ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| CROSS JOIN | SELECT _column_name(s)_ FROM _schema.table_1_ CROSS JOIN _schema.table_2_ | Returns a paired combination of each row from _table_1_ with row from _table_2_. 
_Note: CROSS JOIN can return very large result sets and is generally considered bad practice._ | -| FULL OUTER | SELECT _column_name(s)_ FROM _schema.table_1_ FULL OUTER JOIN _schema.table_2_ ON _table_1.column_name_ _= table_2.column_name_ WHERE _condition_ | Returns all records when there is a match in either _table_1_ (left table) or _table_2_ (right table). | -| [INNER] JOIN | SELECT _column_name(s)_ FROM _schema.table_1_ INNER JOIN _schema.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return only matching records from _table_1_ (left table) and _table_2_ (right table). The INNER keyword is optional and does not affect the result. | -| LEFT [OUTER] JOIN | SELECT _column_name(s)_ FROM _schema.table_1_ LEFT OUTER JOIN _schema.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return all records from _table_1_ (left table) and matching data from _table_2_ (right table). The OUTER keyword is optional and does not affect the result. | -| RIGHT [OUTER] JOIN | SELECT _column_name(s)_ FROM _schema.table_1_ RIGHT OUTER JOIN _schema.table_2_ ON _table_1.column_name = table_2.column_name_ | Return all records from _table_2_ (right table) and matching data from _table_1_ (left table). The OUTER keyword is optional and does not affect the result. | +| `CROSS JOIN` | `SELECT column_name(s) FROM schema.table_1 CROSS JOIN schema.table_2` | Returns a paired combination of each row from `table_1` with row from `table_2`. Note: CROSS JOIN can return very large result sets and is generally considered bad practice. | +| `FULL OUTER` | `SELECT column_name(s) FROM schema.table_1 FULL OUTER JOIN schema.table_2 ON table_1.column_name = table_2.column_name WHERE condition` | Returns all records when there is a match in either `table_1` (left table) or `table_2` (right table). 
| +| `[INNER] JOIN` | `SELECT column_name(s) FROM schema.table_1 INNER JOIN schema.table_2 ON table_1.column_name = table_2.column_name` | Returns only matching records from `table_1` (left table) and `table_2` (right table). The INNER keyword is optional and does not affect the result. | +| `LEFT [OUTER] JOIN` | `SELECT column_name(s) FROM schema.table_1 LEFT OUTER JOIN schema.table_2 ON table_1.column_name = table_2.column_name` | Returns all records from `table_1` (left table) and matching data from `table_2` (right table). The OUTER keyword is optional and does not affect the result. | +| `RIGHT [OUTER] JOIN` | `SELECT column_name(s) FROM schema.table_1 RIGHT OUTER JOIN schema.table_2 ON table_1.column_name = table_2.column_name` | Returns all records from `table_2` (right table) and matching data from `table_1` (left table). The OUTER keyword is optional and does not affect the result. | ### Predicates -| Keyword | Syntax | Description | -| ----------- | --------------------------------------------------------------------------- | -------------------------- | -| IS NOT NULL | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ IS NOT NULL | Tests for non-null values. | -| IS NULL | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ IS NULL | Tests for null values. | +| Keyword | Syntax | Description | +| ----------- | ----------------------------------------------------------------------------- | -------------------------- | +| `IS NOT NULL` | `SELECT column_name(s) FROM schema.table WHERE column_name IS NOT NULL` | Tests for non-null values. | +| `IS NULL` | `SELECT column_name(s) FROM schema.table WHERE column_name IS NULL` | Tests for null values. | ### Statements -| Keyword | Syntax | Description | -| ------- | ------------------------------------------------------------------------------------------- | ----------------------------------- | -| DELETE | DELETE FROM _schema.table_ WHERE condition | Deletes existing data from a table. 
| -| INSERT | INSERT INTO _schema.table(column_name(s))_ VALUES(_value(s)_) | Inserts new records into a table. | -| UPDATE | UPDATE _schema.table_ SET _column_1 = value_1, column_2 = value_2, ....,_ WHERE _condition_ | Alters existing records in a table. | +| Keyword | Syntax | Description | +| ------- | --------------------------------------------------------------------------------------------- | ----------------------------------- | +| `DELETE` | `DELETE FROM schema.table WHERE condition` | Deletes existing data from a table. | +| `INSERT` | `INSERT INTO schema.table(column_name(s)) VALUES(value(s))` | Inserts new records into a table. | +| `UPDATE` | `UPDATE schema.table SET column_1 = value_1, column_2 = value_2, .... WHERE condition` | Alters existing records in a table. | diff --git a/versioned_docs/version-4.1/sql-guide/json-search.md b/versioned_docs/version-4.1/sql-guide/json-search.md index 6e9e4415..b6c78eb2 100644 --- a/versioned_docs/version-4.1/sql-guide/json-search.md +++ b/versioned_docs/version-4.1/sql-guide/json-search.md @@ -8,7 +8,7 @@ HarperDB automatically indexes all top level attributes in a row / object writte ## Syntax -SEARCH_JSON(_expression, attribute_) +`SEARCH_JSON(expression, attribute)` Executes the supplied string _expression_ against data of the defined top level _attribute_ for each row. The expression both filters and defines output from the JSON document. @@ -113,7 +113,7 @@ SEARCH_JSON( ) ``` -The first argument passed to SEARCH_JSON is the expression to execute against the second argument which is the cast attribute on the credits table. This expression will execute for every row. Looking into the expression it starts with “$[…]” this tells the expression to iterate all elements of the cast array. +The first argument passed to SEARCH_JSON is the expression to execute against the second argument which is the cast attribute on the credits table. This expression will execute for every row. 
Looking at the expression, it starts with "$[…]", which tells the expression to iterate over all elements of the cast array. Then the expression tells the function to only return entries where the name attribute matches any of the actors defined in the array: @@ -121,7 +121,7 @@ Then the expression tells the function to only return entries where the name att name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L. Jackson", "Gwyneth Paltrow", "Don Cheadle"] ``` -So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{“actor”: name, “character”: character}`. This tells the function to create a specific object for each matching entry. +So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{"actor": name, "character": character}`. This tells the function to create a specific object for each matching entry. ##### Sample Result diff --git a/versioned_docs/version-4.1/support.md b/versioned_docs/version-4.1/support.md index 47b47a01..7b37394d 100644 --- a/versioned_docs/version-4.1/support.md +++ b/versioned_docs/version-4.1/support.md @@ -55,7 +55,7 @@ HarperDB can be considered column oriented, however, the exploded data model cre **What do you mean when you say HarperDB is single model?** -HarperDB takes every attribute of a database table object and creates a key:value for both the key and its corresponding value. For example, the attribute eye color will be represented by a key “eye-color” and the corresponding value “green” will be represented by a key with the value “green”. 
We use LMDB’s lightning-fast key:value store to underpin all these interrelated keys and values, meaning that every “column” is automatically indexed, and you get huge performance in a tiny package. +HarperDB takes every attribute of a database table object and creates a key:value for both the key and its corresponding value. For example, the attribute eye color will be represented by a key "eye-color" and the corresponding value "green" will be represented by a key with the value "green". We use LMDB’s lightning-fast key:value store to underpin all these interrelated keys and values, meaning that every "column" is automatically indexed, and you get huge performance in a tiny package. **Are Primary Keys Case-Sensitive?** diff --git a/versioned_docs/version-4.2/administration/harperdb-studio/create-account.md b/versioned_docs/version-4.2/administration/harperdb-studio/create-account.md index c78c4ef3..3d146bb6 100644 --- a/versioned_docs/version-4.2/administration/harperdb-studio/create-account.md +++ b/versioned_docs/version-4.2/administration/harperdb-studio/create-account.md @@ -12,7 +12,7 @@ Start at the [HarperDB Studio sign up page](https://studio.harperdb.io/sign-up). - Email Address - Subdomain - _Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ + _Part of the URL that will be used to identify your HarperDB Cloud Instances. 
For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ - Coupon Code (optional) diff --git a/versioned_docs/version-4.2/administration/harperdb-studio/instances.md b/versioned_docs/version-4.2/administration/harperdb-studio/instances.md index 2e2c70b7..b12229e2 100644 --- a/versioned_docs/version-4.2/administration/harperdb-studio/instances.md +++ b/versioned_docs/version-4.2/administration/harperdb-studio/instances.md @@ -26,7 +26,7 @@ A summary view of all instances within an organization can be viewed by clicking 1. Fill out Instance Info. 1. Enter Instance Name - _This will be used to build your instance URL. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com). The Instance URL will be previewed below._ + _This will be used to build your instance URL. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com). The Instance URL will be previewed below._ 1. Enter Instance Username diff --git a/versioned_docs/version-4.2/administration/harperdb-studio/organizations.md b/versioned_docs/version-4.2/administration/harperdb-studio/organizations.md index ec19c50e..83f99150 100644 --- a/versioned_docs/version-4.2/administration/harperdb-studio/organizations.md +++ b/versioned_docs/version-4.2/administration/harperdb-studio/organizations.md @@ -29,7 +29,7 @@ A new organization can be created as follows: - Enter Organization Name _This is used for descriptive purposes only._ - Enter Organization Subdomain - _Part of the URL that will be used to identify your HarperDB Cloud Instances. 
For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ + _Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ 4. Click Create Organization. ## Delete an Organization diff --git a/versioned_docs/version-4.2/administration/harperdb-studio/query-instance-data.md b/versioned_docs/version-4.2/administration/harperdb-studio/query-instance-data.md index 22801dbc..44cd1d08 100644 --- a/versioned_docs/version-4.2/administration/harperdb-studio/query-instance-data.md +++ b/versioned_docs/version-4.2/administration/harperdb-studio/query-instance-data.md @@ -13,7 +13,7 @@ SQL queries can be executed directly through the HarperDB Studio with the follow 5. Enter your SQL query in the SQL query window. 6. Click **Execute**. -_Please note, the Studio will execute the query exactly as entered. For example, if you attempt to `SELECT _` from a table with millions of rows, you will most likely crash your browser.\* +_Please note, the Studio will execute the query exactly as entered. 
For example, if you attempt to `SELECT *` from a table with millions of rows, you will most likely crash your browser._ ## Browse Query Results Set diff --git a/versioned_docs/version-4.2/administration/logging/standard-logging.md b/versioned_docs/version-4.2/administration/logging/standard-logging.md index 08affb24..0e56681a 100644 --- a/versioned_docs/version-4.2/administration/logging/standard-logging.md +++ b/versioned_docs/version-4.2/administration/logging/standard-logging.md @@ -22,15 +22,15 @@ For example, a typical log entry looks like: The components of a log entry are: -- timestamp - This is the date/time stamp when the event occurred -- level - This is an associated log level that gives a rough guide to the importance and urgency of the message. The available log levels in order of least urgent (and more verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`. -- thread/ID - This reports the name of the thread and the thread ID that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process. Key threads are: - - main - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads - - http - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions. - - Clustering\* - These are threads and processes that handle replication. - - job - These are job threads that have been started to handle operations that are executed in a separate job thread. -- tags - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags. -- message - This is the main message that was reported. 
+- `timestamp` - This is the date/time stamp when the event occurred +- `level` - This is an associated log level that gives a rough guide to the importance and urgency of the message. The available log levels in order of least urgent (and more verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`. +- `thread/ID` - This reports the name of the thread and the thread ID that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process. Key threads are: + - `main` - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads + - `http` - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions. + - `Clustering` - These are threads and processes that handle replication. + - `job` - These are job threads that have been started to handle operations that are executed in a separate job thread. +- `tags` - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags. +- `message` - This is the main message that was reported. We try to keep logging to a minimum by default, to do this the default log level is `error`. If you require more information from the logs, increasing the log level down will provide that. @@ -46,7 +46,7 @@ HarperDB logs can optionally be streamed to standard streams. Logging to standar ## Logging Rotation -Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space. For more information see “logging” in our [config docs](../../deployments/configuration). 
+Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space. For more information see "logging" in our [config docs](../../deployments/configuration). ## Read Logs via the API diff --git a/versioned_docs/version-4.2/deployments/install-harperdb/linux.md b/versioned_docs/version-4.2/deployments/install-harperdb/linux.md index 6b87d34e..abaa6c2c 100644 --- a/versioned_docs/version-4.2/deployments/install-harperdb/linux.md +++ b/versioned_docs/version-4.2/deployments/install-harperdb/linux.md @@ -20,7 +20,7 @@ These instructions assume that the following has already been completed: While you will need to access HarperDB through port 9925 for the administration through the operations API, and port 9932 for clustering, for higher level of security, you may want to consider keeping both of these ports restricted to a VPN or VPC, and only have the application interface (9926 by default) exposed to the public Internet. -For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default “ubuntu” user account. +For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default "ubuntu" user account. 
--- diff --git a/versioned_docs/version-4.2/developers/applications/define-routes.md b/versioned_docs/version-4.2/developers/applications/define-routes.md index bb100ba1..b3438893 100644 --- a/versioned_docs/version-4.2/developers/applications/define-routes.md +++ b/versioned_docs/version-4.2/developers/applications/define-routes.md @@ -22,7 +22,7 @@ However, you can specify the path to be `/` if you wish to have your routes hand - The route below, using the default config, within the **dogs** project, with a route of **breeds** would be available at **http:/localhost:9926/dogs/breeds**. -In effect, this route is just a pass-through to HarperDB. The same result could have been achieved by hitting the core HarperDB API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the “helper methods” section, below. +In effect, this route is just a pass-through to HarperDB. The same result could have been achieved by hitting the core HarperDB API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the "helper methods" section, below. ```javascript export default async (server, { hdbCore, logger }) => { @@ -39,7 +39,7 @@ export default async (server, { hdbCore, logger }) => { For endpoints where you want to execute multiple operations against HarperDB, or perform additional processing (like an ML classification, or an aggregation, or a call to a 3rd party API), you can define your own logic in the handler. The function below will execute a query against the dogs table, and filter the results to only return those dogs over 4 years in age. -**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which- as the name implies- bypasses all user authentication. 
See the security concerns and mitigations in the “helper methods” section, below.** +**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which, as the name implies, bypasses all user authentication. See the security concerns and mitigations in the "helper methods" section, below.** ```javascript export default async (server, { hdbCore, logger }) => { diff --git a/versioned_docs/version-4.2/developers/clustering/index.md b/versioned_docs/version-4.2/developers/clustering/index.md index dfcdac11..92fe00fe 100644 --- a/versioned_docs/version-4.2/developers/clustering/index.md +++ b/versioned_docs/version-4.2/developers/clustering/index.md @@ -22,10 +22,10 @@ A common use case is an edge application collecting and analyzing sensor data th HarperDB simplifies the architecture of such an application with its bi-directional, table-level replication: -- The edge instance subscribes to a “thresholds” table on the cloud instance, so the application only makes localhost calls to get the thresholds. -- The application continually pushes sensor data into a “sensor_data” table via the localhost API, comparing it to the threshold values as it does so. -- When a threshold violation occurs, the application adds a record to the “alerts” table. -- The application appends to that record array “sensor_data” entries for the 60 seconds (or minutes, or days) leading up to the threshold violation. -- The edge instance publishes the “alerts” table up to the cloud instance. +- The edge instance subscribes to a "thresholds" table on the cloud instance, so the application only makes localhost calls to get the thresholds. +- The application continually pushes sensor data into a "sensor_data" table via the localhost API, comparing it to the threshold values as it does so. +- When a threshold violation occurs, the application adds a record to the "alerts" table. 
+- The application appends to that record array "sensor_data" entries for the 60 seconds (or minutes, or days) leading up to the threshold violation. +- The edge instance publishes the "alerts" table up to the cloud instance. By letting HarperDB focus on the fault-tolerant logistics of transporting your data, you get to write less code. By moving data only when and where it’s needed, you lower storage and bandwidth costs. And by restricting your app to only making local calls to HarperDB, you reduce the overall exposure of your application to outside forces. diff --git a/versioned_docs/version-4.2/developers/components/google-data-studio.md b/versioned_docs/version-4.2/developers/components/google-data-studio.md index d252f3f0..4ee8d848 100644 --- a/versioned_docs/version-4.2/developers/components/google-data-studio.md +++ b/versioned_docs/version-4.2/developers/components/google-data-studio.md @@ -19,9 +19,9 @@ Get started by selecting the HarperDB connector from the [Google Data Studio Par 1. Log in to [https://datastudio.google.com/](https://datastudio.google.com/). 1. Add a new Data Source using the HarperDB connector. The current release version can be added as a data source by following this link: [HarperDB Google Data Studio Connector](https://datastudio.google.com/datasources/create?connectorId=AKfycbxBKgF8FI5R42WVxO-QCOq7dmUys0HJrUJMkBQRoGnCasY60_VJeO3BhHJPvdd20-S76g). 1. Authorize the connector to access other servers on your behalf (this allows the connector to contact your database). -1. Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word “Basic” at the start of it. -1. Check the box for “Secure Connections Only” if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer. -1. 
Check the box for “Allow Bad Certs” if your HarperDB instance does not have a valid SSL certificate. [HarperDB Cloud](../../deployments/harperdb-cloud/) always has valid certificates, and so will never require this to be checked. Instances you set up yourself may require this, if you are using self-signed certs. If you are using [HarperDB Cloud](../../deployments/harperdb-cloud/) or another instance you know should always have valid SSL certificates, do not check this box. +1. Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word "Basic" at the start of it. +1. Check the box for "Secure Connections Only" if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer. +1. Check the box for "Allow Bad Certs" if your HarperDB instance does not have a valid SSL certificate. [HarperDB Cloud](../../deployments/harperdb-cloud/) always has valid certificates, and so will never require this to be checked. Instances you set up yourself may require this, if you are using self-signed certs. If you are using [HarperDB Cloud](../../deployments/harperdb-cloud/) or another instance you know should always have valid SSL certificates, do not check this box. 1. Choose your Query Type. This determines what information the configuration will ask for after pressing the Next button. - Table will ask you for a Schema and a Table to return all fields of using `SELECT *`. - SQL will ask you for the SQL query you’re using to retrieve fields from the database. You may `JOIN` multiple tables together, and use HarperDB specific SQL functions, along with the usual power SQL grants. 
diff --git a/versioned_docs/version-4.2/developers/components/installing.md b/versioned_docs/version-4.2/developers/components/installing.md index 6bbb6261..090075b4 100644 --- a/versioned_docs/version-4.2/developers/components/installing.md +++ b/versioned_docs/version-4.2/developers/components/installing.md @@ -9,7 +9,7 @@ Components can be easily added by adding a new top level element to your `harper The configuration comprises two values: - component name - can be anything, as long as it follows valid YAML syntax. -- package - a reference to your component. +- `package` - a reference to your component. ```yaml myComponentName: diff --git a/versioned_docs/version-4.2/developers/components/operations.md b/versioned_docs/version-4.2/developers/components/operations.md index 32108df5..691ce4bb 100644 --- a/versioned_docs/version-4.2/developers/components/operations.md +++ b/versioned_docs/version-4.2/developers/components/operations.md @@ -4,7 +4,7 @@ title: Operations # Operations -One way to manage applications and components is through [HarperDB Studio](../../administration/harperdb-studio/). It performs all the necessary operations automatically. To get started, navigate to your instance in HarperDB Studio and click the subnav link for “applications”. Once configuration is complete, you can manage and deploy applications in minutes. +One way to manage applications and components is through [HarperDB Studio](../../administration/harperdb-studio/). It performs all the necessary operations automatically. To get started, navigate to your instance in HarperDB Studio and click the subnav link for "applications". Once configuration is complete, you can manage and deploy applications in minutes. HarperDB Studio manages your applications using nine HarperDB operations. You may view these operations within our [API Docs](../operations-api/). 
A brief overview of each of the operations is below: diff --git a/versioned_docs/version-4.2/developers/operations-api/bulk-operations.md b/versioned_docs/version-4.2/developers/operations-api/bulk-operations.md index d698130f..b6f6a07f 100644 --- a/versioned_docs/version-4.2/developers/operations-api/bulk-operations.md +++ b/versioned_docs/version-4.2/developers/operations-api/bulk-operations.md @@ -8,11 +8,11 @@ title: Bulk Operations Ingests CSV data, provided directly in the operation as an `insert`, `update` or `upsert` into the specified database table. -- operation _(required)_ - must always be `csv_data_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- data _(required)_ - csv data to import into HarperDB +- `operation` _(required)_ - must always be `csv_data_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `data` _(required)_ - csv data to import into HarperDB ### Body @@ -43,11 +43,11 @@ Ingests CSV data, provided via a path on the local filesystem, as an `insert`, ` _Note: The CSV file must reside on the same machine on which HarperDB is running. For example, the path to a CSV on your computer will produce an error if your HarperDB instance is a cloud instance._ -- operation _(required)_ - must always be `csv_file_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. 
The default is `data` -- table _(required)_ - name of the table where you are loading your data -- file*path *(required)\_ - path to the csv file on the host running harperdb +- `operation` _(required)_ - must always be `csv_file_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `file_path` _(required)_ - path to the csv file on the host running harperdb ### Body @@ -76,11 +76,11 @@ _Note: The CSV file must reside on the same machine on which HarperDB is running Ingests CSV data, provided via URL, as an `insert`, `update` or `upsert` into the specified database table.
-- operation _(required)_ - must always be `import_from_s3` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- s3 _(required)_ - object containing required AWS S3 bucket info for operation: - - aws_access_key_id - AWS access key for authenticating into your S3 bucket - - aws_secret_access_key - AWS secret for authenticating into your S3 bucket - - bucket - AWS S3 bucket to import from - - key - the name of the file to import - _the file must include a valid file extension ('.csv' or '.json')_ - - region - the region of the bucket +- `operation` _(required)_ - must always be `import_from_s3` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. 
The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `s3` _(required)_ - object containing required AWS S3 bucket info for operation: + - `aws_access_key_id` - AWS access key for authenticating into your S3 bucket + - `aws_secret_access_key` - AWS secret for authenticating into your S3 bucket + - `bucket` - AWS S3 bucket to import from + - `key` - the name of the file to import - _the file must include a valid file extension ('.csv' or '.json')_ + - `region` - the region of the bucket ### Body diff --git a/versioned_docs/version-4.2/developers/operations-api/clustering.md b/versioned_docs/version-4.2/developers/operations-api/clustering.md index d2a34751..8dbbff78 100644 --- a/versioned_docs/version-4.2/developers/operations-api/clustering.md +++ b/versioned_docs/version-4.2/developers/operations-api/clustering.md @@ -10,11 +10,11 @@ Adds a route/routes to either the hub or leaf server cluster configuration. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_set_routes` -- server _(required)_ - must always be `hub` or `leaf`, in most cases you should use `hub` here -- routes _(required)_ - must always be an objects array with a host and port: - - host - the host of the remote instance you are clustering to - - port - the clustering port of the remote instance you are clustering to, in most cases this is the value in `clustering.hubServer.cluster.network.port` on the remote instance `harperdb-config.yaml` +- `operation` _(required)_ - must always be `cluster_set_routes` +- `server` _(required)_ - must always be `hub` or `leaf`, in most cases you should use `hub` here +- `routes` _(required)_ - must always be an objects array with a host and port: + - `host` - the host of the remote instance you are clustering to + - `port` - the clustering port of the remote instance you are clustering to, in most cases this is the value in `clustering.hubServer.cluster.network.port` 
on the remote instance `harperdb-config.yaml` ### Body @@ -78,7 +78,7 @@ Gets all the hub and leaf server routes from the config file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_get_routes` +- `operation` _(required)_ - must always be `cluster_get_routes` ### Body @@ -122,8 +122,8 @@ Removes route(s) from hub and/or leaf server routes array in config file. Return _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_delete_routes` -- routes _required_ - Must be an array of route object(s) +- `operation` _(required)_ - must always be `cluster_delete_routes` +- `routes` _(required)_ - Must be an array of route object(s) ### Body @@ -162,14 +162,14 @@ Registers an additional HarperDB instance with associated subscriptions. Learn m _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_node` -- node*name *(required)\_ - the node name of the remote node -- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`: - - schema - the schema to replicate from - - table - the table to replicate from - - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish - a boolean which determines if transactions on the local table should be replicated on the remote table - - start*time *(optional)\_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format +- `operation` _(required)_ - must always be `add_node` +- `node_name` _(required)_ - the node name of the remote node +- `subscriptions` _(required)_ - The relationship created between nodes. 
Must be an object array and include `schema`, `table`, `subscribe` and `publish`: + - `schema` - the schema to replicate from + - `table` - the table to replicate from + - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table + - `start_time` _(optional)_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format ### Body @@ -205,13 +205,13 @@ Modifies an existing HarperDB instance registration and associated subscriptions _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `update_node` -- node*name *(required)\_ - the node name of the remote node you are updating -- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`: - - schema - the schema to replicate from - - table - the table to replicate from - - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish - a boolean which determines if transactions on the local table should be replicated on the remote table +- `operation` _(required)_ - must always be `update_node` +- `node_name` _(required)_ - the node name of the remote node you are updating +- `subscriptions` _(required)_ - The relationship created between nodes. 
Must be an object array and include `schema`, `table`, `subscribe` and `publish`: + - `schema` - the schema to replicate from + - `table` - the table to replicate from + - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table ### Body @@ -246,7 +246,7 @@ Returns an array of status objects from a cluster. A status object will contain _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_status` +- `operation` _(required)_ - must always be `cluster_status` ### Body @@ -293,10 +293,10 @@ Returns an object array of enmeshed nodes. Each node object will contain the nam _Operation is restricted to super_user roles only_ -- operation _(required)_- must always be `cluster_network` -- timeout (_optional_) - the amount of time in milliseconds to wait for a response from the network. Must be a number -- connected*nodes (\_optional*) - omit `connected_nodes` from the response. Must be a boolean. Defaults to `false` -- routes (_optional_) - omit `routes` from the response. Must be a boolean. Defaults to `false` +- `operation` _(required)_- must always be `cluster_network` +- `timeout` _(optional)_ - the amount of time in milliseconds to wait for a response from the network. Must be a number +- `connected_nodes` _(optional)_ - omit `connected_nodes` from the response. Must be a boolean. Defaults to `false` +- `routes` _(optional)_ - omit `routes` from the response. Must be a boolean. Defaults to `false` ### Body @@ -340,8 +340,8 @@ Removes a HarperDB instance and associated subscriptions from the cluster. 
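The `subscriptions` structure required by `add_node` and `update_node` above can be sketched as follows; the node, schema, and table names are hypothetical placeholders:

```python
import json

def subscription(schema, table, subscribe, publish):
    # Each subscription pairs a schema/table with two booleans controlling
    # replication direction: subscribe = remote -> local, publish = local -> remote.
    return {
        "schema": schema,
        "table": table,
        "subscribe": subscribe,
        "publish": publish,
    }

# "node-2", "dev", and "dog" are placeholder names.
body = {
    "operation": "add_node",
    "node_name": "node-2",
    "subscriptions": [subscription("dev", "dog", subscribe=True, publish=True)],
}
print(json.dumps(body, indent=2))
```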
Learn _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `remove_node` -- name _(required)_ - The name of the node you are de-registering +- `operation` _(required)_ - must always be `remove_node` +- `name` _(required)_ - The name of the node you are de-registering ### Body @@ -369,8 +369,8 @@ Learn more about HarperDB clustering here: [https://harperdb.io/docs/clustering/ _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `configure_cluster` -- connections _(required)_ - must be an object array with each object containing `node_name` and `subscriptions` for that node +- `operation` _(required)_ - must always be `configure_cluster` +- `connections` _(required)_ - must be an object array with each object containing `node_name` and `subscriptions` for that node ### Body diff --git a/versioned_docs/version-4.2/developers/operations-api/components.md b/versioned_docs/version-4.2/developers/operations-api/components.md index e8c9a236..f1a3d3ae 100644 --- a/versioned_docs/version-4.2/developers/operations-api/components.md +++ b/versioned_docs/version-4.2/developers/operations-api/components.md @@ -10,8 +10,8 @@ Creates a new component project in the component root directory using a predefin _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_component` -- project _(required)_ - the name of the project you wish to create +- `operation` _(required)_ - must always be `add_component` +- `project` _(required)_ - the name of the project you wish to create ### Body @@ -74,10 +74,10 @@ _Note: After deploying a component a restart may be required_ _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `deploy_component` -- project _(required)_ - the name of the project you wish to deploy -- package _(optional)_ - this can be any valid GitHub or NPM reference -- payload _(optional)_ - a base64-encoded string 
representation of the .tar file. Must be a string +- `operation` _(required)_ - must always be `deploy_component` +- `project` _(required)_ - the name of the project you wish to deploy +- `package` _(optional)_ - this can be any valid GitHub or NPM reference +- `payload` _(optional)_ - a base64-encoded string representation of the .tar file. Must be a string ### Body @@ -113,9 +113,9 @@ Creates a temporary `.tar` file of the specified project folder, then reads it i _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `package_component` -- project _(required)_ - the name of the project you wish to package -- skip*node_modules *(optional)\_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean +- `operation` _(required)_ - must always be `package_component` +- `project` _(required)_ - the name of the project you wish to package +- `skip_node_modules` _(optional)_ - if true, creates option for tar module that will exclude the project's node_modules directory. 
Must be a boolean ### Body @@ -146,9 +146,9 @@ Deletes a file from inside the component project or deletes the complete project _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_component` -- project _(required)_ - the name of the project you wish to delete or to delete from if using the `file` parameter -- file _(optional)_ - the path relative to your project folder of the file you wish to delete +- `operation` _(required)_ - must always be `drop_component` +- `project` _(required)_ - the name of the project you wish to delete or to delete from if using the `file` parameter +- `file` _(optional)_ - the path relative to your project folder of the file you wish to delete ### Body @@ -176,7 +176,7 @@ Gets all local component files and folders and any component config from `harper _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_components` +- `operation` _(required)_ - must always be `get_components` ### Body @@ -257,10 +257,10 @@ Gets the contents of a file inside a component project. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_component_file` -- project _(required)_ - the name of the project where the file is located -- file _(required)_ - the path relative to your project folder of the file you wish to view -- encoding _(optional)_ - the encoding that will be passed to the read file call. Defaults to `utf8` +- `operation` _(required)_ - must always be `get_component_file` +- `project` _(required)_ - the name of the project where the file is located +- `file` _(required)_ - the path relative to your project folder of the file you wish to view +- `encoding` _(optional)_ - the encoding that will be passed to the read file call. Defaults to `utf8` ### Body @@ -288,11 +288,11 @@ Creates or updates a file inside a component project. 
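As a minimal sketch of the component file operations above, a `get_component_file` request body built from the documented parameters; the project and file names are hypothetical:

```python
import json

# Project and file paths are placeholders for illustration.
body = {
    "operation": "get_component_file",
    "project": "my-component",
    "file": "resources.js",
    "encoding": "utf8",  # documented default; may be omitted
}
print(json.dumps(body, indent=2))
```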
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_component_file` -- project _(required)_ - the name of the project the file is located in -- file _(required)_ - the path relative to your project folder of the file you wish to set -- payload _(required)_ - what will be written to the file -- encoding _(optional)_ - the encoding that will be passed to the write file call. Defaults to `utf8` +- `operation` _(required)_ - must always be `set_component_file` +- `project` _(required)_ - the name of the project the file is located in +- `file` _(required)_ - the path relative to your project folder of the file you wish to set +- `payload` _(required)_ - what will be written to the file +- `encoding` _(optional)_ - the encoding that will be passed to the write file call. Defaults to `utf8` ### Body diff --git a/versioned_docs/version-4.2/developers/operations-api/custom-functions.md b/versioned_docs/version-4.2/developers/operations-api/custom-functions.md index 86f76157..7b483c8a 100644 --- a/versioned_docs/version-4.2/developers/operations-api/custom-functions.md +++ b/versioned_docs/version-4.2/developers/operations-api/custom-functions.md @@ -10,7 +10,7 @@ Returns the state of the Custom functions server. This includes whether it is en _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `custom_function_status` +- `operation` _(required)_ - must always be `custom_function_status` ### Body @@ -38,7 +38,7 @@ Returns an array of projects within the Custom Functions root project directory. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_custom_functions` +- `operation` _(required)_ - must always be `get_custom_functions` ### Body @@ -68,10 +68,10 @@ Returns the content of the specified file as text. 
HarperDB Studio uses this cal _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_custom_function` -- project _(required)_ - the name of the project containing the file for which you wish to get content -- type _(required)_ - the name of the sub-folder containing the file for which you wish to get content - must be either routes or helpers -- file _(required)_ - The name of the file for which you wish to get content - should not include the file extension (which is always .js) +- `operation` _(required)_ - must always be `get_custom_function` +- `project` _(required)_ - the name of the project containing the file for which you wish to get content +- `type` _(required)_ - the name of the sub-folder containing the file for which you wish to get content - must be either routes or helpers +- `file` _(required)_ - The name of the file for which you wish to get content - should not include the file extension (which is always .js) ### Body @@ -100,11 +100,11 @@ Updates the content of the specified file. 
HarperDB Studio uses this call to sav _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_custom_function` -- project _(required)_ - the name of the project containing the file for which you wish to set content -- type _(required)_ - the name of the sub-folder containing the file for which you wish to set content - must be either routes or helpers -- file _(required)_ - the name of the file for which you wish to set content - should not include the file extension (which is always .js) -- function*content *(required)\_ - the content you wish to save into the specified file +- `operation` _(required)_ - must always be `set_custom_function` +- `project` _(required)_ - the name of the project containing the file for which you wish to set content +- `type` _(required)_ - the name of the sub-folder containing the file for which you wish to set content - must be either routes or helpers +- `file` _(required)_ - the name of the file for which you wish to set content - should not include the file extension (which is always .js) +- `function_content` _(required)_ - the content you wish to save into the specified file ### Body @@ -134,10 +134,10 @@ Deletes the specified file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_custom_function` -- project _(required)_ - the name of the project containing the file you wish to delete -- type _(required)_ - the name of the sub-folder containing the file you wish to delete. Must be either routes or helpers -- file _(required)_ - the name of the file you wish to delete. Should not include the file extension (which is always .js) +- `operation` _(required)_ - must always be `drop_custom_function` +- `project` _(required)_ - the name of the project containing the file you wish to delete +- `type` _(required)_ - the name of the sub-folder containing the file you wish to delete. 
Must be either routes or helpers +- `file` _(required)_ - the name of the file you wish to delete. Should not include the file extension (which is always .js) ### Body @@ -166,8 +166,8 @@ Creates a new project folder in the Custom Functions root project directory. It _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_custom_function_project` -- project _(required)_ - the name of the project you wish to create +- `operation` _(required)_ - must always be `add_custom_function_project` +- `project` _(required)_ - the name of the project you wish to create ### Body @@ -194,8 +194,8 @@ Deletes the specified project folder and all of its contents. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_custom_function_project` -- project _(required)_ - the name of the project you wish to delete +- `operation` _(required)_ - must always be `drop_custom_function_project` +- `project` _(required)_ - the name of the project you wish to delete ### Body @@ -222,9 +222,9 @@ Creates a .tar file of the specified project folder, then reads it into a base64 _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `package_custom_function_project` -- project _(required)_ - the name of the project you wish to package up for deployment -- skip*node_modules *(optional)\_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean. +- `operation` _(required)_ - must always be `package_custom_function_project` +- `project` _(required)_ - the name of the project you wish to package up for deployment +- `skip_node_modules` _(optional)_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean. 
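The parameters above can be assembled programmatically; a minimal sketch for `package_custom_function_project`, where the project name is a placeholder and the optional boolean is only included when set:

```python
import json

def package_project(project, skip_node_modules=False):
    """Build a package_custom_function_project body.

    skip_node_modules is the optional boolean described above and is
    omitted entirely when left at its default.
    """
    body = {"operation": "package_custom_function_project", "project": project}
    if skip_node_modules:
        body["skip_node_modules"] = True
    return body

# "my-project" is a placeholder name.
print(json.dumps(package_project("my-project", skip_node_modules=True)))
```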
### Body @@ -254,9 +254,9 @@ Takes the output of package_custom_function_project, decrypts the base64-encoded _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `deploy_custom_function_project` -- project _(required)_ - the name of the project you wish to deploy. Must be a string -- payload _(required)_ - a base64-encoded string representation of the .tar file. Must be a string +- `operation` _(required)_ - must always be `deploy_custom_function_project` +- `project` _(required)_ - the name of the project you wish to deploy. Must be a string +- `payload` _(required)_ - a base64-encoded string representation of the .tar file. Must be a string ### Body diff --git a/versioned_docs/version-4.2/developers/operations-api/databases-and-tables.md b/versioned_docs/version-4.2/developers/operations-api/databases-and-tables.md index 5efb6861..140f5e53 100644 --- a/versioned_docs/version-4.2/developers/operations-api/databases-and-tables.md +++ b/versioned_docs/version-4.2/developers/operations-api/databases-and-tables.md @@ -8,7 +8,7 @@ title: Databases and Tables Returns the definitions of all databases and tables within the database. Record counts about 5000 records are estimated, as determining the exact count can be expensive. When the record count is estimated, this is indicated by the inclusion of a confidence interval of `estimated_record_range`. If you need the exact count, you can include an `"exact_count": true` in the operation, but be aware that this requires a full table scan (may be expensive). -- operation _(required)_ - must always be `describe_all` +- `operation` _(required)_ - must always be `describe_all` ### Body @@ -63,8 +63,8 @@ Returns the definitions of all databases and tables within the database. Record Returns the definitions of all tables within the specified database. 
-- operation _(required)_ - must always be `describe_database` -- database _(optional)_ - database where the table you wish to describe lives. The default is `data` +- `operation` _(required)_ - must always be `describe_database` +- `database` _(optional)_ - database where the table you wish to describe lives. The default is `data` ### Body @@ -118,9 +118,9 @@ Returns the definitions of all tables within the specified database. Returns the definition of the specified table. -- operation _(required)_ - must always be `describe_table` -- table _(required)_ - table you wish to describe -- database _(optional)_ - database where the table you wish to describe lives. The default is `data` +- `operation` _(required)_ - must always be `describe_table` +- `table` _(required)_ - table you wish to describe +- `database` _(optional)_ - database where the table you wish to describe lives. The default is `data` ### Body @@ -174,8 +174,8 @@ Create a new database. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `create_database` -- database _(optional)_ - name of the database you are creating. The default is `data` +- `operation` _(required)_ - must always be `create_database` +- `database` _(optional)_ - name of the database you are creating. The default is `data` ### Body @@ -202,8 +202,8 @@ Drop an existing database. NOTE: Dropping a database will delete all tables and _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `drop_database` -- database _(required)_ - name of the database you are dropping +- `operation` _(required)_ - this should always be `drop_database` +- `database` _(required)_ - name of the database you are dropping ### Body @@ -230,15 +230,15 @@ Create a new table within a database. 
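The describe operations above take equally small bodies. A sketch of a `describe_table` request, using the documented `data` default for `database` (the table name is a placeholder):

```python
import json

def describe_table(table, database="data"):
    # database defaults to "data", per the parameter list above
    return {"operation": "describe_table", "table": table, "database": database}

print(json.dumps(describe_table("dog")))
```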
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `create_table` -- database _(optional)_ - name of the database where you want your table to live. If the database does not exist, it will be created. If the `database` property is not provided it will default to `data`. -- table _(required)_ - name of the table you are creating -- primary*key *(required)\_ - primary key for the table -- attributes _(optional)_ - an array of attributes that specifies the schema for the table, that is the set of attributes for the table. When attributes are supplied the table will not be considered a "dynamic schema" table, and attributes will not be auto-added when records with new properties are inserted. Each attribute is specified as: - - name _(required)_ - the name of the attribute - - indexed _(optional)_ - indicates if the attribute should be indexed - - type _(optional)_ - specifies the data type of the attribute (can be String, Int, Float, Date, ID, Any) -- expiration _(optional)_ - specifies the time-to-live or expiration of records in the table before they are evicted (records are not evicted on any timer if not specified). This is specified in seconds. +- `operation` _(required)_ - must always be `create_table` +- `database` _(optional)_ - name of the database where you want your table to live. If the database does not exist, it will be created. If the `database` property is not provided it will default to `data`. +- `table` _(required)_ - name of the table you are creating +- `primary_key` _(required)_ - primary key for the table +- `attributes` _(optional)_ - an array of attributes that specifies the schema for the table, that is the set of attributes for the table. When attributes are supplied the table will not be considered a "dynamic schema" table, and attributes will not be auto-added when records with new properties are inserted. 
Each attribute is specified as: + - `name` _(required)_ - the name of the attribute + - `indexed` _(optional)_ - indicates if the attribute should be indexed + - `type` _(optional)_ - specifies the data type of the attribute (can be String, Int, Float, Date, ID, Any) +- `expiration` _(optional)_ - specifies the time-to-live or expiration of records in the table before they are evicted (records are not evicted on any timer if not specified). This is specified in seconds. ### Body @@ -267,9 +267,9 @@ Drop an existing database table. NOTE: Dropping a table will delete all associat _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `drop_table` -- database _(optional)_ - database where the table you are dropping lives. The default is `data` -- table _(required)_ - name of the table you are dropping +- `operation` _(required)_ - this should always be `drop_table` +- `database` _(optional)_ - database where the table you are dropping lives. The default is `data` +- `table` _(required)_ - name of the table you are dropping ### Body @@ -297,10 +297,10 @@ Create a new attribute within the specified table. **The create_attribute operat _Note: HarperDB will automatically create new attributes on insert and update if they do not already exist within the schema._ -- operation _(required)_ - must always be `create_attribute` -- database _(optional)_ - name of the database of the table you want to add your attribute. The default is `data` -- table _(required)_ - name of the table where you want to add your attribute to live -- attribute _(required)_ - name for the attribute +- `operation` _(required)_ - must always be `create_attribute` +- `database` _(optional)_ - name of the database of the table you want to add your attribute. 
The default is `data` +- `table` _(required)_ - name of the table where you want to add your attribute to live +- `attribute` _(required)_ - name for the attribute ### Body @@ -331,10 +331,10 @@ Drop an existing attribute from the specified table. NOTE: Dropping an attribute _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `drop_attribute` -- database _(optional)_ - database where the table you are dropping lives. The default is `data` -- table _(required)_ - table where the attribute you are dropping lives -- attribute _(required)_ - attribute that you intend to drop +- `operation` _(required)_ - this should always be `drop_attribute` +- `database` _(optional)_ - database where the table you are dropping lives. The default is `data` +- `table` _(required)_ - table where the attribute you are dropping lives +- `attribute` _(required)_ - attribute that you intend to drop ### Body @@ -365,10 +365,10 @@ It is important to note that trying to copy a database file that is in use (Harp _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `get_backup` -- database _(required)_ - this is the database that will be snapshotted and returned -- table _(optional)_ - this will specify a specific table to backup -- tables _(optional)_ - this will specify a specific set of tables to backup +- `operation` _(required)_ - this should always be `get_backup` +- `database` _(required)_ - this is the database that will be snapshotted and returned +- `table` _(optional)_ - this will specify a specific table to backup +- `tables` _(optional)_ - this will specify a specific set of tables to backup ### Body diff --git a/versioned_docs/version-4.2/developers/operations-api/jobs.md b/versioned_docs/version-4.2/developers/operations-api/jobs.md index 173125a1..cf71fa00 100644 --- a/versioned_docs/version-4.2/developers/operations-api/jobs.md +++ 
b/versioned_docs/version-4.2/developers/operations-api/jobs.md @@ -8,8 +8,8 @@ title: Jobs Returns job status, metrics, and messages for the specified job ID. -- operation _(required)_ - must always be `get_job` -- id _(required)_ - the id of the job you wish to view +- `operation` _(required)_ - must always be `get_job` +- `id` _(required)_ - the id of the job you wish to view ### Body @@ -50,9 +50,9 @@ Returns a list of job statuses, metrics, and messages for all jobs executed with _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `search_jobs_by_start_date` -- from*date *(required)\_ - the date you wish to start the search -- to*date *(required)\_ - the date you wish to end the search +- `operation` _(required)_ - must always be `search_jobs_by_start_date` +- `from_date` _(required)_ - the date you wish to start the search +- `to_date` _(required)_ - the date you wish to end the search ### Body diff --git a/versioned_docs/version-4.2/developers/operations-api/logs.md b/versioned_docs/version-4.2/developers/operations-api/logs.md index 761a164d..d170b297 100644 --- a/versioned_docs/version-4.2/developers/operations-api/logs.md +++ b/versioned_docs/version-4.2/developers/operations-api/logs.md @@ -10,13 +10,13 @@ Returns log outputs from the primary HarperDB log based on the provided search c _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_Log` -- start _(optional)_ - result to start with. Must be a number -- limit _(optional)_ - number of results returned. Default behavior is 100. Must be a number -- level _(optional)_ - error level to filter on. Default behavior is all levels. Must be `error`, `info`, or `null` -- from _(optional)_ - date to begin showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss` -- until _(optional)_ - date to end showing log results. 
Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss` -- order _(optional)_ - order to display logs desc or asc by timestamp +- `operation` _(required)_ - must always be `read_log` +- `start` _(optional)_ - result to start with. Must be a number +- `limit` _(optional)_ - number of results returned. Default behavior is 100. Must be a number +- `level` _(optional)_ - error level to filter on. Default behavior is all levels. Must be `error`, `info`, or `null` +- `from` _(optional)_ - date to begin showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss` +- `until` _(optional)_ - date to end showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss` +- `order` _(optional)_ - order in which to display logs by timestamp, `desc` or `asc` ### Body @@ -68,12 +68,12 @@ Returns all transactions logged for the specified database table. You may filter _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_transaction_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- from _(optional)_ - time format must be millisecond-based epoch in UTC -- to _(optional)_ - time format must be millisecond-based epoch in UTC -- limit _(optional)_ - max number of logs you want to receive. Must be a number +- `operation` _(required)_ - must always be `read_transaction_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `from` _(optional)_ - time format must be millisecond-based epoch in UTC +- `to` _(optional)_ - time format must be millisecond-based epoch in UTC +- `limit` _(optional)_ - max number of logs you want to receive.
Must be a number ### Body @@ -271,10 +271,10 @@ Deletes transaction log data for the specified database table that is older than _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_transaction_log_before` -- schema _(required)_ - schema under which the transaction log resides. Must be a string -- table _(required)_ - table under which the transaction log resides. Must be a string -- timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC +- `operation` _(required)_ - must always be `delete_transaction_log_before` +- `schema` _(required)_ - schema under which the transaction log resides. Must be a string +- `table` _(required)_ - table under which the transaction log resides. Must be a string +- `timestamp` _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC ### Body @@ -303,11 +303,11 @@ AuditLog must be enabled in the HarperDB configuration file to make this request _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - possibilities are `hash_value`, `timestamp` and `username` -- search*values *(optional)\_ - an array of string or numbers relating to search_type +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - possibilities are `hash_value`, `timestamp` and `username` +- `search_values` _(optional)_ - an array of strings or numbers relating to search_type ### Body @@ -398,11 +398,11 @@ AuditLog must be enabled in the HarperDB configuration file to make this request _Operation is restricted to
super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - timestamp -- search*values *(optional)\_ - an array containing a maximum of two values [`from_timestamp`, `to_timestamp`] defining the range of transactions you would like to view. +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - timestamp +- `search_values` _(optional)_ - an array containing a maximum of two values [`from_timestamp`, `to_timestamp`] defining the range of transactions you would like to view. - Timestamp format is millisecond-based epoch in UTC - If no items are supplied then all transactions are returned - If only one entry is supplied then all transactions after the supplied timestamp will be returned @@ -519,11 +519,11 @@ AuditLog must be enabled in the HarperDB configuration file to make this request _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - username -- search*values *(optional)\_ - the HarperDB user for whom you would like to view transactions +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - username +- `search_values` _(optional)_ - the HarperDB user for whom you would like to view transactions ### Body @@ -639,11 +639,11 @@ AuditLog must be enabled in the HarperDB configuration 
file to make this request _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - hash_value -- search*values *(optional)\_ - an array of hash_attributes for which you wish to see transaction logs +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - hash_value +- `search_values` _(optional)_ - an array of hash_attributes for which you wish to see transaction logs ### Body @@ -707,10 +707,10 @@ AuditLog must be enabled in the HarperDB configuration file to make this request _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_audit_logs_before` -- schema _(required)_ - schema under which the transaction log resides. Must be a string -- table _(required)_ - table under which the transaction log resides. Must be a string -- timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC +- `operation` _(required)_ - must always be `delete_audit_logs_before` +- `schema` _(required)_ - schema under which the transaction log resides. Must be a string +- `table` _(required)_ - table under which the transaction log resides. Must be a string +- `timestamp` _(required)_ - records older than this date will be deleted. 
Format is millisecond-based epoch in UTC ### Body diff --git a/versioned_docs/version-4.2/developers/operations-api/registration.md b/versioned_docs/version-4.2/developers/operations-api/registration.md index f31f8370..7812e843 100644 --- a/versioned_docs/version-4.2/developers/operations-api/registration.md +++ b/versioned_docs/version-4.2/developers/operations-api/registration.md @@ -8,7 +8,7 @@ title: Registration Returns the registration data of the HarperDB instance. -- operation _(required)_ - must always be `registration_info` +- `operation` _(required)_ - must always be `registration_info` ### Body @@ -37,7 +37,7 @@ Returns the HarperDB fingerprint, uniquely generated based on the machine, for l _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_fingerprint` +- `operation` _(required)_ - must always be `get_fingerprint` ### Body @@ -55,9 +55,9 @@ Sets the HarperDB license as generated by HarperDB License Management software. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_license` -- key _(required)_ - your license key -- company _(required)_ - the company that was used in the license +- `operation` _(required)_ - must always be `set_license` +- `key` _(required)_ - your license key +- `company` _(required)_ - the company that was used in the license ### Body diff --git a/versioned_docs/version-4.2/developers/operations-api/sql-operations.md b/versioned_docs/version-4.2/developers/operations-api/sql-operations.md index 16e68ecc..6745f1c2 100644 --- a/versioned_docs/version-4.2/developers/operations-api/sql-operations.md +++ b/versioned_docs/version-4.2/developers/operations-api/sql-operations.md @@ -8,8 +8,8 @@ title: SQL Operations Executes the provided SQL statement. The SELECT statement is used to query data from the database. 
-- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -44,8 +44,8 @@ Executes the provided SQL statement. The SELECT statement is used to query data Executes the provided SQL statement. The INSERT statement is used to add one or more rows to a database table. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -72,8 +72,8 @@ Executes the provided SQL statement. The INSERT statement is used to add one or Executes the provided SQL statement. The UPDATE statement is used to change the values of specified attributes in one or more rows in a database table. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -100,8 +100,8 @@ Executes the provided SQL statement. The UPDATE statement is used to change the Executes the provided SQL statement. The DELETE statement is used to remove one or more rows of data from a database table. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body diff --git a/versioned_docs/version-4.2/developers/operations-api/token-authentication.md b/versioned_docs/version-4.2/developers/operations-api/token-authentication.md index b9ff5b31..178db842 100644 --- a/versioned_docs/version-4.2/developers/operations-api/token-authentication.md +++ b/versioned_docs/version-4.2/developers/operations-api/token-authentication.md @@ -10,9 +10,9 @@ Creates the tokens needed for authentication: operation & refresh token. 
_Note - this operation does not require authorization to be set_ -- operation _(required)_ - must always be `create_authentication_tokens` -- username _(required)_ - username of user to generate tokens for -- password _(required)_ - password of user to generate tokens for +- `operation` _(required)_ - must always be `create_authentication_tokens` +- `username` _(required)_ - username of user to generate tokens for +- `password` _(required)_ - password of user to generate tokens for ### Body @@ -39,8 +39,8 @@ _Note - this operation does not require authorization to be set_ This operation creates a new operation token. -- operation _(required)_ - must always be `refresh_operation_token` -- refresh*token *(required)\_ - the refresh token that was provided when tokens were created +- `operation` _(required)_ - must always be `refresh_operation_token` +- `refresh_token` _(required)_ - the refresh token that was provided when tokens were created ### Body diff --git a/versioned_docs/version-4.2/developers/operations-api/users-and-roles.md b/versioned_docs/version-4.2/developers/operations-api/users-and-roles.md index 5a6807b6..250d83f7 100644 --- a/versioned_docs/version-4.2/developers/operations-api/users-and-roles.md +++ b/versioned_docs/version-4.2/developers/operations-api/users-and-roles.md @@ -10,7 +10,7 @@ Returns a list of all roles. Learn more about HarperDB roles here: [https://harp _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_roles` +- `operation` _(required)_ - must always be `list_roles` ### Body @@ -80,11 +80,11 @@ Creates a new role with the specified permissions. 
Learn more about HarperDB rol _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_role` -- role _(required)_ - name of role you are defining -- permission _(required)_ - object defining permissions for users associated with this role: - - super*user *(optional)\_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. - - structure_user (optional) - boolean OR array of schema names (as strings). If boolean, user can create new schemas and tables. If array of strings, users can only manage tables within the specified schemas. This overrides any individual table permissions for specified schemas, or for all schemas if the value is true. +- `operation` _(required)_ - must always be `add_role` +- `role` _(required)_ - name of role you are defining +- `permission` _(required)_ - object defining permissions for users associated with this role: + - `super_user` _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. + - `structure_user` _(optional)_ - boolean OR array of schema names (as strings). If boolean, user can create new schemas and tables. If array of strings, users can only manage tables within the specified schemas. This overrides any individual table permissions for specified schemas, or for all schemas if the value is true. ### Body @@ -158,12 +158,12 @@ Modifies an existing role with the specified permissions. 
updates permissions fr _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `alter_role` -- id _(required)_ - the id value for the role you are altering -- role _(optional)_ - name value to update on the role you are altering -- permission _(required)_ - object defining permissions for users associated with this role: - - super*user *(optional)\_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. - - structure_user (optional) - boolean OR array of schema names (as strings). If boolean, user can create new schemas and tables. If array of strings, users can only manage tables within the specified schemas. This overrides any individual table permissions for specified schemas, or for all schemas if the value is true. +- `operation` _(required)_ - must always be `alter_role` +- `id` _(required)_ - the id value for the role you are altering +- `role` _(optional)_ - name value to update on the role you are altering +- `permission` _(required)_ - object defining permissions for users associated with this role: + - `super_user` _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. + - `structure_user` _(optional)_ - boolean OR array of schema names (as strings). If boolean, user can create new schemas and tables. If array of strings, users can only manage tables within the specified schemas. This overrides any individual table permissions for specified schemas, or for all schemas if the value is true. ### Body @@ -237,8 +237,8 @@ Deletes an existing role from the database. 
NOTE: Role with associated users can _Operation is restricted to super_user roles only_ -- operation _(required)_ - this must always be `drop_role` -- id _(required)_ - this is the id of the role you are dropping +- `operation` _(required)_ - this must always be `drop_role` +- `id` _(required)_ - this is the id of the role you are dropping ### Body @@ -265,7 +265,7 @@ Returns a list of all users. Learn more about HarperDB users here: [https://harp _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_users` +- `operation` _(required)_ - must always be `list_users` ### Body @@ -377,7 +377,7 @@ _Operation is restricted to super_user roles only_ Returns user data for the associated user credentials. -- operation _(required)_ - must always be `user_info` +- `operation` _(required)_ - must always be `user_info` ### Body @@ -415,11 +415,11 @@ Creates a new user with the specified role and credentials. Learn more about Har _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_user` -- role _(required)_ - 'role' name value of the role you wish to assign to the user. See `add_role` for more detail -- username _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash -- password _(required)_ - clear text for password. HarperDB will encrypt the password upon receipt -- active _(required)_ - boolean value for status of user's access to your HarperDB instance. If set to false, user will not be able to access your instance of HarperDB. +- `operation` _(required)_ - must always be `add_user` +- `role` _(required)_ - 'role' name value of the role you wish to assign to the user. See `add_role` for more detail +- `username` _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash +- `password` _(required)_ - clear text for password. 
HarperDB will encrypt the password upon receipt +- `active` _(required)_ - boolean value for status of user's access to your HarperDB instance. If set to false, user will not be able to access your instance of HarperDB. ### Body @@ -449,11 +449,11 @@ Modifies an existing user's role and/or credentials. Learn more about HarperDB u _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `alter_user` -- username _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash. -- password _(optional)_ - clear text for password. HarperDB will encrypt the password upon receipt -- role _(optional)_ - `role` name value of the role you wish to assign to the user. See `add_role` for more detail -- active _(optional)_ - status of user's access to your HarperDB instance. See `add_role` for more detail +- `operation` _(required)_ - must always be `alter_user` +- `username` _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash. +- `password` _(optional)_ - clear text for password. HarperDB will encrypt the password upon receipt +- `role` _(optional)_ - `role` name value of the role you wish to assign to the user. See `add_role` for more detail +- `active` _(optional)_ - status of user's access to your HarperDB instance. See `add_role` for more detail ### Body @@ -487,8 +487,8 @@ Deletes an existing user by username. 
Learn more about HarperDB users here: [htt _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_user` -- username _(required)_ - username assigned to the user +- `operation` _(required)_ - must always be `drop_user` +- `username` _(required)_ - username assigned to the user ### Body diff --git a/versioned_docs/version-4.2/developers/operations-api/utilities.md b/versioned_docs/version-4.2/developers/operations-api/utilities.md index f14f8e4c..734c55af 100644 --- a/versioned_docs/version-4.2/developers/operations-api/utilities.md +++ b/versioned_docs/version-4.2/developers/operations-api/utilities.md @@ -10,7 +10,7 @@ Restarts the HarperDB instance. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `restart` +- `operation` _(required)_ - must always be `restart` ### Body @@ -36,8 +36,8 @@ Restarts servers for the specified HarperDB service. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `restart_service` -- service _(required)_ - must be one of: `http_workers`, `clustering_config` or `clustering` +- `operation` _(required)_ - must always be `restart_service` +- `service` _(required)_ - must be one of: `http_workers`, `clustering_config` or `clustering` ### Body @@ -64,8 +64,8 @@ Returns detailed metrics on the host system. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `system_information` -- attributes _(optional)_ - string array of top level attributes desired in the response, if no value is supplied all attributes will be returned. Available attributes are: ['system', 'time', 'cpu', 'memory', 'disk', 'network', 'harperdb_processes', 'table_size', 'replication'] +- `operation` _(required)_ - must always be `system_information` +- `attributes` _(optional)_ - string array of top level attributes desired in the response; if no value is supplied, all attributes will be returned.
Available attributes are: ['system', 'time', 'cpu', 'memory', 'disk', 'network', 'harperdb_processes', 'table_size', 'replication'] ### Body @@ -83,10 +83,10 @@ Delete data before the specified timestamp on the specified database table exclu _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_records_before` -- date _(required)_ - records older than this date will be deleted. Supported format looks like: `YYYY-MM-DDThh:mm:ss.sZ` -- schema _(required)_ - name of the schema where you are deleting your data -- table _(required)_ - name of the table where you are deleting your data +- `operation` _(required)_ - must always be `delete_records_before` +- `date` _(required)_ - records older than this date will be deleted. Supported format looks like: `YYYY-MM-DDThh:mm:ss.sZ` +- `schema` _(required)_ - name of the schema where you are deleting your data +- `table` _(required)_ - name of the table where you are deleting your data ### Body @@ -114,10 +114,10 @@ _Operation is restricted to super_user roles only_ Exports data based on a given search operation to a local file in JSON or CSV format. -- operation _(required)_ - must always be `export_local` -- format _(required)_ - the format you wish to export the data, options are `json` & `csv` -- path _(required)_ - path local to the server to export the data -- search*operation *(required)\_ - search_operation of `search_by_hash`, `search_by_value` or `sql` +- `operation` _(required)_ - must always be `export_local` +- `format` _(required)_ - the format you wish to export the data, options are `json` & `csv` +- `path` _(required)_ - path local to the server to export the data +- `search_operation` _(required)_ - search_operation of `search_by_hash`, `search_by_value` or `sql` ### Body @@ -147,10 +147,10 @@ Exports data based on a given search operation to a local file in JSON or CSV fo Exports data based on a given search operation from table to AWS S3 in JSON or CSV format. 
-- operation _(required)_ - must always be `export_to_s3` -- format _(required)_ - the format you wish to export the data, options are `json` & `csv` -- s3 _(required)_ - details your access keys, bucket, bucket region and key for saving the data to S3 -- search*operation *(required)\_ - search_operation of `search_by_hash`, `search_by_value` or `sql` +- `operation` _(required)_ - must always be `export_to_s3` +- `format` _(required)_ - the format you wish to export the data, options are `json` & `csv` +- `s3` _(required)_ - details your access keys, bucket, bucket region and key for saving the data to S3 +- `search_operation` _(required)_ - search_operation of `search_by_hash`, `search_by_value` or `sql` ### Body @@ -189,9 +189,9 @@ Executes npm install against specified custom function projects. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `install_node_modules` -- projects _(required)_ - must ba an array of custom functions projects. -- dry*run *(optional)\_ - refers to the npm --dry-run flag: [https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run](https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run). Defaults to false. +- `operation` _(required)_ - must always be `install_node_modules` +- `projects` _(required)_ - must be an array of custom functions projects. +- `dry_run` _(optional)_ - refers to the npm --dry-run flag: [https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run](https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run). Defaults to false. ### Body @@ -211,9 +211,9 @@ Modifies the HarperDB configuration file parameters.
Must follow with a restart _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_configuration` -- logging*level *(example/optional)\_ - one or more configuration keywords to be updated in the HarperDB configuration file -- clustering*enabled *(example/optional)\_ - one or more configuration keywords to be updated in the HarperDB configuration file +- `operation` _(required)_ - must always be `set_configuration` +- `logging_level` _(optional)_ - one or more configuration keywords to be updated in the HarperDB configuration file +- `clustering_enabled` _(optional)_ - one or more configuration keywords to be updated in the HarperDB configuration file ### Body @@ -241,7 +241,7 @@ Returns the HarperDB configuration parameters. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_configuration` +- `operation` _(required)_ - must always be `get_configuration` ### Body diff --git a/versioned_docs/version-4.2/developers/security/basic-auth.md b/versioned_docs/version-4.2/developers/security/basic-auth.md index 4f89d919..f3d76e29 100644 --- a/versioned_docs/version-4.2/developers/security/basic-auth.md +++ b/versioned_docs/version-4.2/developers/security/basic-auth.md @@ -8,7 +8,7 @@ HarperDB uses Basic Auth and JSON Web Tokens (JWTs) to secure our HTTP requests. ** \_**You do not need to log in separately. Basic Auth is added to each HTTP request like create_schema, create_table, insert etc… via headers.**\_ ** -A header is added to each HTTP request. The header key is **“Authorization”** the header value is **“Basic <<your username and password buffer token>>”** +A header is added to each HTTP request. 
The header key is **"Authorization"** and the header value is **"Basic <<your username and password buffer token>>"** ## Authentication in HarperDB Studio diff --git a/versioned_docs/version-4.2/developers/security/users-and-roles.md b/versioned_docs/version-4.2/developers/security/users-and-roles.md index 32af4b0e..d54b279d 100644 --- a/versioned_docs/version-4.2/developers/security/users-and-roles.md +++ b/versioned_docs/version-4.2/developers/security/users-and-roles.md @@ -47,7 +47,7 @@ When creating a new, user-defined role in a HarperDB instance, you must provide Example JSON for `add_role` request -```json +```jsonc { "operation": "add_role", "role": "software_developer", @@ -98,7 +98,7 @@ There are two parts to a permissions set: Each table that a role should be given some level of CRUD permissions to must be included in the `tables` array for its schema in the roles permissions JSON passed to the API (_see example above_). -```json +```jsonc { "table_name": { // the name of the table to define CRUD perms for "read": boolean, // access to read from this table diff --git a/versioned_docs/version-4.2/developers/sql-guide/date-functions.md b/versioned_docs/version-4.2/developers/sql-guide/date-functions.md index 9ecebdb1..535ac7b6 100644 --- a/versioned_docs/version-4.2/developers/sql-guide/date-functions.md +++ b/versioned_docs/version-4.2/developers/sql-guide/date-functions.md @@ -152,17 +152,17 @@ Subtracts the defined amount of time from the date provided in UTC and returns t ### EXTRACT(date, date_part) -Extracts and returns the date_part requested as a String value.
Accepted date_part values below show value returned for date = "2020-03-26T15:13:02.041+000" | date_part | Example return value\* | | ----------- | ---------------------- | -| year | “2020” | -| month | “3” | -| day | “26” | -| hour | “15” | -| minute | “13” | -| second | “2” | -| millisecond | “41” | +| year | "2020" | +| month | "3" | +| day | "26" | +| hour | "15" | +| minute | "13" | +| second | "2" | +| millisecond | "41" | ``` "SELECT EXTRACT(1587568845765, 'year') AS extract_result" returns diff --git a/versioned_docs/version-4.2/developers/sql-guide/functions.md b/versioned_docs/version-4.2/developers/sql-guide/functions.md index 0360b1e8..853fe73d 100644 --- a/versioned_docs/version-4.2/developers/sql-guide/functions.md +++ b/versioned_docs/version-4.2/developers/sql-guide/functions.md @@ -10,146 +10,132 @@ This SQL keywords reference contains the SQL functions available in HarperDB. ### Aggregate -| Keyword | Syntax | Description | -| ---------------- | ----------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | -| AVG | AVG(_expression_) | Returns the average of a given numeric expression. | -| COUNT | SELECT COUNT(_column_name_) FROM _schema.table_ WHERE _condition_ | Returns the number records that match the given criteria. Nulls are not counted. | -| GROUP_CONCAT | GROUP*CONCAT(\_expression*) | Returns a string with concatenated values that are comma separated and that are non-null from a group. Will return null when there are non-null values. | -| MAX | SELECT MAX(_column_name_) FROM _schema.table_ WHERE _condition_ | Returns largest value in a specified column. | -| MIN | SELECT MIN(_column_name_) FROM _schema.table_ WHERE _condition_ | Returns smallest value in a specified column. | -| SUM | SUM(_column_name_) | Returns the sum of the numeric values provided. 
| -| ARRAY\* | ARRAY(_expression_) | Returns a list of data as a field. | -| DISTINCT_ARRAY\* | DISTINCT*ARRAY(\_expression*) | When placed around a standard ARRAY() function, returns a distinct (deduplicated) results set. | - -\*For more information on ARRAY() and DISTINCT_ARRAY() see [this blog](https://www.harperdb.io/post/sql-queries-to-complex-objects). +| Keyword | Syntax | Description | +| ---------------- | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `AVG` | `AVG(expression)` | Returns the average of a given numeric expression. | +| `COUNT` | `SELECT COUNT(column_name) FROM schema.table WHERE condition` | Returns the number of records that match the given criteria. Nulls are not counted. | +| `GROUP_CONCAT` | `GROUP_CONCAT(expression)` | Returns a string with concatenated values that are comma separated and that are non-null from a group. Will return null when there are no non-null values. | +| `MAX` | `SELECT MAX(column_name) FROM schema.table WHERE condition` | Returns the largest value in a specified column. | +| `MIN` | `SELECT MIN(column_name) FROM schema.table WHERE condition` | Returns the smallest value in a specified column. | +| `SUM` | `SUM(column_name)` | Returns the sum of the numeric values provided. | +| `ARRAY`* | `ARRAY(expression)` | Returns a list of data as a field. | +| `DISTINCT_ARRAY`* | `DISTINCT_ARRAY(expression)` | When placed around a standard `ARRAY()` function, returns a distinct (deduplicated) result set. | + +*For more information on `ARRAY()` and `DISTINCT_ARRAY()` see [this blog](https://www.harperdb.io/post/sql-queries-to-complex-objects).
### Conversion | Keyword | Syntax | Description | | ------- | ----------------------------------------------- | ---------------------------------------------------------------------- | -| CAST | CAST(_expression AS datatype(length)_) | Converts a value to a specified datatype. | -| CONVERT | CONVERT(_data_type(length), expression, style_) | Converts a value from one datatype to a different, specified datatype. | +| `CAST` | `CAST(expression AS datatype(length))` | Converts a value to a specified datatype. | +| `CONVERT` | `CONVERT(data_type(length), expression, style)` | Converts a value from one datatype to a different, specified datatype. | ### Date & Time -| Keyword | Syntax | Description | -| ----------------- | ----------------- | --------------------------------------------------------------------------------------------------------------------- | -| CURRENT_DATE | CURRENT_DATE() | Returns the current date in UTC in “YYYY-MM-DD” String format. | -| CURRENT_TIME | CURRENT_TIME() | Returns the current time in UTC in “HH:mm:ss.SSS” string format. | -| CURRENT_TIMESTAMP | CURRENT_TIMESTAMP | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. For more information, go here. | - -| -| DATE | DATE([_date_string_]) | Formats and returns the date*string argument in UTC in ‘YYYY-MM-DDTHH:mm:ss.SSSZZ’ string format. If a date_string is not provided, the function will return the current UTC date/time value in the return format defined above. For more information, go here. | -| -| DATE_ADD | DATE_ADD(\_date, value, interval*) | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | -| -| DATE*DIFF | DATEDIFF(\_date_1, date_2[, interval]*) | Returns the difference between the two date values passed based on the interval as a Number. 
If an interval is not provided, the function will return the difference value in milliseconds. For more information, go here. | -| -| DATE*FORMAT | DATE_FORMAT(\_date, format*) | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, go here. | -| -| DATE*SUB | DATE_SUB(\_date, format*) | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted date*sub interval values- Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | -| -| DAY | DAY(\_date*) | Return the day of the month for the given date. | -| -| DAYOFWEEK | DAYOFWEEK(_date_) | Returns the numeric value of the weekday of the date given(“YYYY-MM-DD”).NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. | -| EXTRACT | EXTRACT(_date, date_part_) | Extracts and returns the date*part requested as a String value. Accepted date_part values below show value returned for date = “2020-03-26T15:13:02.041+000” For more information, go here. | -| -| GETDATE | GETDATE() | Returns the current Unix Timestamp in milliseconds. | -| GET_SERVER_TIME | GET_SERVER_TIME() | Returns the current date/time value based on the server’s timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. | -| OFFSET_UTC | OFFSET_UTC(\_date, offset*) | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. | -| NOW | NOW() | Returns the current Unix Timestamp in milliseconds. | -| -| HOUR | HOUR(_datetime_) | Returns the hour part of a given date in range of 0 to 838. | -| -| MINUTE | MINUTE(_datetime_) | Returns the minute part of a time/datetime in range of 0 to 59. 
| -| -| MONTH | MONTH(_date_) | Returns month part for a specified date in range of 1 to 12. | -| -| SECOND | SECOND(_datetime_) | Returns the seconds part of a time/datetime in range of 0 to 59. | -| YEAR | YEAR(_date_) | Returns the year part for a specified date. | -| +| Keyword | Syntax | Description | +| ----------------- | -------------------------- | --------------------------------------------------------------------------------------------------------------------- | +| `CURRENT_DATE` | `CURRENT_DATE()` | Returns the current date in UTC in "YYYY-MM-DD" String format. | +| `CURRENT_TIME` | `CURRENT_TIME()` | Returns the current time in UTC in "HH:mm:ss.SSS" string format. | +| `CURRENT_TIMESTAMP` | `CURRENT_TIMESTAMP` | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. For more information, go here. | +| `DATE` | `DATE([date_string])` | Formats and returns the date string argument in UTC in 'YYYY-MM-DDTHH:mm:ss.SSSZZ' string format. If a date string is not provided, the function will return the current UTC date/time value in the return format defined above. For more information, go here. | +| `DATE_ADD` | `DATE_ADD(date, value, interval)` | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | +| `DATE_DIFF` | `DATE_DIFF(date_1, date_2[, interval])` | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds. For more information, go here. | +| `DATE_FORMAT` | `DATE_FORMAT(date, format)` | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, go here. 
| +| `DATE_SUB` | `DATE_SUB(date, value, interval)` | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted date_sub interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | +| `DAY` | `DAY(date)` | Returns the day of the month for the given date. | +| `DAYOFWEEK` | `DAYOFWEEK(date)` | Returns the numeric value of the weekday of the date given ("YYYY-MM-DD"). NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. | +| `EXTRACT` | `EXTRACT(date, date_part)` | Extracts and returns the date part requested as a String value. Accepted date_part values below show the value returned for date = "2020-03-26T15:13:02.041+000". For more information, go here. | +| `GETDATE` | `GETDATE()` | Returns the current Unix Timestamp in milliseconds. | +| `GET_SERVER_TIME` | `GET_SERVER_TIME()` | Returns the current date/time value based on the server's timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. | +| `OFFSET_UTC` | `OFFSET_UTC(date, offset)` | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. | +| `NOW` | `NOW()` | Returns the current Unix Timestamp in milliseconds. | +| `HOUR` | `HOUR(datetime)` | Returns the hour part of a given date in range of 0 to 838. | +| `MINUTE` | `MINUTE(datetime)` | Returns the minute part of a time/datetime in range of 0 to 59. | +| `MONTH` | `MONTH(date)` | Returns the month part for a specified date in range of 1 to 12. | +| `SECOND` | `SECOND(datetime)` | Returns the seconds part of a time/datetime in range of 0 to 59. | +| `YEAR` | `YEAR(date)` | Returns the year part for a specified date.
| ### Logical | Keyword | Syntax | Description | | ------- | ----------------------------------------------- | ------------------------------------------------------------------------------------------ | -| IF | IF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. | -| IIF | IIF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. | -| IFNULL | IFNULL(_expression, alt_value_) | Returns a specified value if the expression is null. | -| NULLIF | NULLIF(_expression_1, expression_2_) | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. | +| `IF` | `IF(condition, value_if_true, value_if_false)` | Returns a value if the condition is true, or another value if the condition is false. | +| `IIF` | `IIF(condition, value_if_true, value_if_false)` | Returns a value if the condition is true, or another value if the condition is false. | +| `IFNULL` | `IFNULL(expression, alt_value)` | Returns a specified value if the expression is null. | +| `NULLIF` | `NULLIF(expression_1, expression_2)` | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. | ### Mathematical | Keyword | Syntax | Description | | ------- | ------------------------------ | --------------------------------------------------------------------------------------------------- | -| ABS | ABS(_expression_) | Returns the absolute value of a given numeric expression. | -| CEIL | CEIL(_number_) | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. | -| EXP | EXP(_number_) | Returns e to the power of a specified number. | -| FLOOR | FLOOR(_number_) | Returns the largest integer value that is smaller than, or equal to, a given number. | -| RANDOM | RANDOM(_seed_) | Returns a pseudo random number. 
| -| ROUND | ROUND(_number,decimal_places_) | Rounds a given number to a specified number of decimal places. | -| SQRT | SQRT(_expression_) | Returns the square root of an expression. | +| `ABS` | `ABS(expression)` | Returns the absolute value of a given numeric expression. | +| `CEIL` | `CEIL(number)` | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. | +| `EXP` | `EXP(number)` | Returns e to the power of a specified number. | +| `FLOOR` | `FLOOR(number)` | Returns the largest integer value that is smaller than, or equal to, a given number. | +| `RANDOM` | `RANDOM(seed)` | Returns a pseudo random number. | +| `ROUND` | `ROUND(number, decimal_places)` | Rounds a given number to a specified number of decimal places. | +| `SQRT` | `SQRT(expression)` | Returns the square root of an expression. | ### String -| Keyword | Syntax | Description | -| ----------- | ----------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| CONCAT | CONCAT(_string_1, string_2, ...., string_n_) | Concatenates, or joins, two or more strings together, resulting in a single string. | -| CONCAT_WS | CONCAT*WS(\_separator, string_1, string_2, ...., string_n*) | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. | -| INSTR | INSTR(_string_1, string_2_) | Returns the first position, as an integer, of string_2 within string_1. | -| LEN | LEN(_string_) | Returns the length of a string. | -| LOWER | LOWER(_string_) | Converts a string to lower-case. | -| REGEXP | SELECT _column_name_ FROM _schema.table_ WHERE _column_name_ REGEXP _pattern_ | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. 
| -| REGEXP_LIKE | SELECT _column_name_ FROM _schema.table_ WHERE REGEXP*LIKE(\_column_name, pattern*) | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | -| REPLACE | REPLACE(_string, old_string, new_string_) | Replaces all instances of old_string within new_string, with string. | -| SUBSTRING | SUBSTRING(_string, string_position, length_of_substring_) | Extracts a specified amount of characters from a string. | -| TRIM | TRIM([_character(s) FROM_] _string_) | Removes leading and trailing spaces, or specified character(s), from a string. | -| UPPER | UPPER(_string_) | Converts a string to upper-case. | +| Keyword | Syntax | Description | +| ----------- | ------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `CONCAT` | `CONCAT(string_1, string_2, ...., string_n)` | Concatenates, or joins, two or more strings together, resulting in a single string. | +| `CONCAT_WS` | `CONCAT_WS(separator, string_1, string_2, ...., string_n)` | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. | +| `INSTR` | `INSTR(string_1, string_2)` | Returns the first position, as an integer, of string_2 within string_1. | +| `LEN` | `LEN(string)` | Returns the length of a string. | +| `LOWER` | `LOWER(string)` | Converts a string to lower-case. | +| `REGEXP` | `SELECT column_name FROM schema.table WHERE column_name REGEXP pattern` | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. 
| +| `REGEXP_LIKE` | `SELECT column_name FROM schema.table WHERE REGEXP_LIKE(column_name, pattern)` | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | +| `REPLACE` | `REPLACE(string, old_string, new_string)` | Replaces all instances of old_string within string with new_string. | +| `SUBSTRING` | `SUBSTRING(string, string_position, length_of_substring)` | Extracts a specified number of characters from a string. | +| `TRIM` | `TRIM([character(s) FROM] string)` | Removes leading and trailing spaces, or specified character(s), from a string. | +| `UPPER` | `UPPER(string)` | Converts a string to upper-case. | ## Operators ### Logical Operators -| Keyword | Syntax | Description | -| ------- | ----------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------- | -| BETWEEN | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ BETWEEN _value_1_ AND _value_2_ | (inclusive) Returns values(numbers, text, or dates) within a given range. | -| IN | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ IN(_value(s)_) | Used to specify multiple values in a WHERE clause. | -| LIKE | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_n_ LIKE _pattern_ | Searches for a specified pattern within a WHERE clause. | +| Keyword | Syntax | Description | +| ------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------- | +| `BETWEEN` | `SELECT column_name(s) FROM schema.table WHERE column_name BETWEEN value_1 AND value_2` | Returns values (numbers, text, or dates) within a given range (inclusive).
| +| `IN` | `SELECT column_name(s) FROM schema.table WHERE column_name IN(value(s))` | Used to specify multiple values in a WHERE clause. | +| `LIKE` | `SELECT column_name(s) FROM schema.table WHERE column_n LIKE pattern` | Searches for a specified pattern within a WHERE clause. | ## Queries ### General -| Keyword | Syntax | Description | -| -------- | ------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------- | -| DISTINCT | SELECT DISTINCT _column_name(s)_ FROM _schema.table_ | Returns only unique values, eliminating duplicate records. | -| FROM | FROM _schema.table_ | Used to list the schema(s), table(s), and any joins required for a SQL statement. | -| GROUP BY | SELECT _column_name(s)_ FROM _schema.table_ WHERE _condition_ GROUP BY _column_name(s)_ ORDER BY _column_name(s)_ | Groups rows that have the same values into summary rows. | -| HAVING | SELECT _column_name(s)_ FROM _schema.table_ WHERE _condition_ GROUP BY _column_name(s)_ HAVING _condition_ ORDER BY _column_name(s)_ | Filters data based on a group or aggregate function. | -| SELECT | SELECT _column_name(s)_ FROM _schema.table_ | Selects data from table. | -| WHERE | SELECT _column_name(s)_ FROM _schema.table_ WHERE _condition_ | Extracts records based on a defined condition. | +| Keyword | Syntax | Description | +| -------- | -------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- | +| `DISTINCT` | `SELECT DISTINCT column_name(s) FROM schema.table` | Returns only unique values, eliminating duplicate records. | +| `FROM` | `FROM schema.table` | Used to list the schema(s), table(s), and any joins required for a SQL statement. 
| +| `GROUP BY` | `SELECT column_name(s) FROM schema.table WHERE condition GROUP BY column_name(s) ORDER BY column_name(s)` | Groups rows that have the same values into summary rows. | +| `HAVING` | `SELECT column_name(s) FROM schema.table WHERE condition GROUP BY column_name(s) HAVING condition ORDER BY column_name(s)` | Filters data based on a group or aggregate function. | +| `SELECT` | `SELECT column_name(s) FROM schema.table` | Selects data from table. | +| `WHERE` | `SELECT column_name(s) FROM schema.table WHERE condition` | Extracts records based on a defined condition. | ### Joins | Keyword | Syntax | Description | | ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| CROSS JOIN | SELECT _column_name(s)_ FROM _schema.table_1_ CROSS JOIN _schema.table_2_ | Returns a paired combination of each row from _table_1_ with row from _table_2_. _Note: CROSS JOIN can return very large result sets and is generally considered bad practice._ | -| FULL OUTER | SELECT _column_name(s)_ FROM _schema.table_1_ FULL OUTER JOIN _schema.table_2_ ON _table_1.column_name_ _= table_2.column_name_ WHERE _condition_ | Returns all records when there is a match in either _table_1_ (left table) or _table_2_ (right table). | -| [INNER] JOIN | SELECT _column_name(s)_ FROM _schema.table_1_ INNER JOIN _schema.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return only matching records from _table_1_ (left table) and _table_2_ (right table). The INNER keyword is optional and does not affect the result. 
| -| LEFT [OUTER] JOIN | SELECT _column_name(s)_ FROM _schema.table_1_ LEFT OUTER JOIN _schema.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return all records from _table_1_ (left table) and matching data from _table_2_ (right table). The OUTER keyword is optional and does not affect the result. | -| RIGHT [OUTER] JOIN | SELECT _column_name(s)_ FROM _schema.table_1_ RIGHT OUTER JOIN _schema.table_2_ ON _table_1.column_name = table_2.column_name_ | Return all records from _table_2_ (right table) and matching data from _table_1_ (left table). The OUTER keyword is optional and does not affect the result. | +| `CROSS JOIN` | `SELECT column_name(s) FROM schema.table_1 CROSS JOIN schema.table_2` | Returns a paired combination of each row from `table_1` with each row from `table_2`. Note: CROSS JOIN can return very large result sets and is generally considered bad practice. | +| `FULL OUTER` | `SELECT column_name(s) FROM schema.table_1 FULL OUTER JOIN schema.table_2 ON table_1.column_name = table_2.column_name WHERE condition` | Returns all records when there is a match in either `table_1` (left table) or `table_2` (right table). | +| `[INNER] JOIN` | `SELECT column_name(s) FROM schema.table_1 INNER JOIN schema.table_2 ON table_1.column_name = table_2.column_name` | Returns only matching records from `table_1` (left table) and `table_2` (right table). The INNER keyword is optional and does not affect the result. | +| `LEFT [OUTER] JOIN` | `SELECT column_name(s) FROM schema.table_1 LEFT OUTER JOIN schema.table_2 ON table_1.column_name = table_2.column_name` | Returns all records from `table_1` (left table) and matching data from `table_2` (right table). The OUTER keyword is optional and does not affect the result. | +| `RIGHT [OUTER] JOIN` | `SELECT column_name(s) FROM schema.table_1 RIGHT OUTER JOIN schema.table_2 ON table_1.column_name = table_2.column_name` | Returns all records from `table_2` (right table) and matching data from `table_1` (left table).
The OUTER keyword is optional and does not affect the result. | ### Predicates -| Keyword | Syntax | Description | -| ----------- | --------------------------------------------------------------------------- | -------------------------- | -| IS NOT NULL | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ IS NOT NULL | Tests for non-null values. | -| IS NULL | SELECT _column_name(s)_ FROM _schema.table_ WHERE _column_name_ IS NULL | Tests for null values. | +| Keyword | Syntax | Description | +| ----------- | ----------------------------------------------------------------------------- | -------------------------- | +| `IS NOT NULL` | `SELECT column_name(s) FROM schema.table WHERE column_name IS NOT NULL` | Tests for non-null values. | +| `IS NULL` | `SELECT column_name(s) FROM schema.table WHERE column_name IS NULL` | Tests for null values. | ### Statements -| Keyword | Syntax | Description | -| ------- | ------------------------------------------------------------------------------------------- | ----------------------------------- | -| DELETE | DELETE FROM _schema.table_ WHERE condition | Deletes existing data from a table. | -| INSERT | INSERT INTO _schema.table(column_name(s))_ VALUES(_value(s)_) | Inserts new records into a table. | -| UPDATE | UPDATE _schema.table_ SET _column_1 = value_1, column_2 = value_2, ....,_ WHERE _condition_ | Alters existing records in a table. | +| Keyword | Syntax | Description | +| ------- | --------------------------------------------------------------------------------------------- | ----------------------------------- | +| `DELETE` | `DELETE FROM schema.table WHERE condition` | Deletes existing data from a table. | +| `INSERT` | `INSERT INTO schema.table(column_name(s)) VALUES(value(s))` | Inserts new records into a table. | +| `UPDATE` | `UPDATE schema.table SET column_1 = value_1, column_2 = value_2, .... WHERE condition` | Alters existing records in a table. 
| \ No newline at end of file diff --git a/versioned_docs/version-4.2/developers/sql-guide/json-search.md b/versioned_docs/version-4.2/developers/sql-guide/json-search.md index 9c45fb39..bdd17aa5 100644 --- a/versioned_docs/version-4.2/developers/sql-guide/json-search.md +++ b/versioned_docs/version-4.2/developers/sql-guide/json-search.md @@ -8,7 +8,7 @@ HarperDB automatically indexes all top level attributes in a row / object writte ## Syntax -SEARCH_JSON(_expression, attribute_) +`SEARCH_JSON(expression, attribute)` Executes the supplied string _expression_ against data of the defined top level _attribute_ for each row. The expression both filters and defines output from the JSON document. @@ -113,7 +113,7 @@ SEARCH_JSON( ) ``` -The first argument passed to SEARCH_JSON is the expression to execute against the second argument which is the cast attribute on the credits table. This expression will execute for every row. Looking into the expression it starts with “$\[…]” this tells the expression to iterate all elements of the cast array. +The first argument passed to SEARCH_JSON is the expression to execute against the second argument, which is the cast attribute on the credits table. This expression will execute for every row. Looking into the expression, it starts with "$[…]", which tells the expression to iterate all elements of the cast array. Then the expression tells the function to only return entries where the name attribute matches any of the actors defined in the array: ``` name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L.
Jackson", "Gwyneth Paltrow", "Don Cheadle"] ``` -So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{“actor”: name, “character”: character}`. This tells the function to create a specific object for each matching entry. +So far, we've iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we've chained an expression on our filter with: `{"actor": name, "character": character}`. This tells the function to create a specific object for each matching entry. **Sample Result** diff --git a/versioned_docs/version-4.2/technical-details/reference/analytics.md b/versioned_docs/version-4.2/technical-details/reference/analytics.md index c1975c66..c6500e78 100644 --- a/versioned_docs/version-4.2/technical-details/reference/analytics.md +++ b/versioned_docs/version-4.2/technical-details/reference/analytics.md @@ -104,14 +104,14 @@ And a summary record looks like: The following are general resource usage statistics that are tracked: -- memory - This includes RSS, heap, buffer and external data usage. -- utilization - How much of the time the worker was processing requests. +- `memory` - This includes RSS, heap, buffer and external data usage. +- `utilization` - How much of the time the worker was processing requests. - mqtt-connections - The number of MQTT connections. The following types of information is tracked for each HTTP request: -- success - How many requests returned a successful response (20x response code). -- TTFB - Time to first byte in the response to the client. -- transfer - Time to finish the transfer of the data to the client. +- `success` - How many requests returned a successful response (20x response code). +- `TTFB` - Time to first byte in the response to the client. +- `transfer` - Time to finish the transfer of the data to the client. - bytes-sent - How many bytes of data were sent to the client.
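If the analytics records are exposed to SQL, these metrics can be inspected with an ordinary query. A hedged sketch — the `system.hdb_analytics` table name and the `metric`, `median`, and `p95` attribute names are assumptions for illustration, not taken from this page:

```sql
-- Assumed schema: one summary row per metric per interval.
SELECT metric, median, p95
FROM system.hdb_analytics
WHERE metric = 'transfer'
```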
Requests are categorized by operation name, for the operations API, by the resource (name) with the REST API, and by command for the MQTT interface. diff --git a/versioned_docs/version-4.3/administration/harperdb-studio/create-account.md b/versioned_docs/version-4.3/administration/harperdb-studio/create-account.md index c78c4ef3..3d146bb6 100644 --- a/versioned_docs/version-4.3/administration/harperdb-studio/create-account.md +++ b/versioned_docs/version-4.3/administration/harperdb-studio/create-account.md @@ -12,7 +12,7 @@ Start at the [HarperDB Studio sign up page](https://studio.harperdb.io/sign-up). - Email Address - Subdomain - _Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ + _Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ - Coupon Code (optional) diff --git a/versioned_docs/version-4.3/administration/harperdb-studio/instances.md b/versioned_docs/version-4.3/administration/harperdb-studio/instances.md index 428babaf..dd1cbd08 100644 --- a/versioned_docs/version-4.3/administration/harperdb-studio/instances.md +++ b/versioned_docs/version-4.3/administration/harperdb-studio/instances.md @@ -26,7 +26,7 @@ A summary view of all instances within an organization can be viewed by clicking 1. Fill out Instance Info. 1. Enter Instance Name - _This will be used to build your instance URL. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com). The Instance URL will be previewed below._ + _This will be used to build your instance URL. 
For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com). The Instance URL will be previewed below._ 1. Enter Instance Username diff --git a/versioned_docs/version-4.3/administration/harperdb-studio/organizations.md b/versioned_docs/version-4.3/administration/harperdb-studio/organizations.md index a6efb234..5cc373f6 100644 --- a/versioned_docs/version-4.3/administration/harperdb-studio/organizations.md +++ b/versioned_docs/version-4.3/administration/harperdb-studio/organizations.md @@ -29,7 +29,7 @@ A new organization can be created as follows: - Enter Organization Name _This is used for descriptive purposes only._ - Enter Organization Subdomain - _Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ + _Part of the URL that will be used to identify your HarperDB Cloud Instances. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ 4. Click Create Organization. ## Delete an Organization diff --git a/versioned_docs/version-4.3/administration/harperdb-studio/query-instance-data.md b/versioned_docs/version-4.3/administration/harperdb-studio/query-instance-data.md index 22801dbc..44cd1d08 100644 --- a/versioned_docs/version-4.3/administration/harperdb-studio/query-instance-data.md +++ b/versioned_docs/version-4.3/administration/harperdb-studio/query-instance-data.md @@ -13,7 +13,7 @@ SQL queries can be executed directly through the HarperDB Studio with the follow 5. Enter your SQL query in the SQL query window. 6. Click **Execute**. -_Please note, the Studio will execute the query exactly as entered. 
For example, if you attempt to `SELECT _` from a table with millions of rows, you will most likely crash your browser.\* +_Please note, the Studio will execute the query exactly as entered. For example, if you attempt to `SELECT *` from a table with millions of rows, you will most likely crash your browser._ ## Browse Query Results Set diff --git a/versioned_docs/version-4.3/administration/logging/standard-logging.md b/versioned_docs/version-4.3/administration/logging/standard-logging.md index 08affb24..0e56681a 100644 --- a/versioned_docs/version-4.3/administration/logging/standard-logging.md +++ b/versioned_docs/version-4.3/administration/logging/standard-logging.md @@ -22,15 +22,15 @@ For example, a typical log entry looks like: The components of a log entry are: -- timestamp - This is the date/time stamp when the event occurred -- level - This is an associated log level that gives a rough guide to the importance and urgency of the message. The available log levels in order of least urgent (and more verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`. -- thread/ID - This reports the name of the thread and the thread ID that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process. Key threads are: - - main - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads - - http - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions. - - Clustering\* - These are threads and processes that handle replication. - - job - These are job threads that have been started to handle operations that are executed in a separate job thread. -- tags - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags.
-- message - This is the main message that was reported. +- `timestamp` - This is the date/time stamp when the event occurred +- `level` - This is an associated log level that gives a rough guide to the importance and urgency of the message. The available log levels in order of least urgent (and more verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`. +- `thread/ID` - This reports the name of the thread and the thread ID that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process. Key threads are: + - `main` - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads + - `http` - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions. + - `Clustering` - These are threads and processes that handle replication. + - `job` - These are job threads that have been started to handle operations that are executed in a separate job thread. +- `tags` - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags. +- `message` - This is the main message that was reported. We try to keep logging to a minimum by default, to do this the default log level is `error`. If you require more information from the logs, increasing the log level down will provide that. @@ -46,7 +46,7 @@ HarperDB logs can optionally be streamed to standard streams. Logging to standar ## Logging Rotation -Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space. For more information see “logging” in our [config docs](../../deployments/configuration). 
+Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space. For more information see "logging" in our [config docs](../../deployments/configuration). ## Read Logs via the API diff --git a/versioned_docs/version-4.3/deployments/install-harperdb/linux.md b/versioned_docs/version-4.3/deployments/install-harperdb/linux.md index 86ff5b97..5b6df5a7 100644 --- a/versioned_docs/version-4.3/deployments/install-harperdb/linux.md +++ b/versioned_docs/version-4.3/deployments/install-harperdb/linux.md @@ -20,7 +20,7 @@ These instructions assume that the following has already been completed: While you will need to access HarperDB through port 9925 for the administration through the operations API, and port 9932 for clustering, for higher level of security, you may want to consider keeping both of these ports restricted to a VPN or VPC, and only have the application interface (9926 by default) exposed to the public Internet. -For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default “ubuntu” user account. +For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default "ubuntu" user account. --- diff --git a/versioned_docs/version-4.3/developers/applications/caching.md b/versioned_docs/version-4.3/developers/applications/caching.md index 2389fa6d..e28a5edf 100644 --- a/versioned_docs/version-4.3/developers/applications/caching.md +++ b/versioned_docs/version-4.3/developers/applications/caching.md @@ -22,9 +22,9 @@ While you can provide a single expiration time, there are actually several expir You can provide a single expiration and it defines the behavior for all three. 
You can also provide three settings for expiration, through table directives: -- expiration - The amount of time until a record goes stale. -- eviction - The amount of time after expiration before a record can be evicted (defaults to zero). -- scanInterval - The interval for scanning for expired records (defaults to one quarter of the total of expiration and eviction). +- `expiration` - The amount of time until a record goes stale. +- `eviction` - The amount of time after expiration before a record can be evicted (defaults to zero). +- `scanInterval` - The interval for scanning for expired records (defaults to one quarter of the total of expiration and eviction). ## Define External Data Source diff --git a/versioned_docs/version-4.3/developers/applications/define-routes.md b/versioned_docs/version-4.3/developers/applications/define-routes.md index 411c5a68..5ad3df50 100644 --- a/versioned_docs/version-4.3/developers/applications/define-routes.md +++ b/versioned_docs/version-4.3/developers/applications/define-routes.md @@ -22,7 +22,7 @@ However, you can specify the path to be `/` if you wish to have your routes hand - The route below, using the default config, within the **dogs** project, with a route of **breeds** would be available at **http:/localhost:9926/dogs/breeds**. -In effect, this route is just a pass-through to HarperDB. The same result could have been achieved by hitting the core HarperDB API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the “helper methods” section, below. +In effect, this route is just a pass-through to HarperDB. The same result could have been achieved by hitting the core HarperDB API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the "helper methods" section, below. 
```javascript export default async (server, { hdbCore, logger }) => { @@ -39,7 +39,7 @@ export default async (server, { hdbCore, logger }) => { For endpoints where you want to execute multiple operations against HarperDB, or perform additional processing (like an ML classification, or an aggregation, or a call to a 3rd party API), you can define your own logic in the handler. The function below will execute a query against the dogs table, and filter the results to only return those dogs over 4 years in age. -**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which- as the name implies- bypasses all user authentication. See the security concerns and mitigations in the “helper methods” section, below.** +**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which, as the name implies, bypasses all user authentication. See the security concerns and mitigations in the "helper methods" section, below.** ```javascript export default async (server, { hdbCore, logger }) => { diff --git a/versioned_docs/version-4.3/developers/clustering/index.md b/versioned_docs/version-4.3/developers/clustering/index.md index dfcdac11..92fe00fe 100644 --- a/versioned_docs/version-4.3/developers/clustering/index.md +++ b/versioned_docs/version-4.3/developers/clustering/index.md @@ -22,10 +22,10 @@ A common use case is an edge application collecting and analyzing sensor data th HarperDB simplifies the architecture of such an application with its bi-directional, table-level replication: -- The edge instance subscribes to a “thresholds” table on the cloud instance, so the application only makes localhost calls to get the thresholds. -- The application continually pushes sensor data into a “sensor_data” table via the localhost API, comparing it to the threshold values as it does so. -- When a threshold violation occurs, the application adds a record to the “alerts” table.
-- The application appends to that record array “sensor_data” entries for the 60 seconds (or minutes, or days) leading up to the threshold violation. -- The edge instance publishes the “alerts” table up to the cloud instance. +- The edge instance subscribes to a "thresholds" table on the cloud instance, so the application only makes localhost calls to get the thresholds. +- The application continually pushes sensor data into a "sensor_data" table via the localhost API, comparing it to the threshold values as it does so. +- When a threshold violation occurs, the application adds a record to the "alerts" table. +- The application appends to that record an array of "sensor_data" entries for the 60 seconds (or minutes, or days) leading up to the threshold violation. +- The edge instance publishes the "alerts" table up to the cloud instance. By letting HarperDB focus on the fault-tolerant logistics of transporting your data, you get to write less code. By moving data only when and where it’s needed, you lower storage and bandwidth costs. And by restricting your app to only making local calls to HarperDB, you reduce the overall exposure of your application to outside forces. diff --git a/versioned_docs/version-4.3/developers/components/google-data-studio.md b/versioned_docs/version-4.3/developers/components/google-data-studio.md index d252f3f0..4ee8d848 100644 --- a/versioned_docs/version-4.3/developers/components/google-data-studio.md +++ b/versioned_docs/version-4.3/developers/components/google-data-studio.md @@ -19,9 +19,9 @@ Get started by selecting the HarperDB connector from the [Google Data Studio Par 1. Log in to [https://datastudio.google.com/](https://datastudio.google.com/). 1. Add a new Data Source using the HarperDB connector.
The current release version can be added as a data source by following this link: [HarperDB Google Data Studio Connector](https://datastudio.google.com/datasources/create?connectorId=AKfycbxBKgF8FI5R42WVxO-QCOq7dmUys0HJrUJMkBQRoGnCasY60_VJeO3BhHJPvdd20-S76g). 1. Authorize the connector to access other servers on your behalf (this allows the connector to contact your database). -1. Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word “Basic” at the start of it. -1. Check the box for “Secure Connections Only” if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer. -1. Check the box for “Allow Bad Certs” if your HarperDB instance does not have a valid SSL certificate. [HarperDB Cloud](../../deployments/harperdb-cloud/) always has valid certificates, and so will never require this to be checked. Instances you set up yourself may require this, if you are using self-signed certs. If you are using [HarperDB Cloud](../../deployments/harperdb-cloud/) or another instance you know should always have valid SSL certificates, do not check this box. +1. Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word "Basic" at the start of it. +1. Check the box for "Secure Connections Only" if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer. +1. Check the box for "Allow Bad Certs" if your HarperDB instance does not have a valid SSL certificate. [HarperDB Cloud](../../deployments/harperdb-cloud/) always has valid certificates, and so will never require this to be checked. Instances you set up yourself may require this, if you are using self-signed certs. 
If you are using [HarperDB Cloud](../../deployments/harperdb-cloud/) or another instance you know should always have valid SSL certificates, do not check this box. 1. Choose your Query Type. This determines what information the configuration will ask for after pressing the Next button. - Table will ask you for a Schema and a Table to return all fields of using `SELECT *`. - SQL will ask you for the SQL query you’re using to retrieve fields from the database. You may `JOIN` multiple tables together, and use HarperDB specific SQL functions, along with the usual power SQL grants. diff --git a/versioned_docs/version-4.3/developers/components/installing.md b/versioned_docs/version-4.3/developers/components/installing.md index 6bbb6261..090075b4 100644 --- a/versioned_docs/version-4.3/developers/components/installing.md +++ b/versioned_docs/version-4.3/developers/components/installing.md @@ -9,7 +9,7 @@ Components can be easily added by adding a new top level element to your `harper The configuration comprises two values: - component name - can be anything, as long as it follows valid YAML syntax. -- package - a reference to your component. +- `package` - a reference to your component. ```yaml myComponentName: diff --git a/versioned_docs/version-4.3/developers/components/operations.md b/versioned_docs/version-4.3/developers/components/operations.md index 32108df5..691ce4bb 100644 --- a/versioned_docs/version-4.3/developers/components/operations.md +++ b/versioned_docs/version-4.3/developers/components/operations.md @@ -4,7 +4,7 @@ title: Operations # Operations -One way to manage applications and components is through [HarperDB Studio](../../administration/harperdb-studio/). It performs all the necessary operations automatically. To get started, navigate to your instance in HarperDB Studio and click the subnav link for “applications”. Once configuration is complete, you can manage and deploy applications in minutes. 
+One way to manage applications and components is through [HarperDB Studio](../../administration/harperdb-studio/). It performs all the necessary operations automatically. To get started, navigate to your instance in HarperDB Studio and click the subnav link for "applications". Once configuration is complete, you can manage and deploy applications in minutes. HarperDB Studio manages your applications using nine HarperDB operations. You may view these operations within our [API Docs](../operations-api/). A brief overview of each of the operations is below: diff --git a/versioned_docs/version-4.3/developers/operations-api/bulk-operations.md b/versioned_docs/version-4.3/developers/operations-api/bulk-operations.md index d698130f..b6f6a07f 100644 --- a/versioned_docs/version-4.3/developers/operations-api/bulk-operations.md +++ b/versioned_docs/version-4.3/developers/operations-api/bulk-operations.md @@ -8,11 +8,11 @@ title: Bulk Operations Ingests CSV data, provided directly in the operation as an `insert`, `update` or `upsert` into the specified database table. -- operation _(required)_ - must always be `csv_data_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- data _(required)_ - csv data to import into HarperDB +- `operation` _(required)_ - must always be `csv_data_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. 
The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `data` _(required)_ - csv data to import into HarperDB ### Body @@ -43,11 +43,11 @@ Ingests CSV data, provided via a path on the local filesystem, as an `insert`, ` _Note: The CSV file must reside on the same machine on which HarperDB is running. For example, the path to a CSV on your computer will produce an error if your HarperDB instance is a cloud instance._ -- operation _(required)_ - must always be `csv_file_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- file*path *(required)\_ - path to the csv file on the host running harperdb +- `operation` _(required)_ - must always be `csv_file_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `file_path` _(required)_ - path to the csv file on the host running harperdb ### Body @@ -76,11 +76,11 @@ _Note: The CSV file must reside on the same machine on which HarperDB is running Ingests CSV data, provided via URL, as an `insert`, `update` or `upsert` into the specified database table. -- operation _(required)_ - must always be `csv_url_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. 
The default is `data` -- table _(required)_ - name of the table where you are loading your data -- csv*url *(required)\_ - URL to the csv +- `operation` _(required)_ - must always be `csv_url_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `csv_url` _(required)_ - URL to the csv ### Body @@ -109,16 +109,16 @@ Ingests CSV data, provided via URL, as an `insert`, `update` or `upsert` into th This operation allows users to import CSV or JSON files from an AWS S3 bucket as an `insert`, `update` or `upsert`. -- operation _(required)_ - must always be `import_from_s3` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- s3 _(required)_ - object containing required AWS S3 bucket info for operation: - - aws_access_key_id - AWS access key for authenticating into your S3 bucket - - aws_secret_access_key - AWS secret for authenticating into your S3 bucket - - bucket - AWS S3 bucket to import from - - key - the name of the file to import - _the file must include a valid file extension ('.csv' or '.json')_ - - region - the region of the bucket +- `operation` _(required)_ - must always be `import_from_s3` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. 
The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `s3` _(required)_ - object containing required AWS S3 bucket info for operation: + - `aws_access_key_id` - AWS access key for authenticating into your S3 bucket + - `aws_secret_access_key` - AWS secret for authenticating into your S3 bucket + - `bucket` - AWS S3 bucket to import from + - `key` - the name of the file to import - _the file must include a valid file extension ('.csv' or '.json')_ + - `region` - the region of the bucket ### Body diff --git a/versioned_docs/version-4.3/developers/operations-api/clustering.md b/versioned_docs/version-4.3/developers/operations-api/clustering.md index fcd1cdda..2263c715 100644 --- a/versioned_docs/version-4.3/developers/operations-api/clustering.md +++ b/versioned_docs/version-4.3/developers/operations-api/clustering.md @@ -10,11 +10,11 @@ Adds a route/routes to either the hub or leaf server cluster configuration. This _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_set_routes` -- server _(required)_ - must always be `hub` or `leaf`, in most cases you should use `hub` here -- routes _(required)_ - must always be an objects array with a host and port: - - host - the host of the remote instance you are clustering to - - port - the clustering port of the remote instance you are clustering to, in most cases this is the value in `clustering.hubServer.cluster.network.port` on the remote instance `harperdb-config.yaml` +- `operation` _(required)_ - must always be `cluster_set_routes` +- `server` _(required)_ - must always be `hub` or `leaf`, in most cases you should use `hub` here +- `routes` _(required)_ - must always be an array of objects, each with a host and port: + - `host` - the host of the remote instance you are clustering to + - `port` - the clustering port of the remote instance you are clustering to, in most cases this is the value in
`clustering.hubServer.cluster.network.port` in the remote instance's `harperdb-config.yaml` ### Body @@ -78,7 +78,7 @@ Gets all the hub and leaf server routes from the config file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_get_routes` +- `operation` _(required)_ - must always be `cluster_get_routes` ### Body @@ -122,8 +122,8 @@ Removes route(s) from hub and/or leaf server routes array in config file. Return _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_delete_routes` -- routes _required_ - Must be an array of route object(s) +- `operation` _(required)_ - must always be `cluster_delete_routes` +- `routes` _(required)_ - Must be an array of route object(s) ### Body @@ -162,14 +162,14 @@ Registers an additional HarperDB instance with associated subscriptions. Learn m _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_node` -- node*name *(required)\_ - the node name of the remote node -- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`: - - schema - the schema to replicate from - - table - the table to replicate from - - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish - a boolean which determines if transactions on the local table should be replicated on the remote table - - start*time *(optional)\_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format +- `operation` _(required)_ - must always be `add_node` +- `node_name` _(required)_ - the node name of the remote node +- `subscriptions` _(required)_ - The relationship created between nodes.
Must be an object array and include `schema`, `table`, `subscribe` and `publish`: + - `schema` - the schema to replicate from + - `table` - the table to replicate from + - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table + - `start_time` _(optional)_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format ### Body @@ -205,14 +205,14 @@ Modifies an existing HarperDB instance registration and associated subscriptions _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `update_node` -- node*name *(required)\_ - the node name of the remote node you are updating -- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`: - - schema - the schema to replicate from - - table - the table to replicate from - - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish - a boolean which determines if transactions on the local table should be replicated on the remote table - - start*time *(optional)\_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format +- `operation` _(required)_ - must always be `update_node` +- `node_name` _(required)_ - the node name of the remote node you are updating +- `subscriptions` _(required)_ - The relationship created between nodes. 
Must be an object array and include `schema`, `table`, `subscribe` and `publish`: + - `schema` - the schema to replicate from + - `table` - the table to replicate from + - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table + - `start_time` _(optional)_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format ### Body @@ -248,13 +248,13 @@ A more adeptly named alias for add and update node. This operation behaves as a _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_node_replication` -- node*name *(required)\_ - the node name of the remote node you are updating -- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and `table`, `subscribe` and `publish`: - - database _(optional)_ - the database to replicate from - - table _(required)_ - the table to replicate from - - subscribe _(required)_ - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish _(required)_ - a boolean which determines if transactions on the local table should be replicated on the remote table +- `operation` _(required)_ - must always be `set_node_replication` +- `node_name` _(required)_ - the node name of the remote node you are updating +- `subscriptions` _(required)_ - The relationship created between nodes. 
Must be an object array and include `table`, `subscribe` and `publish`: + - `database` _(optional)_ - the database to replicate from + - `table` _(required)_ - the table to replicate from + - `subscribe` _(required)_ - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` _(required)_ - a boolean which determines if transactions on the local table should be replicated on the remote table - ### Body @@ -289,7 +289,7 @@ Returns an array of status objects from a cluster. A status object will contain _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_status` +- `operation` _(required)_ - must always be `cluster_status` ### Body @@ -336,10 +336,10 @@ Returns an object array of enmeshed nodes. Each node object will contain the nam _Operation is restricted to super_user roles only_ -- operation _(required)_- must always be `cluster_network` -- timeout (_optional_) - the amount of time in milliseconds to wait for a response from the network. Must be a number -- connected*nodes (\_optional*) - omit `connected_nodes` from the response. Must be a boolean. Defaults to `false` -- routes (_optional_) - omit `routes` from the response. Must be a boolean. Defaults to `false` +- `operation` _(required)_ - must always be `cluster_network` +- `timeout` _(optional)_ - the amount of time in milliseconds to wait for a response from the network. Must be a number +- `connected_nodes` _(optional)_ - omit `connected_nodes` from the response. Must be a boolean. Defaults to `false` +- `routes` _(optional)_ - omit `routes` from the response. Must be a boolean. Defaults to `false` ### Body @@ -383,8 +383,8 @@ Removes a HarperDB instance and associated subscriptions from the cluster.
Learn _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `remove_node` -- name _(required)_ - The name of the node you are de-registering +- `operation` _(required)_ - must always be `remove_node` +- `name` _(required)_ - The name of the node you are de-registering ### Body @@ -412,8 +412,8 @@ Learn more about [HarperDB clustering here](../clustering/). _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `configure_cluster` -- connections _(required)_ - must be an object array with each object containing `node_name` and `subscriptions` for that node +- `operation` _(required)_ - must always be `configure_cluster` +- `connections` _(required)_ - must be an object array with each object containing `node_name` and `subscriptions` for that node ### Body @@ -463,10 +463,10 @@ Will purge messages from a stream _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `purge_stream` -- database _(required)_ - the name of the database where the streams table resides -- table _(required)_ - the name of the table that belongs to the stream -- options _(optional)_ - control how many messages get purged. Options are: +- `operation` _(required)_ - must always be `purge_stream` +- `database` _(required)_ - the name of the database where the streams table resides +- `table` _(required)_ - the name of the table that belongs to the stream +- `options` _(optional)_ - control how many messages get purged. 
Options are: - `keep` - purge will keep this many most recent messages - `seq` - purge all messages up to, but not including, this sequence diff --git a/versioned_docs/version-4.3/developers/operations-api/components.md b/versioned_docs/version-4.3/developers/operations-api/components.md index e8c9a236..f1a3d3ae 100644 --- a/versioned_docs/version-4.3/developers/operations-api/components.md +++ b/versioned_docs/version-4.3/developers/operations-api/components.md @@ -10,8 +10,8 @@ Creates a new component project in the component root directory using a predefin _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_component` -- project _(required)_ - the name of the project you wish to create +- `operation` _(required)_ - must always be `add_component` +- `project` _(required)_ - the name of the project you wish to create ### Body @@ -74,10 +74,10 @@ _Note: After deploying a component a restart may be required_ _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `deploy_component` -- project _(required)_ - the name of the project you wish to deploy -- package _(optional)_ - this can be any valid GitHub or NPM reference -- payload _(optional)_ - a base64-encoded string representation of the .tar file. Must be a string +- `operation` _(required)_ - must always be `deploy_component` +- `project` _(required)_ - the name of the project you wish to deploy +- `package` _(optional)_ - this can be any valid GitHub or NPM reference +- `payload` _(optional)_ - a base64-encoded string representation of the .tar file. 
Must be a string ### Body @@ -113,9 +113,9 @@ Creates a temporary `.tar` file of the specified project folder, then reads it i _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `package_component` -- project _(required)_ - the name of the project you wish to package -- skip*node_modules *(optional)\_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean +- `operation` _(required)_ - must always be `package_component` +- `project` _(required)_ - the name of the project you wish to package +- `skip_node_modules` _(optional)_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean ### Body @@ -146,9 +146,9 @@ Deletes a file from inside the component project or deletes the complete project _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_component` -- project _(required)_ - the name of the project you wish to delete or to delete from if using the `file` parameter -- file _(optional)_ - the path relative to your project folder of the file you wish to delete +- `operation` _(required)_ - must always be `drop_component` +- `project` _(required)_ - the name of the project you wish to delete or to delete from if using the `file` parameter +- `file` _(optional)_ - the path relative to your project folder of the file you wish to delete ### Body @@ -176,7 +176,7 @@ Gets all local component files and folders and any component config from `harper _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_components` +- `operation` _(required)_ - must always be `get_components` ### Body @@ -257,10 +257,10 @@ Gets the contents of a file inside a component project. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_component_file` -- project _(required)_ - the name of the project where the file is located -- file _(required)_ - the path relative to your project folder of the file you wish to view -- encoding _(optional)_ - the encoding that will be passed to the read file call. Defaults to `utf8` +- `operation` _(required)_ - must always be `get_component_file` +- `project` _(required)_ - the name of the project where the file is located +- `file` _(required)_ - the path relative to your project folder of the file you wish to view +- `encoding` _(optional)_ - the encoding that will be passed to the read file call. Defaults to `utf8` ### Body @@ -288,11 +288,11 @@ Creates or updates a file inside a component project. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_component_file` -- project _(required)_ - the name of the project the file is located in -- file _(required)_ - the path relative to your project folder of the file you wish to set -- payload _(required)_ - what will be written to the file -- encoding _(optional)_ - the encoding that will be passed to the write file call. Defaults to `utf8` +- `operation` _(required)_ - must always be `set_component_file` +- `project` _(required)_ - the name of the project the file is located in +- `file` _(required)_ - the path relative to your project folder of the file you wish to set +- `payload` _(required)_ - what will be written to the file +- `encoding` _(optional)_ - the encoding that will be passed to the write file call. 
Defaults to `utf8` ### Body diff --git a/versioned_docs/version-4.3/developers/operations-api/custom-functions.md b/versioned_docs/version-4.3/developers/operations-api/custom-functions.md index 86f76157..7b483c8a 100644 --- a/versioned_docs/version-4.3/developers/operations-api/custom-functions.md +++ b/versioned_docs/version-4.3/developers/operations-api/custom-functions.md @@ -10,7 +10,7 @@ Returns the state of the Custom functions server. This includes whether it is en _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `custom_function_status` +- `operation` _(required)_ - must always be `custom_function_status` ### Body @@ -38,7 +38,7 @@ Returns an array of projects within the Custom Functions root project directory. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_custom_functions` +- `operation` _(required)_ - must always be `get_custom_functions` ### Body @@ -68,10 +68,10 @@ Returns the content of the specified file as text. 
HarperDB Studio uses this cal _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_custom_function` -- project _(required)_ - the name of the project containing the file for which you wish to get content -- type _(required)_ - the name of the sub-folder containing the file for which you wish to get content - must be either routes or helpers -- file _(required)_ - The name of the file for which you wish to get content - should not include the file extension (which is always .js) +- `operation` _(required)_ - must always be `get_custom_function` +- `project` _(required)_ - the name of the project containing the file for which you wish to get content +- `type` _(required)_ - the name of the sub-folder containing the file for which you wish to get content - must be either routes or helpers +- `file` _(required)_ - The name of the file for which you wish to get content - should not include the file extension (which is always .js) ### Body @@ -100,11 +100,11 @@ Updates the content of the specified file. 
HarperDB Studio uses this call to sav _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_custom_function` -- project _(required)_ - the name of the project containing the file for which you wish to set content -- type _(required)_ - the name of the sub-folder containing the file for which you wish to set content - must be either routes or helpers -- file _(required)_ - the name of the file for which you wish to set content - should not include the file extension (which is always .js) -- function*content *(required)\_ - the content you wish to save into the specified file +- `operation` _(required)_ - must always be `set_custom_function` +- `project` _(required)_ - the name of the project containing the file for which you wish to set content +- `type` _(required)_ - the name of the sub-folder containing the file for which you wish to set content - must be either routes or helpers +- `file` _(required)_ - the name of the file for which you wish to set content - should not include the file extension (which is always .js) +- `function_content` _(required)_ - the content you wish to save into the specified file ### Body @@ -134,10 +134,10 @@ Deletes the specified file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_custom_function` -- project _(required)_ - the name of the project containing the file you wish to delete -- type _(required)_ - the name of the sub-folder containing the file you wish to delete. Must be either routes or helpers -- file _(required)_ - the name of the file you wish to delete. Should not include the file extension (which is always .js) +- `operation` _(required)_ - must always be `drop_custom_function` +- `project` _(required)_ - the name of the project containing the file you wish to delete +- `type` _(required)_ - the name of the sub-folder containing the file you wish to delete. 
Must be either routes or helpers +- `file` _(required)_ - the name of the file you wish to delete. Should not include the file extension (which is always .js) ### Body @@ -166,8 +166,8 @@ Creates a new project folder in the Custom Functions root project directory. It _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_custom_function_project` -- project _(required)_ - the name of the project you wish to create +- `operation` _(required)_ - must always be `add_custom_function_project` +- `project` _(required)_ - the name of the project you wish to create ### Body @@ -194,8 +194,8 @@ Deletes the specified project folder and all of its contents. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_custom_function_project` -- project _(required)_ - the name of the project you wish to delete +- `operation` _(required)_ - must always be `drop_custom_function_project` +- `project` _(required)_ - the name of the project you wish to delete ### Body @@ -222,9 +222,9 @@ Creates a .tar file of the specified project folder, then reads it into a base64 _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `package_custom_function_project` -- project _(required)_ - the name of the project you wish to package up for deployment -- skip*node_modules *(optional)\_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean. +- `operation` _(required)_ - must always be `package_custom_function_project` +- `project` _(required)_ - the name of the project you wish to package up for deployment +- `skip_node_modules` _(optional)_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean. 
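Several of these parameters carry explicit type constraints ("Must be a string", "Must be a boolean"). A small sketch of enforcing them client-side before sending a `deploy_custom_function_project` body (the helper function is hypothetical, not part of any HarperDB SDK):

```python
def build_deploy_body(project, payload):
    """Validate and assemble a deploy_custom_function_project request body.
    Both `project` and `payload` must be strings, per the operation spec."""
    if not isinstance(project, str):
        raise TypeError("project must be a string")
    if not isinstance(payload, str):
        raise TypeError("payload must be a base64-encoded string")
    return {
        "operation": "deploy_custom_function_project",
        "project": project,
        "payload": payload,
    }
```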
### Body @@ -254,9 +254,9 @@ Takes the output of package_custom_function_project, decrypts the base64-encoded _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `deploy_custom_function_project` -- project _(required)_ - the name of the project you wish to deploy. Must be a string -- payload _(required)_ - a base64-encoded string representation of the .tar file. Must be a string +- `operation` _(required)_ - must always be `deploy_custom_function_project` +- `project` _(required)_ - the name of the project you wish to deploy. Must be a string +- `payload` _(required)_ - a base64-encoded string representation of the .tar file. Must be a string ### Body diff --git a/versioned_docs/version-4.3/developers/operations-api/databases-and-tables.md b/versioned_docs/version-4.3/developers/operations-api/databases-and-tables.md index ec9b7d8c..cb5aedb8 100644 --- a/versioned_docs/version-4.3/developers/operations-api/databases-and-tables.md +++ b/versioned_docs/version-4.3/developers/operations-api/databases-and-tables.md @@ -8,7 +8,7 @@ title: Databases and Tables Returns the definitions of all databases and tables within the database. Record counts above 5000 records are estimated, as determining the exact count can be expensive. When the record count is estimated, this is indicated by the inclusion of a confidence interval of `estimated_record_range`. If you need the exact count, you can include an `"exact_count": true` in the operation, but be aware that this requires a full table scan (may be expensive). -- operation _(required)_ - must always be `describe_all` +- `operation` _(required)_ - must always be `describe_all` ### Body @@ -63,8 +63,8 @@ Returns the definitions of all databases and tables within the database. Record Returns the definitions of all tables within the specified database. 
-- operation _(required)_ - must always be `describe_database` -- database _(optional)_ - database where the table you wish to describe lives. The default is `data` +- `operation` _(required)_ - must always be `describe_database` +- `database` _(optional)_ - database where the table you wish to describe lives. The default is `data` ### Body @@ -118,9 +118,9 @@ Returns the definitions of all tables within the specified database. Returns the definition of the specified table. -- operation _(required)_ - must always be `describe_table` -- table _(required)_ - table you wish to describe -- database _(optional)_ - database where the table you wish to describe lives. The default is `data` +- `operation` _(required)_ - must always be `describe_table` +- `table` _(required)_ - table you wish to describe +- `database` _(optional)_ - database where the table you wish to describe lives. The default is `data` ### Body @@ -174,8 +174,8 @@ Create a new database. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `create_database` -- database _(optional)_ - name of the database you are creating. The default is `data` +- `operation` _(required)_ - must always be `create_database` +- `database` _(optional)_ - name of the database you are creating. The default is `data` ### Body @@ -202,8 +202,8 @@ Drop an existing database. NOTE: Dropping a database will delete all tables and _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `drop_database` -- database _(required)_ - name of the database you are dropping +- `operation` _(required)_ - this should always be `drop_database` +- `database` _(required)_ - name of the database you are dropping ### Body @@ -230,15 +230,15 @@ Create a new table within a database. 
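For the `create_table` operation in this hunk, a request body using the optional typed attribute list might look like the following sketch (database, table, attribute names, and the expiration value are illustrative only):

```python
# Hypothetical create_table request body with an explicit attribute schema.
# Supplying `attributes` opts the table out of dynamic-schema behavior.
create_table_body = {
    "operation": "create_table",
    "database": "data",          # optional; defaults to "data" if omitted
    "table": "dog",
    "primary_key": "id",
    "attributes": [
        {"name": "id", "type": "ID"},
        {"name": "dog_name", "indexed": True, "type": "String"},
        {"name": "age", "type": "Int"},
    ],
    "expiration": 86400,         # evict records after one day (in seconds)
}
```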
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `create_table` -- database _(optional)_ - name of the database where you want your table to live. If the database does not exist, it will be created. If the `database` property is not provided it will default to `data`. -- table _(required)_ - name of the table you are creating -- primary*key *(required)\_ - primary key for the table -- attributes _(optional)_ - an array of attributes that specifies the schema for the table, that is the set of attributes for the table. When attributes are supplied the table will not be considered a "dynamic schema" table, and attributes will not be auto-added when records with new properties are inserted. Each attribute is specified as: - - name _(required)_ - the name of the attribute - - indexed _(optional)_ - indicates if the attribute should be indexed - - type _(optional)_ - specifies the data type of the attribute (can be String, Int, Float, Date, ID, Any) -- expiration _(optional)_ - specifies the time-to-live or expiration of records in the table before they are evicted (records are not evicted on any timer if not specified). This is specified in seconds. +- `operation` _(required)_ - must always be `create_table` +- `database` _(optional)_ - name of the database where you want your table to live. If the database does not exist, it will be created. If the `database` property is not provided it will default to `data`. +- `table` _(required)_ - name of the table you are creating +- `primary_key` _(required)_ - primary key for the table +- `attributes` _(optional)_ - an array of attributes that specifies the schema for the table, that is the set of attributes for the table. When attributes are supplied the table will not be considered a "dynamic schema" table, and attributes will not be auto-added when records with new properties are inserted. 
Each attribute is specified as: + - `name` _(required)_ - the name of the attribute + - `indexed` _(optional)_ - indicates if the attribute should be indexed + - `type` _(optional)_ - specifies the data type of the attribute (can be String, Int, Float, Date, ID, Any) +- `expiration` _(optional)_ - specifies the time-to-live or expiration of records in the table before they are evicted (records are not evicted on any timer if not specified). This is specified in seconds. ### Body @@ -267,9 +267,9 @@ Drop an existing database table. NOTE: Dropping a table will delete all associat _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `drop_table` -- database _(optional)_ - database where the table you are dropping lives. The default is `data` -- table _(required)_ - name of the table you are dropping +- `operation` _(required)_ - this should always be `drop_table` +- `database` _(optional)_ - database where the table you are dropping lives. The default is `data` +- `table` _(required)_ - name of the table you are dropping ### Body @@ -297,10 +297,10 @@ Create a new attribute within the specified table. **The create_attribute operat _Note: HarperDB will automatically create new attributes on insert and update if they do not already exist within the database._ -- operation _(required)_ - must always be `create_attribute` -- database _(optional)_ - name of the database of the table you want to add your attribute. The default is `data` -- table _(required)_ - name of the table where you want to add your attribute to live -- attribute _(required)_ - name for the attribute +- `operation` _(required)_ - must always be `create_attribute` +- `database` _(optional)_ - name of the database of the table you want to add your attribute. 
The default is `data` +- `table` _(required)_ - name of the table where you want your attribute to live +- `attribute` _(required)_ - name for the attribute ### Body @@ -331,10 +331,10 @@ Drop an existing attribute from the specified table. NOTE: Dropping an attribute _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `drop_attribute` -- database _(optional)_ - database where the table you are dropping lives. The default is `data` -- table _(required)_ - table where the attribute you are dropping lives -- attribute _(required)_ - attribute that you intend to drop +- `operation` _(required)_ - this should always be `drop_attribute` +- `database` _(optional)_ - database where the table you are dropping lives. The default is `data` +- `table` _(required)_ - table where the attribute you are dropping lives +- `attribute` _(required)_ - attribute that you intend to drop ### Body @@ -365,10 +365,10 @@ It is important to note that trying to copy a database file that is in use (Harp _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `get_backup` -- database _(required)_ - this is the database that will be snapshotted and returned -- table _(optional)_ - this will specify a specific table to backup -- tables _(optional)_ - this will specify a specific set of tables to backup +- `operation` _(required)_ - this should always be `get_backup` +- `database` _(required)_ - this is the database that will be snapshotted and returned +- `table` _(optional)_ - this will specify a specific table to backup +- `tables` _(optional)_ - this will specify a specific set of tables to backup ### Body diff --git a/versioned_docs/version-4.3/developers/operations-api/jobs.md b/versioned_docs/version-4.3/developers/operations-api/jobs.md index 173125a1..cf71fa00 100644 --- a/versioned_docs/version-4.3/developers/operations-api/jobs.md +++ 
b/versioned_docs/version-4.3/developers/operations-api/jobs.md @@ -8,8 +8,8 @@ title: Jobs Returns job status, metrics, and messages for the specified job ID. -- operation _(required)_ - must always be `get_job` -- id _(required)_ - the id of the job you wish to view +- `operation` _(required)_ - must always be `get_job` +- `id` _(required)_ - the id of the job you wish to view ### Body @@ -50,9 +50,9 @@ Returns a list of job statuses, metrics, and messages for all jobs executed with _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `search_jobs_by_start_date` -- from*date *(required)\_ - the date you wish to start the search -- to*date *(required)\_ - the date you wish to end the search +- `operation` _(required)_ - must always be `search_jobs_by_start_date` +- `from_date` _(required)_ - the date you wish to start the search +- `to_date` _(required)_ - the date you wish to end the search ### Body diff --git a/versioned_docs/version-4.3/developers/operations-api/logs.md b/versioned_docs/version-4.3/developers/operations-api/logs.md index 06ad5555..f8096408 100644 --- a/versioned_docs/version-4.3/developers/operations-api/logs.md +++ b/versioned_docs/version-4.3/developers/operations-api/logs.md @@ -10,13 +10,13 @@ Returns log outputs from the primary HarperDB log based on the provided search c _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_Log` -- start _(optional)_ - result to start with. Default is 0, the first log in `hdb.log`. Must be a number -- limit _(optional)_ - number of results returned. Default behavior is 1000. Must be a number -- level _(optional)_ - error level to filter on. Default behavior is all levels. Must be `notify`, `error`, `warn`, `info`, `debug` or `trace` -- from _(optional)_ - date to begin showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. 
Default is first log in `hdb.log` -- until _(optional)_ - date to end showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is last log in `hdb.log` -- order _(optional)_ - order to display logs desc or asc by timestamp. By default, will maintain `hdb.log` order +- `operation` _(required)_ - must always be `read_log` +- `start` _(optional)_ - result to start with. Default is 0, the first log in `hdb.log`. Must be a number +- `limit` _(optional)_ - number of results returned. Default behavior is 1000. Must be a number +- `level` _(optional)_ - error level to filter on. Default behavior is all levels. Must be `notify`, `error`, `warn`, `info`, `debug` or `trace` +- `from` _(optional)_ - date to begin showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is first log in `hdb.log` +- `until` _(optional)_ - date to end showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is last log in `hdb.log` +- `order` _(optional)_ - order to display logs desc or asc by timestamp. By default, will maintain `hdb.log` order ### Body @@ -68,12 +68,12 @@ Returns all transactions logged for the specified database table. You may filter _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_transaction_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- from _(optional)_ - time format must be millisecond-based epoch in UTC -- to _(optional)_ - time format must be millisecond-based epoch in UTC -- limit _(optional)_ - max number of logs you want to receive. 
Must be a number +- `operation` _(required)_ - must always be `read_transaction_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `from` _(optional)_ - time format must be millisecond-based epoch in UTC +- `to` _(optional)_ - time format must be millisecond-based epoch in UTC +- `limit` _(optional)_ - max number of logs you want to receive. Must be a number ### Body @@ -271,10 +271,10 @@ Deletes transaction log data for the specified database table that is older than _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_transaction_log_before` -- schema _(required)_ - schema under which the transaction log resides. Must be a string -- table _(required)_ - table under which the transaction log resides. Must be a string -- timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC +- `operation` _(required)_ - must always be `delete_transaction_log_before` +- `schema` _(required)_ - schema under which the transaction log resides. Must be a string +- `table` _(required)_ - table under which the transaction log resides. Must be a string +- `timestamp` _(required)_ - records older than this date will be deleted. 
Format is millisecond-based epoch in UTC ### Body @@ -303,11 +303,11 @@ AuditLog must be enabled in the HarperDB configuration file to make this request _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - possibilities are `hash_value`, `timestamp` and `username` -- search*values *(optional)\_ - an array of string or numbers relating to search_type +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - possibilities are `hash_value`, `timestamp` and `username` +- `search_values` _(optional)_ - an array of strings or numbers relating to `search_type` ### Body @@ -398,11 +398,11 @@ AuditLog must be enabled in the HarperDB configuration file to make this request _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - timestamp -- search*values *(optional)\_ - an array containing a maximum of two values [`from_timestamp`, `to_timestamp`] defining the range of transactions you would like to view. 
- Timestamp format is millisecond-based epoch in UTC - If no items are supplied then all transactions are returned - If only one entry is supplied then all transactions after the supplied timestamp will be returned @@ -519,11 +519,11 @@ AuditLog must be enabled in the HarperDB configuration file to make this request _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - username -- search*values *(optional)\_ - the HarperDB user for whom you would like to view transactions +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - username +- `search_values` _(optional)_ - the HarperDB user for whom you would like to view transactions ### Body @@ -639,11 +639,11 @@ AuditLog must be enabled in the HarperDB configuration file to make this request _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - hash_value -- search*values *(optional)\_ - an array of hash_attributes for which you wish to see transaction logs +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - hash_value +- `search_values` _(optional)_ - an array of hash_attributes for which you wish to see transaction logs ### Body @@ -707,10 +707,10 @@ AuditLog must be enabled in the HarperDB 
configuration file to make this request _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_audit_logs_before` -- schema _(required)_ - schema under which the transaction log resides. Must be a string -- table _(required)_ - table under which the transaction log resides. Must be a string -- timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC +- `operation` _(required)_ - must always be `delete_audit_logs_before` +- `schema` _(required)_ - schema under which the transaction log resides. Must be a string +- `table` _(required)_ - table under which the transaction log resides. Must be a string +- `timestamp` _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC ### Body diff --git a/versioned_docs/version-4.3/developers/operations-api/registration.md b/versioned_docs/version-4.3/developers/operations-api/registration.md index f31f8370..7812e843 100644 --- a/versioned_docs/version-4.3/developers/operations-api/registration.md +++ b/versioned_docs/version-4.3/developers/operations-api/registration.md @@ -8,7 +8,7 @@ title: Registration Returns the registration data of the HarperDB instance. -- operation _(required)_ - must always be `registration_info` +- `operation` _(required)_ - must always be `registration_info` ### Body @@ -37,7 +37,7 @@ Returns the HarperDB fingerprint, uniquely generated based on the machine, for l _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_fingerprint` +- `operation` _(required)_ - must always be `get_fingerprint` ### Body @@ -55,9 +55,9 @@ Sets the HarperDB license as generated by HarperDB License Management software. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_license` -- key _(required)_ - your license key -- company _(required)_ - the company that was used in the license +- `operation` _(required)_ - must always be `set_license` +- `key` _(required)_ - your license key +- `company` _(required)_ - the company that was used in the license ### Body diff --git a/versioned_docs/version-4.3/developers/operations-api/sql-operations.md b/versioned_docs/version-4.3/developers/operations-api/sql-operations.md index f5aef46a..4525fe17 100644 --- a/versioned_docs/version-4.3/developers/operations-api/sql-operations.md +++ b/versioned_docs/version-4.3/developers/operations-api/sql-operations.md @@ -12,8 +12,8 @@ HarperDB encourages developers to utilize other querying tools over SQL for perf Executes the provided SQL statement. The SELECT statement is used to query data from the database. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -48,8 +48,8 @@ Executes the provided SQL statement. The SELECT statement is used to query data Executes the provided SQL statement. The INSERT statement is used to add one or more rows to a database table. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -76,8 +76,8 @@ Executes the provided SQL statement. The INSERT statement is used to add one or Executes the provided SQL statement. The UPDATE statement is used to change the values of specified attributes in one or more rows in a database table. 
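All four SQL verbs documented in this file travel through the same `sql` operation; only the statement string changes. A trivial sketch (the helper name is hypothetical):

```python
def sql_body(statement: str) -> dict:
    """Build the request body shared by SELECT/INSERT/UPDATE/DELETE:
    the operation is always "sql" and the statement is standard SQL."""
    return {"operation": "sql", "sql": statement}
```

For example, `sql_body("SELECT * FROM dev.dog WHERE id = 1")` and `sql_body("DELETE FROM dev.dog WHERE id = 1")` differ only in the `sql` field.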
-- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -104,8 +104,8 @@ Executes the provided SQL statement. The UPDATE statement is used to change the Executes the provided SQL statement. The DELETE statement is used to remove one or more rows of data from a database table. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body diff --git a/versioned_docs/version-4.3/developers/operations-api/token-authentication.md b/versioned_docs/version-4.3/developers/operations-api/token-authentication.md index b9ff5b31..178db842 100644 --- a/versioned_docs/version-4.3/developers/operations-api/token-authentication.md +++ b/versioned_docs/version-4.3/developers/operations-api/token-authentication.md @@ -10,9 +10,9 @@ Creates the tokens needed for authentication: operation & refresh token. _Note - this operation does not require authorization to be set_ -- operation _(required)_ - must always be `create_authentication_tokens` -- username _(required)_ - username of user to generate tokens for -- password _(required)_ - password of user to generate tokens for +- `operation` _(required)_ - must always be `create_authentication_tokens` +- `username` _(required)_ - username of user to generate tokens for +- `password` _(required)_ - password of user to generate tokens for ### Body @@ -39,8 +39,8 @@ _Note - this operation does not require authorization to be set_ This operation creates a new operation token. 
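The token flow described here issues tokens via `create_authentication_tokens` and mints a fresh operation token via `refresh_operation_token`. A sketch of a client that tracks both; the transport is injected so the example stands alone, and the `operation_token` / `refresh_token` response field names are assumptions about the server's reply shape:

```python
class TokenClient:
    """Minimal sketch of the create_authentication_tokens /
    refresh_operation_token flow. `post` is any callable that takes a
    request-body dict and returns the decoded JSON response; a real client
    would POST the body to the instance's operations endpoint."""

    def __init__(self, post):
        self.post = post
        self.operation_token = None
        self.refresh_token = None

    def authenticate(self, username, password):
        res = self.post({
            "operation": "create_authentication_tokens",
            "username": username,
            "password": password,
        })
        self.operation_token = res["operation_token"]
        self.refresh_token = res["refresh_token"]

    def refresh(self):
        # refresh_operation_token exchanges the refresh token for a new
        # operation token; the refresh token itself is reused.
        res = self.post({
            "operation": "refresh_operation_token",
            "refresh_token": self.refresh_token,
        })
        self.operation_token = res["operation_token"]
```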
-- operation _(required)_ - must always be `refresh_operation_token` -- refresh*token *(required)\_ - the refresh token that was provided when tokens were created +- `operation` _(required)_ - must always be `refresh_operation_token` +- `refresh_token` _(required)_ - the refresh token that was provided when tokens were created ### Body diff --git a/versioned_docs/version-4.3/developers/operations-api/users-and-roles.md b/versioned_docs/version-4.3/developers/operations-api/users-and-roles.md index 908a03b4..c65c2c0a 100644 --- a/versioned_docs/version-4.3/developers/operations-api/users-and-roles.md +++ b/versioned_docs/version-4.3/developers/operations-api/users-and-roles.md @@ -10,7 +10,7 @@ Returns a list of all roles. [Learn more about HarperDB roles here.](../security _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_roles` +- `operation` _(required)_ - must always be `list_roles` ### Body @@ -80,11 +80,11 @@ Creates a new role with the specified permissions. [Learn more about HarperDB ro _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_role` -- role _(required)_ - name of role you are defining -- permission _(required)_ - object defining permissions for users associated with this role: - - super*user *(optional)\_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. - - structure_user (optional) - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. 
+- `operation` _(required)_ - must always be `add_role` +- `role` _(required)_ - name of role you are defining +- `permission` _(required)_ - object defining permissions for users associated with this role: + - `super_user` _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. + - `structure_user` _(optional)_ - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. ### Body @@ -158,12 +158,12 @@ Modifies an existing role with the specified permissions. updates permissions fr _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `alter_role` -- id _(required)_ - the id value for the role you are altering -- role _(optional)_ - name value to update on the role you are altering -- permission _(required)_ - object defining permissions for users associated with this role: - - super*user *(optional)\_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. - - structure_user (optional) - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. 
+- `operation` _(required)_ - must always be `alter_role` +- `id` _(required)_ - the id value for the role you are altering +- `role` _(optional)_ - name value to update on the role you are altering +- `permission` _(required)_ - object defining permissions for users associated with this role: + - `super_user` _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. + - `structure_user` _(optional)_ - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. ### Body @@ -237,8 +237,8 @@ Deletes an existing role from the database. NOTE: Role with associated users can _Operation is restricted to super_user roles only_ -- operation _(required)_ - this must always be `drop_role` -- id _(required)_ - this is the id of the role you are dropping +- `operation` _(required)_ - this must always be `drop_role` +- `id` _(required)_ - this is the id of the role you are dropping ### Body @@ -265,7 +265,7 @@ Returns a list of all users. [Learn more about HarperDB roles here.](../security _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_users` +- `operation` _(required)_ - must always be `list_users` ### Body @@ -377,7 +377,7 @@ _Operation is restricted to super_user roles only_ Returns user data for the associated user credentials. -- operation _(required)_ - must always be `user_info` +- `operation` _(required)_ - must always be `user_info` ### Body @@ -415,11 +415,11 @@ Creates a new user with the specified role and credentials. 
[Learn more about Ha _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_user` -- role _(required)_ - 'role' name value of the role you wish to assign to the user. See `add_role` for more detail -- username _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash -- password _(required)_ - clear text for password. HarperDB will encrypt the password upon receipt -- active _(required)_ - boolean value for status of user's access to your HarperDB instance. If set to false, user will not be able to access your instance of HarperDB. +- `operation` _(required)_ - must always be `add_user` +- `role` _(required)_ - `role` name value of the role you wish to assign to the user. See `add_role` for more detail +- `username` _(required)_ - username assigned to the user. It cannot be altered after adding the user. It serves as the hash +- `password` _(required)_ - clear text for password. HarperDB will encrypt the password upon receipt +- `active` _(required)_ - boolean value for the status of the user's access to your HarperDB instance. If set to false, the user will not be able to access your instance of HarperDB. ### Body @@ -449,11 +449,11 @@ Modifies an existing user's role and/or credentials. [Learn more about HarperDB _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `alter_user` -- username _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash. -- password _(optional)_ - clear text for password. HarperDB will encrypt the password upon receipt -- role _(optional)_ - `role` name value of the role you wish to assign to the user. See `add_role` for more detail -- active _(optional)_ - status of user's access to your HarperDB instance. See `add_role` for more detail +- `operation` _(required)_ - must always be `alter_user` +- `username` _(required)_ - username assigned to the user.
It cannot be altered after adding the user. It serves as the hash. +- `password` _(optional)_ - clear text for password. HarperDB will encrypt the password upon receipt +- `role` _(optional)_ - `role` name value of the role you wish to assign to the user. See `add_role` for more detail +- `active` _(optional)_ - status of the user's access to your HarperDB instance. See `add_role` for more detail ### Body @@ -487,8 +487,8 @@ Deletes an existing user by username. [Learn more about HarperDB roles here.](.. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_user` -- username _(required)_ - username assigned to the user +- `operation` _(required)_ - must always be `drop_user` +- `username` _(required)_ - username assigned to the user ### Body diff --git a/versioned_docs/version-4.3/developers/operations-api/utilities.md b/versioned_docs/version-4.3/developers/operations-api/utilities.md index cf552ec8..840b4c6a 100644 --- a/versioned_docs/version-4.3/developers/operations-api/utilities.md +++ b/versioned_docs/version-4.3/developers/operations-api/utilities.md @@ -10,7 +10,7 @@ Restarts the HarperDB instance. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `restart` +- `operation` _(required)_ - must always be `restart` ### Body @@ -36,8 +36,8 @@ Restarts servers for the specified HarperDB service. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `restart_service` -- service _(required)_ - must be one of: `http_workers`, `clustering_config` or `clustering` +- `operation` _(required)_ - must always be `restart_service` +- `service` _(required)_ - must be one of: `http_workers`, `clustering_config` or `clustering` ### Body @@ -64,8 +64,8 @@ Returns detailed metrics on the host system.
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `system_information` -- attributes _(optional)_ - string array of top level attributes desired in the response, if no value is supplied all attributes will be returned. Available attributes are: ['system', 'time', 'cpu', 'memory', 'disk', 'network', 'harperdb_processes', 'table_size', 'replication'] +- `operation` _(required)_ - must always be `system_information` +- `attributes` _(optional)_ - string array of top-level attributes desired in the response; if no value is supplied, all attributes will be returned. Available attributes are: ['system', 'time', 'cpu', 'memory', 'disk', 'network', 'harperdb_processes', 'table_size', 'replication'] ### Body @@ -83,10 +83,10 @@ Delete data before the specified timestamp on the specified database table exclu _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_records_before` -- date _(required)_ - records older than this date will be deleted. Supported format looks like: `YYYY-MM-DDThh:mm:ss.sZ` -- schema _(required)_ - name of the schema where you are deleting your data -- table _(required)_ - name of the table where you are deleting your data +- `operation` _(required)_ - must always be `delete_records_before` +- `date` _(required)_ - records older than this date will be deleted. Supported format looks like: `YYYY-MM-DDThh:mm:ss.sZ` +- `schema` _(required)_ - name of the schema where you are deleting your data +- `table` _(required)_ - name of the table where you are deleting your data ### Body @@ -114,11 +114,11 @@ _Operation is restricted to super_user roles only_ Exports data based on a given search operation to a local file in JSON or CSV format.
-- operation _(required)_ - must always be `export_local` -- format _(required)_ - the format you wish to export the data, options are `json` & `csv` -- path _(required)_ - path local to the server to export the data -- search*operation *(required)\_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` -- filename _(optional)_ - the name of the file where your export will be written to (do not include extension in filename). If one is not provided it will be autogenerated based on the epoch. +- `operation` _(required)_ - must always be `export_local` +- `format` _(required)_ - the format you wish to export the data to; options are `json` and `csv` +- `path` _(required)_ - path local to the server to export the data +- `search_operation` _(required)_ - must be one of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` +- `filename` _(optional)_ - the name of the file your export will be written to (do not include the extension in the filename). If one is not provided, it will be autogenerated based on the epoch. ### Body @@ -148,10 +148,10 @@ Exports data based on a given search operation to a local file in JSON or CSV fo Exports data based on a given search operation from table to AWS S3 in JSON or CSV format.
-- operation _(required)_ - must always be `export_to_s3` -- format _(required)_ - the format you wish to export the data, options are `json` & `csv` -- s3 _(required)_ - details your access keys, bucket, bucket region and key for saving the data to S3 -- search*operation *(required)\_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` +- `operation` _(required)_ - must always be `export_to_s3` +- `format` _(required)_ - the format you wish to export the data to; options are `json` and `csv` +- `s3` _(required)_ - details of your access keys, bucket, bucket region and key for saving the data to S3 +- `search_operation` _(required)_ - must be one of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` ### Body @@ -190,9 +190,9 @@ Executes npm install against specified custom function projects. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `install_node_modules` -- projects _(required)_ - must ba an array of custom functions projects. -- dry*run *(optional)\_ - refers to the npm --dry-run flag: [https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run](https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run). Defaults to false. +- `operation` _(required)_ - must always be `install_node_modules` +- `projects` _(required)_ - must be an array of custom functions projects. +- `dry_run` _(optional)_ - refers to the npm --dry-run flag: [https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run](https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run). Defaults to false. ### Body @@ -212,9 +212,9 @@ Modifies the HarperDB configuration file parameters.
Must follow with a restart _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_configuration` -- logging*level *(example/optional)\_ - one or more configuration keywords to be updated in the HarperDB configuration file -- clustering*enabled *(example/optional)\_ - one or more configuration keywords to be updated in the HarperDB configuration file +- `operation` _(required)_ - must always be `set_configuration` +- `logging_level` _(example/optional)_ - one or more configuration keywords to be updated in the HarperDB configuration file +- `clustering_enabled` _(example/optional)_ - one or more configuration keywords to be updated in the HarperDB configuration file ### Body @@ -242,7 +242,7 @@ Returns the HarperDB configuration parameters. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_configuration` +- `operation` _(required)_ - must always be `get_configuration` ### Body diff --git a/versioned_docs/version-4.3/developers/security/basic-auth.md b/versioned_docs/version-4.3/developers/security/basic-auth.md index b7e19131..0b73f479 100644 --- a/versioned_docs/version-4.3/developers/security/basic-auth.md +++ b/versioned_docs/version-4.3/developers/security/basic-auth.md @@ -8,7 +8,7 @@ HarperDB uses Basic Auth and JSON Web Tokens (JWTs) to secure our HTTP requests. ** \_**You do not need to log in separately. Basic Auth is added to each HTTP request like create_database, create_table, insert etc… via headers.**\_ ** -A header is added to each HTTP request. The header key is **"Authorization"** the header value is **"Basic <<your username and password buffer token>>"** +A header is added to each HTTP request. The header key is `Authorization` and the header value is `Basic <your username and password buffer token>`.
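To make the header construction above concrete, here is a minimal sketch (Python; the `HDB_ADMIN`/`password` credentials are placeholders, not values from the docs) of how a client builds the `Authorization` header value from a username and password:

```python
import base64

# Placeholder credentials -- substitute your own instance user.
username = "HDB_ADMIN"
password = "password"

# Basic Auth value is "Basic " + base64("username:password")
token = base64.b64encode(f"{username}:{password}".encode()).decode()
headers = {
    "Authorization": f"Basic {token}",
    "Content-Type": "application/json",
}
print(headers["Authorization"])
```

Attaching these headers to every HTTP request is all the authentication the operations API needs; there is no separate login step.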
## Authentication in HarperDB Studio diff --git a/versioned_docs/version-4.3/developers/security/users-and-roles.md b/versioned_docs/version-4.3/developers/security/users-and-roles.md index 96acae53..0ce2f160 100644 --- a/versioned_docs/version-4.3/developers/security/users-and-roles.md +++ b/versioned_docs/version-4.3/developers/security/users-and-roles.md @@ -47,7 +47,7 @@ When creating a new, user-defined role in a HarperDB instance, you must provide Example JSON for `add_role` request -```json +```jsonc { "operation": "add_role", "role": "software_developer", @@ -98,7 +98,7 @@ There are two parts to a permissions set: Each table that a role should be given some level of CRUD permissions to must be included in the `tables` array for its database in the roles permissions JSON passed to the API (_see example above_). -```json +```jsonc { "table_name": { // the name of the table to define CRUD perms for "read": boolean, // access to read from this table diff --git a/versioned_docs/version-4.3/developers/sql-guide/date-functions.md b/versioned_docs/version-4.3/developers/sql-guide/date-functions.md index 99a782b8..6829aef1 100644 --- a/versioned_docs/version-4.3/developers/sql-guide/date-functions.md +++ b/versioned_docs/version-4.3/developers/sql-guide/date-functions.md @@ -156,17 +156,17 @@ Subtracts the defined amount of time from the date provided in UTC and returns t ### EXTRACT(date, date_part) -Extracts and returns the date_part requested as a String value. Accepted date_part values below show value returned for date = “2020-03-26T15:13:02.041+000” +Extracts and returns the date_part requested as a String value. 
Accepted date_part values below show value returned for date = "2020-03-26T15:13:02.041+000" | date_part | Example return value\* | | ----------- | ---------------------- | -| year | “2020” | -| month | “3” | -| day | “26” | -| hour | “15” | -| minute | “13” | -| second | “2” | -| millisecond | “41” | +| year | "2020" | +| month | "3" | +| day | "26" | +| hour | "15" | +| minute | "13" | +| second | "2" | +| millisecond | "41" | ``` "SELECT EXTRACT(1587568845765, 'year') AS extract_result" returns diff --git a/versioned_docs/version-4.3/developers/sql-guide/functions.md b/versioned_docs/version-4.3/developers/sql-guide/functions.md index 852ac9b5..cae1bb10 100644 --- a/versioned_docs/version-4.3/developers/sql-guide/functions.md +++ b/versioned_docs/version-4.3/developers/sql-guide/functions.md @@ -16,99 +16,85 @@ This SQL keywords reference contains the SQL functions available in HarperDB. | Keyword | Syntax | Description | | ---------------- | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | -| AVG | AVG(_expression_) | Returns the average of a given numeric expression. | -| COUNT | SELECT COUNT(_column_name_) FROM _database.table_ WHERE _condition_ | Returns the number records that match the given criteria. Nulls are not counted. | -| GROUP_CONCAT | GROUP*CONCAT(\_expression*) | Returns a string with concatenated values that are comma separated and that are non-null from a group. Will return null when there are non-null values. | -| MAX | SELECT MAX(_column_name_) FROM _database.table_ WHERE _condition_ | Returns largest value in a specified column. | -| MIN | SELECT MIN(_column_name_) FROM _database.table_ WHERE _condition_ | Returns smallest value in a specified column. | -| SUM | SUM(_column_name_) | Returns the sum of the numeric values provided. 
| -| ARRAY\* | ARRAY(_expression_) | Returns a list of data as a field. | -| DISTINCT_ARRAY\* | DISTINCT*ARRAY(\_expression*) | When placed around a standard ARRAY() function, returns a distinct (deduplicated) results set. | +| `AVG` | `AVG(expression)` | Returns the average of a given numeric expression. | +| `COUNT` | `SELECT COUNT(column_name) FROM database.table WHERE condition` | Returns the number of records that match the given criteria. Nulls are not counted. | +| `GROUP_CONCAT` | `GROUP_CONCAT(expression)` | Returns a string of comma-separated, concatenated non-null values from a group. Returns null when there are no non-null values. | +| `MAX` | `SELECT MAX(column_name) FROM database.table WHERE condition` | Returns the largest value in a specified column. | +| `MIN` | `SELECT MIN(column_name) FROM database.table WHERE condition` | Returns the smallest value in a specified column. | +| `SUM` | `SUM(column_name)` | Returns the sum of the numeric values provided.
| +| `CAST` | `CAST(expression AS datatype(length))` | Converts a value to a specified datatype. | +| `CONVERT` | `CONVERT(data_type(length), expression, style)` | Converts a value from one datatype to a different, specified datatype. | ### Date & Time -| Keyword | Syntax | Description | -| ----------------- | ----------------- | --------------------------------------------------------------------------------------------------------------------- | -| CURRENT_DATE | CURRENT_DATE() | Returns the current date in UTC in “YYYY-MM-DD” String format. | -| CURRENT_TIME | CURRENT_TIME() | Returns the current time in UTC in “HH:mm:ss.SSS” string format. | -| CURRENT_TIMESTAMP | CURRENT_TIMESTAMP | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. For more information, go here. | - -| -| DATE | DATE([_date_string_]) | Formats and returns the date*string argument in UTC in ‘YYYY-MM-DDTHH:mm:ss.SSSZZ’ string format. If a date_string is not provided, the function will return the current UTC date/time value in the return format defined above. For more information, go here. | -| -| DATE_ADD | DATE_ADD(\_date, value, interval*) | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | -| -| DATE*DIFF | DATEDIFF(\_date_1, date_2[, interval]*) | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds. For more information, go here. | -| -| DATE*FORMAT | DATE_FORMAT(\_date, format*) | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, go here. 
| -| -| DATE*SUB | DATE_SUB(\_date, format*) | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted date*sub interval values- Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | -| -| DAY | DAY(\_date*) | Return the day of the month for the given date. | -| -| DAYOFWEEK | DAYOFWEEK(_date_) | Returns the numeric value of the weekday of the date given(“YYYY-MM-DD”).NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. | -| EXTRACT | EXTRACT(_date, date_part_) | Extracts and returns the date*part requested as a String value. Accepted date_part values below show value returned for date = “2020-03-26T15:13:02.041+000” For more information, go here. | -| -| GETDATE | GETDATE() | Returns the current Unix Timestamp in milliseconds. | -| GET_SERVER_TIME | GET_SERVER_TIME() | Returns the current date/time value based on the server’s timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. | -| OFFSET_UTC | OFFSET_UTC(\_date, offset*) | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. | -| NOW | NOW() | Returns the current Unix Timestamp in milliseconds. | -| -| HOUR | HOUR(_datetime_) | Returns the hour part of a given date in range of 0 to 838. | -| -| MINUTE | MINUTE(_datetime_) | Returns the minute part of a time/datetime in range of 0 to 59. | -| -| MONTH | MONTH(_date_) | Returns month part for a specified date in range of 1 to 12. | -| -| SECOND | SECOND(_datetime_) | Returns the seconds part of a time/datetime in range of 0 to 59. | -| YEAR | YEAR(_date_) | Returns the year part for a specified date. 
| +| Keyword | Syntax | Description | +| ----------------- | -------------------------- | --------------------------------------------------------------------------------------------------------------------- | +| `CURRENT_DATE` | `CURRENT_DATE()` | Returns the current date in UTC in "YYYY-MM-DD" String format. | +| `CURRENT_TIME` | `CURRENT_TIME()` | Returns the current time in UTC in "HH:mm:ss.SSS" string format. | +| `CURRENT_TIMESTAMP` | `CURRENT_TIMESTAMP` | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. For more information, go here. | +| `DATE` | `DATE([date_string])` | Formats and returns the date string argument in UTC in 'YYYY-MM-DDTHH:mm:ss.SSSZZ' string format. If a date string is not provided, the function will return the current UTC date/time value in the return format defined above. For more information, go here. | +| `DATE_ADD` | `DATE_ADD(date, value, interval)` | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | +| `DATE_DIFF` | `DATE_DIFF(date_1, date_2[, interval])` | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds. For more information, go here. | +| `DATE_FORMAT` | `DATE_FORMAT(date, format)` | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, go here. | +| `DATE_SUB` | `DATE_SUB(date, value, interval)` | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here.
| +| `DAY` | `DAY(date)` | Returns the day of the month for the given date. | +| `DAYOFWEEK` | `DAYOFWEEK(date)` | Returns the numeric value of the weekday of the date given ("YYYY-MM-DD"). NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. | +| `EXTRACT` | `EXTRACT(date, date_part)` | Extracts and returns the date part requested as a String value. Accepted date_part values below show the value returned for date = "2020-03-26T15:13:02.041+000". For more information, go here. | +| `GETDATE` | `GETDATE()` | Returns the current Unix Timestamp in milliseconds. | +| `GET_SERVER_TIME` | `GET_SERVER_TIME()` | Returns the current date/time value based on the server's timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. | +| `OFFSET_UTC` | `OFFSET_UTC(date, offset)` | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. | +| `NOW` | `NOW()` | Returns the current Unix Timestamp in milliseconds. | +| `HOUR` | `HOUR(datetime)` | Returns the hour part of a given date in range of 0 to 838. | +| `MINUTE` | `MINUTE(datetime)` | Returns the minute part of a time/datetime in range of 0 to 59. | +| `MONTH` | `MONTH(date)` | Returns the month part for a specified date in range of 1 to 12. | +| `SECOND` | `SECOND(datetime)` | Returns the seconds part of a time/datetime in range of 0 to 59. | +| `YEAR` | `YEAR(date)` | Returns the year part for a specified date. | ### Logical | Keyword | Syntax | Description | | ------- | ----------------------------------------------- | ------------------------------------------------------------------------------------------ | -| IF | IF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false.
| -| IIF | IIF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. | -| IFNULL | IFNULL(_expression, alt_value_) | Returns a specified value if the expression is null. | -| NULLIF | NULLIF(_expression_1, expression_2_) | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. | +| `IF` | `IF(condition, value_if_true, value_if_false)` | Returns a value if the condition is true, or another value if the condition is false. | +| `IIF` | `IIF(condition, value_if_true, value_if_false)` | Returns a value if the condition is true, or another value if the condition is false. | +| `IFNULL` | `IFNULL(expression, alt_value)` | Returns a specified value if the expression is null. | +| `NULLIF` | `NULLIF(expression_1, expression_2)` | Returns null if expression_1 is equal to expression_2; if not equal, returns expression_1. | ### Mathematical | Keyword | Syntax | Description | | ------- | ------------------------------ | --------------------------------------------------------------------------------------------------- | -| ABS | ABS(_expression_) | Returns the absolute value of a given numeric expression.
| +| `CEIL` | `CEIL(number)` | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. | +| `EXP` | `EXP(number)` | Returns e to the power of a specified number. | +| `FLOOR` | `FLOOR(number)` | Returns the largest integer value that is smaller than, or equal to, a given number. | +| `RANDOM` | `RANDOM(seed)` | Returns a pseudo random number. | +| `ROUND` | `ROUND(number, decimal_places)` | Rounds a given number to a specified number of decimal places. | +| `SQRT` | `SQRT(expression)` | Returns the square root of an expression. | ### String | Keyword | Syntax | Description | | ----------- | ------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| CONCAT | CONCAT(_string_1, string_2, ...., string_n_) | Concatenates, or joins, two or more strings together, resulting in a single string. | -| CONCAT_WS | CONCAT*WS(\_separator, string_1, string_2, ...., string_n*) | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. | -| INSTR | INSTR(_string_1, string_2_) | Returns the first position, as an integer, of string_2 within string_1. | -| LEN | LEN(_string_) | Returns the length of a string. | -| LOWER | LOWER(_string_) | Converts a string to lower-case. | -| REGEXP | SELECT _column_name_ FROM _database.table_ WHERE _column_name_ REGEXP _pattern_ | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | -| REGEXP_LIKE | SELECT _column_name_ FROM _database.table_ WHERE REGEXP*LIKE(\_column_name, pattern*) | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. 
If no matches are found, it returns null. | -| REPLACE | REPLACE(_string, old_string, new_string_) | Replaces all instances of old_string within new_string, with string. | -| SUBSTRING | SUBSTRING(_string, string_position, length_of_substring_) | Extracts a specified amount of characters from a string. | -| TRIM | TRIM([_character(s) FROM_] _string_) | Removes leading and trailing spaces, or specified character(s), from a string. | -| UPPER | UPPER(_string_) | Converts a string to upper-case. | +| `CONCAT` | `CONCAT(string_1, string_2, ...., string_n)` | Concatenates, or joins, two or more strings together, resulting in a single string. | +| `CONCAT_WS` | `CONCAT_WS(separator, string_1, string_2, ...., string_n)` | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. | +| `INSTR` | `INSTR(string_1, string_2)` | Returns the first position, as an integer, of string_2 within string_1. | +| `LEN` | `LEN(string)` | Returns the length of a string. | +| `LOWER` | `LOWER(string)` | Converts a string to lower-case. | +| `REGEXP` | `SELECT column_name FROM database.table WHERE column_name REGEXP pattern` | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | +| `REGEXP_LIKE` | `SELECT column_name FROM database.table WHERE REGEXP_LIKE(column_name, pattern)` | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | +| `REPLACE` | `REPLACE(string, old_string, new_string)` | Replaces all instances of old_string within string with new_string. | +| `SUBSTRING` | `SUBSTRING(string, string_position, length_of_substring)` | Extracts a specified number of characters from a string. | +| `TRIM` | `TRIM([character(s) FROM] string)` | Removes leading and trailing spaces, or specified character(s), from a string.
| +| `UPPER` | `UPPER(string)` | Converts a string to upper-case. | ## Operators @@ -116,9 +102,9 @@ This SQL keywords reference contains the SQL functions available in HarperDB. | Keyword | Syntax | Description | | ------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------- | -| BETWEEN | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ BETWEEN _value_1_ AND _value_2_ | (inclusive) Returns values(numbers, text, or dates) within a given range. | -| IN | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IN(_value(s)_) | Used to specify multiple values in a WHERE clause. | -| LIKE | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_n_ LIKE _pattern_ | Searches for a specified pattern within a WHERE clause. | +| `BETWEEN` | `SELECT column_name(s) FROM database.table WHERE column_name BETWEEN value_1 AND value_2` | (inclusive) Returns values (numbers, text, or dates) within a given range. | +| `IN` | `SELECT column_name(s) FROM database.table WHERE column_name IN(value(s))` | Used to specify multiple values in a WHERE clause. | +| `LIKE` | `SELECT column_name(s) FROM database.table WHERE column_name LIKE pattern` | Searches for a specified pattern within a WHERE clause. |
| -| GROUP BY | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ GROUP BY _column_name(s)_ ORDER BY _column_name(s)_ | Groups rows that have the same values into summary rows. | -| HAVING | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ GROUP BY _column_name(s)_ HAVING _condition_ ORDER BY _column_name(s)_ | Filters data based on a group or aggregate function. | -| SELECT | SELECT _column_name(s)_ FROM _database.table_ | Selects data from table. | -| WHERE | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ | Extracts records based on a defined condition. | +| `DISTINCT` | `SELECT DISTINCT column_name(s) FROM database.table` | Returns only unique values, eliminating duplicate records. | +| `FROM` | `FROM database.table` | Used to list the database(s), table(s), and any joins required for a SQL statement. | +| `GROUP BY` | `SELECT column_name(s) FROM database.table WHERE condition GROUP BY column_name(s) ORDER BY column_name(s)` | Groups rows that have the same values into summary rows. | +| `HAVING` | `SELECT column_name(s) FROM database.table WHERE condition GROUP BY column_name(s) HAVING condition ORDER BY column_name(s)` | Filters data based on a group or aggregate function. | +| `SELECT` | `SELECT column_name(s) FROM database.table` | Selects data from table. | +| `WHERE` | `SELECT column_name(s) FROM database.table WHERE condition` | Extracts records based on a defined condition. 
| ### Joins -| Keyword | Syntax | Description | -| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| CROSS JOIN | SELECT _column_name(s)_ FROM _database.table_1_ CROSS JOIN _database.table_2_ | Returns a paired combination of each row from _table_1_ with row from _table_2_. _Note: CROSS JOIN can return very large result sets and is generally considered bad practice._ | -| FULL OUTER | SELECT _column_name(s)_ FROM _database.table_1_ FULL OUTER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ WHERE _condition_ | Returns all records when there is a match in either _table_1_ (left table) or _table_2_ (right table). | -| [INNER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ INNER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return only matching records from _table_1_ (left table) and _table_2_ (right table). The INNER keyword is optional and does not affect the result. | -| LEFT [OUTER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ LEFT OUTER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return all records from _table_1_ (left table) and matching data from _table_2_ (right table). The OUTER keyword is optional and does not affect the result. | -| RIGHT [OUTER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ RIGHT OUTER JOIN _database.table_2_ ON _table_1.column_name = table_2.column_name_ | Return all records from _table_2_ (right table) and matching data from _table_1_ (left table). The OUTER keyword is optional and does not affect the result. 
| +| Keyword | Syntax | Description | | ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `CROSS JOIN` | `SELECT column_name(s) FROM database.table_1 CROSS JOIN database.table_2` | Returns a paired combination of each row from `table_1` with each row from `table_2`. Note: CROSS JOIN can return very large result sets and is generally considered bad practice. | | `FULL OUTER` | `SELECT column_name(s) FROM database.table_1 FULL OUTER JOIN database.table_2 ON table_1.column_name = table_2.column_name WHERE condition` | Returns all records when there is a match in either `table_1` (left table) or `table_2` (right table). | | `[INNER] JOIN` | `SELECT column_name(s) FROM database.table_1 INNER JOIN database.table_2 ON table_1.column_name = table_2.column_name` | Returns only matching records from `table_1` (left table) and `table_2` (right table). The INNER keyword is optional and does not affect the result. | | `LEFT [OUTER] JOIN` | `SELECT column_name(s) FROM database.table_1 LEFT OUTER JOIN database.table_2 ON table_1.column_name = table_2.column_name` | Returns all records from `table_1` (left table) and matching data from `table_2` (right table). The OUTER keyword is optional and does not affect the result. | | `RIGHT [OUTER] JOIN` | `SELECT column_name(s) FROM database.table_1 RIGHT OUTER JOIN database.table_2 ON table_1.column_name = table_2.column_name` | Returns all records from `table_2` (right table) and matching data from `table_1` (left table). The OUTER keyword is optional and does not affect the result.
| ### Predicates | Keyword | Syntax | Description | | ----------- | ----------------------------------------------------------------------------- | -------------------------- | -| IS NOT NULL | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IS NOT NULL | Tests for non-null values. | -| IS NULL | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IS NULL | Tests for null values. | +| `IS NOT NULL` | `SELECT column_name(s) FROM database.table WHERE column_name IS NOT NULL` | Tests for non-null values. | +| `IS NULL` | `SELECT column_name(s) FROM database.table WHERE column_name IS NULL` | Tests for null values. | ### Statements | Keyword | Syntax | Description | | ------- | --------------------------------------------------------------------------------------------- | ----------------------------------- | -| DELETE | DELETE FROM _database.table_ WHERE condition | Deletes existing data from a table. | -| INSERT | INSERT INTO _database.table(column_name(s))_ VALUES(_value(s)_) | Inserts new records into a table. | -| UPDATE | UPDATE _database.table_ SET _column_1 = value_1, column_2 = value_2, ....,_ WHERE _condition_ | Alters existing records in a table. | +| `DELETE` | `DELETE FROM database.table WHERE condition` | Deletes existing data from a table. | +| `INSERT` | `INSERT INTO database.table(column_name(s)) VALUES(value(s))` | Inserts new records into a table. | +| `UPDATE` | `UPDATE database.table SET column_1 = value_1, column_2 = value_2, .... WHERE condition` | Alters existing records in a table. 
| \ No newline at end of file diff --git a/versioned_docs/version-4.3/developers/sql-guide/json-search.md b/versioned_docs/version-4.3/developers/sql-guide/json-search.md index 94905614..0f926ca5 100644 --- a/versioned_docs/version-4.3/developers/sql-guide/json-search.md +++ b/versioned_docs/version-4.3/developers/sql-guide/json-search.md @@ -12,7 +12,7 @@ HarperDB automatically indexes all top level attributes in a row / object writte ## Syntax -SEARCH_JSON(_expression, attribute_) +`SEARCH_JSON(expression, attribute)` Executes the supplied string _expression_ against data of the defined top level _attribute_ for each row. The expression both filters and defines output from the JSON document. @@ -117,7 +117,7 @@ SEARCH_JSON( ) ``` -The first argument passed to SEARCH_JSON is the expression to execute against the second argument which is the cast attribute on the credits table. This expression will execute for every row. Looking into the expression it starts with “$\[…]” this tells the expression to iterate all elements of the cast array. +The first argument passed to SEARCH_JSON is the expression to execute against the second argument, which is the cast attribute on the credits table. This expression will execute for every row. Looking at the expression, it starts with "$[…]", which tells the expression to iterate all elements of the cast array. Then the expression tells the function to only return entries where the name attribute matches any of the actors defined in the array: ``` name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L.
Jackson", "Gwyneth Paltrow", "Don Cheadle"] ``` -So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{“actor”: name, “character”: character}`. This tells the function to create a specific object for each matching entry. +So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{"actor": name, "character": character}`. This tells the function to create a specific object for each matching entry. **Sample Result** diff --git a/versioned_docs/version-4.3/technical-details/reference/analytics.md b/versioned_docs/version-4.3/technical-details/reference/analytics.md index c1975c66..c6500e78 100644 --- a/versioned_docs/version-4.3/technical-details/reference/analytics.md +++ b/versioned_docs/version-4.3/technical-details/reference/analytics.md @@ -104,14 +104,14 @@ And a summary record looks like: The following are general resource usage statistics that are tracked: -- memory - This includes RSS, heap, buffer and external data usage. -- utilization - How much of the time the worker was processing requests. +- `memory` - This includes RSS, heap, buffer and external data usage. +- `utilization` - How much of the time the worker was processing requests. - mqtt-connections - The number of MQTT connections. The following types of information is tracked for each HTTP request: -- success - How many requests returned a successful response (20x response code). TTFB - Time to first byte in the response to the client. -- transfer - Time to finish the transfer of the data to the client. +- `success` - How many requests returned a successful response (20x response code). TTFB - Time to first byte in the response to the client. +- `transfer` - Time to finish the transfer of the data to the client. - bytes-sent - How many bytes of data were sent to the client. 
Requests are categorized by operation name for the operations API, by resource name for the REST API, and by command for the MQTT interface. diff --git a/versioned_docs/version-4.4/administration/harper-studio/create-account.md b/versioned_docs/version-4.4/administration/harper-studio/create-account.md index ad13b535..2c8a43bc 100644 --- a/versioned_docs/version-4.4/administration/harper-studio/create-account.md +++ b/versioned_docs/version-4.4/administration/harper-studio/create-account.md @@ -12,7 +12,7 @@ Start at the [Harper Studio sign up page](https://studio.harperdb.io/sign-up). - Email Address - Subdomain - _Part of the URL that will be used to identify your Harper Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ + _Part of the URL that will be used to identify your Harper Cloud Instances. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ - Coupon Code (optional) diff --git a/versioned_docs/version-4.4/administration/harper-studio/instances.md b/versioned_docs/version-4.4/administration/harper-studio/instances.md index f17acb70..07da8097 100644 --- a/versioned_docs/version-4.4/administration/harper-studio/instances.md +++ b/versioned_docs/version-4.4/administration/harper-studio/instances.md @@ -26,7 +26,7 @@ A summary view of all instances within an organization can be viewed by clicking 1. Fill out Instance Info. 1. Enter Instance Name - _This will be used to build your instance URL.
For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com). The Instance URL will be previewed below._ 1. Enter Instance Username diff --git a/versioned_docs/version-4.4/administration/harper-studio/organizations.md b/versioned_docs/version-4.4/administration/harper-studio/organizations.md index 1bb56dd1..faae220e 100644 --- a/versioned_docs/version-4.4/administration/harper-studio/organizations.md +++ b/versioned_docs/version-4.4/administration/harper-studio/organizations.md @@ -29,7 +29,7 @@ A new organization can be created as follows: - Enter Organization Name _This is used for descriptive purposes only._ - Enter Organization Subdomain - _Part of the URL that will be used to identify your Harper Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ + _Part of the URL that will be used to identify your Harper Cloud Instances. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ 4. Click Create Organization. ## Delete an Organization diff --git a/versioned_docs/version-4.4/administration/logging/standard-logging.md b/versioned_docs/version-4.4/administration/logging/standard-logging.md index a5116ed7..044c2260 100644 --- a/versioned_docs/version-4.4/administration/logging/standard-logging.md +++ b/versioned_docs/version-4.4/administration/logging/standard-logging.md @@ -22,15 +22,15 @@ For example, a typical log entry looks like: The components of a log entry are: -- timestamp - This is the date/time stamp when the event occurred -- level - This is an associated log level that gives a rough guide to the importance and urgency of the message. 
The available log levels in order of least urgent (and more verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`. -- thread/ID - This reports the name of the thread and the thread ID that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process. Key threads are: - - main - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads - - http - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions. - - Clustering\* - These are threads and processes that handle replication. - - job - These are job threads that have been started to handle operations that are executed in a separate job thread. -- tags - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags. -- message - This is the main message that was reported. +- `timestamp` - This is the date/time stamp when the event occurred +- `level` - This is an associated log level that gives a rough guide to the importance and urgency of the message. The available log levels in order of least urgent (and more verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`. +- `thread/ID` - This reports the name of the thread and the thread ID that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process. Key threads are: + - `main` - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads + - `http` - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions. + - `Clustering` - These are threads and processes that handle replication. 
+ - `job` - These are job threads that have been started to handle operations that are executed in a separate job thread. +- `tags` - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags. +- `message` - This is the main message that was reported. We try to keep logging to a minimum by default, so the default log level is `error`. If you require more information from the logs, setting a more verbose log level will provide that. @@ -46,7 +46,7 @@ Harper logs can optionally be streamed to standard streams. Logging to standard ## Logging Rotation -Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space. For more information see “logging” in our [config docs](../../deployments/configuration). +Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This allows for organized storage and efficient use of disk space. For more information see "logging" in our [config docs](../../deployments/configuration).
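For orientation, a rotation setup in the main configuration file might look like the sketch below. The key names and values here are illustrative assumptions only — take the real option names from the "logging" section of the config docs linked above, not from this snippet:

```yaml
# Illustrative only — key names are assumptions; consult the "logging"
# section of the configuration docs for the actual option names.
logging:
  rotation:
    enabled: true    # turn rotation on
    compress: true   # compress rotated files to save disk space
    interval: 1D     # rotate on a daily schedule
    maxSize: 100K    # also rotate once a file reaches this size
```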
## Read Logs via the API diff --git a/versioned_docs/version-4.4/deployments/install-harper/linux.md b/versioned_docs/version-4.4/deployments/install-harper/linux.md index 27a9dc79..cae27c9d 100644 --- a/versioned_docs/version-4.4/deployments/install-harper/linux.md +++ b/versioned_docs/version-4.4/deployments/install-harper/linux.md @@ -20,7 +20,7 @@ These instructions assume that the following has already been completed: While you will need to access Harper through port 9925 for the administration through the operations API, and port 9932 for clustering, for higher level of security, you may want to consider keeping both of these ports restricted to a VPN or VPC, and only have the application interface (9926 by default) exposed to the public Internet. -For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default “ubuntu” user account. +For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default "ubuntu" user account. --- diff --git a/versioned_docs/version-4.4/developers/applications/caching.md b/versioned_docs/version-4.4/developers/applications/caching.md index 79e48440..4493111f 100644 --- a/versioned_docs/version-4.4/developers/applications/caching.md +++ b/versioned_docs/version-4.4/developers/applications/caching.md @@ -22,9 +22,9 @@ While you can provide a single expiration time, there are actually several expir You can provide a single expiration and it defines the behavior for all three. You can also provide three settings for expiration, through table directives: -- expiration - The amount of time until a record goes stale. -- eviction - The amount of time after expiration before a record can be evicted (defaults to zero). -- scanInterval - The interval for scanning for expired records (defaults to one quarter of the total of expiration and eviction). 
+- `expiration` - The amount of time until a record goes stale. +- `eviction` - The amount of time after expiration before a record can be evicted (defaults to zero). +- `scanInterval` - The interval for scanning for expired records (defaults to one quarter of the total of expiration and eviction). ## Define External Data Source diff --git a/versioned_docs/version-4.4/developers/applications/define-routes.md b/versioned_docs/version-4.4/developers/applications/define-routes.md index 720f4f06..37c3d016 100644 --- a/versioned_docs/version-4.4/developers/applications/define-routes.md +++ b/versioned_docs/version-4.4/developers/applications/define-routes.md @@ -22,7 +22,7 @@ However, you can specify the path to be `/` if you wish to have your routes hand - The route below, using the default config, within the **dogs** project, with a route of **breeds** would be available at **http:/localhost:9926/dogs/breeds**. -In effect, this route is just a pass-through to Harper. The same result could have been achieved by hitting the core Harper API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the “helper methods” section, below. +In effect, this route is just a pass-through to Harper. The same result could have been achieved by hitting the core Harper API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the "helper methods" section, below. ```javascript export default async (server, { hdbCore, logger }) => { @@ -39,7 +39,7 @@ export default async (server, { hdbCore, logger }) => { For endpoints where you want to execute multiple operations against Harper, or perform additional processing (like an ML classification, or an aggregation, or a call to a 3rd party API), you can define your own logic in the handler. The function below will execute a query against the dogs table, and filter the results to only return those dogs over 4 years in age. 
-**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which- as the name implies- bypasses all user authentication. See the security concerns and mitigations in the “helper methods” section, below.** +**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which, as the name implies, bypasses all user authentication. See the security concerns and mitigations in the "helper methods" section, below.** ```javascript export default async (server, { hdbCore, logger }) => { diff --git a/versioned_docs/version-4.4/developers/clustering/index.md b/versioned_docs/version-4.4/developers/clustering/index.md index 95c3433c..fddd3851 100644 --- a/versioned_docs/version-4.4/developers/clustering/index.md +++ b/versioned_docs/version-4.4/developers/clustering/index.md @@ -22,10 +22,10 @@ A common use case is an edge application collecting and analyzing sensor data th Harper simplifies the architecture of such an application with its bi-directional, table-level replication: -- The edge instance subscribes to a “thresholds” table on the cloud instance, so the application only makes localhost calls to get the thresholds. -- The application continually pushes sensor data into a “sensor_data” table via the localhost API, comparing it to the threshold values as it does so.
+- When a threshold violation occurs, the application adds a record to the "alerts" table. +- The application appends to that record array "sensor_data" entries for the 60 seconds (or minutes, or days) leading up to the threshold violation. +- The edge instance publishes the "alerts" table up to the cloud instance. By letting Harper focus on the fault-tolerant logistics of transporting your data, you get to write less code. By moving data only when and where it’s needed, you lower storage and bandwidth costs. And by restricting your app to only making local calls to Harper, you reduce the overall exposure of your application to outside forces. diff --git a/versioned_docs/version-4.4/developers/components/reference.md b/versioned_docs/version-4.4/developers/components/reference.md index 4b709f86..bb041616 100644 --- a/versioned_docs/version-4.4/developers/components/reference.md +++ b/versioned_docs/version-4.4/developers/components/reference.md @@ -105,7 +105,7 @@ There are two key types of Harper Extensions: **Resource Extension** and **Proto Functionally, what makes an extension a component is the contents of `config.yaml`. Unlike the Application Template referenced earlier, which specified multiple components within the `config.yaml`, an extension will specify an `extensionModule` option. -- **extensionModule** - `string` - _required_ - A path to the extension module source code. The path must resolve from the root of the extension module directory. +- `extensionModule` - `string` - _required_ - A path to the extension module source code. The path must resolve from the root of the extension module directory. For example, the [Harper Next.js Extension](https://github.com/HarperDB/nextjs) `config.yaml` specifies `extensionModule: ./extension.js`. @@ -129,9 +129,9 @@ Other than their execution behavior, the `handleFile()` and `setupFile()` method Any [Resource Extension](#resource-extension) can be configured with the `files`, `path`, and `root` options. 
These options control how _files_ and _directories_ are resolved in order to be passed to the extension's `handleFile()`, `setupFile()`, `handleDirectory()`, and `setupDirectory()` methods. -- **files** - `string` - _required_ - Specifies the set of files and directories that should be handled by the component. Can be a glob pattern. -- **path** - `string` - _optional_ - Specifies the URL path to be handled by the component. -- **root** - `string` - _optional_ - Specifies the root directory for mapping file paths to the URLs. +- `files` - `string` - _required_ - Specifies the set of files and directories that should be handled by the component. Can be a glob pattern. +- `path` - `string` - _optional_ - Specifies the URL path to be handled by the component. +- `root` - `string` - _optional_ - Specifies the root directory for mapping file paths to the URLs. For example, to configure the [static](./built-in#static) component to serve all files from `web` to the root URL path: @@ -188,11 +188,11 @@ These methods are for processing individual files. They can be async.
Parameters: -- **contents** - `Buffer` - The contents of the file -- **urlPath** - `string` - The recommended URL path of the file -- **path** - `string` - The relative path of the file +- `contents` - `Buffer` - The contents of the file +- `urlPath` - `string` - The recommended URL path of the file +- `path` - `string` - The relative path of the file -- **resources** - `Object` - A collection of the currently loaded resources +- `resources` - `Object` - A collection of the currently loaded resources Returns: `void | Promise` @@ -212,10 +212,10 @@ If the function returns or resolves a truthy value, then the component loading s Parameters: -- **urlPath** - `string` - The recommended URL path of the file -- **path** - `string` - The relative path of the directory +- `urlPath` - `string` - The recommended URL path of the file +- `path` - `string` - The relative path of the directory -- **resources** - `Object` - A collection of the currently loaded resources +- `resources` - `Object` - A collection of the currently loaded resources Returns: `boolean | void | Promise` @@ -249,6 +249,6 @@ A Protocol Extension is made up of two distinct methods, [`start()`](#startoptio Parameters: -- **options** - `Object` - An object representation of the extension's configuration options. +- `options` - `Object` - An object representation of the extension's configuration options. Returns: `Object` - An object that implements any of the [Resource Extension APIs](#resource-extension-api) diff --git a/versioned_docs/version-4.4/developers/miscellaneous/google-data-studio.md b/versioned_docs/version-4.4/developers/miscellaneous/google-data-studio.md index 02fea100..7939417b 100644 --- a/versioned_docs/version-4.4/developers/miscellaneous/google-data-studio.md +++ b/versioned_docs/version-4.4/developers/miscellaneous/google-data-studio.md @@ -19,9 +19,9 @@ Get started by selecting the Harper connector from the [Google Data Studio Partn 1. 
Log in to [https://datastudio.google.com/](https://datastudio.google.com/). 1. Add a new Data Source using the Harper connector. The current release version can be added as a data source by following this link: [Harper Google Data Studio Connector](https://datastudio.google.com/datasources/create?connectorId=AKfycbxBKgF8FI5R42WVxO-QCOq7dmUys0HJrUJMkBQRoGnCasY60_VJeO3BhHJPvdd20-S76g). 1. Authorize the connector to access other servers on your behalf (this allows the connector to contact your database). -1. Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word “Basic” at the start of it. -1. Check the box for “Secure Connections Only” if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer. -1. Check the box for “Allow Bad Certs” if your Harper instance does not have a valid SSL certificate. [Harper Cloud](../../deployments/harper-cloud/) always has valid certificates, and so will never require this to be checked. Instances you set up yourself may require this, if you are using self-signed certs. If you are using [Harper Cloud](../../deployments/harper-cloud/) or another instance you know should always have valid SSL certificates, do not check this box. +1. Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word "Basic" at the start of it. +1. Check the box for "Secure Connections Only" if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer. +1. Check the box for "Allow Bad Certs" if your Harper instance does not have a valid SSL certificate. [Harper Cloud](../../deployments/harper-cloud/) always has valid certificates, and so will never require this to be checked. 
Instances you set up yourself may require this, if you are using self-signed certs. If you are using [Harper Cloud](../../deployments/harper-cloud/) or another instance you know should always have valid SSL certificates, do not check this box. 1. Choose your Query Type. This determines what information the configuration will ask for after pressing the Next button. - Table will ask you for a Schema and a Table to return all fields of using `SELECT *`. - SQL will ask you for the SQL query you’re using to retrieve fields from the database. You may `JOIN` multiple tables together, and use Harper specific SQL functions, along with the usual power SQL grants. diff --git a/versioned_docs/version-4.4/developers/operations-api/bulk-operations.md b/versioned_docs/version-4.4/developers/operations-api/bulk-operations.md index 372e5bb3..aef33230 100644 --- a/versioned_docs/version-4.4/developers/operations-api/bulk-operations.md +++ b/versioned_docs/version-4.4/developers/operations-api/bulk-operations.md @@ -8,11 +8,11 @@ title: Bulk Operations Ingests CSV data, provided directly in the operation as an `insert`, `update` or `upsert` into the specified database table. -- operation _(required)_ - must always be `csv_data_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- data _(required)_ - csv data to import into Harper +- `operation` _(required)_ - must always be `csv_data_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. 
The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `data` _(required)_ - csv data to import into Harper ### Body @@ -43,11 +43,11 @@ Ingests CSV data, provided via a path on the local filesystem, as an `insert`, ` _Note: The CSV file must reside on the same machine on which Harper is running. For example, the path to a CSV on your computer will produce an error if your Harper instance is a cloud instance._ -- operation _(required)_ - must always be `csv_file_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- file*path *(required)\_ - path to the csv file on the host running Harper +- `operation` _(required)_ - must always be `csv_file_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `file_path` _(required)_ - path to the csv file on the host running Harper ### Body @@ -76,11 +76,11 @@ _Note: The CSV file must reside on the same machine on which Harper is running. Ingests CSV data, provided via URL, as an `insert`, `update` or `upsert` into the specified database table. -- operation _(required)_ - must always be `csv_url_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. 
The default is `data` -- table _(required)_ - name of the table where you are loading your data -- csv*url *(required)\_ - URL to the csv +- `operation` _(required)_ - must always be `csv_url_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `csv_url` _(required)_ - URL to the csv ### Body @@ -109,16 +109,16 @@ Ingests CSV data, provided via URL, as an `insert`, `update` or `upsert` into th This operation allows users to import CSV or JSON files from an AWS S3 bucket as an `insert`, `update` or `upsert`. -- operation _(required)_ - must always be `import_from_s3` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- s3 _(required)_ - object containing required AWS S3 bucket info for operation: - - aws_access_key_id - AWS access key for authenticating into your S3 bucket - - aws_secret_access_key - AWS secret for authenticating into your S3 bucket - - bucket - AWS S3 bucket to import from - - key - the name of the file to import - _the file must include a valid file extension ('.csv' or '.json')_ - - region - the region of the bucket +- `operation` _(required)_ - must always be `import_from_s3` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. 
The default is `data`
+- `table` _(required)_ - name of the table where you are loading your data
+- `s3` _(required)_ - object containing required AWS S3 bucket info for operation:
+ - `aws_access_key_id` - AWS access key for authenticating into your S3 bucket
+ - `aws_secret_access_key` - AWS secret for authenticating into your S3 bucket
+ - `bucket` - AWS S3 bucket to import from
+ - `key` - the name of the file to import - _the file must include a valid file extension ('.csv' or '.json')_
+ - `region` - the region of the bucket

### Body

diff --git a/versioned_docs/version-4.4/developers/operations-api/clustering-nats.md b/versioned_docs/version-4.4/developers/operations-api/clustering-nats.md
index a45c593e..45e160c4 100644
--- a/versioned_docs/version-4.4/developers/operations-api/clustering-nats.md
+++ b/versioned_docs/version-4.4/developers/operations-api/clustering-nats.md
@@ -10,11 +10,11 @@ Adds a route/routes to either the hub or leaf server cluster configuration. This

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `cluster_set_routes`
-- server _(required)_ - must always be `hub` or `leaf`, in most cases you should use `hub` here
-- routes _(required)_ - must always be an objects array with a host and port:
- - host - the host of the remote instance you are clustering to
- - port - the clustering port of the remote instance you are clustering to, in most cases this is the value in `clustering.hubServer.cluster.network.port` on the remote instance `harperdb-config.yaml`
+- `operation` _(required)_ - must always be `cluster_set_routes`
+- `server` _(required)_ - must always be `hub` or `leaf`, in most cases you should use `hub` here
+- `routes` _(required)_ - must always be an array of objects, each with a host and port:
+ - `host` - the host of the remote instance you are clustering to
+ - `port` - the clustering port of the remote instance you are clustering to, in most cases this is the value in
`clustering.hubServer.cluster.network.port` in the remote instance's `harperdb-config.yaml`

### Body

@@ -78,7 +78,7 @@ Gets all the hub and leaf server routes from the config file.

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `cluster_get_routes`
+- `operation` _(required)_ - must always be `cluster_get_routes`

### Body

@@ -122,8 +122,8 @@ Removes route(s) from hub and/or leaf server routes array in config file. Return

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `cluster_delete_routes`
-- routes _required_ - Must be an array of route object(s)
+- `operation` _(required)_ - must always be `cluster_delete_routes`
+- `routes` _(required)_ - Must be an array of route object(s)

### Body

@@ -162,14 +162,14 @@ Registers an additional Harper instance with associated subscriptions. Learn mor

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `add_node`
-- node*name *(required)\_ - the node name of the remote node
-- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`:
- schema - the schema to replicate from
- table - the table to replicate from
- subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table
- publish - a boolean which determines if transactions on the local table should be replicated on the remote table
- start*time *(optional)\_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format
+- `operation` _(required)_ - must always be `add_node`
+- `node_name` _(required)_ - the node name of the remote node
+- `subscriptions` _(required)_ - The relationship created between nodes.
Must be an object array and include `schema`, `table`, `subscribe` and `publish`:
+ - `schema` - the schema to replicate from
+ - `table` - the table to replicate from
+ - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table
+ - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table
+ - `start_time` _(optional)_ - How far back to go to get transactions from the node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format

### Body

@@ -205,14 +205,14 @@ Modifies an existing Harper instance registration and associated subscriptions.

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `update_node`
-- node*name *(required)\_ - the node name of the remote node you are updating
-- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`:
- schema - the schema to replicate from
- table - the table to replicate from
- subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table
- publish - a boolean which determines if transactions on the local table should be replicated on the remote table
- start*time *(optional)\_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format
+- `operation` _(required)_ - must always be `update_node`
+- `node_name` _(required)_ - the node name of the remote node you are updating
+- `subscriptions` _(required)_ - The relationship created between nodes.
Must be an object array and include `schema`, `table`, `subscribe` and `publish`:
+ - `schema` - the schema to replicate from
+ - `table` - the table to replicate from
+ - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table
+ - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table
+ - `start_time` _(optional)_ - How far back to go to get transactions from the node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format

### Body

@@ -248,13 +248,13 @@ A more adeptly named alias for add and update node. This operation behaves as a

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `set_node_replication`
-- node*name *(required)\_ - the node name of the remote node you are updating
-- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and `table`, `subscribe` and `publish`:
- database _(optional)_ - the database to replicate from
- table _(required)_ - the table to replicate from
- subscribe _(required)_ - a boolean which determines if transactions on the remote table should be replicated on the local table
- publish _(required)_ - a boolean which determines if transactions on the local table should be replicated on the remote table
+- `operation` _(required)_ - must always be `set_node_replication`
+- `node_name` _(required)_ - the node name of the remote node you are updating
+- `subscriptions` _(required)_ - The relationship created between nodes.
Must be an object array and include `table`, `subscribe` and `publish`:
+ - `database` _(optional)_ - the database to replicate from
+ - `table` _(required)_ - the table to replicate from
+ - `subscribe` _(required)_ - a boolean which determines if transactions on the remote table should be replicated on the local table
+ - `publish` _(required)_ - a boolean which determines if transactions on the local table should be replicated on the remote table
-

### Body

@@ -289,7 +289,7 @@ Returns an array of status objects from a cluster. A status object will contain

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `cluster_status`
+- `operation` _(required)_ - must always be `cluster_status`

### Body

@@ -336,10 +336,10 @@ Returns an object array of enmeshed nodes. Each node object will contain the nam

_Operation is restricted to super_user roles only_

-- operation _(required)_- must always be `cluster_network`
-- timeout (_optional_) - the amount of time in milliseconds to wait for a response from the network. Must be a number
-- connected*nodes (\_optional*) - omit `connected_nodes` from the response. Must be a boolean. Defaults to `false`
-- routes (_optional_) - omit `routes` from the response. Must be a boolean. Defaults to `false`
+- `operation` _(required)_ - must always be `cluster_network`
+- `timeout` _(optional)_ - the amount of time in milliseconds to wait for a response from the network. Must be a number
+- `connected_nodes` _(optional)_ - omit `connected_nodes` from the response. Must be a boolean. Defaults to `false`
+- `routes` _(optional)_ - omit `routes` from the response. Must be a boolean. Defaults to `false`

### Body

@@ -383,8 +383,8 @@ Removes a Harper instance and associated subscriptions from the cluster.
Learn m _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `remove_node` -- name _(required)_ - The name of the node you are de-registering +- `operation` _(required)_ - must always be `remove_node` +- `node_name` _(required)_ - The name of the node you are de-registering ### Body @@ -412,8 +412,8 @@ Learn more about [Harper clustering here](../clustering/). _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `configure_cluster` -- connections _(required)_ - must be an object array with each object containing `node_name` and `subscriptions` for that node +- `operation` _(required)_ - must always be `configure_cluster` +- `connections` _(required)_ - must be an object array with each object containing `node_name` and `subscriptions` for that node ### Body @@ -463,10 +463,10 @@ Will purge messages from a stream _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `purge_stream` -- database _(required)_ - the name of the database where the streams table resides -- table _(required)_ - the name of the table that belongs to the stream -- options _(optional)_ - control how many messages get purged. Options are: +- `operation` _(required)_ - must always be `purge_stream` +- `database` _(required)_ - the name of the database where the streams table resides +- `table` _(required)_ - the name of the table that belongs to the stream +- `options` _(optional)_ - control how many messages get purged. 
Options are: - `keep` - purge will keep this many most recent messages - `seq` - purge all messages up to, but not including, this sequence diff --git a/versioned_docs/version-4.4/developers/operations-api/clustering.md b/versioned_docs/version-4.4/developers/operations-api/clustering.md index 8e49285e..ee6469ef 100644 --- a/versioned_docs/version-4.4/developers/operations-api/clustering.md +++ b/versioned_docs/version-4.4/developers/operations-api/clustering.md @@ -14,16 +14,16 @@ Adds a new Harper instance to the cluster. If `subscriptions` are provided, it w _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_node` -- hostname or url _(required)_ - one of these fields is required. You must provide either the `hostname` or the `url` of the node you want to add -- verify_tls _(optional)_ - a boolean which determines if the TLS certificate should be verified. This will allow the Harper default self-signed certificates to be accepted. Defaults to `true` -- authorization _(optional)_ - an object or a string which contains the authorization information for the node being added. If it is an object, it should contain `username` and `password` fields. If it is a string, it should use HTTP `Authorization` style credentials -- retain_authorization _(optional)_ - a boolean which determines if the authorization credentials should be retained/stored and used everytime a connection is made to this node. If `true`, the authorization will be stored on the node record. Generally this should not be used, as mTLS/certificate based authorization is much more secure and safe, and avoids the need for storing credentials. Defaults to `false`. -- subscriptions _(optional)_ - The relationship created between nodes. If not provided a fully replicated cluster will be setup. 
Must be an object array and include `database`, `table`, `subscribe` and `publish`:
- database - the database to replicate
- table - the table to replicate
- subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table
- publish - a boolean which determines if transactions on the local table should be replicated on the remote table
+- `operation` _(required)_ - must always be `add_node`
+- `hostname` or `url` _(required)_ - one of these fields is required. You must provide either the `hostname` or the `url` of the node you want to add
+- `verify_tls` _(optional)_ - a boolean which determines if the TLS certificate should be verified. This will allow the Harper default self-signed certificates to be accepted. Defaults to `true`
+- `authorization` _(optional)_ - an object or a string which contains the authorization information for the node being added. If it is an object, it should contain `username` and `password` fields. If it is a string, it should use HTTP `Authorization` style credentials
+- `retain_authorization` _(optional)_ - a boolean which determines if the authorization credentials should be retained/stored and used every time a connection is made to this node. If `true`, the authorization will be stored on the node record. Generally this should not be used, as mTLS/certificate based authorization is much more secure and safe, and avoids the need for storing credentials. Defaults to `false`.
+- `subscriptions` _(optional)_ - The relationship created between nodes. If not provided a fully replicated cluster will be set up.
Must be an object array and include `database`, `table`, `subscribe` and `publish`: + - `database` - the database to replicate + - `table` - the table to replicate + - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table ### Body @@ -57,13 +57,13 @@ _Operation is restricted to super_user roles only_ _Note: will attempt to add the node if it does not exist_ -- operation _(required)_ - must always be `update_node` -- hostname _(required)_ - the `hostname` of the remote node you are updating -- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `database`, `table`, `subscribe` and `publish`: - - database - the database to replicate from - - table - the table to replicate from - - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish - a boolean which determines if transactions on the local table should be replicated on the remote table +- `operation` _(required)_ - must always be `update_node` +- `hostname` _(required)_ - the `hostname` of the remote node you are updating +- `subscriptions` _(required)_ - The relationship created between nodes. 
Must be an object array and include `database`, `table`, `subscribe` and `publish`: + - `database` - the database to replicate from + - `table` - the table to replicate from + - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table ### Body @@ -98,8 +98,8 @@ Removes a Harper node from the cluster and stops replication, [Learn more about _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `remove_node` -- name _(required)_ - The name of the node you are removing +- `operation` _(required)_ - must always be `remove_node` +- `name` _(required)_ - The name of the node you are removing ### Body @@ -128,7 +128,7 @@ Returns an array of status objects from a cluster. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_status` +- `operation` _(required)_ - must always be `cluster_status` ### Body @@ -179,8 +179,8 @@ Bulk create/remove subscriptions for any number of remote nodes. Resets and repl _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `configure_cluster` -- connections _(required)_ - must be an object array with each object following the `add_node` schema. +- `operation` _(required)_ - must always be `configure_cluster` +- `connections` _(required)_ - must be an object array with each object following the `add_node` schema. ### Body @@ -240,8 +240,8 @@ Adds a route/routes to the `replication.routes` configuration. This operation be _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_set_routes` -- routes _(required)_ - the routes field is an array that specifies the routes for clustering. Each element in the array can be either a string or an object with `hostname` and `port` properties. 
+- `operation` _(required)_ - must always be `cluster_set_routes` +- `routes` _(required)_ - the routes field is an array that specifies the routes for clustering. Each element in the array can be either a string or an object with `hostname` and `port` properties. ### Body @@ -282,7 +282,7 @@ Gets the replication routes from the Harper config file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_get_routes` +- `operation` _(required)_ - must always be `cluster_get_routes` ### Body @@ -312,8 +312,8 @@ Removes route(s) from the Harper config file. Returns a deletion success message _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_delete_routes` -- routes _required_ - Must be an array of route object(s) +- `operation` _(required)_ - must always be `cluster_delete_routes` +- `routes` _(required)_ - Must be an array of route object(s) ### Body diff --git a/versioned_docs/version-4.4/developers/operations-api/components.md b/versioned_docs/version-4.4/developers/operations-api/components.md index 1ca4f42d..ffbe8235 100644 --- a/versioned_docs/version-4.4/developers/operations-api/components.md +++ b/versioned_docs/version-4.4/developers/operations-api/components.md @@ -10,9 +10,9 @@ Creates a new component project in the component root directory using a predefin _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_component` -- project _(required)_ - the name of the project you wish to create -- replicated _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `add_component` +- `project` _(required)_ - the name of the project you wish to create +- `replicated` _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. 
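As an editor's aside to the `add_component` parameter list above: the three parameters combine into a small JSON request body. A minimal sketch in Python (the project name is illustrative, not taken from the patch):

```python
import json

# Sketch of an add_component request body built from the parameters
# documented above. "my-component" is a placeholder project name.
payload = {
    "operation": "add_component",  # required, must always be "add_component"
    "project": "my-component",     # required, the project to create
    "replicated": True,            # optional, replicate to all cluster nodes
}

# Serialize to the JSON body that would be POSTed to the operations API.
print(json.dumps(payload))
```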
### Body @@ -75,13 +75,13 @@ _Note: After deploying a component a restart may be required_ _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `deploy_component` -- project _(required)_ - the name of the project you wish to deploy -- package _(optional)_ - this can be any valid GitHub or NPM reference -- payload _(optional)_ - a base64-encoded string representation of the .tar file. Must be a string -- restart _(optional)_ - must be either a boolean or the string `rolling`. If set to `rolling`, a rolling restart will be triggered after the component is deployed, meaning that each node in the cluster will be sequentially restarted (waiting for the last restart to start the next). If set to `true`, the restart will not be rolling, all nodes will be restarted in parallel. If `replicated` is `true`, the restart operations will be replicated across the cluster. -- replicated _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. -- install_command _(optional)_ - A command to use when installing the component. Must be a string. This can be used to install dependencies with pnpm or yarn, for example, like: `"install_command": "npm install -g pnpm && pnpm install"` +- `operation` _(required)_ - must always be `deploy_component` +- `project` _(required)_ - the name of the project you wish to deploy +- `package` _(optional)_ - this can be any valid GitHub or NPM reference +- `payload` _(optional)_ - a base64-encoded string representation of the .tar file. Must be a string +- `restart` _(optional)_ - must be either a boolean or the string `rolling`. If set to `rolling`, a rolling restart will be triggered after the component is deployed, meaning that each node in the cluster will be sequentially restarted (waiting for the last restart to start the next). If set to `true`, the restart will not be rolling, all nodes will be restarted in parallel. 
If `replicated` is `true`, the restart operations will be replicated across the cluster. +- `replicated` _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. +- `install_command` _(optional)_ - A command to use when installing the component. Must be a string. This can be used to install dependencies with pnpm or yarn, for example, like: `"install_command": "npm install -g pnpm && pnpm install"` ### Body @@ -118,9 +118,9 @@ Creates a temporary `.tar` file of the specified project folder, then reads it i _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `package_component` -- project _(required)_ - the name of the project you wish to package -- skip_node_modules _(optional)_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean +- `operation` _(required)_ - must always be `package_component` +- `project` _(required)_ - the name of the project you wish to package +- `skip_node_modules` _(optional)_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean ### Body @@ -151,11 +151,11 @@ Deletes a file from inside the component project or deletes the complete project _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_component` -- project _(required)_ - the name of the project you wish to delete or to delete from if using the `file` parameter -- file _(optional)_ - the path relative to your project folder of the file you wish to delete -- replicated _(optional)_ - if true, Harper will replicate the component deletion to all nodes in the cluster. Must be a boolean. -- restart _(optional)_ - if true, Harper will restart after dropping the component. Must be a boolean. 
+- `operation` _(required)_ - must always be `drop_component` +- `project` _(required)_ - the name of the project you wish to delete or to delete from if using the `file` parameter +- `file` _(optional)_ - the path relative to your project folder of the file you wish to delete +- `replicated` _(optional)_ - if true, Harper will replicate the component deletion to all nodes in the cluster. Must be a boolean. +- `restart` _(optional)_ - if true, Harper will restart after dropping the component. Must be a boolean. ### Body @@ -183,7 +183,7 @@ Gets all local component files and folders and any component config from `harper _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_components` +- `operation` _(required)_ - must always be `get_components` ### Body @@ -264,10 +264,10 @@ Gets the contents of a file inside a component project. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_component_file` -- project _(required)_ - the name of the project where the file is located -- file _(required)_ - the path relative to your project folder of the file you wish to view -- encoding _(optional)_ - the encoding that will be passed to the read file call. Defaults to `utf8` +- `operation` _(required)_ - must always be `get_component_file` +- `project` _(required)_ - the name of the project where the file is located +- `file` _(required)_ - the path relative to your project folder of the file you wish to view +- `encoding` _(optional)_ - the encoding that will be passed to the read file call. Defaults to `utf8` ### Body @@ -295,12 +295,12 @@ Creates or updates a file inside a component project. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_component_file` -- project _(required)_ - the name of the project the file is located in -- file _(required)_ - the path relative to your project folder of the file you wish to set -- payload _(required)_ - what will be written to the file -- encoding _(optional)_ - the encoding that will be passed to the write file call. Defaults to `utf8` -- replicated _(optional)_ - if true, Harper will replicate the component update to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `set_component_file` +- `project` _(required)_ - the name of the project the file is located in +- `file` _(required)_ - the path relative to your project folder of the file you wish to set +- `payload` _(required)_ - what will be written to the file +- `encoding` _(optional)_ - the encoding that will be passed to the write file call. Defaults to `utf8` +- `replicated` _(optional)_ - if true, Harper will replicate the component update to all nodes in the cluster. Must be a boolean. ### Body @@ -327,13 +327,13 @@ Adds an SSH key for deploying components from private repositories. This will al _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_ssh_key` -- name _(required)_ - the name of the key -- key _(required)_ - the private key contents. Line breaks must be delimited with -- host _(required)_ - the host for the ssh config (see below). Used as part of the `package` url when deploying a component using this key -- hostname _(required)_ - the hostname for the ssh config (see below). Used to map `host` to an actual domain (e.g. `github.com`) -- known_hosts _(optional)_ - the public SSH keys of the host your component will be retrieved from. If `hostname` is `github.com` this will be retrieved automatically. 
Line breaks must be delimited with -- replicated _(optional)_ - if true, Harper will replicate the key to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `add_ssh_key` +- `name` _(required)_ - the name of the key +- `key` _(required)_ - the private key contents. Line breaks must be delimited with +- `host` _(required)_ - the host for the ssh config (see below). Used as part of the `package` url when deploying a component using this key +- `hostname` _(required)_ - the hostname for the ssh config (see below). Used to map `host` to an actual domain (e.g. `github.com`) +- `known_hosts` _(optional)_ - the public SSH keys of the host your component will be retrieved from. If `hostname` is `github.com` this will be retrieved automatically. Line breaks must be delimited with +- `replicated` _(optional)_ - if true, Harper will replicate the key to all nodes in the cluster. Must be a boolean. ### Body @@ -380,10 +380,10 @@ Updates the private key contents of an existing SSH key. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `update_ssh_key` -- name _(required)_ - the name of the key to be updated -- key _(required)_ - the private key contents. Line breaks must be delimited with -- replicated _(optional)_ - if true, Harper will replicate the key update to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `update_ssh_key` +- `name` _(required)_ - the name of the key to be updated +- `key` _(required)_ - the private key contents. Line breaks must be delimited with +- `replicated` _(optional)_ - if true, Harper will replicate the key update to all nodes in the cluster. Must be a boolean. ### Body @@ -411,9 +411,9 @@ Deletes a SSH key. This will also remove it from the generated SSH config. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_ssh_key` -- name _(required)_ - the name of the key to be deleted -- replicated _(optional)_ - if true, Harper will replicate the key deletion to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `delete_ssh_key` +- `name` _(required)_ - the name of the key to be deleted +- `replicated` _(optional)_ - if true, Harper will replicate the key deletion to all nodes in the cluster. Must be a boolean. ### Body @@ -437,7 +437,7 @@ List off the names of added SSH keys _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_ssh_keys` +- `operation` _(required)_ - must always be `list_ssh_keys` ### Body @@ -453,20 +453,21 @@ _Operation is restricted to super_user roles only_ [ { "name": "harperdb-private-component" - }, - ... + } ] ``` +_Note: Additional SSH keys would appear as more objects in this array_ + ## Set SSH Known Hosts Sets the SSH known_hosts file. This will overwrite the file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_ssh_known_hosts` -- known_hosts _(required)_ - The contents to set the known_hosts to. Line breaks must be delimite d with -- replicated _(optional)_ - if true, Harper will replicate the known hosts to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `set_ssh_known_hosts` +- `known_hosts` _(required)_ - The contents to set the known_hosts to. Line breaks must be delimited with +- `replicated` _(optional)_ - if true, Harper will replicate the known hosts to all nodes in the cluster. Must be a boolean.
### Body @@ -491,7 +492,7 @@ Gets the contents of the known_hosts file _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_ssh_known_hosts` +- `operation` _(required)_ - must always be `get_ssh_known_hosts` ### Body diff --git a/versioned_docs/version-4.4/developers/operations-api/custom-functions.md b/versioned_docs/version-4.4/developers/operations-api/custom-functions.md index 43effd1b..37b45ba8 100644 --- a/versioned_docs/version-4.4/developers/operations-api/custom-functions.md +++ b/versioned_docs/version-4.4/developers/operations-api/custom-functions.md @@ -10,7 +10,7 @@ Returns the state of the Custom functions server. This includes whether it is en _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `custom_function_status` +- `operation` _(required)_ - must always be `custom_function_status` ### Body @@ -38,7 +38,7 @@ Returns an array of projects within the Custom Functions root project directory. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_custom_functions` +- `operation` _(required)_ - must always be `get_custom_functions` ### Body @@ -68,10 +68,10 @@ Returns the content of the specified file as text. 
HarperDStudio uses this call _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_custom_function` -- project _(required)_ - the name of the project containing the file for which you wish to get content -- type _(required)_ - the name of the sub-folder containing the file for which you wish to get content - must be either routes or helpers -- file _(required)_ - The name of the file for which you wish to get content - should not include the file extension (which is always .js) +- `operation` _(required)_ - must always be `get_custom_function` +- `project` _(required)_ - the name of the project containing the file for which you wish to get content +- `type` _(required)_ - the name of the sub-folder containing the file for which you wish to get content - must be either routes or helpers +- `file` _(required)_ - The name of the file for which you wish to get content - should not include the file extension (which is always .js) ### Body @@ -100,11 +100,11 @@ Updates the content of the specified file. 
Harper Studio uses this call to save _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_custom_function` -- project _(required)_ - the name of the project containing the file for which you wish to set content -- type _(required)_ - the name of the sub-folder containing the file for which you wish to set content - must be either routes or helpers -- file _(required)_ - the name of the file for which you wish to set content - should not include the file extension (which is always .js) -- function*content *(required)\_ - the content you wish to save into the specified file +- `operation` _(required)_ - must always be `set_custom_function` +- `project` _(required)_ - the name of the project containing the file for which you wish to set content +- `type` _(required)_ - the name of the sub-folder containing the file for which you wish to set content - must be either routes or helpers +- `file` _(required)_ - the name of the file for which you wish to set content - should not include the file extension (which is always .js) +- `function_content` _(required)_ - the content you wish to save into the specified file ### Body @@ -134,10 +134,10 @@ Deletes the specified file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_custom_function` -- project _(required)_ - the name of the project containing the file you wish to delete -- type _(required)_ - the name of the sub-folder containing the file you wish to delete. Must be either routes or helpers -- file _(required)_ - the name of the file you wish to delete. Should not include the file extension (which is always .js) +- `operation` _(required)_ - must always be `drop_custom_function` +- `project` _(required)_ - the name of the project containing the file you wish to delete +- `type` _(required)_ - the name of the sub-folder containing the file you wish to delete. 
Must be either routes or helpers +- `file` _(required)_ - the name of the file you wish to delete. Should not include the file extension (which is always .js) ### Body @@ -166,8 +166,8 @@ Creates a new project folder in the Custom Functions root project directory. It _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_custom_function_project` -- project _(required)_ - the name of the project you wish to create +- `operation` _(required)_ - must always be `add_custom_function_project` +- `project` _(required)_ - the name of the project you wish to create ### Body @@ -194,8 +194,8 @@ Deletes the specified project folder and all of its contents. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_custom_function_project` -- project _(required)_ - the name of the project you wish to delete +- `operation` _(required)_ - must always be `drop_custom_function_project` +- `project` _(required)_ - the name of the project you wish to delete ### Body @@ -222,9 +222,9 @@ Creates a .tar file of the specified project folder, then reads it into a base64 _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `package_custom_function_project` -- project _(required)_ - the name of the project you wish to package up for deployment -- skip*node_modules *(optional)\_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean. +- `operation` _(required)_ - must always be `package_custom_function_project` +- `project` _(required)_ - the name of the project you wish to package up for deployment +- `skip_node_modules` _(optional)_ - if true, creates an option for the tar module that will exclude the project's node_modules directory. Must be a boolean.
### Body @@ -254,9 +254,9 @@ Takes the output of package_custom_function_project, decrypts the base64-encoded _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `deploy_custom_function_project` -- project _(required)_ - the name of the project you wish to deploy. Must be a string -- payload _(required)_ - a base64-encoded string representation of the .tar file. Must be a string +- `operation` _(required)_ - must always be `deploy_custom_function_project` +- `project` _(required)_ - the name of the project you wish to deploy. Must be a string +- `payload` _(required)_ - a base64-encoded string representation of the .tar file. Must be a string ### Body diff --git a/versioned_docs/version-4.4/developers/operations-api/databases-and-tables.md b/versioned_docs/version-4.4/developers/operations-api/databases-and-tables.md index eea77222..7c17fb4d 100644 --- a/versioned_docs/version-4.4/developers/operations-api/databases-and-tables.md +++ b/versioned_docs/version-4.4/developers/operations-api/databases-and-tables.md @@ -8,7 +8,7 @@ title: Databases and Tables Returns the definitions of all databases and tables within the database. Record counts about 5000 records are estimated, as determining the exact count can be expensive. When the record count is estimated, this is indicated by the inclusion of a confidence interval of `estimated_record_range`. If you need the exact count, you can include an `"exact_count": true` in the operation, but be aware that this requires a full table scan (may be expensive). -- operation _(required)_ - must always be `describe_all` +- `operation` _(required)_ - must always be `describe_all` ### Body @@ -63,8 +63,8 @@ Returns the definitions of all databases and tables within the database. Record Returns the definitions of all tables within the specified database. 
-- operation _(required)_ - must always be `describe_database` -- database _(optional)_ - database where the table you wish to describe lives. The default is `data` +- `operation` _(required)_ - must always be `describe_database` +- `database` _(optional)_ - database where the table you wish to describe lives. The default is `data` ### Body @@ -118,9 +118,9 @@ Returns the definitions of all tables within the specified database. Returns the definition of the specified table. -- operation _(required)_ - must always be `describe_table` -- table _(required)_ - table you wish to describe -- database _(optional)_ - database where the table you wish to describe lives. The default is `data` +- `operation` _(required)_ - must always be `describe_table` +- `table` _(required)_ - table you wish to describe +- `database` _(optional)_ - database where the table you wish to describe lives. The default is `data` ### Body @@ -174,8 +174,8 @@ Create a new database. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `create_database` -- database _(optional)_ - name of the database you are creating. The default is `data` +- `operation` _(required)_ - must always be `create_database` +- `database` _(optional)_ - name of the database you are creating. The default is `data` ### Body @@ -202,9 +202,9 @@ Drop an existing database. NOTE: Dropping a database will delete all tables and _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `drop_database` -- database _(required)_ - name of the database you are dropping -- replicated _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - this should always be `drop_database` +- `database` _(required)_ - name of the database you are dropping +- `replicated` _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. 
### Body @@ -231,15 +231,15 @@ Create a new table within a database. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `create_table` -- database _(optional)_ - name of the database where you want your table to live. If the database does not exist, it will be created. If the `database` property is not provided it will default to `data`. -- table _(required)_ - name of the table you are creating -- primary*key *(required)\_ - primary key for the table -- attributes _(optional)_ - an array of attributes that specifies the schema for the table, that is the set of attributes for the table. When attributes are supplied the table will not be considered a "dynamic schema" table, and attributes will not be auto-added when records with new properties are inserted. Each attribute is specified as: - - name _(required)_ - the name of the attribute - - indexed _(optional)_ - indicates if the attribute should be indexed - - type _(optional)_ - specifies the data type of the attribute (can be String, Int, Float, Date, ID, Any) -- expiration _(optional)_ - specifies the time-to-live or expiration of records in the table before they are evicted (records are not evicted on any timer if not specified). This is specified in seconds. +- `operation` _(required)_ - must always be `create_table` +- `database` _(optional)_ - name of the database where you want your table to live. If the database does not exist, it will be created. If the `database` property is not provided it will default to `data`. +- `table` _(required)_ - name of the table you are creating +- `primary_key` _(required)_ - primary key for the table +- `attributes` _(optional)_ - an array of attributes that specifies the schema for the table, that is the set of attributes for the table. When attributes are supplied the table will not be considered a "dynamic schema" table, and attributes will not be auto-added when records with new properties are inserted. 
Each attribute is specified as: + - `name` _(required)_ - the name of the attribute + - `indexed` _(optional)_ - indicates if the attribute should be indexed + - `type` _(optional)_ - specifies the data type of the attribute (can be String, Int, Float, Date, ID, Any) +- `expiration` _(optional)_ - specifies the time-to-live or expiration of records in the table before they are evicted (records are not evicted on any timer if not specified). This is specified in seconds. ### Body @@ -268,10 +268,10 @@ Drop an existing database table. NOTE: Dropping a table will delete all associat _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `drop_table` -- database _(optional)_ - database where the table you are dropping lives. The default is `data` -- table _(required)_ - name of the table you are dropping -- replicated _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - this should always be `drop_table` +- `database` _(optional)_ - database where the table you are dropping lives. The default is `data` +- `table` _(required)_ - name of the table you are dropping +- `replicated` _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. ### Body @@ -299,10 +299,10 @@ Create a new attribute within the specified table. **The create_attribute operat _Note: Harper will automatically create new attributes on insert and update if they do not already exist within the database._ -- operation _(required)_ - must always be `create_attribute` -- database _(optional)_ - name of the database of the table you want to add your attribute. 
The default is `data` -- table _(required)_ - name of the table where you want to add your attribute to live -- attribute _(required)_ - name for the attribute +- `operation` _(required)_ - must always be `create_attribute` +- `database` _(optional)_ - name of the database containing the table you want to add your attribute to. The default is `data` +- `table` _(required)_ - name of the table you want to add your attribute to +- `attribute` _(required)_ - name for the attribute ### Body @@ -333,10 +333,10 @@ Drop an existing attribute from the specified table. NOTE: Dropping an attribute _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `drop_attribute` -- database _(optional)_ - database where the table you are dropping lives.
The default is `data` +- `table` _(required)_ - table where the attribute you are dropping lives +- `attribute` _(required)_ - attribute that you intend to drop ### Body @@ -367,10 +367,10 @@ It is important to note that trying to copy a database file that is in use (Harp _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `get_backup` -- database _(required)_ - this is the database that will be snapshotted and returned -- table _(optional)_ - this will specify a specific table to backup -- tables _(optional)_ - this will specify a specific set of tables to backup +- `operation` _(required)_ - this should always be `get_backup` +- `database` _(required)_ - this is the database that will be snapshotted and returned +- `table` _(optional)_ - this will specify a specific table to backup +- `tables` _(optional)_ - this will specify a specific set of tables to backup ### Body diff --git a/versioned_docs/version-4.4/developers/operations-api/jobs.md b/versioned_docs/version-4.4/developers/operations-api/jobs.md index 173125a1..cf71fa00 100644 --- a/versioned_docs/version-4.4/developers/operations-api/jobs.md +++ b/versioned_docs/version-4.4/developers/operations-api/jobs.md @@ -8,8 +8,8 @@ title: Jobs Returns job status, metrics, and messages for the specified job ID. 
-- operation _(required)_ - must always be `get_job` -- id _(required)_ - the id of the job you wish to view +- `operation` _(required)_ - must always be `get_job` +- `id` _(required)_ - the id of the job you wish to view ### Body @@ -50,9 +50,9 @@ Returns a list of job statuses, metrics, and messages for all jobs executed with _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `search_jobs_by_start_date` -- from*date *(required)\_ - the date you wish to start the search -- to*date *(required)\_ - the date you wish to end the search +- `operation` _(required)_ - must always be `search_jobs_by_start_date` +- `from_date` _(required)_ - the date you wish to start the search +- `to_date` _(required)_ - the date you wish to end the search ### Body diff --git a/versioned_docs/version-4.4/developers/operations-api/logs.md b/versioned_docs/version-4.4/developers/operations-api/logs.md index 17eba72f..52e52740 100644 --- a/versioned_docs/version-4.4/developers/operations-api/logs.md +++ b/versioned_docs/version-4.4/developers/operations-api/logs.md @@ -10,13 +10,13 @@ Returns log outputs from the primary Harper log based on the provided search cri _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_Log` -- start _(optional)_ - result to start with. Default is 0, the first log in `hdb.log`. Must be a number -- limit _(optional)_ - number of results returned. Default behavior is 1000. Must be a number -- level _(optional)_ - error level to filter on. Default behavior is all levels. Must be `notify`, `error`, `warn`, `info`, `debug` or `trace` -- from _(optional)_ - date to begin showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is first log in `hdb.log` -- until _(optional)_ - date to end showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is last log in `hdb.log` -- order _(optional)_ - order to display logs desc or asc by timestamp. 
By default, will maintain `hdb.log` order +- `operation` _(required)_ - must always be `read_log` +- `start` _(optional)_ - result to start with. Default is 0, the first log in `hdb.log`. Must be a number +- `limit` _(optional)_ - number of results returned. Default behavior is 1000. Must be a number +- `level` _(optional)_ - error level to filter on. Default behavior is all levels. Must be `notify`, `error`, `warn`, `info`, `debug` or `trace` +- `from` _(optional)_ - date to begin showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is first log in `hdb.log` +- `until` _(optional)_ - date to end showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is last log in `hdb.log` +- `order` _(optional)_ - order to display logs desc or asc by timestamp. By default, will maintain `hdb.log` order ### Body @@ -68,12 +68,12 @@ Returns all transactions logged for the specified database table. You may filter _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_transaction_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- from _(optional)_ - time format must be millisecond-based epoch in UTC -- to _(optional)_ - time format must be millisecond-based epoch in UTC -- limit _(optional)_ - max number of logs you want to receive.
Must be a number ### Body @@ -271,10 +271,10 @@ Deletes transaction log data for the specified database table that is older than _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_transaction_log_before` -- schema _(required)_ - schema under which the transaction log resides. Must be a string -- table _(required)_ - table under which the transaction log resides. Must be a string -- timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC +- `operation` _(required)_ - must always be `delete_transaction_log_before` +- `schema` _(required)_ - schema under which the transaction log resides. Must be a string +- `table` _(required)_ - table under which the transaction log resides. Must be a string +- `timestamp` _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC ### Body @@ -303,11 +303,11 @@ AuditLog must be enabled in the Harper configuration file to make this request. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search_type _(optional)_ - possibilities are `hash_value`, `timestamp` and `username` -- search_values _(optional)_ - an array of string or numbers relating to search_type +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - possibilities are `hash_value`, `timestamp` and `username` +- `search_values` _(optional)_ - an array of strings or numbers relating to search_type ### Body @@ -398,11 +398,11 @@ AuditLog must be enabled in the Harper configuration file to make this request.
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search_type _(optional)_ - timestamp -- search_values _(optional)_ - an array containing a maximum of two values \[`from_timestamp`, `to_timestamp`] defining the range of transactions you would like to view. +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - timestamp +- `search_values` _(optional)_ - an array containing a maximum of two values \[`from_timestamp`, `to_timestamp`] defining the range of transactions you would like to view. - Timestamp format is millisecond-based epoch in UTC - If no items are supplied then all transactions are returned - If only one entry is supplied then all transactions after the supplied timestamp will be returned @@ -519,11 +519,11 @@ AuditLog must be enabled in the Harper configuration file to make this request. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search_type _(optional)_ - username -- search_values _(optional)_ - the Harper user for whom you would like to view transactions +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - username +- `search_values` _(optional)_ - the Harper user for whom you would like to view transactions ### Body @@ -639,11 +639,11 @@ AuditLog must be enabled in the Harper configuration file to make this request. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search_type _(optional)_ - hash_value -- search_values _(optional)_ - an array of hash_attributes for which you wish to see transaction logs +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - hash_value +- `search_values` _(optional)_ - an array of hash_attributes for which you wish to see transaction logs ### Body @@ -707,10 +707,10 @@ AuditLog must be enabled in the Harper configuration file to make this request. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_audit_logs_before` -- schema _(required)_ - schema under which the transaction log resides. Must be a string -- table _(required)_ - table under which the transaction log resides. 
Must be a string -- timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC +- `operation` _(required)_ - must always be `delete_audit_logs_before` +- `schema` _(required)_ - schema under which the transaction log resides. Must be a string +- `table` _(required)_ - table under which the transaction log resides. Must be a string +- `timestamp` _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC ### Body diff --git a/versioned_docs/version-4.4/developers/operations-api/registration.md b/versioned_docs/version-4.4/developers/operations-api/registration.md index 56775c5d..28c6a0e9 100644 --- a/versioned_docs/version-4.4/developers/operations-api/registration.md +++ b/versioned_docs/version-4.4/developers/operations-api/registration.md @@ -8,7 +8,7 @@ title: Registration Returns the registration data of the Harper instance. -- operation _(required)_ - must always be `registration_info` +- `operation` _(required)_ - must always be `registration_info` ### Body @@ -37,7 +37,7 @@ Returns the Harper fingerprint, uniquely generated based on the machine, for lic _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_fingerprint` +- `operation` _(required)_ - must always be `get_fingerprint` ### Body @@ -55,9 +55,9 @@ Sets the Harper license as generated by Harper License Management software. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_license` -- key _(required)_ - your license key -- company _(required)_ - the company that was used in the license +- `operation` _(required)_ - must always be `set_license` +- `key` _(required)_ - your license key +- `company` _(required)_ - the company that was used in the license ### Body diff --git a/versioned_docs/version-4.4/developers/operations-api/sql-operations.md b/versioned_docs/version-4.4/developers/operations-api/sql-operations.md index 71dfa436..4b7076bb 100644 --- a/versioned_docs/version-4.4/developers/operations-api/sql-operations.md +++ b/versioned_docs/version-4.4/developers/operations-api/sql-operations.md @@ -12,8 +12,8 @@ Harper encourages developers to utilize other querying tools over SQL for perfor Executes the provided SQL statement. The SELECT statement is used to query data from the database. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -48,8 +48,8 @@ Executes the provided SQL statement. The SELECT statement is used to query data Executes the provided SQL statement. The INSERT statement is used to add one or more rows to a database table. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -76,8 +76,8 @@ Executes the provided SQL statement. The INSERT statement is used to add one or Executes the provided SQL statement. The UPDATE statement is used to change the values of specified attributes in one or more rows in a database table. 
-- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -104,8 +104,8 @@ Executes the provided SQL statement. The UPDATE statement is used to change the Executes the provided SQL statement. The DELETE statement is used to remove one or more rows of data from a database table. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body diff --git a/versioned_docs/version-4.4/developers/operations-api/token-authentication.md b/versioned_docs/version-4.4/developers/operations-api/token-authentication.md index b9ff5b31..178db842 100644 --- a/versioned_docs/version-4.4/developers/operations-api/token-authentication.md +++ b/versioned_docs/version-4.4/developers/operations-api/token-authentication.md @@ -10,9 +10,9 @@ Creates the tokens needed for authentication: operation & refresh token. _Note - this operation does not require authorization to be set_ -- operation _(required)_ - must always be `create_authentication_tokens` -- username _(required)_ - username of user to generate tokens for -- password _(required)_ - password of user to generate tokens for +- `operation` _(required)_ - must always be `create_authentication_tokens` +- `username` _(required)_ - username of user to generate tokens for +- `password` _(required)_ - password of user to generate tokens for ### Body @@ -39,8 +39,8 @@ _Note - this operation does not require authorization to be set_ This operation creates a new operation token. 
-- operation _(required)_ - must always be `refresh_operation_token` -- refresh*token *(required)\_ - the refresh token that was provided when tokens were created +- `operation` _(required)_ - must always be `refresh_operation_token` +- `refresh_token` _(required)_ - the refresh token that was provided when tokens were created ### Body diff --git a/versioned_docs/version-4.4/developers/operations-api/users-and-roles.md b/versioned_docs/version-4.4/developers/operations-api/users-and-roles.md index ecaa1117..91f222b9 100644 --- a/versioned_docs/version-4.4/developers/operations-api/users-and-roles.md +++ b/versioned_docs/version-4.4/developers/operations-api/users-and-roles.md @@ -10,7 +10,7 @@ Returns a list of all roles. [Learn more about Harper roles here.](../security/u _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_roles` +- `operation` _(required)_ - must always be `list_roles` ### Body @@ -80,11 +80,11 @@ Creates a new role with the specified permissions. [Learn more about Harper role _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_role` -- role _(required)_ - name of role you are defining -- permission _(required)_ - object defining permissions for users associated with this role: - - super*user *(optional)\_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. - - structure_user (optional) - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. 
+- `operation` _(required)_ - must always be `add_role` +- `role` _(required)_ - name of role you are defining +- `permission` _(required)_ - object defining permissions for users associated with this role: + - `super_user` _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. + - `structure_user` _(optional)_ - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. ### Body @@ -158,12 +158,12 @@ Modifies an existing role with the specified permissions. updates permissions fr _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `alter_role` -- id _(required)_ - the id value for the role you are altering -- role _(optional)_ - name value to update on the role you are altering -- permission _(required)_ - object defining permissions for users associated with this role: - - super*user *(optional)\_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. - - structure_user (optional) - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. 
+- `operation` _(required)_ - must always be `alter_role` +- `id` _(required)_ - the id value for the role you are altering +- `role` _(optional)_ - name value to update on the role you are altering +- `permission` _(required)_ - object defining permissions for users associated with this role: + - `super_user` _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. + - `structure_user` _(optional)_ - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. ### Body @@ -237,8 +237,8 @@ Deletes an existing role from the database. NOTE: Role with associated users can _Operation is restricted to super_user roles only_ -- operation _(required)_ - this must always be `drop_role` -- id _(required)_ - this is the id of the role you are dropping +- `operation` _(required)_ - this must always be `drop_role` +- `id` _(required)_ - this is the id of the role you are dropping ### Body @@ -265,7 +265,7 @@ Returns a list of all users. [Learn more about Harper roles here.](../security/u _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_users` +- `operation` _(required)_ - must always be `list_users` ### Body @@ -377,7 +377,7 @@ _Operation is restricted to super_user roles only_ Returns user data for the associated user credentials. -- operation _(required)_ - must always be `user_info` +- `operation` _(required)_ - must always be `user_info` ### Body @@ -415,11 +415,11 @@ Creates a new user with the specified role and credentials. 
[Learn more about Ha _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_user` -- role _(required)_ - 'role' name value of the role you wish to assign to the user. See `add_role` for more detail -- username _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash -- password _(required)_ - clear text for password. Harper will encrypt the password upon receipt -- active _(required)_ - boolean value for status of user's access to your Harper instance. If set to false, user will not be able to access your instance of Harper. +- `operation` _(required)_ - must always be `add_user` +- `role` _(required)_ - 'role' name value of the role you wish to assign to the user. See `add_role` for more detail +- `username` _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash +- `password` _(required)_ - clear text for password. Harper will encrypt the password upon receipt +- `active` _(required)_ - boolean value for status of user's access to your Harper instance. If set to false, user will not be able to access your instance of Harper. ### Body @@ -449,11 +449,11 @@ Modifies an existing user's role and/or credentials. [Learn more about Harper ro _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `alter_user` -- username _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash. -- password _(optional)_ - clear text for password. Harper will encrypt the password upon receipt -- role _(optional)_ - `role` name value of the role you wish to assign to the user. See `add_role` for more detail -- active _(optional)_ - status of user's access to your Harper instance. See `add_role` for more detail +- `operation` _(required)_ - must always be `alter_user` +- `username` _(required)_ - username assigned to the user. 
It can not be altered after adding the user. It serves as the hash. +- `password` _(optional)_ - clear text for password. Harper will encrypt the password upon receipt +- `role` _(optional)_ - `role` name value of the role you wish to assign to the user. See `add_role` for more detail +- `active` _(optional)_ - status of user's access to your Harper instance. See `add_role` for more detail ### Body @@ -487,8 +487,8 @@ Deletes an existing user by username. [Learn more about Harper roles here.](../s _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_user` -- username _(required)_ - username assigned to the user +- `operation` _(required)_ - must always be `drop_user` +- `username` _(required)_ - username assigned to the user ### Body diff --git a/versioned_docs/version-4.4/developers/operations-api/utilities.md b/versioned_docs/version-4.4/developers/operations-api/utilities.md index 4259f507..32541b76 100644 --- a/versioned_docs/version-4.4/developers/operations-api/utilities.md +++ b/versioned_docs/version-4.4/developers/operations-api/utilities.md @@ -10,7 +10,7 @@ Restarts the Harper instance. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `restart` +- `operation` _(required)_ - must always be `restart` ### Body @@ -36,9 +36,9 @@ Restarts servers for the specified Harper service. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `restart_service` -- service _(required)_ - must be one of: `http_workers`, `clustering_config` or `clustering` -- replicated _(optional)_ - must be a boolean. If set to `true`, Harper will replicate the restart service operation across all nodes in the cluster. The restart will occur as a rolling restart, ensuring that each node is fully restarted before the next node begins restarting. 
+- `operation` _(required)_ - must always be `restart_service` +- `service` _(required)_ - must be one of: `http_workers`, `clustering_config` or `clustering` +- `replicated` _(optional)_ - must be a boolean. If set to `true`, Harper will replicate the restart service operation across all nodes in the cluster. The restart will occur as a rolling restart, ensuring that each node is fully restarted before the next node begins restarting. ### Body @@ -65,8 +65,8 @@ Returns detailed metrics on the host system. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `system_information` -- attributes _(optional)_ - string array of top level attributes desired in the response, if no value is supplied all attributes will be returned. Available attributes are: ['system', 'time', 'cpu', 'memory', 'disk', 'network', 'harperdb_processes', 'table_size', 'metrics', 'threads', 'replication'] +- `operation` _(required)_ - must always be `system_information` +- `attributes` _(optional)_ - string array of top level attributes desired in the response, if no value is supplied all attributes will be returned. Available attributes are: ['system', 'time', 'cpu', 'memory', 'disk', 'network', 'harperdb_processes', 'table_size', 'metrics', 'threads', 'replication'] ### Body @@ -84,10 +84,10 @@ Delete data before the specified timestamp on the specified database table exclu _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_records_before` -- date _(required)_ - records older than this date will be deleted. Supported format looks like: `YYYY-MM-DDThh:mm:ss.sZ` -- schema _(required)_ - name of the schema where you are deleting your data -- table _(required)_ - name of the table where you are deleting your data +- `operation` _(required)_ - must always be `delete_records_before` +- `date` _(required)_ - records older than this date will be deleted. 
Supported format looks like: `YYYY-MM-DDThh:mm:ss.sZ` +- `schema` _(required)_ - name of the schema where you are deleting your data +- `table` _(required)_ - name of the table where you are deleting your data ### Body @@ -115,11 +115,11 @@ _Operation is restricted to super_user roles only_ Exports data based on a given search operation to a local file in JSON or CSV format. -- operation _(required)_ - must always be `export_local` -- format _(required)_ - the format you wish to export the data, options are `json` & `csv` -- path _(required)_ - path local to the server to export the data -- search*operation *(required)\_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` -- filename _(optional)_ - the name of the file where your export will be written to (do not include extension in filename). If one is not provided it will be autogenerated based on the epoch. +- `operation` _(required)_ - must always be `export_local` +- `format` _(required)_ - the format you wish to export the data, options are `json` & `csv` +- `path` _(required)_ - path local to the server to export the data +- `search_operation` _(required)_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` +- `filename` _(optional)_ - the name of the file where your export will be written to (do not include extension in filename). If one is not provided it will be autogenerated based on the epoch. ### Body @@ -149,10 +149,10 @@ Exports data based on a given search operation to a local file in JSON or CSV fo Exports data based on a given search operation from table to AWS S3 in JSON or CSV format. 
-- operation _(required)_ - must always be `export_to_s3`
-- format _(required)_ - the format you wish to export the data, options are `json` & `csv`
-- s3 _(required)_ - details your access keys, bucket, bucket region and key for saving the data to S3
-- search*operation *(required)\_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql`
+- `operation` _(required)_ - must always be `export_to_s3`
+- `format` _(required)_ - the format you wish to export the data, options are `json` & `csv`
+- `s3` _(required)_ - details your access keys, bucket, bucket region and key for saving the data to S3
+- `search_operation` _(required)_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql`

### Body

@@ -192,9 +192,9 @@ Executes npm install against specified custom function projects.

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `install_node_modules`
-- projects _(required)_ - must ba an array of custom functions projects.
-- dry*run *(optional)\_ - refers to the npm --dry-run flag: [https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run](https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run). Defaults to false.
+- `operation` _(required)_ - must always be `install_node_modules`
+- `projects` _(required)_ - must be an array of custom function projects.
+- `dry_run` _(optional)_ - refers to the npm --dry-run flag: [https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run](https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run). Defaults to false.

### Body

@@ -214,9 +214,9 @@ Modifies the Harper configuration file parameters.
Must follow with a restart or _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_configuration` -- logging*level *(example/optional)\_ - one or more configuration keywords to be updated in the Harper configuration file -- clustering*enabled *(example/optional)\_ - one or more configuration keywords to be updated in the Harper configuration file +- `operation` _(required)_ - must always be `set_configuration` +- `logging_level` _(optional)_ - one or more configuration keywords to be updated in the Harper configuration file +- `clustering_enabled` _(optional)_ - one or more configuration keywords to be updated in the Harper configuration file ### Body @@ -244,7 +244,7 @@ Returns the Harper configuration parameters. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_configuration` +- `operation` _(required)_ - must always be `get_configuration` ### Body @@ -348,12 +348,12 @@ If a `private_key` is not passed the operation will search for one that matches _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_certificate` -- name _(required)_ - a unique name for the certificate -- certificate _(required)_ - a PEM formatted certificate string -- is*authority *(required)\_ - a boolean indicating if the certificate is a certificate authority -- hosts _(optional)_ - an array of hostnames that the certificate is valid for -- private*key *(optional)\_ - a PEM formatted private key string +- `operation` _(required)_ - must always be `add_certificate` +- `name` _(required)_ - a unique name for the certificate +- `certificate` _(required)_ - a PEM formatted certificate string +- `is_authority` _(required)_ - a boolean indicating if the certificate is a certificate authority +- `hosts` _(optional)_ - an array of hostnames that the certificate is valid for +- `private_key` _(optional)_ - a PEM formatted private key string ### Body @@ -383,8 
+383,8 @@ Removes a certificate from the `hdb_certificate` system table and deletes the co _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `remove_certificate` -- name _(required)_ - the name of the certificate +- `operation` _(required)_ - must always be `remove_certificate` +- `name` _(required)_ - the name of the certificate ### Body @@ -411,7 +411,7 @@ Lists all certificates in the `hdb_certificate` system table. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_certificates` +- `operation` _(required)_ - must always be `list_certificates` ### Body diff --git a/versioned_docs/version-4.4/developers/replication/sharding.md b/versioned_docs/version-4.4/developers/replication/sharding.md index 86115f4a..7ec763f8 100644 --- a/versioned_docs/version-4.4/developers/replication/sharding.md +++ b/versioned_docs/version-4.4/developers/replication/sharding.md @@ -41,7 +41,7 @@ X-Replicate-To: node1,node2 Likewise, you can specify replicateTo and confirm parameters in the operation object when using the Harper API. For example, to specify that data should be replicated to two other nodes, and the response should be returned once confirmation is received from one other node, you can use the following operation object: -```json +```jsonc { "operation": "update", "schema": "dev", @@ -57,10 +57,12 @@ Likewise, you can specify replicateTo and confirm parameters in the operation ob or you can specify nodes: -```json -..., +```jsonc +{ + // ... "replicateTo": ["node-1", "node-2"] -... + // ... +} ``` ## Programmatic Replication Control @@ -104,7 +106,7 @@ MyTable.setResidencyById((id) => { Normally sharding allows data to be stored in specific nodes, but still allows access to the data from any node. However, you can also disable cross-node access so that data is only returned if is stored on the node where it is accessed. 
To do this, you can set the `replicateFrom` property on the context of operation to `false`: -```json +```jsonc { "operation": "search_by_id", "table": "MyTable", diff --git a/versioned_docs/version-4.4/developers/security/basic-auth.md b/versioned_docs/version-4.4/developers/security/basic-auth.md index a4510b13..9bc0160c 100644 --- a/versioned_docs/version-4.4/developers/security/basic-auth.md +++ b/versioned_docs/version-4.4/developers/security/basic-auth.md @@ -8,7 +8,7 @@ Harper uses Basic Auth and JSON Web Tokens (JWTs) to secure our HTTP requests. I ** \_**You do not need to log in separately. Basic Auth is added to each HTTP request like create_database, create_table, insert etc… via headers.**\_ ** -A header is added to each HTTP request. The header key is **“Authorization”** the header value is **“Basic <<your username and password buffer token>>”** +A header is added to each HTTP request. The header key is **"Authorization"** the header value is **"Basic <<your username and password buffer token>>"** ## Authentication in Harper Studio diff --git a/versioned_docs/version-4.4/developers/security/users-and-roles.md b/versioned_docs/version-4.4/developers/security/users-and-roles.md index 76ed6901..a9388c62 100644 --- a/versioned_docs/version-4.4/developers/security/users-and-roles.md +++ b/versioned_docs/version-4.4/developers/security/users-and-roles.md @@ -47,7 +47,7 @@ When creating a new, user-defined role in a Harper instance, you must provide a Example JSON for `add_role` request -```json +```jsonc { "operation": "add_role", "role": "software_developer", @@ -98,7 +98,7 @@ There are two parts to a permissions set: Each table that a role should be given some level of CRUD permissions to must be included in the `tables` array for its database in the roles permissions JSON passed to the API (_see example above_). 
-```json +```jsonc { "table_name": { // the name of the table to define CRUD perms for "read": boolean, // access to read from this table diff --git a/versioned_docs/version-4.4/developers/sql-guide/date-functions.md b/versioned_docs/version-4.4/developers/sql-guide/date-functions.md index d44917c3..c9747dcd 100644 --- a/versioned_docs/version-4.4/developers/sql-guide/date-functions.md +++ b/versioned_docs/version-4.4/developers/sql-guide/date-functions.md @@ -156,17 +156,17 @@ Subtracts the defined amount of time from the date provided in UTC and returns t ### EXTRACT(date, date_part) -Extracts and returns the date_part requested as a String value. Accepted date_part values below show value returned for date = “2020-03-26T15:13:02.041+000” +Extracts and returns the date_part requested as a String value. Accepted date_part values below show value returned for date = "2020-03-26T15:13:02.041+000" | date_part | Example return value\* | | ----------- | ---------------------- | -| year | “2020” | -| month | “3” | -| day | “26” | -| hour | “15” | -| minute | “13” | -| second | “2” | -| millisecond | “41” | +| year | "2020" | +| month | "3" | +| day | "26" | +| hour | "15" | +| minute | "13" | +| second | "2" | +| millisecond | "41" | ``` "SELECT EXTRACT(1587568845765, 'year') AS extract_result" returns diff --git a/versioned_docs/version-4.4/developers/sql-guide/functions.md b/versioned_docs/version-4.4/developers/sql-guide/functions.md index 0847a657..a1170991 100644 --- a/versioned_docs/version-4.4/developers/sql-guide/functions.md +++ b/versioned_docs/version-4.4/developers/sql-guide/functions.md @@ -16,99 +16,85 @@ This SQL keywords reference contains the SQL functions available in Harper. 
| Keyword | Syntax | Description |
| ---------------- | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| AVG | AVG(_expression_) | Returns the average of a given numeric expression. |
-| COUNT | SELECT COUNT(_column_name_) FROM _database.table_ WHERE _condition_ | Returns the number records that match the given criteria. Nulls are not counted. |
-| GROUP_CONCAT | GROUP*CONCAT(\_expression*) | Returns a string with concatenated values that are comma separated and that are non-null from a group. Will return null when there are non-null values. |
-| MAX | SELECT MAX(_column_name_) FROM _database.table_ WHERE _condition_ | Returns largest value in a specified column. |
-| MIN | SELECT MIN(_column_name_) FROM _database.table_ WHERE _condition_ | Returns smallest value in a specified column. |
-| SUM | SUM(_column_name_) | Returns the sum of the numeric values provided. |
-| ARRAY\* | ARRAY(_expression_) | Returns a list of data as a field. |
-| DISTINCT_ARRAY\* | DISTINCT*ARRAY(\_expression*) | When placed around a standard ARRAY() function, returns a distinct (deduplicated) results set. |
+| `AVG` | `AVG(expression)` | Returns the average of a given numeric expression. |
+| `COUNT` | `SELECT COUNT(column_name) FROM database.table WHERE condition` | Returns the number of records that match the given criteria. Nulls are not counted. |
+| `GROUP_CONCAT` | `GROUP_CONCAT(expression)` | Returns a string with concatenated values that are comma separated and that are non-null from a group. Will return null when there are no non-null values. |
+| `MAX` | `SELECT MAX(column_name) FROM database.table WHERE condition` | Returns the largest value in a specified column. |
+| `MIN` | `SELECT MIN(column_name) FROM database.table WHERE condition` | Returns the smallest value in a specified column.
| +| `SUM` | `SUM(column_name)` | Returns the sum of the numeric values provided. | +| `ARRAY`* | `ARRAY(expression)` | Returns a list of data as a field. | +| `DISTINCT_ARRAY`* | `DISTINCT_ARRAY(expression)` | When placed around a standard `ARRAY()` function, returns a distinct (deduplicated) results set. | -\*For more information on ARRAY() and DISTINCT_ARRAY() see [this blog](https://www.harperdb.io/post/sql-queries-to-complex-objects). +*For more information on `ARRAY()` and `DISTINCT_ARRAY()` see [this blog](https://www.harperdb.io/post/sql-queries-to-complex-objects). ### Conversion | Keyword | Syntax | Description | | ------- | ----------------------------------------------- | ---------------------------------------------------------------------- | -| CAST | CAST(_expression AS datatype(length)_) | Converts a value to a specified datatype. | -| CONVERT | CONVERT(_data_type(length), expression, style_) | Converts a value from one datatype to a different, specified datatype. | +| `CAST` | `CAST(expression AS datatype(length))` | Converts a value to a specified datatype. | +| `CONVERT` | `CONVERT(data_type(length), expression, style)` | Converts a value from one datatype to a different, specified datatype. | ### Date & Time -| Keyword | Syntax | Description | -| ----------------- | ----------------- | --------------------------------------------------------------------------------------------------------------------- | -| CURRENT_DATE | CURRENT_DATE() | Returns the current date in UTC in “YYYY-MM-DD” String format. | -| CURRENT_TIME | CURRENT_TIME() | Returns the current time in UTC in “HH:mm:ss.SSS” string format. | -| CURRENT_TIMESTAMP | CURRENT_TIMESTAMP | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. For more information, go here. | - -| -| DATE | DATE([_date_string_]) | Formats and returns the date*string argument in UTC in ‘YYYY-MM-DDTHH:mm:ss.SSSZZ’ string format. 
If a date_string is not provided, the function will return the current UTC date/time value in the return format defined above. For more information, go here. | -| -| DATE_ADD | DATE_ADD(\_date, value, interval*) | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | -| -| DATE*DIFF | DATEDIFF(\_date_1, date_2[, interval]*) | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds. For more information, go here. | -| -| DATE*FORMAT | DATE_FORMAT(\_date, format*) | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, go here. | -| -| DATE*SUB | DATE_SUB(\_date, format*) | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted date*sub interval values- Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | -| -| DAY | DAY(\_date*) | Return the day of the month for the given date. | -| -| DAYOFWEEK | DAYOFWEEK(_date_) | Returns the numeric value of the weekday of the date given(“YYYY-MM-DD”).NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. | -| EXTRACT | EXTRACT(_date, date_part_) | Extracts and returns the date*part requested as a String value. Accepted date_part values below show value returned for date = “2020-03-26T15:13:02.041+000” For more information, go here. | -| -| GETDATE | GETDATE() | Returns the current Unix Timestamp in milliseconds. 
| -| GET_SERVER_TIME | GET_SERVER_TIME() | Returns the current date/time value based on the server’s timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. | -| OFFSET_UTC | OFFSET_UTC(\_date, offset*) | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. | -| NOW | NOW() | Returns the current Unix Timestamp in milliseconds. | -| -| HOUR | HOUR(_datetime_) | Returns the hour part of a given date in range of 0 to 838. | -| -| MINUTE | MINUTE(_datetime_) | Returns the minute part of a time/datetime in range of 0 to 59. | -| -| MONTH | MONTH(_date_) | Returns month part for a specified date in range of 1 to 12. | -| -| SECOND | SECOND(_datetime_) | Returns the seconds part of a time/datetime in range of 0 to 59. | -| YEAR | YEAR(_date_) | Returns the year part for a specified date. | -| +| Keyword | Syntax | Description | +| ----------------- | -------------------------- | --------------------------------------------------------------------------------------------------------------------- | +| `CURRENT_DATE` | `CURRENT_DATE()` | Returns the current date in UTC in "YYYY-MM-DD" String format. | +| `CURRENT_TIME` | `CURRENT_TIME()` | Returns the current time in UTC in "HH:mm:ss.SSS" string format. | +| `CURRENT_TIMESTAMP` | `CURRENT_TIMESTAMP` | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. For more information, go here. | +| `DATE` | `DATE([date_string])` | Formats and returns the date string argument in UTC in 'YYYY-MM-DDTHH:mm:ss.SSSZZ' string format. If a date string is not provided, the function will return the current UTC date/time value in the return format defined above. For more information, go here. 
|
+| `DATE_ADD` | `DATE_ADD(date, value, interval)` | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. |
+| `DATE_DIFF` | `DATE_DIFF(date_1, date_2[, interval])` | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds. For more information, go here. |
+| `DATE_FORMAT` | `DATE_FORMAT(date, format)` | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, go here. |
+| `DATE_SUB` | `DATE_SUB(date, format)` | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted date_sub interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. |
+| `DAY` | `DAY(date)` | Returns the day of the month for the given date. |
+| `DAYOFWEEK` | `DAYOFWEEK(date)` | Returns the numeric value of the weekday of the date given ("YYYY-MM-DD"). NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. |
+| `EXTRACT` | `EXTRACT(date, date_part)` | Extracts and returns the date part requested as a String value. Accepted date_part values below show value returned for date = "2020-03-26T15:13:02.041+000". For more information, go here. |
+| `GETDATE` | `GETDATE()` | Returns the current Unix Timestamp in milliseconds. |
+| `GET_SERVER_TIME` | `GET_SERVER_TIME()` | Returns the current date/time value based on the server's timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format.
| +| `OFFSET_UTC` | `OFFSET_UTC(date, offset)` | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. | +| `NOW` | `NOW()` | Returns the current Unix Timestamp in milliseconds. | +| `HOUR` | `HOUR(datetime)` | Returns the hour part of a given date in range of 0 to 838. | +| `MINUTE` | `MINUTE(datetime)` | Returns the minute part of a time/datetime in range of 0 to 59. | +| `MONTH` | `MONTH(date)` | Returns month part for a specified date in range of 1 to 12. | +| `SECOND` | `SECOND(datetime)` | Returns the seconds part of a time/datetime in range of 0 to 59. | +| `YEAR` | `YEAR(date)` | Returns the year part for a specified date. | ### Logical | Keyword | Syntax | Description | | ------- | ----------------------------------------------- | ------------------------------------------------------------------------------------------ | -| IF | IF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. | -| IIF | IIF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. | -| IFNULL | IFNULL(_expression, alt_value_) | Returns a specified value if the expression is null. | -| NULLIF | NULLIF(_expression_1, expression_2_) | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. | +| `IF` | `IF(condition, value_if_true, value_if_false)` | Returns a value if the condition is true, or another value if the condition is false. | +| `IIF` | `IIF(condition, value_if_true, value_if_false)` | Returns a value if the condition is true, or another value if the condition is false. 
| +| `IFNULL` | `IFNULL(expression, alt_value)` | Returns a specified value if the expression is null. | +| `NULLIF` | `NULLIF(expression_1, expression_2)` | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. | ### Mathematical | Keyword | Syntax | Description | | ------- | ------------------------------ | --------------------------------------------------------------------------------------------------- | -| ABS | ABS(_expression_) | Returns the absolute value of a given numeric expression. | -| CEIL | CEIL(_number_) | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. | -| EXP | EXP(_number_) | Returns e to the power of a specified number. | -| FLOOR | FLOOR(_number_) | Returns the largest integer value that is smaller than, or equal to, a given number. | -| RANDOM | RANDOM(_seed_) | Returns a pseudo random number. | -| ROUND | ROUND(_number,decimal_places_) | Rounds a given number to a specified number of decimal places. | -| SQRT | SQRT(_expression_) | Returns the square root of an expression. | +| `ABS` | `ABS(expression)` | Returns the absolute value of a given numeric expression. | +| `CEIL` | `CEIL(number)` | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. | +| `EXP` | `EXP(number)` | Returns e to the power of a specified number. | +| `FLOOR` | `FLOOR(number)` | Returns the largest integer value that is smaller than, or equal to, a given number. | +| `RANDOM` | `RANDOM(seed)` | Returns a pseudo random number. | +| `ROUND` | `ROUND(number, decimal_places)` | Rounds a given number to a specified number of decimal places. | +| `SQRT` | `SQRT(expression)` | Returns the square root of an expression. 
| ### String | Keyword | Syntax | Description | | ----------- | ------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| CONCAT | CONCAT(_string_1, string_2, ...., string_n_) | Concatenates, or joins, two or more strings together, resulting in a single string. | -| CONCAT_WS | CONCAT*WS(\_separator, string_1, string_2, ...., string_n*) | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. | -| INSTR | INSTR(_string_1, string_2_) | Returns the first position, as an integer, of string_2 within string_1. | -| LEN | LEN(_string_) | Returns the length of a string. | -| LOWER | LOWER(_string_) | Converts a string to lower-case. | -| REGEXP | SELECT _column_name_ FROM _database.table_ WHERE _column_name_ REGEXP _pattern_ | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | -| REGEXP_LIKE | SELECT _column_name_ FROM _database.table_ WHERE REGEXP*LIKE(\_column_name, pattern*) | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | -| REPLACE | REPLACE(_string, old_string, new_string_) | Replaces all instances of old_string within new_string, with string. | -| SUBSTRING | SUBSTRING(_string, string_position, length_of_substring_) | Extracts a specified amount of characters from a string. | -| TRIM | TRIM([_character(s) FROM_] _string_) | Removes leading and trailing spaces, or specified character(s), from a string. | -| UPPER | UPPER(_string_) | Converts a string to upper-case. 
| +| `CONCAT` | `CONCAT(string_1, string_2, ...., string_n)` | Concatenates, or joins, two or more strings together, resulting in a single string. | +| `CONCAT_WS` | `CONCAT_WS(separator, string_1, string_2, ...., string_n)` | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. | +| `INSTR` | `INSTR(string_1, string_2)` | Returns the first position, as an integer, of string_2 within string_1. | +| `LEN` | `LEN(string)` | Returns the length of a string. | +| `LOWER` | `LOWER(string)` | Converts a string to lower-case. | +| `REGEXP` | `SELECT column_name FROM database.table WHERE column_name REGEXP pattern` | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | +| `REGEXP_LIKE` | `SELECT column_name FROM database.table WHERE REGEXP_LIKE(column_name, pattern)` | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | +| `REPLACE` | `REPLACE(string, old_string, new_string)` | Replaces all instances of old_string within new_string, with string. | +| `SUBSTRING` | `SUBSTRING(string, string_position, length_of_substring)` | Extracts a specified amount of characters from a string. | +| `TRIM` | `TRIM([character(s) FROM] string)` | Removes leading and trailing spaces, or specified character(s), from a string. | +| `UPPER` | `UPPER(string)` | Converts a string to upper-case. | ## Operators @@ -116,9 +102,9 @@ This SQL keywords reference contains the SQL functions available in Harper. 
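Since the reference tables only give syntax, a short runnable illustration of the comparison operators in this section (BETWEEN, IN, LIKE) may help. This sketch uses Python's built-in SQLite driver as a stand-in engine, since these operators behave the same way in standard SQL; the `dog` table and its rows are invented for this example and are not part of the Harper docs.

```python
import sqlite3

# Build a tiny in-memory table to query against.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dog (name TEXT, age INTEGER)")
cur.executemany(
    "INSERT INTO dog VALUES (?, ?)",
    [("Penny", 7), ("Harper", 5), ("Alby", 2), ("Billy", 4)],
)

# BETWEEN is inclusive on both ends: ages 4, 5, and 7 all match.
between = cur.execute(
    "SELECT name FROM dog WHERE age BETWEEN 4 AND 7 ORDER BY name"
).fetchall()

# IN specifies multiple candidate values in one WHERE clause.
in_list = cur.execute(
    "SELECT name FROM dog WHERE name IN ('Penny', 'Alby') ORDER BY name"
).fetchall()

# LIKE matches a pattern; 'P%' means "starts with P".
like = cur.execute(
    "SELECT name FROM dog WHERE name LIKE 'P%'"
).fetchall()

print(between, in_list, like)
```

Swapping the in-memory SQLite connection for a Harper SQL call leaves the WHERE clauses themselves unchanged.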
| Keyword | Syntax | Description |
| ------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------- |
-| BETWEEN | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ BETWEEN _value_1_ AND _value_2_ | (inclusive) Returns values(numbers, text, or dates) within a given range. |
-| IN | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IN(_value(s)_) | Used to specify multiple values in a WHERE clause. |
-| LIKE | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_n_ LIKE _pattern_ | Searches for a specified pattern within a WHERE clause. |
+| `BETWEEN` | `SELECT column_name(s) FROM database.table WHERE column_name BETWEEN value_1 AND value_2` | (inclusive) Returns values (numbers, text, or dates) within a given range. |
+| `IN` | `SELECT column_name(s) FROM database.table WHERE column_name IN(value(s))` | Used to specify multiple values in a WHERE clause. |
+| `LIKE` | `SELECT column_name(s) FROM database.table WHERE column_name LIKE pattern` | Searches for a specified pattern within a WHERE clause. |

## Queries

@@ -126,34 +112,34 @@ This SQL keywords reference contains the SQL functions available in Harper.

| Keyword | Syntax | Description |
| -------- | -------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- |
-| DISTINCT | SELECT DISTINCT _column_name(s)_ FROM _database.table_ | Returns only unique values, eliminating duplicate records. |
-| FROM | FROM _database.table_ | Used to list the database(s), table(s), and any joins required for a SQL statement. |
-| GROUP BY | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ GROUP BY _column_name(s)_ ORDER BY _column_name(s)_ | Groups rows that have the same values into summary rows. |
-| HAVING | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ GROUP BY _column_name(s)_ HAVING _condition_ ORDER BY _column_name(s)_ | Filters data based on a group or aggregate function. |
-| SELECT | SELECT _column_name(s)_ FROM _database.table_ | Selects data from table. |
-| WHERE | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ | Extracts records based on a defined condition. |
+| `DISTINCT` | `SELECT DISTINCT column_name(s) FROM database.table` | Returns only unique values, eliminating duplicate records. |
+| `FROM` | `FROM database.table` | Used to list the database(s), table(s), and any joins required for a SQL statement. |
+| `GROUP BY` | `SELECT column_name(s) FROM database.table WHERE condition GROUP BY column_name(s) ORDER BY column_name(s)` | Groups rows that have the same values into summary rows. |
+| `HAVING` | `SELECT column_name(s) FROM database.table WHERE condition GROUP BY column_name(s) HAVING condition ORDER BY column_name(s)` | Filters data based on a group or aggregate function. |
+| `SELECT` | `SELECT column_name(s) FROM database.table` | Selects data from a table. |
+| `WHERE` | `SELECT column_name(s) FROM database.table WHERE condition` | Extracts records based on a defined condition. |

### Joins

-| Keyword | Syntax | Description |
| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| CROSS JOIN | SELECT _column_name(s)_ FROM _database.table_1_ CROSS JOIN _database.table_2_ | Returns a paired combination of each row from _table_1_ with row from _table_2_. _Note: CROSS JOIN can return very large result sets and is generally considered bad practice._ |
-| FULL OUTER | SELECT _column_name(s)_ FROM _database.table_1_ FULL OUTER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ WHERE _condition_ | Returns all records when there is a match in either _table_1_ (left table) or _table_2_ (right table). |
-| [INNER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ INNER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return only matching records from _table_1_ (left table) and _table_2_ (right table). The INNER keyword is optional and does not affect the result. |
-| LEFT [OUTER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ LEFT OUTER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return all records from _table_1_ (left table) and matching data from _table_2_ (right table). The OUTER keyword is optional and does not affect the result. |
-| RIGHT [OUTER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ RIGHT OUTER JOIN _database.table_2_ ON _table_1.column_name = table_2.column_name_ | Return all records from _table_2_ (right table) and matching data from _table_1_ (left table). The OUTER keyword is optional and does not affect the result. |
+| Keyword | Syntax | Description |
| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `CROSS JOIN` | `SELECT column_name(s) FROM database.table_1 CROSS JOIN database.table_2` | Returns a paired combination of each row from `table_1` with each row from `table_2`. Note: CROSS JOIN can return very large result sets and is generally considered bad practice. |
+| `FULL OUTER` | `SELECT column_name(s) FROM database.table_1 FULL OUTER JOIN database.table_2 ON table_1.column_name = table_2.column_name WHERE condition` | Returns all records when there is a match in either `table_1` (left table) or `table_2` (right table). |
+| `[INNER] JOIN` | `SELECT column_name(s) FROM database.table_1 INNER JOIN database.table_2 ON table_1.column_name = table_2.column_name` | Return only matching records from `table_1` (left table) and `table_2` (right table). The INNER keyword is optional and does not affect the result. |
+| `LEFT [OUTER] JOIN` | `SELECT column_name(s) FROM database.table_1 LEFT OUTER JOIN database.table_2 ON table_1.column_name = table_2.column_name` | Return all records from `table_1` (left table) and matching data from `table_2` (right table). The OUTER keyword is optional and does not affect the result. |
+| `RIGHT [OUTER] JOIN` | `SELECT column_name(s) FROM database.table_1 RIGHT OUTER JOIN database.table_2 ON table_1.column_name = table_2.column_name` | Return all records from `table_2` (right table) and matching data from `table_1` (left table). The OUTER keyword is optional and does not affect the result. |

### Predicates

| Keyword | Syntax | Description |
| ----------- | ----------------------------------------------------------------------------- | -------------------------- |
-| IS NOT NULL | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IS NOT NULL | Tests for non-null values. |
-| IS NULL | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IS NULL | Tests for null values. |
+| `IS NOT NULL` | `SELECT column_name(s) FROM database.table WHERE column_name IS NOT NULL` | Tests for non-null values. |
+| `IS NULL` | `SELECT column_name(s) FROM database.table WHERE column_name IS NULL` | Tests for null values. |

### Statements

| Keyword | Syntax | Description |
| ------- | --------------------------------------------------------------------------------------------- | ----------------------------------- |
-| DELETE | DELETE FROM _database.table_ WHERE condition | Deletes existing data from a table. |
-| INSERT | INSERT INTO _database.table(column_name(s))_ VALUES(_value(s)_) | Inserts new records into a table. |
-| UPDATE | UPDATE _database.table_ SET _column_1 = value_1, column_2 = value_2, ....,_ WHERE _condition_ | Alters existing records in a table. |
+| `DELETE` | `DELETE FROM database.table WHERE condition` | Deletes existing data from a table. |
+| `INSERT` | `INSERT INTO database.table(column_name(s)) VALUES(value(s))` | Inserts new records into a table. |
+| `UPDATE` | `UPDATE database.table SET column_1 = value_1, column_2 = value_2, .... WHERE condition` | Alters existing records in a table. |
\ No newline at end of file
diff --git a/versioned_docs/version-4.4/developers/sql-guide/json-search.md b/versioned_docs/version-4.4/developers/sql-guide/json-search.md
index 13bd3b90..c4bcd1c8 100644
--- a/versioned_docs/version-4.4/developers/sql-guide/json-search.md
+++ b/versioned_docs/version-4.4/developers/sql-guide/json-search.md
@@ -12,7 +12,7 @@ Harper automatically indexes all top level attributes in a row / object written

## Syntax

-SEARCH_JSON(_expression, attribute_)
+`SEARCH_JSON(expression, attribute)`

Executes the supplied string _expression_ against data of the defined top level _attribute_ for each row. The expression both filters and defines output from the JSON document.

@@ -117,7 +117,7 @@ SEARCH_JSON(
)
```
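To make the filter-and-reshape behavior that a SEARCH_JSON expression performs more concrete, here is a plain-Python sketch of what such an expression evaluates to for a single row: iterate a `cast` array, keep the entries whose name matches, and reshape each match into an actor/character object. This is illustrative only; Harper evaluates the expression itself per row, and the `wanted` set and `cast` data below are invented for the sketch.

```python
# A hypothetical subset of the actor names the documentation's example filters on.
wanted = {"Robert Downey Jr.", "Chris Evans"}

def search_cast(cast):
    # Mirrors the expression's behavior: iterate the array, filter by name,
    # and build {"actor": ..., "character": ...} for each matching entry.
    return [
        {"actor": member["name"], "character": member["character"]}
        for member in cast
        if member["name"] in wanted
    ]

# Invented sample row data shaped like a `cast` attribute.
cast = [
    {"name": "Robert Downey Jr.", "character": "Tony Stark"},
    {"name": "Jon Favreau", "character": "Happy Hogan"},
]
print(search_cast(cast))  # [{'actor': 'Robert Downey Jr.', 'character': 'Tony Stark'}]
```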
+The first argument passed to SEARCH_JSON is the expression to execute against the second argument, which is the cast attribute on the credits table. This expression will execute for every row. The expression starts with "$[…]", which tells it to iterate all elements of the cast array.

Then the expression tells the function to only return entries where the name attribute matches any of the actors defined in the array:

@@ -125,7 +125,7 @@ Then the expression tells the function to only return entries where the name att

name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L. Jackson", "Gwyneth Paltrow", "Don Cheadle"]
```

-So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{“actor”: name, “character”: character}`. This tells the function to create a specific object for each matching entry.
+So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{"actor": name, "character": character}`. This tells the function to create a specific object for each matching entry.

**Sample Result**

diff --git a/versioned_docs/version-4.4/technical-details/reference/analytics.md b/versioned_docs/version-4.4/technical-details/reference/analytics.md
index 39c92109..4ee7fdb7 100644
--- a/versioned_docs/version-4.4/technical-details/reference/analytics.md
+++ b/versioned_docs/version-4.4/technical-details/reference/analytics.md
@@ -104,14 +104,14 @@ And a summary record looks like:

The following are general resource usage statistics that are tracked:

-- memory - This includes RSS, heap, buffer and external data usage.
-- utilization - How much of the time the worker was processing requests.
+- `memory` - This includes RSS, heap, buffer and external data usage.
+- `utilization` - How much of the time the worker was processing requests.
- mqtt-connections - The number of MQTT connections.

The following types of information is tracked for each HTTP request:

-- success - How many requests returned a successful response (20x response code). TTFB - Time to first byte in the response to the client.
-- transfer - Time to finish the transfer of the data to the client.
+- `success` - How many requests returned a successful response (20x response code). TTFB - Time to first byte in the response to the client.
+- `transfer` - Time to finish the transfer of the data to the client.
- bytes-sent - How many bytes of data were sent to the client.

Requests are categorized by operation name, for the operations API, by the resource (name) with the REST API, and by command for the MQTT interface.

diff --git a/versioned_docs/version-4.5/administration/harper-studio/create-account.md b/versioned_docs/version-4.5/administration/harper-studio/create-account.md
index ad13b535..2c8a43bc 100644
--- a/versioned_docs/version-4.5/administration/harper-studio/create-account.md
+++ b/versioned_docs/version-4.5/administration/harper-studio/create-account.md
@@ -12,7 +12,7 @@ Start at the [Harper Studio sign up page](https://studio.harperdb.io/sign-up).

- Email Address
- Subdomain
-  _Part of the URL that will be used to identify your Harper Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._
+  _Part of the URL that will be used to identify your Harper Cloud Instances.
For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._

- Coupon Code (optional)

diff --git a/versioned_docs/version-4.5/administration/harper-studio/instances.md b/versioned_docs/version-4.5/administration/harper-studio/instances.md
index f17acb70..07da8097 100644
--- a/versioned_docs/version-4.5/administration/harper-studio/instances.md
+++ b/versioned_docs/version-4.5/administration/harper-studio/instances.md
@@ -26,7 +26,7 @@ A summary view of all instances within an organization can be viewed by clicking

1. Fill out Instance Info.
   1. Enter Instance Name
-      _This will be used to build your instance URL. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com). The Instance URL will be previewed below._
+      _This will be used to build your instance URL. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com). The Instance URL will be previewed below._
   1. Enter Instance Username

diff --git a/versioned_docs/version-4.5/administration/harper-studio/organizations.md b/versioned_docs/version-4.5/administration/harper-studio/organizations.md
index 1bb56dd1..faae220e 100644
--- a/versioned_docs/version-4.5/administration/harper-studio/organizations.md
+++ b/versioned_docs/version-4.5/administration/harper-studio/organizations.md
@@ -29,7 +29,7 @@ A new organization can be created as follows:

- Enter Organization Name
  _This is used for descriptive purposes only._
- Enter Organization Subdomain
-  _Part of the URL that will be used to identify your Harper Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._
+  _Part of the URL that will be used to identify your Harper Cloud Instances. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._

4. Click Create Organization.

## Delete an Organization

diff --git a/versioned_docs/version-4.5/administration/harper-studio/query-instance-data.md b/versioned_docs/version-4.5/administration/harper-studio/query-instance-data.md
index 7f325d6a..89739a94 100644
--- a/versioned_docs/version-4.5/administration/harper-studio/query-instance-data.md
+++ b/versioned_docs/version-4.5/administration/harper-studio/query-instance-data.md
@@ -13,7 +13,7 @@ SQL queries can be executed directly through the Harper Studio with the followin

5. Enter your SQL query in the SQL query window.
6. Click **Execute**.

-_Please note, the Studio will execute the query exactly as entered.
For example, if you attempt to `SELECT *` from a table with millions of rows, you will most likely crash your browser._

## Browse Query Results Set

diff --git a/versioned_docs/version-4.5/administration/logging/standard-logging.md b/versioned_docs/version-4.5/administration/logging/standard-logging.md
index a5116ed7..044c2260 100644
--- a/versioned_docs/version-4.5/administration/logging/standard-logging.md
+++ b/versioned_docs/version-4.5/administration/logging/standard-logging.md
@@ -22,15 +22,15 @@ For example, a typical log entry looks like:

The components of a log entry are:

-- timestamp - This is the date/time stamp when the event occurred
-- level - This is an associated log level that gives a rough guide to the importance and urgency of the message. The available log levels in order of least urgent (and more verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`.
-- thread/ID - This reports the name of the thread and the thread ID that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process. Key threads are:
  - main - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads
  - http - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions.
  - Clustering\* - These are threads and processes that handle replication.
  - job - These are job threads that have been started to handle operations that are executed in a separate job thread.
-- tags - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags.
-- message - This is the main message that was reported.
+- `timestamp` - This is the date/time stamp when the event occurred
+- `level` - This is an associated log level that gives a rough guide to the importance and urgency of the message. The available log levels in order of least urgent (and more verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`.
+- `thread/ID` - This reports the name of the thread and the thread ID that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process. Key threads are:
+  - `main` - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads
+  - `http` - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions.
+  - `Clustering` - These are threads and processes that handle replication.
+  - `job` - These are job threads that have been started to handle operations that are executed in a separate job thread.
+- `tags` - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags.
+- `message` - This is the main message that was reported.

We try to keep logging to a minimum by default, to do this the default log level is `error`. If you require more information from the logs, increasing the log level down will provide that.

@@ -46,7 +46,7 @@ Harper logs can optionally be streamed to standard streams. Logging to standard

## Logging Rotation

-Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space. For more information see “logging” in our [config docs](../../deployments/configuration).
+Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space. For more information see "logging" in our [config docs](../../deployments/configuration).

## Read Logs via the API

diff --git a/versioned_docs/version-4.5/deployments/install-harper/linux.md b/versioned_docs/version-4.5/deployments/install-harper/linux.md
index 27a9dc79..cae27c9d 100644
--- a/versioned_docs/version-4.5/deployments/install-harper/linux.md
+++ b/versioned_docs/version-4.5/deployments/install-harper/linux.md
@@ -20,7 +20,7 @@ These instructions assume that the following has already been completed:

While you will need to access Harper through port 9925 for the administration through the operations API, and port 9932 for clustering, for higher level of security, you may want to consider keeping both of these ports restricted to a VPN or VPC, and only have the application interface (9926 by default) exposed to the public Internet.

-For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default “ubuntu” user account.
+For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default "ubuntu" user account.

---

diff --git a/versioned_docs/version-4.5/developers/applications/caching.md b/versioned_docs/version-4.5/developers/applications/caching.md
index 29fec826..e655a32c 100644
--- a/versioned_docs/version-4.5/developers/applications/caching.md
+++ b/versioned_docs/version-4.5/developers/applications/caching.md
@@ -22,9 +22,9 @@ While you can provide a single expiration time, there are actually several expiration

You can provide a single expiration and it defines the behavior for all three. You can also provide three settings for expiration, through table directives:

-- expiration - The amount of time until a record goes stale.
-- eviction - The amount of time after expiration before a record can be evicted (defaults to zero).
-- scanInterval - The interval for scanning for expired records (defaults to one quarter of the total of expiration and eviction).
+- `expiration` - The amount of time until a record goes stale.
+- `eviction` - The amount of time after expiration before a record can be evicted (defaults to zero).
+- `scanInterval` - The interval for scanning for expired records (defaults to one quarter of the total of expiration and eviction).

## Define External Data Source

diff --git a/versioned_docs/version-4.5/developers/applications/define-routes.md b/versioned_docs/version-4.5/developers/applications/define-routes.md
index 720f4f06..37c3d016 100644
--- a/versioned_docs/version-4.5/developers/applications/define-routes.md
+++ b/versioned_docs/version-4.5/developers/applications/define-routes.md
@@ -22,7 +22,7 @@ However, you can specify the path to be `/` if you wish to have your routes hand

- The route below, using the default config, within the **dogs** project, with a route of **breeds** would be available at **http:/localhost:9926/dogs/breeds**.

-In effect, this route is just a pass-through to Harper. The same result could have been achieved by hitting the core Harper API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the “helper methods” section, below.
+In effect, this route is just a pass-through to Harper. The same result could have been achieved by hitting the core Harper API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the "helper methods" section, below.
```javascript export default async (server, { hdbCore, logger }) => { @@ -39,7 +39,7 @@ export default async (server, { hdbCore, logger }) => { For endpoints where you want to execute multiple operations against Harper, or perform additional processing (like an ML classification, or an aggregation, or a call to a 3rd party API), you can define your own logic in the handler. The function below will execute a query against the dogs table, and filter the results to only return those dogs over 4 years in age. -**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which- as the name implies- bypasses all user authentication. See the security concerns and mitigations in the “helper methods” section, below.** +**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which- as the name implies- bypasses all user authentication. See the security concerns and mitigations in the "helper methods" section, below.** ```javascript export default async (server, { hdbCore, logger }) => { diff --git a/versioned_docs/version-4.5/developers/clustering/index.md b/versioned_docs/version-4.5/developers/clustering/index.md index 95c3433c..fddd3851 100644 --- a/versioned_docs/version-4.5/developers/clustering/index.md +++ b/versioned_docs/version-4.5/developers/clustering/index.md @@ -22,10 +22,10 @@ A common use case is an edge application collecting and analyzing sensor data th Harper simplifies the architecture of such an application with its bi-directional, table-level replication: -- The edge instance subscribes to a “thresholds” table on the cloud instance, so the application only makes localhost calls to get the thresholds. -- The application continually pushes sensor data into a “sensor_data” table via the localhost API, comparing it to the threshold values as it does so. -- When a threshold violation occurs, the application adds a record to the “alerts” table. 
-- The application appends to that record array “sensor_data” entries for the 60 seconds (or minutes, or days) leading up to the threshold violation. -- The edge instance publishes the “alerts” table up to the cloud instance. +- The edge instance subscribes to a "thresholds" table on the cloud instance, so the application only makes localhost calls to get the thresholds. +- The application continually pushes sensor data into a "sensor_data" table via the localhost API, comparing it to the threshold values as it does so. +- When a threshold violation occurs, the application adds a record to the "alerts" table. +- The application appends to that record array "sensor_data" entries for the 60 seconds (or minutes, or days) leading up to the threshold violation. +- The edge instance publishes the "alerts" table up to the cloud instance. By letting Harper focus on the fault-tolerant logistics of transporting your data, you get to write less code. By moving data only when and where it’s needed, you lower storage and bandwidth costs. And by restricting your app to only making local calls to Harper, you reduce the overall exposure of your application to outside forces. diff --git a/versioned_docs/version-4.5/developers/components/reference.md b/versioned_docs/version-4.5/developers/components/reference.md index 4b709f86..bb041616 100644 --- a/versioned_docs/version-4.5/developers/components/reference.md +++ b/versioned_docs/version-4.5/developers/components/reference.md @@ -105,7 +105,7 @@ There are two key types of Harper Extensions: **Resource Extension** and **Proto Functionally, what makes an extension a component is the contents of `config.yaml`. Unlike the Application Template referenced earlier, which specified multiple components within the `config.yaml`, an extension will specify an `extensionModule` option. -- **extensionModule** - `string` - _required_ - A path to the extension module source code. 
The path must resolve from the root of the extension module directory.
+- `extensionModule` - `string` - _required_ - A path to the extension module source code. The path must resolve from the root of the extension module directory.

For example, the [Harper Next.js Extension](https://github.com/HarperDB/nextjs) `config.yaml` specifies `extensionModule: ./extension.js`.

@@ -129,9 +129,9 @@ Other than their execution behavior, the `handleFile()` and `setupFile()` method

Any [Resource Extension](#resource-extension) can be configured with the `files`, `path`, and `root` options. These options control how _files_ and _directories_ are resolved in order to be passed to the extension's `handleFile()`, `setupFile()`, `handleDirectory()`, and `setupDirectory()` methods.

-- **files** - `string` - _required_ - Specifies the set of files and directories that should be handled by the component. Can be a glob pattern.
-- **path** - `string` - _optional_ - Specifies the URL path to be handled by the component.
-- **root** - `string` - _optional_ - Specifies the root directory for mapping file paths to the URLs.
+- `files` - `string` - _required_ - Specifies the set of files and directories that should be handled by the component. Can be a glob pattern.
+- `path` - `string` - _optional_ - Specifies the URL path to be handled by the component.
+- `root` - `string` - _optional_ - Specifies the root directory for mapping file paths to the URLs.

For example, to configure the [static](./built-in#static) component to serve all files from `web` to the root URL path:

@@ -188,11 +188,11 @@ These methods are for processing individual files. They can be async.
Parameters: -- **contents** - `Buffer` - The contents of the file -- **urlPath** - `string` - The recommended URL path of the file -- **path** - `string` - The relative path of the file +- `contents` - `Buffer` - The contents of the file +- `urlPath` - `string` - The recommended URL path of the file +- `path` - `string` - The relative path of the file -- **resources** - `Object` - A collection of the currently loaded resources +- `resources` - `Object` - A collection of the currently loaded resources Returns: `void | Promise` @@ -212,10 +212,10 @@ If the function returns or resolves a truthy value, then the component loading s Parameters: -- **urlPath** - `string` - The recommended URL path of the file -- **path** - `string` - The relative path of the directory +- `urlPath` - `string` - The recommended URL path of the file +- `path` - `string` - The relative path of the directory -- **resources** - `Object` - A collection of the currently loaded resources +- `resources` - `Object` - A collection of the currently loaded resources Returns: `boolean | void | Promise` @@ -249,6 +249,6 @@ A Protocol Extension is made up of two distinct methods, [`start()`](#startoptio Parameters: -- **options** - `Object` - An object representation of the extension's configuration options. +- `options` - `Object` - An object representation of the extension's configuration options. Returns: `Object` - An object that implements any of the [Resource Extension APIs](#resource-extension-api) diff --git a/versioned_docs/version-4.5/developers/miscellaneous/google-data-studio.md b/versioned_docs/version-4.5/developers/miscellaneous/google-data-studio.md index 02fea100..7939417b 100644 --- a/versioned_docs/version-4.5/developers/miscellaneous/google-data-studio.md +++ b/versioned_docs/version-4.5/developers/miscellaneous/google-data-studio.md @@ -19,9 +19,9 @@ Get started by selecting the Harper connector from the [Google Data Studio Partn 1. 
Log in to [https://datastudio.google.com/](https://datastudio.google.com/). 1. Add a new Data Source using the Harper connector. The current release version can be added as a data source by following this link: [Harper Google Data Studio Connector](https://datastudio.google.com/datasources/create?connectorId=AKfycbxBKgF8FI5R42WVxO-QCOq7dmUys0HJrUJMkBQRoGnCasY60_VJeO3BhHJPvdd20-S76g). 1. Authorize the connector to access other servers on your behalf (this allows the connector to contact your database). -1. Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word “Basic” at the start of it. -1. Check the box for “Secure Connections Only” if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer. -1. Check the box for “Allow Bad Certs” if your Harper instance does not have a valid SSL certificate. [Harper Cloud](../../deployments/harper-cloud/) always has valid certificates, and so will never require this to be checked. Instances you set up yourself may require this, if you are using self-signed certs. If you are using [Harper Cloud](../../deployments/harper-cloud/) or another instance you know should always have valid SSL certificates, do not check this box. +1. Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word "Basic" at the start of it. +1. Check the box for "Secure Connections Only" if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer. +1. Check the box for "Allow Bad Certs" if your Harper instance does not have a valid SSL certificate. [Harper Cloud](../../deployments/harper-cloud/) always has valid certificates, and so will never require this to be checked. 
Instances you set up yourself may require this, if you are using self-signed certs. If you are using [Harper Cloud](../../deployments/harper-cloud/) or another instance you know should always have valid SSL certificates, do not check this box. 1. Choose your Query Type. This determines what information the configuration will ask for after pressing the Next button. - Table will ask you for a Schema and a Table to return all fields of using `SELECT *`. - SQL will ask you for the SQL query you’re using to retrieve fields from the database. You may `JOIN` multiple tables together, and use Harper specific SQL functions, along with the usual power SQL grants. diff --git a/versioned_docs/version-4.5/developers/operations-api/bulk-operations.md b/versioned_docs/version-4.5/developers/operations-api/bulk-operations.md index 372e5bb3..aef33230 100644 --- a/versioned_docs/version-4.5/developers/operations-api/bulk-operations.md +++ b/versioned_docs/version-4.5/developers/operations-api/bulk-operations.md @@ -8,11 +8,11 @@ title: Bulk Operations Ingests CSV data, provided directly in the operation as an `insert`, `update` or `upsert` into the specified database table. -- operation _(required)_ - must always be `csv_data_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- data _(required)_ - csv data to import into Harper +- `operation` _(required)_ - must always be `csv_data_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. 
The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `data` _(required)_ - csv data to import into Harper ### Body @@ -43,11 +43,11 @@ Ingests CSV data, provided via a path on the local filesystem, as an `insert`, ` _Note: The CSV file must reside on the same machine on which Harper is running. For example, the path to a CSV on your computer will produce an error if your Harper instance is a cloud instance._ -- operation _(required)_ - must always be `csv_file_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- file*path *(required)\_ - path to the csv file on the host running Harper +- `operation` _(required)_ - must always be `csv_file_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `file_path` _(required)_ - path to the csv file on the host running Harper ### Body @@ -76,11 +76,11 @@ _Note: The CSV file must reside on the same machine on which Harper is running. Ingests CSV data, provided via URL, as an `insert`, `update` or `upsert` into the specified database table. -- operation _(required)_ - must always be `csv_url_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. 
The default is `data` -- table _(required)_ - name of the table where you are loading your data -- csv*url *(required)\_ - URL to the csv +- `operation` _(required)_ - must always be `csv_url_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `csv_url` _(required)_ - URL to the csv ### Body @@ -109,16 +109,16 @@ Ingests CSV data, provided via URL, as an `insert`, `update` or `upsert` into th This operation allows users to import CSV or JSON files from an AWS S3 bucket as an `insert`, `update` or `upsert`. -- operation _(required)_ - must always be `import_from_s3` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- s3 _(required)_ - object containing required AWS S3 bucket info for operation: - - aws_access_key_id - AWS access key for authenticating into your S3 bucket - - aws_secret_access_key - AWS secret for authenticating into your S3 bucket - - bucket - AWS S3 bucket to import from - - key - the name of the file to import - _the file must include a valid file extension ('.csv' or '.json')_ - - region - the region of the bucket +- `operation` _(required)_ - must always be `import_from_s3` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. 
The default is `data`
+- `table` _(required)_ - name of the table where you are loading your data
+- `s3` _(required)_ - object containing required AWS S3 bucket info for operation:
+  - `aws_access_key_id` - AWS access key for authenticating into your S3 bucket
+  - `aws_secret_access_key` - AWS secret for authenticating into your S3 bucket
+  - `bucket` - AWS S3 bucket to import from
+  - `key` - the name of the file to import - _the file must include a valid file extension ('.csv' or '.json')_
+  - `region` - the region of the bucket

### Body

diff --git a/versioned_docs/version-4.5/developers/operations-api/clustering-nats.md b/versioned_docs/version-4.5/developers/operations-api/clustering-nats.md
index a45c593e..45e160c4 100644
--- a/versioned_docs/version-4.5/developers/operations-api/clustering-nats.md
+++ b/versioned_docs/version-4.5/developers/operations-api/clustering-nats.md
@@ -10,11 +10,11 @@ Adds a route/routes to either the hub or leaf server cluster configuration. This

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `cluster_set_routes`
-- server _(required)_ - must always be `hub` or `leaf`, in most cases you should use `hub` here
-- routes _(required)_ - must always be an objects array with a host and port:
-  - host - the host of the remote instance you are clustering to
-  - port - the clustering port of the remote instance you are clustering to, in most cases this is the value in `clustering.hubServer.cluster.network.port` on the remote instance `harperdb-config.yaml`
+- `operation` _(required)_ - must always be `cluster_set_routes`
+- `server` _(required)_ - must always be `hub` or `leaf`, in most cases you should use `hub` here
+- `routes` _(required)_ - must always be an object array with a host and port:
+  - `host` - the host of the remote instance you are clustering to
+  - `port` - the clustering port of the remote instance you are clustering to, in most cases this is the value in
`clustering.hubServer.cluster.network.port` on the remote instance `harperdb-config.yaml` ### Body @@ -78,7 +78,7 @@ Gets all the hub and leaf server routes from the config file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_get_routes` +- `operation` _(required)_ - must always be `cluster_get_routes` ### Body @@ -122,8 +122,8 @@ Removes route(s) from hub and/or leaf server routes array in config file. Return _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_delete_routes` -- routes _required_ - Must be an array of route object(s) +- `operation` _(required)_ - must always be `cluster_delete_routes` +- `routes` _(required)_ - Must be an array of route object(s) ### Body @@ -162,14 +162,14 @@ Registers an additional Harper instance with associated subscriptions. Learn mor _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_node` -- node*name *(required)\_ - the node name of the remote node -- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`: - - schema - the schema to replicate from - - table - the table to replicate from - - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish - a boolean which determines if transactions on the local table should be replicated on the remote table - - start*time *(optional)\_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format +- `operation` _(required)_ - must always be `add_node` +- `node_name` _(required)_ - the node name of the remote node +- `subscriptions` _(required)_ - The relationship created between nodes. 
Must be an object array and include `schema`, `table`, `subscribe` and `publish`:
+  - `schema` - the schema to replicate from
+  - `table` - the table to replicate from
+  - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table
+  - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table
+  - `start_time` _(optional)_ - How far back to go to get transactions from the node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format

### Body

@@ -205,14 +205,14 @@ Modifies an existing Harper instance registration and associated subscriptions.

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `update_node`
-- node*name *(required)\_ - the node name of the remote node you are updating
-- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`:
-  - schema - the schema to replicate from
-  - table - the table to replicate from
-  - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table
-  - publish - a boolean which determines if transactions on the local table should be replicated on the remote table
-  - start*time *(optional)\_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format
+- `operation` _(required)_ - must always be `update_node`
+- `node_name` _(required)_ - the node name of the remote node you are updating
+- `subscriptions` _(required)_ - The relationship created between nodes.
Must be an object array and include `schema`, `table`, `subscribe` and `publish`:
+  - `schema` - the schema to replicate from
+  - `table` - the table to replicate from
+  - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table
+  - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table
+  - `start_time` _(optional)_ - How far back to go to get transactions from the node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format

### Body

@@ -248,13 +248,13 @@ A more adeptly named alias for add and update node. This operation behaves as a

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `set_node_replication`
-- node*name *(required)\_ - the node name of the remote node you are updating
-- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and `table`, `subscribe` and `publish`:
-  - database _(optional)_ - the database to replicate from
-  - table _(required)_ - the table to replicate from
-  - subscribe _(required)_ - a boolean which determines if transactions on the remote table should be replicated on the local table
-  - publish _(required)_ - a boolean which determines if transactions on the local table should be replicated on the remote table
+- `operation` _(required)_ - must always be `set_node_replication`
+- `node_name` _(required)_ - the node name of the remote node you are updating
+- `subscriptions` _(required)_ - The relationship created between nodes.
Must be an object array and include `table`, `subscribe` and `publish`:
+  - `database` _(optional)_ - the database to replicate from
+  - `table` _(required)_ - the table to replicate from
+  - `subscribe` _(required)_ - a boolean which determines if transactions on the remote table should be replicated on the local table
+  - `publish` _(required)_ - a boolean which determines if transactions on the local table should be replicated on the remote table
-

### Body

@@ -289,7 +289,7 @@ Returns an array of status objects from a cluster. A status object will contain

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `cluster_status`
+- `operation` _(required)_ - must always be `cluster_status`

### Body

@@ -336,10 +336,10 @@ Returns an object array of enmeshed nodes. Each node object will contain the nam

_Operation is restricted to super_user roles only_

-- operation _(required)_- must always be `cluster_network`
-- timeout (_optional_) - the amount of time in milliseconds to wait for a response from the network. Must be a number
-- connected*nodes (\_optional*) - omit `connected_nodes` from the response. Must be a boolean. Defaults to `false`
-- routes (_optional_) - omit `routes` from the response. Must be a boolean. Defaults to `false`
+- `operation` _(required)_ - must always be `cluster_network`
+- `timeout` _(optional)_ - the amount of time in milliseconds to wait for a response from the network. Must be a number
+- `connected_nodes` _(optional)_ - omit `connected_nodes` from the response. Must be a boolean. Defaults to `false`
+- `routes` _(optional)_ - omit `routes` from the response. Must be a boolean. Defaults to `false`

### Body

@@ -383,8 +383,8 @@ Removes a Harper instance and associated subscriptions from the cluster.
Learn m _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `remove_node` -- name _(required)_ - The name of the node you are de-registering +- `operation` _(required)_ - must always be `remove_node` +- `node_name` _(required)_ - The name of the node you are de-registering ### Body @@ -412,8 +412,8 @@ Learn more about [Harper clustering here](../clustering/). _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `configure_cluster` -- connections _(required)_ - must be an object array with each object containing `node_name` and `subscriptions` for that node +- `operation` _(required)_ - must always be `configure_cluster` +- `connections` _(required)_ - must be an object array with each object containing `node_name` and `subscriptions` for that node ### Body @@ -463,10 +463,10 @@ Will purge messages from a stream _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `purge_stream` -- database _(required)_ - the name of the database where the streams table resides -- table _(required)_ - the name of the table that belongs to the stream -- options _(optional)_ - control how many messages get purged. Options are: +- `operation` _(required)_ - must always be `purge_stream` +- `database` _(required)_ - the name of the database where the streams table resides +- `table` _(required)_ - the name of the table that belongs to the stream +- `options` _(optional)_ - control how many messages get purged. 
Options are: - `keep` - purge will keep this many most recent messages - `seq` - purge all messages up to, but not including, this sequence diff --git a/versioned_docs/version-4.5/developers/operations-api/clustering.md b/versioned_docs/version-4.5/developers/operations-api/clustering.md index 8f905f18..cbfd132d 100644 --- a/versioned_docs/version-4.5/developers/operations-api/clustering.md +++ b/versioned_docs/version-4.5/developers/operations-api/clustering.md @@ -14,18 +14,18 @@ Adds a new Harper instance to the cluster. If `subscriptions` are provided, it w _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_node` -- hostname or url _(required)_ - one of these fields is required. You must provide either the `hostname` or the `url` of the node you want to add -- verify_tls _(optional)_ - a boolean which determines if the TLS certificate should be verified. This will allow the Harper default self-signed certificates to be accepted. Defaults to `true` -- authorization _(optional)_ - an object or a string which contains the authorization information for the node being added. If it is an object, it should contain `username` and `password` fields. If it is a string, it should use HTTP `Authorization` style credentials -- retain*authorization *(optional)\_ - a boolean which determines if the authorization credentials should be retained/stored and used everytime a connection is made to this node. If `true`, the authorization will be stored on the node record. Generally this should not be used, as mTLS/certificate based authorization is much more secure and safe, and avoids the need for storing credentials. Defaults to `false`. -- revoked*certificates *(optional)\_ - an array of revoked certificates serial numbers. If a certificate is revoked, it will not be accepted for any connections. -- shard _(optional)_ - a number which can be used to indicate which shard this node belongs to. 
This is only needed if you are using sharding.
-- subscriptions _(optional)_ - The relationship created between nodes. If not provided a fully replicated cluster will be setup. Must be an object array and include `database`, `table`, `subscribe` and `publish`:
-  - database - the database to replicate
-  - table - the table to replicate
-  - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table
-  - publish - a boolean which determines if transactions on the local table should be replicated on the remote table
+- `operation` _(required)_ - must always be `add_node`
+- `hostname` or `url` _(required)_ - one of these fields is required. You must provide either the `hostname` or the `url` of the node you want to add
+- `verify_tls` _(optional)_ - a boolean which determines if the TLS certificate should be verified. This will allow the Harper default self-signed certificates to be accepted. Defaults to `true`
+- `authorization` _(optional)_ - an object or a string which contains the authorization information for the node being added. If it is an object, it should contain `username` and `password` fields. If it is a string, it should use HTTP `Authorization` style credentials
+- `retain_authorization` _(optional)_ - a boolean which determines if the authorization credentials should be retained/stored and used every time a connection is made to this node. If `true`, the authorization will be stored on the node record. Generally this should not be used, as mTLS/certificate-based authorization is much more secure and safe, and avoids the need for storing credentials. Defaults to `false`.
+- `revoked_certificates` _(optional)_ - an array of revoked certificate serial numbers. If a certificate is revoked, it will not be accepted for any connections.
+- `shard` _(optional)_ - a number which can be used to indicate which shard this node belongs to. This is only needed if you are using sharding.
+- `subscriptions` _(optional)_ - The relationship created between nodes. If not provided, a fully replicated cluster will be set up. Must be an object array and include `database`, `table`, `subscribe` and `publish`:
+  - `database` - the database to replicate
+  - `table` - the table to replicate
+  - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table
+  - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table

### Body

@@ -59,15 +59,15 @@ _Operation is restricted to super_user roles only_

_Note: will attempt to add the node if it does not exist_

-- operation _(required)_ - must always be `update_node`
-- hostname _(required)_ - the `hostname` of the remote node you are updating
-- revoked*certificates *(optional)\_ - an array of revoked certificates serial numbers. If a certificate is revoked, it will not be accepted for any connections.
-- shard _(optional)_ - a number which can be used to indicate which shard this node belongs to. This is only needed if you are using sharding.
-- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `database`, `table`, `subscribe` and `publish`:
-  - database - the database to replicate from
-  - table - the table to replicate from
-  - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table
-  - publish - a boolean which determines if transactions on the local table should be replicated on the remote table
+- `operation` _(required)_ - must always be `update_node`
+- `hostname` _(required)_ - the `hostname` of the remote node you are updating
+- `revoked_certificates` _(optional)_ - an array of revoked certificate serial numbers. If a certificate is revoked, it will not be accepted for any connections.
+- `shard` _(optional)_ - a number which can be used to indicate which shard this node belongs to.
This is only needed if you are using sharding. +- `subscriptions` _(required)_ - The relationship created between nodes. Must be an object array and include `database`, `table`, `subscribe` and `publish`: + - `database` - the database to replicate from + - `table` - the table to replicate from + - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table ### Body @@ -102,8 +102,8 @@ Removes a Harper node from the cluster and stops replication, [Learn more about _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `remove_node` -- name _(required)_ - The name of the node you are removing +- `operation` _(required)_ - must always be `remove_node` +- `name` _(required)_ - The name of the node you are removing ### Body @@ -132,7 +132,7 @@ Returns an array of status objects from a cluster. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_status` +- `operation` _(required)_ - must always be `cluster_status` ### Body @@ -167,7 +167,8 @@ _Operation is restricted to super_user roles only_ "lastReceivedRemoteTime": "Wed, 12 Feb 2025 16:49:29 GMT", "lastReceivedLocalTime": "Wed, 12 Feb 2025 16:50:59 GMT", "lastSendTime": "Wed, 12 Feb 2025 16:50:59 GMT" - }, + } + ] } ], "node_name": "server-1.domain.com", @@ -190,8 +191,8 @@ Bulk create/remove subscriptions for any number of remote nodes. Resets and repl _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `configure_cluster` -- connections _(required)_ - must be an object array with each object following the `add_node` schema. +- `operation` _(required)_ - must always be `configure_cluster` +- `connections` _(required)_ - must be an object array with each object following the `add_node` schema. 
### Body @@ -251,8 +252,8 @@ Adds a route/routes to the `replication.routes` configuration. This operation be _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_set_routes` -- routes _(required)_ - the routes field is an array that specifies the routes for clustering. Each element in the array can be either a string or an object with `hostname` and `port` properties. +- `operation` _(required)_ - must always be `cluster_set_routes` +- `routes` _(required)_ - the routes field is an array that specifies the routes for clustering. Each element in the array can be either a string or an object with `hostname` and `port` properties. ### Body @@ -293,7 +294,7 @@ Gets the replication routes from the Harper config file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_get_routes` +- `operation` _(required)_ - must always be `cluster_get_routes` ### Body @@ -323,8 +324,8 @@ Removes route(s) from the Harper config file. 
Returns a deletion success message _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_delete_routes` -- routes _required_ - Must be an array of route object(s) +- `operation` _(required)_ - must always be `cluster_delete_routes` +- `routes` _(required)_ - Must be an array of route object(s) ### Body diff --git a/versioned_docs/version-4.5/developers/operations-api/components.md b/versioned_docs/version-4.5/developers/operations-api/components.md index 69de35bb..c171aaad 100644 --- a/versioned_docs/version-4.5/developers/operations-api/components.md +++ b/versioned_docs/version-4.5/developers/operations-api/components.md @@ -10,9 +10,9 @@ Creates a new component project in the component root directory using a predefin _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_component` -- project _(required)_ - the name of the project you wish to create -- replicated _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `add_component` +- `project` _(required)_ - the name of the project you wish to create +- `replicated` _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. ### Body @@ -75,13 +75,13 @@ _Note: After deploying a component a restart may be required_ _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `deploy_component` -- project _(required)_ - the name of the project you wish to deploy -- package _(optional)_ - this can be any valid GitHub or NPM reference -- payload _(optional)_ - a base64-encoded string representation of the .tar file. Must be a string -- restart _(optional)_ - must be either a boolean or the string `rolling`. 
If set to `rolling`, a rolling restart will be triggered after the component is deployed, meaning that each node in the cluster will be sequentially restarted (waiting for the last restart to start the next). If set to `true`, the restart will not be rolling, all nodes will be restarted in parallel. If `replicated` is `true`, the restart operations will be replicated across the cluster. -- replicated _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. -- install_command _(optional)_ - A command to use when installing the component. Must be a string. This can be used to install dependencies with pnpm or yarn, for example, like: `"install_command": "npm install -g pnpm && pnpm install"` +- `operation` _(required)_ - must always be `deploy_component` +- `project` _(required)_ - the name of the project you wish to deploy +- `package` _(optional)_ - this can be any valid GitHub or NPM reference +- `payload` _(optional)_ - a base64-encoded string representation of the .tar file. Must be a string +- `restart` _(optional)_ - must be either a boolean or the string `rolling`. If set to `rolling`, a rolling restart will be triggered after the component is deployed, meaning that each node in the cluster will be sequentially restarted (waiting for the last restart to start the next). If set to `true`, the restart will not be rolling, all nodes will be restarted in parallel. If `replicated` is `true`, the restart operations will be replicated across the cluster. +- `replicated` _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. +- `install_command` _(optional)_ - A command to use when installing the component. Must be a string. 
This can be used to install dependencies with pnpm or yarn, for example, like: `"install_command": "npm install -g pnpm && pnpm install"` ### Body @@ -118,9 +118,9 @@ Creates a temporary `.tar` file of the specified project folder, then reads it i _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `package_component` -- project _(required)_ - the name of the project you wish to package -- skip_node_modules _(optional)_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean +- `operation` _(required)_ - must always be `package_component` +- `project` _(required)_ - the name of the project you wish to package +- `skip_node_modules` _(optional)_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean ### Body @@ -151,11 +151,11 @@ Deletes a file from inside the component project or deletes the complete project _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_component` -- project _(required)_ - the name of the project you wish to delete or to delete from if using the `file` parameter -- file _(optional)_ - the path relative to your project folder of the file you wish to delete -- replicated _(optional)_ - if true, Harper will replicate the component deletion to all nodes in the cluster. Must be a boolean. -- restart _(optional)_ - if true, Harper will restart after dropping the component. Must be a boolean. +- `operation` _(required)_ - must always be `drop_component` +- `project` _(required)_ - the name of the project you wish to delete or to delete from if using the `file` parameter +- `file` _(optional)_ - the path relative to your project folder of the file you wish to delete +- `replicated` _(optional)_ - if true, Harper will replicate the component deletion to all nodes in the cluster. Must be a boolean. 
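As a sketch of the `deploy_component` parameters above, a request body can be assembled by base64-encoding the packaged `.tar` archive into `payload`. The project name and archive bytes here are placeholders, not values from the docs:

```python
import base64
import json

def deploy_component_body(project, tar_bytes, restart="rolling", replicated=True):
    # `payload` must be a base64-encoded string representation of the .tar file
    return {
        "operation": "deploy_component",
        "project": project,
        "payload": base64.b64encode(tar_bytes).decode("ascii"),
        "restart": restart,        # "rolling", True, or False, per the docs
        "replicated": replicated,  # replicate the component to all nodes
    }

# Placeholder project name and archive contents for illustration only
body = deploy_component_body("my-component", b"<tar archive bytes>")
print(json.dumps(body, indent=2))
```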
+- `restart` _(optional)_ - if true, Harper will restart after dropping the component. Must be a boolean. ### Body @@ -183,7 +183,7 @@ Gets all local component files and folders and any component config from `harper _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_components` +- `operation` _(required)_ - must always be `get_components` ### Body @@ -264,10 +264,10 @@ Gets the contents of a file inside a component project. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_component_file` -- project _(required)_ - the name of the project where the file is located -- file _(required)_ - the path relative to your project folder of the file you wish to view -- encoding _(optional)_ - the encoding that will be passed to the read file call. Defaults to `utf8` +- `operation` _(required)_ - must always be `get_component_file` +- `project` _(required)_ - the name of the project where the file is located +- `file` _(required)_ - the path relative to your project folder of the file you wish to view +- `encoding` _(optional)_ - the encoding that will be passed to the read file call. Defaults to `utf8` ### Body @@ -295,12 +295,12 @@ Creates or updates a file inside a component project. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_component_file` -- project _(required)_ - the name of the project the file is located in -- file _(required)_ - the path relative to your project folder of the file you wish to set -- payload _(required)_ - what will be written to the file -- encoding _(optional)_ - the encoding that will be passed to the write file call. Defaults to `utf8` -- replicated _(optional)_ - if true, Harper will replicate the component update to all nodes in the cluster. Must be a boolean. 
+- `operation` _(required)_ - must always be `set_component_file` +- `project` _(required)_ - the name of the project the file is located in +- `file` _(required)_ - the path relative to your project folder of the file you wish to set +- `payload` _(required)_ - what will be written to the file +- `encoding` _(optional)_ - the encoding that will be passed to the write file call. Defaults to `utf8` +- `replicated` _(optional)_ - if true, Harper will replicate the component update to all nodes in the cluster. Must be a boolean. ### Body @@ -327,23 +327,15 @@ Adds an SSH key for deploying components from private repositories. This will al _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_ssh_key` -- name _(required)_ - the name of the key -- key _(required)_ - the private key contents. Must be an ed25519 key. Line breaks must be delimited with `\n` and have a trailing `\n` -- host _(required)_ - the host for the ssh config (see below). Used as part of the `package` url when deploying a component using this key -- hostname _(required)_ - the hostname for the ssh config (see below). Used to map `host` to an actual domain (e.g. `github.com`) -- known*hosts *(optional)\_ - the public SSH keys of the host your component will be retrieved from. If `hostname` is `github.com` this will be retrieved automatically. Line breaks must be delimited with `\n` -- replicated _(optional)_ - if true, HarperDB will replicate the key to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `add_ssh_key` +- `name` _(required)_ - the name of the key +- `key` _(required)_ - the private key contents. Must be an ed25519 key. Line breaks must be delimited with `\n` and have a trailing `\n` +- `host` _(required)_ - the host for the ssh config (see below). Used as part of the `package` url when deploying a component using this key +- `hostname` _(required)_ - the hostname for the ssh config (see below). 
Used to map `host` to an actual domain (e.g. `github.com`) +- `known_hosts` _(optional)_ - the public SSH keys of the host your component will be retrieved from. If `hostname` is `github.com` this will be retrieved automatically. Line breaks must be delimited with `\n` +- `replicated` _(optional)_ - if true, HarperDB will replicate the key to all nodes in the cluster. Must be a boolean. _Operation is restricted to super_user roles only_ -* operation _(required)_ - must always be `add_ssh_key` -* name _(required)_ - the name of the key -* key _(required)_ - the private key contents. Line breaks must be delimited with -* host _(required)_ - the host for the ssh config (see below). Used as part of the `package` url when deploying a component using this key -* hostname _(required)_ - the hostname for the ssh config (see below). Used to map `host` to an actual domain (e.g. `github.com`) -* known_hosts _(optional)_ - the public SSH keys of the host your component will be retrieved from. If `hostname` is `github.com` this will be retrieved automatically. Line breaks must be delimited with -* replicated _(optional)_ - if true, Harper will replicate the key to all nodes in the cluster. Must be a boolean. - ### Body ```json @@ -389,10 +381,10 @@ Updates the private key contents of an existing SSH key. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `update_ssh_key` -- name _(required)_ - the name of the key to be updated -- key _(required)_ - the private key contents. Must be an ed25519 key. Line breaks must be delimited with `\n` and have a trailing `\n` -- replicated _(optional)_ - if true, Harper will replicate the key update to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `update_ssh_key` +- `name` _(required)_ - the name of the key to be updated +- `key` _(required)_ - the private key contents. Must be an ed25519 key. 
Line breaks must be delimited with `\n` and have a trailing `\n` +- `replicated` _(optional)_ - if true, Harper will replicate the key update to all nodes in the cluster. Must be a boolean. ### Body @@ -420,9 +412,9 @@ Deletes a SSH key. This will also remove it from the generated SSH config. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_ssh_key` -- name _(required)_ - the name of the key to be deleted -- replicated _(optional)_ - if true, Harper will replicate the key deletion to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `delete_ssh_key` +- `name` _(required)_ - the name of the key to be deleted +- `replicated` _(optional)_ - if true, Harper will replicate the key deletion to all nodes in the cluster. Must be a boolean. ### Body @@ -446,7 +438,7 @@ List off the names of added SSH keys _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_ssh_keys` +- `operation` _(required)_ - must always be `list_ssh_keys` ### Body @@ -462,20 +454,21 @@ _Operation is restricted to super_user roles only_ [ { "name": "harperdb-private-component" - }, - ... + } ] ``` +_Note: Additional SSH keys would appear as more objects in this array_ + ## Set SSH Known Hosts Sets the SSH known_hosts file. This will overwrite the file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_ssh_known_hosts` -- known_hosts _(required)_ - The contents to set the known_hosts to. Line breaks must be delimite d with -- replicated _(optional)_ - if true, Harper will replicate the known hosts to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `set_ssh_known_hosts` +- `known_hosts` _(required)_ - The contents to set the known_hosts to. 
Line breaks must be delimited with `\n` +- `replicated` _(optional)_ - if true, Harper will replicate the known hosts to all nodes in the cluster. Must be a boolean. ### Body @@ -500,7 +493,7 @@ Gets the contents of the known_hosts file _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_ssh_known_hosts` +- `operation` _(required)_ - must always be `get_ssh_known_hosts` ### Body diff --git a/versioned_docs/version-4.5/developers/operations-api/custom-functions.md b/versioned_docs/version-4.5/developers/operations-api/custom-functions.md index 0b2261e0..23709148 100644 --- a/versioned_docs/version-4.5/developers/operations-api/custom-functions.md +++ b/versioned_docs/version-4.5/developers/operations-api/custom-functions.md @@ -12,7 +12,7 @@ Returns the state of the Custom functions server. This includes whether it is en _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `custom_function_status` +- `operation` _(required)_ - must always be `custom_function_status` ### Body @@ -40,7 +40,7 @@ Returns an array of projects within the Custom Functions root project directory. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_custom_functions` +- `operation` _(required)_ - must always be `get_custom_functions` ### Body @@ -70,10 +70,10 @@ Returns the content of the specified file as text.
Harper Studio uses this call _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_custom_function` -- project _(required)_ - the name of the project containing the file for which you wish to get content -- type _(required)_ - the name of the sub-folder containing the file for which you wish to get content - must be either routes or helpers -- file _(required)_ - The name of the file for which you wish to get content - should not include the file extension (which is always .js) +- `operation` _(required)_ - must always be `get_custom_function` +- `project` _(required)_ - the name of the project containing the file for which you wish to get content +- `type` _(required)_ - the name of the sub-folder containing the file for which you wish to get content - must be either routes or helpers +- `file` _(required)_ - the name of the file for which you wish to get content - should not include the file extension (which is always .js) ### Body @@ -102,11 +102,11 @@ Updates the content of the specified file.
Harper Studio uses this call to save _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_custom_function` -- project _(required)_ - the name of the project containing the file for which you wish to set content -- type _(required)_ - the name of the sub-folder containing the file for which you wish to set content - must be either routes or helpers -- file _(required)_ - the name of the file for which you wish to set content - should not include the file extension (which is always .js) -- function*content *(required)\_ - the content you wish to save into the specified file +- `operation` _(required)_ - must always be `set_custom_function` +- `project` _(required)_ - the name of the project containing the file for which you wish to set content +- `type` _(required)_ - the name of the sub-folder containing the file for which you wish to set content - must be either routes or helpers +- `file` _(required)_ - the name of the file for which you wish to set content - should not include the file extension (which is always .js) +- `function_content` _(required)_ - the content you wish to save into the specified file ### Body @@ -136,10 +136,10 @@ Deletes the specified file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_custom_function` -- project _(required)_ - the name of the project containing the file you wish to delete -- type _(required)_ - the name of the sub-folder containing the file you wish to delete. Must be either routes or helpers -- file _(required)_ - the name of the file you wish to delete. Should not include the file extension (which is always .js) +- `operation` _(required)_ - must always be `drop_custom_function` +- `project` _(required)_ - the name of the project containing the file you wish to delete +- `type` _(required)_ - the name of the sub-folder containing the file you wish to delete. 
Must be either routes or helpers +- `file` _(required)_ - the name of the file you wish to delete. Should not include the file extension (which is always .js) ### Body @@ -168,8 +168,8 @@ Creates a new project folder in the Custom Functions root project directory. It _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_custom_function_project` -- project _(required)_ - the name of the project you wish to create +- `operation` _(required)_ - must always be `add_custom_function_project` +- `project` _(required)_ - the name of the project you wish to create ### Body @@ -196,8 +196,8 @@ Deletes the specified project folder and all of its contents. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_custom_function_project` -- project _(required)_ - the name of the project you wish to delete +- `operation` _(required)_ - must always be `drop_custom_function_project` +- `project` _(required)_ - the name of the project you wish to delete ### Body @@ -224,9 +224,9 @@ Creates a .tar file of the specified project folder, then reads it into a base64 _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `package_custom_function_project` -- project _(required)_ - the name of the project you wish to package up for deployment -- skip*node_modules *(optional)\_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean. +- `operation` _(required)_ - must always be `package_custom_function_project` +- `project` _(required)_ - the name of the project you wish to package up for deployment +- `skip_node_modules` _(optional)_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean. 
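Since `package_custom_function_project` returns the project as a base64-encoded `.tar` string, that output can be fed directly into a deploy request on another instance. The response shape below is assumed for illustration (only the base64 `payload` behavior is stated in the docs):

```python
import base64

# Assumed response from package_custom_function_project: a base64-encoded
# string of the project's .tar file, carried in `payload`
package_response = {
    "project": "dogs",
    "payload": base64.b64encode(b"<tar archive bytes>").decode("ascii"),
}

# The packaged payload becomes the input to deploy_custom_function_project
deploy_body = {
    "operation": "deploy_custom_function_project",
    "project": package_response["project"],
    "payload": package_response["payload"],
}
```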
### Body @@ -256,9 +256,9 @@ Takes the output of package_custom_function_project, decodes the base64-encoded _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `deploy_custom_function_project` -- project _(required)_ - the name of the project you wish to deploy. Must be a string -- payload _(required)_ - a base64-encoded string representation of the .tar file. Must be a string +- `operation` _(required)_ - must always be `deploy_custom_function_project` +- `project` _(required)_ - the name of the project you wish to deploy. Must be a string +- `payload` _(required)_ - a base64-encoded string representation of the .tar file. Must be a string ### Body diff --git a/versioned_docs/version-4.5/developers/operations-api/databases-and-tables.md b/versioned_docs/version-4.5/developers/operations-api/databases-and-tables.md index eea77222..7c17fb4d 100644 --- a/versioned_docs/version-4.5/developers/operations-api/databases-and-tables.md +++ b/versioned_docs/version-4.5/developers/operations-api/databases-and-tables.md @@ -8,7 +8,7 @@ title: Databases and Tables Returns the definitions of all databases and tables within the database. Record counts above 5000 records are estimated, as determining the exact count can be expensive. When the record count is estimated, this is indicated by the inclusion of a confidence interval of `estimated_record_range`. If you need the exact count, you can include an `"exact_count": true` in the operation, but be aware that this requires a full table scan (may be expensive). -- operation _(required)_ - must always be `describe_all` +- `operation` _(required)_ - must always be `describe_all` ### Body @@ -63,8 +63,8 @@ Returns the definitions of all tables within the specified database.
-- operation _(required)_ - must always be `describe_database` -- database _(optional)_ - database where the table you wish to describe lives. The default is `data` +- `operation` _(required)_ - must always be `describe_database` +- `database` _(optional)_ - database where the table you wish to describe lives. The default is `data` ### Body @@ -118,9 +118,9 @@ Returns the definitions of all tables within the specified database. Returns the definition of the specified table. -- operation _(required)_ - must always be `describe_table` -- table _(required)_ - table you wish to describe -- database _(optional)_ - database where the table you wish to describe lives. The default is `data` +- `operation` _(required)_ - must always be `describe_table` +- `table` _(required)_ - table you wish to describe +- `database` _(optional)_ - database where the table you wish to describe lives. The default is `data` ### Body @@ -174,8 +174,8 @@ Create a new database. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `create_database` -- database _(optional)_ - name of the database you are creating. The default is `data` +- `operation` _(required)_ - must always be `create_database` +- `database` _(optional)_ - name of the database you are creating. The default is `data` ### Body @@ -202,9 +202,9 @@ Drop an existing database. NOTE: Dropping a database will delete all tables and _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `drop_database` -- database _(required)_ - name of the database you are dropping -- replicated _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - this should always be `drop_database` +- `database` _(required)_ - name of the database you are dropping +- `replicated` _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. 
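The estimated-count behavior of the describe operations can be handled client-side. This sketch assumes a `record_count` field alongside the documented `estimated_record_range` (the exact response shape is not shown here), and builds a `describe_table` body that forces an exact count:

```python
def record_count_info(table_description):
    # Above ~5000 records the count is estimated and accompanied by an
    # `estimated_record_range` confidence interval (field name per the docs;
    # `record_count` is an assumed companion field)
    if "estimated_record_range" in table_description:
        low, high = table_description["estimated_record_range"]
        return f"~{table_description['record_count']} records (range {low}-{high})"
    return f"{table_description['record_count']} records (exact)"

# Forcing an exact count triggers a full table scan, which may be expensive
body = {
    "operation": "describe_table",
    "database": "data",
    "table": "dog",  # placeholder table name
    "exact_count": True,
}

print(record_count_info({"record_count": 12000, "estimated_record_range": [11800, 12200]}))
```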
### Body @@ -231,15 +231,15 @@ Create a new table within a database. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `create_table` -- database _(optional)_ - name of the database where you want your table to live. If the database does not exist, it will be created. If the `database` property is not provided it will default to `data`. -- table _(required)_ - name of the table you are creating -- primary*key *(required)\_ - primary key for the table -- attributes _(optional)_ - an array of attributes that specifies the schema for the table, that is the set of attributes for the table. When attributes are supplied the table will not be considered a "dynamic schema" table, and attributes will not be auto-added when records with new properties are inserted. Each attribute is specified as: - - name _(required)_ - the name of the attribute - - indexed _(optional)_ - indicates if the attribute should be indexed - - type _(optional)_ - specifies the data type of the attribute (can be String, Int, Float, Date, ID, Any) -- expiration _(optional)_ - specifies the time-to-live or expiration of records in the table before they are evicted (records are not evicted on any timer if not specified). This is specified in seconds. +- `operation` _(required)_ - must always be `create_table` +- `database` _(optional)_ - name of the database where you want your table to live. If the database does not exist, it will be created. If the `database` property is not provided it will default to `data`. +- `table` _(required)_ - name of the table you are creating +- `primary_key` _(required)_ - primary key for the table +- `attributes` _(optional)_ - an array of attributes that specifies the schema for the table, that is the set of attributes for the table. When attributes are supplied the table will not be considered a "dynamic schema" table, and attributes will not be auto-added when records with new properties are inserted. 
Each attribute is specified as: + - `name` _(required)_ - the name of the attribute + - `indexed` _(optional)_ - indicates if the attribute should be indexed + - `type` _(optional)_ - specifies the data type of the attribute (can be String, Int, Float, Date, ID, Any) +- `expiration` _(optional)_ - specifies the time-to-live or expiration of records in the table before they are evicted (records are not evicted on any timer if not specified). This is specified in seconds. ### Body @@ -268,10 +268,10 @@ Drop an existing database table. NOTE: Dropping a table will delete all associat _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `drop_table` -- database _(optional)_ - database where the table you are dropping lives. The default is `data` -- table _(required)_ - name of the table you are dropping -- replicated _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - this should always be `drop_table` +- `database` _(optional)_ - database where the table you are dropping lives. The default is `data` +- `table` _(required)_ - name of the table you are dropping +- `replicated` _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. ### Body @@ -299,10 +299,10 @@ Create a new attribute within the specified table. **The create_attribute operat _Note: Harper will automatically create new attributes on insert and update if they do not already exist within the database._ -- operation _(required)_ - must always be `create_attribute` -- database _(optional)_ - name of the database of the table you want to add your attribute. 
The default is `data` -- table _(required)_ - name of the table where you want to add your attribute to live -- attribute _(required)_ - name for the attribute +- `operation` _(required)_ - must always be `create_attribute` +- `database` _(optional)_ - name of the database of the table you want to add your attribute. The default is `data` +- `table` _(required)_ - name of the table where you want to add your attribute to live +- `attribute` _(required)_ - name for the attribute ### Body @@ -333,10 +333,10 @@ Drop an existing attribute from the specified table. NOTE: Dropping an attribute _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `drop_attribute` -- database _(optional)_ - database where the table you are dropping lives. The default is `data` -- table _(required)_ - table where the attribute you are dropping lives -- attribute _(required)_ - attribute that you intend to drop +- `operation` _(required)_ - this should always be `drop_attribute` +- `database` _(optional)_ - database where the table you are dropping lives. 
The default is `data` +- `table` _(required)_ - table where the attribute you are dropping lives +- `attribute` _(required)_ - attribute that you intend to drop ### Body @@ -367,10 +367,10 @@ It is important to note that trying to copy a database file that is in use (Harp _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `get_backup` -- database _(required)_ - this is the database that will be snapshotted and returned -- table _(optional)_ - this will specify a specific table to backup -- tables _(optional)_ - this will specify a specific set of tables to backup +- `operation` _(required)_ - this should always be `get_backup` +- `database` _(required)_ - this is the database that will be snapshotted and returned +- `table` _(optional)_ - this will specify a specific table to backup +- `tables` _(optional)_ - this will specify a specific set of tables to backup ### Body diff --git a/versioned_docs/version-4.5/developers/operations-api/jobs.md b/versioned_docs/version-4.5/developers/operations-api/jobs.md index 173125a1..cf71fa00 100644 --- a/versioned_docs/version-4.5/developers/operations-api/jobs.md +++ b/versioned_docs/version-4.5/developers/operations-api/jobs.md @@ -8,8 +8,8 @@ title: Jobs Returns job status, metrics, and messages for the specified job ID. 
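Async operations respond with a message naming the job id, which can then be polled with `get_job`. A minimal sketch, assuming the message format `"Starting job with id <uuid>"`:

```python
import re

def job_id_from_response(message):
    # Pull the job id out of an async operation's response message,
    # e.g. "Starting job with id 4a982e60-..." (format assumed)
    match = re.search(r"job with id ([0-9a-f-]+)", message)
    return match.group(1) if match else None

def get_job_body(job_id):
    # Body for polling job status, metrics, and messages
    return {"operation": "get_job", "id": job_id}

job_id = job_id_from_response("Starting job with id 4a982e60-1a75-4ab8-96ac-0ec2e3ad0bb8")
print(get_job_body(job_id))
```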
-- operation _(required)_ - must always be `get_job` -- id _(required)_ - the id of the job you wish to view +- `operation` _(required)_ - must always be `get_job` +- `id` _(required)_ - the id of the job you wish to view ### Body @@ -50,9 +50,9 @@ Returns a list of job statuses, metrics, and messages for all jobs executed with _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `search_jobs_by_start_date` -- from*date *(required)\_ - the date you wish to start the search -- to*date *(required)\_ - the date you wish to end the search +- `operation` _(required)_ - must always be `search_jobs_by_start_date` +- `from_date` _(required)_ - the date you wish to start the search +- `to_date` _(required)_ - the date you wish to end the search ### Body diff --git a/versioned_docs/version-4.5/developers/operations-api/logs.md b/versioned_docs/version-4.5/developers/operations-api/logs.md index 17eba72f..52e52740 100644 --- a/versioned_docs/version-4.5/developers/operations-api/logs.md +++ b/versioned_docs/version-4.5/developers/operations-api/logs.md @@ -10,13 +10,13 @@ Returns log outputs from the primary Harper log based on the provided search cri _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_Log` -- start _(optional)_ - result to start with. Default is 0, the first log in `hdb.log`. Must be a number -- limit _(optional)_ - number of results returned. Default behavior is 1000. Must be a number -- level _(optional)_ - error level to filter on. Default behavior is all levels. Must be `notify`, `error`, `warn`, `info`, `debug` or `trace` -- from _(optional)_ - date to begin showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is first log in `hdb.log` -- until _(optional)_ - date to end showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is last log in `hdb.log` -- order _(optional)_ - order to display logs desc or asc by timestamp. 
By default, will maintain `hdb.log` order +- `operation` _(required)_ - must always be `read_log` +- `start` _(optional)_ - result to start with. Default is 0, the first log in `hdb.log`. Must be a number +- `limit` _(optional)_ - number of results returned. Default behavior is 1000. Must be a number +- `level` _(optional)_ - error level to filter on. Default behavior is all levels. Must be `notify`, `error`, `warn`, `info`, `debug` or `trace` +- `from` _(optional)_ - date to begin showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is first log in `hdb.log` +- `until` _(optional)_ - date to end showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is last log in `hdb.log` +- `order` _(optional)_ - order to display logs desc or asc by timestamp. By default, will maintain `hdb.log` order ### Body @@ -68,12 +68,12 @@ Returns all transactions logged for the specified database table. You may filter _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_transaction_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- from _(optional)_ - time format must be millisecond-based epoch in UTC -- to _(optional)_ - time format must be millisecond-based epoch in UTC -- limit _(optional)_ - max number of logs you want to receive.
Must be a number ### Body @@ -271,10 +271,10 @@ Deletes transaction log data for the specified database table that is older than _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_transaction_log_before` -- schema _(required)_ - schema under which the transaction log resides. Must be a string -- table _(required)_ - table under which the transaction log resides. Must be a string -- timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC +- `operation` _(required)_ - must always be `delete_transaction_log_before` +- `schema` _(required)_ - schema under which the transaction log resides. Must be a string +- `table` _(required)_ - table under which the transaction log resides. Must be a string +- `timestamp` _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC ### Body @@ -303,11 +303,11 @@ AuditLog must be enabled in the Harper configuration file to make this request. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search_type _(optional)_ - possibilities are `hash_value`, `timestamp` and `username` -- search_values _(optional)_ - an array of string or numbers relating to search_type +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - possibilities are `hash_value`, `timestamp` and `username` +- `search_values` _(optional)_ - an array of string or numbers relating to search_type ### Body @@ -398,11 +398,11 @@ AuditLog must be enabled in the Harper configuration file to make this request. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search_type _(optional)_ - timestamp -- search_values _(optional)_ - an array containing a maximum of two values \[`from_timestamp`, `to_timestamp`] defining the range of transactions you would like to view. +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - timestamp +- `search_values` _(optional)_ - an array containing a maximum of two values \[`from_timestamp`, `to_timestamp`] defining the range of transactions you would like to view. - Timestamp format is millisecond-based epoch in UTC - If no items are supplied then all transactions are returned - If only one entry is supplied then all transactions after the supplied timestamp will be returned @@ -519,11 +519,11 @@ AuditLog must be enabled in the Harper configuration file to make this request. 
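As a sketch, a `read_audit_log` request filtered by a timestamp range per the parameters above could be shaped like this (schema, table, and epoch values are hypothetical placeholders):

```json
{
  "operation": "read_audit_log",
  "schema": "dev",
  "table": "dog",
  "search_type": "timestamp",
  "search_values": [1660585740558, 1660585759710]
}
```

Supplying only the first value returns all transactions after that timestamp; omitting `search_values` entirely returns all transactions.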
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search_type _(optional)_ - username -- search_values _(optional)_ - the Harper user for whom you would like to view transactions +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - username +- `search_values` _(optional)_ - the Harper user for whom you would like to view transactions ### Body @@ -639,11 +639,11 @@ AuditLog must be enabled in the Harper configuration file to make this request. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search_type _(optional)_ - hash_value -- search_values _(optional)_ - an array of hash_attributes for which you wish to see transaction logs +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - hash_value +- `search_values` _(optional)_ - an array of hash_attributes for which you wish to see transaction logs ### Body @@ -707,10 +707,10 @@ AuditLog must be enabled in the Harper configuration file to make this request. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_audit_logs_before` -- schema _(required)_ - schema under which the transaction log resides. Must be a string -- table _(required)_ - table under which the transaction log resides. 
Must be a string -- timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC +- `operation` _(required)_ - must always be `delete_audit_logs_before` +- `schema` _(required)_ - schema under which the transaction log resides. Must be a string +- `table` _(required)_ - table under which the transaction log resides. Must be a string +- `timestamp` _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC ### Body diff --git a/versioned_docs/version-4.5/developers/operations-api/registration.md b/versioned_docs/version-4.5/developers/operations-api/registration.md index 56775c5d..28c6a0e9 100644 --- a/versioned_docs/version-4.5/developers/operations-api/registration.md +++ b/versioned_docs/version-4.5/developers/operations-api/registration.md @@ -8,7 +8,7 @@ title: Registration Returns the registration data of the Harper instance. -- operation _(required)_ - must always be `registration_info` +- `operation` _(required)_ - must always be `registration_info` ### Body @@ -37,7 +37,7 @@ Returns the Harper fingerprint, uniquely generated based on the machine, for lic _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_fingerprint` +- `operation` _(required)_ - must always be `get_fingerprint` ### Body @@ -55,9 +55,9 @@ Sets the Harper license as generated by Harper License Management software. 
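A `delete_audit_logs_before` request body following the parameters above might look like this sketch (schema, table, and timestamp are hypothetical placeholders; audit log records older than the supplied epoch will be deleted):

```json
{
  "operation": "delete_audit_logs_before",
  "schema": "dev",
  "table": "dog",
  "timestamp": 1660585759710
}
```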
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_license` -- key _(required)_ - your license key -- company _(required)_ - the company that was used in the license +- `operation` _(required)_ - must always be `set_license` +- `key` _(required)_ - your license key +- `company` _(required)_ - the company that was used in the license ### Body diff --git a/versioned_docs/version-4.5/developers/operations-api/sql-operations.md b/versioned_docs/version-4.5/developers/operations-api/sql-operations.md index 71dfa436..4b7076bb 100644 --- a/versioned_docs/version-4.5/developers/operations-api/sql-operations.md +++ b/versioned_docs/version-4.5/developers/operations-api/sql-operations.md @@ -12,8 +12,8 @@ Harper encourages developers to utilize other querying tools over SQL for perfor Executes the provided SQL statement. The SELECT statement is used to query data from the database. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -48,8 +48,8 @@ Executes the provided SQL statement. The SELECT statement is used to query data Executes the provided SQL statement. The INSERT statement is used to add one or more rows to a database table. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -76,8 +76,8 @@ Executes the provided SQL statement. The INSERT statement is used to add one or Executes the provided SQL statement. The UPDATE statement is used to change the values of specified attributes in one or more rows in a database table. 
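Per the `sql` operation parameters above, a SELECT request body might be sketched as follows (the `dev.dog` table and `id` attribute are hypothetical):

```json
{
  "operation": "sql",
  "sql": "SELECT * FROM dev.dog WHERE id = 1"
}
```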
-- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -104,8 +104,8 @@ Executes the provided SQL statement. The UPDATE statement is used to change the Executes the provided SQL statement. The DELETE statement is used to remove one or more rows of data from a database table. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body diff --git a/versioned_docs/version-4.5/developers/operations-api/token-authentication.md b/versioned_docs/version-4.5/developers/operations-api/token-authentication.md index b9ff5b31..178db842 100644 --- a/versioned_docs/version-4.5/developers/operations-api/token-authentication.md +++ b/versioned_docs/version-4.5/developers/operations-api/token-authentication.md @@ -10,9 +10,9 @@ Creates the tokens needed for authentication: operation & refresh token. _Note - this operation does not require authorization to be set_ -- operation _(required)_ - must always be `create_authentication_tokens` -- username _(required)_ - username of user to generate tokens for -- password _(required)_ - password of user to generate tokens for +- `operation` _(required)_ - must always be `create_authentication_tokens` +- `username` _(required)_ - username of user to generate tokens for +- `password` _(required)_ - password of user to generate tokens for ### Body @@ -39,8 +39,8 @@ _Note - this operation does not require authorization to be set_ This operation creates a new operation token. 
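A request body for the `create_authentication_tokens` operation described above might look like this sketch (username and password are placeholders; as noted, this call does not require an Authorization header):

```json
{
  "operation": "create_authentication_tokens",
  "username": "HDB_ADMIN",
  "password": "password"
}
```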
-- operation _(required)_ - must always be `refresh_operation_token` -- refresh*token *(required)\_ - the refresh token that was provided when tokens were created +- `operation` _(required)_ - must always be `refresh_operation_token` +- `refresh_token` _(required)_ - the refresh token that was provided when tokens were created ### Body diff --git a/versioned_docs/version-4.5/developers/operations-api/users-and-roles.md b/versioned_docs/version-4.5/developers/operations-api/users-and-roles.md index ecaa1117..91f222b9 100644 --- a/versioned_docs/version-4.5/developers/operations-api/users-and-roles.md +++ b/versioned_docs/version-4.5/developers/operations-api/users-and-roles.md @@ -10,7 +10,7 @@ Returns a list of all roles. [Learn more about Harper roles here.](../security/u _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_roles` +- `operation` _(required)_ - must always be `list_roles` ### Body @@ -80,11 +80,11 @@ Creates a new role with the specified permissions. [Learn more about Harper role _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_role` -- role _(required)_ - name of role you are defining -- permission _(required)_ - object defining permissions for users associated with this role: - - super*user *(optional)\_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. - - structure_user (optional) - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. 
+- `operation` _(required)_ - must always be `add_role` +- `role` _(required)_ - name of role you are defining +- `permission` _(required)_ - object defining permissions for users associated with this role: + - `super_user` _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. + - `structure_user` _(optional)_ - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. ### Body @@ -158,12 +158,12 @@ Modifies an existing role with the specified permissions. updates permissions fr _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `alter_role` -- id _(required)_ - the id value for the role you are altering -- role _(optional)_ - name value to update on the role you are altering -- permission _(required)_ - object defining permissions for users associated with this role: - - super*user *(optional)\_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. - - structure_user (optional) - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. 
+- `operation` _(required)_ - must always be `alter_role` +- `id` _(required)_ - the id value for the role you are altering +- `role` _(optional)_ - name value to update on the role you are altering +- `permission` _(required)_ - object defining permissions for users associated with this role: + - `super_user` _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. + - `structure_user` _(optional)_ - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. ### Body @@ -237,8 +237,8 @@ Deletes an existing role from the database. NOTE: Role with associated users can _Operation is restricted to super_user roles only_ -- operation _(required)_ - this must always be `drop_role` -- id _(required)_ - this is the id of the role you are dropping +- `operation` _(required)_ - this must always be `drop_role` +- `id` _(required)_ - this is the id of the role you are dropping ### Body @@ -265,7 +265,7 @@ Returns a list of all users. [Learn more about Harper roles here.](../security/u _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_users` +- `operation` _(required)_ - must always be `list_users` ### Body @@ -377,7 +377,7 @@ _Operation is restricted to super_user roles only_ Returns user data for the associated user credentials. -- operation _(required)_ - must always be `user_info` +- `operation` _(required)_ - must always be `user_info` ### Body @@ -415,11 +415,11 @@ Creates a new user with the specified role and credentials. 
[Learn more about Ha _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_user` -- role _(required)_ - 'role' name value of the role you wish to assign to the user. See `add_role` for more detail -- username _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash -- password _(required)_ - clear text for password. Harper will encrypt the password upon receipt -- active _(required)_ - boolean value for status of user's access to your Harper instance. If set to false, user will not be able to access your instance of Harper. +- `operation` _(required)_ - must always be `add_user` +- `role` _(required)_ - 'role' name value of the role you wish to assign to the user. See `add_role` for more detail +- `username` _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash +- `password` _(required)_ - clear text for password. Harper will encrypt the password upon receipt +- `active` _(required)_ - boolean value for status of user's access to your Harper instance. If set to false, user will not be able to access your instance of Harper. ### Body @@ -449,11 +449,11 @@ Modifies an existing user's role and/or credentials. [Learn more about Harper ro _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `alter_user` -- username _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash. -- password _(optional)_ - clear text for password. Harper will encrypt the password upon receipt -- role _(optional)_ - `role` name value of the role you wish to assign to the user. See `add_role` for more detail -- active _(optional)_ - status of user's access to your Harper instance. See `add_role` for more detail +- `operation` _(required)_ - must always be `alter_user` +- `username` _(required)_ - username assigned to the user. 
It can not be altered after adding the user. It serves as the hash. +- `password` _(optional)_ - clear text for password. Harper will encrypt the password upon receipt +- `role` _(optional)_ - `role` name value of the role you wish to assign to the user. See `add_role` for more detail +- `active` _(optional)_ - status of user's access to your Harper instance. See `add_role` for more detail ### Body @@ -487,8 +487,8 @@ Deletes an existing user by username. [Learn more about Harper roles here.](../s _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_user` -- username _(required)_ - username assigned to the user +- `operation` _(required)_ - must always be `drop_user` +- `username` _(required)_ - username assigned to the user ### Body diff --git a/versioned_docs/version-4.5/developers/operations-api/utilities.md b/versioned_docs/version-4.5/developers/operations-api/utilities.md index 4259f507..32541b76 100644 --- a/versioned_docs/version-4.5/developers/operations-api/utilities.md +++ b/versioned_docs/version-4.5/developers/operations-api/utilities.md @@ -10,7 +10,7 @@ Restarts the Harper instance. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `restart` +- `operation` _(required)_ - must always be `restart` ### Body @@ -36,9 +36,9 @@ Restarts servers for the specified Harper service. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `restart_service` -- service _(required)_ - must be one of: `http_workers`, `clustering_config` or `clustering` -- replicated _(optional)_ - must be a boolean. If set to `true`, Harper will replicate the restart service operation across all nodes in the cluster. The restart will occur as a rolling restart, ensuring that each node is fully restarted before the next node begins restarting. 
+- `operation` _(required)_ - must always be `restart_service` +- `service` _(required)_ - must be one of: `http_workers`, `clustering_config` or `clustering` +- `replicated` _(optional)_ - must be a boolean. If set to `true`, Harper will replicate the restart service operation across all nodes in the cluster. The restart will occur as a rolling restart, ensuring that each node is fully restarted before the next node begins restarting. ### Body @@ -65,8 +65,8 @@ Returns detailed metrics on the host system. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `system_information` -- attributes _(optional)_ - string array of top level attributes desired in the response, if no value is supplied all attributes will be returned. Available attributes are: ['system', 'time', 'cpu', 'memory', 'disk', 'network', 'harperdb_processes', 'table_size', 'metrics', 'threads', 'replication'] +- `operation` _(required)_ - must always be `system_information` +- `attributes` _(optional)_ - string array of top level attributes desired in the response, if no value is supplied all attributes will be returned. Available attributes are: ['system', 'time', 'cpu', 'memory', 'disk', 'network', 'harperdb_processes', 'table_size', 'metrics', 'threads', 'replication'] ### Body @@ -84,10 +84,10 @@ Delete data before the specified timestamp on the specified database table exclu _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_records_before` -- date _(required)_ - records older than this date will be deleted. Supported format looks like: `YYYY-MM-DDThh:mm:ss.sZ` -- schema _(required)_ - name of the schema where you are deleting your data -- table _(required)_ - name of the table where you are deleting your data +- `operation` _(required)_ - must always be `delete_records_before` +- `date` _(required)_ - records older than this date will be deleted. 
Supported format looks like: `YYYY-MM-DDThh:mm:ss.sZ` +- `schema` _(required)_ - name of the schema where you are deleting your data +- `table` _(required)_ - name of the table where you are deleting your data ### Body @@ -115,11 +115,11 @@ _Operation is restricted to super_user roles only_ Exports data based on a given search operation to a local file in JSON or CSV format. -- operation _(required)_ - must always be `export_local` -- format _(required)_ - the format you wish to export the data, options are `json` & `csv` -- path _(required)_ - path local to the server to export the data -- search*operation *(required)\_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` -- filename _(optional)_ - the name of the file where your export will be written to (do not include extension in filename). If one is not provided it will be autogenerated based on the epoch. +- `operation` _(required)_ - must always be `export_local` +- `format` _(required)_ - the format you wish to export the data, options are `json` & `csv` +- `path` _(required)_ - path local to the server to export the data +- `search_operation` _(required)_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` +- `filename` _(optional)_ - the name of the file where your export will be written to (do not include extension in filename). If one is not provided it will be autogenerated based on the epoch. ### Body @@ -149,10 +149,10 @@ Exports data based on a given search operation to a local file in JSON or CSV fo Exports data based on a given search operation from table to AWS S3 in JSON or CSV format. 
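Following the `export_local` parameters above, a request body could be sketched as below (the path, filename, and nested `search_operation` target are hypothetical placeholders):

```json
{
  "operation": "export_local",
  "format": "json",
  "path": "/data/exports",
  "filename": "dog_export",
  "search_operation": {
    "operation": "sql",
    "sql": "SELECT * FROM dev.dog"
  }
}
```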
-- operation _(required)_ - must always be `export_to_s3`
-- format _(required)_ - the format you wish to export the data, options are `json` & `csv`
-- s3 _(required)_ - details your access keys, bucket, bucket region and key for saving the data to S3
-- search*operation *(required)\_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql`
+- `operation` _(required)_ - must always be `export_to_s3`
+- `format` _(required)_ - the format you wish to export the data, options are `json` & `csv`
+- `s3` _(required)_ - details your access keys, bucket, bucket region and key for saving the data to S3
+- `search_operation` _(required)_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql`

### Body

@@ -192,9 +192,9 @@ Executes npm install against specified custom function projects.

_Operation is restricted to super_user roles only_

-- operation _(required)_ - must always be `install_node_modules`
-- projects _(required)_ - must ba an array of custom functions projects.
-- dry*run *(optional)\_ - refers to the npm --dry-run flag: [https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run](https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run). Defaults to false.
+- `operation` _(required)_ - must always be `install_node_modules`
+- `projects` _(required)_ - must be an array of custom functions projects.
+- `dry_run` _(optional)_ - refers to the npm --dry-run flag: [https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run](https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run). Defaults to false.

### Body

@@ -214,9 +214,9 @@ Modifies the Harper configuration file parameters.
Must follow with a restart or _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_configuration` -- logging*level *(example/optional)\_ - one or more configuration keywords to be updated in the Harper configuration file -- clustering*enabled *(example/optional)\_ - one or more configuration keywords to be updated in the Harper configuration file +- `operation` _(required)_ - must always be `set_configuration` +- `logging_level` _(optional)_ - one or more configuration keywords to be updated in the Harper configuration file +- `clustering_enabled` _(optional)_ - one or more configuration keywords to be updated in the Harper configuration file ### Body @@ -244,7 +244,7 @@ Returns the Harper configuration parameters. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_configuration` +- `operation` _(required)_ - must always be `get_configuration` ### Body @@ -348,12 +348,12 @@ If a `private_key` is not passed the operation will search for one that matches _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_certificate` -- name _(required)_ - a unique name for the certificate -- certificate _(required)_ - a PEM formatted certificate string -- is*authority *(required)\_ - a boolean indicating if the certificate is a certificate authority -- hosts _(optional)_ - an array of hostnames that the certificate is valid for -- private*key *(optional)\_ - a PEM formatted private key string +- `operation` _(required)_ - must always be `add_certificate` +- `name` _(required)_ - a unique name for the certificate +- `certificate` _(required)_ - a PEM formatted certificate string +- `is_authority` _(required)_ - a boolean indicating if the certificate is a certificate authority +- `hosts` _(optional)_ - an array of hostnames that the certificate is valid for +- `private_key` _(optional)_ - a PEM formatted private key string ### Body @@ -383,8 
+383,8 @@ Removes a certificate from the `hdb_certificate` system table and deletes the co _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `remove_certificate` -- name _(required)_ - the name of the certificate +- `operation` _(required)_ - must always be `remove_certificate` +- `name` _(required)_ - the name of the certificate ### Body @@ -411,7 +411,7 @@ Lists all certificates in the `hdb_certificate` system table. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_certificates` +- `operation` _(required)_ - must always be `list_certificates` ### Body diff --git a/versioned_docs/version-4.5/developers/replication/sharding.md b/versioned_docs/version-4.5/developers/replication/sharding.md index 74242292..a650f52b 100644 --- a/versioned_docs/version-4.5/developers/replication/sharding.md +++ b/versioned_docs/version-4.5/developers/replication/sharding.md @@ -45,7 +45,7 @@ X-Replicate-To: node1,node2 Likewise, you can specify replicateTo and confirm parameters in the operation object when using the Harper API. For example, to specify that data should be replicated to two other nodes, and the response should be returned once confirmation is received from one other node, you can use the following operation object: -```json +```jsonc { "operation": "update", "schema": "dev", @@ -61,10 +61,12 @@ Likewise, you can specify replicateTo and confirm parameters in the operation ob or you can specify nodes: -```json -..., +```jsonc +{ + // ... "replicateTo": ["node-1", "node-2"] -... + // ... +} ``` ## Programmatic Replication Control @@ -148,7 +150,7 @@ MyTable.setResidencyById((id) => { Normally sharding allows data to be stored in specific nodes, but still allows access to the data from any node. However, you can also disable cross-node access so that data is only returned if is stored on the node where it is accessed. 
To do this, you can set the `replicateFrom` property on the context of operation to `false`: -```json +```jsonc { "operation": "search_by_id", "table": "MyTable", diff --git a/versioned_docs/version-4.5/developers/security/basic-auth.md b/versioned_docs/version-4.5/developers/security/basic-auth.md index a4510b13..9bc0160c 100644 --- a/versioned_docs/version-4.5/developers/security/basic-auth.md +++ b/versioned_docs/version-4.5/developers/security/basic-auth.md @@ -8,7 +8,7 @@ Harper uses Basic Auth and JSON Web Tokens (JWTs) to secure our HTTP requests. I ** \_**You do not need to log in separately. Basic Auth is added to each HTTP request like create_database, create_table, insert etc… via headers.**\_ ** -A header is added to each HTTP request. The header key is **“Authorization”** the header value is **“Basic <<your username and password buffer token>>”** +A header is added to each HTTP request. The header key is **"Authorization"** the header value is **"Basic <<your username and password buffer token>>"** ## Authentication in Harper Studio diff --git a/versioned_docs/version-4.5/developers/security/users-and-roles.md b/versioned_docs/version-4.5/developers/security/users-and-roles.md index 76ed6901..a9388c62 100644 --- a/versioned_docs/version-4.5/developers/security/users-and-roles.md +++ b/versioned_docs/version-4.5/developers/security/users-and-roles.md @@ -47,7 +47,7 @@ When creating a new, user-defined role in a Harper instance, you must provide a Example JSON for `add_role` request -```json +```jsonc { "operation": "add_role", "role": "software_developer", @@ -98,7 +98,7 @@ There are two parts to a permissions set: Each table that a role should be given some level of CRUD permissions to must be included in the `tables` array for its database in the roles permissions JSON passed to the API (_see example above_). 
-```json +```jsonc { "table_name": { // the name of the table to define CRUD perms for "read": boolean, // access to read from this table diff --git a/versioned_docs/version-4.5/developers/sql-guide/date-functions.md b/versioned_docs/version-4.5/developers/sql-guide/date-functions.md index d44917c3..c9747dcd 100644 --- a/versioned_docs/version-4.5/developers/sql-guide/date-functions.md +++ b/versioned_docs/version-4.5/developers/sql-guide/date-functions.md @@ -156,17 +156,17 @@ Subtracts the defined amount of time from the date provided in UTC and returns t ### EXTRACT(date, date_part) -Extracts and returns the date_part requested as a String value. Accepted date_part values below show value returned for date = “2020-03-26T15:13:02.041+000” +Extracts and returns the date_part requested as a String value. Accepted date_part values below show value returned for date = "2020-03-26T15:13:02.041+000" | date_part | Example return value\* | | ----------- | ---------------------- | -| year | “2020” | -| month | “3” | -| day | “26” | -| hour | “15” | -| minute | “13” | -| second | “2” | -| millisecond | “41” | +| year | "2020" | +| month | "3" | +| day | "26" | +| hour | "15" | +| minute | "13" | +| second | "2" | +| millisecond | "41" | ``` "SELECT EXTRACT(1587568845765, 'year') AS extract_result" returns diff --git a/versioned_docs/version-4.5/developers/sql-guide/functions.md b/versioned_docs/version-4.5/developers/sql-guide/functions.md index 0847a657..a1170991 100644 --- a/versioned_docs/version-4.5/developers/sql-guide/functions.md +++ b/versioned_docs/version-4.5/developers/sql-guide/functions.md @@ -16,99 +16,85 @@ This SQL keywords reference contains the SQL functions available in Harper. 
| Keyword | Syntax | Description |
| ---------------- | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| AVG | AVG(_expression_) | Returns the average of a given numeric expression. |
-| COUNT | SELECT COUNT(_column_name_) FROM _database.table_ WHERE _condition_ | Returns the number records that match the given criteria. Nulls are not counted. |
-| GROUP_CONCAT | GROUP*CONCAT(\_expression*) | Returns a string with concatenated values that are comma separated and that are non-null from a group. Will return null when there are non-null values. |
-| MAX | SELECT MAX(_column_name_) FROM _database.table_ WHERE _condition_ | Returns largest value in a specified column. |
-| MIN | SELECT MIN(_column_name_) FROM _database.table_ WHERE _condition_ | Returns smallest value in a specified column. |
-| SUM | SUM(_column_name_) | Returns the sum of the numeric values provided. |
-| ARRAY\* | ARRAY(_expression_) | Returns a list of data as a field. |
-| DISTINCT_ARRAY\* | DISTINCT*ARRAY(\_expression*) | When placed around a standard ARRAY() function, returns a distinct (deduplicated) results set. |
+| `AVG` | `AVG(expression)` | Returns the average of a given numeric expression. |
+| `COUNT` | `SELECT COUNT(column_name) FROM database.table WHERE condition` | Returns the number of records that match the given criteria. Nulls are not counted. |
+| `GROUP_CONCAT` | `GROUP_CONCAT(expression)` | Returns a string with concatenated values that are comma separated and that are non-null from a group. Will return null when there are no non-null values. |
+| `MAX` | `SELECT MAX(column_name) FROM database.table WHERE condition` | Returns largest value in a specified column. |
+| `MIN` | `SELECT MIN(column_name) FROM database.table WHERE condition` | Returns smallest value in a specified column.
| +| `SUM` | `SUM(column_name)` | Returns the sum of the numeric values provided. | +| `ARRAY`* | `ARRAY(expression)` | Returns a list of data as a field. | +| `DISTINCT_ARRAY`* | `DISTINCT_ARRAY(expression)` | When placed around a standard `ARRAY()` function, returns a distinct (deduplicated) results set. | -\*For more information on ARRAY() and DISTINCT_ARRAY() see [this blog](https://www.harperdb.io/post/sql-queries-to-complex-objects). +*For more information on `ARRAY()` and `DISTINCT_ARRAY()` see [this blog](https://www.harperdb.io/post/sql-queries-to-complex-objects). ### Conversion | Keyword | Syntax | Description | | ------- | ----------------------------------------------- | ---------------------------------------------------------------------- | -| CAST | CAST(_expression AS datatype(length)_) | Converts a value to a specified datatype. | -| CONVERT | CONVERT(_data_type(length), expression, style_) | Converts a value from one datatype to a different, specified datatype. | +| `CAST` | `CAST(expression AS datatype(length))` | Converts a value to a specified datatype. | +| `CONVERT` | `CONVERT(data_type(length), expression, style)` | Converts a value from one datatype to a different, specified datatype. | ### Date & Time -| Keyword | Syntax | Description | -| ----------------- | ----------------- | --------------------------------------------------------------------------------------------------------------------- | -| CURRENT_DATE | CURRENT_DATE() | Returns the current date in UTC in “YYYY-MM-DD” String format. | -| CURRENT_TIME | CURRENT_TIME() | Returns the current time in UTC in “HH:mm:ss.SSS” string format. | -| CURRENT_TIMESTAMP | CURRENT_TIMESTAMP | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. For more information, go here. | - -| -| DATE | DATE([_date_string_]) | Formats and returns the date*string argument in UTC in ‘YYYY-MM-DDTHH:mm:ss.SSSZZ’ string format. 
If a date_string is not provided, the function will return the current UTC date/time value in the return format defined above. For more information, go here. | -| -| DATE_ADD | DATE_ADD(\_date, value, interval*) | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | -| -| DATE*DIFF | DATEDIFF(\_date_1, date_2[, interval]*) | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds. For more information, go here. | -| -| DATE*FORMAT | DATE_FORMAT(\_date, format*) | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, go here. | -| -| DATE*SUB | DATE_SUB(\_date, format*) | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted date*sub interval values- Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | -| -| DAY | DAY(\_date*) | Return the day of the month for the given date. | -| -| DAYOFWEEK | DAYOFWEEK(_date_) | Returns the numeric value of the weekday of the date given(“YYYY-MM-DD”).NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. | -| EXTRACT | EXTRACT(_date, date_part_) | Extracts and returns the date*part requested as a String value. Accepted date_part values below show value returned for date = “2020-03-26T15:13:02.041+000” For more information, go here. | -| -| GETDATE | GETDATE() | Returns the current Unix Timestamp in milliseconds. 
| -| GET_SERVER_TIME | GET_SERVER_TIME() | Returns the current date/time value based on the server’s timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. | -| OFFSET_UTC | OFFSET_UTC(\_date, offset*) | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. | -| NOW | NOW() | Returns the current Unix Timestamp in milliseconds. | -| -| HOUR | HOUR(_datetime_) | Returns the hour part of a given date in range of 0 to 838. | -| -| MINUTE | MINUTE(_datetime_) | Returns the minute part of a time/datetime in range of 0 to 59. | -| -| MONTH | MONTH(_date_) | Returns month part for a specified date in range of 1 to 12. | -| -| SECOND | SECOND(_datetime_) | Returns the seconds part of a time/datetime in range of 0 to 59. | -| YEAR | YEAR(_date_) | Returns the year part for a specified date. | -| +| Keyword | Syntax | Description | +| ----------------- | -------------------------- | --------------------------------------------------------------------------------------------------------------------- | +| `CURRENT_DATE` | `CURRENT_DATE()` | Returns the current date in UTC in "YYYY-MM-DD" String format. | +| `CURRENT_TIME` | `CURRENT_TIME()` | Returns the current time in UTC in "HH:mm:ss.SSS" string format. | +| `CURRENT_TIMESTAMP` | `CURRENT_TIMESTAMP` | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. For more information, go here. | +| `DATE` | `DATE([date_string])` | Formats and returns the date string argument in UTC in 'YYYY-MM-DDTHH:mm:ss.SSSZZ' string format. If a date string is not provided, the function will return the current UTC date/time value in the return format defined above. For more information, go here. 
|
+| `DATE_ADD` | `DATE_ADD(date, value, interval)` | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: either string value (key or shorthand) can be passed as the interval argument. For more information, go here. |
+| `DATE_DIFF` | `DATE_DIFF(date_1, date_2[, interval])` | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds. For more information, go here. |
+| `DATE_FORMAT` | `DATE_FORMAT(date, format)` | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, go here. |
+| `DATE_SUB` | `DATE_SUB(date, format)` | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: either string value (key or shorthand) can be passed as the interval argument. For more information, go here. |
+| `DAY` | `DAY(date)` | Returns the day of the month for the given date. |
+| `DAYOFWEEK` | `DAYOFWEEK(date)` | Returns the numeric value of the weekday of the given date ("YYYY-MM-DD"). NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. |
+| `EXTRACT` | `EXTRACT(date, date_part)` | Extracts and returns the date part requested as a String value. Accepted date_part values below show the value returned for date = "2020-03-26T15:13:02.041+000". For more information, go here. |
+| `GETDATE` | `GETDATE()` | Returns the current Unix Timestamp in milliseconds. |
+| `GET_SERVER_TIME` | `GET_SERVER_TIME()` | Returns the current date/time value based on the server's timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. 
| +| `OFFSET_UTC` | `OFFSET_UTC(date, offset)` | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. | +| `NOW` | `NOW()` | Returns the current Unix Timestamp in milliseconds. | +| `HOUR` | `HOUR(datetime)` | Returns the hour part of a given date in range of 0 to 838. | +| `MINUTE` | `MINUTE(datetime)` | Returns the minute part of a time/datetime in range of 0 to 59. | +| `MONTH` | `MONTH(date)` | Returns month part for a specified date in range of 1 to 12. | +| `SECOND` | `SECOND(datetime)` | Returns the seconds part of a time/datetime in range of 0 to 59. | +| `YEAR` | `YEAR(date)` | Returns the year part for a specified date. | ### Logical | Keyword | Syntax | Description | | ------- | ----------------------------------------------- | ------------------------------------------------------------------------------------------ | -| IF | IF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. | -| IIF | IIF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. | -| IFNULL | IFNULL(_expression, alt_value_) | Returns a specified value if the expression is null. | -| NULLIF | NULLIF(_expression_1, expression_2_) | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. | +| `IF` | `IF(condition, value_if_true, value_if_false)` | Returns a value if the condition is true, or another value if the condition is false. | +| `IIF` | `IIF(condition, value_if_true, value_if_false)` | Returns a value if the condition is true, or another value if the condition is false. 
| +| `IFNULL` | `IFNULL(expression, alt_value)` | Returns a specified value if the expression is null. | +| `NULLIF` | `NULLIF(expression_1, expression_2)` | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. | ### Mathematical | Keyword | Syntax | Description | | ------- | ------------------------------ | --------------------------------------------------------------------------------------------------- | -| ABS | ABS(_expression_) | Returns the absolute value of a given numeric expression. | -| CEIL | CEIL(_number_) | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. | -| EXP | EXP(_number_) | Returns e to the power of a specified number. | -| FLOOR | FLOOR(_number_) | Returns the largest integer value that is smaller than, or equal to, a given number. | -| RANDOM | RANDOM(_seed_) | Returns a pseudo random number. | -| ROUND | ROUND(_number,decimal_places_) | Rounds a given number to a specified number of decimal places. | -| SQRT | SQRT(_expression_) | Returns the square root of an expression. | +| `ABS` | `ABS(expression)` | Returns the absolute value of a given numeric expression. | +| `CEIL` | `CEIL(number)` | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. | +| `EXP` | `EXP(number)` | Returns e to the power of a specified number. | +| `FLOOR` | `FLOOR(number)` | Returns the largest integer value that is smaller than, or equal to, a given number. | +| `RANDOM` | `RANDOM(seed)` | Returns a pseudo random number. | +| `ROUND` | `ROUND(number, decimal_places)` | Rounds a given number to a specified number of decimal places. | +| `SQRT` | `SQRT(expression)` | Returns the square root of an expression. 
| ### String | Keyword | Syntax | Description | | ----------- | ------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| CONCAT | CONCAT(_string_1, string_2, ...., string_n_) | Concatenates, or joins, two or more strings together, resulting in a single string. | -| CONCAT_WS | CONCAT*WS(\_separator, string_1, string_2, ...., string_n*) | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. | -| INSTR | INSTR(_string_1, string_2_) | Returns the first position, as an integer, of string_2 within string_1. | -| LEN | LEN(_string_) | Returns the length of a string. | -| LOWER | LOWER(_string_) | Converts a string to lower-case. | -| REGEXP | SELECT _column_name_ FROM _database.table_ WHERE _column_name_ REGEXP _pattern_ | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | -| REGEXP_LIKE | SELECT _column_name_ FROM _database.table_ WHERE REGEXP*LIKE(\_column_name, pattern*) | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | -| REPLACE | REPLACE(_string, old_string, new_string_) | Replaces all instances of old_string within new_string, with string. | -| SUBSTRING | SUBSTRING(_string, string_position, length_of_substring_) | Extracts a specified amount of characters from a string. | -| TRIM | TRIM([_character(s) FROM_] _string_) | Removes leading and trailing spaces, or specified character(s), from a string. | -| UPPER | UPPER(_string_) | Converts a string to upper-case. 
|
+| `CONCAT` | `CONCAT(string_1, string_2, ...., string_n)` | Concatenates, or joins, two or more strings together, resulting in a single string. |
+| `CONCAT_WS` | `CONCAT_WS(separator, string_1, string_2, ...., string_n)` | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. |
+| `INSTR` | `INSTR(string_1, string_2)` | Returns the first position, as an integer, of string_2 within string_1. |
+| `LEN` | `LEN(string)` | Returns the length of a string. |
+| `LOWER` | `LOWER(string)` | Converts a string to lower-case. |
+| `REGEXP` | `SELECT column_name FROM database.table WHERE column_name REGEXP pattern` | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. |
+| `REGEXP_LIKE` | `SELECT column_name FROM database.table WHERE REGEXP_LIKE(column_name, pattern)` | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. |
+| `REPLACE` | `REPLACE(string, old_string, new_string)` | Replaces all instances of old_string within string with new_string. |
+| `SUBSTRING` | `SUBSTRING(string, string_position, length_of_substring)` | Extracts a specified number of characters from a string. |
+| `TRIM` | `TRIM([character(s) FROM] string)` | Removes leading and trailing spaces, or specified character(s), from a string. |
+| `UPPER` | `UPPER(string)` | Converts a string to upper-case. |

## Operators

@@ -116,9 +102,9 @@ This SQL keywords reference contains the SQL functions available in Harper. 
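Several of the string and logical functions above have direct analogues in other SQL engines, so they can be tried locally before running them against Harper. A rough sketch using SQLite (not Harper — note SQLite spells `LEN` as `length` and `SUBSTRING` as `substr`):

```python
import sqlite3

# Illustrative only: SQLite analogues of a few functions documented above.
con = sqlite3.connect(":memory:")
row = con.execute(
    """
    SELECT
      upper('harper'),             -- UPPER: convert to upper-case
      replace('a-b-c', '-', '.'),  -- REPLACE: swap old_string for new_string
      instr('harper', 'arp'),      -- INSTR: 1-based position of the match
      ifnull(NULL, 'fallback'),    -- IFNULL: alt value when expression is null
      nullif('same', 'same')       -- NULLIF: null when both arguments are equal
    """
).fetchone()
print(row)  # ('HARPER', 'a.b.c', 2, 'fallback', None)
```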
| Keyword | Syntax | Description |
| ------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------- |
-| BETWEEN | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ BETWEEN _value_1_ AND _value_2_ | (inclusive) Returns values(numbers, text, or dates) within a given range. |
-| IN | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IN(_value(s)_) | Used to specify multiple values in a WHERE clause. |
-| LIKE | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_n_ LIKE _pattern_ | Searches for a specified pattern within a WHERE clause. |
+| `BETWEEN` | `SELECT column_name(s) FROM database.table WHERE column_name BETWEEN value_1 AND value_2` | Returns values (numbers, text, or dates) within a given range, inclusive. |
+| `IN` | `SELECT column_name(s) FROM database.table WHERE column_name IN(value(s))` | Used to specify multiple values in a WHERE clause. |
+| `LIKE` | `SELECT column_name(s) FROM database.table WHERE column_name LIKE pattern` | Searches for a specified pattern within a WHERE clause. |

## Queries

@@ -126,34 +112,34 @@ This SQL keywords reference contains the SQL functions available in Harper.

| Keyword | Syntax | Description |
| -------- | -------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- |
-| DISTINCT | SELECT DISTINCT _column_name(s)_ FROM _database.table_ | Returns only unique values, eliminating duplicate records. |
-| FROM | FROM _database.table_ | Used to list the database(s), table(s), and any joins required for a SQL statement. |
-| GROUP BY | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ GROUP BY _column_name(s)_ ORDER BY _column_name(s)_ | Groups rows that have the same values into summary rows. 
| -| HAVING | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ GROUP BY _column_name(s)_ HAVING _condition_ ORDER BY _column_name(s)_ | Filters data based on a group or aggregate function. | -| SELECT | SELECT _column_name(s)_ FROM _database.table_ | Selects data from table. | -| WHERE | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ | Extracts records based on a defined condition. | +| `DISTINCT` | `SELECT DISTINCT column_name(s) FROM database.table` | Returns only unique values, eliminating duplicate records. | +| `FROM` | `FROM database.table` | Used to list the database(s), table(s), and any joins required for a SQL statement. | +| `GROUP BY` | `SELECT column_name(s) FROM database.table WHERE condition GROUP BY column_name(s) ORDER BY column_name(s)` | Groups rows that have the same values into summary rows. | +| `HAVING` | `SELECT column_name(s) FROM database.table WHERE condition GROUP BY column_name(s) HAVING condition ORDER BY column_name(s)` | Filters data based on a group or aggregate function. | +| `SELECT` | `SELECT column_name(s) FROM database.table` | Selects data from table. | +| `WHERE` | `SELECT column_name(s) FROM database.table WHERE condition` | Extracts records based on a defined condition. | ### Joins -| Keyword | Syntax | Description | -| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| CROSS JOIN | SELECT _column_name(s)_ FROM _database.table_1_ CROSS JOIN _database.table_2_ | Returns a paired combination of each row from _table_1_ with row from _table_2_. 
_Note: CROSS JOIN can return very large result sets and is generally considered bad practice._ |
-| FULL OUTER | SELECT _column_name(s)_ FROM _database.table_1_ FULL OUTER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ WHERE _condition_ | Returns all records when there is a match in either _table_1_ (left table) or _table_2_ (right table). |
-| [INNER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ INNER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return only matching records from _table_1_ (left table) and _table_2_ (right table). The INNER keyword is optional and does not affect the result. |
-| LEFT [OUTER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ LEFT OUTER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return all records from _table_1_ (left table) and matching data from _table_2_ (right table). The OUTER keyword is optional and does not affect the result. |
-| RIGHT [OUTER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ RIGHT OUTER JOIN _database.table_2_ ON _table_1.column_name = table_2.column_name_ | Return all records from _table_2_ (right table) and matching data from _table_1_ (left table). The OUTER keyword is optional and does not affect the result. |
+| Keyword | Syntax | Description |
+| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `CROSS JOIN` | `SELECT column_name(s) FROM database.table_1 CROSS JOIN database.table_2` | Returns a paired combination of each row from `table_1` with each row from `table_2`. Note: CROSS JOIN can return very large result sets and is generally considered bad practice. 
| +| `FULL OUTER` | `SELECT column_name(s) FROM database.table_1 FULL OUTER JOIN database.table_2 ON table_1.column_name = table_2.column_name WHERE condition` | Returns all records when there is a match in either `table_1` (left table) or `table_2` (right table). | +| `[INNER] JOIN` | `SELECT column_name(s) FROM database.table_1 INNER JOIN database.table_2 ON table_1.column_name = table_2.column_name` | Return only matching records from `table_1` (left table) and `table_2` (right table). The INNER keyword is optional and does not affect the result. | +| `LEFT [OUTER] JOIN` | `SELECT column_name(s) FROM database.table_1 LEFT OUTER JOIN database.table_2 ON table_1.column_name = table_2.column_name` | Return all records from `table_1` (left table) and matching data from `table_2` (right table). The OUTER keyword is optional and does not affect the result. | +| `RIGHT [OUTER] JOIN` | `SELECT column_name(s) FROM database.table_1 RIGHT OUTER JOIN database.table_2 ON table_1.column_name = table_2.column_name` | Return all records from `table_2` (right table) and matching data from `table_1` (left table). The OUTER keyword is optional and does not affect the result. | ### Predicates | Keyword | Syntax | Description | | ----------- | ----------------------------------------------------------------------------- | -------------------------- | -| IS NOT NULL | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IS NOT NULL | Tests for non-null values. | -| IS NULL | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IS NULL | Tests for null values. | +| `IS NOT NULL` | `SELECT column_name(s) FROM database.table WHERE column_name IS NOT NULL` | Tests for non-null values. | +| `IS NULL` | `SELECT column_name(s) FROM database.table WHERE column_name IS NULL` | Tests for null values. 
| ### Statements | Keyword | Syntax | Description | | ------- | --------------------------------------------------------------------------------------------- | ----------------------------------- | -| DELETE | DELETE FROM _database.table_ WHERE condition | Deletes existing data from a table. | -| INSERT | INSERT INTO _database.table(column_name(s))_ VALUES(_value(s)_) | Inserts new records into a table. | -| UPDATE | UPDATE _database.table_ SET _column_1 = value_1, column_2 = value_2, ....,_ WHERE _condition_ | Alters existing records in a table. | +| `DELETE` | `DELETE FROM database.table WHERE condition` | Deletes existing data from a table. | +| `INSERT` | `INSERT INTO database.table(column_name(s)) VALUES(value(s))` | Inserts new records into a table. | +| `UPDATE` | `UPDATE database.table SET column_1 = value_1, column_2 = value_2, .... WHERE condition` | Alters existing records in a table. | \ No newline at end of file diff --git a/versioned_docs/version-4.5/developers/sql-guide/json-search.md b/versioned_docs/version-4.5/developers/sql-guide/json-search.md index 13bd3b90..c4bcd1c8 100644 --- a/versioned_docs/version-4.5/developers/sql-guide/json-search.md +++ b/versioned_docs/version-4.5/developers/sql-guide/json-search.md @@ -12,7 +12,7 @@ Harper automatically indexes all top level attributes in a row / object written ## Syntax -SEARCH_JSON(_expression, attribute_) +`SEARCH_JSON(expression, attribute)` Executes the supplied string _expression_ against data of the defined top level _attribute_ for each row. The expression both filters and defines output from the JSON document. @@ -117,7 +117,7 @@ SEARCH_JSON( ) ``` -The first argument passed to SEARCH_JSON is the expression to execute against the second argument which is the cast attribute on the credits table. This expression will execute for every row. Looking into the expression it starts with “$\[…]” this tells the expression to iterate all elements of the cast array. 
+The first argument passed to SEARCH_JSON is the expression to execute against the second argument, which is the cast attribute on the credits table. This expression will execute for every row. Looking into the expression, it starts with "$[…]"; this tells the expression to iterate all elements of the cast array.

Then the expression tells the function to only return entries where the name attribute matches any of the actors defined in the array:

@@ -125,7 +125,7 @@ Then the expression tells the function to only return entries where the name att

name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L. Jackson", "Gwyneth Paltrow", "Don Cheadle"]
```

-So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{“actor”: name, “character”: character}`. This tells the function to create a specific object for each matching entry.
+So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{"actor": name, "character": character}`. This tells the function to create a specific object for each matching entry.

**Sample Result**

diff --git a/versioned_docs/version-4.5/getting-started/first-harper-app.md b/versioned_docs/version-4.5/getting-started/first-harper-app.md
index 39930c90..e911cd04 100644
--- a/versioned_docs/version-4.5/getting-started/first-harper-app.md
+++ b/versioned_docs/version-4.5/getting-started/first-harper-app.md
@@ -88,21 +88,20 @@ type Dog @table @export {
 }
 ```

-By default the application HTTP server port is `9926` (this can be [configured here](../deployments/configuration#http)), so the local URL would be http:/localhost:9926/Dog/ with a full REST API. 
We can PUT or POST data into this table using this new path, and then GET or DELETE from it as well (you can even view data directly from the browser). If you have not added any records yet, we could use a PUT or POST to add a record. PUT is appropriate if you know the id, and POST can be used to assign an id: +By default the application HTTP server port is `9926` (this can be [configured here](../deployments/configuration#http)), so the local URL would be `http://localhost:9926/Dog/` with a full REST API. We can PUT or POST data into this table using this new path, and then GET or DELETE from it as well (you can even view data directly from the browser). If you have not added any records yet, we could use a PUT or POST to add a record. PUT is appropriate if you know the id, and POST can be used to assign an id: -```json -POST /Dog/ -Content-Type: application/json - -{ - "name": "Harper", - "breed": "Labrador", - "age": 3, - "tricks": ["sits"] -} +```bash +curl -X POST http://localhost:9926/Dog/ \ + -H "Content-Type: application/json" \ + -d '{ + "name": "Harper", + "breed": "Labrador", + "age": 3, + "tricks": ["sits"] + }' ``` -With this a record will be created and the auto-assigned id will be available through the `Location` header. If you added a record, you can visit the path `/Dog/` to view that record. Alternately, the curl command curl `http:/localhost:9926/Dog/` will achieve the same thing. +With this a record will be created and the auto-assigned id will be available through the `Location` header. If you added a record, you can visit the path `/Dog/` to view that record. Alternately, the curl command `curl http://localhost:9926/Dog/` will achieve the same thing. 
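The PUT-vs-POST convention above — PUT when you know the id, POST to have one assigned, with the new id reported back the way Harper uses the `Location` header — can be sketched as a tiny in-memory table. This is illustrative pseudocode for the semantics only, not Harper's implementation:

```python
import itertools

class DogTable:
    """Toy in-memory stand-in for a REST-exposed table."""

    def __init__(self):
        self.rows = {}
        self._ids = itertools.count(1)

    def post(self, record):
        # POST: the server assigns the id and reports where the record
        # landed (cf. the Location header in the response).
        new_id = str(next(self._ids))
        self.rows[new_id] = {"id": new_id, **record}
        return f"/Dog/{new_id}"

    def put(self, record_id, record):
        # PUT: the caller supplies the id; creates or replaces that record.
        self.rows[record_id] = {"id": record_id, **record}

dogs = DogTable()
location = dogs.post({"name": "Harper", "breed": "Labrador", "age": 3})
print(location)  # /Dog/1
dogs.put("2", {"name": "Penny", "breed": "Corgi"})
```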
## Authenticating Endpoints diff --git a/versioned_docs/version-4.5/technical-details/reference/analytics.md b/versioned_docs/version-4.5/technical-details/reference/analytics.md index 39c92109..4ee7fdb7 100644 --- a/versioned_docs/version-4.5/technical-details/reference/analytics.md +++ b/versioned_docs/version-4.5/technical-details/reference/analytics.md @@ -104,14 +104,14 @@ And a summary record looks like: The following are general resource usage statistics that are tracked: -- memory - This includes RSS, heap, buffer and external data usage. -- utilization - How much of the time the worker was processing requests. +- `memory` - This includes RSS, heap, buffer and external data usage. +- `utilization` - How much of the time the worker was processing requests. - mqtt-connections - The number of MQTT connections. The following types of information is tracked for each HTTP request: -- success - How many requests returned a successful response (20x response code). TTFB - Time to first byte in the response to the client. -- transfer - Time to finish the transfer of the data to the client. +- `success` - How many requests returned a successful response (20x response code). TTFB - Time to first byte in the response to the client. +- `transfer` - Time to finish the transfer of the data to the client. - bytes-sent - How many bytes of data were sent to the client. Requests are categorized by operation name, for the operations API, by the resource (name) with the REST API, and by command for the MQTT interface. diff --git a/versioned_docs/version-4.6/administration/cloning.md b/versioned_docs/version-4.6/administration/cloning.md index 4669775b..4baa5807 100644 --- a/versioned_docs/version-4.6/administration/cloning.md +++ b/versioned_docs/version-4.6/administration/cloning.md @@ -11,7 +11,7 @@ only clone config, databases and replication that do not already exist. 
Clone node is triggered when Harper is installed or started with certain environment or command line (CLI) variables set (see below). -**Leader node** - the instance of Harper you are cloning.\ +**Leader node** - the instance of Harper you are cloning. **Clone node** - the new node which will be a clone of the leader node. To start clone run `harperdb` in the CLI with either of the following variables set: diff --git a/versioned_docs/version-4.6/administration/harper-studio/create-account.md b/versioned_docs/version-4.6/administration/harper-studio/create-account.md index 21752357..c0b0cc96 100644 --- a/versioned_docs/version-4.6/administration/harper-studio/create-account.md +++ b/versioned_docs/version-4.6/administration/harper-studio/create-account.md @@ -12,7 +12,7 @@ Start at the [Harper Studio sign up page](https://studio.harperdb.io/sign-up). - Email Address - Subdomain - _Part of the URL that will be used to identify your Harper Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ + _Part of the URL that will be used to identify your Harper Cloud Instances. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ - Coupon Code (optional) diff --git a/versioned_docs/version-4.6/administration/harper-studio/instances.md b/versioned_docs/version-4.6/administration/harper-studio/instances.md index f17acb70..07da8097 100644 --- a/versioned_docs/version-4.6/administration/harper-studio/instances.md +++ b/versioned_docs/version-4.6/administration/harper-studio/instances.md @@ -26,7 +26,7 @@ A summary view of all instances within an organization can be viewed by clicking 1. Fill out Instance Info. 1. Enter Instance Name - _This will be used to build your instance URL. 
For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com). The Instance URL will be previewed below._ + _This will be used to build your instance URL. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com). The Instance URL will be previewed below._ 1. Enter Instance Username diff --git a/versioned_docs/version-4.6/administration/harper-studio/organizations.md b/versioned_docs/version-4.6/administration/harper-studio/organizations.md index c26b4481..f93eeff0 100644 --- a/versioned_docs/version-4.6/administration/harper-studio/organizations.md +++ b/versioned_docs/version-4.6/administration/harper-studio/organizations.md @@ -29,7 +29,7 @@ A new organization can be created as follows: - Enter Organization Name _This is used for descriptive purposes only._ - Enter Organization Subdomain - _Part of the URL that will be used to identify your Harper Cloud Instances. For example, with subdomain “demo” and instance name “c1” the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ + _Part of the URL that will be used to identify your Harper Cloud Instances. For example, with subdomain "demo" and instance name "c1" the instance URL would be: [https://c1-demo.harperdbcloud.com](https://c1-demo.harperdbcloud.com)._ 1. Click Create Organization. ## Delete an Organization diff --git a/versioned_docs/version-4.6/administration/harper-studio/query-instance-data.md b/versioned_docs/version-4.6/administration/harper-studio/query-instance-data.md index 29a385b9..e85f5e15 100644 --- a/versioned_docs/version-4.6/administration/harper-studio/query-instance-data.md +++ b/versioned_docs/version-4.6/administration/harper-studio/query-instance-data.md @@ -13,7 +13,7 @@ SQL queries can be executed directly through the Harper Studio with the followin 1. 
Enter your SQL query in the SQL query window. 1. Click **Execute**. -_Please note, the Studio will execute the query exactly as entered. For example, if you attempt to `SELECT _` from a table with millions of rows, you will most likely crash your browser.\* +_Please note, the Studio will execute the query exactly as entered. For example, if you attempt to `SELECT *` from a table with millions of rows, you will most likely crash your browser._ ## Browse Query Results Set diff --git a/versioned_docs/version-4.6/administration/logging/standard-logging.md b/versioned_docs/version-4.6/administration/logging/standard-logging.md index a5116ed7..044c2260 100644 --- a/versioned_docs/version-4.6/administration/logging/standard-logging.md +++ b/versioned_docs/version-4.6/administration/logging/standard-logging.md @@ -22,15 +22,15 @@ For example, a typical log entry looks like: The components of a log entry are: -- timestamp - This is the date/time stamp when the event occurred -- level - This is an associated log level that gives a rough guide to the importance and urgency of the message. The available log levels in order of least urgent (and more verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`. -- thread/ID - This reports the name of the thread and the thread ID that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process. Key threads are: - - main - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads - - http - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions. - - Clustering\* - These are threads and processes that handle replication. - - job - These are job threads that have been started to handle operations that are executed in a separate job thread. 
-- tags - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags. -- message - This is the main message that was reported. +- `timestamp` - This is the date/time stamp when the event occurred +- `level` - This is an associated log level that gives a rough guide to the importance and urgency of the message. The available log levels in order of least urgent (and more verbose) are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, and `notify`. +- `thread/ID` - This reports the name of the thread and the thread ID that the event was reported on. Note that NATS logs are recorded by their process name and there is no thread id for them since they are a separate process. Key threads are: + - `main` - This is the thread that is responsible for managing all other threads and routes incoming requests to the other threads + - `http` - These are the worker threads that handle the primary workload of incoming HTTP requests to the operations API and custom functions. + - `Clustering` - These are threads and processes that handle replication. + - `job` - These are job threads that have been started to handle operations that are executed in a separate job thread. +- `tags` - Logging from a custom function will include a "custom-function" tag in the log entry. Most logs will not have any additional tags. +- `message` - This is the main message that was reported. We try to keep logging to a minimum by default, so the default log level is `error`. If you require more information from the logs, setting a more verbose log level (such as `debug` or `trace`) will provide that. @@ -46,7 +46,7 @@ Harper logs can optionally be streamed to standard streams. Logging to standard ## Logging Rotation -Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space.
For more information see “logging” in our [config docs](../../deployments/configuration). +Log rotation allows for managing log files, such as compressing rotated log files, archiving old log files, determining when to rotate, and the like. This will allow for organized storage and efficient use of disk space. For more information see "logging" in our [config docs](../../deployments/configuration). ## Read Logs via the API diff --git a/versioned_docs/version-4.6/deployments/install-harper/linux.md b/versioned_docs/version-4.6/deployments/install-harper/linux.md index 27a9dc79..cae27c9d 100644 --- a/versioned_docs/version-4.6/deployments/install-harper/linux.md +++ b/versioned_docs/version-4.6/deployments/install-harper/linux.md @@ -20,7 +20,7 @@ These instructions assume that the following has already been completed: While you will need to access Harper through port 9925 for the administration through the operations API, and port 9932 for clustering, for higher level of security, you may want to consider keeping both of these ports restricted to a VPN or VPC, and only have the application interface (9926 by default) exposed to the public Internet. -For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default “ubuntu” user account. +For this example, we will use an AWS Ubuntu Server 22.04 LTS m5.large EC2 Instance with an additional General Purpose SSD EBS volume and the default "ubuntu" user account. 
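The log-reading endpoint named above is reached like any other operations-API call: a JSON body POSTed to the operations port (9925 by default) with Basic auth. A minimal Node.js sketch, assuming a local instance and illustrative credentials; the `read_log` parameters shown are assumptions for illustration, not a verified signature:

```javascript
// Build a POST request for the Harper operations API (default port 9925).
// Credentials and the read_log parameters here are illustrative assumptions.
const INSTANCE_URL = 'http://localhost:9925';

function buildOperation(body, user = 'HDB_ADMIN', pass = 'password') {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Basic auth header: base64 of "user:pass"
      Authorization: 'Basic ' + Buffer.from(`${user}:${pass}`).toString('base64'),
    },
    body: JSON.stringify(body),
  };
}

// Hypothetical read_log call asking for recent error-level entries.
const request = buildOperation({ operation: 'read_log', level: 'error', limit: 100 });
// fetch(INSTANCE_URL, request).then((res) => res.json()).then(console.log);
```

The `fetch` line is left commented so the sketch runs without a live instance; adjust the URL and credentials for your deployment.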
--- diff --git a/versioned_docs/version-4.6/developers/applications/caching.md b/versioned_docs/version-4.6/developers/applications/caching.md index 29fec826..e655a32c 100644 --- a/versioned_docs/version-4.6/developers/applications/caching.md +++ b/versioned_docs/version-4.6/developers/applications/caching.md @@ -22,9 +22,9 @@ While you can provide a single expiration time, there are actually several expir You can provide a single expiration and it defines the behavior for all three. You can also provide three settings for expiration, through table directives: -- expiration - The amount of time until a record goes stale. -- eviction - The amount of time after expiration before a record can be evicted (defaults to zero). -- scanInterval - The interval for scanning for expired records (defaults to one quarter of the total of expiration and eviction). +- `expiration` - The amount of time until a record goes stale. +- `eviction` - The amount of time after expiration before a record can be evicted (defaults to zero). +- `scanInterval` - The interval for scanning for expired records (defaults to one quarter of the total of expiration and eviction). ## Define External Data Source diff --git a/versioned_docs/version-4.6/developers/applications/define-routes.md b/versioned_docs/version-4.6/developers/applications/define-routes.md index 720f4f06..37c3d016 100644 --- a/versioned_docs/version-4.6/developers/applications/define-routes.md +++ b/versioned_docs/version-4.6/developers/applications/define-routes.md @@ -22,7 +22,7 @@ However, you can specify the path to be `/` if you wish to have your routes hand - The route below, using the default config, within the **dogs** project, with a route of **breeds** would be available at **http:/localhost:9926/dogs/breeds**. -In effect, this route is just a pass-through to Harper. 
The same result could have been achieved by hitting the core Harper API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the “helper methods” section, below. +In effect, this route is just a pass-through to Harper. The same result could have been achieved by hitting the core Harper API, since it uses **hdbCore.preValidation** and **hdbCore.request**, which are defined in the "helper methods" section, below. ```javascript export default async (server, { hdbCore, logger }) => { @@ -39,7 +39,7 @@ export default async (server, { hdbCore, logger }) => { For endpoints where you want to execute multiple operations against Harper, or perform additional processing (like an ML classification, or an aggregation, or a call to a 3rd party API), you can define your own logic in the handler. The function below will execute a query against the dogs table, and filter the results to only return those dogs over 4 years in age. -**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which- as the name implies- bypasses all user authentication. See the security concerns and mitigations in the “helper methods” section, below.** +**IMPORTANT: This route has NO preValidation and uses hdbCore.requestWithoutAuthentication, which, as the name implies, bypasses all user authentication.
See the security concerns and mitigations in the "helper methods" section, below.** ```javascript export default async (server, { hdbCore, logger }) => { diff --git a/versioned_docs/version-4.6/developers/clustering/index.md b/versioned_docs/version-4.6/developers/clustering/index.md index 95c3433c..fddd3851 100644 --- a/versioned_docs/version-4.6/developers/clustering/index.md +++ b/versioned_docs/version-4.6/developers/clustering/index.md @@ -22,10 +22,10 @@ A common use case is an edge application collecting and analyzing sensor data th Harper simplifies the architecture of such an application with its bi-directional, table-level replication: -- The edge instance subscribes to a “thresholds” table on the cloud instance, so the application only makes localhost calls to get the thresholds. -- The application continually pushes sensor data into a “sensor_data” table via the localhost API, comparing it to the threshold values as it does so. -- When a threshold violation occurs, the application adds a record to the “alerts” table. -- The application appends to that record array “sensor_data” entries for the 60 seconds (or minutes, or days) leading up to the threshold violation. -- The edge instance publishes the “alerts” table up to the cloud instance. +- The edge instance subscribes to a "thresholds" table on the cloud instance, so the application only makes localhost calls to get the thresholds. +- The application continually pushes sensor data into a "sensor_data" table via the localhost API, comparing it to the threshold values as it does so. +- When a threshold violation occurs, the application adds a record to the "alerts" table. +- The application appends to that record array "sensor_data" entries for the 60 seconds (or minutes, or days) leading up to the threshold violation. +- The edge instance publishes the "alerts" table up to the cloud instance. By letting Harper focus on the fault-tolerant logistics of transporting your data, you get to write less code. 
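The edge flow bulleted above can be sketched in a few lines of application code. This is purely illustrative: the record shapes and the "thresholds", "sensor_data", and "alerts" table semantics are assumptions layered on the description, not a Harper API:

```javascript
// Illustrative edge logic: compare readings to replicated thresholds and
// build an alert record carrying the sensor_data leading up to the violation.
function violatesThreshold(reading, thresholds) {
  const limit = thresholds[reading.sensor_id];
  return limit !== undefined && reading.value > limit;
}

function buildAlert(reading, leadingData) {
  // This record would be inserted into the local "alerts" table,
  // which the edge instance publishes up to the cloud instance.
  return {
    sensor_id: reading.sensor_id,
    value: reading.value,
    detected_at: reading.timestamp,
    sensor_data: leadingData, // readings from the window before the violation
  };
}

// Hypothetical data: thresholds pulled from the local "thresholds" table.
const thresholds = { temp_sensor_1: 75 };
const reading = { sensor_id: 'temp_sensor_1', value: 80.2, timestamp: 1700000000000 };
const alert = violatesThreshold(reading, thresholds) ? buildAlert(reading, [reading]) : null;
```

All reads and writes in this flow are localhost calls; Harper's replication moves the thresholds down and the alerts up.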
By moving data only when and where it’s needed, you lower storage and bandwidth costs. And by restricting your app to only making local calls to Harper, you reduce the overall exposure of your application to outside forces. diff --git a/versioned_docs/version-4.6/developers/miscellaneous/google-data-studio.md b/versioned_docs/version-4.6/developers/miscellaneous/google-data-studio.md index d8d5fd9d..95a912f3 100644 --- a/versioned_docs/version-4.6/developers/miscellaneous/google-data-studio.md +++ b/versioned_docs/version-4.6/developers/miscellaneous/google-data-studio.md @@ -19,9 +19,9 @@ Get started by selecting the Harper connector from the [Google Data Studio Partn 1. Log in to [https://datastudio.google.com/](https://datastudio.google.com/). 1. Add a new Data Source using the Harper connector. The current release version can be added as a data source by following this link: [Harper Google Data Studio Connector](https://datastudio.google.com/datasources/create?connectorId=AKfycbxBKgF8FI5R42WVxO-QCOq7dmUys0HJrUJMkBQRoGnCasY60_VJeO3BhHJPvdd20-S76g). 1. Authorize the connector to access other servers on your behalf (this allows the connector to contact your database). -1. Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word “Basic” at the start of it. -1. Check the box for “Secure Connections Only” if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer. -1. Check the box for “Allow Bad Certs” if your Harper instance does not have a valid SSL certificate. [Harper Cloud](../../deployments/harper-cloud/) always has valid certificates, and so will never require this to be checked. Instances you set up yourself may require this, if you are using self-signed certs. 
If you are using [Harper Cloud](../../deployments/harper-cloud/) or another instance you know should always have valid SSL certificates, do not check this box. +1. Enter the Web URL to access your database (preferably with HTTPS), as well as the Basic Auth key you use to access the database. Just include the key, not the word "Basic" at the start of it. +1. Check the box for "Secure Connections Only" if you want to always use HTTPS connections for this data source; entering a Web URL that starts with https:// will do the same thing, if you prefer. +1. Check the box for "Allow Bad Certs" if your Harper instance does not have a valid SSL certificate. [Harper Cloud](../../deployments/harper-cloud/) always has valid certificates, and so will never require this to be checked. Instances you set up yourself may require this, if you are using self-signed certs. If you are using [Harper Cloud](../../deployments/harper-cloud/) or another instance you know should always have valid SSL certificates, do not check this box. 1. Choose your Query Type. This determines what information the configuration will ask for after pressing the Next button. - Table will ask you for a Schema and a Table to return all fields of using `SELECT *`. - SQL will ask you for the SQL query you’re using to retrieve fields from the database. You may `JOIN` multiple tables together, and use Harper specific SQL functions, along with the usual power SQL grants. diff --git a/versioned_docs/version-4.6/developers/operations-api/analytics.md b/versioned_docs/version-4.6/developers/operations-api/analytics.md index 9d43c9ee..59ac6011 100644 --- a/versioned_docs/version-4.6/developers/operations-api/analytics.md +++ b/versioned_docs/version-4.6/developers/operations-api/analytics.md @@ -8,12 +8,12 @@ title: Analytics Operations Retrieves analytics data from the server. 
-- operation _(required)_ - must always be `get_analytics` -- metric _(required)_ - any value returned by `list_metrics` -- start*time *(optional)\_ - Unix timestamp in seconds -- end*time *(optional)\_ - Unix timestamp in seconds -- get*attributes *(optional)\_ - array of attribute names to retrieve -- conditions _(optional)_ - array of conditions to filter results (see [search_by_conditions docs](./nosql-operations) for details) +- `operation` _(required)_ - must always be `get_analytics` +- `metric` _(required)_ - any value returned by `list_metrics` +- `start_time` _(optional)_ - Unix timestamp in seconds +- `end_time` _(optional)_ - Unix timestamp in seconds +- `get_attributes` _(optional)_ - array of attribute names to retrieve +- `conditions` _(optional)_ - array of conditions to filter results (see [search_by_conditions docs](./nosql-operations) for details) ### Body @@ -57,8 +57,8 @@ Retrieves analytics data from the server. Returns a list of available metrics that can be queried. -- operation _(required)_ - must always be `list_metrics` -- metric*types *(optional)\_ - array of metric types to filter results; one or both of `custom` and `builtin`; default is `builtin` +- `operation` _(required)_ - must always be `list_metrics` +- `metric_types` _(optional)_ - array of metric types to filter results; one or both of `custom` and `builtin`; default is `builtin` ### Body @@ -79,8 +79,8 @@ Returns a list of available metrics that can be queried. Provides detailed information about a specific metric, including its structure and available parameters. 
-- operation _(required)_ - must always be `describe_metric` -- metric _(required)_ - name of the metric to describe +- `operation` _(required)_ - must always be `describe_metric` +- `metric` _(required)_ - name of the metric to describe ### Body diff --git a/versioned_docs/version-4.6/developers/operations-api/bulk-operations.md b/versioned_docs/version-4.6/developers/operations-api/bulk-operations.md index 2e7d7f45..b6714552 100644 --- a/versioned_docs/version-4.6/developers/operations-api/bulk-operations.md +++ b/versioned_docs/version-4.6/developers/operations-api/bulk-operations.md @@ -8,11 +8,11 @@ title: Bulk Operations Exports data based on a given search operation to a local file in JSON or CSV format. -- operation _(required)_ - must always be `export_local` -- format _(required)_ - the format you wish to export the data, options are `json` & `csv` -- path _(required)_ - path local to the server to export the data -- search*operation *(required)\_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` -- filename _(optional)_ - the name of the file where your export will be written to (do not include extension in filename). If one is not provided it will be autogenerated based on the epoch. +- `operation` _(required)_ - must always be `export_local` +- `format` _(required)_ - the format you wish to export the data, options are `json` & `csv` +- `path` _(required)_ - path local to the server to export the data +- `search_operation` _(required)_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` +- `filename` _(optional)_ - the name of the file where your export will be written to (do not include extension in filename). If one is not provided it will be autogenerated based on the epoch. 
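Putting the `export_local` parameters together, a request body might look like the following sketch; the path, filename, and SQL are placeholders, not values from the docs:

```javascript
// Illustrative export_local body; POST it to the operations API as JSON.
const exportLocal = {
  operation: 'export_local',
  format: 'json', // or 'csv'
  path: '/tmp/exports', // a path local to the server
  filename: 'dog_records', // optional; do not include an extension
  search_operation: {
    operation: 'sql',
    sql: 'SELECT id, name FROM dev.dog', // placeholder query
  },
};
```

If `filename` is omitted, the name is autogenerated from the epoch, as noted above.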
### Body @@ -42,11 +42,11 @@ Exports data based on a given search operation to a local file in JSON or CSV fo Ingests CSV data, provided directly in the operation as an `insert`, `update` or `upsert` into the specified database table. -- operation _(required)_ - must always be `csv_data_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- data _(required)_ - csv data to import into Harper +- `operation` _(required)_ - must always be `csv_data_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `data` _(required)_ - csv data to import into Harper ### Body @@ -77,11 +77,11 @@ Ingests CSV data, provided via a path on the local filesystem, as an `insert`, ` _Note: The CSV file must reside on the same machine on which Harper is running. For example, the path to a CSV on your computer will produce an error if your Harper instance is a cloud instance._ -- operation _(required)_ - must always be `csv_file_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- file*path *(required)\_ - path to the csv file on the host running Harper +- `operation` _(required)_ - must always be `csv_file_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. 
The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `file_path` _(required)_ - path to the csv file on the host running Harper ### Body @@ -110,11 +110,11 @@ _Note: The CSV file must reside on the same machine on which Harper is running. Ingests CSV data, provided via URL, as an `insert`, `update` or `upsert` into the specified database table. -- operation _(required)_ - must always be `csv_url_load` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- csv*url *(required)\_ - URL to the csv +- `operation` _(required)_ - must always be `csv_url_load` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `csv_url` _(required)_ - URL to the csv ### Body @@ -143,10 +143,10 @@ Ingests CSV data, provided via URL, as an `insert`, `update` or `upsert` into th Exports data based on a given search operation from table to AWS S3 in JSON or CSV format. 
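Assembled from the parameters above, a `csv_url_load` body might look like this sketch; the URL and table name are placeholders:

```javascript
// Illustrative csv_url_load body; POST to the operations API as JSON.
const csvUrlLoad = {
  operation: 'csv_url_load',
  action: 'insert', // optional; 'insert' is the default
  database: 'data', // optional; 'data' is the default
  table: 'dog', // placeholder table name
  csv_url: 'https://example.com/dogs.csv', // placeholder URL to the csv
};
```

Omitting `action` and `database` gives the same result as spelling out the defaults shown here.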
-- operation _(required)_ - must always be `export_to_s3` -- format _(required)_ - the format you wish to export the data, options are `json` & `csv` -- s3 _(required)_ - details your access keys, bucket, bucket region and key for saving the data to S3 -- search*operation *(required)\_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` +- `operation` _(required)_ - must always be `export_to_s3` +- `format` _(required)_ - the format you wish to export the data, options are `json` & `csv` +- `s3` _(required)_ - details your access keys, bucket, bucket region and key for saving the data to S3 +- `search_operation` _(required)_ - search_operation of `search_by_hash`, `search_by_value`, `search_by_conditions` or `sql` ### Body @@ -183,16 +183,16 @@ Exports data based on a given search operation from table to AWS S3 in JSON or C This operation allows users to import CSV or JSON files from an AWS S3 bucket as an `insert`, `update` or `upsert`. -- operation _(required)_ - must always be `import_from_s3` -- action _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. The default is `insert` -- database _(optional)_ - name of the database where you are loading your data. The default is `data` -- table _(required)_ - name of the table where you are loading your data -- s3 _(required)_ - object containing required AWS S3 bucket info for operation: - - aws_access_key_id - AWS access key for authenticating into your S3 bucket - - aws_secret_access_key - AWS secret for authenticating into your S3 bucket - - bucket - AWS S3 bucket to import from - - key - the name of the file to import - _the file must include a valid file extension ('.csv' or '.json')_ - - region - the region of the bucket +- `operation` _(required)_ - must always be `import_from_s3` +- `action` _(optional)_ - type of action you want to perform - `insert`, `update` or `upsert`. 
The default is `insert` +- `database` _(optional)_ - name of the database where you are loading your data. The default is `data` +- `table` _(required)_ - name of the table where you are loading your data +- `s3` _(required)_ - object containing required AWS S3 bucket info for operation: + - `aws_access_key_id` - AWS access key for authenticating into your S3 bucket + - `aws_secret_access_key` - AWS secret for authenticating into your S3 bucket + - `bucket` - AWS S3 bucket to import from + - `key` - the name of the file to import - _the file must include a valid file extension ('.csv' or '.json')_ + - `region` - the region of the bucket ### Body @@ -229,10 +229,10 @@ Delete data before the specified timestamp on the specified database table exclu _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_records_before` -- date _(required)_ - records older than this date will be deleted. Supported format looks like: `YYYY-MM-DDThh:mm:ss.sZ` -- schema _(required)_ - name of the schema where you are deleting your data -- table _(required)_ - name of the table where you are deleting your data +- `operation` _(required)_ - must always be `delete_records_before` +- `date` _(required)_ - records older than this date will be deleted. 
Supported format looks like: `YYYY-MM-DDThh:mm:ss.sZ` +- `schema` _(required)_ - name of the schema where you are deleting your data +- `table` _(required)_ - name of the table where you are deleting your data ### Body diff --git a/versioned_docs/version-4.6/developers/operations-api/certificate-management.md b/versioned_docs/version-4.6/developers/operations-api/certificate-management.md index b569dffc..f8eea402 100644 --- a/versioned_docs/version-4.6/developers/operations-api/certificate-management.md +++ b/versioned_docs/version-4.6/developers/operations-api/certificate-management.md @@ -12,12 +12,12 @@ If a `private_key` is not passed the operation will search for one that matches _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_certificate` -- name _(required)_ - a unique name for the certificate -- certificate _(required)_ - a PEM formatted certificate string -- is*authority *(required)\_ - a boolean indicating if the certificate is a certificate authority -- hosts _(optional)_ - an array of hostnames that the certificate is valid for -- private*key *(optional)\_ - a PEM formatted private key string +- `operation` _(required)_ - must always be `add_certificate` +- `name` _(required)_ - a unique name for the certificate +- `certificate` _(required)_ - a PEM formatted certificate string +- `is_authority` _(required)_ - a boolean indicating if the certificate is a certificate authority +- `hosts` _(optional)_ - an array of hostnames that the certificate is valid for +- `private_key` _(optional)_ - a PEM formatted private key string ### Body @@ -47,8 +47,8 @@ Removes a certificate from the `hdb_certificate` system table and deletes the co _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `remove_certificate` -- name _(required)_ - the name of the certificate +- `operation` _(required)_ - must always be `remove_certificate` +- `name` _(required)_ - the name of the 
certificate ### Body @@ -75,7 +75,7 @@ Lists all certificates in the `hdb_certificate` system table. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_certificates` +- `operation` _(required)_ - must always be `list_certificates` ### Body diff --git a/versioned_docs/version-4.6/developers/operations-api/clustering-nats.md b/versioned_docs/version-4.6/developers/operations-api/clustering-nats.md index a45c593e..45e160c4 100644 --- a/versioned_docs/version-4.6/developers/operations-api/clustering-nats.md +++ b/versioned_docs/version-4.6/developers/operations-api/clustering-nats.md @@ -10,11 +10,11 @@ Adds a route/routes to either the hub or leaf server cluster configuration. This _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_set_routes` -- server _(required)_ - must always be `hub` or `leaf`, in most cases you should use `hub` here -- routes _(required)_ - must always be an objects array with a host and port: - - host - the host of the remote instance you are clustering to - - port - the clustering port of the remote instance you are clustering to, in most cases this is the value in `clustering.hubServer.cluster.network.port` on the remote instance `harperdb-config.yaml` +- `operation` _(required)_ - must always be `cluster_set_routes` +- `server` _(required)_ - must always be `hub` or `leaf`, in most cases you should use `hub` here +- `routes` _(required)_ - must always be an objects array with a host and port: + - `host` - the host of the remote instance you are clustering to + - `port` - the clustering port of the remote instance you are clustering to, in most cases this is the value in `clustering.hubServer.cluster.network.port` on the remote instance `harperdb-config.yaml` ### Body @@ -78,7 +78,7 @@ Gets all the hub and leaf server routes from the config file. 
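A `cluster_set_routes` body built from those fields might look like the following; the host is a placeholder, and 9932 is the default clustering port mentioned earlier in these docs:

```javascript
// Illustrative cluster_set_routes body; the operation requires a super_user role.
const setRoutes = {
  operation: 'cluster_set_routes',
  server: 'hub', // 'hub' in most cases, 'leaf' otherwise
  routes: [
    // Each route names a remote instance's host and its clustering port,
    // i.e. clustering.hubServer.cluster.network.port on that instance.
    { host: 'node2.example.com', port: 9932 },
  ],
};
```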
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_get_routes` +- `operation` _(required)_ - must always be `cluster_get_routes` ### Body @@ -122,8 +122,8 @@ Removes route(s) from hub and/or leaf server routes array in config file. Return _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_delete_routes` -- routes _required_ - Must be an array of route object(s) +- `operation` _(required)_ - must always be `cluster_delete_routes` +- `routes` _(required)_ - Must be an array of route object(s) ### Body @@ -162,14 +162,14 @@ Registers an additional Harper instance with associated subscriptions. Learn mor _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_node` -- node*name *(required)\_ - the node name of the remote node -- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`: - - schema - the schema to replicate from - - table - the table to replicate from - - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish - a boolean which determines if transactions on the local table should be replicated on the remote table - - start*time *(optional)\_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format +- `operation` _(required)_ - must always be `add_node` +- `node_name` _(required)_ - the node name of the remote node +- `subscriptions` _(required)_ - The relationship created between nodes. 
Must be an object array and include `schema`, `table`, `subscribe` and `publish`: + - `schema` - the schema to replicate from + - `table` - the table to replicate from + - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table + - `start_time` _(optional)_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format ### Body @@ -205,14 +205,14 @@ Modifies an existing Harper instance registration and associated subscriptions. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `update_node` -- node*name *(required)\_ - the node name of the remote node you are updating -- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and include `schema`, `table`, `subscribe` and `publish`: - - schema - the schema to replicate from - - table - the table to replicate from - - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish - a boolean which determines if transactions on the local table should be replicated on the remote table - - start*time *(optional)\_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format +- `operation` _(required)_ - must always be `update_node` +- `node_name` _(required)_ - the node name of the remote node you are updating +- `subscriptions` _(required)_ - The relationship created between nodes. 
Must be an object array and include `schema`, `table`, `subscribe` and `publish`: + - `schema` - the schema to replicate from + - `table` - the table to replicate from + - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table + - `start_time` _(optional)_ - How far back to go to get transactions from node being added. Must be in UTC YYYY-MM-DDTHH:mm:ss.sssZ format ### Body @@ -248,13 +248,13 @@ A more adeptly named alias for add and update node. This operation behaves as a _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_node_replication` -- node*name *(required)\_ - the node name of the remote node you are updating -- subscriptions _(required)_ - The relationship created between nodes. Must be an object array and `table`, `subscribe` and `publish`: - - database _(optional)_ - the database to replicate from - - table _(required)_ - the table to replicate from - - subscribe _(required)_ - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish _(required)_ - a boolean which determines if transactions on the local table should be replicated on the remote table +- `operation` _(required)_ - must always be `set_node_replication` +- `node_name` _(required)_ - the node name of the remote node you are updating +- `subscriptions` _(required)_ - The relationship created between nodes. 
Must be an object array and include `table`, `subscribe` and `publish`: + - `database` _(optional)_ - the database to replicate from + - `table` _(required)_ - the table to replicate from + - `subscribe` _(required)_ - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` _(required)_ - a boolean which determines if transactions on the local table should be replicated on the remote table - ### Body @@ -289,7 +289,7 @@ Returns an array of status objects from a cluster. A status object will contain _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_status` +- `operation` _(required)_ - must always be `cluster_status` ### Body @@ -336,10 +336,10 @@ Returns an object array of enmeshed nodes. Each node object will contain the nam _Operation is restricted to super_user roles only_ -- operation _(required)_- must always be `cluster_network` -- timeout (_optional_) - the amount of time in milliseconds to wait for a response from the network. Must be a number -- connected*nodes (\_optional*) - omit `connected_nodes` from the response. Must be a boolean. Defaults to `false` -- routes (_optional_) - omit `routes` from the response. Must be a boolean. Defaults to `false` +- `operation` _(required)_ - must always be `cluster_network` +- `timeout` _(optional)_ - the amount of time in milliseconds to wait for a response from the network. Must be a number +- `connected_nodes` _(optional)_ - omit `connected_nodes` from the response. Must be a boolean. Defaults to `false` +- `routes` _(optional)_ - omit `routes` from the response. Must be a boolean. Defaults to `false` ### Body @@ -383,8 +383,8 @@ Removes a Harper instance and associated subscriptions from the cluster.
Learn more about [Harper clustering here](../clustering/). _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `remove_node` -- name _(required)_ - The name of the node you are de-registering +- `operation` _(required)_ - must always be `remove_node` +- `node_name` _(required)_ - The name of the node you are de-registering ### Body @@ -412,8 +412,8 @@ Learn more about [Harper clustering here](../clustering/). _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `configure_cluster` -- connections _(required)_ - must be an object array with each object containing `node_name` and `subscriptions` for that node +- `operation` _(required)_ - must always be `configure_cluster` +- `connections` _(required)_ - must be an object array with each object containing `node_name` and `subscriptions` for that node ### Body @@ -463,10 +463,10 @@ Will purge messages from a stream _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `purge_stream` -- database _(required)_ - the name of the database where the streams table resides -- table _(required)_ - the name of the table that belongs to the stream -- options _(optional)_ - control how many messages get purged. +- `operation` _(required)_ - must always be `purge_stream` +- `database` _(required)_ - the name of the database where the streams table resides +- `table` _(required)_ - the name of the table that belongs to the stream +- `options` _(optional)_ - control how many messages get purged.
Options are: - `keep` - purge will keep this many most recent messages - `seq` - purge all messages up to, but not including, this sequence diff --git a/versioned_docs/version-4.6/developers/operations-api/clustering.md b/versioned_docs/version-4.6/developers/operations-api/clustering.md index 5533d1af..1e39bc63 100644 --- a/versioned_docs/version-4.6/developers/operations-api/clustering.md +++ b/versioned_docs/version-4.6/developers/operations-api/clustering.md @@ -4,7 +4,7 @@ title: Clustering # Clustering -The following operations are available for configuring and managing [Harper replication](../replication/).\ +The following operations are available for configuring and managing [Harper replication](../replication/). _**If you are using NATS for clustering, please see the**_ [_**NATS Clustering Operations**_](./clustering-nats) _**documentation.**_ @@ -14,18 +14,18 @@ Adds a new Harper instance to the cluster. If `subscriptions` are provided, it w _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_node` -- hostname or url _(required)_ - one of these fields is required. You must provide either the `hostname` or the `url` of the node you want to add -- verify*tls *(optional)\_ - a boolean which determines if the TLS certificate should be verified. This will allow the Harper default self-signed certificates to be accepted. Defaults to `true` -- authorization _(optional)_ - an object or a string which contains the authorization information for the node being added. If it is an object, it should contain `username` and `password` fields. If it is a string, it should use HTTP `Authorization` style credentials -- retain*authorization *(optional)\_ - a boolean which determines if the authorization credentials should be retained/stored and used everytime a connection is made to this node. If `true`, the authorization will be stored on the node record. 
Generally this should not be used, as mTLS/certificate based authorization is much more secure and safe, and avoids the need for storing credentials. Defaults to `false`. -- revoked*certificates *(optional)\_ - an array of revoked certificates serial numbers. -- shard _(optional)_ - a number which can be used to indicate which shard this node belongs to. This is only needed if you are using sharding. -- subscriptions _(optional)_ - The relationship created between nodes. If not provided a fully replicated cluster will be setup. Must be an object array and include `database`, `table`, `subscribe` and `publish`: - - database - the database to replicate - - table - the table to replicate - - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish - a boolean which determines if transactions on the local table should be replicated on the remote table +- `operation` _(required)_ - must always be `add_node` +- `hostname` or `url` _(required)_ - one of these fields is required. You must provide either the `hostname` or the `url` of the node you want to add +- `verify_tls` _(optional)_ - a boolean which determines if the TLS certificate should be verified. This will allow the Harper default self-signed certificates to be accepted. Defaults to `true` +- `authorization` _(optional)_ - an object or a string which contains the authorization information for the node being added. If it is an object, it should contain `username` and `password` fields. If it is a string, it should use HTTP `Authorization` style credentials +- `retain_authorization` _(optional)_ - a boolean which determines if the authorization credentials should be retained/stored and used every time a connection is made to this node. If `true`, the authorization will be stored on the node record.
Generally this should not be used, as mTLS/certificate based authorization is much more secure and safe, and avoids the need for storing credentials. Defaults to `false`. +- `revoked_certificates` _(optional)_ - an array of revoked certificate serial numbers. If a certificate is revoked, it will not be accepted for any connections. +- `shard` _(optional)_ - a number which can be used to indicate which shard this node belongs to. This is only needed if you are using sharding. +- `subscriptions` _(optional)_ - The relationship created between nodes. If not provided, a fully replicated cluster will be set up. Must be an object array and include `database`, `table`, `subscribe` and `publish`: + - `database` - the database to replicate + - `table` - the table to replicate + - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table ### Body @@ -59,15 +59,15 @@ _Operation is restricted to super_user roles only_ _Note: will attempt to add the node if it does not exist_ -- operation _(required)_ - must always be `update_node` -- hostname _(required)_ - the `hostname` of the remote node you are updating -- revoked*certificates *(optional)\_ - an array of revoked certificates serial numbers. If a certificate is revoked, it will not be accepted for any connections. -- shard _(optional)_ - a number which can be used to indicate which shard this node belongs to. This is only needed if you are using sharding. -- subscriptions _(required)_ - The relationship created between nodes.
Must be an object array and include `database`, `table`, `subscribe` and `publish`: - - database - the database to replicate from - - table - the table to replicate from - - subscribe - a boolean which determines if transactions on the remote table should be replicated on the local table - - publish - a boolean which determines if transactions on the local table should be replicated on the remote table +- `operation` _(required)_ - must always be `update_node` +- `hostname` _(required)_ - the `hostname` of the remote node you are updating +- `revoked_certificates` _(optional)_ - an array of revoked certificate serial numbers. If a certificate is revoked, it will not be accepted for any connections. +- `shard` _(optional)_ - a number which can be used to indicate which shard this node belongs to. This is only needed if you are using sharding. +- `subscriptions` _(required)_ - The relationship created between nodes. Must be an object array and include `database`, `table`, `subscribe` and `publish`: + - `database` - the database to replicate from + - `table` - the table to replicate from + - `subscribe` - a boolean which determines if transactions on the remote table should be replicated on the local table + - `publish` - a boolean which determines if transactions on the local table should be replicated on the remote table ### Body @@ -102,8 +102,8 @@ Removes a Harper node from the cluster and stops replication, [Learn more about _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `remove_node` -- name _(required)_ - The name of the node you are removing +- `operation` _(required)_ - must always be `remove_node` +- `name` _(required)_ - The name of the node you are removing ### Body @@ -132,7 +132,7 @@ Returns an array of status objects from a cluster.
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_status` +- `operation` _(required)_ - must always be `cluster_status` ### Body @@ -167,7 +167,8 @@ _Operation is restricted to super_user roles only_ "lastReceivedRemoteTime": "Wed, 12 Feb 2025 16:49:29 GMT", "lastReceivedLocalTime": "Wed, 12 Feb 2025 16:50:59 GMT", "lastSendTime": "Wed, 12 Feb 2025 16:50:59 GMT" - }, + } + ] } ], "node_name": "server-1.domain.com", @@ -190,8 +191,8 @@ Bulk create/remove subscriptions for any number of remote nodes. Resets and repl _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `configure_cluster` -- connections _(required)_ - must be an object array with each object following the `add_node` schema. +- `operation` _(required)_ - must always be `configure_cluster` +- `connections` _(required)_ - must be an object array with each object following the `add_node` schema. ### Body @@ -251,8 +252,8 @@ Adds a route/routes to the `replication.routes` configuration. This operation be _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_set_routes` -- routes _(required)_ - the routes field is an array that specifies the routes for clustering. Each element in the array can be either a string or an object with `hostname` and `port` properties. +- `operation` _(required)_ - must always be `cluster_set_routes` +- `routes` _(required)_ - the routes field is an array that specifies the routes for clustering. Each element in the array can be either a string or an object with `hostname` and `port` properties. ### Body @@ -293,7 +294,7 @@ Gets the replication routes from the Harper config file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_get_routes` +- `operation` _(required)_ - must always be `cluster_get_routes` ### Body @@ -323,8 +324,8 @@ Removes route(s) from the Harper config file. 
Returns a deletion success message _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `cluster_delete_routes` -- routes _required_ - Must be an array of route object(s) +- `operation` _(required)_ - must always be `cluster_delete_routes` +- `routes` _(required)_ - Must be an array of route object(s) ### Body diff --git a/versioned_docs/version-4.6/developers/operations-api/components.md b/versioned_docs/version-4.6/developers/operations-api/components.md index fa301fdc..8799e5ca 100644 --- a/versioned_docs/version-4.6/developers/operations-api/components.md +++ b/versioned_docs/version-4.6/developers/operations-api/components.md @@ -10,9 +10,9 @@ Creates a new component project in the component root directory using a predefin _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_component` -- project _(required)_ - the name of the project you wish to create -- replicated _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `add_component` +- `project` _(required)_ - the name of the project you wish to create +- `replicated` _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. ### Body @@ -75,13 +75,13 @@ _Note: After deploying a component a restart may be required_ _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `deploy_component` -- project _(required)_ - the name of the project you wish to deploy -- package _(optional)_ - this can be any valid GitHub or NPM reference -- payload _(optional)_ - a base64-encoded string representation of the .tar file. Must be a string -- restart _(optional)_ - must be either a boolean or the string `rolling`. 
If set to `rolling`, a rolling restart will be triggered after the component is deployed, meaning that each node in the cluster will be sequentially restarted (waiting for the last restart to start the next). If set to `true`, the restart will not be rolling, all nodes will be restarted in parallel. If `replicated` is `true`, the restart operations will be replicated across the cluster. -- replicated _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. -- install*command *(optional)\_ - A command to use when installing the component. Must be a string. This can be used to install dependencies with pnpm or yarn, for example, like: `"install_command": "npm install -g pnpm && pnpm install"` +- `operation` _(required)_ - must always be `deploy_component` +- `project` _(required)_ - the name of the project you wish to deploy +- `package` _(optional)_ - this can be any valid GitHub or NPM reference +- `payload` _(optional)_ - a base64-encoded string representation of the .tar file. Must be a string +- `restart` _(optional)_ - must be either a boolean or the string `rolling`. If set to `rolling`, a rolling restart will be triggered after the component is deployed, meaning that each node in the cluster will be sequentially restarted (waiting for the last restart to start the next). If set to `true`, the restart will not be rolling; all nodes will be restarted in parallel. If `replicated` is `true`, the restart operations will be replicated across the cluster. +- `replicated` _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. +- `install_command` _(optional)_ - A command to use when installing the component. Must be a string.
This can be used to install dependencies with pnpm or yarn, for example: `"install_command": "npm install -g pnpm && pnpm install"` ### Body @@ -118,9 +118,9 @@ Creates a temporary `.tar` file of the specified project folder, then reads it i _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `package_component` -- project _(required)_ - the name of the project you wish to package -- skip*node_modules *(optional)\_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean +- `operation` _(required)_ - must always be `package_component` +- `project` _(required)_ - the name of the project you wish to package +- `skip_node_modules` _(optional)_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean ### Body @@ -151,11 +151,11 @@ Deletes a file from inside the component project or deletes the complete project _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_component` -- project _(required)_ - the name of the project you wish to delete or to delete from if using the `file` parameter -- file _(optional)_ - the path relative to your project folder of the file you wish to delete -- replicated _(optional)_ - if true, Harper will replicate the component deletion to all nodes in the cluster. Must be a boolean.
+- `restart` _(optional)_ - if true, Harper will restart after dropping the component. Must be a boolean. ### Body @@ -183,7 +183,7 @@ Gets all local component files and folders and any component config from `harper _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_components` +- `operation` _(required)_ - must always be `get_components` ### Body @@ -264,10 +264,10 @@ Gets the contents of a file inside a component project. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_component_file` -- project _(required)_ - the name of the project where the file is located -- file _(required)_ - the path relative to your project folder of the file you wish to view -- encoding _(optional)_ - the encoding that will be passed to the read file call. Defaults to `utf8` +- `operation` _(required)_ - must always be `get_component_file` +- `project` _(required)_ - the name of the project where the file is located +- `file` _(required)_ - the path relative to your project folder of the file you wish to view +- `encoding` _(optional)_ - the encoding that will be passed to the read file call. Defaults to `utf8` ### Body @@ -295,12 +295,12 @@ Creates or updates a file inside a component project. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_component_file` -- project _(required)_ - the name of the project the file is located in -- file _(required)_ - the path relative to your project folder of the file you wish to set -- payload _(required)_ - what will be written to the file -- encoding _(optional)_ - the encoding that will be passed to the write file call. Defaults to `utf8` -- replicated _(optional)_ - if true, Harper will replicate the component update to all nodes in the cluster. Must be a boolean. 
+- `operation` _(required)_ - must always be `set_component_file` +- `project` _(required)_ - the name of the project the file is located in +- `file` _(required)_ - the path relative to your project folder of the file you wish to set +- `payload` _(required)_ - what will be written to the file +- `encoding` _(optional)_ - the encoding that will be passed to the write file call. Defaults to `utf8` +- `replicated` _(optional)_ - if true, Harper will replicate the component update to all nodes in the cluster. Must be a boolean. ### Body @@ -329,23 +329,15 @@ Adds an SSH key for deploying components from private repositories. This will also add it to the generated SSH config. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_ssh_key` -- name _(required)_ - the name of the key -- key _(required)_ - the private key contents. Must be an ed25519 key. Line breaks must be delimited with `\n` and have a trailing `\n` -- host _(required)_ - the host for the ssh config (see below). Used as part of the `package` url when deploying a component using this key -- hostname _(required)_ - the hostname for the ssh config (see below). Used to map `host` to an actual domain (e.g. `github.com`) -- known*hosts *(optional)\_ - the public SSH keys of the host your component will be retrieved from. If `hostname` is `github.com` this will be retrieved automatically. Line breaks must be delimited with `\n` -- replicated _(optional)_ - if true, HarperDB will replicate the key to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `add_ssh_key` +- `name` _(required)_ - the name of the key +- `key` _(required)_ - the private key contents. Must be an ed25519 key. Line breaks must be delimited with `\n` and have a trailing `\n` +- `host` _(required)_ - the host for the ssh config (see below). Used as part of the `package` url when deploying a component using this key +- `hostname` _(required)_ - the hostname for the ssh config (see below).
Used to map `host` to an actual domain (e.g. `github.com`) +- `known_hosts` _(optional)_ - the public SSH keys of the host your component will be retrieved from. If `hostname` is `github.com` this will be retrieved automatically. Line breaks must be delimited with `\n` +- `replicated` _(optional)_ - if true, HarperDB will replicate the key to all nodes in the cluster. Must be a boolean. _Operation is restricted to super_user roles only_ -* operation _(required)_ - must always be `add_ssh_key` -* name _(required)_ - the name of the key -* key _(required)_ - the private key contents. Line breaks must be delimited with -* host _(required)_ - the host for the ssh config (see below). Used as part of the `package` url when deploying a component using this key -* hostname _(required)_ - the hostname for the ssh config (see below). Used to map `host` to an actual domain (e.g. `github.com`) -* known*hosts *(optional)\_ - the public SSH keys of the host your component will be retrieved from. If `hostname` is `github.com` this will be retrieved automatically. Line breaks must be delimited with -* replicated _(optional)_ - if true, Harper will replicate the key to all nodes in the cluster. Must be a boolean. - ### Body ```json @@ -393,10 +385,10 @@ Updates the private key contents of an existing SSH key. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `update_ssh_key` -- name _(required)_ - the name of the key to be updated -- key _(required)_ - the private key contents. Must be an ed25519 key. Line breaks must be delimited with `\n` and have a trailing `\n` -- replicated _(optional)_ - if true, Harper will replicate the key update to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `update_ssh_key` +- `name` _(required)_ - the name of the key to be updated +- `key` _(required)_ - the private key contents. Must be an ed25519 key. 
Line breaks must be delimited with `\n` and have a trailing `\n` +- `replicated` _(optional)_ - if true, Harper will replicate the key update to all nodes in the cluster. Must be a boolean. ### Body @@ -424,9 +416,9 @@ Deletes a SSH key. This will also remove it from the generated SSH config. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_ssh_key` -- name _(required)_ - the name of the key to be deleted -- replicated _(optional)_ - if true, Harper will replicate the key deletion to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `delete_ssh_key` +- `name` _(required)_ - the name of the key to be deleted +- `replicated` _(optional)_ - if true, Harper will replicate the key deletion to all nodes in the cluster. Must be a boolean. ### Body @@ -452,7 +444,7 @@ List off the names of added SSH keys _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_ssh_keys` +- `operation` _(required)_ - must always be `list_ssh_keys` ### Body @@ -468,11 +460,12 @@ _Operation is restricted to super_user roles only_ [ { "name": "harperdb-private-component" - }, - ... + } ] ``` +_Note: Additional SSH keys would appear as more objects in this array_ + --- ## Set SSH Known Hosts @@ -481,9 +474,9 @@ Sets the SSH known_hosts file. This will overwrite the file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_ssh_known_hosts` -- known*hosts *(required)\_ - The contents to set the known_hosts to. Line breaks must be delimite d with -- replicated _(optional)_ - if true, Harper will replicate the known hosts to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - must always be `set_ssh_known_hosts` +- `known_hosts` _(required)_ - The contents to set the known_hosts to. 
Line breaks must be delimited with `\n` +- `replicated` _(optional)_ - if true, Harper will replicate the known hosts to all nodes in the cluster. Must be a boolean. ### Body @@ -508,7 +508,7 @@ Gets the contents of the known_hosts file _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_ssh_known_hosts` +- `operation` _(required)_ - must always be `get_ssh_known_hosts` ### Body @@ -535,9 +535,9 @@ Executes npm install against specified custom function projects. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `install_node_modules` -- projects _(required)_ - must ba an array of custom functions projects. -- dry*run*(optional)\_ - refers to the npm --dry-run flag: [https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run](https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run). Defaults to false. +- `operation` _(required)_ - must always be `install_node_modules` +- `projects` _(required)_ - must be an array of custom function projects. +- `dry_run` _(optional)_ - refers to the npm --dry-run flag: [https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run](https://docs.npmjs.com/cli/v8/commands/npm-install#dry-run). Defaults to false. ### Body diff --git a/versioned_docs/version-4.6/developers/operations-api/configuration.md b/versioned_docs/version-4.6/developers/operations-api/configuration.md index 99599843..a09a38a0 100644 --- a/versioned_docs/version-4.6/developers/operations-api/configuration.md +++ b/versioned_docs/version-4.6/developers/operations-api/configuration.md @@ -10,9 +10,9 @@ Modifies the Harper configuration file parameters.
Must follow with a restart or _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_configuration` -- logging*level *(example/optional)\_ - one or more configuration keywords to be updated in the Harper configuration file -- clustering*enabled *(example/optional)\_ - one or more configuration keywords to be updated in the Harper configuration file +- `operation` _(required)_ - must always be `set_configuration` +- `logging_level` _(optional)_ - one or more configuration keywords to be updated in the Harper configuration file +- `clustering_enabled` _(optional)_ - one or more configuration keywords to be updated in the Harper configuration file ### Body @@ -40,7 +40,7 @@ Returns the Harper configuration parameters. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_configuration` +- `operation` _(required)_ - must always be `get_configuration` ### Body diff --git a/versioned_docs/version-4.6/developers/operations-api/custom-functions.md b/versioned_docs/version-4.6/developers/operations-api/custom-functions.md index 0b2261e0..23709148 100644 --- a/versioned_docs/version-4.6/developers/operations-api/custom-functions.md +++ b/versioned_docs/version-4.6/developers/operations-api/custom-functions.md @@ -12,7 +12,7 @@ Returns the state of the Custom functions server. This includes whether it is en _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `custom_function_status` +- `operation` _(required)_ - must always be `custom_function_status` ### Body @@ -40,7 +40,7 @@ Returns an array of projects within the Custom Functions root project directory. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_custom_functions` +- `operation` _(required)_ - must always be `get_custom_functions` ### Body @@ -70,10 +70,10 @@ Returns the content of the specified file as text. 
Harper Studio uses this call _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_custom_function` -- project _(required)_ - the name of the project containing the file for which you wish to get content -- type _(required)_ - the name of the sub-folder containing the file for which you wish to get content - must be either routes or helpers -- file _(required)_ - The name of the file for which you wish to get content - should not include the file extension (which is always .js) +- `operation` _(required)_ - must always be `get_custom_function` +- `project` _(required)_ - the name of the project containing the file for which you wish to get content +- `type` _(required)_ - the name of the sub-folder containing the file for which you wish to get content - must be either routes or helpers +- `file` _(required)_ - The name of the file for which you wish to get content - should not include the file extension (which is always .js) ### Body @@ -102,11 +102,11 @@ Updates the content of the specified file.
Harper Studio uses this call to save _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_custom_function` -- project _(required)_ - the name of the project containing the file for which you wish to set content -- type _(required)_ - the name of the sub-folder containing the file for which you wish to set content - must be either routes or helpers -- file _(required)_ - the name of the file for which you wish to set content - should not include the file extension (which is always .js) -- function*content *(required)\_ - the content you wish to save into the specified file +- `operation` _(required)_ - must always be `set_custom_function` +- `project` _(required)_ - the name of the project containing the file for which you wish to set content +- `type` _(required)_ - the name of the sub-folder containing the file for which you wish to set content - must be either routes or helpers +- `file` _(required)_ - the name of the file for which you wish to set content - should not include the file extension (which is always .js) +- `function_content` _(required)_ - the content you wish to save into the specified file ### Body @@ -136,10 +136,10 @@ Deletes the specified file. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_custom_function` -- project _(required)_ - the name of the project containing the file you wish to delete -- type _(required)_ - the name of the sub-folder containing the file you wish to delete. Must be either routes or helpers -- file _(required)_ - the name of the file you wish to delete. Should not include the file extension (which is always .js) +- `operation` _(required)_ - must always be `drop_custom_function` +- `project` _(required)_ - the name of the project containing the file you wish to delete +- `type` _(required)_ - the name of the sub-folder containing the file you wish to delete. 
Must be either routes or helpers +- `file` _(required)_ - the name of the file you wish to delete. Should not include the file extension (which is always .js) ### Body @@ -168,8 +168,8 @@ Creates a new project folder in the Custom Functions root project directory. It _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_custom_function_project` -- project _(required)_ - the name of the project you wish to create +- `operation` _(required)_ - must always be `add_custom_function_project` +- `project` _(required)_ - the name of the project you wish to create ### Body @@ -196,8 +196,8 @@ Deletes the specified project folder and all of its contents. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_custom_function_project` -- project _(required)_ - the name of the project you wish to delete +- `operation` _(required)_ - must always be `drop_custom_function_project` +- `project` _(required)_ - the name of the project you wish to delete ### Body @@ -224,9 +224,9 @@ Creates a .tar file of the specified project folder, then reads it into a base64 _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `package_custom_function_project` -- project _(required)_ - the name of the project you wish to package up for deployment -- skip*node_modules *(optional)\_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean. +- `operation` _(required)_ - must always be `package_custom_function_project` +- `project` _(required)_ - the name of the project you wish to package up for deployment +- `skip_node_modules` _(optional)_ - if true, creates option for tar module that will exclude the project's node_modules directory. Must be a boolean. 
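Because `package_custom_function_project` returns the archive as a base64-encoded string (the `payload` parameter consumed by `deploy_custom_function_project`), a client that wants the archive on disk just decodes that string back into a `.tar` file. A minimal sketch, assuming the response carries the base64 string in a `payload` field and using an illustrative project name:

```python
import base64

# Request body for the package operation, per the parameter list above.
# "dogs" is an illustrative project name.
package_body = {
    "operation": "package_custom_function_project",
    "project": "dogs",
    "skip_node_modules": True,
}

def payload_to_tar(payload_b64: str, out_path: str) -> None:
    """Decode a base64 payload string and write it out as a .tar archive.

    The `payload` field name is an assumption based on the
    deploy_custom_function_project parameters in these docs.
    """
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(payload_b64))
```

Passing the same string unchanged as `payload` to `deploy_custom_function_project` is the round trip the next section describes.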
### Body @@ -256,9 +256,9 @@ Takes the output of package_custom_function_project, decodes the base64-encoded _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `deploy_custom_function_project` -- project _(required)_ - the name of the project you wish to deploy. Must be a string -- payload _(required)_ - a base64-encoded string representation of the .tar file. Must be a string +- `operation` _(required)_ - must always be `deploy_custom_function_project` +- `project` _(required)_ - the name of the project you wish to deploy. Must be a string +- `payload` _(required)_ - a base64-encoded string representation of the .tar file. Must be a string ### Body diff --git a/versioned_docs/version-4.6/developers/operations-api/databases-and-tables.md b/versioned_docs/version-4.6/developers/operations-api/databases-and-tables.md index eea77222..7c17fb4d 100644 --- a/versioned_docs/version-4.6/developers/operations-api/databases-and-tables.md +++ b/versioned_docs/version-4.6/developers/operations-api/databases-and-tables.md @@ -8,7 +8,7 @@ title: Databases and Tables Returns the definitions of all databases and tables within the database. Record counts above 5000 records are estimated, as determining the exact count can be expensive. When the record count is estimated, this is indicated by the inclusion of a confidence interval of `estimated_record_range`. If you need the exact count, you can include an `"exact_count": true` in the operation, but be aware that this requires a full table scan (may be expensive). -- operation _(required)_ - must always be `describe_all` +- `operation` _(required)_ - must always be `describe_all` ### Body @@ -63,8 +63,8 @@ Returns the definitions of all databases and tables within the database. Record Returns the definitions of all tables within the specified database.
-- operation _(required)_ - must always be `describe_database` -- database _(optional)_ - database where the table you wish to describe lives. The default is `data` +- `operation` _(required)_ - must always be `describe_database` +- `database` _(optional)_ - database where the table you wish to describe lives. The default is `data` ### Body @@ -118,9 +118,9 @@ Returns the definitions of all tables within the specified database. Returns the definition of the specified table. -- operation _(required)_ - must always be `describe_table` -- table _(required)_ - table you wish to describe -- database _(optional)_ - database where the table you wish to describe lives. The default is `data` +- `operation` _(required)_ - must always be `describe_table` +- `table` _(required)_ - table you wish to describe +- `database` _(optional)_ - database where the table you wish to describe lives. The default is `data` ### Body @@ -174,8 +174,8 @@ Create a new database. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `create_database` -- database _(optional)_ - name of the database you are creating. The default is `data` +- `operation` _(required)_ - must always be `create_database` +- `database` _(optional)_ - name of the database you are creating. The default is `data` ### Body @@ -202,9 +202,9 @@ Drop an existing database. NOTE: Dropping a database will delete all tables and _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `drop_database` -- database _(required)_ - name of the database you are dropping -- replicated _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - this should always be `drop_database` +- `database` _(required)_ - name of the database you are dropping +- `replicated` _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. 
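Every operation on this page shares one calling convention: POST a JSON body whose `operation` field names the operation. A minimal sketch of building such a request for the `drop_database` operation just described (the URL, credentials, and database name are placeholders; 9925 is Harper's usual default operations-API port, so adjust for your deployment):

```python
import base64
import json
import urllib.request

def operation_request(body: dict, url: str = "http://localhost:9925",
                      user: str = "admin", password: str = "password") -> urllib.request.Request:
    """Build a POST for the operations API with Basic auth.

    All operations documented here are JSON bodies sent to the same
    endpoint; only the `operation` field and its parameters vary.
    """
    req = urllib.request.Request(url, data=json.dumps(body).encode(), method="POST")
    req.add_header("Content-Type", "application/json")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

# Example body per the drop_database parameters above ("dev" is illustrative).
req = operation_request({"operation": "drop_database", "database": "dev", "replicated": True})
# urllib.request.urlopen(req) would execute it -- destructive: drops all tables!
```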
### Body @@ -231,15 +231,15 @@ Create a new table within a database. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `create_table` -- database _(optional)_ - name of the database where you want your table to live. If the database does not exist, it will be created. If the `database` property is not provided it will default to `data`. -- table _(required)_ - name of the table you are creating -- primary*key *(required)\_ - primary key for the table -- attributes _(optional)_ - an array of attributes that specifies the schema for the table, that is the set of attributes for the table. When attributes are supplied the table will not be considered a "dynamic schema" table, and attributes will not be auto-added when records with new properties are inserted. Each attribute is specified as: - - name _(required)_ - the name of the attribute - - indexed _(optional)_ - indicates if the attribute should be indexed - - type _(optional)_ - specifies the data type of the attribute (can be String, Int, Float, Date, ID, Any) -- expiration _(optional)_ - specifies the time-to-live or expiration of records in the table before they are evicted (records are not evicted on any timer if not specified). This is specified in seconds. +- `operation` _(required)_ - must always be `create_table` +- `database` _(optional)_ - name of the database where you want your table to live. If the database does not exist, it will be created. If the `database` property is not provided it will default to `data`. +- `table` _(required)_ - name of the table you are creating +- `primary_key` _(required)_ - primary key for the table +- `attributes` _(optional)_ - an array of attributes that specifies the schema for the table, that is the set of attributes for the table. When attributes are supplied the table will not be considered a "dynamic schema" table, and attributes will not be auto-added when records with new properties are inserted. 
Each attribute is specified as: + - `name` _(required)_ - the name of the attribute + - `indexed` _(optional)_ - indicates if the attribute should be indexed + - `type` _(optional)_ - specifies the data type of the attribute (can be String, Int, Float, Date, ID, Any) +- `expiration` _(optional)_ - specifies the time-to-live or expiration of records in the table before they are evicted (records are not evicted on any timer if not specified). This is specified in seconds. ### Body @@ -268,10 +268,10 @@ Drop an existing database table. NOTE: Dropping a table will delete all associat _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `drop_table` -- database _(optional)_ - database where the table you are dropping lives. The default is `data` -- table _(required)_ - name of the table you are dropping -- replicated _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. +- `operation` _(required)_ - this should always be `drop_table` +- `database` _(optional)_ - database where the table you are dropping lives. The default is `data` +- `table` _(required)_ - name of the table you are dropping +- `replicated` _(optional)_ - if true, Harper will replicate the component to all nodes in the cluster. Must be a boolean. ### Body @@ -299,10 +299,10 @@ Create a new attribute within the specified table. **The create_attribute operat _Note: Harper will automatically create new attributes on insert and update if they do not already exist within the database._ -- operation _(required)_ - must always be `create_attribute` -- database _(optional)_ - name of the database of the table you want to add your attribute. 
The default is `data` -- table _(required)_ - name of the table where you want to add your attribute to live -- attribute _(required)_ - name for the attribute +- `operation` _(required)_ - must always be `create_attribute` +- `database` _(optional)_ - name of the database of the table you want to add your attribute. The default is `data` +- `table` _(required)_ - name of the table where you want to add your attribute to live +- `attribute` _(required)_ - name for the attribute ### Body @@ -333,10 +333,10 @@ Drop an existing attribute from the specified table. NOTE: Dropping an attribute _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `drop_attribute` -- database _(optional)_ - database where the table you are dropping lives. The default is `data` -- table _(required)_ - table where the attribute you are dropping lives -- attribute _(required)_ - attribute that you intend to drop +- `operation` _(required)_ - this should always be `drop_attribute` +- `database` _(optional)_ - database where the table you are dropping lives. 
The default is `data` +- `table` _(required)_ - table where the attribute you are dropping lives +- `attribute` _(required)_ - attribute that you intend to drop ### Body @@ -367,10 +367,10 @@ It is important to note that trying to copy a database file that is in use (Harp _Operation is restricted to super_user roles only_ -- operation _(required)_ - this should always be `get_backup` -- database _(required)_ - this is the database that will be snapshotted and returned -- table _(optional)_ - this will specify a specific table to backup -- tables _(optional)_ - this will specify a specific set of tables to backup +- `operation` _(required)_ - this should always be `get_backup` +- `database` _(required)_ - this is the database that will be snapshotted and returned +- `table` _(optional)_ - this will specify a specific table to backup +- `tables` _(optional)_ - this will specify a specific set of tables to backup ### Body diff --git a/versioned_docs/version-4.6/developers/operations-api/jobs.md b/versioned_docs/version-4.6/developers/operations-api/jobs.md index 173125a1..cf71fa00 100644 --- a/versioned_docs/version-4.6/developers/operations-api/jobs.md +++ b/versioned_docs/version-4.6/developers/operations-api/jobs.md @@ -8,8 +8,8 @@ title: Jobs Returns job status, metrics, and messages for the specified job ID. 
-- operation _(required)_ - must always be `get_job` -- id _(required)_ - the id of the job you wish to view +- `operation` _(required)_ - must always be `get_job` +- `id` _(required)_ - the id of the job you wish to view ### Body @@ -50,9 +50,9 @@ Returns a list of job statuses, metrics, and messages for all jobs executed with _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `search_jobs_by_start_date` -- from*date *(required)\_ - the date you wish to start the search -- to*date *(required)\_ - the date you wish to end the search +- `operation` _(required)_ - must always be `search_jobs_by_start_date` +- `from_date` _(required)_ - the date you wish to start the search +- `to_date` _(required)_ - the date you wish to end the search ### Body diff --git a/versioned_docs/version-4.6/developers/operations-api/logs.md b/versioned_docs/version-4.6/developers/operations-api/logs.md index 7b173585..52e52740 100644 --- a/versioned_docs/version-4.6/developers/operations-api/logs.md +++ b/versioned_docs/version-4.6/developers/operations-api/logs.md @@ -10,13 +10,13 @@ Returns log outputs from the primary Harper log based on the provided search cri _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_Log` -- start _(optional)_ - result to start with. Default is 0, the first log in `hdb.log`. Must be a number -- limit _(optional)_ - number of results returned. Default behavior is 1000. Must be a number -- level _(optional)_ - error level to filter on. Default behavior is all levels. Must be `notify`, `error`, `warn`, `info`, `debug` or `trace` -- from _(optional)_ - date to begin showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is first log in `hdb.log` -- until _(optional)_ - date to end showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is last log in `hdb.log` -- order _(optional)_ - order to display logs desc or asc by timestamp. 
By default, will maintain `hdb.log` order +- `operation` _(required)_ - must always be `read_log` +- `start` _(optional)_ - result to start with. Default is 0, the first log in `hdb.log`. Must be a number +- `limit` _(optional)_ - number of results returned. Default behavior is 1000. Must be a number +- `level` _(optional)_ - error level to filter on. Default behavior is all levels. Must be `notify`, `error`, `warn`, `info`, `debug` or `trace` +- `from` _(optional)_ - date to begin showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is first log in `hdb.log` +- `until` _(optional)_ - date to end showing log results. Must be `YYYY-MM-DD` or `YYYY-MM-DD hh:mm:ss`. Default is last log in `hdb.log` +- `order` _(optional)_ - order to display logs desc or asc by timestamp. By default, will maintain `hdb.log` order ### Body @@ -68,12 +68,12 @@ Returns all transactions logged for the specified database table. You may filter _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_transaction_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- from _(optional)_ - time format must be millisecond-based epoch in UTC -- to _(optional)_ - time format must be millisecond-based epoch in UTC -- limit _(optional)_ - max number of logs you want to receive. Must be a number +- `operation` _(required)_ - must always be `read_transaction_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `from` _(optional)_ - time format must be millisecond-based epoch in UTC +- `to` _(optional)_ - time format must be millisecond-based epoch in UTC +- `limit` _(optional)_ - max number of logs you want to receive.
Must be a number ### Body @@ -271,10 +271,10 @@ Deletes transaction log data for the specified database table that is older than _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_transaction_log_before` -- schema _(required)_ - schema under which the transaction log resides. Must be a string -- table _(required)_ - table under which the transaction log resides. Must be a string -- timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC +- `operation` _(required)_ - must always be `delete_transaction_log_before` +- `schema` _(required)_ - schema under which the transaction log resides. Must be a string +- `table` _(required)_ - table under which the transaction log resides. Must be a string +- `timestamp` _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC ### Body @@ -303,11 +303,11 @@ AuditLog must be enabled in the Harper configuration file to make this request. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - possibilities are `hash_value`, `timestamp` and `username` -- search*values *(optional)\_ - an array of string or numbers relating to search_type +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - possibilities are `hash_value`, `timestamp` and `username` +- `search_values` _(optional)_ - an array of strings or numbers relating to search_type ### Body @@ -398,11 +398,11 @@ AuditLog must be enabled in the Harper configuration file to make this request.
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - timestamp -- search*values *(optional)\_ - an array containing a maximum of two values \[`from_timestamp`, `to_timestamp`] defining the range of transactions you would like to view. +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - timestamp +- `search_values` _(optional)_ - an array containing a maximum of two values \[`from_timestamp`, `to_timestamp`] defining the range of transactions you would like to view. - Timestamp format is millisecond-based epoch in UTC - If no items are supplied then all transactions are returned - If only one entry is supplied then all transactions after the supplied timestamp will be returned @@ -519,11 +519,11 @@ AuditLog must be enabled in the Harper configuration file to make this request. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - username -- search*values *(optional)\_ - the Harper user for whom you would like to view transactions +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - username +- `search_values` _(optional)_ - the Harper user for whom you would like to view transactions ### Body @@ -639,11 +639,11 @@ AuditLog must be enabled in the Harper configuration file to make this request. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `read_audit_log` -- schema _(required)_ - schema under which the transaction log resides -- table _(required)_ - table under which the transaction log resides -- search*type *(optional)\_ - hash_value -- search*values *(optional)\_ - an array of hash_attributes for which you wish to see transaction logs +- `operation` _(required)_ - must always be `read_audit_log` +- `schema` _(required)_ - schema under which the transaction log resides +- `table` _(required)_ - table under which the transaction log resides +- `search_type` _(optional)_ - hash_value +- `search_values` _(optional)_ - an array of hash_attributes for which you wish to see transaction logs ### Body @@ -707,10 +707,10 @@ AuditLog must be enabled in the Harper configuration file to make this request. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `delete_audit_logs_before` -- schema _(required)_ - schema under which the transaction log resides. Must be a string -- table _(required)_ - table under which the transaction log resides. 
Must be a string -- timestamp _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC +- `operation` _(required)_ - must always be `delete_audit_logs_before` +- `schema` _(required)_ - schema under which the transaction log resides. Must be a string +- `table` _(required)_ - table under which the transaction log resides. Must be a string +- `timestamp` _(required)_ - records older than this date will be deleted. Format is millisecond-based epoch in UTC ### Body diff --git a/versioned_docs/version-4.6/developers/operations-api/nosql-operations.md b/versioned_docs/version-4.6/developers/operations-api/nosql-operations.md index b8ad9146..e9272508 100644 --- a/versioned_docs/version-4.6/developers/operations-api/nosql-operations.md +++ b/versioned_docs/version-4.6/developers/operations-api/nosql-operations.md @@ -8,10 +8,10 @@ title: NoSQL Operations Adds one or more rows of data to a database table. Primary keys of the inserted JSON record may be supplied on insert. If a primary key is not provided, then a GUID or incremented number (depending on type) will be generated for each record. -- operation _(required)_ - must always be `insert` -- database _(optional)_ - database where the table you are inserting records into lives. The default is `data` -- table _(required)_ - table where you want to insert records -- records _(required)_ - array of one or more records for insert +- `operation` _(required)_ - must always be `insert` +- `database` _(optional)_ - database where the table you are inserting records into lives. 
The default is `data` +- `table` _(required)_ - table where you want to insert records +- `records` _(required)_ - array of one or more records for insert ### Body diff --git a/versioned_docs/version-4.6/developers/operations-api/registration.md b/versioned_docs/version-4.6/developers/operations-api/registration.md index 56775c5d..28c6a0e9 100644 --- a/versioned_docs/version-4.6/developers/operations-api/registration.md +++ b/versioned_docs/version-4.6/developers/operations-api/registration.md @@ -8,7 +8,7 @@ title: Registration Returns the registration data of the Harper instance. -- operation _(required)_ - must always be `registration_info` +- `operation` _(required)_ - must always be `registration_info` ### Body @@ -37,7 +37,7 @@ Returns the Harper fingerprint, uniquely generated based on the machine, for lic _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_fingerprint` +- `operation` _(required)_ - must always be `get_fingerprint` ### Body @@ -55,9 +55,9 @@ Sets the Harper license as generated by Harper License Management software. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_license` -- key _(required)_ - your license key -- company _(required)_ - the company that was used in the license +- `operation` _(required)_ - must always be `set_license` +- `key` _(required)_ - your license key +- `company` _(required)_ - the company that was used in the license ### Body diff --git a/versioned_docs/version-4.6/developers/operations-api/sql-operations.md b/versioned_docs/version-4.6/developers/operations-api/sql-operations.md index 71dfa436..4b7076bb 100644 --- a/versioned_docs/version-4.6/developers/operations-api/sql-operations.md +++ b/versioned_docs/version-4.6/developers/operations-api/sql-operations.md @@ -12,8 +12,8 @@ Harper encourages developers to utilize other querying tools over SQL for perfor Executes the provided SQL statement. 
The SELECT statement is used to query data from the database. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -48,8 +48,8 @@ Executes the provided SQL statement. The SELECT statement is used to query data Executes the provided SQL statement. The INSERT statement is used to add one or more rows to a database table. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -76,8 +76,8 @@ Executes the provided SQL statement. The INSERT statement is used to add one or Executes the provided SQL statement. The UPDATE statement is used to change the values of specified attributes in one or more rows in a database table. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body @@ -104,8 +104,8 @@ Executes the provided SQL statement. The UPDATE statement is used to change the Executes the provided SQL statement. The DELETE statement is used to remove one or more rows of data from a database table. -- operation _(required)_ - must always be `sql` -- sql _(required)_ - use standard SQL +- `operation` _(required)_ - must always be `sql` +- `sql` _(required)_ - use standard SQL ### Body diff --git a/versioned_docs/version-4.6/developers/operations-api/system-operations.md b/versioned_docs/version-4.6/developers/operations-api/system-operations.md index da47e104..d39e93cb 100644 --- a/versioned_docs/version-4.6/developers/operations-api/system-operations.md +++ b/versioned_docs/version-4.6/developers/operations-api/system-operations.md @@ -10,7 +10,7 @@ Restarts the Harper instance. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `restart` +- `operation` _(required)_ - must always be `restart` ### Body @@ -36,9 +36,9 @@ Restarts servers for the specified Harper service. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `restart_service` -- service _(required)_ - must be one of: `http_workers`, `clustering_config` or `clustering` -- replicated _(optional)_ - must be a boolean. If set to `true`, Harper will replicate the restart service operation across all nodes in the cluster. The restart will occur as a rolling restart, ensuring that each node is fully restarted before the next node begins restarting. +- `operation` _(required)_ - must always be `restart_service` +- `service` _(required)_ - must be one of: `http_workers`, `clustering_config` or `clustering` +- `replicated` _(optional)_ - must be a boolean. If set to `true`, Harper will replicate the restart service operation across all nodes in the cluster. The restart will occur as a rolling restart, ensuring that each node is fully restarted before the next node begins restarting. ### Body @@ -65,8 +65,8 @@ Returns detailed metrics on the host system. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `system_information` -- attributes _(optional)_ - string array of top level attributes desired in the response, if no value is supplied all attributes will be returned. Available attributes are: ['system', 'time', 'cpu', 'memory', 'disk', 'network', 'harperdb_processes', 'table_size', 'metrics', 'threads', 'replication'] +- `operation` _(required)_ - must always be `system_information` +- `attributes` _(optional)_ - string array of top level attributes desired in the response, if no value is supplied all attributes will be returned. 
Available attributes are: ['system', 'time', 'cpu', 'memory', 'disk', 'network', 'harperdb_processes', 'table_size', 'metrics', 'threads', 'replication'] ### Body @@ -84,9 +84,9 @@ Sets a status value that can be used for application-specific status tracking. S _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `set_status` -- id _(required)_ - the key identifier for the status -- status _(required)_ - the status value to set (string between 1-512 characters) +- `operation` _(required)_ - must always be `set_status` +- `id` _(required)_ - the key identifier for the status +- `status` _(required)_ - the status value to set (string between 1-512 characters) ### Body @@ -124,8 +124,8 @@ Retrieves a status value previously set with the set_status operation. _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `get_status` -- id _(optional)_ - the key identifier for the status to retrieve (defaults to all statuses if not provided) +- `operation` _(required)_ - must always be `get_status` +- `id` _(optional)_ - the key identifier for the status to retrieve (defaults to all statuses if not provided) ### Body @@ -174,8 +174,8 @@ Removes a status entry by its ID. 
_Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `clear_status` -- id _(required)_ - the key identifier for the status to remove +- `operation` _(required)_ - must always be `clear_status` +- `id` _(required)_ - the key identifier for the status to remove ### Body diff --git a/versioned_docs/version-4.6/developers/operations-api/token-authentication.md b/versioned_docs/version-4.6/developers/operations-api/token-authentication.md index b9ff5b31..178db842 100644 --- a/versioned_docs/version-4.6/developers/operations-api/token-authentication.md +++ b/versioned_docs/version-4.6/developers/operations-api/token-authentication.md @@ -10,9 +10,9 @@ Creates the tokens needed for authentication: operation & refresh token. _Note - this operation does not require authorization to be set_ -- operation _(required)_ - must always be `create_authentication_tokens` -- username _(required)_ - username of user to generate tokens for -- password _(required)_ - password of user to generate tokens for +- `operation` _(required)_ - must always be `create_authentication_tokens` +- `username` _(required)_ - username of user to generate tokens for +- `password` _(required)_ - password of user to generate tokens for ### Body @@ -39,8 +39,8 @@ _Note - this operation does not require authorization to be set_ This operation creates a new operation token. 
-- operation _(required)_ - must always be `refresh_operation_token` -- refresh*token *(required)\_ - the refresh token that was provided when tokens were created +- `operation` _(required)_ - must always be `refresh_operation_token` +- `refresh_token` _(required)_ - the refresh token that was provided when tokens were created ### Body diff --git a/versioned_docs/version-4.6/developers/operations-api/users-and-roles.md b/versioned_docs/version-4.6/developers/operations-api/users-and-roles.md index ecaa1117..91f222b9 100644 --- a/versioned_docs/version-4.6/developers/operations-api/users-and-roles.md +++ b/versioned_docs/version-4.6/developers/operations-api/users-and-roles.md @@ -10,7 +10,7 @@ Returns a list of all roles. [Learn more about Harper roles here.](../security/u _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_roles` +- `operation` _(required)_ - must always be `list_roles` ### Body @@ -80,11 +80,11 @@ Creates a new role with the specified permissions. [Learn more about Harper role _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_role` -- role _(required)_ - name of role you are defining -- permission _(required)_ - object defining permissions for users associated with this role: - - super*user *(optional)\_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. - - structure_user (optional) - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. 
+- `operation` _(required)_ - must always be `add_role` +- `role` _(required)_ - name of role you are defining +- `permission` _(required)_ - object defining permissions for users associated with this role: + - `super_user` _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. + - `structure_user` _(optional)_ - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. ### Body @@ -158,12 +158,12 @@ Modifies an existing role with the specified permissions. updates permissions fr _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `alter_role` -- id _(required)_ - the id value for the role you are altering -- role _(optional)_ - name value to update on the role you are altering -- permission _(required)_ - object defining permissions for users associated with this role: - - super*user *(optional)\_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. - - structure_user (optional) - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. 
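Putting the `add_role` parameters above together, a request body might look like this sketch (the role name `developer` and the database name `dev` are hypothetical):

```json
{
	"operation": "add_role",
	"role": "developer",
	"permission": {
		"super_user": false,
		"structure_user": ["dev"]
	}
}
```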
+- `operation` _(required)_ - must always be `alter_role` +- `id` _(required)_ - the id value for the role you are altering +- `role` _(optional)_ - name value to update on the role you are altering +- `permission` _(required)_ - object defining permissions for users associated with this role: + - `super_user` _(optional)_ - boolean which, if set to true, gives users associated with this role full access to all operations and methods. If not included, value will be assumed to be false. + - `structure_user` _(optional)_ - boolean OR array of database names (as strings). If boolean, user can create new databases and tables. If array of strings, users can only manage tables within the specified databases. This overrides any individual table permissions for specified databases, or for all databases if the value is true. ### Body @@ -237,8 +237,8 @@ Deletes an existing role from the database. NOTE: Role with associated users can _Operation is restricted to super_user roles only_ -- operation _(required)_ - this must always be `drop_role` -- id _(required)_ - this is the id of the role you are dropping +- `operation` _(required)_ - this must always be `drop_role` +- `id` _(required)_ - this is the id of the role you are dropping ### Body @@ -265,7 +265,7 @@ Returns a list of all users. [Learn more about Harper roles here.](../security/u _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `list_users` +- `operation` _(required)_ - must always be `list_users` ### Body @@ -377,7 +377,7 @@ _Operation is restricted to super_user roles only_ Returns user data for the associated user credentials. -- operation _(required)_ - must always be `user_info` +- `operation` _(required)_ - must always be `user_info` ### Body @@ -415,11 +415,11 @@ Creates a new user with the specified role and credentials. 
[Learn more about Ha _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `add_user` -- role _(required)_ - 'role' name value of the role you wish to assign to the user. See `add_role` for more detail -- username _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash -- password _(required)_ - clear text for password. Harper will encrypt the password upon receipt -- active _(required)_ - boolean value for status of user's access to your Harper instance. If set to false, user will not be able to access your instance of Harper. +- `operation` _(required)_ - must always be `add_user` +- `role` _(required)_ - 'role' name value of the role you wish to assign to the user. See `add_role` for more detail +- `username` _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash +- `password` _(required)_ - clear text for password. Harper will encrypt the password upon receipt +- `active` _(required)_ - boolean value for status of user's access to your Harper instance. If set to false, user will not be able to access your instance of Harper. ### Body @@ -449,11 +449,11 @@ Modifies an existing user's role and/or credentials. [Learn more about Harper ro _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `alter_user` -- username _(required)_ - username assigned to the user. It can not be altered after adding the user. It serves as the hash. -- password _(optional)_ - clear text for password. Harper will encrypt the password upon receipt -- role _(optional)_ - `role` name value of the role you wish to assign to the user. See `add_role` for more detail -- active _(optional)_ - status of user's access to your Harper instance. See `add_role` for more detail +- `operation` _(required)_ - must always be `alter_user` +- `username` _(required)_ - username assigned to the user. 
It can not be altered after adding the user. It serves as the hash. +- `password` _(optional)_ - clear text for password. Harper will encrypt the password upon receipt +- `role` _(optional)_ - `role` name value of the role you wish to assign to the user. See `add_role` for more detail +- `active` _(optional)_ - status of user's access to your Harper instance. See `add_role` for more detail ### Body @@ -487,8 +487,8 @@ Deletes an existing user by username. [Learn more about Harper roles here.](../s _Operation is restricted to super_user roles only_ -- operation _(required)_ - must always be `drop_user` -- username _(required)_ - username assigned to the user +- `operation` _(required)_ - must always be `drop_user` +- `username` _(required)_ - username assigned to the user ### Body diff --git a/versioned_docs/version-4.6/developers/replication/sharding.md b/versioned_docs/version-4.6/developers/replication/sharding.md index 74242292..a650f52b 100644 --- a/versioned_docs/version-4.6/developers/replication/sharding.md +++ b/versioned_docs/version-4.6/developers/replication/sharding.md @@ -45,7 +45,7 @@ X-Replicate-To: node1,node2 Likewise, you can specify replicateTo and confirm parameters in the operation object when using the Harper API. For example, to specify that data should be replicated to two other nodes, and the response should be returned once confirmation is received from one other node, you can use the following operation object: -```json +```jsonc { "operation": "update", "schema": "dev", @@ -61,10 +61,12 @@ Likewise, you can specify replicateTo and confirm parameters in the operation ob or you can specify nodes: -```json -..., +```jsonc +{ + // ... "replicateTo": ["node-1", "node-2"] -... + // ... +} ``` ## Programmatic Replication Control @@ -148,7 +150,7 @@ MyTable.setResidencyById((id) => { Normally sharding allows data to be stored in specific nodes, but still allows access to the data from any node. 
However, you can also disable cross-node access so that data is only returned if it is stored on the node where it is accessed. To do this, you can set the `replicateFrom` property on the context of the operation to `false`: -```json +```jsonc { "operation": "search_by_id", "table": "MyTable", diff --git a/versioned_docs/version-4.6/developers/security/basic-auth.md b/versioned_docs/version-4.6/developers/security/basic-auth.md index 96d5d28a..43448caf 100644 --- a/versioned_docs/version-4.6/developers/security/basic-auth.md +++ b/versioned_docs/version-4.6/developers/security/basic-auth.md @@ -6,9 +6,9 @@ title: Basic Authentication Harper uses Basic Auth and JSON Web Tokens (JWTs) to secure our HTTP requests. In the context of an HTTP transaction, **basic access authentication** is a method for an HTTP user agent to provide a username and password when making a request. -** \_**You do not need to log in separately. Basic Auth is added to each HTTP request like create*database, create_table, insert etc… via headers.\*\** \*\* +**You do not need to log in separately. Basic Auth is added to each HTTP request like create_database, create_table, insert etc… via headers.** -A header is added to each HTTP request. The header key is **“Authorization”** the header value is **“Basic <<your username and password buffer token>>”** +A header is added to each HTTP request.
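The Basic Auth header described here can be sketched in a few lines. This is the generic RFC 7617 construction (base64 of `username:password`), not Harper-specific code, and `admin`/`secret` are placeholder credentials:

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    # Base64-encode "username:password" and prefix with "Basic ",
    # per the standard Basic access authentication scheme.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("admin", "secret"))
```

The resulting dict can be passed as the headers of any HTTP request to the instance.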
The header key is **"Authorization"** and the header value is **"Basic <<your username and password buffer token>>"** ## Authentication in Harper Studio diff --git a/versioned_docs/version-4.6/developers/security/users-and-roles.md b/versioned_docs/version-4.6/developers/security/users-and-roles.md index 76ed6901..84fd76d8 100644 --- a/versioned_docs/version-4.6/developers/security/users-and-roles.md +++ b/versioned_docs/version-4.6/developers/security/users-and-roles.md @@ -98,7 +98,7 @@ There are two parts to a permissions set: Each table that a role should be given some level of CRUD permissions to must be included in the `tables` array for its database in the role's permissions JSON passed to the API (_see example above_). -```json +```jsonc { "table_name": { // the name of the table to define CRUD perms for "read": boolean, // access to read from this table diff --git a/versioned_docs/version-4.6/developers/sql-guide/date-functions.md b/versioned_docs/version-4.6/developers/sql-guide/date-functions.md index d44917c3..c9747dcd 100644 --- a/versioned_docs/version-4.6/developers/sql-guide/date-functions.md +++ b/versioned_docs/version-4.6/developers/sql-guide/date-functions.md @@ -156,17 +156,17 @@ Subtracts the defined amount of time from the date provided in UTC and returns t ### EXTRACT(date, date_part) -Extracts and returns the date_part requested as a String value. Accepted date_part values below show value returned for date = “2020-03-26T15:13:02.041+000” +Extracts and returns the date_part requested as a String value.
Accepted date_part values below show value returned for date = "2020-03-26T15:13:02.041+000" | date_part | Example return value\* | | ----------- | ---------------------- | -| year | “2020” | -| month | “3” | -| day | “26” | -| hour | “15” | -| minute | “13” | -| second | “2” | -| millisecond | “41” | +| year | "2020" | +| month | "3" | +| day | "26" | +| hour | "15" | +| minute | "13" | +| second | "2" | +| millisecond | "41" | ``` "SELECT EXTRACT(1587568845765, 'year') AS extract_result" returns diff --git a/versioned_docs/version-4.6/developers/sql-guide/functions.md b/versioned_docs/version-4.6/developers/sql-guide/functions.md index 0847a657..a1170991 100644 --- a/versioned_docs/version-4.6/developers/sql-guide/functions.md +++ b/versioned_docs/version-4.6/developers/sql-guide/functions.md @@ -16,99 +16,85 @@ This SQL keywords reference contains the SQL functions available in Harper. | Keyword | Syntax | Description | | ---------------- | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | -| AVG | AVG(_expression_) | Returns the average of a given numeric expression. | -| COUNT | SELECT COUNT(_column_name_) FROM _database.table_ WHERE _condition_ | Returns the number records that match the given criteria. Nulls are not counted. | -| GROUP_CONCAT | GROUP*CONCAT(\_expression*) | Returns a string with concatenated values that are comma separated and that are non-null from a group. Will return null when there are non-null values. | -| MAX | SELECT MAX(_column_name_) FROM _database.table_ WHERE _condition_ | Returns largest value in a specified column. | -| MIN | SELECT MIN(_column_name_) FROM _database.table_ WHERE _condition_ | Returns smallest value in a specified column. | -| SUM | SUM(_column_name_) | Returns the sum of the numeric values provided. 
| -| ARRAY\* | ARRAY(_expression_) | Returns a list of data as a field. | -| DISTINCT_ARRAY\* | DISTINCT*ARRAY(\_expression*) | When placed around a standard ARRAY() function, returns a distinct (deduplicated) results set. | +| `AVG` | `AVG(expression)` | Returns the average of a given numeric expression. | +| `COUNT` | `SELECT COUNT(column_name) FROM database.table WHERE condition` | Returns the number of records that match the given criteria. Nulls are not counted. | +| `GROUP_CONCAT` | `GROUP_CONCAT(expression)` | Returns a string with concatenated values that are comma separated and that are non-null from a group. Will return null when there are no non-null values. | +| `MAX` | `SELECT MAX(column_name) FROM database.table WHERE condition` | Returns the largest value in a specified column. | +| `MIN` | `SELECT MIN(column_name) FROM database.table WHERE condition` | Returns the smallest value in a specified column. | +| `SUM` | `SUM(column_name)` | Returns the sum of the numeric values provided. | +| `ARRAY`* | `ARRAY(expression)` | Returns a list of data as a field. | +| `DISTINCT_ARRAY`* | `DISTINCT_ARRAY(expression)` | When placed around a standard `ARRAY()` function, returns a distinct (deduplicated) results set. | -\*For more information on ARRAY() and DISTINCT_ARRAY() see [this blog](https://www.harperdb.io/post/sql-queries-to-complex-objects). +*For more information on `ARRAY()` and `DISTINCT_ARRAY()` see [this blog](https://www.harperdb.io/post/sql-queries-to-complex-objects). ### Conversion | Keyword | Syntax | Description | | ------- | ----------------------------------------------- | ---------------------------------------------------------------------- | -| CAST | CAST(_expression AS datatype(length)_) | Converts a value to a specified datatype. | -| CONVERT | CONVERT(_data_type(length), expression, style_) | Converts a value from one datatype to a different, specified datatype.
| +| `CAST` | `CAST(expression AS datatype(length))` | Converts a value to a specified datatype. | +| `CONVERT` | `CONVERT(data_type(length), expression, style)` | Converts a value from one datatype to a different, specified datatype. | ### Date & Time -| Keyword | Syntax | Description | -| ----------------- | ----------------- | --------------------------------------------------------------------------------------------------------------------- | -| CURRENT_DATE | CURRENT_DATE() | Returns the current date in UTC in “YYYY-MM-DD” String format. | -| CURRENT_TIME | CURRENT_TIME() | Returns the current time in UTC in “HH:mm:ss.SSS” string format. | -| CURRENT_TIMESTAMP | CURRENT_TIMESTAMP | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. For more information, go here. | - -| -| DATE | DATE([_date_string_]) | Formats and returns the date*string argument in UTC in ‘YYYY-MM-DDTHH:mm:ss.SSSZZ’ string format. If a date_string is not provided, the function will return the current UTC date/time value in the return format defined above. For more information, go here. | -| -| DATE_ADD | DATE_ADD(\_date, value, interval*) | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | -| -| DATE*DIFF | DATEDIFF(\_date_1, date_2[, interval]*) | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds. For more information, go here. | -| -| DATE*FORMAT | DATE_FORMAT(\_date, format*) | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, go here. 
| -| -| DATE*SUB | DATE_SUB(\_date, format*) | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted date*sub interval values- Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | -| -| DAY | DAY(\_date*) | Return the day of the month for the given date. | -| -| DAYOFWEEK | DAYOFWEEK(_date_) | Returns the numeric value of the weekday of the date given(“YYYY-MM-DD”).NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. | -| EXTRACT | EXTRACT(_date, date_part_) | Extracts and returns the date*part requested as a String value. Accepted date_part values below show value returned for date = “2020-03-26T15:13:02.041+000” For more information, go here. | -| -| GETDATE | GETDATE() | Returns the current Unix Timestamp in milliseconds. | -| GET_SERVER_TIME | GET_SERVER_TIME() | Returns the current date/time value based on the server’s timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. | -| OFFSET_UTC | OFFSET_UTC(\_date, offset*) | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. | -| NOW | NOW() | Returns the current Unix Timestamp in milliseconds. | -| -| HOUR | HOUR(_datetime_) | Returns the hour part of a given date in range of 0 to 838. | -| -| MINUTE | MINUTE(_datetime_) | Returns the minute part of a time/datetime in range of 0 to 59. | -| -| MONTH | MONTH(_date_) | Returns month part for a specified date in range of 1 to 12. | -| -| SECOND | SECOND(_datetime_) | Returns the seconds part of a time/datetime in range of 0 to 59. | -| YEAR | YEAR(_date_) | Returns the year part for a specified date. 
| -| +| Keyword | Syntax | Description | +| ----------------- | -------------------------- | --------------------------------------------------------------------------------------------------------------------- | +| `CURRENT_DATE` | `CURRENT_DATE()` | Returns the current date in UTC in "YYYY-MM-DD" String format. | +| `CURRENT_TIME` | `CURRENT_TIME()` | Returns the current time in UTC in "HH:mm:ss.SSS" string format. | +| `CURRENT_TIMESTAMP` | `CURRENT_TIMESTAMP` | Referencing this variable will evaluate as the current Unix Timestamp in milliseconds. For more information, go here. | +| `DATE` | `DATE([date_string])` | Formats and returns the date string argument in UTC in 'YYYY-MM-DDTHH:mm:ss.SSSZZ' string format. If a date string is not provided, the function will return the current UTC date/time value in the return format defined above. For more information, go here. | +| `DATE_ADD` | `DATE_ADD(date, value, interval)` | Adds the defined amount of time to the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted interval values: Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. | +| `DATE_DIFF` | `DATE_DIFF(date_1, date_2[, interval])` | Returns the difference between the two date values passed based on the interval as a Number. If an interval is not provided, the function will return the difference value in milliseconds. For more information, go here. | +| `DATE_FORMAT` | `DATE_FORMAT(date, format)` | Formats and returns a date value in the String format provided. Find more details on accepted format values in the moment.js docs. For more information, go here. | +| `DATE_SUB` | `DATE_SUB(date, format)` | Subtracts the defined amount of time from the date provided in UTC and returns the resulting Unix Timestamp in milliseconds. Accepted date_sub interval values- Either string value (key or shorthand) can be passed as the interval argument. For more information, go here. 
| +| `DAY` | `DAY(date)` | Returns the day of the month for the given date. | +| `DAYOFWEEK` | `DAYOFWEEK(date)` | Returns the numeric value of the weekday of the date given ("YYYY-MM-DD"). NOTE: 0=Sunday, 1=Monday, 2=Tuesday, 3=Wednesday, 4=Thursday, 5=Friday, and 6=Saturday. | +| `EXTRACT` | `EXTRACT(date, date_part)` | Extracts and returns the date part requested as a String value. Accepted date_part values below show value returned for date = "2020-03-26T15:13:02.041+000". For more information, go here. | +| `GETDATE` | `GETDATE()` | Returns the current Unix Timestamp in milliseconds. | +| `GET_SERVER_TIME` | `GET_SERVER_TIME()` | Returns the current date/time value based on the server's timezone in `YYYY-MM-DDTHH:mm:ss.SSSZZ` String format. | +| `OFFSET_UTC` | `OFFSET_UTC(date, offset)` | Returns the UTC date time value with the offset provided included in the return String value formatted as `YYYY-MM-DDTHH:mm:ss.SSSZZ`. The offset argument will be added as minutes unless the value is less than 16 and greater than -16, in which case it will be treated as hours. | +| `NOW` | `NOW()` | Returns the current Unix Timestamp in milliseconds. | +| `HOUR` | `HOUR(datetime)` | Returns the hour part of a given date in range of 0 to 838. | +| `MINUTE` | `MINUTE(datetime)` | Returns the minute part of a time/datetime in range of 0 to 59. | +| `MONTH` | `MONTH(date)` | Returns the month part for a specified date in range of 1 to 12. | +| `SECOND` | `SECOND(datetime)` | Returns the seconds part of a time/datetime in range of 0 to 59. | +| `YEAR` | `YEAR(date)` | Returns the year part for a specified date. | ### Logical | Keyword | Syntax | Description | | ------- | ----------------------------------------------- | ------------------------------------------------------------------------------------------ | -| IF | IF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false.
| -| IIF | IIF(_condition, value_if_true, value_if_false_) | Returns a value if the condition is true, or another value if the condition is false. | -| IFNULL | IFNULL(_expression, alt_value_) | Returns a specified value if the expression is null. | -| NULLIF | NULLIF(_expression_1, expression_2_) | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. | +| `IF` | `IF(condition, value_if_true, value_if_false)` | Returns a value if the condition is true, or another value if the condition is false. | +| `IIF` | `IIF(condition, value_if_true, value_if_false)` | Returns a value if the condition is true, or another value if the condition is false. | +| `IFNULL` | `IFNULL(expression, alt_value)` | Returns a specified value if the expression is null. | +| `NULLIF` | `NULLIF(expression_1, expression_2)` | Returns null if expression_1 is equal to expression_2, if not equal, returns expression_1. | ### Mathematical | Keyword | Syntax | Description | | ------- | ------------------------------ | --------------------------------------------------------------------------------------------------- | -| ABS | ABS(_expression_) | Returns the absolute value of a given numeric expression. | -| CEIL | CEIL(_number_) | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. | -| EXP | EXP(_number_) | Returns e to the power of a specified number. | -| FLOOR | FLOOR(_number_) | Returns the largest integer value that is smaller than, or equal to, a given number. | -| RANDOM | RANDOM(_seed_) | Returns a pseudo random number. | -| ROUND | ROUND(_number,decimal_places_) | Rounds a given number to a specified number of decimal places. | -| SQRT | SQRT(_expression_) | Returns the square root of an expression. | +| `ABS` | `ABS(expression)` | Returns the absolute value of a given numeric expression. 
| +| `CEIL` | `CEIL(number)` | Returns integer ceiling, the smallest integer value that is bigger than or equal to a given number. | +| `EXP` | `EXP(number)` | Returns e to the power of a specified number. | +| `FLOOR` | `FLOOR(number)` | Returns the largest integer value that is smaller than, or equal to, a given number. | +| `RANDOM` | `RANDOM(seed)` | Returns a pseudo random number. | +| `ROUND` | `ROUND(number, decimal_places)` | Rounds a given number to a specified number of decimal places. | +| `SQRT` | `SQRT(expression)` | Returns the square root of an expression. | ### String | Keyword | Syntax | Description | | ----------- | ------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| CONCAT | CONCAT(_string_1, string_2, ...., string_n_) | Concatenates, or joins, two or more strings together, resulting in a single string. | -| CONCAT_WS | CONCAT*WS(\_separator, string_1, string_2, ...., string_n*) | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. | -| INSTR | INSTR(_string_1, string_2_) | Returns the first position, as an integer, of string_2 within string_1. | -| LEN | LEN(_string_) | Returns the length of a string. | -| LOWER | LOWER(_string_) | Converts a string to lower-case. | -| REGEXP | SELECT _column_name_ FROM _database.table_ WHERE _column_name_ REGEXP _pattern_ | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | -| REGEXP_LIKE | SELECT _column_name_ FROM _database.table_ WHERE REGEXP*LIKE(\_column_name, pattern*) | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. 
If no matches are found, it returns null. | -| REPLACE | REPLACE(_string, old_string, new_string_) | Replaces all instances of old_string within new_string, with string. | -| SUBSTRING | SUBSTRING(_string, string_position, length_of_substring_) | Extracts a specified amount of characters from a string. | -| TRIM | TRIM([_character(s) FROM_] _string_) | Removes leading and trailing spaces, or specified character(s), from a string. | -| UPPER | UPPER(_string_) | Converts a string to upper-case. | +| `CONCAT` | `CONCAT(string_1, string_2, ...., string_n)` | Concatenates, or joins, two or more strings together, resulting in a single string. | +| `CONCAT_WS` | `CONCAT_WS(separator, string_1, string_2, ...., string_n)` | Concatenates, or joins, two or more strings together with a separator, resulting in a single string. | +| `INSTR` | `INSTR(string_1, string_2)` | Returns the first position, as an integer, of string_2 within string_1. | +| `LEN` | `LEN(string)` | Returns the length of a string. | +| `LOWER` | `LOWER(string)` | Converts a string to lower-case. | +| `REGEXP` | `SELECT column_name FROM database.table WHERE column_name REGEXP pattern` | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | +| `REGEXP_LIKE` | `SELECT column_name FROM database.table WHERE REGEXP_LIKE(column_name, pattern)` | Searches column for matching string against a given regular expression pattern, provided as a string, and returns all matches. If no matches are found, it returns null. | +| `REPLACE` | `REPLACE(string, old_string, new_string)` | Replaces all instances of old_string within string with new_string. | +| `SUBSTRING` | `SUBSTRING(string, string_position, length_of_substring)` | Extracts a specified number of characters from a string. | +| `TRIM` | `TRIM([character(s) FROM] string)` | Removes leading and trailing spaces, or specified character(s), from a string.
| +| `UPPER` | `UPPER(string)` | Converts a string to upper-case. | ## Operators @@ -116,9 +102,9 @@ This SQL keywords reference contains the SQL functions available in Harper. | Keyword | Syntax | Description | | ------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------- | -| BETWEEN | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ BETWEEN _value_1_ AND _value_2_ | (inclusive) Returns values(numbers, text, or dates) within a given range. | -| IN | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IN(_value(s)_) | Used to specify multiple values in a WHERE clause. | -| LIKE | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_n_ LIKE _pattern_ | Searches for a specified pattern within a WHERE clause. | +| `BETWEEN` | `SELECT column_name(s) FROM database.table WHERE column_name BETWEEN value_1 AND value_2` | (inclusive) Returns values (numbers, text, or dates) within a given range. | +| `IN` | `SELECT column_name(s) FROM database.table WHERE column_name IN(value(s))` | Used to specify multiple values in a WHERE clause. | +| `LIKE` | `SELECT column_name(s) FROM database.table WHERE column_n LIKE pattern` | Searches for a specified pattern within a WHERE clause. | ## Queries @@ -126,34 +112,34 @@ This SQL keywords reference contains the SQL functions available in Harper. | Keyword | Syntax | Description | | -------- | -------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- | -| DISTINCT | SELECT DISTINCT _column_name(s)_ FROM _database.table_ | Returns only unique values, eliminating duplicate records. | -| FROM | FROM _database.table_ | Used to list the database(s), table(s), and any joins required for a SQL statement.
| -| GROUP BY | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ GROUP BY _column_name(s)_ ORDER BY _column_name(s)_ | Groups rows that have the same values into summary rows. | -| HAVING | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ GROUP BY _column_name(s)_ HAVING _condition_ ORDER BY _column_name(s)_ | Filters data based on a group or aggregate function. | -| SELECT | SELECT _column_name(s)_ FROM _database.table_ | Selects data from table. | -| WHERE | SELECT _column_name(s)_ FROM _database.table_ WHERE _condition_ | Extracts records based on a defined condition. | +| `DISTINCT` | `SELECT DISTINCT column_name(s) FROM database.table` | Returns only unique values, eliminating duplicate records. | +| `FROM` | `FROM database.table` | Used to list the database(s), table(s), and any joins required for a SQL statement. | +| `GROUP BY` | `SELECT column_name(s) FROM database.table WHERE condition GROUP BY column_name(s) ORDER BY column_name(s)` | Groups rows that have the same values into summary rows. | +| `HAVING` | `SELECT column_name(s) FROM database.table WHERE condition GROUP BY column_name(s) HAVING condition ORDER BY column_name(s)` | Filters data based on a group or aggregate function. | +| `SELECT` | `SELECT column_name(s) FROM database.table` | Selects data from table. | +| `WHERE` | `SELECT column_name(s) FROM database.table WHERE condition` | Extracts records based on a defined condition. 
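As a combined illustration of the query keywords above, the following sketch selects, filters, groups, and then filters the groups. The `dev.dog` table and its columns are hypothetical names for illustration only:

```sql
SELECT owner_name, COUNT(id) AS dog_count
FROM dev.dog
WHERE age IS NOT NULL
GROUP BY owner_name
HAVING COUNT(id) > 1
```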
| ### Joins -| Keyword | Syntax | Description | -| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| CROSS JOIN | SELECT _column_name(s)_ FROM _database.table_1_ CROSS JOIN _database.table_2_ | Returns a paired combination of each row from _table_1_ with row from _table_2_. _Note: CROSS JOIN can return very large result sets and is generally considered bad practice._ | -| FULL OUTER | SELECT _column_name(s)_ FROM _database.table_1_ FULL OUTER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ WHERE _condition_ | Returns all records when there is a match in either _table_1_ (left table) or _table_2_ (right table). | -| [INNER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ INNER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return only matching records from _table_1_ (left table) and _table_2_ (right table). The INNER keyword is optional and does not affect the result. | -| LEFT [OUTER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ LEFT OUTER JOIN _database.table_2_ ON _table_1.column_name_ _= table_2.column_name_ | Return all records from _table_1_ (left table) and matching data from _table_2_ (right table). The OUTER keyword is optional and does not affect the result. | -| RIGHT [OUTER] JOIN | SELECT _column_name(s)_ FROM _database.table_1_ RIGHT OUTER JOIN _database.table_2_ ON _table_1.column_name = table_2.column_name_ | Return all records from _table_2_ (right table) and matching data from _table_1_ (left table). The OUTER keyword is optional and does not affect the result. 
|
+| Keyword | Syntax | Description |
+| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `CROSS JOIN` | `SELECT column_name(s) FROM database.table_1 CROSS JOIN database.table_2` | Returns a paired combination of each row from `table_1` with each row from `table_2`. Note: `CROSS JOIN` can return very large result sets and is generally considered bad practice. |
+| `FULL OUTER` | `SELECT column_name(s) FROM database.table_1 FULL OUTER JOIN database.table_2 ON table_1.column_name = table_2.column_name WHERE condition` | Returns all records when there is a match in either `table_1` (left table) or `table_2` (right table). |
+| `[INNER] JOIN` | `SELECT column_name(s) FROM database.table_1 INNER JOIN database.table_2 ON table_1.column_name = table_2.column_name` | Returns only matching records from `table_1` (left table) and `table_2` (right table). The `INNER` keyword is optional and does not affect the result. |
+| `LEFT [OUTER] JOIN` | `SELECT column_name(s) FROM database.table_1 LEFT OUTER JOIN database.table_2 ON table_1.column_name = table_2.column_name` | Returns all records from `table_1` (left table) and matching data from `table_2` (right table). The `OUTER` keyword is optional and does not affect the result. |
+| `RIGHT [OUTER] JOIN` | `SELECT column_name(s) FROM database.table_1 RIGHT OUTER JOIN database.table_2 ON table_1.column_name = table_2.column_name` | Returns all records from `table_2` (right table) and matching data from `table_1` (left table). The `OUTER` keyword is optional and does not affect the result. 
|

### Predicates

| Keyword | Syntax | Description |
| ----------- | ----------------------------------------------------------------------------- | -------------------------- |
-| IS NOT NULL | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IS NOT NULL | Tests for non-null values. |
-| IS NULL | SELECT _column_name(s)_ FROM _database.table_ WHERE _column_name_ IS NULL | Tests for null values. |
+| `IS NOT NULL` | `SELECT column_name(s) FROM database.table WHERE column_name IS NOT NULL` | Tests for non-null values. |
+| `IS NULL` | `SELECT column_name(s) FROM database.table WHERE column_name IS NULL` | Tests for null values. |

### Statements

| Keyword | Syntax | Description |
| ------- | --------------------------------------------------------------------------------------------- | ----------------------------------- |
-| DELETE | DELETE FROM _database.table_ WHERE condition | Deletes existing data from a table. |
-| INSERT | INSERT INTO _database.table(column_name(s))_ VALUES(_value(s)_) | Inserts new records into a table. |
-| UPDATE | UPDATE _database.table_ SET _column_1 = value_1, column_2 = value_2, ....,_ WHERE _condition_ | Alters existing records in a table. |
+| `DELETE` | `DELETE FROM database.table WHERE condition` | Deletes existing data from a table. |
+| `INSERT` | `INSERT INTO database.table(column_name(s)) VALUES(value(s))` | Inserts new records into a table. |
+| `UPDATE` | `UPDATE database.table SET column_1 = value_1, column_2 = value_2, ... WHERE condition` | Alters existing records in a table. 
|
\ No newline at end of file
diff --git a/versioned_docs/version-4.6/developers/sql-guide/json-search.md b/versioned_docs/version-4.6/developers/sql-guide/json-search.md
index 507473f3..c4bcd1c8 100644
--- a/versioned_docs/version-4.6/developers/sql-guide/json-search.md
+++ b/versioned_docs/version-4.6/developers/sql-guide/json-search.md
@@ -12,7 +12,7 @@ Harper automatically indexes all top level attributes in a row / object written

## Syntax

-SEARCH*JSON(\_expression, attribute*)
+`SEARCH_JSON(expression, attribute)`

Executes the supplied string _expression_ against data of the defined top level _attribute_ for each row. The expression both filters and defines output from the JSON document.

@@ -117,7 +117,7 @@ SEARCH_JSON(
 )
 ```

-The first argument passed to SEARCH_JSON is the expression to execute against the second argument which is the cast attribute on the credits table. This expression will execute for every row. Looking into the expression it starts with “$\[…]” this tells the expression to iterate all elements of the cast array.
+The first argument passed to `SEARCH_JSON` is the expression to execute against the second argument, which is the `cast` attribute on the credits table. This expression executes for every row. The expression starts with "$[…]", which tells it to iterate over all elements of the `cast` array.

Then the expression tells the function to only return entries where the name attribute matches any of the actors defined in the array:

@@ -125,7 +125,7 @@ Then the expression tells the function to only return entries where the name att

 name in ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Mark Ruffalo", "Chris Hemsworth", "Jeremy Renner", "Clark Gregg", "Samuel L. 
Jackson", "Gwyneth Paltrow", "Don Cheadle"]
```

-So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{“actor”: name, “character”: character}`. This tells the function to create a specific object for each matching entry.
+So far, we’ve iterated the array and filtered out rows, but we also want the results formatted in a specific way, so we’ve chained an expression on our filter with: `{"actor": name, "character": character}`. This tells the function to create a specific object for each matching entry.

**Sample Result**

diff --git a/versioned_docs/version-4.6/getting-started/first-harper-app.md b/versioned_docs/version-4.6/getting-started/first-harper-app.md
index 6acc7b93..948e67f4 100644
--- a/versioned_docs/version-4.6/getting-started/first-harper-app.md
+++ b/versioned_docs/version-4.6/getting-started/first-harper-app.md
@@ -88,21 +88,20 @@ type Dog @table @export {
 }
 ```

-By default the application HTTP server port is `9926` (this can be [configured here](../deployments/configuration#http)), so the local URL would be `http:/localhost:9926/Dog/` with a full REST API. We can PUT or POST data into this table using this new path, and then GET or DELETE from it as well (you can even view data directly from the browser). If you have not added any records yet, we could use a PUT or POST to add a record. PUT is appropriate if you know the id, and POST can be used to assign an id:
+By default, the application HTTP server port is `9926` (this can be [configured here](../deployments/configuration#http)), so the local URL would be `http://localhost:9926/Dog/` with a full REST API. We can PUT or POST data into this table using this new path, and then GET or DELETE from it as well (you can even view data directly from the browser). If you have not added any records yet, you can use a PUT or POST to add a record. 
PUT is appropriate if you know the id, and POST can be used to assign an id:

-```json
-POST /Dog/
-Content-Type: application/json
-
-{
-  "name": "Harper",
-  "breed": "Labrador",
-  "age": 3,
-  "tricks": ["sits"]
-}
+```bash
+curl -X POST http://localhost:9926/Dog/ \
+  -H "Content-Type: application/json" \
+  -d '{
+    "name": "Harper",
+    "breed": "Labrador",
+    "age": 3,
+    "tricks": ["sits"]
+  }'
 ```

-With this a record will be created and the auto-assigned id will be available through the `Location` header. If you added a record, you can visit the path `/Dog/` to view that record. Alternately, the curl command curl `http:/localhost:9926/Dog/` will achieve the same thing.
+With this, a record will be created and the auto-assigned id will be available through the `Location` header. If you added a record, you can visit the path `/Dog/` to view that record. Alternatively, the curl command `curl http://localhost:9926/Dog/` will achieve the same thing.

## Authenticating Endpoints

diff --git a/versioned_docs/version-4.6/technical-details/reference/analytics.md b/versioned_docs/version-4.6/technical-details/reference/analytics.md
index 39c92109..4ee7fdb7 100644
--- a/versioned_docs/version-4.6/technical-details/reference/analytics.md
+++ b/versioned_docs/version-4.6/technical-details/reference/analytics.md
@@ -104,14 +104,14 @@ And a summary record looks like:

 The following are general resource usage statistics that are tracked:

-- memory - This includes RSS, heap, buffer and external data usage.
-- utilization - How much of the time the worker was processing requests.
-- mqtt-connections - The number of MQTT connections.
+- `memory` - This includes RSS, heap, buffer and external data usage.
+- `utilization` - How much of the time the worker was processing requests.
+- `mqtt-connections` - The number of MQTT connections.

-The following types of information is tracked for each HTTP request:
+The following types of information are tracked for each HTTP request:

-- success - How many requests returned a successful response (20x response code). TTFB - Time to first byte in the response to the client. 
-- transfer - Time to finish the transfer of the data to the client.
-- bytes-sent - How many bytes of data were sent to the client.
+- `success` - How many requests returned a successful response (20x response code).
+- `TTFB` - Time to first byte in the response to the client.
+- `transfer` - Time to finish the transfer of the data to the client.
+- `bytes-sent` - How many bytes of data were sent to the client.

-Requests are categorized by operation name, for the operations API, by the resource (name) with the REST API, and by command for the MQTT interface.
+Requests are categorized by operation name for the operations API, by resource name for the REST API, and by command for the MQTT interface.

diff --git a/versioned_docs/version-4.6/technical-details/reference/components/extensions.md b/versioned_docs/version-4.6/technical-details/reference/components/extensions.md
index e5575f8e..78012b7b 100644
--- a/versioned_docs/version-4.6/technical-details/reference/components/extensions.md
+++ b/versioned_docs/version-4.6/technical-details/reference/components/extensions.md
@@ -32,11 +32,11 @@ Any [Resource Extension](#resource-extension) can be configured with the `files`

> Harper relies on the [fast-glob](https://github.com/mrmlnc/fast-glob) library for glob pattern matching.

-- **files** - `string | string[] | Object` - _required_ - A [glob pattern](https://github.com/mrmlnc/fast-glob?tab=readme-ov-file#pattern-syntax) string, array of glob pattern strings, or a more expressive glob options object determining the set of files and directories to be resolved for the extension. If specified as an object, the `source` property is required. By default, Harper **matches files and directories**; this is configurable using the `only` option.
- - **source** - `string | string[]` - _required_ - The glob pattern string or array of strings.
- - **only** - `'all' | 'files' | 'directories'` - _optional_ - The glob pattern will match only the specified entry type. Defaults to `'all'`.
- - **ignore** - `string[]` - _optional_ - An array of glob patterns to exclude from matches. This is an alternative way to use negative patterns. Defaults to `[]`. 
-- **urlPath** - `string` - _optional_ - A base URL path to prepend to the resolved `files` entries. +- `files` - `string | string[] | Object` - _required_ - A [glob pattern](https://github.com/mrmlnc/fast-glob?tab=readme-ov-file#pattern-syntax) string, array of glob pattern strings, or a more expressive glob options object determining the set of files and directories to be resolved for the extension. If specified as an object, the `source` property is required. By default, Harper **matches files and directories**; this is configurable using the `only` option. + - `source` - `string | string[]` - _required_ - The glob pattern string or array of strings. + - `only` - `'all' | 'files' | 'directories'` - _optional_ - The glob pattern will match only the specified entry type. Defaults to `'all'`. + - `ignore` - `string[]` - _optional_ - An array of glob patterns to exclude from matches. This is an alternative way to use negative patterns. Defaults to `[]`. +- `urlPath` - `string` - _optional_ - A base URL path to prepend to the resolved `files` entries. - If the value starts with `./`, such as `'./static/'`, the component name will be included in the base url path - If the value is `.`, then the component name will be the base url path - Note: `..` is an invalid pattern and will result in an error @@ -121,11 +121,11 @@ These methods are for processing individual files. They can be async. 
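To make the method contract concrete, here is a minimal, hypothetical sketch of a file-processing method of this shape. The resource record it stores (keyed by URL path) is invented purely for illustration and is not Harper's actual loader behavior:

```javascript
// Hypothetical sketch of a file-processing extension method.
// The parameter shapes mirror the documented list: contents is a Buffer,
// urlPath and absolutePath are strings, resources is a plain collection.
function handleFile(contents, urlPath, absolutePath, resources) {
  // Decode the Buffer for text formats
  const text = contents.toString('utf8');
  // Register a simple (illustrative) resource under the recommended URL path
  resources[urlPath] = {
    absolutePath,
    bytes: contents.length,
    firstLine: text.split('\n')[0],
  };
}

// What a loader might conceptually do for one matched file:
const resources = {};
handleFile(Buffer.from('hello world\n'), '/greeting.txt', '/app/static/greeting.txt', resources);
console.log(resources['/greeting.txt'].bytes); // 12
```

A real extension would typically parse the file and register an actual resource; the point here is only the `(contents, urlPath, absolutePath, resources)` shape described below.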
Parameters: -- **contents** - `Buffer` - The contents of the file -- **urlPath** - `string` - The recommended URL path of the file -- **absolutePath** - `string` - The absolute path of the file +- `contents` - `Buffer` - The contents of the file +- `urlPath` - `string` - The recommended URL path of the file +- `absolutePath` - `string` - The absolute path of the file -- **resources** - `Object` - A collection of the currently loaded resources +- `resources` - `Object` - A collection of the currently loaded resources Returns: `void | Promise` @@ -145,10 +145,10 @@ If the function returns or resolves a truthy value, then the component loading s Parameters: -- **urlPath** - `string` - The recommended URL path of the directory -- **absolutePath** - `string` - The absolute path of the directory +- `urlPath` - `string` - The recommended URL path of the directory +- `absolutePath` - `string` - The absolute path of the directory -- **resources** - `Object` - A collection of the currently loaded resources +- `resources` - `Object` - A collection of the currently loaded resources Returns: `boolean | void | Promise` @@ -182,6 +182,6 @@ A Protocol Extension is made up of two distinct methods, [`start()`](#startoptio Parameters: -- **options** - `Object` - An object representation of the extension's configuration options. +- `options` - `Object` - An object representation of the extension's configuration options. 
Returns: `Object` - An object that implements any of the [Resource Extension APIs](#resource-extension-api) diff --git a/versioned_docs/version-4.6/technical-details/reference/components/plugins.md b/versioned_docs/version-4.6/technical-details/reference/components/plugins.md index b455ad66..ea09f89b 100644 --- a/versioned_docs/version-4.6/technical-details/reference/components/plugins.md +++ b/versioned_docs/version-4.6/technical-details/reference/components/plugins.md @@ -26,9 +26,9 @@ As plugins are meant to be used by applications in order to implement some featu As a brief overview, the general configuration options available for plugins are: -- **files** - `string` | `string[]` | [`FilesOptionsObject`](#interface-filesoptionsobject) - _optional_ - A glob pattern string or array of strings that specifies the files and directories to be handled by the plugin's default `EntryHandler` instance. -- **urlPath** - `string` - _optional_ - A base URL path to prepend to the resolved `files` entries handled by the plugin's default `EntryHandler` instance. -- **timeout** - `number` - _optional_ - The timeout in milliseconds for the plugin's operations. If not specified, the system default is **30 seconds**. Plugins may override the system default themselves, but this configuration option is the highest priority and takes precedence. +- `files` - `string` | `string[]` | [`FilesOptionsObject`](#interface-filesoptionsobject) - _optional_ - A glob pattern string or array of strings that specifies the files and directories to be handled by the plugin's default `EntryHandler` instance. +- `urlPath` - `string` - _optional_ - A base URL path to prepend to the resolved `files` entries handled by the plugin's default `EntryHandler` instance. +- `timeout` - `number` - _optional_ - The timeout in milliseconds for the plugin's operations. If not specified, the system default is **30 seconds**. 
Plugins may override the system default themselves, but this configuration option is the highest priority and takes precedence.

### File Entries

@@ -165,7 +165,7 @@ This example is heavily simplified, but it demonstrates how the different key pa

 Parameters:

-- **scope** - [`Scope`](#class-scope) - An instance of the `Scope` class that provides access to the relative application's configuration, resources, and other APIs.
+- `scope` - [`Scope`](#class-scope) - An instance of the `Scope` class that provides access to the associated application's configuration, resources, and other APIs.

 Returns: `void | Promise`

@@ -181,7 +181,7 @@ Emitted after the scope is closed via the `close()` method.

 ### Event: `'error'`

-- **error** - `unknown` - The error that occurred.
+- `error` - `unknown` - The error that occurred.

 ### Event: `'ready'`

@@ -197,8 +197,8 @@ Closes all associated entry handlers, the associated `scope.options` instance, e

 Parameters:

-- **files** - [`FilesOption`](#interface-filesoption) | [`FileAndURLPathConfig`](#interface-fileandurlpathconfig) | [`onEntryEventHandler`](#function-onentryeventhandlerentryevent-fileentryevent--directoryentryevent-void) - _optional_
-- **handler** - [`onEntryEventHandler`](#function-onentryeventhandlerentryevent-fileentryevent--directoryentryevent-void) - _optional_
+- `files` - [`FilesOption`](#interface-filesoption) | [`FileAndURLPathConfig`](#interface-fileandurlpathconfig) | [`onEntryEventHandler`](#function-onentryeventhandlerentryevent-fileentryevent--directoryentryevent-void) - _optional_
+- `handler` - [`onEntryEventHandler`](#function-onentryeventhandlerentryevent-fileentryevent--directoryentryevent-void) - _optional_

 Returns: [`EntryHandler`](#class-entryhandler) - An instance of the `EntryHandler` class that can be used to handle entries within the scope.

@@ -313,13 +313,13 @@ Returns: `string` - The directory of the application. 
This is the root directory ## Interface: `FilesOptionsObject` -- **source** - `string` | `string[]` - _required_ - The glob pattern string or array of strings. -- **ignore** - `string` | `string[]` - _optional_ - An array of glob patterns to exclude from matches. This is an alternative way to use negative patterns. Defaults to `[]`. +- `source` - `string` | `string[]` - _required_ - The glob pattern string or array of strings. +- `ignore` - `string` | `string[]` - _optional_ - An array of glob patterns to exclude from matches. This is an alternative way to use negative patterns. Defaults to `[]`. ## Interface: `FileAndURLPathConfig` -- **files** - [`FilesOption`](#interface-filesoption) - _required_ - A glob pattern string, array of glob pattern strings, or a more expressive glob options object determining the set of files and directories to be resolved for the plugin. -- **urlPath** - `string` - _optional_ - A base URL path to prepend to the resolved `files` entries. +- `files` - [`FilesOption`](#interface-filesoption) - _required_ - A glob pattern string, array of glob pattern strings, or a more expressive glob options object determining the set of files and directories to be resolved for the plugin. +- `urlPath` - `string` - _optional_ - A base URL path to prepend to the resolved `files` entries. ## Class: `OptionsWatcher` @@ -327,9 +327,9 @@ Returns: `string` - The directory of the application. This is the root directory ### Event: `'change'` -- **key** - `string[]` - The key of the changed option split into parts (e.g. `foo.bar` becomes `['foo', 'bar']`). -- **value** - [`ConfigValue`](#interface-configvalue) - The new value of the option. -- **config** - [`ConfigValue`](#interface-configvalue) - The entire configuration object of the plugin. +- `key` - `string[]` - The key of the changed option split into parts (e.g. `foo.bar` becomes `['foo', 'bar']`). +- `value` - [`ConfigValue`](#interface-configvalue) - The new value of the option. 
+- `config` - [`ConfigValue`](#interface-configvalue) - The entire configuration object of the plugin.

-The `'change'` event is emitted whenever an configuration option is changed in the configuration file relative to the application and respective plugin.
+The `'change'` event is emitted whenever a configuration option is changed in the configuration file relative to the application and respective plugin.

@@ -360,11 +360,11 @@ Emitted when the `OptionsWatcher` is closed via the `close()` method. The watche

 ### Event: `'error'`

-- **error** - `unknown` - The error that occurred.
+- `error` - `unknown` - The error that occurred.

 ### Event: `'ready'`

-- **config** - [`ConfigValue`](#interface-configvalue) | `undefined` - The configuration object of the plugin, if present.
+- `config` - [`ConfigValue`](#interface-configvalue) | `undefined` - The configuration object of the plugin, if present.

This event can be emitted multiple times. It is first emitted upon the initial load, but will also be emitted after restoring a configuration file or configuration object after a `'remove'` event.

@@ -382,7 +382,7 @@ Closes the options watcher, removing all listeners and preventing any further ev

 Parameters:

-- **key** - `string[]` - The key of the option to get, split into parts (e.g. `foo.bar` is represented as `['foo', 'bar']`).
+- `key` - `string[]` - The key of the option to get, split into parts (e.g. `foo.bar` is represented as `['foo', 'bar']`).

 Returns: [`ConfigValue`](#interface-configvalue) | `undefined`

@@ -420,7 +420,7 @@ Created by calling [`scope.handleEntry()`](#scopehandleentry) method.

 ### Event: `'all'`

-- **entry** - [`FileEntry`](#interface-fileentry) | [`DirectoryEntry`](#interface-directoryentry) - The entry that was added, changed, or removed.
+- `entry` - [`FileEntry`](#interface-fileentry) | [`DirectoryEntry`](#interface-directoryentry) - The entry that was added, changed, or removed.

The `'all'` event is emitted for all entry events, including file and directory events. This is the event that the handler method in `scope.handleEntry` is registered for. 
The event handler receives an `entry` object that contains the entry metadata, such as the file contents, URL path, and absolute path. @@ -452,19 +452,19 @@ async function handleApplication(scope) { ### Event: `'add'` -- **entry** - [`AddFileEvent`](#interface-addfileevent) - The file entry that was added. +- `entry` - [`AddFileEvent`](#interface-addfileevent) - The file entry that was added. The `'add'` event is emitted when a file is created (or the watcher sees it for the first time). The event handler receives an `AddFileEvent` object that contains the file contents, URL path, absolute path, and other metadata. ### Event: `'addDir'` -- **entry** - [`AddDirEvent`](#interface-adddirevent) - The directory entry that was added. +- `entry` - [`AddDirEvent`](#interface-adddirevent) - The directory entry that was added. The `'addDir'` event is emitted when a directory is created (or the watcher sees it for the first time). The event handler receives an `AddDirEvent` object that contains the URL path and absolute path of the directory. ### Event: `'change'` -- **entry** - [`ChangeFileEvent`](#interface-changefileevent) - The file entry that was changed. +- `entry` - [`ChangeFileEvent`](#interface-changefileevent) - The file entry that was changed. The `'change'` event is emitted when a file is modified. The event handler receives a `ChangeFileEvent` object that contains the updated file contents, URL path, absolute path, and other metadata. @@ -474,7 +474,7 @@ Emitted when the entry handler is closed via the [`entryHandler.close()`](#entry ### Event: `'error'` -- **error** - `unknown` - The error that occurred. +- `error` - `unknown` - The error that occurred. ### Event: `'ready'` @@ -482,13 +482,13 @@ Emitted when the entry handler is ready to be used. This is not automatically aw ### Event: `'unlink'` -- **entry** - [`UnlinkFileEvent`](#interface-unlinkfileevent) - The file entry that was deleted. 
+- `entry` - [`UnlinkFileEvent`](#interface-unlinkfileevent) - The file entry that was deleted. The `'unlink'` event is emitted when a file is deleted. The event handler receives an `UnlinkFileEvent` object that contains the URL path and absolute path of the deleted file. ### Event: `'unlinkDir'` -- **entry** - [`UnlinkDirEvent`](#interface-unlinkdirevent) - The directory entry that was deleted. +- `entry` - [`UnlinkDirEvent`](#interface-unlinkdirevent) - The directory entry that was deleted. The `'unlinkDir'` event is emitted when a directory is deleted. The event handler receives an `UnlinkDirEvent` object that contains the URL path and absolute path of the deleted directory. @@ -514,7 +514,7 @@ Closes the entry handler, removing all listeners and preventing any further even Parameters: -- **config** - [`FilesOption`](#interface-filesoption) | [`FileAndURLPathConfig`](#interface-fileandurlpathconfig) - The configuration object for the entry handler. +- `config` - [`FilesOption`](#interface-filesoption) | [`FileAndURLPathConfig`](#interface-fileandurlpathconfig) - The configuration object for the entry handler. This method will update an existing entry handler to watch new entries. It will close the underlying watcher and create a new one, but will maintain any existing listeners on the EntryHandler instance itself. @@ -522,9 +522,9 @@ This method returns a promise associated with the ready event of the updated han ### Interface: `BaseEntry` -- **stats** - [`fs.Stats`](https://nodejs.org/docs/latest/api/fs.html#class-fsstats) | `undefined` - The file system stats for the entry. -- **urlPath** - `string` - The recommended URL path of the entry. -- **absolutePath** - `string` - The absolute path of the entry. +- `stats` - [`fs.Stats`](https://nodejs.org/docs/latest/api/fs.html#class-fsstats) | `undefined` - The file system stats for the entry. +- `urlPath` - `string` - The recommended URL path of the entry. 
+- `absolutePath` - `string` - The absolute path of the entry. The foundational entry handle event object. The `stats` may or may not be present depending on the event, entry type, and platform. @@ -536,7 +536,7 @@ The `absolutePath` is the file system path for the entry. Extends [`BaseEntry`](#interface-baseentry) -- **contents** - `Buffer` - The contents of the file. +- `contents` - `Buffer` - The contents of the file. A specific extension of the `BaseEntry` interface representing a file entry. We automatically read the contents of the file so the user doesn't have to bother with FS operations. @@ -546,8 +546,8 @@ There is no `DirectoryEntry` since there is no other important metadata aside fr Extends [`BaseEntry`](#interface-baseentry) -- **eventType** - `string` - The type of entry event. -- **entryType** - `string` - The type of entry, either a file or a directory. +- `eventType` - `string` - The type of entry event. +- `entryType` - `string` - The type of entry, either a file or a directory. A general interface representing the entry handle event objects. @@ -555,8 +555,8 @@ A general interface representing the entry handle event objects. Extends [`EntryEvent`](#interface-entryevent), [FileEntry](#interface-fileentry) -- **eventType** - `'add'` -- **entryType** - `'file'` +- `eventType` - `'add'` +- `entryType` - `'file'` Event object emitted when a file is created (or the watcher sees it for the first time). @@ -564,8 +564,8 @@ Event object emitted when a file is created (or the watcher sees it for the firs Extends [`EntryEvent`](#interface-entryevent), [FileEntry](#interface-fileentry) -- **eventType** - `'change'` -- **entryType** - `'file'` +- `eventType` - `'change'` +- `entryType` - `'file'` Event object emitted when a file is modified. @@ -573,8 +573,8 @@ Event object emitted when a file is modified. 
Extends [`EntryEvent`](#interface-entryevent), [FileEntry](#interface-fileentry) -- **eventType** - `'unlink'` -- **entryType** - `'file'` +- `eventType` - `'unlink'` +- `entryType` - `'file'` Event object emitted when a file is deleted. @@ -588,8 +588,8 @@ A union type representing the file entry events. These events are emitted when a Extends [`EntryEvent`](#interface-entryevent) -- **eventType** - `'addDir'` -- **entryType** - `'directory'` +- `eventType` - `'addDir'` +- `entryType` - `'directory'` Event object emitted when a directory is created (or the watcher sees it for the first time). @@ -597,8 +597,8 @@ Event object emitted when a directory is created (or the watcher sees it for the Extends [`EntryEvent`](#interface-entryevent) -- **eventType** - `'unlinkDir'` -- **entryType** - `'directory'` +- `eventType` - `'unlinkDir'` +- `entryType` - `'directory'` Event object emitted when a directory is deleted. @@ -612,7 +612,7 @@ A union type representing the directory entry events. There are no change events Parameters: -- **entryEvent** - [`FileEntryEvent`](#interface-fileentryevent) | [`DirectoryEntryEvent`](#interface-directoryentryevent) +- `entryEvent` - [`FileEntryEvent`](#interface-fileentryevent) | [`DirectoryEntryEvent`](#interface-directoryentryevent) Returns: `void` diff --git a/versioned_docs/version-4.6/technical-details/reference/globals.md b/versioned_docs/version-4.6/technical-details/reference/globals.md index a23f2a99..8f78b7f7 100644 --- a/versioned_docs/version-4.6/technical-details/reference/globals.md +++ b/versioned_docs/version-4.6/technical-details/reference/globals.md @@ -308,9 +308,9 @@ Execute an operation from the [Operations API](https://docs.harperdb.io/develope Parameters: -- **operation** - `Object` - Object matching desired operation's request body -- **context** - `Object` - `{ username: string}` - _optional_ - The specified user -- **authorize** - `boolean` - _optional_ - Indicate the operation should authorize the user or 
not. Defaults to `false`
+- `operation` - `Object` - An object matching the desired operation's request body
+- `context` - `Object` - `{ username: string }` - _optional_ - The specified user
+- `authorize` - `boolean` - _optional_ - Indicates whether the operation should authorize the user. Defaults to `false`

Returns a `Promise` with the operation's response as per the [Operations API documentation](https://docs.harperdb.io/developers/operations-api).
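To illustrate the call shape of the `(operation, context, authorize)` parameters above, here is a hedged sketch. The `server` object below is a stub written for this example so it runs anywhere (inside Harper you would call the real global instead), and its return value is invented to show where the operation name and user context flow:

```javascript
// Hypothetical stub illustrating the documented (operation, context, authorize)
// signature. It is NOT Harper's implementation; it only echoes the request
// so the example is runnable outside of Harper.
const server = {
  async operation(operation, context = undefined, authorize = false) {
    if (authorize && !(context && context.username)) {
      throw new Error('authorization required');
    }
    // A real Harper instance would dispatch `operation` to the Operations API
    // and resolve with that operation's response body.
    return { ran: operation.operation, as: (context && context.username) || 'superuser' };
  },
};

// e.g. run an operation on behalf of a specific user, without re-authorizing:
server.operation({ operation: 'describe_all' }, { username: 'admin' })
  .then((result) => console.log(result.ran)); // "describe_all"
```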