5 changes: 4 additions & 1 deletion .git-blame-ignore-revs
@@ -1,2 +1,5 @@
# Initial format commit
245bc11c93fcff5cbaceddb799de5e1fad132d3e
245bc11c93fcff5cbaceddb799de5e1fad132d3e

# Replatform format commit
abe8637ab83416cd9927e5e5e1c121144fbc3969
3 changes: 1 addition & 2 deletions .github/workflows/deploy.yaml
@@ -13,7 +13,7 @@ on:
# However, do NOT cancel in-progress runs as we want to allow these deployments to complete.
# This shouldn't be necessary for most cases, but it can help avoid conflicts if multiple pushes happen in quick succession.
concurrency:
group: "pages"
group: 'pages'
cancel-in-progress: false

jobs:
@@ -93,7 +93,6 @@ jobs:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}


steps:
- name: Deploy to GitHub Pages
id: deployment
27 changes: 26 additions & 1 deletion .gitignore
@@ -22,4 +22,29 @@ bin-release/
.idea/modules.xml
.idea/vcs.xml

node_modules
node_modules

# Site .gitignore so it's picked up by prettier
# Dependencies
site/node_modules

# Production
site/build

# Latest (/docs/) is a build time copy of the latest version
site/docs

# Generated files
site/.docusaurus
site/.cache-loader

# Misc
site/.DS_Store
site/.env.local
site/.env.development.local
site/.env.test.local
site/.env.production.local

site/npm-debug.log*
site/yarn-debug.log*
site/yarn-error.log*
20 changes: 9 additions & 11 deletions docs/administration/_category_.json
@@ -1,12 +1,10 @@
{
"label": "Administration",
"position": 2,
"link": {
"type": "generated-index",
"title": "Administration Documentation",
"description": "Guides for managing and administering HarperDB instances",
"keywords": [
"administration"
]
}
}
"label": "Administration",
"position": 2,
"link": {
"type": "generated-index",
"title": "Administration Documentation",
"description": "Guides for managing and administering HarperDB instances",
"keywords": ["administration"]
}
}
4 changes: 2 additions & 2 deletions docs/administration/harper-studio/instances.md
@@ -40,11 +40,11 @@ A summary view of all instances within an organization can be viewed by clicking
1. Select Instance Specs
1. Select Instance RAM

_Harper Cloud Instances are billed based on Instance RAM, this will select the size of your provisioned instance._ _More on instance specs__._
_Harper Cloud Instances are billed based on Instance RAM, this will select the size of your provisioned instance._ _More on instance specs\_\_._

1. Select Storage Size

_Each instance has a mounted storage volume where your Harper data will reside. Storage is provisioned based on space and IOPS._ _More on IOPS Impact on Performance__._
_Each instance has a mounted storage volume where your Harper data will reside. Storage is provisioned based on space and IOPS._ _More on IOPS Impact on Performance\_\_._

1. Select Instance Region

2 changes: 1 addition & 1 deletion docs/custom-functions/create-project.md
@@ -10,7 +10,7 @@ Otherwise, to create a project, you have the following options:

1. **Use the add_custom_function_project operation**

This operation creates a new project folder, and populates it with templates for the routes, helpers, and static subfolders.
This operation creates a new project folder, and populates it with templates for the routes, helpers, and static subfolders.

```json
{
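For reference, a minimal `add_custom_function_project` request body is sketched below; the project name is illustrative, and the exact fields should be confirmed against your Harper version's operations API.

```json
{
  "operation": "add_custom_function_project",
  "project": "dogs"
}
```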
20 changes: 9 additions & 11 deletions docs/deployments/_category_.json
@@ -1,12 +1,10 @@
{
"label": "Deployments",
"position": 3,
"link": {
"type": "generated-index",
"title": "Deployments Documentation",
"description": "Installation and deployment guides for HarperDB",
"keywords": [
"deployments"
]
}
}
"label": "Deployments",
"position": 3,
"link": {
"type": "generated-index",
"title": "Deployments Documentation",
"description": "Installation and deployment guides for HarperDB",
"keywords": ["deployments"]
}
}
12 changes: 6 additions & 6 deletions docs/deployments/configuration.md
@@ -583,9 +583,9 @@ To access the audit logs, use the API operation `read_audit_log`. It will provid

```json
{
"operation": "read_audit_log",
"schema": "dev",
"table": "dog"
"operation": "read_audit_log",
"schema": "dev",
"table": "dog"
}
```
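If only a slice of the audit log is needed, the same operation also accepts search parameters. A hedged sketch filtering by a timestamp range (field names follow the audit log documentation; confirm them against your version):

```json
{
  "operation": "read_audit_log",
  "schema": "dev",
  "table": "dog",
  "search_type": "timestamp",
  "search_values": [1660585740558, 1660585759710]
}
```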

@@ -728,7 +728,7 @@ This section defines log configuration for HTTP logging. By default, HTTP reques
- `timing` - This will log timing information
- `headers` - This will log the headers in each request (which can be very verbose)
- `id` - This will assign a unique id to each request and log it in the entry for each request. This is assigned as the `request.requestId` property and can be used by other logging to track a request.
Note that the `level` will determine which HTTP requests are logged:
Note that the `level` will determine which HTTP requests are logged:
- `info` (or more verbose) - All HTTP requests
- `warn` - HTTP requests with a status code of 400 or above
- `error` - HTTP requests with a status code of 500
@@ -737,10 +737,10 @@ For example:

```yaml
http:
logging:
logging:
timing: true
level: info
path: ~/hdb/log/http.log
path: ~/hdb/log/http.log
... rest of http config
```

4 changes: 2 additions & 2 deletions docs/deployments/harper-cloud/alarms.md
@@ -13,8 +13,8 @@ Harper Cloud instance alarms are triggered when certain conditions are met. Once
- **Intervals**: The number of occurrences before an alarm is triggered and the period that the metric is evaluated over.
- **Proposed Remedy**: Recommended solution to avoid the alert in the future.

| Alarm | Threshold | Intervals | Proposed Remedy |
| ------- | ---------- | --------- | ------------------------------------------------------------------------------------------------------------------------------ |
| Alarm | Threshold | Intervals | Proposed Remedy |
| ------- | ---------- | --------- | --------------------------------------------------------------------------------------------------------------------------- |
| Storage | > 90% Disk | 1 x 5min | [Increased storage volume](../../administration/harper-studio/instance-configuration#update-instance-storage) |
| CPU | > 90% Avg | 2 x 5min | [Increase instance size for additional CPUs](../../administration/harper-studio/instance-configuration#update-instance-ram) |
| Memory | > 90% RAM | 2 x 5min | [Increase instance size](../../administration/harper-studio/instance-configuration#update-instance-ram) |
6 changes: 3 additions & 3 deletions docs/deployments/upgrade-hdb-instance.md
@@ -12,11 +12,11 @@ Upgrading Harper is a two-step process. First the latest version of Harper must

1. Install the latest version of Harper using `npm install -g harperdb`.

Note `-g` should only be used if you installed Harper globally (which is recommended).
Note `-g` should only be used if you installed Harper globally (which is recommended).

1. Run `harperdb` to initiate the upgrade process.

Harper will then prompt you for all appropriate inputs and then run the upgrade directives.
Harper will then prompt you for all appropriate inputs and then run the upgrade directives.

## Node Version Manager (nvm)

@@ -130,7 +130,7 @@ replication:
{
"operation": "add_node",
"hostname:": "node-1",
"url": "wss://my-cluster-node-1:9925"
"url": "wss://my-cluster-node-1:9925"
}
```

20 changes: 9 additions & 11 deletions docs/developers/_category_.json
@@ -1,12 +1,10 @@
{
"label": "Developers",
"position": 1,
"link": {
"type": "generated-index",
"title": "Developers Documentation",
"description": "Comprehensive guides and references for building applications with HarperDB",
"keywords": [
"developers"
]
}
}
"label": "Developers",
"position": 1,
"link": {
"type": "generated-index",
"title": "Developers Documentation",
"description": "Comprehensive guides and references for building applications with HarperDB",
"keywords": ["developers"]
}
}
50 changes: 25 additions & 25 deletions docs/developers/applications/caching.md
@@ -33,8 +33,8 @@ Next, you need to define the source for your cache. External data sources could
```javascript
class ThirdPartyAPI extends Resource {
async get() {
return (await fetch(`https://some-api.com/${this.getId()}`)).json();
}
return (await fetch(`https://some-api.com/${this.getId()}`)).json();
}
}
```

@@ -69,11 +69,11 @@ In the example above, we simply retrieved data to fulfill a cache request. We ma

```javascript
class ThirdPartyAPI extends Resource {
async get() {
let response = await fetch(`https://some-api.com/${this.getId()}`);
this.getContext().lastModified = response.headers.get('Last-Modified');
return response.json();
}
async get() {
let response = await fetch(`https://some-api.com/${this.getId()}`);
this.getContext().lastModified = response.headers.get('Last-Modified');
return response.json();
}
}
```

@@ -109,11 +109,11 @@ One way to provide more active caching is to specifically invalidate individual
```javascript
const { MyTable } = tables;
export class MyTableEndpoint extends MyTable {
async post(data) {
if (data.invalidate)
// use this flag as a marker
this.invalidate();
}
async post(data) {
if (data.invalidate)
// use this flag as a marker
this.invalidate();
}
}
```

@@ -176,13 +176,13 @@ An alternative to using asynchronous generators is to use a subscription stream

```javascript
class ThirdPartyAPI extends Resource {
subscribe() {
const subscription = super.subscribe();
setupListeningToRemoteService().on('update', (event) => {
subscription.send(event);
});
return subscription;
}
subscribe() {
const subscription = super.subscribe();
setupListeningToRemoteService().on('update', (event) => {
subscription.send(event);
});
return subscription;
}
}
```

@@ -218,12 +218,12 @@ When you are using a caching table, it is important to remember that any resourc

```javascript
class MyCache extends tables.MyCache {
async post(data) {
// if the data is not cached locally, retrieves from source:
await this.ensuredLoaded();
// now we can be sure that the data is loaded, and can access properties
this.quantity = this.quantity - data.purchases;
}
async post(data) {
// if the data is not cached locally, retrieves from source:
await this.ensuredLoaded();
// now we can be sure that the data is loaded, and can access properties
this.quantity = this.quantity - data.purchases;
}
}
```
