`docs/deployments/configuration.md` (76 additions, 4 deletions)
@@ -536,7 +536,7 @@ localStudio:
### `logging`
- The `logging` section configures Harper logging across all Harper functionality. This includes standard text logging of application and database events as well as structured data logs of record changes. Logging of application/database events are logged in text format to the `~/hdb/log/hdb.log` file (or location specified by `logging.root`).
+ The `logging` section configures Harper logging across all Harper functionality. This includes standard text logging of application and database events as well as structured data logs of record changes. Application and database events are logged in text format to the `~/hdb/log/hdb.log` file (or to the location specified by `logging.root` or `logging.path`). Many of the logging configuration properties can be set and applied without a restart (they are applied dynamically).
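For reference, a minimal sketch of a `logging` block using the properties named in this section (the level and directory values are illustrative):

```yaml
logging:
  level: info       # main log level; see the level hierarchy below
  root: ~/hdb/log   # directory containing hdb.log
```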
In addition, structured logging of data changes is also available:
@@ -585,7 +585,7 @@ There exists a log level hierarchy in order as `trace`, `debug`, `info`, `warn`,
`console` - _Type_: boolean; _Default_: true
- Controls whether console.log and other console.\* calls (as well as another JS components that writes to `process.stdout` and `process.stderr`) are logged to the log file. By default, these are logged to the log file, but this can be disabled.
+ Controls whether `console.log` and other `console.*` calls (as well as other JS components that write to `process.stdout` and `process.stderr`) are logged to the log file. By default, these are not logged to the log file, but this can be enabled:
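A minimal sketch of enabling this, assuming the `console` flag sits directly under the `logging` section as described above:

```yaml
logging:
  console: true   # forward console.* output (and stdout/stderr writes) to the log file
```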
Rotation allows a user to systematically rotate and archive the `hdb.log` file. To enable rotation, `interval` and/or `maxSize` must be set.
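As a sketch, assuming the rotation settings nest under `logging.rotation` (the interval and size values are illustrative):

```yaml
logging:
  rotation:
    interval: 1D      # rotate on a schedule
    maxSize: 100MB    # and/or when the file exceeds this size
```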
@@ -667,7 +676,70 @@ logging:
logSuccessful: false
```
## Defining Separate Logging Configurations
Harper's logger supports defining multiple logging configurations for different components in the system. Each logging configuration can be assigned its own `path` (or `root`), `level`, `tag`, and a flag to enable/disable logging to `stdStreams`. All logging defaults to the configuration of the "main" logger as configured above, but when a separate logger is configured for a component, that component uses its own configuration. The following separate loggers can be defined:
`logging.external`
The `logging.external` section can be used to define logging for all external components that use the [`logger` API](../technical-details/reference/globals.md). For example:
```yaml
logging:
  external:
    level: warn
    path: ~/hdb/log/apps.log
```
`http.logging`
This section defines log configuration for HTTP logging. By default, HTTP requests are not logged, but defining this section will enable HTTP logging. Note that there can be substantial overhead to logging all HTTP requests. In addition to the standard logging configuration, the `http.logging` section allows the following properties to be set:

* `timing` - This will log timing information
* `headers` - This will log the headers in each request (which can be very verbose)
* `id` - This will assign a unique id to each request and log it in the entry for each request. This is assigned as the `request.requestId` property and can be used by other logging to track a request.

Note that the `level` determines which HTTP requests are logged:

* `info` (or more verbose) - All HTTP requests
* `warn` - HTTP requests with a status code of 400 or above
* `error` - HTTP requests with a status code of 500
For example:
```yaml
http:
  logging:
    timing: true
    level: info
    path: ~/hdb/log/http.log
  # ... rest of http config
```
`authentication.logging`
This section defines log configuration for authentication. This takes the standard logging configuration options of `path` (or `root`), `level`, `tag`, and a flag to enable/disable logging to `stdStreams`.
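For example, a sketch of routing authentication logs to their own file (the path and level values are illustrative):

```yaml
authentication:
  logging:
    level: warn
    path: ~/hdb/log/authentication.log
```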
`mqtt.logging`

This section defines log configuration for MQTT. This takes the standard logging configuration options of `path` (or `root`), `level`, `tag`, and a flag to enable/disable logging to `stdStreams`.

`replication.logging`

This section defines log configuration for replication. This takes the standard logging configuration options of `path` (or `root`), `level`, `tag`, and a flag to enable/disable logging to `stdStreams`.

`tls.logging`

This section defines log configuration for TLS. This takes the standard logging configuration options of `path` (or `root`), `level`, `tag`, and a flag to enable/disable logging to `stdStreams`.

`storage.logging`

This section defines log configuration for setting up and reading the database files. This takes the standard logging configuration options of `path` (or `root`), `level`, `tag`, and a flag to enable/disable logging to `stdStreams`.

`analytics.logging`

This section defines log configuration for analytics. This takes the standard logging configuration options of `path` (or `root`), `level`, `tag`, and a flag to enable/disable logging to `stdStreams`.
`docs/developers/applications/caching.md` (2 additions, 2 deletions)
@@ -37,8 +37,8 @@ class ThirdPartyAPI extends Resource {
Next, we define this external data resource as the "source" for the caching table we defined above:
```javascript
- const { MyTable } = tables;
- MyTable.sourcedFrom(ThirdPartyAPI);
+ const { MyCache } = tables;
+ MyCache.sourcedFrom(ThirdPartyAPI);
```
Now we have a fully configured and connected caching table. If you access data from `MyCache` (for example, through the REST API, like `/MyCache/some-id`), Harper will check to see if the requested entry is in the table and return it if it is available (and hasn't expired). If there is no entry, or it has expired (it is older than one hour in this case), it will go to the source, calling the `get()` method, which will then retrieve the requested entry. Once the entry is retrieved, it will be saved/cached in the caching table (for one hour based on our expiration time).
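For context, the source resource is what handles a cache miss. A minimal sketch of a `ThirdPartyAPI` `get()` implementation follows (the full class is defined earlier in this guide; the endpoint URL here is a placeholder):

```javascript
class ThirdPartyAPI extends Resource {
	async get() {
		// Called on a cache miss or expired entry; the returned value is stored in MyCache.
		const response = await fetch(`https://some-external-api.example.com/${this.getId()}`);
		return response.json();
	}
}
```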
The Data Loader is a built-in component that provides a reliable mechanism for loading data from JSON or YAML files into Harper tables as part of component deployment. This feature is particularly useful for ensuring specific records exist in your database when deploying components, such as seed data, configuration records, or initial application data.
## Configuration
To use the Data Loader, first specify your data files in the `config.yaml` in your component directory:
```yaml
dataLoader:
  files: 'data/*.json'
```
The Data Loader is an [Extension](../components/reference.md#extensions) and supports the standard `files` configuration option.
## Data File Format
Data files can be structured as either JSON or YAML files containing the records you want to load. Each data file must specify records for a single table - if you need to load data into multiple tables, create separate data files for each table.
### Basic Example
Create a data file in your component's data directory (one table per file):
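As an illustrative sketch only (the `table` and `records` keys and the record fields are assumptions based on the format description above; adjust to the actual data file layout and your schema), a `data/users.json` file might contain:

```json
{
	"table": "users",
	"records": [
		{ "id": 1, "username": "admin", "active": true },
		{ "id": 2, "username": "reader", "active": true }
	]
}
```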
When Harper starts up with a component that includes the Data Loader:
1. The Data Loader reads all specified data files (JSON or YAML)
2. For each file, it validates that a single table is specified
3. Records are inserted or updated based on timestamp comparison:
   - New records are inserted if they don't exist
   - Existing records are updated only if the data file's modification time is newer than the record's updated time
   - This ensures data files can be safely reloaded without overwriting newer changes
4. If records with the same primary key already exist, updates occur only when the file is newer
Note: While the Data Loader can create tables automatically by inferring the schema from the provided records, it's recommended to define your table schemas explicitly using the [graphqlSchema](../applications/defining-schemas.md) component for better control and type safety.
## Best Practices
1. **Define Schemas First**: While the Data Loader can infer schemas, it's strongly recommended to define your table schemas and relations explicitly using the [graphqlSchema](../applications/defining-schemas.md) component before loading data. This ensures proper data types, constraints, and relationships between tables.
2. **One Table Per File**: Remember that each data file can only load records into a single table. Organize your files accordingly.
3. **Idempotency**: Design your data files to be idempotent - they should be safe to load multiple times without creating duplicate or conflicting data.
4. **Version Control**: Include your data files in version control to ensure consistency across deployments.
5. **Environment-Specific Data**: Consider using different data files for different environments (development, staging, production).
6. **Data Validation**: Ensure your data files are valid JSON or YAML and match your table schemas before deployment.
7. **Sensitive Data**: Avoid including sensitive data like passwords or API keys directly in data files. Use environment variables or secure configuration management instead.
## Example Component Structure
```
my-component/
├── config.yaml
├── data/
│   ├── users.json
│   ├── roles.json
│   └── settings.json
├── schemas.graphql
└── roles.yaml
```
144
+
145
+
With this structure, your `config.yaml` might look like:
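A sketch of that `config.yaml`, assuming the schema and roles files are loaded through the standard `graphqlSchema` and `roles` component options alongside the Data Loader (adjust to the components your application actually uses):

```yaml
dataLoader:
  files: 'data/*.json'
graphqlSchema:
  files: 'schemas.graphql'
roles:
  files: 'roles.yaml'
```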